id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
10,220,713 | https://en.wikipedia.org/wiki/Zero%20field%20splitting | Zero field splitting (ZFS) describes various interactions of the energy levels of a molecule or ion resulting from the presence of more than one unpaired electron. In quantum mechanics, an energy level is called degenerate if it corresponds to two or more different measurable states of a quantum system. In the presence of a magnetic field, the Zeeman effect is well known to split degenerate states. In quantum mechanics terminology, the degeneracy is said to be "lifted" by the presence of the magnetic field. In the presence of more than one unpaired electron, the electrons mutually interact to give rise to two or more energy states. Zero field splitting refers to this lifting of degeneracy even in the absence of a magnetic field. ZFS is responsible for many effects related to the magnetic properties of materials, as manifested in their electron spin resonance spectra and magnetism.
The classic case for ZFS is the spin triplet, i.e., the S=1 spin system. In the presence of a magnetic field, the levels with different values of the magnetic spin quantum number (MS=0,±1) are separated, and the Zeeman splitting dictates their separation. In the absence of a magnetic field, the three levels of the triplet are isoenergetic to first order. However, when the effects of inter-electron repulsions are considered, the three sublevels of the triplet are found to separate in energy. This effect is thus an example of ZFS. The degree of separation depends on the symmetry of the system.
Quantum mechanical description
The corresponding Hamiltonian can be written as:
$$\hat{\mathcal{H}} = D\left(\hat{S}_z^2 - \tfrac{1}{3}S(S+1)\right) + E\left(\hat{S}_x^2 - \hat{S}_y^2\right)$$
where S is the total spin quantum number, and $\hat{S}_x$, $\hat{S}_y$, $\hat{S}_z$ are the spin matrices.
The values of the ZFS parameters are usually defined via the D and E parameters. D describes the axial component of the magnetic dipole–dipole interaction, and E the transverse component. Values of D have been obtained for a wide range of organic biradicals by EPR measurements. This value may be measured by other magnetometry techniques such as SQUID; however, EPR measurements provide more accurate data in most cases. This value can also be obtained with other techniques such as optically detected magnetic resonance (ODMR; a double resonance technique which combines EPR with measurements such as fluorescence, phosphorescence and absorption), with sensitivity down to a single molecule or defect in solids like diamond (e.g. N-V center) or silicon carbide.
Algebraic derivation
The starting point is the corresponding Hamiltonian $\hat{\mathcal{H}}_D = \mathbf{S}\cdot\mathbf{D}\cdot\mathbf{S}$, which describes the dipolar spin–spin interaction between two unpaired spins ($S_1$ and $S_2$). Here $\mathbf{S}$ is the total spin $S = S_1 + S_2$, and $\mathbf{D}$ is a symmetric and traceless matrix (which it is when it arises from the dipole–dipole interaction), which means it is diagonalizable:
$$\hat{\mathcal{H}} = D_{xx}\hat{S}_x^2 + D_{yy}\hat{S}_y^2 + D_{zz}\hat{S}_z^2 \qquad (1)$$
with $\mathbf{D}$ being traceless ($D_{xx} + D_{yy} + D_{zz} = 0$). The key is to express $D_{xx}$ and $D_{yy}$ as their mean value and a deviation $\Delta$:
$$D_{xx} = \frac{D_{xx}+D_{yy}}{2} + \Delta, \qquad D_{yy} = \frac{D_{xx}+D_{yy}}{2} - \Delta \qquad (2)$$
The value of the deviation is then, by rearranging equation (2):
$$\Delta = \frac{D_{xx}-D_{yy}}{2} \qquad (3)$$
By inserting (2) and (3) into (1), the result reads as:
$$\hat{\mathcal{H}} = \frac{D_{xx}+D_{yy}}{2}\left(\hat{S}_x^2 + \hat{S}_y^2 + \hat{S}_z^2 - \hat{S}_z^2\right) + \Delta\left(\hat{S}_x^2 - \hat{S}_y^2\right) + D_{zz}\hat{S}_z^2 \qquad (4)$$
Note that in (4) the term $\hat{S}_z^2 - \hat{S}_z^2$ was added. By doing so, the identity $\hat{S}_x^2 + \hat{S}_y^2 + \hat{S}_z^2 = S(S+1)$ can be used further.
By using the fact that $\mathbf{D}$ is traceless ($\tfrac{1}{2}(D_{xx}+D_{yy}) = -\tfrac{1}{2}D_{zz}$), equation (4) simplifies to:
$$\hat{\mathcal{H}} = -\tfrac{1}{2}D_{zz}\,S(S+1) + \tfrac{3}{2}D_{zz}\,\hat{S}_z^2 + \Delta\left(\hat{S}_x^2 - \hat{S}_y^2\right) \qquad (5)$$
By defining the D and E parameters, equation (5) becomes:
$$\hat{\mathcal{H}} = D\left(\hat{S}_z^2 - \tfrac{1}{3}S(S+1)\right) + E\left(\hat{S}_x^2 - \hat{S}_y^2\right) \qquad (6)$$
with $D = \tfrac{3}{2}D_{zz}$ and $E = \Delta = \tfrac{1}{2}(D_{xx}-D_{yy})$ the (measurable) zero field splitting values.
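As a minimal numerical illustration of equation (6), the spin-triplet (S = 1) energy levels can be obtained by diagonalizing the Hamiltonian matrix directly. The short Python sketch below is illustrative only (the values of D and E are hypothetical, in arbitrary energy units, with ħ = 1 and the |m = +1⟩, |0⟩, |−1⟩ basis):

```python
import numpy as np

# Spin-1 operators in the |m = +1>, |0>, |-1> basis (hbar = 1).
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

def zfs_levels(D, E, S=1):
    """Eigenvalues of H = D(Sz^2 - S(S+1)/3) + E(Sx^2 - Sy^2)."""
    H = D * (Sz @ Sz - S * (S + 1) / 3 * np.eye(3)) + E * (Sx @ Sx - Sy @ Sy)
    return np.sort(np.linalg.eigvalsh(H))

# Hypothetical example parameters:
print(zfs_levels(D=1.0, E=0.1))  # -> levels at -2/3, 1/3 - 0.1, 1/3 + 0.1
```

The three zero-field levels are split by D and E even without an applied magnetic field, which is the effect described above.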
References
Further reading
Principles of Electron Spin Resonance, by N. M. Atherton. 585 pp. Ellis Horwood / PTR Prentice Hall, 1993.
External links
Description of the origins of Zero Field Splitting
Electron paramagnetic resonance | Zero field splitting | [
"Physics",
"Chemistry"
] | 750 | [
"Electron paramagnetic resonance",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
10,221,795 | https://en.wikipedia.org/wiki/Integrative%20bioinformatics | Integrative bioinformatics is a discipline of bioinformatics that focuses on problems of data integration for the life sciences.
With the rise of high-throughput (HTP) technologies in the life sciences, particularly in molecular biology, the amount of collected data has grown exponentially. Furthermore, the data are scattered over a plethora of both public and private repositories, and are stored in a large number of different formats. This situation makes it very difficult to search these data and to perform the analyses necessary to extract new knowledge from the complete set of available data. Integrative bioinformatics attempts to tackle this problem by providing unified access to life science data.
Approaches
Semantic web approaches
In the Semantic Web approach, data from multiple websites or databases are searched via metadata. Metadata are machine-readable annotations that define the contents of a page for a program, so that comparisons between the data and the search terms are more accurate. This serves to decrease the number of results that are irrelevant or unhelpful. Some metadata exist as definitions called ontologies, which can be tagged by either users or programs; these facilitate searches by using key terms or phrases to find and return the data. Advantages of this approach include the generally higher quality of the data returned in searches and, with proper tagging, the ability of ontologies to find entries that do not explicitly state the search term but are still relevant. One disadvantage of this approach is that the results are returned in the format of their database of origin, so direct comparisons may be difficult. Another problem is that the terms used in tagging and searching can sometimes be ambiguous and may cause confusion among the results. In addition, the Semantic Web approach is still considered an emerging technology and is not in wide-scale use at this time.
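As a toy illustration of the ontology-driven search idea described above (all terms, records and the tiny "ontology" here are invented for illustration and do not come from any particular resource), a query can be expanded with synonyms and narrower terms before matching against record metadata:

```python
# Minimal sketch of ontology-expanded metadata search (hypothetical data).
toy_ontology = {
    "heart": {"synonyms": ["cardiac"], "narrower": ["atrium", "ventricle"]},
}

records = [
    {"id": 1, "tags": ["ventricle", "development"]},
    {"id": 2, "tags": ["kidney", "development"]},
    {"id": 3, "tags": ["cardiac", "muscle"]},
]

def expand(term, ontology):
    """Return the query term plus its synonyms and narrower terms."""
    entry = ontology.get(term, {})
    return {term, *entry.get("synonyms", []), *entry.get("narrower", [])}

def search(term, records, ontology):
    """Return ids of records whose tags overlap the expanded term set."""
    terms = expand(term, ontology)
    return [r["id"] for r in records if terms & set(r["tags"])]

print(search("heart", records, toy_ontology))  # -> [1, 3]
```

A plain keyword match for "heart" would return nothing here; the ontology expansion is what recovers the relevant but differently worded entries.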
One of the current applications of ontology-based search in the biomedical sciences is GoPubMed, which searches the PubMed database of scientific literature. Another use of ontologies is within databases such as SwissProt, Ensembl and TrEMBL, which use this technology to search through the stores of human proteome-related data for tags related to the search term.
Some of the research in this field has focused on creating new and specific ontologies. Other researchers have worked on verifying the results of existing ontologies. In one specific example, the goal of Verschelde et al. was the integration of several different ontology libraries into a larger one that contained more definitions from different subspecialties (medical, molecular biological, etc.) and was able to distinguish between ambiguous tags; the result was a data-warehouse-like effect, with easy access to multiple databases through the use of ontologies. In a separate project, Bertens et al. constructed a latticework of three ontologies (for the anatomy and development of model organisms) on a novel framework ontology of generic organs. For example, a search for 'heart' in this ontology would return the heart plans for each of the vertebrate species whose ontologies were included. The stated goal of the project is to facilitate comparative and evolutionary studies.
Data warehousing approaches
In the data warehousing strategy, the data from different sources are extracted and integrated into a single database. For example, various 'omics' datasets may be integrated to provide insights into biological systems; examples include data from genomics, transcriptomics, proteomics, interactomics and metabolomics. Ideally, changes in these sources are regularly synchronized to the integrated database. The data are presented to users in a common format. Many programs aimed at aiding the creation of such warehouses are designed to be extremely versatile, so that they can be implemented in diverse research projects. One advantage of this approach is that the data are available for analysis at a single site, using a uniform schema. Some disadvantages are that the datasets are often huge and difficult to keep up to date. Another problem with this method is that it is costly to compile such a warehouse.
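A minimal sketch of the extract-and-integrate step is shown below, assuming two hypothetical 'omics' tables keyed by a shared gene identifier; the column names and values are invented for illustration, and a real warehouse would additionally handle provenance, conflicting identifiers and regular synchronization:

```python
import pandas as pd

# Hypothetical extracts from two different sources.
transcriptomics = pd.DataFrame(
    {"gene_id": ["BRCA1", "TP53"], "mrna_level": [12.4, 8.1]}
)
proteomics = pd.DataFrame(
    {"gene_id": ["TP53", "EGFR"], "protein_abundance": [3.2, 7.7]}
)

# Integrate into a single table with a uniform schema; an outer join keeps
# genes that appear in only one of the sources.
warehouse = transcriptomics.merge(proteomics, on="gene_id", how="outer")
print(warehouse)
```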
Standardized formats for different types of data (ex: protein data) are now emerging due to the influence of groups like the Proteomics Standards Initiative (PSI). Some data warehousing projects even require the submission of data in one of these new formats.
Other approaches
Data mining uses statistical methods to search for patterns in existing data. This method generally returns many patterns, some spurious and some significant, and all of the patterns the program finds must be evaluated individually. Currently, some research is focused on combining existing data mining techniques with novel pattern analysis methods that reduce the need to go over every pattern found by the initial program and instead return a few results with a high likelihood of relevance. One drawback of this approach is that it does not integrate multiple databases, which means that comparisons across databases are not possible. The major advantage of this approach is that it allows for the generation of new hypotheses to test.
See also
Biological database
Biological data visualization
InterMine - an open-source biological data warehouse system
References
External links
Journal of Integrative Bioinformatics
IMBio
GoPubMed
BMC Bioinformatics
Netherlands Bioinformatics Centre
Bioinformatics | Integrative bioinformatics | [
"Engineering",
"Biology"
] | 1,080 | [
"Bioinformatics",
"Biological engineering"
] |
10,222,049 | https://en.wikipedia.org/wiki/Narkomfin%20building | The Narkomfin Building is a block of flats at 25, Novinsky Boulevard, in the Central district of Moscow, Russia. Conceived as a "transitional type of experimental house", it is a renowned example of Constructivist architecture and avant-garde housing design.
Though a listed "Cultural Heritage Monument" on the Russian cultural heritage register, it was in a deteriorating state for many years, and many units were vacated by residents. A reconstruction, which lasted more than three years, was completed in the summer of 2020; the official opening of the renovated apartment building took place on 9 July.
Architecture for collective living
The project for four planned buildings was designed by Moisei Ginzburg with Ignaty Milinis in 1928. Only two were built, completed in 1932. The color design for the buildings was created by Bauhaus student Hinnerk Scheper.
This apartment block, designed for high-ranking employees at the Commissariat of Finance (shortened to Narkomfin), was an opportunity for Ginzburg to try out many of the theories advanced by the Constructivist OSA group between 1926 and 1930 on architectural form and collective living. The building is made from reinforced concrete and is set in a park. It originally consisted of a long block of apartments raised on pilotis (with a penthouse and roof garden), connected by an enclosed bridge to a smaller, glazed block of collective facilities.
As advertised by the architects, the apartments were to form an intervention into the everyday life (or byt) of the inhabitants. By offering communal facilities such as kitchens, crèches and a laundry as part of the block, the building encouraged tenants toward a more socialist and, by taking women out of their traditional domestic roles, feminist way of life. The structure was thus to act as a 'social condenser', including within it a library and gymnasium.
On the other hand, architects of the 1920s had to face the social reality of an overcrowded socialist city: any single-family apartment unit with more than one room would eventually be converted to a multi-family kommunalka. Apartments could retain the single-family status if, and only if, they were physically small and could not be partitioned to accommodate more than one family. Any single-level apartment could be partitioned; thus, the avant-garde community (notably, Ginzburg and Konstantin Melnikov) designed such model units, relying on vertical separation of bedroom (top level) and combined kitchen and living room (lower level). Ilya Golosov implemented these cells for his Collective House in Ivanovo, and Pavel Gofman for communal housing in Saratov. Ginzburg refined their cell design based on real-life experience.
Vertical apartment plan
Narkomfin has 54 units, none of which (at least legally) has a dedicated kitchen. Many residents partitioned their apartments to set aside a tiny kitchen. There are five inhabited floors, but only two corridors, on the second and fifth levels (an apartment split between the third and second levels connects to the second-floor corridor, and so on).
Apartments were graded by how far along they were to being 'fully collectivised', ranging from rooms with their own kitchens to apartments purely for sleep and study. Most of the units belong to "Cell K" type (with double-height living room) and "Cell F" connecting to an outdoor gallery. The sponsor of the building, Commissar of Finance Nikolay Alexandrovich Milyutin, enjoyed a penthouse (originally planned as a communal recreation area). Milyutin is also known as an experimental city planner who had developed plans for a linear city.
Influence
Le Corbusier, who studied the building during his visits to the Soviet Union, was vocal about the debt he owed to the pioneering ideas of the Narkomfin building, and he used a variant of its duplex flat plans in his Unité d'Habitation. Other architects to have reused its ideas include Moshe Safdie, in his Expo 67 housing complex Habitat 67, and Denys Lasdun, in his luxury flats in St James', London. The idea of the 'social condenser' was also acknowledged by Berthold Lubetkin as an influence on his work.
The Narkomfin building as reality
The Utopianism and reformism of everyday life that lay behind the building's idea fell out of favour almost as soon as it was finished. After the start of the Five Year Plan and Joseph Stalin's consolidation of power, its collectivist and feminist ideas were rejected as 'Leftist' or Trotskyist. In the 1930s, the ground floor, which had originally been left open beneath the block raised on pilotis, was filled in with flats to help alleviate Moscow's severe housing shortage, while a planned adjoining block was built in the eclectic Stalinist style.
The building looks over the US embassy, which has discouraged the inhabitants from using the roof garden. The vicissitudes of the building were charted in Victor Buchli's book An Archaeology of Socialism which takes the flats and their inhabitants as a starting point for an analysis of Soviet 'material culture'.
Modern status
Legally, each apartment unit in the building was privatized (beginning in 1992) by the residents. Later, a real estate speculator bought out a significant proportion of the apartments, as a consolidated apartment package with the city MIAN agency. The rest were still owned and inhabited by the residents, but with MIAN dominance creating a legal stalemate where the residents were unable to form a condominium association and operate the building independently. Therefore, the city agency had control over the future of the Narkomfin building.
By 2010, the building was in a very dilapidated state, although it was still partially inhabited. UNESCO placed it at the top of their 'Endangered Buildings' list, and it was placed on the World Monuments Fund's watchlist of endangered heritage sites three times. An international campaign was launched to save the landmark. Despite the Russian "Cultural Heritage Monument" code prohibiting any major re-planning of internal walls and partitions, there were accusations that illegal renovations were taking place. Alexei Ginzburg, grandson of Moisei Ginzburg, stated that "The situation [was] out of control" in 2014.
In 2016, the building began renovation under the guidance of Alexei Ginzburg, after the development company Liga Prav bought it at auction. Renovation was completed in July 2020, with the original designs restored where possible and all later additions removed.
Gallery
References
External links
Moscow Architecture Preservation Society Profile
Campaign for the Preservation of the Narkomfin Building
The Art Newspaper on the Narkomfin
zdanija.ru Russian forum: photos of the Narkomfin Building
Apartment buildings
Residential buildings in Moscow
Buildings and structures built in the Soviet Union
Russian avant-garde
Architecture related to utopias
Residential buildings completed in 1932
Constructivist buildings and structures
Modernist architecture in Russia
Cultural heritage monuments of regional significance in Moscow | Narkomfin building | [
"Engineering"
] | 1,410 | [
"Architecture related to utopias",
"Architecture"
] |
10,223,066 | https://en.wikipedia.org/wiki/Spacetime%20algebra | In mathematical physics, spacetime algebra (STA) is the application of Clifford algebra Cl1,3(R), or equivalently the geometric algebra to physics. Spacetime algebra provides a "unified, coordinate-free formulation for all of relativistic physics, including the Dirac equation, Maxwell equation and General Relativity" and "reduces the mathematical divide between classical, quantum and relativistic physics."
Spacetime algebra is a vector space that allows not only vectors, but also bivectors (directed quantities associated with particular planes, such as areas or rotations) and blades (quantities associated with particular hyper-volumes) to be combined, as well as rotated, reflected, or Lorentz boosted. It is also the natural parent algebra of spinors in special relativity. These properties allow many of the most important equations in physics to be expressed in particularly simple forms, and can be very helpful towards a more geometric understanding of their meanings.
In comparison to related methods, STA and Dirac algebra are both Clifford Cl1,3 algebras, but STA uses real number scalars while Dirac algebra uses complex number scalars.
The STA spacetime split is similar to the algebra of physical space (APS, Pauli algebra) approach. APS represents spacetime as a paravector, a combined 3-dimensional vector space and a 1-dimensional scalar.
Structure
For any pair of STA vectors, $a$ and $b$, there is a vector (geometric) product $ab$, an inner (dot) product $a \cdot b$ and an outer (exterior, wedge) product $a \wedge b$. The vector product is the sum of the inner and outer products:
$$ab = a \cdot b + a \wedge b$$
The inner product generates a real number (scalar), and the outer product generates a bivector. The vectors $a$ and $b$ are orthogonal if their inner product is zero; vectors $a$ and $b$ are parallel if their outer product is zero.
The orthonormal basis vectors are a timelike vector $\gamma_0$ and three spacelike vectors $\gamma_1, \gamma_2, \gamma_3$. The Minkowski metric tensor's nonzero terms are the diagonal terms, $(\eta_{00}, \eta_{11}, \eta_{22}, \eta_{33}) = (+1, -1, -1, -1)$. For $\mu, \nu = 0, 1, 2, 3$:
$$\gamma_\mu \cdot \gamma_\nu = \tfrac{1}{2}\left(\gamma_\mu \gamma_\nu + \gamma_\nu \gamma_\mu\right) = \eta_{\mu\nu}$$
The Dirac matrices share these properties, and STA is equivalent to the algebra generated by the Dirac matrices over the field of real numbers; explicit matrix representation is unnecessary for STA.
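Although an explicit matrix representation is unnecessary, the equivalence with the Dirac matrices can be checked numerically. The sketch below (added for illustration, not part of the original text) builds the standard Dirac-representation gamma matrices with NumPy and verifies the defining anticommutation relation $\gamma_\mu\gamma_\nu + \gamma_\nu\gamma_\mu = 2\eta_{\mu\nu}$:

```python
import numpy as np

# Pauli matrices and the Dirac-representation gamma matrices.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Check {gamma_mu, gamma_nu} = 2 eta_{mu nu} * identity.
for mu in range(4):
    for nu in range(4):
        anti = gs[mu] @ gs[nu] + gs[nu] @ gs[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Anticommutation relations verified.")
```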
Products of the basis vectors generate a tensor basis containing one scalar $\{1\}$, four vectors $\{\gamma_0, \gamma_1, \gamma_2, \gamma_3\}$, six bivectors $\{\gamma_0\gamma_1, \gamma_0\gamma_2, \gamma_0\gamma_3, \gamma_1\gamma_2, \gamma_2\gamma_3, \gamma_3\gamma_1\}$, four pseudovectors (trivectors) $\{I\gamma_0, I\gamma_1, I\gamma_2, I\gamma_3\}$ and one pseudoscalar $\{I\}$ with $I = \gamma_0\gamma_1\gamma_2\gamma_3$ and $I^2 = -1$. The pseudoscalar commutes with all even-grade STA elements, but anticommutes with all odd-grade STA elements.
Subalgebra
STA's even-graded elements (scalars, bivectors, pseudoscalar) form a Clifford Cl3,0(R) even subalgebra equivalent to the APS or Pauli algebra. The STA bivectors are equivalent to the APS vectors and pseudovectors. The STA subalgebra becomes more explicit by renaming the STA bivectors $\gamma_1\gamma_0, \gamma_2\gamma_0, \gamma_3\gamma_0$ as $\sigma_1, \sigma_2, \sigma_3$ and the STA bivectors $\gamma_3\gamma_2, \gamma_1\gamma_3, \gamma_2\gamma_1$ as $I\sigma_1, I\sigma_2, I\sigma_3$. The Pauli matrices, $\hat{\sigma}_1, \hat{\sigma}_2, \hat{\sigma}_3$, are a matrix representation for $\sigma_1, \sigma_2, \sigma_3$. For any pair of $\sigma_i, \sigma_j$, the nonzero inner products are $\sigma_i \cdot \sigma_i = 1$, and the nonzero outer products are:
$$\sigma_i \wedge \sigma_j = I\sigma_k, \qquad (i, j, k) \ \text{a cyclic permutation of} \ (1, 2, 3)$$
The sequence of algebra to even subalgebra continues as algebra of physical space, quaternion algebra, complex numbers and real numbers. The even STA subalgebra Cl+(1,3) of real space-time spinors in Cl(1,3) is isomorphic to the Clifford geometric algebra Cl(3,0) of Euclidean space R3 with basis elements $\{1, \sigma_1, \sigma_2, \sigma_3, I\sigma_1, I\sigma_2, I\sigma_3, I\}$. See the illustration of space-time algebra spinors in Cl+(1,3) under the octonionic product as a Fano plane.
Division
A nonzero vector $v$ is a null vector (degree 2 nilpotent) if $v^2 = 0$. An example is $v = \gamma_0 + \gamma_1$. Null vectors are tangent to the light cone (null cone). An element $\epsilon$ is an idempotent if $\epsilon^2 = \epsilon$. Two idempotents $\epsilon_1$ and $\epsilon_2$ are orthogonal idempotents if $\epsilon_1\epsilon_2 = 0$. An example of an orthogonal idempotent pair is $\tfrac{1}{2}(1 + \gamma_1\gamma_0)$ and $\tfrac{1}{2}(1 - \gamma_1\gamma_0)$, since $(\gamma_1\gamma_0)^2 = +1$. Proper zero divisors are nonzero elements whose product is zero, such as null vectors or orthogonal idempotents. A division algebra is an algebra that contains multiplicative inverse (reciprocal) elements for every element, but this occurs only if there are no proper zero divisors and if the only idempotent is 1.
The only associative division algebras are the real numbers, complex numbers and quaternions. As STA is not a division algebra, some STA elements may lack an inverse; however, division by a non-null vector $a$ may be possible by multiplication by its inverse, defined as $a^{-1} = a / (a \cdot a) = a / a^2$.
Reciprocal frame
Associated with the orthogonal basis $\{\gamma_\mu\}$ is the reciprocal basis set $\{\gamma^\mu\}$ satisfying these equations:
$$\gamma^\mu \cdot \gamma_\nu = \delta^\mu_{\ \nu}$$
These reciprocal frame vectors differ only by a sign, with $\gamma^0 = \gamma_0$, but $\gamma^k = -\gamma_k$ for $k = 1, 2, 3$.
A vector may be represented using either the basis vectors or the reciprocal basis vectors, $a = a^\mu \gamma_\mu = a_\mu \gamma^\mu$, with summation over $\mu = 0, 1, 2, 3$ according to the Einstein notation. The inner product of a vector with the basis vectors or reciprocal basis vectors generates the vector components:
$$a \cdot \gamma^\mu = a^\mu, \qquad a \cdot \gamma_\mu = a_\mu$$
The metric and index gymnastics raise or lower indices:
$$a^\mu = \eta^{\mu\nu} a_\nu, \qquad a_\mu = \eta_{\mu\nu} a^\nu$$
Spacetime gradient
The spacetime gradient, like the gradient in a Euclidean space, is defined such that the directional derivative relationship is satisfied:
$$a \cdot \nabla F(x) = \lim_{\tau \to 0} \frac{F(x + a\tau) - F(x)}{\tau}$$
This requires the definition of the gradient to be
$$\nabla = \gamma^\mu \frac{\partial}{\partial x^\mu} = \gamma^\mu \partial_\mu$$
Written out explicitly with $x = x^\mu \gamma_\mu$, these partials are
$$\partial_\mu = \frac{\partial}{\partial x^\mu}, \qquad \mu = 0, 1, 2, 3$$
Spacetime split
In STA, a spacetime split is a projection from four-dimensional space into (3+1)-dimensional space in a chosen reference frame by means of the following two operations:
a collapse of the chosen time axis, yielding a 3-dimensional space spanned by bivectors, equivalent to the standard 3-dimensional basis vectors in the algebra of physical space and
a projection of the 4D space onto the chosen time axis, yielding a 1-dimensional space of scalars, representing the scalar time.
This is achieved by pre-multiplication or post-multiplication by a timelike basis vector $\gamma_0$, which serves to split a four-vector into a scalar timelike and a bivector spacelike component, in the reference frame co-moving with $\gamma_0$. With $x = x^\mu \gamma_\mu$ we have
$$x\gamma_0 = x^\mu \gamma_\mu \gamma_0 = x^0 + x^k \gamma_k \gamma_0$$
The spacetime split is a method of representing a spacetime vector as an even-graded element that plays the role of a vector in the Pauli algebra, an algebra in which time is a scalar separated from vectors that occur in 3-dimensional space. The method replaces the spacetime vectors $\gamma_k$ ($k = 1, 2, 3$) by the bivectors $\gamma_k\gamma_0$.
As these bivectors square to unity, they serve as a spatial basis. Utilizing the Pauli matrix notation, they are written $\sigma_k = \gamma_k\gamma_0$. Spatial vectors in STA are denoted in boldface; then with $\mathbf{x} = x^k\sigma_k$ and $t = x^0$, the $\gamma_0$-spacetime split $x\gamma_0$ and its reverse $\gamma_0 x$ are:
$$x\gamma_0 = t + \mathbf{x}, \qquad \gamma_0 x = t - \mathbf{x}$$
However, the above formulas only work in the Minkowski metric with signature (+ − − −). For forms of the spacetime split that work in either signature, alternate definitions in which $\sigma_k = \gamma_k\gamma^0$ and $\mathbf{x} = x^k\gamma_k\gamma^0$ must be used.
Transformations
To rotate a vector $v$ in geometric algebra, the following formula is used:
$$v' = e^{-B\beta/2}\, v\, e^{B\beta/2},$$
where $\beta$ is the angle to rotate by, and $B$ is the normalized bivector representing the plane of rotation, so that $B\tilde{B} = 1$.
For a given spacelike bivector, $B^2 = -1$, so Euler's formula applies, giving the rotation
$$v' = \left(\cos\tfrac{\beta}{2} - B\sin\tfrac{\beta}{2}\right) v \left(\cos\tfrac{\beta}{2} + B\sin\tfrac{\beta}{2}\right).$$
For a given timelike bivector, $B^2 = 1$, so a "rotation through time" uses the analogous equation for the split-complex numbers:
$$v' = \left(\cosh\tfrac{\beta}{2} - B\sinh\tfrac{\beta}{2}\right) v \left(\cosh\tfrac{\beta}{2} + B\sinh\tfrac{\beta}{2}\right).$$
Interpreting this equation, these rotations along the time direction are simply hyperbolic rotations. These are equivalent to Lorentz boosts in special relativity.
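As a numerical illustration of this "rotation through time" (a sketch added here, not from the original text; the rapidity value is hypothetical and the sign convention of the rotor merely sets the boost direction), one can represent $\gamma_0$ and $\gamma_1$ by Dirac matrices, exponentiate the timelike bivector $\gamma_1\gamma_0$, and check that the rotor maps $\gamma_0$ to $\cosh\alpha\,\gamma_0 - \sinh\alpha\,\gamma_1$, i.e. a Lorentz boost:

```python
import numpy as np
from scipy.linalg import expm

# Matrix representation of gamma_0 and gamma_1 (Dirac representation).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g1 = np.block([[Z2, s1], [-s1, Z2]])

alpha = 0.7                       # hypothetical rapidity
B = g1 @ g0                       # timelike bivector, B @ B = +identity
R = expm(-0.5 * alpha * B)        # rotor exp(-alpha B / 2)

boosted = R @ g0 @ np.linalg.inv(R)            # R v R~, with R~ = R^{-1} here
expected = np.cosh(alpha) * g0 - np.sinh(alpha) * g1
assert np.allclose(boosted, expected)
print("Rotor reproduces the hyperbolic rotation (Lorentz boost).")
```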
Both of these transformations are known as Lorentz transformations, and the combined set of all of them is the Lorentz group. To transform an object in STA from any basis (corresponding to a reference frame) to another, one or more of these transformations must be used.
Any spacetime element $A$ is transformed by multiplication with the pseudoscalar to form its dual element $AI$. A duality rotation transforms the spacetime element $A$ to the element $A'$ through an angle $\phi$ with the pseudoscalar $I$:
$$A' = e^{I\phi} A$$
Duality rotation occurs only for a non-singular Clifford algebra, non-singular meaning a Clifford algebra containing pseudoscalars with a non-zero square.
Grade involution (main involution, inversion) transforms every r-vector $A_r$ to $A_r^{\star}$:
$$A_r^{\star} = (-1)^r A_r$$
The reversion transformation occurs by decomposing any spacetime element as a sum of products of vectors and then reversing the order of each product. For a multivector $A$
arising from a product of vectors, $A = a_1 a_2 \cdots a_r$, the reversion is $\tilde{A} = a_r \cdots a_2 a_1$; for an r-vector, $\tilde{A}_r = (-1)^{r(r-1)/2} A_r$.
Clifford conjugation of a spacetime element $A$ combines the reversion and grade involution transformations, indicated as $\bar{A}$:
$$\bar{A}_r = (-1)^{r(r+1)/2} A_r$$
The grade involution, reversion and Clifford conjugation transformations are involutions.
Classical electromagnetism
The Faraday bivector
In STA, the electric field and magnetic field can be unified into a single bivector field, known as the Faraday bivector, equivalent to the Faraday tensor. It is defined as:
$$F = \mathbf{E} + I\mathbf{B} \qquad \text{(in natural units)}$$
where $\mathbf{E}$ and $\mathbf{B}$ are the usual electric and magnetic fields, and $I$ is the STA pseudoscalar. Alternatively, expanding in terms of components, $F$ is defined by
$$F = \tfrac{1}{2} F^{\mu\nu} \gamma_\mu \wedge \gamma_\nu$$
The separate $\mathbf{E}$ and $\mathbf{B}$ fields are recovered from $F$ using
$$\mathbf{E} = \tfrac{1}{2}\left(F - \gamma_0 F \gamma_0\right), \qquad I\mathbf{B} = \tfrac{1}{2}\left(F + \gamma_0 F \gamma_0\right)$$
The term represents a given reference frame, and as such, using different reference frames will result in apparently different relative fields, exactly as in standard special relativity.
Since the Faraday bivector is a relativistic invariant, further information can be found in its square, giving two new Lorentz-invariant quantities, one scalar and one pseudoscalar:
$$F^2 = \left(\mathbf{E}^2 - \mathbf{B}^2\right) + 2I\,\mathbf{E}\cdot\mathbf{B}$$
The scalar part corresponds to the Lagrangian density for the electromagnetic field, and the pseudoscalar part is a less-often seen Lorentz invariant.
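A short worked expansion of this square, assuming the convention $F = \mathbf{E} + I\mathbf{B}$ used above (the intermediate steps are spelled out here for clarity and are not in the original text), uses $I^2 = -1$ and the fact that $I$ commutes with the relative vectors:
$$F^2 = (\mathbf{E} + I\mathbf{B})^2 = \mathbf{E}^2 + I\left(\mathbf{E}\mathbf{B} + \mathbf{B}\mathbf{E}\right) + I^2\mathbf{B}^2 = \left(\mathbf{E}^2 - \mathbf{B}^2\right) + 2I\,\mathbf{E}\cdot\mathbf{B},$$
since $\mathbf{E}\mathbf{B} + \mathbf{B}\mathbf{E} = 2\,\mathbf{E}\cdot\mathbf{B}$. The scalar and pseudoscalar grades of the result are the two Lorentz invariants named above.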
Maxwell's equation
STA formulates Maxwell's equations in a simpler form as one equation, rather than the 4 equations of vector calculus. Similarly to the above field bivector, the electric charge density and current density can be unified into a single spacetime vector, equivalent to a four-vector. As such, the spacetime current $J$ is given by
$$J = c\rho\,\gamma_0 + J^k \gamma_k$$
where the components $J^k$ are the components of the classical 3-dimensional current density. When combining these quantities in this way, it makes it particularly clear that the classical charge density is nothing more than a current travelling in the timelike direction given by $c\rho\,\gamma_0$.
Combining the electromagnetic field and current density together with the spacetime gradient as defined earlier, we can combine all four of Maxwell's equations into a single equation in STA:
$$\nabla F = J \qquad \text{(in natural units)}$$
The fact that these quantities are all covariant objects in the STA automatically guarantees Lorentz covariance of the equation, which is much easier to show than when separated into four separate equations.
In this form, it is also much simpler to prove certain properties of Maxwell's equations, such as the conservation of charge. Using the fact that for any bivector field $F$, the divergence of its spacetime gradient is $\nabla \cdot (\nabla \cdot F) = 0$, one can perform the following manipulation:
$$\nabla \cdot J = \nabla \cdot (\nabla \cdot F) = 0$$
This equation has the clear meaning that the divergence of the current density is zero, i.e. the total charge and current density over time is conserved.
Using the electromagnetic field, the form of the Lorentz force on a charged particle can also be considerably simplified using STA.
Potential formulation
In the standard vector calculus formulation, two potential functions are used: the electric scalar potential and the magnetic vector potential. Using the tools of STA, these two objects are combined into a single vector field $A$, analogous to the electromagnetic four-potential in tensor calculus. In STA, it is defined as
$$A = \frac{\phi}{c}\,\gamma_0 + A^k \gamma_k$$
where $\phi$ is the scalar potential, and $A^k$ are the components of the magnetic potential. As defined, this field has SI units of webers per metre (V⋅s⋅m−1).
The electromagnetic field can also be expressed in terms of this potential field, using
$$F = \nabla \wedge A$$
However, this definition is not unique. For any twice-differentiable scalar function $\lambda(x)$, the potential given by
$$A' = A + \nabla\lambda$$
will also give the same $F$ as the original, due to the fact that $\nabla \wedge \nabla\lambda = 0$.
This phenomenon is called gauge freedom. The process of choosing a suitable function $\lambda$ to make a given problem simplest is known as gauge fixing. However, in relativistic electrodynamics, the Lorenz condition is often imposed, where $\nabla \cdot A = 0$.
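Spelled out as an added intermediate step (for the twice-differentiable $\lambda$ introduced above), the gauge term drops out because partial derivatives commute while the wedge product is antisymmetric:
$$\nabla \wedge \nabla\lambda = \left(\gamma^\mu \wedge \gamma^\nu\right)\partial_\mu \partial_\nu \lambda = 0, \qquad \text{so} \qquad F' = \nabla \wedge (A + \nabla\lambda) = \nabla \wedge A = F.$$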
To reformulate the STA Maxwell equation in terms of the potential $A$, $F$ is first replaced with the above definition:
$$\nabla\left(\nabla \wedge A\right) = J$$
Substituting in this result, one arrives at the potential formulation of electromagnetism in STA:
$$\nabla^2 A - \nabla\left(\nabla \cdot A\right) = J$$
Lagrangian formulation
Analogously to the tensor calculus formalism, the potential formulation in STA naturally leads to an appropriate Lagrangian density.
The multivector-valued Euler-Lagrange equations for the field can be derived, and being loose with the mathematical rigor of taking the partial derivative with respect to something that is not a scalar, the relevant equations become:
To begin to re-derive the potential equation from this form, it is simplest to work in the Lorenz gauge, setting
$$\nabla \cdot A = 0$$
This process can be done regardless of the chosen gauge, but working in the Lorenz gauge makes the result considerably clearer. Due to the structure of the geometric product, using this condition results in $\nabla A = \nabla \wedge A = F$.
After substituting in , the same equation of motion as above for the potential field is easily obtained.
The Pauli equation
STA allows the description of the Pauli particle in terms of a real theory in place of a matrix theory. The matrix theory description of the Pauli particle is:
where is a spinor, is the imaginary unit with no geometric interpretation, are the Pauli matrices (with the 'hat' notation indicating that is a matrix operator and not an element in the geometric algebra), and is the Schrödinger Hamiltonian.
The STA approach transforms the matrix spinor representation to the STA representation using elements, , of the even-graded spacetime subalgebra and the pseudoscalar :
The Pauli particle is described by the real Pauli–Schrödinger equation:
where now is an even multi-vector of the geometric algebra, and the Schrödinger Hamiltonian is . Hestenes refers to this as the real Pauli–Schrödinger theory to emphasize that this theory reduces to the Schrödinger theory if the term that includes the magnetic field is dropped. The vector is an arbitrarily selected fixed vector; a fixed rotation can generate any alternative selected fixed vector .
The Dirac equation
STA enables a description of the Dirac particle in terms of a real theory in place of a matrix theory. The matrix theory description of the Dirac particle is:
where are the Dirac matrices and is the imaginary unit with no geometric interpretation.
Using the same approach as for Pauli equation, the STA approach transforms the matrix upper spinor and matrix lower spinor of the matrix Dirac bispinor to the corresponding geometric algebra spinor representations and . These are then combined to represent the full geometric algebra Dirac bispinor .
Following Hestenes' derivation, the Dirac particle is described by the equation:
$$\nabla \psi\, I\sigma_3 - e A \psi = m \psi \gamma_0 \qquad \text{(in natural units)}$$
Here, $\psi$ is the spinor field, $\gamma_0$ and $I\sigma_3 = \gamma_2\gamma_1$ are elements of the geometric algebra, $A$ is the electromagnetic four-potential, and $\nabla$ is the spacetime vector derivative.
Dirac spinors
A relativistic Dirac spinor can be expressed as:
$$\psi = \sqrt{\rho}\, e^{I\beta/2}\, R$$
where, according to its derivation by David Hestenes, $\psi = \psi(x)$ is an even multivector-valued function on spacetime, $R = R(x)$ is a unimodular spinor or "rotor", and $\rho = \rho(x)$ and $\beta = \beta(x)$ are scalar-valued functions. In this construction, the components of $\psi$ directly correspond with the components of a Dirac spinor, both having 8 scalar degrees of freedom.
This equation is interpreted as connecting spin with the imaginary pseudoscalar.
The rotor, $R$, Lorentz transforms the frame of vectors $\gamma_\mu$ into another frame of vectors $e_\mu$ by the operation $e_\mu = R\gamma_\mu\tilde{R}$; note that $\tilde{R}$ indicates the reverse transformation.
This has been extended to provide a framework for locally varying vector- and scalar-valued observables and support for the Zitterbewegung interpretation of quantum mechanics originally proposed by Schrödinger.
Hestenes has compared his expression for with Feynman's expression for it in the path integral formulation:
where is the classical action along the -path.
Using the spinors, the current density from the field $\psi$ can be expressed by
$$J = \psi \gamma_0 \tilde{\psi}$$
Symmetries
Global phase symmetry is a constant global phase shift of the wave function that leaves the Dirac equation unchanged. Local phase symmetry is a spatially varying phase shift that leaves the Dirac equation unchanged if accompanied by a gauge transformation of the electromagnetic four-potential as expressed by these combined substitutions.
In these equations, the local phase transformation is a phase shift at spacetime location with pseudovector and of even-graded spacetime subalgebra applied to wave function ; the gauge transformation is a subtraction of the gradient of the phase shift from the electromagnetic four-potential with particle electric charge .
Researchers have applied STA and related Clifford algebra approaches to gauge theories, electroweak interaction, Yang–Mills theory, and the standard model.
The discrete symmetries are parity , charge conjugation and time reversal applied to wave function . These effects are:
General relativity
General relativity
Researchers have applied STA and related Clifford algebra approaches to relativity, gravity and cosmology. The gauge theory gravity (GTG) uses STA to describe an induced curvature on Minkowski space while admitting a gauge symmetry under "arbitrary smooth remapping of events onto spacetime" leading to this geodesic equation.
and the covariant derivative
where is the connection associated with the gravitational potential, and is an external interaction such as an electromagnetic field.
The theory shows some promise for the treatment of black holes, as its form of the Schwarzschild solution does not break down at singularities; most of the results of general relativity have been mathematically reproduced, and the relativistic formulation of classical electrodynamics has been extended to quantum mechanics and the Dirac equation.
See also
Geometric algebra
Dirac algebra
Maxwell's equations
Dirac equation
General relativity
Notes
Citations
References
PDF
Reprint
External links
Exploring Physics with Geometric Algebra, book I
Exploring Physics with Geometric Algebra, book II
A multivector Lagrangian for Maxwell's equation
Imaginary numbers are not real – the geometric algebra of spacetime, a tutorial introduction to the ideas of geometric algebra, by S. Gull, A. Lasenby, C. Doran
Physical Applications of Geometric Algebra course-notes, see especially part 2.
Cambridge University Geometric Algebra group
Geometric Calculus research and development
Geometric algebra
Clifford algebras
Minkowski spacetime
Mathematical physics | Spacetime algebra | [
"Physics",
"Mathematics"
] | 3,756 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
18,268,930 | https://en.wikipedia.org/wiki/AguaClara | AguaClara Cornell is an engineering based project team within Cornell University's College of Engineering that designs sustainable water treatment plants using open source technology. The program's mission is to uphold and protect “the fundamental human right to access safe drinking water. We are committed to the ongoing development of resilient, gravity-powered drinking water and wastewater treatment technologies.” AguaClara plants are unique among municipal-scale facilities in that they have no electrical or complex mechanical components and instead operate through hydraulic processes driven by gravity.
The AguaClara Cornell program provides undergraduate and graduate students the opportunity to enhance their education through hands-on experience working on projects with real applications. In 2012, the National Academy of Engineering showcased AguaClara as one of the 29 engineering program at US colleges that effectively incorporates real world experiences in their curriculum.
In 2017, a non-profit organization, AguaClara Reach, was formed with the continued mission of bringing clean drinking water on tap to communities around the world. AguaClara Reach works with AguaClara Cornell to pilot the latest open-source innovations developed in the lab, while sharing lessons learned from the field to drive further research.
In Honduras, implementation partner Agua Para el Pueblo (Water for People) is an NGO that manages the construction and technical support for AguaClara plants. AguaClara Reach partners with Gram Vikas in India to build Hydrodosers. The Hydrodoser, an AguaClara technology, is a modular, easy-to-install unit that, on its own, can be used to dose chlorine to disinfect water that has no more than 5 NTU of turbidity, which is typical of well water.
History
AguaClara was formed in 2005 by Cornell University senior lecturer Monroe Weber-Shirk, who volunteered in Central American refugee camps during the 1980s. Weber-Shirk used the connections he developed through his volunteer work to partner with Jacabo Nuñez, the director of Agua para el Pueblo to find the answer to a crucial question: What can we do to treat the dirty water that we are providing to rural communities?
In 2005, he founded the AguaClara program to address the need for sustainable municipal scale water treatment in resource poor communities. The first AguaClara plant was built in 2006 in Ojojona to serve a population of 2000 people. Since 2005, Agua Para el Pueblo has commissioned eighteen drinking water treatment facilities implementing AguaClara technology across Honduras. Upon request of local communities in neighboring Nicaragua, an additional two facilities were commissioned in that country in 2017.
In 2017 with the founding of AguaClara Reach, the project team appended Cornell to its name to distinguish it from its non-profit counterpart.
Design tool
AguaClara Cornell has developed an automated design tool that allows interested parties to input basic design parameters such as flow rate into a simple frontend and receive customized designs via email in five minutes or less. The user frontend communicates with the AguaClara server to populate MathCad scripts that calculate design parameters for input into AutoCAD scripts, which produce the final design. The design algorithms can be continuously improved and any changes will be immediately implemented the next time a design is requested.
The AguaClara design tool applies an economy of scale to water treatment design, in that there are almost no marginal costs to produce an additional design. This is significant considering that the World Health Organization estimates the global unmet demand for improved water at approximately 844 million people, including 100 million using surface water sources that would be viable for treatment with AguaClara technology. From the AguaClara website:
Plants
AguaClara designs gravity-powered water treatment plants that require no electricity and are constructed by its implementation partners. The plants use hydraulic flocculators and high-flow vertical-flow sedimentation tanks to remove turbidity from surface waters.
La 34, or "La treinta y quatro," once a numbered plantation run by United Fruit, is the first site of an AguaClara plant. Construction on the La 34 plant began in December 2004 and was inaugurated in August 2005. The plant serves a population of 2000 with a design flow of 285 LPM.
Marcala The Marcala plant began in the Fall of 2007 and was completed in June 2008. The plant was upgraded in May 2011 to a flow rate of 3200 LPM.
Cuatro Comunidades In the Fall of 2008, the AguaClara team designed a water treatment plant with shallower tanks that doesn't need an elevated platform for the plant operator. The full scale pilot facility for this new design was built for the four communities of Los Bayos, Rio Frio, Aldea Bonito and Las Jaguas. Construction was completed in March 2009.
Sponsors
The Sanjuan Fund
Ken Brown '74 & Elizabeth Sanjuan
Rotary Clubs
Cornell University School of Civil & Environmental Engineering
Cornell University College of Engineering
Engineers for a Sustainable World
National Rural Water Association
EPA P3 Award Student design competition for sustainability
Kaplan Family Distinguished Faculty Fellowships (CU Public Service)
Awards and recognition
2012 NAE "Infusing World Experiences into Engineering Education"
2011 Intel Environment Tech Award
See also
Water purification
Cornell University
Notes and references
This article incorporates text from the old AguaClara website and the new AguaClara website, licensed under a Creative Commons Attribution-Share Alike 3.0 United States License.
External links
Water treatment
Industrial buildings in Honduras
Cornell University student organizations
2005 establishments in New York (state)
Non-profit organizations based in New York (state)
Organizations established in 2005 | AguaClara | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,156 | [
"Water treatment",
"Water pollution",
"Water technology",
"Environmental engineering"
] |
14,230,307 | https://en.wikipedia.org/wiki/Kozeny%E2%80%93Carman%20equation | The Kozeny–Carman equation (or Carman–Kozeny equation or Kozeny equation) is a relation used in the field of fluid dynamics to calculate the pressure drop of a fluid flowing through a packed bed of solids. It is named after Josef Kozeny and Philip C. Carman. The equation is only valid for creeping flow, i.e. in the slowest limit of laminar flow. The equation was derived by Kozeny (1927) and Carman (1937, 1956) from a starting point of (a) modelling fluid flow in a packed bed as laminar fluid flow in a collection of curving passages/tubes crossing the packed bed and (b) Poiseuille's law describing laminar fluid flow in straight, circular section pipes.
Equation
The equation is given as:
$$\frac{\Delta p}{L} = \frac{180\mu}{\Phi_s^2 D_p^2}\,\frac{(1-\varepsilon)^2}{\varepsilon^3}\,v_s$$
where:
$\Delta p$ is the pressure drop;
$L$ is the total height of the bed;
$\mu$ is the viscosity of the fluid;
$\varepsilon$ is the porosity of the bed (approximately 0.40 for randomly packed spheres);
$\Phi_s$ is the sphericity of the particles in the packed bed ($\Phi_s$ = 1.0 for spherical particles);
$D_p$ is the diameter of the volume-equivalent spherical particle;
$v_s$ is the superficial or "empty-tower" velocity, which is directly proportional to the average fluid velocity in the channels ($q$) and the porosity ($\varepsilon$): $v_s = \varepsilon q$.
This equation holds for flow through packed beds with particle Reynolds numbers up to approximately 1.0, after which point frequent shifting of flow channels in the bed causes considerable kinetic energy losses.
This equation is a particular case of Darcy's law, with a very specific permeability. Darcy's law states that "flow is proportional to the pressure gradient and inversely proportional to the fluid viscosity" and is given as:
$$q = -\frac{k}{\mu}\,\nabla p$$
Combining these equations gives the final Kozeny equation for absolute (single phase) permeability:
$$k = \Phi_s^2\,\frac{\varepsilon^3 D_p^2}{180\,(1-\varepsilon)^2}$$
where:
is the absolute (i.e., single phase) permeability.
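A minimal numerical sketch of both forms of the equation is given below; the fluid properties, bed geometry and superficial velocity are hypothetical example values chosen only for illustration (and should be checked against the creeping-flow validity limit for a real case):

```python
def kozeny_carman_dp(mu, eps, phi_s, d_p, v_s, length):
    """Pressure drop (Pa) over a packed bed of height `length` (m)."""
    return 180.0 * mu * v_s * length * (1 - eps) ** 2 / (phi_s ** 2 * d_p ** 2 * eps ** 3)

def kozeny_permeability(eps, phi_s, d_p):
    """Absolute (single-phase) permeability k (m^2)."""
    return phi_s ** 2 * eps ** 3 * d_p ** 2 / (180.0 * (1 - eps) ** 2)

# Water-like fluid through a 0.5 m bed of 1 mm spheres (illustrative values only).
mu, eps, phi_s, d_p, v_s, L = 1.0e-3, 0.40, 1.0, 1.0e-3, 1.0e-3, 0.5
print(kozeny_carman_dp(mu, eps, phi_s, d_p, v_s, L))   # pressure drop in Pa
print(kozeny_permeability(eps, phi_s, d_p))            # permeability in m^2
```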
History
The equation was first proposed by Kozeny (1927) and later modified by Carman (1937, 1956). A similar equation was derived independently by Fair and Hatch in 1933. A comprehensive review of other equations has been published.
See also
Fractionating column
Random close pack
Raschig ring
Ergun equation
References
Eponymous laws of physics
Equations of fluid dynamics
Unit operations
Porous media | Kozeny–Carman equation | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 480 | [
"Equations of fluid dynamics",
"Equations of physics",
"Unit operations",
"Porous media",
"Materials science",
"Chemical process engineering",
"Fluid dynamics"
] |
14,231,240 | https://en.wikipedia.org/wiki/PARSEC | PARSEC is a package designed to perform electronic structure calculations of solids and molecules using density functional theory (DFT). The acronym stands for Pseudopotential Algorithm for Real-Space Electronic Calculations. It solves the Kohn–Sham equations in real space, without the use of explicit basis sets.
One of the strengths of this code is that it handles non-periodic boundary conditions in a natural way, without the use of super-cells, but can equally well handle periodic and partially periodic boundary conditions. Another key strength is that it is readily amenable to efficient massive parallelization, making it highly effective for very large systems.
Its development started in the early 1990s with James Chelikowsky (now at the University of Texas), Yousef Saad and collaborators at the University of Minnesota. The code is freely available under the GNU GPLv2. Currently, its public version is 1.4.4. Some of the physical/chemical properties calculated by this code are: the Kohn–Sham band structure, atomic forces (including molecular dynamics capabilities), static susceptibility, magnetic dipole moment, and many additional molecular and solid state properties.
See also
Density functional theory
Quantum chemistry computer programs
References
External links
Computational chemistry software
Density functional theory software
Physics software | PARSEC | [
"Physics",
"Chemistry"
] | 258 | [
"Computational chemistry software",
"Chemistry software",
"Computational physics",
"Computational chemistry",
"Density functional theory software",
"Physics software"
] |
14,231,342 | https://en.wikipedia.org/wiki/Error%20management%20theory | Error management theory (EMT) is an approach to perception and cognition biases originally coined by David Buss and Martie Haselton. Error management training is a related area that uses this theory. The objective of it is to encourage trainees to make errors and encourage them in reflection to understand the causes of those errors and to identify suitable strategies to avoid making them in future.
Various biases in thinking and decision-making have been highlighted by Daniel Kahneman and have been shown to cause cognitive errors in psychological and economic decisions. Cognitive biases in error management theory refer to biases and heuristics that have undergone positive selection because they confer evolutionary benefits. According to this theory, recurrent cost asymmetries between the two types of errors (Type I and Type II) over evolutionary time should result in a bias toward making the less costly error (i.e., adaptive rationality leads to cognitive biases).
Error management theory asserts that evolved mindreading mechanisms will be biased to produce more of one type of inferential error than another. These mindreading biases have been examined in the domain of mating psychology. Error management theory provides a possible explanation for the discovery that men often tend to overperceive women's sexual interest and women tend to underperceive men's commitment intent. The theory has been supported by empirical findings, but researchers are still testing and refining it. Newer research suggests exceptions and refinements to the theory, such as postmenopausal effects, the possible projection of sexual and commitment self-interest, and other differences including unrestricted sociosexuality.
Type errors
In the decision-making process, when faced with uncertainty, a subject can make two possible errors: type I or type II.
A type I error is a false positive, thinking that an effect is there, when it is not. For example, acting on a fire alarm that turns out to be false. When someone infers sexual interest, where there is none, then a false-positive error has occurred.
A type II error is a false negative, not seeing an effect where one exists. Ignoring the fire alarm that turns out to be accurate, due to scepticism, illustrates this point. Falsely inferring a lack of intent about sexual interest means a false negative error has occurred.
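As a toy illustration of the cost-asymmetry logic behind error management theory (the probabilities and costs below are invented for illustration and are not estimates from the literature), a decision rule that tolerates more false positives has the lower expected cost whenever misses are much more expensive than false alarms:

```python
def expected_cost(p_threat, p_false_alarm, p_miss, cost_false_alarm, cost_miss):
    """Expected cost of a detection strategy under asymmetric error costs."""
    return (1 - p_threat) * p_false_alarm * cost_false_alarm + p_threat * p_miss * cost_miss

# Hypothetical numbers: misses (Type II) cost 100x more than false alarms (Type I).
cautious = expected_cost(p_threat=0.05, p_false_alarm=0.40, p_miss=0.02,
                         cost_false_alarm=1.0, cost_miss=100.0)
skeptical = expected_cost(p_threat=0.05, p_false_alarm=0.05, p_miss=0.30,
                          cost_false_alarm=1.0, cost_miss=100.0)
print(cautious, skeptical)  # the strategy biased toward false alarms costs less on average
```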
Sexual overperception bias
Males
One of the aims of error management theory is to explain sexual overperception bias. Sexual overperception occurs when a type I error is committed by an individual. An individual committing this type error falsely concludes that someone else has a sexual interest in them. Research has shown that males are more likely than females to commit sexual overperception bias – men tend to overestimate women's sexual interest while women tend to underestimate men's. This is theorised to be likely due to the fact that the reproductive costs of sexual underperception are greater for men than the risk of making false positives. Men who perceive themselves as especially high in mate value are especially prone to experiencing this phenomenon. In addition, men who are also more inclined to pursue a short term mating strategy exhibit a more prominent case of sexual overperception bias.
Manipulation
Differences in perceptions of sexual interest between men and women may be exploited by both genders. Men may present themselves as more emotionally invested in a woman than they actually are in order to gain sexual access; 71% of men report engaging in this form of manipulation and 97% of women report having experienced this form of manipulation. Women may present themselves as more sexually interested in a man than they actually are in order to fulfill other needs and desires. The manipulations create conflicts between men and women as to the status of their relationships. Women on the receiving end of emotional manipulation may complain that the relationship is moving too quickly while men on the receiving end of sexual manipulation may complain about "being led on".
Exceptions
The sister effect
The sister effect is an exception to male overperception bias. Haselton and Buss (2000) found that sexual overperception bias would not occur when the target the men had to perceive sexual intent from was their sister. They found that the men's perception of their sister's sexual intent was lower than their perception of sexual intent from other females. Haselton and Buss (2000) believed that this perception of female sexual interest was most accurate as it fell between women's perception of women (high interest) and women's perception of their own sexual interest (low interest). This could be a product of incest-avoidance mechanisms.
Sexual and commitment self-interest
Sexual underperception in males is also observed, in cases where men report low levels of their own sexual interest. A person's own level of attraction, rather than their gender, may lead to over or under-perception. The exact mechanism for this is unclear but it is suggested that individuals may project their own level of sexual and commitment interestedness on to their interaction partner, whether they are in a relationship with them or they were strangers before the interaction.
Male insensitivity bias
A different explanation for the presence of both overperception and underperception in men is the male insensitivity bias. Evidence has shown that males lack perceptual sensitivity, so they are more likely to misperceive friendliness as sexual interest, but also more likely to misperceive sexual interest as friendliness, in comparison to females, something that explains the presence of both biases in males.
Sexual underperception bias
Females
Women also fall victim to misconceptions during male-female interactions. Haselton and Buss (2000) argued that these errors primarily stem from women's perceived desire for a committed relationship by a male counterpart. Women have evolved strategies to protect themselves from deception. One of these evolved strategies is to commit the Skeptical Commitment Bias.
Skeptical commitment bias
Women's commitment skepticism arises from the high costs of falsely inferring a mate's commitment to a relationship. It hypothesizes that women have adapted to be cognitively biased towards under perceiving male interest and commitment. This is due to the high cost of a false positive – a man not being committed and a woman accepting him – that could lead to raising a child without an investing mate, reputational damage and risk reducing chances of future courtship. The cost of a false negative – a man that is committed and a woman rejecting him – is far less costly to the female.
Women are limited to how many children they can have in a lifetime. However, men are not limited and can reproduce multiple times. Therefore, overperception costs are higher for females. This hypothesis is mentioned briefly by Buss (2012).
Females' commitment skepticism is unique to humans. For other animals, courtship rituals are not particularly varied and there is no guesswork or ambiguity involved. For instance, a long-tailed manakin bird has a mating dance that is instinctive and intricate and requires a young apprentice to perform as a duet to the female. If the dance is good enough the female will mate with the male, if the duet falls flat then she will not choose him to reproduce with. However, human courtship behaviour is more ambiguous and so requires these types of cognitive biases to avoid costly errors, in this case, sexual deception.
Exceptions
"Skeptical dad" and "Encouraging mum" hypothesis
Previously, commitment skepticism and overperception biases were thought of as sex specific. Women would underplay or fail to infer a psychological state that is there in order to prevent a false negative error. Men would over perceive female interest because the reproductive costs of sexual under perception are greater for men than women. Al-Shawaf (2016) stated that this is not what the core logic of the Error Management Theory (EMT) suggests. EMT states that the ancestral cost-benefit matrix of both false positive and false negative errors is what drives the cognitive biases and decision-making processes, not gender which is what it has been defined by.
Imagine a woman is assessing her potential mate's commitment intent. The woman's father also has a vested interest in whether she reproduces because he shares genes with her and thus, his reproductive interests extend to his daughter's mate choice. The father also has to evaluate the costs and benefits of the two types of errors she could make when evaluating her mate's commitment intent. If the chosen mate sexually deceives and then leaves her then the outcome is more costly for him than if his daughter is more cautious and underestimates intent. Thus, the father might take time before offering his parental seal of approval. The father shows the same skeptical commitment bias as his daughter, favouring the false negative error because it is less costly.
Taking the parental dynamic and switching it from father to mother, the same could be said for sexual overperception bias. A mother has an interest in who her son decides to mate with and therefore will favour the false positive error over false negative. If she fails to detect real interest in the woman, and thus, fails to share this female interest with her son, then it is more costly to her than if she falsely detects sexual interest from a woman towards her son and encourages him to pursue. If her son misses an opportunity, he has missed the chance to pass on his, and in doing so her own, genes. Therefore, the mother shows the same overperception bias as her son, favouring the false positive error because it is less costly.
It is not sex or gender that predicts what type of cognitive bias might be expressed but rather the potential costs to reproductive success.
Postmenopausal females
Contrasting the evidence for fertile females, skeptical commitment bias does not occur in postmenopausal women. Haselton and Buss (2000) found evidence for the perception biases studying young subjects; however, this was not representative of older females, who have passed through menopause. The reason for this disparity between pre- and postmenopausal females is that fertile females underestimate the intentions of males to invest in the relationship, in order to avoid the costs of pregnancy without support; however, postmenopausal women do not perceive such costs. Their inability to conceive means that there is no reason to underestimate a male's intentions.
Alternative explanations
Some recent studies researching error management theory have found men and women's perceptions of opposite gender sexual and commitment interest may be mitigated by other explanations.
Culture
With a universal proclivity, it would be possible to document the bias across cultures and "across different demographic groups, including among men varying in age, ethnicity, and education level" within cultures and in females based on their job status, health, levels of education and income equality. When investigated in Norway, one of the world's most gender egalitarian societies, error management theory and its evolutionary explanation were supported. In addition, the pattern of misperception of men and women held up across demographic groups differing in relationship status (singles versus partnered participants).
Individual differences
Sexual over-perception relative to under-perception was reported more frequently among younger participants, among singles, and among participants with an unrestricted socio-sexual orientation. Endorsing and being more open to casual sex may have evoked more sexual interest from members of the opposite sex, leading to more frequent reports of sexual overperception. Socially unrestricted male and female high school students were found to report being more subject to sexual harassment as well as sexually harassing others. From this, it is possible that being subject to sexual over-perception may explain the link between socio-sexuality and being subject to sexual harassment.
Projection
As stated above, what was reported about male sexual and commitment self-interest was also true of women. Women's self-reported levels of sexual interest and desire for commitment also predicted their perceptions of their partners' sexual interest and desire for commitment. This implies that instead of males and females falling victim to overperception and underperception respectively, both sexes project their own level of interest onto the individuals they are interacting with.
Reciprocity
Another explanation that removes overperception and underperception from the picture is how males and females reciprocate the perceived interest in one another. Evidence from speed dating shows that a partner's level of attraction for an individual influences the individual's own interest in that particular partner. Unlike the "fox and the grapes" approach, which explains how underperception occurs in men as a means of face-saving, reciprocity reflects a real shift in the level of interest in a partner as a result of returning the perceived interest.
Other examples
Similar examples can also be seen in the judgment of whether a noise in the wild was a predator when it was more likely the wind—humans who assumed it was a predator were less likely to be attacked as prey over time than those who were skeptical. This is similar to the animistic fallacy.
Smoke detectors are designed with this theory in mind. Since the cost of a Type I error (false positive, e.g. a nuisance alarm) is much lower than the cost of a Type II error (false negative, e.g. an undetected fire that could burn a house down), the sensitivity threshold of a smoke detector is designed to err on the side of Type I errors. This explains why nuisance alarms are relatively common.
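The asymmetry can be made concrete with a toy expected-cost calculation. The sketch below is illustrative only; the alarm rates, fire probability, and dollar costs are invented for the example and are not taken from the error management literature.

```python
# Toy expected-cost comparison for a smoke detector's sensitivity threshold.
# All numbers below are invented for illustration.

def expected_cost(false_alarm_rate, missed_fire_rate,
                  cost_false_alarm=10.0,        # nuisance alarm: minor annoyance
                  cost_missed_fire=300_000.0,   # undetected fire: house burns down
                  fire_probability=0.001):      # chance a real fire occurs
    """Expected cost for a given detector tuning."""
    cost_type_1 = (1 - fire_probability) * false_alarm_rate * cost_false_alarm
    cost_type_2 = fire_probability * missed_fire_rate * cost_missed_fire
    return cost_type_1 + cost_type_2

# A "sensitive" detector makes many Type I errors but few Type II errors;
# a "skeptical" detector does the opposite.
sensitive = expected_cost(false_alarm_rate=0.20, missed_fire_rate=0.01)
skeptical = expected_cost(false_alarm_rate=0.01, missed_fire_rate=0.20)

print(f"sensitive detector: expected cost {sensitive:8.2f}")
print(f"skeptical detector: expected cost {skeptical:8.2f}")
# Because a missed fire is vastly more costly than a nuisance alarm,
# the tuning that tolerates frequent false alarms has the lower expected cost.
```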
See also
Reinforcement learning
Notes
Further reading
Evolutionary psychology
Management cybernetics
Error detection and correction | Error management theory | [
"Engineering"
] | 2,749 | [
"Error detection and correction",
"Reliability engineering"
] |
14,236,540 | https://en.wikipedia.org/wiki/Solid%20sweep | The sweep Sw of a solid S is defined as the solid created when a motion M is applied to a given solid. The solid S should be considered to be a set of points in the Euclidean space R3. Then the solid Sw which is generated by sweeping S over M will contain all the points over which the points of S have moved during the motion M. Solid sweeping which uses this process is employed in different fields, including the modelling of fillets and rounds, interference detection and the simulation of the numerical controlled machining process.
References
Euclidean solid geometry | Solid sweep | [
"Physics",
"Mathematics"
] | 113 | [
"Euclidean solid geometry",
"Space",
"Geometry",
"Geometry stubs",
"Spacetime"
] |
3,382,576 | https://en.wikipedia.org/wiki/Choke%20%28electronics%29 | In electronics, a choke is an inductor used to block higher-frequency alternating currents (AC) while passing direct current (DC) and lower-frequency ACs in a circuit. A choke usually consists of a coil of insulated wire often wound on a magnetic core, although some consist of a doughnut-shaped ferrite bead strung on a wire. The choke's impedance increases with frequency. Its low electrical resistance passes both AC and DC with little power loss, but its reactance limits the amount of AC passed.
The name comes from blocking—"choking"—high frequencies while passing low frequencies. It is a functional name; the name "choke" is used if an inductor is used for blocking or decoupling higher frequencies, but the component is simply called an "inductor" if used in electronic filters or tuned circuits. Inductors designed for use as chokes are usually distinguished by not having low-loss construction (high Q factor) required in inductors used in tuned circuits and filtering applications.
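For a sense of scale, the choke's series impedance can be estimated as |Z| ≈ √(R² + (2πfL)²). The short sketch below evaluates it for an assumed 100 mH choke with 5 Ω of winding resistance; the component values are illustrative, not taken from this article.

```python
import math

L = 0.1    # inductance in henries (assumed 100 mH choke)
R = 5.0    # winding resistance in ohms (assumed)

def impedance_magnitude(frequency_hz):
    """|Z| of a series R-L model of the choke at a given frequency."""
    reactance = 2 * math.pi * frequency_hz * L
    return math.hypot(R, reactance)

for f in (0, 50, 1_000, 100_000):
    print(f"{f:>8} Hz: |Z| = {impedance_magnitude(f):10.1f} ohms")
# At DC only the small winding resistance remains, while the impedance grows
# in proportion to frequency, which is why the choke passes DC and
# low frequencies but blocks higher-frequency AC.
```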
Types and construction
Chokes are divided into two broad classes:
Audio frequency chokes—designed to block audio and power line frequencies while allowing DC to pass
Radio frequency chokes—designed to block radio frequencies while allowing audio and DC to pass.
Audio frequency choke
Audio frequency chokes usually have ferromagnetic cores to increase their inductance. They are often constructed similarly to transformers, with laminated iron cores and an air gap. The iron core increases the inductance for a given volume of the core. Chokes were frequently used in the design of rectifier power supplies for vacuum tube equipment such as radio receivers or amplifiers. They are commonly found in direct-current motor controllers to produce direct current (DC), where they were used in conjunction with large electrolytic capacitors to remove the voltage ripple (AC) at the output DC. A rectifier circuit designed for a choke-output filter may produce too much DC output voltage and subject the rectifier and filter capacitors to excessive in-rush and ripple currents if the inductor is removed. However, modern electrolytic capacitors with high ripple current ratings, and voltage regulators that remove more power supply ripple than chokes could, have eliminated heavy, bulky chokes from mains frequency power supplies. Smaller chokes are used in switching power supplies to remove the higher-frequency switching transients from the output and sometimes from feeding back into the mains input. They often have toroidal ferrite cores.
Some car audio hobbyists use choke coils with automobile audio systems (specifically in the wiring of a subwoofer, to remove high frequencies from the amplified signal).
Radio frequency choke
Radio frequency chokes (RFC) often have iron powder or ferrite cores, which increase inductance and improve overall performance. They are often wound in complex patterns (basket winding) to reduce self-capacitance and proximity effect losses. Chokes for even higher frequencies have non-magnetic cores and low inductance.
A modern form of choke used for eliminating digital RF noise from lines is the ferrite bead, a cylindrical or torus-shaped core of ferrite slipped over a wire. These are often seen on computer cables.
Common-mode choke
A common-mode (CM) choke is a special application where a choke is used to act upon a common-mode signal.
These chokes are useful for suppression of electromagnetic interference (EMI) and radio frequency interference (RFI) frequently introduced on high current wires such as on power supply lines, which may cause unwanted operation. Reducing this noise is frequently done by using a common mode choke - two parallel coil windings on a single core. Common mode chokes allow differential currents to pass while blocking common-mode signals that appear identically on both wires. Because the magnetic flux produced by differential-mode currents in the core of a common mode choke tends to cancel out, the choke presents little impedance to differential mode currents. It achieves this by the placement of windings such that they generate equal but opposite fields that cancel each other out for differential mode signals. Normally this also means that the core will not saturate for large differential mode currents, and the maximum current rating is instead determined by the heating effect of the winding resistance. On the other hand, common mode currents see a high impedance path due to the combined inductance of the windings that reinforce each other.
CM chokes are commonly used in industrial, electrical and telecommunications applications to remove or decrease noise and related electromagnetic interference.
When the CM choke is conducting CM current, most of the magnetic flux generated by the windings is confined within the inductor core because of its high permeability. In this case, the leakage flux, which is also the near magnetic field emission of the CM choke is low. However, the DM current flowing through the windings will generate high emitted near magnetic field since the windings are negative coupled in this case. To reduce the near magnetic field emission, a twisted winding structure can be applied to the CM choke.
The difference between the balanced twisted windings CM choke and conventional balanced two winding CM choke is that the windings interact in the center of the core open window. When it is conducting CM current, the balanced twisted winding CM inductor can provide identical CM inductance as the conventional CM inductor. When it is conducting DM current, the equivalent current loops will generate inversed direction magnetic fields in space so that they tend to cancel each other.
A current is passed through an inductor and a probe measures the near field emission. A signal generator, serving as a voltage source, is connected to an amplifier. The output of the amplifier is then connected to the inductor under measurement. To monitor and control the current flowing through the inductor, a current clamp meter is clamped around the conducting wire. An oscilloscope connected to the current clamp measures the current waveform. A probe measures the flux in the air. A spectrum analyzer connected to the probe collects data.
See also
Line reactor
Waveguide choke - designed to enhance or inhibit propagation of specific modes in waveguides.
References
Further reading
Wildi, Théodore (1981) Electrical power technology,
External links
Common Mode Choke Theory
Electromagnetic coils
Electrodynamics
Wireless tuning and filtering
"Mathematics",
"Engineering"
] | 1,311 | [
"Electrodynamics",
"Radio electronics",
"Wireless tuning and filtering",
"Dynamical systems"
] |
3,383,505 | https://en.wikipedia.org/wiki/Bi-elliptic%20transfer | In astronautics and aerospace engineering, the bi-elliptic transfer is an orbital maneuver that moves a spacecraft from one orbit to another and may, in certain situations, require less delta-v than a Hohmann transfer maneuver.
The bi-elliptic transfer consists of two half-elliptic orbits. From the initial orbit, a first burn expends delta-v to boost the spacecraft into the first transfer orbit with an apoapsis at some point away from the central body. At this point a second burn sends the spacecraft into the second elliptical orbit with periapsis at the radius of the final desired orbit, where a third burn is performed, injecting the spacecraft into the desired orbit.
While they require one more engine burn than a Hohmann transfer and generally require a greater travel time, some bi-elliptic transfers require a lower amount of total delta-v than a Hohmann transfer when the ratio of final to initial semi-major axis is 11.94 or greater, depending on the intermediate semi-major axis chosen.
The idea of the bi-elliptical transfer trajectory was first published by Ary Sternfeld in 1934.
Calculation
Delta-v
The three required changes in velocity can be obtained directly from the vis-viva equation
v² = μ(2/r − 1/a),
where
v is the speed of an orbiting body,
μ is the standard gravitational parameter of the primary body,
r is the distance of the orbiting body from the primary, i.e., the radius,
a is the semi-major axis of the body's orbit.
In what follows,
r0 is the radius of the initial circular orbit,
rf is the radius of the final circular orbit,
rb is the common apoapsis radius of the two transfer ellipses and is a free parameter of the maneuver,
a1 and a2 are the semimajor axes of the two elliptical transfer orbits, which are given by a1 = (r0 + rb)/2 and a2 = (rf + rb)/2.
Starting from the initial circular orbit with radius r0 (dark blue circle in the figure to the right), a prograde burn (mark 1 in the figure) puts the spacecraft on the first elliptical transfer orbit (aqua half-ellipse). The magnitude of the required delta-v for this burn is Δv1 = √(2μ/r0 − μ/a1) − √(μ/r0).
When the apoapsis of the first transfer ellipse is reached at a distance rb from the primary, a second prograde burn (mark 2) raises the periapsis to match the radius of the target circular orbit, putting the spacecraft on a second elliptic trajectory (orange half-ellipse). The magnitude of the required delta-v for the second burn is Δv2 = √(2μ/rb − μ/a2) − √(2μ/rb − μ/a1).
Lastly, when the final circular orbit with radius rf is reached, a retrograde burn (mark 3) circularizes the trajectory into the final target orbit (red circle). The final retrograde burn requires a delta-v of magnitude Δv3 = √(2μ/rf − μ/a2) − √(μ/rf).
If rb = rf, then the maneuver reduces to a Hohmann transfer (in that case Δv3 can be verified to become zero). Thus the bi-elliptic transfer constitutes a more general class of orbital transfers, of which the Hohmann transfer is a special two-impulse case.
The maximal possible savings can be computed by assuming that rb → ∞, in which case the total Δv simplifies to (√2 − 1)(√(μ/r0) + √(μ/rf)). In this case, one also speaks of a bi-parabolic transfer because the two transfer trajectories are no longer ellipses but parabolas. The transfer time increases to infinity too.
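The following sketch evaluates both maneuvers numerically with the vis-viva equation. Earth's gravitational parameter is used, and the initial radius, final radius, and intermediate apoapsis radius are assumed example values (chosen so that the ratio of radii exceeds 11.94); they are not the figures used in the example later in this article.

```python
import math

MU = 3.986004418e14  # Earth's standard gravitational parameter, m^3/s^2

def vis_viva(r, a):
    """Orbital speed at radius r on an orbit with semi-major axis a."""
    return math.sqrt(MU * (2.0 / r - 1.0 / a))

def hohmann_delta_v(r0, rf):
    a_t = (r0 + rf) / 2.0
    dv1 = vis_viva(r0, a_t) - vis_viva(r0, r0)   # enter transfer ellipse
    dv2 = vis_viva(rf, rf) - vis_viva(rf, a_t)   # circularize at rf
    return dv1 + dv2

def bi_elliptic_delta_v(r0, rf, rb):
    a1 = (r0 + rb) / 2.0
    a2 = (rf + rb) / 2.0
    dv1 = vis_viva(r0, a1) - vis_viva(r0, r0)    # burn 1: enter first ellipse
    dv2 = vis_viva(rb, a2) - vis_viva(rb, a1)    # burn 2: raise periapsis at rb
    dv3 = vis_viva(rf, a2) - vis_viva(rf, rf)    # burn 3: circularize (retrograde)
    return dv1 + dv2 + dv3

# Assumed example radii in metres: LEO-like start, final radius 15x larger,
# intermediate apoapsis well beyond the final orbit.
r0, rf, rb = 7.0e6, 105.0e6, 400.0e6

print(f"Hohmann     dv = {hohmann_delta_v(r0, rf):7.1f} m/s")
print(f"bi-elliptic dv = {bi_elliptic_delta_v(r0, rf, rb):7.1f} m/s")
```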
Transfer time
Like the Hohmann transfer, both transfer orbits used in the bi-elliptic transfer constitute exactly one half of an elliptic orbit. This means that the time required to execute each phase of the transfer is half the orbital period of each transfer ellipse.
Using the equation for the orbital period and the notation from above, the durations of the two half-orbits are t1 = π√(a1³/μ) and t2 = π√(a2³/μ).
The total transfer time is the sum of the times required for each half-orbit. Therefore:
t = t1 + t2 = π√(a1³/μ) + π√(a2³/μ).
Comparison with the Hohmann transfer
Delta-v
The figure shows the total required to transfer from a circular orbit of radius to another circular orbit of radius . The is shown normalized to the orbital speed in the initial orbit, , and is plotted as a function of the ratio of the radii of the final and initial orbits, ; this is done so that the comparison is general (i.e. not dependent of the specific values of and , only on their ratio).
The thick black curve indicates the for the Hohmann transfer, while the thinner colored curves correspond to bi-elliptic transfers with varying values of the parameter , defined as the apoapsis radius of the elliptic auxiliary orbit normalized to the radius of the initial orbit, and indicated next to the curves. The inset shows a close-up of the region where the bi-elliptic curves cross the Hohmann curve for the first time.
One sees that the Hohmann transfer is always more efficient if the ratio of radii is smaller than 11.94. On the other hand, if the radius of the final orbit is more than 15.58 times larger than the radius of the initial orbit, then any bi-elliptic transfer, regardless of its apoapsis radius (as long as it's larger than the radius of the final orbit), requires less than a Hohmann transfer. Between the ratios of 11.94 and 15.58, which transfer is best depends on the apoapsis distance . For any given in this range, there is a value of above which the bi-elliptic transfer is superior and below which the Hohmann transfer is better. The following table lists the value of that results in the bi-elliptic transfer being better for some selected cases.
Transfer time
The long transfer time of the bi-elliptic transfer, t = π√(a1³/μ) + π√(a2³/μ), is a major drawback for this maneuver. It even becomes infinite for the bi-parabolic transfer limiting case.
The Hohmann transfer takes less than half of the time because there is just one transfer half-ellipse. To be precise, its duration is t_Hohmann = π√(aH³/μ), where aH = (r0 + rf)/2 is the semi-major axis of the Hohmann transfer ellipse.
Versatility in combination maneuvers
While a bi-elliptic transfer has a small parameter window where it's strictly superior to a Hohmann Transfer in terms of delta V for a planar transfer between circular orbits, the savings is fairly small, and a bi-elliptic transfer is a far greater aid when used in combination with certain other maneuvers.
At apoapsis, the spacecraft is travelling at low orbital velocity, and significant changes in periapsis can be achieved for small delta V cost. Transfers that resemble a bi-elliptic but which incorporate a plane-change maneuver at apoapsis can dramatically save delta-V on missions where the plane needs to be adjusted as well as the altitude, versus making the plane change in low circular orbit on top of a Hohmann transfer.
Likewise, dropping periapsis all the way into the atmosphere of a planetary body for aerobraking is inexpensive in velocity at apoapsis, but permits the use of "free" drag to aid in the final circularization burn to drop apoapsis; though it adds an extra mission stage of periapsis-raising back out of the atmosphere, this may, under some parameters, cost significantly less delta V than simply dropping periapsis in one burn from circular orbit.
Example
To transfer from a circular low Earth orbit with to a new circular orbit with using a Hohmann transfer orbit requires a Δv of . However, because , it is possible to do better with a bi-elliptic transfer. If the spaceship first accelerated 3061.04 m/s, thus achieving an elliptic orbit with apogee at , then at apogee accelerated another 608.825 m/s to a new orbit with perigee at , and finally at perigee of this second transfer orbit decelerated by 447.662 m/s, entering the final circular orbit, then the total Δv would be only 4117.53 m/s, which is 16.19 m/s (0.4%) less.
The Δv saving could be further improved by increasing the intermediate apogee, at the expense of longer transfer time. For example, an apogee of (1.3 times the distance to the Moon) would result in a 1% Δv saving over a Hohmann transfer, but require a transit time of 17 days. As an impractical extreme example, an apogee of (30 times the distance to the Moon) would result in a 2% Δv saving over a Hohmann transfer, but the transfer would require 4.5 years (and, in practice, be perturbed by the gravitational effects of other Solar system bodies). For comparison, the Hohmann transfer requires 15 hours and 34 minutes.
Evidently, the bi-elliptic orbit spends more of its delta-v closer to the planet (in the first burn). This yields a higher contribution to the specific orbital energy and, due to the Oberth effect, is responsible for the net reduction in required delta-v.
See also
Delta-v budget
Oberth effect
References
Astrodynamics
Spacecraft propulsion
Orbital maneuvers | Bi-elliptic transfer | [
"Engineering"
] | 1,807 | [
"Astrodynamics",
"Aerospace engineering"
] |
3,385,027 | https://en.wikipedia.org/wiki/Engineers%20Without%20Borders%20%28Belgium%29 | Ingénieurs sans Frontières - Ingénieurs Assistance Internationale (ISF-IAI, more commonly known as ISF, Belgium) is a Belgian NGO assisting developing areas of the world with their engineering needs and whose fundamental purpose is to adapt technological development to the needs of those living in underprivileged areas.
Overview
It should not be confused with Ingenieurs zonder Grenzen (Dutch for "Engineers Without Borders").
Founded on the initiative of a few engineers and with the support of the Associations of Schools for Engineers (FABI), ISF can count on the contribution of several hundred volunteers: engineers with varying qualifications and students willing to put their time and skills at the disposal of development projects.
Thanks to many contacts in both professional and associative environment, ISF can seek advice from engineers and technicians on specific problems in every sector of technology. All ISF work is done on a voluntary basis.
ISF, looking for collaboration with other Belgian NGOs, has formed the CHAKA group with CODEART and ADG, both Belgian associations. ISF is also a member of the Federation of the French and German speaking NGOs of Belgium (ACODEV) and is approved by the General Directorate for International Cooperation of the Belgian Federal Government (DGCI).
ISF is also a member of EWB-International (Engineers Without Borders International Network)
See also
Ingenieurs zonder Grenzen - Belgian "Engineers Without Borders" organisations, with a Dutch name.
Without Borders
External links
Ingénieurs sans Frontières - Official site (French/English/Spanish language)
Belgium
Development charities based in Belgium | Engineers Without Borders (Belgium) | [
"Engineering"
] | 335 | [
"Engineers Without Borders"
] |
3,386,119 | https://en.wikipedia.org/wiki/%27t%20Hooft%20loop | In quantum field theory, the 't Hooft loop is a magnetic analogue of the Wilson loop for which spatial loops give rise to thin loops of magnetic flux associated with magnetic vortices. They play the role of a disorder parameter for the Higgs phase in pure gauge theory. Consistency conditions between electric and magnetic charges limit the possible 't Hooft loops that can be used, similarly to the way that the Dirac quantization condition limits the set of allowed magnetic monopoles. They were first introduced by Gerard 't Hooft in 1978 in the context of possible phases that gauge theories admit.
Definition
There are a number of ways to define 't Hooft lines and loops. For timelike curves they are equivalent to the gauge configuration arising from the worldline traced out by a magnetic monopole. These are singular gauge field configurations on the line such that their spatial slices have a magnetic field whose form approaches that of a magnetic monopole
where in Yang–Mills theory is the generally Lie algebra valued object specifying the magnetic charge. 't Hooft lines can also be inserted in the path integral by requiring that the gauge field measure can only run over configurations whose magnetic field takes the above form.
More generally, the 't Hooft loop can be defined as the operator whose effect is equivalent to performing a modified gauge transformation that is singular on the loop in such a way that any other loop parametrized by with a winding number around satisfies
These modified gauge transformations are not true gauge transformations as they do not leave the action invariant. For temporal loops they create the aforementioned field configurations while for spatial loops they instead create loops of color magnetic flux, referred to as center vortices. By constructing such gauge transformations, an explicit form for the 't Hooft loop can be derived by introducing the Yang–Mills conjugate momentum operator
If the loop encloses a surface , then an explicit form of the 't Hooft loop operator is
Using Stokes' theorem this can be rewritten in a way which shows that it measures the electric flux through , analogous to how the Wilson loop measures the magnetic flux through the enclosed surface.
There is a close relation between 't Hooft and Wilson loops where, given two loops and that have linking number , the 't Hooft loop and Wilson loop satisfy
where is an element of the center of the gauge group. This relation can be taken as a defining feature of 't Hooft loops. The commutation properties between these two loop operators are often utilized in topological field theory where these operators form an algebra.
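For reference, this relation is conventionally written as below for gauge group SU(N); the notation (T for the 't Hooft loop, W for the Wilson loop, n for the linking number, and z = e^{2πi/N} a generator of the center Z_N) is an assumed standard convention rather than notation defined in this article.

```latex
% Commutation of a 't Hooft loop T(C) with a Wilson loop W(C'),
% for loops with linking number n in SU(N) gauge theory:
W(C')\, T(C) = z^{\,n}\, T(C)\, W(C'), \qquad z = e^{2\pi i / N} \in Z_N .
```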
Disorder operator
The 't Hooft loop is a disorder operator since it creates singularities in the gauge field, with their expectation value distinguishing the disordered phase of pure Yang–Mills theory from the ordered confining phase. Similarly to the Wilson loop, the expectation value of the 't Hooft loop can follow either the area law
where is the area enclosed by loop and is a constant, or it can follow the perimeter law
where is the length of the loop and is a constant.
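Explicitly, the two behaviours referred to above are conventionally written as follows; the symbols σ and ρ stand for the constants mentioned in the text, and A and L for the enclosed area and the loop length, with the notation chosen here for illustration.

```latex
% Area law versus perimeter law for the 't Hooft loop expectation value:
\langle T(C) \rangle \sim e^{-\sigma\, A(C)} \quad \text{(area law)},
\qquad
\langle T(C) \rangle \sim e^{-\rho\, L(C)} \quad \text{(perimeter law)}.
```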
On the basis of the commutation relation between the 't Hooft and Wilson loops, four phases can be identified for gauge theories that additionally contain scalars in representations invariant under the center symmetry. The four phases are
Confinement: Wilson loops follow the area law while 't Hooft loops follow the perimeter law.
Higgs phase: Wilson loops follow the perimeter law while 't Hooft loops follow the area law.
Confinement together with a partially Higgsed phase: both follow the area law.
Mixed phase: both follow the perimeter law.
In the third phase the gauge group is only partially broken down to a smaller non-abelian subgroup. The mixed phase requires the gauge bosons to be massless particles and does not occur for theories, but is similar to the Coulomb phase for abelian gauge theory.
Since 't Hooft operators are creation operators for center vortices, they play an important role in the center vortex scenario for confinement. In this model, it is these vortices that lead to the area law of the Wilson loop through the random fluctuations in the number of topologically linked vortices.
Charge constraints
In the presence of both 't Hooft lines and Wilson lines, a theory requires consistency conditions similar to the Dirac quantization condition which arises when both electric and magnetic monopoles are present. For a gauge group where is the universal covering group with a Lie algebra and is a subgroup of the center, then the set of allowed Wilson lines is in one-to-one correspondence with the representations of . This can be formulated more precisely by introducing the weights of the Lie algebra, which span the weight lattice . Denoting as the lattice spanned by the weights associated with the algebra of rather than , the Wilson lines are in one-to-one correspondence with the lattice points lattice where is the Weyl group.
The Lie algebra valued charge of the 't Hooft line can always be written in terms of the rank Cartan subalgebra as , where is an -dimensional charge vector. Due to Wilson lines, the 't Hooft charge must satisfy the generalized Dirac quantization condition , which must hold for all representations of the Lie algebra.
The generalized quantization condition is equivalent to the demand that holds for all weight vectors. To get the set of vectors that satisfy this condition, one must consider roots which are adjoint representation weight vectors. Co-roots, defined using roots by , span the co-root lattice . These vectors have the useful property that meaning that the only magnetic charges allowed for the 't Hooft lines are ones that are in the co-root lattice
This is sometimes written in terms of the Langlands dual algebra of with a weight lattice , in which case the 't Hooft lines are described by .
More general classes of dyonic line operators, with both electric and magnetic charges, can also be constructed. Sometimes called Wilson–'t Hooft line operators, they are defined by pairs of charges up to the identification that for all it holds that
Line operators play a role in indicating differences in gauge theories of the form that differ by the center subgroup . Unless they are compactified, these theories do not differ in local physics and no amount of local experiments can deduce the exact gauge group of the theory. Despite this, the theories do differ in their global properties, such as having different sets of allowed line operators. For example, in gauge theories, Wilson loops are labelled by while 't Hooft lines by . However in the lattices are reversed where now the Wilson lines are determined by while the 't Hooft lines are determined by .
See also
Polyakov loop
References
Gauge theories
Phase transitions
Magnetic monopoles | 't Hooft loop | [
"Physics",
"Chemistry",
"Astronomy"
] | 1,366 | [
"Physical phenomena",
"Phase transitions",
"Astronomical hypotheses",
"Phases of matter",
"Critical phenomena",
"Unsolved problems in physics",
"Magnetic monopoles",
"Statistical mechanics",
"Matter"
] |
3,386,142 | https://en.wikipedia.org/wiki/Order%20operator | In quantum field theory, an order operator or an order field is a quantum field version of Landau's order parameter whose expectation value characterizes phase transitions. There exists a dual version of it, the disorder operator or disorder field, whose expectation value characterizes a phase transition by indicating the prolific presence of defect or vortex lines in an ordered phase.
The disorder operator is an operator that creates a discontinuity of the ordinary order operators or a monodromy for their values. For example, a 't Hooft operator is a disorder operator. So is the Jordan–Wigner transformation. The concept of a disorder observable was first introduced in the context of 2D Ising spin lattices, where a phase transition between spin-aligned (magnetized) and disordered phases happens at some temperature.
See also
Operator (physics)
Books
Kleinert, Hagen, Gauge Fields in Condensed Matter, Vol. I, " SUPERFLOW AND VORTEX LINES", pp. 1–742, Vol. II, "STRESSES AND DEFECTS", pp. 743–1456, World Scientific (Singapore, 1989); Paperback (also available online: Vol. I and Vol. II)
References
Quantum field theory
Statistical mechanics
Phase transitions | Order operator | [
"Physics",
"Chemistry"
] | 249 | [
"Quantum field theory",
"Physical phenomena",
"Phase transitions",
"Matter",
"Phases of matter",
"Quantum mechanics",
"Critical phenomena",
"Statistical mechanics",
"Quantum physics stubs"
] |
3,386,815 | https://en.wikipedia.org/wiki/Canadian%20Clay%20and%20Glass%20Gallery | The Canadian Clay and Glass Gallery (CCGG) is a public art gallery located in Waterloo, Ontario, Canada. It is the only Canadian art gallery exclusively dedicated to exhibiting and collecting contemporary Canadian ceramic, glass, enamel and stained glass works of art. It has approximately 20,000 annual visitors.
The Canadian Guild of Potters — which would later become known as Ceramists Canada — was formed in 1936. Concurrent with the founding of the guild came the first thought of establishing a national ceramics gallery. It wasn't until 45 years later that the idea gained traction. Construction of the gallery began in 1991 and the Canadian Clay & Glass Gallery opened in June 1993. The building was designed by Patkau Architects of Vancouver, who received a Medal of Excellence for the design under the Governor General's Awards for Architecture program in June 1997.
The gallery has more than nine hundred items in the permanent collection and five exhibiting galleries.
Building history
Construction
At a Ceramists Canada meeting in 1981 came a renewed call to find a permanent home for the country's finest ceramic pieces. This was prompted by a desire to honour the memory of the recently deceased Canadian potter Ruth Gowdy McKinley. The Glass Art Association of Canada and Artists in Stained Glass later became enthusiastic collaborators as the gallery concept evolved to also incorporate glass, stained glass, and enamel works.
Calgary, Halifax, North York, Victoria, and Waterloo were investigated and evaluated as potential sites for the national gallery. During the Ceramists Canada 1982 Annual General Meeting, Waterloo emerged as the clear choice on the strength of its commitment to provide a prime uptown location.
Over the next nine years, a core team of volunteers guided the planning and conducted the fundraising essential for a project of such magnitude. A 1984 feasibility study, funded by the federal and provincial governments, validated the need for a national gallery dedicated to the ceramic arts.
In October 1986, a national architectural competition was held, inviting eight firms from across Canada for submissions. Vancouver architects John and Patricia Patkau finished first and subsequently transformed the property at the corner of Caroline and Erb Streets from an aged hockey arena into a cultural edifice.
1991 witnessed the start of construction and the Gallery officially opened its doors on June 19, 1993.
Operations
The Canadian Clay & Glass Gallery was incorporated as a not-for-profit public gallery in 1982 and is registered as a charitable organization. The Gallery's building in Uptown Waterloo is owned, managed, and maintained by the City of Waterloo. Under the terms of a 1994 lease agreement, the Gallery will remain the building's primary tenant for a 50-year term.
Funding
Currently The Clay & Glass receives annual and multi-year funding from the three levels of Canadian government –– municipal, provincial and federal. This operating support is distributed by the Canada Council for the Arts, the Ontario Arts Council, Ontario Trillium Foundation, Museum Assistance Program and the City of Waterloo. The current operating budget is derived from three main revenue sources: government grants, community support and earned revenue.
Administrative
The Clay & Glass operates with an independent volunteer board of directors who provide oversight to the full-time Executive Director and staff. The Gallery is run by six departments, each headed by a full or part-time staff member. These departments include: Curatorial, Development, Marketing, Collections Management, Programming, and Gallery Shop.
Exhibitions
The gallery opened in 1993 with the purpose of exhibiting glass, ceramic, or enamel works. The Canadian Clay & Glass Gallery's exhibition mandate is to show contemporary work in these media. Since opening, The Clay & Glass has mounted over 178 exhibitions that have included works by artists from 25 countries. As such, the types of work exhibited have expanded greatly to include installation, video, performance, ephemeral, and environmental works. The gallery has 6088 square feet of exhibition space throughout five galleries: the Keith & Winifred Shantz Gallery, Donald & Pamela Bierstock Circular Gallery, The Mutual Group Tower Gallery, Dr. Douglas Wright Education Gallery, and the John A. Pollock Family Courtyard Gallery.
Permanent collections
One of the Canadian Clay & Glass Gallery's key objectives is the development, management and conservation of its Permanent Collection to the highest art museum standards for the benefit and enjoyment of present and future audiences. The collection mandate is to acquire ceramic, glass, enamel and stained glass. The primary permanent collection currently comprises over 900 works. In keeping with a mandate of being an institution whose focus is on contemporary practice, the Gallery's art collection features works of art from the mid-20th century to the present.
The collection began with a donation of about 200 pieces. This gift provided the foundation and first major grouping around which the permanent collection would be built. Indusmin, a mining company providing raw silica products, was a corporate supporter and collector of silica based artworks for many years. Eventually the company was bought out and merged with another company called Unimin Canada Ltd., who acquired the former's vast art collection. Unimin Canada Ltd. wished to support the emerging national gallery and donated the Indusmin collection to the Gallery in 1991.
As the gallery developed and expanded, curatorial interest changed along with staff. The collecting policy gradually shifted to focusing on contemporary practice. One aspect of adding to the collection, which is taking precedence, is the acquisition of work that has been part of the Gallery's exhibitions.
Cultural Property
On June 30, 2008, the Gallery was granted Category 'A' status within the Cultural Properties Export and Import Act. The Minister of Canadian Heritage provides this designation to "institutions that have the capacity to ensure their long-term preservation and to make them accessible to the public through research, exhibitions, publications and websites." In effect, the Government of Canada entrusts the Gallery to house and steward artworks deemed to be national treasures, on behalf of the people of Canada.
The Canadian Clay and Glass Gallery is honoured to be an institutional custodian of works of cultural importance on behalf of Canadians everywhere.
Study Collection
The Gallery has an extensive education and scholarly Study Collection. This collection comprises primarily industrial or functional objects, including paperweights, bottles, technical objects, vessels, commercial pressed glass and ceramic moulds. These objects are available for academics, researchers and other artists to touch, feel and inspect. This collection is also the foundation upon which an extensive educational curriculum is based. The gallery uses these materials primarily as a "study collection".
Archives
The Ann Roberts Archival Centre holds an extensive and growing number of archives. Currently there are over 8 fonds from individual artists, galleries, companies, scholars and organizations. They consist of unique primary materials such as personal papers, notebooks, sketchbooks, scrapbooks, drawings, blueprints, ephemera, photographs, slides, and transparencies. The archival collections are available for use by researchers.
Library
The Sinclair Family Research Library of the Canadian Clay & Glass Gallery is Canada's foremost library on the art and history of glass and ceramics. The library has an extensive holding of Exhibition catalogues, both from the venue itself and outside, trade publications related to the fields of glass and ceramics, works on paper, slides of the collection and early gallery, as well as educational movies.
The library is open to Academics, persons interested in the field, practising artists as well as the public by appointment.
Public programming
The educational offerings have evolved to include regular ongoing school and after-school programs, spring and summer art camps, teen- and family-focused sessions, youth public art projects, and adult workshops for both beginners and experienced artists. Most of the recreational educational offerings are intended to help individuals develop an arts vocabulary and gain enough knowledge to interpret and appreciate contemporary glass, ceramic, and enamel artworks.
Over 4,000 children participate in curriculum-based school programs. General public programming includes the very successful Play with Clay program, and public lectures. The four completed Youth Public Art Projects gave local high school students the opportunity to collaborate on a permanent, large-scale artwork that deals with current art issues and that has a visible presence in their community.
See also
List of art museums
List of museums in Ontario
References
Event venues established in 1993
Art museums and galleries in Ontario
Glass museums and galleries
Contemporary art galleries in Canada
Museums in Waterloo, Ontario
1993 establishments in Ontario
Art museums and galleries established in 1993 | Canadian Clay and Glass Gallery | [
"Materials_science",
"Engineering"
] | 1,679 | [
"Glass engineering and science",
"Glass museums and galleries"
] |
3,387,044 | https://en.wikipedia.org/wiki/Bamberger%20rearrangement | The Bamberger rearrangement is the chemical reaction of phenylhydroxylamines with strong aqueous acid, which will rearrange to give 4-aminophenols. It is named for the German chemist Eugen Bamberger (1857–1932).
The starting phenylhydroxylamines are typically synthesized by the transfer hydrogenation of nitrobenzenes using rhodium or zinc catalysts.
One application is in the synthesis of .
Reaction mechanism
The mechanism of the Bamberger rearrangement proceeds from the monoprotonation of N-phenylhydroxylamine 1. N-protonation 2 is favored, but unproductive. O-protonation 3 can form the nitrenium ion 4, which can react with nucleophiles (H2O) to form the desired 4-aminophenol 5.
See also
Friedel–Crafts alkylation-like reactions:
Hofmann–Martius rearrangement
Fries rearrangement
Fischer–Hepp rearrangement
Wallach rearrangement
Bamberger triazine synthesis — same inventor
References
Rearrangement reactions
Name reactions | Bamberger rearrangement | [
"Chemistry"
] | 237 | [
"Name reactions",
"Rearrangement reactions",
"Organic reactions"
] |
3,388,231 | https://en.wikipedia.org/wiki/%C3%89cole%20Nationale%20Sup%C3%A9rieure%20d%27%C3%89lectrochimie%20et%20d%27%C3%89lectrom%C3%A9tallurgie%20de%20Grenoble | The École Nationale Supérieure d'Électrochimie et d'Électrométallurgie de Grenoble, or ENSEEG, was one of the French Grandes écoles of engineering (engineering schools). It has been created in 1921 under the name Institut d’électrochimie et d’électrométallurgie (IEE) (Institute of Electrochemistry and Electrometallurgy). The name ENSEEG has been chosen in 1948 and ENSEEG has been part of Grenoble Institute of Technology (INPG or GIT) since its creation in 1971. Therefore, the name INPG-ENSEEG has also been commonly used.
ENSEEG delivered a multidisciplinary education in physical chemistry. The ENSEEG engineers are especially competent in materials science, process engineering and electrochemistry. From September 2008, ENSEEG merged with two other Grandes écoles to create Phelma.
External links
ENSEEG Website
ENSEEG Student Website
ENSEEG Student Firm
Electrochimie et d'Électrométallurgie de Grenoble
Electrochemical engineering
Metallurgical organizations
Universities and colleges established in 1921
Educational institutions disestablished in 2008
1921 establishments in France
2008 disestablishments in France | École Nationale Supérieure d'Électrochimie et d'Électrométallurgie de Grenoble | [
"Chemistry",
"Materials_science",
"Engineering"
] | 261 | [
"Metallurgy",
"Chemical engineering",
"Electrochemistry stubs",
"Electrochemical engineering",
"Electrochemistry",
"Metallurgical organizations",
"Electrical engineering",
"Physical chemistry stubs"
] |
3,389,593 | https://en.wikipedia.org/wiki/Current%20divider | In electronics, a current divider is a simple linear circuit that produces an output current (IX) that is a fraction of its input current (IT). Current division refers to the splitting of current between the branches of the divider. The currents in the various branches of such a circuit will always divide in such a way as to minimize the total energy expended.
The formula describing a current divider is similar in form to that for the voltage divider. However, the ratio describing current division places the impedance of the considered branches in the denominator, unlike voltage division, where the considered impedance is in the numerator. This is because in current dividers, total energy expended is minimized, resulting in currents that go through paths of least impedance, hence the inverse relationship with impedance. Comparatively, the voltage divider is used to satisfy Kirchhoff's voltage law (KVL): the voltages around a loop must sum to zero, so the voltage drops divide among the series impedances in direct proportion to each impedance.
To be specific, if two or more impedances are in parallel, the current that enters the combination will be split between them in inverse proportion to their impedances (according to Ohm's law). It also follows that if the impedances have the same value, the current is split equally.
Current divider
A general formula for the current IX in a resistor RX that is in parallel with a combination of other resistors of total resistance RT (see Figure 1) is
IX = IT · RT / (RX + RT),
where IT is the total current entering the combined network of RX in parallel with RT. Notice that when RT is composed of a parallel combination of resistors, say R1, R2, ... etc., then the reciprocal of each resistor must be added to find the reciprocal of the total resistance RT:
1/RT = 1/R1 + 1/R2 + 1/R3 + ...
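A short numerical check of the divider rule, with arbitrary example values: three resistors in parallel share a 1 A input current, and each branch current is computed both from the divider formula and directly from the common branch voltage.

```python
# Current divider check for resistors in parallel (example values in ohms).
resistors = [10.0, 20.0, 40.0]
i_total = 1.0  # total input current in amperes (assumed)

# Equivalent resistance of the whole parallel combination.
r_parallel = 1.0 / sum(1.0 / r for r in resistors)

for i, r_x in enumerate(resistors):
    # R_T is the parallel combination of all the *other* branches.
    r_t = 1.0 / sum(1.0 / r for j, r in enumerate(resistors) if j != i)
    i_divider = i_total * r_t / (r_x + r_t)      # divider formula
    i_direct = (i_total * r_parallel) / r_x      # V = I_T * R_parallel, I = V / R_X
    print(f"R = {r_x:5.1f} ohm: {i_divider:.4f} A (divider) "
          f"vs {i_direct:.4f} A (direct)")
```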
General case
Although the resistive divider is most common, the current divider may be made of frequency-dependent impedances. In the general case:
ZT = 1 / (1/Z1 + 1/Z2 + ... + 1/Zn),
and the current IX is given by
IX = IT · ZT / ZX,
where ZT refers to the equivalent impedance of the entire circuit.
Using admittance
Instead of using impedances, the current divider rule can be applied just like the voltage divider rule if admittance (the inverse of impedance) is used:
IX = IT · YX / YTotal
Take care to note that YT is a straightforward addition, not the sum of the inverses inverted (as would be done for a standard parallel resistive network). For Figure 1, the current IX would be
Example: RC combination
Figure 2 shows a simple current divider made up of a capacitor and a resistor. Using the current divider formula, the current in the resistor is IR = IT · ZC / (ZC + R) = IT / (1 + jωCR),
where ZC = 1/(jωC) is the impedance of the capacitor, and j is the imaginary unit.
The product τ = CR is known as the time constant of the circuit, and the frequency for which ωCR = 1 is called the corner frequency of the circuit. Because the capacitor has zero impedance at high frequencies and infinite impedance at low frequencies, the current in the resistor remains at its DC value IT for frequencies up to the corner frequency, whereupon it drops toward zero for higher frequencies as the capacitor effectively short-circuits the resistor. In other words, the current divider is a low-pass filter for current in the resistor.
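The low-pass behaviour can be seen numerically. The sketch below uses assumed values R = 1 kΩ and C = 1 µF (so the corner frequency 1/(2πRC) is about 159 Hz) and evaluates the fraction of the input current flowing in the resistor.

```python
import math

R = 1_000.0    # ohms (assumed)
C = 1e-6       # farads (assumed)

def resistor_current_fraction(frequency_hz):
    """|I_R / I_T| for a parallel RC current divider at the given frequency."""
    omega = 2 * math.pi * frequency_hz
    # I_R / I_T = Z_C / (Z_C + R) = 1 / (1 + j*omega*C*R)
    return 1.0 / math.hypot(1.0, omega * C * R)

corner = 1.0 / (2 * math.pi * R * C)
print(f"corner frequency ~ {corner:.0f} Hz")
for f in (1, 10, 100, 1_000, 10_000):
    print(f"{f:>6} Hz: |I_R/I_T| = {resistor_current_fraction(f):.3f}")
# Below the corner frequency nearly all the current flows in the resistor;
# above it the capacitor progressively short-circuits the resistor.
```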
Loading effect
The gain of an amplifier generally depends on its source and load terminations. Current amplifiers and transconductance amplifiers are characterized by a short-circuit output condition, and current amplifiers and transresistance amplifiers are characterized using ideal infinite-impedance current sources. When an amplifier is terminated by a finite, non-zero termination, and/or driven by a non-ideal source, the effective gain is reduced due to the loading effect at the output and/or the input, which can be understood in terms of current division.
Figure 3 shows a current amplifier example. The amplifier (gray box) has input resistance Rin, output resistance Rout and an ideal current gain Ai. With an ideal current driver (infinite Norton resistance) all the source current iS becomes input current to the amplifier. However, for a Norton driver a current divider is formed at the input that reduces the input current to
which clearly is less than iS. Likewise, for a short circuit at the output, the amplifier delivers an output current iout = Ainii to the short circuit. However, when the load is a non-zero resistor RL, the current delivered to the load is reduced by current division to the value
Combining these results, the ideal current gain Ai realized with an ideal driver and a short-circuit load is reduced to the loaded gain Aloaded:
The resistor ratios in the above expression are called the loading factors. For more discussion of loading in other amplifier types, see .
Unilateral versus bilateral amplifiers
Figure 3 and the associated discussion refers to a unilateral amplifier. In a more general case where the amplifier is represented by a two-port network, the input resistance of the amplifier depends on its load, and the output resistance on the source impedance. The loading factors in these cases must employ the true amplifier impedances including these bilateral effects. For example, taking the unilateral current amplifier of Figure 3, the corresponding bilateral two-port network is shown in Figure 4 based upon h-parameters. Carrying out the analysis for this circuit, the current gain with feedback Afb is found to be
That is, the ideal current gain Ai is reduced not only by the loading factors, but due to the bilateral nature of the two-port by an additional factor , which is typical for negative-feedback amplifier circuits. The factor β(RL/RS) is the current feedback provided by the voltage feedback source of voltage gain β V/V. For instance, for an ideal current source with RS = ∞ Ω, the voltage feedback has no influence, and for RL = 0 Ω, there is zero load voltage, again disabling the feedback.
References and notes
See also
Voltage divider
Resistor
Ohm's law
Thévenin's theorem
Voltage regulation
External links
Divider Circuits and Kirchhoff's Laws chapter from Lessons In Electric Circuits Vol 1 DC free ebook and Lessons In Electric Circuits series.
University of Texas: Notes on electronic circuit theory
Analog circuits
Electric current | Current divider | [
"Physics",
"Engineering"
] | 1,308 | [
"Physical quantities",
"Analog circuits",
"Electronic engineering",
"Electric current",
"Wikipedia categories named after physical quantities"
] |
3,390,039 | https://en.wikipedia.org/wiki/%28Bis%28trifluoroacetoxy%29iodo%29benzene | (Bis(trifluoroacetoxy)iodo)benzene, , is a hypervalent iodine compound used as a reagent in organic chemistry. It can be used to carry out the Hofmann rearrangement under acidic conditions.
Preparation
The syntheses of all aryl hypervalent iodine compounds start from iodobenzene. The compound can be prepared by reaction of iodobenzene with a mixture of trifluoroperacetic acid and trifluoroacetic acid in a method analogous to the synthesis of
It can also be prepared by dissolving diacetoxyiodobenzene (a commercially-available compound) with heating in trifluoroacetic acid:
Uses
It also brings about the conversion of a hydrazone to a diazo compound, for example in the diazo-thioketone coupling. It also converts thioacetals to their parent carbonyl compounds.
Hofmann rearrangement
The Hofmann rearrangement is a decarbonylation reaction whereby an amide is converted to an amine by way of an isocyanate intermediate. It is usually carried out under strongly basic conditions.
The reaction can also be carried out under mildly acidic conditions by way of the same intermediate using a hypervalent iodine compound in aqueous solution. An example published in Organic Syntheses is the conversion of cyclobutanecarboxamide, easily synthesized from cyclobutylcarboxylic acid, to cyclobutylamine. The primary amine is initially present as its trifluoroacetate salt, which can be converted to the hydrochloride salt to facilitate product purification.
References
Iodanes
Reagents for organic chemistry
Phenyl compounds
Trifluoroacetates | (Bis(trifluoroacetoxy)iodo)benzene | [
"Chemistry"
] | 376 | [
"Iodanes",
"Oxidizing agents",
"Reagents for organic chemistry"
] |
11,647,120 | https://en.wikipedia.org/wiki/Key%20risk%20indicator | A key risk indicator (KRI) is a measure used in management to indicate how risky an activity is. Key risk indicators are metrics used by organizations to provide an early signal of increasing risk exposures in various areas of the enterprise. It differs from a key performance indicator (KPI) in that the latter is meant as a measure of how well something is being done while the former is an indicator of the possibility of future adverse impact. KRI give an early warning to identify potential events that may harm continuity of the activity/project.
KRIs are a mainstay of operational risk analysis.
Definitions
According to OECD
A risk indicator is an indicator that estimates the potential for some form of resource degradation using mathematical formulas or models.
Risk management
Security risk management
According to the Risk IT framework by ISACA, key risk indicators are metrics capable of showing that the organization is subject, or has a high probability of being subject, to a risk that exceeds the defined risk appetite.
Organizations have different sizes and environment. So every enterprise should choose its own KRI, taking into account the following steps:
Consider the different stakeholders of the organization
Make a balanced selection of risk indicators, covering performance indicators, lead indicators and trends
Ensure that the selected indicators drill down to the root cause of the events
Choose indicators that are highly relevant and have a high probability of predicting important risks:
High business impact
Easy to measure
With high correlation with the risk
Sensitivity
Determine thresholds and triggers for the set of KRI's
Locate and fold in data sources that contribute or feed data into KRI triggers
Determine notification methods, recipients, and action or response sequences
The constant measure of KRI can bring the following benefits to the organization:
Provide an early warning: a proactive action can take place
Provide a backward looking view on risk events, so lesson can be learned by the past
Provide an indication that the risk appetite and tolerance are reached
Provide real time actionable intelligence to decision makers and risk managers
Advances in hosted cloud data storage, data federation, and data aggregation have enabled data supply chains for real time calculation of key risk indicators across heretofore unlinked or disconnected data sources. Risk level dashboards can be supplemented with real time push notifications of risk. Systems methods and tools addressing triggering of notifications when targets are attained for key risk indicators have been evolving. Calculating and enabling notifications of key risk indicators used to be a unique benefit of enterprise software packages. With the evolution of API's to calculate trigger values for key risk indicators across various data sources, the potential for risk managers to include data external to an enterprise or external to an enterprise database has changed the risk management landscape.
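As a purely illustrative sketch of the threshold-and-trigger idea, the snippet below defines a few hypothetical indicators with green/amber/red thresholds and prints a notification when an observed value crosses them. The indicator names, thresholds, and observed values are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class KeyRiskIndicator:
    name: str
    amber_threshold: float   # early-warning level
    red_threshold: float     # risk appetite exceeded
    higher_is_worse: bool = True

    def status(self, observed: float) -> str:
        # Flip the sign so that "larger value" always means "more risk".
        value = observed if self.higher_is_worse else -observed
        amber = self.amber_threshold if self.higher_is_worse else -self.amber_threshold
        red = self.red_threshold if self.higher_is_worse else -self.red_threshold
        if value >= red:
            return "RED"
        if value >= amber:
            return "AMBER"
        return "GREEN"

# Hypothetical indicators and observations (illustrative values only).
indicators = [
    (KeyRiskIndicator("Staff turnover rate (%)", amber_threshold=10, red_threshold=15), 12.5),
    (KeyRiskIndicator("Systems downtime (hours/month)", amber_threshold=4, red_threshold=8), 9.0),
    (KeyRiskIndicator("Customer satisfaction score", amber_threshold=7.5, red_threshold=6.5,
                      higher_is_worse=False), 8.2),
]

for kri, observed in indicators:
    level = kri.status(observed)
    print(f"{kri.name}: observed {observed} -> {level}")
    if level != "GREEN":
        print(f"  notify risk owner: {kri.name} has breached its {level.lower()} threshold")
```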
Qualities of good key risk indicators
Some qualities of a good key risk indicator include:
Ability to measure the right thing (e.g., supports the decisions that need to be made)
Quantifiable (e.g., damages in dollars of profit loss)
Capability to be measured precisely and accurately
Ability to be validated against ground truth, and confidence level one has in the assertions made within the framework of the metric
Comparability Over Time and Business Units
Assessment of Risk Owners’ Performance
See also
Committee of Sponsoring Organizations of the Treadway Commission
Enterprise risk management
ISO 31000
References
Metrics
Operational risk | Key risk indicator | [
"Mathematics"
] | 650 | [
"Quantity",
"Metrics"
] |
11,647,284 | https://en.wikipedia.org/wiki/John%20Iliopoulos | John (Jean) Iliopoulos (Greek: Ιωάννης Ηλιόπουλος; 1940) is a Greek physicist. He is the first person to present the Standard Model of particle physics in a single report. He is best known for his prediction of the charm quark with Sheldon Glashow and Luciano Maiani (the "GIM mechanism"). Iliopoulos is also known for demonstrating the cancellation of anomalies in the Standard model. He is further known for the Fayet–Iliopoulos D-term formula, which was introduced in 1974. He is currently an honorary member of Laboratory of theoretical physics of École normale supérieure, Paris.
Biography
Iliopoulos graduated from National Technical University of Athens (NTUA) in 1962 as a Mechanical-Electrical Engineer. He continued his studies in the field of Theoretical Physics in University of Paris, and in 1963 he obtained the D.E.A, in 1965 the Doctorat 3e Cycle, and in 1968 the Doctorat d' Etat titles. Between the years 1966 and 1968 he was a scholar at CERN, Geneva. From 1969 till 1971 he was a Research Associate in Harvard University. In 1971 he returned in Paris and began working at CNRS. He also held the director position of the Laboratory of Theoretical Physics of the École normale supérieure between the years 1991-1995 and 1998-2002. In 2002, Iliopoulos was the first recipient of the Aristeio prize, which has been instituted to recognize Greeks who have made significant contributions towards furthering their chosen fields of science. Iliopoulos and Maiani were jointly awarded the 1987 Sakurai Prize for theoretical particle physics. In 2007 Iliopoulos and Maiani received the Dirac Medal of the ICTP "(f)or their work on the physics of the charm quark, a major contribution to the birth of the Standard Model, the modern theory of Elementary Particles." And in 2011, Glashow, Iliopoulos, and Maiani received the High Energy and Particle Physics Prize, awarded by the European Physical Society (EPS), "(f)or their crucial contribution to the theory of flavour, presently embedded in the Standard Theory of strong and electroweak interactions."
Scientific work
Iliopoulos is a specialist in high energy theoretical physics and elementary particle physics. In 1970, in collaboration with Sheldon Glashow and Luciano Maïani, he introduced the so-called "GIM mechanism" (named after the three authors) which is an essential element of the theory of fundamental interactions known as the "Standard Model ". This mechanism postulates the existence of a new elementary particle, the "charmed" quark, a prediction that was confirmed by experience. In 1972, in collaboration with Claude Bouchiat and Philippe Meyer, he demonstrated that the mathematical coherence of the Standard Model requires symmetry between the elementary constituents of matter, namely quarks (which form hadrons such as proton and neutron) and leptons (such as electron, muon and neutrinos). This symmetry is also verified experimentally.
Iliopoulos was one of the pioneers of supersymmetry, the hypothetical symmetry that links fermions and bosons. He showed that it has remarkable convergence properties and, in collaboration with Pierre Fayet, he proposed a mechanism that leads to its spontaneous breakage. He also studied some aspects of the quantum theory of gravitation as well as the mathematical properties of invariant gauge theories formulated in a non-commutative geometric space.
Most significant publications
J. Iliopoulos, Aux origines de la masse, EDP Sciences (2015)
J. Iliopoulos, The Origin of Mass, Oxford University Press (2017)
L. Baulieu, J. Iliopoulos, R. Sénéor, From Classical to Quantum Fields, Oxford University Press (2017)
Theodore N. Tomaras, John Iliopoulos, Elementary Particle Physics - The Standard Theory, Oxford University Press (2021)
Awards
1978 Paul Langevin Prize of the French Physical Society
1980 Corresponding Member, Academy of Athens, Greece
1984 Jean Ricard Prize of the French Physical Society
1987 Sakurai Prize of the American Physical Society
1990 / 2002 Corresponding / Full Member of the French Academy of Sciences
1996 Doctor honoris causa, Université de la Méditerranée, Aix-Marseille, France
1999 Doctor honoris causa, University of Crete, Greece
2002 Doctor honoris causa, University of Ioannina, Greece
2002 Doctor honoris causa, University of Athens, Greece
2002 Bodossaki Prize
2005 Matteucci Medal, Accademia Nazionale delle Scienze, detta dei XL
2007 Dirac Medal, Abdus Salam International Centre for Theoretical Physics, Trieste, Italy
2011 High Energy Physics Prize, European Physical Society
2013 Three Physicists Prize, Ecole Normale Supérieure, France
2017 Doctor honoris causa, National Technical University of Athens, Greece
2023 Antonio Feltrinelli Prize, Accademia dei Lincei, Italy
See also
GIM mechanism
References
1940 births
Living people
20th-century Greek physicists
Theoretical physicists
University of Paris alumni
National Technical University of Athens alumni
Academic staff of the École Normale Supérieure
Harvard University faculty
People associated with CERN
Members of the French Academy of Sciences
Corresponding Members of the Academy of Athens (modern)
J. J. Sakurai Prize for Theoretical Particle Physics recipients
Recipients of the Matteucci Medal
Scientists from Kalamata
Greek emigrants to France | John Iliopoulos | [
"Physics"
] | 1,119 | [
"Theoretical physics",
"Theoretical physicists"
] |
11,653,740 | https://en.wikipedia.org/wiki/AFGROW | AFGROW (Air Force Grow) is a Damage Tolerance Analysis (DTA) computer program that calculates crack initiation, fatigue crack growth, and fracture to predict the life of metallic structures. Originally developed by the Air Force Research Laboratory, AFGROW is mainly used for aerospace applications, but can be applied to any type of metallic structure that experiences fatigue cracking.
History
AFGROW's history traces back to a crack growth life prediction program (ASDGRO) which was written in BASIC for IBM-PCs by E. Davidson at ASD/ENSF in the early-mid-1980s. In 1985, ASDGRO was used as the basis for crack growth analysis for the Sikorsky H-53 helicopter under contract to Warner-Robins ALC. The program was modified to utilize very large load spectra, approximate stress intensity solutions for cracks in arbitrary stress fields, and use a tabular crack growth rate relationship based on the Walker equation on a point-by-point basis (Harter T-Method). The point loaded crack solution from the Tada, Paris, and Irwin Stress Intensity Factor Handbook
was originally used to determine K (for arbitrary stress fields) by integration over the crack length using the unflawed stress distribution independently for each crack dimension. A new method was developed by F. Grimsley (AFWAL/FIBEC) to determine stress intensity, which used a 2-D Gaussian integration scheme with Richardson Extrapolation which was optimized by G. Sendeckyj (AFWAL/FIBEC). The resulting program was named MODGRO since it was a modified version of ASDGRO.
Many modifications were made during the late 1980s and early 1990s. The primary modification was changing the coding language from BASIC to Turbo Pascal and C. Numerous small changes/repairs were made based on errors that were discovered. During this time period, NASA/Dryden implemented MODGRO in the analysis for the flight test program for the X-29.
In 1993, the Navy was interested in using MODGRO to assist in a program to assess the effect of certain (classified) environments on the damage tolerance of aircraft. Work began at that time to convert the MODGRO, Version 3.X to the C language for UNIX to provide performance and portability to several UNIX Workstations. In 1994, MODGRO was renamed AFGROW, Version 3.X.
Since 1996, the Windows-based version of AFGROW has replaced the UNIX version since the demand for the UNIX version did not justify the cost to maintain it. There was also an experiment to port AFGROW to the Mac OS but there was a lack of demand. An automated capability was added in the form of a Microsoft Component Object Model (COM) interface.
The program is now developed and maintained by LexTech, Inc.
Software architecture
The stress intensity factor library provides models for over 30 different crack geometries (including tension, bending and bearing loading for many cases). In addition, a multiple crack capability allows the analysis of two independent cracks in a plate (including hole effects) and a non-symmetric cracked corner. Finite Element (FE) based solutions are available for two, non-symmetric through cracks at holes as well as cracks growing toward holes. This capability allows the analysis of more than one crack growing from a row of fastener holes.
AFGROW implements five different crack growth models (Forman equation, Walker equation, tabular lookup, Harter T-Method and NASGRO equation) to determine crack growth per applied cyclic loading. Other user options include five load interaction (retardation) models (closure, Fastran, Hsu, Wheeler, and Generalized Willenborg), a strain-life based fatigue crack initiation model, and the ability to perform a crack growth analysis with the effect of a bonded repair. The program also includes tools such as: stress intensity solutions, beta modification factors (the ability to estimate stress intensity factors for cases which may not be an exact match for one of the stress intensity solutions provided), a residual stress analysis capability, cycle counting, and the ability to automatically transfer output data to Microsoft Excel.
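The crack growth rate relationships listed above all express an increment of crack length per load cycle as a function of the stress intensity factor range. As a rough illustration of how such a relationship is integrated over a load history — this is not AFGROW code, and the Walker-form constants, stress levels and centre-crack geometry below are placeholder assumptions — a minimal sketch in Python:

```python
# Illustrative sketch only -- not AFGROW code. It integrates a Walker-form
# growth rate law, da/dN = C * (dK / (1 - R)**(1 - gamma))**m, for a centre
# crack in a wide plate, where K = S * sqrt(pi * a). The constants C, m,
# gamma, the stress levels and the geometry are placeholder assumptions.
import math

def cycles_to_grow(a0, a_crit, s_max, s_min, C=1e-11, m=3.0, gamma=0.5):
    """Constant-amplitude cycles to grow a crack from a0 to a_crit
    (crack lengths in metres, stresses in MPa)."""
    a, cycles = a0, 0
    R = s_min / s_max                                    # stress ratio
    while a < a_crit:
        dK = (s_max - s_min) * math.sqrt(math.pi * a)    # stress intensity factor range
        a += C * (dK / (1.0 - R) ** (1.0 - gamma)) ** m  # Walker equation, one cycle
        cycles += 1
        if cycles > 10_000_000:                          # guard against non-propagating cracks
            break
    return cycles

print(cycles_to_grow(a0=0.001, a_crit=0.02, s_max=120.0, s_min=12.0))
```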
AFGROW uses COM (Component Object Model) Automation interfaces that allow the use of scripts in other Windows applications.
The program has a plug-in crack geometry interface that interfaces with structural analysis programs capable of calculating stress intensity factors (K) in the Windows environment. Users may create their own stress intensity solutions by writing and compiling dynamic link libraries (DLLs) using relatively simple codes. This includes the ability to animate the crack growth. This interface also makes it possible for finite element analysis software to provide three-dimensional based stress intensity information throughout the crack life prediction process.
It is possible to select cases with two, independent cracks (with and without holes). A plug-in stress intensity model capability allows the creation of stress intensity solutions in the form of a Windows DLL (dynamic link library). Drawing tools allow solutions to be animated during the analysis. Interactive stress intensity solutions allow the use of an external FEM code to return updated stress intensity solutions.
References
External links
Homepage
Version Information
Mechanical engineering
Fracture mechanics
Structural analysis | AFGROW | [
"Physics",
"Materials_science",
"Engineering"
] | 1,058 | [
"Structural engineering",
"Applied and interdisciplinary physics",
"Fracture mechanics",
"Structural analysis",
"Materials science",
"Mechanical engineering",
"Aerospace engineering",
"Materials degradation"
] |
7,907,151 | https://en.wikipedia.org/wiki/Rellich%E2%80%93Kondrachov%20theorem | In mathematics, the Rellich–Kondrachov theorem is a compact embedding theorem concerning Sobolev spaces. It is named after the Austrian-German mathematician Franz Rellich and the Russian mathematician Vladimir Iosifovich Kondrashov. Rellich proved the L2 theorem and Kondrashov the Lp theorem.
Statement of the theorem
Let Ω ⊆ Rn be an open, bounded Lipschitz domain, and let 1 ≤ p < n. Set
p∗ := np/(n − p).
Then the Sobolev space W1,p(Ω; R) is continuously embedded in the Lp space Lp∗(Ω; R) and is compactly embedded in Lq(Ω; R) for every 1 ≤ q < p∗. In symbols,
W1,p(Ω; R) ↪ Lp∗(Ω; R)
and
W1,p(Ω; R) ⊂⊂ Lq(Ω; R) for 1 ≤ q < p∗.
Kondrachov embedding theorem
On a compact manifold M with boundary, the Kondrachov embedding theorem states that if k > ℓ and k − n/p > ℓ − n/q then the Sobolev embedding
Wk,p(M) ⊂ Wℓ,q(M)
is completely continuous (compact).
Consequences
Since an embedding is compact if and only if the inclusion (identity) operator is a compact operator, the Rellich–Kondrachov theorem implies that any uniformly bounded sequence in W1,p(Ω; R) has a subsequence that converges in Lq(Ω; R). Stated in this form, in the past the result was sometimes referred to as the Rellich–Kondrachov selection theorem, since one "selects" a convergent subsequence. (However, today the customary name is "compactness theorem", whereas "selection theorem" has a precise and quite different meaning, referring to set-valued functions.)
The Rellich–Kondrachov theorem may be used to prove the Poincaré inequality, which states that for u ∈ W1,p(Ω; R) (where Ω satisfies the same hypotheses as above),
‖u − uΩ‖Lp(Ω) ≤ C ‖∇u‖Lp(Ω)
for some constant C depending only on p and the geometry of the domain Ω, where
uΩ := (1/meas(Ω)) ∫Ω u(y) dy
denotes the mean value of u over Ω.
References
Literature
Kondrachov, V. I., On certain properties of functions in the space Lp. Dokl. Akad. Nauk SSSR 48, 563–566 (1945).
Leoni, Giovanni (2009). A First Course in Sobolev Spaces. Graduate Studies in Mathematics. 105. American Mathematical Society. pp. xvi+607. MR 2527916. Zbl 1180.46001
Theorems in analysis
Sobolev spaces | Rellich–Kondrachov theorem | [
"Mathematics"
] | 518 | [
"Mathematical analysis",
"Theorems in mathematical analysis",
"Mathematical theorems",
"Mathematical problems"
] |
7,909,383 | https://en.wikipedia.org/wiki/Programmable%20matter | Programmable matter is matter which has the ability to change its physical properties (shape, density, moduli, conductivity, optical properties, etc.) in a programmable fashion, based upon user input or autonomous sensing. Programmable matter is thus linked to the concept of a material which inherently has the ability to perform information processing.
History
Programmable matter is a term originally coined in 1991 by Toffoli and Margolus to refer to an ensemble of fine-grained computing elements arranged in space. Their paper describes a computing substrate that is composed of fine-grained compute nodes distributed throughout space which communicate using only nearest neighbor interactions. In this context, programmable matter refers to compute models similar to cellular automata and lattice gas automata. The CAM-8 architecture is an example hardware realization of this model. This function is also known as "digital referenced areas" (DRA) in some forms of self-replicating machine science.
In the early 1990s, there was a significant amount of work in reconfigurable modular robotics with a philosophy similar to programmable matter.
As semiconductor technology, nanotechnology, and self-replicating machine technology have advanced, the use of the term programmable matter has changed to reflect the fact that
it is possible to build an ensemble of elements which can be "programmed" to change their physical properties in reality, not just in simulation. Thus, programmable matter has come to mean "any bulk substance which can be programmed to change its physical properties."
In the summer of 1998, in a discussion on artificial atoms and programmable matter, Wil McCarthy and G. Snyder coined the term "quantum wellstone" (or simply "wellstone") to describe this hypothetical but plausible form of programmable matter. McCarthy has used the term in his fiction.
In 2002, Seth Goldstein and Todd Mowry started the claytronics project at Carnegie Mellon University to investigate the underlying hardware and software mechanisms necessary to realize programmable matter.
In 2004, the DARPA Information Science and Technology group (ISAT) examined the potential of programmable matter. This resulted in the 2005–2006 study "Realizing Programmable Matter", which laid out a multi-year program for the research and development of programmable matter.
In 2007, programmable matter was the subject of a DARPA research solicitation and subsequent program.
From 2016 to 2022, the ANR has funded several research programs coordinated by Julien Bourgeois and Benoit Piranda at the FEMTO-ST Institute, which is taking the lead in the Claytronics project initiated by Intel and Carnegie Mellon University.
Approaches
In one school of thought, the programming could be external to the material and might be achieved by the "application of light, voltage, electric or magnetic fields, etc.". For example, a liquid crystal display is a form of programmable matter. A second school of thought is that the individual units of the ensemble can compute and the result of their computation is a change in the ensemble's physical properties. An example of this more ambitious form of programmable matter is claytronics.
There are many proposed implementations of programmable matter. Scale is one key differentiator between different forms of programmable matter. At one end of the spectrum, reconfigurable modular robotics pursues a form of programmable matter where the individual units are in the centimeter size range.
At the nanoscale end of the spectrum, there are a tremendous number of different bases for programmable matter, ranging from shape changing molecules to quantum dots. Quantum dots are in fact often referred to as artificial atoms. In the micrometer to sub-millimeter range examples include MEMS-based units, cells created using synthetic biology, and the utility fog concept.
An important sub-group of programmable matter are robotic materials, which combine the structural aspects of a composite with the affordances offered by tight integration of sensors, actuators, computation, and communication, while foregoing reconfiguration by particle motion.
Examples
There are many conceptions of programmable matter, and thus many discrete avenues of research using the name. Below are some specific examples of programmable matter.
"Solid-liquid phase-change pumping"
Shape-changing and locomotion of solid objects are possible with solid-liquid phase change pumping. This approach allows deforming objects into any intended shape with sub-millimetre resolution and freely changing their topology.
"Simple"
These include materials that can change their properties based on some input, but do not have the ability to do complex computation by themselves.
Complex fluids
The physical properties of several complex fluids can be modified by applying a current or voltage, as is the case with liquid crystals.
Metamaterials
Metamaterials are artificial composites that can be controlled to react in ways that do not occur in nature. One example developed by David Smith and then by John Pendry and David Schurig is of a material that can have its index of refraction tuned so that it can have a different index of refraction at different points in the material. If tuned properly, this could result in an invisibility cloak.
A further example of a programmable mechanical metamaterial is presented by Bergamini et al. Here, a pass band within the phononic bandgap is introduced by exploiting the variable stiffness of piezoelectric elements linking aluminum stubs to the aluminum plate to create a phononic crystal, as in the work of Wu et al. The piezoelectric elements are shunted to ground over synthetic inductors. Around the resonance frequency of the LC circuit formed by the piezoelectric elements and the inductors, the piezoelectric elements exhibit near zero stiffness, thus effectively disconnecting the stubs from the plate. This is considered an example of a programmable mechanical metamaterial.
In 2021, Chen et al. demonstrated a mechanical metamaterial whose unit cells can each store a binary digit analogous to a bit inside a hard disk drive. Similarly, these mechanical unit cells are programmed through the interaction between two electromagnetic coils in the Maxwell configuration, and an embedded magnetorheological elastomer. Different binary states are associated with different stress-strain response of the material.
Shape-changing molecules
An active area of research is in molecules that can change their shape, as well as other properties, in response to external stimuli. These molecules can be used individually or en masse to form new kinds of materials. For example, J Fraser Stoddart's group at UCLA has been developing molecules that can change their electrical properties.
Electropermanent magnets
An electropermanent magnet is a type of magnet which consists of both an electromagnet and a dual material permanent magnet, in which the magnetic field produced by the electromagnet is used to change the magnetization of the permanent magnet. The permanent magnet consists of magnetically hard and soft materials, of which only the soft material can have its magnetization changed. When the magnetically soft and hard materials have opposite magnetizations the magnet has no net field, and when they are aligned the magnet displays magnetic behaviour.
They allow creating controllable permanent magnets where the magnetic effect can be maintained without requiring a continuous supply of electrical energy. For these reasons, electropermanent magnets are essential components of the research studies aiming to build programmable magnets that can give rise to self-building structures.
Robotics-based approaches
Self-reconfiguring modular robotics
Self-reconfiguring modular robotics (SRCMR) involves a group of basic robot modules working together to dynamically form shapes and create behaviours suitable for many tasks, similar to programmable matter. SRCMR aims to offer significant improvements to many kinds of objects or systems by introducing many new possibilities. For example: 1. Most important is the incredible flexibility that comes from the ability to change the physical structure and behavior of a solution by changing the software that controls the modules. 2. The ability to self-repair by automatically replacing a broken module makes SRCMR solutions incredibly resilient. 3. The environmental footprint is reduced by reusing the same modules in many different solutions. Self-reconfiguring modular robotics enjoys a vibrant and active research community.
Claytronics
Claytronics is an emerging field of engineering concerning reconfigurable nanoscale robots ('claytronic atoms', or catoms) designed to form much larger scale machines or mechanisms. The catoms will be sub-millimeter computers that will eventually have the ability to move around, communicate with other computers, change color, and electrostatically connect to other catoms to form different shapes.
Cellular automata
Cellular automata are a useful concept to abstract some of the concepts of discrete units interacting to give a desired overall behavior.
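As a toy illustration of that abstraction — purely a sketch, unrelated to any particular programmable-matter hardware — the following Python snippet runs a one-dimensional elementary cellular automaton (Wolfram rule 90), where each cell updates from only its nearest neighbours yet a global pattern emerges:

```python
# Toy illustration only: a one-dimensional elementary cellular automaton
# (Wolfram rule 90). Each cell updates from its two nearest neighbours,
# yet a global pattern emerges -- the same "local rules, global behaviour"
# abstraction referred to above.
def step(cells, rule=90):
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (centre << 1) | right   # neighbourhood as a 3-bit index
        new.append((rule >> idx) & 1)               # look up the new state in the rule byte
    return new

cells = [0] * 31
cells[15] = 1                                       # single seed cell
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```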
Quantum wells
Quantum wells can hold one or more electrons. Those electrons behave like artificial atoms which, like real atoms, can form covalent bonds, but these are extremely weak. Because of their larger sizes, other properties are also widely different.
Synthetic biology
Synthetic biology is a field that aims to engineer cells with "novel biological functions." Such cells are usually used to create larger systems (e.g., biofilms) which can be "programmed", utilizing synthetic gene networks such as genetic toggle switches, to change their color, shape, etc. Such bioinspired approaches to materials production have been demonstrated using self-assembling bacterial biofilm materials that can be programmed for specific functions, such as substrate adhesion, nanoparticle templating, and protein immobilization.
See also
Computronium
Nanotechnology
Self-assembly
Smart material
Smartdust
Ubiquitous computing
Universal Turing machine
Utility fog
References
Further reading
External links
Smart materials
Articles containing video clips
Synthetic biology
Robotics engineering | Programmable matter | [
"Materials_science",
"Technology",
"Engineering",
"Biology"
] | 1,996 | [
"Synthetic biology",
"Biological engineering",
"Computer engineering",
"Robotics engineering",
"Materials science",
"Bioinformatics",
"Molecular genetics",
"Smart materials"
] |
7,912,154 | https://en.wikipedia.org/wiki/Body%20Wars | Body Wars was a motion simulator attraction inside the Wonders of Life pavilion at the Walt Disney World Resort's Epcot. Riders would be taken on a mission by the fictional Miniaturized Exploration Technologies corporation (Stylized as MET) to study the effects of the white blood cells on a splinter inside the left index finger of a volunteer. The attraction used the Advanced Technology Leisure Application Simulator technology previously seen at Disneyland's Star Tours attraction. The ride is no longer in operation along with the other attractions inside the Wonders of Life pavilion, which opened on October 19, 1989, and closed on January 1, 2007.
History
On January 22, 1988, Epcot announced that they would be building a new pavilion in Future World East. It would be called Wonders of Life and be themed to health care. The pavilion would be located between Universe of Energy and Horizons. Wonders of Life would include new restaurants, stores and several attractions. One of these attractions would be a motion simulator ride named Body Wars. The sponsor of Wonders of Life would be MetLife. Construction of the pavilion began in February of that year.
The Wonders of Life pavilion would officially open to the general public on October 19, 1989. Upon opening, Body Wars had a wait time of 90 minutes. Just two months after opening, a similar ride named Star Tours opened at Disney's Hollywood Studios.
Body Wars received a mixed reception from guests, as some praised the thrilling experience, but others complained of motion sickness and nausea since it was considered to be a rough ride.
In the late 1990s, tensions arose between MetLife and Disney, as MetLife would often set up tables at the pavilion to sell park guests life insurance, which is not allowed. After MetLife's sponsorship expired in June 2001, Epcot would continue to operate the Wonders of Life pavilion. The popularity of Body Wars began to decline over the years. In 2004, the park announced that the attraction would begin seasonal operation. The entire pavilion would officially close on January 1, 2007.
Attraction description
Queue
Guests entered the queue on the left side of the Wonders of Life Dome. If the attraction was in high demand, an extended queue would be utilized, decorated with signage and pastel colored shapes lining the walls. This would lead into the main queue, contained within a separate external wing of the building. As guests entered, they were informed via in-queue announcements of details surrounding the fictional MET company. The guests were referred to as "MET Observation Team Members", and would be informed via a preshow shown within the queue of the mission that they would be going on, with another volunteer who had a bruise on his arm, but wasn't shown inside of. The queue would begin with the logo of the MET Company, with various images depicting the company and the inside of the human body. Until 1993, signage would be hung up stating that the company was founded in 2063, as well as their motto "Pioneering the Universe Within". This would lead into the first of two "Dermatopic Purification" stations, before a hallway with in queue TV sets, and the second of two "Dermatopic Purification" stations.
Boarding
Dr. Cynthia Lair had volunteered to be miniaturized to observe a splinter. The guests were told they would board vehicle Bravo 229 and would be shrunk. Their mission was to meet up with Dr. Lair and bring her out. Captain Braddock would be the guests' pilot.
Guests learned that their "LGS 250"-type probe vehicle weighed approximately 26 tons, but once miniaturized, weighed less than a drop of water.
Ride
The guests' vehicle, Bravo 229, moved from the bay to the miniaturization room, where technicians focused a "particle reducer" on the ship. The ship and crew mates were shrunk and sent into the subject's body, under his skin. White blood cells were seen on their way to destroy his splinter.
The guests arrived at the splinter, meeting with Dr. Cynthia Lair. She began to take a cell count when she was accidentally pulled into a capillary. Captain Braddock followed Dr. Lair into the vein, entering an unauthorized area. The captain steered Bravo past the heart and into the right ventricle. The guests entered the lungs where Dr. Lair was being attacked by a white blood cell.
Braddock used his lasers to free Dr. Lair. By now, the ship was very low on power. Dr. Lair suggested that they use the brain's energy to recharge the ship. Passing the heart's left atrium, the ship went through the artery to get to the brain. A neuron contacted the ship, allowing it to regain power and de-miniaturize outside of the subject's body. As the subject sits up, Mission Control congratulates Braddock, Lair, and the guests on pulling off the most spectacular mission in the history of MET.
Attraction facts
Cast:
Jenifer Lewis as Ride Queue Instructional Video Announcer (uncredited)
Tim Matheson as Captain Braddock
Dakin Matthews as Mission Control
Elisabeth Shue as Dr. Cynthia Lair
John Reilly as Subject in pre-show (uncredited)
Dayna Beilenson as Scientist (uncredited)
Vehicle names: (all bays and vehicles were fictional except for Bravo 229)
Bay #1: "Zulu 174"
Bay #2: "Bravo 229"
Bay #3: "Sierra 657" and "Foxtrot 817"
Bay #4: "Charlie 218"
Used same ATLAS Technology as Star Tours.
Current status
As of November 2014, the four simulators have been dismantled and removed from the ride building. The queue is still intact, but most of the lighting and electronic equipment has been removed. The show building is currently used for storage for the Epcot Food & Wine Festival, along with the Flower & Garden Festival. As of November 2016, the queue is being slowly dismantled while few remnants remain. The pavilion is currently being transformed into the new Play! pavilion (construction is under way). The exit area has had the same treatment, with all signage removed. Red archive tags have been applied to the beginning MET Sign, and to the Body Wars safety information sign near the exit.
On February 21, 2019, it was announced that the new Play! Pavilion would be replacing the entire Wonders of Life pavilion, including the ride. What will happen to Body Wars during the transformation is currently unknown.
See also
Epcot attraction and entertainment history
Wonders of Life
Incidents at Walt Disney World
Advanced Technology Leisure Application Simulator - the technology underlying Body Wars.
Fantastic Voyage
References
External links
Amusement rides introduced in 1989
Amusement rides that closed in 2007
Amusement rides manufactured by Rediffusion Simulation
1989 films
Human body
Former Walt Disney Parks and Resorts attractions
Epcot
Simulator rides
Walt Disney Parks and Resorts films
Future World (Epcot)
Films scored by Leonard Rosenman
Films directed by Leonard Nimoy
1989 establishments in Florida
2007 disestablishments in Florida | Body Wars | [
"Physics"
] | 1,406 | [
"Human body",
"Physical objects",
"Matter"
] |
7,914,639 | https://en.wikipedia.org/wiki/Analyte-specific%20reagent | Analyte-specific reagents (ASRs) are a class of biological molecules which can be used to identify and measure the amount of an individual chemical substance in biological specimens.
Regulatory definition
The U.S. Food and Drug Administration (FDA) defines analyte specific reagents (ASRs) in 21 CFR 864.4020 as “antibodies, both polyclonal and monoclonal, specific receptor proteins, ligands, nucleic acid sequences, and similar reagents which, through specific binding or chemical reaction with substances in a specimen, are intended to be used in a diagnostic application for identification and quantification of an individual chemical substance or ligand in biological specimens.”
In simple terms, an analyte specific reagent is the active ingredient of an in-house test.
External links
Guidance for Industry and FDA Staff - Commercially Distributed Analyte Specific Reagents (ASRs): Frequently Asked Questions
Code of Federal Regulations - Specimen Preparation Reagents (21CFR864.4020)
Chemical tests
Biomolecules | Analyte-specific reagent | [
"Chemistry",
"Biology"
] | 217 | [
"Natural products",
"Chemical tests",
"Molecular biology stubs",
"Organic compounds",
"Biomolecules",
"Structural biology",
"Biochemistry",
"Molecular biology"
] |
2,467,030 | https://en.wikipedia.org/wiki/Lime%20plaster | Lime plaster is a type of plaster composed of sand, water, and lime, usually non-hydraulic hydrated lime (also known as slaked lime, high calcium lime or air lime). Ancient lime plaster often contained horse hair for reinforcement and pozzolan additives to reduce the working time.
Traditional non-hydraulic hydrated lime only sets through carbonatation when the plaster is kept moist and access of CO2 from the air is possible. It will not set when submersed in water. When a very thick layer or several layers are applied, the lime can remain soft for weeks.
The curing time of lime plaster can be shortened by using (natural) hydraulic lime or adding pozzolan additives, transforming it into artificially hydraulic lime. In ancient times, Roman lime plaster incorporated pozzolanic volcanic ash; in modern times, fly ash is preferred. Non-hydraulic lime plaster can also be made to set faster by adding gypsum.
Lime production for use in plastering home-made cisterns (in making them impermeable) was especially important in countries where rain-fall was scarce in summer. This enabled them to collect the winter run-off of rain water and to have it stored for later use, whether for personal or agricultural needs.
Advantages
Lime plaster sets up to a solid mass that is durable yet flexible. Hydraulic lime plaster is not as hard as cement plaster. Hydraulic limes and historic limes were graded as feeble, moderate and eminent. Modern hydraulic limes would be graded at 2, 3.5, or 5 newtons. Portland cement plaster on the other hand would typically be in the region of 25 to 35 newtons when cured; i.e. up to 10 times harder. Lime plaster is less affected by water and will not soften or dissolve like drywall and earthen or gypsum plaster. Unlike gypsum or clay plaster, lime plaster is sufficiently durable and resistant to the elements to be used for exterior plastering.
Compared to cement plaster, plaster made from hydrated lime is less brittle and less prone to cracking, requiring no expansion joints. It will not detach from the wall when subjected to shear stress due to expansion inflicted by solar radiation and moisture. Unlike cement plaster, it will shield softer materials from shear stresses. This would otherwise possibly cause the deterioration of the underlying surface. It is usually not recommended to replace more than 20% of the lime content with cement when rendering the facade, and it is a matter of contention whether adding any concrete is ever appropriate in order to maintain the benefits of lime over concrete.
Lime plaster is permeable and allows for the diffusion and evaporation of moisture. However, when properly worked with pozzolanic agents and animal fat, it becomes impermeable.
The elevated pH of the lime in the plaster acts as a fungicide, preventing mold from growing in lime plaster.
Disadvantages
Non-hydraulic lime plaster sets slowly and is quite caustic while wet, with a pH of 12. Plasterers must take care to protect themselves or use mild acids such as vinegar or lemon juice to neutralize chemical burns. When the plaster is dry, the pH falls to about 8.6. Non-hydraulic lime plaster requires moisture to set and has to be prevented from drying for several days. The number of qualified tradesmen capable of plastering with lime has declined due to industrialization, deskilling of trade crafts, and widespread adoption of drywall and gypsum veneer plaster.
Venetian Plaster Techniques
Venetian plaster is a type of polished plaster that is widely used for wall and ceiling finishes. It consists of a mixture of plaster and marble dust, which is applied in thin layers using a spatula or trowel. The technique involves applying multiple layers of the plaster mixture and then burnishing the surface to create a smooth finish with the illusion of depth and texture.
There are various techniques that can be used to achieve different effects with Venetian plaster. Marmorino is one such technique, which involves adding marble dust to the plaster mixture to create a polished marble-like appearance. Scagliola is another technique that imitates various types of stone, while sgraffito involves scratching the surface of the plaster to reveal different layers and create decorative patterns.
When left un-burnished, Venetian plaster has a rough and stone-like matte finish. However, when applied correctly and burnished, it can result in a highly polished, rock-hard finish that resembles marble. This makes Venetian plaster an excellent alternative to expensive and heavy marble installations, as it can be used on surfaces such as columns, corbels, and curved walls.
Venetian Plaster History
The history of polished plaster can be traced back to ancient times, with evidence of its use in ancient Egyptian, Roman, and Greek architecture. The technique was highly valued for its durability and aesthetic appeal, and it has continued to be used and refined throughout history.
Throughout ancient times, lime was a widely employed material for constructing plaster on both interior and exterior walls. The Greeks, in particular, made a remarkable discovery regarding the production of a special adhesive by subjecting limestone rocks to intense heat within expansive ovens. Nevertheless, this transformative process, which involved converting limestone into calcium oxide, carbon dioxide, and steam, posed significant challenges due to the requirement of extremely high temperatures, reaching approximately 2200 °F. The resulting substance, known as quicklime or lump-lime, was subsequently pulverized into a fine powder and combined with water in a process called "slaking." Through this procedure, a fundamental binding agent called "lime putty" was created and utilized for plastering purposes. The slaked lime, a dense and moist substance, would then be stored in a designated pit for several months, or even years, to ensure complete hydration. Historical accounts suggest that the Romans enforced a regulation stipulating that slaked lime could only be employed if it had aged for a minimum of three years.
Venetian plaster, a distinctive type of wall covering, boasts a rich historical legacy that traces back to ancient times, with its origins linked to Pompeii and the subsequent Roman Empire. Vitruvius, who lived around 80-70 B.C., documented the process of manufacturing lime plaster in his renowned work "De architecture" or "Ten Books of Architecture." These methods were further elaborated upon by Pliny the Elder in his book "Natural History," dating back approximately 2,000 years. The Romans referred to the finished product as "Marmoratum Opus," meaning "smooth marble." The rediscovery of Venetian plaster can be attributed to the Renaissance period, characterized by a renewed interest in the ancient techniques of Rome. Palladio, a renowned Renaissance architect, referred to the process as "Pietra d'Istria" since the plaster bore a striking resemblance to natural rocks such as marble, granite, and travertine commonly found near Venice. Palladio's architectural creations, although seemingly constructed from stone, were in fact composed of brick and stucco. The plastering process involved the initial application of a coarse layer of plaster known as "arricio," followed by subsequent layers of lime putty blended with powdered marble to achieve a smooth and polished surface. On occasion, pigments were added to the wet plaster to introduce vibrant hues.
During the Baroque period, Venetian plaster experienced a decline in popularity, echoing the diminished prominence witnessed after the fall of the Roman Empire. However, in the 1950s, a notable Venetian builder named Carlo Scarpa played a pivotal role in revitalizing the use of Marmorino in contemporary construction. Scarpa not only adhered to the methods outlined by Vitruvius and Palladio but also introduced innovative techniques involving the utilization of animal hides and acrylic resins.
Historical use in the arts
One of the earliest examples of lime plaster dates back to the end of the eighth millennium BC. Three statues were discovered in a buried pit at 'Ain Ghazal in Jordan that were sculpted with lime plaster over armatures of reeds and twine. They were made in the pre-pottery neolithic period, around 7200 BC. The fact that these sculptures have lasted so long is a testament to the durability of lime plaster.
Historical uses in building
Lime plaster was a common multi-purpose material used throughout the PPNB Levant, Iran and Anatolia, including Jericho, 'Ain Ghazal, Çatalhöyük and Çayönü. It was used for internal walls, floors and internal platforms. At the archaeological site of 'Ain Ghazal in modern-day Jordan, occupied from 7200 BC to 5000 BC, lime plaster is believed to have been used as the main component of the large anthropomorphical figurines discovered there in the 1980s.
Qadad lime plaster is waterproof and used for interiors and exteriors
Some of the earliest known examples of lime used for building purposes are in ancient Egyptian buildings (primarily monuments). Some of these edifices are found in the chambers of the pyramids, and date to between the Ninth and Tenth Dynasties (~2000 BC). They are still hard and intact.
Archaeological digs carried out on the island of Malta have shown that in places like Tarxien and Hagar, lime stucco was also used as a binder to hold stone together as well as for decoration at sites dating back as far as 3000–2500 BC.
At el-Amarna, a large pavement on brick was discovered that dates back to 1400 BC. It was apparently the floor of part of the harem of King Amenhotep IV.
Ancient Chinese used Suk-wui (the Chinese word for slaked lime) in the construction of The Great Wall of China.
Ancient Romans used hydraulic lime (added volcanic ash, an activated aluminium silicate) to ensure hardening of plaster and concrete in cold or wet conditions.
The Aztec Empire and other Mesoamerican civilizations used lime plaster to pave streets in their cities. It was also used to coat the walls and floors of buildings.
This material was used in the San Luis Mission architecture.
See also
'Ain Ghazal
Faux Finishing
Fresco, a method of painting on fresh plaster
Gypsum
Lime (material)
Lime mortar
Limepit
Limestone
Plaster
Plaster of Paris
Plasterwork
Qadad, a waterproofing method for lime plaster
Tadelakt, a waterproofing method for lime plaster
Sarooj
Whitewash
References
Further reading
Cedar Rose Guelberth and Dan Chiras, The Natural Plaster Book: earth, lime and gypsum plasters for natural homes
J.N. Tubb, Canaanites, London, The British Museum Press, 1998
Stafford Holmes, Michael Wingate, Building With Lime: A Practical Introduction, Intermediate Technology Publications Ltd,
Lancaster Limeworks Learning Center
External links
British Museum: Lime Plaster Statues
Excavations
What Is Venetian Plaster
Building materials
Plastering
Pre-Pottery Neolithic B | Lime plaster | [
"Physics",
"Chemistry",
"Engineering"
] | 2,233 | [
"Building engineering",
"Coatings",
"Architecture",
"Construction",
"Materials",
"Plastering",
"Matter",
"Building materials"
] |
2,467,284 | https://en.wikipedia.org/wiki/Molecular%20autoionization | In chemistry, molecular autoionization (or self-ionization) is a chemical reaction between molecules of the same substance to produce ions. If a pure liquid partially dissociates into ions, it is said to be self-ionizing. In most cases the oxidation number on all atoms in such a reaction remains unchanged. Such autoionization can be protic ( transfer), or non-protic.
Examples
Protic solvents
Protic solvents often undergo some autoionization (in this case autoprotolysis):
2 H2O <=> H3O+ + OH-
The self-ionization of water is particularly well studied, due to its implications for acid-base chemistry of aqueous solutions.
2 NH3 <=> NH4+ + NH2-
2 H2SO4 <=> H3SO4+ + HSO4-
3 HF <=> H2F+ + HF2-
Here proton transfer between two HF molecules combines with homoassociation of the resulting F- and a third HF to form HF2-.
Non-protic solvents
2 PF5 <=> PF6- + PF4+
N2O4 <=> NO+ + NO3-
Here the nitrogen oxidation numbers change from (+4 and +4) to (+3 and +5).
2 BrF3 <=> BrF2+ + BrF4-
These solvents all possess atoms with odd atomic numbers, either nitrogen or a halogen. Such atoms enable the formation of singly charged, nonradical ions (which must have at least one odd-atomic-number atom), which are the most favorable autoionization products. Protic solvents, mentioned previously, use hydrogen for this role. Autoionization would be much less favorable in solvents such as sulfur dioxide or carbon dioxide, which have only even-atomic-number atoms.
Coordination chemistry
Autoionization is not restricted to neat liquids or solids. Solutions of metal complexes exhibit this property. For example, compounds of the type (where X = Cl or Br) are unstable with respect to autoionization forming .
See also
Ionization
Ion association
References
Molecular physics | Molecular autoionization | [
"Physics",
"Chemistry"
] | 449 | [
"Molecular physics",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
2,467,496 | https://en.wikipedia.org/wiki/Pseudorapidity | In experimental particle physics, pseudorapidity, , is a commonly used spatial coordinate describing the angle of a particle relative to the beam axis. It is defined as
where is the angle between the particle three-momentum and the positive direction of the beam axis. Inversely,
As a function of the three-momentum p, pseudorapidity can be written as
η = (1/2) ln((|p| + pL)/(|p| − pL)),
where pL is the component of the momentum along the beam axis (i.e. the longitudinal momentum – using the conventional system of coordinates for hadron collider physics, this is also commonly denoted pz). In the limit where the particle is travelling close to the speed of light, or equivalently in the approximation that the mass of the particle is negligible, one can make the substitution |p| ≈ E (i.e. in this limit, the particle's only energy is its momentum-energy, similar to the case of the photon), and hence the pseudorapidity converges to the definition of rapidity used in experimental particle physics:
y = (1/2) ln((E + pz)/(E − pz)).
This differs slightly from the definition of rapidity in special relativity, which uses |p| instead of pz. However, pseudorapidity depends only on the polar angle of the particle's trajectory, and not on the energy of the particle. One speaks of the "forward" direction in a hadron collider experiment, which refers to regions of the detector that are close to the beam axis, at high |η|; in contexts where the distinction between "forward" and "backward" is relevant, the former refers to the positive z-direction and the latter to the negative z-direction.
In hadron collider physics, the rapidity (or pseudorapidity) is preferred over the polar angle because, loosely speaking, particle production is constant as a function of rapidity, and because differences in rapidity are Lorentz invariant under boosts along the longitudinal axis: they transform additively, similar to velocities in Galilean relativity. A measurement of a rapidity difference Δy between particles (or Δη if the particles involved are massless) is hence not dependent on the longitudinal boost of the reference frame (such as the laboratory frame). This is an important feature for hadron collider physics, where the colliding partons carry different longitudinal momentum fractions x, which means that the rest frames of the parton-parton collisions will have different longitudinal boosts.
The rapidity as a function of pseudorapidity is given by
y = ln[(√(m² + pT² cosh²η) + pT sinh η) / √(m² + pT²)],
where pT is the transverse momentum (i.e. the component of the three-momentum perpendicular to the beam axis).
Using a second-order Maclaurin expansion of y expressed in m/pT one can approximate rapidity by
y ≈ η − (cos θ / 2)(m/pT)²,
which makes it easy to see that for relativistic particles with pT ≫ m, pseudorapidity becomes equal to (true) rapidity.
Rapidity is used to define a measure of angular separation between particles commonly used in particle physics, ΔR = √((Δy)² + (Δφ)²), which is Lorentz invariant under a boost along the longitudinal (beam) direction. Often, the rapidity term in this expression is replaced by pseudorapidity, yielding a definition with purely angular quantities: ΔR = √((Δη)² + (Δφ)²), which is Lorentz invariant if the involved particles are massless. The difference in azimuthal angle, Δφ, is invariant under Lorentz boosts along the beam line (z-axis) because it is measured in a plane (i.e. the "transverse" x-y plane) orthogonal to the beam line.
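A small numerical illustration of the relations above (the particle type, angle and momenta are arbitrary example values): pseudorapidity depends only on the polar angle, while true rapidity also depends on the mass, and the two converge once pT ≫ m.

```python
# Numerical illustration with arbitrary example values (natural units, GeV).
# Pseudorapidity depends only on the polar angle, while true rapidity also
# depends on the mass; the two converge once pT >> m (here, a charged pion).
import math

def pseudorapidity(theta):
    return -math.log(math.tan(theta / 2.0))

def rapidity(pt, theta, m):
    p = pt / math.sin(theta)       # |p| from pT and the polar angle
    pz = p * math.cos(theta)       # longitudinal momentum
    E = math.sqrt(m * m + p * p)
    return 0.5 * math.log((E + pz) / (E - pz))

m_pi, theta = 0.1396, math.radians(10.0)
for pt in (0.3, 30.0):             # transverse momentum in GeV
    print(pt, pseudorapidity(theta), rapidity(pt, theta, m_pi))
```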
Values
Here are some representative values:
{| class=wikitable
! θ
! η
! θ
! η
|-
| 0°
| ∞
| 180°
| −∞
|-
| 0.1°
| 7.04
| 179.9°
| −7.04
|-
| 0.5°
| 5.43
| 179.5°
| −5.43
|-
| 1°
| 4.74
| 179°
| −4.74
|-
| 2°
| 4.05
| 178°
| −4.05
|-
| 5°
| 3.13
| 175°
| −3.13
|-
| 10°
| 2.44
| 170°
| −2.44
|-
| 20°
| 1.74
| 160°
| −1.74
|-
| 30°
| 1.32
| 150°
| −1.32
|-
| 45°
| 0.88
| 135°
| −0.88
|-
| 60°
| 0.55
| 120°
| −0.55
|-
| 80°
| 0.175
| 100°
| −0.175
|-
| 90°
| 0
|colspan=2|
|}
Pseudorapidity is odd about θ = 90°. In other words, η(θ) = −η(180° − θ).
Conversion to Cartesian momenta
Hadron colliders measure physical momenta in terms of transverse momentum pT, the azimuthal angle φ in the transverse plane and pseudorapidity η. To obtain Cartesian momenta (with the z-axis defined as the beam axis), the following conversions are used:
px = pT cos φ,
py = pT sin φ,
pz = pT sinh η,
which gives |p| = pT cosh η. Note that pz is the longitudinal momentum component, which is denoted pL in the text above (pz is the standard notation at hadron colliders).
The equivalent relations to get the full four-momentum (in natural units) using "true" rapidity are:
E = mT cosh y,
px = pT cos φ,
py = pT sin φ,
pz = mT sinh y,
where mT = √(m² + pT²) is the transverse mass.
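A minimal sketch of these conversions (assuming natural units; the example kinematics are arbitrary):

```python
# Minimal sketch of the conversions above (natural units; example kinematics
# are arbitrary). Given pT, eta, phi and a mass hypothesis m, it returns
# (E, px, py, pz) and checks that |p| = pT * cosh(eta).
import math

def four_momentum(pt, eta, phi, m):
    px = pt * math.cos(phi)
    py = pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    mt = math.sqrt(m * m + pt * pt)            # transverse mass
    y = math.asinh(pz / mt)                    # true rapidity, from pz = mT sinh(y)
    E = mt * math.cosh(y)                      # equals sqrt(m^2 + |p|^2)
    return E, px, py, pz

E, px, py, pz = four_momentum(pt=45.0, eta=1.2, phi=0.7, m=0.106)  # e.g. a muon
print(E, math.sqrt(px**2 + py**2 + pz**2), 45.0 * math.cosh(1.2))  # last two agree
```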
A boost along the beam axis with velocity β corresponds to an additive change in rapidity of Δy, using the relation Δy = artanh(β). Under such a Lorentz transformation, the rapidity of a particle will become y + Δy and the four-momentum becomes
(E′, px′, py′, pz′) = (mT cosh(y + Δy), pT cos φ, pT sin φ, mT sinh(y + Δy)).
This sort of transformation is common in hadron colliders. For example, if two hadrons of identical type undergo an inelastic collision along the beam axis with the same speed, then the corresponding rapidity will be
y = (1/2) ln(x1/x2),
where x1 and x2 are the momentum fractions of the colliding partons. When several particles are produced in the same collision, the difference in rapidity y2 − y1 between any two particles 1 and 2 will be invariant under any such boost along the beam axis, and if both particles are massless (m1 = m2 = 0), this will also hold for pseudorapidity (η2 − η1).
References
V. Chiochia (2010) Accelerators and Particle Detectors from University of Zurich
Experimental particle physics | Pseudorapidity | [
"Physics"
] | 1,243 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
2,467,948 | https://en.wikipedia.org/wiki/Sulfonucleotide%20reductase | Sulfonucleotide reductases are a class of enzymes involved in reductive sulfur assimilation. This reaction consists of a conversion from activated sulfate to sulfite. (Inorganic sulfate occurs abundantly on Earth; terrestrial organisms must use sulfate assimilation to convert it to sulfide). The sulfite is used in essential biomolecules such as cysteine. The sulfonucleotide reductases are through to have all evolved from a common ancestor.
The enzymes reduce adenosine-5'-phosphosulfate by nucleophilic attack to produce the sulfite product. This typically involves a cofactor (such as an iron-sulphur cluster); however, the cofactor varies between families.
References
Sulfur metabolism | Sulfonucleotide reductase | [
"Chemistry"
] | 164 | [
"Sulfur metabolism",
"Metabolism"
] |
2,468,107 | https://en.wikipedia.org/wiki/PEDOT%3APSS | Poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) is a composite material where PEDOT (the conductive polymer) provides electrical conductivity, and PSS (polystyrene sulfonate) acts as a counter-ion to balance the charge and improve the water solubility and processability of PEDOT. Polystyrene sulfonate is a sulfonated polystyrene. Part of the sulfonyl groups are deprotonated and carry a negative charge. The other component poly(3,4-ethylenedioxythiophene) (PEDOT) is a conjugated polymer and carries positive charges and is based on polythiophene. Together the charged macromolecules form a macromolecular salt.
Synthesis
PEDOT:PSS can be prepared by mixing an aqueous solution of PSS with EDOT monomer and then adding to the resulting mixture a solution of sodium persulfate and ferric sulfate.
The addition of these reagents initiates the oxidative chemical polymerization of EDOT in water to form PEDOT. The stabilizing PSS forms a shell around a core of PEDOT in a nano-sized structure. The negatively charged sulfonic acid ions help stabilize the positively charged PEDOT ions.
Applications
PEDOT:PSS has the highest efficiency among conductive organic thermoelectric materials (ZT~0.42) and thus can be used in flexible thermoelectric generators. Yet its largest application is as a transparent, conductive polymer with high ductility. For example, AGFA coats 200 million photographic films per year with a thin, extensively-stretched layer of virtually transparent and colorless PEDOT:PSS as an antistatic agent to prevent electrostatic discharges during production and normal film use, independent of humidity conditions, and as electrolyte in polymer electrolytic capacitors.
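The dimensionless thermoelectric figure of merit quoted above is conventionally defined as ZT = S²σT/κ, with Seebeck coefficient S, electrical conductivity σ, absolute temperature T and thermal conductivity κ. A back-of-envelope sketch (the input values are placeholders of roughly the right order of magnitude, not measured PEDOT:PSS data):

```python
# Back-of-envelope illustration of ZT = S^2 * sigma * T / kappa. The inputs
# are placeholders of roughly the right order of magnitude for a treated
# PEDOT:PSS film, not measured data.
def figure_of_merit(seebeck_V_per_K, conductivity_S_per_m, thermal_cond_W_per_mK, T_K):
    return seebeck_V_per_K ** 2 * conductivity_S_per_m * T_K / thermal_cond_W_per_mK

zt = figure_of_merit(seebeck_V_per_K=70e-6,       # ~70 microvolts per kelvin
                     conductivity_S_per_m=9e4,    # ~900 S/cm
                     thermal_cond_W_per_mK=0.3,
                     T_K=300.0)
print(round(zt, 2))                               # ~0.4
```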
If organic compounds, including high boiling solvents like methylpyrrolidone, dimethyl sulfoxide, sorbitol, ionic liquids and surfactants, are added conductivity increases by many orders of magnitude. This makes it also suitable as a transparent electrode, for example in touchscreens, organic light-emitting diodes, flexible organic solar cells and electronic paper to replace the traditionally used indium tin oxide (ITO). Owing to the high conductivity (up to 4600 S/cm), it can be used as a cathode material in capacitors replacing manganese dioxide or liquid electrolytes. It is also used in organic electrochemical transistors.
The conductivity of PEDOT:PSS can also be significantly improved by a post-treatment with various compounds, such as ethylene glycol, dimethyl sulfoxide (DMSO), salts, zwitterions, cosolvents, acids, alcohols, phenol, geminal diols and amphiphilic fluoro-compounds. This conductivity is comparable to that of ITO, the popular transparent electrode material, and it can triple that of ITO after a network of carbon nanotubes and silver nanowires is embedded into PEDOT:PSS and used for flexible organic devices.
PEDOT:PSS is generally applied as a dispersion of gelled particles in water. A conductive layer on glass is obtained by spreading a layer of the dispersion on the surface usually by spin coating and driving out the water by heat. Special PEDOT:PSS inks and formulations were developed for different coating and printing processes. Water-based PEDOT:PSS inks are mainly used in slot die coating, flexography, rotogravure and inkjet printing. If a high viscous paste and slow drying is required like in screen-printing processes PEDOT:PSS can also be supplied in high boiling solvents like propanediol. Dry PEDOT:PSS pellets can be produced with a freeze drying method which are redispersable in water and different solvents, for example ethanol to increase drying speed during printing. Finally, to overcome degradation to ultraviolet light and high temperature or humidity conditions PEDOT:PSS UV-stabilizers are available.
Researchers at Linköping University claim to have made a "wooden transistor" by replacing the lignin in balsa wood with PEDOT:PSS.
Mechanical Properties
Since PEDOT:PSS is most frequently used in thin film architectures, several methods have been developed to accurately probe its mechanical properties; for example, water-supported tensile testing, four-point bend tests to measure adhesive and cohesive fracture energy, buckling tests to measure modulus, and bending tests on PDMS and polyethylene supports to probe the crack onset strain. Though PEDOT:PSS has a lower electrical mobility than silicon, which can also be incorporated into flexible electronics through the incorporation of stress-relief structures, sufficiently flexible PEDOT:PSS can enable lower cost-processing, such as roll-to-roll processing. The most important characteristics for an organic semiconductor used in thin-film architectures are low modulus in the elastic regime and high stretchability prior to fracture. These properties have been found to be highly correlated to relative humidity. At high relative humidity (>40%) hydrogen bonds are weakened in the PSS due to the uptake of water which leads to higher strain before fracture and lower elastic modulus. At low relative humidity (<23%) the presence of strong bonding between PSS grains leads to higher modulus and lower strain before fracture. Films at higher relative humidity are presumed to fail by intergranular fracture, whereas lower relative humidity leads to transgranular fracture. Additives like 3-glycidoxypropyltrimethoxysilane (GOPS) can drastically improve the mechanical stability in aqueous media even at low concentrations of 1 wt% without significantly impeding the electrical properties.
PEDOT:PSS can also show self-healing properties if submerged in water after sustaining mechanical damage. This self-healing capability is proposed to be enabled by the hygroscopic property of PSS−. Common PEDOT:PSS additives that improve the electrical conductivity have varying effects on self-healing. While ethylene glycol improves electrical and mechanical self-healing, sulfuric acid reduces the former but improves the latter, presumably because it undergoes autoprotolysis. Polyethylene glycol improves the electrical and thermoelectric self-healing, but reduces the mechanical self-healing.
PEDOT:PSS is also attractive for conductive textile applications. Though it results in inferior thermoelectric properties, wet-spinning has been shown to result in high conductivity and stiff fibers due to preferential alignment of polymer chains during fiber drawing.
References
Organic polymers
Organic semiconductors
Conductive polymers
Transparent electrodes
Polyelectrolytes
Copolymers
Antistatic agents
Display technology | PEDOT:PSS | [
"Chemistry",
"Engineering"
] | 1,461 | [
"Organic polymers",
"Semiconductor materials",
"Molecular electronics",
"Conductive polymers",
"Organic compounds",
"Electronic engineering",
"Display technology",
"Antistatic agents",
"Process chemicals",
"Organic semiconductors"
] |
2,468,460 | https://en.wikipedia.org/wiki/Float-zone%20silicon | Float-zone silicon is very pure silicon obtained by vertical zone melting. The process was developed at Bell Labs by Henry Theuerer in 1955 as a modification of a method developed by William Gardner Pfann for germanium. In the vertical configuration molten silicon has sufficient surface tension to keep the charge from separating. The major advantages is crucibleless growth that prevents contamination of the silicon from the vessel itself and therefore an inherently high-purity alternative to boule crystals grown by the Czochralski method.
The concentrations of light impurities, such as carbon (C) and oxygen (O2) elements, are extremely low. Another light impurity, nitrogen (N2), helps to control microdefects and also brings about an improvement in mechanical strength of the wafers, and is now being intentionally added during the growth stages.
The diameters of float-zone wafers are generally not greater than 200 mm due to the surface tension limitations during growth. A polycrystalline rod of ultrapure electronic-grade silicon is passed through an RF heating coil, which creates a localized molten zone from which the crystal ingot grows. A seed crystal is used at one end to start the growth. The whole process is carried out in an evacuated chamber or in an inert gas purge.
The molten zone carries the impurities away with it and hence reduces impurity concentration (most impurities are more soluble in the melt than the crystal). Specialized doping techniques like core doping, pill doping, gas doping and neutron transmutation doping are used to incorporate a uniform concentration of desirable impurity.
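For an idealized single zone pass, the impurity concentration left in the solid is often described by Pfann's relation Cs(x) = C0[1 − (1 − k)e^(−kx/L)], with effective distribution coefficient k and zone length L. A minimal sketch (k, L and C0 are illustrative assumptions; real float-zone growth is not captured by this simple model):

```python
# Idealised single-pass zone-refining profile (Pfann's relation):
#   Cs(x) = C0 * (1 - (1 - k) * exp(-k * x / L))
# with effective distribution coefficient k and molten-zone length L.
# k, L and C0 below are illustrative assumptions; real float-zone growth
# involves effects this simple model does not capture.
import math

def solid_concentration(x, C0=1.0, k=0.35, L=0.02):
    return C0 * (1.0 - (1.0 - k) * math.exp(-k * x / L))

for x_cm in (0, 2, 5, 10, 20):
    print(f"{x_cm:3d} cm: {solid_concentration(x_cm / 100.0):.3f}")
```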
Float-zone silicon wafers may be irradiated with neutrons to turn them into an n-doped semiconductor.
Application
Float-zone silicon is typically used for power devices and detector applications, where high-resistivity is required. It is highly transparent to terahertz radiation, and is usually used to fabricate optical components, such as lenses and windows, for terahertz applications. It is also used in solar arrays of satellites as it has higher conversion efficiency.
See also
Bridgman–Stockbarger method
Micro-pulling-down
Laser-heated pedestal growth
References
Michael Riordan & Lillian Hoddeson (1997) Crystal Fire: The Birth of the Information Age, page 230, W. W. Norton & Company.
Industrial processes
Semiconductor growth
Silicon, Float-zone
Methods of crystal growth | Float-zone silicon | [
"Chemistry",
"Materials_science"
] | 490 | [
"Crystallography",
"Semiconductor materials",
"Methods of crystal growth",
"Group IV semiconductors"
] |
2,468,892 | https://en.wikipedia.org/wiki/Newton%27s%20identities | In mathematics, Newton's identities, also known as the Girard–Newton formulae, give relations between two types of symmetric polynomials, namely between power sums and elementary symmetric polynomials. Evaluated at the roots of a monic polynomial P in one variable, they allow expressing the sums of the k-th powers of all roots of P (counted with their multiplicity) in terms of the coefficients of P, without actually finding those roots. These identities were found by Isaac Newton around 1666, apparently in ignorance of earlier work (1629) by Albert Girard. They have applications in many areas of mathematics, including Galois theory, invariant theory, group theory, combinatorics, as well as further applications outside mathematics, including general relativity.
Mathematical statement
Formulation in terms of symmetric polynomials
Let x1, ..., xn be variables, denote for k ≥ 1 by pk(x1, ..., xn) the k-th power sum:
and for k ≥ 0 denote by ek(x1, ..., xn) the elementary symmetric polynomial (that is, the sum of all distinct products of k distinct variables), so
Then Newton's identities can be stated as
valid for all n ≥ k ≥ 1.
Also, one has
for all k > n ≥ 1.
Concretely, one gets for the first few values of k:
The form and validity of these equations do not depend on the number n of variables (although the point where the left-hand side becomes 0 does, namely after the n-th identity), which makes it possible to state them as identities in the ring of symmetric functions. In that ring one has
and so on; here the left-hand sides never become zero.
These equations allow one to express the ei recursively in terms of the pk; to be able to do the inverse, one may rewrite them as
In general, we have
valid for all n ≥ k ≥ 1.
Also, one has
for all k > n ≥ 1.
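As a concrete numerical illustration of these recursions, the Python sketch below converts in both directions, using the identities in one common form: k·e_k = Σ_{i=1..k} (−1)^(i−1) e_(k−i) p_i and p_k = Σ_{i=1..k−1} (−1)^(i−1) e_i p_(k−i) + (−1)^(k−1) k·e_k, with e_0 = 1 and e_j = 0 for j above the number of variables. The function names are ours; this is only a check of the recursions, not a reference implementation.

```python
from fractions import Fraction

def power_sums_from_elementary(e, kmax):
    """p_k from e_1..e_n via Newton's identities:
       p_k = sum_{i=1}^{k-1} (-1)^(i-1) e_i p_{k-i} + (-1)^(k-1) k e_k."""
    n = len(e)
    ee = [Fraction(1)] + [Fraction(x) for x in e]        # ee[0] = e_0 = 1
    get_e = lambda i: ee[i] if i <= n else Fraction(0)    # e_j = 0 for j > n
    p = [None]                                            # p[0] unused
    for k in range(1, kmax + 1):
        pk = sum((-1) ** (i - 1) * get_e(i) * p[k - i] for i in range(1, k))
        pk += (-1) ** (k - 1) * k * get_e(k)
        p.append(pk)
    return p[1:]

def elementary_from_power_sums(p):
    """e_k from p_1..p_m via  k e_k = sum_{i=1}^{k} (-1)^(i-1) e_{k-i} p_i."""
    e = [Fraction(1)]
    for k in range(1, len(p) + 1):
        ek = sum((-1) ** (i - 1) * e[k - i] * Fraction(p[i - 1])
                 for i in range(1, k + 1))
        e.append(ek / k)
    return e[1:]

# Sanity check with x = (1, 2, 3): e = (6, 11, 6) and p = (6, 14, 36).
print([int(v) for v in power_sums_from_elementary([6, 11, 6], 3)])   # [6, 14, 36]
print([int(v) for v in elementary_from_power_sums([6, 14, 36])])     # [6, 11, 6]
```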
Application to the roots of a polynomial
The polynomial with roots xi may be expanded as
where the coefficients are the symmetric polynomials defined above.
Given the power sums of the roots
the coefficients of the polynomial with roots may be expressed recursively in terms of the power sums as
Formulating polynomials in this way is useful in using the method of Delves and Lyness to find the zeros of an analytic function.
Application to the characteristic polynomial of a matrix
When the polynomial above is the characteristic polynomial of a matrix (in particular when is the companion matrix of the polynomial), the roots are the eigenvalues of the matrix, counted with their algebraic multiplicity. For any positive integer , the matrix has as eigenvalues the powers , and each eigenvalue of contributes its multiplicity to that of the eigenvalue of . Then the coefficients of the characteristic polynomial of are given by the elementary symmetric polynomials in those powers . In particular, the sum of the , which is the -th power sum of the roots of the characteristic polynomial of , is given by its trace:
The Newton identities now relate the traces of the powers to the coefficients of the characteristic polynomial of . Using them in reverse to express the elementary symmetric polynomials in terms of the power sums, they can be used to find the characteristic polynomial by computing only the powers and their traces.
This computation requires computing the traces of matrix powers and solving a triangular system of equations. Both can be done in complexity class NC (solving a triangular system can be done by divide-and-conquer). Therefore, the characteristic polynomial of a matrix can be computed in NC. By the Cayley–Hamilton theorem, every matrix satisfies its characteristic polynomial, and a simple transformation allows one to find the adjugate matrix in NC.
Rearranging the computations into an efficient form leads to the Faddeev–LeVerrier algorithm (1840); a fast parallel implementation of it is due to L. Csanky (1976). Its disadvantage is that it requires division by integers, so in general the field should have characteristic 0.
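The following Python sketch illustrates the trace-based computation described above: it recovers the coefficients of the characteristic polynomial from the traces p_k = tr(A^k) using Newton's identities and compares the result with numpy.poly. It is a plain sequential illustration rather than the parallel NC or Faddeev–LeVerrier formulation, and the function name is ours.

```python
import numpy as np

def charpoly_from_traces(A):
    """Coefficients of det(tI - A) = t^n + c_1 t^(n-1) + ... + c_n, obtained
    from the power sums p_k = tr(A^k) via Newton's identities:
        k e_k = sum_{i=1}^{k} (-1)^(i-1) e_{k-i} p_i,   with   c_k = (-1)^k e_k.
    """
    n = A.shape[0]
    p, Ak = [], np.eye(n)
    for _ in range(n):
        Ak = Ak @ A
        p.append(np.trace(Ak))          # p_1, ..., p_n
    e = [1.0]
    for k in range(1, n + 1):
        ek = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1)) / k
        e.append(ek)
    return np.array([1.0] + [(-1) ** k * e[k] for k in range(1, n + 1)])

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
print(np.allclose(charpoly_from_traces(A), np.poly(A)))   # True, up to round-off
```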
Relation with Galois theory
For a given n, the elementary symmetric polynomials ek(x1,...,xn) for k = 1,..., n form an algebraic basis for the space of symmetric polynomials in x1, ..., xn: every polynomial expression in the xi that is invariant under all permutations of those variables is given by a polynomial expression in those elementary symmetric polynomials, and this expression is unique up to equivalence of polynomial expressions. This is a general fact known as the fundamental theorem of symmetric polynomials, and Newton's identities provide explicit formulae in the case of power sum symmetric polynomials. Applied to the monic polynomial with all coefficients ak considered as free parameters, this means that every symmetric polynomial expression S(x1,...,xn) in its roots can be expressed instead as a polynomial expression P(a1,...,an) in terms of its coefficients only, in other words without requiring knowledge of the roots. This fact also follows from general considerations in Galois theory (one views the ak as elements of a base field with roots in an extension field whose Galois group permutes them according to the full symmetric group, and the field fixed under all elements of the Galois group is the base field).
The Newton identities also permit expressing the elementary symmetric polynomials in terms of the power sum symmetric polynomials, showing that any symmetric polynomial can also be expressed in the power sums. In fact the first n power sums also form an algebraic basis for the space of symmetric polynomials.
Related identities
There are a number of (families of) identities that, while they should be distinguished from Newton's identities, are very closely related to them.
A variant using complete homogeneous symmetric polynomials
Denoting by hk the complete homogeneous symmetric polynomial (that is, the sum of all monomials of degree k), the power sum polynomials also satisfy identities similar to Newton's identities, but not involving any minus signs. Expressed as identities in the ring of symmetric functions, they read
valid for all n ≥ k ≥ 1. Contrary to Newton's identities, the left-hand sides do not become zero for large k, and the right-hand sides contain ever more non-zero terms. For the first few values of k, one has
These relations can be justified by an argument analogous to the one by comparing coefficients in power series given above, based in this case on the generating function identity
Proofs of Newton's identities, like those given below, cannot be easily adapted to prove these variants of those identities.
Expressing elementary symmetric polynomials in terms of power sums
As mentioned, Newton's identities can be used to recursively express elementary symmetric polynomials in terms of power sums. Doing so requires the introduction of integer denominators, so it can be done in the ring ΛQ of symmetric functions with rational coefficients:
and so forth. The general formula can be conveniently expressed as
where the Bn is the complete exponential Bell polynomial. This expression also leads to the following identity for generating functions:
Applied to a monic polynomial, these formulae express the coefficients in terms of the power sums of the roots: replace each ei by ai and each pk by sk.
Expressing complete homogeneous symmetric polynomials in terms of power sums
The analogous relations involving complete homogeneous symmetric polynomials can be similarly developed, giving equations
and so forth, in which there are only plus signs. In terms of the complete Bell polynomial,
These expressions correspond exactly to the cycle index polynomials of the symmetric groups, if one interprets the power sums pi as indeterminates: the coefficient in the expression for hk of any monomial p1^(m1) p2^(m2) ⋯ pl^(ml) is equal to the fraction of all permutations of k that have m1 fixed points, m2 cycles of length 2, ..., and ml cycles of length l. Explicitly, this coefficient can be written as where ; this N is the number of permutations commuting with any given permutation of the given cycle type. The expressions for the elementary symmetric functions have coefficients with the same absolute value, but a sign equal to the sign of , namely (−1)^(m2+m4+⋯).
It can be proved by considering the following inductive step:
By analogy with the derivation of the generating function of the , we can also obtain the generating function of the , in terms of the power sums, as:
This generating function is thus the plethystic exponential of .
Expressing power sums in terms of elementary symmetric polynomials
One may also use Newton's identities to express power sums in terms of elementary symmetric polynomials, which does not introduce denominators:
The first four formulas were obtained by Albert Girard in 1629 (thus before Newton).
The general formula (for all positive integers m) is:
This can be conveniently stated in terms of ordinary Bell polynomials as
or equivalently as the generating function:
which is analogous to the Bell polynomial exponential generating function given in the previous subsection.
The multiple summation formula above can be proved by considering the following inductive step:
Expressing power sums in terms of complete homogeneous symmetric polynomials
Finally, one may use the variant identities involving complete homogeneous symmetric polynomials similarly to express power sums in terms of them:
and so on. Apart from the replacement of each ei by the corresponding hi, the only change with respect to the previous family of identities is in the signs of the terms, which in this case depend just on the number of factors present: the sign of the monomial is −(−1)^(m1+m2+m3+⋯). In particular, the above description of the absolute value of the coefficients applies here as well.
The general formula (for all non-negative integers m) is:
Expressions as determinants
One can obtain explicit formulas for the above expressions in the form of determinants, by considering the first n of Newton's identities (or their counterparts for the complete homogeneous polynomials) as linear equations in which the elementary symmetric functions are known and the power sums are unknowns (or vice versa), and applying Cramer's rule to find the solution for the final unknown. For instance, taking Newton's identities in the form
we consider and as unknowns, and solve for the final one, giving
Solving for instead of for is similar, as are the analogous computations for the complete homogeneous symmetric polynomials; in each case the details are slightly messier than the final results, which are (Macdonald 1979, p. 20):
Note that the use of determinants means that the formula for has additional minus signs compared to the one for , while the situation for the expanded form given earlier is the opposite. As remarked in (Littlewood 1950, p. 84), one can alternatively obtain the formula for by taking the permanent of the matrix for instead of the determinant, and more generally an expression for any Schur polynomial can be obtained by taking the corresponding immanant of this matrix.
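As a quick numerical check of the determinant expression for the elementary symmetric polynomials, the sketch below evaluates e_n = (1/n!)·det(M), where M has the power sums p_1, ..., p_n in its first column, the integers 1, ..., n−1 on the superdiagonal, and shifted power sums filling the entries below the diagonal. The helper name and test values are ours.

```python
import numpy as np
from math import factorial

def e_from_p_determinant(p):
    """e_n = det(M) / n!, with M built from the power sums p_1..p_n:
    first column p_1..p_n, superdiagonal 1..n-1, and M[i][j] = p_{i-j+1}
    (1-indexed subscripts) for the remaining entries on or below the diagonal."""
    n = len(p)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if j == i + 1:
                M[i][j] = i + 1           # superdiagonal entries 1, 2, ..., n-1
            elif j <= i:
                M[i][j] = p[i - j]        # shifted power sums
    return np.linalg.det(M) / factorial(n)

# For x = (1, 2, 3): p = (6, 14, 36) and e_3 should equal 1*2*3 = 6.
print(round(e_from_p_determinant([6, 14, 36]), 6))   # 6.0
```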
Derivation of the identities
Each of Newton's identities can easily be checked by elementary algebra; however, their validity in general needs a proof. Here are some possible derivations.
From the special case n = k
One can obtain the k-th Newton identity in k variables by substitution into
as follows. Substituting xj for t gives
Summing over all j gives
where the terms for i = 0 were taken out of the sum because p0 is (usually) not defined. This equation immediately gives the k-th Newton identity in k variables. Since this is an identity of symmetric polynomials (homogeneous) of degree k, its validity for any number of variables follows from its validity for k variables. Concretely, the identities in n < k variables can be deduced by setting k − n variables to zero. The k-th Newton identity in n > k variables contains more terms on both sides of the equation than the one in k variables, but its validity will be assured if the coefficients of any monomial match. Because no individual monomial involves more than k of the variables, the monomial will survive the substitution of zero for some set of n − k (other) variables, after which the equality of coefficients is one that arises in the k-th Newton identity in k (suitably chosen) variables.
Comparing coefficients in series
Another derivation can be obtained by computations in the ring of formal power series R, where R is Z[x1,..., xn], the ring of polynomials in n variables x1,..., xn over the integers.
Starting again from the basic relation
and "reversing the polynomials" by substituting 1/t for t and then multiplying both sides by tn to remove negative powers of t, gives
(the above computation should be performed in the field of fractions of R; alternatively, the identity can be obtained simply by evaluating the product on the left side)
Swapping sides and expressing the ai as the elementary symmetric polynomials they stand for gives the identity
One formally differentiates both sides with respect to t, and then (for convenience) multiplies by t, to obtain
where the polynomial on the right hand side was first rewritten as a rational function in order to be able to factor out a product out of the summation, then the fraction in the summand was developed as a series in t, using the formula
and finally the coefficient of each t j was collected, giving a power sum. (The series in t is a formal power series, but may alternatively be thought of as a series expansion for t sufficiently close to 0, for those more comfortable with that; in fact one is not interested in the function here, but only in the coefficients of the series.) Comparing coefficients of tk on both sides one obtains
which gives the k-th Newton identity.
As a telescopic sum of symmetric function identities
The following derivation, given essentially in (Mead, 1992), is formulated in the ring of symmetric functions for clarity (all identities are independent of the number of variables). Fix some k > 0, and define the symmetric function r(i) for 2 ≤ i ≤ k as the sum of all distinct monomials of degree k obtained by multiplying one variable raised to the power i with k − i distinct other variables (this is the monomial symmetric function mγ where γ is a hook shape (i,1,1,...,1)). In particular r(k) = pk; for r(1) the description would amount to that of ek, but this case was excluded since here monomials no longer have any distinguished variable. All products piek−i can be expressed in terms of the r(j) with the first and last case being somewhat special. One has
since each product of terms on the left involving distinct variables contributes to r(i), while those where the variable from pi already occurs among the variables of the term from ek−i contributes to r(i + 1), and all terms on the right are so obtained exactly once. For i = k one multiplies by e0 = 1, giving trivially
Finally the product p1ek−1 for i = 1 gives contributions to r(i + 1) = r(2) like for other values i < k, but the remaining contributions produce k times each monomial of ek, since any one of the variables may come from the factor p1; thus
The k-th Newton identity is now obtained by taking the alternating sum of these equations, in which all terms of the form r(i) cancel out.
Combinatorial proof
A short combinatorial proof of Newton's identities was given by Doron Zeilberger in 1984.
See also
Power sum symmetric polynomial
Elementary symmetric polynomial
Newton's inequalities
Symmetric function
Fluid solutions, an article giving an application of Newton's identities to computing the characteristic polynomial of the Einstein tensor in the case of a perfect fluid, and similar articles on other types of exact solutions in general relativity.
References
External links
Newton–Girard formulas on MathWorld
A Matrix Proof of Newton's Identities in Mathematics Magazine
Application on the number of real roots
A Combinatorial Proof of Newton's Identities by Doron Zeilberger
Isaac Newton
Group theory
Invariant theory
Linear algebra
Algebraic identities
Symmetric functions
Algebraic combinatorics
Galois theory | Newton's identities | [
"Physics",
"Mathematics"
] | 3,344 | [
"Symmetry",
"Group actions",
"Mathematical identities",
"Algebraic identities",
"Algebra",
"Combinatorics",
"Group theory",
"Fields of abstract algebra",
"Symmetric functions",
"Linear algebra",
"Algebraic combinatorics",
"Invariant theory"
] |
2,468,995 | https://en.wikipedia.org/wiki/Malonyl-CoA | Malonyl-CoA is a coenzyme A derivative of malonic acid.
Functions
It plays a key role in chain elongation in fatty acid biosynthesis and polyketide biosynthesis.
Cytosolic fatty acid biosynthesis
Malonyl-CoA provides 2-carbon units to fatty acids and commits them to fatty acid chain synthesis.
Malonyl-CoA is formed by carboxylating acetyl-CoA using the enzyme acetyl-CoA carboxylase. One molecule of acetyl-CoA joins with a molecule of bicarbonate, requiring energy supplied by ATP.
Malonyl-CoA is utilised in fatty acid biosynthesis by the enzyme malonyl coenzyme A:acyl carrier protein transacylase (MCAT). MCAT serves to transfer malonate from malonyl-CoA to the terminal thiol of holo-acyl carrier protein (ACP).
Mitochondrial fatty acid synthesis
Malonyl-CoA is formed in the first step of mitochondrial fatty acid synthesis (mtFASII) from malonic acid by malonyl-CoA synthetase (ACSF3).
Polyketide biosynthesis
MCAT is also involved in bacterial polyketide biosynthesis. The enzyme MCAT together with an acyl carrier protein (ACP), and a polyketide synthase (PKS) and chain-length factor heterodimer, constitutes the minimal PKS of type II polyketides.
Regulation
Malonyl-CoA is a highly regulated molecule in fatty acid synthesis; as such, it inhibits the rate-limiting step in beta-oxidation of fatty acids. Malonyl-CoA inhibits fatty acids from associating with carnitine by regulating the enzyme carnitine acyltransferase, thereby preventing them from entering the mitochondria, where fatty acid oxidation and degradation occur.
Related diseases
Malonyl-CoA plays a special role in the mitochondrial clearance of toxic malonic acid in the metabolic disorder combined malonic and methylmalonic aciduria (CMAMMA). In CMAMMA due to ACSF3, the decreased enzyme is malonyl-CoA synthetase, which generates malonyl-CoA from malonic acid; the malonyl-CoA can then be converted to acetyl-CoA by malonyl-CoA decarboxylase. In contrast, in CMAMMA due to malonyl-CoA decarboxylase deficiency, the decreased enzyme is malonyl-CoA decarboxylase, which converts malonyl-CoA to acetyl-CoA.
See also
MCAT (gene)
References
External links
Hope for new way to beat obesity
Metabolism
Thioesters of coenzyme A | Malonyl-CoA | [
"Chemistry",
"Biology"
] | 576 | [
"Biochemistry",
"Metabolism",
"Cellular processes"
] |
2,470,340 | https://en.wikipedia.org/wiki/Burgess%20reagent | The Burgess reagent (methyl N-(triethylammoniumsulfonyl)carbamate) is a mild and selective dehydrating reagent often used in organic chemistry. It was developed in the laboratory of Edward M. Burgess at Georgia Tech.
The Burgess reagent is used to convert secondary and tertiary alcohols with an adjacent proton into alkenes. Dehydration of primary alcohols does not work well. The reagent is soluble in common organic solvents and alcohol dehydration takes place with syn elimination through an intramolecular elimination reaction. The Burgess reagent is a carbamate and an inner salt. A general mechanism is shown below.
Preparation
The reagent is prepared from chlorosulfonylisocyanate by reaction with methanol and triethylamine in benzene:
References
Reagents for organic chemistry
Quaternary ammonium compounds
Carbamates
Zwitterions
Dehydrating agents
Methyl esters | Burgess reagent | [
"Physics",
"Chemistry"
] | 208 | [
"Matter",
"Zwitterions",
"Reagents for organic chemistry",
"Dehydrating agents",
"Ions"
] |
2,470,504 | https://en.wikipedia.org/wiki/Upsilon%20meson | The Upsilon meson (ϒ) is a quarkonium state (i.e. flavourless meson) formed from a bottom quark and its antiparticle. It was discovered by the E288 experiment team, headed by Leon Lederman, at Fermilab in 1977, and was the first particle containing a bottom quark to be discovered because it is the lightest that can be produced without additional massive particles. It has a lifetime of about 1.2×10⁻²⁰ s and a mass of about 9.46 GeV/c² in the ground state.
See also
Oops-Leon, an erroneously-claimed discovery of a similar particle at a lower mass in 1976.
The φ meson is the analogous state made from strange quarks.
The J/ψ meson is the analogous state made from charm quarks.
List of mesons
References
Mesons
Onia
Subatomic particles with spin 1 | Upsilon meson | [
"Physics"
] | 171 | [
"Particle physics stubs",
"Particle physics"
] |
2,472,170 | https://en.wikipedia.org/wiki/Tubocurarine%20chloride | Tubocurarine (also known as d-tubocurarine or DTC) is a toxic benzylisoquinoline alkaloid historically known for its use as an arrow poison. In the mid-1900s, it was used in conjunction with an anesthetic to provide skeletal muscle relaxation during surgery or mechanical ventilation. Safer alternatives, such as cisatracurium and rocuronium, have largely replaced it as an adjunct for clinical anesthesia and it is now rarely used.
History
Tubocurarine is a naturally occurring mono-quaternary alkaloid obtained from the bark of the Menispermaceous South American plant Chondrodendron tomentosum, a climbing vine known to the European world since the Spanish conquest of South America. Curare had been used as a source of arrow poison by South American natives to hunt animals, and they were able to eat the animals' contaminated flesh subsequently without any adverse effects because tubocurarine cannot easily cross mucous membranes. Thus, tubocurarine is effective only if given parenterally, as demonstrated by Bernard, who also showed that the site of its action was at the neuromuscular junction. Virchow and Munter confirmed the paralyzing action was limited to voluntary muscles.
Etymology
The word curare comes from a word in the Cariban languages. Tubocurarine is so-called because some of the plant extracts designated curare were stored, and subsequently shipped to Europe, in bamboo tubes. Likewise, curare stored in calabash containers was called calabash curare, although this was usually an extract not of Chondrodendron, but of the Strychnos species S. toxifera, containing a different alkaloid, namely toxiferine. Pot curare was generally a mixture of extracts from various genera in the families Menispermaceae and Strychnaceae. The tripartite classification into "tube", "calabash", and "pot" curares became untenable early on, due to inconsistencies in the use of the different types of vessels and the complexities of the dart poison recipes themselves.
Use in anesthesia
Griffith and Johnson are credited with pioneering the formal clinical introduction of tubocurarine as an adjunct to anesthetic practice on 23 January 1942, at the Montreal Homeopathic Hospital. In this sense, tubocurarine is the prototypical adjunctive neuromuscular non-depolarizing agent. However, others before Griffith and Johnson had attempted use of tubocurarine in several situations: some under controlled study conditions, while other attempts were not well controlled and remained unpublished. Regardless, all in all some 30,000 patients had been given tubocurarine by 1941, although it was Griffith and Johnson's 1942 publication that provided the impetus to the standard use of neuromuscular blocking agents in clinical anesthetic practice – a revolution that rapidly developed into the standard practice of "balanced" anesthesia: the triad of barbiturate hypnosis, light inhalational anesthesia and muscle relaxation. The technique as described by Gray and Halton was widely known as the "Liverpool technique", and became the standard anesthetic technique in England in the 1950s and 1960s for patients of all ages and physical status. Present clinical anesthetic practice still employs the central principle of balanced anesthesia, though with some differences to accommodate subsequent technological advances and the introduction of new and better gaseous anesthetic, hypnotic and neuromuscular blocking agents, and tracheal intubation, as well as monitoring techniques that were nonexistent in the day of Gray and Halton: pulse oximetry, capnography, peripheral nerve stimulation, noninvasive blood pressure monitoring, etc.
Chemical properties
Structurally, tubocurarine is a benzylisoquinoline derivative. When its structure was first elucidated in 1948, and for many years afterwards, it was incorrectly thought to be bis-quaternary: in other words, an N,N-dimethylated alkaloid. In 1970, the correct structure was finally established, showing one of the two nitrogens to be tertiary, so that the compound is actually a mono-N-methylated alkaloid.
Biosynthesis
Tubocurarine biosynthesis involves a radical coupling of the two enantiomers of N-methylcoclaurine. (R) and (S)-N-methylcoclaurine come from a Mannich-like reaction between dopamine and 4-hydroxyphenylacetaldehyde, facilitated by norcoclaurine synthase (NCS). Both dopamine and 4-hydroxyphenylacetaldehyde originate from L-tyrosine. Methylation of the amine and hydroxyl substituents are facilitated by S-adenosyl methionine (SAM). One methyl group is present on each nitrogen atom prior to the radical coupling. The additional methyl group is transferred to form tubocurarine, with its single quaternary N,N-dimethylamino group.
Biological effects
Without intervention, acetylcholine (ACh) in the peripheral nervous system activates skeletal muscles. Acetylcholine is produced in the body of the neuron by choline acetyltransferase and transported down the axon to the synaptic gap. Tubocurarine chloride acts as an antagonist for the nicotinic acetylcholine receptor (nAChr), meaning it blocks the receptor site from ACh. This may be due to the quaternary amino structural motif found on both molecules.
Clinical pharmacology
Unna et al. reported the effects of tubocurarine on humans:
Tubocurarine has a time of onset of around 5 minutes, which is relatively slow among neuromuscular-blocking drugs, and has a duration of action of 60 to 120 minutes. It also causes histamine release, now a recognized hallmark of the tetrahydroisoquinolinium class of neuromuscular blocking agents. Histamine release is associated with bronchospasm, hypotension, and salivary secretions, making the drug dangerous for asthmatics, children, and those who are pregnant or lactating. However, the main disadvantage in the use of tubocurarine is its significant ganglion-blocking effect, which manifests as hypotension in many patients; this constitutes a relative contraindication to its use in patients with myocardial ischaemia.
Because of the shortcomings of tubocurarine, much research effort was undertaken soon after its clinical introduction to find a suitable replacement. The efforts yielded a multitude of compounds born of structure-activity relationships developed from the tubocurarine molecule. Of the many compounds tried as replacements, only a few enjoyed as much popularity as tubocurarine: pancuronium, vecuronium, rocuronium, atracurium, and cisatracurium. Succinylcholine is a widely used muscle relaxant drug which acts by activating, instead of blocking, the ACh receptor.
The potassium channel blocker tetraethylammonium (TEA) has been shown to reverse the effects of tubocurarine. It is thought to do so by increasing ACh release, which counteracts the antagonistic effects of tubocurarine on the ACh receptor.
Use as spider bite treatment
Spiders of the genus Latrodectus have α-latrotoxin in their venom. The most well known spider in this genus is the black widow spider. α-latrotoxin causes the release of neurotransmitters into the synaptic gap, including acetylcholine. Bites are usually not fatal, but do cause a significant amount of pain in addition to muscle spasms. The venom is the most damaging to nerve endings, but the introduction of d-tubocurarine chloride blocks the nAChr, alleviating pain and muscle spasms while an antivenom can be administered.
Toxicology
An individual administered tubocurarine chloride will be unable to move any voluntary muscles, including the diaphragm. A large enough dose will therefore result in death from respiratory failure unless artificial ventilation is initiated. The LD50 values for mice and rabbits are 0.13 mg/kg and 0.146 mg/kg intravenously, respectively. Tubocurarine also releases histamine and causes hypotension.
References
Benzylisoquinoline alkaloids
Cyclophanes
Macrocycles
Muscle relaxants
Neuromuscular blockers
Neurotoxins
Nicotinic antagonists
Norsalsolinol ethers
Phenols
Quaternary ammonium compounds
Resorcinol ethers
Hydrochlorides | Tubocurarine chloride | [
"Chemistry"
] | 1,860 | [
"Organic compounds",
"Alkaloids by chemical classification",
"Tetrahydroisoquinoline alkaloids",
"Macrocycles",
"Neurochemistry",
"Neurotoxins"
] |
2,472,618 | https://en.wikipedia.org/wiki/Signed%20graph | In the area of graph theory in mathematics, a signed graph is a graph in which each edge has a positive or negative sign.
A signed graph is balanced if the product of edge signs around every cycle is positive. The name "signed graph" and the notion of balance appeared first in a mathematical paper of Frank Harary in 1953. Dénes Kőnig had already studied equivalent notions in 1936 under a different terminology but without recognizing the relevance of the sign group.
At the Center for Group Dynamics at the University of Michigan, Dorwin Cartwright and Harary generalized Fritz Heider's psychological theory of balance in triangles of sentiments to a psychological theory of balance in signed graphs.
Signed graphs have been rediscovered many times because they come up naturally in many unrelated areas. For instance, they enable one to describe and analyze the geometry of subsets of the classical root systems. They appear in topological graph theory and group theory. They are a natural context for questions about odd and even cycles in graphs. They appear in computing the ground state energy in the non-ferromagnetic Ising model; for this one needs to find a largest balanced edge set in Σ. They have been applied to data classification in correlation clustering.
Fundamental theorem
The sign of a path is the product of the signs of its edges. Thus a path is positive only if there are an even number of negative edges in it (where zero is even). In the mathematical balance theory of Frank Harary, a signed graph is balanced when every cycle is positive. Harary proves that a signed graph is balanced when (1) for every pair of nodes, all paths between them have the same sign, or (2) the vertices partition into a pair of subsets (possibly empty), each containing only positive edges, but connected by negative edges. It generalizes the theorem that an ordinary (unsigned) graph is bipartite if and only if every cycle has even length.
A simple proof uses the method of switching. Switching a signed graph means reversing the signs of all edges between a vertex subset and its complement. To prove Harary's theorem, one shows by induction that Σ can be switched to be all positive if and only if it is balanced.
A weaker theorem, but with a simpler proof, is that if every 3-cycle in a signed complete graph is positive, then the graph is balanced. For the proof, pick an arbitrary node n and place it and all those nodes that are linked to n by a positive edge in one group, called A, and all those linked to n by a negative edge in the other, called B. Since this is a complete graph, every two nodes in A must be friends and every two nodes in B must be friends, otherwise there would be a 3-cycle which was unbalanced. (Since this is a complete graph, any one negative edge would cause an unbalanced 3-cycle.) Likewise, all negative edges must go between the two groups.
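Harary's bipartition criterion translates directly into a linear-time balance test: try to assign each vertex a state +1 or −1 so that the endpoints of every positive edge agree and the endpoints of every negative edge differ, which succeeds exactly when the graph is balanced. The Python sketch below (the function name and edge-list format are ours) performs this check with a breadth-first search.

```python
from collections import deque

def is_balanced(num_vertices, signed_edges):
    """Harary's criterion: a signed graph is balanced iff the vertices can be
    given states s(v) = +1/-1 such that every edge (u, v, sign) satisfies
    s(u) * s(v) == sign.  Checked by a BFS over each connected component."""
    adj = [[] for _ in range(num_vertices)]
    for u, v, sign in signed_edges:
        adj[u].append((v, sign))
        adj[v].append((u, sign))
    state = [0] * num_vertices
    for start in range(num_vertices):
        if state[start] != 0:
            continue
        state[start] = 1
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, sign in adj[u]:
                expected = state[u] * sign
                if state[v] == 0:
                    state[v] = expected
                    queue.append(v)
                elif state[v] != expected:
                    return False            # a negative cycle has been closed
        # the vertices with state +1 and -1 form the two subsets of the theorem
    return True

# A positive triangle sharing a vertex with an all-negative triangle:
edges = [(0, 1, +1), (1, 2, +1), (0, 2, +1),   # balanced part
         (2, 3, -1), (3, 4, -1), (2, 4, -1)]   # negative 3-cycle -> unbalanced
print(is_balanced(5, edges))   # False
```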
Frustration
Frustration index
The frustration index (earlier called the line index of balance) of Σ is the smallest number of edges whose deletion makes Σ balanced; by a theorem of Harary, this equals the smallest number of edges whose sign reversal makes Σ balanced.
A second way of describing the frustration index is that it is the smallest number of edges that cover all negative cycles. This quantity has been called the negative cycle cover number.
There is another equivalent definition (which can be proved easily by switching). Give each vertex a value of +1 or −1; we call this a state of Σ. An edge is called satisfied if it is positive and both endpoints have the same value, or it is negative and the endpoints have opposite values. An edge that is not satisfied is called frustrated. The smallest number of frustrated edges over all states is the frustration index. This definition was first introduced in a different notation by Abelson and Rosenberg under the (obsolete) name complexity. The complement of such a set is a balanced subgraph of Σ with the most possible edges.
Finding the frustration index is an NP-hard problem.
One can see the NP-hard complexity by observing that the frustration index of an all-negative signed graph is the same as the maximum cut problem in graph theory, which is NP-hard.
The frustration index is important in a model of spin glasses, the mixed Ising model. In this model, the signed graph is fixed. A state consists of giving a "spin", either "up" or "down", to each vertex. We think of spin up as +1 and spin down as −1. Thus, each state has a number of frustrated edges. The energy of a state is larger when it has more frustrated edges, so a ground state is a state with the fewest frustrated edges. Thus, to find the ground state energy of Σ one has to find the frustration index.
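The state-based definition above translates directly into a brute-force computation: try every assignment of +1/−1 to the vertices and count the frustrated edges. The Python sketch below (names and the example graph are ours) does exactly that; it is exponential in the number of vertices, consistent with the NP-hardness noted earlier, and is meant only for tiny examples.

```python
from itertools import product

def frustration_index(num_vertices, signed_edges):
    """Smallest number of frustrated edges over all +1/-1 vertex states
    (brute force; an edge (u, v, sign) is satisfied when state[u]*state[v] == sign)."""
    best = len(signed_edges)
    for state in product((1, -1), repeat=num_vertices):
        frustrated = sum(1 for u, v, sign in signed_edges
                         if state[u] * state[v] != sign)
        best = min(best, frustrated)
    return best

# All-negative triangle: every state leaves at least one frustrated edge,
# matching the max-cut relation  |E| - maxcut = 3 - 2 = 1.
print(frustration_index(3, [(0, 1, -1), (1, 2, -1), (0, 2, -1)]))   # 1
```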
Frustration number
The analogous vertex number is the frustration number, defined as the smallest number of vertices whose deletion from Σ results in balance. Equivalently, one wants the largest order of a balanced induced subgraph of Σ.
Algorithmic problems
Three fundamental questions about a signed graph are: Is it balanced? What is the largest size of a balanced edge set in it? What is the smallest number of vertices that must be deleted to make it balanced? The first question is easy to solve in polynomial time. The second question is called the Frustration Index or Maximum Balanced Subgraph problem. It is NP-hard because its special case (when all edges of the graph are negative) is the NP-hard problem Maximum Cut. The third question, called the Frustration Number or Maximum Balanced Induced Subgraph problem, is also NP-hard.
Matroid theory
There are two matroids associated with a signed graph, called the signed-graphic matroid (also called the frame matroid or sometimes bias matroid) and the lift matroid, both of which generalize the cycle matroid of a graph. They are special cases of the same matroids of a biased graph.
The frame matroid (or signed-graphic matroid) M(G) has for its ground set the edge set E. An edge set is independent if each component contains either no circles or just one circle, which is negative. (In matroid theory a half-edge acts exactly like a negative loop.) A circuit of the matroid is either a positive circle, or a pair of negative circles together with a connecting simple path, such that the two circles are either disjoint (then the connecting path has one end in common with each circle and is otherwise disjoint from both) or share just a single common vertex (in this case the connecting path is that single vertex). The rank of an edge set S is n − b, where n is the number of vertices of G and b is the number of balanced components of S, counting isolated vertices as balanced components.
This matroid is the column matroid of the incidence matrix of the signed graph.
That is why it describes the linear dependencies of the roots of a classical root system.
The extended lift matroid L0(G) has for its ground set the set E0 the union of edge set E with an extra point, which we denote e0. The lift matroid L(G) is the extended lift matroid restricted to E. The extra point acts exactly like a negative loop, so we describe only the lift matroid. An edge set is independent if it contains either no circles or just one circle, which is negative. (This is the same rule that is applied separately to each component in the signed-graphic matroid.) A matroid circuit is either a positive circle or a pair of negative circles that are either disjoint or have just a common vertex. The rank of an edge set S is n − c + ε, where c is the number of components of S, counting isolated vertices, and ε is 0 if S is balanced and 1 if it is not.
Other kinds of "signed graph"
Sometimes the signs are taken to be +1 and −1. This is only a difference of notation, if the signs are still multiplied around a circle and the sign of the product is the important thing. However, there are two other ways of treating the edge labels that do not fit into signed graph theory.
The term signed graph is applied occasionally to graphs in which each edge has a weight, w(e) = +1 or −1. These are not the same kind of signed graph; they are weighted graphs with a restricted weight set. The difference is that weights are added, not multiplied. The problems and methods are completely different.
The name is also applied to graphs in which the signs function as colors on the edges. The significance of the color is that it determines various weights applied to the edge, and not that its sign is intrinsically significant. This is the case in knot theory, where the only significance of the signs is that they can be interchanged by the two-element group, but there is no intrinsic difference between positive and negative. The matroid of a sign-colored graph is the cycle matroid of the underlying graph; it is not the frame or lift matroid of the signed graph. The sign labels, instead of changing the matroid, become signs on the elements of the matroid.
In this article we discuss only signed graph theory in the strict sense. For sign-colored graphs see colored matroids.
Signed digraph
A signed digraph is a directed graph with signed arcs. Signed digraphs are far more complicated than signed graphs, because only the signs of directed cycles are significant. For instance, there are several definitions of balance, each of which is hard to characterize, in strong contrast with the situation for signed undirected graphs.
Signed digraphs should not be confused with oriented signed graphs. The latter are bidirected graphs, not directed graphs (except in the trivial case of all positive signs).
Vertex signs
A vertex-signed graph, sometimes called a marked graph, is a graph whose vertices are given signs. A circle is called consistent (but this is unrelated to logical consistency) or harmonious if the product of its vertex signs is positive, and inconsistent or inharmonious if the product is negative. There is no simple characterization of harmonious vertex-signed graphs analogous to Harary's balance theorem; instead, the characterization has been a difficult problem, best solved (even more generally) by Joglekar, Shah, and Diwan (2012).
It is often easy to add edge signs to the theory of vertex signs without major change; thus, many results for vertex-signed graphs (or "marked signed graphs") extend naturally to vertex-and-edge-signed graphs. This is notably true for the characterization of harmony by Joglekar, Shah, and Diwan (2012).
The difference between a marked signed graph and a signed graph with a state function (as in § Frustration) is that the vertex signs in the former are part of the essential structure, while a state function is a variable function on the signed graph.
Note that the term "marked graph" is widely used in Petri nets with a completely different meaning; see the article on marked graphs.
Coloring
As with unsigned graphs, there is a notion of signed graph coloring. Where a coloring of a graph is a mapping from the vertex set to the natural numbers, a coloring of a signed graph is a mapping from the vertex set to the integers.
The constraints on proper colorings come from the edges of the signed graph. The integers assigned to two vertices must be distinct if they are connected by a positive edge. The labels on adjacent vertices must not be additive inverses if the vertices are connected by a negative edge. There can be no proper coloring of a signed graph with a positive loop.
When restricting the vertex labels to the set of integers with magnitude at most a natural number k, the set of proper colorings of a signed graph is finite. The relation between the number of such proper colorings and k is a polynomial in k; when expressed in terms of it is called the chromatic polynomial of the signed graph. It is analogous to the chromatic polynomial of an unsigned graph.
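The definition above can be checked by direct enumeration for small signed graphs: the sketch below counts the proper colorings with colors drawn from {−k, ..., k} for increasing k, so the counts can be compared against a candidate chromatic polynomial. The function name and the example graph are ours.

```python
from itertools import product

def count_signed_colorings(num_vertices, signed_edges, k):
    """Count proper colorings with colors in {-k, ..., k}: endpoints of a
    positive edge must receive distinct colors, endpoints of a negative edge
    must not receive colors that are additive inverses of each other.
    (A positive loop therefore admits no proper coloring.)"""
    colors = range(-k, k + 1)
    count = 0
    for c in product(colors, repeat=num_vertices):
        ok = all((c[u] != c[v]) if sign > 0 else (c[u] != -c[v])
                 for u, v, sign in signed_edges)
        count += ok
    return count

# A single negative edge has (2k+1)^2 - (2k+1) proper colorings, a polynomial in k.
for k in range(4):
    print(k, count_signed_colorings(2, [(0, 1, -1)], k))   # 0, 6, 20, 42
```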
Applications
Social psychology
In social psychology, signed graphs have been used to model social situations, with positive edges representing friendships and negative edges enmities between nodes, which represent people. Then, for example, a positive 3-cycle is either three mutual friends, or two friends with a common enemy; while a negative 3-cycle is either three mutual enemies, or two enemies who share a mutual friend. According to balance theory, positive cycles are balanced and supposed to be stable social situations, whereas negative cycles are unbalanced and supposed to be unstable. According to the theory, in the case of three mutual enemies, this is because sharing a common enemy is likely to cause two of the enemies to become friends. In the case of two enemies sharing a friend, the shared friend is likely to choose one over the other and turn one of his or her friendships into an enemy.
Antal, Krapivsky and Redner consider social dynamics as the change in sign on an edge of a signed graph. The social relations with previous friends of a divorcing couple are used to illustrate the evolution of a signed graph in society. Another illustration describes the changing international alliances between European powers in the decades before the First World War. They consider local triad dynamics and constrained triad dynamics, where in the latter case a relationship change is made only when the total number of unbalanced triads is reduced. The simulation presumed a complete graph with random relations, with a random unbalanced triad selected for transformation. The evolution of the signed graph with N nodes under this process is studied and simulated to describe the stationary density of friendly links.
Balance theory has been severely challenged, especially in its application to large systems, on the theoretical ground that friendly relations tie a society together, while a society divided into two camps of enemies would be highly unstable.
Experimental studies have also provided only weak confirmation of the predictions of structural balance theory.
Spin glasses
In physics, signed graphs are a natural context for the nonferromagnetic Ising model, which is applied to the study of spin glasses.
Complex systems
Using an analytic method initially developed in population biology and ecology, but now used in many scientific disciplines, signed digraphs have found application in reasoning about the behavior of complex causal systems.
Such analyses answer questions about feedback at given levels of the system, and about the direction of variable responses given a perturbation to a system at one or more points, variable correlations given such perturbations, the distribution of variance across the system, and the sensitivity or insensitivity of particular variables to system perturbations.
Data clustering
Correlation clustering looks for natural clustering of data by similarity. The data points are represented as the vertices of a graph, with a positive edge joining similar items and a negative edge joining dissimilar items.
Neuroscience
The brain can be considered as a signed graph in which synchrony and anti-synchrony between the activity patterns of brain regions determine positive and negative edges. In this framework, the stability and energy of the brain network can be explored. Recently, the concept of frustration has also been used in brain network analysis to identify non-trivial assemblages of neural connections and to highlight the adjustable elements of the brain.
Generalizations
A signed graph is the special kind of gain graph in which the gain group has order 2. The pair (G, B(Σ)) determined by a signed graph Σ is a special kind of biased graph. The sign group has the special property, not shared by larger gain groups, that the edge signs are determined up to switching by the set B(Σ) of balanced cycles.
Notes
References
Matroid theory
Extensions and generalizations of graphs
Oriented matroids
Sign (mathematics) | Signed graph | [
"Mathematics"
] | 3,278 | [
"Sign (mathematics)",
"Mathematical objects",
"Graph theory",
"Combinatorics",
"Mathematical relations",
"Extensions and generalizations of graphs",
"Numbers",
"Matroid theory"
] |
6,018,334 | https://en.wikipedia.org/wiki/Bending%20moment | In solid mechanics, a bending moment is the reaction induced in a structural element when an external force or moment is applied to the element, causing the element to bend. The most common or simplest structural element subjected to bending moments is the beam. The diagram shows a beam which is simply supported (free to rotate and therefore lacking bending moments) at both ends; the ends can only react to the shear loads. Other beams can have both ends fixed (known as encastre beam); therefore each end support has both bending moments and shear reaction loads. Beams can also have one end fixed and one end simply supported. The simplest type of beam is the cantilever, which is fixed at one end and is free at the other end (neither simple nor fixed). In reality, beam supports are usually neither absolutely fixed nor absolutely rotating freely.
The internal reaction loads in a cross-section of the structural element can be resolved into a resultant force and a resultant couple. For equilibrium, the moment created by external forces/moments must be balanced by the couple induced by the internal loads. The resultant internal couple is called the bending moment while the resultant internal force is called the shear force (if it is transverse to the plane of element) or the normal force (if it is along the plane of the element). Normal force is also termed as axial force.
The bending moment at a section through a structural element may be defined as the sum of the moments about that section of all external forces acting to one side of that section. The forces and moments on either side of the section must be equal in order to counteract each other and maintain a state of equilibrium so the same bending moment will result from summing the moments, regardless of which side of the section is selected. If clockwise bending moments are taken as negative, then a negative bending moment within an element will cause "hogging", and a positive moment will cause "sagging". It is therefore clear that a point of zero bending moment within a beam is a point of contraflexure—that is, the point of transition from hogging to sagging or vice versa.
Moments and torques are measured as a force multiplied by a distance, so they have units of newton-metres (N·m) or pound-feet (lb·ft). The concept of bending moment is very important in engineering (particularly in civil and mechanical engineering) and physics.
Background
Tensile and compressive stresses increase proportionally with bending moment, but are also dependent on the second moment of area of the cross-section of a beam (that is, the shape of the cross-section, such as a circle, square or I-beam being common structural shapes). Failure in bending will occur when the bending moment is sufficient to induce tensile/compressive stresses greater than the yield stress of the material throughout the entire cross-section. In structural analysis, this bending failure is called a plastic hinge, since the full load carrying ability of the structural element is not reached until the full cross-section is past the yield stress. It is possible that failure of a structural element in shear may occur before failure in bending, however the mechanics of failure in shear and in bending are different.
Moments are calculated by multiplying the external vector forces (loads or reactions) by the vector distance at which they are applied. When analysing an entire element, it is sensible to calculate moments at both ends of the element, at the beginning, centre and end of any uniformly distributed loads, and directly underneath any point loads. Of course any "pin-joints" within a structure allow free rotation, and so zero moment occurs at these points as there is no way of transmitting turning forces from one side to the other.
It is more common to use the convention that a clockwise bending moment to the left of the point under consideration is taken as positive. This then corresponds to the second derivative of a function which, when positive, indicates a curvature that is 'lower at the centre' i.e. sagging. When defining moments and curvatures in this way calculus can be more readily used to find slopes and deflections.
Critical values within the beam are most commonly annotated using a bending moment diagram, where negative moments are plotted to scale above a horizontal line and positive below. Bending moment varies linearly over unloaded sections, and parabolically over uniformly loaded sections.
Engineering descriptions of the computation of bending moments can be confusing because of unexplained sign conventions and implicit assumptions. The descriptions below use vector mechanics to compute moments of force and bending moments in an attempt to explain, from first principles, why particular sign conventions are chosen.
Computing the moment of force
An important part of determining bending moments in practical problems is the computation of moments of force.
Let be a force vector acting at a point A in a body. The moment of this force about a reference point (O) is defined as
where is the moment vector and is the position vector from the reference point (O) to the point of application of the force (A). The symbol indicates the vector cross product. For many problems, it is more convenient to compute the moment of force about an axis that passes through the reference point O. If the unit vector along the axis is , the moment of force about the axis is defined as
where indicates the vector dot product.
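These two formulas are straightforward to evaluate numerically. The following sketch, with made-up numbers, computes the moment of a force about a reference point as a cross product and the scalar moment about an axis as the dot product of that moment with the axis's unit vector.

```python
import numpy as np

# Moment of a force F applied at point A, taken about the reference point O,
# and its component about an axis through O with unit direction e.
O = np.array([0.0, 0.0, 0.0])
A = np.array([2.0, 0.0, 0.0])          # metres (illustrative values)
F = np.array([0.0, -10.0, 0.0])        # newtons (a downward force in the xy-plane)
e = np.array([0.0, 0.0, 1.0])          # axis direction (here the z-axis)

M = np.cross(A - O, F)                  # moment vector  r x F
M_axis = np.dot(e, M)                   # scalar moment about the axis
print(M, M_axis)                        # (0, 0, -20) N·m and -20.0 about the z-axis
```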
Example
The adjacent figure shows a beam that is acted upon by a force . If the coordinate system is defined by the three unit vectors , we have the following
Therefore,
The moment about the axis is then
Sign conventions
The negative value suggests that a moment that tends to rotate a body clockwise around an axis should have a negative sign. However, the actual sign depends on the choice of the three axes . For instance, if we choose another right handed coordinate system with , we have
Then,
For this new choice of axes, a positive moment tends to rotate body clockwise around an axis.
Computing the bending moment
In a rigid body or in an unconstrained deformable body, the application of a moment of force causes a pure rotation. But if a deformable body is constrained, it develops internal forces in response to the external force so that equilibrium is maintained. An example is shown in the figure below. These internal forces will cause local deformations in the body.
For equilibrium, the sum of the internal force vectors is equal to the negative of the sum of the applied external forces, and the sum of the moment vectors created by the internal forces is equal to the negative of the moment of the external force. The internal force and moment vectors are oriented in such a way that the total force (internal + external) and moment (external + internal) of the system is zero. The internal moment vector is called the bending moment.
Though bending moments have been used to determine the stress states in arbitrarily shaped structures, the physical interpretation of the computed stresses is problematic. However, bending moments in beams and plates have a straightforward interpretation as the stress resultants in a cross-section of the structural element. For example, in a beam in the figure, the bending moment vector due to stresses in the cross-section A perpendicular to the x-axis is given by
Expanding this expression we have,
We define the bending moment components as
The internal moments are computed about an origin that is at the neutral axis of the beam or plate and the integration is through the thickness ()
Example
In the beam shown in the adjacent figure, the external forces are the applied force at point A () and the reactions at the two support points O and B ( and ).
For this situation, the only non-zero component of the bending moment is
where is the height in the direction of the beam. The minus sign is included to satisfy the sign convention.
In order to calculate , we begin by balancing the forces, which gives one equation with the two unknown reactions,
To obtain each reaction a second equation is required. Balancing the moments about any arbitrary point X would give us a second equation we can use to solve for and in terms of . Balancing about the point O is simplest but let's balance about point A just to illustrate the point, i.e.
If is the length of the beam, we have
Evaluating the cross-products:
If we solve for the reactions we have
Now to obtain the internal bending moment at X we sum all the moments about the point X due to all the external forces to the right of X (on the positive side), and there is only one contribution in this case,
We can check this answer by looking at the free body diagram and the part of the beam to the left of point X, and the total moment due to these external forces is
If we compute the cross products, we have
Thanks to the equilibrium, the internal bending moment due to external forces to the left of X must be exactly balanced by the internal turning force obtained by considering the part of the beam to the right of X
which is clearly the case.
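For a concrete sense of the numbers, the sketch below works the same kind of example with explicit values: a simply supported beam with a single downward point load, reactions obtained from force and moment balance, and the (sagging-positive) bending moment evaluated along the span. The span, load, and load position are illustrative values, not taken from the figure.

```python
import numpy as np

# Simply supported beam (supports at O: x = 0 and B: x = L) carrying a single
# downward point load F at x = a; illustrative values only.
L, a, F = 4.0, 1.5, 10.0               # m, m, kN

R_O = F * (L - a) / L                  # reaction at O, from moment balance about B
R_B = F * a / L                        # reaction at B, from vertical force balance

def bending_moment(x):
    """Sagging-positive bending moment at a section located at x."""
    return R_O * x if x <= a else R_O * x - F * (x - a)

for x in np.linspace(0.0, L, 9):
    print(f"x = {x:4.2f} m   M = {bending_moment(x):6.3f} kN·m")
# The maximum, F*a*(L - a)/L, occurs under the load, and the moment varies
# linearly over each unloaded segment, as stated earlier in the article.
```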
Sign convention
In the above discussion, it is implicitly assumed that the bending moment is positive when the top of the beam is compressed. That can be seen if we consider a linear distribution of stress in the beam and find the resulting bending moment. Let the top of the beam be in compression with a stress and let the bottom of the beam have a stress . Then the stress distribution in the beam is . The bending moment due to these stresses is
where is the area moment of inertia of the cross-section of the beam. Therefore, the bending moment is positive when the top of the beam is in compression.
Many authors follow a different convention in which the stress resultant is defined as
In that case, positive bending moments imply that the top of the beam is in tension. Of course, the definition of top depends on the coordinate system being used. In the examples above, the top is the location with the largest -coordinate.
See also
Buckling
Deflection including deflection of a beam
Twisting moment
Shear and moment diagrams
Stress resultants
First moment of area
Influence line
Second moment of area
List of area moments of inertia
Wing bending relief
References
External links
Stress resultants for beams
Free online Calculation tools for bending moment
Beam theory
Force
Continuum mechanics
Civil engineering
Moment (physics)
Mechanical quantities | Bending moment | [
"Physics",
"Mathematics",
"Engineering"
] | 2,092 | [
"Mechanical quantities",
"Force",
"Physical quantities",
"Continuum mechanics",
"Quantity",
"Mass",
"Classical mechanics",
"Construction",
"Civil engineering",
"Mechanics",
"Wikipedia categories named after physical quantities",
"Matter",
"Moment (physics)"
] |
6,018,468 | https://en.wikipedia.org/wiki/Form%20factor%20%28quantum%20field%20theory%29 | In elementary particle physics and mathematical physics, in particular in effective field theory, a form factor is a function that encapsulates the properties of a certain particle interaction without including all of the underlying physics, but instead, providing the momentum dependence of suitable matrix elements. It is further measured experimentally in confirmation or specification of a theory—see experimental particle physics.
Photon–nucleon example
For example, at low energies the interaction of a photon with a nucleon is a very complicated calculation involving interactions between the photon and a sea of quarks and gluons, and often the calculation cannot be fully performed from first principles. Often in this context, form factors are also called "structure functions", since they can be used to describe the structure of the nucleon.
However, the generic Lorentz-invariant form of the matrix element for the electromagnetic current interaction is known,
where q represents the photon momentum (equal in magnitude to E/c, where E is the energy of the photon). The three functions are associated with the electric and magnetic form factors for this interaction, and are routinely measured experimentally; these three effective vertices can then be used to check, or to perform, calculations that would otherwise be too difficult to carry out from first principles. This matrix element then serves to determine the transition amplitude involved in the scattering interaction or the respective particle decay—cf. Fermi's golden rule.
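For orientation, a standard textbook parametrization (an assumption here, since the exact decomposition in the stripped formula above is not recoverable) writes the nucleon electromagnetic vertex as Γ^μ(q) = γ^μ F1(q²) + iσ^{μν} q_ν F2(q²) / (2M_N), with F1 and F2 the Dirac and Pauli form factors and M_N the nucleon mass; the experimentally quoted Sachs electric and magnetic form factors are the combinations G_E = F1 + q² F2 / (4M_N²) and G_M = F1 + F2.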
In general, the Fourier transforms of form factor components correspond to electric charge or magnetic profile space distributions (such as the charge radius) of the hadron involved. The analogous QCD structure functions are a probe of the quark and gluon distributions of nucleons.
See also
Structure function
Atomic form factor
Electric form factor
Magnetic form factor
Photon structure function
Quantum field theory
Standard model
Quantum mechanics
Special relativity
Charge radius
References
Wilson, R. (1969). "Form factors of elementary particles", Physics today 22 p 47,
Charles Perdrisat and Vina Punjabi (2010). "Nucleon Form factors", Scholarpedia 5(8): 10204. online article
Quantum field theory | Form factor (quantum field theory) | [
"Physics"
] | 430 | [
"Quantum field theory",
"Quantum mechanics",
"Quantum physics stubs"
] |
6,020,635 | https://en.wikipedia.org/wiki/Hannay%20angle | In classical mechanics, the Hannay angle is a mechanics analogue of the geometric phase (or Berry phase). It was named after John Hannay of the University of Bristol, UK. Hannay first described the angle in 1985, extending the ideas of the recently formalized Berry phase to classical mechanics.
Consider a one-dimensional system moving in a cycle, like a pendulum. Now slowly vary a parameter of the system, like pulling and pushing on the string of a pendulum. We can picture the motion of the system as having a fast oscillation and a slow oscillation. The fast oscillation is the motion of the pendulum, and the slow oscillation is the motion of our pulling on its string. If we picture the system in phase space, its motion sweeps out a torus.
The adiabatic theorem in classical mechanics states that the action variable, which corresponds to the phase space area enclosed by the system's orbit, remains approximately constant. Thus, after one slow oscillation period, the fast oscillation is back to the same cycle, but its phase on the cycle has changed during the time. The phase change has two leading orders.
The first order is the "dynamical angle", which is simply the time integral of the instantaneous frequency over the slow cycle. This angle depends on the precise details of the motion, and it grows in proportion to the long period of the slow variation.
The second order is Hannay's angle, which surprisingly is independent of the precise details of how the parameter is varied. It depends on the trajectory traced out by the parameter, but not on how fast or slowly that trajectory is traversed. It is of order one, independent of the duration of the slow cycle.
Hannay angle in classical mechanics
The Hannay angle is defined in the context of action-angle coordinates. In an initially time-invariant system, an action variable is a constant. After introducing a periodic perturbation , the action variable becomes an adiabatic invariant, and the Hannay angle for its corresponding angle variable can be calculated according to the path integral that represents an evolution in which the perturbation gets back to the original value
where and are canonical variables of the Hamiltonian, and is the symplectic Hamiltonian 2-form.
Example
Foucault pendulum
The Foucault pendulum is an example from classical mechanics that is sometimes also used to illustrate the Berry phase. Below we study the Foucault pendulum using action-angle variables. For simplicity, we will avoid using the Hamilton–Jacobi equation, which is employed in the general protocol.
We consider a plane pendulum with frequency under the effect of Earth's rotation whose angular velocity is with amplitude denoted as . Here, the direction points from the center of the Earth to the pendulum. The Lagrangian for the pendulum is
The corresponding motion equation is
We then introduce an auxiliary variable that is in fact an angle variable. We now have an equation for :
From its characteristic equation
we obtain its characteristic root (we note that )
The solution is then
After the Earth rotates one full rotation that is , we have the phase change for
The first term is due to the dynamic effect of the pendulum and is termed the dynamic phase, while the second term represents a geometric phase that is essentially the Hannay angle
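As a purely illustrative numerical check (a minimal sketch assuming the standard small-oscillation Foucault equations in the rotating Earth frame; the frequencies, time step and initial conditions below are invented and do not come from the derivation above), the slow precession of the swing plane at the rate Ω sin(latitude) can be observed by direct integration:
```python
import numpy as np

# Small-oscillation Foucault pendulum in the rotating Earth frame:
#   x'' - 2*Oz*y' + w0**2 * x = 0,   y'' + 2*Oz*x' + w0**2 * y = 0,
# where Oz = Omega_earth * sin(latitude). The swing plane precesses slowly at the rate -Oz.
w0, Oz = 2.0, 0.01                 # fast pendulum frequency, slow rotation rate (arbitrary units)
dt, steps = 1.0e-3, 200_000
x, y, vx, vy = 1.0, 0.0, 0.0, 0.0
for _ in range(steps):             # semi-implicit Euler keeps the fast oscillation stable
    ax = 2.0 * Oz * vy - w0**2 * x
    ay = -2.0 * Oz * vx - w0**2 * y
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt
t = steps * dt
print("expected swing-plane angle (mod pi):", (-Oz * t) % np.pi)
print("measured swing-plane angle (mod pi):", (0.5 * np.arctan2(2 * x * y, x * x - y * y)) % np.pi)
```
The two printed angles agree to within the integration error, showing the geometric rotation of the swing plane accumulating on top of the fast dynamical phase.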
Rotation of a rigid body
A free rigid body tumbling in free space has two conserved quantities: the energy and the angular momentum vector. Viewed from within the rigid body's frame, the angular momentum direction moves about, but its length is preserved. After a certain period, the angular momentum direction returns to its starting point.
Viewed in the inertial frame, the body has undergone a rotation (since all elements in SO(3) are rotations). A classical result states that during this period the body has rotated by an angle
where is the solid angle swept by the angular momentum direction as viewed from within the rigid body's frame.
Other examples
The heavy top.
The orbit of earth, periodically perturbed by the orbit of Jupiter.
The rotational transform associated with the magnetic surfaces of a toroidal magnetic field with a nonplanar axis.
References
External links
Professor John H. Hannay: Research Highlights. Department of Physics, University of Bristol.
Classical mechanics | Hannay angle | [
"Physics"
] | 826 | [
"Classical mechanics stubs",
"Mechanics",
"Classical mechanics"
] |
6,024,570 | https://en.wikipedia.org/wiki/Bandwidth%20allocation | Bandwidth allocation is the process of assigning radio frequencies to different applications. The radio spectrum is a finite resource, which means there is great need for an effective allocation process. In the United States, the Federal Communications Commission or FCC has the responsibility of allocating discrete portions of the spectrum, or bands, to various industries. The FCC did this recently, when it shifted the location of television broadcasting on the spectrum in order to open up more space for mobile data. Different bands of spectrum are able to transmit more data than others, and some bands of the spectrum transmit a clearer signal than others. Bands that are particularly fast or that have long range are of critical importance for companies that intend to operate a business involving wireless communications.
FCC methods
Auctions
The FCC generally uses auctions to allocate bandwidth between companies. Some economists believe, based on auction theory, that auctions are the most efficient method of allocating resources. Due to the differences in the amount of data each band can transmit and the clarity of the signal, auctions allow the more desirable bands to sell for more. The United States currently auctions off bands that then become the property of the purchaser. The FCC spectrum auctions have multiple rounds of bidding, as opposed to each party submitting one sealed bid. The FCC, when auctioning multiple bands, auctions them simultaneously. This allows for a more efficient bidding process, and keeps bands being auctioned at the end of the auction from being over- or undervalued. An example of this practice was the 700 MHz auction in 2008. While this method raises billions of dollars for the government, there is concern that smaller companies may be priced out of the market and therefore rendered unable to compete with large firms. This would reduce the number of points of view in the communications industry, which would violate one of the principles of the FCC: to protect the public interest. To help mitigate this concern, the FCC often sets aside a portion of the spectrum being auctioned so that it can only be bid on by smaller industry players.
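A highly simplified toy sketch of the simultaneous multiple-round format (not the actual FCC auction rules, which include activity requirements, minimum increments and withdrawal penalties; the bidders, bands and valuations below are hypothetical):
```python
# Toy simultaneous multiple-round ascending auction: all bands stay open until a round
# passes with no new bids. Real FCC rules (activity requirements, increments, withdrawals)
# are far more elaborate; bidders, bands and valuations here are hypothetical.
def run_auction(bands, bidders, increment=1.0, max_rounds=50):
    standing = {b: (None, 0.0) for b in bands}      # band -> (current high bidder, price)
    for _ in range(max_rounds):
        new_bid = False
        for name, valuations in bidders.items():
            for band, value in valuations.items():
                holder, price = standing[band]
                ask = price + increment
                if holder != name and ask <= value:  # bid only if still profitable
                    standing[band] = (name, ask)
                    new_bid = True
        if not new_bid:                              # no activity closes every band at once
            break
    return standing

bands = ["block A", "block B"]
bidders = {"CarrierX": {"block A": 10.0, "block B": 4.0},
           "CarrierY": {"block A": 7.0, "block B": 6.0}}
print(run_auction(bands, bidders))
```
Because all bands stay open until a round passes with no new bids, a bidder priced out of one band can redirect its budget to another before anything closes, which is the efficiency argument for auctioning the bands simultaneously.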
Lotteries
Another method used to allocate bands of frequencies was lotteries. Lotteries were used by the FCC in the 1980s. A benefit of lotteries was that they gave all parties a chance at winning, unlike auctions, which favor parties with more money. By giving all parties a chance, it was believed that the public interest was better served. Some disadvantages of the lottery method were that some firms would engage in rent-seeking behavior and try to get multiple licenses that they did not intend to use, but only intended to sell to another firm. In this situation not only were firms using rent-seeking behavior on a public resource, but the negotiations between firms could go on for years, meaning that frequencies were not being used and the public interest was not being served.
Comparative hearings
A third method used to allocate bands is the administrative process, also called comparative hearings. This method was used primarily before 1982. In this method all interested firms would make a presentation about why they should receive the license for that band of frequencies. One advantage of this method is its flexibility: the FCC can use different criteria for different bands, which allows it to ensure that the public interest is acknowledged. There are also disadvantages to this method. A primary disadvantage is that the government does not raise revenue from hearings, as it would under other methods such as auctions. Along with the flexibility that the method allows, it can also cause a lack of transparency, because the criteria the decision is based on can differ from case to case. Another disadvantage is that the hearings process can take a long time to come to a conclusion.
The FCC is also responsible for reallocating bands of frequencies to different uses. As new technologies develop, the demand for frequency bands changes and makes some bands more desirable than before. When this occurs, the FCC may decide to move an application to a different band of spectrum to make room for something else. In this case the FCC gives the existing application several years to prepare for the transition. An example of this transition occurred when the FCC reallocated the 700 MHz band from broadcast television to mobile phone applications. The FCC first voted to reallocate the band in 2002; however, the broadcast television firms were not required to stop broadcasting until February 2009.
Limitations of bandwidth
The exponential increase in mobile data traffic during the decades of the 1990s and 2000s has led to the massive deployment of wireless systems. As a consequence, the limited available RF spectrum is subject to aggressive spatial reuse, and co-channel interference has become a major capacity-limiting factor. Therefore, there have been many independent warnings of a looming "RF spectrum crisis" as the mobile data demands continue to increase while the network spectral efficiency saturates despite newly introduced standards and great technological advancements in the field. It is estimated that by 2017, more than 11 exabytes of data traffic will have to be transferred through mobile networks every month. A possible solution is the replacement of some RF technologies, like Wi-Fi, by others that do not use RF, like Li-Fi, as proposed by the Li-Fi Consortium.
Data crunch
The radio frequency spectrum is a limited natural resource which is increasingly in demand from a large and growing number of services such as fixed, mobile, broadcasting, amateur, space research, emergency telecommunications, meteorology, global positioning systems, environmental monitoring and communication services – that ensure safety of life on land, at sea and in the skies. Un-coordinated use can lead to malfunctioning of telecommunication services. ITU-R plays a key role in ensuring that radio communications function properly. In its capacity as the unique global radio spectrum manager, ITU-R identifies and harmonizes spectrum for use by wireless broadband systems, ensuring that these valuable frequencies are used efficiently and without interference from other radio systems. It allocates spectrum for communications (including mobile and broadcasting) and satellite communications, as well as spectrum for advanced aeronautical communications and global maritime issues, and it protects frequencies for Earth-exploration satellites that monitor resources, emergencies, meteorology and climate change. Telecom services are converging, and actors in the ICT world must adapt to all-IP (all data) networks. Data usage over wireless networks is rapidly increasing as more consumers surf the web, check email, and watch video on mobile devices. Moreover, according to Cisco, global mobile data traffic was projected to rise sixty-six-fold by 2013, with video accounting for the lion's share of this increase in traffic. The evolution in data traffic foresees a future “data crunch”. In wireless services, this “data crunch” is putting further pressure on a more efficient use of spectrum. In the United States, according to FCC Chairman Julius Genachowski, "The explosive growth in mobile communications is outpacing our ability to keep up. If no proactive course is taken to update spectrum policies for the 21st century, limits will be reached." Some countries are already adapting to the impending crisis by investing in broadband and reassigning spectrum bands. ITU is raising awareness to promote investment in broadband and keeps working to improve spectrum management worldwide. However, the argument about a looming bandwidth crunch is disputed by some observers. Former FCC official Uzoma Onyeije conducted a study that questions the existence of a broadband spectrum crisis, and goes on to suggest alternatives to existing networks that would mitigate the need to reallocate spectrum. Onyeije argues that before claiming that a “spectrum crisis” exists, carriers should leverage available marketplace solutions to relieve pressure on the current infrastructure, namely upgrading network technology, adopting fair-use policies, migrating voice to internet protocol, leveraging consumer infrastructure, enhancing carrier infrastructure, packet prioritization, caching, channel bonding, and encouraging the development of bandwidth-sensitive applications and devices.
Alternatively, the User-in-the-loop paradigm mitigates the data crunch by shaping the demand side by involving all the users, which makes expensive over-provisioning obsolete.
Bandwidth allocation can also be used in reference to the computing industry, in scenarios such as allocating bandwidth to a web site running on a server, or allocating bandwidth to a computer on a network. Allocations in computing are often administered or enforced by terminating or temporarily suspending access once the allocated bandwidth has been utilized. Setting the allocation higher increases download speed and the connectivity of other devices on the network.
Control of bandwidth allocation
United States
The Federal Communications Commission (FCC) is an independent agency of the United States government that is responsible for allocating portions of the wireless spectrum for broadband, public safety, and the media.
Egypt
Unlike the government of the U.S., the government of Egypt does not have its own communications infrastructure; private companies operate their own communication infrastructure. This non-government-controlled allocation of communications became an unprecedented issue of discussion during the Egyptian social protests that began on January 25, 2011. The Egyptian government shut down all forms of communication, including the internet and all on-line services. At first, the Egyptian government blocked internet data usage for smartphones and BlackBerrys, as well as social media websites such as Facebook, Twitter, Instagram and YouTube. It also eventually cut off mobile phone service. This was possible even though the Egyptian government does not control communications and no agencies are mandated to control them. The cooperation of the internet service providers with the Egyptian government was necessary because they would otherwise have had difficulty running their business: if the government calls up a service provider and makes a request it frames as being within its legal rights, the provider gives in to the government's demands, even if the demands are illegal according to the law, since it must follow the government's requests in order to conduct business in that country.
NTIA spectrum management & policy
Office of Spectrum Management (OSM)
The Office of Spectrum Management (OSM) is solely responsible for managing the United States Federal Government's usage of the radio frequency spectrum. OSM works together with its sub-office, the Interdepartment Radio Advisory Committee (IRAC), to execute various operations for Federal Government use.
OSM and the IRAC collaborate to establish and issue policy regarding allocations and regulations governing the Federal Government's spectrum usage; develop plans for the peacetime and wartime use of the spectrum; prepare for, participate in, and implement the results of international radio conferences; assign frequencies, including government-specific frequencies; and maintain Federal agencies' new telecommunications systems, certifying that spectrum will be available. Additionally, the OSM together with the IRAC provides the technical engineering expertise needed to perform specific spectrum-resource assessments and the automated computer capabilities needed to carry out these investigations; participates in all aspects of the Federal Government's communications-related emergency readiness activities; and participates in Federal Government telecommunication automated information systems security activities.
U.S. Federal Government spectrum management – Spectrum policy for 21st century
The 21st century has produced a society of wireless communications that has become a key element of a free information society. Due to the modern need for fast and reliable information and communication, the United States Federal Government uses the national radio communications services to help ensure national and homeland defense, public safety and first-responder services, and to support jobs revolving around research and service provision.
The President has additionally established positions on spectrum management policy for both Federal and non-Federal usage. The National Telecommunications and Information Administration (NTIA) continues the annual regulation of spectrum bandwidth, specifically for Federal usage. Additionally, an Executive Memorandum issued directly by the President states a policy of continued improvement of spectrum management within the United States.
Spectrum Policy Task Force
Established in June 2002, the Spectrum Policy Task Force was created to help the Federal Communications Commission (FCC) understand the constantly changing forces acting upon spectrum policy. The Spectrum Policy Task Force ultimately works to maximize the public access, usage and benefits that derive from the radio spectrum.
The exact tasks of the Spectrum Policy Task Force include providing specific information and recommendations to the FCC on ways of evolving the current “command and control” (C&C) approach to spectrum policy. The Spectrum Policy Task Force also specializes in assisting the FCC in addressing spectrum issues such as technical device/signal interference, spectrum efficiency, and effective public safety communications for domestic and international spectrum policies.
See also
Bandwidth allocation protocol
References
Radio spectrum
Radio regulations | Bandwidth allocation | [
"Physics"
] | 2,498 | [
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
6,025,205 | https://en.wikipedia.org/wiki/Groundwater%20recharge | Groundwater recharge or deep drainage or deep percolation is a hydrologic process, where water moves downward from surface water to groundwater. Recharge is the primary method through which water enters an aquifer. This process usually occurs in the vadose zone below plant roots and is often expressed as a flux to the water table surface. Groundwater recharge also encompasses water moving away from the water table farther into the saturated zone. Recharge occurs both naturally (through the water cycle) and through anthropogenic processes (i.e., "artificial groundwater recharge"), where rainwater and/or reclaimed water is routed to the subsurface.
The most common methods to estimate recharge rates are: chloride mass balance (CMB); soil physics methods; environmental and isotopic tracers; groundwater-level fluctuation methods; water balance (WB) methods (including groundwater models (GMs)); and the estimation of baseflow (BF) to rivers.
Processes
Diffused or focused mechanisms
Groundwater recharge can occur through diffuse or focused mechanisms. Diffuse recharge occurs when precipitation infiltrates through the soil to the water table, and is by definition distributed over large areas. Focused recharge occurs where water leaks from surface water sources (rivers, lakes, wadis, wetlands) or land surface depressions, and generally becomes more dominant with aridity.
Natural recharge
Water is recharged naturally by rain and snow melt and to a smaller extent by surface water (rivers and lakes). Recharge may be impeded somewhat by human activities including paving, development, or logging. These activities can result in loss of topsoil resulting in reduced water infiltration, enhanced surface runoff and reduction in recharge. Use of groundwater, especially for irrigation, may also lower the water tables. Groundwater recharge is an important process for sustainable groundwater management, since the volume-rate abstracted from an aquifer in the long term should be less than or equal to the volume-rate that is recharged.
Recharge can help move excess salts that accumulate in the root zone to deeper soil layers, or into the groundwater system. Tree roots increase water infiltration toward groundwater, reducing surface runoff. Flooding temporarily increases river-bed permeability by moving clay soils downstream, and this increases aquifer recharge.
Wetlands
Wetlands help maintain the level of the water table and exert control on the hydraulic head. This provides force for groundwater recharge and discharge to other waters as well. The extent of groundwater recharge by a wetland is dependent upon soil, vegetation, site, perimeter to volume ratio, and water table gradient. Groundwater recharge occurs through mineral soils found primarily around the edges of wetlands. The soil under most wetlands is relatively impermeable. A high perimeter to volume ratio, such as in small wetlands, means that the surface area through which water can infiltrate into the groundwater is high. Groundwater recharge is typical in small wetlands such as prairie potholes, which can contribute significantly to recharge of regional groundwater resources. Researchers have discovered groundwater recharge of up to 20% of wetland volume per season.
Artificial groundwater recharge
Managed aquifer recharge (MAR) strategies to augment freshwater availability include streambed channel modification, bank filtration, water spreading and recharge wells. A facility in Orange County, California cleans and injects 100 million gallons per day, or roughly 36 billion gallons per year.
Artificial groundwater recharge is becoming increasingly important in India, where over-pumping of groundwater by farmers has led to underground resources becoming depleted. In 2007, on the recommendations of the International Water Management Institute, the Indian government allocated funding for dug-well recharge projects (a dug-well is a wide, shallow well, often lined with concrete) in 100 districts within seven states where water stored in hard-rock aquifers had been over-exploited. Another environmental issue is the disposal of waste through the water flux, such as runoff from dairy farms, industry, and urban areas.
Pollution in stormwater run-off collects in retention basins. Concentrating degradable contaminants can accelerate biodegradation. However, where and when water tables are high this affects appropriate design of detention ponds, retention ponds and rain gardens.
Depression-focused recharge
If water falls uniformly over a field such that field capacity of the soil is not exceeded, then negligible water percolates to groundwater. If instead water puddles in low-lying areas, the same water volume concentrated over a smaller area may exceed field capacity resulting in water that percolates down to recharge groundwater. The larger the relative contributing runoff area is, the more focused infiltration is. The recurring process of water that falls relatively uniformly over an area, flowing to groundwater selectively under surface depressions is depression focused recharge. Water tables rise under such depressions.
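A hedged numerical illustration (all figures invented for the example): suppose 25 mm of rain falls on soil whose root zone can still hold 40 mm before reaching field capacity. Spread uniformly, the whole 25 mm is retained in the root zone and later lost to evapotranspiration, so essentially nothing percolates. If instead runoff from a contributing area ten times the size of a small depression collects in that depression, the depression effectively receives on the order of 10 × 25 mm = 250 mm; the 40 mm deficit is exceeded and roughly 200 mm becomes available to percolate toward the water table.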
Depression focused groundwater recharge can be very important in arid regions. More rain events are capable of contributing to groundwater supply.
Depression focused groundwater recharge also profoundly affects contaminant transport into groundwater. This is of great concern in regions with karst geological formations, because water can eventually dissolve tunnels all the way to aquifers, or to otherwise disconnected streams. This extreme form of preferential flow accelerates the transport of contaminants and the erosion of such tunnels. In this way, depressions intended to trap runoff water—before it flows to vulnerable water resources—can become connected underground over time. Collapse of the surface above into the tunnels results in sinkholes or caves.
Deeper ponding exerts pressure that forces water into the ground faster. Faster flow dislodges contaminants otherwise adsorbed on soil and carries them along. This can carry pollution directly to the raised water table below and into the groundwater supply. Thus, the quality of water collecting in infiltration basins is of special concern.
Estimation methods
Rates of groundwater recharge are difficult to quantify. This is because other related processes, such as evaporation, transpiration (or evapotranspiration) and infiltration, must first be measured or estimated to determine the balance. There is no widely applicable method available that can directly and accurately quantify the volume of rainwater that reaches the water table.
The most common methods to estimate recharge rates are: chloride mass balance (CMB); soil physics methods; environmental and isotopic tracers; groundwater-level fluctuation methods; water balance (WB) methods (including groundwater models (GMs)); and the estimation of baseflow (BF) to rivers.
Regional, continental and global estimates of recharge commonly derive from global hydrological models.
Physical
Physical methods use the principles of soil physics to estimate recharge. The direct physical methods are those that attempt to actually measure the volume of water passing below the root zone. Indirect physical methods rely on the measurement or estimation of soil physical parameters, which along with soil physical principles can be used to estimate the potential or actual recharge. After months without rain, the level of rivers in humid climates is low and represents solely drained groundwater. Thus, the recharge can be calculated from this base flow if the catchment area is already known.
Chemical
Chemical methods use the presence of relatively inert water-soluble substances, such as an isotopic tracer or chloride, moving through the soil, as deep drainage occurs.
Numerical models
Recharge can be estimated using numerical methods, using such codes as Hydrologic Evaluation of Landfill Performance, UNSAT-H, SHAW (short form of Simultaneous Heat and Water Transfer model), WEAP, and MIKE SHE. The 1D-program HYDRUS1D is available online. The codes generally use climate and soil data to arrive at a recharge estimate and use the Richards equation in some form to model groundwater flow in the vadose zone.
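A minimal sketch of the water-balance idea behind such codes (a toy monthly bucket model written in Python, not any of the programs named above; the runoff coefficient, field capacity and forcing values are invented for illustration):
```python
# Toy monthly "bucket" water balance: water held above field capacity in the root zone
# drains downward and is counted as recharge. All parameters are illustrative (mm).
def monthly_recharge(precip, et, runoff_coeff=0.2, field_capacity=150.0, storage=75.0):
    recharge = []
    for p, e in zip(precip, et):
        infiltration = p * (1.0 - runoff_coeff)   # the remainder leaves as surface runoff
        storage = max(storage + infiltration - e, 0.0)
        drained = max(storage - field_capacity, 0.0)
        storage -= drained                         # excess over field capacity percolates
        recharge.append(drained)
    return recharge

print(monthly_recharge(precip=[120, 90, 40, 10], et=[30, 50, 80, 90]))  # -> [0.0, 13.0, 0.0, 0.0]
```
Real codes replace the single bucket with a discretized Richards-equation solution of the vadose zone, but the bookkeeping idea is the same: whatever water cannot be held against gravity in the root zone is counted as potential recharge.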
Factors affecting groundwater recharge
Climate change
Urbanization
Further implications of groundwater recharge are a consequence of urbanization. Research shows that the recharge rate can be up to ten times higher in urban areas than in rural regions. This is explained by the vast water supply and sewage networks supported in urban regions, which rural areas are not likely to have. Recharge in rural areas is heavily supported by precipitation, whereas this is not the case for urban areas. Road networks and infrastructure within cities prevent surface water from percolating into the soil, so most surface runoff enters storm drains for local water supply. As urban development continues to spread across various regions, groundwater recharge rates will increase relative to the rates of the previously rural region. A consequence of sudden influxes of groundwater recharge is flash flooding. The ecosystem has to adjust to the elevated groundwater surplus caused by the increased recharge rates. Additionally, road networks are less permeable than soil, resulting in higher amounts of surface runoff. Therefore, urbanization increases the rate of groundwater recharge and reduces infiltration, resulting in flash floods as the local ecosystem accommodates changes to the surrounding environment.
Adverse factors
Drainage
Impervious surfaces
Soil compaction
Groundwater pollution
See also
Aquifer storage and recovery
Bioswale
Contour trenching
Depression focused recharge
Dry well
Groundwater model
Groundwater remediation
Groundwater recharge in California
Hydrology (agriculture)
Infiltration (hydrology)
International trade and water
Peak water
Rainwater harvesting
Soil salinity control by subsurface drainage
Subsurface dyke
Watertable control
References
Aquifers
Soil mechanics
Hydraulic engineering
Hydrology
Land management
Liquid water
Soil physics
Sustainable design
Sustainable gardening
Sustainable technologies
Water and the environment
Water conservation
Water resources management
Water | Groundwater recharge | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,982 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Soil mechanics",
"Soil physics",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Aquifers",
"Environmental engineering",
"Water",
"Hydraulic engineering"
] |
1,176,394 | https://en.wikipedia.org/wiki/Domain%20wall | A domain wall is a type of topological soliton that occurs whenever a discrete symmetry is spontaneously broken. Domain walls are also sometimes called kinks, in analogy with the closely related kink solution of the sine-Gordon model or of models with polynomial potentials. Unstable domain walls can also appear if the spontaneously broken discrete symmetry is approximate and there is a false vacuum.
A domain (hyper volume) is extended in three spatial dimensions and one time dimension. A domain wall is the boundary between two neighboring domains. Thus a domain wall is extended in two spatial dimensions and one time dimension.
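A concrete textbook example (not specific to any model discussed in this article) is a real scalar field with the double-well potential V(φ) = (λ/4)(φ² − v²)², whose Z2 symmetry φ → −φ is spontaneously broken by the two vacua φ = ±v. The static solution interpolating between them, φ(x) = v tanh(√(λ/2) v x), is the domain wall (kink): it approaches one vacuum as x → −∞ and the other as x → +∞, and its energy is localized near x = 0 with surface tension (energy per unit wall area) σ = (2√2/3) √λ v³.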
Important examples are:
Domain wall (magnetism), an interface separating magnetic domains
Domain wall (optics), for domain walls in optics
Domain wall (string theory), a theoretical 2-dimensional singularity
Besides these important cases, similar solitons appear in a wide spectrum of models. Here are other examples:
Early in the universe, spontaneous breaking of discrete symmetries produced domain walls. The resulting network of domain walls influenced the late stages of cosmological inflation and the cosmic microwave background radiation. Observations constrain the existence of stable domain walls. Models beyond the Standard Model can account for those constraints. Unstable cosmic domain walls may decay and produce observable radiation.
There exists a class of braneworld models where the brane is assumed to be a domain wall formed by interacting extra-dimensional fields. Matter is localized due to the interaction with this configuration and can leave it at sufficiently high energies. The jargon term for this domain wall is "thick brane", in contrast to the "thin brane" of the models where it is described as a delta-potential or simply as some ideal surface with matter fields on it.
References
Further reading
Vachaspati, Tanmay (2006). Kinks and Domain Walls: An Introduction to Classical and Quantum Solitons. Cambridge University Press.
External links
Solitons | Domain wall | [
"Physics",
"Materials_science"
] | 386 | [
"Materials science stubs",
"Quantum mechanics",
"Condensed matter physics",
"Condensed matter stubs",
"Quantum physics stubs"
] |
1,177,234 | https://en.wikipedia.org/wiki/Potassium%20dichromate | Potassium dichromate, K2Cr2O7, is a common inorganic chemical reagent, most commonly used as an oxidizing agent in various laboratory and industrial applications. As with all hexavalent chromium compounds, it is acutely and chronically harmful to health. It is a crystalline ionic solid with a very bright, red-orange color. The salt is popular in laboratories because it is not deliquescent, in contrast to the more industrially relevant salt sodium dichromate.
Chemistry
Production
Potassium dichromate is usually prepared by the reaction of potassium chloride on sodium dichromate. Alternatively, it can also be obtained from potassium chromate by roasting chromite ore with potassium hydroxide. It is soluble in water, and in the dissolution process it ionizes:
K2Cr2O7 → 2 K+ + Cr2O7^2−
Cr2O7^2− + H2O ⇌ 2 HCrO4^− ⇌ 2 CrO4^2− + 2 H+
Reaction
Potassium dichromate is an oxidising agent in organic chemistry, and is milder than potassium permanganate. It is used to oxidize alcohols. It converts primary alcohols into aldehydes and, under more forcing conditions, into carboxylic acids. In contrast, potassium permanganate tends to give carboxylic acids as the sole products. Secondary alcohols are converted into ketones. For example, menthone may be prepared by oxidation of menthol with acidified dichromate. Tertiary alcohols cannot be oxidized.
In an aqueous solution the color change exhibited can be used to test for distinguishing aldehydes from ketones. Aldehydes reduce dichromate from the +6 to the +3 oxidation state, changing color from orange to green. This color change arises because the aldehyde can be oxidized to the corresponding carboxylic acid. A ketone will show no such change because it cannot be oxidized further, and so the solution will remain orange.
When heated strongly, it decomposes with the evolution of oxygen:
4 K2Cr2O7 → 4 K2CrO4 + 2 Cr2O3 + 3 O2
When an alkali is added to an orange-red solution containing dichromate ions, a yellow solution is obtained due to the formation of chromate ions (CrO4^2−). For example, potassium chromate is produced industrially using potash:
K2Cr2O7 + K2CO3 → 2 K2CrO4 + CO2
The reaction is reversible.
Treatment with cold sulfuric acid gives red crystals of chromic anhydride (chromium trioxide, CrO3):
K2Cr2O7 + 2 H2SO4 → 2 CrO3 + 2 KHSO4 + H2O
On heating with concentrated acid, oxygen is evolved:
2 K2Cr2O7 + 8 H2SO4 → 2 K2SO4 + 2 Cr2(SO4)3 + 8 H2O + 3 O2
Uses
Potassium dichromate has few major applications, as the sodium salt is dominant industrially. The main use is as a precursor to potassium chrome alum, used in leather tanning.
Cleaning
Like other chromium(VI) compounds (chromium trioxide, sodium dichromate), potassium dichromate has been used to prepare "chromic acid" for cleaning glassware and etching materials. Because of safety concerns associated with hexavalent chromium, this practice has been largely discontinued.
Construction
It is used as an ingredient in cement in which it retards the setting of the mixture and improves its density and texture. This usage commonly causes contact dermatitis in construction workers.
Photography and printing
In 1839, Mungo Ponton discovered that paper treated with a solution of potassium dichromate was visibly tanned by exposure to sunlight, the discoloration remaining after the potassium dichromate had been rinsed out. In 1852, Henry Fox Talbot discovered that exposure to ultraviolet light in the presence of potassium dichromate hardened organic colloids such as gelatin and gum arabic, making them less soluble.
These discoveries soon led to the carbon print, gum bichromate, and other photographic printing processes based on differential hardening. Typically, after exposure, the unhardened portion was rinsed away with warm water, leaving a thin relief that either contained a pigment included during manufacture or was subsequently stained with a dye. Some processes depended on the hardening only, in combination with the differential absorption of certain dyes by the hardened or unhardened areas. Because some of these processes allowed the use of highly stable dyes and pigments, such as carbon black, prints with an extremely high degree of archival permanence and resistance to fading from prolonged exposure to light could be produced.
Dichromated colloids were also used as photoresists in various industrial applications, most widely in the creation of metal printing plates for use in photomechanical printing processes.
Chromium intensification or Photochromos uses potassium dichromate together with equal parts of concentrated hydrochloric acid diluted down to approximately 10% v/v to treat weak and thin negatives of black and white photograph roll. This solution reconverts the elemental silver particles in the film to silver chloride. After thorough washing and exposure to actinic light, the film can be redeveloped to its end-point yielding a stronger negative which is able to produce a more satisfactory print.
A potassium dichromate solution in sulfuric acid can be used to produce a reversal negative (that is, a positive transparency from a negative film). This is effected by developing a black and white film but allowing the development to proceed more or less to the end point. The development is then stopped by copious washing and the film then treated in the acid dichromate solution. This converts the silver metal to silver sulfate, a compound that is insensitive to light. After thorough washing and exposure to actinic light, the film is developed again allowing the previously unexposed silver halide to be reduced to silver metal. The results obtained can be unpredictable, but sometimes excellent results are obtained producing images that would otherwise be unobtainable. This process can be coupled with solarisation so that the end product resembles a negative and is suitable for printing in the normal way.
Cr(VI) compounds have the property of tanning animal proteins when exposed to strong light. This quality is used in photographic screen-printing.
In screen-printing a fine screen of bolting silk or similar material is stretched taut onto a frame similar to the way canvas is prepared before painting. A colloid sensitized with a dichromate is applied evenly to the taut screen. Once the dichromate mixture is dry, a full-size photographic positive is attached securely onto the surface of the screen, and the whole assembly exposed to strong light – times vary from 3 minutes to a half an hour in bright sunlight – hardening the exposed colloid. When the positive is removed, the unexposed mixture on the screen can be washed off with warm water, leaving the hardened mixture intact, acting as a precise mask of the desired pattern, which can then be printed with the usual screen-printing process.
Analytical reagent
Because it is non-hygroscopic, potassium dichromate is a common reagent in classical "wet tests" in analytical chemistry.
Ethanol determination
The concentration of ethanol in a sample can be determined by back titration with acidified potassium dichromate. Reacting the sample with an excess of potassium dichromate, all ethanol is oxidized to acetic acid.
The full reaction converting ethanol to acetic acid is:
3 CH3CH2OH + 2 Cr2O7^2− + 16 H+ → 3 CH3COOH + 4 Cr^3+ + 11 H2O
The excess dichromate is determined by titration against sodium thiosulfate. Subtracting the amount of excess dichromate from the initial amount gives the amount of dichromate consumed, and hence the amount of ethanol present. Accuracy can be improved by calibrating the dichromate solution against a blank.
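A minimal back-titration calculation following the stoichiometry above (3 mol of ethanol consume 2 mol of dichromate; the volumes and concentrations in the example are invented):
```python
# Back-titration bookkeeping for the reaction above: 2 Cr2O7^2- oxidize 3 CH3CH2OH.
# The volumes and concentrations are invented for the example.
def ethanol_from_back_titration(n_dichromate_initial, n_dichromate_excess):
    n_consumed = n_dichromate_initial - n_dichromate_excess
    return 1.5 * n_consumed                     # 3 mol ethanol per 2 mol dichromate

n_initial = 0.0250 * 0.0400                     # 25.0 mL of 0.0400 M K2Cr2O7 added
n_excess = 0.30e-3                              # 0.30 mmol left over, found by thiosulfate titration
print(ethanol_from_back_titration(n_initial, n_excess) * 46.07, "g of ethanol")  # 46.07 g/mol
```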
One major application for this reaction is in old police breathalyzer tests. When alcohol vapor makes contact with the orange dichromate-coated crystals, the color changes from Cr(VI) orange to Cr(III) green. The degree of the color change is directly related to the level of alcohol in the suspect's breath.
Silver test
When dissolved in an approximately 35% nitric acid solution it is called Schwerter's solution and is used to test for the presence of various metals, notably for determination of silver purity. Pure silver will turn the solution bright red, sterling silver will turn it dark red, low grade coin silver (0.800 fine) will turn brown (largely due to the presence of copper which turns the solution brown) and even green for 0.500 silver.
Brass turns dark brown, copper turns brown, lead and tin both turn yellow while gold and palladium do not change.
Sulfur dioxide test
Potassium dichromate paper can be used to test for sulfur dioxide, as it turns distinctively from orange to green. This is typical of all redox reactions where hexavalent chromium is reduced to trivalent chromium. Therefore, it is not a conclusive test for sulfur dioxide. The final product formed is Cr2(SO4)3.
Wood treatment
Potassium dichromate is used to stain certain types of wood by darkening the tannins in the wood. It produces deep, rich browns that cannot be achieved with modern color dyes. It is a particularly effective treatment on mahogany.
Natural occurrence
Potassium dichromate occurs naturally as the rare mineral lópezite. It has only been reported as vug fillings in the nitrate deposits of the Atacama Desert of Chile and in the Bushveld igneous complex of South Africa.
Safety
In 2005–06, potassium dichromate was the 11th-most-prevalent allergen in patch tests (4.8%).
Potassium dichromate is one of the most common causes of chromium dermatitis; chromium is highly likely to induce sensitization leading to dermatitis, especially of the hand and forearms, which is chronic and difficult to treat. Toxicological studies have further illustrated its highly toxic nature. In rabbits and rodents, doses as low as 14 mg/kg have shown a 50% fatality rate among test groups. Aquatic organisms are especially vulnerable if exposed, and hence responsible disposal according to local environmental regulations is advised.
As with other Cr(VI) compounds, potassium dichromate is carcinogenic. The compound is also corrosive and exposure may produce severe eye damage or blindness. Human exposure further encompasses impaired fertility.
References
External links
Potassium Dichromate at The Periodic Table of Videos (University of Nottingham)
International Chemical Safety Card 1371
National Pollutant Inventory – Chromium VI and compounds fact sheet
NIOSH Pocket Guide to Chemical Hazards
IARC Monograph "Chromium and Chromium compounds"
Gold refining article listing color change when testing metals with Schwerter's Solution
Potassium compounds
Dichromates
Photographic chemicals
IARC Group 1 carcinogens
Light-sensitive chemicals
Oxidizing agents | Potassium dichromate | [
"Chemistry"
] | 2,181 | [
"Light-sensitive chemicals",
"Light reactions",
"Redox",
"Oxidizing agents"
] |
1,177,592 | https://en.wikipedia.org/wiki/Ellipsometry | Ellipsometry is an optical technique for investigating the dielectric properties (complex refractive index or dielectric function) of thin films. Ellipsometry measures the change of polarization upon reflection or transmission and compares it to a model.
It can be used to characterize composition, roughness, thickness (depth), crystalline nature, doping concentration, electrical conductivity and other material properties. It is very sensitive to the change in the optical response of incident radiation that interacts with the material being investigated.
A spectroscopic ellipsometer can be found in most thin film analytical labs. Ellipsometry is also becoming more interesting to researchers in other disciplines such as biology and medicine. These areas pose new challenges to the technique, such as measurements on unstable liquid surfaces and microscopic imaging.
Etymology
The name "ellipsometry" stems from the fact that elliptical polarization of light is used. The term "spectroscopic" relates to the fact that the information gained is a function of the light's wavelength or energy (spectra). The technique has been known at least since 1888 by the work of Paul Drude and has many applications today.
The first documented use of the term "ellipsometry" was in 1945.
Basic principles
The measured signal is the change in polarization as the incident radiation (in a known state) interacts with the material structure of interest (reflected, absorbed, scattered, or transmitted). The polarization change is quantified by the amplitude ratio, Ψ, and the phase difference, Δ (defined below). Because the signal depends on the thickness as well as the material properties, ellipsometry can be a universal tool for contact free determination of thickness and optical constants of films of all kinds.
Upon the analysis of the change of polarization of light, ellipsometry can yield information about layers that are thinner than the wavelength of the probing light itself, even down to a single atomic layer. Ellipsometry can probe the complex refractive index or dielectric function tensor, which gives access to fundamental physical parameters like those listed above. It is commonly used to characterize film thickness for single layers or complex multilayer stacks ranging from a few angstroms or tenths of a nanometer to several micrometers with an excellent accuracy.
Experimental details
Typically, ellipsometry is done only in the reflection setup. The exact nature of the polarization change is determined by the sample's properties (thickness, complex refractive index or dielectric function tensor). Although optical techniques are inherently diffraction-limited, ellipsometry exploits phase information (polarization state), and can achieve sub-nanometer resolution. In its simplest form, the technique is applicable to thin films with thickness of less than a nanometer to several micrometers. Most models assume the sample is composed of a small number of discrete, well-defined layers that are optically homogeneous and isotropic. Violation of these assumptions requires more advanced variants of the technique (see below).
Methods of immersion or multiangular ellipsometry are applied to find the optical constants of materials with a rough sample surface or in the presence of inhomogeneous media. Newer methodological approaches allow the use of reflection ellipsometry to measure physical and technical characteristics of gradient-index elements in cases where the surface layer of the optical component is inhomogeneous.
Experimental setup
Electromagnetic radiation is emitted by a light source and linearly polarized by a polarizer. It can pass through an optional compensator (retarder, quarter wave plate) and falls onto the sample. After reflection the radiation passes a compensator (optional) and a second polarizer, which is called an analyzer, and falls into the detector. Instead of the compensators, some ellipsometers use a phase-modulator in the path of the incident light beam. Ellipsometry is a specular optical technique (the angle of incidence equals the angle of reflection). The incident and the reflected beam span the plane of incidence. Light which is polarized parallel to this plane is named p-polarized. A polarization direction perpendicular to this plane is called s-polarized, accordingly. The "s" comes from the German "senkrecht" (perpendicular).
Data acquisition
Ellipsometry measures the complex reflectance ratio ρ of a system, which may be parametrized by the amplitude component Ψ and the phase difference Δ. The polarization state of the light incident upon the sample may be decomposed into an s and a p component (the s component is oscillating perpendicular to the plane of incidence and parallel to the sample surface, and the p component is oscillating parallel to the plane of incidence). The amplitudes of the s and p components, after reflection and normalized to their initial value, are denoted by r_s and r_p respectively. The angle of incidence is chosen close to the Brewster angle of the sample to ensure a maximal difference in r_p and r_s. Ellipsometry measures the complex reflectance ratio ρ (a complex quantity), which is the ratio of r_p over r_s:
ρ = r_p / r_s = tan(Ψ) e^(iΔ)
Thus, tan(Ψ) is the amplitude ratio upon reflection, and Δ is the phase shift (difference). (Note that the right-hand side of the equation is simply another way to represent a complex number.) Since ellipsometry measures the ratio (or difference) of two values (rather than the absolute value of either), it is very robust, accurate, and reproducible. For instance, it is relatively insensitive to scatter and fluctuations and requires no standard sample or reference beam.
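As an illustration of how Ψ and Δ follow from the Fresnel reflection coefficients, here is a minimal sketch for the simplest possible case of a bare, optically thick substrate (the two-phase ambient/substrate model; the silicon-like refractive index and the angle of incidence used in the example are assumptions, not values taken from this article):
```python
import numpy as np

def psi_delta_bare_substrate(n_ambient, N_substrate, aoi_deg):
    """Two-phase (ambient/substrate) model: returns Psi and Delta in degrees."""
    phi = np.deg2rad(aoi_deg)
    cos_i = np.cos(phi)
    # Snell's law with a complex substrate index N = n - ik
    cos_t = np.sqrt(1.0 - (n_ambient * np.sin(phi) / N_substrate) ** 2)
    r_s = (n_ambient * cos_i - N_substrate * cos_t) / (n_ambient * cos_i + N_substrate * cos_t)
    r_p = (N_substrate * cos_i - n_ambient * cos_t) / (N_substrate * cos_i + n_ambient * cos_t)
    rho = r_p / r_s                               # rho = tan(Psi) * exp(i * Delta)
    return np.degrees(np.arctan(np.abs(rho))), np.degrees(np.angle(rho))

# Example: a silicon-like substrate (N assumed to be about 3.87 - 0.02j near 633 nm) in air at 70 degrees
print(psi_delta_bare_substrate(1.0, 3.87 - 0.02j, 70.0))
```
For layered samples the same quantities are computed from the multilayer Fresnel coefficients and then fitted to the measured spectra, as described in the data-analysis section below.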
Data analysis
Ellipsometry is an indirect method, i.e. in general the measured Ψ and Δ cannot be converted directly into the optical constants of the sample. Normally, a model analysis must be performed, for example with the Forouhi–Bloomer model. This is one weakness of ellipsometry. Models can be physically based on energy transitions or simply use free parameters to fit the data.
Direct inversion of Ψ and Δ is only possible in very simple cases of isotropic, homogeneous and infinitely thick films. In all other cases a layer model must be established, which considers the optical constants (refractive index or dielectric function tensor) and thickness parameters of all individual layers of the sample, including the correct layer sequence. Using an iterative procedure (least-squares minimization), unknown optical constants and/or thickness parameters are varied, and Ψ and Δ values are calculated using the Fresnel equations. The calculated Ψ and Δ values which match the experimental data best provide the optical constants and thickness parameters of the sample.
Definitions
Modern ellipsometers are complex instruments that incorporate a wide variety of radiation sources, detectors, digital electronics and software. The range of wavelengths employed is far in excess of what is visible, so strictly speaking these are no longer purely optical instruments.
Single-wavelength vs. spectroscopic ellipsometry
Single-wavelength ellipsometry employs a monochromatic light source. This is usually a laser in the visible spectral region, for instance, a HeNe laser with a wavelength of 632.8 nm. Therefore, single-wavelength ellipsometry is also called laser ellipsometry. The advantage of laser ellipsometry is that laser beams can be focused on a small spot size. Furthermore, lasers have a higher power than broad band light sources. Therefore, laser ellipsometry can be used for imaging (see below). However, the experimental output is restricted to one set of and values per measurement. Spectroscopic ellipsometry (SE) employs broad band light sources, which cover a certain spectral range in the infrared, visible or ultraviolet spectral region. By that the complex refractive index or the dielectric function tensor in the corresponding spectral region can be obtained, which gives access to a large number of fundamental physical properties. Infrared spectroscopic ellipsometry (IRSE) can probe lattice vibrational (phonon) and free charge carrier (plasmon) properties. Spectroscopic ellipsometry in the near infrared, visible up to ultraviolet spectral region studies the refractive index in the transparency or below-band-gap region and electronic properties, for instance, band-to-band transitions or excitons.
Standard vs. generalized ellipsometry (anisotropy)
Standard ellipsometry (or just short 'ellipsometry') is applied, when no s polarized light is converted into p polarized light nor vice versa. This is the case for optically isotropic samples, for instance, amorphous materials or crystalline materials with a cubic crystal structure. Standard ellipsometry is also sufficient for optically uniaxial samples in the special case, when the optical axis is aligned parallel to the surface normal. In all other cases, when s polarized light is converted into p polarized light and/or vice versa, the generalized ellipsometry approach must be applied. Examples are arbitrarily aligned, optically uniaxial samples, or optically biaxial samples.
Jones matrix vs. Mueller matrix formalism (depolarization)
There are typically two different ways of mathematically describing how an electromagnetic wave interacts with the elements within an ellipsometer (including the sample): the Jones matrix and the Mueller matrix formalisms. In the Jones matrix formalism, the electromagnetic wave is described by a Jones vector with two orthogonal complex-valued entries for the electric field (typically and ), and the effect that an optical element (or sample) has on it is described by the complex-valued 2×2 Jones matrix. In the Mueller matrix formalism, the electromagnetic wave is described by Stokes vectors with four real-valued entries, and their transformation is described by the real-valued 4x4 Mueller matrix. When no depolarization occurs both formalisms are fully consistent. Therefore, for non-depolarizing samples, the simpler Jones matrix formalism is sufficient. If the sample is depolarizing the Mueller matrix formalism should be used, because it also gives the amount of depolarization. Reasons for depolarization are, for instance, thickness non-uniformity or backside-reflections from a transparent substrate.
Advanced experimental approaches
Imaging ellipsometry
Ellipsometry can also be done as imaging ellipsometry by using a CCD camera as a detector. This provides a real time contrast image of the sample, which provides information about film thickness and refractive index. Advanced imaging ellipsometer technology operates on the principle of classical null ellipsometry and real-time ellipsometric contrast imaging. Imaging ellipsometry is based on the concept of nulling. In ellipsometry, the film under investigation is placed onto a reflective substrate. The film and the substrate have different refractive indexes. In order to obtain data about film thickness, the light reflecting off of the substrate must be nulled. Nulling is achieved by adjusting the analyzer and polarizer so that all reflected light off of the substrate is extinguished. Due to the difference in refractive indexes, this will allow the sample to become very bright and clearly visible. The light source consists of a monochromatic laser of the desired wavelength. A common wavelength that is used is 532 nm green laser light. Since only intensity of light measurements are needed, almost any type of camera can be implemented as the CCD, which is useful if building an ellipsometer from parts. Typically, imaging ellipsometers are configured in such a way so that the laser (L) fires a beam of light which immediately passes through a linear polarizer (P). The linearly polarized light then passes through a quarter wavelength compensator (C) which transforms the light into elliptically polarized light. This elliptically polarized light then reflects off the sample (S), passes through the analyzer (A) and is imaged onto a CCD camera by a long working distance objective. The analyzer here is another polarizer identical to the P, however, this polarizer serves to help quantify the change in polarization and is thus given the name analyzer. This design is commonly referred to as a LPCSA configuration.
The orientation of the angles of P and C are chosen in such a way that the elliptically polarized light is completely linearly polarized after it is reflected off the sample. For simplification of future calculations, the compensator can be fixed at a 45 degree angle relative to the plane of incidence of the laser beam. This set up requires the rotation of the analyzer and polarizer in order to achieve null conditions. The ellipsometric null condition is obtained when A is perpendicular with respect to the polarization axis of the reflected light achieving complete destructive interference, i.e., the state at which the absolute minimum of light flux is detected at the CCD camera. The angles of P, C, and A obtained are used to determine the Ψ and Δ values of the material.
Ψ and Δ follow from the angles A and P of the analyzer and polarizer under null conditions, respectively. By rotating the analyzer and polarizer and measuring the change in intensities of light over the image, analysis of the measured data by use of computerized optical modeling can lead to a deduction of spatially resolved film thickness and complex refractive index values.
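The following sketch illustrates the kind of computerized optical modeling mentioned above for the simplest case of an ambient/film/substrate stack; all refractive indices, the film thickness, wavelength and angle of incidence are made-up example values. It combines the Fresnel coefficients of the two interfaces and evaluates Ψ and Δ from the ratio rho = r_p / r_s = tan(Ψ)·exp(iΔ); a fitting routine would adjust the film parameters until the modeled Ψ and Δ match the measured maps.

```python
import numpy as np

def fresnel(n_i, n_t, cos_i, cos_t):
    """Fresnel amplitude reflection coefficients (p and s) at one interface."""
    r_p = (n_t * cos_i - n_i * cos_t) / (n_t * cos_i + n_i * cos_t)
    r_s = (n_i * cos_i - n_t * cos_t) / (n_i * cos_i + n_t * cos_t)
    return r_p, r_s

def psi_delta(n0, n1, n2, d_nm, wavelength_nm, aoi_deg):
    """Ellipsometric angles for an ambient/film/substrate model (illustrative)."""
    theta0 = np.deg2rad(aoi_deg)
    cos0 = np.cos(theta0)
    # Snell's law with complex cosines so absorbing media also work.
    cos1 = np.sqrt(1 - (n0 * np.sin(theta0) / n1) ** 2 + 0j)
    cos2 = np.sqrt(1 - (n0 * np.sin(theta0) / n2) ** 2 + 0j)

    r01p, r01s = fresnel(n0, n1, cos0, cos1)   # ambient/film interface
    r12p, r12s = fresnel(n1, n2, cos1, cos2)   # film/substrate interface

    beta = 2 * np.pi * d_nm / wavelength_nm * n1 * cos1   # film phase thickness
    phase = np.exp(-2j * beta)
    r_p = (r01p + r12p * phase) / (1 + r01p * r12p * phase)
    r_s = (r01s + r12s * phase) / (1 + r01s * r12s * phase)

    rho = r_p / r_s                      # rho = tan(Psi) * exp(i * Delta)
    psi = np.degrees(np.arctan(np.abs(rho)))
    delta = np.degrees(np.angle(rho))
    return psi, delta

# Example: ~100 nm of an SiO2-like film (n ~ 1.46) on a silicon-like substrate.
print(psi_delta(n0=1.0, n1=1.46, n2=3.85 - 0.02j, d_nm=100, wavelength_nm=532, aoi_deg=70))
```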
Because the imaging is done at an oblique angle, only a narrow strip of the entire field of view is actually in focus. The strip in focus can be moved along the field of view by adjusting the focus. To analyze the entire region of interest, the focus must be moved incrementally along the region of interest, with an image recorded at each position. All of the images are then compiled into a single, in-focus image of the sample.
In situ ellipsometry
In situ ellipsometry refers to dynamic measurements made while a sample is being modified. The approach can be used to study, for instance, the growth of a thin film (including calcium phosphate mineralization at the air-liquid interface) or the etching or cleaning of a sample. In situ ellipsometry measurements make it possible to determine fundamental process parameters such as growth or etch rates and the variation of optical properties with time. They also require a number of additional considerations: the sample spot is usually not as easily accessible as for ex situ measurements outside the process chamber, so the mechanical setup has to be adjusted, which can include additional optical elements (mirrors, prisms, or lenses) for redirecting or focusing the light beam. Because the environmental conditions during the process can be harsh, the sensitive optical elements of the ellipsometry setup must be separated from the hot zone. In the simplest case this is done through optical view ports, though strain-induced birefringence of the glass windows has to be taken into account or minimized. Furthermore, the samples can be at elevated temperatures, which implies optical properties different from those of samples at room temperature. Despite all these problems, in situ ellipsometry is becoming more and more important as a process control technique for thin-film deposition and modification tools. In situ ellipsometers can be of single-wavelength or spectroscopic type. Spectroscopic in situ ellipsometers use multichannel detectors, for instance CCD detectors, which measure the ellipsometric parameters for all wavelengths in the studied spectral range simultaneously.
Ellipsometric porosimetry
Ellipsometric porosimetry measures the change in the optical properties and thickness of a material during adsorption and desorption of a volatile species, either at atmospheric pressure or under reduced pressure, depending on the application. The EP technique is valued for its ability to measure the porosity of very thin films down to 10 nm, as well as for its reproducibility and speed of measurement. Compared to traditional porosimeters, ellipsometric porosimeters are well suited to measuring the pore size and pore-size distribution of very thin films. Film porosity is a key factor in silicon-based technology using low-κ materials, in the organic electronics industry (encapsulated organic light-emitting diodes), and in the coating industry using sol-gel techniques.
Magneto-optic generalized ellipsometry
Magneto-optic generalized ellipsometry (MOGE) is an advanced infrared spectroscopic ellipsometry technique for studying free charge carrier properties in conducting samples. By applying an external magnetic field it is possible to determine independently the density, the optical mobility parameter and the effective mass parameter of free charge carriers. Without the magnetic field only two out of the three free charge carrier parameters can be extracted independently.
Applications
This technique has found applications in many different fields, from semiconductor physics to microelectronics and biology, from basic research to industrial applications. Ellipsometry is a very sensitive measurement technique and provides unequaled capabilities for thin film metrology. As an optical technique, spectroscopic ellipsometry is non-destructive and contactless. Because the incident radiation can be focused, small sample sizes can be imaged and desired characteristics can be mapped over a larger area (m²).
Advantages
Ellipsometry has a number of advantages compared to standard reflection intensity measurements:
Ellipsometry measures at least two parameters at each wavelength of the spectrum. If generalized ellipsometry is applied, up to 16 parameters can be measured at each wavelength.
Ellipsometry measures an intensity ratio instead of pure intensities. Therefore, ellipsometry is less affected by intensity instabilities of the light source or atmospheric absorption.
By using polarized light, normal ambient unpolarized stray light does not significantly influence the measurement, so no dark box is necessary.
No reference measurement is necessary.
Ellipsometry is especially superior to reflectivity measurements when studying anisotropic samples.
See also
Petrographic microscope
Photo-reflectance
Polarimetry
Spectroscopy
References
Further reading
R. M. A. Azzam and N. M. Bashara, Ellipsometry and Polarized Light, Elsevier Science Pub Co (1987)
A. Roeseler, Infrared Spectroscopic Ellipsometry, Akademie-Verlag, Berlin (1990),
H. G. Tompkins, A User's Guide to Ellipsometry, Academic Press Inc, London (1993),
H. G. Tompkins and W. A. McGahan, Spectroscopic Ellipsometry and Reflectometry, John Wiley & Sons Inc (1999)
I. Ohlidal and D. Franta, Ellipsometry of Thin Film Systems, in Progress in Optics, vol. 41, ed. E. Wolf, Elsevier, Amsterdam, 2000, pp. 181–282
M. Schubert, Infrared Ellipsometry on semiconductor layer structures: Phonons, Plasmons, and Polaritons, Series: Springer Tracts in Modern Physics, Vol. 209, Springer (2004),
H. G. Tompkins and E. A. Irene (Editors), Handbook of Ellipsometry William Andrews Publications, Norwich, NY (2005),
H. Fujiwara, Spectroscopic Ellipsometry: Principles and Applications, John Wiley & Sons Inc (2007),
M. Losurdo and K. Hingerl (Editors), Ellipsometry at the Nanoscale, Springer (2013),
K. Hinrichs and K.-J. Eichhorn (Editors), Ellipsometry of Functional Organic Surfaces and Films, Springer (2014),
Optical metrology
Radiometry
Spectroscopy | Ellipsometry | [
"Physics",
"Chemistry",
"Engineering"
] | 4,032 | [
"Telecommunications engineering",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Spectroscopy",
"Radiometry"
] |
1,177,781 | https://en.wikipedia.org/wiki/Terrell%20rotation | Terrell rotation or the Terrell effect is the visual distortion that a passing object would appear to undergo, according to the special theory of relativity, if it were travelling at a significant fraction of the speed of light. This behaviour was described independently by both Roger Penrose and James Edward Terrell. Penrose's article was submitted 29 July 1958 and published in January 1959. Terrell's article was submitted 22 June 1959 and published 15 November 1959. The general phenomenon was noted already in 1924 by Austrian physicist Anton Lampa.
This phenomenon was popularized by Victor Weisskopf in a Physics Today article.
Due to an early dispute about priority and correct attribution, the effect is also sometimes referred to as the Penrose–Terrell effect, the Terrell–Penrose effect or the Lampa–Terrell–Penrose effect, but not the Lampa effect.
Further detail
By symmetry, the visual appearance of a moving object as seen by an observer at rest is equivalent to the visual appearance of the same object at rest as seen by a moving observer. Since the Lorentz transformation does not depend on acceleration, the visual appearance of the object depends only on the instantaneous velocity, and not on the acceleration of the observer.
Terrell's and Penrose's papers pointed out that although special relativity appeared to describe an "observed contraction" in moving objects, these interpreted "observations" were not to be confused with the theory's literal predictions for the visible appearance of a moving object. Thanks to the differential time-lag effects in signals reaching the observer from the object's different parts, a receding object would appear contracted, an approaching object would appear elongated (even under special relativity), and the geometry of a passing object would appear skewed, as if rotated. In the words of Penrose: "the light from the trailing part reaches the observer from behind the sphere, which it can do since the sphere is continuously moving out of its way".
For images of passing objects, the apparent contraction of distances between points on the object's transverse surface could then be interpreted as being due to an apparent change in viewing angle, and the image of the object could be interpreted as appearing rotated instead. A previously popular description of special relativity's predictions, in which an observer sees a passing object as contracted (for instance, from a sphere to a flattened ellipsoid), was wrong. A sphere maintains its circular outline because, as the sphere moves, light from the farther points of the Lorentz-contracted ellipsoid takes longer to reach the eye.
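The differential time-lag argument can be illustrated numerically. In the sketch below (a toy calculation; the speed, geometry and units are arbitrary, and Lorentz contraction of the object is not included), each point of a transversely moving square is traced back to the retarded emission time satisfying t = t_e + |r(t_e) − r_obs|/c, and the far face ends up displaced relative to the near face, producing the skewed, apparently rotated outline.

```python
import numpy as np

c = 1.0                               # units with the speed of light set to 1
v = 0.8 * c                           # object speed along +x
observer = np.array([0.0, -10.0])     # observer position (x, y); y is depth
t_obs = 0.0                           # instant at which the image is formed

def apparent_position(x0, y0):
    """Retarded position of a point whose worldline is x(t) = x0 + v t, y = y0."""
    t_e = 0.0
    for _ in range(200):              # fixed-point iteration converges since v < c
        pos = np.array([x0 + v * t_e, y0])
        t_e = t_obs - np.linalg.norm(pos - observer) / c
    return np.array([x0 + v * t_e, y0])

# Corners of a unit square; y = 0 is the near face, y = 1 the far face.
corners = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
for x0, y0 in corners:
    print((x0, y0), "-> appears at", np.round(apparent_position(x0, y0), 3))
# The far face appears shifted backwards relative to the near face,
# skewing the outline as if the square had been rotated.
```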
Terrell's and Penrose's papers prompted a number of follow-up papers, mostly in the American Journal of Physics, exploring the consequences of this correction. These papers pointed out that some existing discussions of special relativity were flawed and "explained" effects that the theory did not actually predict – while these papers did not change the actual mathematical structure of special relativity in any way, they did correct a misconception regarding the theory's predictions.
A representation of the Terrell effect can be seen in the physics simulator "A Slower Speed of Light," published by MIT.
See also
Length contraction
Stellar aberration
References and further reading
External links
A webpage explaining the Penrose-Terrell Effect
Extensive explanations and visualizations of the appearance of moving objects
Interactive simulation of the Penrose-Terrell Effect
Special relativity | Terrell rotation | [
"Physics"
] | 675 | [
"Special relativity",
"Theory of relativity"
] |
1,178,438 | https://en.wikipedia.org/wiki/Large%20eddy%20simulation | Large eddy simulation (LES) is a mathematical model for turbulence used in computational fluid dynamics. It was initially proposed in 1963 by Joseph Smagorinsky to simulate atmospheric air currents, and first explored by Deardorff (1970). LES is currently applied in a wide variety of engineering applications, including combustion, acoustics, and simulations of the atmospheric boundary layer.
The simulation of turbulent flows by numerically solving the Navier–Stokes equations requires resolving a very wide range of time and length scales, all of which affect the flow field. Such a resolution can be achieved with direct numerical simulation (DNS), but DNS is computationally expensive, and its cost prohibits simulation of practical engineering systems with complex geometry or flow configurations, such as turbulent jets, pumps, vehicles, and landing gear.
The principal idea behind LES is to reduce the computational cost by ignoring the smallest length scales, which are the most computationally expensive to resolve, via low-pass filtering of the Navier–Stokes equations. Such a low-pass filtering, which can be viewed as a time- and spatial-averaging, effectively removes small-scale information from the numerical solution. This information is not irrelevant, however, and its effect on the flow field must be modelled, a task which is an active area of research for problems in which small-scales can play an important role, such as near-wall flows, reacting flows, and multiphase flows.
Filter definition and properties
An LES filter can be applied to a spatial and temporal field \phi(\boldsymbol{x},t) and perform a spatial filtering operation, a temporal filtering operation, or both. The filtered field, denoted with a bar, is defined as:

\bar{\phi}(\boldsymbol{x},t) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \phi(\boldsymbol{r},\tau)\, G(\boldsymbol{x}-\boldsymbol{r}, t-\tau)\, d\tau\, d\boldsymbol{r}

where G is the filter convolution kernel. This can also be written as:

\bar{\phi} = G \star \phi

The filter kernel G has an associated cutoff length scale \Delta and cutoff time scale \tau_c. Scales smaller than these are eliminated from \bar{\phi}. Using the above filter definition, any field \phi may be split up into a filtered and sub-filtered (denoted with a prime) portion, as

\phi = \bar{\phi} + \phi'
It is important to note that the large eddy simulation filtering operation does not satisfy the properties of a Reynolds operator.
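A one-dimensional numerical sketch of this filtering operation (a top-hat kernel and a synthetic signal are assumed purely for illustration): the signal is convolved with a box filter of width larger than the grid spacing and then split into the resolved part and the sub-filter residual.

```python
import numpy as np

N = 512
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]

# Synthetic "velocity" containing a large-scale and a small-scale component.
u = np.sin(x) + 0.2 * np.sin(25 * x)

def box_filter(f, width):
    """Top-hat (box) filter of the given width, applied periodically."""
    n = max(1, int(round(width / dx)))
    kernel = np.ones(n) / n
    fp = np.concatenate([f[-n:], f, f[:n]])        # periodic padding
    return np.convolve(fp, kernel, mode="same")[n:-n]

delta = 16 * dx                  # filter width, larger than the grid spacing
u_bar = box_filter(u, delta)     # resolved (filtered) field
u_prime = u - u_bar              # sub-filter residual

print("energy in resolved part  :", np.mean(u_bar ** 2))
print("energy in sub-filter part:", np.mean(u_prime ** 2))
```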
Filtered governing equations
The governing equations of LES are obtained by filtering the partial differential equations governing the flow field . There are differences between the incompressible and compressible LES governing equations, which lead to the definition of a new filtering operation.
Incompressible flow
For incompressible flow, the continuity equation and Navier–Stokes equations are filtered, yielding the filtered incompressible continuity equation,
and the filtered Navier–Stokes equations,
where is the filtered pressure field and is the rate-of-strain tensor evaluated using the filtered velocity. The nonlinear filtered advection term is the chief cause of difficulty in LES modeling. It requires knowledge of the unfiltered velocity field, which is unknown, so it must be modeled. The analysis that follows illustrates the difficulty caused by the nonlinearity, namely, that it causes interaction between large and small scales, preventing separation of scales.
The filtered advection term can be split up, following Leonard (1975), as:
where \tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j is the residual stress tensor, so that the filtered Navier-Stokes equations become

\frac{\partial \bar{u}_i}{\partial t} + \frac{\partial}{\partial x_j}\left(\bar{u}_i \bar{u}_j\right) = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i} + \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j} - \frac{\partial \tau_{ij}}{\partial x_j}

with the residual stress tensor \tau_{ij} grouping all unclosed terms. Leonard decomposed this stress tensor as \tau_{ij} = L_{ij} + C_{ij} + R_{ij} and provided physical interpretations for each term. L_{ij}, the Leonard tensor, represents interactions among large scales; R_{ij}, the Reynolds stress-like term, represents interactions among the sub-filter scales (SFS); and C_{ij}, the Clark tensor, represents cross-scale interactions between large and small scales. Modeling the unclosed term \tau_{ij} is the task of sub-grid scale (SGS) models. This is made challenging by the fact that the subgrid stress tensor must account for interactions among all scales, including interactions of filtered scales with unfiltered scales.
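The closure problem can be made concrete with a small self-contained sketch (again with a synthetic one-dimensional field and a box filter chosen only for illustration): the exact residual stress requires filtering the product of the unfiltered fields, which an LES solver does not have, so it must be modeled from the resolved field instead.

```python
import numpy as np

N = 512
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
u = np.sin(x) + 0.2 * np.sin(25 * x)           # "true" (unfiltered) field

def box_filter(f, n=16):
    """Top-hat filter of n points, applied periodically."""
    kernel = np.ones(n) / n
    fp = np.concatenate([f[-n:], f, f[:n]])     # periodic padding
    return np.convolve(fp, kernel, mode="same")[n:-n]

u_bar = box_filter(u)
# Exact residual stress: needs the unfiltered field u, so a real LES cannot
# evaluate it directly and must model it from u_bar instead.
tau = box_filter(u * u) - u_bar * u_bar
print("mean residual stress:", np.mean(tau))
```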
The filtered governing equation for a passive scalar , such as mixture fraction or temperature, can be written as
where is the diffusive flux of , and is the sub-filter flux for the scalar . The filtered diffusive flux is unclosed, unless a particular form is assumed for it, such as a gradient diffusion model . is defined analogously to ,
and can similarly be split up into contributions from interactions between various scales. This sub-filter flux also requires a sub-filter model.
Derivation
Using Einstein notation, the Navier–Stokes equations for an incompressible fluid in Cartesian coordinates are
Filtering the momentum equation results in
If we assume that filtering and differentiation commute, then
This equation models the changes in time of the filtered variables . Since the unfiltered variables are not known, it is impossible to directly calculate . However, the quantity is known. A substitution is made:
Let . The resulting set of equations are the LES equations:
Compressible governing equations
For the governing equations of compressible flow, each equation, starting with the conservation of mass, is filtered. This gives:
which results in an additional sub-filter term. However, it is desirable to avoid having to model the sub-filter scales of the mass conservation equation. For this reason, Favre proposed a density-weighted filtering operation, called Favre filtering, defined for an arbitrary quantity as:
which, in the limit of incompressibility, becomes the normal filtering operation. This makes the conservation of mass equation:
This concept can then be extended to write the Favre-filtered momentum equation for compressible flow. Following Vreman:
where is the shear stress tensor, given for a Newtonian fluid by:
and the term represents a sub-filter viscous contribution from evaluating the viscosity using the Favre-filtered temperature . The subgrid stress tensor for the Favre-filtered momentum field is given by
By analogy, the Leonard decomposition may also be written for the residual stress tensor for a filtered triple product . The triple product can be rewritten using the Favre filtering operator as , which is an unclosed term (it requires knowledge of the fields and , when only the fields and are known). It can be broken up in a manner analogous to above, which results in a sub-filter stress tensor . This sub-filter term can be split up into contributions from three types of interactions: the Leonard tensor , representing interactions among resolved scales; the Clark tensor , representing interactions between resolved and unresolved scales; and the Reynolds tensor , which represents interactions among unresolved scales.
Filtered kinetic energy equation
In addition to the filtered mass and momentum equations, filtering the kinetic energy equation can provide additional insight. The kinetic energy field can be filtered to yield the total filtered kinetic energy:
and the total filtered kinetic energy can be decomposed into two terms: the kinetic energy of the filtered velocity field ,
and the residual kinetic energy ,
such that .
The conservation equation for can be obtained by multiplying the filtered momentum transport equation by to yield:
where is the dissipation of kinetic energy of the filtered velocity field by viscous stress, and represents the sub-filter scale (SFS) dissipation of kinetic energy.
The terms on the left-hand side represent transport, and the terms on the right-hand side are sink terms that dissipate kinetic energy.
The SFS dissipation term is of particular interest, since it represents the transfer of energy from large resolved scales to small unresolved scales. On average, transfers energy from large to small scales. However, instantaneously can be positive or negative, meaning it can also act as a source term for , the kinetic energy of the filtered velocity field. The transfer of energy from unresolved to resolved scales is called backscatter (and likewise the transfer of energy from resolved to unresolved scales is called forward-scatter).
Numerical methods for LES
Large eddy simulation involves the solution to the discrete filtered governing equations using computational fluid dynamics. LES resolves scales from the domain size down to the filter size , and as such a substantial portion of high wave number turbulent fluctuations must be resolved. This requires either high-order numerical schemes, or fine grid resolution if low-order numerical schemes are used. Chapter 13 of Pope addresses the question of how fine a grid resolution is needed to resolve a filtered velocity field . Ghosal found that for low-order discretization schemes, such as those used in finite volume methods, the truncation error can be the same order as the subfilter scale contributions, unless the filter width is considerably larger than the grid spacing . While even-order schemes have truncation error, they are non-dissipative, and because subfilter scale models are dissipative, even-order schemes will not affect the subfilter scale model contributions as strongly as dissipative schemes.
Filter implementation
The filtering operation in large eddy simulation can be implicit or explicit. Implicit filtering recognizes that the subfilter scale model will dissipate in the same manner as many numerical schemes. In this way, the grid, or the numerical discretization scheme, can be assumed to be the LES low-pass filter. While this takes full advantage of the grid resolution and eliminates the computational cost of calculating a subfilter scale model term, it is difficult to determine the shape of the LES filter that the grid and numerical scheme implicitly impose, which leads to some numerical issues. Additionally, truncation error can also become an issue.
In explicit filtering, an LES filter is applied to the discretized Navier–Stokes equations, providing a well-defined filter shape and reducing the truncation error. However, explicit filtering requires a finer grid than implicit filtering, and the computational cost increases with . Chapter 8 of Sagaut (2006) covers LES numerics in greater detail.
Boundary conditions of large eddy simulations
Inlet boundary conditions affect the accuracy of LES significantly, and the treatment of inlet conditions for LES is a complicated problem. Theoretically, a good boundary condition for LES should contain the following features:
(1) providing accurate information of flow characteristics, i.e. velocity and turbulence;
(2) satisfying the Navier-Stokes equations and other physics;
(3) being easy to implement and adjust to different cases.
Currently, methods of generating inlet conditions for LES are broadly divided into two categories classified by Tabor et al.:
The first method for generating turbulent inlets is to synthesize them using techniques such as Fourier methods, proper orthogonal decomposition (POD) and vortex methods. The synthesis techniques attempt to construct a turbulent field at the inlet that has suitable turbulence-like properties and makes it easy to specify parameters of the turbulence, such as the turbulent kinetic energy and turbulent dissipation rate. In addition, inlet conditions generated using random numbers are computationally inexpensive. However, the method has one serious drawback: the synthesized turbulence does not satisfy the physical structure of fluid flow governed by the Navier-Stokes equations.
The second method involves a separate, precursor calculation to generate a turbulent database which can be introduced into the main computation at the inlets. The database (sometimes called a 'library') can be generated in a number of ways, such as cyclic domains, a pre-prepared library, and internal mapping. However, generating turbulent inflow by precursor simulations requires large computational capacity.
Researchers examining the application of various types of synthetic and precursor calculations have found that the more realistic the inlet turbulence, the more accurate LES predicts results.
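A toy version of the Fourier-mode synthesis approach described above is sketched below; the number of modes, the amplitude decay and the target fluctuation level are arbitrary choices for illustration. Random modes are summed to give an inlet fluctuation with a prescribed rms value, although, as noted, such a signal does not satisfy the Navier-Stokes equations.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_inlet(y, t, n_modes=64, u_rms=0.1):
    """Sum of random Fourier modes: a crude synthetic inlet fluctuation u'(y, t)."""
    k = rng.uniform(1.0, 20.0, n_modes)           # wavenumbers
    omega = rng.uniform(1.0, 20.0, n_modes)       # frequencies
    phase = rng.uniform(0.0, 2 * np.pi, n_modes)
    amp = k ** (-5.0 / 6.0)                       # decaying amplitude with wavenumber
    amp *= u_rms / np.sqrt(0.5 * np.sum(amp ** 2))   # rescale to the target rms
    u = np.zeros_like(y)
    for a, kk, om, ph in zip(amp, k, omega, phase):
        u += a * np.cos(kk * y - om * t + ph)
    return u

y = np.linspace(0.0, 1.0, 200)        # inlet coordinate
u_fluct = synthetic_inlet(y, t=0.0)
print("rms of synthetic fluctuation:", np.sqrt(np.mean(u_fluct ** 2)))
```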
Modeling unresolved scales
To discuss the modeling of unresolved scales, first the unresolved scales must be classified. They fall into two groups: resolved sub-filter scales (SFS), and sub-grid scales (SGS).
The resolved sub-filter scales represent the scales with wave numbers larger than the cutoff wave number , but whose effects are dampened by the filter. Resolved sub-filter scales only exist when filters non-local in wave-space are used (such as a box or Gaussian filter). These resolved sub-filter scales must be modeled using filter reconstruction.
Sub-grid scales are any scales that are smaller than the cutoff filter width . The form of the SGS model depends on the filter implementation. As mentioned in the Numerical methods for LES section, if implicit LES is considered, no SGS model is implemented and the numerical effects of the discretization are assumed to mimic the physics of the unresolved turbulent motions.
Sub-grid scale models
Without a universally valid description of turbulence, empirical information must be utilized when constructing and applying SGS models, supplemented with fundamental physical constraints such as Galilean invariance.
Two classes of SGS models exist; the first class is functional models and the second class is structural models. Some models may be categorized as both.
Functional (eddy–viscosity) models
Functional models are simpler than structural models, focusing only on dissipating energy at a rate that is physically correct. These are based on an artificial eddy viscosity approach, where the effects of turbulence are lumped into a turbulent viscosity. The approach treats dissipation of kinetic energy at sub-grid scales as analogous to molecular diffusion. In this case, the deviatoric part of \tau_{ij} is modeled as:

\tau_{ij} - \frac{1}{3}\tau_{kk}\delta_{ij} = -2 \nu_{\mathrm{t}} \bar{S}_{ij}

where \nu_{\mathrm{t}} is the turbulent eddy viscosity and \bar{S}_{ij} = \frac{1}{2}\left( \frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i} \right) is the rate-of-strain tensor evaluated using the filtered velocity.
Based on dimensional analysis, the eddy viscosity must have units of length squared per time. Most eddy viscosity SGS models model the eddy viscosity as the product of a characteristic length scale and a characteristic velocity scale.
Smagorinsky–Lilly model
The first SGS model developed was the Smagorinsky–Lilly SGS model, which was developed by Smagorinsky and used in the first LES simulation by Deardorff. It models the eddy viscosity as:

\nu_{\mathrm{t}} = (C_s \Delta)^2 \sqrt{2 \bar{S}_{ij} \bar{S}_{ij}} = (C_s \Delta)^2 \left| \bar{S} \right|

where \Delta is the grid size and C_s is a constant.
This method assumes that the energy production and dissipation of the small scales are in equilibrium, that is, the production of sub-grid kinetic energy balances its dissipation.
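A minimal sketch of evaluating the Smagorinsky eddy viscosity on a uniform two-dimensional grid follows; the velocity field and the value used for the constant are illustrative, and a real code would work with three-dimensional fields and the full strain-rate tensor.

```python
import numpy as np

def smagorinsky_nut(u, v, dx, dy, Cs=0.17):
    """Eddy viscosity nu_t = (Cs * Delta)^2 * |S| on a uniform 2-D grid (sketch)."""
    dudx, dudy = np.gradient(u, dx, dy)
    dvdx, dvdy = np.gradient(v, dx, dy)
    Sxx, Syy = dudx, dvdy
    Sxy = 0.5 * (dudy + dvdx)
    S_mag = np.sqrt(2.0 * (Sxx**2 + Syy**2 + 2.0 * Sxy**2))   # |S| = sqrt(2 Sij Sij)
    delta = np.sqrt(dx * dy)                                   # filter width ~ grid size
    return (Cs * delta) ** 2 * S_mag

# Illustrative resolved velocity field: a simple shear plus a small wiggle.
n = 64
dx = dy = 1.0 / n
X, Y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
u = Y + 0.05 * np.sin(8 * np.pi * X)
v = 0.05 * np.sin(8 * np.pi * Y)
print("max eddy viscosity:", smagorinsky_nut(u, v, dx, dy).max())
```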
The Dynamic Model (Germano et al. and beyond)
Germano et al. identified a number of studies using the Smagorinsky model that each found different values for the Smagorinsky constant for different flow configurations. In an attempt to formulate a more universal approach to SGS models, Germano et al. proposed a dynamic Smagorinsky model, which utilized two filters: a grid LES filter, denoted by an overbar, and a test LES filter, denoted by a hat, for any turbulent field. The test filter is larger in size than the grid filter and adds an additional smoothing of the turbulence field over the already smoothed fields represented by the LES. Applying the test filter to the LES equations (which are obtained by applying the "grid" filter to the Navier-Stokes equations) results in a new set of equations that are identical in form but with the SGS stress \tau_{ij} replaced by the sub-test stress T_{ij}. Germano et al. noted that even though neither \tau_{ij} nor T_{ij} can be computed exactly because of the presence of unresolved scales, there is an exact relation connecting these two tensors. This relation, known as the Germano identity, is

\mathcal{L}_{ij} = T_{ij} - \widehat{\tau}_{ij}, \qquad \text{with} \qquad \mathcal{L}_{ij} = \widehat{\bar{u}_i \bar{u}_j} - \hat{\bar{u}}_i \hat{\bar{u}}_j .

Here \mathcal{L}_{ij} can be explicitly evaluated as it involves only the filtered velocities and the operation of test filtering. The significance of the identity is that if one assumes that turbulence is self-similar so that the SGS stress at the grid and test levels have the same form, with the same coefficient C, then the Germano identity provides an equation from which the Smagorinsky coefficient C (which is no longer a 'constant') can potentially be determined.
[Inherent in the procedure is the assumption that the coefficient C is invariant of scale (see review).]
In order to do this, two additional steps were introduced in the original formulation. First, one assumed that even though was in principle variable, the variation was sufficiently slow that it can be moved out of the filtering operation . Second, since was a scalar, the Germano identity was contracted with a second rank tensor (the rate of strain tensor was chosen) to convert it to a scalar equation from which could be determined.
Lilly
found a less arbitrary and therefore more satisfactory approach for obtaining C from the tensor identity. He noted that the Germano identity required the satisfaction of nine equations at each point in space (of which only five are independent) for a single quantity C. The problem of obtaining C was therefore over-determined. He proposed therefore that C be determined using a least-squares fit, by minimizing the residuals. This results in

C = \frac{\mathcal{L}_{ij} M_{ij}}{M_{ij} M_{ij}} .

Here \mathcal{L}_{ij} is the resolved stress appearing in the Germano identity, and, for brevity, M_{ij} denotes the tensor formed from the difference between the Smagorinsky model terms evaluated at the test and grid filter levels.
Initial attempts to implement the model in LES simulations proved unsuccessful. First, the computed coefficient
was not at all "slowly varying" as assumed and varied as much as any other turbulent field. Secondly,
the computed C could be positive as well as negative. The latter fact in itself should not be regarded as a
shortcoming as a priori tests using filtered DNS fields have shown that the local subgrid dissipation rate
in a turbulent field is almost as likely to be negative as it is positive, even though the integral over the fluid domain is always positive, representing a net dissipation of energy in the large scales. A slight preponderance of positive values, as opposed to strict positivity of the eddy viscosity, results in the observed net dissipation. This so-called "backscatter" of energy from small to large scales indeed corresponds to negative C values in the Smagorinsky model. Nevertheless, the Germano-Lilly formulation was found not to result in stable calculations. An ad hoc measure was adopted: averaging the numerator and denominator over homogeneous directions (where such directions exist in the flow). When the averaging involved a large enough statistical sample that the computed C was positive (or at least only rarely negative), stable calculations were possible. Simply setting the negative values to zero (a procedure called "clipping"), with or without the averaging, also resulted in stable calculations.
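The steps described above can be caricatured in one dimension. In the sketch below (a simplified schematic, not production code: the resolved field, the top-hat filters, the one-dimensional stand-in for the strain rate, and the forms of L and M are all assumptions made for illustration), the pointwise ratio is noisy and changes sign, while averaging over the homogeneous direction and clipping negative values gives a usable coefficient.

```python
import numpy as np

N = 512
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
u_bar = np.sin(x) + 0.3 * np.sin(7 * x + 1.0)     # resolved (grid-filtered) velocity

def test_filter(f, n=8):
    """Top-hat test filter, wider than the grid filter (periodic)."""
    kernel = np.ones(n) / n
    fp = np.concatenate([f[-n:], f, f[:n]])
    return np.convolve(fp, kernel, mode="same")[n:-n]

def ddx(f):
    return np.gradient(f, dx)

delta, alpha = 4 * dx, 2.0                 # grid filter width and test/grid width ratio
S = ddx(u_bar)                             # 1-D stand-in for the resolved strain rate
S_hat = ddx(test_filter(u_bar))

# Leonard term of the Germano identity (computable from resolved fields only).
L = test_filter(u_bar * u_bar) - test_filter(u_bar) ** 2
# Model term: difference of Smagorinsky-type stresses at the two filter levels.
M = 2 * delta**2 * (test_filter(np.abs(S) * S) - alpha**2 * np.abs(S_hat) * S_hat)

C_pointwise = L * M / (M * M + 1e-12)      # pointwise ratio: noisy, can be negative
C_avg = np.mean(L * M) / np.mean(M * M)    # averaging over the homogeneous direction
C = max(C_avg, 0.0)                        # "clipping" negative values for stability
print("pointwise coefficient range:", C_pointwise.min(), C_pointwise.max())
print("averaged, clipped coefficient:", C)
```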
Meneveau proposed
an averaging over Lagrangian fluid trajectories with an exponentially decaying "memory". This can be applied to problems lacking homogeneous directions and can be stable if the effective time over which the averaging is done is long enough and yet not so long as to smooth out spatial inhomogeneities of interest.
Lilly's modification of the Germano method followed by a statistical averaging or synthetic removal of negative viscosity regions seems ad hoc, even if it could be made to "work". An alternate formulation of the least square minimization procedure known as the "Dynamic Localization Model" (DLM) was suggested by
Ghosal et al.
In this approach one first defines a quantity
with the tensors and replaced by the appropriate SGS model. This tensor then represents the amount by which the subgrid model fails to respect the Germano identity at each spatial location. In Lilly's approach, is then pulled out of the hat operator
making an algebraic function of which is then determined by requiring that
considered as a function of C have the least possible value.
However, since the thus obtained turns out to be just as variable as any other fluctuating quantity in turbulence, the original assumption of the constancy of cannot be justified a posteriori. In the DLM approach one avoids this inconsistency by not invoking the step of removing
C from the test filtering operation. Instead, one defines a global error over the entire flow domain by the quantity
where the integral ranges over the whole fluid volume. This global error is then a functional of the spatially varying function C(x) (here the time instant t is fixed and therefore appears just as a parameter), which is determined so as to minimize this functional. The solution to this variational problem is that C must satisfy a Fredholm integral equation of the second kind
where the functions and are defined in terms of the resolved fields and are therefore known at each time step and the integral ranges over the whole fluid domain. The integral equation is solved numerically by an iteration procedure and convergence was found to be generally rapid if used with a pre-conditioning scheme. Even though this variational approach removes an inherent inconsistency in Lilly's approach, the obtained from the integral equation still displayed the instability associated with negative viscosities. This can be resolved by insisting that be minimized subject to the constraint . This leads to an equation for that is nonlinear
Here the suffix + indicates the "positive part of", that is, f_{+} = \max(f, 0). Even though this superficially looks like "clipping", it is not an ad hoc scheme but a bona fide solution of the constrained variational problem. This DLM(+) model was found to be stable and yielded excellent results for forced and decaying isotropic turbulence, channel flows and a variety of other more complex geometries. If a flow happens to have homogeneous directions (let us say the directions x and z) then one can introduce the ansatz that C depends only on the remaining coordinates, C = C(y, t). The variational approach then immediately yields Lilly's result with averaging over homogeneous directions, without any need for ad hoc modifications of a prior result.
One shortcoming of the DLM(+) model was that it did not describe backscatter, which analysis of DNS data shows to be a real phenomenon. Two approaches were developed to address this. In one approach, due to Carati et al.
a fluctuating force with amplitude determined by the fluctuation-dissipation theorem is added in
analogy to Landau's theory of fluctuating hydrodynamics. In the second approach, one notes that
any "backscattered" energy appears in the resolved scales only at the expense of energy in the subgrid
scales. The DLM can be modified in a simple way to take into account this physical fact so as to allow
for backscatter while being inherently stable. This k-equation version of the DLM, DLM(k) replaces
in the Smagorinsky eddy viscosity model by as an appropriate velocity scale. The procedure for determining remains identical to the "unconstrained" version except that the tensors ,
where the sub-test scale kinetic
energy K is related to the subgrid scale kinetic energy k by
(follows by taking the trace of the Germano identity). To determine k we now use a transport equation
where is the kinematic viscosity and are positive coefficients
representing kinetic energy dissipation and diffusion respectively. These can be determined following the dynamic
procedure with constrained minimization as in DLM(+). This approach, though more expensive to implement than the DLM(+) was found to be stable and resulted in good agreement with experimental data for a variety of flows
tested. Furthermore, it is mathematically impossible for the DLM(k) to result in an unstable computation as the sum of the large scale and SGS energies is non-increasing by construction. Both of these approaches to incorporating backscatter work well. They yield models that are slightly less dissipative, with somewhat improved performance over the DLM(+). The DLM(k) model additionally yields the subgrid kinetic energy, which may be a physical quantity of interest. These improvements are achieved at a somewhat increased cost in model implementation.
The Dynamic Model originated at the 1990 Summer Program of the Center for Turbulence Research (CTR) at Stanford University. A series of "CTR-Tea" seminars celebrated the 30th Anniversary of this important milestone in turbulence modeling.
Structural models
See also
Direct numerical simulation
Fluid mechanics
Galilean invariance – an important property of certain types of filters
Reynolds-averaged Navier–Stokes equations
Turbulence
Further reading
Heus, T.; van Heerwaarden, C. C.; Jonker, H. J. J.; Pier Siebesma, A.; Axelsen, S. "Formulation of the Dutch Atmospheric Large-Eddy Simulation (DALES) and overview of its applications". Geoscientific Model Development, 3 (2), 30 September 2010, pp. 415–444. DOI: 10.5194/gmd-3-415-2010. ISSN 1991-9603.
References
Partial differential equations
Fluid dynamics
Fluid mechanics
Turbulence
Turbulence models
Computational fluid dynamics | Large eddy simulation | [
"Physics",
"Chemistry",
"Engineering"
] | 4,898 | [
"Turbulence",
"Computational fluid dynamics",
"Chemical engineering",
"Computational physics",
"Civil engineering",
"Piping",
"Fluid mechanics",
"Fluid dynamics"
] |
1,178,500 | https://en.wikipedia.org/wiki/Index%20of%20construction%20articles | This page is a list of construction topics.
A
Abated
- Abrasive blasting
- AC power plugs and sockets
- Access mat
- Accrington brick
- Accropode
- Acid brick
- Acoustic plaster
- Active daylighting
- Adaptive reuse
- Aerial crane
- Aerosol paint
- Aggregate base
- Agile construction
- Akmon
- Alternative natural materials
- Anchorage in reinforced concrete
- Angle grinder
- Arc welding
- Artificial stone
- Asbestos cement
- Asbestos insulating board
- Asbestos shingle
- Asphalt concrete
- Asphalt roll roofing
- Autoclaved aerated concrete
- Autonomous building
- Azulejo
- Australian Construction Contracts
- Axe
B
Backhoe
- Balloon framing
- Bamboo construction
- Bamboo-mud wall
- Bandsaw
- Banksman
- Barrel roof
- Baseboard
- Basement waterproofing
- Batten
- Batter board
- Belt sander
- Bill of quantities
- Bioasphalt
- Biocidal natural building material
- Bituminous waterproofing
- Block paving
- Blowtorch
- Board roof
- Bochka roof
- Bond beam
- Boulder wall
- Bowen Construction
- Box crib
- Breaker
- Brettstapel
- Brick
- Brick clamp
- Brick hod
- Bricklayer
- Brickwork
- Bughole
- Builder's risk insurance
- Builders hardware
- Builders' rites
- Building
- Building automation
- Building code
- Building construction
- Building control body
- Building cooperative
- Building design
- Building diagnostics
- Building engineer
- Building envelope
- Building estimator
- Building implosion
- Building information modeling
- Building information modeling in green building
- Building insulation
- Building insulation materials
- Building-integrated photovoltaics
- Building life cycle
- Building maintenance unit
- Building material
- Building officials
- Building performance
- Building performance simulation
- Building regulations approval
- Building regulations in the United Kingdom
- Building science
- Building services engineering
- Building typology
- Bull's eye level
- Bulldozer
- Bundwerk
- Bush hammer
- Butterfly roof
C
Calcium aluminate cements
- Camber beam
- Carpenter's axe
- Carpentry
- Cast in place concrete
- Cast stone
- Caulk
- Cavity wall insulation
- Cellulose insulation
- Cement
- Cement board
- Cement-bonded wood fiber
- Cement clinker
- Cement kiln
- Cement mill
- Cement render
- Cement tile
- Cementing equipment
- Cementitious foam insulation
- Cenocell
- Central heating
- Centring
- Ceramic building material
- Ceramic tile cutter
- Chaska brick
- Chief Construction Adviser to UK Government
- Chimney
- Circular saw
- Civil engineer
- Civil engineering
- Civil estimator
- Cladding (construction)
- Clerk of the Works
- Climate-adaptive building shell
- Climbing formwork
- Clinker brick
- Close studding
- Coastal engineering
- Coating
- Cold-formed steel
- Collar beam
- Collyweston stone slate
- Compactor
- Complex Projects Contract
- Composite material
- Composting toilet
- Compressed earth block
- Computer-aided design
- Concrete
- Concrete degradation
- Concrete densifier
- Concrete finisher
- Concrete float
- Concrete fracture analysis
- Concrete grinder
- Concrete hinge
- Concrete leveling
- Concrete mixer
- Concrete masonry unit
- Concrete moisture meter
- Concrete plant
- Concrete pump
- Concrete recycling
- Concrete saw
- Concrete sealer
- Concrete ship
- Concrete slab
- Concrete slump test
- Conical roof
- Constructability
- Constructed wetland
- Constructing Excellence
- Construction
- Construction Alliance
- Construction and renovation fires
- Construction bidding
- Construction buyer
- Construction collaboration technology
- Construction communication
- Construction contract
- Construction delay
- Construction engineering
- Construction equipment theft
- Construction estimating software
- Construction foreman
- Construction industry of India
- Construction industry of Iran
- Construction industry of Japan
- Construction industry of Romania
- Construction industry of the United Kingdom
- Construction law
- Constructionline
- Construction loan
- Construction management
- Construction paper
- Construction partnering
- Construction Photography
- Construction Research and Innovation Strategy Panel
- Construction site safety
- Construction trailer
- Construction waste
- Construction worker
- Cool pavement
- Copper cladding
- Cordwood construction
- Core-and-veneer
- Corn construction
- Cornerstone
- Corrosion fatigue
- Corrugated galvanised iron
- Cost engineering
- Cost overrun
- Cover meter
- Crane
- Crane vessel
- Crawl space
- Crawler excavator
- Cream City brick
- Creep and shrinkage of concrete
- Cross bracing
- Cross-laminated timber
- Custom home
- Cutting tool
D
Damp proofing
- Deck
- Deconstruction
- Decorative concrete
- Decorative laminate
- Decorative stones
- Deep foundation
- Deep plan
- Demolition
- Demolition waste
- Design–bid–build
- Design–build
- Detailed engineering
- Diagrid
- Diamond grinding
- Diamond grinding of pavement
- Die grinder
- Dimensional lumber
- Directional boring
- Displacement ventilation
- Distribution board
- Dolos
- Domestic roof construction
- Double envelope house
- Double tee
- Dragon beam
- Drain (plumbing)
- Drainage
- Drifter drill
- Drill
- Drilling and blasting
- Driven to refusal
- Dropped ceiling
- Dry mortar production line
- Drywall
- Drywall mechanic
- Ducrete
- Dump truck
- Dumper
- Duplex
- Dutch brick
- Dutch gable roof
- Dutch roof tiles
- Dwang
E
Early skyscrapers
- Earthbag construction
- Earthquake engineering
- Earthquake-resistant structures
- Earthquake simulation
- Eco-cement
- Egyptian pyramid construction techniques
- Electrical engineer
- Electrical wiring
- Electrician
- Electric resistance welding
- Elemental cost planning
- Elevator mechanic
- Encasement
- Encaustic tile
- Endurance time method
- Energetically modified cement
- Engineering
- Engineered cementitious composite
- Engineering brick
- Engineering, procurement, and construction
- Enviroboard
- Environmental impact of concrete
- Equivalent Concrete Performance Concept
- Erosion control
- Eternit
- Excavator
- Expanded clay aggregate
- Expanded polystyrene concrete
- External render
- Exterior insulation finishing system
- External wall insulation
F
Falsework
- Facade
- Facade engineering
- Facadism
- Facility condition assessment
- Fareham red brick
- Fast-track construction
- Fastener
- Faux painting
- Fédération Française du Bâtiment
- Ferrocement
- Fiberboard
- Fiber cement siding
- Fiberglass
- Fiberglass sheet laminating
- Fiber-reinforced composite
- Fiber-reinforced concrete
- Fiber roll
- Fibre cement
- Fibre-reinforced plastic
- Filigree concrete
- Fill trestle
- Fire brick
- Fire door
- Fire protection (Active fire protection / Passive fire protection)
- Fire Protection Engineering
- Firestopping
- Fireproofing
- Fire safety
- Fire sprinkler system
- First fix and second fix
- Flashing
- Flat roof
- Floating raft system
- Floor plan
- Flux-cored arc welding
- Fly ash brick
- Foam concrete
- Foam glass
- Forge welding
- Formstone
- Formwork
- Foundation
- Framer
- Framing
- Frost damage
- Furring
G
Gable roof
- Gambrel
- Gas metal arc welding
- Geofoam
- Geologic preliminary investigation
- GigaCrete
- Girt
- Glass brick
- Glass fiber reinforced concrete
- Glazier
- Glazing
- Glued laminated timber
- Grade beam
- Grader
- Grating
- Green building
- Green building and wood
- Green building in Germany
- Green (certification)
- Green roof
- Green wall
- Groundbreaking
- Ground reinforcement
- Grout
- Grouted roof
- Guastavino tile
- Gypsum block
- Gypsum concrete
H
Hammer
- Hammerbeam roof
- Hammer drill
- Hard hat
- Harling
- Harvard brick
- Heat pump
- Heavy equipment
- Heavy equipment operator
- Hempcrete
- Herodotus Machine
- Herringbone pattern
- High-performance fiber-reinforced cementitious composites
- High-rise building
- High-visibility clothing
- History of construction
- History of structural engineering
- History of the world's tallest buildings
- Heating, ventilation, and air-conditioning
- Hoisting
- Home construction
- Home improvement
- Home wiring
- Hot-melt adhesive
- House
- House painter and decorator
- House raising
- Housewrap
- Hurricane-proof building
- Hybrid masonry
- Hydrodemolition
- Hydrophobic concrete
- Hypertufa
I
I-beam
- I-joist
- Iberian paleochristian decorated tile
- Illegal construction
- Imbrex and tegula
- Impact wrench
- Imperial roof decoration
- Industrialization of construction
- Insulated glazing
- Insulated siding
- Insulating concrete form
- Insulation materials
- Integrated framing assembly
- Integrated project delivery
- Interior protection
- International Building Code
- Ironworker
J
Jackhammer
- Jack post
- Japanese carpentry
- Jettying
- Jigsaw
- Joinery
- Joint
- Joint compound
- Johnson bar
K
Knee wall
- Knockdown texture
L
Laborer
- Ladder
- Lakhori bricks
- Laminate panel
- Lath and plaster
- Laser level
- Launching gantry
- Lean construction
- Level luffing crane
- Lewis (lifting appliance)
- Lift slab construction
- Lifting equipment
- Lighting
- Light tower
- Lightening holes
- Lime mortar
- Line of thrust
- Live bottom trailer
- Living building material
- Load-bearing wall
- Loader
- Log building
- London stock brick
- Low-energy building techniques
- Low-energy house
- Low-rise building
- Lump sum contract
- Lunarcrete
- Lustron house
M
Mansard roof
- Marbleizing
- Masonry
- Masonry trowel
- Masonry veneer
- Mass concrete
- Master builder
- Material efficiency
- Material passport
- Mathematical tile
- Mechanical connections
- Mechanical, electrical, and plumbing
- Mechanics lien
- Medieval letter tile
- Medium-density fibreboard
- Megaproject
- Megastructure
- Metal profiles
- Microtunneling
- Middle-third rule
- Miller Act
- Millwork
- Millwright
- Mobile crane
- Modular addition
- Modular building
- Moiré tell-tale
- Moling
- Moment-resisting frame
- Monocrete construction
- Mono-pitched roof
- Mortar
- Mudbrick
- Mudcrete
- Multi-tool
N
Nail gun
- Nanak Shahi bricks
- Nanoconcrete
- NEC Engineering and Construction Contract
- New Austrian tunnelling method
- New-construction building commissioning
- Nibbler
- Non-shrink grout
O
Occupancy
- Offshore construction
- Off-site construction
- Operational bill
- Opus africanum
- Opus albarium
- Opus craticum
- Opus gallicum
- Opus incertum
- Opus isodomum
- Opus latericium
- Opus mixtum
- Opus quadratum
- Opus reticulatum
- Opus spicatum
- Opus vittatum
- Oriented strand board
- Oxy-fuel welding and cutting
P
Painter and decorator
- Painterwork
- Panelling
- Pantile
- Papercrete
- Parge coat
- Particle board
- Passive daylighting
- Passive house
- Passive survivability
- Pavement
- Pavement engineering
- Pavement milling
- Paver base
- Penetrant (mechanical, electrical, or structural)
- Performance bond
- Permeable paving
- Pierrotage
- Pile cap
- Pile driver
- Pile splice
- Pipefitter
- Pipelayer
- Planetary surface construction
- Plank (wood)
- Planning permission
- Planning permission in the United Kingdom
- Plasma arc welding
- Plasterer
- Plasterwork
- Plastic lumber
- Plot plan
- Plug and feather
- Plumb bob
- Plumber
- Plumbing
- Plumbing drawing
- Pneumatic tool
- Pole building framing
- Polished concrete
- Polychrome brickwork
- Polymer concrete
- Porch collapse
- Portable building
- Portland cement
- Portland stone
- Portuguese pavement
- Post in ground
- Poteaux-sur-sol
- Powder coating
- Power concrete screed
- Power shovel
- Power tool
- Power trowel
- Precast concrete
- Pre-construction services
- Pre-engineered building
- Prefabricated building
- Prefabrication
- Prestressed concrete
- Prestressed structure
- Primer (paint)
- Project agreement (Canada)
- Project delivery method
- Project management
- Properties of concrete
- Punch list
- Purlin
Q
Quadruple glazing
- Quantity surveyor
- Quantity take-off
- Quarry tile
- Quarter minus
R
R-value (insulation)
- Radial arm saw
- Radiant barrier
- Radiator reflector
- Rafter
- Rainscreen
- Raised floor
- RAL colour standard
- RAL colors
- Rammed earth
- Random orbital sander
- Rapid construction
- Ready-mix concrete
- Real estate
- Rebar
- Rebar detailing
- Rebar spacer
- Reciprocal frame
- Reciprocating saw
- Red List building materials
- Red rosin paper
- Redevelopment
- Reed mat (plastering)
- Reema construction
- Reglet
- Reinforced concrete
- Reinforced concrete structures durability
- Relocatable buildings
- Repointing
- Resilience (engineering and construction)
- Retentions in the British construction industry
- Rice-hull bagwall construction
- Rigger
- Rigid panel
- Ring crane
- Rivet gun
- Road
- Road surface
- Roller-compacted concrete
- Roman cement
- Roof
- Roof coating
- Roof edge protection
- Roofer
- Roof shingle
- Roof tiles
- Room air distribution
- Rosendale cement
- Rotary hammer
- Roughcast
- Rubberized asphalt
- Rubble
- Rubble trench foundation
- Rubblization
- Ruin value
S
Saddle roof
- Salt-concrete
- Saltillo tile
- Sander
- Sandhog
- Sandjacking
- Sandwich panel
- Sarking
- Saw-tooth roof
- Sawyer
- Scabbling
- Scaffolding
- Schmidt hammer
- Screed
- Screw piles
- Scrim and sarking
- Sediment control
- Segregation in concrete
- Self-build
- Self-cleaning floor
- Self-consolidating concrete
- Self-framing metal buildings
- Self-leveling concrete
- Septic tank
- Serviceability
- Sett
- Settlement (structural)
- Sewage treatment
- Shallow foundation
- Shear
- Shear wall
- Sheet metal
- Shelf angle
- Shielded metal arc welding
- Shiplap
- Shop drawing
- Shoring
- Shotcrete
- Shovel
- Shovel ready
- Sick building syndrome
- Siding
- Sill plate
- Site survey
- Skyscraper
- Skyscraper design and construction
- Slate industry in Wales
- Slater
- Sledgehammer
- Slipform stonemasonry
- Slip forming
- Smalley
- Snecked masonry
- Soft story building
- Soil cement
- Solid ground floor
- Sorel cement
- Spackling paste
- Spirit level
- Split-level home
- Spray painting
- Stack effect
- Staff
- Staffordshire blue brick
- Staggered truss system
- Staircase jig
- Stair tread
- Stairs
- Stamped asphalt
- Stamped concrete
- Steam shovel
- Steeplejack
- Sticky rice mortar
- Stonemason's hammer
- Storey pole
- Storm drain
- Storm window
- Steel building
- Steel fixer
- Steel frame
- Steel plate construction
- Stone carving
- Stone sealer
- Stone veneer
- Storey
- Strand jack
- Strap footing
- Straw-bale construction
- Strength of materials
- Strongback
- Structural building components
- Structural channel
- Structural clay tile
- Structural drawing
- Structural engineering
- Structural insulated panel
- Structural integrity and failure
- Structural material
- Structural robustness
- Structural steel
- Structure relocation
- Strut channel
- Stucco
- Submerged arc welding
- Submittals
- Subsidence
- Substructure
- Suction excavator
- Suicide bidding
- Sulfur concrete
- Superadobe
- Superinsulation
- Superintendent
- Surfaced block
- Survey stakes
- Sustainability in construction
- Sustainable flooring
- Sustainable refurbishment
T
T-beam
- Tabby concrete
- Table saw
- Tar paper
- Teardown
- Telescopic handler
- Temperley transporter
- Temporary fencing
- Tented roof
- Terraced house
- Tetrapod
- Textile-reinforced concrete
- Thatching
- Thermal bridge
- Thermal insulation
- Thinset
- Thin-shell structure
- Three-decker
- Tie
- Tie down hardware
- Tile
- Tilt slab
- Tilt up
- Timber
- Timber framing
- Timber framing tools
- Timber pilings
- Timber recycling
- Timber roof truss
- Tin ceiling
- Tiocem
- Toe board
- Topping out
- Townhouse
- Tracked loader
- Traditional Korean roof construction
- Transite
- Treadwheel crane
- Trench shield
- Trencher
- Trenchless technology
- Truss
- Tube and clamp scaffold
- Tuckpointing
- Tunnel boring machine
- Tunnel construction
- Tunnel hole-through
- Tunnel rock recycling
- Twig work
- Types of concrete
U
Umarell
- Uncertainties in building design and building energy assessment
- Underfloor air distribution
- Underground construction
- Underpinning
- Unfinished building
- Uniclass
- Uniformat
V
Verify in field
- Vertical damp proof barrier
- Vibro stone column
- Vinyl composition tile
- Vinyl siding
- Virtual design and construction
- Vitrified tile
- Voided biaxial slab
- Volumetric concrete mixer
W
Waffle slab
- Walking excavator
- Wall
- Wall chaser
- Wall footing
- Wall plan
- Wall stud
- Water–cement ratio
- Water heating
- Water level
- Waterproofing
- Wattle and daub
- Wearing course
- Weathering steel
- Weatherization
- Weld access hole
- Welded wire mesh
- Welder
- Welding
- Welding power supply
- Wheel tractor-scraper
- White Card
- Window capping
- Window insulation film
- Window well cover
- Wiring closet
- Wood-plastic composite
- Wood shingle
- Wool insulation
- Wrecking ball
- Wrought iron
X
Xbloc
Y
Z
Zellij
- Zero-energy building
- Zome
See also
Outline of construction
Glossary of British bricklaying
Glossary of construction cost estimating
List of building materials
List of building types
List of buildings
List of construction methods
List of construction trades
List of roof shapes
Construction topics | Index of construction articles | [
"Engineering"
] | 3,482 | [
"Building engineering",
"Civil engineering",
"Construction",
"Architecture"
] |
1,179,005 | https://en.wikipedia.org/wiki/Cerulean | The color cerulean (American English) or caerulean (British English, Commonwealth English), is a variety of the hue of blue that may range from a light azure blue to a more intense sky blue, and may be mixed as well with the hue of green. The first recorded use of cerulean as a color name in English was in 1590. The word is derived from the Latin word caeruleus (), "dark blue, blue, or blue-green", which in turn probably derives from caerulum, diminutive of caelum, "heaven, sky".
"Cerulean blue" is the name of a blue-green pigment consisting of cobalt stannate (). The pigment was first synthesized in the late eighteenth century by Albrecht Höpfner, a Swiss chemist, and it was known as Höpfner blue during the first half of the nineteenth century. Art suppliers began referring to cobalt stannate as cerulean in the second half of the nineteenth century. It was not widely used by artists until the 1870s when it became available in oil paint.
Pigment characteristics
The primary chemical constituent of the pigment is cobalt(II) stannate. The pigment is a greenish-blue color. In watercolor, it has a slight chalkiness. When used in oil paint, it loses this quality.
Today, cobalt chromate is sometimes marketed under the cerulean blue name but is darker and greener than the cobalt stannate version. The chromate makes excellent turquoise colors and is identified by Rex Art and some other manufacturers as "cobalt turquoise".
Cerulean is inert with good light resistance, and it exhibits a high degree of stability in both watercolor and acrylic paint.
History
Cobalt stannate pigment was first synthesized in 1789 by the Swiss chemist Albrecht Höpfner by heating roasted cobalt and tin oxides together. Subsequently, there was limited German production under the name of Cölinblau. It was generally known as Höpfner blue from the late eighteenth century until the middle of the nineteenth century.
In the late 1850s, art suppliers begin referring to the pigment as "ceruleum" blue. The London Times of 28 December 1859 had an advertisement for "Caeruleum, a new permanent color prepared for the use of artists." Ure's Dictionary of Arts from 1875 describes the pigment as "Caeruleum . . . consisting of stannate of protoxide of cobalt, mixed with stannic acid and sulphate of lime." Cerulean was also referred to as coeurleum, cerulium, bleu céleste (celestial blue). Other nineteenth century English pigment names included "ceruleum blue" and "corruleum blue". By 1935, Max Doerner referred to the pigment as cerulean, as do most modern sources, though ceruleum is still used.
Some sources claim that cerulean blue was first marketed in the United Kingdom by colourman George Rowney, as "coeruleum" in the early 1860s. However, the British firm of Roberson was buying "Blue No. 58 (Cerulium)" from a German firm of Frauenknecht and Stotz prior to Rowney. Cerulean blue was only available as a watercolor in the 1860s and was not widely adopted until the 1870s when it was used in oil paint. It was popular with artists including Claude Monet, Paul Signac, and Picasso. Van Gogh created his own approximation of cerulean blue using a mixture of cobalt blue, cadmium yellow, and white.
Notable occurrences
In 1877, Monet had added the pigment to his palette, using it in a painting from his series La Gare Saint-Lazare (now in the National Gallery, London). The blues in the painting include cobalt and cerulean blue, with some areas of ultramarine. Laboratory analysis conducted by the National Gallery identified a relatively pure example of cerulean blue pigment in the shadows of the station's canopy. Researchers at the National Gallery suggested that "cerulean probably offered a pigment of sufficiently greenish tone to displace Prussian blue, which may not have been popular by this time."
Berthe Morisot painted the blue coat of the woman in her Summer's Day, 1879 in cerulean blue in conjunction with artificial ultramarine and cobalt blue.
When the United Nations was formed at the end of World War II, they adopted cerulean blue for their emblem. The designer Oliver Lundquist stated that he chose the color because it was "the opposite of red, the color of war."
In the Catholic Church, cerulean vestments are permitted on certain Marian feast days, primarily the Immaculate Conception, in dioceses currently or formerly under the Spanish Crown.
Other color variations
Pale cerulean
Pantone, in a press release, declared this pale hue, which they call simply cerulean, the "color of the millennium".
The source of this color is the "Pantone Textile Paper eXtended (TPX)" color list, color #15-4020 TPX—Cerulean.
Cerulean (Crayola)
This bright tone of cerulean is the color called cerulean by Crayola crayons.
Cerulean frost
Cerulean frost is one of the colors in the special set of metallic colored Crayola crayons called Silver Swirls, the colors of which were formulated by Crayola in 1990.
Curious Blue
Curious Blue is one of the brighter-toned colors of cerulean.
In nature
Cerulean cuckooshrike
Cerulean kingfisher
Cerulean flycatcher
Cerulean warbler
Cerulean-capped manakin
See also
The Devil Wears Prada (film) § Cerulean sweater speech
Pusher (The X-Files episode) § "Cerulean blue is a gentle breeze"
List of colors
Pigment
Blue pigments
Explanatory notes
References
External links
A page on Cerulean Blue
Cerulean blue at ColourLex
Quaternary colors
Pigments
Inorganic pigments
Shades of azure
Shades of blue
Shades of cyan
Bird colours
Cobalt compounds | Cerulean | [
"Chemistry"
] | 1,289 | [
"Inorganic pigments",
"Inorganic compounds"
] |
1,179,028 | https://en.wikipedia.org/wiki/Grinding%20machine | A grinding machine, often shortened to grinder, is any of various power tools or machine tools used for grinding. It is a type of material removal using an abrasive wheel as the cutting tool. Each grain of abrasive on the wheel's surface cuts a small chip from the workpiece via shear deformation.
Grinding as a type of machining is used to finish workpieces that must show high surface quality (e.g., low surface roughness) and high accuracy of shape and dimension. As the accuracy in dimensions in grinding is of the order of 0.000025 mm, in most applications, it tends to be a finishing operation and removes comparatively little metal, about 0.25 to 0.50 mm depth. However, there are some roughing applications in which grinding removes high volumes of metal quite rapidly. Thus, grinding is a diverse field.
Overview
The grinding machine consists of a bed with a fixture to guide and hold the workpiece and a power-driven grinding wheel spinning at the required speed. The wheel’s diameter and the manufacturer’s rating determine the speed. The grinding head can travel across a fixed workpiece, or the workpiece can be moved while the grinding head stays in a fixed position.
Fine control of the grinding head or table position is possible using a vernier calibrated hand wheel or using the features of numerical controls.
Grinding machines remove material from the workpiece by abrasion, which can generate substantial amounts of heat. To cool the workpiece so that it does not overheat and go outside its tolerance, grinding machines incorporate a coolant. The coolant also benefits the machinist as the heat generated may cause burns. In high-precision grinding machines (most cylindrical and surface grinders), the final grinding stages are usually set up so that they remove about 200 nm (less than 1/10000 in) per pass - this generates so little heat that even with no coolant, the temperature rise is negligible.
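As a rough numerical illustration of the quantities mentioned above, the sketch below computes the wheel's peripheral (surface) speed from its diameter and spindle speed, and estimates how many finishing passes are needed for a given amount of remaining stock. All input values are hypothetical; only the roughly 200 nm per-pass removal figure comes from the text above.

```python
import math

def surface_speed_m_s(wheel_diameter_m: float, spindle_rpm: float) -> float:
    """Peripheral speed of the wheel: v = pi * D * N, with N converted to rev/s."""
    return math.pi * wheel_diameter_m * spindle_rpm / 60.0

def finishing_passes(stock_to_remove_m: float, removal_per_pass_m: float) -> int:
    """Number of passes needed to remove the remaining stock."""
    return math.ceil(stock_to_remove_m / removal_per_pass_m)

# Hypothetical example: a 250 mm wheel at 2000 rpm, 2.5 um of stock left,
# removing about 200 nm per finishing pass.
print(f"surface speed: {surface_speed_m_s(0.250, 2000):.1f} m/s")
print(f"finishing passes: {finishing_passes(2.5e-6, 200e-9)}")
```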
Types
These machines include the:
Belt grinder, which is usually used as a machining method to process metals and other materials, with the aid of coated abrasives. Analogous to a belt sander (which itself is often used for wood but sometimes metal). Belt grinding is a versatile process suitable for all kinds of applications, including finishing, deburring, and stock removal.
Bench grinder, which usually has two wheels of different grain sizes for roughing and finishing operations and is secured to a workbench or floor stand. Its uses include shaping tool bits or various tools that need to be made or repaired. Bench grinders are manually operated.
Cylindrical grinder, which includes both the types that use centers and the centerless types. A cylindrical grinder may have multiple grinding wheels. The work piece is rotated and fed past the wheel(s) to form a cylinder. It is used to make precision rods, tubes, bearing races, bushings, and many other parts.
Surface grinder, which has a head that is lowered to a work piece, which is moved back and forth under the grinding wheel on a table that typically has a controllable permanent magnet (magnetic chuck) for use with magnetic stock (especially ferrous stock) but can have a vacuum chuck or other fixture means. The most common surface grinders have a grinding wheel rotating on a horizontal axis, cutting around the circumference of the grinding wheel. Rotary surface grinders, commonly known as "Blanchard" style grinders, have a grinding head that rotates the grinding wheel on a vertical axis cutting on the end face of the grinding wheel, while a table rotates the work piece in the opposite direction underneath. This type of machine removes large amounts of material and grinds flat surfaces with noted spiral grind marks. It can also be used to make and sharpen metal stamping die sets, flat shear blades, fixture bases or any flat and parallel surfaces. Surface grinders can be manually operated or have CNC controls.
Tool and cutter grinder, which usually can perform the minor function of the drill bit grinder or other specialist toolroom grinding operations.
Jig grinder, which as the name implies, has a variety of uses when finishing jigs, dies, and fixtures. Its primary function is in the realm of grinding holes for drill bushings and grinding pins. It can also be used for complex surface grinding to finish work started on a mill.
Gear grinder, which is usually employed as the final machining process when manufacturing a high-precision gear. The primary function of these machines is to remove the remaining few thousandths of an inch of material left by other manufacturing methods (such as gashing or hobbing).
Centre grinder, which is usually employed as a machining process when manufacturing all kinds of high-precision shafts. The primary function of these machines is to grind the centers of a shaft very precisely. Accurate round center holes on both sides ensure a position with high repeat accuracy on the live centers.
Die grinder, which is a high-speed hand-held rotary tool with a small diameter grinding bit. They are typically air-driven (using compressed air), but can be driven with a small electric motor directly or via a flexible shaft.
Angle grinder, another handheld power tool, often used in fabrication and construction work.
Internal grinder, which is used for grinding internal surfaces of workpieces; boron carbide wheels are effective when dealing with extremely hard materials that need high levels of precision.
See also
Diamond tool
References
Power tools
Sharpening | Grinding machine | [
"Physics"
] | 1,132 | [
"Power (physics)",
"Power tools",
"Physical quantities"
] |
1,179,217 | https://en.wikipedia.org/wiki/Dextran | Dextran is a complex branched glucan (polysaccharide derived from the condensation of glucose), originally derived from wine. IUPAC defines dextrans as "Branched poly-α-d-glucosides of microbial origin having glycosidic bonds predominantly C-1 → C-6". Dextran chains are of varying lengths (from 3 to 2000 kilodaltons).
The polymer main chain consists of α-1,6 glycosidic linkages between glucose monomers, with branches from α-1,3 linkages. This characteristic branching distinguishes a dextran from a dextrin, which is a straight chain glucose polymer tethered by α-1,4 or α-1,6 linkages.
Occurrence
Dextran was discovered by Louis Pasteur as a microbial product in wine, but mass production was only possible after the development by Allene Jeanes of a process using bacteria. Dental plaque is rich in dextrans. Dextran is a complicating contaminant in the refining of sugar because it elevates the viscosity of sucrose solutions and fouls plumbing.
Dextran is now produced from sucrose by certain lactic acid bacteria; species include Leuconostoc mesenteroides and Streptococcus mutans. The structure of the dextran produced depends not only on the family and species of the bacterium but on the strain. The dextrans are separated by fractional precipitation from protein-free extracts using ethanol. Some bacteria coproduce fructans, which can complicate isolation of the dextrans.
Uses
Dextran 70 is on the WHO Model List of Essential Medicines, the most important medications needed in a health system.
Medicinally it is used as an antithrombotic (antiplatelet), to reduce blood viscosity, and as a volume expander in hypovolaemia.
Microsurgery
These agents are used commonly by microsurgeons to decrease vascular thrombosis. The antithrombotic effect of dextran is mediated through its binding of erythrocytes, platelets, and vascular endothelium, increasing their electronegativity and thus reducing erythrocyte aggregation and platelet adhesiveness. Dextrans also reduce factor VIII-Ag Von Willebrand factor, thereby decreasing platelet function. Clots formed after administration of dextrans are more easily lysed due to an altered thrombus structure (more evenly distributed platelets with coarser fibrin). By inhibiting α-2 antiplasmin, dextran serves as a plasminogen activator, so possesses thrombolytic features.
Outside of these features, larger dextrans, which do not pass out of the vessels, are potent osmotic agents and thus have been used urgently to treat hypovolemia. The hemodilution caused by volume expansion with dextran use improves blood flow, thus further improving patency of microanastomoses and reducing thrombosis. Still, no difference in antithrombotic effectiveness has been detected between intra-arterial and intravenous administration of dextran.
Dextrans are available in multiple molecular weights ranging from 3 kDa to 2 MDa. The larger dextrans (>60,000 Da) are excreted poorly from the kidney, so remain in the blood for as long as weeks until they are metabolized. Consequently, they have prolonged antithrombotic and colloidal effects. In this family, dextran-40 (MW: 40,000 Da), has been the most popular member for anticoagulation therapy. Close to 70% of dextran-40 is excreted in urine within the first 24 hours after intravenous infusion, while the remaining 30% are retained for several more days.
Other medical uses
Dextran is used in some eye drops as a lubricant, and in certain intravenous fluids to solubilize other factors, such as iron (in a solution known as iron dextran).
Intravenous solutions with dextran function both as volume expanders and means of parenteral nutrition. Such a solution provides an osmotically neutral fluid that once in the body is digested by cells into glucose and free water. It is occasionally used to replace lost blood in emergency situations, when replacement blood is not available, but must be used with caution as it does not provide necessary electrolytes and can cause hyponatremia or other electrolyte disturbances.
Dextran also increases blood sugar levels.
Dextran can be used in an aqueous two-phase system (ATPS) for PEGylation.
Laboratory uses
Dextran is used in the osmotic stress technique for applying osmotic pressure to biological molecules.
It is also used in some size-exclusion chromatography matrices; an example is Sephadex.
Dextran has also been used in bead form to aid in bioreactor applications.
Dextran has been used as an immobilization agent in biosensors.
Dextran preferentially binds to early endosomes; fluorescent-labelled dextran can be used to visualize these endosomes under a microscope.
Dextran can be used as a stabilizing coating to protect metal nanoparticles from oxidation and improve biocompatibility.
Dextran coupled with a fluorescent molecule such as fluorescein isothiocyanate can be used to create concentration gradients of diffusible molecules for imaging and allow subsequent characterization of gradient slope.
Solutions of fluorescently-labelled dextran can be perfused through engineered vessels to analyze vascular permeability.
Dextran is used to make microcarriers for industrial cell culture.
Orally-administered dextran sodium sulphate is used to induce colitis in animal models of inflammatory bowel disease.
Dextran is a common model compound to test the potential of drug formulations to facilitate intestinal absorption via the paracellular route.
Side effects
Although relatively few side effects are associated with dextran use, these side effects can be very serious. These include anaphylaxis, volume overload, pulmonary edema, cerebral edema, or platelet dysfunction.
An uncommon but significant complication of dextran osmotic effect is acute kidney injury. The pathogenesis of this kidney failure is the subject of many debates with direct toxic effect on tubules and glomerulus versus intraluminal hyperviscosity being some of the proposed mechanisms. Patients with history of diabetes mellitus, chronic kidney disease, or vascular disorders are most at risk. Brooks and others recommend the avoidance of dextran therapy in patients with chronic kidney disease.
Research
Efforts have been made to develop modified dextran polymers. One of these has acetal modified hydroxyl groups. It is insoluble in water, but soluble in organic solvents. This allows it to be processed in the same manner as many polyesters, like poly(lactic-co-glycolic acid), through processes like solvent evaporation and emulsion. Acetalated dextran is structurally different from acetylated dextran. As of 2017 several uses for drug delivery had been explored in vitro and a few had been tested in animal models.
See also
Dextran drug delivery systems
Pentoxifylline
References
External links
Resource on dextran properties and structure of dextran polymers
Biotechnology products
Polysaccharides
Intravenous fluids | Dextran | [
"Chemistry",
"Biology"
] | 1,592 | [
"Carbohydrates",
"Biotechnology products",
"Polysaccharides"
] |
1,179,352 | https://en.wikipedia.org/wiki/Rocket-based%20combined%20cycle | The RBCC, or rocket-based combined cycle propulsion system, was one of the two types of propulsion systems that may have been tested in the Boeing X-43 experimental aircraft. The RBCC, or strutjet as it is sometimes called, is a combination propulsion system that consists of a ramjet, scramjet, and ducted rocket, where all three systems use a shared flow path.
A TBCC, or turbine-based combined cycle propulsion system, is a turbine engine combined with a ramjet and scramjet.
A TRCC, or turbo rocket combined cycle propulsion system, is another combination propulsion system that combines an afterburning turbine engine with a RBCC propulsion system.
See also
SABRE (Synergistic Air Breathing Rocket Engine), a pre-cooled air-breathing rocket/RAM-jet engine based on General Dynamics' exploration of LACE concepts (Liquid Air Cycle Engine) by Reaction Engines, UK.
References
External links
Performance Evaluation of the NASA GTX RBCC Flowpath - Glenn Research Center - NASA
Parametric Study Conducted of Rocket-Based, Combined-Cycle Nozzles - Glenn Research Center - NASA
Aerojet Successfully Tests RBCC Single Thruster, Demonstrating Tri-Fluid Rocket Injector Capabilities - SpaceRef
Hypersonic inlet studies for a small-scale rocket-based combined-cycle engine, Journal of propulsion and power, 2007, vol. 23, no6, pp. 1160–1167, AIAA.
Rocket-Based Combined-Cycle Engine (RBCC): Ramrocket, University of Toronto, High-Speed Vehicle Propulsion Systems Group.
Jet engines | Rocket-based combined cycle | [
"Astronomy",
"Technology"
] | 327 | [
"Engines",
"Spacecraft stubs",
"Rocketry stubs",
"Astronomy stubs",
"Jet engines"
] |
1,179,505 | https://en.wikipedia.org/wiki/Well%20poisoning | Well poisoning is the act of malicious manipulation of potable water resources in order to cause illness or death, or to deny an opponent access to fresh water resources.
Well poisoning has been historically documented as a strategy during wartime since antiquity, and was used both offensively (as a terror tactic to disrupt and depopulate a target area) and defensively (as a scorched earth tactic to deny an invading army sources of clean water). Rotting corpses (both animal and human) thrown down wells were the most common implementation; in one of the earliest examples of biological warfare, corpses known to have died from common transmissible diseases of the Pre-Modern era such as bubonic plague or tuberculosis were especially favored for well-poisoning.
History of implementation
Instances of medieval usage
Well poisoning has been used as an important scorched earth tactic at least since medieval times. In 1462, for example, Prince Vlad III the Impaler of Wallachia utilized this method to delay his pursuing adversaries. Nearly 500 years later during the Winter War, the Finns rendered wells unusable by putting animal carcasses or feces in them in order to passively combat invading Soviet forces.
Instances of modern usage
During the 20th century, the practice of poisoning wells lost most of its potency and practicality against an organized force, as modern military logistics ensure secure and decontaminated supplies and resources. Nevertheless, German forces poisoned wells in France during the First World War as part of Operation Alberich.
After World War II, Nakam, a paramilitary organisation of about fifty Holocaust survivors, sought revenge for the murder of six million Jews during the Holocaust. The group's leader Abba Kovner went to Mandatory Palestine in order to secure large quantities of poison for poisoning water mains to kill large numbers of Germans. His followers infiltrated the water system of Nuremberg. However, Kovner was arrested upon arrival in the British zone of occupied Germany and had to throw the poison overboard.
Israel poisoned the wells and water supplies of certain Palestinian towns and villages as part of their biological warfare program during the 1948 Palestine war, including a successful operation that caused a typhoid epidemic in Acre in early May 1948, and an unsuccessful attempt in Gaza that was foiled by the Egyptians in late May.
In the late 20th century, accusations of well-poisoning were brought up, most notoriously in relation to the Kosovo War. In the 21st century, Israeli settlers have been condemned over suspicions of poisoning wells of villages in the occupied Palestinian territories.
As libel against Jews
Medieval accusations against Jews
Despite some vague understanding of how diseases could spread, the existence of viruses and bacteria was unknown in medieval times, and the outbreak of disease could not be scientifically explained. Any sudden deterioration of health was often blamed on poisoning. Europe was hit by several waves of the Black Death throughout the late Middle Ages. Crowded cities were especially hard hit by the disease, with death tolls as high as 50% of the population. In their distress, emotionally distraught survivors searched desperately for an explanation. The city-dwelling Jews of the Middle Ages, living in walled-up, segregated ghetto districts, aroused suspicion. An outbreak of plague thus became the trigger for Black Death persecutions, with hundreds of Jews burned at the stake, or rounded up in synagogues and private houses that were then set aflame.
Walter Laqueur writes in his book The Changing Face of Anti-Semitism: From Ancient Times to the Present Day:
There were no mass attacks against "Jewish poisoners" after the period of the Black Death, but the accusation became part and parcel of antisemitic dogma and language. It appeared again in early 1953 in the form of the "doctors' plot" in Stalin's last days, when hundreds of Jewish physicians in the Soviet Union were arrested and some of them killed on the charge of having caused the death of prominent Communist leaders... Similar charges were made in the 1980s and 1990s in radical Arab nationalist and Muslim fundamentalist propaganda that accused the Jews of spreading AIDS and other infectious diseases.
Modern instances of antisemitic libel
Allegations of well poisoning entwined with antisemitism have also emerged in the discourse around modern epidemics and pandemics such as swine flu, Ebola, avian flu, SARS, and COVID-19.
EU address by Mahmoud Abbas
In his address to the European Parliament on 23 June 2016, in Brussels, Palestinian Authority president and PLO chairman Mahmoud Abbas made an unsubstantiated allegation, "accusing rabbis of poisoning Palestinian wells". This was based on false media reports saying Israeli rabbis were inciting the poisoning of water of Palestinians, led by a rabbi Shlomo Mlma or Mlmad from the Council of Rabbis in the West Bank settlements. A rabbi by that name could not be located, nor is such an organization listed.
Abbas said: "Only a week ago, a number of rabbis in Israel announced, and made a clear announcement, demanding that their government poison the water to kill the Palestinians ... Isn't that clear incitement to commit mass killings against the Palestinian people?"
The speech received a standing ovation. The speech was described as "echoing anti-Semitic claims". A day later, on Saturday 26 June, Abbas admitted that "his claims at the EU were baseless". Abbas further said that he "didn't intend to do harm to Judaism or to offend Jewish people around the world." Israeli Prime Minister Benjamin Netanyahu stated in reaction that Abbas had spread a "blood libel" in his European Parliament address.
See also
Operation Cast Thy Bread
Environmental impact of war
Groundwater pollution
In My Country There Is Problem
Jonestown
Nakam
Water supply terrorism
References
Works cited
External links
Accusation of Well-Poisoning (Jewish Encyclopedia)
The Virtual Jewish History Tour. Belgium
Christian antisemitism in the Middle Ages
Antisemitic tropes
Environmental impact of war
Biological warfare
Black Death
Water wells
Terrorism tactics
Mass poisoning
Chemical warfare | Well poisoning | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 1,220 | [
"Hydrology",
"Biological warfare",
"Water wells",
"nan",
"Environmental engineering"
] |
229,160 | https://en.wikipedia.org/wiki/Q%20factor | In physics and engineering, the quality factor or factor is a dimensionless parameter that describes how underdamped an oscillator or resonator is. It is defined as the ratio of the initial energy stored in the resonator to the energy lost in one radian of the cycle of oscillation. factor is alternatively defined as the ratio of a resonator's centre frequency to its bandwidth when subject to an oscillating driving force. These two definitions give numerically similar, but not identical, results. Higher indicates a lower rate of energy loss and the oscillations die out more slowly. A pendulum suspended from a high-quality bearing, oscillating in air, has a high , while a pendulum immersed in oil has a low one. Resonators with high quality factors have low damping, so that they ring or vibrate longer.
Explanation
The Q factor is a parameter that describes the resonance behavior of an underdamped harmonic oscillator (resonator). Sinusoidally driven resonators having higher Q factors resonate with greater amplitudes (at the resonant frequency) but have a smaller range of frequencies around that frequency for which they resonate; the range of frequencies for which the oscillator resonates is called the bandwidth. Thus, a high-Q tuned circuit in a radio receiver would be more difficult to tune, but would have more selectivity; it would do a better job of filtering out signals from other stations that lie nearby on the spectrum. High-Q oscillators oscillate with a smaller range of frequencies and are more stable.
The quality factor of oscillators varies substantially from system to system, depending on their construction. Systems for which damping is important (such as dampers keeping a door from slamming shut) have Q near 1/2. Clocks, lasers, and other resonating systems that need either strong resonance or high frequency stability have high quality factors. Tuning forks have quality factors around 1000. The quality factor of atomic clocks, superconducting RF cavities used in accelerators, and some high-Q lasers can reach as high as 10^11 and higher.
There are many alternative quantities used by physicists and engineers to describe how damped an oscillator is. Important examples include: the damping ratio, relative bandwidth, linewidth and bandwidth measured in octaves.
The concept of Q originated with K. S. Johnson of Western Electric Company's Engineering Department while evaluating the quality of coils (inductors). His choice of the symbol Q was only because, at the time, all other letters of the alphabet were taken. The term was not intended as an abbreviation for "quality" or "quality factor", although these terms have grown to be associated with it.
Definition
The definition of Q since its first use in 1914 has been generalized to apply to coils and condensers, resonant circuits, resonant devices, resonant transmission lines, cavity resonators, and has expanded beyond the electronics field to apply to dynamical systems in general: mechanical and acoustic resonators, material and quantum systems such as spectral lines and particle resonances.
Bandwidth definition
In the context of resonators, there are two common definitions for Q, which are not exactly equivalent. They become approximately equivalent as Q becomes larger, meaning the resonator becomes less damped. One of these definitions is the frequency-to-bandwidth ratio of the resonator:

Q = \frac{f_r}{\Delta f} = \frac{\omega_r}{\Delta \omega}

where f_r is the resonant frequency, Δf is the resonance width or full width at half maximum (FWHM), i.e. the bandwidth over which the power of vibration is greater than half the power at the resonant frequency, ω_r = 2πf_r is the angular resonant frequency, and Δω is the angular half-power bandwidth.
Under this definition, Q is the reciprocal of fractional bandwidth.
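A minimal numerical sketch of this bandwidth definition, assuming a hypothetical resonator with a 1 MHz centre frequency and a 2 kHz half-power width:

```python
def q_from_bandwidth(f_r: float, delta_f: float) -> float:
    """Q = f_r / delta_f, with delta_f the full width at half maximum (half-power bandwidth)."""
    return f_r / delta_f

# Hypothetical resonator: 1 MHz centre frequency, 2 kHz half-power width.
print(q_from_bandwidth(1e6, 2e3))  # 500.0
```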
Stored energy definition
The other common nearly equivalent definition for Q is the ratio of the energy stored in the oscillating resonator to the energy dissipated per cycle by damping processes:

Q = 2\pi \times \frac{\text{energy stored}}{\text{energy dissipated per cycle}} = 2\pi f_r \times \frac{\text{energy stored}}{\text{power loss}}

The factor of 2π makes Q expressible in simpler terms, involving only the coefficients of the second-order differential equation describing most resonant systems, electrical or mechanical. In electrical systems, the stored energy is the sum of energies stored in lossless inductors and capacitors; the lost energy is the sum of the energies dissipated in resistors per cycle. In mechanical systems, the stored energy is the sum of the potential and kinetic energies at some point in time; the lost energy is the work done by an external force, per cycle, to maintain amplitude.
More generally and in the context of reactive component specification (especially inductors), the frequency-dependent definition of Q is used:

Q(\omega) = \omega \times \frac{\text{maximum energy stored}}{\text{power loss}}

where ω is the angular frequency at which the stored energy and power loss are measured. This definition is consistent with its usage in describing circuits with a single reactive element (capacitor or inductor), where it can be shown to be equal to the ratio of reactive power to real power. (See Individual reactive components.)
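Both energy-based forms translate directly into a short sketch; the energies, power loss, and frequency below are hypothetical placeholder values:

```python
import math

def q_stored_energy(energy_stored: float, energy_lost_per_cycle: float) -> float:
    """Q = 2*pi * (energy stored) / (energy dissipated per cycle)."""
    return 2 * math.pi * energy_stored / energy_lost_per_cycle

def q_at_frequency(omega: float, energy_stored: float, power_loss: float) -> float:
    """Frequency-dependent form: Q(omega) = omega * (energy stored) / (power loss)."""
    return omega * energy_stored / power_loss

# Hypothetical resonator: 1 uJ stored, 10 nJ lost per cycle -> Q of about 628.
print(q_stored_energy(1e-6, 1e-8))
# Same resonator described by power loss at 1 MHz: 10 nJ per 1 us cycle = 10 mW.
print(q_at_frequency(2 * math.pi * 1e6, 1e-6, 1e-2))
```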
Q factor and damping
The Q factor determines the qualitative behavior of simple damped oscillators. (For mathematical details about these systems and their behavior see harmonic oscillator and linear time invariant (LTI) system.)
A system with low quality factor (Q < 1/2) is said to be overdamped. Such a system doesn't oscillate at all, but when displaced from its equilibrium steady-state output it returns to it by exponential decay, approaching the steady state value asymptotically. It has an impulse response that is the sum of two decaying exponential functions with different rates of decay. As the quality factor decreases the slower decay mode becomes stronger relative to the faster mode and dominates the system's response resulting in a slower system. A second-order low-pass filter with a very low quality factor has a nearly first-order step response; the system's output responds to a step input by slowly rising toward an asymptote.
A system with high quality factor (Q > 1/2) is said to be underdamped. Underdamped systems combine oscillation at a specific frequency with a decay of the amplitude of the signal. Underdamped systems with a low quality factor (a little above 1/2) may oscillate only once or a few times before dying out. As the quality factor increases, the relative amount of damping decreases. A high-quality bell rings with a single pure tone for a very long time after being struck. A purely oscillatory system, such as a bell that rings forever, has an infinite quality factor. More generally, the output of a second-order low-pass filter with a very high quality factor responds to a step input by quickly rising above, oscillating around, and eventually converging to a steady-state value.
A system with an intermediate quality factor (Q = 1/2) is said to be critically damped. Like an overdamped system, the output does not oscillate, and does not overshoot its steady-state output (i.e., it approaches a steady-state asymptote). Like an underdamped response, the output of such a system responds quickly to a unit step input. Critical damping results in the fastest response (approach to the final value) possible without overshoot. Real system specifications usually allow some overshoot for a faster initial response or require a slower initial response to provide a safety margin against overshoot.
In negative feedback systems, the dominant closed-loop response is often well-modeled by a second-order system. The phase margin of the open-loop system sets the quality factor of the closed-loop system; as the phase margin decreases, the approximate second-order closed-loop system is made more oscillatory (i.e., has a higher quality factor).
Some examples
Physical interpretation
Physically speaking, Q is approximately the ratio of the stored energy to the energy dissipated over one radian of the oscillation; or nearly equivalently, at high enough Q values, 2π times the ratio of the total energy stored and the energy lost in a single cycle.
It is a dimensionless parameter that compares the exponential time constant for decay of an oscillating physical system's amplitude to its oscillation period. Equivalently, it compares the frequency at which a system oscillates to the rate at which it dissipates its energy. More precisely, the frequency and period used should be based on the system's natural frequency, which at low Q values is somewhat higher than the oscillation frequency as measured by zero crossings.
Equivalently (for large values of Q), the Q factor is approximately the number of oscillations required for a freely oscillating system's energy to fall off to e^(-2π), or about 1/535 or 0.2%, of its original energy. This means the amplitude falls off to approximately e^(-π), or about 4%, of its original amplitude.
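A short sketch of this rule of thumb: for a lightly damped oscillator, after n free oscillations the amplitude has decayed by roughly e^(-πn/Q) and the energy by e^(-2πn/Q), so after Q cycles the energy is down to roughly 0.2%. The Q value used below is arbitrary.

```python
import math

def fractions_after_cycles(q: float, n_cycles: float):
    """For a lightly damped oscillator (large Q), after n free oscillations the
    amplitude has decayed by exp(-pi*n/Q) and the energy by exp(-2*pi*n/Q)."""
    amplitude = math.exp(-math.pi * n_cycles / q)
    energy = math.exp(-2 * math.pi * n_cycles / q)
    return amplitude, energy

# After Q cycles: amplitude fraction ~ e^-pi ~ 4 %, energy fraction ~ e^-2pi ~ 0.2 %.
amp, en = fractions_after_cycles(q=50.0, n_cycles=50.0)
print(f"amplitude fraction: {amp:.3%}, energy fraction: {en:.3%}")
```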
The width (bandwidth) of the resonance is given by (approximately):

\Delta f = \frac{f_N}{Q}

where f_N is the natural frequency, and Δf, the bandwidth, is the width of the range of frequencies for which the energy is at least half its peak value.
The resonant frequency is often expressed in natural units (radians per second), rather than using the f_N in hertz, as

\omega_N = 2\pi f_N
The factors Q, damping ratio ζ, natural frequency ω_N, attenuation rate α, and exponential time constant τ are related such that:

Q = \frac{1}{2\zeta} = \frac{\omega_N}{2\alpha} = \frac{\tau \omega_N}{2}

and the damping ratio can be expressed as:

\zeta = \frac{1}{2Q} = \frac{\alpha}{\omega_N} = \frac{1}{\tau \omega_N}

The envelope of oscillation decays proportional to e^(-αt) or e^(-t/τ), where α and τ can be expressed as:

\alpha = \frac{\omega_N}{2Q} = \zeta \omega_N = \frac{1}{\tau}

and

\tau = \frac{2Q}{\omega_N} = \frac{1}{\zeta \omega_N} = \frac{1}{\alpha}

The energy of oscillation, or the power dissipation, decays twice as fast, that is, as the square of the amplitude, as e^(-2αt) or e^(-2t/τ).
For a two-pole lowpass filter, the transfer function of the filter is

H(s) = \frac{\omega_0^2}{s^2 + \frac{\omega_0}{Q}s + \omega_0^2}

For this system, when Q > 1/2 (i.e., when the system is underdamped), it has two complex conjugate poles that each have a real part of -α. That is, the attenuation parameter α represents the rate of exponential decay of the oscillations (that is, of the output after an impulse) into the system. A higher quality factor implies a lower attenuation rate, and so high-Q systems oscillate for many cycles. For example, high-quality bells have an approximately pure sinusoidal tone for a long time after being struck by a hammer.
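A small sketch locating the poles of this standard second-order low-pass form for a given natural frequency and Q; the 1 kHz, Q = 5 values are hypothetical.

```python
import cmath
import math

def lowpass_poles(omega0: float, q: float):
    """Poles of H(s) = omega0^2 / (s^2 + (omega0/Q)*s + omega0^2).
    For Q > 1/2 they form a complex-conjugate pair with real part -omega0/(2Q)."""
    alpha = omega0 / (2 * q)                      # attenuation rate
    root = cmath.sqrt(alpha ** 2 - omega0 ** 2)   # imaginary when Q > 1/2
    return -alpha + root, -alpha - root

# Hypothetical filter: natural frequency 1 kHz, Q = 5.
omega0 = 2 * math.pi * 1e3
p1, p2 = lowpass_poles(omega0, 5.0)
print(p1, p2)
print("real part:", p1.real, "expected -omega0/(2Q):", -omega0 / 10)
```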
Electrical systems
For an electrically resonant system, the Q factor represents the effect of electrical resistance and, for electromechanical resonators such as quartz crystals, mechanical friction.
Relationship between Q and bandwidth
The 2-sided bandwidth relative to a resonant frequency of f_0 (Hz) is f_0/Q.
For example, an antenna tuned to have a Q value of 10 and a centre frequency of 100 kHz would have a 3 dB bandwidth of 10 kHz.
In audio, bandwidth is often expressed in terms of octaves. Then the relationship between Q and bandwidth is

Q = \frac{2^{BW/2}}{2^{BW} - 1}

where BW is the bandwidth in octaves.
RLC circuits
In an ideal series RLC circuit, and in a tuned radio frequency receiver (TRF), the Q factor is:

Q = \frac{1}{R}\sqrt{\frac{L}{C}} = \frac{\omega_0 L}{R} = \frac{1}{\omega_0 R C}

where R, L, and C are the resistance, inductance and capacitance of the tuned circuit, respectively. Larger series resistances correspond to lower circuit Q values.
For a parallel RLC circuit, the Q factor is the inverse of the series case:

Q = R\sqrt{\frac{C}{L}} = \frac{R}{\omega_0 L} = \omega_0 R C

Consider a circuit where R, L, and C are all in parallel. The lower the parallel resistance is, the more effect it will have in damping the circuit and thus result in lower Q. This is useful in filter design to determine the bandwidth.
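These two formulas can be checked with a quick sketch; the component values are hypothetical.

```python
import math

def q_series_rlc(r: float, l: float, c: float) -> float:
    """Series RLC: Q = (1/R) * sqrt(L/C)."""
    return math.sqrt(l / c) / r

def q_parallel_rlc(r: float, l: float, c: float) -> float:
    """Parallel RLC (R, L, C all in parallel): Q = R * sqrt(C/L)."""
    return r * math.sqrt(c / l)

# Hypothetical components: 10 ohm, 100 uH, 10 nF.
r, l, c = 10.0, 100e-6, 10e-9
print(q_series_rlc(r, l, c))      # 10.0
print(q_parallel_rlc(1e4, l, c))  # 100.0 for a 10 kohm parallel resistance
```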
In a parallel LC circuit where the main loss is the resistance of the inductor, R, in series with the inductance, L, Q is as in the series circuit. This is a common circumstance for resonators, where limiting the resistance of the inductor to improve Q and narrow the bandwidth is the desired result.
Individual reactive components
The Q of an individual reactive component depends on the frequency at which it is evaluated, which is typically the resonant frequency of the circuit that it is used in. The Q of an inductor with a series loss resistance is the Q of a resonant circuit using that inductor (including its series loss) and a perfect capacitor:

Q_L = \frac{X_L}{R_L} = \frac{\omega_0 L}{R_L}

where:
ω_0 is the resonance frequency in radians per second;
L is the inductance;
X_L = ω_0 L is the inductive reactance; and
R_L is the series resistance of the inductor.
The Q of a capacitor with a series loss resistance is the same as the Q of a resonant circuit using that capacitor with a perfect inductor:

Q_C = \frac{X_C}{R_C} = \frac{1}{\omega_0 C R_C}

where:
ω_0 is the resonance frequency in radians per second;
C is the capacitance;
X_C = 1/(ω_0 C) is the capacitive reactance; and
R_C is the series resistance of the capacitor.
In general, the Q of a resonator involving a series combination of a capacitor and an inductor can be determined from the Q values of the components, whether their losses come from series resistance or otherwise:

Q = \frac{1}{\frac{1}{Q_L} + \frac{1}{Q_C}}
Mechanical systems
For a single damped mass-spring system, the Q factor represents the effect of simplified viscous damping or drag, where the damping force or drag force is proportional to velocity. The formula for the Q factor is:

Q = \frac{\sqrt{Mk}}{D}

where M is the mass, k is the spring constant, and D is the damping coefficient, defined by the equation F_damping = -Dv, where v is the velocity.
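A minimal sketch of this mass-spring-damper formula, with hypothetical values for the mass, spring constant, and damping coefficient:

```python
import math

def q_mass_spring_damper(mass: float, k: float, d: float) -> float:
    """Q = sqrt(M*k) / D for viscous damping force F = -D*v."""
    return math.sqrt(mass * k) / d

# Hypothetical system: 0.5 kg mass, 200 N/m spring, 0.1 N*s/m damping.
print(q_mass_spring_damper(0.5, 200.0, 0.1))  # 100.0
```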
Acoustical systems
The Q of a musical instrument is critical; an excessively high Q in a resonator will not evenly amplify the multiple frequencies an instrument produces. For this reason, string instruments often have bodies with complex shapes, so that they produce a wide range of frequencies fairly evenly.
The Q of a brass instrument or wind instrument needs to be high enough to pick one frequency out of the broader-spectrum buzzing of the lips or reed.
By contrast, a vuvuzela is made of flexible plastic, and therefore has a very low Q for a brass instrument, giving it a muddy, breathy tone. Instruments made of stiffer plastic, brass, or wood have higher Q values. An excessively high Q can make it harder to hit a note. Q in an instrument may vary across frequencies, but this may not be desirable.
Helmholtz resonators have a very high Q, as they are designed for picking out a very narrow range of frequencies.
Optical systems
In optics, the Q factor of a resonant cavity is given by

Q = \frac{2\pi f_o\,E}{P}

where f_o is the resonant frequency, E is the stored energy in the cavity, and P is the power dissipated. The optical Q is equal to the ratio of the resonant frequency to the bandwidth of the cavity resonance. The average lifetime of a resonant photon in the cavity is proportional to the cavity's Q. If the Q factor of a laser's cavity is abruptly changed from a low value to a high one, the laser will emit a pulse of light that is much more intense than the laser's normal continuous output. This technique is known as Q-switching. Q factor is of particular importance in plasmonics, where loss is linked to the damping of the surface plasmon resonance. While loss is normally considered a hindrance in the development of plasmonic devices, it is possible to leverage this property to present new enhanced functionalities.
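A brief sketch of the optical definition; it also computes the photon lifetime as τ = Q/(2π f_o), consistent with the statement that the lifetime is proportional to Q. The cavity numbers are hypothetical.

```python
import math

def optical_q(f_resonant: float, energy_stored: float, power_dissipated: float) -> float:
    """Q = 2*pi*f_o * E / P for a resonant cavity."""
    return 2 * math.pi * f_resonant * energy_stored / power_dissipated

def photon_lifetime(q: float, f_resonant: float) -> float:
    """Energy decay time of the cavity: tau = Q / (2*pi*f_o)."""
    return q / (2 * math.pi * f_resonant)

# Hypothetical cavity: 200 THz resonance, 1 pJ stored, 1 mW dissipated.
q = optical_q(200e12, 1e-12, 1e-3)
print(q, photon_lifetime(q, 200e12))  # Q ~ 1.26e6, lifetime ~ 1 ns
```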
See also
Acoustic resonance
Attenuation
Chu–Harrington limit
List of piezoelectric materials
Phase margin
Q meter
Q multiplier
Dissipation factor
References
Further reading
External links
Calculating the cut-off frequencies when center frequency and Q factor is given
Explanation of Q factor in radio tuning circuits
Electrical parameters
Linear filters
Mechanics
Laser science
Engineering ratios | Q factor | [
"Physics",
"Mathematics",
"Engineering"
] | 3,178 | [
"Metrics",
"Engineering ratios",
"Quantity",
"Mechanics",
"Mechanical engineering",
"Electrical engineering",
"Electrical parameters"
] |
229,553 | https://en.wikipedia.org/wiki/Hooke%27s%20law | In physics, Hooke's law is an empirical law which states that the force () needed to extend or compress a spring by some distance () scales linearly with respect to that distance—that is, where is a constant factor characteristic of the spring (i.e., its stiffness), and is small compared to the total possible deformation of the spring. The law is named after 17th-century British physicist Robert Hooke. He first stated the law in 1676 as a Latin anagram. He published the solution of his anagram in 1678 as: ("as the extension, so the force" or "the extension is proportional to the force"). Hooke states in the 1678 work that he was aware of the law since 1660.
Hooke's equation holds (to some extent) in many other situations where an elastic body is deformed, such as wind blowing on a tall building, and a musician plucking a string of a guitar. An elastic body or material for which this equation can be assumed is said to be linear-elastic or Hookean.
Hooke's law is only a first-order linear approximation to the real response of springs and other elastic bodies to applied forces. It must eventually fail once the forces exceed some limit, since no material can be compressed beyond a certain minimum size, or stretched beyond a maximum size, without some permanent deformation or change of state. Many materials will noticeably deviate from Hooke's law well before those elastic limits are reached.
On the other hand, Hooke's law is an accurate approximation for most solid bodies, as long as the forces and deformations are small enough. For this reason, Hooke's law is extensively used in all branches of science and engineering, and is the foundation of many disciplines such as seismology, molecular mechanics and acoustics. It is also the fundamental principle behind the spring scale, the manometer, the galvanometer, and the balance wheel of the mechanical clock.
The modern theory of elasticity generalizes Hooke's law to say that the strain (deformation) of an elastic object or material is proportional to the stress applied to it. However, since general stresses and strains may have multiple independent components, the "proportionality factor" may no longer be just a single real number, but rather a linear map (a tensor) that can be represented by a matrix of real numbers.
In this general form, Hooke's law makes it possible to deduce the relation between strain and stress for complex objects in terms of intrinsic properties of the materials they are made of. For example, one can deduce that a homogeneous rod with uniform cross section will behave like a simple spring when stretched, with a stiffness directly proportional to its cross-section area and inversely proportional to its length.
Formal definition
Linear springs
Consider a simple helical spring that has one end attached to some fixed object, while the free end is being pulled by a force whose magnitude is F_s. Suppose that the spring has reached a state of equilibrium, where its length is not changing anymore. Let x be the amount by which the free end of the spring was displaced from its "relaxed" position (when it is not being stretched). Hooke's law states that

F_s = kx

or, equivalently,

x = \frac{F_s}{k}

where k is a positive real number, characteristic of the spring. A spring with spaces between the coils can be compressed, and the same formula holds for compression, with F_s and x both negative in that case.
According to this formula, the graph of the applied force F_s as a function of the displacement x will be a straight line passing through the origin, whose slope is k.
Hooke's law for a spring is also stated under the convention that F_s is the restoring force exerted by the spring on whatever is pulling its free end. In that case, the equation becomes

F_s = -kx

since the direction of the restoring force is opposite to that of the displacement.
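A minimal sketch of the scalar law under the restoring-force convention, using a hypothetical spring constant:

```python
def spring_force(k: float, x: float) -> float:
    """Restoring force exerted by the spring: F = -k*x (Hooke's law)."""
    return -k * x

def extension_under_load(k: float, applied_force: float) -> float:
    """Equilibrium displacement for an applied force: x = F/k."""
    return applied_force / k

# Hypothetical spring with k = 300 N/m stretched by 0.02 m.
print(spring_force(300.0, 0.02))         # -6.0 N (restoring)
print(extension_under_load(300.0, 6.0))  # 0.02 m
```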
Torsional springs
The torsional analog of Hooke's law applies to torsional springs. It states that the torque (τ) required to rotate an object is directly proportional to the angular displacement (θ) from the equilibrium position. It describes the relationship between the torque applied to an object and the resulting angular deformation due to torsion. Mathematically, it can be expressed as:

\tau = -k\theta

where:
τ is the torque measured in Newton-meters or N·m.
k is the torsional constant (measured in N·m/radian), which characterizes the stiffness of the torsional spring or the resistance to angular displacement.
θ is the angular displacement (measured in radians) from the equilibrium position.
Just as in the linear case, this law shows that the torque is proportional to the angular displacement, and the negative sign indicates that the torque acts in a direction opposite to the angular displacement, providing a restoring force to bring the system back to equilibrium.
General "scalar" springs
Hooke's spring law usually applies to any elastic object, of arbitrary complexity, as long as both the deformation and the stress can be expressed by a single number that can be both positive and negative.
For example, when a block of rubber attached to two parallel plates is deformed by shearing, rather than stretching or compression, the shearing force and the sideways displacement of the plates obey Hooke's law (for small enough deformations).
Hooke's law also applies when a straight steel bar or concrete beam (like the one used in buildings), supported at both ends, is bent by a weight placed at some intermediate point. The displacement in this case is the deviation of the beam, measured in the transversal direction, relative to its unloaded shape.
Vector formulation
In the case of a helical spring that is stretched or compressed along its axis, the applied (or restoring) force and the resulting elongation or compression have the same direction (which is the direction of said axis). Therefore, if F and x are defined as vectors, Hooke's equation still holds and says that the force vector is the elongation vector multiplied by a fixed scalar.
General tensor form
Some elastic bodies will deform in one direction when subjected to a force with a different direction. One example is a horizontal wood beam with non-square rectangular cross section that is bent by a transverse load that is neither vertical nor horizontal. In such cases, the magnitude of the displacement x will be proportional to the magnitude of the force F, as long as the direction of the latter remains the same (and its value is not too large); so the scalar version of Hooke's law will hold. However, the force and displacement vectors will not be scalar multiples of each other, since they have different directions. Moreover, the ratio between their magnitudes will depend on the direction of the vector F.
Yet, in such cases there is often a fixed linear relation between the force and deformation vectors, as long as they are small enough. Namely, there is a function κ from vectors to vectors, such that F = κ(x), and κ(αx₁ + βx₂) = ακ(x₁) + βκ(x₂) for any real numbers α, β and any displacement vectors x₁, x₂. Such a function is called a (second-order) tensor.
With respect to an arbitrary Cartesian coordinate system, the force and displacement vectors can be represented by 3 × 1 matrices of real numbers. Then the tensor κ connecting them can be represented by a 3 × 3 matrix of real coefficients, that, when multiplied by the displacement vector, gives the force vector:

\mathbf{F} = \boldsymbol{\kappa}\,\mathbf{x}

That is, F_i = κ_{i1}x_1 + κ_{i2}x_2 + κ_{i3}x_3 for i = 1, 2, 3. Therefore, Hooke's law can be said to hold also when F and x are vectors with variable directions, except that the stiffness of the object is a tensor κ, rather than a single real number k.
Hooke's law for continuous media
The stresses and strains of the material inside a continuous elastic material (such as a block of rubber, the wall of a boiler, or a steel bar) are connected by a linear relationship that is mathematically similar to Hooke's spring law, and is often referred to by that name.
However, the strain state in a solid medium around some point cannot be described by a single vector. The same parcel of material, no matter how small, can be compressed, stretched, and sheared at the same time, along different directions. Likewise, the stresses in that parcel can be at once pushing, pulling, and shearing.
In order to capture this complexity, the relevant state of the medium around a point must be represented by two second-order tensors, the strain tensor ε (in lieu of the displacement x) and the stress tensor σ (replacing the restoring force F). The analogue of Hooke's spring law for continuous media is then

\boldsymbol{\sigma} = \mathbf{c}\,\boldsymbol{\varepsilon}

where c is a fourth-order tensor (that is, a linear map between second-order tensors) usually called the stiffness tensor or elasticity tensor. One may also write it as

\boldsymbol{\varepsilon} = \mathbf{s}\,\boldsymbol{\sigma}

where the tensor s, called the compliance tensor, represents the inverse of said linear map.
In a Cartesian coordinate system, the stress and strain tensors can be represented by 3 × 3 matrices with components σ_ij and ε_ij.
Being a linear mapping between the nine numbers σ_ij and the nine numbers ε_kl, the stiffness tensor c is represented by a matrix of 3 × 3 × 3 × 3 = 81 real numbers c_ijkl. Hooke's law then says that

\sigma_{ij} = \sum_{k=1}^{3}\sum_{l=1}^{3} c_{ijkl}\,\varepsilon_{kl}

where i, j = 1, 2, 3.
All three tensors generally vary from point to point inside the medium, and may vary with time as well. The strain tensor merely specifies the displacement of the medium particles in the neighborhood of the point, while the stress tensor specifies the forces that neighboring parcels of the medium are exerting on each other. Therefore, they are independent of the composition and physical state of the material. The stiffness tensor , on the other hand, is a property of the material, and often depends on physical state variables such as temperature, pressure, and microstructure.
Due to the inherent symmetries of σ, ε, and c, only 21 elastic coefficients of the latter are independent. This number can be further reduced by the symmetry of the material: 9 for an orthorhombic crystal, 5 for an hexagonal structure, and 3 for a cubic symmetry. For isotropic media (which have the same physical properties in any direction), c can be reduced to only two independent numbers, the bulk modulus K and the shear modulus G, that quantify the material's resistance to changes in volume and to shearing deformations, respectively.
Analogous laws
Since Hooke's law is a simple proportionality between two quantities, its formulas and consequences are mathematically similar to those of many other physical laws, such as those describing the motion of fluids, or the polarization of a dielectric by an electric field.
In particular, the tensor equation relating elastic stresses to strains is entirely similar to the equation relating the viscous stress tensor and the strain rate tensor in flows of viscous fluids; although the former pertains to static stresses (related to amount of deformation) while the latter pertains to dynamical stresses (related to the rate of deformation).
Units of measurement
In SI units, displacements are measured in meters (m), and forces in newtons (N or kg·m/s²). Therefore, the spring constant k, and each element of the tensor κ, is measured in newtons per meter (N/m), or kilograms per second squared (kg/s²).
For continuous media, each element of the stress tensor σ is a force divided by an area; it is therefore measured in units of pressure, namely pascals (Pa, or N/m², or kg/(m·s²)). The elements of the strain tensor ε are dimensionless (displacements divided by distances). Therefore, the entries of c_ijkl are also expressed in units of pressure.
General application to elastic materials
Objects that quickly regain their original shape after being deformed by a force, with the molecules or atoms of their material returning to the initial state of stable equilibrium, often obey Hooke's law.
Hooke's law only holds for some materials under certain loading conditions. Steel exhibits linear-elastic behavior in most engineering applications; Hooke's law is valid for it throughout its elastic range (i.e., for stresses below the yield strength). For some other materials, such as aluminium, Hooke's law is only valid for a portion of the elastic range. For these materials a proportional limit stress is defined, below which the errors associated with the linear approximation are negligible.
Rubber is generally regarded as a "non-Hookean" material because its elasticity is stress dependent and sensitive to temperature and loading rate.
Generalizations of Hooke's law for the case of large deformations is provided by models of neo-Hookean solids and Mooney–Rivlin solids.
Derived formulae
Tensional stress of a uniform bar
A rod of any elastic material may be viewed as a linear spring. The rod has length L and cross-sectional area A. Its tensile stress σ is linearly proportional to its fractional extension or strain ε by the modulus of elasticity E:

\sigma = E\varepsilon

The modulus of elasticity may often be considered constant. In turn,

\varepsilon = \frac{\Delta L}{L}

(that is, the fractional change in length), and since

\sigma = \frac{F}{A}

it follows that:

\varepsilon = \frac{\sigma}{E} = \frac{F}{AE}

The change in length may be expressed as

\Delta L = \varepsilon L = \frac{FL}{AE}
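A short sketch of this result, ΔL = FL/(AE), using hypothetical values for a steel-like rod:

```python
def bar_elongation(force: float, length: float, area: float, e_modulus: float) -> float:
    """Elongation of a uniform bar in tension: dL = F*L / (A*E)."""
    return force * length / (area * e_modulus)

# Hypothetical steel rod: 10 kN load, 1 m long, 100 mm^2 cross-section, E = 200 GPa.
print(bar_elongation(10e3, 1.0, 100e-6, 200e9))  # 5e-4 m, i.e. 0.5 mm
```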
Spring energy
The potential energy U_el(x) stored in a spring is given by

U_{el}(x) = \tfrac{1}{2} k x^2

which comes from adding up the energy it takes to incrementally compress the spring. That is, the integral of force over displacement. Since the external force has the same general direction as the displacement, the potential energy of a spring is always non-negative. Substituting F_s = kx gives

U_{el} = \tfrac{1}{2} F_s x = \frac{F_s^2}{2k}

This potential can be visualized as a parabola on the x–U_el(x) plane such that U_el(x) = ½kx². As the spring is stretched in the positive x-direction, the potential energy increases parabolically (the same thing happens as the spring is compressed). Since the change in potential energy changes at a constant rate:

\frac{d^2 U_{el}}{dx^2} = k = \text{constant}

Note that the change in the change in U_el is constant even when the displacement and acceleration are zero.
Relaxed force constants (generalized compliance constants)
Relaxed force constants (the inverse of generalized compliance constants) are uniquely defined for molecular systems, in contradistinction to the usual "rigid" force constants, and thus their use allows meaningful correlations to be made between force fields calculated for reactants, transition states, and products of a chemical reaction. Just as the potential energy can be written as a quadratic form in the internal coordinates, so it can also be written in terms of generalized forces. The resulting coefficients are termed compliance constants. A direct method exists for calculating the compliance constant for any internal coordinate of a molecule, without the need to do the normal mode analysis. The suitability of relaxed force constants (inverse compliance constants) as covalent bond strength descriptors was demonstrated as early as 1980. Recently, the suitability as non-covalent bond strength descriptors was demonstrated too.
Harmonic oscillator
A mass m attached to the end of a spring is a classic example of a harmonic oscillator. By pulling slightly on the mass and then releasing it, the system will be set in sinusoidal oscillating motion about the equilibrium position. To the extent that the spring obeys Hooke's law, and that one can neglect friction and the mass of the spring, the amplitude of the oscillation will remain constant; and its frequency will be independent of its amplitude, determined only by the mass and the stiffness of the spring:

f = \frac{1}{2\pi}\sqrt{\frac{k}{m}}

This phenomenon made possible the construction of accurate mechanical clocks and watches that could be carried on ships and in people's pockets.
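A minimal sketch of this frequency formula with hypothetical values:

```python
import math

def natural_frequency_hz(k: float, mass: float) -> float:
    """f = (1/(2*pi)) * sqrt(k/m) for a mass on a Hookean spring."""
    return math.sqrt(k / mass) / (2 * math.pi)

# Hypothetical example: 2 N/m spring, 50 g mass.
print(natural_frequency_hz(2.0, 0.05))  # about 1.0 Hz
```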
Rotation in gravity-free space
If the mass m were attached to a spring with force constant k and rotating in free space, the spring tension (F_t) would supply the required centripetal force (F_c):

F_t = kx; \qquad F_c = m\omega^2 r

Since F_t = F_c and x = r, then:

k = m\omega^2

Given that ω = 2πf, this leads to the same frequency equation as above:

f = \frac{1}{2\pi}\sqrt{\frac{k}{m}}
Linear elasticity theory for continuous media
Isotropic materials
Isotropic materials are characterized by properties which are independent of direction in space. Physical equations involving isotropic materials must therefore be independent of the coordinate system chosen to represent them. The strain tensor is a symmetric tensor. Since the trace of any tensor is independent of any coordinate system, the most complete coordinate-free decomposition of a symmetric tensor is to represent it as the sum of a constant tensor and a traceless symmetric tensor. Thus in index notation:
where δ_ij is the Kronecker delta. In direct tensor notation:
where I is the second-order identity tensor.
The first term on the right is the constant tensor, also known as the volumetric strain tensor, and the second term is the traceless symmetric tensor, also known as the deviatoric strain tensor or shear tensor.
The most general form of Hooke's law for isotropic materials may now be written as a linear combination of these two tensors:
where K is the bulk modulus and G is the shear modulus.
Using the relationships between the elastic moduli, these equations may also be expressed in various other ways. A common form of Hooke's law for isotropic materials, expressed in direct tensor notation, is
where λ and μ are the Lamé constants, I is the second-rank identity tensor, and 𝕀 is the symmetric part of the fourth-rank identity tensor. In index notation:
The inverse relationship is
Therefore, the compliance tensor in the relation is
In terms of Young's modulus and Poisson's ratio, Hooke's law for isotropic materials can then be expressed as
This is the form in which the strain is expressed in terms of the stress tensor in engineering. The expression in expanded form is
where E is Young's modulus and ν is Poisson's ratio. (See 3-D elasticity).
In matrix form, Hooke's law for isotropic materials can be written as
where is the engineering shear strain. The inverse relation may be written as
which can be simplified thanks to the Lamé constants:
In vector notation this becomes
where I is the identity tensor.
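Since the 6 × 6 matrices themselves are not reproduced above, the following sketch may help: it assembles the isotropic stiffness matrix in Voigt notation from Young's modulus and Poisson's ratio via the Lamé constants, assuming the engineering-shear-strain convention. The material values are hypothetical.

```python
import numpy as np

def isotropic_stiffness_voigt(e_modulus: float, nu: float) -> np.ndarray:
    """6x6 isotropic stiffness matrix (Voigt notation, engineering shear strains).
    Normal block: lambda + 2*mu on the diagonal, lambda off-diagonal;
    shear block: mu on the diagonal."""
    lam = e_modulus * nu / ((1 + nu) * (1 - 2 * nu))  # first Lamé constant
    mu = e_modulus / (2 * (1 + nu))                   # shear modulus
    c = np.zeros((6, 6))
    c[:3, :3] = lam
    c[:3, :3] += 2 * mu * np.eye(3)
    c[3:, 3:] = mu * np.eye(3)
    return c

# Hypothetical material: E = 200 GPa, nu = 0.3 (steel-like).
c = isotropic_stiffness_voigt(200e9, 0.3)
stress = c @ np.array([1e-3, 0, 0, 0, 0, 0])  # uniaxial strain state
print(stress)
```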
Plane stress
Under plane stress conditions, σ_zz = σ_zx = σ_zy = 0. In that case Hooke's law takes the form
In vector notation this becomes
The inverse relation is usually written in the reduced form
Plane strain
Under plane strain conditions, ε_zz = ε_zx = ε_zy = 0. In this case Hooke's law takes the form
Anisotropic materials
The symmetry of the Cauchy stress tensor (σ_ij = σ_ji) and the generalized Hooke's law (σ_ij = c_ijkl ε_kl) implies that c_ijkl = c_jikl. Similarly, the symmetry of the infinitesimal strain tensor implies that c_ijkl = c_ijlk. These symmetries are called the minor symmetries of the stiffness tensor c. This reduces the number of elastic constants from 81 to 36.
If in addition, since the displacement gradient and the Cauchy stress are work conjugate, the stress–strain relation can be derived from a strain energy density functional (U), then

\sigma_{ij} = \frac{\partial U}{\partial \varepsilon_{ij}} \quad\Longrightarrow\quad c_{ijkl} = \frac{\partial^2 U}{\partial \varepsilon_{ij}\,\partial \varepsilon_{kl}}

The arbitrariness of the order of differentiation implies that c_ijkl = c_klij. These are called the major symmetries of the stiffness tensor. This reduces the number of elastic constants from 36 to 21. The major and minor symmetries indicate that the stiffness tensor has only 21 independent components.
Matrix representation (stiffness tensor)
It is often useful to express the anisotropic form of Hooke's law in matrix notation, also called Voigt notation. To do this we take advantage of the symmetry of the stress and strain tensors and express them as six-dimensional vectors in an orthonormal coordinate system () as
Then the stiffness tensor (c) can be expressed as
and Hooke's law is written as
Similarly the compliance tensor (s) can be written as
Change of coordinate system
If a linear elastic material is rotated from a reference configuration to another, then the material is symmetric with respect to the rotation if the components of the stiffness tensor in the rotated configuration are related to the components in the reference configuration by the relation
where are the components of an orthogonal rotation matrix . The same relation also holds for inversions.
In matrix notation, if the transformed basis (rotated or inverted) is related to the reference basis by
then
In addition, if the material is symmetric with respect to the transformation then
Orthotropic materials
Orthotropic materials have three orthogonal planes of symmetry. If the basis vectors () are normals to the planes of symmetry then the coordinate transformation relations imply that
The inverse of this relation is commonly written as
where
Ei is the Young's modulus along axis i,
Gij is the shear modulus in direction j on the plane whose normal is in direction i,
νij is the Poisson's ratio that corresponds to a contraction in direction j when an extension is applied in direction i.
Under plane stress conditions, σ33 = σ13 = σ23 = 0, Hooke's law for an orthotropic material takes the form
The inverse relation is
The transposed form of the above stiffness matrix is also often used.
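The following hedged sketch assembles the orthotropic compliance matrix from the engineering constants just defined; the numerical values are assumed, ballpark figures for a unidirectional fibre composite, chosen only to make the example runnable.

import numpy as np

def orthotropic_compliance(E1, E2, E3, G23, G31, G12, nu12, nu13, nu23):
    """6x6 compliance matrix of an orthotropic material in Voigt notation.
    The remaining Poisson's ratios follow from the symmetry nu_ij / E_i = nu_ji / E_j."""
    nu21 = nu12 * E2 / E1
    nu31 = nu13 * E3 / E1
    nu32 = nu23 * E3 / E2
    S = np.zeros((6, 6))
    S[0, :3] = [1/E1, -nu21/E2, -nu31/E3]
    S[1, :3] = [-nu12/E1, 1/E2, -nu32/E3]
    S[2, :3] = [-nu13/E1, -nu23/E2, 1/E3]
    S[3, 3] = 1/G23
    S[4, 4] = 1/G31
    S[5, 5] = 1/G12
    return S

# Assumed illustrative values in GPa for a unidirectional composite
S = orthotropic_compliance(E1=140, E2=10, E3=10, G23=3.5, G31=5, G12=5,
                           nu12=0.3, nu13=0.3, nu23=0.4)
C = np.linalg.inv(S)   # the corresponding stiffness matrix
print(np.round(C, 2))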
Transversely isotropic materials
A transversely isotropic material is symmetric with respect to a rotation about an axis of symmetry. For such a material, if is the axis of symmetry, Hooke's law can be expressed as
More frequently, the axis is taken to be the axis of symmetry and the inverse Hooke's law is written as
Universal elastic anisotropy index
To grasp the degree of anisotropy of any class, a universal elastic anisotropy index (AU) was formulated. It replaces the Zener ratio, which is suited for cubic crystals.
Thermodynamic basis
Linear deformations of elastic materials can be approximated as adiabatic. Under these conditions and for quasistatic processes the first law of thermodynamics for a deformed body can be expressed as
where δU is the increase in internal energy and δW is the work done by external forces. The work can be split into two terms
δW = δWs + δWb
where δWs is the work done by surface forces while δWb is the work done by body forces. If δu is a variation of the displacement field in the body, then the two external work terms can be expressed as
δWs = ∫∂Ω t · δu dS ,  δWb = ∫Ω b · δu dV
where t is the surface traction vector, b is the body force vector, Ω represents the body and ∂Ω represents its surface. Using the relation between the Cauchy stress and the surface traction, t = n · σ (where n is the unit outward normal to ∂Ω), we have
Converting the surface integral into a volume integral via the divergence theorem gives
Using the symmetry of the Cauchy stress and the identity
we have the following
From the definition of strain and from the equations of equilibrium we have
Hence we can write
and therefore the variation in the internal energy density is given by
An elastic material is defined as one in which the total internal energy is equal to the potential energy of the internal forces (also called the elastic strain energy). Therefore, the internal energy density is a function of the strains, and the variation of the internal energy can be expressed as
Since the variation of strain is arbitrary, the stress–strain relation of an elastic material is given by
For a linear elastic material, the quantity ∂U/∂ε is a linear function of ε, and can therefore be expressed as
σ = c : ε
where c is a fourth-rank tensor of material constants, also called the stiffness tensor. We can see why c must be a fourth-rank tensor by noting that, for a linear elastic material,
∂σ/∂ε = c = constant.
In index notation:
∂σij/∂εkl = cijkl = constant.
The right-hand side constant requires four indices and is a fourth-rank quantity. We can also see that this quantity must be a tensor because it is a linear transformation that takes the strain tensor to the stress tensor. We can also show that the constant obeys the tensor transformation rules for fourth-rank tensors.
See also
Acoustoelastic effect
Elastic potential energy
Laws of science
List of scientific laws named after people
Quadratic form
Series and parallel springs
Spring system
Simple harmonic motion of a mass on a spring
Sine wave
Solid mechanics
Spring pendulum
Notes
References
Hooke's law - The Feynman Lectures on Physics
Hooke's Law - Classical Mechanics - Physics - MIT OpenCourseWare
External links
JavaScript Applet demonstrating Springs and Hooke's law
JavaScript Applet demonstrating Spring Force
1676 in science
Elasticity (physics)
Solid mechanics
Springs (mechanical)
Structural analysis | Hooke's law | [
"Physics",
"Materials_science",
"Engineering"
] | 4,882 | [
"Structural engineering",
"Solid mechanics",
"Physical phenomena",
"Elasticity (physics)",
"Deformation (mechanics)",
"Structural analysis",
"Mechanics",
"Mechanical engineering",
"Aerospace engineering",
"Physical properties"
] |
229,643 | https://en.wikipedia.org/wiki/Molality | In chemistry, molality is a measure of the amount of solute in a solution relative to a given mass of solvent. This contrasts with the definition of molarity which is based on a given volume of solution.
A commonly used unit for molality is the moles per kilogram (mol/kg). A solution of concentration 1 mol/kg is also sometimes denoted as 1 molal. The unit mol/kg requires that molar mass be expressed in kg/mol, instead of the usual g/mol or kg/kmol.
Definition
The molality, b, of a solution is defined as the amount of substance (in moles) of solute, nsolute, divided by the mass (in kg) of the solvent, msolvent:
b = nsolute / msolvent .
In the case of solutions with more than one solvent, molality can be defined for the mixed solvent considered as a pure pseudo-solvent. Instead of mole solute per kilogram solvent as in the binary case, units are defined as mole solute per kilogram mixed solvent.
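A trivial helper makes the definition concrete; the NaCl numbers in the example are illustrative.

def molality(n_solute_mol, m_solvent_kg):
    """Molality b = n_solute / m_solvent, in mol/kg."""
    return n_solute_mol / m_solvent_kg

def molality_from_masses(m_solute_kg, M_solute_kg_per_mol, m_solvent_kg):
    """Molality computed from the solute mass and its molar mass (both in kg-based units)."""
    return (m_solute_kg / M_solute_kg_per_mol) / m_solvent_kg

print(molality(0.50, 2.0))                                   # 0.25 mol/kg
print(round(molality_from_masses(0.0292, 0.0584, 1.0), 3))   # 29.2 g NaCl in 1 kg water -> ~0.5 mol/kg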
Origin
The term molality is formed in analogy to molarity which is the molar concentration of a solution. The earliest known use of the intensive property molality and of its adjectival unit, the now-deprecated molal, appears to have been published by G. N. Lewis and M. Randall in the 1923 publication of Thermodynamics and the Free Energies of Chemical Substances. Though the two terms are subject to being confused with one another, the molality and molarity of a dilute aqueous solution are nearly the same, as one kilogram of water (solvent) occupies the volume of 1 liter at room temperature and a small amount of solute has little effect on the volume.
Unit
The SI unit for molality is moles per kilogram of solvent.
A solution with a molality of 3 mol/kg is often described as "3 molal" or "3 m". However, following the SI system of units, the National Institute of Standards and Technology, the United States authority on measurement, considers the term "molal" and the unit symbol "m" to be obsolete, and suggests mol/kg or a related unit of the SI.
Usage considerations
Advantages
The primary advantage of using molality as a measure of concentration is that molality only depends on the masses of solute and solvent, which are unaffected by variations in temperature and pressure. In contrast, solutions prepared volumetrically (e.g. molar concentration or mass concentration) are likely to change as temperature and pressure change. In many applications, this is a significant advantage because the mass, or the amount, of a substance is often more important than its volume (e.g. in a limiting reagent problem).
Another advantage of molality is the fact that the molality of one solute in a solution is independent of the presence or absence of other solutes.
Problem areas
Unlike all the other compositional properties listed in "Relation" section (below), molality depends on the choice of the substance to be called “solvent” in an arbitrary mixture. If there is only one pure liquid substance in a mixture, the choice is clear, but not all solutions are this clear-cut: in an alcohol–water solution, either one could be called the solvent; in an alloy, or solid solution, there is no clear choice and all constituents may be treated alike. In such situations, mass or mole fraction is the preferred compositional specification.
Relation to other compositional quantities
In what follows, the solvent may be given the same treatment as the other constituents of the solution, such that the molality of the solvent of an n-solute solution, say b0, is found to be nothing more than the reciprocal of its molar mass, M0 (expressed in the unit kg/mol):
b0 = n0 / (n0 M0) = 1 / M0 .
For the solutes the expression of molalities is similar:
bi = ni / (n0 M0) .
The expressions linking molalities to mass fractions and mass concentrations contain the molar masses of the solutes Mi:
bi = wi / (w0 Mi) = ρi / (ρ0 Mi) .
Similarly the equalities below are obtained from the definitions of the molalities and of the other compositional quantities.
The mole fraction of solvent can be obtained from the definition by dividing the numerator and denominator to the amount of solvent n0:
.
Then the sum of ratios of the other mole amounts to the amount of solvent is substituted with expressions from below containing molalities:
giving the result
.
Mass fraction
The conversions to and from the mass fraction, w1, of the solute in a single-solute solution are
w1 = b1 M1 / (1 + b1 M1) ,  b1 = w1 / ((1 − w1) M1) ,
where b1 is the molality and M1 is the molar mass of the solute.
More generally, for an n-solute/one-solvent solution, letting bi and wi be, respectively, the molality and mass fraction of the i-th solute,
,
where Mi is the molar mass of the ith solute, and w0 is the mass fraction of the solvent, which is expressible both as a function of the molalities as well as a function of the other mass fractions,
.
Substitution gives:
.
Mole fraction
The conversions to and from the mole fraction, x1, of the solute in a single-solute solution are
x1 = b1 M0 / (1 + b1 M0) ,  b1 = x1 / (M0 (1 − x1)) ,
where M0 is the molar mass of the solvent.
More generally, for an n-solute/one-solvent solution, letting xi be the mole fraction of the ith solute,
,
where x0 is the mole fraction of the solvent, expressible both as a function of the molalities as well as a function of the other mole fractions:
.
Substitution gives:
.
Molar concentration (molarity)
The conversions to and from the molar concentration, c1, for one-solute solutions are
c1 = ρ b1 / (1 + b1 M1) ,  b1 = c1 / (ρ − c1 M1) ,
where ρ is the mass density of the solution, b1 is the molality, and M1 is the molar mass (in kg/mol) of the solute.
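The single-solute conversions just given can be scripted directly; the density value used in the example is an assumed, approximate figure for a 1 mol/kg aqueous NaCl solution.

def molarity_from_molality(b, M, rho):
    """c1 = rho*b1/(1 + b1*M1); b in mol/kg, M in kg/mol, rho in kg/L gives c in mol/L."""
    return rho * b / (1.0 + b * M)

def molality_from_molarity(c, M, rho):
    """b1 = c1/(rho - c1*M1), the inverse of the relation above."""
    return c / (rho - c * M)

# Example: 1.0 mol/kg NaCl (M ~ 0.0584 kg/mol), assumed solution density 1.036 kg/L
c = molarity_from_molality(1.0, 0.0584, 1.036)
print(round(c, 3))                                          # ~0.979 mol/L
print(round(molality_from_molarity(c, 0.0584, 1.036), 3))   # recovers ~1.0 mol/kg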
For solutions with n solutes, the conversions are
,
where the molar concentration of the solvent c0 is expressible both as a function of the molalities as well as a function of the other molarities:
.
Substitution gives:
,
Mass concentration
The conversions to and from the mass concentration, ρsolute, of a single-solute solution are
ρsolute = ρ b1 M1 / (1 + b1 M1) ,
or
b1 = ρsolute / (M1 (ρ − ρsolute)) ,
where ρ is the mass density of the solution, b1 is the molality, and M1 is the molar mass of the solute.
For the general n-solute solution, the mass concentration of the ith solute, ρi, is related to its molality, bi, as follows:
,
where the mass concentration of the solvent, ρ0, is expressible both as a function of the molalities as well as a function of the other mass concentrations:
.
Substitution gives:
.
Equal ratios
Alternatively, one may use just the last two equations given for the compositional property of the solvent in each of the preceding sections, together with the relationships given below, to derive the remainder of properties in that set:
bi / bj = xi / xj = ci / cj = (wi / Mi) / (wj / Mj) = (ρi / Mi) / (ρj / Mj) ,
where i and j are subscripts representing all the constituents, the n solutes plus the solvent.
Example of conversion
An acid mixture consists of 0.76, 0.04, and 0.20 mass fractions of 70% HNO3, 49% HF, and H2O, where the percentages refer to mass fractions of the bottled acids carrying a balance of H2O. The first step is determining the mass fractions of the constituents:
.
The approximate molar masses in kg/mol are
.
First derive the molality of the solvent, in mol/kg,
,
and use that to derive all the others by use of the equal ratios:
.
Actually, bH2O cancels out, because it is not needed. In this case, there is a more direct equation: we use it to derive the molality of HF:
.
The mole fractions may be derived from this result:
,
,
.
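The arithmetic of this worked example can also be scripted. The molar masses below are the usual approximate values, and the printed molalities follow from the mass fractions stated above rather than from the article's omitted intermediate results.

# Bottled acids: (mass fraction in the mixture, acid mass fraction within the bottle)
mix = {"HNO3": (0.76, 0.70), "HF": (0.04, 0.49), "H2O": (0.20, 1.00)}
M = {"HNO3": 0.063, "HF": 0.020, "H2O": 0.018}   # approximate molar masses, kg/mol

# Mass fraction of each pure constituent in the final mixture
w = {"HNO3": mix["HNO3"][0] * mix["HNO3"][1],
     "HF":   mix["HF"][0]   * mix["HF"][1]}
w["H2O"] = 1.0 - w["HNO3"] - w["HF"]             # the bottles' balance of water ends up here

# Molality of each solute: b_i = w_i / (M_i * w_H2O)
for solute in ("HNO3", "HF"):
    b = w[solute] / (M[solute] * w["H2O"])
    print(solute, round(b, 2), "mol/kg")         # roughly 18.8 for HNO3 and 2.19 for HF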
Osmolality
Osmolality is a variation of molality that takes into account only solutes that contribute to a solution's osmotic pressure. It is measured in osmoles of the solute per kilogram of water. This unit is frequently used in medical laboratory results in place of osmolarity, because it can be measured simply by depression of the freezing point of a solution, or cryoscopy (see also: osmostat and colligative properties).
Relation to apparent (molar) properties
Molality appears in the expression of the apparent (molar) volume of a solute as a function of the molality b of that solute (and density of the solution and solvent):
,
.
For multicomponent systems the relation is slightly modified by the sum of molalities of solutes. Also a total molality and a mean apparent molar volume can be defined for the solutes together and also a mean molar mass of the solutes as if they were a single solute. In this case the first equality from above is modified with the mean molar mass M of the pseudosolute instead of the molar mass of the single solute:
,
, yi,j being ratios involving molalities of solutes i,j and the total molality bT.
The sum of products molalities - apparent molar volumes of solutes in their binary solutions equals the product between the sum of molalities of solutes and apparent molar volume in ternary or multicomponent solution.
.
Relation to apparent molar properties and activity coefficients
For concentrated ionic solutions the activity coefficient of the electrolyte is split into electric and statistical components.
The statistical part includes molality b, hydration index number h, the number of ions from the dissociation and the ratio ra between the apparent molar volume of the electrolyte and the molar volume of water.
Concentrated solution statistical part of the activity coefficient is:
.
Molalities of a ternary or multicomponent solution
The molalities of solutes b1, b2 in a ternary solution obtained by mixing two binary aqueous solutions with different solutes (say a sugar and a salt or two different salts) are different than the initial molalities of the solutes bii in their binary solutions:
,
,
,
.
The content of solvent in mass fractions w01 and w02 from each solution of masses ms1 and ms2 to be mixed as a function of initial molalities is calculated. Then the amount (mol) of solute from each binary solution is divided by the sum of masses of water after mixing:
,
.
Mass fractions of each solute in the initial solutions w11 and w22
are expressed as a function of the initial molalities b11, b22:
,
.
These expressions of mass fractions are substituted in the final molalities:
,
.
The results for a ternary solution can be extended to a multicomponent solution (with more than two solutes).
From the molalities of the binary solutions
The molalities of the solutes in a ternary solution can be expressed also from molalities in the binary solutions and their masses:
,
.
The binary solution molalities are:
,
.
The masses of the solutes determined from the molalities of the solutes and the masses of water can be substituted in the expressions of the masses of solutions:
.
Similarly for the mass of the second solution:
.
One can obtain the masses of water present in the sum from the denominator of the molalities of the solutes in the ternary solutions as functions of binary molalities and masses of solution:
,
.
Thus the ternary molalities are:
,
.
For solutions with three or more solutes the denominator is a sum of the masses of solvent in the n binary solutions which are mixed:
,
,
.
See also
Molarity
References
Chemical properties
Mass-specific quantities
es:Concentración#Molalidad | Molality | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,537 | [
"Physical quantities",
"Quantity",
"Mass",
"Intensive quantities",
"nan",
"Mass-specific quantities",
"Matter"
] |
230,072 | https://en.wikipedia.org/wiki/Four-force | In the special theory of relativity, four-force is a four-vector that replaces the classical force.
In special relativity
The four-force F is defined as the rate of change in the four-momentum P of a particle with respect to the particle's proper time τ. Hence:
F = dP/dτ .
For a particle of constant invariant mass m, the four-momentum is given by the relation P = mU, where U is the four-velocity. In analogy to Newton's second law, we can also relate the four-force to the four-acceleration, A, by the equation:
F = mA = m dU/dτ .
Here
F = (γ f · u / c , γ f) = (γ (dE/dt) / c , γ dp/dt)
and
P = (E / c , p) ,
where u, p and f are 3-space vectors describing the velocity, the momentum of the particle and the force acting on it respectively; and E is the total energy of the particle.
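A small numerical sketch of these component relations, using the convention written above (time component equal to γ f · u / c) and SI units; the force and speed in the example are arbitrary values.

import numpy as np

c = 299_792_458.0   # speed of light, m/s

def four_force(f3, u3):
    """Four-force components (gamma * f.u / c, gamma * f) from the 3-force f3 and 3-velocity u3."""
    f3 = np.asarray(f3, dtype=float)
    u3 = np.asarray(u3, dtype=float)
    gamma = 1.0 / np.sqrt(1.0 - np.dot(u3, u3) / c**2)
    return np.concatenate(([gamma * np.dot(f3, u3) / c], gamma * f3))

# Example: a 1 N force along x on a particle moving at 0.8c along x
print(four_force([1.0, 0.0, 0.0], [0.8 * c, 0.0, 0.0]))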
Including thermodynamic interactions
From the formulae of the previous section it appears that the time component of the four-force is the power expended, f · u, apart from relativistic corrections γ/c. This is only true in purely mechanical situations, when heat exchanges vanish or can be neglected.
In the full thermo-mechanical case, not only work, but also heat contributes to the change in energy, which is the time component of the energy–momentum covector. The time component of the four-force includes in this case a heating rate, besides the power f · u. Note that work and heat cannot be meaningfully separated, though, as they both carry inertia. This fact extends also to contact forces, that is, to the stress–energy–momentum tensor.
Therefore, in thermo-mechanical situations the time component of the four-force is not proportional to the power but has a more generic expression, to be given case by case, which represents the supply of internal energy from the combination of work and heat, and which in the Newtonian limit becomes .
In general relativity
In general relativity the relation between four-force, and four-acceleration remains the same, but the elements of the four-force are related to the elements of the four-momentum through a covariant derivative with respect to proper time.
In addition, we can formulate force using the concept of coordinate transformations between different coordinate systems. Assume that we know the correct expression for force in a coordinate system at which the particle is momentarily at rest. Then we can perform a transformation to another system to get the corresponding expression of force. In special relativity the transformation will be a Lorentz transformation between coordinate systems moving with a relative constant velocity whereas in general relativity it will be a general coordinate transformation.
Consider the four-force acting on a particle of mass which is momentarily at rest in a coordinate system. The relativistic force in another coordinate system moving with constant velocity , relative to the other one, is obtained using a Lorentz transformation:
where γ is the Lorentz factor, γ = 1/√(1 − v²/c²).
In general relativity, the expression for force becomes
with covariant derivative . The equation of motion becomes
where is the Christoffel symbol. If there is no external force, this becomes the equation for geodesics in the curved space-time. The second term in the above equation, plays the role of a gravitational force. If is the correct expression for force in a freely falling frame , we can use then the equivalence principle to write the four-force in an arbitrary coordinate :
Examples
In special relativity, Lorentz four-force (four-force acting on a charged particle situated in an electromagnetic field) can be expressed as:
Fμ = q Fμν Uν ,
where
Fμν is the electromagnetic tensor,
Uν is the four-velocity, and
q is the electric charge.
See also
four-vector
four-velocity
four-acceleration
four-momentum
four-gradient
References
Four-vectors
Force | Four-force | [
"Physics",
"Mathematics"
] | 727 | [
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Four-vectors",
"Classical mechanics",
"Vector physical quantities",
"Wikipedia categories named after physical quantities",
"Matter"
] |
230,401 | https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall%20algorithm | In computer science, the Floyd–Warshall algorithm (also known as Floyd's algorithm, the Roy–Warshall algorithm, the Roy–Floyd algorithm, or the WFI algorithm) is an algorithm for finding shortest paths in a directed weighted graph with positive or negative edge weights (but with no negative cycles). A single execution of the algorithm will find the lengths (summed weights) of shortest paths between all pairs of vertices. Although it does not return details of the paths themselves, it is possible to reconstruct the paths with simple modifications to the algorithm. Versions of the algorithm can also be used for finding the transitive closure of a relation , or (in connection with the Schulze voting system) widest paths between all pairs of vertices in a weighted graph.
History and naming
The Floyd–Warshall algorithm is an example of dynamic programming, and was published in its currently recognized form by Robert Floyd in 1962. However, it is essentially the same as algorithms previously published by Bernard Roy in 1959 and also by Stephen Warshall in 1962 for finding the transitive closure of a graph, and is closely related to Kleene's algorithm (published in 1956) for converting a deterministic finite automaton into a regular expression, with the difference being the use of a min-plus semiring. The modern formulation of the algorithm as three nested for-loops was first described by Peter Ingerman, also in 1962.
Algorithm
The Floyd–Warshall algorithm compares many possible paths through the graph between each pair of vertices. It is guaranteed to find all shortest paths and is able to do this with Θ(|V|³) comparisons in a graph, even though there may be up to |V|² edges in the graph. It does so by incrementally improving an estimate on the shortest path between two vertices, until the estimate is optimal.
Consider a graph G with vertices V numbered 1 through N. Further consider a function shortestPath(i, j, k) that returns the length of the shortest possible path (if one exists) from i to j using vertices only from the set {1, 2, ..., k} as intermediate points along the way. Now, given this function, our goal is to find the length of the shortest path from each i to each j using any vertex in {1, 2, ..., N}. By definition, this is the value shortestPath(i, j, N), which we will find recursively.
Observe that shortestPath(i, j, k) must be less than or equal to shortestPath(i, j, k−1): we have more flexibility if we are allowed to use the vertex k. If shortestPath(i, j, k) is in fact less than shortestPath(i, j, k−1), then there must be a path from i to j using the vertices {1, 2, ..., k} that is shorter than any such path that does not use the vertex k. Since there are no negative cycles this path can be decomposed as:
(1) a path from i to k that uses the vertices {1, 2, ..., k−1}, followed by
(2) a path from k to j that uses the vertices {1, 2, ..., k−1}.
And of course, these must be a shortest such path (or several of them), otherwise we could further decrease the length. In other words, we have arrived at the recursive formula:
shortestPath(i, j, k) = min(shortestPath(i, j, k−1), shortestPath(i, k, k−1) + shortestPath(k, j, k−1)) .
The base case is given by
shortestPath(i, j, 0) = w(i, j) ,
where w(i, j) denotes the weight of the edge from i to j if one exists and ∞ (infinity) otherwise.
These formulas are the heart of the Floyd–Warshall algorithm. The algorithm works by first computing shortestPath(i, j, k) for all (i, j) pairs for k = 1, then k = 2, then k = 3, and so on. This process continues until k = N, and we have found the shortest path for all (i, j) pairs using any intermediate vertices. Pseudocode for this basic version follows.
Pseudocode
let dist be a |V| × |V| array of minimum distances initialized to ∞ (infinity)
for each edge (u, v) do
dist[u][v] = w(u, v) // The weight of the edge (u, v)
for each vertex v do
dist[v][v] = 0
for k from 1 to |V|
for i from 1 to |V|
for j from 1 to |V|
if dist[i][j] > dist[i][k] + dist[k][j]
dist[i][j] = dist[i][k] + dist[k][j]
end if
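A direct, unoptimized Python transcription of the pseudocode above; vertex labels are arbitrary hashable values, and the edge weights used in the example are inferred from the textual description of the four-vertex graph discussed in the Example section below.

from math import inf

def floyd_warshall(vertices, edges):
    """All-pairs shortest path lengths.
    vertices: iterable of vertex labels; edges: dict mapping (u, v) -> weight."""
    dist = {u: {v: inf for v in vertices} for u in vertices}
    for v in vertices:
        dist[v][v] = 0
    for (u, v), w in edges.items():
        dist[u][v] = w
    for k in vertices:
        for i in vertices:
            for j in vertices:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Edge weights assumed from the worked description below
edges = {(2, 1): 4, (2, 3): 3, (1, 3): -2, (3, 4): 2, (4, 2): -1}
d = floyd_warshall([1, 2, 3, 4], edges)
print(d[4][3])   # 4 -> 2 -> 1 -> 3 has total weight -1 + 4 + (-2) = 1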
Example
The algorithm above is executed on the graph on the left below:
Prior to the first recursion of the outer loop, labeled k = 0 above, the only known paths correspond to the single edges in the graph. At k = 1, paths that go through the vertex 1 are found: in particular, the path [2,1,3] is found, replacing the path [2,3] which has fewer edges but is longer (in terms of weight). At k = 2, paths going through the vertices {1,2} are found. The red and blue boxes show how the path [4,2,1,3] is assembled from the two known paths [4,2] and [2,1,3] encountered in previous iterations, with 2 in the intersection. The path [4,2,3] is not considered, because [2,1,3] is the shortest path encountered so far from 2 to 3. At k = 3, paths going through the vertices {1,2,3} are found. Finally, at k = 4, all shortest paths are found.
The distance matrix at each iteration of k, with the updated distances in bold, will be:
Behavior with negative cycles
A negative cycle is a cycle whose edges sum to a negative value. There is no shortest path between any pair of vertices , which form part of a negative cycle, because path-lengths from to can be arbitrarily small (negative). For numerically meaningful output, the Floyd–Warshall algorithm assumes that there are no negative cycles. Nevertheless, if there are negative cycles, the Floyd–Warshall algorithm can be used to detect them. The intuition is as follows:
The Floyd–Warshall algorithm iteratively revises path lengths between all pairs of vertices (i, j), including where i = j;
Initially, the length of the path (i, i) is zero;
A path [i, k, ..., i] can only improve upon this if it has length less than zero, i.e. denotes a negative cycle;
Thus, after the algorithm, (i, i) will be negative if there exists a negative-length path from i back to i.
Hence, to detect negative cycles using the Floyd–Warshall algorithm, one can inspect the diagonal of the path matrix, and the presence of a negative number indicates that the graph contains at least one negative cycle. During the execution of the algorithm, if there is a negative cycle, exponentially large numbers can appear, as large as , where is the largest absolute value of a negative edge in the graph. To avoid overflow/underflow problems one should check for negative numbers on the diagonal of the path matrix within the inner for loop of the algorithm. Obviously, in an undirected graph a negative edge creates a negative cycle (i.e., a closed walk) involving its incident vertices. Considering all edges of the above example graph as undirected, e.g. the vertex sequence 4 – 2 – 4 is a cycle with weight sum −2.
Path reconstruction
The Floyd–Warshall algorithm typically only provides the lengths of the paths between all pairs of vertices. With simple modifications, it is possible to create a method to reconstruct the actual path between any two endpoint vertices. While one may be inclined to store the actual path from each vertex to each other vertex, this is not necessary, and in fact, is very costly in terms of memory. Instead, we can use the shortest-path tree, which can be calculated for each node in time using memory, and allows us to efficiently reconstruct a directed path between any two connected vertices.
Pseudocode
The prev array holds the penultimate vertex on the path from u to v (except in the case of prev[v][v], where it always contains v even if there is no self-loop on v):
let dist be a |V| × |V| array of minimum distances initialized to ∞ (infinity)
let prev be a |V| × |V| array of vertex indices initialized to null
procedure FloydWarshallWithPathReconstruction() is
for each edge (u, v) do
dist[u][v] = w(u, v) // The weight of the edge (u, v)
prev[u][v] = u
for each vertex v do
dist[v][v] = 0
prev[v][v] = v
for k from 1 to |V| do // standard Floyd-Warshall implementation
for i from 1 to |V|
for j from 1 to |V|
if dist[i][j] > dist[i][k] + dist[k][j] then
dist[i][j] = dist[i][k] + dist[k][j]
prev[i][j] = prev[k][j]
procedure Path(u, v) is
if prev[u][v] = null then
return []
path = [v]
while u ≠ v do
v = prev[u][v]
path.prepend(v)
return path
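The same reconstruction scheme in runnable Python, mirroring the dist and prev arrays of the pseudocode (edge weights again inferred from the example graph described above):

from math import inf

def floyd_warshall_paths(vertices, edges):
    """Return (dist, prev) so that path(prev, u, v) can be rebuilt as in the pseudocode."""
    dist = {u: {v: inf for v in vertices} for u in vertices}
    prev = {u: {v: None for v in vertices} for u in vertices}
    for (u, v), w in edges.items():
        dist[u][v] = w
        prev[u][v] = u
    for v in vertices:
        dist[v][v] = 0
        prev[v][v] = v
    for k in vertices:
        for i in vertices:
            for j in vertices:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    prev[i][j] = prev[k][j]
    return dist, prev

def path(prev, u, v):
    """Rebuild the shortest path from u to v by walking prev backwards."""
    if prev[u][v] is None:
        return []
    result = [v]
    while u != v:
        v = prev[u][v]
        result.insert(0, v)
    return result

edges = {(2, 1): 4, (2, 3): 3, (1, 3): -2, (3, 4): 2, (4, 2): -1}
dist, prev = floyd_warshall_paths([1, 2, 3, 4], edges)
print(path(prev, 4, 3))   # [4, 2, 1, 3]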
Time complexity
Let n be |V|, the number of vertices. To find all shortestPath(i, j, k) (for all i and j) from those of shortestPath(i, j, k−1) requires 2n² operations. Since we begin with shortestPath(i, j, 0) = w(i, j) and compute the sequence of n matrices shortestPath(i, j, 1), shortestPath(i, j, 2), ..., shortestPath(i, j, n), each having a cost of 2n², the total time complexity of the algorithm is n · 2n² = 2n³ = Θ(n³).
Applications and generalizations
The Floyd–Warshall algorithm can be used to solve the following problems, among others:
Shortest paths in directed graphs (Floyd's algorithm).
Transitive closure of directed graphs (Warshall's algorithm). In Warshall's original formulation of the algorithm, the graph is unweighted and represented by a Boolean adjacency matrix. Then the addition operation is replaced by logical conjunction (AND) and the minimum operation by logical disjunction (OR).
Finding a regular expression denoting the regular language accepted by a finite automaton (Kleene's algorithm, a closely related generalization of the Floyd–Warshall algorithm)
Inversion of real matrices (Gauss–Jordan algorithm)
Optimal routing. In this application one is interested in finding the path with the maximum flow between two vertices. This means that, rather than taking minima as in the pseudocode above, one instead takes maxima. The edge weights represent fixed constraints on flow. Path weights represent bottlenecks; so the addition operation above is replaced by the minimum operation.
Fast computation of Pathfinder networks.
Widest paths/Maximum bandwidth paths
Computing canonical form of difference bound matrices (DBMs)
Computing the similarity between graphs
Transitive closure in AND/OR/threshold graphs.
Implementations
Implementations are available for many programming languages.
For C++, in the boost::graph library
For C#, at QuikGraph
For C#, at QuickGraphPCL (A fork of QuickGraph with better compatibility with projects using Portable Class Libraries.)
For Java, in the Apache Commons Graph library
For JavaScript, in the Cytoscape library
For Julia, in the Graphs.jl package
For MATLAB, in the Matlab_bgl package
For Perl, in the Graph module
For Python, in the SciPy library (module scipy.sparse.csgraph) or NetworkX library
For R, in packages e1071 and Rfast
For C, a pthreads, parallelized, implementation including a SQLite interface to the data at floydWarshall.h
Comparison with other shortest path algorithms
For graphs with non-negative edge weights, Dijkstra's algorithm can be used to find all shortest paths from a single vertex with running time Θ(|E| + |V| log |V|). Thus, running Dijkstra starting at each vertex takes time Θ(|E||V| + |V|² log |V|). Since |E| = O(|V|²), this yields a worst-case running time of repeated Dijkstra of O(|V|³). While this matches the asymptotic worst-case running time of the Floyd-Warshall algorithm, the constants involved matter quite a lot. When a graph is dense (i.e., |E| ≈ |V|²), the Floyd-Warshall algorithm tends to perform better in practice. When the graph is sparse (i.e., |E| is significantly smaller than |V|²), Dijkstra tends to dominate.
For sparse graphs with negative edges but no negative cycles, Johnson's algorithm can be used, with the same asymptotic running time as the repeated Dijkstra approach.
There are also known algorithms using fast matrix multiplication to speed up all-pairs shortest path computation in dense graphs, but these typically make extra assumptions on the edge weights (such as requiring them to be small integers). In addition, because of the high constant factors in their running time, they would only provide a speedup over the Floyd–Warshall algorithm for very large graphs.
References
External links
Interactive animation of the Floyd–Warshall algorithm
Interactive animation of the Floyd–Warshall algorithm (Technical University of Munich)
Graph algorithms
Routing algorithms
Polynomial-time problems
Articles with example pseudocode
Dynamic programming
Graph distance | Floyd–Warshall algorithm | [
"Mathematics"
] | 2,629 | [
"Graph theory",
"Computational problems",
"Polynomial-time problems",
"Mathematical relations",
"Mathematical problems",
"Graph distance"
] |
230,428 | https://en.wikipedia.org/wiki/Angular%20resolution | Angular resolution describes the ability of any image-forming device such as an optical or radio telescope, a microscope, a camera, or an eye, to distinguish small details of an object, thereby making it a major determinant of image resolution. It is used in optics applied to light waves, in antenna theory applied to radio waves, and in acoustics applied to sound waves. The colloquial use of the term "resolution" sometimes causes confusion; when an optical system is said to have a high resolution or high angular resolution, it means that the perceived distance, or actual angular distance, between resolved neighboring objects is small. The value that quantifies this property, θ, which is given by the Rayleigh criterion, is low for a system with a high resolution. The closely related term spatial resolution refers to the precision of a measurement with respect to space, which is directly connected to angular resolution in imaging instruments. The Rayleigh criterion shows that the minimum angular spread that can be resolved by an image-forming system is limited by diffraction to the ratio of the wavelength of the waves to the aperture width. For this reason, high-resolution imaging systems such as astronomical telescopes, long distance telephoto camera lenses and radio telescopes have large apertures.
Definition of terms
Resolving power is the ability of an imaging device to separate (i.e., to see as distinct) points of an object that are located at a small angular distance or it is the power of an optical instrument to separate far away objects, that are close together, into individual images. The term resolution or minimum resolvable distance is the minimum distance between distinguishable objects in an image, although the term is loosely used by many users of microscopes and telescopes to describe resolving power. As explained below, diffraction-limited resolution is defined by the Rayleigh criterion as the angular separation of two point sources when the maximum of each source lies in the first minimum of the diffraction pattern (Airy disk) of the other. In scientific analysis, in general, the term "resolution" is used to describe the precision with which any instrument measures and records (in an image or spectrum) any variable in the specimen or sample under study.
The Rayleigh criterion
The imaging system's resolution can be limited either by aberration or by diffraction causing blurring of the image. These two phenomena have different origins and are unrelated. Aberrations can be explained by geometrical optics and can in principle be solved by increasing the optical quality of the system. On the other hand, diffraction comes from the wave nature of light and is determined by the finite aperture of the optical elements. The lens' circular aperture is analogous to a two-dimensional version of the single-slit experiment. Light passing through the lens interferes with itself creating a ring-shape diffraction pattern, known as the Airy pattern, if the wavefront of the transmitted light is taken to be spherical or plane over the exit aperture.
The interplay between diffraction and aberration can be characterised by the point spread function (PSF). The narrower the aperture of a lens the more likely the PSF is dominated by diffraction. In that case, the angular resolution of an optical system can be estimated (from the diameter of the aperture and the wavelength of the light) by the Rayleigh criterion defined by Lord Rayleigh: two point sources are regarded as just resolved when the principal diffraction maximum (center) of the Airy disk of one image coincides with the first minimum of the Airy disk of the other, as shown in the accompanying photos. (In the bottom photo on the right that shows the Rayleigh criterion limit, the central maximum of one point source might look as though it lies outside the first minimum of the other, but examination with a ruler verifies that the two do intersect.) If the distance is greater, the two points are well resolved and if it is smaller, they are regarded as not resolved. Rayleigh defended this criterion on sources of equal strength.
Considering diffraction through a circular aperture, this translates into:
θ ≈ 1.22 λ / D
where θ is the angular resolution (radians), λ is the wavelength of light, and D is the diameter of the lens' aperture. The factor 1.22 is derived from a calculation of the position of the first dark circular ring surrounding the central Airy disc of the diffraction pattern. This number is more precisely 1.21966989..., the first zero of the order-one Bessel function of the first kind divided by π.
The formal Rayleigh criterion is close to the empirical resolution limit found earlier by the English astronomer W. R. Dawes, who tested human observers on close binary stars of equal brightness. The result, θ = 4.56/D, with D in inches and θ in arcseconds, is slightly narrower than calculated with the Rayleigh criterion. A calculation using Airy discs as point spread function shows that at Dawes' limit there is a 5% dip between the two maxima, whereas at Rayleigh's criterion there is a 26.3% dip. Modern image processing techniques including deconvolution of the point spread function allow resolution of binaries with even less angular separation.
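A short numerical illustration of the Rayleigh criterion and of Dawes' empirical limit; the wavelength and aperture chosen (550 nm, 150 mm) are arbitrary example values.

import math

def rayleigh_angle(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution in radians: theta = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m

def dawes_limit_arcsec(aperture_inches):
    """Dawes' empirical limit for equal double stars: theta = 4.56 / D, with D in inches."""
    return 4.56 / aperture_inches

theta = rayleigh_angle(550e-9, 0.150)
print(math.degrees(theta) * 3600, "arcsec (Rayleigh)")        # ~0.92 arcsec
print(dawes_limit_arcsec(0.150 / 0.0254), "arcsec (Dawes)")   # ~0.77 arcsec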
Using a small-angle approximation, the angular resolution may be converted into a spatial resolution, Δℓ, by multiplication of the angle (in radians) with the distance to the object. For a microscope, that distance is close to the focal length f of the objective. For this case, the Rayleigh criterion reads:
Δℓ = 1.22 λ f / D .
This is the radius, in the imaging plane, of the smallest spot to which a collimated beam of light can be focused, which also corresponds to the size of the smallest object that the lens can resolve. The size is proportional to wavelength, λ, and thus, for example, blue light can be focused to a smaller spot than red light. If the lens is focusing a beam of light with a finite extent (e.g., a laser beam), the value of D corresponds to the diameter of the light beam, not the lens. Since the spatial resolution is inversely proportional to D, this leads to the slightly surprising result that a wide beam of light may be focused on a smaller spot than a narrow one. This result is related to the Fourier properties of a lens.
A similar result holds for a small sensor imaging a subject at infinity: The angular resolution can be converted to a spatial resolution on the sensor by using f as the distance to the image sensor; this relates the spatial resolution of the image to the f-number, N = f/D:
Δℓ ≈ 1.22 λ N .
Since this is the radius of the Airy disk, the resolution is better estimated by the diameter, 2.44 λ N.
Specific cases
Single telescope
Point-like sources separated by an angle smaller than the angular resolution cannot be resolved. A single optical telescope may have an angular resolution less than one arcsecond, but astronomical seeing and other atmospheric effects make attaining this very hard.
The angular resolution R of a telescope can usually be approximated by
R = λ / D
where λ is the wavelength of the observed radiation, and D is the diameter of the telescope's objective. The resulting R is in radians. For example, in the case of yellow light with a wavelength of 580 nm, for a resolution of 0.1 arc second, we need D = 1.2 m. Sources larger than the angular resolution are called extended sources or diffuse sources, and smaller sources are called point sources.
This formula, for light with a wavelength of about 562 nm, is also called the Dawes' limit.
Telescope array
The highest angular resolutions for telescopes can be achieved by arrays of telescopes called astronomical interferometers: These instruments can achieve angular resolutions of 0.001 arcsecond at optical wavelengths, and much higher resolutions at x-ray wavelengths. In order to perform aperture synthesis imaging, a large number of telescopes are required laid out in a 2-dimensional arrangement with a dimensional precision better than a fraction (0.25x) of the required image resolution.
The angular resolution R of an interferometer array can usually be approximated by
R = λ / B
where λ is the wavelength of the observed radiation, and B is the length of the maximum physical separation of the telescopes in the array, called the baseline. The resulting R is in radians. Sources larger than the angular resolution are called extended sources or diffuse sources, and smaller sources are called point sources.
For example, in order to form an image in yellow light with a wavelength of 580 nm, for a resolution of 1 milli-arcsecond, we need telescopes laid out in an array that is 120 m × 120 m with a dimensional precision better than 145 nm.
Microscope
The resolution R (here measured as a distance, not to be confused with the angular resolution of a previous subsection) depends on the angular aperture α:
R = 1.22 λ / (NAcondenser + NAobjective)
where NA = n sin θ.
Here NA is the numerical aperture, θ is half the included angle α of the lens, which depends on the diameter of the lens and its focal length, n is the refractive index of the medium between the lens and the specimen, and λ is the wavelength of light illuminating or emanating from (in the case of fluorescence microscopy) the sample.
It follows that the NAs of both the objective and the condenser should be as high as possible for maximum resolution. In the case that both NAs are the same, the equation may be reduced to:
R = 0.61 λ / NA ≈ λ / (2 NA) .
The practical limit for θ is about 70°. In a dry objective or condenser, this gives a maximum NA of 0.95. In a high-resolution oil immersion lens, the maximum NA is typically 1.45, when using immersion oil with a refractive index of 1.52. Due to these limitations, the resolution limit of a light microscope using visible light is about 200 nm. Given that the shortest wavelength of visible light is violet (λ ≈ 400 nm),
R = 0.61 × 400 nm / 1.45 ≈ 168 nm ,
which is near 200 nm.
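A corresponding sketch for the microscope limit, using the reduced form R = 0.61 λ / NA with the oil-immersion and dry-objective numbers quoted above:

def diffraction_limited_resolution(wavelength_m, numerical_aperture):
    """Lateral resolution limit R = 0.61 * lambda / NA (objective and condenser NA assumed equal)."""
    return 0.61 * wavelength_m / numerical_aperture

# Violet light (~400 nm) with a 1.45 NA oil-immersion objective
print(diffraction_limited_resolution(400e-9, 1.45) * 1e9, "nm")   # ~168 nm

# Dry objective at the practical NA limit of 0.95, green light (~550 nm)
print(diffraction_limited_resolution(550e-9, 0.95) * 1e9, "nm")   # ~353 nm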
Oil immersion objectives can have practical difficulties due to their shallow depth of field and extremely short working distance, which calls for the use of very thin (0.17 mm) cover slips, or, in an inverted microscope, thin glass-bottomed Petri dishes.
However, resolution below this theoretical limit can be achieved using super-resolution microscopy. These include optical near-fields (Near-field scanning optical microscope) or a diffraction technique called 4Pi STED microscopy. Objects as small as 30 nm have been resolved with both techniques. In addition to this Photoactivated localization microscopy can resolve structures of that size, but is also able to give information in z-direction (3D).
List of telescopes and arrays by angular resolution
See also
Angular diameter
Beam diameter
Dawes' limit
Diffraction-limited system
Ground sample distance
Image resolution
Optical resolution
Sparrow's resolution limit
Visual acuity
Notes
References
External links
"Concepts and Formulas in Microscopy: Resolution" by Michael W. Davidson, Nikon MicroscopyU (website).
Angle
Optics | Angular resolution | [
"Physics",
"Chemistry"
] | 2,234 | [
"Geometric measurement",
"Scalar physical quantities",
"Applied and interdisciplinary physics",
"Optics",
"Physical quantities",
" molecular",
"Atomic",
"Wikipedia categories named after physical quantities",
"Angle",
" and optical physics"
] |
230,487 | https://en.wikipedia.org/wiki/Poincar%C3%A9%20group | The Poincaré group, named after Henri Poincaré (1905), was first defined by Hermann Minkowski (1908) as the isometry group of Minkowski spacetime. It is a ten-dimensional non-abelian Lie group that is of importance as a model in our understanding of the most basic fundamentals of physics.
Overview
The Poincaré group consists of all coordinate transformations of Minkowski space that do not change the spacetime interval between events. For example, if everything were postponed by two hours, including the two events and the path you took to go from one to the other, then the time interval between the events recorded by a stopwatch that you carried with you would be the same. Or if everything were shifted five kilometres to the west, or turned 60 degrees to the right, you would also see no change in the interval. It turns out that the proper length of an object is also unaffected by such a shift.
In total, there are ten degrees of freedom for such transformations. They may be thought of as translation through time or space (four degrees, one per dimension); reflection through a plane (three degrees, the freedom in orientation of this plane); or a "boost" in any of the three spatial directions (three degrees). Composition of transformations is the operation of the Poincaré group, with rotations being produced as the composition of an even number of reflections.
In classical physics, the Galilean group is a comparable ten-parameter group that acts on absolute time and space. Instead of boosts, it features shear mappings to relate co-moving frames of reference.
In general relativity, i.e. under the effects of gravity, Poincaré symmetry applies only locally. A treatment of symmetries in general relativity is not in the scope of this article.
Poincaré symmetry
Poincaré symmetry is the full symmetry of special relativity. It includes:
translations (displacements) in time and space, forming the abelian Lie group of spacetime translations (P);
rotations in space, forming the non-abelian Lie group of three-dimensional rotations (J);
boosts, transformations connecting two uniformly moving bodies (K).
The last two symmetries, J and K, together make the Lorentz group (see also Lorentz invariance); the semi-direct product of the spacetime translations group and the Lorentz group then produce the Poincaré group. Objects that are invariant under this group are then said to possess Poincaré invariance or relativistic invariance.
10 generators (in four spacetime dimensions) associated with the Poincaré symmetry, by Noether's theorem, imply 10 conservation laws:
1 for the energy – associated with translations through time
3 for the momentum – associated with translations through spatial dimensions
3 for the angular momentum – associated with rotations between spatial dimensions
3 for a quantity involving the velocity of the center of mass – associated with hyperbolic rotations between each spatial dimension and time
Poincaré group
The Poincaré group is the group of Minkowski spacetime isometries. It is a ten-dimensional noncompact Lie group. The four-dimensional abelian group of spacetime translations is a normal subgroup, while the six-dimensional Lorentz group is also a subgroup, the stabilizer of the origin. The Poincaré group itself is the minimal subgroup of the affine group which includes all translations and Lorentz transformations. More precisely, it is a semidirect product of the spacetime translations group and the Lorentz group,
R^(1,3) ⋊ O(1, 3) ,
with group multiplication
(α, f) · (β, g) = (α + f β, f g) .
Another way of putting this is that the Poincaré group is a group extension of the Lorentz group by a vector representation of it; it is sometimes dubbed, informally, as the inhomogeneous Lorentz group. In turn, it can also be obtained as a group contraction of the de Sitter group SO(4, 1), as the de Sitter radius goes to infinity.
Its positive energy unitary irreducible representations are indexed by mass (nonnegative number) and spin (integer or half integer) and are associated with particles in quantum mechanics (see Wigner's classification).
In accordance with the Erlangen program, the geometry of Minkowski space is defined by the Poincaré group: Minkowski space is considered as a homogeneous space for the group.
In quantum field theory, the universal cover of the Poincaré group
R^(1,3) ⋊ SL(2, C) ,
which may be identified with the double cover
R^(1,3) ⋊ Spin(1, 3) ,
is more important, because representations of SO(1, 3) are not able to describe fields with spin 1/2; i.e. fermions. Here SL(2, C) is the group of complex 2 × 2 matrices with unit determinant, isomorphic to the Lorentz-signature spin group Spin(1, 3).
Poincaré algebra
The Poincaré algebra is the Lie algebra of the Poincaré group. It is a Lie algebra extension of the Lie algebra of the Lorentz group. More specifically, the proper (det Λ = 1), orthochronous (Λ^0_0 ≥ 1) part of the Lorentz subgroup (its identity component), SO⁺(1, 3), is connected to the identity and is thus provided by the exponentiation of this Lie algebra. In component form, the Poincaré algebra is given by the commutation relations:
[Pμ, Pν] = 0
(1/i) [Mμν, Pρ] = ημρ Pν − ηνρ Pμ
(1/i) [Mμν, Mρσ] = ημρ Mνσ − ημσ Mνρ − ηνρ Mμσ + ηνσ Mμρ
where P is the generator of translations, M is the generator of Lorentz transformations, and η is the Minkowski metric (see Sign convention).
The bottom commutation relation is the ("homogeneous") Lorentz group, consisting of rotations, , and boosts, . In this notation, the entire Poincaré algebra is expressible in noncovariant (but more practical) language as
where the bottom line commutator of two boosts is often referred to as a "Wigner rotation". The simplification permits reduction of the Lorentz subalgebra to and efficient treatment of its associated representations. In terms of the physical parameters, we have
The Casimir invariants of this algebra are Pμ P^μ and Wμ W^μ, where Wμ is the Pauli–Lubanski pseudovector; they serve as labels for the representations of the group.
The Poincaré group is the full symmetry group of any relativistic field theory. As a result, all elementary particles fall in representations of this group. These are usually specified by the four-momentum squared of each particle (i.e. its mass squared) and the intrinsic quantum numbers , where is the spin quantum number, is the parity and is the charge-conjugation quantum number. In practice, charge conjugation and parity are violated by many quantum field theories; where this occurs, and are forfeited. Since CPT symmetry is invariant in quantum field theory, a time-reversal quantum number may be constructed from those given.
As a topological space, the group has four connected components: the component of the identity; the time reversed component; the spatial inversion component; and the component which is both time-reversed and spatially inverted.
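The "Wigner rotation" hiding in the boost–boost commutator can be seen numerically in the 4 × 4 vector representation: the product of two pure boosts along different axes is still a Lorentz transformation but is not a symmetric matrix, so it is not itself a pure boost and must contain a spatial rotation. A minimal NumPy check, in units with c = 1 and with arbitrarily chosen rapidities:

import numpy as np

def boost_x(phi):
    """Pure Lorentz boost along x with rapidity phi, acting on (t, x, y, z)."""
    B = np.eye(4)
    B[0, 0] = B[1, 1] = np.cosh(phi)
    B[0, 1] = B[1, 0] = np.sinh(phi)
    return B

def boost_y(phi):
    """Pure Lorentz boost along y with rapidity phi."""
    B = np.eye(4)
    B[0, 0] = B[2, 2] = np.cosh(phi)
    B[0, 2] = B[2, 0] = np.sinh(phi)
    return B

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)
M = boost_x(0.5) @ boost_y(0.3)

print(np.allclose(M.T @ eta @ M, eta))   # True: the product preserves the metric
print(np.allclose(M, M.T))               # False: not symmetric, so not a pure boost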
Other dimensions
The definitions above can be generalized to arbitrary dimensions in a straightforward manner. The -dimensional Poincaré group is analogously defined by the semi-direct product
with the analogous multiplication
.
The Lie algebra retains its form, with indices and now taking values between and . The alternative representation in terms of and has no analogue in higher dimensions.
See also
Euclidean group
Galilean group
Representation theory of the Poincaré group
Wigner's classification
Symmetry in quantum mechanics
Pauli–Lubanski pseudovector
Particle physics and representation theory
Continuous spin particle
super-Poincaré algebra
Notes
References
Lie groups
Group
Quantum field theory
Theory of relativity
Symmetry | Poincaré group | [
"Physics",
"Mathematics"
] | 1,540 | [
"Quantum field theory",
"Lie groups",
"Mathematical structures",
"Quantum mechanics",
"Algebraic structures",
"Geometry",
"Theory of relativity",
"Symmetry"
] |
230,488 | https://en.wikipedia.org/wiki/Minkowski%20space | In physics, Minkowski space (or Minkowski spacetime) () is the main mathematical description of spacetime in the absence of gravitation. It combines inertial space and time manifolds into a four-dimensional model.
The model helps show how a spacetime interval between any two events is independent of the inertial frame of reference in which they are recorded. Mathematician Hermann Minkowski developed it from the work of Hendrik Lorentz, Henri Poincaré, and others, and said it "was grown on experimental physical grounds".
Minkowski space is closely associated with Einstein's theories of special relativity and general relativity and is the most common mathematical structure by which special relativity is formalized. While the individual components in Euclidean space and time might differ due to length contraction and time dilation, in Minkowski spacetime, all frames of reference will agree on the total interval in spacetime between events. Minkowski space differs from four-dimensional Euclidean space insofar as it treats time differently than the three spatial dimensions.
In 3-dimensional Euclidean space, the isometry group (maps preserving the regular Euclidean distance) is the Euclidean group. It is generated by rotations, reflections and translations. When time is appended as a fourth dimension, the further transformations of translations in time and Lorentz boosts are added, and the group of all these transformations is called the Poincaré group. Minkowski's model follows special relativity, where motion causes time dilation changing the scale applied to the frame in motion and shifts the phase of light.
Spacetime is equipped with an indefinite, non-degenerate, symmetric, bilinear form, called the Minkowski metric, the Minkowski norm squared or Minkowski inner product depending on the context. The Minkowski inner product is defined so as to yield the spacetime interval between two events when given their coordinate difference vector as an argument. Equipped with this inner product (albeit, not technically an inner product), the mathematical model of spacetime is called Minkowski space. The group of transformations for Minkowski space that preserves the spacetime interval (as opposed to the spatial Euclidean distance) is the Poincaré group (as opposed to the Galilean group).
History
Complex Minkowski spacetime
In his second relativity paper in 1905, Henri Poincaré showed how, by taking time to be an imaginary fourth spacetime coordinate ict, where c is the speed of light and i is the imaginary unit, Lorentz transformations can be visualized as ordinary rotations of the four-dimensional Euclidean sphere. The four-dimensional spacetime can be visualized as a four-dimensional space, with each point representing an event in spacetime. The Lorentz transformations can then be thought of as rotations in this four-dimensional space, where the rotation axis corresponds to the direction of relative motion between the two observers and the rotation angle is related to their relative velocity.
To understand this concept, one should consider the coordinates of an event in spacetime represented as a four-vector (t, x, y, z). A Lorentz transformation is represented by a matrix that acts on the four-vector, changing its components. This matrix can be thought of as a rotation matrix in four-dimensional space, which rotates the four-vector around a particular axis.
Rotations in planes spanned by two space unit vectors appear in coordinate space as well as in physical spacetime as Euclidean rotations and are interpreted in the ordinary sense. The "rotation" in a plane spanned by a space unit vector and a time unit vector, while formally still a rotation in coordinate space, is a Lorentz boost in physical spacetime with real inertial coordinates. The analogy with Euclidean rotations is only partial since the radius of the sphere is actually imaginary, which turns rotations into rotations in hyperbolic space (see hyperbolic rotation).
This idea, which was mentioned only briefly by Poincaré, was elaborated by Minkowski in a paper in German published in 1908 called "The Fundamental Equations for Electromagnetic Processes in Moving Bodies". He reformulated Maxwell equations as a symmetrical set of equations in the four variables combined with redefined vector variables for electromagnetic quantities, and he was able to show directly and very simply their invariance under Lorentz transformation. He also made other important contributions and used matrix notation for the first time in this context.
From his reformulation, he concluded that time and space should be treated equally, and so arose his concept of events taking place in a unified four-dimensional spacetime continuum.
Real Minkowski spacetime
In a further development in his 1908 "Space and Time" lecture, Minkowski gave an alternative formulation of this idea that used a real time coordinate instead of an imaginary one, representing the four variables of space and time in the coordinate form in a four-dimensional real vector space. Points in this space correspond to events in spacetime. In this space, there is a defined light-cone associated with each point, and events not on the light cone are classified by their relation to the apex as spacelike or timelike. It is principally this view of spacetime that is current nowadays, although the older view involving imaginary time has also influenced special relativity.
In the English translation of Minkowski's paper, the Minkowski metric, as defined below, is referred to as the line element. The Minkowski inner product below appears unnamed when referring to orthogonality (which he calls normality) of certain vectors, and the Minkowski norm squared is referred to (somewhat cryptically, perhaps this is a translation dependent) as "sum".
Minkowski's principal tool is the Minkowski diagram, and he uses it to define concepts and demonstrate properties of Lorentz transformations (e.g., proper time and length contraction) and to provide geometrical interpretation to the generalization of Newtonian mechanics to relativistic mechanics. For these special topics, see the referenced articles, as the presentation below will be principally confined to the mathematical structure (Minkowski metric and from it derived quantities and the Poincaré group as symmetry group of spacetime) following from the invariance of the spacetime interval on the spacetime manifold as consequences of the postulates of special relativity, not to specific application or derivation of the invariance of the spacetime interval. This structure provides the background setting of all present relativistic theories, barring general relativity for which flat Minkowski spacetime still provides a springboard as curved spacetime is locally Lorentzian.
Minkowski, aware of the fundamental restatement of the theory which he had made, said
Though Minkowski took an important step for physics, Albert Einstein saw its limitation:
For further historical information see references , and .
Causal structure
Where v is velocity, x, y, and z are Cartesian coordinates in 3-dimensional space, c is the constant representing the universal speed limit, and t is time, the four-dimensional vector v = (ct, x, y, z) = (ct, r) is classified according to the sign of c²t² − r². A vector is timelike if c²t² > r², spacelike if c²t² < r², and null or lightlike if c²t² = r². This can be expressed in terms of the sign of η(v, v), also called scalar product, as well, which depends on the signature. The classification of any vector will be the same in all frames of reference that are related by a Lorentz transformation (but not by a general Poincaré transformation because the origin may then be displaced) because of the invariance of the spacetime interval under Lorentz transformation.
The set of all null vectors at an event of Minkowski space constitutes the light cone of that event. Given a timelike vector , there is a worldline of constant velocity associated with it, represented by a straight line in a Minkowski diagram.
Once a direction of time is chosen, timelike and null vectors can be further decomposed into various classes. For timelike vectors, one has
future-directed timelike vectors whose first component is positive (tip of vector located in causal future (also called the absolute future) in the figure) and
past-directed timelike vectors whose first component is negative (causal past (also called the absolute past)).
Null vectors fall into three classes:
the zero vector, whose components in any basis are (origin),
future-directed null vectors whose first component is positive (upper light cone), and
past-directed null vectors whose first component is negative (lower light cone).
Together with spacelike vectors, there are 6 classes in all.
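The classification can be made concrete with a short numerical sketch. The following Python snippet is an illustration, not drawn from the article; it assumes the (+ − − −) signature and units with c = 1, and classifies a 4-vector by the sign of its Minkowski norm squared and, for non-spacelike vectors, by its time orientation.

```python
import numpy as np

# Minkowski metric with the (+, -, -, -) signature and c = 1 (a convention
# chosen for this sketch; the article discusses both signature choices).
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def classify(v, tol=1e-12):
    """Classify a 4-vector by the sign of its Minkowski norm squared
    and, for non-spacelike vectors, by its time orientation."""
    v = np.asarray(v, dtype=float)
    norm_sq = v @ eta @ v
    if norm_sq > tol:
        kind = "timelike"
    elif norm_sq < -tol:
        kind = "spacelike"
    elif np.allclose(v, 0.0):
        return "zero vector"
    else:
        kind = "null"
    direction = "future-directed" if v[0] > 0 else "past-directed"
    return f"{direction} {kind}" if kind != "spacelike" else kind

print(classify([2, 1, 0, 0]))   # future-directed timelike
print(classify([-1, 1, 0, 0]))  # past-directed null
print(classify([0, 1, 2, 0]))   # spacelike
```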
An orthonormal basis for Minkowski space necessarily consists of one timelike and three spacelike unit vectors. If one wishes to work with non-orthonormal bases, it is possible to have other combinations of vectors. For example, one can easily construct a (non-orthonormal) basis consisting entirely of null vectors, called a null basis.
Vector fields are called timelike, spacelike, or null if the associated vectors are timelike, spacelike, or null at each point where the field is defined.
Properties of time-like vectors
Time-like vectors have special importance in the theory of relativity as they correspond to events that are accessible to the observer at (0, 0, 0, 0) with a speed less than that of light. Of most interest are time-like vectors that are similarly directed, i.e. all either in the forward or in the backward cones. Such vectors have several properties not shared by space-like vectors. These arise because both forward and backward cones are convex, whereas the space-like region is not convex.
Scalar product
The scalar product of two time-like vectors u₁ = (ct₁, x₁, y₁, z₁) and u₂ = (ct₂, x₂, y₂, z₂) is
η(u₁, u₂) = c²t₁t₂ − x₁x₂ − y₁y₂ − z₁z₂.
Positivity of scalar product: An important property is that the scalar product of two similarly directed time-like vectors is always positive. This can be seen from the reversed Cauchy–Schwarz inequality below. It follows that if the scalar product of two vectors is zero, then one of these, at least, must be space-like. The scalar product of two space-like vectors can be positive or negative as can be seen by considering the product of two space-like vectors having orthogonal spatial components and time components of either different or the same sign.
Using the positivity property of time-like vectors, it is easy to verify that a linear sum with positive coefficients of similarly directed time-like vectors is also similarly directed time-like (the sum remains within the light cone because of convexity).
Norm and reversed Cauchy inequality
The norm of a time-like vector u = (ct, x, y, z) is defined as
‖u‖ = √η(u, u) = √(c²t² − x² − y² − z²).
The reversed Cauchy inequality is another consequence of the convexity of either light cone. For two distinct similarly directed time-like vectors u₁ and u₂ this inequality is
η(u₁, u₂) > ‖u₁‖ ‖u₂‖,
or algebraically,
c²t₁t₂ − x₁x₂ − y₁y₂ − z₁z₂ > √(c²t₁² − x₁² − y₁² − z₁²) √(c²t₂² − x₂² − y₂² − z₂²).
From this, the positivity property of the scalar product can be seen.
Reversed triangle inequality
For two similarly directed time-like vectors u and w, the inequality is
‖u + w‖ ≥ ‖u‖ + ‖w‖,
where the equality holds when the vectors are linearly dependent.
The proof uses the algebraic definition with the reversed Cauchy inequality:
‖u + w‖² = ‖u‖² + 2 η(u, w) + ‖w‖² ≥ ‖u‖² + 2 ‖u‖ ‖w‖ + ‖w‖² = (‖u‖ + ‖w‖)².
The result now follows by taking the square root on both sides.
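These inequalities can also be spot-checked numerically. The sketch below is illustrative only; it assumes the (+ − − −) signature, c = 1, and constructs future-directed time-like vectors by making the time component exceed the length of the spatial part. It then verifies positivity of the scalar product, the reversed Cauchy inequality, and the reversed triangle inequality on random samples.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # (+,-,-,-) convention, c = 1

def mdot(u, w):
    return u @ eta @ w

def mnorm(u):
    return np.sqrt(mdot(u, u))  # defined for time-like vectors

rng = np.random.default_rng(0)
for _ in range(1000):
    # Build two future-directed time-like vectors: time component larger
    # than the Euclidean length of the spatial part.
    x1, x2 = rng.normal(size=3), rng.normal(size=3)
    u = np.concatenate(([np.linalg.norm(x1) + rng.uniform(0.1, 2.0)], x1))
    w = np.concatenate(([np.linalg.norm(x2) + rng.uniform(0.1, 2.0)], x2))
    assert mdot(u, w) > 0                               # positivity
    assert mdot(u, w) >= mnorm(u) * mnorm(w) - 1e-9     # reversed Cauchy
    assert mnorm(u + w) >= mnorm(u) + mnorm(w) - 1e-9   # reversed triangle
print("all checks passed")
```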
Mathematical structure
It is assumed below that spacetime is endowed with a coordinate system corresponding to an inertial frame. This provides an origin, which is necessary for spacetime to be modeled as a vector space. This addition is not required, and more complex treatments analogous to an affine space can remove the extra structure. However, this is not the introductory convention and is not covered here.
For an overview, Minkowski space is a 4-dimensional real vector space equipped with a non-degenerate, symmetric bilinear form on the tangent space at each point in spacetime, here simply called the Minkowski inner product, with metric signature either (+ − − −) or (− + + +). The tangent space at each event is a vector space of the same dimension as spacetime, 4.
Tangent vectors
In practice, one need not be concerned with the tangent spaces. The vector space structure of Minkowski space allows for the canonical identification of vectors in tangent spaces at points (events) with vectors (points, events) in Minkowski space itself. See e.g. or These identifications are routinely done in mathematics. They can be expressed formally in Cartesian coordinates as
with basis vectors in the tangent spaces defined by
Here, and are any two events, and the second basis vector identification is referred to as parallel transport. The first identification is the canonical identification of vectors in the tangent space at any point with vectors in the space itself. The appearance of basis vectors in tangent spaces as first-order differential operators is due to this identification. It is motivated by the observation that a geometrical tangent vector can be associated in a one-to-one manner with a directional derivative operator on the set of smooth functions. This is promoted to a definition of tangent vectors in manifolds not necessarily being embedded in . This definition of tangent vectors is not the only possible one, as ordinary n-tuples can be used as well.
A tangent vector at a point may be defined, here specialized to Cartesian coordinates in Lorentz frames, as column vectors associated to each Lorentz frame related by Lorentz transformation such that the vector in a frame related to some frame by transforms according to . This is the same way in which the coordinates transform. Explicitly,
This definition is equivalent to the definition given above under a canonical isomorphism.
For some purposes, it is desirable to identify tangent vectors at a point with displacement vectors at , which is, of course, admissible by essentially the same canonical identification. The identifications of vectors referred to above in the mathematical setting can correspondingly be found in a more physical and explicitly geometrical setting in . They offer various degrees of sophistication (and rigor) depending on which part of the material one chooses to read.
Metric signature
The metric signature refers to which sign the Minkowski inner product yields when given space (spacelike to be specific, defined further down) and time basis vectors (timelike) as arguments. Further discussion about this theoretically inconsequential but practically necessary choice for purposes of internal consistency and convenience is deferred to the hide box below. See also the page treating sign convention in Relativity.
In general, but with several exceptions, mathematicians and general relativists prefer spacelike vectors to yield a positive sign, , while particle physicists tend to prefer timelike vectors to yield a positive sign, . Authors covering several areas of physics, e.g. Steven Weinberg and Landau and Lifshitz ( and respectively) stick to one choice regardless of topic. Arguments for the former convention include "continuity" from the Euclidean case corresponding to the non-relativistic limit . Arguments for the latter include that minus signs, otherwise ubiquitous in particle physics, go away. Yet other authors, especially of introductory texts, e.g. , do not choose a signature at all, but instead, opt to coordinatize spacetime such that the time coordinate (but not time itself!) is imaginary. This removes the need for the explicit introduction of a metric tensor (which may seem like an extra burden in an introductory course), and one need not be concerned with covariant vectors and contravariant vectors (or raising and lowering indices) to be described below. The inner product is instead effected by a straightforward extension of the dot product in to . This works in the flat spacetime of special relativity, but not in the curved spacetime of general relativity, see (who, by the way, use ). MTW also argues that it hides the true indefinite nature of the metric and the true nature of Lorentz boosts, which are not rotations. It also needlessly complicates the use of tools of differential geometry that are otherwise immediately available and useful for geometrical description and calculation – even in the flat spacetime of special relativity, e.g. of the electromagnetic field.
Terminology
Mathematically associated with the bilinear form is a tensor of type at each point in spacetime, called the Minkowski metric. The Minkowski metric, the bilinear form, and the Minkowski inner product are all the same object; it is a bilinear function that accepts two (contravariant) vectors and returns a real number. In coordinates, this is the matrix representing the bilinear form.
For comparison, in general relativity, a Lorentzian manifold is likewise equipped with a metric tensor , which is a nondegenerate symmetric bilinear form on the tangent space at each point of . In coordinates, it may be represented by a matrix depending on spacetime position. Minkowski space is thus a comparatively simple special case of a Lorentzian manifold. Its metric tensor is in coordinates with the same symmetric matrix at every point of , and its arguments can, per above, be taken as vectors in spacetime itself.
Introducing more terminology (but not more structure), Minkowski space is thus a pseudo-Euclidean space with total dimension n = 4 and signature (1, 3) or (3, 1). Elements of Minkowski space are called events. Minkowski space is often denoted ℝ¹,³ or ℝ³,¹ to emphasize the chosen signature, or just M. It is an example of a pseudo-Riemannian manifold.
Then mathematically, the metric is a bilinear form on an abstract four-dimensional real vector space , that is,
where has signature , and signature is a coordinate-invariant property of . The space of bilinear maps forms a vector space which can be identified with , and may be equivalently viewed as an element of this space. By making a choice of orthonormal basis , can be identified with the space . The notation is meant to emphasize the fact that and are not just vector spaces but have added structure.
An interesting example of non-inertial coordinates for (part of) Minkowski spacetime is the Born coordinates. Another useful set of coordinates is the light-cone coordinates.
Pseudo-Euclidean metrics
The Minkowski inner product is not an inner product, since it is not positive-definite, i.e. the quadratic form need not be positive for nonzero . The positive-definite condition has been replaced by the weaker condition of non-degeneracy. The bilinear form is said to be indefinite.
The Minkowski metric is the metric tensor of Minkowski space. It is a pseudo-Euclidean metric, or more generally, a constant pseudo-Riemannian metric in Cartesian coordinates. As such, it is a nondegenerate symmetric bilinear form, a type tensor. It accepts two arguments , vectors in , the tangent space at in . Due to the above-mentioned canonical identification of with itself, it accepts arguments with both and in .
As a notational convention, vectors in , called 4-vectors, are denoted in italics, and not, as is common in the Euclidean setting, with boldface . The latter is generally reserved for the -vector part (to be introduced below) of a -vector.
The definition
yields an inner product-like structure on , previously and also henceforth, called the Minkowski inner product, similar to the Euclidean inner product, but it describes a different geometry. It is also called the relativistic dot product. If the two arguments are the same,
the resulting quantity will be called the Minkowski norm squared. The Minkowski inner product satisfies the following properties.
Linearity in the first argument
Symmetry
Non-degeneracy
The first two conditions imply bilinearity. The defining difference between a pseudo-inner product and an inner product proper is that the former is not required to be positive definite, that is, is allowed.
The most important feature of the inner product and norm squared is that these are quantities unaffected by Lorentz transformations. In fact, it can be taken as the defining property of a Lorentz transformation in that it preserves the inner product (i.e. the value of the corresponding bilinear form on two vectors). This approach is taken more generally for all classical groups definable this way in classical group. There, the matrix is identical in the case (the Lorentz group) to the matrix to be displayed below.
Two vectors and are said to be orthogonal if . For a geometric interpretation of orthogonality in the special case, when and (or vice versa), see hyperbolic orthogonality.
A vector is called a unit vector if . A basis for consisting of mutually orthogonal unit vectors is called an orthonormal basis.
For a given inertial frame, an orthonormal basis in space, combined with the unit time vector, forms an orthonormal basis in Minkowski space. The number of positive and negative unit vectors in any such basis is a fixed pair of numbers equal to the signature of the bilinear form associated with the inner product. This is Sylvester's law of inertia.
More terminology (but not more structure): The Minkowski metric is a pseudo-Riemannian metric, more specifically, a Lorentzian metric, even more specifically, the Lorentz metric, reserved for -dimensional flat spacetime with the remaining ambiguity only being the signature convention.
Minkowski metric
From the second postulate of special relativity, together with homogeneity of spacetime and isotropy of space, it follows that the spacetime interval between two arbitrary events called and is:
This quantity is not consistently named in the literature; the term interval sometimes refers instead to the square root of the quantity defined here.
The invariance of the interval under coordinate transformations between inertial frames follows from the invariance of
provided the transformations are linear. This quadratic form can be used to define a bilinear form
via the polarization identity. This bilinear form can in turn be written as
where is a matrix associated with . While possibly confusing, it is common practice to denote with just . The matrix is read off from the explicit bilinear form as
and the bilinear form
with which this section started by assuming its existence, is now identified.
For definiteness and shorter presentation, the signature is adopted below. This choice (or the other possible choice) has no (known) physical implications. The symmetry group preserving the bilinear form with one choice of signature is isomorphic (under the map given here) with the symmetry group preserving the other choice of signature. This means that both choices are in accord with the two postulates of relativity. Switching between the two conventions is straightforward. If the metric tensor has been used in a derivation, go back to the earliest point where it was used, substitute for , and retrace forward to the desired formula with the desired metric signature.
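A small numerical check illustrates both the defining matrix property and the invariance of the interval. The snippet below is a sketch under assumptions not prescribed by the text above: the (+ − − −) signature, c = 1, and a standard boost along the x-axis parametrized by rapidity.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # one of the two admissible signatures

def boost_x(rapidity):
    """Standard Lorentz boost along the x-axis, parametrized by rapidity."""
    ch, sh = np.cosh(rapidity), np.sinh(rapidity)
    return np.array([[ch, -sh, 0, 0],
                     [-sh, ch, 0, 0],
                     [0,   0,  1, 0],
                     [0,   0,  0, 1]])

L = boost_x(0.7)
# The defining property of a Lorentz transformation: it preserves eta.
assert np.allclose(L.T @ eta @ L, eta)

# Consequently the interval between two events is unchanged.
p, q = np.array([1.0, 2.0, 0.5, -1.0]), np.array([3.0, 1.0, 0.0, 2.0])
d = p - q
interval = d @ eta @ d
d_boosted = L @ d
assert np.isclose(d_boosted @ eta @ d_boosted, interval)
print("interval is Lorentz invariant:", interval)
```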
Standard basis
A standard or orthonormal basis for Minkowski space is a set of four mutually orthogonal vectors such that
and for which when
These conditions can be written compactly in the form
Relative to a standard basis, the components of a vector v are written (v⁰, v¹, v², v³) where the Einstein notation is used to write v = v^μ e_μ. The component v⁰ is called the timelike component of v while the other three components are called the spatial components. The spatial components of a 4-vector v may be identified with a 3-vector v = (v¹, v², v³).
In terms of components, the Minkowski inner product between two vectors and is given by
and
Here lowering of an index with the metric was used.
There are many possible choices of standard basis obeying these orthonormality conditions. Any two such bases are related in some sense by a Lorentz transformation, either by a change-of-basis matrix Λ, a real 4 × 4 matrix satisfying
ΛᵀηΛ = η,
or , a linear map on the abstract vector space satisfying, for any pair of vectors , ,
Then if two different bases exist, and , can be represented as or . While it might be tempting to think of and as the same thing, mathematically, they are elements of different spaces, and act on the space of standard bases from different sides.
Raising and lowering of indices
Technically, a non-degenerate bilinear form provides a map between a vector space and its dual; in this context, the map is between the tangent spaces of and the cotangent spaces of . At a point in , the tangent and cotangent spaces are dual vector spaces (so the dimension of the cotangent space at an event is also ). Just as an authentic inner product on a vector space with one argument fixed, by Riesz representation theorem, may be expressed as the action of a linear functional on the vector space, the same holds for the Minkowski inner product of Minkowski space.
Thus if are the components of a vector in tangent space, then are the components of a vector in the cotangent space (a linear functional). Due to the identification of vectors in tangent spaces with vectors in itself, this is mostly ignored, and vectors with lower indices are referred to as covariant vectors. In this latter interpretation, the covariant vectors are (almost always implicitly) identified with vectors (linear functionals) in the dual of Minkowski space. The ones with upper indices are contravariant vectors. In the same fashion, the inverse of the map from tangent to cotangent spaces, explicitly given by the inverse of in matrix representation, can be used to define raising of an index. The components of this inverse are denoted . It happens that . These maps between a vector space and its dual can be denoted (eta-flat) and (eta-sharp) by the musical analogy.
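The index gymnastics described above reduces to matrix–vector contractions in components. The following sketch is illustrative (the (+ − − −) convention is assumed): it lowers and raises an index with numpy and checks that the inner product equals the action of the lowered covector on the other vector.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
eta_inv = np.linalg.inv(eta)   # numerically equal to eta itself here

v = np.array([2.0, 1.0, -3.0, 0.5])     # contravariant components v^mu

# Lowering an index: v_mu = eta_{mu nu} v^nu
v_lower = np.einsum("mn,n->m", eta, v)

# Raising it again recovers the original components.
assert np.allclose(np.einsum("mn,n->m", eta_inv, v_lower), v)

# The inner product of two vectors equals the covector acting on the vector.
w = np.array([1.0, 0.0, 2.0, 2.0])
assert np.isclose(v_lower @ w, np.einsum("m,mn,n->", v, eta, w))
```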
Contravariant and covariant vectors are geometrically very different objects. The first can and should be thought of as arrows. A linear function can be characterized by two objects: its kernel, which is a hyperplane passing through the origin, and its norm. Geometrically thus, covariant vectors should be viewed as a set of hyperplanes, with spacing depending on the norm (bigger = smaller spacing), with one of them (the kernel) passing through the origin. The mathematical term for a covariant vector is 1-covector or 1-form (though the latter is usually reserved for covector fields).
One quantum mechanical analogy explored in the literature is that of a de Broglie wave (scaled by a factor of Planck's reduced constant) associated with a momentum four-vector to illustrate how one could imagine a covariant version of a contravariant vector. The inner product of two contravariant vectors could equally well be thought of as the action of the covariant version of one of them on the contravariant version of the other. The inner product is then how many times the arrow pierces the planes. The mathematical reference, , offers the same geometrical view of these objects (but mentions no piercing).
The electromagnetic field tensor is a differential 2-form, which geometrical description can as well be found in MTW.
One may, of course, ignore geometrical views altogether (as is the style in e.g. and ) and proceed algebraically in a purely formal fashion. The time-proven robustness of the formalism itself, sometimes referred to as index gymnastics, ensures that moving vectors around and changing from contravariant to covariant vectors and vice versa (as well as higher order tensors) is mathematically sound. Incorrect expressions tend to reveal themselves quickly.
Coordinate free raising and lowering
Given a bilinear form , the lowered version of a vector can be thought of as the partial evaluation of , that is, there is an associated partial evaluation map
The lowered vector is then the dual map . Note it does not matter which argument is partially evaluated due to the symmetry of .
Non-degeneracy is then equivalent to injectivity of the partial evaluation map, or equivalently, to the kernel of the map being trivial. In finite dimension, as is the case here, and noting that the dimension of a finite-dimensional space is equal to the dimension of the dual, this is enough to conclude that the partial evaluation map is a linear isomorphism from to . This then allows the definition of the inverse partial evaluation map,
which allows the inverse metric to be defined as
where the two different usages of can be told apart by the argument each is evaluated on. This can then be used to raise indices. If a coordinate basis is used, the metric is indeed the matrix inverse to .
Formalism of the Minkowski metric
The present purpose is to show semi-rigorously how formally one may apply the Minkowski metric to two vectors and obtain a real number, i.e. to display the role of the differentials and how they disappear in a calculation. The setting is that of smooth manifold theory, and concepts such as covector fields and exterior derivatives are introduced.
A full-blown version of the Minkowski metric in coordinates as a tensor field on spacetime has the appearance
Explanation: The coordinate differentials are 1-form fields. They are defined as the exterior derivative of the coordinate functions . These quantities evaluated at a point provide a basis for the cotangent space at . The tensor product (denoted by the symbol ) yields a tensor field of type , i.e. the type that expects two contravariant vectors as arguments. On the right-hand side, the symmetric product (denoted by the symbol or by juxtaposition) has been taken. The equality holds since, by definition, the Minkowski metric is symmetric. The notation on the far right is also sometimes used for the related, but different, line element. It is not a tensor. For elaboration on the differences and similarities, see
Tangent vectors are, in this formalism, given in terms of a basis of differential operators of the first order,
where is an event. This operator applied to a function gives the directional derivative of at in the direction of increasing with fixed. They provide a basis for the tangent space at .
The exterior derivative of a function is a covector field, i.e. an assignment of a cotangent vector to each point , by definition such that
for each vector field . A vector field is an assignment of a tangent vector to each point . In coordinates can be expanded at each point in the basis given by the . Applying this with , the coordinate function itself, and , called a coordinate vector field, one obtains
Since this relation holds at each point , the provide a basis for the cotangent space at each and the bases and are dual to each other,
at each . Furthermore, one has
for general one-forms on a tangent space and general tangent vectors . (This can be taken as a definition, but may also be proved in a more general setting.)
Thus when the metric tensor is fed two vectors fields , , both expanded in terms of the basis coordinate vector fields, the result is
where , are the component functions of the vector fields. The above equation holds at each point , and the relation may as well be interpreted as the Minkowski metric at applied to two tangent vectors at .
As mentioned, in a vector space, such as modeling the spacetime of special relativity, tangent vectors can be canonically identified with vectors in the space itself, and vice versa. This means that the tangent spaces at each point are canonically identified with each other and with the vector space itself. This explains how the right-hand side of the above equation can be employed directly, without regard to the spacetime point the metric is to be evaluated and from where (which tangent space) the vectors come from.
This situation changes in general relativity. There one has
where now , i.e., is still a metric tensor but now depending on spacetime and is a solution of Einstein's field equations. Moreover, must be tangent vectors at spacetime point and can no longer be moved around freely.
Chronological and causality relations
Let x, y ∈ M. Here,
x chronologically precedes y if y − x is future-directed timelike. This relation has the transitive property and so can be written x < y.
x causally precedes y if y − x is future-directed null or future-directed timelike. It gives a partial ordering of spacetime and so can be written x ≤ y.
Suppose x ∈ M is timelike. Then the simultaneous hyperplane for x is {y : η(x, y) = 0}. Since this hyperplane varies as x varies, there is a relativity of simultaneity in Minkowski space.
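These relations are easy to test in components. The sketch below is an illustration assuming the (+ − − −) signature and c = 1; the helper names are ad hoc. It implements the chronological and causal precedence tests by examining the separation vector.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def norm_sq(v):
    return v @ eta @ v

def chronologically_precedes(x, y):
    d = y - x
    return norm_sq(d) > 0 and d[0] > 0       # future-directed timelike

def causally_precedes(x, y):
    d = y - x
    return norm_sq(d) >= 0 and d[0] > 0      # future-directed timelike or null

x = np.array([0.0, 0.0, 0.0, 0.0])
y = np.array([2.0, 1.0, 0.0, 0.0])           # inside the future light cone of x
z = np.array([1.0, 1.0, 0.0, 0.0])           # on the future light cone of x
print(chronologically_precedes(x, y))         # True
print(chronologically_precedes(x, z))         # False (null separation)
print(causally_precedes(x, z))                # True
```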
Generalizations
A Lorentzian manifold is a generalization of Minkowski space in two ways. The total number of spacetime dimensions is not restricted to be ( or more) and a Lorentzian manifold need not be flat, i.e. it allows for curvature.
Complexified Minkowski space
Complexified Minkowski space is defined as . Its real part is the Minkowski space of four-vectors, such as the four-velocity and the four-momentum, which are independent of the choice of orientation of the space. The imaginary part, on the other hand, may consist of four pseudovectors, such as angular velocity and magnetic moment, which change their direction with a change of orientation. A pseudoscalar is introduced, which also changes sign with a change of orientation. Thus, elements of are independent of the choice of the orientation.
The inner product-like structure on is defined as for any . A relativistic pure spin of an electron or any half spin particle is described by as , where is the four-velocity of the particle, satisfying and is the 4D spin vector, which is also the Pauli–Lubanski pseudovector satisfying and .
Generalized Minkowski space
Minkowski space refers to a mathematical formulation in four dimensions. However, the mathematics can easily be extended or simplified to create an analogous generalized Minkowski space in any number of dimensions. If n ≥ 2, n-dimensional Minkowski space is a vector space of real dimension n on which there is a constant Minkowski metric of signature (n − 1, 1) or (1, n − 1). These generalizations are used in theories where spacetime is assumed to have more or less than 4 dimensions. String theory and M-theory are two examples where n > 4. In string theory, there appear conformal field theories with 1 + 1 spacetime dimensions.
de Sitter space can be formulated as a submanifold of generalized Minkowski space as can the model spaces of hyperbolic geometry (see below).
Curvature
As a flat spacetime, the three spatial components of Minkowski spacetime always obey the Pythagorean Theorem. Minkowski space is a suitable basis for special relativity, a good description of physical systems over finite distances in systems without significant gravitation. However, in order to take gravity into account, physicists use the theory of general relativity, which is formulated in the mathematics of a non-Euclidean geometry. When this geometry is used as a model of physical space, it is known as curved space.
Even in curved space, Minkowski space is still a good description in an infinitesimal region surrounding any point (barring gravitational singularities). More abstractly, it can be said that in the presence of gravity spacetime is described by a curved 4-dimensional manifold for which the tangent space to any point is a 4-dimensional Minkowski space. Thus, the structure of Minkowski space is still essential in the description of general relativity.
Geometry
The meaning of the term geometry for the Minkowski space depends heavily on the context. Minkowski space is not endowed with Euclidean geometry, and not with any of the generalized Riemannian geometries with intrinsic curvature, those exposed by the model spaces in hyperbolic geometry (negative curvature) and the geometry modeled by the sphere (positive curvature). The reason is the indefiniteness of the Minkowski metric. Minkowski space is, in particular, not a metric space and not a Riemannian manifold with a Riemannian metric. However, Minkowski space contains submanifolds endowed with a Riemannian metric yielding hyperbolic geometry.
Model spaces of hyperbolic geometry of low dimension, say 2 or 3, cannot be isometrically embedded in Euclidean space with one more dimension, i.e. or respectively, with the Euclidean metric , preventing easy visualization. By comparison, model spaces with positive curvature are just spheres in Euclidean space of one higher dimension. Hyperbolic spaces can be isometrically embedded in spaces of one more dimension when the embedding space is endowed with the Minkowski metric .
Define to be the upper sheet () of the hyperboloid
in generalized Minkowski space of spacetime dimension This is one of the surfaces of transitivity of the generalized Lorentz group. The induced metric on this submanifold,
the pullback of the Minkowski metric under inclusion, is a Riemannian metric. With this metric is a Riemannian manifold. It is one of the model spaces of Riemannian geometry, the hyperboloid model of hyperbolic space. It is a space of constant negative curvature . The 1 in the upper index refers to an enumeration of the different model spaces of hyperbolic geometry, and the for its dimension. A corresponds to the Poincaré disk model, while corresponds to the Poincaré half-space model of dimension
Preliminaries
In the definition above is the inclusion map and the superscript star denotes the pullback. The present purpose is to describe this and similar operations as a preparation for the actual demonstration that actually is a hyperbolic space.
Hyperbolic stereographic projection
In order to exhibit the metric, it is necessary to pull it back via a suitable parametrization. A parametrization of a submanifold of a manifold is a map whose range is an open subset of . If has the same dimension as , a parametrization is just the inverse of a coordinate map . The parametrization to be used is the inverse of hyperbolic stereographic projection. This is illustrated in the figure to the right for . It is instructive to compare to stereographic projection for spheres.
Stereographic projection and its inverse are given by
where, for simplicity, . The are coordinates on and the are coordinates on .
Pulling back the metric
One has
and the map
The pulled back metric can be obtained by straightforward methods of calculus;
One computes according to the standard rules for computing differentials (though one is really computing the rigorously defined exterior derivatives),
and substitutes the results into the right hand side. This yields
This last equation shows that the metric on the ball is identical to the Riemannian metric in the Poincaré ball model, another standard model of hyperbolic geometry.
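The computation sketched above can be reproduced symbolically. The snippet below is illustrative; it uses sympy, works in spacetime dimension 1 + 2, and assumes the standard inverse hyperbolic stereographic projection formulas (which are not reproduced in the text above). It verifies that the pulled-back metric on the unit disk is the Poincaré disk metric 4 δᵢⱼ / (1 − |u|²)².

```python
import sympy as sp

u1, u2 = sp.symbols("u1 u2", real=True)
r2 = u1**2 + u2**2

# Inverse hyperbolic stereographic projection of the unit disk onto the
# upper sheet t^2 - x^2 - y^2 = 1, t > 0 (a standard parametrization,
# assumed here).
t = (1 + r2) / (1 - r2)
x = 2*u1 / (1 - r2)
y = 2*u2 / (1 - r2)

# The point really lies on the hyperboloid.
assert sp.simplify(t**2 - x**2 - y**2 - 1) == 0

# Pull back the Riemannian metric -(dt^2 - dx^2 - dy^2) induced on the sheet.
coords = (u1, u2)
g = sp.zeros(2, 2)
for i in range(2):
    for j in range(2):
        g[i, j] = sp.simplify(
            -(sp.diff(t, coords[i])*sp.diff(t, coords[j])
              - sp.diff(x, coords[i])*sp.diff(x, coords[j])
              - sp.diff(y, coords[i])*sp.diff(y, coords[j])))

# This is exactly the Poincare disk metric 4 delta_ij / (1 - |u|^2)^2.
expected = 4 / (1 - r2)**2
assert sp.simplify(g[0, 0] - expected) == 0
assert sp.simplify(g[1, 1] - expected) == 0
assert sp.simplify(g[0, 1]) == 0
print(g)
```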
See also
Hyperbolic quaternion
Hyperspace
Introduction to the mathematics of general relativity
Minkowski plane
Remarks
Notes
References
Giulini, D., The rich structure of Minkowski space, https://arxiv.org/abs/0802.4345v1.
Published translation:
Wikisource translation: The Fundamental Equations for Electromagnetic Processes in Moving Bodies
Various English translations on Wikisource: Space and Time.
Wikisource translation: On the Dynamics of the Electron
Robb, A. A.: Optical Geometry of Motion; a New View of the Theory of Relativity, Cambridge, 1911 (Heffers). http://www.archive.org/details/opticalgeometryoOOrobbrich.
Robb, A. A.: Geometry of Time and Space, Cambridge Univ Press, 1936. http://www.archive.org/details/geometryoftimean032218mbp.
External links
visualizing Minkowski space in the context of special relativity.
The Geometry of Special Relativity: The Minkowski Space – Time Light Cone
Minkowski space at PhilPapers
Equations of physics
Geometry
Lorentzian manifolds
Special relativity
Exact solutions in general relativity
Hermann Minkowski | Minkowski space | [
"Physics",
"Mathematics"
] | 8,018 | [
"Exact solutions in general relativity",
"Equations of physics",
"Mathematical objects",
"Equations",
"Special relativity",
"Geometry",
"Theory of relativity"
] |
230,489 | https://en.wikipedia.org/wiki/Lorentz%20group | In physics and mathematics, the Lorentz group is the group of all Lorentz transformations of Minkowski spacetime, the classical and quantum setting for all (non-gravitational) physical phenomena. The Lorentz group is named for the Dutch physicist Hendrik Lorentz.
For example, the following laws, equations, and theories respect Lorentz symmetry:
The kinematical laws of special relativity
Maxwell's field equations in the theory of electromagnetism
The Dirac equation in the theory of the electron
The Standard Model of particle physics
The Lorentz group expresses the fundamental symmetry of space and time of all known fundamental laws of nature. In small enough regions of spacetime where variations in the gravitational field are negligible, physical laws are Lorentz invariant in the same manner as special relativity.
Basic properties
The Lorentz group is a subgroup of the Poincaré group—the group of all isometries of Minkowski spacetime. Lorentz transformations are, precisely, isometries that leave the origin fixed. Thus, the Lorentz group is the isotropy subgroup with respect to the origin of the isometry group of Minkowski spacetime. For this reason, the Lorentz group is sometimes called the homogeneous Lorentz group while the Poincaré group is sometimes called the inhomogeneous Lorentz group. Lorentz transformations are examples of linear transformations; general isometries of Minkowski spacetime are affine transformations.
Physics definition
Assume two inertial reference frames and , and two points; the Lorentz group is the set of all transformations between the two reference frames that preserve the speed of light propagating between the two points:
In matrix form these are all the linear transformations Λ such that
ΛᵀηΛ = η,
where η is the matrix of the Minkowski metric.
These are then called Lorentz transformations.
Mathematical definition
Mathematically, the Lorentz group may be described as the indefinite orthogonal group , the matrix Lie group that preserves the quadratic form
on (the vector space equipped with this quadratic form is sometimes written ). This quadratic form is, when put on matrix form (see Classical orthogonal group), interpreted in physics as the metric tensor of Minkowski spacetime.
Mathematical properties
The Lorentz group is a six-dimensional noncompact non-abelian real Lie group that is not connected. The four connected components are not simply connected. The identity component (i.e., the component containing the identity element) of the Lorentz group is itself a group, and is often called the restricted Lorentz group, and is denoted . The restricted Lorentz group consists of those Lorentz transformations that preserve both the orientation of space and the direction of time. Its fundamental group has order 2, and its universal cover, the indefinite spin group , is isomorphic to both the special linear group and to the symplectic group . These isomorphisms allow the Lorentz group to act on a large number of mathematical structures important to physics, most notably spinors. Thus, in relativistic quantum mechanics and in quantum field theory, it is very common to call the Lorentz group, with the understanding that is a specific representation (the vector representation) of it.
A recurrent representation of the action of the Lorentz group on Minkowski space uses biquaternions, which form a composition algebra. The isometry property of Lorentz transformations holds according to the composition property .
Another property of the Lorentz group is conformality or preservation of angles. Lorentz boosts act by hyperbolic rotation of a spacetime plane, and such "rotations" preserve hyperbolic angle, the measure of rapidity used in relativity. Therefore, the Lorentz group is a subgroup of the conformal group of spacetime.
Note that this article refers to as the "Lorentz group", as the "proper Lorentz group", and as the "restricted Lorentz group". Many authors (especially in physics) use the name "Lorentz group" for (or sometimes even ) rather than . When reading such authors it is important to keep clear exactly which they are referring to.
Connected components
Because it is a Lie group, the Lorentz group is a group and also has a topological description as a smooth manifold. As a manifold, it has four connected components. Intuitively, this means that it consists of four topologically separated pieces.
The four connected components can be categorized by two transformation properties its elements have:
Some elements are reversed under time-inverting Lorentz transformations, for example, a future-pointing timelike vector would be inverted to a past-pointing vector
Some elements have orientation reversed by improper Lorentz transformations, for example, certain vierbein (tetrads)
Lorentz transformations that preserve the direction of time are called orthochronous. The subgroup of orthochronous transformations is often denoted . Those that preserve orientation are called proper, and as linear transformations they have determinant +1. (The improper Lorentz transformations have determinant −1.) The subgroup of proper Lorentz transformations is denoted .
The subgroup of all Lorentz transformations preserving both orientation and direction of time is called the proper, orthochronous Lorentz group or restricted Lorentz group, and is denoted by .
The set of the four connected components can be given a group structure as the quotient group , which is isomorphic to the Klein four-group. Every element in can be written as the semidirect product of a proper, orthochronous transformation and an element of the discrete group {1, P, T, PT}
where P and T are the parity and time reversal operators:
P = diag(1, −1, −1, −1),  T = diag(−1, 1, 1, 1).
Thus an arbitrary Lorentz transformation can be specified as a proper, orthochronous Lorentz transformation along with a further two bits of information, which pick out one of the four connected components. This pattern is typical of finite-dimensional Lie groups.
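The four components can be exhibited numerically. The sketch below is illustrative; it assumes the (+ − − −) metric, the usual matrix forms of P and T given above, and a boost along x, and labels each component by the determinant and by the sign of the time–time entry.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
P = np.diag([1.0, -1.0, -1.0, -1.0])   # parity
T = np.diag([-1.0, 1.0, 1.0, 1.0])     # time reversal

def boost_x(phi):
    ch, sh = np.cosh(phi), np.sinh(phi)
    B = np.eye(4)
    B[0, 0] = B[1, 1] = ch
    B[0, 1] = B[1, 0] = -sh
    return B

def component(L):
    """Label the connected component by (det L, sign of L[0, 0])."""
    return (int(round(np.linalg.det(L))), 1 if L[0, 0] > 0 else -1)

L0 = boost_x(0.3)
for name, L in [("restricted", L0), ("P*L", P @ L0),
                ("T*L", T @ L0), ("PT*L", P @ T @ L0)]:
    assert np.allclose(L.T @ eta @ L, eta)   # all are Lorentz transformations
    print(name, component(L))
# restricted (1, 1), P*L (-1, 1), T*L (-1, -1), PT*L (1, -1)
```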
Restricted Lorentz group
The restricted Lorentz group is the identity component of the Lorentz group, which means that it consists of all Lorentz transformations that can be connected to the identity by a continuous curve lying in the group. The restricted Lorentz group is a connected normal subgroup of the full Lorentz group with the same dimension, in this case with dimension six.
The restricted Lorentz group is generated by ordinary spatial rotations and Lorentz boosts (which are rotations in a hyperbolic space that includes a time-like direction). Since every proper, orthochronous Lorentz transformation can be written as a product of a rotation (specified by 3 real parameters) and a boost (also specified by 3 real parameters), it takes 6 real parameters to specify an arbitrary proper orthochronous Lorentz transformation. This is one way to understand why the restricted Lorentz group is six-dimensional. (See also the Lie algebra of the Lorentz group.)
The set of all rotations forms a Lie subgroup isomorphic to the ordinary rotation group . The set of all boosts, however, does not form a subgroup, since composing two boosts does not, in general, result in another boost. (Rather, a pair of non-colinear boosts is equivalent to a boost and a rotation, and this relates to Thomas rotation.) A boost in some direction, or a rotation about some axis, generates a one-parameter subgroup.
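The fact that two non-colinear boosts compose to a boost times a rotation can be checked directly. The following sketch is illustrative; it uses the standard symmetric matrix form of a pure boost along a unit axis and a polar-type decomposition, neither of which is spelled out in the text above, to exhibit the resulting Wigner/Thomas rotation numerically.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def boost(axis, phi):
    """Pure boost with rapidity phi along the unit 3-vector `axis`."""
    n = np.asarray(axis, float) / np.linalg.norm(axis)
    L = np.eye(4)
    L[0, 0] = np.cosh(phi)
    L[0, 1:] = L[1:, 0] = -np.sinh(phi) * n
    L[1:, 1:] += (np.cosh(phi) - 1.0) * np.outer(n, n)
    return L

Bx, By = boost([1, 0, 0], 0.6), boost([0, 1, 0], 0.4)
C = Bx @ By
assert np.allclose(C.T @ eta @ C, eta)          # still a Lorentz transformation
print(np.allclose(C, C.T))                      # False: not a pure boost

# Polar-type decomposition C = B R: B = (C C^T)^(1/2) is a pure boost and
# R = B^{-1} C turns out to be a spatial rotation (the Wigner/Thomas rotation).
w, V = np.linalg.eigh(C @ C.T)
B = V @ np.diag(np.sqrt(w)) @ V.T
R = np.linalg.inv(B) @ C
assert np.allclose(R[0, 1:], 0, atol=1e-10) and np.isclose(R[0, 0], 1)
angle = np.degrees(np.arccos((np.trace(R[1:, 1:]) - 1) / 2))
print(f"Thomas rotation angle: {angle:.3f} degrees")
```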
Surfaces of transitivity
If a group acts on a space , then a surface is a surface of transitivity if is invariant under (i.e., ) and for any two points there is a such that . By definition of the Lorentz group, it preserves the quadratic form
The surfaces of transitivity of the orthochronous Lorentz group , acting on flat spacetime are the following:
is the upper branch of a hyperboloid of two sheets. Points on this sheet are separated from the origin by a future time-like vector.
is the lower branch of this hyperboloid. Points on this sheet are the past time-like vectors.
is the upper branch of the light cone, the future light cone.
is the lower branch of the light cone, the past light cone.
is a hyperboloid of one sheet. Points on this sheet are space-like separated from the origin.
The origin .
These surfaces are , so the images are not faithful, but they are faithful for the corresponding facts about . For the full Lorentz group, the surfaces of transitivity are only four since the transformation takes an upper branch of a hyperboloid (cone) to a lower one and vice versa.
As symmetric spaces
An equivalent way to formulate the above surfaces of transitivity is as a symmetric space in the sense of Lie theory. For example, the upper sheet of the hyperboloid can be written as the quotient space , due to the orbit-stabilizer theorem. Furthermore, this upper sheet also provides a model for three-dimensional hyperbolic space.
Representations of the Lorentz group
These observations constitute a good starting point for finding all infinite-dimensional unitary representations of the Lorentz group, in fact, of the Poincaré group, using the method of induced representations. One begins with a "standard vector", one for each surface of transitivity, and then asks which subgroup preserves these vectors. These subgroups are called little groups by physicists. The problem is then essentially reduced to the easier problem of finding representations of the little groups. For example, a standard vector in one of the hyperboloids of two sheets could be suitably chosen as . For each , the vector pierces exactly one sheet. In this case the little group is SO(3), the rotation group, all of whose representations are known. The precise infinite-dimensional unitary representation under which a particle transforms is part of its classification. Not all representations can correspond to physical particles (as far as is known). Standard vectors on the one-sheeted hyperbolas would correspond to tachyons. Particles on the light cone are photons, and more hypothetically, gravitons. The "particle" corresponding to the origin is the vacuum.
Homomorphisms and isomorphisms
Several other groups are either homomorphic or isomorphic to the restricted Lorentz group . These homomorphisms play a key role in explaining various phenomena in physics.
The special linear group is a double covering of the restricted Lorentz group. This relationship is widely used to express the Lorentz invariance of the Dirac equation and the covariance of spinors. In other words, the (restricted) Lorentz group is isomorphic to
The symplectic group is isomorphic to ; it is used to construct Weyl spinors, as well as to explain how spinors can have a mass.
The spin group is isomorphic to ; it is used to explain spin and spinors in terms of the Clifford algebra, thus making it clear how to generalize the Lorentz group to general settings in Riemannian geometry, including theories of supergravity and string theory.
The restricted Lorentz group is isomorphic to the projective special linear group which is, in turn, isomorphic to the Möbius group, the symmetry group of conformal geometry on the Riemann sphere. This relationship is central to the classification of the subgroups of the Lorentz group according to an earlier classification scheme developed for the Möbius group.
Weyl representation
The Weyl representation or spinor map is a pair of surjective homomorphisms from SL(2, C) to SO⁺(1, 3). They form a matched pair under parity transformations, corresponding to left and right chiral spinors.
One may define an action of on Minkowski spacetime by writing a point of spacetime as a two-by-two Hermitian matrix in the form
in terms of Pauli matrices.
This presentation, the Weyl presentation, satisfies
Therefore, one has identified the space of Hermitian matrices (which is four-dimensional, as a real vector space) with Minkowski spacetime, in such a way that the determinant of a Hermitian matrix is the squared length of the corresponding vector in Minkowski spacetime. An element acts on the space of Hermitian matrices via
where is the Hermitian transpose of . This action preserves the determinant and so acts on Minkowski spacetime by (linear) isometries. The parity-inverted form of the above is
which transforms as
That this is the correct transformation follows by noting that
remains invariant under the above pair of transformations.
These maps are surjective, and kernel of either map is the two element subgroup . By the first isomorphism theorem, the quotient group is isomorphic to .
The parity map swaps these two coverings. It corresponds to Hermitian conjugation being an automorphism of . These two distinct coverings corresponds to the two distinct chiral actions of the Lorentz group on spinors. The non-overlined form corresponds to right-handed spinors transforming as , while the overline form corresponds to left-handed spinors transforming as .
It is important to observe that this pair of coverings does not survive quantization; when quantized, this leads to the peculiar phenomenon of the chiral anomaly. The classical (i.e., non-quantized) symmetries of the Lorentz group are broken by quantization; this is the content of the Atiyah–Singer index theorem.
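The spinor map can be illustrated with a few lines of numpy. The sketch below is an illustration: the particular assignment X = t σ₀ + x σ₁ + y σ₂ + z σ₃ is one common choice, and the SL(2, C) element used is just a sample diagonal matrix of unit determinant. It checks that det X reproduces the Minkowski norm squared and that the action X → S X S† preserves it.

```python
import numpy as np

# Pauli matrices together with the identity (sigma_0).
sigma = [np.eye(2),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def to_hermitian(x):
    """X = t*sigma_0 + x*sigma_1 + y*sigma_2 + z*sigma_3 (one common choice)."""
    return sum(xi * s for xi, s in zip(x, sigma))

def from_hermitian(X):
    return np.array([np.trace(X @ s).real / 2 for s in sigma])

x = np.array([2.0, 0.3, -1.0, 0.5])
X = to_hermitian(x)
eta = np.diag([1.0, -1.0, -1.0, -1.0])
# det X equals the Minkowski norm squared t^2 - x^2 - y^2 - z^2.
assert np.isclose(np.linalg.det(X).real, x @ eta @ x)

# An SL(2,C) element acts by X -> S X S^dagger, preserving det X,
# hence acting as a Lorentz transformation on the 4-vector.
S = np.array([[np.exp(0.3), 0], [0, np.exp(-0.3)]], dtype=complex)  # det S = 1
x_new = from_hermitian(S @ X @ S.conj().T)
assert np.isclose(x_new @ eta @ x_new, x @ eta @ x)
print(x, "->", x_new)
```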
Notational conventions
In physics, it is conventional to denote a Lorentz transformation as , thus showing the matrix with spacetime indexes . A four-vector can be created from the Pauli matrices in two different ways: as and as . The two forms are related by a parity transformation. Note that .
Given a Lorentz transformation , the double-covering of the orthochronous Lorentz group by given above can be written as
Dropping the this takes the form
The parity conjugate form is
Proof
That the above is the correct form for indexed notation is not immediately obvious, partly because, when working in indexed notation, it is quite easy to accidentally confuse a Lorentz transform with its inverse, or its transpose. This confusion arises due to the identity being difficult to recognize when written in indexed form. Lorentz transforms are not tensors under Lorentz transformations! Thus a direct proof of this identity is useful, for establishing its correctness. It can be demonstrated by starting with the identity
where so that the above are just the usual Pauli matrices, and is the matrix transpose, and is complex conjugation. The matrix is
Written as the four-vector, the relationship is
This transforms as
Taking one more transpose, one gets
Symplectic group
The symplectic group is isomorphic to . This isomorphism is constructed so as to preserve a symplectic bilinear form on , that is, to leave the form invariant under Lorentz transformations. This may be articulated as follows. The symplectic group is defined as
where
Other common notations are for this element; sometimes is used, but this invites confusion with the idea of almost complex structures, which are not the same, as they transform differently.
Given a pair of Weyl spinors (two-component spinors)
the invariant bilinear form is conventionally written as
This form is invariant under the Lorentz group, so that for one has
This defines a kind of "scalar product" of spinors, and is commonly used to define a Lorentz-invariant mass term in Lagrangians. There are several notable properties to be called out that are important to physics. One is that and so
The defining relation can be written as
which closely resembles the defining relation for the Lorentz group
where is the metric tensor for Minkowski space and of course, as before.
Covering groups
Since is simply connected, it is the universal covering group of the restricted Lorentz group . By restriction, there is a homomorphism . Here, the special unitary group SU(2), which is isomorphic to the group of unit norm quaternions, is also simply connected, so it is the covering group of the rotation group . Each of these covering maps is a twofold cover in the sense that precisely two elements of the covering group map to each element of the quotient. One often says that the restricted Lorentz group and the rotation group are doubly connected. This means that the fundamental group of each group is isomorphic to the two-element cyclic group .
Twofold coverings are characteristic of spin groups. Indeed, in addition to the double coverings
we have the double coverings
These spinorial double coverings are constructed from Clifford algebras.
Topology
The left and right groups in the double covering
are deformation retracts of the left and right groups, respectively, in the double covering
.
But the homogeneous space is homeomorphic to hyperbolic 3-space , so we have exhibited the restricted Lorentz group as a principal fiber bundle with fibers and base . Since the latter is homeomorphic to , while is homeomorphic to three-dimensional real projective space , we see that the restricted Lorentz group is locally homeomorphic to the product of with . Since the base space is contractible, this can be extended to a global homeomorphism.
Conjugacy classes
Because the restricted Lorentz group is isomorphic to the Möbius group , its conjugacy classes also fall into five classes:
Elliptic transformations
Hyperbolic transformations
Loxodromic transformations
Parabolic transformations
The trivial identity transformation
In the article on Möbius transformations, it is explained how this classification arises by considering the fixed points of Möbius transformations in their action on the Riemann sphere, which corresponds here to null eigenspaces of restricted Lorentz transformations in their action on Minkowski spacetime.
An example of each type is given in the subsections below, along with the effect of the one-parameter subgroup it generates (e.g., on the appearance of the night sky).
The Möbius transformations are the conformal transformations of the Riemann sphere (or celestial sphere). Then conjugating with an arbitrary element of obtains the following examples of arbitrary elliptic, hyperbolic, loxodromic, and parabolic (restricted) Lorentz transformations, respectively. The effect on the flow lines of the corresponding one-parameter subgroups is to transform the pattern seen in the examples by some conformal transformation. For example, an elliptic Lorentz transformation can have any two distinct fixed points on the celestial sphere, but points still flow along circular arcs from one fixed point toward the other. The other cases are similar.
Elliptic
An elliptic element of is
and has fixed points = 0, ∞. Writing the action as and collecting terms, the spinor map converts this to the (restricted) Lorentz transformation
This transformation then represents a rotation about the axis, exp(). The one-parameter subgroup it generates is obtained by taking to be a real variable, the rotation angle, instead of a constant.
The corresponding continuous transformations of the celestial sphere (except for the identity) all share the same two fixed points, the North and South poles. The transformations move all other points around latitude circles so that this group yields a continuous counter-clockwise rotation about the axis as increases. The angle doubling evident in the spinor map is a characteristic feature of spinorial double coverings.
Hyperbolic
A hyperbolic element of is
and has fixed points = 0, ∞. Under stereographic projection from the Riemann sphere to the Euclidean plane, the effect of this Möbius transformation is a dilation from the origin.
The spinor map converts this to the Lorentz transformation
This transformation represents a boost along the axis with rapidity . The one-parameter subgroup it generates is obtained by taking to be a real variable, instead of a constant. The corresponding continuous transformations of the celestial sphere (except for the identity) all share the same fixed points (the North and South poles), and they move all other points along longitudes away from the South pole and toward the North pole.
Loxodromic
A loxodromic element of is
and has fixed points = 0, ∞. The spinor map converts this to the Lorentz transformation
The one-parameter subgroup this generates is obtained by replacing with any real multiple of this complex constant. (If , vary independently, then a two-dimensional abelian subgroup is obtained, consisting of simultaneous rotations about the axis and boosts along the -axis; in contrast, the one-dimensional subgroup discussed here consists of those elements of this two-dimensional subgroup such that the rapidity of the boost and angle of the rotation have a fixed ratio.)
The corresponding continuous transformations of the celestial sphere (excepting the identity) all share the same two fixed points (the North and South poles). They move all other points away from the South pole and toward the North pole (or vice versa), along a family of curves called loxodromes. Each loxodrome spirals infinitely often around each pole.
Parabolic
A parabolic element of is
and has the single fixed point = ∞ on the Riemann sphere. Under stereographic projection, it appears as an ordinary translation along the real axis.
The spinor map converts this to the matrix (representing a Lorentz transformation)
This generates a two-parameter abelian subgroup, which is obtained by considering a complex variable rather than a constant. The corresponding continuous transformations of the celestial sphere (except for the identity transformation) move points along a family of circles that are all tangent at the North pole to a certain great circle. All points other than the North pole itself move along these circles.
Parabolic Lorentz transformations are often called null rotations. Since these are likely to be the least familiar of the four types of nonidentity Lorentz transformations (elliptic, hyperbolic, loxodromic, parabolic), it is illustrated here how to determine the effect of an example of a parabolic Lorentz transformation on Minkowski spacetime.
The matrix given above yields the transformation
Now, without loss of generality, pick . Differentiating this transformation with respect to the now real group parameter and evaluating at produces the corresponding vector field (first order linear partial differential operator),
Apply this to a function , and demand that it stays invariant; i.e., it is annihilated by this transformation. The solution of the resulting first order linear partial differential equation can be expressed in the form
where is an arbitrary smooth function. The arguments of give three rational invariants describing how points (events) move under this parabolic transformation, as they themselves do not move,
Choosing real values for the constants on the right hand sides yields three conditions, and thus specifies a curve in Minkowski spacetime. This curve is an orbit of the transformation.
The form of the rational invariants shows that these flowlines (orbits) have a simple description: suppressing the inessential coordinate , each orbit is the intersection of a null plane, , with a hyperboloid, . The case in which the third constant vanishes has the hyperboloid degenerate to a light cone, with the orbits becoming parabolas lying in corresponding null planes.
A particular null line lying on the light cone is left invariant; this corresponds to the unique (double) fixed point on the Riemann sphere mentioned above. The other null lines through the origin are "swung around the cone" by the transformation. Following the motion of one such null line as increases corresponds to following the motion of a point along one of the circular flow lines on the celestial sphere, as described above.
A choice instead, produces similar orbits, now with the roles of and interchanged.
Parabolic transformations lead to the gauge symmetry of massless particles (such as photons) with helicity of magnitude at least 1. In the above explicit example, a massless particle moving in the direction, so with 4-momentum , is not affected at all by the -boost and -rotation combination defined below, in the "little group" of its motion. This is evident from the explicit transformation law discussed: like any light-like vector, P itself is now invariant; i.e., all traces or effects of have disappeared in the special case discussed. (The other similar generator, as well as it and , comprise altogether the little group of the light-like vector, isomorphic to .)
Appearance of the night sky
This isomorphism has the consequence that Möbius transformations of the Riemann sphere represent the way that Lorentz transformations change the appearance of the night sky, as seen by an observer who is maneuvering at relativistic velocities relative to the "fixed stars".
Suppose the "fixed stars" live in Minkowski spacetime and are modeled by points on the celestial sphere. Then a given point on the celestial sphere can be associated with , a complex number that corresponds to the point on the Riemann sphere, and can be identified with a null vector (a light-like vector) in Minkowski space
or, in the Weyl representation (the spinor map), the Hermitian matrix
The set of real scalar multiples of this null vector, called a null line through the origin, represents a line of sight from an observer at a particular place and time (an arbitrary event we can identify with the origin of Minkowski spacetime) to various distant objects, such as stars. Then the points of the celestial sphere (equivalently, lines of sight) are identified with certain Hermitian matrices.
Projective geometry and different views of the 2-sphere
This picture emerges cleanly in the language of projective geometry. The (restricted) Lorentz group acts on the projective celestial sphere. This is the space of non-zero null vectors with positive time component, taken under the quotient used for projective spaces: two such vectors are identified if one is a positive scalar multiple of the other. It is referred to as the celestial sphere because this allows us to rescale the time coordinate to 1 after acting with a Lorentz transformation, ensuring that the space-like part sits on the unit sphere.
From the Möbius side, acts on complex projective space , which can be shown to be diffeomorphic to the 2-sphere – this is sometimes referred to as the Riemann sphere. The quotient on projective space leads to a quotient on the group .
Finally, these two can be linked together by using the complex projective vector to construct a null-vector. If is a projective vector, it can be tensored with its Hermitian conjugate to produce a Hermitian matrix. From elsewhere in this article we know this space of matrices can be viewed as 4-vectors. The space of matrices coming from turning each projective vector in the Riemann sphere into a matrix is known as the Bloch sphere.
Lie algebra
As with any Lie group, a useful way to study many aspects of the Lorentz group is via its Lie algebra. Since the Lorentz group is a matrix Lie group, its corresponding Lie algebra is a matrix Lie algebra, which may be computed as
.
If is the diagonal matrix with diagonal entries , then the Lie algebra consists of matrices such that
.
Explicitly, consists of matrices of the form
,
where the six entries are arbitrary real numbers. This Lie algebra is six dimensional. The subalgebra consisting of elements in which the three boost parameters are equal to zero is isomorphic to so(3).
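A quick numerical check of the defining condition is easy to write. Assuming the form g = diag(1, -1, -1, -1) (one common choice of signature), a matrix A belongs to the Lorentz Lie algebra exactly when A^T g + g A = 0, i.e. when gA is antisymmetric. The sketch below builds a generic six-parameter element and verifies this; the placement of the boost and rotation parameters is one common convention, not necessarily the explicit matrix displayed above, and the function names are assumptions.

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])      # one common choice of the Minkowski form

def in_lorentz_algebra(A, tol=1e-12):
    """A lies in the Lorentz Lie algebra iff A^T g + g A = 0, i.e. gA is antisymmetric."""
    return np.allclose(A.T @ g + g @ A, 0.0, atol=tol)

def generator(boosts, rotations):
    """Generic six-parameter element; the placement of the boost parameters (a, b, c)
    and rotation parameters (d, e, f) is one common convention."""
    a, b, c = boosts
    d, e, f = rotations
    return np.array([[0.0,  a,   b,    c],
                     [a,    0.0, -f,   e],
                     [b,    f,   0.0, -d],
                     [c,   -e,   d,   0.0]])

A = generator((0.1, -0.2, 0.3), (1.0, 2.0, 3.0))
assert in_lorentz_algebra(A)
```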
The full Lorentz group , the proper Lorentz group and the proper orthochronous Lorentz group (the component connected to the identity) all have the same Lie algebra, which is typically denoted .
Since the identity component of the Lorentz group is isomorphic to a finite quotient of SL(2, C) (see the section above on the connection of the Lorentz group to the Möbius group), the Lie algebra of the Lorentz group is isomorphic to the Lie algebra sl(2, C). As a complex Lie algebra, sl(2, C) is three dimensional, but it is six dimensional when viewed as a real Lie algebra.
Commutation relations of the Lorentz algebra
The standard basis matrices can be indexed as where take values in . These arise from taking, in turn, only one of the parameters to be one and the others zero. The components can be written as
.
The commutation relations are
There are different possible choices of convention in use. In physics, it is common to include a factor of with the basis elements, which gives a factor of in the commutation relations.
Then generate boosts and generate rotations.
The structure constants for the Lorentz algebra can be read off from the commutation relations. Any set of basis elements which satisfy these relations form a representation of the Lorentz algebra.
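The commutation relations can also be verified by brute force. The sketch below constructs the three rotation generators and three boost generators as 4×4 matrices in the real (no factor of i) convention and checks the standard relations [J_i, J_j] = ε_ijk J_k, [J_i, K_j] = ε_ijk K_k and [K_i, K_j] = -ε_ijk J_k; the index placement and signs are a common convention and may differ from the indexing used above.

```python
import numpy as np
from itertools import product

eps = np.zeros((3, 3, 3))                  # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

J = np.zeros((3, 4, 4))                    # rotation generators (spatial block)
K = np.zeros((3, 4, 4))                    # boost generators (time-space block)
for i in range(3):
    K[i, 0, i + 1] = K[i, i + 1, 0] = 1.0
    for j, k in product(range(3), repeat=2):
        J[i, j + 1, k + 1] = -eps[i, j, k]

comm = lambda A, B: A @ B - B @ A
for i, j in product(range(3), repeat=2):
    assert np.allclose(comm(J[i], J[j]),  sum(eps[i, j, k] * J[k] for k in range(3)))
    assert np.allclose(comm(J[i], K[j]),  sum(eps[i, j, k] * K[k] for k in range(3)))
    assert np.allclose(comm(K[i], K[j]), -sum(eps[i, j, k] * J[k] for k in range(3)))
```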
Generators of boosts and rotations
The Lorentz group can be thought of as a subgroup of the diffeomorphism group of and therefore its Lie algebra can be identified with vector fields on . In particular, the vectors that generate isometries on a space are its Killing vectors, which provides a convenient alternative to the left-invariant vector field for calculating the Lie algebra. We can write down a set of six generators:
Vector fields on generating three rotations ,
Vector fields on generating three boosts ,
The factor of appears to ensure that the generators of rotations are Hermitian.
It may be helpful to briefly recall here how to obtain a one-parameter group from a vector field, written in the form of a first order linear partial differential operator such as
The corresponding initial value problem (consider a function of a scalar and solve with some initial conditions) is
The solution can be written
or
where we easily recognize the one-parameter matrix group of rotations about the z-axis.
Differentiating with respect to the group parameter and setting it to zero in that result, we recover the standard matrix,
which corresponds to the vector field we started with. This illustrates how to pass between matrix and vector field representations of elements of the Lie algebra. The exponential map plays this special role not only for the Lorentz group but for Lie groups in general.
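The passage between a Lie algebra element and its one-parameter group can be reproduced with a matrix exponential. The sketch below, using SciPy's expm, exponentiates a generator of rotations about the z-axis (only the spatial 3×3 block is shown) and then recovers the generator by differentiating the flow at the identity; the sign convention of the generator is an assumption.

```python
import numpy as np
from scipy.linalg import expm

# generator of rotations about the z-axis (only the spatial 3x3 block is needed here)
Jz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 0.0]])

theta = 0.7
R = expm(theta * Jz)                      # the one-parameter subgroup exp(theta * Jz)
R_expected = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
assert np.allclose(R, R_expected)

# differentiating the flow at theta = 0 recovers the generator we started with
h = 1e-6
assert np.allclose((expm(h * Jz) - np.eye(3)) / h, Jz, atol=1e-5)
```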
Reversing the procedure in the previous section, we see that the Möbius transformations that correspond to our six generators arise from exponentiating respectively (for the three boosts) or (for the three rotations) times the three Pauli matrices
Generators of the Möbius group
Another generating set arises via the isomorphism to the Möbius group. The following table lists the six generators, in which
The first column gives a generator of the flow under the Möbius action (after stereographic projection from the Riemann sphere) as a real vector field on the Euclidean plane.
The second column gives the corresponding one-parameter subgroup of Möbius transformations.
The third column gives the corresponding one-parameter subgroup of Lorentz transformations (the image under our homomorphism of preceding one-parameter subgroup).
The fourth column gives the corresponding generator of the flow under the Lorentz action as a real vector field on Minkowski spacetime.
Notice that the generators consist of
Two parabolics (null rotations)
One hyperbolic (boost in the direction)
Three elliptics (rotations about the x, y, z axes, respectively)
Worked example: rotation about the y-axis
Start with
Exponentiate:
This element of represents the one-parameter subgroup of (elliptic) Möbius transformations:
Next,
The corresponding vector field on (thought of as the image of under stereographic projection) is
Writing , this becomes the vector field on
Returning to our element of , writing out the action and collecting terms, we find that the image under the spinor map is the element of
Differentiating with respect to at , yields the corresponding vector field on ,
This is evidently the generator of counterclockwise rotation about the -axis.
Subgroups of the Lorentz group
The subalgebras of the Lie algebra of the Lorentz group can be enumerated, up to conjugacy, from which the closed subgroups of the restricted Lorentz group can be listed, up to conjugacy. (See the book by Hall cited below for the details.) These can be readily expressed in terms of the generators given in the table above.
The one-dimensional subalgebras of course correspond to the four conjugacy classes of elements of the Lorentz group:
generates a one-parameter subalgebra of parabolics ,
generates a one-parameter subalgebra of boosts ,
generates a one-parameter subalgebra of rotations ,
(for any ) generates a one-parameter subalgebra of loxodromic transformations.
(Strictly speaking the last corresponds to infinitely many classes, since distinct give different classes.)
The two-dimensional subalgebras are:
generate an abelian subalgebra consisting entirely of parabolics,
generate a nonabelian subalgebra isomorphic to the Lie algebra of the affine group ,
generate an abelian subalgebra consisting of boosts, rotations, and loxodromics all sharing the same pair of fixed points.
The three-dimensional subalgebras use the Bianchi classification scheme:
generate a Bianchi V subalgebra, isomorphic to the Lie algebra of , the group of Euclidean homotheties,
generate a Bianchi VII subalgebra, isomorphic to the Lie algebra of , the Euclidean group,
, where , generate a Bianchi VII subalgebra,
generate a Bianchi VIII subalgebra, isomorphic to the Lie algebra of , the group of isometries of the hyperbolic plane,
generate a Bianchi IX subalgebra, isomorphic to the Lie algebra of , the rotation group.
The Bianchi types refer to the classification of three-dimensional Lie algebras by the Italian mathematician Luigi Bianchi.
The four-dimensional subalgebras are all conjugate to
generate a subalgebra isomorphic to the Lie algebra of , the group of Euclidean similitudes.
The subalgebras form a lattice (see the figure), and each subalgebra generates by exponentiation a closed subgroup of the restricted Lorentz group. From these, all subgroups of the Lorentz group can be constructed, up to conjugation, by multiplying by one of the elements of the Klein four-group.
As with any connected Lie group, the coset spaces of the closed subgroups of the restricted Lorentz group, or homogeneous spaces, have considerable mathematical interest. A few, brief descriptions:
The group is the stabilizer of a null line; i.e., of a point on the Riemann sphere—so the homogeneous space is the Kleinian geometry that represents conformal geometry on the sphere .
The (identity component of the) Euclidean group is the stabilizer of a null vector, so the homogeneous space is the momentum space of a massless particle; geometrically, this Kleinian geometry represents the degenerate geometry of the light cone in Minkowski spacetime.
The rotation group is the stabilizer of a timelike vector, so the homogeneous space is the momentum space of a massive particle; geometrically, this space is none other than three-dimensional hyperbolic space .
Generalization to higher dimensions
The concept of the Lorentz group has a natural generalization to spacetime of any number of dimensions. Mathematically, the Lorentz group of (n + 1)-dimensional Minkowski space is the indefinite orthogonal group of linear transformations of that preserves the quadratic form
The group preserving the quadratic form with the signs reversed is isomorphic to it, and both presentations of the Lorentz group are in use in the theoretical physics community. The former is more common in literature related to gravity, while the latter is more common in particle physics literature.
A common notation for the vector space , equipped with this choice of quadratic form, is .
Many of the properties of the Lorentz group in four dimensions (where ) generalize straightforwardly to arbitrary . For instance, the Lorentz group has four connected components, and it acts by conformal transformations on the celestial -sphere in -dimensional Minkowski space. The identity component is an -bundle over hyperbolic -space .
The low-dimensional cases and are often useful as "toy models" for the physical case , while higher-dimensional Lorentz groups are used in physical theories such as string theory that posit the existence of hidden dimensions. The Lorentz group is also the isometry group of -dimensional de Sitter space , which may be realized as the homogeneous space . In particular is the isometry group of the de Sitter universe , a cosmological model.
See also
Lorentz transformation
Representation theory of the Lorentz group
Poincaré group
Möbius group
Minkowski space
Biquaternions
Indefinite orthogonal group
Quaternions and spatial rotation
Special relativity
Symmetry in quantum mechanics
Notes
References
Reading List
Emil Artin (1957) Geometric Algebra, chapter III: Symplectic and Orthogonal Geometry via Internet Archive, covers orthogonal groups
A canonical reference; see chapters 1–6 for representations of the Lorentz group.
An excellent resource for Lie theory, fiber bundles, spinorial coverings, and many other topics.
See Lecture 11 for the irreducible representations of .
See Chapter 6 for the subalgebras of the Lie algebra of the Lorentz group.
See Section 1.3 for a beautifully illustrated discussion of covering spaces, and Section 3D for the topology of rotation groups.
§41.3
(Dover reprint edition.) An excellent reference on Minkowski spacetime and the Lorentz group.
See Chapter 3 for a superbly illustrated discussion of Möbius transformations.
Lie groups
Special relativity
Group theory
Hendrik Lorentz | Lorentz group | [
"Physics",
"Mathematics"
] | 7,599 | [
"Lie groups",
"Mathematical structures",
"Special relativity",
"Group theory",
"Fields of abstract algebra",
"Algebraic structures",
"Theory of relativity"
] |
230,491 | https://en.wikipedia.org/wiki/Fock%20space | The Fock space is an algebraic construction used in quantum mechanics to construct the quantum states space of a variable or unknown number of identical particles from a single particle Hilbert space . It is named after V. A. Fock who first introduced it in his 1932 paper "Konfigurationsraum und zweite Quantelung" ("Configuration space and second quantization").
Informally, a Fock space is the sum of a set of Hilbert spaces representing zero particle states, one particle states, two particle states, and so on. If the identical particles are bosons, the -particle states are vectors in a symmetrized tensor product of single-particle Hilbert spaces . If the identical particles are fermions, the -particle states are vectors in an antisymmetrized tensor product of single-particle Hilbert spaces (see symmetric algebra and exterior algebra respectively). A general state in Fock space is a linear combination of -particle states, one for each .
Technically, the Fock space is (the Hilbert space completion of) the direct sum of the symmetric or antisymmetric tensors in the tensor powers of a single-particle Hilbert space ,
Here is the operator that symmetrizes or antisymmetrizes a tensor, depending on whether the Hilbert space describes particles obeying bosonic or fermionic statistics, and the overline represents the completion of the space. The bosonic (resp. fermionic) Fock space can alternatively be constructed as (the Hilbert space completion of) the symmetric tensors (resp. alternating tensors ). For every basis for there is a natural basis of the Fock space, the Fock states.
Definition
The Fock space is the (Hilbert) direct sum of tensor products of copies of a single-particle Hilbert space
Here , the complex scalars, consists of the states corresponding to no particles, the states of one particle, the states of two identical particles etc.
A general state in is given by
where
is a vector of length 1 called the vacuum state and is a complex coefficient,
is a state in the single particle Hilbert space and is a complex coefficient,
, and is a complex coefficient, etc.
The convergence of this infinite sum is important if is to be a Hilbert space. Technically we require to be the Hilbert space completion of the algebraic direct sum. It consists of all infinite tuples such that the norm, defined by the inner product is finite
where the particle norm is defined by
i.e., the restriction of the norm on the tensor product
For two general states
and
the inner product on is then defined as
where we use the inner products on each of the -particle Hilbert spaces. Note that, in particular the particle subspaces are orthogonal for different .
Product states, indistinguishable particles, and a useful basis for Fock space
A product state of the Fock space is a state of the form
which describes a collection of particles, one of which has quantum state , another and so on up to the th particle, where each is any state from the single particle Hilbert space . Here juxtaposition (writing the single particle kets side by side, without the ) is symmetric (resp. antisymmetric) multiplication in the symmetric (antisymmetric) tensor algebra. The general state in a Fock space is a linear combination of product states. A state that cannot be written as a convex sum of product states is called an entangled state.
When we speak of one particle in state , we must bear in mind that in quantum mechanics identical particles are indistinguishable. In the same Fock space, all particles are identical. (To describe many species of particles, we take the tensor product of as many different Fock spaces as there are species of particles under consideration). It is one of the most powerful features of this formalism that states are implicitly properly symmetrized. For instance, if the above state is fermionic, it will be 0 if two (or more) of the are equal because the antisymmetric (exterior) product vanishes. This is a mathematical formulation of the Pauli exclusion principle that no two (or more) fermions can be in the same quantum state. In fact, whenever the terms in a formal product are linearly dependent, the product will be zero for antisymmetric tensors. Also, the product of orthonormal states is properly orthonormal by construction (although possibly 0 in the Fermi case when two states are equal).
A useful and convenient basis for a Fock space is the occupancy number basis. Given a basis of , we can denote the state with
particles in state ,
particles in state , ..., particles in state , and no particles in the remaining states, by defining
where each takes the value 0 or 1 for fermionic particles and 0, 1, 2, ... for bosonic particles. Note that trailing zeroes may be dropped without changing the state. Such a state is called a Fock state. When the are understood as the steady states of a free field, the Fock states describe an assembly of non-interacting particles in definite numbers. The most general Fock state is a linear superposition of pure states.
Two operators of great importance are the creation and annihilation operators, which upon acting on a Fock state add or respectively remove a particle in the ascribed quantum state. They are denoted for creation and for annihilation respectively. To create ("add") a particle, the quantum state is symmetric or exterior- multiplied with ; and respectively to annihilate ("remove") a particle, an (even or odd) interior product is taken with , which is the adjoint of . It is often convenient to work with states of the basis of so that these operators remove and add exactly one particle in the given basis state. These operators also serve as generators for more general operators acting on the Fock space, for instance the number operator giving the number of particles in a specific state is .
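For a single bosonic mode, the ladder operators can be written as finite matrices on a truncated Fock space, which makes the statements above easy to check numerically. The truncation level and names below are illustrative assumptions, and the canonical commutation relation necessarily fails in the top-most retained state because of the cutoff.

```python
import numpy as np

def ladder_ops(n_max):
    """Annihilation/creation operators for one bosonic mode, truncated at n_max quanta.
    The truncation is an approximation: [a, a_dag] = 1 fails only in the top state."""
    a = np.diag(np.sqrt(np.arange(1.0, n_max + 1)), k=1)   # a|n> = sqrt(n) |n-1>
    return a, a.conj().T

a, adag = ladder_ops(10)
num = adag @ a                                   # number operator
vac = np.zeros(11); vac[0] = 1.0                 # vacuum state |0>
two = adag @ adag @ vac / np.sqrt(2.0)           # normalized two-particle Fock state |2>
assert np.isclose(two @ num @ two, 2.0)          # it contains exactly two quanta
assert np.allclose((a @ adag - adag @ a)[:-1, :-1], np.eye(10))   # [a, a_dag] = 1 below the cutoff
```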
Wave function interpretation
Often the one particle space is given as , the space of square-integrable functions on a space with measure (strictly speaking, the equivalence classes of square integrable functions where functions are equivalent if they differ on a set of measure zero). The typical example is the free particle with the space of square integrable functions on three-dimensional space. The Fock spaces then have a natural interpretation as symmetric or anti-symmetric square integrable functions as follows.
Let and , , , etc.
Consider the space of tuples of points which is the disjoint union
It has a natural measure such that and the restriction of to is .
The even Fock space can then be identified with the space of symmetric functions in whereas the odd Fock space can be identified with the space of anti-symmetric functions. The identification follows directly from the isometric mapping
.
Given wave functions , the Slater determinant
is an antisymmetric function on . It can thus be naturally interpreted as an element of the -particle sector of the odd Fock space. The normalization is chosen such that the norm is 1 if the functions are orthonormal. There is a similar "Slater permanent" with the determinant replaced by the permanent, which gives elements of the corresponding sector of the even Fock space.
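A minimal sketch of the Slater determinant as a function of particle positions is given below, using two particle-in-a-box orbitals as an assumed example; it also illustrates the Pauli exclusion property, since the determinant vanishes when two coordinates coincide. The function and orbital names are illustrative.

```python
import numpy as np
from math import factorial

def slater(orbitals, xs):
    """Antisymmetrized n-particle wave function built from single-particle orbitals phi_j,
    evaluated at positions xs, with the 1/sqrt(n!) normalization."""
    n = len(orbitals)
    M = np.array([[phi(x) for phi in orbitals] for x in xs])   # M[i, j] = phi_j(x_i)
    return np.linalg.det(M) / np.sqrt(factorial(n))

# two orthonormal particle-in-a-box orbitals on [0, 1] (an assumed example)
phi1 = lambda x: np.sqrt(2.0) * np.sin(np.pi * x)
phi2 = lambda x: np.sqrt(2.0) * np.sin(2.0 * np.pi * x)

psi = slater([phi1, phi2], [0.2, 0.7])
assert np.isclose(slater([phi1, phi2], [0.2, 0.2]), 0.0)   # vanishes when two positions coincide
```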
Relation to the Segal–Bargmann space
Define the Segal–Bargmann space of complex holomorphic functions square-integrable with respect to a Gaussian measure:
where
Then defining a space as the nested union of the spaces over the integers , Segal and Bargmann showed that is isomorphic to a bosonic Fock space. The monomial
corresponds to the Fock state
See also
Fock state
Tensor algebra
Holomorphic Fock space
Creation and annihilation operators
Slater determinant
Wick's theorem
Noncommutative geometry
Grand canonical ensemble, thermal distribution over Fock space
Schrödinger functional
References
External links
Feynman diagrams and Wick products associated with q-Fock space - noncommutative analysis, Edward G. Effros and Mihai Popa, Department of Mathematics, UCLA
R. Geroch, Mathematical Physics, Chicago University Press, Chapter 21.
Quantum mechanics
Quantum field theory | Fock space | [
"Physics"
] | 1,713 | [
"Quantum field theory",
"Theoretical physics",
"Quantum mechanics"
] |
230,527 | https://en.wikipedia.org/wiki/Seismic%20hazard | A seismic hazard is the probability that an earthquake will occur in a given geographic area, within a given window of time, and with ground motion intensity exceeding a given threshold. With a hazard thus estimated, risk can be assessed and included in such areas as building codes for standard buildings, designing larger buildings and infrastructure projects, land use planning and determining insurance rates. The seismic hazard studies also may generate two standard measures of anticipated ground motion, both confusingly abbreviated MCE; the simpler probabilistic Maximum Considered Earthquake (or Event ), used in standard building codes, and the more detailed and deterministic Maximum Credible Earthquake incorporated in the design of larger buildings and civil infrastructure like dams or bridges. It is important to clarify which MCE is being discussed.
Calculations for determining seismic hazard were first formulated by C. Allin Cornell in 1968 and, depending on their level of importance and use, can be quite complex.
The regional geology and seismology setting is first examined for sources and patterns of earthquake occurrence, both in depth and at the surface from seismometer records; secondly, the impacts from these sources are assessed relative to local geologic rock and soil types, slope angle and groundwater conditions. Zones of similar potential earthquake shaking are thus determined and drawn on maps. The well-known San Andreas Fault is illustrated as a long narrow elliptical zone of greater potential motion, like many areas along continental margins associated with the Pacific Ring of Fire. Zones of higher seismicity in the continental interior may be the site for intraplate earthquakes and tend to be drawn as broad areas, based on historic records, like the 1812 New Madrid earthquake, since specific causative faults are generally not identified as earthquake sources.
Each zone is given properties associated with source potential: how many earthquakes per year, the maximum size of earthquakes (maximum magnitude), etc. Finally, the calculations require formulae that give the required hazard indicators for a given earthquake size and distance. For example, some districts prefer to use peak acceleration, others use peak velocity, and more sophisticated uses require response spectral ordinates.
The computer program then integrates over all the zones and produces probability curves for the key ground motion parameter. The final result gives a 'chance' of exceeding a given value over a specified amount of time. Standard building codes for homeowners might be concerned with a 1 in 500 years chance, while nuclear plants look at the 10,000 year time frame. A longer-term seismic history can be obtained through paleoseismology. The results may be in the form of a ground response spectrum for use in seismic analysis.
More elaborate variations on the theme also look at the soil conditions. Higher ground motions are likely to be experienced on a soft swamp compared to a hard rock site. The standard seismic hazard calculations become adjusted upwards when postulating characteristic earthquakes. Areas with high ground motion due to soil conditions are also often subject to soil failure due to liquefaction. Soil failure can also occur due to earthquake-induced landslides in steep terrain. Large area landsliding can also occur on rather gentle slopes as was seen in the Good Friday earthquake in Anchorage, Alaska, March 28, 1964.
MCEs
In a normal seismic hazard analyses intended for the public, that of a "maximum considered earthquake", or "maximum considered event" (MCE) for a specific area, is an earthquake that is expected to occur once in approximately 2,500 years; that is, it has a 2-percent probability of being exceeded in 50 years. The term is used specifically for general building codes, which people commonly occupy; building codes in many localities will require non-essential buildings to be designed for "collapse prevention" in an MCE, so that the building remains standing – allowing for safety and escape of occupants – rather than full structural survival of the building.
A far more detailed and stringent MCE stands for "maximum credible earthquake", which is used in designing for skyscrapers and larger civil infrastructure, like dams, where structural failure could lead to other catastrophic consequences. These MCEs might require determining more than one specific earthquake event, depending on the variety of structures included.
US seismic hazard maps
Some maps released by the USGS are shown with peak ground acceleration with a 10% probability of exceedance in 50 years, measured in metres per second squared. For parts of the US, the National Seismic Hazard Mapping Project in 2008 resulted in seismic hazard maps showing peak acceleration (as a percentage of gravity) with a 2% probability of exceedance in 50 years.
Temblor, a company founded in 2014, offers a seismic hazard rank for all of the conterminous US. This service is free and ad-free for the public. The hazard rank "is made for the likelihood of experiencing strong shaking (0.4g peak ground acceleration) in 30 years, based on the 2014 USGS NSHMP hazard model."
Global seismic hazard maps
Global seismic hazard maps exist too, which similarly present the level of certain ground motions that have a 10% probability of exceedance (or a 90% chance of non-exceedance) during a 50-year time span (that corresponds to a return period of 475 years).
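The correspondence between a probability of exceedance over a time window and a return period can be made explicit under a Poisson (memoryless) occurrence model, which is a common but simplifying assumption. The sketch below reproduces the figures quoted above: 10% in 50 years corresponds to roughly a 475-year return period, and 2% in 50 years to roughly 2,475 years.

```python
import math

def return_period(p_exceed, window_years):
    """Return period implied by an exceedance probability over a time window,
    under a Poisson (memoryless) occurrence model."""
    return -window_years / math.log(1.0 - p_exceed)

def exceedance_probability(return_period_years, window_years):
    return 1.0 - math.exp(-window_years / return_period_years)

print(return_period(0.10, 50))   # about 475 years (global hazard maps)
print(return_period(0.02, 50))   # about 2475 years (probabilistic MCE in building codes)
```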
See also
C. Allin Cornell
Earthquake engineering
Mitigation of seismic motion
Neotectonics
Seismic loading
Seismic performance
Vibration control
References
External links
Global Seismic Hazard Assessment Program
Infrastructure Risk Research Project at The University of British Columbia, Vancouver, Canada
Diagnose the impact of global earthquakes from direct and indirect eyewitnesses contributions
Earthquake and seismic risk mitigation | Seismic hazard | [
"Engineering"
] | 1,118 | [
"Structural engineering",
"Earthquake and seismic risk mitigation"
] |
230,530 | https://en.wikipedia.org/wiki/Seismic%20risk | Seismic risk or earthquake risk is the potential impact on the built environment and on people's well-being due to future earthquakes. Seismic risk has been defined, for most management purposes, as the potential economic, social and environmental consequences of hazardous events that may occur in a specified period of time. A building located in a region of high seismic hazard is at lower risk if it is built to sound seismic engineering principles. On the other hand, a building located in a region with a history of minor seismicity, in a brick building located on fill subject to liquefaction can be as high or higher risk.
A special subset is urban seismic risk which looks at the specific issues of cities. Risk determination and emergency response can also be determined through the use of an earthquake scenario.
Determination of seismic risk
The determination of seismic risk is the foundation for risk mitigation decision-making, a key step in risk management. Large corporations and other enterprises (e.g., local governments) analyze their 'portfolio' of properties to determine how best to allocate limited funds for structural strengthening of buildings, or other risk reduction measures such as emergency planning. In calculating the risk of each facility in the 'portfolio', potential life safety and economic losses due not only to structural damage, but also to equipment, contents and business interruption are considered. Public agencies (local, state governments and federal agencies) similarly analyze their portfolios. The interconnections of infrastructures such as water, road and highway, and electric power systems are also considered. Insurance companies routinely employ estimates of seismic risk in their operations to determine appropriate insurance rates, monitor over-accumulation of policies in a small area, and purchase reinsurance. A simplified method of calculating seismic risk for a given city involves the use of a street survey. Once the level of seismic hazard is known, the damage generally follows established patterns.
Seismic risk is often determined using a seismic modeling computer programs which uses the seismic hazard inputs and combines them with the known susceptibilities of structures and facilities, such as buildings, bridges, electrical power switching stations, etc. The result gives probabilities for economic damage or casualties, for example the HAZUS computer program. While the results can be used as a general measure of seismic risk for types of buildings, the actual seismic risk for any individual building may vary considerably and will depend upon its exact configuration and condition. Acquiring and analyzing the specific data for an individual building or facility is one of the most expensive and daunting aspects of seismic risk estimation. Progress is made if one can calculate the 'fragility' or seismic capacity of the components within a structure.
In 1999, ASTM produced guidelines for reporting seismic loss estimates on commercial properties, commonly known as Probable Maximum Loss or PML reviews. These guidelines specify the scope of work, qualifications of the reviewer, and proper nomenclature for reporting loss estimates.
Reduction of seismic risk
Seismic risk can be reduced by active programs that improve emergency response, and improve basic infrastructure. The concepts of earthquake preparedness can help plan for emergencies arising from an earthquake. Building codes are intended to help to manage seismic risk and are updated as more is learned about the effects of seismic ground motion on buildings. This type of active improvement of mitigation of damage from earthquakes is known as seismic retrofit. However, the changes generally do not immediately improve seismic risk in a community since existing buildings are rarely required to be upgraded to meet the revisions.
See also
Probabilistic risk assessment
Probable Maximum Loss
Seismic hazard
Notes
External links
C. Allin Cornell
HAZUS – Seismic Risk Program for the US
An All HAZUS Web Space
HAZUS Community website
Infrastructure Risk Research Project at The University of British Columbia, Vancouver, Canada
OIKOS – Educational European project based on Google Maps Mashups
EMSC : Seismic real-time information
Diagnose the impact of global earthquakes from direct and indirect eyewitnesses contributions
Earthquake and seismic risk mitigation | Seismic risk | [
"Engineering"
] | 805 | [
"Structural engineering",
"Earthquake and seismic risk mitigation"
] |
230,641 | https://en.wikipedia.org/wiki/Copper%20interconnects | Copper interconnects are used in integrated circuits to reduce propagation delays and power consumption. Since copper is a better conductor than aluminium, ICs using copper for their interconnects can have interconnects with narrower dimensions, and use less energy to pass electricity through them. Together, these effects lead to ICs with better performance. They were first introduced by IBM, with assistance from Motorola, in 1997.
The transition from aluminium to copper required significant developments in fabrication techniques, including radically different methods for patterning the metal as well as the introduction of barrier metal layers to isolate the silicon from potentially damaging copper atoms.
Although methods of superconformal copper electrodeposition had been known since the late 1960s, their application at the (sub)micron via scale (e.g. in microchips) started only in 1988-1995 (see figure). By 2002 it had become a mature technology, and research and development efforts in this field started to decline.
Patterning
Although some form of volatile copper compound has been known to exist since 1947, with more discovered as the century progressed, none were in industrial use, so copper could not be patterned by the previous techniques of photoresist masking and plasma etching that had been used with great success with aluminium. The inability to plasma etch copper called for a drastic rethinking of the metal patterning process and the result of this rethinking was a process referred to as an additive patterning, also known as a "Damascene" or "dual-Damascene" process by analogy to a traditional technique of metal inlaying.
In this process, the underlying silicon oxide insulating layer is patterned with open trenches where the conductor should be. A thick coating of copper that significantly overfills the trenches is deposited on the insulator, and chemical-mechanical planarization (CMP) is used to remove the copper (known as overburden) that extends above the top of the insulating layer. Copper sunken within the trenches of the insulating layer is not removed and becomes the patterned conductor. Damascene processes generally form and fill a single feature with copper per Damascene stage. Dual-Damascene processes generally form and fill two features with copper at once, e.g., a trench overlying a via may both be filled with a single copper deposition using dual-Damascene.
With successive layers of insulator and copper, a multilayer interconnect structure is created. The number of layers depends on the IC's function, 10 or more metal layers are possible. Without the ability of CMP to remove the copper coating in a planar and uniform fashion, and without the ability of the CMP process to stop repeatably at the copper-insulator interface, this technology would not be realizable.
Barrier metal
A barrier metal layer must completely surround all copper interconnect, since diffusion of copper into surrounding materials would degrade their properties. For instance, silicon forms deep-level traps when doped with copper. As the name implies, a barrier metal must limit copper diffusivity sufficiently to chemically isolate the copper conductor from the silicon below, yet have high electrical conductivity in order to maintain a good electronic contact.
The thickness of the barrier film is also quite important; with too thin a layer, the copper contacts poison the very devices that they connect to; with too thick a layer, the stack of two barrier metal films and a copper conductor have a greater total resistance than aluminium interconnects, eliminating any benefit.
The improvement in conductivity in going from earlier aluminium to copper based conductors was modest, and not as good as to be expected by a simple comparison of bulk conductivities of aluminium and copper. The addition of barrier metals on all four sides of the copper conductor significantly reduces the cross-sectional area of the conductor that is composed of pure, low resistance, copper. Aluminium, while requiring a thin barrier metal to promote low ohmic resistance when making a contact directly to silicon or aluminium layers, did not require barrier metals on the sides of the metal lines to isolate aluminium from the surrounding silicon oxide insulators. Therefore scientists are looking for new ways to reduce the diffusion of copper into silicon substrates without using the buffer layer. One method is to use copper-germanium alloy as the interconnect material so that buffer layer (e.g. titanium nitride) is no longer needed. Epitaxial Cu3Ge layer has been fabricated with an average resistivity of 6 ± 1 μΩ cm and work function of ~4.47 ± 0.02 eV respectively, qualifying it as a good alternative to copper.
Electromigration
Resistance to electromigration, the process by which a metal conductor changes shape under the influence of an electric current flowing through it and which eventually leads to the breaking of the conductor, is significantly better with copper than with aluminium. This improvement in electromigration resistance allows higher currents to flow through a given size copper conductor compared to aluminium. The combination of a modest increase in conductivity along with this improvement in electromigration resistance was to prove highly attractive. The overall benefits derived from these performance improvements were ultimately enough to drive full-scale investment in copper-based technologies and fabrication methods for high performance semiconductor devices, and copper-based processes continue to be the state of the art for the semiconductor industry today.
Superconformal electrodeposition of copper
Around 2005 the processor frequency reached 3 GHz due to continuous decrease in the on-chip transistor size in the previous years. At this point, the capacitive RC coupling of interconnects became the speed(frequency)-limiting factor.
The process of reducing both R and C started in the late 1990s, when Al (aluminium) was replaced with Cu (copper) for lower R (resistance), and SiO2 was replaced with low-κ dielectrics for lower C (capacitance). Cu was selected as the replacement for Al because it has the lowest electronic resistance among low-cost materials at room temperature, and because Cu shows slower electromigration than Al. Notably, in the case of Al interconnects the patterning process involved selective Al etching (i.e. a subtractive manufacturing process) in uncoated areas, followed by deposition of a dielectric. Since no method of spatially selective etching of copper was known, etching (patterning) of the dielectric was implemented instead. For the Cu deposition (i.e. an additive manufacturing process), the IBM team in the late 1990s selected electroplating. This started the "copper revolution" in the semiconductor / microchip industry.
The copper plating starts with coating the walls of a via with a protective layer (Ta, TaN, SiN or SiC) that prevents Cu diffusion into silicon. Then, physical vapor deposition of a thin seed Cu layer on the via walls is performed.
This "seed layer" serves as the promoter for the next step of electrodeposition. Normally, due to the slower mass transport of Cu2+ ions, electroplating is slower deep inside the vias. Under such conditions, via filling results in the formation of a void inside. To avoid such defects, bottom-up (or superconformal) filling is required, as shown in Fig. A.
Liquid solutions for superconformal copper electroplating typically comprise several additives in mM concentrations: chloride ion, a suppressor (such as polyethyleneglycol), an accelerator (e.g. bis(3-sulfopropyl)disulfide) and a leveling agent (e.g. Janus Green B).
Two main models for superconformal metal electroplating have been proposed:
1) The curvature-enhanced adsorbate concentration (CEAC) model suggests that, as the curvature of the copper layer at the bottom of the via increases, the surface coverage of the adsorbed accelerator increases as well, facilitating kinetically limited Cu deposition in these areas. This model emphasizes the role of the accelerator.
2) The S-shaped negative differential resistance (S-NDR) model claims instead that the main effect comes from the suppressor, which, due to its high molecular weight and slow diffusion, does not reach the bottom of the via and preferentially adsorbs at the top of the via, where it inhibits Cu plating.
There is experimental evidence to support either model. The reconciliatory opinion is that in the early stages of the bottom-up via filling, the higher rate of Cu plating at the bottom is due to the lack of the PEG suppressor molecules there (their diffusion coefficient is too low to provide fast enough mass transport). The accelerator, which is a smaller and faster-diffusing molecule, reaches the bottom of the via, where it accelerates the rate of Cu plating without the suppressor. At the end of plating, the accelerator remains in a high concentration on the surface of the plated copper, causing the formation of a final bump.
See also
Carbon nanotubes in interconnects
References
Integrated circuits
Interconnect | Copper interconnects | [
"Technology",
"Engineering"
] | 1,889 | [
"Computer engineering",
"Integrated circuits"
] |
230,777 | https://en.wikipedia.org/wiki/Security%20protocol%20notation | In cryptography, security (engineering) protocol notation, also known as protocol narrations and Alice & Bob notation, is a way of expressing a protocol of correspondence between entities of a dynamic system, such as a computer network. In the context of a formal model, it allows reasoning about the properties of such a system.
The standard notation consists of a set of principals (traditionally named Alice, Bob, Charlie, and so on) who wish to communicate. They may have access to a server S, shared keys K, timestamps T, and can generate nonces N for authentication purposes.
A simple example might be the following:
This states that Alice intends a message for Bob consisting of a plaintext X encrypted under shared key KA,B.
Another example might be the following:
This states that Bob intends a message for Alice consisting of a nonce NB encrypted using public key of Alice.
A key with two subscripts, KA,B, is a symmetric key shared by the two corresponding individuals. A key with one subscript, KA, is the public key of the corresponding individual. A private key is represented as the inverse of the public key.
The notation specifies only the operation and not its semantics — for instance, private key encryption and signature are represented identically.
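A protocol narration can be represented directly as data. The following sketch is only an illustration of the notation, not an implementation of any real cryptography; the class names, key names, and message contents are assumptions chosen to mirror the two examples above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Enc:
    payload: tuple   # terms under encryption, e.g. a plaintext or a nonce
    key: str         # key name: "K_AB" (shared), "K_A" (public key of A), etc.

@dataclass(frozen=True)
class Msg:
    sender: str
    receiver: str
    body: tuple

# A -> B : {X}_{K_AB}   and   B -> A : {N_B}_{K_A}
protocol = [
    Msg("A", "B", (Enc(("X",), "K_AB"),)),
    Msg("B", "A", (Enc(("N_B",), "K_A"),)),
]
```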
We can express more complicated protocols in such a fashion. See Kerberos as an example. Some sources refer to this notation as Kerberos Notation. Some authors consider the notation used by Steiner, Neuman, & Schiller as a notable reference.
Several models exist to reason about security protocols in this way, one of which is BAN logic.
Security protocol notation inspired many of the programming languages used in choreographic programming.
References
Cryptography | Security protocol notation | [
"Mathematics",
"Engineering"
] | 356 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
231,204 | https://en.wikipedia.org/wiki/Channel%20capacity | Channel capacity, in electrical engineering, computer science, and information theory, is the theoretical maximum rate at which information can be reliably transmitted over a communication channel.
Following the terms of the noisy-channel coding theorem, the channel capacity of a given channel is the highest information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability.
Information theory, developed by Claude E. Shannon in 1948, defines the notion of channel capacity and provides a mathematical model by which it may be computed. The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution.
The notion of channel capacity has been central to the development of modern wireline and wireless communication systems, with the advent of novel error correction coding mechanisms that have resulted in achieving performance very close to the limits promised by channel capacity.
Formal definition
The basic mathematical model for a communication system is the following:
where:
is the message to be transmitted;
is the channel input symbol ( is a sequence of symbols) taken in an alphabet ;
is the channel output symbol ( is a sequence of symbols) taken in an alphabet ;
is the estimate of the transmitted message;
is the encoding function for a block of length ;
is the noisy channel, which is modeled by a conditional probability distribution; and,
is the decoding function for a block of length .
Let and be modeled as random variables. Furthermore, let be the conditional probability distribution function of given , which is an inherent fixed property of the communication channel. Then the choice of the marginal distribution completely determines the joint distribution due to the identity
which, in turn, induces a mutual information . The channel capacity is defined as
where the supremum is taken over all possible choices of .
Additivity of channel capacity
Channel capacity is additive over independent channels. It means that using two independent channels in a combined manner provides the same theoretical capacity as using them independently.
More formally, let and be two independent channels modelled as above; having an input alphabet and an output alphabet . Idem for .
We define the product channel as
This theorem states:
Shannon capacity of a graph
If G is an undirected graph, it can be used to define a communications channel in which the symbols are the graph vertices, and two codewords may be confused with each other if their symbols in each position are equal or adjacent. The computational complexity of finding the Shannon capacity of such a channel remains open, but it can be upper bounded by another important graph invariant, the Lovász number.
Noisy-channel coding theorem
The noisy-channel coding theorem states that for any error probability ε > 0 and for any transmission rate R less than the channel capacity C, there is an encoding and decoding scheme transmitting data at rate R whose error probability is less than ε, for a sufficiently large block length. Also, for any rate greater than the channel capacity, the probability of error at the receiver goes to 0.5 as the block length goes to infinity.
Example application
An application of the channel capacity concept to an additive white Gaussian noise (AWGN) channel with B Hz bandwidth and signal-to-noise ratio S/N is the Shannon–Hartley theorem:
C is measured in bits per second if the logarithm is taken in base 2, or nats per second if the natural logarithm is used, assuming B is in hertz; the signal and noise powers S and N are expressed in a linear power unit (like watts or volts squared). Since S/N figures are often cited in dB, a conversion may be needed. For example, a signal-to-noise ratio of 30 dB corresponds to a linear power ratio of 10^(30/10) = 1000.
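A short numerical sketch of the Shannon–Hartley formula follows, including the dB-to-linear conversion mentioned above; the bandwidth and SNR values are illustrative assumptions.

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity of an AWGN channel, in bits per second."""
    snr_linear = 10 ** (snr_db / 10)          # e.g. 30 dB -> 1000
    return bandwidth_hz * math.log2(1 + snr_linear)

# a 3.1 kHz channel at 30 dB SNR (illustrative values): roughly 30.9 kbit/s
print(shannon_capacity(3100, 30))
```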
Channel capacity estimation
To determine the channel capacity, it is necessary to find the capacity-achieving distribution and evaluate the mutual information . Research has mostly focused on studying additive noise channels under certain power constraints and noise distributions, as analytical methods are not feasible in the majority of other scenarios. Hence, alternative approaches, such as investigation of the input support, relaxations, and capacity bounds, have been proposed in the literature.
The capacity of a discrete memoryless channel can be computed using the Blahut-Arimoto algorithm.
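A minimal sketch of the Blahut–Arimoto iteration for a discrete memoryless channel is shown below; the stopping criterion and the binary symmetric channel example are illustrative choices, and the function is a plain, unoptimized implementation rather than a reference one.

```python
import numpy as np

def blahut_arimoto(P, tol=1e-9, max_iter=10_000):
    """P[x, y] = p(y|x). Returns (capacity in bits per channel use, optimal input distribution)."""
    n_x, _ = P.shape
    r = np.full(n_x, 1.0 / n_x)                 # input distribution, start uniform
    for _ in range(max_iter):
        q = r[:, None] * P
        q /= q.sum(axis=0, keepdims=True)       # posterior q(x|y)
        with np.errstate(divide='ignore', invalid='ignore'):
            logq = np.where(P > 0, np.log(q), 0.0)
        r_new = np.exp((P * logq).sum(axis=1))  # r(x) proportional to exp(sum_y p(y|x) log q(x|y))
        r_new /= r_new.sum()
        done = np.max(np.abs(r_new - r)) < tol
        r = r_new
        if done:
            break
    py = (r[:, None] * P).sum(axis=0)           # output distribution
    with np.errstate(divide='ignore', invalid='ignore'):
        ratio = np.where(P > 0, P / py, 1.0)
    return float((r[:, None] * P * np.log2(ratio)).sum()), r

# binary symmetric channel with crossover 0.1: capacity is 1 - H(0.1), about 0.531 bits
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
C, r_opt = blahut_arimoto(P)
```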
Deep learning can be used to estimate the channel capacity. In fact, the channel capacity and the capacity-achieving distribution of any discrete-time continuous memoryless vector channel can be obtained using CORTICAL, a cooperative framework inspired by generative adversarial networks. CORTICAL consists of two cooperative networks: a generator with the objective of learning to sample from the capacity-achieving input distribution, and a discriminator with the objective to learn to distinguish between paired and unpaired channel input-output samples and estimates .
Channel capacity in wireless communications
This section focuses on the single-antenna, point-to-point scenario. For channel capacity in systems with multiple antennas, see the article on MIMO.
Bandlimited AWGN channel
If the average received power is [W], the total bandwidth is in Hertz, and the noise power spectral density is [W/Hz], the AWGN channel capacity is
[bits/s],
where is the received signal-to-noise ratio (SNR). This result is known as the Shannon–Hartley theorem.
When the SNR is large (SNR ≫ 0 dB), the capacity is logarithmic in power and approximately linear in bandwidth. This is called the bandwidth-limited regime.
When the SNR is small (SNR ≪ 0 dB), the capacity is linear in power but insensitive to bandwidth. This is called the power-limited regime.
The bandwidth-limited regime and power-limited regime are illustrated in the figure.
Frequency-selective AWGN channel
The capacity of the frequency-selective channel is given by so-called water filling power allocation,
where and is the gain of subchannel , with chosen to meet the power constraint.
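A minimal sketch of water-filling power allocation: given effective noise levels per subchannel and a total power budget, a water level is found by bisection and the resulting powers and capacity are computed. The variable names and the three-subchannel example are assumptions.

```python
import numpy as np

def water_filling(noise_levels, total_power):
    """noise_levels[i]: effective noise-to-gain ratio N_i/|h_i|^2 of subchannel i.
    Returns the per-subchannel powers and the resulting capacity in bits per channel use."""
    lo, hi = 0.0, np.max(noise_levels) + total_power
    for _ in range(100):                       # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - noise_levels, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    p = np.maximum(lo - noise_levels, 0.0)     # power poured above each noise level
    capacity = np.sum(np.log2(1.0 + p / noise_levels))
    return p, capacity

# three subchannels of decreasing quality sharing one unit of total power
powers, C = water_filling(np.array([0.1, 0.5, 1.0]), total_power=1.0)
```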
Slow-fading channel
In a slow-fading channel, where the coherence time is greater than the latency requirement, there is no definite capacity as the maximum rate of reliable communications supported by the channel, , depends on the random channel gain , which is unknown to the transmitter. If the transmitter encodes data at rate [bits/s/Hz], there is a non-zero probability that the decoding error probability cannot be made arbitrarily small,
,
in which case the system is said to be in outage. With a non-zero probability that the channel is in deep fade, the capacity of the slow-fading channel in strict sense is zero. However, it is possible to determine the largest value of such that the outage probability is less than . This value is known as the -outage capacity.
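The outage probability and the ε-outage capacity can be estimated by Monte Carlo simulation. The sketch below assumes Rayleigh fading (so the channel power gain is exponentially distributed) and illustrative values for the average SNR, the attempted rate, and the outage target ε.

```python
import numpy as np

rng = np.random.default_rng(0)
snr = 10 ** (10.0 / 10)                            # 10 dB average SNR
rate = 2.0                                         # attempted rate [bits/s/Hz]
h2 = rng.exponential(scale=1.0, size=1_000_000)    # |h|^2 samples for Rayleigh fading

p_out = np.mean(np.log2(1 + h2 * snr) < rate)      # outage probability at this rate

eps = 0.01                                         # allowed outage probability
c_eps = np.log2(1 + np.quantile(h2, eps) * snr)    # epsilon-outage capacity estimate
```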
Fast-fading channel
In a fast-fading channel, where the latency requirement is greater than the coherence time and the codeword length spans many coherence periods, one can average over many independent channel fades by coding over a large number of coherence time intervals. Thus, it is possible to achieve a reliable rate of communication of [bits/s/Hz] and it is meaningful to speak of this value as the capacity of the fast-fading channel.
Feedback Capacity
Feedback capacity is the greatest rate at which information can be reliably transmitted, per unit time, over a point-to-point communication channel in which the receiver feeds back the channel outputs to the transmitter. Information-theoretic analysis of communication systems that incorporate feedback is more complicated and challenging than without feedback. Possibly, this was the reason C.E. Shannon chose feedback as the subject of the first Shannon Lecture, delivered at the 1973 IEEE International Symposium on Information Theory in Ashkelon, Israel.
The feedback capacity is characterized by the maximum of the directed information between the channel inputs and the channel outputs, where the maximization is with respect to the causal conditioning of the input given the output. The directed information was coined by James Massey in 1990, who showed that it is an upper bound on feedback capacity. For memoryless channels, Shannon showed that feedback does not increase the capacity, and the feedback capacity coincides with the channel capacity characterized by the mutual information between the input and the output. The feedback capacity is known as a closed-form expression only for a few examples, such as the trapdoor channel and the Ising channel. For some other channels, it is characterized through constant-size optimization problems, such as the binary erasure channel with a no-consecutive-ones input constraint and the NOST channel.
The basic mathematical model for a communication system is the following:
Here is the formal definition of each element (where the only difference with respect to the nonfeedback capacity is the encoder definition):
is the message to be transmitted, taken in an alphabet ;
is the channel input symbol ( is a sequence of symbols) taken in an alphabet ;
is the channel output symbol ( is a sequence of symbols) taken in an alphabet ;
is the estimate of the transmitted message;
is the encoding function at time , for a block of length ;
is the noisy channel at time , which is modeled by a conditional probability distribution; and,
is the decoding function for a block of length .
That is, for each time there exists a feedback of the previous output such that the encoder has access to all previous outputs . An code is a pair of encoding and decoding mappings with , and is uniformly distributed. A rate is said to be achievable if there exists a sequence of codes such that the average probability of error: tends to zero as .
The feedback capacity is denoted by , and is defined as the supremum over all achievable rates.
Main results on feedback capacity
Let and be modeled as random variables. The causal conditioning describes the given channel. The choice of the causally conditional distribution determines the joint distribution due to the chain rule for causal conditioning which, in turn, induces a directed information .
The feedback capacity is given by
,
where the supremum is taken over all possible choices of .
Gaussian feedback capacity
When the Gaussian noise is colored, the channel has memory. Consider, for instance, the simple case of an autoregressive noise model, in which the noise process is driven by an i.i.d. process.
Solution techniques
The feedback capacity is difficult to solve in the general case. There are some techniques that are related to control theory and Markov decision processes if the channel is discrete.
See also
Bandwidth (computing)
Bandwidth (signal processing)
Bit rate
Code rate
Error exponent
Nyquist rate
Negentropy
Redundancy
Sender, Data compression, Receiver
Shannon–Hartley theorem
Spectral efficiency
Throughput
Shannon capacity of a graph
Advanced Communication Topics
MIMO
Cooperative diversity
External links
AWGN Channel Capacity with various constraints on the channel input (interactive demonstration)
References
Information theory
Telecommunication theory
Television terminology | Channel capacity | [
"Mathematics",
"Technology",
"Engineering"
] | 2,223 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
231,288 | https://en.wikipedia.org/wiki/Peristaltic%20pump | A peristaltic pump, also commonly known as a roller pump, is a type of positive displacement pump used for pumping a variety of fluids. The fluid is contained in a flexible tube fitted inside a circular pump casing. Most peristaltic pumps work through rotary motion, though linear peristaltic pumps have also been made. The rotor has a number of "wipers" or "rollers" attached to its external circumference, which compress the flexible tube as they rotate by. The part of the tube under compression is closed, forcing the fluid to move through the tube. Additionally, as the tube opens to its natural state after the rollers pass, more fluid is drawn into the tube. This process is called peristalsis and is used in many biological systems such as the gastrointestinal tract. Typically, there will be two or more rollers compressing the tube, trapping a body of fluid between them. The body of fluid is transported through the tube, toward the pump outlet. Peristaltic pumps may run continuously, or they may be indexed through partial revolutions to deliver smaller amounts of fluid.
History
A form of peristaltic pump was described in The Mechanics Magazine in 1845. The pump used a leather hose which did not need to self-open when released by the rollers, instead relying on the incoming water having sufficient pressure to fill the open inlet end on each cycle. The peristaltic pump was first patented in the United States by Rufus Porter and J. D. Bradley in 1855 (U.S. Patent number 12753) as a well pump, and later by Eugene Allen in 1881 (U.S. Patent number 249285) for blood transfusions. It was developed by heart surgeon Dr. Michael DeBakey for blood transfusions while he was a medical student in 1932 and later used by him for cardiopulmonary bypass systems. A specialized nonocclusive roller pump (US Patent 5222880) using soft flat tubing was developed in 1992 for cardiopulmonary bypass systems.
Applications
Peristaltic pumps are typically used to pump clean/sterile or highly reactive fluids without exposing those fluids to contamination from exposed pump components. Some common applications include pumping IV fluids through an infusion device, apheresis, highly reactive chemicals, high-solids slurries, and other materials where isolation of the product from the environment are critical. They are also used in heart–lung machines to circulate blood during bypass surgery, and in hemodialysis systems, since the pump does not cause significant hemolysis, or rupture of the blood cells.
Key design parameters
The ideal peristaltic pump should have an infinite diameter of the pump head and the largest possible diameter of the rollers. Such an ideal peristaltic pump would offer the longest possible tubing lifetime and provide a constant and pulsation-free flow rate.
Such an ideal peristaltic pump cannot be constructed in reality. However, peristaltic pumps can be designed to approach these ideal peristaltic pump parameters.
Careful design can offer constant accurate flow rates for several weeks together with a long tubing lifetime without the risk of tubing rupture.
Chemical compatibility
The pumped fluid contacts only the inside surface of the tubing. This eliminates fluid compatibility concerns with other pump components such as valves, O-rings, and seals, which must be considered for other pump designs. Therefore, only the composition of the tubing that the pumped medium travels through is considered for chemical compatibility.
The tubing needs to be elastomeric to maintain the circular cross-section after millions of cycles of squeezing in the pump. This requirement eliminates a variety of non-elastomeric polymers that have compatibility with a wide range of chemicals, such as PTFE, polyolefins, PVDF, etc. from consideration as material for pump tubing. The popular elastomers for pump tubing are nitrile (NBR), Hypalon, Viton, silicone, PVC, EPDM, EPDM+polypropylene (as in Santoprene), polyurethane and natural rubber. Of these materials, natural rubber has the best fatigue resistance, and EPDM and Hypalon have the best chemical compatibility. Silicone is popular with water-based fluids, such as in bio-pharma industry, but has a limited range of chemical compatibility in other industries.
Extruded fluoropolymer tubes such as FKM (Viton, Fluorel, etc.) have good compatibility with acids, hydrocarbons, and petroleum fuels, but have insufficient fatigue resistance to achieve an effective tube life.
There are a couple of newer tubing developments that offer broad chemical compatibility using lined tubing and fluoroelastomers.
With lined tubing, a thin inside liner made of a chemically resistant material such as a polyolefin or PTFE forms a barrier that keeps the rest of the tubing wall from coming into contact with the pumped fluid. These liner materials are often not elastomeric, so the entire tube wall cannot be made from them for peristaltic pump applications. This tubing provides adequate chemical compatibility and life for use in chemically challenging applications. There are a few things to keep in mind when using these tubes: any pinholes in the liner introduced during manufacturing could render the tubing vulnerable to chemical attack. In the case of stiff plastic liners like the polyolefins, repeated flexing in the peristaltic pump can develop cracks, again rendering the bulk material vulnerable to chemical attack. A common issue with all lined tubing is delamination of the liner with repeated flexing, which signals the end of the tube's life. For applications that need chemically compatible tubing, lined tubing offers a good solution.
With fluoroelastomer tubing, the elastomer itself has the chemical resistance. Chem-Sure, for example, is made of a perfluoroelastomer, which has the broadest chemical compatibility of all elastomers. Such fluoroelastomer tubes combine chemical compatibility with a very long tube life stemming from their reinforcement technology, but come at a high initial cost. One has to justify the cost against the total value derived over the long tube life and compare it with other options such as other tubing or even other pump technologies.
There are many online sites for checking the chemical compatibility of the tubing material with the pumped fluid. The tubing manufacturers may also have compatibility charts specific to their tubing production method, coating, material, and the fluid being pumped.
While these charts cover a list of commonly encountered fluids, they may not include all fluids. If a fluid's compatibility is not listed anywhere, then a common test of compatibility is immersion testing. A 1 to 2 inch sample of the tubing is immersed in the fluid to be pumped for 24 to 48 hours, and the weight change from before to after the immersion is measured. If the weight change is greater than 10% of the initial weight, then the tube is not compatible with the fluid and should not be used in that application. This is only a one-way screen: there remains a chance that tubing which passes the test is still incompatible in the application, since the combination of borderline compatibility and mechanical flexing can push the tube over the edge, resulting in premature tube failure.
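As a rough illustration of the 10% rule of thumb above, the weight-change screen can be expressed as a short calculation; the function name and the sample weights below are hypothetical, not taken from any manufacturer's procedure.

```python
def immersion_compatible(weight_before_g, weight_after_g, max_change_fraction=0.10):
    """Screen tubing compatibility by weight change after a 24-48 hour immersion.

    Returns False if the sample gained or lost more than the allowed fraction
    of its initial weight (10% per the rule of thumb above). Passing this
    screen does not guarantee compatibility in service.
    """
    change = abs(weight_after_g - weight_before_g) / weight_before_g
    return change <= max_change_fraction

# Hypothetical sample: 2.40 g before immersion, 2.55 g after (6.25% change)
print(immersion_compatible(2.40, 2.55))  # True
```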
In general, recent tubing developments have brought broad chemical compatibility to the peristaltic pump option that many chemical dosing applications can benefit over other current pump technologies.
Occlusion
The minimum gap between the roller and the housing determines the maximum squeeze applied to the tubing. The amount of squeeze affects pumping performance and tube life: more squeezing decreases the tubing life dramatically, while less squeezing can allow the pumped medium to slip back, especially in high-pressure pumping, which dramatically decreases the efficiency of the pump; the high velocity of the slip-back also typically causes premature failure of the hose. This amount of squeeze is therefore an important design parameter.
The term "occlusion" is used to measure the amount of squeeze. It is either expressed as a percentage of twice the wall thickness, or as an absolute amount of the wall that is squeezed.
Let
g = minimum gap between the roller and the housing,
t = wall thickness of the tubing.
Then
y = 2t − g, when expressed as the absolute amount of squeeze,
y = 100% × (2t − g) / (2t), when expressed as a percentage of twice the wall thickness.
The occlusion is typically 10% to 20%, with a higher occlusion for a softer tube material, and a lower occlusion for a harder tube material.
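To make the definitions above concrete, a minimal calculation with hypothetical dimensions (a 1.6 mm wall and a 2.7 mm roller-to-housing gap, chosen only for illustration) looks like this:

```python
wall_thickness = 1.6   # t, tubing wall thickness in mm (hypothetical)
gap = 2.7              # g, minimum roller-to-housing gap in mm (hypothetical)

absolute_squeeze = 2 * wall_thickness - gap                   # y = 2t - g
percent_occlusion = 100 * absolute_squeeze / (2 * wall_thickness)

print(f"absolute squeeze:  {absolute_squeeze:.2f} mm")        # 0.50 mm
print(f"percent occlusion: {percent_occlusion:.1f} %")        # 15.6 %, inside the typical 10-20% range
```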
Thus for a given pump, the most critical tubing dimension becomes the wall thickness. An interesting point here is that the inside diameter (ID) of the tubing is not an important design parameter for the suitability of the tubing for the pump. Therefore, it is common for more than one ID to be used with a pump, as long as the wall thickness remains the same.
Inside diameter
For a given rotational speed of the pump, a tube with a larger inside diameter (ID) will give a higher flow rate than one with a smaller inside diameter. The flow rate is a function of the cross-section area of the tube bore.
Flow rate
The flow rate is an important parameter for a pump. The flow rate in a peristaltic pump is determined by many factors, such as:
Tube inner diameter: higher flow rate with larger inner diameter.
Pump-head outer diameter: higher flow rate with larger outer diameter.
Pump-head rotational speed: higher flow rate with higher speed.
Inlet pulsation: the pulse reduces the filling volume of the hose.
Increasing the number of rollers does not increase the flow rate; instead, it will decrease the flow rate somewhat by reducing the effective (i.e. fluid-pumping) circumference of the head. Adding rollers does tend to decrease the amplitude of the fluid pulsing at the outlet by increasing the frequency of the pulsed flow.
The length of the tube (measured from the initial pinch point near the inlet to the final release point near the outlet) does not affect the flow rate. However, a longer tube implies more pinch points between inlet and outlet, increasing the pressure that the pump can generate.
The flow rate of a peristaltic pump is in most cases not linear. The effect of pulsation at the inlet of the pump changes the filling degree of the peristaltic hose. With high inlet pulsation, the peristaltic hose may become oval-shaped, resulting in less flow.
Accurate metering with a peristaltic pump is therefore only possible when the pump has a constant flow rate, or when inlet pulsation is eliminated with the use of correctly designed pulsation dampeners.
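A rough, back-of-the-envelope model of the geometric factors listed above treats the delivered volume per revolution as the tube bore area times the length of tube swept by the rollers in one revolution, derated for slip. The formula, the derating factor, and the numbers below are illustrative assumptions, not manufacturer data, and they ignore inlet pulsation effects.

```python
import math

def flow_rate_ml_per_min(tube_id_mm, swept_length_mm, rpm, efficiency=0.9):
    """Rough peristaltic flow estimate.

    tube_id_mm      -- tubing inside diameter
    swept_length_mm -- tube length swept by the rollers per revolution
                       (set by the pump-head diameter, not the total tube length)
    rpm             -- pump-head rotational speed
    efficiency      -- assumed derating for slip-back and incomplete tube recovery
    """
    bore_area_mm2 = math.pi * (tube_id_mm / 2.0) ** 2
    ml_per_rev = bore_area_mm2 * swept_length_mm / 1000.0   # 1 mL = 1000 mm^3
    return ml_per_rev * rpm * efficiency

# Hypothetical pump: 4.8 mm ID tube, 120 mm swept length per revolution, 100 rpm
print(round(flow_rate_ml_per_min(4.8, 120.0, 100.0), 1))    # about 195 mL/min
```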
Pulsation
Pulsation is an important side effect of the peristaltic pump. The pulsation in a peristaltic pump is determined by many factors, such as:
Flow rate: higher flow rate gives more pulsation.
Line length: long pipelines give more pulsation.
Pump speed: higher rotational frequency gives more pulsation.
Specific gravity of the fluid: higher fluid density gives more pulsation.
Variations
Hose pumps
Higher-pressure peristaltic hose pumps, which can typically operate against pressures of up to 16 bar in continuous service, use shoes (rollers are used only on low-pressure types), have casings filled with lubricant to prevent abrasion of the exterior of the pump tube and to aid in the dissipation of heat, and use reinforced tubes, often called "hoses". This class of pump is often called a "hose pump".
The biggest advantage of hose pumps over roller pumps is the high operating pressure of up to 16 bar. With rollers, the maximum attainable pressure is considerably lower. If high operating pressure is not required, a tubing pump is a better option than a hose pump, provided the pumped medium is not abrasive. With recent advances in tubing technology for pressure, life, and chemical compatibility, as well as higher flow rate ranges, the advantages that hose pumps had over roller pumps continue to erode.
Tube pumps
Lower pressure peristaltic pumps typically have dry casings and use rollers along with non-reinforced, extruded tubing. This class of pump is sometimes called a "tube pump" or "tubing pump". These pumps employ rollers to squeeze the tube. Except for a 360° eccentric pump design, these pumps have a minimum of 2 rollers 180° apart and may have as many as 8, or even 12, rollers. Increasing the number of rollers increases the pressure pulse frequency of the pumped fluid at the outlet, thereby decreasing the amplitude of the pulsing. The downside to increasing the number of rollers is that it proportionately increases the number of squeezes, or occlusions, on the tubing for a given cumulative flow through that tube, thereby reducing the tubing life.
There are two kinds of roller design in peristaltic pumps:
Fixed occlusion - In this kind of pump, the rollers follow a fixed locus as they turn, keeping the occlusion constant as they squeeze the tube. This is a simple, yet effective design. The only downside is that the percent occlusion varies with variation in the tube wall thickness. Typically the wall thickness of extruded tubes varies enough that the percent occlusion changes with it (see above). A section of tube with greater wall thickness, but within the accepted tolerance, will therefore have a higher percent occlusion, which increases the wear on the tubing and decreases the tube life. Tube wall thickness tolerances today are generally kept tight enough that this issue is not of much practical concern. In mechanical terms, this can be regarded as constant-strain operation.
Spring-loaded rollers - As the name indicates, the rollers in this pump are mounted on a spring. This design is more elaborate than the fixed occlusion, but helps overcome the variations in the tube wall thickness over a broader range. Regardless of the variations, the roller imparts the same amount of stress on the tubing that is proportional to the spring constant, making this a constant stress operation. The spring is selected to overcome not only the hoop strength of the tubing, but also the pressure of the pumped fluid.
The operating pressure of these pumps is determined by the tubing and by the motor's ability to overcome the hoop strength of the tubing and the fluid pressure.
Microfluidic pumps
In microfluidics, it is often desirable to minimize the circulating volume of fluid. Traditional pumps require a large volume of liquid external to the microfluidic circuit. This can lead to problems due to dilution of analytes and already dilute biological signalling molecules.
For this reason, among others, it is desirable to integrate a micro-pumping structure into the microfluidic circuit. Wu et al. presented in 2008 a pneumatically actuated peristaltic micropump which eliminates the need for large external circulating fluid volumes.
Advantages
No contamination. Because the only part of the pump in contact with the fluid being pumped is the interior of the tube, it is easy to sterilize and clean the inside surfaces of the pump.
Low maintenance needs and easy to clean; their lack of valves, seals and glands makes them comparatively inexpensive to maintain.
They are able to handle slurries, viscous, shear-sensitive and aggressive fluids.
Pump design prevents backflow and siphoning without valves.
A fixed amount of fluid is pumped per rotation, so it can be used to roughly measure the amount of pumped fluid.
Disadvantages
The flexible tubing will tend to degrade with time and require periodic replacement.
The flow is pulsed, particularly at low rotational speeds. Therefore, these pumps are less suitable where a smooth consistent flow is required. In applications that require smooth flow, an alternative type of positive displacement pump should then be considered.
Effectiveness is limited by liquid viscosity
Decreasing potential flow rates with increasing overall lift on intake side, with a maximum theoretical lift of 33 feet
Tubing
Considerations for selecting peristaltic pump tubing include appropriate chemical resistance towards the liquid being pumped, whether the pump will be used continuously or intermittently, and cost. Types of tubing commonly used in peristaltic pumps include:
Polyvinyl chloride (PVC)
Silicone rubber
Fluoropolymer
PharMed
Thermoplastic
Fluoroelastomer
For continuous use, most of the materials perform similarly over short time frames. This suggests that overlooked low-cost materials such as PVC might meet the needs of short-term, single-use medical applications. For intermittent use, compression set is important, and silicone is an optimal material choice.
Typical applications
Medicine
Dialysis machines
Open-heart bypass pump machines
Medical infusion pumps
Testing and research
AutoAnalyzer
Analytical chemistry experiments
Carbon monoxide monitors
Media dispensers
Agriculture
'Sapsucker' pumps to extract maple tree sap
Dosers for hydroponic systems
Food manufacturing and sales
Liquid food fountains (ex. cheese sauce for nachos)
Beverage dispensing
Food-service washing machine fluid pump
Chemical handling
Printing, paint and pigments
Pharmaceutical production
Dosing systems for dishwasher and laundry chemicals
Engineering and manufacturing
Concrete pump
Pulp and paper plants
Minimum quantity lubrication
Inkjet printers
Water and waste
Chemical treatment in water purification plant
Sewage sludge
Aquariums, particularly calcium reactors
Automatic wastewater sampling for wastewater quality indicators
See also
Tube stripper
References
Pumps | Peristaltic pump | [
"Physics",
"Chemistry"
] | 3,701 | [
"Pumps",
"Hydraulics",
"Physical systems",
"Turbomachinery"
] |
19,285,667 | https://en.wikipedia.org/wiki/Auxiliary%20particle%20filter | The auxiliary particle filter is a particle filtering algorithm introduced by Pitt and Shephard in 1999 to improve some deficiencies of the sequential importance resampling (SIR) algorithm when dealing with tailed observation densities.
Motivation
Particle filters approximate a continuous random variable by particles carrying discrete probability mass, say equal mass under a uniform weighting. The randomly sampled particles can be used to approximate the probability density function of the continuous random variable if the number of particles is sufficiently large.
The empirical prediction density is produced as the weighted summation of these particles, and can be viewed as the "prior" density. Note that the particles are assumed to have the same weight.
Combining the prior density and the likelihood, the empirical filtering density can be produced.
On the other hand, the true filtering density is the quantity we want to estimate.
The prior density can be used to approximate the true filtering density through the following steps, sketched in code after the list:
The particle filters draw samples from the prior density. Each sample is drawn with equal probability.
Each sample is assigned a weight; the weights represent the likelihood function evaluated at that sample.
If the number of samples is large enough, then the weighted samples converge to the desired true filtering density.
The particles are resampled to equally weighted particles.
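The steps above can be illustrated with a minimal sequential-importance-resampling filter for a simple one-dimensional linear-Gaussian state-space model; the model (x_t = 0.9 x_{t-1} + noise, y_t = x_t + noise), the noise levels, and the observation values are illustrative assumptions rather than anything specified in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, y, sigma_x=1.0, sigma_y=1.0):
    """One SIR step: propagate from the prior, weight by the likelihood, resample."""
    n = len(particles)
    # 1. Draw from the prior (transition) density.
    proposed = 0.9 * particles + sigma_x * rng.standard_normal(n)
    # 2. Weight each sample by the likelihood p(y | x).
    weights = np.exp(-0.5 * ((y - proposed) / sigma_y) ** 2)
    weights /= weights.sum()
    # 3. Resample to equally weighted particles.
    idx = rng.choice(n, size=n, p=weights)
    return proposed[idx]

particles = rng.standard_normal(1000)          # initial particle cloud
for y in [0.5, 1.2, 0.8]:                      # synthetic observations
    particles = sir_step(particles, y)
print(particles.mean())                        # filtered state estimate
```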
The weaknesses of the particle filters include:
If the weights have a large variance, the number of samples must be large enough for them to approximate the empirical filtering density. In other words, when the weights are widely distributed, the SIR method is imprecise and adaptation is difficult.
Therefore, the auxiliary particle filter is proposed to solve this problem.
Auxiliary particle filter
Auxiliary variable
Compared with the empirical filtering density, an auxiliary variable is now introduced.
Because the empirical filtering density is formed by a summation over particles, the auxiliary variable indexes one specific particle. With its aid, a set of samples can be formed from the joint distribution of the particle index and the state. Samples are then drawn from this set instead of directly from the filtering density; in other words, the samples are drawn with different probabilities, and are ultimately used to approximate the filtering density.
Take the SIR method for example:
The particle filters draw samples from the joint distribution of the particle index and the state.
Each sample is assigned a weight.
By controlling how the index and the sample are drawn, the weights are adjusted to be more even.
Similarly, the particles are resampled to equally weighted particles.
The original particle filters draw samples from the prior density, while the auxiliary filters draw from the joint distribution of the prior density and the likelihood. In other words, the auxiliary particle filters avoid the circumstance in which particles are generated in regions with low likelihood. As a result, the samples approximate the target density more precisely.
Selection of the auxiliary variable
The selection of the auxiliary variable affects and controls the distribution of the samples. A possible selection associates each particle with a reference point, such as the mean of its transition density.
We sample with the aid of the auxiliary variable by the following procedure:
First, probabilities are assigned to the particle indexes. These probabilities are called the first-stage weights and are proportional to the likelihood evaluated at the reference points.
Then, samples are drawn using the weighted indexes; by doing so, the samples are actually drawn from the joint distribution of index and state.
Moreover, second-stage weights are reassigned as the probabilities of the samples; these weights aim to compensate for the effect of the first-stage weights.
Finally, the particles are resampled according to the second-stage weights.
Following this procedure, the samples are drawn from the joint distribution. Since each reference point is closely related to the mean of the transition density, it has a high conditional likelihood. As a result, the sampling procedure is more efficient and the number of particles required can be reduced.
Other point of view
Assume that the filtered posterior is described by a set of M weighted samples.
Then, each step in the algorithm consists of first drawing a sample of the particle index which will be propagated into the new step. These indexes are auxiliary variables only used as an intermediary step, hence the name of the algorithm. The indexes are drawn according to the likelihood of some reference point which is in some way related to the transition model (for example, the mean, a sample, etc.).
This is repeated for every particle, and using these indexes we can now draw the conditional samples.
Finally, the weights are updated to account for the mismatch between the likelihood at the actual sample and at the predicted reference point.
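A minimal auxiliary-particle-filter step for the same kind of illustrative linear-Gaussian model might look like the sketch below, with the transition mean used as each particle's reference point; the model, noise levels, and observations are assumptions for demonstration, not part of Pitt and Shephard's original formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def apf_step(particles, y, sigma_x=1.0, sigma_y=1.0):
    """One auxiliary particle filter step for x_t = 0.9 x_{t-1} + noise, y_t = x_t + noise."""
    n = len(particles)
    mu = 0.9 * particles                              # reference points (transition means)
    # First-stage weights: likelihood of the observation at the reference points.
    first = np.exp(-0.5 * ((y - mu) / sigma_y) ** 2)
    first /= first.sum()
    # Draw the auxiliary indexes, then propagate only the selected particles.
    idx = rng.choice(n, size=n, p=first)
    proposed = mu[idx] + sigma_x * rng.standard_normal(n)
    # Second-stage weights correct for the mismatch between the likelihood
    # at the propagated samples and at their reference points.
    second = (np.exp(-0.5 * ((y - proposed) / sigma_y) ** 2)
              / np.exp(-0.5 * ((y - mu[idx]) / sigma_y) ** 2))
    second /= second.sum()
    resample_idx = rng.choice(n, size=n, p=second)
    return proposed[resample_idx]

particles = rng.standard_normal(1000)                 # initial particle cloud
for y in [0.5, 1.2, 0.8]:                             # synthetic observations
    particles = apf_step(particles, y)
print(particles.mean())                               # filtered state estimate
```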
References
Sources
Monte Carlo methods
Computational statistics
Nonlinear filters | Auxiliary particle filter | [
"Physics",
"Mathematics"
] | 851 | [
"Monte Carlo methods",
"Computational statistics",
"Computational mathematics",
"Computational physics"
] |
19,287,081 | https://en.wikipedia.org/wiki/Clearing%20House%20Automated%20Transfer%20System | The Clearing House Automated Transfer System, or CHATS, is a real-time gross settlement (RTGS) system for the transfer of funds in Hong Kong. It is operated by Hong Kong Interbank Clearing Limited (HKICL), a limited-liability private company jointly owned by the Hong Kong Monetary Authority (HKMA) and the Hong Kong Association of Banks. Transactions in four currency denominations may be settled using CHATS: Hong Kong dollar, renminbi, euro, and US dollar. In 2005, the value of Hong Kong dollar CHATS transactions averaged HK$467 billion per day, which amounted to a third of Hong Kong's annual Gross Domestic Product (GDP); the total value of transactions that year was 84 times the GDP of Hong Kong. CHATS has been referred to by authors at the Bank for International Settlements as "the poster child of multicurrency offshore systems".
History
Prior to the launch of CHATS as a RTGS system, interbank settlements in Hong Kong relied on a multi-tier system which settled on a daily net basis. About 170 banks settled with ten clearing banks. These ten banks, in turn, settled with Hongkong Bank, which then settled with the HKMA on a one-to-one basis. Hongkong Bank acted as the clearing house under this system, settling payments across its books on a net basis on the day following the transactions. The HKMA decided that this did not meet international standards as set by G10's Committee on Payment and Settlement Systems; following a six-month feasibility study, in June 1994, it decided to develop CHATS as an RTGS system.
After two years of development, CHATS for Hong Kong dollars was launched on 9 December 1996. CHATS for US dollars and euros were launched on 21 August 2000 and 28 April 2003, respectively. In July 2007, the Regional CHATS Payment Services was also launched to link all participants in the three different CHATS versions for transactions involving currency exchange.
Features
CHATS, like other RTGS systems, settles payment transactions on a "real time" and "gross" basis—payments are not subjected to any waiting period and each transaction is settled in a one-to-one manner such that it is not bunched with other transactions. It is a single-tier system where participants settle with one central clearing house. Payments are final, irrevocable, and settled immediately if the participant's settlement account with the clearing house has sufficient funds. Daylight overdraft is not offered in CHATS; payments that cannot be settled due to insufficient funds are queued. Banks are able to alter, cancel, and re-sequence payments in their queues.
Banks can obtain interest-free intraday liquidity through repurchase agreements (repos). This prevents banks from having to maintain large balances in their settlement accounts, which accrue no interest, to cover their payments. Intraday repos that are not reversed at the end of the business day are carried into overnight borrowing.
To access RTGS system functions, banks must connect to the SWIFT network to initiate and receive payment instructions, and access eMBT provided by HKICL to perform administrative functions on the respective payment instructions.
Hong Kong dollar CHATS
The HKMA, which is Hong Kong's central banking institution, acts as the clearing house for Hong Kong dollar (HKD) CHATS. All licensed banks in Hong Kong maintain HKD settlement accounts with the HKMA, and as of June 2000, restricted license banks "with a clear business need" may also open settlement accounts with the HKMA. The volume of transactions in HKD CHATS in 2007, in raw number of transactions, totaled at 5,499,494. The total value of all transactions conducted in the same year was about HK$217 trillion. As of 28 December 2015, there are 156 participating banks with HKD CHATS.
US dollar CHATS
Unlike HKD CHATS, the clearing house for US dollar (USD) CHATS is a commercial bank, Hongkong and Shanghai Banking Corporation (HSBC). In addition to obtaining intraday repos for payment settlement, participating banks may also obtain intraday liquidity via an overdraft facility provided by HSBC. Banks in Hong Kong are not required to participate in USD CHATS; they may choose to join as direct participants or indirect participants. Direct participants maintain USD settlement accounts with HSBC for payment transactions in CHATS. Indirect participants must conduct their payment transactions through direct participants. Additionally, a membership category called Indirect CHATS Users exists where its banks also conduct their payment transactions through direct participants. The volume of transactions in USD CHATS in 2007, in raw number of transactions, totaled at 2,121,058. The total value of all transactions conducted in the same year was about US$2,127 billion. As of 28 December 2015, USD CHATS has 100 direct participant member banks, 24 indirect participant member banks, 87 Indirect CHATS User member banks, and eight Third Party User member banks.
Euro CHATS
Euro CHATS is structured similarly to USD CHATS. Its clearing house is also a commercial bank, Standard Chartered Hong Kong. Participating banks may obtain intraday liquidity via an overdraft facility provided by Standard Chartered Bank. Like USD CHATS, banks in Hong Kong are not required to participate in Euro CHATS. Euro CHATS has two categories of membership, Direct Participants and Indirect CHATS Users; they function in the same manner as the categories in USD CHATS. The volume of transactions in Euro CHATS in 2007, in raw number of transactions, totaled at 18,169. The total value of all transactions conducted in the same year was about €280 billion. As of 28 December 2015, Euro CHATS has 37 Direct Participant member banks and 18 Indirect CHATS User member banks.
Renminbi CHATS
CHATS uses Bank of China (Hong Kong) as its clearing house for settling renminbi payments. Bank of China has a settlement account on the China National Advanced Payment System (CNAPS), allowing renminbi CHATS to effectively work as an extension of the Chinese system.
See also
Faster Payment System, a newer, cheaper and round-the-clock RTGS system aimed at the general public in Hong Kong, which also supports fund transfers from, to or between electronic payment and digital wallet operators.
Fedwire (US)
Clearing House Interbank Payments System (US)
Clearing House Automated Payments System (UK)
TARGET Services (EU)
Indian Settlement Systems
Society for Worldwide Interbank Financial Telecommunication
References
Banking in Hong Kong
Real-time gross settlement | Clearing House Automated Transfer System | [
"Technology"
] | 1,339 | [
"Real-time gross settlement"
] |
17,062,920 | https://en.wikipedia.org/wiki/DNA%20damage%20theory%20of%20aging | The DNA damage theory of aging proposes that aging is a consequence of unrepaired accumulation of naturally occurring DNA damage. Damage in this context is a DNA alteration that has an abnormal structure. Although both mitochondrial and nuclear DNA damage can contribute to aging, nuclear DNA is the main subject of this analysis. Nuclear DNA damage can contribute to aging either indirectly (by increasing apoptosis or cellular senescence) or directly (by increasing cell dysfunction).
Several review articles have shown that deficient DNA repair, allowing greater accumulation of DNA damage, causes premature aging, and that increased DNA repair facilitates greater longevity. For example, mouse models of nucleotide-excision–repair syndromes reveal a striking correlation between the degree to which specific DNA repair pathways are compromised and the severity of accelerated aging, strongly suggesting a causal relationship. Human population studies show that single-nucleotide polymorphisms in DNA repair genes, causing up-regulation of their expression, correlate with increases in longevity. Lombard et al. compiled a lengthy list of mouse mutational models with pathologic features of premature aging, all caused by different DNA repair defects. Freitas and de Magalhães presented a comprehensive review and appraisal of the DNA damage theory of aging, including a detailed analysis of many forms of evidence linking DNA damage to aging. As an example, they described a study showing that centenarians of 100 to 107 years of age had higher levels of two DNA repair enzymes, PARP1 and Ku70, than general-population old individuals of 69 to 75 years of age. Their analysis supported the hypothesis that improved DNA repair leads to longer life span. Overall, they concluded that while the complexity of responses to DNA damage remains only partly understood, the idea that DNA damage accumulation with age is the primary cause of aging remains an intuitive and powerful one.
In humans and other mammals, DNA damage occurs frequently and DNA repair processes have evolved to compensate. In estimates made for mice, DNA lesions occur on average 25 to 115 times per minute in each cell, or about 36,000 to 160,000 per cell per day. Some DNA damage may remain in any cell despite the action of repair processes. The accumulation of unrepaired DNA damage is more prevalent in certain types of cells, particularly in non-replicating or slowly replicating cells, such as cells in the brain, skeletal and cardiac muscle.
DNA damage and mutation
To understand the DNA damage theory of aging it is important to distinguish between DNA damage and mutation, the two major types of errors that occur in DNA. Damage and mutation are fundamentally different. DNA damage is any physical abnormality in the DNA, such as single and double strand breaks, 8-hydroxydeoxyguanosine residues and polycyclic aromatic hydrocarbon adducts. DNA damage can be recognized by enzymes, and thus can be correctly repaired using the complementary undamaged strand in DNA as a template or an undamaged sequence in a homologous chromosome if it is available for copying. If a cell retains DNA damage, transcription of a gene can be prevented and thus translation into a protein will also be blocked. Replication may also be blocked and/or the cell may die. Descriptions of reduced function, characteristic of aging and associated with accumulation of DNA damage, are described in the next section.
In contrast to DNA damage, a mutation is a change in the base sequence of the DNA. A mutation cannot be recognized by enzymes once the base change is present in both DNA strands, and thus a mutation cannot be repaired. At the cellular level, mutations can cause alterations in protein function and regulation. Mutations are replicated when the cell replicates. In a population of cells, mutant cells will increase or decrease in frequency according to the effects of the mutation on the ability of the cell to survive and reproduce. Although distinctly different from each other, DNA damages and mutations are related because DNA damages often cause errors of DNA synthesis during replication or repair and these errors are a major source of mutation.
Given these properties of DNA damage and mutation, it can be seen that DNA damages are a special problem in non-dividing or slowly dividing cells, where unrepaired damages will tend to accumulate over time. On the other hand, in rapidly dividing cells, unrepaired DNA damages that do not kill the cell by blocking replication will tend to cause replication errors and thus mutation. The great majority of mutations that are not neutral in their effect are deleterious to a cell's survival. Thus, in a population of cells comprising a tissue with replicating cells, mutant cells will tend to be lost. However, infrequent mutations that provide a survival advantage will tend to clonally expand at the expense of neighboring cells in the tissue. This advantage to the cell is disadvantageous to the whole organism, because such mutant cells can give rise to cancer. Thus, DNA damages in frequently dividing cells, because they give rise to mutations, are a prominent cause of cancer. In contrast, DNA damages in infrequently dividing cells are likely a prominent cause of aging.
The first person to suggest that DNA damage, as distinct from mutation, is the primary cause of aging was Alexander in 1967. By the early 1980s there was significant experimental support for this idea in the literature. By the early 1990s experimental support for this idea was substantial, and furthermore it had become increasingly evident that oxidative DNA damage, in particular, is a major cause of aging.
In a series of articles from 1970 to 1977, P. V. Narasimh Acharya, PhD (1924–1993), theorized and presented evidence that cells undergo "irreparable DNA damage", whereby DNA crosslinks occur when both normal cellular repair processes fail and cellular apoptosis does not occur. Specifically, Acharya noted that double-strand breaks and a "cross-linkage joining both strands at the same point is irreparable because neither strand can then serve as a template for repair. The cell will die in the next mitosis or in some rare instances, mutate."
Age-associated accumulation of DNA damage and changes in gene expression
In tissues composed of non- or infrequently replicating cells, DNA damage can accumulate with age and lead either to loss of cells, or, in surviving cells, loss of gene expression. Accumulated DNA damage is usually measured directly. Numerous studies of this type have indicated that oxidative damage to DNA is particularly important. The loss of expression of specific genes can be detected at both the mRNA level and protein level.
Another form of age-associated change in gene expression is increased transcriptional variability, which was first found in a selected panel of genes in heart cells and, more recently, in the whole transcriptomes of immune cells and human pancreas cells.
Brain
The adult brain is composed in large part of terminally differentiated non-dividing neurons. Many of the conspicuous features of aging reflect a decline in neuronal function. Accumulation of DNA damage with age in the mammalian brain has been reported during the period 1971 to 2008 in at least 29 studies. This DNA damage includes the oxidized nucleoside 8-oxo-2'-deoxyguanosine (8-oxo-dG), single- and double-strand breaks, DNA-protein crosslinks and malondialdehyde adducts (reviewed in Bernstein et al.). Increasing DNA damage with age has been reported in the brains of the mouse, rat, gerbil, rabbit, dog, and human.
Rutten et al. showed that single-strand breaks accumulate in the mouse brain with age. Young 4-day-old rats have about 3,000 single-strand breaks and 156 double-strand breaks per neuron, whereas in rats older than 2 years the level of damage increases to about 7,400 single-strand breaks and 600 double-strand breaks per neuron. Sen et al. showed that DNA damages which block the polymerase chain reaction in rat brain accumulate with age. Swain and Rao observed marked increases in several types of DNA damages in aging rat brain, including single-strand breaks, double-strand breaks and modified bases (8-OHdG and uracil). Wolf et al. also showed that the oxidative DNA damage 8-OHdG accumulates in rat brain with age. Similarly, it was shown that as humans age from 48 to 97 years, 8-OHdG accumulates in the brain.
Lu et al. studied the transcriptional profiles of the human frontal cortex of individuals ranging from 26 to 106 years of age. This led to the identification of a set of genes whose expression was altered after age 40. These genes play central roles in synaptic plasticity, vesicular transport and mitochondrial function. In the brain, promoters of genes with reduced expression have markedly increased DNA damage. In cultured human neurons, these gene promoters are selectively damaged by oxidative stress. Thus Lu et al. concluded that DNA damage may reduce the expression of selectively vulnerable genes involved in learning, memory and neuronal survival, initiating a program of brain aging that starts early in adult life.
Muscle
Muscle strength, and stamina for sustained physical effort, decline in function with age in humans and other species. Skeletal muscle is a tissue composed largely of multinucleated myofibers, elements that arise from the fusion of mononucleated myoblasts. Accumulation of DNA damage with age in mammalian muscle has been reported in at least 18 studies since 1971. Hamilton et al. reported that the oxidative DNA damage 8-OHdG accumulates in heart and skeletal muscle (as well as in brain, kidney and liver) of both mouse and rat with age. In humans, increases in 8-OHdG with age were reported for skeletal muscle. Catalase is an enzyme that removes hydrogen peroxide, a reactive oxygen species, and thus limits oxidative DNA damage. In mice, when catalase expression is increased specifically in mitochondria, oxidative DNA damage (8-OHdG) in skeletal muscle is decreased and lifespan is increased by about 20%. These findings suggest that mitochondria are a significant source of the oxidative damages contributing to aging.
Protein synthesis and protein degradation decline with age in skeletal and heart muscle, as would be expected, since DNA damage blocks gene transcription. In 2005, Piec et al. found numerous changes in protein expression in rat skeletal muscle with age, including lower levels of several proteins related to myosin and actin. Force is generated in striated muscle by the interactions between myosin thick filaments and actin thin filaments.
Liver
Liver hepatocytes do not ordinarily divide and appear to be terminally differentiated, but they retain the ability to proliferate when injured. With age, the mass of the liver decreases, blood flow is reduced, metabolism is impaired, and alterations in microcirculation occur. At least 21 studies have reported an increase in DNA damage with age in liver. For instance, Helbock et al. estimated that the steady state level of oxidative DNA base alterations increased from 24,000 per cell in the liver of young rats to 66,000 per cell in the liver of old rats.
One or two months after inducing DNA double-strand breaks in the livers of young mice, the mice showed multiple symptoms of aging similar to those seen in untreated livers of normally aged control mice.
Kidney
In kidney, changes with age include reduction in both renal blood flow and glomerular filtration rate, and impairment in the ability to concentrate urine and to conserve sodium and water. DNA damages, particularly oxidative DNA damages, increase with age (at least 8 studies). For instance Hashimoto et al. showed that 8-OHdG accumulates in rat kidney DNA with age.
Long-lived stem cells
Tissue-specific stem cells produce differentiated cells through a series of increasingly more committed progenitor intermediates. In hematopoiesis (blood cell formation), the process begins with long-term hematopoietic stem cells that self-renew and also produce progeny cells that upon further replication go through a series of stages leading to differentiated cells without self-renewal capacity. In mice, deficiencies in DNA repair appear to limit the capacity of hematopoietic stem cells to proliferate and self-renew with age. Sharpless and Depinho reviewed evidence that hematopoietic stem cells, as well as stem cells in other tissues, undergo intrinsic aging. They speculated that stem cells grow old, in part, as a result of DNA damage. DNA damage may trigger signalling pathways, such as apoptosis, that contribute to depletion of stem cell stocks. This has been observed in several cases of accelerated aging and may occur in normal aging too.
A key aspect of hair loss with age is the aging of the hair follicle. Ordinarily, hair follicle renewal is maintained by the stem cells associated with each follicle. Aging of the hair follicle appears to be due to the DNA damage that accumulates in renewing stem cells during aging.
Mutation theories of aging
A related theory is that mutation, as distinct from DNA damage, is the primary cause of aging. A comparison of somatic mutation rate across several mammal species found that the total number of accumulated mutations at the end of lifespan was roughly equal across a broad range of lifespans. The authors state that this strong relationship between somatic mutation rate and lifespan across different mammalian species suggests that evolution may constrain somatic mutation rates, perhaps by selection acting on different DNA repair pathways.
As discussed above, mutations tend to arise in frequently replicating cells as a result of errors of DNA synthesis when template DNA is damaged, and can give rise to cancer. However, in mice there is no increase in mutation in the brain with aging. Mice defective in a gene (Pms2) that ordinarily corrects base mispairs in DNA have about a 100-fold elevated mutation frequency in all tissues, but do not appear to age more rapidly. On the other hand, mice defective in one particular DNA repair pathway show clear premature aging, but do not have elevated mutation.
One variation of the idea that mutation is the basis of aging, that has received much attention, is that mutations specifically in mitochondrial DNA are the cause of aging. Several studies have shown that mutations accumulate in mitochondrial DNA in infrequently replicating cells with age. DNA polymerase gamma is the enzyme that replicates mitochondrial DNA. A mouse mutant with a defect in this DNA polymerase is only able to replicate its mitochondrial DNA inaccurately, so that it sustains a 500-fold higher mutation burden than normal mice. These mice showed no clear features of rapidly accelerated aging. Overall, the observations discussed in this section indicate that mutations are not the primary cause of aging.
Dietary restriction
In rodents, caloric restriction slows aging and extends lifespan. At least 4 studies have shown that caloric restriction reduces 8-OHdG damages in various organs of rodents. One of these studies showed that caloric restriction reduced accumulation of 8-OHdG with age in rat brain, heart and skeletal muscle, and in mouse brain, heart, kidney and liver. More recently, Wolf et al. showed that dietary restriction reduced accumulation of 8-OHdG with age in rat brain, heart, skeletal muscle, and liver. Thus reduction of oxidative DNA damage is associated with a slower rate of aging and increased lifespan.
Inherited defects that cause premature aging
If DNA damage is the underlying cause of aging, it would be expected that humans with inherited defects in the ability to repair DNA damages should age at a faster pace than persons without such a defect. Numerous examples of rare inherited conditions with DNA repair defects are known. Several of these show multiple striking features of premature aging, and others have fewer such features. Perhaps the most striking premature aging conditions are Werner syndrome (mean lifespan 47 years), Hutchinson–Gilford progeria (mean lifespan 13 years), and Cockayne syndrome (mean lifespan 13 years).
Werner syndrome is due to an inherited defect in an enzyme (a helicase and exonuclease) that acts in base excision repair of DNA (e.g. see Harrigan et al.).
Hutchinson–Gilford progeria is due to a defect in the lamin A protein, which forms a scaffolding within the cell nucleus to organize chromatin and is needed for repair of double-strand breaks in DNA. A-type lamins promote genetic stability by maintaining levels of proteins that have key roles in the DNA repair processes of non-homologous end joining and homologous recombination. Mouse cells deficient for maturation of prelamin A show increased DNA damage and chromosome aberrations and are more sensitive to DNA damaging agents.
Cockayne Syndrome is due to a defect in a protein necessary for the repair process, transcription coupled nucleotide excision repair, which can remove damages, particularly oxidative DNA damages, that block transcription.
In addition to these three conditions, several other human syndromes, that also have defective DNA repair, show several features of premature aging. These include ataxia–telangiectasia, Nijmegen breakage syndrome, some subgroups of xeroderma pigmentosum, trichothiodystrophy, Fanconi anemia, Bloom syndrome and Rothmund–Thomson syndrome.
In addition to human inherited syndromes, experimental mouse models with genetic defects in DNA repair show features of premature aging and reduced lifespan.(e.g. refs.) In particular, mutant mice defective in Ku70, or Ku80, or double mutant mice deficient in both Ku70 and Ku80 exhibit early aging. The mean lifespans of the three mutant mouse strains were similar to each other, at about 37 weeks, compared to 108 weeks for the wild-type control. Six specific signs of aging were examined, and the three mutant mice were found to display the same aging signs as the control mice, but at a much earlier age. Cancer incidence was not increased in the mutant mice. Ku70 and Ku80 form the heterodimer Ku protein essential for the non-homologous end joining (NHEJ) pathway of DNA repair, active in repairing DNA double-strand breaks. This suggests an important role of NHEJ in longevity assurance.
Defects in DNA repair cause features of premature aging
Many authors have noted an association between defects in the DNA damage response and premature aging (see e.g.). If a DNA repair protein is deficient, unrepaired DNA damages tend to accumulate. Such accumulated DNA damages appear to cause features of premature aging (segmental progeria). Table 1 lists 18 DNA repair proteins which, when deficient, cause numerous features of premature aging.
Increased DNA repair and extended longevity
Table 2 lists DNA repair proteins whose increased expression is connected to extended longevity.
Lifespan in different mammalian species
DNA repair capacity
Studies comparing DNA repair capacity in different mammalian species have shown that repair capacity correlates with lifespan. The initial study of this type, by Hart and Setlow, showed that the ability of skin fibroblasts of seven mammalian species to perform DNA repair after exposure to a DNA damaging agent correlated with lifespan of the species. The species studied were shrew, mouse, rat, hamster, cow, elephant and human. This initial study stimulated many additional studies involving a wide variety of mammalian species, and the correlation between repair capacity and lifespan generally held up. In one of the more recent studies, Burkle et al. studied the level of a particular enzyme, Poly ADP ribose polymerase, which is involved in repair of single-strand breaks in DNA. They found that the lifespan of 13 mammalian species correlated with the activity of this enzyme.
The DNA repair transcriptomes of the liver of humans, naked mole-rats and mice were compared. The maximum lifespans of humans, naked mole-rat, and mouse are respectively ~120, 30 and 3 years. The longer-lived species, humans and naked mole rats expressed DNA repair genes, including core genes in several DNA repair pathways, at a higher level than did mice. In addition, several DNA repair pathways in humans and naked mole-rats were up-regulated compared with mouse. These findings suggest that increased DNA repair facilitates greater longevity.
Over the past decade, a series of papers have shown that the mitochondrial DNA (mtDNA) base composition correlates with animal species maximum life span. The mitochondrial DNA base composition is thought to reflect its nucleotide-specific (guanine, cytosine, thymidine and adenine) different mutation rates (i.e., accumulation of guanine in the mitochondrial DNA of an animal species is due to low guanine mutation rate in the mitochondria of that species).
DNA damage accumulation and repair decline
The rate of accumulation of DNA damage (double-strand breaks) in the leukocytes of dolphins, goats, reindeer, American flamingos, and griffon vultures was compared to the longevity of individuals of these different species. The species with longer lifespans were found to have slower accumulation of DNA damage, a finding consistent with the DNA damage theory of aging. In healthy humans after age 50, endogenous DNA single- and double-strand breaks increase linearly, and other forms of DNA damage also increase with age in blood mononuclear cells. Also, after age 50 DNA repair capability decreases with age.
In mice, the DNA repair process of non-homologous end-joining, which repairs DNA double-strand breaks, declines in efficiency by 1.8- to 3.8-fold, depending on the specific tissue, when 5-month-old animals are compared to 24-month-old animals. A study of fibroblast cells from humans varying in age from 16 to 75 years showed that the efficiency and fidelity of non-homologous end joining, and the efficiency of homologous recombinational DNA repair, decline with age, leading to increased sensitivity to ionizing radiation in older individuals. In middle-aged human adults, oxidative DNA damage was found to be greater among individuals who were both frail and living in poverty.
Centenarians
Lymphoblastoid cell lines established from blood samples of humans who lived past 100 years (centenarians) have significantly higher activity of the DNA repair protein Poly (ADP-ribose) polymerase (PARP) than cell lines from younger individuals (20 to 70 years old). The lymphocytic cells of centenarians have characteristics typical of cells from young people, both in their capability of priming the mechanism of repair after H2O2 sublethal oxidative DNA damage and in their PARP capacity.
Among centenarians, those with the most severe cognitive impairment have the lowest activity of the central DNA repair enzyme apurinic/apyrimidinic (AP) endonuclease 1. AP endonuclease 1 is employed in the DNA base excision repair pathway and its main role is the repair of damaged or mismatched nucleotides in DNA.
Menopause
As women age, they experience a decline in reproductive performance leading to menopause. This decline is tied to a decline in the number of ovarian follicles. Although 6 to 7 million oocytes are present at mid-gestation in the human ovary, only about 500 (about 0.05%) of these ovulate, and the rest are lost. The decline in ovarian reserve appears to occur at an increasing rate with age, and leads to nearly complete exhaustion of the reserve by about age 51. As ovarian reserve and fertility decline with age, there is also a parallel increase in pregnancy failure and meiotic errors resulting in chromosomally abnormal conceptions.
BRCA1 and BRCA2 are homologous recombination repair genes. The role of declining ATM-mediated DNA double-strand break (DSB) repair in oocyte aging was first proposed by Kutluk Oktay, MD, PhD, based on his observations that women with BRCA mutations produced fewer oocytes in response to ovarian stimulation. His laboratory has further studied this hypothesis and provided an explanation for the decline in ovarian reserve with age. They showed that as women age, double-strand breaks accumulate in the DNA of their primordial follicles. Primordial follicles are immature primary oocytes surrounded by a single layer of granulosa cells. An enzyme system is present in oocytes that normally accurately repairs DNA double-strand breaks. This repair system is referred to as homologous recombinational repair, and it is especially active during meiosis. Titus et al. from the Oktay laboratory also showed that expression of four key DNA repair genes that are necessary for homologous recombinational repair (BRCA1, MRE11, Rad51 and ATM) declines in oocytes with age. This age-related decline in ability to repair double-strand damages can account for the accumulation of these damages, which then likely contributes to the decline in ovarian reserve, as further explained by Turan and Oktay.
Women with an inherited mutation in the DNA repair gene BRCA1 undergo menopause prematurely, suggesting that naturally occurring DNA damages in oocytes are repaired less efficiently in these women, and this inefficiency leads to early reproductive failure. Genomic data from about 70,000 women were analyzed to identify protein-coding variation associated with age at natural menopause. Pathway analyses identified a major association with DNA damage response genes, particularly those expressed during meiosis and including a common coding variant in the BRCA1 gene.
Atherosclerosis
The most important risk factor for cardiovascular problems is chronological aging. Several research groups have reviewed evidence for a key role of DNA damage in vascular aging.
Atherosclerotic plaque contains vascular smooth muscle cells, macrophages and endothelial cells and these have been found to accumulate 8-oxoG, a common type of oxidative DNA damage. DNA strand breaks also increased in atherosclerotic plaques, thus linking DNA damage to plaque formation.
Werner syndrome (WS), a premature aging condition in humans, is caused by a genetic defect in a RecQ helicase that is employed in several DNA repair processes. WS patients develop a substantial burden of atherosclerotic plaques in their coronary arteries and aorta. These findings link excessive unrepaired DNA damage to premature aging and early atherosclerotic plaque development.
DNA damage and the epigenetic clock
Endogenous, naturally occurring DNA damages are frequent, and in humans include an average of about 10,000 oxidative damages per day and 50 double-strand DNA breaks per cell cycle.
Several reviews summarize evidence that the methylation enzyme DNMT1 is recruited to sites of oxidative DNA damage. Recruitment of DNMT1 leads to DNA methylation at the promoters of genes to inhibit transcription during repair. In addition, the 2018 review describes recruitment of DNMT1 during repair of DNA double-strand breaks. DNMT1 localization results in increased DNA methylation near the site of recombinational repair, associated with altered expression of the repaired gene. In general, repair-associated hyper-methylated promoters are restored to their former methylation level after DNA repair is complete. However, these reviews also indicate that transient recruitment of epigenetic modifiers can occasionally result in subsequent stable epigenetic alterations and gene silencing after DNA repair has been completed.
In human and mouse DNA, cytosine followed by guanine (CpG) is the least frequent dinucleotide, making up less than 1% of all dinucleotides (see CG suppression). At most CpG sites cytosine is methylated to form 5-methylcytosine. As indicated in the article CpG site, in mammals, 70% to 80% of CpG cytosines are methylated. However, in vertebrates there are CpG islands, about 300 to 3,000 base pairs long, with interspersed DNA sequences that deviate significantly from the average genomic pattern by being CpG-rich. These CpG islands are predominantly nonmethylated. In humans, about 70% of promoters located near the transcription start site of a gene (proximal promoters) contain a CpG island (see CpG islands in promoters). If the initially nonmethylated CpG sites in a CpG island become largely methylated, this causes stable silencing of the associated gene.
For humans, after adulthood is reached and during subsequent aging, the majority of CpG sequences slowly lose methylation (called epigenetic drift). However, the CpG islands that control promoters tend to gain methylation with age. The gain of methylation at CpG islands in promoter regions is correlated with age, and has been used to create an epigenetic clock (see article Epigenetic clock).
There may be some relationship between the epigenetic clock and epigenetic alterations accumulating after DNA repair. Both unrepaired DNA damage accumulated with age and accumulated methylation of CpG islands would silence genes in which they occur, interfere with protein expression, and contribute to the aging phenotype.
See also
References
DNA
Programmed cell death
Proximate theories of biological ageing
Senescence
Theories of biological ageing
Theories of ageing | DNA damage theory of aging | [
"Chemistry",
"Biology"
] | 6,089 | [
"Signal transduction",
"Senescence",
"Theories of biological ageing",
"Cellular processes",
"Programmed cell death",
"Metabolism"
] |
17,063,328 | https://en.wikipedia.org/wiki/Ion%20beam%20lithography | Ion-beam lithography is the practice of scanning a focused beam of ions in a patterned fashion across a surface in order to create very small structures such as integrated circuits or other nanostructures.
Details
Ion-beam lithography has been found to be useful for transferring high-fidelity patterns on three-dimensional surfaces.
Ion-beam lithography offers higher resolution patterning than UV, X-ray, or electron beam lithography because these heavier particles have more momentum. This gives the ion beam a smaller wavelength than even an e-beam and therefore almost no diffraction. The momentum also reduces scattering in the target and in any residual gas. There is also a reduced potential radiation effect to sensitive underlying structures compared to x-ray and e-beam lithography.
Ion-beam lithography, or ion-projection lithography, is similar to electron beam lithography, but uses much heavier charged particles, ions. In addition to diffraction being negligible, ions move in straighter paths than electrons do, both through vacuum and through matter, so there seems to be a potential for very high resolution. Secondary particles (electrons and atoms) have a very short range, because of the lower speed of the ions. On the other hand, intense sources are more difficult to make, and higher acceleration voltages are needed for a given range. Due to the higher energy loss rate, the higher particle energy needed for a given range, and the absence of significant space charge effects, shot noise will tend to be greater.
Fast-moving ions interact differently with matter than electrons do, and, owing to their higher momentum, their optical properties are different. They have a much shorter range in matter and move straighter through it. At low energies, at the end of the range, they lose more of their energy to the atomic nuclei rather than to the electrons, so that atoms are dislocated rather than ionized. If the ions do not diffuse out of the resist, they dope it. The energy loss in matter follows a Bragg curve and has a smaller statistical spread. Ions are "stiffer" optically: they require larger fields or distances to focus or bend. The higher momentum also resists space-charge effects.
Collider particle accelerators have shown that it is possible to focus and steer high momentum charged particles with very great precision.
See also
E-beam lithography
Maskless lithography
Nanochannel glass materials
Photolithography
References
Semiconductor device fabrication | Ion beam lithography | [
"Materials_science"
] | 504 | [
"Semiconductor device fabrication",
"Microtechnology"
] |
17,070,368 | https://en.wikipedia.org/wiki/Atomichron | The Atomichron was the world's first commercial atomic clock, built by the National Company, Inc. of Malden, Massachusetts. It was also the first self-contained portable atomic clock and was a caesium standard clock. More than 50 clocks with the trademarked Atomichron name were produced.
See also
Chip-scale atomic clock
Hoptroff London
References
External links
A Brief History of the National Company, Inc
Atomichron: The Atomic Clock from Concept to Commercial Product
Atomichron NC-1001 Manual
Atomic clocks | Atomichron | [
"Physics"
] | 111 | [
"Spacetime",
"Physical quantities",
"Time",
"Time stubs"
] |
17,071,161 | https://en.wikipedia.org/wiki/Strong%20Nash%20equilibrium | In game theory, a strong Nash equilibrium (SNE) is a combination of actions of the different players, in which no coalition of players can cooperatively deviate in a way that strictly benefits all of its members, given that the actions of the other players remain fixed. This is in contrast to simple Nash equilibrium, which considers only deviations by individual players. The concept was introduced by Israel Aumann in 1959. SNE is particularly useful in areas such as the study of voting systems, in which there are typically many more players than possible outcomes, and so plain Nash equilibria are far too abundant.
Existence
Nessah and Tian prove that an SNE exists if the following conditions are satisfied:
The strategy space of each player is compact and convex;
The payoff function of each player is concave and continuous;
The coalition consistency property: there exists a weight-vector-tuple w, assigning a weight-vector wS to each possible coalition S, such that for each strategy-profile x, there exists a strategy-profile z in which zS maximizes the weighted (by wS) social welfare to members of S, given x-S.
Note that if x is itself an SNE, then z can be taken to be equal to x. If x is not an SNE, the condition requires that one can move to a different strategy-profile which is a social-welfare-best-response for all coalitions simultaneously.
For example, consider a game with two players, with strategy spaces [1/3, 2] and [3/4, 2], which are clearly compact and convex. The utility functions are:
u1(x) = -x1^2 + x2 + 1
u2(x) = x1 - x2^2 + 1
which are continuous and concave. It remains to check coalition consistency. For every strategy-tuple x, we check the weighted-best-response of each coalition:
For the coalition {1}, we need to find, for every x2, the maximum over y1 of (-y1^2 + x2 + 1); it is clear that the maximum is attained at the smallest point of the strategy space, which is y1=1/3.
For the coalition {2}, we similarly see that for every x1, the maximum payoff is attained at the smallest point, y2=3/4.
For the coalition {1,2}, with weights w1,w2, we need to find the maximum over y1,y2 of (w1*(-y1^2 + y2 + 1) + w2*(y1 - y2^2 + 1)). Using the derivative test, we can find out that the maximum point is y1=w2/(2*w1) and y2=w1/(2*w2). By taking w1=0.6,w2=0.4 we get y1=1/3 and y2=3/4.
So, with w1=0.6,w2=0.4 the point (1/3,3/4) is a consistent social-welfare-best-response for all coalitions simultaneously. Therefore, an SNE exists, at the same point (1/3,3/4).
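This worked example can be checked numerically; the Python sketch below (an illustration, not a general SNE solver) evaluates each coalition's weighted best response on a grid and confirms that all of them point back to (1/3, 3/4) with the weights w1 = 0.6, w2 = 0.4.

import numpy as np

u1 = lambda x1, x2: -x1**2 + x2 + 1
u2 = lambda x1, x2: x1 - x2**2 + 1
xs1 = np.linspace(1/3, 2, 401)        # strategy space of player 1
xs2 = np.linspace(3/4, 2, 401)        # strategy space of player 2
x = (1/3, 3/4)
w = (0.6, 0.4)
# Coalition {1}: best response in x1 with x2 fixed at x[1]
br1 = xs1[np.argmax(u1(xs1, x[1]))]
# Coalition {2}: best response in x2 with x1 fixed at x[0]
br2 = xs2[np.argmax(u2(x[0], xs2))]
# Coalition {1,2}: maximiser of the w-weighted social welfare over the grid
G1, G2 = np.meshgrid(xs1, xs2, indexing="ij")
welfare = w[0] * u1(G1, G2) + w[1] * u2(G1, G2)
i, j = np.unravel_index(np.argmax(welfare), welfare.shape)
print(br1, br2, G1[i, j], G2[i, j])   # all approximately 1/3 and 3/4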
Here is an example in which the coalition consistency fails, and indeed there is no SNE. There are two players, with strategy space [0,1]. Their utility functions are:
u1(x) = -x1 + 2*x2;
u2(x) = 2*x1 - x2.
There is a unique Nash equilibrium at (0,0), with payoff vector (0,0). However, it is not SNE as the coalition {1,2} can deviate to (1,1), with payoff vector (1,1). Indeed, coalition consistency is violated at x=(0,0): for the coalition {1,2}, for any weight-vector wS, the social-welfare-best-response is either on the line (1,0)--(1,1) or on the line (0,1)--(1,1); but any such point is not a best-response for the player playing 1.
Nessah and Tian also present a necessary and sufficient condition for SNE existence, along with an algorithm that finds an SNE if and only if it exists.
Properties
Every SNE is a Nash equilibrium. This can be seen by considering a deviation of the n singleton coalitions.
Every SNE is weakly Pareto-efficient. This can be seen by considering a deviation of the grand coalition - the coalition of all players.
Every SNE is in the weak alpha-core and in the weak-beta core.
Criticism
The strong Nash concept is criticized as too "strong" in that the environment allows for unlimited private communication. As a result of these requirements, Strong Nash rarely exists in games interesting enough to deserve study. Nevertheless, it is possible for there to be multiple strong Nash equilibria. For instance, in Approval voting, there is always a strong Nash equilibrium for any Condorcet winner that exists, but this is only unique (apart from inconsequential changes) when there is a majority Condorcet winner.
A relatively weaker yet refined Nash stability concept is called coalition-proof Nash equilibrium (CPNE) in which the equilibria are immune to multilateral deviations that are self-enforcing. Every correlated strategy supported by iterated strict dominance and on the Pareto frontier is a CPNE. Further, it is possible for a game to have a Nash equilibrium that is resilient against coalitions less than a specified size k. CPNE is related to the theory of the core.
Confusingly, the concept of a strong Nash equilibrium is unrelated to that of a weak Nash equilibrium. That is, a Nash equilibrium can be both strong and weak, either, or neither.
References
Game theory equilibrium concepts | Strong Nash equilibrium | [
"Mathematics"
] | 1,243 | [
"Game theory",
"Game theory equilibrium concepts"
] |
17,072,220 | https://en.wikipedia.org/wiki/Affine%20Grassmannian%20%28manifold%29 | In mathematics, there are two distinct meanings of the term affine Grassmannian. In one it is the manifold of all k-dimensional affine subspaces of Rn (described on this page), while in the other the affine Grassmannian is a quotient of a group-ring based on formal Laurent series.
Formal definition
Given a finite-dimensional vector space V and a non-negative integer k, then Graffk(V) is the topological space of all affine k-dimensional subspaces of V.
It has a natural projection p:Graffk(V) → Grk(V), the Grassmannian of all linear k-dimensional subspaces of V, by defining p(U) to be the translation of U to a subspace through the origin. This projection is a fibration, and if V is given an inner product, the fibre containing U can be identified with p(U)⊥, the orthogonal complement to p(U).
The fibres are therefore vector spaces, and the projection p is a vector bundle over the Grassmannian, which defines the manifold structure on Graffk(V).
As a homogeneous space, the affine Grassmannian of an n-dimensional vector space V can be identified with
Graffk(V) ≅ E(n) / (E(k) × O(n−k)),
where E(n) is the Euclidean group of Rn and O(m) is the orthogonal group on Rm. It follows that the dimension is given by
dim Graffk(V) = (k+1)(n−k).
(This relation is easier to deduce from the identification of next section, as the difference between the number of coefficients, (n−k)(n+1) and the dimension of the linear group acting on the equations, (n−k)2.)
Relationship with ordinary Grassmannian
Let be the usual linear coordinates on Rn. Then Rn is embedded into Rn+1 as the affine hyperplane xn+1 = 1. The k-dimensional affine subspaces of Rn are in one-to-one correspondence with the (k+1)-dimensional linear subspaces of Rn+1 that are in general position with respect to the plane xn+1 = 1. Indeed, a k-dimensional affine subspace of Rn is the locus of solutions of a rank n − k system of affine equations
These determine a rank n−k system of linear equations on Rn+1
whose solution is a (k + 1)-plane that, when intersected with xn+1 = 1, is the original k-plane.
Because of this identification, Graff(k,n) is a Zariski open set in Gr(k + 1, n + 1).
References
Differential geometry
Projective geometry
Algebraic homogeneous spaces
Algebraic geometry | Affine Grassmannian (manifold) | [
"Mathematics"
] | 547 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
17,074,970 | https://en.wikipedia.org/wiki/Concurrence%20%28quantum%20computing%29 | In quantum information science, the concurrence is a state invariant involving qubits.
Definition
The concurrence is an entanglement monotone (a way of measuring entanglement) defined for a mixed state of two qubits ρ as:
C(ρ) ≡ max(0, λ1 − λ2 − λ3 − λ4),
in which λ1, ..., λ4 are the eigenvalues, in decreasing order, of the Hermitian matrix
R = √( √ρ ρ̃ √ρ ),
with
ρ̃ = (σy ⊗ σy) ρ* (σy ⊗ σy)
the spin-flipped state of ρ and σy a Pauli spin matrix. The complex conjugation ρ* is taken in the eigenbasis of the Pauli matrix σz. Also, here, for a positive semidefinite matrix A, √A denotes a positive semidefinite matrix B such that B² = A. Note that B is a unique matrix so defined.
A generalized version of concurrence for multiparticle pure states in arbitrary dimensions (including the case of continuous-variables in infinite dimensions) is defined as:
C = √( 2 (1 − Tr(ρred²)) ),
in which ρred is the reduced density matrix (or its continuous-variable analogue) across the bipartition of the pure state, and it measures how much the complex amplitudes deviate from the constraints required for tensor separability. The faithful nature of the measure admits necessary and sufficient conditions of separability for pure states.
Other formulations
Alternatively, the λi's represent the square roots of the eigenvalues of the non-Hermitian matrix ρρ̃. Note that each λi is a non-negative real number. From the concurrence, the entanglement of formation can be calculated.
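The non-Hermitian formulation lends itself to a direct numerical computation; a minimal Python/NumPy sketch is given below, with the Bell state only as an example input.

import numpy as np

sy = np.array([[0, -1j], [1j, 0]])   # Pauli-Y
YY = np.kron(sy, sy)                  # two-qubit spin-flip operator

def concurrence(rho):
    # Spin-flipped state (sigma_y x sigma_y) rho* (sigma_y x sigma_y)
    rho_tilde = YY @ rho.conj() @ YY
    # Square roots of the eigenvalues of rho @ rho_tilde, sorted in decreasing order
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# The Bell state (|00> + |11>)/sqrt(2) should give concurrence close to 1
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(concurrence(np.outer(psi, psi.conj())))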
Properties
For pure states, the square of the concurrence (also known as the tangle) is a polynomial invariant in the state's coefficients. For mixed states, the concurrence can be defined by convex roof extension.
For the tangle, there is monogamy of entanglement, that is, the tangle of a qubit with the rest of the system cannot ever exceed the sum of the tangles of qubit pairs which it is part of.
References
Theoretical computer science
Quantum information science | Concurrence (quantum computing) | [
"Mathematics"
] | 396 | [
"Theoretical computer science",
"Applied mathematics"
] |
17,075,935 | https://en.wikipedia.org/wiki/Environmental%20stress%20cracking | Environmental Stress Cracking (ESC) is one of the most common causes of unexpected brittle failure of thermoplastic (especially amorphous) polymers known at present. According to ASTM D883, stress cracking is defined as "an external or internal crack in a plastic caused by tensile stresses less than its short-term mechanical strength". This type of cracking typically involves brittle cracking, with little or no ductile drawing of the material from its adjacent failure surfaces. Environmental stress cracking may account for around 15-30% of all plastic component failures in service. This behavior is especially prevalent in glassy, amorphous thermoplastics. Amorphous polymers exhibit ESC because of their loose structure which makes it easier for the fluid to permeate into the polymer. Amorphous polymers are more prone to ESC at temperature higher than their glass transition temperature (Tg) due to the increased free volume. When Tg is approached, more fluid can permeate into the polymer chains.
Exposure of polymers to solvents
ESC and polymer resistance to ESC (ESCR) have been studied for several decades. Research shows that the exposure of polymers to liquid chemicals tends to accelerate the crazing process, initiating crazes at stresses that are much lower than the stress causing crazing in air. The action of either a tensile stress or a corrosive liquid alone would not be enough to cause failure, but in ESC the initiation and growth of a crack is caused by the combined action of the stress and a corrosive environmental liquid. These corrosive environmental liquids are called 'secondary chemical agents', are often organic, and are defined as solvents not anticipated to come into contact with the plastic during its lifetime of use. Failure is rarely associated with primary chemical agents, as these materials are anticipated to come into contact with the polymer during its lifetime, and thus compatibility is ensured prior to use. In air, failure due to creep is known as creep rupture, as the air acts as a plasticizer, and this acts in parallel to environmental stress cracking.
It is somewhat different from polymer degradation in that stress cracking does not break polymer bonds. Instead, it breaks the secondary linkages between polymers. These are broken when the mechanical stresses cause minute cracks in the polymer and they propagate rapidly under the harsh environmental conditions. It has also been seen that catastrophic failure under stress can occur due to the attack of a reagent that would not attack the polymer in an unstressed state. Environmental stress cracking is accelerated due to higher temperatures, cyclic loading, increased stress concentrations, and fatigue.
Metallurgists typically use the term Stress corrosion cracking or Environmental stress fracture to describe this type of failure in metals.
Factors influencing ESC
Although the phenomenon of ESC has been known for a number of decades, research has not yet enabled prediction of this type of failure for all environments and for every type of polymer. Some scenarios are well known, documented or are able to be predicted, but there is no complete reference for all combinations of stress, polymer and environment. The rate of ESC is dependent on many factors including the polymer's chemical makeup, bonding, crystallinity, surface roughness, molecular weight and residual stress. It also depends on the liquid reagent's chemical nature and concentration, the temperature of the system and the strain rate.
Mechanisms of ESC
There are a number of opinions on how certain reagents act on polymers under stress. Because ESC is often seen in amorphous polymers rather than in semicrystalline polymers, theories regarding the mechanism of ESC often revolve around liquid interactions with the amorphous regions of polymers. One such theory is that the liquid can diffuse into the polymer, causing swelling which increases the polymer's chain mobility. The result is a decrease in the yield stress and glass transition temperature (Tg), as well as a plasticisation of the material which leads to crazing at lower stresses and strains. A second view is that the liquid can reduce the energy required to create new surfaces in the polymer by wetting the polymer's surface and hence aid the formation of voids, which is thought to be very important in the early stages of craze formation. ESC may occur continuously, or via a piecewise start-and-stop mechanism.
There is an array of experimentally derived evidence to support the above theories:
Once a craze is formed in a polymer this creates an easy diffusion path so that the environmental attack can continue and the crazing process can accelerate.
Chemical compatibility between the environment and the polymer govern the amount in which the environment can swell and plasticise the polymer.
The effects of ESC are reduced when crack growth rate is high. This is primarily due to the inability of the liquid to keep up with the growth of the crack.
Once separated from the other chains, the polymers align, thus allowing embrittlement.
ESC generally occurs at the surface of a plastic and doesn't require the secondary chemical agent to penetrate the material significantly, which leaves the bulk properties unmodified.
Another theory for the mechanism of craze propagation in amorphous polymers is proposed by Kramer. According to his theory, the formation of internal surfaces in polymers is facilitated by polymeric surface tension that is determined by both secondary interactions and the contribution of load-bearing chains that must undergo fracture or slippage to form a surface. This theory provides an explanation for the decrease in the stress needed to propagate the craze in the presence of surface-active reagents such as detergents and high temperature.
ESC mechanism in polyethylene
Semi-crystalline polymers such as polyethylene show brittle fracture under stress if exposed to stress cracking agents. In such polymers, the crystallites are connected by the tie molecules through the amorphous phase. The tie molecules play an important role in the mechanical properties of the polymer through the transferring of load. Stress cracking agents, such as detergents, act to lower the cohesive forces which maintain the tie molecules in the crystallites, thus facilitating their "pull-out" and disentanglement from the lamellae. As a result, cracking is initiated at stress values lower than the critical stress level of the material.
In general, the mechanism of environmental stress cracking in polyethylene involves the disentanglement of the tie molecules from the crystals. The number of tie molecules and the strength of the crystals that anchor them are considered the controlling factors in determining the polymer resistance to ESC.
Characterizing ESC
A number of different methods are used to evaluate a polymer's resistance to environmental stress cracking. A common method in the polymer industry is use of the Bergen jig, which subjects the sample to variable strain during a single test. The results of this test indicate the critical strain to cracking, using only one sample. Another widely used test is the Bell Telephone test where bent strips are exposed to fluids of interest under controlled conditions. Further, new tests have been developed where the time for crack initiation under transverse loading and an aggressive solvent (10% Igepal CO-630 solution) is evaluated. These methods rely on an indentor to stress the material biaxially, while preventing a radial stress concentration. The stressed polymer sits in the aggressive agent and the stressed plastic around the indentor is watched to evaluate the time to crack formation, which is the way that ESC resistance is quantified. A testing apparatus for this method is known as the Telecom and is commercially available; initial experiments have shown that this testing gives equivalent results to ASTM D1693, but at a much shorter time scale. Current research deals with the application of fracture mechanics to the study of ESC phenomena. In summary, though, there is not a singular descriptor that is applicable to ESC—rather, the specific fracture is dependent on the material, conditions, and secondary chemical agents present .
Scanning electron microscopy and fractographic methods have historically been used to analyze the failure mechanism, particularly in high density polyethylene (HDPE). Freeze fracture has proved particularly useful for examining the kinetics of ESC, as they provide a snapshot in time of the crack propagation process.
Strain hardening as a measure of environmental stress cracking resistance (ESCR)
Many different methods exist for measuring ESCR. However, the long testing time and high costs associated with these methods slow down the R&D activities for designing materials with higher resistance to stress cracking. To overcome these challenges, a new simpler and faster method was developed by SABIC to assess ESCR for high density polyethylene (HDPE) materials. In this method, the resistance to slow crack growth or environmental stress cracking is predicted from a simple tensile measurement at a temperature of 80 °C. When polyethylene is deformed under uniaxial tension, before yield, the stiff crystalline phase of the polymer undergoes small deformation while the amorphous domains deform significantly. After the yield point but before the material undergoes strain hardening, the crystalline lamellae slip, with both the crystalline phase and the amorphous domains contributing to load bearing and straining. At some point, the amorphous domains stretch fully, at which point strain hardening begins. In the strain hardening region, the elongated amorphous domains become the load-bearing phase whereas the crystalline lamellae undergo fracture and unfold to adjust for the change in strain. The load-bearing chains in the amorphous domains of polyethylene are made of tie-molecules and entangled chains. Because of the key role of tie-molecules and entanglements in resisting environmental stress cracking in polyethylene, it follows that ESCR and strain hardening behaviors can very well be correlated.
In the strain hardening method, the slope of the strain hardening region (above the natural draw ratio) in the true stress-strain curve is calculated and used as a measure of ESCR. This slope is called the strain hardening modulus (Gp). The strain hardening modulus is calculated over the entire strain hardening region of the true stress-strain curve. The strain hardening region of the stress-strain curve is considered to be the homogeneously deforming part well above the natural draw ratio, which is determined by the presence of neck propagation, and below the maximum elongation. The strain hardening modulus, when measured at 80 °C, is sensitive to the same molecular factors that govern slow crack resistance in HDPE as measured by an accelerated ESCR test in which a surface-active agent is used. The strain hardening modulus and ESCR values for polyethylene have been found to be strongly correlated with each other.
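A minimal sketch of the slope extraction follows; the placeholder data, the assumed natural draw ratio of about 4, and the use of the draw ratio itself as the strain measure (some formulations use a neo-Hookean measure instead) are all illustrative assumptions rather than details taken from the method above.

import numpy as np

# Placeholder tensile data notionally measured at 80 C (illustrative values only)
eng_strain = np.linspace(0.0, 9.0, 500)
eng_stress = 20 + 2.0 * eng_strain                 # MPa
lam = 1.0 + eng_strain                             # draw ratio
true_stress = eng_stress * lam                     # true stress, MPa
# Homogeneous strain-hardening window: above an assumed natural draw ratio of ~4
mask = (lam > 4.0) & (lam < lam.max())
# Strain-hardening modulus Gp = slope of true stress over that window
Gp = np.polyfit(lam[mask], true_stress[mask], 1)[0]
print("Gp ~", round(Gp, 1), "MPa")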
Examples
An obvious example of the need to resist ESC in everyday life is the automotive industry, in which a number of different polymers are subjected to a number of fluids. Some of the chemicals involved in these interactions include petrol, brake fluid and windscreen cleaning solution. Plasticisers leaching from PVC can also cause ESC over an extended period of time, for example.
One of the first examples of the problem concerned ESC of LDPE. The material was initially used in insulating electric cables, and cracking occurred due to the interaction of the insulation with oils. The solution to the problem lay in increasing the molecular weight of the polymer. A test of exposure to a strong detergent such as Igepal was developed to give a warning of ESC.
Styrene acrylonitrile susceptibility to ketone solvent
A more specific example comes in the form of a piano key made from injection moulded styrene acrylonitrile (SAN). The key has a hook end which connects it to a metal spring, which causes the key to spring back into position after being struck. During assembly of the piano an adhesive was used, and excess adhesive which had spilled onto areas where it was not required was removed using a ketone solvent. Some vapour from this solvent condensed on the internal surface of the piano keys. Some time after this cleaning, fracture occurred at the junction where the hook end meets the spring.
To determine the cause of the fracture, the SAN piano key was heated above its glass transition temperature for a short time. If there is residual stress within the polymer, the piece will shrink when held at such a temperature. Results showed that there was significant shrinkage, particularly at the hook end-spring junction. This indicates stress concentration, possibly the combination of residual stress from forming and the action of the spring. It was concluded that although there was residual stress, the fracture was due to a combination of the tensile stress from the spring action and the presence of the ketone solvent.
Polymer formworks susceptibility to concrete paste
Polymer formworks can suffer from sudden failures during casting, which are generally associated with the pressure exerted by the wet concrete on thin plastic formworks. Such failures can be substantially accelerated by the corrosive effects of the wet concrete paste, which has a pH of circa 13. Certain thermoplastics are more severely affected, especially those in amorphous form, such as PLA, PET and PC. This phenomenon is even more pronounced in 3D-printed polymer formworks, where there is a correlation between the environmental stress cracking mechanism and layer interface grooves, where stresses concentrate.
See also
Corrosion engineering
Creep (deformation)
Crocodile cracking
Embrittlement
Environmental stress fracture
Forensic engineering
Forensic polymer engineering
Fracture Mechanics
Season cracking
Stress corrosion cracking
Structural failure
References
Further reading
Ezrin, Meyer (1996). Plastics Failure Guide: Cause and Prevention, Hanser-SPE.
Wright, David C. (2001). Environmental Stress Cracking of Plastics RAPRA.
Lewis, Peter Rhys, Reynolds, K. and Gagg, C. (2004). Forensic Materials Engineering: Case studies, CRC Press.
Polymer physics
Thermoplastics
Polymers | Environmental stress cracking | [
"Chemistry",
"Materials_science"
] | 2,850 | [
"Polymer physics",
"Polymers",
"Polymer chemistry"
] |
17,076,411 | https://en.wikipedia.org/wiki/Thermal%20degradation%20of%20polymers | In polymers, such as plastics, thermal degradation refers to a type of polymer degradation where damaging chemical changes take place at elevated temperatures, without the simultaneous involvement of other compounds such as oxygen. Simply put, even in the absence of air, polymers will begin to degrade if heated high enough. It is distinct from thermal-oxidation, which can usually take place at less elevated temperatures.
The onset of thermal degradation dictates the maximum temperature at which a polymer can be used. It is an important limitation in how the polymer is manufactured and processed. For instance, polymers become less viscous at higher temperatures which makes injection moulding easier and faster, but thermal degradation places a ceiling temperature on this. Polymer devolatilization is similarly effected.
At high temperatures, the components of the long chain backbone of the polymer can break (chain scission) and react with one another (cross-link) to change the properties of the polymer. These reactions result in changes to the molecular weight (and molecular weight distribution) of the polymer and can affect its properties by causing reduced ductility and increased embrittlement, chalking, scorch, colour changes, cracking and general reduction in most other desirable physical properties.
Reaction pathways
Depolymerisation
On heating, the end of a polymer chain departs and forms a free radical of low activity. The chain then depolymerises by a chain-reaction mechanism, losing monomer units one at a time, so the remaining molecular chains change little over a short time. The reaction is shown below. This process is common for poly(methyl methacrylate) (Perspex).
CH2-C(CH3)COOCH3-CH2-C*(CH3)COOCH3→CH2-C*(CH3)COOCH3 + CH2=C(CH3)COOCH3
Side-group elimination
Groups that are attached to the side of the backbone are held by bonds which are weaker than the bonds connecting the chain. When the polymer is heated, the side groups are stripped off from the chain before it is broken into smaller pieces.
For example, PVC eliminates HCl at around 100–120 °C.
-CH2-CH(Cl)-CH2-CH(Cl)- → -CH=CH-CH=CH- + 2 HCl
Side group elimination can also proceed in a radical manner. For instance, methyl groups in polypropylene are susceptible to homolysis at high temperatures, leaving radicals on the polymer backbone.
Random chain scission
Radicals formed on the polymer backbone by hydrogen abstraction or side-group elimination can cause the chain to break by beta scission. As a result the molecular weight decreases rapidly. Because the new free radicals formed are highly reactive, monomer is not a product of this reaction; intermolecular chain transfer and disproportionation termination reactions can also occur.
CH2-CH2-CH2-CH2-CH2-CH2-CH2’→
CH2-CH2-CH=CH2 + CH3-CH2-CH2’ or
CH2’+CH2=CH-CH2-CH2-CH2-CH3
As polymers approach their ceiling temperature scission starts to take place randomly on the backbone.
Oxidation of the polymer
Although thermal degradation is defined as an oxygen-free process, it is difficult in practice to completely exclude oxygen. Where oxygen is present, thermal oxidation is to be expected, leading to the formation of free radicals by way of hydroperoxides. These may then participate in thermal degradation reactions, accelerating the rate of breakdown.
Analytical Methods
TGA
Thermogravimetric analysis (TGA) refers to techniques in which a sample is heated in a controlled atmosphere at a defined heating rate while the sample's mass is measured. When a polymer sample degrades, its mass decreases due to the production of gaseous products such as carbon monoxide, water vapour and carbon dioxide.
DTA and DSC
Differential thermal analysis (DTA) and differential scanning calorimetry (DSC) analyse the heat effects accompanying physical changes in the polymer, such as the glass transition and melting. These techniques measure the heat flow associated with oxidation.
See also
Autoxidation
Photo-oxidation of polymers
Weather testing of polymers
Environmental stress cracking
References
Polymer chemistry
Corrosion
Forensic phenomena
Materials degradation | Thermal degradation of polymers | [
"Chemistry",
"Materials_science",
"Engineering"
] | 903 | [
"Metallurgy",
"Materials science",
"Corrosion",
"Electrochemistry",
"Polymer chemistry",
"Materials degradation"
] |
17,077,434 | https://en.wikipedia.org/wiki/Comparative%20Toxicogenomics%20Database | The Comparative Toxicogenomics Database (CTD) is a public website and research tool launched in November 2004 that curates scientific data describing relationships between chemicals/drugs, genes/proteins, diseases, taxa, phenotypes, GO annotations, pathways, and interaction modules.
The database is maintained by the Department of Biological Sciences at North Carolina State University.
Background
The Comparative Toxicogenomics Database (CTD) is a public website and research tool that curates scientific data describing relationships between chemicals, genes/proteins, diseases, taxa, phenotypes, GO annotations, pathways, and interaction modules, launched on November 12, 2004.
The database is maintained by the Department of Biological Sciences at North Carolina State University.
Goals and objectives
One of the primary goals of CTD is to advance the understanding of the effects of environmental chemicals on human health on the genetic level, a field called toxicogenomics.
The etiology of many chronic diseases involves interactions between environmental factors and genes that modulate important physiological processes. Chemicals are an important component of the environment. Conditions such as asthma, cancer, diabetes, hypertension, immunodeficiency, and Parkinson's disease are known to be influenced by the environment; however, the molecular mechanisms underlying these correlations are not well understood. CTD may help resolve these mechanisms. The most up-to-date extensive list of peer-reviewed scientific articles about CTD is available at their publications page.
Core data
CTD is a unique resource where biocurators read the scientific literature and manually curate four types of core data:
Chemical-gene interactions
Chemical-disease associations
Gene-disease associations
Chemical-phenotype associations
Data integration
By integrating the above four data sets, CTD automatically constructs putative chemical-gene-phenotype-disease networks to illuminate molecular mechanisms underlying environmentally-influenced diseases.
These inferred relationships are statistically scored and ranked and can be used by scientists and computational biologists to generate and verify testable hypotheses about toxicogenomic mechanisms and how they relate to human health.
Users can search CTD to explore scientific data for chemicals, genes, diseases, or interactions between any of these three concepts. Currently, CTD integrates toxicogenomic data for vertebrates and invertebrates.
CTD integrates data from or hyperlinks to these databases:
ChemIDplus, a dictionary of more than 400,000 chemicals housed in the US National Library of Medicine
DrugBank
Data Infrastructure for Chemical Safety project (diXa) Data Warehouse by the European Bioinformatics Institute, which as of November 2015 contained 469 compounds and 188 disease datasets in three sub-categories: liver, kidney and cardiovascular disease.
Gene Ontology Consortium
KEGG
NCBI Entrez-Gene
NCBI PubMed
NCBI Taxonomy
NLM Medical Subject Headings
OMIM
Reactome
References
External links
Comparative Toxicogenomics Database
MDI Biological Laboratory
Biochemistry databases
Genetics databases
Ontology (information science)
Toxicology
Molecular genetics
Environmental science
Comparisons | Comparative Toxicogenomics Database | [
"Chemistry",
"Biology",
"Environmental_science"
] | 613 | [
"Toxicology",
"Biochemistry databases",
"Molecular genetics",
"nan",
"Molecular biology",
"Biochemistry"
] |
17,077,613 | https://en.wikipedia.org/wiki/Line%20%28text%20file%29 | In computing, a line is a unit of organization for text files. A line consists of a sequence of zero or more characters, usually displayed within a single horizontal sequence.
The term comes directly from physical printing, where a line of text is a horizontal row of characters.
Depending on the file system or operating system being used the number of characters on a line may either be predetermined or fixed, or the length may vary from line to line. Fixed-length lines are sometimes called records. With variable-length lines, the end of each line is usually indicated by the presence of one or more special end-of-line characters. These include line feed, carriage return, or combinations thereof.
A blank line usually refers to a line containing zero characters (not counting any end-of-line characters); though it may also refer to any line that does not contain any visible characters (consisting only of whitespace).
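As a small illustration, the Python sketch below counts total and blank lines in a file, treating LF, CR LF and a bare CR alike through Python's universal-newline handling; the file name is only an example.

# Count total and blank lines; text mode translates \n, \r\n and \r alike.
with open("example.txt") as f:
    lines = f.read().splitlines()
total = len(lines)
blank = sum(1 for line in lines if line.strip() == "")   # whitespace-only lines count as blank
print(total, "lines,", blank, "blank")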
Some tools that operate on text files (e.g., editors) provide a mechanism to reference lines by their line number.
See also
Newline
Line wrap and word wrap
Line-oriented programming language, programming languages that interpret the end of line to be the end of an instruction or statement
Computer file formats
Computer data
References | Line (text file) | [
"Technology"
] | 253 | [
"Computer data",
"Data"
] |
12,555,175 | https://en.wikipedia.org/wiki/GLUT8 | GLUT8 also known as SLC2A8 is the eighth member of glucose transporter superfamily.
It is characterized by the presence of two leucine residues in its N-terminal intracellular domain, which influences intracellular trafficking.
Discovery
GLUT8, originally named GLUTX1, was cloned almost simultaneously by two different groups.
Tissue distribution
Subcellular localization
Contrary to GLUT4, GLUT8 (previously known as GLUTX1) is not insulin-sensitive. In other words, insulin does not promote GLUT8 translocation to the cell surface in neurons as well as in transfected cell lines.
Where in the cell GLUT8 is localized is not yet clear. Most GLUT8 is not present at the cell surface. Some co-localization with both the endoplasmic reticulum and late endosomes/lysosomes has been published.
When the N-terminal di-leucine motif is mutated into a di-alanine motif, GLUT8 is located mostly at the cell surface in Xenopus oocytes and mammalian cells such as HEK 293 cells and differentiated PC12 cells.
Physiological role
GLUT8 function in vivo remains to be defined, despite suggestions that it may play a role in fertility, being expressed at high levels in testes and in the acrosomal part of spermatozoa. Furthermore, GLUT8 appears to play an important role in the energy metabolism of sperm cells.
GLUT8, when expressed in Xenopus oocytes, mediates glucose uptake with high affinity. Other hexoses are not good substrates of the transporter.
Mice devoid of both copies of the SLC2A8 gene are viable, fertile and do not show any obvious phenotype. They are not diabetic, showing that GLUT8 is unlikely to play major roles in glucose homeostasis.
References
Membrane proteins
Solute carrier family | GLUT8 | [
"Biology"
] | 412 | [
"Protein classification",
"Membrane proteins"
] |
12,555,662 | https://en.wikipedia.org/wiki/Active%20contour%20model | Active contour model, also called snakes, is a framework in computer vision introduced by Michael Kass, Andrew Witkin, and Demetri Terzopoulos for delineating an object outline from a possibly noisy 2D image. The snakes model is popular in computer vision, and snakes are widely used in applications like object tracking, shape recognition, segmentation, edge detection and stereo matching.
A snake is an energy minimizing, deformable spline influenced by constraint and image forces that pull it towards object contours and internal forces that resist deformation. Snakes may be understood as a special case of the general technique of matching a deformable model to an image by means of energy minimization. In two dimensions, the active shape model represents a discrete version of this approach, taking advantage of the point distribution model to restrict the shape range to an explicit domain learnt from a training set.
Snakes do not solve the entire problem of finding contours in images, since the method requires knowledge of the desired contour shape beforehand. Rather, they depend on other mechanisms such as interaction with a user, interaction with some higher level image understanding process, or information from image data adjacent in time or space.
Motivation
In computer vision, contour models describe the boundaries of shapes in an image. Snakes in particular are designed to solve problems where the approximate shape of the boundary is known. By being a deformable model, snakes can adapt to differences and noise in stereo matching and motion tracking. Additionally, the method can find Illusory contours in the image by ignoring missing boundary information.
Compared to classical feature extraction techniques, snakes have multiple advantages:
They autonomously and adaptively search for the minimum state.
External image forces act upon the snake in an intuitive manner.
Incorporating Gaussian smoothing in the image energy function introduces scale sensitivity.
They can be used to track dynamic objects.
The key drawbacks of the traditional snakes are
They are sensitive to local minima states, which can be counteracted by simulated annealing techniques.
Minute features are often ignored during energy minimization over the entire contour.
Their accuracy depends on the convergence policy.
Energy formulation
A simple elastic snake is defined by a set of n points vi for i = 0, …, n − 1, the internal elastic energy term Einternal, and the external edge-based energy term Eexternal. The purpose of the internal energy term is to control the deformations made to the snake, and the purpose of the external energy term is to control the fitting of the contour onto the image. The external energy is usually a combination of the forces due to the image itself and the constraint forces introduced by the user.
The energy function of the snake is the sum of its external energy and internal energy, or
Esnake = Einternal + Eexternal.
Internal energy
The internal energy of the snake is composed of the continuity of the contour, Econt, and the smoothness of the contour, Ecurv.
This can be expanded as
Einternal = Econt + Ecurv = (1/2) ( α(s) |vs(s)|^2 + β(s) |vss(s)|^2 ),
where α(s) and β(s) are user-defined weights; these control the internal energy function's sensitivity to the amount of stretch in the snake and the amount of curvature in the snake, respectively, and thereby control the number of constraints on the shape of the snake.
In practice, a large weight for the continuity term penalizes changes in distances between points in the contour. A large weight for the smoothness term penalizes oscillations in the contour and will cause the contour to act as a thin plate.
Image energy
Energy in the image is some function of the features of the image. This is one of the most common points of modification in derivative methods. Features in images and images themselves can be processed in many and various ways.
For an image I(x, y), with lines, edges, and terminations present in the image, the general formulation of energy due to the image is
Eimage = wline Eline + wedge Eedge + wterm Eterm,
where wline, wedge and wterm are weights of these salient features. Higher weights indicate that the salient feature will have a larger contribution to the image force.
Line functional
The line functional is the intensity of the image, which can be represented as
Eline = I(x, y).
The sign of wline will determine whether the line will be attracted to either dark lines or light lines.
Some smoothing or noise reduction may be applied to the image first, in which case the line functional is evaluated on the smoothed image, Eline = Gσ(x, y) ∗ I(x, y).
Edge functional
The edge functional is based on the image gradient. One implementation of this is
Eedge = −|∇I(x, y)|^2.
A snake originating far from the desired object contour may erroneously converge to some local minimum. Scale space continuation can be used in order to avoid these local minima. This is achieved by using a blur filter on the image and reducing the amount of blur as the calculation progresses to refine the fit of the snake. The energy functional using scale space continuation is
Eedge = −|Gσ ∗ ∇^2 I|^2,
where Gσ is a Gaussian with standard deviation σ. Minima of this function fall on the zero-crossings of Gσ ∗ ∇^2 I, which define edges as per Marr–Hildreth theory.
Termination functional
Curvature of level lines in a slightly smoothed image can be used to detect corners and terminations in an image. Using this method, let C(x, y) = Gσ(x, y) ∗ I(x, y) be the image smoothed by a Gaussian of standard deviation σ,
with gradient angle
θ = arctan(Cy / Cx),
unit vectors along the gradient direction
n = (cos θ, sin θ),
and unit vectors perpendicular to the gradient direction
n⊥ = (−sin θ, cos θ).
The termination functional of energy can be represented as
Eterm = ∂θ/∂n⊥ = (∂²C/∂n⊥²) / (∂C/∂n) = (Cyy Cx^2 − 2 Cxy Cx Cy + Cxx Cy^2) / (Cx^2 + Cy^2)^(3/2).
Constraint energy
Some systems, including the original snakes implementation, allowed for user interaction to guide the snakes, not only in initial placement but also in their energy terms. Such constraint energy can be used to interactively guide the snakes towards or away from particular features.
Optimization through gradient descent
Given an initial guess for a snake, the energy function of the snake is iteratively minimized. Gradient descent minimization is one of the simplest optimizations which can be used to minimize snake energy. Each iteration takes one step in the negative gradient of the point with controlled step size to find local minima. This gradient-descent minimization can be implemented as
where Fsnake is the force on the snake, which is defined by the negative of the gradient of the energy field.
Assuming the weights α and β are constant with respect to s, this iterative method can be simplified to an explicit update in which each point is moved by a step Δt times the internal force (α vss − β vssss) plus the external image force.
Discrete approximation
In practice, images have finite resolution and can only be integrated over finite time steps . As such, discrete approximations must be made for practical implementation of snakes.
The energy function of the snake can be approximated by using the discrete points on the snake.
Consequently, the forces of the snake can be approximated as
Gradient approximation can be done through any finite approximation method with respect to s, such as Finite difference.
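As a concrete illustration of these discrete, explicit updates, the following Python/NumPy sketch evolves a closed snake with finite-difference internal forces and an external force derived from the edge energy −|∇I|². It is a simplified explicit scheme, not the semi-implicit solver of Kass et al., and all parameter values are assumptions.

import numpy as np
from scipy import ndimage

def external_force(image, sigma=2.0):
    # External force from the edge energy -|grad I|^2: points are pulled
    # towards regions of large gradient magnitude (an assumed, common choice).
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    edge = gx**2 + gy**2
    fy, fx = np.gradient(edge)
    def sample(pts):                              # pts: (n, 2) array of (row, col)
        coords = pts.T
        return np.stack([ndimage.map_coordinates(fy, coords, order=1),
                         ndimage.map_coordinates(fx, coords, order=1)], axis=1)
    return sample

def evolve_snake(pts, force, alpha=0.1, beta=0.05, dt=0.2, iters=500):
    # Explicit gradient descent on a closed contour: second differences give
    # the stretching (elasticity) force, fourth differences the bending force.
    for _ in range(iters):
        d2 = np.roll(pts, -1, 0) - 2 * pts + np.roll(pts, 1, 0)
        d4 = (np.roll(pts, -2, 0) - 4 * np.roll(pts, -1, 0) + 6 * pts
              - 4 * np.roll(pts, 1, 0) + np.roll(pts, 2, 0))
        pts = pts + dt * (alpha * d2 - beta * d4 + force(pts))
    return pts

A production implementation would normally use the semi-implicit pentadiagonal update instead, which tolerates larger time steps for the internal forces.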
Numerical instability due to discrete time
The introduction of discrete time into the algorithm can introduce updates in which the snake is moved past the minimum it is attracted to; this can further cause oscillations around the minimum or lead to a different minimum being found.
This can be avoided through tuning the time step such that the step size is never greater than a pixel due to the image forces. However, in regions of low energy, the internal energies will dominate the update.
Alternatively, the image forces can be normalized for each step such that the image forces only update the snake by one pixel. This can be formulated as
where is near the value of the pixel size. This avoids the problem of dominating internal energies that arise from tuning the time step.
Numerical instability due to discrete space
The energies in a continuous image may have zero-crossing that do not exist as a pixel in the image. In this case, a point in the snake would oscillate between the two pixels that neighbor this zero-crossing. This oscillation can be avoided by using interpolation between pixels instead of nearest neighbor.
Some variants of snakes
The default snake method has various limitations and corner cases where convergence performs poorly. Several alternatives exist which address issues of the default method, though with their own trade-offs. A few are listed here.
GVF snake model
The gradient vector flow (GVF) snake model addresses two issues with snakes:
poor convergence performance for concave boundaries
poor convergence performance when snake is initialized far from minimum
In 2D, the GVF vector field g(x, y) = (u(x, y), v(x, y)) minimizes the energy functional
E = ∫∫ μ (ux^2 + uy^2 + vx^2 + vy^2) + |∇f|^2 |g − ∇f|^2 dx dy,
where μ is a controllable smoothing term and f is an edge map of the image. This can be solved by solving the Euler equations
This can be solved through iteration towards a steady-state value.
This result replaces the default external force.
The primary issue with using GVF is that the smoothing term causes rounding of the edges of the contour. Reducing the value of μ reduces the rounding but weakens the amount of smoothing.
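A minimal sketch of the explicit 2D GVF iteration is shown below; the values of mu, dt and the iteration count are assumptions, and edge_map stands for an edge map such as |∇I|².

import numpy as np
from scipy import ndimage

def gvf(edge_map, mu=0.2, iters=500, dt=0.25):
    # Gradient vector flow: diffuse the edge-map gradient into homogeneous
    # regions while keeping it close to the data where |grad f| is large.
    fy, fx = np.gradient(edge_map.astype(float))
    mag2 = fx**2 + fy**2
    u, v = fx.copy(), fy.copy()          # initialise with the edge-map gradient
    for _ in range(iters):
        u += dt * (mu * ndimage.laplace(u) - mag2 * (u - fx))
        v += dt * (mu * ndimage.laplace(v) - mag2 * (v - fy))
    return u, v                          # used in place of the default external force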
The balloon model
The balloon model addresses these problems with the default active contour model:
The snake is not attracted to distant edges.
The snake will shrink inwards if no substantial image forces are acting upon it.
a snake larger than the minima contour will eventually shrink into it, but a snake smaller than the minima contour will not find the minima and instead continue to shrink.
The balloon model introduces an inflation term into the forces acting on the snake
where is the normal unitary vector of the curve at and is the magnitude of the force. should have the same magnitude as the image normalization factor and be smaller in value than to allow forces at image edges to overcome the inflation force.
Three issues arise from using the balloon model:
Instead of shrinking, the snake expands into the minima and will not find minima contours smaller than it.
The outward force causes the contour to be slightly larger than the actual minima. This can be solved by decreasing the balloon force after a stable solution has been found.
The inflation force can overpower forces from weak edges, amplifying the issue with snakes ignoring weaker features in an image.
Diffusion snakes model
The diffusion snake model addresses the sensitivity of snakes to noise, clutter, and occlusion. It implements a modification of the Mumford–Shah functional and its cartoon limit and incorporates statistical shape knowledge. The default image energy functional is replaced with
where is based on a modified Mumford–Shah functional
where is the piecewise smooth model of the image of domain . Boundaries are defined as
where are quadratic B-spline basis functions and are the control points of the splines. The modified cartoon limit is obtained as and is a valid configuration of .
The functional is based on training from binary images of various contours and is controlled in strength by the parameter . For a Gaussian distribution of control point vectors with mean control point vector and covariance matrix , the quadratic energy that corresponds to the Gaussian probability is
The strength of this method relies on the strength of the training data as well as the tuning of the modified Mumford–Shah functional. Different snakes will require different training data sets and tunings.
Geometric active contours
Geometric active contours, also called geodesic active contours (GAC) or conformal active contours, employ ideas from Euclidean curve shortening evolution. Contours split and merge depending on the detection of objects in the image. These models are largely inspired by level sets, and have been extensively employed in medical image computing.
For example, the gradient descent curve evolution equation of GAC is
where is a halting function, c is a Lagrange multiplier, is the curvature, and is the unit inward normal. This particular form of curve evolution equation is only dependent on the velocity in the normal direction. It therefore can be rewritten equivalently in an Eulerian form by inserting the level set function into it as follows
This simple yet powerful level-set reformulation enables active contours to handle topology changes during the gradient descent curve evolution. It has inspired tremendous progress in the related fields, and using numerical methods to solve the level-set reformulation is now commonly known as the level-set method. Although the level set method has become quite a popular tool for implementing active contours, Wang and Chan argued that not all curve evolution equations should be directly solved by it.
More recent developments in active contours address modeling of regional properties, incorporation of flexible shape priors and fully automatic segmentation, etc.
Statistical models combining local and global features have been formulated by Lankton and Allen Tannenbaum.
Relations to graph cuts
Graph cuts, or max-flow/min-cut, is a generic method for minimizing a particular form of energy called Markov random field (MRF) energy. The Graph cuts method has been applied to image segmentation as well, and it sometimes outperforms the level set method when the model is MRF or can be approximated by MRF.
See also
Boundary vector field
References
External links
David Young, March 1995
Snakes: Active Contours, CVOnline
Active contours, deformable models, and gradient vector flow by Chenyang Xu and Jerry Prince, including code download
ICBE, University of Manchester
Active contours implementation & test platform GUI
A simple implementation of snakes by Associate Professor Cris Luengo
MATLAB documentation for activecontour, which segments an image using active contours
Sample code
Practical examples of different snakes developed by Xu and Prince
Basic tool to play with snakes (active contour models) from Tim Cootes, University of Manchester
Matlab implementation of 2D and 3D snake including GVF and balloon force
Matlab Snake Demo by Chris Bregler and Malcolm Slaney, Interval Research Corporation.
A Demonstration Using Java
Active Contours implementation & test platform GUI by Nikolay S. and Alex Blekhman implementing "Active Contours without Edges"
Active Contour Segmentation by Shawn Lankton implementing "Active Contours without Edges"
Geometric Active Contour Code by Jarno Ralli
Morphological Snakes
Computer vision | Active contour model | [
"Engineering"
] | 2,721 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
12,557,781 | https://en.wikipedia.org/wiki/6-demicubic%20honeycomb | The 6-demicubic honeycomb or demihexeractic honeycomb is a uniform space-filling tessellation (or honeycomb) in Euclidean 6-space. It is constructed as an alternation of the regular 6-cube honeycomb.
It is composed of two different types of facets. The 6-cubes become alternated into 6-demicubes h{4,3,3,3,3} and the alternated vertices create 6-orthoplex {3,3,3,3,4} facets.
D6 lattice
The vertex arrangement of the 6-demicubic honeycomb is the D6 lattice. The 60 vertices of the rectified 6-orthoplex vertex figure of the 6-demicubic honeycomb reflect the kissing number 60 of this lattice. The best known is 72, from the E6 lattice and the 222 honeycomb.
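The kissing number 60 can also be verified directly by enumerating the minimal vectors of D6, i.e. the integer vectors with an even coordinate sum whose two nonzero entries are ±1; the short Python sketch below counts them.

from itertools import combinations, product

n = 6
minimal = []
for i, j in combinations(range(n), 2):           # positions of the two nonzero entries
    for si, sj in product((1, -1), repeat=2):    # their signs
        v = [0] * n
        v[i], v[j] = si, sj                      # coordinate sum is even, squared norm 2
        minimal.append(tuple(v))
print(len(minimal))                              # 60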
The D6+ lattice (also called D6^2) can be constructed by the union of two D6 lattices. This packing is only a lattice for even dimensions. The kissing number is 2^5 = 32 (2^(n−1) for n < 8, 240 for n = 8, and 2n(n−1) for n > 8).
The D6* lattice (also called D6^4 and C6^2) can be constructed by the union of all four 6-demicubic lattices: It is also the 6-dimensional body centered cubic, the union of two 6-cube honeycombs in dual positions.
The kissing number of the D6* lattice is 12 (2n for n ≥ 5), and its Voronoi tessellation is a trirectified 6-cubic honeycomb, containing all birectified 6-orthoplex Voronoi cells.
Symmetry constructions
There are three uniform construction symmetries of this tessellation. Each symmetry can be represented by arrangements of different colors on the 64 6-demicube facets around each vertex.
Related honeycombs
See also
6-cubic honeycomb
Notes
External links
Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
Honeycombs (geometry)
7-polytopes | 6-demicubic honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 527 | [
"Tessellation",
"Crystallography",
"Honeycombs (geometry)",
"Symmetry"
] |
12,561,056 | https://en.wikipedia.org/wiki/Quantum%20dimer%20models | Quantum dimer models were introduced to model the physics of resonating valence bond (RVB) states in lattice spin systems. The only degrees of freedom retained from the motivating spin systems are the valence bonds, represented as dimers which live on the lattice bonds. In typical dimer models, the dimers do not overlap ("hardcore constraint").
Typical phases of quantum dimer models tend to be valence bond crystals. However, on non-bipartite lattices, RVB liquid phases possessing topological order and fractionalized spinons also appear. The discovery of topological order in quantum dimer models (more than a decade after the models were introduced) has led to new interest in these models.
Classical dimer models have been studied previously in statistical physics, in particular by P. W. Kasteleyn (1961) and
M. E. Fisher (1961).
References
Exact solution for classical dimer models on planar graphs:
Introduction of model; early literature:
Topological order in quantum dimer model on non-bipartite lattices:
Topological order in quantum spin model on non-bipartite lattices:
Quantum lattice models
Condensed matter physics
Statistical mechanics
Matching (graph theory) | Quantum dimer models | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 251 | [
"Materials science stubs",
"Statistical mechanics",
"Phases of matter",
"Quantum mechanics",
"Graph theory",
"Materials science",
"Mathematical relations",
"Condensed matter physics",
"Matching (graph theory)",
"Condensed matter stubs",
"Matter",
"Quantum lattice models"
] |
12,561,401 | https://en.wikipedia.org/wiki/Aluminium%20smelting | Aluminium smelting is the process of extracting aluminium from its oxide, alumina, generally by the Hall-Héroult process. Alumina is extracted from the ore bauxite by means of the Bayer process at an alumina refinery.
This is an electrolytic process, so an aluminium smelter uses huge amounts of electric power; smelters tend to be located close to large power stations, often hydro-electric ones, in order to hold down costs and reduce the overall carbon footprint. Smelters are often located near ports, since many smelters use imported alumina.
Layout of an aluminium smelter
The Hall-Héroult electrolysis process is the major production route for primary aluminium. An electrolytic cell is made of a steel shell with a series of insulating linings of refractory materials. The cell consists of a brick-lined outer steel shell as a container and support. Inside the shell, cathode blocks are cemented together by ramming paste. The top lining is in contact with the molten metal and acts as the cathode. The molten electrolyte is maintained at high temperature inside the cell. The prebaked anode is also made of carbon in the form of large sintered blocks suspended in the electrolyte. A single Soderberg electrode or a number of prebaked carbon blocks are used as anode, while the principal formulation and the fundamental reactions occurring on their surface are the same.
An aluminium smelter consists of a large number of cells (pots) in which the electrolysis takes place. A typical smelter contains anywhere from 300 to 720 pots, each of which produces about a ton of aluminium a day, though the largest proposed smelters are up to five times that capacity. Smelting is run as a batch process, with the aluminium deposited at the bottom of the pots and periodically siphoned off. Particularly in Australia these smelters are used to control electrical network demand, and as a result power is supplied to the smelter at a very low price. However power must not be interrupted for more than 4–5 hours, since the pots have to be repaired at significant cost if the liquid metal solidifies.
Principle
Aluminium is produced by electrolytic reduction of aluminium oxide dissolved in molten cryolite.
Al^3+ + 3e- -> Al
At the same time the carbon electrode is oxidised, initially to carbon monoxide
C + 1/2O2 -> CO
Although the formation of carbon monoxide (CO) is thermodynamically favoured at the reaction temperature, the presence of considerable overvoltage (the difference between reversible and polarization potentials) changes the thermodynamic equilibrium and a mixture of CO and CO2 is produced. Thus the idealised overall reactions may be written as
2Al2O3 + 3C -> 4Al + 3CO2
Al2O3 + 3C -> 2Al + 3CO
By increasing the current density up to 1 A/cm2, the proportion of CO2 increases and carbon consumption decreases.
As three electrons are needed to produce each atom of aluminium, the process consumes a large amount of electricity. For this reason aluminium smelters are sited close to sources of inexpensive electricity, such as hydroelectric.
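As an illustrative check on that electron count, the short Python sketch below estimates the theoretical minimum charge and electrical energy per kilogram of aluminium; the 4 V cell voltage is an assumed, typical figure chosen for illustration, not a value taken from this article.

# Theoretical electricity demand of aluminium reduction (illustrative sketch).
# Assumed values: rounded Faraday constant, molar mass of Al, and a typical
# operating cell voltage of ~4 V.
F = 96485.0          # C/mol, Faraday constant
M_AL = 26.98         # g/mol, molar mass of aluminium
Z = 3                # electrons per Al atom (Al3+ + 3e- -> Al)
CELL_VOLTAGE = 4.0   # V, assumed typical operating cell voltage

charge_per_kg = Z * F * (1000.0 / M_AL)                   # coulombs per kg Al
ampere_hours_per_kg = charge_per_kg / 3600.0              # Ah per kg Al
energy_kwh_per_kg = charge_per_kg * CELL_VOLTAGE / 3.6e6  # kWh per kg Al

print(f"Charge: {ampere_hours_per_kg:.0f} Ah/kg Al")                      # ~2980 Ah/kg
print(f"Energy at {CELL_VOLTAGE} V: {energy_kwh_per_kg:.1f} kWh/kg Al")   # ~11.9 kWh/kg

The result is a theoretical floor; real cells consume somewhat more because current efficiency is below 100%.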
Cell components
Electrolyte: The electrolyte is a molten bath of cryolite (Na3AlF6) and dissolved alumina. Cryolite is a good solvent for alumina with low melting point, satisfactory viscosity, and low vapour pressure. Its density is also lower than that of liquid aluminium (2 vs 2.3 g/cm3), which allows natural separation of the product from the salt at the bottom of the cell. The cryolite ratio (NaF/AlF3) in pure cryolite is 3, with a melting temperature of 1010 °C, and it forms a eutectic with 11% alumina at 960 °C. In industrial cells the cryolite ratio is kept between 2 and 3 to decrease its melting temperature to 940–980 °C.
Cathode: Carbon cathodes are essentially made of anthracite, graphite and petroleum coke, which are calcined at around 1200 °C and crushed and sieved prior to being used in cathode manufacturing. Aggregates are mixed with coal-tar pitch, formed, and baked. Carbon purity is not as stringent as for the anode, because metal contamination from the cathode is not significant. The carbon cathode must have adequate strength, good electrical conductivity and high resistance to wear and sodium penetration. Anthracite cathodes have higher wear resistance and slower creep with lower amplitude than graphitic and graphitized petroleum coke cathodes. In contrast, dense cathodes with more graphitic order have higher electrical conductivity, lower energy consumption, and lower swelling due to sodium penetration. Swelling results in early and non-uniform deterioration of cathode blocks.
Anode: Carbon anodes have a specific situation in aluminium smelting and, depending on the type of anode, aluminium smelting is divided into two different technologies: “Soderberg” and “prebaked” anodes. Anodes are also made of petroleum coke, mixed with coal-tar pitch, followed by forming and baking at elevated temperatures. The quality of the anode affects the technological, economic and environmental aspects of aluminium production. Energy efficiency is related to the nature of the anode materials, as well as the porosity of baked anodes. Around 10% of cell power is consumed to overcome the electrical resistance of the prebaked anode (50–60 μΩm). Carbon is consumed in excess of the theoretical value due to low current efficiency and non-electrolytic consumption. Inhomogeneous anode quality due to variation in raw materials and production parameters also affects its performance and the cell stability.
Prebaked consumable carbon anodes are divided into graphitized and coke types. For manufacturing of the graphitized anodes, anthracite and petroleum coke are calcined and classified. They are then mixed with coal-tar pitch and pressed. The pressed green anode is then baked at 1200 °C and graphitized. Coke anodes are made of calcined petroleum coke, recycled anode butts, and coal-tar pitch (binder). The anodes are manufactured by mixing aggregates with coal tar pitch to form a paste with a doughy consistency. This material is most often vibro-compacted but in some plants pressed. The green anode is then sintered at 1100–1200 °C for 300–400 hours, without graphitization, to increase its strength through decomposition and carbonization of the binder. Higher baking temperatures increase the mechanical properties and thermal conductivity, and decrease the air and CO2 reactivity. The specific electrical resistance of the coke-type anodes is higher than that of the graphitized ones, but they have higher compressive strength and lower porosity.
Soderberg electrodes (baked in situ), used for the first time in 1923 in Norway, are composed of a steel shell and a carbonaceous mass which is baked by the heat escaping from the electrolysis cell. Soderberg carbon-based materials such as coke and anthracite are crushed, heat-treated, and classified. These aggregates are mixed with pitch or oil as a binder, briquetted and loaded into the shell. The temperature increases from the bottom to the top of the column and in-situ baking takes place as the anode is lowered into the bath. Significant amounts of hydrocarbons are emitted during baking, which is a disadvantage of this type of electrode. Most modern smelters use prebaked anodes since process control is easier and a slightly better energy efficiency is achieved, compared to Soderberg anodes.
Environmental issues of aluminium smelters
The process produces a quantity of fluoride waste: perfluorocarbons and hydrogen fluoride as gases, and sodium and aluminium fluorides and unused cryolite as particulates. This can be as small as 0.5 kg per tonne of aluminium in the best plants in 2007, up to 4 kg per tonne of aluminium in older designs in 1974. Unless carefully controlled, hydrogen fluorides tend to be very toxic to vegetation around the plants.
The Soderberg process, which bakes the anthracite/pitch mix as the anode is consumed, produces significant emissions of polycyclic aromatic hydrocarbons as the pitch is consumed in the smelter.
The linings of the pots end up contaminated with cyanide-forming materials; Alcoa has a process for converting spent linings into aluminium fluoride for reuse and synthetic sand usable for building purposes and inert waste.
Inert anodes
Inert anodes are non-carbon-based alternatives to the traditional anodes used during aluminium reduction. These anodes do not chemically react with the electrolyte, and are therefore not consumed during the reduction process. Because the anode does not contain carbon, carbon dioxide is not produced. Through a review of the literature, Haraldsson et al. found that inert anodes reduced the greenhouse gas emissions of the aluminium smelting process by approximately 2 tonnes CO2eq per tonne of Al.
Types of anodes
Ceramic anode materials include Ni-Fe, Sn, and Ni-Li based oxides. These anodes show promise as they are extremely stable during the reduction process at normal operating temperatures (~1000 °C), ensuring that the Al is not contaminated. The stability of these anodes also allows them to be used with a range of electrolytes. However, ceramic anodes suffer from poor electrical conductivity and low mechanical strength.
Alternatively, metal anodes boast high mechanical strength and conductivity but tend to corrode easily during the reduction process. Some material systems that are used in inert metal anodes include Al-Cu, Ni-Cu, and Fe-Ni-Cu systems. Additional additives such as Sn, Ag, V, Nb, Ir, and Ru can be included in these systems to form non-reactive oxides on the anode surface, but this significantly increases the cost and embodied energy of the anode.
Cermet anodes are the combination of a metal and a ceramic anode, and aim to take advantage of the desirable properties of both: the electrical conductivity and toughness of the metal and the stability of the ceramic. These anodes often consist of a combination of the above metal and ceramic materials. In industry, Alcoa and Rio Tinto have formed a joint venture, Elysis, to commercialize inert anode technology developed by Alcoa. The inert anode is a cermet material, a metallic dispersion of copper alloy in a ceramic matrix of nickel ferrite. Unfortunately, as the number of anode components increases, the structure of the anode becomes more unstable. As a result, cermet anodes also suffer from corrosion issues during reduction.
Energy use
Aluminium smelting is highly energy intensive, and in some countries is economical only if there are inexpensive sources of electricity. In some countries, smelters are given exemptions to energy policy like renewable energy targets.
To reduce the energy cost of the smelting process, alternative electrolytes such as Na3AlF6 are being investigated that can operate at a lower temperature. However, changing the electrolyte changes the kinetics of the oxygen liberated from the Al2O3 ore. This change in bubble formation can alter the rate at which the anode reacts with oxygen or the electrolyte and effectively change the efficiency of the reduction process.
Inert anodes, used in tandem with vertical electrode cells, can also reduce the energy cost of aluminium reduction by up to 30% by lowering the voltage needed for reduction to occur. Applying these two technologies at the same time allows the anode-cathode distance to be minimized, which decreases resistive losses.
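A minimal sketch of why that matters, using assumed round numbers (the bath resistivity and current density below are illustrative, literature-style values, not figures from this article): the ohmic voltage drop across the electrolyte scales linearly with the anode-cathode distance.

# Illustrative ohmic-drop estimate across the anode-cathode distance (ACD).
# All numbers are assumed, order-of-magnitude values for illustration only.
BATH_RESISTIVITY = 0.5    # ohm*cm (assumed typical cryolite-alumina melt)
CURRENT_DENSITY = 0.8     # A/cm^2 (assumed)

def bath_voltage_drop(acd_cm):
    """Ohmic voltage drop (V) across the electrolyte: j * rho * d."""
    return CURRENT_DENSITY * BATH_RESISTIVITY * acd_cm

for acd in (4.5, 2.8):    # conventional vs. reduced anode-cathode distance, cm
    print(f"ACD {acd} cm: ~{bath_voltage_drop(acd):.2f} V ohmic drop")
# Shrinking the ACD from 4.5 cm to 2.8 cm cuts the ohmic drop from ~1.8 V to ~1.1 V.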
Example aluminium smelters
Alcan Lynemouth Aluminium Smelter, powered by the coal-fired Lynemouth Power Station in North East England, ceased production in 2012 and was demolished in 2018.
Anglesey Aluminium, powered by Wylfa nuclear power station in north-west Wales, closed in 2013, with redevelopment of the site announced in 2022.
The Valco aluminium smelter in Ghana, powered by the Akosombo Hydroelectric Project
Fjarðaál in Iceland, powered by the Kárahnjúkar Hydropower Plant
Jharsuguda in Orissa, India, to be powered by its own coal-fired power station.
Aluminerie Alouette in Sept-Îles, Québec, powered by the Churchill Falls Hydro Electric project.
Alba Smelter in Bahrain, powered by its own four power stations with a total generating capacity of .
See also
List of aluminium smelters
List of alumina refineries
Lead smelter
Nuclear power
Zinc smelting
Solid oxide Hall–Héroult process
References
Metallurgical processes
smelting
Electrolysis
de:Aluminium#Gewinnung | Aluminium smelting | [
"Chemistry",
"Materials_science"
] | 2,684 | [
"Metallurgical processes",
"Electrochemistry",
"Metallurgy",
"Electrolysis"
] |
12,562,201 | https://en.wikipedia.org/wiki/Industrial%20symbiosis | Industrial symbiosis is a subset of industrial ecology. It describes how a network of diverse organizations can foster eco-innovation and long-term culture change, create and share mutually profitable transactions—and improve business and technical processes.
Although geographic proximity is often associated with industrial symbiosis, it is neither necessary nor sufficient—nor is a singular focus on physical resource exchange. Strategic planning is required to optimize the synergies of co-location. In practice, using industrial symbiosis as an approach to commercial operations—using, recovering and redirecting resources for reuse—results in resources remaining in productive use in the economy for longer. This in turn creates business opportunities, reduces demands on the earth's resources, and provides a stepping-stone towards creating a circular economy.
Industrial symbiosis is a subset of industrial ecology, with a particular focus on material and energy exchange. Industrial ecology is a relatively new field that is based on a natural paradigm, claiming that an industrial ecosystem may behave in a similar way to a natural ecosystem wherein everything gets recycled, although the simplicity and applicability of this paradigm have been questioned.
Introduction
Eco-industrial development is one of the ways in which industrial ecology contributes to the integration of economic growth and environmental protection. Some of the examples of eco-industrial development are:
Circular economy (single material and/or energy exchange)
Greenfield eco-industrial development (geographically confined space)
Brownfield eco-industrial development (geographically confined space)
Eco-industrial network (no strict requirement of geographical proximity)
Virtual eco-industrial network (networks spread in large areas e.g. regional network)
Networked Eco-industrial System (macro level developments with links across regions)
"Industrial symbiosis engages traditionally separate industries in a collective approach to competitive advantage involving physical exchange of materials, energy, water, and/or by-products. The keys to industrial symbiosis are collaboration and the synergistic possibilities offered by geographic proximity". Notably, this definition and the stated key aspects of industrial symbiosis, i.e., the role of collaboration and geographic proximity, in its variety of forms, have been explored and empirically tested in the UK through the research and published activities of the National Industrial Symbiosis Programme.
Industrial symbiosis systems collectively optimize material and energy use at efficiencies beyond those achievable by any individual process alone. IS systems such as the web of materials and energy exchanges among companies in Kalundborg, Denmark have spontaneously evolved from a series of micro innovations over a long time scale; however, the engineered design and implementation of such systems from a macro planner's perspective, on a relatively short time scale, proves challenging.
Often, access to information on available by-products is difficult to obtain. These by-products are considered waste and typically not traded or listed on any type of exchange. Only a small group of specialized waste marketplaces addresses this particular kind of waste trading.
Example
Recent work reviewed the government policies necessary to construct a multi-gigawatt photovoltaic factory, outlined complementary policies to protect existing solar companies, and explored the technical requirements for a symbiotic industrial system that would increase manufacturing efficiency while improving the environmental impact of solar photovoltaic cells. The results of the analysis show that an eight-factory industrial symbiotic system can be viewed as a medium-term investment by any government, which will not only obtain a direct financial return, but also an improved global environment.
This is because synergies have been identified for co-locating glass manufacturing and photovoltaic manufacturing.
The waste heat from glass manufacturing can be used in industrial-sized greenhouses for food production. Even within the PV plant itself a secondary chemical recycling plant can reduce environmental impact while improving economic performance for the group of manufacturing facilities.
DCM Shriram Consolidated Limited (Kota unit) produces caustic soda, calcium carbide, cement and PVC resins. Chlorine and hydrogen are obtained as by-products from caustic soda production, while the calcium carbide produced is partly sold and partly treated with water to form a slurry (an aqueous suspension of calcium hydroxide) and acetylene. The chlorine and acetylene produced are utilised to form PVC compounds, while the slurry is consumed for cement production by the wet process. Hydrochloric acid is prepared by direct synthesis, where pure chlorine gas is combined with hydrogen to produce hydrogen chloride in the presence of UV light.
See also
Eco-industrial park
Industrial ecology
Industrial metabolism
Waste valorization
References
External links
International Group of Industrial Symbiosis Researchers & Practitioners
Marian Chertow interview on Industrial Symbiosis (audio)
Western Cape Industrial Symbiosis Programme (WISP)
Industrial ecology | Industrial symbiosis | [
"Chemistry",
"Engineering"
] | 984 | [
"Industrial ecology",
"Industrial engineering",
"Environmental engineering"
] |
12,563,713 | https://en.wikipedia.org/wiki/High%20production%20volume%20chemicals | High production volume chemicals (HPV chemicals) are produced or imported into the United States in quantities of 1 million pounds or 500 tons per year. In OECD countries, HPV chemicals are defined as being produced at levels greater than 1,000 metric tons per producer/importer per year in at least one member country/region. A list of HPV chemicals serves as an overall priority list, from which chemicals are selected to gather data for a screening information dataset (SIDS), for testing and for initial hazard assessment.
History
OECD countries including EU
In 1987, member countries of the Organisation for Economic Co-operation and Development decided to investigate existing chemicals. In 1991, they agreed to begin by focusing on High production volume (HPV) chemicals, where production volume was used as a surrogate for data on occupational, consumer, and environmental exposure. Each country agreed to "sponsor" the assessment of a proportion of the HPV chemicals. Countries also agreed on a minimum set of required information, the screening information dataset (SIDS). Six tests are: acute toxicity, chronic toxicity, developmental toxicity/reproductive toxicity, mutagenicity, ecotoxicity and environmental fate. Using SIDS and detailed exposure data OECD's High Production Volume Chemicals Programme conducted initial risk assessments to screen and to identify any need for further work.
During the late 1990s, OECD member countries began to assess chemical categories and to use quantitative structure–activity relationship (QSAR) results to create OECD guidance documents, as well as a computerized QSAR toolbox. In 1998, the global chemical industry, organized in the International Council of Chemical Associations (ICCA) initiative, offered to join OECD efforts. The ICCA promised to sponsor by 2013 about 1,000 substances from the OECD's HPV chemicals list "to establish as priorities for investigation", based on "presumed wide dispersive use, production in two or more global regions or similarity to another chemical, which met either of these criteria". OECD in turn agreed to refocus and to "increase transparency, efficiency and productivity and allow longer-term planning for governments and industry". The OECD refocus was on initial hazard assessments of HPV chemicals only, and no longer extensive exposure information gathering and evaluation. Detailed exposure assessments within national (or regional) programmes and priority setting activities were postponed as post-SIDS work.
United States
On October 9, 1998, EPA Administrator Carol Browner sent letters to the CEOs of more than 900 chemical companies that manufacture HPV chemicals, asking them to participate in EPA's voluntary testing initiative, the so-called "HPV Challenge Program". The Environmental Defense Fund, the American Petroleum Institute, and American Chemistry Council joined in the effort.
HPV chemical lists
The OECD list of HPV chemicals keeps changing. A 2004 list of 143 pages contained 4,842 entries. A 2007 list was published in 2009.
The EPA's HPV list had 2,539 chemicals, while the HPV Challenge Program chemical list contained only 1,973 chemicals because inorganic chemicals and polymers were not included.
The EPA has published an online list of HPV chemicals since 2010. The list is not numbered and has no footnotes.
Regulatory context
Europe
The "Strategic Approach to International Chemicals Management" (SAICM) is a policy for achieving safe production and use of chemicals worldwide by 2020, developed with stakeholders from more than 140 countries, signed by 100 governments, adopted by the UNEP Governing Council in February 2006.
The Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) proposal and the European Chemicals Agency will help the EU to fulfill objectives of SAICM.
The Stockholm Convention on Persistent Organic Pollutants has aimed to control production, use, trade, disposal and release of twelve Persistent organic pollutants (POPs); the European Community has proposed five additional chemicals. The Convention bans deliberate production and use of POPs, bans the development of new POPs, and aims at minimizing releases of unintentionally produced POPs. The Convention has so far been ratified by the European Community, 18 member states and the two accession countries.
United States
The 1976 Toxic Substances Control Act (TSCA) requires the EPA to "compile, keep current, and publish a list of each chemical substance that is manufactured or processed in the United States". In 1998, the EPA reported that the most heavily used HPV chemicals in commerce were largely untested: 43% of 2,800 HPV chemicals had no basic toxicity data or screening level data at all, 50% had incomplete screening data, and only 7% of the HPV chemicals had a complete set of screening level toxicity data. However, screening level data, even if they indicated a problem, were not sufficient to restrict the use of a compound.
In 1986, 2003, 2005, and 2011, the EPA issued regulations to amend and update the TSCA inventory.
As of April 2010, about 84,000 chemicals were on the TSCA inventory, per a GAO report. TSCA Section 4 gives EPA the authority to demand chemical testing.
Toxicity data
In 1982, a survey was conducted of U.S. manufacturers, processors, and importers of 75 chemicals that the International Agency for Research on Cancer had found to cause cancer in animals but whose carcinogenicity in humans was uncertain. Only for 13 of the 75 chemicals had epidemiologic studies on human health been completed or were in progress. Eighteen of the 75 were HPV chemicals, and only for eight HPV chemicals had epidemiologic studies been completed or were in progress. The largest number of chemicals (19) were drugs, and none of them had been epidemiologically studied. Seven chemicals that had been studied were used as pesticides.
In 1997 the Environmental Defense Fund reported in “Toxic Ignorance” the results of its analysis of the availability of basic health test data on HPV chemicals, finding that only 29% of the HPV chemicals in the US met minimum data requirements.
In 1998 the EPA published a report, Chemical Hazard Data Availability Study, showing that "55% of TRI chemicals have had full SIDS testing, while only 7% of other chemicals have full test data". They wrote
"...of the 830 companies making HPV chemicals in the US, 148 companies have NO SIDS data available on their chemicals; an additional 459 companies sell products for which, on average, half or less of SIDS tests are available. Only 21 companies (or 3% of the 830 companies) have all SIDS tests available for their chemicals. The basic set of test data costs about $200,000 per chemical."
In 1999, the European Union (EU) published a study about how many EU-HPV chemicals were publicly available in a comprehensive chemical data base called IUCLID: Only 14% of the EU-HPV chemicals had data at the level of the base-set, 65% had less than base-set, and 21% had no data available. The authors concluded, "more data [were] publicly available than most previous studies" had shown.
In 2004, one of the partners in EPA's HPV Challenge Program assessed 532 up to then unsponsored chemicals, whether they were "orphaned" or not, and found:
156 chemicals (29%) likely were still "orphans" – i.e., they could and should be sponsored, but had not been
103 chemicals (19%) had an unclear status
266 chemicals (50%) were likely no longer HPV
only 7 chemicals (1%) appeared to be in the process of becoming sponsored.
Since 2009, the EPA has required companies to perform toxicity testing on only 34 chemicals. In 2011, the EPA announced, but as of 2013 had yet to finalize, plans to require testing for 23 additional chemicals, bringing the total to 57 chemicals. The EPA has prioritized 83 chemicals for risk assessment, and initiated seven assessments in 2012, with plans to start 18 additional assessments in 2013 and 2014.
In 2007, the EPA began ToxCast, which uses "automated chemical screening technologies (called "high-throughput screening assays") to expose living cells or isolated proteins to chemicals".
In 2009, the EPA reported that it had developed a system called ACToR (Aggregated Computational Toxicology Resource), which pooled chemical research, data and screening tools from multiple federal agencies, including the National Toxicology Program/National Institute of Environmental Health Sciences, the National Center for Advancing Translational Sciences and the Food and Drug Administration.
See also
Chemical compound
Risk assessment
TSCA
References
External links
Chemical industry
Chemical substances
Chemical compounds
Hazardous materials
Import
Toxicology
Ecology | High production volume chemicals | [
"Physics",
"Chemistry",
"Technology",
"Biology",
"Environmental_science"
] | 1,773 | [
"Toxicology",
"Molecules",
"Chemical compounds",
"Ecology",
"Materials",
"nan",
"Chemical substances",
"Hazardous materials",
"Matter"
] |
12,564,389 | https://en.wikipedia.org/wiki/Marsh%20funnel | The Marsh funnel is a simple device for measuring viscosity by observing the time it takes a known volume of liquid to flow from a cone through a short tube. It is standardized for use by mud engineers to check the quality of drilling mud. Other cones with different geometries and orifice arrangements are called flow cones, but have the same operating principle.
In use, the funnel is held vertically with the end of the tube closed by a finger. The liquid to be measured is poured through the mesh to remove any particles which might block the tube. When the fluid level reaches the mesh, the amount inside is equal to the rated volume. To take the measurement, the finger is released as a stopclock is started, and the liquid is allowed to run into a measuring container. The time in seconds is recorded as a measure of the viscosity.
The Marsh Funnel
Based on a method published in 1931 by H. N. Marsh, a Marsh cone is a flow cone with an aspect ratio of 2:1 and a working volume of at least a litre. A Marsh funnel is a Marsh cone with a particular orifice and a working volume of 1.5 litres. It consists of a cone 6 inches (152 mm) across and 12 inches (305 mm) in height, to the apex of which is fixed a tube 2 inches (50.8 mm) long and 3/16 inch (4.76 mm) in internal diameter. A 10-mesh screen is fixed near the top across half the cone.
In American practice (and most of the oil industry) the volume collected is a quart. If water is used, the time should be 26 ± 0.5 seconds. If the time is less than this, the tube has probably been enlarged by erosion; if more, it may be blocked or damaged, and the funnel should be replaced. In some companies, and in Europe in particular, the volume collected is a litre, for which the water funnel time should be 28 seconds. Marsh himself collected 0.50 litre, for which the time was 18.5 seconds.
The Marsh funnel time is often referred to as the Marsh funnel viscosity, and represented by the abbreviation FV. The unit (seconds) is often omitted. Formally, the volume should also be stated.
The (quart) Marsh funnel time for typical drilling muds is 34 to 50 seconds, though mud mixtures to cope with some geological conditions may have a time of 100 or more seconds.
While the most common use is for drilling muds, which are non-Newtonian fluids, the Marsh funnel is not a rheometer, because it only provides one measurement under one flow condition. However, the effective viscosity can be determined from the following simple formula:
μ = ρ (t − 25)
where μ is the effective viscosity in centipoise, ρ is the density in g/cm3, and t is the quart funnel time in seconds.
For example, a mud of funnel time 40 seconds and density 1.1 g/cm3 has an effective viscosity of about 16.5 cP. For the range of times of typical muds above, the shear rate in the Marsh funnel is about 2000 s−1.
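As a quick illustration of this rule of thumb, the following Python helper (an illustrative sketch, not part of any standard) converts a quart funnel time and mud density into the effective viscosity quoted above.

def marsh_effective_viscosity(funnel_time_s, density_g_cm3):
    """Approximate effective viscosity (cP) from quart Marsh funnel time (s)
    and mud density (g/cm3), using mu = rho * (t - 25)."""
    return density_g_cm3 * (funnel_time_s - 25.0)

# Example from the text: 40 s funnel time, 1.1 g/cm3 mud
print(marsh_effective_viscosity(40, 1.1))  # -> 16.5 cP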
Other Flow Cones
The term Marsh cone is also used in the concrete and oil industries. European standard EN445 and American standard C939 for measuring the flow properties of cement grout mixtures specify a funnel similar to the Marsh cone. Some manufacturers supply devices which they call Marsh cones, with removable tubes with size ranges from 5 to 15 mm. These can be used for quality control by selecting a tube which gives a convenient time, say 30 to 60 seconds.
References
Further reading
ASTM D6910-04 Standard Test Method for Marsh Funnel Viscosity of Clay Construction Slurries
N. Roussel & R. Le Roy (2005) Cement and Concrete Research vol 35 823-830 “The Marsh Cone: a test or a rheological apparatus?”
Drilling fluid
Viscosity
Petroleum engineering | Marsh funnel | [
"Physics",
"Engineering"
] | 812 | [
"Physical phenomena",
"Physical quantities",
"Petroleum engineering",
"Energy engineering",
"Wikipedia categories named after physical quantities",
"Viscosity",
"Physical properties"
] |
12,564,556 | https://en.wikipedia.org/wiki/Pressure%20angle | Pressure angle in relation to gear teeth, also known as the angle of obliquity, is the angle between the tooth face and the gear wheel tangent. It is more precisely the angle at a pitch point between the line of pressure (which is normal to the tooth surface) and the plane tangent to the pitch surface. The pressure angle gives the direction normal to the tooth profile. The pressure angle is equal to the profile angle at the standard pitch circle and can be termed the "standard" pressure angle at that point. Standard values are 14.5 and 20 degrees. Earlier gears with pressure angle 14.5 were commonly used because the cosine is larger for a smaller angle, providing more power transmission and less pressure on the bearing; however, teeth with smaller pressure angles are weaker. To run gears together properly their pressure angles must be matched.
The pressure angle is also the angle of the sides of the trapezoidal teeth on the corresponding rack.
The force transmitted during the mating of gear teeth acts along the normal to the tooth profile. This force has one component along the pitch line and another along the line perpendicular to the pitch line. The component along the pitch line, which is responsible for power transmission, is proportional to the cosine of the pressure angle. The component that exerts thrust (perpendicular to the pitch line) is proportional to the sine of the pressure angle. So it is advised to keep the pressure angle low.
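To make the trade-off concrete, here is a small illustrative Python sketch (the 1000 N load is an arbitrary example value) that splits the tooth force into its transmitting and thrust components for the two standard pressure angles.

import math

def force_components(normal_force_n, pressure_angle_deg):
    """Split the tooth normal force into the transmitting component
    (along the pitch line, proportional to cos of the pressure angle) and the
    thrust component (perpendicular to it, proportional to sin of the angle)."""
    a = math.radians(pressure_angle_deg)
    return normal_force_n * math.cos(a), normal_force_n * math.sin(a)

for angle in (14.5, 20.0):
    transmit, thrust = force_components(1000.0, angle)  # 1000 N is an arbitrary example load
    print(f"{angle} deg: transmitting {transmit:.0f} N, thrust {thrust:.0f} N")
# 14.5 deg: ~968 N transmitted, ~250 N thrust; 20 deg: ~940 N transmitted, ~342 N thrust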
Just as there are three types of profile angle, there are three types of corresponding pressure angle: the transverse pressure angle, the normal pressure angle, and the axial pressure angle.
See also
List of gear nomenclature
Involute gear
References
Gears | Pressure angle | [
"Engineering"
] | 335 | [
"Mechanical engineering stubs",
"Mechanical engineering"
] |
12,564,973 | https://en.wikipedia.org/wiki/Kinetic%20PreProcessor | The Kinetic PreProcessor (KPP) is an open-source software tool used in atmospheric chemistry. Taking a set of chemical reactions and their rate coefficients as input, KPP generates Fortran 90, FORTRAN 77, C, or Matlab code
of the resulting ordinary differential equations (ODEs). Solving the ODEs allows the temporal integration of the kinetic system. Efficiency is obtained by exploiting the sparsity structures of the Jacobian and of the Hessian. A comprehensive suite of stiff numerical integrators is also provided. Moreover, KPP can be used to generate the tangent linear model, as well as the continuous and discrete adjoint models of the chemical system.
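To illustrate the kind of work KPP automates, here is a hand-written Python sketch (it does not use KPP's actual input syntax or generated code, and it assumes SciPy is available) in which a toy two-reaction mechanism is written out as ODEs and integrated with an implicit method suited to stiff systems.

# Illustrative sketch only: toy mechanism A -> B (k1), B + B -> C (k2),
# translated to ODEs by hand. KPP performs this translation automatically
# (in Fortran, C, or Matlab) and exploits Jacobian sparsity; nothing below
# uses KPP's real syntax.
from scipy.integrate import solve_ivp

k1, k2 = 0.5, 1.0e-2  # arbitrary rate coefficients

def rhs(t, y):
    a, b, c = y
    r1 = k1 * a          # A -> B
    r2 = k2 * b * b      # B + B -> C
    return [-r1, r1 - 2.0 * r2, r2]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 0.0], method="Radau")  # implicit stiff solver
print(sol.y[:, -1])  # final concentrations of A, B, C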
Models using KPP
BASCOE - A data assimilation system based on a chemical transport model and created by the Belgian Institute for Space Aeronomy (BIRA-IASB)
Boream - Model for the degradation of alpha-pinene
BOXMOX - Box model extensions to KPP
CMAQ - Community Multiscale Air Quality model
DSMACC - Dynamically Simple Model of Atmospheric Chemical Complexity
GEOS–Chem - Global 3-D chemical transport model for atmospheric composition
MALTE - Model to predict new aerosol formation in the lower troposphere
MCM - Master Chemical Mechanism
MECCA - Module Efficiently Calculating the Chemistry of the Atmosphere
Mistra - Microphysical Stratus model
PACT-1D - Platform for Atmospheric Chemistry and vertical Transport in 1-dimension
PALM - Meteorological modeling system for atmospheric and oceanic boundary layer flows
RACM - Regional Atmospheric Chemistry Mechanism gas-phase chemistry mechanism
WRF-Chem - Weather Research & Forecasting Model with Chemistry
See also
Chemical kinetics
Autochem
CHEMKIN
Cantera
Chemical WorkBench
External links
KPP documentation
GitHub repository
KPP web page
The Kinetic PreProcessor KPP 3.0.0
The Kinetic PreProcessor KPP-2.1
Forward, Tangent Linear, and Adjoint Runge Kutta Methods in KPP–2.2 for Efficient Chemical Kinetic Simulations
KPPA (the Kinetic PreProcessor: Accelerated)
KPP Fortran to CUDA source-to-source Pre-processor (Open License)
Computational chemistry software
Chemical kinetics
Environmental chemistry | Kinetic PreProcessor | [
"Chemistry",
"Environmental_science"
] | 453 | [
"Chemical reaction engineering",
"Computational chemistry software",
"Chemistry software",
"Environmental chemistry",
"Computational chemistry",
"nan",
"Chemical kinetics"
] |
4,581,251 | https://en.wikipedia.org/wiki/Molecular%20beacon | Molecular beacons, or molecular beacon probes, are oligonucleotide hybridization probes that can report the presence of specific nucleic acids in homogenous solutions. Molecular beacons are hairpin-shaped molecules with an internally quenched fluorophore whose fluorescence is restored when they bind to a target nucleic acid sequence. This is a novel non-radioactive method for detecting specific sequences of nucleic acids. They are useful in situations where it is either not possible or desirable to isolate the probe-target hybrids from an excess of the hybridization probes.
Molecular beacon probes
A typical molecular beacon probe is 25 nucleotides long. The middle 15 nucleotides are complementary to the target DNA or RNA and do not base pair with one another, while the five nucleotides at each terminus are complementary to each other rather than to the target DNA. A typical molecular beacon structure can be divided into four parts: 1) the loop, an 18–30 base pair region of the molecular beacon that is complementary to the target sequence; 2) the stem, formed by the attachment to both termini of the loop of two short (5 to 7 nucleotide residues) oligonucleotides that are complementary to each other; 3) the 5' fluorophore: at the 5' end of the molecular beacon, a fluorescent dye is covalently attached; 4) the 3' quencher, a non-fluorescent dye that is covalently attached to the 3' end of the molecular beacon. When the beacon is in the closed loop shape, the quencher resides in proximity to the fluorophore, which results in quenching of the fluorescent emission of the latter.
If the nucleic acid to be detected is complementary to the strand in the loop, the event of hybridization occurs. The duplex formed between the nucleic acid and the loop is more stable than that of the stem because the former duplex involves more base pairs. This causes the separation of the stem and hence of the fluorophore and the quencher. Once the fluorophore is no longer next to the quencher, illumination of the hybrid with light results in the fluorescent emission. The presence of the emission reports that the event of hybridization has occurred and hence the target nucleic acid sequence is present in the test sample.
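As a rough illustration of that open/closed logic, here is a minimal Python sketch using made-up sequences (the beacon and target below are hypothetical, chosen only so that the two 5-nucleotide stem arms pair with each other and the loop finds its complement in the target).

# Illustrative sketch with hypothetical sequences: check that a beacon's two
# 5-nt stem arms are complementary to each other and that its loop is
# complementary to a target, mirroring the structure described above.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    return seq.translate(COMP)[::-1]

def beacon_opens_on(beacon, target, stem_len=5):
    stem5, loop, stem3 = beacon[:stem_len], beacon[stem_len:-stem_len], beacon[-stem_len:]
    stem_pairs = stem3 == revcomp(stem5)    # hairpin can close on itself
    loop_binds = revcomp(loop) in target    # loop finds its complement in the target
    return stem_pairs and loop_binds

beacon = "GCGAG" + "ATGGCTAGCTAACTG" + "CTCGC"   # made-up 25-nt beacon
target = "TTTCAGTTAGCTAGCCATTTT"                 # made-up target containing the loop complement
print(beacon_opens_on(beacon, target))           # True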
Use in Cell Engineering
Fluorogenic signaling oligonucleotide probes were reported for use to detect and isolate cells expressing one or more desired genes, including the production of multigene stable cell lines expressing heteromultimeric epithelial sodium channel (αβγ-ENaC), sodium voltage-gated ion channel 1.7 (NaV1.7-αβ1β2), four unique γ-aminobutyric acid A (GABAA) receptor ion channel subunit combinations α1β3γ2s, α2β3γ2s, α3β3γ2s and α5β3γ2s, cystic fibrosis conductance regulator (CFTR), CFTR-Δ508 and two G-protein coupled receptors (GPCRs).
Synthesis
Molecular beacons are synthetic oligonucleotides whose preparation is well documented. In addition to the conventional set of nucleoside phosphoramidites, the synthesis also requires a solid support derivatized with a quencher and a phosphoramidite building block designed for the attachment of a protected fluorescent dye.
The first use of the term molecular beacons, synthesis and demonstration of function was in 1996.
Alternative homogeneous assay technologies
5'-nuclease TaqMan assay
Exciton-controlled hybridization-sensitive fluorescent oligonucleotide (ECHO) probes.
Dual Hybridization (LightCycler®) probes
Scorpions® Probes
LUX (Light Upon Extension) Probes
DNA binding dye assays (e.g., SYBR Green, SYTO9, Melt Doctor, LCGreen Plus, etc.)
Applications
SNP detection
Real-time nucleic acid detection
Real-time PCR quantification
Allelic discrimination and identification
Multiplex PCR assays
Diagnostic clinical assays
References
Biochemistry methods
Fluorescence
Genetics techniques | Molecular beacon | [
"Chemistry",
"Engineering",
"Biology"
] | 874 | [
"Biochemistry methods",
"Genetics techniques",
"Luminescence",
"Fluorescence",
"Genetic engineering",
"Biochemistry"
] |
4,581,532 | https://en.wikipedia.org/wiki/Nature%20Chemical%20Biology | Nature Chemical Biology is a monthly peer-reviewed scientific journal published by Nature Portfolio. It was established in June 2005 by founding Chief Editor Terry L. Sheppard as part of Nature Publishing Group. Sheppard was the Chief Editor of the journal 2004–2022. The current editor-in-chief is Russell Johnson.
Aims and scope
The publishing focus of Nature Chemical Biology is original research and commentary in chemical biology. Published topics encompass concepts and research methods in chemistry, biology, and related disciplines with the aim of controlling biological systems at the molecular level. Authors (contributors) include chemical biologists, chemists involved in interdisciplinary research between chemistry and biology, and biologists who produce research results in understanding and controlling biological processes at the molecular level.
Interdisciplinary research in chemistry and biology is emphasized. The journal's main focus in this area is fundamental research which illuminates available chemical and biological tools, as well as mechanisms underpinning biological processes. Also included are studies articulating applications at the molecular level when combining these two disciplines. Emphasis is also given to innovations in methods and theory produced from cross-disciplinary studies.
The readership of Nature Chemical Biology, which also functions as a forum, consists of researchers in the chemical and life sciences. Besides original research articles, the journal also publishes reviews, perspectives, highlights of research in this and other journals, correspondence, and commentaries.
Abstracting and indexing
Nature Chemical Biology is indexed in the following databases:
Chemical Abstracts Service - CASSI
Science Citation Index
Science Citation Index Expanded
Current Contents - Life Sciences
BIOSIS Previews
According to the Journal Citation Reports, the journal has a 2021 impact factor of 16.290, ranking it 13th out of 296 journals in the category "Biochemistry & Molecular Biology".
See also
Nature
Nature Physics
Nature Materials
References
External links
Official website
Nature Research academic journals
Academic journals established in 2005
Biochemistry journals
English-language journals
Monthly journals | Nature Chemical Biology | [
"Chemistry"
] | 382 | [
"Biochemistry journals",
"Biochemistry literature"
] |
4,583,491 | https://en.wikipedia.org/wiki/JOELib | JOELib is computer software, a chemical expert system used mainly to interconvert chemical file formats. Because of its strong relationship to informatics, this program belongs more to the category cheminformatics than to molecular modelling. It is available for Windows, Unix and other operating systems supporting the programming language Java. It is free and open-source software distributed under the GNU General Public License (GPL) 2.0.
History
JOELib and OpenBabel were derived from the OELib Cheminformatics library.
Logo
The project logo is just the word JOELib in the Tengwar script of J. R. R. Tolkien. The letters are grouped as JO-E-Li-b. Vowels are usually grouped together with a consonant, but two following vowels must be separated by a helper construct.
Major features
Chemical expert system
Query and substructure search (based on SMARTS, an extension of the Simplified Molecular Input Line Entry System (SMILES))
Clique detection
QSAR
Data mining
Molecule mining, special case of Structured Data Mining
Feature–descriptor calculation
Partition coefficient, log P
Rule-of-five
Partial charges
Fingerprint calculation
etc.
Chemical file formats
Chemical table file: MDL Molfile, SD format
SMILES
Gaussian
Chemical Markup Language
MOPAC
See also
OpenBabel - C++ version of JOELib-OELib
Jmol
Chemistry Development Kit (CDK)
Comparison of software for molecular mechanics modeling
Blue Obelisk
Molecule editor
List of free and open-source software packages
References
The Blue Obelisk-Interoperability in Chemical Informatics, Rajarshi Guha, Michael T. Howard, Geoffrey R. Hutchison, Peter Murray-Rust, Henry Rzepa, Christoph Steinbeck, Jörg K. Wegner, and Egon L. Willighagen, J. Chem. Inf. Model.; 2006;
External links
at SourceForge
Algorithm dictionary
Free science software
Free software programmed in Java (programming language)
Computational chemistry software
Science software for Linux | JOELib | [
"Chemistry"
] | 415 | [
"Computational chemistry",
"Computational chemistry software",
"Chemistry software"
] |
4,583,557 | https://en.wikipedia.org/wiki/OELib | OELib was an open-source cheminformatics library written by Matt Stahl and based on the ideas of OBabel. Its current GPL-licensed C++ and Java-based successors are OpenBabel and JOELib, and its commercial successor is called OEChem.
See also
JOELib
OpenBabel
External links
Archived copy of OELib in 2008 on Internet Archive.
Design flaws in OELib
References
Free science software
Chemistry software for Linux | OELib | [
"Chemistry",
"Engineering"
] | 93 | [
"Software engineering",
"Software engineering stubs",
"Chemistry software",
"Chemistry software for Linux"
] |
4,584,639 | https://en.wikipedia.org/wiki/Human%20error | Human error is an action that has been done but that was "not intended by the actor; not desired by a set of rules or an external observer; or that led the task or system outside its acceptable limits". Human error has been cited as a primary cause and contributing factor in disasters and accidents in industries as diverse as nuclear power (e.g., the Three Mile Island accident), aviation, space exploration (e.g., the Space Shuttle Challenger disaster and Space Shuttle Columbia disaster), and medicine. Prevention of human error is generally seen as a major contributor to reliability and safety of (complex) systems. Human error is one of the many contributing causes of risk events.
Definition
Human error refers to something having been done that was "not intended by the actor; not desired by a set of rules or an external observer; or that led the task or system outside its acceptable limits". In short, it is a deviation from intention, expectation or desirability. Logically, human actions can fail to achieve their goal in two different ways: the actions can go as planned, but the plan can be inadequate (leading to mistakes); or, the plan can be satisfactory, but the performance can be deficient (leading to slips and lapses). However, a mere failure is not an error if there had been no plan to accomplish something in particular.
Performance
Human error and performance are two sides of the same coin: "human error" mechanisms are the same as "human performance" mechanisms; performance later categorized as 'error' is done so in hindsight: therefore actions later termed "human error" are actually part of the ordinary spectrum of human behaviour. The study of absent-mindedness in everyday life provides ample documentation and categorization of such aspects of behavior. While human error is firmly entrenched in the classical approaches to accident investigation and risk assessment, it has no role in newer approaches such as resilience engineering.
Categories
There are many ways to categorize human error:
exogenous versus endogenous error (i.e., originating outside versus inside the individual)
situation assessment versus response planning and related distinctions in
error in problem detection (also see signal detection theory)
error in problem diagnosis (also see problem solving)
error in action planning and execution (for example: slips or errors of execution versus mistakes or errors of intention)
by level of analysis; for example, perceptual (e.g., optical illusions) versus cognitive versus communication versus organizational
physical manipulation error
'slips' occurring when the physical action fails to achieve the immediate objective
'lapses' involve a failure of one's memory or recall
active error - observable, physical action that changes equipment, system, or facility state, resulting in immediate undesired consequences
latent human error resulting in hidden organization-related weaknesses or equipment flaws that lie dormant; such errors can go unnoticed at the time they occur, having no immediate apparent outcome
equipment dependency error – lack of vigilance due to the assumption that hardware controls or physical safety devices will always work
team error – lack of vigilance created by the social (interpersonal) interaction between two or more people working together
personal dependencies error – unsafe attitudes and traps of human nature leading to complacency and overconfidence
Sources
The cognitive study of human error is a very active research field, including work related to limits of memory and attention and also to decision making strategies such as the availability heuristic and other cognitive biases. Such heuristics and biases are strategies that are useful and often correct, but can lead to systematic patterns of error.
Misunderstandings as a topic in human communication have been studied in conversation analysis, such as the examination of violations of the cooperative principle and Gricean maxims.
Organizational studies of error or dysfunction have included studies of safety culture. One technique for analyzing complex systems failure that incorporates organizational analysis is management oversight risk tree analysis.
Controversies
Some researchers have argued that the dichotomy of human actions as "correct" or "incorrect" is a harmful oversimplification of a complex phenomenon. A focus on the variability of human performance and how human operators (and organizations) can manage that variability, may be a more fruitful approach. Newer approaches, such as resilience engineering mentioned above, highlight the positive roles that humans can play in complex systems. In resilience engineering, successes (things that go right) and failures (things that go wrong) are seen as having the same basis, namely human performance variability. A specific account of that is the efficiency–thoroughness trade-off principle, which can be found on all levels of human activity, in individuals as well as in groups.
See also
Behavior-shaping constraint
Error-tolerant design
Human reliability
Poka-yoke
SHELL model
User error
Technique for human error-rate prediction
Fallacy
To err is human
References
External links
Human reliability
Error
Management cybernetics | Human error | [
"Engineering"
] | 1,002 | [
"Human reliability",
"Systems engineering",
"Reliability engineering"
] |
4,584,863 | https://en.wikipedia.org/wiki/Aza-crown%20ether | In organic chemistry, an aza-crown ether is an aza analogue of a crown ether (cyclic polyether). That is, it has a nitrogen atom (amine linkage, –NH–) in place of each oxygen atom (ether linkage, –O–) around the ring. While the parent crown ethers have the formulae (CH2CH2O)n, the parent aza-crown ethers have the formulae (CH2CH2NH)n, where n = 3, 4, 5, 6. Well-studied aza crowns include triazacyclononane (n = 3), cyclen (n = 4), and hexaaza-18-crown-6 (n = 6).
Synthesis
The synthesis of aza crown ethers is subject to the challenges associated with the preparation of macrocycles. The 18-membered ring in (CH2CH2NH)6 can be synthesized by combining two triamine components. By reaction with tosyl chloride, diethylene triamine is converted to a derivative with two secondary sulfonamides. This compound serves as a building block for macrocyclizations.
Variants
Many kinds of aza crown ethers exist.
Variable length linkers
Aza crowns often feature trimethylene ((CH2)3) as well as ethylene ((CH2)2) linkages. One example is cyclam (1,4,8,11-tetraazacyclotetradecane).
Tertiary amines
In many aza-crown ethers some or all of the amines are tertiary. One example is the tri(tertiary amine) (CH2CH2NCH3)3, known as trimethyltriazacyclononane. Cryptands, three-dimensional aza crowns, feature tertiary amines.
Mixed ether-amine ligands
Another large class of macrocyclic ligands features both ether and amine linkages. One example is the diaza-18-crown-6, [(CH2CH2O)2(CH2CH2NH)]2.
Lariat crowns
The presence of the amine allows the formation of lariat crown ethers, which feature sidearms that augment the complexation of cations.
References
Polyamines
Secondary amines
Tertiary amines
Ethyleneamines
Chelating agents
Macrocycles | Aza-crown ether | [
"Chemistry"
] | 483 | [
"Organic compounds",
"Chelating agents",
"Macrocycles",
"Process chemicals"
] |
4,585,250 | https://en.wikipedia.org/wiki/Magnetic%20tension | In physics, magnetic tension is a restoring force with units of force density that acts to straighten bent magnetic field lines. In SI units, the force density exerted perpendicular to a magnetic field B can be expressed as
(B · ∇)B / μ0,
where μ0 is the vacuum permeability.
Magnetic tension forces also rely on vector current densities and their interaction with the magnetic field. Plotting magnetic tension along adjacent field lines can give a picture as to their divergence and convergence with respect to each other as well as current densities.
Magnetic tension is analogous to the restoring force of rubber bands.
Mathematical statement
In ideal magnetohydrodynamics (MHD) the magnetic tension force in an electrically conducting fluid with a bulk plasma velocity field v, current density J, mass density ρ, magnetic field B, and plasma pressure p can be derived from the Cauchy momentum equation:
ρ (∂v/∂t + v · ∇v) = J × B − ∇p,
where the first term on the right hand side represents the Lorentz force and the second term represents pressure gradient forces. The Lorentz force can be expanded using Ampère's law, J = (∇ × B)/μ0, and the vector identity
½ ∇(B · B) = (B · ∇)B + B × (∇ × B)
to give
J × B = (B · ∇)B/μ0 − ∇(B²/2μ0),
where the first term on the right hand side is the magnetic tension and the second term is the magnetic pressure force.
The force due to changes in the magnitude of B and its direction can be separated by writing B = B b̂, with B = |B| and b̂ a unit vector:
(B · ∇)B/μ0 = (B²/μ0)(b̂ · ∇)b̂ = (B²/μ0) κ,
where the spatial constancy of the magnitude B has been assumed and the curvature vector
κ = (b̂ · ∇)b̂
has magnitude equal to the curvature, or the reciprocal of the radius of curvature, and is directed from a point on a magnetic field line to the center of curvature. Therefore, as the curvature of the magnetic field line increases, so too does the magnetic tension force resisting this curvature.
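For a rough sense of scale, the following Python sketch (field strength and radius of curvature are arbitrary illustrative values) evaluates the magnitude of the tension force density B²/(μ0 Rc) for a bent field line.

import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, T*m/A

def tension_force_density(b_tesla, radius_of_curvature_m):
    """Magnetic tension force density B^2 / (mu0 * Rc), in N/m^3, for a field
    line of strength B bent with radius of curvature Rc."""
    return b_tesla**2 / (MU0 * radius_of_curvature_m)

# Arbitrary illustrative numbers: a 10 mT field bent on a 1 m radius of curvature
print(tension_force_density(0.01, 1.0))  # ~79.6 N/m^3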
Magnetic tension and pressure are both implicitly included in the Maxwell stress tensor. Terms representing these two forces are present along the main diagonal where they act on differential area elements normal to the corresponding axis.
Plasma physics
Magnetic tension is particularly important in plasma physics and MHD, where it controls dynamics of some systems and the shape of magnetic structures. For example, in a homogeneous magnetic field and an absence of gravity, magnetic tension is the sole driver of linear Alfvén waves.
See also
Magnetic pinch
Magnetosonic wave
References
Magnetic circuits
Plasma parameters
Magnetohydrodynamics | Magnetic tension | [
"Chemistry"
] | 435 | [
"Magnetohydrodynamics",
"Fluid dynamics"
] |
4,587,555 | https://en.wikipedia.org/wiki/Omega%20meson | The omega meson () is a flavourless meson formed from a superposition of an up quark–antiquark and a down quark–antiquark pair. It is part of the vector meson nonet and mediates the nuclear force along with pions and rho mesons.
Properties
The most common decay mode for the ω meson is ω → π+π−π0 at 89.2±0.7%, followed by ω → π0γ at 8.34±0.26%.
The quark composition of the ω meson can be thought of as a mix between uū, dd̄ and ss̄ states, but it is very nearly a pure symmetric uū–dd̄ state. This can be shown by deconstructing the wave function of the ω into its component parts. We see that the ω and φ mesons are mixtures of the SU(3) wave functions as follows:
φ = ψ8 cos θ − ψ1 sin θ,
ω = ψ8 sin θ + ψ1 cos θ,
where
θ is the nonet mixing angle,
ψ8 = (uū + dd̄ − 2ss̄)/√6
and
ψ1 = (uū + dd̄ + ss̄)/√3.
The mixing angle at which the components decouple completely can be calculated to be θ = arctan(1/√2) ≈ 35.3°, which almost corresponds to the actual value of 35° calculated from the masses. Therefore, the ω meson is nearly a pure symmetric uū–dd̄ state.
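As a quick numerical check of that decoupling angle, the following Python snippet evaluates arctan(1/√2) and compares it with the ~35° value inferred from the meson masses.

import math

ideal_mixing_angle = math.degrees(math.atan(1.0 / math.sqrt(2.0)))
print(f"arctan(1/sqrt(2)) = {ideal_mixing_angle:.2f} degrees")  # ~35.26 deg, close to the ~35 deg from the masses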
See also
List of mesons
Quark model
Vector meson
References
Mesons
Onia
Subatomic particles with spin 1 | Omega meson | [
"Physics"
] | 251 | [
"Particle physics stubs",
"Particle physics"
] |