id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
144,652 | https://en.wikipedia.org/wiki/Riemannian%20manifold | In differential geometry, a Riemannian manifold is a geometric space on which many geometric notions such as distance, angles, length, volume, and curvature are defined. Euclidean space, the -sphere, hyperbolic space, and smooth surfaces in three-dimensional space, such as ellipsoids and paraboloids, are all examples of Riemannian manifolds. Riemannian manifolds are named after German mathematician Bernhard Riemann, who first conceptualized them.
Formally, a Riemannian metric (or just a metric) on a smooth manifold is a choice of inner product for each tangent space of the manifold. A Riemannian manifold is a smooth manifold together with a Riemannian metric. The techniques of differential and integral calculus are used to pull geometric data out of the Riemannian metric. For example, integration leads to the Riemannian distance function, whereas differentiation is used to define curvature and parallel transport.
Any smooth surface in three-dimensional Euclidean space is a Riemannian manifold with a Riemannian metric coming from the way it sits inside the ambient space. The same is true for any submanifold of Euclidean space of any dimension. Although John Nash proved that every Riemannian manifold arises as a submanifold of Euclidean space, and although some Riemannian manifolds are naturally exhibited or defined in that way, the idea of a Riemannian manifold emphasizes the intrinsic point of view, which defines geometric notions directly on the abstract space itself without referencing an ambient space. In many instances, such as for hyperbolic space and projective space, Riemannian metrics are more naturally defined or constructed using the intrinsic point of view. Additionally, many metrics on Lie groups and homogeneous spaces are defined intrinsically by using group actions to transport an inner product on a single tangent space to the entire manifold, and many special metrics such as constant scalar curvature metrics and Kähler–Einstein metrics are constructed intrinsically using tools from partial differential equations.
Riemannian geometry, the study of Riemannian manifolds, has deep connections to other areas of math, including geometric topology, complex geometry, and algebraic geometry. Applications include physics (especially general relativity and gauge theory), computer graphics, machine learning, and cartography. Generalizations of Riemannian manifolds include pseudo-Riemannian manifolds, Finsler manifolds, and sub-Riemannian manifolds.
History
In 1827, Carl Friedrich Gauss discovered that the Gaussian curvature of a surface embedded in 3-dimensional space only depends on local measurements made within the surface (the first fundamental form). This result is known as the Theorema Egregium ("remarkable theorem" in Latin).
A map that preserves the local measurements of a surface is called a local isometry. Call a property of a surface an intrinsic property if it is preserved by local isometries and call it an extrinsic property if it is not. In this language, the Theorema Egregium says that the Gaussian curvature is an intrinsic property of surfaces.
Riemannian manifolds and their curvature were first introduced non-rigorously by Bernhard Riemann in 1854. However, they would not be formalized until much later. In fact, the more primitive concept of a smooth manifold was first explicitly defined only in 1913 in a book by Hermann Weyl.
Élie Cartan introduced the Cartan connection, one of the first concepts of a connection. Levi-Civita defined the Levi-Civita connection, a special connection on a Riemannian manifold.
Albert Einstein used the theory of pseudo-Riemannian manifolds (a generalization of Riemannian manifolds) to develop general relativity. Specifically, the Einstein field equations are constraints on the curvature of spacetime, which is a 4-dimensional pseudo-Riemannian manifold.
Definition
Riemannian metrics and Riemannian manifolds
Let be a smooth manifold. For each point , there is an associated vector space called the tangent space of at . Vectors in are thought of as the vectors tangent to at .
However, does not come equipped with an inner product, a measuring stick that gives tangent vectors a concept of length and angle. This is an important deficiency because calculus teaches that to calculate the length of a curve, the length of vectors tangent to the curve must be defined. A Riemannian metric puts a measuring stick on every tangent space.
A Riemannian metric on assigns to each a positive-definite inner product in a smooth way (see the section on regularity below). This induces a norm defined by . A smooth manifold endowed with a Riemannian metric is a Riemannian manifold, denoted . A Riemannian metric is a special case of a metric tensor.
A Riemannian metric is not to be confused with the distance function of a metric space, which is also called a metric.
The Riemannian metric in coordinates
If are smooth local coordinates on , the vectors
form a basis of the vector space for any . Relative to this basis, one can define the Riemannian metric's components at each point by
.
These functions can be put together into an matrix-valued function on . The requirement that is a positive-definite inner product then says exactly that this matrix-valued function is a symmetric positive-definite matrix at .
In terms of the tensor algebra, the Riemannian metric can be written in terms of the dual basis of the cotangent bundle as
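The displayed formula is missing from this extraction. A standard reconstruction of the coordinate expression, together with a small worked example (the symbols g_ij, x^i and the polar coordinates r, θ are chosen here only for illustration), is the following sketch:

```latex
g \;=\; \sum_{i,j=1}^{n} g_{ij}\, dx^{i} \otimes dx^{j},
\qquad
g_{ij}(p) \;=\; g_p\!\left(\left.\tfrac{\partial}{\partial x^{i}}\right|_{p},\, \left.\tfrac{\partial}{\partial x^{j}}\right|_{p}\right).
% Worked example: in polar coordinates (r, \theta) on the punctured plane,
% the Euclidean metric reads
g \;=\; dr \otimes dr + r^{2}\, d\theta \otimes d\theta,
\qquad
(g_{ij}) \;=\; \begin{pmatrix} 1 & 0 \\ 0 & r^{2} \end{pmatrix},
% a symmetric positive-definite matrix at every point with r > 0.
```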
Regularity of the Riemannian metric
The Riemannian metric is continuous if its components are continuous in any smooth coordinate chart. The Riemannian metric is smooth if its components are smooth in any smooth coordinate chart. One can consider many other types of Riemannian metrics in this spirit, such as Lipschitz Riemannian metrics or measurable Riemannian metrics.
There are situations in geometric analysis in which one wants to consider non-smooth Riemannian metrics. See for instance (Gromov 1999) and (Shi and Tam 2002). However, in this article, is assumed to be smooth unless stated otherwise.
Musical isomorphism
In analogy to how an inner product on a vector space induces an isomorphism between a vector space and its dual given by , a Riemannian metric induces an isomorphism of bundles between the tangent bundle and the cotangent bundle. Namely, if is a Riemannian metric, then
is an isomorphism of smooth vector bundles from the tangent bundle to the cotangent bundle .
Isometries
An isometry is a function between Riemannian manifolds which preserves all of the structure of Riemannian manifolds. If two Riemannian manifolds have an isometry between them, they are called isometric, and they are considered to be the same manifold for the purpose of Riemannian geometry.
Specifically, if and are two Riemannian manifolds, a diffeomorphism is called an isometry if , that is, if
for all and . For example, translations and rotations are both isometries from Euclidean space (to be defined soon) to itself.
One says that a smooth map , not assumed to be a diffeomorphism, is a local isometry if every has an open neighborhood such that is an isometry (and thus a diffeomorphism).
Volume
An oriented -dimensional Riemannian manifold has a unique -form called the Riemannian volume form. The Riemannian volume form is preserved by orientation-preserving isometries. The volume form gives rise to a measure on which allows measurable functions to be integrated. If is compact, the volume of is .
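As a numeric illustration (a sketch, assuming the standard facts that the volume density in oriented local coordinates is the square root of det(g_ij) and that the round metric on the unit 2-sphere reads dθ² + sin²θ dφ² in spherical coordinates; NumPy is assumed), the Riemannian volume of the unit 2-sphere is recovered as 4π:

```python
import numpy as np

# Riemannian volume of the unit 2-sphere from the round metric d(theta)^2 + sin(theta)^2 d(phi)^2.
# In these coordinates sqrt(det g) = sin(theta), and the density does not depend on phi.
n = 200000
dtheta = np.pi / n
theta = (np.arange(n) + 0.5) * dtheta                   # midpoints of a partition of [0, pi]
volume = np.sum(np.sin(theta)) * dtheta * 2.0 * np.pi   # integrate the density over theta, then phi
print(volume, 4.0 * np.pi)                              # both ~12.566
```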
Examples
Euclidean space
Let denote the standard coordinates on The (canonical) Euclidean metric is given by
or equivalently
or equivalently by its coordinate functions
where is the Kronecker delta
which together form the matrix
The Riemannian manifold is called Euclidean space.
Submanifolds
Let be a Riemannian manifold and let be an immersed submanifold or an embedded submanifold of . The pullback of is a Riemannian metric on , and is said to be a Riemannian submanifold of .
In the case where , the map is given by and the metric is just the restriction of to vectors tangent along . In general, the formula for is
where is the pushforward of by
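A small symbolic check of this pullback construction, using the unit sphere of the examples below (SymPy is assumed; the spherical coordinates θ, φ are chosen for illustration), might look like this sketch:

```python
import sympy as sp

# Pull back the Euclidean metric of R^3 to the unit 2-sphere along the inclusion
# i(theta, phi) = (sin(theta)cos(phi), sin(theta)sin(phi), cos(theta)).
theta, phi = sp.symbols('theta phi', real=True)
incl = sp.Matrix([sp.sin(theta) * sp.cos(phi),
                  sp.sin(theta) * sp.sin(phi),
                  sp.cos(theta)])
J = incl.jacobian([theta, phi])       # the differential (pushforward) of the inclusion
g_induced = sp.simplify(J.T * J)      # components of the pullback metric in (theta, phi)
print(g_induced)                      # Matrix([[1, 0], [0, sin(theta)**2]]): the round metric
```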
Examples:
The -sphere
is a smooth embedded submanifold of Euclidean space . The Riemannian metric this induces on is called the round metric or standard metric.
Fix real numbers . The ellipsoid
is a smooth embedded submanifold of Euclidean space .
The graph of a smooth function is a smooth embedded submanifold of with its standard metric.
If is not simply connected, there is a covering map , where is the universal cover of . This is an immersion (since it is locally a diffeomorphism), so automatically inherits a Riemannian metric. By the same principle, any smooth covering space of a Riemannian manifold inherits a Riemannian metric.
On the other hand, if already has a Riemannian metric , then the immersion (or embedding) is called an isometric immersion (or isometric embedding) if . Hence isometric immersions and isometric embeddings are Riemannian submanifolds.
Products
Let and be two Riemannian manifolds, and consider the product manifold . The Riemannian metrics and naturally put a Riemannian metric on which can be described in a few ways.
Considering the decomposition one may define
If is a smooth coordinate chart on and is a smooth coordinate chart on , then is a smooth coordinate chart on . Let be the representation of in the chart and let be the representation of in the chart . The representation of in the coordinates is
where
For example, consider the -torus . If each copy of is given the round metric, the product Riemannian manifold is called the flat torus. As another example, the Riemannian product , where each copy of has the Euclidean metric, is isometric to with the Euclidean metric.
Positive combinations of metrics
Let be Riemannian metrics on If are any positive smooth functions on , then is another Riemannian metric on
Every smooth manifold admits a Riemannian metric
Theorem: Every smooth manifold admits a (non-canonical) Riemannian metric.
This is a fundamental result. Although much of the basic theory of Riemannian metrics can be developed using only that a smooth manifold is a locally Euclidean topological space, for this result it is necessary to use that smooth manifolds are Hausdorff and paracompact. The reason is that the proof makes use of a partition of unity.
Let be a smooth manifold and a locally finite atlas so that are open subsets and are diffeomorphisms. Such an atlas exists because the manifold is paracompact.
Let be a differentiable partition of unity subordinate to the given atlas, i.e. such that for all .
Define a Riemannian metric on by
where
Here is the Euclidean metric on and is its pullback along . While is only defined on , the product is defined and smooth on since . It takes the value 0 outside of . Because the atlas is locally finite, at every point the sum contains only finitely many nonzero terms, so the sum converges. It is straightforward to check that is a Riemannian metric.
An alternative proof uses the Whitney embedding theorem to embed into Euclidean space and then pulls back the metric from Euclidean space to . On the other hand, the Nash embedding theorem states that, given any smooth Riemannian manifold there is an embedding for some such that the pullback by of the standard Riemannian metric on is That is, the entire structure of a smooth Riemannian manifold can be encoded by a diffeomorphism to a certain embedded submanifold of some Euclidean space. Therefore, one could argue that nothing can be gained from the consideration of abstract smooth manifolds and their Riemannian metrics. However, there are many natural smooth Riemannian manifolds, such as the set of rotations of three-dimensional space and hyperbolic space, of which any representation as a submanifold of Euclidean space will fail to represent their remarkable symmetries and properties as clearly as their abstract presentations do.
Metric space structure
An admissible curve is a piecewise smooth curve whose velocity is nonzero everywhere it is defined. The nonnegative function is defined on the interval except at finitely many points. The length of an admissible curve is defined as
The integrand is bounded and continuous except at finitely many points, so it is integrable. For a connected Riemannian manifold, define by
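As a numeric sketch of these definitions (assuming the round metric on the unit 2-sphere in spherical coordinates and NumPy; the particular curve and endpoints are chosen only for illustration), one can compare the length of a single admissible curve with the distance between its endpoints:

```python
import numpy as np

# Length of an admissible curve on the unit 2-sphere with the round metric
# ds^2 = d(theta)^2 + sin(theta)^2 d(phi)^2, versus the Riemannian distance of its endpoints.
def curve_length(theta, phi):
    dth, dph = np.diff(theta), np.diff(phi)
    th_mid = 0.5 * (theta[:-1] + theta[1:])
    return np.sum(np.sqrt(dth ** 2 + np.sin(th_mid) ** 2 * dph ** 2))

t = np.linspace(0.0, 1.0, 20001)
theta, phi = np.full_like(t, np.pi / 4), np.pi * t           # a path along the 45-degree latitude circle
p = np.array([np.sin(np.pi / 4), 0.0, np.cos(np.pi / 4)])    # endpoints as unit vectors in R^3
q = np.array([-np.sin(np.pi / 4), 0.0, np.cos(np.pi / 4)])
print(curve_length(theta, phi))   # ~2.22, the length of this particular admissible curve
print(np.arccos(p @ q))           # ~1.57, the distance d(p, q); the infimum is strictly smaller here
```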
Theorem: is a metric space, and the metric topology on coincides with the topology on .
In verifying that satisfies all of the axioms of a metric space, the most difficult part is checking that implies . Verification of the other metric space axioms is omitted.
There must be some precompact open set around p which every curve from p to q must escape. By selecting this open set to be contained in a coordinate chart, one can reduce the claim to the well-known fact that, in Euclidean geometry, the shortest curve between two points is a line. In particular, as seen by the Euclidean geometry of a coordinate chart around p, any curve from p to q must first pass through a certain "inner radius." The assumed continuity of the Riemannian metric g only allows this "coordinate chart geometry" to distort the "true geometry" by some bounded factor.
To be precise, let be a smooth coordinate chart with and Let be an open subset of with By continuity of and compactness of there is a positive number such that for any and any where denotes the Euclidean norm induced by the local coordinates. Let R denote .
Now, given any admissible curve from p to q, there must be some minimal such that clearly
The length of is at least as large as the restriction of to So
The integral which appears here represents the Euclidean length of a curve from 0 to , and so it is greater than or equal to R. So we conclude
The observation about comparison between lengths measured by g and Euclidean lengths measured in a smooth coordinate chart, also verifies that the metric space topology of coincides with the original topological space structure of .
Although the length of a curve is given by an explicit formula, it is generally impossible to write out the distance function by any explicit means. In fact, if is compact, there always exist points where is non-differentiable, and it can be remarkably difficult to even determine the location or nature of these points, even in seemingly simple cases such as when is an ellipsoid.
If one works with Riemannian metrics that are merely continuous but possibly not smooth, the length of an admissible curve and the Riemannian distance function are defined exactly the same, and, as before, is a metric space and the metric topology on coincides with the topology on .
Diameter
The diameter of the metric space is
The Hopf–Rinow theorem shows that if is complete and has finite diameter, it is compact. Conversely, if is compact, then the function has a maximum, since it is a continuous function on a compact metric space. This proves the following.
If is complete, then it is compact if and only if it has finite diameter.
This is not the case without the completeness assumption; for counterexamples one could consider any open bounded subset of a Euclidean space with the standard Riemannian metric. It is also not true that any complete metric space of finite diameter must be compact; it matters that the metric space came from a Riemannian manifold.
Connections, geodesics, and curvature
Connections
An (affine) connection is an additional structure on a Riemannian manifold that defines differentiation of one vector field with respect to another. Connections contain geometric data, and two Riemannian manifolds with different connections have different geometry.
Let denote the space of vector fields on . An (affine) connection
on is a bilinear map
such that
For every function ,
The product rule holds.
The expression is called the covariant derivative of with respect to .
Levi-Civita connection
Two Riemannian manifolds with different connections have different geometry. Thankfully, there is a natural connection associated to a Riemannian manifold called the Levi-Civita connection.
A connection is said to preserve the metric if
A connection is torsion-free if
where is the Lie bracket.
A Levi-Civita connection is a torsion-free connection that preserves the metric. Once a Riemannian metric is fixed, there exists a unique Levi-Civita connection. Note that the definition of preserving the metric uses the regularity of .
Covariant derivative along a curve
If is a smooth curve, a smooth vector field along is a smooth map such that for all . The set of smooth vector fields along is a vector space under pointwise vector addition and scalar multiplication. One can also pointwise multiply a smooth vector field along by a smooth function :
for
Let be a smooth vector field along . If is a smooth vector field on a neighborhood of the image of such that , then is called an extension of .
Given a fixed connection on and a smooth curve , there is a unique operator , called the covariant derivative along , such that:
If is an extension of , then .
Geodesics
Geodesics are curves with no intrinsic acceleration. Equivalently, geodesics are curves that locally take the shortest path between two points. They are the generalization of straight lines in Euclidean space to arbitrary Riemannian manifolds. An ant living in a Riemannian manifold walking straight ahead without making any effort to accelerate or turn would trace out a geodesic.
Fix a connection on . Let be a smooth curve. The acceleration of is the vector field along . If for all , is called a geodesic.
For every and , there exists a geodesic defined on some open interval containing 0 such that and . Any two such geodesics agree on their common domain. Taking the union over all open intervals containing 0 on which a geodesic satisfying and exists, one obtains a geodesic called a maximal geodesic of which every geodesic satisfying and is a restriction.
Every curve that has the shortest length of any admissible curve with the same endpoints as is a geodesic (in a unit-speed reparameterization).
Examples
The nonconstant maximal geodesics of the Euclidean plane are exactly the straight lines. This agrees with the fact from Euclidean geometry that the shortest path between two points is a straight line segment.
The nonconstant maximal geodesics of with the round metric are exactly the great circles. Since the Earth is approximately a sphere, this means that the shortest path a plane can fly between two locations on Earth is a segment of a great circle.
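The great-circle statement can be checked numerically. Below is a minimal sketch (SciPy and NumPy are assumed) that integrates the standard geodesic equations of the round metric in spherical coordinates and confirms that a geodesic starting on the equator with an easterly velocity stays on the equator:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Geodesic equations of the round metric d(theta)^2 + sin(theta)^2 d(phi)^2 on the 2-sphere:
#   theta'' = sin(theta) cos(theta) (phi')^2,    phi'' = -2 cot(theta) theta' phi'.
def geodesic(t, y):
    th, ph, dth, dph = y
    return [dth, dph,
            np.sin(th) * np.cos(th) * dph ** 2,
            -2.0 * dth * dph / np.tan(th)]

# Start on the equator (theta = pi/2) with velocity purely in the phi direction.
sol = solve_ivp(geodesic, (0.0, 2.0 * np.pi), [np.pi / 2, 0.0, 0.0, 1.0], rtol=1e-9, atol=1e-12)
print(sol.y[0].min(), sol.y[0].max())   # theta remains ~pi/2: the geodesic traces the equator, a great circle
```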
Hopf–Rinow theorem
The Riemannian manifold with its Levi-Civita connection is geodesically complete if the domain of every maximal geodesic is . The plane is geodesically complete. On the other hand, the punctured plane with the restriction of the Riemannian metric from is not geodesically complete as the maximal geodesic with initial conditions , does not have domain .
The Hopf–Rinow theorem characterizes geodesically complete manifolds.
Theorem: Let be a connected Riemannian manifold. The following are equivalent:
The metric space is complete (every -Cauchy sequence converges),
All closed and bounded subsets of are compact,
is geodesically complete.
Parallel transport
In Euclidean space, all tangent spaces are canonically identified with each other via translation, so it is easy to move vectors from one tangent space to another. Parallel transport is a way of moving vectors from one tangent space to another along a curve in the setting of a general Riemannian manifold. Given a fixed connection, there is a unique way to do parallel transport.
Specifically, call a smooth vector field along a smooth curve parallel along if identically. Fix a curve with and . To parallel transport a vector to a vector in along , first extend to a vector field parallel along , and then take the value of this vector field at .
The images below show parallel transport induced by the Levi-Civita connection associated to two different Riemannian metrics on the punctured plane . The curve the parallel transport is done along is the unit circle. In polar coordinates, the metric on the left is the standard Euclidean metric , while the metric on the right is . This second metric has a singularity at the origin, so it does not extend past the puncture, but the first metric extends to the entire plane.
Warning: This is parallel transport on the punctured plane along the unit circle, not parallel transport on the unit circle. Indeed, in the first image, the vectors fall outside of the tangent space to the unit circle.
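For the left-hand (Euclidean) metric, the parallel transport around the unit circle can be reproduced with a short computation (a sketch assuming SciPy; the metric is written in polar coordinates, and at r = 1 the parallel transport equation reduces to the linear system shown in the comments):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parallel transport of V = (V^r, V^theta) along the unit circle gamma(t) = (r = 1, theta = t)
# for the flat metric dr^2 + r^2 d(theta)^2.  At r = 1 the nonzero Christoffel symbols give
#   dV^r/dt = V^theta,    dV^theta/dt = -V^r.
sol = solve_ivp(lambda t, V: [V[1], -V[0]], (0.0, 2.0 * np.pi), [1.0, 0.0], rtol=1e-9)
print(sol.y[:, -1])   # ~[1, 0]: the vector returns to itself; the holonomy of the flat metric is trivial
```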
Riemann curvature tensor
The Riemann curvature tensor measures precisely the extent to which parallel transporting vectors around a small rectangle is not the identity map. The Riemann curvature tensor is 0 at every point if and only if the manifold is locally isometric to Euclidean space.
Fix a connection on . The Riemann curvature tensor is the map defined by
where is the Lie bracket of vector fields. The Riemann curvature tensor is a -tensor field.
Ricci curvature tensor
Fix a connection on . The Ricci curvature tensor is
where is the trace. The Ricci curvature tensor is a covariant 2-tensor field.
Einstein manifolds
The Ricci curvature tensor plays a defining role in the theory of Einstein manifolds, which has applications to the study of gravity. A (pseudo-)Riemannian metric is called an Einstein metric if Einstein's equation
for some constant
holds, and a (pseudo-)Riemannian manifold whose metric is Einstein is called an Einstein manifold. Examples of Einstein manifolds include Euclidean space, the -sphere, hyperbolic space, and complex projective space with the Fubini-Study metric.
Scalar curvature
Constant curvature and space forms
A Riemannian manifold is said to have constant curvature if every sectional curvature equals the number . This is equivalent to the condition that, relative to any coordinate chart, the Riemann curvature tensor can be expressed in terms of the metric tensor as
This implies that the Ricci curvature is given by and the scalar curvature is , where is the dimension of the manifold. In particular, every Riemannian manifold of constant curvature is an Einstein manifold, thereby having constant scalar curvature. As found by Bernhard Riemann in his 1854 lecture introducing Riemannian geometry, the locally-defined Riemannian metric
has constant curvature . Any two Riemannian manifolds of the same constant curvature are locally isometric, and so it follows that any Riemannian manifold of constant curvature can be covered by coordinate charts relative to which the metric has the above form.
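The displayed formulas are missing from this extraction. A standard reconstruction (writing κ for the constant curvature, n for the dimension, and using one common index-ordering convention for the curvature tensor) is the following sketch:

```latex
R_{ijkl} \;=\; \kappa \left( g_{ik} g_{jl} - g_{il} g_{jk} \right),
\qquad
\operatorname{Ric} \;=\; (n-1)\,\kappa\, g,
\qquad
S \;=\; n(n-1)\,\kappa,
% and Riemann's locally-defined metric of constant curvature \kappa:
g \;=\; \frac{(dx^{1})^{2} + \cdots + (dx^{n})^{2}}{\Bigl( 1 + \tfrac{\kappa}{4} \bigl( (x^{1})^{2} + \cdots + (x^{n})^{2} \bigr) \Bigr)^{2}}.
```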
A Riemannian space form is a Riemannian manifold with constant curvature which is additionally connected and geodesically complete. A Riemannian space form is said to be a spherical space form if the curvature is positive, a Euclidean space form if the curvature is zero, and a hyperbolic space form or hyperbolic manifold if the curvature is negative. In any dimension, the sphere with its standard Riemannian metric, Euclidean space, and hyperbolic space are Riemannian space forms of constant curvature , , and respectively. Furthermore, the Killing–Hopf theorem says that any simply-connected spherical space form is homothetic to the sphere, any simply-connected Euclidean space form is homothetic to Euclidean space, and any simply-connected hyperbolic space form is homothetic to hyperbolic space.
Using the covering manifold construction, any Riemannian space form is isometric to the quotient manifold of a simply-connected Riemannian space form, modulo a certain group action of isometries. For example, the isometry group of the -sphere is the orthogonal group . Given any finite subgroup thereof in which only the identity matrix possesses as an eigenvalue, the natural group action of the orthogonal group on the -sphere restricts to a group action of , with the quotient manifold inheriting a geodesically complete Riemannian metric of constant curvature . Up to homothety, every spherical space form arises in this way; this largely reduces the study of spherical space forms to problems in group theory. For instance, this can be used to show directly that every even-dimensional spherical space form is homothetic to the standard metric on either the sphere or real projective space. There are many more odd-dimensional spherical space forms, although there are known algorithms for their classification. The list of three-dimensional spherical space forms is infinite but explicitly known, and includes the lens spaces and the Poincaré dodecahedral space.
The case of Euclidean and hyperbolic space forms can likewise be reduced to group theory, based on study of the isometry group of Euclidean space and hyperbolic space. For example, the class of two-dimensional Euclidean space forms includes Riemannian metrics on the Klein bottle, the Möbius strip, the torus, the cylinder , along with the Euclidean plane. Unlike the case of two-dimensional spherical space forms, in some cases two space form structures on the same manifold are not homothetic. The case of two-dimensional hyperbolic space forms is even more complicated, having to do with Teichmüller space. In three dimensions, the Euclidean space forms are known, while the geometry of hyperbolic space forms in three and higher dimensions remains an area of active research known as hyperbolic geometry.
Riemannian metrics on Lie groups
Left-invariant metrics on Lie groups
Let be a Lie group, such as the group of rotations in three-dimensional space. Using the group structure, any inner product on the tangent space at the identity (or any other particular tangent space) can be transported to all other tangent spaces to define a Riemannian metric. Formally, given an inner product on the tangent space at the identity, the inner product on the tangent space at an arbitrary point is defined by
where for arbitrary , is the left multiplication map sending a point to . Riemannian metrics constructed this way are left-invariant; right-invariant Riemannian metrics could be constructed likewise using the right multiplication map instead.
The Levi-Civita connection and curvature of a general left-invariant Riemannian metric can be computed explicitly in terms of , the adjoint representation of , and the Lie algebra associated to . These formulas simplify considerably in the special case of a Riemannian metric which is bi-invariant (that is, simultaneously left- and right-invariant). All left-invariant metrics have constant scalar curvature.
Left- and bi-invariant metrics on Lie groups are an important source of examples of Riemannian manifolds. Berger spheres, constructed as left-invariant metrics on the special unitary group SU(2), are among the simplest examples of the collapsing phenomena, in which a simply-connected Riemannian manifold can have small volume without having large curvature. They also give an example of a Riemannian metric which has constant scalar curvature but which is not Einstein, or even of parallel Ricci curvature. Hyperbolic space can be given a Lie group structure relative to which the metric is left-invariant. Any bi-invariant Riemannian metric on a Lie group has nonnegative sectional curvature, giving a variety of such metrics: a Lie group can be given a bi-invariant Riemannian metric if and only if it is the product of a compact Lie group with an abelian Lie group.
Homogeneous spaces
A Riemannian manifold is said to be homogeneous if for every pair of points and in , there is some isometry of the Riemannian manifold sending to . This can be rephrased in the language of group actions as the requirement that the natural action of the isometry group is transitive. Every homogeneous Riemannian manifold is geodesically complete and has constant scalar curvature.
Up to isometry, all homogeneous Riemannian manifolds arise by the following construction. Given a Lie group with compact subgroup which does not contain any nontrivial normal subgroup of , fix any complemented subspace of the Lie algebra of within the Lie algebra of . If this subspace is invariant under the linear map for any element of , then -invariant Riemannian metrics on the coset space are in one-to-one correspondence with those inner products on which are invariant under for every element of . Each such Riemannian metric is homogeneous, with naturally viewed as a subgroup of the full isometry group.
The above example of Lie groups with left-invariant Riemannian metrics arises as a very special case of this construction, namely when is the trivial subgroup containing only the identity element. The calculations of the Levi-Civita connection and the curvature referenced there can be generalized to this context, where now the computations are formulated in terms of the inner product on , the Lie algebra of , and the direct sum decomposition of the Lie algebra of into the Lie algebra of and . This reduces the study of the curvature of homogeneous Riemannian manifolds largely to algebraic problems. This reduction, together with the flexibility of the above construction, makes the class of homogeneous Riemannian manifolds very useful for constructing examples.
Symmetric spaces
A connected Riemannian manifold is said to be symmetric if for every point of there exists some isometry of the manifold with as a fixed point and for which the negation of the differential at is the identity map. Every Riemannian symmetric space is homogeneous, and consequently is geodesically complete and has constant scalar curvature. However, Riemannian symmetric spaces also have a much stronger curvature property not possessed by most homogeneous Riemannian manifolds, namely that the Riemann curvature tensor and Ricci curvature are parallel. Riemannian manifolds with this curvature property, which could loosely be phrased as "constant Riemann curvature tensor" (not to be confused with constant curvature), are said to be locally symmetric. This property nearly characterizes symmetric spaces; Élie Cartan proved in the 1920s that a locally symmetric Riemannian manifold which is geodesically complete and simply-connected must in fact be symmetric.
Many of the fundamental examples of Riemannian manifolds are symmetric. The most basic include the sphere and real projective spaces with their standard metrics, along with hyperbolic space. The complex projective space, quaternionic projective space, and Cayley plane are analogues of the real projective space which are also symmetric, as are complex hyperbolic space, quaternionic hyperbolic space, and Cayley hyperbolic space, which are instead analogues of hyperbolic space. Grassmannian manifolds also carry natural Riemannian metrics making them into symmetric spaces. Among the Lie groups with left-invariant Riemannian metrics, those which are bi-invariant are symmetric.
Based on their algebraic formulation as special kinds of homogeneous spaces, Cartan achieved an explicit classification of symmetric spaces which are irreducible, referring to those which cannot be locally decomposed as product spaces. Every such space is an example of an Einstein manifold; among them only the one-dimensional manifolds have zero scalar curvature. These spaces are important from the perspective of Riemannian holonomy. As found in the 1950s by Marcel Berger, any Riemannian manifold which is simply-connected and irreducible is either a symmetric space or has Riemannian holonomy belonging to a list of only seven possibilities. Six of the seven exceptions to symmetric spaces in Berger's classification fall into the fields of Kähler geometry, quaternion-Kähler geometry, G2 geometry, and Spin(7) geometry, each of which study Riemannian manifolds equipped with certain extra structures and symmetries. The seventh exception is the study of 'generic' Riemannian manifolds with no particular symmetry, as reflected by the maximal possible holonomy group.
Infinite-dimensional manifolds
The statements and theorems above are for finite-dimensional manifolds—manifolds whose charts map to open subsets of Euclidean space. These can be extended, to a certain degree, to infinite-dimensional manifolds; that is, manifolds that are modeled on a topological vector space; for example, Fréchet, Banach, and Hilbert manifolds.
Definitions
Riemannian metrics are defined in a way similar to the finite-dimensional case. However, there is a distinction between two types of Riemannian metrics:
A weak Riemannian metric on is a smooth function such that for any the restriction is an inner product on
A strong Riemannian metric on is a weak Riemannian metric such that induces the topology on . If is a strong Riemannian metric, then must be a Hilbert manifold.
Examples
If is a Hilbert space, then for any one can identify with The metric for all is a strong Riemannian metric.
Let be a compact Riemannian manifold and denote by its diffeomorphism group. The latter is a smooth manifold and, in fact, a Lie group. Its tangent bundle at the identity is the set of smooth vector fields on . Let be a volume form on . The weak Riemannian metric on , denoted , is defined as follows. Let . Then for ,
.
Metric space structure
Length of curves and the Riemannian distance function are defined in a way similar to the finite-dimensional case. The distance function , called the geodesic distance, is always a pseudometric (a metric that does not separate points), but it may not be a metric. In the finite-dimensional case, the proof that the Riemannian distance function separates points uses the existence of a pre-compact open set around any point. In the infinite-dimensional case, open sets are no longer pre-compact, so the proof fails.
If is a strong Riemannian metric on , then separates points (hence is a metric) and induces the original topology.
If is a weak Riemannian metric, may fail to separate points. In fact, it may even be identically 0. For example, if is a compact Riemannian manifold, then the weak Riemannian metric on induces vanishing geodesic distance.
Hopf–Rinow theorem
In the case of strong Riemannian metrics, one part of the finite-dimensional Hopf–Rinow theorem still holds.
Theorem: Let be a strong Riemannian manifold. Then metric completeness (in the metric ) implies geodesic completeness.
However, a geodesically complete strong Riemannian manifold might not be metrically complete and it might have closed and bounded subsets that are not compact. Further, a strong Riemannian manifold for which all closed and bounded subsets are compact might not be geodesically complete.
If is a weak Riemannian metric, then no notion of completeness implies the other in general.
See also
Smooth manifold
Riemannian geometry
Finsler manifold
Sub-Riemannian manifold
Pseudo-Riemannian manifold
Metric tensor
Hermitian manifold
Symplectic manifold
Kähler manifold
Einstein manifold
References
Notes
Sources
External links
Riemannian geometry
Differential geometry | Riemannian manifold | [
"Mathematics"
] | 7,235 | [
"Riemannian manifolds",
"Space (mathematics)",
"Metric spaces"
] |
144,940 | https://en.wikipedia.org/wiki/Longitudinal%20wave | Longitudinal waves are waves which oscillate in the direction parallel to the direction in which the wave travels; the displacement of the medium is in the same (or opposite) direction as the wave propagation. Mechanical longitudinal waves are also called compressional or compression waves, because they produce compression and rarefaction when travelling through a medium, and pressure waves, because they produce increases and decreases in pressure. A wave along the length of a stretched Slinky toy, where the distance between coils increases and decreases, is a good visualization. Real-world examples include sound waves (vibrations in pressure, particle displacement, and particle velocity propagated in an elastic medium) and seismic P waves (created by earthquakes and explosions).
The other main type of wave is the transverse wave, in which the displacements of the medium are at right angles to the direction of propagation. Transverse waves, for instance, describe some bulk sound waves in solid materials (but not in fluids); these are also called "shear waves" to differentiate them from the (longitudinal) pressure waves that these materials also support.
Nomenclature
"Longitudinal waves" and "transverse waves" have been abbreviated by some authors as "L-waves" and "T-waves", respectively, for their own convenience.
While these two abbreviations have specific meanings in seismology (L-wave for Love wave or long wave) and electrocardiography (see T wave), some authors chose to use "ℓ-waves" (lowercase 'L') and "t-waves" instead, although they are not commonly found in physics writings except for some popular science books.
Sound waves
For longitudinal harmonic sound waves, the frequency and wavelength can be described by the formula
where:
is the displacement of the point on the traveling sound wave;
is the distance from the point to the wave's source;
is the time elapsed;
is the amplitude of the oscillations,
is the speed of the wave; and
is the angular frequency of the wave.
The quantity is the time that the wave takes to travel the distance
The ordinary frequency () of the wave is given by
The wavelength can be calculated as the relation between a wave's speed and ordinary frequency.
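A minimal numeric sketch of these relations (NumPy assumed; the amplitude, speed, and frequency below are illustrative values for a tone in air, and the displacement is written in the standard form y(x, t) = y0 cos(ω(t − x/c))):

```python
import numpy as np

# Harmonic longitudinal wave, assuming the standard form y(x, t) = y0 * cos(omega * (t - x / c)).
y0 = 1e-6              # amplitude of the oscillations (m), illustrative
c = 343.0              # speed of the wave: sound in air near 20 C (m/s), illustrative
f = 440.0              # ordinary frequency (Hz), illustrative
omega = 2.0 * np.pi * f                    # angular frequency (rad/s)
wavelength = c / f                         # relation between the wave's speed and ordinary frequency
y = lambda x, t: y0 * np.cos(omega * (t - x / c))
print(wavelength)                          # ~0.78 m
print(y(0.0, 0.0), y(wavelength, 0.0))     # equal: the pattern repeats after one wavelength
```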
For sound waves, the amplitude of the wave is the difference between the pressure of the undisturbed air and the maximum pressure caused by the wave.
Sound's propagation speed depends on the type, temperature, and composition of the medium through which it propagates.
Speed of longitudinal waves
Isotropic medium
For isotropic solids and liquids, the speed of a longitudinal wave can be described by
where
is the elastic modulus, such that
where is the shear modulus and is the bulk modulus;
is the mass density of the medium.
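As a numeric illustration (a sketch assuming the standard relation c = sqrt((K + 4G/3)/ρ); the moduli and density below are approximate handbook values for mild steel):

```python
import math

# Longitudinal (P-wave) speed in an isotropic solid: c_l = sqrt(M / rho), with M = K + 4G/3.
K = 160e9        # bulk modulus (Pa), approximate value for steel
G = 80e9         # shear modulus (Pa), approximate value for steel
rho = 7850.0     # mass density (kg/m^3), approximate value for steel
M = K + 4.0 * G / 3.0            # elastic (longitudinal) modulus
c_l = math.sqrt(M / rho)
print(round(c_l))                # ~5800 m/s, close to measured longitudinal speeds in steel
```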
Attenuation of longitudinal waves
The attenuation of a wave in a medium describes the loss of energy a wave carries as it propagates throughout the medium. This is caused by the scattering of the wave at interfaces, the loss of energy due to the friction between molecules, or geometric divergence. The study of attenuation of elastic waves in materials has increased in recent years, particularly within the study of polycrystalline materials where researchers aim to "nondestructively evaluate the degree of damage of engineering components" and to "develop improved procedures for characterizing microstructures" according to a research team led by R. Bruce Thompson in a Wave Motion publication.
Attenuation in viscoelastic materials
In viscoelastic materials, the attenuation coefficients per length for longitudinal waves and for transverse waves must satisfy the following ratio:
where and are the transverse and longitudinal wave speeds respectively.
Attenuation in polycrystalline materials
Polycrystalline materials are made up of various crystal grains which form the bulk material. Due to the difference in crystal structure and properties of these grains, when a wave propagating through a poly-crystal crosses a grain boundary, a scattering event occurs causing scattering based attenuation of the wave. Additionally it has been shown that the ratio rule for viscoelastic materials,
applies equally successfully to polycrystalline materials.
A current prediction for modeling attenuation of waves in polycrystalline materials with elongated grains is the second-order approximation (SOA) model, which accounts for the second order of inhomogeneity and so allows for the consideration of multiple scattering in the crystal system. This model predicts that the shape of the grains in a poly-crystal has little effect on attenuation.
Pressure waves
The equations for sound in a fluid given above also apply to acoustic waves in an elastic solid. Although solids also support transverse waves (known as S-waves in seismology), longitudinal sound waves in the solid exist with a velocity and wave impedance dependent on the material's density and its rigidity, the latter of which is described (as with sound in a gas) by the material's bulk modulus.
In May 2022, NASA reported the sonification (converting astronomical data associated with pressure waves into sound) of the black hole at the center of the Perseus galaxy cluster.
Electromagnetics
Maxwell's equations lead to the prediction of electromagnetic waves in a vacuum, which are strictly transverse waves: the electric and magnetic fields of which the wave consists are perpendicular to the direction of the wave's propagation. However, plasma waves are longitudinal; these are not electromagnetic waves but density waves of charged particles, which can couple to the electromagnetic field.
After attempting to generalize Maxwell's equations, Heaviside concluded that electromagnetic waves were not to be found as longitudinal waves in "free space" or homogeneous media. Maxwell's equations, as we now understand them, retain that conclusion: in free space or other uniform isotropic dielectrics, electromagnetic waves are strictly transverse. However, electromagnetic waves can display a longitudinal component in the electric and/or magnetic fields when traversing birefringent materials, or inhomogeneous materials especially at interfaces (surface waves, for instance) such as Zenneck waves.
In the development of modern physics, Alexandru Proca (1897–1955) was known for developing relativistic quantum field equations bearing his name (Proca's equations) which apply to the massive vector spin-1 mesons. In recent decades some other theorists, such as Jean-Pierre Vigier and Bo Lehnert of the Swedish Royal Society, have used the Proca equation in an attempt to demonstrate photon mass as a longitudinal electromagnetic component of Maxwell's equations, suggesting that longitudinal electromagnetic waves could exist in a Dirac polarized vacuum. However photon rest mass is strongly doubted by almost all physicists and is incompatible with the Standard Model of physics.
See also
Transverse wave
Sound
Acoustic wave
P-wave
Plasma waves
References
Further reading
Varadan, V. K., and Vasundara V. Varadan, "Elastic wave scattering and propagation". Attenuation due to scattering of ultrasonic compressional waves in granular media – A.J. Devaney, H. Levine, and T. Plona. Ann Arbor, Mich., Ann Arbor Science, 1982.
Schaaf, John van der, Jaap C. Schouten, and Cor M. van den Bleek, "Experimental Observation of Pressure Waves in Gas-Solids Fluidized Beds". American Institute of Chemical Engineers. New York, N.Y., 1997.
Russell, Dan, "Longitudinal and Transverse Wave Motion". Acoustics Animations, Pennsylvania State University, Graduate Program in Acoustics.
Longitudinal Waves, with animations "The Physics Classroom"
Wave mechanics | Longitudinal wave | [
"Physics"
] | 1,586 | [
"Wave mechanics",
"Waves",
"Physical phenomena",
"Classical mechanics"
] |
145,020 | https://en.wikipedia.org/wiki/Sintering | Sintering or frittage is the process of compacting and forming a solid mass of material by pressure or heat without melting it to the point of liquefaction. Sintering happens as part of a manufacturing process used with metals, ceramics, plastics, and other materials. The atoms/molecules in the sintered material diffuse across the boundaries of the particles, fusing the particles together and creating a solid piece.
Since the sintering temperature does not have to reach the melting point of the material, sintering is often chosen as the shaping process for materials with extremely high melting points, such as tungsten and molybdenum. The study of sintering in metallurgical powder-related processes is known as powder metallurgy.
An example of sintering can be observed when ice cubes in a glass of water adhere to each other, which is driven by the temperature difference between the water and the ice. Examples of pressure-driven sintering are the compacting of snowfall to a glacier, or the formation of a hard snowball by pressing loose snow together.
The material produced by sintering is called sinter. The word sinter comes from the Middle High German , a cognate of English cinder.
General sintering
Sintering is generally considered successful when the process reduces porosity and enhances properties such as strength, electrical conductivity, translucency and thermal conductivity. In some special cases, sintering is carefully applied to enhance the strength of a material while preserving porosity (e.g. in filters or catalysts, where gas adsorption is a priority). During the sintering process, atomic diffusion drives powder surface elimination in different stages, starting at the formation of necks between powders to final elimination of small pores at the end of the process.
The driving force for densification is the change in free energy from the decrease in surface area and lowering of the surface free energy by the replacement of solid-vapor interfaces. It forms new but lower-energy solid-solid interfaces with a net decrease in total free energy. On a microscopic scale, material transfer is affected by the change in pressure and differences in free energy across the curved surface. If the size of the particle is small (and its curvature is high), these effects become very large in magnitude. The change in energy is much higher when the radius of curvature is less than a few micrometers, which is one of the main reasons why much ceramic technology is based on the use of fine-particle materials.
The ratio of bond area to particle size is a determining factor for properties such as strength and electrical conductivity. To yield the desired bond area, temperature and initial grain size are precisely controlled over the sintering process. At steady state, the particle radius and the vapor pressure are proportional to (p0)^(2/3) and to (p0)^(1/3), respectively.
The source of power for solid-state processes is the change in free or chemical potential energy between the neck and the surface of the particle. This energy creates a transfer of material through the fastest means possible; if transfer were to take place from the particle volume or the grain boundary between particles, particle count would decrease and pores would be destroyed. Pore elimination is fastest in samples with many pores of uniform size because the boundary diffusion distance is smallest. During the latter portions of the process, boundary and lattice diffusion from the boundary become important.
Control of temperature is very important to the sintering process, since grain-boundary diffusion and volume diffusion rely heavily upon temperature, particle size, particle distribution, material composition, and often other properties of the sintering environment itself.
Ceramic sintering
Sintering is part of the firing process used in the manufacture of pottery and other ceramic objects. Sintering and vitrification (which requires higher temperatures) are the two main mechanisms behind the strength and stability of ceramics. Sintered ceramic objects are made from substances such as glass, alumina, zirconia, silica, magnesia, lime, beryllium oxide, and ferric oxide. Some ceramic raw materials have a lower affinity for water and a lower plasticity index than clay, requiring organic additives in the stages before sintering.
Sintering begins when sufficient temperatures have been reached to mobilize the active elements in the ceramic material, which can start below their melting point (typically at 50–80% of their melting point), e.g. as premelting. When sufficient sintering has taken place, the ceramic body will no longer break down in water; additional sintering can reduce the porosity of the ceramic, increase the bond area between ceramic particles, and increase the material strength.
Industrial procedures to create ceramic objects via sintering of powders generally include:
mixing water, binder, deflocculant, and unfired ceramic powder to form a slurry
spray-drying the slurry
putting the spray dried powder into a mold and pressing it to form a green body (an unsintered ceramic item)
heating the green body at low temperature to burn off the binder
sintering at a high temperature to fuse the ceramic particles together.
All the characteristic temperatures associated with phase transformation, glass transitions, and melting points, occurring during a sinterisation cycle of a particular ceramic's formulation (i.e., tails and frits) can be easily obtained by observing the expansion-temperature curves during optical dilatometer thermal analysis. In fact, sinterisation is associated with a remarkable shrinkage of the material because glass phases flow once their transition temperature is reached, and start consolidating the powdery structure and considerably reducing the porosity of the material.
Sintering is performed at high temperature. Additionally, a second and/or third external force (such as pressure, electric current) could be used. A commonly used second external force is pressure. Sintering performed by only heating is generally termed "pressureless sintering", which is possible with graded metal-ceramic composites, utilising a nanoparticle sintering aid and bulk molding technology. A variant used for 3D shapes is called hot isostatic pressing.
To allow efficient stacking of product in the furnace during sintering and to prevent parts sticking together, many manufacturers separate ware using ceramic powder separator sheets. These sheets are available in various materials such as alumina, zirconia and magnesia. They are additionally categorized by fine, medium and coarse particle sizes. By matching the material and particle size to the ware being sintered, surface damage and contamination can be reduced while maximizing furnace loading.
Sintering of metallic powders
Most, if not all, metals can be sintered. This applies especially to pure metals produced in vacuum which suffer no surface contamination. Sintering under atmospheric pressure requires the use of a protective gas, quite often endothermic gas. Sintering, with subsequent reworking, can produce a great range of material properties. Changes in density, alloying, and heat treatments can alter the physical characteristics of various products. For instance, the Young's modulus En of sintered iron powders remains somewhat insensitive to sintering time, alloying, or particle size in the original powder for lower sintering temperatures, but depends upon the density of the final product:
where D is the density, E is Young's modulus and d is the maximum density of iron.
Sintering is static when a metal powder under certain external conditions may exhibit coalescence, and yet reverts to its normal behavior when such conditions are removed. In most cases, the density of a collection of grains increases as material flows into voids, causing a decrease in overall volume. Mass movements that occur during sintering consist of the reduction of total porosity by repacking, followed by material transport due to evaporation and condensation from diffusion. In the final stages, metal atoms move along crystal boundaries to the walls of internal pores, redistributing mass from the internal bulk of the object and smoothing pore walls. Surface tension is the driving force for this movement.
A special form of sintering (which is still considered part of powder metallurgy) is liquid-state sintering in which at least one but not all elements are in a liquid state. Liquid-state sintering is required for making cemented carbide and tungsten carbide.
Sintered bronze in particular is frequently used as a material for bearings, since its porosity allows lubricants to flow through it or remain captured within it. Sintered copper may be used as a wicking structure in certain types of heat pipe construction, where the porosity allows a liquid agent to move through the porous material via capillary action. For materials that have high melting points such as molybdenum, tungsten, rhenium, tantalum, osmium and carbon, sintering is one of the few viable manufacturing processes. In these cases, very low porosity is desirable and can often be achieved.
Sintered metal powder is used to make frangible shotgun shells called breaching rounds, as used by military and SWAT teams to quickly force entry into a locked room. These shotgun shells are designed to destroy door deadbolts, locks and hinges without risking lives by ricocheting or by flying on at lethal speed through the door. They work by destroying the object they hit and then dispersing into a relatively harmless powder.
Sintered bronze and stainless steel are used as filter materials in applications requiring high temperature resistance while retaining the ability to regenerate the filter element. For example, sintered stainless steel elements are employed for filtering steam in food and pharmaceutical applications, and sintered bronze in aircraft hydraulic systems.
Sintering of powders containing precious metals such as silver and gold is used to make small jewelry items. Evaporative self-assembly of colloidal silver nanocubes into supercrystals has been shown to allow the sintering of electrical joints at temperatures lower than 200 °C.
Advantages
Particular advantages of the powder technology include:
Very high levels of purity and uniformity in starting materials
Preservation of purity, due to the simpler subsequent fabrication process (fewer steps) that it makes possible
Stabilization of the details of repetitive operations, by control of grain size during the input stages
Absence of binding contact between segregated powder particles – or "inclusions" (called stringering) – as often occurs in melting processes
No deformation needed to produce directional elongation of grains
Capability to produce materials of controlled, uniform porosity.
Capability to produce nearly net-shaped objects.
Capability to produce materials which cannot be produced by any other technology.
Capability to fabricate high-strength material like turbine blades.
After sintering, the mechanical strength for handling is higher.
The literature contains many references on sintering dissimilar materials to produce solid/solid-phase compounds or solid/melt mixtures at the processing stage. Almost any substance can be obtained in powder form, through either chemical, mechanical or physical processes, so basically any material can be obtained through sintering. When pure elements are sintered, the leftover powder is still pure, so it can be recycled.
Disadvantages
Particular disadvantages of the powder technology include:
sintering cannot create uniform sizes
micro- and nanostructures produced before sintering are often destroyed.
Plastics sintering
Plastic materials are formed by sintering for applications that require materials of specific porosity. Sintered plastic porous components are used in filtration and to control fluid and gas flows. Sintered plastics are used in applications requiring caustic fluid separation processes such as the nibs in whiteboard markers, inhaler filters, and vents for caps and liners on packaging materials. Sintered ultra high molecular weight polyethylene materials are used as ski and snowboard base materials. The porous texture allows wax to be retained within the structure of the base material, thus providing a more durable wax coating.
Liquid phase sintering
For materials that are difficult to sinter, a process called liquid phase sintering is commonly used. Materials for which liquid phase sintering is common are Si3N4, WC, SiC, and more. Liquid phase sintering is the process of adding an additive to the powder which will melt before the matrix phase. The process of liquid phase sintering has three stages:
rearrangement – As the liquid melts, capillary action will pull the liquid into pores and also cause grains to rearrange into a more favorable packing arrangement.
solution-precipitation – In areas where capillary pressures are high (particles are close together) atoms will preferentially go into solution and then precipitate in areas of lower chemical potential where particles are not close or in contact. This is called contact flattening. This densifies the system in a way similar to grain boundary diffusion in solid state sintering. Ostwald ripening will also occur where smaller particles will go into solution preferentially and precipitate on larger particles leading to densification.
final densification – densification of solid skeletal network, liquid movement from efficiently packed regions into pores.
For liquid phase sintering to be practical the major phase should be at least slightly soluble in the liquid phase and the additive should melt before any major sintering of the solid particulate network occurs, otherwise rearrangement of grains will not occur. Liquid phase sintering was successfully applied to improve grain growth of thin semiconductor layers from nanoparticle precursor films.
Electric current assisted sintering
These techniques employ electric currents to drive or enhance sintering. English engineer A. G. Bloxam registered in 1906 the first patent on sintering powders using direct current in vacuum. The primary purpose of his inventions was the industrial scale production of filaments for incandescent lamps by compacting tungsten or molybdenum particles. The applied current was particularly effective in reducing surface oxides that increased the emissivity of the filaments.
In 1913, Weintraub and Rush patented a modified sintering method which combined electric current with pressure. The benefits of this method were proved for the sintering of refractory metals as well as conductive carbide or nitride powders. The starting boron–carbon or silicon–carbon powders were placed in an electrically insulating tube and compressed by two rods which also served as electrodes for the current. The estimated sintering temperature was 2000 °C.
In the United States, sintering was first patented by Duval d'Adrian in 1922. His three-step process aimed at producing heat-resistant blocks from such oxide materials as zirconia, thoria or tantalia. The steps were: (i) molding the powder; (ii) annealing it at about 2500 °C to make it conducting; (iii) applying current-pressure sintering as in the method by Weintraub and Rush.
Sintering that uses an arc produced via a capacitance discharge to eliminate oxides before direct current heating was patented by G. F. Taylor in 1932. This was the origin of sintering methods employing pulsed or alternating current, eventually superimposed on a direct current. Those techniques have been developed over many decades and are summarized in more than 640 patents.
Of these technologies, the most well known are resistance sintering (also called hot pressing) and spark plasma sintering, while electro sinter forging is the latest advancement in this field.
Spark plasma sintering
In spark plasma sintering (SPS), external pressure and an electric field are applied simultaneously to enhance the densification of metallic or ceramic powder compacts. After commercialization, however, it was determined that there is no plasma, so the proper name is spark sintering, as coined by Lenel. The electric-field-driven densification supplements sintering with a form of hot pressing, enabling lower temperatures and shorter times than typical sintering. For a number of years, it was speculated that the existence of sparks or plasma between particles could aid sintering; however, Hulbert and coworkers systematically showed that the electric parameters used during spark plasma sintering make this highly unlikely. In light of this, the name "spark plasma sintering" has been rendered obsolete, and terms such as field assisted sintering technique (FAST), electric field assisted sintering (EFAS), and direct current sintering (DCS) have been adopted by the sintering community. Using a direct current (DC) pulse as the electric current, spark plasma, spark impact pressure, Joule heating, and an electrical field diffusion effect would be created. By modifying the graphite die design and its assembly, it is possible to perform pressureless sintering in a spark plasma sintering facility. This modified die design setup is reported to combine the advantages of both conventional pressureless sintering and spark plasma sintering techniques.
Electro sinter forging
Electro sinter forging is an electric current assisted sintering (ECAS) technology that originated from capacitor discharge sintering. It is used for the production of diamond metal matrix composites and is under evaluation for the production of hard metals, nitinol and other metals and intermetallics. It is characterized by a very short sintering time, allowing machines to sinter at the same speed as a compaction press.
Pressureless sintering
Pressureless sintering is the sintering of a powder compact (sometimes at very high temperatures, depending on the powder) without applied pressure. This avoids density variations in the final component, which occur with more traditional hot pressing methods.
The powder compact (if a ceramic) can be created by slip casting, injection moulding, and cold isostatic pressing. After presintering, the final green compact can be machined to its final shape before being sintered.
Three different heating schedules can be performed with pressureless sintering: constant-rate of heating (CRH), rate-controlled sintering (RCS), and two-step sintering (TSS). The microstructure and grain size of the ceramics may vary depending on the material and method used.
Constant-rate of heating (CRH), also known as temperature-controlled sintering, consists of heating the green compact at a constant rate up to the sintering temperature. Experiments with zirconia have been performed to optimize the sintering temperature and sintering rate for CRH method. Results showed that the grain sizes were identical when the samples were sintered to the same density, proving that grain size is a function of specimen density rather than CRH temperature mode.
In rate-controlled sintering (RCS), the densification rate in the open-porosity phase is lower than in the CRH method. By definition, the relative density, ρrel, in the open-porosity phase is lower than 90%. Although this should prevent separation of pores from grain boundaries, it has been proven statistically that RCS did not produce smaller grain sizes than CRH for alumina, zirconia, and ceria samples.
Two-step sintering (TSS) uses two different sintering temperatures. The first sintering temperature should guarantee a relative density higher than 75% of theoretical sample density. This will remove supercritical pores from the body. The sample will then be cooled down and held at the second sintering temperature until densification is completed. Grains of cubic zirconia and cubic strontium titanate were significantly refined by TSS compared to CRH. However, the grain size changes in other ceramic materials, like tetragonal zirconia and hexagonal alumina, were not statistically significant.
Microwave sintering
In microwave sintering, heat is sometimes generated internally within the material, rather than via surface radiative heat transfer from an external heat source. Some materials fail to couple with microwaves and others exhibit runaway behavior, so its usefulness is restricted. A benefit of microwave sintering is faster heating for small loads, meaning less time is needed to reach the sintering temperature, less heating energy is required, and there are improvements in the product properties.
A drawback of microwave sintering is that it generally sinters only one compact at a time, so overall productivity tends to be poor except for situations involving one-of-a-kind sintering, such as for artists. As microwaves can only penetrate a short distance into materials with a high conductivity and a high permeability, microwave sintering requires the sample to be delivered as a powder with a particle size on the order of the penetration depth of microwaves in the particular material. The sintering process and side-reactions run several times faster during microwave sintering at the same temperature, which results in different properties for the sintered product.
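The penetration-depth limitation described above follows from the classical skin effect. The sketch below is illustrative only: the skin-depth formula is standard electromagnetics, but the copper conductivity and the 2.45 GHz frequency are assumed example values, not figures taken from this article.

```python
# Illustrative sketch (assumed values): the electromagnetic skin depth
# delta = 1/sqrt(pi * f * mu * sigma) gives a rough penetration depth of
# microwaves into a conductive material, which is why conductive powders must
# have particle sizes on the order of this depth to heat volumetrically.
import math

def skin_depth(frequency_hz, conductivity_s_per_m, relative_permeability=1.0):
    """Return the skin depth in metres for a good conductor."""
    mu = 4 * math.pi * 1e-7 * relative_permeability  # absolute permeability
    return 1.0 / math.sqrt(math.pi * frequency_hz * mu * conductivity_s_per_m)

# Assumed example: copper powder in a 2.45 GHz microwave field.
delta = skin_depth(2.45e9, 5.8e7)
print(f"skin depth ≈ {delta * 1e6:.1f} µm")   # ≈ 1.3 µm
```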
This technique is acknowledged to be quite effective in maintaining fine grains/nano sized grains in sintered bioceramics. Magnesium phosphates and calcium phosphates are the examples which have been processed through the microwave sintering technique.
Densification, vitrification and grain growth
Sintering in practice is the control of both densification and grain growth. Densification is the act of reducing porosity in a sample, thereby making it denser. Grain growth is the process of grain boundary motion and Ostwald ripening to increase the average grain size. Many properties (mechanical strength, electrical breakdown strength, etc.) benefit from both a high relative density and a small grain size. Therefore, being able to control these properties during processing is of high technical importance. Since densification of powders requires high temperatures, grain growth naturally occurs during sintering. Reduction of this process is key for many engineering ceramics. Under certain conditions of chemistry and orientation, some grains may grow rapidly at the expense of their neighbours during sintering. This phenomenon, known as abnormal grain growth (AGG), results in a bimodal grain size distribution that has consequences for the mechanical, dielectric and thermal performance of the sintered material.
For densification to occur at a quick pace it is essential to have (1) an appreciable amount of liquid phase, (2) near-complete solubility of the solid in the liquid, and (3) wetting of the solid by the liquid. The driving force for densification is the capillary pressure of the liquid phase located between the fine solid particles. When the liquid phase wets the solid particles, each space between the particles becomes a capillary in which a substantial capillary pressure is developed. For submicrometre particle sizes, capillaries with diameters in the range of 0.1 to 1 micrometres develop pressures in the range of to for silicate liquids and in the range of to for a metal such as liquid cobalt.
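As a rough illustration of the magnitudes involved, the Laplace relation P ≈ 2γ/r can be evaluated for submicrometre capillaries. The surface-tension values used below are assumed, order-of-magnitude figures, not data from this article.

```python
# Illustrative sketch (assumed values): the capillary (Laplace) pressure in a
# wetting liquid bridge of radius r scales as P ≈ 2*gamma/r, so submicrometre
# pores develop pressures in the megapascal range.
def capillary_pressure(surface_tension_n_per_m, capillary_radius_m):
    """Approximate capillary pressure (Pa) for a fully wetting liquid."""
    return 2.0 * surface_tension_n_per_m / capillary_radius_m

# Assumed surface tensions: ~0.3 N/m for a silicate melt, ~1.7 N/m for liquid
# cobalt; capillary radii of 0.05-0.5 µm (pore diameters of 0.1-1 µm).
for name, gamma in [("silicate liquid", 0.3), ("liquid cobalt", 1.7)]:
    for r in (0.05e-6, 0.5e-6):
        p_mpa = capillary_pressure(gamma, r) / 1e6
        print(f"{name}, r = {r * 1e6:.2f} µm: P ≈ {p_mpa:.1f} MPa")
```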
Solution-precipitation material transfer alone would not produce densification; a sustained capillary pressure is also required. Further densification involves additional particle movement while the particles undergo grain growth and grain-shape changes. Shrinkage results when the liquid slips between particles and increases the pressure at points of contact, causing material to move away from the contact areas and forcing the particle centers to draw nearer to each other.
The sintering of liquid-phase materials requires a fine-grained solid phase to create the needed capillary pressures, which scale inversely with particle diameter, and the liquid content must also be sufficient to create the required capillary pressure, or else the process ceases. The vitrification rate depends on the pore size, on the viscosity and amount of liquid phase present (which determine the viscosity of the overall composition), and on the surface tension. Temperature strongly affects densification, because at higher temperatures viscosity decreases and liquid content increases. Consequently, changes to the composition and processing will affect the vitrification process.
Sintering mechanisms
Sintering occurs by diffusion of atoms through the microstructure. This diffusion is caused by a gradient of chemical potential – atoms move from an area of higher chemical potential to an area of lower chemical potential. The different paths the atoms take to get from one spot to another are the "sintering mechanisms" or "matter transport mechanisms".
In solid state sintering, the six common mechanisms are:
surface diffusion – diffusion of atoms along the surface of a particle
vapor transport – evaporation of atoms which condense on a different surface
lattice diffusion from surface – atoms from surface diffuse through lattice
lattice diffusion from grain boundary – atom from grain boundary diffuses through lattice
grain boundary diffusion – atoms diffuse along grain boundary
plastic deformation – dislocation motion causes flow of matter.
Mechanisms 1–3 above are non-densifying (i.e. do not cause the pores and the overall ceramic body to shrink) but can still increase the area of the bond or "neck" between grains; they take atoms from the surface and rearrange them onto another surface or part of the same surface. Mechanisms 4–6 are densifying – atoms are moved from the bulk material or the grain boundaries to the surface of pores, thereby eliminating porosity and increasing the density of the sample.
Grain growth
A grain boundary (GB) is the transition area or interface between adjacent crystallites (or grains) of the same chemical and lattice composition, not to be confused with a phase boundary. The adjacent grains do not have the same orientation of the lattice, thus giving the atoms in GB shifted positions relative to the lattice in the crystals. Due to the shifted positioning of the atoms in the GB they have a higher energy state when compared with the atoms in the crystal lattice of the grains. It is this imperfection that makes it possible to selectively etch the GBs when one wants the microstructure to be visible.
The system's drive to minimize its total energy leads to coarsening of the microstructure, which moves toward a metastable state within the specimen. This involves minimizing the total GB area and changing the topological structure of the grain network. Grain growth can be either normal or abnormal: normal grain growth is characterized by uniform growth and size of all the grains in the specimen, whereas in abnormal grain growth a few grains grow much larger than the remaining majority.
Grain boundary energy/tension
The atoms in the GB are normally in a higher energy state than their equivalent in the bulk material. This is due to their more stretched bonds, which gives rise to a GB tension, σGB. This extra energy that the atoms possess is called the grain boundary energy, γGB. The grain will want to minimize this extra energy, thus striving to make the grain boundary area smaller, and this change requires energy.
"Or, in other words, a force has to be applied, in the plane of the grain boundary and acting along a line in the grain-boundary area, in order to extend the grain-boundary area in the direction of the force. The force per unit length, i.e. tension/stress, along the line mentioned is σGB. On the basis of this reasoning it would follow that:
with dA as the increase of grain-boundary area per unit length along the line in the grain-boundary area considered."[pg 478]
The GB tension can also be thought of as the attractive forces between the atoms at the surface; the tension between these atoms exists because there is a larger interatomic distance between them at the surface compared to the bulk (i.e. surface tension). When the surface area becomes bigger, the bonds stretch more and the GB tension increases. To counteract this increase in tension there must be a transport of atoms to the surface, keeping the GB tension constant. This diffusion of atoms accounts for the constant surface tension in liquids. Then the argument,
σGB = γGB
holds true. For solids, on the other hand, diffusion of atoms to the surface might not be sufficient and the surface tension can vary with an increase in surface area.
For a solid, one can derive an expression for the change in Gibbs free energy, dG, upon the change of GB area, dA. dG is given by
dG = σGB dA = γGB dA + A dγGB
which gives
σGB = γGB + A (dγGB/dA)
σGB is normally expressed in units of N/m while γGB is normally expressed in units of J/m², since they are different physical properties.
Mechanical equilibrium
In a two-dimensional isotropic material the grain boundary tension would be the same for all grains. This would give an angle of 120° at GB junctions where three grains meet, resulting in a hexagonal pattern, which is the metastable state (or mechanical equilibrium) of the 2D specimen. A consequence of this is that, to stay as close to equilibrium as possible, grains with fewer than six sides will bend their GBs to try to keep the 120° angle between each other. This results in a curved boundary with its curvature towards itself. A grain with six sides will, as mentioned, have straight boundaries, while a grain with more than six sides will have curved boundaries with their curvature away from itself. A grain with six boundaries (i.e. a hexagonal structure) is in a metastable state (i.e. local equilibrium) within the 2D structure. In three dimensions the structural details are similar but much more complex, and the metastable structure for a grain is a non-regular 14-sided polyhedron with doubly curved faces. In practice all arrays of grains are always unstable and thus always grow until prevented by a counterforce.
Grains strive to minimize their energy, and a curved boundary has a higher energy than a straight boundary. This means that the grain boundary will migrate toward its center of curvature. The consequence is that grains with fewer than six sides will decrease in size while grains with more than six sides will increase in size.
Grain growth occurs due to the motion of atoms across a grain boundary. Convex surfaces have a higher chemical potential than concave surfaces, therefore grain boundaries will move toward their center of curvature. Smaller particles tend to have a smaller radius of curvature (higher curvature), which results in smaller grains losing atoms to larger grains and shrinking. This process is called Ostwald ripening. Large grains grow at the expense of small grains.
Grain growth in a simple model is found to follow:
G^m = G0^m + K t
Here G is the final average grain size, G0 is the initial average grain size, t is time, m is a factor between 2 and 4, and K is a factor given by:
K = K0 exp(−Q/(RT))
Here Q is the molar activation energy, R is the ideal gas constant, T is absolute temperature, and K0 is a material-dependent factor. In most materials the sintered grain size is proportional to the inverse square root of the fractional porosity, implying that pores are the most effective retardant for grain growth during sintering.
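A minimal sketch of this grain-growth law is shown below; the activation energy Q, prefactor K0 and exponent m are material-dependent, and the numbers used here are purely illustrative assumptions.

```python
# A minimal sketch of the grain-growth law described above, with assumed
# parameter values (Q, K0 and m are material-dependent and purely illustrative).
import math

R_GAS = 8.314  # J/(mol*K), ideal gas constant

def grain_size(g0_m, time_s, temp_k, q_j_per_mol, k0, m=3):
    """G^m = G0^m + K*t, with K = K0*exp(-Q/(R*T))."""
    k = k0 * math.exp(-q_j_per_mol / (R_GAS * temp_k))
    return (g0_m**m + k * time_s) ** (1.0 / m)

# Assumed example: 0.5 µm starting grains held for 2 h at 1600 °C (1873 K).
g = grain_size(g0_m=0.5e-6, time_s=7200, temp_k=1873,
               q_j_per_mol=4.0e5, k0=1.0e-8, m=3)
print(f"final grain size ≈ {g * 1e6:.1f} µm")
```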
Reducing grain growth
Solute ions
If a dopant is added to the material (for example, Nd in BaTiO3), the impurity will tend to segregate to the grain boundaries. As the grain boundary tries to move (as atoms jump from the convex to the concave surface), the change in concentration of the dopant at the grain boundary will impose a drag on the boundary. The original concentration of solute around the grain boundary will be asymmetrical in most cases. As the grain boundary tries to move, the concentration on the side opposite to the direction of motion will be higher and will therefore have a higher chemical potential. This increased chemical potential acts as a backforce to the original chemical potential gradient that drives grain boundary movement. This decrease in net chemical potential will decrease the grain boundary velocity and therefore grain growth.
Fine second phase particles
If particles of a second phase which are insoluble in the matrix phase are added to the powder in the form of a much finer powder, then this will decrease grain boundary movement. When the grain boundary tries to move past the inclusion, the diffusion of atoms from one grain to the other is hindered by the insoluble particle. This is because it is energetically favorable for particles to reside in the grain boundaries, so they exert a force opposing grain boundary migration. This effect is called the Zener effect after the man who estimated this drag force as
F = πrλ
where r is the radius of the particle and λ the interfacial energy of the boundary. If there are N particles per unit volume, their volume fraction f is
f = (4/3)πr³N
assuming they are randomly distributed. A boundary of unit area will intersect all particles within a volume of 2r, which is 2Nr particles. So the number of particles n intersecting a unit area of grain boundary is:
n = 3f/(2πr²)
Now, assuming that the grains only grow due to the influence of curvature, the driving force of growth is 2λ/R, where (for a homogeneous grain structure) R approximates to the mean diameter of the grains. With this, the critical diameter that has to be reached before the grains cease to grow satisfies:
2λ/Dcrit = nπrλ
This can be reduced to
Dcrit = 4r/(3f)
so the critical diameter of the grains is dependent on the size and volume fraction of the particles at the grain boundaries.
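The pinning (Zener) limit derived above can be evaluated directly; the particle radius and volume fraction used below are assumed example values.

```python
# A sketch of the Zener-pinning estimate above: critical grain diameter
# D_crit = 4r/(3f) for second-phase particles of radius r and volume fraction f.
def zener_limit(particle_radius_m, volume_fraction):
    """Critical (maximum) grain diameter set by particle pinning."""
    return 4.0 * particle_radius_m / (3.0 * volume_fraction)

# Assumed example: 50 nm particles at 1 vol.% pin grains at roughly 6.7 µm.
print(zener_limit(50e-9, 0.01))   # ≈ 6.7e-6 m
```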
It has also been shown that small bubbles or cavities can act as inclusions, exerting a similar pinning effect.
More complicated interactions which slow grain boundary motion include interactions of the surface energies of the two grains and the inclusion and are discussed in detail by C.S. Smith.
Sintering of catalysts
Sintering is an important cause for loss of catalytic activity, especially on supported metal catalysts. It decreases the surface area of the catalyst and changes the surface structure. For a porous catalytic surface, the pores may collapse due to sintering, resulting in loss of surface area. Sintering is in general an irreversible process.
Small catalyst particles have the highest possible relative surface area and high reaction temperature, both factors that generally increase the reactivity of a catalyst. However, these factors are also the circumstances under which sintering occurs. Specific materials may also increase the rate of sintering. On the other hand, by alloying catalysts with other materials, sintering can be reduced. Rare-earth metals in particular have been shown to reduce sintering of metal catalysts when alloyed.
For many supported metal catalysts, sintering starts to become a significant effect at temperatures over . Catalysts that operate at higher temperatures, such as a car catalyst, use structural improvements to reduce or prevent sintering. These improvements are in general in the form of a support made from an inert and thermally stable material such as silica, carbon or alumina.
See also
Selective laser sintering, a rapid prototyping technology that includes Direct Metal Laser Sintering (DMLS).
– a pioneer of sintering methods
References
Further reading
External links
Particle-Particle-Sintering – a 3D lattice kinetic Monte Carlo simulation
Sphere-Plate-Sintering – a 3D lattice kinetic Monte Carlo simulation
Industrial processes
Metalworking
Plastics industry
Metallurgical processes | Sintering | [
"Chemistry",
"Materials_science"
] | 7,074 | [
"Metallurgical processes",
"Metallurgy"
] |
145,040 | https://en.wikipedia.org/wiki/Conservation%20of%20mass | In physics and chemistry, the law of conservation of mass or principle of mass conservation states that for any system closed to all transfers of matter the mass of the system must remain constant over time.
The law implies that mass can neither be created nor destroyed, although it may be rearranged in space, or the entities associated with it may be changed in form. For example, in chemical reactions, the mass of the chemical components before the reaction is equal to the mass of the components after the reaction. Thus, during any chemical reaction and low-energy thermodynamic processes in an isolated system, the total mass of the reactants, or starting materials, must be equal to the mass of the products.
The concept of mass conservation is widely used in many fields such as chemistry, mechanics, and fluid dynamics. Historically, mass conservation in chemical reactions was primarily demonstrated in the 17th century and finally confirmed by Antoine Lavoisier in the late 18th century. The formulation of this law was of crucial importance in the progress from alchemy to the modern natural science of chemistry.
In reality, the conservation of mass only holds approximately and is considered part of a series of assumptions in classical mechanics. The law has to be modified to comply with the laws of quantum mechanics and special relativity under the principle of mass–energy equivalence, which states that energy and mass form one conserved quantity. For very energetic systems, conservation of mass alone is shown not to hold, as is the case in nuclear reactions and particle–antiparticle annihilation in particle physics.
Mass is also not generally conserved in open systems. Such is the case when any energy or matter is allowed into, or out of, the system. However, unless radioactivity or nuclear reactions are involved, the amount of energy entering or escaping such systems (as heat, mechanical work, or electromagnetic radiation) is usually too small to be measured as a change in the mass of the system.
For systems that include large gravitational fields, general relativity has to be taken into account; thus mass–energy conservation becomes a more complex concept, subject to different definitions, and neither mass nor energy is as strictly and simply conserved as is the case in special relativity.
Formulation and examples
The law of conservation of mass can only be formulated in classical mechanics, in which the energy scales associated with an isolated system are much smaller than mc², where m is the mass of a typical object in the system, measured in the frame of reference where the object is at rest, and c is the speed of light.
The law can be formulated mathematically in the fields of fluid mechanics and continuum mechanics, where the conservation of mass is usually expressed using the continuity equation, given in differential form as
∂ρ/∂t + ∇·(ρv) = 0
where ρ is the density (mass per unit volume), t is the time, ∇· is the divergence, and v is the flow velocity field.
The interpretation of the continuity equation for mass is the following: For a given closed surface in the system, the change, over any time interval, of the mass enclosed by the surface is equal to the mass that traverses the surface during that time interval: positive if the matter goes in and negative if the matter goes out. For the whole isolated system, this condition implies that the total mass M, the sum of the masses of all components in the system, does not change over time, i.e.
dM/dt = d/dt ∫ ρ dV = 0
where dV is the differential that defines the integral over the whole volume of the system.
The continuity equation for the mass is part of the Euler equations of fluid dynamics. Many other convection–diffusion equations describe the conservation and flow of mass and matter in a given system.
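A minimal numerical sketch of this statement is given below: a one-dimensional finite-volume advection step on a periodic domain keeps the total mass constant to machine precision. The grid, velocity and initial profile are arbitrary illustrative choices, not taken from the article.

```python
# Illustrative sketch: a 1-D upwind finite-volume advection step conserves the
# total mass sum(rho)*dx, mirroring the integral form of the continuity equation.
import numpy as np

n, u, dt = 200, 0.7, 1e-3                # grid points, velocity, time step
dx = 1.0 / n
x = np.linspace(0.0, 1.0, n, endpoint=False)
rho = 1.0 + 0.5 * np.exp(-((x - 0.5) / 0.1) ** 2)   # initial density profile

mass0 = rho.sum() * dx
for _ in range(500):
    flux = u * rho                        # mass flux rho*u at cell faces
    rho = rho - dt / dx * (flux - np.roll(flux, 1))  # upwind divergence, periodic

print(abs(rho.sum() * dx - mass0))        # ~1e-16: total mass is conserved
```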
In chemistry, the calculation of the amount of reactants and products in a chemical reaction, or stoichiometry, is founded on the principle of conservation of mass. The principle implies that during a chemical reaction the total mass of the reactants is equal to the total mass of the products. For example, in the following reaction
CH4 + 2 O2 → CO2 + 2 H2O
one molecule of methane (CH4) and two oxygen molecules (O2) are converted into one molecule of carbon dioxide (CO2) and two molecules of water (H2O). The number of molecules resulting from the reaction can be derived from the principle of conservation of mass: initially four hydrogen atoms, four oxygen atoms and one carbon atom are present (as in the final state), so the number of water molecules produced must be exactly two per molecule of carbon dioxide produced.
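The mass balance for this reaction can be checked with standard atomic masses; the short sketch below uses rounded values for illustration.

```python
# A quick check of the mass balance for the combustion reaction above,
# using standard atomic masses (rounded values, purely illustrative).
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol

def molar_mass(formula):
    """Molar mass of a composition given as {element: count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

ch4, o2 = molar_mass({"C": 1, "H": 4}), molar_mass({"O": 2})
co2, h2o = molar_mass({"C": 1, "O": 2}), molar_mass({"H": 2, "O": 1})

reactants = ch4 + 2 * o2   # CH4 + 2 O2
products = co2 + 2 * h2o   # CO2 + 2 H2O
print(reactants, products)  # both ≈ 80.04 g per mole of methane burned
```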
Many engineering problems are solved by following the mass distribution of a given system over time; this methodology is known as mass balance.
History
As early as 520 BCE, Jain philosophy, a non-creationist philosophy based on the teachings of Mahavira, stated that the universe and its constituents such as matter cannot be destroyed or created. The Jain text Tattvarthasutra (2nd century CE) states that a substance is permanent, but its modes are characterised by creation and destruction.
An important idea in ancient Greek philosophy was that "Nothing comes from nothing", so that what exists now has always existed: no new matter can come into existence where there was none before. An explicit statement of this, along with the further principle that nothing can pass away into nothing, is found in Empedocles (c. 5th century BCE): "For it is impossible for anything to come to be from what is not, and it cannot be brought about or heard of that what is should be utterly destroyed."
A further principle of conservation was stated by Epicurus around the 3rd century BCE, who wrote in describing the nature of the Universe that "the totality of things was always such as it is now, and always will be".
Discoveries in chemistry
By the 18th century the principle of conservation of mass during chemical reactions was widely used and was an important assumption during experiments, even before a definition was widely established, though an expression of the law can be dated back to Hero of Alexandria’s time, as can be seen in the works of Joseph Black, Henry Cavendish, and Jean Rey. One of the first to outline the principle was Mikhail Lomonosov in 1756. He may have demonstrated it by experiments and certainly had discussed the principle in 1748 in correspondence with Leonhard Euler, though his claim on the subject is sometimes challenged. According to the Soviet physicist Yakov Dorfman: "The universal law was formulated by Lomonosov on the basis of general philosophical materialistic considerations, it was never questioned or tested by him, but on the contrary, served him as a solid starting position in all research throughout his life." A more refined series of experiments was later carried out by Antoine Lavoisier, who expressed his conclusion in 1773 and popularized the principle of conservation of mass. The demonstrations of the principle disproved the then popular phlogiston theory, which held that mass could be gained or lost in combustion and heat processes.
The conservation of mass was obscure for millennia because of the buoyancy effect of the Earth's atmosphere on the weight of gases. For example, a piece of wood weighs less after burning; this seemed to suggest that some of its mass disappears, or is transformed or lost. Careful experiments were performed in which chemical reactions such as rusting were allowed to take place in sealed glass ampoules; it was found that the chemical reaction did not change the weight of the sealed container and its contents. Weighing of gases using scales was not possible until the invention of the vacuum pump in the 17th century.
Once understood, the conservation of mass was of great importance in progressing from alchemy to modern chemistry. Once early chemists realized that chemical substances never disappeared but were only transformed into other substances with the same weight, these scientists could for the first time embark on quantitative studies of the transformations of substances. The idea of mass conservation plus a surmise that certain "elemental substances" also could not be transformed into others by chemical reactions, in turn led to an understanding of chemical elements, as well as the idea that all chemical processes and transformations (such as burning and metabolic reactions) are reactions between invariant amounts or weights of these chemical elements.
Following the pioneering work of Lavoisier, the exhaustive experiments of Jean Stas supported the consistency of this law in chemical reactions, even though they were carried out with other intentions. His research indicated that in certain reactions the loss or gain could not have been more than 2 to 4 parts in 100,000. The difference in the accuracy aimed at and attained by Lavoisier on the one hand, and by Edward W. Morley and Stas on the other, is enormous.
Modern physics
The law of conservation of mass was challenged with the advent of special relativity. In one of the Annus Mirabilis papers of Albert Einstein in 1905, he suggested an equivalence between mass and energy. This theory implied several assertions, such as the idea that the internal energy of a system could contribute to the mass of the whole system, or that mass could be converted into electromagnetic radiation. However, as Max Planck pointed out, a change in mass as a result of extraction or addition of chemical energy, as predicted by Einstein's theory, is so small that it could not be measured with the available instruments and could not be presented as a test of special relativity. Einstein speculated that the energies associated with newly discovered radioactivity were significant enough, compared with the mass of the systems producing them, to enable their change of mass to be measured, once the energy of the reaction had been removed from the system. This later indeed proved to be possible, although it was the first artificial nuclear transmutation reaction in 1932, demonstrated by Cockcroft and Walton, that provided the first successful test of Einstein's theory regarding mass loss with energy gain.
The law of conservation of mass and the analogous law of conservation of energy were finally generalized and unified into the principle of mass–energy equivalence, described by Albert Einstein's equation E = mc². Special relativity also redefines the concepts of mass and energy, which can be used interchangeably and are defined relative to the frame of reference. Several quantities had to be defined for consistency, such as the rest mass of a particle (mass in the rest frame of the particle) and the relativistic mass (in another frame). The latter term is usually less frequently used.
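Planck's point about the smallness of chemical mass changes can be illustrated with E = mc²; the heat of combustion of methane used below (about 890 kJ per mole) is an approximate, assumed figure.

```python
# A rough illustration (assumed value for the reaction energy): the mass
# equivalent of a typical chemical reaction energy is far too small to weigh.
C = 2.998e8                    # speed of light, m/s

delta_e = 8.9e5                # J, approximate heat released per mole of CH4 burned
delta_m = delta_e / C**2       # mass equivalent from E = mc^2, in kg
print(f"{delta_m * 1e12:.1f} nanograms")   # ≈ 10 ng out of ~80 g of reactants
```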
In general relativity, mass and energy are not globally conserved, and their definition is more complicated.
See also
Charge conservation
Conservation law
Fick's laws of diffusion
Law of definite proportions
Law of multiple proportions
References
Mass
Conservation laws | Conservation of mass | [
"Physics",
"Mathematics"
] | 2,108 | [
"Scalar physical quantities",
"Matter",
"Physical quantities",
"Equations of physics",
"Conservation laws",
"Mass",
"Quantity",
"Size",
"Wikipedia categories named after physical quantities",
"Symmetry",
"Physics theorems"
] |
145,066 | https://en.wikipedia.org/wiki/Extractive%20metallurgy | Extractive metallurgy is a branch of metallurgical engineering wherein process and methods of extraction of metals from their natural mineral deposits are studied. The field is a materials science, covering all aspects of the types of ore, washing, concentration, separation, chemical processes and extraction of pure metal and their alloying to suit various applications, sometimes for direct use as a finished product, but more often in a form that requires further working to achieve the given properties to suit the applications.
The fields of ferrous and non-ferrous extractive metallurgy have specialties that are generically grouped into the categories of mineral processing, hydrometallurgy, pyrometallurgy, and electrometallurgy based on the process adopted to extract the metal. Several processes may be used to extract the same metal, depending on its occurrence and on chemical requirements.
Mineral processing
Mineral processing begins with beneficiation, consisting of initially breaking down the ore to required sizes, depending on the concentration process to be followed, by crushing, grinding, sieving etc. Thereafter, the ore is physically separated from any unwanted impurity, depending on the form of occurrence and/or the further processing involved. Separation processes take advantage of physical properties of the materials. These physical properties can include density, particle size and shape, electrical and magnetic properties, and surface properties. Major physical and chemical methods include magnetic separation, froth flotation, leaching etc., whereby the impurities and unwanted materials are removed from the ore and the base ore of the metal is concentrated, meaning the percentage of metal in the ore is increased. This concentrate is then either processed to remove moisture or else used as is for extraction of the metal, or made into shapes and forms that can undergo further processing with ease of handling.
Ore bodies often contain more than one valuable metal. Tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. Additionally, a concentrate may contain more than one valuable metal. That concentrate would then be processed to separate the valuable metals into individual constituents.
Hydrometallurgy
Hydrometallurgy is concerned with processes involving aqueous solutions to extract metals from ores. The first step in the hydrometallurgical process is leaching, which involves dissolution of the valuable metals into the aqueous solution and or a suitable solvent. After the solution is separated from the ore solids, the extract is often subjected to various processes of purification and concentration before the valuable metal is recovered either in its metallic state or as a chemical compound. This may include precipitation, distillation, adsorption, and solvent extraction. The final recovery step may involve precipitation, cementation, or an electrometallurgical process. Sometimes, hydrometallurgical processes may be carried out directly on the ore material without any pretreatment steps. More often, the ore must be pretreated by various mineral processing steps, and sometimes by pyrometallurgical processes.
Pyrometallurgy
Pyrometallurgy involves high temperature processes where chemical reactions take place among gases, solids, and molten materials. Solids containing valuable metals are treated to form intermediate compounds for further processing or converted into their elemental or metallic state. Pyrometallurgical processes that involve gases and solids are typified by calcining and roasting operations. Processes that produce molten products are collectively referred to as smelting operations. The energy required to sustain the high temperature pyrometallurgical processes may derive from the exothermic nature of the chemical reactions taking place. Typically, these reactions are oxidation, e.g. of sulfide to sulfur dioxide. Often, however, energy must be added to the process by combustion of fuel or, in the case of some smelting processes, by the direct application of electrical energy.
Ellingham diagrams are a useful way of analysing the possible reactions, and so predicting their outcome.
Electrometallurgy
Electrometallurgy involves metallurgical processes that take place in some form of electrolytic cell. The most common types of electrometallurgical processes are electrowinning and electro-refining. Electrowinning is an electrolysis process used to recover metals in aqueous solution, usually as the result of an ore having undergone one or more hydrometallurgical processes. The metal of interest is plated onto the cathode, while the anode is an inert electrical conductor. Electro-refining is used to dissolve an impure metallic anode (typically from a smelting process) and produce a high purity cathode. Fused salt electrolysis is another electrometallurgical process whereby the valuable metal has been dissolved into a molten salt which acts as the electrolyte, and the valuable metal collects on the cathode of the cell. The fused salt electrolysis process is conducted at temperatures sufficient to keep both the electrolyte and the metal being produced in the molten state. The scope of electrometallurgy has significant overlap with the areas of hydrometallurgy and (in the case of fused salt electrolysis) pyrometallurgy. Additionally, electrochemical phenomena play a considerable role in many mineral processing and hydrometallurgical processes.
Ionometallurgy
Mineral processing and the extraction of metals are very energy-intensive processes, which also produce large volumes of solid residues and wastewater that in turn require energy to be treated and disposed of. Moreover, as the demand for metals increases, the metallurgical industry must rely on sources of materials with lower metal contents, both primary (e.g., mineral ores) and secondary (e.g., slags, tailings, municipal waste) raw materials. Consequently, mining activities and waste recycling must evolve towards the development of more selective, efficient and environmentally friendly mineral and metal processing routes.
Mineral processing operations are needed firstly to concentrate the mineral phases of interest and reject the unwanted material physically or chemically associated with a defined raw material. The process, however, demands about 30 GJ/tonne of metal, which accounts for about 29% of the total energy spent on mining in the USA. Meanwhile, pyrometallurgy is a significant producer of greenhouse gas emissions and harmful flue dust. Hydrometallurgy entails the consumption of large volumes of lixiviants such as H2SO4, HCl, KCN and NaCN, which have poor selectivity. Moreover, despite the environmental concerns and the use restrictions imposed by some countries, cyanidation is still considered the prime process technology to recover gold from ores. Mercury is also used by artisanal miners in less economically developed countries to concentrate gold and silver from minerals, despite its obvious toxicity. Bio-hydrometallurgy makes use of living organisms, such as bacteria and fungi, and although this method demands only inputs from the atmosphere, it requires low solid-to-liquid ratios and long contact times, which significantly reduces space-time yields.
Ionometallurgy makes use of non-aqueous ionic solvents such as ionic liquids (ILs) and deep eutectic solvents (DESs), which allows the development of closed-loop flow sheets to effectively recover metals by, for instance, integrating the metallurgical unit operations of leaching and electrowinning. It allows metals to be processed at moderate temperatures in a non-aqueous environment, which makes it possible to control metal speciation and tolerate impurities while still exhibiting suitable solubilities and current efficiencies. This simplifies conventional processing routes and allows a substantial reduction in the size of a metal processing plant.
Metal extraction with ionic fluids
DESs are fluids generally composed of two or three cheap and safe components that are capable of self-association, often through hydrogen bond interactions, to form eutectic mixtures with a melting point lower than that of each individual component. DESs are generally liquid at temperatures lower than 100 °C, and they exhibit similar physico-chemical properties to traditional ILs, while being much cheaper and environmentally friendlier. Most of them are mixtures of choline chloride (ChCl) and a hydrogen-bond donor (e.g., urea, ethylene glycol, malonic acid) or mixtures of choline chloride with a hydrated metal salt. Other choline salts (e.g. acetate, citrate, nitrate) have much higher costs or need to be synthesised, and the DESs formulated from these anions are typically much more viscous and can have higher conductivities than those based on choline chloride. This results in lower plating rates and poorer throwing power, and for this reason chloride-based DES systems are still favoured. For instance, Reline (a 1:2 mixture of choline chloride and urea) has been used to selectively recover Zn and Pb from a mixed metal oxide matrix. Similarly, Ethaline (a 1:2 mixture of choline chloride and ethylene glycol) facilitates metal dissolution in the electropolishing of steels. DESs have also demonstrated promising results in recovering metals from complex mixtures such as Cu/Zn and Ga/As, and precious metals from minerals. It has also been demonstrated that metals can be recovered from complex mixtures by electrocatalysis, using a combination of DESs as lixiviants and an oxidising agent, while metal ions can be simultaneously separated from the solution by electrowinning.
Recovery of precious metals by ionometallurgy
Precious metals are rare, naturally occurring metallic chemical elements of high economic value. Chemically, the precious metals tend to be less reactive than most elements. They include gold and silver, but also the so-called platinum group metals: ruthenium, rhodium, palladium, osmium, iridium, and platinum (see precious metals). Extraction of these metals from their corresponding hosting minerals would typically require pyrometallurgy (e.g., roasting), hydrometallurgy (cyanidation), or both as processing routes.
Early studies have demonstrated that the gold dissolution rate in Ethaline compares very favourably with that of the cyanidation method, and is further enhanced by the addition of iodine as an oxidising agent. In an industrial process the iodine has the potential to be employed as an electrocatalyst, whereby it is continuously recovered in situ from the reduced iodide by electrochemical oxidation at the anode of an electrochemical cell. Dissolved metals can be selectively deposited at the cathode by adjusting the electrode potential. The method also allows better selectivity, as part of the gangue (e.g., pyrite) tends to be dissolved more slowly.
Sperrylite (PtAs2) and moncheite (PtTe2), which are typically the more abundant platinum minerals in many orthomagmatic deposits, do not react under the same conditions in Ethaline because they are disulphide (pyrite), diarsenide (sperrylite) or ditellurides (calaverite and moncheite) minerals, which are particularly resistant to iodine oxidation. The reaction mechanism by which dissolution of platinum minerals is taking place is still under investigation.
Metal recovery from sulfide minerals with ionometallurgy
Metal sulfides (e.g., pyrite FeS2, arsenopyrite FeAsS, chalcopyrite CuFeS2) are normally processed by chemical oxidation either in aqueous media or at high temperatures. In fact, most base metals, e.g., aluminium and chromium, must be (electro)chemically reduced at high temperatures, so the process entails a high energy demand and sometimes generates large volumes of aqueous waste. In aqueous media chalcopyrite, for instance, is more difficult to dissolve chemically than covellite and chalcocite due to surface effects (formation of polysulfide species). The presence of Cl− ions has been suggested to alter the morphology of any sulfide surface formed, allowing the sulfide mineral to leach more easily by preventing passivation. DESs provide a high Cl− ion concentration and low water content, whilst reducing the need for either high additional salt or acid concentrations, circumventing most oxide chemistry. Thus, the electrodissolution of sulfide minerals has shown promising results in DES media in the absence of passivation layers, with the release into solution of metal ions which could then be recovered.
During extraction of copper from copper sulfide minerals with Ethaline, chalcocite (Cu2S) and covellite (CuS) produce a yellow solution, indicating that [CuCl4]2− complex are formed. Meanwhile, in the solution formed from chalcopyrite, Cu2+ and Cu+ species co-exist in solution due to the generation of reducing Fe2+ species at the cathode. The best selective recovery of copper (>97%) from chalcopyrite can be obtained with a mixed DES of 20 wt.% ChCl-oxalic acid and 80 wt.% Ethaline.
Metal recovery from oxide compounds with Ionometallurgy
Recovery of metals from oxide matrixes is generally carried out using mineral acids. However, electrochemical dissolution of metal oxides in DESs can enhance the dissolution rate by a factor of more than 10,000 in pH-neutral solutions.
Studies have shown that ionic oxides such as ZnO tend to have high solubility in ChCl:malonic acid, ChCl:urea and Ethaline, which can resemble the solubilities in aqueous acidic solutions, e.g., HCl. Covalent oxides such as TiO2, however, exhibits almost no solubility. The electrochemical dissolution of metal oxides is strongly dependent on the proton activity from the HBD, i.e. capability of the protons to act as oxygen acceptors, and on the temperature. It has been reported that eutectic ionic fluids of lower pH-values, such as ChCl:oxalic acid and ChCl:lactic acid, allow a better solubility than that of higher pH (e.g., ChCl:acetic acid). Hence, different solubilities can be obtained by using, for instance, different carboxylic acids as HBD.
Outlook
Currently, the stability of most ionic liquids under practical electrochemical conditions is unknown, and the choice of ionic fluid is still empirical, as there are almost no data on metal ion thermodynamics to feed into solubility and speciation models. There are also no Pourbaix diagrams available, no standard redox potentials, and little knowledge of speciation or pH values. It must be noted that most processes reported in the literature involving ionic fluids have a Technology Readiness Level (TRL) of 3 (experimental proof of concept) or 4 (technology validated in the lab), which is a disadvantage for short-term implementation. However, ionometallurgy has the potential to recover metals effectively in a more selective and sustainable way, as it uses environmentally benign solvents, reduces greenhouse gas emissions and avoids corrosive and harmful reagents.
References
Further reading
Gilchrist, J.D. (1989). Extraction Metallurgy, Pergamon Press.
Mailoo Selvaratnam, (1996): Guided Approach to Learning Chemistry.
Metallurgy | Extractive metallurgy | [
"Chemistry",
"Materials_science",
"Engineering"
] | 3,219 | [
"Metallurgy",
"Materials science",
"nan"
] |
145,159 | https://en.wikipedia.org/wiki/Atropine | Atropine is a tropane alkaloid and anticholinergic medication used to treat certain types of nerve agent and pesticide poisonings as well as some types of slow heart rate, and to decrease saliva production during surgery. It is typically given intravenously or by injection into a muscle. Eye drops are also available which are used to treat uveitis and early amblyopia. The intravenous solution usually begins working within a minute and lasts half an hour to an hour. Large doses may be required to treat some poisonings.
Common side effects include dry mouth, abnormally large pupils, urinary retention, constipation, and a fast heart rate. It should generally not be used in people with closed-angle glaucoma. While there is no evidence that its use during pregnancy causes birth defects, this has not been well studied so sound clinical judgment should be used. It is likely safe during breastfeeding. It is an antimuscarinic (a type of anticholinergic) that works by inhibiting the parasympathetic nervous system.
Atropine occurs naturally in a number of plants of the nightshade family, including deadly nightshade (belladonna), Jimson weed, and mandrake. It was first isolated in 1833. It is on the World Health Organization's List of Essential Medicines and is available as a generic medication.
Medical uses
Eyes
Topical atropine is used as a cycloplegic, to temporarily paralyze the accommodation reflex, and as a mydriatic, to dilate the pupils. Atropine degrades slowly, typically wearing off in 7 to 14 days, so it is generally used as a therapeutic mydriatic, whereas tropicamide (a shorter-acting cholinergic antagonist) or phenylephrine (an α-adrenergic agonist) is preferred as an aid to ophthalmic examination.
In refractive and accommodative amblyopia, when occlusion is not appropriate, atropine is sometimes given to induce blur in the good eye. Evidence suggests that atropine penalization is just as effective as occlusion in improving visual acuity.
Antimuscarinic topical medication is effective in slowing myopia progression in children; accommodation difficulties and papillae and follicles are possible side effects. All doses of atropine appear similarly effective, while higher doses have greater side effects. The lower dose of 0.01% is thus generally recommended due to fewer side effects and potentially less rebound worsening when the atropine is stopped.
Heart
Injections of atropine are used in the treatment of symptomatic or unstable bradycardia.
Atropine was previously included in international resuscitation guidelines for use in cardiac arrest associated with asystole and PEA but was removed from these guidelines in 2010 due to a lack of evidence for its effectiveness. For symptomatic bradycardia, the usual dosage is 0.5 to 1 mg IV push; this may be repeated every 3 to 5 minutes, up to a total dose of 3 mg (maximum 0.04 mg/kg).
Atropine is also useful in treating second-degree heart block Mobitz type 1 (Wenckebach block), and also third-degree heart block with a high Purkinje or AV-nodal escape rhythm. It is usually not effective in second-degree heart block Mobitz type 2, and in third-degree heart block with a low Purkinje or ventricular escape rhythm.
Atropine has also been used to prevent a low heart rate during intubation of children; however, the evidence does not support this use.
Secretions
Atropine's actions on the parasympathetic nervous system inhibit salivary and mucous glands. The drug may also inhibit sweating via the sympathetic nervous system. This can be useful in treating hyperhidrosis, and can prevent the death rattle of dying patients. Even though atropine has not been officially indicated for either of these purposes by the FDA, it has been used by physicians for these purposes.
Poisonings
Atropine is not an actual antidote for organophosphate poisoning. However, by blocking the action of acetylcholine at muscarinic receptors, atropine also serves as a treatment for poisoning by organophosphate insecticides and nerve agents, such as tabun (GA), sarin (GB), soman (GD), and VX. Troops who are likely to be attacked with chemical weapons often carry autoinjectors with atropine and oxime, for rapid injection into the muscles of the thigh. In a developed case of nerve gas poisoning, maximum atropinization is desirable. Atropine is often used in conjunction with the oxime pralidoxime chloride.
Some of the nerve agents attack and destroy acetylcholinesterase by phosphorylation, so the action of acetylcholine becomes excessive and prolonged. Pralidoxime (2-PAM) can be effective against organophosphate poisoning because it can re-cleave this phosphorylation. Atropine can be used to reduce the effect of the poisoning by blocking muscarinic acetylcholine receptors, which would otherwise be overstimulated, by excessive acetylcholine accumulation.
Atropine or diphenhydramine can be used to treat muscarine intoxication.
Atropine was added to cafeteria salt shakers in an attempt to poison the staff of Radio Free Europe during the Cold War.
Irinotecan-induced diarrhea
Atropine has been observed to prevent or treat irinotecan induced acute diarrhea.
Side effects
Adverse reactions to atropine include ventricular fibrillation, supraventricular or ventricular tachycardia, dizziness, nausea, blurred vision, loss of balance, dilated pupils, photophobia, dry mouth and potentially extreme confusion, deliriant hallucinations, and excitation especially among the elderly. These latter effects are because atropine can cross the blood–brain barrier. Because of the hallucinogenic properties, some have used the drug recreationally, though this is potentially dangerous and often unpleasant.
In overdoses, atropine is poisonous. Atropine is sometimes added to potentially addictive drugs, particularly antidiarrhea opioid drugs such as diphenoxylate or difenoxin, wherein the secretion-reducing effects of the atropine can also aid the antidiarrhea effects.
Although atropine treats bradycardia (slow heart rate) in emergency settings, it can cause paradoxical heart rate slowing when given at very low doses (i.e. <0.5 mg), presumably as a result of central action in the CNS. One proposed mechanism for atropine's paradoxical bradycardia effect at low doses involves blockade of inhibitory presynaptic muscarinic autoreceptors, thereby blocking a system that inhibits the parasympathetic response.
Atropine is incapacitating at doses of 10 to 20 mg per person. Its LD50 is estimated to be 453 mg per person (by mouth) with a probit slope of 1.8.
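For illustration, a conventional log-dose probit model can translate the quoted LD50 and slope into an expected lethal fraction at a given dose; the model form below is a standard toxicological convention assumed here, not something stated in this article.

```python
# A sketch of how a median lethal dose and probit slope are conventionally
# combined (assumed model): expected lethal fraction = Phi(slope * log10(dose/LD50)),
# where Phi is the standard normal cumulative distribution function.
import math

def lethal_fraction(dose_mg, ld50_mg=453.0, probit_slope=1.8):
    z = probit_slope * math.log10(dose_mg / ld50_mg)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for dose in (100, 453, 1000):
    print(dose, round(lethal_fraction(dose), 2))   # 0.12, 0.5, 0.73
```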
The antidote to atropine is physostigmine or pilocarpine.
A common mnemonic used to describe the physiologic manifestations of atropine overdose is: "hot as a hare, blind as a bat, dry as a bone, red as a beet, and mad as a hatter". These associations reflect the specific changes of warm, dry skin from decreased sweating, blurry vision, decreased lacrimation, vasodilation, and central nervous system effects on muscarinic receptors, type 4 and 5. This set of symptoms is known as anticholinergic toxidrome, and may also be caused by other drugs with anticholinergic effects, such as hyoscine hydrobromide (scopolamine), diphenhydramine, phenothiazine antipsychotics and benztropine.
Contraindications
It is generally contraindicated in people with glaucoma, pyloric stenosis, or prostatic hypertrophy, except in doses ordinarily used for preanesthesia.
Chemistry
Atropine, a tropane alkaloid, is an enantiomeric mixture of d-hyoscyamine and l-hyoscyamine, with most of its physiological effects due to l-hyoscyamine, the 3(S)-endo isomer of atropine. Its pharmacological effects are due to binding to muscarinic acetylcholine receptors. It is an antimuscarinic agent. Significant levels are achieved in the CNS within 30 minutes to 1 hour and disappear rapidly from the blood with a half-life of 2 hours. About 60% is excreted unchanged in the urine, and most of the rest appears in the urine as hydrolysis and conjugation products. Noratropine (24%), atropine-N-oxide (15%), tropine (2%), and tropic acid (3%) appear to be the major metabolites, while 50% of the administered dose is excreted as apparently unchanged atropine. No conjugates were detectable. Evidence that atropine is present as (+)-hyoscyamine was found, suggesting that stereoselective metabolism of atropine probably occurs. Effects on the iris and ciliary muscle may persist for longer than 72 hours.
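As a simple illustration of the quoted half-life, a one-compartment, first-order elimination model (an assumption; real atropine kinetics are more complex) gives the fraction of the peak plasma concentration remaining over time.

```python
# A minimal one-compartment sketch of the elimination half-life quoted above
# (first-order decay is an assumed simplification, purely illustrative).
import math

def plasma_fraction(hours, half_life_h=2.0):
    """Fraction of the peak plasma concentration remaining after `hours`."""
    return math.exp(-math.log(2.0) * hours / half_life_h)

for t in (1, 2, 4, 8):
    print(t, round(plasma_fraction(t), 3))   # 0.707, 0.5, 0.25, 0.062
```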
The most common atropine compound used in medicine is atropine sulfate (monohydrate), (C17H23NO3)2·H2SO4·H2O; the full chemical name is 1αH,5αH-tropan-3α-ol (±)-tropate (ester), sulfate monohydrate.
Pharmacology
In general, atropine counters the "rest and digest" activity of glands regulated by the parasympathetic nervous system, producing clinical effects such as increased heart rate and delayed gastric emptying. This occurs because atropine is a competitive, reversible antagonist of the muscarinic acetylcholine receptors (acetylcholine being the main neurotransmitter used by the parasympathetic nervous system).
Atropine is a competitive antagonist of the muscarinic acetylcholine receptor types M1, M2, M3, M4 and M5. It is classified as an anticholinergic drug (parasympatholytic).
In cardiac uses, it works as a nonselective muscarinic acetylcholine receptor antagonist: it increases firing of the sinoatrial (SA) node and conduction through the atrioventricular (AV) node of the heart, opposes the actions of the vagus nerve, blocks acetylcholine receptor sites, and decreases bronchial secretions.
In the eye, atropine induces mydriasis by blocking the contraction of the circular pupillary sphincter muscle, which is normally stimulated by acetylcholine release, thereby allowing the radial iris dilator muscle to contract and dilate the pupil. Atropine induces cycloplegia by paralyzing the ciliary muscle; the resulting loss of accommodation allows accurate refraction in children, helps to relieve pain associated with iridocyclitis, and treats ciliary block (malignant) glaucoma.
The vagus (parasympathetic) nerves that innervate the heart release acetylcholine (ACh) as their primary neurotransmitter. ACh binds to muscarinic receptors (M2) that are found principally on cells comprising the sinoatrial (SA) and atrioventricular (AV) nodes. Muscarinic receptors are coupled to the Gi subunit; therefore, vagal activation decreases cAMP. Gi-protein activation also leads to the activation of KACh channels that increase potassium efflux and hyperpolarizes the cells.
Increases in vagal activities to the SA node decrease the firing rate of the pacemaker cells by decreasing the slope of the pacemaker potential (phase 4 of the action potential); this decreases heart rate (negative chronotropy). The change in phase 4 slope results from alterations in potassium and calcium currents, as well as the slow-inward sodium current that is thought to be responsible for the pacemaker current (If). By hyperpolarizing the cells, vagal activation increases the cell's threshold for firing, which contributes to the reduction in the firing rate. Similar electrophysiological effects also occur at the AV node; however, in this tissue, these changes are manifested as a reduction in impulse conduction velocity through the AV node (negative dromotropy). In the resting state, there is a large degree of vagal tone in the heart, which is responsible for low resting heart rates.
There is also some vagal innervation of the atrial muscle, and to a much lesser extent, the ventricular muscle. Vagus activation, therefore, results in modest reductions in atrial contractility (inotropy) and even smaller decreases in ventricular contractility.
Muscarinic receptor antagonists bind to muscarinic receptors thereby preventing ACh from binding to and activating the receptor. By blocking the actions of ACh, muscarinic receptor antagonists very effectively block the effects of vagal nerve activity on the heart. By doing so, they increase heart rate and conduction velocity.
History
The name atropine was coined in the 19th century, when pure extracts from the belladonna plant Atropa belladonna were first made. The medicinal use of preparations from plants in the nightshade family is much older however. Mandragora (mandrake) was described by Theophrastus in the fourth century B.C. for the treatment of wounds, gout, and sleeplessness, and as a love potion. By the first century A.D. Dioscorides recognized wine of mandrake as an anaesthetic for treatment of pain or sleeplessness, to be given before surgery or cautery. The use of nightshade preparations for anesthesia, often in combination with opium, persisted throughout the Roman and Islamic Empires and continued in Europe until superseded in the 19th century by modern anesthetics.
Atropine-rich extracts from the Egyptian henbane plant (another nightshade) were used by Cleopatra in the last century B.C. to dilate the pupils of her eyes, in the hope that she would appear more alluring. Likewise in the Renaissance, women used the juice of the berries of the nightshade Atropa belladonna to enlarge their pupils for cosmetic reasons. This practice resumed briefly in the late nineteenth and early twentieth century in Paris.
The pharmacological study of belladonna extracts was begun by the German chemist Friedlieb Ferdinand Runge (1795–1867). In 1831, the German pharmacist Heinrich F. G. Mein (1799–1864) succeeded in preparing a pure crystalline form of the active substance, which was named atropine. The substance was first synthesized by German chemist Richard Willstätter in 1901.
Natural sources
Atropine is found in many members of the family Solanaceae. The most commonly found sources are Atropa belladonna (the deadly nightshade), Datura innoxia, D. wrightii, D. metel, and D. stramonium. Other sources include members of the genera Brugmansia (angel's trumpets) and Hyoscyamus.
Synthesis
Atropine can be synthesized by the reaction of tropine with tropic acid in the presence of hydrochloric acid.
Biosynthesis
The biosynthesis of atropine starting from l-phenylalanine first undergoes a transamination forming phenylpyruvic acid which is then reduced to phenyl-lactic acid. Coenzyme A then couples phenyl-lactic acid with tropine forming littorine, which then undergoes a radical rearrangement initiated with a P450 enzyme forming hyoscyamine aldehyde. A dehydrogenase then reduces the aldehyde to a primary alcohol making (−)-hyoscyamine, which upon racemization forms atropine.
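The pathway above is a fixed sequence of transformations; the sketch below merely encodes the steps described in the text as a Python data structure (the labels are taken from the prose, not from a biochemical database):

```python
# The atropine biosynthesis steps described above, as (substrate, transformation, product).
PATHWAY = [
    ("L-phenylalanine", "transamination", "phenylpyruvic acid"),
    ("phenylpyruvic acid", "reduction", "phenyl-lactic acid"),
    ("phenyl-lactic acid + tropine", "coupling via coenzyme A", "littorine"),
    ("littorine", "P450-initiated radical rearrangement", "hyoscyamine aldehyde"),
    ("hyoscyamine aldehyde", "dehydrogenase reduction to primary alcohol", "(-)-hyoscyamine"),
    ("(-)-hyoscyamine", "racemization", "atropine"),
]

for substrate, step, product in PATHWAY:
    print(f"{substrate} --[{step}]--> {product}")
```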
Name
The species name "belladonna" ('beautiful woman' in Italian) comes from the original use of deadly nightshade to dilate the pupils of the eyes for cosmetic effect. Both atropine and the genus name for deadly nightshade derive from Atropos, one of the three Fates who, according to Greek mythology, chose how a person was to die.
See also
Apoatropine
Mark I Nerve Agent Antidote Kit
References
External links
Antidotes
Chemical substances for emergency medicine
Deliriants
Entheogens
Esters
Drugs developed by Pfizer
Glycine receptor agonists
M1 receptor antagonists
M2 receptor antagonists
M3 receptor antagonists
M4 receptor antagonists
M5 receptor antagonists
Medical mnemonics
Oneirogens
Ophthalmology drugs
Plant toxins
Secondary metabolites
Tropane alkaloids
Tropane alkaloids found in Solanaceae
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Atropine | [
"Chemistry"
] | 3,693 | [
"Chemical ecology",
"Secondary metabolites",
"Esters",
"Chemical substances for emergency medicine",
"Tropane alkaloids",
"Plant toxins",
"Functional groups",
"Organic compounds",
"Alkaloids by chemical classification",
"Chemicals in medicine",
"Metabolism"
] |
145,197 | https://en.wikipedia.org/wiki/Metallic%20hydrogen | Metallic hydrogen is a phase of hydrogen in which it behaves like an electrical conductor. This phase was predicted in 1935 on theoretical grounds by Eugene Wigner and Hillard Bell Huntington.
At high pressure and temperatures, metallic hydrogen can exist as a partial liquid rather than a solid, and researchers think it might be present in large quantities in the hot and gravitationally compressed interiors of Jupiter and Saturn, as well as in some exoplanets.
Theoretical predictions
Hydrogen under pressure
Though often placed at the top of the alkali metal column in the periodic table, hydrogen does not, under ordinary conditions, exhibit the properties of an alkali metal. Instead, it forms diatomic molecules, similar to halogens and some nonmetals in the second period of the periodic table, such as nitrogen and oxygen. Diatomic hydrogen is a gas that, at atmospheric pressure, liquefies and solidifies only at very low temperature (20 K and 14 K respectively).
In 1935, physicists Eugene Wigner and Hillard Bell Huntington predicted that under an immense pressure of around 25 GPa, hydrogen would display metallic properties: instead of discrete molecules (which consist of two electrons bound between two protons), a bulk phase would form with a solid lattice of protons and the electrons delocalized throughout. Since then, producing metallic hydrogen in the laboratory has been described as "the holy grail of high-pressure physics".
The initial prediction about the amount of pressure needed was eventually shown to be too low. Since the first work by Wigner and Huntington, the more modern theoretical calculations point toward higher but potentially achievable metallization pressures of around 400 to 500 GPa.
Liquid metallic hydrogen
Helium-4 is a liquid at normal pressure near absolute zero, a consequence of its high zero-point energy (ZPE). The ZPE of protons in a dense state is also high, and a decline in the ordering energy (relative to the ZPE) is expected at high pressures. Arguments have been advanced by Neil Ashcroft and others that there is a melting point maximum in compressed hydrogen, but also that there might be a range of densities, at pressures around 400 GPa, where hydrogen would be a liquid metal, even at low temperatures.
Geng predicted that the ZPE of protons indeed lowers the melting temperature of hydrogen to a minimum in this pressure range.
Within this flat region there might be an elemental mesophase intermediate between the liquid and solid state, which could be metastably stabilized down to low temperature and enter a supersolid state.
Superconductivity
In 1968, Neil Ashcroft suggested that metallic hydrogen might be a superconductor, up to room temperature (around 290 K). This hypothesis is based on an expected strong coupling between conduction electrons and lattice vibrations.
As a rocket propellant
Metastable metallic hydrogen may have potential as a highly efficient rocket propellant; the metallic form would be stored, and the energy of its decompression and conversion to the diatomic gaseous form when released through a nozzle used to generate thrust, with a theoretical specific impulse of up to 1700 seconds (for reference, the current most efficient chemical rocket propellants have a specific impulse of less than 500 s), although a metastable form suitable for mass-production and conventional high-volume storage may not exist. Another significant issue is the heat of the reaction, which at over 6000 K is too high for any known engine materials to be used. This would necessitate diluting the metallic hydrogen with water or liquid hydrogen, a mixture that would still provide a significant performance boost over current propellants.
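To see why a specific impulse of 1700 s would matter, one can compare delta-v budgets with the Tsiolkovsky rocket equation. The sketch below is a back-of-the-envelope comparison; the mass ratio is an arbitrary illustrative assumption.

```python
from math import log

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s: float, mass_ratio: float) -> float:
    """Tsiolkovsky rocket equation: delta-v = Isp * g0 * ln(m0 / m_final)."""
    return isp_s * G0 * log(mass_ratio)

MASS_RATIO = 5.0  # assumed wet/dry mass ratio, purely for illustration
for label, isp in (("chemical (upper bound quoted above)", 500.0),
                   ("metallic hydrogen (theoretical)", 1700.0)):
    print(f"{label:40s} Isp = {isp:6.0f} s -> dv = {delta_v(isp, MASS_RATIO)/1000:.1f} km/s")
```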
Possibility of novel types of quantum fluid
Presently known "super" states of matter are superconductors, superfluid liquids and gases, and supersolids. Egor Babaev predicted that if hydrogen and deuterium have liquid metallic states, they might have quantum ordered states that cannot be classified as superconducting or superfluid in the usual sense. Instead, they might represent two possible novel types of quantum fluids: superconducting superfluids and metallic superfluids. Such fluids were predicted to have highly unusual reactions to external magnetic fields and rotations, which might provide a means for experimental verification of Babaev's predictions. It has also been suggested that, under the influence of a magnetic field, hydrogen might exhibit phase transitions from superconductivity to superfluidity and vice versa.
Lithium alloying reduces requisite pressure
In 2009, Zurek et al. predicted that alloys such as LiH6 would be stable metals at only one quarter of the pressure required to metallize hydrogen, and that similar effects should hold for alloys of type LiHn and possibly "other alkali high-hydride systems", i.e. alloys of type XHn, where X is an alkali metal. This was later verified in AcH8 and LaH10 with Tc approaching 270 K, leading to speculation that other compounds may even be stable at mere MPa pressures with room-temperature superconductivity.
Experimental pursuit
Shock-wave compression, 1996
In March 1996, a group of scientists at Lawrence Livermore National Laboratory reported that they had serendipitously produced the first identifiably metallic hydrogen for about a microsecond at temperatures of thousands of kelvins, pressures of over 100 GPa, and densities of approximately 0.6 g/cm3. The team did not expect to produce metallic hydrogen, as it was not using solid hydrogen, thought to be necessary, and was working at temperatures above those specified by metallization theory. Previous studies, in which solid hydrogen was compressed inside diamond anvils to static pressures of a few hundred GPa, did not confirm detectable metallization. The team had sought simply to measure the less extreme electrical conductivity changes they expected. The researchers used a 1960s-era light-gas gun, originally employed in guided missile studies, to shoot an impactor plate into a sealed container containing a half-millimeter thick sample of liquid hydrogen. The liquid hydrogen was in contact with wires leading to a device measuring electrical resistance. The scientists found that, as pressure rose to 140 GPa, the electronic energy band gap, a measure of electrical resistance, fell to almost zero. The band gap of hydrogen in its uncompressed state is about 15 eV, making it an insulator, but as the pressure increased significantly, the band gap gradually fell to 0.3 eV. Because the thermal energy of the fluid (the temperature became about 3,000 K due to compression of the sample) was above 0.3 eV, the hydrogen might be considered metallic.
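A quick heuristic check of the closing argument, using a temperature of roughly 3000 K and a compressed band gap of roughly 0.3 eV as quoted above (only the Boltzmann constant is added here):

```python
K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

T = 3000.0       # approximate fluid temperature quoted above, K
GAP_EV = 0.3     # band gap reached under compression, as quoted above

kT = K_B_EV * T
print(f"k_B T          = {kT:.2f} eV")
print(f"(3/2) k_B T    = {1.5 * kT:.2f} eV")
print(f"compressed gap = {GAP_EV:.2f} eV")
# Either measure of the thermal energy is comparable to (or above) the residual gap,
# which is why the hot, compressed fluid can be regarded as behaving metallically.
```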
Other experimental research, 1996–2004
Many experiments are continuing in the production of metallic hydrogen in laboratory conditions at static compression and low temperature. Arthur Ruoff and Chandrabhas Narayana from Cornell University in 1998, and later Paul Loubeyre and René LeToullec from Commissariat à l'Énergie Atomique, France in 2002, have shown that at pressures close to those at the center of the Earth (around 320–340 GPa) and low temperatures, hydrogen is still not a true alkali metal, because of the non-zero band gap. The quest to see metallic hydrogen in laboratory at low temperature and static compression continues. Studies are also ongoing on deuterium. Shahriar Badiei and Leif Holmlid from the University of Gothenburg have shown in 2004 that condensed metallic states made of excited hydrogen atoms (Rydberg matter) are effective promoters to metallic hydrogen; however, these results are disputed.
Pulsed laser heating experiment, 2008
The theoretically predicted maximum of the melting curve (the prerequisite for the liquid metallic hydrogen) was discovered by Shanti Deemyad and Isaac F. Silvera by using pulsed laser heating. Hydrogen-rich molecular silane (SiH4) was claimed to be metallized and become superconducting by M.I. Eremets et al. This claim is disputed, and their results have not been repeated.
Observation of liquid metallic hydrogen, 2011
In 2011 Eremets and Troyan reported observing the liquid metallic state of hydrogen and deuterium at static pressures of 260–300 GPa. This claim was questioned by other researchers in 2012.
Z machine, 2015
In 2015, scientists at the Z Pulsed Power Facility announced the creation of metallic deuterium using dense liquid deuterium, an electrical insulator-to-conductor transition associated with an increase in optical reflectivity.
Claimed observation of solid metallic hydrogen, 2016
On 5 October 2016, Ranga Dias and Isaac F. Silvera of Harvard University released claims in a pre-print manuscript of experimental evidence that solid metallic hydrogen had been synthesized in the laboratory at a pressure of around 495 GPa using a diamond anvil cell. A revised version was published in Science in 2017.
In June 2019 a team at the Commissariat à l'énergie atomique et aux énergies alternatives (French Alternative Energies & Atomic Energy Commission) claimed to have created metallic hydrogen at around 425 GPa.
W. Ferreira et al. (including Dias and Silvera) repeated their experiments multiple times after the Science article was published, finally publishing in 2023 and again finding metallization of hydrogen. This time, the pressure was released to assess the question of metastability. Metallic hydrogen was not found to be metastable to zero pressure.
Experiments on fluid deuterium at the National Ignition Facility, 2018
In August 2018, scientists announced new observations regarding the rapid transformation of fluid deuterium from an insulating to a metallic form below 2000 K. Remarkable agreement is found between the experimental data and the predictions based on quantum Monte Carlo simulations, which is expected to be the most accurate method to date. This may help researchers better understand giant gas planets, such as Jupiter, Saturn and related exoplanets, since such planets are thought to contain a lot of liquid metallic hydrogen, which may be responsible for their observed powerful magnetic fields.
See also
Hydride#Interstitial hydrides or metallic hydrides
Hydrogen safety#Cryogenics
Juno (spacecraft)
Metallization pressure
Slush hydrogen
Timeline of hydrogen technologies
References
2016 in science
Hydrogen
Hydrogen physics
Hydrogen
October 2016
Phases of matter
Physical chemistry
Superfluidity | Metallic hydrogen | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,054 | [
"Periodic table",
"Physical phenomena",
"Phase transitions",
"Applied and interdisciplinary physics",
"Properties of chemical elements",
"Allotropes",
"Phases of matter",
"Superfluidity",
"Materials",
"Condensed matter physics",
"nan",
"Exotic matter",
"Physical chemistry",
"Matter",
"Fl... |
145,343 | https://en.wikipedia.org/wiki/Wave%20function | In quantum physics, a wave function (or wavefunction) is a mathematical description of the quantum state of an isolated quantum system. The most common symbols for a wave function are the Greek letters and (lower-case and capital psi, respectively). Wave functions are complex-valued. For example, a wave function might assign a complex number to each point in a region of space. The Born rule provides the means to turn these complex probability amplitudes into actual probabilities. In one common form, it says that the squared modulus of a wave function that depends upon position is the probability density of measuring a particle as being at a given place. The integral of a wavefunction's squared modulus over all the system's degrees of freedom must be equal to 1, a condition called normalization. Since the wave function is complex-valued, only its relative phase and relative magnitude can be measured; its value does not, in isolation, tell anything about the magnitudes or directions of measurable observables. One has to apply quantum operators, whose eigenvalues correspond to sets of possible results of measurements, to the wave function and calculate the statistical distributions for measurable quantities.
Wave functions can be functions of variables other than position, such as momentum. The information represented by a wave function that is dependent upon position can be converted into a wave function dependent upon momentum and vice versa, by means of a Fourier transform. Some particles, like electrons and photons, have nonzero spin, and the wave function for such particles includes spin as an intrinsic, discrete degree of freedom; other discrete variables can also be included, such as isospin. When a system has internal degrees of freedom, the wave function at each point in the continuous degrees of freedom (e.g., a point in space) assigns a complex number for each possible value of the discrete degrees of freedom (e.g., z-component of spin). These values are often displayed in a column matrix (e.g., a 2 × 1 column vector for a non-relativistic electron with spin 1/2).
According to the superposition principle of quantum mechanics, wave functions can be added together and multiplied by complex numbers to form new wave functions and form a Hilbert space. The inner product of two wave functions is a measure of the overlap between the corresponding physical states and is used in the foundational probabilistic interpretation of quantum mechanics, the Born rule, relating transition probabilities to inner products. The Schrödinger equation determines how wave functions evolve over time, and a wave function behaves qualitatively like other waves, such as water waves or waves on a string, because the Schrödinger equation is mathematically a type of wave equation. This explains the name "wave function", and gives rise to wave–particle duality. However, the wave function in quantum mechanics describes a kind of physical phenomenon, as of 2023 still open to different interpretations, which fundamentally differs from that of classic mechanical waves.
Historical background
In 1900, Max Planck postulated the proportionality between the frequency f of a photon and its energy E,
$$E = hf,$$
and in 1916 the corresponding relation between a photon's momentum p and wavelength λ,
$$\lambda = \frac{h}{p},$$
where h is the Planck constant. In 1923, De Broglie was the first to suggest that the relation $\lambda = \frac{h}{p}$, now called the De Broglie relation, holds for massive particles, the chief clue being Lorentz invariance, and this can be viewed as the starting point for the modern development of quantum mechanics. The equations represent wave–particle duality for both massless and massive particles.
In the 1920s and 1930s, quantum mechanics was developed using calculus and linear algebra. Those who used the techniques of calculus included Louis de Broglie, Erwin Schrödinger, and others, developing "wave mechanics". Those who applied the methods of linear algebra included Werner Heisenberg, Max Born, and others, developing "matrix mechanics". Schrödinger subsequently showed that the two approaches were equivalent.
In 1926, Schrödinger published the famous wave equation now named after him, the Schrödinger equation. This equation was based on classical conservation of energy using quantum operators and the de Broglie relations and the solutions of the equation are the wave functions for the quantum system. However, no one was clear on how to interpret it. At first, Schrödinger and others thought that wave functions represent particles that are spread out with most of the particle being where the wave function is large. This was shown to be incompatible with the elastic scattering of a wave packet (representing a particle) off a target; it spreads out in all directions.
While a scattered particle may scatter in any direction, it does not break up and take off in all directions. In 1926, Born provided the perspective of probability amplitude. This relates calculations of quantum mechanics directly to probabilistic experimental observations. It is accepted as part of the Copenhagen interpretation of quantum mechanics. There are many other interpretations of quantum mechanics. In 1927, Hartree and Fock made the first step in an attempt to solve the N-body wave function, and developed the self-consistency cycle: an iterative algorithm to approximate the solution. Now it is also known as the Hartree–Fock method. The Slater determinant and permanent (of a matrix) were part of the method, provided by John C. Slater.
Schrödinger did encounter an equation for the wave function that satisfied relativistic energy conservation before he published the non-relativistic one, but discarded it as it predicted negative probabilities and negative energies. In 1927, Klein, Gordon and Fock also found it, but incorporated the electromagnetic interaction and proved that it was Lorentz invariant. De Broglie also arrived at the same equation in 1928. This relativistic wave equation is now most commonly known as the Klein–Gordon equation.
In 1927, Pauli phenomenologically found a non-relativistic equation to describe spin-1/2 particles in electromagnetic fields, now called the Pauli equation. Pauli found the wave function was not described by a single complex function of space and time, but needed two complex numbers, which respectively correspond to the spin +1/2 and −1/2 states of the fermion. Soon after in 1928, Dirac found an equation from the first successful unification of special relativity and quantum mechanics applied to the electron, now called the Dirac equation. In this, the wave function is a spinor represented by four complex-valued components: two for the electron and two for the electron's antiparticle, the positron. In the non-relativistic limit, the Dirac wave function resembles the Pauli wave function for the electron. Later, other relativistic wave equations were found.
Wave functions and wave equations in modern theories
All these wave equations are of enduring importance. The Schrödinger equation and the Pauli equation are under many circumstances excellent approximations of the relativistic variants. They are considerably easier to solve in practical problems than the relativistic counterparts.
The Klein–Gordon equation and the Dirac equation, while being relativistic, do not represent full reconciliation of quantum mechanics and special relativity. The branch of quantum mechanics where these equations are studied the same way as the Schrödinger equation, often called relativistic quantum mechanics, while very successful, has its limitations (see e.g. Lamb shift) and conceptual problems (see e.g. Dirac sea).
Relativity makes it inevitable that the number of particles in a system is not constant. For full reconciliation, quantum field theory is needed.
In this theory, the wave equations and the wave functions have their place, but in a somewhat different guise. The main objects of interest are not the wave functions, but rather operators, so called field operators (or just fields where "operator" is understood) on the Hilbert space of states (to be described next section). It turns out that the original relativistic wave equations and their solutions are still needed to build the Hilbert space. Moreover, the free fields operators, i.e. when interactions are assumed not to exist, turn out to (formally) satisfy the same equation as do the fields (wave functions) in many cases.
Thus the Klein–Gordon equation (spin 0) and the Dirac equation (spin 1/2) in this guise remain in the theory. Higher spin analogues include the Proca equation (spin 1), Rarita–Schwinger equation (spin 3/2), and, more generally, the Bargmann–Wigner equations. For massless free fields two examples are the free field Maxwell equation (spin 1) and the free field Einstein equation (spin 2) for the field operators.
All of them are essentially a direct consequence of the requirement of Lorentz invariance. Their solutions must transform under Lorentz transformation in a prescribed way, i.e. under a particular representation of the Lorentz group, and that, together with a few other reasonable demands, e.g. the cluster decomposition property, with implications for causality, is enough to fix the equations.
This applies to free field equations; interactions are not included. If a Lagrangian density (including interactions) is available, then the Lagrangian formalism will yield an equation of motion at the classical level. This equation may be very complex and not amenable to solution. Any solution would refer to a fixed number of particles and would not account for the term "interaction" as referred to in these theories, which involves the creation and annihilation of particles and not external potentials as in ordinary "first quantized" quantum theory.
In string theory, the situation remains analogous. For instance, a wave function in momentum space has the role of Fourier expansion coefficient in a general state of a particle (string) with momentum that is not sharply defined.
Definition (one spinless particle in one dimension)
For now, consider the simple case of a non-relativistic single particle, without spin, in one spatial dimension. More general cases are discussed below.
According to the postulates of quantum mechanics, the state of a physical system, at fixed time t, is given by the wave function Ψ belonging to a separable complex Hilbert space. As such, the inner product of two wave functions Ψ1 and Ψ2 can be defined as the complex number (at time t)
$$(\Psi_1, \Psi_2) = \int_{-\infty}^{\infty} \Psi_1^*(x, t)\,\Psi_2(x, t)\,dx.$$
More details are given below. However, the inner product of a wave function Ψ with itself,
$$(\Psi, \Psi) = \int_{-\infty}^{\infty} |\Psi(x, t)|^2\,dx,$$
is always a positive real number. The number $\|\Psi\| = \sqrt{(\Psi, \Psi)}$ (not $(\Psi, \Psi)$) is called the norm of the wave function Ψ.
The separable Hilbert space being considered is infinite-dimensional, which means there is no finite set of square integrable functions which can be added together in various combinations to create every possible square integrable function.
Position-space wave functions
The state of such a particle is completely described by its wave function Ψ(x, t), where x is position and t is time. This is a complex-valued function of two real variables x and t.
For one spinless particle in one dimension, if the wave function is interpreted as a probability amplitude, the square modulus of the wave function, the positive real number
$$|\Psi(x, t)|^2 = \Psi^*(x, t)\,\Psi(x, t) = \rho(x, t),$$
is interpreted as the probability density for a measurement of the particle's position at a given time t. The asterisk indicates the complex conjugate. If the particle's position is measured, its location cannot be determined from the wave function, but is described by a probability distribution.
Normalization condition
The probability that its position x will be in the interval a ≤ x ≤ b is the integral of the density over this interval:
$$P_{a \le x \le b}(t) = \int_a^b |\Psi(x, t)|^2\,dx,$$
where t is the time at which the particle was measured. This leads to the normalization condition:
$$\int_{-\infty}^{\infty} |\Psi(x, t)|^2\,dx = 1,$$
because if the particle is measured, there is 100% probability that it will be somewhere.
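The normalization condition can be checked numerically for a concrete state; the Gaussian wave packet below is an assumed example, not one taken from the text:

```python
import numpy as np

# Numerically check the normalization condition for an assumed Gaussian wave packet
# psi(x) = (pi*sigma^2)**-0.25 * exp(-x**2 / (2*sigma**2)).
sigma = 1.3
x, dx = np.linspace(-40.0, 40.0, 200_001, retstep=True)
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-(x**2) / (2 * sigma**2))

density = np.abs(psi) ** 2
total = density.sum() * dx                                   # integral of |psi|^2 over all x
inside = density[(x >= -sigma) & (x <= sigma)].sum() * dx    # probability in [-sigma, sigma]

print(f"integral of |psi|^2 over all x : {total:.6f}  (should be 1)")
print(f"P(-sigma <= x <= sigma)        : {inside:.4f}")
```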
For a given system, the set of all possible normalizable wave functions (at any given time) forms an abstract mathematical vector space, meaning that it is possible to add together different wave functions, and multiply wave functions by complex numbers. Technically, wave functions form a ray in a projective Hilbert space rather than an ordinary vector space.
Quantum states as vectors
At a particular instant of time, all values of the wave function Ψ(x, t) are components of a vector. There are uncountably infinitely many of them and integration is used in place of summation. In Bra–ket notation, this vector is written
$$|\Psi(t)\rangle = \int_{-\infty}^{\infty} \Psi(x, t)\,|x\rangle\,dx$$
and is referred to as a "quantum state vector", or simply "quantum state". There are several advantages to understanding wave functions as representing elements of an abstract vector space:
All the powerful tools of linear algebra can be used to manipulate and understand wave functions. For example:
Linear algebra explains how a vector space can be given a basis, and then any vector in the vector space can be expressed in this basis. This explains the relationship between a wave function in position space and a wave function in momentum space and suggests that there are other possibilities too.
Bra–ket notation can be used to manipulate wave functions.
The idea that quantum states are vectors in an abstract vector space is completely general in all aspects of quantum mechanics and quantum field theory, whereas the idea that quantum states are complex-valued "wave" functions of space is only true in certain situations.
The time parameter is often suppressed, and will be in the following. The coordinate x is a continuous index. The |x⟩ are called improper vectors which, unlike proper vectors that are normalizable to unity, can only be normalized to a Dirac delta function:
$$\langle x' | x \rangle = \delta(x' - x),$$
thus
$$\langle x' | \Psi \rangle = \int \Psi(x)\,\langle x' | x \rangle\,dx = \Psi(x'),$$
and
$$|\Psi\rangle = \int |x\rangle\,\langle x | \Psi \rangle\,dx = \left(\int |x\rangle\,\langle x|\,dx\right)|\Psi\rangle,$$
which illuminates the identity operator
$$I = \int |x\rangle\,\langle x|\,dx,$$
which is analogous to the completeness relation of an orthonormal basis in an N-dimensional Hilbert space.
Finding the identity operator in a basis allows the abstract state to be expressed explicitly in a basis, and more (the inner product between two state vectors, and other operators for observables, can be expressed in the basis).
Momentum-space wave functions
The particle also has a wave function in momentum space:
$$\Phi(p, t),$$
where p is the momentum in one dimension, which can be any value from −∞ to +∞, and t is time.
Analogous to the position case, the inner product of two wave functions Φ1(p, t) and Φ2(p, t) can be defined as:
$$(\Phi_1, \Phi_2) = \int_{-\infty}^{\infty} \Phi_1^*(p, t)\,\Phi_2(p, t)\,dp.$$
One particular solution to the time-independent Schrödinger equation is
$$\psi_p(x) = \frac{1}{\sqrt{2\pi\hbar}}\,e^{ipx/\hbar},$$
a plane wave, which can be used in the description of a particle with momentum exactly p, since it is an eigenfunction of the momentum operator. These functions are not normalizable to unity (they are not square-integrable), so they are not really elements of physical Hilbert space. The set
$$\{\psi_p \mid p \in (-\infty, \infty)\}$$
forms what is called the momentum basis. This "basis" is not a basis in the usual mathematical sense. For one thing, since the functions are not normalizable, they are instead normalized to a delta function,
$$(\psi_p, \psi_{p'}) = \delta(p - p').$$
For another thing, though they are linearly independent, there are too many of them (they form an uncountable set) for a basis for physical Hilbert space. They can still be used to express all functions in it using Fourier transforms as described next.
Relations between position and momentum representations
The x and p representations are
$$|\Psi\rangle = \int_{-\infty}^{\infty} \Psi(x)\,|x\rangle\,dx = \int_{-\infty}^{\infty} |x\rangle\langle x|\Psi\rangle\,dx,$$
$$|\Psi\rangle = \int_{-\infty}^{\infty} \Phi(p)\,|p\rangle\,dp = \int_{-\infty}^{\infty} |p\rangle\langle p|\Psi\rangle\,dp.$$
Now take the projection of the state Ψ onto eigenfunctions of momentum using the last expression in the two equations,
$$\langle p|\Psi\rangle = \int_{-\infty}^{\infty} \Psi(x)\,\langle p|x\rangle\,dx = \int_{-\infty}^{\infty} \Phi(p')\,\langle p|p'\rangle\,dp' = \int_{-\infty}^{\infty} \Phi(p')\,\delta(p - p')\,dp' = \Phi(p).$$
Then utilizing the known expression for suitably normalized eigenstates of momentum in the position representation solutions of the free Schrödinger equation
$$\langle x|p\rangle = \psi_p(x) = \frac{1}{\sqrt{2\pi\hbar}}\,e^{ipx/\hbar},$$
one obtains
$$\Phi(p) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \Psi(x)\,e^{-ipx/\hbar}\,dx.$$
Likewise, using eigenfunctions of position,
$$\Psi(x) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \Phi(p)\,e^{ipx/\hbar}\,dp.$$
The position-space and momentum-space wave functions are thus found to be Fourier transforms of each other. They are two representations of the same state; containing the same information, and either one is sufficient to calculate any property of the particle.
In practice, the position-space wave function is used much more often than the momentum-space wave function. The potential entering the relevant equation (Schrödinger, Dirac, etc.) determines in which basis the description is easiest. For the harmonic oscillator, x and p enter symmetrically, so there it does not matter which description one uses. The same equation (modulo constants) results. From this, with a little bit of afterthought, it follows that solutions to the wave equation of the harmonic oscillator are eigenfunctions of the Fourier transform in L².
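The Fourier-transform relation between the two representations can be verified numerically. The sketch below uses an assumed Gaussian wave packet and natural units (ħ = 1), and compares a numerically integrated Φ(p) with the known analytic result for that packet:

```python
import numpy as np

HBAR = 1.0          # natural units, an assumption of this sketch
sigma = 0.8

x, dx = np.linspace(-40.0, 40.0, 8001, retstep=True)
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-(x**2) / (2 * sigma**2))

def momentum_amplitude(p: float) -> complex:
    """Phi(p) = (2*pi*hbar)^(-1/2) * integral of psi(x) * exp(-i p x / hbar) dx."""
    return (psi * np.exp(-1j * p * x / HBAR)).sum() * dx / np.sqrt(2 * np.pi * HBAR)

for p in (0.0, 0.5, 1.0, 2.0):
    numeric = momentum_amplitude(p)
    analytic = (sigma**2 / np.pi) ** 0.25 * np.exp(-(sigma**2) * p**2 / 2)
    print(f"p = {p:4.1f}   numeric = {numeric.real:+.5f}   analytic = {analytic:+.5f}")
```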
Definitions (other cases)
Following are the general forms of the wave function for systems in higher dimensions and more particles, as well as including other degrees of freedom than position coordinates or momentum components.
Finite dimensional Hilbert space
While Hilbert spaces originally refer to infinite dimensional complete inner product spaces they, by definition, include finite dimensional complete inner product spaces as well.
In physics, they are often referred to as finite dimensional Hilbert spaces. For every finite dimensional Hilbert space there exist orthonormal basis kets that span the entire Hilbert space.
If the n-dimensional set $\{|\phi_i\rangle\}$ is orthonormal, then the projection operator for the space spanned by these states is given by:
$$P = \sum_i |\phi_i\rangle\langle\phi_i| = I,$$
where the projection is equivalent to the identity operator since $\{|\phi_i\rangle\}$ spans the entire Hilbert space, thus leaving any vector from the Hilbert space unchanged. This is also known as the completeness relation of finite dimensional Hilbert space.
The wavefunction is instead given by:
$$|\psi\rangle = \sum_i c_i\,|\phi_i\rangle,$$
where $\{c_i = \langle\phi_i|\psi\rangle\}$ is a set of complex numbers which can be used to construct a wavefunction using the above formula.
Probability interpretation of inner product
If the set $\{|\phi_i\rangle\}$ are eigenkets of a non-degenerate observable with eigenvalues $\lambda_i$, by the postulates of quantum mechanics, the probability of measuring the observable to be $\lambda_i$ is given according to the Born rule as:
$$P(\lambda_i) = |\langle \phi_i | \psi \rangle|^2 = |c_i|^2.$$
For a degenerate eigenvalue $\lambda$ of some observable, with the subset of eigenvectors belonging to $\lambda$ labelled as $\{|\lambda, i\rangle\}$, by the postulates of quantum mechanics, the probability of measuring the observable to be $\lambda$ is given by:
$$P(\lambda) = \big\| P_\lambda |\psi\rangle \big\|^2 = \sum_i |\langle \lambda, i | \psi \rangle|^2,$$
where $P_\lambda = \sum_i |\lambda, i\rangle\langle \lambda, i|$ is a projection operator of states onto the subspace spanned by $\{|\lambda, i\rangle\}$. The equality follows due to the orthogonal nature of $\{|\lambda, i\rangle\}$.
Hence the coefficients $c_i$, which specify the state of the quantum mechanical system, have magnitudes whose square gives the probability of measuring the respective state.
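A minimal numerical illustration of the Born rule in a finite-dimensional Hilbert space; the state vector and the eigenvalues below are arbitrary assumptions made for the example:

```python
import numpy as np

# A hypothetical state in a 3-dimensional Hilbert space, expressed in the eigenbasis
# of some non-degenerate observable with (assumed) eigenvalues lam.
c = np.array([0.5 + 0.5j, 0.1 - 0.3j, 0.4 + 0.2j])
c = c / np.linalg.norm(c)              # enforce normalization
lam = np.array([-1.0, 0.0, 2.0])       # assumed eigenvalues

probs = np.abs(c) ** 2                 # Born rule: P(lam_i) = |<phi_i|psi>|^2
print("probabilities:", np.round(probs, 4), f" sum = {probs.sum():.6f}")
print("expectation value of the observable:", float(probs @ lam))
```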
Physical significance of relative phase
While the relative phase has observable effects in experiments, the global phase of the system is experimentally indistinguishable. For example in a particle in superposition of two states, the global phase of the particle cannot be distinguished by finding expectation value of observable or probabilities of observing different states but relative phases can affect the expectation values of observables.
While the overall phase of the system is considered to be arbitrary, the relative phase for each state of a prepared state in superposition can be determined based on the physical meaning of the prepared state and its symmetry. For example, the construction of spin states along the x direction as a superposition of spin states along the z direction can be done by applying an appropriate rotation transformation to the spin-along-z states, which provides the appropriate phase of the states relative to each other.
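The distinction between global and relative phase can be made concrete with a two-state example. The phases and the choice of σx as the phase-sensitive observable below are illustrative assumptions:

```python
import numpy as np

def basis_probs(state: np.ndarray) -> np.ndarray:
    """Born-rule probabilities in the computational basis."""
    return np.abs(state) ** 2

psi = np.array([1.0, 1.0]) / np.sqrt(2)          # equal superposition of two basis states

global_phase = np.exp(1j * 0.7) * psi            # multiply the whole state by a phase
relative_phase = np.array([1.0, np.exp(1j * np.pi)]) / np.sqrt(2)   # phase on one component only

sigma_x = np.array([[0, 1], [1, 0]])             # an observable sensitive to relative phase

print("basis probabilities unchanged by global phase:", basis_probs(psi), basis_probs(global_phase))
print("<sigma_x> original       :", np.vdot(psi, sigma_x @ psi).real)
print("<sigma_x> global phase   :", np.vdot(global_phase, sigma_x @ global_phase).real)
print("<sigma_x> relative phase :", np.vdot(relative_phase, sigma_x @ relative_phase).real)
```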
Application to include spin
An example of a finite dimensional Hilbert space can be constructed using the spin eigenkets of spin-s particles, which form a (2s + 1)-dimensional Hilbert space. However, the general wavefunction of a particle that fully describes its state, is always from an infinite dimensional Hilbert space since it involves a tensor product with Hilbert space relating to the position or momentum of the particle. Nonetheless, the techniques developed for finite dimensional Hilbert space are useful since they can either be treated independently or treated in consideration of linearity of tensor product.
Since the spin operator for a given spin-s particle can be represented as a finite matrix which acts on 2s + 1 independent spin vector components, it is usually preferable to denote spin components using matrix/column/row notation as applicable.
For example, each $|s, m\rangle$ is usually identified as a column vector:
$$|s, s\rangle \leftrightarrow \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad |s, s-1\rangle \leftrightarrow \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad \ldots, \quad |s, -s\rangle \leftrightarrow \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix},$$
but it is a common abuse of notation, because the kets are not synonymous or equal to the column vectors. Column vectors simply provide a convenient way to express the spin components.
Corresponding to the notation, the z-component spin operator can be written as:
$$S_z = \hbar \, \operatorname{diag}(s, s-1, \ldots, -s),$$
since the eigenvectors of the z-component spin operator are the above column vectors, with eigenvalues being the corresponding spin quantum numbers.
Corresponding to the notation, a vector from such a finite dimensional Hilbert space is hence represented as:
$$|\psi\rangle = \sum_{m=-s}^{s} c_m\,|s, m\rangle \leftrightarrow \begin{pmatrix} c_s \\ c_{s-1} \\ \vdots \\ c_{-s} \end{pmatrix},$$
where $c_m$ are the corresponding complex numbers.
In the following discussion involving spin, the complete wavefunction is considered as a tensor product of spin states from finite dimensional Hilbert spaces and the wavefunction which was previously developed. The basis for this Hilbert space is hence taken to be $\{|\mathbf{r}\rangle \otimes |s, m\rangle\}$.
One-particle states in 3d position space
The position-space wave function of a single particle without spin in three spatial dimensions is similar to the case of one spatial dimension above: $\Psi(\mathbf{r}, t)$, where $\mathbf{r}$ is the position vector in three-dimensional space, and t is time. As always $\Psi$ is a complex-valued function of real variables. As a single vector in Dirac notation
$$|\Psi(t)\rangle = \int d^3\mathbf{r}\; \Psi(\mathbf{r}, t)\, |\mathbf{r}\rangle.$$
All the previous remarks on inner products, momentum space wave functions, Fourier transforms, and so on extend to higher dimensions.
For a particle with spin, ignoring the position degrees of freedom, the wave function is a function of spin only (time is a parameter);
$$\xi(s_z, t),$$
where $s_z$ is the spin projection quantum number along the z axis. (The z axis is an arbitrary choice; other axes can be used instead if the wave function is transformed appropriately, see below.) The $s_z$ parameter, unlike x and t, is a discrete variable. For example, for a spin-1/2 particle, $s_z$ can only be +1/2 or −1/2, and not any other value. (In general, for spin s, $s_z$ can be s, s − 1, ..., −s). Inserting each quantum number gives a complex valued function of space and time, there are 2s + 1 of them. These can be arranged into a column vector
$$\xi = \begin{pmatrix} \xi(s, t) \\ \xi(s-1, t) \\ \vdots \\ \xi(-s, t) \end{pmatrix}.$$
In bra–ket notation, these easily arrange into the components of a vector:
$$|\xi(t)\rangle = \sum_{s_z = -s}^{s} \xi(s_z, t)\, |s_z\rangle.$$
The entire vector is a solution of the Schrödinger equation (with a suitable Hamiltonian), which unfolds to a coupled system of ordinary differential equations with solutions . The term "spin function" instead of "wave function" is used by some authors. This contrasts the solutions to position space wave functions, the position coordinates being continuous degrees of freedom, because then the Schrödinger equation does take the form of a wave equation.
More generally, for a particle in 3d with any spin, the wave function can be written in "position–spin space" as:
$$\Psi(\mathbf{r}, s_z, t),$$
and these can also be arranged into a column vector
$$\Psi(\mathbf{r}, t) = \begin{pmatrix} \Psi(\mathbf{r}, s, t) \\ \Psi(\mathbf{r}, s-1, t) \\ \vdots \\ \Psi(\mathbf{r}, -s, t) \end{pmatrix},$$
in which the spin dependence is placed in indexing the entries, and the wave function is a complex vector-valued function of space and time only.
All values of the wave function, not only for discrete but continuous variables also, collect into a single vector
$$|\Psi(t)\rangle = \sum_{s_z} \int d^3\mathbf{r}\; \Psi(\mathbf{r}, s_z, t)\, |\mathbf{r}, s_z\rangle.$$
For a single particle, the tensor product $\otimes$ of its position state vector $|\psi\rangle$ and spin state vector $|\xi\rangle$ gives the composite position–spin state vector
$$|\psi(t)\rangle \otimes |\xi(t)\rangle = \sum_{s_z} \int d^3\mathbf{r}\; \psi(\mathbf{r}, t)\, \xi(s_z, t)\, |\mathbf{r}\rangle \otimes |s_z\rangle,$$
with the identifications
$$|\Psi(t)\rangle = |\psi(t)\rangle \otimes |\xi(t)\rangle, \qquad \Psi(\mathbf{r}, s_z, t) = \psi(\mathbf{r}, t)\,\xi(s_z, t).$$
The tensor product factorization of energy eigenstates is always possible if the orbital and spin angular momenta of the particle are separable in the Hamiltonian operator underlying the system's dynamics (in other words, the Hamiltonian can be split into the sum of orbital and spin terms). The time dependence can be placed in either factor, and time evolution of each can be studied separately. Under such Hamiltonians, any tensor product state evolves into another tensor product state, which essentially means any unentangled state remains unentangled under time evolution. This is said to happen when there is no physical interaction between the states of the tensor products. In the case of non separable Hamiltonians, energy eigenstates are said to be some linear combination of such states, which need not be factorizable; examples include a particle in a magnetic field, and spin–orbit coupling.
The preceding discussion is not limited to spin as a discrete variable, the total angular momentum J may also be used. Other discrete degrees of freedom, like isospin, can expressed similarly to the case of spin above.
Many-particle states in 3d position space
If there are many particles, in general there is only one wave function, not a separate wave function for each particle. The fact that one wave function describes many particles is what makes quantum entanglement and the EPR paradox possible. The position-space wave function for N particles is written:
$$\Psi(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, t),$$
where $\mathbf{r}_i$ is the position of the i-th particle in three-dimensional space, and t is time. Altogether, this is a complex-valued function of 3N + 1 real variables.
In quantum mechanics there is a fundamental distinction between identical particles and distinguishable particles. For example, any two electrons are identical and fundamentally indistinguishable from each other; the laws of physics make it impossible to "stamp an identification number" on a certain electron to keep track of it. This translates to a requirement on the wave function for a system of identical particles:
$$\Psi(\ldots, \mathbf{r}_a, \ldots, \mathbf{r}_b, \ldots, t) = \pm\,\Psi(\ldots, \mathbf{r}_b, \ldots, \mathbf{r}_a, \ldots, t),$$
where the + sign occurs if the particles are all bosons and the − sign if they are all fermions. In other words, the wave function is either totally symmetric in the positions of bosons, or totally antisymmetric in the positions of fermions. The physical interchange of particles corresponds to mathematically switching arguments in the wave function. The antisymmetry feature of fermionic wave functions leads to the Pauli principle. Generally, bosonic and fermionic symmetry requirements are the manifestation of particle statistics and are present in other quantum state formalisms.
For distinguishable particles (no two being identical, i.e. no two having the same set of quantum numbers), there is no requirement for the wave function to be either symmetric or antisymmetric.
For a collection of particles, some identical with coordinates $\mathbf{r}_1, \mathbf{r}_2, \ldots$ and others distinguishable $\mathbf{x}_1, \mathbf{x}_2, \ldots$ (not identical with each other, and not identical to the aforementioned identical particles), the wave function is symmetric or antisymmetric in the identical particle coordinates only:
$$\Psi(\ldots, \mathbf{r}_a, \ldots, \mathbf{r}_b, \ldots, \mathbf{x}_1, \ldots, t) = \pm\,\Psi(\ldots, \mathbf{r}_b, \ldots, \mathbf{r}_a, \ldots, \mathbf{x}_1, \ldots, t).$$
Again, there is no symmetry requirement for the distinguishable particle coordinates $\mathbf{x}_i$.
The wave function for N particles each with spin is the complex-valued function
$$\Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N, s_{z1}, \ldots, s_{zN}, t).$$
Accumulating all these components into a single vector,
$$|\Psi(t)\rangle = \sum_{s_{z1}, \ldots, s_{zN}} \int d^3\mathbf{r}_1 \cdots \int d^3\mathbf{r}_N\; \Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N, s_{z1}, \ldots, s_{zN}, t)\, |\mathbf{r}_1, \ldots, \mathbf{r}_N, s_{z1}, \ldots, s_{zN}\rangle.$$
For identical particles, symmetry requirements apply to both position and spin arguments of the wave function so it has the overall correct symmetry.
The formulae for the inner products are integrals over all coordinates or momenta and sums over all spin quantum numbers. For the general case of N particles with spin in 3-d,
$$(\Psi_1, \Psi_2) = \sum_{s_{z1}} \cdots \sum_{s_{zN}} \int d^3\mathbf{r}_1 \cdots \int d^3\mathbf{r}_N\; \Psi_1^*\, \Psi_2,$$
this is altogether N three-dimensional volume integrals and N sums over the spins. The differential volume elements $d^3\mathbf{r}_i$ are also written "$dV_i$" or "$dx_i\,dy_i\,dz_i$".
The multidimensional Fourier transforms of the position or position–spin space wave functions yield momentum or momentum–spin space wave functions.
Probability interpretation
For the general case of N particles with spin in 3d, if Ψ is interpreted as a probability amplitude, the probability density is
$$\rho(\mathbf{r}_1, \ldots, \mathbf{r}_N, s_{z1}, \ldots, s_{zN}, t) = \left|\Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N, s_{z1}, \ldots, s_{zN}, t)\right|^2,$$
and the probability that particle 1 is in region $R_1$ with spin $s_{z1} = m_1$ and particle 2 is in region $R_2$ with spin $s_{z2} = m_2$ etc. at time t is the integral of the probability density over these regions and evaluated at these spin numbers:
$$P = \int_{R_1} d^3\mathbf{r}_1 \int_{R_2} d^3\mathbf{r}_2 \cdots \left|\Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N, m_1, \ldots, m_N, t)\right|^2.$$
Physical significance of phase
In non-relativistic quantum mechanics, it can be shown using Schrödinger's time dependent wave equation that the equation:
$$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0$$
is satisfied, where $\rho = |\psi|^2$ is the probability density and
$$\mathbf{j} = \frac{\hbar}{2mi}\left(\psi^* \nabla \psi - \psi \nabla \psi^*\right)$$
is known as the probability flux, in accordance with the continuity equation form of the above equation.
Using the following expression for the wavefunction:
$$\psi = \sqrt{\rho}\; e^{iS/\hbar},$$
where $\rho$ is the probability density and $S$ is the phase of the wavefunction, it can be shown that:
$$\mathbf{j} = \frac{\rho\,\nabla S}{m}.$$
Hence the spatial variation of phase characterizes the probability flux.
In classical analogy, for $\mathbf{j} = \rho\,\mathbf{v}$, the quantity $\nabla S / m$ is analogous with velocity. Note that this does not imply a literal interpretation of $\nabla S / m$ as velocity since velocity and position cannot be simultaneously determined as per the uncertainty principle. Substituting this form of the wavefunction in Schrödinger's time dependent wave equation, and taking the classical limit $\hbar \to 0$:
$$\frac{\partial S}{\partial t} + \frac{|\nabla S|^2}{2m} + V = 0,$$
which is analogous to the Hamilton–Jacobi equation from classical mechanics. This interpretation fits with Hamilton–Jacobi theory, in which the phase S plays the role of Hamilton's principal function.
Time dependence
For systems in time-independent potentials, the wave function can always be written as a function of the degrees of freedom multiplied by a time-dependent phase factor, the form of which is given by the Schrödinger equation. For N particles, considering their positions only and suppressing other degrees of freedom,
$$\Psi(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, t) = e^{-iEt/\hbar}\, \psi(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N),$$
where E is the energy eigenvalue of the system corresponding to the eigenstate ψ. Wave functions of this form are called stationary states.
The time dependence of the quantum state and the operators can be placed according to unitary transformations on the operators and states. For any quantum state |Ψ⟩ and operator O, in the Schrödinger picture |Ψ(t)⟩ changes with time according to the Schrödinger equation while O is constant. In the Heisenberg picture it is the other way round: |Ψ⟩ is constant while O(t) evolves with time according to the Heisenberg equation of motion. The Dirac (or interaction) picture is intermediate: time dependence is placed in both operators and states, which evolve according to equations of motion. It is useful primarily in computing S-matrix elements.
Non-relativistic examples
The following are solutions to the Schrödinger equation for one non-relativistic spinless particle.
Finite potential barrier
One of the most prominent features of wave mechanics is the possibility for a particle to reach a location with a prohibitive (in classical mechanics) force potential. A common model is the "potential barrier"; the one-dimensional case has the potential
$$V(x) = \begin{cases} V_0 & |x| < a, \\ 0 & |x| \ge a, \end{cases}$$
and the steady-state solutions to the wave equation have the form (for some constants $A_r, A_l, B_r, B_l, C_r, C_l, k, \kappa$)
$$\Psi(x) = \begin{cases} A_r e^{ikx} + A_l e^{-ikx} & x < -a, \\ B_r e^{\kappa x} + B_l e^{-\kappa x} & |x| \le a, \\ C_r e^{ikx} + C_l e^{-ikx} & x > a. \end{cases}$$
Note that these wave functions are not normalized; see scattering theory for discussion.
The standard interpretation of this is as a stream of particles being fired at the step from the left (the direction of negative x): setting $A_r = 1$ corresponds to firing particles singly; the terms containing $A_r$ and $C_r$ signify motion to the right, while $A_l$ and $C_l$ – to the left. Under this beam interpretation, put $C_l = 0$ since no particles are coming from the right. By applying the continuity of wave functions and their derivatives at the boundaries, it is hence possible to determine the constants above.
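Carrying out the boundary matching leads to the textbook transmission probability for a rectangular barrier. The sketch below evaluates it for energies below the barrier height, in natural units, with assumed barrier parameters:

```python
import numpy as np

HBAR = 1.0
M = 1.0

def transmission(E: float, V0: float, L: float) -> float:
    """Transmission probability through a rectangular barrier of height V0 and width L,
    standard textbook result for E < V0 (natural units hbar = m = 1 assumed)."""
    kappa = np.sqrt(2 * M * (V0 - E)) / HBAR
    return 1.0 / (1.0 + (V0**2 * np.sinh(kappa * L) ** 2) / (4 * E * (V0 - E)))

V0, L = 1.0, 2.0          # assumed barrier height and width
for E in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"E/V0 = {E/V0:.1f}  ->  T = {transmission(E, V0, L):.3e}")
```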
In a semiconductor crystallite whose radius is smaller than the size of its exciton Bohr radius, the excitons are squeezed, leading to quantum confinement. The energy levels can then be modeled using the particle in a box model in which the energy of different states is dependent on the length of the box.
Quantum harmonic oscillator
The wave functions for the quantum harmonic oscillator can be expressed in terms of Hermite polynomials $H_n$; they are
$$\psi_n(x) = \sqrt{\frac{1}{2^n\, n!}} \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\frac{m\omega x^2}{2\hbar}}\, H_n\!\left(\sqrt{\frac{m\omega}{\hbar}}\, x\right),$$
where $n = 0, 1, 2, \ldots$.
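These eigenfunctions can be evaluated directly from the formula above. The sketch below (natural units assumed) checks their orthonormality numerically using NumPy's physicists' Hermite polynomials:

```python
import numpy as np
from math import factorial

HBAR = M = OMEGA = 1.0   # natural units, an assumption of this sketch

def psi(n: int, x: np.ndarray) -> np.ndarray:
    """Harmonic-oscillator eigenfunction psi_n(x) built from the physicists' Hermite polynomial H_n."""
    xi = np.sqrt(M * OMEGA / HBAR) * x
    Hn = np.polynomial.hermite.hermval(xi, [0] * n + [1])
    norm = (M * OMEGA / (np.pi * HBAR)) ** 0.25 / np.sqrt(2.0**n * factorial(n))
    return norm * np.exp(-xi**2 / 2) * Hn

x, dx = np.linspace(-12.0, 12.0, 24001, retstep=True)
for m_, n_ in [(0, 0), (1, 1), (3, 3), (0, 2)]:
    overlap = (psi(m_, x) * psi(n_, x)).sum() * dx
    print(f"<psi_{m_}|psi_{n_}> = {overlap:+.6f}")
```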
Hydrogen atom
The wave functions of an electron in a Hydrogen atom are expressed in terms of spherical harmonics and generalized Laguerre polynomials (these are defined differently by different authors—see main article on them and the hydrogen atom).
It is convenient to use spherical coordinates, and the wave function can be separated into functions of each coordinate,
$$\psi_{nlm}(r, \theta, \varphi) = R(r)\, Y_l^m(\theta, \varphi),$$
where R are radial functions and $Y_l^m(\theta, \varphi)$ are spherical harmonics of degree l and order m. This is the only atom for which the Schrödinger equation has been solved exactly. Multi-electron atoms require approximative methods. The family of solutions is:
$$\psi_{nlm}(r, \theta, \varphi) = \sqrt{\left(\frac{2}{n a_0}\right)^3 \frac{(n-l-1)!}{2n\,(n+l)!}}\; e^{-r/(n a_0)} \left(\frac{2r}{n a_0}\right)^l L_{n-l-1}^{2l+1}\!\left(\frac{2r}{n a_0}\right) Y_l^m(\theta, \varphi),$$
where $a_0$ is the Bohr radius,
$L_{n-l-1}^{2l+1}$ are the generalized Laguerre polynomials of degree n − l − 1, $n = 1, 2, \ldots$ is the principal quantum number, $l = 0, 1, \ldots, n-1$ the azimuthal quantum number, and $m = -l, \ldots, l$ the magnetic quantum number. Hydrogen-like atoms have very similar solutions.
This solution does not take into account the spin of the electron.
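The radial part of this family of solutions can be evaluated and checked for normalization. The sketch below uses SciPy's generalized Laguerre polynomials and atomic units (a0 = 1) as assumptions:

```python
import numpy as np
from math import factorial
from scipy.special import genlaguerre

A0 = 1.0  # Bohr radius in atomic units, an assumption of this sketch

def R(n: int, l: int, r: np.ndarray) -> np.ndarray:
    """Hydrogen radial wave function R_nl(r), standard textbook normalization."""
    rho = 2.0 * r / (n * A0)
    norm = np.sqrt((2.0 / (n * A0)) ** 3 * factorial(n - l - 1) / (2.0 * n * factorial(n + l)))
    return norm * np.exp(-rho / 2) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

r, dr = np.linspace(1e-6, 200.0, 400001, retstep=True)
for n, l in [(1, 0), (2, 0), (2, 1), (3, 2)]:
    total = np.sum(R(n, l, r) ** 2 * r**2) * dr   # integral of R^2 r^2 dr, should be 1
    print(f"n={n}, l={l}:  integral of R^2 r^2 dr = {total:.4f}")
```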
In the figure of the hydrogen orbitals, the 19 sub-images are images of wave functions in position space (their norm squared). The wave functions represent the abstract state characterized by the triple of quantum numbers (n, l, m), in the lower right of each image. These are the principal quantum number, the orbital angular momentum quantum number, and the magnetic quantum number. Together with one spin-projection quantum number of the electron, this is a complete set of observables.
The figure can serve to illustrate some further properties of the function spaces of wave functions.
In this case, the wave functions are square integrable. One can initially take the function space as the space of square integrable functions, usually denoted $L^2$.
The displayed functions are solutions to the Schrödinger equation. Obviously, not every function in $L^2$ satisfies the Schrödinger equation for the hydrogen atom. The function space is thus a subspace of $L^2$.
The displayed functions form part of a basis for the function space. To each triple (n, l, m), there corresponds a basis wave function. If spin is taken into account, there are two basis functions for each triple. The function space thus has a countable basis.
The basis functions are mutually orthonormal.
Wave functions and function spaces
The concept of function spaces enters naturally in the discussion about wave functions. A function space is a set of functions, usually with some defining requirements on the functions (in the present case that they are square integrable), sometimes with an algebraic structure on the set (in the present case a vector space structure with an inner product), together with a topology on the set. The latter will sparsely be used here, it is only needed to obtain a precise definition of what it means for a subset of a function space to be closed. It will be concluded below that the function space of wave functions is a Hilbert space. This observation is the foundation of the predominant mathematical formulation of quantum mechanics.
Vector space structure
A wave function is an element of a function space partly characterized by the following concrete and abstract descriptions.
The Schrödinger equation is linear. This means that the solutions to it, wave functions, can be added and multiplied by scalars to form a new solution. The set of solutions to the Schrödinger equation is a vector space.
The superposition principle of quantum mechanics. If $\Psi_1$ and $\Psi_2$ are two states in the abstract space of states of a quantum mechanical system, and $a$ and $b$ are any two complex numbers, then $a\Psi_1 + b\Psi_2$ is a valid state as well. (Whether the null vector counts as a valid state ("no system present") is a matter of definition. The null vector does not at any rate describe the vacuum state in quantum field theory.) The set of allowable states is a vector space.
This similarity is of course not accidental. There are also distinctions between the spaces to keep in mind.
Representations
Basic states are characterized by a set of quantum numbers. This is a set of eigenvalues of a maximal set of commuting observables. Physical observables are represented by linear operators, also called observables, on the vectors space. Maximality means that there can be added to the set no further algebraically independent observables that commute with the ones already present. A choice of such a set may be called a choice of representation.
It is a postulate of quantum mechanics that a physically observable quantity of a system, such as position, momentum, or spin, is represented by a linear Hermitian operator on the state space. The possible outcomes of measurement of the quantity are the eigenvalues of the operator. At a deeper level, most observables, perhaps all, arise as generators of symmetries.
The physical interpretation is that such a set represents what can – in theory – simultaneously be measured with arbitrary precision. The Heisenberg uncertainty relation prohibits simultaneous exact measurements of two non-commuting observables.
The set is non-unique. It may for a one-particle system, for example, be position and spin z-projection, $(x, S_z)$, or it may be momentum and spin y-projection, $(p, S_y)$. In this case, the operator corresponding to position (a multiplication operator in the position representation) and the operator corresponding to momentum (a differential operator in the position representation) do not commute.
Once a representation is chosen, there is still arbitrariness. It remains to choose a coordinate system. This may, for example, correspond to a choice of x-, y- and z-axes, or a choice of curvilinear coordinates as exemplified by the spherical coordinates used for the Hydrogen atomic wave functions. This final choice also fixes a basis in abstract Hilbert space. The basic states are labeled by the quantum numbers corresponding to the maximal set of commuting observables and an appropriate coordinate system.
The abstract states are "abstract" only in that an arbitrary choice necessary for a particular explicit description of it is not given. This is the same as saying that no choice of maximal set of commuting observables has been given. This is analogous to a vector space without a specified basis. Wave functions corresponding to a state are accordingly not unique. This non-uniqueness reflects the non-uniqueness in the choice of a maximal set of commuting observables. For one spin particle in one dimension, to a particular state there correspond two wave functions, $\Psi(x, S_z)$ and $\Phi(p, S_y)$, both describing the same state.
For each choice of maximal commuting sets of observables for the abstract state space, there is a corresponding representation that is associated to a function space of wave functions.
Between all these different function spaces and the abstract state space, there are one-to-one correspondences (here disregarding normalization and unobservable phase factors), the common denominator here being a particular abstract state. The relationship between the momentum and position space wave functions, for instance, describing the same state is the Fourier transform.
Each choice of representation should be thought of as specifying a unique function space in which wave functions corresponding to that choice of representation lives. This distinction is best kept, even if one could argue that two such function spaces are mathematically equal, e.g. being the set of square integrable functions. One can then think of the function spaces as two distinct copies of that set.
Inner product
There is an additional algebraic structure on the vector spaces of wave functions and the abstract state space.
Physically, different wave functions are interpreted to overlap to some degree. A system in a state $\Psi$ that does not overlap with a state $\Phi$ cannot be found to be in the state $\Phi$ upon measurement. But if $\Psi$ and $\Phi$ overlap to some degree, there is a chance that measurement of a system described by $\Psi$ will find it in the state $\Phi$. Also, selection rules are observed to apply. These are usually formulated in the preservation of some quantum numbers. This means that certain processes allowable from some perspectives (e.g. energy and momentum conservation) do not occur because the initial and final total wave functions do not overlap.
Mathematically, it turns out that solutions to the Schrödinger equation for particular potentials are orthogonal in some manner. This is usually described by an integral of the form ∫ ψ_m* ψ_n w dV = δ_nm, where m and n are (sets of) indices (quantum numbers) labeling different solutions, the strictly positive function w is called a weight function, and δ_nm is the Kronecker delta. The integration is taken over all of the relevant space.
This motivates the introduction of an inner product on the vector space of abstract quantum states, compatible with the mathematical observations above when passing to a representation. It is denoted (Ψ, Φ), or in the bra–ket notation ⟨Ψ|Φ⟩. It yields a complex number. With the inner product, the function space is an inner product space. The explicit appearance of the inner product (usually an integral or a sum of integrals) depends on the choice of representation, but the complex number does not. Much of the physical interpretation of quantum mechanics stems from the Born rule. It states that the probability of finding upon measurement the state Φ, given the system is in the state Ψ, is |(Φ, Ψ)|²,
where Φ and Ψ are assumed normalized. Consider a scattering experiment. In quantum field theory, if Φ describes a state in the "distant future" (an "out state") after interactions between scattering particles have ceased, and Ψ an "in state" in the "distant past", then the quantities (Φ, Ψ), with Ψ and Φ varying over a complete set of in states and out states respectively, constitute the S-matrix or scattering matrix. Knowledge of it is, effectively, having solved the theory at hand, at least as far as predictions go. Measurable quantities such as decay rates and scattering cross sections are calculable from the S-matrix.
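As a minimal numerical illustration of the Born rule just stated (the Gaussian packets and their parameters below are invented for the example and are not taken from the text):

import numpy as np

# Two normalized Gaussian wave packets on a one-dimensional grid; the Born
# rule probability of finding the system prepared in psi in the state phi
# is the squared modulus of their inner product.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

def packet(center, width):
    g = np.exp(-((x - center) ** 2) / (4.0 * width ** 2)).astype(complex)
    return g / np.sqrt(np.sum(np.abs(g) ** 2) * dx)   # normalize so (g, g) = 1

psi = packet(center=0.0, width=1.0)      # state the system is in
phi = packet(center=1.0, width=1.0)      # state looked for upon measurement

inner = np.sum(np.conj(phi) * psi) * dx  # inner product (phi, psi)
print(abs(inner) ** 2)                   # probability, roughly 0.78 here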
Hilbert space
The above observations encapsulate the essence of the function spaces of which wave functions are elements. However, the description is not yet complete. There is a further technical requirement on the function space, that of completeness, that allows one to take limits of sequences in the function space, and be ensured that, if the limit exists, it is an element of the function space. A complete inner product space is called a Hilbert space. The property of completeness is crucial in advanced treatments and applications of quantum mechanics. For instance, the existence of projection operators or orthogonal projections relies on the completeness of the space. These projection operators, in turn, are essential for the statement and proof of many useful theorems, e.g. the spectral theorem. These details are not very important in introductory quantum mechanics.
The space L² of square integrable functions is a Hilbert space, with inner product presented later. The function space of the example of the figure is a subspace of L². A subspace of a Hilbert space is a Hilbert space if it is closed.
In summary, the set of all possible normalizable wave functions for a system with a particular choice of basis, together with the null vector, constitutes a Hilbert space.
Not all functions of interest are elements of some Hilbert space, say L². The most glaring example is the set of plane waves e^(ipx/ħ). These are solutions of the Schrödinger equation for a free particle, but they are not normalizable, hence not in L². But they are nonetheless fundamental for the description. One can, using them, express functions that are normalizable using wave packets. They are, in a sense, a basis (but not a Hilbert space basis, nor a Hamel basis) in which wave functions of interest can be expressed. There is also the artifact "normalization to a delta function" that is frequently employed for notational convenience, see further down. The delta functions themselves are not square integrable either.
The above description of the function space containing the wave functions is mostly mathematically motivated. The function spaces are, due to completeness, very large in a certain sense. Not all functions are realistic descriptions of any physical system. For instance, in the function space L² one can find a function that takes on one constant value for all rational numbers and a different value for the irrationals in some interval. Such a function is square integrable, but it can hardly represent a physical state.
Common Hilbert spaces
While the space of solutions as a whole is a Hilbert space there are many other Hilbert spaces that commonly occur as ingredients.
Square integrable complex valued functions on the interval [0, 2π]. The set {e^(inx)/√(2π) : n an integer} is a Hilbert space basis, i.e. a maximal orthonormal set.
The Fourier transform takes functions in the above space to elements of l²(ℤ), the space of square summable sequences indexed by the integers. The latter space is a Hilbert space and the Fourier transform is an isomorphism of Hilbert spaces. Its basis is the set of sequences e_i, i ∈ ℤ, each with a single 1 in position i and zeros elsewhere.
The most basic example of spanning polynomials is in the space of square integrable functions on the interval [−1, 1], for which the Legendre polynomials form a Hilbert space basis (a complete orthogonal set, orthonormal after suitable normalization).
The square integrable functions on the unit sphere form a Hilbert space. The basis functions in this case are the spherical harmonics. The Legendre polynomials are ingredients in the spherical harmonics. Most problems with rotational symmetry will have "the same" (known) solution with respect to that symmetry, so the original problem is reduced to a problem of lower dimensionality.
The associated Laguerre polynomials appear in the hydrogenic wave function problem after factoring out the spherical harmonics. These span the Hilbert space of square integrable functions on the semi-infinite interval [0, ∞).
More generally, one may consider a unified treatment of all polynomial solutions to the second-order Sturm–Liouville equations in the setting of Hilbert space. These include the Legendre and Laguerre polynomials as well as Chebyshev polynomials, Jacobi polynomials and Hermite polynomials. All of these actually appear in physical problems, the latter ones in the harmonic oscillator, and what is otherwise a bewildering maze of properties of special functions becomes an organized body of facts.
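As a small concrete check of such orthogonality statements – a sketch assuming only NumPy, with the polynomial degrees and quadrature order chosen arbitrarily – the Legendre polynomials on [−1, 1] can be verified numerically:

import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Verify that P_m and P_n are orthogonal on [-1, 1]: the inner product equals
# 2/(2n+1) when m == n and essentially zero otherwise, so the suitably
# normalized polynomials form an orthonormal set.
nodes, weights = leggauss(50)                 # Gauss-Legendre quadrature rule
for m in range(4):
    for n in range(4):
        pm = Legendre.basis(m)(nodes)
        pn = Legendre.basis(n)(nodes)
        inner = np.sum(weights * pm * pn)     # integral of P_m * P_n over [-1, 1]
        expected = 2.0 / (2 * n + 1) if m == n else 0.0
        assert abs(inner - expected) < 1e-10
print("P_0 .. P_3 are mutually orthogonal on [-1, 1]")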
There are also finite-dimensional Hilbert spaces. The space ℂ^n is a Hilbert space of dimension n. The inner product is the standard inner product on these spaces. In it, the "spin part" of a single particle wave function resides.
In the non-relativistic description of an electron one has n = 2 and the total wave function is a solution of the Pauli equation.
In the corresponding relativistic treatment, n = 4 and the wave function solves the Dirac equation.
With more particles, the situation is more complicated. One has to employ tensor products and use representation theory of the symmetry groups involved (the rotation group and the Lorentz group respectively) to extract from the tensor product the spaces in which the (total) spin wave functions reside. (Further problems arise in the relativistic case unless the particles are free. See the Bethe–Salpeter equation.) Corresponding remarks apply to the concept of isospin, for which the symmetry group is SU(2). The models of the nuclear forces of the sixties (still useful today, see nuclear force) used the symmetry group SU(3). In this case, as well, the part of the wave functions corresponding to the inner symmetries resides in some finite-dimensional space or in subspaces of tensor products of such spaces.
In quantum field theory the underlying Hilbert space is Fock space. It is built from free single-particle states, i.e. wave functions when a representation is chosen, and can accommodate any finite, not necessarily constant in time, number of particles. The interesting (or rather the tractable) dynamics lies not in the wave functions but in the field operators that are operators acting on Fock space. Thus the Heisenberg picture is the most common choice (constant states, time varying operators).
Due to the infinite-dimensional nature of the system, the appropriate mathematical tools are objects of study in functional analysis.
Simplified description
Not all introductory textbooks take the long route and introduce the full Hilbert space machinery; many focus instead on the non-relativistic Schrödinger equation in position representation for certain standard potentials. The following constraints on the wave function are sometimes explicitly formulated for the calculations and physical interpretation to make sense:
The wave function must be square integrable. This is motivated by the Copenhagen interpretation of the wave function as a probability amplitude.
It must be everywhere continuous and everywhere continuously differentiable. This is motivated by the appearance of the Schrödinger equation for most physically reasonable potentials.
It is possible to relax these conditions somewhat for special purposes.
If these requirements are not met, it is not possible to interpret the wave function as a probability amplitude. Exceptions to the continuity-of-derivatives rule can arise at points where the potential has an infinite discontinuity; for example, for a particle in a box the derivative of the wave function can be discontinuous at the boundary of the box, where the potential is known to have an infinite discontinuity.
This does not alter the structure of the Hilbert space that these particular wave functions inhabit. However, the subspace of the square-integrable functions L² (which is a Hilbert space) consisting of functions satisfying the second requirement is not closed in L², hence not a Hilbert space in itself.
The functions that do not meet the requirements are still needed for both technical and practical reasons.
More on wave functions and abstract state space
As has been demonstrated, the set of all possible wave functions in some representation for a system constitute an in general infinite-dimensional Hilbert space. Due to the multiple possible choices of representation basis, these Hilbert spaces are not unique. One therefore talks about an abstract Hilbert space, state space, where the choice of representation and basis is left undetermined. Specifically, each state is represented as an abstract vector in state space. A quantum state in any representation is generally expressed as a vector
where
the basis vectors of the chosen representation
a differential volume element in the continuous degrees of freedom
a component of the state vector, called the wave function of the system
dimensionless discrete quantum numbers
continuous variables (not necessarily dimensionless)
These quantum numbers index the components of the state vector. Moreover, every discrete quantum number ranges over a set of allowed values, so the collection of discrete quantum numbers lies in a product of such sets; likewise, every continuous variable ranges over a set of allowed real values, so the collection of continuous variables lies in a "volume" that is a product of subsets of the real numbers. For generality, the numbers of discrete and continuous degrees of freedom are not necessarily equal.
The probability density of finding the system at time t in a given configuration of the discrete and continuous variables is the squared modulus of the corresponding component of the state vector, ρ = |Ψ|².
The probability of finding the system in some or all of the possible discrete-variable configurations, and in some or all of the possible continuous-variable configurations, is the sum and integral of this density over those configurations.
Since the sum of all probabilities must be 1, the normalization condition
must hold at all times during the evolution of the system.
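Schematically, writing ω for the collection of discrete quantum numbers and x for the continuous variables (notation introduced here only for illustration), the condition reads

\sum_{\omega} \int |\Psi(\omega, \mathbf{x}, t)|^{2} \, dV = 1,

with the sum over all allowed discrete values and the integral over the whole continuous "volume".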
The normalization condition requires the probability density times the volume element to be dimensionless; by dimensional analysis, the wave function must therefore have the same units as the inverse square root of the volume element.
Ontology
Whether the wave function exists in reality, and what it represents, are major questions in the interpretation of quantum mechanics. Many famous physicists of a previous generation puzzled over this problem, such as Erwin Schrödinger, Albert Einstein and Niels Bohr. Some advocate formulations or variants of the Copenhagen interpretation (e.g. Bohr, Eugene Wigner and John von Neumann) while others, such as John Archibald Wheeler or Edwin Thompson Jaynes, take the more classical approach and regard the wave function as representing information in the mind of the observer, i.e. a measure of our knowledge of reality. Some, including Schrödinger, David Bohm and Hugh Everett III and others, argued that the wave function must have an objective, physical existence. Einstein thought that a complete description of physical reality should refer directly to physical space and time, as distinct from the wave function, which refers to an abstract mathematical space.
See also
Boson
De Broglie–Bohm theory
Double-slit experiment
Faraday wave
Fermion
Phase-space formulation
Schrödinger equation
Wave function collapse
Wave packet
Notes
Remarks
Citations
References
Online copy (French) Online copy (English)
Online copy
Further reading
External links
Quantum Mechanics for Engineers
Spin wave functions NYU
Identical Particles Revisited, Michael Fowler
The Nature of Many-Electron Wavefunctions
Quantum Mechanics and Quantum Computation at BerkeleyX
Einstein, The quantum theory of radiation
Quantum states
Waves | Wave function | [
"Physics"
] | 10,220 | [
"Physical phenomena",
"Quantum mechanics",
"Waves",
"Motion (physics)",
"Quantum states"
] |
145,372 | https://en.wikipedia.org/wiki/Audiophile | An audiophile (from Latin audio "I hear" and Ancient Greek philos "loving") is a person who is enthusiastic about high-fidelity sound reproduction. The audiophile seeks to achieve high sound quality in the audio reproduction of recorded music, typically in a quiet listening space in a room with good acoustics.
Audiophile values may be applied at all stages of music reproduction—the initial audio recording, the production process, the storage of sound data, and the playback (usually in a home setting). In general, the values of an audiophile are seen to be antithetical to the growing popularity of more convenient but lower-quality music, especially lossy digital file types like MP3, lower-definition music streaming services, laptop or cell phone speakers, and low-cost headphones.
The term high-end audio refers to playback equipment used by audiophiles, which may be bought at specialist shops and websites. High-end components include turntables, digital-to-analog converters, equalization devices, preamplifiers and amplifiers (both solid-state and vacuum tube), loudspeakers (including horn, electrostatic and magnetostatic speakers), power conditioners, subwoofers, headphones, and acoustic room treatment in addition to room correction devices.
Although many audiophile techniques are based on objective criteria that can be verified using techniques like ABX testing, perceived sound quality is necessarily subjective, often with subtle differences, leading to some more controversial audiophile techniques being based on pseudoscientific principles.
Audio playback components
An audio system typically consists of one or more source components, one or more amplification components, and (for stereo) two or more loudspeakers.
Signal cables (analog audio, speaker, digital audio etc.) are used to link these components. There are also a variety of accessories, including equipment racks, power conditioners, devices to reduce or control vibration, record cleaners, anti-static devices, phonograph needle cleaners, reverberation reducing devices such as speaker pads and stands, sound absorbent foam, and soundproofing.
The interaction between the loudspeakers and the room (room acoustics) plays an important part in sound quality. Sound vibrations are reflected from walls, floor and ceiling, and are affected by the room's contents. Room dimensions can create standing waves at particular (usually low) frequencies. There are devices and materials for room treatment that affect sound quality. Soft materials, such as draperies and carpets, can absorb higher frequencies, whereas hard walls and floors can cause excess reverberation.
Sound sources
Audiophiles play music from a variety of sources including phonograph records, compact discs (CDs), and digital audio files that are either uncompressed or are losslessly compressed, such as FLAC, DSD, Windows Media Audio 9 Lossless and Apple Lossless (ALAC), in contrast to lossy compression, such as in MP3 encoding. From the early 1990s, CDs were the most common source of high-quality music. Nevertheless, turntables, tonearms, and magnetic cartridges are still used, despite the difficulties of keeping records free from dust and the delicate set-up associated with turntables.
The 44.1 kHz sampling rate of the CD format, in theory, restricts CD information losses to above the theoretical upper-frequency limit of human hearing – 20 kHz. Nonetheless, newer formats such as FLAC, ALAC, DVD-Audio and Super Audio Compact Disc (SACD) allow for sampling rates of 88.2 kHz, 96 kHz or even 192 kHz. Higher sample rates allow fewer restrictions on filter choices in playback components, and some audiophiles upsample from the source rate to higher rates to achieve different filter properties.
CD audio signals are encoded in 16-bit values. Higher-definition consumer formats such as HDCD-encoded CDs, DVD-Audio, and SA-CD contain 20-bit, 24-bit and even 32-bit audio streams. With more bits, more dynamic range is possible; 20-bit dynamic range is theoretically 120 dB—the limit of most consumer electronic playback equipment.
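The bandwidth and dynamic-range figures quoted in the two preceding paragraphs follow from standard rules of thumb (ignoring dither and real converter limitations); a quick check:

import math

# Nyquist: the highest representable frequency is half the sampling rate.
for fs in (44_100, 88_200, 96_000, 192_000):
    print(f"{fs} Hz sampling -> {fs / 2:.0f} Hz of audio bandwidth")

# Theoretical dynamic range of an N-bit quantizer: 20*log10(2**N) ~ 6.02*N dB.
for bits in (16, 20, 24):
    print(f"{bits}-bit -> {20 * math.log10(2 ** bits):.1f} dB dynamic range")
# 44.1 kHz gives 22.05 kHz, just above the ~20 kHz hearing limit; 16 bits give
# about 96 dB and 20 bits about 120 dB, matching the figures in the text.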
SACDs and DVD-Audio support 5.1 or 6.1 surround sound. Although both high-res optical formats have failed, there has been a resurgence in high-res digital files. SACD audio can be stored as a DSD file, and DVD-Audio can be stored as a FLAC or ALAC file. FLAC is the most widely used digital format for high-res audio, supporting up to 8 channels, a maximum depth of 32 bits, and sampling rates up to 655,350 Hz. Uncompressed formats such as WAV and AIFF can store CD audio without compression.
Amplifiers
A preamplifier selects among several audio inputs, amplifies source-level signals (such as those from a turntable), and allows the listener to adjust the sound with volume and tone controls. Many audiophile-oriented preamplifiers lack tone controls. A power amplifier takes the "line-level" audio signal from the preamplifier and drives the loudspeakers. An integrated amplifier combines the functions of power amplification with input switching and volume and tone control. Both pre/power combinations and integrated amplifiers are widely used by audiophiles.
Audiophile amplifiers are available based on solid-state (semiconductor) technology, vacuum-tube (valve) technology, or hybrid technology—semiconductors and vacuum tubes.
Dedicated amplifiers are also commonly used by audiophiles to drive headphones, especially those with high impedance and/or low sensitivity, or electrostatic headphones.
Loudspeakers
The loudspeaker's cabinet is known as the enclosure. There are a variety of loudspeaker enclosure designs, including sealed cabinets (acoustic suspension), ported cabinets (bass-reflex), transmission line, infinite baffles, and horn-loaded. The enclosure plays a major role in the sound of the loudspeaker.
Depending on the frequencies reproduced, the drivers that produce the sound are referred to as tweeters for high frequencies, midranges for middle frequencies, such as voice and lead instruments, and woofers for bass frequencies. Driver designs include dynamic, electrostatic, plasma, ribbon, planar, ionic, and servo-actuated. Drivers are made from various materials, including paper pulp, polypropylene, kevlar, aluminium, magnesium, beryllium, and vapour-deposited diamond.
The direction and intensity of the output of a loudspeaker, called dispersion or polar response, has a large effect on its sound. Various methods are employed to control the dispersion. These methods include monopolar, bipolar, dipolar, 360-degree, horn, waveguide, and line source. These terms refer to the configuration and arrangement of the various drivers in the enclosure.
The positioning of loudspeakers in the room strongly influences the sound experience. Loudspeaker output is influenced by interaction with room boundaries, particularly bass response, and high-frequency transducers are directional, or "beaming".
Accessories
Audiophiles use a wide variety of accessories and fine-tuning techniques, sometimes referred to as "tweaks", to improve the sound of their systems. These include power conditioner filters to "clean" the electricity, equipment racks to isolate components from floor vibrations, specialty power and audio cables, loudspeaker stands (and footers to isolate speakers from stands), and room treatments.
There are several types of room treatment. Sound-absorbing materials may be placed strategically within a listening room to reduce the amplitude of early reflections, and to deal with resonance modes. Other treatments are designed to produce diffusion, reflection of sound in a scattered fashion. Room treatments can be expensive and difficult to optimize.
Headphones
Headphones are regularly used by audiophiles. These products can be remarkably expensive, some over $10,000, but in general are much cheaper than comparable speaker systems. They have the advantage of not requiring room treatment and being usable without requiring others to listen at the same time. However, many audiophiles still prefer speaker systems over headphones due to their ability to simulate an immersive, rounded sonic environment. Newer canalphones can be driven by the less powerful outputs found on portable music players.
Design variety
For music storage and playback, digital formats offer an absence of clicks, pops, wow, flutter, acoustic feedback, and rumble, compared to vinyl records. Depending on the format, digital can have a higher signal-to-noise ratio, a wider dynamic range, less total harmonic distortion, and a flatter and more extended frequency response. The digital recording and playback processes may include degradations not found in the analog processes, such as timing jitter and distortions associated with band limiting filter choices. Vinyl records remain popular and discussion about the relative merits of analog and digital sound continues (see Comparison of analog and digital recording). Note that vinyl records may be mastered differently from their digital versions, and multiple digital remasters may exist.
In the amplification stage, vacuum-tube electronics remain popular, despite most other applications having since abandoned tubes for solid state amplifiers. Vacuum-tube amplifiers often have higher total harmonic distortion, require rebiasing, are less reliable, generate more heat, are less powerful, and cost more. There is also continuing debate about the proper use of negative feedback in amplifier design.
Community
The audiophile community is scattered across many different platforms and communication methods. In person, one can find audiophiles at audio-related events such as music festivals, theaters, and concerts. The online audiophile community is even more widespread, with users on web forums and apps such as Facebook, Reddit, and others. These groups are self-identified audiophiles and will often contribute to their communities by mentoring new audiophiles, posting their current audio configurations, and sharing news related to the audiophile community.
Among the listeners themselves, audiophiles will commonly differentiate community members between "golden eared" and "wooden eared" individuals. Those who are deemed as having "golden ears" are people who can accurately express the description of a sound or sonic environment, whereas those with "wooden ears" are implied to be untrained in listening and needing more guidance or assistance. These labels are not permanent, however, and people within these two groups can move between the groups interchangeably, often depending on the judgement of others within the community.
Controversies
There is substantial controversy on the subject of audiophile components; many have asserted that the occasionally high cost produces no measurable improvement in audio reproduction. For example, skeptic James Randi, through his foundation One Million Dollar Paranormal Challenge, offered a prize of $1 million to anyone able to demonstrate that $7,250 audio cables "are any better than ordinary audio cables". In 2008, audio reviewer Michael Fremer attempted to claim the prize, and said that Randi declined the challenge. Randi said that the cable manufacturer Pear Cables was the one who withdrew.
Another commonly referenced study done by Philip Greenspun and Leigh Klotz of the Massachusetts Institute of Technology found that although test subjects were able to distinguish between high fidelity, "expensive" cables versus common use cables, there was no statistically significant preference between the two cables. Greenspun and Klotz expect that critics of the study will point to the fact that this experiment was not done as a double-blind test, but this critique has a counter in that the study participants felt as though the experiment solely isolated the subjects' opinions on sound quality and nothing more.
There is disagreement on how equipment testing should be conducted and its utility. Audiophile publications frequently describe differences in quality which are not detected by standard audio system measurements and double blind testing, claiming that they perceive differences in audio quality which cannot be measured by current instrumentation, and cannot be detected by listeners if listening conditions are controlled, but without providing an explanation for those claims.
Criticisms usually focus on claims around so-called "tweaks" and accessories beyond the core source, amplification, and speaker products. Examples of these accessories include speaker cables, component interconnects, stones, cones, CD markers, and power cables or conditioners. One of the most notorious "tweakers" was Peter Belt, who introduced numerous eccentric innovations that included a £500 "quantum clip" that consisted of a crocodile clip with a short length of copper wire attached.
See also
Broadcast quality
Professional audio
Videophile
Audiophile publications
The Absolute Sound
Stereophile
What Hi-Fi? Sound and Vision
References
External links
Audiophilia: The Online Journal for the Serious Audiophile
Why We Need Audiophiles (Gizmodo)
High end Audio and Audiophile Pages
Enjoy the Music.com: Equipment reviews, industry news, shows reports, etc.
Portuguese High end Audio benchmark reviews and reports website from José Victor Henriques.
Audio societies
Audiophile Society of New South Wales
Bay Area Audiophile Society
Boston Audio Society
Chicago Audio Society
Colorado Audio Society
Los Angeles and Orange County Audio Society
Pacific Northwest Audio Society
Hobbies
Audio engineering | Audiophile | [
"Engineering"
] | 2,680 | [
"Electrical engineering",
"Audio engineering"
] |
145,424 | https://en.wikipedia.org/wiki/Exoskeleton | An exoskeleton (from Greek éxō "outer" and skeletós "skeleton") is a skeleton that is on the exterior of an animal in the form of hardened integument, which both supports the body's shape and protects the internal organs, in contrast to an internal endoskeleton (e.g. that of a human) which is enclosed underneath other soft tissues. Some large, hard and non-flexible protective exoskeletons are known as shell or armour.
Examples of exoskeletons in animals include the cuticle skeletons shared by arthropods (insects, chelicerates, myriapods and crustaceans) and tardigrades, as well as the skeletal cups formed by hardened secretion of stony corals, the test/tunic of sea squirts and sea urchins, and the prominent mollusc shell shared by snails, clams, tusk shells, chitons and nautilus. Some vertebrate animals, such as the turtle, have both an endoskeleton and a protective exoskeleton.
Role
Exoskeletons contain rigid and resistant components that fulfil a set of functional roles in addition to structural support in many animals, including protection, respiration, excretion, sensation, feeding and courtship display, and as an osmotic barrier against desiccation in terrestrial organisms. Exoskeletons have roles in defence from parasites and predators and in providing attachment points for musculature.
Arthropod exoskeletons contain chitin; the addition of calcium carbonate makes them harder and stronger, at the price of increased weight. Ingrowths of the arthropod exoskeleton known as apodemes serve as attachment sites for muscles. These structures are composed of chitin and are approximately six times stronger and twice the stiffness of vertebrate tendons. Similar to tendons, apodemes can stretch to store elastic energy for jumping, notably in locusts. Calcium carbonates constitute the shells of molluscs, brachiopods, and some tube-building polychaete worms. Silica forms the exoskeleton in the microscopic diatoms and radiolaria. One mollusc species, the scaly-foot gastropod, even uses the iron sulfides greigite and pyrite.
Some organisms, such as some foraminifera, agglutinate exoskeletons by sticking grains of sand and shell to their exterior. Contrary to a common misconception, echinoderms do not possess an exoskeleton and their test is always contained within a layer of living tissue.
Exoskeletons have evolved independently many times; 18 lineages evolved calcified exoskeletons alone. Further, other lineages have produced tough outer coatings, such as some mammals, that are analogous to an exoskeleton. This coating is constructed from bone in the armadillo, and hair in the pangolin. The armour of reptiles like turtles and dinosaurs like Ankylosaurs is constructed of bone; crocodiles have bony scutes and horny scales.
Growth
Since exoskeletons are rigid, they present some limits to growth. Organisms with open shells can grow by adding new material to the aperture of their shell, as is the case in gastropods, bivalves, and other molluscans. A true exoskeleton, like that found in panarthropods, must be shed via moulting (ecdysis) when the animal starts to outgrow it. A new exoskeleton is produced beneath the old one, and the new skeleton is soft and pliable before shedding the old one. The animal will typically stay in a den or burrow during moulting, as it is quite vulnerable to trauma during this period. Once at least partially set, the organism will plump itself up to try to expand the exoskeleton. The new exoskeleton is still capable of growing to some degree before it is eventually hardened. In contrast, moulting reptiles shed only the outer layer of skin and often exhibit indeterminate growth. These animals produce new skin and integuments throughout their life, replacing them according to growth. Arthropod growth, however, is limited by the space within its current exoskeleton. Failure to shed the exoskeleton once outgrown can result in the animal's death or prevent subadults from reaching maturity, thus preventing them from reproducing. This is the mechanism behind some insect pesticides, such as Azadirachtin.
Paleontological significance
Exoskeletons, as hard parts of organisms, are greatly useful in assisting the preservation of organisms, whose soft parts usually rot before they can be fossilized. Mineralized exoskeletons can be preserved as shell fragments. The possession of an exoskeleton permits a couple of other routes to fossilization. For instance, the strong layer can resist compaction, allowing a mould of the organism to be formed underneath the skeleton, which may later decay. Alternatively, exceptional preservation may result in chitin being mineralised, as in the Burgess Shale, or transformed to the resistant polymer keratin, which can resist decay and be recovered.
However, our dependence on fossilised skeletons also significantly limits our understanding of evolution. Only the parts of organisms that were already mineralised are usually preserved, such as the shells of molluscs. It helps that exoskeletons often contain "muscle scars", marks where muscles have been attached to the exoskeleton, which may allow the reconstruction of much of an organism's internal parts from its exoskeleton alone. The most significant limitation is that, although there are 30-plus phyla of living animals, two-thirds of these phyla have never been found as fossils, because most animal species are soft-bodied and decay before they can become fossilised.
Mineralized skeletons first appear in the fossil record shortly before the base of the Cambrian period. The evolution of a mineralised exoskeleton is considered a possible driving force of the Cambrian explosion of animal life, resulting in a diversification of predatory and defensive tactics. However, some Precambrian (Ediacaran) organisms produced tough outer shells while others, such as Cloudina, had a calcified exoskeleton. Some Cloudina shells even show evidence of predation, in the form of borings.
Evolution
The fossil record primarily contains mineralized exoskeletons, since these are by far the most durable. Since most lineages with exoskeletons are thought to have started with a non-mineralized exoskeleton which they later mineralized, it is difficult to comment on the very early evolution of each lineage's exoskeleton. It is known, however, that in a very short course of time, just before the Cambrian period, exoskeletons made of various materials – silica, calcium phosphate, calcite, aragonite, and even glued-together mineral flakes – sprang up in a range of different environments. Most lineages adopted the form of calcium carbonate which was stable in the ocean at the time they first mineralized, and did not change from this mineral morph - even when it became less favourable.
Some Precambrian (Ediacaran) organisms produced tough but non-mineralized outer shells, while others, such as Cloudina, had a calcified exoskeleton, but mineralized skeletons did not become common until the beginning of the Cambrian period, with the rise of the "small shelly fauna". Just after the base of the Cambrian, these miniature fossils become diverse and abundant – this abruptness may be an illusion since the chemical conditions which preserved the small shells appeared at the same time. Most other shell-forming organisms appeared during the Cambrian period, with the Bryozoans being the only calcifying phylum to appear later, in the Ordovician. The sudden appearance of shells has been linked to a change in ocean chemistry which made the calcium compounds of which the shells are constructed stable enough to be precipitated into a shell. However, this is unlikely to be a sufficient cause, as the main construction cost of shells is in creating the proteins and polysaccharides required for the shell's composite structure, not in the precipitation of the mineral components. Skeletonization also appeared at almost the same time that animals started burrowing to avoid predation, and one of the earliest exoskeletons was made of glued-together mineral flakes, suggesting that skeletonization was likewise a response to increased pressure from predators.
Ocean chemistry may also control which mineral shells are constructed of. Calcium carbonate has two forms, the stable calcite and the metastable aragonite, which is stable within a reasonable range of chemical environments but rapidly becomes unstable outside this range. When the oceans contain a relatively high proportion of magnesium compared to calcium, aragonite is more stable, but as the magnesium concentration drops, it becomes less stable, hence harder to incorporate into an exoskeleton, as it will tend to dissolve.
Except for the molluscs, whose shells often comprise both forms, most lineages use just one form of the mineral. The form used appears to reflect the seawater chemistry – thus which form was more easily precipitated – at the time that the lineage first evolved a calcified skeleton, and does not change thereafter. However, the relative abundance of calcite- and aragonite-using lineages does not reflect subsequent seawater chemistry – the magnesium/calcium ratio of the oceans appears to have a negligible impact on organisms' success, which is instead controlled mainly by how well they recover from mass extinctions. A recently discovered modern gastropod Chrysomallon squamiferum that lives near deep-sea hydrothermal vents illustrates the influence of both ancient and modern local chemical environments: its shell is made of aragonite, which is found in some of the earliest fossil molluscs; but it also has armour plates on the sides of its foot, and these are mineralised with the iron sulfides pyrite and greigite, which had never previously been found in any metazoan but whose ingredients are emitted in large quantities by the vents.
See also
Spiracle – small openings in the exoskeleton that allow insects to breathe
Hydrostatic skeleton
Endoskeleton
Powered exoskeleton
Osteoderm
Chrysomallon squamiferum – a gastropod known to incorporate iron into its exoskeleton
References
Animal anatomy
Biomechanics
Skeletons
Armour (zoology) | Exoskeleton | [
"Physics",
"Biology"
] | 2,223 | [
"Biomechanics",
"Mechanics",
"Biological defense mechanisms",
"Armour (zoology)"
] |
145,438 | https://en.wikipedia.org/wiki/Meta-Object%20Facility | The Meta-Object Facility (MOF) is an Object Management Group (OMG) standard for model-driven engineering. Its purpose is to provide a type system for entities in the CORBA architecture and a set of interfaces through which those types can be created and manipulated.
MOF may be used for domain-driven software design and object-oriented modelling.
Overview
MOF was developed to provide a type system for use in the CORBA architecture, a set of schemas by which the structure, meaning and behaviour of objects could be defined, and a set of CORBA interfaces through which these schemas could be created, stored and manipulated.
MOF is designed as a four-layered architecture. It provides a meta-meta model at the top layer, called the M3 layer. This M3-model is the language used by MOF to build metamodels, called M2-models. The most prominent example of a Layer 2 MOF model is the UML metamodel, the model that describes the UML itself. These M2-models describe elements of the M1-layer, and thus M1-models. These would be, for example, models written in UML. The last layer is the M0-layer or data layer. It is used to describe real-world objects.
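The layering can be loosely illustrated with Python's metaclass machinery (an analogy only – MOF is language-independent and is not defined in terms of Python; the class names below are invented):

class UMLClass(type):                 # M2-like: a metamodel element
    """Metaclass standing in for the metamodel concept 'Class'."""

class Person(metaclass=UMLClass):     # M1-like: a model element (a modelled class)
    def __init__(self, name):
        self.name = name

alice = Person("Alice")               # M0-like: a real-world object / instance

assert isinstance(alice, Person)      # M0 conforms to M1
assert isinstance(Person, UMLClass)   # M1 conforms to M2
assert isinstance(UMLClass, type)     # M2 conforms to M3 (type is its own metaclass)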
Beyond the M3-model, MOF describes the means to create and manipulate models and metamodels by defining CORBA interfaces that describe those operations. Because of the similarities between the MOF M3-model and UML structure models, MOF metamodels are usually modeled as UML class diagrams.
File formats
A conversion from MOF specification models (M3-, M2-, or M1-Layer) to W3C XML and XSD are specified by the XMI (ISO/IEC 19503) specification. XMI is an XML-based exchange format for models.
For mapping MOF to Java™ there is the Java Metadata Interchange (JMI) specification by the Java Community Process.
MOF also provides specifications that ease the automatic generation of CORBA IDL interfaces.
Metamodeling architecture
MOF is a closed metamodeling architecture; it defines an M3-model, which conforms to itself. MOF allows a strict meta-modeling architecture; every model element on every layer is strictly in correspondence with a model element of the layer above. MOF only provides a means to define the structure, or abstract syntax of a language or of data. For defining metamodels, MOF plays exactly the role that EBNF plays for defining programming language grammars. MOF is a Domain Specific Language (DSL) used to define metamodels, just as EBNF is a DSL for defining grammars. Similarly to EBNF, MOF could be defined in MOF.
In short, MOF uses the notion of MOF::Classes (not to be confused with UML::Classes), as known from object orientation, to define concepts (model elements) on a metalayer. MOF may be used to define object-oriented metamodels (as UML for example) as well as non object-oriented metamodels (e.g. a Petri net or a Web Service metamodel).
As of May 2006, the OMG has defined two compliance points for MOF:
EMOF for Essential MOF
CMOF for Complete MOF
In June 2006, a request for proposal was issued by OMG for a third variant, SMOF (Semantic MOF).
The variant ECore that has been defined in the Eclipse Modeling Framework is more or less aligned on OMG's EMOF.
Another related standard is OCL, which describes a formal language that can be used to define model constraints in terms of predicate logic.
QVT, which introduces means to query, view and transform MOF-based models, is a very important standard, approved in 2008. See Model Transformation Language for further information.
International standard
MOF is an international standard:
MOF 2.4.2
ISO/IEC 19508:2014 Information technology — Object Management Group Meta Object Facility (MOF) Core
MOF 1.4
ISO/IEC 19502:2005 Information technology — Meta Object Facility (MOF)
MOF can be viewed as a standard to write metamodels, for example in order to model the abstract syntax of Domain Specific Languages. Kermeta is an extension to MOF allowing executable actions to be attached to EMOF meta-models, hence making it possible to also model a DSL operational semantics and readily obtain an interpreter for it.
JMI defines a Java API for manipulating MOF models.
OMG's MOF is not to be confused with the Managed Object Format (MOF) defined by the Distributed Management Task Force (DMTF) in section 6 of the Common Information Model (CIM) Infrastructure Specification, version 2.5.0.
See also
Common Warehouse Metamodel
Domain-specific language
Kermeta
KM3
Metamodeling
Metadata
Model-driven architecture
OGML
Platform-specific model
QVT
SPEM
XML Metadata Interchange
References
Further reading
Official MOF specification from OMG
Ralph Sobek, MOF Specifications Documents
Johannes Ernst, What is metamodeling?
Woody Pidcock, What are the differences between a vocabulary, a taxonomy, a thesaurus, an ontology, and a meta-model?
Anna Gerber and Kerry Raymond, MOF to EMF and Back Again.
Weaving Executability into Object-Oriented Meta-Languages
MOF Support for Semantic Structures RFP Request For Proposal on SMOF
External links
OMG's MetaObject Facility
Specification languages
Data modeling
Unified Modeling Language
ISO standards | Meta-Object Facility | [
"Engineering"
] | 1,172 | [
"Data modeling",
"Specification languages",
"Data engineering",
"Software engineering"
] |
8,510,311 | https://en.wikipedia.org/wiki/Liquid%20Fidelity | Liquid Fidelity is a "microdisplay" technology applied in high-definition televisions. It incorporates Liquid Crystal on Silicon technology capable of producing true 1080p resolution with two million pixels on a single display chip.
Components of Liquid Fidelity technology were originally used in 720p HDTVs produced by Uneed Systems of Korea from 2004 to 2006.
Technology Overview
Liquid Crystal on Silicon in general is a sophisticated mix of optical and electrical technologies on one chip. The top layer of the chip is liquid crystal material, the bottom layer is an integrated circuit that drives the liquid crystal, and the surface between the layers is highly reflective. The circuit determines how much light passes through the liquid crystal layer, and the reflected light creates an image on a projection screen.
LCOS chips with both 720p and 1080p resolution have been developed for HDTVs. Nearly all LCOS chips in mass production have been used in three-chip systems, with one LCOS chip each for red, green and blue light. Sony’s SXRD and JVC’s HD-ILA TVs create images this way. While three-chip systems can produce very good HDTV pictures, they are difficult to align precisely and are expensive. Misalignments can cause visible convergence errors between red, green and blue, particularly along the sides and in the corners of the screen.
Liquid Fidelity addresses both the alignment and cost problems. Exclusive technology enables Liquid Fidelity to change its brightness much more quickly than ordinary LCOS chips can. This fast response allows the use of one chip and a color wheel, rather than three chips, so red, green and blue alignment is assured at all areas on the screen. Also, by eliminating two of the three LCOS chips and the additional optical components to support them, Liquid Fidelity HDTVs are generally less expensive to manufacture.
Comparison to DLP technology
DLP uses MEMS technology, which stands for Micro-Electro-Mechanical Systems. DLP HDTV chips include hundreds of thousands of microscopic mirrors which tilt back and forth, reflecting light which is then projected onto a television screen. While Liquid Fidelity creates an HDTV image by controlling the amount of light reflecting from it, DLP creates an HDTV image by varying the percentage of the time that its mirrors are aimed toward the projection screen.
The main advantage of Liquid Fidelity over DLP is that the 1080p Liquid Fidelity chip has over 2 million cells, in an array of 1920 x 1080, for true 1080p pixel resolution. The 1080p DLP chips designed for consumer HDTVs have only half that number of microscopic mirrors, and use yet another mechanism to create 2 pixels from each of those mirrors.
By providing a dedicated cell for every pixel, Liquid Fidelity technology provides a sharp, stable picture with smooth, fine texture.
References
LCoS and 'Liquid Fidelity': A microdisplay application
External links
MicroDisplay Corporation
Liquid Fidelity
Liquid Fidelity detail
Display technology | Liquid Fidelity | [
"Engineering"
] | 593 | [
"Electronic engineering",
"Display technology"
] |
8,512,436 | https://en.wikipedia.org/wiki/Olfactory%20fatigue | Olfactory fatigue, also known as odor fatigue, odor habituation, olfactory adaptation, or noseblindness, is the temporary, normal inability to distinguish a particular odor after a prolonged exposure to that airborne compound. For example, when entering a restaurant initially the odor of food is often perceived as being very strong, but after time the awareness of the odor normally fades to the point where the smell is not perceptible or is much weaker. After leaving the area of high odor, the sensitivity is restored with time. Anosmia is the permanent loss of the sense of smell, and is different from olfactory fatigue.
It is a term commonly used in wine tasting, where one loses the ability to smell and distinguish wine bouquet after sniffing at wine continuously for an extended period of time. The term is also used in the study of indoor air quality, for example, in the perception of odors from people, tobacco, and cleaning agents. Since odor detection may be an indicator that exposure to certain chemicals is occurring, olfactory fatigue can also reduce one's awareness about chemical hazard exposure.
Olfactory fatigue is an example of neural adaptation. The body becomes desensitized to stimuli to prevent the overloading of the nervous system, thus allowing it to respond to new stimuli that are 'out of the ordinary'.
Mechanism
Odorants are small molecules present in the environment that bind receptors on the surface of cells called olfactory receptor neurons (ORNs). ORNs are present in the olfactory epithelium which lines the nasal cavity and are able to signal due to an internal balance of signal molecules which vary in concentration depending on the presence or absence of an odorant. When odorants bind receptors on ORNs, Ca2+ ions flood into the cell causing depolarization and signaling to the brain. Increased Ca2+ also activates a negative, stabilizing feedback loop which lowers the olfactory neuron's sensitivity the longer it is stimulated by an odorant to prevent overstimulation. This happens by limiting the amount of cyclic AMP (cAMP) in the cell and by making the Ca2+-importing channels which cAMP binds to less responsive to cAMP, both effects reducing further intake of Ca2+ and thus limiting depolarization and signaling to the brain. It is important to note that the same mechanism which allows for signaling also limits signaling for prolonged periods of time; the first cannot occur without the second.
On the molecular level, as ORNs depolarize in response to an odorant the G-protein mediated second messenger response activates adenylyl cyclase. This increases cyclic AMP (cAMP) concentration inside the ORN, which then opens a cyclic nucleotide gated cation channel. The influx of Ca2+ ions through this channel triggers olfactory adaptation immediately because Ca2+/calmodulin-dependent protein kinase II or CaMK activation directly represses the opening of cation channels, inactivates adenylyl cyclase, and activates the phosphodiesterase that cleaves cAMP. This series of actions by CaMK desensitizes olfactory receptors to prolonged odorant exposure.
When the nose is blocked, tasting becomes much harder, because much of what is perceived as flavour depends on odorants reaching the nose from the mouth as we breathe. For example, vanilla is commonly said to "smell sweet", largely because its odour is habitually experienced together with sweet-tasting foods.
Mitigating scent effects on olfactory fatigue
According to a study by Grosofsky, Haupert and Versteeg, "fragrance sellers often provide coffee beans to their customers as a nasal palate cleanser" to reduce the effects of olfactory adaptation and habituation. In their study, participants sniffed coffee beans, lemon slices, or plain air. Participants then indicated which of four presented fragrances had not been previously smelled. The results indicated that coffee beans did not yield better performance than lemon slices or air.
See also
Adaptive system
Anosmia
Banner blindness
Building Indoor Environment
Olfaction
Palate cleanser
Phantosmia
Semantic satiation
Thermal comfort
References
External links
Building biology
fatigue, olfactory
Wine tasting
Indoor air pollution | Olfactory fatigue | [
"Engineering"
] | 840 | [
"Building engineering",
"Building biology"
] |
8,512,525 | https://en.wikipedia.org/wiki/ONETEP | ONETEP (Order-N Electronic Total Energy Package) is a linear-scaling density functional theory software package able to run on parallel computers. It uses a basis of non-orthogonal generalized Wannier functions (NGWFs) expressed in terms of periodic cardinal sine (psinc) functions, which are in turn equivalent to a basis of plane-waves. ONETEP therefore combines the advantages of the plane-wave approach (controllable accuracy and variational convergence of the total energy with respect to the size of the basis) with computational effort that scales linearly with the size of the system. The ONETEP approach involves simultaneous optimization of the density kernel (a generalization of occupation numbers to non-orthogonal basis, which represents the density matrix in the basis of NGWFs) and the NGWFs themselves. The optimized NGWFs then provide a minimal localized basis set, which can be considerably smaller in size, but of equal or higher accuracy, than the unoptimized basis sets used in most linear-scaling approaches.
ONETEP has been developed by a UK-centric group of academics based at the universities of Cambridge, Southampton, Warwick, Imperial College London and Gdańsk University of Technology. It is available to academics at a reduced rate, and licenses can be obtained for non-academic usage from the developers or through Accelrys' Materials Studio package. The latest academic version 6.0 was released on 15 September 2020.
See also
Density functional theory
Quantum chemistry computer programs
External links
References
Computational chemistry software
Density functional theory software | ONETEP | [
"Chemistry"
] | 316 | [
"Computational chemistry",
"Density functional theory software",
"Computational chemistry software",
"Chemistry software"
] |
8,515,349 | https://en.wikipedia.org/wiki/Table%20of%20thermodynamic%20equations | Common thermodynamic equations and quantities in thermodynamics, using mathematical notation, are as follows:
Definitions
Many of the definitions below are also used in the thermodynamics of chemical reactions.
General basic quantities
General derived quantities
Thermal properties of matter
Thermal transfer
Equations
The equations in this article are classified by subject.
Thermodynamic processes
Kinetic theory
Ideal gas
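The central relation here is the ideal gas law,

pV = nRT = N k_B T,

where n is the amount of substance, N the number of particles, R the gas constant and k_B the Boltzmann constant.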
Entropy
S = k_B ln Ω, where k_B is the Boltzmann constant, and Ω denotes the volume of the macrostate in the phase space, otherwise called the thermodynamic probability.
dS = δQ/T, for reversible processes only
Statistical physics
Below are useful results from the Maxwell–Boltzmann distribution for an ideal gas, and the implications of the Entropy quantity. The distribution is valid for atoms or molecules constituting ideal gases.
Corollaries of the non-relativistic Maxwell–Boltzmann distribution are below.
Quasi-static and reversible processes
For quasi-static and reversible processes, the first law of thermodynamics is:
dU = δQ − δW
where δQ is the heat supplied to the system and δW is the work done by the system.
Thermodynamic potentials
The following energies are called the thermodynamic potentials,
and the corresponding fundamental thermodynamic relations or "master equations" are:
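For a simple closed system (omitting chemical-potential terms), the standard potentials are the internal energy U, the enthalpy H = U + pV, the Helmholtz free energy F = U − TS and the Gibbs free energy G = U + pV − TS, with master equations

dU = T\,dS - p\,dV, \qquad dH = T\,dS + V\,dp, \qquad dF = -S\,dT - p\,dV, \qquad dG = -S\,dT + V\,dp.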
Maxwell's relations
The four most common Maxwell's relations are:
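In standard notation (again for a simple closed system) these are

\left(\frac{\partial T}{\partial V}\right)_S = -\left(\frac{\partial p}{\partial S}\right)_V, \qquad \left(\frac{\partial T}{\partial p}\right)_S = \left(\frac{\partial V}{\partial S}\right)_p, \qquad \left(\frac{\partial p}{\partial T}\right)_V = \left(\frac{\partial S}{\partial V}\right)_T, \qquad \left(\frac{\partial V}{\partial T}\right)_p = -\left(\frac{\partial S}{\partial p}\right)_T.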
More relations include the following.
Other differential equations are:
Quantum properties
Indistinguishable Particles
where N is the number of particles, h is the Planck constant, I is the moment of inertia, and Z is the partition function, in various forms:
Thermal properties of matter
Thermal transfer
Thermal efficiencies
See also
List of thermodynamic properties
Antoine equation
Bejan number
Bowen ratio
Bridgman's equations
Clausius–Clapeyron relation
Departure functions
Duhem–Margules equation
Ehrenfest equations
Gibbs–Helmholtz equation
Phase rule
Kopp's law
Noro–Frenkel law of corresponding states
Onsager reciprocal relations
Stefan number
Thermodynamics
Timeline of thermodynamics
Triple product rule
Exact differential
References
Atkins, Peter and de Paula, Julio Physical Chemistry, 7th edition, W.H. Freeman and Company, 2002 .
Chapters 1–10, Part 1: "Equilibrium".
Landsberg, Peter T. Thermodynamics and Statistical Mechanics. New York: Dover Publications, Inc., 1990. (reprinted from Oxford University Press, 1978).
Lewis, G.N., and Randall, M., "Thermodynamics", 2nd Edition, McGraw-Hill Book Company, New York, 1961.
Reichl, L.E., A Modern Course in Statistical Physics, 2nd edition, New York: John Wiley & Sons, 1998.
Schroeder, Daniel V. Thermal Physics. San Francisco: Addison Wesley Longman, 2000 .
Silbey, Robert J., et al. Physical Chemistry, 4th ed. New Jersey: Wiley, 2004.
Callen, Herbert B. (1985). Thermodynamics and an Introduction to Themostatistics, 2nd edition, New York: John Wiley & Sons.
External links
Thermodynamic equation calculator
Thermodynamic equations
Thermodynamics
Chemical engineering | Table of thermodynamic equations | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 675 | [
"Thermodynamic equations",
"Equations of physics",
"Chemical engineering",
"Thermodynamics",
"nan",
"Dynamical systems"
] |
18,848,199 | https://en.wikipedia.org/wiki/BioPAX | BioPAX (Biological Pathway Exchange) is an RDF/OWL-based standard language to represent biological pathways at the molecular and cellular level. Its major use is to facilitate the exchange of pathway data. Pathway data captures our understanding of biological processes, but its rapid growth necessitates development of databases and computational tools to aid interpretation. However, the current fragmentation of pathway information across many databases with incompatible formats presents barriers to its effective use. BioPAX solves this problem by making pathway data substantially easier to collect, index, interpret and share.
BioPAX can represent metabolic and signaling pathways, molecular and genetic interactions and gene regulation networks. BioPAX was created through a community process. Through BioPAX, millions of interactions organized into thousands of pathways across many organisms, from a growing number of sources, are available. Thus, large amounts of pathway data are available in a computable form to support visualization, analysis and biological discovery.
It is supported by a variety of online databases (e.g. Reactome) and tools. The latest released version is BioPAX Level 3. There is also an effort to create a version of BioPAX as part of OBO.
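Because BioPAX data is plain RDF/OWL, it can be read with generic RDF tooling as well as with the dedicated Paxtools API listed under Software below. A minimal sketch using the Python library rdflib (the file name is a placeholder; bp:Pathway and bp:displayName are BioPAX Level 3 terms):

from rdflib import Graph, Namespace

BP = Namespace("http://www.biopax.org/release/biopax-level3.owl#")

g = Graph()
g.parse("pathway.owl", format="xml")       # BioPAX files are serialized as RDF/XML

# List the display names of all Pathway instances in the file
query = """
    SELECT ?name WHERE {
        ?pathway a bp:Pathway ;
                 bp:displayName ?name .
    }
"""
for row in g.query(query, initNs={"bp": BP}):
    print(row.name)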
Governance and development
The next version of BioPAX, Level 4, is being developed by a community of researchers. Development is coordinated by the board of editors and facilitated by various BioPAX work groups.
Systems Biology Pathway Exchange (SBPAX) is an extension for Level 3 and proposal for Level 4 to add quantitative data and systems biology terms (such as Systems Biology Ontology). SBPAX export has been implemented by the pathway databases Signaling Gateway Molecule Pages, and the SABIO-Reaction Kinetics Database. SBPAX import has been implemented by the cellular modeling framework Virtual Cell.
Other proposals for Level 4 include improved support for Semantic Web, validation and visualization.
Databases with BioPAX Export
Online databases offering BioPAX export include:
Signaling Gateway Molecule Pages (SGMP)
Reactome
BioCyc
INOH
BioModels
Nature/NCI Pathway Interaction Database
Cancer Cell Map
Pathway Commons
Netpath - A curated resource of signal transduction pathways in humans
ConsensusPathDB - A database integrating human functional interaction networks
PANTHER (List of Pathways)
WikiPathways
PharmGKB/PharmGKB*
Software
Software supporting BioPAX include:
Paxtools, a Java API for handling BioPAX files
Systems Biology Linker (Sybil), an application for visualizing BioPAX and converting BioPAX to SBML, as part of the Virtual Cell.
ChiBE (Chisio BioPAX Editor), an application for visualizing and editing BioPAX.
BioPAX Validator - syntax and semantic rules and best practices (project wiki)
Cytoscape includes a BioPAX reader and other extensions, such as PathwayCommons plugin and CyPath2 app.
BiNoM, a cytoscape plugin for network analysis, with functions to import and export BioPAX level 3 files.
BioPAX-pattern, a Java API for defining and searching graph patterns in BioPAX files.
See also
SBML
Linked Open Vocabularies
References
External links
BioPAX homepage
BioPAX Sourceforge Wiki
Systems biology
Bioinformatics
Semantic Web
Domain-specific knowledge representation languages | BioPAX | [
"Engineering",
"Biology"
] | 680 | [
"Bioinformatics",
"Biological engineering",
"Systems biology"
] |
2,710,684 | https://en.wikipedia.org/wiki/Delay-tolerant%20networking | Delay-tolerant networking (DTN) is an approach to computer network architecture that seeks to address the technical issues in heterogeneous networks that may lack continuous network connectivity. Examples of such networks are those operating in mobile or extreme terrestrial environments, or planned networks in space.
Recently, the term disruption-tolerant networking has gained currency in the United States due to support from DARPA, which has funded many DTN projects. Disruption may occur because of the limits of wireless radio range, sparsity of mobile nodes, energy resources, attack, and noise.
History
In the 1970s, spurred by the decreasing size of computers, researchers began developing technology for routing between non-fixed locations of computers. While the field of ad hoc routing was inactive throughout the 1980s, the widespread use of wireless protocols reinvigorated the field in the 1990s as mobile ad hoc networking (MANET) and vehicular ad hoc networking became areas of increasing interest.
Concurrently with (but separate from) the MANET activities, DARPA had funded NASA, MITRE and others to develop a proposal for the Interplanetary Internet (IPN). Internet pioneer Vint Cerf and others developed the initial IPN architecture, relating to the necessity of networking technologies that can cope with the significant delays and packet corruption of deep-space communications. In 2002, Kevin Fall started to adapt some of the ideas in the IPN design to terrestrial networks and coined the term delay-tolerant networking and the DTN acronym. A paper published at the 2003 SIGCOMM conference gives the motivation for DTNs. The mid-2000s brought about increased interest in DTNs, including a growing number of academic conferences on delay and disruption-tolerant networking, and growing interest in combining work from sensor networks and MANETs with the work on DTN. This field saw many optimizations on classic ad hoc and delay-tolerant networking algorithms and began to examine factors such as security, reliability, verifiability, and other areas of research that are well understood in traditional computer networking.
Routing
The ability to transport, or route, data from a source to a destination is a fundamental ability all communication networks must have. Delay and disruption-tolerant networks (DTNs) are characterized by their lack of connectivity, resulting in a lack of instantaneous end-to-end paths. In these challenging environments, popular ad hoc routing protocols such as AODV and DSR fail to establish routes. This is because these protocols try to establish a complete route first and only then forward the actual data. However, when instantaneous end-to-end paths are difficult or impossible to establish, routing protocols must adopt a "store and forward" approach, where data is incrementally moved and stored throughout the network in hopes that it will eventually reach its destination. A common technique used to maximize the probability of a message being successfully transferred is to replicate many copies of the message in the hope that one will succeed in reaching its destination. This is feasible only on networks with large amounts of local storage and internode bandwidth relative to the expected traffic. In many common problem spaces, this inefficiency is outweighed by the increased efficiency and shortened delivery times made possible by taking maximum advantage of available unscheduled forwarding opportunities. In others, where available storage and internode throughput opportunities are more tightly constrained, a more discriminate algorithm is required.
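The store-and-forward behaviour described above can be illustrated with a toy sketch. The following Python snippet is not any real DTN implementation; the node names and the single-bundle scenario are invented for illustration. It shows how epidemic-style replication lets a bundle cross a network that never has an end-to-end path:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A DTN node with local storage for the bundles it currently holds (illustrative only)."""
    name: str
    buffer: set = field(default_factory=set)   # bundle IDs held in local storage

def contact(a: Node, b: Node) -> None:
    """When two nodes meet, each replicates the bundles the other lacks (epidemic routing)."""
    a.buffer, b.buffer = a.buffer | b.buffer, a.buffer | b.buffer

# Toy scenario: the source never meets the destination directly,
# but a "mule" node carries the bundle across the partition.
source, mule, destination = Node("source", {"bundle-1"}), Node("mule"), Node("destination")
contact(source, mule)        # first opportunistic contact: bundle copied to the mule
contact(mule, destination)   # later contact: the copy finally reaches the destination
print("delivered:", "bundle-1" in destination.buffer)  # True
```

Real implementations add storage quotas, bundle expiry and smarter forwarding decisions; the point here is only that delivery relies on persistent storage plus contact opportunities rather than on an instantaneous route.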
Other concerns
Bundle protocols
In efforts to provide a shared framework for algorithm and application development in DTNs, RFC 4838 and RFC 5050 were published in 2007 to define a common abstraction for software running on disrupted networks. Commonly known as the Bundle Protocol, this protocol defines a series of contiguous data blocks as a bundle—where each bundle contains enough semantic information to allow the application to make progress where an individual block may not. Bundles are routed in a store and forward manner between participating nodes over varied network transport technologies (including both IP and non-IP based transports). The transport layers carrying the bundles across their local networks are called bundle convergence layers. The bundle architecture therefore operates as an overlay network, providing a new naming architecture based on Endpoint Identifiers (EIDs) and coarse-grained class of service offerings.
Protocols using bundling must leverage application-level preferences for sending bundles across a network. Due to the store and forward nature of delay-tolerant protocols, routing solutions for delay-tolerant networks can benefit from exposure to application-layer information. For example, network scheduling can be influenced if application data must be received in its entirety, quickly, or without variation in packet delay. Bundle protocols collect application data into bundles that can be sent across heterogeneous network configurations with high-level service guarantees. The service guarantees are generally set by the application level, and the Bundle Protocol specification includes "bulk", "normal", and "expedited" markings.
In October 2014 the Internet Engineering Task Force (IETF) instantiated a Delay Tolerant Networking working group to review and revise the protocol specified in RFC 5050. The Bundle Protocol for CCSDS is a profile of RFC 5050 specifically addressing the Bundle Protocol's utility for data communication in space missions.
As of January 2022, the IETF had published RFCs related to BPv7, including RFC 9171, the BPv7 specification.
Security issues
Addressing security issues has been a major focus of the bundle protocol. Possible attacks take the form of nodes behaving as a "black hole" or a "flooder".
Security concerns for delay-tolerant networks vary depending on the environment and application, though authentication and privacy are often critical. These security guarantees are difficult to establish in a network without continuous bi-directional end-to-end paths between devices because the network hinders complicated cryptographic protocols, hinders key exchange, and each device must identify other intermittently visible devices. Solutions have typically been modified from mobile ad hoc network and distributed security research, such as the use of distributed certificate authorities and PKI schemes. Original solutions from the delay-tolerant research community include: 1) the use of identity-based encryption, which allows nodes to receive
information encrypted with their public identifier; and 2) the use of tamper-evident tables with a gossiping protocol.
Implementations
There are a number of implementations of the Bundle Protocol:
BPv6 (RFC 5050, Bundle Protocol for CCSDS)
The main implementation of BPv6 are listed below. A number of other implementations exist.
High-rate DTN—C++17-based; performance-optimized DTN; runs directly on Linux and Windows.
NASA Interplanetary Overlay Network (ION)—Written in C; designed to run on a wide variety of platforms; conforms to restrictions for space flight software (e.g. no dynamic memory allocation).
IBR-DTN—C++-based; runs on routers with OpenWRT; also contains Java applications (router and user apps) for use on Android.
DTN2—C++-based; designed to be a reference / learning / teaching implementation of the Bundle Protocol.
DTN Marshal Enterprise (DTNME)—C++-based; enterprise solution; designed as an operational DTN implementation. Currently used in ISS operations. DTNME is a single implementation supporting both BPv6 and BPv7.
BPv7 (Internet Research Task Force RFC)
The draft of BPv7 lists the following implementations.
High-rate DTN—C++17-based; performance-optimized DTN; runs directly on Linux and Windows.
μPCN—C; built upon the POSIX API as well as FreeRTOS and intended to run on low-cost micro satellites.
PyDTN—Python; developed by X-works and during the IETF 101 Hackathon.
Terra—Java; developed in the context of terrestrial DTN.
dtn7-go—Go; implementation focused on easy extensibility and suitable for research.
dtn7-rs—Rust; intended for environments with limited resources and performance requirements.
NASA Interplanetary Overlay Network (ION)—C; intended to be usable in embedded environments including spacecraft flight computers.
DTN Marshal Enterprise (DTNME)—C++-based; enterprise solution; designed as an operational DTN implementation. Currently used in ISS operations. DTNME is a single implementation supporting both BPv6 and BPv7.
NASA BPLib—C; a Bundle Protocol library and associated applications by Goddard Space Flight Center. Intended for general use, particularly in space flight applications, integration with cFS (core Flight System), and other applications where store-and-forward capabilities are needed. It is expected to be used for the first time on the PACE mission.
Research efforts
Various research efforts are currently investigating the issues involved with DTN:
The Delay-Tolerant Networking Research Group.
The Technology and Infrastructure for Developing Regions project at UC Berkeley
The Bytewalla research project at the Royal Institute of Technology, KTH
The KioskNet research project at the University of Waterloo.
The DieselNet research project at the University of Massachusetts Amherst.
The ResiliNets Research Initiative at the University of Kansas and Lancaster University.
The Haggle EU research project.
The Space Internetworking Center EU/FP7 project at the Democritus University of Thrace.
The N4C EU/FP7 research project.
The WNaN DARPA project.
The EMMA and OPTRACOM projects at TU Braunschweig
The DTN at Helsinki University of Technology.
The SARAH project, funded by the French National Research Agency (ANR).
The development of the DoDWAN platform at Université Bretagne Sud.
The CROWD project, funded by the French National Research Agency (ANR).
The PodNet project at KTH Stockholm and ETH Zurich.
Some research efforts look at DTN for the Interplanetary Internet by examining use of the Bundle Protocol in space:
The Saratoga project at the University of Surrey, which was the first to test the bundle protocol in space on the UK-DMC Disaster Monitoring Constellation satellite in 2008.
NASA JPL's Deep Impact Networking (DINET) Experiment on board the Deep Impact/EPOXI spacecraft.
BioServe Space Technologies, one of the first payload developers to adopt the DTN technology, has utilized their CGBA (Commercial Generic Bioprocessing Apparatus) payloads on board the ISS, which provide computational/communications platforms, to implement the DTN protocol.
NASA, ESA Use Experimental Interplanetary Internet to Test Robot From International Space Station
See also
Logistical Networking
Message switching
Store and forward
References
Network architecture
Network protocols | Delay-tolerant networking | [
"Engineering"
] | 2,158 | [
"Network architecture",
"Computer networks engineering"
] |
2,710,800 | https://en.wikipedia.org/wiki/Curvature%20invariant%20%28general%20relativity%29 | In general relativity, curvature invariants are a set of scalars formed from the Riemann, Weyl and Ricci tensors — which represent curvature, hence the name — and possibly operations on them such as contraction, covariant differentiation and dualisation.
Certain invariants formed from these curvature tensors play an important role in classifying spacetimes. Invariants are actually less powerful for distinguishing locally non-isometric Lorentzian manifolds than they are for distinguishing Riemannian manifolds. This means that they are more limited in their applications than for manifolds endowed with a positive definite metric tensor.
Principal invariants
The principal invariants of the Riemann and Weyl tensors are certain quadratic polynomial invariants (i.e., sums of squares of components).
The principal invariants of the Riemann tensor of a four-dimensional Lorentzian manifold are
the Kretschmann scalar K_1 = R_{abcd} R^{abcd}
the Chern–Pontryagin scalar K_2 = *R_{abcd} R^{abcd}, where *R denotes the left dual of the Riemann tensor
the Euler scalar K_3 = *R*_{abcd} R^{abcd}, formed with the double (left and right) dual
(Some authors define the Chern–Pontryagin scalar using the right dual instead of the left dual.)
The first of these was introduced by Erich Kretschmann. The second two names are somewhat anachronistic, but since the integrals of the last two are related to the instanton number and Euler characteristic respectively, they have some justification.
The principal invariants of the Weyl tensor are I_1 = C_{abcd} C^{abcd} and I_2 = C_{abcd} *C^{abcd}.
(Because the left and right duals of the Weyl tensor coincide, there is no need to define a third principal invariant for the Weyl tensor.)
Relation with Ricci decomposition
As one might expect from the Ricci decomposition of the Riemann tensor into the Weyl tensor plus a sum of fourth-rank tensors constructed from the second rank Ricci tensor and from the Ricci scalar, these two sets of invariants are related (in d=4); in particular, the Kretschmann scalar satisfies R_{abcd} R^{abcd} = C_{abcd} C^{abcd} + 2 R_{ab} R^{ab} − (1/3) R².
Relation with Bel decomposition
In four dimensions, the Bel decomposition of the Riemann tensor, with respect to a timelike unit vector field X, not necessarily geodesic or hypersurface orthogonal, consists of three pieces
the electrogravitic tensor E[X]_{ab} = R_{ambn} X^m X^n
the magnetogravitic tensor B[X]_{ab} = *R_{ambn} X^m X^n
the topogravitic tensor L[X]_{ab} = *R*_{ambn} X^m X^n
Because these are all transverse (i.e. projected to the spatial hyperplane elements orthogonal to our timelike unit vector field), they can be represented as linear operators on three-dimensional vectors, or as three by three real matrices. They are respectively symmetric, traceless, and symmetric (6,8,6 linearly independent components, for a total of 20). If we write these operators as E, B, L respectively, the principal invariants of the Riemann tensor are obtained as follows:
the Kretschmann scalar is proportional to the trace of E² + L² − 2 B Bᵀ,
the Chern–Pontryagin scalar is proportional to the trace of B (E − L),
the Euler scalar is proportional to the trace of E L − B².
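As a quick numerical illustration of these trace formulas, the sketch below builds the three combinations from 3×3 matrices standing in for E, B and L. The matrices are arbitrary example data (a diagonal, vacuum-like choice), not derived from any actual metric, and the results are only proportional to the principal invariants:

```python
import numpy as np

# Illustrative 3x3 matrices standing in for the Bel pieces E, B, L of some observer.
# (Arbitrary example data; for a real spacetime these come from the Riemann tensor.)
E = np.diag([-2.0, 1.0, 1.0])     # electrogravitic part (symmetric)
L = np.diag([-2.0, 1.0, 1.0])     # topogravitic part (symmetric)
B = np.zeros((3, 3))              # magnetogravitic part (traceless); zero for a static field

# The three combinations whose traces give the principal Riemann invariants
# (up to constant factors), as described above.
inv1 = np.trace(E @ E + L @ L - 2 * B @ B.T)
inv2 = np.trace(B @ (E - L))
inv3 = np.trace(E @ L - B @ B)

print(inv1, inv2, inv3)   # 12.0, 0.0, 6.0 for this example data
```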
Expression in Newman–Penrose formalism
In terms of the Weyl scalars in the Newman–Penrose formalism, the principal invariants of the Weyl tensor may be obtained by taking the real and imaginary parts of the expression
(But note the minus sign!)
The principal quadratic invariant of the Ricci tensor, , may be obtained as a more complicated expression involving the Ricci scalars (see the paper by Cherubini et al. cited below).
Distinguishing Lorentzian manifolds
An important question related to curvature invariants is when the set of polynomial curvature invariants can be used to (locally) distinguish manifolds. To do this it is necessary to include higher-order invariants involving derivatives of the Riemann tensor, but in the Lorentzian case it is known that there are spacetimes which cannot be distinguished; e.g., the VSI spacetimes, for which all such curvature invariants vanish and which thus cannot be distinguished from flat space. This failure to distinguish Lorentzian manifolds is related to the fact that the Lorentz group is non-compact.
There are still examples of cases when we can distinguish Lorentzian manifolds using their invariants. Examples of such are fully general Petrov type I spacetimes with no Killing vectors, see Coley et al. below. Indeed, it was here found that the spacetimes failing to be distinguished by their set of curvature invariants are all Kundt spacetimes.
See also
Bach tensor, for a sometimes useful tensor generated from the quadratic (Weyl-squared) curvature action via a variational principle.
Carminati-McLenaghan invariants, for a set of polynomial invariants of the Riemann tensor of a four-dimensional Lorentzian manifold which is known to be complete under some circumstances.
Curvature invariant, for curvature invariants in a more general context.
References
See also the eprint version.
Curvature tensors
Tensors in general relativity | Curvature invariant (general relativity) | [
"Physics",
"Engineering"
] | 980 | [
"Tensors",
"Physical quantities",
"Tensor physical quantities",
"Curvature tensors",
"Tensors in general relativity"
] |
2,710,834 | https://en.wikipedia.org/wiki/Shock%20chlorination | Shock chlorination is a process used in many swimming pools, water wells, springs, and other water sources to reduce the bacterial and algal residue in the water. Shock chlorination is performed by mixing a large amount of sodium hypochlorite, which can be in the form of a powder or a liquid such as chlorine bleach, into the water. The common advice is that the amount added must raise the level of chlorine to ten times the level (in parts per million) of chloramines present in the pool water; this is "shocking". A lesser ratio is termed superchlorination. Water that is being shock chlorinated should not be swum in or drunk until the sodium hypochlorite count in the water drops to three ppm or less, which generally takes more than six hours. Commercial sodium hypochlorite should not be mixed with commercial calcium hypochlorite, as there is a risk of explosion. Although properly a verb for superchlorination, "shock" is often misunderstood (through marketing and sales language) to be a unique type of product.
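The ten-times rule above is simple arithmetic once the water volume and the product's available-chlorine content are known. The sketch below illustrates that arithmetic only; the pool volume, chlorine readings and 65% product strength are assumed example values, and it is not dosing guidance:

```python
def shock_dose_grams(volume_litres, combined_chlorine_ppm, free_chlorine_ppm,
                     available_chlorine_fraction):
    """Estimate grams of hypochlorite product needed to raise free chlorine to
    10x the measured chloramine (combined chlorine) level; 1 ppm = 1 mg/L."""
    target_ppm = 10.0 * combined_chlorine_ppm                  # the "shock" target from the text
    needed_ppm = max(target_ppm - free_chlorine_ppm, 0.0)      # extra chlorine still required
    chlorine_mg = needed_ppm * volume_litres                   # total chlorine mass needed, in mg
    return chlorine_mg / 1000.0 / available_chlorine_fraction  # grams of product

# Illustrative only: 50 m^3 pool, 0.5 ppm chloramines, 1.0 ppm free chlorine,
# a granular product with 65% available chlorine (all assumed values).
print(round(shock_dose_grams(50_000, 0.5, 1.0, 0.65), 1), "g")   # ~307.7 g
```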
Drawbacks
While "shocking" pools to reduce the buildup of chloramines works with inorganic, ammonia-based chloramines, in two studies it was found ineffective with the organic chloramines present in all pool water e.g. with creatinine, an organic component in human sweat. Indeed, superchlorination produces free chlorine that reacts with organic contaminants to form a variety of disinfection byproducts (DBPs) which are hazardous to swimmer health e.g. one of the worst DBPs is the noxious and volatile trichloramine (NCl3), well known for irritating the eyes nearby a pool. It has been pointed out that ozone is an excellent alternative, a much more effective oxidizer than chlorine shock.
See also
Water chlorination
Shock Chlorination Procedure by High-Pressure Metering Pump Injection
References
Water treatment
Chlorine | Shock chlorination | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 429 | [
"Water treatment",
"Water pollution",
"Water technology",
"Environmental engineering"
] |
2,711,029 | https://en.wikipedia.org/wiki/Physics%20education | Physics education or physics teaching refers to the education methods currently used to teach physics. The occupation is called physics educator or physics teacher. Physics education research refers to an area of pedagogical research that seeks to improve those methods. Historically, physics has been taught at the high school and college level primarily by the lecture method together with laboratory exercises aimed at verifying concepts taught in the lectures. These concepts are better understood when lectures are accompanied by demonstrations, hands-on experiments, and questions that require students to ponder what will happen in an experiment and why. Students who participate in active learning, for example with hands-on experiments, learn through self-discovery. By trial and error they learn to change their preconceptions about phenomena in physics and discover the underlying concepts. Physics education is part of the broader area of science education.
History
In Ancient Greece, Aristotle wrote what is considered now as the first textbook of physics. Aristotle's ideas were taught unchanged until the Late Middle Ages, when scientists started making discoveries that didn't fit them. For example, Copernicus' discovery contradicted Aristotle's idea of an Earth-centric universe. Aristotle's ideas about motion weren't displaced until the end of the 17th century, when Newton published his ideas.
Today's physics students often think of physics concepts in Aristotelian terms, despite being taught only Newtonian concepts.
Teaching strategies
Teaching strategies are the various techniques used to facilitate the education of students with different learning styles.
The different teaching strategies are intended to help students develop critical thinking and engage with the material. The choice of teaching strategy depends on the concept being taught, and indeed on the interest of the students.
Methods/Approaches for teaching physics
Lecture: Lecturing is one of the more traditional ways of teaching science. Owing to the convenience of this method, and the fact that most teachers are taught by it, it remains popular in spite of certain limitations (compared to other methods, it does little to develop critical thinking and a scientific attitude among students). This method is teacher-centric.
Recitation: Also known as the Socratic method. In this method, the student plays a greater role than they would in a lecture. The teacher asks questions with the aim of prompting the thoughts of the students. This method can be very effective in developing higher order thinking in pupils. To apply this strategy, the students should be partially informed about the content. The efficacy of the recitation method depends largely on the quality of the questions. This method is student-centric.
Demonstration: In this method, the teacher performs certain experiments, which students observe and ask questions about. After the demonstration, the teacher can explain the experiment further and test the students' understanding via questions. This method is an important one, as science is not an entirely theoretical subject.
Lecture-cum-Demonstration: As its name suggests, this is a combination of two of the above methods: lecture and demonstration. The teacher performs the experiment and explains it simultaneously. By this method, the teacher can provide more information in less time. As with the demonstration method, the students only observe; they do not get any practical experience of their own. It is not possible to teach all topics by this method.
Laboratory Activities: Laboratories have students conduct physics experiments and collect data by interacting with physics equipment. Generally, students follow instructions in a lab manual. These instructions often take students through an experiment step-by-step. Typical learning objectives include reinforcing the course content through real-world interaction (similar to demonstrations) and thinking like experimental physicists. Lately, there has been some effort to shift lab activities toward the latter objective by separating from the course content, having students make their own decisions, and calling to question the notion of a "correct" experimental result. Unlike the demonstration method, the laboratory method gives students practical experience performing experiments like professional scientists. However, it often requires a significant amount of time and resources to work properly.
Problem-based learning: A group of 8-10 students and a tutor meet together to study a "case" or trigger problem. One student acts as a chair and one as a scribe to record the session. Students interact to understand the terminology and issues of the problem, discussing possible solutions and a set of learning objectives. The group breaks up for private study then return to share results. The approach has been used in many UK medical schools. The technique fosters independence, engagement, development of communication skill, and integration of new knowledge with real world issues. However, the technique requires more staff per student, staff willing to facilitate rather than lecture, and well designed and documented trigger scenarios. The technique has been shown to be effective in teaching physics.
Research
Physics education research is the study of how physics is taught and how students learn physics. It is a subfield of educational research.
Worldwide
Physics education in Hong Kong
Physics education in the United Kingdom
See also
American Association of Physics Teachers
Balsa wood bridge
Concept inventory
Egg drop competition
Feynman lectures
Harvard Project Physics
Learning Assistant Model
List of physics concepts in primary and secondary education curricula
Mousetrap car
Physical Science Study Committee
Physics First
SAT Subject Test in Physics
Physics Outreach
Science education
Teaching quantum mechanics
Mathematics education
Engineering education
Discipline-based education research
References
Further reading
Miscellaneous:
Education by subject
Occupations | Physics education | [
"Physics"
] | 1,063 | [
"Applied and interdisciplinary physics",
"Physics education"
] |
2,711,320 | https://en.wikipedia.org/wiki/Tidal%20tensor | In Newton's theory of gravitation and in various relativistic classical theories of gravitation, such as general relativity, the tidal tensor represents
tidal accelerations of a cloud of (electrically neutral, nonspinning) test particles,
tidal stresses in a small object immersed in an ambient gravitational field.
The tidal tensor represents the relative acceleration due to gravity of two test masses separated by an infinitesimal distance. The component Φ_ij represents the relative acceleration in the i-th direction produced by a displacement in the j-th direction.
Tidal tensor for a spherical body
The most common example of tides is the tidal force around a spherical body (e.g., a planet or a moon).
Here we compute the tidal tensor for the gravitational field outside an isolated spherically symmetric massive object. According to Newton's gravitational law, the acceleration a at a distance r from a central mass m has magnitude a = m/r² and is directed toward the mass
(to simplify the math, in the following derivations we use the convention of setting the gravitational constant G to one. To calculate the differential accelerations, the results are to be multiplied by G.)
Let us adopt the frame in polar coordinates for our three-dimensional Euclidean space, and consider infinitesimal displacements in the radial direction and in the two mutually orthogonal azimuthal directions, which are given the subscripts 1, 2, and 3 respectively.
We will directly compute each component of the tidal tensor, expressed in this frame.
First, compare the gravitational forces on two nearby objects lying on the same radial line at distances from the central body differing by a distance h: the accelerations differ by m/(r + h)² − m/r² ≈ −2mh/r³.
Because in discussing tensors we are dealing with multilinear algebra, we retain only first order terms, so Φ_11 = −2m/r³. Since there is no acceleration in the θ or φ direction due to a displacement in the radial direction, the other radial terms are zero: Φ_21 = Φ_31 = 0.
Similarly, we can compare the gravitational force on two nearby observers lying at the same radius but displaced by an (infinitesimal) distance h in the θ or φ direction. Using some elementary trigonometry and the small angle approximation, we find that the force vectors differ by a vector tangent to the sphere which has magnitude (m/r²)(h/r) = mh/r³.
By using the small angle approximation, we have ignored all terms of order h², so the tangential components are Φ_22 = Φ_33 = m/r³. Again, since there is no acceleration in the radial direction due to displacements in either of the azimuthal directions, the other azimuthal terms are zero: Φ_12 = Φ_13 = 0.
Combining this information, we find that the tidal tensor is diagonal with frame components Φ_11 = −2m/r³ and Φ_22 = Φ_33 = m/r³; that is, Φ = (m/r³) diag(−2, 1, 1).
This is the Coulomb form characteristic of spherically symmetric central force fields in Newtonian physics.
Hessian formulation
In the more general case where the mass is not a single spherically symmetric central object, the tidal tensor can be derived from the gravitational potential Φ, which obeys the Poisson equation ∇²Φ = 4πμ (with G = 1),
where μ is the mass density of any matter present, and where ∇² is the Laplace operator. Note that this equation implies that in a vacuum solution, the potential is simply a harmonic function.
The tidal tensor is given by the traceless part
of the Hessian Φ_ab = ∂²Φ/∂x^a ∂x^b,
where we are using the standard Cartesian chart for E³, with the Euclidean metric tensor ds² = dx² + dy² + dz².
Using standard results in vector calculus, this is readily converted to expressions valid in other coordinate charts, such as the polar spherical chart, in which the metric takes the form ds² = dr² + r² dθ² + r² sin²θ dφ².
Spherically symmetric field
As an example, we can calculate the tidal tensor for a spherical body using the Hessian. Next, let us plug the gravitational potential into the Hessian. We can convert the expression above to one valid in polar spherical coordinates, or we can convert the potential to Cartesian coordinates before plugging in. Adopting the second course, we have Φ = −m/√(x² + y² + z²), which gives the Hessian components Φ_ab = (m/r³)(δ_ab − 3 x_a x_b / r²).
After a rotation of our frame, which is adapted to the polar spherical coordinates, this expression agrees with our previous result. The easiest way to see this is to set x and y to zero, so that the off-diagonal terms vanish and Φ_11 = Φ_22 = m/r³, Φ_33 = −2m/r³, and then invoke the spherical symmetry.
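The claim above can be checked symbolically. The following SymPy sketch (assuming G = 1, as in the text) computes the Hessian of Φ = −m/r and evaluates it on the z-axis, reproducing the frame components −2m/r³ and m/r³ found earlier:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z, m = sp.symbols('z m', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
phi = -m / r                                   # Newtonian potential, with G = 1

# Hessian of the potential: Phi_ab = d^2 phi / dx^a dx^b
coords = (x, y, z)
H = sp.Matrix(3, 3, lambda a, b: sp.diff(phi, coords[a], coords[b]))

# Evaluate on the z-axis (x = y = 0, so z plays the role of r) and simplify:
on_axis = H.subs({x: 0, y: 0}).applyfunc(sp.simplify)
print(on_axis)   # diagonal matrix (m/z**3, m/z**3, -2*m/z**3), i.e. (m/r^3) diag(1, 1, -2)
```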
In General Relativity
In general relativity, the tidal tensor is generalized by the Riemann curvature tensor. In the weak field limit, the tidal tensor is given by the components R_{a0b0} of the curvature tensor, where the index 0 refers to the observer's time direction.
See also
Tidal force
Stress tensor
References
External links
Tensor physical quantities
Gravity
Tides | Tidal tensor | [
"Physics",
"Mathematics",
"Engineering"
] | 834 | [
"Quantity",
"Tensors",
"Tensor physical quantities",
"Physical quantities"
] |
2,711,446 | https://en.wikipedia.org/wiki/MCM6 | DNA replication licensing factor MCM6 is a protein that in humans is encoded by the MCM6 gene. MCM6 is one of the highly conserved mini-chromosome maintenance proteins (MCM) that are essential for the initiation of eukaryotic genome replication.
Function
The MCM complex consisting of MCM6 (this protein) and MCM2, 4 and 7 possesses DNA helicase activity, and may act as a DNA unwinding enzyme. The hexameric protein complex formed by the MCM proteins is a key component of the pre-replication complex (pre-RC) and may be involved in the formation of replication forks and in the recruitment of other DNA replication related proteins. The phosphorylation of the complex by CDC2 kinase reduces the helicase activity, suggesting a role in the regulation of DNA replication. MCM6 has recently been shown to interact strongly with Cdt1 at defined residues; by mutating these target residues, Wei et al. observed a lack of Cdt1-mediated recruitment of Mcm2-7 to the pre-RC.
Gene
The MCM6 gene, MCM6, is expressed at a very high level. MCM6 contains 18 introns. There are 2 non-overlapping alternative last exons. The transcripts appear to differ by truncation of the 3' end, the presence or absence of 2 cassette exons, and common exons with different boundaries.
MCM6 produces, by alternative splicing, 3 different transcripts, all with introns, putatively encoding 3 different protein isoforms.
MCM6 contains two of the regulatory regions for LCT, the gene encoding the protein lactase, located in two of the MCM6 introns, approximately 14 kb and 22 kb upstream of LCT. A substitution of thymine for cytosine in the first region (at -13910), in particular, has been shown to function in vitro as an enhancer element capable of differentially activating transcription of LCT promoter.
Mutations in these regions are associated with lactose tolerance into adult life: "Two variants were associated with lactase persistence..."
Interactions
MCM6 has been shown to interact with:
CDC45-related protein,
MCM2,
MCM4,
MCM7,
ORC1L,
ORC2L,
ORC4L, and
Replication protein A1.
See also
Mini Chromosome Maintenance
References
Further reading
Proteins | MCM6 | [
"Chemistry"
] | 495 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
2,711,593 | https://en.wikipedia.org/wiki/Well%20control | Well control is the technique used in oil and gas operations such as drilling, well workover and well completion for maintaining the hydrostatic pressure and formation pressure to prevent the influx of formation fluids into the wellbore. This technique involves the estimation of formation fluid pressures, the strength of the subsurface formations and the use of casing and mud density to offset those pressures in a predictable fashion. Understanding pressure and pressure relationships is important in well control.
The aim of oil operations is to complete all tasks in a safe and efficient manner without detrimental environmental effects. This aim can only be achieved if well control is maintained at all times. The understanding of pressure and pressure relationships are important in preventing blowouts by experienced personnel who are able to detect when the well is kicking and take proper and prompt actions.
Fluid pressure
A fluid is any substance that flows; e.g., oil, water, gas and ice are all examples of fluids. Under extreme pressure and temperature, almost anything acts as a fluid. Fluids exert pressure, and this pressure comes from the density and height of the fluid column. Oil companies typically measure density in pounds per gallon (ppg) or kilograms per cubic meter (kg/m3) and pressure in pounds per square inch (psi), bar or pascal (Pa). Pressure increases with fluid density. To find out the amount of pressure a fluid of a known density exerts per unit length, the pressure gradient is used. The pressure gradient is defined as the pressure increase per unit of depth due to density, and it is usually measured in pounds per square inch per foot or bars per meter. It is expressed mathematically as:
pressure gradient (psi/ft) = fluid density (ppg) × 0.052.
The conversion factor used to convert density to pressure gradient is 0.052 in the Imperial system (psi/ft per ppg) and 0.0981 in the metric system (bar/m per g/cm3).
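A minimal sketch of these conversion factors in code; the 10 ppg and 1.2 SG figures are arbitrary example inputs:

```python
def gradient_psi_per_ft(density_ppg):
    """Pressure gradient in psi/ft from fluid density in pounds per gallon (ppg)."""
    return 0.052 * density_ppg

def gradient_bar_per_m(density_sg):
    """Pressure gradient in bar/m from fluid density in specific gravity (g/cm3)."""
    return 0.0981 * density_sg

# Example: a 10 ppg mud, and a fluid of roughly 1.2 SG
print(gradient_psi_per_ft(10.0))   # 0.52 psi/ft
print(gradient_bar_per_m(1.2))     # ~0.118 bar/m
```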
Hydrostatic pressure
Hydro means water, or fluid, that exerts pressure and static means not moving or at rest. Therefore, hydrostatic pressure is the total fluid pressure created by the weight of a column of fluid, acting on any given point in a well. In oil and gas operations, it is represented mathematically as
hydrostatic pressure (psi) = 0.052 × fluid density (ppg) × true vertical depth (ft),
or, equivalently, hydrostatic pressure = pressure gradient (psi/ft) × true vertical depth (ft).
The true vertical depth is the distance that a well reaches below ground. The measured depth is the length of the well including any angled or horizontal sections. Consider two wells, X and Y. Well X has a measured depth of 9,800 ft and a true vertical depth of 9,800 ft while well Y has measured depth of 10,380 ft while its true vertical depth is 9,800 ft. To calculate the hydrostatic pressure of the bottom hole, the true vertical depth is used because gravity acts (pulls) vertically down the hole.
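Because gravity acts vertically, only the true vertical depth enters the calculation. The sketch below works the wells X and Y example, assuming for illustration that both wells contain a 10 ppg fluid (the fluid weight is not given in the text):

```python
def hydrostatic_pressure_psi(mud_weight_ppg, tvd_ft):
    """Hydrostatic pressure (psi) from mud weight (ppg) and TRUE VERTICAL depth (ft)."""
    return 0.052 * mud_weight_ppg * tvd_ft

# Wells X and Y from the text. Measured depth differs (9,800 ft vs 10,380 ft)
# but TVD is 9,800 ft for both, so the bottom-hole hydrostatic pressure is identical.
# The 10 ppg fluid weight is an assumed example value.
for well, md_ft, tvd_ft in [("X", 9_800, 9_800), ("Y", 10_380, 9_800)]:
    print(well, md_ft, "ft MD ->", hydrostatic_pressure_psi(10.0, tvd_ft), "psi")  # 5096 psi each
```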
Formation pressure
Formation pressure is the pressure of the fluid within the pore spaces of the formation rock. This pressure can be affected by the weight of the overburden (rock layers) above the formation, which exerts pressure on both the grains and pore fluids. Grains are solid or rock material, while pores are spaces between grains. If pore fluids are free to move or escape, the grains lose some of their support and move closer together. This process is called consolidation. Depending on the magnitude of the pore pressure, it is described as normal, abnormal or subnormal.
Normal
Normal pore pressure or formation pressure is equal to the hydrostatic pressure of formation fluid extending from the surface to the subsurface formation being considered. In other words, if the structure was opened and allowed to fill a column whose length is equal to the depth of the formation, then the pressure at the bottom of the column is similar to the formation pressure and the pressure at the surface is equal to zero. Normal pore pressure is not constant. Its magnitude varies with the concentration of dissolved salts, type of fluid, gases present and temperature gradient.
When a normally pressured formation is raised toward the surface while prevented from losing pore fluid in the process, it changes from normal pressure (at a greater depth) to abnormal pressure (at a shallower depth). When this happens, and then one drills into the formation, mud weights of up to 20 ppg (2397 kg/m³) may be required for control. This process accounts for many of the shallow, abnormally pressured zones in the world. In areas where faulting is present, salt layers or domes are predicted, or excessive geothermal gradients are known, drilling operations may encounter abnormal pressure.
Abnormal
Abnormal pore pressure is defined as any pore pressure that is greater than the hydrostatic pressure of the formation fluid occupying the pore space. It is sometimes called overpressure or geopressure. An abnormally pressured formation can often be predicted using well history, surface geology, downhole logs or geophysical surveys.
Subnormal
Subnormal pore pressure is defined as any formation pressure that is less than the corresponding fluid hydrostatic pressure at a given depth. Subnormally pressured formations have pressure gradients lower than fresh water, or less than 0.433 psi/ft (0.0979 bar/m). Naturally occurring subnormal pressure can develop when the overburden has been stripped away, leaving the formation exposed at the surface. Depletion of original pore fluids through evaporation, capillary action, and dilution produces hydrostatic gradients below 0.433 psi/ft (0.0979 bar/m). Subnormal pressures may also be induced through depletion of formation fluids. If the formation pressure is less than the hydrostatic pressure, the formation is underpressured; if the formation pressure is greater than the hydrostatic pressure, it is overpressured.
Fracture pressure
Fracture pressure is the amount of pressure it takes to permanently deform the rock structure of a formation. Overcoming formation pressure is usually not sufficient to cause fracturing. If more fluid is free to move, a slow rate of entry into the formation will not cause fractures. If pore fluid cannot move out of the way, fracturing and permanent deformation of the formation can occur. Fracture pressure can be expressed as a gradient (psi/ft), a fluid density equivalent (ppg), or by calculated total pressure at the formation (psi). Fracture gradients normally increase with depth due to increasing overburden pressure. Deep, highly compacted formations can require high fracture pressures to overcome the existing formation pressure and resisting rock structure. Loosely compacted formations, such as those found offshore in deep water, can fracture at low gradients (a situation exacerbated by the fact that some of total "overburden" up the surface is sea water rather than the heavier rock that would be present in an otherwise-comparable land well). Fracture pressures at any given depth can vary widely because of the area's geology.
Bottom hole pressure
Bottom hole pressure is used to represent the sum of all the pressures being exerted at the bottom of the hole. The pressure is imposed on the walls of the hole. The hydrostatic fluid column accounts for most of the pressure, but the pressure to move fluid up the annulus also acts on the walls. In larger diameters, this annular pressure is small, rarely exceeding 200 psi (13.79 bar). In smaller diameters, it can be 400 psi (27.58 bar) or higher. Backpressure or pressure held on the choke further increases bottom hole pressure, which can be estimated by adding up all the known pressures acting in, or on, the annular (casing) side. Bottom hole pressure can be estimated during the following activities:
Static well
If no fluid is moving, the well is static. The bottom hole pressure (BHP) is equal to the hydrostatic pressure (HP) on the annular side. If shut in on a kick, bottom hole pressure is equal to the hydrostatic pressure in the annulus plus the casing (wellhead or surface pressure) pressure.
Normal circulation
During circulation, the bottom hole pressure is equal to the hydrostatic pressure on the annular side plus the annular pressure loss (APL).
Rotating head
During circulating with a rotating head the bottom hole pressure is equal to the hydrostatic pressure on the annular side, plus the annular pressure loss, plus the rotating head backpressure.
Circulating a kick out
Bottom hole pressure is equal to hydrostatic pressure on the annular side, plus annular pressure loss, plus choke (casing) pressure. For subsea, add choke line pressure loss.
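A small sketch summing the annulus-side pressures for the cases above; all the numerical inputs are assumed example values, not figures from the text:

```python
def bottom_hole_pressure(hydrostatic_psi, annular_loss_psi=0.0,
                         surface_or_choke_psi=0.0, choke_line_loss_psi=0.0):
    """Bottom hole pressure as the sum of the annulus-side pressures described above.

    - static well:        hydrostatic only (plus casing pressure if shut in on a kick)
    - normal circulation: hydrostatic + annular pressure loss (APL)
    - rotating head:      hydrostatic + APL + rotating-head backpressure
    - circulating a kick: hydrostatic + APL + choke pressure (+ choke line loss subsea)
    """
    return hydrostatic_psi + annular_loss_psi + surface_or_choke_psi + choke_line_loss_psi

hp = 5_096.0                                                     # e.g. 10 ppg over 9,800 ft TVD
print(bottom_hole_pressure(hp))                                  # static well: 5096 psi
print(bottom_hole_pressure(hp, annular_loss_psi=200))            # normal circulation: 5296 psi
print(bottom_hole_pressure(hp, 200, surface_or_choke_psi=350))   # circulating out a kick: 5646 psi
```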
Formation integrity test
An accurate evaluation of a casing cement job as well as of the formation is important during the drilling and subsequent phases. The information resulting from Formation Integrity Tests (FIT) is used throughout the life of the well and for nearby wells. Casing depths, well control options, formation fracture pressures and limiting fluid weights may be based on this information. To determine formation strength and integrity, a Leak Off Test (LOT) or a Formation Integrity Test (FIT) may be performed.
The FIT is a method of checking the cement seal between the casing and the formation. The LOT determines the pressure and/or fluid weight the test zone below the casing can sustain. The fluid in the well must be circulated clean to ensure it is of a known and consistent density. If mud is used, it must be properly conditioned and gel strengths minimized. The pump used should be a high-pressure, low-volume test pump or a cementing pump. Rig pumps can be used if the rig has electric drives on the mud pumps, and they can be slowly rolled over. If the rig pump must be used and the pump cannot be easily controlled at low rates, then the leak-off technique must be modified. It is a good idea to make a graph of the pressure versus time or volume for all leak-off tests.
The main reasons for performing FIT are:
To investigate the strength of the cement bond around the casing shoe and to ensure that no communication is established with higher formations.
To determine the fracture gradient around the casing shoe and therefore establish the upper limit of the primary well control for the open hole section below the current casing.
To investigate well bore capability to withstand pressure below the casing shoe in order to test the well engineering plan regarding the casing shoe setting depth.
U-tube concepts
It is often helpful to visualize the well as a U-shaped tube. Column Y of the tube represents the annulus, and column X represents the pipe (string) in the well. The bottom of the U-tube represents the bottom of the well. In most cases, fluids create hydrostatic pressure in both the pipe and annulus. Atmospheric pressure can be omitted since it works the same on both columns. If the fluid in both the pipe and annulus are of the same density, hydrostatic pressures would be equal, and the fluid would be static on both sides of the tube. If the fluid in the annulus is heavier, it will exert more pressure downward and will flow into the string, pushing some of the lighter fluid out of the string, causing a flow at the surface. The fluid level then falls in the annulus, equalizing pressures. Given a difference in the hydrostatic pressures, the fluid will try to reach a balanced point. This is called U-tubing, and it explains why there is often a flow from the pipe when making connections. This is often evident when drilling fast because the effective density in the annulus is increased by cuttings.
Equivalent circulating densities
The Equivalent Circulating Density (ECD) accounts for the increase in effective fluid density due to friction, normally expressed in pounds per gallon. Equivalent Circulating Density (when forward circulating) is defined as the apparent fluid density that results from adding annular friction to the actual fluid density in the well:
ECD = MW + Pa / (0.052 × TVD(ft)), or, with the mud weight in specific gravity and the depth in metres, ECD = MW + Pa / (1.4223 × TVD(m))
Where:
ECD = Equivalent circulating density (ppg)
Pa = Difference between annular pressure at surface & annular pressure at depth TVD (psi)
TVD = True vertical depth (ft)
MW = Mud weight (ppg)
When the drilling mud is under static condition (no circulation), pressure at any point is only due to drilling mud weight and is given by:-
Pressure under static condition =
0.052 * Mud weight (in ppg) * TVD (in feet)
During circulation, the pressure applied is due to drilling mud weight and also due to the pressure applied by the mud pumps to circulate the drilling fluid.
Pressure under circulating condition
= Pressure under static condition
+ Pressure due to pumping at that point or pressure loss in the system
If we convert pressure under circulating condition in the annulus to its density equivalent it will be called ECD
Dividing both sides of the above equation by 0.052 × TVD:
ECD = (Pressure under static condition + Annular pressure loss) / (0.052 * TVD)
ECD = MW + Annular pressure loss / (0.052 * TVD)
using (Pressure under static condition = 0.052 * TVD * MW)
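A minimal sketch of the final ECD formula; the mud weight, annular pressure loss and TVD used here are assumed example values:

```python
def ecd_ppg(mud_weight_ppg, annular_pressure_loss_psi, tvd_ft):
    """Equivalent circulating density (ppg): ECD = MW + APL / (0.052 * TVD)."""
    return mud_weight_ppg + annular_pressure_loss_psi / (0.052 * tvd_ft)

# Illustrative values: 10 ppg mud, 300 psi annular pressure loss, 9,800 ft TVD.
print(round(ecd_ppg(10.0, 300.0, 9_800), 2), "ppg")   # ~10.59 ppg
```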
Pipe surge/swab
During trips (up or down) the drill string acts as a large piston. When moving down, it increases the pressure below the drill string and forces drilling fluid into the formation; this is termed surge. Similarly, while moving up, a low-pressure zone is created below the drill string, which pulls formation fluid into the wellbore; this is called swab.
The total pressure acting on the wellbore is affected by pipe movement upwards or downwards. Tripping pipe into and out of a well is another common operation during completions and workovers. Unfortunately, statistics indicate that most kicks occur during trips. Therefore, understanding the basic concepts of tripping is a major concern in completion/workover operations.
Downward movement of tubing (tripping in) creates a pressure that is exerted on the bottom of a well. As the tubing is entering a well, the fluid in the well must move upward to exit the volume consumed by the tubing. The combination of the downward movement of the tubing and the upward movement of the fluid (or piston effect) results in an increase in pressure throughout the well. This increase in pressure is commonly called Surge pressure.
Upward movement of the tubing (tripping out) also affects the pressure at the bottom of the well. When pulling pipe, the fluid must move downward and replace the volume occupied by the tubing. The net effect of the upward and downward movements creates a decrease in bottom hole pressure. This decrease in pressure is referred to as Swab pressure. Both surge and swab pressures are affected by:
Velocity of the pipe, or tripping speed
Fluid density
Fluid viscosity
Fluid gel strength
Well bore geometry (annular clearance between tools and casing, tubing open-ended or closed off)
The faster pipe moves, the greater the surge and swab effects. The greater the fluid density, viscosity and gel strength, the greater the surge and swab. Finally, the downhole tools such as packers and scrapers, which have small annular clearance, also increase surge and swab effects. Determination of actual surge and swab pressures can be accomplished with the use of WORKPRO and DRILPRO calculator programs or hydraulics manuals.
Differential pressure
In well control, differential pressure is defined as the difference between the formation pressure and the bottom hole hydrostatic pressure. These are classified as overbalanced, underbalanced or balanced.
Overbalanced – The hydrostatic pressure exerted on the bottom of the hole is greater than the formation pressure. i.e. HP > FP
Underbalanced – The hydrostatic pressure exerted on the bottom of the hole is less than the formation pressure. i.e. HP < FP
Balanced – The hydrostatic pressure exerted on the bottom of the hole is equal to the formation pressure. i.e. HP = FP
Cuttings change: shape, size, amount, type
Cuttings are rock fragments chipped, scraped or crushed away from a formation by the action of the drill bit. The size, shape, and amount of cuttings depend largely on formation type, weight on the bit, bit sharpness and the pressure differential (formation versus fluid hydrostatic pressures). The size of the cuttings usually decreases as the bit dulls during drilling if the weight on bit, formation type and the pressure differential remain constant. However, if the pressure differential changes (formation pressure increases), even a dull bit could cut more effectively, and the size, shape, and amount of cuttings could increase.
Kick
A kick is defined as an undesirable influx of formation fluid into the wellbore. If left unchecked, a kick can develop into a blowout (an uncontrolled influx of formation fluid into the wellbore). Failing to control a kick leads to lost operating time, loss of the well and, quite possibly, the loss of the rig and the lives of personnel.
Causes
Once the hydrostatic pressure is less than the formation pore pressure, formation fluid can flow into the well. This can happen when one or a combination of the following occurs:
Improper hole fill up
Insufficient mud density
Swabbing/surging
Lost circulation
Abnormal formation pressure
Gas cut mud
Poor well planning
Improper hole fill up
When tripping out of the hole, the volume of the removed pipe results in a corresponding decrease in the wellbore fluid. Whenever the fluid level in the hole decreases, the hydrostatic pressure that it exerts also decreases, and if the hydrostatic pressure falls below the formation pore pressure, the well may flow. Therefore, the hole must be filled to maintain sufficient hydrostatic pressure to control formation pressure. During tripping, the pipe could be pulled dry or wet depending on the conditions. The API 7G illustrates the methodology for calculating accurate pipe displacement and gives the relevant charts and tables. The volume to fill the well when tripping dry pipe out is:
fill volume (bbl) = pipe displacement (bbl/ft) × length of pipe pulled (ft).
The volume to fill the well when tripping wet pipe out is:
fill volume (bbl) = [pipe displacement (bbl/ft) + pipe capacity (bbl/ft)] × length of pipe pulled (ft).
In some wells, monitoring fill-up volumes on trips can be complicated by loss through perforations. The wells may stand full of fluid initially, but over time the fluid seeps into the reservoir. In such wells, the fill-up volume always exceeds the calculated or theoretical volume of the pipe removed from the well. In some fields, wells have low reservoir pressures and will not support a full column of fluid. In these wells, filling the hole with fluid is essentially impossible unless some sort of bridging agent is used to temporarily bridge off the subnormally pressured zone. The common practice is to pump the theoretical fill-up volume while pulling out of the well.
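In practice the swab check described here amounts to comparing the theoretical fill volume with what the hole actually takes. A sketch of that comparison, with assumed pipe displacement, trip length and gauge reading:

```python
def expected_fill_bbl(length_pulled_ft, displacement_bbl_per_ft,
                      capacity_bbl_per_ft=0.0, wet=False):
    """Barrels of fluid needed to replace the pipe pulled from the well.

    Dry pipe: displacement only.  Wet pipe: displacement + internal capacity,
    because the fluid inside the pipe leaves the well with it."""
    per_ft = displacement_bbl_per_ft + (capacity_bbl_per_ft if wet else 0.0)
    return per_ft * length_pulled_ft

# Illustrative trip check (assumed numbers): ~930 ft of pipe pulled dry.
theoretical = expected_fill_bbl(930, displacement_bbl_per_ft=0.0075)   # ~7.0 bbl
actual = 5.5   # barrels actually pumped to fill the hole (hypothetical gauge reading)
if actual < theoretical:
    print(f"possible swabbed influx: hole took {theoretical - actual:.1f} bbl less than expected")
```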
Insufficient mud (fluid) density
The mud in the wellbore must exert enough hydrostatic pressure to equal the formation pore pressure. If the fluid's hydrostatic pressure is less than the formation pressure, the well can flow. The most common reason for insufficient fluid density is drilling into unexpected abnormally pressured formations. This situation usually arises when unpredicted geological conditions are encountered, such as drilling across a fault that abruptly changes the formation being drilled.
Mishandling mud at the surface accounts for many instances of insufficient fluid weight. Examples include opening the wrong valve on the pump suction manifold and allowing a tank of lightweight fluid to be pumped; bumping the water valve so more water is added than intended; washing off shale shakers; or clean-up operations. All of these can affect mud weight.
Swabbing /Surging
Swabbing results from the upward movement of pipe in a well and causes a decrease in bottom hole pressure. In some cases, the bottom hole pressure reduction can be large enough to cause the well to go underbalanced and allow formation fluids to enter the wellbore. The initial swabbing action compounded by the reduction in hydrostatic pressure (from formation fluids entering the well) can lead to a significant reduction in bottom hole pressure and a larger influx of formation fluids. Therefore, early detection of swabbing on trips is critical to minimizing the size of a kick.
Many wellbore conditions increase the likelihood of swabbing on a trip. Swabbing (piston) action is enhanced when the pipe is pulled too fast. Poor fluid properties, such as high viscosity and gel strengths, also increase the chances of swabbing a well in. Additionally, large outside diameter (OD) tools (packers, scrapers, fishing tools, etc.) enhance the piston effect.
These conditions need to be recognized in order to decrease the likelihood of swabbing a well in during completion/workover operations. As mentioned earlier, there are several computer and calculator programs that can estimate surge and swab pressures. Swabbing is detected by closely monitoring hole fill-up volumes during trips. For example, if three barrels of steel (tubing) are removed from the well and it takes only two barrels of fluid to fill the hole, then a one barrel kick has probably been swabbed into the wellbore. Special attention should be paid to hole fill-up volumes since statistics indicate that most kicks occur on trips.
Lost circulation
Another cause of kick during completion/workover operations is lost circulation. Loss of
circulation leads to a drop of both the fluid level and hydrostatic pressure in a well. If the
hydrostatic pressure falls below the reservoir pressure, the well kicks. Three main causes of lost circulation are:
Excessive pressure overbalance
Excessive surge pressure
Poor formation integrity
Abnormal pressure
When drilling a wildcat or exploratory well (where formation pressures are often not known accurately), the bit may suddenly penetrate an abnormally pressured formation; the hydrostatic pressure of the mud then becomes less than the formation pressure and causes a kick.
Gas cut mud
When gas enters the mud and is circulated to the surface, it expands and reduces the hydrostatic pressure, which can be enough to allow a kick. Although the mud density is reduced considerably at the surface, the hydrostatic pressure is usually not reduced significantly, since the gas expansion occurs near the surface and not at the bottom.
Poor well planning
Another cause of kicks is poor planning. The mud and casing programs bear on well control. These programs must be flexible enough to allow progressively deeper casing strings to be set; otherwise a situation may arise where it is not possible to control kicks or lost circulation.
Methods
During drilling, kicks are usually killed using the Driller's, Engineer's or a hybrid method called Concurrent, while forward circulating. The choice will depend on:
the amount and type of kick fluids in the well
the rig's equipment capabilities
the minimum fracture pressure in the open hole
the drilling and operating companies well control policies.
For workover or completion operations, other methods are often used. Bullheading is a common way to kill a well during workover and completion operations but is not often used while drilling. Reverse circulation is another kill method used for workovers that is not used for drilling.
See also
Blowout (well drilling)
Blowout preventer
Oil well
Oil well control
References
Oil wells
Petroleum production | Well control | [
"Chemistry"
] | 4,704 | [
"Petroleum technology",
"Oil wells"
] |
2,712,280 | https://en.wikipedia.org/wiki/Solar%20desalination | Solar desalination is a desalination technique powered by solar energy. The two common methods are direct (thermal) and indirect (photovoltaic).
History
Solar distillation has been used for thousands of years. Early Greek mariners and Persian alchemists produced both freshwater and medicinal distillates. Solar stills were the first method used on a large scale to convert contaminated water into a potable form.
In 1870 the first US patent was granted for a solar distillation device to Norman Wheeler and Walton Evans. Two years later in Las Salinas, Chile, Swedish engineer Charles Wilson began building a solar distillation plant to supply freshwater to workers at a saltpeter and silver mine. It operated continuously for 40 years and distilled an average of 22.7 m3 of water a day using the effluent from mining operations as its feed water.
Solar desalination in the United States began in the early 1950s when Congress passed the Conversion of Saline Water Act, which led to the establishment of the Office of Saline Water (OSW) in 1955. OSW's main function was to administer funds for desalination research and development projects. One of five demonstration plants was located in Daytona Beach, Florida. Many of the projects were aimed at solving water scarcity issues in remote desert and coastal communities. In the 1960s and 1970s several distillation plants were constructed on the Greek isles with capacities ranging from 2000 to 8500 m3/day. In 1984 a plant was constructed in Abu-Dhabi with a capacity of 120 m3/day that is still in operation. In Italy, an open source design called "the Eliodomestico" by Gabriele Diamanti was developed for personal use, costing about $50. Of the estimated 22 million m3 of freshwater produced daily through desalination worldwide, less than 1% uses solar energy. The prevailing methods of desalination, MSF and RO, are energy-intensive and rely heavily on fossil fuels. Because of inexpensive methods of freshwater delivery and abundant low-cost energy resources, solar distillation has been viewed as cost-prohibitive and impractical. It is estimated that desalination plants powered by conventional fuels consume the equivalent of 203 million tons of fuel a year.
Methods
Solar desalination is a technique that harnesses solar energy to convert saline water into fresh water, making it suitable for human consumption and irrigation. The process can be categorized based on the type of solar energy source utilized. In direct solar desalination, saline water absorbs solar energy and evaporates, leaving behind salt and other impurities. An example of this is solar stills, where an enclosed environment allows for the collection and condensation of pure water vapor. On the other hand, indirect solar desalination involves the use of solar collectors that capture and transfer solar energy to saline water. This energy is then used to power desalination processes such as Humidification-Dehumidification (HDH) and diffusion-driven methods.
Direct
In the direct (distillation) method, a solar collector is coupled with a distilling mechanism. Solar stills of this type are described in survival guides, provided in marine survival kits, and employed in many small desalination and distillation plants.
Water production is proportional to the area of the solar surface and solar incidence angle and has an average estimated value of . Because of this proportionality and the relatively high cost of property and material for construction, distillation tends to favor plants with production capacities less than .
Single-effect
This uses the same process as rainfall. A transparent cover encloses a pan where saline water is placed. The enclosure traps solar energy, evaporating the saline water. The vapor condenses on the inner face of the sloping transparent cover, leaving behind salts, inorganic and organic components and microbes.
The direct method achieves values of 4-5 L/m2/day and efficiency of 30-40%. Efficiency can be improved to 45% by using a double slope or an additional condenser.
Types of Stills
Wick Still
In a wick still, feed water flows slowly through a porous radiation-absorbing pad (the wick). This requires less water to be heated and makes it easier to tilt the still towards the sun, which saves time and achieves higher temperatures.
Diffusion Still
A diffusion still is composed of a hot storage tank coupled to a solar collector and the distillation unit. Heating is produced by the thermal diffusion between them.
Improving Productivity
Increasing the internal temperature using an external energy source can improve productivity.
Limitations
Direct methods use thermal energy to vaporize the seawater as part of a 2-phase separation. Such methods are relatively simple and require little space so they are normally used on small systems. However, they have a low production rate due to low operating temperature and pressure, so they are appropriate for systems that yield 200 m3/day.
Indirect
Indirect desalination employs a solar collection array, consisting of photovoltaic and/or fluid-based thermal collectors, and a separate conventional desalination plant. Many arrangements have been analyzed, experimentally tested and deployed. Categories include multiple-effect humidification (MEH), multi-stage flash distillation (MSF), multiple-effect distillation (MED), multiple-effect boiling (MEB), humidification–dehumidification (HDH), reverse osmosis (RO), and freeze-effect distillation.
Large solar desalination plants typically use indirect methods. Indirect solar desalination processes are categorized into single-phase processes (membrane based) and phase-change processes (non-membrane based). Single-phase desalination uses photovoltaics to produce electricity that drives pumps. Phase-change (or multi-phase) solar desalination is not membrane-based.
Indirect single-phase
Indirect solar desalination systems using photovoltaic (PV) panels and reverse osmosis (RO) have been in use since 2009. Output by 2013 reached per hour per system, and per day per square metre of PV panel. Utirik Atoll in the Pacific Ocean has been supplied with fresh water this way since 2010.
Single-phase desalination processes include reverse osmosis and membrane distillation, where membranes filter water from contaminants. As of 2014 reverse osmosis (RO) made up about 52% of indirect methods. Pumps push salt water through RO modules at high pressure. RO systems depend on pressure differences. A pressure of 55–65 bar is required to purify seawater. An average of 5 kWh/m3 of energy is typically required to run a large-scale RO plant. Membrane distillation (MD) utilizes a pressure difference across the two sides of a microporous hydrophobic membrane. Fresh water can be extracted through four MD methods: Direct Contact (DCMD), Air Gap (AGMD), Sweeping Gas (SGMD) and Vacuum (VMD). Estimated water costs of $15/m3 to $18/m3 have been reported for medium-scale solar-MD plants. Energy consumption ranges from 200 to 300 kWh/m3.
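As a rough illustration of the figures above, the following minimal Python sketch converts the quoted specific energy of a large-scale RO plant into a daily energy demand and expresses the 55–65 bar operating pressure in SI units; the plant capacity is an assumed example value, not from the text.

capacity_m3_per_day = 1000.0       # assumed example plant capacity
specific_energy_kwh_per_m3 = 5.0   # typical large-scale RO figure quoted above

daily_energy_kwh = capacity_m3_per_day * specific_energy_kwh_per_m3
print(f"daily energy demand: {daily_energy_kwh:.0f} kWh")  # 5000 kWh for this example

# 55-65 bar operating pressure expressed in pascals (1 bar = 1e5 Pa)
for bar in (55, 65):
    print(f"{bar} bar = {bar * 1e5:.0f} Pa")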
Indirect multi-phase
Phase-change (or multi-phase) solar desalination includes multi-stage flash, multi-effect distillation (MED), and thermal vapor compression (VC). It is accomplished by using phase change materials (PCMs) to maximize latent heat storage and high temperatures. Operating temperatures range from 80–120 °C for MSF, 40–100 °C for VC, and 50–90 °C for the MED method. Multi-stage flash (MSF) requires seawater to travel through a series of vacuumed reactors held at successively lower pressures. Heat is added to capture the latent heat of the vapor. As seawater flows through the reactors, steam is collected and condensed to produce fresh water. In multi-effect distillation (MED), seawater flows through successively lower-pressure vessels and reuses latent heat to evaporate the seawater for condensation. MED desalination requires less energy than MSF due to higher efficiency in thermodynamic transfer rates.
Multi-stage flash distillation (MSF)
The multi-stage flash (MSF) method is a widely used technology for desalination, particularly in large-scale seawater desalination plants. It is based on the principle of utilizing the evaporation and condensation process to separate saltwater from freshwater.
In the MSF desalination process, seawater is heated and subjected to a series of flashings or rapid depressurizations in multiple stages. Each stage consists of a series of heat exchangers and flash chambers. The process typically involves the following steps:
Preheating: Seawater is initially preheated to reduce the energy required for subsequent stages. The preheated seawater then enters the first stage of the MSF system.
Flashing: In each stage, the preheated seawater is passed through a flash chamber, where its pressure is rapidly reduced. This sudden drop in pressure causes the water to flash into steam, leaving behind concentrated brine with high salt content.
Condensation: The steam produced in the flash chamber is then condensed on the surfaces of heat exchanger tubes. The condensation occurs as the steam comes into contact with colder seawater or with tubes carrying cool freshwater from previous stages.
Collection and extraction: The condensed freshwater is collected as product water. It is then extracted from the system for storage and distribution, while the remaining brine is removed and disposed of properly.
Reheating and repetition: The brine from each stage is reheated, usually by steam extracted from the turbine that drives the process, and then introduced into the subsequent stage. This process is repeated in subsequent stages, with the number of stages determined by the desired level of freshwater production and the overall efficiency of the system.
The multi-stage flash (MSF) method, known for its high energy efficiency through the utilization of latent heat of vaporization during the flashing process, accounted for approximately 45% of the world's desalination capacity and a dominant 93% of thermal systems as recorded in 2009.
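A very rough way to see how the staged flashing described above works is a per-stage energy balance: when brine enters a chamber held below its saturation pressure, the fraction that flashes to vapour is approximately cp·ΔT / h_fg. The minimal Python sketch below uses that approximation; the inlet temperature, per-stage temperature drop and number of stages are assumed illustrative values, and this is a simplification, not a plant design model.

cp = 4.0        # kJ/(kg*K), approximate heat capacity of brine
h_fg = 2300.0   # kJ/kg, approximate latent heat of vaporisation
t_in = 110.0    # deg C, assumed brine temperature entering stage 1
dt_stage = 3.0  # deg C, assumed temperature drop per stage
n_stages = 20   # assumed number of stages

brine = 1.0     # kg of feed brine (per-unit basis)
distillate = 0.0
for stage in range(n_stages):
    flashed = brine * cp * dt_stage / h_fg  # mass flashed to vapour in this stage
    distillate += flashed
    brine -= flashed

print(f"fresh water recovered: {distillate:.3f} kg per kg of feed")  # roughly 0.1 kg/kg here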
In Margherita di Savoia, Italy a 50–60 m3/day MSF plant uses a salinity gradient solar pond. In El Paso, Texas a similar project produces 19 m3/day. In Kuwait a MSF facility uses parabolic trough collectors to provide solar thermal energy to produce 100 m3 of fresh water a day. And in Northern China an experimental, automatic, unmanned operation uses 80 m2 of vacuum tube solar collectors coupled with a 1 kW wind turbine (to drive several small pumps) to produce 0.8 m3/day.
MSF solar distillation has an output capacity of 6–60 L/m2/day versus the 3–4 L/m2/day standard output of a solar still. MSF experiences poor efficiency during start-up or low-energy periods. Achieving the highest efficiency requires controlled pressure drops across each stage and a steady energy input. As a result, solar applications require some form of thermal energy storage to deal with cloud interference, varying solar patterns, nocturnal operation, and seasonal temperature changes. As thermal energy storage capacity increases, a more continuous process can be achieved and production rates approach maximum efficiency.
Indirect Solar Desalination by Humidification/Dehumidification
Indirect solar desalination by a form of humidification/dehumidification is in use in the seawater greenhouse.
Freezing
Although it has only been used on demonstration projects, this indirect method based on crystallization of the saline water has the advantage of requiring little energy. Since the latent heat of fusion of water is 6.01 kJ/mol and the latent heat of vaporization at 100 °C is 40.66 kJ/mol, it should be cheaper in terms of energy cost. Furthermore, the corrosion risk is lower. There is, however, a disadvantage related to the difficulty of mechanically moving mixtures of ice and liquid. The process has not been commercialized yet due to cost and difficulties with refrigeration systems.
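The energy argument can be made concrete by converting the two latent heats from a molar to a mass basis. A minimal Python sketch, using the values quoted above and the molar mass of water:

molar_mass_water = 0.01802  # kg/mol
h_fusion = 6.01             # kJ/mol, latent heat of fusion (from the text)
h_vap = 40.66               # kJ/mol, latent heat of vaporization at 100 deg C (from the text)

print(f"fusion:       {h_fusion / molar_mass_water:.0f} kJ/kg")   # ~334 kJ/kg
print(f"vaporization: {h_vap / molar_mass_water:.0f} kJ/kg")      # ~2256 kJ/kg
print(f"ratio: {h_vap / h_fusion:.1f}x more energy to vaporize")  # ~6.8x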
The most studied approach is refrigeration freezing. A refrigeration cycle is used to cool the water stream to form ice; the ice crystals are then separated and melted to obtain fresh water. A recent example of this solar-powered process is the unit constructed in Saudi Arabia by Chicago Bridge and Iron Inc. in the late 1980s, which was shut down because of its inefficiency.
Nevertheless, a recent study of saline groundwater concluded that a plant capable of producing 1 million gallons per day would produce water at a cost of $1.30/1000 gallons. If this holds, the process would be cost-competitive with reverse osmosis.
Problems with thermal systems
Inherent design problems face thermal solar desalination projects. First, the system's efficiency is governed by competing heat and mass transfer rates during evaporation and condensation.
Second, the heat of condensation is valuable because it takes large amounts of solar energy to evaporate water and generate saturated, vapor-laden hot air. This energy is, by definition, transferred to the condenser's surface during condensation. With most solar stills, this heat is emitted as waste heat.
Solutions
Heat recovery allows the same heat input to be reused, yielding several times as much water.
One solution is to reduce the pressure within the reservoir. This can be accomplished using a vacuum pump, and significantly decreases the required heat energy. For example, water at a pressure of 0.1 atmospheres boils at about 46 °C rather than 100 °C.
Solar humidification–dehumidification
The solar humidification–dehumidification (HDH) process (also called the multiple-effect humidification–dehumidification process, solar multistage condensation evaporation cycle (SMCEC) or multiple-effect humidification (MEH)) mimics the natural water cycle on a shorter time frame by distilling water. Thermal energy produces water vapor that is condensed in a separate chamber. In sophisticated systems, waste heat is minimized by collecting the heat from the condensing water vapor and pre-heating the incoming water source.
Single-phase solar desalination
In indirect, or single phase, solar-powered desalination, two systems are combined: a solar energy collection system (e.g. photovoltaic panels) and a desalination system such as reverse osmosis (RO). The main single-phase processes, generally membrane processes, consist of RO and electrodialysis (ED). Single phase desalination is predominantly accomplished with photovoltaics that produce electricity to drive RO pumps. Over 15,000 desalination plants operate around the world. Nearly 70% use RO, yielding 44% of desalination. Alternative methods that use solar thermal collection to provide mechanical energy to drive RO are in development.
Reverse osmosis
RO is the most common desalination process due to its efficiency compared to thermal desalination systems, despite the need for water pre-treatment. Economic and reliability considerations are the main challenges to improving PV powered RO desalination systems. However, plummeting PV panel costs make solar-powered desalination more feasible.
Solar-powered RO desalination is common in demonstration plants due to the modularity and scalability of both PV and RO systems. An economic analysis that explored an optimisation strategy of PV-powered RO reported favorable results.
PV converts solar radiation into direct-current (DC) electricity, which powers the RO unit. The intermittent nature of sunlight and its variable intensity throughout the day complicates PV efficiency prediction and limits night-time desalination. Batteries can store solar energy for later use. Similarly, thermal energy storage systems ensure constant performance after sunset and on cloudy days.
Batteries allow continuous operation. Studies have indicated that intermittent operations can increase biofouling.
Batteries remain expensive and require ongoing maintenance. Also, storing and retrieving energy from the battery lowers efficiency.
Reported average cost of RO desalination is US$0.56/m3. Using renewable energy, that cost could increase up to US$16/m3. Although renewable energy costs are greater, their use is increasing.
Electrodialysis
Both electrodialysis (ED) and reverse electrodialysis (RED) use selective ion transport through ion exchange membranes (IEMs) due either to the influence of concentration difference (RED) or electrical potential (ED).
In ED, an electrical force is applied to the electrodes; the cations travel toward the cathode and the anions toward the anode. Each exchange membrane allows the passage of only its permeable ion type (cation or anion), so with this arrangement diluted and concentrated salt solutions are placed in the spaces between the membranes (channels). The configuration of this stack can be either horizontal or vertical. The feed water passes in parallel through all the cells, providing a continuous flow of permeate and brine. Although this is a well-known process, electrodialysis is not commercially suited for seawater desalination, because it can be used only for brackish water (TDS < 1000 ppm). Due to the complexity of modeling ion transport phenomena in the channels, performance can be affected by the non-ideal behavior of the exchange membranes.
The basic ED process could be modified and turned into RED, in which the polarity of the electrodes changes periodically, reversing the flow through the membranes. This limits the deposition of colloidal substances, which makes this a self-cleaning process, almost eliminating the need for chemical pre-treatment, making it economically attractive for brackish water.
The use of ED systems began in 1954, while RED was developed in the 1970s. These processes are used in over 1100 plants worldwide. The main advantage of PV in desalination plants is its suitability for small-scale plants. One example is on Oshima Island (Nagasaki), Japan, which has operated since 1986 with 390 PV panels producing 10 m3/day with total dissolved solids (TDS) of about 400 ppm.
See also
Point Paterson Desalination Plant
References
External links
Water treatment
Water technology
Water conservation
Water desalination | Solar desalination | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,743 | [
"Water desalination",
"Water treatment",
"Water pollution",
"Environmental engineering",
"Water technology"
] |
2,713,504 | https://en.wikipedia.org/wiki/Porous%20silicon | Porous silicon (abbreviated as "PS" or "pSi") is a form of the chemical element silicon that has introduced nanopores in its microstructure, rendering a large surface to volume ratio in the order of 500 m2/cm3.
History
Porous silicon was discovered by accident in 1956 by Arthur Uhlir Jr. and Ingeborg Uhlir at the Bell Labs in the U.S. At the time, the Uhlirs were in the process of developing a technique for polishing and shaping the surfaces of silicon and germanium. However, it was found that under several conditions a crude product in the form of a thick black, red or brown film was formed on the surface of the material. At the time, the findings were not taken further and were only mentioned in Bell Lab's technical notes.
Despite the discovery of porous silicon in the 1950s, the scientific community was not interested in porous silicon until the late 1980s. At the time, Leigh Canham – while working at the Defence Research Agency in England – reasoned that the porous silicon may display quantum confinement effects. The intuition was followed by successful experimental results published in 1990. In the published experiment, it was revealed that silicon wafers can emit light if subjected to electrochemical and chemical dissolution.
The published result stimulated the interest of the scientific community in its non-linear optical and electrical properties. The growing interest was evidenced in the number of published work concerning the properties and potential applications of porous silicon. In an article published in 2000, it was found that the number of published work grew exponentially in between 1991 and 1995.
In 2001, a team of scientists at the Technical University of Munich inadvertently discovered that hydrogenated porous silicon reacts explosively with oxygen at cryogenic temperatures, releasing several times as much energy as an equivalent amount of TNT, at a much greater speed. (An abstract of the study can be found below.) Explosion occurs because the oxygen, which is in a liquid state at the necessary temperatures, is able to oxidize through the porous molecular structure of the silicon extremely rapidly, causing a very quick and efficient detonation. Although hydrogenated porous silicon would probably not be effective as a weapon, due to its functioning only at low temperatures, other uses are being explored for its explosive properties, such as providing thrust for satellites.
Fabrication of porous silicon
Anodization and stain-etching are the two most common methods used for fabrication of porous silicon; however, there are almost twenty other methods to fabricate this material. Drying and surface modification might be needed afterwards. If anodization in an aqueous solution is used to form microporous silicon, the material is commonly treated in ethanol immediately after fabrication, to avoid damage to the structure that results due to the stresses of the capillary effect of the aqueous solution.
Anodization
One method of introducing pores in silicon is through the use of an anodization cell. A possible anodization cell is made of Teflon and employs a platinum cathode and a crystalline Si wafer anode immersed in hydrogen fluoride (HF) electrolyte. Recently, inert diamond cathodes were used to avoid metallic impurities in the electrolyte and inert diamond anodes form an improved electrical back plate contact to the silicon wafers. Corrosion of the anode is produced by running electric current through the cell. It is noted that etching with constant DC is usually implemented to ensure steady tip-concentration of HF resulting in a more homogeneous porous layer, while pulsed current is more appropriate for the formation of thick PS layers with thickness greater than 50 μm. Pore direction is governed by crystal orientation. In (100)-cut Si the pores are oriented perpendicular to the wafer's surface.
It was noted by Halimaoui that hydrogen evolution occurs during the formation of porous silicon.
When purely aqueous HF solutions are used for the PS formation, the hydrogen bubbles stick to the surface and induce lateral and in-depth inhomogeneity.
The hydrogen evolution is normally treated with absolute ethanol at a concentration exceeding 15%. It was found that the introduction of ethanol eliminates the hydrogen and ensures complete infiltration of the HF solution within the pores. As a result, the uniformity of the porosity and thickness distribution is improved.
Stain etching
It is possible to obtain porous silicon through stain-etching with hydrofluoric acid, nitric acid and water. A publication in 1957 revealed that stain films can be grown in dilute solutions of nitric acid in concentrated hydrofluoric acid. Porous silicon formation by stain-etching is particularly attractive because of its simplicity and the presence of readily available corrosive reagents; namely nitric acid (HNO3) and hydrogen fluoride (HF). Furthermore, stain-etching is useful if one needs to produce a very thin porous Si films. A publication in 1960 by R. J. Archer revealed that it is possible to create stain films as thin as 25 Å through stain-etching with HF-HNO3 solution.
Bottom-Up Synthesis
Porous silicon can be synthesized chemically from silicon tetrachloride, using self-forming salt byproducts as templates for pore formation. The salt templates are later removed with water.
Drying of porous silicon
Porous silicon is systematically prone to cracking when the water is evaporated. The cracks are particularly evident in thick or highly porous silicon layers. The origin of the cracks has been attributed to the large capillary stress arising from the minute size of the pores. In particular, it is known that cracks appear for porous silicon samples thicker than a certain critical value. Bellet concluded that it was impossible to avoid cracking in thick porous silicon layers under normal evaporation conditions. Hence, several techniques have been developed to minimize the risk of cracks forming during drying.
Supercritical drying
Supercritical drying is reputed to be the most efficient drying technique but is rather expensive and difficult to implement. It was first implemented by Canham in 1994 and involves heating the pore liquid above its critical point so that no liquid–vapour interface forms, avoiding interfacial tension.
Freeze drying
The freeze-drying procedure was first documented around 1996. After the formation of porous silicon, the sample is frozen at a temperature of about 200 K and sublimed under vacuum.
Pentane drying
The technique uses pentane as the drying liquid instead of water. In doing so the capillary stress is reduced because pentane has a lower surface tension than water.
Slow evaporation
The slow-evaporation technique can be implemented following water or ethanol rinsing. It was found that slow evaporation decreased the trap density.
Physical properties of porous silicon
Physical parameters describing PS are pore diameter, pore density and thickness of the porous layer. During formation of a porous silicon layer by anodization of a Si wafer, these parameters can be controlled through the Si resistivity, HF concentration, current density and etching time. It is possible to create several porous layers with different pore densities and pore diameters on the same substrate by etching with different current densities.
Porosity
Porosity is defined as volume fraction of voids within the PS layer and can be determined easily by weight measurement. The porosity of PS may range from 4% for macroporous layers to 95% for mesoporous layers. A study by Canham in 1995 found that "a 1 μm thick layer of high porosity silicon completely dissolved within a day of in-vitro exposure to a simulated body fluid". It was also found that a silicon wafer with medium to low porosity displayed more stability. Hence, the porosity of PS is chosen according to its potential application areas. The porosity of PS is a macroscopic parameter and doesn’t yield any information regarding the microstructure of the layer. It is proposed that the properties of a sample are more accurately predicted if the pore size and pore distribution within the sample can be obtained.
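The text notes that porosity can be determined easily by weight measurement. A commonly used gravimetric relation (not spelled out in the text, so treat the exact formula as an assumption here) is porosity = (m1 − m2)/(m1 − m3), where m1 is the wafer mass before anodization, m2 the mass after anodization, and m3 the mass after the porous layer has been chemically dissolved. A minimal Python sketch with assumed example masses:

def porosity(m1, m2, m3):
    """Porosity from wafer masses: before etching (m1), after etching (m2),
    and after dissolving the porous layer (m3). All masses in the same units."""
    return (m1 - m2) / (m1 - m3)

# assumed illustrative masses in grams
m1, m2, m3 = 0.50000, 0.49850, 0.49750
print(f"porosity: {porosity(m1, m2, m3):.0%}")  # 60% with these example values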
Optical properties
PS demonstrates optical properties based on porosity and the complex refractive indices of Si and the medium inside the pores. The effective refractive index of PS can be modelled by means of effective medium approximations (EMA); usually a generalised Bruggeman model is used. If the refractive index of the medium inside the pores is high, the effective refractive index of PS will be high as well. This phenomenon causes the spectrum to shift towards longer wavelengths.
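For the two-component case (silicon skeleton plus the medium filling the pores), the standard Bruggeman condition f·(ε_Si − ε_eff)/(ε_Si + 2ε_eff) + (1 − f)·(ε_m − ε_eff)/(ε_m + 2ε_eff) = 0 reduces to a quadratic in ε_eff. The minimal Python sketch below solves it; this is a simplified two-phase form rather than the generalised model referred to above, and the porosity and refractive indices are assumed illustrative values.

import math

def bruggeman_eps(eps_si, eps_medium, fill_si):
    """Effective permittivity of a two-phase Bruggeman mixture.
    fill_si is the volume fraction of silicon (1 - porosity)."""
    f = fill_si
    # quadratic form: 2*eps_eff^2 - b*eps_eff - eps_si*eps_medium = 0
    b = f * (2 * eps_si - eps_medium) + (1 - f) * (2 * eps_medium - eps_si)
    return (b + math.sqrt(b * b + 8 * eps_si * eps_medium)) / 4.0

porosity = 0.70           # assumed porosity
n_si, n_air = 3.5, 1.0    # assumed refractive indices of silicon and air-filled pores
eps_eff = bruggeman_eps(n_si**2, n_air**2, 1.0 - porosity)
print(f"effective refractive index: {math.sqrt(eps_eff):.2f}")  # ~1.6 for these values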
Classification of porous silicon
Porous silicon is classified into three categories according to the size of its pores: macroporous, mesoporous, and microporous.
Surface modification of porous silicon
The surface of porous silicon may be modified to exhibit different properties. Often, freshly etched porous silicon may be unstable due to the rate of its oxidation by the atmosphere or unsuitable for cell attachment purposes. Therefore, it can be surface modified to improve stability and cell attachment.
Surface modification improving stability
Following the formation of porous silicon, its surface is covered with covalently bonded hydrogen. Although the hydrogen-coated surface is sufficiently stable when exposed to an inert atmosphere for a short period of time, prolonged exposure renders the surface prone to oxidation by atmospheric oxygen. The oxidation promotes instability in the surface and is undesirable for many applications. Thus, several methods were developed to promote the surface stability of porous silicon.
An approach that can be taken is thermal oxidation. The process involves heating the silicon to a temperature above 1000 °C to promote full oxidation of the silicon. The method reportedly produced samples with good stability to aging and electronic surface passivation.
Porous silicon exhibits a high degree of biocompatibility. The large surface area enables organic molecules to adhere well. It degrades to orthosilicic acid (H4SiO4), which causes no harm to the body. This has opened potential applications in medicine, such as a framework for the growth of bone.
Surface modification improving cell adhesion
Surface modification can also affect properties that promote cell adhesion. One particular research in 2005 studied the mammalian cell adhesion on the modified surfaces of porous silicon. The research used rat PC12 cells and Human Lens Epithelial (HLE) cells cultured for four hours on the surface modified porous silicon. Cells were then stained with vital dye FDA and observed under fluorescence microscopy. The research concluded that "amino silanisation and coating the pSi surface with collagen enhanced cell attachment and spreading".
Key characteristics of porous silicon
Highly controllable properties
Porous silicon studies conducted in 1995 showed that the behaviour of porous silicon can be altered in between "bio-inert", "bioactive" and "resorbable" by varying the porosity of the silicon sample. The in-vitro study used simulated body fluid containing ion concentration similar to the human blood and tested the activities of porous silicon sample when exposed to the fluids for prolonged period of time. It was found that high porosity mesoporous layers were completely removed by the simulated body fluids within a day. In contrast, low to medium porosity microporous layers displayed more stable configurations and induced hydroxyapatite growth.
Bioactive
The first sign of porous silicon as a bioactive material was found in 1995. In the conducted study, it was found that hydroxyapatite growth was occurring on porous silicon areas. It was then suggested that "hydrated microporous Si could be a bioactive form of the semiconductor and suggest that Si itself should be seriously considered for development as a material for widespread in vivo applications." Another paper published the finding that porous silicon may be used as a substrate for hydroxyapatite growth, either by a simple soaking process or by a laser-liquid-solid interaction process.
Since then, in-vitro studies have been conducted to evaluate the interaction of cells with porous silicon. A 1995 study of the interaction of B50 rat hippocampal cells with porous silicon found that B50 cells have clear preference for adhesion to porous silicon over untreated surface. The study indicated that porous silicon can be suitable for cell culturing purposes and can be used to control cell growth pattern.
Non-toxic waste product
Another positive attribute of porous silicon is its degradation into monomeric silicic acid (Si(OH)4). Silicic acid is reputed to be the most natural form of the element in the environment and is readily removed by the kidneys.
The human blood plasma contains monomeric silicic acid at levels of less than 1 mg Si/L, corresponding to the average dietary intake of 20–50 mg/day. It was proposed that the small thickness of silicon coatings presents minimal risk to a toxic concentration being reached. The proposal was supported by an experiment involving volunteers and silicic-acid drinks. It was found that concentration of the acid rose only briefly above the normal 1 mg Si/L level and was efficiently expelled by urine excretion.
Superhydrophobicity
The simple adjustment of pore morphology and geometry of porous silicon also offers a convenient way to control its wetting behavior. Stable ultra- and superhydrophobic states on porous silicon can be fabricated and used in lab-on-a-chip, microfluidic devices for the improved surface-based bioanalysis.
See also
Nanocrystalline silicon
Silicon
Porosity
Quantum wire
Etching (microfabrication)
References
Further reading
Allotropes of silicon
Silicon, Porous
Biomedical engineering
Pharmacodynamics | Porous silicon | [
"Chemistry",
"Engineering",
"Biology"
] | 2,793 | [
"Pharmacology",
"Biological engineering",
"Allotropes",
"Biomedical engineering",
"Pharmacodynamics",
"Semiconductor materials",
"Group IV semiconductors",
"Allotropes of silicon",
"Medical technology"
] |
2,714,149 | https://en.wikipedia.org/wiki/Symmetry%20in%20mathematics | Symmetry occurs not only in geometry, but also in other branches of mathematics. Symmetry is a type of invariance: the property that a mathematical object remains unchanged under a set of operations or transformations.
Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This can occur in many ways; for example, if X is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups. If the object X is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (i.e., an isometry).
In general, every kind of structure in mathematics will have its own kind of symmetry; many of these are described in the sections below.
Symmetry in geometry
The types of symmetry considered in basic geometry include reflectional symmetry, rotational symmetry, translational symmetry and glide reflection symmetry, which are described more fully in the main article Symmetry (geometry).
Symmetry in calculus
Even and odd functions
Even functions
Let f(x) be a real-valued function of a real variable; then f is even if the following equation holds for all x and -x in the domain of f:

f(x) = f(-x)

Geometrically speaking, the graph of an even function is symmetric with respect to the y-axis, meaning that its graph remains unchanged after reflection about the y-axis. Examples of even functions include |x|, x², x⁴, cos(x), and cosh(x).
Odd functions
Again, let f be a real-valued function of a real variable; then f is odd if the following equation holds for all x and -x in the domain of f:

-f(x) = f(-x)

That is,

f(-x) = -f(x)

Geometrically, the graph of an odd function has rotational symmetry with respect to the origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin. Examples of odd functions are x, x³, sin(x), sinh(x), and erf(x).
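A quick numerical way to see these definitions is to compare f(x) with f(-x) at sample points; a minimal Python sketch:

import math

def is_even(f, samples):
    return all(math.isclose(f(-x), f(x), abs_tol=1e-12) for x in samples)

def is_odd(f, samples):
    return all(math.isclose(f(-x), -f(x), abs_tol=1e-12) for x in samples)

xs = [0.1 * k for k in range(1, 50)]
print(is_even(math.cos, xs), is_odd(math.cos, xs))   # True False
print(is_even(math.sin, xs), is_odd(math.sin, xs))   # False True
print(is_even(lambda x: x**2, xs))                   # True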
Integrating
The integral of an odd function from −A to +A is zero, provided that A is finite and that the function is integrable (e.g., has no vertical asymptotes between −A and A).
The integral of an even function from −A to +A is twice the integral from 0 to +A, provided that A is finite and the function is integrable (e.g., has no vertical asymptotes between −A and A). This also holds true when A is infinite, but only if the integral converges.
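These integral identities are easy to verify numerically. The sketch below approximates the integrals with a simple midpoint Riemann sum (no external libraries assumed):

import math

def integrate(f, a, b, n=100000):
    """Midpoint Riemann sum of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

A = 2.0
odd_f = lambda x: x**3           # an odd function
even_f = lambda x: math.cos(x)   # an even function

print(integrate(odd_f, -A, A))                                 # ~0
print(integrate(even_f, -A, A), 2 * integrate(even_f, 0, A))   # the two values agree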
Series
The Maclaurin series of an even function includes only even powers.
The Maclaurin series of an odd function includes only odd powers.
The Fourier series of a periodic even function includes only cosine terms.
The Fourier series of a periodic odd function includes only sine terms.
Symmetry in linear algebra
Symmetry in matrices
In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose (i.e., it is invariant under matrix transposition). Formally, matrix A is symmetric if

A = Aᵀ
By the definition of matrix equality, which requires that the entries in all corresponding positions be equal, equal matrices must have the same dimensions (as matrices of different sizes or shapes cannot be equal). Consequently, only square matrices can be symmetric.
The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if the entries are written as A = (aij), then aij = aji, for all indices i and j.
For example, the following 3×3 matrix is symmetric:

  [ 1   7   3 ]
  [ 7   4  -5 ]
  [ 3  -5   6 ]
Every square diagonal matrix is symmetric, since all off-diagonal entries are zero. Similarly, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.
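A minimal Python sketch using NumPy (the example matrix is the one shown above) that checks symmetry and illustrates the symmetric/skew-symmetric split:

import numpy as np

A = np.array([[1, 7, 3],
              [7, 4, -5],
              [3, -5, 6]])

print(np.array_equal(A, A.T))            # True: A equals its transpose

# Any square matrix M splits into a symmetric and a skew-symmetric part
M = np.arange(9).reshape(3, 3)
sym = (M + M.T) / 2
skew = (M - M.T) / 2
print(np.allclose(M, sym + skew))        # True
print(np.allclose(np.diag(skew), 0))     # True: skew-symmetric diagonal is zero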
In linear algebra, a real symmetric matrix represents a self-adjoint operator over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them.
Symmetry in abstract algebra
Symmetric groups
The symmetric group Sn (on a finite set of n symbols) is the group whose elements are all the permutations of the n symbols, and whose group operation is the composition of such permutations, which are treated as bijective functions from the set of symbols to itself. Since there are n! (n factorial) possible permutations of a set of n symbols, it follows that the order (i.e., the number of elements) of the symmetric group Sn is n!.
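The order n! and the group operation can be illustrated directly; a minimal Python sketch:

from itertools import permutations
from math import factorial

n = 4
symbols = range(n)
perms = list(permutations(symbols))
print(len(perms), factorial(n))   # 24 24

def compose(p, q):
    """Composition of permutations given as tuples (apply q, then p)."""
    return tuple(p[q[i]] for i in range(len(p)))

identity = tuple(symbols)
print(compose(identity, perms[5]) == perms[5])   # True: identity acts trivially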
Symmetric polynomials
A symmetric polynomial is a polynomial P(X1, X2, ..., Xn) in n variables, such that if any of the variables are interchanged, one obtains the same polynomial. Formally, P is a symmetric polynomial if for any permutation σ of the subscripts 1, 2, ..., n, one has P(Xσ(1), Xσ(2), ..., Xσ(n)) = P(X1, X2, ..., Xn).
Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting. From this point of view, the elementary symmetric polynomials are the most fundamental symmetric polynomials. A theorem states that any symmetric polynomial can be expressed in terms of elementary symmetric polynomials, which implies that every symmetric polynomial expression in the roots of a monic polynomial can alternatively be given as a polynomial expression in the coefficients of the polynomial.
Examples
In two variables X1 and X2, one has symmetric polynomials such as:

X1 + X2, X1·X2 and X1² + X2²

and in three variables X1, X2 and X3, one has as a symmetric polynomial:

X1·X2·X3.
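Whether a polynomial is symmetric can be tested numerically by evaluating it on permuted arguments; a minimal Python sketch (the polynomials and test points are illustrative):

from itertools import permutations
import math

def is_symmetric(poly, n_vars, test_points):
    """poly takes a tuple of n_vars numbers; check invariance under all
    permutations of the arguments, up to floating-point tolerance."""
    for point in test_points:
        base = poly(point)
        for perm in permutations(range(n_vars)):
            permuted = tuple(point[i] for i in perm)
            if not math.isclose(poly(permuted), base, abs_tol=1e-12):
                return False
    return True

sym_poly = lambda v: v[0] * v[1] + v[0] + v[1]   # X1*X2 + X1 + X2, symmetric
asym_poly = lambda v: v[0] - v[1]                # X1 - X2, not symmetric

points = [(1.0, 2.0), (3.0, -4.0), (0.5, 7.0)]
print(is_symmetric(sym_poly, 2, points))    # True
print(is_symmetric(asym_poly, 2, points))   # False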
Symmetric tensors
In mathematics, a symmetric tensor is a tensor that is invariant under a permutation of its vector arguments:

T(v1, v2, ..., vr) = T(vσ(1), vσ(2), ..., vσ(r))

for every permutation σ of the symbols {1,2,...,r}.
Alternatively, an rth order symmetric tensor represented in coordinates as a quantity with r indices satisfies

Ti1i2...ir = Tiσ(1)iσ(2)...iσ(r).
The space of symmetric tensors of rank r on a finite-dimensional vector space V is naturally isomorphic to the dual of the space of homogeneous polynomials of degree r on V. Over fields of characteristic zero, the graded vector space of all symmetric tensors can be naturally identified with the symmetric algebra on V. A related concept is that of the antisymmetric tensor or alternating form. Symmetric tensors occur widely in engineering, physics and mathematics.
Galois theory
Given a polynomial, it may be that some of the roots are connected by various algebraic equations. For example, it may be that for two of the roots, say A and B, that A² + 5B³ = 7. The central idea of Galois theory is to consider those permutations (or rearrangements) of the roots having the property that any algebraic equation satisfied by the roots is still satisfied after the roots have been permuted. An important proviso is that we restrict ourselves to algebraic equations whose coefficients are rational numbers. Thus, Galois theory studies the symmetries inherent in algebraic equations.
Automorphisms of algebraic objects
In abstract algebra, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object.
Examples
In set theory, an arbitrary permutation of the elements of a set X is an automorphism. The automorphism group of X is also called the symmetric group on X.
In elementary arithmetic, the set of integers, Z, considered as a group under addition, has a unique nontrivial automorphism: negation. Considered as a ring, however, it has only the trivial automorphism. Generally speaking, negation is an automorphism of any abelian group, but not of a ring or field.
A group automorphism is a group isomorphism from a group to itself. Informally, it is a permutation of the group elements such that the structure remains unchanged. For every group G there is a natural group homomorphism G → Aut(G) whose image is the group Inn(G) of inner automorphisms and whose kernel is the center of G. Thus, if G has trivial center it can be embedded into its own automorphism group.
In linear algebra, an endomorphism of a vector space V is a linear operator V → V. An automorphism is an invertible linear operator on V. When the vector space is finite-dimensional, the automorphism group of V is the same as the general linear group, GL(V).
A field automorphism is a bijective ring homomorphism from a field to itself. In the cases of the rational numbers (Q) and the real numbers (R) there are no nontrivial field automorphisms. Some subfields of R have nontrivial field automorphisms, which however do not extend to all of R (because they cannot preserve the property of a number having a square root in R). In the case of the complex numbers, C, there is a unique nontrivial automorphism that sends R into R: complex conjugation, but there are infinitely (uncountably) many "wild" automorphisms (assuming the axiom of choice). Field automorphisms are important to the theory of field extensions, in particular Galois extensions. In the case of a Galois extension L/K the subgroup of all automorphisms of L fixing K pointwise is called the Galois group of the extension.
Symmetry in representation theory
Symmetry in quantum mechanics: bosons and fermions
In quantum mechanics, bosons have representatives that are symmetric under permutation operators, and fermions have antisymmetric representatives.
This implies the Pauli exclusion principle for fermions. In fact, the Pauli exclusion principle with a single-valued many-particle wavefunction is equivalent to requiring the wavefunction to be antisymmetric. An antisymmetric two-particle state is represented as a sum of states in which one particle is in state |x⟩ and the other in state |y⟩:

|ψ⟩ = Σx,y A(x,y) |x,y⟩

and antisymmetry under exchange means that A(x,y) = −A(y,x). This implies that A(x,x) = 0, which is Pauli exclusion. It is true in any basis, since unitary changes of basis keep antisymmetric matrices antisymmetric, although strictly speaking, the quantity A(x,y) is not a matrix but an antisymmetric rank-two tensor.
Conversely, if the diagonal quantities A(x,x) are zero in every basis, then the wavefunction component

A(x,y) = ⟨ψ|x,y⟩

is necessarily antisymmetric. To prove it, consider the matrix element

⟨ψ| ((|x⟩ + |y⟩) ⊗ (|x⟩ + |y⟩)).

This is zero, because the two particles have zero probability to both be in the superposition state |x⟩ + |y⟩. But this is equal to

⟨ψ|x,x⟩ + ⟨ψ|x,y⟩ + ⟨ψ|y,x⟩ + ⟨ψ|y,y⟩.

The first and last terms on the right hand side are diagonal elements and are zero, and the whole sum is equal to zero. So the wavefunction matrix elements obey

⟨ψ|x,y⟩ + ⟨ψ|y,x⟩ = 0

or

A(x,y) = −A(y,x).
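The statement that an antisymmetric amplitude has zero diagonal (so two fermions are never found in the same state) can be illustrated numerically; a minimal Python sketch with an assumed random amplitude matrix:

import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))

A = M - M.T                        # antisymmetrised two-particle amplitudes A(x, y)
print(np.allclose(A, -A.T))        # True: A(x, y) = -A(y, x)
print(np.allclose(np.diag(A), 0))  # True: A(x, x) = 0, i.e. Pauli exclusion

# antisymmetry is preserved under a real orthogonal change of basis U A U^T
U = np.linalg.qr(rng.normal(size=(4, 4)))[0]
A2 = U @ A @ U.T
print(np.allclose(A2, -A2.T))      # True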
Symmetry in set theory
Symmetric relation
We call a relation symmetric if every time the relation stands from A to B, it also stands from B to A.
Note that symmetry is not the exact opposite of antisymmetry.
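Representing a relation as a set of ordered pairs makes the check direct; a minimal Python sketch:

def is_relation_symmetric(pairs):
    """A relation (set of (a, b) pairs) is symmetric if (b, a) is present whenever (a, b) is."""
    return all((b, a) in pairs for (a, b) in pairs)

sibling_of = {("anna", "ben"), ("ben", "anna")}
less_than = {(1, 2), (2, 3), (1, 3)}

print(is_relation_symmetric(sibling_of))  # True
print(is_relation_symmetric(less_than))   # False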
Symmetry in metric spaces
Isometries of a space
An isometry is a distance-preserving map between metric spaces. Given a metric space, or a set and scheme for assigning distances between elements of the set, an isometry is a transformation which maps elements to another metric space such that the distance between the elements in the new metric space is equal to the distance between the elements in the original metric space. In a two-dimensional or three-dimensional space, two geometric figures are congruent if they are related by an isometry: related by either a rigid motion, or a composition of a rigid motion and a reflection. Up to a relation by a rigid motion, they are equal if related by a direct isometry.
Isometries have been used to unify the working definition of symmetry in geometry and for functions, probability distributions, matrices, strings, graphs, etc.
Symmetries of differential equations
A symmetry of a differential equation is a transformation that leaves the differential equation invariant. Knowledge of such symmetries may help solve the differential equation.
A Lie symmetry of a system of differential equations is a continuous symmetry of the system of differential equations. Knowledge of a Lie symmetry can be used to simplify an ordinary differential equation through reduction of order.
For ordinary differential equations, knowledge of an appropriate set of Lie symmetries allows one to explicitly calculate a set of first integrals, yielding a complete solution without integration.
Symmetries may be found by solving a related set of ordinary differential equations. Solving these equations is often much simpler than solving the original differential equations.
Symmetry in probability
In the case of a finite number of possible outcomes, symmetry with respect to permutations (relabelings) implies a discrete uniform distribution.
In the case of a real interval of possible outcomes, symmetry with respect to interchanging sub-intervals of equal length corresponds to a continuous uniform distribution.
In other cases, such as "taking a random integer" or "taking a random real number", there are no probability distributions at all symmetric with respect to relabellings or to exchange of equally long subintervals. Other reasonable symmetries do not single out one particular distribution, or in other words, there is not a unique probability distribution providing maximum symmetry.
There is one type of isometry in one dimension that may leave the probability distribution unchanged, that is reflection in a point, for example zero.
A possible symmetry for randomness with positive outcomes is that reflection symmetry in a point applies to the logarithm of the outcome, i.e., the outcome and its reciprocal have the same distribution. However, this symmetry does not single out any particular distribution uniquely.
For a "random point" in a plane or in space, one can choose an origin, and consider a probability distribution with circular or spherical symmetry, respectively.
See also
Use of symmetry in integration
Invariance (mathematics)
References
Bibliography
(Concise introduction for lay reader)
Symmetry | Symmetry in mathematics | [
"Physics",
"Mathematics"
] | 3,026 | [
"Geometry",
"Symmetry"
] |
2,714,255 | https://en.wikipedia.org/wiki/3D%20scanning | 3D scanning is the process of analyzing a real-world object or environment to collect three dimensional data of its shape and possibly its appearance (e.g. color). The collected data can then be used to construct digital 3D models.
A 3D scanner can be based on many different technologies, each with its own limitations, advantages and costs. Many limitations in the kind of objects that can be digitised are still present. For example, optical technology may encounter many difficulties with dark, shiny, reflective or transparent objects. In contrast, methods such as industrial computed tomography scanning, structured-light 3D scanners, LiDAR and time-of-flight 3D scanners can be used to construct digital 3D models without destructive testing.
Collected 3D data is useful for a wide variety of applications. These devices are used extensively by the entertainment industry in the production of movies and video games, including virtual reality. Other common applications of this technology include augmented reality, motion capture, gesture recognition, robotic mapping, industrial design, orthotics and prosthetics, reverse engineering and prototyping, quality control/inspection and the digitization of cultural artifacts.
Functionality
The purpose of a 3D scanner is usually to create a 3D model. This 3D model consists of a polygon mesh or point cloud of geometric samples on the surface of the subject. These points can then be used to extrapolate the shape of the subject (a process called reconstruction). If colour information is collected at each point, then the colours or textures on the surface of the subject can also be determined.
3D scanners share several traits with cameras. Like most cameras, they have a cone-like field of view, and like cameras, they can only collect information about surfaces that are not obscured. While a camera collects colour information about surfaces within its field of view, a 3D scanner collects distance information about surfaces within its field of view. The "picture" produced by a 3D scanner describes the distance to a surface at each point in the picture. This allows the three dimensional position of each point in the picture to be identified.
In some situations, a single scan will not produce a complete model of the subject. Multiple scans, from different directions are usually helpful to obtain information about all sides of the subject. These scans have to be brought into a common reference system, a process that is usually called alignment or registration, and then merged to create a complete 3D model. This whole process, going from the single range map to the whole model, is usually known as the 3D scanning pipeline.
Technology
There are a variety of technologies for digitally acquiring the shape of a 3D object. The techniques work with most or all sensor types including optical, acoustic, laser scanning, radar, thermal, and seismic. 3D-scan technologies can be split into two categories: contact and non-contact. Non-contact solutions can be further divided into two main categories, active and passive. There are a variety of technologies that fall under each of these categories.
Contact
Contact 3D scanners work by physically probing (touching) the part and recording the position of the sensor as the probe moves around the part.
There are two main types of contact 3D scanners:
Coordinate measuring machines (CMMs), which traditionally have three perpendicular moving axes with a touch probe mounted on the Z axis. As the touch probe moves around the part, sensors on each axis record the position to generate XYZ coordinates. Modern CMMs are 5-axis systems, with the two extra axes provided by pivoting sensor heads. CMMs are the most accurate form of 3D measurement, achieving micron precision. The greatest advantage of a CMM after accuracy is that it can be run in autonomous (CNC) mode or as a manual probing system. The disadvantages of CMMs are their upfront cost and the technical knowledge required to operate them.
Articulated Arms which generally have multiple segments with polar sensors on each joint. As per the CMM, as the articulated arm moves around the part sensors record their position and the location of the end of the arm is calculated using complex math and the wrist rotation angle and hinge angle of each joint. While not usually as accurate as CMMs, articulated arms still achieve high accuracy and are cheaper and slightly easier to use. They do not usually have CNC options.
Both modern CMMs and Articulated Arms can also be fitted with non-contact laser scanners instead of touch probes.
Non-contact active
Active scanners emit some kind of radiation or light and detect its reflection or radiation passing through object in order to probe an object or environment. Possible types of emissions used include light, ultrasound or x-ray.
Time-of-flight
The time-of-flight 3D laser scanner is an active scanner that uses laser light to probe the subject. At the heart of this type of scanner is a time-of-flight laser range finder. The laser range finder finds the distance of a surface by timing the round-trip time of a pulse of light. A laser is used to emit a pulse of light and the amount of time before the reflected light is seen by a detector is measured. Since the speed of light is known, the round-trip time determines the travel distance of the light, which is twice the distance between the scanner and the surface. If is the round-trip time, then distance is equal to . The accuracy of a time-of-flight 3D laser scanner depends on how precisely we can measure the time: 3.3 picoseconds (approx.) is the time taken for light to travel 1 millimetre.
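The distance computation itself is a one-liner once the round-trip time is measured; a minimal Python sketch, including the timing precision needed for millimetre range resolution:

c = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_time_s):
    """Distance to the surface from the measured round-trip time."""
    return c * round_trip_time_s / 2.0

print(tof_distance(66.7e-9))   # ~10 m for a 66.7 ns round trip
# round-trip timing precision required to resolve 1 mm of range:
print(2 * 1e-3 / c)            # ~6.7e-12 s, i.e. a few picoseconds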
The laser range finder only detects the distance of one point in its direction of view. Thus, the scanner scans its entire field of view one point at a time by changing the range finder's direction of view to scan different points. The view direction of the laser range finder can be changed either by rotating the range finder itself, or by using a system of rotating mirrors. The latter method is commonly used because mirrors are much lighter and can thus be rotated much faster and with greater accuracy. Typical time-of-flight 3D laser scanners can measure the distance of 10,000~100,000 points every second.
Time-of-flight devices are also available in a 2D configuration. This is referred to as a time-of-flight camera.
Triangulation
Triangulation based 3D laser scanners are also active scanners that use laser light to probe the environment. With respect to time-of-flight 3D laser scanner the triangulation laser shines a laser on the subject and exploits a camera to look for the location of the laser dot. Depending on how far away the laser strikes a surface, the laser dot appears at different places in the camera's field of view. This technique is called triangulation because the laser dot, the camera and the laser emitter form a triangle. The length of one side of the triangle, the distance between the camera and the laser emitter is known. The angle of the laser emitter corner is also known. The angle of the camera corner can be determined by looking at the location of the laser dot in the camera's field of view. These three pieces of information fully determine the shape and size of the triangle and give the location of the laser dot corner of the triangle. In most cases a laser stripe, instead of a single laser dot, is swept across the object to speed up the acquisition process. The use of triangulation to measure distances dates to antiquity.
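The triangle geometry translates into a short computation: with a known baseline between the laser emitter and the camera, a known laser angle, and a camera angle recovered from the dot's position in the image, the law of sines gives the position of the laser dot. A minimal Python sketch; the baseline and angles are assumed example values.

import math

def triangulate(baseline_m, laser_angle_rad, camera_angle_rad):
    """Return (x, z) of the laser dot. The emitter is at the origin, the camera
    at (baseline_m, 0), and both angles are measured from the baseline."""
    gamma = math.pi - laser_angle_rad - camera_angle_rad        # angle at the laser dot
    r_emitter = baseline_m * math.sin(camera_angle_rad) / math.sin(gamma)  # emitter-to-dot range
    x = r_emitter * math.cos(laser_angle_rad)
    z = r_emitter * math.sin(laser_angle_rad)
    return x, z

# assumed example: 10 cm baseline, laser at 80 degrees, camera sees the dot at 60 degrees
print(triangulate(0.10, math.radians(80), math.radians(60)))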
Strengths and weaknesses
Time-of-flight range finders are capable of operating over long distances on the order of kilometres. These scanners are thus suitable for scanning large structures like buildings or geographic features. A disadvantage is that, due to the high speed of light, measuring the round-trip time is difficult and so the accuracy of the distance measurement is relatively low, on the order of millimetres.
Triangulation range finders, on the other hand, have a range usually limited to a few meters for reasonably sized devices, but their accuracy is relatively high. The accuracy of triangulation range finders is on the order of tens of micrometers.
Time-of-flight scanners' accuracy can be lost when the laser hits the edge of an object because the information that is sent back to the scanner is from two different locations for one laser pulse. The coordinate relative to the scanner's position for a point that has hit the edge of an object will be calculated based on an average and therefore will put the point in the wrong place. When using a high resolution scan on an object the chances of the beam hitting an edge are increased and the resulting data will show noise just behind the edges of the object. Scanners with a smaller beam width will help to solve this problem but will be limited by range as the beam width will increase over distance. Software can also help by determining that the first object to be hit by the laser beam should cancel out the second.
At a rate of 10,000 sample points per second, low resolution scans can take less than a second, but high resolution scans, requiring millions of samples, can take minutes for some time-of-flight scanners. The problem this creates is distortion from motion. Since each point is sampled at a different time, any motion in the subject or the scanner will distort the collected data. Thus, it is usually necessary to mount both the subject and the scanner on stable platforms and minimise vibration. Using these scanners to scan objects in motion is very difficult.
Recently, there has been research on compensating for distortion from small amounts of vibration and distortions due to motion and/or rotation.
Short-range laser scanners cannot usually encompass a depth of field of more than 1 meter. When scanning in one position for any length of time, slight movement can occur in the scanner position due to changes in temperature. If the scanner is set on a tripod and there is strong sunlight on one side of the scanner, then that side of the tripod will expand and slowly distort the scan data from one side to another. Some laser scanners have level compensators built into them to counteract any movement of the scanner during the scan process.
Conoscopic holography
In a conoscopic system, a laser beam is projected onto the surface and then the immediate reflection along the same ray-path is put through a conoscopic crystal and projected onto a CCD. The result is a diffraction pattern that can be frequency analyzed to determine the distance to the measured surface. The main advantage with conoscopic holography is that only a single ray-path is needed for measuring, thus giving an opportunity to measure for instance the depth of a finely drilled hole.
Hand-held laser scanners
Hand-held laser scanners create a 3D image through the triangulation mechanism described above: a laser dot or line is projected onto an object from a hand-held device and a sensor (typically a charge-coupled device or position sensitive device) measures the distance to the surface. Data is collected in relation to an internal coordinate system and therefore to collect data where the scanner is in motion the position of the scanner must be determined. The position can be determined by the scanner using reference features on the surface being scanned (typically adhesive reflective tabs, but natural features have been also used in research work) or by using an external tracking method. External tracking often takes the form of a laser tracker (to provide the sensor position) with integrated camera (to determine the orientation of the scanner) or a photogrammetric solution using 3 or more cameras providing the complete six degrees of freedom of the scanner. Both techniques tend to use infrared light-emitting diodes attached to the scanner which are seen by the camera(s) through filters providing resilience to ambient lighting.
Data is collected by a computer and recorded as data points within three-dimensional space, with processing this can be converted into a triangulated mesh and then a computer-aided design model, often as non-uniform rational B-spline surfaces. Hand-held laser scanners can combine this data with passive, visible-light sensors — which capture surface textures and colors — to build (or "reverse engineer") a full 3D model.
Structured light
Structured-light 3D scanners project a pattern of light on the subject and look at the deformation of the pattern on the subject. The pattern is projected onto the subject using either an LCD projector or other stable light source. A camera, offset slightly from the pattern projector, looks at the shape of the pattern and calculates the distance of every point in the field of view.
Structured-light scanning is still a very active area of research with many research papers published each year. Perfect maps have also been proven useful as structured light patterns that solve the correspondence problem and allow for error detection and error correction.
The advantage of structured-light 3D scanners is speed and precision. Instead of scanning one point at a time, structured light scanners scan multiple points or the entire field of view at once. Scanning an entire field of view in a fraction of a second reduces or eliminates the problem of distortion from motion. Some existing systems are capable of scanning moving objects in real-time.
A real-time scanner using digital fringe projection and phase-shifting technique (certain kinds of structured light methods) was developed, to capture, reconstruct, and render high-density details of dynamically deformable objects (such as facial expressions) at 40 frames per second. Recently, another scanner has been developed. Different patterns can be applied to this system, and the frame rate for capturing and data processing achieves 120 frames per second. It can also scan isolated surfaces, for example two moving hands. By utilising the binary defocusing technique, speed breakthroughs have been made that could reach hundreds to thousands of frames per second.
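As an illustration of the phase-shifting principle mentioned above, the following minimal Python sketch recovers the wrapped fringe phase from four camera frames taken with sinusoidal patterns shifted by 90° each. The function name, image sizes and pattern parameters are illustrative assumptions, not part of any specific commercial or research system; a real scanner would additionally unwrap the phase and apply a phase-to-height calibration.

```python
import numpy as np

def wrapped_phase_4step(I1, I2, I3, I4):
    """Four-step phase shifting: with frames I_k = a + b*cos(phi + k*pi/2),
    the wrapped phase is atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic example: a linear phase ramp imaged under four shifted patterns.
h, w = 4, 6
phi_true = np.tile(np.linspace(0, 4 * np.pi, w), (h, 1))
frames = [0.5 + 0.4 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_wrapped = wrapped_phase_4step(*frames)   # in (-pi, pi]; unwrapping comes next
```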
Modulated light
Modulated light 3D scanners shine a continually changing light at the subject. Usually the light source simply cycles its amplitude in a sinusoidal pattern. A camera detects the reflected light and the amount the pattern is shifted by determines the distance the light travelled. Modulated light also allows the scanner to ignore light from sources other than a laser, so there is no interference.
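The distance recovery in such continuous-wave systems follows from the measured phase shift of the reflected modulation envelope. The short sketch below shows the standard phase-shift range relation; the function and parameter names are illustrative, and real instruments must also handle the ambiguity interval (the measurement repeats every half modulation wavelength).

```python
import math

def phase_shift_range(delta_phi_rad, f_mod_hz, c=299_792_458.0):
    """Round-trip travel of the light means d = c * delta_phi / (4 * pi * f_mod)."""
    return c * delta_phi_rad / (4.0 * math.pi * f_mod_hz)

# A 90 degree phase shift measured at 10 MHz modulation corresponds to ~3.75 m.
d = phase_shift_range(math.pi / 2, 10e6)
```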
Volumetric techniques
Medical
Computed tomography (CT) is a medical imaging method which generates a three-dimensional image of the inside of an object from a large series of two-dimensional X-ray images. Similarly, magnetic resonance imaging is another medical imaging technique that provides much greater contrast between the different soft tissues of the body than CT does, making it especially useful in neurological (brain), musculoskeletal, cardiovascular, and oncological (cancer) imaging. These techniques produce a discrete 3D volumetric representation that can be directly visualised, manipulated or converted to a traditional 3D surface by means of isosurface extraction algorithms.
Industrial
Although most common in medicine, industrial computed tomography, microtomography and MRI are also used in other fields for acquiring a digital representation of an object and its interior, such as non-destructive materials testing, reverse engineering, or studying biological and paleontological specimens.
Non-contact passive
Passive 3D imaging solutions do not emit any kind of radiation themselves, but instead rely on detecting reflected ambient radiation. Most solutions of this type detect visible light because it is readily available ambient radiation. Other types of radiation, such as infrared, could also be used. Passive methods can be very cheap, because in most cases they do not need particular hardware other than simple digital cameras.
Stereoscopic systems usually employ two video cameras, slightly apart, looking at the same scene. By analysing the slight differences between the images seen by each camera, it is possible to determine the distance at each point in the images. This method is based on the same principles driving human stereoscopic vision.
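For a calibrated, rectified stereo pair the per-point distance follows from the classic pinhole relation Z = f·B/d, where f is the focal length (in pixels), B the camera baseline and d the disparity between the two images. The minimal sketch below assumes these calibration values are already known; the names and numbers are illustrative only.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Pinhole stereo: depth Z = f * B / d for a rectified camera pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_length_px * baseline_m / disparity_px

# e.g. a 20 px disparity with a 700 px focal length and a 12 cm baseline -> ~4.2 m
z = depth_from_disparity(20, 700, 0.12)
```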
Photometric systems usually use a single camera, but take multiple images under varying lighting conditions. These techniques attempt to invert the image formation model in order to recover the surface orientation at each pixel.
Silhouette techniques use outlines created from a sequence of photographs around a three-dimensional object against a well contrasted background. These silhouettes are extruded and intersected to form the visual hull approximation of the object. With these approaches some concavities of an object (like the interior of a bowl) cannot be detected.
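A common way to compute the visual hull described above is voxel carving: every candidate voxel is projected into each calibrated view and kept only if it falls inside every silhouette. The sketch below assumes the camera projection functions and silhouette masks are already available; it is a simplified illustration, not a production implementation.

```python
import numpy as np

def carve_visual_hull(voxel_centers, silhouettes, project_fns):
    """Keep voxels whose projections lie inside all silhouettes.
    voxel_centers: (N, 3) candidate points; silhouettes: boolean images
    (True = object pixel); project_fns: per-view functions mapping (N, 3)
    points to (N, 2) pixel coordinates (assumed known from calibration)."""
    keep = np.ones(len(voxel_centers), dtype=bool)
    for sil, project in zip(silhouettes, project_fns):
        uv = np.round(project(voxel_centers)).astype(int)
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < sil.shape[1]) & \
                 (uv[:, 1] >= 0) & (uv[:, 1] < sil.shape[0])
        hit = np.zeros(len(voxel_centers), dtype=bool)
        hit[inside] = sil[uv[inside, 1], uv[inside, 0]]
        keep &= hit                      # carve away voxels outside any silhouette
    return voxel_centers[keep]
```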
Photogrammetric non-contact passive methods
Photogrammetry provides reliable information about the 3D shapes of physical objects based on analysis of photographic images. The resulting 3D data is typically provided as a 3D point cloud or 3D mesh. Modern photogrammetry software applications automatically analyze a large number of digital images for 3D reconstruction; however, manual interaction may be required if the software cannot automatically determine the 3D positions of the camera in the images, which is an essential step in the reconstruction pipeline. Various software packages are available, including PhotoModeler, Geodetic Systems, Autodesk ReCap, RealityCapture and Agisoft Metashape (see comparison of photogrammetry software).
Close range photogrammetry typically uses a handheld camera such as a DSLR with a fixed focal length lens to capture images of objects for 3D reconstruction. Subjects include smaller objects such as sculptures, rocks and shoes, as well as vehicles and building facades.
Camera arrays can be used to generate 3D point clouds or meshes of live objects such as people or pets by synchronizing multiple cameras to photograph a subject from multiple perspectives at the same time for 3D object reconstruction.
Wide angle photogrammetry can be used to capture the interior of buildings or enclosed spaces using a wide angle lens camera such as a 360 camera.
Aerial photogrammetry uses aerial images acquired by satellite, commercial aircraft or UAV drone to collect images of buildings, structures and terrain for 3D reconstruction into a point cloud or mesh.
Acquisition from acquired sensor data
Semi-automatic building extraction from lidar data and high-resolution images is also a possibility. Again, this approach allows modelling without physically moving towards the location or object. From airborne lidar data, digital surface model (DSM) can be generated and then the objects higher than the ground are automatically detected from the DSM. Based on general knowledge about buildings, geometric characteristics such as size, height and shape information are then used to separate the buildings from other objects. The extracted building outlines are then simplified using an orthogonal algorithm to obtain better cartographic quality. Watershed analysis can be conducted to extract the ridgelines of building roofs. The ridgelines as well as slope information are used to classify the buildings per type. The buildings are then reconstructed using three parametric building models (flat, gabled, hipped).
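The first step of the pipeline described above (detecting objects higher than the ground from the DSM) can be illustrated with a short NumPy sketch. The array names and the height threshold are illustrative assumptions; the subsequent outline simplification, ridgeline analysis and parametric roof fitting are not shown.

```python
import numpy as np

def candidate_building_mask(dsm, dtm, min_height=2.5):
    """Normalised DSM thresholding: subtract the bare-earth terrain model (dtm)
    from the digital surface model (dsm) and keep raster cells rising more than
    min_height metres above ground as building/vegetation candidates."""
    ndsm = dsm - dtm
    return ndsm > min_height

# dsm and dtm are co-registered 2D rasters in metres, e.g. derived from lidar.
```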
Acquisition from on-site sensors
Lidar and other terrestrial laser scanning technologies offer the fastest, automated way to collect height or distance information, and the use of lidar for height measurement of buildings is becoming very promising. Commercial applications of both airborne lidar and ground laser scanning technology have proven to be fast and accurate methods for building height extraction. The building extraction task is needed to determine building locations, ground elevation, orientations, building size, rooftop heights, etc. Most buildings are described to sufficient detail in terms of general polyhedra, i.e., their boundaries can be represented by a set of planar surfaces and straight lines. Further processing such as expressing building footprints as polygons is used for storing data in GIS databases.
Using laser scans and images taken from ground level and a bird's-eye perspective, Fruh and Zakhor present an approach to automatically create textured 3D city models. This approach involves registering and merging the detailed facade models with a complementary airborne model. The airborne modeling process generates a half-meter resolution model with a bird's-eye view of the entire area, containing terrain profile and building tops. Ground-based modeling process results in a detailed model of the building facades. Using the DSM obtained from airborne laser scans, they localize the acquisition vehicle and register the ground-based facades to the airborne model by means of Monte Carlo localization (MCL). Finally, the two models are merged with different resolutions to obtain a 3D model.
Using an airborne laser altimeter, Haala, Brenner and Anders combined height data with the existing ground plans of buildings. The ground plans of buildings had already been acquired either in analog form from maps and plans or digitally in a 2D GIS. The project was done in order to enable automatic data capture by the integration of these different types of information. Afterwards, virtual reality city models were generated in the project by texture processing, e.g. by mapping of terrestrial images. The project demonstrated the feasibility of rapid acquisition of 3D urban GIS. Ground plans proved to be another very important source of information for 3D building reconstruction. Compared to the results of automatic procedures, these ground plans are more reliable since they contain aggregated information which has been made explicit by human interpretation. For this reason, ground plans can considerably reduce costs in a reconstruction project. An example of existing ground plan data usable in building reconstruction is the Digital Cadastral map, which provides information on the distribution of property, including the borders of all agricultural areas and the ground plans of existing buildings. Additional information such as street names and the usage of buildings (e.g. garage, residential building, office block, industrial building, church) is provided in the form of text symbols. At the moment the Digital Cadastral map is built up as a database covering an area, mainly composed by digitizing preexisting maps or plans.
Cost
Terrestrial laser scan devices (pulse or phase devices) + processing software generally start at a price of €150,000. Some less precise devices (as the Trimble VX) cost around €75,000.
Terrestrial lidar systems cost around €300,000.
Systems using regular still cameras mounted on RC helicopters (Photogrammetry) are also possible, and cost around €25,000. Systems that use still cameras with balloons are even cheaper (around €2,500), but require additional manual processing. As the manual processing takes around one month of labor for every day of taking pictures, this is still an expensive solution in the long run.
Obtaining satellite images is also an expensive endeavor. High resolution stereo images (0.5 m resolution) cost around €11,000. Imaging satellites include QuickBird and Ikonos. High resolution monoscopic images cost around €5,500. Somewhat lower resolution images (e.g. from the CORONA satellite, with a 2 m resolution) cost around €1,000 per 2 images. Note that Google Earth images are too low in resolution to make an accurate 3D model.
Reconstruction
From point clouds
The point clouds produced by 3D scanners and 3D imaging can be used directly for measurement and visualisation in the architecture and construction world.
From models
Most applications, however, instead use polygonal 3D models, NURBS surface models, or editable feature-based CAD models (also known as solid models).
Polygon mesh models: In a polygonal representation of a shape, a curved surface is modeled as many small faceted flat surfaces (think of a sphere modeled as a disco ball). Polygon models (also called mesh models) are useful for visualisation and for some CAM (i.e., machining), but are generally "heavy" (i.e., very large data sets) and relatively un-editable in this form. Reconstruction to a polygonal model involves finding and connecting adjacent points with straight lines in order to create a continuous surface. Many applications, both free and nonfree, are available for this purpose (e.g. GigaMesh, MeshLab, PointCab, kubit PointCloud for AutoCAD, Reconstructor, imagemodel, PolyWorks, Rapidform, Geomagic, Imageware, Rhino 3D etc.).
Surface models: The next level of sophistication in modeling involves using a quilt of curved surface patches to model the shape. These might be NURBS, TSplines or other curved representations of curved topology. Using NURBS, the spherical shape becomes a true mathematical sphere. Some applications offer patch layout by hand but the best in class offer both automated patch layout and manual layout. These patches have the advantage of being lighter and more manipulable when exported to CAD. Surface models are somewhat editable, but only in a sculptural sense of pushing and pulling to deform the surface. This representation lends itself well to modelling organic and artistic shapes. Providers of surface modellers include Rapidform, Geomagic, Rhino 3D, Maya, T Splines etc.
Solid CAD models: From an engineering/manufacturing perspective, the ultimate representation of a digitised shape is the editable, parametric CAD model. In CAD, the sphere is described by parametric features which are easily edited by changing a value (e.g., centre point and radius).
These CAD models describe not simply the envelope or shape of the object, but CAD models also embody the "design intent" (i.e., critical features and their relationship to other features). An example of design intent not evident in the shape alone might be a brake drum's lug bolts, which must be concentric with the hole in the centre of the drum. This knowledge would drive the sequence and method of creating the CAD model; a designer with an awareness of this relationship would not design the lug bolts referenced to the outside diameter, but instead, to the center. A modeler creating a CAD model will want to include both Shape and design intent in the complete CAD model.
Vendors offer different approaches to getting to the parametric CAD model. Some export the NURBS surfaces and leave it to the CAD designer to complete the model in CAD (e.g., Geomagic, Imageware, Rhino 3D). Others use the scan data to create an editable and verifiable feature based model that is imported into CAD with full feature tree intact, yielding a complete, native CAD model, capturing both shape and design intent (e.g. Geomagic, Rapidform). For instance, the market offers various plug-ins for established CAD-programs, such as SolidWorks. Xtract3D, DezignWorks and Geomagic for SolidWorks allow manipulating a 3D scan directly inside SolidWorks. Still other CAD applications are robust enough to manipulate limited points or polygon models within the CAD environment (e.g., CATIA, AutoCAD, Revit).
From a set of 2D slices
CT, industrial CT, MRI, or micro-CT scanners do not produce point clouds but a set of 2D slices (each termed a "tomogram") which are then 'stacked together' to produce a 3D representation. There are several ways to do this depending on the output required:
Volume rendering: Different parts of an object usually have different threshold values or greyscale densities. From this, a 3-dimensional model can be constructed and displayed on screen. Multiple models can be constructed from various thresholds, allowing different colours to represent each component of the object. Volume rendering is usually only used for visualisation of the scanned object.
Image segmentation: Where different structures have similar threshold/greyscale values, it can become impossible to separate them simply by adjusting volume rendering parameters. The solution is called segmentation, a manual or automatic procedure that can remove the unwanted structures from the image. Image segmentation software usually allows export of the segmented structures in CAD or STL format for further manipulation.
Image-based meshing: When using 3D image data for computational analysis (e.g. CFD and FEA), simply segmenting the data and meshing from CAD can become time-consuming, and virtually intractable for the complex topologies typical of image data. The solution is called image-based meshing, an automated process of generating an accurate and realistic geometrical description of the scan data.
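As an illustrative sketch of turning a stack of 2D slices into a surface mesh, the snippet below uses the marching cubes implementation from the scikit-image library; the choice of library, the variable names and the iso-value are assumptions made for the example rather than part of any specific scanner workflow.

```python
import numpy as np
from skimage import measure

def extract_isosurface(slices, iso_value):
    """Stack 2D tomograms into a 3D volume and extract a triangle mesh at the
    chosen grey-level threshold (isosurface), suitable for export to STL/CAD."""
    volume = np.stack(slices, axis=0)
    verts, faces, normals, values = measure.marching_cubes(volume, level=iso_value)
    return verts, faces
```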
From laser scans
Laser scanning describes the general method to sample or scan a surface using laser technology. Several areas of application exist that mainly differ in the power of the lasers that are used, and in the results of the scanning process. Low laser power is used when the scanned surface doesn't have to be influenced, e.g. when it only has to be digitised. Confocal or 3D laser scanning are methods to get information about the scanned surface. Another low-power application uses structured light projection systems for solar cell flatness metrology, enabling a stress calculation throughput in excess of 2000 wafers per hour.
The laser power used for laser scanning equipment in industrial applications is typically less than 1 W, usually on the order of 200 mW or less, though sometimes more.
From photographs
3D data acquisition and object reconstruction can be performed using stereo image pairs. Stereo photogrammetry or photogrammetry based on a block of overlapped images is the primary approach for 3D mapping and object reconstruction using 2D images. Close-range photogrammetry has also matured to the level where cameras or digital cameras can be used to capture the close-look images of objects, e.g., buildings, and reconstruct them using the very same theory as the aerial photogrammetry. An example of software which could do this is Vexcel FotoG 5. This software has now been replaced by Vexcel GeoSynth. Another similar software program is Microsoft Photosynth.
A semi-automatic method for acquiring 3D topologically structured data from 2D aerial stereo images has been presented by Sisi Zlatanova. The process involves the manual digitizing of a number of points necessary for automatically reconstructing the 3D objects. Each reconstructed object is validated by superimposition of its wire frame graphics in the stereo model. The topologically structured 3D data is stored in a database and are also used for visualization of the objects. Notable software used for 3D data acquisition using 2D images include e.g. Agisoft Metashape, RealityCapture, and ENSAIS Engineering College TIPHON (Traitement d'Image et PHOtogrammétrie Numérique).
A method for semi-automatic building extraction, together with a concept for storing building models alongside terrain and other topographic data in a topographical information system, has been developed by Franz Rottensteiner. His approach was based on the integration of building parameter estimation into the photogrammetry process, applying a hybrid modeling scheme. Buildings are decomposed into a set of simple primitives that are reconstructed individually and then combined by Boolean operators. The internal data structure of both the primitives and the compound building models is based on boundary representation methods.
Multiple images are used in Zhang's approach to surface reconstruction from multiple images. A central idea is to explore the integration of both 3D stereo data and 2D calibrated images. This approach is motivated by the fact that only robust and accurate feature points that survived the geometry scrutiny of multiple images are reconstructed in space. The density insufficiency and the inevitable holes in the stereo data should then be filled in by using information from multiple images. The idea is thus to first construct small surface patches from stereo points, then to progressively propagate only reliable patches in their neighborhood from images into the whole surface using a best-first strategy. The problem thus reduces to searching for an optimal local surface patch going through a given set of stereo points from images.
Multi-spectral images are also used for 3D building detection. The first and last pulse data and the normalized difference vegetation index are used in the process.
New measurement techniques are also employed to obtain measurements of and between objects from single images by using the projection, or the shadow as well as their combination. This technology is gaining attention given its fast processing time, and far lower cost than stereo measurements.
Applications
Space experiments
3D scanning technology has been used to scan space rocks for the European Space Agency.
Construction industry and civil engineering
Robotic control: e.g. a laser scanner may function as the "eye" of a robot.
As-built drawings of bridges, industrial plants, and monuments
Documentation of historical sites
Site modelling and layout
Quality control
Quantity surveys
Payload monitoring
Freeway redesign
Establishing a bench mark of pre-existing shape/state in order to detect structural changes resulting from exposure to extreme loadings such as earthquake, vessel/truck impact or fire.
Create GIS (geographic information system) maps and geomatics.
Subsurface laser scanning in mines and karst voids.
Forensic documentation
Design process
Increasing accuracy working with complex parts and shapes,
Coordinating product design using parts from multiple sources,
Updating old CD scans with those from more current technology,
Replacing missing or older parts,
Creating cost savings by allowing as-built design services, for example in automotive manufacturing plants,
"Bringing the plant to the engineers" with web shared scans, and
Saving travel costs.
Entertainment
3D scanners are used by the entertainment industry to create digital 3D models for movies, video games and leisure purposes. They are heavily utilized in virtual cinematography. In cases where a real-world equivalent of a model exists, it is much faster to scan the real-world object than to manually create a model using 3D modeling software. Frequently, artists sculpt physical models of what they want and scan them into digital form rather than directly creating digital models on a computer.
3D photography
3D scanners are increasingly being used together with cameras to represent 3D objects in an accurate manner. Since 2010, companies have emerged that create 3D portraits of people (3D figurines or 3D selfies).
Law enforcement
3D laser scanning is used by law enforcement agencies around the world. 3D models are used for on-site documentation of:
Crime scenes
Bullet trajectories
Bloodstain pattern analysis
Accident reconstruction
Bombings
Plane crashes, and more
Reverse engineering
Reverse engineering of a mechanical component requires a precise digital model of the objects to be reproduced. Rather than a set of points a precise digital model can be represented by a polygon mesh, a set of flat or curved NURBS surfaces, or ideally for mechanical components, a CAD solid model. A 3D scanner can be used to digitise free-form or gradually changing shaped components as well as prismatic geometries whereas a coordinate measuring machine is usually used only to determine simple dimensions of a highly prismatic model. These data points are then processed to create a usable digital model, usually using specialized reverse engineering software.
Real estate
Land or buildings can be scanned into a 3D model, which allows buyers to tour and inspect the property remotely, anywhere, without having to be present at the property. There is already at least one company providing 3D-scanned virtual real estate tours. A typical virtual tour would consist of dollhouse view, inside view, as well as a floor plan.
Virtual/remote tourism
The environment at a place of interest can be captured and converted into a 3D model. This model can then be explored by the public, either through a VR interface or a traditional "2D" interface. This allows the user to explore locations which are inconvenient for travel. A group of history students at Vancouver iTech Preparatory Middle School created a Virtual Museum by 3D Scanning more than 100 artifacts.
Cultural heritage
There have been many research projects undertaken via the scanning of historical sites and artifacts both for documentation and analysis purposes. The resulting models can be used for a variety of different analytical approaches.
The combined use of 3D scanning and 3D printing technologies allows the replication of real objects without the use of traditional plaster casting techniques, which in many cases can be too invasive to be performed on precious or delicate cultural heritage artifacts. In an example of a typical application scenario, a gargoyle model was digitally acquired using a 3D scanner and the resulting 3D data was processed using MeshLab. The digital 3D model was then fed to a rapid prototyping machine to create a real resin replica of the original object.
Creation of 3D models for Museums and Archaeological artifacts
Michelangelo
In 1999, two different research groups started scanning Michelangelo's statues. Stanford University, with a group led by Marc Levoy, used a custom laser triangulation scanner built by Cyberware to scan Michelangelo's statues in Florence, notably the David, the Prigioni and the four statues in the Medici Chapel. The scans produced a data point density of one sample per 0.25 mm, detailed enough to see Michelangelo's chisel marks. These detailed scans produced a large amount of data (up to 32 gigabytes), and processing the data from the scans took 5 months. At approximately the same time, a research group from IBM, led by H. Rushmeier and F. Bernardini, scanned the Pietà of Florence, acquiring both geometric and colour details. The digital model, the result of the Stanford scanning campaign, was used extensively in the statue's subsequent restoration in 2004.
Monticello
In 2002, David Luebke, et al. scanned Thomas Jefferson's Monticello. A commercial time of flight laser scanner, the DeltaSphere 3000, was used. The scanner data was later combined with colour data from digital photographs to create the Virtual Monticello, and the Jefferson's Cabinet exhibits in the New Orleans Museum of Art in 2003. The Virtual Monticello exhibit simulated a window looking into Jefferson's Library. The exhibit consisted of a rear projection display on a wall and a pair of stereo glasses for the viewer. The glasses, combined with polarised projectors, provided a 3D effect. Position tracking hardware on the glasses allowed the display to adapt as the viewer moves around, creating the illusion that the display is actually a hole in the wall looking into Jefferson's Library. The Jefferson's Cabinet exhibit was a barrier stereogram (essentially a non-active hologram that appears different from different angles) of Jefferson's Cabinet.
Cuneiform tablets
The first 3D models of cuneiform tablets were acquired in Germany in 2000. In 2003 the so-called Digital Hammurabi project acquired cuneiform tablets with a laser triangulation scanner using a regular grid pattern having a resolution of . In 2009, the use of high-resolution 3D scanners by Heidelberg University for tablet acquisition led to the development of the GigaMesh Software Framework to visualize and extract cuneiform characters from 3D models. It was used to process ca. 2,000 3D-digitized tablets of the Hilprecht Collection in Jena to create an Open Access benchmark dataset and an annotated collection of 3D models of tablets freely available under CC BY licenses.
Kasubi Tombs
A 2009 CyArk 3D scanning project at Uganda's historic Kasubi Tombs, a UNESCO World Heritage Site, using a Leica HDS 4500, produced detailed architectural models of Muzibu Azaala Mpanga, the main building at the complex and tomb of the Kabakas (Kings) of Uganda. A fire on March 16, 2010, burned down much of the Muzibu Azaala Mpanga structure, and reconstruction work is likely to lean heavily upon the dataset produced by the 3D scan mission.
"Plastico di Roma antica"
In 2005, Gabriele Guidi, et al. scanned the "Plastico di Roma antica", a model of Rome created in the last century. Neither the triangulation method nor the time-of-flight method satisfied the requirements of this project, because the item to be scanned was both large and contained small details. They found, though, that a modulated light scanner was able to provide both the ability to scan an object the size of the model and the accuracy that was needed. The modulated light scanner was supplemented by a triangulation scanner, which was used to scan some parts of the model.
Other projects
The 3D Encounters Project at the Petrie Museum of Egyptian Archaeology aims to use 3D laser scanning to create a high quality 3D image library of artefacts and enable digital travelling exhibitions of fragile Egyptian artefacts. English Heritage has investigated the use of 3D laser scanning for a wide range of applications to gain archaeological and condition data, and the National Conservation Centre in Liverpool has also produced 3D laser scans on commission, including portable object and in situ scans of archaeological sites. The Smithsonian Institution has a project called Smithsonian X 3D, notable for the breadth of types of 3D objects they are attempting to scan. These include small objects such as insects and flowers, human-sized objects such as Amelia Earhart's flight suit, room-sized objects such as the gunboat Philadelphia, and historic sites such as Liang Bua in Indonesia. Also of note, the data from these scans is being made available to the public for free and downloadable in several data formats.
Medical CAD/CAM
3D scanners are used to capture the 3D shape of a patient in orthotics and dentistry. They are gradually supplanting tedious plaster casting. CAD/CAM software is then used to design and manufacture the orthosis, prosthesis or dental implants.
Many chairside dental CAD/CAM systems and dental laboratory CAD/CAM systems use 3D scanner technologies to capture the 3D surface of a dental preparation (either in vivo or in vitro), in order to produce a restoration digitally using CAD software and ultimately produce the final restoration using a CAM technology (such as a CNC milling machine, or 3D printer). The chairside systems are designed to facilitate the 3D scanning of a preparation in vivo and produce the restoration (such as a crown, onlay, inlay or veneer).
Creation of 3D models for anatomy and biology education and cadaver models for educational neurosurgical simulations.
Quality assurance and industrial metrology
The digitalisation of real-world objects is of vital importance in various application domains. This method is especially applied in industrial quality assurance to measure the geometric dimension accuracy. Industrial processes such as assembly are complex, highly automated and typically based on CAD (computer-aided design) data. The problem is that the same degree of automation is also required for quality assurance. It is, for example, a very complex task to assemble a modern car, since it consists of many parts that must fit together at the very end of the production line. The optimal performance of this process is guaranteed by quality assurance systems. Especially the geometry of the metal parts must be checked in order to assure that they have the correct dimensions, fit together and finally work reliably.
Within highly automated processes, the resulting geometric measures are transferred to machines that manufacture the desired objects. Due to mechanical uncertainties and abrasions, the result may differ from its digital nominal. In order to automatically capture and evaluate these deviations, the manufactured part must be digitised as well. For this purpose, 3D scanners are applied to generate point samples from the object's surface which are finally compared against the nominal data.
The process of comparing 3D data against a CAD model is referred to as CAD-Compare, and can be a useful technique for applications such as determining wear patterns on moulds and tooling, determining accuracy of final build, analysing gap and flush, or analysing highly complex sculpted surfaces. At present, laser triangulation scanners, structured light and contact scanning are the predominant technologies employed for industrial purposes, with contact scanning remaining the slowest, but overall most accurate, option. Nevertheless, 3D scanning technology offers distinct advantages compared to traditional touch probe measurements. White-light or laser scanners accurately digitize objects all around, capturing fine details and freeform surfaces without reference points or spray. The entire surface is covered at record speed without the risk of damaging the part. Graphic comparison charts illustrate geometric deviations at the full-object level, providing deeper insights into potential causes.
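A simplified version of such a deviation analysis can be expressed as a nearest-neighbour comparison between the measured point cloud and points sampled from the nominal CAD surface. The sketch below uses SciPy's k-d tree for the search; the array names are illustrative, and commercial CAD-Compare tools use considerably more sophisticated point-to-surface projections and alignment steps.

```python
import numpy as np
from scipy.spatial import cKDTree

def scan_to_nominal_deviation(scan_points, nominal_points):
    """For each scanned point (N x 3), return the distance to the closest point
    sampled from the nominal CAD surface (M x 3). The per-point deviations can
    then be colour-mapped onto the part for a graphic comparison chart."""
    tree = cKDTree(nominal_points)
    distances, _ = tree.query(scan_points)
    return distances
```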
Object reconstruction
After the data has been collected, the acquired (and sometimes already processed) data from images or sensors needs to be reconstructed. This may be done in the same program or in some cases, the 3D data needs to be exported and imported into another program for further refining, and/or to add additional data. Such additional data could be GPS-location data. After the reconstruction, the data might be directly implemented into a local (GIS) map or a worldwide map such as Google Earth or Apple Maps.
Software
Several software packages are used in which the acquired (and sometimes already processed) data from images or sensors is imported. Notable software packages include:
Qlone
3DF Zephyr
Canoma
Leica Photogrammetry Suite
MeshLab
MountainsMap SEM (microscopy applications only)
PhotoModeler
SketchUp
tomviz
See also
3D computer graphics software
3D printing
3D reconstruction
3D selfie
Angle-sensitive pixel
Depth map
Digitization
Epipolar geometry
Full body scanner
Image scanner
Image reconstruction
Light-field camera
Photogrammetry
Range imaging
Remote sensing
Replicator
Structured-light 3D scanner
Thingiverse
References
Articles containing video clips
Geodesy
Surveying
Cartography
Measurement
Computer vision
3D computer graphics
3D imaging | 3D scanning | [
"Physics",
"Mathematics",
"Engineering"
] | 9,118 | [
"Physical quantities",
"Applied mathematics",
"Quantity",
"Packaging machinery",
"Measurement",
"Size",
"Surveying",
"Civil engineering",
"Artificial intelligence engineering",
"Geodesy",
"Computer vision"
] |
2,715,668 | https://en.wikipedia.org/wiki/NA60%20experiment | The NA60 experiment was a high energy heavy ions experiment at the CERN Super Proton Synchrotron. It studied "prompt dimuon and charm production with proton and heavy ion beams". The spokesperson for the experiment is Gianluca Usai. The experiment was proposed on 7 March 2000 and accepted on 15 June 2000. The experiment ran from October 2001 to 15 November 2004.
External links
NA60 website
CERN-NA-60 experiment record on INSPIRE-HEP
Grey Book entry
CERN experiments
Particle experiments | NA60 experiment | [
"Physics"
] | 106 | [
"Particle physics stubs",
"Particle physics"
] |
2,715,981 | https://en.wikipedia.org/wiki/4-Aminosalicylic%20acid | 4-Aminosalicylic acid, also known as para-aminosalicylic acid (PAS) and sold under the brand name Paser among others, is an antibiotic primarily used to treat tuberculosis. Specifically it is used to treat active drug resistant tuberculosis together with other antituberculosis medications. It has also been used as a second line agent to sulfasalazine in people with inflammatory bowel disease such as ulcerative colitis and Crohn's disease. It is typically taken by mouth.
Common side effects include nausea, abdominal pain, and diarrhea. Other side effects may include liver inflammation and allergic reactions. It is not recommended in people with end stage kidney disease. While there does not appear to be harm with use during pregnancy it has not been well studied in this population. 4-Aminosalicylic acid is believed to work by blocking the ability of bacteria to make folic acid.
4-Aminosalicylic acid was first made in 1902, and came into medical use in 1943. It is on the World Health Organization's List of Essential Medicines.
Medical uses
The main use for 4-aminosalicylic acid is for the treatment of tuberculosis infections.
In the United States, 4-aminosalicylic acid is indicated for the treatment of tuberculosis in combination with other active agents.
In the European Union, it is used in combination with other medicines to treat adults and children from 28 days of age who have multi-drug resistant tuberculosis when combinations without this medicine cannot be used, either because the disease is resistant to them or because of their side effects.
Tuberculosis
Aminosalicylic acid was introduced to clinical use in 1944. It was the second antibiotic found to be effective in the treatment of tuberculosis, after streptomycin. PAS formed part of the standard treatment for tuberculosis prior to the introduction of rifampicin and pyrazinamide.
Its potency is less than that of the current five first-line drugs (isoniazid, rifampicin, ethambutol, pyrazinamide, and streptomycin) for treating tuberculosis and its cost is higher, but it is still useful in the treatment of multidrug-resistant tuberculosis. PAS is always used in combination with other anti-TB drugs.
The dose when treating tuberculosis is 150 mg/kg/day divided into two to four daily doses; the usual adult dose is therefore approximately 2 to 4 grams four times a day. It is sold in the US as "Paser" by Jacobus Pharmaceutical, which comes in the form of 4 g packets of delayed-release granules. The drug should be taken with acid food or drink (orange, apple or tomato juice). PAS was once available in a combination formula with isoniazid called Pasinah or Pycamisan 33.
4-Aminosalicylic acid was approved for medical use in the United States in June 1994, and for medical use in the European Union in April 2014.
Inflammatory bowel disease
4-Aminosalicylic acid has also been used in the treatment of inflammatory bowel disease (ulcerative colitis and Crohn's disease), but has been superseded by other drugs such as sulfasalazine and mesalazine.
Others
4-Aminosalicylic acid has been investigated for the use in manganese chelation therapy, and a 17-year follow-up study shows that it might be superior to other chelation protocols such as EDTA.
Side effects
Gastrointestinal side-effects (nausea, vomiting, diarrhoea) are common; the delayed-release formulation is meant to help overcome this problem. It is also a cause of drug-induced hepatitis. Patients with glucose-6-phosphate dehydrogenase deficiency should avoid taking aminosalicylic acid as it causes haemolysis. Thyroid goitre is also a side-effect because aminosalicylic acid inhibits the synthesis of thyroid hormones.
Drug interactions include elevated phenytoin levels. When taken with rifampicin, the levels of rifampicin in the blood fall by about half.
It is not known whether it will harm an unborn baby.
Pharmacology
With heat, 4-aminosalicylic acid is decarboxylated to produce CO2 and 3-aminophenol.
Mode of action
4-Aminosalicylic acid has been shown to be a pro-drug and it is incorporated into the folate pathway by dihydropteroate synthase (DHPS) and dihydrofolate synthase (DHFS) to generate a hydroxyl dihydrofolate (Hydroxy-H2Pte and Hydroxy-H2PteGlu) antimetabolite, which competes with dihydrofolate at the binding site of dihydrofolate reductase (DHFR). The binding of Hydroxy-H2PteGlu to dihydrofolate reductase will block the enzymatic activity.
Mechanism of action
Some studies have shown that principal antitubercular action of PAS occurs via poisoning of folate metabolism.
Resistance
It was initially thought that resistance to 4-aminosalicylic acid came from a mutation affecting dihydrofolate reductase (DHFR). However, it was discovered that it was caused by a mutation affecting the activity of the dihydrofolate synthase (DHFS) enzyme. Mutations of isoleucine 43, arginine 49, serine 150, phenylalanine 152, glutamate 153, and alanine 183 were found to affect the binding pocket of the dihydrofolate synthase enzyme. This reduces the ability of hydroxy-H2Pte to bind to dihydrofolate synthase and prevents 4-aminosalicylic acid from poisoning the folate metabolism.
History
4-Aminosalicylic acid was first synthesized by Seidel and Bittner in 1902. It was rediscovered by the Swedish chemist Jörgen Lehmann upon the report that the tuberculosis bacterium avidly metabolized salicylic acid. Lehmann first tried PAS as an oral TB therapy late in 1944. The first patient made a dramatic recovery. The drug proved better than streptomycin, which had nerve toxicity and to which TB could easily develop resistance. In 1948, researchers at Britain's Medical Research Council demonstrated that combined treatment with streptomycin and PAS was superior to either drug alone, and established the principle of combination therapy for tuberculosis.
Other names
4-Aminosalicylic acid has many names including para-aminosalicylic acid, p-aminosalicylic acid, 4-ASA, and simply P.
References
Further reading
Anilines
Antibiotics
Antimetabolites
Salicylic acids
Tuberculosis
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | 4-Aminosalicylic acid | [
"Chemistry",
"Biology"
] | 1,439 | [
"Antimetabolites",
"Biotechnology products",
"Antibiotics",
"Biocides",
"Metabolism"
] |
1,958,621 | https://en.wikipedia.org/wiki/Superhydrophilicity | Superhydrophilicity refers to the phenomenon of excess hydrophilicity, or attraction to water; in superhydrophilic materials, the contact angle of water is equal to zero degrees. This effect was discovered in 1995 by the Research Institute of Toto Ltd. for titanium dioxide irradiated by sunlight. Under light irradiation, water dropped onto titanium dioxide forms no contact angle (almost 0 degrees).
Superhydrophilic material has various advantages. For example, it can defog glass, and it can also enable oil spots to be swept away easily with water. Such materials are already commercialized as door mirrors for cars, coatings for buildings, self-cleaning glass, etc.
Several mechanisms of this superhydrophilicity have been proposed by researchers. One is the change of the surface structure to a metastable structure, and another is cleaning the surface by the photodecomposition of dirt such as organic compounds adsorbed on the surface, after either of which water molecules can adsorb to the surface. The mechanism is still controversial, and it is too soon to decide which suggestion is correct. To decide, atomic scale measurements and other studies will be necessary.
See also
Superhydrophobicity, the opposite phenomenon
References
Further reading
Chemical properties
Surface science | Superhydrophilicity | [
"Physics",
"Chemistry",
"Materials_science"
] | 266 | [
"nan",
"Condensed matter physics",
"Surface science"
] |
1,959,732 | https://en.wikipedia.org/wiki/Scanning%20tunneling%20spectroscopy | Scanning tunneling spectroscopy (STS), an extension of scanning tunneling microscopy (STM), is used to provide information about the density of electrons in a sample as a function of their energy.
In scanning tunneling microscopy, a metal tip is moved over a conducting sample without making physical contact. A bias voltage applied between the sample and tip allows a current to flow between the two. This is a result of quantum tunneling across a barrier; in this instance, the barrier is the physical distance between the tip and the sample.
The scanning tunneling microscope is used to obtain "topographs" - topographic maps - of surfaces. The tip is rastered across a surface and, in constant current mode, a constant current is maintained between the tip and the sample by adjusting the height of the tip. A plot of the tip height at all measurement positions provides the topograph. These topographic images can provide atomically resolved information on metallic and semiconducting surfaces.
However, the scanning tunneling microscope does not measure the physical height of surface features. One such example of this limitation is an atom adsorbed onto a surface. The image will result in some perturbation of the height at this point. A detailed analysis of the way in which an image is formed shows that the transmission of the electric current between the tip and the sample depends on two factors: (1) the geometry of the sample and (2) the arrangement of the electrons in the sample. The arrangement of the electrons in the sample is described quantum mechanically by an "electron density". The electron density is a function of both position and energy, and is formally described as the local density of electron states, abbreviated as local density of states (LDOS), which is a function of energy.
Spectroscopy, in its most general sense, refers to a measurement of the number of something as a function of energy. For scanning tunneling spectroscopy the scanning tunneling microscope is used to measure the number of electrons (the LDOS) as a function of the electron energy. The electron energy is set by the electrical potential difference (voltage) between the sample and the tip. The location is set by the position of the tip.
At its simplest, a "scanning tunneling spectrum" is obtained by placing a scanning tunneling microscope tip above a particular place on the sample. With the height of the tip fixed, the electron tunneling current is then measured as a function of electron energy by varying the voltage between the tip and the sample (the tip to sample voltage sets the electron energy). The change of the current with the energy of the electrons is the simplest spectrum that can be obtained, it is often referred to as an I-V curve. As is shown below, it is the slope of the I-V curve at each voltage (often called the dI/dV-curve) which is more fundamental because dI/dV corresponds to the electron density of states at the local position of the tip, the LDOS.
Introduction
Scanning tunneling spectroscopy is an experimental technique which uses a scanning tunneling microscope (STM) to probe the local density of electronic states (LDOS) and the band gap of surfaces and materials on surfaces at the atomic scale. Generally, STS involves observation of changes in constant-current topographs with tip-sample bias, local measurement of the tunneling current versus tip-sample bias (I-V) curve, measurement of the tunneling conductance, $dI/dV$, or more than one of these. Since the tunneling current in a scanning tunneling microscope only flows in a region with diameter ~5 Å, STS is unusual in comparison with other surface spectroscopy techniques, which average over a larger surface region. The origins of STS are found in some of the earliest STM work of Gerd Binnig and Heinrich Rohrer, in which they observed changes in the appearance of some atoms in the (7 x 7) unit cell of the Si(111) – (7 x 7) surface with tip-sample bias. STS provides the possibility for probing the local electronic structure of metals, semiconductors, and thin insulators on a scale unobtainable with other spectroscopic methods. Additionally, topographic and spectroscopic data can be recorded simultaneously.
Tunneling current
Since STS relies on tunneling phenomena and measurement of the tunneling current or its derivative, understanding the expressions for the tunneling current is very important. Using the modified Bardeen transfer Hamiltonian method, which treats tunneling as a perturbation, the tunneling current ($I$) is found to be

$$I = \frac{4\pi e}{\hbar}\int_{-\infty}^{+\infty}\left[f(E_F - eV + \varepsilon) - f(E_F + \varepsilon)\right]\rho_s(E_F - eV + \varepsilon)\,\rho_t(E_F + \varepsilon)\,|M|^2\,d\varepsilon, \qquad (1)$$

where $f$ is the Fermi distribution function, $\rho_s$ and $\rho_t$ are the density of states (DOS) in the sample and tip, respectively, and $M$ is the tunneling matrix element between the modified wavefunctions of the tip and the sample surface. The tunneling matrix element,

$$M = \frac{\hbar^2}{2m}\int_{\partial\Omega}\left(\chi^*\,\vec{\nabla}\psi - \psi\,\vec{\nabla}\chi^*\right)\cdot d\vec{S}, \qquad (2)$$

describes the energy lowering due to the interaction between the two states. Here $\psi$ and $\chi$ are the sample wavefunction modified by the tip potential, and the tip wavefunction modified by the sample potential, respectively.
For low temperatures and a constant tunneling matrix element, the tunneling current reduces to

$$I \propto \int_0^{eV}\rho_s(E_F - eV + \varepsilon)\,\rho_t(E_F + \varepsilon)\,d\varepsilon, \qquad (3)$$

which is a convolution of the DOS of the tip and the sample. Generally, STS experiments attempt to probe the sample DOS, but equation (3) shows that the tip DOS must be known for the measurement to have meaning. Equation (3) implies that

$$\frac{dI}{dV} \propto \rho_s(E_F - eV) \qquad (4)$$

under the gross assumption that the tip DOS is constant. Under these ideal assumptions, the tunneling conductance is directly proportional to the sample DOS.
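A small numerical experiment can make equations (3) and (4) concrete. In the sketch below, the sample and tip densities of states are invented purely for illustration (a featureless tip and a sample with a gap-like dip at the Fermi level); integrating their product over the bias window and differentiating numerically reproduces the expected proportionality between $dI/dV$ and the sample DOS.

```python
import numpy as np

# Energy grid (eV, relative to E_F) and made-up densities of states.
E = np.linspace(-1.0, 1.0, 2001)
rho_tip = np.ones_like(E)                                   # featureless tip
rho_sample = 1.0 + 0.8 * np.tanh((np.abs(E) - 0.2) / 0.05)  # gap-like dip at E_F

def tunneling_current(V, E, rho_s, rho_t):
    """Low-temperature, constant-matrix-element current: integrate the product of
    sample states between E_F - eV and E_F with the tip DOS (equation (3))."""
    lo, hi = sorted((0.0, -V))
    window = (E >= lo) & (E <= hi)
    return np.sign(V) * np.trapz(rho_s[window] * rho_t[window], E[window])

V = np.linspace(-0.8, 0.8, 401)
I = np.array([tunneling_current(v, E, rho_sample, rho_tip) for v in V])
dIdV = np.gradient(I, V)     # with a flat tip DOS this tracks rho_sample(E_F - eV)
```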
For higher bias voltages, the predictions of simple planar tunneling models using the Wentzel-Kramers-Brillouin (WKB) approximation are useful. In the WKB theory, the tunneling current is predicted to be

$$I = \int_0^{eV}\rho_s(E)\,\rho_t(E - eV)\,T(E, eV, z)\,dE, \qquad (5)$$

where $\rho_s$ and $\rho_t$ are the density of states (DOS) in the sample and tip, respectively. The energy- and bias-dependent electron tunneling transition probability, T, is given by

$$T = \exp\left(-\frac{2z\sqrt{2m}}{\hbar}\sqrt{\frac{\phi_s + \phi_t}{2} + \frac{eV}{2} - E}\right), \qquad (6)$$

where $\phi_s$ and $\phi_t$ are the respective work functions of the sample and tip and $z$ is the distance from the sample to the tip.
The tip is often regarded as a single ball-shaped apex of a certain radius, essentially neglecting further shape-induced effects; this is the Tersoff-Hamann approximation. The tunneling current therefore becomes proportional to the local density of states (LDOS) of the sample.
Experimental methods
Acquiring standard STM topographs at many different tip-sample biases and comparing them to experimental topographic information is perhaps the most straightforward spectroscopic method. The tip-sample bias can also be changed on a line-by-line basis during a single scan. This method creates two interleaved images at different biases. Since only the states between the Fermi levels of the sample and the tip contribute to $I$, this method is a quick way to determine whether there are any interesting bias-dependent features on the surface. However, only limited information about the electronic structure can be extracted by this method, since the constant-current topographs depend on the tip and sample DOS's and the tunneling transmission probability, which depends on the tip-sample spacing, as described in equation (5).
By using modulation techniques, a constant current topograph and the spatially resolved $dI/dV$ can be acquired simultaneously. A small, high frequency sinusoidal modulation voltage is superimposed on the D.C. tip-sample bias. The A.C. component of the tunneling current is recorded using a lock-in amplifier, and the component in-phase with the tip-sample bias modulation gives $dI/dV$ directly. The amplitude of the modulation $V_m$ has to be kept smaller than the spacing of the characteristic spectral features. The broadening caused by the modulation amplitude is $2eV_m$ and it has to be added to the thermal broadening of $3.2k_BT$. In practice, the modulation frequency is chosen slightly higher than the bandwidth of the STM feedback system. This choice prevents the feedback control from compensating for the modulation by changing the tip-sample spacing and minimizes the displacement current 90° out-of-phase with the applied bias modulation. Such effects arise from the capacitance between the tip and the sample, which grows as the modulation frequency increases.
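The lock-in measurement described above can be mimicked in software with a toy demodulation: multiply the recorded current by in-phase and quadrature references at the modulation frequency and average over an integer number of periods. All signal parameters below are invented for the illustration; a real instrument performs this in dedicated hardware or firmware.

```python
import numpy as np

def lock_in(signal, t, f_mod):
    """Return the in-phase (x) and quadrature (y) amplitudes of `signal` at f_mod.
    The in-phase component is proportional to dI/dV at the D.C. bias point."""
    ref_x = np.sin(2 * np.pi * f_mod * t)
    ref_y = np.cos(2 * np.pi * f_mod * t)
    return 2 * np.mean(signal * ref_x), 2 * np.mean(signal * ref_y)

# Synthetic current response to a 5 mV modulation at 1 kHz with an assumed slope.
fs, f_mod, duration = 200_000.0, 1_000.0, 0.05          # Hz, Hz, s
t = np.arange(0.0, duration, 1.0 / fs)
di_dv, v_mod = 3.2e-9, 5e-3                              # A/V (assumed), V
i_ac = di_dv * v_mod * np.sin(2 * np.pi * f_mod * t)
x, y = lock_in(i_ac, t, f_mod)                           # x ~ di_dv * v_mod, y ~ 0
```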
In order to obtain I-V curves simultaneously with a topograph, a sample-and-hold circuit is used in the feedback loop for the z piezo signal. The sample-and-hold circuit freezes the voltage applied to the z piezo, which freezes the tip-sample distance, at the desired location allowing I-V measurements without the feedback system responding. The tip-sample bias is swept between the specified values, and the tunneling current is recorded. After the spectra acquisition, the tip-sample bias is returned to the scanning value, and the scan resumes. Using this method, the local electronic structure of semiconductors in the band gap can be probed.
There are two ways to record I-V curves in the manner described above. In constant-spacing scanning tunneling spectroscopy (CS-STS), the tip stops scanning at the desired location to obtain an I-V curve. The tip-sample spacing is adjusted to reach the desired initial current, which may be different from the initial current setpoint, at a specified tip-sample bias. A sample-and-hold amplifier freezes the z piezo feedback signal, which holds the tip-sample spacing constant by preventing the feedback system from changing the bias applied to the z piezo. The tip-sample bias is swept through the specified values, and the tunneling current is recorded. Either numerical differentiation of I(V) or lock-in detection as described above for modulation techniques can be used to find $dI/dV$. If lock-in detection is used, then an A.C. modulation voltage is applied to the D.C. tip-sample bias during the bias sweep and the A.C. component of the current in-phase with the modulation voltage is recorded.
In variable-spacing scanning tunneling spectroscopy (VS-STS), the same steps occur as in CS-STS up to the point of turning off the feedback. As the tip-sample bias is swept through the specified values, the tip-sample spacing is decreased continuously as the magnitude of the bias is reduced. Generally, a minimum tip-sample spacing is specified to prevent the tip from crashing into the sample surface at 0 V tip-sample bias. Lock-in detection and modulation techniques are used to find the conductivity, because the tunneling current is a function also of the varying tip-sample spacing. Numerical differentiation of I(V) with respect to V would include the contributions from the varying tip-sample spacing. Introduced by Mårtensson and Feenstra to allow conductivity measurements over several orders of magnitude, VS-STS is useful for conductivity measurements on systems with large band gaps. Such measurements are necessary to properly define the band edges and examine the gap for states.
Current-imaging-tunneling spectroscopy (CITS) is an STS technique where an I-V curve is recorded at each pixel in the STM topograph. Either variable-spacing or constant-spacing spectroscopy may be used to record the I-V curves. The conductance, $dI/dV$, can be obtained by numerical differentiation of I with respect to V or acquired using lock-in detection as described above. Because the topographic image and the tunneling spectroscopy data are obtained nearly simultaneously, there is nearly perfect registry of topographic and spectroscopic data. As a practical concern, the number of pixels in the scan or the scan area may be reduced to prevent piezo creep or thermal drift from moving the feature of study or the scan area during the duration of the scan. While most CITS data are obtained on a time scale of several minutes, some experiments may require stability over longer periods of time. One approach to improving the experimental design is by applying feature-oriented scanning (FOS) methodology.
Data interpretation
From the obtained I-V curves, the band gap of the sample at the location of the I-V measurement can be determined. By plotting the magnitude of I on a log scale versus the tip-sample bias, the band gap can clearly be determined. Although determination of the band gap is possible from a linear plot of the I-V curve, the log scale increases the sensitivity. Alternatively, a plot of the conductance, $dI/dV$, versus the tip-sample bias, V, allows one to locate the band edges that determine the band gap.
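In practice this step is often done numerically on the recorded I-V data. The sketch below differentiates a measured curve with NumPy and estimates the gap as the bias window over which the conductance stays below a small fraction of its maximum; the threshold value is an illustrative choice rather than a standard one, and lock-in data would replace the numerical derivative where available.

```python
import numpy as np

def conductance_and_gap(V, I, threshold_fraction=0.02):
    """Numerically differentiate an I-V curve and return (dI/dV, gap estimate).
    The gap is taken as the width of the bias window where |dI/dV| stays below
    threshold_fraction of its maximum value."""
    dIdV = np.gradient(I, V)
    low = np.abs(dIdV) < threshold_fraction * np.abs(dIdV).max()
    return dIdV, (V[low].max() - V[low].min()) if low.any() else 0.0

# V, I would be the recorded bias sweep (volts) and tunneling current (amperes).
```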
The structure in $dI/dV$, as a function of the tip-sample bias, is associated with the density of states of the surface when the tip-sample bias is less than the work functions of the tip and the sample. Usually, the WKB approximation for the tunneling current is used to interpret these measurements at low tip-sample bias relative to the tip and sample work functions. The derivative of equation (5), $I$ in the WKB approximation, is

$$\frac{dI}{dV} = e\,\rho_s(eV)\,\rho_t(0)\,T(eV, eV, z) + \int_0^{eV}\rho_s(E)\,\rho_t(E - eV)\,\frac{dT(E, eV, z)}{dV}\,dE, \qquad (7)$$

where $\rho_s$ is the sample density of states, $\rho_t$ is the tip density of states, and T is the tunneling transmission probability. Although the tunneling transmission probability T is generally unknown, at a fixed location T increases smoothly and monotonically with the tip-sample bias in the WKB approximation. Hence, structure in $dI/dV$ is usually assigned to features in the density of states in the first term of equation (7).
Interpretation of $dI/dV$ as a function of position is more complicated. Spatial variations in T show up in measurements of $dI/dV$ as an inverted topographic background. When obtained in constant current mode, images of the spatial variation of $dI/dV$ contain a convolution of topographic and electronic structure. An additional complication arises since in the low-bias limit. Thus, diverges as V approaches 0, preventing investigation of the local electronic structure near the Fermi level.
Since both the tunneling current, equation (5), and the conductance, equation (7), depend on the tip DOS and the tunneling transition probability, T, quantitative information about the sample DOS is very difficult to obtain. Additionally, the voltage dependence of T, which is usually unknown, can vary with position due to local fluctuations in the electronic structure of the surface. For some cases, normalizing dI/dV by dividing by I/V can minimize the effect of the voltage dependence of T and the influence of the tip-sample spacing. Using the WKB approximation, equations (5) and (7), we obtain:
Feenstra et al. argued that the dependencies of dI/dV and I/V on tip-sample spacing and tip-sample bias tend to cancel, since they appear as ratios. This cancellation reduces the normalized conductance to the following form:
where normalizes T to the DOS and describes the influence of the electric field in the tunneling gap on the decay length. Under the assumption that and vary slowly with tip-sample bias, the features in (dI/dV)/(I/V) reflect the sample DOS.
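As a hedged sketch that is not part of the source, the normalization described above, dividing dI/dV by I/V, can be carried out numerically on a measured I-V curve. The Gaussian broadening of I/V used to keep the ratio finite inside a band gap is a common practical choice assumed here, as are the synthetic data and parameter values.

```python
import numpy as np

def normalized_conductance(V, I, smear=0.05):
    """Compute (dI/dV)/(I/V) from a sampled I-V curve.

    V : evenly spaced bias values (V); I : tunneling current (A).
    `smear` (volts) sets a Gaussian broadening applied to I/V so the ratio
    stays finite inside a band gap -- a common practical choice assumed here.
    """
    dIdV = np.gradient(I, V)
    safe_V = np.where(np.abs(V) > 1e-12, V, np.nan)
    IoverV = I / safe_V                          # NaN exactly at V = 0
    mask = np.isnan(IoverV)
    IoverV[mask] = np.interp(V[mask], V[~mask], IoverV[~mask])
    dV = V[1] - V[0]
    half = int(3 * smear / dV)
    kernel = np.exp(-0.5 * (np.arange(-half, half + 1) * dV / smear) ** 2)
    kernel /= kernel.sum()                       # normalized Gaussian kernel
    IoverV_broadened = np.convolve(IoverV, kernel, mode="same")
    return dIdV / IoverV_broadened

# Example with a synthetic, semiconductor-like I-V curve (stand-in data):
V = np.linspace(-2.0, 2.0, 401)
I = 1e-9 * np.sinh(1.5 * V) ** 3
sigma_norm = normalized_conductance(V, I)
```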
Limitations
While STS can provide spectroscopic information with amazing spatial resolution, there are some limitations. The STM and STS lack chemical sensitivity. Since the tip-sample bias range in tunneling experiments is limited to about ±φ/e, where φ is the apparent barrier height, STM and STS only sample valence electron states. Element-specific information is generally impossible to extract from STM and STS experiments, since the chemical bond formation greatly perturbs the valence states.
At finite temperatures, the thermal broadening of the electron energy distribution due to the Fermi-distribution limits spectroscopic resolution. At , , and the sample and tip energy distribution spread are both . Hence, the total energy deviation is . Assuming the dispersion relation for simple metals, it follows from the uncertainty relation that
where is the Fermi energy, is the bottom of the valence band, is the Fermi wave vector, and is the lateral resolution. Since spatial resolution depends on the tip-sample spacing, smaller tip-sample spacings and higher topographic resolution blur the features in tunneling spectra.
Despite these limitations, STS and STM provide the possibility for probing the local electronic structure of metals, semiconductors, and thin insulators on a scale unobtainable with other spectroscopic methods. Additionally, topographic and spectroscopic data can be recorded simultaneously.
References
Further reading
Scanning probe microscopy
Spectroscopy | Scanning tunneling spectroscopy | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,306 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Scanning probe microscopy",
"Microscopy",
"Nanotechnology",
"Spectroscopy"
] |
1,960,070 | https://en.wikipedia.org/wiki/Calixarene | A calixarene is a macrocycle or cyclic oligomer based on methylene-linked phenols. With hydrophobic cavities that can hold smaller molecules or ions, calixarenes belong to the class of cavitands known in host–guest chemistry.
Nomenclature
Calixarene nomenclature is straightforward and involves counting the number of repeating units in the ring and including it in the name. A calix[4]arene has 4 units in the ring and a calix[6]arene has 6. A substituent in the meso position Rb is added to the name with a prefix C-, as in C-methylcalix[6]arene. The word calixarene is derived from the Greek calix or chalice, because this type of molecule resembles a vase (or cup), and from the word arene that refers to the aromatic building block.
Synthesis
Calixarenes are generally produced by condensation of two components: an electron-rich aromatic compound, classically a 4-substituted phenol, and an aldehyde, classically formaldehyde.
The scope for the aromatic component is broad and diverse. The key attribute is susceptibility toward hydroxyalkylation. The related resorcinarenes and pyrogallolarenes are produced from resorcinol and pyrogallol, respectively.
The aldehyde most often used is formaldehyde, while larger aldehydes, like acetaldehyde, are usually required in condensation reactions with resorcinol and pyrogallol to facilitate formation of the C4v symmetric vase conformation. Additionally, substituted aldehydes and some heterocycles (e.g. furan) may be used to impart additional functional groups onto the pendent groups of resorcinarenes and pyrogallolarenes.
Calixarenes can be challenging to synthesize, with reactions instead producing complex mixtures of linear and cyclic oligomers. With finely tuned starting materials and reaction conditions, however, synthesis can be surprisingly efficient. As parent compounds, calixarenes are sparingly soluble and have high melting points.
Structure
Calixarenes are characterised by a three-dimensional basket, cup or bucket shape. In calix[4]arenes the internal volume is around 10 cubic angstroms. Calixarenes are characterised by a wide upper rim, a narrow lower rim and a central annulus. With phenol as a starting material the 4 hydroxyl groups are intraannular on the lower rim. In a resorcin[4]arene 8 hydroxyl groups are placed extraannular on the upper rim. Calixarenes exist in different chemical conformations because rotation around the methylene bridge is not difficult. In calix[4]arene 4 up–down conformations exist: cone (point group C2v, C4v), partial cone Cs, 1,2-alternate C2h and 1,3-alternate D2d. The 4 hydroxyl groups interact by hydrogen bonding and stabilize the cone conformation. This conformation is in dynamic equilibrium with the other conformations. Conformations can be locked in place with proper substituents replacing the hydroxyl groups which increase the rotational barrier. Alternatively, placing a bulky substituent on the upper rim also locks a conformation. The calixarene based on p-tert-butyl phenol is also a cone. Calixarenes are structurally related to the pillararenes.
History
In 1872 Adolf von Baeyer mixed various aldehydes, including formaldehyde, with phenols in a strongly acidic solution. The resultant tars defied characterization but represented the typical products of a phenol/formaldehyde polymerization. Leo Baekeland discovered that these tars could be cured into a brittle substance which he marketed as "Bakelite". This polymer was the first commercial synthetic plastic.
The success of Bakelite spurred scientific investigations into the chemistry of the phenol/formaldehyde reaction. One result was the discovery made in 1942 by Alois Zinke, that p-alkyl phenols and formaldehyde in a strongly basic solution yield mixtures containing cyclic tetramers. Concomitantly, Joseph Niederl and H. J. Vogel obtained similar cyclic tetramers from the acid-catalyzed reaction of resorcinol and aldehydes such as benzaldehyde. A number of years later, John Cornforth showed that the product from p-tert-butylphenol and formaldehyde is a mixture of the cyclic tetramer and another ambiguous cyclomer. His interest in these compounds was in the tuberculostatic properties of their oxyethylated derivatives.
In the early 1970s C. David Gutsche recognized the calix shape of the cyclic tetramer and thought that it might furnish the structure for building an enzyme xenologue. He initiated a study that lasted for three decades. His attention to these compounds came from acquaintance with the Petrolite company's commercial demulsifiers, made by ethoxylation of the still ambiguous products from p-alkylphenols and formaldehyde. He introduced the name "calixarene": from "calix", the Greek name for a chalice, and "arene" for the presence of aryl groups in the cyclic array. He also determined the structures for the cyclic tetramer, hexamer, and octamer, along with procedures for obtaining these materials in good to excellent yields. He then established procedures for attaching functional groups to both the upper and lower rims and mapped the conformational states of these flexible molecules. Additionally, he proved that the cyclic tetramer can be frozen into a cone conformation, by the addition of measurably large substituents to the lower "rim" of the calix shape.
Concomitant with Gutsche's work was that of Hermann Kämmerer and Volker Böhmer. They developed methods for the stepwise synthesis of calixarenes. Chemists at the University of Parma, Giovanni Andreetti, Rocco Ungaro and Andrea Pochini, were the first to resolve X-ray crystallographic images of calixarenes. In the mid 1980s, other investigators joined the field of calixarene chemistry. It has become an important aspect of supramolecular chemistry and attracts the attention of hundreds of scientists around the world. The Niederl cyclic tetramers from resorcinol and aldehydes were studied in detail by Donald J. Cram, who called the derived compounds "cavitands" and "carcerands". An accurate and detailed history of the calixarenes along with extensive discussion of calixarene chemistry can be found in Gutsche's monograph.
Medical uses
Water-soluble calixarenes, such as para-sulfonatocalix[4]arene, have been examined not only for drug delivery but also for their potential as pharmaceutical drugs themselves, directly combating disease. Calix[6]arene, for instance, has been shown to inhibit the biogenesis of extracellular vesicles in pancreatic cancer. This impairs the release of matrix metalloprotease enzymes in the tumor microenvironment, in turn slowing the metastasis of the disease. Thus, in conjunction with their low toxicity, calixarenes are considered promising agents for combating oncological disease.
Host guest interactions
Calixarenes are used in commercial applications as sodium selective electrodes for the measurement of sodium levels in blood. Calixarenes also form complexes with cadmium, lead, lanthanides and actinides. Calix[5]arene and the C70 fullerene in p-xylene form a ball-and-socket supramolecular complex. Calixarenes also form exo-calix ammonium salts with aliphatic amines such as piperidine. Derivatives or homologues of calix[4]arene exhibit highly selective binding behavior towards anions (especially halogen anions) with changes in optical properties such as fluorescence.
Calixarenes in general, and calix[4]arenes more specifically, have been extensively investigated as platforms for catalysts. Some of these complexes are catalytically active in hydrolytic reactions.
Calixarenes are of interest as enzyme mimetics, components of ion sensitive electrodes or sensors, selective membranes, non-linear optics and in HPLC stationary phases. In addition, in nanotechnology calixarenes are used as negative resist for high-resolution electron beam lithography.
A tetrathia[4]arene is found to mimic some properties of the aquaporin proteins. This calixarene adopts a 1,3-alternate conformation (methoxy groups populate the lower ring) and water is not contained in the basket but grabbed by two opposing tert-butyl groups on the outer rim in a pincer. The nonporous and hydrophobic crystals are soaked in water for 8 hours in which time the calixarene:water ratio nevertheless acquires the value of one.
Calixarenes accelerate reactions taking place inside the concavity by a combination of local concentration effect and polar stabilization of the transition state. An extended resorcin[4]arene cavitand is found to accelerate the reaction rate of a Menshutkin reaction between quinuclidine and butylbromide by a factor of 1600.
In heterocalixarenes the phenolic units are replaced by heterocycles, for instance by furans in calix[n]furanes and by pyridines in calix[n]pyridines. Calixarenes have been used as the macrocycle portion of a rotaxane and two calixarene molecules covalently joined together by the lower rims form carcerands.
References
Macrocycles
Chelating agents
Cyclophanes
Alkylphenols | Calixarene | [
"Chemistry"
] | 2,111 | [
"Organic compounds",
"Chelating agents",
"Macrocycles",
"Process chemicals"
] |
1,960,338 | https://en.wikipedia.org/wiki/Hoogsteen%20base%20pair | A Hoogsteen base pair is a variation of base-pairing in nucleic acids such as the A•T pair. In this manner, two nucleobases, one on each strand, can be held together by hydrogen bonds in the major groove. A Hoogsteen base pair employs the N7 position of the purine base (as a hydrogen-bond acceptor) and the C6 amino group (as a donor), which bind the Watson–Crick (N3–C4) face of the pyrimidine base.
History
Ten years after James Watson and Francis Crick published their model of the DNA double helix, Karst Hoogsteen reported a crystal structure of a complex in which analogues of A and T formed a base pair that had a different geometry from that described by Watson and Crick. Similarly, an alternative base-pairing geometry can occur for G•C pairs. Hoogsteen pointed out that if the alternative hydrogen-bonding patterns were present in DNA, then the double helix would have to assume a quite different shape. Hoogsteen base pairs are observed in alternative structures such as the four-stranded G-quadruplex structures that form in DNA and RNA.
Chemical properties
Hoogsteen pairs have quite different properties from Watson–Crick base pairs. The angle between the two glycosidic bonds (ca. 80° in the A•T pair) is larger and the C1–C1 distance (ca. 860 pm or 8.6 Å) is smaller than in the regular geometry. In some cases, called reversed Hoogsteen base pairs, one base is rotated 180° with respect to the other.
In some DNA sequences, especially CA and TA dinucleotides, Hoogsteen base pairs exist as transient entities that are present in thermal equilibrium with standard Watson–Crick base pairs. The detection of the transient species required the use of NMR relaxation dispersion spectroscopy applied to macromolecules.
Hoogsteen base pairs have been observed in protein–DNA complexes. Some proteins have evolved to recognize only one base-pair type, and use intermolecular interactions to shift the equilibrium between the two geometries.
DNA has many features that allow its sequence-specific recognition by proteins. This recognition was originally thought to primarily involve specific hydrogen-bonding interactions between amino-acid side chains and bases. But it soon became clear that there was no identifiable one-to-one correspondence — that is, there was no simple code to be read. Part of the problem is that DNA can undergo conformational changes that distort the classical double helix. The resulting variations alter the presentation of DNA bases to protein molecules and thus affect the recognition mechanism.
As distortions in the double helix are themselves dependent on base sequence, proteins are able to recognize DNA in a manner similar to the way that they recognize other proteins and small ligand molecules, i.e. via geometric shape (instead of the specific sequence). For example, stretches of A and T bases can lead to narrowing of the minor groove of DNA (the narrower of the two grooves in the double helix), resulting in enhanced local negative electrostatic potentials, which in turn create binding sites for positively charged arginine amino-acid residues on the protein.
Triplex structures
This non-Watson–Crick base-pairing allows the third strands to wind around the duplexes, which are assembled in the Watson–Crick pattern, and form triple-stranded helices such as (poly(dA)•2poly(dT)) and (poly(rG)•2poly(rC)). It can be also seen in three-dimensional structures of transfer RNA, as T54•A58 and U8•A14.
Triple-helix base pairing
Watson–Crick base pairs are indicated by a "•", "-", or a "." (example: A•T, or poly(rC)•2poly(rC)).
Hoogsteen triple-stranded DNA base pairs are indicated by a "*" or a ":" (example: C•G*C+, T•A*T, C•G*G, or T•A*A).
Quadruplex structures
Hoogsteen pairing also allows the formation of secondary structures in G-rich single-stranded DNA and RNA called G-quadruplexes (G4-DNA and G4-RNA). Evidence exists for both in vitro and in vivo formation of G4s. Genomic G4s have been suggested to regulate gene transcription and, at the RNA level, to inhibit protein synthesis through steric inhibition of ribosome function. Formation requires four triplets of G, separated by short spacers. This permits assembly of planar quartets composed of stacked associations of Hoogsteen-bonded guanine molecules.
See also
Wobble base pair
G-quadruplex
Guanine tetrad
Nucleic acid tertiary structure
Polypurine reverse-Hoogsteen hairpins (PPRHs), oligonucleotides that can bind either DNA or RNA and decrease gene expression.
References
Nucleic acids | Hoogsteen base pair | [
"Chemistry"
] | 1,069 | [
"Biomolecules by chemical classification",
"Nucleic acids"
] |
1,960,886 | https://en.wikipedia.org/wiki/Machine%20coordinate%20system | In the manufacturing industry, with regard to numerically controlled machine tools, the phrase machine coordinate system refers to the physical limits of the motion of the machine in each of its axes, and to the numerical coordinate which is assigned (by the machine tool builder) to each of these limits. CNC machinery refers to machines and devices that are controlled using programmed commands encoded onto a storage medium, and NC refers to the automation of machine tools that are operated by abstract commands programmed and encoded onto a storage medium.
Types of Machine Coordinate Systems
The absolute coordinate system uses the cartesian coordinate system, where a point on the machine is specifically defined. The cartesian coordinate system is a set of three number lines labeled X, Y, and Z, which are used to determine the point in the workspace that the machine needs to operate in. This absolute coordinate system allows the machine operator to edit the machine code in a way where the specifically defined section is easy to pinpoint. Before putting in these coordinates, though, the machine operator needs to set a point of origin on the machine. The point of origin in the cartesian system is 0, 0, 0. This allows the machine operator to know which directions are positive and negative in the cartesian plane. It also makes sure that every move made is based on the distance from this origin point.
The relative coordinate system, also known as the incremental coordinate system, also uses the cartesian coordinate system, but in a different manner. The relative coordinate system allows the machine operator to define a point in the workspace based on, or relative to, the previous point that the machine tool was at. This means that after every move the machine tool makes, the point that it ends up at is based on the distance from the previous point. So, the origin set on the machine changes after every move.
The polar coordinate system does not use the cartesian coordinate system. It uses the distance from the point of origin to the point, and the angle from either the point of origin or the previous point used. This means that the polar coordinate system can be used in tandem with either the absolute coordinate system or the relative coordinate system; this just has to be specified within the code of the machine being used. The points in the polar coordinate system can be measured using a ruler and protractor to get an approximate point, or the machine operator can use trigonometry to find the exact number needed for the machine to work.
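As an illustrative sketch that is not from the source, the conventions described above can be compared in code: incremental moves are accumulated from the previous point, while a polar point is converted to Cartesian coordinates from its distance and angle. The function names and example moves are assumptions made only for the example.

```python
import math

def incremental_to_absolute(origin, moves):
    """Accumulate relative (incremental) XY moves into absolute coordinates."""
    x, y = origin
    points = []
    for dx, dy in moves:
        x, y = x + dx, y + dy            # each move is measured from the previous point
        points.append((x, y))
    return points

def polar_to_cartesian(radius, angle_deg, origin=(0.0, 0.0)):
    """Convert a polar coordinate (distance + angle) to an absolute XY point."""
    a = math.radians(angle_deg)
    return (origin[0] + radius * math.cos(a), origin[1] + radius * math.sin(a))

# Illustrative use: three incremental moves from the part origin,
# then one point specified in polar form (30 mm at 45 degrees).
print(incremental_to_absolute((0.0, 0.0), [(10, 0), (0, 5), (-2.5, 2.5)]))
print(polar_to_cartesian(30.0, 45.0))
```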
References
Industrial machinery | Machine coordinate system | [
"Engineering"
] | 504 | [
"Industrial machinery"
] |
1,961,275 | https://en.wikipedia.org/wiki/Amyloid-beta%20precursor%20protein | Amyloid-beta precursor protein (APP) is an integral membrane protein expressed in many tissues and concentrated in the synapses of neurons. It functions as a cell surface receptor and has been implicated as a regulator of synapse formation, neural plasticity, antimicrobial activity, and iron export. It is coded for by the gene APP and regulated by substrate presentation. APP is best known as the precursor molecule whose proteolysis generates amyloid beta (Aβ), a polypeptide containing 37 to 49 amino acid residues, whose amyloid fibrillar form is the primary component of amyloid plaques found in the brains of Alzheimer's disease patients.
Genetics
Amyloid-beta precursor protein is an ancient and highly conserved protein. In humans, the gene APP is located on chromosome 21 and contains 18 exons spanning 290 kilobases. Several alternative splicing isoforms of APP have been observed in humans, ranging in length from 639 to 770 amino acids, with certain isoforms preferentially expressed in neurons; changes in the neuronal ratio of these isoforms have been associated with Alzheimer's disease. Homologous proteins have been identified in other organisms such as Drosophila (fruit flies), C. elegans (roundworms), and all mammals. The amyloid beta region of the protein, located in the membrane-spanning domain, is not well conserved across species and has no obvious connection with APP's native-state biological functions.
Mutations in critical regions of amyloid precursor protein, including the region that generates amyloid beta (Aβ), cause familial susceptibility to Alzheimer's disease. For example, several mutations outside the Aβ region associated with familial Alzheimer's have been found to dramatically increase production of Aβ.
A mutation (A673T) in the APP gene protects against Alzheimer's disease. This substitution is adjacent to the beta secretase cleavage site and results in a 40% reduction in the formation of amyloid beta in vitro.
Structure
A number of different structural domains that fold mostly on their own have been found in the APP sequence. The extracellular region, much larger than the intracellular region, is divided into the E1 and E2 domains, linked by an acidic domain (AcD); E1 contains two subdomains including a growth factor-like domain (GFLD) and a copper-binding domain (CuBD) interacting tightly together. A serine protease inhibitor domain, absent from the isoform differentially expressed in the brain, is found between the acidic region and the E2 domain. The complete crystal structure of APP has not yet been solved; however, individual domains have been successfully crystallized, including the growth factor-like domain, the copper-binding domain, the complete E1 domain and the E2 domain.
Isoform diversity
Amyloid-beta precursor protein is highly versatile, with several isoforms generated through alternative splicing of its mRNA. The primary isoforms include APP695, APP751, and APP770, differing in their inclusion of certain exons, mainly exons 7 and 8. APP695 is predominantly expressed in neuronal cells and is crucial for normal neuronal function. APP751 and APP770 are more widely expressed in non-neuronal tissues but exhibit distinct expression patterns during neuron differentiation. The differential expression of these isoforms plays a significant role in cellular processes such as neurodevelopment, synaptic plasticity, and the pathogenesis of Alzheimer's disease. Understanding the isoform diversity of APP is essential for deciphering its various physiological and pathological roles.
Post-translational processing
APP undergoes extensive post-translational modification including glycosylation, phosphorylation, sialylation, and tyrosine sulfation, as well as many types of proteolytic processing to generate peptide fragments. It is commonly cleaved by proteases in the secretase family; alpha secretase and beta secretase both remove nearly the entire extracellular domain to release membrane-anchored carboxy-terminal fragments that may be associated with apoptosis. Cleavage by gamma secretase within the membrane-spanning domain after beta-secretase cleavage generates the amyloid-beta fragment; gamma secretase is a large multi-subunit complex whose components have not yet been fully characterized, but include presenilin, whose gene has been identified as a major genetic risk factor for Alzheimer's.
The amyloidogenic processing of APP has been linked to its presence in lipid rafts. When APP molecules occupy a lipid raft region of membrane, they are more accessible to and differentially cleaved by beta secretase, whereas APP molecules outside a raft are differentially cleaved by the non-amyloidogenic alpha secretase. Gamma secretase activity has also been associated with lipid rafts. The role of cholesterol in lipid raft maintenance has been cited as a likely explanation for observations that high cholesterol and apolipoprotein E genotype are major risk factors for Alzheimer's disease.
Biological function
Although the native biological role of APP is of obvious interest to Alzheimer's research, thorough understanding has remained elusive. Experimental models of Alzheimer's disease are commonly used by researchers to gain better understandings about the biological function of APP in disease pathology and progression.
Synaptic formation and repair
The most-substantiated role for APP is in synaptic formation and repair; its expression is upregulated during neuronal differentiation and after neural injury. Roles in cell signaling, long-term potentiation, and cell adhesion have been proposed and supported by as-yet limited research. In particular, similarities in post-translational processing have invited comparisons to the signaling role of the surface receptor protein Notch.
APP knockout mice are viable and have relatively minor phenotypic effects including impaired long-term potentiation and memory loss without general neuron loss. On the other hand, transgenic mice with upregulated APP expression have also been reported to show impaired long-term potentiation.
The logical inference is that because Aβ accumulates excessively in Alzheimer's disease its precursor, APP, would be elevated as well. However, neuronal cell bodies contain less APP as a function of their proximity to amyloid plaques. The data indicate that this deficit in APP results from a decline in production rather than an increase in catalysis. Loss of a neuron's APP may affect physiological deficits that contribute to dementia.
Somatic recombination
In neurons of the human brain, somatic recombination occurs frequently in the gene that encodes APP. Neurons from individuals with sporadic Alzheimer's disease show greater APP gene diversity due to somatic recombination than neurons from healthy individuals.
Anterograde neuronal transport
Molecules synthesized in the cell bodies of neurons must be conveyed outward to the distal synapses. This is accomplished via fast anterograde transport. It has been found that APP can mediate interaction between cargo and kinesin and thus facilitate this transport. Specifically, a short peptide 15-amino-acid sequence from the cytoplasmic carboxy-terminus is necessary for interaction with the motor protein.
Additionally, it has been shown that the interaction between APP and kinesin is specific to the peptide sequence of APP. In a recent experiment involving transport of peptide-conjugated colored beads, controls were conjugated to a single amino acid, glycine, such that they display the same terminal carboxylic acid group as APP without the intervening 15-amino-acid sequence mentioned above. The control beads were not motile, which demonstrated that the terminal COOH moiety of peptides is not sufficient to mediate transport.
Iron export
A different perspective on Alzheimer's is revealed by a mouse study that has found that APP possesses ferroxidase activity similar to ceruloplasmin, facilitating iron export through interaction with ferroportin; it seems that this activity is blocked by zinc trapped by accumulated Aβ in Alzheimer's. It has been shown that a single nucleotide polymorphism in the 5'UTR of APP mRNA can disrupt its translation.
The hypothesis that APP has ferroxidase activity in its E2 domain and facilitates export of Fe(II) is possibly incorrect since the proposed ferroxidase site of APP located in the E2 domain does not have ferroxidase activity.
As APP does not possess ferroxidase activity within its E2 domain, the mechanism of APP-modulated iron efflux from ferroportin has come under scrutiny. One model suggests that APP acts to stabilize the iron efflux protein ferroportin in the plasma membrane of cells thereby increasing the total number of ferroportin molecules at the membrane. These iron-transporters can then be activated by known mammalian ferroxidases (i.e. ceruloplasmin or hephaestin).
Hormonal regulation
The amyloid-β precursor protein (AβPP), and all associated secretases, are expressed early in development and play a key role in the endocrinology of reproduction – with the differential processing of AβPP by secretases regulating human embryonic stem cell (hESC) proliferation as well as their differentiation into neural precursor cells (NPC). The pregnancy hormone human chorionic gonadotropin (hCG) increases AβPP expression and hESC proliferation while progesterone directs AβPP processing towards the non-amyloidogenic pathway, which promotes hESC differentiation into NPC.
AβPP and its cleavage products do not promote the proliferation and differentiation of post-mitotic neurons; rather, the overexpression of either wild-type or mutant AβPP in post-mitotic neurons induces apoptotic death following their re-entry into the cell cycle. It is postulated that the loss of sex steroids (including progesterone) and the elevation in luteinizing hormone, the adult equivalent of hCG, post-menopause and during andropause drive amyloid-β production and re-entry of post-mitotic neurons into the cell cycle.
Interactions
Amyloid precursor protein has been shown to interact with:
APBA1,
APBA2,
APBA3,
APBB1,
APPBP1,
APPBP2,
BCAP31,
BLMH,
CLSTN1,
CAV1,
COL25A1,
FBLN1,
GSN,
HSD17B10, and
SHC1.
APP interacts with reelin, a protein implicated in a number of brain disorders, including Alzheimer's disease.
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on Early-Onset Familial Alzheimer Disease
Entrez Gene: APP amyloid beta (A4) precursor protein (peptidase nexin-II, Alzheimer disease)
Alzheimer's disease
Single-pass transmembrane proteins
Neurochemistry
Amyloidosis
Precursor proteins | Amyloid-beta precursor protein | [
"Chemistry",
"Biology"
] | 2,303 | [
"Biochemistry",
"Neurochemistry"
] |
1,961,594 | https://en.wikipedia.org/wiki/Scavenger%20receptor%20%28immunology%29 | Scavenger receptors are a large and diverse superfamily of cell surface receptors. Its properties were first recorded in 1970 by Drs. Brown and Goldstein, with the defining property being the ability to bind and remove modified low density lipoproteins (LDL). Today scavenger receptors are known to be involved in a wide range of processes, such as: homeostasis, apoptosis, inflammatory diseases and pathogen clearance. Scavenger receptors are mainly found on myeloid cells and other cells that bind to numerous ligands, primarily endogenous and modified host molecules together with pathogen-associated molecular patterns (PAMPs), and remove them. The Kupffer cells in the liver are particularly rich in scavenger receptors, including SR-A I, SR-A II, and MARCO.
Function
The scavenger receptor superfamily is defined by its ability to recognize and bind a broad range of common ligands. These ligands include polyanionic ligands such as lipoproteins, apoptotic cells, cholesterol ester, phospholipids, proteoglycans, ferritin, and carbohydrates. This broad recognition range allows scavenger receptors to play an important role in homeostasis and the combating of diseases. This is accomplished through the recognition of various PAMPs and DAMPs: recognition of PAMPs leads to the removal, or scavenging, of pathogens, while recognition of DAMPs leads to the removal of apoptotic cells, self-reactive antigens, and the products of oxidative stress.
In atherosclerotic lesions, macrophages that express scavenger receptors on their plasma membrane aggressively take up the oxidized LDL deposited in the blood vessel wall and develop into foam cells. They also secrete various inflammatory cytokines and accelerate the development of atherosclerosis.
Types
Scavenger receptors are incredibly diverse and therefore, organized into many different classes, starting at A and continuing to L. This organization is based on their structural properties. Due to the diversity and ongoing research into scavenger receptors, the receptors lack an accepted nomenclature and have been described under different names. In 2014 a new nomenclature was proposed that has been used by some researchers, although no official recognition has been given.
Class A is mainly expressed in the macrophage, as a protein whose molecular weight is about 80 kDa and which forms a trimer; it is composed of 1) a cytosolic domain, 2) a transmembrane domain, 3) a spacer domain, 4) an alpha-helical coiled-coil domain, 5) a collagen-like domain, and 6) a cysteine-rich domain.
Class B has two transmembrane regions.
Class C is a transmembrane protein whose N-terminus is located extracellularly.
Class A
Class A receptors are a type II membrane protein who use their collagen-like domain for ligand binding.
Members include: Scavenger receptor type 1 (SR-A1), which is a trimer with a molecular weight of about 220-250 kDa (the molecular weight of the monomeric protein is about 80 kDa). It preferentially binds modified LDL, either acetylated (acLDL) or oxidized (oxLDL). Other ligands include: β-amyloid, heat shock proteins, surface molecules of Gram-positive and Gram-negative bacteria, and hepatitis C virus.
SR-A1 can be alternatively spliced to generate a truncation at the C-terminus; this variant is contained within the endoplasmic reticulum and, just like the unspliced version, has a strong affinity for polyanionic ligand binding.
SCARA1 or MSR1 (SR-A1): besides macrophages they can be found on smooth vascular muscle cells and endothelial tissues; oxidative stress enhances their presence on the endothelium.
SCARA2 or MARCO (SR-A6): only found on macrophages in the peritoneum, lymph nodes, liver and specific zones of the spleen. Bacteria and bacterial lipopolysaccharide stimulate its expression; SR-A6 is unable to bind modified LDL.
SCARA3, MSRL1 or APC7 (SR-A3): plays a significant role in the protection against reactive oxygen species (ROS).
SCARA4 or COLEC12 (SR-A4): acts as a receptor for the detection, engulfment and destruction of oxidatively modified LDL for vascular endothelial cells.
SCARA5 or TESR (SR-A5): located in a diverse set of tissues, such as lung, placenta, intestine, heart and epithelial cells; it has a high affinity for bacteria but not for modified LDL.
Class B
CD36 and scavenger receptor class BI are identified as genes encoding for oxidized LDL receptors and classified into scavenger receptor B (SR-B). Both proteins have two transmembrane domains with an extracellular loop, and they are concentrated in a specific plasma membrane microdomain, the caveolae.
Members include:
SCARB1 or CD36L1 (SR-B1): can interact not only with oxidized LDL but also with normal LDL and high-density lipoproteins (HDL), and plays an important role in their transportation into the cells. Recent studies have indicated that SR-B1 is likely to be the major receptor involved in HDL metabolism in mice and humans. Besides LDL and HDL, SR-B1 binds to viruses and bacteria. SR-B1 is located on hepatocytes, steroidogenic cells, the arterial wall and macrophages. Mutations in SR-B1 have a negative effect on fertility and the innate immune response, and lead to an increase in atherosclerosis.
SCARB2
SCARB3 or CD36 (SR-B2): has been implicated in cell adhesion, the development of blood vessels, the phagocytosis of apoptotic cells, and the metabolism of long-chain fatty acids. Furthermore, it has been shown that CD36 is heavily involved in macrophage migration and signalling, together with protecting the host against bacteria, fungi and malaria parasites. In experimental mouse models of atherosclerosis in which the gene for CD36 has been deleted, the mice have a greatly reduced number of atherosclerotic lesions. CD36 can be found in many different cells, for example, insulin-responsive cells, hematopoietic cells like platelets, monocytes, and macrophages, endothelial cells, and specialized epithelial cells in the breast and the eye.
Other
Some receptors that can bind to oxidized LDL have been discovered.
CD68 and its mouse homologue, macrosialin, has a unique N-terminal mucin-like domain.
Mucin is a naturally occurring viscous substance (such as that found in nattō or okra) that is composed of a protein and covalently linked polysaccharides. A Drosophila class C scavenger receptor (dSR-C1) also has a mucin-like structure.
Lectin-like oxidized LDL receptor-1 (LOX-1) was isolated from an aortic endothelial cell; recently, it has been discovered in macrophages and vascular smooth muscle cells in artery vessels. The expression of LOX-1 is induced by inflammatory stimuli, so LOX-1 is thought to be involved in the development of atherosclerotic lesions.
References
External links
Human scavenger-like receptors in Membranome database
Receptors
Single-pass transmembrane proteins | Scavenger receptor (immunology) | [
"Chemistry"
] | 1,641 | [
"Receptors",
"Signal transduction"
] |
1,961,786 | https://en.wikipedia.org/wiki/Van%20der%20Pol%20oscillator | In the study of dynamical systems, the van der Pol oscillator (named for Dutch physicist Balthasar van der Pol) is a non-conservative, oscillating system with non-linear damping. It evolves in time according to the second-order differential equation
$$\frac{d^2x}{dt^2} - \mu\left(1 - x^2\right)\frac{dx}{dt} + x = 0,$$
where $x$ is the position coordinate (a function of the time $t$) and $\mu$ is a scalar parameter indicating the nonlinearity and the strength of the damping.
History
The Van der Pol oscillator was originally proposed by the Dutch electrical engineer and physicist Balthasar van der Pol while he was working at Philips. Van der Pol found stable oscillations, which he subsequently called relaxation-oscillations and are now known as a type of limit cycle, in electrical circuits employing vacuum tubes. When these circuits are driven near the limit cycle, they become entrained, i.e. the driving signal pulls the current along with it. Van der Pol and his colleague, van der Mark, reported in the September 1927 issue of Nature that at certain drive frequencies an irregular noise was heard, which was later found to be the result of deterministic chaos.
The Van der Pol equation has a long history of being used in both the physical and biological sciences. For instance, in biology, Fitzhugh and Nagumo extended the equation in a planar field as a model for action potentials of neurons. The equation has also been utilised in seismology to model the two plates in a geological fault, and in studies of phonation to model the right and left vocal fold oscillators.
Two-dimensional form
Liénard's theorem can be used to prove that the system has a limit cycle. Applying the Liénard transformation $y = x - x^3/3 - \dot{x}/\mu$, where the dot indicates the time derivative, the Van der Pol oscillator can be written in its two-dimensional form:
$$\dot{x} = \mu\left(x - \tfrac{1}{3}x^3 - y\right), \qquad \dot{y} = \tfrac{1}{\mu}\,x.$$
Another commonly used form, based on the transformation $y = \dot{x}$, leads to:
$$\dot{x} = y, \qquad \dot{y} = \mu\left(1 - x^2\right)y - x.$$
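As a brief illustration that is not part of the original article, the second two-dimensional form above can be integrated numerically; the value of μ, the time span, and the initial condition below are arbitrary choices made only for the sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, state, mu):
    # Two-dimensional form with y = dx/dt:
    #   dx/dt = y
    #   dy/dt = mu * (1 - x**2) * y - x
    x, y = state
    return [y, mu * (1.0 - x**2) * y - x]

mu = 5.0                                    # arbitrary damping/nonlinearity parameter
sol = solve_ivp(van_der_pol, (0.0, 60.0), [0.1, 0.0],
                args=(mu,), max_step=0.01)

# For any nonzero initial condition the trajectory settles onto the limit cycle;
# its amplitude in x approaches roughly 2.
print(np.max(np.abs(sol.y[0][len(sol.t) // 2:])))
```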
Results for the unforced oscillator
When $\mu = 0$, i.e. there is no damping function, the equation becomes
$$\frac{d^2x}{dt^2} + x = 0.$$
This is a form of the simple harmonic oscillator, and there is always conservation of energy.
When $\mu > 0$, all initial conditions converge to a globally unique limit cycle. Near the origin the system is unstable, and far from the origin, the system is damped.
The Van der Pol oscillator does not have an exact, analytic solution. However, such a solution does exist for the limit cycle if the damping function in the Liénard equation is taken to be a piecewise-constant function.
The period at small $\mu$ has the series expansion $T = 2\pi\bigl(1 + \tfrac{1}{16}\mu^2 + O(\mu^4)\bigr)$. See the Poincaré–Lindstedt method for a derivation to order 2; derivations up to order 3 and numerical derivations up to order 164 have also been published.
For large $\mu$, the behavior of the oscillator has a slow buildup, fast release cycle (a cycle of building up the tension and releasing the tension, thus a relaxation oscillation). This is most easily seen in the Liénard-plane form given above. In this form, the oscillator completes one cycle as follows:
Slowly ascending the right branch of the cubic curve, from x = 2 to x = 1.
Rapidly moving to the left branch of the cubic curve, from x = 1 to x = −2.
Repeat the two steps on the left branch.
The leading term in the period of the cycle is due to the slow ascending and descending, and can be computed as $T \approx (3 - 2\ln 2)\,\mu$. The next-order correction to the period is of the form $3\alpha\,\mu^{-1/3}$, where $\alpha \approx 2.338$ is the smallest root of $\operatorname{Ai}(-\alpha) = 0$, with $\operatorname{Ai}$ the Airy function. This was derived by Anatoly Dorodnitsyn.
The amplitude of the cycle is approximately 2.
Hopf bifurcation
As $\mu$ moves from less than zero to more than zero, the spiral sink at the origin becomes a spiral source, and a limit cycle appears "out of the blue" with radius two. This is because the transition is not generic: when $\mu = 0$, the differential equation becomes linear, and the origin becomes a circular node.
Knowing that in a Hopf bifurcation the limit cycle should have size $\sqrt{\mu}$, we may attempt to convert this to a Hopf bifurcation by using a change of variables that rescales the amplitude accordingly; doing so indeed yields a Hopf bifurcation.
Hamiltonian for Van der Pol oscillator
One can also write a time-independent Hamiltonian formalism for the Van der Pol oscillator by augmenting it to a four-dimensional autonomous dynamical system using an auxiliary second-order nonlinear differential equation as follows:
Note that the dynamics of the original Van der Pol oscillator is not affected due to the one-way coupling between the time-evolutions of x and y variables. A Hamiltonian H for this system of equations can be shown to be
where $p_x$ and $p_y$ are the conjugate momenta corresponding to x and y, respectively. This may, in principle, lead to quantization of the Van der Pol oscillator. Such a Hamiltonian also connects the geometric phase of the limit cycle system having time dependent parameters with the Hannay angle of the corresponding Hamiltonian system.
Quantum oscillator
The quantum van der Pol oscillator, which is the quantum mechanical version of the classical van der Pol oscillator, has been proposed using a Lindblad equation to study its quantum dynamics and quantum synchronization. Note the above Hamiltonian approach with an auxiliary second-order equation produces unbounded phase-space trajectories and hence cannot be used to quantize the van der Pol oscillator. In the limit of weak nonlinearity (i.e. μ→0) the van der Pol oscillator reduces to the Stuart–Landau equation. The Stuart–Landau equation in fact describes an entire class of limit-cycle oscillators in the weakly-nonlinear limit. The form of the classical Stuart–Landau equation is much simpler, and perhaps not surprisingly, can be quantized by a Lindblad equation which is also simpler than the Lindblad equation for the van der Pol oscillator. The quantum Stuart–Landau model has played an important role in the study of quantum synchronisation (where it has often been called a van der Pol oscillator although it cannot be uniquely associated with the van der Pol oscillator). The relationship between the classical Stuart–Landau model (μ→0) and more general limit-cycle oscillators (arbitrary μ) has also been demonstrated numerically in the corresponding quantum models.
Forced Van der Pol oscillator
The forced, or driven, Van der Pol oscillator takes the 'original' function and adds a driving function to give a differential equation of the form:
$$\frac{d^2x}{dt^2} - \mu\left(1 - x^2\right)\frac{dx}{dt} + x = A\sin(\omega t),$$
where $A$ is the amplitude, or displacement, of the wave function and $\omega$ is its angular velocity.
Popular culture
Author James Gleick described a vacuum-tube Van der Pol oscillator in his 1987 book Chaos: Making a New Science. According to a New York Times article, Gleick received a modern electronic Van der Pol oscillator from a reader in 1988.
See also
Mary Cartwright, British mathematician, one of the first to study the theory of deterministic chaos, particularly as applied to this oscillator.
References
External links
Van der Pol oscillator on Scholarpedia
Van Der Pol Oscillator Interactive Demonstrations
Chaotic maps
Dutch inventions
Ordinary differential equations
Electronic oscillators
Dynamical systems | Van der Pol oscillator | [
"Physics",
"Mathematics"
] | 1,520 | [
"Functions and mappings",
"Mathematical objects",
"Mechanics",
"Mathematical relations",
"Chaotic maps",
"Dynamical systems"
] |
3,670,743 | https://en.wikipedia.org/wiki/Extent%20of%20reaction | In physical chemistry and chemical engineering, extent of reaction is a quantity that measures the extent to which the reaction has proceeded. Often, it refers specifically to the value of the extent of reaction when equilibrium has been reached. It is usually denoted by the Greek letter ξ. The extent of reaction is usually defined so that it has units of amount (moles). It was introduced by the Belgian scientist Théophile de Donder.
Definition
Consider the reaction
A ⇌ 2 B + 3 C
Suppose an infinitesimal amount of the reactant A changes into B and C. This requires that all three mole numbers change according to the stoichiometry of the reaction, but they will not change by the same amounts. However, the extent of reaction can be used to describe the changes on a common footing as needed. The change in the number of moles of A can be represented by the equation $dn_{\mathrm{A}} = -\,d\xi$, the change of B is $dn_{\mathrm{B}} = +2\,d\xi$, and the change of C is $dn_{\mathrm{C}} = +3\,d\xi$.
The change in the extent of reaction is then defined as
$$d\xi = \frac{dn_i}{\nu_i},$$
where $n_i$ denotes the number of moles of the $i$-th reactant or product and $\nu_i$ is the stoichiometric number of the $i$-th reactant or product. Although less common, we see from this expression that since the stoichiometric number can either be considered to be dimensionless or to have units of moles, conversely the extent of reaction can either be considered to have units of moles or to be a unitless mole fraction.
The extent of reaction represents the amount of progress made towards equilibrium in a chemical reaction. Considering finite changes instead of infinitesimal changes, one can write the equation for the extent of a reaction as
$$\Delta\xi = \frac{\Delta n_i}{\nu_i}.$$
The extent of a reaction is generally defined as zero at the beginning of the reaction. Thus the change of $\xi$ is the extent itself. Assuming that the system has come to equilibrium, the equilibrium value $\xi_{\mathrm{eq}}$ is obtained by inserting the equilibrium amounts into the same expression.
Although in the example above the extent of reaction was positive since the system shifted in the forward direction, this usage implies that in general the extent of reaction can be positive or negative, depending on the direction that the system shifts from its initial composition.
Relations
The Gibbs energy of reaction can be defined as the slope of the Gibbs energy plotted against the extent of reaction at constant pressure and temperature.
This formula leads to the Nernst equation when applied to the oxidation-reduction reaction which generates the voltage of a voltaic cell. Analogously, the reaction enthalpy can be defined as the corresponding slope of the enthalpy plotted against the extent of reaction.
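The defining relations implied by the paragraph above can be written out explicitly; the notation $\Delta_{\mathrm{r}}G$ and $\Delta_{\mathrm{r}}H$ is a common convention assumed here rather than taken from the source.

```latex
\Delta_{\mathrm{r}}G \;=\; \left(\frac{\partial G}{\partial \xi}\right)_{T,p},
\qquad
\Delta_{\mathrm{r}}H \;=\; \left(\frac{\partial H}{\partial \xi}\right)_{T,p}.
```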
Example
The extent of reaction is a useful quantity in computations with equilibrium reactions. Consider the reaction
2 A ⇌ B + 3 C
where the initial amounts are , and the equilibrium amount of A is 0.5 mol. We can calculate the extent of reaction in equilibrium from its definition
In the above, we note that the stoichiometric number of a reactant is negative. Now that we know the extent, we can rearrange the equation and calculate the equilibrium amounts of B and C.
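Because the initial amounts are not preserved in the passage above, a worked sketch can only be given under an assumed initial amount; taking $n_{\mathrm{A},0} = 2\ \mathrm{mol}$ purely for illustration, the computation proceeds as follows.

```latex
% Worked sketch with an assumed initial amount n_{A,0} = 2 mol
% (the actual initial amounts are not given in the passage above).
\xi_{\mathrm{eq}}
  = \frac{\Delta n_{\mathrm{A}}}{\nu_{\mathrm{A}}}
  = \frac{0.5\ \mathrm{mol} - 2\ \mathrm{mol}}{-2}
  = 0.75\ \mathrm{mol},
\qquad
\Delta n_{\mathrm{B}} = \nu_{\mathrm{B}}\,\xi_{\mathrm{eq}} = 0.75\ \mathrm{mol},
\qquad
\Delta n_{\mathrm{C}} = \nu_{\mathrm{C}}\,\xi_{\mathrm{eq}} = 2.25\ \mathrm{mol}.
```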
References
Physical chemistry
Analytical chemistry | Extent of reaction | [
"Physics",
"Chemistry"
] | 606 | [
"Physical chemistry",
"Applied and interdisciplinary physics",
"nan"
] |
3,672,150 | https://en.wikipedia.org/wiki/Vacuum%20evaporation | Vacuum evaporation is the process of causing the pressure in a liquid-filled container to be reduced below the vapor pressure of the liquid, causing the liquid to evaporate at a lower temperature than normal. Although the process can be applied to any type of liquid at any vapor pressure, it is generally used to describe the boiling of water by lowering the container's internal pressure below standard atmospheric pressure and causing the water to boil at room temperature.
The vacuum evaporation treatment process consists of reducing the interior pressure of the evaporation chamber below atmospheric pressure. This reduces the boiling point of the liquid to be evaporated, thereby reducing or eliminating the need for heat in both the boiling and condensation processes. There are other advantages, such as the ability to distill liquids with high boiling points and avoiding decomposition of substances that are heat sensitive.
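As a hedged illustration of the pressure-boiling point relationship described above, the Antoine equation for water can be used to estimate the boiling temperature at a reduced chamber pressure. The Antoine constants below are commonly tabulated values, and their validity range (roughly 1-100 °C) is an assumption of this sketch.

```python
import math

# Antoine equation for water: log10(P[mmHg]) = A - B / (C + T[degC]).
# The constants are commonly tabulated values, assumed valid roughly 1-100 degC.
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_degC(pressure_mmHg):
    """Temperature at which water's vapor pressure equals the chamber pressure."""
    return B / (A - math.log10(pressure_mmHg)) - C

print(round(boiling_point_degC(760), 1))   # ~100.0 degC at atmospheric pressure
print(round(boiling_point_degC(20), 1))    # ~22 degC at 20 torr, i.e. near room temperature
```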
Application
Food
When the process is applied to food and the water is evaporated and removed, the food can be stored for long periods without spoiling. It is also used when boiling a substance at normal temperatures would chemically change the consistency of the product, such as egg whites coagulating when attempting to dehydrate the albumen into a powder.
This process was invented by Henri Nestlé in 1866, of Nestlé Chocolate fame, although the Shakers were already using a vacuum pan before that (see condensed milk).
This process is used industrially to make such food products as evaporated milk for milk chocolate and tomato paste for ketchup.
In the sugar industry vacuum evaporation is used in the crystallization of sucrose solutions. Traditionally this process was performed in batch mode, but nowadays continuous vacuum pans are available.
Wastewater treatment
Vacuum evaporators are used in a wide range of industrial sectors to treat industrial wastewater. It represents a clean, safe and very versatile technology with low management costs, which in most cases serves as a zero-discharge treatment system.
Thin film deposition
Vacuum evaporation is also a form of physical vapor deposition used in the semiconductor, microelectronics, and optical industries. In this context it is used to deposit thin films of material onto surfaces. Such a technique consists of pumping a vacuum chamber to low pressures (<10⁻⁵ torr) and heating a material to produce vapor to deposit the material onto a cold surface. The material to be vaporized is typically heated until its vapor pressure is high enough to produce a flux of several angstroms per second by using an electrically resistive heater or bombardment by a high voltage beam.
Electronics
Thermal evaporation has been investigated for the production of organic light-emitting diodes (OLEDs) and organic photovoltaic cells. In organic photovoltaic cells, the purity of the organic semiconductor layers influences the device’s energy conversion efficiency and stability.
See also
Freeze drying
List of waste-water treatment technologies
Vacuum deposition
References
External links
Vacuum evaporation manufacturer
Evaporators
Food preservation
Food processing
Vacuum
Water pollution
Thin film deposition | Vacuum evaporation | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering",
"Environmental_science"
] | 611 | [
"Thin film deposition",
"Chemical equipment",
"Coatings",
"Thin films",
"Vacuum",
"Water pollution",
"Distillation",
"Evaporators",
"Planes (geometry)",
"Solid state engineering",
"Matter"
] |
3,673,778 | https://en.wikipedia.org/wiki/Ludwieg%20tube | A Ludwieg tube is a cheap and efficient way of producing supersonic flow. Mach numbers up to 4 in air are easily obtained without any additional heating of the flow. With heating, Mach numbers of up to 11 can be reached.
Principle
A Ludwieg tube is a wind tunnel that produces supersonic flow for short periods of time. A large evacuated dump tank is separated from the downstream end of a convergent-divergent nozzle by a diaphragm or fast acting valve. The upstream end of the nozzle connects to a long cylindrical tube, whose cross-sectional area is significantly larger than the throat area of the nozzle. Initially, the pressure in the nozzle and tube is high. To start the tunnel, the diaphragm is ruptured, e.g., by piercing it with a suitable cutting device, or the valve is opened. As always when a diaphragm ruptures, a shock wave propagates into the low-pressure region (here the dump tank) and an expansion wave propagates into the high-pressure region (here the nozzle and the long tube). As this unsteady expansion propagates through the long tube, it sets up a steady subsonic flow toward the nozzle, which is accelerated by the convergent-divergent nozzle to a supersonic condition. The flow is steady until the expansion, having been reflected from the far end of the tube, arrives at the nozzle again. For practical reasons, flow times are about 100 milliseconds for most Ludwieg tubes. For many purposes, this flow duration is sufficient. However, by taking advantage of multiple quasi-static flows between expansion wave reflections, experimentation times of up to 6 seconds can be achieved.
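As a rough, hedged estimate that is not from the source, the quoted flow time of about 100 milliseconds can be related to the charge-tube length: the steady flow lasts roughly the round-trip time of the expansion wave in the tube, approximated here with the undisturbed speed of sound and ignoring the induced flow velocity. The tube length and gas properties below are assumptions chosen only for the example.

```python
import math

def run_time_estimate(tube_length_m, T0_kelvin=293.0, gamma=1.4, R=287.0):
    """Crude Ludwieg-tube run-time estimate: round trip of the expansion wave
    at the speed of sound of the charge gas, ignoring the induced flow velocity."""
    a0 = math.sqrt(gamma * R * T0_kelvin)   # speed of sound in the charge tube (m/s)
    return 2.0 * tube_length_m / a0         # seconds

# A ~17 m tube of air at room temperature gives roughly 0.1 s of steady flow,
# consistent with the ~100 ms figure quoted above.
print(round(run_time_estimate(17.0), 3))
```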
History
The Ludwieg tube was invented by Hubert Ludwieg (1912-2000) in 1955 in response to a competition for a transonic or supersonic wind tunnel design that would be capable of producing high Reynolds number at low operating cost. Professor Ludwieg was also responsible for the experimental demonstration and explanation of the large effect of sweep on the drag of transonic wings (his dissertation in 1937).
See also
Shock tube
Supersonic wind tunnel
Hypersonic wind tunnel
References
External links
Ludwieg Tube Laboratory at the California Institute of Technology
Heated Ludwieg Tube at the ZARM in Bremen, Germany
Operation of a transonic Ludwieg tunnel(Video)
Fluid dynamics
Aerodynamics
Wind tunnels | Ludwieg tube | [
"Chemistry",
"Engineering"
] | 511 | [
"Chemical engineering",
"Aerodynamics",
"Aerospace engineering",
"Piping",
"Fluid dynamics"
] |
3,674,008 | https://en.wikipedia.org/wiki/Puka%20shell | Puka shells are naturally occurring bead-like shells found on the beaches of Hawaii or other places. Each bead is the beach-worn apex of a cone snail. Such shells are often strung as necklaces, known as puka shell necklaces. Puka is the Hawaiian word for "hole" and refers to the naturally occurring hole in the middle of these rounded and worn shell fragments.
Numerous inexpensive imitations are now widely sold as puka shell necklaces. The majority of contemporary "puka shell necklaces" are not made from cone shells, but from other shells, or even from plastic. In addition, some strings of beads are currently sold that are made from cone shells, but the beads in these necklaces were not formed by natural processes. They were instead worked by hand from whole shells using pliers to break the shell down to the needed part, and then subjecting the rough results to tumble finishing, in order to give each bead more or less smooth edges in imitation of the natural wear-and-tear a shell receives when tumbled in the surf over long periods of time.
The original all-natural puka shells were very easily made into necklaces, bracelets and anklets because they were naturally pierced, which enabled them to be strung like beads. Such jewellery was often gifted by Hawaii's royal families to foreign dignitaries, but it was only during the tourism boom of the 1960s, after the islands' admission into the US, that it became massively popular as an attractive and inexpensive lei that could be made and sold on the beach. In the 1970s, this type of shell jewelry became highly sought after by celebrities like Elizabeth Taylor and prices skyrocketed. The craftsmanship also became more refined, and the lei pūpū puka, or puka shell leis, were strung in graduated or matching styles, rather than the original random patterns. The style became highly sought after again in the 1990s, beginning with Californians and their surf culture.
Many "legends" about the puka shell were created during this time, and these stories also helped sales.
Natural puka shell formation
The terminal helix of the shell of a cone snail is cone-shaped, and closed at the apex. When the empty shell is rolled over a long time by the waves in the breaking surf and coral rubble, the terminal helix of the shell breaks off or is gradually ground off, leaving the solid top of the shell intact.
Given enough time, the tip of the spire of the shell usually also wears down, and thus a natural hole is formed from one side to the other. This shell fragment can be viewed as a sort of a natural bead, and is known in Hawaii as a "puka". Real puka shells are not flat: one side of the bead is slightly convex; the other is concave. The concave side of the bead clearly shows the spiral form of the interior of the spire of the cone shell.
Modern substitutes
Naturally-formed rounded cone shell fragments suitable to be used as beads are hard to find in large quantities, so true puka jewelry, formed entirely naturally, is now uncommon. Shell jewellery made from naturally occurring puka shells is also now more expensive because of the labor and time involved in locating and hand-picking these rather uncommon shell fragments from the beach drift.
In modern times, beads cut from other types of shell, or even beads of plastic, are used to make imitation puka jewellery. Cone snail shells are sometimes harvested so that they can be chipped and ground down to make more authentic-looking puka jewellery, which is however still not genuine by the standards of the originals.
A very glossy patina indicates that the shells in a necklace have been tumble polished. If the edges of the shell beads are chipped, the shells were harvested and manually broken into shape. If the "puka" or central hole is perfectly circular and parallel-sided, then the hole was drilled by humans.
See also
Shell jewelry
Wampum
References
Books
Jewellery components
Symbols of Hawaii
Hawaii culture
Seashells in art
1960s fashion
1970s fashion
1990s fashion
2000s fashion | Puka shell | [
"Technology"
] | 826 | [
"Jewellery components",
"Components"
] |
27,369,936 | https://en.wikipedia.org/wiki/Fusion%20welding | Fusion welding is a generic term for welding processes that rely on melting to join materials of similar compositions and melting points. Due to the high-temperature phase transitions inherent to these processes, a heat-affected zone is created in the material (although some techniques, like beam welding, often minimize this effect by introducing comparatively little heat into the workpiece).
In contrast to fusion welding, solid-state welding does not involve the melting of materials.
Applications
Fusion welding has been a critical factor in the creation of modern civilization due to its vital role in construction. Apart from bolts and rivets, few other practical methods exist for joining pieces of metal securely. Fusion welding is used in the manufacture of many everyday items and structures, including airplanes and cars.
Beyond construction, a large community uses both arc and flame contact welding to create artwork.
Types
Electrical
Arc
Arc welding is one of the many types of fusion welding. It joins two pieces of metal by using an intermediate filler metal: an electrical circuit is completed through the workpiece, creating an electric arc at the tip of the filler metal that reaches about 6500 °F (3593 °C) at its center. As the arc melts the metal, it is moved, either by a person or a machine, along the gap in the metals, creating a bond. This method is very common, as it is typically done with a hand-held machine; arc welding machines are portable and can be brought onto job sites and into hard-to-reach areas. It is also the most common method of underwater welding. Electrical arcs form between points separated by a gas, so in underwater welding a bubble of gas is blown around the area being welded to allow an arc to form. Underwater welding has many applications: ship hulls are repaired and oil rigs are maintained with underwater arc welding.
Resistance welding is done using two electrodes, each in contact with one of the pieces being welded. The two pieces of metal are pressed together between the electrodes and an electric current is run through them. The pieces of metal begin to heat up at the point where they come into contact, and the current is passed through until the two pieces melt and join; as the metal cools, the bond solidifies. This process requires large amounts of electricity, and in most cases transformers are needed to supply enough current. Resistance welding is a very prevalent form of fusion welding, used in the manufacturing of automobiles and construction equipment.
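As a rough illustration of why such high currents are needed, the heat deposited at the joint can be estimated from Joule's law, Q = I²Rt. The current, contact resistance and weld time below are illustrative assumptions, not values from the source.

```python
# Rough Joule-heating estimate for a resistance spot weld (illustrative values only).
current = 10_000.0           # A, assumed welding current
contact_resistance = 100e-6  # ohm, assumed contact resistance (100 micro-ohm)
weld_time = 0.2              # s, assumed weld duration

heat_joules = current**2 * contact_resistance * weld_time  # Q = I^2 * R * t
print(f"Heat deposited at the joint: {heat_joules:.0f} J")  # ~2000 J for these assumptions
```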
Laser beam
Conduction welding, also known as laser beam welding or radiation welding, is a highly precise form of fusion welding. "Laser" is an acronym for Light Amplification by Stimulated Emission of Radiation. The laser emits light in short bursts (pulses) that are aimed at the seam of the metals to be joined and guided along it. These intense pulses melt the metal, and the molten metals mix with each other; once the melt has cooled, the seam forms a strong bond. Lasers are efficient because they can be configured to make multiple welds at once: the beam can be split and sent to multiple locations, greatly reducing the cost and amount of energy required. Laser beam welding finds applications in the automotive industry.
Induction
Induction welding is a form of resistance welding in which there is no point of contact between the metal being welded and the electrical source or the welder. In induction welding, a coil is wrapped around a cylinder of the workpiece material. The alternating current in the coil creates a magnetic field across the surface of the metal inside, which induces opposing currents and fields within the cylinder. These opposing flows heat the metal and cause the edges to melt together.
Chemical
Oxyfuel
Flame contact is a very common form of welding, and the most popular kind of flame contact welding is oxyfuel gas welding. Flame contact welding exposes the surfaces of the metals being welded to a flame in order to melt and then join them. Oxyfuel welding uses oxygen as the oxidizer in tandem with a fuel gas such as acetylene to produce a flame that is about 2500 °C at the tip and 2800–3500 °C at the tip of the inner cone. Other fuels such as propane and methanol can be used for oxyfuel welding, but acetylene is the most common gas used.
Solid reactant
Solid reactant welding uses reactions between elements and compounds. Certain compounds, when mixed, undergo an exothermic chemical reaction, meaning they give off heat. A very common reaction uses thermite, a mixture of a metal oxide (typically iron oxide, i.e., rust) and aluminium; this reaction produces temperatures over 4000 °F. The solid reactant compounds are channeled to the two pieces of metal being joined. Once in place, a catalyst, which can be a chemical or another heat source, is used to start the reaction. The heat created melts the metals being joined, and once it cools a bond is formed. From welding together train tracks to entering bank vaults, solid reactant welding has many niche uses.
See also
References
Welding | Fusion welding | [
"Engineering"
] | 1,051 | [
"Welding",
"Mechanical engineering"
] |
27,370,341 | https://en.wikipedia.org/wiki/Friction%20stir%20processing | Friction stir processing (FSP) is a method of changing the properties of a metal through intense, localized plastic deformation. This deformation is produced by forcibly inserting a non-consumable tool into the workpiece, and revolving the tool in a stirring motion as it is pushed laterally through the workpiece. The precursor of this technique, friction stir welding, is used to join multiple pieces of metal without creating the heat affected zone typical of fusion welding.
When ideally implemented, this process mixes the material without changing the phase (by melting or otherwise) and creates a microstructure with fine, equiaxed grains. This homogeneous grain structure, separated by high-angle boundaries, allows some aluminium alloys to take on superplastic properties. Friction stir processing also enhances the tensile strength and fatigue strength of the metal. In tests with actively cooled magnesium-alloy workpieces, the microhardness was almost tripled in the area of the friction stir processed seam (to 120–130 Vickers hardness).
Process
In friction stir processing (FSP), a rotating tool with a pin and a shoulder is applied to a single piece of material to enhance specific properties, such as toughness or flexibility, in a targeted area of the material's microstructure, often by stirring in fine grains of a second material whose properties improve the first. (Ma) Friction between the tool and the workpiece results in localized heating that softens and plasticizes the workpiece. A volume of processed material is produced by movement of material from the front of the pin to the back of the pin. During this process the material undergoes intense plastic deformation, which results in significant grain refinement. (Mishra) FSP changes physical properties without changing physical state, which helps engineers obtain effects such as "high-strain-rate superplasticity". The grain refinement occurs in the base material, improving its properties while it mixes with the second material. This allows a variety of materials to be tailored to applications that might otherwise require conditions that are difficult to achieve. The process branches off from friction stir welding (FSW), which uses the same mechanism to join two pieces of different materials without heating, melting, or otherwise changing the materials' physical state.
Tool
The tool has a crucial part to creation of the final product. The tool consists of two main functions:
Localized heating
Material flow
In its simplest form, the tool consists of a shoulder, a small cylinder with a diameter of about 50 mm, and a pin, a small threaded cylinder similar to a drill bit. The tool has been modified over time to reduce the displaced volume of the metals as they merge. Recently, two new pin geometries have arisen:
Flared-Triflute – introducing flutes (large vertical carvings on the pin)
A-skew – the pin axis being inclined to the axis of the spindle.
Applications
FSP is used when a metal's properties are to be improved by using other metals to support and enhance the first. It is a promising process for the automotive and aerospace industries, where new materials will need to be developed with improved resistance to wear, creep, and fatigue. (Mishra) Examples of materials successfully processed using the friction stir technique include AA 2519, AA 5083 and AA 7075 aluminium alloys, AZ61 magnesium alloy, nickel-aluminium bronze and 304L stainless steel.
Casting
Metallic parts produced by casting are comparatively inexpensive, but are often subject to metallurgical flaws like porosity and microstructural defects. Friction stir processing can be used to introduce a wrought microstructure into a cast component and eliminate many of the defects. By vigorously stirring a cast metal part to homogenize it and reduce the grain size, the ductility and strength are increased.
Powder metallurgy
Friction stir processing can also be used to improve the microstructural properties of powder metal objects. In particular, when dealing with aluminium powder metal alloys, the aluminium oxide film on the surface of each granule is detrimental to the ductility, fatigue properties and fracture toughness of the workpiece. While conventional techniques for removing this film include forging and extrusion, friction stir processing is suited for situations where localized treatment is desired.
Fabrication of metal matrix composites
FSP can also be used to fabricate metal matrix composites in the nugget zone, where the change of properties is needed. Al 5052/SiC and some other composites have been successfully fabricated, and even nanocomposites can be produced by FSP.
Aluminium Surface Composites with Superior Properties
Aluminium surface composites with enhanced surface properties can be fabricated using FSP. Aluminium surface composites fabricated with the optimum friction stir processing parameters show better mechanical properties and corrosion resistance. Processing parameters such as tool rotational speed and tool shoulder diameter affect the surface properties: higher surface hardness is exhibited by surface composites fabricated at higher tool rotational speed and smaller tool shoulder diameter. The properties of the composite material can be altered by changing the type of reinforcement. Reinforcement particles aid in grain size refinement as well as property enhancement in the processed material, and the reinforcement can be varied based on the end application; the reinforcement phases can be metallic, ceramic, or polymer materials.
Testing
Mg based nano-composites
FSP was used to modify a Mg alloy and insert nano-sized SiO2. The test was conducted a total of four times, with the average grain size varying from 0.5–2 μm. This nearly doubled the hardness of the Mg and also increased its superplasticity. At room temperature, the yield stress of the FSP composites was improved in both the 1D and the 2D specimens, signifying a greater resistance of the product metal to deformation under high-stress conditions. The tensile strength was shown to increase along with the yield stress.
Benefits
FSP is beneficial when two materials need to be mixed. "FSP is a short route, solid state processing technique with one-step processing that achieves microstructural refinement densification and homogeneity" (Ma). FSP allows materials to be modified without melting them down or changing them drastically: a sheet of metal, for example, can be modified directly, where previously it might have had to be melted down and poured into a mold to cool and re-form. (Smith, Mishra) "The microstructure and mechanical properties of the processed zone can be accurately controlled by optimizing the tool design, FSP parameters and active cooling/heating." (Ma) The same sheet of metal can thus be adapted to various situations with the proper modification of the tool. FSP has also been shown to make metallic alloys more bendable; for example, an alloy modified with FSP could bend to 30 degrees where before it could bend only to seven.
See also
Friction stir welding
References
Metallurgical processes
Friction stir welding | Friction stir processing | [
"Chemistry",
"Materials_science"
] | 1,433 | [
"Metallurgical processes",
"Metallurgy"
] |
27,370,755 | https://en.wikipedia.org/wiki/Muhammad%20Suhail%20Zubairy | Muhammad Suhail Zubairy, HI, SI, FPAS (born 19 October 1952), is a University Distinguished Professor as of 2014 in the Department of Physics and Astronomy at the Texas A&M University and is the inaugural holder of the Munnerlyn-Heep Chair in Quantum Optics.
In 2017, Prof. Suhail Zubairy was awarded the Changjiang Distinguished Chair at Huazhong University of Science and Technology. This is the highest award of the Chinese Government to a university professor and is rarely given to a non-Chinese. He has made pioneering contributions in the fields of Quantum computing, laser physics and quantum optics. He has authored and co-authored several books and over 300 research papers on a wide variety of research problems relating to theoretical physics. His research and work has been widely recognised by the physics community and he has won many international awards. In addition, he took part as the lead lecturer in the Casper College Quantum Science Camp during July 2022.
Academic career
Zubairy attended Edwardes College in Peshawar and received a double BSc degree in physics and mathematics from Peshawar University in 1971, for which the university conferred a Gold Medal. He received an MSc in physics from Quaid-i-Azam University in 1974, and his PhD in physics from the University of Rochester under the guidance of Professor Emil Wolf in 1978. He co-authored an internationally renowned textbook on quantum optics with Marlan O. Scully.
His research interests are very wide, and he has written papers on quantum optical applications to quantum computing, quantum informatics, quantum entanglement and sub-wavelength atom localisation. More recently, Zubairy has concentrated most of his efforts on research in quantum microscopy and quantum lithography, some of which is groundbreaking; for example, his papers on sub-wavelength lithography using classical light sources have been very well received. One of his recent Physical Review Letters papers was reviewed in Physical Review Focus as well as in the News of the Week section of Nature, and another was highlighted by Science in a news release titled "A new way to beat the limit on shrinking transistors".
Awards and honours
Changjiang Distinguished Chair Professor, HUST, China (2017)
Willis E. Lamb Award for Laser Science and Quantum Optics (2014)
Bush Excellence Award for International Research (2011)
Humboldt Senior Scientist Research Award (2007)
Khwarizmi International Award by the President of Iran (2001)
Hilal-e-Imtiaz (Crescent of Excellence) Award by the Government of Pakistan (2000)
COMSTECH Award for Physics (1999)
Sitara-i-Imtiaz (Star of Excellence) Award by the Government of Pakistan (1993)
Gold Medal, Pakistan Academy of Sciences (1989)
Abdus Salam Prize for Physics (1986)
Fellowships
Fellow American Physical Society (2006)
Fellow Pakistan Academy of Sciences (1995)
Fellow Optical Society of America (1988)
References
Recipients of Hilal-i-Imtiaz
Recipients of Sitara-i-Imtiaz
University of Rochester alumni
Texas A&M University faculty
Pakistani physicists
Academic staff of Quaid-i-Azam University
Quaid-i-Azam University alumni
1952 births
Living people
Fellows of Pakistan Academy of Sciences
Theoretical physicists
Pakistani information theorists
University of Peshawar alumni
Supercomputing in Pakistan
American academics of Pakistani descent
Edwardes College alumni
Pakistani emigrants to the United States
Fellows of the American Physical Society | Muhammad Suhail Zubairy | [
"Physics"
] | 706 | [
"Theoretical physics",
"Theoretical physicists"
] |
28,628,021 | https://en.wikipedia.org/wiki/Siacci%27s%20theorem | In kinematics, the acceleration of a particle moving along a curve in space is the time derivative of its velocity. In most applications, the acceleration vector is expressed as the sum of its normal and tangential components, which are orthogonal to each other. Siacci's theorem, formulated by the Italian mathematician Francesco Siacci (1839–1907), is the kinematical decomposition of the acceleration vector into its radial and tangential components. In general, the radial and tangential components are not orthogonal to each other. Siacci's theorem is particularly useful in motions where the angular momentum is constant.
Siacci's theorem in the plane
Let a particle P of mass m move in a two-dimensional Euclidean space (planar motion). Suppose that C is the curve traced out by P and s is the arc length of C corresponding to time t. Let O be an arbitrary origin in the plane and {i,j} be a fixed orthonormal basis. The position vector of the particle is
The unit vector er is the radial basis vector of a polar coordinate system in the plane. The velocity vector of the particle is
where et is the unit tangent vector to C. Define the angular momentum of P as
where k = i × j. Assume that h ≠ 0. The position vector r may then be expressed as
in the Serret-Frenet Basis {et, en, eb}. The magnitude of the angular momentum is h = mpv, where p is the perpendicular from the origin to the tangent line ZP. According to Siacci's theorem, the acceleration a of P can be expressed as
where the prime denotes differentiation with respect to the arc length s, and κ is the curvature function of the curve C. In general, Sr and St are not equal to the orthogonal projections of a onto er and et.
Example: Central forces
Suppose that the angular momentum of the particle P is a nonzero constant and that Sr is a function of r. Then
Because the curvature at a point in an orbit is given by
the function f can be conveniently written as a first order ODE
The energy conservation equation for the particle is then obtained if f(r) is integrable.
Siacci's theorem in space
Siacci's theorem can be extended to three-dimensional motions. Thus, let C be a space curve traced out by P and s is the arc length of C corresponding to time t. Also, suppose that the binormal component of the angular momentum does not vanish. Then the acceleration vector of P can be expressed as
The tangential component is tangent to the curve C. The radial component is directed from the point P to the point where the perpendicular from an arbitrary fixed origin meets the osculating plane. Other expressions for a can be found in the literature, where a new proof of Siacci's theorem is given.
See also
Acceleration
Areal velocity
Central force
Serret-Frenet equations
References
F. Siacci. Moto per una linea plana. Atti della Reale Accademia della Scienze di Torino, XIV, 750–760, 1879.
F. Siacci. Moto per una linea gobba. Atti della Reale Accademia della Scienze di Torino, XIV, 946–951, 1879.
E. T. Whittaker. A Treatise on the Analytical Dynamics of Particles and Rigid Bodies. 4th edition, Cambridge University Press, Cambridge. Reprinted by Dover Publications, Inc., New York (1944).
Nathaniel Grossman. The sheer joy of celestial mechanics. Birkhäuser, Basel, 1996.
Dynamics (mechanics)
Eponymous theorems of physics | Siacci's theorem | [
"Physics"
] | 760 | [
"Physical phenomena",
"Equations of physics",
"Classical mechanics",
"Eponymous theorems of physics",
"Motion (physics)",
"Dynamics (mechanics)",
"Physics theorems"
] |
28,632,986 | https://en.wikipedia.org/wiki/CD%20V-700 | The CD V-700 (often written as "CDV-700") is a Geiger counter employing a probe equipped with a Geiger–Müller tube, manufactured by several companies under contract to United States federal civil defense agencies in the 1950s and 1960s. While all models adhere to a similar size, shape, coloring and form-factor, there were substantial differences between various models and manufacturers over the years the CD V-700 was in production. Many of the earlier units required the use of now-obsolete high-voltage batteries, and were declared obsolete by the end of the 1970s.
Tens of thousands of these units were distributed to US state civil defense agencies. Even though large numbers have been sold off as surplus to civilian users, many remain in use with first responders and state emergency management agencies today.
Characteristics
Case and contents
Most models of the CD V-700 are constructed using a two-piece case made of die-cast and stamped aluminum with a distinctive yellow paint (John Deere Yellow), a Civil Defense “CD” decal and check source. The upper, die-cast part of the case contains a groove around the outer edge for a rubber gasket that renders the case water-tight. These have often deteriorated over the years but can be replaced with a section of rubber bead as used for mounting household screen windows in their frames. This section of the case also houses the meter, the printed circuit board and the battery holders. Mounted to the top of the upper case is a carry handle in which the probe clips in for storage, a connector for a headphone and the control knob.
The inside of the unit contains high voltage electronics of up to 900 volts, so care is recommended when operating with the case open. Inside the lower, stamped aluminum portion of the case is a printed diagram corresponding to the make and model of the unit that the lower case shipped with. In the decades since these units were made, it is common to find case bottoms have been switched between different makes and models, so the diagram may not match the actual mechanical and electrical properties of the unit.
Power is typically supplied by 2–5 D cell batteries, though early models used three B batteries in conjunction with two D-cell batteries. The most common cells employed are common alkaline primary batteries but rechargeable batteries can also be used. The common alkaline batteries have the disadvantage of a good chance of leakage of corrosive fluids when the unit is placed into storage for long periods, so it is advisable to remove the batteries for storage.
The units were shipped with a packet containing a silica gel to absorb any moisture inside the container and maintain the electronic components of the device. This packet should either be replaced or regenerated in an oven annually.
Unlike many newer devices of this type, the CD V-700s are not equipped with a visual or audible alarm for excessively high levels of radiation.
Probe
The thin sidewall Geiger-Mueller (GM) tube enclosed within the brass probe body detects beta radiation and gamma radiation with the detecting probe's beta shield open, or gamma only when the shield is closed. The brass body of the probe has an energy-compensation effect on the readings taken from the probe, reducing the over-reporting of events from low-energy gamma radiation as can be encountered with non-compensated GM probes.
In the case of the common Victoreen 6A model, the tube employed in the probe is an EON 6210, of metal construction and halogen-quenched. It is sensitive to both gamma and hard beta radiation. The tube is approximately 90 mm long, has a diameter of 8.5 mm and operates at a relatively high voltage of 900 volts. The choice of this tube is intended to strike a balance between sensitivity and ruggedness; it detects gamma radiation at energy levels between 20 and 1000 keV. The analogue dial is calibrated in both milliroentgens per hour (mR/h) and counts per minute (CPM). When new and in calibration, these units can be expected to deliver accuracy to within ±10%. There are three possible reading scales:
x1 (0–0.5 mR/h or 0–300 C/m)
x10 (0–5 mR/h or 0–3000 C/m)
x100 (0–50 mR/h or 0–30000 C/m)
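As a minimal illustration of how the range switch is applied, the sketch below multiplies the needle reading on the 0–0.5 mR/h dial by the selected range factor (the CPM scale works analogously). The function name and sample reading are illustrative, not from the source.

```python
# Convert a CD V-700 dial reading to a dose rate, given the selected range switch.
# The dial itself spans 0-0.5 mR/h; the range switch multiplies that by 1, 10 or 100.
RANGE_FACTORS = {"x1": 1, "x10": 10, "x100": 100}

def dose_rate_mR_per_h(dial_reading: float, range_setting: str) -> float:
    """dial_reading is the needle position on the 0-0.5 mR/h scale."""
    return dial_reading * RANGE_FACTORS[range_setting]

# Example: needle at 0.35 on the x10 range -> 3.5 mR/h
print(dose_rate_mR_per_h(0.35, "x10"))
```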
A drop-in replacement for the EON 6210 or the 6993 tube is the LND 720, manufactured by LND, Inc. of New York, USA.
A small number of CD V-700s were modified by the addition of a much larger end-window probe to detect alpha radiation in addition to betas and gammas; these probes operated at considerably lower voltages, requiring changes to the high-voltage electronics. These units were redesignated as the CD V-700M and were issued on the basis of six units per state emergency management agency. Their primary intended use was for detecting and cleaning up low-level radioactive contamination in state RADEF calibration laboratories, though they could be used to detect alpha radiation in the field if needed.
Rate
The CD V-700 is used to detect low levels of radiation, 0 to 50 mR/h. High-radiation fields can saturate the Geiger tube, causing the meter to read a very low level of radiation (close to 0 R/h), thus inducing the user to erroneously believe conditions are safe when they are not.
Test
The CD V-700 also came with a "check source", a small amount of a radioactive isotope under a sticker on the side of the unit. The isotope varied with the maker; depleted or natural uranium was common, though the Instruction and Maintenance Manual for the Lionel Model 6B indicates that a "Radium D+E beta source" with an approximate half-life of 22 years is present under the nameplate. This produced, at the time of installation on the machine, about 1–2 mR/hr adjacent to the source, a value which was clearly visible on the analog meter as well as audible via the headphones that accompanied the units. Measured with the probe's beta window uncovered and the probe directly in contact with the operational check source, this is about 100 times normal sea-level background levels of radiation, and similar to the near-field from a uranium oxide-glazed red Fiestaware saucer. The level drops back to background levels a few feet from the source. Since the half-life of the different operational check source materials varied, and since all CD V-700s were manufactured prior to 1964 and are now approaching a half-century old, the amount of radiation emitted from the source may be much lower than when the unit was originally issued. Therefore, accurate calibration via the source cannot be assured.
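As a rough worked example of this decay, the sketch below computes the fraction of a check source's original activity remaining after a given number of years, using the ~22-year half-life quoted above for the Radium D+E source; the elapsed time is an assumed illustrative value.

```python
# Fraction of original check-source activity remaining after t years,
# using the exponential decay law A(t) = A0 * (1/2)**(t / half_life).
half_life_years = 22.0   # approximate half-life of the Radium D+E source (from the manual cited above)
elapsed_years = 60.0     # assumed elapsed time since manufacture (illustrative)

remaining_fraction = 0.5 ** (elapsed_years / half_life_years)
print(f"Remaining activity: {remaining_fraction:.1%} of original")  # about 15% after 60 years
```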
The units were calibrated at the factory, but this may drift over time and need to be recalibrated by means of an adjusting screw inside the case. The procedure for this will vary from model to model and it is best performed by somebody that is familiar with radiation metrology gear and its calibration. US state emergency management agencies usually maintain a calibration lab intended to keep the state's own inventory of such devices calibrated and often offer such services to local first responding agencies statewide but generally do not offer them to the public.
Use
The CD V-700, as a true Geiger counter, is capable of measuring ambient background levels of gamma radiation and detecting the presence of beta radiation in the environment, and thus can be used to detect such common low-level radioactive artifacts as uranium-doped marbles, Fiestaware plates and radium watch faces. This differentiates them from other civil defense radiation meters such as the CD V-715, CD V-717 and CD V-720, which are ion chamber meters that can measure gamma radiation levels far above (up to 500 R/h) what the CD V-700 can (up to 50 mR/h). Conversely, the ion chamber units are so insensitive to low levels of gamma radiation that no legally exempt radiation source can make them register at all.
The CD V-700s were usually packaged in various combinations with the high-level ion chamber units as listed above in kits designated CD V-777. In their original Cold War application, the sensitive CD V-700s would be saturated and thus rendered useless by the high radiation levels expected after an exchange of nuclear weapons. In that setting, it would be the ion chamber units that received most of the use. The CD V-700s main purpose was as a peacetime training instrument and for use in checking food and shelter entrances for low levels of fallout contamination. Later, as the Cold War mission wound down, many CD V-700s were re-purposed as instruments for first responders to use at the scene of a radiological event or other incident.
As the CD V-700s produced for the US government are now over 50 years old, they are being slowly replaced by more modern instrumentation in US federal and state government stockpiles and these older units are being sold off as government surplus. This makes them inexpensive and very common in the US for use by uranium prospectors, physics teachers, radiation hobbyists and others interested in detecting and measuring ionizing radiation.
CD V-700s are relatively simple devices, with a circuit board that employs a few dozen electronic components at most. Thus, the CD V-700 is a common target for various upgrades and modifications that enable added functionality. Most of the parts used are common off-the-shelf electronic parts, but some models employ transformers and other parts that were custom-made just for that particular make and model of CD V-700, and these can be difficult to locate and replace in the event of a failure. Considering its age, the CD V-700 is a fairly reliable device and is usually fairly easy to keep operational over time.
The batteries limit the temperature range at which the device works to between −10 °C and 40 °C.
Related models and modifications
CD V-700M
The CD V-700M is a Model 6/6A/6B modified with an OCD-P-108 probe fitted with an OCD-D-109 GM tube that enables the detection of alpha radiation, in order to aid in decontamination.
ENI 6B's were most commonly used, though some Victoreens were converted as well.
The modification involved improvements to the high-voltage circuit, lowering the operating voltage from 900 to 575 volts, and replacing the meter with one labeled only in counts per minute.
Ludlum "CD V-700 Model 7"
The CD V-700 Model 7 is a kit to upgrade and modernize the Lionel Model 6B, Lionel/Anton Model 6, or Electro-Neutronics 6B with electronics derived from a recent version of the Ludlum Model 3 Geiger counter. The kit includes a new main electronics board with rotary switch, a connector for a detachable external probe, a wiring harness and switches to add new functionality and improved reliability to aging CD V-700s. The upgrade requires substantial modifications to the case. The State of Florida Department of Public Health has modified a large number of their CD V-700s to this standard.
The kit has the following ranges:
x1 (0-0.5 mR/h or 0-300 C/m)
x10 (0-5 mR/h or 0-3000 C/m)
x100 (0-50 mR/h or 0-30000 C/m)
x1000 (0-500 mR/h or 0-300000 C/m)
OCD-D-101
The OCD-D-101 is a GM tube replacement kit for the CD V-700 introduced in 1963.
The kit, which was produced by Lionel Electronic Labs and EON Corporation, is designed to decrease the sensitivity of Model 6, 6A & 6B meters by ten times.
This would result in the following ranges:
x10 (0-5 mR/h or 0-3000 C/m)
x100 (0-50 mR/h or 0-30000 C/m)
x1000 (0-500 mR/h or 0-300000 C/m)
One or two pressure-sensitive metal foil labels were included to indicate the new range scale.
Victoreen 496
Victoreen produced their Model 496 in the 1980s and many were purchased by the United States Department of Energy. The 496 used a modified version of the same casing as the CD V-700 Model 6Bs that Victoreen had produced in the 1960s. If the decal on top of a Model 496 is removed, the markings of the older civil defense meter can be found cast into the top cover. However, the 496 features a number of improvements over any of the older civil defense versions, including modernized electronics, a BNC connector or MHV connector for external probes, a built-in speaker with on-off knob, a built-in battery test circuit, and a meter face graduated in Counts Per Minute for use with various probes. Other 490-series Victoreen models of the same era shared similar physical characteristics but varied considerably in their features.
List of manufacturers, models and respective years of manufacture
References
Radioactivity
Particle detectors
Laboratory equipment
Counting instruments | CD V-700 | [
"Physics",
"Chemistry",
"Mathematics",
"Technology",
"Engineering"
] | 2,747 | [
"Counting instruments",
"Measuring instruments",
"Particle detectors",
"Numeral systems",
"Nuclear physics",
"Radioactivity"
] |
24,113,755 | https://en.wikipedia.org/wiki/Chinese%20Nuclear%20Society | The Chinese Nuclear Society (CNS; ) is a non-profit organization representing individuals contributing to and supporting nuclear science, nuclear technology and nuclear engineering in China.
Objectives
It was established in 1980. Its objective is to promote the advancement and peaceful use of nuclear science and technology, undertake scientific and technical exchange, engage in public communication and enhance international cooperation. To this end it carries out conferences, seminars and workshops; transactions and publications; lectures and materials for the public and media; exhibitions; exchange visits with overseas partners; and policy suggestions to government authorities.
Membership
The membership of the Society consists of regular members, student members and organization members. Membership is open to any person or organization supporting the objectives of the Society and agreeing with its rules; each applicant for membership submits an application to the Secretariat. The Society has about 8,810 individual members and 45 organization members.
Organization
The supreme authority in the Society is the National General Conference, and its executive body is the Board of Directors. The National General Conference is held every four years. The Secretariat, under the leadership of the Board of Directors, is responsible for daily operation, and the Secretary General is the chief administrative officer of the Society.
Committees
The Society has 7 committees carrying out specified functions of the Society, 20 technical divisions enhancing activities in specific areas of nuclear science and technology, and 20 provincial branches serving members in their geographical provinces. The committees are: the scientific exchange committee, public communication and inquiry committee, education and human resource committee, editorial committee, organization committee, financial committee, and women-in-nuclear committee.
Technical divisions
Technical Divisions: Calculation Physics, Isotope, Isotope Separation Technology, Nuclear Agronomy, Nuclear Chemical Engineering, Nuclear Chemistry and Radiochemistry, Nuclear Electronics and Nuclear Detection Techniques, Nuclear Fusion and Plasma Physics, Nuclear Industry Applications, Nuclear Materials, Nuclear Medicine, Nuclear Physics, Nuclear Power, Nuclear Science and Technology Information, Nuclear Techno-economics and Management, Particles Accelerator Technology, Radiation Protection, Radiation Research and Technology, Uranium Geology, Uranium Mining and Metallurgy.
Branches
Provincial Branches: Anhui, Beijing, Fujian, Gansu, Guangdong, Guizhou, Henan, Hubei, Hunan, Jiangsu, Jiangxi, Jilin, Liaoning, Shanghai, Shanxi, Shaanxi, Sichuan, Tianjin, Xinjiang, Zhejiang. Under each of the technical divisions and provincial branches, there are a number of committees focusing on specific technical areas.
See also
List of nuclear power groups
American Nuclear Society
European Nuclear Society
Canadian Nuclear Society
External links
Official website: http://www.ns.org.cn/
Organizations established in 1980
Professional associations based in China
Nuclear organizations
Nuclear technology in China
Nuclear industry organizations | Chinese Nuclear Society | [
"Physics",
"Engineering"
] | 529 | [
"Nuclear organizations",
"Nuclear and atomic physics stubs",
"Nuclear physics",
"Nuclear industry organizations",
"Energy organizations"
] |
24,114,848 | https://en.wikipedia.org/wiki/Attrition%20test | An attrition test is a test carried out to measure the resistance of a granular material to wear. An example of a material subjected to an attrition test are stones used in road construction, indicating the resistance of the material to being broken down under road traffic. Heterogeneous catalysts are also subjected to attrition tests to determine their physical performance in a heterogeneous catalytic reactor.
The test itself involves agitating the particles, typically by tumbling within a drum, vibration, or with jets of gas to simulate a fluidised bed. After a specified time, the material is sieved, and the sieved material is weighed to measure the proportion of material which has been reduced to below a certain size (referred to as 'fines'). The specifics of the test are defined by various standards as applicable to the purpose in question, such as those defined by ASTM.
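As a minimal worked example of how such a result is typically reported, the sketch below computes the mass fraction of fines from before/after sieving weights. The sample masses and the exact index definition are illustrative assumptions, since the reported metric depends on the standard being followed.

```python
# Simple attrition-index calculation: fraction of the sample reduced to fines
# (material passing the specified sieve) after the agitation period.
initial_mass_g = 50.0      # assumed starting sample mass
retained_mass_g = 47.2     # assumed mass retained on the sieve after the test

fines_mass_g = initial_mass_g - retained_mass_g
attrition_index = fines_mass_g / initial_mass_g   # often reported as a percentage
print(f"Fines generated: {fines_mass_g:.1f} g ({attrition_index:.1%})")
```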
Roads
Tribology | Attrition test | [
"Chemistry",
"Materials_science",
"Engineering"
] | 190 | [
"Tribology",
"Mechanical engineering",
"Materials science",
"Surface science"
] |
27,005,434 | https://en.wikipedia.org/wiki/Stochastic%20roadmap%20simulation | For robot control, Stochastic roadmap simulation is inspired by probabilistic roadmap methods (PRM) developed for robot motion planning.
The main idea of these methods is to capture the connectivity of a geometrically complex high-dimensional space by constructing a graph of local paths connecting points randomly sampled from that space. A roadmap G = (V,E) is a directed graph. Each vertex v is a randomly sampled conformation in the conformation space C. Each (directed) edge from vertex vi to vertex vj carries a weight Pij, which represents the probability that the molecule will move to conformation vj, given that it is currently at vi. The probability Pij is 0 if there is no edge from vi to vj. Otherwise, it depends on the energy difference between conformations.
Stochastic roadmap simulation is used to explore the kinetics of molecular motion by simultaneously examining multiple pathways in the roadmap. Ensemble properties of molecular motion (e.g., probability of folding (PFold), escape time in ligand-protein binding) is computed efficiently and accurately with stochastic roadmap simulation. PFold values are computed using the first step analysis of Markov chain theory.
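As an illustration of the first step analysis mentioned above, the sketch below computes PFold on a small roadmap by solving the linear system p_i = Σ_j P_ij p_j with boundary conditions p = 1 in the folded set and p = 0 in the unfolded set. The transition matrix and state labels are made-up illustrative values, not data from the source.

```python
import numpy as np

# Toy roadmap with 4 conformations: state 0 = unfolded (absorbing, PFold = 0),
# state 3 = folded (absorbing, PFold = 1), states 1 and 2 are intermediate.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],   # unfolded stays unfolded (boundary condition)
    [0.3, 0.4, 0.2, 0.1],
    [0.1, 0.2, 0.4, 0.3],
    [0.0, 0.0, 0.0, 1.0],   # folded stays folded (boundary condition)
])
interior = [1, 2]           # states where PFold is unknown
folded = [3]

# First step analysis: p_i = sum_j P_ij p_j  =>  (I - P_II) p_I = P_IF * 1
A = np.eye(len(interior)) - P[np.ix_(interior, interior)]
b = P[np.ix_(interior, folded)].sum(axis=1)
p_interior = np.linalg.solve(A, b)
print(dict(zip(interior, p_interior.round(3))))   # PFold for the intermediate states
```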
See also
Jean-Claude Latombe
Mark Overmars
References
Robot control
Stochastic simulation | Stochastic roadmap simulation | [
"Engineering"
] | 271 | [
"Robotics engineering",
"Robot control"
] |
27,007,301 | https://en.wikipedia.org/wiki/Gent%20hyperelastic%20model | The Gent hyperelastic material model is a phenomenological model of rubber elasticity that is based on the concept of limiting chain extensibility. In this model, the strain energy density function is designed such that it has a singularity when the first invariant of the left Cauchy-Green deformation tensor reaches a limiting value .
The strain energy density function for the Gent model is
$$ W = -\frac{\mu J_m}{2} \ln\!\left(1 - \frac{I_1 - 3}{J_m}\right) $$
where $\mu$ is the shear modulus and $J_m = I_m - 3$.
In the limit where $I_m \to \infty$, the Gent model reduces to the Neo-Hookean solid model. This can be seen by expressing the Gent model in the form
$$ W = -\frac{\mu}{2x}\ln\!\left[1 - (I_1 - 3)\,x\right], \qquad x := \frac{1}{J_m}. $$
A Taylor series expansion of $\ln\!\left[1 - (I_1 - 3)\,x\right]$ around $x = 0$ and taking the limit as $x \to 0$ leads to
$$ W = \frac{\mu}{2}(I_1 - 3), $$
which is the expression for the strain energy density of a Neo-Hookean solid.
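As a quick numerical check of this limit, the sketch below evaluates the incompressible Gent strain energy for increasing values of J_m and compares it with the neo-Hookean value μ(I_1 − 3)/2; the shear modulus and stretch used are arbitrary illustrative numbers.

```python
import math

def gent_energy(I1: float, mu: float, Jm: float) -> float:
    """Incompressible Gent strain energy density W = -(mu*Jm/2) * ln(1 - (I1-3)/Jm)."""
    return -0.5 * mu * Jm * math.log(1.0 - (I1 - 3.0) / Jm)

def neo_hookean_energy(I1: float, mu: float) -> float:
    return 0.5 * mu * (I1 - 3.0)

mu = 1.0                      # shear modulus (arbitrary units)
lam = 2.0                     # uniaxial stretch (illustrative)
I1 = lam**2 + 2.0 / lam       # first invariant for incompressible uniaxial extension

for Jm in (10.0, 100.0, 1000.0):
    print(Jm, round(gent_energy(I1, mu, Jm), 4), round(neo_hookean_energy(I1, mu), 4))
# As Jm grows, the Gent energy approaches the neo-Hookean value.
```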
Several compressible versions of the Gent model have been designed. One such model has the form given below (note that this strain energy function yields a non-zero hydrostatic stress at zero deformation; see the references for other compressible Gent models),
where $J = \det(\boldsymbol{F})$, $\kappa$ is the bulk modulus, and $\boldsymbol{F}$ is the deformation gradient.
Consistency condition
We may alternatively express the Gent model in the form
For the model to be consistent with linear elasticity, the following condition has to be satisfied:
where is the shear modulus of the material.
Now, at ,
Therefore, the consistency condition for the Gent model is
The Gent model assumes that
Stress-deformation relations
The Cauchy stress for the incompressible Gent model is given by
Uniaxial extension
For uniaxial extension in the $\mathbf{n}_1$-direction, the principal stretches are $\lambda_1 = \lambda,~ \lambda_2 = \lambda_3$. From incompressibility $\lambda_1\lambda_2\lambda_3 = 1$. Hence $\lambda_2^2 = \lambda_3^2 = 1/\lambda$.
Therefore,
The left Cauchy-Green deformation tensor can then be expressed as
If the directions of the principal stretches are oriented with the coordinate basis vectors, we have
If , we have
Therefore,
The engineering strain is . The engineering stress is
Equibiaxial extension
For equibiaxial extension in the $\mathbf{n}_1$ and $\mathbf{n}_2$ directions, the principal stretches are $\lambda_1 = \lambda_2 = \lambda$. From incompressibility $\lambda_1\lambda_2\lambda_3 = 1$. Hence $\lambda_3 = 1/\lambda^2$.
Therefore,
The left Cauchy-Green deformation tensor can then be expressed as
If the directions of the principal stretches are oriented with the coordinate basis vectors, we have
The engineering strain is . The engineering stress is
Planar extension
Planar extension tests are carried out on thin specimens which are constrained from deforming in one direction. For planar extension in the $\mathbf{n}_1$ direction with the $\mathbf{n}_3$ direction constrained, the principal stretches are $\lambda_1 = \lambda,~ \lambda_3 = 1$. From incompressibility $\lambda_1\lambda_2\lambda_3 = 1$. Hence $\lambda_2 = 1/\lambda$.
Therefore,
The left Cauchy-Green deformation tensor can then be expressed as
If the directions of the principal stretches are oriented with the coordinate basis vectors, we have
The engineering strain is . The engineering stress is
Simple shear
The deformation gradient for a simple shear deformation has the form
where are reference orthonormal basis vectors in the plane of deformation and the shear deformation is given by
In matrix form, the deformation gradient and the left Cauchy-Green deformation tensor may then be expressed as
Therefore,
and the Cauchy stress is given by
In matrix form,
References
See also
Hyperelastic material
Strain energy density function
Mooney-Rivlin solid
Finite strain theory
Stress measures
Continuum mechanics
Elasticity (physics)
Non-Newtonian fluids
Rubber properties
Solid mechanics | Gent hyperelastic model | [
"Physics",
"Materials_science"
] | 649 | [
"Physical phenomena",
"Solid mechanics",
"Continuum mechanics",
"Elasticity (physics)",
"Deformation (mechanics)",
"Classical mechanics",
"Mechanics",
"Physical properties"
] |
27,007,597 | https://en.wikipedia.org/wiki/Polynomial%20hyperelastic%20model | The polynomial hyperelastic material model is a phenomenological model of rubber elasticity. In this model, the strain energy density function is of the form of a polynomial in the two invariants of the left Cauchy-Green deformation tensor.
The strain energy density function for the polynomial model is
$$ W = \sum_{i,j=0}^{n} C_{ij} (I_1 - 3)^i (I_2 - 3)^j $$
where $C_{ij}$ are material constants and $C_{00} = 0$.
For compressible materials, a dependence on the volume is added:
$$ W = \sum_{i,j=0}^{n} C_{ij} (\bar{I}_1 - 3)^i (\bar{I}_2 - 3)^j + \sum_{k=1}^{m} \frac{1}{D_k}(J-1)^{2k} $$
where
$$ \bar{I}_1 = J^{-2/3} I_1, \qquad \bar{I}_2 = J^{-4/3} I_2, \qquad J = \det(\boldsymbol{F}). $$
In the limit where only $C_{10}$ is non-zero, the polynomial model reduces to the Neo-Hookean solid model. For a compressible Mooney-Rivlin material, $n = 1$, $m = 1$, and we have
$$ W = C_{01}(\bar{I}_2 - 3) + C_{10}(\bar{I}_1 - 3) + \frac{1}{D_1}(J-1)^{2}. $$
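As a small numerical illustration of the incompressible form of this model, the sketch below evaluates W for a truncated polynomial (Mooney-Rivlin) choice of constants at a given uniaxial stretch; the constants and the stretch are arbitrary illustrative values.

```python
def polynomial_energy(I1: float, I2: float, C: dict[tuple[int, int], float]) -> float:
    """Incompressible polynomial strain energy W = sum_ij C_ij (I1-3)^i (I2-3)^j, with C_00 = 0."""
    return sum(c * (I1 - 3.0) ** i * (I2 - 3.0) ** j for (i, j), c in C.items())

# Mooney-Rivlin is the n = 1 truncation: only C10 and C01 are non-zero (illustrative values, e.g. MPa).
C = {(1, 0): 0.3, (0, 1): 0.1}

lam = 1.5                          # uniaxial stretch of an incompressible specimen
I1 = lam**2 + 2.0 / lam            # first invariant
I2 = 2.0 * lam + 1.0 / lam**2      # second invariant
print(round(polynomial_energy(I1, I2, C), 4))
```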
References
See also
Hyperelastic material
Strain energy density function
Mooney-Rivlin solid
Finite strain theory
Stress measures
Continuum mechanics
Non-Newtonian fluids
Rubber properties
Solid mechanics | Polynomial hyperelastic model | [
"Physics"
] | 150 | [
"Solid mechanics",
"Mechanics",
"Classical mechanics",
"Continuum mechanics"
] |
27,008,781 | https://en.wikipedia.org/wiki/Kinetic%20exchange%20models%20of%20markets | Kinetic exchange models are multi-agent dynamic models inspired by the statistical physics of energy distribution, which try to explain the robust and universal features of income/wealth distributions.
Understanding the distributions of income and wealth in an economy has been a classic problem in economics for more than a hundred years. Today it is one of the main branches of econophysics.
Data and basic tools
In 1897, Vilfredo Pareto first found a universal feature in the distribution of wealth. After that, with some notable exceptions, this field had been dormant for many decades, although accurate data had been accumulated over this period. Considerable investigations with the real data during the last fifteen years (1995–2010) revealed that the tail (typically 5 to 10 percent of agents in any country) of the income/wealth distribution indeed follows a power law. However, the majority of the population (i.e., the low-income population) follows a different distribution which is debated to be either Gibbs or log-normal.
Basic tools used in this type of modelling are probabilistic and statistical methods mostly taken from the kinetic theory of statistical physics. Monte Carlo simulations often come handy in solving these models.
Overview of the models
Since the distributions of income/wealth are the results of the interaction among many heterogeneous agents, there is an analogy with statistical mechanics, where many particles interact. This similarity was noted by Meghnad Saha and B. N. Srivastava in 1931, and thirty years later by Benoit Mandelbrot. In 1986, an elementary version of the stochastic exchange model was first proposed by J. Angle.
In the context of kinetic theory of gases, such an exchange model was first investigated by A. Dragulescu and V. Yakovenko. Later, scholars found that in 1988, Bennati had independently introduced the same kinetic exchange dynamics, thus leading to the nomenclature of this model as the Bennati-Dragulescu-Yakovenko (BDY) game. The main modelling efforts since then have focused on introducing the concepts of savings and taxation in the setting of an ideal gas-like system. Basically, it is assumed that in the short run an economy remains conserved in terms of income/wealth, so a law of conservation for income/wealth can be applied. Millions of such conservative transactions lead to a steady-state distribution of money (gamma-function-like in the Chakraborti-Chakrabarti model with uniform savings, and a gamma-like bulk distribution ending with a Pareto tail in the Chatterjee-Chakrabarti-Manna model with distributed savings) and the distribution converges to it. The distributions derived thus closely resemble those found in empirical income/wealth data.
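As an illustration of the kind of Monte Carlo simulation used for these models, the sketch below runs the simple conservative exchange dynamics with a uniform saving propensity (the Chakraborti-Chakrabarti variant mentioned above): at each step two randomly chosen agents pool the non-saved part of their money and share it randomly, conserving the total. The agent count, step count and saving propensity are illustrative parameters, not values from the literature.

```python
import random

def kinetic_exchange(n_agents=1000, steps=200_000, saving=0.5, seed=0):
    """Conservative money-exchange model with uniform saving propensity `saving`.
    With saving = 0 the steady state is exponential (Gibbs); saving > 0 gives a gamma-like distribution."""
    rng = random.Random(seed)
    money = [1.0] * n_agents            # everyone starts with the same amount; the total is conserved
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pool = (1.0 - saving) * (money[i] + money[j])   # non-saved money put on the table
        eps = rng.random()                              # random division of the pool
        money[i] = saving * money[i] + eps * pool
        money[j] = saving * money[j] + (1.0 - eps) * pool
    return money

wealth = kinetic_exchange()
print(f"total money (conserved): {sum(wealth):.1f}, richest agent: {max(wealth):.2f}")
```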
Though this theory had been originally derived from the entropy maximization principle of statistical mechanics, it was shown by A. S. Chakrabarti and B. K. Chakrabarti that the same could be derived from the utility maximization principle as well, following a standard exchange model with a Cobb-Douglas utility function. Recently it has been shown that an extension of the Cobb-Douglas utility function (in the above-mentioned Chakrabarti-Chakrabarti formulation) by adding a production savings factor leads to the desired feature of growth of the economy in conformity with some earlier phenomenologically established growth laws in the economics literature. The exact distributions produced by this class of kinetic models are known only in certain limits, and extensive investigations have been made on the mathematical structures of this class of models; the general forms have not been derived so far. For a recent review (in 2024) of these developments, covering the last twenty-five years of research on kinetic exchange modelling of income or wealth dynamics and the resulting statistical properties, see the article by M. Greenberg (Dept. of Economics, University of Massachusetts Amherst & Systems Engineering, Cornell University) and H. Oliver Gao (Systems Engineering, Cornell University).
A very simple model, based on the same kinetic exchange framework, was introduced by Chakraborti in 2002 and is now popularly called the "yard sale model" because it has a few features of real one-on-one economic transactions; it leads to an oligarchy and has been extensively studied and reviewed by Boghosian.
Criticisms
This class of models has attracted criticisms from many dimensions. It has been debated for long whether the distributions derived from these models are representing the income distributions or wealth distributions. The law of conservation for income/wealth has also been a subject of criticism.
See also
Economic inequality
Econophysics
Thermoeconomics
Wealth condensation
References
Further reading
Brian Hayes, Follow the money, American Scientist, 90:400-405 (Sept.-Oct., 2002)
Jenny Hogan, There's only one rule for rich, New Scientist, 6-7 (12 March 2005)
Peter Markowich, Applied Partial Differential Equations, Springer-Verlag (Berlin, 2007)
Arnab Chatterjee, Bikas K Chakrabarti, Kinetic exchange models for income and wealth distribution, European Physical Journal B, 60:135-149(2007)
Victor Yakovenko, J. B. Rosser, Colloquium: statistical mechanics of money, wealth and income, Reviews of Modern Physics 81:1703-1725 (2009)
Thomas Lux, F. Westerhoff, Economics crisis, Nature Physics, 5:2 (2009)
Sitabhra Sinha, Bikas K Chakrabarti, Towards a physics of economics, Physics News 39(2) 33-46 (April 2009)
Stephen Battersby, The physics of our finances, New Scientist, p. 41 (28 July 2012)
Bikas K Chakrabarti, Anirban Chakraborti, Satya R Chakravarty, Arnab Chatterjee, Econophysics of Income & Wealth Distributions, Cambridge University Press (Cambridge 2013).
Lorenzo Pareschi and Giuseppe Toscani, Interacting Multiagent Systems: Kinetic equations and Monte Carlo methods Oxford University Press (Oxford 2013)
Kishore Chandra Dash, "Story of Econophysics" Cambridge Scholars Press (UK, 2019)
Marcelo Byrro Ribeiro, Income Distribution Dynamics of Economic Systems: An Econophysical Approach, Cambridge University Press (Cambridge, UK, 2020)
Giuseppe Toscani, Parongama Sen and Soumyajyoti Biswas (Eds), "Kinetic exchange models of societies and economies" Philosophical Transactions of the Royal Society A 380: 20210170 (Special Issue, May 2022)
Applied and interdisciplinary physics
Distribution of wealth
Schools of economic thought
Statistical mechanics
Interdisciplinary subfields of economics | Kinetic exchange models of markets | [
"Physics"
] | 1,355 | [
"Statistical mechanics",
"Applied and interdisciplinary physics"
] |
27,009,195 | https://en.wikipedia.org/wiki/Astro%20Space%20Center%20%28Russia%29 | This enclave of scientific research is officially known as Astro Space Center of PN Lebedev Physics Institute, (ASC LPI, ) and is under the purview of the Russian Academy of Sciences. Generally speaking, the space center's mission focuses on astrophysics, which includes cosmology. The emphasis is on accomplishing basic research in this science. The research leads into exploring the composition, and structure of astronomical objects, interstellar and interplanetary space along with exploring how these evolved.
ASC divisions
The Astro Space Center is separated into three divisions, two of which are national observatories: the Moscow branch, the Pushchino Radio Astronomy Observatory, and the Kalyazin Radio Astronomy Observatory. The divisions carry out research and also perform administrative duties.
Moscow branch
The Moscow branch is itself divided into approximately eight divisions. These divisions conduct research in theoretical physics, the thermal history of the universe, various properties of extragalactic objects, and the design and development of space and astronomy research equipment.
Pushchino Radio Astronomy Observatory
Another division of ASC LPI is the Pushchino Radio Astronomy Observatory, sited near Pushchino. It has an array of antennas running N–S and E–W, producing a fan beam on the sky.
It employs 45 researchers along with 60 engineers and technicians to staff the several major departments and laboratories of the observatory, together with 80 other people who handle administration, the workshops, the garage, and guard duties.
The departments and labs are designed to focus on scientific and technical aspects of observatory sciences.
The departments are as follows: Plasma astrophysics, Extragalactic radio astronomy, Pulsar physics, Space radio spectroscopy, and Pulsar astrometry. The laboratories are as follows: Radio astronomy equipment, Automation radio astronomy research, Computer engineering and information technology, and Radio telescopes of the meter wavelength range.
Kalyazin Radio Astronomy Observatory
A third division is the Kalyazin Radio Astronomy Observatory.
Achievements of the ASC
The ASC has led the development and deployment of an international VLBI project called RadioAstron. VLBI stands for Very Long Baseline Interferometry, a radio astronomy technique that allows observations of an object made simultaneously by many telescopes to be combined, emulating a telescope with a size equal to the maximum separation between the telescopes.
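As a back-of-the-envelope illustration of why such long baselines matter, the sketch below estimates the angular resolution of an interferometer from θ ≈ λ / B. The wavelength and baseline values are illustrative assumptions (roughly a space-ground baseline of a few hundred thousand kilometres at centimetre wavelengths), not official RadioAstron figures.

```python
import math

def angular_resolution_microarcsec(wavelength_m: float, baseline_m: float) -> float:
    """Diffraction-limited resolution theta ~ lambda / B, converted from radians to microarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return theta_rad * (180.0 / math.pi) * 3600.0 * 1e6

# Illustrative numbers: 1.3 cm observing wavelength, 350,000 km space-ground baseline.
print(round(angular_resolution_microarcsec(0.013, 3.5e8), 1), "microarcseconds")
```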
Notable works
The following are notable works published in affiliation with Russia's Astro Space Center:
Notes
References
Astrophysics research institutes
Astronomical observatories in Russia
Radio astronomy
Institutes of the Russian Academy of Sciences
Astronomy in the Soviet Union | Astro Space Center (Russia) | [
"Physics",
"Astronomy"
] | 552 | [
"Radio astronomy",
"Astronomical sub-disciplines",
"Astrophysics research institutes",
"Astrophysics"
] |
20,027,027 | https://en.wikipedia.org/wiki/Split-pi%20topology | In electronics, a split-pi topology is a pattern of component interconnections used in a kind of power converter that can theoretically produce an arbitrary output voltage, either higher or lower than the input voltage. In practice the upper voltage output is limited to the voltage rating of components used. It is essentially a boost (step-up) converter followed by a buck (step-down) converter. The topology and use of MOSFETs make it inherently bi-directional which lends itself to applications requiring regenerative braking.
The split-pi converter is a type of DC-to-DC converter that has an output voltage magnitude either greater than or less than the input voltage magnitude. It is a switched-mode power supply with a circuit configuration similar to a boost converter followed by a buck converter. Split-pi gets its name from the pi circuit, owing to the use of two pi filters in series, split by the switching MOSFET bridges.
Other DC–DC converter topologies that can produce output voltage magnitude either greater than or less than the input voltage magnitude include the boost-buck converter topologies (the split-pi, the Ćuk converter, the SEPIC, etc.) and the buck–boost converter topologies.
Principle of operation
In typical operation where a source voltage is located at the left-hand side input terminals, the left-hand bridge operates as a boost converter and the right-hand bridge operates as a buck converter. In regenerative mode, the reverse is true with the left-hand bridge operating as a buck converter and the right as the boost converter.
Only one bridge switches at any time to provide voltage conversion, with the unswitched bridge's top switch always on. A straight-through 1:1 voltage output is achieved with the top switch of each bridge switched on and the bottom switches off. The output voltage is adjustable based on the duty cycle of the switching MOSFET bridge.
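As a rough sketch of how the duty cycles set the conversion ratio, the example below combines the ideal, lossless converter equations for the two stages (boost: V_mid = V_in / (1 − D_boost); buck: V_out = D_buck · V_mid). The input voltage and duty-cycle values are illustrative assumptions, not figures from the text or the patent.

```python
def split_pi_output(v_in: float, d_boost: float, d_buck: float) -> float:
    """Ideal (lossless) split-pi output: boost stage followed by buck stage."""
    v_mid = v_in / (1.0 - d_boost)   # ideal boost conversion
    return d_buck * v_mid            # ideal buck conversion

# Illustrative operating points on a 48 V input (only one bridge actively switching at a time):
print(split_pi_output(48.0, d_boost=0.5, d_buck=1.0))   # 96 V  (boost only)
print(split_pi_output(48.0, d_boost=0.0, d_buck=0.25))  # 12 V  (buck only)
print(split_pi_output(48.0, d_boost=0.0, d_buck=1.0))   # 48 V  (straight-through 1:1)
```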
Applications
Electric drivetrain
Motor control
Battery balancing
Regenerative braking
References
British Patent GB2376357B - Power converter and method for power conversion
DC-to-DC converters
Voltage regulation | Split-pi topology | [
"Physics"
] | 448 | [
"Voltage",
"Physical quantities",
"Voltage regulation"
] |
20,037,546 | https://en.wikipedia.org/wiki/Benzylisoquinoline | Substitution of the heterocycle isoquinoline at the C1 position by a benzyl group provides 1‑benzylisoquinoline, the most widely examined of the numerous benzylisoquinoline structural isomers. The 1-benzylisoquinoline moiety can be identified within numerous compounds of pharmaceutical interest, such as moxaverine; but most notably it is found within the structures of a wide variety of plant natural products, collectively referred to as benzylisoquinoline alkaloids. This class is exemplified in part by the following compounds: papaverine, noscapine, codeine, morphine, apomorphine, berberine, tubocurarine.
Biosynthesis
(S)-Norcoclaurine (higenamine) has been identified as the central 1-benzyl-tetrahydro-isoquinoline precursor from which numerous complex biosynthetic pathways eventually emerge. These pathways collectively lead to the structurally disparate compounds comprising the broad classification of plant natural products referred to as benzylisoquinoline alkaloids (BIA), which have been comprehensively discussed by Hagel. The biosynthesis of (S)-norcoclaurine, which is catalyzed by (S)-norcoclaurine synthase, is accomplished by the stereoselective condensation of dopamine and 4-hydroxyphenylacetaldehyde (4-HPAA); each of these compounds is prepared by multiple enzymatic transformations from L-tyrosine.
It is of interest to note that early studies initially identified norlaudanosoline (tetrahydropapaveroline) as the purported central precursor for the biosynthesis of BIAs. However, more than two decades later it was finally unequivocally established that (S)-norcoclaurine was the central precursor for the biosynthesis of the structurally diverse BIAs.
Examples of benzylisoquinoline alkaloids
See also
Morphinan
Indole
Indolizidine
References
Benzylisoquinoline biosynthesis by cultivated plant cells and isolated enzymes
Alkaloids | Benzylisoquinoline | [
"Chemistry"
] | 455 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Natural products",
"Alkaloids"
] |
6,551,283 | https://en.wikipedia.org/wiki/Cyclic%20pump | A Cyclic pump is an apparatus which moves a fluid in a periodic uni-directional direction from one containment system to another while overcoming static conditions that would, without intervention, not move. The intervention predicated by the pump alters pressures, volumes and sometimes temperatures of fluids (gaseous, liquid, colloidal, plasmic, etc.) in such a way that the fluids are transported to other chambers or enclosures (including pipes), thus "flowing" in a consistent direction, usually having characteristics of pulsation (as is the case with the Human heart) or of uniform motion (as is the case with an Automobile motor oil pump). Cyclic pumps are generally incorporated into machines to deal with all sorts of fluids associated with that machine's functionality.
References
See also
Water hammer
Hydraulic ram
Fluid dynamics
Switched-mode power supply
Boost converter
Buck converter
Buck–boost converter
Pumps
Articles containing video clips | Cyclic pump | [
"Physics",
"Chemistry"
] | 188 | [
"Physical systems",
"Hydraulics",
"Turbomachinery",
"Pumps"
] |
6,552,190 | https://en.wikipedia.org/wiki/Stress%20space | In continuum mechanics, Haigh–Westergaard stress space, or simply stress space is a 3-dimensional space in which the three spatial axes represent the three principal stresses of a body subject to stress. This space is named after Bernard Haigh and Harold M. Westergaard.
In mathematical terms, H–W space can also be interpreted as a set of numerical markers of stress-tensor orbits (with respect to the proper rotation group, the special orthogonal group SO(3)); every point of H–W space represents one orbit.
Functions of the principal stresses, such as the yield function, can be represented by surfaces in stress space. In particular, the surface represented by the von Mises yield function is a right circular cylinder whose axis makes equal angles with the three stress axes.
In 2-dimensional models, stress space reduces to a plane and the von Mises yield surface reduces to an ellipse.
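For reference (a standard form consistent with the description above, not quoted from the article), the von Mises yield surface in principal stresses is:

```latex
% Von Mises yield surface in Haigh–Westergaard (principal-stress) space:
% a right circular cylinder about the hydrostatic axis \sigma_1=\sigma_2=\sigma_3,
% where \sigma_y denotes the uniaxial yield stress.
(\sigma_1-\sigma_2)^2 + (\sigma_2-\sigma_3)^2 + (\sigma_3-\sigma_1)^2 = 2\sigma_y^2
```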
See also
Bigoni–Piccolroaz yield criterion
References
Continuum mechanics | Stress space | [
"Physics"
] | 208 | [
"Classical mechanics",
"Continuum mechanics"
] |
6,553,354 | https://en.wikipedia.org/wiki/Tightness%20of%20measures | In mathematics, tightness is a concept in measure theory. The intuitive idea is that a given collection of measures does not "escape to infinity".
Definitions
Let X be a Hausdorff space, and let Σ be a σ-algebra on X that contains the topology T. (Thus, every open subset of X is a measurable set and Σ is at least as fine as the Borel σ-algebra on X.) Let M be a collection of (possibly signed or complex) measures defined on Σ. The collection M is called tight (or sometimes uniformly tight) if, for any ε > 0, there is a compact subset Kε of X such that, for all measures μ ∈ M,
|μ|(X \ Kε) < ε,
where |μ| is the total variation measure of μ. Very often, the measures in question are probability measures, so the last part can be written as
μ(Kε) > 1 − ε.
If a tight collection M consists of a single measure μ, then (depending upon the author) μ may either be said to be a tight measure or to be an inner regular measure.
If Y is an X-valued random variable whose probability distribution on X is a tight measure then Y is said to be a separable random variable or a Radon random variable.
Another equivalent criterion for the tightness of a collection M is sequential weak compactness. We say the family of probability measures M is sequentially weakly compact if, for every sequence {μn} from the family, there is a subsequence of measures that converges weakly to some probability measure μ. It can be shown that a family of measures is tight if and only if it is sequentially weakly compact.
Examples
Compact spaces
If X is a metrizable compact space, then every collection of (possibly complex) measures on X is tight. This is not necessarily so for non-metrisable compact spaces. If we take the ordinal space [0, ω1] with its order topology, then there exists a measure μ on it that is not inner regular. Therefore, the singleton {μ} is not tight.
Polish spaces
If X is a Polish space, then every probability measure on X is tight. Furthermore, by Prokhorov's theorem, a collection of probability measures on X is tight if and only if it is precompact in the topology of weak convergence.
A collection of point masses
Consider the real line ℝ with its usual Borel topology. Let δx denote the Dirac measure, a unit mass at the point x in ℝ. The collection
M1 := { δn : n ∈ ℕ }
is not tight, since the compact subsets of ℝ are precisely the closed and bounded subsets, and any such set, since it is bounded, has δn-measure zero for large enough n. On the other hand, the collection
M2 := { δ1/n : n ∈ ℕ }
is tight: the compact interval [0, 1] will work as Kε for any ε > 0. In general, a collection of Dirac delta measures on ℝn is tight if, and only if, the collection of their supports is bounded.
A collection of Gaussian measures
Consider n-dimensional Euclidean space ℝn with its usual Borel topology and σ-algebra. Consider a collection of Gaussian measures
Γ = { γi : i ∈ I },
where the measure γi has expected value (mean) mi ∈ ℝn and covariance matrix Ci. Then the collection Γ is tight if, and only if, the collections { mi : i ∈ I } and { Ci : i ∈ I } are both bounded.
Tightness and convergence
Tightness is often a necessary criterion for proving the weak convergence of a sequence of probability measures, especially when the measure space has infinite dimension. See
Finite-dimensional distribution
Prokhorov's theorem
Lévy–Prokhorov metric
Weak convergence of measures
Tightness in classical Wiener space
Tightness in Skorokhod space
Tightness and stochastic ordering
A family (Xα) of real-valued random variables is tight if and only if there exists an almost surely finite random variable Y
such that
|Xα| ⪯ Y
for all α, where
⪯ denotes the stochastic order defined by
U ⪯ V if E[f(U)] ≤ E[f(V)] for all nondecreasing functions f.
Exponential tightness
A strengthening of tightness is the concept of exponential tightness, which has applications in large deviations theory. A family of probability measures (μδ)δ>0 on a Hausdorff topological space X is said to be exponentially tight if, for any ε > 0, there is a compact subset Kε of X such that
limsupδ→0 δ log μδ(X \ Kε) < −ε.
References
(See chapter 2)
Measure theory
Measures (measure theory) | Tightness of measures | [
"Physics",
"Mathematics"
] | 787 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
4,983,963 | https://en.wikipedia.org/wiki/Stone%E2%80%93Wales%20defect | A Stone–Wales defect is a crystallographic defect that involves the change of connectivity of two π-bonded carbon atoms, leading to their rotation by 90° with respect to the midpoint of their bond. The reaction commonly involves conversion between a naphthalene-like structure into a fulvalene-like structure, that is, two rings that share an edge vs two separate rings that have vertices bonded to each other.
The reaction occurs on carbon nanotubes, graphene, and similar carbon frameworks, where the four adjacent six-membered rings of a pyrene-like region are changed into two five-membered rings and two seven-membered rings when the bond uniting two of the adjacent rings rotates. In these materials, the rearrangement is thought to have important implications for the thermal, chemical, electrical, and mechanical properties. The rearrangement is an example of a pyracyclene rearrangement.
History
The defect is named after Anthony Stone and David J. Wales at the University of Cambridge, who described it in a 1986 paper on the isomerization of fullerenes. However, a similar defect was described much earlier by G. J. Dienes in 1952 in a paper on diffusion mechanisms in graphite and later in 1969 in a paper on defects in graphite by Peter Thrower. For this reason, the term Stone–Thrower–Wales defect is sometimes used.
Structural effects
The defects have been imaged using scanning tunneling microscopy and transmission electron microscopy and can be determined using various vibrational spectroscopy techniques.
It has been proposed that the coalescence process of fullerenes or carbon nanotubes may occur through a sequence of such rearrangements. The defect is thought to be responsible for nanoscale plasticity and the brittle–ductile transitions in carbon nanotubes.
Chemical details
The activation energy for the simple atomic motion that gives the bond rotation apparent in a Stone–Wales defect is fairly high (a barrier of several electronvolts), but various processes can create the defects at substantially lower energies than might be expected.
The rearrangement creates a structure with less resonance stabilization among the sp2 atoms involved and higher strain energy in the local structure. As a result, the defect creates a region with greater chemical reactivity, including acting as a nucleophile and creating a preferred site for binding to hydrogen atoms. The high affinity of these defects for hydrogen, coupled with the large surface area of the bulk material, might make these defects an important aspect in the use of carbon nanomaterials for hydrogen storage. Incorporation of defects along a carbon-nanotube network can program a carbon-nanotube circuit to enhance the conductance along a specific path. In this scenario, the defects lead to a charge delocalization, which redirects an incoming electron down a given trajectory.
References
External links
Carbon nanotubes
Crystallographic defects | Stone–Wales defect | [
"Chemistry",
"Materials_science",
"Engineering"
] | 593 | [
"Crystallographic defects",
"Crystallography",
"Materials degradation",
"Materials science"
] |
4,985,628 | https://en.wikipedia.org/wiki/Cauchy%E2%80%93Born%20rule | The Cauchy–Born rule or Cauchy–Born approximation is a basic hypothesis used in the mathematical formulation of solid mechanics which relates the movement of atoms in a crystal to the overall deformation of the bulk solid. A widespread simplified version states that in a crystalline solid subject to a small strain, the positions of the atoms within the crystal lattice follow the overall strain of the medium. To give a more precise definition, consider a crystalline body where the position of the atoms can be described by a set of reference lattice vectors . The Cauchy-Born rules states that if the body is deformed by a deformation whoes gradient is , the lattice of the deform body can be described by . The rule only describes the lattice, not the atoms.
The currently accepted form is Max Born's refinement of Cauchy's original hypothesis which was used to derive the equations satisfied by the Cauchy stress tensor. The approximation generally holds for face-centered and body-centered cubic crystal systems. For complex lattices such as diamond, however, the rule has to be modified to allow for internal degrees of freedom between the sublattices. The approximation can then be used to obtain bulk properties of crystalline materials such as stress-strain relationship.
For crystalline bodies of finite size, the effect of surface stress is also significant. However, the standard Cauchy–Born rule cannot deduce the surface properties. To overcome this limitation, Park et al. (2006) proposed a surface Cauchy–Born rule. Several modified forms of the Cauchy–Born rule have also been proposed to cater to crystalline bodies having special shapes. Arroyo & Belytschko (2002) proposed an exponential Cauchy Born rule for modeling of mono-layered crystalline sheets as two-dimensional continuum shells. Kumar et al. (2015) proposed a helical Cauchy–Born rule for modeling slender bodies (such as nano and continuum rods) as special Cosserat continuum rods.
References
.
.
.
.
Crystallography
Max Born | Cauchy–Born rule | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 417 | [
"Materials science stubs",
"Materials science",
"Crystallography stubs",
"Crystallography",
"Condensed matter physics"
] |
4,989,510 | https://en.wikipedia.org/wiki/Samar%20Mubarakmand | Samar Mubarakmand (Urdu: ; b. 17 September 1942; ) is a Pakistani nuclear physicist known for his research in gamma spectroscopy and the experimental development of the Charged Particle Accelerator at the Pakistan Institute of Nuclear Science & Technology (PINSTECH).
Due to his doctoral research in fast neutron spectrometry, he was appointed as the head of the Diagnostic Group for the Pakistan Atomic Energy Commission and eventually served as the test director for nuclear weapons testing in the Ras Koh Range in Balochistan in Pakistan, in 1998.
Prior to that, he was the lead scientist for Pakistan's military Hatf Program, overseeing the development of the Shaheen and Babur missile programs, while serving as Chairman of the National Engineering and Scientific Commission for Government of Punjab between 2001 and 2007. His career in government continued when he was appointed as a science adviser to the federal Government of Pakistan to assist the Thar coalfield project. He is currently heading the mineral exploration work in district Chiniot as Chairman of the Board of Directors at the Punjab Mineral Company (Mines & Minerals Department), Government of Punjab.
Biography
Early Life and Education
Samar Mubarakmand was born in Rawalpindi on 17 September 1942 in a Punjabi family from Hoshiarpur, East Punjab. He gained his education in Lahore and matriculated from St. Anthony's High School in 1956. After passing the university entrance exams, he enrolled at Government College University (GCU) where he studied physics under Tahir Hussain. In 1960, he graduated with a Bachelor of Science (BSc) in physics with a concentration in experimental physics and a minor in mathematics. During his college years, Mubarakmand was an avid swimmer and represented GCU at the National Games of Pakistan.
He conducted research in experimental physics under Hussain and built an experimental apparatus for his master's thesis. His thesis contained detailed work on gamma ray spectrometry and performed an experiment that was witnessed by nuclear physicist Denys Wilkinson as part of his master's program. Wilkinson spoke highly of his work and invited Mubarakmand to visit Oxford University in the United Kingdom to resume studies in experimental physics.
In 1962, Mubarakmand gained a Master of Science (MSc) in Physics after publishing his thesis, "Construction of a gamma-ray spectrometer," under Hussain. In 1962, he joined the Pakistan Atomic Energy Commission (PAEC) and won a scholarship to study at Oxford University and joined the group led by Wilkinson. At Oxford, Mubarakmand participated in preparing a 22 million volt particle accelerator and was part of the team that commissioned it. He also learned about linear accelerators, and after returning to Pakistan, he built one. Apart from studying, Mubarakmand played cricket and fast bowled for the Oxford University Cricket Club. In 1966, he completed his doctoral thesis under Wilkinson and was awarded a Doctor of Philosophy (DPhil) in Experimental Nuclear Physics.
Pakistan Atomic Energy Commission (PAEC)
On his return to Pakistan, he joined the Pakistan Atomic Energy Commission. From the experience he gained in the use of high energy accelerators, he converted a neutron generator available with PAEC to study nuclear structure and fast neutron scattering. Up to 1974, he completed research in fast neutron induced reactions and developed experimental techniques for neutron spectroscopy. This resulted in several publications in the journals Nuclear Physics and Nuclear Instrumentation and Methods (North-Holland publications).
From 1974 to 1977, he was Director at Center for Advanced Studies in Physics (CASP) at Government College University, Lahore on temporary attachment. During his tenure, he developed interest in the applications of solid-state track detectors. He presented his work at an international conference held at the Max Planck Institute, Munich, in 1976.
From 1977 to 1980, Mubarakmand worked on applications of lasers and the separation of isotopes of sulfur in sulfur hexafluoride. Mubarakmand pioneered the transmission of fast signals through optical fibers, avoiding external interference from electromagnetic radiation on data transmission. This later led to the adoption of wide applications of fiber-optic technology in communications throughout the country.
1971 War and Atomic Bomb Project
In the 1980s, when PAEC was busy developing several designs of nuclear devices, it was felt that these designs would have to be ratified through cold tests. Samar Mubarakmand, an experimental physicist, was known for his expertise in the field of fast neutron spectrometry.
During his research in nuclear structure for his doctorate at Oxford, Mubarakmand developed and refined the technique for spectroscopy of fast neutrons released during the nuclear reactions under his study. This technique has direct applications in carrying out the cold tests of nuclear devices.
Mubarakmand was the first choice of Munir Khan, Chairman of the Pakistan Atomic Energy Commission, to head the Diagnostic Group.
Several designs of nuclear devices were developed and high confidence in their performance assured through cold testing. Each of these tests involved detection and measurement of fast neutrons emitted in short sharp sub microsecond bursts. All the available nuclear devices designed and developed were consequently Cold Tested and qualified. The Diagnostic Group also accomplished the important task of designing and manufacturing a neutron trigger source based on fusion reactions. This neutron source would Trigger a nuclear device in a Hot Test.
From 1991 to 1994, Mubarakmand was given a higher responsibility to lead the Directorate of Technical Development (DTD). He supervised and modernized the method of working at his new assignment and within the short period of three years more efficient, powerful and compact nuclear devices were produced to meet the varied requirements of Pakistan’s Fighter Aircraft as well as the upcoming delivery systems of different types. All the designs were ratified through Cold Testing by his previous diagnostic team.
After three years as Director DTD, Mubarakmand was promoted as Director General DTD in 1994. In 1995, he was given the greater responsibility of Member Technical PAEC which he held till the year 2000. During the five year tenure, Mubarakmand, apart from looking after the classified side of the Technical Program of PAEC, also became responsible for the optimal functioning of the centers of Nuclear Medicine, Agriculture, PINSTECH and New Labs. At the last mentioned facility, Mubarakmand and an outstanding team of PAEC’s Scientists and Engineers were able to establish and commission Pakistan’s first reprocessing Plant for burnt reactor fuel. Thus, an important milestone of producing Metallic plutonium was achieved in the year 2000.
Several areas were visited and studied with the help of Senior Geologists of the PAEC and finally the Chairman PAEC, Ishfaq Ahmed selected the Chaghai Site for conducting Pakistan’s Hot Tests at an appropriate time. Mubarakmand supervised the installation of Diagnostic Equipment and other facilities at the Site relevant to the safe conduction of the Nuclear Tests.
In 2005, Mubarakmand recalled his memories in an interview on Hamid Mir's Capital Talk television show and said:
Recalling Munir Ahmad Khan and PAEC's role and its relation to the atomic bomb project priority dispute, Mubarakmand later said that:
Pakistan's Missile Program
In the 1990s, Mubarakmand took special initiatives in the advancement of the space program and led a team of engineers to successfully develop the Shaheen-I missile. He was the founding director of the National Defence Complex (NDC) bureau that initiated the work on the Shaheen-I and gathered support for the program. Necessary funding for the program was secured by the military. Mubarakmand oversaw the development of the solid-fuel rocket booster. Initiated in 1987 by the Pakistan Ministry of Defence in response to India's Integrated Guided Missile Development Programme, Pakistan's spin-off missile program was aggressively pursued by Prime Minister Benazir Bhutto in 1993. The Shaheen-I missile was successfully test-fired in 1999 by a team of engineers led by Mubarakmand. Subsequently, the Shaheen-II and Shaheen-III missiles were successfully test-fired, with ranges of 2,000 km and 2,750 km respectively.
Key strategic weapon systems, such as the Babur and Ghaznavi missiles, were also built by his team.
Development of Babur Cruise Missile with a range of 700 Km was also commenced during the same period. Several tests of its boost phase and flight phase were conducted with the objective of providing Pakistan with a second strike capability. Mubarakmand retired from NESCOM in November 2007.
In 2008, Mubarakmand joined the Planning Commission of Pakistan where he strongly advocated for peaceful usage of their space program. In 2009, he revealed the work on Paksat-1R, the nation's first geostationary satellite that was launched in 2011.
The satellite was described as being able to monitor agricultural programs, mineral programs and weather conditions, and Mubarakmand stated that there were sufficient funds for the defence, nuclear and space programs. The satellite was launched in 2011 from the Xichang Satellite Launch Centre in China. His relations with Abdul Qadeer Khan often remained tense over several scientific issues.
Thar Coal Project
In 2013, Mubarakmand assisted the Provincial Government of Balochistan in mineral extraction. He lobbied heavily for the implementation of the Thar coal project initiated by the Provincial Government of Sindh despite strong public criticism by Abdul Qadeer Khan. In 2015, a breakthrough in the Thar coal project was reported by the media. According to Mubarakmand, a massive coal reserve in Thar can provide affordable power for the next 600 years. When speaking to a large crowd at Cadet College Fateh Jang, he said that he had developed a solution to the growing power outage and was now waiting for the government to put it into action.
Reko Diq Copper Gold Project
The Tethyan Copper Company (TCC) approached the High Court of Justice in the British Virgin Islands in December 2020 for the enforcement of the $5.97 billion award against Pakistan issued by the International Centre for Settlement of Investment Disputes (ICSID) in the Reko Diq case.
A senior official revealed that the "misstatement" of the scientist, Mubarakmand, before the Supreme Court tribunal in 2011 was one of the main reasons behind the Supreme Court decision of 7 January 2013, when a three-member bench of the apex court, headed by then Chief Justice Iftikhar Muhammad Chaudhry, declared Chejva "illegal, void" and non-binding, leading ICSID to impose the heavy penalty on Pakistan. Mubarakmand had claimed that the Reko Diq gold mines would fetch the country around $2.5 billion annually. He had also maintained that Reko Diq and other gold reserves in the country would bring $131 billion to the national exchequer over the 56-year life of the mine. The tribunal relied on his statement.
State honours
Mubarakmand has been conferred with state honors for his services to the country by the Government of Pakistan. He is the recipient of the: Sitara-e-Imtiaz (1992); Hilal-e-Imtiaz (1998); and the Nishan-e-Imtiaz (2003), which is the highest civil honor of Pakistan. Additionally, he is a Fellow of the Pakistan Academy of Sciences (PAS), inducted by Ishfaq Ahmad in 2000.
Nishan-e-Imtiaz (2003)
Hilal-e-Imtiaz (1998)
Sitara-e-Imtiaz (1992)
PAS Nazir Ahmad Award (2005)
International Scientist of the Year (2007)
Life Member, Pakistan Nuclear Society
Roll of Honour GCU (1962)
Fellow, Pakistan Mathematical Society (2003)
Scientific journals and papers
Research publications
Aspects of a-emission from the bombardment of 58Ni with 14.7 MeV neutrons, by Naeem Ahmad Khan, Samar Mubarakmand and Masud Ahmed, journal of Nuclear physics, PINSTECH.
Cross-section measurements with a neutron generator by Samar Mubarakmand, Masud Ahmad, M. Anwar and M. S. Chaudhry.
Some characteristic differences between the etch pits due to 86Rn and 232 Th α particles in CA80–15 and LR–115 cellulose nitrate track detectors, by Hameed Ahmad Khan, M. Afzal, P. Chaudhary, Samar Mubarakmand, F. I. Nagi and A.Waheed, journal of Isotopic Radiation, PINSTECH (1977).
Application of glass solid state nuclear track detectors in the measurement of the + particle fission cross–section of uranium, by Samar Mubarakmand, K. Rashid, P. Chaudhry and Hameed Ahmad Khan, Methods of Nuclear Instrumentation. (1977)
Etching of glass solid state nuclear track detectors in aqueous solutions of (4NH)2HF, NaOH and KOH, by Hameed Ahmad Khan, R. A. Akbar, A. Waheed, P. Chaudhry and Samar Mubarakmand, journal of Isotopic Radiation, PINSTECH (1978).
See also
Pakistan and weapons of mass destruction
Shaheen (missile)
Chagai-I
Chagai-II
Kirana Hills
References
Biographical annotations
External links
Samar Mubarakmand
http://cerncourier.com/cws/article/cern/28142/1/people11_12-99
https://web.archive.org/web/20090507090152/http://www.ciitlahore.edu.pk/PL/News/VisitDrSamar110209.html
Living people
1942 births
Pashtun scientists
Scientists from Rawalpindi
Government College University, Lahore alumni
Alumni of the University of Oxford
Pakistani nuclear physicists
Experimental physicists
Project-706 people
Fluid dynamicists
Fellows of Pakistan Academy of Sciences
Academic staff of the Government College University, Lahore
St. Anthony's High School, Lahore alumni
Nuclear weapons scientists and engineers | Samar Mubarakmand | [
"Physics",
"Chemistry"
] | 2,906 | [
"Fluid dynamicists",
"Experimental physics",
"Experimental physicists",
"Fluid dynamics"
] |
4,989,808 | https://en.wikipedia.org/wiki/Pakistan%20Institute%20of%20Nuclear%20Science%20%26%20Technology | The Pakistan Institute of Nuclear Science & Technology (PINSTECH) is a federally funded research and development laboratory in Nilore, Islamabad, Pakistan.
The site was designed by the American architect Edward Durell Stone and its construction was completed in 1965. It has been described as "[maybe] the most architecturally stunning physics complex in the world".
In response to the war with India in 1971, the lab was repurposed from its original civilian mission into a primary weapons laboratory. Since the 1990s, the lab has focused increasingly on its civilian mission, and it maintains a broad portfolio of research opportunities in supercomputing, renewable energy, physical sciences, philosophy, materials science, medicine, environmental science, and mathematics.
Overview
The Pakistan Institute of Nuclear Science & Technology (PINSTECH) is one of the nation's leading research and development institutions affiliated with national security. It is a principal national laboratory responsible for ensuring the safety, security, and reliability of the nation's nuclear weapons program by advancing applications in science and technology.
PINSTECH is located in Nilore, about southeast of Islamabad, and was designed by the American firm AMF Atomics and Edward Durell Stone, who once remarked: "This ... has been my greatest work. I am proud that it looks like it belongs in this country."
Owned by the Government of Pakistan, it is managed by the Pakistan Atomic Energy Commission. The scientific research programs at the laboratory are supported through the Pakistan Institute of Engineering and Applied Sciences, also in Nilore. The laboratory covers around area.
History
The establishment of the Pakistan Institute of Nuclear Science & Technology (Pinstech) was an embodiment of the 1953 Atoms for Peace initiative and a long-sought goal of Abdus Salam, who had been lobbying for a professional physics laboratory since 1951. Budget constraints and lack of interest from the government administration had left a deep impression on Salam, who was determined to create an institution to which scientists from developing countries would come as a right to interact with their peers from industrially advanced countries without permanently leaving their own countries. Construction of Pinstech began when Salam was able to find funding from the United States in 1961.
Eventually, Salam and I. H. Usmani approached Glenn T. Seaborg for further funding of the laboratory from the United States government, which stipulated a fund of US$350,000 if the Pakistan Atomic Energy Commission were to set up a research reactor of its own. Contrary to the United States' financial pledges, it was reported that the actual cost of building Pinstech neared US$6.6 million, funded and paid by Pakistani taxpayers in 1965.
From 1965–69, the Pinstech had an active and direct laboratory-to-laboratory interaction with the American national laboratories such as Oak Ridge, Argonne, Livermore, and Sandia.
The scientific library of the institute consisted of a large section containing historical references and literatures on the Manhattan Project, brought by Abdus Salam in 1971 prior to start of the nuclear weapons program under Zulfikar Ali Bhutto's administration.
The Pakistan Atomic Energy Commission (PAEC) hired the laboratory's first director, Rafi Muhammad, a professor of physics at the Government College University, Lahore (GCU), who affiliated Pinstech with Quaid-i-Azam University in 1967 and oversaw some special materials testing. Soon, scientists from the Institute of Theoretical Physics of Quaid-i-Azam University had an opportunity to seek permanent research employment in physics at the laboratory.
Major Projects
Strategic deterrence
After the costly war with India in 1971, the repurposing of the Pinstech laboratory was difficult since it was never intended to be a weapons laboratory. Initially, plutonium pit production at Pinstech was quite difficult, and its tiny research reactor could never be a source of weapons-grade plutonium. In spite of these shortcomings, investigations and classified studies on understanding the equation of state of plutonium were started by physicists at the Pinstech laboratory in 1972. The Pinstech laboratory became a main research and development laboratory when it initiated its indigenous program for the production of plutonium oxide (plutonia) and uranium oxide (urania) in 1973.
The Pinstech laboratory was also a learning center for gaining expertise in the nuclear fuel cycle, providing training to other facilities after acquiring the basic knowledge from European industries prior to 1969. At the Pinstech laboratory, a pilot plant (New Labs) was built for reprocessing spent reactor fuel for plutonium pit production. Besides its fundamental and basic programs in the physical sciences, the laboratory provided a ground for Pakistani scientists to design and engineer weapon designs, amid fears that India was rapidly developing a nuclear bomb.
As Nilore became a restricted site, research efforts were directed towards producing first reactor-grade plutonium and eventually military-grade plutonium from spent fuel rods through a chemical process, "reprocessing". The design work was carried out in 20 different laboratories at the site, and it was the New Labs facility that produced the first batch of weapon-grade plutonium (239Pu) by 1983. This weapon-grade plutonium was the source material used in a nuclear test conducted at the Ras Koh Range on 30 May 1998.
Nuclear fuel cycle
The scientists at the Pinstech laboratory initiated studies on an indigenous nuclear fuel cycle despite having only basic familiarity with it. In 1973, the lab conducted several studies on the properties of uranium oxide, eventually producing in 1976 the first fuel bundle, which was shipped to the Karachi Nuclear Power Plant to keep its grid operations running. Pinstech also took initiatives in learning and understanding the chemistry of uranium hexafluoride, and the technology was transferred to the Islamabad Uranium Conversion Facility in 1974. In addition, the understanding of UF6 eventually led to the production of Zircaloy, which was also first produced at the lab, with the technology later transferred to the Kundian Nuclear Fuel Complex in 1980.
Today, PINSTECH has shifted to peacetime research in medicine, biology, materials and physics. Its Molybdenum-42 facility was used to produce medical radioisotopes for treating cancer. Scientists from the Nuclear Institute for Agriculture and Biology (NIAB) and the Nuclear Institute for Food and Agriculture (NIFA) have used the PINSTECH facilities to conduct advanced research in both medical and food sciences.
Plutonium research
Since its repurposing in 1972, the Pinstech laboratory has conducted research into understanding the equation of state of plutonium, its phase diagrams, and its properties. In 1987, Pinstech developed a technology by fabricating a Chromium kF39 and developed an innovative technique, "in-situ leaching", which allowed the extraction of actinides from uranium ore without the need for conventional milling.
The computer scientists at the Pinstech laboratory built a supercomputer based on a vintage IBM computer architecture that allowed the physicists at Pinstech to model the behavior of plutonium without actual nuclear testing. Research work on plutonium is conducted at its special-purpose facility, the New Laboratories, where weapon-grade nuclear explosives are designed and manufactured. Much of the work on plutonium, however, is classified.
The Centralized Analysis Facility (CAF) is used to study the chemistry of plutonium and other areas of actinide science, and experiments are conducted at the Central Diagnostic Laboratory (CDL); both labs are among the most capable facilities in Pakistan.
Besides its national security mission, the lab promotes applications of radiation and isotope technology in various scientific and technological disciplines to support the nation. It is also working on important non-nuclear fields, which are crucial for the development of science and technology in the country. In 2020, expansion work was started at Pinstech lab to help its "ability to produce isotopes for medical use, especially for preparation of radiopharmaceuticals for cancer patients while also helping the country in its aspirations in other applications of peaceful use of nuclear technology."
Nuclear reactors
PINSTECH has particle accelerators and also operates two small nuclear research reactors, a reprocessing plant, and an experimental neutron source, listed below:
PARR-I Reactor - utilizes low-enriched uranium (LEU)
PARR-II Reactor - utilizes high-enriched uranium (HEU)
New Labs - plutonium reprocessing (PR) facility
Charged Particle Accelerator - a nuclear particle accelerator
Fast Neutron Generator - an experimental neutron generator
Research divisions
PINSTECH has four research directorates, each headed by an appointed director-general. The PINSTECH divisions are listed below:
Directorate of Science
Physics Research Division (RPD)
The Directorate of Science consists of four divisions, each headed by a deputy director-general. In 2004, the PINSTECH administration brought all of the groups together and merged them into a single division, known as the Physics Research Division (PRD). PINSTECH also merged the Nuclear Physics Division (NPD) and the Radiation Physics Division (RPD), as well as the Nuclear and Applied Chemistry Divisions. Below is the list of research groups working in the division.
Atomic and Nuclear Radiation Group
Fast Neutron Diffraction Group (FNDG)
Electronic and Magnetic Materials Group (EMMG)
Nuclear Track Studies Group
Nuclear Geology Group
Radiation Damage Group
Diagnostics Group
Mathematical Physics Group (MPG)
Theoretical Physics Group (TPG)
Chemistry Research Division (CRD)
Nuclear Chemistry Division (NCD) - The Nuclear Chemistry Division was founded in 1966 by Dr. Iqbal Hussain Qureshi. Today, it is the largest division of PINSTECH, comprising five major groups. The Nuclear Chemistry Division has gained experience in the characterization of reactor-grade and high-purity materials using advanced analytical techniques, and it deals with environmental and health-related problems.
Applied Chemistry Division
Laser Development Division
Directorate of System and Services
The Directorate of System and Services (DSS), headed by Dr. Matiullah, consists of five research divisions that are listed below:
Health Physics Division (HPD) - The Health Physics Division (HPD) was established in 1965 by a small team of health physicists. Founded as a group, it was made a division of PINSTECH in 1966. The division's research focuses heavily on medical physics and on the use of nuclear technology in the medical and agricultural sciences.
Nuclear Engineering Division (NED) - The Nuclear Engineering Division (NED), headed by Dr. Masood Iqbal, is one of the most prestigious and well-known divisions of the Pakistan Institute of Nuclear Science and Technology (PINSTECH). The division was established in 1965 with the objective of developing technical expertise, mainly in the area of nuclear reactor technology. The NED has provided technical assistance and training in the field of reactor technology.
Electronics Maintenance Division (EMD) - The Electronics Division (ED), headed by Mr. Hameed, was formally established in 1967, recognizing its important role in scientific research and development at PINSTECH. The division has rendered valuable service to the scientific effort by carrying out maintenance of scientific equipment and development of electronic instruments for use in research and development projects. In 1989, the ED was involved in the upgrade program of the PARR-I reactor led by PAEC chairman Munir Ahmad Khan. The ED supplied and developed electronic material and systems for the PARR-I reactor and successfully converted PARR-I from HEU fuel to LEU fuel. An outstanding achievement of the ED was the design and engineering of the nuclear instrumentation of the research reactor (PARR-1), which required a very high degree of sophistication and reliability.
General Services Division (GSD) - The General Services Division (GSD) is responsible for the routine operational research, maintenance repairs of the laboratories, upkeep and development of engineering services such as civil, electrical, mechanical workshops, air conditioning as well as water supply to PINSTECH and annexed labs.
Computer Division (CD) - The Computer Division (CD) was established in January 1980 with the aim of providing service and support to the researchers and scientists of PINSTECH in the area of computer hardware and software. Although the Computer Division still provides computer hardware and software services, it has gradually shifted from being only a service-provider division to an important design and development division.
Directorate of Technology
The Directorate of Technology (D-TECH) consists of three divisions: the Materials Division (MD), the Isotope Application Division (IAD), and the Isotope Production Division (IPD). It is currently overseen by Dr. Gulzar Hussain Zahid, Chief Engineer.
Materials Division (MD) - The Materials Division (MD) was established in 1973 with the aim of providing technical assistance to other PAEC projects on the development, production and characterization of materials.
Isotope Application Division (IAD) - The Isotope Application Division (IAD) was established in PINSTECH by Dr. Naeem Ahmad Khan in early 1971. Known as the problem solver of the institute, the IAD is responsible for solving problems in isotope hydrology, environmental pollution, non-destructive testing, industrial applications, life sciences, and isotope geology. The IAD also extends expert services to solve relevant problems faced by the industrial sector and different organizations.
Isotope Production Division (IPD) - The Isotope Production Division (IPD) contains the Molly Group, the Generator Production Group, and the Kit Production Group. The IPD is also involved in the modification of the existing isotope production facility.
Directorate of Coordination
The Directorate of Coordination, headed by Engr. Iqbal Hussain Khan, is an administrative directorate which consists of three technical support divisions: the Computation, Information and Communication Technologies (CICT)/Management Information System (MIS) Division, the Scientific Information Division (SID), and the Programme Coordination Division (PCD).
Computation Information & Communication Technologies Division/Management Information Systems (MISD) - The CICT/MIS division, headed by Dr. Syed Zubair Ahmad, was established in 1980 for developing computation and information technology infrastructure at PINSTECH. Initially, mainframe computer systems like the VAX-11/780 were deployed to provide computational support to the scientific community at PINSTECH. Later, with the advent of distributed computing technologies, numerous distributed systems were deployed to achieve higher processing and storage capacity than mainframe computers. These include data and compute clusters, grids, clouds and applications. Data acquisition systems, enterprise resource planning (ERP) systems and advanced network architectures are also developed and deployed.
Scientific Information Division (SID) - The Scientific Information Division (SID), headed by Dr. Ishtiaq Hussain Bokhari, was established in PINSTECH in 1966. It was upgraded into a full-fledged division in 1984. SID is the central source of scientific and technical information not only for the Pakistan Atomic Energy Commission but also for other scientific organizations and universities in the country, and is responsible for the efficient acquisition, storage, retrieval and dissemination of scientific and technical information in support of the PAEC program.
User facilities
Analytical Laboratories
Charged Particle Accelerator
Computer Oriented Services
Corrosion Testing
Environmental Studies Building
Health Physics, Radiation Safety & Radioactive Waste Management
Irradiation Laboratories
Lasers Laboratory and Testing Facility
Materials Development & Characterization
Nuclear Geological Services
Processing of Polymers
Production of Radioisotopes & Radio-pharmaceuticals
Radiation & Radioisotope Applications
Repair & Maintenance of Electronic Equipment
Scientific & Industrial Instruments
Scientific Glass Blowing
Scientific Information
Technical Services & Collaboration
Vacuum Technology Laboratory
Vibration Analysis
Director generals (DGs) of PINSTECH
Notes
References
Nuclear technology in Pakistan
Nuclear research institutes
Particle physics facilities
International research institutes
Research institutes in Pakistan
Pakistan federal departments and agencies
Constituent institutions of Pakistan Atomic Energy Commission
Edward Durell Stone buildings
Laboratories in Pakistan
Mathematical institutes
Military research of Pakistan
Chemical research institutes
Pakistan Institute of Engineering and Applied Sciences
Biological research institutes
Supercomputer sites
1965 establishments in Pakistan
Abdus Salam
Theoretical physics institutes
Nuclear weapons programme of Pakistan | Pakistan Institute of Nuclear Science & Technology | [
"Physics",
"Chemistry",
"Engineering"
] | 3,242 | [
"Nuclear research institutes",
"Nuclear organizations",
"Theoretical physics",
"Chemical research institutes",
"Theoretical physics institutes"
] |
4,990,900 | https://en.wikipedia.org/wiki/Bidirectional%20scattering%20distribution%20function | The definition of the BSDF (bidirectional scattering distribution function) is not well standardized. The term was probably introduced in 1980 by Bartell, Dereniak, and Wolfe. Most often it is used to name the general mathematical function which describes the way in which the light is scattered by a surface. However, in practice, this phenomenon is usually split into the reflected and transmitted components, which are then treated separately as BRDF (bidirectional reflectance distribution function) and BTDF (bidirectional transmittance distribution function).
BSDF is a superset and the generalization of the BRDF and BTDF. The concept behind all BxDF functions could be described as a black box with the inputs being any two angles, one for incoming (incident) ray and the second one for the outgoing (reflected or transmitted) ray at a given point of the surface. The output of this black box is the value defining the ratio between the incoming and the outgoing light energy for the given couple of angles. The content of the black box may be a mathematical formula which more or less accurately tries to model and approximate the actual surface behavior or an algorithm which produces the output based on discrete samples of measured data. This implies that the function is 4(+1)-dimensional (4 values for 2 3D angles + 1 optional for wavelength of the light), which means that it cannot be simply represented by 2D and not even by a 3D graph. Each 2D or 3D graph, sometimes seen in the literature, shows only a slice of the function.
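As a toy illustration of this black-box view (a sketch with made-up names, not any particular renderer's API), a BxDF can be modeled as a function taking the two directions (and optionally a wavelength) and returning a single scalar ratio; a perfectly diffuse surface ignores the angles entirely:

```python
import math

def lambertian_brdf(theta_in, phi_in, theta_out, phi_out, albedo=0.8):
    """Toy BxDF 'black box': incoming and outgoing directions in, one ratio out.
    A Lambertian surface returns the constant albedo / pi regardless of the
    angles (the 1/pi factor keeps the reflected energy bounded by the albedo)."""
    return albedo / math.pi

# Same value for any pair of directions, as expected for a diffuse surface;
# a glossy BxDF would instead vary strongly with the four angles.
print(lambertian_brdf(0.3, 0.0, 0.6, 1.2))
```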
Some tend to use the term BSDF simply as a category name covering the whole family of BxDF functions.
The term BSDF is sometimes used in a slightly different context, for the function describing the amount of scatter (not the scattered light), simply as a function of the incident light angle. An example to illustrate this context: for a perfectly Lambertian surface, BSDF(angle) = const. This approach is used, for instance, by manufacturers of glossy surfaces to verify the output quality.
Another recent usage of the term BSDF can be seen in some 3D packages, when vendors use it as a 'smart' category to encompass the simple well known cg algorithms like Phong, Blinn–Phong etc.
Acquisition of the BSDF over the human face in 2000 by Debevec et al. was one of the last key breakthroughs on the way to fully virtual cinematography with its ultra-photorealistic digital look-alikes. The team was the first in the world to isolate the subsurface scattering component (a specialized case of BTDF) using the simplest light stage, consisting of a movable light source, a movable high-resolution digital camera, two polarizers in a few positions, and fairly simple algorithms on a modest computer. The team utilized the existing scientific knowledge that light reflected and scattered from the air-to-oil layer retains its polarization, while light that travels within the skin loses its polarization. The subsurface scattering component can be simulated as a steady high-scatter glow of light from within the models, without which the skin does not look realistic. ESC Entertainment, a company set up by Warner Brothers Pictures specially to do the visual effects / virtual cinematography system for The Matrix Reloaded and The Matrix Revolutions, isolated the parameters for an approximate analytical BRDF which consisted of a Lambertian diffusion component and a modified specular Phong component with a Fresnel-type effect.
Overview of the BxDF functions
BRDF (Bidirectional reflectance distribution function) is a simplified BSSRDF, assuming that light enters and leaves at the same point.
BTDF (Bidirectional transmittance distribution function) is similar to BRDF but for the opposite side of the surface.
BDF (Bidirectional distribution function) is collectively defined by BRDF and BTDF.
BSSRDF (Bidirectional scattering-surface reflectance distribution function or Bidirectional surface scattering RDF) describes the relation between outgoing radiance and the incident flux, including the phenomena like subsurface scattering (SSS). The BSSRDF describes how light is transported between any two rays that hit a surface.
BSSTDF (Bidirectional scattering-surface transmittance distribution function) is like BTDF but with subsurface scattering.
BSSDF (Bidirectional scattering-surface distribution function) is collectively defined by BSSTDF and BSSRDF. Also known as BSDF (Bidirectional scattering distribution function).
See also
BRDF
Radiometry
Reflectance
Radiance
BTF
References
Radiometry
Astrophysics
3D rendering | Bidirectional scattering distribution function | [
"Physics",
"Astronomy",
"Engineering"
] | 967 | [
"Telecommunications engineering",
"Astronomical sub-disciplines",
"Astrophysics",
"Radiometry"
] |
4,992,216 | https://en.wikipedia.org/wiki/Malonic%20ester%20synthesis | The malonic ester synthesis is a chemical reaction where diethyl malonate or another ester of malonic acid is alkylated at the carbon alpha (directly adjacent) to both carbonyl groups, and then converted to a substituted acetic acid.
A major drawback of the malonic ester synthesis is that the alkylation stage can also produce dialkylated structures. This makes separation of products difficult and lowers yields.
Mechanism
The carbons alpha to carbonyl groups can be deprotonated by a strong base. The carbanion formed can undergo nucleophilic substitution on the alkyl halide, to give the alkylated compound. On heating, the di-ester undergoes thermal decarboxylation, yielding an acetic acid substituted by the appropriate R group. Thus, the malonic ester can be thought of being equivalent to the −CH2COOH synthon.
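Schematically (a sketch of the steps just described, written in standard mhchem notation rather than taken from the article), the sequence for an ethyl ester with sodium ethoxide is:

```latex
% Deprotonation, alkylation, then ester hydrolysis and decarboxylation on heating.
\ce{CH2(CO2Et)2 + NaOEt -> Na^+ [CH(CO2Et)2]^- + EtOH}
\ce{Na^+ [CH(CO2Et)2]^- + R-X -> R-CH(CO2Et)2 + NaX}
\ce{R-CH(CO2Et)2 ->[\text{1) NaOH, H2O; 2) H3O+; } \Delta] R-CH2-CO2H + CO2 + 2 EtOH}
```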
The esters chosen are usually the same as the base used, i.e. ethyl esters with sodium ethoxide. This is to prevent scrambling by transesterification.
Variations
Dialkylation
The ester may be dialkylated if deprotonation and alkylation are repeated before the addition of aqueous acid.
Cycloalkylcarboxylic acid synthesis
Intramolecular malonic ester synthesis occurs when the ester is reacted with a dihalide. This reaction is also called the Perkin alicyclic synthesis (see: alicyclic compound) after investigator William Henry Perkin, Jr.
Application
In the manufacture of medicines, malonic ester is used for the synthesis of barbiturates, as well as sedatives and anticonvulsants.
Used in organic synthesis.
See also
Knoevenagel condensation
Acetoacetic ester synthesis
References
Carbon-carbon bond forming reactions
Substitution reactions | Malonic ester synthesis | [
"Chemistry"
] | 384 | [
"Coupling reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
22,618,454 | https://en.wikipedia.org/wiki/Radium-223 | Radium-223 (223Ra, Ra-223) is an isotope of radium with an 11.4-day half-life. It was discovered in 1905 by T. Godlewski, a Polish chemist from Kraków, and was historically known as actinium X (AcX). Radium-223 dichloride is an alpha particle-emitting radiotherapy drug that mimics calcium and forms complexes with hydroxyapatite at areas of increased bone turnover. The principal use of radium-223, as a radiopharmaceutical to treat metastatic cancers in bone, takes advantage of its chemical similarity to calcium, and the short range of the alpha radiation it emits.
Origin and preparation
Although radium-223 is naturally formed in trace amounts by the decay of uranium-235, it is generally made artificially, by exposing natural radium-226 to neutrons to produce radium-227, which decays with a 42-minute half-life to actinium-227. Actinium-227 (half-life 21.8 years) in turn decays via thorium-227 (half-life 18.7 days) to radium-223. This decay path makes it convenient to prepare radium-223 by "milking" it from an actinium-227 containing generator or "cow", similar to the moly cows widely used to prepare the medically important isotope technetium-99m.
223Ra itself decays to 219Rn (half-life 3.96 s), a short-lived gaseous radon isotope, by emitting an alpha particle of 5.979 MeV.
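As a small numerical sketch (not from the source), the 11.4-day half-life implies a simple exponential decay of any initial quantity:

```python
HALF_LIFE_DAYS = 11.4  # radium-223 half-life quoted above

def remaining_fraction(days: float) -> float:
    """Fraction of an initial Ra-223 sample remaining after `days` days,
    assuming simple exponential decay: N(t) = N0 * 2**(-t / T_half)."""
    return 2.0 ** (-days / HALF_LIFE_DAYS)

print(remaining_fraction(11.4))  # one half-life -> 0.5
print(remaining_fraction(28.0))  # about four weeks -> roughly 0.18
```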
Medical uses
The pharmaceutical product and medical use of radium-223 against skeletal metastases was invented by Roy H. Larsen, Gjermund Henriksen and Øyvind S. Bruland and has been developed by the former Norwegian company Algeta ASA, in a partnership with Bayer, under the trade name Xofigo (formerly Alpharadin), and is distributed as a solution containing radium-223 chloride (1100 kBq/ml), sodium chloride, and other ingredients for intravenous injection. Algeta ASA was later acquired by Bayer who is the sole owner of Xofigo.
Mechanism of action
The use of radium-223 to treat metastatic bone cancer relies on the ability of alpha radiation from radium-223 and its short-lived decay products to kill cancer cells. Radium is preferentially absorbed by bone by virtue of its chemical similarity to calcium, with most radium-223 that is not taken up by the bone being cleared, primarily via the gut, and excreted. Although radium-223 and its decay products also emit beta and gamma radiation, over 95% of the decay energy is in the form of alpha radiation. Alpha radiation has a very short range in tissues compared to beta or gamma radiation: around 2–10 cells. This reduces damage to surrounding healthy tissues, producing an even more localized effect than the beta-emitter strontium-89, also used to treat bone cancer. Taking account of its preferential uptake by bone and the alpha particles' short range, radium-223 is estimated to give targeted osteogenic cells a radiation dose at least eight times higher than other non-targeted tissues.
Clinical trials and FDA and EMA approval
The phase II study of radium-223 in castration-resistant prostate cancer (CRPC) patients with bone metastases showed minimum myelotoxicity and good tolerance for the treatment.
223Ra successfully met the primary endpoint of overall survival in the phase III ALSYMPCA (ALpharadin in SYMptomatic Prostate CAncer patients) study for bone metastases resulting from CRPC in 922 patients.
The ALSYMPCA study was stopped early after a pre-planned efficacy interim analysis, following a recommendation from an Independent Data Monitoring Committee, on the basis of achieving a statistically significant improvement in overall survival (two-sided p-value = 0.0022, HR = 0.699, the median overall survival was 14.0 months for 223Ra and 11.2 months for placebo). Earlier phase II of the trial showed a median increased survival of 18.9 weeks (around 4.4 months). The lower figure of 2.8 months increased survival in interim phase III results is a probable result of stopping the trial; median survival time for patients still alive could not be calculated. A 2014 update indicates a median increased survival of 3.6 months.
In May 2013, 223Ra received marketing approval from the US Food and Drug Administration (FDA) as a treatment for CRPC with bone metastases in people with symptomatic bone metastases and without known visceral disease. 223Ra received priority review as a treatment for an unmet medical need, based on its ability to extend overall survival as shown its Phase III trial.
This study also led to approval in the European Union in November 2013. The European Medicines Agency subsequently recommended restricting its use to patients who have had two previous treatments for metastatic prostate cancer or who cannot receive other treatments. The medicine must also not be used with abiraterone acetate, prednisone or prednisolone, and its use is not recommended in patients with a low number of osteoblastic bone metastases.
223Ra also showed promising preliminary results in a phase IIa trial enrolling 23 women with bone metastases resulting from breast cancer that no longer responds to endocrine therapy. 223Ra treatment reduced the levels of bone alkaline phosphatase (bALP) and urine N-telopeptide (uNTX), key markers of bone turnover associated with bone metastases in breast cancer, diminished bone pain slightly though consistently, and was well tolerated. Another single-arm, open-label Phase II trial reported possible efficacy of 223Ra combined with endocrine therapy in hormone-receptor-positive, bone-dominant breast cancer metastasis.
Side effects
The most common side effects reported during clinical trials in men receiving 223Ra were nausea, diarrhea, vomiting and swelling of the leg, ankle or foot. The most common abnormalities detected during blood testing were anemia, lymphocytopenia, leukopenia, thrombocytopenia and neutropenia.
Other radium-223-based compounds
Although radium does not easily form stable molecular complexes, data have been presented on methods to increase and customize its specificity for particular cancers by linking it to monoclonal antibodies, for example by enclosing the 223Ra in liposomes bearing the antibodies on their surface.
References
Drugs developed by Bayer
Drugs acting on the musculoskeletal system
Experimental cancer drugs
Isotopes of radium
Radiation therapy
Medical isotopes | Radium-223 | [
"Chemistry"
] | 1,403 | [
"Isotopes of radium",
"Chemicals in medicine",
"Isotopes",
"Medical isotopes"
] |
22,618,474 | https://en.wikipedia.org/wiki/Virtual%20Cybernetic%20Building%20Testbed | The Virtual Cybernetic Building Testbed (VCBT) is a whole building emulator located at the National Institute of Standards and Technology in Gaithersburg, Maryland. It is designed with enough flexibility to be capable of reproducibly simulating normal operation and a variety of faulty and hazardous conditions that might occur in a cybernetic building. It serves as a testbed for investigating the interactions between integrated building systems and a wide range of issues important to the development of cybernetic building technology.
The VCBT consists of a variety of simulation models that together emulate the characteristics and performance of a cybernetic building system. The simulation models are interfaced to real state-of-the-art BACnet-speaking control systems to provide a hybrid software/hardware testbed that can be used to develop and evaluate control strategies and control products that use the BACnet communication protocol. The simulation models used are based on versions of HVACSIM+ and CFAST.
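To make the hybrid software/hardware arrangement concrete, the following sketch shows a generic co-simulation loop of the kind such a testbed implies: the building simulation publishes sensor values, a controller returns actuator commands, and the simulation advances one time step. It is purely illustrative; the functions `publish_sensors` and `read_commands` are hypothetical placeholders and do not correspond to any real BACnet, HVACSIM+ or CFAST interface.

```python
def simulate_zone(temp, heating_cmd, dt=60.0):
    """Toy zone model: temperature drifts toward outdoors, heating pushes it up."""
    outdoor = 5.0                       # assumed outdoor temperature, degC
    loss = 0.001 * (temp - outdoor)     # heat loss rate, degC per second (arbitrary)
    gain = 0.002 * heating_cmd          # heating rate, degC per second (arbitrary)
    return temp + dt * (gain - loss)

def publish_sensors(temp):
    """Placeholder for sending simulated sensor values to the external controller."""
    print(f"zone temperature = {temp:.2f} degC")

def read_commands(temp, setpoint=21.0):
    """Placeholder controller: a real testbed would query the BACnet hardware here."""
    return 1.0 if temp < setpoint else 0.0   # crude on/off heating signal

temp = 18.0
for step in range(10):                  # ten one-minute time steps
    publish_sensors(temp)
    cmd = read_commands(temp)
    temp = simulate_zone(temp, cmd)
```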
References
Cybernetics
Building automation
Control engineering | Virtual Cybernetic Building Testbed | [
"Engineering"
] | 208 | [
"Building engineering",
"Control engineering",
"Building automation",
"Automation"
] |
22,619,813 | https://en.wikipedia.org/wiki/Cyclopentadienylcobalt%20dicarbonyl | Cyclopentadienylcobalt dicarbonyl is an organocobalt compound with formula (C5H5)Co(CO)2, abbreviated CpCo(CO)2. It is an example of a half-sandwich complex. It is a dark red air sensitive liquid. This compound features one cyclopentadienyl ring that is bound in an η5-manner and two carbonyl ligands. The compound is soluble in common organic solvents.
Preparation
CpCo(CO)2 was first reported in 1954 by Piper, Cotton, and Wilkinson who produced it by the reaction of cobalt carbonyl with cyclopentadiene. It is prepared commercially by the same method:
Co2(CO)8 + 2 C5H6 → 2 C5H5Co(CO)2 + H2 + 4 CO
Alternatively, it is generated by the high pressure carbonylation of bis(cyclopentadienyl)cobalt (cobaltocene) at elevated temperature and pressures:
Co(C5H5)2 + 2 CO → C5H5Co(CO)2 + "C5H5"
The compound is identified by strong bands in its IR spectrum at 2030 and 1960 cm−1.
Reactions
CpCo(CO)2 catalyzes the cyclotrimerization of alkynes. The catalytic cycle begins with dissociation of one CO ligand, forming a bis(alkyne) intermediate.
CpCo(CO)2 + 2 R2C2 → CpCo(R2C2)2 + 2 CO
This reaction proceeds by formation of metal-alkyne complexes by dissociation of CO. Although monoalkyne complexes CpCo(CO)(R1C2R2) have not been isolated, their analogues, CpCo(PPh3)(R1C2R2) are made by the following reactions:
CpCo(CO)2 + PR3 → CO + CpCo(CO)(PR3)
CpCoL(PR3) + R2C2 → L + CpCo(PR3)(R2C2) (where L = CO or PR3)
CpCo(CO)2 catalyzes the formation of pyridines from a mixture of alkynes and nitriles. Reduction of CpCo(CO)2 with sodium yields the dinuclear radical [Cp2Co2(CO)2]−, which reacts with alkyl halides to give the dialkyl complexes [Cp2Co2(CO)2R2]. Ketones are produced by carbonylation of these dialkyl complexes, regenerating CpCo(CO)2.
Related compounds
The pentamethylcyclopentadienyl analogue Cp*Co(CO)2 (CAS RN#12129-77-0) is well studied. The Rh and Ir analogues, CpRh(CO)2 (CAS RN#12192-97-1) and CpIr(CO)2 (CAS RN#12192-96-0), are also well known.
References
Cyclopentadienyl complexes
Carbonyl complexes
Organocobalt compounds
Half sandwich compounds | Cyclopentadienylcobalt dicarbonyl | [
"Chemistry"
] | 664 | [
"Half sandwich compounds",
"Organometallic chemistry",
"Cyclopentadienyl complexes"
] |
22,624,170 | https://en.wikipedia.org/wiki/Quantum%20invariant | In the mathematical field of knot theory, a quantum knot invariant or quantum invariant of a knot or link is a linear sum of colored Jones polynomials of surgery presentations of the knot complement.
List of invariants
Finite type invariant
Kontsevich invariant
Kashaev's invariant
Witten–Reshetikhin–Turaev invariant (Chern–Simons)
Invariant differential operator
Rozansky–Witten invariant
Vassiliev knot invariant
Dehn invariant
LMO invariant
Turaev–Viro invariant
Dijkgraaf–Witten invariant
Reshetikhin–Turaev invariant
Tau-invariant
I-Invariant
Klein J-invariant
Quantum isotopy invariant
Ermakov–Lewis invariant
Hermitian invariant
Goussarov–Habiro theory of finite-type invariant
Linear quantum invariant (orthogonal function invariant)
Murakami–Ohtsuki TQFT
Generalized Casson invariant
Casson-Walker invariant
Khovanov–Rozansky invariant
HOMFLY polynomial
K-theory invariants
Atiyah–Patodi–Singer eta invariant
Link invariant
Casson invariant
Seiberg–Witten invariants
Gromov–Witten invariant
Arf invariant
Hopf invariant
See also
Invariant theory
Framed knot
Chern–Simons theory
Algebraic geometry
Seifert surface
Geometric invariant theory
References
Further reading
External links
Quantum invariants of knots and 3-manifolds By Vladimir G. Turaev
Invariant theory
Knot theory | Quantum invariant | [
"Physics"
] | 286 | [
"Invariant theory",
"Group actions",
"Symmetry"
] |
22,625,160 | https://en.wikipedia.org/wiki/Intrinsic%20activity | Intrinsic activity (IA) and efficacy (Emax) refer to the relative ability of a drug-receptor complex to produce a maximum functional response. This must be distinguished from the affinity, which is a measure of the ability of the drug to bind to its molecular target, and the EC50, which is a measure of the potency of the drug and which is proportional to both efficacy and affinity. This use of the word "efficacy" was introduced by Stephenson (1956) to describe the way in which agonists vary in the response they produce, even when they occupy the same number of receptors. High efficacy agonists can produce the maximal response of the receptor system while occupying a relatively low proportion of the receptors in that system. There is a distinction between efficacy and intrinsic activity.
Mechanism of efficacy
Agonists of lower efficacy are not as efficient at producing a response from the drug-bound receptor, by stabilizing the active form of the drug-bound receptor. Therefore, they may not be able to produce the same maximal response, even when they occupy the entire receptor population, as the efficiency of transformation of the inactive form of the drug-receptor complex to the active drug-receptor complex may not be high enough to evoke a maximal response. Since the observed response may be less than maximal in systems with no spare receptor reserve, some low efficacy agonists are referred to as partial agonists. However, it is worth bearing in mind that these terms are relative - even partial agonists may appear as full agonists in a different system/experimental setup, as when the number of receptors increases, there may be enough drug-receptor complexes for a maximum response to be produced, even with individually low efficacy of transducing the response. There are actually relatively few true full agonists or silent antagonists; many compounds usually considered to be full agonists (such as DOI) are more accurately described as high efficacy partial agonists, as a partial agonist with efficacy over ≈80-90% is indistinguishable from a full agonist in most assays. Similarly many antagonists (such as naloxone) are in fact partial agonists or inverse agonists, but with very low efficacy (less than 10%). Compounds considered partial agonists tend to have efficacy in between this range. Another case is represented by silent agonists, which are ligands that can place a receptor, typically an ion channel, into a desensitized state with little or no apparent activation of it, forming a complex that can subsequently generate currents when treated with an allosteric modulator.
Intrinsic activity
Intrinsic activity of a test agonist is defined as the maximal response it produces divided by the maximal response of a full agonist acting at the same receptor:
IA = Emax(test agonist) / Emax(full agonist)
Stephenson's efficacy
R. P. Stephenson (1925–2004) was a British pharmacologist. Efficacy has historically been treated as a proportionality constant between the binding of the drug and the generation of the biological response. Stephenson defined efficacy, e, through the relation
S = e·y
where y is the proportion of agonist-bound receptors (given by the Hill equation) and S is the stimulus to the biological system. The response is generated from the stimulus by an unknown function f, which is assumed to be hyperbolic. This model was arguably flawed in that it did not incorporate the equilibrium between the inactivated agonist-bound receptor and the activated agonist-bound receptor that is shown in the del Castillo-Katz model.
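The sketch below gives a numerical illustration of Stephenson's idea: receptor occupancy from a Hill equation (Hill coefficient 1), a stimulus proportional to occupancy through the efficacy e, and a hyperbolic transduction step. The specific hyperbola response = S/(1+S), the dissociation constant, and the efficacy values are assumptions chosen only to show that a high-efficacy agonist approaches the maximal response at partial occupancy while a low-efficacy one does not.

```python
import numpy as np

def occupancy(conc, kd):
    """Fraction of receptors bound (Hill equation with Hill coefficient 1)."""
    return conc / (conc + kd)

def response(conc, kd, efficacy):
    """Stephenson-style model: stimulus S = e * y, response = S / (1 + S) (hyperbolic)."""
    s = efficacy * occupancy(conc, kd)
    return s / (1.0 + s)

concentrations = np.logspace(-9, -3, 7)   # molar, arbitrary range
kd = 1e-6                                 # arbitrary dissociation constant, molar

for e in (100.0, 1.0):                    # high- vs low-efficacy agonist
    max_resp = response(concentrations[-1], kd, e)
    print(f"efficacy e = {e:6.1f}: response at highest concentration ~ {max_resp:.2f}")
```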
Furchgott's efficacy
Robert F. Furchgott later improved on Stephenson's model with the definition of efficacy, e, as
e = ε[Rt]
where ε is the intrinsic efficacy and [Rt] is the total concentration of receptors.
Stephenson's and Furchgott's models of efficacy have been criticised, and many others have been developed. The models of efficacy are reviewed in Bindslev (2008).
References
Pharmacodynamics | Intrinsic activity | [
"Chemistry"
] | 772 | [
"Pharmacology",
"Pharmacodynamics"
] |
299,901 | https://en.wikipedia.org/wiki/Optical%20tweezers | Optical tweezers (originally called single-beam gradient force trap) are scientific instruments that use a highly focused laser beam to hold and move microscopic and sub-microscopic objects like atoms, nanoparticles and droplets, in a manner similar to tweezers. If the object is held in air or vacuum without additional support, it can be called optical levitation.
The laser light provides an attractive or repulsive force (typically on the order of piconewtons), depending on the relative refractive index between particle and surrounding medium. Levitation is possible if the force of the light counters the force of gravity. The trapped particles are usually micron-sized, or even smaller. Dielectric and absorbing particles can be trapped, too.
Optical tweezers are used in biology and medicine (for example to grab and hold a single bacterium, a cell like a sperm cell or a blood cell, or a molecule like DNA), nanoengineering and nanochemistry (to study and build materials from single molecules), quantum optics and quantum optomechanics (to study the interaction of single particles with light). The development of optical tweezing by Arthur Ashkin was lauded with the 2018 Nobel Prize in Physics.
History and development
The detection of optical scattering and the gradient forces on micron sized particles was first reported in 1970 by Arthur Ashkin, a scientist working at Bell Labs. Years later, Ashkin and colleagues reported the first observation of what is now commonly referred to as an optical tweezer: a tightly focused beam of light capable of holding microscopic particles stable in three dimensions. In 2018, Ashkin was awarded the Nobel Prize in Physics for this development.
One author of this seminal 1986 paper, Steven Chu, would go on to use optical tweezing in his work on cooling and trapping neutral atoms. This research earned Chu the 1997 Nobel Prize in Physics along with Claude Cohen-Tannoudji and William D. Phillips. In an interview, Steven Chu described how Ashkin had first envisioned optical tweezing as a method for trapping atoms. Ashkin was able to trap larger particles (10 to 10,000 nanometers in diameter) but it fell to Chu to extend these techniques to the trapping of neutral atoms (0.1 nanometers in diameter) using resonant laser light and a magnetic gradient trap (cf. Magneto-optical trap).
In the late 1980s, Arthur Ashkin and Joseph M. Dziedzic demonstrated the first application of the technology to the biological sciences, using it to trap an individual tobacco mosaic virus and Escherichia coli bacterium. Throughout the 1990s and afterwards, researchers like Carlos Bustamante, James Spudich, and Steven Block pioneered the use of optical trap force spectroscopy to characterize molecular-scale biological motors. These molecular motors are ubiquitous in biology, and are responsible for locomotion and mechanical action within the cell. Optical traps allowed these biophysicists to observe the forces and dynamics of nanoscale motors at the single-molecule level; optical trap force-spectroscopy has since led to greater understanding of the stochastic nature of these force-generating molecules.
Optical tweezers have proven useful in other areas of biology as well. They are used in synthetic biology to construct tissue-like networks of artificial cells, and to fuse synthetic membranes together to initiate biochemical reactions. They are also widely employed in genetic studies and research on chromosome structure and dynamics. In 2003 the techniques of optical tweezers were applied in the field of cell sorting; by creating a large optical intensity pattern over the sample area, cells can be sorted by their intrinsic optical characteristics. Optical tweezers have also been used to probe the cytoskeleton, measure the visco-elastic properties of biopolymers, and study cell motility. A bio-molecular assay in which clusters of ligand coated nano-particles are both optically trapped and optically detected after target molecule induced clustering was proposed in 2011 and experimentally demonstrated in 2013.
Optical tweezers are also used to trap laser-cooled atoms in vacuum, mainly for applications in quantum science. Some achievements in this area include trapping of a single atom in 2001, trapping of 2D arrays of atoms in 2002, trapping of strongly interacting entangled pairs in 2010, trapping precisely assembled 2-dimensional arrays of atoms in 2016 and 3-dimensional arrays in 2018. These techniques have been used in quantum simulators to obtain programmable arrays of 196 and 256 atoms in 2021 and represent a promising platform for quantum computing.
Researchers have worked to convert optical tweezers from large, complex instruments to smaller, simpler ones, for use by those with smaller research budgets.
Physics
General description
Optical tweezers are capable of manipulating nanometer and micron-sized dielectric particles, and even individual atoms, by exerting extremely small forces via a highly focused laser beam. The beam is typically focused by sending it through a microscope objective. Near the narrowest point of the focused beam, known as the beam waist, the amplitude of the oscillating electric field varies rapidly in space. Dielectric particles are attracted along the gradient to the region of strongest electric field, which is the center of the beam. The laser light also tends to apply a force on particles in the beam along the direction of beam propagation. This is due to conservation of momentum: photons that are absorbed or scattered by the tiny dielectric particle impart momentum to the dielectric particle. This is known as the scattering force and results in the particle being displaced slightly downstream from the exact position of the beam waist, as seen in the figure.
Optical traps are very sensitive instruments and are capable of the manipulation and detection of sub-nanometer displacements for sub-micron dielectric particles. For this reason, they are often used to manipulate and study single molecules by interacting with a bead that has been attached to that molecule. DNA and the proteins and enzymes that interact with it are commonly studied in this way.
For quantitative scientific measurements, most optical traps are operated in such a way that the dielectric particle rarely moves far from the trap center. The reason for this is that the force applied to the particle is linear with respect to its displacement from the center of the trap as long as the displacement is small. In this way, an optical trap can be compared to a simple spring, which follows Hooke's law.
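Because the trap behaves as a Hookean spring near its centre, a common calibration (standard practice, though not described in this article) estimates the stiffness from the bead's thermal position fluctuations via the equipartition theorem, k = kB·T / ⟨x²⟩. The sketch below applies this to simulated position data; in a real experiment the positions would come from the detector trace, and the temperature and stiffness values used here are arbitrary.

```python
import numpy as np

kB = 1.380649e-23           # Boltzmann constant, J/K
T = 295.0                   # assumed temperature, K

# Simulate a position trace for a bead in a harmonic trap of known stiffness.
true_k = 2e-5               # trap stiffness, N/m (typical order of magnitude)
sigma = np.sqrt(kB * T / true_k)              # expected rms displacement, m
rng = np.random.default_rng(0)
x = rng.normal(0.0, sigma, size=100_000)      # stand-in for a detector position trace

# Equipartition: (1/2) k <x^2> = (1/2) kB T, so k = kB T / <x^2>.
k_est = kB * T / np.mean(x**2)
print(f"estimated stiffness {k_est:.2e} N/m (true value {true_k:.2e} N/m)")
```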
Detailed view
Proper explanation of optical trapping behavior depends upon the size of the trapped particle relative to the wavelength of light used to trap it. In cases where the dimensions of the particle are much greater than the wavelength, a simple ray optics treatment is sufficient. If the wavelength of light far exceeds the particle dimensions, the particles can be treated as electric dipoles in an electric field. For optical trapping of dielectric objects of dimensions within an order of magnitude of the trapping beam wavelength, the only accurate models involve the treatment of either time dependent or time harmonic Maxwell equations using appropriate boundary conditions.
Ray optics
In cases where the diameter of a trapped particle is significantly greater than the wavelength of light, the trapping phenomenon can be explained using ray optics. As shown in the figure, individual rays of light emitted from the laser will be refracted as it enters and exits the dielectric bead. As a result, the ray will exit in a direction different from which it originated. Since light has a momentum associated with it, this change in direction indicates that its momentum has changed. Due to Newton's third law, there should be an equal and opposite momentum change on the particle.
Most optical traps operate with a Gaussian beam (TEM00 mode) profile intensity. In this case, if the particle is displaced from the center of the beam, as in the right part of the figure, the particle has a net force returning it to the center of the trap because more intense beams impart a larger momentum change towards the center of the trap than less intense beams, which impart a smaller momentum change away from the trap center. The net momentum change, or force, returns the particle to the trap center.
If the particle is located at the center of the beam, then individual rays of light are refracting through the particle symmetrically, resulting in no net lateral force. The net force in this case is along the axial direction of the trap, which cancels out the scattering force of the laser light. The cancellation of this axial gradient force with the scattering force is what causes the bead to be stably trapped slightly downstream of the beam waist.
The standard tweezer configuration works with the trapping laser propagating in the direction of gravity, while the inverted configuration works against gravity.
Electric dipole approximation
In cases where the diameter of a trapped particle is significantly smaller than the wavelength of light, the conditions for Rayleigh scattering are satisfied and the particle can be treated as a point dipole in an inhomogeneous electromagnetic field. The force applied on a single charge in an electromagnetic field is known as the Lorentz force, F = q(E + v × B), where q is the charge and v is its velocity.
The force on the dipole can be calculated by substituting two terms for the electric field in the equation above, one for each charge. The polarization of a dipole is where is the distance between the two charges. For a point dipole, the distance is infinitesimal, Taking into account that the two charges have opposite signs, the force takes the form
Notice that the cancel out. Multiplying through by the charge, , converts position, , into polarization, ,
where in the second equality, it has been assumed that the dielectric particle is linear (i.e. ).
In the final steps, two equalities will be used: (1) a vector analysis equality, (2) Faraday's law of induction.
First, the vector equality will be inserted for the first term in the force equation above. Maxwell's equation will be substituted in for the second term in the vector equality. Then the two terms which contain time derivatives can be combined into a single term.
The second term in the last equality is the time derivative of a quantity that is related through a multiplicative constant to the Poynting vector, which describes the power per unit area passing through a surface. Since the power of the laser is constant when sampled over timescales much longer than the optical period (the laser frequency is ~10^14 Hz), the derivative of this term averages to zero and the force can be written as
where in the second part we have included the induced dipole moment (in MKS units) of a spherical dielectric particle: , where is the particle radius, is the index of refraction of the particle and is the relative refractive index between the particle and the medium. The square of the magnitude of the electric field is equal to the intensity of the beam as a function of position. Therefore, the result indicates that the force on the dielectric particle, when treated as a point dipole, is proportional to the gradient along the intensity of the beam. In other words, the gradient force described here tends to attract the particle to the region of highest intensity. In reality, the scattering force of the light works against the gradient force in the axial direction of the trap, resulting in an equilibrium position that is displaced slightly downstream of the intensity maximum. Under the Rayleigh approximation, we can also write the scattering force as
Since the scattering is isotropic, the net momentum is transferred in the forward direction. On the quantum level, we picture the gradient force as forward Rayleigh scattering in which identical photons are created and annihilated concurrently, while in the scattering (radiation) force the incident photons travel in the same direction and ‘scatter’ isotropically. By conservation of momentum, the particle must accumulate the photons' original momenta, causing a forward force in the latter.
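As a rough numerical illustration of the Rayleigh-regime picture above, the sketch below evaluates commonly quoted expressions for the gradient and scattering forces on a small dielectric sphere (following the Harada-Asakura form for the gradient force and the Rayleigh scattering cross-section for the scattering force). These particular prefactors, and all particle, medium and beam parameters, are assumptions made for illustration rather than the formulas omitted from this article.

```python
import numpy as np

c = 299_792_458.0            # speed of light, m/s

# Assumed example: 100 nm diameter polystyrene bead in water, 1064 nm laser.
a = 50e-9                    # particle radius, m
n_p, n_m = 1.57, 1.33        # refractive indices of particle and medium
m = n_p / n_m                # relative refractive index
P = 0.1                      # laser power, W
w0 = 0.5e-6                  # beam waist, m
lam0 = 1064e-9               # vacuum wavelength, m

I0 = 2 * P / (np.pi * w0**2)     # peak intensity of a Gaussian beam, W/m^2
grad_I = I0 / w0                 # rough scale of the radial intensity gradient, W/m^3
cm = (m**2 - 1) / (m**2 + 2)     # Clausius-Mossotti factor

# Gradient force, Rayleigh approximation (Harada-Asakura form).
F_grad = (2 * np.pi * n_m * a**3 / c) * cm * grad_I

# Scattering force from the Rayleigh scattering cross-section.
k = 2 * np.pi * n_m / lam0
C_scat = (8.0 / 3.0) * np.pi * k**4 * a**6 * cm**2
F_scat = n_m * C_scat * I0 / c

print(f"gradient force   ~ {F_grad:.2e} N")
print(f"scattering force ~ {F_scat:.2e} N")
```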
Harmonic potential approximation
A useful way to study the interaction of an atom in a Gaussian beam is to look at the harmonic potential approximation of the intensity profile the atom experiences. In the case of the two-level atom, the potential experienced is related to its AC Stark Shift,
where is the natural line width of the excited state, is the electric dipole coupling, is the frequency of the transition, and is the detuning or difference between the laser frequency and the transition frequency.
The intensity of a gaussian beam profile is characterized by the wavelength , minimum waist , and power of the beam . The following formulas define the beam profile:
To approximate this Gaussian potential in both the radial and axial directions of the beam, the intensity profile must be expanded to second order in and for and respectively and equated to the harmonic potential . These expansions are evaluated assuming fixed power.
This means that when solving for the harmonic frequencies (or trap frequencies when considering optical traps for atoms), the frequencies are given as:
so that the relative trap frequencies for the radial and axial directions as a function of only beam waist scale as:
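The sketch below illustrates the harmonic-expansion result numerically. It assumes the standard second-order expansion of a Gaussian beam, giving omega_r = sqrt(4·U0/(m·w0²)) and omega_z = sqrt(2·U0/(m·zR²)) with zR = π·w0²/λ, and lumps all atomic factors relating intensity to trap depth into a placeholder constant C; those prefactors and all numerical values are assumptions, since the article's own expressions are not reproduced here. At fixed power the output shows omega_r scaling as 1/w0² and omega_z as 1/w0³.

```python
import numpy as np

amu = 1.66053906660e-27      # atomic mass unit, kg

# Assumed example values; C lumps together the polarizability/detuning factors
# that relate peak intensity to trap depth (U0 = C * I0) and is a placeholder.
mass = 87 * amu              # trapped particle (atom) mass, kg
lam = 1064e-9                # trapping wavelength, m
P = 1.0                      # beam power, W (held fixed)
C = 1e-36                    # trap depth per unit intensity, J/(W/m^2), placeholder

def trap_frequencies(w0):
    """Harmonic trap frequencies from a second-order expansion of a Gaussian beam."""
    I0 = 2 * P / (np.pi * w0**2)       # peak intensity
    U0 = C * I0                        # trap depth (placeholder proportionality)
    zR = np.pi * w0**2 / lam           # Rayleigh range
    omega_r = np.sqrt(4 * U0 / (mass * w0**2))   # from the 2*r^2/w0^2 term
    omega_z = np.sqrt(2 * U0 / (mass * zR**2))   # from the z^2/zR^2 term
    return omega_r, omega_z

for w0 in (1e-6, 2e-6):
    wr, wz = trap_frequencies(w0)
    print(f"w0 = {w0 * 1e6:.0f} um: omega_r = {wr:.2e} rad/s, omega_z = {wz:.2e} rad/s")
# Doubling w0 at fixed power lowers omega_r by 4x and omega_z by 8x,
# i.e. omega_r scales as 1/w0^2 and omega_z as 1/w0^3.
```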
Optical levitation
In order to levitate the particle in air, the downward force of gravity must be countered by the forces stemming from photon momentum transfer. Typically photon radiation pressure of a focused laser beam of enough intensity counters the downward force of gravity while also preventing lateral (side to side) and vertical instabilities to allow for a stable optical trap capable of holding small particles in suspension.
Micrometer sized (from several to 50 micrometers in diameter) transparent dielectric spheres such as fused silica spheres, oil or water droplets, are used in this type of experiment. The laser radiation can be fixed in wavelength such as that of an argon ion laser or that of a tunable dye laser. Laser power required is of the order of 1 Watt focused to a spot size of several tens of micrometers. Phenomena related to morphology-dependent resonances in a spherical optical cavity have been studied by several research groups.
For a shiny object, such as a metallic micro-sphere, stable optical levitation has not been achieved. Optical levitation of a macroscopic object is also theoretically possible, and can be enhanced with nano-structuring.
Materials that have been successfully levitated include Black liquor, aluminum oxide, tungsten, and nickel.
Optothermal tweezers
Over the last two decades, optical forces have been combined with thermophoretic forces to enable trapping at reduced laser powers, thus minimizing photon damage. By introducing light-absorbing elements (either particles or substrates), microscale temperature gradients are created, resulting in thermophoresis. Typically, particles (including biological objects such as cells, bacteria, and DNA/RNA) drift towards colder regions, so the heated spot repels them when using conventional optical tweezers. To overcome this limitation, different techniques such as beam shaping and solution modification with electrolytes and surfactants were used to successfully trap the objects. Laser cooling was also achieved with ytterbium-doped yttrium lithium fluoride crystals to generate cold spots, allowing trapping with reduced photobleaching. The sample temperature has also been reduced to achieve optical trapping for a significantly increased selection of particles using optothermal tweezers for drug delivery applications.
Setups
The most basic optical tweezer setup will likely include the following components: a laser (usually Nd:YAG), a beam expander, some optics used to steer the beam location in the sample plane, a microscope objective and condenser to create the trap in the sample plane, a position detector (e.g. quadrant photodiode) to measure beam displacements and a microscope illumination source coupled to a CCD camera.
An Nd:YAG laser (1064 nm wavelength) is a common choice of laser for working with biological specimens. This is because such specimens (being mostly water) have a low absorption coefficient at this wavelength. A low absorption is advisable so as to minimise damage to the biological material, sometimes referred to as opticution. Perhaps the most important consideration in optical tweezer design is the choice of the objective. A stable trap requires that the gradient force, which is dependent upon the numerical aperture (NA) of the objective, be greater than the scattering force. Suitable objectives typically have an NA between 1.2 and 1.4.
While alternatives are available, perhaps the simplest method for position detection involves imaging the trapping laser exiting the sample chamber onto a quadrant photodiode. Lateral deflections of the beam are measured similarly to how it is done using atomic force microscopy (AFM).
Expanding the beam emitted from the laser to fill the aperture of the objective will result in a tighter, diffraction-limited spot. While lateral translation of the trap relative to the sample can be accomplished by translation of the microscope slide, most tweezer setups have additional optics designed to translate the beam to give an extra degree of translational freedom. This can be done by translating the first of the two lenses labelled as "Beam Steering" in the figure. For example, translation of that lens in the lateral plane will result in a laterally deflected beam from what is drawn in the figure. If the distance between the beam steering lenses and the objective is chosen properly, this will correspond to a similar deflection before entering the objective and a resulting lateral translation in the sample plane. The position of the beam waist, that is the focus of the optical trap, can be adjusted by an axial displacement of the initial lens. Such an axial displacement causes the beam to diverge or converge slightly, the result of which is an axially displaced position of the beam waist in the sample chamber.
Visualization of the sample plane is usually accomplished through illumination via a separate light source coupled into the optical path in the opposite direction using dichroic mirrors. This light is incident on a CCD camera and can be viewed on an external monitor or used for tracking the trapped particle position via video tracking.
Alternative laser beam modes
The majority of optical tweezers make use of conventional TEM00 Gaussian beams. However, a number of other beam types have been used to trap particles, including high-order laser beams such as Hermite-Gaussian beams (TEMxy), Laguerre-Gaussian (LG) beams (TEMpl) and Bessel beams.
Optical tweezers based on Laguerre-Gaussian beams have the unique capability of trapping particles that are optically reflective and absorptive. Laguerre-Gaussian beams also possess a well-defined orbital angular momentum that can rotate particles. This is accomplished without external mechanical or electrical steering of the beam.
Both zero and higher order Bessel Beams also possess a unique tweezing ability. They can trap and rotate multiple particles that are millimeters apart and even around obstacles.
Micromachines can be driven by these unique optical beams due to their intrinsic rotating mechanism due to the spin and orbital angular momentum of light.
Multiplexed optical tweezers
A typical setup uses one laser to create one or two traps. Commonly, two traps are generated by splitting the laser beam into two orthogonally polarized beams. Optical tweezing operations with more than two traps can be realized either by time-sharing a single laser beam among several optical tweezers, or by diffractively splitting the beam into multiple traps. With acousto-optic deflectors or galvanometer-driven mirrors, a single laser beam can be shared among hundreds of optical tweezers in the focal plane, or else spread into an extended one-dimensional trap. Specially designed diffractive optical elements can divide a single input beam into hundreds of continuously illuminated traps in arbitrary three-dimensional configurations. The trap-forming hologram also can specify the mode structure of each trap individually, thereby creating arrays of optical vortices, optical tweezers, and holographic line traps, for example. When implemented with a spatial light modulator, such holographic optical traps also can move objects in three dimensions. Advanced forms of holographic optical traps with arbitrary spatial profiles, where smoothness of the intensity and the phase are controlled, find applications in many areas of science, from micromanipulation to ultracold atoms.
Ultracold atoms could also be used for realization of quantum computers.
Single mode optical fibers
The standard fiber optical trap relies on the same principles as conventional optical trapping, but with the Gaussian laser beam delivered through an optical fiber. If one end of the optical fiber is molded into a lens-like facet, the nearly Gaussian beam carried by a single-mode standard fiber will be focused at some distance from the fiber tip. The effective numerical aperture of such an assembly is usually not enough to allow for a full 3D optical trap but only for a 2D trap (optical trapping and manipulation of objects will be possible only when, e.g., they are in contact with a surface).
A true 3D optical trap based on a single fiber, with a trapping point that is not in near contact with the fiber tip, has been realized based on a non-standard annular-core fiber arrangement and a total-internal-reflection geometry.
On the other hand, if the ends of the fiber are not moulded, the laser exiting the fiber will be diverging, and thus a stable optical trap can only be realised by balancing the gradient and scattering forces from two opposing ends of the fiber. The gradient force will trap the particles in the transverse direction, while the axial optical force comes from the scattering force of the two counter-propagating beams emerging from the two fibers. The equilibrium z-position of such a trapped bead is where the two scattering forces equal each other. This work was pioneered by A. Constable et al., Opt. Lett. 18, 1867 (1993), and followed by J. Guck et al., Phys. Rev. Lett. 84, 5451 (2000), who made use of this technique to stretch microparticles. By manipulating the input power at the two ends of the fiber, the "optical stretching" can be increased and used to measure viscoelastic properties of cells, with sensitivity sufficient to distinguish between different individual cytoskeletal phenotypes, e.g. human erythrocytes and mouse fibroblasts. A recent test has seen great success in differentiating cancerous cells from non-cancerous ones using the two opposed, non-focused laser beams.
Multimode fiber-based traps
While earlier versions of fiber-based laser traps exclusively used single-mode beams, M. Kreysing and colleagues recently showed that the careful excitation of further optical modes in a short piece of optical fiber allows the realization of non-trivial trapping geometries. In this way the researchers were able to orient various human cell types (individual cells and clusters) on a microscope. The main advantage of the so-called "optical cell rotator" technology over standard optical tweezers is the decoupling of trapping from imaging optics. This, its modular design, and the high compatibility of divergent laser traps with biological material indicate the great potential of this new generation of laser traps in medical research and life science. Recently, the optical cell rotator technology was implemented on the basis of adaptive optics, allowing the optical trap to be dynamically reconfigured during operation and adapted to the sample.
Cell sorting
One of the more common cell-sorting systems makes use of flow cytometry through fluorescence imaging. In this method, a suspension of biologic cells is sorted into two or more containers, based upon specific fluorescent characteristics of each cell during an assisted flow. By applying an electrical charge to the droplet in which each cell is "trapped", the cells are then sorted based on the fluorescence intensity measurements. The sorting process is undertaken by an electrostatic deflection system that diverts cells into containers based upon their charge.
In the optically actuated sorting process, the cells flow into an optical landscape, i.e. a 2D or 3D optical lattice. Without any induced electrical charge, the cells sort based on their intrinsic refractive index properties, and the sorting can be reconfigured dynamically. An optical lattice can be created using diffractive optics and optical elements.
On the other hand, K. Ladavac et al. used a spatial light modulator to project an intensity pattern to enable the optical sorting process. K. Xiao and D. G. Grier applied holographic video microscopy to demonstrate that this technique can sort colloidal spheres with part-per-thousand resolution for size and refractive index.
The main mechanism for sorting is the arrangement of the optical lattice points. As the cells flow through the optical lattice, they experience a drag force that competes directly with the optical gradient force (see Physics of optical tweezers) from the optical lattice points. By shifting the arrangement of the optical lattice points, a preferred optical path is created where the optical forces are dominant and biased. With the aid of the flow of the cells, there is a resultant force directed along that preferred optical path. Hence, the flow rate is directly related to the optical gradient force. By adjusting the two forces, a good optical sorting efficiency can be obtained.
Competition between the forces in the sorting environment needs fine tuning to achieve highly efficient optical sorting. The need is mainly with regard to the balance between the drag force due to fluid flow and the optical gradient force due to the arrangement of the intensity spots.
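The force balance described above can be estimated with the Stokes drag law, F_drag = 6·π·η·r·v. For a given optical gradient force at a lattice site, the flow speed at which drag matches it sets a rough upper limit for sorting; the force, particle size and viscosity values below are illustrative assumptions only.

```python
import numpy as np

eta = 1.0e-3        # dynamic viscosity of water, Pa*s
r = 2.5e-6          # particle radius, m (roughly cell-sized)
F_opt = 1.0e-12     # assumed optical gradient force at a lattice site, N

# Stokes drag: F_drag = 6 * pi * eta * r * v.  Sorting fails roughly when drag
# equals the available optical force, which sets a critical flow speed.
v_crit = F_opt / (6 * np.pi * eta * r)
print(f"critical flow speed ~ {v_crit * 1e6:.1f} um/s")
```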
Scientists at the University of St. Andrews have received considerable funding from the UK Engineering and Physical Sciences Research Council (EPSRC) for an optical sorting machine. This new technology could rival the conventional fluorescence-activated cell sorting.
Evanescent fields
An evanescent field is a residual optical field that "leaks" during total internal reflection. This "leaking" of light fades off at an exponential rate. The evanescent field has found a number of applications in nanometer-resolution imaging (microscopy); optical micromanipulation (optical tweezers) is becoming ever more relevant in research.
In optical tweezers, a continuous evanescent field can be created when light is propagating through an optical waveguide (multiple total internal reflection). The resulting evanescent field has a directional sense and will propel microparticles along its propagating path. This work was first pioneered by S. Kawata and T. Sugiura, in 1992, who showed that the field can be coupled to the particles in proximity on the order of 100 nanometers. This direct coupling of the field is treated as a type of photon tunnelling across the gap from prism to microparticles. The result is a directional optical propelling force.
A recently updated version of the evanescent field optical tweezers makes use of extended optical landscape patterns to simultaneously guide a large number of particles into a preferred direction without using a waveguide. It is termed Lensless Optical Trapping ("LOT"). The orderly movement of the particles is aided by the introduction of a Ronchi ruling that creates well-defined optical potential wells (replacing the waveguide). This means that particles are propelled by the evanescent field while being trapped by the linear bright fringes. Scientists are also working on focused evanescent fields.
In recent studies, the evanescent field generated by a mid-infrared laser has been used to sort particles selectively by molecular vibrational resonance. Mid-infrared light is commonly used to identify molecular structures of materials because the vibrational modes exist in the mid-infrared region. A study by Statsenko et al. described optical force enhancement by molecular vibrational resonance, exciting the stretching mode of the Si-O-Si bond at 9.3 μm. It was shown that silica microspheres containing a significant number of Si-O-Si bonds move up to ten times faster than polystyrene microspheres due to molecular vibrational resonance. Moreover, the same group also investigated the possibility of optical force chromatography based on molecular vibrational resonance.
Another approach that has been recently proposed makes use of surface plasmons, which are enhanced evanescent waves localized at a metal/dielectric interface. The enhanced force field experienced by colloidal particles exposed to surface plasmons at a flat metal/dielectric interface has been measured for the first time using a photonic force microscope, with the total force magnitude found to be 40 times stronger than that of a normal evanescent wave. By patterning the surface with gold microscopic islands it is possible to have selective and parallel trapping in these islands. The forces of the latter optical tweezers lie in the femtonewton range.
The evanescent field can also be used to trap cold atoms and molecules near the surface of an optical waveguide or optical nanofiber.
Indirect approach
Ming Wu, a UC Berkeley professor of electrical engineering and computer sciences, invented the optoelectronic tweezers.
Wu transformed the optical energy from low-powered light emitting diodes (LEDs) into electrical energy via a photoconductive surface. The idea is to use the projected LED pattern to switch the photoconductive material on and off locally. As the optical pattern can be easily changed through optical projection, this method allows a high degree of flexibility in switching between different optical landscapes.
The manipulation/tweezing process is done through variations in the electric field actuated by the light pattern. The particles will be either attracted to or repelled from the actuated point due to their induced electrical dipoles. Particles suspended in a liquid are susceptible to the electric field gradient; this is known as dielectrophoresis.
One clear advantage is that electrical conductivity differs between different kinds of cells: living cells have a lower-conductivity medium, while dead ones have minimal or no conductive medium. The system may be able to manipulate roughly 10,000 cells or particles at the same time.
See comments by Professor Kishan Dholakia on this new technique, K. Dholakia, Nature Materials 4, 579–580 (01 Aug 2005) News and Views.
"The system was able to move live E. coli bacteria and 20-micrometre-wide particles, using an optical power output of less than 10 microwatts. This is one-hundred-thousandth of the power needed for [direct] optical tweezers".
Another notable new type of optical tweezers is the optothermal tweezers invented by Yuebing Zheng at The University of Texas at Austin. The strategy is to use light to create a temperature gradient and exploit the thermophoretic migration of matter for optical trapping. The team further integrated thermophoresis with laser cooling to develop opto-refrigerative tweezers that avoid thermal damage, enabling noninvasive optical trapping and manipulation.
Optical binding
When a cluster of microparticles is trapped within a monochromatic laser beam, the organization of the microparticles within the optical trap is heavily dependent on the redistribution of the optical trapping forces amongst the microparticles. This redistribution of light forces amongst the cluster of microparticles provides a new force equilibrium on the cluster as a whole. As such, the cluster of microparticles can be said to be somewhat bound together by light. Some of the first experimental evidence of optical binding was reported by Michael M. Burns, Jean-Marc Fournier, and Jene A. Golovchenko, though it was originally predicted by T. Thirunamachandran. One of the many recent studies on optical binding has shown that for a system of chiral nanoparticles, the magnitude of the binding forces is dependent on the polarisation of the laser beam and the handedness of the interacting particles themselves, with potential applications in areas such as enantiomeric separation and optical nanomanipulation.
Fluorescence optical tweezers
In order to simultaneously manipulate and image samples that exhibit fluorescence, optical tweezers can be built alongside a fluorescence microscope. Such instruments are particularly useful when it comes to studying single or small numbers of biological molecules that have been fluorescently labelled, or in applications in which fluorescence is used to track and visualize objects that are to be trapped.
This approach has been extended for simultaneous sensing and imaging of dynamic protein complexes using long and strong tethers generated by a highly efficient multi-step enzymatic approach and applied to investigations of disaggregation machines in action.
Tweezers combined with other imaging techniques
Beyond 'standard' fluorescence, optical tweezers are now being built in combination with multiple-color confocal, widefield, STED, FRET, TIRF or IRM imaging.
This allows applications such as measuring protein/DNA binding and localization, protein folding, condensation, and motor protein force generation, as well as visualization of cytoskeletal filaments and motor dynamics, microtubule dynamics, and manipulation of liquid droplets (rheology) or droplet fusion. These setups are difficult to build and traditionally are found in non-correlated 'academic' setups. In recent years, even home builders (both biophysicists and general biologists) have been converting to the alternative and acquiring fully correlated solutions with easy data acquisition and data analysis.
See also
Atom optics
Levitation
List of laser articles
Quantum control
Quantum optics
References
External links
Video: Levitating DIAMONDS with a laser beam
Levitation
Photonics
Condensed matter physics
Molecular biology
Cell biology
Biophysics
Optical trapping
1986 introductions
Force lasers | Optical tweezers | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 6,864 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Cell biology",
"Optical trapping",
"Phases of matter",
"Materials science",
"Levitation",
"Motion (physics)",
"Particle traps",
"Biophysics",
"Condensed matter physics",
"Molecular biology",
"Biochemistry",
"Matter"
] |
300,044 | https://en.wikipedia.org/wiki/Brucite | Brucite is the mineral form of magnesium hydroxide, with the chemical formula Mg(OH)2. It is a common alteration product of periclase in marble; a low-temperature hydrothermal vein mineral in metamorphosed limestones and chlorite schists; and formed during serpentinization of dunites. Brucite is often found in association with serpentine, calcite, aragonite, dolomite, magnesite, hydromagnesite, artinite, talc and chrysotile.
It adopts a layered CdI2-like structure with hydrogen bonds between the layers.
Discovery
Brucite was first described in 1824 by François Sulpice Beudant and named for the discoverer, American mineralogist, Archibald Bruce (1777–1818). A fibrous variety of brucite is called nemalite. It occurs in fibers or laths, usually elongated along [1010], but sometimes [1120] crystalline directions.
Occurrence
A notable location in the US is Wood's Chrome Mine, Cedar Hill Quarry, Lancaster County, Pennsylvania. Yellow, white and blue brucite with a botryoidal habit was discovered in Qila Saifullah District of Province Baluchistan, Pakistan. In a later discovery, brucite also occurred in the Bela Ophiolite of Wadh, Khuzdar District, Province Baluchistan, Pakistan. Brucite has also occurred from South Africa, Italy, Russia, Canada, and other localities as well, but the most notable discoveries are the US, Russian and Pakistani examples.
Industrial applications
Synthetic brucite is mainly consumed as a precursor to magnesia (MgO), a useful refractory and thermal insulator. It finds some use as a flame retardant because it thermally decomposes to release water, in a similar way to aluminium hydroxide (Al(OH)3) and mixtures of huntite (Mg3Ca(CO3)4) and hydromagnesite (Mg5(CO3)4(OH)2·4H2O). It also constitutes a significant source of magnesium for industry. Although generally deemed safe, brucite can be contaminated with naturally occurring asbestos fibers.
Magnesium attack of cement and concrete
When cement or concrete is exposed to Mg2+, the neoformation of brucite, an expansive material, may induce mechanical stress in the hardened cement paste, or it may clog the porous network and delay the alteration/transformation of the C-S-H phase (the "glue" phase in the hardened cement paste) into the M-S-H phase (a non-cohesive mineral phase). The exact magnitude of the impact that brucite has on cement paste is still debated. Prolonged contact between sea water or brines and concrete may induce durability issues for regularly immersed concrete components or structures.
The use of dolomite as aggregate in concrete can also cause magnesium attack and should be avoided.
Gallery
See also
List of minerals
List of minerals named after people
Portlandite, Ca(OH)2
Cookeite, LiAl4(Si3Al)O10(OH)8
References
Further reading
Magnesium minerals
Hydroxide minerals
Cement
Concrete
Trigonal minerals
Minerals in space group 164
Luminescent minerals
Minerals described in 1824 | Brucite | [
"Chemistry",
"Engineering"
] | 639 | [
"Structural engineering",
"Luminescence",
"Concrete",
"Luminescent minerals"
] |
300,082 | https://en.wikipedia.org/wiki/Anechoic%20chamber | An anechoic chamber (an-echoic meaning "non-reflective" or "without echoes") is a room designed to stop reflections or echoes of either sound or electromagnetic waves. They are also often isolated from energy entering from their surroundings. This combination means that a person or detector exclusively hears direct sounds (no reflected sounds), in effect simulating being outside in a free field.
Anechoic chambers, a term coined by American acoustics expert Leo Beranek, were initially exclusively used to refer to acoustic anechoic chambers. Recently, the term has been extended to other radio frequency (RF) and sonar anechoic chambers, which eliminate reflection and external noise caused by electromagnetic waves.
Anechoic chambers range from small compartments the size of household microwave ovens to ones as large as aircraft hangars. The size of the chamber depends on the size of the objects and frequency ranges being tested.
Acoustic anechoic chambers
The requirement for what was subsequently called an anechoic chamber originated to allow testing of loudspeakers that generated such intense sound levels that they could not be tested outdoors in inhabited areas.
Anechoic chambers are commonly used in acoustics to conduct experiments in nominally "free field" conditions, free field meaning that there are no reflected signals. All sound energy will be traveling away from the source with almost none reflected back. Common anechoic chamber experiments include measuring the transfer function of a loudspeaker or the directivity of noise radiation from industrial machinery. In general, the interior of an anechoic chamber can be very quiet, with typical noise levels in the 10–20 dBA range. In 2005, the best anechoic chamber measured at −9.4 dBA. In 2015, an anechoic chamber on the campus of Microsoft broke the world record with a measurement of −20.6 dBA. The human ear can typically detect sounds above 0 dBA, so a human in such a chamber would perceive the surroundings as devoid of sound. Anecdotally, some people may not like such silence and can become disoriented.
The mechanism by which anechoic chambers minimize the reflection of sound waves impinging onto their walls is as follows: In the included figure, an incident sound wave I is about to impinge onto a wall of an anechoic chamber. This wall is composed of a series of wedges W with height H. After the impingement, the incident wave I is reflected as a series of waves R which in turn "bounce up-and-down" in the gap of air A (bounded by dotted lines) between the wedges W. Such bouncing may produce (at least temporarily) a standing wave pattern in A. During this process, the acoustic energy of the waves R gets dissipated via the air's molecular viscosity, in particular near the corner C. In addition, with the use of foam materials to fabricate the wedges, another dissipation mechanism happens during the wave/wall interactions. As a result, the component of the reflected waves R along the direction of I that escapes the gaps A (and goes back to the source of sound), denoted R', is notably reduced. Even though this explanation is two-dimensional, it is representative and applicable to the actual three-dimensional wedge structures used in anechoic chambers.
Semi-anechoic and hemi-anechoic chambers
Full anechoic chambers aim to absorb energy in all directions. To do this, all surfaces, including the floor, need to be covered in correctly shaped wedges. A mesh grille is usually installed above the floor to provide a surface to walk on and place equipment. This mesh floor is typically placed at the same floor level as the rest of the building, meaning the chamber itself extends below floor level. This mesh floor is damped and floating on absorbent buffers to isolate it from outside vibration or electromagnetic signals.
In contrast, semi-anechoic or hemi-anechoic chambers have a solid floor that acts as a work surface for supporting heavy items, such as cars, washing machines, or industrial machinery, which could not be supported by the mesh grille in a full anechoic chamber. Recording studios are often semi-anechoic.
The distinction between "semi-anechoic" and "hemi-anechoic" is unclear. In some uses they are synonyms, or only one term is used. Other uses distinguish one as having an ideally reflective floor (creating free-field conditions with a single reflective surface) and the other as simply having a flat untreated floor. Still other uses distinguish them by size and performance, with one being likely an existing room retrofitted with acoustic treatment, and the other a purpose-built room which is likely larger and has better anechoic performance.
Radio-frequency anechoic chambers
The internal appearance of the radio frequency (RF) anechoic chamber is sometimes similar to that of an acoustic anechoic chamber; however, the interior surfaces of the RF anechoic chamber are covered with radiation absorbent material (RAM) instead of acoustically absorbent material. Uses for RF anechoic chambers include testing antennas and radars, and they are typically used to house the antennas for performing measurements of antenna radiation patterns and electromagnetic interference.
Performance expectations (gain, efficiency, pattern characteristics, etc.) constitute primary challenges in designing stand alone or embedded antennas. Designs are becoming ever more complex with a single device incorporating multiple technologies such as cellular, WiFi, Bluetooth, LTE, MIMO, RFID and GPS.
Radiation-absorbent material
RAM is designed and shaped to absorb incident RF radiation (also known as non-ionising radiation) as effectively as possible, from as many incident directions as possible. The more effective the RAM, the lower the resulting level of reflected RF radiation. Many measurements in electromagnetic compatibility (EMC) and antenna radiation patterns require that spurious signals arising from the test setup, including reflections, are negligible to avoid the risk of causing measurement errors and ambiguities.
Effectiveness over frequency
Waves of higher frequencies have shorter wavelengths and are higher in energy, while waves of lower frequencies have longer wavelengths and are lower in energy, according to the relationship λ = v/f, where λ (lambda) represents wavelength, v is the phase velocity of the wave, and f is the frequency. To shield for a specific wavelength, the cone must be of appropriate size to absorb that wavelength. The performance quality of an RF anechoic chamber is determined by its lowest test frequency of operation, at which measured reflections from the internal surfaces will be the most significant compared to higher frequencies. Pyramidal RAM is at its most absorptive when the incident wave is at normal incidence to the internal chamber surface and the pyramid height is approximately a quarter of the free-space wavelength (λ/4). Accordingly, increasing the pyramid height of the RAM for the same (square) base size improves the effectiveness of the chamber at low frequencies but results in increased cost and a reduced unobstructed working volume that is available inside a chamber of defined size.
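As a quick sizing check, the snippet below computes the free-space wavelength λ = c/f at a few candidate lowest test frequencies and the corresponding quarter-wavelength pyramid height, taking the quarter-wavelength figure above as a rule of thumb.

```python
c = 299_792_458.0   # speed of light in free space, m/s

def pyramid_height(f_hz, fraction=0.25):
    """Absorber height if pyramids must be roughly a quarter wavelength tall."""
    return fraction * c / f_hz

for f in (30e6, 100e6, 1e9):          # candidate lowest test frequencies, Hz
    print(f"{f / 1e6:7.0f} MHz: wavelength {c / f:6.2f} m, pyramid height ~ {pyramid_height(f):.2f} m")
```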
Installation into a screened room
An RF anechoic chamber is usually built into a screened room, designed using the Faraday cage principle. This is because most of the RF tests that require an anechoic chamber to minimize reflections from the inner surfaces also require the properties of a screened room to attenuate unwanted signals penetrating inwards and causing interference to the equipment under test and prevent leakage from tests penetrating outside.
Chamber size and commissioning
At lower radiated frequencies, far-field measurement can require a large and expensive chamber. Sometimes, for example for radar cross-section measurements, it is possible to scale down the object under test and reduce the chamber size, provided that the wavelength of the test frequency is scaled down in direct proportion by testing at a higher frequency.
RF anechoic chambers are normally designed to meet the electrical requirements of one or more accredited standards. For example, the aircraft industry may test equipment for aircraft according to company specifications or military specifications such as MIL-STD 461E. Once built, acceptance tests are performed during commissioning to verify that the standard(s) are in fact met. Provided they are, a certificate will be issued to that effect. The chamber will need to be periodically retested.
Operational use
Test and supporting equipment configurations to be used within anechoic chambers must expose as few metallic (conductive) surfaces as possible, as these risk causing unwanted reflections. Often this is achieved by using non-conductive plastic or wooden structures for supporting the equipment under test. Where metallic surfaces are unavoidable, they may be covered with pieces of RAM after setting up to minimize such reflection as far as possible.
A careful assessment may be required as to whether the test equipment (as opposed to the equipment under test) should be placed inside or outside the chamber. Typically most of it is located in a separate screened room attached to the main test chamber, in order to shield it from both external interference and from the radiation within the chamber. Mains power and test signal cabling into the test chamber require high quality filtering.
Fiber optic cables are sometimes used for the signal cabling, as they are immune to ordinary RFI and also cause little reflection inside the chamber.
Health and safety risks associated with RF anechoic chamber
The following health and safety risks are associated with RF anechoic chambers:
RF radiation hazard
Fire hazard
Trapped personnel
Personnel are not normally permitted inside the chamber during a measurement as this not only can cause unwanted reflections from the human body but may also be a radiation hazard to the personnel concerned if tests are being performed at high RF powers. Such risks are from RF or non-ionizing radiation and not from the higher energy ionizing radiation.
As RAM is highly absorptive of RF radiation, incident radiation will generate heat within the RAM. If this cannot be dissipated adequately, there is a risk that hot spots may develop and the RAM temperature may rise to the point of combustion. This can be a risk if a transmitting antenna inadvertently gets too close to the RAM. Even for quite modest transmitting power levels, high-gain antennas can concentrate the power sufficiently to cause high power flux near their apertures. Although recently manufactured RAM is normally treated with a fire retardant, such risks are difficult to eliminate entirely.
See also
Soundproofing
Vibration isolation
Buffer (disambiguation)
Damped wave
Damping ratio
Damper (disambiguation)
Electromagnetic reverberation chamber
Reverberation room
Sensory deprivation
GTEM cell
References
External links
360-degree video of an anechoic chamber
Pictures and description of an acoustic anechoic chamber
Anechoic Chambers, Past and Present
How RF Anechoic Chambers Work
Video tour of an EMC/RF Test facility. Including the largest anechoic test chamber in the southern hemisphere
Some examples
Antenna Testing For An Anechoic Chamber
Millimeter Wave Inc's Radio/MM Wave anechoic chamber
Bell Labs' Murray Hill anechoic chamber
Anechoic chamber for millimeter wave designs
Anechoic chambers at Apple Inc. campus used to test their mobile device products, via WaybackMachine
Photos from building an anechoic chamber in CTU, Prague
Sound examples
The sound of clothes inside an anechoic chamber
Hallucinations in anechoic chambers: the science behind the claim
Listen to a subdued balloon burst in an anechoic chamber
Laboratories
Electromagnetic radiation
Silence
Radiation
Rooms
Noise control
Acoustics | Anechoic chamber | [
"Physics",
"Chemistry",
"Engineering"
] | 2,321 | [
"Transport phenomena",
"Physical phenomena",
"Electromagnetic radiation",
"Classical mechanics",
"Acoustics",
"Rooms",
"Waves",
"Radiation",
"Architecture"
] |
300,420 | https://en.wikipedia.org/wiki/Mesophile | A mesophile is an organism that grows best in moderate temperature, neither too hot nor too cold, with an optimum growth range from 20 to 45 °C. The optimum growth temperature for these organisms is 37 °C (about 99 °F). The term is mainly applied to microorganisms. Organisms that prefer extreme environments are known as extremophiles. Mesophiles have diverse classifications, belonging to two domains, Bacteria and Archaea, and to the kingdom Fungi of the domain Eucarya. Mesophiles belonging to the domain Bacteria can be either gram-positive or gram-negative. Oxygen requirements for mesophiles can be aerobic or anaerobic. There are three basic shapes of mesophiles: coccus, bacillus, and spiral.
Habitat
The habitats of mesophiles can include cheese and yogurt. They are often included during fermentation of beer and wine making. Since normal human body temperature is 37 °C, the majority of human pathogens are mesophiles, as are most of the organisms comprising the human microbiome.
Mesophiles vs. extremophiles
Mesophiles are the opposite of extremophiles. Extremophiles that prefer cold environments are termed psychrophilic, those preferring warmer temperatures are termed thermophilic or thermotropic and those thriving in extremely hot environments are hyperthermophilic.
A genome-wide computational approach has been designed by Zheng, et al. to classify bacteria into mesophilic and thermophilic.
Adaptations
All bacteria have their own optimum environmental surroundings and temperatures in which they thrive. Many factors are responsible for a given organism's optimal temperature range, but evidence suggests that the expression of particular genetic elements (alleles) can alter the temperature-sensitive phenotype of the organism. A study published in 2016 demonstrated that mesophilic bacteria could be genetically engineered to express certain alleles from psychrophilic bacteria, consequently shifting the restrictive temperature range of the mesophilic bacteria to closely match that of the psychrophilic bacteria.
Because of their less stable cellular structure, mesophiles have reduced flexibility for protein synthesis and are unable to synthesize proteins at low temperatures. They are more sensitive to temperature changes, and the fatty acid composition of their membranes does not allow for much fluidity. Lowering the temperature from the optimum of 37 °C to between 0 °C and 8 °C leads to a gradual decrease in protein synthesis. Cold-induced proteins (CIPs) are expressed at low temperatures, which in turn allows cold-shock proteins (CSPs) to be synthesized. Shifting back to the optimal temperature restores protein synthesis, indicating that mesophiles are highly dependent on temperature. Oxygen availability also affects microorganism growth.
There are two explanations for why thermophiles can survive at such high temperatures whereas mesophiles cannot. The most evident explanation is that thermophiles are believed to have cell components that are more stable than those of mesophiles, which allows them to live at higher temperatures. "A second school of thought, as represented by the writings of Gaughran (21) and Allen (3), believes that rapid resynthesis of damaged or destroyed cell constituents is the key to the problem of biological stability to heat."
Oxygen requirements
Due to the diversity of mesophiles, oxygen requirements greatly vary. Aerobic respiration requires the use of oxygen and anaerobic does not. There are three types of anaerobes. Facultative anaerobes grow in the absence of oxygen, using fermentation instead. During fermentation, sugars are converted to acids, alcohol, or gases. If there is oxygen present, it will use aerobic respiration instead. Obligate anaerobes cannot grow in the presence of oxygen. Aerotolerant anaerobes can withstand oxygen.
Roles
Microorganisms play an important role in decomposition of organic matter and mineralization of nutrients. In aquatic environments, the diversity of the ecosystem allows for the diversity of mesophiles. The functions of each mesophile rely on the surroundings, most importantly temperature range. Bacteria such as mesophiles and thermophiles are used in the cheesemaking due to their role in fermentation. "Traditional microbiologists use the following terms to indicate the general (slightly arbitrary) optimum temperature for the growth of bacteria: psychrophiles (15–20 °C), mesophiles (30–37 °C), thermophiles (50–60 °C) and extreme thermophiles (up to 122 °C)". Both mesophiles and thermophiles are used in cheesemaking for the same reason; however, they grow, thrive and die at different temperatures. Psychrotrophic bacteria contribute to dairy products spoiling, getting mouldy or going bad due to their ability to grow at lower temperatures such as in a refrigerator.
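The (slightly arbitrary) optimum-temperature bands quoted above can be expressed as a small lookup; the sketch below simply encodes those quoted ranges, and the handling of temperatures falling between the bands is an assumption made for illustration.

```python
# Minimal sketch: map an optimum growth temperature onto the (slightly
# arbitrary) bands quoted above. Gaps between the quoted bands are reported
# as unclassified; that handling is an assumption for illustration only.

def classify_by_optimum_temperature(t_celsius: float) -> str:
    if 15 <= t_celsius <= 20:
        return "psychrophile"
    if 30 <= t_celsius <= 37:
        return "mesophile"
    if 50 <= t_celsius <= 60:
        return "thermophile"
    if 60 < t_celsius <= 122:
        return "extreme thermophile"
    return "outside the quoted bands"

if __name__ == "__main__":
    for t in (18, 37, 55, 95):
        print(f"{t} °C -> {classify_by_optimum_temperature(t)}")
```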
Examples
Some notable mesophiles include Listeria monocytogenes, Staphylococcus aureus, and Escherichia coli. Other examples of species of mesophiles are Clostridium kluyveri, Pseudomonas maltophilia, Thiobacillus novellus, Streptococcus pyogenes, and Streptococcus pneumoniae. Different types of diseases and infections typically have pathogens from mesophilic bacteria such as the ones listed above.
Listeria monocytogenes
Listeria monocytogenes is a gram-positive bacterium. It is closely related to Bacillus and Staphylococcus. It is a rod-shaped, facultative anaerobe that is motile by peritrichous flagella. L. monocytogenes motility is limited from 20 °C to 25 °C. At the optimal temperature, it loses its motility. This bacterium is responsible for listeriosis which derives from contaminated food.
Staphylococcus aureus
Staphylococcus aureus was first identified in 1880. It is responsible for different infections stemming from an injury. The bacterium overcomes the body's natural mechanisms. Long lasting infections of S. aureus includes pneumonia, meningitis, and osteomyelitis. S. aureus is commonly contracted in hospital settings.
Escherichia coli
Escherichia coli is a gram-negative, rod-shaped facultative anaerobic bacterium that does not produce spores. The bacterium is a member of Enterobacteriaceae. It is capable of producing enterotoxins which are thermolabile or thermostable. Other characteristics of E. coli are that it is oxidase-negative, citrate-negative, methyl-red positive, and Voges-Proskauer-negative. To sum up E. coli, it is a coliform. It is able to use glucose and acetate as a carbon source for fermentation. E. coli is commonly found in the gut of living organisms. E. coli has many capabilities such as being a host for recombinant DNA and being a pathogen.
See also
Anaerobic digestion
Mesophilic digester
Mesophyte
Neutrophile
Reverse ecology
References
Anaerobic digestion
Biodegradable waste management
Biodegradation
Microbial growth and nutrition | Mesophile | [
"Chemistry",
"Engineering"
] | 1,563 | [
"Biodegradable waste management",
"Biodegradation",
"Anaerobic digestion",
"Environmental engineering",
"Water technology"
] |
300,445 | https://en.wikipedia.org/wiki/Human%20chorionic%20gonadotropin | Human chorionic gonadotropin (hCG) is a hormone for the maternal recognition of pregnancy produced by trophoblast cells that are surrounding a growing embryo (syncytiotrophoblast initially), which eventually forms the placenta after implantation. The presence of hCG is detected in some pregnancy tests (HCG pregnancy strip tests). Some cancerous tumors produce this hormone; therefore, elevated levels measured when the patient is not pregnant may lead to a cancer diagnosis and, if high enough, paraneoplastic syndromes; however, it is unknown whether this production is a contributing cause or an effect of carcinogenesis. The pituitary analog of hCG, known as luteinizing hormone (LH), is produced in the pituitary gland of males and females of all ages.
Beta-hCG is initially secreted by the syncytiotrophoblast.
Structure
Human chorionic gonadotropin is a glycoprotein composed of 237 amino acids with a molecular mass of 36.7 kDa, approximately 14.5 kDa for αhCG and 22.2 kDa for βhCG.
It is heterodimeric, with an α (alpha) subunit identical to that of luteinizing hormone (LH), follicle-stimulating hormone (FSH), thyroid-stimulating hormone (TSH), and a β (beta) subunit that is unique to hCG.
The α (alpha) subunit is 92 amino acids long.
The β-subunit of hCG gonadotropin (beta-hCG) contains 145 amino acids, encoded by six highly homologous genes that are arranged in tandem and inverted pairs on chromosome 19q13.3 - CGB (1, 2, 3, 5, 7, 8). It is known that CGB7 has a sequence slightly different from that of the others.
The two subunits create a small hydrophobic core surrounded by a high surface area-to-volume ratio: 2.8 times that of a sphere. The vast majority of the outer amino acids are hydrophilic.
beta-hCG is mostly similar to beta-LH, with the exception of a Carboxy Terminus Peptide (beta-CTP) containing four glycosylated serine residues that is responsible for hCG's longer half-life.
Function
Human chorionic gonadotropin interacts with the LHCG receptor of the ovary and promotes the maintenance of the corpus luteum for the maternal recognition of pregnancy at the beginning of pregnancy. This allows the corpus luteum to secrete the hormone progesterone during the first trimester. Progesterone enriches the uterus with a thick lining of blood vessels and capillaries so that it can sustain the growing fetus.
It has been hypothesized that hCG may be a placental link for the development of local maternal immunotolerance. For example, hCG-treated endometrial cells induce an increase in T cell apoptosis (dissolution of T cells). These results suggest that hCG may be a link in the development of peritrophoblastic immune tolerance, and may facilitate the trophoblast invasion, which is known to expedite fetal development in the endometrium. It has also been suggested that hCG levels are linked to the severity of morning sickness or hyperemesis gravidarum in pregnant women.
Because of its similarity to LH, hCG can also be used clinically to induce ovulation in the ovaries as well as testosterone production in the testes. As the most abundant biological source is in women who are presently pregnant, some organizations collect urine from pregnant women to extract hCG for use in fertility treatment.
Human chorionic gonadotropin also plays a role in cellular differentiation/proliferation and may activate apoptosis.
Production
Naturally, it is produced in the human placenta by the syncytiotrophoblast.
Like any other gonadotropins, it can be extracted from the urine of pregnant women or produced from cultures of genetically modified cells using recombinant DNA technology.
In Pubergen, Pregnyl, Follutein, Profasi, Choragon and Novarel, it is extracted from the urine of pregnant women. In Ovidrel, it is produced with recombinant DNA technology.
hCG forms
Three major forms of hCG are produced by humans, with each having distinct physiological roles. These include regular hCG, hyperglycosylated hCG, and the free beta-subunit of hCG. Degradation products of hCG have also been detected, including nicked hCG, hCG missing the C-terminal peptide from the beta-subunit, and free alpha-subunit, which has no known biological function. Some hCG is also made by the pituitary gland with a pattern of glycosylation that differs from placental forms of hCG.
Regular hCG is the main form of hCG associated with the majority of pregnancy and in non-invasive molar pregnancies. This is produced in the trophoblast cells of the placental tissue. Hyperglycosylated hCG is the main form of hCG during the implantation phase of pregnancy, with invasive molar pregnancies, and with choriocarcinoma.
Gonadotropin preparations of hCG can be produced for pharmaceutical use from animal or synthetic sources.
Testing
Blood or urine tests measure hCG. These can be pregnancy tests. hCG-positive can indicate an implanted blastocyst and mammalian embryogenesis or can be detected for a short time following childbirth or pregnancy loss. Tests can be done to diagnose and monitor germ cell tumors and gestational trophoblastic diseases.
Concentrations are commonly reported in thousandth international units per milliliter (mIU/mL). The international unit of hCG was originally established in 1938 and has been redefined in 1964 and in 1980. At the present time, 1 international unit is equal to approximately 2.35×10⁻¹² moles, or about 6×10⁻⁸ grams.
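Using the approximate equivalences above (1 IU ≈ 2.35×10⁻¹² mol ≈ 6×10⁻⁸ g), a reported concentration in mIU/mL can be re-expressed in molar or mass terms; the example concentration in the sketch below is an arbitrary illustrative value.

```python
# Minimal sketch: convert an hCG concentration reported in mIU/mL into
# approximate molar and mass concentrations, using the equivalences quoted
# above (1 IU ~ 2.35e-12 mol ~ 6e-8 g). The input value is illustrative only.

MOL_PER_IU = 2.35e-12   # moles per international unit (approximate)
GRAMS_PER_IU = 6e-8     # grams per international unit (approximate)

def miu_per_ml_to_iu_per_l(c_miu_per_ml: float) -> float:
    """1 mIU/mL equals 1 IU/L (10^-3 IU per 10^-3 L)."""
    return c_miu_per_ml

def miu_per_ml_to_mol_per_l(c_miu_per_ml: float) -> float:
    return miu_per_ml_to_iu_per_l(c_miu_per_ml) * MOL_PER_IU

def miu_per_ml_to_g_per_l(c_miu_per_ml: float) -> float:
    return miu_per_ml_to_iu_per_l(c_miu_per_ml) * GRAMS_PER_IU

if __name__ == "__main__":
    c = 1500.0  # mIU/mL, an arbitrary illustrative value
    print(f"{c} mIU/mL ~ {miu_per_ml_to_mol_per_l(c):.2e} mol/L "
          f"~ {miu_per_ml_to_g_per_l(c):.2e} g/L")
```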
It is also possible to test for hCG to have an approximation of the gestational age.
Methodology
Most tests employ a monoclonal antibody, which is specific to the β-subunit of hCG (β-hCG). This procedure is employed to ensure that tests do not make false positives by confusing hCG with LH and FSH. (The latter two are always present at varying levels in the body, whereas the presence of hCG almost always indicates pregnancy.)
Many hCG immunoassays are based on the sandwich principle, which uses antibodies to hCG labeled with an enzyme or a conventional or luminescent dye.
Pregnancy urine dipstick tests are based on the lateral flow technique.
The urine test may be a chromatographic immunoassay or any of several other test formats, home-, physician's office-, or laboratory-based. Published detection thresholds range from 20 to 100 mIU/mL, depending on the brand of test. Early in pregnancy, more accurate results may be obtained by using the first urine of the morning (when urine is most concentrated). When the urine is dilute (specific gravity less than 1.015), the hCG concentration may not be representative of the blood concentration, and the test may be falsely negative.
The serum test, using 2-4 mL of venous blood, is typically a chemiluminescent or fluorimetric immunoassay that can detect βhCG levels as low as 5 mIU/mL and allows quantification of the βhCG concentration.
Reference levels in normal pregnancy
The hCG levels grow exponentially after conception and implantation. hCG levels typically peak around weeks 8-11 of pregnancy and are generally higher in the first trimester compared to the second trimester.
The following is a list of serum hCG levels:
LMP refers to the last menstrual period; gestational age here is dated from the first day of the last menstrual period
If a pregnant woman has serum hCG levels that are higher than expected, they may be experiencing a multiple pregnancy or an abnormal uterine growth. Falling hCG levels may indicate the possibility of a miscarriage. hCG levels which are rising at a slower rate than expected may indicate an ectopic pregnancy.
Interpretation
The ability to quantitate the βhCG level is useful in monitoring germ cell and trophoblastic tumors, follow-up care after miscarriage, and diagnosis of and follow-up care after treatment of ectopic pregnancy. The lack of a visible fetus on vaginal ultrasound after βhCG levels reach 1500 mIU/mL is strongly indicative of an ectopic pregnancy. Still, even an hCG over 2000 IU/L does not necessarily exclude the presence of a viable intrauterine pregnancy in such cases.
As pregnancy tests, quantitative blood tests and the most sensitive urine tests usually detect hCG between 6 and 12 days after ovulation. It must be taken into account, however, that total hCG levels may vary in a very wide range within the first 4 weeks of gestation, leading to false results during this period. A rise of 35% over 48 hours is proposed as the minimal rise consistent with a viable intrauterine pregnancy.
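A minimal sketch of the 35%-in-48-hours criterion mentioned above: given two serial total hCG measurements, it normalises the observed rise to a 48-hour interval (assuming exponential growth between the samples, which is an assumption made here for illustration) and compares it with that proposed minimum. It illustrates the arithmetic only, is not a clinical tool, and uses invented example values.

```python
# Minimal sketch: check whether a serial rise in total hCG meets the proposed
# minimal rise of 35% over 48 hours. Exponential growth between the two
# measurements is assumed in order to normalise to a 48-hour interval.

import math

def rise_over_48h(hcg_first: float, hcg_second: float, hours_apart: float) -> float:
    """Fractional rise normalised to 48 hours, assuming exponential growth."""
    rate_per_hour = math.log(hcg_second / hcg_first) / hours_apart
    return math.exp(rate_per_hour * 48.0) - 1.0

def meets_minimal_rise(hcg_first: float, hcg_second: float,
                       hours_apart: float, threshold: float = 0.35) -> bool:
    return rise_over_48h(hcg_first, hcg_second, hours_apart) >= threshold

if __name__ == "__main__":
    first, second, hours = 120.0, 210.0, 48.0  # mIU/mL and hours, invented values
    print(f"Normalised 48-hour rise: {rise_over_48h(first, second, hours):.0%}; "
          f"meets 35% minimum: {meets_minimal_rise(first, second, hours)}")
```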
Associations with pathologies
Gestational trophoblastic disease like hydatidiform moles ("molar pregnancy") or choriocarcinoma may produce high levels of βhCG due to the presence of syncytiotrophoblasts, part of the villi that make up the placenta, and despite the absence of an embryo. This, as well as several other conditions, can lead to elevated hCG readings in the absence of pregnancy.
hCG levels are also a component of the triple test, a screening test for certain fetal chromosomal abnormalities/birth defects. High hCG levels in the maternal serum could suggest Down syndrome, potentially due to continued hCG production by the placenta beyond the first trimester.
A study of 32 normal pregnancies came to the result that a gestational sac of 1–3 mm was detected at a mean hCG level of 1150 IU/L (range 800–1500), a yolk sac was detected at a mean level of 6000 IU/L (range 4500–7500) and fetal heartbeat was visible at a mean hCG level of 10,000 IU/L (range 8650–12,200).
Uses
Tumor marker
Human chorionic gonadotropin can be used as a tumor marker, as its β subunit is secreted by some cancers including seminoma, choriocarcinoma, teratoma with elements of choriocarcinoma, other germ cell tumors, hydatidiform mole, and islet cell tumor. For this reason, a positive result in males can be a test for testicular cancer. The normal range for men is between 0-5 mIU/mL. Combined with alpha-fetoprotein, β-HCG is an excellent tumor marker for the monitoring of germ cell tumors.
Fertility
Human chorionic gonadotropin injection is extensively used for final maturation induction in lieu of luteinizing hormone. In the presence of one or more mature ovarian follicles, ovulation can be triggered by the administration of HCG. As ovulation will happen between 38 and 40 hours after a single HCG injection, procedures can be scheduled to take advantage of this time sequence, such as intrauterine insemination or sexual intercourse. Also, patients that undergo IVF, in general, receive HCG to trigger the ovulation process, but have an oocyte retrieval performed at about 34 to 36 hours after injection, a few hours before the eggs actually would be released from the ovary.
As hCG supports the corpus luteum, administration of hCG is used in certain circumstances to enhance the production of progesterone.
Several vaccines against human chorionic gonadotropin (hCG) for the prevention of pregnancy are currently in clinical trials.
Use in males
In males, hCG injections are used to stimulate the Leydig cells to synthesize testosterone. The intratesticular testosterone is necessary for spermatogenesis from the sertoli cells. Typical medical uses for hCG in males include treating certain types of hypogonadism (either as monotherapy, or, more commonly, in combination with exogenous testosterone), as well as to either treat or prevent infertility, for example, during testosterone replacement therapy hCG is often used to restore or maintain fertility and prevent testicular atrophy.
HCG Pubergen, Pregnyl warnings
In the case of female patients who want to be treated with HCG Pubergen, Pregnyl:
a) Since infertile female patients who undergo medically assisted reproduction (especially those who need in vitro fertilization) are known to often be suffering from tubal abnormalities, after a treatment with this drug they might experience many more ectopic pregnancies. This is why early ultrasound confirmation at the beginning of a pregnancy (to see whether the pregnancy is intrauterine or not) is crucial. Pregnancies that have occurred after a treatment with this drug have a higher risk of multiple pregnancy. Female patients who have thrombosis, severe obesity, or thrombophilia should not be prescribed this medicine as they have a higher risk of arterial or venous thromboembolic events after or during a treatment with HCG Pubergen, Pregnyl.
b) Female patients who have been treated with this medicine are usually more prone to pregnancy losses.
In the case of male patients: A prolonged treatment with HCG Pubergen, Pregnyl is known to regularly lead to increased production of androgen. Therefore, patients who have overt or latent cardiac failure, hypertension, renal dysfunction, migraines, or epilepsy might not be allowed to start using this medicine or may require a lower dose of HCG Pubergen, Pregnyl. This drug should be used with extreme caution in the treatment of prepubescent teenagers in order to reduce the risk of precocious sexual development or premature epiphyseal closure. The skeletal maturation of such patients should be closely and regularly monitored.
Both male and female patients who have the following medical conditions must not start a treatment with HCG Pubergen, Pregnyl: (1) Hypersensitivity to this drug or to any of its main ingredients. (2) Known or possible androgen-dependent tumors for example male breast carcinoma or prostatic carcinoma.
Anabolic steroid adjunct
HCG is included in some sports' banned substances lists.
When exogenous AAS (Anabolic Androgenic Steroids) are put into the male body, natural negative-feedback loops cause the body to shut down its own production of testosterone via shutdown of the hypothalamic-pituitary-gonadal axis (HPGA). This causes testicular atrophy, among other things. HCG is commonly used during and after steroid cycles to maintain and restore testicular size as well as normal testosterone production.
High levels of AASs, that mimic the body's natural testosterone, trigger the hypothalamus to shut down its production of gonadotropin-releasing hormone (GnRH) from the hypothalamus. Without GnRH, the pituitary gland stops releasing luteinizing hormone (LH). LH normally travels from the pituitary via the blood stream to the testes, where it triggers the production and release of testosterone. Without LH, the testes shut down their production of testosterone. In males, HCG helps restore and maintain testosterone production in the testes by mimicking LH and triggering the production and release of testosterone.
Professional athletes who have tested positive for HCG have been temporarily banned from their sport, including a 50-game ban from MLB for Manny Ramirez in 2009 and a 4-game ban from the NFL for Brian Cushing for a positive urine test for HCG. Mixed Martial Arts fighter Dennis Siver was fined $19,800 and suspended 9 months for being tested positive after his bout at UFC 168.
HCG diet
British endocrinologist Albert T. W. Simeons proposed HCG as an adjunct to an ultra-low-calorie weight-loss diet (fewer than 500 calories). Simeons, while studying pregnant women in India on a calorie-deficient diet, and obese boys with pituitary issues (Frölich's syndrome) treated with low-dose HCG, observed that both lost fat rather than lean (muscle) tissue. He reasoned that HCG must be programming the hypothalamus to do this in the former cases in order to protect the developing fetus by promoting mobilization and consumption of abnormal, excessive adipose deposits. Simeons in 1954 published a book entitled Pounds and Inches, designed to combat obesity. Simeons, practicing at Salvator Mundi International Hospital in Rome, Italy, recommended low-dose daily HCG injections (125 IU) in combination with a customized ultra-low-calorie (500 cal/day, high-protein, low-carbohydrate/fat) diet, which was supposed to result in a loss of adipose tissue without loss of lean tissue.
Other researchers did not find the same results when attempting experiments to confirm Simeons' conclusions, and in 1976, in response to complaints, the FDA required Simeons and others to include a disclaimer on all advertisements stating that HCG had not been demonstrated to be effective in the treatment of obesity.
There was a resurgence of interest in the "HCG diet" following promotion by Kevin Trudeau, who was banned from making HCG diet weight-loss claims by the U.S. Federal Trade Commission in 2008, and eventually jailed over such claims.
A 1976 study in the American Journal of Clinical Nutrition concluded that HCG is not more effective as a weight-loss aid than dietary restriction alone.
A 1995 meta analysis found that studies supporting HCG for weight loss were of poor methodological quality and concluded that "there is no scientific evidence that HCG is effective in the treatment of obesity; it does not bring about weight-loss or fat-redistribution, nor does it reduce hunger or induce a feeling of well-being".
On November 15, 2016, the American Medical Association (AMA) passed policy that "The use of human chorionic gonadotropin (HCG) for weight loss is inappropriate."
According to the American Society of Bariatric Physicians, no new clinical trials have been published since the definitive 1995 meta-analysis.
The scientific consensus is that any weight loss reported by individuals on an "HCG diet" may be attributed entirely to the fact that such diets prescribe calorie intake of between 500 and 1,000 calories per day, substantially below recommended levels for an adult, to the point that this may risk health effects associated with malnutrition.
Homeopathic HCG for weight control
Controversy about, and shortages of, injected HCG for weight loss have led to substantial Internet promotion of "homeopathic HCG" for weight control. The ingredients in these products are often obscure, but if prepared from true HCG via homeopathic dilution, they contain either no HCG at all or only trace amounts. Moreover, it is highly unlikely that oral HCG is bioavailable due to the fact that digestive protease enzymes and hepatic metabolism renders peptide-based molecules (such as insulin and human growth hormone) biologically inert. HCG can likely only enter the bloodstream through injection.
The United States Food and Drug Administration has stated that over-the-counter products containing HCG are fraudulent and ineffective for weight loss. They are also not protected as homeopathic drugs and have been deemed illegal substances. HCG is classified as a prescription drug in the United States and it has not been approved for over-the-counter sales by the FDA as a weight loss product or for any other purposes, and therefore neither HCG in its pure form nor any preparations containing HCG may be sold legally in the country except by prescription. In December 2011, FDA and FTC started to take actions to pull unapproved HCG products from the market. In the aftermath, some suppliers started to switch to "hormone-free" versions of their weight loss products, where the hormone is replaced with an unproven mixture of free amino acids or where radionics is used to transfer the "energy" to the final product.
The United States Food and Drug Administration has since prohibited the sale of homeopathic and over-the-counter hCG diet products and declared them fraudulent.
Tetanus vaccine conspiracy theory
Catholic Bishops in Kenya are among those who have spread a conspiracy theory asserting that HCG forms part of a covert sterilization program, forcing denials from the Kenyan government.
In order to induce a stronger immune response, some versions of human chorionic gonadotropin-based anti-fertility vaccines were designed as conjugates of the β subunit of HCG covalently linked to tetanus toxoid. It was alleged that a non-conjugated tetanus vaccine used in developing countries was laced with a human chorionic gonadotropin-based anti-fertility drug and was distributed as a means of mass sterilization. This charge has been vigorously denied by the World Health Organization (WHO) and UNICEF. Others have argued that an hCG-laced vaccine could not possibly be used for sterilization, since the effects of the anti-fertility vaccines are reversible (requiring booster doses to maintain infertility) and a non-conjugated vaccine is likely to be ineffective. Finally, independent testing of the tetanus vaccine by Kenya's health authorities revealed no traces of the human chorionic gonadotropin hormone.
See also
Equine chorionic gonadotropin
Gonadotropin preparations
Human placental lactogen
Triple test - a screening test in pregnancy
The Weight-Loss Cure "They" Don't Want You to Know About - Kevin Trudeau's book
References
External links
Genes on human chromosome 19
Glycoproteins
Gynaecological endocrinology
Peptide hormones
Drugs developed by Schering-Plough
Drugs developed by Merck & Co.
Drugs developed by Merck
Gonadotropin-releasing hormone and gonadotropins
Hormones of the hypothalamus-pituitary-gonad axis
Sex hormones
Hormones of the placenta
Hormones of the pregnant female
Chemical pathology
Tumor markers
Anti-aging substances
Tests for pregnancy
Human female endocrine system | Human chorionic gonadotropin | [
"Chemistry",
"Biology"
] | 4,758 | [
"Biomarkers",
"Behavior",
"Biochemistry",
"Sex hormones",
"Anti-aging substances",
"Tumor markers",
"Senescence",
"Glycoproteins",
"Glycobiology",
"Chemical pathology",
"Sexuality"
] |
300,650 | https://en.wikipedia.org/wiki/Oskar%20Klein | Oskar Benjamin Klein (; 15 September 1894 – 5 February 1977) was a Swedish theoretical physicist.
Oskar Klein is known for his work on Kaluza–Klein theory, which is partially named after him.
Biography
Klein was born in Danderyd outside Stockholm, son of the chief rabbi of Stockholm, Gottlieb Klein from Humenné in Kingdom of Hungary, now Slovakia and Antonie (Toni) Levy. He became a student of Svante Arrhenius at the Nobel Institute at a young age and was on the way to Jean-Baptiste Perrin in France when World War I broke out and he was drafted into the military.
From 1917, he worked a few years with Niels Bohr in the University of Copenhagen and received his doctoral degree at the University College of Stockholm (now Stockholm University) in 1921. In 1923, he received a professorship at University of Michigan in Ann Arbor and moved there with his recently wedded wife, Gerda Koch from Denmark. Klein returned to Copenhagen in 1925, spent some time with Paul Ehrenfest in Leiden, then became docent at Lund University in 1926 and in 1930 accepted the offer of the professorial chair in physics at the Stockholm University College, which had previously been held by Erik Ivar Fredholm until his death in 1927. Klein was awarded the Max Planck Medal in 1959. He retired as professor emeritus in 1962.
In 1926, Klein discovered the Klein-Gordon equation, the simplest and prototypical example of a relativistic wave equation. It describes the behavior of scalar fields, such as those associated with pions.
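For reference, the free Klein-Gordon equation for a scalar field $\phi$ can be written in the standard textbook form shown below; the notation and unit conventions are the usual modern ones rather than those of Klein's original paper.

$$\left(\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}} - \nabla^{2} + \frac{m^{2}c^{2}}{\hbar^{2}}\right)\phi(\mathbf{x},t) = 0$$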
Walter Gordon and Vladimir Fock independently discovered and published the equation a few months later.
The Klein-Gordon equation is an example of Stigler's law as it was first discovered by Erwin Schrödinger in 1925 but not published until after Klein, Gordon and Fock's papers because Schrödinger was initially discouraged by the fact that it did not give the right fine structure for the hydrogen atom.
Klein is also credited for inventing the idea, part of Kaluza–Klein theory, that extra dimensions may be physically real but curled up and very small, an idea essential to string theory.
In 1938, he proposed a boson-exchange model for charge-changing weak interactions (radioactive decay), a few years after a similar proposal by Hideki Yukawa. His model was based on a local isotopic gauge symmetry and anticipated the later successful theory of Yang–Mills.
Oskar Klein died on 5 February 1977 in Stockholm, Sweden.
The Oskar Klein Memorial Lecture, held annually at the University of Stockholm, has been named after him. The Oskar Klein Centre for Cosmoparticle Physics in Stockholm, Sweden is also in his honor.
Oskar Klein is the grandfather of Helle Klein.
References
External links
Oskar Klein, "The Atomicity of Electricity as a Quantum Theory Law", Nature 118, 516 (1926). doi:10.1038/118516a0
Oskar Klein, "Quantentheorie und fünfdimensionale Relativitätstheorie", Surveys in High Energy Physics. https://doi.org/10.1080/01422418608228771
1894 births
1977 deaths
Swedish physicists
Jewish physicists
Theoretical physicists
Stockholm University alumni
Members of the Royal Swedish Academy of Sciences
Swedish Jews
Hungarian Jews
Winners of the Max Planck Medal
University of Michigan faculty
Burials at Norra begravningsplatsen | Oskar Klein | [
"Physics"
] | 718 | [
"Theoretical physics",
"Theoretical physicists"
] |
300,769 | https://en.wikipedia.org/wiki/Antony%20Hewish | Antony Hewish (11 May 1924 – 13 September 2021) was a British radio astronomer who won the Nobel Prize for Physics in 1974 (together with fellow radio-astronomer Martin Ryle) for his role in the discovery of pulsars. He was also awarded the Eddington Medal of the Royal Astronomical Society in 1969.
Early life and education
Hewish attended King's College, Taunton. His undergraduate degree, at Gonville and Caius College, Cambridge, was interrupted by the Second World War. He was assigned to war service at the Royal Aircraft Establishment, and at the Telecommunications Research Establishment where he worked with Martin Ryle. Returning to the University of Cambridge in 1946, Hewish completed his undergraduate degree and became a postgraduate student in Ryle's research team at the Cavendish Laboratory. For his PhD thesis, awarded in 1952, Hewish made practical and theoretical advances in the observation and exploitation of the scintillations of astronomical radio sources, due to foreground plasma.
Career and research
Hewish proposed the construction of a large phased array radio telescope, which could be used to perform a survey at high time resolution, primarily for studying interplanetary scintillation. In 1965 he secured funding to construct his design, the Interplanetary Scintillation Array, at the Mullard Radio Astronomy Observatory (MRAO) outside Cambridge. It was completed in 1967. One of Hewish's PhD students, Jocelyn Bell (later known as Jocelyn Bell Burnell), helped to build the array and was assigned to analyse its output. Bell soon discovered a radio source which was ultimately recognised as the first pulsar. Hewish initially thought that the signal might be radio frequency interference, but it remained at a constant right ascension, which is unlikely for a terrestrial source. The scientific paper announcing the discovery had five authors, Hewish's name being listed first, Bell's second.
Hewish and Ryle were awarded the Nobel Prize in Physics in 1974 for work on the development of radio aperture synthesis and for Hewish's decisive role in the discovery of pulsars. The exclusion of Bell from the Nobel prize was controversial (see Nobel prize controversies). Fellow Cambridge astronomer Fred Hoyle argued that Bell should have received a share of the prize, although Bell herself stated "it would demean Nobel Prizes if they were awarded to research students, except in very exceptional cases, and I do not believe this is one of them". Michael Rowan-Robinson later wrote that "Hewish was undoubtedly the major player in the work that led to the discovery, inventing the scintillation technique in 1952, leading the team that built the array and made the discovery, and providing the interpretation".
Hewish was professor of radio astronomy in the Cavendish Laboratory from 1971 to 1989 and head of the MRAO from 1982 to 1988. He developed an association with the Royal Institution in London when it was directed by Sir Lawrence Bragg. In 1965 he was invited to co-deliver the Royal Institution Christmas Lecture on "Exploration of the Universe". He subsequently gave several Friday Evening Discourses and was made a Professor of the Royal Institution in 1977. Hewish was a fellow of Churchill College, Cambridge. He was also a member of the Advisory Council for the Campaign for Science and Engineering.
Awards and honours
Hewish had honorary degrees from six universities, including Manchester, Exeter and Cambridge, was a foreign member of the Belgian Royal Academy, American Academy of Arts and Sciences and the Indian National Science Academy. The National Portrait Gallery holds multiple portraits of him in its permanent collection. Other awards and honours include:
Elected a Fellow of the Royal Society (FRS) in 1968
Eddington Medal, Royal Astronomical Society (1969)
Dellinger Gold Medal, International Union of Radio Science (1972)
Albert A. Michelson Medal, Franklin Institute (1973, jointly with Jocelyn Bell Burnell)
Fernand Holweck Medal and Prize (1974)
Nobel Prize for Physics (jointly) (1974)
Hughes Medal, Royal Society (1976)
Elected a Fellow of the Institute of Physics (FInstP) in 1998
Personal life
Hewish married Marjorie Elizabeth Catherine Richards in 1950. They had a son, a physicist, and a daughter, a language teacher. Hewish died on 13 September 2021, aged 97.
Religious views
Hewish argued that religion and science are complementary. In the foreword to Questions of Truth, Hewish writes, "The ghostly presence of virtual particles defies rational common sense and is non-intuitive for those unacquainted with physics. Religious belief in God, and Christian belief ... may seem strange to common-sense thinking. But when the most elementary physical things behave in this way, we should be prepared to accept that the deepest aspects of our existence go beyond our common-sense understanding."
See also
List of astronomers
References
Further reading
External links
Antony Hewish interviewed on Web of Stories
Interviewed by Alan Macfarlane 26 March 2008 (video)
including the Nobel Lecture, 12 December 1974 Pulsars and High Density Physics
The Papers of Professor Antony Hewish held at Churchill Archives Centre
1924 births
2021 deaths
Alumni of Gonville and Caius College, Cambridge
20th-century British astronomers
British Nobel laureates
Fellows of Churchill College, Cambridge
Fellows of Gonville and Caius College, Cambridge
Nobel laureates in Physics
People from Fowey
Place of death missing
Fellows of the Royal Society
Foreign fellows of the Indian National Science Academy
English Christians
People educated at King's College, Taunton
People from Taunton
English Nobel laureates
Spectroscopists
Radio astronomers | Antony Hewish | [
"Physics",
"Chemistry"
] | 1,135 | [
"Physical chemists",
"Spectrum (physical sciences)",
"Analytical chemists",
"Spectroscopists",
"Spectroscopy"
] |
301,135 | https://en.wikipedia.org/wiki/Tacoma%20Narrows%20Bridge%20%281940%29 | The 1940 Tacoma Narrows Bridge, the first bridge at this location, was a suspension bridge in the U.S. state of Washington that spanned the Tacoma Narrows strait of Puget Sound between Tacoma and the Kitsap Peninsula. It opened to traffic on July 1, 1940, and dramatically collapsed into Puget Sound on November 7 of the same year. The bridge's collapse has been described as "spectacular" and in subsequent decades "has attracted the attention of engineers, physicists, and mathematicians". Throughout its short existence, it was the world's third-longest suspension bridge by main span, behind the Golden Gate Bridge and the George Washington Bridge.
Construction began in September 1938. From the time the deck was built, it began to move vertically in windy conditions, so construction workers nicknamed the bridge "Galloping Gertie". The motion continued after the bridge opened to the public, despite several damping measures. The bridge's main span finally collapsed in moderate winds on the morning of November 7, 1940, as the deck oscillated in an alternating twisting motion that gradually increased in amplitude until the deck tore apart. The violent swaying and eventual collapse resulted in the death of a cocker spaniel named "Tubby", as well as inflicting injuries on people fleeing the disintegrating bridge or attempting to rescue the stranded dog.
Efforts to replace the bridge were delayed by US involvement in World War II, as well as engineering and finance issues, but in 1950, a new Tacoma Narrows Bridge opened in the same location, using the original bridge's tower pedestals and cable anchorages. The portion of the bridge that fell into the water now serves as an artificial reef.
The bridge's collapse had a lasting effect on science and engineering. In many physics textbooks, the event is presented as an example of elementary forced mechanical resonance, but it was more complicated in reality; the bridge collapsed because moderate winds produced aeroelastic flutter that was self-exciting and unbounded: for any constant sustained wind speed above a critical value, the amplitude of the (torsional) flutter oscillation would continuously increase, with a negative damping factor, i.e., a reinforcing effect, opposite to damping. The collapse boosted research into bridge aerodynamics and aeroelasticity, which has influenced the designs of all later long-span bridges.
Design and construction
Proposals for a bridge between Tacoma and the Kitsap Peninsula date at least to the Northern Pacific Railway's 1889 trestle proposal, but concerted efforts began in the mid-1920s. The Tacoma Chamber of Commerce began campaigning and funding studies in 1923. Several noted bridge engineers were consulted, including Joseph B. Strauss, who went on to be chief engineer of the Golden Gate Bridge, and David B. Steinman, later the designer of the Mackinac Bridge. Steinman made several Chamber-funded visits and presented a preliminary proposal in 1929, but by 1931 the Chamber had cancelled the agreement because Steinman was not working hard enough to obtain financing. At the 1938 meeting of the structural division of the American Society of Civil Engineers, during the construction of the bridge, with its designer in the audience, Steinman predicted its failure.
In 1937, the Washington State legislature created the Washington State Toll Bridge Authority and appropriated $5,000 to study the request by Tacoma and Pierce County for a bridge over the Narrows.
From the start, financing of the bridge was a problem: Revenue from the proposed tolls would not be enough to cover construction costs; another expense was buying out the ferry contract from a private firm running services on the Narrows at the time. Nonetheless, there was strong support for the bridge from the United States Navy, which operated the Puget Sound Naval Shipyard in Bremerton, and from the United States Army, which ran McChord Field and Fort Lewis near Tacoma.
Washington State engineer Clark Eldridge produced a preliminary tried-and-true conventional suspension bridge design, and the Washington State Toll Bridge Authority requested $11 million from the federal Public Works Administration (PWA). Preliminary construction plans by the Washington Department of Highways had called for a set of trusses to sit beneath the roadway and stiffen it.
However, "Eastern consulting engineers" — by which Eldridge meant Leon Moisseiff, the noted New York bridge engineer who served as designer and consultant engineer for the Golden Gate Bridge — petitioned the PWA and the Reconstruction Finance Corporation (RFC) to build the bridge for less. Moisseiff and Frederick Lienhard, the latter an engineer with what was then known in New York as the Port Authority, had published a paper that was probably the most important theoretical advance in the bridge engineering field of the decade.
Their theory of elastic distribution extended the deflection theory that was originally devised by the Austrian engineer Josef Melan to horizontal bending under static wind load. They showed that the stiffness of the main cables (via the suspenders) would absorb up to one-half of the static wind pressure pushing a suspended structure laterally. This energy would then be transmitted to the anchorages and towers. Using this theory, Moisseiff argued for stiffening the bridge with a set of plate girders rather than the trusses proposed by the Washington State Toll Bridge Authority. This approach meant a slimmer, more elegant design, and also reduced the construction costs as compared with the Highway Department's design proposed by Eldridge. Moisseiff's design won out, inasmuch as the other proposal was considered to be too expensive. On June 23, 1938, the PWA approved nearly $6 million for the Tacoma Narrows Bridge. Another $1.6 million was to be collected from tolls to cover the estimated total $8 million cost.
Following Moisseiff's design, bridge construction began on November 23, 1938. Construction took only nineteen months, at a cost of $6.4 million, which was financed by the grant from the PWA and a loan from the RFC.
The Tacoma Narrows Bridge, with a main span of 2,800 feet (853 m), was the third-longest suspension bridge in the world at that time, following the George Washington Bridge between New Jersey and New York City, and the Golden Gate Bridge, connecting San Francisco with Marin County to its north.
Because planners expected fairly light traffic volumes, the bridge was designed with only two lanes and a narrow deck, especially in comparison with its length. With only the plate girders providing additional depth, the bridge's roadway section was also shallow.
The decision to use such shallow and narrow girders proved the bridge's undoing. With such minimal girders, the deck of the bridge was insufficiently rigid and was easily moved about by winds; from the start, the bridge became infamous for its movement. A mild to moderate wind could cause alternate halves of the centre span to visibly rise and fall several feet over four- to five-second intervals. This flexibility was experienced by the builders and workmen during construction, which led some of the workers to christen the bridge "Galloping Gertie". The nickname soon stuck, and even the public (when the toll-paid traffic started) felt these motions on the day that the bridge opened on July 1, 1940.
Attempt to control structural vibration
Since the structure experienced considerable vertical oscillations while it was still under construction, several strategies were used to reduce the motion of the bridge. They included:
attachment of tie-down cables to the plate girders, which were anchored to 50-ton concrete blocks on the shore. This measure proved ineffective, as the cables snapped shortly after installation.
addition of a pair of inclined cable stays that connected the main cables to the bridge deck at mid-span. These remained in place until the collapse but were also ineffective at reducing the oscillations.
Finally, the structure was equipped with hydraulic buffers installed between the towers and the floor system of the deck to damp longitudinal motion of the main span. The effectiveness of the hydraulic dampers was nullified, however, because the seals of the units were damaged when the bridge was sand-blasted before being painted.
The Washington State Toll Bridge Authority hired Frederick Burt Farquharson, an engineering professor at the University of Washington, to make wind tunnel tests and recommend solutions to reduce the oscillations of the bridge. Farquharson and his students built a 1:200-scale model of the bridge and a 1:20-scale model of a section of the deck. The first studies concluded on November 2, 1940—five days before the bridge collapse on November 7. He proposed two solutions:
To drill holes in the lateral girders and along the deck so that the airflow could circulate through them (in this way reducing lift forces).
To give a more aerodynamic shape to the transverse section of the deck by adding fairings or deflector vanes along the deck, attached to the girder fascia.
The first option was not favored, because of its irreversible nature. The second option was the chosen one, but it was not carried out, because the bridge collapsed five days after the studies were concluded.
Collapse
On November 7, 1940, at around 9:45 a.m. PST, especially strong winds caused the bridge to sway wildly from side to side. At least two vehicles were on the bridge at the time – a delivery truck driven by Ruby Jacox and Arthur Hagen, employees of Rapid Transfer Company, and a vehicle driven by Leonard Coatsworth, editor at The News Tribune. The truck tipped over due to the swaying, while the car lost control and began to slide from side to side. Jacox, Hagen, and Coatsworth exited their respective vehicles and got off of the bridge on foot. Coatsworth's daughter's dog Tubby was left inside the car.
Coatsworth later described his experience.
Traffic was stopped to prevent additional vehicles from entering the bridge. Howard Clifford, a photographer for the Tacoma News Tribune, walked onto the bridge to try to save Tubby, but was forced to turn back when the span began to break apart in the center. At approximately 11:00 a.m., the bridge collapsed into the strait.
Coatsworth received $814.40 in reimbursement from the Washington State Toll Bridge Authority for his car and its contents, including Tubby the cocker spaniel.
Film of collapse
The collapse was filmed with two cameras by Barney Elliott and by Harbine Monroe, owners of The Camera Shop in Tacoma, including the unsuccessful attempt to rescue the dog. Their footage was subsequently sold to Paramount Pictures, which duplicated it for newsreels in black-and-white and distributed it worldwide to movie theaters. Castle Films also received distribution rights for 8 mm home video. In 1998, The Tacoma Narrows Bridge Collapse was selected for preservation in the United States National Film Registry by the Library of Congress as being culturally, historically, or aesthetically significant. This footage is still shown to engineering, architecture, and physics students as a cautionary tale.
Elliott and Monroe's footage of the construction and collapse was shot on 16 mm Kodachrome film, but most copies in circulation are in black and white because newsreels of the day copied the film onto 35 mm black-and-white stock. There were also film-speed discrepancies between Monroe's and Elliot's footage, with Monroe filming at 24 frames per second and Elliott at 16 frames per second. As a result, most copies in circulation also show the bridge oscillating approximately 50% faster than real time, due to an assumption during conversion that the film was shot at 24 frames per second rather than the actual 16 fps.
Another reel of film emerged in February 2019, taken by Arthur Leach from the Gig Harbor (westward) side of the bridge, and one of the few known images of the collapse from that side. Leach was a civil engineer who served as toll collector for the bridge, and is believed to have been the last person to cross the bridge to the west before its collapse, trying to prevent further crossings from that side as the bridge became unstable. Leach's footage (originally on black-and-white film but then recorded to video cassette by filming the projection) also includes Leach's commentary at the time of the collapse.
Inquiry
Theodore von Kármán, the director of the Guggenheim Aeronautical Laboratory and a world-renowned aerodynamicist, was a member of the board of inquiry into the collapse. He reported that the State of Washington was unable to collect on one of the insurance policies for the bridge because its insurance agent had fraudulently pocketed the insurance premiums. The agent, Hallett R. French, who represented the Merchant's Fire Assurance Company, was charged and tried for grand larceny for withholding the premiums for $800,000 worth of insurance. The bridge was insured by many other policies that covered 80% of the $5.2 million structure's value. Most of these were collected without incident.
On November 28, 1940, the U.S. Navy's Hydrographic Office reported the geographical coordinates and depth at which the remains of the bridge had been located.
Federal Works Agency Commission
A commission formed by the Federal Works Agency studied the collapse of the bridge. The board of engineers responsible for the report were Othmar Ammann, Theodore von Kármán, and Glenn B. Woodruff. Without drawing any definitive conclusions, the commission explored three possible failure causes:
Aerodynamic instability by self-induced vibrations in the structure
Eddy formations that might be periodic
Random effects of turbulence, that is the random fluctuations in velocity of the wind.
Cause of the collapse
The original Tacoma Narrows Bridge was the first to be built with girders of carbon steel anchored in concrete blocks; preceding designs typically had open lattice beam trusses underneath the roadbed. This bridge was the first of its type to employ plate girders (pairs of deep I-beams) to support the roadbed. With the earlier designs, any wind would pass through the truss, but in the new design, the wind would be diverted above and below the structure. Shortly after construction finished at the end of June (opened to traffic on July 1, 1940), it was discovered that the bridge would sway and buckle dangerously in relatively mild windy conditions that are common for the area, and worse during severe winds. This vibration was transverse, one-half of the central span rising while the other lowered. Drivers would see cars approaching from the other direction rise and fall, riding the violent energy wave through the bridge. However, at that time the mass of the bridge was considered sufficient to keep it structurally sound.
The failure of the bridge occurred when a never-before-seen twisting mode developed in moderate winds. This is a so-called torsional vibration mode (which is different from the transversal or longitudinal vibration modes), whereby when the left side of the roadway went down, the right side would rise, and vice versa; i.e., the two halves of the bridge twisted in opposite directions, with the centre line of the road remaining motionless. This vibration was caused by aeroelastic flutter.
Fluttering is a physical phenomenon in which several degrees of freedom of a structure become coupled in an unstable oscillation driven by the wind. Here, unstable means that the forces and effects that cause the oscillation are not checked by forces and effects that limit the oscillation, so it does not self-limit but grows without bound. Eventually, the amplitude of the motion produced by the fluttering increased beyond the strength of a vital part, in this case the suspender cables. As several cables failed, the weight of the deck transferred to the adjacent cables, which became overloaded and broke in turn until almost all of the central deck fell into the water below the span.
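The qualitative difference between ordinary (positive) damping and the self-excited, negatively damped behaviour described above can be illustrated with a toy single-degree-of-freedom oscillator: with a negative effective damping coefficient the oscillation grows instead of decaying. The sketch below uses arbitrary illustrative parameters, not properties of the actual bridge.

```python
# Minimal sketch: a single-degree-of-freedom oscillator m*x'' + c*x' + k*x = 0,
# integrated with a semi-implicit Euler scheme, showing decay for positive
# damping and unbounded growth for negative (self-exciting) damping.
# All parameter values are arbitrary and for illustration only.

def peak_displacement(c: float, m: float = 1.0, k: float = 1.0,
                      x0: float = 1.0, t_end: float = 60.0, dt: float = 1e-3) -> float:
    """Return the largest |x| reached during the simulated interval."""
    x, v, peak = x0, 0.0, abs(x0)
    for _ in range(int(t_end / dt)):
        a = -(c * v + k * x) / m   # acceleration from the equation of motion
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

if __name__ == "__main__":
    print("positive damping (c = +0.05): peak |x| =", round(peak_displacement(+0.05), 2))
    print("negative damping (c = -0.05): peak |x| =", round(peak_displacement(-0.05), 2))
```

In the flutter regime the effective damping of the torsional mode becomes negative, so, as in the second case above, every cycle feeds slightly more energy into the motion than the structure dissipates, and the amplitude keeps growing until a vital part fails.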
Resonance (due to Von Kármán vortex street) hypothesis
The bridge's spectacular destruction is often used as an object lesson in the necessity to consider both aerodynamics and resonance effects in civil and structural engineering. Billah and Scanlan (1991) reported that, in fact, many physics textbooks (for example Resnick et al. and Tipler et al.) wrongly explain that the cause of the failure of the Tacoma Narrows bridge was externally forced mechanical resonance. Resonance is the tendency of a system to oscillate at larger amplitudes at certain frequencies, known as the system's natural frequencies. At these frequencies, even relatively small periodic driving forces can produce large amplitude vibrations, because the system stores energy. For example, a child using a swing realizes that if the pushes are properly timed, the swing can move with a very large amplitude. The driving force, in this case the child pushing the swing, exactly replenishes the energy that the system loses if its frequency equals the natural frequency of the system.
Usually, the approach taken by those physics textbooks is to introduce a single-degree-of-freedom forced oscillator, defined by the second-order differential equation
$m\,\ddot{x}(t) + c\,\dot{x}(t) + k\,x(t) = F_0\sin(\omega t),$
where $m$, $c$ and $k$ stand for the mass, damping coefficient and stiffness of the linear system and $F_0$ and $\omega$ represent the amplitude and the angular frequency of the exciting force. The solution of such an ordinary differential equation as a function of time represents the displacement response of the system (given appropriate initial conditions). In the above system resonance happens when $\omega$ is approximately $\omega_n = \sqrt{k/m}$, i.e., $\omega_n$ is the natural (resonant) frequency of the system. The actual vibration analysis of a more complicated mechanical system — such as an airplane, a building or a bridge — is based on the linearization of the equation of motion for the system, which is a multidimensional version of the equation above. The analysis requires eigenvalue analysis, after which the natural frequencies of the structure are found, together with the so-called fundamental modes of the system, which are a set of independent displacements and/or rotations that specify completely the displaced or deformed position and orientation of the body or system, i.e., the bridge moves as a (linear) combination of those basic deformed positions.
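To make the resonance mechanism described above concrete, the sketch below evaluates the steady-state amplitude of the forced oscillator as the driving frequency is swept past the natural frequency. The numerical values of the mass, damping, stiffness and force amplitude are illustrative placeholders chosen for the demonstration; they are not parameters of the bridge or of any cited textbook.

```python
import numpy as np

# Steady-state response of the forced oscillator m x'' + c x' + k x = F0 sin(w t).
# The values of m, c, k and F0 below are illustrative placeholders, not bridge data.
m, c, k, F0 = 1.0, 0.1, 4.0, 1.0          # mass, damping, stiffness, force amplitude
w_n = np.sqrt(k / m)                       # undamped natural frequency

def steady_state_amplitude(w):
    """Magnitude of the particular solution x_p(t) = X sin(w t - phi)."""
    return F0 / np.sqrt((k - m * w**2)**2 + (c * w)**2)

freqs = np.linspace(0.1, 2.0 * w_n, 200)
amps = steady_state_amplitude(freqs)
print(f"natural frequency   w_n = {w_n:.3f} rad/s")
print(f"largest response at w   = {freqs[np.argmax(amps)]:.3f} rad/s")
```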
Each structure has natural frequencies. For resonance to occur, it is necessary to have also periodicity in the excitation force. The most tempting candidate of the periodicity in the wind force was assumed to be the so-called vortex shedding. This is because bluff (non-streamlined) bodies — like bridge decks — in a fluid stream produce (or "shed") wakes, whose characteristics depend on the size and shape of the body and the properties of the fluid. These wakes are accompanied by alternating low-pressure vortices on the downwind side of the body, the so-called Kármán vortex street or von Kármán vortex street. The body will in consequence try to move toward the low-pressure zone, in an oscillating movement called vortex-induced vibration. Eventually, if the frequency of vortex shedding matches the natural frequency of the structure, the structure will begin to resonate and the structure's movement can become self-sustaining.
The frequency of the vortices in the von Kármán vortex street is called the Strouhal frequency $f_s$, and is given by $f_s = \dfrac{S\,U}{D}.$
Here, $U$ stands for the flow velocity, $D$ is a characteristic length of the bluff body and $S$ is the dimensionless Strouhal number, which depends on the body in question. For Reynolds numbers greater than 1000, the Strouhal number is approximately equal to 0.21. In the case of the Tacoma Narrows, $D$ was approximately 8 ft (2.4 m), the depth of the plate girders, and $S$ was 0.20.
It was thought that the Strouhal frequency was close enough to one of the natural vibration frequencies of the bridge, i.e., $f_s \approx f_n$, to cause resonance and therefore vortex-induced vibration.
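The Strouhal relation above is straightforward to evaluate; the sketch below does so with placeholder inputs. The velocity and characteristic length used here are assumptions chosen only so that the output lands near the 1 Hz order of magnitude discussed in the next paragraph, and should not be read as the measured Tacoma Narrows values.

```python
def strouhal_frequency(S, U, D):
    """Vortex-shedding frequency f_s = S * U / D, in Hz.

    S: dimensionless Strouhal number, U: flow velocity (m/s),
    D: characteristic cross-stream dimension of the bluff body (m).
    """
    return S * U / D

# Placeholder numbers for illustration only; chosen so the result is about 1 Hz,
# the order of magnitude quoted in the surrounding text, not measured bridge data.
print(strouhal_frequency(S=0.20, U=12.0, D=2.4))   # 1.0 Hz for these inputs
```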
In the case of the Tacoma Narrows Bridge, this appears not to have been the cause of the catastrophic damage. According to Farquharson, the wind was steady and the frequency of the destructive mode was 12 cycles/minute (0.2 Hz). This frequency was neither a natural mode of the isolated structure nor the frequency of blunt-body vortex shedding of the bridge at that wind speed, which was approximately 1 Hz. It can be concluded, therefore, that the vortex shedding was not the cause of the bridge collapse. The event can be understood only by considering the coupled aerodynamic and structural system, which requires rigorous mathematical analysis to reveal all the degrees of freedom of the particular structure and the set of design loads imposed.
Vortex-induced vibration is a far more complex process that involves both the external wind-initiated forces and internal self-excited forces that lock on to the motion of the structure. During lock-on, the wind forces drive the structure at or near one of its natural frequencies, but as the amplitude increases this has the effect of changing the local fluid boundary conditions, so that this induces compensating, self-limiting forces, which restrict the motion to relatively benign amplitudes. This is clearly not a linear resonance phenomenon, even if the bluff body has linear behaviour, since the exciting force amplitude is a nonlinear function of the structural response.
Resonance vs. non-resonance explanations
Billah and Scanlan state that Lee Edson in his biography of Theodore von Kármán is a source of misinformation: "The culprit in the Tacoma disaster was the Karman vortex street."
However, the Federal Works Agency report of the investigation, of which von Kármán was part, concluded that
A group of physicists cited "wind-driven amplification of the torsional oscillation" as distinct from resonance:
To some degree the debate is due to the lack of a commonly accepted precise definition of resonance. Billah and Scanlan provide the following definition of resonance "In general, whenever a system capable of oscillation is acted on by a periodic series of impulses having a frequency equal to or nearly equal to one of the natural frequencies of the oscillation of the system, the system is set into oscillation with a relatively large amplitude." They then state later in their paper "Could this be called a resonant phenomenon? It would appear not to contradict the qualitative definition of resonance quoted earlier, if we now identify the source of the periodic impulses as self-induced, the wind supplying the power, and the motion supplying the power-tapping mechanism. If one wishes to argue, however, that it was a case of externally forced linear resonance, the mathematical distinction ... is quite clear, self-exciting systems differing strongly enough from ordinary linear resonant ones."
Link to the Armistice Day blizzard
The weather system that caused the bridge collapse went on to cause the 1940 Armistice Day Blizzard that killed 145 people in the Midwestern United States:
Fate of the collapsed superstructure
Efforts to salvage the bridge began almost immediately after its collapse and continued into May 1943. Two review boards, one appointed by the federal government and one appointed by the state of Washington, concluded that repair of the bridge was impossible, and the entire bridge would have to be dismantled and an entirely new bridge superstructure built. With steel being a valuable commodity because of the involvement of the United States in World War II, steel from the bridge cables and the suspension span was sold as scrap metal to be melted down. The salvage operation cost the state more than was returned from the sale of the material, a net loss of over $350,000.
The cable anchorages, tower pedestals and most of the remaining substructure were relatively undamaged in the collapse, and were reused during construction of the replacement span that opened in 1950. The towers, which supported the main cables and road deck, suffered major damage at their bases from being deflected towards shore as a result of the collapse of the mainspan and the sagging of the sidespans. They were dismantled, and the steel sent to recyclers.
Preservation of the collapsed roadway
The underwater remains of the highway deck of the old suspension bridge act as a large artificial reef, and these are listed on the National Register of Historic Places with reference number 92001068.
The Harbor History Museum has a display in its main gallery regarding the 1940 bridge, its collapse, and the subsequent two bridges.
A lesson for history
Othmar Ammann, a leading bridge designer and member of the Federal Works Agency Commission investigating the collapse of the Tacoma Narrows Bridge, wrote:
Following the incident, engineers took extra caution to incorporate aerodynamics into their designs, and wind tunnel testing of designs was eventually made mandatory.
The Bronx–Whitestone Bridge, which is of similar design to the 1940 Tacoma Narrows Bridge, was reinforced shortly after the collapse. Fourteen-foot-high (4.3 m) steel trusses were installed on both sides of the deck in 1943 to weigh down and stiffen the bridge in an effort to reduce oscillation. In 2003, the stiffening trusses were removed and aerodynamic fiberglass fairings were installed along both sides of the road deck.
A key consequence was that suspension bridges reverted to a deeper and heavier truss design, including the replacement Tacoma Narrows Bridge (1950), until the development in the 1960s of box girder bridges with an airfoil shape such as the Severn Bridge, which gave the necessary stiffness together with reduced torsional forces.
Replacement bridge
Because of shortages in materials and labor as a result of the involvement of the United States in World War II, it took 10 years before a replacement bridge was opened to traffic. The replacement bridge opened to traffic on October 14, 1950, and is longer than the original bridge. It also has more lanes than the original bridge, which had only two traffic lanes plus shoulders on both sides.
Half a century later, the replacement bridge exceeded its traffic capacity, and a second, parallel, suspension bridge was constructed to carry eastbound traffic. The suspension bridge that was completed in 1950 was reconfigured to carry only westbound traffic. The new parallel bridge opened to traffic in July 2007.
See also
Engineering disasters
Humen Pearl River Bridge, suspension bridge that shook violently until weight limits were implemented
List of bridge failures
List of structural failures and collapses
Millennium Bridge, London, for an engineering error
Silver Bridge, a bridge that collapsed in 1967 on the West Virginia–Ohio border
Volgograd Bridge, a bridge in Russia that experienced similar problems with the wind
Kutai Kartanegara Bridge, a suspension bridge that collapsed in Indonesia
References
Further reading
External links
Tacoma Narrows Bridge at the Gig Harbor Peninsula Historical Society & Museum
1940
1940 establishments in Washington (state)
1940 disestablishments in Washington (state)
1940 disasters in the United States
1940 in Washington (state)
Articles containing video clips
Artificial reefs
Bridge disasters caused by engineering error
Bridge disasters in the United States
Bridges completed in 1940
Bridges in Tacoma, Washington
Former toll bridges in Washington (state)
Historic Civil Engineering Landmarks
National Register of Historic Places in Tacoma, Washington
North Tacoma, Washington
November 1940 events in the United States
Road bridges on the National Register of Historic Places in Washington (state)
Steel bridges in the United States
Suspension bridges in Washington (state)
Transport disasters in 1940
Transportation disasters in Washington (state)
Building and structure collapses in the United States | Tacoma Narrows Bridge (1940) | [
"Engineering"
] | 5,572 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
301,398 | https://en.wikipedia.org/wiki/Iolair | Iolair (Gaelic for eagle) is a specialised semi-submersible offshore platform designed for BP to support and service oil platforms in the North Sea and served as an emergency support vessel (ESV) in the Forties Oil Field. Since 2000 it has been working in the Cantarell Field, Mexico as an offshore construction and maintenance service vessel operated by Cotemar S.A de C.V.
Particulars
Iolair is a self-propelled, twin-hull vessel and operates as a dynamically positioned (DP) construction support vessel. The vessel can operate up to a maximum rated water depth and has 350 beds with single and double occupancy.
This unique vessel did not start as an ESV, but rather as the concept of a maintenance and support vessel (MSV). It was proposed for the Forties oil field, jointly owned by British National Oil Corporation and operated by BP Petroleum Development Company Ltd in the North Sea. A particular feature of the design by the Naval Architects was that there was no cross-bracing between the pontoons. Instead, the platform was given extra strength by a box-girder construction and diagonal bracing was arranged from the centre of the platform to the pontoons. This arrangement remained virtually unchanged to the build completion and offered exceptional speed when the vessel was de-ballasted on the surface. The intention was to achieve a rapid response to emergencies, wherever they might be experienced in the North Sea.
As an MSV, the vessel was always conceived to provide accommodation for about 220 persons, saturation diving facilities, a large workshop, craneage, and a helicopter landing area with hangar and re-fueling. All were still featured in the eventual design but had been enhanced with other features and sophistication, much of which was to support the emergency role. The ESV incorporated novel ideas that were years ahead of their time. Indeed, part of the brief was that she should still be modern ten years after entering service.
The saturation diving system was equipped with an advanced launch and recovery system.
History
She was designed by Victor Griffin Carrell and Eric Tim's for BP.
She was built by Scott Lithgow in Port Glasgow, and launched on 6 April 1981. In her early years, she was based in the BP Forties Oil Field.
In 1995, she was sold to U.S. drilling company Reading & Bates. She was to be converted to a workover/well intervention vessel and was stationed West of Shetland. The modifications included removal of some of the top structures, removal of the fire-fighting systems, closing of the dive tube and wave surge tank. However the intended conversion was never carried out and she was heavily involved in the installation of subsea production equipment using Remote Operated Vehicles. She was also heavily involved in the commissioning of the Foinaven and Schiehallion floating production vessels.
In 2000 she left the UK oilfields and went to the Bay of Campeche, Mexico, working in the Cantarell Field. There she carries out construction and platform support work. She was sold in 2001 by Transocean, who had taken over Reading and Bates, then sold to Exeter Marine Ltd. and since 2017 is owned by Iolair Offshore Pte Ltd. and registered in the Marshall Islands, a long way from her original registered port of Dundee in Scotland.
Industry firsts
Heave/swell compensation in the diving tube to enable operation in rougher weather.
A Citadel area to which people could retire and survive if the vessel was engulfed in gas.
A drenching system to cool exterior surfaces if the vessel was close to a burning platform.
The largest capacity and longest range firefighting monitors ever at sea.
Fixed water-cannon on the after columns to cool the underside of production platforms.
Iolair is assured of its place in history by being the subject of a 28p commemorative stamp issued by Post Office Ltd. on 25 May 1983. This was one of a series of three stamps celebrating British Engineering Achievements.
References
1982 ships
Oil platforms
Semi-submersibles
Ships built on the River Clyde
Transocean
Ships of BP | Iolair | [
"Chemistry",
"Engineering"
] | 837 | [
"Oil platforms",
"Petroleum technology",
"Natural gas technology",
"Structural engineering"
] |
301,429 | https://en.wikipedia.org/wiki/Linear%20elasticity | Linear elasticity is a mathematical model as to how solid objects deform and become internally stressed by prescribed loading conditions. It is a simplification of the more general nonlinear theory of elasticity and a branch of continuum mechanics.
The fundamental "linearizing" assumptions of linear elasticity are: infinitesimal strains or "small" deformations (or strains) and linear relationships between the components of stress and strain. In addition linear elasticity is valid only for stress states that do not produce yielding.
These assumptions are reasonable for many engineering materials and engineering design scenarios. Linear elasticity is therefore used extensively in structural analysis and engineering design, often with the aid of finite element analysis.
Mathematical formulation
Equations governing a linear elastic boundary value problem are based on three tensor partial differential equations for the balance of linear momentum and six infinitesimal strain-displacement relations. The system of differential equations is completed by a set of linear algebraic constitutive relations.
Direct tensor form
In direct tensor form that is independent of the choice of coordinate system, these governing equations are:
Cauchy momentum equation, which is an expression of Newton's second law. In convective form it is written as: $\nabla\cdot\boldsymbol{\sigma} + \mathbf{F} = \rho\,\ddot{\mathbf{u}}$
Strain-displacement equations: $\boldsymbol{\varepsilon} = \tfrac{1}{2}\left[\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathrm{T}}\right]$
Constitutive equations. For elastic materials, Hooke's law represents the material behavior and relates the unknown stresses and strains. The general equation for Hooke's law is $\boldsymbol{\sigma} = \mathsf{C}:\boldsymbol{\varepsilon},$
where $\boldsymbol{\sigma}$ is the Cauchy stress tensor, $\boldsymbol{\varepsilon}$ is the infinitesimal strain tensor, $\mathbf{u}$ is the displacement vector, $\mathsf{C}$ is the fourth-order stiffness tensor, $\mathbf{F}$ is the body force per unit volume, $\rho$ is the mass density, $\nabla$ represents the nabla operator, $(\cdot)^{\mathrm{T}}$ represents a transpose, $\ddot{\mathbf{u}}$ represents the second material derivative of the displacement with respect to time, and $\mathsf{A}:\mathsf{B}$ is the inner product of two second-order tensors (summation over repeated indices is implied).
Cartesian coordinate form
Expressed in terms of components with respect to a rectangular Cartesian coordinate system, the governing equations of linear elasticity are:
Equation of motion: $\sigma_{ji,j} + F_i = \rho\,\partial_{tt}u_i$, where the subscript $,j$ is a shorthand for $\partial/\partial x_j$ and $\partial_{tt}$ indicates $\partial^2/\partial t^2$, $\sigma_{ij}$ is the Cauchy stress tensor, $F_i$ is the body force density, $\rho$ is the mass density, and $u_i$ is the displacement. These are 3 independent equations with 6 independent unknowns (stresses). In engineering notation, they are: $\frac{\partial\sigma_x}{\partial x} + \frac{\partial\tau_{yx}}{\partial y} + \frac{\partial\tau_{zx}}{\partial z} + F_x = \rho\frac{\partial^2 u_x}{\partial t^2}$, together with the analogous equations for the $y$ and $z$ directions.
Strain-displacement equations: $\varepsilon_{ij} = \tfrac{1}{2}\left(u_{i,j} + u_{j,i}\right)$, where $\varepsilon_{ij}$ is the strain. These are 6 independent equations relating strains and displacements with 9 independent unknowns (strains and displacements). In engineering notation, they are: $\varepsilon_x = \frac{\partial u_x}{\partial x}$, $\gamma_{xy} = \frac{\partial u_x}{\partial y} + \frac{\partial u_y}{\partial x}$, and the analogous relations for the remaining normal and shear components.
Constitutive equations. The equation for Hooke's law is: $\sigma_{ij} = C_{ijkl}\,\varepsilon_{kl}$, where $C_{ijkl}$ is the stiffness tensor. These are 6 independent equations relating stresses and strains. The requirement of the symmetry of the stress and strain tensors leads to equality of many of the elastic constants, reducing the number of different elements to 21.
An elastostatic boundary value problem for an isotropic-homogeneous media is a system of 15 independent equations and equal number of unknowns (3 equilibrium equations, 6 strain-displacement equations, and 6 constitutive equations). Specifying the boundary conditions, the boundary value problem is completely defined. To solve the system two approaches can be taken according to boundary conditions of the boundary value problem: a displacement formulation, and a stress formulation.
Cylindrical coordinate form
In cylindrical coordinates ($r$, $\theta$, $z$) the equations of motion are
The strain-displacement relations are
and the constitutive relations are the same as in Cartesian coordinates, except that the indices $x$, $y$, $z$ now stand for $r$, $\theta$, $z$, respectively.
Spherical coordinate form
In spherical coordinates ($r$, $\theta$, $\phi$) the equations of motion are
The strain tensor in spherical coordinates is
(An)isotropic (in)homogeneous media
In isotropic media, the stiffness tensor gives the relationship between the stresses (resulting internal stresses) and the strains (resulting deformations). For an isotropic medium, the stiffness tensor has no preferred direction: an applied force will give the same displacements (relative to the direction of the force) no matter the direction in which the force is applied. In the isotropic case, the stiffness tensor may be written: $C_{ijkl} = K\,\delta_{ij}\,\delta_{kl} + \mu\left(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk} - \tfrac{2}{3}\delta_{ij}\delta_{kl}\right),$ where $\delta_{ij}$ is the Kronecker delta, $K$ is the bulk modulus (or incompressibility), and $\mu$ is the shear modulus (or rigidity), two elastic moduli. If the medium is inhomogeneous, the isotropic model is sensible if either the medium is piecewise-constant or weakly inhomogeneous; in the strongly inhomogeneous smooth model, anisotropy has to be accounted for. If the medium is homogeneous, then the elastic moduli will be independent of the position in the medium. The constitutive equation may now be written as: $\sigma_{ij} = K\,\delta_{ij}\,\varepsilon_{kk} + 2\mu\left(\varepsilon_{ij} - \tfrac{1}{3}\delta_{ij}\,\varepsilon_{kk}\right).$
This expression separates the stress into a scalar part on the left which may be associated with a scalar pressure, and a traceless part on the right which may be associated with shear forces. A simpler expression is: $\sigma_{ij} = \lambda\,\delta_{ij}\,\varepsilon_{kk} + 2\mu\,\varepsilon_{ij},$
where $\lambda$ is Lamé's first parameter. Since the constitutive equation is simply a set of linear equations, the strain may be expressed as a function of the stresses as: $\varepsilon_{ij} = \frac{1}{9K}\,\delta_{ij}\,\sigma_{kk} + \frac{1}{2\mu}\left(\sigma_{ij} - \tfrac{1}{3}\delta_{ij}\,\sigma_{kk}\right),$
which is again a scalar part on the left and a traceless shear part on the right. More simply: $\varepsilon_{ij} = \frac{1}{E}\left[(1+\nu)\,\sigma_{ij} - \nu\,\delta_{ij}\,\sigma_{kk}\right],$
where $\nu$ is Poisson's ratio and $E$ is Young's modulus.
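As an illustration of the isotropic constitutive relations above, the following sketch assembles the fourth-order stiffness tensor from a bulk modulus and a shear modulus and verifies that contracting it with a strain tensor reproduces the scalar/deviatoric split of the stress. The moduli and strain components are arbitrary example values, not data for any particular material.

```python
import numpy as np

def isotropic_stiffness(K, mu):
    """Fourth-order isotropic stiffness
       C_ijkl = K d_ij d_kl + mu (d_ik d_jl + d_il d_jk - (2/3) d_ij d_kl)."""
    d = np.eye(3)
    return (K * np.einsum('ij,kl->ijkl', d, d)
            + mu * (np.einsum('ik,jl->ijkl', d, d)
                    + np.einsum('il,jk->ijkl', d, d)
                    - 2.0 / 3.0 * np.einsum('ij,kl->ijkl', d, d)))

# Example moduli in GPa; arbitrary illustrative values.
K, mu = 160.0, 79.0
C = isotropic_stiffness(K, mu)

eps = np.array([[1e-3, 2e-4, 0.0],
                [2e-4, -5e-4, 0.0],
                [0.0, 0.0, 3e-4]])           # a symmetric small-strain tensor
sigma = np.einsum('ijkl,kl->ij', C, eps)      # Hooke's law: sigma = C : eps

# Check against the split  sigma = K tr(eps) I + 2 mu dev(eps)
dev = eps - np.trace(eps) / 3.0 * np.eye(3)
assert np.allclose(sigma, K * np.trace(eps) * np.eye(3) + 2.0 * mu * dev)
print(sigma)
```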
Elastostatics
Elastostatics is the study of linear elasticity under the conditions of equilibrium, in which all forces on the elastic body sum to zero, and the displacements are not a function of time. The equilibrium equations are then $\sigma_{ji,j} + F_i = 0.$
In engineering notation (with tau as shear stress), $\frac{\partial\sigma_x}{\partial x} + \frac{\partial\tau_{yx}}{\partial y} + \frac{\partial\tau_{zx}}{\partial z} + F_x = 0$, together with the analogous equations for the $y$ and $z$ directions.
This section will discuss only the isotropic homogeneous case.
Displacement formulation
In this case, the displacements are prescribed everywhere in the boundary. In this approach, the strains and stresses are eliminated from the formulation, leaving the displacements as the unknowns to be solved for in the governing equations.
First, the strain-displacement equations are substituted into the constitutive equations (Hooke's Law), eliminating the strains as unknowns: $\sigma_{ij} = \lambda\,\delta_{ij}\,u_{k,k} + \mu\left(u_{i,j} + u_{j,i}\right).$
Differentiating (assuming $\lambda$ and $\mu$ are spatially uniform) yields: $\sigma_{ij,j} = \lambda\,u_{k,ki} + \mu\left(u_{i,jj} + u_{j,ij}\right).$
Substituting into the equilibrium equation yields: $\lambda\,u_{k,ki} + \mu\left(u_{i,jj} + u_{j,ij}\right) + F_i = 0$
or (replacing the double (dummy) (=summation) indices k,k by j,j and interchanging the indices ij to ji after the comma, by virtue of Schwarz' theorem) $(\lambda + \mu)\,u_{j,ji} + \mu\,u_{i,jj} + F_i = 0,$
where $\lambda$ and $\mu$ are Lamé parameters.
In this way, the only unknowns left are the displacements, hence the name for this formulation. The governing equations obtained in this manner are called the elastostatic equations, the special case of the steady Navier–Cauchy equations given below.
Once the displacement field has been calculated, the displacements can be replaced into the strain-displacement equations to solve for strains, which later are used in the constitutive equations to solve for stresses.
The biharmonic equation
The elastostatic equation may be written:
Taking the divergence of both sides of the elastostatic equation and assuming the body force has zero divergence (homogeneous in domain) ($F_{i,i} = 0$) we have
Noting that summed indices need not match, and that the partial derivatives commute, the two differential terms are seen to be the same and we have: from which we conclude that:
Taking the Laplacian of both sides of the elastostatic equation, and assuming in addition , we have
From the divergence equation, the first term on the left is zero (Note: again, the summed indices need not match) and we have:
from which we conclude that:
or, in coordinate-free notation, $\nabla^4\mathbf{u} = \mathbf{0}$, which is just the biharmonic equation in $\mathbf{u}$.
Stress formulation
In this case, the surface tractions are prescribed everywhere on the surface boundary. In this approach, the strains and displacements are eliminated leaving the stresses as the unknowns to be solved for in the governing equations. Once the stress field is found, the strains are then found using the constitutive equations.
There are six independent components of the stress tensor which need to be determined, yet in the displacement formulation, there are only three components of the displacement vector which need to be determined. This means that there are some constraints which must be placed upon the stress tensor, to reduce the number of degrees of freedom to three. Using the constitutive equations, these constraints are derived directly from corresponding constraints which must hold for the strain tensor, which also has six independent components. The constraints on the strain tensor are derivable directly from the definition of the strain tensor as a function of the displacement vector field, which means that these constraints introduce no new concepts or information. It is the constraints on the strain tensor that are most easily understood. If the elastic medium is visualized as a set of infinitesimal cubes in the unstrained state, then after the medium is strained, an arbitrary strain tensor must yield a situation in which the distorted cubes still fit together without overlapping. In other words, for a given strain, there must exist a continuous vector field (the displacement) from which that strain tensor can be derived. The constraints on the strain tensor that are required to assure that this is the case were discovered by Saint Venant, and are called the "Saint Venant compatibility equations". These are 81 equations, 6 of which are independent non-trivial equations, which relate the different strain components. These are expressed in index notation as:
In engineering notation, they are:
The strains in this equation are then expressed in terms of the stresses using the constitutive equations, which yields the corresponding constraints on the stress tensor. These constraints on the stress tensor are known as the Beltrami-Michell equations of compatibility:
In the special situation where the body force is homogeneous, the above equations reduce to
A necessary, but insufficient, condition for compatibility under this situation is or .
These constraints, along with the equilibrium equation (or equation of motion for elastodynamics) allow the calculation of the stress tensor field. Once the stress field has been calculated from these equations, the strains can be obtained from the constitutive equations, and the displacement field from the strain-displacement equations.
An alternative solution technique is to express the stress tensor in terms of stress functions which automatically yield a solution to the equilibrium equation. The stress functions then obey a single differential equation which corresponds to the compatibility equations.
Solutions for elastostatic cases
Thomson's solution - point force in an infinite isotropic medium
The most important solution of the Navier–Cauchy or elastostatic equation is for that of a force acting at a point in an infinite isotropic medium. This solution was found by William Thomson (later Lord Kelvin) in 1848 (Thomson 1848). This solution is the analog of Coulomb's law in electrostatics. A derivation is given in Landau & Lifshitz.
With $\nu$ denoting Poisson's ratio, the solution may be expressed as $u_i = G_{ik}\,F_k$, where $\mathbf{F}$ is the force vector being applied at the point, and $G_{ik}$ is a tensor Green's function which may be written in Cartesian coordinates as:
It may be also compactly written as:
and it may be explicitly written as:
In cylindrical coordinates () it may be written as:
where is total distance to point.
It is particularly helpful to write the displacement in cylindrical coordinates for a point force directed along the z-axis. Defining and as unit vectors in the and directions respectively yields:
It can be seen that there is a component of the displacement in the direction of the force, which diminishes, as is the case for the potential in electrostatics, as 1/r for large r. There is also an additional ρ-directed component.
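The explicit tensor expression for the Green's function did not survive the formatting above, so the sketch below uses one commonly quoted form of the Kelvin point-force solution; it is an assumption standing in for the article's own expression and should be checked against a reference such as Landau & Lifshitz before being relied on. The field point, force, and moduli are arbitrary illustrative values.

```python
import numpy as np

def kelvin_displacement(x, F, mu, nu):
    """Displacement u at field point x due to a point force F at the origin of an
    infinite isotropic medium, using the widely used Kelvin form
        u_i = F_k / (16 pi mu (1 - nu) r) * [(3 - 4 nu) d_ik + x_i x_k / r^2].
    This is a sketch of a standard expression, not a verbatim copy of the
    tensor written in the text above."""
    x = np.asarray(x, dtype=float)
    r = np.linalg.norm(x)
    G = ((3.0 - 4.0 * nu) * np.eye(3) + np.outer(x, x) / r**2) \
        / (16.0 * np.pi * mu * (1.0 - nu) * r)
    return G @ np.asarray(F, dtype=float)

# Point force along z, evaluated at an arbitrary field point; values are illustrative.
u = kelvin_displacement(x=[1.0, 0.5, 2.0], F=[0.0, 0.0, 1.0], mu=1.0, nu=0.3)
print(u)   # the displacement decays like 1/r, as noted in the text
```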
Boussinesq–Cerruti solution - point force at the origin of an infinite isotropic half-space
Another useful solution is that of a point force acting on the surface of an infinite half-space. It was derived by Boussinesq for the normal force and Cerruti for the tangential force, and a derivation is given in Landau & Lifshitz. In this case, the solution is again written as a Green's tensor which goes to zero at infinity, and the component of the stress tensor normal to the surface vanishes. This solution may be written in Cartesian coordinates as (with $\nu$ denoting Poisson's ratio):
Other solutions
Point force inside an infinite isotropic half-space.
Point force on a surface of an isotropic half-space.
Contact of two elastic bodies: the Hertz solution (see Matlab code). See also the page on Contact mechanics.
Elastodynamics in terms of displacements
Elastodynamics is the study of elastic waves and involves linear elasticity with variation in time. An elastic wave is a type of mechanical wave that propagates in elastic or viscoelastic materials. The elasticity of the material provides the restoring force of the wave. When they occur in the Earth as the result of an earthquake or other disturbance, elastic waves are usually called seismic waves.
The linear momentum equation is simply the equilibrium equation with an additional inertial term:
If the material is governed by anisotropic Hooke's law (with the stiffness tensor homogeneous throughout the material), one obtains the displacement equation of elastodynamics:
If the material is isotropic and homogeneous, one obtains the (general, or transient) Navier–Cauchy equation:
The elastodynamic wave equation can also be expressed as
where
is the acoustic differential operator, and is Kronecker delta.
In isotropic media, the stiffness tensor has the form
where
is the bulk modulus (or incompressibility), and is the shear modulus (or rigidity), two elastic moduli. If the material is homogeneous (i.e. the stiffness tensor is constant throughout the material), the acoustic operator becomes:
For plane waves, the above differential operator becomes the acoustic algebraic operator:
where
are the eigenvalues of with eigenvectors parallel and orthogonal to the propagation direction , respectively. The associated waves are called longitudinal and shear elastic waves. In the seismological literature, the corresponding plane waves are called P-waves and S-waves (see Seismic wave).
Elastodynamics in terms of stresses
Elimination of displacements and strains from the governing equations leads to the Ignaczak equation of elastodynamics
In the case of local isotropy, this reduces to
The principal characteristics of this formulation include: (1) avoids gradients of compliance but introduces gradients of mass density; (2) it is derivable from a variational principle; (3) it is advantageous for handling traction initial-boundary value problems, (4) allows a tensorial classification of elastic waves, (5) offers a range of applications in elastic wave propagation problems; (6) can be extended to dynamics of classical or micropolar solids with interacting fields of diverse types (thermoelastic, fluid-saturated porous, piezoelectro-elastic...) as well as nonlinear media.
Anisotropic homogeneous media
For anisotropic media, the stiffness tensor is more complicated. The symmetry of the stress tensor means that there are at most 6 different elements of stress. Similarly, there are at most 6 different elements of the strain tensor. Hence the fourth-order stiffness tensor may be written as a matrix (a tensor of second order). Voigt notation is the standard mapping for tensor indices: $11 \to 1,\ 22 \to 2,\ 33 \to 3,\ 23,32 \to 4,\ 13,31 \to 5,\ 12,21 \to 6.$
With this notation, one can write the elasticity matrix for any linearly elastic medium as:
As shown, the matrix $C_{\alpha\beta}$ is symmetric; this is a result of the existence of a strain energy density function $W$ which satisfies $C_{\alpha\beta} = \partial^2 W / \partial\varepsilon_\alpha\,\partial\varepsilon_\beta$. Hence, there are at most 21 different elements of $C_{\alpha\beta}$.
The isotropic special case has 2 independent elements:
The simplest anisotropic case, that of cubic symmetry has 3 independent elements:
The case of transverse isotropy, also called polar anisotropy, (with a single axis (the 3-axis) of symmetry) has 5 independent elements:
When the transverse isotropy is weak (i.e. close to isotropy), an alternative parametrization utilizing Thomsen parameters, is convenient for the formulas for wave speeds.
The case of orthotropy (the symmetry of a brick) has 9 independent elements:
Elastodynamics
The elastodynamic wave equation for anisotropic media can be expressed as
where
is the acoustic differential operator, and is Kronecker delta.
Plane waves and Christoffel equation
A plane wave has the form $\mathbf{u}(\mathbf{x},t) = \mathbf{a}\,f(\mathbf{n}\cdot\mathbf{x} - c\,t),$
with $\mathbf{n}$ of unit length.
It is a solution of the wave equation with zero forcing, if and only if and constitute an eigenvalue/eigenvector pair of the acoustic algebraic operator
This propagation condition (also known as the Christoffel equation) may be written as
where
$\mathbf{n}$ denotes the propagation direction and $c$ is the phase velocity.
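For the isotropic case, the eigenvalues of the acoustic operator reduce to the familiar longitudinal and shear wave speeds, which the short sketch below evaluates. The moduli and density are round numbers of the order of crustal rock, used only for illustration.

```python
import numpy as np

def isotropic_wave_speeds(K, mu, rho):
    """Longitudinal (P) and shear (S) phase velocities for an isotropic,
    homogeneous medium: the two eigenvalue families of the acoustic operator."""
    v_p = np.sqrt((K + 4.0 * mu / 3.0) / rho)   # polarization parallel to n
    v_s = np.sqrt(mu / rho)                      # polarization orthogonal to n
    return v_p, v_s

# Illustrative values roughly of the order of crustal rock, not a specific dataset.
print(isotropic_wave_speeds(K=50e9, mu=30e9, rho=2700.0))
```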
See also
Castigliano's method
Cauchy momentum equation
Clapeyron's theorem
Contact mechanics
Deformation
Elasticity (physics)
GRADELA
Hooke's law
Infinitesimal strain theory
Michell solution
Plasticity (physics)
Signorini problem
Spring system
Stress (mechanics)
Stress functions
References
Elasticity (physics)
Solid mechanics
Sound | Linear elasticity | [
"Physics",
"Materials_science"
] | 3,521 | [
"Solid mechanics",
"Physical phenomena",
"Elasticity (physics)",
"Deformation (mechanics)",
"Mechanics",
"Physical properties"
] |
301,504 | https://en.wikipedia.org/wiki/Functional%20derivative | In the calculus of variations, a field of mathematical analysis, the functional derivative (or variational derivative) relates a change in a functional (a functional in this sense is a function that acts on functions) to a change in a function on which the functional depends.
In the calculus of variations, functionals are usually expressed in terms of an integral of functions, their arguments, and their derivatives. In an integrand of a functional, if a function is varied by adding to it another function that is arbitrarily small, and the resulting integrand is expanded in powers of , the coefficient of in the first order term is called the functional derivative.
For example, consider the functional
$J[f] = \int_a^b L\big(x, f(x), f'(x)\big)\,dx,$
where $f'(x) \equiv df/dx$. If $f$ is varied by adding to it a function $\delta f$, and the resulting integrand $L(x, f + \delta f, f' + \delta f')$ is expanded in powers of $\delta f$, then the change in the value of $J$ to first order in $\delta f$ can be expressed as follows:
$\delta J = \int_a^b \left( \frac{\partial L}{\partial f} - \frac{d}{dx}\frac{\partial L}{\partial f'} \right) \delta f(x)\,dx \;+\; \left[ \frac{\partial L}{\partial f'}\,\delta f \right]_a^b,$
where the variation in the derivative, $\delta f'$, was rewritten as the derivative of the variation $(\delta f)'$, and integration by parts was used in these derivatives.
Definition
In this section, the functional differential (or variation or first variation) Called first variation in , variation or first variation in , variation or differential in and differential in . is defined. Then the functional derivative is defined in terms of the functional differential.
Functional differential
Suppose is a Banach space and is a functional defined on .
The differential of at a point is the linear functional on defined by the condition that, for all ,
where is a real number that depends on in such a way that as . This means that is the Fréchet derivative of at .
However, this notion of functional differential is so strong it may not exist, and in those cases a weaker notion, like the Gateaux derivative, is preferred. In many practical cases, the functional differential is defined as the directional derivative
$\delta F[\rho;\phi] = \lim_{\varepsilon\to 0}\frac{F[\rho + \varepsilon\phi] - F[\rho]}{\varepsilon} = \left.\frac{d}{d\varepsilon} F[\rho + \varepsilon\phi]\right|_{\varepsilon=0}.$
Note that this notion of the functional differential can even be defined without a norm.
Functional derivative
In many applications, the domain of the functional is a space of differentiable functions defined on some space and is of the form
$F[\rho] = \int f\big(\mathbf{r}, \rho(\mathbf{r}), \nabla\rho(\mathbf{r})\big)\,d\mathbf{r}$
for some function $f$ that may depend on $\mathbf{r}$, the value $\rho(\mathbf{r})$ and the derivative $\nabla\rho(\mathbf{r})$.
If this is the case and, moreover, $\delta F[\rho;\phi]$ can be written as the integral of $\phi$ times another function (denoted $\delta F/\delta\rho$), $\delta F[\rho;\phi] = \int \frac{\delta F}{\delta\rho}(\mathbf{r})\,\phi(\mathbf{r})\,d\mathbf{r},$
then this function $\delta F/\delta\rho$ is called the functional derivative of $F$ at $\rho$. If $F$ is restricted to only certain functions (for example, if there are some boundary conditions imposed) then $\phi$ is restricted to functions such that $\rho + \phi$ continues to satisfy these conditions.
Heuristically, $\phi$ is the change in $\rho$, so we 'formally' have $\phi = \delta\rho$, and then this is similar in form to the total differential of a function $F(\rho_1, \rho_2, \ldots, \rho_n)$,
$dF = \sum_{i=1}^{n} \frac{\partial F}{\partial \rho_i}\,d\rho_i,$ where $\rho_1, \rho_2, \ldots, \rho_n$ are independent variables.
Comparing the last two equations, the functional derivative $\delta F/\delta\rho(\mathbf{r})$ has a role similar to that of the partial derivative $\partial F/\partial\rho_i$, where the variable of integration $\mathbf{r}$ is like a continuous version of the summation index $i$. One thinks of $\delta F/\delta\rho$ as the gradient of $F$ at the point $\rho$, so the value $\delta F/\delta\rho(\mathbf{r})$ measures how much the functional $F$ will change if the function $\rho$ is changed at the point $\mathbf{r}$. Hence the formula
$\int \frac{\delta F}{\delta\rho}(\mathbf{r})\,\phi(\mathbf{r})\,d\mathbf{r}$ is regarded as the directional derivative at point $\rho$ in the direction of $\phi$. This is analogous to vector calculus, where the inner product of a vector $\mathbf{v}$ with the gradient gives the directional derivative in the direction of $\mathbf{v}$.
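The analogy between the functional derivative and an ordinary gradient can be checked numerically on a discretized functional. The sketch below uses the simple example $F[\rho] = \int \rho(x)^2\,dx$, whose functional derivative is $2\rho(x)$; the grid, the choice of $\rho$, and the test function $\phi$ are arbitrary assumptions made for the demonstration.

```python
import numpy as np

# Discretize rho on a grid.  The functional F[rho] = ∫ rho(x)^2 dx has
# functional derivative dF/d rho(x) = 2 rho(x).  On a grid, the analogy in the
# text reads: dF ≈ sum_i (dF/d rho(x_i)) * phi(x_i) * dx.
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
rho = np.sin(np.pi * x)
phi = np.exp(-(x - 0.5)**2 / 0.02)        # an arbitrary test (direction) function

F = lambda r: np.sum(r**2) * dx
eps = 1e-6
directional = (F(rho + eps * phi) - F(rho)) / eps     # d/d eps of F[rho + eps phi]
from_grad = np.sum(2.0 * rho * phi) * dx              # ∫ (dF/d rho) phi dx

print(directional, from_grad)   # the two numbers agree to O(eps)
```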
Properties
Like the derivative of a function, the functional derivative satisfies the following properties, where and are functionals:
Linearity: $\frac{\delta(\lambda F + \mu G)[\rho]}{\delta\rho(x)} = \lambda\,\frac{\delta F[\rho]}{\delta\rho(x)} + \mu\,\frac{\delta G[\rho]}{\delta\rho(x)},$ where $\lambda$, $\mu$ are constants.
Product rule: $\frac{\delta(FG)[\rho]}{\delta\rho(x)} = \frac{\delta F[\rho]}{\delta\rho(x)}\,G[\rho] + F[\rho]\,\frac{\delta G[\rho]}{\delta\rho(x)}.$
Chain rules:
If is a functional and another functional, then
If is an ordinary differentiable function (local functional) , then this reduces to
Determining functional derivatives
A formula can be given to determine functional derivatives for a common class of functionals, namely those that can be written as the integral of a function and its derivatives. This is a generalization of the Euler–Lagrange equation: indeed, the functional derivative was introduced in physics within the derivation of the Lagrange equation of the second kind from the principle of least action in Lagrangian mechanics (18th century). The first three examples below are taken from density functional theory (20th century), the fourth from statistical mechanics (19th century).
Formula
Given a functional
and a function that vanishes on the boundary of the region of integration, from a previous section Definition,
The second line is obtained using the total derivative, where is a derivative of a scalar with respect to a vector.
The third line was obtained by use of a product rule for divergence. The fourth line was obtained using the divergence theorem and the condition that on the boundary of the region of integration. Since is also an arbitrary function, applying the fundamental lemma of calculus of variations to the last line, the functional derivative is
where and . This formula is for the case of the functional form given by at the beginning of this section. For other functional forms, the definition of the functional derivative can be used as the starting point for its determination. (See the example Coulomb potential energy functional.)
The above equation for the functional derivative can be generalized to the case that includes higher dimensions and higher order derivatives. The functional would be,
where the vector , and is a tensor whose components are partial derivative operators of order ,
An analogous application of the definition of the functional derivative yields
In the last two equations, the components of the tensor are partial derivatives of with respect to partial derivatives of ρ,
where , and the tensor scalar product is,
Examples
Thomas–Fermi kinetic energy functional
The Thomas–Fermi model of 1927 used a kinetic energy functional for a noninteracting uniform electron gas in a first attempt of density-functional theory of electronic structure: $T_{\mathrm{TF}}[\rho] = C_{\mathrm{F}} \int \rho^{5/3}(\mathbf{r})\,d^3r.$
Since the integrand of $T_{\mathrm{TF}}[\rho]$ does not involve derivatives of $\rho(\mathbf{r})$, the functional derivative of $T_{\mathrm{TF}}[\rho]$ is, $\frac{\delta T_{\mathrm{TF}}}{\delta\rho(\mathbf{r})} = \frac{5}{3}\,C_{\mathrm{F}}\,\rho^{2/3}(\mathbf{r}).$
Coulomb potential energy functional
For the electron-nucleus potential, Thomas and Fermi employed the Coulomb potential energy functional
Applying the definition of functional derivative,
So,
For the classical part of the electron-electron interaction, Thomas and Fermi employed the Coulomb potential energy functional
From the definition of the functional derivative,
The first and second terms on the right hand side of the last equation are equal, since $\mathbf{r}$ and $\mathbf{r}'$ in the second term can be interchanged without changing the value of the integral. Therefore,
and the functional derivative of the electron-electron Coulomb potential energy functional [ρ] is,
The second functional derivative is
Weizsäcker kinetic energy functional
In 1935 von Weizsäcker proposed to add a gradient correction to the Thomas-Fermi kinetic energy functional to make it better suit a molecular electron cloud:
where
Using a previously derived formula for the functional derivative,
and the result is,
Entropy
The entropy of a discrete random variable is a functional of the probability mass function.
Thus,
Thus,
Exponential
Let
Using the delta function as a test function,
Thus,
This is particularly useful in calculating the correlation functions from the partition function in quantum field theory.
Functional derivative of a function
A function can be written in the form of an integral like a functional. For example, $\rho(\mathbf{r}) = \int \rho(\mathbf{r}')\,\delta(\mathbf{r} - \mathbf{r}')\,d\mathbf{r}'.$
Since the integrand does not depend on derivatives of $\rho$, the functional derivative of $\rho(\mathbf{r})$ is, $\frac{\delta\rho(\mathbf{r})}{\delta\rho(\mathbf{r}')} = \delta(\mathbf{r} - \mathbf{r}').$
Functional derivative of iterated function
The functional derivative of the iterated function is given by:
and
In general:
Putting in gives:
Using the delta function as a test function
In physics, it is common to use the Dirac delta function in place of a generic test function , for yielding the functional derivative at the point (this is a point of the whole functional derivative as a partial derivative is a component of the gradient):
This works in cases when formally can be expanded as a series (or at least up to first order) in . The formula is however not mathematically rigorous, since is usually not even defined.
The definition given in a previous section is based on a relationship that holds for all test functions , so one might think that it should hold also when is chosen to be a specific function such as the delta function. However, the latter is not a valid test function (it is not even a proper function).
In the definition, the functional derivative describes how the functional changes as a result of a small change in the entire function . The particular form of the change in is not specified, but it should stretch over the whole interval on which is defined. Employing the particular form of the perturbation given by the delta function has the meaning that is varied only in the point . Except for this point, there is no variation in .
Notes
Footnotes
References
.
.
.
.
.
External links
Calculus of variations
Differential calculus
Differential operators
Topological vector spaces
Variational analysis | Functional derivative | [
"Mathematics"
] | 1,693 | [
"Mathematical analysis",
"Vector spaces",
"Calculus",
"Space (mathematics)",
"Topological vector spaces",
"Differential calculus",
"Differential operators"
] |
301,521 | https://en.wikipedia.org/wiki/Functional%20integration | Functional integration is a collection of results in mathematics and physics where the domain of an integral is no longer a region of space, but a space of functions. Functional integrals arise in probability, in the study of partial differential equations, and in the path integral approach to the quantum mechanics of particles and fields.
In an ordinary integral (in the sense of Lebesgue integration) there is a function to be integrated (the integrand) and a region of space over which to integrate the function (the domain of integration). The process of integration consists of adding up the values of the integrand for each point of the domain of integration. Making this procedure rigorous requires a limiting procedure, where the domain of integration is divided into smaller and smaller regions. For each small region, the value of the integrand cannot vary much, so it may be replaced by a single value. In a functional integral the domain of integration is a space of functions. For each function, the integrand returns a value to add up. Making this procedure rigorous poses challenges that continue to be topics of current research.
Functional integration was developed by Percy John Daniell in an article of 1919 and Norbert Wiener in a series of studies culminating in his articles of 1921 on Brownian motion. They developed a rigorous method (now known as the Wiener measure) for assigning a probability to a particle's random path. Richard Feynman developed another functional integral, the path integral, useful for computing the quantum properties of systems. In Feynman's path integral, the classical notion of a unique trajectory for a particle is replaced by an infinite sum of classical paths, each weighted differently according to its classical properties.
Functional integration is central to quantization techniques in theoretical physics. The algebraic properties of functional integrals are used to develop series used to calculate properties in quantum electrodynamics and the standard model of particle physics.
Functional integration
Whereas standard Riemann integration sums a function f(x) over a continuous range of values of x, functional integration sums a functional G[f], which can be thought of as a "function of a function" over a continuous range (or space) of functions f. Most functional integrals cannot be evaluated exactly but must be evaluated using perturbation methods. The formal definition of a functional integral is
However, in most cases the functions f(x) can be written in terms of an infinite series of orthogonal functions such as , and then the definition becomes
which is slightly more understandable. The integral is shown to be a functional integral with a capital $\mathcal{D}$. Sometimes the argument is written in square brackets $\mathcal{D}[f]$, to indicate the functional dependence of the function in the functional integration measure.
Examples
Most functional integrals are actually infinite, but often the limit of the quotient of two related functional integrals can still be finite. The functional integrals that can be evaluated exactly usually start with the following Gaussian integral:
By functionally differentiating this with respect to J(x) and then setting J to 0 this becomes an exponential multiplied by a monomial in f. To see this, let's use the following notation:
With this notation the first equation can be written as:
Now, taking functional derivatives to the definition of and then evaluating in , one obtains:
which is the result anticipated. More over, by using the first equation one arrives to the useful result:
Putting these results together and backing to the original notation we have:
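The Gaussian moment identities sketched above have a finite-dimensional analogue that is easy to verify numerically: with weight $\exp(-\tfrac{1}{2}\,f\cdot K f)$ over a vector $f$, the two-point moment is $\langle f_a f_b\rangle = (K^{-1})_{ab}$. The sketch below checks this by Monte Carlo for a small random positive-definite matrix; the dimension, seed, and sample count are arbitrary choices, and this stands in for, rather than reproduces, the continuum functional integral.

```python
import numpy as np

# Finite-dimensional stand-in for the Gaussian functional integral: with weight
# exp(-1/2 f.K.f), the two-point moment is <f_a f_b> = (K^{-1})_{ab}.
rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)               # a symmetric positive-definite "kernel"

cov = np.linalg.inv(K)                    # exact answer
samples = rng.multivariate_normal(np.zeros(n), cov, size=200_000)
mc = samples.T @ samples / len(samples)   # Monte Carlo estimate of <f_a f_b>

print(np.max(np.abs(mc - cov)))           # small; shrinks as the sample count grows
```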
Another useful integral is the functional delta function:
which is useful to specify constraints. Functional integrals can also be done over Grassmann-valued functions $\psi(x)$, where $\psi(x)\psi(y) = -\psi(y)\psi(x)$, which is useful in quantum electrodynamics for calculations involving fermions.
Approaches to path integrals
Functional integrals where the space of integration consists of paths (ν = 1) can be defined in many different ways. The definitions fall in two different classes: the constructions derived from Wiener's theory yield an integral based on a measure, whereas the constructions following Feynman's path integral do not. Even within these two broad divisions, the integrals are not identical, that is, they are defined differently for different classes of functions.
The Wiener integral
In the Wiener integral, a probability is assigned to a class of Brownian motion paths. The class consists of the paths w that are known to go through a small region of space at a given time. The passage through different regions of space is assumed independent of each other, and the distance between any two points of the Brownian path is assumed to be Gaussian-distributed with a variance that depends on the time t and on a diffusion constant D: $\operatorname{Var}\big[w(t) - w(s)\big] = 2D\,(t - s), \qquad t \ge s.$
The probability for the class of paths can be found by multiplying the probabilities of starting in one region and then being at the next. The Wiener measure can be developed by considering the limit of many small regions.
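The Wiener construction described above can be mimicked directly by sampling paths with independent Gaussian increments. The sketch below assumes the convention in which the increment variance over a step $\Delta t$ is $2D\,\Delta t$ (some treatments absorb the factor of 2 into $D$); the path count, step size, and diffusion constant are illustrative choices.

```python
import numpy as np

def brownian_paths(n_paths, n_steps, dt, D, rng=None):
    """Sample paths w(t) built from independent Gaussian increments with
    variance 2*D*dt per step, one common convention for the diffusion constant
    (other texts use D*dt)."""
    rng = rng or np.random.default_rng(0)
    steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_paths, n_steps))
    return np.concatenate([np.zeros((n_paths, 1)),
                           np.cumsum(steps, axis=1)], axis=1)

paths = brownian_paths(n_paths=10_000, n_steps=100, dt=0.01, D=1.0)
print(paths[:, -1].var())   # close to 2 * D * T = 2.0 for total time T = 1.0
```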
Itō and Stratonovich calculus
The Feynman integral
Trotter formula, or Lie product formula.
The Kac idea of Wick rotations.
Using x-dot-dot-squared or i S[x] + x-dot-squared.
The Cartier DeWitt–Morette relies on integrators rather than measures
The Lévy integral
Fractional quantum mechanics
Fractional Schrödinger equation
Lévy process
Fractional statistical mechanics
See also
Feynman path integral
Partition function (quantum field theory)
Saddle point approximation
References
Further reading
Jean Zinn-Justin (2009), Scholarpedia 4(2):8674.
Kleinert, Hagen, Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 4th edition, World Scientific (Singapore, 2004); Paperback (also available online: PDF-files)
O. G. Smolyanov, E. T. Shavgulidze. Continual integrals. Moscow, Moscow State University Press, 1990. (in Russian). http://lib.mexmat.ru/books/5132
Victor Popov, Functional Integrals in Quantum Field Theory and Statistical Physics, Springer 1983
Sergio Albeverio, Sonia Mazzucchi, A unified approach to infinite-dimensional integration, Reviews in Mathematical Physics, 28, 1650005 (2016)
Integral calculus
Functional analysis
Mathematical physics
Quantum mechanics
Quantum field theory | Functional integration | [
"Physics",
"Mathematics"
] | 1,273 | [
"Quantum field theory",
"Functions and mappings",
"Functional analysis",
"Calculus",
"Applied mathematics",
"Theoretical physics",
"Mathematical objects",
"Quantum mechanics",
"Mathematical relations",
"Mathematical physics",
"Integral calculus"
] |
301,647 | https://en.wikipedia.org/wiki/Barrier%20island | Barrier islands are a coastal landform, a type of dune system and sand island, where an area of sand has been formed by wave and tidal action parallel to the mainland coast. They usually occur in chains, consisting of anything from a few islands to more than a dozen. They are subject to change during storms and other action, but absorb energy and protect the coastlines and create areas of protected waters where wetlands may flourish. A barrier chain may extend for hundreds of kilometers, with islands periodically separated by tidal inlets. The largest barrier island in the world is Padre Island of Texas, United States, at long. Sometimes an important inlet may close permanently, transforming an island into a peninsula, thus creating a barrier peninsula, often including a beach, barrier beach.
Though many are long and narrow, the length and width of barriers and overall morphology of barrier coasts are related to parameters including tidal range, wave energy, sediment supply, sea-level trends, and basement controls. The amount of vegetation on the barrier has a large impact on the height and evolution of the island.
Chains of barrier islands can be found along approximately 13-15% of the world's coastlines. They display different settings, suggesting that they can form and be maintained in a variety of environments. Numerous theories have been given to explain their formation.
A human-made offshore structure constructed parallel to the shore is called a breakwater. In terms of coastal morphodynamics, it acts similarly to a naturally occurring barrier island by dissipating and reducing the energy of the waves and currents striking the coast. Hence, it is an important aspect of coastal engineering.
Constituent parts
Upper shoreface
The shoreface is the part of the barrier where the ocean meets the shore of the island. The barrier island body itself separates the shoreface from the backshore and lagoon/tidal flat area. Characteristics common to the upper shoreface are fine sands with mud and possibly silt. Further out into the ocean the sediment becomes finer. The effect of waves at this point is weak because of the depth. Bioturbation is common and many fossils can be found in upper shoreface deposits in the geologic record.
Middle shoreface
The middle shoreface is located in the upper shoreface. The middle shoreface is strongly influenced by wave action because of its depth. Closer to shore the sand is medium-grained, with shell pieces common. Since wave action is heavier, bioturbation is not likely.
Lower shoreface
The lower shoreface is constantly affected by wave action. This results in development of herringbone sedimentary structures because of the constant differing flow of waves. The sand is coarser.
Foreshore
The foreshore is the area on land between high and low tide. Like the upper shoreface, it is constantly affected by wave action. Cross-bedding and lamination are present, and coarser sands are present because of the high energy produced by the crashing of the waves. The sand is also very well sorted.
Backshore
The backshore is always above the highest water level point. The berm, which marks the boundary between the foreshore and backshore, is also found here. Wind is the important factor here, not water. During strong storms, high waves and wind can deliver sediment to, and erode sediment from, the backshore.
Dunes
Coastal dunes, created by wind, are typical of a barrier island. They are located at the top of the backshore. The dunes will display characteristics of typical aeolian wind-blown dunes. The difference is that dunes on a barrier island typically contain coastal vegetation roots and marine bioturbation.
Lagoon and tidal flats
The lagoon and tidal flat area is located behind the dune and backshore area. Here the water is still, which allows fine silts, sands, and mud to settle out. Lagoons can become host to an anaerobic environment. This will allow high amounts of organic-rich mud to form. Vegetation is also common.
Location
Barrier islands can be observed on every continent on Earth, except Antarctica. They occur primarily in areas that are tectonically stable, such as "trailing edge coasts" facing (moving away from) ocean ridges formed by divergent boundaries of tectonic plates, and around smaller marine basins such as the Mediterranean Sea and the Gulf of Mexico. Areas with relatively small tides and ample sand supply favor barrier island formation.
Australia
Moreton Bay, on the east coast of Australia and directly east of Brisbane, is sheltered from the Pacific Ocean by a chain of very large barrier islands. Running north to south they are Bribie Island, Moreton Island, North Stradbroke Island and South Stradbroke Island (the last two used to be a single island until a storm created a channel between them in 1896). North Stradbroke Island is the second largest sand island in the world and Moreton Island is the third largest.
Fraser Island, another barrier island lying 200 km north of Moreton Bay on the same coastline, is the largest sand island in the world.
United States
Barrier islands are found most prominently on the United States' East and Gulf Coasts, where every state, from Maine to Florida (East Coast) and from Florida to Texas (Gulf coast), features at least part of a barrier island. Many have large numbers of barrier islands; Florida, for instance, had 29 (in 1997) along the west (Gulf) coast of the Florida peninsula, plus about 20 others on the east coast and several barrier islands and spits along the panhandle coast. Padre Island, in Texas, is the world's longest barrier island; other well-known islands on the Gulf Coast include Galveston Island in Texas and Sanibel and Captiva Islands in Florida. Those on the East Coast include Miami Beach and Palm Beach in Florida; Hatteras Island in North Carolina; Assateague Island in Virginia and Maryland; Absecon Island in New Jersey, where Atlantic City is located; and Jones Beach Island and Fire Island, both off Long Island in New York. No barrier islands are found on the Pacific Coast of the United States due to the rocky shore and short continental shelf, but barrier peninsulas can be found. Barrier islands can also be seen on Alaska's Arctic coast.
Canada
Barrier islands can also be found in Maritime Canada, and other places along the coast. A good example is found at Miramichi Bay, New Brunswick, where Portage Island as well as Fox Island and Hay Island protect the inner bay from storms in the Gulf of Saint Lawrence.
Mexico
Mexico's Gulf of Mexico coast has numerous barrier islands and barrier peninsulas.
New Zealand
Barrier islands are more prevalent in the north of both of New Zealand's main islands. Notable barrier islands in New Zealand include Matakana Island, which guards the entrance to Tauranga Harbour, and Rabbit Island, at the southern end of Tasman Bay. See also Nelson Harbour's Boulder Bank, below.
India
Vypin Island, on the southwest coast of India in Kerala, is 27 km long. It is also one of the most densely populated islands in the world.
Indonesia
The Indonesian Barrier Islands lie off the western coast of Sumatra. From north to south along this coast they include Simeulue, the Banyak Islands (chiefly Tuangku and Bangkaru), Nias, the Batu Islands (notably Pini, Tanahmasa and Tanahbala), the Mentawai Islands (mainly Siberut, Sipura, North Pagai and South Pagai Islands) and Enggano Island.
Europe
Barrier islands can be observed in the Baltic Sea from Poland to Lithuania as well as distinctly in the Wadden Islands, which stretch from the Netherlands to Denmark. Lido di Venezia and Pellestrina are notable barrier islands of the Lagoon of Venice which have for centuries protected the city of Venice in Italy. Chesil Beach on the south coast of England developed as a barrier beach. Barrier beaches are also found in the north of the Azov and Black seas.
Processes
Migration and overwash
Water levels may be higher than the island during storm events. This situation can lead to overwash, which brings sand from the front of the island to the top and/or landward side of the island. This process leads to the evolution and migration of the barrier island.
Critical width concept
Barrier islands are often formed to have a certain width. The term "critical width concept" has been discussed with reference to barrier islands, overwash, and washover deposits since the 1970s. The concept basically states that overwash processes were effective in migration of the barrier only where the barrier width is less than a critical value. The island did not narrow below these values because overwash was effective at transporting sediment over the barrier island, thereby keeping pace with the rate of ocean shoreline recession. Sections of the island with greater widths experienced washover deposits that did not reach the bayshore, and the island narrowed by ocean shoreline recession until it reached the critical width. The only process that widened the barrier beyond the critical width was breaching, formation of a partially subaerial flood shoal, and subsequent inlet closure.
Critical barrier width can be defined as the smallest cross-shore dimension that minimizes net loss of sediment from the barrier island over the defined project lifetime. The magnitude of critical width is related to sources and sinks of sand in the system, such as the volume stored in the dunes and the net long-shore and cross-shore sand transport, as well as the island elevation. The concept of critical width is important for large-scale barrier island restoration, in which islands are reconstructed to optimum height, width, and length for providing protection for estuaries, bays, marshes and mainland beaches.
Formation theories
Scientists have proposed numerous explanations for the formation of barrier islands for more than 150 years. There are three major theories: offshore bar, spit accretion, and submergence. No single theory can explain the development of all barriers, which are distributed extensively along the world's coastlines. Scientists accept the idea that barrier islands, including other barrier types, can form by a number of different mechanisms.
There appear to be some general requirements for formation. Barrier island systems develop most easily on wave-dominated coasts with a small to moderate tidal range. Coasts are classified into three groups based on tidal range: microtidal, 0–2 meter tidal range; mesotidal, 2–4 meter tidal range; and macrotidal, >4 meter tidal range. Barrier islands tend to form primarily along microtidal coasts, where they tend to be well developed and nearly continuous. They form less frequently along mesotidal coasts, where they are typically short with tidal inlets common. Barrier islands are very rare along macrotidal coasts. Along with a small tidal range and a wave-dominated coast, there must be a relatively low-gradient shelf; otherwise, sand would not accumulate into a sandbar but would instead be dispersed along the shore. An ample sediment supply is also a requirement for barrier island formation; this often includes fluvial and glacial deposits. The last major requirement for barrier island formation is a stable sea level. It is especially important for sea level to remain relatively unchanged during barrier island formation and growth. If sea level changes are too drastic, there will be insufficient time for wave action to accumulate sand into a dune, which would eventually become a barrier island through aggradation. The formation of barrier islands requires a constant sea level so that waves can concentrate the sand into one location.
Offshore bar theory
In 1845 the Frenchman Elie de Beaumont published an account of barrier formation. He believed that waves moving into shallow water churned up sand, which was deposited in the form of a submarine bar when the waves broke and lost much of their energy. As the bars developed vertically, they gradually rose above sea level, forming barrier islands.
Several barrier islands have been observed forming by this process along the Gulf coast of the Florida peninsula, including: the North and South Anclote Bars associated with Anclote Key, Three Rooker Island, Shell Key, and South Bunces Key.
Spit accretion theory
American geologist Grove Karl Gilbert first argued in 1885 that the barrier sediments came from longshore sources. He proposed that sediment moving in the breaker zone through agitation by waves in longshore drift would construct spits extending from headlands parallel to the coast. The subsequent breaching of spits by storm waves would form barrier islands.
Submergence theory
William John McGee reasoned in 1890 that the East and Gulf coasts of the United States were undergoing submergence, as evidenced by the many drowned river valleys that occur along these coasts, including Raritan, Delaware and Chesapeake bays. He believed that during submergence, coastal ridges were separated from the mainland, and lagoons formed behind the ridges. He used the Mississippi–Alabama barrier islands (consisting of Cat, Ship, Horn, Petit Bois and Dauphin Islands) as an example where coastal submergence formed barrier islands. His interpretation was later shown to be incorrect when the ages of the coastal stratigraphy and sediment were more accurately determined.
Along the coast of Louisiana, former lobes of the Mississippi River delta have been reworked by wave action, forming beach ridge complexes. Prolonged sinking of the marshes behind the barriers has converted these former vegetated wetlands to open-water areas. In a period of 125 years, from 1853 to 1978, two small semi-protected bays behind the barrier developed into the large water body of Lake Pelto, leading to the Isles Dernieres' detachment from the mainland.
Boulder Bank
An unusual natural structure in New Zealand may give clues to the formation processes of barrier islands. The Boulder Bank, at the entrance to Nelson Haven at the northern end of the South Island, is a unique 13 km-long stretch of rocky substrate a few metres in width. It is not strictly a barrier island, as it is linked to the mainland at one end. The Boulder Bank is composed of granodiorite from Mackay Bluff, which lies close to the point where the bank joins the mainland. It is still debated what process or processes have resulted in this odd structure, though longshore drift is the most accepted hypothesis. Studies have been conducted since 1892 to determine the speed of boulder movement. Rates of the top-course gravel movement have been estimated at 7.5 metres a year.
Types
Richard Davis distinguishes two types of barrier islands, wave-dominated and mixed-energy.
Wave-dominated
Wave-dominated barrier islands are long, low, and narrow, and usually are bounded by unstable inlets at either end. Longshore currents caused by waves approaching the island at an angle carry sediment alongshore, extending the island. Longshore currents, and the resultant extension, are usually in one direction, but in some circumstances the currents and extensions can occur towards both ends of the island (as occurs on Anclote Key, Three Rooker Bar, and Sand Key, on the Gulf Coast of Florida). Washover fans on the lagoon side of barriers, where storm surges have over-topped the island, are common, especially on younger barrier islands. Wave-dominated barriers are also susceptible to being breached by storms, creating new inlets. Such inlets may close as sediment is carried into them by longshore currents, but may become permanent if the tidal prism (volume and force of tidal flow) is large enough. Older barrier islands that have accumulated dunes are less subject to washovers and the opening of inlets. Wave-dominated islands require an abundant supply of sediment to grow and develop dunes. If a barrier island does not receive enough sediment to grow, repeated washovers from storms will migrate the island towards the mainland.
Mixed-energy
Wave-dominated barrier islands may eventually develop into mixed-energy barrier islands. Mixed-energy barrier islands are molded by both wave energy and tidal flux. The flow of a tidal prism moves sand. Sand accumulates on both the inshore and offshore sides of an inlet, forming a flood delta or shoal on the bay or lagoon side of the inlet (from sand carried in on a flood tide), and an ebb delta or shoal on the open-water side (from sand carried out by an ebb tide). Large tidal prisms tend to produce large ebb shoals, which may rise enough to be exposed at low tide. Ebb shoals refract waves approaching the inlet, locally reversing the longshore current moving sand along the coast. This can modify the ebb shoal into swash bars, which migrate onto the end of the island upcurrent from the inlet, adding to the barrier's width near the inlet (creating a "drumstick" barrier island). This process captures sand that is carried by the longshore current, preventing it from reaching the downcurrent side of the inlet, starving that island.
Many of the Sea Islands in the U.S. state of Georgia are relatively wide compared to their shore-parallel length. Siesta Key, Florida has a characteristic drumstick shape, with a wide portion at the northern end near the mouth of Phillipi Creek.
Ecological importance
Barrier islands are critically important in mitigating ocean swells and other storm events for the water systems on the mainland side of the barrier island, as well as protecting the coastline. This effectively creates a unique environment of relatively low energy, brackish water. Multiple wetland systems such as lagoons, estuaries, and/or marshes can result from such conditions depending on the surroundings. They are typically rich habitats for a variety of flora and fauna. Without barrier islands, these wetlands could not exist; they would be destroyed by daily ocean waves and tides as well as ocean storm events. One of the most prominent examples is the Louisiana barrier islands.
See also
North Frisian Barrier Island
Outer Banks
Virginia Barrier Islands
New York Barrier Islands
Texas barrier islands
Sea Islands
Long Beach Island
Bald Head Island
Notes
References
Sources
External links
Physical oceanography
Coastal geography
Hydrology
Coastal and oceanic landforms
Oceanographical terminology
Islands by type | Barrier island | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,650 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Physical oceanography",
"Environmental engineering"
] |
301,750 | https://en.wikipedia.org/wiki/Magnus%20effect | The Magnus effect is a phenomenon that occurs when a spinning object is moving through a fluid. A lift force acts on the spinning object and its path may be deflected in a manner not present when it is not spinning. The strength and direction of the Magnus effect is dependent on the speed and direction of the rotation of the object.
The Magnus effect is named after Heinrich Gustav Magnus, the German physicist who investigated it. The force on a rotating cylinder is an example of Kutta–Joukowski lift, named after Martin Kutta and Nikolay Zhukovsky (or Joukowski), mathematicians who contributed to the knowledge of how lift is generated in a fluid flow.
Description
The most readily observable case of the Magnus effect is when a spinning sphere (or cylinder) curves away from the arc it would follow if it were not spinning. It is often used by football (soccer) and volleyball players, baseball pitchers, and cricket bowlers. Consequently, the phenomenon is important in the study of the physics of many ball sports. It is also an important factor in the study of the effects of spinning on guided missiles—and has some engineering uses, for instance in the design of rotor ships and Flettner airplanes.
Topspin in ball games is defined as spin about a horizontal axis perpendicular to the direction of travel that moves the top surface of the ball in the direction of travel. Under the Magnus effect, topspin produces a downward swerve of a moving ball, greater than would be produced by gravity alone. Backspin produces an upwards force that prolongs the flight of a moving ball. Likewise side-spin causes swerve to either side as seen during some baseball pitches, e.g. slider. The overall behaviour is similar to that around an aerofoil (see lift force), but with a circulation generated by mechanical rotation rather than shape of the foil.
In baseball, this effect is used to generate the downward motion of a curveball, in which the baseball is rotating forward (with 'topspin'). Participants in other sports played with a ball also take advantage of this effect.
Physics
The Magnus effect or Magnus force acts on a rotating body moving relative to a fluid. Examples include a "curve ball" in baseball or a tennis ball hit obliquely. The rotation alters the boundary layer between the object and the fluid. The force is perpendicular to the relative direction of motion and oriented towards the direction of rotation, i.e. the direction the "nose" of the ball is turning towards. The magnitude of the force depends primarily on the rotation rate, the relative velocity, and the geometry of the body; the magnitude also depends upon the body's surface roughness and viscosity of the fluid. Accurate quantitative predictions of the force are difficult, but as with other examples of aerodynamic lift there are simpler, qualitative explanations:
Flow deflection
The diagram shows lift being produced on a back-spinning ball. The wake and trailing air-flow have been deflected downwards; according to Newton's third law of motion there must be a reaction force in the opposite direction.
Pressure differences
The air's viscosity and the surface roughness of the object cause the air to be carried around the object. This adds to the air velocity on one side of the object and decreases the velocity on the other side. Bernoulli's principle states that under certain conditions increased flow speed is associated with reduced pressure, implying that there is lower air pressure on one side than the other. This pressure difference results in a force perpendicular to the direction of travel.
Kutta–Joukowski lift
On a cylinder, the force due to rotation is an example of Kutta–Joukowski lift. It can be analysed in terms of the vortex produced by rotation. The lift per unit length of the cylinder, L′, is the product of the freestream velocity v (in m/s), the fluid density ρ (in kg/m3), and the circulation G due to viscous effects: L′ = ρvG,
where the vortex strength (assuming that the surrounding fluid obeys the no-slip condition) is given by G = 2πωr²,
where ω is the angular velocity of the cylinder (in rad/s) and r is the radius of the cylinder (in m).
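As a rough illustration of the relation above, the following Python sketch evaluates the lift per unit length of a spinning cylinder from its spin rate, radius, freestream velocity and fluid density. The numerical values are illustrative assumptions, not data from any experiment.

```python
import math

def lift_per_unit_length(rho, v, omega, r):
    """Kutta–Joukowski lift per unit length, L' = rho * v * G,
    with vortex strength G = 2 * pi * omega * r**2 (no-slip assumption)."""
    G = 2.0 * math.pi * omega * r ** 2   # circulation, m^2/s
    return rho * v * G                   # N per metre of cylinder length

# Illustrative values (assumptions): a small cylinder spinning in air.
rho = 1.225    # air density, kg/m^3
v = 10.0       # freestream velocity, m/s
omega = 50.0   # angular velocity, rad/s
r = 0.05       # cylinder radius, m

print(f"L' = {lift_per_unit_length(rho, v, omega, r):.2f} N/m")
```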
Inverse Magnus effect
In wind tunnel studies, (rough-surfaced) baseballs show the Magnus effect, but smooth spheres do not. Further study has shown that certain combinations of conditions result in turbulence in the fluid on one side of the rotating body but laminar flow on the other side. These cases are called the inverse Magnus effect: the deflection is opposite to that of the typical Magnus effect.
Magnus effect in potential flow
Potential flow is a mathematical model of the steady flow of a fluid with no viscosity or vorticity present. For potential flow around a circular cylinder, it provides the following results:
Non-spinning cylinder
The flow pattern is symmetric about a horizontal axis through the centre of the cylinder. At each point above the axis and its corresponding point below the axis, the spacing of streamlines is the same so velocities are also the same at the two points. Bernoulli’s principle shows that, outside the boundary layers, pressures are also the same at corresponding points. There is no lift acting on the cylinder.
Spinning cylinder
Streamlines are closer spaced immediately above the cylinder than below, so the air flows faster past the upper surface than past the lower surface. Bernoulli’s principle shows that the pressure adjacent to the upper surface is lower than the pressure adjacent to the lower surface. The Magnus force acts vertically upwards on the cylinder.
Streamlines immediately above the cylinder are curved with radius little more than the radius of the cylinder. This means there is low pressure close to the upper surface of the cylinder. Streamlines immediately below the cylinder are curved with a larger radius than streamlines above the cylinder. This means there is higher pressure acting on the lower surface than on the upper.
Air immediately above and below the cylinder is curving downwards, accelerated by the pressure gradient. A downwards force is acting on the air.
Newton's third law predicts that the Magnus force and the downwards force acting on the air are equal in magnitude and opposite in direction.
History
The effect is named after German physicist Heinrich Gustav Magnus who demonstrated the effect with a rapidly rotating brass cylinder and an air blower in 1852. In 1672, Isaac Newton had speculated on the effect after observing tennis players in his Cambridge college. In 1742, Benjamin Robins, a British mathematician, ballistics researcher, and military engineer, explained deviations in the trajectories of musket balls due to their rotation.
Pioneering wind tunnel research on the Magnus effect was carried out with smooth rotating spheres in 1928. Lyman Briggs later studied baseballs in a wind tunnel, and others have produced images of the effect. The studies show that a turbulent wake behind the spinning ball causes aerodynamic drag, plus there is a noticeable angular deflection in the wake, and this deflection is in the direction of spin.
In sport
The Magnus effect explains commonly observed deviations from the typical trajectories or paths of spinning balls in sport, notably association football, table tennis, tennis, volleyball, golf, baseball, and cricket.
The curved path of a golf ball known as slice or hook is largely due to the ball's spin axis being tilted away from the horizontal due to the combined effects of club face angle and swing path, causing the Magnus effect to act at an angle, moving the ball away from a straight line in its trajectory. Backspin (upper surface rotating backwards from the direction of movement) on a golf ball causes a vertical force that counteracts the force of gravity slightly, and enables the ball to remain airborne a little longer than it would were the ball not spinning: this allows the ball to travel farther than a ball not spinning about its horizontal axis.
In table tennis, the Magnus effect is easily observed, because of the small mass and low density of the ball. An experienced player can place a wide variety of spins on the ball. Table tennis rackets usually have a surface made of rubber to give the racket maximum grip on the ball to impart a spin.
In cricket, the Magnus effect contributes to the types of motion known as drift, dip and lift in spin bowling, depending on the axis of rotation of the spin applied to the ball. The Magnus effect is not responsible for the movement seen in conventional swing bowling, in which the pressure gradient is not caused by the ball's spin, but rather by its raised seam, and the asymmetric roughness or smoothness of its two halves; however, the Magnus effect may be responsible for so-called "Malinga Swing", as observed in the bowling of the swing bowler Lasith Malinga.
In airsoft, a system known as hop-up is used to create a backspin on a fired BB, which greatly increases its range, using the Magnus effect in a similar manner as in golf.
In baseball, pitchers often impart different spins on the ball, causing it to curve in the desired direction due to the Magnus effect. The PITCHf/x system measures the change in trajectory caused by Magnus in all pitches thrown in Major League Baseball.
The match ball for the 2010 FIFA World Cup was criticised for exhibiting a different Magnus effect from previous match balls. The ball was described as having less Magnus effect and as a result flying farther but with less controllable swerve.
In external ballistics
The Magnus effect can also be found in advanced external ballistics. First, a spinning bullet in flight is often subject to a crosswind, which can be simplified as blowing from either the left or the right. In addition to this, even in completely calm air a bullet experiences a small sideways wind component due to its yawing motion. This yawing motion along the bullet's flight path means that the nose of the bullet points in a slightly different direction from the direction the bullet travels. In other words, the bullet "skids" sideways at any given moment, and thus experiences a small sideways wind component in addition to any crosswind component.
The combined sideways wind component of these two effects causes a Magnus force to act on the bullet, which is perpendicular both to the direction the bullet is pointing and the combined sideways wind. In a very simple case where we ignore various complicating factors, the Magnus force from the crosswind would cause an upward or downward force to act on the spinning bullet (depending on the left or right wind and rotation), causing deflection of the bullet's flight path up or down, thus influencing the point of impact.
Overall, the effect of the Magnus force on a bullet's flight path itself is usually insignificant compared to other forces such as aerodynamic drag. However, it greatly affects the bullet's stability, which in turn affects the amount of drag, how the bullet behaves upon impact, and many other factors. The stability of the bullet is affected, because the Magnus effect acts on the bullet's centre of pressure instead of its centre of gravity. This means that it affects the yaw angle of the bullet; it tends to twist the bullet along its flight path, either towards the axis of flight (decreasing the yaw thus stabilising the bullet) or away from the axis of flight (increasing the yaw thus destabilising the bullet). The critical factor is the location of the centre of pressure, which depends on the flowfield structure, which in turn depends mainly on the bullet's speed (supersonic or subsonic), but also the shape, air density and surface features. If the centre of pressure is ahead of the centre of gravity, the effect is destabilizing; if the centre of pressure is behind the centre of gravity, the effect is stabilising.
In aviation
Some aircraft have been built to use the Magnus effect to create lift with a rotating cylinder instead of a wing, allowing flight at lower horizontal speeds. The earliest attempt to use the Magnus effect for a heavier-than-air aircraft was in 1910 by a US member of Congress, Butler Ames of Massachusetts. The next attempt was in the early 1930s by three inventors in New York state.
Ship propulsion and stabilization
Rotor ships use mast-like cylinders, called Flettner rotors, for propulsion. These are mounted vertically on the ship's deck. When the wind blows from the side, the Magnus effect creates a forward thrust. Thus, as with any sailing ship, a rotor ship can only move forwards when there is a wind blowing. The effect is also used in a special type of ship stabilizer consisting of a rotating cylinder mounted beneath the waterline and emerging laterally. By controlling the direction and speed of rotation, strong lift or downforce can be generated. The largest deployment of the system to date is in the motor yacht Eclipse.
See also
Air resistance
Ball of the Century
Bernoulli's principle
Coandă effect
Fluid dynamics
Kite types
Navier–Stokes equations
Potential flow around a circular cylinder
Reynolds number
Tesla turbine
References
Further reading
External links
Magnus Cups, Ri Channel Video, January 2012
Analytic Functions, The Magnus Effect, and Wings at MathPages
How do bullets fly? Ruprecht Nennstiel, Wiesbaden, Germany
How do bullets fly? old version (1998), by Ruprecht Nennstiel
Anthony Thyssen's Rotor Kites page
Has plans on how to build a model
Harnessing wind power using the Magnus effect
Researchers Observe Magnus Effect in Light for First Time
Quantum Maglift
Video:Applications of the Magnus effect
Fluid dynamics
Physical phenomena | Magnus effect | [
"Physics",
"Chemistry",
"Engineering"
] | 2,765 | [
"Piping",
"Chemical engineering",
"Physical phenomena",
"Fluid dynamics"
] |
301,928 | https://en.wikipedia.org/wiki/Lumped-element%20model | The lumped-element model (also called lumped-parameter model, or lumped-component model) is a simplified representation of a physical system or circuit that assumes all components are concentrated at a single point and their behavior can be described by idealized mathematical models. The lumped-element model simplifies the system or circuit behavior description into a topology. It is useful in electrical systems (including electronics), mechanical multibody systems, heat transfer, acoustics, etc. This is in contrast to distributed parameter systems or models in which the behaviour is distributed spatially and cannot be considered as localized into discrete entities.
The simplification reduces the state space of the system to a finite dimension, and the partial differential equations (PDEs) of the continuous (infinite-dimensional) time and space model of the physical system into ordinary differential equations (ODEs) with a finite number of parameters.
Electrical systems
Lumped-matter discipline
The lumped-matter discipline is a set of imposed assumptions in electrical engineering that provides the foundation for lumped-circuit abstraction used in network analysis. The self-imposed constraints are:
The change of the magnetic flux in time outside a conductor is zero.
The change of the charge in time inside conducting elements is zero.
Signal timescales of interest are much larger than propagation delay of electromagnetic waves across the lumped element.
The first two assumptions result in Kirchhoff's circuit laws when applied to Maxwell's equations and are only applicable when the circuit is in steady state. The third assumption is the basis of the lumped-element model used in network analysis. Less severe assumptions result in the distributed-element model, while still not requiring the direct application of the full Maxwell equations.
Lumped-element model
The lumped-element model of electronic circuits makes the simplifying assumption that the attributes of the circuit, resistance, capacitance, inductance, and gain, are concentrated into idealized electrical components; resistors, capacitors, and inductors, etc. joined by a network of perfectly conducting wires.
The lumped-element model is valid whenever L_c ≪ λ, where L_c denotes the circuit's characteristic length and λ denotes the circuit's operating wavelength. Otherwise, when the circuit length is on the order of a wavelength, we must consider more general models, such as the distributed-element model (including transmission lines), whose dynamic behaviour is described by Maxwell's equations. Another way of viewing the validity of the lumped-element model is to note that this model ignores the finite time it takes signals to propagate around a circuit. Whenever this propagation time is not significant to the application the lumped-element model can be used. This is the case when the propagation time is much less than the period of the signal involved. However, with increasing propagation time there will be an increasing error between the assumed and actual phase of the signal which in turn results in an error in the assumed amplitude of the signal. The exact point at which the lumped-element model can no longer be used depends to a certain extent on how accurately the signal needs to be known in a given application.
Real-world components exhibit non-ideal characteristics which are, in reality, distributed elements but are often represented to a first-order approximation by lumped elements. To account for leakage in capacitors, for example, we can model the non-ideal capacitor as having a large lumped resistor connected in parallel, even though the leakage is, in reality, distributed throughout the dielectric. Similarly, a wire-wound resistor has significant inductance as well as resistance distributed along its length, but we can model this as a lumped inductor in series with the ideal resistor.
Thermal systems
A lumped-capacitance model, also called lumped system analysis, reduces a thermal system to a number of discrete “lumps” and assumes that the temperature difference inside each lump is negligible. This approximation is useful to simplify otherwise complex differential heat equations. It was developed as a mathematical analog of electrical capacitance, although it also includes thermal analogs of electrical resistance as well.
The lumped-capacitance model is a common approximation in transient conduction, which may be used whenever heat conduction within an object is much faster than heat transfer across the boundary of the object. The method of approximation then suitably reduces one aspect of the transient conduction system (spatial temperature variation within the object) to a more mathematically tractable form (that is, it is assumed that the temperature within the object is completely uniform in space, although this spatially uniform temperature value changes over time). The rising uniform temperature within the object or part of a system, can then be treated like a capacitative reservoir which absorbs heat until it reaches a steady thermal state in time (after which temperature does not change within it).
An early-discovered example of a lumped-capacitance system which exhibits mathematically simple behavior due to such physical simplifications, are systems which conform to Newton's law of cooling. This law simply states that the temperature of a hot (or cold) object progresses toward the temperature of its environment in a simple exponential fashion. Objects follow this law strictly only if the rate of heat conduction within them is much larger than the heat flow into or out of them. In such cases it makes sense to talk of a single "object temperature" at any given time (since there is no spatial temperature variation within the object) and also the uniform temperatures within the object allow its total thermal energy excess or deficit to vary proportionally to its surface temperature, thus setting up the Newton's law of cooling requirement that the rate of temperature decrease is proportional to difference between the object and the environment. This in turn leads to simple exponential heating or cooling behavior (details below).
Method
To determine the number of lumps, the Biot number (Bi), a dimensionless parameter of the system, is used. Bi is defined as the ratio of the conductive heat resistance within the object to the convective heat transfer resistance across the object's boundary with a uniform bath of different temperature. When the thermal resistance to heat transferred into the object is larger than the resistance to heat being diffused completely within the object, the Biot number is less than 1. In this case, particularly for Biot numbers which are even smaller, the approximation of spatially uniform temperature within the object can begin to be used, since it can be presumed that heat transferred into the object has time to uniformly distribute itself, due to the lower resistance to doing so, as compared with the resistance to heat entering the object.
If the Biot number is less than 0.1 for a solid object, then the entire material will be nearly the same temperature, with the dominant temperature difference being at the surface. It may be regarded as being "thermally thin". The Biot number must generally be less than 0.1 for usefully accurate approximation and heat transfer analysis. The mathematical solution to the lumped-system approximation gives Newton's law of cooling.
A Biot number greater than 0.1 (a "thermally thick" substance) indicates that one cannot make this assumption, and more complicated heat transfer equations for "transient heat conduction" will be required to describe the time-varying and non-spatially-uniform temperature field within the material body.
The single capacitance approach can be expanded to involve many resistive and capacitive elements, with Bi < 0.1 for each lump. As the Biot number is calculated based upon a characteristic length of the system, the system can often be broken into a sufficient number of sections, or lumps, so that the Biot number is acceptably small.
Some characteristic lengths of thermal systems are:
Plate: thickness
Fin: thickness/2
Long cylinder: diameter/4
Sphere: diameter/6
For arbitrary shapes, it may be useful to consider the characteristic length to be volume / surface area.
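A minimal sketch of the criterion above: compute the Biot number from an assumed convective coefficient, thermal conductivity and characteristic length, and check it against the Bi < 0.1 rule of thumb. The material and coefficient values below are assumptions chosen only for illustration.

```python
def biot_number(h, k, L_c):
    """Bi = h * L_c / k: convective coefficient h in W/(m^2 K),
    conductivity k in W/(m K), characteristic length L_c in m."""
    return h * L_c / k

# Assumed case: a 2 cm thick copper plate (L_c = thickness) in still air.
Bi = biot_number(h=10.0, k=400.0, L_c=0.02)
print(f"Bi = {Bi:.4g} -> lumped-capacitance model "
      f"{'is' if Bi < 0.1 else 'is not'} a reasonable approximation")
```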
Thermal purely resistive circuits
A useful concept used in heat transfer applications once the condition of steady state heat conduction has been reached, is the representation of thermal transfer by what is known as thermal circuits. A thermal circuit is the representation of the resistance to heat flow in each element of a circuit, as though it were an electrical resistor. The heat transferred is analogous to the electric current and the thermal resistance is analogous to the electrical resistor. The values of the thermal resistance for the different modes of heat transfer are then calculated as the denominators of the developed equations. The thermal resistances of the different modes of heat transfer are used in analyzing combined modes of heat transfer. The lack of "capacitative" elements in the following purely resistive example, means that no section of the circuit is absorbing energy or changing in distribution of temperature. This is equivalent to demanding that a state of steady state heat conduction (or transfer, as in radiation) has already been established.
The equations describing the three heat transfer modes and their thermal resistances in steady state conditions, as discussed previously, are summarized in the table below:
In cases where there is heat transfer through different media (for example, through a composite material), the equivalent resistance is the sum of the resistances of the components that make up the composite. Likewise, in cases where there are different heat transfer modes, the total resistance is the sum of the resistances of the different modes. Using the thermal circuit concept, the amount of heat transferred through any medium is the quotient of the temperature change and the total thermal resistance of the medium.
As an example, consider a composite wall of known cross-sectional area, made of a layer of cement plaster with one thermal coefficient and a layer of paper-faced fiber glass with another. The left surface of the wall is held at one temperature and exposed to air with one convective coefficient; the right surface is held at a different temperature and exposed to air with another convective coefficient.
Using the thermal resistance concept, the heat flow through the composite is the overall temperature difference divided by the total resistance, which is the sum of the convective resistance at the left surface, the conductive resistances of the plaster and fiber glass layers, and the convective resistance at the right surface.
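Since the numerical values of the example are not reproduced above, the following sketch uses hypothetical layer thicknesses, conductivities, temperatures and film coefficients purely to illustrate the series thermal-resistance calculation; none of the numbers come from the original example.

```python
def R_conduction(L, k, A):
    """Plane-wall conduction resistance, R = L / (k * A), in K/W."""
    return L / (k * A)

def R_convection(h, A):
    """Surface convection resistance, R = 1 / (h * A), in K/W."""
    return 1.0 / (h * A)

A = 1.0  # wall cross-sectional area, m^2 (assumed)

# Hypothetical layers and film coefficients (assumptions for illustration only)
R_total = (R_convection(h=10.0, A=A)              # left air film
           + R_conduction(L=0.03, k=0.72, A=A)    # cement plaster layer
           + R_conduction(L=0.10, k=0.046, A=A)   # fiber glass layer
           + R_convection(h=25.0, A=A))           # right air film

T_left, T_right = 20.0, -5.0                      # assumed air temperatures, deg C
Q = (T_left - T_right) / R_total                  # heat flow through the composite, W
print(f"R_total = {R_total:.3f} K/W, Q = {Q:.1f} W")
```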
Newton's law of cooling
Newton's law of cooling is an empirical relationship attributed to English physicist Sir Isaac Newton (1642–1727). Stated in non-mathematical form, the law says that the rate at which a body cools is proportional to the temperature difference between the body and its surroundings.
Or, using symbols: rate of cooling ∝ ΔT.
An object at a different temperature from its surroundings will ultimately come to a common temperature with its surroundings. A relatively hot object cools as it warms its surroundings; a cool object is warmed by its surroundings. When considering how quickly (or slowly) something cools, we speak of its rate of cooling – how many degrees' change in temperature per unit of time.
The rate of cooling of an object depends on how much hotter the object is than its surroundings. The temperature change per minute of a hot apple pie will be more if the pie is put in a cold freezer than if it is placed on the kitchen table. When the pie cools in the freezer, the temperature difference between it and its surroundings is greater. On a cold day, a warm home will leak heat to the outside at a greater rate when there is a large difference between the inside and outside temperatures. Keeping the inside of a home at high temperature on a cold day is thus more costly than keeping it at a lower temperature. If the temperature difference is kept small, the rate of cooling will be correspondingly low.
As Newton's law of cooling states, the rate of cooling of an object – whether by conduction, convection, or radiation – is approximately proportional to the temperature difference ΔT. Frozen food will warm up faster in a warm room than in a cold room. Note that the rate of cooling experienced on a cold day can be increased by the added convection effect of the wind. This is referred to as wind chill. For example, a wind chill of -20 °C means that heat is being lost at the same rate as if the temperature were -20 °C without wind.
Applicable situations
This law describes many situations in which an object has a large thermal capacity and large conductivity, and is suddenly immersed in a uniform bath which conducts heat relatively poorly. It is an example of a thermal circuit with one resistive and one capacitative element. For the law to be correct, the temperatures at all points inside the body must be approximately the same at each time point, including the temperature at its surface. Thus, the temperature difference between the body and surroundings does not depend on which part of the body is chosen, since all parts of the body have effectively the same temperature. In these situations, the material of the body does not act to "insulate" other parts of the body from heat flow, and all of the significant insulation (or "thermal resistance") controlling the rate of heat flow in the situation resides in the area of contact between the body and its surroundings. Across this boundary, the temperature-value jumps in a discontinuous fashion.
In such situations, heat can be transferred from the exterior to the interior of a body, across the insulating boundary, by convection, conduction, or diffusion, so long as the boundary serves as a relatively poor conductor with regard to the object's interior. The presence of a physical insulator is not required, so long as the process which serves to pass heat across the boundary is "slow" in comparison to the conductive transfer of heat inside the body (or inside the region of interest—the "lump" described above).
In such a situation, the object acts as the "capacitative" circuit element, and the resistance of the thermal contact at the boundary acts as the (single) thermal resistor. In electrical circuits, such a combination would charge or discharge toward the input voltage, according to a simple exponential law in time. In the thermal circuit, this configuration results in the same behavior in temperature: an exponential approach of the object temperature to the bath temperature.
Mathematical statement
Newton's law is mathematically stated by the simple first-order differential equation: dQ/dt = −hA ΔT(t) = hA (Tenv − T(t))
where
Q is thermal energy in joules
h is the heat transfer coefficient between the surface and the fluid
A is the surface area of the heat being transferred
T is the temperature of the object's surface and interior (since these are the same in this approximation)
Tenv is the temperature of the environment
ΔT(t) = T(t) − Tenv is the time-dependent thermal gradient between environment and object
Putting heat transfers into this form is sometimes not a very good approximation, depending on ratios of heat conductances in the system. If the differences are not large, an accurate formulation of heat transfers in the system may require analysis of heat flow based on the (transient) heat transfer equation in nonhomogeneous or poorly conductive media.
Solution in terms of object heat capacity
If the entire body is treated as a lumped-capacitance heat reservoir, with total heat content Q proportional to its simple total heat capacity C and to the temperature of the body T, that is Q = CT, it is expected that the system will experience exponential decay with time in the temperature of the body.
From the definition of heat capacity comes the relation C = dQ/dT. Differentiating this equation with regard to time gives the identity (valid so long as temperatures in the object are uniform at any given time): dQ/dt = C dT/dt. This expression may be used to replace dQ/dt in the first equation which begins this section, above. Then, if T(t) is the temperature of such a body at time t, and Tenv is the temperature of the environment around the body: dT(t)/dt = −(hA/C) (T(t) − Tenv) = −r ΔT(t)
where r = hA/C is a positive constant characteristic of the system, which must be in units of 1/time, and is therefore sometimes expressed in terms of a characteristic time constant t0 given by: r = 1/t0. Thus, in thermal systems, t0 = C/(hA). (The total heat capacity C of a system may be further represented by its mass-specific heat capacity cp multiplied by its mass m, so that the time constant t0 is also given by mcp/(hA).)
The solution of this differential equation, by standard methods of integration and substitution of boundary conditions, gives: T(t) = Tenv + (T(0) − Tenv) e^(−rt).
If ΔT(t) is defined as T(t) − Tenv, where ΔT(0) is the initial temperature difference at time 0,
then the Newtonian solution is written as: ΔT(t) = ΔT(0) e^(−rt) = ΔT(0) e^(−t/t0).
This same solution is almost immediately apparent if the initial differential equation is written in terms of ΔT(t) as the single function to be solved for.
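The exponential solution above is straightforward to evaluate numerically. The sketch below, with assumed values for the heat transfer coefficient, surface area, mass and specific heat, computes the time constant t0 = m*cp/(h*A) and the resulting temperature decay; all numbers are illustrative assumptions.

```python
import math

def lumped_temperature(t, T_env, T0, h, A, m, c_p):
    """T(t) = T_env + (T0 - T_env) * exp(-t / t0), with t0 = m * c_p / (h * A)."""
    t0 = m * c_p / (h * A)   # characteristic time constant, s
    return T_env + (T0 - T_env) * math.exp(-t / t0)

# Assumed values: a small metal part cooling in still air.
params = dict(T_env=25.0, T0=95.0, h=15.0, A=0.01, m=0.2, c_p=450.0)

for t in (0, 60, 300, 900):   # seconds
    print(f"t = {t:4d} s -> T = {lumped_temperature(t, **params):.1f} degC")
```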
Applications
This mode of analysis has been applied to forensic sciences to analyze the time of death of humans. Also, it can be applied to HVAC (heating, ventilating and air-conditioning, which can be referred to as "building climate control"), to ensure more nearly instantaneous effects of a change in comfort level setting.
Mechanical systems
The simplifying assumptions in this domain are:
all objects are rigid bodies;
all interactions between rigid bodies take place via kinematic pairs (joints), springs and dampers.
Acoustics
In this context, the lumped-component model extends the distributed concepts of acoustic theory subject to approximation. In the acoustical lumped-component model, certain physical components with acoustical properties may be approximated as behaving similarly to standard electronic components or simple combinations of components.
A rigid-walled cavity containing air (or similar compressible fluid) may be approximated as a capacitor whose value is proportional to the volume of the cavity. The validity of this approximation relies on the shortest wavelength of interest being significantly (much) larger than the longest dimension of the cavity. A brief numerical sketch combining this cavity element with a port element follows this list.
A reflex port may be approximated as an inductor whose value is proportional to the effective length of the port divided by its cross-sectional area. The effective length is the actual length plus an end correction. This approximation relies on the shortest wavelength of interest being significantly larger than the longest dimension of the port.
Certain types of damping material can be approximated as a resistor. The value depends on the properties and dimensions of the material. The approximation relies on the wavelengths being long enough and on the properties of the material itself.
A loudspeaker drive unit (typically a woofer or subwoofer drive unit) may be approximated as a series connection of a zero-impedance voltage source, a resistor, a capacitor and an inductor. The values depend on the specifications of the unit and the wavelength of interest.
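As a rough illustration of how the cavity ("capacitor") and port ("inductor") elements combine, the sketch below computes the resonance frequency of a simple vented enclosure from its lumped acoustic compliance and mass. The dimensions, air density and speed of sound are assumptions for illustration, not values from any particular design.

```python
import math

def helmholtz_resonance(V, S, L_eff, rho=1.2, c=343.0):
    """Resonance of a cavity plus port modelled as lumped acoustic elements:
    compliance C_a = V / (rho * c**2), mass M_a = rho * L_eff / S,
    f0 = 1 / (2 * pi * sqrt(M_a * C_a))."""
    C_a = V / (rho * c ** 2)     # the acoustic "capacitor"
    M_a = rho * L_eff / S        # the acoustic "inductor"
    return 1.0 / (2.0 * math.pi * math.sqrt(M_a * C_a))

# Assumed dimensions: 20 litre box, 20 cm^2 port area, 12 cm effective port length.
print(f"f0 = {helmholtz_resonance(V=0.02, S=0.002, L_eff=0.12):.1f} Hz")
```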
Heat transfer for buildings
A simplifying assumption in this domain is that all heat transfer mechanisms are linear, implying that radiation and convection are linearised for each problem.
Several publications can be found that describe how to generate lumped-element models of buildings. In most cases, the building is considered a single thermal zone and in this case, turning multi-layered walls into lumped elements can be one of the most complicated tasks in the creation of the model. The dominant-layer method is one simple and reasonably accurate method. In this method, one of the layers is selected as the dominant layer in the whole construction, this layer is chosen considering the most relevant frequencies of the problem.
Lumped-element models of buildings have also been used to evaluate the efficiency of domestic energy systems, by running many simulations under different future weather scenarios.
Fluid systems
Fluid systems can be described by means of lumped-element cardiovascular models by using voltage to represent pressure and current to represent flow; identical equations from the electrical circuit representation are valid after substituting these two variables. Such applications can, for example, study the response of the human cardiovascular system to ventricular assist device implantation.
See also
System isomorphism
Model order reduction
References
External links
Advanced modelling and simulation techniques for magnetic components
IMTEK Mathematica Supplement (IMS), the Open Source IMTEK Mathematica Supplement (IMS) for lumped modelling
Conceptual models
Mechanics
Acoustics
Electronic circuits
Electronic design | Lumped-element model | [
"Physics",
"Engineering"
] | 3,991 | [
"Electronic design",
"Electronic circuits",
"Classical mechanics",
"Acoustics",
"Electronic engineering",
"Mechanics",
"Mechanical engineering",
"Design"
] |
301,950 | https://en.wikipedia.org/wiki/Distributed-element%20model | In electrical engineering, the distributed-element model or transmission-line model of electrical circuits assumes that the attributes of the circuit (resistance, capacitance, and inductance) are distributed continuously throughout the material of the circuit. This is in contrast to the more common lumped-element model, which assumes that these values are lumped into electrical components that are joined by perfectly conducting wires. In the distributed-element model, each circuit element is infinitesimally small, and the wires connecting elements are not assumed to be perfect conductors; that is, they have impedance. Unlike the lumped-element model, it assumes nonuniform current along each branch and nonuniform voltage along each wire.
The distributed model is used where the wavelength becomes comparable to the physical dimensions of the circuit, making the lumped model inaccurate. This occurs at high frequencies, where the wavelength is very short, or on low-frequency, but very long, transmission lines such as overhead power lines.
Applications
The distributed-element model is more accurate but more complex than the lumped-element model. The use of infinitesimals will often require the application of calculus, whereas circuits analysed by the lumped-element model can be solved with linear algebra. The distributed model is consequently usually only applied when accuracy calls for its use. The location of this point is dependent on the accuracy required in a specific application, but essentially, it needs to be used in circuits where the wavelengths of the signals have become comparable to the physical dimensions of the components. An often-quoted engineering rule of thumb (not to be taken too literally because there are many exceptions) is that parts larger than one-tenth of a wavelength will usually need to be analysed as distributed elements.
Transmission lines
Transmission lines are a common example of the use of the distributed model. Its use is dictated because the length of the line will usually be many wavelengths of the circuit's operating frequency. Even for the low frequencies used on power transmission lines, one-tenth of a wavelength is still only about 500 kilometres at 60 Hz. Transmission lines are usually represented in terms of the primary line constants as shown in figure 1. From this model, the behaviour of the circuit is described by the secondary line constants, which can be calculated from the primary ones.
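A small sketch of that calculation: from assumed primary constants R, L, G, C per unit length, the secondary constants follow from the standard telegrapher's relations as the characteristic impedance Z0 = sqrt((R + jωL)/(G + jωC)) and the propagation constant γ = sqrt((R + jωL)(G + jωC)). The numerical values below are loosely coax-like assumptions, not data for any particular line.

```python
import cmath

def secondary_constants(R, L, G, C, f):
    """Z0 and gamma of a uniform line from its per-unit-length primary constants."""
    w = 2.0 * cmath.pi * f
    series = R + 1j * w * L              # series impedance per unit length, ohm/m
    shunt = G + 1j * w * C               # shunt admittance per unit length, S/m
    Z0 = cmath.sqrt(series / shunt)
    gamma = cmath.sqrt(series * shunt)   # attenuation + j * phase, per metre
    return Z0, gamma

# Assumed primary constants per metre and operating frequency
Z0, gamma = secondary_constants(R=0.1, L=250e-9, G=1e-9, C=100e-12, f=10e6)
print(f"|Z0| = {abs(Z0):.1f} ohm, alpha = {gamma.real:.2e} Np/m, beta = {gamma.imag:.3f} rad/m")
```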
The primary line constants are normally taken to be constant with position along the line leading to a particularly simple analysis and model. However, this is not always the case, variations in physical dimensions along the line will cause variations in the primary constants, that is, they have now to be described as functions of distance. Most often, such a situation represents an unwanted deviation from the ideal, such as a manufacturing error, however, there are a number of components where such longitudinal variations are deliberately introduced as part of the function of the component. A well-known example of this is the horn antenna.
Where reflections are present on the line, quite short lengths of line can exhibit effects that are simply not predicted by the lumped-element model. A quarter wavelength line, for instance, will transform the terminating impedance into its dual. This can be a wildly different impedance.
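A minimal sketch of the quarter-wave transformation mentioned above, for a lossless line: the input impedance is Zin = Z0^2 / ZL, so a low terminating impedance appears as a high one. The 50 ohm and 10 ohm values are arbitrary assumptions.

```python
def quarter_wave_input_impedance(Z0, ZL):
    """Input impedance of a lossless quarter-wavelength line: Zin = Z0**2 / ZL."""
    return Z0 ** 2 / ZL

print(quarter_wave_input_impedance(Z0=50.0, ZL=10.0))  # 250.0 ohm
```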
High-frequency transistors
Another example of the use of distributed elements is in the modelling of the base region of a bipolar junction transistor at high frequencies. The analysis of charge carriers crossing the base region is inaccurate when the base region is simply treated as a lumped element. A more successful model is a simplified transmission line model, which includes the base material's distributed bulk resistance and the substrate's distributed capacitance. This model is represented in figure 2.
Resistivity measurements
In many situations, it is desired to measure resistivity of bulk material by applying an electrode array at the surface. Amongst the fields that use this technique are geophysics (because it avoids having to dig into the substrate) and the semiconductor industry (for the similar reason that it is non-intrusive) for testing bulk silicon wafers. The basic arrangement is shown in figure 3, although normally, more electrodes would be used. To form a relationship between the voltage and current measured on the one hand, and the material's resistivity on the other, it is necessary to apply the distributed-element model by considering the material to be an array of infinitesimal resistor elements. Unlike the transmission line example, the need to apply the distributed-element model arises from the geometry of the setup, and not from any wave propagation considerations.
The model used here needs to be truly 3-dimensional (transmission line models are usually described by elements of a one-dimensional line). It is also possible that the resistances of the elements will be functions of the coordinates, indeed, in the geophysical application, it may well be that regions of changed resistivity are the very things that it is desired to detect.
Inductor windings
Another example where a simple one-dimensional model will not suffice is the windings of an inductor. Coils of wire have capacitance between adjacent turns (and more remote turns as well, but the effect progressively diminishes). For a single-layer solenoid, the distributed capacitance will mostly lie between adjacent turns, as shown in figure 4, between turns T1 and T2, but for multiple-layer windings and more accurate models distributed capacitance to other turns must also be considered. This model is fairly difficult to deal with in simple calculations and, for the most part, is avoided. The most common approach is to roll up all the distributed capacitance into one lumped element in parallel with the inductance and resistance of the coil. This lumped model works successfully at low frequencies but falls apart at high frequencies where the usual practice is to simply measure (or specify) an overall Q for the inductor without associating a specific equivalent circuit.
See also
Telegrapher's equations
Distributed-element circuit
Distributed-element filter
Warren P. Mason
References
Bibliography
Kenneth L. Kaiser, Electromagnetic compatibility handbook, CRC Press, 2004 .
Karl Lark-Horovitz, Vivian Annabelle Johnson, Methods of experimental physics: Solid state physics, Academic Press, 1959 .
Robert B. Northrop, Introduction to instrumentation and measurements, CRC Press, 1997 .
P. Vallabh Sharma, Environmental and engineering geophysics, Cambridge University Press, 1997 .
Electronic design
Electronic circuits
Distributed element circuits
Conceptual models
ja:分布定数回路 | Distributed-element model | [
"Engineering"
] | 1,324 | [
"Electronic design",
"Electronic circuits",
"Electronic engineering",
"Distributed element circuits",
"Design"
] |
302,033 | https://en.wikipedia.org/wiki/Frequency%20response | In signal processing and electronics, the frequency response of a system is the quantitative measure of the magnitude and phase of the output as a function of input frequency. The frequency response is widely used in the design and analysis of systems, such as audio and control systems, where they simplify mathematical analysis by converting governing differential equations into algebraic equations. In an audio system, it may be used to minimize audible distortion by designing components (such as microphones, amplifiers and loudspeakers) so that the overall response is as flat (uniform) as possible across the system's bandwidth. In control systems, such as a vehicle's cruise control, it may be used to assess system stability, often through the use of Bode plots. Systems with a specific frequency response can be designed using analog and digital filters.
The frequency response characterizes systems in the frequency domain, just as the impulse response characterizes systems in the time domain. In linear systems (or as an approximation to a real system neglecting second order non-linear properties), either response completely describes the system and thus there is a one-to-one correspondence: the frequency response is the Fourier transform of the impulse response. The frequency response allows simpler analysis of cascaded systems such as multistage amplifiers, as the response of the overall system can be found through multiplication of the individual stages' frequency responses (as opposed to convolution of the impulse response in the time domain). The frequency response is closely related to the transfer function in linear systems, which is the Laplace transform of the impulse response. They are equivalent when the real part of the transfer function's complex variable is zero.
Measurement and plotting
Measuring the frequency response typically involves exciting the system with an input signal and measuring the resulting output signal, calculating the frequency spectra of the two signals (for example, using the fast Fourier transform for discrete signals), and comparing the spectra to isolate the effect of the system. In linear systems, the frequency range of the input signal should cover the frequency range of interest.
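A minimal sketch of this spectrum-comparison procedure, assuming a simulated first-order low-pass system as a stand-in for the device under test (the cut-off frequency, sampling rate and noise excitation are all assumptions): excite with noise, take the discrete Fourier transforms of input and output, and form their ratio.

```python
import numpy as np

fs, n = 1000.0, 2 ** 14                 # assumed sampling rate (Hz) and record length
rng = np.random.default_rng(0)
x = rng.standard_normal(n)              # white-noise excitation

fc = 50.0                               # assumed cut-off of the simulated test system, Hz
a = np.exp(-2.0 * np.pi * fc / fs)      # one-pole low-pass recursion coefficient
y = np.zeros(n)
for i in range(1, n):
    y[i] = a * y[i - 1] + (1.0 - a) * x[i]   # the simulated system's output

H = np.fft.rfft(y) / np.fft.rfft(x)     # crude estimate; cross-spectral averaging is more robust
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

band = (freqs > 45.0) & (freqs < 55.0)  # look near the cut-off frequency
print(f"|H| near {fc:.0f} Hz ~ {np.abs(H[band]).mean():.2f}")   # about 0.71 for this system
```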
Several methods using different input signals may be used to measure the frequency response of a system, including:
Applying constant-amplitude sinusoids stepped through a range of frequencies and comparing the amplitude and phase shift of the output relative to the input. The frequency sweep must be slow enough for the system to reach its steady state at each point of interest
Applying an impulse signal and taking the Fourier transform of the system's response
Applying a wide-sense stationary white noise signal over a long period of time and taking the Fourier transform of the system's response. With this method, the cross-spectral density (rather than the power spectral density) should be used if phase information is required
The frequency response is characterized by the magnitude, typically in decibels (dB) or as a generic amplitude of the dependent variable, and the phase, in radians or degrees, measured against frequency, in radian/s, Hertz (Hz) or as a fraction of the sampling frequency.
There are three common ways of plotting response measurements:
Bode plots graph magnitude and phase against frequency on two rectangular plots
Nyquist plots graph magnitude and phase parametrically against frequency in polar form
Nichols plots graph magnitude and phase parametrically against frequency in rectangular form
For the design of control systems, any of the three types of plots may be used to infer closed-loop stability and stability margins from the open-loop frequency response. In many frequency domain applications, the phase response is relatively unimportant and the magnitude response of the Bode plot may be all that is required. In digital systems (such as digital filters), the frequency response often contains a main lobe with multiple periodic sidelobes, due to spectral leakage caused by digital processes such as sampling and windowing.
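As a small illustration of the magnitude and phase data behind a Bode plot, the sketch below evaluates a first-order RC low-pass response H(jω) = 1 / (1 + jωRC) at a few frequencies; the component values are arbitrary assumptions.

```python
import numpy as np

R, C = 1.0e3, 1.0e-6                     # assumed values: 1 kOhm, 1 uF (corner near 159 Hz)
f = np.logspace(0, 5, 6)                 # 1 Hz to 100 kHz, one point per decade
H = 1.0 / (1.0 + 1j * 2.0 * np.pi * f * R * C)

for fi, h in zip(f, H):
    print(f"{fi:9.1f} Hz  {20 * np.log10(abs(h)):7.2f} dB  {np.degrees(np.angle(h)):7.2f} deg")
```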
Nonlinear frequency response
If the system under investigation is nonlinear, linear frequency domain analysis will not reveal all the nonlinear characteristics. To overcome these limitations, generalized frequency response functions and nonlinear output frequency response functions have been defined to analyze nonlinear dynamic effects. Nonlinear frequency response methods may reveal effects such as resonance, intermodulation, and energy transfer.
Applications
In the audible range frequency response is usually referred to in connection with electronic amplifiers, microphones and loudspeakers. Radio spectrum frequency response can refer to measurements of coaxial cable, twisted-pair cable, video switching equipment, wireless communications devices, and antenna systems. Infrasonic frequency response measurements include earthquakes and electroencephalography (brain waves).
Frequency response curves are often used to indicate the accuracy of electronic components or systems. When a system or component reproduces all desired input signals with no emphasis or attenuation of a particular frequency band, the system or component is said to be "flat", or to have a flat frequency response curve. In other cases, a three-dimensional frequency response surface can be used.
Frequency response requirements differ depending on the application. In high fidelity audio, an amplifier requires a flat frequency response of at least 20–20,000 Hz, with a tolerance as tight as ±0.1 dB in the mid-range frequencies around 1000 Hz; however, in telephony, a frequency response of 400–4,000 Hz, with a tolerance of ±1 dB is sufficient for intelligibility of speech.
Once a frequency response has been measured (e.g., as an impulse response), provided the system is linear and time-invariant, its characteristic can be approximated with arbitrary accuracy by a digital filter. Similarly, if a system is demonstrated to have a poor frequency response, a digital or analog filter can be applied to the signals prior to their reproduction to compensate for these deficiencies.
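As a sketch of this idea, a measured impulse response can be used directly as the coefficients of an FIR filter. In the example below the "measured" response is a made-up decaying exponential, so the array and its length are purely illustrative.

```python
# Minimal sketch: reproducing an LTI system's characteristic digitally by
# convolving input signals with its (here, made-up) measured impulse response.
import numpy as np

h_measured = np.exp(-np.arange(64) / 8.0)   # stand-in for measured data
h_measured /= h_measured.sum()              # normalize to unity DC gain

rng = np.random.default_rng(0)
signal = rng.standard_normal(1000)          # any input signal

# Treating the impulse response samples as FIR taps emulates the system.
emulated_output = np.convolve(signal, h_measured)[:len(signal)]
```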
The form of a frequency response curve is very important for anti-jamming protection of radars, communications and other systems.
Frequency response analysis can also be applied to biological domains, such as the detection of hormesis in repeated behaviors with opponent process dynamics, or in the optimization of drug treatment regimens.
See also
Audio system measurements
Bandwidth (signal processing)
Bode plot
Impulse response
Spectral sensitivity
Steady state (electronics)
Transient response
Universal dielectric response
References
Notes
Bibliography
Luther, Arch C.; Inglis, Andrew F. Video engineering, McGraw-Hill, 1999.
Stark, Scott Hunter. Live Sound Reinforcement, Vallejo, California, Artistpro.com, 1996–2002.
L. R. Rabiner and B. Gold. Theory and Application of Digital Signal Processing. – Englewood Cliffs, NJ: Prentice-Hall, 1975. – 720 pp
External links
University of Michigan: Frequency Response Analysis and Design Tutorial
Smith, Julius O. III: Introduction to Digital Filters with Audio Applications has a nice chapter on Frequency Response
Signal processing
Control theory
Audio amplifier specifications | Frequency response | [
"Mathematics",
"Technology",
"Engineering"
] | 1,369 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Applied mathematics",
"Control theory",
"Electronic engineering",
"Audio engineering",
"Audio amplifier specifications",
"Dynamical systems"
] |
302,185 | https://en.wikipedia.org/wiki/Bernard%20Bolzano | Bernard Bolzano (born Bernardus Placidus Johann Nepomuk Bolzano; 5 October 1781 – 18 December 1848) was a Bohemian mathematician, logician, philosopher, theologian and Catholic priest of Italian extraction, also known for his liberal views.
Bolzano wrote in German, his native language. For the most part, his work came to prominence posthumously.
Family
Bolzano was the son of two pious Catholics. His father, Bernard Pompeius Bolzano, was an Italian who had moved to Prague, where he married Maria Cecilia Maurer, who came from a German-speaking Prague family. Only two of their twelve children lived to adulthood.
Career
When he was ten years old, Bolzano entered the Gymnasium of the Piarists in Prague, which he attended from 1791 to 1796.
Bolzano entered the University of Prague in 1796 and studied mathematics, philosophy and physics. Starting in 1800, he also began studying theology, becoming a Catholic priest in 1804. He was appointed to the new chair of philosophy of religion at Prague University in 1805. He proved to be a popular lecturer not only in religion but also in philosophy, and he was elected Dean of the Philosophical Faculty in 1818.
Bolzano alienated many faculty and church leaders with his teachings on the social waste of militarism and the needlessness of war. He urged a total reform of the educational, social and economic systems that would direct the nation's interests toward peace rather than toward armed conflict between nations. His political convictions, which he was inclined to share with others with some frequency, eventually proved to be too liberal for the Austrian authorities. On December 24, 1819, he was removed from his professorship (upon his refusal to recant his beliefs) and was exiled to the countryside, where he devoted his energies to his writings on social, religious, philosophical, and mathematical matters.
Although forbidden to publish in mainstream journals as a condition of his exile, Bolzano continued to develop his ideas and publish them either on his own or in obscure Eastern European journals. In 1842 he moved back to Prague, where he died in 1848.
Mathematical work
Bolzano made several original contributions to mathematics. His overall philosophical stance was that, contrary to much of the prevailing mathematics of the era, it was better not to introduce intuitive ideas such as time and motion into mathematics. To this end, he was one of the earliest mathematicians to begin instilling rigor into mathematical analysis with his three chief mathematical works Beyträge zu einer begründeteren Darstellung der Mathematik (1810), Der binomische Lehrsatz (1816) and Rein analytischer Beweis (1817). These works presented "...a sample of a new way of developing analysis", whose ultimate goal would not be realized until some fifty years later when they came to the attention of Karl Weierstrass.
To the foundations of mathematical analysis he contributed the introduction of a fully rigorous ε–δ definition of a mathematical limit. Bolzano was the first to recognize the greatest lower bound property of the real numbers. Like several others of his day, he was skeptical of the possibility of Gottfried Leibniz's infinitesimals, which had been the earliest putative foundation for differential calculus. Bolzano's notion of a limit was similar to the modern one: that a limit, rather than being a relation among infinitesimals, must instead be cast in terms of how the dependent variable approaches a definite quantity as the independent variable approaches some other definite quantity.
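In modern notation (which postdates Bolzano), the ε–δ definition of a limit that his approach anticipated reads:

```latex
% Standard modern epsilon-delta definition of a limit.
\lim_{x \to a} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\, \exists \delta > 0 \;\, \forall x :\;
0 < |x - a| < \delta \;\Rightarrow\; |f(x) - L| < \varepsilon .
```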
Bolzano also gave the first purely analytic proof of the fundamental theorem of algebra, which had originally been proven by Gauss from geometrical considerations. He also gave the first purely analytic proof of the intermediate value theorem (also known as Bolzano's theorem). Today he is mostly remembered for the Bolzano–Weierstrass theorem, which Karl Weierstrass developed independently and published years after Bolzano's first proof and which was initially called the Weierstrass theorem until Bolzano's earlier work was rediscovered.
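Stated in modern terms (not in Bolzano's original formulations), the two results most associated with his name are:

```latex
% Intermediate value theorem (Bolzano's theorem): a continuous function
% taking values of opposite sign at the endpoints has a root between them.
f \in C[a,b],\ f(a)\,f(b) < 0
\;\Longrightarrow\;
\exists\, c \in (a,b):\ f(c) = 0 .

% Bolzano–Weierstrass theorem: every bounded real sequence has a
% convergent subsequence.
(x_n)_{n \in \mathbb{N}} \subset \mathbb{R} \ \text{bounded}
\;\Longrightarrow\;
\exists\, (x_{n_k}):\ x_{n_k} \to x \in \mathbb{R}.
```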
Philosophical work
Bolzano's posthumously published work Paradoxien des Unendlichen (The Paradoxes of the Infinite) (1851) was greatly admired by many of the eminent logicians who came after him, including Charles Sanders Peirce, Georg Cantor, and Richard Dedekind. Bolzano's main claim to fame, however, is his 1837 Wissenschaftslehre (Theory of Science), a work in four volumes that covered not only philosophy of science in the modern sense but also logic, epistemology and scientific pedagogy. The logical theory that Bolzano developed in this work has come to be acknowledged as ground-breaking. Other works are a four-volume Lehrbuch der Religionswissenschaft (Textbook of the Science of Religion) and the metaphysical work Athanasia, a defense of the immortality of the soul. Bolzano also did valuable work in mathematics, which remained virtually unknown until Otto Stolz rediscovered many of his lost journal articles and republished them in 1881.
Wissenschaftslehre (Theory of Science)
In his 1837 Wissenschaftslehre Bolzano attempted to provide logical foundations for all sciences, building on abstractions like part-relation, abstract objects, attributes, sentence-shapes, ideas and propositions in themselves, sums and sets, collections, substances, adherences, subjective ideas, judgments, and sentence-occurrences. These attempts were an extension of his earlier thoughts in the philosophy of mathematics, for example his 1810 Beiträge where he emphasized the distinction between the objective relationship between logical consequences and our subjective recognition of these connections. For Bolzano, it was not enough that we merely have confirmation of natural or mathematical truths, but rather it was the proper role of the sciences (both pure and applied) to seek out justification in terms of the fundamental truths that may or may not appear to be obvious to our intuitions.
Introduction to Wissenschaftslehre
Bolzano begins his work by explaining what he means by theory of science, and the relation between our knowledge, truths and sciences. Human knowledge, he states, is made of all truths (or true propositions) that men know or have known. However, this is a very small fraction of all the truths that exist, although still too much for one human being to comprehend. Therefore, our knowledge is divided into more accessible parts. Such a collection of truths is what Bolzano calls a science (Wissenschaft). It is important to note that not all true propositions of a science have to be known to men; hence, this is how we can make discoveries in a science.
To better understand and comprehend the truths of a science, men have created textbooks (Lehrbuch), which of course contain only the true propositions of the science known to men. But how to know where to divide our knowledge, that is, which truths belong together? Bolzano explains that we will ultimately know this through some reflection, but that the resulting rules of how to divide our knowledge into sciences will be a science in itself. This science, that tells us which truths belong together and should be explained in a textbook, is the Theory of Science (Wissenschaftslehre).
Metaphysics
In the Wissenschaftslehre, Bolzano is mainly concerned with three realms:
(1) The realm of language, consisting in words and sentences.
(2) The realm of thought, consisting in subjective ideas and judgements.
(3) The realm of logic, consisting in objective ideas (or ideas in themselves) and propositions in themselves.
Bolzano devotes a great part of the Wissenschaftslehre to an explanation of these realms and their relations.
Two distinctions play a prominent role in his system. First, the distinction between parts and wholes. For instance, words are parts of sentences, subjective ideas are parts of judgments, objective ideas are parts of propositions in themselves. Second, all objects divide into those that exist, which means that they are causally connected and located in time and/or space, and those that do not exist. Bolzano's original claim is that the logical realm is populated by objects of the latter kind.
Satz an Sich (proposition in itself)
Satz an Sich is a basic notion in Bolzano's Wissenschaftslehre. It is introduced at the very beginning, in section 19. Bolzano first introduces the notions of proposition (spoken or written or thought or in itself) and idea (spoken or written or thought or in itself). "The grass is green" is a proposition (Satz): in this connection of words, something is said or asserted. "Grass", however, is only an idea (Vorstellung). Something is represented by it, but it does not assert anything. Bolzano's notion of proposition is fairly broad: "A rectangle is round" is a proposition — even though it is false by virtue of self-contradiction — because it is composed in an intelligible manner out of intelligible parts.
Bolzano does not give a complete definition of a Satz an Sich (i.e. proposition in itself) but he gives us just enough information to understand what he means by it. A proposition in itself (i) has no existence (that is: it has no position in time or place), (ii) is either true or false, independent of anyone knowing or thinking that it is true or false, and (iii) is what is 'grasped' by thinking beings. So a written sentence ('Socrates has wisdom') grasps a proposition in itself, namely the proposition [Socrates has wisdom]. The written sentence does have existence (it has a certain location at a certain time, say it is on your computer screen at this very moment) and expresses the proposition in itself which is in the realm of in itself (i.e. an sich). (Bolzano's use of the term an sich differs greatly from that of Kant; for Kant's use of the term see an sich.)
Every proposition in itself is composed out of ideas in themselves (for simplicity, we will use proposition to mean "proposition in itself" and idea to refer to an objective idea or idea in itself). Ideas are negatively defined as those parts of a proposition that are themselves not propositions. A proposition consists of at least three ideas, namely: a subject idea, a predicate idea and the copula (i.e. 'has', or another form of to have). (Though there are propositions which contain propositions, we won't take them into consideration right now.)
Bolzano identifies certain types of ideas. There are simple ideas that have no parts (as an example Bolzano uses [something]), but there are also complex ideas that consist of other ideas (Bolzano uses the example of [nothing], which consists of the ideas [not] and [something]). Complex ideas can have the same content (i.e. the same parts) without being the same — because their components are differently connected. The idea [A black pen with blue ink] is different from the idea [A blue pen with black ink] though the parts of both ideas are the same.
Ideas and objects
It is important to understand that an idea does not need to have an object. Bolzano uses object to denote something that is represented by an idea. An idea that has an object represents that object, whereas an idea that does not have an object represents nothing. (Do not be confused by the terminology here: an objectless idea is an idea without an object, that is, one that represents nothing.)
Consider, for further explanation, an example used by Bolzano. The idea [a round square], does not have an object, because the object that ought to be represented is self-contrary. A different example is the idea [nothing] which certainly does not have an object. However, the proposition [the idea of a round square has complexity] has as its subject-idea [the idea of a round square]. This subject-idea does have an object, namely the idea [a round square]. But, that idea does not have an object.
Besides objectless ideas, there are ideas that have only one object, e.g. the idea [the first man on the moon] represents only one object. Bolzano calls these ideas 'singular ideas'. Obviously there are also ideas that have many objects (e.g. [the citizens of Amsterdam]) and even infinitely many objects (e.g. [a prime number]).
Sensation and simple ideas
Bolzano has a complex theory of how we are able to sense things. He explains sensation by means of the term intuition, in German called Anschauung. An intuition is a simple idea, it has only one object (Einzelvorstellung), but besides that, it is also unique (Bolzano needs this to explain sensation). Intuitions (Anschauungen) are objective ideas, they belong to the an sich realm, which means that they don't have existence. As said, Bolzano's argumentation for intuitions is by an explanation of sensation.
What happens when you sense a real existing object, for instance a rose, is this: the different aspects of the rose, like its scent and its color, cause in you a change. That change means that before and after sensing the rose, your mind is in a different state. So sensation is in fact a change in your mental state. How is this related to objects and ideas? Bolzano explains that this change, in your mind, is essentially a simple idea (Vorstellung), like, 'this smell' (of this particular rose). This idea represents; it has as its object the change. Besides being simple, this change must also be unique. This is because literally you can't have the same experience twice, nor can two people, who smell the same rose at the same time, have exactly the same experience of that smell (although they will be quite alike). So each single sensation causes a single (new) unique and simple idea with a particular change as its object. Now, this idea in your mind is a subjective idea, meaning that it is in you at a particular time. It has existence. But this subjective idea must correspond to, or has as a content, an objective idea. This is where Bolzano brings in intuitions (Anschauungen); they are the simple, unique and objective ideas that correspond to our subjective ideas of changes caused by sensation. So for each single possible sensation, there is a corresponding objective idea. Schematically the whole process is like this: whenever you smell a rose, its scent causes a change in you. This change is the object of your subjective idea of that particular smell. That subjective idea corresponds to the intuition or Anschauung.
Logic
According to Bolzano, all propositions are composed out of three (simple or complex) elements: a subject, a predicate and a copula. Instead of the more traditional copulative term 'is', Bolzano prefers 'has'. The reason for this is that 'has', unlike 'is', can connect a concrete term, such as 'Socrates', to an abstract term such as 'baldness'. "Socrates has baldness" is, according to Bolzano, preferable to "Socrates is bald" because the latter form is less basic: 'bald' is itself composed of the elements 'something', 'that', 'has' and 'baldness'. Bolzano also reduces existential propositions to this form: "Socrates exists" would simply become "Socrates has existence (Dasein)".
A major role in Bolzano's logical theory is played by the notion of variations: various logical relations are defined in terms of the changes in truth value that propositions incur when their non-logical parts are replaced by others. Logically analytical propositions, for instance, are those in which all the non-logical parts can be replaced without change of truth value. Two propositions are 'compatible' (verträglich) with respect to one of their component parts x if there is at least one term that can be inserted that would make both true. A proposition Q is 'deducible' (ableitbar) from a proposition P, with respect to certain of their non-logical parts, if any replacement of those parts that makes P true also makes Q true. If a proposition is deducible from another with respect to all its non-logical parts, it is said to be 'logically deducible'.
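A schematic rendering of the last two relations in modern notation may help; this formalization is an interpretive sketch rather than Bolzano's own symbolism. Write P(t) for the result of substituting the term t for the varied non-logical parts of P.

```latex
% Compatibility: some substitution makes both propositions true.
P,\,Q \ \text{compatible w.r.t. the varied parts}
\;\iff\;
\exists\, t :\ P(t) \ \text{is true} \ \wedge\ Q(t) \ \text{is true}.

% Deducibility (Ableitbarkeit): every substitution that makes P true
% also makes Q true.
Q \ \text{deducible from} \ P
\;\iff\;
\forall\, t :\ P(t) \ \text{is true} \;\Rightarrow\; Q(t) \ \text{is true}.
```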
Besides the relation of deducibility, Bolzano also has a stricter relation of 'grounding' (Abfolge). This is an asymmetric relation that obtains between true propositions, when one of the propositions is not only deducible from, but also explained by the other.
Truth
Bolzano distinguishes five meanings the words true and truth have in common usage, all of which Bolzano takes to be unproblematic. The meanings are listed in order of properness:
I. Abstract objective meaning: Truth signifies an attribute that may apply to a proposition, primarily to a proposition in itself, namely the attribute on the basis of which the proposition expresses something that in reality is as is expressed. Antonyms: falsity, falseness, falsehood.
II. Concrete objective meaning: (a) Truth signifies a proposition that has the attribute truth in the abstract objective meaning. Antonym: (a) falsehood.
III. Subjective meaning: (a) Truth signifies a correct judgment. Antonym: (a) mistake.
IV. Collective meaning: Truth signifies a body or multiplicity of true propositions or judgments (e.g. the biblical truth).
V. Improper meaning: True signifies that some object is in reality what some denomination states it to be. (e.g. the true God). Antonyms: false, unreal, illusory.
Bolzano's primary concern is with the concrete objective meaning: with concrete objective truths or truths in themselves. All truths in themselves are a kind of propositions in themselves. They do not exist, i.e. they are not spatiotemporally located as thought and spoken propositions are. However, certain propositions have the attribute of being a truth in itself. Being a thought proposition is not a part of the concept of a truth in itself, notwithstanding the fact that, given God's omniscience, all truths in themselves are also thought truths. The concepts 'truth in itself' and 'thought truth' are interchangeable, as they apply to the same objects, but they are not identical.
Bolzano offers as the correct definition of (abstract objective) truth: a proposition is true if it expresses something that applies to its object. The correct definition of a (concrete objective) truth must thus be: a truth is a proposition that expresses something that applies to its object. This definition applies to truths in themselves, rather than to thought or known truths, as none of the concepts figuring in this definition are subordinate to a concept of something mental or known.
Bolzano proves in §§31–32 of his Wissenschaftslehre three things:
A. There is at least one truth in itself (concrete objective meaning):
1. There are no true propositions (assumption)
2. 1. is a proposition (obvious)
3. 1. is true (assumed) and false (because of 1.)
4. 1. is self-contradictory (because of 3.)
5. 1. is false (because of 4.)
6. There is at least one true proposition (because of 1. and 5.)
B. There is more than one truth in itself:
7. There is only one truth in itself, namely A is B (assumption)
8. A is B is a truth in itself (because of 7.)
9. There are no other truths in themselves apart from A is B (because of 7.)
10. 9. is a true proposition/ a truth in itself (because of 7.)
11. There are two truths in themselves (because of 8. and 10.)
12. There is more than one truth in itself (because of 11.)
C. There are infinitely many truths in themselves:
13. There are only n truths in themselves, namely A is B .... Y is Z (assumption)
14. A is B .... Y is Z are n truths in themselves (because of 13.)
15. There are no other truths apart from A is B .... Y is Z (because of 13.)
16. 15. is a true proposition/ a truth in itself (because of 13.)
17. There are n+1 truths in themselves (because of 14. and 16.)
18. Steps 13 to 17 can be repeated for n+1, which results in n+2 truths and so on endlessly (because n is a variable)
19. There are infinitely many truths in themselves (because of 18.)
Judgments and cognitions
A known truth has as its parts (Bestandteile) a truth in itself and a judgment (Bolzano, Wissenschaftslehre §26). A judgment is a thought which states a true proposition. In judging (at least when the matter of the judgment is a true proposition), the idea of an object is being connected in a certain way with the idea of a characteristic (§ 23). In true judgments, the relation between the idea of the object and the idea of the characteristic is an actual/existent relation (§28).
Every judgment has as its matter a proposition, which is either true or false. Every judgment exists, but not "für sich". Judgments, namely, in contrast with propositions in themselves, are dependent on subjective mental activity. Not every mental activity, though, has to be a judgment; recall that all judgments have as matter propositions, and hence all judgments need to be either true or false. Mere presentations or thoughts are examples of mental activities which do not necessarily need to be stated (behaupten), and so are not judgments (§ 34).
Judgments that have as their matter true propositions can be called cognitions (§36). Cognitions are also dependent on the subject, and so, unlike truths in themselves, cognitions permit degrees; a proposition can be more or less known, but it cannot be more or less true. Every cognition necessarily implies a judgment, but not every judgment is necessarily a cognition, because there are also judgments that are not true. Bolzano maintains that there are no such things as false cognitions, only false judgments (§34).
Philosophical legacy
Bolzano came to be surrounded by a circle of friends and pupils who spread his ideas (the so-called Bolzano Circle), but the effect of his thought on philosophy initially seemed destined to be slight.
Alois Höfler (1853–1922), a former student of Franz Brentano and Alexius Meinong, who subsequently became professor of pedagogy at the University of Vienna, created the "missing link between the Vienna Circle and the Bolzano tradition in Austria." Bolzano's work was rediscovered, however, by Edmund Husserl and Kazimierz Twardowski, both students of Brentano. Through them, Bolzano became a formative influence on both phenomenology and analytic philosophy.
Writings
Bolzano: Gesamtausgabe (Bolzano: Collected Works), critical edition edited by Eduard Winter, Friedrich Kambartel, Bob van Rootselaar, Stuttgart: Fromman-Holzboog, 1969ff. (103 Volumes available, 28 Volumes in preparation).
Wissenschaftslehre, 4 vols., 2nd rev. ed. by W. Schultz, Leipzig I–II 1929, III 1980, IV 1931; Critical Edition edited by Jan Berg: Bolzano's Gesamtausgabe, vols. 11–14 (1985–2000).
Bernard Bolzano's Grundlegung der Logik. Ausgewählte Paragraphen aus der Wissenschaftslehre, Vols. 1 and 2, with supplementary text summaries, an introduction and indices, edited by F. Kambartel, Hamburg, 1963, 1978².
Beyträge zu einer begründeteren Darstellung der Mathematik, 1810 (Contributions to a better grounded presentation of mathematics; English translation in The Mathematical Works of Bernard Bolzano, 2004, pp. 83–137).
Rein analytischer Beweis des Lehrsatzes, dass zwischen je zwey Werthen, die ein entgegengesetztes Resultat gewähren, wenigstens eine reelle Wurzel der Gleichung liege, 1817 (Purely analytic proof of the theorem that between any two values which give results of opposite sign, there lies at least one real root of the equation).
Franz Prihonsky (1850), Der Neue Anti-Kant, Bautzen (an assessment of the Critique of Pure Reason by Bolzano, published posthumously by his friend F. Prihonsky).
Paradoxien des Unendlichen, 1851 (Paradoxes of the Infinite; excerpt).
Most of Bolzano's work remained in manuscript form, so it had a very small circulation and little influence on the development of the subject.
Translations and compilations
Theory of Science (selection edited and translated by Rolf George, Berkeley and Los Angeles: University of California Press, 1972).
Theory of Science (selection edited, with an introduction, by Jan Berg. Translated from the German by Burnham Terrell, Dordrecht and Boston: D. Reidel Publishing Company, 1973).
Theory of Science, first complete English translation in four volumes by Rolf George and Paul Rusnock, New York: Oxford University Press, 2014.
The Mathematical Works of Bernard Bolzano, translated and edited by Steve Russ, New York: Oxford University Press, 2004 (re-printed 2006).
On the Mathematical Method and Correspondence with Exner, translated by Rolf George and Paul Rusnock, Amsterdam: Rodopi, 2004.
Selected Writings on Ethics and Politics, translated by Rolf George and Paul Rusnock, Amsterdam: Rodopi, 2007.
Franz Prihonsky, The New Anti-Kant, edited by Sandra Lapointe and Clinton Tolley, New York, Palgrave Macmillan, 2014.
(Translation of Rein analytischer Beweis des Lehrsatzes, dass zwischen je zwey Werthen, die ein entgegengesetztes Resultat gewähren, wenigstens eine reelle Wurzel der Gleichung liege (Prague 1817))
See also
List of Roman Catholic scientist-clerics
Notes
References
Further reading
(1972), "Von Bolzano zu Meinong: Zur Geschichte des logischen Realismus." In: Rudolf Haller (ed.), Jenseits von Sein und Nichtsein: Beiträge zur Meinong-Forschung, Akadem. Druck- u. Verlagsanst., pp. 69–102.
Kamila Veverková, Bernard Bolzano: A New Evaluation of His Thought and His Circle, trans. Angelo Shaun Franklin, Lexington Books, 2022.
External links
Bolzano's Philosophy of Mathematical Knowledge entry by Sandra Lapointe in the Internet Encyclopedia of Philosophy
The Philosophy of Bernard Bolzano: Logic and Ontology
Bernard Bolzano: Bibliography of the English Translations
Annotated Bibliography on the Philosophical Work of Bolzano
Annotated Bibliography on the Practical Philosophy of Bolzano (religion, aesthetics, politics)
Bolzano Collection: Digitized Bolzano's works
Volume 1 of Wissenschaftslehre in Google Books
Volume 2 of Wissenschaftslehre in Google Books
Volumes 3–4 of Wissenschaftslehre in Google Books
Volume 1 of Wissenschaftslehre in Archive.org (pages 162 to 243 are missing)
Volume 2 of Wissenschaftslehre in Archive.org
Volume 4 of Wissenschaftslehre in Archive.org
Volume 3 of Wissenschaftslehre in Gallica
Volume 4 of Wissenschaftslehre in Gallica
1781 births
1848 deaths
19th-century Czech people
19th-century Czech philosophers
19th-century essayists
19th-century mathematicians
19th-century philosophers
Burials at Olšany Cemetery
Catholic clergy scientists
Catholic philosophers
Charles University alumni
Czech essayists
Czech logicians
Czech mathematicians
Czech non-fiction writers
Czech pacifists
Czech people of Italian descent
Czech philosophers
Czech writers in German
Enlightenment philosophers
Epistemologists
History of logic
History of mathematics
Mathematical analysts
Mathematical logicians
Mathematicians from Prague
Ontologists
Philosophers of logic
Philosophers of mathematics
Philosophers of religion
Philosophers of science
Philosophy academics
Philosophy writers
Utilitarians
Writers about religion and science | Bernard Bolzano | [
"Mathematics"
] | 5,952 | [
"Mathematical analysis",
"Mathematical logicians",
"Mathematical logic",
"Mathematical analysts",
"Philosophers of mathematics"
] |
302,218 | https://en.wikipedia.org/wiki/Zipper | A zipper (N. America), zip, zip fastener (UK), formerly known as a clasp locker, is a commonly used device for binding together two edges of fabric or other flexible material. Used in clothing (e.g. jackets and jeans), luggage and other bags, camping gear (e.g. tents and sleeping bags), and many other items, zippers come in a wide range of sizes, shapes, and colors. In 1892, Whitcomb L. Judson, an American inventor from Chicago, patented the original design from which the modern device evolved.
The zipper gets its name from a brand of rubber boots (or galoshes) it was used on in 1923. The galoshes could be fastened with a single zip of the hand, and soon the hookless fasteners came to be called "Zippers".
Description
A zipper consists of a slider mounted on two rows of metal or plastic teeth that are designed to interlock and thereby join the material to which the rows are attached. The slider, usually operated by hand, contains a Y-shaped channel that, by moving along the rows of teeth, meshes or separates them, depending on the direction of the slider's movement. The teeth may be individually discrete or shaped from a continuous coil, and are also referred to as elements. The word zipper is onomatopoetic, as the device makes a high-pitched zip when used.
In many jackets and similar garments, the opening is closed completely when the slider is at the top end.
Some jackets have double-separating zippers with two sliders on the tape. When the sliders are on opposite ends of the tape, the jacket is closed. If the lower slider is raised then the bottom part of the jacket may be opened to allow more comfortable sitting or bicycling. When both sliders are lowered then the zipper may be totally separated.
Bags, suitcases and other pieces of luggage also often feature two sliders on the tape: the part of the zipper between them is unfastened. When the two sliders are located next to each other, which can be at any point along the tape, the zipper is fully closed.
Zippers may:
increase or decrease the size of an opening to allow or restrict the passage of objects, as in the fly of trousers or in a pocket;
join or separate two ends or sides of a single garment, as in the front of a jacket, or on the front, back or side of a dress or skirt to facilitate dressing;
attach or detach a separable part of the garment to or from another, as in the conversion between trousers and shorts or the connection or disconnection of a hood and a coat;
attach or detach a small pouch or bag to or from a larger one. One example of this is military rucksacks, which have smaller pouches or bags attached to the sides using one or two zippers;
be used to decorate an item.
These variations are achieved by sewing one end of the zipper together, sewing both ends together, or allowing both ends of the zipper to fall completely apart.
A zipper costs relatively little, but if it fails, the garment may be unusable until the zipper is repaired or replaced—which can be quite difficult and expensive. Problems often lie with the zipper slider; when it becomes worn it does not properly align and join the alternating teeth. With separating zippers, the insertion pin may tear loose from the tape; the tape may even disintegrate from use. If a zipper fails, it can either jam (i.e. get stuck) or partially break off.
History
In 1851, Elias Howe received a patent for an "Improvement in Fastenings for Garments". He did not try seriously to market it, thus missing the recognition that he might otherwise have received. Howe's device was more like an elaborate drawstring than a true slide fastener.
Forty-two years later, in 1893, Whitcomb L. Judson, who invented a pneumatic street railway, patented a "Shoe-Fastening". The device served as a (more complicated) hook-and-eye shoe fastener. With the support of businessman Colonel Lewis Walker, Judson launched the Universal Fastener Company to manufacture the new device. Judson's "clasp locker" had its public debut at the 1893 Chicago World's Fair and met with little commercial success. Judson is sometimes given credit as the inventor of the zipper, but his device was not used in clothing.
The Universal Fastener Company moved to Hoboken, New Jersey, in 1901 and reorganized as the Fastener Manufacturing and Machine Company. Gideon Sundbäck, a Swedish-American electrical engineer, was hired to work for the company in 1906. Good technical skills and marriage to the plant manager's daughter, Elvira Aronson, led Sundbäck to the position of head designer. The company moved to Meadville, Pennsylvania, where it operated for most of the 20th century under the name Talon, Inc. Sundbäck worked on improving the fastener, and, in 1909, he registered a patent in Germany. The US rights to this invention were in the name of the Meadville company (operating as the Hookless Fastener Co.), but Sundbäck retained non-U.S. rights and used these in subsequent years to set up Lightning Fastener Co. in St. Catharines, Ontario, Canada. Sundbäck's work with this firm has led to the common misperception that he was Canadian and that the zipper originated in that country.
In 1916, newspapers in Australia reported displays of the "new hookless fastener", a device from America that "the world has been waiting for" by a live model in the store window of Raynor's of Melbourne.
Gideon Sundbäck increased the number of fastening elements from four per inch (about one every 6.4 mm) to ten or eleven (about one every 2.5 mm), introduced two facing rows of teeth that were pulled into a single piece by the slider, and increased the opening for the teeth guided by the slider. The patent for the "Separable Fastener" was issued in 1917. Gideon Sundbäck also created the manufacturing machine for the new device. The "S-L" or "scrapless" machine took a special Y-shaped wire and cut scoops from it, then punched the scoop dimple and nib, and clamped each scoop on a cloth tape to produce a continuous zipper chain. Within the first year of operation, Sundbäck's machinery was producing a few hundred feet (around 100 meters) of fastener per day. In March of the same year, Mathieu Burri, a Swiss inventor, improved the design by adding a lock-in system attached to the last teeth, but his version never got into production due to conflicting patents.
In 1923, during a trip to Europe, Sundbäck sold his European rights to Martin Othmar Winterhalter, who improved the design by using ribs and grooves instead of Sundbäck's joints and jaws and started producing with his company Riri on a large scale first in Germany, then in Switzerland.
The popular North American term zipper (UK zip, or occasionally zip-fastener) came from the B. F. Goodrich Company in 1923. The company used Gideon Sundbäck's fastener on a new type of rubber boots (or galoshes) and referred to it as the zipper, and the name stuck. The two chief uses of the zipper in its early years were for closing boots and tobacco pouches. Zippers began being used for clothing in 1925 by Schott NYC on leather jackets.
In the 1930s, a sales campaign began for children's clothing featuring zippers. The campaign praised zippers for promoting self-reliance in young children by making it possible for them to dress themselves. The zipper beat the button in 1937 in the "Battle of the Fly", after French fashion designers raved over zippers in men's trousers. Esquire declared the zipper the "Newest Tailoring Idea for Men", and that among the zippered fly's many virtues was that it would exclude "The Possibility of Unintentional and Embarrassing Disarray."
The most recent innovation in the zipper's design was the introduction of models that could open on both ends, as on jackets. The zipper has become by far the most widespread fastener, and is used on clothing, luggage, leather goods, and various other objects.
Types
Coil zippers now form the bulk of sales of zippers worldwide. The slider runs on two coils, one on each side; the teeth are formed by the windings of the coils. Two basic types of coils are used: one with coils in spiral form, usually with a cord running inside the coils; the other with coils in ladder form, also called the Ruhrmann type. Coil zippers are made of polyester coil and are thus also termed polyester zippers. Nylon was formerly used to make them, and though only polyester is used now, the type is still also termed a nylon zipper.
Invisible zippers have the teeth hidden behind a tape, so that the zipper is invisible. It is also called the concealed zipper. The tape's color matches the garment's, as does the slider's and the puller's. This kind of a zipper is common in skirts and dresses. Invisible zippers are usually coil zippers. They are also seeing increased use by the military and emergency services because the appearance of a button down shirt can be maintained while providing a quick and easy fastening system. A regular invisible zipper uses a lighter lace-like fabric on the zipper tape, instead of the common heavier woven fabric on other zippers.
Reverse coil zippers are a variation of the coil zipper. In a reverse coil zipper, the coil is on the reverse (back) side of the zipper and the slider works on the flat side of the zipper (normally the back, now the front). Unlike an invisible zipper where the coil is also on the back, the reverse coil shows stitching on the front side and the slider accommodates a variety of pulls (the invisible zipper requires a small, tear-drop pull due to the small slider attachment). Water resistant zippers are generally configured as reverse coil so that the PVC coating can cover the stitching. A rubber- or PVC-coated reverse zipper is called a waterproof zipper.
Metal zippers are the classic zipper type, found mostly in jeans and pencil cases today. The teeth are not a coil, but are instead individual pieces of metal molded into shape and set on the zipper tape at regular intervals. Metal zippers are made with brass, aluminium and nickel. All these zippers are basically made from flat wire. A special type of metal zipper is made from pre-formed wire, usually brass, but sometimes other metals, too. Only a few companies in the world have this technology. This type of pre-formed metal zipper is mainly used in high grade jeans-wear, work-wear, etc., where high strength is required and zippers need to withstand tough washing.
Plastic-molded zippers are identical to metallic zippers, except that the teeth are plastic instead of metal. Metal zippers can be painted to match the surrounding fabric; plastic zippers can be made in any color of plastic. Plastic zippers mostly use polyacetal resin, though other thermoplastic polymers are used as well, such as polyethylene. Used most popularly for pencil cases, small plastic pouches and other stationery.
Open-ended zippers use a box and pin mechanism to lock the two sides of the zipper into place; they are often used in jackets. Open-ended zippers can be of any of the above described types.
Two way open-ended zippers have a puller on each end of the zipper tape instead of having an insertion pin and pin box at the bottom. Someone wearing a garment with this kind of zipper can slide up the bottom puller to accommodate more leg movement without stressing the pin and box of a one-way open-ended zipper. It is most commonly used on long coats.
Two way closed-ended zippers are closed at both ends; they are often used in luggage and can have either one or two pullers on the zipper.
Magnetic zippers allow for one-handed closure and are used in sportswear.
Air and water tightness
Airtight zippers were first developed by NASA for making high-altitude pressure suits and later space suits, capable of retaining air pressure inside the suit in the vacuum of space.
The airtight zipper is built like a standard toothed zipper, but with a waterproof sheeting (which is made of fabric-reinforced polyethylene and is bonded to the rest of the suit) wrapped around the outside of each row of zipper teeth. When the zipper is closed, the two facing sides of the plastic sheeting are squeezed tightly against one another (between the C-shaped clips) both above and below the zipper teeth, forming a double seal.
This double-mated surface is good at retaining both vacuum and pressure, but the fit must be very tight to press the surfaces together firmly. Consequently, these zippers are typically very stiff when zipped shut and have minimal flex or stretch. They are hard to open and close because the zipper anvil must bend apart teeth that are being held under tension. They can also be derailed, causing damage to the sealing surfaces, if the teeth are misaligned while straining to pull the zipper shut.
These zippers are very common where airtight or watertight seals are needed, such as on scuba diving dry suits, ocean survival suits, and hazmat suits.
A less common water-resistant zipper is similar in construction to a standard toothed zipper, but includes a molded plastic ridge seal similar to the mating surfaces on a Ziploc bag. Such a zipper is easier to open and close than a clipped version, and the slider has a gap above the zipper teeth for separating the ridge seal. This seal is structurally weak against internal pressure, and can be separated by pressure within the sealed container pushing outward on the ridges, which simply flex and spread apart, potentially allowing air or liquid entry. Ridge-sealed zippers are sometimes used on lower-cost surface dry suits.
Anti-slide zipper locks
Some zippers include a designed ability for the slider to hold in a steady open or closed position, resisting forces that would try to move the slider and open the zipper unexpectedly. There are two common ways this is accomplished:
The zipper handle can have a short protruding pin stamped into it, which inserts between the zipper teeth through a hole on the slider, when the handle is folded down flat against the zipper teeth. This appears on some brands of trousers. The handle of the fly zipper is folded flat against the teeth when it is not in use, and the handle is held down by both slider hinge tension and the fabric flap over the fly.
The slider can also have a two-piece hinge assembly attaching the handle to the slider, with the base of the hinge under spring tension and with protruding pins on the bottom that insert between the zipper teeth. To move the zipper, the handle is pulled outward against spring tension, lifting the pins out from between the teeth as the slider moves. When the handle is released the pins automatically engage between the zipper teeth again. They are called "auto-lock sliders".
A three-piece version of the above uses a tiny pivoting arm held under tension inside the hinge. Pulling on the handle from any direction lifts the pivoting arm's pins out of the zipper teeth so that the slider can move.
Components
The components of a zipper are:
Top Tape Extension (the fabric part of the zipper, that extends beyond the teeth, at the top of the chain)
Top Stop (two devices affixed to the top end of a zipper, to prevent the slider from coming off the chain)
Slider (the device that moves up and down the chain to open or close the zipper)
Pull Tab or Puller (the part of the slider that is held to move the slider up or down)
Tape Width (refers to the width of the fabric on both sides of the zipper chain)
Chain or Zipper Teeth (the continuous piece that is formed when both halves of a zipper are meshed together) and Chain Width (refers to the specific gauge of the chain – common gauge sizes are #3, #5, #7, #8 and #10, the bigger the number, the wider the teeth/chain width is)
Bottom Stop (a device affixed to the bottom end of a zipper, to prevent further movement of the half of the zipper from separating)
Bottom Tape Extension (the fabric part of the zipper, that extends beyond the teeth, at the bottom of the chain)
Single Tape Width (refers to the width of the fabric on one side of the zipper chain)
Insertion Pin (a device used on a separating zipper whose function is to allow the joining of the two zipper halves)
Retainer Box or Pin Box (a device used on a separating zipper whose function is to correctly align the pin, to begin the joining of the zipper halves)
Reinforcement Film (a strip of plastic fused to each half of the zipper tape to allow a manufacturer to electronically "weld" the zipper onto the garment or item that is being manufactured, without the need of sewing or stitching)
Manufacturing
Forbes reported in 2003 that although the zipper market in the 1960s was dominated by Talon Zipper (US) and Optilon (Germany), Japanese manufacturer YKK grew to become the industry giant by the 1980s. YKK held 45 percent of world market share, followed by Optilon (8 percent) and Talon Zipper (7 percent).
Tex Corp (India) has also emerged as a significant supplier to the apparel industry.
In Europe, the Cremalleras Rubi company was established in 1926 in Spain. It sold over 30 million zippers in 2012.
In 2005, The Guardian reported that China had 80 percent of the international market. Most of its product is made in Qiaotou, Yongjia County.
U.S. Patents
25 November 1851 : "Improvement in Fastening for Garments"
29 August 1893 : "Shoe fastening"
29 August 1893 : "Clasp Locker or Unlocker for Shoes"
31 March 1896 : "Fastening for Shoes"
31 March 1896 : "Clasp-Locker for Shoes"
29 April 1913 : "Separable fastener" (Gideon Sundback)
20 March 1917 : "Separable fastener" (Gideon Sundback)
22 December 1936 : "Slider"
Mechanism
The following explanation of the mechanism of the zipper, as improved by Gideon Sundbäck in 1917, is drawn from his patent:
The zipper is analogous in function to a drawstring, but different in mechanism. A drawstring works by tension in the string drawing the eyelets of the piece together, since the tension acts to straighten the string and forces the eyelets toward a line. The zipper works by an elastic, that is, reversible, deformation of the "locking members" (teeth). The zipper teeth are shaped and sized so that the forces which act on the zipper when the garment it is sewn on is worn cannot unlock the teeth. The slider constrains the teeth positions, moves them along a given path, and acts on the teeth one-by-one in its "Y-shaped channel", and so, can reversibly lock and unlock them. This is a lock and key design. In Sundback's invention the teeth are symmetric with "exterior and interior rounded surfaces" that are "elongated transversely". The teeth have a material part ("external projection") and a space ("internal recess"). The material part of one tooth is slightly smaller than the space on the other and so shaped to act as a "contractible jaw"—the jaw is elastically opened and then closed as it goes over the other tooth. The "snug fit" that results when "one member nests within the recess of an adjoining member" is a stable locked state. The maximum force when the slider operates is in between the unlocked and locked positions, giving two stable mechanical equilibria. The "snug fit" is stable, not only to forces from wear that act in the same direction as those of the slider, but also to transverse and longitudinal (both perpendicular) forces.
The zipper is analogous in mechanism to a bobby pin, where the person's hand slides hair into and out of the pin's "contractible jaw".
In popular culture
Zippers have entered into urban legends. American folklorist Jan Brunvand noted that "The zipper has been the subject of jokes and legends since... the 1920s". Those stories reflect "modern anxieties and desires", emphasizing embarrassments and accidents, primarily involving the flies of men's trousers in stories such as "The Unzipped Stranger" and "The Unzipped Fly".
In Brave New World, Aldous Huxley repeatedly mentioned zippers, implying that, in their newness (as of the early 1930s), mechanical complexity, ease of use, and speed, zippers were somehow corrosive of natural human values.
Durability and repairs
The zipper is often the least durable component in a garment or piece of equipment. Most often, the zipper fails to close because a worn or bent slider can no longer apply the necessary force to the sides of the teeth to make them interlock. This can sometimes be remedied by using small pliers to carefully squeeze the back part of the slider together by a fraction of a millimeter, compensating for the wear. Because the slider is typically a magnesium diecasting that breaks easily, the force on the pliers should be released before the slider is felt to give way, and if the zipper still cannot be closed, the pressure should be increased only gradually. Another way to reduce the gap at the open end of the slider is to prepare a small block of wood by sawing a slot into one end so that it fits over the upper arm of the slider; a hammer can then be used to exert force on the slider by carefully striking the wood.
When the protective coating of the diecast slider has been worn off by prolonged use, the material can corrode. The corrosion products are usually metal salts which can accumulate and block the slider from moving. When this happens, the salt can often be dissolved by submerging the slider in vinegar or another mild acid. Otherwise, the slider needs to be removed and replaced.
See also
Talon Zipper, the first manufacturer of hookless fasteners (zippers)
Funicular—A "zipper train" is a type of funicular train, sometimes called "cremallera" in Spanish
Zipper storage bag
References
Further reading
Petroski, Henry (1992). The Evolution of Useful Things. New York: Alfred A. Knopf. .
Friedel, Robert (1996). Zipper: An Exploration in Novelty. New York: W. W. Norton and Company. .
External links
How Zippers Work by S. M. Blinder, the Wolfram Demonstrations Project
The History of the Zipper
Type of Zippers
Putting in a Zipper, ca. 1962, Archives of Ontario YouTube Channel
1913 introductions
American inventions
Swedish inventions
Brands that became generic
Fasteners
Textile closures
20th-century inventions | Zipper | [
"Engineering"
] | 4,878 | [
"Construction",
"Fasteners"
] |
302,441 | https://en.wikipedia.org/wiki/Hexagram | A hexagram (Greek) or sexagram (Latin) is a six-pointed geometric star figure with the Schläfli symbol {6/2}, 2{3}, or {{3}}. The term is used to refer to a compound figure of two equilateral triangles. The intersection is a regular hexagon.
The hexagram is part of an infinite series of shapes which are compounds of two n-dimensional simplices. In three dimensions, the analogous compound is the stellated octahedron, and in four dimensions the compound of two 5-cells is obtained.
It has been historically used in various religious and cultural contexts and as decorative motifs. The symbol was used as a decorative motif in medieval Christian churches and Jewish synagogues. In the medieval period, a Muslim mystical symbol known as the Seal of Solomon was depicted as either a hexagram or pentagram.
Group theory
In mathematics, the root system for the simple Lie group G2 is in the form of a hexagram, with six long roots and six short roots.
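In one conventional choice of planar coordinates (given here only for illustration), the twelve roots are the following vectors; the six short roots point to the vertices of the inner hexagon, while the six long roots, longer by a factor of √3 and rotated by 30°, point to the star's outer tips.

```latex
% Six short roots (length 1, at angles 0°, 60°, ..., 300°):
\pm(1,0),\quad
\pm\left(\tfrac{1}{2},\, \tfrac{\sqrt{3}}{2}\right),\quad
\pm\left(\tfrac{1}{2},\, -\tfrac{\sqrt{3}}{2}\right)

% Six long roots (length \sqrt{3}, at angles 30°, 90°, ..., 330°):
\pm\left(0,\, \sqrt{3}\right),\quad
\pm\left(\tfrac{3}{2},\, \tfrac{\sqrt{3}}{2}\right),\quad
\pm\left(\tfrac{3}{2},\, -\tfrac{\sqrt{3}}{2}\right)
```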
Construction by compass and a straight edge
A six-pointed star, like a regular hexagon, can be created using a compass and a straight edge (a coordinate rendering of these steps is sketched after the list):
Make a circle of any size with the compass.
Without changing the radius of the compass, set its pivot on the circle's circumference, and find one of the two points where a new circle would intersect the first circle.
With the pivot on the last point found, similarly find a third point on the circumference, and repeat until six such points have been marked.
With a straight edge, join alternate points on the circumference to form two overlapping equilateral triangles.
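In coordinates, the six points produced by this construction are equally spaced at 60° around the circle, because a chord equal in length to the radius subtends a 60° arc. A minimal sketch (the variable names are arbitrary):

```python
# Minimal sketch: the six construction points and the two triangles.
import math

r = 1.0  # radius of the circle, equal to the fixed compass setting
points = [(r * math.cos(math.radians(60 * k)),
           r * math.sin(math.radians(60 * k))) for k in range(6)]

triangle_1 = [points[0], points[2], points[4]]  # join alternate points
triangle_2 = [points[1], points[3], points[5]]  # the other three points
```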
Construction by linear algebra
A regular hexagram can be constructed by orthographically projecting any cube onto a plane through three vertices that are all adjacent to the same vertex. The twelve midpoints to edges of the cube form a hexagram. For example, consider the projection of the unit cube with vertices at the eight possible binary vectors in three dimensions onto the plane x + y + z = 1, which passes through the three vertices adjacent to the origin. The midpoints are (1/2, 0, 0), (1/2, 1, 0) and (1/2, 1, 1), and all points resulting from these by applying a permutation to their entries. These 12 points project to a hexagram: six vertices around the outer hexagon and six on the inner.
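The projection can be checked numerically. The following minimal sketch (plain Python, with arbitrary helper names) projects the twelve edge midpoints along the (1, 1, 1) direction and reports their distances from the origin, which is also where the cube's centre projects; the distances fall on two radii in the ratio √3, as expected for the vertices of a regular hexagram.

```python
# Minimal sketch: project the unit cube's 12 edge midpoints along (1, 1, 1).
import itertools
import math

normal = (1.0, 1.0, 1.0)                    # direction of projection

# Edge midpoints: exactly one coordinate equals 1/2, the others are 0 or 1.
midpoints = set()
for i in range(3):
    for a, b in itertools.product((0.0, 1.0), repeat=2):
        p = [a, b]
        p.insert(i, 0.5)
        midpoints.add(tuple(p))             # 12 distinct points

def project(p):
    """Remove the component of p along the (1, 1, 1) direction."""
    t = sum(pi * ni for pi, ni in zip(p, normal)) / 3.0
    return tuple(pi - t * ni for pi, ni in zip(p, normal))

radii = sorted(round(math.dist(project(p), (0.0, 0.0, 0.0)), 6)
               for p in midpoints)
# Six points lie at the smaller radius and six at a radius sqrt(3) times
# larger, alternating every 30 degrees: the vertices of a regular hexagram.
```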
Origins and shape
As a derivative of two overlapping triangles, the hexagram may have developed from different peoples with no direct correlation to one another.
The mandala symbol called yantra, found on ancient South Indian Hindu temples, is a geometric toolset that incorporates hexagrams into its framework. It symbolizes the nara-narayana, or perfect meditative state of balance achieved between Man and God, and if maintained, results in "moksha," or "nirvana" (release from the bounds of the earthly world and its material trappings).
Some researchers have theorized that the hexagram represents the astrological chart at the time of David's birth or anointment as king. The hexagram is also known as the "King's Star" in astrological circles.
In antique papyri, pentagrams, together with stars and other signs, are frequently found on amulets bearing the Jewish names of God, and used to guard against fever and other diseases. Curiously the hexagram is not found among these signs. In the Greek Magical Papyri (Wessely, l.c. pp. 31, 112) at Paris and London there are 22 signs side by side, and a circle with twelve signs, but neither a pentagram nor a hexagram.
Religious usage
Indian religions
Six-pointed stars have also been found in cosmological diagrams in Hinduism, Buddhism, and Jainism. The reasons behind this symbol's common appearance in Indic religions and the West are unknown. One possibility is that they have a common origin. The other possibility is that artists and religious people from several cultures independently created the hexagram shape, which is a relatively simple geometric design.
Within Indic lore, the shape is generally understood to consist of two triangles—one pointed up and the other down—locked in harmonious embrace. The two components are called "Om" and the "Hrim" in Sanskrit, and symbolize man's position between earth and sky. The downward triangle symbolizes Shakti, the sacred embodiment of femininity, and the upward triangle symbolizes Shiva, or Agni Tattva, representing the focused aspects of masculinity. The mystical union of the two triangles represents Creation, occurring through the divine union of male and female. The two locked triangles are also known as 'Shanmukha'—the six-faced, representing the six faces of Shiva & Shakti's progeny Kartikeya. This symbol is also a part of several yantras and has deep significance in Hindu ritual worship and history.
In Buddhism, some old versions of the Bardo Thodol, also known as The "Tibetan Book of the Dead", contain a hexagram with a swastika inside. It was made up by the publishers for this particular publication. In Tibetan, it is called the "origin of phenomenon" (chos-kyi 'byung-gnas). It is especially connected with Vajrayogini, and forms the center part of her mandala. In reality, it is in three dimensions, not two, although it may be portrayed either way.
The Shatkona is a symbol used in Hindu yantra that represents the union of both the masculine and feminine form. More specifically it is supposed to represent Purusha (the supreme being), and Prakriti (mother nature, or causal matter). Often this is represented as Shiva – Shakti.
Anahata or heart chakra is the fourth primary chakra, according to Hindu Yogic, Shakta and Buddhist Tantric traditions. In Sanskrit, anahata means "unhurt, unstruck, and unbeaten". Anahata Nad refers to the Vedic concept of unstruck sound (the sound of the celestial realm). Anahata is associated with balance, calmness, and serenity.
Judaism
The Magen David is a generally recognized symbol of Judaism and Jewish identity and is also known colloquially as the Jewish Star or "Star of David." Its usage as a sign of Jewish identity began in the Middle Ages, though its religious usage began earlier, with the current earliest archeological evidence being a stone bearing the shield from the arch of a 3–4th century synagogue in the Galilee.
Christianity
The first and the most important Armenian Cathedral of Etchmiadzin (303 AD, built by the founder of Christianity in Armenia) is decorated with many types of ornamented hexagrams and so is the tomb of an Armenian prince of the Hasan-Jalalyan dynasty of Khachen (1214 AD) in the Gandzasar Church of Artsakh.
The hexagram may be found in some Churches and stained-glass windows. In Christianity, it is sometimes called the star of creation. A very early example, noted by Nikolaus Pevsner, can be found in Winchester Cathedral, England in one of the canopies of the choir stalls, circa 1308.
Latter-day Saints (Mormons)
The Star of David is also used less prominently by the Church of Jesus Christ of Latter-day Saints, in the temples and in architecture. It symbolizes God reaching down to man and man reaching up to God, the union of Heaven and earth. It may also symbolize the Tribes of Israel, as well as friendship with and affinity towards the Jewish people. Additionally, it is sometimes used to symbolize the quorum of the twelve apostles, as in Revelation 12, wherein the Church of God is symbolized by a woman wearing a crown of twelve stars. It is also sometimes used to symbolize the Big Dipper, which points to the North Star, a symbol of Jesus Christ.
Islam
The symbol is known in Arabic as Khātem Sulaymān (Seal of Solomon; ) or Najmat Dāwūd (Star of David; ). The "Seal of Solomon" may also be represented by a five-pointed star or pentagram.
In the Qur'an, it is written that David and King Solomon (Arabic, Suliman or Sulayman) were prophets and kings, and are figures revered by Muslims. The Medieval pre-Ottoman Hanafi Anatolian beyliks of the Karamanids and Jandarids used the star on their flag. The symbol is also used on the Hayreddin Barbarossa flag. Today the six-pointed star can be found in mosques and on other Arabic and Islamic artifacts.
Usage in heraldry
In heraldry and vexillology, a hexagram is a fairly commonly employed charge, though it is rarely called by this name. In Germanic regions it is known simply as a "star." In English and French heraldry, however, the hexagram is known as a "mullet of six points," where mullet is a French term for a spur rowel, which is shown with five pointed arms by default unless otherwise specified. In Albanian heraldry and vexillology, the hexagram has been used since classical antiquity and is commonly referred to as a sixagram. The coat of arms of the House of Kastrioti depicts the hexagram on a pile argent over the double-headed eagle.
Usage in Theosophy
The Star of David is used in the seal and the emblem of the Theosophical Society (founded in 1875). Although it is the most prominent element, it appears alongside other religious symbols, including the Swastika, the Ankh, the Aum, and the Ouroboros. The Star of David is also known as the Seal of Solomon, its original name, which remained in regular use until around 50 years ago.
Usage in occultism
The hexagram, like the pentagram, was and is used in practices of the occult and ceremonial magic and is attributed to the 7 "old" planets outlined in astrology.
The six-pointed star is commonly used both as a talisman and for conjuring spirits and spiritual forces in diverse forms of occult magic. In the book The History and Practice of Magic, Vol. 2, the six-pointed star is called the talisman of Saturn and it is also referred to as the Seal of Solomon. Details are given in this book on how to make these symbols and the materials to use.
Traditionally, the Hexagram can be seen as the combination of the four elements. Fire is symbolized as an upwards pointing triangle, while Air (its elemental opposite) is also an upwards pointing triangle, but with a horizontal line through its center. Water is symbolized as a downwards pointing triangle, while Earth (its elemental opposite) is also a downwards pointing triangle, but with a horizontal line through its center. Combining the symbols of fire and water creates a hexagram (six-pointed star). The same follows when combining the symbols of air and earth. Both hexagrams combined are called a double-hexagram. Thus, a combination of the elements is created.
In Rosicrucian and Hermetic Magic, the seven Traditional planets correspond with the angles and the center of the Hexagram as follows, in the same patterns as they appear on the Sephiroth and on the Tree of Life. Saturn, although formally attributed to the Sephira of Binah, within this framework nonetheless occupies the position of Daath.
In alchemy, the two triangles represent the reconciliation of the opposites of fire and water.
The hexagram is used as a sign for quintessence, the fifth element.
Usage in Freemasonry
The hexagram is featured within and on the outside of many Masonic temples as a decoration. It may have been found within the structures of King Solomon's temple, from which Freemasons are inspired in their philosophies and studies. Like many other symbols in Freemasonry, the deciphering of the hexagram is non-dogmatic and left to the interpretation of the individual.
Other uses
Flags
The flag of Australia had a six pointed star to represent the six federal states from 1901 to 1908.
The Ulster Banner flag of Northern Ireland, used from 1953 to 1972, bears a six-pointed star representing the six counties that make up Northern Ireland. The star of the Ulster Banner is not the compound of two equilateral triangles; its intersection is not a regular hexagon.
A flag used by rebels during the Whiskey Insurrection in South-Western Pennsylvania, 1794.
A hexagram appears on the Dardania Flag, proposed for Kosovo by the Democratic League of Kosovo.
From 1914 to 1960, the flag of Nigeria depicted a green hexagram surrounding a crown, with the white word "Nigeria" under it, on a red disc.
The flag of Israel has a blue hexagram in the middle.
Other symbolic uses
A six-pointed star of interlocking triangles has been used for thousands of years as an indication that a sword was made, and "proofed", in the Damascus area of the Middle East. It is still a required proof mark on all official UK and United States military swords, though the blades themselves no longer come from the Middle East.
In southern Germany the hexagram can be found as part of tavern anchors. It is a symbol for the tapping of beer and a sign of the brewer's guild. In German this is called "Bierstern" (beer star) or "Brauerstern" (brewer's star).
A six-point star is used as an identifying mark of the Folk Nation alliance of US street gangs.
The Indian sage and seer Sri Aurobindo used it—e.g. on the cover of his books—as a symbol of the aspiration of humanity calling to the Divine to descend into life (the triangle with the point at the top), and the descent of the Divine into the Earth's atmosphere and all individuals in response to that calling (the triangle with the point at the bottom). (This was explained by the Mother, his spiritual partner in Her 14-volume Agenda and elsewhere by Sri Aurobindo in his writings.)
Man-made and natural occurrences
The main runways and taxiways of Heathrow Airport were arranged roughly in the shape of a hexagram.
A hexagram in a circle is incorporated prominently in the supports of Worthing railway station's platform 2 canopy (UK).
An extremely large, free-standing wood hexagram stands in the central park of the Municipality of El Tejar, Guatemala. Additionally, every year at Christmastime the residents of El Tejar erect a giant artificial Christmas tree in front of their municipal building, with a hexagram sitting at its peak.
Unicode
In Unicode, the "Star of David" symbol ✡ is encoded in U+2721.
Other hexagrams
The figure {6/3} can be shown as a compound of three digons.
Other hexagrams can be constructed as a continuous path.
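The constructions mentioned above can be sketched numerically. The following Python example is an illustration only (the function name and printed checks are assumptions, not part of the article): it places the six vertices of a regular hexagon on a circle and repeatedly steps k vertices ahead, so that k = 2 yields the two overlapping triangles of the ordinary hexagram {6/2} and k = 3 yields the three digons of {6/3}.

<syntaxhighlight lang="python">
import math

def star_figure(n=6, k=2, radius=1.0):
    """Return the component polygons of the star figure {n/k}: starting
    from each unused vertex of a regular n-gon, repeatedly step k vertices
    ahead until the path closes. For {6/2} this yields two triangles
    (the hexagram); for {6/3} it yields three digons."""
    points = [(radius * math.cos(2 * math.pi * i / n),
               radius * math.sin(2 * math.pi * i / n)) for i in range(n)]
    seen, components = set(), []
    for start in range(n):
        if start in seen:
            continue
        cycle, i = [], start
        while i not in seen:
            seen.add(i)
            cycle.append(points[i])
            i = (i + k) % n
        components.append(cycle)
    return components

print(len(star_figure(6, 2)))  # 2 components -> two overlapping triangles
print(len(star_figure(6, 3)))  # 3 components -> three digons
</syntaxhighlight>

In general the number of components equals the greatest common divisor of n and k, which is why {6/2} splits into two triangles and {6/3} into three digons.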
See also
Pentagram
Star of Bethlehem
Star of David
Seal of Solomon
Heptagram
The Thelemic Unicursal hexagram
Pascal's mystic hexagram
Hexagram (I Ching)
Sacred Geometry
Footnotes
References
Graham, Dr. O.J. The Six-Pointed Star: Its Origin and Usage 4th ed. Toronto: The Free Press 777, 2001.
Grünbaum, B. and G. C. Shephard; Tilings and Patterns, New York: W. H. Freeman & Co. (1987).
Grünbaum, B.; Polyhedra with Hollow Faces, Proc of NATO-ASI Conference on Polytopes ... etc. (Toronto 1993), ed T. Bisztriczky et al., Kluwer Academic (1994) pp. 43–70.
Wessely, l.c. pp. 31, 112
External links
Hexagram (MathWorld)
The Archetypal Mandala of India
Thesis from Munich University on hexagram as brewing symbol
Art history
Church architecture
Iconography
Ornaments
6 (number)
Rotational symmetry
Synagogue architecture
Visual motifs
06 | Hexagram | [
"Physics",
"Mathematics"
] | 3,351 | [
"Visual motifs",
"Symbols",
"Symmetry",
"Rotational symmetry"
] |
28,635,039 | https://en.wikipedia.org/wiki/Balancing%20of%20rotating%20masses | The balancing of rotating bodies is important to avoid vibration. In heavy industrial machines such as gas turbines and electric generators, vibration can cause catastrophic failure, as well as noise and discomfort. In the case of a narrow wheel, balancing simply involves moving the center of gravity to the centre of rotation. For a system to be in complete balance both force and couple polygons should be close in order to prevent the effect of centrifugal force. It is important to design the machine parts wisely so that the unbalance is reduced up to the minimum possible level or eliminated completely.
Static balance
Static balance occurs when the centre of gravity of an object is on the axis of rotation. The object can therefore remain stationary, with the axis horizontal, without the application of any braking force. It has no tendency to rotate due to the force of gravity. This is seen in bike wheels where the reflective plate is placed opposite the valve to bring the centre of mass to the centre of the wheel. Other examples are grindstones, discs or car wheels. Verifying static balance requires the freedom for the object to rotate with as little friction as possible.
This may be provided with sharp, hardened knife edges, adjusted to be both horizontal and parallel. Alternatively, a pair of free-running ball bearing races is substituted for each knife edge, which relaxes the horizontal and parallel requirement. The object is either axially symmetrical like a wheel or must be provided with an axle. It is slowly spun, and when it comes to rest, it will stop at a random position if statically balanced. If not, an adhesive or clip-on weight is securely attached to achieve balance.
Dynamic balance
A rotating system of mass is in dynamic balance when the rotation does not produce any resultant centrifugal force or couple. The system rotates without requiring the application of any external force or couple, other than that required to support its weight. If a system is initially unbalanced, to avoid the stress upon the bearings caused by the centrifugal couple, counterbalancing weights must be added.
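These conditions can be expressed numerically. The following Python sketch is an illustration only (the function name and example figures are assumptions, not from the article): static balance requires the vector sum of the mass–radius products to vanish, while dynamic balance additionally requires the sum of their moments about a reference plane along the shaft to vanish.

<syntaxhighlight lang="python">
import math

def balance_state(masses, tol=1e-9):
    """masses: list of (m, r, angle_deg, z) giving the mass, radius,
    angular position and axial position of each attached point mass.
    Static balance : the sum of m*r vectors (centrifugal force per omega^2) is zero.
    Dynamic balance: the sum of m*r*z vectors (couple about a reference plane)
                     is also zero."""
    fx = fy = cx = cy = 0.0
    for m, r, ang, z in masses:
        a = math.radians(ang)
        fx += m * r * math.cos(a)
        fy += m * r * math.sin(a)
        cx += m * r * z * math.cos(a)
        cy += m * r * z * math.sin(a)
    statically = math.hypot(fx, fy) < tol
    dynamically = statically and math.hypot(cx, cy) < tol
    return statically, dynamically

# Two equal masses 180 degrees apart are statically balanced, but if they sit
# in different planes along the shaft they still produce a rocking couple.
print(balance_state([(1.0, 0.1, 0, 0.0), (1.0, 0.1, 180, 0.0)]))  # (True, True)
print(balance_state([(1.0, 0.1, 0, 0.0), (1.0, 0.1, 180, 0.5)]))  # (True, False)
</syntaxhighlight>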
This is seen when a bicycle wheel gets a buckled rim. The wheel will not rotate to a preferred position but because some rim mass is offset there is a wobbling couple leading to a dynamic vibration. If the spokes on this wheel cannot be adjusted to center the rim, an alternative method is used to provide dynamic balance.
To correct dynamic imbalance, there are three requirements: 1) a means of spinning the object; 2) a frame that allows the object to vibrate perpendicular to its rotation axis; and 3) a means to detect the imbalance by sensing its vibration displacement, vibration velocity or (ideally) its instantaneous acceleration.
If the object is disk-like, weights may be attached near the rim to reduce the sensed vibration. This is called one-plane dynamic balancing. If the object is cylinder or rod-like, it may be preferable to execute two-plane balancing, which holds one end's spin axis steady, while the other end's vibration is reduced. Then the near end is freed to vibrate, while the far end spin axis is fixed, and vibration is again reduced. In precision work, this two plane measurement may be iterated.
Dynamic balancing was formerly the province of expensive equipment, but users with only an occasional need to quench running vibrations may use the built-in accelerometers of a smartphone and a spectrum-analysis application (see reference 3 for an example). A less tedious means of achieving dynamic balance requires just four measurements: 1) an initial imbalance reading; 2) an imbalance reading with a test mass attached at a reference point; 3) a reading with the test mass moved 120 degrees ahead of the reference point; and 4) a reading with the test mass moved 120 degrees behind the reference point. These four readings are sufficient to define the size and position of a final mass that achieves good balance (see reference 4).
For production balancing, the phase of dynamic vibration is observed with its amplitude. This allows one-shot dynamic balance to be achieved with a single spin, by adding a mass of internally calculated size in a calculated position. This is the method commonly used to dynamically balance automobile wheels with tire installed by means of clip-on lead (or currently zinc) 'wheel weights'.
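A minimal sketch of the influence-coefficient calculation that underlies such single-spin balancing is shown below. It is an illustration under stated assumptions, not a description of any particular balancing instrument: vibration readings are treated as complex phasors (amplitude and phase), and the numerical values in the example are invented.

<syntaxhighlight lang="python">
import cmath

def correction_mass(v0, v1, trial_mass, trial_angle_deg):
    """Single-plane influence-coefficient balancing.
    v0: initial vibration as (amplitude, phase_deg)
    v1: vibration with the trial mass attached, same format
    Returns (mass, angle_deg) of the correction weight that cancels v0."""
    V0 = cmath.rect(v0[0], cmath.pi * v0[1] / 180)
    V1 = cmath.rect(v1[0], cmath.pi * v1[1] / 180)
    T = cmath.rect(trial_mass, cmath.pi * trial_angle_deg / 180)
    influence = (V1 - V0) / T      # vibration produced per unit trial mass
    correction = -V0 / influence   # mass phasor that cancels the initial vibration
    return abs(correction), cmath.phase(correction) * 180 / cmath.pi

# Example: a 5 g trial mass at 0 degrees changes the reading from
# 7.2 mm/s at 30 degrees to 4.0 mm/s at 110 degrees.
print(correction_mass((7.2, 30), (4.0, 110), 5.0, 0.0))
</syntaxhighlight>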
Unbalanced systems
When an unbalanced system is rotating, periodic linear and/or torsional forces are generated which are perpendicular to the axis of rotation. The periodic nature of these forces is commonly experienced as vibration. These off-axis vibration forces may exceed the design limits of individual machine elements, reducing the service life of these parts. For instance, a bearing may be subjected to perpendicular torsion forces that would not occur in a nominally balanced system, or the instantaneous linear forces may exceed the limits of the bearing. Such excessive forces will cause failure in bearings in short time periods. Shafts with unbalanced masses can be bent by the forces and experience fatigue failure.
Under conditions where the rotating speed is very high even though the mass is low, as in gas turbines or jet engines, or where the rotating speed is low but the mass is high, as in ship propellers, the balance of the rotating system needs careful consideration, because imbalance may generate large vibrations and cause failure of the whole system.
References
https://www.instructables.com/Dynamic-Motor-Balancing-with-Sugru-and-an-iPhone/
http://www.conradhoffman.com/Balancing.xls
External links
Mechanical vibrations
Torsional vibration | Balancing of rotating masses | [
"Physics",
"Engineering"
] | 1,124 | [
"Structural engineering",
"Mechanics",
"Mechanical vibrations"
] |
28,640,072 | https://en.wikipedia.org/wiki/Hellenic%20Register%20of%20Shipping | Hellenic Register of Shipping was founded in 1919 and is active in the industry inspections and certifications of facilities and equipment as well as the Certification of Management Systems. HRS is an international NGO -non-governmental International Organization-, dedicated to the safeguarding of both life and property at sea, the prevention of marine pollution and lastly the quality assurance in the industry.
The Hellenic Register of Shipping monitors about 5,500 passenger ships, for which it issues the general inspection protocol, as well as several large yachts. These vessels are covered by international conventions (SOLAS, LL, MARPOL, etc.) or by European Directive 98/18, under which the Greek classification society certifies the class that is mandatory for each ship.
In industry, it operates either as an organization accredited and authorized by public authorities or as an independent third-party organization (in cases where the conditions of works contracts require it).
External links
Hellenic Register of Shipping website (In Greek/English)
Ship classification societies | Hellenic Register of Shipping | [
"Engineering"
] | 197 | [
"Marine engineering organizations",
"Ship classification societies"
] |
28,643,345 | https://en.wikipedia.org/wiki/Viability%20assay | A viability assay is an assay that is created to determine the ability of organs, cells or tissues to maintain or recover a state of survival. Viability can be distinguished from the all-or-nothing states of life and death by the use of a quantifiable index that ranges between the integers of 0 and 1 or, if more easily understood, the range of 0% and 100%. Viability can be observed through the physical properties of cells, tissues, and organs. Some of these include mechanical activity, motility, such as with spermatozoa and granulocytes, the contraction of muscle tissue or cells, mitotic activity in cellular functions, and more. Viability assays provide a more precise basis for measurement of an organism's level of vitality.
Viability assays can lead to more findings than the difference of living versus nonliving. These techniques can be used to assess the success of cell culture techniques, cryopreservation techniques, the toxicity of substances, or the effectiveness of substances in mitigating effects of toxic substances.
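As a minimal sketch of the 0-to-1 viability index described above (an illustration only, not a prescribed protocol; the function name and counts are assumptions), viability can be reported as the fraction of counted cells scored as alive, for example in a dye-exclusion count:

<syntaxhighlight lang="python">
def viability_index(live_count, dead_count):
    """Viability on the 0-to-1 scale: the fraction of counted cells scored
    as viable (e.g. cells that exclude a dye such as trypan blue in a
    counting chamber)."""
    total = live_count + dead_count
    if total == 0:
        raise ValueError("no cells counted")
    return live_count / total

index = viability_index(live_count=182, dead_count=18)
print(f"viability: {index:.2f} ({index:.0%})")  # viability: 0.91 (91%)
</syntaxhighlight>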
Common methods
Though simple visual techniques of observing viability can be useful, it can be difficult to thoroughly measure an organism's/part of an organism's viability merely using the observation of physical properties. However, there are a variety of common protocols utilized for further observation of viability using assays.
Tetrazolium reduction: One useful way to locate and measure viability is to complete a tetrazolium reduction assay. In this assay, metabolically active cells reduce the tetrazolium salt to a coloured formazan product, allowing viable cells in a specimen to be distinguished from non-viable ones.
Resazurin reduction: Resazurin reduction assays work much like tetrazolium assays, except that viable cells reduce the dye resazurin, and it is this redox change that reports cell viability.
Protease viability marker: One can look at protease function in specimens if they wish to target viability in cells; this practice in research is known as "Protease Viability Marker Assay Concept". The actions of protease cease once a cell dies, so a clear-cut line is drawn in determining cell viability when using this technique.
ATP: ATP is the cell's familiar energy-carrying molecule, and this familiarity carries over to viability assays based on it. The ATP assay concept is a well-known technique for determining the viability of cells by measuring ATP with the firefly luciferase reaction.
Sodium-potassium ratio: Another kind of assay practices the examination of the ratio of potassium to sodium in cells to serve as an index of viability. If the cells do not have high intracellular potassium and if intracellular sodium is low, then (1) the cell membrane may not be intact, and/or (2) the sodium-potassium pump may not be operating well.
Cytolysis or membrane leakage: This category includes the lactate dehydrogenase assay. Assays such as these contain a stable enzyme common in all cells that can be readily detected when cell membranes are no longer intact. Examples of this type of assay include propidium iodide, trypan blue, and 7-Aminoactinomycin D (7-AAD).
Mitochondrial activity or caspase: Resazurin and Formazan (MTT/XTT) can assay for various stages in the apoptosis process that foreshadows cell death.
Functional: Assays of cell function will be highly specific to the types of cells being assayed. For example, motility is a widely used assay of sperm cell function. Gamete survival can generally be used to assay fertility. Red blood cells have been assayed in terms of deformability, osmotic fragility, hemolysis, ATP level, and hemoglobin content. For transplantable whole organs, the ultimate assay is the ability to sustain life after transplantation, an assay which is not helpful in preventing transplantation of non-functional organs.
Genomic and proteomic: Cells can be assayed for activation of stress pathways using DNA microarrays and protein chips.
Flow Cytometry: Automation allows for analysis of thousands of cells per second.
As with many kinds of viability assays, quantitative measures of physiological function do not indicate whether damage repair and recovery is possible. An assay of the ability of a cell line to adhere and divide may be more indicative of incipient damage than membrane integrity.
Frogging and tadpoling
"Frogging" is a type of viability assay method that utilizes an agar plate for its environment and consists of plating serial dilutions by pinning them after they have been diluted in liquid. Some of its limitations include that it does not account for total viability and it is not particularly sensitive to low-viability assays; however, it is known for its quick pace. "Tadpoling", which is a method practiced after the development of "frogging", is similar to the "frogging" method, but its test cells are diluted in liquid and then kept in liquid through the examination process. The "tadpoling" method can be used to measure culture viability accurately, which is what depicts its main separation from "frogging".
List of viability assay methods
Calcein AM
Clonogenic assay
Ethidium homodimer assay
Evans blue
Fluorescein diacetate hydrolysis/Propidium iodide staining (FDA/PI staining)
Flow cytometry
Formazan-based assays (MTT/XTT)
Green fluorescent protein
Lactate dehydrogenase (LDH)
Methyl violet
Neutral red uptake (vital stain)
Propidium iodide, DNA stain that can differentiate necrotic, apoptotic and normal cells.
Resazurin
TUNEL assay
See also
Cytotoxicity
Vital stain
References
Further reading
Cell biology
Cell culture techniques
Molecular biology techniques
Microbiology techniques
Toxicology tests | Viability assay | [
"Chemistry",
"Biology",
"Environmental_science"
] | 1,265 | [
"Biochemistry methods",
"Toxicology",
"Cell biology",
"Cell culture techniques",
"Molecular biology techniques",
"Microbiology techniques",
"Molecular biology",
"Toxicology tests"
] |
18,857,868 | https://en.wikipedia.org/wiki/Virtual%20Socket%20Interface%20Alliance | Virtual Socket Interface Alliance (VSIA) is a body of SIP (Semiconductor / Silicon intellectual property) standards.
History
VSIA was founded in 1996 and dissolved in 2008, and was an open, international organization of companies such as Mentor Graphics, Cadence Design Systems, Magma Design Automation, ARM Holdings, and Synopsys, from various segments of the SoC (System-on-a-chip) industry.
Importance of VSIA
VSIA's mission was to enhance the productivity of the SoC design community dramatically. VSIA developed an international standard, the QIP (Quality Intellectual Property) metric, for measuring SIP quality and examining the practices used to design, integrate and support the IP. Beyond measuring quality, VSIA also worked on related issues such as IP protection, IP transfer, IP integration and IP reuse standards (IP hardening is required for easy IP reuse) for integrated circuit design.
VSIA was founded and driven by Executive Director Stan Baker (1934-2022), a prominent electronics industry journalist. During the 1990s, Baker and his team, including Kathy Rogers, were a driving force behind several electronics industry consortia that worked to set industry standards to help propel the development of single-chip devices, including cell phones.
Example: IP trading
Hong Kong Science and Technology Parks Corporation (HKSTP), which was set up by the Hong Kong government, joined VSIA as a member in 2006. HKSTP and the Hong Kong University of Science and Technology (HKUST) started to develop a SIP verification and quality measures framework in 2005.
The objective is to develop a technical framework for SIP quality measures and evaluation based on QIP. HKSTP provides QIP services, and a SIP trading platform for different semiconductor vendors, developers, and SIP providers.
References
Archive of documents from the VSIA official website
News about VSIA’s QIP metric
Project of SIP Trading platform with QIP by VSIA
QIP services by HKSTP
Official website of IP Service centre by HKSTP
Technology consortia
Semiconductors | Virtual Socket Interface Alliance | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 424 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
515,683 | https://en.wikipedia.org/wiki/Carbanion | In organic chemistry, a carbanion is an anion in which carbon is negatively charged.
Formally, a carbanion is the conjugate base of a carbon acid:
where B stands for the base. The carbanions formed from deprotonation of alkanes (at an sp3 carbon), alkenes (at an sp2 carbon), arenes (at an sp2 carbon), and alkynes (at an sp carbon) are known as alkyl, alkenyl (vinyl), aryl, and alkynyl (acetylide) anions, respectively.
Carbanions have a concentration of electron density at the negatively charged carbon, which, in most cases, reacts efficiently with a variety of electrophiles of varying strengths, including carbonyl groups, imines/iminium salts, halogenating reagents (e.g., N-bromosuccinimide and diiodine), and proton donors. A carbanion is one of several reactive intermediates in organic chemistry. In organic synthesis, organolithium reagents and Grignard reagents are commonly treated and referred to as "carbanions." This is a convenient approximation, although these species are generally clusters or complexes containing highly polar, but still covalent, metal–carbon bonds (Mδ+–Cδ−) rather than true carbanions.
Geometry
Absent π delocalization, the negative charge of a carbanion is localized in an spx hybridized orbital on carbon as a lone pair. As a consequence, localized alkyl, alkenyl/aryl, and alkynyl carbanions assume trigonal pyramidal, bent, and linear geometries, respectively. By Bent's rule, placement of the carbanionic lone pair electrons in an orbital with significant s character is favorable, accounting for the pyramidalized and bent geometries of alkyl and alkenyl carbanions, respectively. Valence shell electron pair repulsion (VSEPR) theory makes similar predictions. This contrasts with carbocations, which have a preference for unoccupied nonbonding orbitals of pure atomic p character, leading to planar and linear geometries, respectively, for alkyl and alkenyl carbocations.
However, delocalized carbanions may deviate from these geometries. Instead of residing in a hybrid orbital, the carbanionic lone pair may instead occupy a p orbital (or an orbital of high p character). A p orbital has a more suitable shape and orientation to overlap with the neighboring π system, resulting in more effective charge delocalization. As a consequence, alkyl carbanions with neighboring conjugating groups (e.g., allylic anions, enolates, nitronates, etc.) are generally planar rather than pyramidized. Likewise, delocalized alkenyl carbanions sometimes favor a linear instead of bent geometry. More often, a bent geometry is still preferred for substituted alkenyl anions, though the linear geometry is only slightly less stable, resulting in facile equilibration between the (E) and (Z) isomers of the (bent) anion through a linear transition state. For instance, calculations indicate that the parent vinyl anion or ethylenide, , has an inversion barrier of , while allenyl anion or allenide, ), whose negative charge is stabilized by delocalization, has an inversion barrier of only , reflecting stabilization of the linear transition state by better π delocalization.
Trends and occurrence
Carbanions are typically nucleophilic and basic. The basicity and nucleophilicity of carbanions are determined by the substituents on carbon. These include
the inductive effect. Electronegative atoms adjacent to the charge will stabilize the charge;
the extent of conjugation of the anion. Resonance effects can stabilize the anion. This is especially true when the anion is stabilized as a result of aromaticity.
Geometry also affects the orbital hybridization of the charge-bearing carbanion. The greater the s-character of the charge-bearing atom, the more stable the anion.
Carbanions, especially ones derived from weak carbon acids that do not benefit sufficiently from the two stabilizing factors listed above, are generally oxygen- and water-sensitive to varying degrees. While some merely degrade and decompose over several weeks or months upon exposure to air, others may react vigorously and exothermically with air almost immediately to spontaneously ignite (pyrophoricity). Among commonly encountered carbanionic reagents in the laboratory, ionic salts of hydrogen cyanide (cyanides) are unusual in being indefinitely stable under dry air and hydrolyzing only very slowly in the presence of moisture.
Organometallic reagents like butyllithium (hexameric cluster, ) or methylmagnesium bromide (ether complex, ) are often referred to as "carbanions," at least in a retrosynthetic sense. However, they are really clusters or complexes containing a polar covalent bond, though with electron density heavily polarized toward the carbon atom. The more electropositive the attached metal atom, the closer the behavior of the reagent is to that of a true carbanion.
In fact, true carbanions (i.e., a species not attached to a stabilizing covalently bound metal) without electron-withdrawing and/or conjugating substituents are not available in the condensed phase, and these species must be studied in the gas phase. For some time, it was not known whether simple alkyl anions could exist as free species; many theoretical studies predicted that even the methanide anion should be an unbound species (i.e., the electron affinity of was predicted to be negative). Such a species would decompose immediately by spontaneous ejection of an electron and would therefore be too fleeting to observe directly by mass spectrometry. However, in 1978, the methanide anion was unambiguously synthesized by subjecting ketene to an electric discharge, and the electron affinity (EA) of was determined by photoelectron spectroscopy to be +1.8 kcal/mol, making it a bound species, but just barely so. The structure of was found to be pyramidal (C3v) with an H−C−H angle of 108° and inversion barrier of 1.3 kcal/mol, while was determined to be planar (D3h point group).
Simple primary, secondary and tertiary sp3 carbanions (e.g., ethanide , isopropanide , and t-butanide ) were subsequently determined to be unbound species (the EAs of , , are −6, −7.4, −3.6 kcal/mol, respectively), indicating that α substitution is destabilizing. However, relatively modest stabilizing effects can render them bound. For example, cyclopropyl and cubyl anions are bound due to increased s character of the lone pair orbital, while neopentyl and phenethyl anions are also bound, as a result of negative hyperconjugation of the lone pair with the β-substituent (nC → σ*C–C). The same holds true for anions with benzylic and allylic stabilization. Gas-phase carbanions that are sp2 and sp hybridized are much more strongly stabilized and are often prepared directly by gas-phase deprotonation.
In the condensed phase only carbanions that are sufficiently stabilized by delocalization have been isolated as truly ionic species. In 1984, Olmstead and Power presented the lithium crown ether salt of the triphenylmethanide carbanion from triphenylmethane, n-butyllithium and 12-crown-4 (which forms a stable complex with lithium cations) at low temperatures:
Adding n-butyllithium to triphenylmethane (pKa in DMSO of = 30.6) in THF at low temperatures followed by 12-crown-4 results in a red solution and the salt complex [Li(12-crown-4)] precipitates at −20 °C. The central C–C bond lengths are 145 pm with the phenyl ring propellered at an average angle of 31.2°. This propeller shape is less pronounced with a tetramethylammonium counterion. A crystal structure for the analogous diphenylmethanide anion ([Li(12-crown-4)]), prepared from diphenylmethane (pKa in DMSO of = 32.3), was also obtained. However, the attempted isolation of a complex of the benzyl anion from toluene (pKa in DMSO of ≈ 43) was unsuccessful, due to rapid reaction of the formed anion with the THF solvent. The free benzyl anion has also been generated in the solution phase by pulse radiolysis of dibenzylmercury.
Early in 1904 and 1917, Schlenk prepared two red-colored salts, formulated as and , respectively, by metathesis of the corresponding organosodium reagent with tetramethylammonium chloride. Since tetramethylammonium cations cannot form a chemical bond to the carbanionic center, these species are believed to contain free carbanions. While the structure of the former was verified by X-ray crystallography almost a century later, the instability of the latter has so far precluded structural verification. The reaction of the putative "" with water was reported to liberate toluene and tetramethylammonium hydroxide and provides indirect evidence for the claimed formulation.
One tool for the detection of carbanions in solution is proton NMR. A spectrum of cyclopentadiene in DMSO shows four vinylic protons at 6.5 ppm and two methylene bridge protons at 3 ppm whereas the cyclopentadienyl anion has a single resonance at 5.50 ppm. The use of and NMR has provided structural and reactivity data for a variety of organolithium species.
Carbon acids
Any compound containing hydrogen can, in principle, undergo deprotonation to form its conjugate base. A compound is a carbon acid if deprotonation results in loss of a proton from a carbon atom. Compared to compounds typically considered to be acids (e.g., mineral acids like nitric acid, or carboxylic acids like acetic acid), carbon acids are typically many orders of magnitude weaker, although exceptions exist (see below). For example, benzene is not an acid in the classical Arrhenius sense, since its aqueous solutions are neutral. Nevertheless, it is a very weak Brønsted acid, with an estimated pKa of 49, which may undergo deprotonation in the presence of a superbase like the Lochmann–Schlosser base (n-butyllithium and potassium t-butoxide). As conjugate acid–base pairs, the factors that determine the relative stability of carbanions also determine the ordering of the pKa values of the corresponding carbon acids. Furthermore, pKa values allow the prediction of whether a proton transfer process will be thermodynamically favorable: In order for the deprotonation of an acidic species HA with base to be thermodynamically favorable (K > 1), the relationship pKa(BH) > pKa(AH) must hold.
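As a small numerical illustration of the relationship just stated (not from the original article; the pKa values used are those given in the text and table below), the equilibrium constant for the proton transfer HA + B⁻ ⇌ A⁻ + BH is 10 raised to the power pKa(BH) − pKa(AH):

<syntaxhighlight lang="python">
def deprotonation_K(pKa_acid, pKa_conjugate_acid_of_base):
    """Equilibrium constant for HA + B- <=> A- + BH,
    K = 10 ** (pKa(BH) - pKa(HA)); K > 1 means deprotonation of HA by B-
    is thermodynamically favorable."""
    return 10 ** (pKa_conjugate_acid_of_base - pKa_acid)

# Acetylacetone (pKa 13.3 in DMSO) deprotonated by the DMSO anion
# (pKa of DMSO itself: 35.1): hugely favorable.
print(f"{deprotonation_K(13.3, 35.1):.1e}")   # ~6.3e+21
# Toluene (pKa ~43) deprotonated by hydroxide (pKa of water in DMSO: 31.4):
# strongly unfavorable.
print(f"{deprotonation_K(43, 31.4):.1e}")     # ~2.5e-12
</syntaxhighlight>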
These values below are pKa values determined in dimethylsulfoxide (DMSO), which has a broader useful range (~0 to ~35) than values determined in water (~0 to ~14) and better reflect the basicity of the carbanions in typical organic solvents. Values below less than 0 or greater than 35 are indirectly estimated; hence, the numerical accuracy of these values is limited. Aqueous pKa values are also commonly encountered in the literature, particularly in the context of biochemistry and enzymology. Moreover, aqueous values are often given in introductory organic chemistry textbooks for pedagogical reasons, although the issue of solvent dependence is often glossed over. In general, pKa values in water and organic solvent diverge significantly when the anion is capable of hydrogen bonding. For instance, in the case of water, the values differ dramatically: the pKa in water of water is 14.0, while the pKa in DMSO of water is 31.4, reflecting the differing ability of water and DMSO to stabilize the hydroxide anion. On the other hand, for cyclopentadiene, the numerical values are comparable: the pKa in water is 15, while the pKa in DMSO is 18.
{|align="center" class="wikitable collapsible" style="background: #ffffff; text-align: center;"
|+Carbon acid acidities by pKa in DMSO.These values may differ significantly from aqueous pKa values.
|-
!Name
!Formula
!Structural formula
!pKa in DMSO
|-
|Cyclohexane
|
|
|~60
|-
|Methane
|
|
|~56
|-
|Benzene
|
|
|~49
|-
|Propene
|
|
|~44
|-
|Toluene
|
|
|~43
|- style="background: lightgray;"
|Ammonia (N–H)
|
|
|~41
|-
|Dithiane
|
|
|~39
|-
|Dimethyl sulfoxide
|
|
|35.1
|-
|Diphenylmethane
|
|
|32.3
|-
|Acetonitrile
|
|
|31.3
|- style="background: lightgray;"
|Aniline (N–H)
|
|
|30.6
|-
|Triphenylmethane
|
|
|30.6
|-
|Fluoroform
|
|
|30.5
|-
|Xanthene
|
|
|30.0
|- style="background: lightgray;"
|Ethanol (O–H)
|
|
|29.8
|-
|Phenylacetylene
|
|
|28.8
|-
|Thioxanthene
|
|
|28.6
|-
|Acetone
|
|
|26.5
|-
|Chloroform
|
|
|24.4
|-
|Benzoxazole
|
|
|24.4
|-
|Fluorene
|
|
|22.6
|-
|Indene
|
|
|20.1
|-
|Cyclopentadiene
|
|
|18.0
|-
|Nitromethane
|
|
|17.2
|-
|Diethyl malonate
|
|
|16.4
|-
|Acetylacetone
|
|
|13.3
|-
|Hydrogen cyanide
|HCN
|
|12.9
|- style="background: lightgray;"
|Acetic acid (O–H)
|
|
|12.6
|-
|Malononitrile
|
|
|11.1
|-
|Dimedone
|
|
|10.3
|-
|Meldrum's acid
|
|
|7.3
|-
|Hexafluoroacetylacetone
|
|
|2.3
|- style="background: lightgray;"
|Hydrogen chloride (Cl–H)
|HCl
|HCl (g)
|−2.0
|-
|Triflidic acid
|
|
|~ −16
|-
|}
Note that acetic acid, ammonia, aniline, ethanol, and hydrogen chloride are not carbon acids, but are common acids shown for comparison.
As indicated by the examples above, acidity increases (pKa decreases) when the negative charge is delocalized. This effect occurs when the substituents on the carbanion are unsaturated and/or electronegative. Although carbon acids are generally thought of as acids that are much weaker than "classical" Brønsted acids like acetic acid or phenol, the cumulative (additive) effect of several electron accepting substituents can lead to acids that are as strong or stronger than the inorganic mineral acids. For example, trinitromethane , tricyanomethane , pentacyanocyclopentadiene , and fulminic acid HCNO, are all strong acids with aqueous pKa values that indicate complete or nearly complete proton transfer to water. Triflidic acid, with three strongly electron-withdrawing triflyl groups, has an estimated pKa well below −10. On the other end of the scale, hydrocarbons bearing only alkyl groups are thought to have pKa values in the range of 55 to 65. The range of acid dissociation constants for carbon acids thus spans over 70 orders of magnitude.
The acidity of the α-hydrogen in carbonyl compounds enables these compounds to participate in synthetically important C–C bond-forming reactions including the aldol reaction and Michael addition.
Chiral carbanions
With the molecular geometry of a carbanion described as a trigonal pyramid, the question arises whether carbanions can display chirality: if the activation barrier for inversion of this geometry is too low, any attempt at introducing chirality will end in racemization, similar to nitrogen inversion. However, solid evidence exists that carbanions can indeed be chiral, for example in research carried out with certain organolithium compounds.
The first ever evidence for the existence of chiral organolithium compounds was obtained in 1950. Reaction of chiral 2-iodooctane with s-butyllithium in petroleum ether at −70 °C followed by reaction with dry ice yielded mostly racemic 2-methylbutyric acid but also an amount of optically active 2-methyloctanoic acid, which could only have formed from likewise optically active 2-methylheptyllithium, in which the carbon atom linked to lithium is the carbanion:
On heating the reaction to 0 °C the optical activity is lost. More evidence followed in the 1960s. A reaction of the cis isomer of 2-methylcyclopropyl bromide with s-butyllithium again followed by carboxylation with dry ice yielded cis-2-methylcyclopropylcarboxylic acid. The formation of the trans isomer would have indicated that the intermediate carbanion was unstable.
In the same manner the reaction of (+)-(S)-l-bromo-l-methyl-2,2-diphenylcyclopropane with n-butyllithium followed by quenching with methanol resulted in product with retention of configuration:
Of recent date are chiral methyllithium compounds:
The phosphate 1 contains a chiral group with a hydrogen and a deuterium substituent. The stannyl group is replaced by lithium to intermediate 2 which undergoes a phosphate–phosphorane rearrangement to phosphorane 3 which on reaction with acetic acid gives alcohol 4. Once again in the range of −78 °C to 0 °C the chirality is preserved in this reaction sequence. (Enantioselectivity was determined by NMR spectroscopy after derivatization with Mosher's acid.)
History
A carbanionic structure first made an appearance in the reaction mechanism for the benzoin condensation as correctly proposed by Clarke and Arthur Lapworth in 1907. In 1904 Wilhelm Schlenk prepared in a quest for tetramethylammonium (from tetramethylammonium chloride and
) and in 1914 he demonstrated how triarylmethyl radicals could be reduced to carbanions by alkali metals. The phrase "carbanion" was introduced by Wallis and Adams in 1933 as the negatively charged counterpart of the carbonium ion.
See also
Carbocation
Enolates
Nitrile anion
References
External links
Large database of Bordwell pKa values at www.chem.wisc.edu Link
Large database of Bordwell pKa values at daecr1.harvard.edu Link
Anions
Reactive intermediates | Carbanion | [
"Physics",
"Chemistry"
] | 4,298 | [
"Matter",
"Anions",
"Organic compounds",
"Physical organic chemistry",
"Reactive intermediates",
"Ions"
] |
515,898 | https://en.wikipedia.org/wiki/Birth%20rate | Birth rate, also known as natality, is the total number of live human births per 1,000 population for a given period divided by the length of the period in years. The number of live births is normally taken from a universal registration system for births; population counts from a census, and estimation through specialized demographic techniques such as population pyramids. The birth rate (along with mortality and migration rates) is used to calculate population growth. The estimated average population may be taken as the mid-year population.
When the crude death rate is subtracted from the crude birth rate (CBR), the result is the rate of natural increase (RNI). This is equal to the rate of population change (excluding migration).
The total (crude) birth rate (which includes all births)—typically indicated as births per 1,000 population—is distinguished from a set of age-specific rates (the number of births per 1,000 persons, or more usually 1,000 females, in each age group). The first known use of the term "birth rate" in English was in 1856.
The average global birth rate was 17 births per 1,000 total population in 2024. The death rate was 7.9 per 1,000. The RNI was thus 0.91 percent.
In 2012, the average global birth rate was 19.611 per 1,000 according to the World Bank and 19.15 births per 1,000 total population according to the CIA, compared to 20.09 per 1,000 total population in 2007.
The 2024 average of 17 births per 1,000 total population equates to approximately 4.3 births per second or about 260 births per minute for the world. On average, two people in the world die every second or about 121 per minute.
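A minimal sketch of this arithmetic in Python is shown below; the birth and death counts and the population figure used in the example are round illustrative numbers, not official statistics.

<syntaxhighlight lang="python">
def crude_rate(events, mid_year_population):
    """Events (births or deaths) per 1,000 population per year."""
    return events / mid_year_population * 1000

def rate_of_natural_increase(cbr, cdr):
    """RNI in percent: (crude birth rate - crude death rate) / 10."""
    return (cbr - cdr) / 10

# Illustrative figures: 140 million births and 65 million deaths in a
# world population of 8.1 billion.
births, deaths, population = 140e6, 65e6, 8.1e9
cbr = crude_rate(births, population)   # ~17.3 per 1,000
cdr = crude_rate(deaths, population)   # ~8.0 per 1,000
print(f"CBR = {cbr:.1f} per 1,000")
print(f"CDR = {cdr:.1f} per 1,000")
print(f"RNI = {rate_of_natural_increase(cbr, cdr):.2f} %")
print(f"births per second = {births / (365 * 24 * 3600):.1f}")  # ~4.4
</syntaxhighlight>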
In politics
The birth rate is an issue of concern and policy for national governments. Some (including those of Italy and Malaysia) seek to increase the birth rate with financial incentives or provision of support services to new mothers. Conversely, other countries have policies to reduce the birth rate (for example, China's one-child policy which was in effect from 1978 to 2015). Policies to increase the crude birth rate are known as pro-natalist policies, and policies to reduce the crude birth rate are known as anti-natalist policies. Non-coercive measures such as improved information on birth control and its availability have achieved good results in countries such as Iran and Bangladesh.
There has also been discussion on whether bringing women into the forefront of development initiatives will lead to a decline in birth rates. In some countries, government policies have focused on reducing birth rates by improving women's rights, sexual and reproductive health. Typically, high birth rates are associated with health problems, low life expectancy, low living standards, low social status for women and low educational levels. Demographic transition theory postulates that as a country undergoes economic development and social change its population growth declines, with birth rates serving as an indicator.
At the 1974 World Population Conference in Bucharest, Romania, women's issues gained considerable attention. Family programs were discussed, and 137 countries drafted a World Population Plan of Action. As part of the discussion, many countries accepted modern birth control methods such as the birth control pill and the condom while opposing abortion. Population concerns, as well as the desire to include women in the discourse, were discussed; it was agreed that improvements in women's status and initiatives in defense of reproductive health and freedom, the environment, and sustainable socioeconomic development were needed.
Birth rates ranging from 10 to 20 births per 1,000 are considered low, while rates from 40 to 50 births per 1,000 are considered high. There are problems associated with high birth rates, and there may be problems associated with low birth rates. High birth rates may contribute to malnutrition and starvation, stress government welfare and family programs, and, more importantly, store up overpopulation for the future, increase human damage to other species and habitats, and accelerate environmental degradation. Additional problems faced by a country with a high birth rate include educating a growing number of children, creating jobs for these children when they enter the workforce, and dealing with the environmental impact of a large population. Low birth rates may stress the government to provide adequate senior welfare systems and stress families who must support the elders themselves. There will be fewer younger able-bodied people who may be needed to support an ageing population, if a high proportion of older people become disabled and unable to care for themselves.
Population control
In the 20th century, several authoritarian governments sought either to increase or to decrease the birth rates, sometimes through forceful intervention. One of the most notorious natalist policies was that in communist Romania in 1967–1990, during the time of communist leader Nicolae Ceaușescu, who adopted a very aggressive natalist policy which included outlawing abortion and contraception, routine pregnancy tests for women, taxes on childlessness, and legal discrimination against childless people. This policy has been depicted in movies and documentaries (such as 4 Months, 3 Weeks and 2 Days, and Children of the Decree). These policies temporarily increased birth rates for a few years, but this was followed by a decline due to the increased use of illegal abortion. Ceaușescu's policy resulted in over 9,000 deaths of women due to illegal abortions, large numbers of children put into Romanian orphanages by parents who could not cope with raising them, street children in the 1990s (when many orphanages were closed and the children ended on the streets), and overcrowding in homes and schools. Ultimately, this aggressive natalist policy led to a generation who eventually led the Romanian Revolution which overthrew and executed him.
In stark contrast to Ceaușescu's natalist policy was China's one child policy, in effect from 1978 to 2015, which included abuses such as forced abortions. This policy has also been deemed responsible for the common practice of sex-selective abortion which led to an imbalanced sex ratio in the country. Given strict family size limitations and a preference for sons, girls became unwanted in China because they were considered as depriving the parents of the chance of having a son. With the progress of prenatal sex-determination technologies and induced abortion, the one-child policy gradually turned into a one-son policy.
In many countries, the steady decline in birth rates over the past decades can largely be attributed to the significant gains in women's freedoms, such as tackling forced marriage and child marriage, access to contraception, equal access to education, and increased socioeconomic opportunities. Women of all economic, social, religious and educational persuasions are choosing to have fewer children as they are gaining more control over their own reproductive rights. Apart from more children living into their adult years, women are often more ambitious to take up education and paid work outside the home, and to live their own lives rather than just a life of reproduction and unpaid domestic work. Birth rates have fallen due to the introduction of family planning clinics and other access to contraception.
In Bangladesh, one of the poorest countries in the world, women are less likely to have two children (or more) than they were before 1999, according to Australian demographer Jack Caldwell. Bangladeshi women eagerly took up contraceptives, such as condoms and the pill, according to a study in 1994 by the World Bank. The study proved that family planning could be carried out and accepted practically anywhere. Caldwell also believes that agricultural improvements led to the need for less labour. Children not needed to plough the fields would be of surplus and require some education, so in turn, families become smaller and women are able to work and have greater ambitions. Other examples of non-coercive family planning policies are Ethiopia, Thailand and Indonesia.
Myanmar was controlled until 2011 by an austere military junta, intent on controlling every aspect of people's lives. The generals wanted the country's population doubled. In their view, women's job was to produce babies to power the country's labour force, so family planning was vehemently opposed. The women of Burma opposed this policy, and Peter McDonald of the Australian National University argues that this gave rise to a black market trade in contraceptives, smuggled in from neighbouring Thailand.
In 1990, shortly after the Iran-Iraq war ended, Iran saw the fastest recorded fall in fertility in world history. Revolution gave way to consumerism and westernization. With TVs and cars came condoms and birth control pills. A generation of women had been expected to produce soldiers to fight Iraq, but the next generation of women could choose to enjoy some newfound luxuries. During the war, the women of Iran averaged about 8 children each, a ratio the hard-line Islamic President Mahmoud Ahmadinejad wanted to revive. As of 2010, the fertility rate of Iran was 1.7 children per woman. Some observers claim this to be a triumph of Western values of freedom for women against states with Islamic values.
Islamic clerics are also having less influence over women in other Muslim countries. In the past 30 years Turkey's fertility rate of children per woman has dropped from 4.07 to 2.08. Tunisia has dropped from 4.82 to 2.14 and Morocco from 5.4 to 2.52 children per woman.
Latin America, of predominantly Catholic faith, has seen the same trends of falling fertility rates. Brazilian women are having half as many children as 25 years ago: a rate of 1.7 children per woman. The Vatican now has less influence over women in other hard-line Catholic countries. Mexico, El Salvador, Ecuador, Nicaragua, Colombia, Venezuela and Peru have all seen significant drops in fertility in the same period, all going from over six to less than three children per woman. Forty percent of married Brazilian women are choosing to get sterilised after having children, but this may be because it only requires confession on one occasion. Some observers claim this to be a triumph of modern Western values of freedom for women against states with Catholic values.
National birth rates
According to the CIA's The World Factbook, which presumably takes its figures from the World Health Organization, the country with the highest birth rate is Niger, at 6.49 children born per woman, and the country with the lowest birth rate is Taiwan, at 1.13 children born per woman. However, despite the absence of any official records, it can be presumed for obvious reasons (only men are allowed to be Catholic priests) that the Holy See has the lowest birth rate of any sovereign state.
Compared with the 1950s (when the birth rate was 36 per thousand), as of 2011, the world birth rate has declined by 16 per thousand.
As of 2017, Niger has had 49.443 births per thousand people.
Japan has one of the lowest birth rates in the world with 8 per thousand people.
While in Japan there are 126 million people and in Niger 21 million, both countries had around 1 million babies born in 2016.
Sub-Saharan Africa
The region of Sub-Saharan Africa has the highest birth rate in the world. As of 2016, Niger, Mali, Uganda, Zambia, and Burundi have the highest birth rates in the world. This is part of the fertility-income paradox, as these countries are very poor, and it may seem counter-intuitive for families there to have so many children. The inverse relationship between income and fertility has been termed a demographic-economic "paradox" because, following the notion suggested by the influential Thomas Malthus, greater means would be expected to enable the production of more offspring.
Afghanistan
Afghanistan has the 11th highest birth rate in the world, and also the highest birth rate of any non-African country (as of 2016). The rapid population growth of Afghanistan is considered a problem when it prevents population stabilization and affects maternal and infant health. Reasons for large families include tradition, religion, the different roles of men and women, and the cultural desire to have several sons.
Australia
Historically, Australia has had a relatively low fertility rate, reaching a high of 3.14 births per woman in 1960. This was followed by a decline which continued until the mid-2000s, when a one-off cash incentive was introduced to reverse it. In 2004, the then Howard government introduced a non-means-tested 'Maternity Payment' to parents of every newborn as a substitute for maternity leave. The payment, known as the 'Baby Bonus', was A$3,000 per child. This later rose to A$5,000, paid in 13 installments.
At a time when Australia's unemployment was at a 28-year low of 5.2%, the then Treasurer Peter Costello stated there was opportunity to go lower. With a good economic outlook for Australia, Costello held the view that now was a good time to expand the population, with his famous quote that every family should have three children "one for mum, one for dad and one for the country". Australia's fertility rate reached a peak of 1.95 children per woman in 2010, a 30-year high, although still below replacement rate.
Phil Ruthven of the business information firm IBISWorld believes the spike in fertility was more about timing and less about monetary incentives. Generation X was now aged 25 to 45 years old. With numerous women putting pregnancies off for a few years for the sake of a career, many felt the years closing in and their biological clocks ticking.
On 1 March 2014, the baby bonus was replaced with Family Tax Benefit A. By then the baby bonus had left its legacy on Australia.
In 2016, Australia's fertility rate has only decreased slightly to 1.91 children per woman.
France
France has been successful in increasing fertility rates from the low levels seen in the late 1980s, after a continuous fall in the birth rate. In 1994, the total fertility rate was as low as 1.66, but perhaps due to the active family policy of the government in the mid-1990s, it has increased, and maintained an average of 2.0 from 2008 until 2015.
France has embarked on a strong incentive policy based on two key measures to restore the birth rate: family benefits (les allocations familiales) and a family coefficient of income tax (le quotient familial). Since the end of World War II, early family policy in France has been based on a tradition of supporting families with several children, so that the arrival of a third child entitles a family to additional family allowances and income tax exemptions. This is intended to allow families with three children to enjoy the same living standards as households without children.
In particular, the French income taxation system is structured so that families with children receive greater tax breaks than single adults without children. This system is known as the family coefficient of income tax. A characteristic of the family coefficient is that households with a large number of children can receive more tax exemption benefits, even when they are at the same standard of living as smaller households.
Since the 1970s, the focus has been on supporting families who are vulnerable such as single parent families and the children of a poor family in order to ensure equality of opportunity. In addition, as many women began to participate in the labor market, the government introduced policies of financial support for childcare leave as well as childcare facilities. In 1994, the government expanded the parent education allowance (l'allocation parentale d'éducation) for women with two children to ensure freedom of choice and reduce formal unemployment in order to promote family well-being and women's labor participation.
There are also:
an infant childcare allowance, a family allowance, a supplementary family allowance for families with several children, and a multi-element family pension scheme.
a health insurance system under which medical and hospitalization costs incurred after the sixth month of pregnancy are covered in full by national health insurance within the national social security system, together with statutory leave during pregnancy.
Germany
The birth rate in Germany is only 8.3 per thousand, lower than those of the UK and France.
Ireland
In Europe as of July 2011, Ireland's birth rate was 16.5 per 1000 (3.5 percent higher than the next-ranked country, the UK).
Japan
As of 2016, Japan has the third lowest crude birth rate (i.e. not allowing for the population's age distribution) in the world, with only Saint Pierre and Miquelon and Monaco having lower crude birth rates. Japan has an unbalanced population with many elderly but few young people, and this is projected to be more extreme in the future, unless there are major changes. An increasing number of Japanese people are staying unmarried: between 1980 and 2010, the percentage of the population who had never married increased from 22% to almost 30%, even as the population continued to age, and by 2035 one in four people will not marry during their childbearing years. The Japanese sociologist Masahiro Yamada coined the term "parasite singles" for unmarried adults in their late 20s and 30s who continue to live with their parents.
South Korea
Since joining the Organisation for Economic Co-operation and Development (OECD) in 1996, South Korea has seen its fertility rate decline. It recorded the lowest fertility rate among OECD countries in 2017, with just 1.1 children born per woman. Subsequent studies indicate that Korea has since broken its own record, with the fertility rate falling below one child per woman. The total fertility rate in South Korea declined sharply from 4.53 in 1970 to 2.06 in 1983, falling below the replacement level of 2.10. The decline accelerated in the 2000s, with the fertility rate dropping to 1.48 in 2000, 1.23 in 2010, and 0.72 in 2023.
One example of Korea's economic difficulties is the housing market. Tenants may choose to buy, rent, or use the Jeonse system of renting, in which landlords require renters to pay upfront as much as 70% of the property value as a type of security deposit and then live rent-free for the duration of the contract, usually two years. At the end of the contract, the deposit is refunded to the renter in full. Historically, landlords have invested the security deposit and counted on rising property values, but as inflation rose above interest rates, property values plummeted. Recent government caps, aimed at protecting renters from price gouging, restricted the profit a landlord can make on renewing a contract.
The Korean government offers a wide range of financial incentives to parents; however, many new parents, both mother and father, refuse to take full advantage of postpartum parental leave. Some fathers fear being ridiculed for taking "mom leave" while both working parents fear the stigma of "falling behind" in their professional careers. The South Korean corporate world is very unsympathetic to family needs.
Abortion and divorce are other contributing factors to Korea's low birth rate. In the twentieth century, due mainly to Confucian beliefs and a strong desire to sire a son, female fetuses were often aborted. This strong preference for a first-born son has a somewhat paradoxical effect on today's low birth rate, as many women do not want to marry an eldest son, aware of his financial obligation to feed, clothe and shelter his aging parents. In response to sex-selective abortion, in 1988 the government banned doctors from telling expectant parents the sex of the fetus. Effective 1 January 2021, abortion has been decriminalized. Divorce is another deterrent to childbirth. Although divorce has been on the rise over the last 50 years, it hit families especially hard after the economic crisis in 1997, with fathers abandoning their families because they could not support them financially. In addition, the abortion of female fetuses led to a relative shortage of women, resulting in a lower overall birth rate in the country.
Taiwan
In August 2011, Taiwan's government announced that its birth rate declined in the previous year, despite the fact that the government implemented approaches to encourage fertility.
United Kingdom
In July 2011, the UK's Office for National Statistics (ONS) announced a 2.4 percent increase in live births in the United Kingdom in 2010. This is the highest birth rate in the UK in 40 years. However, the UK record year for births and birth rate remains 1920 (when the ONS reported over 957,000 births to a population of "around 40 million").
United States
There has been a dramatic decline in birth rates in the U.S. between 2007 and 2020. The Great Recession appears to have contributed to the decline in the early period. A 2022 study did not identify any other economic, policy, or social factor that contributed to the decline. The decline may be due to shifting life priorities of recent cohorts that go through childbearing age, as there have been "changes in preferences for having children, aspirations for life, and parenting norms."
A Pew Research Center study found evidence of a correlation between economic difficulties and fertility decline by race and ethnicity. Hispanics (particularly affected by the recession) have experienced the largest fertility decline, particularly compared with whites. In 2008–2009 the birth rate declined 5.9 percent for Hispanic women, 2.4 percent for African American women and 1.6 percent for white women. The relatively large birth rate declines among Hispanics mirror their relatively large economic declines, in terms of jobs and wealth. According to statistics from the National Center for Health Statistics and the U.S. Census Bureau, from 2007 to 2008 the employment rate among Hispanics declined by 1.6 percentage points, compared with a decline of 0.7 points for whites. The unemployment rate shows a similar pattern: unemployment among Hispanics increased 2.0 percentage points from 2007 to 2008, while for whites the increase was 0.9 percentage points. A report from the Pew Hispanic Center revealed that Hispanics have also been the biggest losers in terms of wealth since the beginning of the recession, with Hispanic households losing 66% of their median wealth from 2005 to 2009. In comparison, black households lost 53% of their median wealth and white households lost only 16%.
Other factors (such as women's labor-force participation, contraceptive technology and public policy) make it difficult to determine how much economic change affects fertility. Research suggests that much of the fertility decline during an economic downturn is a postponement of childbearing, not a decision to have fewer (or no) children; people plan to "catch up" to their plans of bearing children when economic conditions improve. Younger women are more likely than older women to postpone pregnancy due to economic factors, since they have more years of fertility remaining.
In July 2011, the U.S. National Institutes of Health announced that the adolescent birth rate continues to decline. In 2013, teenage birth rates in the U.S. were at the lowest level in U.S. history. Teen birth rates in the U.S. have decreased from 1991 through 2012 (except for an increase from 2005 to 2007). The other aberration from this otherwise-steady decline in teen birth rates is the six percent decrease in birth rates for 15- to 19-year-olds between 2008 and 2009. Despite the decrease, U.S. teen birth rates remain higher than those in other developed nations. Racial differences affect teen birth and pregnancy rates: American Indian/Alaska Native, Hispanic, and non-Hispanic black teen pregnancy rates are more than double the non-Hispanic white teenage birth rate.
Researchers have found that states that strictly enforce child support have up to 20 percent fewer unmarried births than states that are lax about getting unmarried fathers to pay. Moreover, according to their results, if all 50 states in the United States had done at least as well in their enforcement efforts as the state ranked fifth from the top, out-of-wedlock births would have fallen by 20 percent.
United States population growth is at a historic low, as current U.S. birth rates are the lowest ever recorded. The low birth rates in the contemporary United States can possibly be ascribed to the recession, which led families to postpone having children, and to fewer immigrants coming to the US. According to The Economist, current US birth rates are not high enough to maintain the size of the U.S. population.
Factors affecting birth rate
There are many factors that interact in complex ways, influencing the birth rates of a population.
Developed countries have a lower birth rate than underdeveloped countries (see Income and fertility). A parent's number of children strongly correlates with the number of children that each person in the next generation will eventually have. Factors generally associated with increased fertility include religiosity, intention to have children, and maternal support. Factors generally associated with decreased fertility include wealth, education, female labor participation, urban residence, intelligence, increased female age, women's rights, access to family planning services and (to a lesser degree) increased male age. Many of these factors however are not universal, and differ by region and social class. For instance, at a global level, religion is correlated with increased fertility.
Reproductive health can also affect the birth rate, as untreated infections can lead to fertility problems, as can be seen in the "infertility belt" - a region that stretches across central Africa from the United Republic of Tanzania in the east to Gabon in the west, and which has a lower fertility than other African regions.
Child custody laws, which affect fathers' parental rights over their children from birth until custody ends at age 18, may also have an effect on the birth rate. As noted above, U.S. states that strictly enforce child support have up to 20 percent fewer unmarried births than states that are lax about getting unmarried fathers to pay.
Crude birth rate
Crude birth rate is a measure of the number of live births occurring during the year, per 1,000 people in the population. It is normally used to predict population growth.
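As a minimal illustration of the definition (a sketch, not part of the original article), the crude birth rate is simply the number of live births divided by the mid-year population, scaled to 1,000 people; using the approximate 1920 UK figures quoted above (about 957,000 births in a population of roughly 40 million) gives a rate near 24 per 1,000:

```python
def crude_birth_rate(live_births, midyear_population):
    """Live births per 1,000 people per year."""
    return 1000.0 * live_births / midyear_population

# Approximate 1920 UK figures mentioned earlier in this article
print(round(crude_birth_rate(957_000, 40_000_000), 1))  # ~23.9 births per 1,000
```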
See also
Case studies
Aging of the United States
Lists
List of sovereign states and dependent territories by birth rate
List of sovereign states and dependencies by total fertility rate
Notes
References
United Nations World Population Prospects: The 2008 Revision Population Database
World Birth rate by IndexMundi
Fertility and Birth Rates, Child Trends, http://www.childtrends.org/?indicators=fertility-and-birth-rates
External links
CIA World Factbook Birth Rate List by Rank
Population ecology
Demographic economics
Fertility
Ageing
Temporal rates | Birth rate | [
"Physics"
] | 5,452 | [
"Temporal quantities",
"Temporal rates",
"Physical quantities"
] |
515,977 | https://en.wikipedia.org/wiki/Naturally%20aspirated%20engine | A naturally aspirated engine, also known as a normally aspirated engine, and abbreviated to N/A or NA, is an internal combustion engine in which air intake depends solely on atmospheric pressure and does not have forced induction through a turbocharger or a supercharger.
Description
In a naturally aspirated engine, air for combustion (Diesel cycle in a diesel engine or specific types of Otto cycle in petrol engines, namely petrol direct injection) or an air/fuel mixture (traditional Otto cycle petrol engines), is drawn into the engine's cylinders by atmospheric pressure acting against a partial vacuum that occurs as the piston travels downwards toward bottom dead centre during the intake stroke. Owing to innate restriction in the engine's inlet tract, which includes the intake manifold, a small pressure drop occurs as air is drawn in, resulting in a volumetric efficiency of less than 100 percent—and a less than complete air charge in the cylinder. The density of the air charge, and therefore the engine's maximum theoretical power output, in addition to being influenced by induction system restriction, is also affected by engine speed and atmospheric pressure, the latter of which decreases as the operating altitude increases.
This is in contrast to a forced-induction engine, in which a mechanically driven supercharger or an exhaust-driven turbocharger is employed to facilitate increasing the mass of intake air beyond what could be produced by atmospheric pressure alone. Nitrous oxide can also be used to artificially increase the mass of oxygen present in the intake air. This is accomplished by injecting liquid nitrous oxide into the intake, which supplies significantly more oxygen in a given volume than is possible with atmospheric air. Nitrous oxide is 36.3% available oxygen by mass after it decomposes, as compared with atmospheric air at 20.95%. Nitrous oxide also boils at approximately −88.5 °C (−127 °F) at atmospheric pressure and offers significant cooling from the latent heat of vaporization, which also aids in increasing the overall air charge density significantly compared to natural aspiration.
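The oxygen figure quoted above follows directly from molar masses; a quick back-of-the-envelope check (standard atomic masses assumed):

```python
# Mass fraction of oxygen available when nitrous oxide (N2O) decomposes
m_N, m_O = 14.007, 15.999        # standard atomic masses, g/mol
m_N2O = 2 * m_N + m_O            # ~44.0 g/mol for N2O
print(m_O / m_N2O)               # ~0.3635, i.e. roughly 36.3-36.4% oxygen by mass
```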
Applications
Most automobile petrol engines, as well as many small engines used for non-automotive purposes, are naturally aspirated. Most modern diesel engines powering highway vehicles are turbocharged to produce a more favourable power-to-weight ratio, a higher torque curve, better fuel efficiency and lower exhaust emissions. Turbocharging is nearly universal on diesel engines used in railroad, marine and commercial stationary applications (electrical power generation, for example). Forced induction is also used with reciprocating aircraft engines to negate some of the power loss that occurs as the aircraft climbs to higher altitudes.
Advantages and disadvantages
The advantages and disadvantages of a naturally aspirated engine in relation to a same-sized engine relying on forced induction include:
Advantages
Easier to maintain and repair
Lower development and production costs
Increased reliability, partly due to fewer separate, moving parts
More direct throttle response than a turbo system due to the lack of turbo lag (an advantage also shared with superchargers)
Less potential for overheating and/or uncontrolled combustion (pinging/knocking)
Disadvantages
Decreased efficiency
Decreased power-to-weight ratio
Decreased potential for tuning
Increased power loss at higher elevation (due to lower air pressure) compared to forced induction engines
See also
Carburetor
Fuel injection
Manifold vacuum
References
Engine technology
Internal combustion engine | Naturally aspirated engine | [
"Technology",
"Engineering"
] | 684 | [
"Engine technology",
"Combustion engineering",
"Engines",
"Internal combustion engine"
] |
515,999 | https://en.wikipedia.org/wiki/Brokaw%20bandgap%20reference | Brokaw bandgap reference is a voltage reference circuit widely used in integrated circuits, with an output voltage around 1.25 V with low temperature dependence. This particular circuit is one type of a bandgap voltage reference, named after Paul Brokaw, the author of its first publication.
Like all temperature-independent bandgap references, the circuit maintains an internal voltage source that has a positive temperature coefficient and another internal voltage source that has a negative temperature coefficient. By summing the two together, the temperature dependence can be canceled. Additionally, either of the two internal sources can be used as a temperature sensor.
In the Brokaw bandgap reference, the circuit uses negative feedback (by means of an operational amplifier) to force a constant current through two bipolar transistors with different emitter areas. By the Ebers–Moll model of a transistor,
The transistor with the larger emitter area requires a smaller base–emitter voltage for the same current.
The difference between the two base–emitter voltages has a positive temperature coefficient (i.e., it increases with temperature).
The base–emitter voltage for each transistor has a negative temperature coefficient (i.e., it decreases with temperature).
The circuit output is the sum of one of the base–emitter voltages with a multiple of the base–emitter voltage differences. With appropriate component choices, the two opposing temperature coefficients will cancel each other exactly and the output will have no temperature dependence.
In the example circuit shown, the opamp ensures that its inverting and non-inverting inputs are at the same voltage. This means that the currents in each collector resistor are identical, so the collector currents of Q1 and Q2 are also identical. If Q2 has an emitter area that is N times larger than that of Q1, its base–emitter voltage will be lower than that of Q1 by ΔV_BE = (kT/q)·ln(N). This voltage is generated across the resistor in Q2's emitter leg (call it R1) and so defines the current in each leg as I = ΔV_BE/R1 = (kT/q)·ln(N)/R1. The output voltage (at the opamp output) is therefore the sum of Q1's base–emitter voltage and the drop that both emitter currents produce across the shared emitter resistor (call it R2):
V_out = V_BE,Q1 + 2·(R2/R1)·(kT/q)·ln(N)
The first term has a negative temperature coefficient; the second term has a positive temperature coefficient (from its kT/q factor). By an appropriate choice of R1, R2 and the area ratio N, these temperature coefficients can be made to cancel, giving an output voltage that is nearly independent of temperature. The magnitude of this output voltage can be shown to be approximately equal to the bandgap voltage (EG0) of silicon extrapolated to 0 K.
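A rough numerical sketch of this cancellation (all component values here are illustrative assumptions, not taken from Brokaw's paper, and V_BE is modelled as exactly linear in temperature):

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602177e-19   # elementary charge, C

def brokaw_vout(T, N=8, R2_over_R1=5.58, vbe_300=0.65, tc_vbe=-2.0e-3):
    """Toy Brokaw-cell output: V_out = V_BE(T) + 2*(R2/R1)*(kT/q)*ln(N)."""
    v_be  = vbe_300 + tc_vbe * (T - 300.0)   # CTAT term, roughly -2 mV/K
    dv_be = (k * T / q) * math.log(N)        # PTAT term developed across R1
    return v_be + 2.0 * R2_over_R1 * dv_be

for T in (250.0, 300.0, 350.0):
    print(T, round(brokaw_vout(T), 4))       # ~1.25 V at each temperature
```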
See also
LM317
References
External links
Original IEEE paper(pdf) — This is the 1974 paper describing the circuit.
A Transistor Voltage Reference, and What the Band-Gap Has To Do With It — This 1989 video features Paul Brokaw explaining his bandgap voltage reference.
How to make a Bandgap Voltage Reference in One Easy Lesson by A. Paul Brokaw of IDT
ELEN 689-602: Introduction to Bandgap Reference Generators — Includes detailed description and analysis of Brokaw bandgap reference.
The Design of Band-Gap Reference Circuits: Trials and Tribulations — Robert Pease, National Semiconductor (In "The Best of Bob Pease", page 286, shows Brokaw cell in Figure 3)
ECE 327: LM317 Bandgap Voltage Reference Example — Brief explanation of the temperature-independent bandgap reference circuit within the LM317. The circuit is nearly identical, but the document discusses how the circuit allows different currents through matched transistors (rather than a single current through different transistors) can set up the same voltages with opposing temperature coefficients.
Electronic circuits
Analog circuits | Brokaw bandgap reference | [
"Engineering"
] | 755 | [
"Analog circuits",
"Electronic engineering",
"Electronic circuits"
] |
516,067 | https://en.wikipedia.org/wiki/Bandgap%20voltage%20reference | A bandgap voltage reference is a voltage reference circuit widely used in integrated circuits. It produces an almost constant voltage corresponding to the particular semiconductor's theoretical band gap, with very little fluctuations from variations of power supply, electrical load, time, temperature (, they typically have an initial error of 0.5–1.0% and a temperature coefficient of 25–50 ppm/°C).
David Hilbiber of Fairchild Semiconductor filed a patent in 1963 and published this circuit concept in 1964. Bob Widlar, Paul Brokaw and others followed up with other commercially-successful versions.
Operation
The voltage difference between two p–n junctions (e.g. diodes), operated at different current densities, is used to generate a current that is proportional to absolute temperature (PTAT) in a resistor. This current is used to generate a voltage in a second resistor. This voltage in turn is added to the voltage of one of the junctions (or a third one, in some implementations). The voltage across a diode operated at constant current is complementary to absolute temperature (CTAT), with a temperature coefficient of approximately −2mV/K. If the ratio between the first and second resistor is chosen properly, the first order effects of the temperature dependency of the diode and the PTAT current will cancel out.
Although silicon's (Si) band gap at 0 K is technically 1.165 eV, the circuit essentially linearly extrapolates the bandgap–temperature curve to determine a slightly higher but precise reference around 1.2–1.3 V (the specific value depends on the particular technology and circuit design); the remaining voltage change over the operating temperature of typical integrated circuits is on the order of a few millivolts. This temperature dependency has a typical parabolic residual behavior, since the linear (first-order) effects are chosen to cancel.
Because the output voltage is by definition fixed around 1.25V for typical Si bandgap reference circuits, the minimum operating voltage is about 1.4V, as in a CMOS circuit at least one drain-source voltage of a field-effect transistor (FET) has to be added. Therefore, recent work concentrates on finding alternative solutions, in which for example currents are summed instead of voltages, resulting in a lower theoretical limit for the operating voltage.
The first letter of the acronym, CTAT, is sometimes misconstrued to represent constant rather than complementary. The term, constant with temperature (CWT), exists to address this confusion, but is not in widespread use.
When summing a PTAT and a CTAT current, only the linear terms of the current are compensated, while the higher-order terms limit the temperature drift (TD) of the bandgap reference to around 20 ppm/°C over a temperature range of 100 °C. For this reason, in 2001, Malcovati designed a circuit topology that can compensate high-order non-linearities, thus achieving an improved TD. This design used an improved version of Banba's topology and an analysis of base–emitter temperature effects that was performed by Tsividis in 1980. In 2012, Andreou further improved the high-order non-linear compensation by using a second operational amplifier along with an additional resistor leg at the point where the two currents are summed. This method further enhanced the curvature correction and achieved superior TD performance over a wider temperature range. In addition, it achieved improved line regulation and lower noise.
The other critical issues in the design of bandgap references are power efficiency and circuit size. As a bandgap reference is generally based on BJT devices and resistors, the total size of the circuit can be large and therefore expensive in IC design. Moreover, this type of circuit may consume considerable power to reach the desired noise and precision specification.
Despite these limitations, the band gap voltage reference is widely used in voltage regulators, covering the majority of 78xx, 79xx devices along with the TL431 and the complementary LM317 and LM337. Temperature coefficients as low as 1.5–2.0ppm/°C can be obtained with bandgap references. However, the parabolic characteristic of voltage versus temperature means that a single figure in ppm/°C does not adequately describe the behavior of the circuit. Manufacturers' data sheets show that the temperature at which the peak (or trough) of the voltage curve occurs is subject to normal sample variations in production. Bandgap references are also suited for low-power applications.
Mixed-signal microcontrollers may provide an internal bandgap reference signal to be used as reference for any internal comparator(s) and analog-to-digital converter(s).
Patents
1966, US Patent 3271660, Reference voltage source, David Hilbiber.
1971, US Patent 3617859, Electrical regulator apparatus including a zero temperature coefficient voltage reference circuit, Robert Dobkin and Robert Widlar.
1981, US Patent 4249122, Temperature compensated bandgap IC voltage references, Robert Widlar.
1984, US Patent 4447784, Temperature compensated bandgap voltage reference circuit, Robert Dobkin.
Notes
See also
Brokaw bandgap reference
LM317
TL431
Silicon bandgap temperature sensor
References
External links
The Design of Band-Gap Reference Circuits: Trials and Tribulations p. 286 – Robert Pease, National Semiconductor
Features and Limitations of CMOS Voltage References
ECE 327: LM317 Bandgap Voltage Reference Example – Brief explanation of the temperature-independent bandgap reference circuit within the LM317.
Electronic circuits
Analog circuits | Bandgap voltage reference | [
"Engineering"
] | 1,181 | [
"Analog circuits",
"Electronic engineering",
"Electronic circuits"
] |
516,133 | https://en.wikipedia.org/wiki/Equipartition%20theorem | In classical statistical mechanics, the equipartition theorem relates the temperature of a system to its average energies. The equipartition theorem is also known as the law of equipartition, equipartition of energy, or simply equipartition. The original idea of equipartition was that, in thermal equilibrium, energy is shared equally among all of its various forms; for example, the average kinetic energy per degree of freedom in translational motion of a molecule should equal that in rotational motion.
The equipartition theorem makes quantitative predictions. Like the virial theorem, it gives the total average kinetic and potential energies for a system at a given temperature, from which the system's heat capacity can be computed. However, equipartition also gives the average values of individual components of the energy, such as the kinetic energy of a particular particle or the potential energy of a single spring. For example, it predicts that every atom in a monatomic ideal gas has an average kinetic energy of (3/2)k_B T in thermal equilibrium, where k_B is the Boltzmann constant and T is the (thermodynamic) temperature. More generally, equipartition can be applied to any classical system in thermal equilibrium, no matter how complicated. It can be used to derive the ideal gas law, and the Dulong–Petit law for the specific heat capacities of solids. The equipartition theorem can also be used to predict the properties of stars, even white dwarfs and neutron stars, since it holds even when relativistic effects are considered.
Although the equipartition theorem makes accurate predictions in certain conditions, it is inaccurate when quantum effects are significant, such as at low temperatures. When the thermal energy is smaller than the quantum energy spacing in a particular degree of freedom, the average energy and heat capacity of this degree of freedom are less than the values predicted by equipartition. Such a degree of freedom is said to be "frozen out" when the thermal energy is much smaller than this spacing. For example, the heat capacity of a solid decreases at low temperatures as various types of motion become frozen out, rather than remaining constant as predicted by equipartition. Such decreases in heat capacity were among the first signs to physicists of the 19th century that classical physics was incorrect and that a new, more subtle, scientific model was required. Along with other evidence, equipartition's failure to model black-body radiation—also known as the ultraviolet catastrophe—led Max Planck to suggest that energy in the oscillators in an object, which emit light, were quantized, a revolutionary hypothesis that spurred the development of quantum mechanics and quantum field theory.
Basic concept and simple examples
The name "equipartition" means "equal division," as derived from the Latin equi from the antecedent, æquus ("equal or even"), and partition from the noun, partitio ("division, portion"). The original concept of equipartition was that the total kinetic energy of a system is shared equally among all of its independent parts, on the average, once the system has reached thermal equilibrium. Equipartition also makes quantitative predictions for these energies. For example, it predicts that every atom of an inert noble gas, in thermal equilibrium at temperature , has an average translational kinetic energy of , where is the Boltzmann constant. As a consequence, since kinetic energy is equal to (mass)(velocity)2, the heavier atoms of xenon have a lower average speed than do the lighter atoms of helium at the same temperature. Figure 2 shows the Maxwell–Boltzmann distribution for the speeds of the atoms in four noble gases.
In this example, the key point is that the kinetic energy is quadratic in the velocity. The equipartition theorem shows that in thermal equilibrium, any degree of freedom (such as a component of the position or velocity of a particle) which appears only quadratically in the energy has an average energy of ½k_B T and therefore contributes ½k_B to the system's heat capacity. This has many applications.
Translational energy and ideal gases
The (Newtonian) kinetic energy of a particle of mass m and velocity v is given by
H_kin = ½ m |v|² = ½ m (v_x² + v_y² + v_z²),
where v_x, v_y and v_z are the Cartesian components of the velocity v. Here, H is short for Hamiltonian, and is used henceforth as a symbol for energy because the Hamiltonian formalism plays a central role in the most general form of the equipartition theorem.
Since the kinetic energy is quadratic in the components of the velocity, by equipartition these three components each contribute ½k_B T to the average kinetic energy in thermal equilibrium. Thus the average kinetic energy of the particle is (3/2)k_B T, as in the example of noble gases above.
More generally, in a monatomic ideal gas the total energy consists purely of (translational) kinetic energy: by assumption, the particles have no internal degrees of freedom and move independently of one another. Equipartition therefore predicts that the total energy of an ideal gas of N particles is (3/2)N k_B T.
It follows that the heat capacity of the gas is (3/2)N k_B and hence, in particular, the heat capacity of a mole of such gas particles is (3/2)N_A k_B = (3/2)R, where N_A is the Avogadro constant and R is the gas constant. Since R ≈ 2 cal/(mol·K), equipartition predicts that the molar heat capacity of an ideal gas is roughly 3 cal/(mol·K). This prediction is confirmed by experiment when compared to monatomic gases.
The mean kinetic energy also allows the root mean square speed v_rms of the gas particles to be calculated:
v_rms = sqrt(3RT/M),
where M is the mass of a mole of gas particles. This result is useful for many applications such as Graham's law of effusion, which provides a method for enriching uranium.
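As a quick numerical sketch of this formula (molar masses for helium and xenon are assumed standard values):

```python
import math

R = 8.314462   # gas constant, J/(mol K)

def v_rms(T, molar_mass):
    """Root-mean-square speed sqrt(3RT/M) of an ideal gas, with M in kg/mol."""
    return math.sqrt(3.0 * R * T / molar_mass)

print(v_rms(298.0, 0.004003))   # helium: ~1360 m/s
print(v_rms(298.0, 0.131293))   # xenon:  ~240 m/s
```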
Rotational energy and molecular tumbling in solution
A similar example is provided by a rotating molecule with principal moments of inertia I_1, I_2 and I_3. According to classical mechanics, the rotational energy of such a molecule is given by
H_rot = ½ (I_1 ω_1² + I_2 ω_2² + I_3 ω_3²),
where ω_1, ω_2, and ω_3 are the principal components of the angular velocity. By exactly the same reasoning as in the translational case, equipartition implies that in thermal equilibrium the average rotational energy of each particle is (3/2)k_B T. Similarly, the equipartition theorem allows the average (more precisely, the root mean square) angular speed of the molecules to be calculated.
The tumbling of rigid molecules—that is, the random rotations of molecules in solution—plays a key role in the relaxations observed by nuclear magnetic resonance, particularly protein NMR and residual dipolar couplings. Rotational diffusion can also be observed by other biophysical probes such as fluorescence anisotropy, flow birefringence and dielectric spectroscopy.
Potential energy and harmonic oscillators
Equipartition applies to potential energies as well as kinetic energies: important examples include harmonic oscillators such as a spring, which has a quadratic potential energy
H_pot = ½ a q²,
where the constant a describes the stiffness of the spring and q is the deviation from equilibrium. If such a one-dimensional system has mass m, then its kinetic energy is
H_kin = ½ m v² = p²/(2m),
where v and p = mv denote the velocity and momentum of the oscillator. Combining these terms yields the total energy
H = H_kin + H_pot = p²/(2m) + ½ a q².
Equipartition therefore implies that in thermal equilibrium, the oscillator has average energy
⟨H⟩ = ⟨H_kin⟩ + ⟨H_pot⟩ = ½k_B T + ½k_B T = k_B T,
where the angular brackets ⟨…⟩ denote the average of the enclosed quantity.
This result is valid for any type of harmonic oscillator, such as a pendulum, a vibrating molecule or a passive electronic oscillator. Systems of such oscillators arise in many situations; by equipartition, each such oscillator receives an average total energy k_B T and hence contributes k_B to the system's heat capacity. This can be used to derive the formula for Johnson–Nyquist noise and the Dulong–Petit law of solid heat capacities. The latter application was particularly significant in the history of equipartition.
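A quick Monte Carlo check of this k_B T result (a sketch in illustrative units; it relies on the fact that the canonical distribution of a harmonic oscillator factorizes into independent Gaussians in p and q, which is what makes the direct sampling below valid):

```python
import random

kT, m, a = 1.0, 1.0, 1.0      # illustrative units
n = 100_000
total = 0.0
for _ in range(n):
    p = random.gauss(0.0, (m * kT) ** 0.5)   # Boltzmann-distributed momentum
    q = random.gauss(0.0, (kT / a) ** 0.5)   # Boltzmann-distributed extension
    total += p * p / (2 * m) + 0.5 * a * q * q
print(total / n)   # ~1.0, i.e. <H> ~ kT (one half from each quadratic term)
```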
Specific heat capacity of solids
An important application of the equipartition theorem is to the specific heat capacity of a crystalline solid. Each atom in such a solid can oscillate in three independent directions, so the solid can be viewed as a system of 3N independent simple harmonic oscillators, where N denotes the number of atoms in the lattice. Since each harmonic oscillator has average energy k_B T, the average total energy of the solid is 3N k_B T, and its heat capacity is 3N k_B.
By taking N to be the Avogadro constant N_A, and using the relation R = N_A k_B between the gas constant R and the Boltzmann constant k_B, this provides an explanation for the Dulong–Petit law of specific heat capacities of solids, which stated that the specific heat capacity (per unit mass) of a solid element is inversely proportional to its atomic weight. A modern version is that the molar heat capacity of a solid is 3R ≈ 6 cal/(mol·K).
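A one-line check of the figure quoted above (a sketch, using the SI value of the gas constant):

```python
R = 8.314462                      # gas constant, J/(mol K)
c_molar = 3 * R                   # equipartition (Dulong-Petit) prediction for a solid
print(c_molar, c_molar / 4.184)   # ~24.9 J/(mol K), i.e. ~6 cal/(mol K)
```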
However, this law is inaccurate at lower temperatures, due to quantum effects; it is also inconsistent with the experimentally derived third law of thermodynamics, according to which the molar heat capacity of any substance must go to zero as the temperature goes to absolute zero. A more accurate theory, incorporating quantum effects, was developed by Albert Einstein (1907) and Peter Debye (1911).
Many other physical systems can be modeled as sets of coupled oscillators. The motions of such oscillators can be decomposed into normal modes, like the vibrational modes of a piano string or the resonances of an organ pipe. On the other hand, equipartition often breaks down for such systems, because there is no exchange of energy between the normal modes. In an extreme situation, the modes are independent and so their energies are independently conserved. This shows that some sort of mixing of energies, formally called ergodicity, is important for the law of equipartition to hold.
Sedimentation of particles
Potential energies are not always quadratic in the position. However, the equipartition theorem also shows that if a degree of freedom x contributes only a multiple of x^s (for a fixed real number s) to the energy, then in thermal equilibrium the average energy of that part is k_B T/s.
There is a simple application of this extension to the sedimentation of particles under gravity. For example, the haze sometimes seen in beer can be caused by clumps of proteins that scatter light. Over time, these clumps settle downwards under the influence of gravity, causing more haze near the bottom of a bottle than near its top. However, in a process working in the opposite direction, the particles also diffuse back up towards the top of the bottle. Once equilibrium has been reached, the equipartition theorem may be used to determine the average position of a particular clump of buoyant mass m_b. For an infinitely tall bottle of beer, the gravitational potential energy is given by
H_grav = m_b g z,
where z is the height of the protein clump in the bottle and g is the acceleration due to gravity. Since s = 1 in this case, the average potential energy of a protein clump equals k_B T. Hence, a protein clump with a buoyant mass of 10 MDa (roughly the size of a virus) would produce a haze with an average height of about 2 cm at equilibrium. The process of such sedimentation to equilibrium is described by the Mason–Weaver equation.
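A quick numerical sketch of the 2 cm estimate (assuming T = 298 K and, for simplicity, treating the full 10 MDa as buoyant mass):

```python
k_B = 1.380649e-23       # Boltzmann constant, J/K
g   = 9.81               # gravitational acceleration, m/s^2
Da  = 1.66054e-27        # kilograms per dalton

m_b = 10e6 * Da          # 10 MDa clump, treated as entirely buoyant mass
T   = 298.0              # room temperature, K
mean_height = k_B * T / (m_b * g)   # <z> = k_B T / (m_b g)
print(mean_height * 100)            # ~2.5 cm, of the order of the 2 cm quoted
```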
History
The equipartition of kinetic energy was proposed initially in 1843, and more correctly in 1845, by John James Waterston. In 1859, James Clerk Maxwell argued that the kinetic heat energy of a gas is equally divided between linear and rotational energy. In 1876, Ludwig Boltzmann expanded on this principle by showing that the average energy was divided equally among all the independent components of motion in a system. Boltzmann applied the equipartition theorem to provide a theoretical explanation of the Dulong–Petit law for the specific heat capacities of solids.
The history of the equipartition theorem is intertwined with that of specific heat capacity, both of which were studied in the 19th century. In 1819, the French physicists Pierre Louis Dulong and Alexis Thérèse Petit discovered that the specific heat capacities of solid elements at room temperature were inversely proportional to the atomic weight of the element. Their law was used for many years as a technique for measuring atomic weights. However, subsequent studies by James Dewar and Heinrich Friedrich Weber showed that this Dulong–Petit law holds only at high temperatures; at lower temperatures, or for exceptionally hard solids such as diamond, the specific heat capacity was lower.
Experimental observations of the specific heat capacities of gases also raised concerns about the validity of the equipartition theorem. The theorem predicts that the molar heat capacity of simple monatomic gases should be roughly 3 cal/(mol·K), whereas that of diatomic gases should be roughly 7 cal/(mol·K). Experiments confirmed the former prediction, but found that molar heat capacities of diatomic gases were typically about 5 cal/(mol·K), and fell to about 3 cal/(mol·K) at very low temperatures. Maxwell noted in 1875 that the disagreement between experiment and the equipartition theorem was much worse than even these numbers suggest; since atoms have internal parts, heat energy should go into the motion of these internal parts, making the predicted specific heats of monatomic and diatomic gases much higher than 3 cal/(mol·K) and 7 cal/(mol·K), respectively.
A third discrepancy concerned the specific heat of metals. According to the classical Drude model, metallic electrons act as a nearly ideal gas, and so they should contribute to the heat capacity by the equipartition theorem, where Ne is the number of electrons. Experimentally, however, electrons contribute little to the heat capacity: the molar heat capacities of many conductors and insulators are nearly the same.
Several explanations of equipartition's failure to account for molar heat capacities were proposed. Boltzmann defended the derivation of his equipartition theorem as correct, but suggested that gases might not be in thermal equilibrium because of their interactions with the aether. Lord Kelvin suggested that the derivation of the equipartition theorem must be incorrect, since it disagreed with experiment, but was unable to show how. In 1900 Lord Rayleigh instead put forward a more radical view that the equipartition theorem and the experimental assumption of thermal equilibrium were both correct; to reconcile them, he noted the need for a new principle that would provide an "escape from the destructive simplicity" of the equipartition theorem. Albert Einstein provided that escape, by showing in 1906 that these anomalies in the specific heat were due to quantum effects, specifically the quantization of energy in the elastic modes of the solid. Einstein used the failure of equipartition to argue for the need of a new quantum theory of matter. Nernst's 1910 measurements of specific heats at low temperatures supported Einstein's theory, and led to the widespread acceptance of quantum theory among physicists.
General formulation of the equipartition theorem
The most general form of the equipartition theorem states that under suitable assumptions (discussed below), for a physical system with Hamiltonian energy function H and degrees of freedom x_n, the following equipartition formula holds in thermal equilibrium for all indices m and n:
⟨x_m ∂H/∂x_n⟩ = δ_mn k_B T.
Here δ_mn is the Kronecker delta, which is equal to one if m = n and is zero otherwise. The averaging brackets ⟨…⟩ are assumed to be an ensemble average over phase space or, under an assumption of ergodicity, a time average of a single system.
The general equipartition theorem holds in both the microcanonical ensemble, when the total energy of the system is constant, and also in the canonical ensemble, when the system is coupled to a heat bath with which it can exchange energy. Derivations of the general formula are given later in the article.
The general formula is equivalent to the following two:
⟨x_n ∂H/∂x_n⟩ = k_B T for all n;
⟨x_m ∂H/∂x_n⟩ = 0 for all m ≠ n.
If a degree of freedom x_n appears only as a quadratic term a_n x_n² in the Hamiltonian H, then the first of these formulae implies that
k_B T = ⟨x_n ∂H/∂x_n⟩ = 2⟨a_n x_n²⟩,
which is twice the contribution that this degree of freedom makes to the average energy ⟨H⟩. Thus the equipartition theorem for systems with quadratic energies follows easily from the general formula. A similar argument, with 2 replaced by s, applies to energies of the form a_n x_n^s.
The degrees of freedom xn are coordinates on the phase space of the system and are therefore commonly subdivided into generalized position coordinates qk and generalized momentum coordinates pk, where pk is the conjugate momentum to qk. In this situation, formula 1 means that for all k,
Using the equations of Hamiltonian mechanics, these formulae may also be written
Similarly, one can show using formula 2 that
and
Relation to the virial theorem
The general equipartition theorem is an extension of the virial theorem (proposed in 1870), which states that
where t denotes time. Two key differences are that the virial theorem relates summed rather than individual averages to each other, and it does not connect them to the temperature T. Another difference is that traditional derivations of the virial theorem use averages over time, whereas those of the equipartition theorem use averages over phase space.
Applications
Ideal gas law
Ideal gases provide an important application of the equipartition theorem. As well as providing the formula
for the average kinetic energy per particle, the equipartition theorem can be used to derive the ideal gas law from classical mechanics. If q = (qx, qy, qz) and p = (px, py, pz) denote the position vector and momentum of a particle in the gas, and
F is the net force on that particle, then
where the first equality is Newton's second law, and the second line uses Hamilton's equations and the equipartition formula. Summing over a system of N particles yields
By Newton's third law and the ideal gas assumption, the net force on the system is the force applied by the walls of their container, and this force is given by the pressure P of the gas. Hence
where is the infinitesimal area element along the walls of the container. Since the divergence of the position vector is
the divergence theorem implies that
where is an infinitesimal volume within the container and is the total volume of the container.
Putting these equalities together yields
which immediately implies the ideal gas law for N particles:
where is the number of moles of gas and is the gas constant. Although equipartition provides a simple derivation of the ideal-gas law and the internal energy, the same results can be obtained by an alternative method using the partition function.
Diatomic gases
A diatomic gas can be modelled as two masses, m_1 and m_2, joined by a spring of stiffness a, which is called the rigid rotor-harmonic oscillator approximation. The classical energy of this system is
H = |p_1|²/(2m_1) + |p_2|²/(2m_2) + ½ a q²,
where p_1 and p_2 are the momenta of the two atoms, and q is the deviation of the inter-atomic separation from its equilibrium value. Every degree of freedom in the energy is quadratic and, thus, should contribute ½k_B T to the total average energy, and ½k_B to the heat capacity. Therefore, the heat capacity of a gas of N diatomic molecules is predicted to be (7/2)N k_B: the momenta p_1 and p_2 contribute three degrees of freedom each, and the extension q contributes the seventh. It follows that the heat capacity of a mole of diatomic molecules with no other degrees of freedom should be (7/2)N_A k_B = (7/2)R and, thus, the predicted molar heat capacity should be roughly 7 cal/(mol·K). However, the experimental values for molar heat capacities of diatomic gases are typically about 5 cal/(mol·K) and fall to 3 cal/(mol·K) at very low temperatures. This disagreement between the equipartition prediction and the experimental value of the molar heat capacity cannot be explained by using a more complex model of the molecule, since adding more degrees of freedom can only increase the predicted specific heat, not decrease it. This discrepancy was a key piece of evidence showing the need for a quantum theory of matter.
Extreme relativistic ideal gases
Equipartition was used above to derive the classical ideal gas law from Newtonian mechanics. However, relativistic effects become dominant in some systems, such as white dwarfs and neutron stars, and the ideal gas equations must be modified. The equipartition theorem provides a convenient way to derive the corresponding laws for an extreme relativistic ideal gas. In such cases, the kinetic energy of a single particle is given by the formula
H ≈ c|p| = c sqrt(p_x² + p_y² + p_z²).
Taking the derivative of H with respect to the momentum component p_x gives the formula
p_x ∂H/∂p_x = c p_x²/sqrt(p_x² + p_y² + p_z²),
and similarly for the p_y and p_z components. Adding the three components together gives
⟨H⟩ = ⟨p_x ∂H/∂p_x⟩ + ⟨p_y ∂H/∂p_y⟩ + ⟨p_z ∂H/∂p_z⟩ = 3k_B T,
where the last equality follows from the equipartition formula. Thus, the average total energy of an extreme relativistic gas is twice that of the non-relativistic case: for N particles, it is 3N k_B T.
Non-ideal gases
In an ideal gas the particles are assumed to interact only through collisions. The equipartition theorem may also be used to derive the energy and pressure of "non-ideal gases" in which the particles also interact with one another through conservative forces whose potential depends only on the distance between the particles. This situation can be described by first restricting attention to a single gas particle, and approximating the rest of the gas by a spherically symmetric distribution. It is then customary to introduce a radial distribution function such that the probability density of finding another particle at a distance from the given particle is equal to , where is the mean density of the gas. It follows that the mean potential energy associated to the interaction of the given particle with the rest of the gas is
The total mean potential energy of the gas is therefore , where is the number of particles in the gas, and the factor is needed because summation over all the particles counts each interaction twice.
Adding kinetic and potential energies, then applying equipartition, yields the energy equation
A similar argument, can be used to derive the pressure equation
Anharmonic oscillators
An anharmonic oscillator (in contrast to a simple harmonic oscillator) is one in which the potential energy is not quadratic in the extension (the generalized position which measures the deviation of the system from equilibrium). Such oscillators provide a complementary point of view on the equipartition theorem. Simple examples are provided by potential energy functions of the form
where and are arbitrary real constants. In these cases, the law of equipartition predicts that
Thus, the average potential energy equals , not as for the quadratic harmonic oscillator (where ).
More generally, a typical energy function of a one-dimensional system has a Taylor expansion in the extension :
for non-negative integers . There is no term, because at the equilibrium point, there is no net force and so the first derivative of the energy is zero. The term need not be included, since the energy at the equilibrium position may be set to zero by convention. In this case, the law of equipartition predicts that
In contrast to the other examples cited here, the equipartition formula
does not allow the average potential energy to be written in terms of known constants.
Brownian motion
The equipartition theorem can be used to derive the Brownian motion of a particle from the Langevin equation. According to that equation, the motion of a particle of mass m with velocity v is governed by Newton's second law
dv/dt = F_rnd/m − v/τ,
where F_rnd is a random force representing the random collisions of the particle and the surrounding molecules, and where the time constant τ reflects the drag force that opposes the particle's motion through the solution. The drag force is often written F_drag = −γ v; therefore, the time constant τ equals m/γ.
The dot product of this equation with the position vector , after averaging, yields the equation
for Brownian motion (since the random force is uncorrelated with the position ). Using the mathematical identities
and
the basic equation for Brownian motion can be transformed into
where the last equality follows from the equipartition theorem for translational kinetic energy:
The above differential equation for ⟨r²⟩ (with suitable initial conditions) may be solved exactly:
⟨r²⟩ = (6 k_B T τ²/m) (t/τ − 1 + e^(−t/τ)).
On small time scales, with t ≪ τ, the particle acts as a freely moving particle: by the Taylor series of the exponential function, the squared distance grows approximately quadratically:
⟨r²⟩ ≈ (3 k_B T/m) t² = ⟨v²⟩ t².
However, on long time scales, with t ≫ τ, the exponential and constant terms are negligible, and the squared distance grows only linearly:
⟨r²⟩ ≈ (6 k_B T τ/m) t.
This describes the diffusion of the particle over time. An analogous equation for the rotational diffusion of a rigid molecule can be derived in a similar way.
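The crossover between the two regimes can be checked numerically; the sketch below simply evaluates the closed-form expression above in illustrative units (τ = 1 and k_B T/m = 1 are assumptions):

```python
import math

def msd(t, tau=1.0, kT_over_m=1.0):
    """Mean squared displacement <r^2>(t) = 6*(kT/m)*tau^2*(t/tau - 1 + exp(-t/tau))."""
    x = t / tau
    return 6.0 * kT_over_m * tau**2 * (x - 1.0 + math.exp(-x))

for t in (0.01, 100.0):
    ballistic = 3.0 * 1.0 * t**2        # short-time limit, <v^2> t^2
    diffusive = 6.0 * 1.0 * 1.0 * t     # long-time limit, 6 D t with D = kT*tau/m
    print(t, msd(t), ballistic, diffusive)
```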
Stellar physics
The equipartition theorem and the related virial theorem have long been used as a tool in astrophysics. As examples, the virial theorem may be used to estimate stellar temperatures or the Chandrasekhar limit on the mass of white dwarf stars.
The average temperature of a star can be estimated from the equipartition theorem. Since most stars are spherically symmetric, the total gravitational potential energy can be estimated by integration
where M(r) is the mass within a radius r and ρ(r) is the stellar density at radius r; G represents the gravitational constant and R the total radius of the star. Assuming a constant density throughout the star, this integration yields the formula
H_grav = −(3/5) G M²/R,
where M is the star's total mass. Hence, the average potential energy of a single particle is
⟨h_grav⟩ = H_grav/N = −(3/5) G M²/(R N),
where N is the number of particles in the star. Since most stars are composed mainly of ionized hydrogen, N equals roughly M/m_p, where m_p is the mass of one proton. Application of the equipartition theorem (for an energy that varies as 1/r, so that this degree of freedom carries an average energy of magnitude k_B T) gives an estimate of the star's temperature
k_B T ≈ (3/5) G M m_p/R.
Substitution of the mass and radius of the Sun yields an estimated solar temperature of T = 14 million kelvins, very close to its core temperature of 15 million kelvins. However, the Sun is much more complex than assumed by this model—both its temperature and density vary strongly with radius—and such excellent agreement (≈7% relative error) is partly fortuitous.
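Plugging in solar values reproduces the figure quoted above (standard constants assumed; the factor 3/5 comes from the constant-density estimate):

```python
G   = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23     # Boltzmann constant, J/K
m_p = 1.6726e-27       # proton mass, kg

M_sun = 1.989e30       # solar mass, kg
R_sun = 6.957e8        # solar radius, m

T_est = 3.0 * G * M_sun * m_p / (5.0 * k_B * R_sun)
print(T_est / 1e6)     # ~14 (million kelvins)
```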
Star formation
The same formulae may be applied to determining the conditions for star formation in giant molecular clouds. A local fluctuation in the density of such a cloud can lead to a runaway condition in which the cloud collapses inwards under its own gravity. Such a collapse occurs when the equipartition theorem—or, equivalently, the virial theorem—is no longer valid, i.e., when the gravitational potential energy exceeds twice the kinetic energy
Assuming a constant density for the cloud
yields a minimum mass for stellar contraction, the Jeans mass
Substituting the values typically observed in such clouds (, ) gives an estimated minimum mass of 17 solar masses, which is consistent with observed star formation. This effect is also known as the Jeans instability, after the British physicist James Hopwood Jeans who published it in 1902.
Derivations
Kinetic energies and the Maxwell–Boltzmann distribution
The original formulation of the equipartition theorem states that, in any physical system in thermal equilibrium, every particle has exactly the same average translational kinetic energy, . However, this is true only for ideal gas, and the same result can be derived from the Maxwell–Boltzmann distribution. First, we choose to consider only the Maxwell–Boltzmann distribution of velocity of the z-component
with this equation, we can calculate the mean square velocity of the -component
Since different components of velocity are independent of each other, the average translational kinetic energy is given by
Notice, the Maxwell–Boltzmann distribution should not be confused with the Boltzmann distribution, which the former can be derived from the latter by assuming the energy of a particle is equal to its translational kinetic energy.
As stated by the equipartition theorem. The same result can also be obtained by averaging the particle energy using the probability of finding the particle in certain quantum energy state.
Quadratic energies and the partition function
More generally, the equipartition theorem states that any degree of freedom x which appears in the total energy only as a simple quadratic term A x², where A is a constant, has an average energy of ½k_B T in thermal equilibrium. In this case the equipartition theorem may be derived from the partition function Z(β), where β = 1/(k_B T) is the canonical inverse temperature. Integration over the variable x yields a factor
Z_x = ∫ dx e^(−βAx²) = sqrt(π/(βA)),
in the formula for Z. The mean energy associated with this factor is given by
⟨H_x⟩ = −∂ ln Z_x/∂β = 1/(2β) = ½k_B T,
as stated by the equipartition theorem.
General proofs
General derivations of the equipartition theorem can be found in many statistical mechanics textbooks, both for the microcanonical ensemble and for the canonical ensemble.
They involve taking averages over the phase space of the system, which is a symplectic manifold.
To explain these derivations, the following notation is introduced. First, the phase space is described in terms of generalized position coordinates together with their conjugate momenta . The quantities completely describe the configuration of the system, while the quantities together completely describe its state.
Secondly, the infinitesimal volume
of the phase space is introduced and used to define the volume of the portion of phase space where the energy of the system lies between two limits, and :
In this expression, is assumed to be very small, . Similarly, is defined to be the total volume of phase space where the energy is less than :
Since is very small, the following integrations are equivalent
where the ellipses represent the integrand. From this, it follows that is proportional to
where is the density of states. By the usual definitions of statistical mechanics, the entropy equals , and the temperature is defined by
The canonical ensemble
In the canonical ensemble, the system is in thermal equilibrium with an infinite heat bath at temperature (in kelvins). The probability of each state in phase space is given by its Boltzmann factor times a normalization factor , which is chosen so that the probabilities sum to one
where . Using Integration by parts for a phase-space variable the above can be written as
where , i.e., the first integration is not carried out over . Performing the first integral between two limits and and simplifying the second integral yields the equation
The first term is usually zero, either because is zero at the limits, or because the energy goes to infinity at those limits. In that case, the equipartition theorem for the canonical ensemble follows immediately
Here, the averaging symbolized by is the ensemble average taken over the canonical ensemble.
The microcanonical ensemble
In the microcanonical ensemble, the system is isolated from the rest of the world, or at least very weakly coupled to it. Hence, its total energy is effectively constant; to be definite, we say that the total energy is confined between and . For a given energy and spread , there is a region of phase space in which the system has that energy, and the probability of each state in that region of phase space is equal, by the definition of the microcanonical ensemble. Given these definitions, the equipartition average of phase-space variables (which could be either or ) and is given by
where the last equality follows because is a constant that does not depend on . Integrating by parts yields the relation
since the first term on the right hand side of the first line is zero (it can be rewritten as an integral of H − E on the hypersurface where ).
Substitution of this result into the previous equation yields
Since the equipartition theorem follows:
Thus, we have derived the general formulation of the equipartition theorem
which was so useful in the applications described above.
Limitations
Requirement of ergodicity
The law of equipartition holds only for ergodic systems in thermal equilibrium, which implies that all states with the same energy must be equally likely to be populated. Consequently, it must be possible to exchange energy among all its various forms within the system, or with an external heat bath in the canonical ensemble. The number of physical systems that have been rigorously proven to be ergodic is small; a famous example is the hard-sphere system of Yakov Sinai. The requirements for isolated systems to ensure ergodicity—and, thus equipartition—have been studied, and provided motivation for the modern chaos theory of dynamical systems. A chaotic Hamiltonian system need not be ergodic, although that is usually a good assumption.
A commonly cited counter-example where energy is not shared among its various forms and where equipartition does not hold in the microcanonical ensemble is a system of coupled harmonic oscillators. If the system is isolated from the rest of the world, the energy in each normal mode is constant; energy is not transferred from one mode to another. Hence, equipartition does not hold for such a system; the amount of energy in each normal mode is fixed at its initial value. If sufficiently strong nonlinear terms are present in the energy function, energy may be transferred between the normal modes, leading to ergodicity and rendering the law of equipartition valid. However, the Kolmogorov–Arnold–Moser theorem states that energy will not be exchanged unless the nonlinear perturbations are strong enough; if they are too small, the energy will remain trapped in at least some of the modes.
Another simple example is an ideal gas of a finite number of colliding particles in a round vessel. Due to the vessel's symmetry, the angular momentum of such a gas is conserved. Therefore, not all states with the same energy are populated. This results in the mean particle energy being dependent on the mass of this particle, and also on the masses of all the other particles.
Another way ergodicity can be broken is by the existence of nonlinear soliton symmetries. In 1953, Fermi, Pasta, Ulam and Tsingou conducted computer simulations of a vibrating string that included a non-linear term (quadratic in one test, cubic in another, and a piecewise linear approximation to a cubic in a third). They found that the behavior of the system was quite different from what intuition based on equipartition would have led them to expect. Instead of the energies in the modes becoming equally shared, the system exhibited a very complicated quasi-periodic behavior. This puzzling result was eventually explained by Kruskal and Zabusky in 1965 in a paper which, by connecting the simulated system to the Korteweg–de Vries equation, led to the development of soliton mathematics.
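A minimal numerical experiment in the spirit of the Fermi–Pasta–Ulam–Tsingou study is sketched below. It is not a reproduction of the original runs: the chain length, nonlinearity strength, and integration times are arbitrary illustrative choices. The code integrates a fixed-end chain of unit masses with a weak quadratic nonlinearity, starts all of the energy in the lowest normal mode, and prints how the harmonic energies of the first few modes evolve; with the nonlinear term set to zero, those energies stay fixed, illustrating the non-ergodicity of the purely linear chain.

```python
import numpy as np

N, alpha = 32, 0.25          # number of interior masses and nonlinearity strength (illustrative)
dt, n_steps = 0.05, 200_000  # leapfrog time step and number of steps (illustrative)

idx = np.arange(1, N + 1)
k = np.arange(1, N + 1)
# Orthonormal normal-mode shapes and frequencies of the fixed-end linear chain.
modes = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * np.outer(k, idx) / (N + 1))
omega = 2.0 * np.sin(np.pi * k / (2.0 * (N + 1)))

def accel(x):
    """Force on each interior mass for the alpha-type chain with fixed ends."""
    xp = np.concatenate(([0.0], x, [0.0]))   # add the fixed boundary masses
    d = np.diff(xp)                           # spring extensions
    f = d + alpha * d**2                      # linear + quadratic spring force
    return f[1:] - f[:-1]

def mode_energies(x, v):
    """Harmonic energy in each normal mode (nonlinear coupling energy excluded)."""
    Q, P = modes @ x, modes @ v
    return 0.5 * (P**2 + (omega * Q)**2)

# Start with all the energy in the lowest normal mode.
x = modes[0] / omega[0]
v = np.zeros(N)

for step in range(n_steps):
    if step % 50_000 == 0:
        print(step, np.round(mode_energies(x, v)[:4], 4))
    # Velocity-Verlet (leapfrog) integration step.
    a = accel(x)
    v_half = v + 0.5 * dt * a
    x = x + dt * v_half
    v = v_half + 0.5 * dt * accel(x)
```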
Failure due to quantum effects
The law of equipartition breaks down when the thermal energy is significantly smaller than the spacing between energy levels. Equipartition no longer holds because it is a poor approximation to assume that the energy levels form a smooth continuum, which is required in the derivations of the equipartition theorem above. Historically, the failures of the classical equipartition theorem to explain specific heats and black-body radiation were critical in showing the need for a new theory of matter and radiation, namely, quantum mechanics and quantum field theory.
To illustrate the breakdown of equipartition, consider the average energy in a single (quantum) harmonic oscillator, which was discussed above for the classical case. Neglecting the irrelevant zero-point energy term, since it can be factored out of the exponential functions involved in the probability distribution, the quantum harmonic oscillator energy levels are given by En = nhν, where h is the Planck constant, ν is the fundamental frequency of the oscillator, and n is an integer. The probability of a given energy level being populated in the canonical ensemble is given by its Boltzmann factor P(En) = e^(−nβhν) / Z,
where β = 1/(kB T) and the denominator Z is the partition function, here a geometric series: Z = Σn e^(−nβhν) = 1/(1 − e^(−βhν)).
Its average energy is given by ⟨E⟩ = Σn En P(En) = −(1/Z) dZ/dβ.
Substituting the formula for Z gives the final result ⟨E⟩ = hν / (e^(βhν) − 1).
At high temperatures, when the thermal energy kB T is much greater than the spacing hν between energy levels, the exponential argument βhν is much less than one and the average energy becomes kB T, in agreement with the equipartition theorem (Figure 10). However, at low temperatures, when hν ≫ kB T, the average energy goes to zero—the higher-frequency energy levels are "frozen out" (Figure 10). As another example, the internal excited electronic states of a hydrogen atom do not contribute to its specific heat as a gas at room temperature, since the thermal energy (roughly 0.025 eV) is much smaller than the spacing between the lowest and next higher electronic energy levels (roughly 10 eV).
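The crossover from the classical to the frozen-out regime can be made concrete by evaluating the average-energy formula directly. The sketch below compares hν/(e^(hν/kB T) − 1) with the classical value kB T for an oscillator of frequency 10¹³ Hz, an arbitrary illustrative choice roughly typical of a molecular vibration.

```python
import numpy as np
from scipy.constants import h, k  # Planck and Boltzmann constants

nu = 1.0e13  # oscillator frequency in Hz (illustrative assumption)

def mean_energy_quantum(T):
    """Average energy h*nu / (exp(h*nu/(k*T)) - 1); the zero-point term is omitted."""
    x = h * nu / (k * T)
    return h * nu / np.expm1(x)

for T in [10, 100, 300, 1000, 5000]:
    eq = mean_energy_quantum(T)
    cl = k * T  # classical equipartition value for one harmonic oscillator
    print(f"T = {T:5d} K   quantum <E> = {eq:.3e} J   classical kT = {cl:.3e} J   ratio = {eq / cl:.3f}")
```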
Similar considerations apply whenever the energy level spacing is much larger than the thermal energy. This reasoning was used by Max Planck and Albert Einstein, among others, to resolve the ultraviolet catastrophe of black-body radiation. The paradox arises because there are an infinite number of independent modes of the electromagnetic field in a closed container, each of which may be treated as a harmonic oscillator. If each electromagnetic mode were to have an average energy kB T, there would be an infinite amount of energy in the container. However, by the reasoning above, the average energy in the higher-frequency modes goes to zero as ν goes to infinity; moreover, Planck's law of black-body radiation, which describes the experimental distribution of energy in the modes, follows from the same reasoning.
Other, more subtle quantum effects can lead to corrections to equipartition, such as identical particles and continuous symmetries. The effects of identical particles can be dominant at very high densities and low temperatures. For example, the valence electrons in a metal can have a mean kinetic energy of a few electronvolts, which would normally correspond to a temperature of tens of thousands of kelvins. Such a state, in which the density is high enough that the Pauli exclusion principle invalidates the classical approach, is called a degenerate fermion gas. Such gases are important for the structure of white dwarf and neutron stars. At low temperatures, a fermionic analogue of the Bose–Einstein condensate (in which a large number of identical particles occupy the lowest-energy state) can form; such superfluid electrons are responsible for superconductivity.
See also
Kinetic theory
Quantum statistical mechanics
Notes and references
Further reading
External links
Applet demonstrating equipartition in real time for a mixture of monatomic and diatomic gases
The equipartition theorem in stellar physics, written by Nir J. Shaviv, an associate professor at the Racah Institute of Physics in the Hebrew University of Jerusalem.
Physics theorems
Laws of thermodynamics
Statistical mechanics theorems | Equipartition theorem | [
"Physics",
"Chemistry",
"Mathematics"
] | 7,954 | [
"Theorems in dynamical systems",
"Equations of physics",
"Statistical mechanics theorems",
"Theorems in mathematical physics",
"Thermodynamics",
"Statistical mechanics",
"Laws of thermodynamics",
"Physics theorems"
] |
516,150 | https://en.wikipedia.org/wiki/Transverse%20mode | A transverse mode of electromagnetic radiation is a particular electromagnetic field pattern of the radiation in the plane perpendicular (i.e., transverse) to the radiation's propagation direction. Transverse modes occur in radio waves and microwaves confined to a waveguide, and also in light waves in an optical fiber and in a laser's optical resonator.
Transverse modes occur because of boundary conditions imposed on the wave by the waveguide. For example, a radio wave in a hollow metal waveguide must have zero tangential electric field amplitude at the walls of the waveguide, so the transverse pattern of the electric field of waves is restricted to those that fit between the walls. For this reason, the modes supported by a waveguide are quantized. The allowed modes can be found by solving Maxwell's equations for the boundary conditions of a given waveguide.
Types of modes
Unguided electromagnetic waves in free space, or in a bulk isotropic dielectric, can be described as a superposition of plane waves; these can be described as TEM modes as defined below.
However in any sort of waveguide where boundary conditions are imposed by a physical structure, a wave of a particular frequency can be described in terms of a transverse mode (or superposition of such modes). These modes generally follow different propagation constants. When two or more modes have an identical propagation constant along the waveguide, then there is more than one modal decomposition possible in order to describe a wave with that propagation constant (for instance, a non-central Gaussian laser mode can be equivalently described as a superposition of Hermite-Gaussian modes or Laguerre-Gaussian modes which are described below).
Waveguides
Modes in waveguides can be classified as follows:
Transverse electromagnetic (TEM) modes: neither electric nor magnetic field in the direction of propagation.
Transverse electric (TE) modes: no electric field in the direction of propagation. These are sometimes called H modes because there is only a magnetic field along the direction of propagation (H is the conventional symbol for magnetic field).
Transverse magnetic (TM) modes: no magnetic field in the direction of propagation. These are sometimes called E modes because there is only an electric field along the direction of propagation.
Hybrid modes: non-zero electric and magnetic fields in the direction of propagation.
Hollow metallic waveguides filled with a homogeneous, isotropic material (usually air) support TE and TM modes but not the TEM mode. In coaxial cable energy is normally transported in the fundamental TEM mode. The TEM mode is also usually assumed for most other electrical conductor line formats as well. This is mostly an accurate assumption, but a major exception is microstrip which has a significant longitudinal component to the propagated wave due to the inhomogeneity at the boundary of the dielectric substrate below the conductor and the air above it. In an optical fiber or other dielectric waveguide, modes are generally of the hybrid type.
In rectangular waveguides, rectangular mode numbers are designated by two suffix numbers attached to the mode type, such as TEmn or TMmn, where m is the number of half-wave patterns across the width of the waveguide and n is the number of half-wave patterns across the height of the waveguide. In circular waveguides, circular modes exist and here m is the number of full-wave patterns along the circumference and n is the number of half-wave patterns along the diameter.
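Since only these quantized patterns satisfy the boundary conditions, each mode of a hollow rectangular waveguide propagates only above a cutoff frequency, which for an air-filled guide of width a and height b is fc = (c/2)·√((m/a)² + (n/b)²). The sketch below tabulates the lowest cutoffs for the standard WR-90 dimensions (a = 22.86 mm, b = 10.16 mm), used here purely as a familiar illustrative example.

```python
import math

c = 299_792_458.0          # speed of light in vacuum, m/s
a, b = 22.86e-3, 10.16e-3  # WR-90 inner width and height, m (illustrative example)

def cutoff_hz(m, n):
    """Cutoff frequency of the (m, n) mode of an air-filled rectangular waveguide."""
    return (c / 2.0) * math.sqrt((m / a) ** 2 + (n / b) ** 2)

# TE modes allow m or n to be zero (but not both); TM modes require both to be nonzero,
# so the (1, 1) entry below is shared by TE11 and TM11.
candidates = [(m, n) for m in range(0, 3) for n in range(0, 3) if (m, n) != (0, 0)]
for m, n in sorted(candidates, key=lambda mn: cutoff_hz(*mn)):
    print(f"mode ({m},{n}): cutoff = {cutoff_hz(m, n) / 1e9:6.2f} GHz")
```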
Optical fibers
The number of modes in an optical fiber distinguishes multi-mode optical fiber from single-mode optical fiber. To determine the number of modes in a step-index fiber, the V number needs to be determined: V = k a √(n1² − n2²), where k is the wavenumber, a is the fiber's core radius, and n1 and n2 are the refractive indices of the core and cladding, respectively. Fiber with a V-parameter of less than 2.405 only supports the fundamental mode (a hybrid mode), and is therefore a single-mode fiber, whereas fiber with a higher V-parameter has multiple modes.
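As a worked example of the single-mode criterion, the sketch below evaluates the V parameter for assumed values resembling a standard telecom fiber (core radius 4.1 µm, core index 1.4504, cladding index 1.4447); these numbers are illustrative assumptions rather than data from the text.

```python
import math

def v_number(wavelength, core_radius, n_core, n_clad):
    """V parameter of a step-index fiber: V = (2*pi*a/lambda) * sqrt(n1^2 - n2^2)."""
    return (2.0 * math.pi * core_radius / wavelength) * math.sqrt(n_core**2 - n_clad**2)

# Representative (assumed) values for a standard single-mode telecom fiber.
V = v_number(wavelength=1550e-9, core_radius=4.1e-6, n_core=1.4504, n_clad=1.4447)
print(f"V at 1550 nm = {V:.3f} -> {'single-mode' if V < 2.405 else 'multi-mode'}")

# The same fiber probed at 633 nm has a much larger V and supports several modes.
V_vis = v_number(wavelength=633e-9, core_radius=4.1e-6, n_core=1.4504, n_clad=1.4447)
print(f"V at  633 nm = {V_vis:.3f} -> {'single-mode' if V_vis < 2.405 else 'multi-mode'}")
```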
Decomposition of field distributions into modes is useful because a large number of field amplitude readings can be simplified into a much smaller number of mode amplitudes. Because these modes change over time according to a simple set of rules, it is also possible to anticipate future behavior of the field distribution. These simplifications of complex field distributions ease the signal processing requirements of fiber-optic communication systems.
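The sketch below illustrates this kind of simplification in one dimension. It builds a small orthonormal basis of Hermite–Gauss functions (chosen purely for convenience, not because they are the modes of any particular fiber), computes mode amplitudes by overlap integrals with an arbitrary field profile, and shows that a handful of amplitudes reproduces the sampled field to high accuracy.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def hg_mode(n, x, w=1.0):
    """Normalized 1-D Hermite-Gauss function: H_n(x/w) * exp(-x^2/(2 w^2)), unit L2 norm."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0  # select the n-th physicists' Hermite polynomial
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi) * w)
    return norm * hermval(x / w, coeffs) * np.exp(-x**2 / (2.0 * w**2))

x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]

# An arbitrary smooth field profile to decompose (an offset, slightly distorted Gaussian).
field = np.exp(-((x - 0.7) ** 2) / 2.0) * (1.0 + 0.2 * x)

# Overlap integrals give the mode amplitudes; a few of them capture the whole profile.
orders = range(12)
amps = [np.sum(field * hg_mode(n, x)) * dx for n in orders]
recon = sum(a * hg_mode(n, x) for n, a in zip(orders, amps))

err = np.sqrt(np.sum((field - recon) ** 2) * dx) / np.sqrt(np.sum(field**2) * dx)
print("mode amplitudes:", np.round(amps, 3))
print(f"relative reconstruction error with {len(amps)} modes: {err:.2e}")
```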
The modes in typical low refractive index contrast fibers are usually referred to as LP (linear polarization) modes, which refers to a scalar approximation for the field solution, treating it as if it contains only one transverse field component.
Lasers
In a laser with cylindrical symmetry, the transverse mode patterns are described by a combination of a Gaussian beam profile with a Laguerre polynomial. The modes are denoted TEMpl, where p and l are integers labeling the radial and angular mode orders, respectively. The intensity at a point (r, φ) (in polar coordinates) from the centre of the mode is given by:
I(r, φ) = I0 ρ^l [Lp^l(ρ)]² cos²(lφ) e^(−ρ), where ρ = 2r²/w², Lp^l is the associated Laguerre polynomial of order p and index l, and w is the spot size of the mode corresponding to the Gaussian beam radius.
With p = l = 0, the TEM00 mode is the lowest order. It is the fundamental transverse mode of the laser resonator and has the same form as a Gaussian beam. The pattern has a single lobe, and has a constant phase across the mode. Modes with increasing p show concentric rings of intensity, and modes with increasing l show angularly distributed lobes. In general there are 2l(p + 1) spots in the mode pattern (except for l = 0). The TEM0i* mode, the so-called doughnut mode, is a special case consisting of a superposition of two TEM0i modes (i = 1, 2, 3, …), rotated 360°/4i with respect to one another.
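The intensity formula above can be evaluated directly. The sketch below computes a Laguerre–Gauss TEMpl pattern on a polar grid using SciPy's associated Laguerre polynomials; the mode orders, spot size, and the choice I0 = 1 are arbitrary illustrative values.

```python
import numpy as np
from scipy.special import genlaguerre

def lg_intensity(r, phi, p, l, w):
    """Laguerre-Gauss TEM_pl intensity: rho^l * [L_p^l(rho)]^2 * cos^2(l*phi) * exp(-rho),
    with rho = 2 r^2 / w^2 and the overall constant I0 set to 1."""
    rho = 2.0 * r**2 / w**2
    return rho**l * genlaguerre(p, l)(rho) ** 2 * np.cos(l * phi) ** 2 * np.exp(-rho)

# Evaluate a TEM_12 mode (p = 1, l = 2) on a polar grid with spot size w = 1 (arbitrary units).
r = np.linspace(0.0, 3.0, 301)
phi = np.linspace(0.0, 2.0 * np.pi, 361)
R, PHI = np.meshgrid(r, phi)
I = lg_intensity(R, PHI, p=1, l=2, w=1.0)

# For l > 0 the intensity vanishes on the axis (the "doughnut" hole), and the pattern
# shows p + 1 bright rings in radius modulated by cos^2(l*phi) in angle.
print("on-axis intensity:", float(I[0, 0]))
print("peak intensity   :", float(I.max()))
```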
The overall size of the mode is determined by the Gaussian beam radius w, and this may increase or decrease with the propagation of the beam; however, the modes preserve their general shape during propagation. Higher-order modes are relatively larger compared to the TEM00 mode, and thus the fundamental Gaussian mode of a laser may be selected by placing an appropriately sized aperture in the laser cavity.
In many lasers, the symmetry of the optical resonator is restricted by polarizing elements such as Brewster's angle windows. In these lasers, transverse modes with rectangular symmetry are formed. These modes are designated TEMmn, with m and n being the horizontal and vertical orders of the pattern. The electric field pattern at a point (x, y, z) for a beam propagating along the z-axis is given by
where w0, w(z), R(z), and ζ(z) are the waist, spot size, radius of curvature, and Gouy phase shift as given for a Gaussian beam; E0 is a normalization constant; and Hk is the k-th physicist's Hermite polynomial. The corresponding intensity pattern is
The TEM00 mode corresponds to exactly the same fundamental mode as in the cylindrical geometry. Modes with increasing m and n show lobes appearing in the horizontal and vertical directions, with, in general, (m + 1)(n + 1) lobes present in the pattern. As before, higher-order modes have a larger spatial extent than the 00 mode.
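The rectangular-symmetry patterns can be generated in the same way. The sketch below evaluates a TEMmn intensity profile at the beam waist, where the curvature and Gouy-phase factors reduce to constants, using the physicists' Hermite polynomials from SciPy; the mode orders and spot size are arbitrary illustrative choices and the normalization constant is set to one.

```python
import numpy as np
from scipy.special import hermite

def tem_mn_intensity(x, y, m, n, w):
    """Hermite-Gauss TEM_mn intensity at the waist:
    [H_m(sqrt(2) x / w) H_n(sqrt(2) y / w)]^2 * exp(-2 (x^2 + y^2) / w^2)."""
    Hm, Hn = hermite(m), hermite(n)  # physicists' Hermite polynomials
    field = Hm(np.sqrt(2) * x / w) * Hn(np.sqrt(2) * y / w) * np.exp(-(x**2 + y**2) / w**2)
    return field**2

w = 1.0
x = np.linspace(-3.0, 3.0, 241)
X, Y = np.meshgrid(x, x)
I = tem_mn_intensity(X, Y, m=2, n=1, w=w)

# A TEM_21 pattern has (m + 1)*(n + 1) = 6 lobes: three across x and two across y.
row = I[I.shape[0] // 2 + 20, :]   # a horizontal cut away from the y = 0 nodal line
maxima = np.sum((row[1:-1] > row[:-2]) & (row[1:-1] > row[2:]))
print("lobes along the horizontal cut:", int(maxima))
```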
The phase of each lobe of a TEMmn mode is offset by π radians with respect to its horizontal or vertical neighbours. This is equivalent to the polarization of each lobe being flipped in direction.
The overall intensity profile of a laser's output may be made up from the superposition of any of the allowed transverse modes of the laser's cavity, though often it is desirable to operate only on the fundamental mode.
See also
Normal mode
Longitudinal mode
Laser beam profiler
Spatial filter
Transverse wave
References
External links
Detailed descriptions of laser modes
Wave mechanics
Electromagnetic radiation
Laser science | Transverse mode | [
"Physics"
] | 1,557 | [
"Physical phenomena",
"Electromagnetic radiation",
"Classical mechanics",
"Waves",
"Wave mechanics",
"Radiation"
] |