id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
363,540 | https://en.wikipedia.org/wiki/Serre%E2%80%93Swan%20theorem | In the mathematical fields of topology and K-theory, the Serre–Swan theorem, also called Swan's theorem, relates the geometric notion of vector bundles to the algebraic concept of projective modules and gives rise to a common intuition throughout mathematics: "projective modules over commutative rings are like vector bundles on compact spaces".
The precise formulations of the two theorems differ somewhat. The original theorem, as stated by Jean-Pierre Serre in 1955, is more algebraic in nature, and concerns vector bundles on an algebraic variety over an algebraically closed field (of any characteristic). The complementary variant stated by Richard Swan in 1962 is more analytic, and concerns (real, complex, or quaternionic) vector bundles on a smooth manifold or Hausdorff space.
Differential geometry
Suppose M is a smooth manifold (not necessarily compact), and E is a smooth vector bundle over M. Then Γ(E), the space of smooth sections of E, is a module over C∞(M) (the commutative algebra of smooth real-valued functions on M). Swan's theorem states that this module is finitely generated and projective over C∞(M). In other words, every vector bundle is a direct summand of some trivial bundle: $E \oplus F \cong M \times \mathbb{R}^k$ for some k. The theorem can be proved by constructing a bundle epimorphism from a trivial bundle $M \times \mathbb{R}^k \to E$. This can be done by, for instance, exhibiting sections $s_1, \dots, s_k$ with the property that for each point p, $\{s_i(p)\}$ span the fiber over p.
When M is connected, the converse is also true: every finitely generated projective module over C∞(M) arises in this way from some smooth vector bundle on M. Such a module can be viewed as a smooth function f on M with values in the n × n idempotent matrices for some n. The fiber of the corresponding vector bundle over x is then the range of f(x). If M is not connected, the converse does not hold unless one allows for vector bundles of non-constant rank (which means admitting manifolds of non-constant dimension). For example, if M is a zero-dimensional 2-point manifold, the module $\mathbb{R} \oplus 0$ is finitely generated and projective over $C^\infty(M) \cong \mathbb{R} \times \mathbb{R}$ but is not free, and so cannot correspond to the sections of any (constant-rank) vector bundle over M (all of which are trivial).
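The idempotent-matrix picture can be made concrete. Below is a minimal numerical sketch (assuming Python with numpy; the function name is invented here) of the classic example: a smooth family of rank-1 idempotents over the circle whose ranges sweep out the Möbius line bundle, a projective but non-free $C^\infty(S^1)$-module.

```python
import numpy as np

def p(theta):
    """Rank-1 idempotent (here an orthogonal projection) onto the line
    in R^2 spanned by (cos(theta/2), sin(theta/2))."""
    v = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.outer(v, v)

for theta in np.linspace(0.0, 2 * np.pi, 9):
    P = p(theta)
    assert np.allclose(P @ P, P)                 # idempotent: f(x)^2 = f(x)
    assert np.allclose(p(theta + 2 * np.pi), P)  # well defined on the circle
    assert np.isclose(np.trace(P), 1.0)          # each fiber has rank 1
```

Since the vector flips sign as θ increases by 2π, the projection is well defined on the circle, but the line it projects onto makes a half-twist: the corresponding rank-1 bundle is the Möbius bundle, whose module of sections is projective but not free.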
Another way of stating the above is that for any connected smooth manifold M, the section functor Γ from the category of smooth vector bundles over M to the category of finitely generated, projective C∞(M)-modules is full, faithful, and essentially surjective. Therefore the category of smooth vector bundles on M is equivalent to the category of finitely generated, projective C∞(M)-modules.
Topology
Suppose X is a compact Hausdorff space, and C(X) is the ring of continuous real-valued functions on X. Analogous to the result above, the category of real vector bundles on X is equivalent to the category of finitely generated projective modules over C(X). The same result holds if one replaces "real-valued" by "complex-valued" and "real vector bundle" by "complex vector bundle", but it does not hold if one replaces the field by a totally disconnected field like the rational numbers.
In detail, let Vec(X) be the category of complex vector bundles over X, and let ProjMod(C(X)) be the category of finitely generated projective modules over the C*-algebra C(X). There is a functor Γ : Vec(X) → ProjMod(C(X)) which sends each complex vector bundle E over X to the C(X)-module Γ(X, E) of sections. If $f : E \to F$ is a morphism of vector bundles over X, then post-composition with f takes sections of E to sections of F, and it follows that $f \circ (s \cdot g) = (f \circ s) \cdot g$ for every $g \in C(X)$,
giving the map
$\Gamma(f) : \Gamma(X, E) \to \Gamma(X, F), \quad s \mapsto f \circ s,$
which respects the module structure. Swan's theorem asserts that the functor Γ is an equivalence of categories.
Algebraic geometry
The analogous result in algebraic geometry, due to Serre, applies to vector bundles in the category of affine varieties. Let X be an affine variety with structure sheaf $\mathcal{O}_X$, and let $\mathcal{F}$ be a coherent sheaf of $\mathcal{O}_X$-modules on X. Then $\mathcal{F}$ is the sheaf of germs of a finite-dimensional vector bundle if and only if the space of sections of $\mathcal{F}$, $\Gamma(\mathcal{F}, X)$, is a projective module over the commutative ring $A = \Gamma(\mathcal{O}_X, X)$.
Commutative algebra
Theorems in algebraic topology
Differential topology
K-theory | Serre–Swan theorem | [
"Mathematics"
] | 932 | [
"Theorems in topology",
"Fields of abstract algebra",
"Topology",
"Differential topology",
"Commutative algebra",
"Theorems in algebraic topology"
] |
363,551 | https://en.wikipedia.org/wiki/Universal%20enveloping%20algebra | In mathematics, the universal enveloping algebra of a Lie algebra is the unital associative algebra whose representations correspond precisely to the representations of that Lie algebra.
Universal enveloping algebras are used in the representation theory of Lie groups and Lie algebras. For example, Verma modules can be constructed as quotients of the universal enveloping algebra. In addition, the enveloping algebra gives a precise definition for the Casimir operators. Because Casimir operators commute with all elements of a Lie algebra, they can be used to classify representations. The precise definition also allows the importation of Casimir operators into other areas of mathematics, specifically, those that have a differential algebra. They also play a central role in some recent developments in mathematics. In particular, their dual provides a commutative example of the objects studied in non-commutative geometry, the quantum groups. This dual can be shown, by the Gelfand–Naimark theorem, to contain the C* algebra of the corresponding Lie group. This relationship generalizes to the idea of Tannaka–Krein duality between compact topological groups and their representations.
From an analytic viewpoint, the universal enveloping algebra of the Lie algebra of a Lie group may be identified with the algebra of left-invariant differential operators on the group.
Informal construction
The idea of the universal enveloping algebra is to embed a Lie algebra $\mathfrak{g}$ into an associative algebra $A$ with identity in such a way that the abstract bracket operation in $\mathfrak{g}$ corresponds to the commutator $xy - yx$ in $A$ and the algebra $A$ is generated by the elements of $\mathfrak{g}$. There may be many ways to make such an embedding, but there is a unique "largest" such $A$, called the universal enveloping algebra of $\mathfrak{g}$.
Generators and relations
Let $\mathfrak{g}$ be a Lie algebra, assumed finite-dimensional for simplicity, with basis $X_1, \dots, X_n$. Let $c_{ij}{}^{k}$ be the structure constants for this basis, so that
$[X_i, X_j] = \sum_k c_{ij}{}^{k} X_k.$
Then the universal enveloping algebra is the associative algebra (with identity) generated by elements $x_1, \dots, x_n$ subject to the relations
$x_i x_j - x_j x_i = \sum_k c_{ij}{}^{k} x_k$
and no other relations. Below we will make this "generators and relations" construction more precise by constructing the universal enveloping algebra as a quotient of the tensor algebra $T(\mathfrak{g})$ of $\mathfrak{g}$.
Consider, for example, the Lie algebra sl(2,C), spanned by the matrices
$X = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad Y = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad H = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$
which satisfy the commutation relations $[H, X] = 2X$, $[H, Y] = -2Y$, and $[X, Y] = H$. The universal enveloping algebra of sl(2,C) is then the algebra generated by three elements $x, y, h$ subject to the relations
$hx - xh = 2x, \quad hy - yh = -2y, \quad xy - yx = h,$
and no other relations. We emphasize that the universal enveloping algebra is not the same as (or contained in) the algebra of 2 × 2 matrices. For example, the 2 × 2 matrix $H$ satisfies $H^2 = I$, as is easily verified. But in the universal enveloping algebra, the element $h^2$ does not satisfy $h^2 = 1$ because we do not impose this relation in the construction of the enveloping algebra. Indeed, it follows from the Poincaré–Birkhoff–Witt theorem (discussed below) that the elements $1, h, h^2, h^3, \dots$ are all linearly independent in the universal enveloping algebra.
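These relations are easy to check numerically. The following sketch (assuming numpy) verifies the bracket relations for the matrices above and exhibits the accidental relation $H^2 = I$ that holds for the matrices but is not imposed on the element $h^2$ in the enveloping algebra:

```python
import numpy as np

X = np.array([[0, 1], [0, 0]])
Y = np.array([[0, 0], [1, 0]])
H = np.array([[1, 0], [0, -1]])

def bracket(a, b):
    return a @ b - b @ a

assert np.array_equal(bracket(H, X), 2 * X)   # [H, X] = 2X
assert np.array_equal(bracket(H, Y), -2 * Y)  # [H, Y] = -2Y
assert np.array_equal(bracket(X, Y), H)       # [X, Y] = H

# The 2x2 matrix H happens to satisfy H^2 = I ...
assert np.array_equal(H @ H, np.eye(2, dtype=int))
# ... but h^2 = 1 is not a consequence of the three bracket relations,
# so it does not hold in the universal enveloping algebra.
```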
Finding a basis
In general, elements of the universal enveloping algebra are linear combinations of products of the generators in all possible orders. Using the defining relations of the universal enveloping algebra, we can always re-order those products in a particular order, say with all the factors of $x$ first, then factors of $y$, then factors of $h$. For example, whenever we have a term that contains $yx$ (in the "wrong" order), we can use the relation $xy - yx = h$ to rewrite this as $xy$ minus $h$. Doing this sort of thing repeatedly eventually converts any element into a linear combination of terms in ascending order. Thus, elements of the form
$x^{a} y^{b} h^{c},$
with $a, b, c$ being non-negative integers, span the enveloping algebra. (We allow exponents equal to zero, meaning that we allow terms in which no factors of $x$, $y$, or $h$ occur.) The Poincaré–Birkhoff–Witt theorem, discussed below, asserts that these elements are linearly independent and thus form a basis for the universal enveloping algebra. In particular, the universal enveloping algebra is always infinite dimensional.
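The re-ordering procedure is entirely mechanical, and a small sketch makes this concrete. The following illustrative Python routine (the representation and names are invented here) normal-orders words in the generators x, y, h of U(sl(2,C)) using only the three defining relations; it terminates because each rewriting step either shortens a word or reduces the number of out-of-order pairs:

```python
from collections import defaultdict

ORDER = {'x': 0, 'y': 1, 'h': 2}
# For each out-of-order pair (a, b): ab = ba + [a, b], with
# [y, x] = -h, [h, x] = 2x, [h, y] = -2y.
COMM = {('y', 'x'): [(-1, ('h',))],
        ('h', 'x'): [(2, ('x',))],
        ('h', 'y'): [(-2, ('y',))]}

def normal_order(element):
    """element: dict mapping words (tuples of 'x','y','h') to coefficients.
    Returns the same element written in the PBW basis x^a y^b h^c."""
    result = defaultdict(int)
    stack = list(element.items())
    while stack:
        word, coeff = stack.pop()
        for i in range(len(word) - 1):
            a, b = word[i], word[i + 1]
            if ORDER[a] > ORDER[b]:
                # ab -> ba + [a, b], applied inside the word
                stack.append((word[:i] + (b, a) + word[i + 2:], coeff))
                for c, mid in COMM[(a, b)]:
                    stack.append((word[:i] + mid + word[i + 2:], c * coeff))
                break
        else:
            result[word] += coeff   # word is already in PBW order
    return {w: c for w, c in result.items() if c != 0}

print(normal_order({('y', 'x', 'h'): 1}))
# {('h', 'h'): -1, ('x', 'y', 'h'): 1}, i.e.  y x h = x y h - h^2
```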
The Poincaré–Birkhoff–Witt theorem implies, in particular, that the elements of $\mathfrak{g}$ themselves are linearly independent in the enveloping algebra. It is therefore common—if potentially confusing—to identify the generators $x, y, h$ with the corresponding elements of the original Lie algebra. That is to say, we identify the original Lie algebra as the subspace of its universal enveloping algebra spanned by the generators. Although $\mathfrak{g}$ may be an algebra of matrices, the universal enveloping algebra of $\mathfrak{g}$ does not consist of (finite-dimensional) matrices. In particular, there is no finite-dimensional algebra that contains the universal enveloping algebra of $\mathfrak{g}$; the universal enveloping algebra is always infinite dimensional. Thus, in the case of sl(2,C), if we identify our Lie algebra as a subspace of its universal enveloping algebra, we must not interpret $x$, $y$ and $h$ as 2 × 2 matrices, but rather as symbols with no further properties (other than the commutation relations).
Formalities
The formal construction of the universal enveloping algebra takes the above ideas, and wraps them in notation and terminology that makes it more convenient to work with. The most important difference is that the free associative algebra used above is narrowed to the tensor algebra, so that the product of symbols is understood to be the tensor product. The commutation relations are imposed by constructing a quotient space of the tensor algebra, quotiented by the smallest two-sided ideal containing elements of the form $a \otimes b - b \otimes a - [a, b]$. The universal enveloping algebra is the "largest" unital associative algebra generated by elements of $\mathfrak{g}$ with a Lie bracket compatible with the original Lie algebra.
Formal definition
Recall that every Lie algebra $\mathfrak{g}$ is in particular a vector space. Thus, one is free to construct the tensor algebra $T(\mathfrak{g})$ from it. The tensor algebra is a free algebra: it simply contains all possible tensor products of all possible vectors in $\mathfrak{g}$, without any restrictions whatsoever on those products.
That is, one constructs the space
$T(\mathfrak{g}) = K \oplus \mathfrak{g} \oplus (\mathfrak{g} \otimes \mathfrak{g}) \oplus (\mathfrak{g} \otimes \mathfrak{g} \otimes \mathfrak{g}) \oplus \cdots = \bigoplus_{n=0}^{\infty} \mathfrak{g}^{\otimes n},$
where $\otimes$ is the tensor product, and $\oplus$ is the direct sum of vector spaces. Here, $K$ is the field over which the Lie algebra is defined. From here, through to the remainder of this article, the tensor product is always explicitly shown. Many authors omit it, since, with practice, its location can usually be inferred from context. Here, a very explicit approach is adopted, to minimize any possible confusion about the meanings of expressions.
The first step in the construction is to "lift" the Lie bracket from the Lie algebra (where it is defined) to the tensor algebra (where it is not), so that one can coherently work with the Lie bracket of two tensors. The lifting is done as follows. First, recall that the bracket operation on a Lie algebra is a map $\mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}$ that is bilinear, skew-symmetric and satisfies the Jacobi identity. We wish to define a Lie bracket [-,-] that is a map $T(\mathfrak{g}) \times T(\mathfrak{g}) \to T(\mathfrak{g})$ that is also bilinear, skew symmetric and obeys the Jacobi identity.
The lifting can be done grade by grade. Begin by defining the bracket on $\mathfrak{g} \times \mathfrak{g} \subset T(\mathfrak{g}) \times T(\mathfrak{g})$ as
$[a, b]$ (the bracket of the Lie algebra itself).
This is a consistent, coherent definition, because both sides are bilinear, and both sides are skew symmetric (the Jacobi identity will follow shortly). The above defines the bracket on grade-one elements; it must now be lifted to tensors of arbitrary grade. This is done recursively, by defining
$[a, b \otimes c] = [a, b] \otimes c + b \otimes [a, c]$
and likewise
$[a \otimes b, c] = a \otimes [b, c] + [a, c] \otimes b.$
It is straightforward to verify that the above definition is bilinear, and is skew-symmetric; one can also show that it obeys the Jacobi identity. The final result is that one has a Lie bracket that is consistently defined on all of $T(\mathfrak{g})$; one says that it has been "lifted" to all of $T(\mathfrak{g})$ in the conventional sense of a "lift" from a base space (here, the Lie algebra) to a covering space (here, the tensor algebra).
The result of this lifting is explicitly a Poisson algebra. It is a unital associative algebra with a Lie bracket that is compatible with the Lie algebra bracket; it is compatible by construction. It is not the smallest such algebra, however; it contains far more elements than needed. One can get something smaller by projecting back down. The universal enveloping algebra of $\mathfrak{g}$ is defined as the quotient space
$U(\mathfrak{g}) = T(\mathfrak{g}) / \sim,$
where the equivalence relation $\sim$ is given by
$a \otimes b - b \otimes a \sim [a, b].$
That is, the Lie bracket defines the equivalence relation used to perform the quotienting. The result is still a unital associative algebra, and one can still take the Lie bracket of any two members. Computing the result is straightforward, if one keeps in mind that each element of $U(\mathfrak{g})$ can be understood as a coset: one just takes the bracket as usual, and searches for the coset that contains the result. It is the smallest such algebra; one cannot find anything smaller that still obeys the axioms of an associative algebra.
The universal enveloping algebra is what remains of the tensor algebra after modding out the Poisson algebra structure. (This is a non-trivial statement; the tensor algebra has a rather complicated structure: it is, among other things, a Hopf algebra; the Poisson algebra is likewise rather complicated, with many peculiar properties. It is compatible with the tensor algebra, and so the modding can be performed. The Hopf algebra structure is conserved; this is what leads to its many novel applications, e.g. in string theory. However, for the purposes of the formal definition, none of this particularly matters.)
The construction can be performed in a slightly different (but ultimately equivalent) way. Forget, for a moment, the above lifting, and instead consider the two-sided ideal $I$ generated by elements of the form
$a \otimes b - b \otimes a - [a, b].$
This generator is an element of
$\mathfrak{g} \oplus (\mathfrak{g} \otimes \mathfrak{g}) \subset T(\mathfrak{g}).$
A general member of the ideal $I$ will have the form
$c \otimes (a \otimes b - b \otimes a - [a, b]) \otimes d$
for some $c, d \in T(\mathfrak{g})$. All elements of $I$ are obtained as linear combinations of elements of this form. Clearly, $I \subset T(\mathfrak{g})$ is a subspace. It is an ideal, in that if $x \in I$ and $y \in T(\mathfrak{g})$, then $x \otimes y \in I$ and $y \otimes x \in I$. Establishing that this is an ideal is important, because ideals are precisely those things that one can quotient with; ideals lie in the kernel of the quotienting map. That is, one has the short exact sequence
$0 \to I \to T(\mathfrak{g}) \to T(\mathfrak{g})/I \to 0,$
where each arrow is a linear map, and the kernel of that map is given by the image of the previous map. The universal enveloping algebra can then be defined as
$U(\mathfrak{g}) = T(\mathfrak{g}) / I.$
Superalgebras and other generalizations
The above construction focuses on Lie algebras and on the Lie bracket, and its skewness and antisymmetry. To some degree, these properties are incidental to the construction. Consider instead some (arbitrary) algebra (not a Lie algebra) over a vector space $V$, that is, a vector space endowed with a multiplication $m : V \times V \to V$ that takes elements $a, b$ to $m(a, b)$. If the multiplication is bilinear, then the same construction and definitions can go through. One starts by lifting $m$ up to $T(V)$, so that the lifted $m$ obeys all of the same properties that the base $m$ does – symmetry or antisymmetry or whatever. The lifting is done exactly as before, starting with the product of grade-one elements.
This is consistent precisely because the tensor product is bilinear, and the multiplication is bilinear. The rest of the lift is performed so as to preserve multiplication as a homomorphism: by definition, one specifies the lifted product grade by grade, first of a grade-one element with a higher-grade tensor, and then of two tensors of arbitrary grade. This extension is consistent by appeal to a lemma on free objects: since the tensor algebra is a free algebra, any homomorphism on its generating set can be extended to the entire algebra. Everything else proceeds as described above: upon completion, one has a unital associative algebra; one can take a quotient in either of the two ways described above.
The above is exactly how the universal enveloping algebra for Lie superalgebras is constructed. One need only keep careful track of the sign when permuting elements. In this case, the (anti-)commutator of the superalgebra lifts to an (anti-)commuting Poisson bracket.
Another possibility is to use something other than the tensor algebra as the covering algebra. One such possibility is to use the exterior algebra; that is, to replace every occurrence of the tensor product by the exterior product. If the base algebra is a Lie algebra, then the result is the Gerstenhaber algebra; it is the exterior algebra of the corresponding Lie group. As before, it has a grading naturally coming from the grading on the exterior algebra. (The Gerstenhaber algebra should not be confused with the Poisson superalgebra; both invoke anticommutation, but in different ways.)
The construction has also been generalized for Malcev algebras, Bol algebras and left alternative algebras.
Universal property
The universal enveloping algebra, or rather the universal enveloping algebra together with the canonical map $h : \mathfrak{g} \to U(\mathfrak{g})$, possesses a universal property. Suppose we have any Lie algebra map
$\varphi : \mathfrak{g} \to A$
to a unital associative algebra $A$ (with Lie bracket in $A$ given by the commutator). More explicitly, this means that we assume
$\varphi([x, y]) = \varphi(x)\varphi(y) - \varphi(y)\varphi(x)$
for all $x, y \in \mathfrak{g}$. Then there exists a unique unital algebra homomorphism
$\Phi : U(\mathfrak{g}) \to A$
such that
$\varphi = \Phi \circ h,$
where $h : \mathfrak{g} \to U(\mathfrak{g})$ is the canonical map. (The map $h$ is obtained by embedding $\mathfrak{g}$ into its tensor algebra and then composing with the quotient map to the universal enveloping algebra. This map is an embedding, by the Poincaré–Birkhoff–Witt theorem.)
To put it differently, if $\varphi : \mathfrak{g} \to A$ is a linear map into a unital algebra satisfying $\varphi([x, y]) = \varphi(x)\varphi(y) - \varphi(y)\varphi(x)$, then $\varphi$ extends to an algebra homomorphism of $U(\mathfrak{g})$. Since $U(\mathfrak{g})$ is generated by elements of $\mathfrak{g}$, the map $\Phi$ must be uniquely determined by the requirement that
$\Phi(x_{i_1} \cdots x_{i_n}) = \varphi(x_{i_1}) \cdots \varphi(x_{i_n}).$
The point is that because there are no other relations in the universal enveloping algebra besides those coming from the commutation relations of $\mathfrak{g}$, the map $\Phi$ is well defined, independent of how one writes a given element of $U(\mathfrak{g})$ as a linear combination of products of Lie algebra elements.
The universal property of the enveloping algebra immediately implies that every representation of $\mathfrak{g}$ acting on a vector space $V$ extends uniquely to a representation of $U(\mathfrak{g})$. (Take $A = \mathrm{End}(V)$.) This observation is important because it allows (as discussed below) the Casimir elements to act on $V$. These operators (from the center of $U(\mathfrak{g})$) act as scalars and provide important information about the representations. The quadratic Casimir element is of particular importance in this regard.
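As an illustration (assuming numpy; the normalization of the Casimir element varies by convention), the defining representation $\varphi$ of sl(2,C) on $\mathbb{C}^2$ extends to the enveloping algebra by sending each monomial to the corresponding matrix product, $\Phi(xy) = \varphi(x)\varphi(y)$ and so on. The quadratic Casimir element $C = xy + yx + \tfrac{1}{2}h^2$ is central, so $\Phi(C)$ must act as a scalar on this irreducible representation:

```python
import numpy as np

X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])

# Extend the representation to U(sl(2,C)): monomials go to matrix products.
# The Casimir element C = xy + yx + (1/2) h^2 then acts as a scalar:
C = X @ Y + Y @ X + 0.5 * (H @ H)
assert np.allclose(C, 1.5 * np.eye(2))   # C acts as 3/2 on C^2
```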
Other algebras
Although the canonical construction, given above, can be applied to other algebras, the result, in general, does not have the universal property. Thus, for example, when the construction is applied to Jordan algebras, the resulting enveloping algebra contains the special Jordan algebras, but not the exceptional ones: that is, it does not envelop the Albert algebras. Likewise, the Poincaré–Birkhoff–Witt theorem, below, constructs a basis for an enveloping algebra; it just won't be universal. Similar remarks hold for the Lie superalgebras.
Poincaré–Birkhoff–Witt theorem
The Poincaré–Birkhoff–Witt theorem gives a precise description of $U(\mathfrak{g})$. This can be done in either one of two different ways: either by reference to an explicit vector basis on the Lie algebra, or in a coordinate-free fashion.
Using basis elements
One way is to suppose that the Lie algebra can be given a totally ordered basis, that is, it is the free vector space of a totally ordered set $X$. Recall that a free vector space is defined as the space of all finitely supported functions from a set $X$ to the field $K$ (finitely supported means that only finitely many values are non-zero); it can be given a basis $\{e_a : a \in X\}$ such that $e_a$ is the indicator function for $a \in X$. Let $h : \mathfrak{g} \to T(\mathfrak{g})$ be the injection into the tensor algebra; this is used to give the tensor algebra a basis as well. This is done by lifting: given some arbitrary sequence of basis elements $e_{a_1}, e_{a_2}, \dots$, one takes as basis elements of the tensor algebra all the finite products
$e_{a_1} \otimes e_{a_2} \otimes \cdots \otimes e_{a_n}.$
The Poincaré–Birkhoff–Witt theorem then states that one can obtain a basis for $U(\mathfrak{g})$ from the above, by enforcing the total order of $X$ onto the algebra. That is, $U(\mathfrak{g})$ has a basis
$e_{a_1} \otimes e_{a_2} \otimes \cdots \otimes e_{a_n},$
where $a_1 \leq a_2 \leq \cdots \leq a_n$, the ordering being that of total order on the set $X$. The proof of the theorem involves noting that, if one starts with out-of-order basis elements, these can always be swapped by using the commutator (together with the structure constants). The hard part of the proof is establishing that the final result is unique and independent of the order in which the swaps were performed.
This basis should be easily recognized as the basis of a symmetric algebra. That is, the underlying vector spaces of $U(\mathfrak{g})$ and the symmetric algebra $\mathrm{Sym}(\mathfrak{g})$ are isomorphic, and it is the PBW theorem that shows that this is so. See, however, the section on the algebra of symbols, below, for a more precise statement of the nature of the isomorphism.
It is useful, perhaps, to split the process into two steps. In the first step, one constructs the free Lie algebra: this is what one gets, if one mods out by all commutators, without specifying what the values of the commutators are. The second step is to apply the specific commutation relations from $\mathfrak{g}$. The first step is universal, and does not depend on the specific $\mathfrak{g}$. It can also be precisely defined: the basis elements are given by Hall words, a special case of which are the Lyndon words; these are explicitly constructed to behave appropriately as commutators.
Coordinate-free
One can also state the theorem in a coordinate-free fashion, avoiding the use of total orders and basis elements. This is convenient when there are difficulties in defining the basis vectors, as there can be for infinite-dimensional Lie algebras. It also gives a more natural form that is more easily extended to other kinds of algebras. This is accomplished by constructing a filtration whose limit is the universal enveloping algebra $U(\mathfrak{g})$.
First, a notation is needed for an ascending sequence of subspaces of the tensor algebra. Let
$T_m = K \oplus T^1\mathfrak{g} \oplus \cdots \oplus T^m\mathfrak{g},$
where
$T^m\mathfrak{g} = \mathfrak{g} \otimes \cdots \otimes \mathfrak{g}$
is the $m$-times tensor product of $\mathfrak{g}$. The $T_m$ form a filtration:
$K \subset T_1 \subset T_2 \subset \cdots \subset T_m \subset \cdots$
More precisely, this is a filtered algebra, since the filtration preserves the algebraic properties of the subspaces. Note that the limit of this filtration is the tensor algebra $T(\mathfrak{g})$.
It was already established, above, that quotienting by the ideal is a natural transformation that takes one from $T_m$ to $U_m$. This also works naturally on the subspaces, and so one obtains a filtration $U_m$ whose limit is the universal enveloping algebra $U(\mathfrak{g})$.
Next, define the space
$G_m = U_m / U_{m-1}.$
This is the space $U_m$ modulo all of the subspaces of strictly smaller filtration degree. Note that $G_m$ is not at all the same as the leading term of the filtration, as one might naively surmise. It is not constructed through a set subtraction mechanism associated with the filtration.
Quotienting by $U_{m-1}$ has the effect of setting all Lie commutators defined in $U_m$ to zero. One can see this by observing that the commutator of a pair of elements whose products lie in $U_m$ actually gives an element in $U_{m-1}$. This is perhaps not immediately obvious: to get this result, one must repeatedly apply the commutation relations, and turn the crank. The essence of the Poincaré–Birkhoff–Witt theorem is that it is always possible to do this, and that the result is unique.
Since commutators of elements whose products are defined in $U_m$ lie in $U_{m-1}$, the quotienting that defines $G_m$ has the effect of setting all commutators to zero. What PBW states is that the commutator of elements in $G_m$ is necessarily zero. What is left are the elements that are not expressible as commutators.
In this way, one is led immediately to the symmetric algebra $\mathrm{Sym}(\mathfrak{g})$. This is the algebra where all commutators vanish. It can be defined as a filtration $\mathrm{Sym}_m$ of symmetric tensor products; its limit is the symmetric algebra $\mathrm{Sym}(\mathfrak{g})$. It is constructed by appeal to the same notion of naturality as before. One starts with the same tensor algebra, and just uses a different ideal, the ideal that makes all elements commute:
$a \otimes b - b \otimes a \sim 0.$
Thus, one can view the Poincaré–Birkhoff–Witt theorem as stating that $G = \bigoplus_m G_m$ is isomorphic to the symmetric algebra $\mathrm{Sym}(\mathfrak{g})$, both as a vector space and as a commutative algebra.
The $G_m$ also form a filtered algebra; its limit is $G$. This is the associated graded algebra of the filtration.
The construction above, due to its use of quotienting, implies that the limit of $G$ is isomorphic to $\mathrm{Sym}(\mathfrak{g})$. In more general settings, with loosened conditions, one finds that $\mathrm{Sym}(\mathfrak{g}) \to G$ is a projection, and one then gets PBW-type theorems for the associated graded algebra of a filtered algebra. To emphasize this, the notation $\mathrm{gr}\, U(\mathfrak{g})$ is sometimes used for $G$, serving to remind that it is the graded algebra obtained from the filtered algebra $U(\mathfrak{g})$.
Other algebras
The theorem, applied to Jordan algebras, yields the exterior algebra, rather than the symmetric algebra. In essence, the construction zeros out the anti-commutators. The resulting algebra is an enveloping algebra, but is not universal. As mentioned above, it fails to envelop the exceptional Jordan algebras.
Left-invariant differential operators
Suppose $G$ is a real Lie group with Lie algebra $\mathfrak{g}$. Following the modern approach, we may identify $\mathfrak{g}$ with the space of left-invariant vector fields (i.e., first-order left-invariant differential operators). Specifically, if we initially think of $\mathfrak{g}$ as the tangent space to $G$ at the identity, then each vector in $\mathfrak{g}$ has a unique left-invariant extension. We then identify the vector in the tangent space with the associated left-invariant vector field. Now, the commutator (as differential operators) of two left-invariant vector fields is again a vector field and again left-invariant. We can then define the bracket operation on $\mathfrak{g}$ as the commutator on the associated left-invariant vector fields. This definition agrees with any other standard definition of the bracket structure on the Lie algebra of a Lie group.
We may then consider left-invariant differential operators of arbitrary order. Every such operator can be expressed (non-uniquely) as a linear combination of products of left-invariant vector fields. The collection of all left-invariant differential operators on $G$ forms an algebra, denoted $D(G)$. It can be shown that $D(G)$ is isomorphic to the universal enveloping algebra $U(\mathfrak{g})$.
In the case that $\mathfrak{g}$ arises as the Lie algebra of a real Lie group, one can use left-invariant differential operators to give an analytic proof of the Poincaré–Birkhoff–Witt theorem. Specifically, the algebra $D(G)$ of left-invariant differential operators is generated by elements (the left-invariant vector fields) that satisfy the commutation relations of $\mathfrak{g}$. Thus, by the universal property of the enveloping algebra, $D(G)$ is a quotient of $U(\mathfrak{g})$. Thus, if the PBW basis elements are linearly independent in $D(G)$—which one can establish analytically—they must certainly be linearly independent in $U(\mathfrak{g})$. (And, at this point, the isomorphism of $U(\mathfrak{g})$ with $D(G)$ is apparent.)
Algebra of symbols
The underlying vector space of $U(\mathfrak{g})$ may be given a new algebra structure so that $U(\mathfrak{g})$ and $\mathrm{Sym}(\mathfrak{g})$ are isomorphic as associative algebras. This leads to the concept of the algebra of symbols: the space of symmetric polynomials, endowed with a product, the $\star$, that places the algebraic structure of the Lie algebra onto what is otherwise a standard associative algebra. That is, what the PBW theorem obscures (the commutation relations) the algebra of symbols restores into the spotlight.
The algebra is obtained by taking elements of $U(\mathfrak{g})$ and replacing each generator $e_i$ by an indeterminate, commuting variable $t_i$ to obtain the space of symmetric polynomials over the field $K$. Indeed, the correspondence is trivial: one simply substitutes the symbol $t_i$ for $e_i$. The resulting polynomial is called the symbol of the corresponding element of $U(\mathfrak{g})$. The inverse map is the map
$w : \mathrm{Sym}(\mathfrak{g}) \to U(\mathfrak{g})$
that replaces each symbol $t_i$ by $e_i$. The algebraic structure is obtained by requiring that the product $\star$ act as an isomorphism, that is, so that
$w(p \star q) = w(p) \otimes w(q)$
for polynomials $p, q$.
The primary issue with this construction is that $w(p) \otimes w(q)$ is not trivially, inherently a member of $U(\mathfrak{g})$, as written, and that one must first perform a tedious reshuffling of the basis elements (applying the structure constants as needed) to obtain an element of $U(\mathfrak{g})$ in the properly ordered basis. An explicit expression for this product can be given: this is the Berezin formula. It follows essentially from the Baker–Campbell–Hausdorff formula for the product of two elements of a Lie group.
A closed form expression is given by
where
and is just in the chosen basis.
The universal enveloping algebra of the Heisenberg algebra is the Weyl algebra (modulo the relation that the center be the unit); here, the $\star$-product is called the Moyal product.
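This realization is easy to exhibit (a sketch assuming sympy; sign conventions for the bracket vary). Taking $q$ to act as multiplication by $t$ and $p$ as $d/dt$ on functions of $t$ sends the central element of the Heisenberg algebra to 1, realizing the quotient as the Weyl algebra:

```python
import sympy as sp

t = sp.symbols('t')
f = sp.Function('f')(t)

pq = sp.diff(t * f, t)      # p(q f) = f + t f'
qp = t * sp.diff(f, t)      # q(p f) = t f'
assert sp.simplify(pq - qp - f) == 0   # [p, q] f = f, i.e. [p, q] = 1
```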
Representation theory
The universal enveloping algebra preserves the representation theory: the representations of $\mathfrak{g}$ correspond in a one-to-one manner to the modules over $U(\mathfrak{g})$. In more abstract terms, the abelian category of all representations of $\mathfrak{g}$ is isomorphic to the abelian category of all left modules over $U(\mathfrak{g})$.
The representation theory of semisimple Lie algebras rests on the observation that there is an isomorphism, known as the Kronecker product:
$U(\mathfrak{g}_1 \oplus \mathfrak{g}_2) \cong U(\mathfrak{g}_1) \otimes U(\mathfrak{g}_2)$
for Lie algebras $\mathfrak{g}_1, \mathfrak{g}_2$. The isomorphism follows from a lifting of the embedding
$\mathfrak{g}_1 \oplus \mathfrak{g}_2 \to U(\mathfrak{g}_1) \otimes U(\mathfrak{g}_2), \quad x_1 \oplus x_2 \mapsto h_1(x_1) \otimes 1 + 1 \otimes h_2(x_2),$
where
$h_i : \mathfrak{g}_i \to U(\mathfrak{g}_i)$
is just the canonical embedding (with subscripts, respectively for algebras one and two). It is straightforward to verify that this embedding lifts, given the prescription above. See, however, the discussion of the bialgebra structure in the article on tensor algebras for a review of some of the finer points of doing so: in particular, the shuffle product employed there corresponds to the Wigner–Racah coefficients, i.e. the 6j and 9j symbols, etc.
Also important is that the universal enveloping algebra of a free Lie algebra is isomorphic to the free associative algebra.
Construction of representations typically proceeds by building the Verma modules of the highest weights.
In a typical context where is acting by infinitesimal transformations, the elements of act like differential operators, of all orders. (See, for example, the realization of the universal enveloping algebra as left-invariant differential operators on the associated group, as discussed above.)
Casimir operators
The center of $U(\mathfrak{g})$ can be identified with the centralizer of $\mathfrak{g}$ in $U(\mathfrak{g})$. Any element of the center must commute with all of $U(\mathfrak{g})$, and in particular with the canonical embedding of $\mathfrak{g}$ into $U(\mathfrak{g})$. Because of this, the center is directly useful for classifying representations of $\mathfrak{g}$. For a finite-dimensional semisimple Lie algebra, the Casimir operators form a distinguished basis of the center $Z(U(\mathfrak{g}))$. These may be constructed as follows.
The center corresponds to linear combinations of all elements $c$ that commute with all elements $x \in \mathfrak{g}$, that is, for which $[c, x] = 0$. That is, they are in the kernel of $\mathrm{ad}$. Thus, a technique is needed for computing that kernel. What we have is the action of the adjoint representation on $\mathfrak{g}$; we need it on $U(\mathfrak{g})$. The easiest route is to note that $\mathrm{ad}_x$ is a derivation, and that the space of derivations can be lifted to $T(\mathfrak{g})$ and thus to $U(\mathfrak{g})$. This implies that both of these are differential algebras.
By definition, $D$ is a derivation on $\mathfrak{g}$ if it obeys Leibniz's law:
$D([a, b]) = [D(a), b] + [a, D(b)].$
(When $\mathfrak{g}$ is the space of left-invariant vector fields on a group $G$, the Lie bracket is that of vector fields.) The lifting is performed by defining
$D(a \otimes b) = D(a) \otimes b + a \otimes D(b).$
Since $\mathrm{ad}_x$ is a derivation for any $x \in \mathfrak{g}$, the above defines $\mathrm{ad}_x$ acting on $T(\mathfrak{g})$ and $U(\mathfrak{g})$.
From the PBW theorem, it is clear that all central elements are linear combinations of symmetric homogeneous polynomials in the basis elements $e_i$ of the Lie algebra. The Casimir invariants are the irreducible homogeneous polynomials of a given, fixed degree. That is, given a basis $e_i$, a Casimir operator of order $m$ has the form
$C_{(m)} = \kappa^{ij \cdots k}\, e_i \otimes e_j \otimes \cdots \otimes e_k,$
where there are $m$ terms in the tensor product, and $\kappa^{ij \cdots k}$ is a completely symmetric tensor of order $m$ belonging to the adjoint representation. That is, $C_{(m)}$ can be (should be) thought of as an element of $\mathrm{Sym}^m \mathfrak{g}$. Recall that the adjoint representation is given directly by the structure constants, and so an explicit indexed form of the above equations can be given, in terms of the Lie algebra basis; this is originally a theorem of Israel Gel'fand. That is, from $[C_{(m)}, e_\ell] = 0$, it follows that
$c_{\ell j}{}^{i}\, \kappa^{j \cdots k} + c_{\ell j}{}^{\cdots}\, \kappa^{i j \cdots k} + \cdots + c_{\ell j}{}^{k}\, \kappa^{i \cdots j} = 0,$
that is, the tensor $\kappa$ is invariant under the adjoint action, where the structure constants are given by
$[e_i, e_j] = c_{ij}{}^{k}\, e_k.$
As an example, the quadratic Casimir operator is
$C_{(2)} = \kappa^{ij}\, e_i \otimes e_j,$
where $\kappa^{ij}$ is the inverse matrix of the Killing form $\kappa_{ij}$. That the Casimir operator belongs to the center follows from the fact that the Killing form is invariant under the adjoint action.
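For a concrete check (a sketch assuming numpy; index conventions as above), one can build the quadratic Casimir of so(3) directly from the structure constants $\varepsilon_{ijk}$: compute the adjoint matrices, the Killing form as a trace, and then verify that the Casimir commutes with every generator:

```python
import numpy as np

# Structure constants of so(3): [e_i, e_j] = sum_k eps_{ijk} e_k.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[j, i, k] = -1.0

# Adjoint representation: (ad e_i)_{kj} = eps_{ijk}.
ad = np.array([eps[i].T for i in range(3)])

# Killing form kappa_{ij} = tr(ad e_i ad e_j); for so(3) it is -2 * identity.
kappa = np.einsum('iab,jba->ij', ad, ad)

# Quadratic Casimir C = kappa^{ij} e_i e_j, evaluated in the adjoint rep.
C = np.einsum('ij,iab,jbc->ac', np.linalg.inv(kappa), ad, ad)

for a in ad:
    assert np.allclose(C @ a, a @ C)   # C is central
assert np.allclose(C, np.eye(3))       # here it even acts as the identity
```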
The center of the universal enveloping algebra of a simple Lie algebra is given in detail by the Harish-Chandra isomorphism.
Rank
The number of algebraically independent Casimir operators of a finite-dimensional semisimple Lie algebra is equal to the rank of that algebra, i.e. is equal to the rank of the Cartan–Weyl basis. This may be seen as follows. For a $d$-dimensional vector space $V$, recall that the determinant is the completely antisymmetric tensor on $V^{\otimes d}$. Given a matrix $M$, one may write the characteristic polynomial of $M$ as
$\det(tI - M) = \sum_{n=0}^{d} p_n\, t^n.$
For a $d$-dimensional Lie algebra, that is, an algebra whose adjoint representation is $d$-dimensional, the linear operator
$\mathrm{ad}_x : \mathfrak{g} \to \mathfrak{g}$
implies that $\mathrm{ad}_x$ is a $d$-dimensional endomorphism, and so one has the characteristic equation
$\det(tI - \mathrm{ad}_x) = \sum_{n=0}^{d} p_n(x)\, t^n$
for elements $x \in \mathfrak{g}$. The non-zero roots of this characteristic polynomial (that are roots for all $x$) form the root system of the algebra. In general, there are only $r$ such roots; this is the rank of the algebra. This implies that the lowest value of $n$ for which the coefficient $p_n(x)$ is non-vanishing is $n = r$.
The $p_n$ are homogeneous polynomials of degree $d - n$. This can be seen in several ways: Given a constant $k$, $\mathrm{ad}$ is linear, so that $\mathrm{ad}_{kx} = k\, \mathrm{ad}_x$. By plugging and chugging in the above, one obtains that
$p_n(kx) = k^{d-n}\, p_n(x).$
By linearity, if one expands in the basis,
$x = \sum_i x^i e_i,$
then the polynomial has the form
$p_n(x) = \kappa_{ij \cdots k}\, x^i x^j \cdots x^k,$
that is, $\kappa_{ij \cdots k}$ is a tensor of rank $d - n$. By linearity and the commutativity of addition, i.e. that $\kappa_{ij \cdots k} = \kappa_{ji \cdots k}$ and so on, one concludes that this tensor must be completely symmetric. This tensor is exactly the Casimir invariant of order $d - n$.
The center of $U(\mathfrak{g})$ corresponds to those elements $c$ for which $[c, x] = 0$ for all $x$; by the above, these clearly correspond to the roots of the characteristic equation. One concludes that the roots form a space of rank $r$ and that the Casimir invariants span this space. That is, the Casimir invariants generate the center $Z(U(\mathfrak{g}))$.
Example: Rotation group SO(3)
The rotation group SO(3) is of rank one, and thus has one Casimir operator. It is three-dimensional, and thus the Casimir operator must have order (3 − 1) = 2, i.e. be quadratic. Of course, this is the Lie algebra of $\mathfrak{so}(3)$. As an elementary exercise, one can compute this directly. Changing notation to $e_i = L_i$, with the $L_i$ belonging to the adjoint rep, a general algebra element is $x L_1 + y L_2 + z L_3$, and direct computation gives
$\det(tI - \mathrm{ad}_{x L_1 + y L_2 + z L_3}) = t^3 + t\,(x^2 + y^2 + z^2).$
The quadratic term can be read off as $\kappa_{ij} = \delta_{ij}$, and so the squared angular momentum operator for the rotation group is that Casimir operator. That is,
$C_{(2)} = L_1^2 + L_2^2 + L_3^2,$
and explicit computation shows that
$[C_{(2)}, L_i] = 0$
after making use of the structure constants
$[L_i, L_j] = \varepsilon_{ijk}\, L_k.$
Example: Pseudo-differential operators
A key observation during the construction of $U(\mathfrak{g})$ above was that it was a differential algebra, by dint of the fact that any derivation on the Lie algebra can be lifted to $U(\mathfrak{g})$. Thus, one is led to a ring of pseudo-differential operators, from which one can construct Casimir invariants.
If the Lie algebra acts on a space of linear operators, such as in Fredholm theory, then one can construct Casimir invariants on the corresponding space of operators. The quadratic Casimir operator corresponds to an elliptic operator.
If the Lie algebra acts on a differentiable manifold, then each Casimir operator corresponds to a higher-order differential on the cotangent manifold, the second-order differential being the most common and most important.
If the action of the algebra is isometric, as would be the case for Riemannian or pseudo-Riemannian manifolds endowed with a metric and the symmetry groups SO(N) and SO(p, q), respectively, one can then contract upper and lower indices (with the metric tensor) to obtain more interesting structures. For the quadratic Casimir invariant, this is the Laplacian. Quartic Casimir operators allow one to square the stress–energy tensor, giving rise to the Yang–Mills action. The Coleman–Mandula theorem restricts the form that these can take, when one considers ordinary Lie algebras. However, the Lie superalgebras are able to evade the premises of the Coleman–Mandula theorem, and can be used to mix together space and internal symmetries.
Examples in particular cases
If $\mathfrak{g}$ is the Lie algebra $\mathfrak{sl}(2, K)$, then it has a basis of matrices
$e = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad f = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$
which satisfy the following identities under the standard bracket: $[h, e] = 2e$, $[h, f] = -2f$, and $[e, f] = h$; this shows us that the universal enveloping algebra has the presentation
$U(\mathfrak{g}) = K\langle e, f, h \rangle / (he - eh - 2e,\; hf - fh + 2f,\; ef - fe - h)$
as a non-commutative ring.
If $\mathfrak{g}$ is abelian (that is, the bracket is always 0), then $U(\mathfrak{g})$ is commutative; and if a basis of the vector space $\mathfrak{g}$ has been chosen, then $U(\mathfrak{g})$ can be identified with the polynomial algebra over $K$, with one variable per basis element.
If $\mathfrak{g}$ is the Lie algebra corresponding to the Lie group $G$, then $U(\mathfrak{g})$ can be identified with the algebra of left-invariant differential operators (of all orders) on $G$, with $\mathfrak{g}$ lying inside it as the left-invariant vector fields, i.e. as first-order differential operators.
To relate the above two cases: if $\mathfrak{g}$ is a vector space regarded as an abelian Lie algebra, the left-invariant differential operators are the constant coefficient operators, which are indeed a polynomial algebra in the partial derivatives of first order.
The center $Z(U(\mathfrak{g}))$ consists of the left- and right-invariant differential operators; in the case of $G$ not commutative, this is often not generated by first-order operators (see for example the Casimir operator of a semi-simple Lie algebra).
Another characterization in Lie group theory is of $U(\mathfrak{g})$ as the convolution algebra of distributions supported only at the identity element $e$ of $G$.
The algebra of differential operators in $n$ variables with polynomial coefficients may be obtained starting with the Lie algebra of the Heisenberg group. See Weyl algebra for this; one must take a quotient, so that the central elements of the Lie algebra act as prescribed scalars.
The universal enveloping algebra of a finite-dimensional Lie algebra is a filtered quadratic algebra.
Hopf algebras and quantum groups
The construction of the group algebra for a given group is in many ways analogous to constructing the universal enveloping algebra for a given Lie algebra. Both constructions are universal and translate representation theory into module theory. Furthermore, both group algebras and universal enveloping algebras carry natural comultiplications that turn them into Hopf algebras. This is made precise in the article on the tensor algebra: the tensor algebra has a Hopf algebra structure on it, and because the Lie bracket is consistent with (obeys the consistency conditions for) that Hopf structure, it is inherited by the universal enveloping algebra.
Given a Lie group $G$, one can construct the vector space $C(G)$ of continuous complex-valued functions on $G$, and turn it into a C*-algebra. This algebra has a natural Hopf algebra structure: given two functions $f, g \in C(G)$, one defines multiplication as
$(fg)(x) = f(x)\, g(x)$
and comultiplication as
$(\Delta f)(x, y) = f(xy),$
the counit as
$\epsilon(f) = f(e),$
and the antipode as
$(Sf)(x) = f(x^{-1}).$
Here, $e$ is the identity element of $G$. Now, the Gelfand–Naimark theorem essentially states that every commutative Hopf algebra is isomorphic to the Hopf algebra of continuous functions on some compact topological group $G$—the theory of compact topological groups and the theory of commutative Hopf algebras are the same. For Lie groups, this implies that $C(G)$ is isomorphically dual to $U(\mathfrak{g})$; more precisely, it is isomorphic to a subspace of the dual space $U(\mathfrak{g})^*$.
These ideas can then be extended to the non-commutative case. One starts by defining the quasi-triangular Hopf algebras, and then performing what is called a quantum deformation to obtain the quantum universal enveloping algebra, or quantum group, for short.
See also
Milnor–Moore theorem
Harish-Chandra homomorphism
References
Shlomo Sternberg (2004), Lie algebras, Harvard University.
Ring theory
Hopf algebras
Representation theory of Lie algebras | Universal enveloping algebra | [
"Mathematics"
] | 7,310 | [
"Fields of abstract algebra",
"Ring theory"
] |
363,628 | https://en.wikipedia.org/wiki/Tensor%20algebra | In mathematics, the tensor algebra of a vector space V, denoted T(V) or T•(V), is the algebra of tensors on V (of any rank) with multiplication being the tensor product. It is the free algebra on V, in the sense of being left adjoint to the forgetful functor from algebras to vector spaces: it is the "most general" algebra containing V, in the sense of the corresponding universal property (see below).
The tensor algebra is important because many other algebras arise as quotient algebras of T(V). These include the exterior algebra, the symmetric algebra, Clifford algebras, the Weyl algebra and universal enveloping algebras.
The tensor algebra also has two coalgebra structures; one simple one, which does not make it a bialgebra, but does lead to the concept of a cofree coalgebra, and a more complicated one, which yields a bialgebra, and can be extended by giving an antipode to create a Hopf algebra structure.
Note: In this article, all algebras are assumed to be unital and associative. The unit is explicitly required to define the coproduct.
Construction
Let V be a vector space over a field K. For any nonnegative integer k, we define the kth tensor power of V to be the tensor product of V with itself k times:
$T^k V = V^{\otimes k} = V \otimes V \otimes \cdots \otimes V.$
That is, $T^k V$ consists of all tensors on V of order k. By convention $T^0 V$ is the ground field K (as a one-dimensional vector space over itself).
We then construct T(V) as the direct sum of $T^k V$ for k = 0, 1, 2, …:
$T(V) = \bigoplus_{k=0}^{\infty} T^k V = K \oplus V \oplus (V \otimes V) \oplus (V \otimes V \otimes V) \oplus \cdots$
The multiplication in T(V) is determined by the canonical isomorphism
$T^k V \otimes T^\ell V \to T^{k+\ell} V$
given by the tensor product, which is then extended by linearity to all of T(V). This multiplication rule implies that the tensor algebra T(V) is naturally a graded algebra with $T^k V$ serving as the grade-k subspace. This grading can be extended to a Z-grading by appending zero subspaces for negative integers k.
The construction generalizes in a straightforward manner to the tensor algebra of any module M over a commutative ring. If R is a non-commutative ring, one can still perform the construction for any R-R bimodule M. (It does not work for ordinary R-modules because the iterated tensor products cannot be formed.)
Adjunction and universal property
The tensor algebra is also called the free algebra on the vector space V, and is functorial; this means that any linear map $f : V \to W$ between K-vector spaces extends to an algebra homomorphism $T(f) : T(V) \to T(W)$, making T a functor from the category of K-vector spaces to the category of associative algebras. Similarly with other free constructions, the functor T is left adjoint to the forgetful functor that sends each associative K-algebra to its underlying vector space.
Explicitly, the tensor algebra satisfies the following universal property, which formally expresses the statement that it is the most general algebra containing V:
Any linear map $f : V \to A$ from V to an associative algebra A over K can be uniquely extended to an algebra homomorphism $\bar{f} : T(V) \to A$ satisfying $\bar{f} \circ i = f$.
Here $i$ is the canonical inclusion of V into T(V). As for other universal properties, the tensor algebra can be defined as the unique algebra satisfying this property (specifically, it is unique up to a unique isomorphism), but this definition requires one to prove that an object satisfying this property exists.
The above universal property implies that T is a functor from the category of vector spaces over K, to the category of K-algebras. This means that any linear map between K-vector spaces U and W extends uniquely to a K-algebra homomorphism from T(U) to T(W).
Non-commutative polynomials
If V has finite dimension n, another way of looking at the tensor algebra is as the "algebra of polynomials over K in n non-commuting variables". If we take basis vectors for V, those become non-commuting variables (or indeterminates) in T(V), subject to no constraints beyond associativity, the distributive law and K-linearity.
Note that the algebra of polynomials on V is not T(V), but rather T(V∗): a (homogeneous) linear function on V is an element of the dual space V∗; for example, coordinates on a vector space are covectors, as they take in a vector and give out a scalar (the given coordinate of the vector).
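Computer algebra systems make this picture directly usable. A short sketch assuming sympy: declaring symbols non-commutative yields exactly the free-algebra behavior, where $xy$ and $yx$ remain distinct:

```python
import sympy as sp

x, y = sp.symbols('x y', commutative=False)

p = sp.expand((x + y) ** 2)
print(p)   # x**2 + x*y + y*x + y**2  -- the cross terms do not merge
```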
Quotients
Because of the generality of the tensor algebra, many other algebras of interest can be constructed by starting with the tensor algebra and then imposing certain relations on the generators, i.e. by constructing certain quotient algebras of T(V). Examples of this are the exterior algebra, the symmetric algebra, Clifford algebras, the Weyl algebra and universal enveloping algebras.
Coalgebra
The tensor algebra has two different coalgebra structures. One is compatible with the tensor product, and thus can be extended to a bialgebra, and can further be extended with an antipode to a Hopf algebra structure. The other structure, although simpler, cannot be extended to a bialgebra. The first structure is developed immediately below; the second structure is given in the section on the cofree coalgebra, further down.
The development provided below can be equally well applied to the exterior algebra, using the wedge symbol in place of the tensor symbol ; a sign must also be kept track of, when permuting elements of the exterior algebra. This correspondence also lasts through the definition of the bialgebra, and on to the definition of a Hopf algebra. That is, the exterior algebra can also be given a Hopf algebra structure.
Similarly, the symmetric algebra can also be given the structure of a Hopf algebra, in exactly the same fashion, by replacing everywhere the tensor product $\otimes$ by the symmetrized tensor product $\otimes_{\mathrm{Sym}}$, i.e. that product where
$v \otimes_{\mathrm{Sym}} w = \tfrac{1}{2}(v \otimes w + w \otimes v).$
In each case, this is possible because the alternating product and the symmetric product obey the required consistency conditions for the definition of a bialgebra and Hopf algebra; this can be explicitly checked in the manner below. Whenever one has a product obeying these consistency conditions, the construction goes through; insofar as such a product gave rise to a quotient space, the quotient space inherits the Hopf algebra structure.
In the language of category theory, one says that there is a functor T from the category of K-vector spaces to the category of K-associative algebras. But there is also a functor Λ taking vector spaces to the category of exterior algebras, and a functor Sym taking vector spaces to symmetric algebras. There is a natural map from T to each of these. Verifying that quotienting preserves the Hopf algebra structure is the same as verifying that the maps are indeed natural.
Coproduct
The coalgebra is obtained by defining a coproduct or diagonal operator
$\Delta : TV \to TV \boxtimes TV.$
Here, $TV$ is used as a short-hand for $T(V)$ to avoid an explosion of parentheses. The symbol $\boxtimes$ is used to denote the "external" tensor product, needed for the definition of a coalgebra. It is being used to distinguish it from the "internal" tensor product $\otimes$, which is already being used to denote multiplication in the tensor algebra (see the section Multiplication, below, for further clarification on this issue). In order to avoid confusion between these two symbols, most texts will replace $\boxtimes$ by a plain dot, or even drop it altogether, with the understanding that it is implied from context. This then allows the symbol $\otimes$ to be used in place of the $\boxtimes$ symbol. This is not done below, and the two symbols are used independently and explicitly, so as to show the proper location of each. The result is a bit more verbose, but should be easier to comprehend.
The definition of the operator $\Delta$ is most easily built up in stages, first by defining it for elements $v \in V \subset TV$ and then by homomorphically extending it to the whole algebra. A suitable choice for the coproduct is then
$\Delta : v \mapsto v \boxtimes 1 + 1 \boxtimes v$
and
$\Delta : 1 \mapsto 1 \boxtimes 1,$
where $1 \in K = T^0 V$ is the unit of the field K. By linearity, one obviously has
$\Delta(k) = k\,(1 \boxtimes 1)$
for all $k \in K$. It is straightforward to verify that this definition satisfies the axioms of a coalgebra: that is, that
$(\mathrm{id}_{TV} \boxtimes \Delta) \circ \Delta = (\Delta \boxtimes \mathrm{id}_{TV}) \circ \Delta,$
where $\mathrm{id}_{TV}$ is the identity map on $TV$. Indeed, one gets
$((\mathrm{id} \boxtimes \Delta) \circ \Delta)(v) = v \boxtimes 1 \boxtimes 1 + 1 \boxtimes v \boxtimes 1 + 1 \boxtimes 1 \boxtimes v$
and likewise for the other side. At this point, one could invoke a lemma, and say that $\Delta$ extends trivially, by linearity, to all of $TV$, because $TV$ is a free object, $V$ is a generator of the free algebra, and $\Delta$ is a homomorphism. However, it is insightful to provide explicit expressions. So, for $v \otimes w \in T^2 V$, one has (by definition) the homomorphism
$\Delta(v \otimes w) = \Delta(v)\, \Delta(w),$
with the product on $TV \boxtimes TV$ taken componentwise. Expanding, one has
$\Delta(v \otimes w) = (v \otimes w) \boxtimes 1 + v \boxtimes w + w \boxtimes v + 1 \boxtimes (v \otimes w).$
In the above expansion, there is no need to ever write $1 \otimes v$, as this is just plain-old scalar multiplication in the algebra; that is, one trivially has that $1 \otimes v = v$.
The extension above preserves the algebra grading. That is,
$\Delta : T^2 V \to \bigoplus_{p=0}^{2} T^p V \boxtimes T^{2-p} V.$
Continuing in this fashion, one can obtain an explicit expression for the coproduct acting on a homogeneous element of order m:
$\Delta(x_1 \otimes \cdots \otimes x_m) = \sum_{p=0}^{m} \sum_{\sigma \in \mathrm{Sh}(p, m-p)} \left( x_{\sigma(1)} \otimes \cdots \otimes x_{\sigma(p)} \right) \boxtimes \left( x_{\sigma(p+1)} \otimes \cdots \otimes x_{\sigma(m)} \right),$
where ш (the Cyrillic letter sha) is the symbol conventionally used to denote the shuffle product; the shuffling is expressed in the second summation, which is taken over all (p, m − p)-shuffles, written above as $\mathrm{Sh}(p, m-p)$. The shuffle set is
$\mathrm{Sh}(p, m-p) = \{ \sigma \in S_m : \sigma(1) < \cdots < \sigma(p) \ \text{and} \ \sigma(p+1) < \cdots < \sigma(m) \}.$
By convention, one takes Sh(m, 0) and Sh(0, m) to equal $\{\mathrm{id} : \{1, \dots, m\} \to \{1, \dots, m\}\}$. It is also convenient to take the pure tensor products $x_{\sigma(1)} \otimes \cdots \otimes x_{\sigma(p)}$ and $x_{\sigma(p+1)} \otimes \cdots \otimes x_{\sigma(m)}$
to equal 1 for p = 0 and p = m, respectively (the empty product in $TV$). The shuffle follows directly from the first axiom of a co-algebra: the relative order of the elements is preserved in the riffle shuffle: the riffle shuffle merely splits the ordered sequence into two ordered sequences, one on the left, and one on the right.
Equivalently,
$\Delta(x_1 \otimes \cdots \otimes x_m) = \sum_{S \subseteq \{1, \dots, m\}} x_S \boxtimes x_{S^c},$
where $x_S = x_{i_1} \otimes \cdots \otimes x_{i_p}$ for $S = \{i_1 < \cdots < i_p\}$ and $x_{S^c}$ is the analogous ordered product over the complementary subset; the products are in $TV$, and the sum is over all subsets S of $\{1, \dots, m\}$.
As before, the algebra grading is preserved:
$\Delta : T^m V \to \bigoplus_{p=0}^{m} T^p V \boxtimes T^{m-p} V.$
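The subset description translates directly into code. A small sketch (plain Python; names invented here) computing the coproduct of a pure tensor as a formal sum of (left, right) pairs:

```python
from itertools import combinations

def coproduct(word):
    """Coproduct of a pure tensor x_1 (x) ... (x) x_m, returned as the list
    of (left word, right word) pairs: each subset S of positions goes to
    the left factor (in order) and its complement to the right."""
    m = len(word)
    terms = []
    for p in range(m + 1):
        for S in combinations(range(m), p):
            left = tuple(word[i] for i in S)
            right = tuple(word[i] for i in range(m) if i not in S)
            terms.append((left, right))
    return terms

for left, right in coproduct(('v1', 'v2')):
    print(left, '(x)', right)
# ()           (x) ('v1', 'v2')
# ('v1',)      (x) ('v2',)
# ('v2',)      (x) ('v1',)
# ('v1', 'v2') (x) ()
```

Note that the middle terms include the shuffled pair; this is precisely the term that the simpler coproduct of the cofree coalgebra, below, lacks.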
Counit
The counit is given by the projection of the field component out from the algebra. This can be written as $\epsilon : v \mapsto 0$ for $v \in V$ and $\epsilon : k \mapsto k$ for $k \in K$. By homomorphism under the tensor product $\otimes$, this extends to
$\epsilon : x \mapsto 0$
for all $x \in T^k V$ with $k \geq 1$.
It is a straightforward matter to verify that this counit satisfies the needed axiom for the coalgebra:
$(\mathrm{id} \boxtimes \epsilon) \circ \Delta = \mathrm{id} = (\epsilon \boxtimes \mathrm{id}) \circ \Delta.$
Working this explicitly, one has
$((\mathrm{id} \boxtimes \epsilon) \circ \Delta)(v) = (\mathrm{id} \boxtimes \epsilon)(v \boxtimes 1 + 1 \boxtimes v) = v \boxtimes 1 \cong v,$
where, for the last step, one has made use of the isomorphism $TV \boxtimes K \cong TV$, as is appropriate for the defining axiom of the counit.
Bialgebra
A bialgebra defines both multiplication, and comultiplication, and requires them to be compatible.
Multiplication
Multiplication is given by an operator
$m : TV \boxtimes TV \to TV,$
which, in this case, was already given as the "internal" tensor product. That is,
$m(a \boxtimes b) = a \otimes b.$
The above should make it clear why the $\boxtimes$ symbol needs to be used: the $\boxtimes$ was actually one and the same thing as $\otimes$; and notational sloppiness here would lead to utter chaos. To strengthen this: the tensor product $\otimes$ of the tensor algebra corresponds to the multiplication used in the definition of an algebra, whereas the tensor product $\boxtimes$ is the one required in the definition of comultiplication in a coalgebra. These two tensor products are not the same thing!
Unit
The unit for the algebra
$\eta : K \to TV$
is just the embedding, so that
$\eta : k \mapsto k,$
with $k$ viewed as an element of $T^0 V = K$. That the unit is compatible with the tensor product $\otimes$ is "trivial": it is just part of the standard definition of the tensor product of vector spaces. That is, $k \otimes v = kv = v \otimes k$ for field element k and any $v \in TV$. More verbosely, the axioms for an associative algebra require the two homomorphisms (or commuting diagrams)
$m \circ (\eta \boxtimes \mathrm{id})(k \boxtimes v) = kv$
on $K \boxtimes TV \cong TV$, and that symmetrically, on $TV \boxtimes K \cong TV$, that
$m \circ (\mathrm{id} \boxtimes \eta)(v \boxtimes k) = kv,$
where the right-hand side of these equations should be understood as the scalar product.
Compatibility
The unit and counit, and multiplication and comultiplication, all have to satisfy compatibility conditions. It is straightforward to see that
$\epsilon \circ \eta = \mathrm{id}_K.$
Similarly, the unit is compatible with comultiplication:
$\Delta \circ \eta = \eta \boxtimes \eta.$
The above requires the use of the isomorphism $K \cong K \boxtimes K$ in order to work; without this, one loses linearity. Component-wise,
$(\Delta \circ \eta)(k) = k\,(1 \boxtimes 1),$
with the right-hand side making use of the isomorphism.
Multiplication and the counit are compatible:
$(\epsilon \circ m)(x \boxtimes y) = \epsilon(x)\, \epsilon(y) = 0$
whenever x or y are not elements of $K$, and otherwise, one has scalar multiplication on the field: $\epsilon(k \otimes \ell) = k\ell$. The most difficult to verify is the compatibility of multiplication and comultiplication:
$\Delta \circ m = (m \boxtimes m) \circ (\mathrm{id} \boxtimes \tau \boxtimes \mathrm{id}) \circ (\Delta \boxtimes \Delta),$
where $\tau(x \boxtimes y) = y \boxtimes x$ exchanges elements. The compatibility condition only needs to be verified on $V \subset TV$; the full compatibility follows as a homomorphic extension to all of $TV$. The verification is verbose but straightforward; it is not given here, except for the final result:
$\Delta(v \otimes w) = v \boxtimes w + w \boxtimes v + (v \otimes w) \boxtimes 1 + 1 \boxtimes (v \otimes w).$
For $\Delta(v \otimes w)$, an explicit expression was given in the coalgebra section, above.
Hopf algebra
The Hopf algebra adds an antipode to the bialgebra axioms. The antipode on $k \in K$ is given by
$S(k) = k.$
This is sometimes called the "anti-identity". The antipode on $v \in V$ is given by
$S(v) = -v$
and on $v \otimes w \in T^2 V$ by
$S(v \otimes w) = S(w) \otimes S(v) = w \otimes v.$
This extends homomorphically to
$S(x_1 \otimes \cdots \otimes x_m) = (-1)^m\, x_m \otimes \cdots \otimes x_1.$
Compatibility
Compatibility of the antipode with multiplication and comultiplication requires that
$m \circ (S \boxtimes \mathrm{id}) \circ \Delta = \eta \circ \epsilon = m \circ (\mathrm{id} \boxtimes S) \circ \Delta.$
This is straightforward to verify componentwise on $k \in K$:
$(m \circ (S \boxtimes \mathrm{id}) \circ \Delta)(k) = m(k \boxtimes 1) = k = (\eta \circ \epsilon)(k).$
Similarly, on $v \in V$:
$(m \circ (S \boxtimes \mathrm{id}) \circ \Delta)(v) = m(S(v) \boxtimes 1 + S(1) \boxtimes v) = -v + v = 0 = (\eta \circ \epsilon)(v).$
Recall that
$\epsilon(k) = k$ for $k \in K,$
and that
$\epsilon(x) = 0$
for any $x \in TV$ that is not in $K$.
One may proceed in a similar manner, by homomorphism, verifying that the antipode inserts the appropriate cancellative signs in the shuffle, starting with the compatibility condition on $T^2 V$ and proceeding by induction.
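The cancellation can be checked mechanically. A sketch (plain Python, consistent with the conventions above) verifying the antipode axiom $m \circ (S \boxtimes \mathrm{id}) \circ \Delta = \eta \circ \epsilon$ on a pure tensor, using the subset form of the coproduct and $S(x_1 \otimes \cdots \otimes x_m) = (-1)^m x_m \otimes \cdots \otimes x_1$:

```python
from itertools import combinations

def coproduct(word):
    """All (left, right) splittings of a pure tensor by subsets of positions."""
    terms = []
    for p in range(len(word) + 1):
        for S in combinations(range(len(word)), p):
            terms.append((tuple(word[i] for i in S),
                          tuple(word[i] for i in range(len(word)) if i not in S)))
    return terms

def antipode(word):
    """S(x_1 ... x_m) = (-1)^m x_m ... x_1, as (sign, reversed word)."""
    return (-1) ** len(word), word[::-1]

def antipode_axiom_defect(word):
    """Sum of sign * S(left) . right over all coproduct terms; the result
    must vanish for any word of length >= 1, since eta(eps(word)) = 0."""
    acc = {}
    for left, right in coproduct(word):
        sign, rev = antipode(left)
        acc[rev + right] = acc.get(rev + right, 0) + sign
    return {w: c for w, c in acc.items() if c != 0}

assert antipode_axiom_defect(('v1',)) == {}
assert antipode_axiom_defect(('v1', 'v2')) == {}   # all terms cancel
```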
Cofree cocomplete coalgebra
One may define a different coproduct on the tensor algebra, simpler than the one given above. It is given by
$\Delta(x_1 \otimes \cdots \otimes x_m) = \sum_{p=0}^{m} (x_1 \otimes \cdots \otimes x_p) \boxtimes (x_{p+1} \otimes \cdots \otimes x_m).$
Here, as before, one uses the notational trick of taking the empty products, for p = 0 and p = m, to equal $1 \in K$ (recalling that $1 \boxtimes x \cong x$ trivially).
This coproduct gives rise to a coalgebra. It describes a coalgebra that is dual to the algebra structure on T(V∗), where V∗ denotes the dual vector space of linear maps V → F. In the same way that the tensor algebra is a free algebra, the corresponding coalgebra is termed cocomplete co-free. With the usual product this is not a bialgebra. It can be turned into a bialgebra with the product
$x^{(i)} \cdot x^{(j)} = \binom{i+j}{i}\, x^{(i+j)},$
where $\binom{i+j}{i}$ denotes the binomial coefficient. This bialgebra is known as the divided power Hopf algebra.
The difference between this, and the other coalgebra is most easily seen in the $T^2 V$ term. Here, one has that
$\Delta(v \otimes w) = 1 \boxtimes (v \otimes w) + v \boxtimes w + (v \otimes w) \boxtimes 1$
for $v, w \in V$, which is clearly missing the shuffled term $w \boxtimes v$, as compared to before.
See also
Braided vector space
Braided Hopf algebra
Monoidal category
Multilinear algebra
Fock space
Algebras
Multilinear algebra
Tensors
Hopf algebras | Tensor algebra | [
"Mathematics",
"Engineering"
] | 3,031 | [
"Tensors",
"Algebras",
"Mathematical structures",
"Algebraic structures"
] |
363,647 | https://en.wikipedia.org/wiki/Electronic%20civil%20disobedience | Electronic civil disobedience (ECD; also known as cyber civil disobedience or cyber disobedience) can refer to any type of civil disobedience in which the participants use information technology to carry out their actions. Electronic civil disobedience often involves computers and the Internet and may also be known as hacktivism. The term "electronic civil disobedience" was coined in the critical writings of Critical Art Ensemble (CAE), a collective of tactical media artists and practitioners, in their seminal 1996 text, Electronic Civil Disobedience: And Other Unpopular Ideas. Electronic civil disobedience seeks to continue the practices of nonviolent-yet-disruptive protest originally pioneered by American poet Henry David Thoreau, who in 1848 published Civil Disobedience.
A common form of ECD is a coordinated DDoS attack against a specific target, also known as a virtual sit-in. Such virtual sit-ins may be announced on the internet by hacktivist groups like the Electronic Disturbance Theatre and the borderlands Hacklab.
Computerized activism exists at the intersections of politico-social movements and computer-mediated communication. Stefan Wray writes about ECD:
"As hackers become politicized and as activists become computerized, we are going to see an increase in the number of cyber-activists who engage in what will become more widely known as Electronic Civil Disobedience. The same principals of traditional civil disobedience, like trespass and blockage, will still be applied, but more and more these acts will take place in electronic or digital form. The primary site for Electronic Civil Disobedience will be in cyberspace.Jeff Shantz and Jordon Tomblin write that ECD or cyber disobedience merges activism with organization and movement building through online participatory engagement:Cyber disobedience emphasizes direct action, rather than protest, appeals to authority, or simply registering dissent, which directly impedes the capacities of economic and political elites to plan, pursue, or carry out activities that would harm non-elites or restrict the freedoms of people in non-elite communities. Cyber disobedience, unlike much of conventional activism or even civil disobedience, does not restrict actions on the basis of state or corporate acceptance or legitimacy or in terms of legality (which cyber disobedient view largely as biased, corrupt, mechanisms of elites rule). In many cases recently, people and groups involved in online activism or cyber disobedience are also involving themselves in real world actions and organizing. In other cases people and groups who have only been involved in real world efforts are now moving their activism and organizing online as well.
History
The origins of computerized activism extend back in pre-Web history to the mid-1980s. Examples include PeaceNet (1986), a newsgroup service, which allowed political activists to communicate across international borders with relative ease and speed using Bulletin Board Systems and email lists. The term "electronic civil disobedience" was first coined by the Critical Art Ensemble in the context of nomadic conceptions of capital and resistance, an idea that can be traced back to Hakim Bey's (1991) "T. A. Z. The Temporary Autonomous Zone: Ontological Anarchy, Poetic Terrorism" and Gilles Deleuze's and Félix Guattari's (1987) "A Thousand Plateaus". ECD uses temporary - and nomadic - "autonomous zones" as the launch pads from which electronic civil disobedience is activated (for example, temporary websites that announce the ECD action).
Before 1998, ECD remained largely a matter of theoretical musing, or was poorly articulated, as in the Zippies' 1994 call for an "Internet Invasion", which deployed the metaphor of war albeit within the logic of civil disobedience and information activism. Some commentators pinpoint the 1997 Acteal Massacre in Chiapas, Mexico, as a turning point after which the internet infrastructure was viewed not only as a means of communication but also as a site for direct action.
In reaction to the Acteal Massacre, a group called the Electronic Disturbance Theatre (not associated with Autonomedia) created a piece of software called FloodNet, which improved upon early experiments with virtual sit-ins. The Electronic Disturbance Theatre exhibited its SWARM project at the Ars Electronica Festival on Information Warfare, where it launched a three-pronged FloodNet disturbance against the web sites of the Mexican presidency, the Frankfurt Stock Exchange, and the Pentagon, in solidarity with the Zapatista Army of National Liberation: against the Mexican government, against the U.S. military, and against a symbol of international capital. The Acteal Massacre also prompted another group, called the Anonymous Digital Coalition, to post messages calling for cyber attacks against the web sites of five Mexico City based financial institutions, the plan being for thousands of people around the world to simultaneously load these web sites into their web browsers. The Electrohippies flooded the World Trade Organization's site during the protests around the World Trade Organization Ministerial Conference of 1999.
Hacktivism
The terms "electronic civil disobedience" and "hacktivism" may be used synonymously, although some commentators maintain that the difference is that ECD actors do not hide their names, while most hacktivists wish to remain anonymous. Some commentators maintain that ECD uses only legal means, as opposed to the illegal actions used by hacktivists. It is also maintained that hacktivism is done by individuals rather than by specific groups. In practice, the distinction between ECD and hacktivism is not clear-cut.
Ricardo Dominguez of the Electronic Disturbance Theater has been incorrectly referred to by many as a founder of ECD and hacktivism. He is currently an Assistant Professor of Visual Arts at the University of California, San Diego, where he teaches classes on electronic civil disobedience and performance art. His recent project, the Transborder Immigrant Tool, is a hacktivist gesture which has received wide media attention and criticism from anti-immigration groups.
Examples
ECD is often open-source and non-structured, and moves horizontally and non-linearly. For example, virtual sit-ins may be announced on the internet, and participants may have no formal connection with each other, not knowing each other's identity. ECD actors can participate from home, from work, from the university, or from other points of access to the Net.
Electronic civil disobedience generally involves large numbers of people and may use legal and illegal techniques. For example, a single person reloading a website repeatedly is not illegal, but if enough people do it at the same time it can render the website inaccessible. Another type of electronic civil disobedience is the use of the Internet for publicized and deliberate violations of a law that the protesters take issue with, such as copyright law.
Blatant disregard of copyright law by millions of Internet users every day on file sharing networks might also be considered a form of constant ECD, as the people doing it have decided to simply ignore a law that they disagree with.
Blockchain technology has been leveraged by ECD groups to help make them more decentralized, anonymous, and secure.
Intervasion of the UK
In order to draw attention to John Major's Criminal Justice Bill, a group of cyber-activists staged an event in which they "kidnapped" 1960s countercultural hero Timothy Leary at a book launch for Chaos & Cyberculture held on Guy Fawkes Day 1994, and then proceeded to "force him to DDoS government websites". Leary called the event an "Intervasion". The Intervasion was preceded by mass email-bombing and denial-of-service attacks against government servers, with some success. Although ignored by the mainstream media, the event was reported on Free Radio Berkeley.
Grey Tuesday
On February 24, 2004, large-scale intentional copyright infringement occurred in an event called Grey Tuesday, "a day of coordinated civil disobedience". Activists intentionally violated EMI's copyright in The White Album by distributing MP3 files of The Grey Album, a mashup of The White Album with The Black Album, in an attempt to draw public attention to copyright reform issues and anti-copyright ideals. Reportedly over 400 sites participated, including 170 that hosted the album. Jonathan Zittrain, professor of Internet law at Harvard Law School, comments that "As a matter of pure legal doctrine, the Grey Tuesday protest is breaking the law, end of story. But copyright law was written with a particular form of industry in mind. The flourishing of information technology gives amateurs and home-recording artists powerful tools to build and share interesting, transformative, and socially valuable art drawn from pieces of popular culture. There's no place to plug such an important cultural sea change into the current legal regime."
Border Haunt
On July 15, 2011, 667 people from 28 different countries participated in the online collective act of electronic civil disobedience called "Border Haunt", which targeted the policing of the U.S.-Mexico border. Participants collected entries from a database maintained by the Arizona Daily Star that holds the names and descriptions of migrants who died trying to cross the border territory, and then sent those entries into a database run by the company BlueServo which is used to surveil and police the border. As a result, the border was conceptually and symbolically haunted for the duration of the one-day action, as the border policing structure received over 1,000 reports of deceased migrants attempting to cross the border. The Border Haunt action was organized by Ian Alan Paul, a California-based new media artist, and was reported on by Al Jazeera English and the Bay Citizen.
E-Graffiti: Texts in Mourning and Action
In response to the political assassination of the Zapatista teacher Jose Luis Solís López (alias Galeano) in Chiapas, Mexico, Ian Alan Paul and Ricardo Dominguez developed a new form of electronic civil disobedience, used in a distributed online performance on May 24, 2014, as part of the week of action and day of remembrance in solidarity with the Zapatista communities.
When users logged on to the project website, their web browsers sent large numbers of page requests to the server of the Mexican president Enrique Peña Nieto, filling its error logs with lines of text drawn from Don Quixote, communiques from the Zapatista communities, and texts authored by the Critical Art Ensemble. As a kind of e-graffiti and a form of electronic civil disobedience, floods of HTTP traffic were sent from around the world as the books and communiques were written onto the server's error logs several thousand times by different users.
Öppna skolplattformen
In 2020, a Swedish citizen initiative began building an app for accessing data from the City of Stockholm's official school platform. The city had developed the school platform in-house, and the result was a very expensive system (more than $117 million) whose mobile application for parents and employees left users frustrated, with complaints about its complexity and poor usability. In response, some parents decided to build an open-source mobile alternative using the school platform's API. The app was released on February 12, 2021, and all of its code was published under an open-source license on GitHub. The city then began to work against the citizen-made frontend, attempting to block it by obfuscating the official web application's API calls, reporting key people in the citizen project to the police, and calling them out in the press as unlawful.
During most of 2021 the city council and staff maintained their opposition, but saw their costs rising and overwhelming support for the new frontend. The politicians in charge finally chose to step in in the fall of 2021 and opened a collaboration with the parents building the frontend.
Thai Censorship
When the government of Thailand proposed a system to reform the country's network in 2015, it stated that the changes were imperative "to control the inappropriate websites and control the inflow of information." The proposed reform would allow the government to monitor and censor the circulation of the network's traffic. A couple of months before this news, Thailand had undergone a coup, which resulted in the new government taking over major media and banning political gatherings. These events alarmed the people of Thailand, prompting them to organize and act.
Rather than participating in a DDoS attack against the government, which is usually associated with criminal activity, they took to Facebook to gather internet users from around the world. These users occupied Thai government websites in order to overflow their bandwidth, calling the action a "virtual sit-in". The daily average number of users increased by almost 100,000, which prompted the government to announce that it would use the reform proposal not to censor but to study the youth.
See also
Anonymous (group)
E-democracy
Digital rights
Direct action
Ricardo Dominguez (professor)
Information freedom
Internet vigilantism
References
Activism by type
Civil disobedience
Politics and technology
E-democracy | Electronic civil disobedience | [
"Technology"
] | 2,662 | [
"E-democracy",
"Computing and society"
] |
363,695 | https://en.wikipedia.org/wiki/BLAST%20%28biotechnology%29 | In bioinformatics, BLAST (basic local alignment search tool) is an algorithm and program for comparing primary biological sequence information, such as the amino-acid sequences of proteins or the nucleotides of DNA and/or RNA sequences. A BLAST search enables a researcher to compare a subject protein or nucleotide sequence (called a query) with a library or database of sequences, and identify database sequences that resemble the query sequence above a certain threshold. For example, following the discovery of a previously unknown gene in the mouse, a scientist will typically perform a BLAST search of the human genome to see if humans carry a similar gene; BLAST will identify sequences in the human genome that resemble the mouse gene based on similarity of sequence.
Background
BLAST is one of the most widely used bioinformatics programs for sequence searching. It addresses a fundamental problem in bioinformatics research. The heuristic algorithm it uses is much faster than other approaches, such as calculating an optimal alignment. This emphasis on speed is vital to making the algorithm practical on the huge genome databases currently available, although subsequent algorithms can be even faster.
The BLAST program was designed by Eugene Myers, Stephen Altschul, Warren Gish, David J. Lipman and Webb Miller at the NIH. BLAST extended the alignment work of a previously developed program for protein and DNA sequence similarity searches, FASTA, by adding a novel stochastic model developed by Samuel Karlin and Stephen Altschul. They proposed "a method for estimating similarities between the known DNA sequence of one organism with that of another", and their work has been described as "the statistical foundation for BLAST." Subsequently, Altschul, Gish, Miller, Myers, and Lipman designed and implemented the BLAST program, which was published in the Journal of Molecular Biology in 1990 and has been cited over 100,000 times since.
While BLAST is faster than any Smith-Waterman implementation for most cases, it cannot "guarantee the optimal alignments of the query and database sequences" as the Smith-Waterman algorithm does. The Smith-Waterman algorithm was an extension of a previous optimal method, the Needleman–Wunsch algorithm, which was the first sequence alignment algorithm guaranteed to find the best possible alignment. However, the time and space requirements of these optimal algorithms far exceed those of BLAST.
BLAST is more time-efficient than FASTA because it searches only for the more significant patterns in the sequences, yet with comparable sensitivity. This becomes clearer from the description of the BLAST algorithm below.
Examples of other questions that researchers use BLAST to answer are:
Which bacterial species have a protein that is related in lineage to a certain protein with a known amino-acid sequence?
What other genes encode proteins that exhibit structures or motifs such as ones that have just been determined?
BLAST is also often used as part of other algorithms that require approximate sequence matching.
BLAST is available on the web on the NCBI website. Different types of BLASTs are available according to the query sequences and the target databases. Alternative implementations include AB-BLAST (formerly known as WU-BLAST), FSA-BLAST (last updated in 2006), and ScalaBLAST.
The original paper by Altschul, et al. was the most highly cited paper published in the 1990s.
Input
Input sequences (in FASTA or GenBank format), the database to search, and other optional parameters such as the scoring matrix.
Output
BLAST output can be delivered in a variety of formats. These formats include HTML, plain text, and XML formatting. For NCBI's webpage, the default format for output is HTML. When performing a BLAST on NCBI, the results are given in a graphical format showing the hits found, a table showing sequence identifiers for the hits with scoring related data, as well as alignments for the sequence of interest and the hits received with corresponding BLAST scores for these. The easiest to read and most informative of these is probably the table.
If one is attempting to search for a proprietary sequence or simply one that is unavailable in databases available to the general public through sources such as NCBI, there is a BLAST program available for download to any computer, at no cost. This can be found at BLAST+ executables. There are also commercial programs available for purchase. Databases can be found on the NCBI site, as well as on the Index of BLAST databases (FTP).
Process
Using a heuristic method, BLAST finds similar sequences by locating short matches between the two sequences. This process of finding similar sequences is called seeding. It is after this first match that BLAST begins to make local alignments. While attempting to find similarity in sequences, sets of common letters, known as words, are very important. For example, suppose that the sequence contains the following stretch of letters: GLKFA. If a BLAST search were being conducted under normal conditions, the word size would be 3 letters. In this case, using the given stretch of letters, the searched words would be GLK, LKF, and KFA. The heuristic algorithm of BLAST locates all common three-letter words between the sequence of interest and the hit sequence or sequences from the database. This result will then be used to build an alignment. After making words for the sequence of interest, the rest of the words are also assembled. These words must satisfy a requirement of having a score of at least the threshold T when compared using a scoring matrix.
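To make the word-listing just described concrete, here is a minimal Python sketch of the seeding step; the function name and the use of plain strings are illustrative, not part of any actual BLAST implementation:

```python
def query_words(sequence: str, k: int = 3) -> list[str]:
    """List every k-letter word of the query by sliding a window of length k."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

# The stretch of letters used in the example above:
print(query_words("GLKFA"))  # ['GLK', 'LKF', 'KFA']
```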
One commonly used scoring matrix for BLAST searches is BLOSUM62, although the optimal scoring matrix depends on sequence similarity. Once both words and neighborhood words are assembled and compiled, they are compared to the sequences in the database in order to find matches. The threshold score T determines whether or not a particular word will be included in the alignment. Once seeding has been conducted, the alignment, which is only 3 residues long, is extended in both directions by the algorithm used by BLAST. Each extension impacts the score of the alignment by either increasing or decreasing it. If this score is higher than a pre-determined T, the alignment will be included in the results given by BLAST. However, if this score is lower than this pre-determined T, the alignment will cease to extend, preventing areas of poor alignment from being included in the BLAST results. Note that increasing the T score limits the amount of space available to search, decreasing the number of neighborhood words, while at the same time speeding up the process of BLAST.
Algorithm
To run the software, BLAST requires a query sequence to search for, and a sequence to search against (also called the target sequence) or a sequence database containing multiple such sequences. BLAST will find sub-sequences in the database which are similar to subsequences in the query. In typical usage, the query sequence is much smaller than the database, e.g., the query may be one thousand nucleotides while the database is several billion nucleotides.
The main idea of BLAST is that there are often High-scoring Segment Pairs (HSP) contained in a statistically significant alignment. BLAST searches for high scoring sequence alignments between the query sequence and the existing sequences in the database using a heuristic approach that approximates the Smith-Waterman algorithm. However, the exhaustive Smith-Waterman approach is too slow for searching large genomic databases such as GenBank. Therefore, the BLAST algorithm uses a heuristic approach that is less accurate than the Smith-Waterman algorithm but over 50 times faster. The speed and relatively good accuracy of BLAST are among the key technical innovations of the BLAST programs.
An overview of the BLAST algorithm (a protein to protein search) is as follows:
Remove low-complexity region or sequence repeats in the query sequence.
"Low-complexity region" means a region of a sequence composed of few kinds of elements. These regions might give high scores that confuse the program to find the actual significant sequences in the database, so they should be filtered out. The regions will be marked with an X (protein sequences) or N (nucleic acid sequences) and then be ignored by the BLAST program. To filter out the low-complexity regions, the SEG program is used for protein sequences and the program DUST is used for DNA sequences. On the other hand, the program XNU is used to mask off the tandem repeats in protein sequences.
Make a k-letter word list of the query sequence.
Taking k = 3 as an example, we list the words of length 3 in the query protein sequence (k is usually 11 for a DNA sequence) "sequentially", until the last letter of the query sequence is included. The method is illustrated in figure 1.
List the possible matching words.
This step is one of the main differences between BLAST and FASTA. FASTA cares about all of the common words in the database and query sequences that are listed in step 2; however, BLAST only cares about the high-scoring words. The scores are created by comparing the word in the list in step 2 with all the 3-letter words. By using the scoring matrix (substitution matrix) to score the comparison of each residue pair, there are 20^3 possible match scores for a 3-letter word. For example, the scores obtained by comparing PQG with PEG and with PQA are 15 and 12, respectively, with the BLOSUM62 weighting scheme. For DNA words, a match is scored as +5 and a mismatch as -4, or as +2 and -3. After that, a neighborhood word score threshold T is used to reduce the number of possible matching words. The words whose scores are greater than the threshold T will remain in the possible matching words list, while those with lower scores will be discarded. For example, PEG is kept, but PQA is abandoned when T is 13.
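A small sketch of this word-filtering step, assuming only the handful of BLOSUM62 entries needed for the PQG example (a real implementation would load the full 20x20 matrix):

```python
# The few BLOSUM62 entries needed for the example; keys are sorted residue pairs.
BLOSUM62 = {("P", "P"): 7, ("E", "Q"): 2, ("Q", "Q"): 5,
            ("G", "G"): 6, ("A", "G"): 0}

def word_score(a: str, b: str) -> int:
    """Sum the substitution scores over aligned residue pairs."""
    return sum(BLOSUM62[tuple(sorted(pair))] for pair in zip(a, b))

T = 13  # neighborhood word score threshold
for neighbor in ("PEG", "PQA"):
    s = word_score("PQG", neighbor)
    print(neighbor, s, "kept" if s >= T else "discarded")
# PEG 15 kept
# PQA 12 discarded
```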
Organize the remaining high-scoring words into an efficient search tree.
This allows the program to rapidly compare the high-scoring words to the database sequences.
Repeat step 3 to 4 for each k-letter word in the query sequence.
Scan the database sequences for exact matches with the remaining high-scoring words.
The BLAST program scans the database sequences for exact matches to the remaining high-scoring words, such as PEG, at each position. If an exact match is found, this match is used to seed a possible un-gapped alignment between the query and database sequences.
Extend the exact matches to high-scoring segment pair (HSP).
The original version of BLAST stretches a longer alignment between the query and the database sequence in the left and right directions, from the position where the exact match occurred. The extension does not stop until the accumulated total score of the HSP begins to decrease. A simplified example is presented in figure 2.
To save more time, a newer version of BLAST, called BLAST2 or gapped BLAST, has been developed. BLAST2 adopts a lower neighborhood word score threshold to maintain the same level of sensitivity for detecting sequence similarity. Therefore, the list of possible matching words in step 3 becomes longer. Next, the exactly matched regions within distance A from each other on the same diagonal in figure 3 are joined into a longer new region. Finally, the new regions are extended by the same method as in the original version of BLAST, and the HSP (high-scoring segment pair) scores of the extended regions are computed using a substitution matrix as before.
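A simplified sketch of the extension rule described above for the original BLAST, extending a seed to the right only and stopping once the running score drops; real implementations extend in both directions and use an X-drop tolerance rather than stopping at the first decrease:

```python
def extend_right(query, subject, q_pos, s_pos, score):
    """Extend an un-gapped seed rightward from (q_pos, s_pos), keeping
    the best-scoring endpoint seen so far."""
    best = running = 0
    best_len = i = 0
    while q_pos + i < len(query) and s_pos + i < len(subject):
        running += score(query[q_pos + i], subject[s_pos + i])
        i += 1
        if running > best:
            best, best_len = running, i
        elif running < best:  # accumulated score has begun to decrease
            break
    return best, best_len

# e.g. with a toy +1/-1 scoring function:
print(extend_right("GLKFAQ", "GLKFAW", 0, 0,
                   lambda a, b: 1 if a == b else -1))  # (5, 5)
```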
List all of the HSPs in the database whose score is high enough to be considered.
We list the HSPs whose scores are greater than the empirically determined cutoff score S. By examining the distribution of the alignment scores modeled by comparing random sequences, a cutoff score S can be determined such that its value is large enough to guarantee the significance of the remaining HSPs.
Evaluate the significance of the HSP score.
BLAST next assesses the statistical significance of each HSP score by exploiting the Gumbel extreme value distribution (EVD). (It is proved that the distribution of Smith-Waterman local alignment scores between two random sequences follows the Gumbel EVD; for local alignments containing gaps it is not proved.) In accordance with the Gumbel EVD, the probability p of observing a score S equal to or greater than x is given by the equation

p(S \ge x) = 1 - \exp\!\left(-e^{-\lambda(x-\mu)}\right)

where

\mu = \frac{\ln(K m' n')}{\lambda}

The statistical parameters \lambda and K are estimated by fitting the distribution of the un-gapped local alignment scores, of the query sequence and a lot of shuffled versions (global or local shuffling) of a database sequence, to the Gumbel extreme value distribution. Note that \lambda and K depend upon the substitution matrix, gap penalties, and sequence composition (the letter frequencies). m' and n' are the effective lengths of the query and database sequences, respectively. The original sequence length is shortened to the effective length to compensate for the edge effect (an alignment starting near the end of the query or database sequence is likely not to have enough sequence to build an optimal alignment). They can be calculated as

m' \approx m - \frac{\ln(K m n)}{H}, \qquad n' \approx n - \frac{\ln(K m n)}{H}

where H is the average expected score per aligned pair of residues in an alignment of two random sequences. Altschul and Gish gave the typical values \lambda = 0.318, K = 0.13, and H = 0.40 for un-gapped local alignment using BLOSUM62 as the substitution matrix. Using the typical values for assessing the significance is called the lookup table method; it is not accurate. The expect score E of a database match is the number of times that an unrelated database sequence would obtain a score S higher than x by chance. The expectation E obtained in a search of a database of D sequences is given by

E \approx 1 - e^{-p(S \ge x)\, D}

Furthermore, when p < 0.1, E can be approximated by the Poisson distribution as

E \approx p\, D
This expectation or expect value "E" (often called an E score or E-value or e-value) assessing the significance of the HSP score for un-gapped local alignment is reported in the BLAST results. The calculation shown here is modified if individual HSPs are combined, such as when producing gapped alignments (described below), due to the variation of the statistical parameters.
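As a worked illustration, the sketch below plugs the "lookup table" parameters quoted above into the formulas for p and E; the query length, database sequence length, score, and database size are invented for the example:

```python
import math

LAMBDA, K, H = 0.318, 0.13, 0.40  # typical un-gapped BLOSUM62 values

def effective_length(length, other, k=K, h=H):
    """Shorten a raw sequence length to compensate for the edge effect."""
    return length - math.log(k * length * other) / h

def expect_value(x, m, n, D):
    """E-value for score x: query length m, database sequence length n,
    database of D sequences (Poisson approximation, valid when p < 0.1)."""
    m_eff = effective_length(m, n)
    n_eff = effective_length(n, m)
    p = 1 - math.exp(-K * m_eff * n_eff * math.exp(-LAMBDA * x))
    return p * D

# A score of 90 for a 250-residue query against 10,000-residue database
# sequences, in a database of 100,000 sequences:
print(expect_value(90, 250, 10_000, 100_000))  # ~0.01
```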
Make two or more HSP regions into a longer alignment.
Sometimes, we find two or more HSP regions in one database sequence that can be made into a longer alignment. This provides additional evidence of the relation between the query and database sequence. There are two methods, the Poisson method and the sum-of-scores method, to compare the significance of the newly combined HSP regions. Suppose that there are two combined HSP regions with the pairs of scores (65, 40) and (52, 45), respectively. The Poisson method gives more significance to the set with the maximal lower score (45 > 40). However, the sum-of-scores method prefers the first set, because 65 + 40 (105) is greater than 52 + 45 (97). The original BLAST uses the Poisson method; gapped BLAST and WU-BLAST use the sum-of-scores method.
Show the gapped Smith-Waterman local alignments of the query and each of the matched database sequences.
The original BLAST only generates un-gapped alignments, reporting the initially found HSPs individually, even when more than one HSP is found in one database sequence.
BLAST2 produces a single alignment with gaps that can include all of the initially found HSP regions. Note that the computation of the score and its corresponding E-value involves use of adequate gap penalties.
Report every match whose expect score is lower than a threshold parameter E.
Types of BLAST
BLASTn (Nucleotide BLAST)
BLASTn compares one or more nucleotide sequences to a database or another sequence. This is useful when trying to identify evolutionary relationships between organisms.
tBLASTn
tBLASTn is used to search for proteins in sequences that have not been translated into proteins yet. It takes a protein sequence and compares it to all possible translations of a DNA sequence. This is useful when looking for similar protein-coding regions in DNA sequences that have not been fully annotated, like ESTs (short, single-read cDNA sequences) and HTGs (draft genome sequences). Since these sequences do not have known protein translations, they can only be searched with tBLASTn.
BLASTx
BLASTx compares a nucleotide query sequence, which can be translated into six different protein sequences, against a database of known protein sequences. This tool is useful when the reading frame of the DNA sequence is uncertain or contains errors that might cause mistakes in protein-coding. BLASTx provides combined statistics for hits across all frames, making it helpful for the initial analysis of new DNA sequences.
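The six candidate protein sequences come from translating the query in all six reading frames. A sketch of that translation step using Biopython (the nucleotide sequence here is made up; trimming keeps each frame a whole number of codons):

```python
from Bio.Seq import Seq

def six_frame_translations(nucleotides: str):
    """Translate a nucleotide sequence in all six reading frames."""
    seq = Seq(nucleotides)
    frames = []
    for strand in (seq, seq.reverse_complement()):
        for offset in range(3):
            sub = strand[offset:]
            sub = sub[: len(sub) - len(sub) % 3]  # whole codons only
            frames.append(str(sub.translate()))
    return frames

for i, protein in enumerate(six_frame_translations("ATGGCCATTGTAATGGGCCGC"), 1):
    print(f"frame {i}: {protein}")
```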
BLASTp
BLASTp, or Protein BLAST, is used to compare protein sequences. You can input one or more protein sequences that you want to compare against a single protein sequence or a database of protein sequences. This is useful when you're trying to identify a protein by finding similar sequences in existing protein databases.
Parallel BLAST
Parallel BLAST versions of split databases are implemented using MPI and Pthreads, and have been ported to various platforms including Windows, Linux, Solaris, Mac OS X, and AIX. Popular approaches to parallelizing BLAST include query distribution, hash table segmentation, computation parallelization, and database segmentation (partition). Databases are split into equal-sized pieces and stored locally on each node. Each query is run on all nodes in parallel, and the resulting BLAST output files from all nodes are merged to yield the final output. Specific implementations include MPIblast, ScalaBLAST, DCBLAST, and so on.
MPIblast makes use of a database segmentation technique to parallelize the computation process. This allows for significant performance improvements when conducting BLAST searches across a set of nodes in a cluster. In some scenarios a superlinear speedup is achievable. This makes MPIblast suitable for the extensive genomic datasets that are typically used in bioinformatics.
BLAST generally runs in O(n) time, where n is the size of the database: the time to complete a search increases linearly as the database grows. MPIblast utilizes parallel processing to speed up the search. The ideal complexity for any parallel computation is O(n/p), with n being the size of the database and p the number of processors, indicating that the job is evenly distributed among the p processors. This is visualized in the included graph. The superlinear speedup that can sometimes occur with MPIblast can give a complexity better than O(n/p); this occurs because cache memory can be used to decrease the run time.
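A toy illustration of the database-segmentation strategy, using Python's multiprocessing; the `search_chunk` function is a stand-in for a real BLAST search, and the database records are invented:

```python
from multiprocessing import Pool

def split_database(records, p):
    """Deal the database records into p roughly equal pieces."""
    return [records[i::p] for i in range(p)]

def search_chunk(args):
    """Placeholder search: report records sharing any 3-letter word
    with the query (a real worker would run BLAST on its piece)."""
    query, chunk = args
    words = {query[i:i + 3] for i in range(len(query) - 2)}
    return [r for r in chunk if any(w in r for w in words)]

if __name__ == "__main__":
    database = ["GLKFAQ", "MKTAYI", "AGLKFA", "PQGPEG"]
    query = "GLKFA"
    with Pool(2) as pool:
        partial = pool.map(search_chunk,
                           [(query, c) for c in split_database(database, 2)])
    # Merge the per-piece hits into one final result, as mpiBLAST-style
    # implementations merge per-node output files.
    hits = [hit for piece in partial for hit in piece]
    print(hits)  # ['GLKFAQ', 'AGLKFA']
```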
Alternatives to BLAST
The predecessor to BLAST, FASTA, can also be used for protein and DNA similarity searching. FASTA provides a similar set of programs for comparing proteins to protein and DNA databases, DNA to DNA and protein databases, and includes additional programs for working with unordered short peptides and DNA sequences. In addition, the FASTA package provides SSEARCH, a vectorized implementation of the rigorous Smith-Waterman algorithm. FASTA is slower than BLAST, but provides a much wider range of scoring matrices, making it easier to tailor a search to a specific evolutionary distance.
An extremely fast but considerably less sensitive alternative to BLAST is BLAT (Blast Like Alignment Tool). While BLAST does a linear search, BLAT relies on k-mer indexing the database, and can thus often find seeds faster. Another software alternative similar to BLAT is PatternHunter.
Advances in sequencing technology in the late 2000s have made searching for very similar nucleotide matches an important problem. New alignment programs tailored for this use typically use BWT-indexing of the target database (typically a genome). Input sequences can then be mapped very quickly, and output is typically in the form of a BAM file. Example alignment programs are BWA, SOAP, and Bowtie.
For protein identification, searching for known domains (for instance from Pfam) by matching with Hidden Markov Models is a popular alternative, such as HMMER.
An alternative to BLAST for comparing two banks of sequences is PLAST. PLAST provides a high-performance general purpose bank to bank sequence similarity search tool relying on the PLAST and ORIS algorithms. Results of PLAST are very similar to BLAST, but PLAST is significantly faster and capable of comparing large sets of sequences with a small memory (i.e. RAM) footprint.
For applications in metagenomics, where the task is to compare billions of short DNA reads against tens of millions of protein references, DIAMOND runs at up to 20,000 times as fast as BLASTX, while maintaining a high level of sensitivity.
The open-source software MMseqs is an alternative to BLAST/PSI-BLAST, which improves on current search tools over the full range of speed-sensitivity trade-off, achieving sensitivities better than PSI-BLAST at more than 400 times its speed.
Optical computing approaches have been suggested as promising alternatives to the current electrical implementations. OptCAM is an example of such approaches and is shown to be faster than BLAST.
Comparing BLAST and the Smith-Waterman Process
While both Smith-Waterman and BLAST are used to find homologous sequences by searching and comparing a query sequence with those in the databases, they do have their differences.
Because BLAST is based on a heuristic algorithm, the results returned by BLAST will not include all of the possible hits within the database; BLAST can miss hard-to-find matches.
An alternative that finds all of the possible hits is the Smith-Waterman algorithm. This method differs from BLAST in two areas, accuracy and speed. The Smith-Waterman option provides better accuracy, finding matches that BLAST cannot because it does not exclude any information, and is therefore valuable for detecting remote homology. However, compared to BLAST it is more time-consuming and requires large amounts of computing power and memory. Advances have nevertheless been made to speed up the Smith-Waterman search process dramatically, including FPGA chips and SIMD technology.
For more complete results from BLAST, the settings can be changed from their default settings. The optimal settings for a given sequence, however, may vary. The settings one can change are E-Value, gap costs, filters, word size, and substitution matrix.
Note that the algorithm used for BLAST was developed from the algorithm used for Smith-Waterman. BLAST employs an alignment which finds "local alignments between sequences by finding short matches and from these initial matches (local) alignments are created".
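For contrast with BLAST's heuristic seeding, here is a minimal Smith-Waterman scorer: it fills the full dynamic-programming matrix, so it is exact but costs O(len(a) * len(b)) per sequence pair; the match, mismatch, and gap scores are illustrative:

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local alignment score between strings a and b."""
    rows = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = rows[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores never drop below zero.
            rows[i][j] = max(0, diag, rows[i - 1][j] + gap, rows[i][j - 1] + gap)
            best = max(best, rows[i][j])
    return best

print(smith_waterman_score("GLKFA", "AGLKFAQ"))  # 10: the full 5-letter match
```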
BLAST output visualization
To help users interpret BLAST results, different software is available. Grouped by installation and use, analysis features, and technology, some available tools are:
NCBI BLAST service
general BLAST output interpreters, GUI-based: JAMBLAST, Blast Viewer, BLASTGrabber
integrated BLAST environments: PLAN, BlastStation-Free, SequenceServer
BLAST output parsers: MuSeqBox, Zerg, BioParser, BLAST-Explorer, SequenceServer
specialized BLAST-related tools: MEGAN, BLAST2GENE, BOV, Circoletto
Example visualizations of BLAST results are shown in Figures 4 and 5.
Uses of BLAST
BLAST can be used for several purposes. These include identifying species, locating domains, establishing phylogeny, DNA mapping, and comparison.
Identifying species: With the use of BLAST, it is often possible to correctly identify a species or find homologous species. This can be useful, for example, when working with a DNA sequence from an unknown species.
Locating domains: When working with a protein sequence, it can be input into BLAST to locate known domains within the sequence of interest.
Establishing phylogeny: Using the results received through BLAST, a phylogenetic tree can be created using the BLAST web page. Phylogenies based on BLAST alone are less reliable than other purpose-built computational phylogenetic methods, so they should only be relied upon for "first pass" phylogenetic analyses.
DNA mapping: When working with a known species, and looking to sequence a gene at an unknown location, BLAST can compare the chromosomal position of the sequence of interest to relevant sequences in the database(s). NCBI has a "Magic-BLAST" tool built around BLAST for this purpose.
Comparison: When working with genes, BLAST can locate common genes in two related species, and can be used to map annotations from one organism to another.
Classifying taxonomy
BLAST can use genetic sequences to compare multiple taxa against known taxonomical data. By doing this, it can provide a picture of the evolutionary relationships between various species (Fig. 6). This is a useful way to identify orphan genes, since if a gene shows up in an organism outside of the ancestral lineage, it would not be classified as an orphan gene.
Although this method is helpful, some more accurate options to find homologs would be through pairwise sequence alignment and multiple sequence alignment.
See also
PSI Protein Classifier
Needleman-Wunsch algorithm
Smith-Waterman algorithm
Sequence alignment
Sequence alignment software
Sequerome
eTBLAST
References
External links
BLAST+ executables — free source downloads
Bioinformatics algorithms
Phylogenetics software
Laboratory software
Public-domain software
Free bioinformatics software | BLAST (biotechnology) | [
"Biology"
] | 5,162 | [
"Bioinformatics",
"Bioinformatics algorithms"
] |
363,703 | https://en.wikipedia.org/wiki/Reed%E2%80%93Sternberg%20cell | Reed–Sternberg cells (also known as lacunar histiocytes for certain types) are distinctive, giant cells found with light microscopy in biopsies from individuals with Hodgkin lymphoma. They are usually derived from B lymphocytes, classically considered crippled germinal center B cells. In the vast majority of cases, the immunoglobulin genes of Reed–Sternberg cells have undergone both V(D)J recombination and somatic hypermutation, establishing an origin from a germinal center or postgerminal center B cell. Despite having the genetic signature of a B cell, the Reed–Sternberg cells of classical Hodgkin lymphoma fail to express most B-cell–specific genes, including the immunoglobulin genes. The cause of this wholesale reprogramming of gene expression has yet to be fully explained. It presumably is the result of widespread epigenetic changes of uncertain etiology, but is partly a consequence of so-called "crippling" mutations acquired during somatic hypermutation. Seen against a sea of B cells, they give the tissue a moth-eaten appearance.
Reed–Sternberg cells are large (30–50 microns) and are either multinucleated or have a bilobed nucleus with prominent eosinophilic inclusion-like nucleoli (thus resembling an "owl's eye" appearance). Reed–Sternberg cells are CD30 and CD15 positive, except in the lymphocyte predominance type, where they are negative for these markers but are usually positive for CD20 and CD45. The presence of these cells is necessary in the diagnosis of Hodgkin lymphoma – the absence of Reed–Sternberg cells has very high negative predictive value. The presence of these cells is confirmed mainly by the use of biomarkers in immunohistochemistry. They can also be found in reactive lymphadenopathy (such as infectious mononucleosis immunoblasts, which are RS-like in appearance, and carbamazepine-associated lymphadenopathy) and very rarely in other types of non-Hodgkin lymphomas. Anaplastic large cell lymphoma may show RS-like cells as well.
History
They are named after Dorothy Reed Mendenhall and Carl Sternberg, who provided the first definitive microscopic descriptions of Hodgkin's disease.
Pathology
Hodgkin lymphoma
A special type of Reed–Sternberg cell (RSC) is the lacunar histiocyte, whose cytoplasm retracts when fixed in formalin, so the nuclei give the appearance of cells that lie with empty spaces (called lacunae) between them. These are characteristic of the nodular sclerosis subtype of Hodgkin lymphoma.
Mummified RSCs (compact nucleus, basophilic cytoplasm, no nucleolus) are also associated with classical Hodgkin lymphoma, while popcorn cells (small cells with a hyper-lobulated nucleus and small nucleoli) are the lymphocytic and histiocytic (L&H) variant of Reed–Sternberg cells and are associated with nodular lymphocyte predominant Hodgkin lymphoma (NLPHL).
RSCs and one RSC cell line (L1236 cells) but not other RSC cell lines express very high levels of ALOX15 (i.e., 15-lipoxygenase-1) or possibly ALOX15B (i.e. 15-lipoxygenase-2), enzymes that metabolize arachidonic acid and various other polyunsaturated fatty acids to a wide array of bioactive products including in particular those of the 15-Hydroperoxyeicosatetraenoic acid family of arachidonic acid metabolites. This is unusual in that lymphocytes typically express little or no ALOX15. It is suggested that ALOX15 and/or ALOX15B, perhaps operating through one of its arachidonic acid-derived products, the eoxins, contributes to the development and/or morphology of Hodgkin lymphoma.
See also
Non-Hodgkin lymphoma
Hodgkin lymphoma
References
Histopathology
Hodgkin lymphoma | Reed–Sternberg cell | [
"Chemistry"
] | 937 | [
"Histopathology",
"Microscopy"
] |
363,747 | https://en.wikipedia.org/wiki/Long%20QT%20syndrome | Long QT syndrome (LQTS) is a condition affecting repolarization (relaxing) of the heart after a heartbeat, giving rise to an abnormally prolonged QT interval. It results in an increased risk of an irregular heartbeat, which can cause fainting, drowning, seizures, or sudden death. These episodes can be triggered by exercise or stress. Some rare forms of LQTS are associated with other symptoms and signs including deafness and periods of muscle weakness.
Long QT syndrome may be present at birth or develop later in life. The inherited form may occur by itself or as part of a larger genetic disorder. Onset later in life may result from certain medications, low blood potassium, low blood calcium, or heart failure. Medications that are implicated include certain antiarrhythmics, antibiotics, and antipsychotics. LQTS can be diagnosed using an electrocardiogram (EKG) if a corrected QT interval of greater than 450–500 milliseconds is found, but clinical findings, other EKG features, and genetic testing may confirm the diagnosis with shorter QT intervals.
Management may include avoiding strenuous exercise, getting sufficient potassium in the diet, the use of beta blockers, or an implantable cardiac defibrillator. For people with LQTS who survive cardiac arrest and remain untreated, the risk of death within 15 years is greater than 50%. With proper treatment this decreases to less than 1% over 20 years.
Long QT syndrome is estimated to affect 1 in 7,000 people. Females are affected more often than males. Most people with the condition develop symptoms before they are 40 years old. It is a relatively common cause of sudden death along with Brugada syndrome and arrhythmogenic right ventricular dysplasia. In the United States it results in about 3,500 deaths a year. The condition was first clearly described in 1957.
Signs and symptoms
Many people with long QT syndrome have no signs or symptoms. When symptoms occur, they are generally caused by abnormal heart rhythms (arrhythmias), most commonly a form of ventricular tachycardia called Torsades de pointes (TdP). If the arrhythmia reverts to a normal rhythm spontaneously the affected person may experience lightheadedness (known as presyncope) or faint which may be preceded by a fluttering sensation in the chest. If the arrhythmia continues, the affected person may experience a cardiac arrest, which if untreated may lead to sudden death. Those with LQTS may also experience non-epileptic seizures as a result of reduced blood flow to the brain during an arrhythmia. Epilepsy is also associated with certain types of long QT syndrome.
The arrhythmias that lead to faints and sudden death are more likely to occur in specific circumstances, in part determined by which genetic variant is present. While arrhythmias can occur at any time, in some forms of LQTS arrhythmias are more commonly seen in response to exercise or mental stress (LQT1), in other forms following a sudden loud noise (LQT2), and in some forms during sleep or immediately upon waking (LQT3).
Some rare forms of long QT syndrome affect other parts of the body, leading to deafness in the Jervell and Lange-Nielsen form of the condition, and periodic paralysis in the Andersen–Tawil (LQT7) form.
Risk for arrhythmias
While those with long QT syndrome have an increased risk of developing abnormal heart rhythms, the absolute risk of arrhythmias is very variable. The strongest predictor of whether someone will develop TdP is whether they have experienced this arrhythmia or another form of cardiac arrest in the past. Those with LQTS who have experienced syncope without an ECG having been recorded at the time are also at higher risk, as syncope in these cases is frequently due to an undocumented self-terminating arrhythmia.
In addition to a history of arrhythmias, the extent to which the QT is prolonged predicts risk. While some have QT intervals that are very prolonged, others have only slight QT prolongation, or even a normal QT interval at rest (concealed LQTS). Those with the longest QT intervals are more likely to experience TdP, and a corrected QT interval of greater than 500 ms is thought to represent those at higher risk. Despite this, those with only subtle QT prolongation or concealed LQTS still have some risk of arrhythmias. Overall, every 10 ms increase in the corrected QT interval is associated with a 15% increase in arrhythmic risk.
As the QT prolonging effects of both genetic variants and acquired causes of LQTS are additive, those with inherited LQTS are more likely to experience TdP if given QT prolonging drugs or if they experience electrolyte problems such as low blood levels of potassium (hypokalaemia). Similarly, those taking QT prolonging medications are more likely to experience TdP if they have a genetic tendency to a prolonged QT interval, even if this tendency is concealed. Arrhythmias occur more commonly in drug-induced LQTS if the medication in question has been rapidly given intravenously, or if high concentrations of the drug are present in the person's blood. The risk of arrhythmias is also higher if the person receiving the drug has heart failure, is taking digitalis, or has recently been cardioverted from atrial fibrillation. Other risk factors for developing torsades de pointes among those with LQTS include female sex, increasing age, pre-existing cardiovascular disease, and abnormal liver or kidney function.
Causes
There are several subtypes of long QT syndrome. These can be broadly split into those caused by genetic mutations which those affected are born with, carry throughout their lives, and can pass on to their children (inherited or congenital long QT syndrome), and those caused by other factors which cannot be passed on and are often reversible (acquired long QT syndrome).
Inherited
Inherited, or congenital long QT syndrome, is caused by genetic abnormalities. LQTS can arise from variants in several genes, leading in some cases to quite different features. The common thread linking these variants is that they affect one or more ion currents leading to prolongation of the ventricular action potential, thus lengthening the QT interval. Classification systems have been proposed to distinguish between subtypes of the condition based on the clinical features (and named after those who first described the condition) and subdivided by the underlying genetic variant. The most common of these, accounting for 99% of cases, is Romano–Ward syndrome (genetically LQT1-6 and LQT9-16), an autosomal dominant form in which the electrical activity of the heart is affected without involving other organs. A less commonly seen form is Jervell and Lange-Nielsen syndrome, an autosomal recessive form of LQTS combining a prolonged QT interval with congenital deafness. Other rare forms include Andersen–Tawil syndrome (LQT7) with features including a prolonged QT interval, periodic paralysis, and abnormalities of the face and skeleton; and Timothy syndrome (LQT8) in which a prolonged QT interval is associated with abnormalities in the structure of the heart and autism spectrum disorder.
Romano–Ward syndrome
LQT1 is the most common subtype of Romano–Ward syndrome, responsible for 30 to 35% of all cases. The gene responsible, KCNQ1, has been isolated to chromosome 11p15.5 and encodes the alpha subunit of the KvLQT1 potassium channel. This subunit interacts with other proteins (in particular, the minK beta subunit) to create the channel, which carries the delayed potassium rectifier current IKs responsible for the repolarisation phase of the cardiac action potential. Variants in KCNQ1 that decrease IKs (loss of function variants) slow the repolarisation of the action potential. This causes the LQT1 subtype of Romano–Ward syndrome when a single copy of the variant is inherited (heterozygous, autosomal dominant inheritance). Inheriting two copies of the variant (homozygous, autosomal recessive inheritance) leads to the more severe Jervell and Lange–Nielsen syndrome. Conversely, variants in KCNQ1 that increase IKs lead to more rapid repolarisation and the short QT syndrome.
The LQT2 subtype is the second-most common form of Romano–Ward syndrome, responsible for 25 to 30% of all cases. It is caused by variants in the KCNH2 gene (also known as hERG) on chromosome 7 which encodes the potassium channel that carries the rapid inward rectifier current IKr. This current contributes to the terminal repolarisation phase of the cardiac action potential, and therefore the length of the QT interval.
The LQT3 subtype of Romano–Ward syndrome is caused by variants in the SCN5A gene located on chromosome 3p22–24. SCN5A encodes the alpha subunit of the cardiac sodium channel, NaV1.5, responsible for the sodium current INa which depolarises cardiac cells at the start of the action potential. Cardiac sodium channels normally inactivate rapidly, but the mutations involved in LQT3 slow their inactivation leading to a small sustained 'late' sodium current. This continued inward current prolongs the action potential and thereby the QT interval. While some variants in SCN5A cause LQT3, other variants can cause quite different conditions. Variants causing a reduction in the early peak current can cause Brugada syndrome and cardiac conduction disease, while other variants have been associated with dilated cardiomyopathy. Some variants which affect both the early and late sodium current can cause overlap syndromes which combine aspects of both LQT3 and Brugada syndrome.
Rare Romano–Ward subtypes (LQT4-6 and LQT9-16)
LQT5 is caused by variants in the KCNE1 gene responsible for the potassium channel beta subunit MinK. This subunit, in conjunction with the alpha subunit encoded by KCNQ1, is responsible for the potassium current IKs which is decreased in LQTS. LQT6 is caused by variants in the KCNE2 gene responsible for the potassium channel beta subunit MiRP1 which generates the potassium current IKr. Variants that decrease this current have been associated with prolongation of the QT interval. However, subsequent evidence such as the relatively common finding of variants in the gene in those without long QT syndrome, and the general need for a second stressor such as hypokalaemia to be present to reveal the QT prolongation, has suggested that this gene instead represents a modifier to susceptibility to QT prolongation. Some therefore dispute whether variants in KCNE2 are sufficient to cause Romano-Ward syndrome by themselves.
LQT9 is caused by variants in the membrane structural protein, caveolin-3. Caveolins form specific membrane domains called caveolae in which voltage-gated sodium channels sit. Similar to LQT3, these caveolin variants increase the late sustained sodium current, which impairs cellular repolarization.
LQT10 is an extremely rare subtype, caused by variants in the SCN4B gene. The product of this gene is an auxiliary beta-subunit (NaVβ4) forming cardiac sodium channels, variants in which increase the late sustained sodium current. LQT13 is caused by variants in GIRK4, a protein involved in the parasympathetic modulation of the heart. Clinically, the patients are characterized by only modest QT prolongation, but an increased propensity for atrial arrhythmias. LQT14, LQT15 and LQT16 are caused by variants in the genes responsible for calmodulin (CALM1, CALM2, and CALM3 respectively). Calmodulin interacts with several ion channels and its roles include modulation of the L-type calcium current in response to calcium concentrations, and trafficking the proteins produced by KCNQ1 and thereby influencing potassium currents. The precise mechanisms by which these genetic variants prolong the QT interval remain uncertain.
Jervell and Lange–Nielsen syndrome
Jervell and Lange–Nielsen syndrome (JLNS) is a rare form of LQTS inherited in an autosomal recessive manner. In addition to severe prolongation of the QT interval, those affected are born with severe sensorineural deafness affecting both ears. The syndrome is caused by inheriting two copies of certain variants in the KCNE1 or KCNQ1 genes. The same genetic variants lead to the LQT5 and LQT1 forms of Romano-Ward syndrome if only a single copy of the variant is inherited. JLNS is generally associated with a higher risk of arrhythmias than most other forms of LQTS.
Andersen–Tawil syndrome (LQT7)
LQT7, also known as Andersen–Tawil syndrome, is characterised by a triad of features – in addition to a prolonged QT interval, those affected may experience intermittent weakness often occurring at times when blood potassium concentrations are low (hypokalaemic periodic paralysis), and characteristic facial and skeletal abnormalities such as a small lower jaw (micrognathia), low set ears, and fused or abnormally angled fingers and toes (syndactyly and clinodactyly). The condition is inherited in an autosomal-dominant manner and is caused by mutations in the KCNJ2 gene which encodes the potassium channel protein Kir2.1.
Timothy syndrome (LQT8)
LQT8, also known as Timothy syndrome, combines a prolonged QT interval with fused fingers or toes (syndactyly). Abnormalities of the structure of the heart are commonly seen, including ventricular septal defect, tetralogy of Fallot, and hypertrophic cardiomyopathy. The condition presents early in life and the average life expectancy is 2.5 years, with death most commonly caused by ventricular arrhythmias. Many children with Timothy syndrome who survive longer than this have features of autism spectrum disorder. Timothy syndrome is caused by variants in the calcium channel Cav1.2 encoded by the gene CACNA1C.
Table of associated genes
The following is a list of genes associated with Long QT syndrome:
Acquired
Although long QT syndrome is often a genetic condition, a prolonged QT interval associated with an increased risk of abnormal heart rhythms can also occur in people without a genetic abnormality, commonly due to a side effect of medications. Drug-induced QT prolongation is often a result of treatment by antiarrhythmic drugs such as amiodarone and sotalol, antibiotics such as erythromycin, or antihistamines such as terfenadine. Other drugs which prolong the QT interval include some antipsychotics such as haloperidol and ziprasidone, and the antidepressant citalopram. Lists of medications associated with prolongation of the QT interval such as the CredibleMeds database can be found online.
Other causes of acquired LQTS include abnormally low levels of potassium (hypokalaemia) or magnesium (hypomagnesaemia) within the blood. This can be exacerbated following a sudden reduction in the blood supply to the heart (myocardial infarction), low levels of thyroid hormone (hypothyroidism), and a slow heart rate (bradycardia).
Anorexia nervosa has been associated with sudden death, possibly due to QT prolongation. The malnutrition seen in this condition can sometimes affect the blood concentration of salts such as potassium, potentially leading to acquired long QT syndrome, in turn causing sudden cardiac death. The malnutrition and associated changes in salt balance develop over a prolonged period of time, and rapid refeeding may further disturb the salt imbalances, increasing the risk of arrhythmias. Care must therefore be taken to monitor electrolyte levels to avoid the complications of refeeding syndrome.
Factors which prolong the QT interval are additive, meaning that a combination of factors (such as taking a QT-prolonging drug and having low levels of potassium) can cause a greater degree of QT prolongation than each factor alone. This also applies to some genetic variants which by themselves only minimally prolong the QT interval but can make people more susceptible to significant drug-induced QT prolongation.
Mechanisms
The various forms of long QT syndrome, both congenital and acquired, produce abnormal heart rhythms (arrhythmias) by influencing the electrical signals used to coordinate individual heart cells. The common theme is a prolongation of the cardiac action potential – the characteristic pattern of voltage changes across the cell membrane that occur with each heart beat. Heart cells when relaxed normally have fewer positively charged ions on the inner side of their cell membrane than on the outer side, referred to as the membrane being polarised. When heart cells contract, positively charged ions such as sodium and calcium enter the cell, equalising or reversing this polarity, or depolarising the cell. After a contraction has taken place, the cell restores its polarity (or repolarises) by allowing positively charged ions such as potassium to leave the cell, restoring the membrane to its relaxed, polarised state. In long QT syndrome it takes longer for this repolarisation to occur, shown in individual cells as a longer action potential while being marked on the surface ECG as a long QT interval.
The prolonged action potentials can lead to arrhythmias through several mechanisms. The arrhythmia characteristic of long QT syndrome, torsades de pointes, starts when an initial action potential triggers further abnormal action potentials in the form of afterdepolarisations. Early afterdepolarisations, occurring before the cell has fully repolarised, are particularly likely to be seen when action potentials are prolonged, and arise due to reactivation of calcium and sodium channels that would normally switch off until the next heartbeat is due. Under the right conditions, reactivation of these currents, facilitated by the sodium-calcium exchanger, can cause further depolarisation of the cell. The early afterdepolarisations triggering arrhythmias in long QT syndrome tend to arise from the Purkinje fibres of the cardiac conduction system. Early afterdepolarisations may occur as single events, but may occur repeatedly leading to multiple rapid activations of the cell.
Some research suggests that delayed afterdepolarisations, occurring after repolarisation has completed, may also play a role in long QT syndrome. This form of afterdepolarisation originates from the spontaneous release of calcium from the intracellular calcium store known as the sarcoplasmic reticulum, forcing calcium out of cell through the sodium calcium exchanger in exchange for sodium, generating a net inward current.
While there is strong evidence that the trigger for torsades de pointes comes from afterdepolarisations, it is less certain what sustains this arrhythmia. Some lines of evidence suggest that repeated afterdepolarisations from many sources contribute to the continuing arrhythmia. However, some suggest that the arrhythmia sustains through a mechanism known as re-entry. According to this model, the action potential prolongation occurs to a variable extent in different layers of the heart muscle with longer action potentials in some layers than others. In response to a triggering impulse, the waves of depolarisation will spread through regions with shorter action potentials but block in regions with longer action potentials. This allows the depolarising wavefront to bend around areas of block, potentially forming a complete loop and self-perpetuating. The twisting pattern on the ECG can be explained by movement of the core of the re-entrant circuit in the form of a meandering spiral wave.
Diagnosis
Diagnosing long QT syndrome is challenging. Whilst the hallmark of LQTS is prolongation of the QT interval, the QT interval is highly variable among both those who are healthy and those who have LQTS. This leads to overlap between the QT intervals of those with and without LQTS: 2.5% of those with genetically proven LQTS have a QT interval within the normal range. Conversely, given the normal distribution of QT intervals, a proportion of healthy people will have a longer QT interval than any arbitrary cutoff. Other factors beyond the QT interval should therefore be taken into account when making a diagnosis, some of which have been incorporated into scoring systems.
Electrocardiogram
Long QT syndrome is principally diagnosed by measuring the QT interval corrected for heart rate (QTc) on a 12-lead electrocardiogram (ECG). Long QT syndrome is associated with a prolonged QTc, although in some genetically proven cases of LQTS this prolongation can be hidden, known as concealed LQTS. The QTc is less than 450 ms in 95% of normal males, and less than 460 ms in 95% of normal females. LQTS is suggested if the QTc is longer than these cutoffs. However, as 5% of normal people also fall into this category, some suggest cutoffs of 470 and 480 ms for males and females respectively, corresponding with the 99th centiles of normal values.
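The correction can be made concrete with a short sketch. The snippet below uses Bazett's formula (QTc = QT/√RR), one common heart-rate correction, together with the 95th-centile cutoffs quoted above; the choice of formula and the sample values are illustrative assumptions, as the text does not specify which correction method is used.

```python
def qtc_bazett(qt_ms, rr_s):
    """Bazett's correction: QTc = QT / sqrt(RR), with QT in ms and RR in s."""
    return qt_ms / rr_s ** 0.5

def exceeds_95th_centile(qtc_ms, sex):
    """Compare against the cutoffs quoted above: 450 ms (male), 460 ms (female)."""
    return qtc_ms > (450.0 if sex == "male" else 460.0)

# Example: a measured QT of 420 ms at 75 beats/min (RR = 60/75 = 0.8 s)
qtc = qtc_bazett(420.0, 0.8)
print(round(qtc, 1), exceeds_95th_centile(qtc, "male"))  # 469.6 True
```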
The major subtypes of inherited LQTS are associated with specific ECG features. LQT1 is typically associated with broad-based T-waves, whereas the T-waves in LQT2 are notched and of lower amplitude, whilst in LQT3 the T-waves are often late onset, being preceded by a long isoelectric segment.
Schwartz score
The Schwartz score has been proposed as a method of combining clinical and ECG factors to assess how likely an individual is to have an inherited form of LQTS. The table below lists the criteria used to calculate the score.
Other investigations
In cases of diagnostic uncertainty, other investigations may be helpful to unmask a prolonged QT. In addition to prolonging the resting QT interval, LQTS may affect how the QT changes in response to exercise and stimulation by catecholamines such as adrenaline. Provocation tests, in the form of exercise tolerance tests or direct infusion of adrenaline, can be used to detect these abnormal responses. These investigations are most useful for identifying those with concealed congenital type 1 long QT syndrome (LQT1) who have a normal QT interval at rest. While in healthy persons the QT interval shortens during exercise, in those with concealed LQT1 exercise or adrenaline infusion may lead to paradoxical prolongation of the QT interval, revealing the underlying condition.
Guideline cutoffs
International consensus guidelines differ on the degree of QT prolongation required to diagnose LQTS. The European Society of Cardiology recommends that, with or without symptoms or other investigations, LQTS can be diagnosed if the corrected QT interval is longer than 480 ms. They recommend that a diagnosis can be considered in the presence of a QTc of greater than 460 ms if unexplained syncope has occurred. The Heart Rhythm Society guidelines are more stringent, recommending a QTc cutoff of greater than 500 ms in the absence of other factors that prolong the QT, or greater than 480 ms with syncope. Both sets of guidelines agree that LQTS can also be diagnosed if an individual has a Schwartz score of greater than 3 or if a pathogenic genetic variant associated with LQTS is identified, regardless of QT interval.
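The guideline logic above can be summarised as a pair of small decision functions. This is a hedged sketch only: the function names and boolean flags are invented for illustration, and a real diagnosis rests on clinical judgement rather than a threshold check.

```python
def esc_lqts_diagnosis(qtc_ms, unexplained_syncope=False,
                       schwartz_score=0.0, pathogenic_variant=False):
    """European Society of Cardiology criteria as summarised in the text."""
    if pathogenic_variant or schwartz_score > 3:
        return True
    # QTc > 480 ms diagnoses LQTS; > 460 ms may be considered with syncope.
    return qtc_ms > 480 or (qtc_ms > 460 and unexplained_syncope)

def hrs_lqts_diagnosis(qtc_ms, syncope=False,
                       schwartz_score=0.0, pathogenic_variant=False):
    """Heart Rhythm Society criteria: stricter QTc cutoffs."""
    if pathogenic_variant or schwartz_score > 3:
        return True
    return qtc_ms > 500 or (qtc_ms > 480 and syncope)

print(esc_lqts_diagnosis(485))                # True
print(hrs_lqts_diagnosis(485))                # False without syncope
print(hrs_lqts_diagnosis(485, syncope=True))  # True
```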
Treatment
Those diagnosed with LQTS are usually advised to avoid drugs that can prolong the QT interval further or lower the threshold for torsades de pointes; lists of such drugs can be found in public-access online databases. In addition, two intervention options exist for individuals with LQTS: arrhythmia prevention and arrhythmia termination.
Arrhythmia prevention
Arrhythmia suppression involves the use of medications or surgical procedures that attack the underlying cause of the arrhythmias associated with LQTS. Since the cause of arrhythmias in LQTS is early afterdepolarizations (EADs), and they are increased in states of adrenergic stimulation, steps can be taken to blunt adrenergic stimulation in these individuals. These include administration of beta receptor blocking agents, which decreases the risk of stress-induced arrhythmias. Nadolol, a powerful non-selective beta blocker, has been shown to reduce the arrhythmic risk in all three main genotypes (LQT1, LQT2, and LQT3).
Genotype and QT interval duration are independent predictors of recurrence of life-threatening events.
Sodium channel blocking drugs such as mexiletine have been used to prevent arrhythmias in long QT syndrome. While the most compelling indication is for those whose long QT syndrome is caused by defective sodium channels producing a sustained late current (LQT3), mexiletine also shortens the QT interval in other forms of long QT syndrome including LQT1, LQT2 and LQT8. As the predominant action of mexiletine is on the early peak sodium current, there are theoretical reasons why drugs which preferentially suppress the late sodium current such as ranolazine may be more effective, although evidence that this is the case in practice is limited.
Surgical removal of part of the cervical sympathetic chain (left stellectomy) is a further preventive option. This therapy is typically reserved for LQTS caused by Jervell and Lange-Nielsen syndrome (JLNS), but may be used as an add-on therapy to beta blockers in certain cases. In most cases, modern therapy favors implantable cardioverter-defibrillator (ICD) implantation if beta blocker therapy fails.
In patients considered at high risk of life-threatening arrhythmic events, ICD implantation may be considered as a preventive step.
Arrhythmia termination
Arrhythmia termination involves stopping a life-threatening arrhythmia once it has already occurred. One effective form of arrhythmia termination in individuals with LQTS is placement of an implantable cardioverter-defibrillator (ICD). Also, external defibrillation can be used to restore sinus rhythm. ICDs are commonly used in patients with fainting episodes despite beta blocker therapy, and in patients having experienced a cardiac arrest. As mentioned earlier, ICDs may be used also in patients considered at high risk of life-threatening arrhythmic events.
With better knowledge of the genetics underlying LQTS, it is hoped that more precise treatments will become available.
Outcomes
Genotype and QTc interval duration are the strongest predictors of outcome for patients with LQTS. The 2022 European Society of Cardiology clinical practice guidelines endorsed the use of an independently validated risk calculator, the 1-2-3-LQTS Risk Calculator, which estimates an individual's 5-year risk of life-threatening arrhythmic events.
For people who experience cardiac arrest or fainting caused by LQTS and who are untreated, the risk of death within 15 years is around 50%. With careful treatment this decreases to less than 1% over 20 years. Those who exhibit symptoms before the age of 18 are more likely to experience a cardiac arrest.
Epidemiology
Inherited LQTS is estimated to affect between one in 2,500 and 7,000 people.
History
The first documented case of LQTS was described in Leipzig by Meissner in 1856, when a deaf girl died after her teacher yelled at her. Soon after being notified, the girl's parents reported that her older brother, also deaf, had previously died after a terrible fright. This was several decades before the ECG was invented, but is likely the first described case of Jervell and Lange-Nielsen syndrome. In 1957, the first case documented by ECG was described by Anton Jervell and Fred Lange-Nielsen, working in Tønsberg, Norway. Italian pediatrician Cesarino Romano, in 1963, and Irish pediatrician Owen Conor Ward, in 1964, separately described the more common variant of LQTS with normal hearing, later called Romano-Ward syndrome. The establishment of the International Long-QT Syndrome Registry in 1979 allowed numerous pedigrees to be evaluated in a comprehensive manner. This helped in detecting many of the numerous genes involved. Transgenic animal models of the LQTS helped define the roles of various genes and hormones involved, and recently experimental pharmacological therapies to normalize the abnormal repolarization in animals were published.
References
Citations
General and cited references
Cardiac arrhythmia
Cardiogenetic disorders
Channelopathies
Cytoskeletal defects
Single-nucleotide polymorphism associated disease
Syndromes affecting the heart
Wikipedia medicine articles ready to translate | Long QT syndrome | [
"Biology"
] | 6,163 | [
"Single-nucleotide polymorphism associated disease",
"Single-nucleotide polymorphisms"
] |
363,762 | https://en.wikipedia.org/wiki/Invariant%20theory | Invariant theory is a branch of abstract algebra dealing with actions of groups on algebraic varieties, such as vector spaces, from the point of view of their effect on functions. Classically, the theory dealt with the question of explicit description of polynomial functions that do not change, or are invariant, under the transformations from a given linear group. For example, if we consider the action of the special linear group SLn on the space of n by n matrices by left multiplication, then the determinant is an invariant of this action because the determinant of A X equals the determinant of X, when A is in SLn.
Introduction
Let G be a group, and V a finite-dimensional vector space over a field k (which in classical invariant theory was usually assumed to be the complex numbers). A representation of G in V is a group homomorphism π: G → GL(V), which induces a group action of G on V. If k[V] is the space of polynomial functions on V, then the group action of G on V produces an action on k[V] by the following formula:
(g·f)(x) := f(g⁻¹(x)) for all x ∈ V, g ∈ G, f ∈ k[V].
With this action it is natural to consider the subspace of all polynomial functions which are invariant under this group action, in other words the set of polynomials f such that g·f = f for all g ∈ G. This space of invariant polynomials is denoted k[V]^G.
First problem of invariant theory: Is k[V]^G a finitely generated algebra over k?
For example, if G = SLn and V = Mn is the space of square matrices, and the action of G on V is given by left multiplication, then k[V]^G is isomorphic to a polynomial algebra in one variable, generated by the determinant. In other words, in this case, every invariant polynomial is a linear combination of powers of the determinant polynomial. So in this case, k[V]^G is finitely generated over k.
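As a quick check of this example, the invariance of the determinant can be verified symbolically. The sketch below uses sympy and parametrizes an SL2 matrix by choosing its last entry so that the determinant equals 1 (assuming the top-left entry is nonzero); the parametrization is an illustrative assumption, not part of the theory.

```python
import sympy as sp

a, b, c = sp.symbols("a b c", nonzero=True)
x11, x12, x21, x22 = sp.symbols("x11 x12 x21 x22")

# An SL2 matrix: the (2,2) entry is chosen so that det(A) = 1 (assumes a != 0).
A = sp.Matrix([[a, b], [c, (1 + b*c) / a]])
X = sp.Matrix([[x11, x12], [x21, x22]])

assert sp.simplify(A.det() - 1) == 0
assert sp.simplify((A * X).det() - X.det()) == 0  # det(AX) = det(X)
print("the determinant is invariant under left multiplication by SL2")
```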
If the answer is yes, then the next question is to find a minimal basis, and ask whether the module of polynomial relations between the basis elements (known as the syzygies) is finitely generated over k[V].
Invariant theory of finite groups has intimate connections with Galois theory. One of the first major results was the main theorem on the symmetric functions that described the invariants of the symmetric group Sn acting on the polynomial ring R[x1, …, xn] by permutations of the variables. More generally, the Chevalley–Shephard–Todd theorem characterizes finite groups whose algebra of invariants is a polynomial ring. Modern research in invariant theory of finite groups emphasizes "effective" results, such as explicit bounds on the degrees of the generators. The case of positive characteristic, ideologically close to modular representation theory, is an area of active study, with links to algebraic topology.
Invariant theory of infinite groups is inextricably linked with the development of linear algebra, especially, the theories of quadratic forms and determinants. Another subject with strong mutual influence was projective geometry, where invariant theory was expected to play a major role in organizing the material. One of the highlights of this relationship is the symbolic method. Representation theory of semisimple Lie groups has its roots in invariant theory.
David Hilbert's work on the question of the finite generation of the algebra of invariants (1890) resulted in the creation of a new mathematical discipline, abstract algebra. A later paper of Hilbert (1893) dealt with the same questions in more constructive and geometric ways, but remained virtually unknown until David Mumford brought these ideas back to life in the 1960s, in a considerably more general and modern form, in his geometric invariant theory. In large measure due to the influence of Mumford, the subject of invariant theory is seen to encompass the theory of actions of linear algebraic groups on affine and projective varieties. A distinct strand of invariant theory, going back to the classical constructive and combinatorial methods of the nineteenth century, has been developed by Gian-Carlo Rota and his school. A prominent example of this circle of ideas is given by the theory of standard monomials.
Examples
Simple examples of invariant theory come from computing the invariant monomials from a group action. For example, consider the ℤ/2-action on ℂ[x, y] sending
x ↦ −x, y ↦ −y.
Then, since x², xy, y² are the lowest degree monomials which are invariant, we have that
ℂ[x, y]^(ℤ/2) ≅ ℂ[x², xy, y²] ≅ ℂ[a, b, c]/(b² − ac).
This example forms the basis for doing many computations.
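A brute-force enumeration makes the example concrete: under x ↦ −x, y ↦ −y a monomial x^a y^b picks up the sign (−1)^(a+b), so it is invariant exactly when a + b is even. The helper below is an illustrative sketch, not a standard library routine.

```python
def invariant_monomials(max_degree):
    """Exponent pairs (a, b) with x^a * y^b invariant under x -> -x, y -> -y."""
    pairs = []
    for d in range(1, max_degree + 1):
        for a in range(d + 1):
            b = d - a
            if (a + b) % 2 == 0:  # the sign (-1)^(a+b) is +1
                pairs.append((a, b))
    return pairs

print(invariant_monomials(2))  # [(0, 2), (1, 1), (2, 0)] -> y^2, x*y, x^2
```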
The nineteenth-century origins
Cayley first established invariant theory in his "On the Theory of Linear Transformations (1845)." In the opening of his paper, Cayley credits an 1841 paper of George Boole, "investigations were suggested to me by a very elegant paper on the same subject... by Mr Boole." (Boole's paper was Exposition of a General Theory of Linear Transformations, Cambridge Mathematical Journal.)
Classically, the term "invariant theory" refers to the study of invariant algebraic forms (equivalently, symmetric tensors) for the action of linear transformations. This was a major field of study in the latter part of the nineteenth century. Current theories relating to the symmetric group and symmetric functions, commutative algebra, moduli spaces and the representations of Lie groups are rooted in this area.
In greater detail, given a finite-dimensional vector space V of dimension n we can consider the symmetric algebra S(Sr(V)) of the polynomials of degree r over V, and the action on it of GL(V). It is actually more accurate to consider the relative invariants of GL(V), or representations of SL(V), if we are going to speak of invariants: that is because a scalar multiple of the identity will act on a tensor of rank r in S(V) through the r-th power 'weight' of the scalar. The point is then to define the subalgebra of invariants I(Sr(V)) for the action. We are, in classical language, looking at invariants of n-ary r-ics, where n is the dimension of V. (This is not the same as finding invariants of GL(V) on S(V); this is an uninteresting problem as the only such invariants are constants.) The case that was most studied was invariants of binary forms where n = 2.
Other work included that of Felix Klein in computing the invariant rings of finite group actions on ℂ² (the binary polyhedral groups, classified by the ADE classification); these are the coordinate rings of du Val singularities.
The work of David Hilbert, proving that I(V) was finitely presented in many cases, almost put an end to classical invariant theory for several decades, though the classical epoch in the subject continued to the final publications of Alfred Young, more than 50 years later. Explicit calculations for particular purposes have been known in modern times (for example Shioda, with the binary octavics).
Hilbert's theorems
Hilbert (1890) proved that if V is a finite-dimensional representation of the complex algebraic group G = SLn(C) then the ring of invariants of G acting on the ring of polynomials R = S(V) is finitely generated. His proof used the Reynolds operator ρ from R to RG with the properties
ρ(1) = 1
ρ(a + b) = ρ(a) + ρ(b)
ρ(ab) = a ρ(b) whenever a is an invariant.
Hilbert constructed the Reynolds operator explicitly using Cayley's omega process Ω, though now it is more common to construct ρ indirectly as follows: for compact groups G, the Reynolds operator is given by taking the average over G, and non-compact reductive groups can be reduced to the case of compact groups using Weyl's unitarian trick.
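For a finite group, the simplest compact case, the averaging construction is completely explicit: ρ(f) = (1/|G|) Σ g·f over all g in G. The sympy sketch below applies it to the symmetric group S3 permuting three variables; the choice of group and test polynomials is an assumption made for illustration.

```python
import itertools
import sympy as sp

x, y, z = sp.symbols("x y z")
VARS = (x, y, z)

def reynolds_s3(f):
    """Average f over all permutations of (x, y, z): the Reynolds operator."""
    perms = list(itertools.permutations(VARS))
    total = sum(f.subs(list(zip(VARS, p)), simultaneous=True) for p in perms)
    return sp.expand(total / len(perms))

# rho(1) = 1, and rho projects any polynomial onto the invariants.
assert reynolds_s3(sp.Integer(1)) == 1
print(reynolds_s3(x**2))     # x**2/3 + y**2/3 + z**2/3
print(reynolds_s3(x*y + z))  # x*y/3 + x*z/3 + y*z/3 + x/3 + y/3 + z/3
```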
Given the Reynolds operator, Hilbert's theorem is proved as follows. The ring R is a polynomial ring so is graded by degrees, and the ideal I is defined to be the ideal generated by the homogeneous invariants of positive degrees. By Hilbert's basis theorem the ideal I is finitely generated (as an ideal). Hence, I is finitely generated by finitely many invariants of G (because if we are given any – possibly infinite – subset S that generates a finitely generated ideal I, then I is already generated by some finite subset of S). Let i1,...,in be a finite set of invariants of G generating I (as an ideal). The key idea is to show that these generate the ring RG of invariants. Suppose that x is some homogeneous invariant of degree d > 0. Then
x = a1i1 + ... + anin
for some aj in the ring R because x is in the ideal I. We can assume that aj is homogeneous of degree d − deg ij for every j (otherwise, we replace aj by its homogeneous component of degree d − deg ij; if we do this for every j, the equation x = a1i1 + ... + anin will remain valid). Now, applying the Reynolds operator to x = a1i1 + ... + anin gives
x = ρ(a1)i1 + ... + ρ(an)in
We are now going to show that x lies in the R-algebra generated by i1,...,in.
First, let us do this in the case when the elements ρ(ak) all have degree less than d. In this case, they are all in the R-algebra generated by i1,...,in (by our induction assumption). Therefore, x is also in this R-algebra (since x = ρ(a1)i1 + ... + ρ(an)in).
In the general case, we cannot be sure that the elements ρ(ak) all have degree less than d. But we can replace each ρ(ak) by its homogeneous component of degree d − deg ik. As a result, these modified ρ(ak) are still G-invariants (because every homogeneous component of a G-invariant is a G-invariant) and have degree less than d (since deg ik > 0). The equation x = ρ(a1)i1 + ... + ρ(an)in still holds for our modified ρ(ak), so we can again conclude that x lies in the R-algebra generated by i1,...,in.
Hence, by induction on the degree, all elements of RG are in the R-algebra generated by i1,...,in.
Geometric invariant theory
The modern formulation of geometric invariant theory is due to David Mumford, and emphasizes the construction of a quotient by the group action that should capture invariant information through its coordinate ring. It is a subtle theory, in that success is obtained by excluding some 'bad' orbits and identifying others with 'good' orbits. In a separate development the symbolic method of invariant theory, an apparently heuristic combinatorial notation, has been rehabilitated.
One motivation was to construct moduli spaces in algebraic geometry as quotients of schemes parametrizing marked objects. In the 1970s and 1980s the theory developed
interactions with symplectic geometry and equivariant topology, and was used to construct moduli spaces of objects in differential geometry, such as instantons and monopoles.
See also
Gram's theorem
Representation theory of finite groups
Molien series
Invariant (mathematics)
Invariant of a binary form
Invariant measure
First and second fundamental theorems of invariant theory
References
External links
H. Kraft, C. Procesi, Classical Invariant Theory, a Primer
V. L. Popov, E. B. Vinberg, "Invariant Theory", in Algebraic geometry. IV. Encyclopaedia of Mathematical Sciences, 55 (translated from the 1989 Russian edition), Springer-Verlag, Berlin, 1994; vi+284 pp.
"Physics"
] | 2,460 | [
"Invariant theory",
"Group actions",
"Symmetry"
] |
363,786 | https://en.wikipedia.org/wiki/Stereopticon | A stereopticon is a slide projector or relatively powerful "magic lantern", which has two lenses, usually one above the other, and has mainly been used to project photographic images. These devices date back to the mid 19th century, and were a popular form of entertainment and education before the advent of moving pictures.
Magic lanterns originally used rather weak light sources, like candles or oil lamps, that produced projections that were just large and strong enough to entertain small groups of people. During the 19th century stronger light sources, like limelight, became available.
For the "dissolving views" lantern shows that were popularized by Henry Langdon Childe since the late 1830s, lanternists needed to be able to project two aligned pictures in the same spot on a screen, gradually dimming a first picture while revealing a second one. This could be done with two lanterns, but soon biunial lanterns (with two objectives placed one above the other) became common.
William and Frederick Langenheim from Philadelphia introduced a photographic glass slide technology at the Crystal Palace Exhibition in London in 1851. For roughly two centuries magic lanterns had been used to project painted images from glass slides, but the Langenheim brothers seem to have been the first to incorporate the relatively new medium of photography (introduced in 1839). To enjoy the details of photographic slides optimally, stronger lanterns were needed.
By 1860 Massachusetts chemist and businessman John Fallon improved a large biunial lantern, imported from England, and named it 'stereopticon'.
For a usual fee of ten cents, people could view realistic images of nature, history, and science themes. The two lenses are used to dissolve between images when projected. This "visual storytelling" with technology directly preceded the development of the first moving pictures.
The term stereopticon has been widely misused to name a stereoscope. The stereopticon has not commonly been used for three-dimensional images.
References
Further reading
1850 introductions
Entertainment
Display technology
Magic lanterns
American inventions | Stereopticon | [
"Engineering"
] | 401 | [
"Electronic engineering",
"Display technology"
] |
363,844 | https://en.wikipedia.org/wiki/Zymology | Zymology, also known as zymurgy, is an applied science that studies the biochemical process of fermentation and its practical uses. Common topics include the selection of fermenting yeast and bacteria species and their use in brewing, wine making, fermenting milk, and the making of other fermented foods.
Fermentation
Fermentation can be simply defined, in the context of brewing, as the conversion of sugar molecules into ethanol and carbon dioxide by yeast.
Fermentation practices have led to the discovery of ample microbial and antimicrobial cultures on fermented foods and products.
History
French chemist Louis Pasteur was the first 'zymologist' when in 1857 he connected yeast to fermentation. Pasteur originally defined fermentation as "respiration without air".
Pasteur performed careful research and concluded that fermentation was carried out by living yeast cells.
The German Eduard Buchner, winner of the 1907 Nobel Prize in chemistry, later determined that fermentation was actually caused by a yeast secretion, which he termed 'zymase'.
The research efforts undertaken by the Danish Carlsberg scientists greatly accelerated understanding of yeast and brewing. The Carlsberg scientists are generally acknowledged as having jump-started the entire field of molecular biology.
Products
All alcoholic drinks including beer, cider, kombucha, kvass, mead, perry, tibicos, wine, pulque, hard liquors (brandy, rum, vodka, sake, schnapps), and soured by-products including vinegar and alegar
Yeast leavened breads including sourdough, salt-rising bread, and others
Cheese and some dairy products including kefir and yogurt
Chocolate
Coffee
Dishes including fermented fish, such as garum, surströmming, and Worcestershire sauce
Some vegetables such as kimchi, some types of pickles (most are not fermented though), and sauerkraut
A wide variety of fermented foods made from soybeans, including fermented bean paste, nattō, tempeh, and soya sauce
Notes
References
Sources
External links
Winemaking: Fundamentals of winemaking: zymology
Biochemistry
Brewing
Oenology | Zymology | [
"Chemistry",
"Biology"
] | 445 | [
"Biochemistry",
"nan"
] |
363,890 | https://en.wikipedia.org/wiki/One-way%20function | In computer science, a one-way function is a function that is easy to compute on every input, but hard to invert given the image of a random input. Here, "easy" and "hard" are to be understood in the sense of computational complexity theory, specifically the theory of polynomial time problems. This has nothing to do with whether the function is one-to-one; finding any one input with the desired image is considered a successful inversion. (See the theoretical definition below.)
The existence of such one-way functions is still an open conjecture. Their existence would prove that the complexity classes P and NP are not equal, thus resolving the foremost unsolved question of theoretical computer science. The converse is not known to be true, i.e. the existence of a proof that P ≠ NP would not directly imply the existence of one-way functions.
In applied contexts, the terms "easy" and "hard" are usually interpreted relative to some specific computing entity; typically "cheap enough for the legitimate users" and "prohibitively expensive for any malicious agents". One-way functions, in this sense, are fundamental tools for cryptography, personal identification, authentication, and other data security applications. While the existence of one-way functions in this sense is also an open conjecture, there are several candidates that have withstood decades of intense scrutiny. Some of them are essential ingredients of most telecommunications, e-commerce, and e-banking systems around the world.
Theoretical definition
A function f : {0, 1}* → {0, 1}* is one-way if f can be computed by a polynomial-time algorithm, but any polynomial-time randomized algorithm F that attempts to compute a pseudo-inverse for f succeeds with negligible probability. (The * superscript means any number of repetitions, see Kleene star.) That is, for all randomized algorithms F, all positive integers c and all sufficiently large n = length(x),
Pr[f(F(f(x))) = f(x)] < n^(−c),
where the probability is over the choice of x from the discrete uniform distribution on {0, 1}^n, and the randomness of F.
Note that, by this definition, the function must be "hard to invert" in the average-case, rather than worst-case sense. This is different from much of complexity theory (e.g., NP-hardness), where the term "hard" is meant in the worst-case. That is why even if some candidates for one-way functions (described below) are known to be NP-complete, it does not imply their one-wayness. The latter property is only based on the lack of known algorithms to solve the problem.
It is not sufficient to make a function "lossy" (not one-to-one) to have a one-way function. In particular, the function that outputs the string of n zeros on any input of length n is not a one-way function because it is easy to come up with an input that will result in the same output. More precisely: For such a function that simply outputs a string of zeroes, an algorithm F that just outputs any string of length n on input f(x) will "find" a proper preimage of the output, even if it is not the input which was originally used to find the output string.
Related concepts
A one-way permutation is a one-way function that is also a permutation—that is, a one-way function that is bijective. One-way permutations are an important cryptographic primitive, and it is not known if their existence is implied by the existence of one-way functions.
A trapdoor one-way function or trapdoor permutation is a special kind of one-way function. Such a function is hard to invert unless some secret information, called the trapdoor, is known.
A collision-free hash function f is a one-way function that is also collision-resistant; that is, no randomized polynomial time algorithm can find a collision—distinct values x, y such that f(x) = f(y)—with non-negligible probability.
Theoretical implications of one-way functions
If f is a one-way function, then the inversion of f would be a problem whose output is hard to compute (by definition) but easy to check (just by computing f on it). Thus, the existence of a one-way function implies that FP ≠ FNP, which in turn implies that P ≠ NP. However, P ≠ NP does not imply the existence of one-way functions.
The existence of a one-way function implies the existence of many other useful concepts, including:
Pseudorandom generators
Pseudorandom function families
Bit commitment schemes
Private-key encryption schemes secure against adaptive chosen-ciphertext attack
Message authentication codes
Digital signature schemes (secure against adaptive chosen-message attack)
Candidates for one-way functions
The following are several candidates for one-way functions (as of April 2009). Clearly, it is not known whether these functions are indeed one-way; but extensive research has so far failed to produce an efficient inverting algorithm for any of them.
Multiplication and factoring
The function f takes as inputs two prime numbers p and q in binary notation and returns their product. This function can be "easily" computed in O(b^2) time, where b is the total number of bits of the inputs. Inverting this function requires finding the factors of a given integer N. The best factoring algorithms known run in 2^(O(b^(1/3)·(log b)^(2/3))) time, where b is the number of bits needed to represent N.
This function can be generalized by allowing p and q to range over a suitable set of semiprimes. Note that f is not one-way for randomly selected integers p, q > 1, since the product will have 2 as a factor with probability 3/4 (because the probability that an arbitrary p is odd is 1/2, and likewise for q, so if they're chosen independently, the probability that both are odd is therefore 1/4; hence the probability that p or q is even is 1 − 1/4 = 3/4).
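The asymmetry is visible even at toy sizes: multiplication is instantaneous, while naive trial division already takes hundreds of thousands of steps, and the work grows roughly exponentially in the bit length of the primes. The primes below are illustrative; practical candidates use primes hundreds of bits long.

```python
def multiply(p, q):
    """The "easy" direction: schoolbook multiplication is polynomial time."""
    return p * q

def trial_division_factor(n):
    """Naive inversion: running time grows exponentially in the bit length."""
    if n % 2 == 0:
        return 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    return n  # n is prime

n = multiply(1000003, 1000033)   # instantaneous
print(trial_division_factor(n))  # 1000003, after roughly 5 * 10**5 divisions
```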
The Rabin function (modular squaring)
The Rabin function, or squaring modulo N = p·q, where p and q are primes, is believed to be a collection of one-way functions. We write
Rabin_N(x) = x² mod N
to denote squaring modulo N: a specific member of the Rabin collection. It can be shown that extracting square roots, i.e. inverting the Rabin function, is computationally equivalent to factoring N (in the sense of polynomial-time reduction). Hence it can be proven that the Rabin collection is one-way if and only if factoring is hard. This also holds for the special case in which p and q are of the same bit length. The Rabin cryptosystem is based on the assumption that this Rabin function is one-way.
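A minimal sketch of the collection follows: squaring mod N is one line, and with the trapdoor (p, q), choosing primes congruent to 3 mod 4, a square root modulo each prime is given by c^((p+1)/4) mod p. The toy parameters are assumptions for illustration; without p and q, inverting is believed to be as hard as factoring N.

```python
p, q = 1019, 1031   # toy primes, both congruent to 3 mod 4
N = p * q

def rabin(x):
    """The easy direction: squaring modulo N."""
    return (x * x) % N

c = rabin(123456)   # a quadratic residue mod N

# With the trapdoor (p, q), square roots modulo each prime are immediate
# for primes congruent to 3 mod 4: sqrt(c) = c^((p+1)/4) mod p.
rp = pow(c, (p + 1) // 4, p)
rq = pow(c, (q + 1) // 4, q)
assert pow(rp, 2, p) == c % p and pow(rq, 2, q) == c % q
print(rp, rq)  # combining the roots via the CRT inverts the Rabin function
```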
Discrete exponential and logarithm
Modular exponentiation can be done in polynomial time. Inverting this function requires computing the discrete logarithm. Currently there are several popular groups for which no algorithm to calculate the underlying discrete logarithm in polynomial time is known. These groups are all finite abelian groups and the general discrete logarithm problem can be described as thus.
Let G be a finite abelian group of cardinality n. Denote its group operation by multiplication. Consider a primitive element α ∈ G and another element β ∈ G. The discrete logarithm problem is to find the positive integer k, where 1 ≤ k ≤ n, such that:
α^k = β
The integer k that solves the equation α^k = β is termed the discrete logarithm of β to the base α. One writes k = log_α(β).
Popular choices for the group G in discrete logarithm cryptography are the cyclic groups (Zp)× (e.g. ElGamal encryption, Diffie–Hellman key exchange, and the Digital Signature Algorithm) and cyclic subgroups of elliptic curves over finite fields (see elliptic curve cryptography).
An elliptic curve is a set of pairs of elements of a field satisfying y² = x³ + ax + b. The elements of the curve form a group under an operation called "point addition" (which is not the same as the addition operation of the field). Multiplication kP of a point P by an integer k (i.e., a group action of the additive group of the integers) is defined as repeated addition of the point to itself. If k and P are known, it is easy to compute R = kP, but if only R and P are known, it is assumed to be hard to compute k.
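The two directions can be contrasted directly: Python's built-in pow(g, k, p) performs fast square-and-multiply exponentiation, while the brute-force search below needs on the order of k multiplications. The prime, base, and exponent are illustrative assumptions; the recovered exponent agrees with k only modulo the (unverified) order of g.

```python
p = 100003              # a prime, so the nonzero residues form a group
g = 2                   # base, assumed to generate a large subgroup
k = 72289
beta = pow(g, k, p)     # fast: polynomial in the bit length of p

def brute_force_dlog(g, beta, p):
    """Exhaustive search: cost is linear in the answer, exponential in bits."""
    acc = 1
    for i in range(1, p):
        acc = (acc * g) % p
        if acc == beta:
            return i
    return None

i = brute_force_dlog(g, beta, p)
assert pow(g, i, p) == beta  # i is a discrete log of beta (k mod ord(g))
print(i)
```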
Cryptographically secure hash functions
There are a number of cryptographic hash functions that are fast to compute, such as SHA-256. Some of the simpler versions have fallen to sophisticated analysis, but the strongest versions continue to offer fast, practical solutions for one-way computation. Most of the theoretical support for these functions consists of techniques for thwarting previously successful attacks.
Other candidates
Other candidates for one-way functions include the hardness of the decoding of random linear codes, the hardness of certain lattice problems, and the subset sum problem (Naccache–Stern knapsack cryptosystem).
Universal one-way function
There is an explicit function f that has been proved to be one-way, if and only if one-way functions exist. In other words, if any function is one-way, then so is f. Since this function was the first combinatorial complete one-way function to be demonstrated, it is known as the "universal one-way function". The problem of finding a one-way function is thus reduced to proving, perhaps non-constructively, that one such function exists.
There also exists a function that is one-way if polynomial-time bounded Kolmogorov complexity is mildly hard on average. Since the existence of one-way functions implies that polynomial-time bounded Kolmogorov complexity is mildly hard on average, the function is a universal one-way function.
See also
One-way compression function
Cryptographic hash function
Geometric cryptography
Trapdoor function
References
Further reading
Jonathan Katz and Yehuda Lindell (2007). Introduction to Modern Cryptography. CRC Press. .
Section 10.6.3: One-way functions, pp. 374–376.
Section 12.1: One-way functions, pp. 279–298.
Cryptographic primitives
Unsolved problems in computer science | One-way function | [
"Mathematics"
] | 2,131 | [
"Unsolved problems in computer science",
"Unsolved problems in mathematics",
"Mathematical problems"
] |
363,891 | https://en.wikipedia.org/wiki/Ethidium%20bromide | Ethidium bromide (or homidium bromide, chloride salt homidium chloride) is an intercalating agent commonly used as a fluorescent tag (nucleic acid stain) in molecular biology laboratories for techniques such as agarose gel electrophoresis. It is commonly abbreviated as EtBr, which is also an abbreviation for bromoethane. To avoid confusion, some laboratories have used the abbreviation EthBr for this salt. When exposed to ultraviolet light, it will fluoresce with an orange colour, intensifying almost 20-fold after binding to DNA. Under the name homidium, it has been commonly used since the 1950s in veterinary medicine to treat trypanosomiasis in cattle. The high incidence of antimicrobial resistance makes this treatment impractical in some areas, where the related isometamidium chloride is used instead. Despite its reputation as a mutagen, tests have shown it to have low mutagenicity without metabolic activation.
Structure, chemistry, and fluorescence
As with most fluorescent compounds, ethidium bromide is aromatic. Its core heterocyclic moiety is generically known as a phenanthridine, an isomer of which is the fluorescent dye acridine. Absorption maxima of EtBr in aqueous solution are at 210 nm and 285 nm, which correspond to ultraviolet light. As a result of this excitation, EtBr emits orange light with wavelength 605 nm.
Ethidium bromide's intense fluorescence after binding with DNA is probably not due to rigid stabilization of the phenyl moiety, because the phenyl ring has been shown to project outside the intercalated bases. In fact, the phenyl group is found to be almost perpendicular to the plane of the ring system, as it rotates about its single bond to find a position where it will impinge upon the ring system minimally. Instead, the hydrophobic environment found between the base pairs is believed to be responsible. By moving into this hydrophobic environment and away from the solvent, the ethidium cation is forced to shed any water molecules that were associated with it. As water is a highly efficient fluorescence quencher, the removal of these water molecules allows the ethidium to fluoresce.
Applications
Ethidium bromide is commonly used to detect nucleic acids in molecular biology laboratories. In the case of DNA this is usually double-stranded DNA from PCRs, restriction digests, etc. Single-stranded RNA can also be detected, since it usually folds back onto itself and thus provides local base pairing for the dye to intercalate. Detection typically involves a gel containing nucleic acids placed on or under an ultraviolet lamp. Since ultraviolet light is harmful to eyes and skin, gels stained with ethidium bromide are usually viewed indirectly using an enclosed camera, with the fluorescent images recorded as photographs. Where direct viewing is needed, the viewer's eyes and exposed skin should be protected. In the laboratory the intercalating properties have long been used to minimize chromosomal condensation when a culture is exposed to mitotic arresting agents during harvest. The resulting slide preparations permit a higher degree of resolution, and thus more confidence in determining structural integrity of chromosomes upon microscopic analysis.
Ethidium bromide is also used during DNA fragment separation by agarose gel electrophoresis. It is added to running buffer and binds by intercalating between DNA base pairs. When the agarose gel is illuminated using UV light, DNA bands become visible. Intercalation of EtBr can alter properties of the DNA molecule, such as charge, weight, conformation, and flexibility. Since the mobilities of DNA molecules through the agarose gel are measured relative to a molecular weight standard, the effects of EtBr can be critical to determining the sizes of molecules.
Ethidium bromide has also been used extensively to reduce mitochondrial DNA copy number in proliferating cells. The effect of EtBr on mitochondrial DNA is used in veterinary medicine to treat trypanosomiasis in cattle, as EtBr binds molecules of kinetoplastid DNA and changes their conformation to the Z-DNA form. This form inhibits replication of kinetoplastid DNA, which is lethal for trypanosomes.
The chloride salt homidium chloride has the same applications.
Ethidium bromide can be added to YPD media and used as an inhibitor for cell growth.
The binding affinity of the cationic nanoparticles with DNA could be evaluated by competitive binding with ethidium bromide.
Alternatives for gel
There are alternatives to ethidium bromide which are advertised as being less dangerous and having better performance. For example, several SYBR-based dyes are used by some researchers and there are other emerging stains such as "Novel Juice". SYBR dyes are less mutagenic than EtBr by the Ames test with liver extract. However, SYBR Green I was actually found to be more mutagenic than EtBr to the bacterial cells exposed to UV (which is used to visualize either dye). This may be the case for other "safer" dyes, but while mutagenic and toxicity details are available these have not been published in peer-reviewed journals. The MSDS for SYBR Safe reports an LD50 for rats of over 5 g/kg, which is higher than that of EtBr (1.5 g/kg). Many alternative dyes are suspended in DMSO, which has health implications of its own, including increased skin absorption of organic compounds. Despite the performance advantage of using SYBR dyes instead of EtBr for staining purposes, many researchers still prefer EtBr since it is considerably less expensive.
Possible carcinogenic activity
Most use of ethidium bromide in the laboratory (0.25–1 μg/mL) is below the LD50 dosage, making acute toxicity unlikely. Testing in humans and longer studies in a mammalian system would be required to fully understand the long-term risk ethidium bromide poses to lab workers, but it is clear that ethidium bromide can cause mutations in mammalian and bacterial cells.
Handling and disposal
Ethidium bromide is not regulated as hazardous waste at low concentrations, but is treated as hazardous waste by many organizations. Material should be handled according to the manufacturer's safety data sheet (SDS).
The disposal of laboratory ethidium bromide remains a controversial subject. Ethidium bromide can be degraded chemically, or collected and incinerated. It is common for ethidium bromide waste below a mandated concentration to be disposed of normally (such as pouring it down a drain). A common practice is to treat ethidium bromide with sodium hypochlorite (bleach) before disposal. According to Lunn and Sansone, chemical degradation using bleach yields compounds which are mutagenic by the Ames test. Data are lacking on the mutagenic effects of degradation products. Lunn and Sansone describe more effective methods for degradation. Elsewhere, ethidium bromide removal from solutions with activated charcoal or ion exchange resin is recommended. Various commercial products are available for this use.
Drug resistance
Trypanosomes in the Gibe River Valley in southwest Ethiopia showed universal resistance to homidium between July 1989 and February 1993. This likely indicates a permanent loss of the drug's effectiveness in this area against the tested target, T. congolense isolated from Boran cattle.
See also
Phenanthridine
Agarose gel electrophoresis and gel electrophoresis of nucleic acids
GelRed (itself derived from ethidium bromide) and GelGreen, marketed as safer and more intense DNA stains
Propidium iodide and propidium monoazide, related dyes
References
External links
Aromatic amines
Bromides
DNA intercalaters
Mutagens
Quaternary ammonium compounds
Phenanthridine dyes
Staining dyes
Embryotoxicants
Experimental cancer drugs | Ethidium bromide | [
"Chemistry"
] | 1,641 | [
"Bromides",
"Salts"
] |
363,903 | https://en.wikipedia.org/wiki/Newtonian%20fluid | A Newtonian fluid is a fluid in which the viscous stresses arising from its flow are at every point linearly correlated to the local strain rate — the rate of change of its deformation over time. Stresses are proportional to the rate of change of the fluid's velocity vector.
A fluid is Newtonian only if the tensors that describe the viscous stress and the strain rate are related by a constant viscosity tensor that does not depend on the stress state and velocity of the flow. If the fluid is also isotropic (i.e., its mechanical properties are the same along any direction), the viscosity tensor reduces to two real coefficients, describing the fluid's resistance to continuous shear deformation and continuous compression or expansion, respectively.
Newtonian fluids are the easiest mathematical models of fluids that account for viscosity. While no real fluid fits the definition perfectly, many common liquids and gases, such as water and air, can be assumed to be Newtonian for practical calculations under ordinary conditions. However, non-Newtonian fluids are relatively common and include oobleck (which becomes stiffer when vigorously sheared) and non-drip paint (which becomes thinner when sheared). Other examples include many polymer solutions (which exhibit the Weissenberg effect), molten polymers, many solid suspensions, blood, and most highly viscous fluids.
Newtonian fluids are named after Isaac Newton, who first used the differential equation to postulate the relation between the shear strain rate and shear stress for such fluids.
Definition
An element of a flowing liquid or gas will endure forces from the surrounding fluid, including viscous stress forces that cause it to gradually deform over time. These forces can be mathematically first order approximated by a viscous stress tensor, usually denoted by τ.
The deformation of a fluid element, relative to some previous state, can be first order approximated by a strain tensor that changes with time. The time derivative of that tensor is the strain rate tensor, that expresses how the element's deformation is changing with time; and is also the gradient of the velocity vector field v at that point, often denoted ∇v.
The tensors τ and ∇v can be expressed by 3×3 matrices, relative to any chosen coordinate system. The fluid is said to be Newtonian if these matrices are related by the equation
τ = μ(∇v)
where μ is a fixed 3×3×3×3 fourth order tensor that does not depend on the velocity or stress state of the fluid.
Incompressible isotropic case
For an incompressible and isotropic Newtonian fluid in laminar flow only in the direction x (i.e. where viscosity is isotropic in the fluid), the shear stress is related to the strain rate by the simple constitutive equation
τ = μ (du/dy)
where
τ is the shear stress ("skin drag") in the fluid,
μ is a scalar constant of proportionality, the dynamic viscosity of the fluid,
du/dy is the derivative in the direction y, normal to x, of the flow velocity component u that is oriented along the direction x.
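A numeric illustration of this law follows; the fluid properties and geometry are assumed for the example, using a water-like viscosity of about 1.0 × 10⁻³ Pa·s and a plate moving at 1 m/s over a 1 mm gap.

```python
mu = 1.0e-3           # dynamic viscosity, Pa*s (roughly water at 20 C, assumed)
plate_speed = 1.0     # m/s
gap = 1.0e-3          # m

du_dy = plate_speed / gap  # velocity gradient du/dy, 1/s
tau = mu * du_dy           # Newton's law of viscosity: tau = mu * du/dy

print(tau)  # 1.0 Pa; doubling du/dy doubles tau, the Newtonian signature
```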
In the case of a general 2D incompressible flow in the plane x, y, the Newton constitutive equation becomes:
τ_xy = μ (∂u/∂y + ∂v/∂x)
where:
τ_xy is the shear stress ("skin drag") in the fluid,
∂u/∂y is the partial derivative in the direction y of the flow velocity component u that is oriented along the direction x,
∂v/∂x is the partial derivative in the direction x of the flow velocity component v that is oriented along the direction y.
We can now generalize to the case of an incompressible flow with a general direction in the 3D space, where the above constitutive equation becomes
τ_ij = μ (∂u_i/∂x_j + ∂u_j/∂x_i)
where
x_j is the j-th spatial coordinate,
u_i is the fluid's velocity in the direction of axis i,
τ_ij is the j-th component of the stress acting on the faces of the fluid element perpendicular to axis i; it is the ij-th component of the shear stress tensor,
or written in more compact tensor notation
τ = μ (∇u + (∇u)^T)
where ∇u is the flow velocity gradient.
An alternative way of stating this constitutive equation is:
τ = 2με
where
ε = (1/2) (∇u + (∇u)^T)
is the rate-of-strain tensor. So this decomposition can be made explicit as:
ε_ij = (1/2) (∂u_i/∂x_j + ∂u_j/∂x_i).
This constitutive equation is also called the Newton law of viscosity.
The total stress tensor σ can always be decomposed as the sum of the isotropic stress tensor and the deviatoric stress tensor (σ'):
σ = (1/3) tr(σ) I + σ'
In the incompressible case, the isotropic stress is simply proportional to the thermodynamic pressure p:
(1/3) tr(σ) = −p
and the deviatoric stress is coincident with the shear stress tensor τ:
σ' = τ
The stress constitutive equation then becomes
σ = −p I + 2με
or written in more compact tensor notation
σ = −p I + μ (∇u + (∇u)^T)
where I is the identity tensor.
General compressible case
Newton's constitutive law for a compressible flow results from the following assumptions on the Cauchy stress tensor:
the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. So the stress variable is the tensor gradient ∇u, or more simply the rate-of-strain tensor ε = (1/2)(∇u + (∇u)^T);
the deviatoric stress is linear in this variable: σ'(ε) = C : ε, where C is independent of the strain rate tensor, C is the fourth-order tensor representing the constant of proportionality, called the viscosity or elasticity tensor, and : is the double-dot product;
the fluid is assumed to be isotropic, as with gases and simple liquids, and consequently C is an isotropic tensor; furthermore, since the deviatoric stress tensor is symmetric, by Helmholtz decomposition it can be expressed in terms of two scalar Lamé parameters, the second viscosity λ and the dynamic viscosity μ, as it is usual in linear elasticity:
σ' = λ tr(ε) I + 2με
where I is the identity tensor, and tr(ε) is the trace of the rate-of-strain tensor. So this decomposition can be explicitly defined as:
σ'_ij = λ (ε₁₁ + ε₂₂ + ε₃₃) δ_ij + 2μ ε_ij.
Since the trace of the rate-of-strain tensor in three dimensions is the divergence (i.e. rate of expansion) of the flow:
tr(ε) = ∇·u
Given this relation, and since the trace of the identity tensor in three dimensions is three:
tr(I) = 3
the trace of the stress tensor in three dimensions becomes:
tr(σ) = −3p + (3λ + 2μ) ∇·u
So by alternatively decomposing the stress tensor into isotropic and deviatoric parts, as usual in fluid dynamics:
σ = −[p − (λ + (2/3)μ)(∇·u)] I + μ [∇u + (∇u)^T − (2/3)(∇·u) I]
Introducing the bulk viscosity ζ,
ζ = λ + (2/3)μ
we arrive at the linear constitutive equation in the form usually employed in thermal hydraulics:
σ = −[p − ζ(∇·u)] I + μ [∇u + (∇u)^T − (2/3)(∇·u) I]
which can also be arranged in the other usual form:
σ = −p I + ζ(∇·u) I + μ [∇u + (∇u)^T − (2/3)(∇·u) I]
Note that in the compressible case the pressure is no more proportional to the isotropic stress term, since there is the additional bulk viscosity term:
(1/3) tr(σ) = −p + ζ(∇·u)
and the deviatoric stress tensor σ' is still coincident with the shear stress tensor τ (i.e. the deviatoric stress in a Newtonian fluid has no normal stress components), and it has a compressibility term in addition to the incompressible case, which is proportional to the shear viscosity:
σ' = τ = μ [∇u + (∇u)^T − (2/3)(∇·u) I]
Note that the incompressible case corresponds to the assumption that the pressure constrains the flow so that the volume of fluid elements is constant: isochoric flow resulting in a solenoidal velocity field with ∇·u = 0.
So one returns to the expressions for pressure and deviatoric stress seen in the preceding paragraph.
Both bulk viscosity and dynamic viscosity need not be constant – in general, they depend on two thermodynamic variables if the fluid contains a single chemical species, say for example, pressure and temperature. Any equation that makes explicit one of these transport coefficients in the conservation variables is called an equation of state.
Apart from its dependence of pressure and temperature, the second viscosity coefficient also depends on the process, that is to say, the second viscosity coefficient is not just a material property. Example: in the case of a sound wave with a definitive frequency that alternatively compresses and expands a fluid element, the second viscosity coefficient depends on the frequency of the wave. This dependence is called the dispersion. In some cases, the second viscosity can be assumed to be constant in which case, the effect of the volume viscosity is that the mechanical pressure is not equivalent to the thermodynamic pressure: as demonstrated below.
However, this difference is usually neglected most of the time (that is, whenever we are not dealing with processes such as sound absorption and attenuation of shock waves, where the second viscosity coefficient becomes important) by explicitly assuming ζ = 0. The assumption of setting ζ = 0 is called the Stokes hypothesis. The validity of the Stokes hypothesis can be demonstrated for monoatomic gases both experimentally and from the kinetic theory; for other gases and liquids, the Stokes hypothesis is generally incorrect.
Finally, note that the Stokes hypothesis is less restrictive than the assumption of incompressible flow. In fact, in incompressible flow both the bulk viscosity term and the shear viscosity term in the divergence of the flow velocity disappear, while under the Stokes hypothesis the first term vanishes but the second one still remains.
For anisotropic fluids
More generally, in a non-isotropic Newtonian fluid, the coefficient μ that relates internal friction stresses to the spatial derivatives of the velocity field is replaced by a nine-element viscous stress tensor μ_ij.
There is a general formula for the friction force in a liquid: the vector differential of the friction force is equal to the viscosity tensor contracted with the vector product of the differential of the area vector of adjoining liquid layers and the rotor (curl) of the velocity:
dF = μ_ij (dS × rot u)
where μ_ij is the viscosity tensor. The diagonal components of the viscosity tensor are the molecular viscosity of a liquid, and the off-diagonal components are the turbulent eddy viscosity.
Newton's law of viscosity
The following equation illustrates the relation between shear rate and shear stress for a fluid with laminar flow only in the direction x:
τ_xy = μ (∂u/∂y)
where:
τ_xy is the shear stress in the components x and y, i.e. the force component on the direction x per unit surface that is normal to the direction y (so it is parallel to the direction x),
μ is the dynamic viscosity, and
∂u/∂y is the flow velocity gradient along the direction y, that is normal to the flow velocity u.
If viscosity does not vary with rate of deformation the fluid is Newtonian.
Power law model
The power law model is used to display the behavior of Newtonian and non-Newtonian fluids and measures shear stress as a function of strain rate.
The relationship between shear stress, strain rate and the velocity gradient for the power law model is:
τ = K |du/dy|^(n−1) (du/dy)
where
|du/dy|^(n−1) is the absolute value of the strain rate to the (n−1) power;
du/dy is the velocity gradient;
n is the power law index;
K is the flow consistency index.
If
n < 1 then the fluid is a pseudoplastic.
n = 1 then the fluid is a Newtonian fluid.
n > 1 then the fluid is a dilatant.
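A short sketch of this classification, using the power-law form given above; the consistency index K and the sample shear rate are illustrative assumptions.

```python
def power_law_stress(rate, K, n):
    """tau = K * |rate|**(n - 1) * rate; reduces to Newtonian when n = 1."""
    return K * abs(rate) ** (n - 1) * rate

def classify(n):
    if n < 1:
        return "pseudoplastic"
    if n > 1:
        return "dilatant"
    return "Newtonian"

for n in (0.5, 1.0, 1.5):
    print(n, classify(n), round(power_law_stress(10.0, K=0.1, n=n), 3))
```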
Fluid model
The relationship between the shear stress and shear rate in a Casson fluid model is defined as follows:
√τ = √τ₀ + S √(du/dy)
where τ₀ is the yield stress and S a consistency constant; for blood, τ₀ depends on a parameter α determined by the protein composition and on the Hematocrit number H.
Examples
Water, air, alcohol, glycerol, and thin motor oil are all examples of Newtonian fluids over the range of shear stresses and shear rates encountered in everyday life. Single-phase fluids made up of small molecules are generally (although not exclusively) Newtonian.
See also
Fluid mechanics
Non-Newtonian fluid
Strain rate tensor
Viscosity
Viscous stress tensor
References
Viscosity
Fluid dynamics | Newtonian fluid | [
"Physics",
"Chemistry",
"Engineering"
] | 2,327 | [
"Physical phenomena",
"Physical quantities",
"Chemical engineering",
"Piping",
"Wikipedia categories named after physical quantities",
"Viscosity",
"Physical properties",
"Fluid dynamics"
] |
363,985 | https://en.wikipedia.org/wiki/Metacentric%20height | The metacentric height (GM) is a measurement of the initial static stability of a floating body. It is calculated as the distance between the centre of gravity of a ship and its metacentre. A larger metacentric height implies greater initial stability against overturning. The metacentric height also influences the natural period of rolling of a hull, with very large metacentric heights being associated with shorter periods of roll which are uncomfortable for passengers. Hence, a sufficiently, but not excessively, high metacentric height is considered ideal for passenger ships.
Different centres
The centre of buoyancy is at the centre of mass of the volume of water that the hull displaces. This point is referred to as B in naval architecture.
The centre of gravity of the ship is commonly denoted as point G or CG. When a ship is at equilibrium, the centre of buoyancy is vertically in line with the centre of gravity of the ship.
The metacentre is the point where vertical lines through the centre of buoyancy at heel angles φ and φ ± dφ intersect. When the ship is vertical, the metacentre lies above the centre of gravity and so moves in the opposite direction of heel as the ship rolls. The distance between the centre of gravity and the metacentre is abbreviated as GM. As the ship heels over, the centre of gravity generally remains fixed with respect to the ship because it just depends on the position of the ship's weight and cargo, but the surface area increases, increasing BMφ. Work must be done to roll a stable hull. This is converted to potential energy by raising the centre of mass of the hull with respect to the water level or by lowering the centre of buoyancy or both. This potential energy will be released in order to right the hull and the stable attitude will be where it has the least magnitude. It is the interplay of potential and kinetic energy that results in the ship having a natural rolling frequency. For small angles, the metacentre, Mφ, moves with a lateral component so it is no longer directly over the centre of mass.
The righting couple on the ship is proportional to the horizontal distance between two equal forces. These are gravity acting downwards at the centre of mass and the same magnitude force acting upwards through the centre of buoyancy, and through the metacentre above it. The righting couple is proportional to the metacentric height multiplied by the sine of the angle of heel, hence the importance of metacentric height to stability. As the hull rights, work is done either by its centre of mass falling, or by water falling to accommodate a rising centre of buoyancy, or both.
For example, when a perfectly cylindrical hull rolls, the centre of buoyancy stays on the axis of the cylinder at the same depth. However, if the centre of mass is below the axis, it will move to one side and rise, creating potential energy. Conversely if a hull having a perfectly rectangular cross section has its centre of mass at the water line, the centre of mass stays at the same height, but the centre of buoyancy goes down as the hull heels, again storing potential energy.
When setting a common reference for the centres, the molded (within the plate or planking) line of the keel (K) is generally chosen; thus, the reference heights are:
KB – to Centre of Buoyancy
KG – to Centre of Gravity
KMT – to Transverse Metacentre
Metacentre
When a ship heels (rolls sideways), the centre of buoyancy of the ship moves laterally. It might also move up or down with respect to the water line. The point at which a vertical line through the heeled centre of buoyancy crosses the line through the original, vertical centre of buoyancy is the metacentre. The metacentre remains directly above the centre of buoyancy by definition.
In the diagram above, the two Bs show the centres of buoyancy of a ship in the upright and heeled conditions. The metacentre, M, is considered to be fixed relative to the ship for small angles of heel; however, at larger angles the metacentre can no longer be considered fixed, and its actual location must be found to calculate the ship's stability.
It can be calculated using the formula:
KM = KB + BM = KB + I/V
where KB is the height of the centre of buoyancy above the keel, I is the second moment of area of the waterplane around the rotation axis in m⁴, and V is the volume of displacement in m³. KM is the distance from the keel to the metacentre.
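To make the formula concrete, here is a minimal Python sketch for a box-shaped barge, where the rectangular waterplane gives I = L·B³/12 and the centre of buoyancy sits at half draft; the dimensions are hypothetical.

```python
# Box-shaped barge: waterplane I = L*B**3/12, volume V = L*B*T, KB = T/2.
# Dimensions are hypothetical; real hulls need the actual waterplane geometry.

length, beam, draft = 60.0, 10.0, 3.0        # metres, assumed barge

V = length * beam * draft                    # displaced volume, m^3
I = length * beam ** 3 / 12.0                # waterplane second moment, m^4
KB = draft / 2.0                             # centre of buoyancy height, m
BM = I / V                                   # metacentric radius, m
KM = KB + BM                                 # keel-to-metacentre distance, m
print(f"KB = {KB:.2f} m, BM = {BM:.2f} m, KM = {KM:.2f} m")
```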
Stable floating objects have a natural rolling frequency, just like a weight on a spring, where the frequency is increased as the spring gets stiffer. In a boat, the equivalent of the spring stiffness is the distance called "GM" or "metacentric height", being the distance between two points: "G" the centre of gravity of the boat and "M", which is a point called the metacentre.
The metacentre is determined by the ratio between the inertia resistance of the boat and the volume of the boat. (The inertia resistance is a quantified description of how the waterline width of the boat resists overturning.) Wide and shallow hulls have high transverse metacentres, whilst narrow and deep hulls have low metacentres. Ignoring the ballast, wide and shallow means that the ship is very quick to roll, and narrow and deep means that the ship is very hard to overturn and is stiff.
"G", is the center of gravity. "GM", the stiffness parameter of a boat, can be lengthened by lowering the center of gravity or changing the hull form (and thus changing the volume displaced and second moment of area of the waterplane) or both.
An ideal boat strikes a balance. Very tender boats with very slow roll periods are at risk of overturning, but are comfortable for passengers. However, vessels with a higher metacentric height are "excessively stable" with a short roll period resulting in high accelerations at the deck level.
Sailing yachts, especially racing yachts, are designed to be stiff, meaning the distance between the centre of mass and the metacentre is very large in order to resist the heeling effect of the wind on the sails. In such vessels, the rolling motion is not uncomfortable because of the moment of inertia of the tall mast and the aerodynamic damping of the sails.
Righting arm
The metacentric height is an approximation for the vessel stability at a small angle (0-15 degrees) of heel. Beyond that range, the stability of the vessel is dominated by what is known as a righting moment. Depending on the geometry of the hull, naval architects must iteratively calculate the center of buoyancy at increasing angles of heel. They then calculate the righting moment at this angle, which is determined using the equation:
RM = GZ × Δ
where RM is the righting moment, GZ is the righting arm, and Δ is the displacement. Because the vessel displacement is constant, common practice is to simply graph the righting arm vs the angle of heel. The righting arm (known also as GZ — see diagram) is the horizontal distance between the lines of buoyancy and gravity, and GZ ≈ GM · sin φ at small angles of heel.
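A minimal Python sketch of the small-angle righting moment just described; the displacement and GM are assumed values.

```python
import math

# Small-angle approximation: GZ = GM * sin(heel), RM = displacement * GZ.
# Displacement and GM are assumed values for illustration.

displacement = 5_000_000.0 * 9.81   # N, weight of an assumed 5,000 t vessel
GM = 1.5                            # m, assumed metacentric height

for heel_deg in (2, 5, 10, 15):
    GZ = GM * math.sin(math.radians(heel_deg))
    RM = displacement * GZ
    print(f"heel {heel_deg:>2} deg: GZ = {GZ:.3f} m, RM = {RM / 1e6:.1f} MN·m")
```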
There are several important factors that must be determined with regard to righting arm/moment. These are known as the maximum righting arm/moment, the point of deck immersion, the downflooding angle, and the point of vanishing stability. The maximum righting moment is the maximum moment that could be applied to the vessel without causing it to capsize. The point of deck immersion is the angle at which the main deck will first encounter the sea. Similarly, the downflooding angle is the angle at which water will be able to flood deeper into the vessel. Finally, the point of vanishing stability is a point of unstable equilibrium. Any heel less than this angle will allow the vessel to right itself, while any heel greater than this angle will cause a negative righting moment (or heeling moment) and force the vessel to continue to roll over. When a vessel reaches a heel equal to its point of vanishing stability, any external force will cause the vessel to capsize.
Sailing vessels are designed to operate with a higher degree of heel than motorized vessels and the righting moment at extreme angles is of high importance.
Monohulled sailing vessels should be designed to have a positive righting arm (the limit of positive stability) to at least 120° of heel, although many sailing yachts have stability limits down to 90° (mast parallel to the water surface). As the displacement of the hull at any particular degree of list is not proportional, calculations can be difficult, and the concept was not introduced formally into naval architecture until about 1970.
Stability
GM and rolling period
The metacentre has a direct relationship with a ship's rolling period. A ship with a small GM will be "tender" and have a long roll period. An excessively low or negative GM increases the risk of a ship capsizing in rough weather, as with HMS Captain or the Vasa. It also puts the vessel at risk of large angles of heel if the cargo or ballast shifts, as with the Cougar Ace. A ship with low GM is less safe if damaged and partially flooded because the lower metacentric height leaves less safety margin. For this reason, maritime regulatory agencies such as the International Maritime Organization specify minimum safety margins for seagoing vessels. A larger metacentric height, on the other hand, can cause a vessel to be too "stiff"; excessive stability is uncomfortable for passengers and crew. This is because the stiff vessel quickly responds to the sea as it attempts to assume the slope of the wave. An overly stiff vessel rolls with a short period and high amplitude, which results in high angular acceleration. This increases the risk of damage to the ship and to cargo, and may cause excessive roll in the special circumstance where the wave period coincides with the natural roll period of the ship. Roll damping by bilge keels of sufficient size will reduce the hazard. Criteria for this dynamic stability effect remain to be developed. In contrast, a "tender" ship lags behind the motion of the waves and tends to roll at lesser amplitudes. A passenger ship will typically have a long rolling period for comfort, perhaps 12 seconds, while a tanker or freighter might have a rolling period of 6 to 8 seconds.
The period of roll can be estimated from the following equation:
T = 2π √(k² + a₄₄²) / √(g · GM)
where g is the gravitational acceleration, a44 is the added radius of gyration, k is the radius of gyration about the longitudinal axis through the centre of gravity, and GM is the metacentric height, which acts as the stability index.
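A minimal Python sketch evaluating this estimate under the reconstruction above; all inputs are assumed values.

```python
import math

# Roll-period estimate as reconstructed above; k, a44 and GM are assumed
# values, with the added radius of gyration simply set to zero here.

g = 9.81     # m/s^2, gravitational acceleration
k = 4.0      # m, assumed radius of gyration about the longitudinal axis
a44 = 0.0    # m, added radius of gyration (ignored in this sketch)
GM = 1.0     # m, assumed metacentric height

T = 2 * math.pi * math.sqrt(k ** 2 + a44 ** 2) / math.sqrt(g * GM)
print(f"estimated roll period: {T:.1f} s")   # larger GM -> shorter, stiffer roll
```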
Damaged stability
If a ship floods, the loss of stability is caused by the increase in KB, the height of the centre of buoyancy, and the loss of waterplane area - thus a loss of the waterplane moment of inertia - which decreases the metacentric height. This additional mass will also reduce freeboard (distance from water to the deck) and the ship's downflooding angle (minimum angle of heel at which water will be able to flow into the hull). The range of positive stability will be reduced to the angle of down flooding, resulting in a reduced righting lever. When the vessel is inclined, the fluid in the flooded volume will move to the lower side, shifting its centre of gravity toward the list, further increasing the heeling moment. This is known as the free surface effect.
Free surface effect
In tanks or spaces that are partially filled with a fluid or semi-fluid (fish, ice, or grain for example) as the tank is inclined the surface of the liquid, or semi-fluid, stays level. This results in a displacement of the centre of gravity of the tank or space relative to the overall centre of gravity. The effect is similar to that of carrying a large flat tray of water. When an edge is tipped, the water rushes to that side, which exacerbates the tip even further.
The significance of this effect is proportional to the cube of the width of the tank or compartment, so two baffles separating the area into thirds will reduce the displacement of the centre of gravity of the fluid by a factor of 9. This is of significance in ship fuel tanks or ballast tanks, tanker cargo tanks, and in flooded or partially flooded compartments of damaged ships. Another worrying feature of free surface effect is that a positive feedback loop can be established, in which the period of the roll is equal or almost equal to the period of the motion of the centre of gravity in the fluid, resulting in each roll increasing in magnitude until the loop is broken or the ship capsizes.
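The factor-of-9 claim can be checked numerically, since the free surface moment of a rectangular surface is i = l·b³/12; a minimal Python sketch with a hypothetical tank:

```python
# Free surface moment of a rectangular surface: i = l * b**3 / 12.
# Splitting a tank of width b into n equal compartments divides i by n**2.
# Tank dimensions are hypothetical.

def free_surface_moment(length: float, width: float) -> float:
    return length * width ** 3 / 12.0

l, b = 20.0, 9.0                                  # metres, assumed tank
whole = free_surface_moment(l, b)
thirds = 3 * free_surface_moment(l, b / 3)        # two baffles -> three parts
print(f"reduction factor: {whole / thirds:.0f}")  # prints 9, matching the text
```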
This has been significant in historic capsizes, most notably the and the .
Transverse and longitudinal metacentric heights
There is also a similar consideration in the movement of the metacentre forward and aft as a ship pitches. Metacentres are usually separately calculated for transverse (side to side) rolling motion and for lengthwise longitudinal pitching motion. These are variously known as GM(t) and GM(l), or sometimes GMt and GMl.
Technically, there are different metacentric heights for any combination of pitch and roll motion, depending on the moment of inertia of the waterplane area of the ship around the axis of rotation under consideration, but they are normally only calculated and stated as specific values for the limiting pure pitch and roll motion.
Measurement
The metacentric height is normally estimated during the design of a ship but can be determined by an inclining test once it has been built. This can also be done when a ship or offshore floating platform is in service. It can be calculated by theoretical formulas based on the shape of the structure.
The angle(s) obtained during the inclining experiment are directly related to GM. By means of the inclining experiment, the 'as-built' centre of gravity can be found: obtaining GM and KM by experimental measurement (by means of pendulum swing measurements and draft readings), the height of the centre of gravity KG can be found. So KM and GM become the known variables during inclining, and KG is the wanted calculated variable (KG = KM − GM).
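A minimal Python sketch of reducing inclining-test data, using the standard relation GM = (w·d)/(W·tan θ) for a test weight w shifted transversely a distance d on a vessel of displacement W; all numbers are hypothetical.

```python
import math

# Inclining test reduction: GM = (w * d) / (W * tan(theta)), then KG = KM - GM.
# All inputs are hypothetical test data; KM would come from hydrostatic tables.

w = 10_000.0               # kg, shifted test weight
d = 6.0                    # m, transverse shift distance
W = 2_000_000.0            # kg, vessel displacement
theta = math.radians(1.2)  # rad, heel observed from pendulum readings
KM = 7.5                   # m, keel to metacentre at the test draft

GM = (w * d) / (W * math.tan(theta))
KG = KM - GM
print(f"GM = {GM:.2f} m, KG = {KG:.2f} m")
```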
See also
Kayak roll
Turtling
Angle of loll
Limit of positive stability
Weight distribution
References
Geometric centers
Buoyancy
Ship measurements
Vertical position | Metacentric height | [
"Physics",
"Mathematics"
] | 2,939 | [
"Vertical position",
"Point (geometry)",
"Physical quantities",
"Distance",
"Geometric centers",
"Symmetry"
] |
364,015 | https://en.wikipedia.org/wiki/Psychology%20of%20self | The psychology of self is the study of either the cognitive, conative or affective representation of one's identity, or the subject of experience. The earliest form of the Self in modern psychology saw the emergence of two elements, I and me, with I referring to the Self as the subjective knower and me referring to the Self as a subject that is known.
The Self has long been considered as the central element and support of any experience. The Self is not 'permanently stuck into the heart of consciousness'. "I am not always as intensively aware of me as an agent, as I am of my actions. That results from the fact that I perform only part of my actions, the other part being conducted by my thought, expression, practical operations, and so on."
Current views of the Self in psychology position it as playing an integral part in human motivation, cognition, affect, and social identity. It may be the case that we can now successfully attempt to create experiences of the Self in a neural process with cognitive consequences, which will give us insight into the elements that compose the complex selves of modern identity.
Over time, different theorists from multiple schools of thought have created ideas of what makes up the Self. Major theorists in the Clinical and Sociological branches of Psychology have emerged from these schools.
In Clinical Psychology
Jungian's Self Archetype
In classical Jungian analysis, the Self is the culmination of several archetypes, which are predispositions of how a person responds to the world. The Self signifies the coherent whole, unifying both the conscious and unconscious mind of a person. The Self, according to Jung, is the most important and difficult archetype to understand. It is fully realized as the product of individuation, which is defined by Jung as the rebirth of the Ego back to the original self.
The Self, besides being the center of the psyche, is also autonomous, meaning that it exists outside of time and space. Jung also called the Self an imago Dei. The Self is the source of dreams and often appears as an authority figure in dreams with the ability to perceive events not yet occurred or guide one in the present.
(See also: Sigmund Freud & Personality)
Kohut's Formulation
Kohut followed Freud's line of thinking regarding the Self. However, he deviates from Freud by theorizing that the Self puts energy into the idea of narcissism (see Cathexis). The system is then broken over time into initially two systems of narcissistic perfection: 1) a system of ambitions (the grandiose self) and 2) a system of ideals (the idealized parent imago). According to Kohut, these two systems represent the poles within Kohut's bipolar self. These poles work with each other to maintain a balance that is referred to as the Self.
Winnicott's Selves
Donald Winnicott distinguished what he called the "true self" from the "false self" in the human personality, considering the true self as one based on the individual's sense of being, not doing, something which was rooted in the experiencing body.
Nevertheless, Winnicott did not undervalue the role of the false self in the human personality, regarding it as a necessary form of defensive organization, a kind of caretaker behind which the true self hides so that it may continue to exist.
Five levels of false self-organization were identified by Winnicott, running along a kind of continuum.
In the most severe instance, the false self completely replaces and ousts the true self, leaving the latter a mere possibility.
Less severely, the false self protects the true self, which remains unactualized.
Closer to health, the false self supports the individual's search for conditions that will allow the true self to recover its own identity.
Even closer to health, we find the false self "... established on the basis of identifications".
Finally, in a healthy person, the false self is composed of that which facilitates social behavior, the manners and courtesy that allows for a smooth social life, with emotions expressed in socially acceptable forms.
As for the true self, Winnicott linked it to playing "hide and seek", designed to protect one's real self against exploitation, without entirely forfeiting the ability to relate to others.
Berne's Transactional Analysis
In his transactional analysis theory, Eric Berne distinguished the personality's ego states - Parent, Adult and Child - from what he called 'the real self, the one that can move from one ego state to another' (Eric Berne, What Do You Say After You Say Hello?, 1974, p. 276).
The Parent ego consists of borrowed behaviors and feelings from previous caregivers. The parent ego can consist of either the Nurturing or Critical Parent. Both types of parents offer information to the child that can be either beneficial or detrimental to their development.
The Adult ego is otherwise known as our data-processing center. This ego state is able to judge information based on facts, rather than emotions or preconceived beliefs.
The Child ego is identified as the state that holds all of a person's memories, emotions, and feelings. People carry this ego state with them all the time and can reflect on it at any time. This state can also be divided into two segments: the Free (or Natural) child and the Adapted (and/or Rebellious) child.
Berne considered that 'the feeling of "Self" is a mobile one. It can reside in any of the three ego states at any given moment, and can jump from one to the other as occasion arises'.
A person's tone, gestures, choice of words, posture, and emotional state can portray which ego state they are currently in. By knowing about their own ego states, a person can use each one in particular situations in order to enhance their experience or make new social connections.
Berne saw the Self as the most valuable part of the personality: 'when people get to know each other well, they penetrate into the depths where this real Self resides, and that is the part of the other person they respect and love'.
In Social Psychology
Social psychology acknowledges that "one of the most important life tasks each person faces is understanding who they are and how they feel about themselves". This allows us to better understand ourselves, our abilities, and our preferences so that a person can make choices and decisions that suit them the best. However, rather than absolute knowledge, it would seem that 'a healthy sense of Self calls for both accurate self-knowledge and protective self-enhancement, in just the right amounts at just the right times.'
Other schools of thought look at the Self from a Social Psychology perspective. Some are listed below.
The Self is an automatic part of every human being that enables them to relate to others. The self is made up of three main parts that allow for the Self to maintain its function: Self-knowledge, the interpersonal self, and the agent self.
Self-knowledge
Self-knowledge is something many seek to understand. In knowing about their selves, a person is more capable of knowing how to be socially acceptable and desirable. They seek out self-knowledge due to the appraisal motive, self-enhancement motive, and consistency motive.
Self-knowledge is sometimes referred to as self-concept. This feature allows for people to gather information and beliefs about themselves. A person's self-awareness, self-esteem, and self-deception all fall under the self-knowledge part of self. People learn about themselves through our looking-glass selves, introspection, social comparisons, and self-perception.
The looking glass self is a term used to describe a theory that people learn about themselves through other people. In the looking-glass self proposal, a person visualizes how they appear to others, imagines how they are judged by others, and then develops a response to the judgment they perceive from other people.
Introspection refers to the way a person gathers information about oneself through mental functions and emotions. While a person might not know why they are thinking or feeling a certain way, they consciously know what they are feeling.
Social comparison refers to the way in which people compare themselves to others. By observing others, a person can gauge their work and behaviors as good, bad, or neutral. This can be either motivational or discouraging to the person, depending on whom they are comparing themselves to.
The self-perception theory is another theory in which a person makes inferences about themselves through their own actions and attitudes.
Self-awareness occurs when someone acknowledges their own personality and behaviors. This can occur in both the private and public parts of a person's life.
Self-esteem describes how a person evaluates their self. Four factors that contribute to self-esteem are; reactions from others, comparing a person to others, a person's social roles, and a person's identification.
Interpersonal Self
The Interpersonal self, also known as the public self, refers to the part of the self that can be seen by other members of society. Because society has "unwritten rules", a person may find themselves in a specific role that adheres to these rules and expected behaviors.
Agent Self
The agent self is known as the executive function that allows for actions. This is how a person makes choices and maintains control in situations and actions. The agent self presides over everything that involves decision making, self-control, taking charge in situations, and actively responding.
George Herbert Mead & Charles Cooley
Symbolic interactionism stresses the 'social construction of an individual's sense of self' through two main methods: 'In part the self emerges through interaction with others....But the self is a product of social structure as well as of face-to-face interaction'. This aspect of social psychology emphasizes the theme of mutual constitution of the person and situation. Instead of focusing on the levels of class, race, and gender structure, this perspective seeks to understand the self in the way an individual lives their life on a moment-by-moment basis.
Self as an Emergent Phenomenon
In dynamical social psychology as proposed by Nowak et al., the self is rather an emergent property that arises as an experiential phenomenon from the interaction of psychological perceptions and experiences. This is also hinted at in dynamical evolutionary social psychology, where a set of decision rules generates complex behavior.
Memory and the Self
Martin A. Conway
Memory and the self are interconnected to the point that they can be defined as the Self-Memory System (SMS). The self is viewed as a combination of memories and self-images (working self). Conway proposes that a person's long-term memory and working self are dependent on each other. Our prior knowledge of our self puts constraints on what our working self is and the working self modifies the access to our long-term memory and what it consists of.
John Locke
One view of the Self that follows the thinking of John Locke sees it as a product of episodic memory. It has been suggested that transitory mental constructions within episodic memory form a self-memory system that grounds the goals of the working self, but research on those with amnesia finds that they have a coherent sense of self based upon preserved conceptual autobiographical knowledge and semantic facts, and thus on conceptual knowledge rather than episodic memory.
Both episodic and semantic memory systems have been proposed to generate a sense of self-identity: personal episodic memory enables the phenomenological continuity of identity, while personal semantic memory generates the narrative continuity of identity. "The nature of personal narratives depends on highly conceptual and 'story-like' information about one's life, which resides at the general event level of autobiographical memory and is thus unlikely to rely on more event-specific episodic systems."
See also
References
External links
Definitions of Various Self Constructs - Self-esteem, self-efficacy, self-confidence & self-concept.
Discussion of Self – Page of the Emotional Competency website.
Theory of Self - Proposed by an autistic to explain autism
Images of the Self
False self
Psychological
Ego psychology
Personal development | Psychology of self | [
"Biology"
] | 2,587 | [
"Personal development",
"Behavior",
"Human behavior"
] |
364,084 | https://en.wikipedia.org/wiki/Substance%20P | Substance P (SP) is an undecapeptide (a peptide composed of a chain of 11 amino acid residues) and a type of neuropeptide, belonging to the tachykinin family of neuropeptides. It acts as a neurotransmitter and a neuromodulator. Substance P and the closely related neurokinin A (NKA) are produced from a polyprotein precursor after alternative splicing of the preprotachykinin A gene. The deduced amino acid sequence of substance P is as follows:
Arg Pro Lys Pro Gln Gln Phe Phe Gly Leu Met (RPKPQQFFGLM)
with an amide group at the C-terminus.
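For illustration, the quoted one-letter sequence follows from the three-letter residue names via the standard IUPAC code; a minimal Python sketch:

```python
# Standard IUPAC three-letter to one-letter amino acid codes, restricted to
# the residues that occur in substance P.

THREE_TO_ONE = {
    "Arg": "R", "Pro": "P", "Lys": "K", "Gln": "Q",
    "Phe": "F", "Gly": "G", "Leu": "L", "Met": "M",
}

residues = "Arg Pro Lys Pro Gln Gln Phe Phe Gly Leu Met".split()
print("".join(THREE_TO_ONE[r] for r in residues))  # -> RPKPQQFFGLM
```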
Substance P is released from the terminals of specific sensory nerves. It is found in the brain and spinal cord and is associated with inflammatory processes and pain.
Discovery
The original discovery of Substance P (SP) was in 1931 by Ulf von Euler and John H. Gaddum, as a tissue extract that caused intestinal contraction in vitro. Its tissue distribution and biologic actions were further investigated over the following decades. The eleven-amino-acid structure of the peptide was determined by Chang et al. in 1971.
In 1983, Neurokinin A (previously known as substance K or neuromedin L) was isolated from porcine spinal cord and was also found to stimulate intestinal contraction.
Receptor
The endogenous receptor for substance P is neurokinin 1 receptor (NK1-receptor, NK1R). It belongs to the tachykinin receptor sub-family of GPCRs. Other neurokinin subtypes and neurokinin receptors that interact with SP have been reported as well. Amino acid residues that are responsible for the binding of SP and its antagonists are present in the extracellular loops and transmembrane regions of NK-1. Binding of SP to NK-1R results in internalization by the clathrin-dependent mechanism to the acidified endosomes where the complex disassociates. Subsequently, SP is degraded and NK-1R is re-expressed on the cell surface.
Substance P and the NK1-receptor are widely distributed in the brain and are found in brain regions that are specific to regulating emotion (hypothalamus, amygdala, and the periaqueductal gray). They are found in close association with serotonin (5-HT) and neurons containing norepinephrine that are targeted by the currently used antidepressant drugs. The SP receptor promoter contains regions that are sensitive to cAMP, AP-1, AP-4, CEBPB, and epidermal growth factor. Because these regions are related to complexed signal transduction pathways mediated by cytokines, it has been proposed that cytokines and neurotropic factors can induce NK-1. Also, SP can induce the cytokines that are capable of inducing NK-1 transcription factors.
Function
Overview
Substance P ("P" standing for "Preparation" or "Powder") is a neuropeptide – but only nominally so, as it is ubiquitous. Its receptor – the neurokinin type 1 – is distributed over cytoplasmic membranes of many cell types (neurons, glia, endothelia of capillaries and lymphatics, fibroblasts, stem cells, white blood cells) in many tissues and organs. SP amplifies or excites most cellular processes.
Substance P is a key first responder to most noxious/extreme stimuli (stressors), i.e., those with a potential to compromise an organism's biological integrity. SP is thus regarded as an immediate defense, stress, repair, survival system. The molecule, which is rapidly inactivated (or at times further activated by peptidases) is rapidly released – repetitively and chronically, as warranted, in the presence of a stressor. Unique among biological processes, SP release (and expression of its NK1 Receptor (through autocrine, paracrine, and endocrine-like processes)) may not naturally subside in diseases marked by chronic inflammation (including cancer). The SP or its NK1R, as well as similar neuropeptides, appear to be vital targets capable of satisfying many unmet medical needs. The failure of clinical proof of concept studies, designed to confirm various preclinical predictions of efficacy, is currently a source of frustration and confusion among biomedical researchers.
Vasodilation
Substance P is a potent vasodilator. Substance P-induced vasodilation is dependent on nitric oxide release. Substance P is involved in the axon reflex-mediated vasodilation to local heating and wheal and flare reaction. It has been shown that vasodilation to substance P is dependent on the NK1 receptor located on the endothelium. In contrast to other neuropeptides studied in human skin, substance P-induced vasodilation has been found to decline during continuous infusion. This possibly suggests an internalization of neurokinin-1 (NK1). As is typical with many vasodilators, it also has bronchoconstrictive properties, administered through the non-adrenergic, non-cholinergic nervous system (branch of the vagal system).
Inflammation
SP initiates expression of almost all known immunological chemical messengers (cytokines). Also, most of the cytokines, in turn, induce SP and the NK1 receptor. SP is particularly excitatory to cell growth and multiplication, via usual, as well as oncogenic drivers. SP is a trigger for nausea and emesis. Substance P and other sensory neuropeptides can be released from the peripheral terminals of sensory nerve fibers in the skin, muscle, and joints. It is proposed that this release is involved in neurogenic inflammation, which is a local inflammatory response to certain types of infection or injury.
Pain
Preclinical data support the notion that Substance P is an important element in pain perception. The sensory function of substance P is thought to be related to the transmission of pain information into the central nervous system. Substance P coexists with the excitatory neurotransmitter glutamate in primary afferents that respond to painful stimulation. Substance P and other sensory neuropeptides can be released from the peripheral terminals of sensory nerve fibers in the skin, muscle, and joints. It is proposed that this release is involved in neurogenic inflammation, which is a local inflammatory response to certain types of infection or injury. Unfortunately, the reasons why NK1 receptor antagonists have failed as efficacious analgesics in well-conducted clinical proof of concept studies have not yet been persuasively elucidated.
Mood, anxiety, learning
Substance P has been associated with the regulation of mood disorders, anxiety, stress, reinforcement, neurogenesis, synaptic growth and dendritic arborisation, respiratory rhythm, neurotoxicity, pain, and nociception. In 2014, it was found that substance P played a role in male fruit fly aggression. Recently, it has been shown that substance P may play a critical role in long-term potentiation of aversive stimuli.
Vomiting
The vomiting center in the medulla, called the area postrema, contains high concentrations of substance P and its receptor, in addition to other neurotransmitters such as choline, histamine, dopamine, serotonin, and endogenous opioids. Their activation stimulates the vomiting reflex. Different emetic pathways exist, and substance P/NK1R appears to be within the final common pathway to regulate vomiting.
Cell growth, proliferation, angiogenesis, and migration
The above processes are part and parcel to tissue integrity and repair. Substance P has been known to stimulate cell growth in normal and cancer cell line cultures, and it was shown that substance P could promote wound healing of non-healing ulcers in humans. SP and its induced cytokines promote multiplication of cells required for repair or replacement, growth of new blood vessels, and "leg-like pods" on cells (including cancer cells) bestowing upon them mobility, and metastasis. It has been suggested that cancer exploits the SP-NK1R to progress and metastasize, and that NK1RAs may be useful in the treatment of several cancer types.
Clinical significance
Quantification in disease
Elevation of serum, plasma, or tissue SP and/or its receptor (NK1R) has been associated with many diseases: sickle cell crisis; inflammatory bowel disease; major depression and related disorders; fibromyalgia; rheumatological; and infections such as HIV/AIDS and respiratory syncytial virus, as well as in cancer.
When assayed in the human, the observed variability of the SP concentrations are large, and in some cases the assay methodology is questionable. SP concentrations cannot yet be used to diagnose disease clinically or gauge disease severity. It is not yet known whether changes in concentration of SP or density of its receptors is the cause of any given disease, or an effect.
Blockade for diseases with a chronic immunological component
As increasingly documented, the SP-NK1R system induces or modulates many aspects of the immune response, including WBC production and activation, and cytokine expression. Reciprocally, cytokines may induce expression of SP and its NK1R. In this sense, for diseases in which a pro-inflammatory component has been identified or strongly suspected, and for which current treatments are absent or in need of improvement, abrogation of the SP-NK1 system continues to receive focus as a treatment strategy. Currently, the only completely developed method available in that regard is antagonism (blockade, inhibition) of the SP-preferring receptor, i.e., by drugs known as neurokinin type 1 antagonists (also termed SP antagonists or tachykinin antagonists). One such drug is aprepitant, used to prevent the nausea and vomiting that accompany chemotherapy, typically for cancer.
With the exception of chemotherapy-induced nausea and vomiting, the patho-physiological basis of many of the disease groups listed below, for which NK1RAs have been studied as a therapeutic intervention, are to varying extents hypothesized to be initiated or advanced by a chronic non-homeostatic inflammatory response.
Dermatological disorders: eczema/psoriasis, chronic pruritus
High levels of BDNF and substance P have been found associated with increased itching in eczema.
Infections: HIV-AIDS, measles, RSV, others
The role of SP in HIV-AIDS has been well-documented. Doses of aprepitant greater than those tested to date are required for demonstration of full efficacy. Respiratory syncytial and related viruses appear to upregulate SP receptors, and rat studies suggest that NK1RAs may be useful in treating or limiting long term sequelae from such infections.
Entamoeba histolytica is a unicellular parasitic protozoan that infects the lower gastrointestinal tract of humans. The symptoms of infection are diarrhea, constipation, and abdominal pain. This protozoan was found to secrete serotonin as well as substance P and neurotensin.
Inflammatory bowel disease (IBD)/cystitis
Despite strong preclinical rationale, efforts to demonstrate efficacy of SP antagonists in inflammatory disease have been unproductive. A study in women with IBS confirmed that an NK1R antagonist was anxiolytic.
Chemotherapy induced nausea and vomiting
In line with its role as a first line defense system, SP is released when toxicants or poisons come into contact with a range of receptors on cellular elements in the chemoreceptor trigger zone, located in the floor of the fourth ventricle of the brain (area postrema). Presumably, SP is released in or around the nucleus of the solitary tract upon integrated activity of dopamine, serotonin, opioid, and/or acetylcholine receptor signaling. NK1Rs are stimulated. In turn, a fairly complex reflex is triggered involving cranial nerves responsible for respiration, retroperistalsis, and general autonomic discharge. The actions of aprepitant are said to be entirely central, thus requiring passage of the drug into the central nervous system. However, given that NK1Rs are unprotected by a blood brain barrier in the area postrema just adjacent to neuronal structures in the medulla, and the activity of sendide (the peptide based NK1RA) against cisplatin-induced emesis in the ferret, it is likely that some peripheral exposure contributes to antiemetic effects, even if through vagal terminals in the clinical setting.
Radiotherapy
A recent study has highlighted the potential of radiolabeled substance P as a radiopharmaceutical in targeting Glioma. The scientists were able to radiolabel Substance P with radioactive isotopes Technetium-99m (99mTc) and Lutetium-177 (177Lu). These radiolabeled compounds are designed to bind specifically to NK-1 receptors on glioma cells, allowing for both imaging (via 99mTc) and therapeutic (via 177Lu) applications. The study highlights the promising role of NK-1 receptor-targeted strategies in improving glioma diagnosis and treatment through receptor-specific delivery of radioisotopes.
Other findings
Denervation supersensitivity
When the innervation to substance P nerve terminals is lost, post-synaptic cells compensate for the loss of adequate neurotransmitter by increasing the expression of post-synaptic receptors. This, ultimately, leads to a condition known as denervation supersensitivity as the post-synaptic nerves will become hypersensitive to any release of substance P into the synaptic cleft.
Aggression
Tachykinin / Substance P plays an evolutionarily conserved role in inducing aggressive behaviors. In rodents and cats, activation of hypothalamic neurons which release Substance P induces aggressive behaviors (defensive biting and predatory attack). Similarly, in fruit flies, tachykinin-releasing neurons have been implicated in aggressive behaviors (lunging). In this context, male-specific tachykinin neurons control lunging behaviors that can be modulated by the amount of tachykinin release.
References
External links
Fight Club for Flies video, Science Take, New York Times, February 3, 2014
Neuropeptides
Neurotransmitters
Hendecapeptides | Substance P | [
"Chemistry"
] | 3,096 | [
"Neurochemistry",
"Neurotransmitters"
] |
364,090 | https://en.wikipedia.org/wiki/Johann%20Friedrich%20Blumenbach | Johann Friedrich Blumenbach (11 May 1752 – 22 January 1840) was a German physician, naturalist, physiologist and anthropologist. He is considered to be a main founder of zoology and anthropology as comparative, scientific disciplines. He has been called the "founder of racial classifications".
He was one of the first to explore the study of the human being as an aspect of natural history. His teachings in comparative anatomy were applied to his classification of human races, of which he claimed there were five: Caucasian, Mongolian, Malayan, Ethiopian, and American. He was a member of what modern historians call the Göttingen school of history.
He is considered a pivotal figure in the development of physical anthropology. Blumenbach's peers considered him one of the great theorists of his day, and he was a mentor or influence on many of the next generation of German biologists, including Alexander von Humboldt.
Early life and education
Blumenbach was born at his family house in Gotha. His father was Heinrich Blumenbach, a local school headmaster; his mother was Charlotte Eleonore Hedwig Buddeus. He was born into a well-connected family of academics.
Blumenbach was educated at the Illustrious Gymnasium in Gotha before studying medicine, first at Jena and then at Göttingen. He was recognized as a prodigy by the age of sixteen, in 1768. He graduated from the latter in 1775 with his M.D. thesis De generis humani varietate nativa (On the Natural Variety of Mankind, University of Göttingen, first published in 1775, then re-issued with changes to the title page in 1776). It is considered one of the most influential works in the development of subsequent human race concepts. It contained the germ of the craniological research to which so many of his subsequent inquiries were directed.
Career
Blumenbach was appointed extraordinary professor of medicine and inspector of the museum of natural history in Göttingen in 1776 and ordinary professor in 1778. His contributions soon began to enrich the pages of the Medicinische Bibliothek, of which he was editor from 1780 to 1794, with various contributions on medicine, physiology, and anatomy. In physiology, he was of the school of Albrecht von Haller, and was in the habit of illustrating his theory by a careful comparison of the animal functions of man with those of other animals. Following Georges Cuvier's identification, Blumenbach gave the woolly mammoth its first scientific name, Elephas primigenius (first-born elephant), in 1799.
His reputation was much extended by the publication of his Institutiones Physiologicae (1787), a condensed, well-arranged view of the animal functions, expounded without discussion of minute anatomical details. Between its first publication and 1821, it went through many editions in Germany, where it was the general textbook of the science of physiology. It was translated into English in America by Charles Caldwell (Philadelphia 1798), and in London by John Elliotson (1807).
He was perhaps still more extensively known by his Handbuch der vergleichenden Anatomie ("Handbook of comparative anatomy"), which passed through numerous German editions from its appearance in 1805 to 1824. It was translated into English in 1809 by the surgeon William Lawrence, and again, with improvements and additions, by William Coulson in 1827. This manual, though slighter than the subsequent works of Cuvier, Carus, and others, and not to be compared with such later expositions as that of Gegenbaur, was long esteemed for the accuracy of the author's own observations, and his just appreciation of the labors of his predecessors.
Although the greatest part of Blumenbach's life was passed at Göttingen, in 1789 he visited Switzerland, and gave a curious medical topography of that country in the Bibliothek. He was in England in 1788 and 1792. He was elected a Foreign Member of the Royal Society of London in 1793 and a Foreign Honorary Member of the American Academy of Arts and Sciences in 1794. In 1798, he was elected as a member to the American Philosophical Society. He became a correspondent, living abroad, of the Royal Institute of the Netherlands in 1808. This was changed to associated member in 1827. He was then appointed secretary to the Royal Society of Sciences in 1812, elected a foreign member of the Royal Swedish Academy of Sciences in 1813, appointed physician to the royal family in Hanover by the prince regent in 1816, made a knight-commander of the Guelphic Order in 1821, and elected a member of the French Academy of Sciences in 1831. In celebration of his doctoral jubilee (1825), traveling scholarships were founded to assist talented young physicians and naturalists. He retired in 1835. Blumenbach died in 1840 in Göttingen, where he is buried in the Albani cemetery.
Racial anthropology
Blumenbach explored the biodiversity of humans mainly by comparing skull anatomy and skin color. His work included a description of sixty human crania (skulls) published originally in fascicules as Decas craniorum (Göttingen, 1790–1828). This was a founding work for other scientists in the field of craniometry. He established a five-part naming system in 1795 to describe what he called generis humani varietates quinae principes, species vero unica (five principal varieties of humankind, but one species). In his view, humans could be divided into varieties (only in his later work he adopted the term "races", which had been introduced by others) but he was aware that a clear separation was difficult:
Blumenbach's classification of the single human species into five varieties (later called "races") (1793/1795):
the Caucasian or white race. Blumenbach was the first to use this term for Europeans, and he also included Middle Easterners and South Asians in the same category.
the Mongolian or yellow race, including all East Asians.
the Malayan or brown race, including Southeast Asians and Pacific Islanders.
the Ethiopian or black race, including all sub-Saharan Africans.
the American or red race, including all Native Americans.
Blumenbach assumed that all morphological differences between the varieties were induced by the climate and the way of living and he emphasized that the differences in morphology were so small, gradual and transiently connected that it was not possible to separate these varieties clearly. He also noted that skin color was unsuitable for distinguishing varieties. Although Blumenbach did not propose any hierarchy among the five varieties, he placed the Caucasian form in the center of his description as being the most "primitive" or "primeval" one from which the other forms "degenerated". In the 18th century, however, these terms did not have the negative connotations they possess today. At the time, "primitive" or "primeval" described the ancestral form, while "degeneration" was understood to be the process of change leading to a variety adapted to a new environment by being exposed to a different climate and diet. Hence, he argued that physical characteristics like skin color, cranial profile, etc., depended on geography, diet, and mannerism. Further anatomical study led him to the conclusion that "individual Africans differ as much, or even more, from other Africans as from Europeans".
Like other monogenists such as Georges-Louis Leclerc, Comte de Buffon, Blumenbach held to the "degenerative hypothesis" of racial origins. Blumenbach claimed that Adam and Eve were Caucasian inhabitants of Asia, and that other races came about by degeneration from environmental factors such as the sun and poor diet. Thus, he claimed, Negroid pigmentation arose because of the result of the heat of the tropical sun, while the cold wind caused the tawny colour of the Eskimos, and the Chinese were fair skinned compared to the other Asian stocks because they kept mostly in towns protected from environmental factors. He believed that the degeneration could be reversed in a proper environmental control and that all contemporary forms of man could revert to the original Caucasian race.
Moreover, he concluded that Africans were not inferior to the rest of mankind "concerning healthy faculties of understanding, excellent natural talents and mental capacities":
He did not consider his "degenerative hypothesis" as racist and sharply criticized Christoph Meiners, an early practitioner of scientific racialism, as well as Samuel Thomas von Sömmerring, who concluded from autopsies that Africans were an inferior race. Blumenbach wrote three other essays stating non-white peoples were capable of excelling in arts and sciences in reaction against racialists of his time. At his time, Blumenbach was perceived as anti-racist and he strongly opposed the practice of slavery and the belief of the inherent savagery of the coloured races. Alexander von Humboldt wrote on his and Blumenbach's views:
However, selected parts of his views were later used by others to encourage scientific racism.
Other natural studies
In his dissertation, Blumenbach mentioned the name Simia troglodytes in connection with a short description for the chimpanzee. This dissertation was printed and appeared in September 1775, but only for internal use in the University of Göttingen and not for providing a public record. The public print of his dissertation appeared in 1776. Blumenbach knew that Carl Linnaeus had already established a name Homo troglodytes for a badly known primate. In 1779, he discussed this Linnean name and concluded correctly that Linnaeus had been dealing with two species, a human and an orangutan, neither of which was a chimpanzee, and that by consequence the name Homo troglodytes could not be used. Blumenbach was one of the first scientists to understand the identities of the different species of primates, which were (excluding humans) orangutans and chimpanzees. (Gorillas were not known to Europeans at this time). In Opinion 1368, the International Commission on Zoological Nomenclature (ICZN) decided in 1985 that Blumenbach's view should be followed, and that his Simia troglodytes as published by Blumenbach in 1779 shall be the type species of the genus Pan and, since it was the oldest available name for the chimpanzee, be used for this species. However, the commission did not know that Blumenbach had already mentioned this name in his dissertation. Following the rules of the ICZN Code the scientific name of one of the most well-known African animals, currently known as Pan troglodytes, must carry Blumenbach's name combined with the date 1776.
Blumenbach shortly afterward wrote a manual of natural history entitled Handbuch der Naturgeschichte, which went through 12 editions and several translations. It was published first in Göttingen by J. C. Dieterich in 1779/1780. He was also one of the first scientists to study the anatomy of the platypus, assigning the scientific name Ornithorhynchus paradoxus to the animal, being unaware George Shaw had already given it the name Platypus anatinus. However, Platypus had already been shown to be in use as the scientific name for a genus of ambrosia beetles, so Blumenbach's name for the genus was used.
Bildungstrieb
Blumenbach made many contributions to the scientific debates of the last half of the 18th century regarding evolution and creation. His central contribution was in the conception of a vis formativa or Bildungstrieb, an inborn force within an organism that led it to create, maintain, and repair its shape.
Background
Enlightenment science and philosophy essentially held a static view of nature and man, but vital nature continued to interrupt this view, and the issue of life, the creation of life and its varieties, increasingly occupied attention and "starting in the 1740s the concept of vital power reentered the scene of generation ... there must be some 'productive power' in nature that enabled unorganized material to generate new living forms."
Georges-Louis Leclerc, Comte de Buffon wrote an influential work in 1749, Natural History, that revived interest in vital nature. Buffon held that there were certain penetrating powers which organised the organic particles that made up the living organism. Erasmus Darwin translated Buffon's idea of organic particles into "molecules with formative propensities" and in Germany Buffon's idea of an internal order, moule interieur arising out of the action of the penetrating powers was translated into German as Kraft (power).
The German term for vital power or living power, Lebenskraft, as distinct from chemical or physical forces, first appeared in Medicus's treatise on the Lebenskraft (1774). Scientists were now forced to consider hidden and mysterious powers of and in living matter that resisted physical laws: warm-blooded animals maintaining a consistent temperature despite changing outside temperatures, for example.
In 1759, Caspar Friedrich Wolff, a German embryologist, provided evidence for the ancient idea of epigenesis, according to which organized life develops out of unformed substance (a chick forming from unstructured matter rather than from a preformed miniature), and his dispute with Albrecht von Haller brought the issue of life to the forefront of natural science and philosophy. Wolff identified an "essential power" (essentliche Kraft, or vis essentialis) that allowed structure to be a result of power, "the very power through which, in the vegetable body, all those things which we describe as life are effected."
Blumenbach's Bildungstrieb
While Wolff was not concerned to name this vital organising, reproducing power, in 1780 Blumenbach posited a formative drive (nisus formativus or Bildungstrieb) responsible for biological "procreation, nourishment, and reproduction", as well as self-development and self-perfection on a cultural level.
Blumenbach held that all living organisms "from man down to maggots, and from the cedar to common mould or mucor", possess an inherent "effort or tendency which, while life continues, is active and operative; in the first instance to attain the definite form of the species, then to preserve it entire, and, when it is infringed upon, so far as this is possible, to restore it." This power of vitality is "not referable to any qualities merely physical, chemical, or mechanical."
Blumenbach compared the uncertainty about the origin and ultimate nature of the formative drive to similar uncertainties about gravitational attraction: "just in the same way as we use the name of attraction or gravity to denote certain forces, the causes of which however still remain hid, as they say, in Cimmerian darkness, the formative force (nisus formativus) can explain the generation of animals."
At the same time, befitting the central idea of the science and medicine of dynamic polarity, it was also the physiological functional identity of what theorists of society or mind called "aspiration". Blumenbach's Bildungstrieb found quick passage into evolutionary theorizing of the decade following its formulation and in the thinking of the German natural philosophers.
One of Blumenbach's contemporaries, Samuel Hahnemann, undertook to study in detail how this generative, reproductive and creative power, which he termed the Erzeugungskraft of the Lebenskraft of living power of the organism, could be negatively affected by inimical agents to engender disease.
Blumenbach and Kant on Bildungstrieb
Kant is said by several modern authors to have relied on Blumenbach's biological concept of formative power in developing his idea of organic purpose. Kant wrote to Blumenbach in 1790 to praise his concept of the formative force (Bildungstrieb). However, whereas Kant had a heuristic concept in mind, to explain mechanical causes, Blumenbach conceived of a cause fully resident in nature. From this he would argue that the Bildungstrieb was central to the creation of new species. Though Blumenbach left no overt indications of sources for his theory of biological revolution, his ideas harmonize with those of Charles Bonnet and especially with those of his contemporary Johann Gottfried Herder (1744–1803), and it was Herder whose ideas were influenced by Blumenbach. Blumenbach continued to refine the concept in his De nisu formativo et generationis negotio ('On the Formative Drive and the Operation of Generation', 1787) and in the second edition (1788) of the Handbuch der Naturgeschichte: 'it is a proper force (eigentliche Kraft), whose undeniable existence and extensive effects are apparent throughout the whole of nature and revealed by experience'. He consolidated these in the second edition of Über den Bildungstrieb.
Blumenbach had initially been an advocate of Haller's view, in contrast to that of Wolff, that the essential elements of the embryo were already in the egg; he later sided with Wolff. Blumenbach provided evidence for the actual existence of this formative force, to distinguish it from other, merely nominal terms.
The way in which the Bildungstrieb differed, perhaps, from other such forces was in its comprehensive architectonic character: it directed the formation of anatomical structures and the operations of physiological processes of the organism so that various parts would come into existence and function interactively to achieve the ends of the species.
Influence on German biology
Blumenbach was regarded as a leading light of German science by his contemporaries. Kant and Friedrich Schelling both called him "one of the most profound biological theorists of the modern era." In the words of science historian Peter Watson, "roughly half the German biologists during the early nineteenth century studied under him or were inspired by him: Alexander von Humboldt, Carl Friedrich Kielmeyer, Gottfried Reinhold Treviranus, Heinrich Friedrich Link, Johann Friedrich Meckel, Johannes Illiger, and Rudolph Wagner."
See also
Craniometry
Scientific racism
Notes
References
Klatt N. (2008). "Klytia und die "schöne Georgianerin" – Eine Anmerkung zu Blumenbachs Rassentypologie". Kleine Beiträge zur Blumenbach-Forschung 1: 70–101. urn:nbn:de:101:1-2008112813
External links
Chemistry Tree: Johann Friedrich Blumenbach Details
Blumenbachiana, Göttingen State and University Library Digitised works
Johann Friedrich Blumenbach – Online, project of the Göttingen Academy of Sciences and Humanities, providing a complete bibliography of Blumenbach's works (with digitised versions) as well as biographical information and sources on his life and career
1752 births
1840 deaths
18th-century German naturalists
18th-century German writers
18th-century German male writers
19th-century German male writers
19th-century German writers
Fellows of the American Academy of Arts and Sciences
Foreign members of the Royal Society
German anthropologists
German ethnologists
History of psychiatry
Honorary members of the Saint Petersburg Academy of Sciences
Members of the Royal Netherlands Academy of Arts and Sciences
Members of the Royal Swedish Academy of Sciences
People from Gotha (town)
People from Saxe-Gotha-Altenburg
German physiologists
Proto-evolutionary biologists
People involved in race and intelligence controversies
University of Jena alumni
University of Göttingen alumni
Academic staff of the University of Göttingen
Members of the Göttingen Academy of Sciences and Humanities | Johann Friedrich Blumenbach | [
"Biology"
] | 4,029 | [
"Non-Darwinian evolution",
"Biology theories",
"Proto-evolutionary biologists"
] |
364,099 | https://en.wikipedia.org/wiki/Gemini%206A | Gemini 6A (officially Gemini VI-A) was a 1965 crewed United States spaceflight in NASA's Gemini program.
The mission, flown by Wally Schirra and Thomas P. Stafford, achieved the first crewed rendezvous with another spacecraft, its sister craft Gemini 7. Although the Soviet Union had twice previously launched simultaneous pairs of Vostok spacecraft, these established radio contact with each other but had no ability to adjust their orbits in order to rendezvous, and came no closer than several kilometers to each other, while the Gemini 6 and 7 spacecraft came as close as one foot (30 cm) and could have docked had they been so equipped.
Gemini 6A was the fifth crewed Gemini flight, the 13th crewed American flight, and the 21st crewed spaceflight of all time (including two X-15 flights over ).
Crew
Backup crew
Support crew
Charles A. Bassett II (Houston CAPCOM)
Alan L. Bean (Cape CAPCOM)
Eugene A. Cernan (Houston CAPCOM)
Elliot M. See Jr. (Houston CAPCOM)
Mission parameters
Mass:
Perigee:
Apogee:
Inclination: 28.97°
Period: 88.7 min
Stationkeeping with GT-7
Start: December 15, 1965 19:33 UTC
End: December 16, 1965 00:52 UTC
Duration: 5 hours, 19 minutes
Objective
The original Gemini 6 mission, scheduled for launch on October 25, 1965, at 12:41 pm EDT, had a planned mission duration of 46 hours 47 minutes, completing a total of 29 orbits. It was to land in the western Atlantic Ocean south of Bermuda.
The mission was to include four dockings with the Agena Target Vehicle. The first docking was scheduled for five hours and forty minutes into the mission. The second was scheduled for seven hours and forty-five minutes, the third at nine hours and forty minutes, and the fourth and final docking at ten hours and five minutes into the mission. The final undocking would take place at 18 hours and 20 minutes into the mission. At 23 hours and 55 minutes into the mission, while the spacecraft passed over White Sands, New Mexico, the crew was to attempt to observe a laser beam originating from the ground. The retrorockets were scheduled to be fired at 46 hours and 10 minutes into the mission over the Pacific Ocean on the 29th orbit.
Original mission plans also included the first live television coverage of the recovery of a U.S. spacecraft at sea from the recovery ship, the U.S. aircraft carrier Wasp. The Wasp was fitted with ground station equipment by ITT to relay live television, via the Intelsat I (nicknamed the "Early Bird") satellite.
Original mission canceled
On October 25, 1965, Schirra and Stafford boarded their Gemini 6 craft to prepare for launch. Fifteen minutes later, the uncrewed Atlas-Agena target vehicle was launched. After a successful burn of the Atlas booster, the Agena's engine fired to separate it from the Atlas. But immediately after the Agena's engine fired at the six-minute mark in the flight, telemetry was lost. A catastrophic failure apparently caused the vehicle to explode, as Range Safety was tracking multiple pieces of debris falling into the Atlantic Ocean. After 50 minutes, the Gemini launch was canceled.
Revised mission
After reviewing the situation, NASA decided to launch an alternate Gemini 6A mission, eight days after the launch of Gemini 7, which was scheduled as a 14-day long-duration mission in December. Gemini 6A would perform the first rendezvous of two spacecraft in orbit, using Gemini 7 as the target, though they would not dock. The crews also discussed the possibility of Stafford performing an EVA from 6A to 7, swapping places with Gemini 7 pilot Jim Lovell, but the commander of Gemini 7, Frank Borman, objected, pointing out that it would require Lovell to wear an uncomfortable EVA suit on a long-duration mission.
Flight
First launch attempt
The first attempt to launch the 6A mission (second attempt for Gemini spacecraft No. 6) was on December 12, 1965, at 9:54 a.m. EST. All went well right up to ignition; the engines ignited, but after about 1.5 seconds they abruptly shut down. Since the lift-off clock had started in the spacecraft, mission rules dictated that Wally Schirra, as the commander, had to immediately pull the D-ring between his knees and activate the ejection seats, carrying the astronauts away from the disaster that would be the result of a fully fueled Titan II falling back onto LC-19. However, Schirra did not feel any movement and knew that the booster had not lifted, so he decided not to abort. His quick thinking probably saved the mission as the reliability of the Gemini ejector seats was questionable; the astronauts could have been badly injured from high g-forces as the seats had to launch them at least 800 feet, which was deemed a safe distance from an exploding Titan II.
In addition, the cabin interior had been soaking in pure oxygen for hours. Tom Stafford, in a NASA oral history in 1997, later recalled:
Even if the astronauts had not been injured or killed, ejection would ruin the spacecraft and delay the mission for months.
After the Stage I engines ignited and shut down, fuel (UDMH) was leaking out of the PSV drain valve. The fuel had been ignited by the engine start, and the resulting fire was discovered when the pad crew inspected the engine compartment. Water spray was initiated and a cap was installed on the drain line. About 60 minutes after the aborted launch, the booster and spacecraft had been made safe and the service tower was raised up to it. After the propellants were removed from the Titan II, the booster was checked out, and technicians quickly uncovered one culprit: an umbilical plug that had dropped out of the base of the booster prematurely. This plug sent a lift-off signal to the spacecraft. Testing revealed that some plugs came out more easily than others, so they were replaced by different ones that would stay in place properly.
However, the electrical plug turned out to not be the only problem with the booster. Examination of telemetry also showed that the Titan actually began experiencing thrust decay before the plug dropped out. Engine No. 1 was unaffected and nearly reached 100% thrust at shutdown, while Engine No. 2 never transitioned to in-flight performance levels. Engineers spent all night combing through the first stage, but failed to find any cause for the thrust decay. Eventually however, one technician identified the problem, which was a plastic dust cover inside the gas generator that had been carelessly left inside when the booster was assembled months earlier at the Martin-Marietta plant, blocking the flow of oxidizer. The cover was removed and the Titan II cleared for another launch attempt.
Had the inadvertent electrical disconnect not occurred, the abort sensing system would have sent a shutoff command to the Titan at T+2.2 seconds due to the loss of Engine No. 2 chamber pressure. Since launcher release and liftoff would take place at T+3.2 seconds, a pad fallback still would not have occurred in this scenario and the astronauts would have been safe.
Second attempt and rendezvous
The Titan's batteries were replaced, and the fuel prevalves, which had opened, were removed; it was decided to launch without them. The second attempt to launch the 6A mission (third attempt for Gemini spacecraft No. 6) was successful on December 15 at 8:37:26 a.m. EST. All went well through launch and ascent; first stage cutoff occurred at T+160 seconds and second stage cutoff at T+341 seconds. Spacecraft separation occurred at T+361 seconds and the crew entered orbit.
The plan called for the rendezvous to take place on the fourth orbit of Gemini 6. Their first burn came 94 minutes after launch, when they increased their speed by 5 meters (16 feet) per second. Because their orbit was lower, they were gaining on Gemini 7 and were only 730 miles (1,175 kilometers) behind. The next burn came at two hours and eighteen minutes, when Gemini 6A made a phase adjustment to put them on the same orbital inclination as Gemini 7. They now trailed by only .
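The reason a lower orbit lets the chaser catch up is Kepler's third law: a smaller orbit has a shorter period. The Python sketch below illustrates the effect; the altitudes are invented round numbers for illustration, not the actual Gemini 6A and 7 values.

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000        # mean Earth radius, m

def orbital_period(altitude_m: float) -> float:
    """Period of a circular orbit at the given altitude (Kepler's third law)."""
    a = R_EARTH + altitude_m  # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH)

# Hypothetical altitudes: a chaser in a lower orbit and a higher target.
t_chaser = orbital_period(270_000)
t_target = orbital_period(300_000)

# The lower orbit has the shorter period, so the chaser gains this much
# phase on the target with every revolution.
print(f"chaser period: {t_chaser:.1f} s, target period: {t_target:.1f} s")
print(f"phase gained per orbit: {t_target - t_chaser:.1f} s")
```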
The radar on Gemini 6A first made contact with Gemini 7 at three hours and fifteen minutes, when they were away. A third burn put them into a new orbit. As they slowly gained, Schirra put Gemini 6A's computer in charge of the rendezvous. At five hours and four minutes, he saw a bright star that he thought was Sirius, but this was in fact Gemini 7.
After several more burns, the two spacecraft were only 130 feet (40 meters) apart. The burns had only used 112 lbs. (51 kilograms) of propellant on Gemini 6A, leaving plenty for some fly-arounds. During the next 270 minutes, the crews moved as close as one foot (30 centimeters), talking over the radio. At one stage the spacecraft were stationkeeping so well that neither crew had to make any burns for 20 minutes.
Schirra said that because there is no turbulence in space, "I was amazed at my ability to maneuver. I did a fly-around inspection of Gemini 7, literally flying rings around it, and I could move to within inches of it in perfect confidence". As the crew sleep periods approached, Gemini 6A made a separation burn and slowly drifted more than from Gemini 7. This ensured that there would not be any accidental collisions while the astronauts slept.
A Christmas surprise
The next day, before reentry, the crew of Gemini 6A had a surprise:
At that point, the sound of "Jingle Bells" was heard played on an eight-note Hohner "Little Lady" harmonica and a handful of small bells. The Smithsonian Institution claims these were the first musical instruments played in space and keeps the instruments on display.
Reentry
Gemini 6A fired its retro-rockets and landed within of the planned site in the Atlantic Ocean northeast of Turks and Caicos. It was the first recovery to be televised live, through a transportable satellite earth station developed by ITT on the deck of the recovery aircraft carrier USS Wasp.
The Gemini 7 and 6A missions were supported by the following U.S. Department of Defense resources: 10,125 personnel, 125 aircraft and 16 ships.
Insignia
Walter Schirra explained the patch in the book All We Did Was Fly to the Moon:
The original patch had called the flight GTA-6 (for Gemini-Titan-Agena) and showed the Gemini craft chasing an Agena. It was changed when the mission was altered to depict two Gemini spacecraft.
Spacecraft location
The spacecraft is currently on display at the Stafford Air & Space Museum in Weatherford, Oklahoma, having previously been displayed at the Oklahoma History Center in Oklahoma City. The spacecraft had previously been on display at the Omniplex Science Museum elsewhere in the city. It is on a long-term loan from the Smithsonian Institution.
Before being moved to Oklahoma, the spacecraft was displayed at the St. Louis Science Center in St. Louis, Missouri.
See also
Splashdown (spacecraft landing)
References
External links
NASA Gemini 6 press kit for cancelled mission - Oct 20, 1965
NASA Gemini 7/Gemini 6 press kit - Nov 29, 1965
Gemini 6 Mission Report (PDF) - October 1965 cancelled mission
Gemini 6/Agena target vehicle 5002 systems test evaluation (PDF) December 1965
Spaceflight Mission Patches
http://www.collectspace.com/news/news-092703a.html
Spacecraft launched in 1965
1965 in the United States
Project Gemini missions
Human spaceflights
Spacecraft launched by Titan rockets
Spacecraft which reentered in 1965
December 1965
Wally Schirra
Thomas P. Stafford
Music in space
Successful space missions | Gemini 6A | [
"Astronomy"
] | 2,415 | [
"Outer space",
"Music in space"
] |
364,143 | https://en.wikipedia.org/wiki/Time-based%20currency | In economics, a time-based currency is an alternative currency or exchange system where the unit of account is the person-hour or some other time unit. Some time-based currencies value everyone's contributions equally: one hour equals one service credit. In these systems, one person volunteers to work for an hour for another person; thus, they are credited with one hour, which they can redeem for an hour of service from another volunteer. Others use time units that might be fractions of an hour (e.g. minutes, ten minutes – 6 units/hour, or 15 minutes – 4 units/hour). While most time-based exchange systems are service exchanges in that most exchange involves the provision of services that can be measured in a time unit, it is also possible to exchange goods by 'pricing' them in terms of the average national hourly wage rate (e.g. if the average hourly rate is $20/hour, then a commodity valued at $20 in the national currency would be equivalent to 1 hour).
History
19th century
Time-based currency exchanges date back to the early 19th century.
The Cincinnati Time Store (1827-1830) was the first in a series of retail stores created by American individualist anarchist Josiah Warren to test his economic labor theory of value. The experimental store operated from May 18, 1827, until May 1830. The Cincinnati Time Store experiment in use of labor as a medium of exchange antedated similar European efforts by two decades.
The National Equitable Labour Exchange was founded by Robert Owen, a Welsh socialist and labor reformer in London, England, in 1832. It was established in Birmingham, England, before folding in 1834. It issued "Labour Notes" similar to banknotes, denominated in units of 1, 2, 5, 10, 20, 40, and 80 hours. John Gray, a socialist economist, worked with Owen and later with Ricardian Socialists and postulated a National Chamber of Commerce as a central bank issuing a labour currency.
In 1848, the socialist and first self-designated anarchist Pierre-Joseph Proudhon postulated a system of time chits.
Josiah Warren published a book describing labor notes in 1852.
In 1875, Karl Marx wrote of "Labor Certificates" (Arbeitszertifikaten) in his Critique of the Gotha Program of a "certificate from society that [the labourer] has furnished such and such an amount of labour", which can be used to draw "from the social stock of means of consumption as much as costs the same amount of labour."
20th century
Teruko Mizushima (1920-1996) was a Japanese housewife, author, inventor, social commentator, and activist credited with creating the world's first time bank in 1973.
Mizushima was born in 1920 in Osaka to a merchant household. She performed well in school and was given the opportunity to study overseas in the United States in 1939. Her stay was shortened from three years to one due to rising tensions between the US, Japan, and China. Mizushima opted to pursue a short-term diploma course in sewing.
After returning home, she married. Her first daughter was born at the outbreak of the Pacific War, and her husband was soon conscripted into the army.
Mizushima's sewing skills proved invaluable to her family during and after the war. While the Japanese population was suffering immense material shortages, Mizushima offered her sewing skills in exchange for fresh vegetables. It was during this time that she began to develop her ideas about economics and the relative value of labor.
In 1950, Mizushima submitted an essay to a newspaper contest as part of a national event titled “Women's Ideas for the Creation of a New Life.” Her essay received the Newspaper Companies’ Prize. While it has since been lost, the ideas in the essay attracted widespread press attention.
Mizushima soon became a social commentator, with her views being aired on the radio, in the newspapers, and on television. She frequently appeared on the NHK, the country's national broadcaster, and toured the country giving talks about her ideas.
In 1973 she started her group the Volunteer Labour Bank (later renamed the Volunteer Labour Network). By 1978, the bank had grown to include approximately 2,600 members. The membership included people of all ages, from teenagers to women in their seventies. The majority of members were housewives in their thirties and forties. Members were organized into over 160 local branches throughout the country, coordinated by the headquarters located on Mizushima’s estate.
By 1983, the network had over 3,800 members organized in 262 branches, including a branch in California.
The political activist and philosopher Cornelius Castoriadis, after criticizing the incoherency of capitalist, Leninist, and Trotskyist justifications of wage differentials in his 1949 Socialisme ou Barbarie text translated as "The Relations of Production in Russia" in the first volume of his Political and Social Writings, responded to the Hungarian Revolution of 1956 by advocating that workers "proclaim the abolition of work norms and instaurate full equality of wages and salaries" in his 1957 Socialisme ou Barbarie text translated as "On the Content of Socialism, II". He elaborated further on this advocacy of an "absolute equality of wages and incomes" in his 1974 text "Hierarchy of Salaries and Incomes" and in the "Today" section of "Done and To Be Done" (1989).
Edgar S. Cahn coined the term "Time Dollars" in Time Dollars: The New Currency That Enables Americans to Turn Their Hidden Resource-Time-Into Personal Security & Community Renewal, a book co-authored with Jonathan Rowe in 1992. He also went on to trademark the terms "TimeBank" and "Time Credit".
Timebanking is a community development tool and works by facilitating the exchange of skills and experience within a community. It aims to build the 'core economy' of family and community by valuing and rewarding the work done in it. The world's first timebank was started in Japan by Teruko Mizushima in 1973 with the idea that participants could earn time credits which they could spend any time during their lives. She based her bank on the simple concept that each hour of time given as services to others could earn reciprocal hours of services for the giver at some stage in the future, particularly in old age when they might need it most. In the 1940s, Mizushima had already foreseen the emerging problems of an ageing society such as seen today. In the 1990s the movement took off in the US, with Dr Edgar Cahn pioneering it there, and in the United Kingdom, with Martin Simon from Timebanking UK and David Boyle, who brought in the London-based New Economics Foundation (Nef).
Paul Glover created Ithaca Hours in 1991. Each HOUR was valued at one hour of basic labor or $10.00. Professionals were entitled to charge multiple HOURS per hour, but often reduced their rate in the spirit of equity. Millions of dollars' worth of HOURS were traded among thousands of residents and 500 businesses. Interest-free HOUR loans were made, and HOUR grants given to over 100 community organizations.
The first British time bank opened in 1998 in Stroud, and a national charity and membership organisation, Timebanking UK, started in 2002.
21st century
According to Edgar S. Cahn, timebanking had its roots in a time when "money for social programs [had] dried up" and no dominant approach to social service in the U.S. was coming up with creative ways to solve the problem. He would later write that "Americans face at least three interlocking sets of problems: growing inequality in access by those at the bottom to the most basic goods and services; increasing social problems stemming from the need to rebuild family, neighborhood and community; and a growing disillusion with public programs designed to address these problems" and that "the crisis in support for efforts to address social problems stems directly from the failure of ... piecemeal efforts to rebuild genuine community." In particular Cahn focused on the top-down attitude prevalent in social services. He believed that one of the major failings of many social service organizations was their unwillingness to enroll the help of those people they were trying to help. He called this a deficit based approach to social service, where organizations view the people they were trying to help only in terms of their needs, as opposed to an asset based approach, which focuses on the contributions towards their communities that everyone can make. He theorized that a system like timebanking could "[rebuild] the infrastructure of trust and caring that can strengthen families and communities." He hoped that the system "would enable individuals and communities to become more self-sufficient, to insulate themselves from the vagaries of politics and to tap the capacity of individuals who were in effect being relegated to the scrap heap and dismissed as freeloaders."
As a philosophy, timebanking, also known as Time Trade is founded upon five principles, known as TimeBanking's Core Values:
Everyone is an asset
Some work is beyond a monetary price
Reciprocity in helping
Community (via social networks) is necessary
A respect for all human beings
Ideally, timebanking builds community. TimeBank members sometimes refer to this as a return to simpler times when the community was there for its individuals. An interview at a timebank in the Gorbals neighbourhood of Glasgow revealed the following sentiment:
[the time bank] involves everybody coming together as a community ... the Gorbals has never—not for a long time—had a lot of community spirit. Way back, years ago, it had a lot of community spirit, but now you see that in some areas, people won't even go to the chap next door for some sugar ... that's what I think the project's doing, trying to bring that back, that community sense ...
In 2017, Nimses introduced the concept of a time-based currency called the nim, where 1 nim = 1 minute of life. The concept was first adopted in Eastern Europe.
The concept is based on the idea of universal basic income. Every person is an issuer of nims. For every minute of one's life, 1 nim is created, which can be spent or sent to another person, like money.
Time dollars
Time dollars are a tax-exempt complementary currency used as a means of providing mutual credit in TimeBanking. They are typically called "time credits" or "service credits" outside the United States. TimeBank members exchange services for Time Dollars. Each exchange is recorded as a corresponding credit and debit in the accounts of the participants. One hour of time is worth one Time Dollar, regardless of the service provided in one hour or how much skill is required to perform the task during that hour. This "one-for-one" system that relies on an abundant resource is designed to both recognize and encourage reciprocal community service, resist inflation, avoid hoarding, enable trade, and encourage cooperation among participants.
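The credit-and-debit bookkeeping described above amounts to a simple mutual-credit ledger. The Python sketch below is a minimal illustration (the class and member names are invented for this example): each exchange credits the provider and debits the receiver by the same number of hours, so balances across the whole bank always sum to zero.

```python
from collections import defaultdict

class TimeBank:
    """Minimal mutual-credit ledger: one hour of service = one time credit."""

    def __init__(self):
        self.balances = defaultdict(float)  # member name -> time credits

    def record_exchange(self, provider: str, receiver: str, hours: float):
        """Credit the provider and debit the receiver by the same amount."""
        if hours <= 0:
            raise ValueError("hours must be positive")
        self.balances[provider] += hours
        self.balances[receiver] -= hours

bank = TimeBank()
bank.record_exchange("Alice", "Bob", 2.0)   # Alice gives Bob 2 hours of child care
bank.record_exchange("Bob", "Carol", 1.5)   # Bob gives Carol 1.5 hours of home repair
print(dict(bank.balances))  # {'Alice': 2.0, 'Bob': -0.5, 'Carol': -1.5}
```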
Timebanks
Timebanks have been established in 34 countries, with at least 500 timebanks established in 40 US states and 300 throughout the United Kingdom. TimeBanks also have a significant presence in Japan, South Korea, New Zealand, Taiwan, Senegal, Argentina, Israel, Greece, and Spain. TimeBanks have been used to reduce recidivism rates with diversionary programs for first-time juvenile offenders; facilitate re-entry for ex-convicts; deliver health care, job training and social services in public housing complexes; facilitate substance abuse recovery; prevent institutionalization of severely disabled children through parental support networks; provide transportation for homebound seniors in rural areas; deliver elder care, community health services and hospice care; and foster women's rights initiatives in Senegal.
Timebanking
Timebanking is a pattern of reciprocal service exchange that uses units of time as currency. It is an example of a complementary monetary system. A timebank, also known as a service exchange, is a community that practices time banking. The unit of currency, always valued at an hour's worth of any person's labor, used by these groups has various names but is generally known as a time credit in the US and the UK (formerly a time dollar in the US). Timebanking is primarily used to provide incentives and rewards for work such as mentoring children, caring for the elderly, being neighborly—work usually done on a volunteer basis—which a pure market system devalues. Essentially, the "time" one spends providing these types of community services earns "time" that one can spend to receive services. As well as gaining credits, participating individuals, particularly those more used to being recipients in other parts of their lives, can potentially gain confidence, social contact and skills through giving to others. Communities, therefore, use time banking as a tool to forge stronger intra-community connections, a process known as "building social capital". Timebanking had its intellectual genesis in the US in the early 1980s. By 1990, the Robert Wood Johnson Foundation had invested US$1.2 million to pilot time banking in the context of senior care. Today, 26 countries have active TimeBanks. There are 250 TimeBanks active in the UK and over 276 TimeBanks in the U.S.
Timebanking and the timebank
Timebank members earn credit in Time Dollars for each hour they spend helping other members of the community. Services offered by members in timebanks include: Child Care, Legal Assistance, Language Lessons, Home Repair, and Respite Care for caregivers, among other things. Time Dollars AKA time credits earned are then recorded at the timebank to be accessed when desired. A Timebank can theoretically be as simple as a pad of paper, but the system was originally intended to take advantage of computer databases for record keeping. Some Timebanks employ a paid coordinator to keep track of transactions and to match requests for services with those who can provide them. Other Timebanks select a member or a group of members to handle these tasks. Various organizations provide specialized software to help local Timebanks manage exchanges. The same organizations also often offer consulting services, training, and other materials for individuals or organizations looking to start timebanks of their own.
Example services offered by timebank members
The mission of an individual timebank influences exactly which services are offered. In some places, timebanking is adopted as a means to strengthen the community as a whole. Other timebanks are more oriented towards social service, systems change, and helping underprivileged groups. In some timebanks, both are acknowledged goals.
Time credit
The time credit is the fundamental unit of exchange in a timebank, equal to one hour of a person's labor. In traditional timebanks, one hour of one person's time is equal to one hour of another's. Time credits are earned for providing services and spent receiving services. Upon earning a time credit, a person does not need to spend it right away: they can save it indefinitely. However, since the value of a time credit is fixed at one hour, it resists inflation and does not earn interest. In these ways it is intentionally designed to differ from the traditional fiat currency used in most countries. Consequently, it does little good to hoard time credits and, in practice, many timebanks also encourage the donation of excess time credits to a community pool which is then spent for those in need or on community events.
Criticisms
Some criticisms of timebanking have focused on the time credit's inadequacies as a form of currency and as a market information mechanism. Frank Fisher of MIT predicted in the 1980s that such a currency "would lead to the kind of distortion of market forces which had crippled Russia's economy."
Dr. Gill Seyfang's study of the Gorbals TimeBank—one of the few studies of timebanking done by the academic community—listed several other non-theoretical problems with timebanking. The first is the difficulty of communicating to potential members exactly what makes timebanking different, or "getting people to understand the difference between timebanking and traditional volunteering." She also notes that there is no guarantee that every person's needs will be provided for by a timebank by dint of the fact that the supply of certain skills may be lacking in a community.
One of the most stringent criticisms of timebanking is its organizational sustainability. While some member-run TimeBanks with relatively low overhead costs do exist, others pay a staff to keep the organization running. This can be quite expensive for smaller organizations and without a long-term source of funding, they may fold.
Timebanking around the world
Global timebanking
In 2013 TimeRepublik launched the first global Timebank. Its aim is to eliminate geographical limitations of previous timebanks.
Since 2015 TimeRepublik has been promoting Time Banking within local governments, municipalities, universities, and large companies.
In 2017 TimeRepublik won first prize at the BAI Global Innovation Awards in the Innovation and Human Capital category.
The Community Exchange System (CES) is a global network of communities using alternative exchange systems, many of which use timebanks. Timebanks can trade with each other wherever they are, as well as with mutual credit exchanges. The system uses a base 'currency' of one hour, and the conversion rates between the different exchange groups are based on national average hourly wage rates. This allows timebanks to trade with mutual credit exchanges in the same or different countries.
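A rough sketch of that conversion arithmetic in Python, using assumed wage figures (the rates and group labels below are invented for illustration, not CES data):

```python
# Hypothetical average hourly wage rates in each group's national currency.
AVG_HOURLY_WAGE = {"US": 20.0, "UK": 15.0}  # assumed figures for illustration

def hours_from_price(price: float, country: str) -> float:
    """Convert a money price into the base 'currency' of one hour."""
    return price / AVG_HOURLY_WAGE[country]

# A $60 item in a US exchange corresponds to 3 hours of the base currency,
# which a UK group would account for as 3 hours (worth 45 in its currency).
hours = hours_from_price(60.0, "US")
print(hours, hours * AVG_HOURLY_WAGE["UK"])  # 3.0 45.0
```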
Studies and examples
Elderplan
Elderplan was a social HMO which incorporated timebanking as a way to promote active, engaged lifestyles for its older members. Funding for the "social" part of social HMOs has since dried up and much of the program has been cut, but at its height, members were able to pay portions of their premiums in time credits (back then called Time Dollars) instead of hard currency. The idea was to encourage older people to become more engaged in their communities while also to ask for help more often and "[foster] dignity by allowing people to contribute services as well as receive them."
Gorbals timebank study
In 2004, Dr. Gill Seyfang published a study in the Community Development Journal about the effects of a timebank located in the Gorbals area of Glasgow, Scotland, "an inner-city estate characterized by high levels of deprivation, poverty, unemployment, poor health and low educational attainment." The Gorbals Timebank is run by a local charity with the intent to combat the social ills that face the region. Seyfang concluded that the timebank was effective at "building community capacity" and "promoting social inclusion." She highlights the timebank's success at "[re-stitching] the social fabric of the Gorbals" by "[boosting] engagement in existing projects and activities" in a variety of projects including a community safety network, a library, a healthy living project, and a theatre. She writes that "the timebank had enabled people to access help they otherwise would have had to do without," help which included home repair, gardening, a funeral, and tuition paid in time credits to a continuing education course.
Timebank Florianópolis
The Time Bank of the City of Florianópolis (BTF) is one of the first and best known Time Banks in Brazil. The initiative was conceived in September 2015 at a local Zeitgeist meeting, part of the international sustainability movement. BTF works from a Facebook group that has more than 20,000 members, and exchanges are counted in a spreadsheet shared with users. Scientific research on BTF indicates that the time bank is a means for creating social capital in local society and that BTF members have different socioeconomic characteristics compared to residents of the city of Florianópolis. Younger, non-white, employed, female individuals, working in the informal sector, with a higher education level and with a higher monthly income are more likely to be BTF members.
Spice Timebank
Spice is a social enterprise that has developed a time-based currency called Time Credits. Spice works across health and social care, housing, community development and education, supporting organisations and services to use Time Credits to achieve their outcomes. Spice grew out of the work of the Wales Institute for Community Currencies in the former mining districts of South Wales, UK.
Several studies have been based on the Spice Timebank or have referenced it. In a 2016 survey of 1,000 Spice Timebank members, 77% of respondents said Time Credits had a positive impact on their quality of life, 42% reported that they had learned a new skill, and 30% reported less need to visit the doctor.
See also
Cincinnati Time Store
Collaborative finance
Community currency
Community Exchange System (CES)
Coproduction of public services by service users and communities
Fiscal localism
Labour theory of value
Labour-time voucher
Local exchange trading system (LETS)
References
Further reading
Cahn, Edgar S. (1992). Time Dollars: The New Currency That Enables Americans to Turn Their Hidden Resource Time Into Personal Security and Community Renewal. Emmaus, Penn.: Rodale Press.
External links
TimeBanking on YouTube
Economics and time
Local currencies | Time-based currency | [
"Physics"
] | 4,383 | [
"Spacetime",
"Economics and time",
"Physical quantities",
"Time"
] |
364,210 | https://en.wikipedia.org/wiki/Magic%20carpet | A magic carpet, also called a flying carpet, is a legendary carpet and common trope in fantasy fiction. It is typically used as a form of transportation and can quickly or instantaneously carry its user(s) to their destination.
In literature
One of the stories in the One Thousand and One Nights relates how Prince Husain, the eldest son of Sultan of the Indies, travels to Bisnagar (Vijayanagara) in India and buys a magic carpet. This carpet is described as follows: "Whoever sitteth on this carpet and willeth in thought to be taken up and set down upon other site will, in the twinkling of an eye, be borne thither, be that place nearhand or distant many a day's journey and difficult to reach." The literary traditions of several other cultures also feature magical carpets, in most cases literally flying rather than instantly transporting their passengers from place to place.
Solomon's carpet was reportedly made of green silk with a golden weft, long and wide: "when Solomon sat upon the carpet he was caught up by the wind, and sailed through the air so quickly that he breakfasted at Damascus and supped in Media." The wind followed Solomon's commands, and ensured the carpet would go to the proper destination; when Solomon was proud, for his greatness and many accomplishments, the carpet gave a shake and 40,000 fell to their deaths. The carpet was shielded from the sun by a canopy of birds. In Shaikh Muhammad ibn Yahya al-Tadifi al-Hanbali's book of wonders, Qala'id-al-Jawahir ("Necklaces of Gems"), Shaikh Abdul-Qadir Gilani walks on the water of the River Tigris, then an enormous prayer rug (sajjada) appears in the sky above, "as if it were the flying carpet of Solomon [bisat Sulaiman]".
In Russian folk tales, Baba Yaga can supply Ivan the Fool with a flying carpet or some other magical gifts (e.g. a ball that rolls in front of the hero showing him the way, or a towel that can turn into a bridge). Such gifts help the hero to find his way "beyond thrice-nine lands, in the thrice-ten kingdom". Russian painter Viktor Vasnetsov illustrated the tales featuring a flying carpet on two occasions.
In Mark Twain's "Captain Stormfield's Visit to Heaven", magic wishing carpets are used to instantaneously travel throughout Heaven.
Poul Anderson's Operation Chaos features a world making extensive use of magic in daily life, and among other things having flying carpets as a common, non-polluting means of transportation - in fierce competition with the also available flying brooms. Travelers need not sit on the bare carpet itself, as the carpet serves as the platform for a comfortable cabin.
Magic carpets have also been featured in modern literature, movies, and video games, and not always in a classic context.
In "traditional Chinese fantasy literature" from the late Qing dynasty and before, sentient flying carpets were thought to be "magical monsters" in the same category as lung, qilin, or clouds for heroes to traverse distances with.
In Taoism and Taoist art, flying carpets were used as poetic metaphors for the ability of flight xian had.
In Tibetan Tantric Buddhism, a paper carpet was thought to be able to fly for "adept[s]".
See also
The Phoenix and the Carpet – 1904 children's novel by E. Nesbit
Old Khottabych – 1938 Soviet children's book and later 1956 film with the depiction of a flying carpet
"Magic Carpet Ride" – 1968 song by Steppenwolf
Asterix and the Magic Carpet – 1987 illustrated comic story book on the adventures of Asterix, Obelix and Cacofonix in India
King Solomon's Carpet – 1991 novel by Barbara Vine about the London Underground
Magic Carpet – 1994 video game featuring flight and combat in a realm of magic and monsters
Notes
External links
The secret history of the Flying Carpet
Arabian mythology
Arab culture
Persian mythology
Russian folklore
Fantasy tropes
Fictional aircraft
Fictional objects
Recurrent elements in fairy tales
Iranian folklore
Magic items
Solomon
Rugs and carpets
Fiction about teleportation
Flight folklore
Azerbaijani mythology | Magic carpet | [
"Physics"
] | 886 | [
"Magic items",
"Physical objects",
"Matter"
] |
364,286 | https://en.wikipedia.org/wiki/Nommo | The Nommo or Nummo are primordial ancestral spirits in Dogon religion and cosmogony (sometimes referred to as demi deities) venerated by the Dogon people of Mali. The word Nommos is derived from a Dogon word meaning "to make one drink." Nommos are usually described as amphibious, hermaphroditic, fish-like creatures. Folk art depictions of Nommos show creatures with humanoid upper torsos, legs/feet, and a fish-like lower torso and tail. Nommos are also referred to as "Masters of the Water", "the Monitors", and "the Teachers". Nommo can be a proper name of an individual or can refer to the group of spirits as a whole. For purposes of this article, "Nommo" refers to a specific individual and "Nommos" is used to reference the group of beings.
Nommo mythology
Dogon religion says that Nommo was the first living creature created by the sky god Amma. Shortly after his creation, Nommo underwent a transformation and multiplied into four pairs of twins. One of the twins rebelled against the universal order created by Amma. To restore order to his creation, Amma sacrificed another of the Nommo progeny, whose body was dismembered and scattered throughout the universe. This dispersal of body parts is seen by the Dogon as the source for the proliferation of Binu shrines throughout the Dogons' traditional territory; wherever a body part fell, a shrine was erected.
In the latter part of the 1940s, French anthropologists Marcel Griaule and Germaine Dieterlen (who had been working with the Dogon since 1931) wrote that they were the recipients of additional, secret mythologies, concerning the Nommo. The Dogon reportedly related to Griaule and Dieterlen a belief that the Nommos were inhabitants of a world circling the star Sirius (see the main article on the Dogon for a discussion of their astronomical knowledge). The Nommos descended from the sky in a vessel accompanied by fire and thunder. After arriving, the Nommos created a reservoir of water and subsequently dived into the water. The Dogon legends state that the Nommos required a watery environment in which to live. According to the myth related to Griaule and Dieterlen: "The Nommo divided his body among men to feed them; that is why it is also said that as the universe "had drunk of his body," the Nommo also made men drink. He gave all his life principles to human beings."
The Nommo are also thought to be the origin of the first Hogon.
Controversy
Walter van Beek, an anthropologist studying the Dogon, found no evidence that they had any historical advanced knowledge of Sirius. Van Beek postulated that Griaule engaged in such leading and forceful questioning of his Dogon sources that new myths were created in the process by confabulation, writing that:
Carl Sagan has noted that the first reported association of the Dogon with the knowledge of Sirius as a binary star was in the 1940s, giving the Dogon ample opportunity to gain cosmological knowledge about Sirius and the Solar System from more scientifically advanced, terrestrial societies with which they had come into contact. It has also been pointed out that binary star systems like Sirius are theorized to have a very narrow or non-existent habitable zone, and thus a high improbability of containing a planet capable of sustaining life (particularly life as dependent on water as the Nommos were reported to be).
Daughter and colleague of Marcel Griaule, Geneviève Calame-Griaule, defended the project, dismissing Van Beek's criticism as misguided speculation rooted in an apparent ignorance of esoteric tradition. Van Beek continues to maintain that Griaule was wrong and cites other anthropologists who also reject his work.
The assertion that the Dogon knew of another star in the Sirius system, Emme Ya, or "larger than Sirius B but lighter and dim in magnitude", continues to be discussed. In 1995, gravitational studies indicated the possible existence of a red dwarf star circling around Sirius, but further observations have failed to confirm this. Space journalist and sceptic James Oberg collected claims that have appeared concerning Dogon mythology in his 1982 book, conceded that such assumptions of recent acquisition are "entirely circumstantial" and have no foundation in documented evidence, and concluded that it seems likely that the Sirius mystery will remain exactly what its title implies: a mystery. Earlier, other critics such as the astronomer Peter Pesch and his collaborator Roland Pesch and Ian Ridpath had attributed the supposed "advanced" astronomical knowledge of the Dogon to a mixture of over-interpretation by commentators and cultural contamination.
References in fiction
The belief structure surrounding Nommo, as well as Robert Temple's conclusion from his pseudoarchaeology book The Sirius Mystery, were used by Larry Niven and Steven Barnes as the background for the role-playing game in The California Voodoo Game, the third volume in their Dream Park series. Novelist Tom Robbins discusses Nommo and the Sirius mysteries in his novel Half Asleep in Frog Pajamas. Nommo and the Dogon are also widely mentioned in Philip K. Dick's novel V.A.L.I.S.. The Nommo are also mentioned in the second book of Ian Douglas's Legacy Trilogy (Battlespace) where the marines encounter the Nommo in the Sirius star system. There are also references to the Nommo in Grant Morrison's comic book series, The Invisibles. A major character in the webcomic series Forming by Jesse Moynihan is inspired by (and named) Nommo.
Since 2017, the African Speculative Fiction Society has given out a prize called the Nommo Award to science fiction and fantasy writing from the African continent or by writers from the African diaspora.
In popular culture
Jazz composer and bassist Jymie Merritt dedicated a composition to the Nommo, circa 1965, entitled "Nommo."
References
External links
General information on the Dogon and the Nommo
Dogon religion
West African legendary creatures
Ancient astronaut speculation
Alleged extraterrestrial beings
Ancient astronomy
Piscine and amphibian humanoids | Nommo | [
"Astronomy"
] | 1,300 | [
"Ancient astronomy",
"History of astronomy"
] |
364,300 | https://en.wikipedia.org/wiki/Liebig%27s%20law%20of%20the%20minimum | Liebig's law of the minimum, often simply called Liebig's law or the law of the minimum, is a principle developed in agricultural science by Carl Sprengel (1840) and later popularized by Justus von Liebig. It states that growth is dictated not by total resources available, but by the scarcest resource (limiting factor). The law has also been applied to biological populations and ecosystem models for factors such as sunlight or mineral nutrients.
Applications
The law was originally applied to plant or crop growth, where it was found that increasing the amount of plentiful nutrients did not increase plant growth. Only by increasing the amount of the limiting nutrient (the one most scarce in relation to "need") was the growth of a plant or crop improved. This principle can be summed up in the aphorism, "The availability of the most abundant nutrient in the soil is only as good as the availability of the least abundant nutrient in the soil," or the rough analog, "A chain is only as strong as its weakest link." Though the diagnosis of factors limiting crop yields is a common field of study, the approach has been criticized.
Scientific applications
Liebig's law has been extended to biological populations (and is commonly used in ecosystem modelling). For example, the growth of an organism such as a plant may be dependent on a number of different factors, such as sunlight or mineral nutrients (e.g., nitrate or phosphate). The availability of these may vary, such that at any given time one is more limiting than the others. Liebig's law states that growth only occurs at the rate permitted by the most limiting factor.
For instance, in the equation below, the growth of population O is a function of the minimum of three Michaelis-Menten terms representing limitation by factors I, N, and P:

\[
\frac{dO}{dt} = O\left[\min\left(\frac{\mu_I I}{k_I + I},\ \frac{\mu_N N}{k_N + N},\ \frac{\mu_P P}{k_P + P}\right) - m\right]
\]
Where
O is the biomass concentration or population density.
μI,μN,μP are the specific growth rates in response to the concentrations of three different limiting nutrients, represented by I,N,P respectively.
kI,kN,kP are the half-saturation constants for the three nutrients I,N,P respectively. These constants represent the concentration of the nutrient at which the growth rate is half of its maximum.
I,N,P are the concentrations of the three nutrients /factors.
m is the mortality rate or decay constant.
The use of the equation is limited to a situation where there are steady state ceteris paribus conditions, and factor interactions are tightly controlled.
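A minimal numerical sketch of the expression above (the concentrations and constants below are invented for illustration): growth is set by whichever Monod term is smallest, so raising a non-limiting factor leaves the rate unchanged.

```python
def specific_growth_rate(concs, mu_max, k_half, mortality):
    """Liebig-style growth: the smallest Michaelis-Menten (Monod) term, minus mortality.

    concs, mu_max and k_half are dicts keyed by factor name (e.g. "I", "N", "P").
    """
    terms = [mu_max[f] * concs[f] / (k_half[f] + concs[f]) for f in concs]
    return min(terms) - mortality

# Invented values in which phosphate (P) is the scarcest factor.
concs  = {"I": 50.0, "N": 10.0, "P": 0.2}
mu_max = {"I": 1.0,  "N": 1.0,  "P": 1.0}
k_half = {"I": 5.0,  "N": 2.0,  "P": 1.0}

print(round(specific_growth_rate(concs, mu_max, k_half, mortality=0.1), 3))  # 0.067
# Doubling I or N does not change the result; only raising P does.
```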
Protein nutrition
In human nutrition, the law of the minimum was used by William Cumming Rose to determine the essential amino acids. In 1931 he published his study "Feeding experiments with mixtures of highly refined amino acids". Knowledge of the essential amino acids has enabled vegetarians to enhance their protein nutrition by protein combining from various vegetable sources. One practitioner was Nevin S. Scrimshaw, who fought protein deficiency in India and Guatemala. Frances Moore Lappé published Diet for a Small Planet in 1971, which popularized protein combining using grains, legumes, and dairy products.
The law of the minimum was tested at the University of Southern California in 1947. "The formation of protein molecules is a coordinated tissue function and can be accomplished only when all amino acids which take part in the formation are present at the same time." It was further concluded that "'incomplete' amino acid mixtures are not stored in the body, but are irreversibly further metabolized." Robert Bruce Merrifield was a laboratory assistant for the experiments. When he wrote his autobiography in 1993, he recounted the finding:
We showed that no net growth occurred when one essential amino acid was omitted from the diet, nor did it occur if that amino acid was fed several hours after the main feeding with the deficient diet.
Other applications
More recently Liebig's law is starting to find an application in natural resource management where it surmises that growth in markets dependent upon natural resource inputs is restricted by the most limited input. As the natural capital upon which growth depends is limited in supply due to the finite nature of the planet, Liebig's law encourages scientists and natural resource managers to calculate the scarcity of essential resources in order to allow for a multi-generational approach to resource consumption.
Neoclassical economic theory has sought to refute the issue of resource scarcity by application of the law of substitutability and technological innovation. The substitutability "law" states that as one resource is exhausted—and prices rise due to a lack of surplus—new markets based on alternative resources appear at certain prices in order to satisfy demand. Technological innovation implies that humans are able to use technology to fill the gaps in situations where resources are imperfectly substitutable.
A market-based theory depends on proper pricing. Where resources such as clean air and water are not accounted for, there will be a "market failure". These failures may be addressed with Pigovian taxes and subsidies, such as a carbon tax. While the theory of the law of substitutability is a useful rule of thumb, some resources may be so fundamental that there exist no substitutes. For example, Isaac Asimov noted, "We may be able to substitute nuclear power for coal power, and plastics for wood ... but for phosphorus there is neither substitute nor replacement."
Where no substitutes exist, such as phosphorus, recycling will be necessary. This may require careful long-term planning and governmental intervention, in part to create Pigovian taxes to allow efficient market allocation of resources, in part to address other market failures such as excessive time discounting.
Liebig's barrel
Dobenecks used the image of a barrel — often called "Liebig's barrel" — to explain Liebig's law. Just as the maximum practical capacity of a barrel with staves of unequal length is limited by the length of the shortest stave, so a plant's growth is limited by the nutrient in shortest supply.
If a system satisfies the law of the minimum then adaptation will equalize the load of different factors because the adaptation resource will be allocated for compensation of limitation. Adaptation systems act as the cooper of Liebig's barrel and lengthens the shortest stave to improve barrel capacity. Indeed, in well-adapted systems the limiting factor should be compensated as far as possible. This observation follows the concept of resource competition and fitness maximization.
Because of the paradoxes of the law of the minimum, if we observe the law of the minimum in artificial systems, then under natural conditions adaptation will equalize the load of different factors, and we can expect a violation of the law of the minimum. Conversely, if artificial systems demonstrate significant violation of the law of the minimum, then we can expect that under natural conditions adaptation will compensate for this violation. In a limited system, life will adjust as an evolution of what came before.
Biotechnology
One example of technological innovation is in plant genetics whereby the biological characteristics of species can be changed by employing genetic modification to alter biological dependence on the most limiting resource. Biotechnological innovations are thus able to extend the limits for growth in species by an increment until a new limiting factor is established, which can then be challenged through technological innovation.
Theoretically there is no limit to the number of possible increments towards an unknown productivity limit. This would be either the point where the increment to be advanced is so small it cannot be justified economically or where technology meets an invulnerable natural barrier. It may be worth adding that biotechnology itself is totally dependent on external sources of natural capital.
See also
Bottleneck (disambiguation)
Critical chain
Critical path method
Iron fertilization
Keystone species
Limiting factor
Random walk
Rate determining step
Sustainability
Theory of Constraints
References
Agronomy
Ecological theories
Systems ecology
Justus von Liebig | Liebig's law of the minimum | [
"Environmental_science"
] | 1,602 | [
"Environmental social science",
"Systems ecology"
] |
364,313 | https://en.wikipedia.org/wiki/Photometric-standard%20star | Photometric-standard stars are a series of stars that have had their light output in various passbands of photometric system measured very carefully. Other objects can be observed using CCD cameras or photoelectric photometers connected to a telescope, and the flux, or amount of light received, can be compared to a photometric-standard star to determine the exact brightness, or stellar magnitude, of the object.
A current set of photometric-standard stars for UBVRI photometry was published by Arlo U. Landolt in 1992 in The Astronomical Journal.
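The comparison step reduces to Pogson's magnitude relation, m_obj = m_std − 2.5 log10(F_obj / F_std). The Python sketch below is illustrative only; the flux counts and the standard's magnitude are invented values, not Landolt data.

```python
import math

def magnitude_from_standard(flux_obj: float, flux_std: float, mag_std: float) -> float:
    """Apparent magnitude of a target from the measured flux ratio to a standard star
    (Pogson's relation: m_obj - m_std = -2.5 * log10(F_obj / F_std))."""
    return mag_std - 2.5 * math.log10(flux_obj / flux_std)

# Hypothetical CCD counts: the target delivers one quarter of the standard's flux,
# so it is about 1.5 magnitudes fainter than the 12.0-mag standard.
print(round(magnitude_from_standard(2500.0, 10000.0, 12.0), 2))  # 13.51
```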
See also
References
standard star
star types | Photometric-standard star | [
"Astronomy"
] | 125 | [
"Stellar astronomy stubs",
"Star types",
"Astronomy stubs",
"Astronomical classification systems"
] |
364,328 | https://en.wikipedia.org/wiki/Big%20Bounce | The Big Bounce hypothesis is a cosmological model for the origin of the known universe. It was originally suggested as a phase of the cyclic model or oscillatory universe interpretation of the Big Bang, where the first cosmological event was the result of the collapse of a previous universe. It receded from serious consideration in the early 1980s after inflation theory emerged as a solution to the horizon problem, which had arisen from advances in observations revealing the large-scale structure of the universe.
Inflation was found to be inevitably eternal, creating an infinity of different universes with typically different properties, suggesting that the properties of the observable universe are a matter of chance. An alternative concept that included a Big Bounce was conceived as a predictive and falsifiable possible solution to the horizon problem. Investigation continued as of 2022.
Expansion and contraction
The concept of the Big Bounce envisions the Big Bang as the beginning of a period of expansion that followed a period of contraction. In this view, one could talk of a "Big Crunch" followed by a "Big Bang" or, more simply, a "Big Bounce". This concept suggests that we could exist at any point in an infinite sequence of universes, or conversely, the current universe could be the very first iteration. However, if the condition of the interval phase "between bounces"—considered the "hypothesis of the primeval atom"—is taken into full contingency, such enumeration may be meaningless because that condition could represent a singularity in time at each instance if such perpetual repeats (cycles) were absolute and undifferentiated.
The main idea behind the quantum theory of a Big Bounce is that, as density approaches infinity, the behavior of quantum foam changes. All the so-called fundamental physical constants, including the speed of light in vacuum, need not remain constant during a Big Crunch, especially in the time interval smaller than that in which measurement may never be possible (one unit of Planck time, roughly 10−43 seconds) spanning or bracketing the point of inflection.
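For scale, the Planck time mentioned above follows from the fundamental constants as t_P = √(ħG/c⁵); the short Python check below simply evaluates that formula.

```python
import math

H_BAR = 1.054571817e-34  # reduced Planck constant, J*s
G     = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
C     = 2.99792458e8     # speed of light in vacuum, m/s

planck_time = math.sqrt(H_BAR * G / C**5)
print(f"{planck_time:.3e} s")  # ~5.391e-44 s, i.e. roughly 10^-43 seconds
```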
History
Big Bounce models were endorsed on largely aesthetic grounds by cosmologists including Willem de Sitter, Carl Friedrich von Weizsäcker, George McVittie, and George Gamow (who stressed that "from the physical point of view we must forget entirely about the precollapse period").
By the early 1980s, the advancing precision and scope of observational cosmology had revealed that the large-scale structure of the universe is flat, homogeneous, and isotropic, a finding later accepted as the cosmological principle to apply at scales beyond roughly 300 million light-years. This led cosmologists to seek an explanation to the horizon problem, which questioned how distant regions of the universe could have identical properties without ever being in light-like communication. A solution was proposed to be a period of exponential expansion of space in the early universe, which formed the basis of what became known as inflation theory. Following the brief inflationary period, the universe continues to expand at a slower rate.
Various formulations of inflation theory and their detailed implications became the subject of intense theoretical study. Without a compelling alternative, inflation became the leading solution to the horizon problem.
The phrase "Big Bounce" appeared in scientific literature in 1987, when it was first used in the title of a pair of articles (in German) in Stern und Weltraum by Wolfgang Priester and Hans-Joachim Blome. It reappeared in 1988 in Iosif Rozental's Big Bang, Big Bounce, a revised English-language translation of a Russian-language book (by a different title), and in a 1991 English-language article by Priester and Blome in Astronomy and Astrophysics. The phrase originated as the title of a novel by Elmore Leonard in 1969, shortly after increased public awareness of the Big Bang model with of the discovery of the cosmic microwave background by Penzias and Wilson in 1965.
The idea of the existence of a big bounce in the very early universe has found diverse support in works based on loop quantum gravity. In loop quantum cosmology, a branch of loop quantum gravity, the big bounce was first discovered in February 2006 for isotropic and homogeneous models by Abhay Ashtekar, Tomasz Pawlowski, and Parampreet Singh at Pennsylvania State University. This result has been generalized to various other models by different groups, and includes the case of spatial curvature, cosmological constant, anisotropies, and Fock quantized inhomogeneities.
Martin Bojowald, an assistant professor of physics at Pennsylvania State University, published a study in July 2007 detailing work related to loop quantum gravity that claimed to mathematically solve the time before the Big Bang, which would give new weight to the oscillatory universe and Big Bounce theories.
One of the main problems with the Big Bang theory is that there is a singularity of zero volume and infinite energy at the moment of the Big Bang. This is normally interpreted as a breakdown of physics as we know it; in this case, of the theory of general relativity. This is why one expects quantum effects to become important and avoid a singularity.
However, research in loop quantum cosmology purported to show that a previously existing universe collapses not to a singularity, but to a point where the quantum effects of gravity become so strongly repulsive that the universe rebounds back out, forming a new branch. Throughout this collapse and bounce, the evolution is unitary.
Bojowald also claimed that some properties of the universe that collapsed to form ours can be determined; however, other properties are not determinable due to some uncertainty principle. This result has been disputed by different groups, which show that due to restrictions on fluctuations stemming from the uncertainty principle, there are strong constraints on the change in relative fluctuations across the bounce.
While the existence of the Big Bounce has still to be demonstrated from loop quantum gravity, the robustness of its main features has been confirmed using exact results and several studies involving numerical simulations using high performance computing in loop quantum cosmology.
In 2006, it was proposed that the application of loop quantum gravity techniques to Big Bang cosmology can lead to a bounce that need not be cyclic.
In 2010, Roger Penrose advanced a general relativity-based theory which he called the "conformal cyclic cosmology". The theory explains that the universe will expand until all matter decays and ultimately turns to light. Since nothing in the universe would have any time or distance scale associated with it, the universe becomes identical with the Big Bang, resulting in a type of Big Crunch that becomes the next Big Bang, thus perpetuating the next cycle.
In 2011, Nikodem Popławski showed that a nonsingular Big Bounce appears naturally in the Einstein–Cartan–Sciama–Kibble theory of gravity. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. The minimal coupling between torsion and Dirac spinors generates a spin-spin interaction which is significant in fermionic matter at extremely high densities. Such an interaction avoids the unphysical Big Bang singularity, replacing it with a cusp-like bounce at a finite minimum scale factor, before which the universe was contracting. This scenario also explains why the present Universe at the largest scales appears spatially flat, homogeneous, and isotropic, providing a physical alternative to cosmic inflation.
In 2012, a new theory of a nonsingular Big Bounce was constructed within the frame of standard Einstein gravity. This theory combines the benefits of matter bounce and ekpyrotic cosmology. In particular, the homogeneous and isotropic background cosmological solution is unstable to the growth of anisotropic stress (the BKL instability), a problem which is resolved in this theory. Moreover, curvature perturbations seeded in matter contraction can form a nearly scale-invariant primordial power spectrum and thus provide a consistent mechanism to explain the cosmic microwave background (CMB) observations.
A few sources argue that distant supermassive black holes whose large size is hard to explain so soon after the Big Bang, such as ULAS J1342+0928, may be evidence for a Big Bounce, with these supermassive black holes being formed before the Big Bounce.
Critics
According to a study published in Physical Review Letters in May 2023, a Big Bounce should have left marks in the primordial light known as the cosmic microwave background (CMB). However, when observations conducted by the Planck satellite were compared with a simulated CMB for the case in which the Universe bounced on itself only once, that particular bounce signature was not found.
See also
References
Further reading
Angha, Nader (2001). Expansion & Contraction Within Being (Dahm). Riverside, California: M.T.O Shahmaghsoudi Publications.
Taiebyzadeh, Payam (2017). String Theory; A Unified Theory and Inner Dimension of Elementary Particles (BazDahm). Riverside, Iran: Shamloo Publications Center.
External links
Penn State Researchers Look Beyond The Birth Of The Universe (Penn State) May 12, 2006
What Happened Before the Big Bang? (Penn State) July 1, 2007
From big bang to big bounce (Penn State) NewScientist December 13, 2008
Physical cosmology
Ultimate fate of the universe | Big Bounce | [
"Physics",
"Astronomy"
] | 1,932 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
364,332 | https://en.wikipedia.org/wiki/Experimental%20analysis%20of%20behavior | The experimental analysis of behavior is a science that studies the behavior of individuals across a variety of species. A key early scientist was B. F. Skinner who discovered operant behavior, reinforcers, secondary reinforcers, contingencies of reinforcement, stimulus control, shaping, intermittent schedules, discrimination, and generalization. A central method was the examination of functional relations between environment and behavior, as opposed to hypothetico-deductive learning theory that had grown up in the comparative psychology of the 1920–1950 period. Skinner's approach was characterized by observation of measurable behavior which could be predicted and controlled. It owed its early success to the effectiveness of Skinner's procedures of operant conditioning, both in the laboratory and in behavior therapy.
Basic learning processes in behavior analysis
Classical (or respondent) conditioning
In classical or respondent conditioning, a neutral stimulus (conditioned stimulus) is delivered just before a reflex-eliciting stimulus (unconditioned stimulus) such as food or pain. This is typically done by pairing the two stimuli, as in Pavlov's experiments with dogs, where a bell was followed by food delivery. After repeated pairings, the conditioned stimulus comes to elicit the response.
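One standard quantitative model of this pairing process (not discussed in this article, but widely used in learning theory) is the Rescorla–Wagner update rule, in which the associative strength V of the conditioned stimulus grows toward the maximum λ supported by the unconditioned stimulus. The Python sketch below is a minimal illustration with hypothetical learning-rate parameters.

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0, v0=0.0):
    """Track associative strength V over repeated CS-US pairings.

    Update rule: V <- V + alpha * (lam - V), where alpha combines the
    salience/learning-rate parameters and lam is the asymptote set by
    the unconditioned stimulus.
    """
    v = v0
    history = []
    for _ in range(trials):
        v += alpha * (lam - v)
        history.append(v)
    return history

# After repeated pairings, V approaches lam: the CS comes to elicit the response.
for trial, v in enumerate(rescorla_wagner(trials=10), start=1):
    print(f"trial {trial:2d}: V = {v:.3f}")
```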
Operant conditioning
Operant conditioning (also, "instrumental conditioning") is a learning process in which behavior is sensitive to, or controlled by, its consequences. Specifically, behavior followed by some consequences becomes more frequent (positive reinforcement), behavior followed by other consequences becomes less frequent (punishment), and behavior that avoids or terminates yet other consequences becomes more frequent (negative reinforcement). For example, in a food-deprived subject, when lever-pressing is followed by food delivery, lever-pressing increases in frequency (positive reinforcement). Likewise, when stepping off a treadmill is followed by delivery of electric shock, stepping off the treadmill becomes less frequent (punishment). And when stopping lever-pressing is followed by shock, lever-pressing is maintained or increased (negative reinforcement). Many variations and details of this process may be found in the main article.
Experimental tools in behavioral research
Operant conditioning chamber
The most commonly used tool in animal behavioral research is the operant conditioning chamber—also known as a Skinner Box. The chamber is an enclosure designed to hold a test animal (often a rodent, pigeon, or primate). The interior of the chamber contains some type of device that serves the role of discriminative stimuli, at least one mechanism to measure the subject's behavior as a rate of response—such as a lever or key-peck switch—and a mechanism for the delivery of consequences—such as a food pellet dispenser or a token reinforcer such as an LED light.
Cumulative recorder
Of historical interest is the cumulative recorder, an instrument used to record the responses of subjects graphically. Traditionally, its graphing mechanism has consisted of a rotating drum of paper equipped with a marking needle. The needle would start at the bottom of the page and the drum would turn the roll of paper horizontally. Each subject response would result in the marking needle moving vertically along the paper one tick. This makes the rate of response the slope of the graph. For example, a regular rate of response would cause the needle to move vertically at a regular rate, resulting in a straight diagonal line rising towards the right. An accelerating or decelerating rate of response would lead to a quadratic (or similar) curve. For the most part, cumulative records are no longer graphed using rotating drums, but are recorded electronically instead.
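A cumulative record is easy to emulate electronically, which is essentially how it is produced today. The Python sketch below converts a list of response timestamps into cumulative-record coordinates; the timestamps are hypothetical.

```python
def cumulative_record(response_times):
    """Return (time, cumulative_count) pairs for a cumulative record.

    Plotting these points yields the classic curve: the local slope is
    the response rate, so a steady rate appears as a straight diagonal
    line and acceleration as an upward-bending curve.
    """
    return [(t, count) for count, t in enumerate(sorted(response_times), start=1)]

# Hypothetical lever-press timestamps (seconds into a session), with the
# rate increasing over time, producing an accelerating (curved) record.
presses = [5, 12, 18, 23, 27, 30, 32, 34, 35.5, 36.5]
for t, n in cumulative_record(presses):
    print(f"t = {t:5.1f} s -> {n} responses")
```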
Key concepts
Laboratory methods employed in the experimental analysis of behavior are based upon B.F. Skinner's philosophy of radical behaviorism, which is premised upon:
Everything that organisms do is behavior (including thinking), and
All behavior is lawful and open to experimental analysis.
Central to operant conditioning is the use of a Three-Term Contingency (Discriminative Stimulus, Response, Reinforcing Stimulus) to describe functional relationships in the control of behavior.
Discriminative stimulus (S^D) is a cue or stimulus context that sets the occasion for a response. For example, food on a plate sets the occasion for eating.
Behavior is a response (R), typically controlled by past consequences and also typically controlled by the presence of a discriminative stimulus. It operates on the environment, that is, it changes the environment in some way.
Consequences can consist of reinforcing stimuli (S^R) or punishing stimuli (S^P) which follow and modify an operant response. Reinforcing stimuli are often classified as positively (S^R+) or negatively reinforcing (S^R−). Reinforcement may be governed by a schedule of reinforcement, that is, a rule that specifies when or how often a response is reinforced. (See operant conditioning; a simple simulation of such schedules is sketched after this list.)
Respondent conditioning is dependent on stimulus–response (S–R) methodologies (unconditioned stimulus (US), conditioned stimulus (CS), neutral stimulus (NS), unconditioned response (UR), and conditioned response, or CR).
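Schedules of reinforcement lend themselves to simple simulation. The Python sketch below implements a fixed-ratio schedule (reinforce every n-th response) and a variable-ratio schedule (reinforce on average every n-th response); it is a minimal illustration, not laboratory software.

```python
import random

def fixed_ratio(n, responses):
    """Reinforce every n-th response (FR-n schedule)."""
    return [i % n == 0 for i in range(1, responses + 1)]

def variable_ratio(n, responses, seed=0):
    """Reinforce each response with probability 1/n (VR-n schedule),
    so reinforcement occurs on average every n-th response."""
    rng = random.Random(seed)
    return [rng.random() < 1.0 / n for _ in range(responses)]

fr5 = fixed_ratio(5, 20)
vr5 = variable_ratio(5, 20)
print("FR-5 reinforcers delivered:", sum(fr5))  # exactly every 5th response
print("VR-5 reinforcers delivered:", sum(vr5))  # ~4 on average, unpredictably spaced
```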
Functional analysis (psychology)
Data collection
Anti-theoretical analysis
The idea that Skinner's position is anti-theoretical is probably inspired by the arguments he put forth in his article Are Theories of Learning Necessary? However, that article did not argue against the use of theory as such, only against certain theories in certain contexts. Skinner argued that many theories did not explain behavior, but simply offered another layer of structure that itself had to be explained in turn. If an organism is said to have a drive, which causes its behavior, what then causes the drive? Skinner argued that many theories had the effect of halting research or generating useless research.
Skinner's work did have a basis in theory, though his theories were different from those that he criticized. Mecca Chiesa notes that Skinner's theories are inductively derived, while those that he attacked were deductively derived. The theories that Skinner opposed often relied on mediating mechanisms and structures—such as a mechanism for memory as a part of the mind—which were not measurable or observable. Skinner's theories form the basis for two of his books: Verbal Behavior, and Science and Human Behavior. These two texts represent considerable theoretical extensions of his basic laboratory work into the realms of political science, linguistics, sociology and others.
Notable figures
Charles Ferster – pioneered Errorless learning, which has since become a commonly used form of Discrete trial training (DTT) to teach autistic children, and co-authored the Schedules of Reinforcement book alongside B. F. Skinner.
Richard Herrnstein – developed the matching law, a mathematical model for decision making, co-authored the controversial The Bell Curve.
James Holland – co-wrote the highly cited and well-known Principles of Behavior with B.F. Skinner.
Fred S. Keller – creator of the Personalized System of Instruction (PSI).
Ogden Lindsley – founder of the Precision Teaching approach to teaching.
Jack Michael – noted verbal behavior and motivating operations theorist and researcher.
John Anthony (Tony) Nevin – developed behavioral momentum.
David Premack – discovered the Premack principle, that more probable behaviors reinforce less probable behaviors, and studied the language capacity of chimpanzees.
Howard Rachlin – pioneer in self-control research and behavioral economics.
Murray Sidman – discovered Sidman Avoidance, highly cited author, researcher on punishment, also has been influential in research on stimulus equivalence.
Philip Hineline – contributed extensively to negative reinforcement (escape/avoidance), molecular/molar accounts of behavior processes, and the characteristics of interpretive language.
Allen Neuringer – well known for theoretical work including volition perception, randomness, self-experimentation, and other areas.
Peter B. Dews – principal founder of behavioral pharmacology.
References
External links
The Journal of the Experimental Analysis of Behavior has been the flagship journal for behavioral research since 1958 (as a quarterly and since 1964 as a bimonthly publication).
The Journal of Applied Behavior Analysis explores what is considered to be the more applied areas of the experimental analysis of behavior.
Behavioural Pharmacology publishes research on the effects of drugs, chemicals, and hormones on schedule-controlled operant behavior, as well as research into "the neurochemical mechanisms underlying behaviour."
Experimental Analysis of Human Behavior Bulletin is an online journal publishing experimental research focused on human subjects.
The Analysis of Verbal Behavior – annual journal for publication of verbal behavior research.
Are Theories of Learning Necessary? B.F. Skinner's seminal 1950 classic in which he attacks the hypothetico-deductive model of research driven by hypothesis testing.
Behavioural Processes publishes an annual issue on quantitative analysis of behavior and an issue on Comparative Cognition.
Experimental psychology
Behaviorism | Experimental analysis of behavior | [
"Biology"
] | 1,774 | [
"Behavior",
"Behaviorism"
] |
364,338 | https://en.wikipedia.org/wiki/Respect | Respect, also called esteem, is a positive feeling or deferential action shown towards someone or something considered important or held in high esteem or regard. It conveys a sense of admiration for good or valuable qualities. It is also the process of honoring someone by exhibiting care, concern, or consideration for their needs or feelings.
In many cultures, people are considered to be worthy of respect until they prove otherwise. Some people may earn special respect through their exemplary actions or social roles. In "honor cultures", respect is more often earned in this way than granted by default. Courtesies that show respect may include simple words and phrases like "thank you" in the West or their equivalents in the Indian subcontinent, or simple physical signs like a slight bow, a smile, direct eye contact, or a handshake. Such acts may have very different interpretations depending on the cultural context. The end goal is for all people to be treated with respect.
Signs and other ways of showing respect
Language
One definition of respect is a feeling of admiration for someone or something elicited by their abilities, qualities, and achievements.
An honorific is a word or expression (such as a title like "Doctor" or a pronoun form) that shows respect when used in addressing or referring to a person.
Typically honorifics are used for second and third persons; use for first person is less common. Some languages have anti-honorific first person forms (like "your most humble servant" or "this unworthy person") whose effect is to enhance the relative honor accorded a second or third person.
For example, it is disrespectful not to use polite language and honorifics when speaking in Japanese with someone having a higher social status. Japanese honorifics can also be used when English is spoken.
In China, it is considered rude to call someone by their first name unless the person is known by the speaker for a long period of time. In work-related situations, people address each other by their titles. At home, people often refer to each other by nicknames or terms of kinship. In Chinese culture, individuals often address their friends as juniors and seniors even if they are just a few months younger or older. When the Chinese ask for someone's age, they often do so to know how to address the person.
Physical gestures
In Islamic cultures, there are many ways to show respect to people. For example, one may kiss the hands of parents, grandparents, or teachers. It is narrated in the sayings of Muhammad: "Your smiling in the face of your brother is charity". It is also important for Muslims to treat the Quran with great care, as it is considered the word of God. Actions like placing it on the floor or handling it with unclean hands are forbidden and should be followed by a prayer of forgiveness.
In India, it is customary that, out of respect, when a person's foot accidentally touches a book or any written material (considered to be a manifestation of Saraswati, the goddess of knowledge) or another person's body, it will be followed by an apology in the form of a single hand gesture with the right hand, where the offending person first touches the object with the fingertips and then the forehead and/or chest. The same applies to money, which is considered to be a manifestation of the goddess of wealth, Lakshmi. Pranāma, or the touching of feet, in Indian culture is a sign of respect. For instance, when a child greets their grandparents, they typically will touch their hands to their grandparents' feet. In Indian culture, it is believed that the feet are a source of love and power.
In many African/West Indian descent communities and some non-African/West Indian descent communities, respect can be signified by the touching of fists.
Many gestures or physical acts that are common in the West can be considered disrespectful in Japan. For instance, one should not point directly at someone. When greeting someone or thanking them, it may be insulting if the person of lower status does not bow lower than the person with higher status. The duration and level of the bow depends on many factors such as age and status. Some signs of physical respect apply to women only. If a woman does not wear cosmetics or a brassiere, it is possible that she will be considered unprofessional or others may think she does not care about her situation.
Respect as a virtue
Respect for others is a variety of virtue or character strength. The philosopher Immanuel Kant made the virtue of respect the core of his Categorical Imperative:
So act that you treat humanity… always at the same time as an end, never merely as a means.
Respect as a cultural value
Chinese culture
In Chinese culture, bowing is generally reserved as a sign of respect for elders and ancestors. When bowing, they place the fist of the right hand in the palm of their left at stomach level. The deeper the bow, the more respect they are showing.
Traditionally, there was not much hand-shaking in Chinese culture. However, this gesture is now widely practiced among people, especially when greeting Westerners or other foreigners. Many Westerners may find Chinese handshakes to be too long or too weak, but this is because Chinese people consider a weaker handshake to be a gesture of humility and respect.
Kowtowing, or kneeling and bowing so deeply that one's forehead is touching the floor, is practiced during worship at temples. Kowtowing is a powerful gesture reserved mainly for honoring the dead or offering deep respect at a temple.
Many codes of behavior revolve around young people showing respect to older people. Filial piety is a virtue of having respect for ancestors, family, and elders. As in many cultures, younger Chinese individuals are expected to defer to older people, let them speak first, sit down after them, and not contradict them. Sometimes when an older person enters a room, everyone stands. People are often introduced from oldest to youngest. Often, younger people will go out of their way to open doors for their elders and not cross their legs in front of them. The older a person is, the more respect they are expected to be treated with.
Indigenous American culture
In many indigenous American societies, respect is viewed as a moral value that teaches indigenous people about their culture. This moral value is treated as a process that influences participation in the community and also helps people develop and become integrated into their culture. For this reason, the value of respect is taught during childhood.
Respect as a form of behavior and participation is especially important as a basis of how children must conduct themselves in their community. Children engage in mature activities such as cooking for the family, cleaning and sweeping the house, caring for infant peers, and crop work. Indigenous children learn to view their participation in these activities as a representation of respect. Through this manner of showing respect by participation in activities, children not only learn about culture but also practice it as well.
See also
Asch conformity experiments
Attention
Dignity
Etiquette
:Category:Social graces
Etiquette in Asia
Golden Rule
Identity (social science)
Impression management
Milgram experiment
Non-aggression principle
Peace Love Unity Respect
Social comparison theory
Social death
Social support
Value (ethics and social sciences)
References
Further reading
External links
—Multidisciplinary research project on interpersonal respect, with additional quotes, gallery, literature
Concepts in ethics
Cultural conventions
Human behavior
Interpersonal relationships
Virtue
Feeling | Respect | [
"Biology"
] | 1,525 | [
"Behavior",
"Interpersonal relationships",
"Human behavior"
] |
364,369 | https://en.wikipedia.org/wiki/Laser%20Interferometer%20Space%20Antenna | The Laser Interferometer Space Antenna (LISA) is a planned space probe to detect and accurately measure gravitational waves—tiny ripples in the fabric of spacetime—from astronomical sources. LISA will be the first dedicated space-based gravitational-wave observatory. It aims to measure gravitational waves directly by using laser interferometry. The LISA concept features three spacecraft arranged in an equilateral triangle with each side 2.5 million kilometers long, flying in an Earth-like heliocentric orbit. The distance between the satellites is precisely monitored to detect a passing gravitational wave.
The LISA project started out as a joint effort between NASA and the European Space Agency (ESA). However, in 2011, NASA announced that it would be unable to continue its LISA partnership with the European Space Agency due to funding limitations. The project is a recognized CERN experiment (RE8). A scaled-down design initially known as the New Gravitational-wave Observatory (NGO) was proposed as one of three large projects in ESA's long-term plans. In 2013, ESA selected 'The Gravitational Universe' as the theme for one of its three large projects in the 2030s whereby it committed to launch a space-based gravitational-wave observatory.
In January 2017, LISA was proposed as a candidate mission. On June 20, 2017, the suggested mission received its clearance goal for the 2030s, and was approved as one of the main research missions of ESA.
On 25 January 2024, the LISA Mission was formally adopted by ESA. This adoption recognises that the mission concept and technology are advanced enough that building the spacecraft and its instruments can commence.
The LISA mission is designed for direct observation of gravitational waves, which are distortions of spacetime travelling at the speed of light. Passing gravitational waves alternately squeeze and stretch space itself by a tiny amount. Gravitational waves are caused by energetic events in the universe and, unlike any other radiation, can pass unhindered by intervening mass. Launching LISA will add a new sense to scientists' perception of the universe and enable them to study phenomena that are invisible in normal light.
Potential sources for signals are merging massive black holes at the centre of galaxies, massive black holes orbited by small compact objects, known as extreme mass ratio inspirals, binaries of compact stars, substellar objects orbiting such binaries, and possibly other sources of cosmological origin, such as a cosmological phase transition shortly after the Big Bang, and speculative astrophysical objects like cosmic strings and domain boundaries.
Mission description
The LISA mission's primary objective is to detect and measure gravitational waves produced by compact binary systems and mergers of supermassive black holes. LISA will observe gravitational waves by measuring differential changes in the length of its arms, as sensed by laser interferometry. Each of the three LISA spacecraft contains two telescopes, two lasers and two test masses (each a 46 mm, roughly 2 kg, gold-coated cube of gold/platinum), arranged in two optical assemblies pointed at the other two spacecraft. These form Michelson-like interferometers, each centred on one of the spacecraft, with the test masses defining the ends of the arms. The entire arrangement, which is ten times larger than the orbit of the Moon, will be placed in solar orbit at the same distance from the Sun as the Earth, but trailing the Earth by 20 degrees, and with the orbital planes of the three spacecraft inclined relative to the ecliptic by about 0.33 degree, which results in the plane of the triangular spacecraft formation being tilted 60 degrees from the plane of the ecliptic. The mean linear distance between the formation and the Earth will be 50 million kilometres.
To eliminate non-gravitational forces such as light pressure and solar wind on the test masses, each spacecraft is constructed as a zero-drag satellite. The test mass floats free inside, effectively in free-fall, while the spacecraft around it absorbs all these local non-gravitational forces. Then, using capacitive sensing to determine the spacecraft's position relative to the mass, very precise thrusters adjust the spacecraft so that it follows, keeping itself centered around the mass.
Arm length
The longer the arms, the more sensitive the detector is to long-period gravitational waves, but its sensitivity to wavelengths shorter than the arms is reduced (2,500,000 km corresponds to about 8.3 light-seconds, or a characteristic frequency of about 0.12 Hz; compare to LIGO's peak sensitivity around 500 Hz). As the satellites are free-flying, the spacing is easily adjusted before launch, with upper bounds being imposed by the sizes of the telescopes required at each end of the interferometer (which are constrained by the size of the launch vehicle's payload fairing) and the stability of the constellation orbit (larger constellations are more sensitive to the gravitational effects of other planets, limiting the mission lifetime). Another length-dependent factor which must be compensated for is the "point-ahead angle" between the incoming and outgoing laser beams; the telescope must receive its incoming beam from where its partner was a few seconds ago, but send its outgoing beam to where its partner will be a few seconds from now.
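The arm-length trade-off above is simple arithmetic on the light-travel time. The Python sketch below computes the one-way light time and the corresponding characteristic frequency for 2.5 Gm arms; it is an order-of-magnitude illustration only.

```python
c = 2.99792458e8   # speed of light, m/s
L = 2.5e9          # LISA arm length, m (2.5 million km)

light_time = L / c   # one-way light-travel time along an arm
f_char = c / L       # frequency whose wavelength equals the arm length

print(f"one-way light time : {light_time:.1f} s")   # ~8.3 s
print(f"characteristic freq: {f_char:.2f} Hz")      # ~0.12 Hz; sensitivity falls above this
```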
The original 2008 LISA proposal had arms 5 million kilometres (5 Gm) long. When downscoped to eLISA in 2013, arms of 1 million kilometres were proposed. The approved 2017 LISA proposal has arms 2.5 million kilometres (2.5 Gm) long.
Detection principle
Like most modern gravitational wave-observatories, LISA is based on laser interferometry. Its three satellites form a giant Michelson interferometer in which two "transponder" satellites play the role of reflectors and one "master" satellite the roles of source and observer. When a gravitational wave passes the interferometer, the lengths of the two LISA arms vary due to spacetime distortions caused by the wave. Practically, LISA measures a relative phase shift between one local laser and one distant laser by light interference. Comparison between the observed laser beam frequency (in return beam) and the local laser beam frequency (sent beam) encodes the wave parameters. The principle of laser-interferometric inter-satellite ranging measurements was successfully implemented in the Laser Ranging Interferometer onboard GRACE Follow-On.
Unlike terrestrial gravitational-wave observatories, LISA cannot keep its arms "locked" in position at a fixed length. Instead, the distances between satellites vary significantly over each year's orbit, and the detector must keep track of the constantly changing distance, counting the millions of wavelengths by which the distance changes each second. Then, the signals are separated in the frequency domain: changes with periods of less than a day are signals of interest, while changes with periods of a month or more are irrelevant.
This difference means that LISA cannot use high-finesse Fabry–Pérot resonant arm cavities and signal recycling systems like terrestrial detectors, limiting its length-measurement accuracy. But with arms almost a million times longer, the motions to be detected are correspondingly larger.
LISA Pathfinder
An ESA test mission called LISA Pathfinder (LPF) was launched in 2015 to test the technology necessary to put a test mass in (almost) perfect free fall conditions. LPF consists of a single spacecraft with one of the LISA interferometer arms shortened to about 38 cm, so that it fits inside a single spacecraft. The spacecraft reached its operational location in heliocentric orbit at the Lagrange point L1 on 22 January 2016, where it underwent payload commissioning. Scientific research started on March 8, 2016. The goal of LPF was to demonstrate a noise level 10 times worse than needed for LISA. However, LPF exceeded this goal by a large margin, approaching the LISA requirement noise levels.
Science goals
Gravitational-wave astronomy seeks to use direct measurements of gravitational waves to study astrophysical systems and to test Einstein's theory of gravity. Indirect evidence of gravitational waves was derived from observations of the decreasing orbital periods of several binary pulsars, such as the Hulse–Taylor pulsar. In February 2016, the Advanced LIGO project announced that it had directly detected gravitational waves from a black hole merger.
Observing gravitational waves requires two things: a strong source of gravitational waves—such as the merger of two black holes—and extremely high detection sensitivity. A LISA-like instrument should be able to measure relative displacements with a resolution of 20 picometres—less than the diameter of a helium atom—over a distance of a million kilometres, yielding a strain sensitivity of better than 1 part in 10²⁰ in the low-frequency band around a millihertz.
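The displacement figure quoted above follows from multiplying the target strain by the arm length. A minimal Python check:

```python
L = 2.5e9   # arm length, m
h = 1e-20   # target strain sensitivity, dimensionless

delta_L = h * L   # arm-length change a passing wave of strain h produces
print(f"delta L ~ {delta_L:.1e} m")  # ~2.5e-11 m, i.e. tens of picometres,
                                     # consistent with the ~20 pm resolution quoted
```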
A LISA-like detector is sensitive to the low-frequency band of the gravitational-wave spectrum, which contains many astrophysically interesting sources. Such a detector would observe signals from binary stars within our galaxy (the Milky Way); signals from binary supermassive black holes in other galaxies; and extreme-mass-ratio inspirals and bursts produced by a stellar-mass compact object orbiting a supermassive black hole. There are also more speculative signals such as signals from cosmological phase transitions, cosmic strings and primordial gravitational waves generated during cosmological inflation.
Galactic compact binaries
LISA will be able to detect the nearly monochromatic gravitational waves emanating from close binaries consisting of two compact stellar objects (white dwarfs, neutron stars, and black holes) in the Milky Way. At low frequencies these are actually expected to be so numerous that they form a source of (foreground) noise for LISA data analysis. At higher frequencies LISA is expected to detect and resolve around 25,000 galactic compact binaries. Studying the distribution of the masses, periods, and locations of this population will teach us about the formation and evolution of binary systems in the galaxy. Furthermore, LISA will be able to resolve 10 binaries currently known from electromagnetic observations (and find ≈500 more with electromagnetic counterparts within one square degree). Joint study of these systems will allow inference on other dissipation mechanisms in these systems, e.g. through tidal interactions. One of the currently known binaries that LISA will be able to resolve is the white dwarf binary ZTF J1539+5027, with a period of 6.91 minutes, the second-shortest-period binary white dwarf pair discovered to date.
Planets of compact binaries
LISA will also be able to detect the presence of large planets and brown dwarfs orbiting white dwarf binaries. The number of such detections in the Milky Way is estimated to range from 17 in a pessimistic scenario to more than 2000 in an optimistic scenario, and even extragalactic detections in the Magellanic Clouds might be possible, far beyond the current capabilities of other detection methods for exoplanets.
Massive black hole mergers
LISA will be able to detect the gravitational waves from the merger of a pair of massive black holes with a chirp mass between 10⁴ and 10⁷ solar masses all the way back to their earliest formation at redshift around z ≈ 10. The most conservative population models expect at least a few such events to happen each year. For mergers closer by (z < 3), it will be able to determine the spins of the components, which carry information about the past evolution of the components (e.g. whether they have grown primarily through accretion or mergers). For mergers around the peak of star formation (z ≈ 2) LISA will be able to locate mergers within 100 square degrees on the night sky at least 24 hours before the actual merger, allowing electromagnetic telescopes to search for counterparts, with the potential of witnessing the formation of a quasar after a merger.
Extreme mass ratio inspirals
Extreme mass ratio inspirals (EMRIs) consist of a stellar compact object (<60 solar masses) on a slowly decaying orbit around a massive black hole of around 10⁵ solar masses. For the ideal case of a prograde orbit around a (nearly) maximally spinning black hole, LISA will be able to detect these events up to z = 4. EMRIs are interesting because they are slowly evolving, spending around 10⁵ orbits and between a few months and a few years in the LISA sensitivity band before merging. This allows very accurate (up to an error of 1 in 10⁴) measurements of the properties of the system, including the mass and spin of the central object and the mass and orbital elements (eccentricity and inclination) of the smaller object. EMRIs are expected to occur regularly in the centers of most galaxies and in dense star clusters. Conservative population estimates predict at least one detectable event per year for LISA.
Intermediate mass black hole binaries
LISA will also be able to detect the gravitational waves emanating from black hole binary mergers where the lighter black hole is in the intermediate black hole range (between 10² and 10⁴ solar masses). In the case of both components being intermediate black holes between 600 and 10⁴ solar masses, LISA will be able to detect events up to redshifts around 1. In the case of an intermediate mass black hole spiralling into a massive black hole (between 10⁴ and 10⁶ solar masses) events will be detectable up to at least z = 3. Since little is known about the population of intermediate mass black holes, there is no good estimate of the event rates for these events.
Multi-band gravitational wave astronomy
Following the announcement of the first gravitational wave detection, GW150914, it was realized that a similar event would be detectable by LISA well before the merger. Based on the LIGO estimated event rates, it is expected that LISA will detect and resolve about 100 binaries that would merge a few weeks to months later in the LIGO detection band. LISA will be able to accurately predict the time of merger ahead of time and locate the event with 1 square degree on the sky. This will greatly aid the possibilities for searches for electromagnetic counterpart events.
Fundamental black hole physics
Gravitational wave signals from black holes could provide hints at a more fundamental theory of gravity. LISA will be able to test possible modifications of Einstein's general theory of relativity, motivated by dark energy or dark matter. These could manifest, for example, through modifications of the propagation of gravitational waves, or through the possibility of hairy black holes.
Probe expansion of the universe
LISA will be able to independently measure the redshift and distance of events occurring relatively close by (z < 0.1) through the detection of massive black hole mergers and EMRIs. Consequently, it can make an independent measurement of the Hubble parameter H₀ that does not depend on the use of the cosmic distance ladder. The accuracy of such a determination is limited by the sample size and therefore the mission duration. With a mission lifetime of 4 years one expects to be able to determine H₀ with an absolute error of 0.01 (km/s)/Mpc. At larger ranges LISA events can (stochastically) be linked to electromagnetic counterparts, to further constrain the expansion curve of the universe.
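At low redshift the inference reduces to the linear Hubble law: a gravitational-wave source with a measured luminosity distance and an independently measured redshift gives H₀ ≈ cz/d. The Python sketch below illustrates this with hypothetical event values.

```python
c_km_s = 2.99792458e5   # speed of light, km/s

def hubble_constant(z, d_mpc):
    """Low-redshift estimate H0 ~ c*z/d (valid only for z << 1)."""
    return c_km_s * z / d_mpc

# Hypothetical 'standard siren': GW luminosity distance with an
# electromagnetically identified host galaxy providing the redshift.
z = 0.05
d_mpc = 210.0
print(f"H0 ~ {hubble_constant(z, d_mpc):.1f} (km/s)/Mpc")
```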
Gravitational wave background
LISA will be sensitive to the stochastic gravitational wave background generated in the early universe through various channels, including inflation, first-order cosmological phase transitions related to spontaneous symmetry breaking, and cosmic strings.
Exotic sources
LISA will also search for currently unknown (and unmodelled) sources of gravitational waves. The history of astrophysics has shown that whenever a new frequency range/medium of detection is available new unexpected sources show up. This could for example include kinks and cusps in cosmic strings.
Memory effects
LISA will be sensitive to the permanent displacement induced on probe masses by gravitational waves, known as gravitational memory effect.
Other gravitational-wave experiments
Previous searches for gravitational waves in space were conducted for short periods by planetary missions that had other primary science objectives (such as Cassini–Huygens), using microwave Doppler tracking to monitor fluctuations in the Earth–spacecraft distance. By contrast, LISA is a dedicated mission that will use laser interferometry to achieve a much higher sensitivity.
Other gravitational wave antennas, such as LIGO, Virgo, and GEO600, are already in operation on Earth, but their sensitivity at low frequencies is limited by the largest practical arm lengths, by seismic noise, and by interference from nearby moving masses. Conversely, NANOGrav measures frequencies too low for LISA. The different types of gravitational wave measurement systems — LISA, NANOGrav and ground-based detectors — are complementary rather than competitive, much like astronomical observatories in different electromagnetic bands (e.g., ultraviolet and infrared).
History
The first design studies for a gravitational-wave detector to be flown in space were performed in the 1980s under the name LAGOS (Laser Antenna for Gravitational radiation Observation in Space). LISA was first proposed as a mission to ESA in the early 1990s, first as a candidate for the M3 cycle, and later as a 'cornerstone mission' for the 'Horizon 2000 Plus' program. As the decade progressed, the design was refined to a triangular configuration of three spacecraft with three 5-million-kilometre arms. This mission was pitched as a joint mission between ESA and NASA in 1997.
In the 2000s the joint ESA/NASA LISA mission was identified as a candidate for the 'L1' slot in ESA's Cosmic Vision 2015–2025 programme. However, due to budget cuts, NASA announced in early 2011 that it would not be contributing to any of ESA's L-class missions. ESA nonetheless decided to push the program forward, and instructed the L1 candidate missions to present reduced cost versions that could be flown within ESA's budget. A reduced version of LISA was designed with only two 1-million-kilometre arms under the name NGO (New/Next Gravitational wave Observatory). Despite NGO being ranked highest in terms of scientific potential, ESA decided to fly Jupiter Icy Moons Explorer (JUICE) as its L1 mission. One of the main concerns was that the LISA Pathfinder mission had been experiencing technical delays, making it uncertain if the technology would be ready for the projected L1 launch date.
Soon afterwards, ESA announced it would be selecting themes for its Large class L2 and L3 mission slots. A theme called "the Gravitational Universe" was formulated with the reduced NGO rechristened eLISA as a straw-man mission. In November 2013, ESA announced that it selected "the Gravitational Universe" for its L3 mission slot (expected launch in 2034). Following the successful detection of gravitational waves by the ground-based LIGO detectors in September 2015, NASA expressed interest in rejoining the mission as a junior partner. In response to an ESA call for mission proposals for the 'Gravitational Universe'-themed L3 mission, a mission proposal for a detector with three 2.5-million-kilometre arms, again called LISA, was submitted in January 2017.
As of January 2024, LISA is expected to launch in 2035 on an Ariane 6, two years earlier than previously announced.
See also
Beyond Einstein program – NASA
Big Bang Observer – proposed LISA successor
Cosmic Vision program – ESA
Deci-hertz Interferometer Gravitational wave Observatory (DECIGO) – proposed Japanese space based gravitational-wave observatory
Satellite formation flying
References
External links
CERN experiments
Cosmic Vision
European Space Agency space probes
Future spaceflights
Interferometric gravitational-wave instruments
Proposed space probes
Space telescopes
Space-based laser
2035 in science | Laser Interferometer Space Antenna | [
"Astronomy"
] | 3,923 | [
"Space telescopes"
] |
364,372 | https://en.wikipedia.org/wiki/Gelfand%20representation | In mathematics, the Gelfand representation in functional analysis (named after I. M. Gelfand) is either of two things:
a way of representing commutative Banach algebras as algebras of continuous functions;
the fact that for commutative C*-algebras, this representation is an isometric isomorphism.
In the former case, one may regard the Gelfand representation as a far-reaching generalization of the Fourier transform of an integrable function. In the latter case, the Gelfand–Naimark representation theorem is one avenue in the development of spectral theory for normal operators, and generalizes the notion of diagonalizing a normal matrix.
Historical remarks
One of Gelfand's original applications (and one which historically motivated much of the study of Banach algebras) was to give a much shorter and more conceptual proof of a celebrated lemma of Norbert Wiener (see the citation below), characterizing the elements of the group algebras L1(R) and ℓ1(Z) whose translates span dense subspaces in the respective algebras.
The model algebra
For any locally compact Hausdorff topological space X, the space C0(X) of continuous complex-valued functions on X which vanish at infinity is in a natural way a commutative C*-algebra:
The algebra structure over the complex numbers is obtained by considering the pointwise operations of addition and multiplication.
The involution is pointwise complex conjugation.
The norm is the uniform norm on functions.
The importance of X being locally compact and Hausdorff is that this turns X into a completely regular space. In such a space every closed subset of X is the common zero set of a family of continuous complex-valued functions on X, allowing one to recover the topology of X from C0(X).
Note that C0(X) is unital if and only if X is compact, in which case C0(X) is equal to C(X), the algebra of all continuous complex-valued functions on X.
Gelfand representation of a commutative Banach algebra
Let A be a commutative Banach algebra, defined over the field C of complex numbers. A non-zero algebra homomorphism φ : A → C (a multiplicative linear functional) is called a character of A; the set of all characters of A is denoted by ΦA.
It can be shown that every character on A is automatically continuous, and hence ΦA is a subset of the space A* of continuous linear functionals on A; moreover, when equipped with the relative weak-* topology, ΦA turns out to be locally compact and Hausdorff. (This follows from the Banach–Alaoglu theorem.) The space ΦA is compact (in the topology just defined) if and only if the algebra A has an identity element.
Given a ∈ A, one defines the function â : ΦA → C by â(φ) = φ(a). The definition of ΦA and the topology on it ensure that â is continuous and vanishes at infinity, and that the map a ↦ â defines a norm-decreasing, unit-preserving algebra homomorphism from A to C0(ΦA). This homomorphism is the Gelfand representation of A, and â is the Gelfand transform of the element a. In general, the representation is neither injective nor surjective.
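In display form, the definitions just given read (a restatement in LaTeX notation, not new material; the norm inequality expresses that the map is norm-decreasing):

```latex
\hat{a}(\varphi) = \varphi(a) \quad (\varphi \in \Phi_A), \qquad
\Gamma : A \to C_0(\Phi_A),\ \ \Gamma(a) = \hat{a}, \qquad
\|\hat{a}\|_\infty \le \|a\|.
```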
In the case where A has an identity element, there is a bijection between ΦA and the set of maximal ideals in A (this relies on the Gelfand–Mazur theorem). As a consequence, the kernel of the Gelfand representation may be identified with the Jacobson radical of A. Thus the Gelfand representation is injective if and only if A is (Jacobson) semisimple.
Examples
The Banach space L1(R) is a Banach algebra under convolution, the group algebra of R. Its character space is homeomorphic to R, and the Gelfand transform of f ∈ L1(R) is the Fourier transform of f. Similarly, for the group algebra of the multiplicative group of positive reals, the Gelfand transform is the Mellin transform.
For the algebra ℓ∞ of bounded sequences, the representation space is the Stone–Čech compactification βN. More generally, if X is a completely regular Hausdorff space, then the representation space of the Banach algebra Cb(X) of bounded continuous functions on X is the Stone–Čech compactification βX of X.
The C*-algebra case
As motivation, consider the special case A = C0(X). Given x in X, let evx : A → C be pointwise evaluation at x, i.e. evx(f) = f(x). Then evx is a character on A, and it can be shown that all characters of A are of this form; a more precise analysis shows that we may identify ΦA with X, not just as sets but as topological spaces. The Gelfand representation is then an isomorphism between C0(X) and C0(ΦA).
The spectrum of a commutative C*-algebra
The spectrum or Gelfand space of a commutative C*-algebra A, denoted Â, consists of the set of non-zero *-homomorphisms from A to the complex numbers. Elements of the spectrum are called characters on A. (It can be shown that every algebra homomorphism from A to the complex numbers is automatically a *-homomorphism, so that this definition of the term 'character' agrees with the one above.)
In particular, the spectrum of a commutative C*-algebra is a locally compact Hausdorff space: In the unital case, i.e. where the C*-algebra has a multiplicative unit element 1, all characters f must be unital, i.e. f(1) is the complex number one. This excludes the zero homomorphism. So  is closed under weak-* convergence and the spectrum is actually compact. In the non-unital case, the weak-* closure of  is  ∪ {0}, where 0 is the zero homomorphism, and the removal of a single point from a compact Hausdorff space yields a locally compact Hausdorff space.
Note that spectrum is an overloaded word. It also refers to the spectrum σ(x) of an element x of an algebra with unit 1, that is the set of complex numbers r for which x − r 1 is not invertible in A. For unital C*-algebras, the two notions are connected in the following way: σ(x) is the set of complex numbers f(x) where f ranges over Gelfand space of A. Together with the spectral radius formula, this shows that  is a subset of the unit ball of A* and as such can be given the relative weak-* topology. This is the topology of pointwise convergence. A net {fk}k of elements of the spectrum of A converges to f if and only if for each x in A, the net of complex numbers {fk(x)}k converges to f(x).
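Stated in display form, the relationship just described between the two notions of spectrum, together with the spectral radius formula it relies on, reads:

```latex
\sigma(x) = \{\, f(x) : f \in \hat{A} \,\}, \qquad
r(x) = \sup_{\lambda \in \sigma(x)} |\lambda| = \lim_{n \to \infty} \|x^n\|^{1/n}.
```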
If A is a separable C*-algebra, the weak-* topology is metrizable on bounded subsets. Thus the spectrum of a separable commutative C*-algebra A can be regarded as a metric space. So the topology can be characterized via convergence of sequences.
Equivalently, σ(x) is the range of γ(x), where γ is the Gelfand representation.
Statement of the commutative Gelfand–Naimark theorem
Let A be a commutative C*-algebra and let X be the spectrum of A. Let γ : A → C0(X) be the Gelfand representation defined above.
Theorem. The Gelfand map γ is an isometric *-isomorphism from A onto C0(X).
See the Arveson reference below.
The spectrum of a commutative C*-algebra can also be viewed as the set of all maximal ideals m of A, with the hull-kernel topology. (See the earlier remarks for the general, commutative Banach algebra case.) For any such m the quotient algebra A/m is one-dimensional (by the Gelfand–Mazur theorem), and therefore any a in A gives rise to a complex-valued function on the space of maximal ideals.
In the case of C*-algebras with unit, the spectrum map gives rise to a contravariant functor from the category of commutative C*-algebras with unit and unit-preserving continuous *-homomorphisms, to the category of compact Hausdorff spaces and continuous maps. This functor is one half of a contravariant equivalence between these two categories (its adjoint being the functor that assigns to each compact Hausdorff space X the C*-algebra C0(X)). In particular, given compact Hausdorff spaces X and Y, then C(X) is isomorphic to C(Y) (as a C*-algebra) if and only if X is homeomorphic to Y.
The 'full' Gelfand–Naimark theorem is a result for arbitrary (abstract) noncommutative C*-algebras A, which though not quite analogous to the Gelfand representation, does provide a concrete representation of A as an algebra of operators.
Applications
One of the most significant applications is the existence of a continuous functional calculus for normal elements in C*-algebra A: An element x is normal if and only if x commutes with its adjoint x*, or equivalently if and only if it generates a commutative C*-algebra C*(x). By the Gelfand isomorphism applied to C*(x) this is *-isomorphic to an algebra of continuous functions on a locally compact space. This observation leads almost immediately to:
Theorem. Let A be a C*-algebra with identity and x a normal element of A. Then there is a *-morphism f → f(x) from the algebra of continuous functions on the spectrum σ(x) into A such that
It maps 1 to the multiplicative identity of A;
It maps the identity function on the spectrum to x.
This allows us to apply continuous functions to bounded normal operators on Hilbert space.
References
Banach algebras
C*-algebras
Functional analysis
Operator theory
Von Neumann algebras | Gelfand representation | [
"Mathematics"
] | 2,063 | [
"Functional analysis",
"Functions and mappings",
"Mathematical relations",
"Mathematical objects"
] |
364,377 | https://en.wikipedia.org/wiki/Functional%20equation | In mathematics, a functional equation is, in the broadest meaning, an equation in which one or several functions appear as unknowns. So, differential equations and integral equations are functional equations. However, a more restricted meaning is often used, where a functional equation is an equation that relates several values of the same function. For example, the logarithm functions are essentially characterized by the logarithmic functional equation log(xy) = log(x) + log(y).
If the domain of the unknown function is supposed to be the natural numbers, the function is generally viewed as a sequence, and, in this case, a functional equation (in the narrower meaning) is called a recurrence relation. Thus the term functional equation is used mainly for real functions and complex functions. Moreover a smoothness condition is often assumed for the solutions, since without such a condition, most functional equations have very irregular solutions. For example, the gamma function is a function that satisfies the functional equation f(x + 1) = x f(x) and the initial value f(1) = 1. There are many functions that satisfy these conditions, but the gamma function is the unique one that is meromorphic in the whole complex plane, and logarithmically convex for x real and positive (Bohr–Mollerup theorem).
Examples
Recurrence relations can be seen as functional equations in functions over the integers or natural numbers, in which the differences between terms' indexes can be seen as an application of the shift operator. For example, the recurrence relation defining the Fibonacci numbers, F(n) = F(n − 1) + F(n − 2), where F(0) = 0 and F(1) = 1
f(x + a) = f(x), which characterizes the periodic functions with period a
f(−x) = f(x), which characterizes the even functions, and likewise f(−x) = −f(x), which characterizes the odd functions
f(f(x)) = g(x), which characterizes the functional square roots of the function g
f(x + y) = f(x) + f(y) (Cauchy's functional equation), satisfied by linear maps. The equation may, contingent on the axiom of choice, also have other pathological nonlinear solutions, whose existence can be proven with a Hamel basis for the real numbers
f(x + y) = f(x)f(y), satisfied by all exponential functions. Like Cauchy's additive functional equation, this too may have pathological, discontinuous solutions
f(xy) = f(x) + f(y), satisfied by all logarithmic functions and, over coprime integer arguments, additive functions
f(xy) = f(x)f(y), satisfied by all power functions and, over coprime integer arguments, multiplicative functions
f(x + y) + f(x − y) = 2f(x) + 2f(y) (quadratic equation or parallelogram law)
f((x + y)/2) = (f(x) + f(y))/2 (Jensen's functional equation)
f(x + y) + f(x − y) = 2f(x)f(y) (d'Alembert's functional equation)
f(h(x)) = f(x) + 1 (Abel equation)
f(h(x)) = cf(x) (Schröder's equation)
f(h(x)) = f(x)^c (Böttcher's equation)
f(h(x)) = h′(x)f(x) (Julia's equation)
f(x + y) = g1(x)h1(y) + ⋯ + gn(x)hn(y) (Levi-Civita)
f(x + y) = f(x)g(y) + g(x)f(y) (sine addition formula and hyperbolic sine addition formula)
f(x + y) = f(x)f(y) − g(x)g(y) (cosine addition formula)
f(x + y) = f(x)f(y) + g(x)g(y) (hyperbolic cosine addition formula)
The commutative and associative laws are functional equations. In its familiar form, the associative law is expressed by writing the binary operation in infix notation, a ∘ (b ∘ c) = (a ∘ b) ∘ c, but if we write f(a, b) instead of a ∘ b, then the associative law looks more like a conventional functional equation, f(f(a, b), c) = f(a, f(b, c)).
The functional equation ζ(s) = 2^s π^(s−1) sin(πs/2) Γ(1 − s) ζ(1 − s) is satisfied by the Riemann zeta function ζ. The capital Γ denotes the gamma function.
The gamma function is the unique solution of the following system of three equations:
f(x) = f(x + 1)/x
f(x)f(1 − x) = π/sin(πx) (Euler's reflection formula)
f(x)f(x + 1/2) = (√π/2^(2x − 1)) f(2x) (Legendre's duplication formula)
The functional equation f((az + b)/(cz + d)) = (cz + d)^k f(z), where a, b, c, d are integers satisfying ad − bc = 1, defines f to be a modular form of order k.
One feature that all of the examples listed above have in common is that, in each case, two or more known functions (sometimes multiplication by a constant, sometimes addition of two variables, sometimes the identity function) are inside the argument of the unknown functions to be solved for.
When it comes to asking for all solutions, it may be the case that conditions from mathematical analysis should be applied; for example, in the case of the Cauchy equation mentioned above, the solutions that are continuous functions are the 'reasonable' ones, while other solutions that are not likely to have practical application can be constructed (by using a Hamel basis for the real numbers as a vector space over the rational numbers). The Bohr–Mollerup theorem is another well-known example.
Involutions
The involutions are characterized by the functional equation $f(f(x)) = x$. These appear in Babbage's functional equation (1820), $f(f(x)) = x$, a simple solution of which is $f(x) = 1 - x$.
Other involutions, and solutions of the equation, include $f(x) = a - x$,
$f(x) = \frac{a}{x}$, and
$f(x) = \frac{b - x}{1 + cx}$, which includes the previous three as special cases or limits (these are verified numerically below).
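These involution families can be checked mechanically; the following sketch (plain Python, with arbitrary parameter values chosen so no denominator vanishes) confirms f(f(x)) = x at a few points:

```python
# Verify f(f(x)) = x for the involutions listed above.
import math

a, b, c = 5.0, 2.0, 3.0   # arbitrary sample parameters
involutions = [
    lambda x: a - x,                  # f(x) = a - x
    lambda x: a / x,                  # f(x) = a/x
    lambda x: (b - x) / (1 + c * x),  # f(x) = (b - x)/(1 + cx)
]
for f in involutions:
    for x in [0.3, 1.1, 4.0]:
        assert math.isclose(f(f(x)), x)
print("f(f(x)) = x holds at the sampled points")
```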
Solution
One method of solving elementary functional equations is substitution.
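A worked illustration (a standard textbook-style exercise, not taken from any particular source): given an equation relating f(x) and f(1/x), the substitution x ↦ 1/x yields a second equation, and the two can be solved as a linear system:

```latex
% Solve  f(x) + 2 f(1/x) = 3x  for all x \neq 0.
% Substituting x \mapsto 1/x gives a second equation in the same unknowns:
\begin{aligned}
f(x) + 2 f(1/x) &= 3x, \\
f(1/x) + 2 f(x) &= 3/x.
\end{aligned}
% Multiply the second equation by 2 and subtract the first:
% 3 f(x) = 6/x - 3x, hence
f(x) = \frac{2}{x} - x.
% Check: (2/x - x) + 2\,(2x - 1/x) = 3x, as required.
```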
Some solutions to functional equations have exploited surjectivity, injectivity, oddness, and evenness.
Some functional equations have been solved with the use of ansatzes and mathematical induction.
Some classes of functional equations can be solved by computer-assisted techniques.
In dynamic programming a variety of successive approximation methods are used to solve Bellman's functional equation, including methods based on fixed point iterations.
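A minimal sketch of one such fixed point iteration (value iteration) follows, on a tiny, invented two-state decision problem; the rewards, transitions, and discount factor are arbitrary illustrative choices:

```python
# Solve the Bellman equation V(s) = max_a [ r(s,a) + GAMMA * V(next(s,a)) ]
# by repeatedly applying the Bellman operator (a contraction for GAMMA < 1).
GAMMA = 0.9
STATES, ACTIONS = [0, 1], [0, 1]
reward = [[1.0, 0.0],    # reward[s][a] for this toy problem
          [0.0, 2.0]]
next_state = [[0, 1],    # next_state[s][a]: deterministic transitions
              [0, 1]]

V = [0.0, 0.0]
for _ in range(500):     # iterate V <- T(V) until approximate convergence
    V = [max(reward[s][a] + GAMMA * V[next_state[s][a]] for a in ACTIONS)
         for s in STATES]
print(V)  # converges to the unique fixed point, here [18.0, 20.0]
```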
See also
Functional equation (L-function)
Bellman equation
Dynamic programming
Implicit function
Functional differential equation
Notes
References
János Aczél, Lectures on Functional Equations and Their Applications, Academic Press, 1966, reprinted by Dover Publications.
János Aczél & J. Dhombres, Functional Equations in Several Variables, Cambridge University Press, 1989.
C. Efthimiou, Introduction to Functional Equations, AMS, 2011; online.
Pl. Kannappan, Functional Equations and Inequalities with Applications, Springer, 2009.
Marek Kuczma, Introduction to the Theory of Functional Equations and Inequalities, second edition, Birkhäuser, 2009.
Henrik Stetkær, Functional Equations on Groups, first edition, World Scientific Publishing, 2013.
External links
Functional Equations: Exact Solutions at EqWorld: The World of Mathematical Equations.
Functional Equations: Index at EqWorld: The World of Mathematical Equations.
IMO Compendium text (archived) on functional equations in problem solving. | Functional equation | [
"Mathematics"
] | 1,196 | [
"Mathematical analysis",
"Mathematical objects",
"Functional equations",
"Equations"
] |
364,380 | https://en.wikipedia.org/wiki/Quantum%20foam | Quantum foam (or spacetime foam, or spacetime bubble) is a theoretical quantum fluctuation of spacetime on very small scales due to quantum mechanics. The theory predicts that at this small scale, particles of matter and antimatter are constantly created and destroyed. These subatomic objects are called virtual particles. The idea was devised by John Wheeler in 1955.
Background
With an incomplete theory of quantum gravity, it is impossible to be certain what spacetime looks like at small scales. However, there is no definitive reason that spacetime needs to be fundamentally smooth. It is possible that instead, in a quantum theory of gravity, spacetime would consist of many small, ever-changing regions in which space and time are not definite, but fluctuate in a foam-like manner.
Wheeler suggested that the uncertainty principle might imply that over sufficiently small distances and sufficiently brief intervals of time, the "very geometry of spacetime fluctuates". These fluctuations could be large enough to cause significant departures from the smooth spacetime seen at macroscopic scales, giving spacetime a "foamy" character.
Experimental results
The experimental proof of the Casimir effect, which is possibly caused by virtual particles, is strong evidence for the existence of virtual particles. The g-2 experiment, which predicts the strength of magnets formed by muons and electrons, also supports their existence.
In 2005, during observations of gamma-ray photons arriving from the blazar Markarian 501, the MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov) telescopes detected that some of the photons at different energy levels arrived at different times. This suggested that some of the photons had moved more slowly, in violation of special relativity's notion that the speed of light is constant, a discrepancy which could be explained by the irregularity of quantum foam. Subsequent experiments were, however, unable to confirm the supposed variation in the speed of light due to the graininess of space.
Other experiments involving the polarization of light from distant gamma ray bursts have also produced contradictory results. More Earth-based experiments are ongoing or proposed.
Constraints on the size of quantum fluctuations
The fluctuations characteristic of a spacetime foam would be expected to occur on a length scale on the order of the Planck length (≈ 10⁻³⁵ m), but some models of quantum gravity predict much larger fluctuations.
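For orientation, the Planck length and Planck time follow from the standard combinations $\ell_P = \sqrt{\hbar G / c^3}$ and $t_P = \sqrt{\hbar G / c^5}$; the snippet below evaluates them from rounded values of the constants, so the results are approximate:

```python
# Planck length and Planck time from (rounded) physical constants.
hbar = 1.054571817e-34   # reduced Planck constant, J s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

l_planck = (hbar * G / c**3) ** 0.5   # ~1.6e-35 m
t_planck = (hbar * G / c**5) ** 0.5   # ~5.4e-44 s
print(f"Planck length ~ {l_planck:.3e} m, Planck time ~ {t_planck:.3e} s")
```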
Photons should be slowed by quantum foam, with the rate depending on the wavelength of the photons. This would violate Lorentz invariance. But observations of radiation from nearby quasars by Floyd Stecker of NASA's Goddard Space Flight Center failed to find evidence of violation of Lorentz invariance.
A foamy spacetime also sets limits on the accuracy with which distances can be measured because photons should diffuse randomly through a spacetime foam, similar to light diffusing by passing through fog. This should cause the image quality of very distant objects observed through telescopes to degrade. X-ray and gamma-ray observations of quasars using NASA's Chandra X-ray Observatory, the Fermi Gamma-ray Space Telescope and ground-based gamma-ray observations from the Very Energetic Radiation Imaging Telescope Array (VERITAS) showed no detectable degradation at the farthest observed distances, implying that spacetime is smooth at least down to distances 1000 times smaller than the nucleus of a hydrogen atom, setting a bound on the size of quantum fluctuations of spacetime.
Relation to other theories
The vacuum fluctuations provide vacuum with a non-zero energy known as vacuum energy.
Spin foam theory is a modern attempt to make Wheeler's idea quantitative.
See also
False vacuum
Geon
Hawking radiation
Holographic principle
Loop quantum gravity
Lorentzian wormhole
Planck time
Stochastic quantum mechanics
String theory
Wormhole
Virtual black hole
Notes
References
Minkel, J. R. (24 November 2003). "Borrowed Time: Interview with Michio Kaku". Scientific American
Swarup, A. (2006). "Sights set on quantum froth". New Scientist, 189, p. 18, accessed 10 February 2012
Quantum gravity
Wormhole theory | Quantum foam | [
"Physics",
"Astronomy"
] | 837 | [
"Astronomical hypotheses",
"Unsolved problems in physics",
"Quantum gravity",
"Physics beyond the Standard Model",
"Wormhole theory"
] |
364,460 | https://en.wikipedia.org/wiki/Ring%20of%20symmetric%20functions | In algebra and in particular in algebraic combinatorics, the ring of symmetric functions is a specific limit of the rings of symmetric polynomials in n indeterminates, as n goes to infinity. This ring serves as universal structure in which relations between symmetric polynomials can be expressed in a way independent of the number n of indeterminates (but its elements are neither polynomials nor functions). Among other things, this ring plays an important role in the representation theory of the symmetric group.
The ring of symmetric functions can be given a coproduct and a bilinear form making it into a positive selfadjoint graded Hopf algebra that is both commutative and cocommutative.
Symmetric polynomials
The study of symmetric functions is based on that of symmetric polynomials. In a polynomial ring in some finite set of indeterminates, a polynomial is called symmetric if it stays the same whenever the indeterminates are permuted in any way. More formally, there is an action by ring automorphisms of the symmetric group Sn on the polynomial ring in n indeterminates, where a permutation acts on a polynomial by simultaneously substituting each of the indeterminates for another according to the permutation used. The invariants for this action form the subring of symmetric polynomials. If the indeterminates are X1, ..., Xn, then examples of such symmetric polynomials are
$X_1 + X_2 + \cdots + X_n$ and $X_1 X_2 \cdots X_n$.
A somewhat more complicated example is
$X_1^3 X_2 X_3 + X_1 X_2^3 X_3 + X_1 X_2 X_3^3 + X_1^3 X_2 X_4 + X_1 X_2^3 X_4 + X_1 X_2 X_4^3 + \cdots$
where the summation goes on to include all products of the third power of some variable and two other variables. There are many specific kinds of symmetric polynomials, such as elementary symmetric polynomials, power sum symmetric polynomials, monomial symmetric polynomials, complete homogeneous symmetric polynomials, and Schur polynomials.
The ring of symmetric functions
Most relations between symmetric polynomials do not depend on the number n of indeterminates, other than that some polynomials in the relation might require n to be large enough in order to be defined. For instance the Newton's identity for the third power sum polynomial p3 leads to
$p_3(X_1,\ldots,X_n) = e_1(X_1,\ldots,X_n)^3 - 3e_2(X_1,\ldots,X_n)e_1(X_1,\ldots,X_n) + 3e_3(X_1,\ldots,X_n),$
where the $e_k$ denote elementary symmetric polynomials; this formula is valid for all natural numbers n, and the only notable dependency on n is that ek(X1,...,Xn) = 0 whenever n < k. One would like to write this as an identity
$p_3 = e_1^3 - 3e_2e_1 + 3e_3$
that does not depend on n at all, and this can be done in the ring of symmetric functions. In that ring there are nonzero elements ek for all integers k ≥ 1, and any element of the ring can be given by a polynomial expression in the elements ek.
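The identity is easy to confirm by machine for any fixed small n. The sketch below (sympy, with the symmetric polynomials written out by hand rather than via any special-purpose routine) checks it for n = 4:

```python
# Check p3 = e1^3 - 3 e2 e1 + 3 e3 in four indeterminates by expansion.
from itertools import combinations
from sympy import symbols, expand, prod

X = symbols('x1:5')  # x1, x2, x3, x4
e1 = sum(X)
e2 = sum(prod(c) for c in combinations(X, 2))
e3 = sum(prod(c) for c in combinations(X, 3))
p3 = sum(x**3 for x in X)

assert expand(p3 - (e1**3 - 3*e2*e1 + 3*e3)) == 0
print("p3 = e1^3 - 3 e2 e1 + 3 e3 holds for n = 4")
```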
Definitions
A ring of symmetric functions can be defined over any commutative ring R, and will be denoted ΛR; the basic case is for R = Z. The ring ΛR is in fact a graded R-algebra. There are two main constructions for it; the first one given below can be found in (Stanley, 1999), and the second is essentially the one given in (Macdonald, 1979).
As a ring of formal power series
The easiest (though somewhat heavy) construction starts with the ring of formal power series over R in infinitely (countably) many indeterminates; the elements of this power series ring are formal infinite sums of terms, each of which consists of a coefficient from R multiplied by a monomial, where each monomial is a product of finitely many finite powers of indeterminates. One defines ΛR as its subring consisting of those power series S that satisfy
S is invariant under any permutation of the indeterminates, and
the degrees of the monomials occurring in S are bounded.
Note that because of the second condition, power series are used here only to allow infinitely many terms of a fixed degree, rather than to sum terms of all possible degrees. Allowing this is necessary because an element that contains for instance a term X1 should also contain a term Xi for every i > 1 in order to be symmetric. Unlike the whole power series ring, the subring ΛR is graded by the total degree of monomials: due to condition 2, every element of ΛR is a finite sum of homogeneous elements of ΛR (which are themselves infinite sums of terms of equal degree). For every k ≥ 0, the element ek ∈ ΛR is defined as the formal sum of all products of k distinct indeterminates, which is clearly homogeneous of degree k.
As an algebraic limit
Another construction of ΛR takes somewhat longer to describe, but better indicates the relationship with the rings R[X1,...,Xn]Sn of symmetric polynomials in n indeterminates. For every n there is a surjective ring homomorphism ρn from the analogous ring R[X1,...,Xn+1]Sn+1 with one more indeterminate onto R[X1,...,Xn]Sn, defined by setting the last indeterminate Xn+1 to 0. Although ρn has a non-trivial kernel, the nonzero elements of that kernel have degree at least n + 1 (they are multiples of X1X2...Xn+1). This means that the restriction of ρn to elements of degree at most n is a bijective linear map, and ρn(ek(X1,...,Xn+1)) = ek(X1,...,Xn) for all k ≤ n. The inverse of this restriction can be extended uniquely to a ring homomorphism φn from R[X1,...,Xn]Sn to R[X1,...,Xn+1]Sn+1, as follows for instance from the fundamental theorem of symmetric polynomials. Since the images φn(ek(X1,...,Xn)) = ek(X1,...,Xn+1) for k = 1,...,n are still algebraically independent over R, the homomorphism φn is injective and can be viewed as a (somewhat unusual) inclusion of rings; applying φn to a polynomial amounts to adding all monomials containing the new indeterminate obtained by symmetry from monomials already present. The ring ΛR is then the "union" (direct limit) of all these rings subject to these inclusions. Since all φn are compatible with the grading by total degree of the rings involved, ΛR obtains the structure of a graded ring.
This construction differs slightly from the one in (Macdonald, 1979). That construction only uses the surjective morphisms ρn without mentioning the injective morphisms φn: it constructs the homogeneous components of ΛR separately, and equips their direct sum with a ring structure using the ρn. It is also observed that the result can be described as an inverse limit in the category of graded rings. That description however somewhat obscures an important property typical for a direct limit of injective morphisms, namely that every individual element (symmetric function) is already faithfully represented in some object used in the limit construction, here a ring R[X1,...,Xd]Sd. It suffices to take for d the degree of the symmetric function, since the part in degree d of that ring is mapped isomorphically to rings with more indeterminates by φn for all n ≥ d. This implies that for studying relations between individual elements, there is no fundamental difference between symmetric polynomials and symmetric functions.
Defining individual symmetric functions
The name "symmetric function" for elements of ΛR is a misnomer: in neither construction are the elements functions, and in fact, unlike symmetric polynomials, no function of independent variables can be associated to such elements (for instance e1 would be the sum of all infinitely many variables, which is not defined unless restrictions are imposed on the variables). However the name is traditional and well established; it can be found both in (Macdonald, 1979), which says (footnote on p. 12)
The elements of Λ (unlike those of Λn) are no longer polynomials: they are formal infinite sums of monomials. We have therefore reverted to the older terminology of symmetric functions.
(here Λn denotes the ring of symmetric polynomials in n indeterminates), and also in (Stanley, 1999).
To define a symmetric function one must either indicate directly a power series as in the first construction, or give a symmetric polynomial in n indeterminates for every natural number n in a way compatible with the second construction. An expression in an unspecified number of indeterminates may do both, for instance
$\sum_{i<j} X_i X_j$ can be taken as the definition of an elementary symmetric function (namely $e_2$) if the number of indeterminates is infinite, or as the definition of an elementary symmetric polynomial in any finite number of indeterminates. Symmetric polynomials for the same symmetric function should be compatible with the homomorphisms ρn (decreasing the number of indeterminates is obtained by setting some of them to zero, so that the coefficients of any monomial in the remaining indeterminates is unchanged), and their degree should remain bounded. (An example of a family of symmetric polynomials that fails both conditions is $\prod_{i=1}^n X_i$; the family $\prod_{i=1}^n (X_i + 1) - 1$ fails only the second condition.) Any symmetric polynomial in n indeterminates can be used to construct a compatible family of symmetric polynomials, using the homomorphisms ρi for i < n to decrease the number of indeterminates, and φi for i ≥ n to increase the number of indeterminates (which amounts to adding all monomials in new indeterminates obtained by symmetry from monomials already present).
The following are fundamental examples of symmetric functions.
The monomial symmetric functions mα. Suppose α = (α1,α2,...) is a sequence of non-negative integers, only finitely many of which are non-zero. Then we can consider the monomial defined by α: $X^\alpha = X_1^{\alpha_1} X_2^{\alpha_2} X_3^{\alpha_3} \cdots$. Then mα is the symmetric function determined by Xα, i.e. the sum of all monomials obtained from Xα by symmetry. For a formal definition, define β ~ α to mean that the sequence β is a permutation of the sequence α and set
$m_\alpha = \sum_{\beta \sim \alpha} X^\beta.$
This symmetric function corresponds to the monomial symmetric polynomial mα(X1,...,Xn) for any n large enough to have the monomial Xα. The distinct monomial symmetric functions are parametrized by the integer partitions (each mα has a unique representative monomial Xλ with the parts λi in weakly decreasing order). Since any symmetric function containing any of the monomials of some mα must contain all of them with the same coefficient, each symmetric function can be written as an R-linear combination of monomial symmetric functions, and the distinct monomial symmetric functions therefore form a basis of ΛR as an R-module. (A worked low-degree example of the families defined here follows below.)
The elementary symmetric functions ek, for any natural number k; one has ek = mα where $\alpha = (1^k)$, the sequence with k entries equal to 1 followed by zeros. As a power series, this is the sum of all distinct products of k distinct indeterminates. This symmetric function corresponds to the elementary symmetric polynomial ek(X1,...,Xn) for any n ≥ k.
The power sum symmetric functions pk, for any positive integer k; one has pk = m(k), the monomial symmetric function for the monomial $X_1^k$. This symmetric function corresponds to the power sum symmetric polynomial $p_k(X_1,\ldots,X_n) = X_1^k + \cdots + X_n^k$ for any n ≥ 1.
The complete homogeneous symmetric functions hk, for any natural number k; hk is the sum of all monomial symmetric functions mα where α is a partition of k. As a power series, this is the sum of all monomials of degree k, which is what motivates its name. This symmetric function corresponds to the complete homogeneous symmetric polynomial hk(X1,...,Xn) for any n ≥ k.
The Schur functions sλ for any partition λ, which corresponds to the Schur polynomial sλ(X1,...,Xn) for any n large enough to have the monomial Xλ.
There is no power sum symmetric function p0: although it is possible (and in some contexts natural) to define $p_0(X_1,\ldots,X_n) = n$ as a symmetric polynomial in n variables, these values are not compatible with the morphisms ρn. The "discriminant" $\prod_{i<j}(X_i - X_j)^2$ is another example of an expression giving a symmetric polynomial for all n, but not defining any symmetric function. The expressions defining Schur polynomials as a quotient of alternating polynomials are somewhat similar to that for the discriminant, but the polynomials sλ(X1,...,Xn) turn out to be compatible for varying n, and therefore do define a symmetric function.
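As a concrete low-degree illustration of these families, written out in three indeterminates (the worked example referred to in the first item of the list above):

```latex
\begin{aligned}
m_{(2,1)} &= X_1^2X_2 + X_1X_2^2 + X_1^2X_3 + X_1X_3^2 + X_2^2X_3 + X_2X_3^2,\\
e_2 &= m_{(1,1)} = X_1X_2 + X_1X_3 + X_2X_3,\\
p_2 &= m_{(2)} = X_1^2 + X_2^2 + X_3^2,\\
h_2 &= m_{(2)} + m_{(1,1)} = X_1^2 + X_2^2 + X_3^2 + X_1X_2 + X_1X_3 + X_2X_3.
\end{aligned}
```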
A principle relating symmetric polynomials and symmetric functions
For any symmetric function P, the corresponding symmetric polynomials in n indeterminates for any natural number n may be designated by P(X1,...,Xn). The second definition of the ring of symmetric functions implies the following fundamental principle:
If P and Q are symmetric functions of degree d, then one has the identity $P = Q$ of symmetric functions if and only if one has the identity P(X1,...,Xd) = Q(X1,...,Xd) of symmetric polynomials in d indeterminates. In this case one has in fact P(X1,...,Xn) = Q(X1,...,Xn) for any number n of indeterminates.
This is because one can always reduce the number of variables by substituting zero for some variables, and one can increase the number of variables by applying the homomorphisms φn; the definition of those homomorphisms assures that φn(P(X1,...,Xn)) = P(X1,...,Xn+1) (and similarly for Q) whenever n ≥ d. See a proof of Newton's identities for an effective application of this principle.
Properties of the ring of symmetric functions
Identities
The ring of symmetric functions is a convenient tool for writing identities between symmetric polynomials that are independent of the number of indeterminates: in ΛR there is no such number, yet by the above principle any identity in ΛR automatically gives identities the rings of symmetric polynomials over R in any number of indeterminates. Some fundamental identities are
$\sum_{i=0}^{k} (-1)^i e_i h_{k-i} = 0$ for all k > 0 (with $e_0 = h_0 = 1$), which shows a symmetry between elementary and complete homogeneous symmetric functions; these relations are explained under complete homogeneous symmetric polynomial.
the Newton identities, which also have a variant for complete homogeneous symmetric functions: $ke_k = \sum_{i=1}^{k} (-1)^{i-1} p_i e_{k-i}$ and $kh_k = \sum_{i=1}^{k} p_i h_{k-i}$.
Structural properties of ΛR
Important properties of ΛR include the following.
The set of monomial symmetric functions parametrized by partitions forms a basis of ΛR as a graded R-module, those parametrized by partitions of d being homogeneous of degree d; the same is true for the set of Schur functions (also parametrized by partitions).
ΛR is isomorphic as a graded R-algebra to a polynomial ring R[Y1,Y2, ...] in infinitely many variables, where Yi is given degree i for all i > 0, one isomorphism being the one that sends Yi to ei ∈ ΛR for every i.
There is an involutory automorphism ω of ΛR that interchanges the elementary symmetric functions ei and the complete homogeneous symmetric function hi for all i. It also sends each power sum symmetric function pi to (−1)i−1pi, and it permutes the Schur functions among each other, interchanging sλ and sλt where λt is the transpose partition of λ.
Property 2 is the essence of the fundamental theorem of symmetric polynomials. It immediately implies some other properties:
The subring of ΛR generated by its elements of degree at most n is isomorphic to the ring of symmetric polynomials over R in n variables;
The Hilbert–Poincaré series of ΛR is $\prod_{i \geq 1} \frac{1}{1 - t^i}$, the generating function of the integer partitions (this also follows from property 1);
For every n > 0, the R-module formed by the homogeneous part of ΛR of degree n, modulo its intersection with the subring generated by its elements of degree strictly less than n, is free of rank 1, and (the image of) en is a generator of this R-module;
For every family of symmetric functions (fi)i>0 in which fi is homogeneous of degree i and gives a generator of the free R-module of the previous point (for all i), there is an alternative isomorphism of graded R-algebras from R[Y1,Y2, ...] as above to ΛR that sends Yi to fi; in other words, the family (fi)i>0 forms a set of free polynomial generators of ΛR.
This final point applies in particular to the family (hi)i>0 of complete homogeneous symmetric functions.
If R contains the field of rational numbers, it applies also to the family (pi)i>0 of power sum symmetric functions. This explains why the first n elements of each of these families define sets of symmetric polynomials in n variables that are free polynomial generators of that ring of symmetric polynomials.
The fact that the complete homogeneous symmetric functions form a set of free polynomial generators of ΛR already shows the existence of an automorphism ω sending the elementary symmetric functions to the complete homogeneous ones, as mentioned in property 3. The fact that ω is an involution of ΛR follows from the symmetry between elementary and complete homogeneous symmetric functions expressed by the first set of relations given above.
The ring of symmetric functions ΛZ is the Exp ring of the integers Z. It is also a lambda-ring in a natural fashion; in fact it is the universal lambda-ring in one generator.
Generating functions
The first definition of ΛR as a subring of $R[[X_1, X_2, \ldots]]$ allows the generating functions of several sequences of symmetric functions to be elegantly expressed. Contrary to the relations mentioned earlier, which are internal to ΛR, these expressions involve operations taking place in R[[X1,X2,...;t]] but outside its subring ΛR[[t]], so they are meaningful only if symmetric functions are viewed as formal power series in indeterminates Xi. We shall write "(X)" after the symmetric functions to stress this interpretation.
The generating function for the elementary symmetric functions is
$E(t) = \sum_{k \geq 0} e_k(X) t^k = \prod_{i \geq 1} (1 + X_i t).$
Similarly one has for complete homogeneous symmetric functions
$H(t) = \sum_{k \geq 0} h_k(X) t^k = \prod_{i \geq 1} \frac{1}{1 - X_i t}.$
The obvious fact that $E(-t)H(t) = 1 = E(t)H(-t)$ explains the symmetry between elementary and complete homogeneous symmetric functions.
The generating function for the power sum symmetric functions can be expressed as
$P(t) = \sum_{k > 0} p_k(X) t^k = \sum_{i \geq 1} \frac{X_i t}{1 - X_i t} = \frac{t E'(-t)}{E(-t)} = \frac{t H'(t)}{H(t)}$
((Macdonald, 1979) defines P(t) as $\sum_{k>0} p_k(X) t^{k-1}$, and its expressions therefore lack a factor t with respect to those given here). The two final expressions, involving the formal derivatives of the generating functions E(t) and H(t), imply Newton's identities and their variants for the complete homogeneous symmetric functions. These expressions are sometimes written as
$P(t) = t \frac{d}{dt} \log(H(t)) = -t \frac{d}{dt} \log(E(-t)),$
which amounts to the same, but requires that R contain the rational numbers, so that the logarithm of power series with constant term 1 is defined (by $\log(f) = \sum_{i \geq 1} \frac{(-1)^{i-1}}{i} (f - 1)^i$).
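The identity $P(t) = tH'(t)/H(t)$ can be confirmed exactly for any fixed finite number of indeterminates; the sketch below (sympy, four variables) performs such a check — a verification in one ring, not a proof for the full ring of symmetric functions:

```python
# Verify t H'(t)/H(t) = sum_i X_i t / (1 - X_i t)  (= sum_{k>0} p_k t^k)
# for four indeterminates, by exact rational-function cancellation.
from sympy import symbols, prod, diff, cancel

t = symbols('t')
X = symbols('x1:5')
H = prod(1 / (1 - x * t) for x in X)        # H(t) for n = 4
lhs = t * diff(H, t) / H                    # t H'(t) / H(t)
rhs = sum(x * t / (1 - x * t) for x in X)   # generating function of the p_k
assert cancel(lhs - rhs) == 0
print("P(t) = t H'(t)/H(t) holds for n = 4")
```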
Specializations
Let $\Lambda_R$ be the ring of symmetric functions and $A$ a commutative algebra with unit element. An algebra homomorphism $\varphi \colon \Lambda_R \to A$ is called a specialization.
Example:
Given some real numbers $a$ and $b$, the substitution $X_1 = a$, $X_2 = b$ and $X_n = 0$ for $n > 2$ is a specialization.
Let $\varphi$ be the substitution $X_i = q^{i-1}$ for all i, i.e., evaluation at $(1, q, q^2, \ldots)$; then $\varphi$ is called the principal specialization.
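For example, applying the principal specialization to the generating functions $H(t)$ and $E(t)$ above and extracting the coefficient of $t^k$ gives the classical evaluations (standard q-series facts, stated here without proof):

```latex
h_k(1, q, q^2, \ldots) = \prod_{i=1}^{k} \frac{1}{1 - q^i},
\qquad
e_k(1, q, q^2, \ldots) = \frac{q^{\binom{k}{2}}}{\prod_{i=1}^{k} (1 - q^i)}.
```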
See also
Newton's identities
Quasisymmetric function
References
Macdonald, I. G. Symmetric functions and Hall polynomials. Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, Oxford, 1979. viii+180 pp.
Macdonald, I. G. Symmetric functions and Hall polynomials. Second edition. Oxford Mathematical Monographs. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1995. x+475 pp.
Stanley, Richard P. Enumerative Combinatorics, Vol. 2, Cambridge University Press, 1999.
Polynomials
Invariant theory
Algebraic combinatorics
Permutations
Types of functions | Ring of symmetric functions | [
"Physics",
"Mathematics"
] | 4,261 | [
"Symmetry",
"Functions and mappings",
"Group actions",
"Permutations",
"Polynomials",
"Mathematical objects",
"Combinatorics",
"Symmetric functions",
"Fields of abstract algebra",
"Invariant theory",
"Mathematical relations",
"Algebraic combinatorics",
"Types of functions",
"Algebra"
] |
364,478 | https://en.wikipedia.org/wiki/Absorption%20spectroscopy | Absorption spectroscopy is spectroscopy that involves techniques that measure the absorption of electromagnetic radiation, as a function of frequency or wavelength, due to its interaction with a sample. The sample absorbs energy, i.e., photons, from the radiating field. The intensity of the absorption varies as a function of frequency, and this variation is the absorption spectrum. Absorption spectroscopy is performed across the electromagnetic spectrum.
Absorption spectroscopy is employed as an analytical chemistry tool to determine the presence of a particular substance in a sample and, in many cases, to quantify the amount of the substance present. Infrared and ultraviolet–visible spectroscopy are particularly common in analytical applications. Absorption spectroscopy is also employed in studies of molecular and atomic physics, astronomical spectroscopy and remote sensing.
There is a wide range of experimental approaches for measuring absorption spectra. The most common arrangement is to direct a generated beam of radiation at a sample and detect the intensity of the radiation that passes through it. The transmitted energy can be used to calculate the absorption. The source, sample arrangement and detection technique vary significantly depending on the frequency range and the purpose of the experiment.
The major types of absorption spectroscopy are distinguished by the spectral region and the kind of transition involved; specific techniques are listed under Specific approaches below.
Absorption spectrum
A material's absorption spectrum is the fraction of incident radiation absorbed by the material over a range of frequencies of electromagnetic radiation. The absorption spectrum is primarily determined by the atomic and molecular composition of the material. Radiation is more likely to be absorbed at frequencies that match the energy difference between two quantum mechanical states of the molecules. The absorption that occurs due to a transition between two states is referred to as an absorption line and a spectrum is typically composed of many lines.
The frequencies at which absorption lines occur, as well as their relative intensities, primarily depend on the electronic and molecular structure of the sample. The frequencies will also depend on the interactions between molecules in the sample, the crystal structure in solids, and on several environmental factors (e.g., temperature, pressure, electric field, magnetic field). The lines will also have a width and shape that are primarily determined by the spectral density or the density of states of the system.
Theory
Absorption lines are typically classified by the nature of the quantum mechanical change induced in the molecule or atom. Rotational lines, for instance, occur when the rotational state of a molecule is changed. Rotational lines are typically found in the microwave spectral region. Vibrational lines correspond to changes in the vibrational state of the molecule and are typically found in the infrared region. Electronic lines correspond to a change in the electronic state of an atom or molecule and are typically found in the visible and ultraviolet region. X-ray absorptions are associated with the excitation of inner shell electrons in atoms. These changes can also be combined (e.g. rotation–vibration transitions), leading to new absorption lines at the combined energy of the two changes.
The energy associated with the quantum mechanical change primarily determines the frequency of the absorption line but the frequency can be shifted by several types of interactions. Electric and magnetic fields can cause a shift. Interactions with neighboring molecules can cause shifts. For instance, absorption lines of the gas phase molecule can shift significantly when that molecule is in a liquid or solid phase and interacting more strongly with neighboring molecules.
The width and shape of absorption lines are determined by the instrument used for the observation, the material absorbing the radiation and the physical environment of that material. It is common for lines to have the shape of a Gaussian or Lorentzian distribution. It is also common for a line to be described solely by its intensity and width instead of the entire shape being characterized.
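For concreteness, the sketch below implements area-normalized Gaussian and Lorentzian profiles parametrized by the line center and the full width at half maximum (a generic illustration, not tied to any particular instrument or data set):

```python
# Area-normalized Gaussian and Lorentzian line-shape profiles.
import math

def gaussian(x, x0, fwhm):
    sigma = fwhm / (2 * math.sqrt(2 * math.log(2)))  # FWHM -> std. deviation
    return math.exp(-0.5 * ((x - x0) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def lorentzian(x, x0, fwhm):
    gamma = fwhm / 2  # half width at half maximum
    return (gamma / math.pi) / ((x - x0) ** 2 + gamma ** 2)

# The Lorentzian carries much more intensity in its far wings:
for dx in (0.0, 1.0, 3.0):
    print(dx, gaussian(dx, 0.0, 1.0), lorentzian(dx, 0.0, 1.0))
```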
The integrated intensity—obtained by integrating the area under the absorption line—is proportional to the amount of the absorbing substance present. The intensity is also related to the temperature of the substance and the quantum mechanical interaction between the radiation and the absorber. This interaction is quantified by the transition moment and depends on the particular lower state the transition starts from, and the upper state it is connected to.
The width of absorption lines may be determined by the spectrometer used to record it. A spectrometer has an inherent limit on how narrow a line it can resolve and so the observed width may be at this limit. If the width is larger than the resolution limit, then it is primarily determined by the environment of the absorber. A liquid or solid absorber, in which neighboring molecules strongly interact with one another, tends to have broader absorption lines than a gas. Increasing the temperature or pressure of the absorbing material will also tend to increase the line width. It is also common for several neighboring transitions to be close enough to one another that their lines overlap and the resulting overall line is therefore broader yet.
Relation to transmission spectrum
Absorption and transmission spectra represent equivalent information and one can be calculated from the other through a mathematical transformation. A transmission spectrum will have its maximum intensities at wavelengths where the absorption is weakest because more light is transmitted through the sample. An absorption spectrum will have its maximum intensities at wavelengths where the absorption is strongest.
Relation to emission spectrum
Emission is a process by which a substance releases energy in the form of electromagnetic radiation. Emission can occur at any frequency at which absorption can occur, and this allows the absorption lines to be determined from an emission spectrum. The emission spectrum will typically have a quite different intensity pattern from the absorption spectrum, though, so the two are not equivalent. The absorption spectrum can be calculated from the emission spectrum using Einstein coefficients.
Relation to scattering and reflection spectra
The scattering and reflection spectra of a material are influenced by both its refractive index and its absorption spectrum. In an optical context, the absorption spectrum is typically quantified by the extinction coefficient, and the extinction and index coefficients are quantitatively related through the Kramers–Kronig relations. Therefore, the absorption spectrum can be derived from a scattering or reflection spectrum. This typically requires simplifying assumptions or models, and so the derived absorption spectrum is an approximation.
Applications
Absorption spectroscopy is useful in chemical analysis because of its specificity and its quantitative nature. The specificity of absorption spectra allows compounds to be distinguished from one another in a mixture, making absorption spectroscopy useful in wide variety of applications. For instance, Infrared gas analyzers can be used to identify the presence of pollutants in the air, distinguishing the pollutant from nitrogen, oxygen, water, and other expected constituents.
The specificity also allows unknown samples to be identified by comparing a measured spectrum with a library of reference spectra. In many cases, it is possible to determine qualitative information about a sample even if it is not in a library. Infrared spectra, for instance, have characteristic absorption bands that indicate whether carbon-hydrogen or carbon-oxygen bonds are present.
An absorption spectrum can be quantitatively related to the amount of material present using the Beer–Lambert law. Determining the absolute concentration of a compound requires knowledge of the compound's absorption coefficient. The absorption coefficient for some compounds is available from reference sources, and it can also be determined by measuring the spectrum of a calibration standard with a known concentration of the target.
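A minimal sketch of such a quantification, assuming a hypothetical molar absorption coefficient and a 1 cm cuvette (all numbers are illustrative, not measured values):

```python
# Beer-Lambert quantification: A = epsilon * l * c.
import math

def absorbance_from_transmittance(T):
    """A = -log10(T), where T = I/I0 is the measured transmittance."""
    return -math.log10(T)

epsilon = 5500.0  # molar absorption coefficient, L mol^-1 cm^-1 (hypothetical)
path_cm = 1.0     # path length of a standard 1 cm cuvette

T = 0.32                                # measured transmittance (illustrative)
A = absorbance_from_transmittance(T)    # ~0.495
c = A / (epsilon * path_cm)             # concentration in mol/L
print(f"A = {A:.3f}, c = {c:.3e} mol/L")
```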
Remote sensing
One of the unique advantages of spectroscopy as an analytical technique is that measurements can be made without bringing the instrument and sample into contact. Radiation that travels between a sample and an instrument will contain the spectral information, so the measurement can be made remotely. Remote spectral sensing is valuable in many situations. For example, measurements can be made in toxic or hazardous environments without placing an operator or instrument at risk. Also, sample material does not have to be brought into contact with the instrument—preventing possible cross contamination.
Remote spectral measurements present several challenges compared to laboratory measurements. The space in between the sample of interest and the instrument may also have spectral absorptions. These absorptions can mask or confound the absorption spectrum of the sample. These background interferences may also vary over time. The source of radiation in remote measurements is often an environmental source, such as sunlight or the thermal radiation from a warm object, and this makes it necessary to distinguish spectral absorption from changes in the source spectrum.
To meet these challenges, differential optical absorption spectroscopy has gained some popularity, as it focuses on differential absorption features and omits broad-band absorption such as aerosol extinction and extinction due to Rayleigh scattering. This method is applied to ground-based, airborne, and satellite-based measurements. Some ground-based methods provide the possibility to retrieve tropospheric and stratospheric trace gas profiles.
Astronomy
Astronomical spectroscopy is a particularly significant type of remote spectral sensing. In this case, the objects and samples of interest are so distant from earth that electromagnetic radiation is the only means available to measure them. Astronomical spectra contain both absorption and emission spectral information. Absorption spectroscopy has been particularly important for understanding interstellar clouds and determining that some of them contain molecules. Absorption spectroscopy is also employed in the study of extrasolar planets. Detection of extrasolar planets by transit photometry also measures their absorption spectrum and allows for the determination of the planet's atmospheric composition, temperature, pressure, and scale height, and hence allows also for the determination of the planet's mass.
Atomic and molecular physics
Theoretical models, principally quantum mechanical models, allow for the absorption spectra of atoms and molecules to be related to other physical properties such as electronic structure, atomic or molecular mass, and molecular geometry. Therefore, measurements of the absorption spectrum are used to determine these other properties. Microwave spectroscopy, for example, allows for the determination of bond lengths and angles with high precision.
In addition, spectral measurements can be used to determine the accuracy of theoretical predictions. For example, the Lamb shift measured in the hydrogen atomic absorption spectrum was not expected to exist at the time it was measured. Its discovery spurred and guided the development of quantum electrodynamics, and measurements of the Lamb shift are now used to determine the fine-structure constant.
Experimental methods
Basic approach
The most straightforward approach to absorption spectroscopy is to generate radiation with a source, measure a reference spectrum of that radiation with a detector and then re-measure the sample spectrum after placing the material of interest in between the source and detector. The two measured spectra can then be combined to determine the material's absorption spectrum. The sample spectrum alone is not sufficient to determine the absorption spectrum because it will be affected by the experimental conditions—the spectrum of the source, the absorption spectra of other materials between the source and detector, and the wavelength dependent characteristics of the detector. The reference spectrum will be affected in the same way, though, by these experimental conditions and therefore the combination yields the absorption spectrum of the material alone.
A wide variety of radiation sources are employed in order to cover the electromagnetic spectrum. For spectroscopy, it is generally desirable for a source to cover a broad swath of wavelengths in order to measure a broad region of the absorption spectrum. Some sources inherently emit a broad spectrum. Examples of these include globars or other black body sources in the infrared, mercury lamps in the visible and ultraviolet, and X-ray tubes. One recently developed, novel source of broad spectrum radiation is synchrotron radiation, which covers all of these spectral regions. Other radiation sources generate a narrow spectrum, but the emission wavelength can be tuned to cover a spectral range. Examples of these include klystrons in the microwave region and lasers across the infrared, visible, and ultraviolet region (though not all lasers have tunable wavelengths).
The detector employed to measure the radiation power will also depend on the wavelength range of interest. Most detectors are sensitive to a fairly broad spectral range and the sensor selected will often depend more on the sensitivity and noise requirements of a given measurement. Examples of detectors common in spectroscopy include heterodyne receivers in the microwave, bolometers in the millimeter-wave and infrared, mercury cadmium telluride and other cooled semiconductor detectors in the infrared, and photodiodes and photomultiplier tubes in the visible and ultraviolet.
If both the source and the detector cover a broad spectral region, then it is also necessary to introduce a means of resolving the wavelength of the radiation in order to determine the spectrum. Often a spectrograph is used to spatially separate the wavelengths of radiation so that the power at each wavelength can be measured independently. It is also common to employ interferometry to determine the spectrum—Fourier transform infrared spectroscopy is a widely used implementation of this technique.
Two other issues that must be considered in setting up an absorption spectroscopy experiment include the optics used to direct the radiation and the means of holding or containing the sample material (called a cuvette or cell). For most UV, visible, and NIR measurements the use of precision quartz cuvettes is necessary. In both cases, it is important to select materials that have relatively little absorption of their own in the wavelength range of interest. The absorption of other materials could interfere with or mask the absorption from the sample. For instance, in several wavelength ranges it is necessary to measure the sample under vacuum or in a noble gas environment because gases in the atmosphere have interfering absorption features.
Specific approaches
Astronomical spectroscopy
Cavity ring-down spectroscopy (CRDS)
Laser absorption spectrometry (LAS)
Mössbauer spectroscopy
Photoacoustic spectroscopy
Photoemission spectroscopy
Photothermal optical microscopy
Photothermal spectroscopy
Reflectance spectroscopy
Reflection-absorption infrared spectroscopy (RAIRS)
Total absorption spectroscopy (TAS)
Tunable diode laser absorption spectroscopy (TDLAS)
X-ray absorption fine structure (XAFS)
X-ray absorption near edge structure (XANES)
See also
Absorption (optics)
Densitometry
HITRAN
Infrared gas analyzer
Infrared spectroscopy of metal carbonyls
Lyman-alpha forest
Optical density
Photoemission spectroscopy
Transparent materials
Water absorption
White cell (spectroscopy)
X-ray absorption spectroscopy
References
External links
Solar absorption spectrum (archived)
WEBB Space Telescope, Part 3 of a series: Spectroscopy 101 – Types of Spectra and Spectroscopy
Plot Absorption Intensity for many molecules in HITRAN database
Analytical chemistry
Astrochemistry
Electromagnetic radiation
Radiation
Scientific techniques
Spectroscopy | Absorption spectroscopy | [
"Physics",
"Chemistry",
"Astronomy"
] | 2,838 | [
"Transport phenomena",
"Physical phenomena",
"Molecular physics",
"Spectrum (physical sciences)",
"Electromagnetic radiation",
"Instrumental analysis",
"Absorption spectroscopy",
"Waves",
"Astrochemistry",
"Radiation",
"nan",
"Spectroscopy",
"Astronomical sub-disciplines"
] |
364,488 | https://en.wikipedia.org/wiki/Projective%20module | In mathematics, particularly in algebra, the class of projective modules enlarges the class of free modules (that is, modules with basis vectors) over a ring, keeping some of the main properties of free modules. Various equivalent characterizations of these modules appear below.
Every free module is a projective module, but the converse fails to hold over some rings, such as Dedekind rings that are not principal ideal domains. However, every projective module is a free module if the ring is a principal ideal domain such as the integers, or a (multivariate) polynomial ring over a field (this is the Quillen–Suslin theorem).
Projective modules were first introduced in 1956 in the influential book Homological Algebra by Henri Cartan and Samuel Eilenberg.
Definitions
Lifting property
The usual category theoretical definition is in terms of the property of lifting that carries over from free to projective modules: a module P is projective if and only if for every surjective module homomorphism $f \colon N \twoheadrightarrow M$ and every module homomorphism $g \colon P \to M$, there exists a module homomorphism $h \colon P \to N$ such that $f \circ h = g$. (We don't require the lifting homomorphism h to be unique; this is not a universal property.)
The advantage of this definition of "projective" is that it can be carried out in categories more general than module categories: we don't need a notion of "free object". It can also be dualized, leading to injective modules. The lifting property may also be rephrased as every morphism from $P$ to $M$ factors through every epimorphism onto $M$. Thus, by definition, projective modules are precisely the projective objects in the category of R-modules.
Split-exact sequences
A module P is projective if and only if every short exact sequence of modules of the form
$0 \to A \to B \to P \to 0$
is a split exact sequence. That is, for every surjective module homomorphism $f \colon B \twoheadrightarrow P$ there exists a section map, that is, a module homomorphism $h \colon P \to B$ such that $f \circ h = \mathrm{id}_P$. In that case, $h(P)$ is a direct summand of B, h is an isomorphism from P to $h(P)$, and $h \circ f$ is a projection on the summand $h(P)$. Equivalently, $B \cong A \oplus P$.
Direct summands of free modules
A module P is projective if and only if there is another module Q such that the direct sum of P and Q is a free module.
Exactness
An R-module P is projective if and only if the covariant functor $\operatorname{Hom}(P, -) \colon R\text{-}\mathbf{Mod} \to \mathbf{Ab}$ is an exact functor, where $R\text{-}\mathbf{Mod}$ is the category of left R-modules and Ab is the category of abelian groups. When the ring R is commutative, Ab is advantageously replaced by $R\text{-}\mathbf{Mod}$ in the preceding characterization. This functor is always left exact, but, when P is projective, it is also right exact. This means that P is projective if and only if this functor preserves epimorphisms (surjective homomorphisms), or if it preserves finite colimits.
Dual basis
A module P is projective if and only if there exists a set $\{a_i \in P \mid i \in I\}$ and a set $\{f_i \colon P \to R \mid i \in I\}$ such that for every x in P, fi (x) is only nonzero for finitely many i, and $x = \sum_i f_i(x)\, a_i$.
Elementary examples and properties
The following properties of projective modules are quickly deduced from any of the above (equivalent) definitions of projective modules:
Direct sums and direct summands of projective modules are projective.
If $e = e^2$ is an idempotent in the ring $R$, then $Re$ is a projective left module over R.
Let $R = S \times T$ be the direct product of two rings $S$ and $T$, which is a ring for the componentwise operations. Let $e_1 = (1, 0)$ and $e_2 = (0, 1).$ Then $e_1$ and $e_2$ are idempotents, and belong to the centre of $R$. The two-sided ideals $Re_1$ and $Re_2$ are projective modules, since their direct sum (as $R$-modules) equals the free $R$-module $R$. However, if $S$ and $T$ are nontrivial, then they are not free as modules over $R = S \times T$. For instance $\mathbb{Z}/2$ is projective but not free over $\mathbb{Z}/6 \cong \mathbb{Z}/2 \times \mathbb{Z}/3$.
Relation to other module-theoretic properties
The relation of projective modules to free and flat modules is subsumed in the following diagram of module properties:
free $\Rightarrow$ projective $\Rightarrow$ flat $\Rightarrow$ torsion-free,
with partial converses labeled by the hypotheses under which they hold: projective $\Rightarrow$ free over a local ring or PID, flat $\Rightarrow$ projective for finitely related modules, and torsion-free $\Rightarrow$ flat over a Prüfer domain.
The left-to-right implications are true over any ring, although some authors define torsion-free modules only over a domain. The right-to-left implications are true over the rings labeling them. There may be other rings over which they are true. For example, the implication labeled "local ring or PID" is also true for (multivariate) polynomial rings over a field: this is the Quillen–Suslin theorem.
Projective vs. free modules
Any free module is projective. The converse is true in the following cases:
if R is a field or skew field: any module is free in this case.
if the ring R is a principal ideal domain. For example, this applies to (the integers), so an abelian group is projective if and only if it is a free abelian group. The reason is that any submodule of a free module over a principal ideal domain is free.
if the ring R is a local ring. This fact is the basis of the intuition of "locally free = projective". This fact is easy to prove for finitely generated projective modules. In general, it is due to Kaplansky; see Kaplansky's theorem on projective modules.
In general though, projective modules need not be free:
Over a direct product of rings $R \times S$ where R and S are nonzero rings, both $R \times 0$ and $0 \times S$ are non-free projective modules.
Over a Dedekind domain a non-principal ideal is always a projective module that is not a free module (a classical instance is sketched after this list).
Over a matrix ring Mn(R), the natural module $R^n$ is projective but is not free when n > 1.
Over a semisimple ring, every module is projective, but a nonzero proper left (or right) ideal is not a free module. Thus the only semisimple rings for which all projectives are free are division rings.
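A classical instance of the Dedekind-domain item above (the sketch referred to in that item):

```latex
% In R = \mathbb{Z}[\sqrt{-5}], the ideal P = (2,\ 1 + \sqrt{-5}) is not
% principal, so it is not free as an R-module (a free ideal would have
% rank 1 and hence be principal). It is nevertheless projective, being
% invertible: a direct computation gives
P^2 = (2), \qquad \text{hence} \qquad P \cdot \tfrac{1}{2} P = R,
% and an invertible ideal is a direct summand of a free module.
```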
The difference between free and projective modules is, in a sense, measured by the algebraic K-theory group K0(R); see below.
Projective vs. flat modules
Every projective module is flat. The converse is in general not true: the abelian group Q is a Z-module that is flat, but not projective.
Conversely, a finitely related flat module is projective.
Govorov (1965) and Lazard (1969) proved that a module M is flat if and only if it is a direct limit of finitely-generated free modules.
In general, the precise relation between flatness and projectivity was established by Raynaud and Gruson, who showed that a module M is projective if and only if it satisfies the following conditions:
M is flat,
M is a direct sum of countably generated modules,
M satisfies a certain Mittag-Leffler-type condition.
This characterization can be used to show that if $R \to S$ is a faithfully flat map of commutative rings and $M$ is an $R$-module, then $M$ is projective if and only if $M \otimes_R S$ is projective (as an $S$-module). In other words, the property of being projective satisfies faithfully flat descent.
The category of projective modules
Submodules of projective modules need not be projective; a ring R for which every submodule of a projective left module is projective is called left hereditary.
Quotients of projective modules also need not be projective, for example Z/n is a quotient of Z, but not torsion-free, hence not flat, and therefore not projective.
The category of finitely generated projective modules over a ring is an exact category. (See also algebraic K-theory).
Projective resolutions
Given a module, M, a projective resolution of M is an infinite exact sequence of modules
⋅⋅⋅ → Pn → ⋅⋅⋅ → P2 → P1 → P0 → M → 0,
with all the Pi s projective. Every module possesses a projective resolution. In fact a free resolution (resolution by free modules) exists. The exact sequence of projective modules may sometimes be abbreviated to $P_\bullet \to M \to 0$ or $P_\bullet \to M$. A classic example of a projective resolution is given by the Koszul complex of a regular sequence, which is a free resolution of the ideal generated by the sequence.
The length of a finite resolution is the index n such that Pn is nonzero and $P_i = 0$ for i greater than n. If M admits a finite projective resolution, the minimal length among all finite projective resolutions of M is called its projective dimension and denoted pd(M). If M does not admit a finite projective resolution, then by convention the projective dimension is said to be infinite. As an example, consider a module M such that $\operatorname{pd}(M) = 0$. In this situation, the exactness of the sequence 0 → P0 → M → 0 indicates that the arrow in the center is an isomorphism, and hence M itself is projective.
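The smallest Koszul complexes make this concrete (standard examples; here $k$ is a field):

```latex
% Over R = k[x], the element x is a regular sequence of length one, and
% multiplication by x gives a free (hence projective) resolution of k = R/(x):
0 \longrightarrow k[x] \xrightarrow{\ \cdot x\ } k[x] \longrightarrow k \longrightarrow 0,
\qquad \operatorname{pd}_{k[x]}(k) = 1.
% Over R = k[x, y], the regular sequence (x, y) gives
0 \longrightarrow R \xrightarrow{\binom{-y}{x}} R^{2} \xrightarrow{(x\ \ y)} R \longrightarrow k \longrightarrow 0,
\qquad \operatorname{pd}_{k[x,y]}(k) = 2.
```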
Projective modules over commutative rings
Projective modules over commutative rings have nice properties.
The localization of a projective module is a projective module over the localized ring.
A projective module over a local ring is free. Thus a projective module is locally free (in the sense that its localization at every prime ideal is free over the corresponding localization of the ring).
The converse is true for finitely generated modules over Noetherian rings: a finitely generated module over a commutative Noetherian ring is locally free if and only if it is projective.
However, there are examples of finitely generated modules over a non-Noetherian ring that are locally free and not projective. For instance,
a Boolean ring has all of its localizations isomorphic to F2, the field of two elements, so any module over a Boolean ring is locally free, but
there are some non-projective modules over Boolean rings. One example is R/I where
R is a direct product of countably many copies of F2 and I is the direct sum of countably many copies of F2 inside of R.
The R-module R/I is locally free since R is Boolean (and it is finitely generated as an R-module too, with a spanning set of size 1), but R/I is not projective because
I is not a principal ideal. (If a quotient module R/I, for any commutative ring R and ideal I, is a projective R-module then I is principal.)
However, it is true that for finitely presented modules M over a commutative ring R (in particular if M is a finitely generated R-module and R is Noetherian), the following are equivalent.
$M$ is flat.
$M$ is projective.
$M_\mathfrak{m}$ is free as an $R_\mathfrak{m}$-module for every maximal ideal $\mathfrak{m}$ of R.
$M_\mathfrak{p}$ is free as an $R_\mathfrak{p}$-module for every prime ideal $\mathfrak{p}$ of R.
There exist $f_1, \ldots, f_n \in R$ generating the unit ideal such that $M_{f_i}$ is free as an $R_{f_i}$-module for each i.
$\widetilde{M}$ is a locally free sheaf on $\operatorname{Spec} R$ (where $\widetilde{M}$ is the sheaf associated to M.)
Moreover, if R is a Noetherian integral domain, then, by Nakayama's lemma, these conditions are equivalent to
The dimension of the $\kappa(\mathfrak{p})$-vector space $M \otimes_R \kappa(\mathfrak{p})$ is the same for all prime ideals $\mathfrak{p}$ of R, where $\kappa(\mathfrak{p})$ is the residue field at $\mathfrak{p}$. That is to say, M has constant rank (as defined below).
Let A be a commutative ring. If B is a (possibly non-commutative) A-algebra that is a finitely generated projective A-module containing A as a subring, then A is a direct factor of B.
Rank
Let P be a finitely generated projective module over a commutative ring R and X be the spectrum of R. The rank of P at a prime ideal $\mathfrak{p}$ in X is the rank of the free $R_\mathfrak{p}$-module $P_\mathfrak{p}$. It is a locally constant function on X. In particular, if X is connected (that is if R has no other idempotents than 0 and 1), then P has constant rank.
Vector bundles and locally free modules
A basic motivation of the theory is that projective modules (at least over certain commutative rings) are analogues of vector bundles. This can be made precise for the ring of continuous real-valued functions on a compact Hausdorff space, as well as for the ring of smooth functions on a smooth manifold (see Serre–Swan theorem that says a finitely generated projective module over the space of smooth functions on a compact manifold is the space of smooth sections of a smooth vector bundle).
Vector bundles are locally free. If there is some notion of "localization" that can be carried over to modules, such as the usual localization of a ring, one can define locally free modules, and the projective modules then typically coincide with the locally free modules.
Projective modules over a polynomial ring
The Quillen–Suslin theorem, which solves Serre's problem, is another deep result: if K is a field, or more generally a principal ideal domain, and is a polynomial ring over K, then every projective module over R is free.
This problem was first raised by Serre with K a field (and the modules being finitely generated). Bass settled it for non-finitely generated modules, and Quillen and Suslin independently and simultaneously treated the case of finitely generated modules.
Since every projective module over a principal ideal domain is free, one might ask this question: if R is a commutative ring such that every (finitely generated) projective R-module is free, then is every (finitely generated) projective R[X]-module free? The answer is no. A counterexample occurs with R equal to the local ring of the curve $y^2 = x^3$ at the origin. Thus the Quillen–Suslin theorem could never be proved by a simple induction on the number of variables.
See also
Projective cover
Schanuel's lemma
Bass cancellation theorem
Modular representation theory
Notes
References
Nicolas Bourbaki, Commutative algebra, Ch. II, §5
Donald S. Passman (2004) A Course in Ring Theory, especially chapter 2 Projective modules, pp 13–22, AMS Chelsea.
Paulo Ribenboim (1969) Rings and Modules, §1.6 Projective modules, pp 19–24, Interscience Publishers.
Charles Weibel, The K-book: An introduction to algebraic K-theory
Further reading
https://mathoverflow.net/questions/272018/faithfully-flat-descent-of-projectivity-for-non-commutative-rings
Homological algebra
Module theory | Projective module | [
"Mathematics"
] | 2,935 | [
"Mathematical structures",
"Fields of abstract algebra",
"Category theory",
"Module theory",
"Homological algebra"
] |
364,643 | https://en.wikipedia.org/wiki/Israel%20Gelfand | Israel Moiseevich Gelfand, also written Israïl Moyseyovich Gel'fand, or Izrail M. Gelfand (2 September 1913 – 5 October 2009) was a prominent Soviet-American mathematician. He made significant contributions to many branches of mathematics, including group theory, representation theory and functional analysis. The recipient of many awards, including the Order of Lenin and the first Wolf Prize, he was a Foreign Fellow of the Royal Society and professor at Moscow State University and, after immigrating to the United States shortly before his 76th birthday, at Rutgers University. Gelfand is also a 1994 MacArthur Fellow.
His legacy continues through his students, who include Endre Szemerédi, Alexandre Kirillov, Edward Frenkel, Joseph Bernstein, David Kazhdan, as well as his own son, Sergei Gelfand.
Early years
A native of Kherson Governorate, Russian Empire (now, Odesa Oblast, Ukraine), Gelfand was born into a Jewish family in the small southern Ukrainian town of Okny. According to his own account, Gelfand was expelled from high school under the Soviets because his father had been a mill owner. Bypassing both high school and college, he proceeded to postgraduate study at the age of 19 at Moscow State University, where his advisor was the preeminent mathematician Andrei Kolmogorov. He received his PhD in 1935.
Gelfand immigrated to the United States in 1989.
Work
Gelfand is known for many developments including:
the book Calculus of Variations (1963), which he co-authored with Sergei Fomin;
Gelfand's formula, which expresses the spectral radius as a limit of matrix norms (stated explicitly after this list);
the Gelfand representation in Banach algebra theory;
the Gelfand–Mazur theorem in Banach algebra theory;
the Gelfand–Naimark theorem;
the Gelfand–Naimark–Segal construction;
Gelfand–Shilov spaces;
the Gelfand–Pettis integral;
the representation theory of the complex classical Lie groups;
contributions to the theory of Verma modules in the representation theory of semisimple Lie algebras (with I. N. Bernstein and S. I. Gelfand);
contributions to distribution theory and measures on infinite-dimensional spaces;
the first observation of the connection of automorphic forms with representations (with Sergei Fomin);
conjectures about the Atiyah–Singer index theorem;
ordinary differential equations (Gelfand–Levitan theory);
work on calculus of variations and soliton theory (Gelfand–Dikii equations);
contributions to the philosophy of cusp forms;
Gelfand–Fuchs cohomology of Lie algebras;
Gelfand–Kirillov dimension;
integral geometry;
combinatorial definition of the Pontryagin class;
Coxeter functors;
general hypergeometric functions;
Gelfand–Tsetlin patterns;
Gelfand–Lokutsievski method;
and many other results, particularly in the representation theory of classical groups.
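Among the items above, Gelfand's formula has a particularly compact statement. For a square complex matrix $A$ and any matrix norm $\|\cdot\|$ (the symbols here are the conventional ones, chosen for this illustration rather than taken from a particular source), the spectral radius $\rho(A)$ satisfies

```latex
\rho(A) \;=\; \lim_{k \to \infty} \big\| A^{k} \big\|^{1/k} .
```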
Gelfand ran a seminar at Moscow State University from 1943 until May 1989 (when it continued at Rutgers University), which covered a wide range of topics and was an important school for many mathematicians.
Influence outside mathematics
The Gelfand–Tsetlin (also spelled Zetlin) basis is a widely used tool in theoretical physics and the result of Gelfand's work on the representation theory of the unitary group and Lie groups in general.
Gelfand also published works on biology and medicine. For a long time he took an interest in cell biology and organized a research seminar on the subject.
He worked extensively in mathematics education, particularly with correspondence education. In 1994, he was awarded a MacArthur Fellowship for this work.
Personal life
Gelfand was married to Zorya Shapiro, and their two sons, Sergei and Vladimir, both live in the United States. The third son, Aleksandr, died of leukemia. Following the divorce from his first wife, Gelfand married his second wife, Tatiana; together they had a daughter, Tatiana. The family also includes four grandchildren and three great-grandchildren. Memories about I. Gelfand are collected at a dedicated website maintained by his family.
Gelfand was an advocate of animal rights. He became a vegetarian in 1994 and vegan in 2000.
Honors and awards
Gelfand held several honorary degrees and was awarded the Order of Lenin three times for his research. In 1977 he was elected a Foreign Member of the Royal Society. He won the Wolf Prize in 1978, Kyoto Prize in 1989 and MacArthur Foundation Fellowship in 1994. He held the presidency of the Moscow Mathematical Society between 1968 and 1970, and was elected a foreign member of the U.S. National Academy of Science, the American Academy of Arts and Sciences, the Royal Irish Academy, the American Mathematical Society and the London Mathematical Society.
In an October 2003 article in The New York Times, written on the occasion of his 90th birthday, Gelfand is described as a scholar who is considered "among the greatest mathematicians of the 20th century", having exerted a tremendous influence on the field both through his own works and those of his students.
Death
Gelfand died at the Robert Wood Johnson University Hospital near his home in Highland Park, New Jersey. He was less than five weeks past his 96th birthday. His death was first reported on the blog of his former collaborator Andrei Zelevinsky and confirmed a few hours later by an obituary in the Russian online newspaper Polit.ru.
Publications
Generalized Functions, Volumes 1–6, American Mathematical Society (2015)
See also
Gelfand duality
Gelfand–Levitan–Marchenko equation
Gelfand pair
Gelfand mapping
Gelfand ring
Gelfand triple
Anti-cosmopolitan campaign
References
Citations
Sources
Chang, Kenneth. "Israel Gelfand, Math Giant, Dies at 96", The New York Times (October 7, 2009)
"Leading mathematician Israel Gelfand dies in N.J." USA Today (October 9, 2009)
"Israel Gelfand | Top mathematician, 96". The Philadelphia Inquirer (October 10, 2009)
"Israel Gelfand" The Daily Telegraph (October 27, 2009)
External links
Israel Moiseevich Gelfand, dedicated site, maintained by Tatiana V. Gelfand and Tatiana I. Gelfand
Israel Gelfand – Daily Telegraph obituary
Israel Gelfand – Guardian obituary
Web page at Rutgers
List of publications.
Steele Prize citation.
The unity of mathematics – In honor of the ninetieth birthday of I. M. Gelfand
Interview: "A talk with professor I. M. Gelfand.", recorded by V. Retakh and A. Sosinsky, Kvant (1989), no. 1, 3–12 (in Russian). English translation in: Quantum (1991), no. 1, 20–26. (Link)
1913 births
2009 deaths
People from Podilsk Raion
People from Ananyevsky Uyezd
Russian Jews
Soviet Jews
Soviet emigrants to the United States
American people of Russian-Jewish descent
Operator theorists
Soviet biologists
Functional analysts
American textbook writers
Fluid dynamicists
Russian mathematicians
Mathematical analysts
Soviet mathematicians
20th-century American biologists
20th-century American mathematicians
21st-century American mathematicians
People from Highland Park, New Jersey
Moscow State University alumni
Full Members of the USSR Academy of Sciences
Full Members of the Russian Academy of Sciences
Members of the French Academy of Sciences
Members of the Royal Irish Academy
Kyoto laureates in Basic Sciences
Foreign associates of the National Academy of Sciences
Foreign members of the Royal Society
MacArthur Fellows
Recipients of the Stalin Prize
Recipients of the Lenin Prize
Recipients of the Order of Friendship of Peoples
Recipients of the Order of Lenin
Recipients of the Order of the Red Banner of Labour
State Prize of the Russian Federation laureates
Wolf Prize in Mathematics laureates
Members of the Royal Swedish Academy of Sciences
Russian scientists | Israel Gelfand | [
"Chemistry",
"Mathematics"
] | 1,616 | [
"Mathematical analysis",
"Fluid dynamicists",
"Mathematical analysts",
"Fluid dynamics"
] |
364,655 | https://en.wikipedia.org/wiki/Timeline%20of%20human%20vaccines | This is a timeline of the development of prophylactic human vaccines. Early vaccines may be listed by the first year of development or testing, but later entries usually show the year the vaccine finished trials and became available on the market. Although vaccines exist for the diseases listed below, only smallpox has been eliminated worldwide. The other vaccine-preventable illnesses continue to cause millions of deaths each year. Currently, polio and measles are the targets of active worldwide eradication campaigns.
18th century
1796 – Edward Jenner develops and documents first vaccine for smallpox.
19th century
1884-1885 – First vaccine for cholera by Jaime Ferran y Clua
1885 – First vaccine for rabies by Louis Pasteur and Émile Roux
1890 – First vaccine for tetanus (serum antitoxin) by Emil von Behring
1896 – First vaccine for typhoid fever by Almroth Edward Wright, Richard Pfeiffer, and Wilhelm Kolle
1897 – First vaccine for bubonic plague by Waldemar Haffkine
20th century
1921 – First vaccine for tuberculosis by Albert Calmette
1923 – First vaccine for diphtheria by Gaston Ramon, Emil von Behring and Kitasato Shibasaburō
1924 – First vaccine for scarlet fever by George F. Dick and Gladys Dick
1924 – First inactive vaccine for tetanus (tetanus toxoid, TT) by Gaston Ramon, C. Zoeller and P. Descombey
1926 – First vaccine for pertussis (whooping cough) by Leila Denmark
1932 – First vaccine for yellow fever by Max Theiler and Jean Laigret
1937 – First vaccine for typhus by Rudolf Weigl, Ludwik Fleck and Hans Zinsser
1937 – First vaccine for influenza by Anatol Smorodintsev
1940 – First vaccine for anthrax
1941 – First vaccine for tick-borne encephalitis
1952 – First injected vaccine for polio
1954 – First vaccine for Japanese encephalitis
1957 – First vaccine for adenovirus-4 and 7
1962 – First oral vaccine for polio
1963 – First vaccine for measles
1967 – First vaccine for mumps
1970 – First vaccine for rubella
1977 – First vaccine for pneumonia (Streptococcus pneumoniae)
1978 – First vaccine for meningitis (Neisseria meningitidis)
1980 – Smallpox declared eradicated worldwide due to vaccination efforts
1981 – First vaccine for hepatitis B (first vaccine to target a cause of cancer)
1984 – First vaccine for chicken pox
1985 – First vaccine for Haemophilus influenzae type b (HiB)
1989 – First vaccine for Q fever
1990 – First vaccine for hantavirus hemorrhagic fever with renal syndrome
1991 – First vaccine for hepatitis A
1998 – First vaccine for Lyme disease
1998 – First vaccine for rotavirus
21st century
2000 – First pneumococcal conjugate vaccine approved in the U.S. (PCV7 or Prevnar)
2003 – First nasal influenza vaccine approved in the U.S. (FluMist)
2003 – First vaccine for Argentine hemorrhagic fever.
2006 – First vaccine for human papillomavirus (which is a cause of cervical cancer)
2006 – First herpes zoster vaccine for shingles
2011 – First vaccine for non-small-cell lung carcinoma (comprises 85% of lung cancer cases)
2012 – First vaccine for hepatitis E
2012 – First quadrivalent (4-strain) influenza vaccine
2013 – First vaccine for enterovirus 71, one cause of hand, foot, and mouth disease
2015 – First vaccine for malaria
2015 – First vaccine for dengue fever
2019 – First vaccine for Ebola approved
2020 – First vaccine for COVID-19
2023 – First respiratory syncytial virus vaccine
2023 – First vaccine for chikungunya
References
Vaccines
Timeline | Timeline of human vaccines | [
"Biology"
] | 796 | [
"Vaccination",
"Vaccines"
] |
364,774 | https://en.wikipedia.org/wiki/Conformal%20field%20theory | A conformal field theory (CFT) is a quantum field theory that is invariant under conformal transformations. In two dimensions, there is an infinite-dimensional algebra of local conformal transformations, and conformal field theories can sometimes be exactly solved or classified.
Conformal field theory has important applications to condensed matter physics, statistical mechanics, quantum statistical mechanics, and string theory. Statistical and condensed matter systems are indeed often conformally invariant at their thermodynamic or quantum critical points.
Scale invariance vs conformal invariance
In quantum field theory, scale invariance is a common and natural symmetry, because any fixed point of the renormalization group is by definition scale invariant. Conformal symmetry is stronger than scale invariance, and one needs additional assumptions to argue that it should appear in nature. The basic idea behind its plausibility is that local scale invariant theories have their currents given by $j^\mu = \epsilon_\nu T^{\mu\nu}$, where $\epsilon_\nu$ is a Killing vector and $T^{\mu\nu}$ is a conserved operator (the stress-tensor) of dimension exactly $d$. For the associated symmetries to include scale but not conformal transformations, the trace $T^\mu{}_\mu$ has to be a non-zero total derivative, implying that there is a non-conserved operator of dimension exactly $d-1$.
Under some assumptions it is possible to completely rule out this type of non-renormalization and hence prove that scale invariance implies conformal invariance in a quantum field theory, for example in unitary compact conformal field theories in two dimensions.
While it is possible for a quantum field theory to be scale invariant but not conformally invariant, examples are rare. For this reason, the terms are often used interchangeably in the context of quantum field theory.
Two dimensions vs higher dimensions
The number of independent conformal transformations is infinite in two dimensions, and finite in higher dimensions. This makes conformal symmetry much more constraining in two dimensions. All conformal field theories share the ideas and techniques of the conformal bootstrap. But the resulting equations are more powerful in two dimensions, where they are sometimes exactly solvable (for example in the case of minimal models), in contrast to higher dimensions, where numerical approaches dominate.
The development of conformal field theory has been earlier and deeper in the two-dimensional case, in particular after the 1983 article by Belavin, Polyakov and Zamolodchikov.
The term conformal field theory has sometimes been used with the meaning of two-dimensional conformal field theory, as in the title of a 1997 textbook.
Higher-dimensional conformal field theories have become more popular with the AdS/CFT correspondence in the late 1990s, and the development of numerical conformal bootstrap techniques in the 2000s.
Global vs local conformal symmetry in two dimensions
The global conformal group of the Riemann sphere is the group of Möbius transformations $z \mapsto \frac{az+b}{cz+d}$ (with $ad - bc \neq 0$), which is finite-dimensional.
On the other hand, infinitesimal conformal transformations form the infinite-dimensional Witt algebra: the conformal Killing equations in two dimensions reduce to just the Cauchy–Riemann equations, and the infinitely many modes of arbitrary analytic coordinate transformations $z \mapsto f(z)$ yield infinitely many Killing vector fields $\ell_n = -z^{n+1}\partial_z$.
Strictly speaking, it is possible for a two-dimensional conformal field theory to be local (in the sense of possessing a stress-tensor) while still only exhibiting invariance under the global conformal group. This turns out to be unique to non-unitary theories; an example is the biharmonic scalar. This property should be viewed as even more special than scale without conformal invariance, as it requires the trace $T^\mu{}_\mu$ to be a total second derivative.
Global conformal symmetry in two dimensions is a special case of conformal symmetry in higher dimensions, and is studied with the same techniques. This is done not only in theories that have global but not local conformal symmetry, but also in theories that do have local conformal symmetry, for the purpose of testing techniques or ideas from higher-dimensional CFT. In particular, numerical bootstrap techniques can be tested by applying them to minimal models, and comparing the results with the known analytic results that follow from local conformal symmetry.
Conformal field theories with a Virasoro symmetry algebra
In a conformally invariant two-dimensional quantum theory, the Witt algebra of infinitesimal conformal transformations has to be centrally extended. The quantum symmetry algebra is therefore the Virasoro algebra, which depends on a number called the central charge. This central extension can also be understood in terms of a conformal anomaly.
It was shown by Alexander Zamolodchikov that there exists a function which decreases monotonically under the renormalization group flow of a two-dimensional quantum field theory, and is equal to the central charge for a two-dimensional conformal field theory. This is known as the Zamolodchikov C-theorem, and tells us that renormalization group flow in two dimensions is irreversible.
In addition to being centrally extended, the symmetry algebra of a conformally invariant quantum theory has to be complexified, resulting in two copies of the Virasoro algebra.
In Euclidean CFT, these copies are called holomorphic and antiholomorphic. In Lorentzian CFT, they are called left-moving and right-moving. Both copies have the same central charge.
The space of states of a theory is a representation of the product of the two Virasoro algebras. This space is a Hilbert space if the theory is unitary.
This space may contain a vacuum state, or in statistical mechanics, a thermal state. Unless the central charge vanishes, there cannot exist a state that leaves the entire infinite-dimensional conformal symmetry unbroken. The best we can have is a state that is invariant under the generators $L_{-1}, L_0, L_1$ of the Virasoro algebra (together with their antiholomorphic counterparts). This contains the generators of the global conformal transformations. The rest of the conformal group is spontaneously broken.
Conformal symmetry
Definition and Jacobian
For a given spacetime and metric, a conformal transformation is a transformation that preserves angles. We will focus on conformal transformations of the flat $d$-dimensional Euclidean space $\mathbb{R}^d$ or of the Minkowski space $\mathbb{R}^{1,d-1}$.
If $x \mapsto f(x)$ is a conformal transformation, the Jacobian is of the form
$$\frac{\partial f^\mu}{\partial x^\nu}(x) \;=\; \Omega(x)\, R^\mu{}_\nu(x),$$
where $\Omega(x)$ is the scale factor, and $R^\mu{}_\nu(x)$ is a rotation (i.e. an orthogonal matrix) or Lorentz transformation.
Conformal group
The conformal group is locally isomorphic to $SO(d+1,1)$ (Euclidean) or $SO(d,2)$ (Minkowski). This includes translations, rotations (Euclidean) or Lorentz transformations (Minkowski), and dilations, i.e. scale transformations $x^\mu \mapsto \lambda x^\mu$.
This also includes special conformal transformations. For any translation $T_b(x) = x + b$, there is a special conformal transformation $S_b = I \circ T_b \circ I$, where $I$ is the inversion such that $I(x)^\mu = \frac{x^\mu}{x^2}$.
In the sphere $S^d = \mathbb{R}^d \cup \{\infty\}$, the inversion exchanges $0$ with $\infty$. Translations leave $\infty$ fixed, while special conformal transformations leave $0$ fixed.
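Composing the inversion with a translation and another inversion gives the explicit component form of a special conformal transformation. As a worked example in the conventions just introduced (overall signs vary across references):

```latex
S_{b}(x)^{\mu}
\;=\;
\frac{x^{\mu} + b^{\mu}\, x^{2}}{\,1 + 2\, b \cdot x + b^{2}\, x^{2}\,} .
```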
Conformal algebra
The commutation relations of the corresponding Lie algebra involve the generators $P_\mu$ of translations, $D$ of dilations, $K_\mu$ of special conformal transformations, and $M_{\mu\nu}$ of rotations or Lorentz transformations. The tensor $\eta_{\mu\nu}$ is the flat metric.
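In one common convention (factors of $i$ and normalizations differ between references, so the choices here are an assumption), the commutators involving $D$, $P_\mu$, and $K_\mu$ read:

```latex
[D, P_{\mu}] = P_{\mu}, \qquad
[D, K_{\mu}] = -K_{\mu}, \qquad
[K_{\mu}, P_{\nu}] = 2\,\eta_{\mu\nu}\, D - 2\, M_{\mu\nu},
```

with $M_{\mu\nu}$ acting on $P_\rho$ and $K_\rho$ as on vectors.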
Global issues in Minkowski space
In Minkowski space, the conformal group does not preserve causality. Observables such as correlation functions are invariant under the conformal algebra, but not under the conformal group. As shown by Lüscher and Mack, it is possible to restore the invariance under the conformal group by extending the flat Minkowski space into a Lorentzian cylinder. The original Minkowski space is conformally equivalent to a region of the cylinder called a Poincaré patch. In the cylinder, global conformal transformations do not violate causality: instead, they can move points outside the Poincaré patch.
Correlation functions and conformal bootstrap
In the conformal bootstrap approach, a conformal field theory is a set of correlation functions that obey a number of axioms.
The $n$-point correlation function $\langle O_1(x_1) \cdots O_n(x_n) \rangle$ is a function of the positions $x_i$ and other parameters of the fields $O_i$. In the bootstrap approach, the fields themselves make sense only in the context of correlation functions, and may be viewed as efficient notations for writing axioms for correlation functions. Correlation functions depend linearly on fields; in particular, multiplying one field by a number multiplies the correlation function by the same number.
We focus on CFT on the Euclidean space $\mathbb{R}^d$. In this case, correlation functions are Schwinger functions. They are defined for non-coinciding positions $x_i \neq x_j$, and do not depend on the order of the fields. In Minkowski space, correlation functions are Wightman functions. They can depend on the order of the fields, as fields commute only if they are spacelike separated. A Euclidean CFT can be related to a Minkowskian CFT by Wick rotation, for example thanks to the Osterwalder–Schrader theorem. In such cases, Minkowskian correlation functions are obtained from Euclidean correlation functions by an analytic continuation that depends on the order of the fields.
Behaviour under conformal transformations
Any conformal transformation $g$ acts linearly on fields $O \mapsto \pi_g(O)$, such that $g \mapsto \pi_g$ is a representation of the conformal group, and correlation functions are invariant:
$$\big\langle \pi_g(O_1)(x_1) \cdots \pi_g(O_n)(x_n) \big\rangle = \big\langle O_1(x_1) \cdots O_n(x_n) \big\rangle.$$
Primary fields are fields that transform into themselves via $\pi_g$. The behaviour of a primary field is characterized by a number $\Delta$ called its conformal dimension, and a representation $\rho$ of the rotation or Lorentz group. The scale factor $\Omega$ and rotation $R$ associated to the conformal transformation $g$ enter the transformation law. The representation $\rho$ is trivial in the case of scalar fields, which transform as
$$\pi_g(O)(x) = \Omega(x')^{-\Delta}\, O(x'), \qquad x' = g^{-1}(x).$$
For vector fields, the representation $\rho$ is the fundamental representation, and the rotation $R$ also acts on the vector index.
A primary field that is characterized by the conformal dimension and representation behaves as a highest-weight vector in an induced representation of the conformal group from the subgroup generated by dilations and rotations. In particular, the conformal dimension characterizes a representation of the subgroup of dilations. In two dimensions, the fact that this induced representation is a Verma module appears throughout the literature. For higher-dimensional CFTs (in which the maximally compact subalgebra is larger than the Cartan subalgebra), it has recently been appreciated that this representation is a parabolic or generalized Verma module.
Derivatives (of any order) of primary fields are called descendant fields. Their behaviour under conformal transformations is more complicated. For example, if $O$ is a primary field, then the transform of $\partial_\mu O$ is a linear combination of $\partial_\mu O$ and $O$. Correlation functions of descendant fields can be deduced from correlation functions of primary fields. However, even in the common case where all fields are either primaries or descendants thereof, descendant fields play an important role, because conformal blocks and operator product expansions involve sums over all descendant fields.
The collection of all primary fields, characterized by their scaling dimensions and their representations of the rotation or Lorentz group, is called the spectrum of the theory.
Dependence on field positions
The invariance of correlation functions under conformal transformations severely constrains their dependence on field positions. In the case of two- and three-point functions, that dependence is determined up to finitely many constant coefficients. Higher-point functions have more freedom, and are only determined up to functions of conformally invariant combinations of the positions.
The two-point function of two primary fields vanishes if their conformal dimensions differ.
If the dilation operator is diagonalizable (i.e. if the theory is not logarithmic), there exists a basis of primary fields such that two-point functions are diagonal, i.e. $\langle O_i\, O_j \rangle \propto \delta_{ij}$.
In this case, the two-point function of a scalar primary field is
where we choose the normalization of the field such that the constant coefficient, which is not determined by conformal symmetry, is one. Similarly, two-point functions of non-scalar primary fields are determined up to a coefficient, which can be set to one. In the case of a symmetric traceless tensor of rank $\ell$, the two-point function is
where the tensor is defined as
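In generic notation (the labels $\phi$, $\Delta$, $x_i$ are chosen for this illustration rather than fixed by the text), the tensor in question is conventionally the inversion tensor

```latex
I^{\mu\nu}(x) \;=\; \eta^{\mu\nu} \;-\; \frac{2\, x^{\mu} x^{\nu}}{x^{2}},
```

evaluated at $x = x_1 - x_2$, and the unit-normalized two-point function of a scalar primary of dimension $\Delta$ takes the standard form

```latex
\langle \phi(x_{1})\, \phi(x_{2}) \rangle \;=\; \frac{1}{|x_{1}-x_{2}|^{2\Delta}} .
```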
The three-point function of three scalar primary fields is
where $x_{ij} = |x_i - x_j|$, and $C_{123}$ is a three-point structure constant. With primary fields that are not necessarily scalars, conformal symmetry allows a finite number of tensor structures, and there is a structure constant for each tensor structure. In the case of two scalar fields and a symmetric traceless tensor of rank $\ell$, there is only one tensor structure, and the three-point function is
where we introduce the vector
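In the same generic notation, the scalar three-point function fixed by conformal symmetry reads

```latex
\langle \phi_{1}(x_{1})\, \phi_{2}(x_{2})\, \phi_{3}(x_{3}) \rangle
\;=\;
\frac{C_{123}}
{x_{12}^{\Delta_{1}+\Delta_{2}-\Delta_{3}}\;
 x_{23}^{\Delta_{2}+\Delta_{3}-\Delta_{1}}\;
 x_{13}^{\Delta_{3}+\Delta_{1}-\Delta_{2}}},
```

and the vector mentioned above is conventionally built from the positions as

```latex
Z^{\mu} \;=\; \frac{x_{13}^{\mu}}{x_{13}^{2}} \;-\; \frac{x_{23}^{\mu}}{x_{23}^{2}} .
```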
Four-point functions of scalar primary fields are determined up to arbitrary functions of the two cross-ratios
The four-point function is then
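In generic notation, and specializing for brevity to four identical scalars of dimension $\Delta$, the two cross-ratios and the resulting structure of the four-point function are

```latex
u = \frac{x_{12}^{2}\, x_{34}^{2}}{x_{13}^{2}\, x_{24}^{2}},
\qquad
v = \frac{x_{14}^{2}\, x_{23}^{2}}{x_{13}^{2}\, x_{24}^{2}},
\qquad
\langle \phi(x_{1})\, \phi(x_{2})\, \phi(x_{3})\, \phi(x_{4}) \rangle
= \frac{g(u,v)}{x_{12}^{2\Delta}\, x_{34}^{2\Delta}},
```

where $g(u,v)$ is a function constrained only by the axioms of the theory.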
Operator product expansion
The operator product expansion (OPE) is more powerful in conformal field theory than in more general quantum field theories. This is because in conformal field theory, the operator product expansion's radius of convergence is finite (i.e. it is not zero). Provided the positions of two fields are close enough, the operator product expansion rewrites the product of these two fields as a linear combination of fields at a given point, which can be chosen as the position of one of the two fields for technical convenience.
The operator product expansion of two fields takes the form of a sum over all fields in the theory, with position-dependent coefficient functions. (Equivalently, by the state-field correspondence, the sum runs over all states in the space of states.) Some fields may actually be absent, in particular due to constraints from symmetry: conformal symmetry, or extra symmetries.
If all fields are primary or descendant, the sum over fields can be reduced to a sum over primaries, by rewriting the contributions of any descendant in terms of the contribution of the corresponding primary:
where the fields $O_p$ are all primary, and $C_{12p}$ is the three-point structure constant (which for this reason is also called OPE coefficient). The differential operator $P_p$ is an infinite series in derivatives, which is determined by conformal symmetry and therefore in principle known.
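Schematically, and with the caveat that normalization conventions for the coefficient functions vary, the primary-reduced OPE in this notation takes the form

```latex
O_{1}(x_{1})\, O_{2}(x_{2})
\;=\;
\sum_{p} C_{12p}\; P_{p}\!\left(x_{12}, \partial_{x_{2}}\right) O_{p}(x_{2}),
```

so that all OPE data reduce to the spectrum and the structure constants $C_{12p}$.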
Viewing the OPE as a relation between correlation functions shows that the OPE must be associative. Furthermore,
if the space is Euclidean, the OPE must be commutative, because
correlation functions do not depend on the order of the fields.
The existence of the operator product expansion is a fundamental axiom of the conformal bootstrap. However, it is generally not necessary to compute operator product expansions and in particular the differential operators . Rather, it is the decomposition of correlation functions into structure constants and conformal blocks that is needed.
The OPE can in principle be used for computing conformal blocks, but in practice there are more efficient methods.
Conformal blocks and crossing symmetry
Using the OPE of the fields at $x_1$ and $x_2$, a four-point function can be written as a combination of three-point structure constants and s-channel conformal blocks.
The conformal block is the sum of the contributions of the primary field and its descendants. It depends on the fields and their positions. If the three-point functions or involve several independent tensor structures, the structure constants and conformal blocks depend on these tensor structures, and the primary field contributes several independent blocks. Conformal blocks are determined by conformal symmetry, and known in principle. To compute them, there are recursion relations and integrable techniques.
Using instead the OPE of another pair of fields, the same four-point function is written in terms of t-channel conformal blocks or u-channel conformal blocks.
The equality of the s-, t- and u-channel decompositions is called crossing symmetry: a constraint on the spectrum of primary fields, and on the three-point structure constants.
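In equations, and still with generic labels, crossing symmetry of a four-point function states that the s- and t-channel decompositions agree:

```latex
\sum_{p} C_{12p}\, C_{34p}\; G^{(s)}_{p}(x_{i})
\;=\;
\sum_{p} C_{14p}\, C_{23p}\; G^{(t)}_{p}(x_{i}),
```

with a similar equation for the u-channel; here $G_p$ denotes the conformal block of the primary field $O_p$.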
Conformal blocks obey the same conformal symmetry constraints as four-point functions. In particular, s-channel conformal blocks can be written in terms of functions of the cross-ratios. While the OPE only converges if , conformal blocks can be analytically continued to all (non pairwise coinciding) values of the positions. In Euclidean space, conformal blocks are single-valued real-analytic functions of the positions except when the four points lie on a circle but in a singly-transposed cyclic order [1324], and only in these exceptional cases does the decomposition into conformal blocks not converge.
A conformal field theory in flat Euclidean space is thus defined by its spectrum and OPE coefficients (or three-point structure constants) , satisfying the constraint that all four-point functions are crossing-symmetric. From the spectrum and OPE coefficients (collectively referred to as the CFT data), correlation functions of arbitrary order can be computed.
Features
Unitarity
A conformal field theory is unitary if its space of states has a positive definite scalar product such that the dilation operator is self-adjoint. Then the scalar product endows the space of states with the structure of a Hilbert space.
In Euclidean conformal field theories, unitarity is equivalent to reflection positivity of correlation functions: one of the Osterwalder-Schrader axioms.
Unitarity implies that the conformal dimensions of primary fields are real and bounded from below. The lower bound depends on the spacetime dimension $d$, and on the representation of the rotation or Lorentz group in which the primary field transforms. For scalar fields, the unitarity bound is
$$\Delta \;\geq\; \frac{d-2}{2}.$$
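For comparison (a standard result not spelled out above): for a primary in a spin-$\ell$ symmetric traceless representation with $\ell \geq 1$, the corresponding bound is

```latex
\Delta \;\geq\; \ell + d - 2,
```

saturated by conserved currents such as the stress tensor ($\ell = 2$, $\Delta = d$).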
In a unitary theory, three-point structure constants must be real, which in turn implies that four-point functions obey certain inequalities. Powerful numerical bootstrap methods are based on exploiting these inequalities.
Compactness
A conformal field theory is compact if it obeys three conditions:
All conformal dimensions are real.
For any $\Delta^{\ast}$ there are finitely many states whose dimensions are less than $\Delta^{\ast}$.
There is a unique state with the dimension $\Delta = 0$, and it is the vacuum state, i.e. the corresponding field is the identity field.
(The identity field is the field whose insertion into correlation functions does not modify them.) The name comes from the fact that if a 2D conformal field theory is also a sigma model, it will satisfy these conditions if and only if its target space is compact.
It is believed that all unitary conformal field theories are compact in dimension . Without unitarity, on the other hand, it is possible to find CFTs in dimension four and in dimension that have a continuous spectrum. And in dimension two, Liouville theory is unitary but not compact.
Extra symmetries
A conformal field theory may have extra symmetries in addition to conformal symmetry. For example, the Ising model has a $\mathbb{Z}_2$ symmetry, and superconformal field theories have supersymmetry.
Examples
Mean field theory
A generalized free field is a field whose correlation functions are deduced from its two-point function by Wick's theorem. For instance, if $\phi$ is a scalar primary field of dimension $\Delta$, its four-point function reads as a sum over the pairings of the four points.
For instance, if $\phi_1, \phi_2$ are two scalar primary fields such that $\langle \phi_1 \phi_2 \rangle = 0$ (which is the case in particular if $\Delta_1 \neq \Delta_2$), we have the four-point function
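The four-point functions referred to here follow from Wick's theorem. For a single scalar generalized free field $\phi$ of dimension $\Delta$ (generic labels again, with $x_{ij} = |x_i - x_j|$):

```latex
\langle \phi(x_{1})\,\phi(x_{2})\,\phi(x_{3})\,\phi(x_{4}) \rangle
=
\frac{1}{x_{12}^{2\Delta}\, x_{34}^{2\Delta}}
+ \frac{1}{x_{13}^{2\Delta}\, x_{24}^{2\Delta}}
+ \frac{1}{x_{14}^{2\Delta}\, x_{23}^{2\Delta}} ,
```

i.e. a sum over the three pairings of unit-normalized two-point functions.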
Mean field theory is a generic name for conformal field theories that are built from generalized free fields. For example, a mean field theory can be built from one scalar primary field $\phi$. Then this theory contains $\phi$, its descendant fields, and the fields that appear in the OPE $\phi \times \phi$. The primary fields that appear in $\phi \times \phi$ can be determined by decomposing the four-point function in conformal blocks: their conformal dimensions differ from $2\Delta$ by non-negative integers; in mean field theory, the conformal dimension is conserved modulo integers. Structure constants can be computed exactly in terms of the Gamma function.
Similarly, it is possible to construct mean field theories starting from a field with non-trivial Lorentz spin. For example, the 4d Maxwell theory (in the absence of charged matter fields) is a mean field theory built out of an antisymmetric tensor field with scaling dimension $\Delta = 2$.
Mean field theories have a Lagrangian description in terms of a quadratic action involving Laplacian raised to an arbitrary real power (which determines the scaling dimension of the field). For a generic scaling dimension, the power of the Laplacian is non-integer. The corresponding mean field theory is then non-local (e.g. it does not have a conserved stress tensor operator).
Critical Ising model
The critical Ising model is the critical point of the Ising model on a hypercubic lattice in two or three dimensions. It has a global $\mathbb{Z}_2$ symmetry, corresponding to flipping all spins. The two-dimensional critical Ising model includes the Virasoro minimal model, which can be solved exactly. There is no Ising CFT in $d \geq 4$ dimensions.
Critical Potts model
The critical Potts model with $q$ colors is a unitary CFT that is invariant under the permutation group $S_q$. It is a generalization of the critical Ising model, which corresponds to $q = 2$. The critical Potts model exists in a range of dimensions depending on $q$.
The critical Potts model may be constructed as the continuum limit of the Potts model on a d-dimensional hypercubic lattice. In the Fortuin–Kasteleyn reformulation in terms of clusters, the Potts model can be defined for arbitrary real $q$, but it is not unitary if $q$ is not an integer.
Critical O(N) model
The critical O(N) model is a CFT invariant under the orthogonal group $O(N)$. For any integer $N \geq 1$, it exists as an interacting, unitary and compact CFT in $d = 3$ dimensions (and, for small enough $N$, also in two dimensions). It is a generalization of the critical Ising model, which corresponds to the O(N) CFT at $N = 1$.
The O(N) CFT can be constructed as the continuum limit of a lattice model with spins that are N-vectors, called the n-vector model.
Alternatively, the critical model can be constructed as the limit of the Wilson–Fisher fixed point in $d = 4 - \epsilon$ dimensions as $\epsilon \to 1$. At $\epsilon = 0$, the Wilson–Fisher fixed point becomes the tensor product of $N$ free scalars with dimension $\Delta = 1$. For non-integer dimensions the model in question is non-unitary.
When N is large, the O(N) model can be solved perturbatively in a 1/N expansion by means of the Hubbard–Stratonovich transformation. In particular, the $N \to \infty$ limit of the critical O(N) model is well-understood.
The conformal data of the critical O(N) model are functions of N and of the dimension, on which many results are known.
Conformal gauge theories
Some conformal field theories in three and four dimensions admit a Lagrangian description in the form of a gauge theory, either abelian or non-abelian. Examples of such CFTs are conformal QED with sufficiently many charged fields in $d = 3$, or the Banks–Zaks fixed point in $d = 4$.
Applications
Continuous phase transitions
Continuous phase transitions (critical points) of classical statistical physics systems with D spatial dimensions are often described by Euclidean conformal field theories. A necessary condition for this to happen is that the critical point should be invariant under spatial rotations and translations. However this condition is not sufficient: some exceptional critical points are described by scale invariant but not conformally invariant theories. If the classical statistical physics system is reflection positive, the corresponding Euclidean CFT describing its critical point will be unitary.
Continuous quantum phase transitions in condensed matter systems with D spatial dimensions may be described by Lorentzian D+1 dimensional conformal field theories (related by Wick rotation to Euclidean CFTs in D+1 dimensions). Apart from translation and rotation invariance, an additional necessary condition for this to happen is that the dynamical critical exponent z should be equal to 1. CFTs describing such quantum phase transitions (in absence of quenched disorder) are always unitary.
String theory
The world-sheet description of string theory involves a two-dimensional CFT coupled to dynamical two-dimensional quantum gravity (or supergravity, in the case of superstring theory). Consistency of string theory models imposes constraints on the central charge of this CFT, which should be c=26 in bosonic string theory and c=10 in superstring theory. Coordinates of the spacetime in which string theory lives correspond to bosonic fields of this CFT.
AdS/CFT correspondence
Conformal field theories play a prominent role in the AdS/CFT correspondence, in which a gravitational theory in anti-de Sitter space (AdS) is equivalent to a conformal field theory on the AdS boundary. Notable examples are d = 4, N = 4 supersymmetric Yang–Mills theory, which is dual to Type IIB string theory on AdS5 × S5, and d = 3, N = 6 super-Chern–Simons theory, which is dual to M-theory on AdS4 × S7. (The prefix "super" denotes supersymmetry, N denotes the degree of extended supersymmetry possessed by the theory, and d the number of space-time dimensions on the boundary.)
See also
Logarithmic conformal field theory
AdS/CFT correspondence
Operator product expansion
Critical point
Boundary conformal field theory
Primary field
Superconformal algebra
Conformal algebra
Conformal bootstrap
History of conformal field theory
References
Further reading
Martin Schottenloher, A Mathematical Introduction to Conformal Field Theory, Springer-Verlag, Berlin, Heidelberg, 1997. , 2nd edition 2008, .
External links
Symmetry
Scaling symmetries
Mathematical physics | Conformal field theory | [
"Physics",
"Mathematics"
] | 5,100 | [
"Symmetry",
"Applied mathematics",
"Theoretical physics",
"Geometry",
"Mathematical physics",
"Scaling symmetries"
] |
364,820 | https://en.wikipedia.org/wiki/Quotient%20module | In algebra, given a module and a submodule, one can construct their quotient module. This construction, described below, is very similar to that of a quotient vector space. It differs from analogous quotient constructions of rings and groups by the fact that in the latter cases, the subspace that is used for defining the quotient is not of the same nature as the ambient space (that is, a quotient ring is the quotient of a ring by an ideal, not a subring, and a quotient group is the quotient of a group by a normal subgroup, not by a general subgroup).
Given a module $A$ over a ring $R$, and a submodule $B$ of $A$, the quotient space $A/B$ is defined by the equivalence relation
$$a \sim b \quad \text{if and only if} \quad b - a \in B,$$
for any $a, b$ in $A$. The elements of $A/B$ are the equivalence classes $[a] = a + B = \{a + b : b \in B\}$. The function $\pi \colon A \to A/B$ sending $a$ in $A$ to its equivalence class $a + B$ is called the quotient map or the projection map, and is a module homomorphism.
The addition operation on $A/B$ is defined for two equivalence classes as the equivalence class of the sum of two representatives from these classes; and scalar multiplication of elements of $A/B$ by elements of $R$ is defined similarly. Note that it has to be shown that these operations are well-defined. Then $A/B$ becomes itself an $R$-module, called the quotient module. In symbols, for all $a, b$ in $A$ and $r$ in $R$:
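With the notation introduced above (the letters $A$, $B$, $a$, $b$, $r$ are generic choices), the two operations read

```latex
(a + B) + (b + B) \;=\; (a + b) + B,
\qquad
r \cdot (a + B) \;=\; (r a) + B .
```

Well-definedness means that the right-hand sides do not depend on the chosen representatives, which holds because $B$ is closed under addition and under multiplication by elements of $R$.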
Examples
Consider the polynomial ring $\mathbb{R}[X]$ with real coefficients, and the $\mathbb{R}[X]$-module $A = \mathbb{R}[X]$. Consider the submodule
$$B = (X^2 + 1)\,\mathbb{R}[X]$$
of $A$, that is, the submodule of all polynomials divisible by $X^2 + 1$. It follows that the equivalence relation determined by this module will be
$$P(X) \sim Q(X) \quad \text{if and only if} \quad P(X) \text{ and } Q(X) \text{ give the same remainder when divided by } X^2 + 1.$$
Therefore, in the quotient module $A/B$, $X^2 + 1$ is the same as $0$; so one can view $A/B$ as obtained from $\mathbb{R}[X]$ by setting $X^2 = -1$. This quotient module is isomorphic to the complex numbers, viewed as a module over the real numbers $\mathbb{R}$.
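As a quick check of the isomorphism (using the notation chosen above for this rewrite):

```latex
\mathbb{R}[X]/(X^{2}+1) \;\cong\; \mathbb{C},
\qquad
a + bX + B \;\longleftrightarrow\; a + b\,i .
```

Every class has a unique representative $a + bX$ of degree less than two (the remainder after division by $X^2 + 1$), and multiplication of classes matches multiplication of complex numbers, since $X^2 + B = -1 + B$.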
See also
Quotient group
Quotient ring
Quotient (universal algebra)
References
Module theory
Module | Quotient module | [
"Mathematics"
] | 413 | [
"Fields of abstract algebra",
"Module theory"
] |
364,852 | https://en.wikipedia.org/wiki/August-Wilhelm%20Scheer | August-Wilhelm Scheer (born July 27, 1941) is a German Professor of business administration and business information at Saarland University, and founder and director of IDS Scheer AG, a major IT service and software company. He is known for the development of the Architecture of Integrated Information Systems (ARIS) concept.
Biography
In 1972 Scheer received a PhD from the University of Hamburg with the thesis "Kosten- und kapazitätsorientierte Ersatzpolitik bei stochastisch ausfallenden Produktionsanlagen". In 1974 he obtained his Habilitation, also in Hamburg, with a thesis about project control.
In 1975 Scheer took over one of the first chairs for information systems and founded the institute for information systems (IWi) at Saarland University, which he led until 2005. In 1984 he founded IDS Scheer, a Business Process Management (BPM) software company, which is widely regarded as the founder of the BPM industry. In 1997 he also founded IMC AG, a company for innovative learning technologies and spin-off of Saarland University, together with Wolfgang Kraemer, Frank Milius and Wolfgang Zimmermann.
In 2003 Scheer was awarded the Philip Morris Research Prize and the Ernst & Young Entrepreneur of the Year Award. In December 2005 he was awarded the Erich Gutenberg price and in the same month the Federal Cross of Merit first class. In 2005 he was also elected as a fellow of the Gesellschaft für Informatik. Since 2006, he has been a member of the council for innovation and growth of the Federal Government. In 2007 he was honored as a HPI-Fellow by the Hasso-Plattner-Institut (HPI) für Softwaresystemtechnik and was elected President of the German Association for Information Technology, Telecommunications and New Media.
On June 4, 2010 Scheer was awarded with the Design Science Lifetime Achievement Award at University of St. Gallen. He received the prize as a recognition for his contribution to design science research.
Work
His research focuses on information and business process management in industry, services and administration.
ARIS
The Architecture of Integrated Information Systems (ARIS) concept is the representation of business processes in diagrammatic form, so as to provide an unambiguous starting point for the development of computer-based information systems. The ARIS architecture and methodology is the core piece of intellectual property on which the business process management software company IDS Scheer was founded.
Event-driven Process Chain
Event-driven Process Chain is a business process modelling technique, mainly used for analysing processes for the purpose of an ERP implementation.
Businesses use EPC diagrams to lay out business process work flows, originally in conjunction with SAP R/3 modeling, but now more widely. There are a number of tools for creating EPC diagrams, including ARIS Toolset of IDS Scheer AG, free modeling tool ARIS Express by IDS Scheer AG, ADONIS of BOC Group, Visio of Microsoft Corp., Semtalk of Semtation GmbH, or Bonapart by Pikos GmbH. Some but not all of these tools support the tool-independent EPC Markup Language (EPML) interchange format.
There are also tools that generate EPC diagrams from operational data, such as SAP logs. EPC diagrams use symbols of several kinds to show the control flow structure (sequence of decisions, functions, events, and other elements) of a business process.
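Since an EPC is ultimately a directed graph over a small vocabulary of node types, a minimal data representation is easy to sketch. The following Python fragment is a rough illustration only; the node names and the encoding are invented for this sketch and are not taken from ARIS, EPML, or any of the tools above.

```python
# Minimal encoding of an EPC fragment as a directed graph.
# EPC node kinds: events (passive states), functions (active steps),
# and logical connectors (AND / OR / XOR) that split or join the flow.
epc = {
    "nodes": {
        "e1": ("event", "Order received"),
        "f1": ("function", "Check availability"),
        "x1": ("connector", "XOR"),
        "e2": ("event", "Item in stock"),
        "e3": ("event", "Item out of stock"),
    },
    "edges": [("e1", "f1"), ("f1", "x1"), ("x1", "e2"), ("x1", "e3")],
}

# Basic well-formedness check: every edge endpoint is a declared node.
assert all(a in epc["nodes"] and b in epc["nodes"]
           for a, b in epc["edges"])
```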
Publications
His publications have attracted worldwide attention and have been translated into 8 languages. A selection:
1969. Industrielle Investitionsentscheidung. Eine theoret. u. empir. Untersuchung z. Investitionsverhalten in Industrieunternehmungen.
1978. Projektsteuerung.
1984. EDV-orientierte Betriebswirtschaftslehre
1985. Computer, a challenge for business administration
1989. Enterprise-wide data modelling : information systems in industry
1990. CIM-Strategie als Teil der Unternehmensstrategie
1992. Architecture of integrated information systems : foundations of enterprise modelling.
1994. Business process engineering : reference models for industrial enterprises.
1994. CIM : computer integrated manufacturing : towards the factory of the future.
1998. SAP R/3 in der Praxis : neuere Entwicklungen und Anwendungen. With Dieter B. Pressmar.
1998. ARIS—business process modeling
2003. Business process change management : ARIS in practice. Edited with others.
2006. Agility by Aris business process management : Yearbook business process excellence. Edited with others.
2006. Corporate performance management : ARIS in practice
2016. The Complete Business Process Handbook Volume 1: 'Body of Knowledge from Process Modeling to BPM
References
External links
Scheer's institute in Saarbrücken (in German)
Scheer's personal web page
at ProcessWorld conference 2010 in Berlin
Scheer's network of high-growth IT companies
1941 births
Living people
German business theorists
Information systems researchers
Enterprise modelling experts
Software engineering researchers
University of Hamburg alumni
Academic staff of Saarland University
Officers Crosses of the Order of Merit of the Federal Republic of Germany
Recipients of the Saarland Order of Merit | August-Wilhelm Scheer | [
"Technology"
] | 1,087 | [
"Information systems",
"Information systems researchers"
] |
364,950 | https://en.wikipedia.org/wiki/Dubna | Dubna () is a town in Moscow Oblast, Russia. It has the status of a naukograd (i.e. town of science), being home to the Joint Institute for Nuclear Research, an international nuclear physics research center and one of the largest scientific foundations in the country. It is also home to MKB Raduga, a defense aerospace company specializing in design and production of missile systems, as well as to Russia's largest satellite communications center, owned by the Russian Satellite Communications Company. The modern town was developed in the middle of the 20th century and town status was granted to it in 1956. Population:
Geography
The town is above sea level, situated approximately north of Moscow, on the Volga River, just downstream from the Ivankovo Reservoir. The reservoir is formed by a hydroelectric dam across the Volga situated within the town borders. The town lies on both banks of the Volga. The western boundary of the town is defined by the Moscow Canal joining the Volga, while the eastern boundary is defined by the Dubna River joining the Volga.
Dubna is the northernmost town of Moscow Oblast.
History
Pre-World War II
Fortress Dubna () belonging to Rostov-Suzdal Principality was built in the area in 1132 by the order of Yuri Dolgoruki and existed until 1216. The fortress was destroyed during the feudal war between the sons of Vsevolod the Big Nest. The village of Gorodishche () was located on the right bank of the Volga River and was a part of the Kashin Principality. Dubna customs post () was located in the area and was a part of the Principality of Tver.
Before the October Revolution, a few villages were in the area: Podberezye was on the left bank of the Volga, and Gorodishche, Alexandrovka, Ivankovo, Yurkino, and Kozlaki () were on the right bank.
Right after the Revolution one of the first collective farms was organized in Dubna area.
In 1931, the Orgburo of the Communist Party made a decision to build the Volga-Moscow Canal. Genrikh Yagoda, then the leader of the State Political Directorate, was put in charge of construction. The Canal was completed in 1937. Ivankovo Reservoir and Ivankovo hydroelectrical plant were also created as a part of the project. Many villages and the town Korcheva were submerged under water. Dubna is mentioned in Aleksandr Solzhenitsyn's book The Gulag Archipelago as the town built by Gulag prisoners.
Science
The decision to build a proton accelerator for nuclear research was taken by the Soviet government in 1946. An impractical place where the current town is situated was chosen due to remoteness from Moscow and the presence of the Ivankovo power plant nearby. The scientific leader was Igor Kurchatov. The general supervisor of the project including construction of a settlement, a road and a railway connecting it to Moscow (largely involving penal labour of Gulag inmates) was the NKVD chief Lavrentiy Beria. After three years of intensive work, the accelerator was commissioned on 13 December 1949.
The town of Dubna was officially inaugurated in 1956, together with the Joint Institute for Nuclear Research (JINR), which has developed into a large international research laboratory involved mainly in particle physics, heavy ion physics, synthesis of transuranium elements, and radiobiology. In 1960, a town of Ivankovo situated on the opposite (left) bank of the Volga was merged into Dubna. In 1964, Dubna hosted the prestigious International Conference on High Energy Physics.
Currently, construction of the NICA particle collider, a megascience project, is underway in Dubna.
Outstanding physicists of the 20th century including Nikolay Bogolyubov, Georgy Flyorov, Vladimir Veksler, and Bruno Pontecorvo used to work at the institute. A number of elementary particles and nuclei of transuranium elements (most recently, element 117) have been discovered and investigated there, leading to the honorary naming of chemical element 105 dubnium (Db) for the town.
Administrative and municipal status
Within the framework of administrative divisions, it is incorporated as Dubna Town Under Oblast Jurisdiction—an administrative unit with the status equal to that of the districts. As a municipal division, Dubna Town Under Oblast Jurisdiction is incorporated as Dubna Urban Okrug.
Demographics
Economics
Before the dissolution of the Soviet Union, JINR and MKB Raduga were the main employers in the town. Since then their role has decreased significantly. Several small industrial enterprises have emerged; however, the town still experiences some employment difficulties. Proximity to Moscow allows many to commute and work there. Plans by AFK Sistema and other investors, including government structures, have been announced to build a Russian analogue of Silicon Valley in Dubna. As of the beginning of 2007, nothing had commenced.
Transport
Dubna is the starting point of the Moscow Canal. In addition to the canal, Dubna is connected to Moscow with the А104 highway, and the Savyolovsky suburban railway line provides access to Moscow.
Public transport connections to Moscow include express trains, suburban trains, and bus shuttles departing from the Savyolovsky Rail Terminal.
Culture
Among the city's cultural facilities are: the Mir House of Culture, the Oktyabr Palace of Culture, a movie theater, 21 libraries, 4 music schools and a school of arts. In 1990, the Dubna Symphony Orchestra was established.
Museums
Museum of Archeology and Local History of Dubna
JINR Museum of the History of Science and Technology
Museum of Natural History at Dubna International University
Museum of Locks
Museum of Sports
Svetoch Culturohistorical Center
Cinema
A variety of movies and miniseries were filmed in the city, such as:
Volga-Volga (1938)
Ballad of Siberia (1948)
Nine Days in One Year (1962)
All Remains to People (1963)
Vasili and Vasilisa (1981)
Katya Ismailova (1994)
Law of the Lawless (2002)
Sports
Dubna is located on the Moscow Canal and the Ivankovo Reservoir, making it a good destination for water sports such as windsurfing, kitesurfing, and water skiing. In 2004, for the first time, a stage of the Water Ski World Cup took place in the city. In 2011, Dubna hosted the World Waterskiing Championships.
Dubna's sports facilities include two stadiums, a waterskiing stadium on the Volga River, four swimming pools, tennis courts, and five sports complexes.
Trivia
One of the world's tallest statues of Vladimir Lenin, built in 1937, is located at Dubna at the confluence of the Volga River and the Moscow Canal. The accompanying statue of Joseph Stalin of similar size was demolished in 1961 during the period of de-Stalinization.
Twin towns and sister cities
Dubna is twinned with:
Giv'at Shmuel, Israel
La Crosse, Wisconsin, United States
Alushta, Ukraine
Kurchatov, Kazakhstan
Lincang, China
Nová Dubnica, Slovakia
Gallery
References
Notes
Sources
External links
Official website of Dubna
Dubna Business Directory
News of Dubna
Cities and towns in Moscow Oblast
Populated places established in 1956
Populated places on the Volga
Nuclear research institutes
Cities and towns built in the Soviet Union
Naukograds | Dubna | [
"Engineering"
] | 1,504 | [
"Nuclear research institutes",
"Nuclear organizations"
] |
12,118,475 | https://en.wikipedia.org/wiki/Gather.com | Gather or Gather.com was a social networking website designed to encourage interaction by discussion of various social, political and cultural topics. Its headquarters were located in Boston, Massachusetts. It became defunct in 2015.
History
The website was founded in 2005 by Tom Gerace, an entrepreneur who previously founded the affiliate marketing company, Be Free. Gather attracted investments and partnerships from media companies ranging from McGraw-Hill and Hearst Publications to American Public Media and a member of the McClatchy family. Starbucks chose Gather over other social networking sites because of its adult demographic. Former Lotus CEO Jim Manzi was an early investor. Gather was one of very few 2006 startups to use television advertising.
Operations
Members received their own subdomain, where they could publish articles and share comments. Also, members could create groups pertaining to their own efforts, or to any other topic. Writers could also comment on each other's works.
In 2010, Gather management, including owner Tom Gerace, started a business-making subdomain entitled "The Gather News Channel." This consisted of a series of subdomains such as Celebrities, Entertainment, Business, Technology, Politics, Sports and News. Writers were paid to write short articles on a subject for which they were selected from the above topics. Additional money was possible based on a formula of revenue per views. The "Gather News Channel" existed until 2014 when Gather Inc., including Gather.com, was sold to Kitara Media.
In October 2015, Gather became defunct.
References
External links
CEO Tom Gerace interview, Social Networking Watch, Nov '08
Internet properties established in 2005
Defunct social networking services
Companies based in Boston
American political blogs
Internet properties disestablished in 2015 | Gather.com | [
"Technology"
] | 342 | [
"Computing stubs",
"World Wide Web stubs"
] |
12,118,731 | https://en.wikipedia.org/wiki/Denison%20Dam | Denison Dam, also known as Lake Texoma Dam, is a dam located on the Red River between Texas and Oklahoma that impounds Lake Texoma. The purpose of the dam is flood control, water supply, hydroelectric power production, river regulation, navigation and recreation. It was also designated by the American Society of Civil Engineers as a National Historic Civil Engineering Landmark in 1993.
History
Completed in 1943 primarily as a flood control project, it was at the time the "largest rolled-earth fill dam in the world". Only five times has the lake reached the dam's spillway at a height of above sea level: 1957, 1990, 2007, and twice in 2015. It takes its name from Denison, Texas, just downriver from the damface.
Denison Dam contains a total of 18.8 million cubic yards (14,000,000 m³) of rolled-earth fill. It produces roughly 250,000 megawatt hours of electricity per year, while Lake Texoma provides nearly of water storage for local communities under five permanent contracts.
In addition to two federally managed wildlife-refuge areas, Denison Dam has made possible 47 recreational areas managed by the U.S. Army Corps of Engineers, two state parks (one in Oklahoma and one in Texas), as well as of open public land used for hunting.
General Lucius D. Clay was the principal manager of the project.
Oklahoma State Highway 91 and, to a lesser extent, Texas State Highway 91 cross over the dam.
German prisoners of war from Rommel's Afrika Korps were used in the construction of the Denison Dam and Lake Texoma during World War II. They performed non-war-related work such as clearing trees, lining drainage ditches, and building a bathroom facility. They also helped clear more than 7,000 acres for the lake.
POWs were housed in camps in Tishomingo and Powell, Oklahoma.
The government paid $1.50 per day per prisoner, and the POWs received 80 cents in canteen coupons. The difference went to the federal treasury to pay for the POW program.
References
External links
Army Corp on floodstage, Retrieved July 6, 2007
ASCE Denison Dam landmark page, Retrieved January 26, 2022
Buildings and structures in Bryan County, Oklahoma
Buildings and structures in Grayson County, Texas
Dams in Oklahoma
Dams in Texas
Historic Civil Engineering Landmarks
Hydroelectric power plants in Texas
Hydroelectric power plants in Oklahoma
Earth-filled dams
United States Army Corps of Engineers dams
Energy infrastructure completed in 1943
Dams completed in 1943
Red River of the South | Denison Dam | [
"Engineering"
] | 526 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
12,118,737 | https://en.wikipedia.org/wiki/List%20of%20systems%20engineering%20universities | This list of systems engineering at universities gives an overview of the different forms of systems engineering (SE) programs, faculties, and institutes at universities worldwide. Since there is no clear consensus on what constitutes a systems engineering degree, this list simply identifies the college and department offering degrees and the degrees offered.
Education in systems engineering is often observed to be an extension of regular engineering courses, reflecting the industry attitude that engineering professionals need a foundational background in one of the traditional engineering disciplines (e.g. civil engineering, electrical engineering, industrial engineering) plus professional, real-world experience to be effective as systems engineers. Undergraduate university programs in systems engineering are rare.
Education in systems engineering can be viewed as systems-centric or domain-centric.
Systems-centric programs treat systems engineering as a separate discipline with most courses focusing on systems engineering theory and practice.
Domain-centric programs offer systems engineering topics as an option that can be embedded within the major domains or fields of engineering.
Both categories strive to educate systems engineers who are capable of overseeing interdisciplinary projects with the depth required of a core engineer.
The International Council on Systems Engineering (INCOSE) maintains a continuously updated Directory of Systems Engineering Academic Programs worldwide.
Systems engineering degrees in Europe
Systems engineering degrees in the US
As of 2009, some 76 institutions in the United States offer 131 undergraduate and graduate programs in systems engineering.
Systems Engineering degrees in other countries
Research institutes for systems engineering
In Asia:
Research Center for Modeling & Simulation, National University of Science and Technology, Islamabad, Pakistan.
In Europe:
Hasso Plattner Institute related to the University of Potsdam, Germany.
Informatics Research Centre at the University of Reading Business School, Reading, Berkshire, England, UK.
I2S - Institut d'ingénierie des systèmes at the École Polytechnique Fédérale de Lausanne in Lausanne, Switzerland.
Systems Engineering Estimation and Decision Support (SEED) at the University of the West of England CEMS, Bristol, England, UK.
Technical University of Hamburg, Hamburg, Germany.
UCL Centre for Systems Engineering (UCLse) in the Mullard Space Science Laboratory, London, England, UK.
Research Centre for Automatic Control (CRAN), a joint research unit with Nancy-Université and CNRS, Nancy, France.
In the USA:
GTRI Electronic Systems Laboratory (ELSYS) at the Georgia Tech Research Institute, Atlanta, Georgia, USA.
GTRI Aerospace, Transportation and Advanced Systems Laboratory at the Georgia Tech Research Institute, Atlanta, Georgia, USA.
Systems Engineering Research Center (SERC) at Stevens Institute of Technology, Hoboken, New Jersey, USA.
Western Transportation Institute at Montana State University, Montana, USA.
In the Caribbean:
INTEC Instituto Tecnológico de Santo Domingo at Los Proceres, Santo Domingo, Dominican Republic.
See also
List of types of systems engineering
List of systems engineering books (WikiProject System list)
List of systems engineers
List of systems science organizations
References
External links
INCOSE "Directory of Systems Engineering Academic Programs World Wide"
Systems engineering | List of systems engineering universities | [
"Engineering"
] | 624 | [
"Electrical engineering",
"Electrical-engineering-related lists",
"Lists of engineering schools",
"Engineering universities and colleges"
] |
12,119,816 | https://en.wikipedia.org/wiki/Collection%20%28abstract%20data%20type%29 | In computer programming, a collection is an abstract data type that is a grouping of items that can be used in a polymorphic way.
Often, the items are of the same data type, such as int or string. Sometimes the items derive from a common type, even deriving only from the most general type of a programming language such as object or variant.
Although easily confused with implementations in programming languages, collection, as an abstract concept, refers to a mathematical notion that can be misunderstood when the focus is on an implementation. For example, a priority queue is often implemented as a heap, while an associative array is often implemented as a hash table, so these abstract types are often referred to by the name of their preferred implementation, as a "heap" or a "hash", though this is conceptually incorrect.
Subtypes
Other abstract data types are more specific than collection.
Linear
Some collections maintain a linear ordering of items with access to one or both ends. The data structure implementing such a collection need not be linear. For example, a priority queue is often implemented as a heap, which is a kind of tree (see the sketch after the list below).
Notable linear collections include:
list
stack
queue
priority queue
double-ended queue
double-ended priority queue
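As an illustration, the following minimal Python sketch (standard library only) shows common realizations of these linear abstract types; note that the priority queue is backed by a heap, echoing the point above that the implementing data structure need not be linear.

```python
from collections import deque
import heapq

# Stack: last-in, first-out; a plain list suffices.
stack = []
stack.append('a')
stack.append('b')
assert stack.pop() == 'b'

# Queue: first-in, first-out; deque gives O(1) operations at both ends.
queue = deque()
queue.append('a')
queue.append('b')
assert queue.popleft() == 'a'

# Double-ended queue: push and pop at either end.
dq = deque([1, 2, 3])
dq.appendleft(0)   # deque([0, 1, 2, 3])
dq.pop()           # removes 3

# Priority queue: smallest item first, backed here by a binary heap,
# a tree-shaped structure implementing a linear abstract type.
pq = []
heapq.heappush(pq, (2, 'low'))
heapq.heappush(pq, (1, 'high'))
assert heapq.heappop(pq) == (1, 'high')
```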
Associative
Some collections are interpreted as a sort of function: given an input, the collection yields an output.
Notable associative collections include:
set
multiset
associative array
graph
tree
A set can be interpreted as a specialized multiset, which in turn is a specialized associative array, in each case by limiting the possible values; a set, for example, can be represented by its indicator function, as sketched below.
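A brief Python sketch of this chain of specializations, treating a multiset as an associative array from items to counts and a set as an associative array from items to Booleans, i.e. its indicator function (the names used here are illustrative only):

```python
from collections import Counter

# Associative array: arbitrary keys mapped to arbitrary values.
assoc = {'apple': 0.5, 'pear': 0.8}

# Multiset: an associative array whose values are restricted to counts.
multiset = Counter(['apple', 'apple', 'pear'])   # apple -> 2, pear -> 1

# Set: values restricted further to True/False; membership is the
# indicator function chi(x), True exactly when x is in the set.
indicator = {'apple': True, 'pear': True}

def chi(x):
    return indicator.get(x, False)

assert chi('apple') and not chi('plum')
# Python's built-in set abstracts the indicator representation away:
assert set(multiset) == {'apple', 'pear'}
```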
Implementation
As an abstract data type, collection does not prescribe an implementation, though type theory describes implementation considerations.
Some collection types are provided as primitive data types in a language, such as lists, while more complex collection types are implemented as composite data types in libraries, sometimes in a language's standard library (a short sketch follows the list). Examples include:
C++: known as containers, implemented in C++ Standard Library and earlier Standard Template Library
Java: implemented in the Java collections framework
Oracle PL/SQL implements collections as programmer-defined types
Python: some built-in, others implemented in the collections library
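For instance, in Python the split between primitive and library-provided collections looks as follows; this is a generic sketch, not tied to any particular application:

```python
from collections import Counter, defaultdict, deque

nums = [3, 1, 4, 1, 5]           # built-in list (a primitive collection type)
unique = set(nums)               # built-in set
counts = Counter(nums)           # multiset from the standard library
buckets = defaultdict(list)      # associative array with default values
for n in nums:
    buckets[n % 2].append(n)     # group by parity: {1: [3, 1, 1, 5], 0: [4]}
window = deque(nums, maxlen=3)   # bounded double-ended queue: [4, 1, 5]
print(unique, counts, dict(buckets), list(window))
```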
References
External links
Apache Commons Collections.
AS3Commons Collections Framework ActionScript3 implementation of the most common collections.
CollectionSpy — A profiler for Java's Collections Framework.
Guava.
Mango Java library.
Abstract data types | Collection (abstract data type) | [
"Mathematics"
] | 491 | [
"Type theory",
"Mathematical structures",
"Abstract data types"
] |
12,120,000 | https://en.wikipedia.org/wiki/27%20Club | The 27 Club is an informal list consisting mostly of popular musicians, often expanded by artists, actors, and other celebrities who died at age 27. Although the claim of a "statistical spike" for the death of musicians at that age has been refuted by scientific research, it remains a common cultural conception that the phenomenon exists, with many celebrities who die at 27 noted for their high-risk lifestyles.
Cultural perception
Beginning with the deaths of several 27-year-old popular musicians between 1969 and 1971 (such as Brian Jones, Jimi Hendrix, Janis Joplin, and Jim Morrison), dying at the age of 27 came to be, and remains, a perennial subject of popular culture, celebrity journalism, and entertainment industry lore. This perceived phenomenon, which came to be known as the "27 Club", attributes special significance to popular musicians, artists, actors, and other celebrities who died at age 27, often as a result of drug and alcohol abuse or violent means such as homicide, suicide, or transportation-related accidents. The cultural interpretation of events gave rise to an urban myth that celebrity deaths are more common at 27, a claim that has been refuted by statistical research as discussed in the scientific studies section below. However, a subsequent statistical analysis demonstrated that the myth itself has shaped cultural memory by boosting the visibility and cultural prominence of those who die at 27. This phenomenon, deemed the "27 Club effect", reflects the power of collective storytelling and media reinforcement in turning unrelated events into lasting cultural narratives.
White lighter myth
The white lighter myth or white lighter curse is an urban legend based on the 27 Club in which it is claimed several musicians and artists died while in possession of a white disposable cigarette lighter, leading such items to become associated with bad fortune. The myth is primarily based on the deaths of Jimi Hendrix, Janis Joplin, Jim Morrison, and Kurt Cobain. The myth has been integrated with cannabis culture.
In 2017, Snopes published an article discrediting the theory, noting that Bic did not begin producing white disposable lighters until 1973, several years after the deaths of some members of the 27 Club (including Hendrix, Joplin, and Morrison) and that disposable lighters produced by other companies were not widely available at that time.
History
Brian Jones, Jimi Hendrix, Janis Joplin, and Jim Morrison all died at the age of 27 between 1969 and 1971. At the time, the coincidence gave rise to some comment, but, according to Hendrix and Kurt Cobain's biographer, Charles R. Cross: "It wasn't until Kurt Cobain took his own life in 1994 that the idea of the 27 Club arrived in the popular zeitgeist." Cross claims that the "launch of the Club concept" can be traced to the growing influence of the Internet and sensational celebrity journalism on popular culture in the years following Cobain's death, as well as media interpretations of a statement by Cobain's mother, Wendy Fradenburg Cobain O'Connor, quoted in the local Aberdeen, Washington, newspaper The Daily World, and subsequently carried worldwide by the Associated Press: "Now he's gone and joined that stupid club. I told him not to join that stupid club." Many contemporary journalists interpreted her words as referring to the infamous untimely deaths of fellow rock musicians like Hendrix, Joplin, and Morrison, a view shared by Cross and R. Gary Patterson, chronicler of rock music urban myth.
The intended meaning of "that stupid club" referred to by Cobain's mother is disputed. In his analysis of how her quote helped popularize the 27 Club, Eric Segalstad, author of The 27s: The Greatest Myth of Rock & Roll, asserted that she was actually referring to the "tragic family matter" of Cobain's two uncles and his great-uncle, all of whom had committed suicide. Other contemporary journalists linked her quote to the then-recent heroin-related deaths of fellow young Seattle rock musicians Stefanie Sargent of 7 Year Bitch and Andrew Wood of Mother Love Bone, both aged 24. Cross, himself, dismissed "the absurd notion that Kurt Cobain intentionally timed his death so he could join the 27 Club", noting that Cobain "had nearly died from drug overdoses on at least two dozen occasions in the year before his death... [and] made several previous suicide attempts at various ages."
In 2011, seventeen years after Cobain's death, Amy Winehouse died at the age of 27, prompting a renewed swell of media attention devoted to the 27 Club. Three years earlier, Winehouse's personal assistant, Alex Haines, told the British press that Winehouse, then 25, feared she would join Jim Morrison, Brian Jones, and Kurt Cobain in dying at 27: "She reckoned she would join the 27 Club of rock stars who died at that age. She told me, 'I have a feeling I'm gonna die young.'"
Scientific studies
Despite the cultural significance given to musician and celebrity deaths at age 27, the common claim that they are statistically more common at this age is an urban myth, refuted by scientific research.
A study by university academics published in the British Medical Journal in December 2011 concluded that there was no increase in the risk of death for musicians at the age of 27, stating that there were equally small increases at ages 25 and 32. The study noted that young adult musicians have a higher death rate than the general young adult population, surmising that the conclusion that could be drawn is as such: "fame may increase the risk of death among musicians, but this risk is not limited to age 27".
A 2014 article at The Conversation suggested that statistical evidence shows popular musicians are most likely to die at the age of 56 (2.2% compared to 1.3% at 27).
In popular culture
The 27 Club frequently appears by name and reference in popular culture and mass media. Several exhibitions have been devoted to the idea, as well as novels, films, stage plays, songs, video games, and comics.
Music
The title of the song "27" by Fall Out Boy from their 2008 album Folie à Deux is a reference to the club. The lyrics explore the hedonistic lifestyles common in rock and roll. Pete Wentz, the primary lyricist of Fall Out Boy, wrote the song because he felt that he was living a similarly dangerous lifestyle.
John Craigie's song "28", which appeared on his 2009 album Montana Tale, and 2018 live album Opening for Steinbeck, is written from the perspective of 27 Club members Jim Morrison, Janis Joplin, and Kurt Cobain, as each contemplates their respective mortality and imagines what they would do differently "if I could only make it to twenty-eight". Craigie wrote the song when he himself was age 27.
The theme is referenced in the song "27 Forever" by Eric Burdon, on his 2013 album 'Til Your River Runs Dry.
Magenta's studio album The Twenty Seven Club (2013) directly references the club. Each track is a tribute to a member of the club.
Halsey's song "Colors", from her debut album Badlands (2015), includes the line: "I hope you make it to the day you're 28 years old."
JPEGMafia's album Black Ben Carson (2016) includes a song titled "The 27 Club", whose title refers to the club. He references members Jimi Hendrix, Janis Joplin, and Kurt Cobain.
Adore Delano released a song called "27 Club" on her studio album Whatever (2017), with the repeated lyric: "All of the legends die at twenty-seven." Delano was aged 27 at the time of release.
Juice Wrld referenced the club on his song "Legends" (2018), where he says: "What's the 27 Club? We ain't making it past 21."
Video games
In the video game Hitman (2016), one of the in-game missions, Club 27, involves killing an indie musician who is celebrating his 27th birthday.
Identified members
Because the 27 Club is entirely notional, it has no official membership. The table below lists individuals explicitly described as "members" of the 27 Club by journalists and writers in various books and publications.
Some deaths linked to the 27 Club pre-date its emergence as a cultural phenomenon. Blues musician Robert Johnson, who died in 1938, is one of the earliest popular musicians included by various sources.
Despite the club's original association with the deaths of popular musicians, later sources began to link actors, artists, athletes, and other celebrities to the 27 Club. Rolling Stone included television actor Jonathan Brandis, who died by suicide in 2003, in a list of 27 Club members. Anton Yelchin, who had played in a punk rock band but was primarily known as a film actor, was also described as a member of the club upon his death in 2016. Likewise, Jean-Michel Basquiat has been linked to the club despite being known primarily as a painter, with his music career being relatively brief and obscure.
See also
Apophenia
Curse of the ninth
List of deaths in rock and roll
List of murdered hip hop musicians
Saturn return
References
Bibliography
1994 introductions
Cultural aspects of death
Death-related lists
Lists of musicians
Numerology | 27 Club | [
"Mathematics"
] | 1,933 | [
"Numerology",
"Mathematical objects",
"Numbers"
] |
12,121,068 | https://en.wikipedia.org/wiki/Open%20cluster%20remnant | In astronomy, an open cluster remnant (OCR) is the final stage in the evolution of an open star cluster.
Theory
Viktor Ambartsumian (1938) and Lyman Spitzer (1940) showed that, from a theoretical point of view, it was impossible for a star cluster to evaporate completely; furthermore, Spitzer pointed out two possible final results for the evolution of a star cluster: evaporation provokes physical collisions between stars, or evaporation proceeds until a stable binary or higher multiplicity system is produced.
Observations
Using objective-prism plates, Lodén (1987, 1988, 1993) has investigated the possible population of open cluster remnants in our Galaxy under the assumption that the stars in these clusters should have similar luminosity and spectral type. He found that about 30% of the objects in his sample could be catalogued as a possible type of cluster remnant. The membership of these objects is ≥ 15. The typical age of these systems is about 150 Myr, with a range of 50-200 Myr. They show a significant density of binaries and a large number of optical binaries. The stars of these OCRs tend to be massive and hence early-type (A-F) stars, although this observational method includes a noticeable selection effect because bright early-type spectra are easier to detect than fainter and later ones. In fact, almost no stars with spectral type later than F appear among his objects.
On the other hand, his results were not fully conclusive because there are known regions in the sky with many stars of the same spectral type but in which it is difficult to find two stars with the same proper motions or radial velocity. A striking example of this fact is Upgren 1; initially, it was suggested that this small group of seven F stars was the remnant of an old cluster (Upgren & Rubin 1965) but later, Gatewood et al. (1988) concluded that Upgren 1 is only a chance alignment of F stars resulting from the close passage of members of two dynamically different sets of stars.
Stefanik et al. (1997) subsequently showed that one of the sets is formed by 5 stars, including a long-period binary and an unusual triple system.
Simulations
Regarding numerical simulations, for systems with some 25 to 250 stars, von Hoerner (1960, 1963), Aarseth (1968) and van Albada (1968) suggested that the final outcome of the evolution of an open cluster is one or more tightly bound binaries (or even a hierarchical triple system). Van Albada pointed out several observational candidates (σ Ori, ADS 12696, ρ Oph, 1 Cas, 8 Lac and 67 Oph) as being OCRs and Wielen (1975) indicated another one, the Ursa Major moving group (Collinder 285).
References
Aarseth, S. J.; 1968, Bull. Astron. Ser., 3, 3, 105
van Albada, T. S.; 1968, Bull. Astron. Inst. Neth., 19, 479
Ambartsumian, V. A.; 1938, Ann. Len. State Univ., # 22, 4, 19 (English translation in: Dynamics of Star Clusters, eds. J. Goodman, P. Hut, (Dordrecht: Reidel) p. 521)
Gatewood, G.; De Jonge, J. K.; Castelaz, M.; et al., 1988, ApJ, 332, 917
von Hoerner, S.; 1960, Z. Astrophys., 50, 184
von Hoerner, S.; 1963, Z. Astrophys., 57, 47
Lodén, L. O.; 1987, Ir. Astron. J., 18, 95
Lodén, L. O.; 1988, A&SS, 142, 177
Lodén, L. O.; 1993, A&SS, 199, 165
Spitzer, L.; 1940, MNRAS, 100, 397
Stefanik, R. P.; Caruso, J. R.; Torres, G.; Jha, S.; Latham, D. W.; 1997, Baltic Astronomy, 6, 137
Upgren, A. R.; Rubin V. C.; 1965, PASP, 77, 355
Wielen, R.; 1975, in: Dynamics of Stellar Systems, ed. A. Hayli, (Dordrecht: Reidel) p. 97
Further reading
Bica, E.; Santiago, B. X.; Dutra, C. M.; Dottori, H.; de Oliveira, M. R.; Pavani D., 2001, A&A, 366, 827-833
Carraro, G.; 2002, A&A, 385, 471-478
Carraro, G.; de la Fuente Marcos, Raúl; Villanova, S.; Moni Bidin, C.; de la Fuente Marcos, Carlos; Baumgardt, H.; Solivella, G.; 2007, A&A, 466, 931-941
Carraro, G.; 2006, Bulletin of the Astronomical Society of India, 34, 153-162
de la Fuente Marcos, Raúl; 1998, A&A, 333, L27-L30
de la Fuente Marcos, Raúl; de la Fuente Marcos, Carlos; Moni Bidin, C.; Carraro, G.; Costa, E.; 2013, MNRAS, 434, 194-208
Kouwenhoven, M. B. N.; Goodwin, S. P.; Parker, R. J.; Davies, M. B.; Malmberg, D.; Kroupa, P.; 2010, MNRAS, 404, 1835-1848
Moni Bidin, C.; de la Fuente Marcos, Raúl; de la Fuente Marcos, Carlos; Carraro, G.; 2010, A&A, 510, A44
Pavani, D. B.; Bica, E.; 2007, A&A, 468, 139-150
Pavani, D. B.; Bica, E.; Ahumada, A. V.; Clariá, J. J.; 2003, A&A, 399, 113-120
Pavani, D. B.; Bica, E.; Dutra, C. M.; Dottori, H.; Santiago, B. X.; Carranza, G.; Díaz, R. J.; 2001, A&A, 374, 554-563
Pavani, D. B.; Kerber, L. O.; Bica, E.; Maciel, W. J.; 2011, MNRAS, 412, 1611-1626
Villanova, S., Carraro, G.; de la Fuente Marcos, Raúl; Stagni, R.; 2004, A&A, 428, 67-77
Star clusters
Remnant
Stellar evolution | Open cluster remnant | [
"Physics",
"Astronomy"
] | 1,479 | [
"Star clusters",
"Astronomical objects",
"Astrophysics",
"Stellar evolution"
] |
12,122,236 | https://en.wikipedia.org/wiki/WS-Federation%20Active%20Requestor%20Profile | WS-Federation Active Requestor Profile is a Web Services specification - intended to work with the WS-Federation specification - which defines how identity, authentication and authorization mechanisms work across trust realms. The specification deals specifically with how applications, such as SOAP-enabled applications, make requests using these mechanisms. By contrast, the WS-Federation Passive Requestor Profile deals with "passive requestors" such as web-browsers. WS-Federation Active Requestor Profile was created by IBM, BEA Systems, Microsoft, VeriSign, and RSA Security.
See also
List of Web service specifications
References
External links
WS-Federation: Active Requestor Profile specification
Security | WS-Federation Active Requestor Profile | [
"Technology"
] | 136 | [
"Computing stubs",
"Computer network stubs"
] |
12,122,248 | https://en.wikipedia.org/wiki/Erythritol%20tetranitrate | Erythritol tetranitrate (ETN) is an explosive compound chemically similar to PETN, though it is thought to be slightly more sensitive to friction and impact.
Like many nitrate esters, ETN acts as a vasodilator, and was the active ingredient in the original "sustained release" tablets, called "nitroglyn", made under a process patent in the early 1950s. Ingesting ETN or prolonged skin contact can lead to absorption and what is known as a "nitro headache".
History
ETN was discovered by John Stenhouse in 1849 by nitrating erythritol, which he had recently discovered. He described its explosive properties but suggested an incorrect formula, as atomic weights had not yet been accurately determined.
Its vasodilator properties have been researched since 1895.
DuPont researched the explosive after the war, receiving a patent in 1928, but it was never commercialized due to the difficulty of erythritol synthesis. Only with the advent of genetically engineered yeasts in the 1990s did the carbohydrate become widely available.
Properties
ETN has a relatively high velocity of detonation of 8,206 m/s at a density of 1.7219 (±0.0025) g/cm3. It is white in color and odorless. ETN is commonly cast into mixtures with other high explosives. It is somewhat sensitive to shock and friction, so care must be taken while handling. ETN dissolves readily in acetone and other ketone solvents. The impact and friction sensitivity is slightly higher than that of pentaerythritol tetranitrate (PETN). The sensitivity of melt-cast and pressed ETN is comparable. Lower nitrates of erythritol, such as erythritol trinitrate, are soluble in water, so they do not contaminate most ETN samples.
Much like PETN, ETN is known for having a very long shelf life. Studies that directly observed the crystalline structure saw no signs of decomposition after four years of storage at room temperature. ETN has a melting point of 61 °C, compared to PETN which has a melting point of 141.3 °C. Recent studies of ETN decomposition suggested a unimolecular rate-limiting step in which the O−NO2 bond is cleaved and begins the decomposition sequence.
ETN can and should be recrystallized, so as to remove the trapped acids left over from synthesis. Warm ethanol or methanol is a viable solvent (close to 10 g of ETN/100 ml EtOH). ETN will precipitate as big platelets with a bulk density of about 0.3 g/cm3 (fluffy material) when the ETN/ethanol solution is quickly poured into several liters of cold water. Smaller, fine crystals are produced by slow addition of water to said ETN/ethanol solution with intense mixing. Very fine crystals can be prepared by shock cooling of warm ETN/ethanol solution in a below −20 °C cooling bath. ETN can be easily hand pressed to about 1.2 g/cm3 (with a slight risk of accidental detonation).
Even small samples of ETN on the order of 20 mg can cause relatively powerful explosions verging on detonation when heated without confinement, e.g. when placed on a layer of aluminium foil and heated with flame from below.
ETN can be melt-cast in warm (about 65 °C) water. Slight decomposition is possible (often indicated by a change in color from white to very light yellow). No reports of runaway reactions leading to explosion have been confirmed (when melt-casting using only a bucket of warm water and recrystallized ETN). However, the handling sensitivity in the molten state is extremely poor (e.g., much worse than that of acetone peroxide), which makes melt-casting impractical for commercial applications.
Melt-cast ETN, if cooled down slowly over a period of 10–30 minutes, has a density of 1.70 g/cm3, detonation velocity of 8,040 m/s, and Pcj detonation pressure of about 300 kbar. Its brisance is far higher than that of Semtex (about 220 kbar, depending on brand).
Mixtures of melt-cast ETN with PETN (about 50:50% by weight) are about the most brisant explosives that can be produced by moderately equipped amateurs. These mixtures have Pcj slightly above 300 kbar and detonation velocity above 8 km/s. This is close to the maximum of fielded military explosives like LX-10 or EDC-29 (about 370 kbar and close to 9 km/s).
ETN is often plasticized using PIB/synthetic oil binders (very comparable to the binder system in C4) or using liquid nitric esters. The PIB-based plastic explosives are nontoxic and completely comparable to C4 or Semtex with Pcj of 200–250 kbar, depending on density (influenced by crystal size, binder amount, and amount of final rolling). EGDN/ETN/NC systems are toxic to touch, quite sensitive to friction and impact, but generally slightly more powerful than C4 (Pcj of about 250 kbar and Edet of 5.3 MJ/kg) and more powerful than Semtex (Pcj of about 220 kbar and Edet below 5 MJ/kg) with Pcj of about 250–270 kbar and Edet of about 6 MJ/kg. Note that explosion modeling software and experimental tests will yield absolute detonation pressures that can vary by 5% or more with the relative proportions being maintained.
Melt-cast ETN gives invalid results in the Hess test, i.e. the deformation is greater than 26 mm, with the lead cylinder being completely destroyed. Semtex 1A gives only 21 mm in the same test, i.e. melt-cast ETN is at least 20% more brisant than Semtex 1A.
Melt-cast ETN and high-density/low-inert-content ETN plastic explosives are among the materials on terrorism "watch-lists".
Oxygen balance
One positive characteristic of ETN that PETN does not possess is a positive oxygen balance, which means that ETN possesses more than enough oxygen in its structure to fully oxidize all of its carbon and hydrogen upon detonation. This can be seen in the schematic chemical equation below.
2 C4H6N4O12 → 8 CO2 + 6 H2O + 4 N2 + 1 O2
Whereas PETN decomposes to:
2 C5H8N4O12 → 6 CO2 + 8 H2O + 4 N2 + 4 CO
The carbon monoxide (CO) still requires oxygen to complete oxidation to carbon dioxide (CO2). The decomposition chemistry of ETN has recently been elucidated in a detailed study.
Thus, for every two moles of ETN that decompose, one free mole of O2 is released. This oxygen could be used to oxidize an added metal dust, or an oxygen-deficient explosive such as TNT or PETN. A chemical equation showing how the oxygen from ETN oxidizes PETN is given below. The extra oxygen from the ETN oxidizes the carbon monoxide (CO) to carbon dioxide (CO2).
2 C4H6N4O12 + 1 C5H8N4O12 → 13 CO2 + 10 H2O + 6 N2
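Oxygen balance itself is simple arithmetic: for a compound CxHyNwOz of molar mass M, the conventional formula is OB% = -1600 × (2x + y/2 - z) / M. The sketch below applies it to ETN and PETN; the formula and molar masses are standard values rather than figures from this article, so treat the printed numbers as approximate.

```python
def oxygen_balance(c, h, o, molar_mass):
    """OB% = -1600 * (2*c + h/2 - o) / M for a CHNO compound.

    Positive values mean surplus oxygen after full oxidation to CO2 and H2O;
    nitrogen leaves as N2 and does not enter the formula.
    """
    return -1600.0 * (2 * c + h / 2 - o) / molar_mass

# ETN, C4H6N4O12 (M ~ 302.11 g/mol): positive balance, surplus O2.
print(round(oxygen_balance(4, 6, 12, 302.11), 1))   # ~ +5.3
# PETN, C5H8N4O12 (M ~ 316.14 g/mol): negative balance, leftover CO.
print(round(oxygen_balance(5, 8, 12, 316.14), 1))   # ~ -10.1
```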
Manufacture
Like other nitrated polyols, ETN is made by nitrating erythritol either through the mixing of concentrated sulfuric acid and a nitrate salt, or by using a mixture of sulfuric and nitric acid.
See also
Mannitol hexanitrate
Xylitol pentanitrate
References
Explosive chemicals
Nitrate esters
Sugar alcohol explosives | Erythritol tetranitrate | [
"Chemistry"
] | 1,673 | [
"Explosive chemicals"
] |
12,122,436 | https://en.wikipedia.org/wiki/C2H4O | The molecular formula C2H4O (molar mass: 44.05 g/mol, exact mass: 44.0262 u) may refer to:
Acetaldehyde (ethanal)
Ethenol (vinyl alcohol)
Ethylene oxide (epoxyethane, oxirane) | C2H4O | [
"Chemistry"
] | 62 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
12,122,451 | https://en.wikipedia.org/wiki/C2H2O |
The molecular formula C2H2O (molar mass: 42.04 g/mol, exact mass: 42.0106 u) may refer to:
Ethenone, or ketene
Ethynol, or hydroxylacetylene
Oxirene | C2H2O | [
"Chemistry"
] | 70 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
12,122,452 | https://en.wikipedia.org/wiki/C2H2O2 |
The molecular formula C2H2O2 may refer to:
Acetylenediol, or ethynediol
Glyoxal
Acetolactone | C2H2O2 | [
"Chemistry"
] | 49 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
12,122,459 | https://en.wikipedia.org/wiki/C2H3ClO |
The molecular formula C2H3ClO (molar mass: 78.50 g/mol, exact mass: 77.9872 u) may refer to:
Acetyl chloride
Chloroacetaldehyde
Chloroethylene oxide | C2H3ClO | [
"Chemistry"
] | 67 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
12,122,465 | https://en.wikipedia.org/wiki/C2H3N |
The molecular formula C2H3N (molar mass: 41.05 g/mol, exact mass: 41.0265 u) may refer to:
Acetonitrile (MeCN)
Azirine
Methyl isocyanide, or isocyanomethane
Ethenimine | C2H3N | [
"Chemistry"
] | 76 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
12,122,476 | https://en.wikipedia.org/wiki/C2H5NO |
The molecular formula C2H5NO (molar mass: 59.07 g/mol, exact mass: 59.03711 u) may refer to:
Acetaldoxime
Acetamide
Aminoacetaldehyde
N-Methylformamide (NMF) | C2H5NO | [
"Chemistry"
] | 71 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
12,122,477 | https://en.wikipedia.org/wiki/C2H5NO2 |
The molecular formula C2H5NO2 (molar mass: 75.07 g/mol, exact mass: 75.0320 u) may refer to:
Acetohydroxamic acid
Ethyl nitrite
Glycine
Methyl carbamate
Nitroethane | C2H5NO2 | [
"Chemistry"
] | 72 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
12,122,480 | https://en.wikipedia.org/wiki/C2H6OS |
The molecular formula C2H6OS (molar mass: 78.13 g/mol, exact mass: 78.0139 u) may refer to:
Dimethyl sulfoxide (DMSO)
2-Mercaptoethanol, or β-mercaptoethanol | C2H6OS | [
"Chemistry"
] | 76 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
12,122,482 | https://en.wikipedia.org/wiki/C2H6O2 | The molecular formula C2H6O2 (molar mass: 62.07 g/mol, exact mass: 62.03678 u) may refer to:
Ethylene glycol (ethane-1,2-diol)
Ethyl hydroperoxide
Methoxymethanol
Dimethyl peroxide | C2H6O2 | [
"Chemistry"
] | 71 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
12,122,485 | https://en.wikipedia.org/wiki/C2H6S |
The molecular formula C2H6S (molar mass: 62.13 g/mol, exact mass: 62.0190 u) may refer to:
Dimethyl sulfide (DMS), or methylthiomethane
Ethanethiol, or ethyl mercaptan | C2H6S | [
"Chemistry"
] | 73 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
12,122,487 | https://en.wikipedia.org/wiki/C2H7N |
The molecular formula C2H7N (molar mass: 45.07 g/mol, exact mass: 45.0579 u) may refer to:
Ethylamine (ethanamine)
Dimethylamine (N,N-dimethylamine) | C2H7N | [
"Chemistry"
] | 69 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
12,122,490 | https://en.wikipedia.org/wiki/C2H7NO |
C2H7NO may refer to:
1-Aminoethanol, an organic compound with the formula CH3CH(NH2)OH
N,O-Dimethylhydroxylamine, a methylated hydroxylamine commercially available as its hydrochloride salt
Ethanolamine, an organic chemical compound with the formula HOCH2CH2NH2 | C2H7NO | [
"Chemistry"
] | 87 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
12,122,491 | https://en.wikipedia.org/wiki/C2H8N2 |
The molecular formula C2H8N2 (molar mass: 60.10 g/mol, exact mass: 60.0688 u) may refer to:
Dimethylhydrazine
Unsymmetrical dimethylhydrazine
Symmetrical dimethylhydrazine
Ethylenediamine, or ethane-1,2-diamine | C2H8N2 | [
"Chemistry"
] | 90 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
12,123,468 | https://en.wikipedia.org/wiki/Fenbufen | Fenbufen is a nonsteroidal anti-inflammatory drug used to treat pain.
Fenbufen is a member of the propionic acid derivatives class of drugs.
It was introduced by American Cyanamid under the trade name Lederfen in the 1980s. Due to liver toxicity, it was withdrawn from markets in the developed world in 2010.
As of 2015 it was available in Taiwan and Thailand under several brand names.
Preparation
Fenbufen can be synthesized by acylation of biphenyl with succinic anhydride under Friedel-Crafts conditions.
References
Hepatotoxins
Nonsteroidal anti-inflammatory drugs
Withdrawn drugs | Fenbufen | [
"Chemistry"
] | 137 | [
"Drug safety",
"Withdrawn drugs"
] |
12,123,762 | https://en.wikipedia.org/wiki/Syllable%20stress%20of%20botanical%20Latin | Syllable stress of botanical names varies with the language spoken by the person using the botanical name. In English-speaking countries, botanical Latin places syllable stress for names derived from ancient Greek and Latin broadly according to two systems: either the reformed academic pronunciation, or the pronunciation developed in large part by British gardeners, horticulturists, naturalists, and botanists of the 19th century. The two systems of pronunciation are sometimes referred to as the "classical method" and the "ecclesiastical method". The two systems differ significantly in pronunciation, but little in syllable stress.
What follows are the rules of stress in the reformed academic pronunciation of Latin (intended to approximate the stress rules of ancient spoken Latin). Words of Greek origin are generally pronounced according to the same rules; native ancient Greek rules of stress are not used.
Generally in Latin each vowel or diphthong belongs to a single syllable. Classical Latin diphthongs are ae, au, and oe. Diphthongs from Greek can include oi, eu, ei, and ou, and ui also occasionally occurs in botanical Latin. Syllables end in vowels, unless there are multiple consonants, in which case the consonants are divided between the two syllables, with certain consonants being treated as pairs. In words of two syllables, the stress is on the first syllable. Words that contain three or more syllables have stresses accorded to their syllables by the quality and location of the different vowels in the words. In words of more than two syllables, the stress is on the penultimate syllable when the syllable contains a long vowel or diphthong, otherwise the stress is on the antepenultimate syllable.
Whether a vowel is long or short in a classical Latin word is a function of the vowel and its relationship to the consonants that precede or follow it. Modern Latin dictionaries and textbooks may contain diacritics called macrons for long vowels or breves for short vowels. Botanical Latin does not traditionally include macrons or breves, and they are prohibited (as diacritics) by the International Code of Nomenclature for algae, fungi, and plants (Article 60.6). Some books follow the mediaeval tradition of adding an acute accent to mark the stressed syllable.
Rules
To determine the position of the stress of Latin terms:
A penultimate vowel followed by two consonants is generally stressed. Thus Po-ten-tíl-la, as the I is followed by a double L.
A penultimate diphthong is likewise stressed. Thus Al-tháe-a, as AE is a diphthong (a toy illustration follows).
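As a rough illustration only, the toy Python function below applies these two rules to a word already split into syllables; real Latin syllabification and vowel-length determination are subtler, so the string heuristics here (a diphthong in the penult, or a penult ending in a consonant) are simplifying assumptions.

```python
DIPHTHONGS = ('ae', 'au', 'oe', 'oi', 'eu', 'ei', 'ou', 'ui')
VOWELS = set('aeiouy')

def stressed_syllable(syllables):
    """Return the 0-based index of the stressed syllable.

    Toy heuristic: with three or more syllables, stress the penult if it
    contains a diphthong or ends in a consonant (i.e. its vowel is followed
    by two consonants); otherwise stress the antepenult.
    """
    if len(syllables) <= 2:
        return 0                      # two syllables: stress the first
    penult = syllables[-2]
    heavy = any(d in penult for d in DIPHTHONGS) or penult[-1] not in VOWELS
    return len(syllables) - 2 if heavy else len(syllables) - 3

print(stressed_syllable(['po', 'ten', 'til', 'la']))  # 2 -> Po-ten-TIL-la
print(stressed_syllable(['al', 'thae', 'a']))         # 1 -> Al-THAE-a
```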
See also
Traditional English pronunciation of Latin
Latin regional pronunciation
Notes
Significant differences between the two systems occur in pronunciation of diphthongs "ae", "eu", "oi", consonants "c", "g", "m", "s", "w", "x", and consonant groups "bs", "bt", "cc", "gg", "gn", "ph", "sc", "ti".
References
Botanical nomenclature
Phonology | Syllable stress of botanical Latin | [
"Biology"
] | 635 | [
"Botanical terminology",
"Botanical nomenclature",
"Biological nomenclature"
] |
12,123,960 | https://en.wikipedia.org/wiki/Saccadic%20suppression%20of%20image%20displacement | Saccadic suppression of image displacement (SSID) is the phenomenon in visual perception where the brain selectively blocks visual processing during eye movements in such a way that large changes in object location in the visual scene during a saccade or blink are not detected.
The phenomenon described by Bridgeman et al. (Bridgeman, G., Hendry, D., & Stark, L., 1975) is characterized by the inability to detect changes in the location of a target when the change occurs immediately before, during, or shortly after the saccade, following a time course very similar to that of the suppression of visual sensitivity, with a magnitude perhaps even more striking than that of visual sensitivity (4 log units vs. 0.5–0.7 log units (Bridgeman et al., 1975; Volkmann, 1986)).
These results indicate that the human perceptual system neglects many useful pieces of information when it comes to spatially localizing target displacements occurring during a saccade. Surprisingly, in contrast to the perceptual system, the motor system is able to access precise spatial information in order to render precise motor actions during a saccade (Bridgeman, Lewis, Heit, & Nagle, 1979; Prablanc & Martin, 1992).
Elimination
If a target which is displaced during a saccade is not present at the end of the saccade, but reappears a short time later after a blank interval, subjects are able to regain the ability to successfully detect whether the target has moved. In these studies it was discovered that as the gap between the end of the saccade and the presentation of the shifted target increases, subjects become much more accurate at detecting displacement, and that with a gap of 150 ms participants had reached ceiling in their performance. Deubel and colleagues termed this the 'blanking effect' (Deubel et al., 1996; Deubel et al., 2004).
These results suggest that while extra-retinal information is present, and contains accurate localization information, it is only used when other information is not available – specifically retinal scene information. According to this model, three sources of information must be present in order to maintain visual constancy and successfully determine the direction of a target displacement: the target position prior to the saccade, extra retinal information, and a retinal error signal from the corrective saccade to determine the actual direction of target movement by comparing it to the efference copy and proprioceptive inflow (Deubel et al., 1996).
Tactile suppression of displacement
Using the device pictured on the right, Ziat et al. (2010) demonstrated a phenomenon akin to the saccadic suppression of image displacement (Bridgeman et al., 1975) in the tactile system. Under certain conditions participants failed to detect that dots had changed location as they moved their fingers over the tactile display.
See also
Chronostasis
Frame rate
List of cognitive biases
Raster scan
Saccadic masking
Transsaccadic memory
References
Bridgeman, G., Hendry, D., & Stark, L., Failure to detect displacement of visual world during saccadic eye movements. Vision Research, 15, 719–722. (1975) paper
Ziat, M., Hayward, V., Chapman, C. E., Ernst, M. & Lenay, C., Tactile suppression of displacement Experimental Brain Research, 206, 299–310. (2010) paper
Visual system
Vision
Motor control | Saccadic suppression of image displacement | [
"Biology"
] | 737 | [
"Behavior",
"Motor control"
] |
12,123,967 | https://en.wikipedia.org/wiki/Heat%20loss%20due%20to%20linear%20thermal%20bridging | The heat loss due to linear thermal bridging () is a physical quantity used when calculating the energy performance of buildings. It appears in both United Kingdom and Irish methodologies.
Calculation
The calculation of the heat loss due to linear thermal bridging is relatively simple, given by the formula below:

H_TB = y × ΣA_exp

In the formula, y = 0.08 if Accredited Construction Details are used and y = 0.15 otherwise (the values used in the UK SAP methodology), and ΣA_exp is the sum of all the exposed areas of the building envelope.
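A minimal Python sketch of the calculation, assuming the SAP-style values y = 0.08 with Accredited Construction Details and y = 0.15 without them (these constants are an assumption from the UK SAP methodology, not stated elsewhere in this article):

```python
def heat_loss_linear_thermal_bridging(exposed_areas_m2, accredited_details):
    """H_TB = y * sum(A_exp), in W/K.

    y = 0.08 if Accredited Construction Details are used, else 0.15
    (assumed SAP-style default values).
    """
    y = 0.08 if accredited_details else 0.15
    return y * sum(exposed_areas_m2)

# Example: wall, roof, floor and glazing areas of a small dwelling, in m^2.
areas = [85.0, 48.0, 48.0, 12.5]
print(heat_loss_linear_thermal_bridging(areas, True))   # ~15.48 W/K
print(heat_loss_linear_thermal_bridging(areas, False))  # ~29.03 W/K
```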
References
Energy economics
Thermodynamic properties | Heat loss due to linear thermal bridging | [
"Physics",
"Chemistry",
"Mathematics",
"Environmental_science"
] | 98 | [
"Thermodynamics stubs",
"Thermodynamic properties",
"Physical quantities",
"Quantity",
"Energy economics",
"Environmental social science stubs",
"Thermodynamics",
"Environmental social science",
"Physical chemistry stubs"
] |
12,123,973 | https://en.wikipedia.org/wiki/Architectural%20Barriers%20Act%20of%201968 | The Architectural Barriers Act of 1968 ("ABA", codified at 42 U.S.C. § 4151 et seq.) is an Act of Congress, signed into law by President Lyndon B. Johnson.
The ABA requires that facilities designed, built, altered, or leased with funds supplied by the United States Federal Government be accessible to the public. For example, it mandates provision of disabled-access toilet facilities in such buildings. The ABA marks one of the first efforts to ensure that certain federally funded buildings and facilities are designed and constructed to be accessible to people with disabilities. Facilities that predate the law generally are not covered, but alterations or leases undertaken after the law took effect can trigger coverage.
Uniform standards for the design, construction and alteration of buildings were created so that persons with disabilities will have ready access to and use of them. These Uniform Federal Accessibility Standards (UFAS) are developed and maintained by an Access Board and serve as the basis for the standards used to enforce the law. The Board enforces the ABA by investigating complaints concerning particular facilities. Four Federal agencies are responsible for setting the standards: the Department of Defense, the Department of Housing and Urban Development, the General Services Administration, and the U.S. Postal Service. These federal agencies are responsible for ensuring compliance with UFAS when funding the design, construction, alteration, or leasing of facilities. Some departments have, as a matter of policy, also required compliance with the Americans with Disabilities Act accessibility guidelines (which otherwise do not apply to the Federal sector) in addition to UFAS.
Structure
The ABA (as amended) consists of seven sections:
Section 1 defines the buildings or facilities covered by the Act.
Sections 2, 3, 4 and 4a describe the role of each standards-setting agency.
The General Services Administration (GSA) prescribes standards for all buildings subject to the Architectural Barriers Act that are not covered by standards issued by the other three standard-setting agencies;
The Department of Defense (DoD) prescribes standards for DoD installations;
The Department of Housing and Urban Development (HUD) prescribes standards for residential structures covered by the Architectural Barriers Act except those funded or constructed by DoD;
The U.S. Postal Service (USPS) prescribes standards for postal facilities.
Section 5 states that buildings designed, constructed, or altered after the effective date (August 12, 1968) are covered under the Act.
Section 6 concerns modification of standards and the granting of waivers.
Section 7a requires that the Administrator of General Services report to Congress on his or her activities as they pertain to the act.
Section 7b amends the Act to ensure compliance with the standards through the establishment of an independent Federal agency, the Architectural and Transportation Barriers Compliance Board, established by section 502 of the Rehabilitation Act of 1973. Section 7b also requires the Compliance Board to report to the Senate on its "activities and actions to insure compliance with the standards prescribed under this Act."
References
1968 in American law
Accessible building
Architecture in the United States
United States federal civil rights legislation
United States federal disability legislation
Disability rights | Architectural Barriers Act of 1968 | [
"Engineering"
] | 615 | [
"Accessible building",
"Architecture"
] |
12,124,197 | https://en.wikipedia.org/wiki/Skiving%20%28metalworking%29 | Skiving or scarfing is the process of cutting material off in slices, usually from metal, but also from leather or laminates. Skiving can be used instead of rolling the material to shape when the material must not be work hardened, or must not later shed minute slivers of metal, which is common with cold-rolling processes. It can also be used to create fins on a block of metal by not shaving the slice entirely off.
In metalworking, skiving can be used to remove a thin dimension of material or to create thin slices in an existing material, such as heat sinks where a large amount of surface area is required relative to the volume of the piece of metal.
The process involves moving the material past precision-profiled tools made to an exact shape, or past plain cutting tools. The tools are typically made of tungsten carbide-based compounds. It requires a minimum material feed rate to cut successfully. At speeds below those of metal planing or about 10 meters/minute (33 feet/minute), the skiving tools can be vibrated at high frequency to increase the relative speed between the tool and workpiece.
In early machines, it was necessary to precisely position the strip relative to the cutting tools, but newer machines use a floating suspension technology which enables the tools to locate themselves by material contact. This tolerates small mutual initial positioning differences, followed by resilient automatic engagement.
Products using this technology directly are automotive seatbelt springs, large power transformer winding strips, rotogravure plates, cable and hose clamps, gas tank straps, and window counterbalance springs. Products using the process indirectly are tubes and pipes where the edge of the strip is accurately beveled prior to being folded into tubular form and seam welded. The beveled edges enable pinhole free welds.
Another metal skiving application is for hydraulic cylinders, where a round and smooth cylinder bore is required for proper actuation. Several skiving knives on a round tool pass through a bore to create a perfectly round hole. Often, a second operation of roller burnishing follows to cold-work the surface to a mirror finish. This process is common among manufacturers of hydraulic and pneumatic cylinders. Compared to honing, skiving and roller burnishing is faster.
Skiving can be applied to gear cutting, where internal gears are skived with a rotary cutter (rather than shaped or broached) in a process analogous to the hobbing of external gears.
Heat sinks
Skiving is also used for the manufacturing of heat sinks for PC cooling products. A PC cooler created by skiving has the benefit that the heat sink base and fins are created from a single piece of material (copper or aluminum), providing improved heat dissipation and heat transfer from base to fins. Additionally, the skiving process also increases the roughness of the fins. Unlike the underside of a heat sink, which needs to be smooth for maximum contact area with the heat source, the fins benefit from this roughness because it increases the fins' surface area on which to dissipate heat into the air. The fins may be made much thinner and closer together than by extrusion or formed sheet processes, which can offer greater heat transfer in high-performance waterblocks for water cooling.
See also
Skiving (leathercraft)
References
External links
Skiving machine patent
The little-known life of the scarfing tool
Machine tools
Metalworking | Skiving (metalworking) | [
"Engineering"
] | 693 | [
"Machine tools",
"Industrial machinery"
] |
12,124,272 | https://en.wikipedia.org/wiki/Molecular%20shuttle | A molecular shuttle in supramolecular chemistry is a special type of molecular machine capable of shuttling molecules or ions from one location to another. This field is of relevance to nanotechnology in its quest for nanoscale electronic components and also to biology where many biochemical functions are based on molecular shuttles. Academic interest also exists for synthetic molecular shuttles, the first prototype reported in 1991 based on a rotaxane.
This device is based on a molecular thread composed of an ethylene glycol chain interrupted by two arene groups acting as so-called stations. The terminal units (or stoppers) on this wire are bulky triisopropylsilyl groups. The bead is a tetracationic cyclophane based on two bipyridine groups and two para-phenylene groups. The bead is locked to one of the stations by pi-pi interactions but since the activation energy for migration from one station to the other station is only 13 kcal/mol (54 kJ/mol) the bead shuttles between them. The stoppers prevent the bead from slipping from the thread. Chemical synthesis of this device is based on molecular self-assembly from a preformed thread and two bead fragments (32% chemical yield).
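For a rough sense of what a 13 kcal/mol barrier implies, the shuttling rate can be estimated with the Eyring equation k = (kB·T/h)·exp(-Ea/RT). The sketch below assumes room temperature and a transmission coefficient of 1; these are illustrative assumptions, not figures from the original report.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
R = 1.987             # gas constant, cal/(mol*K)

def eyring_rate(ea_kcal_per_mol, temp_k=298.0):
    """Order-of-magnitude Eyring rate, assuming a transmission coefficient of 1."""
    return (K_B * temp_k / H) * math.exp(-ea_kcal_per_mol * 1000.0 / (R * temp_k))

# A ~13 kcal/mol barrier at room temperature gives on the order of
# a thousand station-to-station hops per second:
print(f"{eyring_rate(13.0):.1e} /s")   # ~1.8e+03
```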
In certain molecular switches the two stations are non-degenerate.
References
Supramolecular chemistry
Molecular machines | Molecular shuttle | [
"Physics",
"Chemistry",
"Materials_science",
"Technology"
] | 290 | [
"Machines",
"Molecular machines",
"Physical systems",
"nan",
"Nanotechnology",
"Supramolecular chemistry"
] |
12,125,099 | https://en.wikipedia.org/wiki/CompHEP | CompHEP is a software package for automatic computations in high energy physics from Lagrangians to collision events or particle decays.
CompHEP is based on the quantum theory of gauge fields; namely, it uses the technique of squared Feynman diagrams at the tree-level approximation. By default, CompHEP includes the Standard Model Lagrangian in the unitarity and 't Hooft-Feynman gauges and several MSSM models. However, users can create new physical models based on different Lagrangians; a special tool, LanHEP, exists for that purpose. CompHEP can compute leading-order (LO) cross sections and distributions with several particles in the final state (up to 6-7). It can take into account, if necessary, all QCD and EW diagrams, the masses of fermions and bosons, and the widths of unstable particles. Processes computed by means of CompHEP can be interfaced to the Monte Carlo generators PYTHIA and HERWIG as new external processes.
The CompHEP project started in 1989 at the Skobeltsyn Institute of Nuclear Physics (SINP) of Moscow State University. During the 1990s the package was developed further, and it is now a powerful tool for automatic computations of collision processes. The CompHEP program has been used for many studies by many experimental groups.
Due to its intuitive graphical interface, CompHEP is also a very useful tool for education in particle and nuclear physics.
External links
official CompHEP page
manual for version 3.3
Skobeltsyn Institute of Nuclear Physics (SINP)
Monte Carlo particle physics software
Physics software | CompHEP | [
"Physics"
] | 343 | [
"Computational physics",
"Particle physics",
"Particle physics stubs",
"Computational physics stubs",
"Physics software"
] |
12,125,107 | https://en.wikipedia.org/wiki/FFF%20system | The furlong–firkin–fortnight (FFF) system is a humorous system of units based on unusual or impractical measurements. The length unit of the system is the furlong, the mass unit is the mass of a firkin of water, and the time unit is the fortnight. Like the SI or metre–kilogram–second systems, there are derived units for velocity, volume, mass and weight, etc. It is sometimes referred to as the FFFF system where the fourth 'F' is degrees Fahrenheit for temperature.
While the FFF system is not used in practice it has been used as an example in discussions of the relative merits of different systems of units. Some of the FFF units, notably the microfortnight, have been used jokingly in computer science. Besides having the meaning "any obscure unit", the derived unit furlongs per fortnight has also served frequently in classroom examples of unit conversion and dimensional analysis.
Base units and definitions
Multiples and derived units
Microfortnight and other decimal prefixes
One microfortnight is equal to 1.2096 seconds. This has become a joke in computer science because in the VMS operating system, the TIMEPROMPTWAIT variable, which holds the time the system will wait for an operator to set the correct date and time at boot if it realizes that the current value is invalid, is set in microfortnights. This is because at that point in booting the system clock has not yet been activated, so the computer times the wait with a delay loop instead. The documentation notes that "[t]he time unit of micro-fortnights is approximated as seconds in the implementation".
The Jargon File reports that the millifortnight (about 20 minutes) and nanofortnight have been occasionally used.
Furlong per fortnight
One furlong per fortnight is a speed that would be barely noticeable to the naked eye. It converts to:
1.663×10^-4 m/s (i.e. 0.1663 mm/s),
roughly 1 cm/min (to within 1 part in 400),
5.987×10^-4 km/h,
roughly 3/8 in/min,
3.720×10^-4 mph,
the speed of the tip of a 3¾-inch minute hand (the conversions above are reproduced in the sketch below).
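These figures are easy to reproduce mechanically; a minimal Python sketch, using the exact definitions of the furlong and the fortnight:

```python
import math

FURLONG_M = 201.168            # 1 furlong in metres (exact)
FORTNIGHT_S = 14 * 24 * 3600   # 1 fortnight = 1,209,600 seconds

print(f"{FORTNIGHT_S * 1e-6:.4f} s per microfortnight")  # 1.2096

fpf = FURLONG_M / FORTNIGHT_S            # one furlong per fortnight, in m/s
print(f"{fpf:.4e} m/s")                  # ~1.663e-04
print(f"{fpf * 1000 * 60:.2f} mm/min")   # ~9.98, i.e. roughly 1 cm/min
print(f"{fpf * 3.6:.4e} km/h")           # ~5.987e-04
print(f"{fpf / 0.44704:.4e} mph")        # ~3.720e-04

# Tip speed of a 3.75-inch minute hand (one revolution per hour):
r = 3.75 * 0.0254                        # hand length, m
print(f"{2 * math.pi * r / 3600:.4e} m/s")  # ~1.66e-04, matching fpf

c = 299_792_458.0                        # speed of light, m/s
print(f"{c / fpf:.4e} furlongs/fortnight")  # ~1.8026e+12
```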
Speed of light
The speed of light is 1.8026×10^12 furlongs per fortnight (1.8026 terafurlongs per fortnight). By mass–energy equivalence, 1 firkin corresponds to roughly 3.7×10^18 joules.
Others
In the FFF system, heat transfer coefficients are conventionally reported as BTU per foot-fathom per degree Fahrenheit per fortnight. Thermal conductivity has units of BTU per fortnight per furlong per degree Fahrenheit.
Like the more common furlong per fortnight, a firkin per fortnight can refer to "any obscure unit".
See also
List of unusual units of measurement
List of humorous units of measurement
Footnotes
References
Systems of units
Tech humour | FFF system | [
"Mathematics"
] | 604 | [
"Quantity",
"Systems of units",
"Units of measurement"
] |
12,125,137 | https://en.wikipedia.org/wiki/WS-Federation%20Passive%20Requestor%20Profile | WS-Federation Passive Requestor Profile is a Web Services specification - intended to work with the WS-Federation specification - which defines how identity, authentication and authorization mechanisms work across trust realms. The specification deals specifically with how applications, such as web browsers, make requests using these mechanisms. In this context, the web-browser is known as a "passive requestor." By way of contrast, WS-Federation Active Requestor Profile deals with "active requestors" such as SOAP-enabled applications. WS-Federation Passive Requestor Profile was created by IBM, BEA Systems, Microsoft, VeriSign, and RSA Security.
See also
List of Web service specifications
References
External links
WS-Federation: Passive Requestor Profile specification
Security | WS-Federation Passive Requestor Profile | [
"Technology"
] | 154 | [
"Computing stubs",
"Computer network stubs"
] |
12,125,406 | https://en.wikipedia.org/wiki/Web%20Services%20Security%20Kerberos%20Binding | Web Services Security Kerberos Binding is a Web Services specification, authored by IBM and Microsoft, which details how to integrate the Kerberos authentication mechanism with the Web Services Security model. The most recent draft of the specification was released in 2003 and is identified as being for "review and evaluation only."
See also
List of Web service specifications
WS-Federation
References
External links
Web Services Security Kerberos Binding specification
Security | Web Services Security Kerberos Binding | [
"Technology"
] | 86 | [
"Computing stubs",
"Computer network stubs"
] |
12,126,139 | https://en.wikipedia.org/wiki/C3H6O2 |
The molecular formula C3H6O2 may refer to:
Acids and esters
Acid
Propanoic acid
Esters
Methyl acetate
Ethyl formate
Aldehydes and ketones
Lactaldehyde (2-hydroxypropanal)
(S)-Lactaldehyde
(R)-Lactaldehyde
Reuterin (3-hydroxypropanal)
Methoxyacetaldehyde
Hydroxyacetone
Alkenes
Diols
1-Propene-1,1-diol
1-Propene-1,2-diol
(E)-1-Propene-1,2-diol
(Z)-1-Propene-1,2-diol
1-Propene-1,3-diol
(E)-Propene-1,3-diol
(Z)-Propene-1,3-diol
2-Propene-1,1-diol
2-Propene-1,2-diol
Oxyethenol
1-Methoxyethenol
Cyclic
Three atoms in ring
No oxygen in ring
1,1-Cyclopropandiol
Cyclopropan-1,2-diol
(E)-Cyclopropan-1,2-diol
(Z)-Cyclopropan-1,2-diol
One oxygen in ring
Glycidol (oxiran-2-ylmethanol)
(R)-Glycidol
(S)-Glycidol
2-Methyloxiranol
(R)-2-Methyloxiranol
(S)-2-Methyloxiranol
3-Methyloxiranol
(R,R)-3-Methyloxiranol
(R,S)-3-Methyloxiranol
(S,R)-3-Methyloxiranol
(S,S)-3-Methyloxiranol
Two oxygens in ring
Dimethyldioxirane
Four atoms in ring
One oxygen in ring
Oxetan-3-ol
Oxetan-2-ol
(R)-Oxetan-2-ol
(S)-Oxetan-2-ol
Two oxygens in ring
3-Methyl-1,2-dioxetane
(R)-3-Methyl-1,2-dioxetane
(S)-3-Methyl-1,2-dioxetane
2-Methyl-1,3-dioxetane
Five atoms in ring
1,2-Dioxolane
1,3-Dioxolane | C3H6O2 | [
"Chemistry"
] | 562 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
12,126,172 | https://en.wikipedia.org/wiki/Palatability | Palatability (or palatableness) is the hedonic reward (which is pleasure of taste in this case) provided by foods or drinks that are agreeable to the "palate", which often varies relative to the homeostatic satisfaction of nutritional and/or water needs. The palatability of a dish or beverage, unlike its flavor or taste, varies with the state of an individual: it is lower after consumption and higher when deprived. It has increasingly been appreciated that this can create a hunger that is independent of homeostatic needs.
Brain mechanism
The palatability of a substance is determined by opioid receptor-related processes in the nucleus accumbens and ventral pallidum. The opioid processes involve mu opioid receptors and are present in the rostromedial shell part of the nucleus accumbens on its spiny neurons. This area has been called the "opioid eating site".
The rewardfulness of consumption associated with palatability is dissociable from desire or incentive value which is the motivation to seek out a specific commodity. Desire or incentive value is processed by opioid receptor-related processes in the basolateral amygdala. Unlike the liking palatability for food, the incentive salience wanting is not downregulated by the physiological consequences of food consumption and may be largely independent of homoeostatic processes influencing food intake.
Though the wanting of incentive salience may be informed by palatability, it is independent and not necessarily reduced to it. It has been suggested that a third system exists that links opioid processes in the two parts of the brain: "Logically this raises the possibility that a third system, with which the accumbens shell, ventral pallidum, and basolateral amygdala are associated, distributes the affective signals elicited by specific commodities across distinct functional systems to control reward seeking... At present we do not have any direct evidence for a system of this kind, but indirect evidence suggests it may reside within the motivationally rich circuits linking hypothalamic and brainstem viscerogenic structures such as the parabrachial nucleus."
It has also been suggested that hedonic hunger can be driven both in regard to "wanting" and "liking" and that a palatability subtype of neuron may also exist in the basolateral amygdala.
Satiety and palatability
Appetite is controlled by a direct loop and an indirect one, and in both loops there are two feedback mechanisms: first, a positive feedback in which appetite is stimulated by palatable food cues, and second, a negative feedback due to satiation and satiety cues following ingestion. In the indirect loop these cues are learned by association (such as meal plate size) and work by modulating the potency of the cues of the direct loop. The influence of these processes can exist without subjective awareness.
The cessation of the desire to eat after a meal ("satiation") is likely to be due to different processes and cues. More palatable foods reduce the effects of such cues upon satiation, causing a larger food intake, which is exploited in hyperpalatable food. In contrast, unpalatability of certain foods can serve as a deterrent from feeding on those foods in the future. For example, the variable checkerspot butterfly contains iridoid compounds that are unpalatable to avian predators, thus reducing the risk of predation.
See also
Acquired taste
Flavor
Food craving
Motivation
Nutrition
Pleasure center
References
Behavioral neuroscience
Gustation
Gustatory system | Palatability | [
"Biology"
] | 733 | [
"Behavioural sciences",
"Behavior",
"Behavioral neuroscience"
] |
12,126,635 | https://en.wikipedia.org/wiki/Uniform%20tiling | In geometry, a uniform tiling is a tessellation of the plane by regular polygon faces with the restriction of being vertex-transitive.
Uniform tilings can exist in both the Euclidean plane and hyperbolic plane. Uniform tilings are related to the finite uniform polyhedra; these can be considered uniform tilings of the sphere.
Most uniform tilings can be made from a Wythoff construction starting with a symmetry group and a singular generator point inside of the fundamental domain. A planar symmetry group has a polygonal fundamental domain and can be represented by its group notation: the sequence of the reflection orders of the fundamental domain vertices.
A fundamental domain triangle is denoted (p q r), where p, q, r are whole numbers > 1, i.e. ≥ 2; a fundamental domain right triangle is denoted (p q 2). The triangle may exist as a spherical triangle, a Euclidean plane triangle, or a hyperbolic plane triangle, depending on the values of p, q, and r.
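The dependence on p, q, and r follows from the angle sum of the fundamental triangle, whose angles are π/p, π/q, and π/r. As a worked check (standard geometry, added here for convenience rather than taken from this article's sources):

```latex
% Geometry of the fundamental triangle (p q r):
%   1/p + 1/q + 1/r > 1  ->  spherical
%   1/p + 1/q + 1/r = 1  ->  Euclidean
%   1/p + 1/q + 1/r < 1  ->  hyperbolic
\[
\tfrac{1}{4}+\tfrac{1}{4}+\tfrac{1}{2} = 1 \quad \text{(4 4 2): Euclidean},
\qquad
\tfrac{1}{7}+\tfrac{1}{3}+\tfrac{1}{2} = \tfrac{41}{42} < 1 \quad \text{(7 3 2): hyperbolic}.
\]
```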
There are several symbolic schemes for denoting these figures:
The modified Schläfli symbol for a right triangle domain: (p q 2) → {p, q}.
The Coxeter-Dynkin diagram is a triangular graph with p, q, r labeled on the edges. If r = 2, then the graph is linear, since diagram nodes with connectivity 2 are not connected to each other by a diagram branch (since domain mirrors meeting at 90 degrees generate no new mirrors).
The Wythoff symbol takes the three integers and separates them by a vertical bar (|). If the generator point is off the mirror opposite to a domain vertex, then the reflection order of this domain vertex is given before the bar.
Finally, a uniform tiling can be described by its vertex configuration: the (identical) sequence of polygons around each (equivalent) vertex.
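As an illustration of reading a vertex configuration (a standard angle-sum check, not specific to any one source): in a Euclidean tiling, the interior angles of the polygons listed must sum to 2π at each vertex. For the snub square tiling, whose vertex configuration is 3.3.4.3.4:

```latex
% Interior angle of a regular n-gon: (n-2)*pi/n.
% Vertex configuration 3.3.4.3.4: three triangles and two squares per vertex.
\[
3\cdot\frac{\pi}{3} \;+\; 2\cdot\frac{\pi}{2} \;=\; \pi + \pi \;=\; 2\pi .
\]
```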
All uniform tilings can be constructed from various operations applied to regular tilings. These operations, as named by Norman Johnson, are called truncation (cutting vertices), rectification (cutting vertices until edges disappear), and cantellation (cutting edges and vertices). Omnitruncation is an operation that combines truncation and cantellation. Snubbing is an operation of alternate truncation of the omnitruncated form. (See Uniform polyhedron#Wythoff construction operators for more details.)
Coxeter groups
Coxeter groups for the plane define the Wythoff construction and can be represented by Coxeter-Dynkin diagrams; the groups relevant here are those with integer reflection orders.
Uniform tilings of the Euclidean plane
There are three symmetry groups on the Euclidean plane constructed from fundamental triangles: (4 4 2), (6 3 2), and (3 3 3). Each is represented by a set of lines of reflection that divide the plane into fundamental triangles.
These symmetry groups create 3 regular tilings and 7 semiregular ones. A number of the semiregular tilings are repeated from different symmetry constructors.
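That these are the only Euclidean fundamental triangles can be confirmed by enumerating the integer solutions of 1/p + 1/q + 1/r = 1; the following minimal Python sketch (written for this article, with illustrative loop bounds) does so exactly with rational arithmetic:

```python
from fractions import Fraction

# Enumerate integer triples p <= q <= r (each >= 2) with 1/p + 1/q + 1/r = 1.
# Each solution is a Euclidean fundamental triangle (p q r), up to reordering.
solutions = []
for p in range(2, 4):          # p <= q <= r forces 3/p >= 1, i.e. p <= 3
    for q in range(p, 7):      # similarly 2/q >= 1 - 1/p bounds q
        for r in range(q, 43):
            if Fraction(1, p) + Fraction(1, q) + Fraction(1, r) == 1:
                solutions.append((p, q, r))

print(solutions)               # [(2, 3, 6), (2, 4, 4), (3, 3, 3)]
```

Up to reordering, these are the triangles (6 3 2), (4 4 2), and (3 3 3) named above.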
A prismatic symmetry group, (2 2 2 2), is represented by two sets of parallel mirrors, which in general can make a rectangular fundamental domain. It generates no new tilings.
A further prismatic symmetry group, (∞ 2 2), has an infinite fundamental domain. It constructs two uniform tilings: the apeirogonal prism and apeirogonal antiprism.
The stacking of the finite faces of these two prismatic tilings constructs one non-Wythoffian uniform tiling of the plane. It is called the elongated triangular tiling, composed of alternating layers of squares and triangles.
Right angle fundamental triangles: (p q 2)
General fundamental triangles: (p q r)
Non-simplical fundamental domains
The only possible fundamental domain in Euclidean 2-space that is not a simplex is the rectangle (∞ 2 ∞ 2). All forms generated from it become a square tiling.
Uniform tilings of the hyperbolic plane
There are infinitely many uniform tilings by convex regular polygons on the hyperbolic plane, each based on a different reflective symmetry group (p q r).
A sampling is shown here with a Poincaré disk projection.
The Coxeter-Dynkin diagram is given in a linear form, although it is actually a triangle, with the trailing segment r connecting to the first node.
Further symmetry groups exist in the hyperbolic plane with quadrilateral fundamental domains, starting with (2 2 2 3), etc., that can generate new forms. There are also fundamental domains that place vertices at infinity, such as (∞ 2 3), etc.
Right angle fundamental triangles: (p q 2)
General fundamental triangles: (p q r)
Expanded lists of uniform tilings
There are several ways the list of uniform tilings can be expanded:
Vertex figures can have retrograde faces and turn around the vertex more than once.
Star polygon tiles can be included.
Apeirogons, {∞}, can be used as tiling faces.
Zigzags (apeirogons alternating between two angles) can also be used.
The restriction that tiles meet edge-to-edge can be relaxed, allowing additional tilings such as the Pythagorean tiling.
Symmetry group triangles with retrogrades include:
(4/3 4/3 2), (6 3/2 2), (6/5 3 2), (6 6/5 3), (6 6 3/2).
Symmetry group triangles with infinity include:
(4 4/3 ∞), (3/2 3 ∞), (6 6/5 ∞), (3 3/2 ∞).
Branko Grünbaum and G. C. Shephard, in the 1987 book Tilings and Patterns, section 12.3, enumerate a list of 25 uniform tilings, including the 11 convex forms, and add 14 more that they call hollow tilings, using the first two expansions above: star polygon faces and generalized vertex figures.
H. S. M. Coxeter, M. S. Longuet-Higgins, and J. C. P. Miller, in the 1954 paper 'Uniform polyhedra', Table 8: Uniform Tessellations, use the first three expansions and enumerate a total of 38 uniform tilings. If a tiling made of 2 apeirogons is also counted, the total can be considered 39 uniform tilings.
In 1981, Grünbaum, Miller, and Shephard, in their paper Uniform Tilings with Hollow Tiles, list 25 tilings using the first two expansions and 28 more when the third is added (making 53 using Coxeter et al.'s definition). When the fourth is added, they list an additional 23 uniform tilings and 10 families (8 depending on continuous parameters and 2 on discrete parameters).
Besides the 11 convex solutions, the 28 uniform star tilings listed by Coxeter et al., grouped by shared edge graphs, are shown below, followed by 15 more listed by Grünbaum et al. that meet Coxeter et al.'s definition but were missed by them.
This set is not proved complete. By "2.25" is meant tiling 25 in Grünbaum et al.'s table 2 from 1981.
The following three tilings are exceptional in that only finitely many copies of one face type occur: two apeirogons in each. Sometimes the order-2 apeirogonal tiling is not included, as its two faces meet at more than one edge.
For clarity, the tilings are not colored from here onward (due to the overlaps). A set of polygons around one vertex is highlighted. McNeill only lists tilings given by Coxeter et al. (1954). The eleven convex uniform tilings have been repeated for reference.
There are two uniform tilings for the vertex configuration 4.8.-4.8.-4.∞ (Grünbaum et al., 2.10 and 2.11) and also two uniform tilings for the vertex configuration 4.8/3.4.8/3.-4.∞ (Grünbaum et al., 2.12 and 2.13), with different symmetries. There is also a third tiling for each vertex configuration that is only pseudo-uniform (vertices come in two symmetry orbits). They use different sets of square faces. Hence, for star Euclidean tilings, the vertex configuration does not necessarily determine the tiling.
In the pictures below, the included squares with horizontal and vertical edges are marked with a central dot. A single square has edges highlighted.
The tilings with zigzags are listed below. {∞𝛼} denotes a zigzag with angle 0 < 𝛼 < π. The apeirogon can be considered the special case 𝛼 = π. The symmetries are given for the generic case, but there are sometimes special values of 𝛼 that increase the symmetry. Tilings 3.1 and 3.12 can even become regular; 3.32 already is (it has no free parameters). Sometimes, there are special values of 𝛼 that cause the tiling to degenerate.
The tiling pairs 3.17 and 3.18, as well as 3.19 and 3.20, have identical vertex configurations but different symmetries.
Tilings 3.7 through 3.10 have the same edge arrangement as 2.1 and 2.2; 3.17 through 3.20 have the same edge arrangement as 2.10 through 2.13; 3.21 through 3.24 have the same edge arrangement as 2.18 through 2.23; and 3.25 through 3.33 have the same edge arrangement as 1.25 (the regular triangular tiling).
Self-dual tilings
A tiling can also be self-dual. The square tiling, with Schläfli symbol {4,4}, is self-dual; shown here are two square tilings (red and black), dual to each other.
Uniform tilings using regular or isotoxal polygrams as nonconvex isotoxal simple polygons
A regular star polygon can be seen as a nonconvex isotoxal simple polygon with twice as many (shorter) sides, alternating the same outer and "inner" internal angles. Viewed this way, regular star polygons can be used as tiles, and treating such isotoxal simple polygons as "regular" allows some (but not all) regular star polygons to be used in a "uniform" tiling.
Similarly, the outlines of certain non-regular isotoxal star polygons are nonconvex isotoxal simple polygons with as many (shorter) sides, alternating the same outer and "inner" internal angles. Viewing this kind of isotoxal star polygon as its outline allows it to be used as a tile, and treating the isotoxal simple polygons as "regular" allows some (but not all) of these isotoxal star polygons to be used in a "uniform" tiling.
An isotoxal simple 2n-gon with outer internal angle 𝛼 is denoted by {n𝛼}; its outer and inner vertices are labeled separately to distinguish them.
These expansions of the definition of a tiling require that corners where only two polygons meet not be considered vertices: the vertex configuration at vertices where at least three polygons meet then suffices to define such a "uniform" tiling, which accordingly has a single vertex configuration (otherwise it would have two). There are 4 such uniform tilings with adjustable angles 𝛼, and 18 such uniform tilings that only work with specific angles, yielding a total of 22 uniform tilings that use star polygons.
All of these tilings, once order-2 vertices are ignored and double or triple edges are reduced to single edges, are topologically related to the ordinary uniform tilings (using only convex regular polygons).
Uniform tilings using convex isotoxal simple polygons
Non-regular isotoxal either star or simple 2n-gons always alternate two angles. Isotoxal simple 2n-gons, {n𝛼}, can be convex; the simplest ones are the rhombi (2×2-gons), {2𝛼}. Considering these convex {n𝛼} as "regular" polygons allows more tilings to be considered "uniform".
See also
Wythoff symbol
List of uniform tilings
Uniform tilings in hyperbolic plane
Uniform polytope
References
Norman Johnson Uniform Polytopes, Manuscript (1991)
N. W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph. D. Dissertation, University of Toronto, 1966
Branko Grünbaum and G. C. Shephard, Tilings and Patterns, W. H. Freeman, 1987 (star tilings: section 12.3)
H. S. M. Coxeter, M. S. Longuet-Higgins, J. C. P. Miller, Uniform polyhedra, Phil. Trans., 1954, 246 A, 401–50 (Table 8)
External links
Uniform Tessellations on the Euclidean plane
Tessellations of the Plane
David Bailey's World of Tessellations
k-uniform tilings
n-uniform tilings | Uniform tiling | [
"Physics"
] | 2,782 | [
"Tessellation",
"Uniform tilings",
"Symmetry"
] |
12,126,699 | https://en.wikipedia.org/wiki/MV%20Mefk%C3%BCre | MV Mefküre (often referred to as Mefkura) was a Turkish wooden-hulled motor schooner chartered to carry Jewish Holocaust refugees from Romania to Istanbul, sailing under the Turkish and Red Cross flags. On 5 August 1944 a Soviet submarine sank her in the Black Sea by cannon and machine gun fire, killing more than 300 refugees.
Final voyage and sinking
On 3 August 1944 three small old merchant ships, overcrowded with about 1,000 Jewish refugees, left the Romanian port of Constanța at about 20:30 hrs. Sailing instructions from the German naval authorities were for Morina with 308 passengers to sail first, followed by Bulbul with 390 people, and lastly by Mefküre with 320 refugees (the exact number may be slightly different) on board. The vessels were ordered to sail from position 43°43'N 29°08'E strictly southward, which would lead them directly into the Bosphorus. Armed ships of the Romanian Navy escorted the convoy and provided signal flags to aid their passage from the harbour and through the mined area of the approaches.
On 5 August 1944, about 40 minutes after midnight, Mefküre was northeast of İğneada in Turkey when flares from an unknown vessel illuminated her. Mefküre failed to respond and carried on. In the same night, at 02:00 hrs, the German radio direction-finding station at Cape Pomorie in the Gulf of Burgas intercepted a radio signal of the Soviet submarine Shch-215, with a bearing of 116 degrees. "This bearing crossed the course of Mefkure and the two Turkish vessels almost exactly at the area where Mefkure was sunk during that night." The German historian Jürgen Rohwer named Shch-215 as the vessel which then attacked. Shch-215 fired 90 rounds from her 45-mm guns and 650 rounds from her 7.62-mm machine guns. Mefküre caught fire and sank. Her captain, Kazım Turan, and six of his crew escaped in the only available lifeboat, but only five of the refugees survived. The number of refugees killed is unknown, but one estimate suggests it includes 37 children.
On 30 July 1944 the submarine Shch-215, under the command of Captain 3rd Rank A. I. Strizhak, had departed from Batum to operate at the approaches off Burgas. On the night of 5 August, this submarine claimed to have sunk a big schooner carrying about 200 armed men, who answered the attack with rifles and light machine guns, and in addition one "barkass", possibly a lifeboat. Shch-215 made the attack in position 42°00'N 28°42'E, west of the ordered course of Mefküre.
A fortnight after the sinking a JTA news report alleged that three German surface craft had sunk Mefküre. The same report stated that Bulbul had been intercepted too, but was allowed to proceed after identifying herself; at daybreak she rescued Mefküre's survivors. Bulbul continued to İğneada, whence her 395 refugees and the five surviving Mefküre refugees continued by road and rail to Istanbul. Morina also reached Turkey, and refugees from both ships continued overland to Palestine.
Memorials
There are memorials to those killed aboard Mefküre at the Giurgiului Cemetery in the south of Bucharest in Romania and at Ashdod in Israel.
See also
Aliyah Bet
Patria disaster
Struma disaster
References
Further reading
includes 19 documents and a list of 302 passengers (victims) of the Mefkure
External links
1929 ships
1944 in Romania
Aliyah
Jewish Turkish history
Jewish Romanian history
Maritime incidents in August 1944
Rescue of Jews during the Holocaust
Romania in World War II
World War II ships of Turkey
Ships sunk by Soviet submarines
Soviet Union–Turkey relations
Schooners
World War II shipwrecks in the Black Sea
Jewish immigrant ships
International Red Cross and Red Crescent Movement
Aid for Jewish refugees from Nazi Germany | MV Mefküre | [
"Biology"
] | 800 | [
"Rescue of Jews during the Holocaust",
"Behavior",
"Altruism"
] |
12,127,160 | https://en.wikipedia.org/wiki/Personal%20fable | According to Alberts, Elkind, and Ginsberg the personal fable "is the corollary to the imaginary audience. Thinking of themselves as the center of attention, the adolescent comes to believe that it is because they are special and unique.” It is found during the formal operational stage in Piagetian theory, along with the imaginary audience. Feelings of invulnerability are also common. The term "personal fable" was first coined by the psychologist David Elkind in his 1967 work Egocentrism in Adolescence.
Feelings of uniqueness may stem from fascination with one's own thoughts to the point where an adolescent believes that their thoughts or experiences are completely novel and unique when compared to the thoughts or experiences of others. This belief stems from the adolescent's inability to differentiate the concerns of their own thoughts from those of others, while simultaneously over-differentiating their feelings. Thus, an adolescent is likely to think that everyone else (the imaginary audience) is just as concerned with them as they are; at the same time, this adolescent might believe that they are the only person who can possibly experience whatever feelings they might be experiencing at that particular time and that these experiences are unique to them. According to David Elkind (1967), an adolescent's intense focus on oneself as the center of attention is what ultimately gives rise to the belief that one is unique, and in turn, this may give rise to feelings of invulnerability. Ultimately, the two marked characteristics of the personal fable are feelings of uniqueness and invulnerability. Or as David Elkind states, "this complex of beliefs in the uniqueness of (the adolescent's) feelings and of his or her immortality might be called a 'personal fable', a story which he or she tells himself and which is not true."
Early literature on adolescent egocentrism and cognitive development
Elkind's work with the personal fable stemmed from Piaget's theory of cognitive development, which describes egocentrism as a lack of differentiation in a given area of subject-object interaction. According to Elkind, in conjunction with Piaget's theory, adolescent egocentrism is to be understood in the context of ontogeny (referring to the development of an organism across its lifespan). These ontogenetic changes in egocentrism are thought to drive the development of logical and formal operational thinking. Elkind described an operation as a "mental tool whose products, series, class hierarchies, conservations, etc., are not directly derived from experience." However, a child in the concrete operational stage is not able to differentiate between these mental constructs and reality (their experiences). For instance, a child in the concrete operational stage may understand that a dog is an animal, but not all animals are dogs; however, the child is not able to grasp a hypothetical concept such as "suppose that dogs were humans". The child is likely to respond "but dogs aren't humans, they are animals."
According to Elkind, the onset of adolescent egocentrism is brought on by the emergence of the formal operational stage, which allows the adolescent to mentally construct hypotheses that are contrary to reality. It is at the onset of adolescence that the individual is "freed" from the confines of concrete thought and begins to be able to grasp abstract or hypothetical concepts (thus the formal operational way of thinking arises). Here, the individual is now able to imagine the hypothetical situation involving dogs as humans and not animals. Thus, the individual is also able to imagine, and even come to believe, hypothetical situations in which everyone is as concerned with them as they are, and in which they are unique and invulnerable when compared to others. Such contrary-to-fact propositions are what characterize the personal fable.
Egocentrism and the formal operational stage of cognition
Elkind proposed the concept of teenage egocentrism, which he believes occurs during the transition to Piaget's formal operational stage of cognition (the ultimate stage in which the individual is capable of abstract thinking: hypothetical and deductive reasoning). Although the construct is still widely used in research today, there is no evidence that adolescent egocentrism follows any age-related pattern (as would be implied by the assumption that it disappears when adolescents enter the formal operational stage, a stage that some individuals never reach).
In early, middle, and late adolescence
The onset of adolescent egocentrism tends to occur at about age 11–13 which is considered early adolescence. Since an adolescent is thought to develop the formal operational stage of thinking during this time, the personal fable phenomenon is thought to develop as well. There are studies that support this hypothesis, showing that it is during early adolescence that the personal fable is most prominent (this includes both the uniqueness and invulnerability aspects of personal fable). It has also been shown that both feelings of uniqueness and invulnerability increase significantly from age 11 to age 13.
Middle adolescence is generally considered to be around the age range of 14–16. Past research has demonstrated that personal fable peaks at about age 13 during early adolescence. It has also been speculated that the personal fable phenomenon ought to decline as one moves into middle and then late adolescence.
Late adolescence is considered to range from the age of 17 to about 23. Although Elkind (1967) speculated that the personal fable tends to decrease in late adolescence, there had been evidence of a possible re-emergence of the personal fable (or at least adolescent egocentrism) during late adolescence. It is hypothesized that this re-occurrence of adolescent egocentrism may act as a coping mechanism during the transition to new educational and social contexts (moving away to college, for example). Perhaps further research into the prevalence of the personal fable in late adolescence is required. An additional study was done to analyze whether or not personal fable (and imaginary audience) decreased, increased, or remained stable across an age range from sixth grade to college. The results showed that there was no significant difference between age groups with regards to the personal fable phenomenon, although it did seem to decline slightly. Also, the results showed that the imaginary audience phenomenon seems to decrease as one ages, more so than personal fable. Furthermore, there was a study conducted to analyze the gender differences with regards to the chronicity (the pattern of the behavior across time) of the personal fable phenomenon across early, middle, and late adolescence. The results showed that the personal fable phenomenon, including invulnerability and uniqueness, tends to decrease as an individual moves into middle and late adolescence more so for females than for males.
Gender differences
There has been conflicting evidence of a slight difference between genders in the uniqueness aspect of the personal fable. Specifically, females seem to have a higher sense of uniqueness than male adolescents. However, there has also been conflicting evidence suggesting that adolescent boys tend to feel unique more often than adolescent girls. The study which found this conflicting evidence also found that male adolescents felt more omnipotent (where the adolescent may feel that he is in complete control, all-powerful, and knows everything) when compared to girls; this finding is not known to have been replicated. Another study found that there was no significant difference between male and female adolescents with regards to the personal fable in general. In regards to the invulnerability aspect of the personal fable, it appears that boys tend to have higher instances of feelings pertaining to invulnerability and risk-taking than girls do. With feelings of invulnerability, it can be said that an adolescent is more likely to participate in risk behavior. A study was done to analyze the role gender plays in sexual risk-taking. The results indicated that females had a higher instance of sexual risk-taking (which involved sexual intercourse at a younger age and not using contraception). This finding is somewhat incongruent with the finding that boys tend to have higher feelings of invulnerability (and thus risk-taking behavior) than girls.
Risk-taking in adolescence
Adolescence was once believed to be a time of stress and turmoil. Although this is sometimes the case, research has shown that most adolescents rate their experiences as enjoyable and that the storm and stress of adolescence actually occurs at a fairly low rate and discontinuously. Nonetheless, adolescence is still a time of significant change and development on all levels (psychological, social and biological). Along with all these changes adolescents are faced with situations in which they must make important choices and decisions. Namely, decisions made regarding risky behaviors become more prevalent at this time. Adolescents are faced with decisions on whether to make an effort to have safe sex and how to react to peer pressure regarding substance abuse.
Research suggests that when faced with a decision, adolescents perceive risks but they do not incorporate these into their decision making process. It has been suggested that egocentrism plays a significant role in this lack of risk evaluation. The widespread effect of the correlation between the personal fable and risk-taking behaviors is evident when we consider it has been identified in various cultures, such as the Japanese culture. A study done among Japanese college students found a direct path from egocentrism to health-endangering behaviors. Thus, even though universality can in no way be assumed, it is noteworthy that the correlation has been identified in different parts of the world.
Support for the hypothesis that egocentrism, and the personal fable more specifically, predicts risk-taking behaviors is considerable in North America. In fact, the personal fable is commonly associated with risk-taking in research. It has been established that feelings of specialness and invulnerability are significant predictors of risk. Research has found that egocentrism increased significantly with age and that the personal fable was positively correlated with risk-taking. Male students revealed significantly higher rates of invulnerability. The correlation between the personal fable and risk-taking is considered to be of utmost importance. A valid and reliable measure of the personal fable would be an invaluable aid to assessing adolescent risk-taking potential and preventive intervention.
Potential positive factors of the personal fable
Research has come to distinguish three main subtypes of the personal fable. Omnipotence relates to the adolescent believing he has great authority or power (i.e. he is capable of what most others are not). Invulnerability is just that: the adolescent believes he cannot be harmed or affected in the ways others can. And finally, uniqueness is the adolescent's belief that he and his experiences are novel and unique to him (i.e. no one else could possibly relate). Distinguishing between the personal fable's three subtypes has merit. Research has shown that omnipotence does not seem to be related to delinquent behavior such as substance use, nor to depression or suicidal ideation. In fact, omnipotence is suggested to act as a protective factor, allowing for superior adjustment, high coping skills and self-worth. Contrary to omnipotence, invulnerability relates to risk behavior and delinquency, and uniqueness, which is more prevalent in girls, is related to depression and suicidal ideation (and is found to increase with age). Research has focused significantly more on the personal fable's negative effects and it is important to consider pursuing omnipotence to capitalize on its positive results.
Looking at each subtype of the personal fable – invulnerability, omnipotence and uniqueness – revealed that invulnerability was highly correlated with externalizing behaviors, namely risk-taking (i.e. delinquency and substance use). Personal fable as a whole was found to be a multidimensional construct, contrary to the belief of it being invariably negative. Omnipotence was not correlated with any negative outcomes and in fact was correlated with superior adjustment and feelings of self-worth. Uniqueness (more prevalent in females) was highly correlated with depression and suicidal ideation. Therefore, although a certain subset of the personal fable was once again found to significantly predict involvement in risky behavior, further examination into the multidimensionality of the personal fable is recommended. Particularly, examining whether omnipotence may in fact aid in healthy development and appropriate risk taking would be of utmost importance.
An Australian study brought into play the transtheoretical model (a model used to determine an individual's level of readiness and commitment to changing their behaviors to healthier alternatives) in conjunction with the personal fable to examine smoking and implications for smoking cessation. The researchers found that the personal fable is consistently associated with unhealthy and high-risk behaviors. Findings from their study provide mixed results, however. Although pre-contemplative smokers (individuals believing they do not exhibit any problem behavior) revealed high levels of omnipotence, ex-smokers did as well. These results suggest the personal fable actually plays an important role in smoking cessation, and researchers should consider re-evaluating the constructs to determine whether omnipotence could become stronger after smoking cessation (omnipotence in this particular case being the individual's belief that he can stop smoking whenever he wants). In the end, it is suggested the personal fable might be better conceptualized as encompassing both adaptive and maladaptive beliefs.
Preventive efforts
Studies examining egocentrism's effect on risk awareness/health promotion messages' effectiveness revealed that egocentrism may inhibit deep cognitive processing of these messages. It is contended that explicit messages may not work best for adolescent audiences, despite this being the chosen form. The adolescent needs to be involved in the decision-making process by being presented with a message encouraging discussion and deep elaboration of behaviors and their outcomes. In other words, the message should implicitly encourage non-egocentric thought. In fact, open-ended messages, as opposed to messages scaring, teaching or providing answers, resulted in greater retention of the intended message and in general a reduced intention of risk taking behavior. However this effect was somewhat reduced among male participants.
Identity development and personal fable
As mentioned, the personal fable is an important process that every adolescent experiences, and it plays an important role in the adolescent's self-perception in all life stages. Research has shown the personal fable to affect identity development specifically. When it comes to identity, adolescent egocentrism is considered an important construct, especially given its relation to self-compassion. Adolescents gradually develop cognitive skills which allow them to understand or speculate about what others are thinking. In other words, adolescents develop theory of mind.
Specifically, theory of mind is an individual's ability to understand another's actions, thoughts, and desires, and to hypothesize about their intentions. This construct has been found to emerge once a child reaches three to four years of age and continues to develop until adolescence. Müge Artar conducted a study comparing adolescents identified as having higher levels of egocentrism with adolescents exhibiting more emotional inference, and looked into their relationships with their parents. An adolescent's ability to infer a family member's thoughts is considered an important developmental stage. Social-emotional questions were based on the adolescents' understanding of their mother's and father's beliefs. Participants were asked questions such as "When you have problems with your mother/father, what does your mother/father feel? What do you feel? Does your mother/father think what you feel?" Most of the adolescents perceived their relationships with their parents accurately, including their images of the family network.
It can be inferred then that theory of mind acts as a counter to egocentrism. Where egocentrism revolves around the individual and everything in relation to one's own perspective, theory of mind allows for the inclusion of the fact that other people have differing viewpoints.
Self-esteem, self-compassion and the personal fable
Elkind's work on egocentrism was in a sense an expansion and further development of Piagetian theories on the subject. Egocentrism as Piaget describes it "generally refers to a lack of differentiation in some area of subject-object interaction". Both Piaget and Elkind recognize that egocentrism applies to all developmental stages from infancy to childhood, to adolescence to adulthood and beyond. However at each developmental stage, egocentrism manifests its characteristics in different ways, depending on the end goals of that particular stage.
During adolescence, formal operations are developing and become more intact and present in thinking processes. According to Piaget, these formal operations allow for "the young person to construct all the possibilities in a system and construct contrary-to-fact propositions". Elkind adds that "they also enable [the adolescent] to conceptualize his own thought, to take his mental constructions as objects and reason about them". These new thinking processes are believed to begin in early adolescences around ages 11–12. Another characteristic of formal operations that directly applies to adolescence egocentrism is the matter that during this stage, as discussed above, adolescents are conceptualizing the thoughts of those around them, in a sense, putting themselves into someone else's shoes in order to possibly understand their views. However, since adolescence is a stage in which the youth is primarily concerned with themselves and their own personal views and feelings, these shortfalls of formal operations result in the adolescent "fail[ing] to differentiate between what others are thinking about and his own mental preoccupations, he assumes that other people are as obsessed with his behavior and appearance as he is himself". As mentioned earlier, these sentiments are the basis of another feature of adolescent egocentrism: the imaginary audience.
Self-compassion and personal fable
"Self-compassion is an adaptive way of relating to the self when considering personal inadequacies or difficult life circumstances." Self-compassion refers to the ability to hold one's feelings of suffering with a sense of warmth, connection, and concern. Neff, K.D.(2003b) has proposed three major components of self-compassion. The first is self-kindness, which refers to the ability to treat oneself with care and understanding rather than harsh self-judgment. The second involves a sense of common humanity, recognizing that imperfection is a shared aspect of the human experience rather than
feeling isolated by one's failures. The third component of self-compassion is mindfulness, which involves holding one's present-moment experience in balanced perspective rather than exaggerating the dramatic story-line of one's suffering. At the same time, personal fable is theorized to lead to a lack of self-compassion if one's difficulties and failings are not faced and given meaning to be human. Self-compassion may also be able to mediate personal well-being. A study of 522 individuals from ages 14-24 sought to define this link between personal mental health and presence of self-compassion. The group was divided into 235 participants from ages 14 to 17 and 287 participants from ages 19 to 24. The subjects were gathered from high schools and colleges in a single city and were not compensated. Their socioeconomic backgrounds were largely middle-class (Neff & McGehee's).
Evidence found that self-compassion could explain a significant variance in well-being, predicting it even better than variables of maternal support.
Self-esteem and self-compassion
Adolescent egocentrism and the personal fable immensely affect the development of self-esteem and self-compassion during adolescence. During this particular stage, the self-esteem and self-compassion of an adolescent are developing and changing constantly, and many factors influence their development. According to Kristin Neff, self-esteem can be defined as judgments and comparisons stemming from evaluations of self-worth, while also evaluating personal performances in comparison to set standards and perceiving how others evaluate them to determine how much one likes the self. She goes on to further explain that self-compassion has three main components: "(a) self-kindness – being kind and understanding toward oneself in instances of pain or failure rather than being harshly self-critical, (b) common humanity – perceiving one's experiences as part of the larger human experience rather than seeing them as separating and isolating, (c) mindfulness – holding painful thoughts and feelings in balanced awareness rather than over-identifying with them". Self-compassion is an emotionally positive self-attitude that is assumed to protect against the negative consequences of self-judgment, isolation, and rumination (such as depression). With a basic understanding of these two concepts, self-esteem and self-compassion, it becomes evident that adolescent egocentrism and the personal fable have important consequences and affect many aspects of adolescent development.
Neff argues that although there are similarities in self-esteem and self-compassion, the latter contains fewer pitfalls than the former. She asserts that self-compassion is "not based on the performance evaluations of self and others or the congruence with ideal standard... it takes the entire self-evaluation process out of the picture, focusing on feelings of compassion toward oneself and the recognition of one's common humanity rather than making self-judgments". Furthermore, high self-compassion seems to counteract certain negative concerns of extremely high self-esteem such as narcissism and self-centeredness. Neff's studies also contend that those with high self-compassion have greater psychological health than those with lower levels of self-compassion, "because the inevitable pain and sense of failure that is experienced by all individuals is not amplified and perpetuated through harsh self-condemnation... this supportive attitude toward oneself should be associated with a variety of beneficial psychological outcomes, such as less depression, less anxiety, less neurotic perfectionism, and greater life satisfaction".
With these understandings of self-esteem and self-compassion during adolescence, we can see how the personal fable and egocentrism play a role in the development of these self-concepts and can greatly impact the way adolescents view themselves and who they believe they are. If an adolescent relies on the personal fable to the extent that they constantly believe that nobody understands them, that they are the only one going through "this", or that they are simply alone all the time, this can very negatively affect their personal growth, self-esteem, and self-compassion. On the other hand, if they feel that they have a good support system in their family, friends, school, etc., the development of self-esteem and self-compassion will likely take a much more positive route, and the adolescent will likely have a well-rounded sense of themselves. As Neff states, "individuals with high levels of self-compassion should have higher 'true self-esteem'". Thus the development that occurs during adolescence can most accurately be described as the interaction of multiple systems, functions, and abstract processes that occur together, separately, or in any other combination.
Gender differences in the development of self-esteem
A study by Ronald L. Mullis and Paula Chapman examined gender differences pertaining to the development of self-esteem in adolescents. The results of their study show that "the problem-solving skills of adolescents change and improve with age as a function of cognitive development and social experience". They found that male adolescents used more wishful thinking in their coping strategies than did female adolescents, who tended to rely more on social supports as a coping strategy. Furthermore, they found that youths with lower levels of self-esteem relied more on emotion-based coping methods (the study gives "ventilation of feelings" as an example), while those with high levels of self-esteem more readily utilized skills associated with problem solving and higher levels of formal operations as coping strategies.
Identity exploration and emerging adulthood
Arnett (2000) suggested that adolescents' identity exploration is transient and tentative. Adolescent dating is recreational in nature, involving group activities; adolescents are still exploring their identity before asking the question "Given the kind of person I am, what kind of person do I wish to have as a partner through life?" (Arnett, 2000, p. 473). With increasing opportunities to pursue higher education and greater delays in marriage and childbirth (Arnett, 2007), there is now more time, beyond adolescence, for activities and reflections surrounding self-definition and identity development (Kose, Papouchis & Fireman). Adolescents start to develop the cognitive skill to understand others' feelings and thoughts, known as theory of mind, and this helps them to develop their own sense of self and their own way of perceiving the world. It is normal for adolescents to experience the personal fable; it is what drives them to develop their own sets of skills for understanding others' thoughts and feelings, and it also triggers their ability to seek out their own identity. Arnett (2000) argues that because the transition to adulthood now occurs later than in the past, there is more time for adolescents to explore themselves; he thought of this period of exploration as a time when perspective-taking skills are being sharpened most dramatically. The personal fable also helps adolescents transition from exploring oneself to seeking extended experimentation, particularly in relationships, during the transition to young adulthood. Elkind, though, thought that with the extended period for identity exploration and less pressure to take on typical adult roles, young people may still feel special and invulnerable but do not feel on center stage as often as adolescents do (Elkind et al.; Lapsley et al., 1989). As an example, some young adults might still feel special and invulnerable inside, but they are less likely to engage in risky behaviors. Some current findings suggest that increases in personal fable ideation are associated with increases in identity development and cognitive formal operations, particularly among this young adult age group. Increases in personal fable ideation, especially feelings of invulnerability, among emerging adults may explain the heightened level of maladaptive behaviors in this group. For example, studies might explore how faulty thinking, particularly personal fable ideation, is related to risk behavior, and how interventions can be tailored to address this type of thinking when it leads to harmful outcomes for young adults (18–25 years old). Apparently inconsistent findings might be resolved by improvements in ways of measuring individual differences in the personal fable. Young adults have to be able to cope with an identity crisis while knowing that the personal fable is driving them toward risky behaviors; if they do not cope with these inner conflicts, they will be more likely to engage in risk behaviors. Current research indicates that the age of emerging adulthood may extend later than previously thought, and the personal fable also appears to persist into emerging adulthood. The persistence of the personal fable could contribute to continued risk-taking behavior even though this age group physically appears to be adult.
See also
Chūnibyō
Dunning–Kruger effect
God complex
Imaginary audience
References
Developmental psychology | Personal fable | [
"Biology"
] | 5,525 | [
"Behavioural sciences",
"Behavior",
"Developmental psychology"
] |
12,127,446 | https://en.wikipedia.org/wiki/Flood%20Control%20Act%20of%201941 | The Flood Control Act of 1941 was an Act of the United States Congress signed into law by US President Franklin Roosevelt that authorized civil engineering projects such as dams, levees, dikes, and other flood control measures through the United States Army Corps of Engineers and other Federal agencies. It is one of a number of Flood Control Acts that is passed nearly annually by the US Congress.
Projects
Dams
Kinzua Dam (begun in 1960, completed in 1965)
Fort Gibson Dam (begun in 1941, completed in 1949)
Allatoona Dam (begun in 1946, completed in 1950)
Stormwater control
Construction of mandatory storm drains and flood control channels throughout the city of Los Angeles in the wake of the Los Angeles Flood of 1938.
See also
Water Resources Development Act
Rivers and Harbors Act
for related legislation that sometimes also implements flood control provisions.
1941 in the environment
1941 in American law
1941 | Flood Control Act of 1941 | [
"Engineering"
] | 177 | [
"Civil engineering",
"Civil engineering stubs"
] |
12,127,451 | https://en.wikipedia.org/wiki/Monkey%20Girl | Monkey Girl: Evolution, Education, Religion, and the Battle for America's Soul is a 2007 non-fiction book about the Kitzmiller v. Dover Area School District trial of 2005. Author Edward Humes, a Pulitzer Prize-winning American journalist, interviewed interested parties to the controversy around a school board's decision to introduce the concept of intelligent design into public school lessons on science. The book describes in detail the experiences of those caught up in the actions of the school board and the ensuing Dover trial, in the context of the intelligent design movement and the ascendency of the American religious right whose opposition to evolution led them to campaign to redefine science to accept supernatural explanations of natural phenomena.
References
Inherit the Wind, Redux – Washington Post, Reviewed by Christine Rosen, February 25, 2007
Edward Humes – The online home of Monkey Girl
Excerpt of Monkey Girl
Intelligent design
2007 non-fiction books
Ecco Press books
Books about education | Monkey Girl | [
"Engineering"
] | 193 | [
"Intelligent design",
"Design"
] |
12,127,712 | https://en.wikipedia.org/wiki/Plutonium-244 | Plutonium-244 (Pu) is an isotope of plutonium that has a half-life of 81.3 million years. This is longer than any other isotope of plutonium and longer than any other known isotope of an element beyond bismuth, except for the three naturally abundant ones: uranium-235 (704 million years), uranium-238 (4.468 billion years), and thorium-232 (14.05 billion years). Given the half-life of Pu, an exceedingly small amount should still be present on Earth, making plutonium a likely but unproven candidate as the shortest-lived primordial element.
Natural occurrence
Accurate measurements, beginning in the early 1970s, appeared to detect primordial plutonium-244, making it the shortest-lived primordial nuclide. The amount of ²⁴⁴Pu in the pre-Solar nebula (4.57×10⁹ years ago) was estimated as 0.8% of the amount of ²³⁸U. As the age of the Earth is about 56 half-lives of ²⁴⁴Pu, the amount of ²⁴⁴Pu left should be very small; Hoffman et al. estimated its content in the rare-earth mineral bastnasite as 1.0×10⁻¹⁸ g/g, which corresponded to a content in the Earth's crust as low as 3×10⁻²⁵ g/g (i.e. the total mass of plutonium-244 in Earth's crust is about 9 g). Since ²⁴⁴Pu cannot be easily produced by natural neutron capture in the low neutron activity environment of uranium ores (see below), its presence cannot plausibly be explained by any other means than creation by r-process nucleosynthesis in supernovae or neutron star mergers.
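The claim that only an exceedingly small amount survives is simple decay arithmetic; a minimal Python sketch using the half-life and age quoted above:

```python
# Surviving fraction of primordial plutonium-244 over Earth's lifetime.
HALF_LIFE_YR = 81.3e6    # half-life of 244Pu, years
AGE_YR = 4.57e9          # approximate age of the Earth/Solar System, years

half_lives = AGE_YR / HALF_LIFE_YR    # elapsed half-lives, ~56
fraction = 0.5 ** half_lives          # fraction of the original 244Pu remaining

print(f"{half_lives:.1f} half-lives -> surviving fraction {fraction:.1e}")
# 56.2 half-lives -> surviving fraction 1.2e-17
```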
However, the detection of primordial ²⁴⁴Pu in 1971 was not confirmed by more recent, more sensitive measurements using accelerator mass spectrometry. In a 2012 study, no traces of ²⁴⁴Pu were observed in samples of bastnasite taken from the same mine as in the early study, so only an upper limit on the ²⁴⁴Pu content was obtained: < 1.5×10⁻¹⁹ g/g, i.e. 370 (or fewer) atoms per gram of the sample, at least seven times lower than the abundance measured by Hoffman et al. A 2022 study, once again using accelerator mass spectrometry, could not detect ²⁴⁴Pu in Bayan Obo bastnasite, finding an upper limit of < 2.1×10⁻²⁰ g/g (about seven times lower than the 2012 study). Thus, the 1971 detection cannot have been a signal of primordial ²⁴⁴Pu. Considering the likely abundance ratio of ²⁴⁴Pu to ²³⁸U in the early solar system (~0.008), this upper limit is still 18 times greater than the expected present ²⁴⁴Pu content in the bastnasite sample (1.2×10⁻²¹ g/g).
Trace amounts of ²⁴⁴Pu (which arrived on Earth within the last 10 million years) were found in rock from the Pacific Ocean by a Japanese oil exploration company.
Live interstellar plutonium-244 has been detected in meteorite dust in marine sediments, though the levels detected are much lower than would be expected from current modelling of the in-fall from the interstellar medium. It is important to recall, however, that in order to be a primordial nuclide – one constituting the amalgam orbiting the Sun that ultimately coalesced into the Earth – plutonium-244 must have comprised some of the solar nebula, rather than having been replenished by extrasolar meteoritic dust. The presence of ²⁴⁴Pu in a meteor without evidence that the meteor originated in the Solar System's formational disc supports the hypothesis that ²⁴⁴Pu was abundant enough to have been part of that disc, if an extrasolar meteor contained it in some other gravitationally supported system, but such a meteor cannot prove the hypothesis. Only the unlikely discovery of live ²⁴⁴Pu within the Earth's composition could do that.
As an extinct radionuclide
Plutonium-244 is one of several extinct radionuclides that preceded the formation of the Solar System. Its half-life of 80 million years ensured its circulation across the solar system before its extinction, and indeed, ²⁴⁴Pu has not yet been found in matter other than meteorites. Radionuclides such as ²⁴⁴Pu decay to produce fissiogenic (i.e., arising from fission) xenon isotopes that can then be used to time the events of the early Solar System. In fact, from data from Earth's mantle indicating that about 30% of existing fissiogenic xenon is from ²⁴⁴Pu decay, it can be inferred that the Earth formed nearly 50–70 million years after the Solar System formed.
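A simplified illustration of why a formation delay is measurable (a rough sketch only; actual chronometry combines ¹²⁹I and ²⁴⁴Pu systematics with the isotopic fingerprints of the fission xenon): the fraction of ²⁴⁴Pu still available when the Earth finished forming, a delay Δt after the Solar System formed, is

```latex
% Fraction of 244Pu remaining after a formation delay Delta t:
\[
\frac{N(\Delta t)}{N_0} = 2^{-\Delta t / t_{1/2}},
\qquad
2^{-60\,\mathrm{Myr}/81.3\,\mathrm{Myr}} \approx 0.60 ,
\]
```

so a delay of 50–70 million years leaves a measurably reduced ²⁴⁴Pu inventory, and with it a reduced fissiogenic xenon contribution.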
Before the analysis of mass spectroscopy data from samples found in meteorites, it was inferential at best to credit ²⁴⁴Pu as the nuclide responsible for the fissiogenic xenon found. However, an analysis of a laboratory sample of ²⁴⁴Pu compared with fissiogenic xenon gathered from the meteorites Pasamonte and Kapoeta produced matching spectra that immediately left little doubt as to the source of the isotopic xenon anomalies. Spectral data were also acquired for another candidate actinide isotope, ²⁴⁷Cm, but such data proved contradictory and helped erase further doubts that the fission was appropriately attributed to ²⁴⁴Pu.
Both the examination of spectral data and the study of fission tracks led to several findings of plutonium-244. In Western Australia, analysis of the mass spectrum of xenon in 4.1–4.2-billion-year-old zircons found diverse levels of ²⁴⁴Pu fission. The presence of ²⁴⁴Pu fission tracks can be established by using the initial ²⁴⁴Pu/²³⁸U ratio at the time when xenon formation first began in meteorites, and by considering how the ratio of ²⁴⁴Pu to ²³⁸U fission tracks varies over time. Examination of a whitlockite crystal within a lunar rock specimen brought back by Apollo 14 established proportions of ²⁴⁴Pu/²³⁸U fission tracks consistent with this time dependence.
Production
Unlike plutonium-238, plutonium-239, plutonium-240, plutonium-241, and plutonium-242, plutonium-244 is not produced in quantity by the nuclear fuel cycle, because further neutron capture on plutonium-242 produces plutonium-243, which has a short half-life (5 hours) and quickly beta-decays to americium-243 before having much opportunity to capture further neutrons in any but very high neutron flux environments. The global inventory of ²⁴⁴Pu is about 20 grams. Plutonium-244 is also a minor constituent of thermonuclear fallout, with a global ²⁴⁴Pu/²³⁹Pu fallout ratio of (5.7 ± 1.0) × 10⁻⁵.
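The bottleneck can be written as a capture-decay chain (standard nuclear notation; a sketch of why the path to ²⁴⁴Pu is blocked at ²⁴³Pu under ordinary neutron fluxes):

```latex
% 242Pu captures a neutron to 243Pu, which beta-decays to 243Am
% long before a second capture (to 244Pu) is likely:
\[
{}^{242}\mathrm{Pu}\,(n,\gamma)\,{}^{243}\mathrm{Pu}
\;\xrightarrow{\beta^-,\ t_{1/2}\,\approx\,5\,\mathrm{h}}\;
{}^{243}\mathrm{Am}
\qquad\text{rather than}\qquad
{}^{243}\mathrm{Pu}\,(n,\gamma)\,{}^{244}\mathrm{Pu}.
\]
```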
Applications
Plutonium-244 is used as an internal standard for isotope dilution mass spectrometry analysis of plutonium.
References
Nuclear materials
Isotopes of plutonium | Plutonium-244 | [
"Physics",
"Chemistry"
] | 1,434 | [
"Isotopes of plutonium",
"Isotopes",
"Materials",
"Nuclear materials",
"Matter"
] |
12,128,493 | https://en.wikipedia.org/wiki/Plant%20transformation%20vector | Plant transformation vectors are plasmids that have been specifically designed to facilitate the generation of transgenic plants. The most commonly used plant transformation vectors are T-DNA binary vectors and are often replicated in both E. coli, a common lab bacterium, and Agrobacterium tumefaciens, a plant-virulent bacterium used to insert the recombinant DNA into plants.
Plant transformation vectors contain three key elements:
Plasmid selection (creating a custom circular strand of DNA)
Plasmid replication (so that it can be easily worked with)
Transfer DNA (T-DNA) region (inserting the DNA into the agrobacteria)
Steps in plant transformation
A custom DNA plasmid sequence can be created and replicated in various ways, but generally, all methods share the following processes:
Plant transformation using plasmids begins with the propagation of the binary vector in E. coli. When the bacterial culture reaches the appropriate density, the binary vector is isolated and purified. Then, a foreign gene can be introduced. The engineered binary vector, including the foreign gene, is re-introduced in E. coli for amplification.
The engineered binary vector is isolated from E. coli and is introduced into Agrobacteria containing a modified (relatively small) Ti plasmid. This engineered Agrobacteria can be used to infect plant cells. The T-DNA, which contains the foreign gene, becomes integrated into the plant cell genome. In each infected cell, the T-DNA is integrated at a different site in the genome.
The entire plant will regenerate from a single transformed cell, resulting in an organism with the transformed DNA integrated identically across all cells.
Consequences of the insertion
Foreign DNA inserted
Insertional mutagenesis (but not lethal for the plant cell – as the organism is diploid)
Transformation DNA fed to rodents ends up in their phagocytes and rarely in other cells. Specifically, this refers to bacterial and M13 DNA. (This preferential accumulation in phagocytes is thought to be real and not a detection artefact since these DNA sequences are thought to provoke phagocytosis.) However, no gene expression is known to have resulted, and this is not thought to be possible.
Plasmid selection
A selector gene can be used to distinguish successfully genetically modified cells from unmodified ones. The selector gene is integrated into the plasmid along with the desired target gene, providing the cells with resistance to an antibiotic, such as kanamycin, ampicillin, spectinomycin or tetracycline. The desired cells, along with any other organisms growing within the culture, can be treated with an antibiotic, allowing only the modified cells to survive. The antibiotic gene is not usually transferred to the plant cell but instead remains within the bacterial cell.
Plasmid replication
Plasmids replicate to produce many plasmid molecules in each host bacterial cell. The number of copies of each plasmid in a bacterial cell is determined by the replication origin, which is the position within the plasmid molecule where DNA replication is initiated. Most binary vectors have a higher number of plasmid copies when they replicate in E. coli; however, the plasmid copy-number is usually lower when the plasmid is resident within Agrobacterium tumefaciens.
Plasmids can also be replicated using the polymerase chain reaction (PCR).
T-DNA region
T-DNA contains two types of genes: the oncogenic genes, encoding for enzymes involved in the synthesis of auxins and cytokinins and responsible for tumor formation, and the genes encoding for the synthesis of opines. These compounds, produced by the condensation between amino acids and sugars, are synthesized and excreted by the crown gall cells, and they are consumed by A. tumefaciens as carbon and nitrogen sources.
The genes involved in opine catabolism, in T-DNA transfer from the bacterium to the plant cell, and in bacterium-to-bacterium plasmid conjugative transfer are located outside the T-DNA. The T-DNA fragment is flanked by 25-bp direct repeats, which act as a cis-element signal for the transfer apparatus. The process of T-DNA transfer is mediated by the cooperative action of proteins encoded by genes in the Ti plasmid virulence region (vir genes) and in the bacterial chromosome. The Ti plasmid also contains the genes for catabolism of the opines produced by the crown gall cells, and regions for conjugative transfer and for its own integrity and stability. The 30 kb virulence (vir) region is a regulon organized in six operons that are either essential for the T-DNA transfer (virA, virB, virD, and virG) or increase transfer efficiency (virC and virE). Several chromosomally determined genetic elements have a functional role in the attachment of A. tumefaciens to the plant cell and in bacterial colonization: the loci chvA and chvB are involved in the synthesis and excretion of β-1,2 glucan; chvE is required for the sugar enhancement of vir gene induction and for bacterial chemotaxis; the cel locus is responsible for the synthesis of cellulose fibrils; the pscA (exoC) locus is involved in the synthesis of both cyclic glucan and acid succinoglycan; and the att locus is involved in cell surface proteins.
References
Technical Focus: a guide to Agrobacterium binary Ti vectors. Trends in Plant Science 5(10): 446–451 (2000)
Transformation vector
Molecular biology
Mobile genetic elements
Molecular biology techniques
Gene delivery | Plant transformation vector | [
"Chemistry",
"Biology"
] | 1,201 | [
"Genetics techniques",
"Mobile genetic elements",
"Plants",
"Plant genetics",
"Molecular genetics",
"Molecular biology techniques",
"Molecular biology",
"Biochemistry",
"Gene delivery"
] |
12,128,498 | https://en.wikipedia.org/wiki/Arizona%20room | An Arizona room is a screened porch found frequently in homes in Arizona, based on similar concepts as the Florida room. Though often a patio or porch that has been covered and screened-in, creating an outdoor feeling while preventing excessive heat and keeping insects and animals out, many Arizona rooms are purpose built at the time the house is constructed. The room generally borders the backyard or side yard of the house and is often accessed directly from the living room, kitchen or other common room of the home.
According to Phoenix newspaper The Arizona Republic, residents slept in their Arizona room during the summer months, before the advent of air conditioning, because the flow of cool night air made them more comfortable than in an enclosed bedroom.
Arizona rooms are often decorated with Southwestern decor and furniture, and reflect the casual, informal style characteristic of the Southwest.
See also
Florida room
Screened porch
Sleeping porch
References
Room
Rooms | Arizona room | [
"Engineering"
] | 177 | [
"Rooms",
"Architecture"
] |
12,129,829 | https://en.wikipedia.org/wiki/HD%20132406%20b | HD 132406 b is a long-period, massive gas giant exoplanet orbiting the Sun-like star HD 132406. HD 132406 b has a minimum mass 5.61 times the mass of Jupiter. Its orbital distance from the star is almost twice the Earth–Sun distance, and its orbital period is 2.7 years.
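As a quick consistency check of the quoted figures, Kepler's third law links the 2.7-year period to the orbital distance; a host mass of one solar mass is assumed here purely for illustration (the text says only "Sun-like"):

```python
# Kepler's third law in solar units: a^3 = M * P^2, with a in AU,
# P in years, M in solar masses. M = 1.0 is an illustrative assumption.
P_years, M_solar = 2.7, 1.0
a_au = (M_solar * P_years**2) ** (1.0 / 3.0)
print(f"a = {a_au:.2f} AU")  # a = 1.94 AU, i.e. almost twice the Earth-Sun distance
```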
An astrometric measurement of the planet's inclination and true mass was published in 2022 as part of Gaia DR3, and this was updated in 2023.
References
External links
simulation HD 132406
Boötes
Giant planets
Exoplanets discovered in 2007
Exoplanets detected by radial velocity
Exoplanets detected by astrometry | HD 132406 b | [
"Astronomy"
] | 144 | [
"Boötes",
"Constellations"
] |
12,130,019 | https://en.wikipedia.org/wiki/Consumer-to-business | Consumer-to-business (C2B) is a business model in which consumers (individuals) create value and businesses consume that value. For example, when a consumer writes reviews, or when a consumer gives a useful idea for new product development, that consumer is creating value for the business if the business adopts the input. The C2B model, also known as a reverse auction or demand collection model, enables buyers to name or demand their own price, which is often binding, for a specific good or service. Within a consumer-to-business market, the roles involved in the transaction must be established, and the consumer must offer something of value to the business.
Another form of C2B is the electronic commerce business model in which consumers can offer products and services to companies, and the companies pay the consumers. This business model is a complete reversal of the traditional business model in which companies offer goods and services to consumers (business-to-consumer = B2C). We can see the C2B model at work in blogs or internet forums in which the author offers a link back to an online business thereby facilitating the purchase of a product (like a book on Amazon.com), for which the author might receive affiliate revenues from a successful sale. Elance was the first C2B model e-commerce site.
Overview
C2B is a kind of economic relationship that is qualified as an inverted business type. The advent of the C2B scheme is due to:
The internet connecting large groups of people to a bidirectional network; the large traditional media outlets are one-directional relationships whereas the internet is bidirectional.
Decreasing costs of technology; individuals now have access to technologies that were once only available to large companies (digital printing and acquisition technology, high-performance computers, and powerful software).
Positives and negatives
Nowadays people have smartphones or connect to the internet through personal tablets or computers daily, allowing consumers to engage with brands online. According to Katherine Arline, in traditional business models companies would promote goods and services to consumers, but a shift has occurred to allow consumers to be the driving force behind a transaction. To the consumer's benefit, reverse auctions in consumer-to-business markets allow the consumer to name their price for a product or service. A consumer can also provide value to a business by offering to promote the business's products on the consumer's blog or social media platforms. Businesses are provided value through their consumers and vice versa: businesses gain in C2B from consumers' willingness to negotiate price, contribute data, or market to the company, while consumers profit from reduced-price goods and services and from the flexibility of the transactions the C2B market creates. Consumer-to-business markets have drawbacks as well: C2B is still a relatively new business practice and has not been fully studied.
One weakness of the consumer-business model is that consumer information and privacy could be compromised. For example, businesses might choose to secretly analyze consumer spending by using sensitive information such as purchase history, age, race, location, etc.
Distinguishing between traditional business models
Consumer-to-business is an up-and-coming market that can serve as a company's entire business model or be added to an already existing model. Consumer-to-business (C2B) is the opposite of business-to-consumer (B2C) practice and is facilitated by the internet and online forms of technology. Another important distinction from the traditional business-to-consumer market is that the consumer chooses to be part of the business relationship in a consumer-to-business market. For a relationship to exist, though, both parties must acknowledge it, implying that the relationship is important to both participants.
Data and analytics will drive the C2B world and enable companies to gain a better understanding of customers. Businesses need to return to what drives sales: people. Rather than focusing on innovation and the newest technology, they should return to the who, what, and why of the people interacting with businesses.
Usage in technology
The technology industry has largely adopted consumer-to-business strategies, with social media corporations taking a large part in that growth. For example, companies such as Yelp or TripAdvisor provide a C2B service through the amount of personal data harvested for use in targeting possible advertising clients. In the case of review aggregators, C2B can also be argued to increase the revenue of businesses through greater overall knowledge about the company at hand. For example, if a corporation receives many positive reviews on a website such as Yelp, this may help drive traffic to the company.
Data aggregation
Aggregation of data is a common C2B practice among many internet corporations. In this instance, the consumer creates value in the form of personal information and data, which is used to match them with the correct advertisers. Businesses such as Facebook, Twitter, and others utilize this information to facilitate their B2B transactions with advertisers. Most of these systems cannot be fully utilized without B2C or B2B transactions, as C2B is usually their facilitator.
See also
Business-to-consumer
Business-to-government
Consumer-to-consumer
e-Business
Business-to-business (B2B)
References
Business models
Consumer behaviour | Consumer-to-business | [
"Biology"
] | 1,055 | [
"Behavior",
"Consumer behaviour",
"Human behavior"
] |
12,130,055 | https://en.wikipedia.org/wiki/Business-to-employee | Business-to-employee (B2E) electronic commerce uses an intrabusiness network which allows companies to provide products and/or services to their employees. Typically, companies use B2E networks to automate employee-related corporate processes. B2E portals have to be compelling to the people who use them: companies are competing for their employees' attention with eBay, Yahoo and thousands of other websites. A huge percentage of traffic to consumer websites comes from people who are connecting to the net at the office.
Examples of B2E applications include:
Online insurance policy management
Corporate announcement dissemination
Online supply requests
Special employee offers
Employee benefits reporting
401(k) Management
Online loan services
B2B
Business-to-business (B2B) is another type of e-commerce, in which the buyers and sellers are business organisations. It covers a broad spectrum of applications that enable an enterprise to form electronic relationships with its distributors, resellers, suppliers, customers, and other partners, and it is also used for the exchange of service information. Organisations can use B2B to restructure their supply chains and their partner relationships. E-procurement is an important part of the business-to-business purchase and sale of supplies and services over the Internet; a central part of several B2B sites, e-procurement is sometimes referred to in other ways, such as supplier exchange.
Consumer-to-business
The consumer-to-business model (C2B) is a type of commerce where a consumer or end user provides a product or service to an organization. It is the reverse of business-to-consumer (B2C), where businesses produce products and services for consumer consumption. The idea is that the individual or end user provides a product or service that the business can use to complete a business process or gain competitive advantage.
Portal
Portal is a term, generally synonymous with gateway, for a World Wide Web site that is or proposes to be a major starting site for users when they get connected to the Web or that users tend to visit as an anchor site. There are general portals and specialized or niche portals. Some major general portals include Yahoo, eBay and Microsoft Network.
Consumer-to-consumer
Consumer-to-consumer e-commerce is the practice of individual consumers buying and selling goods via the Internet. The most common examples of this form of transaction come from sales websites such as eBay, although online forums and classifieds also offer this type of commerce to consumers. In most cases, consumer-to-consumer e-commerce, also known as C2C e-commerce, is helped along by a third party who officiates the transaction to make sure goods are received and payments are made. This offers some protection for consumers partaking in C2C e-commerce, allowing them the chance to take advantage of the prices offered by motivated sellers.
There are four different types of e-commerce, or electronic commerce, the buying and selling of goods or services through the use of computer technology and Internet service. Electronic commerce can take place between businesses (business-to-business), from businesses to consumers (business-to-consumer), or from consumers to businesses (consumer-to-business). The final type, consumer-to-consumer e-commerce, takes businesses out of the picture and allows transactions to take place between consumers via the Internet.
Business-to-consumer
Business-to-consumer refers to commercial transactions where businesses sell products or services to consumers. This might refer to individuals shopping for clothes for themselves in a physical store, eating at a restaurant, or subscribing to a service at home. More recently, the term B2C refers to the online selling of products, in which manufacturers or retailers sell their products to consumers over the Internet.
E-commerce
Electronic commerce or e-commerce is a term for any type of business, or commercial transaction, that includes the transfer of information across the Internet. It covers a range of different types of businesses, from consumer-oriented retail sites, through auction or music sites, to business exchanges trading goods and services among companies, and it is currently one of the most important features of the Internet to develop. The development of e-commerce has proceeded in phases: offline and online brands initially were kept distinct and then were awkwardly merged, and initial e-commerce efforts consisted of flashy brochure sites with rudimentary shopping carts and accelerated checkouts.
References
See also
e-business
Business-to-consumer
Business-to-business
E-commerce
Information technology management | Business-to-employee | [
"Technology"
] | 916 | [
"E-commerce",
"Information technology",
"Information technology management"
] |
12,130,394 | https://en.wikipedia.org/wiki/Retrenchment%20%28computing%29 | Retrenchment is a technique associated with formal methods that was introduced to address some of the perceived limitations of formal, model-based refinement, for situations in which refinement might be regarded as desirable in principle but turns out to be unusable, or nearly unusable, in practice. It was primarily developed at the School of Computer Science, University of Manchester. The most up-to-date perspective is in the ACM TOSEM article below.
External links
The Retrenchment Homepage
R. Banach, Graded Refinement, Retrenchment and Simulation, ACM Transactions on Software Engineering and Methodology 32, 1–69 (2023)
Formal methods
Software development philosophies
Department of Computer Science, University of Manchester | Retrenchment (computing) | [
"Technology",
"Engineering"
] | 160 | [
"Computer science stubs",
"Software engineering",
"Computer science",
"Computing stubs",
"Formal methods"
] |
17,647,837 | https://en.wikipedia.org/wiki/Lenovo%203000 | Lenovo 3000 was a line of low-priced notebook and desktop computers designed by Lenovo Group targeting small businesses and individuals. It was replaced with the IdeaCentre and IdeaPad brands.
Background
The Lenovo 3000 series marked the debut of Lenovo-branded products outside of China. First showcased in New York City on 23 February 2006, the line was intended to boost Lenovo's competitiveness internationally against rival brands like Dell and Hewlett-Packard. In addition, the 3000 series gave the company an independent identity, separate from the ThinkPad line that Lenovo acquired in 2005 and that had defined its Westernised image since the acquisition.
In 2008, after introducing two new consumer brands, IdeaPad for laptops and IdeaCentre for desktops, Lenovo stopped selling its 3000 series models, although they continued to be sold in China in 2009.
Models
Desktops
Lenovo 3000 J
features both AMD and Intel processors
Lenovo 3000 H
Notebooks
First introduced in 2006, the Lenovo 3000 N100 and V100 offered Intel Core Duo processors, while the lower-end C series featured Pentium M and Celeron M processors. Their successors, the C200, N200 and V200, featured Core 2 Duo processors. Thereafter came the N500, the G series, and the B series.
Lenovo 3000 C
C100, C200 - 15-inch XGA screen
Lenovo 3000 N
N100, N200 - 14.1-inch- and 15.4-inch- WXGA models
N500 - 15.4-inch screen
Lenovo 3000 V
V100, V200 - 12.1-inch WXGA models
Lenovo 3000 G
G400, G410, G430, G450, G455, G510, G530, G550, and G560
Lenovo 3000 B Series
B450-B490 - 15.4-inch screen
References
External links
Introducing Lenovo 3000, Lenovo Group
3000
Consumer electronics brands
Computer-related introductions in 2006
Products and services discontinued in 2008 | Lenovo 3000 | [
"Technology"
] | 420 | [
"Mobile computer stubs",
"Mobile technology stubs"
] |
17,649,642 | https://en.wikipedia.org/wiki/Earmark%20%28agriculture%29 | An earmark is a cut or mark in the ear of livestock animals such as cattle, deer, pigs, goats, camels or sheep, made to show ownership, year of birth or sex.
The term dates to the 16th century in England. For example, in a case of defamation in King's Bench in 1541, the defamatory statement included "George Butteler hath eremarked a mare of one Robert Hawk." The practice existed in the Near East up to the time of Islam. Against this, in Q. 4:119 the Qur'an quotes the Devil promising, "I will mislead them, I will entice them, I will command them to mark the ears of livestock, and I will command them to distort the creation of God."
Earmarks are typically registered when a stock owner registers a livestock brand for their use. There are many rules and regulations concerning the use of earmarks between states and countries. Tasmanian sheep and cattle must be earmarked before they become six months old.
Generally the owner’s earmark is placed in a designated ear of a camel or sheep to indicate its gender. Typically if a registered earmark is used, it must be applied to the right ear for ewes and the left ear for female camels. The other ear of a sheep then may be used to show the year of its birth. Cattle earmarks are often a variety of knife cuts in the ear as an aid to identification, but it does not necessarily constitute proof of ownership.
Since the 1950s it has been more common to use ear tags to identify livestock because coloured tags are capable of conveying more information than earmarks. Such ear tags were popularised by New Zealand dairy farmers in the earliest successful use of them.
Because of the ubiquity of earmarking, in the nineteenth and twentieth centuries it became common parlance to call any identifying mark an earmark. In early times many politicians came from country or farming backgrounds and were adept at putting such words to new uses and at creating new concepts.
Today it is common to refer to an institution's ability to designate funds for a specific use or owner as an earmark.
Laboratory animals
Laboratory mice are often kept several to a cage, since mice are social animals, so some method of identifying them individually is necessary. Earmarks may be used, although non-traumatic methods such as tattooing their tails and painting spots on white mice with crystal violet or permanent markers can be used as well. Microchips are less commonly used in mice because of their expense relative to the short life span of a mouse.
Earmarking a mutant strain of mice called MRL/MpJ led to the accidental discovery that they had the ability to regenerate tissue very quickly, when scientists working with them found that the holes punched in their ears kept growing back. The holes healed over completely with regenerated cartilage, blood vessels, and skin with hair follicles. It was later found that this strain of mice also heals damage to other body parts such as knee cartilage and heart muscle significantly better than other mice.
See also
Animal identification
British Cattle Movement Service
Ear tag
Livestock branding
Overview of discretionary invasive procedures on animals
References
Earmarkers
Pig Ear Notcher
Icelandic earmarks
Stock ID System
Identification of domesticated animals
Animal equipment
Animal welfare | Earmark (agriculture) | [
"Biology"
] | 686 | [
"Animal equipment",
"Animals"
] |
17,649,996 | https://en.wikipedia.org/wiki/Open%20access%20%28infrastructure%29 | In the context of infrastructure, open access involves physical infrastructure such as railways and physical telecommunications network plants being made available to clients other than owners, for a fee.
For example, private railways within a steel works are private and not available to outsiders. In the hypothetical case of the steelworks having a port or a railway to a distant mine, outsiders might want access to save having to incur a possibly large cost of building their own facility.
Marconi and radio communication
The Marconi Company was a pioneer of long distance radio communication, which was particularly useful for ships at sea. Marconi was very protective about its costly infrastructure and refused—except for emergencies—to allow other radio companies to share its infrastructure. Even if the message sender was royalty, as in the Deutschland incident of 1902, they continued to refuse access. Since radio communication was so new, it preceded laws, regulations and licenses, which might otherwise impose conditions to open infrastructure to other players.
Pilbara railways
In the Pilbara region of Western Australia, two large mining companies operate their own private railway networks to bring iron ore from mines to the port. In 1999 North Limited made an application to access Rio Tinto's system, but Rio's takeover of North Limited meant that the application was never fully tested. In 2004 Fortescue Metals Group launched a bid to have the Mount Newman railway, owned by BHP Billiton, declared open for third-party access. The owner of the line claimed that it formed an integral part of the production process and so should not be subject to competition requirements. When these mines started in the 1960s, state laws required the miners to make their infrastructure available to other players, but no application had been made. In the same region, the Fortescue Metals Group railway has been set up for open access for a fee.
In June 2008 the Federal Government advisory body, the National Competition Council, recommended that BHP Billiton's Goldsworthy railway and Rio Tinto's Hamersley and Robe River railways should be declared open access. Treasurer Wayne Swan was given 60 days to make a final decision based on this recommendation. On October 27 the three lines were declared open access, applying from November 20, 2008. This will apply for 20 years under the National Access Regime within the Trade Practices Act 1974. The declaration does not give a right of access, but provides a third party with recourse if access terms cannot be negotiated with the infrastructure owner.
Concerns
A player seeking access to infrastructure can expect to pay several fees, both capital and operating; ideally, the total cost is less than that of building separate infrastructure. It is in the public interest that access disputes be resolved efficiently, so that, for example, profits are maximized and the income tax on those profits is maximized as well.
The potential for monopoly infrastructure to charge high fees and provide poor service has long been recognized. Monopolies are often inevitable because of high capital costs, with governments often imposing conditions, in exchange for approval of the project and for the granting of useful powers such as land resumption. Thus a canal might have its rates regulated, and be forbidden to operate canal boats on its own waters.
Trackage rights
Where there are many separate railways and one railway wishes to run trains off their own tracks onto the tracks of another, they may seek trackage rights from the other railway(s). This can be done by voluntary agreement, or by compulsory order of a regulator. In time of flood and accident which puts a line out of order, compulsory trackage rights may be ordered in the public interest to keep traffic flowing, assuming that alternative routes exist.
Joint Venture
One of the problems faced by open access is fair pricing. Consider the iron ore minnow BC Iron, which seeks access to the Fortescue railway (Fortescue has declared itself to be an open access operator).
BC Iron and Fortescue agreed to set up a joint venture to operate both the BCI mine and the railway connection to the port. As the larger player, Fortescue runs the venture. The joint venture gives BC Iron inside information about how much the railway and port cost, while Fortescue gains inside information about the mine.
Geraldton
In 2010, Mount Gibson Iron spent a total of 20 million dollars to upgrade the ore unloader at Geraldton. The facility will be open to other users, who will pay a toll for such usage.
The Oakajee port and railway network is also to be open access.
Queensland
In 2011, GVK offered to make its proposed railway from the Galilee Basin to Abbot Point open access to other players.
Telecommunications
In the telecommunications industry, open access to existing infrastructure, in the form of local loop unbundling, duct sharing, utility pole sharing, and fiber unbundling, is one proposed solution to the middle mile problem.
See also
Network Rail
Open access operator
References
Transport economics | Open access (infrastructure) | [
"Engineering"
] | 995 | [
"Construction",
"Infrastructure"
] |
17,650,136 | https://en.wikipedia.org/wiki/Military%20spectrum%20management | Every military force aims to ensure permanent access to the radio frequencies needed to carry out its vital tasks, in line with the strategies, doctrines and policies it adheres to.
The high mobility of military operations and their logistics support requires extensive, high-speed voice, data and image communications. Control, surveillance, reconnaissance and reporting systems play a vital role in the command and control system, and many of these requirements can only be met with radio systems. Military communications equipment multiplies the power of forces, which is why access to the radio frequency spectrum is regarded as a precondition for successful military operations.
Need for access
Despite the continuous reduction of forces, especially since the 1990s, military demand for access to radio spectrum has not decreased. Highly mobile joint forces with quick reaction times and an increased number of missions need more exact and timely information, in all defined regions and in unpredicted ones as well. In addition, military systems operate in different bands and on several frequencies at the same time.
Because the electromagnetic spectrum is treated as an asset and as an element of the operational electronic architecture that today's and future forces require, military forces make every effort to obtain the necessary bands. In managing frequencies, however, they face a range of challenges.
Technology is advancing rapidly and has brought an extended variety of user services. The success of certain applications (mobile radio-telephony, low-power equipment, digital media, various military systems, etc.) has naturally increased the demand for frequencies from both the civilian and military sectors. This has often led civil administrations to try to reduce the amount of spectrum allocated to military forces.
Spectrum management
Spectrum management is complex and difficult. The terminology, the legal and technical considerations, the complex national, regional and international regulations, and the bilateral and multilateral agreements can confuse those less versed in the field. Forces in operations often do not see the incompatibilities and interference between their own communications systems and other systems. All of this dictates the need for specialized personnel to manage the spectrum and to provide relevant recommendations to commanders and staffs at all levels. Effective, continuous training of frequency administrators is an important factor in improving frequency management.
Fulfillment
National security structures, according to their priorities and capabilities, press for the immediate and maximal fulfilment of their requests for electromagnetic spectrum. However, the civil administrations that manage frequencies often neither understand nor harmonize spectrum requests made in the interest of national security, and developments in the national security structures are not followed or fully taken into consideration by them. Military forces should therefore be actively engaged in defining a clear, single statement of their spectrum needs in both the national and international context, and in securing priority treatment in spectrum allocation discussions. Reducing the size of military forces does not mean that their available spectrum should be reduced as well: the variety of operations (combat, non-combat and peace support) has increased, and forces use frequencies according to their activities, not their numbers. Military equipment is designed to operate across the entire traditional, harmonized military spectrum, and frequency support is also mandatory for completing acquisition and procurement procedures.
Standards
An essential aspect of frequency management is orientation towards NATO policies, agreements, procedures and standards. These should be properly reflected in those of the military forces of each member or partner country; this is necessary, among other things, to achieve interoperability between communications and information systems.
Frequency management in military forces is dynamic in nature. It involves adjusting and implementing spectrum plans over time, taking into consideration planning, allocation and spectrum usage in accordance with the characteristics of current and future systems. This implies flexibility in protecting the frequencies approved for military forces in national frequency plans. It also requires periodic evaluation of current and future spectrum needs, aiming at more exact definitions of spectrum resources and more effective ways of sharing spectrum with other, non-governmental users.
Levels of command and control
Authorities at different levels of command and control are responsible for ensuring full spectrum support for their structures. They continually need and seek more equipment that operates on radio frequencies, but they often lack a correct conception of, and the necessary knowledge about, access to the frequency spectrum required for military tasks. The specialized frequency management structures therefore carry out all the necessary administrative, planning and technical work on frequencies.
Prevention of interference
To ensure better, interference-free use of the frequency bands defined for them, military forces, through their corresponding structures, monitor those bands, cooperating and exchanging data with the governmental institutions authorized for spectrum management and with non-governmental users, in order to identify and locate unauthorized transmissions and illegal interference. Spectrum monitoring requires expensive equipment and qualified personnel.
Combined and joint operations
Combined and joint operations are still a major challenge for frequency managers. When two or more forces with different training and organization operate together without appropriate frequency planning, command, control, and communications fail. Combined and joint operations, in alliances or coalitions, depend closely on communications and information systems, and frequency management is one of the main elements of communications planning. In a coalition force involving a large number of countries and military forces, incorrect management and coordination of spectrum bands leads to what is colloquially called "frequency fratricide". Allocating frequencies in such an operation is very difficult. Spectrum usage in these operations has shown, more than ever, the need for coordination between the forces of different countries and with the host country, along with rationality, standardization and interoperability, in accordance with deployment sites, regions, and national and international regulations.
Computer software applications
Effective frequency management is closely tied to computer software applications. Such applications support centralized and decentralized management of frequencies, ensure optimal administration and coordination, and allow requests to be fulfilled in every situation. They provide planning and coordination of frequencies throughout the defined bands and support their effective usage. Of course, such applications have financial costs and require time for preliminary preparation and final implementation.
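The core task such software automates can be illustrated with a toy frequency-assignment routine. This is a minimal sketch, not any specific military tool: the users, the interference graph, and the channel numbering are invented for the example, which greedily gives each user the lowest channel not used by an interfering neighbour (a classic graph-coloring heuristic):

```python
# Conflict-free frequency assignment as greedy graph coloring.
# The interference graph is a made-up example: nodes are radio users,
# edges join users too close to share a frequency.
interferes = {
    "HQ":     ["unit_a", "unit_b"],
    "unit_a": ["HQ", "unit_b"],
    "unit_b": ["HQ", "unit_a", "unit_c"],
    "unit_c": ["unit_b"],
}

def assign_channels(graph):
    assignment = {}
    for node in sorted(graph):            # deterministic order
        used = {assignment[n] for n in graph[node] if n in assignment}
        # lowest-numbered channel not used by any neighbour
        channel = next(c for c in range(len(graph)) if c not in used)
        assignment[node] = channel
    return assignment

print(assign_channels(interferes))
# {'HQ': 0, 'unit_a': 1, 'unit_b': 2, 'unit_c': 0}
```

Real tools add propagation modelling, band restrictions and priorities on top of this kind of core, but the assignment problem itself is the same.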
Policies, guides, procedures
Normal frequency management in military forces is based on policies, guides, procedures, and organizational and technical manuals. Preparing these documents, harmonizing them with international, regional and national regulations, and keeping pace with technological developments is a continuous task for the responsible military structures.
Technical capabilities
A fundamental problem in frequency allocation is the existence of technical, geographical and operational factors that restrict the frequency usage of military forces. Frequency managers have to take into consideration the technical capabilities and limits of the equipment, so that systems function in accordance with operational requirements. In overloaded frequency bands, military forces must anticipate some interference and accept some degradation in the normal operation of their communications systems.
References
Radio spectrum
Military communications | Military spectrum management | [
"Physics",
"Engineering"
] | 1,493 | [
"Telecommunications engineering",
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Military communications"
] |
17,650,716 | https://en.wikipedia.org/wiki/Quasistatic%20loading | In solid mechanics, quasistatic loading refers to loading applied slowly enough that inertial effects are negligible: the structure passes through a sequence of near-equilibrium states, so time and inertial forces play no role in the analysis.
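A common engineering rule of thumb, stated here as an illustrative assumption rather than a universal definition, is that loading may be treated as quasistatic when its duration greatly exceeds the structure's fundamental natural period:

```latex
% Quasistatic rule of thumb: load-application time much longer than
% the fundamental natural period of the structure.
t_{\mathrm{load}} \;\gg\; T_1 \;=\; \frac{2\pi}{\omega_1},
\qquad \omega_1 = \sqrt{k/m}
\quad \text{(single degree-of-freedom idealization)}
```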
References
Solid mechanics | Quasistatic loading | [
"Physics"
] | 38 | [
"Solid mechanics",
"Classical mechanics stubs",
"Mechanics",
"Classical mechanics"
] |
17,652,020 | https://en.wikipedia.org/wiki/Oil-lamp%20clock | Oil-lamp clocks are clocks consisting of a graduated glass reservoir to hold oil - usually whale oil, which burned cleanly and evenly - supplying the fuel for a built-in lamp. As the level in the reservoir dropped, it provided a rough measure of the passage of time.
The principle behind such a time-keeping device is that it measures a quantity that either decreases or increases at a constant rate. Lamps or candles, burning fuel at a steady pace, fit this category, and as a bonus produce useful light. Hourglasses depend on the steady draining of fine sand through a small aperture. Water clocks or clepsydra measure a gain or loss of water by using drops of uniform size and frequency. The Persian fenjaan made use of the constant time it took for the sinking of a floating bowl with a hole in its underside.
It is unknown when or where the oil-lamp clock was first introduced. This clock was mainly used during the mid-18th century.
See also
Candle clock
Water clock
Hourglass
References
Clocks | Oil-lamp clock | [
"Physics",
"Technology",
"Engineering"
] | 209 | [
"Machines",
"Physical quantities",
"Time",
"Time stubs",
"Clocks",
"Measuring instruments",
"Physical systems",
"Spacetime"
] |
17,653,787 | https://en.wikipedia.org/wiki/Silverman%27s%20game | In game theory, Silverman's game is a two-person zero-sum game played on the unit square. It is named for mathematician David Silverman.
It is played by two players on a given set S of positive real numbers. Before play starts, a threshold T and penalty ν are chosen with T > 1 and ν > 0. For example, consider S to be the set of integers from 1 to n, with T = 3 and ν = 2.
Each player chooses an element of S. Suppose player A plays x and player B plays y. Without loss of generality, assume player A chooses the larger number, so x ≥ y. Then the payoff to A is 0 if x = y, 1 if y < x < Ty, and −ν if x ≥ Ty. Thus each player seeks to choose the larger number, but there is a penalty of ν for choosing too large a number.
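The payoff rule reconstructed above is easy to state in code. In this sketch the default threshold and penalty follow the example values, and the convention that the penalty applies when the larger number is at least T times the smaller is an assumption of the reconstruction:

```python
def payoff_to_A(x: float, y: float, T: float = 3.0, nu: float = 2.0) -> float:
    """Payoff to player A when A plays x and B plays y (zero-sum)."""
    hi, lo, sign = (x, y, 1) if x >= y else (y, x, -1)
    if hi == lo:
        return 0.0            # equal choices: no payoff
    if hi < T * lo:
        return sign * 1.0     # larger number wins 1
    return sign * -nu         # too large: larger number pays the penalty nu

assert payoff_to_A(4, 4) == 0
assert payoff_to_A(5, 4) == 1       # 5 < 3*4: A's larger number wins
assert payoff_to_A(13, 4) == -2     # 13 >= 3*4: A is penalized
assert payoff_to_A(4, 13) == 2      # symmetric: B is penalized, A gains
```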
A large number of variants have been studied, where the set S may be finite, countable, or uncountable. Extensions allow the two players to choose from different sets, such as the odd and even integers.
References
Non-cooperative games | Silverman's game | [
"Mathematics"
] | 199 | [
"Game theory",
"Non-cooperative games"
] |
17,654,083 | https://en.wikipedia.org/wiki/Silicon%20monoxide | Silicon monoxide is the chemical compound with the formula SiO where silicon is present in the oxidation state +2. In the vapour phase, it is a diatomic molecule.
It has been detected in stellar objects and has been described as the most common oxide of silicon in the universe.
Solid form
When SiO gas is cooled rapidly, it condenses to form a brown/black polymeric glassy material, (SiO)n, which is available commercially and used to deposit films of SiO. Glassy (SiO)n is air and moisture sensitive.
Oxidation
Its surface readily oxidizes in air at room temperature, giving an SiO2 surface layer that protects the material from further oxidation. However, (SiO)n irreversibly disproportionates into SiO2 and Si in a few hours between 400 °C and 800 °C and very rapidly between 1,000 °C and 1,440 °C, although the reaction does not go to completion.
Production
The first precise report on the formation of SiO was in 1887 by the chemist Charles F. Maybery (1850–1927) at the Case School of Applied Science in Cleveland. Maybery claimed that SiO formed as an amorphous greenish-yellow substance with a vitreous luster when silica was reduced with charcoal in the absence of metals in an electric furnace. The substance was always found at the interface between the charcoal and silica particles.
By investigating some of the chemical properties of the substance, its specific gravity, and a combustion analysis, Maybery deduced that the substance must be SiO. The partial reduction of SiO2 with carbon can be represented as:
SiO2 + C ⇌ SiO + CO
Complete reduction of SiO2 with twice the amount of carbon yields elemental silicon and twice the amount of carbon monoxide. In 1890, the German chemist Clemens Winkler (the discoverer of germanium) was the first to attempt to synthesize SiO by heating silicon dioxide with silicon in a combustion furnace.
SiO2 + Si ⇌ 2 SiO
However, Winkler was not able to produce the monoxide since the temperature of the mixture was only around 1000 °C. The experiment was repeated in 1905 by Henry Noel Potter (1869–1942), a Westinghouse engineer. Using an electric furnace, Potter was able to attain a temperature of 1700 °C and observe the generation of SiO. Potter also investigated the properties and applications of the solid form of SiO.
Gaseous form
Because of the volatility of SiO, silica can be removed from ores or minerals by heating them with silicon to produce gaseous SiO in this manner. However, due to the difficulties associated with accurately measuring its vapor pressure, and because of the dependency on the specifics of the experimental design, various values have been reported in the literature for the vapor pressure of SiO (g). For the pSiO above molten silicon in a quartz (SiO2) crucible at the melting point of silicon, one study yielded a value of 0.002 atm. For the direct vaporization of pure, amorphous SiO solid, 0.001 atm has been reported. For a coating system, at the phase boundary between SiO2 and a silicide, 0.01 atm was reported.
Silica itself, or refractories containing SiO2, can be reduced with H2 or CO at high temperatures, e.g.:
SiO2(s) + H2(g) ⇌ SiO(g) + H2O(g)
As the SiO product volatilizes off (is removed), the equilibrium shifts to the right, resulting in the continued consumption of SiO2. Based on the dependence of the rate of silica weight loss on the gas flow rate normal to the interface, the rate of this reduction appears to be controlled by convective diffusion or mass transfer from the reacting surface.
Gaseous (molecular) form
Silicon monoxide molecules have been trapped in an argon matrix cooled by helium. In these conditions, the SiO bond length is between 148.9 pm and 151 pm. This bond length is similar to the length of the Si=O double bonds (148 pm) in the matrix-isolated linear molecule SiO2 (O=Si=O), suggestive of the absence of a triple bond as in carbon monoxide. However, the SiO triple bond has a calculated bond length of 150 pm and a bond energy of 794 kJ/mol, which are also very close to those reported for SiO. In the carbon analogues, the formal double-bond length of carbon dioxide (116 pm) is also close to the triple-bond length of carbon monoxide (112.8 pm); in light of this, the observed bond length of SiO may be consistent with at least some triple-bond character in the diatomic molecule. The SiO double-bond structure is, notably, an exception to Lewis' octet rule for molecules composed of the light main-group elements, whereas the SiO triple bond satisfies this rule. That anomaly notwithstanding, the observation that monomeric SiO is short-lived and that (SiO)n oligomers with n = 2, 3, 4, 5 are known, all having closed ring structures in which the silicon atoms are connected through bridging oxygen atoms (i.e. each oxygen atom is singly bonded to two silicon atoms; no Si-Si bonds), suggests that the Si=O double-bond structure, with a hypovalent silicon atom, is likely for the monomer.
Condensing molecular SiO in an argon matrix together with fluorine, chlorine or carbonyl sulfide (COS), followed by irradiation with light, produces the planar molecules OSiF2 (with Si-O distance 148 pm) and OSiCl2 (Si-O 149 pm), and the linear molecule OSiS (Si-O 149 pm, Si-S 190 pm).
Matrix-isolated molecular SiO reacts with oxygen atoms generated by microwave discharge to produce molecular SiO2, which has a linear structure.
When metal atoms (such as Na, Al, Pd, Ag, and Au) are co-deposited with SiO, triatomic MSiO molecules are produced, with linear (AlSiO and PdSiO), non-linear (AgSiO and AuSiO), and ring (NaSiO) structures.
Solid (polymeric) form
Potter reported SiO solid as yellowish-brown in color and as being an electrical and thermal insulator. The solid burns in oxygen and decomposes water with the liberation of hydrogen. It dissolves in warm alkali hydroxides and in hydrofluoric acid. Even though Potter reported the heat of combustion of SiO to be 200 to 800 calories higher than that of an equilibrium mixture of Si and SiO2 (which could, arguably, be used as evidence that SiO is a unique chemical compound), some studies have characterized commercially available solid silicon monoxide materials as an inhomogeneous mixture of amorphous SiO2 and amorphous Si, with some chemical bonding at the interface of the Si and SiO2 phases. Recent spectroscopic studies, in agreement with Potter's report, suggest that commercially available solid silicon monoxide materials cannot be considered an inhomogeneous mixture of amorphous SiO2 and amorphous Si.
Interstellar occurrence
Interstellar SiO was first reported in 1971 after detection in the giant molecular cloud Sgr B2. SiO is used as a molecular tracer of shocked gas in protostellar outflows.
References
Inorganic silicon compounds
Oxides
Inorganic polymers | Silicon monoxide | [
"Chemistry"
] | 1,535 | [
"Inorganic compounds",
"Inorganic polymers",
"Oxides",
"Salts",
"Inorganic silicon compounds"
] |
17,654,379 | https://en.wikipedia.org/wiki/Shaft%20alignment | Shaft alignment is the process of aligning two or more shafts with each other to within a tolerated margin. The resulting fault if alignment is not achieved within the demanded specifications is shaft misalignment, which may be offset or angular. Faults can lead to premature wear and damage to systems.
Background
When a driver like an electric motor or a turbine is coupled to a pump, generator, or any other piece of equipment, the shafts of the two pieces must be aligned. Any misalignment increases the stress on the shafts and will almost certainly result in excessive wear and premature breakdown of the equipment. This can be very costly. When the equipment is down, production requiring the equipment may be delayed. Bearings or mechanical seals may be damaged and need to be replaced.
Shaft alignment is carried out on machinery before the machinery is put in service.
Technology
Before shaft alignment can be done, the foundations for the driver and the driven piece must be designed and installed correctly.
Flexible couplings are designed to allow a driver (e.g., electric motor, engine, turbine, hydraulic motor) to be connected to the driven equipment. Flexible couplings use an elastomeric insert to allow a slight degree of misalignment. Flexible couplings can also use shim packs. These couplings are called disc couplings.
Tools used to achieve alignment may be mechanical, optical (e.g., laser shaft alignment), or gyroscope-based. Gyroscopic systems are time-efficient to operate and can also be used when the shafts are far apart (e.g., on marine vessels).
Misalignment
Misalignment may be offset, angular, or both, and it can cause increased vibration and loads on machine parts for which they have not been designed (i.e. improper operation).
Types of misalignment
There are two types of misalignment: offset (also called parallel) misalignment and angular (also called gap or face) misalignment. With offset misalignment, the center lines of both shafts are parallel but offset; with angular misalignment, the shafts are at an angle to each other. Alignment errors can be caused by parallel misalignment, angular misalignment, or a combination of the two (see the sketch after the lists below).
Offset misalignment can be further divided into horizontal and vertical misalignment. Horizontal misalignment is misalignment of the shafts in the horizontal plane and vertical misalignment is misalignment of the shafts in the vertical plane:
Offset horizontal misalignment is where the motor shaft is moved horizontally away from the pump shaft, but both shafts are still in the same horizontal plane and parallel.
Offset vertical misalignment is where the motor shaft is moved vertically away from the pump shaft, but both shafts are still in the same vertical plane and parallel.
Similarly, angular misalignment can be divided into horizontal and vertical misalignment:
Angular horizontal misalignment is where the motor shaft is under an angle with the pump shaft but both shafts are still in the same horizontal plane.
Angular vertical misalignment is where the motor shaft is under an angle with the pump shaft but both shafts are still in the same vertical plane.
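As a minimal numerical illustration of the two misalignment types (the function and its sample readings below are invented for this sketch; real alignment tools work from dial-indicator or laser readings rather than explicit centerline coordinates):

```python
import math

def misalignment(p_driver, q_driver, p_driven, q_driven):
    """Offset and angle between two shaft centerlines, each given by two
    (x, y, z) points with x along the nominal rotation axis."""
    def direction(p, q):
        d = [qi - pi for pi, qi in zip(p, q)]
        norm = math.sqrt(sum(c * c for c in d))
        return [c / norm for c in d]

    u = direction(p_driver, q_driver)
    v = direction(p_driven, q_driven)
    cos_angle = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    angle_deg = math.degrees(math.acos(cos_angle))
    # offset: radial (y, z) separation between the adjacent shaft ends
    offset = math.hypot(p_driven[1] - q_driver[1], p_driven[2] - q_driver[2])
    return offset, angle_deg

# Example: driven shaft raised 0.5 mm (vertical offset), axes parallel.
off, ang = misalignment((0, 0, 0), (100, 0, 0), (100, 0.5, 0), (200, 0.5, 0))
print(f"offset = {off:.2f} mm, angle = {ang:.2f} deg")  # 0.50 mm, 0.00 deg
```

A purely angular fault would give zero offset at the coupling but a non-zero angle, and a combined fault gives both.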
References
Maintenance
Shaft drives | Shaft alignment | [
"Engineering"
] | 715 | [
"Maintenance",
"Mechanical engineering"
] |
17,655,017 | https://en.wikipedia.org/wiki/Tension%20meter | A tension meter is a device used to measure tension in wires, cables, textiles, mechanical belts and more. Meters commonly use a three-roller system in which the material travels through the rollers, deflecting the center roller, which is connected to an analog indicator or, on digital models, a load cell. Single-roll tension sensors and sonic tension meters are other types of tension meter. Tension may also be inferred from the frequency of vibration of the material under stress by solving the vibrating-string equation. Tension meters are available as handheld devices or as equipment for fixed installations; the latter are needed to build a closed-loop tension control system.
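As background for the vibration method: an ideal string of length L and linear density μ under tension T vibrates with fundamental frequency f = (1/(2L))·√(T/μ), so a measured frequency can be inverted for tension. A minimal sketch with illustrative values (not taken from any particular instrument):

```python
def tension_from_frequency(f_hz: float, length_m: float, mu_kg_per_m: float) -> float:
    """Invert the ideal vibrating-string relation f = (1/(2L)) * sqrt(T/mu)."""
    return mu_kg_per_m * (2.0 * length_m * f_hz) ** 2

# Example: 0.5 m free span of wire, 0.01 kg/m, fundamental at 100 Hz.
print(f"{tension_from_frequency(100.0, 0.5, 0.01):.1f} N")  # 100.0 N
```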
References
Dimensional instruments | Tension meter | [
"Physics",
"Mathematics"
] | 133 | [
"Quantity",
"Dimensional instruments",
"Physical quantities",
"Size"
] |
17,655,088 | https://en.wikipedia.org/wiki/Enrico%20Alleva | Enrico Alleva (born 16 August 1953 in Rome, Italy) is an Italian ethologist. He has been president of the Società Italiana di Etologia (Italian Ethological Society) since 2008.
After obtaining his degree in biological sciences at the Sapienza University of Rome (1975) under geneticist Giuseppe Montalenti, Alleva specialized in animal behaviour at the Scuola Normale Superiore di Pisa. Alleva is a member of the scientific councils of the World Wide Fund for Nature, Legambiente, the Stazione Zoologica Anton Dohrn, the Istituto della Enciclopedia Italiana "Giovanni Treccani", the Italian Space Agency, the CNR Department "Scienze della vita", and the Commissione Antartide. He is a corresponding member of the Accademia dei Lincei, the Accademia Medica di Roma, and the Academy of Sciences of the Institute of Bologna. Alleva was awarded the "G. B. Grassi" prize of the Accademia dei Lincei and the Anokhin Medal of the Russian Academy of Medical Sciences.
From 1990 until 2017 Alleva was the director of the Section of Behavioural Neurosciences of the Istituto Superiore di Sanità (Rome); from 2017 to 2018 he directed its Reference Centre for Behavioural Sciences and Mental Health. The Web of Science lists over 300 of his articles in peer-reviewed journals. He is the editor-in-chief of the Annali dell'Istituto Superiore di Sanità. Since 2022 he has been vice-president of the (Italian) Consiglio Superiore di Sanità.
Alleva is also a well-known scientific populariser who has written Il tacchino termostatico (Theoria, 1990), Consigli a un giovane etologo (Theoria, 1994, with Nicoletta Tiliacos), and La mente animale (Einaudi, 2008), and he is often invited onto radio and television shows.
References
External links
https://www.fisna.it/
1953 births
Living people
Italian biologists
Ethologists
Writers from Rome
Italian science writers
Italian academic journal editors
Sapienza University of Rome alumni | Enrico Alleva | [
"Biology"
] | 477 | [
"Ethology",
"Behavior",
"Ethologists"
] |
17,655,150 | https://en.wikipedia.org/wiki/Metallophilic%20interaction | In chemistry, a metallophilic interaction is defined as a type of non-covalent attraction between heavy metal atoms. The atoms are often within Van der Waals distance of each other and are about as strong as hydrogen bonds. The effect can be intramolecular or intermolecular. Intermolecular metallophilic interactions can lead to formation of supramolecular assemblies whose properties vary with the choice of element and oxidation states of the metal atoms and the attachment of various ligands to them.
The nature of such interactions remains the subject of vigorous debate with recent studies emphasizing that the metallophilic interaction is repulsive due to strong metal-metal Pauli exclusion principle repulsion.
Nature of the interaction
Previously, this type of interaction was considered to be enhanced by relativistic effects. A major contributor is electron correlation of the closed-shell components, which is unusual because closed-shell atoms generally have negligible interaction with one another at the distances observed for the metal atoms. As a trend, the effect becomes larger moving down a periodic table group, for example, from copper to silver to gold, in keeping with increased relativistic effects. Observations and theory find that, on average, 28% of the binding energy in gold–gold interactions can be attributed to relativistic expansion of the gold d orbitals.
Recently, the relativistic effect was found to enhance the intermolecular M-M Pauli repulsion of the closed-shell organometallic complexes. At close M–M distances, metallophilicity is repulsive in nature due to strong M–M Pauli repulsion. The relativistic effect facilitates (n + 1)s-nd and (n + 1)p-nd orbital hybridization of the metal atom, where (n + 1)s-nd hybridization induces strong M–M Pauli repulsion and repulsive M–M orbital interaction, and (n + 1)p-nd hybridization suppresses M–M Pauli repulsion. This model is validated by both DFT (density functional theory) and high-level CCSD(T) (coupled-cluster singles and doubles with perturbative triples) computations.
An important and exploitable property of aurophilic interactions relevant to their supramolecular chemistry is that, while both inter- and intramolecular interactions are possible, intermolecular aurophilic linkages are comparatively weak and the gold–gold bonds are easily broken by solvation; most complexes that exhibit intramolecular aurophilic interactions retain such moieties in solution. One way of probing the strength of particular intermolecular metallophilic interactions is to use a competing solvent and examine how it interferes with supramolecular properties. For example, when various solvents are added to gold(I) nanoparticles whose luminescence is attributed to Au–Au interactions, the luminescence decreases as the solvent disrupts the metallophilic interactions.
Applications
The polymerization of metal atoms can lead to the formation of long chains or nucleated clusters. Gold nanoparticles formed from chains of gold(I) complexes linked by aurophilic interactions often give rise to intense luminescence in the visible region of the spectrum.
Chains of Pd(II)–Pd(I) and Pt(II)–Pd(I) complexes have been explored as potential molecular wires.
See also
Metal aromaticity
References
Chemical bonding | Metallophilic interaction | [
"Physics",
"Chemistry",
"Materials_science"
] | 733 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
17,655,204 | https://en.wikipedia.org/wiki/Automatic%20semigroup | In mathematics, an automatic semigroup is a finitely generated semigroup equipped with several regular languages over an alphabet representing a generating set. One of these languages determines "canonical forms" for the elements of the semigroup, the other languages determine if two canonical forms represent elements that differ by multiplication by a generator.
Formally, let S be a semigroup and A be a finite set of generators for S. Then an automatic structure for S with respect to A consists of a regular language L over A such that every element of S has at least one representative in L, and such that for each a ∈ A ∪ {ε}, the relation L_a consisting of pairs (u, v), with u, v ∈ L and ua = v in S, is regular, viewed as a subset of (A# × A#)*. Here A# is A augmented with a padding symbol.
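As a sanity check of the definition, the free semigroup provides the simplest automatic structure (a sketch in the notation reconstructed above; the automaton argument is only outlined in the comments):

```latex
% Simplest example: the free semigroup A^+ on A = {a, b}, where every
% word is its own canonical form.
S = A^{+}, \qquad L = A^{+}, \qquad
L_{a} = \{\, (u,\, ua) : u \in A^{+} \,\} \quad \text{for each } a \in A.
% Each L_a is regular: an automaton reading padded pairs checks that the
% two components agree letter by letter until the first component ends,
% and that the second component then contains exactly one further letter, a.
```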
The concept of an automatic semigroup was generalized from that of an automatic group by Campbell et al. (2001).
Unlike automatic groups (see Epstein et al. 1992), a semigroup may have an automatic structure with respect to one generating set, but not with respect to another. However, if an automatic semigroup has an identity, then it has an automatic structure with respect to any generating set (Duncan et al. 1999).
Decision problems
Like automatic groups, automatic semigroups have a word problem solvable in quadratic time. Kambites & Otto (2006) showed that it is undecidable whether an element of an automatic monoid possesses a right inverse.
Cain (2006) proved that both cancellativity and left-cancellativity are undecidable for automatic semigroups. On the other hand, right-cancellativity is decidable for automatic semigroups (Silva & Steinberg 2004).
Geometric characterization
Automatic structures for groups have an elegant geometric characterization called the fellow traveller property (Epstein et al. 1992, ch. 2). Automatic structures for semigroups possess the fellow traveller property but are not in general characterized by it (Campbell et al. 2001). However, the characterization can be generalized to certain 'group-like' classes of semigroups, notably completely simple semigroups (Campbell et al. 2002) and group-embeddable semigroups (Cain et al. 2006).
Examples of automatic semigroups
Bicyclic monoid
Finitely generated subsemigroups of a free semigroup
References
Further reading
Semigroup theory
Computability theory | Automatic semigroup | [
"Mathematics"
] | 477 | [
"Mathematical structures",
"Mathematical logic",
"Fields of abstract algebra",
"Algebraic structures",
"Computability theory",
"Semigroup theory"
] |
17,656,037 | https://en.wikipedia.org/wiki/Log%20ASCII%20standard | Log ASCII standard (LAS) is a standard file format common in the oil-and-gas and water well industries to store well log information. Well logging is used to investigate and characterize the subsurface stratigraphy in a well.
A single LAS file can only contain data for one well, but it can contain any number of datasets (called "curves") from that well. Common curves found in a LAS file may include natural gamma, travel time, or resistivity logs.
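As a sketch of typical usage, the snippet below reads a LAS file with the third-party lasio library (assuming it is installed, e.g. via pip install lasio); the file name "example.las" and the curve mnemonic "GR" are illustrative assumptions.

```python
import lasio  # third-party LAS reader/writer

las = lasio.read("example.las")  # one LAS file describes a single well
print(las.keys())                # curve mnemonics, e.g. ['DEPT', 'GR', ...]

gamma = las["GR"]                # one curve ("GR" = natural gamma) as an array
depth = las["DEPT"]              # the index curve, typically depth
print(depth[0], gamma[0])        # first sample of the log
```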
External links
Canadian Well Logging Society: LAS (Log ASCII Standard), accessed 23 December 2014.
US Geological Survey: Log ASCII Standard (LAS) files for geophysical wireline well logs and their application to geologic cross sections through the central Appalachian basin, accessed 24 January 2009.
Log I/O Java and .Net library for accessing LAS files and other common well log formats like DLIS, LIS, BIT, SPWLA and CSV
Ruby LAS Reader Ruby library for accessing LAS (Log ASCII Standard) files
Computer file formats
Well logging | Log ASCII standard | [
"Engineering"
] | 220 | [
"Petroleum engineering",
"Well logging"
] |
17,656,561 | https://en.wikipedia.org/wiki/Black%E2%80%93Scholes%20equation | In mathematical finance, the Black–Scholes equation, also called the Black–Scholes–Merton equation, is a partial differential equation (PDE) governing the price evolution of derivatives under the Black–Scholes model. Broadly speaking, the term may refer to a similar PDE that can be derived for a variety of options, or more generally, derivatives.
Consider a stock paying no dividends. Now construct any derivative that has a fixed maturation time in the future, and at maturation, it has payoff that depends on the values taken by the stock at that moment (such as European call or put options). Then the price of the derivative satisfies

$$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0,$$

where $V(S, t)$ is the price of the option as a function of stock price S and time t, r is the risk-free interest rate, and $\sigma$ is the volatility of the stock.
The key financial insight behind the equation is that, under the model assumption of a frictionless market, one can perfectly hedge the option by buying and selling the underlying asset in just the right way and consequently "eliminate risk". This hedge, in turn, implies that there is only one right price for the option, as returned by the Black–Scholes formula.
Financial interpretation
The equation has a concrete interpretation that is often used by practitioners and is the basis for the common derivation given in the next subsection. The equation can be rewritten in the form:

$$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} = rV - rS\frac{\partial V}{\partial S}$$

The left-hand side consists of a "time decay" term, the change in derivative value with respect to time, called theta, and a term involving the second spatial derivative gamma, the convexity of the derivative value with respect to the underlying value. The right-hand side is the riskless return from a long position in the derivative and a short position consisting of $\frac{\partial V}{\partial S}$ shares of the underlying asset.
Black and Scholes' insight was that the portfolio represented by the right-hand side is riskless: thus the equation says that the riskless return over any infinitesimal time interval can be expressed as the sum of theta and a term incorporating gamma. For an option, theta is typically negative, reflecting the loss in value due to having less time for exercising the option (for a European call on an underlying without dividends, it is always negative). Gamma is typically positive and so the gamma term reflects the gains in holding the option. The equation states that over any infinitesimal time interval the loss from theta and the gain from the gamma term must offset each other so that the result is a return at the riskless rate.
From the viewpoint of the option issuer, e.g. an investment bank, the gamma term is the cost of hedging the option. (Since gamma is the greatest when the spot price of the underlying is near the strike price of the option, the seller's hedging costs are the greatest in that circumstance.)
Derivation
Per the model assumptions above, the price of the underlying asset (typically a stock) follows a geometric Brownian motion. That is

$$dS = \mu S \, dt + \sigma S \, dW,$$

where W is a stochastic variable (Brownian motion). Note that W, and consequently its infinitesimal increment dW, represents the only source of uncertainty in the price history of the stock. Intuitively, W(t) is a process that "wiggles up and down" in such a random way that its expected change over any time interval is 0. (In addition, its variance over time T is equal to T; see Wiener process); a good discrete analogue for W is a simple random walk. Thus the above equation states that the infinitesimal rate of return on the stock has an expected value of $\mu \, dt$ and a variance of $\sigma^2 \, dt$.
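As an illustration, the following NumPy sketch (all parameter values are assumptions, not from the article) simulates the discrete analogue of this process and checks that the one-step rate of return has mean close to $\mu \, dt$ and variance close to $\sigma^2 \, dt$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, S0 = 0.05, 0.2, 100.0      # drift, volatility, initial price (assumed)
dt, n_steps, n_paths = 1 / 252, 252, 100_000

# dW increments: normal with mean 0 and variance dt
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
# Exact GBM step: S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma dW)
log_increments = (mu - 0.5 * sigma**2) * dt + sigma * dW
S = S0 * np.exp(np.cumsum(log_increments, axis=1))

one_step_return = S[:, 0] / S0 - 1.0  # rate of return over the first step
print(one_step_return.mean(), mu * dt)        # both ~ 0.000198
print(one_step_return.var(), sigma**2 * dt)   # both ~ 0.000159
```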
The payoff of an option $V(S, T)$ (or any derivative contingent on stock $S$) at maturity is known. To find its value at an earlier time we need to know how $V$ evolves as a function of $S$ and $t$. By Itô's lemma for two variables we have

$$dV = \left(\frac{\partial V}{\partial t} + \mu S \frac{\partial V}{\partial S} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}\right) dt + \sigma S \frac{\partial V}{\partial S} \, dW$$
Replacing the differentials with deltas in the equations for dS and dV gives:

$$\Delta S = \mu S \, \Delta t + \sigma S \, \Delta W$$
$$\Delta V = \left(\frac{\partial V}{\partial t} + \mu S \frac{\partial V}{\partial S} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}\right) \Delta t + \sigma S \frac{\partial V}{\partial S} \, \Delta W$$
Now consider a portfolio consisting of a short option and $\frac{\partial V}{\partial S}$ long shares at time $t$. The value of these holdings is

$$\Pi = -V + \frac{\partial V}{\partial S} S$$
Over the time period $\Delta t$, the total profit or loss from changes in the values of the holdings is:

$$\Delta \Pi = -\Delta V + \frac{\partial V}{\partial S} \, \Delta S$$
Substituting $\Delta S$ and $\Delta V$ into the expression for $\Delta \Pi$:

$$\Delta \Pi = \left(-\frac{\partial V}{\partial t} - \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}\right) \Delta t$$
Note that the $\Delta W$ term has vanished. Thus uncertainty has been eliminated and the portfolio is effectively riskless, i.e. a delta-hedge. The rate of return on this portfolio must be equal to the rate of return on any other riskless instrument; otherwise, there would be opportunities for arbitrage. Now assuming the risk-free rate of return is $r$ we must have over the time period $\Delta t$

$$r \Pi \, \Delta t = \Delta \Pi$$
If we now substitute our formulas for $\Delta \Pi$ and $\Pi$ we obtain:

$$\left(-\frac{\partial V}{\partial t} - \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}\right) \Delta t = r \left(-V + \frac{\partial V}{\partial S} S\right) \Delta t$$
Simplifying, we arrive at the Black–Scholes partial differential equation:

$$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0$$
With the assumptions of the Black–Scholes model, this second order partial differential equation holds for any type of option as long as its price function $V$ is twice differentiable with respect to $S$ and once with respect to $t$.
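As a sanity check, the sketch below (an illustration, not part of the original derivation) prices a European call with the standard closed-form formula, approximates theta, delta, and gamma by central finite differences, and verifies that the PDE residual is near zero; all parameter values are assumptions.

```python
import numpy as np
from scipy.stats import norm

r, sigma, K, T = 0.05, 0.2, 100.0, 1.0   # assumed market/contract parameters

def call_price(S, t):
    """Closed-form Black-Scholes price of a European call at time t < T."""
    tau = T - t
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

S, t, h = 110.0, 0.3, 1e-4
theta = (call_price(S, t + h) - call_price(S, t - h)) / (2 * h)
delta = (call_price(S + h, t) - call_price(S - h, t)) / (2 * h)
gamma = (call_price(S + h, t) - 2 * call_price(S, t) + call_price(S - h, t)) / h**2

# Black-Scholes PDE: V_t + 0.5 sigma^2 S^2 V_SS + r S V_S - r V = 0
residual = theta + 0.5 * sigma**2 * S**2 * gamma + r * S * delta - r * call_price(S, t)
print(residual)   # ~ 0 up to finite-difference error
```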
Alternative derivation
Here is an alternative derivation that can be utilized in situations where it is initially unclear what the hedging portfolio should be. (For a reference, see 6.4 of Shreve vol II).
In the Black–Scholes model, assuming we have picked the risk-neutral probability measure, the underlying stock price S(t) is assumed to evolve as a geometric Brownian motion:

$$\frac{dS(t)}{S(t)} = r \, dt + \sigma \, dW(t)$$
Since this stochastic differential equation (SDE) shows the stock price evolution is Markovian, any derivative on this underlying is a function of time t and the stock price at the current time, S(t). Then an application of Itô's lemma gives an SDE for the discounted derivative process $e^{-rt} V(t, S(t))$, which should be a martingale. In order for that to hold, the drift term must be zero, which implies the Black–Scholes PDE.
This derivation is basically an application of the Feynman–Kac formula and can be attempted whenever the underlying asset(s) evolve according to given SDE(s).
Solving methods
Once the Black–Scholes PDE, with boundary and terminal conditions, is derived for a derivative, the PDE can be solved numerically using standard methods of numerical analysis, such as a type of finite difference method. In certain cases, it is possible to solve for an exact formula, such as in the case of a European call, which was done by Black and Scholes.
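The following sketch illustrates one such standard method, an explicit finite-difference scheme marching backwards from the terminal payoff of a European call; the grid sizes, parameters, and boundary choices are assumptions, and the explicit scheme is stable only for a sufficiently small time step.

```python
import numpy as np

r, sigma, K, T = 0.05, 0.2, 100.0, 1.0       # assumed parameters
S_max, n_S, n_t = 300.0, 300, 20_000         # grid; dt must satisfy stability
dS, dt = S_max / n_S, T / n_t
S = np.linspace(0.0, S_max, n_S + 1)

V = np.maximum(S - K, 0.0)                   # terminal condition at t = T
for step in range(1, n_t + 1):
    tau = step * dt                          # time remaining after this step
    V_S = (V[2:] - V[:-2]) / (2 * dS)        # central difference (delta)
    V_SS = (V[2:] - 2 * V[1:-1] + V[:-2]) / dS**2   # second difference (gamma)
    # Backward step: V(t - dt) = V(t) + dt (0.5 sigma^2 S^2 V_SS + r S V_S - r V)
    V[1:-1] += dt * (0.5 * sigma**2 * S[1:-1]**2 * V_SS
                     + r * S[1:-1] * V_S - r * V[1:-1])
    V[0] = 0.0                               # a call is worthless at S = 0
    V[-1] = S_max - K * np.exp(-r * tau)     # deep in-the-money asymptote

print(np.interp(100.0, S, V))                # ~10.45, the closed-form value
```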
The solution is conceptually simple. Since in the Black–Scholes model, the underlying stock price follows a geometric Brownian motion, the distribution of $S_T$, conditional on its price $S_t$ at time $t$, is a log-normal distribution. Then the price of the derivative is just the discounted expected payoff $e^{-r(T-t)} \, \mathbb{E}\left[\text{payoff}(S_T) \mid S_t\right]$, which may be computed analytically when the payoff function is analytically tractable, or numerically if not.
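A minimal Monte Carlo sketch of the numerical route (parameter values assumed): since $S_T$ is log-normal under the risk-neutral measure, it can be sampled directly, and the discounted average payoff estimates the call price.

```python
import numpy as np

rng = np.random.default_rng(1)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # assumed parameters
n = 1_000_000

# Sample S_T directly from its log-normal distribution (risk-neutral drift r)
Z = rng.standard_normal(n)
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# Discounted expected payoff of a European call
price = np.exp(-r * T) * np.maximum(S_T - K, 0.0).mean()
print(price)   # ~10.45, matching the closed-form Black-Scholes value
```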
To do this for a call option, recall the PDE above has boundary conditions

$$C(0, t) = 0 \text{ for all } t$$
$$C(S, t) \to S - K e^{-r(T - t)} \text{ as } S \to \infty$$
$$C(S, T) = \max\{S - K, \, 0\}$$
The last condition gives the value of the option at the time that the option matures. Other conditions are possible as S goes to 0 or infinity. For example, common conditions utilized in other situations are to choose delta to vanish as S goes to 0 and gamma to vanish as S goes to infinity; these will give the same formula as the conditions above (in general, differing boundary conditions will give different solutions, so some financial insight should be utilized to pick suitable conditions for the situation at hand).
The solution of the PDE gives the value of the option at any earlier time, $t < T$. To solve the PDE we recognize that it is a Cauchy–Euler equation which can be transformed into a diffusion equation by introducing the change-of-variable transformation

$$\tau = T - t, \qquad u = C e^{r\tau}, \qquad x = \ln\left(\frac{S}{K}\right) + \left(r - \frac{1}{2}\sigma^2\right)\tau$$
Then the Black–Scholes PDE becomes a diffusion equation

$$\frac{\partial u}{\partial \tau} = \frac{1}{2}\sigma^2 \frac{\partial^2 u}{\partial x^2}$$
The terminal condition $C(S, T) = \max\{S - K, \, 0\}$ now becomes an initial condition

$$u(x, 0) = u_0(x) := K\left(e^{\max\{x, \, 0\}} - 1\right) = K\left(e^x - 1\right)H(x),$$

where H(x) is the Heaviside step function. The Heaviside function corresponds to enforcement of the boundary data in the S, t coordinate system that requires when t = T, $C(S, T) = 0$ for all $S < K$,
assuming both S, K > 0. With this assumption, it is equivalent to the max function over all x in the real numbers, with the exception of x = 0. The equality above between the max function and the Heaviside function is in the sense of distributions because it does not hold for x = 0. Though subtle, this is important because the Heaviside function need not be finite at x = 0, or even defined for that matter. For more on the value of the Heaviside function at x = 0, see the section "Zero Argument" in the article Heaviside step function.
Using the standard convolution method for solving a diffusion equation given an initial value function, u(x, 0), we have

$$u(x, \tau) = \frac{1}{\sigma\sqrt{2\pi\tau}} \int_{-\infty}^{\infty} u_0(y) \exp\left(-\frac{(x - y)^2}{2\sigma^2\tau}\right) dy,$$
which, after some manipulation, yields

$$u(x, \tau) = K e^{x + \frac{1}{2}\sigma^2\tau} N(d_+) - K N(d_-),$$

where $N(\cdot)$ is the standard normal cumulative distribution function and

$$d_+ = \frac{x}{\sigma\sqrt{\tau}} + \sigma\sqrt{\tau}, \qquad d_- = \frac{x}{\sigma\sqrt{\tau}}$$
These are the same solutions (up to time translation) that were obtained by Fischer Black in 1976.
Reverting to the original set of variables yields the above stated solution to the Black–Scholes equation.
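As a numerical check of this reversion (all parameter values assumed), the sketch below evaluates $u(x, \tau)$ in the transformed coordinates and undoes the change of variables, recovering the standard Black–Scholes call price.

```python
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 110.0, 100.0, 0.05, 0.2, 1.0   # assumed parameters
tau = T                                   # pricing at t = 0, so tau = T - t = T
x = np.log(S0 / K) + (r - 0.5 * sigma**2) * tau     # the change of variables

# Diffusion-equation solution u(x, tau) from the convolution above
d_plus = x / (sigma * np.sqrt(tau)) + sigma * np.sqrt(tau)
d_minus = x / (sigma * np.sqrt(tau))
u = K * np.exp(x + 0.5 * sigma**2 * tau) * norm.cdf(d_plus) - K * norm.cdf(d_minus)
C_from_u = np.exp(-r * tau) * u           # revert: C = e^{-r tau} u

# Standard Black-Scholes call formula for comparison
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
d2 = d1 - sigma * np.sqrt(tau)
C_direct = S0 * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)
print(C_from_u, C_direct)                 # the two values agree
```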
The asymptotic condition can now be realized:

$$u(x, \tau) \to K e^{x + \frac{1}{2}\sigma^2\tau} \quad \text{as } x \to \infty,$$

which gives simply S when reverting to the original coordinates.
See also
Bachelier model – uses arithmetic Brownian motion instead of geometric
References
Mathematical finance
Financial models
Partial differential equations | Black–Scholes equation | [
"Mathematics"
] | 1,872 | [
"Applied mathematics",
"Mathematical finance"
] |