In the mathematical field of order theory, an ultrafilter on a given partially ordered set (or "poset") $P$ is a certain subset of $P$, namely a maximal filter on $P$; that is, a proper filter on $P$ that cannot be enlarged to a bigger proper filter on $P$. If $X$ is an arbitrary set, its power set $\mathcal{P}(X)$, ordered by set inclusion, is always a Boolean algebra and hence a poset, and ultrafilters on $\mathcal{P}(X)$ are usually called ultrafilters on the set $X$.[note 1] An ultrafilter on a set $X$ may be considered as a finitely additive 0-1-valued measure on $\mathcal{P}(X)$. In this view, every subset of $X$ is either considered "almost everything" (has measure 1) or "almost nothing" (has measure 0), depending on whether it belongs to the given ultrafilter or not.[1]: §4 Ultrafilters have many applications in set theory, model theory, topology[2]: 186 and combinatorics.[3]

In order theory, an ultrafilter is a subset of a partially ordered set that is maximal among all proper filters. This implies that any filter that properly contains an ultrafilter has to be equal to the whole poset. Formally, if $P$ is a set partially ordered by $\leq$, then a subset $F \subseteq P$ is an ultrafilter on $P$ if $F$ is a proper filter on $P$ and no proper filter on $P$ strictly contains $F$.

Every ultrafilter falls into exactly one of two categories: principal or free. A principal (or fixed, or trivial) ultrafilter is a filter containing a least element. Consequently, each principal ultrafilter is of the form $F_p = \{x : p \leq x\}$ for some element $p$ of the given poset. In this case $p$ is called the principal element of the ultrafilter. Any ultrafilter that is not principal is called a free (or non-principal) ultrafilter. For arbitrary $p$, the set $F_p$ is a filter, called the principal filter at $p$; it is a principal ultrafilter only if it is maximal.

For ultrafilters on a power set $\mathcal{P}(X)$, a principal ultrafilter consists of all subsets of $X$ that contain a given element $x \in X$. Each ultrafilter on $\mathcal{P}(X)$ that is also a principal filter is of this form.[2]: 187 Therefore, an ultrafilter $U$ on $\mathcal{P}(X)$ is principal if and only if it contains a finite set.[note 2] If $X$ is infinite, an ultrafilter $U$ on $\mathcal{P}(X)$ is hence non-principal if and only if it contains the Fréchet filter of cofinite subsets of $X$.[note 3][4]: Proposition 3 If $X$ is finite, every ultrafilter is principal.[2]: 187 If $X$ is infinite then the Fréchet filter is not an ultrafilter on the power set of $X$, but it is an ultrafilter on the finite–cofinite algebra of $X$.

Every filter on a Boolean algebra (or more generally, any subset with the finite intersection property) is contained in an ultrafilter (see ultrafilter lemma), and free ultrafilters therefore exist, but the proofs involve the axiom of choice (AC) in the form of Zorn's lemma.
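To make the maximality definition concrete, here is a brute-force sketch (the helper names are mine, not from the article): it enumerates the proper filters of a small Boolean algebra, the divisors of 30 under divisibility, and picks out the maximal ones. They turn out to be exactly the principal filters at the atoms 2, 3, and 5, matching the fact that every ultrafilter on a finite poset is principal.

```python
from itertools import combinations

# The divisors of 30 under divisibility form a Boolean algebra, so its
# ultrafilters should be exactly the principal filters at the atoms 2, 3, 5.
P = [1, 2, 3, 5, 6, 10, 15, 30]
leq = lambda a, b: b % a == 0          # a <= b  iff  a divides b

def principal_filter(p):
    """F_p = {x in P : p <= x}."""
    return frozenset(x for x in P if leq(p, x))

def is_proper_filter(F):
    if not F or set(F) == set(P):      # non-empty and proper
        return False
    for x in F:                        # upward closed
        if any(leq(x, y) and y not in F for y in P):
            return False
    for x in F:                        # downward directed
        for y in F:
            if not any(leq(z, x) and leq(z, y) for z in F):
                return False
    return True

filters = [frozenset(S) for r in range(1, len(P))
           for S in combinations(P, r) if is_proper_filter(frozenset(S))]

def is_ultrafilter(F):
    return not any(F < G for G in filters)   # maximal among proper filters

print(sorted(sorted(F) for F in filters if is_ultrafilter(F)))
# -> the principal filters at 2, 3 and 5, e.g. [2, 6, 10, 30]
```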
On the other hand, the statement that every filter is contained in an ultrafilter does not imply AC. Indeed, it is equivalent to the Boolean prime ideal theorem (BPIT), a well-known intermediate point between the axioms of Zermelo–Fraenkel set theory (ZF) and the ZF theory augmented by the axiom of choice (ZFC). In general, proofs involving the axiom of choice do not produce explicit examples of free ultrafilters, though it is possible to find explicit examples in some models of ZFC; for example, Gödel showed that this can be done in the constructible universe, where one can write down an explicit global choice function. In ZF without the axiom of choice, it is possible that every ultrafilter is principal.[5]

An important special case of the concept occurs if the considered poset is a Boolean algebra. In this case, ultrafilters are characterized by containing, for each element $x$ of the Boolean algebra, exactly one of the elements $x$ and $\lnot x$ (the latter being the Boolean complement of $x$): if $P$ is a Boolean algebra and $F$ is a proper filter on $P$, then the following statements are equivalent: (1) $F$ is an ultrafilter on $P$; (2) for each $x \in P$, $F$ contains exactly one of $x$ and $\lnot x$. A proof that 1. and 2. are equivalent is also given in (Burris, Sankappanavar, 2012, Corollary 3.13, p. 133).[6] Moreover, ultrafilters on a Boolean algebra can be related to maximal ideals and homomorphisms to the 2-element Boolean algebra {true, false} (also known as 2-valued morphisms).

Given an arbitrary set $X$, its power set $\mathcal{P}(X)$, ordered by set inclusion, is always a Boolean algebra; hence the results of the above section apply. An (ultra)filter on $\mathcal{P}(X)$ is often called just an "(ultra)filter on $X$".[note 1] Given an arbitrary set $X$, an ultrafilter on $\mathcal{P}(X)$ is a set $\mathcal{U}$ consisting of subsets of $X$ such that: (1) the empty set is not an element of $\mathcal{U}$; (2) if $A \in \mathcal{U}$ and $A \subseteq B \subseteq X$, then $B \in \mathcal{U}$; (3) if $A, B \in \mathcal{U}$, then $A \cap B \in \mathcal{U}$; and (4) for every $A \subseteq X$, either $A$ or its complement $X \setminus A$ is an element of $\mathcal{U}$.

Equivalently, a family $\mathcal{U}$ of subsets of $X$ is an ultrafilter if and only if for any finite collection $\mathcal{F}$ of subsets of $X$, there is some $x \in X$ such that $\mathcal{U} \cap \mathcal{F} = F_x \cap \mathcal{F}$, where $F_x = \{Y \subseteq X : x \in Y\}$ is the principal ultrafilter seeded by $x$. In other words, an ultrafilter may be seen as a family of sets which "locally" resembles a principal ultrafilter.

An equivalent form of a given $\mathcal{U}$ is a 2-valued morphism, a function $m$ on $\mathcal{P}(X)$ defined as $m(A) = 1$ if $A$ is an element of $\mathcal{U}$ and $m(A) = 0$ otherwise. Then $m$ is finitely additive, and hence a content on $\mathcal{P}(X)$, and every property of elements of $X$ is either true almost everywhere or false almost everywhere. However, $m$ is usually not countably additive, and hence does not define a measure in the usual sense. For a filter $\mathcal{F}$ that is not an ultrafilter, one can define $m(A) = 1$ if $A \in \mathcal{F}$ and $m(A) = 0$ if $X \setminus A \in \mathcal{F}$, leaving $m$ undefined elsewhere.[1]
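As a quick illustration of this measure-theoretic view, the sketch below (illustrative only; the set `X` and the choice of a principal ultrafilter are assumptions made for the example) builds the 2-valued morphism $m$ of a principal ultrafilter on a small set and checks finite additivity on all disjoint pairs.

```python
from itertools import chain, combinations

# The 0-1 "measure" induced by the principal ultrafilter at the point 2.
X = frozenset({0, 1, 2, 3})
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(X, r) for r in range(len(X) + 1))]
U = [A for A in subsets if 2 in A]          # principal ultrafilter at 2

def m(A):
    """2-valued morphism: 1 on members of the ultrafilter, 0 otherwise."""
    return 1 if frozenset(A) in U else 0

# Finite additivity: m(A | B) == m(A) + m(B) for disjoint A, B.
assert all(m(A | B) == m(A) + m(B)
           for A in subsets for B in subsets if not A & B)
print("finitely additive:", True)
```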
Ultrafilters on power sets are useful in topology, especially in relation to compact Hausdorff spaces, and in model theory in the construction of ultraproducts and ultrapowers. Every ultrafilter on a compact Hausdorff space converges to exactly one point. Likewise, ultrafilters on Boolean algebras play a central role in Stone's representation theorem. In set theory, ultrafilters are used to show that the axiom of constructibility is incompatible with the existence of a measurable cardinal $\kappa$. This is proved by taking the ultrapower of the set-theoretical universe modulo a $\kappa$-complete, non-principal ultrafilter.[7]

The set $G$ of all ultrafilters of a poset $P$ can be topologized in a natural way, one that is in fact closely related to the above-mentioned representation theorem. For any element $x$ of $P$, let $D_x = \{U \in G : x \in U\}$. This is most useful when $P$ is again a Boolean algebra, since in this situation the set of all $D_x$ is a base for a compact Hausdorff topology on $G$. Especially, when considering the ultrafilters on a power set $\mathcal{P}(S)$, the resulting topological space is the Stone–Čech compactification of a discrete space of cardinality $|S|$.

The ultraproduct construction in model theory uses ultrafilters to produce a new model starting from a sequence of $X$-indexed models; for example, the compactness theorem can be proved this way. In the special case of ultrapowers, one gets elementary extensions of structures. For example, in nonstandard analysis, the hyperreal numbers can be constructed as an ultraproduct of the real numbers, extending the domain of discourse from real numbers to sequences of real numbers. This sequence space is regarded as a superset of the reals by identifying each real with the corresponding constant sequence. To extend the familiar functions and relations (e.g., + and <) from the reals to the hyperreals, the natural idea is to define them pointwise. But this would lose important logical properties of the reals; for example, pointwise < is not a total ordering. So instead the functions and relations are defined "pointwise modulo" $U$, where $U$ is an ultrafilter on the index set of the sequences; by Łoś' theorem, this preserves all properties of the reals that can be stated in first-order logic. If $U$ is nonprincipal, then the extension thereby obtained is nontrivial.

In geometric group theory, non-principal ultrafilters are used to define the asymptotic cone of a group. This construction yields a rigorous way of looking at the group from infinity, that is, at the large-scale geometry of the group. Asymptotic cones are particular examples of ultralimits of metric spaces. Gödel's ontological proof of God's existence uses as an axiom that the set of all "positive properties" is an ultrafilter.
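To see what "pointwise modulo $U$" means operationally, here is a toy sketch (a principal ultrafilter stands in for $U$, since free ultrafilters cannot be exhibited explicitly): two sequences are identified, or compared by <, according to whether the index set where the condition holds belongs to $U$.

```python
# Toy model of "pointwise modulo U" on the index set {0,...,4}. A free
# ultrafilter would be needed for genuine hyperreals; with a principal U
# this just recovers the factor at the chosen index, illustrating why
# ultraproducts modulo principal ultrafilters are trivial.
INDEX = range(5)
def in_U(s):                      # principal ultrafilter at index 3
    return 3 in s

def eq_mod_U(a, b):
    return in_U({i for i in INDEX if a[i] == b[i]})

def lt_mod_U(a, b):
    return in_U({i for i in INDEX if a[i] < b[i]})

x = [7, 7, 7, 7, 7]               # the constant sequence embedding the real 7
y = [0, 9, 9, 7, 0]
print(eq_mod_U(x, y))             # True: they agree on a U-large set ({3})
print(lt_mod_U([1]*5, x))         # True: 1 < 7 everywhere, hence U-largely
```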
In social choice theory, non-principal ultrafilters are used to define a rule (called a social welfare function) for aggregating the preferences of infinitely many individuals. Contrary to Arrow's impossibility theorem for finitely many individuals, such a rule satisfies the conditions (properties) that Arrow proposed (for example, Kirman and Sondermann, 1972).[8] Mihara (1997,[9] 1999)[10] shows, however, that such rules are practically of limited interest to social scientists, since they are non-algorithmic or non-computable.
https://en.wikipedia.org/wiki/Ultrafilter
In the mathematical field of set theory, an ultrafilter on a set $X$ is a maximal filter on the set $X$. In other words, it is a collection of subsets of $X$ that satisfies the definition of a filter on $X$ and that is maximal with respect to inclusion, in the sense that there does not exist a strictly larger collection of subsets of $X$ that is also a filter. (In the above, by definition a filter on a set does not contain the empty set.) Equivalently, an ultrafilter on the set $X$ can also be characterized as a filter on $X$ with the property that for every subset $A$ of $X$, either $A$ or its complement $X \setminus A$ belongs to the ultrafilter.

Ultrafilters on sets are an important special instance of ultrafilters on partially ordered sets, where the partially ordered set consists of the power set $\wp(X)$ and the partial order is subset inclusion $\subseteq$. This article deals specifically with ultrafilters on a set and does not cover the more general notion.

There are two types of ultrafilter on a set. A principal ultrafilter on $X$ is the collection of all subsets of $X$ that contain a fixed element $x \in X$. The ultrafilters that are not principal are the free ultrafilters. The existence of free ultrafilters on any infinite set is implied by the ultrafilter lemma, which can be proven in ZFC. On the other hand, there exist models of ZF where every ultrafilter on a set is principal.

Ultrafilters have many applications in set theory, model theory, and topology.[1]: 186 Usually, only free ultrafilters lead to non-trivial constructions. For example, an ultraproduct modulo a principal ultrafilter is always isomorphic to one of the factors, while an ultraproduct modulo a free ultrafilter usually has a more complex structure.

Given an arbitrary set $X$, an ultrafilter on $X$ is a non-empty family $U$ of subsets of $X$ such that: (1) $\varnothing \notin U$ (non-degeneracy); (2) if $A \in U$ and $A \subseteq B \subseteq X$ then $B \in U$ (upward closure); (3) if $A, B \in U$ then $A \cap B \in U$ (closure under finite intersections); and (4) for every $A \subseteq X$, either $A \in U$ or $X \setminus A \in U$. Properties (1), (2), and (3) are the defining properties of a filter on $X$. Some authors do not include non-degeneracy (which is property (1) above) in their definition of "filter". However, the definition of "ultrafilter" (and also of "prefilter" and "filter subbase") always includes non-degeneracy as a defining condition. This article requires that all filters be proper, although a filter might be described as "proper" for emphasis.

A filter subbase is a non-empty family of sets that has the finite intersection property (i.e. all finite intersections are non-empty). Equivalently, a filter subbase is a non-empty family of sets that is contained in some (proper) filter. The smallest (relative to $\subseteq$) filter containing a given filter subbase is said to be generated by the filter subbase.

The upward closure in $X$ of a family of sets $P$ is the set $P^{\uparrow X} := \{S \subseteq X : B \subseteq S \text{ for some } B \in P\}$. A prefilter or filter base is a non-empty and proper (i.e. $\varnothing \notin P$) family of sets $P$ that is downward directed, which means that if $B, C \in P$ then there exists some $A \in P$ such that $A \subseteq B \cap C$.
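A minimal sketch of these definitions on a finite set (the helper names are mine, not the article's): it computes the upward closure of a prefilter and tests the filter and ultrafilter axioms by brute force.

```python
from itertools import chain, combinations

X = frozenset({0, 1, 2})
def powerset(s):
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def upward_closure(P):
    """P^{up X} = {S subseteq X : B subseteq S for some B in P}."""
    return {S for S in powerset(X) if any(B <= S for B in P)}

def is_filter(F):
    return (F and frozenset() not in F
            and all(A & B in F for A in F for B in F)      # intersections
            and all(S in F for S in powerset(X)
                    if any(B <= S for B in F)))            # upward closed

def is_ultrafilter(F):
    return is_filter(F) and all(S in F or X - S in F for S in powerset(X))

prefilter = {frozenset({0, 1})}               # a one-element prefilter
F = upward_closure(prefilter)
print(is_filter(F), is_ultrafilter(F))        # True False: F is not ultra
print(is_ultrafilter(upward_closure({frozenset({0})})))   # True: principal at 0
```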
Equivalently, a prefilter is any family of sets $P$ whose upward closure $P^{\uparrow X}$ is a filter, in which case this filter is called the filter generated by $P$, and $P$ is said to be a filter base for $P^{\uparrow X}$.

The dual in $X$[2] of a family of sets $P$ is the set $X \setminus P := \{X \setminus B : B \in P\}$. For example, the dual of the power set $\wp(X)$ is itself: $X \setminus \wp(X) = \wp(X)$. A family of sets is a proper filter on $X$ if and only if its dual is a proper ideal on $X$ ("proper" means not equal to the power set).

A family $U \neq \varnothing$ of subsets of $X$ is called ultra if $\varnothing \notin U$ and for every $S \subseteq X$ there exists some $B \in U$ such that $B \subseteq S$ or $B \subseteq X \setminus S$ (one of several equivalent conditions).[2][3] A filter subbase that is ultra is necessarily a prefilter.[proof 1] The ultra property can now be used to define both ultrafilters and ultra prefilters: an ultra prefilter is a prefilter that is ultra, and an ultrafilter on $X$ is a (proper) filter on $X$ that is ultra.

Ultra prefilters as maximal prefilters: to characterize ultra prefilters in terms of "maximality," the following relation is needed. A family $N$ is said to be subordinate to a family $M$, written $M \leq N$, if for every $M_0 \in M$ there is some $N_0 \in N$ with $N_0 \subseteq M_0$; the families $M$ and $N$ are called equivalent if $M \leq N$ and $N \leq M$. The subordination relationship $\leq$ is a preorder, so the above definition of "equivalent" does form an equivalence relation. If $M \subseteq N$ then $M \leq N$, but the converse does not hold in general. However, if $N$ is upward closed, such as a filter, then $M \leq N$ if and only if $M \subseteq N$. Every prefilter is equivalent to the filter that it generates. This shows that it is possible for filters to be equivalent to sets that are not filters.

If two families of sets $M$ and $N$ are equivalent then either both $M$ and $N$ are ultra (resp. prefilters, filter subbases) or otherwise neither one of them is ultra (resp. a prefilter, a filter subbase). In particular, if a filter subbase is not also a prefilter, then it is not equivalent to the filter or prefilter that it generates. If $M$ and $N$ are both filters on $X$ then $M$ and $N$ are equivalent if and only if $M = N$. If a proper filter (resp. ultrafilter) is equivalent to a family of sets $M$, then $M$ is necessarily a prefilter (resp. ultra prefilter). Using this, it is possible to define prefilters (resp. ultra prefilters) using only the concept of filters (resp. ultrafilters) and subordination: a family is a prefilter (resp. an ultra prefilter) if and only if it is equivalent to some proper filter (resp. some ultrafilter).

There are no ultrafilters on the empty set, so it is henceforth assumed that $X$ is nonempty.
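The ultra condition and the subordination preorder are both easy to test by brute force on a finite set; the sketch below (names are mine) does so, and confirms that a prefilter is equivalent to the filter it generates.

```python
from itertools import chain, combinations

X = frozenset({0, 1, 2})
subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]

def is_ultra(U):
    """For every S, some member of U sits inside S or inside its complement."""
    return (U and frozenset() not in U and
            all(any(B <= S or B <= X - S for B in U) for S in subsets))

def subordinate(M, N):            # M <= N: every M-set contains an N-set
    return all(any(N0 <= M0 for N0 in N) for M0 in M)

def equivalent(M, N):
    return subordinate(M, N) and subordinate(N, M)

P = {frozenset({0})}                               # an ultra prefilter
F = {S for S in subsets if frozenset({0}) <= S}    # the filter it generates
print(is_ultra(P), is_ultra(F))   # True True
print(equivalent(P, F))           # True: a prefilter ~ its generated filter
```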
A filter subbase $U$ on $X$ is an ultrafilter on $X$ if and only if any of the following equivalent conditions hold:[2][3] (1) for every $S \subseteq X$, either $S \in U$ or $X \setminus S \in U$; (2) $U$ is a maximal filter subbase on $X$, meaning that no family of sets with the finite intersection property properly contains $U$. A (proper) filter $U$ on $X$ is an ultrafilter on $X$ if and only if any of the following equivalent conditions hold: (1) $U$ is ultra; (2) $U$ is maximal, i.e. no proper filter on $X$ strictly contains $U$; (3) for every $S \subseteq X$, either $S \in U$ or $X \setminus S \in U$; (4) $U$ is prime, i.e. whenever $R \cup S \in U$ then $R \in U$ or $S \in U$.

If $\mathcal{B} \subseteq \wp(X)$ then its grill on $X$ is the family $\mathcal{B}^{\#X} := \{S \subseteq X : S \cap B \neq \varnothing \text{ for all } B \in \mathcal{B}\}$, where $\mathcal{B}^{\#}$ may be written if $X$ is clear from context. For example, $\varnothing^{\#} = \wp(X)$, and if $\varnothing \in \mathcal{B}$ then $\mathcal{B}^{\#} = \varnothing$. If $\mathcal{A} \subseteq \mathcal{B}$ then $\mathcal{B}^{\#} \subseteq \mathcal{A}^{\#}$, and moreover, if $\mathcal{B}$ is a filter subbase then $\mathcal{B} \subseteq \mathcal{B}^{\#}$.[9] The grill $\mathcal{B}^{\#X}$ is upward closed in $X$ if and only if $\varnothing \notin \mathcal{B}$, which will henceforth be assumed. Moreover, $\mathcal{B}^{\#\#} = \mathcal{B}^{\uparrow X}$, so that $\mathcal{B}$ is upward closed in $X$ if and only if $\mathcal{B}^{\#\#} = \mathcal{B}$.

The grill of a filter on $X$ is called a filter-grill on $X$.[9] For any $\varnothing \neq \mathcal{B} \subseteq \wp(X)$, $\mathcal{B}$ is a filter-grill on $X$ if and only if (1) $\mathcal{B}$ is upward closed in $X$ and (2) for all sets $R$ and $S$, if $R \cup S \in \mathcal{B}$ then $R \in \mathcal{B}$ or $S \in \mathcal{B}$. The grill operation $\mathcal{F} \mapsto \mathcal{F}^{\#X}$ induces a bijection from the filters on $X$ onto the filter-grills on $X$, whose inverse is also given by $\mathcal{F} \mapsto \mathcal{F}^{\#X}$.[9] If $\mathcal{F} \in \operatorname{Filters}(X)$ then $\mathcal{F}$ is a filter-grill on $X$ if and only if $\mathcal{F} = \mathcal{F}^{\#X}$,[9] or equivalently, if and only if $\mathcal{F}$ is an ultrafilter on $X$.[9] That is, a filter on $X$ is a filter-grill if and only if it is ultra. For any non-empty $\mathcal{F} \subseteq \wp(X)$, $\mathcal{F}$ is both a filter on $X$ and a filter-grill on $X$ if and only if (1) $\varnothing \notin \mathcal{F}$ and (2) for all $R, S \subseteq X$, the following equivalences hold: $R \cup S \in \mathcal{F}$ if and only if $R \in \mathcal{F}$ or $S \in \mathcal{F}$, and $R \cap S \in \mathcal{F}$ if and only if $R \in \mathcal{F}$ and $S \in \mathcal{F}$.
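On a finite set the grill is directly computable, and one can verify the characterization that a filter equals its own grill exactly when it is an ultrafilter; a brute-force sketch (my names):

```python
from itertools import chain, combinations

X = frozenset({0, 1, 2})
subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]

def grill(B):
    """B^# = {S subseteq X : S meets every member of B}."""
    return {S for S in subsets if all(S & b for b in B)}

principal = {S for S in subsets if 0 in S}                  # ultrafilter at 0
filt = {S for S in subsets if frozenset({0, 1}) <= S}       # filter, not ultra

print(grill(principal) == principal)   # True: ultrafilters are self-grill
print(grill(filt) == filt)             # False: a non-ultra filter != its grill
print(grill(filt) > filt)              # True: the grill is strictly larger
```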
If $P$ is any non-empty family of sets then the kernel of $P$ is the intersection of all sets in $P$:[10] $\ker P := \bigcap_{B \in P} B$. A non-empty family of sets $P$ is called fixed if $\ker P \neq \varnothing$ and free if $\ker P = \varnothing$. If a family of sets $P$ is fixed then $P$ is ultra if and only if some element of $P$ is a singleton set, in which case $P$ will necessarily be a prefilter. Every principal prefilter is fixed, so a principal prefilter $P$ is ultra if and only if $\ker P$ is a singleton set. A singleton set is ultra if and only if its sole element is also a singleton set.

The next theorem shows that every ultrafilter falls into one of two categories: either it is free or else it is a principal filter generated by a single point.

Proposition — If $U$ is an ultrafilter on $X$ then the following are equivalent: (1) $U$ is fixed, i.e. $\ker U \neq \varnothing$; (2) $U$ is principal; (3) $U$ contains a finite set; (4) $U$ contains a singleton set $\{x\}$, in which case $U = \{S \subseteq X : x \in S\}$.

Every filter on $X$ that is principal at a single point is an ultrafilter, and if in addition $X$ is finite, then there are no ultrafilters on $X$ other than these.[10] In particular, if a set $X$ has finite cardinality $n < \infty$, then there are exactly $n$ ultrafilters on $X$, namely the ultrafilters generated by each singleton subset of $X$. Consequently, free ultrafilters can only exist on an infinite set. If $X$ is an infinite set then there are as many ultrafilters over $X$ as there are families of subsets of $X$; explicitly, if $X$ has infinite cardinality $\kappa$ then the set of ultrafilters over $X$ has the same cardinality as $\wp(\wp(X))$, that cardinality being $2^{2^{\kappa}}$.[11]

If $U$ and $S$ are families of sets such that $U$ is ultra, $\varnothing \notin S$, and $U \leq S$, then $S$ is necessarily ultra. A filter subbase $U$ that is not a prefilter cannot be ultra; but it is nevertheless still possible for the prefilter and filter generated by $U$ to be ultra.

Suppose $U \subseteq \wp(X)$ is ultra and $Y$ is a set. The trace $U\vert_Y := \{B \cap Y : B \in U\}$ is ultra if and only if it does not contain the empty set. Furthermore, at least one of the sets $U\vert_Y \setminus \{\varnothing\}$ and $U\vert_{X \setminus Y} \setminus \{\varnothing\}$ will be ultra (this result extends to any finite partition of $X$). If $F_1, \ldots, F_n$ are filters on $X$, $U$ is an ultrafilter on $X$, and $F_1 \cap \cdots \cap F_n \leq U$, then there is some $F_i$ that satisfies $F_i \leq U$.[12] This result is not necessarily true for an infinite family of filters.[12]

The image under a map $f : X \to Y$ of an ultra set $U \subseteq \wp(X)$ is again ultra, and if $U$ is an ultra prefilter then so is $f(U)$. The property of being ultra is preserved under bijections.
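The trace construction above can be checked concretely; this sketch (my helper names, reusing the brute-force style of the earlier examples) traces a principal ultrafilter onto a subset $Y$ and confirms the dichotomy for a two-piece partition.

```python
from itertools import chain, combinations

X = frozenset({0, 1, 2, 3})
subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]

def is_ultra_on(U, ground):
    sub = [frozenset(c) for c in chain.from_iterable(
        combinations(ground, r) for r in range(len(ground) + 1))]
    return (U and frozenset() not in U and
            all(any(B <= S or B <= ground - S for B in U) for S in sub))

U = {S for S in subsets if 1 in S}          # principal ultrafilter at 1
Y = frozenset({0, 1})

trace = {B & Y for B in U}
print(frozenset() in trace)                 # False, since 1 lies in Y
print(is_ultra_on(trace, Y))                # True: the trace is ultra on Y

Z = frozenset({2, 3})                        # the complementary piece misses 1
print(frozenset() in {B & Z for B in U})    # True -> that trace is not ultra
```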
However, the preimage of an ultrafilter is not necessarily ultra, not even if the map is surjective. For example, if $X$ has more than one point and if the range of $f : X \to Y$ consists of a single point $\{y\}$, then $\{\{y\}\}$ is an ultra prefilter on $Y$ but its preimage is not ultra. Alternatively, if $U$ is a principal filter generated by a point in $Y \setminus f(X)$ then the preimage of $U$ contains the empty set and so is not ultra.

The elementary filter induced by an infinite sequence, all of whose points are distinct, is not an ultrafilter.[12] If $n = 2$, then $U_n$ denotes the set consisting of all subsets of $X$ having cardinality $n$, and if $X$ contains at least $2n - 1$ (that is, $3$) distinct points, then $U_n$ is ultra but it is not contained in any prefilter. This example generalizes to any integer $n > 1$ and also to $n = 1$ if $X$ contains more than one element. Ultra sets that are not also prefilters are rarely used.

For every $S \subseteq X \times X$ and every $a \in X$, let $S\vert_{\{a\} \times X} := \{y \in X : (a, y) \in S\}$. If $\mathcal{U}$ is an ultrafilter on $X$ then the set of all $S \subseteq X \times X$ such that $\{a \in X : S\vert_{\{a\} \times X} \in \mathcal{U}\} \in \mathcal{U}$ is an ultrafilter on $X \times X$.[13]
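For a principal ultrafilter this product construction is computable; the sketch below (my names) forms the resulting ultrafilter on $X \times X$ and checks that it is again principal, at the pair $(1, 1)$.

```python
from itertools import chain, combinations, product

X = [0, 1]
def in_U(A):                      # principal ultrafilter at 1 on X
    return 1 in A

pairs = list(product(X, X))
def powerset(items):
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def in_product(S):
    """S subseteq X*X is large iff {a : slice_a(S) in U} is in U."""
    S = set(S)
    large_rows = {a for a in X if in_U({y for y in X if (a, y) in S})}
    return in_U(large_rows)

members = [set(S) for S in powerset(pairs) if in_product(S)]
print(all((1, 1) in S for S in members))   # True: principal at (1, 1)
print(len(members), 2 ** (len(pairs) - 1)) # 8 8: exactly the sets with (1,1)
```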
The functor associating to any set $X$ the set $U(X)$ of all ultrafilters on $X$ forms a monad, called the ultrafilter monad. The unit map $X \to U(X)$ sends any element $x \in X$ to the principal ultrafilter given by $x$. This ultrafilter monad is the codensity monad of the inclusion of the category of finite sets into the category of all sets,[14] which gives a conceptual explanation of this monad. Similarly, the ultraproduct monad is the codensity monad of the inclusion of the category of finite families of sets into the category of all families of sets. So in this sense, ultraproducts are categorically inevitable.[14]

The ultrafilter lemma was first proved by Alfred Tarski in 1930.[13] The ultrafilter lemma/principle/theorem[4] — Every proper filter on a set $X$ is contained in some ultrafilter on $X$. The ultrafilter lemma is equivalent to several other statements, among them the Boolean prime ideal theorem and the statement that every filter subbase on a set is contained in some ultrafilter. A consequence of the ultrafilter lemma is that every filter is equal to the intersection of all ultrafilters containing it.[4][note 2] The following results can be proven using the ultrafilter lemma. A free ultrafilter exists on a set $X$ if and only if $X$ is infinite. Every proper filter is equal to the intersection of all ultrafilters containing it.[4] Since there are filters that are not ultra, this shows that the intersection of a family of ultrafilters need not be ultra. A family of sets $\mathbb{F} \neq \varnothing$ can be extended to a free ultrafilter if and only if the intersection of any finite family of elements of $\mathbb{F}$ is infinite.

Throughout this section, ZF refers to Zermelo–Fraenkel set theory and ZFC refers to ZF with the Axiom of Choice (AC). The ultrafilter lemma is independent of ZF: there exist models in which the axioms of ZF hold but the ultrafilter lemma does not. There also exist models of ZF in which every ultrafilter is necessarily principal. Every filter that contains a singleton set is necessarily an ultrafilter, and given $x \in X$, the definition of the discrete ultrafilter $\{S \subseteq X : x \in S\}$ does not require more than ZF. If $X$ is finite then every ultrafilter is a discrete filter at a point; consequently, free ultrafilters can only exist on infinite sets. In particular, if $X$ is finite then the ultrafilter lemma can be proven from the axioms of ZF. The existence of free ultrafilters on infinite sets can be proven if the axiom of choice is assumed. More generally, the ultrafilter lemma can be proven by using the axiom of choice, which in brief states that any Cartesian product of non-empty sets is non-empty. Under ZF, the axiom of choice is, in particular, equivalent to (a) Zorn's lemma, (b) Tychonoff's theorem, (c) the weak form of the vector basis theorem (which states that every vector space has a basis), (d) the strong form of the vector basis theorem, and other statements. However, the ultrafilter lemma is strictly weaker than the axiom of choice. While free ultrafilters can be proven to exist, it is not possible to construct an explicit example of a free ultrafilter (using only ZF and the ultrafilter lemma); that is, free ultrafilters are intangible.[15]

Alfred Tarski proved that under ZFC, the cardinality of the set of all free ultrafilters on an infinite set $X$ is equal to the cardinality of $\wp(\wp(X))$, where $\wp(X)$ denotes the power set of $X$.[16] Other authors attribute this discovery to Bedřich Pospíšil (following a combinatorial argument from Fichtenholz and Kantorovitch, improved by Hausdorff).[17][18] Under ZF, the axiom of choice can be used to prove both the ultrafilter lemma and the Krein–Milman theorem; conversely, under ZF, the ultrafilter lemma together with the Krein–Milman theorem can prove the axiom of choice.[19]

The ultrafilter lemma is a relatively weak axiom; for example, the full axiom of choice cannot be deduced from ZF together with only the ultrafilter lemma. Under ZF, the ultrafilter lemma is equivalent to, among other statements, the Boolean prime ideal theorem and the compactness theorem of first-order logic.[20] Any statement that can be deduced from the ultrafilter lemma (together with ZF) is said to be weaker than the ultrafilter lemma. A weaker statement is said to be strictly weaker if, under ZF, it is not equivalent to the ultrafilter lemma. Under ZF, the ultrafilter lemma implies, among other statements, the Hahn–Banach theorem and the statement that every set can be linearly ordered.

The completeness of an ultrafilter $U$ on a power set is the smallest cardinal $\kappa$ such that there are $\kappa$ elements of $U$ whose intersection is not in $U$. The definition of an ultrafilter implies that the completeness of any power-set ultrafilter is at least $\aleph_0$.
An ultrafilter whose completeness is greater than $\aleph_0$ (that is, such that the intersection of any countable collection of elements of $U$ is still in $U$) is called countably complete or σ-complete. The completeness of a countably complete nonprincipal ultrafilter on a power set is always a measurable cardinal.

The Rudin–Keisler ordering (named after Mary Ellen Rudin and Howard Jerome Keisler) is a preorder on the class of power-set ultrafilters defined as follows: if $U$ is an ultrafilter on $\wp(X)$ and $V$ an ultrafilter on $\wp(Y)$, then $V \leq_{RK} U$ if there exists a function $f : X \to Y$ such that $C \in V \iff f^{-1}(C) \in U$ for every subset $C \subseteq Y$. Ultrafilters $U$ and $V$ are called Rudin–Keisler equivalent, denoted $U \equiv_{RK} V$, if there exist sets $A \in U$ and $B \in V$ and a bijection $f : A \to B$ that satisfies the condition above. (If $X$ and $Y$ have the same cardinality, the definition can be simplified by fixing $A = X$, $B = Y$.) It is known that $\equiv_{RK}$ is the kernel of $\leq_{RK}$, i.e., that $U \equiv_{RK} V$ if and only if $U \leq_{RK} V$ and $V \leq_{RK} U$.[30]

There are several special properties that an ultrafilter on $\wp(\omega)$, where $\omega$ denotes the set of natural numbers, may possess, which prove useful in various areas of set theory and topology. It is a trivial observation that all Ramsey ultrafilters are P-points. Walter Rudin proved that the continuum hypothesis implies the existence of Ramsey ultrafilters.[31] In fact, many hypotheses imply the existence of Ramsey ultrafilters, including Martin's axiom. Saharon Shelah later showed that it is consistent that there are no P-point ultrafilters.[32] Therefore, the existence of these types of ultrafilters is independent of ZFC.

P-points are so called because they are topological P-points in the usual topology of the space $\beta\omega \setminus \omega$ of non-principal ultrafilters. The name Ramsey comes from Ramsey's theorem. To see why, one can prove that an ultrafilter is Ramsey if and only if for every 2-coloring of $[\omega]^2$ there exists an element of the ultrafilter that has a homogeneous color. An ultrafilter on $\wp(\omega)$ is Ramsey if and only if it is minimal in the Rudin–Keisler ordering of non-principal power-set ultrafilters.[33]
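For principal ultrafilters the Rudin–Keisler condition above is easy to evaluate; the sketch below (my names) shows that the pushforward of the principal ultrafilter at $x$ along $f$ is the principal ultrafilter at $f(x)$, so principal ultrafilters at points exchanged by a bijection are RK-equivalent.

```python
from itertools import chain, combinations

def powerset(s):
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

X, Y = frozenset({0, 1, 2}), frozenset({'a', 'b'})
f = {0: 'a', 1: 'b', 2: 'a'}

U = {A for A in powerset(X) if 1 in A}     # principal ultrafilter at 1 on X

# V <=_RK U via f:  C in V  iff  f^{-1}(C) in U
V = {C for C in powerset(Y)
     if frozenset(x for x in X if f[x] in C) in U}

print(V == {C for C in powerset(Y) if 'b' in C})   # True: principal at f(1)='b'
```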
https://en.wikipedia.org/wiki/Ultrafilter_on_a_set
Ultrafiltered milk, also known as UF milk, UF skim, or diafiltered milk, is a subclassification of milk protein concentrate that is produced by passing milk under pressure through a thin, porous membrane to separate the components of milk according to size.[1][2] Specifically, ultrafiltration allows the smaller lactose, water, mineral, and vitamin molecules to pass through the membrane, while the larger protein and fat molecules (key components for making cheese) are retained and concentrated. (Depending on the intended use of the UF milk product, the fat in whole milk may be removed before filtration.) The removal of water and lactose reduces the volume of milk, and thereby lowers its transportation and storage costs. Ultrafiltration makes cheese manufacturing more efficient.[3] Ultrafiltered milk is also sold directly to consumers under brands like Fairlife and Simply Smart (discontinued in May 2022), which tout its higher protein content, lower sugar content, and creamier taste.[4]

*Adapted from CRS Report for Congress: Agriculture: A Glossary of Terms, Programs, and Laws, 2005 Edition (Order Code 97-905), a document in the public domain.
https://en.wikipedia.org/wiki/Ultrafiltered_milk
Ultrafiltration (UF) is a variety of membrane filtration in which forces such as pressure or concentration gradients lead to a separation through a semipermeable membrane. Suspended solids and solutes of high molecular weight are retained in the so-called retentate, while water and low-molecular-weight solutes pass through the membrane in the permeate (filtrate). This separation process is used in industry and research for purifying and concentrating macromolecular ($10^3$–$10^6$ Da) solutions, especially protein solutions.

Ultrafiltration is not fundamentally different from microfiltration: both separate based on size exclusion or particle capture. It is fundamentally different from membrane gas separation, which separates based on different amounts of absorption and different rates of diffusion. Ultrafiltration membranes are defined by the molecular weight cut-off (MWCO) of the membrane used. Ultrafiltration is applied in cross-flow or dead-end mode.

Industries such as chemical and pharmaceutical manufacturing, food and beverage processing, and wastewater treatment employ ultrafiltration in order to recycle flow or add value to later products. Blood dialysis also utilizes ultrafiltration.

Ultrafiltration can be used for the removal of particulates and macromolecules from raw water to produce potable water. It has been used either to replace existing secondary (coagulation, flocculation, sedimentation) and tertiary filtration (sand filtration and chlorination) systems employed in water-treatment plants or as standalone systems in isolated regions with growing populations.[1] When treating water with high suspended solids, UF is often integrated into the process, utilising primary (screening, flotation, filtration) and some secondary treatments as pre-treatment stages.[2] UF processes are currently preferred over traditional treatment methods for several reasons, but they are limited by the high cost incurred due to membrane fouling and replacement.[4] Additional pretreatment of feed water is required to prevent excessive damage to the membrane units. In many cases UF is used for pre-filtration in reverse osmosis (RO) plants to protect the RO membranes.

UF is used extensively in the dairy industry,[5] particularly in the processing of cheese whey to obtain whey protein concentrate (WPC) and lactose-rich permeate.[6][7] In a single stage, a UF process is able to concentrate the whey 10–30 times the feed.[8] The original alternative to membrane filtration of whey was steam heating followed by drum drying or spray drying. The product of these methods had limited applications due to its granulated texture and insolubility. Existing methods also had inconsistent product composition, high capital and operating costs, and, due to the excessive heat used in drying, would often denature some of the proteins.[6] Compared to traditional methods, UF processes offer several advantages for this application.[6][8] The potential for fouling is widely discussed, being identified as a significant contributor to decline in productivity.[6][7][8] Cheese whey contains high concentrations of calcium phosphate, which can potentially lead to scale deposits on the membrane surface. As a result, substantial pretreatment must be implemented to balance pH and temperature of the feed to maintain solubility of calcium salts.
[8][9] The basic operating principle of ultrafiltration uses a pressure-induced separation of solutes from a solvent through a semipermeable membrane. The relationship between the applied pressure on the solution to be separated and the flux through the membrane is most commonly described by the Darcy equation: $J = \dfrac{\mathrm{TMP}}{\mu R_t}$, where $J$ is the flux (flow rate per membrane area), TMP is the transmembrane pressure (pressure difference between feed and permeate stream), $\mu$ is the solvent viscosity, and $R_t$ is the total resistance (the sum of membrane and fouling resistance).

When filtration occurs, the local concentration of rejected material at the membrane surface increases and can become saturated. In UF, increased ion concentration can develop an osmotic pressure on the feed side of the membrane. This reduces the effective TMP of the system, therefore reducing the permeation rate. The increase in the concentrated layer at the membrane wall decreases the permeate flux, due to an increase in resistance which reduces the driving force for solvent transport through the membrane surface. Concentration polarisation (CP) affects almost all the available membrane separation processes. In RO, the solutes retained at the membrane layer result in higher osmotic pressure in comparison to the bulk stream concentration, so higher pressures are required to overcome this osmotic pressure. Concentration polarisation plays a dominant role in ultrafiltration as compared to microfiltration because of the small pore size of the membrane.[10] Concentration polarisation differs from fouling as it has no lasting effects on the membrane itself and can be reversed by relieving the TMP. It does, however, have a significant effect on many types of fouling.[11][12]
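As a quick worked example of the Darcy relation (the numerical values are illustrative, not from the article):

```python
# Illustrative flux calculation from the Darcy equation J = TMP / (mu * Rt).
TMP = 2.0e5           # transmembrane pressure, Pa
mu = 1.0e-3           # water viscosity at ~20 C, Pa*s
R_membrane = 3.0e12   # membrane resistance, 1/m
R_fouling = 1.0e12    # fouling resistance, 1/m (grows during operation)

J = TMP / (mu * (R_membrane + R_fouling))   # flux in m^3 per m^2 per s
print(f"{J:.2e} m/s = {J * 3.6e6:.0f} L/(m^2 h)")   # ~180 LMH, a typical UF magnitude
```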
Foulants of UF membranes are commonly grouped into four categories: particulates, dissolved organic matter, inorganic precipitates (scale), and microbiological matter (biofilm). Classical blocking models (complete pore blocking, intermediate blocking, pore constriction, and cake filtration) describe the mechanisms of particulate deposition on the membrane surface and in the pores. As a result of concentration polarisation at the membrane surface, increased ion concentrations may exceed solubility thresholds and precipitate on the membrane surface. These inorganic salt deposits can block pores, causing flux decline, membrane degradation, and loss of production. The formation of scale is highly dependent on factors affecting both solubility and concentration polarisation, including pH, temperature, flow velocity, and permeation rate.[14] Microorganisms will adhere to the membrane surface, forming a gel layer known as biofilm.[15] The film increases the resistance to flow, acting as an additional barrier to permeation. In spiral-wound modules, blockages formed by biofilm can lead to uneven flow distribution and thus increase the effects of concentration polarisation.[16]

Depending on the shape and material of the membrane, different modules can be used for the ultrafiltration process.[17] Commercially available designs of ultrafiltration modules vary according to the required hydrodynamic and economic constraints as well as the mechanical stability of the system under particular operating pressures.[18] The main modules used in industry are tubular, hollow-fibre, spiral-wound, and plate-and-frame modules.

The tubular module design uses polymeric membranes cast on the inside of plastic or porous paper components, with diameters typically in the range of 5–25 mm and lengths from 0.6–6.4 m.[6] Multiple tubes are housed in a PVC or steel shell. The feed of the module is passed through the tubes, accommodating radial transfer of permeate to the shell side. This design allows for easy cleaning; however, the main drawbacks are its low permeability, high volume hold-up within the membrane, and low packing density.[6][18]

The hollow-fibre design is conceptually similar to the tubular module, with a shell-and-tube arrangement. A single module can consist of 50 to thousands of hollow fibres, which are therefore self-supporting, unlike the tubular design. The diameter of each fibre ranges from 0.2–3 mm, with the feed flowing in the tube and the product permeate collected radially on the outside. The advantage of having self-supporting membranes is the ease with which they can be cleaned, due to their ability to be backflushed. Replacement costs, however, are high, as one faulty fibre will require the whole bundle to be replaced. Considering the tubes are of small diameter, this design is also prone to blockage.[8]

Spiral-wound modules are composed of a combination of flat membrane sheets separated by a thin meshed spacer material which serves as a porous plastic screen support. These sheets are rolled around a central perforated tube and fitted into a tubular steel pressure-vessel casing. The feed solution passes over the membrane surface and the permeate spirals into the central collection tube. Spiral-wound modules are a compact and cheap alternative in ultrafiltration design, offer a high volumetric throughput, and can also be easily cleaned.[18] They are, however, limited by the thin channels, where feed solutions with suspended solids can result in partial blockage of the membrane pores.[8]

The plate-and-frame module uses a membrane placed on a flat plate separated by a mesh-like material. The feed is passed through the system, from which permeate is separated and collected from the edge of the plate. Channel length can range from 10–60 cm and channel height from 0.5–1.0 mm.[8] This module provides low volume hold-up, relatively easy replacement of the membrane, and the ability to feed viscous solutions because of the low channel height, unique to this particular design.[18]

The process characteristics of a UF system are highly dependent on the type of membrane used and its application. Manufacturers' specifications of the membrane tend to limit the process to typical operating ranges.[19][20][21][22] When designing a new membrane separation facility or considering its integration into an existing plant, there are many factors which must be considered. For most applications a heuristic approach can be applied to determine many of these characteristics to simplify the design process. Design areas include pre-treatment, membrane material and pore size, flow configuration, operating temperature and pressure, and post-treatment, discussed in turn below.

Treatment of feed prior to the membrane is essential to prevent damage to the membrane and minimize the effects of fouling, which greatly reduce the efficiency of the separation. Types of pre-treatment are often dependent on the type of feed and its quality. For example, in wastewater treatment, household waste and other particulates are screened. Other types of pre-treatment common to many UF processes include pH balancing and coagulation.[23][24] Appropriate sequencing of each pre-treatment phase is crucial in preventing damage to subsequent stages. Pre-treatment can even be employed simply using dosing points. Most UF membranes use polymer materials (polysulfone, polypropylene, cellulose acetate, polylactic acid); however, ceramic membranes are used for high-temperature applications.

A general rule for the choice of pore size in a UF system is to use a membrane with a pore size one tenth that of the particle size to be separated. This limits the number of smaller particles entering the pores and adsorbing to the pore surface. Instead they block the entrance to the pores, allowing simple adjustments of cross-flow velocity to dislodge them.[8]
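A trivial sizing helper expressing this one-tenth rule (the function is mine; the heuristic is the one just described):

```python
def recommended_pore_size(particle_size_nm: float) -> float:
    """One-tenth rule of thumb: pore size ~ particle size / 10."""
    return particle_size_nm / 10.0

# e.g. to retain a ~50 nm colloid, target ~5 nm pores:
print(recommended_pore_size(50.0), "nm")
```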
UF systems can operate with either cross-flow or dead-end flow. In dead-end filtration the flow of the feed solution is perpendicular to the membrane surface; in cross-flow systems the flow passes parallel to the membrane surface.[25] Dead-end configurations are more suited to batch processes with low suspended solids, as solids accumulate at the membrane surface, requiring frequent backflushes and cleaning to maintain high flux. Cross-flow configurations are preferred in continuous operations, as solids are continuously flushed from the membrane surface, resulting in a thinner cake layer and lower resistance to permeation.

Flow velocity is especially critical for hard water or liquids containing suspensions in preventing excessive fouling. Higher cross-flow velocities can be used to enhance the sweeping effect across the membrane surface, preventing deposition of macromolecules and colloidal material and reducing the effects of concentration polarisation; expensive pumps are, however, required to achieve these conditions.

To avoid excessive damage to the membrane, it is recommended to operate a plant at the temperature specified by the membrane manufacturer. In some instances, however, temperatures beyond the recommended region are required to minimise the effects of fouling.[24] Economic analysis of the process is required to find a compromise between the increased cost of membrane replacement and the productivity of the separation.

Pressure drops over multi-stage separation can result in a drastic decline in flux performance in the latter stages of the process. This can be improved using booster pumps to increase the TMP in the final stages. This will incur a greater capital and energy cost, which will be offset by the improved productivity of the process.[24] With a multi-stage operation, retentate streams from each stage are recycled through the previous stage to improve their separation efficiency. Multiple stages in series can be applied to achieve higher-purity permeate streams. Due to the modular nature of membrane processes, multiple modules can be arranged in parallel to treat greater volumes.[26]

Post-treatment of the product streams is dependent on the composition of the permeate and retentate and their end use or government regulation. In cases such as milk separation, both streams (milk and whey) can be collected and made into useful products. Additional drying of the retentate will produce whey powder. In the paper mill industry, the retentate (non-biodegradable organic material) is incinerated to recover energy and the permeate (purified water) is discharged into waterways. It is essential for the permeate water to be pH balanced and cooled to avoid thermal pollution of waterways and altering their pH.

Cleaning of the membrane is done regularly to prevent the accumulation of foulants and reverse the degrading effects of fouling on permeability and selectivity. Regular backwashing is often conducted every 10 minutes for some processes to remove cake layers formed on the membrane surface.
[8] By pressurising the permeate stream and forcing it back through the membrane, accumulated particles can be dislodged, improving the flux of the process. Backwashing is limited in its ability to remove more complex forms of fouling such as biofouling, scaling, or adsorption to pore walls.[27] These types of foulants require chemical cleaning to be removed. The common types of chemicals used for cleaning are alkalis (such as sodium hydroxide), acids (such as citric or phosphoric acid), oxidising disinfectants (such as sodium hypochlorite), and enzymatic cleaners.[27][28]

When designing a cleaning protocol it is essential to consider:

Cleaning time – Adequate time must be allowed for chemicals to interact with foulants and permeate into the membrane pores. However, if the process is extended beyond its optimum duration, it can lead to denaturation of the membrane and deposition of removed foulants.[27] The complete cleaning cycle, including rinses between stages, may take as long as 2 hours to complete.[29]

Aggressiveness of chemical treatment – With a high degree of fouling it may be necessary to employ aggressive cleaning solutions to remove the fouling material. However, in some applications this may not be suitable if the membrane material is sensitive, leading to enhanced membrane ageing.

Disposal of cleaning effluent – The release of some chemicals into wastewater systems may be prohibited or regulated, so this must be considered. For example, the use of phosphoric acid may result in high levels of phosphates entering waterways, which must be monitored and controlled to prevent eutrophication.

(A table summarising the common types of fouling and their respective chemical treatments appears in the source.[8])

In order to increase the life cycle of membrane filtration systems, energy-efficient membranes are being developed for membrane bioreactor systems. Technology has been introduced which allows the power required to aerate the membrane for cleaning to be reduced whilst still maintaining a high flux level. Mechanical cleaning processes using granulates have also been adopted as an alternative to conventional forms of cleaning; this reduces energy consumption and also reduces the area required for filtration tanks.[30]

Membrane properties have also been enhanced to reduce fouling tendencies by modifying surface properties. This can be noted in the biotechnology industry, where membrane surfaces have been altered in order to reduce the amount of protein binding.[31] Ultrafiltration modules have also been improved to allow for more membrane area for a given footprint, without increasing the risk of fouling, by designing more efficient module internals. The current pre-treatment of seawater desalination uses ultrafiltration modules that have been designed to withstand high temperatures and pressures whilst occupying a smaller footprint. Each module vessel is self-supported and resistant to corrosion and accommodates easy removal and replacement of the module without the cost of replacing the vessel itself.[30]
https://en.wikipedia.org/wiki/Ultrafiltration
In the philosophy of mathematics, ultrafinitism (also known as ultraintuitionism,[1] strict formalism,[2] strict finitism,[2] actualism,[1] predicativism,[2][3] and strong finitism)[2] is a form of finitism and intuitionism. There are various philosophies of mathematics that are called ultrafinitism. A major identifying property common among most of these philosophies is their objection to the totality of number-theoretic functions like exponentiation over the natural numbers.

Like other finitists, ultrafinitists deny the existence of the infinite set $\mathbb{N}$ of natural numbers, on the basis that it can never be completed (i.e., there is a largest natural number). In addition, some ultrafinitists are concerned with the acceptance of objects in mathematics that no one can construct in practice because of physical restrictions on constructing large finite mathematical objects. Thus some ultrafinitists will deny or refrain from accepting the existence of large numbers, for example, the floor of the first Skewes's number, which is a huge number defined using the exponential function as $\exp(\exp(\exp(79)))$, or $e^{e^{e^{79}}}$. The reason is that nobody has yet calculated what natural number is the floor of this real number, and it may not even be physically possible to do so. Similarly, $2 \uparrow\uparrow\uparrow 6$ (in Knuth's up-arrow notation) would be considered only a formal expression that does not correspond to a natural number. The brand of ultrafinitism concerned with the physical realizability of mathematics is often called actualism.

Edward Nelson criticized the classical conception of natural numbers because of the circularity of its definition. In classical mathematics the natural numbers are defined as 0 and the numbers obtained by iterative application of the successor function to 0. But the concept of natural number is already assumed for the iteration. In other words, to obtain a number like $2 \uparrow\uparrow\uparrow 6$ one needs to apply the successor function iteratively (in fact, exactly $2 \uparrow\uparrow\uparrow 6$ times) to 0.
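To make the up-arrow example concrete, here is a direct (and deliberately naive) sketch of Knuth's notation; the small cases are easy, but the expression $2 \uparrow\uparrow\uparrow 6$ is already far beyond physical computation, which is exactly the ultrafinitist's point.

```python
import sys
sys.setrecursionlimit(10000)

def up(a, n, b):
    """Knuth's up-arrow a ^(n) b, defined by iterating the previous level."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(2, 2, 4))               # 2 tt 4 = 65536
print(len(str(up(2, 2, 5))))     # 2 tt 5 = 2**65536, a 19,729-digit number
# up(2, 3, 6) = 2 ttt 6 is a tower of towers: evaluating it would require
# unimaginably many successor steps; Nelson's point is that "iterate the
# successor function that many times" already presupposes such numbers.
```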
Some versions of ultrafinitism are forms of constructivism, but most constructivists view the philosophy as unworkably extreme. The logical foundation of ultrafinitism is unclear; in his comprehensive survey Constructivism in Mathematics (1988), the constructive logician A. S. Troelstra dismissed it by saying "no satisfactory development exists at present." This was not so much a philosophical objection as it was an admission that, in a rigorous work of mathematical logic, there was simply nothing precise enough to include.

Serious work on ultrafinitism was led, from 1959 until his death in 2016, by Alexander Esenin-Volpin, who in 1961 sketched a program for proving the consistency of Zermelo–Fraenkel set theory in ultrafinite mathematics. Other mathematicians who have worked on the topic include Doron Zeilberger, Edward Nelson, Rohit Jivanlal Parikh, and Jean Paul Van Bendegem. The philosophy is also sometimes associated with the beliefs of Ludwig Wittgenstein, Robin Gandy, Petr Vopěnka, and Johannes Hjelmslev.

Shaughan Lavine has developed a form of set-theoretical ultrafinitism that is consistent with classical mathematics.[4] Lavine has shown that the basic principles of arithmetic, such as "there is no largest natural number," can be upheld, as Lavine allows for the inclusion of "indefinitely large" numbers.[4]

Other considerations of the possibility of avoiding unwieldy large numbers can be based on computational complexity theory, as in András Kornai's work on explicit finitism (which does not deny the existence of large numbers)[5] and Vladimir Sazonov's notion of feasible numbers. There has also been considerable formal development of versions of ultrafinitism that are based on complexity theory, like Samuel Buss's bounded arithmetic theories, which capture the mathematics associated with various complexity classes like P and PSPACE. Buss's work can be considered a continuation of Edward Nelson's work on predicative arithmetic, since bounded arithmetic theories like $S^1_2$ are interpretable in Raphael Robinson's theory Q and therefore are predicative in Nelson's sense. The power of these theories for developing mathematics is studied in bounded reverse mathematics, as can be found in the works of Stephen A. Cook and Phuong The Nguyen. However, these are not philosophies of mathematics but rather the study of restricted forms of reasoning, similar to reverse mathematics.
https://en.wikipedia.org/wiki/Ultrafinitism
In mathematics , an ultragraph C*-algebra is a universal C*-algebra generated by partial isometries on a collection of Hilbert spaces constructed from ultragraphs. [ 1 ] : 6–7 These C*-algebras were created in order to simultaneously generalize the classes of graph C*-algebras and Exel–Laca algebras, giving a unified framework for studying these objects. [ 1 ] This is because every graph can be encoded as an ultragraph, and similarly, every infinite matrix giving an Exel–Laca algebra can also be encoded as an ultragraph. An ultragraph G = ( G 0 , G 1 , r , s ) {\displaystyle {\mathcal {G}}=(G^{0},{\mathcal {G}}^{1},r,s)} consists of a set of vertices G 0 {\displaystyle G^{0}} , a set of edges G 1 {\displaystyle {\mathcal {G}}^{1}} , a source map s : G 1 → G 0 {\displaystyle s:{\mathcal {G}}^{1}\to G^{0}} , and a range map r : G 1 → P ( G 0 ) ∖ { ∅ } {\displaystyle r:{\mathcal {G}}^{1}\to P(G^{0})\setminus \{\emptyset \}} taking values in the collection P ( G 0 ) ∖ { ∅ } {\displaystyle P(G^{0})\setminus \{\emptyset \}} of nonempty subsets of the vertex set. A directed graph is the special case of an ultragraph in which the range of each edge is a singleton, and ultragraphs may be thought of as generalized directed graphs in which each edge starts at a single vertex and points to a nonempty subset of vertices. An easy way to visualize an ultragraph is to consider a directed graph with a set of labelled vertices, where each label corresponds to a subset in the image of an element of the range map. For example, an ultragraph with vertices and edge labels G 0 = { v , w , x } {\displaystyle G^{0}=\{v,w,x\}} , G 1 = { e , f , g } {\displaystyle {\mathcal {G}}^{1}=\{e,f,g\}} and source and range maps s ( e ) = v s ( f ) = w s ( g ) = x r ( e ) = { v , w , x } r ( f ) = { x } r ( g ) = { v , w } {\displaystyle {\begin{matrix}s(e)=v&s(f)=w&s(g)=x\\r(e)=\{v,w,x\}&r(f)=\{x\}&r(g)=\{v,w\}\end{matrix}}} can be visualized in this way. Given an ultragraph G = ( G 0 , G 1 , r , s ) {\displaystyle {\mathcal {G}}=(G^{0},{\mathcal {G}}^{1},r,s)} , we define G 0 {\displaystyle {\mathcal {G}}^{0}} to be the smallest subset of P ( G 0 ) {\displaystyle P(G^{0})} containing the singleton sets { { v } : v ∈ G 0 } {\displaystyle \{\{v\}:v\in G^{0}\}} , containing the range sets { r ( e ) : e ∈ G 1 } {\displaystyle \{r(e):e\in {\mathcal {G}}^{1}\}} , and closed under intersections, unions, and relative complements. A Cuntz–Krieger G {\displaystyle {\mathcal {G}}} -family is a collection of projections { p A : A ∈ G 0 } {\displaystyle \{p_{A}:A\in {\mathcal {G}}^{0}\}} together with a collection of partial isometries { s e : e ∈ G 1 } {\displaystyle \{s_{e}:e\in {\mathcal {G}}^{1}\}} with mutually orthogonal ranges satisfying the Cuntz–Krieger relations. The ultragraph C*-algebra C ∗ ( G ) {\displaystyle C^{*}({\mathcal {G}})} is the universal C*-algebra generated by a Cuntz–Krieger G {\displaystyle {\mathcal {G}}} -family. Every graph C*-algebra is seen to be an ultragraph algebra by simply considering the graph as a special case of an ultragraph, and realizing that G 0 {\displaystyle {\mathcal {G}}^{0}} is the collection of all finite subsets of G 0 {\displaystyle G^{0}} and p A = ∑ v ∈ A p v {\displaystyle p_{A}=\sum _{v\in A}p_{v}} for each A ∈ G 0 {\displaystyle A\in {\mathcal {G}}^{0}} . 
Every Exel–Laca algebra is also an ultragraph C*-algebra: If A {\displaystyle A} is an infinite square matrix with index set I {\displaystyle I} and entries in { 0 , 1 } {\displaystyle \{0,1\}} , one can define an ultragraph by G 0 := I {\displaystyle G^{0}:=I} , G 1 := I {\displaystyle {\mathcal {G}}^{1}:=I} , s ( i ) = i {\displaystyle s(i)=i} , and r ( i ) = { j ∈ I : A ( i , j ) = 1 } {\displaystyle r(i)=\{j\in I:A(i,j)=1\}} . It can be shown that C ∗ ( G ) {\displaystyle C^{*}({\mathcal {G}})} is isomorphic to the Exel–Laca algebra O A {\displaystyle {\mathcal {O}}_{A}} . [ 1 ] Ultragraph C*-algebras are useful tools for studying both graph C*-algebras and Exel–Laca algebras. Among other benefits, modeling an Exel–Laca algebra as an ultragraph C*-algebra allows one to use the ultragraph as a tool to study the associated C*-algebra, thereby providing the option to use graph-theoretic techniques, rather than matrix techniques, when studying the Exel–Laca algebra. Ultragraph C*-algebras have been used to show that every simple AF-algebra is isomorphic to either a graph C*-algebra or an Exel–Laca algebra. [ 2 ] They have also been used to prove that every AF-algebra with no (nonzero) finite-dimensional quotient is isomorphic to an Exel–Laca algebra. [ 2 ] While the classes of graph C*-algebras, Exel–Laca algebras, and ultragraph C*-algebras each contain C*-algebras not isomorphic to any C*-algebra in the other two classes, the three classes have been shown to coincide up to Morita equivalence . [ 3 ]
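The matrix-to-ultragraph encoding just described is easy to express in code. The following Python sketch (illustrative names; it necessarily works with a finite window of the infinite matrix) builds the vertex set, edge set, source map, and range map from a {0,1}-matrix:

```python
def ultragraph_from_matrix(A, I):
    """Encode a {0,1}-matrix A (dict of dicts over index set I) as an ultragraph.

    Returns (G0, G1, s, r) with G0 = I, G1 = I, s(i) = i, and
    r(i) = {j in I : A[i][j] == 1}, mirroring the construction above.
    """
    G0 = set(I)                                         # vertices
    G1 = set(I)                                         # one edge per row index
    s = {i: i for i in I}                               # source map
    r = {i: {j for j in I if A[i][j] == 1} for i in I}  # range map
    return G0, G1, s, r

# Example: a 2x2 window of a larger matrix.
A = {0: {0: 1, 1: 1}, 1: {0: 0, 1: 1}}
G0, G1, s, r = ultragraph_from_matrix(A, [0, 1])
print(r)  # {0: {0, 1}, 1: {1}}
```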
https://en.wikipedia.org/wiki/Ultragraph_C*-algebra
In chemistry and materials science , ultrahydrophobic (or superhydrophobic ) surfaces are highly hydrophobic , i.e., extremely difficult to wet . The contact angles of a water droplet on an ultrahydrophobic material exceed 150°. [ 1 ] This is also referred to as the lotus effect , after the superhydrophobic leaves of the lotus plant. A droplet striking these kinds of surfaces can fully rebound like an elastic ball. [ 2 ] Interactions of bouncing drops can be further reduced using special superhydrophobic surfaces that promote symmetry breaking , [ 3 ] [ 4 ] [ 5 ] [ 6 ] pancake bouncing [ 7 ] or waterbowl bouncing. [ 8 ] [ 9 ] In 1805, Thomas Young defined the contact angle θ by analysing the forces acting on a fluid droplet resting on a smooth solid surface surrounded by a gas: [ 10 ] γ SG = γ SL + γ LG cos ⁡ θ , {\displaystyle \gamma _{\text{SG}}=\gamma _{\text{SL}}+\gamma _{\text{LG}}\cos \theta ,} where θ can be measured using a contact angle goniometer . Wenzel determined that when the liquid is in intimate contact with a microstructured surface, θ will change to θ W* with cos ⁡ θ W ∗ = r cos ⁡ θ , {\displaystyle \cos \theta _{W}^{*}=r\cos \theta ,} where r is the ratio of the actual area to the projected area. [ 11 ] Wenzel's equation shows that microstructuring a surface amplifies the natural tendency of the surface. A hydrophobic surface (one that has an original contact angle greater than 90°) becomes more hydrophobic when microstructured – its new contact angle becomes greater than the original. However, a hydrophilic surface (one that has an original contact angle less than 90°) becomes more hydrophilic when microstructured – its new contact angle becomes less than the original. [ 12 ] Cassie and Baxter found that if the liquid is suspended on the tops of microstructures, θ will change to θ CB* with cos ⁡ θ C B ∗ = φ ( cos ⁡ θ + 1 ) − 1 , {\displaystyle \cos \theta _{CB}^{*}=\varphi (\cos \theta +1)-1,} where φ is the area fraction of the solid that touches the liquid. [ 13 ] Liquid in the Cassie-Baxter state is more mobile than in the Wenzel state. It can be predicted whether the Wenzel or Cassie-Baxter state should exist by calculating the new contact angle with both equations. By a minimization of free energy argument, the relation that predicts the smaller new contact angle is the state most likely to exist. Stated mathematically, for the Cassie-Baxter state to exist, the following inequality must be true: [ 14 ] cos ⁡ θ < φ − 1 r − φ . {\displaystyle \cos \theta <{\frac {\varphi -1}{r-\varphi }}.} A more recent alternative criterion for the Cassie-Baxter state asserts that the Cassie-Baxter state exists when two criteria are met: contact line forces overcome body forces from the unsupported droplet weight, and the microstructures are tall enough to prevent the liquid that bridges microstructures from touching their base. [ 15 ] Contact angle is a measure of static hydrophobicity, and contact angle hysteresis and slide angle are dynamic measures. Contact angle hysteresis is a phenomenon that characterizes surface heterogeneity. [ 16 ] When a pipette injects a liquid onto a solid, the liquid will form some contact angle. As the pipette injects more liquid, the droplet will increase in volume, the contact angle will increase, but its three-phase boundary will remain stationary until it suddenly advances outward. The contact angle the droplet had immediately before advancing outward is termed the advancing contact angle. The receding contact angle is then measured by pumping the liquid back out of the droplet: the droplet will decrease in volume, the contact angle will decrease, but its three-phase boundary will remain stationary until it suddenly recedes inward. The contact angle the droplet had immediately before receding inward is termed the receding contact angle. The difference between advancing and receding contact angles is termed contact angle hysteresis and can be used to characterize surface heterogeneity, roughness, and mobility. Surfaces that are not homogeneous will have domains which impede motion of the contact line. 
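The prediction rule described above (compute both new angles and keep the state whose predicted angle is smaller) can be sketched as follows in Python; the function name and sample inputs are illustrative, not from the cited references.

```python
import math

def predicted_state(theta_deg, r, phi):
    """Predict Wenzel vs. Cassie-Baxter per the free-energy minimization argument.

    theta_deg: Young contact angle on the smooth material (degrees)
    r:         roughness ratio, actual area / projected area (r >= 1)
    phi:       area fraction of solid touching the liquid (0 < phi <= 1)
    """
    cos_t = math.cos(math.radians(theta_deg))
    cos_w = max(-1.0, min(1.0, r * cos_t))  # Wenzel: cos θ_W* = r cos θ (angle capped at 0°/180°)
    cos_cb = phi * (cos_t + 1) - 1          # Cassie-Baxter: cos θ_CB* = φ(cos θ + 1) − 1
    theta_w = math.degrees(math.acos(cos_w))
    theta_cb = math.degrees(math.acos(cos_cb))
    # The equation predicting the smaller new contact angle gives the likelier state.
    return ("Wenzel", theta_w) if theta_w < theta_cb else ("Cassie-Baxter", theta_cb)

print(predicted_state(130.0, 1.8, 0.2))  # very hydrophobic and rough: Cassie-Baxter
```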
The slide angle is another dynamic measure of hydrophobicity and is measured by depositing a droplet on a surface and tilting the surface until the droplet begins to slide. Liquids in the Cassie-Baxter state generally exhibit lower slide angles and contact angle hysteresis than those in the Wenzel state. A simple model can be used to predict the effectiveness of a synthetic micro- or nano-fabricated surface for its conditional state (Wenzel or Cassie-Baxter), contact angle and contact angle hysteresis . [ 17 ] The main factor of this model is the contact line density, Λ , which is the total perimeter of asperities over a given unit area. The critical contact line density Λ C is a function of body and surface forces, as well as the projected area of the droplet. If Λ > Λ C , drops are suspended in the Cassie-Baxter state. Otherwise, the droplet will collapse into the Wenzel state. Updated advancing and receding contact angles in the Cassie-Baxter state can be calculated from this model, as can their counterparts in the Wenzel state. M. Nosonovsky and B. Bhushan studied the effect of unitary (non-hierarchical) structures of micro and nano roughness, and hierarchical structures (micro roughness covered with nano roughness). [ 18 ] They found that hierarchical structure was not only necessary for a high contact angle but essential for the stability of the water-solid and water-air interfaces (the composite interface). Due to an external perturbation, a standing capillary wave can form at the liquid–air interface. If the amplitude of the capillary wave is greater than the height of the asperity, the liquid can touch the valley between the asperities; and if the angle under which the liquid comes in contact with the solid is greater than θ 0 , it is energetically profitable for the liquid to fill the valley. The effect of capillary waves is more pronounced for small asperities with heights comparable to the wave amplitude. An example of this is seen in the case of unitary roughness, where the amplitude of asperity is very low. This is why the likelihood of instability of a unitary interface is very high. However, in a recent study, Eyal Bittoun and Abraham Marmur found that multiscale roughness is not necessarily essential for superhydrophobicity but beneficial for mechanical stability of the surface. [ 19 ] Many very hydrophobic materials found in nature rely on Cassie's law and are biphasic on the submicrometer level. The fine hairs on some plants are hydrophobic, designed to exploit the solvent properties of water to attract and remove sunlight-blocking dirt from their photosynthetic surfaces. Inspired by this lotus effect , many functional superhydrophobic surfaces have been developed. [ 20 ] Water striders are insects that live on the surface film of water, and their bodies are effectively unwettable due to specialized hairpiles called hydrofuge ; many of their body surfaces are covered with these specialized "hairpiles", composed of tiny hairs spaced so closely that there are more than one thousand microhairs per mm, which creates a hydrophobic surface. [ 21 ] Similar hydrofuge surfaces are known in other insects, including aquatic insects that spend most of their lives submerged, with hydrophobic hairs preventing entry of water into their respiratory system. The skin surface of some species of lizards , such as geckos [ 22 ] and anoles , [ 23 ] has also been documented as highly hydrophobic, and may facilitate self-cleaning [ 24 ] or underwater breathing. 
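Returning to the contact-line-density model: for an idealized square lattice of circular pillars, Λ is just the pillar perimeter per unit cell, which the following Python sketch computes (the geometry and the sample dimensions are illustrative assumptions; Λ C itself comes from the model's force balance, which is not reproduced here).

```python
import math

def contact_line_density(d, p):
    """Λ for a square lattice of circular pillars: one pillar of
    perimeter π·d per unit cell of projected area p² (units: 1/length)."""
    return math.pi * d / p**2

lam = contact_line_density(10e-6, 50e-6)  # 10 µm pillars on a 50 µm pitch
print(f"Λ = {lam:,.0f} per metre")        # ≈ 12,566 per metre
# If lam exceeds the critical density Λ_C, the model predicts a suspended
# Cassie-Baxter drop; otherwise the drop collapses into the Wenzel state.
```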
[ 25 ] Some birds are great swimmers, due to their hydrophobic feather coating. Penguins are coated in a layer of air and can release that trapped air to accelerate rapidly when needing to jump out of the water and land on higher ground. Wearing an air coat when swimming reduces drag and also acts as a heat insulator. Dettre and Johnson discovered in 1964 that the superhydrophobic lotus effect phenomenon was related to rough hydrophobic surfaces, and they developed a theoretical model based on experiments with glass beads coated with paraffin or TFE telomer. The self-cleaning property of superhydrophobic micro-/nanostructured surfaces was reported in 1977. [ 26 ] Superhydrophobic materials formed from perfluoroalkyl and perfluoropolyether compounds and by RF plasma were developed, used for electrowetting, and commercialized for bio-medical applications between 1986 and 1995. [ 27 ] [ 28 ] [ 29 ] [ 30 ] Other technology and applications have emerged since the mid 1990s. [ 31 ] A durable superhydrophobic hierarchical composition, applied in one or two steps, was disclosed in 2002, comprising nano-sized particles ≤ 100 nanometers overlaying a surface having micrometer-sized features or particles ≤ 100 µm . The larger particles were observed to protect the smaller particles from mechanical abrasion. [ 32 ] Durable, optically transparent superhydrophobic and oleophobic coatings were developed in 2012, comprising nanoparticles in the 10 to 100 nm size range. [ 33 ] [ 34 ] [ 35 ] [ 36 ] [ 37 ] Research in superhydrophobicity recently accelerated with a letter that reported man-made superhydrophobic samples produced by allowing alkylketene dimer (AKD) to solidify into a nanostructured fractal surface. [ 38 ] Many papers have since presented fabrication methods for producing superhydrophobic surfaces, including particle deposition, [ 39 ] sol-gel techniques, [ 40 ] plasma treatments, [ 41 ] vapor deposition, [ 39 ] and casting techniques. [ 42 ] Current opportunity for research impact lies mainly in fundamental research and practical manufacturing. [ 43 ] Debates have recently emerged concerning the applicability of the Wenzel and Cassie-Baxter models. In an experiment designed to challenge the surface energy perspective of the Wenzel and Cassie-Baxter model and promote a contact line perspective, water drops were placed on a smooth hydrophobic spot in a rough hydrophobic field, a rough hydrophobic spot in a smooth hydrophobic field, and a hydrophilic spot in a hydrophobic field. [ 44 ] Experiments showed that the surface chemistry and geometry at the contact line affected the contact angle and contact angle hysteresis, but the surface area inside the contact line had no effect. An argument that increased jaggedness in the contact line enhances droplet mobility has also been proposed. [ 45 ] One method to experimentally measure the jaggedness of the contact line uses a low-melting-temperature metal that is melted and deposited onto micro/nanostructured surfaces. When the metal cools and solidifies, it is removed from the surface, flipped, and inspected for contact line micro-geometry. [ 46 ] There have been a few efforts in fabricating a surface with tunable wettability. For the purpose of spontaneous droplet mobility, a surface can be fabricated with varying tower widths and spacings to gradually increase the free energy of the surface. [ 47 ] The trend shows that as tower width increases, the free energy barrier becomes larger and the contact angle drops, lowering the hydrophobicity of the material. 
Increasing tower spacing will increase the contact angle, but also increase the free energy barrier. Droplets naturally move towards areas of weak hydrophobicity, so to make a droplet spontaneously move from one spot to the next, the ideal surface would grade from narrow towers with large spacing to wide towers with small spacing. One caveat to this spontaneous motion is the resistance of stationary droplets to motion. Initial droplet motion requires an external stimulus, which can be as large as a vibration of the surface or as small as a simple syringe "push" as the droplet is released from the needle. An example of readily tunable wettability is found with specially developed fabrics. [ 48 ] Stretching a dip-coated commercial fabric typically increases the contact angle, largely because of an increase in tower spacing. However, this trend does not continue towards greater hydrophobicity with higher strain: eventually the Cassie-Baxter state reaches an instability and transitions to the Wenzel state, soaking the fabric. An example of a biomimetic superhydrophobic material in nanotechnology is nanopin film . In one study a vanadium pentoxide V 2 O 5 surface is presented that can switch reversibly between superhydrophobicity and superhydrophilicity under the influence of UV radiation. [ 49 ] According to the study, any surface can be modified to this effect by application of a suspension of rose-like V 2 O 5 particles, for instance with an inkjet printer . Once again hydrophobicity is induced by interlaminar air pockets (separated by 2.1 nm distances). The UV effect is explained as follows: UV light creates electron-hole pairs , with the holes reacting with lattice oxygen, creating surface oxygen vacancies, while the electrons reduce V 5+ to V 3+ . The oxygen vacancies are met by water, and this water absorbency by the vanadium surface makes it hydrophilic. By extended storage in the dark, water is replaced by oxygen and hydrophilicity is once again lost. Another example of a biomimetic surface includes micro-flowers on the common polymer polycarbonate. [ 50 ] The micro/nano binary structures (MNBS) imitate the typical micro/nanostructure of a lotus leaf. These micro-flowers offer nanoscale features which enhance the surface's hydrophobicity, without the use of low-surface-energy coatings. Creation of the superhydrophobic surface through vapor-induced phase separation at varying surrounding relative humidities caused a corresponding change in the contact angle of the surface. Surfaces prepared this way offer contact angles higher than 160° with typical sliding angles around 10°. A recent study has revealed honeycomb-like micro-structures on the taro leaf, which make the leaf superhydrophobic; the measured contact angle on the taro leaf in this study is around 148°. [ 51 ] Low-surface-energy coatings can also provide a superhydrophobic surface. A self-assembled monolayer (SAM) coating can provide such surfaces. To maintain a hydrophobic surface, the head groups bind closely to the surface, while the hydrophobic micelles stretch far away from the surface. By varying the amount of SAM coated on a substrate, one can vary the degree of hydrophobicity. Particular superhydrophobic SAMs have a hydrophobic head group binding to the substrate. In one such work, 1-dodecanethiol (DT; CH 3 (CH 2 ) 11 SH ) is assembled on a Pt/ZnO/SiO 2 composite substrate, producing contact angles of 170.3°. [ 52 ] The monolayers can also be removed with a UV source, decreasing the hydrophobicity. 
A simple fabrication method can create both microstructure and low surface energy in one step by using octadecyltrichlorosilane (OTS). [ 53 ] Superhydrophobic surfaces are able to stabilize the Leidenfrost effect by making the vapour layer stable. Once the vapour layer is established, cooling never collapses the layer, and no nucleate boiling occurs; the layer instead slowly relaxes until the surface is cooled. [ 54 ] Fabricating superhydrophobic polymer surfaces with controlled geometry can be expensive and time consuming, but a small number of commercial sources [ citation needed ] provide specimens for research labs. Active recent research on superhydrophobic materials might eventually lead to industrial applications. Some attempts at fabricating a superhydrophobic surface include mimicking a lotus leaf surface, namely its two-tiered structure. This requires micro-scale surfaces with typically nanoscale features on top of them. For example, a simple routine of coating cotton fabric with silica [ 55 ] or titania [ 56 ] particles by the sol-gel technique has been reported, which protects the fabric from UV light and makes it superhydrophobic. Similarly, silica nanoparticles can be deposited on top of already hydrophobic carbon fabric. [ 57 ] The carbon fabric by itself is inherently hydrophobic, but not superhydrophobic, since its contact angle is not higher than 150°. With the adhesion of silica nanoparticles, contact angles as high as 162° are achieved. Using silica nanoparticles is also of interest for developing transparent hydrophobic materials for car windshields and self-cleaning windows. [ 58 ] By coating an already transparent surface with nano-silica at about 1 wt%, droplet contact angles can be raised up to 168° with a 12° sliding angle. An efficient routine has been reported for making linear low-density polyethylene superhydrophobic and thus self-cleaning; [ 59 ] 99% of dirt deposited on such a surface is easily washed away. Patterned superhydrophobic surfaces also hold promise for lab-on-a-chip and microfluidic devices and can drastically improve surface-based bioanalysis. [ 60 ] In the textile industry, superhydrophobicity refers to static roll-off angles of water of 20° or less. An example of the superhydrophobic effect in live application is Team Alinghi's use of specially treated sailing jackets in the America's Cup. The treatment is built up from micrometre-sized particles in combination with traditional fluorine chemistry. Superhydrophobic paper has recently been developed, with unique properties for application in paper-based electronics and the medical industry. [ 61 ] The paper is synthesized in an organic-free medium, which makes it environmentally friendly. It has antimicrobial properties because it does not hold moisture, making it well suited for surgical applications. It could be a breakthrough for the paper-based electronics industry: its resistance to aqueous and organic solvents makes it an ideal choice for developing electronic sensors and chips, and skin-based analyte detection becomes possible without damaging or continuously replacing the electrodes, as the paper is immune to sweat. This field of materials science continues to be explored. A recent application of hydrophobic structures and materials is in the development of micro fuel cell chips. Reactions within the fuel cell produce waste CO 2 gas, which can be vented out through hydrophobic membranes. 
[ 62 ] The membrane consists of many microcavities which allow the gas to escape, while its hydrophobicity prevents the liquid fuel from leaking through. More fuel flows in to replace the volume previously occupied by the waste gas, and the reaction is allowed to continue. A well-known application of ultrahydrophobic surfaces is on heat exchangers, [ 63 ] where they can improve droplet shedding and even cause jumping-droplet condensation, with potential for power plants, heating and air conditioning, and desalination . [ 64 ] Rare-earth oxides, which are found to exhibit intrinsically hydrophobic surfaces, offer an alternative to surface coatings, allowing the development of thermally stable hydrophobic surfaces for heat exchangers operating at high temperature. [ 65 ] Ultrahydrophobic desalination membranes for membrane distillation have also been fabricated for improved fouling resistance, [ 66 ] and can be fabricated effectively with chemical vapor deposition . [ 67 ] It has also been suggested that superhydrophobic surfaces can repel ice or prevent ice accumulation, leading to the phenomenon of icephobicity ; however, not every superhydrophobic surface is icephobic, [ 68 ] and the approach is still under development. [ 69 ] In particular, frost formation over the entire surface is inevitable as a result of undesired inter-droplet freezing wave propagation initiated by the sample edges. Moreover, frost formation directly results in increased frost adhesion, posing severe challenges for the subsequent defrosting process. By creating a hierarchical surface, inter-droplet freezing wave propagation can be suppressed while ice/frost removal is promoted. The enhanced performance is mainly owing to the activation of the microscale edge effect in the hierarchical surface, which increases the energy barrier for ice bridging and engenders liquid lubrication during the deicing/defrosting process. [ 70 ] The ability of packaging to fully empty a viscous liquid is somewhat dependent on the surface energy of the inner walls of the container. Superhydrophobic surfaces are useful here, and emptying can be further improved by using new lubricant-impregnated surfaces. [ 71 ]
https://en.wikipedia.org/wiki/Ultrahydrophobicity
Ultramicrobacteria are bacteria that are smaller than 0.1 μm³ under all growth conditions. [ 1 ] [ 2 ] [ 3 ] This term was coined in 1981, describing cocci in seawater that were less than 0.3 μm in diameter. [ 4 ] Ultramicrobacteria have also been recovered from soil and appear to be a mixture of gram-positive , gram-negative and cell-wall-lacking species. [ 5 ] [ 2 ] Ultramicrobacteria possess a relatively high surface-area-to-volume ratio due to their small size, which aids in growth under oligotrophic (i.e. nutrient-poor) conditions. [ 2 ] The relatively small size of ultramicrobacteria also enables parasitism of larger organisms; [ 2 ] some ultramicrobacteria have been observed to be obligate or facultative parasites of various eukaryotes and prokaryotes. [ 1 ] [ 2 ] One factor allowing ultramicrobacteria to achieve their small size seems to be genome minimization, [ 1 ] [ 2 ] as in the case of the ultramicrobacterium Pelagibacter ubique , whose small 1.3 Mb genome is seemingly devoid of extraneous genetic elements like non-coding DNA , transposons , extrachromosomal elements, etc. [ 2 ] However, genomic data from ultramicrobacteria are lacking, [ 2 ] since the study of ultramicrobacteria, like that of many other prokaryotes, is hindered by difficulties in cultivating them. [ 3 ] Microbacterial studies from Berkeley Lab at UC Berkeley have produced detailed microscopy images of ultra-small microbial species. [ 6 ] Cells imaged have an average volume of 0.009 μm³, meaning that about 150,000 of them could fit on the tip of a human hair. [ 6 ] These bacteria were found in groundwater samples and analyzed with 2-D and 3-D cryogenic transmission electron microscopy. These ultra-small bacteria, with genomes about 1 million base pairs long, [ 6 ] display dense spirals of DNA, few ribosomes, hair-like fibrous appendages, and minimized metabolic systems. [ 6 ] Such cells probably gain most essential nutrients and metabolites from other bacteria. [ 6 ] Bacteria in the ultra-small size range are thought to be rather common but difficult to detect. [ 6 ] Ultramicrobacteria are commonly confused with ultramicrocells, the latter of which are the dormant , stress-resistant forms of larger cells that form under starvation conditions [ 1 ] [ 2 ] [ 7 ] (these larger cells downregulate their metabolism, stop growing and stabilize their DNA to create ultramicrocells that remain viable for years [ 1 ] [ 8 ] ), whereas the small size of ultramicrobacteria is not a starvation response and is maintained even under nutrient-rich conditions. [ 3 ] The term "nanobacteria" is sometimes used synonymously with ultramicrobacteria in the scientific literature, [ 2 ] but ultramicrobacteria are distinct from the purported nanobacteria or "calcifying nanoparticles", which were proposed to be living organisms 0.1 μm in diameter. [ 9 ] These structures are now thought to be nonliving, [ 10 ] and are likely precipitated particles of inorganic material. [ 11 ] [ 12 ]
https://en.wikipedia.org/wiki/Ultramicrobacteria
Ultramicrotomy is a method for cutting specimens into extremely thin slices, called ultra-thin sections, that can be studied and documented at different magnifications in a transmission electron microscope (TEM). It is used mostly for biological specimens, but sections of plastics and soft metals can also be prepared. Sections must be very thin because the 50 to 125 kV electrons of the standard electron microscope cannot pass through biological material much thicker than 150 nm. For the best resolutions, sections should be from 30 to 60 nm. This is roughly the equivalent of splitting a 0.1 mm-thick human hair into 2,000 slices along its diameter, or cutting a single red blood cell into 100 slices. [ 1 ] Ultra-thin sections of specimens are cut using a specialized instrument called an "ultramicrotome". The ultramicrotome is fitted with either a diamond knife, for most biological ultra-thin sectioning, or a glass knife, often used for initial cuts. There are numerous other pieces of equipment involved in the ultramicrotomy process. Before selecting an area of the specimen block to be ultra-thin sectioned, the technician examines semithin or "thick" sections ranging from 0.5 to 2 μm. These thick sections are also known as survey sections and are viewed under a light microscope to determine whether the right area of the specimen is in position for thin sectioning. "Ultra-thin" sections from 50 to 100 nm thick can then be viewed in the TEM. Tissue sections obtained by ultramicrotomy are compressed by the cutting force of the knife. In addition, interference microscopy of the cut surface of the blocks reveals that the sections are often not flat. With Epon or Vestopal as the embedding medium, the ridges and valleys usually do not exceed 0.5 μm in height, i.e., 5–10 times the thickness of ordinary sections (1). A small sample is taken from the specimen to be investigated. Specimens may be from biological matter, like animal or plant tissue, or from inorganic material such as rock, metal, magnetic tape, plastic, film, etc. [ 3 ] The sample block is first trimmed to create a block face 1 mm by 1 mm in size. "Thick" sections (1 μm) are taken to be examined under an optical microscope . An area is chosen to be sectioned for TEM and the block face is re-trimmed to a size no larger than 0.7 mm on a side. Block faces usually have a square, trapezoidal, rectangular, or triangular shape. Finally, thin sections are cut with a glass or diamond knife using an ultramicrotome and the sections are left floating on water that is held in a boat or trough. The sections are then retrieved from the water surface and mounted on a copper , nickel , gold , or other metal grid. The ideal section thickness for transmission electron microscopy with accelerating voltages between 50 kV and 120 kV is about 30–100 nm. In 1952, Humberto Fernandez Moran introduced cryo-ultramicrotomy , a similar technique performed at freezing temperatures between −20 and −150 °C. Cryo-ultramicrotomy can be used to cut ultra-thin frozen biological specimens. One of its advantages over the more "traditional" ultramicrotomy process is speed, since it should be possible to freeze and section a specimen in 1 to 2 hours.
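The hair-splitting comparison above is a quick arithmetic check (spelled out here for illustration, taking the 0.1 mm figure quoted): 0.1 mm / 2000 = 100,000 nm / 2000 = 50 nm {\displaystyle 0.1\,{\text{mm}}/2000=100{,}000\,{\text{nm}}/2000=50\,{\text{nm}}} , which lands inside the 30–60 nm window quoted for the best resolutions.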
https://en.wikipedia.org/wiki/Ultramicrotomy
In hyperbolic geometry , two lines are said to be ultraparallel if they do not intersect and are not limiting parallel . The ultraparallel theorem states that every pair of (distinct) ultraparallel lines has a unique common perpendicular (a hyperbolic line which is perpendicular to both lines). Let r and s be two ultraparallel lines. From any two distinct points A and C on s draw AB and CB' perpendicular to r with B and B' on r . If it happens that AB = CB', then the desired common perpendicular joins the midpoints of AC and BB' (by the symmetry of the Saccheri quadrilateral ACB'B). If not, we may suppose AB < CB' without loss of generality. Let E be a point on the line s on the opposite side of A from C. Take A' on CB' so that A'B' = AB. Through A' draw a line s' (A'E') on the side closer to E, so that the angle B'A'E' is the same as angle BAE. Then s' meets s in an ordinary point D'. Construct a point D on ray AE so that AD = A'D'. Then D' ≠ D. They are the same distance from r and both lie on s. So the perpendicular bisector of D'D (a segment of s) is also perpendicular to r. [ 1 ] (If r and s were asymptotically parallel rather than ultraparallel, this construction would fail because s' would not meet s. Rather s' would be limiting parallel to both s and r.) Let a < b < c < d {\displaystyle a<b<c<d} be four distinct points on the abscissa of the Cartesian plane . Let p {\displaystyle p} and q {\displaystyle q} be semicircles above the abscissa with diameters a b {\displaystyle ab} and c d {\displaystyle cd} respectively. Then in the Poincaré half-plane model HP, p {\displaystyle p} and q {\displaystyle q} represent ultraparallel lines. Compose the following two hyperbolic motions : the translation x → x − a {\displaystyle x\to x-a} and inversion in the unit semicircle. Then a → ∞ , b → ( b − a ) − 1 , c → ( c − a ) − 1 , d → ( d − a ) − 1 . {\displaystyle a\to \infty ,\quad b\to (b-a)^{-1},\quad c\to (c-a)^{-1},\quad d\to (d-a)^{-1}.} Now continue with these two hyperbolic motions: the translation x → x − ( b − a ) − 1 {\displaystyle x\to x-(b-a)^{-1}} and the dilation x → λ x {\displaystyle x\to \lambda x} with the λ > 0 {\displaystyle \lambda >0} that sends the image of c to 1. Then a {\displaystyle a} stays at ∞ {\displaystyle \infty } , b → 0 {\displaystyle b\to 0} , c → 1 {\displaystyle c\to 1} , d → z {\displaystyle d\to z} (say). The unique semicircle, with center at the origin, perpendicular to the one on 1 z {\displaystyle 1z} must have a radius tangent to the radius of the other. The right triangle formed by the abscissa and the perpendicular radii has hypotenuse of length 1 2 ( z + 1 ) {\displaystyle {\begin{matrix}{\frac {1}{2}}\end{matrix}}(z+1)} . Since 1 2 ( z − 1 ) {\displaystyle {\begin{matrix}{\frac {1}{2}}\end{matrix}}(z-1)} is the radius of the semicircle on 1 z {\displaystyle 1z} , the common perpendicular sought has radius-square 1 4 ( z + 1 ) 2 − 1 4 ( z − 1 ) 2 = z . {\displaystyle {\tfrac {1}{4}}(z+1)^{2}-{\tfrac {1}{4}}(z-1)^{2}=z.} The four hyperbolic motions that produced z {\displaystyle z} above can each be inverted and applied in reverse order to the semicircle centered at the origin and of radius z {\displaystyle {\sqrt {z}}} to yield the unique hyperbolic line perpendicular to both ultraparallels p {\displaystyle p} and q {\displaystyle q} . In the Beltrami-Klein model of hyperbolic geometry, two ultraparallel lines correspond to two non-intersecting chords, and their common perpendicular is found using poles: the pole of a chord is the intersection of the tangent lines to the boundary circle at the chord's endpoints, and a chord is perpendicular to a given chord exactly when its extension passes through that chord's pole, so the common perpendicular is the chord through both poles. If one of the chords happens to be a diameter, we do not have a pole, but in this case any chord perpendicular to the diameter in the Euclidean sense is also perpendicular to it in the Beltrami-Klein model, and so we draw a line through the pole of the other chord intersecting the diameter at right angles to get the common perpendicular. The proof is completed by showing this construction is always possible. Alternatively, we can construct the common perpendicular of the ultraparallel lines as follows: the ultraparallel lines in the Beltrami-Klein model are two non-intersecting chords. 
When extended, however, these chords intersect at a point outside the circle. The polar of that intersection point is the desired common perpendicular. [ 2 ]
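The half-plane construction above also has a direct numerical counterpart: two semicircles centered on the abscissa meet orthogonally exactly when the squared distance between their centers equals the sum of their squared radii, which gives two equations for the center and radius of the common perpendicular. The following Python sketch (illustrative names, added here) solves them:

```python
import math

def common_perpendicular(a, b, c, d):
    """Common perpendicular to the half-plane lines given by semicircles
    over (a, b) and (c, d): returns (center, radius) of the unique
    semicircle meeting both orthogonally."""
    mp, rp = (a + b) / 2, (b - a) / 2   # center and radius of p
    mq, rq = (c + d) / 2, (d - c) / 2   # center and radius of q
    # Orthogonality: (x0 - mp)^2 = rad^2 + rp^2 and (x0 - mq)^2 = rad^2 + rq^2.
    x0 = (rp**2 - rq**2 + mq**2 - mp**2) / (2 * (mq - mp))
    rad2 = (x0 - mp)**2 - rp**2
    if rad2 <= 0:
        # tangent or overlapping chords: limiting parallel or intersecting lines
        raise ValueError("lines are not ultraparallel")
    return x0, math.sqrt(rad2)

print(common_perpendicular(0.0, 1.0, 2.0, 6.0))  # (≈1.714, ≈1.107)
```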
https://en.wikipedia.org/wiki/Ultraparallel_theorem
In mathematics , an ultrapolynomial is a power series in several variables whose coefficients are bounded in some specific sense. Let d ∈ N {\displaystyle d\in \mathbb {N} } , let K {\displaystyle K} be a field (typically R {\displaystyle \mathbb {R} } or C {\displaystyle \mathbb {C} } ) equipped with a norm (typically the absolute value ), and let ( M p ) p ∈ N {\displaystyle (M_{p})_{p\in \mathbb {N} }} be a sequence of positive numbers, where for a multi-index α {\displaystyle \alpha } one writes M α := M | α | {\displaystyle M_{\alpha }:=M_{|\alpha |}} . Then a function P : K d → K {\displaystyle P:K^{d}\rightarrow K} of the form P ( x ) = ∑ α ∈ N d c α x α {\displaystyle P(x)=\sum _{\alpha \in \mathbb {N} ^{d}}c_{\alpha }x^{\alpha }} is called an ultrapolynomial of class { M p } {\displaystyle \left\{M_{p}\right\}} (resp. of class ( M p ) {\displaystyle \left(M_{p}\right)} ) if the coefficients c α {\displaystyle c_{\alpha }} satisfy | c α | ≤ C L | α | / M α {\displaystyle \left|c_{\alpha }\right|\leq CL^{\left|\alpha \right|}/M_{\alpha }} for all α ∈ N d {\displaystyle \alpha \in \mathbb {N} ^{d}} , for some L > 0 {\displaystyle L>0} and C > 0 {\displaystyle C>0} (resp. for every L > 0 {\displaystyle L>0} and some C ( L ) > 0 {\displaystyle C(L)>0} ).
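As a quick illustration (an example added here, not taken from the source): with d = 1 {\displaystyle d=1} and the factorial sequence M p = p ! {\displaystyle M_{p}=p!} , the exponential series P ( x ) = ∑ p x p / p ! {\displaystyle P(x)=\sum _{p}x^{p}/p!} has coefficients | c p | = 1 / M p {\displaystyle |c_{p}|=1/M_{p}} , so the required bound holds with C = L = 1 {\displaystyle C=L=1} , and e x {\displaystyle e^{x}} is an ultrapolynomial of class { p ! } {\displaystyle \{p!\}} .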
https://en.wikipedia.org/wiki/Ultrapolynomial
Ultrapotassic igneous rocks are a class of rare, volumetrically minor, generally ultramafic or mafic silica-depleted igneous rocks . While there are debates on the exact classification of ultrapotassic rocks, they are defined using the chemical screen K 2 O/Na 2 O > 3 in much of the scientific literature. [ 1 ] However, caution is indicated in interpreting the use of the term "ultrapotassic", and the nomenclature of these rocks continues to be debated, with some classifications using K 2 O/Na 2 O > 2 to indicate a rock is ultrapotassic. The magmas that produce ultrapotassic rocks are produced by a variety of mechanisms and from a variety of sources, but generally originate in a heterogeneous , anomalous , phlogopite -bearing upper mantle . [ 2 ] A number of conditions are favorable for the formation of ultrapotassic magmas. [ 3 ] Mantle sources of ultrapotassic magmas may contain subducted sediments, or the sources may have been enriched in potassium by melts or fluids partly derived from subducted sediments. Phlogopite and/or potassic amphibole are typical in the sources from which many such magmas have been derived. Ultrapotassic granites are uncommon and may be produced by melting of the continental crust above upwelling mafic magma, such as at rift zones. The economic importance of ultrapotassic rocks is wide and varied. Because kimberlites , lamproites and lamprophyres are all produced at depths of 120 km or greater, they are known to be a major source of diamond deposits and can bring diamonds to the surface as xenocrysts . [ 4 ] Additionally, ultrapotassic granites are a known host for granite-hosted gold mineralization as well as significant porphyry -style mineralization. [ 5 ] Ultrapotassic A-type intracontinental granites may also be associated with fluorite and columbite – tantalite mineralization.
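The chemical screen itself is a one-line computation; here is a minimal Python sketch (the function name, the default threshold choice, and the sample oxide values are illustrative assumptions):

```python
def is_ultrapotassic(k2o, na2o, threshold=3.0):
    """Apply the K2O/Na2O screen discussed above; threshold=3.0 follows
    much of the literature, while some classifications use 2.0 instead."""
    if na2o <= 0:
        raise ValueError("Na2O content must be positive")
    return k2o / na2o > threshold

print(is_ultrapotassic(6.1, 1.4))  # hypothetical oxide values -> True (ratio ≈ 4.4)
```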
https://en.wikipedia.org/wiki/Ultrapotassic_igneous_rocks
The ultraproduct is a mathematical construction that appears mainly in abstract algebra and mathematical logic , in particular in model theory and set theory . An ultraproduct is a quotient of the direct product of a family of structures . All factors need to have the same signature . The ultrapower is the special case of this construction in which all factors are equal. For example, ultrapowers can be used to construct new fields from given ones. The hyperreal numbers , an ultrapower of the real numbers , are a special case of this. Some striking applications of ultraproducts include very elegant proofs of the compactness theorem and the completeness theorem , Keisler 's ultrapower theorem, which gives an algebraic characterization of the semantic notion of elementary equivalence, and the Robinson–Zakon presentation of the use of superstructures and their monomorphisms to construct nonstandard models of analysis, leading to the growth of the area of nonstandard analysis , which was pioneered (as an application of the compactness theorem) by Abraham Robinson . The general method for getting ultraproducts uses an index set I , {\displaystyle I,} a structure M i {\displaystyle M_{i}} (assumed to be non-empty in this article) for each element i ∈ I {\displaystyle i\in I} (all of the same signature ), and an ultrafilter U {\displaystyle {\mathcal {U}}} on I . {\displaystyle I.} For any two elements a ∙ = ( a i ) i ∈ I {\displaystyle a_{\bullet }=\left(a_{i}\right)_{i\in I}} and b ∙ = ( b i ) i ∈ I {\displaystyle b_{\bullet }=\left(b_{i}\right)_{i\in I}} of the Cartesian product ∏ i ∈ I M i , {\textstyle {\textstyle \prod \limits _{i\in I}}M_{i},} declare them to be U {\displaystyle {\mathcal {U}}} -equivalent , written a ∙ ∼ b ∙ {\displaystyle a_{\bullet }\sim b_{\bullet }} or a ∙ = U b ∙ , {\displaystyle a_{\bullet }=_{\mathcal {U}}b_{\bullet },} if and only if the set of indices { i ∈ I : a i = b i } {\displaystyle \left\{i\in I:a_{i}=b_{i}\right\}} on which they agree is an element of U ; {\displaystyle {\mathcal {U}};} in symbols, a ∙ ∼ b ∙ ⟺ { i ∈ I : a i = b i } ∈ U , {\displaystyle a_{\bullet }\sim b_{\bullet }\;\iff \;\left\{i\in I:a_{i}=b_{i}\right\}\in {\mathcal {U}},} which compares components only relative to the ultrafilter U . {\displaystyle {\mathcal {U}}.} This binary relation ∼ {\displaystyle \,\sim \,} is an equivalence relation [ proof 1 ] on the Cartesian product ∏ i ∈ I M i . {\displaystyle {\textstyle \prod \limits _{i\in I}}M_{i}.} The ultraproduct of M ∙ = ( M i ) i ∈ I {\displaystyle M_{\bullet }=\left(M_{i}\right)_{i\in I}} modulo U {\displaystyle {\mathcal {U}}} is the quotient set of ∏ i ∈ I M i {\displaystyle {\textstyle \prod \limits _{i\in I}}M_{i}} with respect to ∼ {\displaystyle \sim } and is therefore sometimes denoted by ∏ i ∈ I M i / U {\displaystyle {\textstyle \prod \limits _{i\in I}}M_{i}\,/\,{\mathcal {U}}} or ∏ U M ∙ . {\displaystyle {\textstyle \prod }_{\mathcal {U}}\,M_{\bullet }.} Explicitly, if the U {\displaystyle {\mathcal {U}}} - equivalence class of an element a ∈ ∏ i ∈ I M i {\displaystyle a\in {\textstyle \prod \limits _{i\in I}}M_{i}} is denoted by a U := { x ∈ ∏ i ∈ I M i : x ∼ a } {\displaystyle a_{\mathcal {U}}:={\big \{}x\in {\textstyle \prod \limits _{i\in I}}M_{i}\;:\;x\sim a{\big \}}} then the ultraproduct is the set of all U {\displaystyle {\mathcal {U}}} -equivalence classes ∏ U M ∙ = ∏ i ∈ I M i / U := { a U : a ∈ ∏ i ∈ I M i } . 
{\displaystyle {\prod }_{\mathcal {U}}\,M_{\bullet }\;=\;\prod _{i\in I}M_{i}\,/\,{\mathcal {U}}\;:=\;\left\{a_{\mathcal {U}}\;:\;a\in {\textstyle \prod \limits _{i\in I}}M_{i}\right\}.} Although U {\displaystyle {\mathcal {U}}} was assumed to be an ultrafilter, the construction above can be carried out more generally whenever U {\displaystyle {\mathcal {U}}} is merely a filter on I , {\displaystyle I,} in which case the resulting quotient set ∏ i ∈ I M i / U {\displaystyle {\textstyle \prod \limits _{i\in I}}M_{i}/\,{\mathcal {U}}} is called a reduced product . When U {\displaystyle {\mathcal {U}}} is a principal ultrafilter (which happens if and only if U {\displaystyle {\mathcal {U}}} contains its kernel ∩ U {\displaystyle \cap \,{\mathcal {U}}} ) then the ultraproduct is isomorphic to one of the factors. And so usually, U {\displaystyle {\mathcal {U}}} is not a principal ultrafilter , which happens if and only if U {\displaystyle {\mathcal {U}}} is free (meaning ∩ U = ∅ {\displaystyle \cap \,{\mathcal {U}}=\varnothing } ), or equivalently, if every cofinite subset of I {\displaystyle I} is an element of U . {\displaystyle {\mathcal {U}}.} Since every ultrafilter on a finite set is principal, the index set I {\displaystyle I} is consequently also usually infinite. The ultraproduct thus acts as a filtered product space: two elements are identified exactly when they agree on a set of components that belongs to the ultrafilter, with the remaining components ignored under the equivalence. One may define a finitely additive measure m {\displaystyle m} on the index set I {\displaystyle I} by saying m ( A ) = 1 {\displaystyle m(A)=1} if A ∈ U {\displaystyle A\in {\mathcal {U}}} and m ( A ) = 0 {\displaystyle m(A)=0} otherwise. Then two members of the Cartesian product are equivalent precisely if they are equal almost everywhere on the index set. The ultraproduct is the set of equivalence classes thus generated. Finitary operations on the Cartesian product ∏ i ∈ I M i {\displaystyle {\textstyle \prod \limits _{i\in I}}M_{i}} are defined pointwise (for example, if + {\displaystyle +} is a binary function then a i + b i = ( a + b ) i {\displaystyle a_{i}+b_{i}=(a+b)_{i}} ). Other relations can be extended the same way: R ( a U 1 , … , a U n ) ⟺ { i ∈ I : R M i ( a i 1 , … , a i n ) } ∈ U , {\displaystyle R\left(a_{\mathcal {U}}^{1},\dots ,a_{\mathcal {U}}^{n}\right)~\iff ~\left\{i\in I:R^{M_{i}}\left(a_{i}^{1},\dots ,a_{i}^{n}\right)\right\}\in {\mathcal {U}},} where a U {\displaystyle a_{\mathcal {U}}} denotes the U {\displaystyle {\mathcal {U}}} -equivalence class of a {\displaystyle a} with respect to ∼ . {\displaystyle \sim .} In particular, if every M i {\displaystyle M_{i}} is an ordered field then so is the ultraproduct. An ultrapower is an ultraproduct for which all the factors M i {\displaystyle M_{i}} are equal. Explicitly, the ultrapower of a set M {\displaystyle M} modulo U {\displaystyle {\mathcal {U}}} is the ultraproduct ∏ i ∈ I M i / U = ∏ U M ∙ {\displaystyle {\textstyle \prod \limits _{i\in I}}M_{i}\,/\,{\mathcal {U}}={\textstyle \prod }_{\mathcal {U}}\,M_{\bullet }} of the indexed family M ∙ := ( M i ) i ∈ I {\displaystyle M_{\bullet }:=\left(M_{i}\right)_{i\in I}} defined by M i := M {\displaystyle M_{i}:=M} for every index i ∈ I . 
{\displaystyle i\in I.} The ultrapower may be denoted by ∏ U M {\displaystyle {\textstyle \prod }_{\mathcal {U}}\,M} or (since ∏ i ∈ I M {\displaystyle {\textstyle \prod \limits _{i\in I}}M} is often denoted by M I {\displaystyle M^{I}} ) by M I / U := ∏ i ∈ I M / U {\displaystyle M^{I}/{\mathcal {U}}~:=~\prod _{i\in I}M\,/\,{\mathcal {U}}\,} For every m ∈ M , {\displaystyle m\in M,} let ( m ) i ∈ I {\displaystyle (m)_{i\in I}} denote the constant map I → M {\displaystyle I\to M} that is identically equal to m . {\displaystyle m.} This constant map/tuple is an element of the Cartesian product M I = ∏ i ∈ I M {\displaystyle M^{I}={\textstyle \prod \limits _{i\in I}}M} and so the assignment m ↦ ( m ) i ∈ I {\displaystyle m\mapsto (m)_{i\in I}} defines a map M → ∏ i ∈ I M . {\displaystyle M\to {\textstyle \prod \limits _{i\in I}}M.} The natural embedding of M {\displaystyle M} into ∏ U M {\displaystyle {\textstyle \prod }_{\mathcal {U}}\,M} is the map M → ∏ U M {\displaystyle M\to {\textstyle \prod }_{\mathcal {U}}\,M} that sends an element m ∈ M {\displaystyle m\in M} to the U {\displaystyle {\mathcal {U}}} -equivalence class of the constant tuple ( m ) i ∈ I . {\displaystyle (m)_{i\in I}.} The hyperreal numbers are the ultraproduct of one copy of the real numbers for every natural number, with regard to an ultrafilter over the natural numbers containing all cofinite sets. Their order is the extension of the order of the real numbers. For example, the sequence ω {\displaystyle \omega } given by ω i = i {\displaystyle \omega _{i}=i} defines an equivalence class representing a hyperreal number that is greater than any real number. Analogously, one can define nonstandard integers , nonstandard complex numbers , etc., by taking the ultraproduct of copies of the corresponding structures. As an example of the carrying over of relations into the ultraproduct, consider the sequence ψ {\displaystyle \psi } defined by ψ i = 2 i . {\displaystyle \psi _{i}=2i.} Because ψ i > ω i = i {\displaystyle \psi _{i}>\omega _{i}=i} for all i , {\displaystyle i,} it follows that the equivalence class of ψ {\displaystyle \psi } is greater than the equivalence class of ω , {\displaystyle \omega ,} so that it can be interpreted as an infinite number which is greater than the one originally constructed. However, let χ i = i {\displaystyle \chi _{i}=i} for i {\displaystyle i} not equal to 7 , {\displaystyle 7,} but χ 7 = 8. {\displaystyle \chi _{7}=8.} The set of indices on which ω {\displaystyle \omega } and χ {\displaystyle \chi } agree is cofinite, hence a member of the chosen ultrafilter (because ω {\displaystyle \omega } and χ {\displaystyle \chi } agree almost everywhere), so ω {\displaystyle \omega } and χ {\displaystyle \chi } belong to the same equivalence class. In the theory of large cardinals , a standard construction is to take the ultraproduct of the whole set-theoretic universe with respect to some carefully chosen ultrafilter U . {\displaystyle {\mathcal {U}}.} Properties of this ultrafilter U {\displaystyle {\mathcal {U}}} have a strong influence on (higher order) properties of the ultraproduct; for example, if U {\displaystyle {\mathcal {U}}} is σ {\displaystyle \sigma } -complete, then the ultraproduct will again be well-founded. (See measurable cardinal for the prototypical example.) Łoś's theorem, also called the fundamental theorem of ultraproducts , is due to Jerzy Łoś (the surname is pronounced [ˈwɔɕ] , approximately "wash"). 
It states that any first-order formula is true in the ultraproduct if and only if the set of indices i {\displaystyle i} such that the formula is true in M i {\displaystyle M_{i}} is a member of U . {\displaystyle {\mathcal {U}}.} More precisely: Let σ {\displaystyle \sigma } be a signature, U {\displaystyle {\mathcal {U}}} an ultrafilter over a set I , {\displaystyle I,} and for each i ∈ I {\displaystyle i\in I} let M i {\displaystyle M_{i}} be a σ {\displaystyle \sigma } -structure. Let ∏ U M ∙ {\displaystyle {\textstyle \prod }_{\mathcal {U}}\,M_{\bullet }} or ∏ i ∈ I M i / U {\displaystyle {\textstyle \prod \limits _{i\in I}}M_{i}/{\mathcal {U}}} be the ultraproduct of the M i {\displaystyle M_{i}} with respect to U . {\displaystyle {\mathcal {U}}.} Then, for each a 1 , … , a n ∈ ∏ i ∈ I M i , {\displaystyle a^{1},\ldots ,a^{n}\in {\textstyle \prod \limits _{i\in I}}M_{i},} where a k = ( a i k ) i ∈ I , {\displaystyle a^{k}=\left(a_{i}^{k}\right)_{i\in I},} and for every σ {\displaystyle \sigma } -formula ϕ , {\displaystyle \phi ,} ∏ U M ∙ ⊨ ϕ [ a U 1 , … , a U n ] ⟺ { i ∈ I : M i ⊨ ϕ [ a i 1 , … , a i n ] } ∈ U . {\displaystyle {\prod }_{\mathcal {U}}\,M_{\bullet }\models \phi \left[a_{\mathcal {U}}^{1},\ldots ,a_{\mathcal {U}}^{n}\right]~\iff ~\{i\in I:M_{i}\models \phi [a_{i}^{1},\ldots ,a_{i}^{n}]\}\in {\mathcal {U}}.} The theorem is proved by induction on the complexity of the formula ϕ . {\displaystyle \phi .} The fact that U {\displaystyle {\mathcal {U}}} is an ultrafilter (and not just a filter) is used in the negation clause, and the axiom of choice is needed at the existential quantifier step. As an application, one obtains the transfer theorem for hyperreal fields . Let R {\displaystyle R} be a unary relation in the structure M , {\displaystyle M,} and form the ultrapower of M . {\displaystyle M.} Then the set S = { x ∈ M : R x } {\displaystyle S=\{x\in M:Rx\}} has an analog ∗ S {\displaystyle {}^{*}S} in the ultrapower, and first-order formulas involving S {\displaystyle S} are also valid for ∗ S . {\displaystyle {}^{*}S.} For example, let M {\displaystyle M} be the reals, and let R x {\displaystyle Rx} hold if x {\displaystyle x} is a rational number. Then in M {\displaystyle M} we can say that for any pair of rationals x {\displaystyle x} and y , {\displaystyle y,} there exists another number z {\displaystyle z} such that z {\displaystyle z} is not rational, and x < z < y . {\displaystyle x<z<y.} Since this can be translated into a first-order logical formula in the relevant formal language, Łoś's theorem implies that ∗ S {\displaystyle {}^{*}S} has the same property. That is, we can define a notion of the hyperrational numbers, which are a subset of the hyperreals, and they have the same first-order properties as the rationals. Consider, however, the Archimedean property of the reals, which states that there is no real number x {\displaystyle x} such that x > 1 , x > 1 + 1 , x > 1 + 1 + 1 , … {\displaystyle x>1,\;x>1+1,\;x>1+1+1,\ldots } for every inequality in the infinite list. Łoś's theorem does not apply to the Archimedean property, because the Archimedean property cannot be stated in first-order logic. In fact, the Archimedean property is false for the hyperreals, as shown by the construction of the hyperreal number ω {\displaystyle \omega } above. In model theory and set theory , the direct limit of a sequence of ultrapowers is often considered. In model theory , this construction can be referred to as an ultralimit or limiting ultrapower . 
Beginning with a structure A 0 {\displaystyle A_{0}} and an ultrafilter D 0 , {\displaystyle {\mathcal {D}}_{0},} form an ultrapower A 1 . {\displaystyle A_{1}.} Then repeat the process to form A 2 , {\displaystyle A_{2},} and so forth. For each n {\displaystyle n} there is a canonical diagonal embedding A n → A n + 1 . {\displaystyle A_{n}\to A_{n+1}.} At limit stages, such as A ω , {\displaystyle A_{\omega },} form the direct limit of earlier stages. One may continue into the transfinite. The ultrafilter monad is the codensity monad of the inclusion of the category of finite sets into the category of all sets . [ 1 ] Similarly, the ultraproduct monad is the codensity monad of the inclusion of the category F i n F a m {\displaystyle \mathbf {FinFam} } of finitely-indexed families of sets into the category F a m {\displaystyle \mathbf {Fam} } of all indexed families of sets. So in this sense, ultraproducts are categorically inevitable. [ 1 ] Explicitly, an object of F a m {\displaystyle \mathbf {Fam} } consists of a non-empty index set I {\displaystyle I} and an indexed family ( M i ) i ∈ I {\displaystyle \left(M_{i}\right)_{i\in I}} of sets. A morphism ( N j ) j ∈ J → ( M i ) i ∈ I {\displaystyle \left(N_{j}\right)_{j\in J}\to \left(M_{i}\right)_{i\in I}} between two objects consists of a function ϕ : I → J {\displaystyle \phi :I\to J} between the index sets and a J {\displaystyle J} -indexed family ( ϕ j ) j ∈ J {\displaystyle \left(\phi _{j}\right)_{j\in J}} of functions ϕ j : M ϕ ( j ) → N j . {\displaystyle \phi _{j}:M_{\phi (j)}\to N_{j}.} The category F i n F a m {\displaystyle \mathbf {FinFam} } is a full subcategory of this category F a m {\displaystyle \mathbf {Fam} } consisting of all objects ( M i ) i ∈ I {\displaystyle \left(M_{i}\right)_{i\in I}} whose index set I {\displaystyle I} is finite. The codensity monad of the inclusion map F i n F a m ↪ F a m {\displaystyle \mathbf {FinFam} \hookrightarrow \mathbf {Fam} } is then, in essence, given by ( M i ) i ∈ I ↦ ( ∏ i ∈ I M i / U ) U ∈ U ( I ) . {\displaystyle \left(M_{i}\right)_{i\in I}~\mapsto ~\left(\prod _{i\in I}M_{i}\,/\,{\mathcal {U}}\right)_{{\mathcal {U}}\in U(I)}\,.}
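Because every ultrafilter on a finite index set is principal, the finite case can actually be computed. The following Python sketch (names illustrative, added here) forms the quotient of a finite Cartesian product by the equivalence a ∼ b iff {i : a_i = b_i} ∈ U, and shows the ultraproduct collapsing to the factor singled out by a principal ultrafilter, as stated above.

```python
from itertools import product

def principal_ultrafilter(indices, k):
    """All subsets of the finite index set that contain k
    (on a finite set, every ultrafilter has this form)."""
    idx = list(indices)
    subsets = set()
    for mask in range(1 << len(idx)):
        S = frozenset(idx[j] for j in range(len(idx)) if (mask >> j) & 1)
        if k in S:
            subsets.add(S)
    return subsets

def ultraproduct(factors, U):
    """Quotient of prod(factors) by: a ~ b iff {i : a_i = b_i} in U.

    factors: dict mapping each index i to a finite set M_i
    U:       a set of frozensets of indices (the ultrafilter)
    Returns the list of equivalence classes.
    """
    I = sorted(factors)
    elements = [dict(zip(I, t)) for t in product(*(factors[i] for i in I))]
    classes = []
    for a in elements:
        for cls in classes:
            b = cls[0]  # compare against a representative of the class
            if frozenset(i for i in I if a[i] == b[i]) in U:
                cls.append(a)
                break
        else:
            classes.append([a])
    return classes

factors = {0: {'x', 'y'}, 1: {0, 1, 2}, 2: {True, False}}
U = principal_ultrafilter(factors.keys(), k=1)
print(len(ultraproduct(factors, U)))  # 3 == |M_1|: isomorphic to the factor at k
```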
https://en.wikipedia.org/wiki/Ultraproduct
Ultrapure water ( UPW ), high-purity water or highly purified water ( HPW ) is water that has been purified to uncommonly stringent specifications. Ultrapure water is a term commonly used in manufacturing to emphasize the fact that the water is treated to the highest levels of purity for all contaminant types, including organic and inorganic compounds, dissolved and particulate matter, and dissolved gases , as well as volatile and non-volatile compounds, reactive and inert compounds, and hydrophilic and hydrophobic compounds. UPW and the commonly used term deionized (DI) water are not the same. In addition to the fact that UPW has organic particles and dissolved gases removed, a typical UPW system has three stages: a pretreatment stage to produce purified water , a primary stage to further purify the water, and a polishing stage, the most expensive part of the treatment process. [ A ] A number of organizations and groups develop and publish standards associated with the production of UPW. For microelectronics and power, they include Semiconductor Equipment and Materials International ( SEMI ) (microelectronics and photovoltaic ), American Society for Testing and Materials International (ASTM International) (semiconductor, power), Electric Power Research Institute (EPRI) (power), American Society of Mechanical Engineers (ASME) (power), and International Association for the Properties of Water and Steam (IAPWS) (power). Pharmaceutical plants follow water quality standards as developed by pharmacopeias , of which three examples are the United States Pharmacopeia , European Pharmacopeia , and Japanese Pharmacopeia . The most widely used requirements for UPW quality are documented by ASTM D5127 "Standard Guide for Ultra-Pure Water Used in the Electronics and Semiconductor Industries" [ 1 ] and SEMI F63 "Guide for ultrapure water used in semiconductor processing". [ 2 ] Bacterial, particulate, organic, and inorganic contamination varies depending on a number of factors, including the feed water used to make UPW, as well as the selection of the piping materials used to convey it. Bacteria are typically reported in colony-forming units ( CFU ) per volume of UPW, and particles as a number per volume of UPW. Total organic carbon (TOC), metallic contaminants, and anionic contaminants are measured in dimensionless parts-per notation , such as ppm, ppb, ppt, and ppq. [ citation needed ] Bacteria have been referred to as among the most obstinate contaminants in this list to control. [ 3 ] Techniques that help to minimize bacterial colony growth within UPW streams include occasional chemical or steam sanitization (which is common in the pharmaceutical industry), ultrafiltration (found in some pharmaceutical, but mostly semiconductor industries), ozonation , and optimization of piping system designs that promote the use of Reynolds number criteria for minimum flow, [ 4 ] along with minimization of dead legs. In modern and advanced UPW systems, positive (higher than zero) bacteria counts are typically observed on newly constructed facilities. This issue is effectively addressed by sanitization using ozone or hydrogen peroxide . With proper design of the polishing and distribution system, no positive bacteria counts are typically detected throughout the life cycle of the UPW system. To understand why bacteria are so problematic, consider three interrelated challenges. First, some species of bacteria are adapted to very low-nutrient environments and therefore flourish in high-purity water systems. 
Second, the agents that are lethal to such bacteria are often the very chemicals that the purification systems are supposed to remove. Third, even killed microorganisms represent both unwanted chemical contaminants and potential food sources when the water system is re-colonized by new bacteria. Should the particular microorganisms colonizing a water system become tolerant of the primary sanitization agent, use of an alternate agent may be precluded by the materials of construction of the water system. This places a premium on good system design and rigorous prevention measures.

Particles in UPW are the bane of the semiconductor industry, causing defects in sensitive photolithographic processes that define nanometer-sized features. In other industries, their effects can range from a nuisance to life-threatening defects. Particles can be controlled by filtration and ultrafiltration. Sources can include bacterial fragments, the sloughing of the component walls within the conduit's wetted stream, and the cleanliness of the jointing processes used to build the piping system.

Total organic carbon in ultrapure water can contribute to bacterial proliferation by providing nutrients, can substitute as a carbide for another chemical species in a sensitive thermal process, can react in unwanted ways with biochemical reactions in bioprocessing, and, in severe cases, can leave unwanted residues on production parts. TOC can come from the feed water used to produce UPW, from the components used to convey the UPW (additives in the manufactured piping products or extrusion aids and mold-release agents), from subsequent manufacturing and cleaning operations of piping systems, or from dirty pipes, fittings, and valves.

Metallic and anionic contamination in UPW systems can shut down enzymatic processes in bioprocessing, corrode equipment in the electrical power generation industry, and result in either short- or long-term failure of electronic components in semiconductor chips and photovoltaic cells. Its sources are similar to those of TOC. Depending on the level of purity needed, detection of these contaminants can range from simple conductivity (electrolytic) readings to sophisticated instrumentation such as ion chromatography (IC), atomic absorption spectroscopy (AA), and inductively coupled plasma mass spectrometry (ICP-MS).

Ultrapure water is treated through multiple steps to meet the quality standards for different users. The primary industries using UPW are the semiconductor, power, and pharmaceutical and biotechnology industries. The term "ultrapure water" became popular in the late 1970s and early 1980s to describe the particular quality of water used by these industries. While each industry uses what it calls "ultrapure water", the quality standards vary, meaning that the UPW used by a pharmaceutical plant is different from that used in a semiconductor fab or a power station. The standards are based on the application. For instance, semiconductor plants use UPW as a cleaning agent, so it is important that the water not contain dissolved contaminants that can precipitate or particles that may lodge on circuits and cause microchip failures. The power industry uses UPW to make the steam that drives steam turbines. Pharmaceutical facilities use UPW as a cleaning agent, as well as an ingredient in products, so they seek water free of endotoxins, microbes, and viruses. Today, ion exchange (IX) and electrodeionization (EDI) are the primary deionization technologies associated with UPW production, in most cases following reverse osmosis (RO).
Depending on the required water quality, UPW treatment plants often also feature degasification, microfiltration, ultrafiltration, ultraviolet irradiation, and measurement instruments (e.g., total organic carbon [TOC], resistivity/conductivity, particles, pH, and specialty measurements for specific ions). Early on, softened water produced by technologies like zeolite softening or cold lime softening was a precursor to modern UPW treatment. From there, the term "deionized" water marked the next advancement, as synthetic IX resins were invented in 1935 and became commercialized in the 1940s. The earliest "deionized" water systems relied on IX treatment to produce "high-purity" water as determined by resistivity or conductivity measurements. After commercial RO membranes emerged in the 1960s, RO use with IX treatment eventually became common. EDI was commercialized in the 1980s, and this technology has now become commonly associated with UPW treatment.

UPW is used extensively in the semiconductor industry, where the highest grade of purity is required. The amount of electronic-grade or molecular-grade water used by the semiconductor industry is comparable to the water consumption of a small city; a single factory can utilize ultrapure water (UPW) [ 5 ] at a rate of 2 MGD, or ~5500 m³/day. The UPW is usually produced on-site. The use of UPW varies; it may be used to rinse the wafer after application of chemicals, to dilute the chemicals themselves, in optics systems for immersion photolithography, or as make-up to cooling fluid in some critical applications. UPW is even sometimes used as a humidification source for the cleanroom environment. [ 6 ] The primary, and most critical, application of UPW is wafer cleaning in and after the wet-etch steps during the front end of line (FEOL) stage. [ 7 ] : 118 Impurities which can cause product contamination or impact process efficiency (e.g., etch rate) must be removed from the water during the cleaning and etching stages. In chemical-mechanical polishing processes, water is used in addition to reagents and abrasive particles. As of 2002, water containing no more than 1–2 contaminant molecules per million water molecules was considered "ultrapure" (e.g., semiconductor grade). [ 7 ] : 118

Table: Water quality standards for use in the semiconductor industry

UPW is used in other types of electronics manufacturing in a similar fashion, such as flat-panel displays, discrete components (such as LEDs), hard disk drive (HDD) platters, NAND flash for solid-state drives (SSDs), image sensors and image processors/wafer-level optics (WLO), and crystalline silicon photovoltaics; the cleanliness requirements in the semiconductor industry, however, are currently the most stringent. [ 5 ]

A typical use of ultrapure water in the pharmaceutical and biotechnology industries is summarized in the table below: [ 8 ]

Table: Uses of ultrapure water in the pharmaceutical and biotechnology industries

In order to be used for pharmaceutical and biotechnology applications for the production of licensed human and veterinary health care products, the water must comply with the specifications of the applicable pharmacopoeia monographs. (Note: Purified Water is typically the main monograph, referenced by the other monographs that use ultrapure water.) Ultrapure water is often used as a critical utility for cleaning applications (as required). It is also used to generate clean steam for sterilization.
The following table summarizes the specifications of two major pharmacopoeias for "water for injection":

Table: Pharmacopoeia specifications for water for injection

Ultrapure water and deionized water validation: Ultrapure water validation must utilize a risk-based lifecycle approach. [ 15 ] [ 16 ] [ 17 ] [ 18 ] This approach consists of three stages – design and development, qualification, and continued verification. One should utilize current regulatory guidance to comply with regulatory expectations. Typical guidance documents to consult at the time of writing are: FDA Guide to Inspections of High Purity Water Systems, High Purity Water Systems (7/93), [ 19 ] the EMEA CPMP/CVMP Note for Guidance on Quality of Water for Pharmaceutical Use (London, 2002), [ 20 ] and USP Monograph <1231> Water For Pharmaceutical Purposes. [ 21 ] However, other jurisdictions' documents may exist, and it is the responsibility of practitioners validating water systems to consult them. The World Health Organization (WHO) [ 22 ] as well as the Pharmaceutical Inspection Co-operation Scheme (PIC/S) [ 23 ] have developed technical documents which outline validation requirements and strategies for water systems.

In pure water systems, electrolytic conductivity or resistivity measurement is the most common indicator of ionic contamination. The same basic measurement is read out either in conductivity units of microsiemens per centimeter (μS/cm), typical of the pharmaceutical and power industries, or in resistivity units of megohm-centimeters (MΩ⋅cm), used in the microelectronics industries. These units are reciprocals of each other. Absolutely pure water has a conductivity of 0.05501 μS/cm and a resistivity of 18.18 MΩ⋅cm at 25 °C, the most common reference temperature to which these measurements are compensated. An example of the sensitivity of these measurements to contamination is that 0.1 ppb of sodium chloride raises the conductivity of pure water to 0.05523 μS/cm and lowers the resistivity to 18.11 MΩ⋅cm. [ 24 ] [ 25 ] Ultrapure water is easily contaminated by traces of carbon dioxide from the atmosphere passing through tiny leaks or diffusing through thin-wall polymer tubing when sample lines are used for measurement. Carbon dioxide forms conductive carbonic acid in water. For this reason, conductivity probes are most often permanently inserted directly into the main ultrapure water system piping to provide real-time continuous monitoring of contamination. These probes contain both conductivity and temperature sensors to enable accurate compensation for the very large temperature influence on the conductivity of pure waters. Conductivity probes have an operating life of many years in pure water systems. They require no maintenance except for periodic verification of measurement accuracy, typically annually.
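To make the reciprocal relationship between the two readouts concrete, the following Python sketch (not from any cited standard; function names are illustrative) converts between the units and reproduces the values quoted above:

```python
# Minimal sketch: the reciprocal relation between conductivity (uS/cm) and
# resistivity (MOhm*cm) used in UPW monitoring. 1 / (uS/cm) = MOhm*cm exactly,
# because the 10^-6 S and 10^6 Ohm prefixes cancel.

def resistivity_mohm_cm(cond_us_cm: float) -> float:
    """Convert conductivity in uS/cm to resistivity in MOhm*cm."""
    return 1.0 / cond_us_cm

def conductivity_us_cm(res_mohm_cm: float) -> float:
    """Convert resistivity in MOhm*cm to conductivity in uS/cm."""
    return 1.0 / res_mohm_cm

# Values quoted in the text, at the 25 C reference temperature:
print(resistivity_mohm_cm(0.05501))  # ~18.18 MOhm*cm, absolutely pure water
print(resistivity_mohm_cm(0.05523))  # ~18.11 MOhm*cm, after 0.1 ppb NaCl
```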
Sodium is usually the first ion to break through a depleted cation exchanger. Sodium measurement can quickly detect this condition and is widely used as the indicator for cation exchange regeneration. The conductivity of cation exchange effluent is always quite high due to the presence of anions and hydrogen ions, and therefore conductivity measurement is not useful for this purpose. Sodium is also measured in power plant water and steam samples because it is a common corrosive contaminant and can be detected at very low concentrations in the presence of higher amounts of ammonia and/or amine treatment, which have a relatively high background conductivity.

On-line sodium measurement in ultrapure water most commonly uses a glass-membrane sodium ion-selective electrode and a reference electrode in an analyzer measuring a small, continuously flowing side-stream sample. The voltage measured between the electrodes is proportional to the logarithm of the sodium ion activity or concentration, according to the Nernst equation. Because of the logarithmic response, low concentrations in sub-parts-per-billion ranges can be measured routinely. To prevent interference from hydrogen ions, the sample pH is raised by the continuous addition of a pure amine before measurement. Calibration at low concentrations is often done with automated analyzers to save time and to eliminate the variables of manual calibration. [ 26 ]
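As an illustration of the logarithmic electrode response just described, here is a small Python sketch of the Nernst relation. It is a simplified model, not the algorithm of any actual analyzer; the calibration intercept used in the example is hypothetical:

```python
# Hedged sketch of the Nernst relation behind sodium ion-selective electrodes.
# At 25 C the ideal slope is S = R*T/F * ln(10) ~ 59.16 mV per decade.
import math

R = 8.314462618   # J/(mol*K), gas constant
F = 96485.33212   # C/mol, Faraday constant

def nernst_slope_mV(temp_c: float) -> float:
    """Ideal electrode slope in mV per decade at the given temperature."""
    return R * (temp_c + 273.15) / F * math.log(10) * 1000.0

def sodium_ppb(e_mv: float, e0_mv: float, temp_c: float = 25.0) -> float:
    """Invert E = E0 + S*log10(c) for concentration c in ppb.
    e0_mv is the calibration intercept for c = 1 ppb (hypothetical value)."""
    return 10 ** ((e_mv - e0_mv) / nernst_slope_mV(temp_c))

# The logarithmic response is what makes sub-ppb ranges measurable:
# a tenfold concentration change shifts the voltage by only ~59 mV.
print(nernst_slope_mV(25.0))    # ~59.16 mV/decade
print(sodium_ppb(-118.3, 0.0))  # ~0.01 ppb, two decades below the intercept
```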
Advanced microelectronics manufacturing processes require low single-digit to 10 ppb dissolved oxygen (DO) concentrations in the ultrapure rinse water to prevent oxidation of wafer films and layers. DO in power plant water and steam must be controlled to ppb levels to minimize corrosion. Copper alloy components in power plants require single-digit ppb DO concentrations, whereas iron alloys can benefit from the passivation effects of higher concentrations in the 30 to 150 ppb range. Dissolved oxygen is measured by two basic technologies: electrochemical cell or optical fluorescence. Traditional electrochemical measurement uses a sensor with a gas-permeable membrane. Behind the membrane, electrodes immersed in an electrolyte develop an electric current directly proportional to the oxygen partial pressure of the sample. The signal is temperature-compensated for the oxygen solubility in water, the electrochemical cell output, and the diffusion rate of oxygen through the membrane. Optical fluorescent DO sensors use a light source, a fluorophore, and an optical detector. The fluorophore is immersed in the sample. Light is directed at the fluorophore, which absorbs energy and then re-emits light at a longer wavelength. The duration and intensity of the re-emitted light are related to the dissolved oxygen partial pressure by the Stern–Volmer relationship. The signal is temperature-compensated for the solubility of oxygen in water and the fluorophore characteristics to obtain the DO concentration value. [ 27 ]

Silica is a contaminant that is detrimental to microelectronics processing and must be maintained at sub-ppb levels. In steam power generation, silica can form deposits on heat-exchange surfaces, where it reduces thermal efficiency. In high-temperature boilers, silica will volatilize and carry over with steam, where it can form deposits on turbine blades that lower aerodynamic efficiency. Silica deposits are very difficult to remove. Silica is the first readily measurable species to be released by a spent anion exchange resin and is therefore used as the trigger for anion resin regeneration. Silica is non-conductive and therefore not detectable by conductivity measurement. Silica is measured on side-stream samples with colorimetric analyzers. The measurement adds reagents, including a molybdate compound and a reducing agent, to produce a blue silico-molybdate complex color, which is detected optically and related to concentration according to the Beer–Lambert law. Most silica analyzers operate on an automated semi-continuous basis, isolating a small volume of sample, adding reagents sequentially, and allowing enough time for reactions to occur while minimizing consumption of reagents. The display and output signals are updated with each batch measurement result, typically at 10- to 20-minute intervals. [ 28 ]

Particles in UPW have always presented a major problem for semiconductor manufacture, as any particle landing on a silicon wafer can bridge the gap between the electrical pathways in the semiconductor circuitry. When a pathway is short-circuited, the semiconductor device will not work properly; such a failure is called a yield loss, one of the most closely watched parameters in the semiconductor industry. The technique of choice to detect these single particles has been to shine a light beam (a laser) through a small volume of UPW and detect the light scattered by any particles (instruments based on this technique are called laser particle counters, or LPCs). As semiconductor manufacturers pack more and more transistors into the same physical space, the circuitry line-width has become narrower and narrower. As a result, LPC manufacturers have had to use more and more powerful lasers and very sophisticated scattered-light detectors to keep pace. As line-width approaches 10 nm (a human hair is approximately 100,000 nm in diameter), LPC technology is becoming limited by secondary optical effects, and new particle measurement techniques will be required. Recently, one such novel analysis method, named NDLS, has successfully been brought into use at Electrum Laboratory (Royal Institute of Technology) in Stockholm, Sweden. NDLS is based on dynamic light scattering (DLS) instrumentation.

Another type of contamination in UPW is dissolved inorganic material, primarily silica. Silica is one of the most abundant minerals on the planet and is found in all water supplies. Any dissolved inorganic material has the potential to remain on the wafer as the UPW dries. Once again, this can lead to a significant loss in yield. To detect trace amounts of dissolved inorganic material, a measurement of non-volatile residue is commonly used. This technique involves using a nebulizer to create droplets of UPW suspended in a stream of air. These droplets are dried at a high temperature to produce an aerosol of non-volatile residue particles. A measurement device called a condensation particle counter then counts the residue particles to give a reading in parts per trillion (ppt) by weight. [ 29 ]

Total organic carbon is most commonly measured by oxidizing the organics in the water to CO₂, measuring the increase in the CO₂ concentration after the oxidation (the delta CO₂), and converting the measured delta CO₂ into "mass of carbon" per volume concentration units. The initial CO₂ in the water sample is defined as inorganic carbon, or IC. The CO₂ produced from the oxidized organics together with any initial CO₂ (IC) is defined as total carbon, or TC. The TOC value is then equal to the difference between TC and IC. [ 30 ]
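The TOC bookkeeping just described amounts to a simple subtraction, sketched below (units and example values are illustrative):

```python
# Simple sketch of TOC accounting as described above (assumed units: ppb as carbon).
def toc_ppb(total_carbon_ppb: float, inorganic_carbon_ppb: float) -> float:
    """TOC = TC - IC, where IC is the carbonate-system carbon already present
    before oxidation and TC is the carbon measured after oxidation."""
    if inorganic_carbon_ppb > total_carbon_ppb:
        raise ValueError("IC cannot exceed TC")
    return total_carbon_ppb - inorganic_carbon_ppb

print(toc_ppb(2.4, 1.9))  # 0.5 ppb TOC (illustrative numbers)
```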
Oxidation of organics to CO₂ is most commonly achieved in liquid solutions by the creation of a highly oxidizing chemical species, the hydroxyl radical (OH•). Organic oxidation in a combustion environment involves the creation of other energized molecular oxygen species. For the typical TOC levels in UPW systems, most methods utilize hydroxyl radicals in the liquid phase. There are multiple methods to create sufficient concentrations of hydroxyl radicals to completely oxidize the organics in water to CO₂, each method being appropriate for a different water purity level. Typical raw waters feeding the front end of a UPW purification system can contain TOC levels between 0.7 mg/L and 15 mg/L and require a robust oxidation method that ensures there is enough oxygen available to completely convert all the carbon atoms in the organic molecules into CO₂. Robust oxidation methods that supply sufficient oxygen include the following: ultraviolet (UV) light with persulfate, heated persulfate, combustion, and supercritical oxidation. Typical equations for persulfate generation of hydroxyl radicals are: S₂O₈²⁻ + hν (254 nm) → 2 SO₄•⁻ and SO₄•⁻ + H₂O → HSO₄⁻ + OH•. When the organic concentration is less than 1 mg/L as TOC and the water is saturated with oxygen, UV light alone is sufficient to oxidize the organics to CO₂; this is a simpler oxidation method. The wavelength of the UV light for the lower-TOC waters must be less than 200 nm and is typically the ~185 nm line generated by a low-pressure Hg vapor lamp. This UV light is energetic enough to break the water molecule into OH and H radicals; the hydrogen radicals quickly react to create H₂. The equations are: H₂O + hν (185 nm) → OH• + H• and H• + H• → H₂. In summary, the carbon quantities used by the different types of UPW TOC analyzers are: IC (inorganic carbon) = CO₂ + HCO₃⁻ + CO₃²⁻; TC (total carbon) = organic carbon + IC; TOC (total organic carbon) = TC − IC.

When testing the quality of UPW, consideration is given to where that quality is required and where it is to be measured. The point of distribution or delivery (POD) is the point in the system immediately after the last treatment step and before the distribution loop. It is the standard location for the majority of analytical tests. The point of connection (POC) is another commonly used point for measuring the quality of UPW. It is located at the outlet of the submain or lateral take-off valve used for UPW supply to the tool. Grab-sample UPW analyses are either complementary or alternative to on-line testing, depending on the availability of the instruments and the level of the UPW quality specifications. Grab-sample analysis is typically performed for the following parameters: metals, anions, ammonium, silica (both dissolved and total), particles by SEM (scanning electron microscopy), TOC (total organic carbon), and specific organic compounds. [ 1 ] [ 2 ] Metal analyses are typically performed by ICP-MS ( inductively coupled plasma mass spectrometry ). The detection level depends on the specific type of instrument used and the method of sample preparation and handling. Current state-of-the-art methods allow reaching sub-ppt (parts per trillion) levels (< 1 ppt), typically tested by ICP-MS. [ 31 ] The anion analysis for the seven most common inorganic anions (sulfate, chloride, fluoride, phosphate, nitrite, nitrate, and bromide) is performed by ion chromatography (IC), reaching single-digit ppt detection limits. Ion chromatography is also used to analyze ammonium and other cations. However, ICP-MS is the preferred method for metals due to lower detection limits and its ability to detect both dissolved and non-dissolved metals in UPW. Ion chromatography is also used for the detection of urea in UPW down to the 0.5 ppb level. Urea is one of the more common contaminants in UPW and probably the most difficult to treat. Silica analysis in UPW typically includes determination of reactive and total silica. [ 32 ]
Due to the complexity of silica chemistry, the form of silica measured is defined by the photometric (colorimetric) method as molybdate-reactive silica. The forms of silica that are molybdate-reactive include dissolved simple silicates, monomeric silica and silicic acid, and an undetermined fraction of polymeric silica. Total silica determination in water employs high-resolution ICP-MS, GFAA (graphite furnace atomic absorption), [ 33 ] and the photometric method combined with silica digestion. For many natural waters, a measurement of molybdate-reactive silica by this test method provides a close approximation of total silica, and, in practice, the colorimetric method is frequently substituted for other more time-consuming techniques. However, total silica analysis becomes more critical in UPW, where the presence of colloidal silica is expected due to silica polymerization in the ion exchange columns. Colloidal silica is considered more critical than dissolved silica in the electronics industry because nanoparticles in water have a greater impact on the semiconductor manufacturing process. Sub-ppb (parts per billion) levels of silica are equally challenging for both reactive and total silica analysis, so the total silica test is often the preferred choice.

Although particles and TOC are usually measured using on-line methods, there is significant value in complementary or alternative off-line lab analysis. The value of the lab analysis has two aspects: cost and speciation. Smaller UPW facilities that cannot afford to purchase on-line instrumentation often choose off-line testing. TOC can be measured in a grab sample at concentrations as low as 5 ppb, using the same technique employed for the on-line analysis (see the on-line method description). This detection level covers the majority of needs of less critical electronic applications and all pharmaceutical applications. When speciation of the organics is required for troubleshooting or design purposes, liquid chromatography-organic carbon detection (LC-OCD) provides an effective analysis. This method allows for identification of biopolymers, humics, low-molecular-weight acids and neutrals, and more, while characterizing nearly 100% of the organic composition in UPW with a sub-ppb level of TOC. [ 34 ] [ 35 ]

Similar to TOC, SEM particle analysis represents a lower-cost alternative to the expensive on-line measurements, and therefore it is commonly the method of choice in less critical applications. SEM analysis can provide particle counting for particle sizes down to 50 nm, which is generally in line with the capability of on-line instruments. The test involves installation of an SEM capture filter cartridge on the UPW sampling port, sampling onto a membrane disk with a pore size equal to or smaller than the target size of the UPW particles. The filter is then transferred to the SEM microscope, where its surface is scanned for detection and identification of the particles. The main disadvantage of SEM analysis is the long sampling time. Depending on the pore size and the pressure in the UPW system, the sampling time can be between one week and one month. However, the typical robustness and stability of the particle filtration systems allow for successful application of the SEM method. Application of energy-dispersive X-ray spectroscopy (SEM-EDS) provides compositional analysis of the particles, making SEM also helpful for systems with on-line particle counters. Bacteria analysis is typically conducted following ASTM method F1094. [ 36 ]
The test method covers sampling and analysis of high-purity water from water purification systems and water transmission systems by means of a direct sampling tap and filtration of the sample collected in a bag. These test methods cover both the sampling of water lines and the subsequent microbiological analysis of the sample by the culture technique. The microorganisms recovered from the water samples and counted on the filters include both aerobes and facultative anaerobes. The temperature of incubation is controlled at 28 ± 2 °C, and the period of incubation is 48 h, or 72 h if time permits. Longer incubation times are typically recommended for the most critical applications; however, 48 h is typically sufficient to detect water quality upsets.

Typically, city feed water (containing all the unwanted contaminants previously mentioned) is taken through a series of purification steps that, depending on the desired quality of UPW, include gross filtration for large particulates, carbon filtration, water softening, reverse osmosis, exposure to ultraviolet (UV) light for TOC and/or bacteriostatic control, polishing by ion exchange resins or electrodeionization (EDI), and finally filtration or ultrafiltration. Some systems use direct-return, reverse-return, or serpentine loops that return the water to a storage area, providing continuous recirculation, while others are single-use systems that run from the point of UPW production to the point of use. The constant recirculation in the former continuously polishes the water with every pass. The latter can be prone to contamination build-up if left stagnant with no use. For modern UPW systems it is important to consider specific site and process requirements, such as environmental constraints (e.g., wastewater discharge limits) and reclaim opportunities (e.g., whether there is a mandated minimum amount of reclaim required).

UPW systems consist of three subsystems: pretreatment, primary, and polishing. Most systems are similar in design but may vary in the pretreatment section depending on the nature of the source water.

Pretreatment: Pretreatment produces purified water. Typical pretreatments employed are two-pass reverse osmosis, demineralization plus reverse osmosis, or HERO (high-efficiency reverse osmosis). [ 37 ] [ 38 ] In addition, the degree of filtration upstream of these processes is dictated by the level of suspended solids, turbidity, and organics present in the source water. The common types of filtration are multimedia filters, automatic backwashable filters, and ultrafiltration for suspended solids removal and turbidity reduction, and activated carbon for the reduction of organics. The activated carbon may also be used for removal of chlorine upstream of the reverse osmosis or demineralization steps. If activated carbon is not employed, then sodium bisulfite is used to dechlorinate the feed water.

Primary: Primary treatment consists of ultraviolet light (UV) for organic reduction, and EDI and/or mixed-bed ion exchange for demineralization. The mixed beds may be non-regenerable (following EDI), regenerated in situ, or externally regenerated. The last step in this section may be dissolved oxygen removal utilizing the membrane degasification process or vacuum degasification.

Polishing: Polishing consists of UV, heat exchange to maintain a constant temperature in the UPW supply, non-regenerable ion exchange, membrane degasification (to polish to final UPW requirements), and ultrafiltration to achieve the required particle level.
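The three-subsystem structure described above can be summarized as an ordered configuration. The following sketch is purely illustrative; a real plant's sequence depends on its feed water and site requirements:

```python
# Illustrative sketch only: the pretreatment / primary / polishing structure
# described above, expressed as an ordered configuration. Unit names follow
# the text; any specific sequence for a real plant is site-dependent.
UPW_TRAIN = {
    "pretreatment": ["multimedia filtration", "activated carbon",
                     "two-pass reverse osmosis"],
    "primary": ["UV (organic reduction)", "EDI and/or mixed-bed ion exchange",
                "membrane or vacuum degasification"],
    "polishing": ["UV", "heat exchange", "non-regenerable ion exchange",
                  "membrane degasification", "ultrafiltration"],
}

for subsystem, steps in UPW_TRAIN.items():
    print(f"{subsystem}: " + " -> ".join(steps))
```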
Some semiconductor fabs require hot UPW for some of their processes. In this instance, polished UPW is heated in the range of 70 to 80 °C before being delivered to manufacturing. Most of these systems include heat recovery, wherein the excess hot UPW returned from manufacturing goes to a heat recovery unit before being returned to the UPW feed tank, reducing the heating demand and avoiding the need to cool the hot UPW return flow. [ 39 ]

Typical UPW system design guidelines include the following:
- Remove contaminants as far forward in the system as practical and cost-effective.
- Maintain steady-state flow in the makeup and primary sections to avoid TOC and conductivity spikes (no start/stop operation). Recirculate excess flow upstream.
- Minimize the use of chemicals following the reverse osmosis units.
- Consider EDI and non-regenerable primary mixed beds in lieu of in-situ or externally regenerated primary beds, to assure optimum quality of UPW makeup and minimize the potential for upsets.
- Select materials that will not contribute TOC and particles to the system, particularly in the primary and polishing sections. Minimize stainless steel material in the polishing loop; if it is used, electropolishing is recommended.
- Minimize dead legs in the piping to avoid the potential for bacteria propagation.
- Maintain minimum scouring velocities in the piping and distribution network to ensure turbulent flow. The recommended minimum is based on a Reynolds number of 3,000 or higher; this can range up to 10,000 depending on the comfort level of the designer (a numerical sketch follows this list).
- Use only virgin resin in the polishing mixed beds, and replace it every one to two years.
- Supply UPW to manufacturing at constant flow and constant pressure to avoid system upsets such as particle bursts.
- Utilize a reverse-return distribution loop design for hydraulic balance and to avoid backflow (return to supply).
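A minimal sketch of the Reynolds number sizing rule from the guidelines above, assuming water near 25 °C in a circular pipe; a real design review would use the actual fluid properties and pipe schedule:

```python
# Hedged sketch: Reynolds-number-based minimum flow for UPW distribution.
# Assumption: kinematic viscosity of water near 25 C, ~0.893e-6 m^2/s.
KINEMATIC_VISCOSITY = 0.893e-6  # m^2/s (assumed)

def reynolds_number(velocity_m_s: float, pipe_id_m: float,
                    nu: float = KINEMATIC_VISCOSITY) -> float:
    """Re = v * D / nu for circular pipe flow."""
    return velocity_m_s * pipe_id_m / nu

def min_velocity_for_re(target_re: float, pipe_id_m: float,
                        nu: float = KINEMATIC_VISCOSITY) -> float:
    """Minimum scouring velocity that reaches the target Reynolds number."""
    return target_re * nu / pipe_id_m

d = 0.10  # example: a 0.1 m (~4 in) distribution line
print(min_velocity_for_re(3_000, d))   # ~0.027 m/s satisfies Re >= 3,000
print(min_velocity_for_re(10_000, d))  # ~0.089 m/s for the conservative Re >= 10,000
print(reynolds_number(0.6, d))         # the legacy 60 cm/s rule gives Re ~ 67,000
```

The comparison makes the motivation for the Reynolds criterion visible: in large pipes, the historical fixed-velocity rule demands far more flow than turbulence actually requires.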
Capacity plays an important role in the engineering decisions about UPW system configuration and sizing. For example, the polishing systems of older and smaller electronics plants were designed for minimum flow velocity criteria of up to 60 cm (2 ft) per second at the end of the pipe to avoid bacterial contamination. Larger fabs required larger UPW systems, with consumption increasing with the size of the wafers manufactured in newer fabs. However, for the larger pipes driven by this higher consumption, the 60 cm (2 ft) per second criterion implied extremely high consumption and an oversized polishing system. The industry responded to this issue and, through extensive investigation, the choice of higher-purity materials, and optimized distribution design, was able to reduce the design criteria for minimum flow by using Reynolds number criteria. An interesting coincidence is that the largest diameter of the main UPW supply line tends to equal the size of the wafer in production (a relation referred to as Klaiber's law). The growing size of the piping, as well as of the system overall, requires new approaches to space management and process optimization. As a result, newer UPW systems look rather alike, in contrast with smaller UPW systems, which could have less optimized designs due to the lower impact of inefficiency on cost and space management. Another capacity consideration is related to the operability of the system. Small lab-scale systems (a dozen liters per minute / a few gallons per minute) do not typically involve operators, while large-scale systems are usually operated 24×7 by well-trained operators. As a result, smaller systems are designed with no use of chemicals and with lower water and energy efficiency than larger systems.

Particles in UPW are critical contaminants, which result in numerous forms of defects on wafer surfaces. With the large volume of UPW that comes into contact with each wafer, particle deposition on the wafer readily occurs. Once deposited, the particles are not easily removed from the wafer surfaces. With the increased use of dilute chemistries, particles in UPW are an issue not only during UPW rinsing of the wafers, but also because of the introduction of particles during dilute wet cleans and etches, where UPW is a major constituent of the chemistry used. Particle levels must be controlled to nm sizes, and current trends are approaching 10 nm and smaller for particle control in UPW. While filters are used for the main loop, components of the UPW system can contribute additional particle contamination into the water, and at the point of use additional filtration is recommended.

The filters themselves must be constructed of ultraclean and robust materials, which do not contribute organics or cations/anions into the UPW, and they must be integrity-tested out of the factory to assure reliability and performance. Common materials include nylon, polyethylene, polysulfone, and fluoropolymers. Filters are commonly constructed of a combination of polymers, and for UPW use they are thermally welded without using adhesives or other contaminating additives. The microporous structure of the filter is critical in providing particle control, and this structure can be isotropic or asymmetric. In the former case the pore distribution is uniform through the filter, while in the latter the finer surface provides the particle removal, with the coarser structure giving physical support as well as reducing the overall differential pressure. Filters can be cartridge formats in which the UPW is flowed through the pleated structure, with contaminants collected directly on the filter surface. Common in UPW systems are ultrafilters (UF), composed of hollow-fiber membranes. In this configuration, the UPW is flowed across the hollow fiber, sweeping contaminants to a waste stream, known as the retentate stream. The retentate stream is only a small percentage of the total flow and is sent to waste. The product water, or the permeate stream, is the UPW passing through the skin of the hollow fiber and exiting through the center of the hollow fiber. The UF is a highly efficient filtration product for UPW, and the sweeping of the particles into the retentate stream yields extremely long life, with only occasional cleaning needed. Use of the UF in UPW systems provides excellent particle control down to single-digit-nanometer particle sizes. [ 39 ]

Point-of-use (POU) applications for UPW filtration include wet etch and clean, rinse prior to IPA vapor or liquid dry, and lithography dispense UPW rinse following develop. These applications pose specific challenges for POU UPW filtration. For wet etch and clean, most tools are single-wafer processes, which require flow through the filter upon tool demand. The resultant flow is intermittent, ranging from full flow through the filter upon initiation of UPW flow through the spray nozzle back down to a trickle flow. The trickle flow is typically maintained to prevent a dead leg in the tool. The filter must be robust enough to withstand the pressure and flow cycling and must continue to retain captured particles throughout its service life.
This requires proper pleat design and geometry, as well as media designed to optimize particle capture and retention. Certain tools may use a fixed filter housing with replaceable filters, whereas other tools may use disposable filter capsules for the POU UPW. For lithography applications, small filter capsules are used. Similar to the challenges for wet etch and clean POU UPW applications, for lithography UPW rinse the flow through the filter is intermittent, though at a low flow and pressure, so the physical robustness is not as critical. Another POU UPW application for lithography is the immersion water used at the lens/wafer interface for 193 nm immersion lithography patterning. The UPW forms a puddle between the lens and the wafer, improving the numerical aperture (NA), and the UPW must be extremely pure. POU filtration is used on the UPW just prior to the stepper scanner. For POU UPW applications, sub-15 nm filters are currently in use for the advanced 2x and 1x nodes. The filters are commonly made of nylon, high-density polyethylene (HDPE), polyarylsulfone (or polysulfone), or polytetrafluoroethylene (PTFE) membranes, with hardware typically consisting of HDPE or PFA.

Point-of-use treatment is often applied in critical tool applications, such as immersion lithography and mask preparation, in order to maintain consistent ultrapure water quality. UPW systems located in the central utilities building provide the fab with quality water, but they may not provide adequate water purification consistency for these processes. In cases where urea, THMs, isopropyl alcohol (IPA), or other difficult-to-remove (low-molecular-weight, neutral) TOC species may be present, additional treatment is required through an advanced oxidation process (AOP). This is particularly important when a tight TOC specification below 1 ppb must be attained. These difficult-to-control organics have been proven to impact yield and device performance, especially at the most demanding process steps. One successful example of POU organics control down to the 0.5 ppb TOC level is an AOP combining ammonium persulfate and UV oxidation (refer to the persulfate + UV oxidation chemistry in the TOC measurement section). Available proprietary POU advanced oxidation processes can consistently reduce TOC to 0.5 parts per billion (ppb), in addition to maintaining consistent temperature, oxygen, and particle levels, exceeding the SEMI F063 requirements. [ 2 ] This is important because the slightest variation can directly affect the manufacturing process, significantly influencing product yields. [ 39 ] [ 40 ]

The semiconductor industry uses a large amount of ultrapure water to rinse contaminants from the surface of the silicon wafers that are later turned into computer chips. The ultrapure water is by definition extremely low in contamination, but once it makes contact with the wafer surface it carries residual chemicals or particles from the surface that then end up in the industrial waste treatment system of the manufacturing facility. The contamination level of the rinse water can vary a great deal depending on the particular process step being rinsed at the time. A "first rinse" step may carry a large amount of residual contaminants and particles compared to a last rinse, which may carry relatively low amounts of contamination.
Typical semiconductor plants have only two drain systems for all of these rinses, which are also combined with acid waste, and therefore the rinse water is not effectively reused, due to the risk that contamination would cause manufacturing process defects. As noted above, ultrapure water is commonly not recycled in semiconductor applications, but rather reclaimed for other processes. There is one company in the US, Exergy Systems, Inc. of Irvine, California, that offers a patented deionized water recycling process. This product has been successfully tested on a number of semiconductor processes.

Definitions: The following definitions are used by ITRS: [ 6 ]

Water reclaim and recycle: Some semiconductor manufacturing plants have been using reclaimed water for non-process applications, such as chemical aspirators, where the discharge water is sent to industrial waste. Water reclamation is also a typical application where spent rinse water from the manufacturing facility may be used in cooling tower supply, exhaust scrubber supply, or point-of-use abatement systems. UPW recycling is not as typical and involves collecting the spent manufacturing rinse water, treating it, and re-using it in the wafer rinse process. Some additional water treatment may be required in any of these cases, depending on the quality of the spent rinse water and the application of the reclaimed water. These are fairly common practices in many semiconductor facilities worldwide; however, there is a limit to how much water can be reclaimed and recycled if reuse in the manufacturing process is not considered.

UPW recycling: Recycling rinse water from the semiconductor manufacturing process has been discouraged by many manufacturing engineers for decades because of the risk that contamination from chemical residues and particles may end up back in the UPW feed water and result in product defects. Modern ultrapure water systems are very effective at removing ionic contamination down to parts-per-trillion (ppt) levels, whereas organic contamination of ultrapure water systems is still at the parts-per-billion (ppb) level. In any case, recycling the process water rinses for UPW makeup has always been a great concern, and until recently this was not a common practice. Increasing water and wastewater costs in parts of the US and Asia have pushed some semiconductor companies to investigate the recycling of manufacturing process rinse water in the UPW makeup system. Some companies have incorporated an approach that uses complex, large-scale treatment designed for worst-case conditions of the combined wastewater discharge. More recently, new approaches have been developed that incorporate a detailed water management plan to try to minimize the treatment system's cost and complexity.

Water management plan: The key to maximizing water reclaim, recycle, and reuse is a well-thought-out water management plan. A successful water management plan includes a full understanding of how the rinse waters are used in the manufacturing process, including the chemicals used and their byproducts. With the development of this critical component, a drain collection system can be designed to segregate concentrated chemicals from moderately contaminated and lightly contaminated rinse waters. Once segregated into separate collection systems, what were once considered chemical process waste streams can be repurposed or sold as product streams, and the rinse waters can be reclaimed.
A water management plan will also require a significant amount of sample data and analysis to determine proper drain segregation, the application of on-line analytical measurement, diversion controls, and the final treatment technology. Collecting these samples and performing laboratory analysis can help characterize the various waste streams and determine the potential for their respective re-use. In the case of UPW process rinse water, the lab analysis data can then be used to profile typical and non-typical levels of contamination, which in turn can be used to design the rinse water treatment system. In general, it is most cost-effective to design the system to treat the typical level of contamination, which may occur 80–90% of the time, and then incorporate on-line sensors and controls to divert the rinse water to industrial waste, or to non-critical uses such as cooling towers, when the contamination level exceeds the capability of the treatment system (a simple diversion rule is sketched below). By incorporating all these aspects of a water management plan at a semiconductor manufacturing site, the level of water use can be reduced by as much as 90%.
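As a small illustration of the sensor-based diversion control described in the water management discussion above, the following sketch routes a rinse stream by comparing on-line readings to thresholds. The setpoints are hypothetical; real values come from the site's water management plan and the capability of its treatment system:

```python
# Illustrative sketch only: a threshold-based diversion rule of the kind
# described above. The numeric setpoints are hypothetical placeholders.
def route_rinse_water(toc_ppb: float, resistivity_mohm_cm: float) -> str:
    """Decide where a spent rinse stream goes, based on on-line sensors."""
    if toc_ppb <= 50 and resistivity_mohm_cm >= 15:
        return "reclaim to UPW makeup treatment"
    if toc_ppb <= 1000:
        return "reuse in non-critical service (e.g., cooling tower supply)"
    return "divert to industrial waste"

print(route_rinse_water(toc_ppb=20, resistivity_mohm_cm=17.5))
print(route_rinse_water(toc_ppb=400, resistivity_mohm_cm=5.0))
print(route_rinse_water(toc_ppb=5000, resistivity_mohm_cm=0.8))
```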
Stainless steel remains a piping material of choice for the pharmaceutical industry. Due to its metallic contribution, most steel was removed from microelectronics UPW systems in the 1980s and replaced with high-performance polymers such as polyvinylidene fluoride (PVDF), [ 1 ] perfluoroalkoxy (PFA), ethylene chlorotrifluoroethylene (ECTFE), and polytetrafluoroethylene (PTFE) in the US and Europe. In Asia, polyvinyl chloride (PVC), chlorinated polyvinyl chloride (CPVC), and polypropylene (PP) are popular, along with the high-performance polymers. Thermoplastics can be joined by different thermofusion techniques.

https://en.wikipedia.org/wiki/Ultrapure_water
In molecular biology, ultrasensitivity describes an output response that is more sensitive to stimulus change than the hyperbolic Michaelis-Menten response. Ultrasensitivity is one of the biochemical switches in the cell cycle and has been implicated in a number of important cellular events, including exiting G2 cell cycle arrest in Xenopus laevis oocytes, a stage to which the cell or organism would not want to return. [ 1 ] Ultrasensitivity is a cellular system which triggers entry into a different cellular state. [ 2 ] Ultrasensitivity gives a small response to the first input signal, but an increase in the input signal produces higher and higher levels of output. This acts to filter out noise, since small stimuli have little effect and a threshold concentration of the stimulus (input signal) is necessary for the trigger, which allows the system to become activated quickly. [ 3 ] Ultrasensitive responses are represented by sigmoidal graphs, which resemble cooperativity. The quantification of ultrasensitivity is often performed approximately by the Hill equation, θ = [L]^n / (K^n + [L]^n), where the Hill coefficient (n) may serve as a quantitative measure of the ultrasensitive response. [ 4 ]

Zero-order ultrasensitivity was first described by Albert Goldbeter and Daniel Koshland, Jr. in 1981 in a paper in the Proceedings of the National Academy of Sciences. [ 5 ] They showed, using mathematical modeling, that modification of enzymes operating outside of first-order kinetics required only small changes in the concentration of the effector to produce larger changes in the amount of modified protein. This amplification provides added sensitivity in biological control, implicating its importance in many biological systems.

Many biological processes are binary (ON-OFF), such as cell fate decisions, [ 6 ] metabolic states, and signaling pathways. Ultrasensitivity is a switch that helps decision-making in such biological processes. [ 7 ] For example, in the apoptotic process, a model showed that positive feedback from the inhibition of caspase 3 (Casp3) and Casp9 by inhibitors of apoptosis can bring about ultrasensitivity (bistability). This positive feedback cooperates with Casp3-mediated feedback cleavage of Casp9 to generate irreversibility in caspase activation (switch ON), which leads to cell apoptosis. [ 8 ] Another model also showed similar but different positive feedback controls in Bcl-2 family proteins in the apoptotic process. [ 9 ]

Recently, Jeyeraman et al. have proposed that the phenomenon of ultrasensitivity may be further subdivided into three sub-regimes, separated by sharp stimulus threshold values: OFF, OFF-ON-OFF, and ON. Based on their model, they proposed that the OFF-ON-OFF sub-regime of ultrasensitivity resembles switch-like adaptation and can be accomplished by coupling N phosphorylation–dephosphorylation cycles unidirectionally, without any explicit feedback loops. [ 10 ] Other recent work has emphasized that not only is the topology of networks important for creating ultrasensitive responses, but their composition (enzymes vs. transcription factors) also strongly affects whether they will exhibit robust ultrasensitivity. Mathematical modeling suggests, for a broad array of network topologies, that a combination of enzymes and transcription factors tends to provide more robust ultrasensitivity than networks composed entirely of transcription factors or entirely of enzymes. [ 11 ]
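A minimal numerical sketch of the Hill equation above, comparing a Michaelis-Menten response (n = 1) with an ultrasensitive response (n = 4); all parameter values are illustrative:

```python
# Minimal sketch: Michaelis-Menten (n = 1) vs. ultrasensitive Hill (n = 4)
# responses. K is the input producing a half-maximal output.
def hill(stimulus: float, K: float = 1.0, n: float = 1.0) -> float:
    """Fractional response theta = L^n / (K^n + L^n)."""
    return stimulus**n / (K**n + stimulus**n)

for L in (0.2, 0.5, 1.0, 2.0, 5.0):
    print(f"L={L:4}  n=1: {hill(L, n=1):.3f}   n=4: {hill(L, n=4):.3f}")
# With n = 4 the response stays near 0 for sub-threshold inputs and rises
# steeply around K: the noise-filtering, switch-like behavior described above.
```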
Ultrasensitivity can be achieved through several mechanisms. Multistep ultrasensitivity occurs when a single effector acts on several steps in a cascade. [ 18 ] Successive cascade signals can result in higher levels of noise being introduced into the signal, which can interfere with the final output. This is especially relevant for large cascades, such as the flagellar regulatory system, in which the master regulator signal is transmitted through multiple intermediate regulators before activating transcription. [ 19 ] Cascade ultrasensitivity can reduce noise and therefore require less input for activation. [ 12 ] Additionally, multiple phosphorylation events are an example of ultrasensitivity. Recent modeling has shown that multiple phosphorylation sites on membrane proteins could serve to locally saturate enzyme activity. Proteins at the membrane are greatly reduced in mobility compared to those in the cytoplasm; this means that a membrane-tethered enzyme acting upon a membrane protein will take longer to diffuse away. With the addition of multiple phosphorylation sites on the membrane substrate, the enzyme can, by a combination of increased local enzyme concentration and increased substrate, quickly reach saturation. [ 20 ]

Buffering mechanisms such as molecular titration can also generate ultrasensitivity. In vitro, this can be observed for the simple binding mechanism A + B ⇌ AB, where the monomeric form of A is active and can be inactivated by binding B to form the heterodimer AB. When the total concentration of B, B_T (= [B] + [AB]), is much greater than K_d, this system exhibits a threshold determined by the concentration B_T. [ 21 ] At total concentrations of A, A_T (= [A] + [AB]), lower than B_T, B acts as a buffer to free A, and nearly all A will be found as AB. However, at the equivalence point, when A_T ≈ B_T, B can no longer buffer the increase in A_T, so a small increase in A_T causes a large increase in [A]. [ 22 ] The strength of the ultrasensitivity of [A] to changes in A_T is determined by B_T / K_d. [ 22 ] Ultrasensitivity occurs when this ratio is greater than one, and it increases as the ratio increases. Above the equivalence point, A_T and [A] are again linearly related. In vivo, the synthesis of A and B as well as the degradation of all three components complicates the generation of ultrasensitivity. If the synthesis rates of A and B are equal, this system still exhibits ultrasensitivity at the equivalence point. [ 22 ]

One example of a buffering mechanism is protein sequestration, which is a common mechanism found in signalling and regulatory networks. [ 23 ] In 2009, Buchler and Cross constructed a synthetic genetic network that was regulated by protein sequestration of a transcriptional activator by a dominant-negative inhibitor. They showed that this system results in a flexible ultrasensitive response in gene expression. It is flexible in that the degree of ultrasensitivity can be altered by changing the expression levels of the dominant-negative inhibitor. Figure 1 in their article illustrates how an active transcription factor can be sequestered by an inhibitor into an inactive complex AB that is unable to bind DNA.
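A short numerical sketch of the molecular titration mechanism above, solving the binding quadratic for free A under the stated condition B_T / K_d ≫ 1 (all values illustrative):

```python
# Hedged sketch of molecular titration. Assumes the simple A + B <-> AB
# mechanism above at fast binding equilibrium; concentrations in arbitrary units.
# Conservation (A_T = [A] + [AB], B_T = [B] + [AB]) plus Kd = [A][B]/[AB]
# gives a quadratic in free [A].
import math

def free_A(A_total: float, B_total: float, Kd: float) -> float:
    """Solve the binding quadratic for free [A]."""
    b = B_total - A_total + Kd
    return (-b + math.sqrt(b * b + 4.0 * Kd * A_total)) / 2.0

B_total, Kd = 10.0, 0.01   # B_total / Kd = 1000 >> 1: strong threshold
for A_total in (5.0, 9.0, 9.9, 10.1, 11.0, 15.0):
    print(f"A_total={A_total:5}  free A={free_A(A_total, B_total, Kd):8.4f}")
# Below the equivalence point (A_total < B_total) nearly all A is held as AB;
# around A_total ~ B_total, free A rises sharply, then grows linearly with
# the excess of A_total over B_total.
```

In the Buchler and Cross circuit described above, the dominant-negative inhibitor plays the role of B and the transcriptional activator the role of A.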
This type of mechanism results in an "all-or-none" response, or ultrasensitivity, when the concentration of the regulatory protein increases to the point of depleting the inhibitor. Robust buffering against a response exists below this concentration threshold, and once the threshold is reached any small increase in input is amplified into a large change in output. [ citation needed ]

Signal transduction is regulated in various ways, and one of them is translocation. Regulated translocation can generate ultrasensitive, switch-like responses or multistep feedback-loop mechanisms. A switch-like response will occur if translocation raises the local concentration of a signaling protein. For example, epidermal growth factor (EGF) receptors can be internalized through clathrin-independent endocytosis (CIE) and/or clathrin-dependent endocytosis (CDE) in a ligand-concentration-dependent manner. The distribution of receptors into the two pathways was shown to be EGF-concentration-dependent. In the presence of low concentrations of EGF, the receptor was exclusively internalized via CDE, whereas at high concentrations, receptors were equally distributed between CDE and CIE. [ 4 ] [ 24 ]

Zero-order ultrasensitivity takes place under saturating conditions. [ 25 ] For example, consider an enzymatic step with a kinase, a phosphatase, and a substrate. Steady-state levels of the phosphorylated substrate have an ultrasensitive response when there is enough substrate to saturate all available kinases and phosphatases. [ 25 ] [ 26 ] Under these conditions, small changes in the ratio of kinase to phosphatase activity can dramatically change the amount of phosphorylated substrate (for a graph illustrating this behavior, see [ 5 ]). This enhanced sensitivity of the steady-state phosphorylated substrate to the ratio of kinase to phosphatase activity is termed zero-order to distinguish it from the first-order behavior described by Michaelis-Menten dynamics, wherein the steady-state concentration responds in a more gradual fashion than the switch-like behavior exhibited in ultrasensitivity. [ 18 ]

Using the notation from Goldbeter & Koshland, [ 5 ] let W be a certain substrate protein and let W′ be a covalently modified version of W. The conversion of W to W′ is catalyzed by some enzyme E1 and the reverse conversion of W′ to W is catalyzed by a second enzyme E2, according to the scheme W + E1 ⇌ WE1 → W′ + E1 and W′ + E2 ⇌ W′E2 → W + E2. The concentrations of all other necessary components (such as ATP) are assumed to be constant and are absorbed into the kinetic constants. Using the chemical equations above, the reaction rate equations for each component follow from mass-action kinetics, and the total concentration of each component is given by W_T = [W] + [W′] + [WE1] + [W′E2], E1_T = [E1] + [WE1], and E2_T = [E2] + [W′E2]. The zero-order mechanism assumes that [W_T] ≫ [E1_T] or [E2_T]. In other words, the system is in a Michaelis-Menten steady state, which means that, to a good approximation, [WE1] and [W′E2] are constant.
From these kinetic expressions one can solve for the ratio V1/V2 at steady state, defining the molar fractions W = [W]/[W_T] and W′ = 1 − W, where the steady-state condition is k1[WE1] = k2[W′E2]. This yields the steady-state relation V1/V2 = W′ (K1 + W) / (W (K2 + W′)), where K1 and K2 are the Michaelis constants of E1 and E2 normalized by the total substrate concentration W_T. When V1/V2 is plotted against the molar fractions W′ and W, it can be seen that the W-to-W′ conversion occurs over a much smaller change in the V1/V2 ratio than it would under first-order (non-saturating) conditions, which is the telltale sign of ultrasensitivity.

Positive feedback loops can cause ultrasensitive responses. An example of this is seen in the transcription of certain eukaryotic genes, in which non-cooperative transcription factor binding engages positive feedback loops of histone modification, resulting in ultrasensitive activation of transcription. The binding of a transcription factor recruits histone acetyltransferases and methyltransferases. The acetylation and methylation of histones recruits more acetyltransferases and methyltransferases, resulting in a positive feedback loop. Ultimately, this results in activation of transcription. [ 17 ]

Additionally, positive feedback can induce bistability in Cyclin B1 activation, via the two regulators Wee1 and Cdc25C, leading to the cell's decision to commit to mitosis. The system cannot be stable at intermediate levels of Cyclin B1, and the transition between the two stable states is abrupt: increasing levels of Cyclin B1 switch the system from low to high activity. The system exhibits hysteresis, in that the low-to-high and high-to-low switches occur at different levels of Cyclin B1. [ 27 ] However, the emergence of a bistable system is highly influenced by the sensitivity of its feedback loops. It has been shown in Xenopus egg extracts that Cdc25C hyperphosphorylation is a highly ultrasensitive function of Cdk activity, displaying a high value of the Hill coefficient (approximately 11), and that the dephosphorylation step of Ser 287 in Cdc25C (also involved in Cdc25C activation) is even more ultrasensitive, displaying a Hill coefficient of approximately 32. [ 28 ]

A proposed mechanism of ultrasensitivity, called allovalency, suggests that activity "derives from a high local concentration of interaction sites moving independently of each other". [ 29 ] Allovalency was first proposed when it was believed to occur in the pathway in which Sic1 is degraded in order for Cdk1-Clb ( B-type cyclins ) to allow entry into mitosis. Sic1 must be phosphorylated multiple times in order to be recognized and degraded by Cdc4 of the SCF complex. [ 30 ] Since Cdc4 has only one recognition site for these phosphorylated residues, it was suggested that as the amount of phosphorylation increases, the likelihood that Sic1 is recognized and degraded by Cdc4 increases exponentially. This type of interaction was thought to be relatively immune to the loss of any one site and easily tuned to any given threshold by adjusting the properties of individual sites. Assumptions for the allovalency mechanism were based on a general mathematical model that describes the interaction between a polyvalent disordered ligand and a single receptor site. [ 29 ] It was later found that the ultrasensitivity in Cdk1 levels through degradation of Sic1 is in fact due to a positive feedback loop. [ 31 ]
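The steady state of the covalent modification cycle above has a standard closed-form solution, the Goldbeter–Koshland function, sketched here with illustrative parameters (J1 and J2 denote the Michaelis constants normalized by W_T, matching K1 and K2 above):

```python
# Sketch of the Goldbeter-Koshland function: the steady-state modified
# fraction W' of the covalent modification cycle, as a function of the rate
# ratio V1/V2 and the reduced Michaelis constants J1 = Km1/W_T, J2 = Km2/W_T.
# Small J values correspond to the zero-order (switch-like) regime.
import math

def goldbeter_koshland(v1: float, v2: float, J1: float, J2: float) -> float:
    """Steady-state fraction of modified substrate W'."""
    B = v2 - v1 + J1 * v2 + J2 * v1
    return 2.0 * v1 * J2 / (B + math.sqrt(B * B - 4.0 * (v2 - v1) * v1 * J2))

for ratio in (0.8, 0.95, 1.0, 1.05, 1.25):
    saturated = goldbeter_koshland(ratio, 1.0, 0.01, 0.01)    # zero-order
    unsaturated = goldbeter_koshland(ratio, 1.0, 10.0, 10.0)  # first-order-like
    print(f"V1/V2={ratio:5}  J=0.01: {saturated:.3f}   J=10: {unsaturated:.3f}")
# In the saturated case W' jumps from ~0.04 to ~0.96 as V1/V2 crosses 1,
# while the unsaturated case changes only gradually (~0.44 to ~0.56).
```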
Modeling by Dushek et al. [ 32 ] proposes a possible mechanism for ultrasensitivity outside of the zero-order regime. For the case of membrane-bound enzymes acting on membrane-bound substrates with multiple enzymatic sites (such as tyrosine-phosphorylated receptors like the T-cell receptor), ultrasensitive responses could be seen, crucially dependent on three factors: 1) limited diffusion in the membrane, 2) multiple binding sites on the substrate, and 3) brief enzymatic inactivation following catalysis. Under these particular conditions, although the enzyme may be in excess of the substrate (first-order regime), the enzyme is effectively locally saturated with substrate because of the multiple binding sites, leading to switch-like responses. This mechanism of ultrasensitivity is independent of enzyme concentration; however, the signal is significantly enhanced depending on the number of binding sites on the substrate. [ 32 ] Both conditional factors (limited diffusion and inactivation) are physiologically plausible but have yet to be experimentally confirmed. Dushek's modeling found increasing Hill coefficients with more substrate sites (phosphorylation sites) and with greater steric/diffusional hindrance between enzyme and substrate. This mechanism of ultrasensitivity, based on local enzyme saturation, arises partly from passive properties of slow membrane diffusion and therefore may be generally applicable. The bacterial flagellar motor has been proposed to follow a dissipative allosteric model, in which ultrasensitivity arises from a combination of protein binding affinity and energy contributions from the proton motive force (see Flagellar motors and chemotaxis below).

In a living cell, ultrasensitive modules are embedded in a bigger network with upstream and downstream components. These components may constrain the range of inputs that the module will receive as well as the range of the module's outputs that the network will be able to detect. Altszyler et al. (2014) [ 33 ] studied how the effective ultrasensitivity of a modular system is affected by these restrictions. They found, for some ultrasensitive motifs, that dynamic range limitations imposed by downstream components can produce effective sensitivities much larger than that of the original module considered in isolation.

Ultrasensitive behavior is typically represented by a sigmoidal curve, as small changes in the stimulus \( [L] \) can trigger large changes in the response \( \theta \). One such relation is the Hill equation (reconstructed here in its standard form): \( \theta = \frac{[L]^n}{K^n + [L]^n} \), where \( K \) is the ligand concentration producing the half-maximal response and \( n \) is the Hill coefficient, which quantifies the steepness of the sigmoidal stimulus–response curve and is therefore a sensitivity parameter. It is often used to assess the cooperativity of a system. A Hill coefficient greater than one is indicative of positive cooperativity, and thus the system exhibits ultrasensitivity. [ 34 ] Systems with a Hill coefficient of 1 are noncooperative and follow classical Michaelis–Menten kinetics. Enzymes exhibiting noncooperative activity are represented by hyperbolic stimulus/response curves, compared to sigmoidal curves for cooperative (ultrasensitive) enzymes. [ 35 ] In mitogen-activated protein kinase (MAPK) signaling (see example below), the ultrasensitivity of the signaling is supported by a sigmoidal stimulus/response curve comparable to that of an enzyme with a Hill coefficient of 4.0–5.0. This is even more ultrasensitive than the cooperative binding activity of hemoglobin, which has a Hill coefficient of 2.8. [ 35 ]
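A short sketch of the Hill equation just given, comparing a Michaelian response with ultrasensitive ones; the half-maximal constant and input values are arbitrary, while n = 2.8 and n = 4 echo the hemoglobin and MAPK figures quoted above.

```python
# Hill equation theta = L**n / (K**n + L**n): n = 1 gives a hyperbolic
# (Michaelian) curve, n > 1 a sigmoidal, increasingly switch-like one.
import numpy as np

def hill(L, K=1.0, n=1.0):
    return L**n / (K**n + L**n)

L = np.array([0.2, 0.5, 1.0, 2.0, 5.0])    # stimulus, in units of K
for n in (1, 2.8, 4):
    print(f"n = {n}:", np.round(hill(L, n=n), 3))
# For n = 1 the output rises gradually; for n = 4 almost the whole
# transition is compressed into the neighborhood of L = K.
```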
From an operational point of view, the Hill coefficient can be calculated as \( n_H = \frac{\log 81}{\log\!\left( EC_{90}/EC_{10} \right)} \) (the standard Goldbeter–Koshland estimate), where \( EC_{10} \) and \( EC_{90} \) are the input values needed to produce 10% and 90% of the maximal response, respectively. Global sensitivity measures such as the Hill coefficient do not characterize the local behavior of s-shaped curves. Instead, these features are well captured by the response coefficient measure, [ 36 ] defined as the logarithmic gain \( R(I) = \frac{d \ln O}{d \ln I} \) of the output \( O \) with respect to the input \( I \). In systems biology, such system responses are referred to as control coefficients. Specifically, the concentration control coefficients measure the response of concentrations to changes in a given input. In addition, within the framework of the more general biochemical control analysis, such responses can be described in terms of the individual local responses, called the elasticities. Altszyler et al. (2017) have shown that these ultrasensitivity measures can be linked: the global Hill coefficient equals the mean of the local response coefficient taken over the Hill input working range, [ 37 ] where \( \langle X \rangle_{a,b} \) denotes the mean value of the variable \( X \) over the range \( [a, b] \).

Consider two coupled ultrasensitive modules, disregarding effects of sequestration of molecular components between layers. In this case, the expression for the system's dose–response curve, \( F \), results from the mathematical composition of the functions \( f_i \) that describe the input/output relationships of the isolated modules \( i = 1, 2 \): \( F(I) = f_2(f_1(I)) \). Brown et al. (1997) [ 38 ] have shown that the local ultrasensitivities of the different layers combine multiplicatively: by the chain rule for logarithmic derivatives, \( R_F = R_2 \cdot R_1 \). In connection with this result, Ferrell et al. (1997) [ 39 ] showed, for Hill-type modules, that the overall cascade's global ultrasensitivity has to be less than or equal to the product of the global ultrasensitivity estimates of each of the cascade's layers, \( n \le n_1 n_2 \), where \( n_1 \) and \( n_2 \) are the Hill coefficients of modules 1 and 2, respectively. Altszyler et al. (2017) [ 37 ] have shown that the cascade's global ultrasensitivity can be calculated analytically, with \( X10_i \) and \( X90_i \) delimiting the Hill input working range of the composite system, i.e. the input values for layer \( i \) such that the last layer (corresponding to \( i = 2 \) in this case) reaches 10% and 90% of its maximal output level. It follows that the system's Hill coefficient \( n \) can be written as the product of two factors, \( \nu_1 \) and \( \nu_2 \), which characterize the local average sensitivities over the relevant input region of each layer, \( [X10_i, X90_i] \) with \( i = 1, 2 \): \( n = \nu_1 \nu_2 \). For the more general case of a cascade of \( N \) modules, the Hill coefficient can be expressed as the corresponding product, \( n = \prod_{i=1}^{N} \nu_i \).
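The operational estimate and the cascade bound above can be checked numerically. In this sketch the module parameters (the K values and layer Hill coefficients) are invented, and the interpolation-based EC10/EC90 estimator is a convenience helper, not a published method.

```python
# Effective Hill coefficient n_H = ln(81)/ln(EC90/EC10) for a single Hill
# module and for the composition of two modules (a two-layer cascade,
# sequestration ignored). Parameters are illustrative.
import numpy as np

def hill(x, K, n):
    return x**n / (K**n + x**n)

def ec(frac, response, inputs):
    """Input at which `response` reaches `frac` of its maximum (interpolated)."""
    return np.interp(frac * response.max(), response, inputs)

x = np.logspace(-3, 3, 20001)
single = hill(x, K=1.0, n=2.0)
cascade = hill(hill(x, K=1.0, n=2.0), K=0.3, n=2.0)   # layer 2 reads layer 1

for name, y in (("single module (n=2)", single), ("two-layer cascade", cascade)):
    nH = np.log(81.0) / np.log(ec(0.9, y, x) / ec(0.1, y, x))
    print(f"{name}: effective n_H = {nH:.2f}")
# The cascade's effective n_H (~3) exceeds either layer's n = 2 but stays at
# or below the product n1*n2 = 4, consistent with Ferrell's bound.
```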
Several authors have reported the existence of supramultiplicative behavior in signaling cascades [ 40 ] [ 33 ] (i.e. the ultrasensitivity of the combination of layers is higher than the product of the individual ultrasensitivities), but in many cases the ultimate origin of supramultiplicativity remained elusive. The framework of Altszyler et al. (2017) [ 37 ] naturally suggested a general scenario in which supramultiplicative behavior can take place: it can occur when, for a given module, the corresponding Hill input working range is located in an input region with local ultrasensitivities higher than the global ultrasensitivity of the respective dose–response curve.

A ubiquitous signaling motif that exhibits ultrasensitivity is the MAPK (mitogen-activated protein kinase) cascade, which can take a graded input signal and produce a switch-like output, such as gene transcription or cell cycle progression. In this common motif, MAPK is activated by an earlier kinase in the cascade, called MAPK kinase, or MAPKK. Similarly, MAPKK is activated by MAPKK kinase, or MAPKKK. These kinases are sequentially phosphorylated when MAPKKK is activated, usually via a signal received by a membrane-bound receptor protein: MAPKKK activates MAPKK, and MAPKK activates MAPK. [ 35 ] Ultrasensitivity arises in this system due to several features, among them the requirement for multisite phosphorylation of the kinases in the cascade. Besides the MAPK cascade, ultrasensitivity has also been reported in muscle glycolysis, in the phosphorylation of isocitrate dehydrogenase, and in the activation of calmodulin-dependent protein kinase II (CaMKII). [ 34 ]

An ultrasensitive switch has also been engineered by combining a simple linear signaling protein (N-WASP) with one to five SH3 interaction modules that have autoinhibitory and cooperative properties. Addition of a single SH3 module created a switch that was activated in a linear fashion by an exogenous SH3-binding peptide; increasing the number of domains increased the ultrasensitivity. A construct with three SH3 modules was activated with an apparent Hill coefficient of 2.7, and a construct with five SH3 modules was activated with an apparent Hill coefficient of 3.9. [ 41 ]

During the G2 phase of the cell cycle, Cdk1 and cyclin B1 form a complex, the maturation promoting factor (MPF). The complex accumulates in the nucleus due to phosphorylation of cyclin B1 at multiple sites, which inhibits nuclear export of the complex. Phosphorylation of the Thr14 and Tyr15 residues of Cdk1 by Wee1 and Myt1 keeps the complex inactive and inhibits entry into mitosis, whereas dephosphorylation of Cdk1 at these residues by the Cdc25C phosphatase activates the complex, which is necessary in order to enter mitosis. Cdc25C phosphatase is present in the cytoplasm, and in late G2 phase it is translocated into the nucleus by signaling such as PLK1 [ 42 ] and PLK3. [ 43 ] The regulated translocation and nuclear accumulation of the multiple required signaling cascade components, MPF and its activator Cdc25C, generates efficient activation of MPF and produces a switch-like, ultrasensitive entry into mitosis. [ 4 ] The figure in [ 4 ] shows different possible mechanisms by which increased regulation of the localization of signaling components by the stimulus (input signal) shifts the output from a Michaelian to an ultrasensitive response. When the stimulus regulates only the inhibition of Cdk1–cyclin B1 nuclear export, the outcome is a Michaelian response, Fig. (a). But if the stimulus can regulate the localization of multiple components of the signaling cascade, i.e. both inhibition of Cdk1–cyclin B1 nuclear export and translocation of Cdc25C to the nucleus, then the outcome is an ultrasensitive response, Fig. (b). As more components of the signaling cascade are regulated and localized by the stimulus (i.e. inhibition of Cdk1–cyclin B1 nuclear export, translocation of Cdc25C to the nucleus, and activation of Cdc25C), the output response becomes more and more ultrasensitive, Fig. (c). [ 4 ]
During mitosis, mitotic spindle orientation is essential for determining the site of cleavage furrowing and the position of daughter cells for subsequent cell fate determination. [ 44 ] This orientation is achieved by polarizing cortical factors and rapidly aligning the spindle with the polarity axis. In fruit flies, three cortical factors have been found to regulate the position of the spindle: the heterotrimeric G protein α subunit (Gαi), [ 45 ] Partner of Inscuteable (Pins), [ 46 ] and Mushroom body defect (Mud). [ 47 ] Gαi localizes at the apical cortex to recruit Pins. Upon binding to GDP-bound Gαi, Pins is activated and recruits Mud to achieve a polarized distribution of cortical factors. [ 48 ] The N-terminal tetratricopeptide repeats (TPRs) in Pins are the binding region for Mud, but they are autoinhibited by the intrinsic C-terminal GoLoco domains (GLs) in the absence of Gαi. [ 49 ] [ 50 ] Activation of Pins by Gαi binding to the GLs is highly ultrasensitive and is achieved through the following decoy mechanism: [ 14 ] GLs 1 and 2 act as decoy domains, competing with the regulatory domain, GL3, for Gαi inputs. This intramolecular decoy mechanism allows Pins to establish its threshold and steepness in response to distinct Gαi concentrations. At low Gαi inputs, the decoy GLs 1 and 2 are preferentially bound. At intermediate Gαi concentrations, the decoys are nearly saturated and GL3 begins to be populated. At high Gαi concentrations, the decoys are fully saturated and Gαi binds to GL3, leading to Pins activation. Ultrasensitivity of Pins in response to Gαi ensures that Pins is activated only at the apical cortex, where the Gαi concentration is above the threshold, allowing for maximal Mud recruitment. [ citation needed ]
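The decoy mechanism just described can be captured by a simple mass-action titration model. The site counts and dissociation constants below are hypothetical, chosen only to mimic two tight decoy sites plus one weaker regulatory site.

```python
# Decoy (molecular titration) sketch: tight decoy sites soak up the ligand
# before the weaker regulatory site (GL3-like) is occupied, producing a
# sharp activation threshold. All concentrations/Kd values are invented.
import numpy as np

SITES = np.array([2.0, 1.0])   # [decoy pool, regulatory site] (uM)
KDS   = np.array([0.01, 1.0])  # dissociation constants (uM): decoys are tight

def free_ligand(total):
    """Solve the conservation law total = free + sum(bound) by bisection."""
    lo, hi = 0.0, total
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        bound = np.sum(SITES * mid / (KDS + mid))
        lo, hi = (mid, hi) if mid + bound < total else (lo, mid)
    return 0.5 * (lo + hi)

for total in (0.5, 1.5, 2.5, 4.0, 8.0):   # total ligand (uM), e.g. Galpha-i
    lf = free_ligand(total)
    print(f"ligand = {total:4.1f} uM -> regulatory-site occupancy = "
          f"{lf / (KDS[1] + lf):.3f}")
# Occupancy stays near zero until the ~2 uM of decoy sites is titrated,
# then rises steeply -- a thresholded, ultrasensitive response.
```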
GTPases are enzymes capable of binding and hydrolyzing guanosine triphosphate (GTP). Small GTPases, such as Ran and Ras, can exist in either a GTP-bound form (active) or a GDP-bound form (inactive), and the conversion between these two forms grants them switch-like behavior. [ 51 ] As such, small GTPases are involved in multiple cellular events, including nuclear translocation and signaling. [ 52 ] The transition between the active and inactive states is facilitated by guanine nucleotide exchange factors (GEFs) and GTPase-activating proteins (GAPs). [ 53 ] Computational studies on the switching behavior of GTPases have revealed that the GTPase–GAP–GEF system displays ultrasensitivity. [ 54 ] In their study, Lipshtat et al. simulated the effects of the levels of GEF and GAP activation on the Rap activation signaling network in response to signals from activated α2-adrenergic receptors (α2R), which lead to degradation of the activated Rap GAP. They found that the switching behavior of Rap activation was ultrasensitive to changes in the concentration (i.e. amplitude) and the duration of the α2R signal, yielding Hill coefficients of nH = 2.9 and nH = 1.7, respectively (a Hill coefficient greater than 1 is characteristic of ultrasensitivity [ 55 ] ). The authors confirmed this experimentally by treating neuroblasts with HU-210, which activates Rap through degradation of Rap GAP. Ultrasensitivity was observed both in a dose-dependent manner (nH = 5 ± 0.2), by treating cells with different HU-210 concentrations for a fixed time, and in a duration-dependent manner (nH = 8.6 ± 0.8), by treating cells with a fixed HU-210 concentration for varying times. [ citation needed ] By further studying the system, the authors determined that the degree of responsiveness and ultrasensitivity was heavily dependent on two parameters: the initial ratio kGAP/kGEF, where the k's incorporate both the concentration of active GAP or GEF and their corresponding kinetic rates; and the signal impact, which is the product of the degradation rate of activated GAP and either the signal amplitude or the signal duration. [ 54 ] The parameter kGAP/kGEF affects the steepness of the transition between the two states of the GTPase switch, with higher values (~10) leading to ultrasensitivity, while the signal impact affects the switching point. Therefore, by depending on the ratio of concentrations rather than on individual concentrations, the switch-like behavior of the system can also be displayed outside of the zero-order regime. [ citation needed ]

Persistent stimulation at the neuronal synapse can lead to markedly different outcomes for the post-synaptic neuron. Extended weak signaling can result in long-term depression (LTD), in which activation of the post-synaptic neuron requires a stronger signal than before LTD was initiated. In contrast, long-term potentiation (LTP) occurs when the post-synaptic neuron is subjected to a strong stimulus, and it results in strengthening of the neural synapse (i.e., less neurotransmitter signal is required for activation). In the CA1 region of the hippocampus, the decision between LTD and LTP is mediated solely by the level of intracellular Ca²⁺ at the post-synaptic dendritic spine. Low levels of Ca²⁺ (resulting from low-level stimulation) activate the protein phosphatase calcineurin, which induces LTD. Higher levels of Ca²⁺ result in activation of Ca²⁺/calmodulin-dependent protein kinase II (CaMKII), which leads to LTP. The Ca²⁺ concentration required for a cell to undergo LTP is only marginally higher than that for LTD, and because neurons show bistability (either LTP or LTD) following persistent stimulation, this suggests that one or more components of the system respond in a switch-like, or ultrasensitive, manner. Bradshaw et al. demonstrated that CaMKII (the LTP inducer) responds to intracellular calcium levels in an ultrasensitive manner, with <10% activity at 1.0 μM and ~90% activity at 1.5 μM, corresponding to a Hill coefficient of ~8. Further experiments showed that this ultrasensitivity is mediated by the cooperative binding of two molecules of calmodulin (CaM) to CaMKII and by autophosphorylation of activated CaMKII, which creates a positive feedback loop. [ 56 ] In this way, intracellular calcium can induce a graded, non-ultrasensitive activation of calcineurin at low levels, leading to LTD, whereas the ultrasensitive activation of CaMKII sets a threshold intracellular calcium level above which a positive feedback loop amplifies the signal and leads to the opposite cellular outcome: LTP. Thus, binding of a single substrate to multiple enzymes with different sensitivities facilitates a bistable decision for the cell to undergo LTD or LTP. [ citation needed ]
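A minimal numerical reading of the LTD/LTP decision described above: two Hill curves share the same Ca²⁺ input but differ in steepness. The K values and Hill coefficients are rough figures motivated by the numbers quoted from Bradshaw et al., not fitted parameters.

```python
# Graded calcineurin (n ~ 1) vs ultrasensitive CaMKII (n ~ 8, half-maximal
# near 1.2 uM) responding to the same intracellular Ca2+ level.
import numpy as np

def hill(ca, K, n):
    return ca**n / (K**n + ca**n)

for ca in (0.8, 1.0, 1.2, 1.5):                  # Ca2+ (uM)
    ltd = hill(ca, K=0.7, n=1)                   # calcineurin, graded
    ltp = hill(ca, K=1.2, n=8)                   # CaMKII, switch-like
    print(f"Ca2+ = {ca} uM: calcineurin {ltd:.2f}, CaMKII {ltp:.2f}")
# Below ~1.2 uM calcineurin activity dominates (LTD); a small further rise
# in Ca2+ flips CaMKII nearly fully on, tipping the outcome to LTP.
```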
It has been suggested that zero-order ultrasensitivity may generate thresholds during development, allowing the conversion of a graded morphogen input into a binary, switch-like response. [ 57 ] Melen et al. (2005) have found evidence for such a system in the patterning of the Drosophila embryonic ventral ectoderm. [ 58 ] In this system, graded mitogen-activated protein kinase (MAPK) activity is converted into a binary output: the all-or-none degradation of the Yan transcriptional repressor. They found that MAPK phosphorylation of Yan is both essential and sufficient for Yan's degradation. Consistent with zero-order ultrasensitivity, an increase in Yan protein lengthened the time required for degradation but had no effect on the border of Yan degradation in developing embryos. Their results are consistent with a situation in which a large pool of Yan is either completely degraded or maintained. The particular response of each cell depends on whether the rate of reversible Yan phosphorylation by MAPK is greater or less than the rate of dephosphorylation; a small increase in MAPK activity can thus make phosphorylation the dominant process in the cell and lead to complete degradation of Yan.

A multistep feedback-loop mechanism can also lead to ultrasensitivity. One study engineered synthetic feedback loops using the yeast mating mitogen-activated protein (MAP) kinase pathway as a model system. In the yeast mating pathway, α-factor activates the receptor Ste2, and the activated G-protein β subunit Ste4 recruits the Ste5 complex to the membrane, allowing the membrane-localized PAK-like kinase Ste20 to activate the MAPKKK Ste11. Ste11 and its downstream kinases, Ste7 (MAPKK) and Fus3 (MAPK), are colocalized on the scaffold, and activation of the cascade leads to a transcriptional program. The investigators used pathway modulators that sit outside the core cascade: Ste50, a positive modulator that promotes the activation of Ste11 by Ste20, and Msg5, a negative modulator, a MAPK phosphatase that deactivates Fus3. By constitutively expressing the negative modulator Msg5 and inducibly expressing the positive modulator Ste50, they built a circuit with enhanced ultrasensitive switch-like behavior. The success of this recruitment-based engineering strategy suggests that it may be possible to reprogram cellular responses with high precision. [ 59 ]

The direction of flagellar rotation in E. coli is controlled by the flagellar motor switch. A ring of 34 FliM proteins around the rotor binds CheY, whose phosphorylation state determines whether the motor rotates in a clockwise or counterclockwise manner. The rapid switching mechanism is attributed to an ultrasensitive response with a Hill coefficient of ~10. This system has been proposed to follow a dissipative allosteric model, in which rotational switching is a result of both CheY binding and energy consumption from the proton motive force, which also powers flagellar rotation. [ 60 ]

It has recently been shown that a Michaelian signaling pathway can be converted into an ultrasensitive signaling pathway by the introduction of two positive feedback loops. [ 61 ] In this synthetic biology approach, Palani and Sarkar began with a linear, graded response pathway, one that showed a proportional increase in signal output relative to the amount of signal input over a certain range of inputs. This simple pathway was composed of a membrane receptor, a kinase, and a transcription factor: upon activation, the membrane receptor phosphorylates the kinase, which moves into the nucleus and phosphorylates the transcription factor, which turns on gene expression.
To transform this graded response system into an ultrasensitive, switch-like signaling pathway, the investigators created two positive feedback loops. In the engineered system, activation of the membrane receptor resulted in increased expression of both the receptor itself and the transcription factor. This was accomplished by placing a promoter specific for this transcription factor upstream of both genes. The authors were able to demonstrate that the synthetic pathway displayed high ultrasensitivity and bistability.

A recent computational analysis of the effects of a signaling protein's concentration on the presence of an ultrasensitive response has come to complementary conclusions. Rather than focusing on the generation of signaling proteins through positive feedback, however, the study focused on how the dynamics of a signaling protein's exit from the system influence the response. Soyer, Kuwahara, and Csikász-Nagy [ 62 ] devised a signaling pathway composed of a protein P that possesses two possible states (unmodified P or modified P*) and can be modified by an incoming stimulus E. Furthermore, while the unmodified form P is permitted to enter or leave the system, P* is only allowed to leave (i.e. it is not generated elsewhere). After varying the parameters of this system, the researchers discovered that the modification of P to P* can shift between a graded response and an ultrasensitive response via modification of the exit rates of P and P* relative to each other. The transition between an ultrasensitive response to E and a graded response to E occurred when the two rates went from highly similar to highly dissimilar, irrespective of the kinetics of the conversion of P to P* itself. This finding suggests at least two things: 1) the simplifying assumption that the levels of signaling molecules stay constant in a system can severely limit the understanding of ultrasensitivity's complexity; and 2) it may be possible to induce or inhibit ultrasensitivity artificially by regulating the rates of entry and exit of the signaling molecules occupying a system of interest. It has also been shown that the integration of a given synthetic ultrasensitive module with upstream and downstream components often alters its information-processing capabilities. [ 33 ] These effects must be taken into account in the design process.
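A rough numeric probe of the exit-rate idea above. This is a simplified stand-in for the published model (all rate constants are invented, and the saturable conversion step is one plausible choice); it is meant only to show that the relative exit rates of P and P* tune the effective steepness of the response, not to reproduce the paper's results.

```python
# P enters at rate a and leaves at rate d1*P; stimulus E converts P to P*
# with saturable kinetics; P* leaves at rate d2*P*. The steady-state P is
# found by bisection, and the effective Hill coefficient of P*(E) is
# estimated from the EC10/EC90 ratio.
import numpy as np

def pstar_steady(E, a=1.0, d1=0.05, d2=1.0, k=1.0, Km=0.05):
    lo, hi = 0.0, a / d1                      # bracket the steady-state P
    for _ in range(60):
        P = 0.5 * (lo + hi)
        consumed = d1 * P + E * k * P / (Km + P)
        lo, hi = (P, hi) if consumed < a else (lo, P)
    P = 0.5 * (lo + hi)
    return E * k * P / (Km + P) / d2          # steady-state P*

E = np.logspace(-2, 1, 4001)
for d1 in (0.05, 1.0, 10.0):                  # vary P's exit rate vs d2 = 1
    y = np.array([pstar_steady(e, d1=d1) for e in E])
    ec10 = np.interp(0.1 * y.max(), y, E)
    ec90 = np.interp(0.9 * y.max(), y, E)
    print(f"d1 = {d1}: effective n_H = {np.log(81)/np.log(ec90/ec10):.2f}")
# The effective Hill coefficient shifts with the ratio of exit rates,
# moving the response between more graded and steeper regimes.
```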
https://en.wikipedia.org/wiki/Ultrasensitivity
Ultrasonic antifouling is a technology that uses high-frequency sound (ultrasound) to prevent or reduce biofouling on underwater structures, surfaces, and media. Ultrasound is sound at frequencies above the range of human hearing, though some other animals can hear it; otherwise it has the same physical properties as audible sound. Ultrasonic antifouling has two primary forms: sub-cavitation intensity and cavitation intensity. Sub-cavitation methods create high-frequency vibrations, whilst cavitation methods cause more destructive microscopic pressure changes. Both methods inhibit or prevent biofouling by algae and other single-celled organisms.

Ultrasound was discovered in 1794 when the Italian physiologist and biologist Lazzaro Spallanzani found that bats navigate by the reflection of high-frequency sounds. [ 1 ] Ultrasonic antifouling is believed to have been discovered by the US Navy in the 1950s: during sonar tests on submarines, it was said that the areas surrounding the sonar transducers had less fouling than the rest of the hull. [ 2 ] Antifouling (the removal of biofouling) has been attempted since ancient times, initially using wax, tar or asphalt. Copper and lead sheathings were later introduced by the Phoenicians and Carthaginians. [ 3 ] The Cutty Sark has one example of such copper sheathing, available to view in Greenwich, England.

Ultrasound (ultrasonic) is sound at a frequency too high for humans to hear. Sound has a frequency (low to high) and an intensity (quiet to loud). Ultrasound is used to clean jewellery, weld rubber, treat abscesses, and perform sonography. These applications rely on the interaction of sound with the media through which it travels. In maritime applications, ultrasound is the key ingredient in some sonars; sonar relies on sound at frequencies ranging from infrasonic (below the human hearing range) to ultrasonic.

The three main stages of biofouling are the formation of a conditioning biofilm, microfouling, and macrofouling. A biofilm is the accretion of single-celled organisms onto a surface. This creates a habitat that enables other organisms to establish themselves. The conditioning film collects living and dead bacteria, creating the so-called primary film. [ 3 ]

The two approaches to ultrasonic antifouling are cavitation and sub-cavitation. Cavitation: ultrasound of high enough intensity causes localized vaporisation of water, creating cavitation. This physically annihilates living organisms and the supporting biofilm; one concern with it is the potential effect on the hull. Cavitation [ 4 ] can be predicted mathematically through the calculation of acoustic pressure: where the local pressure falls low enough, the liquid can reach its vaporisation pressure, resulting in localized vaporisation and the formation of small bubbles; these collapse quickly and with tremendous energy and turbulence, generating temperatures on the order of 5,000 K (4,730 °C; 8,540 °F) and pressures on the order of several atmospheres. [ 5 ] Such systems are more appropriate where power consumption is not a factor and the surfaces to be protected can tolerate the forces involved. Sub-cavitation: the sound vibrates the surface(s) (e.g., hull, sea chests, water coolers) to which the transducer is attached. The vibrations prevent the cyprid stage of biofouling species from attaching permanently to the substrate by disrupting the van der Waals forces that allow their microvilli to hold onto the surface. [ 6 ]
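The acoustic-pressure calculation mentioned above can be sketched with textbook plane-wave relations: intensity and peak pressure are linked by I = p²/(2ρc), and cavitation becomes possible roughly when the rarefaction half-cycle drives the local pressure below the vapor pressure. Real thresholds depend on frequency, depth, and nucleation sites, so the numbers below are illustrative only.

```python
# Peak acoustic pressure of a plane wave, p = sqrt(2 * rho * c * I), and a
# crude cavitation criterion: ambient pressure minus p below vapor pressure.
import math

rho, c = 1025.0, 1500.0                 # seawater density (kg/m^3), sound speed (m/s)
p_atm, p_vap = 101_325.0, 2_300.0       # ambient and water vapor pressure (Pa)

for intensity_w_cm2 in (0.1, 1.0, 5.0):
    I = intensity_w_cm2 * 1e4           # W/cm^2 -> W/m^2
    p = math.sqrt(2 * rho * c * I)      # peak pressure amplitude (Pa)
    likely = (p_atm - p) < p_vap        # rarefaction dips below vapor pressure
    print(f"{intensity_w_cm2:4.1f} W/cm^2 -> p = {p/1e3:6.1f} kPa, "
          f"cavitation {'likely' if likely else 'unlikely'}")
# Around 1 W/cm^2 the peak pressure (~175 kPa) already exceeds ambient
# pressure, driving the liquid into tension on rarefaction half-cycles.
```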
Different frequencies and intensities (or powers) of ultrasonic waves have varying effects on different kinds of marine life, such as barnacles, [ 6 ] mussels and algae. The two main components of an ultrasonic antifouling system are the electronic control unit, which generates the drive signal, and the transducers attached to the protected surface. Commercial systems are available in a wide range of energies and configurations; all use ceramic piezoelectric transducers as the sound source.

Ultrasonic algae control is a commercial technology that has been claimed to control the blooming of cyanobacteria, algae, and biofouling in lakes and reservoirs by using pulsed ultrasound. [ 7 ] [ 8 ] Such treatment is supposed to take up to several months, depending on the water volume and algae species. Despite the experimental demonstration of certain bioeffects in small samples under controlled laboratory and sonication conditions, there is as yet no scientific foundation for outdoor ultrasonic algae control. It has been speculated that ultrasound produced at the resonance frequencies of cells or their membranes may cause them to rupture. The center frequencies of the ultrasound pulses used in academic studies lie between 20 kHz and 2.5 MHz. [ 9 ] The acoustic powers, pressures, and intensities applied vary from low, not affecting humans, [ 10 ] [ 11 ] to high, unsafe for swimmers. [ 12 ] According to research at the University of Hull, ultrasound-assisted gas release from blue-green algae cells may take place from nitrogen-containing cells, but only under very specific short-distance conditions that are not representative of the intended outdoor applications. [ 13 ] In addition, a study by Wageningen University on several algae species concluded that most claims about outdoor ultrasonic algae control are unsubstantiated. [ 14 ]

Ultrasonic antifouling systems are generally capable only of maintaining a clean surface; they cannot clean a surface that already has a well-established and mature biofouling infestation. They are therefore a preventive measure, the goal of an ultrasonic antifouling system being to keep the protected surface as close to its optimum clean state as possible. Ultrasonic systems are ineffective on wooden-hulled vessels or vessels made from ferro-cement, as these materials damp the vibrations from the transducers. Composite hulls with a sandwich construction may also require modification to form monolithic plinths of solid material at each transducer location.
https://en.wikipedia.org/wiki/Ultrasonic_algae_control
Ultrasonic impact treatment (UIT) is a metallurgical processing technique, similar to work hardening, in which ultrasonic energy is applied to a metal object. It belongs to the family of high-frequency mechanical impact (HFMI) processes; equivalent terms include ultrasonic needle peening (UNP) and ultrasonic peening (UP). Ultrasonic impact treatment can produce controlled residual compressive stress, grain refinement and grain size reduction. Low- and high-cycle fatigue performance is enhanced, with documented improvements of up to ten times relative to non-UIT specimens.

In UIT, ultrasonic waves are produced by an electromechanical ultrasonic transducer and applied to a workpiece. An acoustically tuned resonator bar is caused to vibrate by energizing it with a magnetostrictive or piezoelectric ultrasonic transducer. The energy generated by these high-frequency impulses is imparted to the treated surface through the contact of specially designed steel pins, which are free to move axially between the resonant body and the treated surface. When the tool, made up of the ultrasonic transducer, pins and other components, comes into contact with the workpiece, it acoustically couples with it, creating harmonic resonance. This harmonic resonance is performed at a carefully calibrated frequency to which metals respond favorably, resulting in compressive residual stress, stress relief and grain structure improvement. Depending on the desired effects of treatment, a combination of different frequencies and displacement amplitudes is applied. Depending on the tool and the original equipment manufacturer, these frequencies range between 15 and 55 kHz, [ 1 ] with a displacement amplitude of the resonant body of between 20 and 80 μm (0.00079 and 0.00315 in).

UIT is highly controllable: incorporating a programmable logic controller (PLC) or a digital ultrasonic generator, the frequency and amplitude are easily set and maintained, removing a significant portion of operator dependency. UIT can also be mechanically controlled, providing repeatability of results from one application to the next; with these types of controlled applications, the surface finish of the workpiece is highly controllable. For many applications, however, UIT is most effectively employed by hand. The high portability of the UIT system enables travel to austere locations and hard-to-reach places, and the flexibility afforded by variations in tool configuration (such as an angled peening head) ensures that access to very tight locations is possible. UIT's effectiveness has been demonstrated on a range of metals.

UIT was originally developed in 1972 and has since been refined by a team of Russian scientists under the leadership of Dr. Efim Statnikov. Originally developed and utilized to enhance the fatigue and corrosion attributes of ship and submarine structures, UIT has been utilized in the aerospace, mining, offshore drilling, shipbuilding, infrastructure, automotive, energy production and other industries. [ 2 ] Different industrial solutions exist nowadays and are commercialized by a limited number of original equipment manufacturers worldwide. UIT enables the life extension of steel bridges. [ 3 ] This technique has been employed in numerous US states as well as in other nations, resulting in a greatly reduced cost of infrastructure. UIT has been certified for this use by AASHTO.
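The operating ranges quoted above imply striking pin kinematics, which a few lines of arithmetic make concrete; the frequency–amplitude pairings below are illustrative combinations within the quoted ranges, not manufacturer specifications.

```python
# For sinusoidal motion at frequency f with displacement amplitude A, the
# peak velocity is 2*pi*f*A and the peak acceleration (2*pi*f)**2 * A.
import math

for f_khz, amp_um in ((15, 80), (27, 30), (55, 20)):   # illustrative pairings
    w = 2 * math.pi * f_khz * 1e3        # angular frequency (rad/s)
    A = amp_um * 1e-6                    # amplitude (m)
    print(f"{f_khz} kHz, {amp_um} um -> v_peak = {w*A:.1f} m/s, "
          f"a_peak = {w*w*A/9.81:,.0f} g")
# Micrometre amplitudes at ultrasonic frequencies give resonator velocities
# of several m/s and accelerations of tens to hundreds of thousands of g.
```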
The use of UIT on draglines and other heavy equipment in the mining industry has resulted in increased production and has decreased downtime and maintenance costs. UIT is employed on drive shafts and crankshafts in a number of industries; results show that UIT can increase shaft life by more than a factor of 3. [ 3 ] The US Navy uses UIT to address cracked areas in certain aluminum decks. Without UIT, crack repairs resulted in almost immediate re-cracking; with UIT, repairs have been shown to last over eight months without cracks.
https://en.wikipedia.org/wiki/Ultrasonic_impact_treatment
An ultrasonic pulse velocity test is an in-situ, nondestructive test to check the quality of concrete and natural rocks. In this test, the strength and quality of concrete or rock is assessed by measuring the velocity of an ultrasonic pulse passing through a concrete structure or natural rock formation. The test is conducted by passing a pulse of ultrasound through the concrete to be tested and measuring the time taken by the pulse to pass through the structure. Higher velocities indicate good quality and continuity of the material, while slower velocities may indicate concrete with many cracks or voids.

Ultrasonic testing equipment includes a pulse-generation circuit, consisting of electronics for generating pulses and a transducer for transforming the electronic pulse into a mechanical pulse with an oscillation frequency in the range of 40 kHz to 50 kHz, and a pulse-reception circuit that receives the signal. [ 1 ] [ 2 ] The transducer, clock, oscillation circuit, and power source are assembled for use. After calibration against a standard sample of material with known properties, the transducers are placed on opposite sides of the material. Pulse velocity is given by a simple formula: \( \text{pulse velocity} = \dfrac{\text{width of structure}}{\text{time taken by the pulse to pass through it}} \). [ 3 ] [ 4 ] [ 5 ] [ 6 ]

Ultrasonic pulse velocity can be used for a range of quality assessments; for example, the test can be used to evaluate the effectiveness of crack repair. [ 9 ] Ultrasonic testing is indicative only, and other tests, such as destructive testing, must be conducted to find the structural and mechanical properties of the material. [ 10 ] [ 11 ] [ 12 ] [ 13 ] A procedure for ultrasonic testing is outlined in ASTM C597-09. [ 9 ] In India, until 2018, ultrasonic testing was conducted according to IS 13311-1992; from 2018, the procedure and specification for the ultrasonic pulse velocity test are outlined in IS 516 Part 5 (Non-destructive testing of concrete), Section 1 (Ultrasonic pulse velocity testing). The test indicates the quality of workmanship and helps to find cracks and defects in concrete. [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] The test is affected or influenced by a number of factors, and it is recommended in some of the testing done by the Indian government to certify and check the construction of residential buildings. [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ]
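A minimal sketch of the calculation above, together with the indicative velocity grading commonly cited from IS 13311 (Part 1); the example path lengths and transit times are made up.

```python
# Pulse velocity = path length / transit time; mm/us is numerically km/s.
def pulse_velocity_km_s(path_length_mm: float, transit_time_us: float) -> float:
    return path_length_mm / transit_time_us

def grade(v_km_s: float) -> str:
    """Indicative concrete-quality grading as tabulated in IS 13311 (Part 1)."""
    if v_km_s > 4.5:
        return "excellent"
    if v_km_s >= 3.5:
        return "good"
    if v_km_s >= 3.0:
        return "medium"
    return "doubtful"

for length_mm, time_us in ((300, 65.0), (300, 80.0), (300, 105.0)):
    v = pulse_velocity_km_s(length_mm, time_us)
    print(f"{length_mm} mm in {time_us} us -> {v:.2f} km/s ({grade(v)})")
```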
https://en.wikipedia.org/wiki/Ultrasonic_pulse_velocity_test
Ultrasonic welding is an industrial process whereby high-frequency ultrasonic acoustic vibrations are locally applied to workpieces held together under pressure to create a solid-state weld. It is commonly used for plastics and metals, and especially for joining dissimilar materials. In ultrasonic welding, no connective bolts, nails, soldering materials, or adhesives are necessary to bind the materials together. When used to join metals, the temperature stays well below the melting point of the materials involved, preventing the unwanted properties that can arise from exposing the metal to high temperatures. [ 1 ] [ 2 ]

Practical application of ultrasonic welding for rigid plastics was achieved in the 1960s; at this point only hard plastics could be welded. The patent for the ultrasonic method for welding rigid thermoplastic parts was awarded to Robert Soloff and Seymour Linsley in 1965. [ 3 ] Soloff, the founder of Sonics & Materials Inc., was a lab manager at Branson Instruments, where thin plastic films were welded into bags and tubes using ultrasonic probes. He unintentionally moved a probe close to a plastic tape dispenser and observed that the halves of the dispenser welded together. He realized that the probe did not need to be manually moved around the part: the ultrasonic energy could travel through and around rigid plastics and weld an entire joint. [ 3 ] He went on to develop the first ultrasonic press. The first application of this new technology was in the toy industry. [ 4 ] The first car made entirely out of plastic was assembled using ultrasonic welding in 1969. [ 4 ] The automotive industry has used it regularly since the 1980s, and it is now used for a multitude of applications. [ 4 ]

For joining complex injection-molded thermoplastic parts, ultrasonic welding equipment can be customized to fit the exact specifications of the parts being welded. The parts are sandwiched between a fixed, shaped nest (anvil) and a sonotrode (horn) connected to a transducer, and a low-amplitude acoustic vibration of roughly 20–70 kHz is emitted. [ citation needed ] When welding plastics, the interface of the two parts is specially designed to concentrate the melting process. One of the materials usually has a spiked or rounded energy director that contacts the second plastic part. The ultrasonic energy melts the point contact between the parts, creating a joint. Ultrasonic welding of thermoplastics causes local melting of the plastic due to absorption of vibrational energy along the joint to be welded. In metals, welding occurs due to high-pressure dispersion of surface oxides and local motion of the materials; although there is heating, it is not enough to melt the base materials. [ clarification needed ] Ultrasonic welding can be used for both hard and soft plastics, such as semicrystalline plastics, and for metals. The understanding of ultrasonic welding has increased with research and testing. The invention of more sophisticated and inexpensive equipment and increased demand for plastic and electronic components have led to a growing knowledge of the fundamental process. [ 4 ] However, many aspects of ultrasonic welding still require further study, such as the relationship of weld quality to process parameters.
Scientists from the Institute of Materials Science and Engineering (WKK) of the University of Kaiserslautern, with support from the German Research Foundation (Deutsche Forschungsgemeinschaft), have succeeded in proving that ultrasonic welding processes can produce highly durable bonds between light metals and carbon-fiber-reinforced polymer (CFRP) sheets. [ 5 ]

A benefit of ultrasonic welding is that there is no drying time as with conventional adhesives or solvents, so the workpieces do not need to remain in a fixture for longer than it takes for the weld to cool. The welding can easily be automated, making clean and precise joints; the site of the weld is very clean and rarely requires any touch-up work. The low thermal impact on the materials involved enables a greater number of materials to be welded together. The process is a good automated alternative to glue, screws or snap-fit designs. Ultrasonic welding is typically used with small parts (e.g. cell phones, consumer electronics, disposable medical tools, toys, etc.), but it can be used on parts as large as a small automotive instrument cluster. [ quantify ] Ultrasonics can also be used to weld metals, but are typically limited to small welds of thin, malleable metals such as aluminum, copper, and nickel. Ultrasonics would not be used in welding the chassis of an automobile or in welding pieces of a bicycle together, due to the power levels required. [ clarification needed ]

All ultrasonic welding systems are composed of the same basic elements: a press to bring the parts together under pressure, a nest or anvil to hold them, an ultrasonic stack (converter, booster, and horn or sonotrode), an electronic ultrasonic generator, and a controller. The applications of ultrasonic welding are extensive and are found in many industries, including electrical and computer, automotive and aerospace, medical, and packaging. Whether two items can be ultrasonically welded is determined by their thickness; if they are too thick, this process will not join them. This is the main obstacle in the welding of metals. However, wires, microcircuit connections, sheet metal, foils, ribbons and meshes are often joined using ultrasonic welding. Ultrasonic welding is a very popular technique for bonding thermoplastics: it is fast and easily automated, with weld times often below one second, and no ventilation system is required to remove heat or exhaust. This type of welding is often used to build assemblies that are too small, too complex, or too delicate for more common welding techniques.

In the electrical and computer industry, ultrasonic welding is often used to join wired connections and to create connections in small, delicate circuits. Junctions of wire harnesses are often joined using ultrasonic welding. [ 6 ] Wire harnesses are large groupings of wires used to distribute electrical signals and power. Electric motors, field coils, transformers and capacitors may also be assembled with ultrasonic welding. [ 7 ] It is also often preferred in the assembly of storage media such as flash drives and computer disks because of the high volumes required. Ultrasonic welding of computer disks has been found to have cycle times of less than 300 ms. [ 8 ] One of the areas in which ultrasonic welding is most used, and where new research and experimentation are centered, is microcircuits. [ 6 ] The process is ideal for microcircuits since it creates reliable bonds without introducing impurities or thermal distortion into components. Semiconductor devices, transistors and diodes are often connected by thin aluminum and gold wires using ultrasonic welding. [ 9 ] It is also used for bonding wiring and ribbons as well as entire chips to microcircuits.
An example of where microcircuits are used is in the medical sensors used to monitor the human heart in bypass patients. One difference between ultrasonic welding and traditional welding is the ability of ultrasonic welding to join dissimilar materials, and the assembly of battery components is a good example of where this ability is utilized. When creating battery and fuel cell components, thin-gauge copper, nickel and aluminium connections, foil layers and metal meshes are often ultrasonically welded together. [ 6 ] Multiple layers of foil or mesh can often be applied in a single weld, eliminating steps and costs.

For automobiles, ultrasonic welding tends to be used to assemble large plastic and electrical components such as instrument panels, door panels, lamps, air ducts, steering wheels, upholstery and engine components. [ 10 ] As plastics have continued to replace other materials in the design and manufacture of automobiles, the assembly and joining of plastic components has increasingly become a critical issue. Some of the advantages of ultrasonic welding are low cycle times, automation, low capital costs, and flexibility. [ 11 ] Ultrasonic welding does not damage the surface finish, because the high-frequency vibrations prevent marks from being generated, which is a crucial consideration for many car manufacturers. [ 10 ]

Ultrasonic welding is generally utilized in the aerospace industry for joining thin-gauge sheet metals and other lightweight materials. Aluminum is a difficult metal to weld using traditional techniques because of its high thermal conductivity; however, it is one of the easier materials to weld using ultrasonic welding because it is a softer metal and thus a solid-state weld is simple to achieve. [ 12 ] Since aluminum is so widely used in the aerospace industry, it follows that ultrasonic welding is an important manufacturing process there. With the advent of new composite materials, ultrasonic welding is becoming even more prevalent: it has been used for bonding the popular composite material carbon fiber, and numerous studies have been done to find the optimum parameters that will produce quality welds for this material. [ 13 ]

In the medical industry, ultrasonic welding is often used because it does not introduce contaminants or degradation into the weld, and the machines can be specialized for use in clean rooms. [ 14 ] The process can also be highly automated, provides strict control over dimensional tolerances, and does not interfere with the biocompatibility of parts; it therefore increases part quality and decreases production costs. Items such as arterial filters, anesthesia filters, blood filters, IV catheters, dialysis tubes, pipettes, cardiometry reservoirs, blood/gas filters, face masks and IV spikes/filters can all be made using ultrasonic welding. [ 15 ] Another important application of ultrasonic welding in the medical industry is textiles: items like hospital gowns, sterile garments, masks, transdermal patches and textiles for clean rooms can be sealed and sewn using ultrasonic welding. [ 16 ] This prevents contamination and dust production and reduces the risk of infection.

Ultrasonic welding is often used in packaging applications. Many common items are either created or packaged using ultrasonic welding: sealing containers, tubes and blister packs are common applications. Ultrasonic welding is also applied in the packaging of dangerous materials, such as explosives, fireworks and other reactive chemicals.
These items tend to require hermetic sealing , but cannot be subjected to high temperatures. [ 9 ] One example is a butane lighter. This container weld must be able to withstand high pressure and stress and must be airtight to contain the butane. [ 17 ] Another example is the packaging of ammunition and propellants. These packages must be able to withstand high pressure and stress to protect the consumer from the contents. The food industry finds ultrasonic welding preferable to traditional joining techniques, because it is fast, sanitary and can produce hermetic seals. Milk and juice containers are examples of products often sealed using ultrasonic welding. The paper parts to be sealed are coated with plastic, generally polypropylene or polyethylene , and then welded together to create an airtight seal. [ 17 ] The main obstacle to overcome in this process is the setting of the parameters. For example, if over-welding occurs, then the concentration of plastic in the weld zone may be too low and cause the seal to break. If it is under-welded, the seal is incomplete. [ 17 ] Variations in the thicknesses of materials can cause variations in weld quality. Some other food items sealed using ultrasonic welding include candy bar wrappers, frozen food packages and beverage containers. "Sonic agglomeration", a combination of ultrasonic welding and molding , is used to produce compact food ration bars for the US Army's Close Combat Assault Ration project without the use of binders. Dried food is pressed into a mold and welded for an hour, during which food particles become stuck together. [ 18 ] Hazards of ultrasonic welding include exposure to high temperatures and voltages. This equipment should be operated using the safety guidelines provided by the manufacturer to avoid injury. For instance, operators must never place hands or arms near the welding tip when the machine is activated. [ 19 ] Also, operators should be provided with hearing protection and safety glasses. Operators should be informed of government agency regulations for the ultrasonic welding equipment and these regulations should be enforced. [ 20 ] Ultrasonic welding machines require routine maintenance and inspection. Panel doors, housing covers and protective guards may need to be removed for maintenance. [ 19 ] This should be done when the power to the equipment is off and only by the trained professional servicing the machine. Sub-harmonic vibrations, which can create annoying audible noise, may be caused in larger parts near the machine due to the ultrasonic welding frequency. [ 21 ] This noise can be damped by clamping these large parts at one or more locations. Also, high-powered welders with frequencies of 15 kHz and 20 kHz typically emit a potentially damaging high-pitched squeal in the range of human hearing. Shielding this radiating sound can be done using an acoustic enclosure. [ 21 ]
https://en.wikipedia.org/wiki/Ultrasonic_welding
Ultrasound attenuation spectroscopy is a method for characterizing properties of fluids and dispersed particles. It is also known as acoustic spectroscopy. There is an international standard for this method. [ 1 ] [ 2 ] Measurement of attenuation coefficient versus ultrasound frequency yields raw data for further calculation of various system properties. Such raw data are often used in the calculation of the particle size distribution in heterogeneous systems such as emulsions and colloids. In the case of acoustic rheometers, the raw data are converted into extensional viscosity or volume viscosity. Instruments that employ ultrasound attenuation spectroscopy are referred to as acoustic spectrometers.
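The raw measurement behind the method can be sketched as follows: the attenuation coefficient at each frequency is obtained from the amplitude drop across a known sample thickness, α = (20/d)·log10(A0/A) in dB per unit length. The amplitudes below are invented example data.

```python
# Attenuation coefficient (dB/cm) from emitted/received amplitudes across a
# sample of known thickness, evaluated at several frequencies.
import math

def attenuation_db_per_cm(a0: float, a: float, thickness_cm: float) -> float:
    return 20.0 * math.log10(a0 / a) / thickness_cm

# (frequency in MHz, emitted amplitude, received amplitude), 1 cm path
for f_mhz, a0, a in ((3, 1.0, 0.89), (10, 1.0, 0.63), (30, 1.0, 0.22)):
    print(f"{f_mhz:2d} MHz: alpha = {attenuation_db_per_cm(a0, a, 1.0):5.2f} dB/cm")
# The resulting attenuation-vs-frequency spectrum is the raw input for
# particle-size or rheological calculations.
```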
https://en.wikipedia.org/wiki/Ultrasound_attenuation_spectroscopy
Ultrastructural identity is a concept in biology. It asserts that evolutionary lineages of eukaryotes in general, and protists in particular, can be distinguished by their complements and arrangements of cellular organelles. These ultrastructural components can be visualized by electron microscopy.

The concept emerged following the application of electron microscopy to protists. Early ultrastructural studies revealed that many previously accepted groupings of protists based on optical microscopy included organisms with differing cellular organelles. Those groups included amoebae, flagellates, heliozoa, radiolaria, sporozoa, slime molds, and chromophytic algae. They were deemed likely to be polyphyletic, and their inclusion in efforts to assemble a phylogenetic tree would cause confusion. As an example of this work, the German cell biologist Christian Bardele established unexpected diversity within the simply organized heliozoa. [ 1 ] [ 2 ] [ 3 ] [ 4 ] His work made it evident that the heliozoa were not monophyletic, and subsequent studies revealed that the heliozoa comprised seven types of organisms: actinophryids, centrohelids, ciliophryids, desmothoracids, dimorphids, gymnosphaerids and taxopodids. [ 5 ]

A critical advance was made by the British phycologist David Hibberd. [ 6 ] He demonstrated that two types of chromophytic algae, previously presumed to be closely related, had different organizations that were revealed by electron microscopy. The number and organization of locomotor organelles differed (chrysophytes: two flagella; haptophytes: two flagella and a haptonema), as did the flagellar surfaces (chrysophytes: with tripartite flagellar hairs, now regarded as apomorphic for stramenopiles; haptophytes: naked), the transitional zone between axoneme and basal body (with a helix in chrysophytes), the flagellar anchorage systems, the presence or absence of embellishments on the cell surface (chrysophytes: naked; haptophytes: with scales), the plastids, especially the eyespot, and the location and functions of the dictyosomes, inter alia. This careful study prompted further examination of algal and flagellate organization.

The protozoologists Brugerolle and Patterson were the first to use the term 'ultrastructural identity', in discussing the differences between ciliates and a lookalike protist, Stephanopogon. [ 7 ] Patterson later applied the concept to all eukaryotes, classifying their diversity into 71 types, each without clear sister-group affinities. [ 8 ] A further 200 or so genera that had not yet been studied by electron microscopy were also listed. The catalog of groups with distinctive ultrastructural identities has been used as a baseline for efforts to build a stable tree for all eukaryotes using molecular data. [ 9 ] An indirect benefit of the focus on ultrastructural characters was that it allowed synapomorphies to be identified for emerging lineages. The molecular protistologist Gunderson and colleagues established that dinoflagellates, apicomplexa and ciliates were likely related. [ 10 ] They, and some related flagellates, were shown to share a distinctive system of sacs, or alveoli, under the cell membrane, and because of this were given the name alveolates. Similarly, the tripartite tubular hairs attached to various algae, fungi and protozoa provided the synapomorphy for the 'stramenopiles' (straw-hairs), [ 11 ] and a distinctive flagellar root system associated with grooving of the cell surface was treated as a synapomorphy of the excavate flagellates. [ 12 ]
https://en.wikipedia.org/wiki/Ultrastructural_identity
In biochemistry, an ultratrace element is a chemical element that normally comprises less than one microgram per gram of a given organism (i.e. less than 0.0001% by weight), but which plays a significant role in its metabolism. Possible ultratrace elements in humans include boron, silicon, nickel, vanadium [ 1 ] and cobalt. [ 2 ] Other possible ultratrace elements in other organisms include bromine, cadmium, fluorine, lead, lithium, and tin. [ 3 ]
https://en.wikipedia.org/wiki/Ultratrace_element
Ultraviolet radiation, also known simply as UV, is electromagnetic radiation of wavelengths of 10–400 nanometers, shorter than that of visible light, but longer than X-rays. UV radiation is present in sunlight and constitutes about 10% of the total electromagnetic radiation output from the Sun. It is also produced by electric arcs, Cherenkov radiation, and specialized lights, such as mercury-vapor lamps, tanning lamps, and black lights.

The photons of ultraviolet have greater energy than those of visible light, from about 3.1 to 12 electron volts, around the minimum energy required to ionize atoms. [ 1 ] : 25–26 Although long-wavelength ultraviolet is not considered an ionizing radiation [ 2 ] because its photons lack sufficient energy, it can induce chemical reactions and cause many substances to glow or fluoresce. Many practical applications, including chemical and biological effects, derive from the way that UV radiation can interact with organic molecules. These interactions can involve exciting orbital electrons to higher energy states in molecules, potentially breaking chemical bonds. In contrast, the main effect of longer-wavelength radiation is to excite the vibrational or rotational states of these molecules, increasing their temperature. [ 1 ] : 28 Short-wave ultraviolet light is ionizing radiation. [ 2 ] Consequently, short-wave UV damages DNA and sterilizes surfaces with which it comes into contact. For humans, suntan and sunburn are familiar effects of exposure of the skin to UV, along with an increased risk of skin cancer. The amount of UV radiation produced by the Sun means that the Earth would not be able to sustain life on dry land if most of that light were not filtered out by the atmosphere. [ 3 ] More energetic, shorter-wavelength "extreme" UV below 121 nm ionizes air so strongly that it is absorbed before it reaches the ground. [ 4 ] However, UV (specifically, UVB) is also responsible for the formation of vitamin D in most land vertebrates, including humans. [ 5 ] The UV spectrum thus has effects both beneficial and detrimental to life.

The lower wavelength limit of the visible spectrum is conventionally taken as 400 nm. Although ultraviolet rays are not generally visible to humans, 400 nm is not a sharp cutoff, with shorter and shorter wavelengths becoming less and less visible in this range. [ 6 ] Insects, birds, and some mammals can see near-UV (NUV), i.e., somewhat shorter wavelengths than humans can see. [ 7 ] Ultraviolet rays are not usable for normal human vision. The lens of the human eye, and surgically implanted lenses produced since 1986, block most radiation in the near-UV wavelength range of 300–400 nm; shorter wavelengths are blocked by the cornea. [ 8 ] Humans also lack color receptor adaptations for ultraviolet rays. The photoreceptors of the retina are sensitive to near-UV, but the lens does not focus this light, causing UV light bulbs to look fuzzy. [ 9 ] [ 10 ] People lacking a lens (a condition known as aphakia) perceive near-UV as whitish-blue or whitish-violet. [ 6 ] Near-UV radiation is visible to insects, some mammals, and some birds. Birds have a fourth color receptor for ultraviolet rays; this, coupled with eye structures that transmit more UV, gives smaller birds "true" UV vision. [ 11 ] [ 12 ] "Ultraviolet" means "beyond violet" (from Latin ultra, "beyond"), violet being the color of the highest frequencies of visible light. Ultraviolet has a higher frequency (and thus a shorter wavelength) than violet light.
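The photon-energy figures quoted above follow directly from E = hc/λ; the short script below evaluates this at the conventional band edges.

```python
# Photon energy E = h*c / lambda at the UV band edges, in electronvolts.
h = 6.62607015e-34      # Planck constant (J*s)
c = 2.99792458e8        # speed of light (m/s)
eV = 1.602176634e-19    # joules per electronvolt

for name, wavelength_nm in (("visible edge", 400), ("UVC edge", 100),
                            ("extreme UV", 10)):
    E = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{name:>12} ({wavelength_nm:3d} nm): {E:6.2f} eV")
# 400 nm -> ~3.1 eV and 100 nm -> ~12.4 eV, matching the quoted 3.1-12 eV
# range; 10 nm photons (~124 eV) are already well into the ionizing regime.
```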
UV radiation was discovered in February 1801 when the German physicist Johann Wilhelm Ritter observed that invisible rays just beyond the violet end of the visible spectrum darkened silver chloride-soaked paper more quickly than violet light itself. He announced the discovery in a very brief letter to the Annalen der Physik [ 13 ] [ 14 ] and later called them "(de-)oxidizing rays" (German: de-oxidierende Strahlen) to emphasize chemical reactivity and to distinguish them from "heat rays", discovered the previous year at the other end of the visible spectrum. The simpler term "chemical rays" was adopted soon afterwards, and remained popular throughout the 19th century, although some said that this radiation was entirely different from light (notably John William Draper, who named them "tithonic rays" [ 15 ] [ 16 ] ). The terms "chemical rays" and "heat rays" were eventually dropped in favor of ultraviolet and infrared radiation, respectively. [ 17 ] [ 18 ] In 1878, it was discovered that short-wavelength light could sterilize by killing bacteria. By 1903, the most effective wavelengths were known to be around 250 nm. In 1960, the effect of ultraviolet radiation on DNA was established. [ 19 ] Ultraviolet radiation with wavelengths below 200 nm, named "vacuum ultraviolet" because it is strongly absorbed by the oxygen in air, was discovered in 1893 by the German physicist Victor Schumann. [ 20 ] The division of UV into UVA, UVB, and UVC was decided "unanimously" by a committee of the Second International Congress on Light on 17 August 1932, at the Castle of Christiansborg in Copenhagen. [ 21 ] The electromagnetic spectrum of ultraviolet radiation (UVR), defined most broadly as 10–400 nanometers, can be subdivided into a number of ranges recommended by the ISO standard ISO 21348. [ 22 ] Several solid-state and vacuum devices have been explored for use in different parts of the UV spectrum. Many approaches seek to adapt visible light-sensing devices, but these can suffer from unwanted response to visible light and various instabilities. Ultraviolet can be detected by suitable photodiodes and photocathodes, which can be tailored to be sensitive to different parts of the UV spectrum. Sensitive UV photomultipliers are available. Spectrometers and radiometers are made for measurement of UV radiation. Silicon detectors are used across the spectrum. [ 23 ] Vacuum UV, or VUV, wavelengths (shorter than 200 nm) are strongly absorbed by molecular oxygen in the air, though the longer wavelengths around 150–200 nm can propagate through nitrogen. Scientific instruments can, therefore, use this spectral range by operating in an oxygen-free atmosphere (pure nitrogen, or argon for shorter wavelengths), without the need for costly vacuum chambers. Significant examples include 193-nm photolithography equipment (for semiconductor manufacturing) and circular dichroism spectrometers. [ 24 ] Technology for VUV instrumentation was largely driven by solar astronomy for many decades. While optics can be used to remove unwanted visible light that contaminates the VUV, detectors in general can be limited by their response to non-VUV radiation, and the development of solar-blind devices has been an important area of research. Wide-gap solid-state devices or vacuum devices with high-cutoff photocathodes can be attractive compared to silicon diodes. Extreme UV (EUV or sometimes XUV) is characterized by a transition in the physics of interaction with matter.
Wavelengths longer than about 30 nm interact mainly with the outer valence electrons of atoms, while wavelengths shorter than that interact mainly with inner-shell electrons and nuclei. The long end of the EUV spectrum is set by a prominent He⁺ spectral line at 30.4 nm. EUV is strongly absorbed by most known materials, but it is possible to synthesize multilayer optics that reflect up to about 50% of EUV radiation at normal incidence. This technology was pioneered by the NIXT and MSSTA sounding rockets in the 1990s, and it has been used to make telescopes for solar imaging. See also the Extreme Ultraviolet Explorer satellite. [ citation needed ] Some sources use the distinction of "hard UV" and "soft UV". For instance, in the case of astrophysics, the boundary may be at the Lyman limit (wavelength 91.2 nm, the energy needed to ionise a hydrogen atom from its ground state), with "hard UV" being more energetic; [ 25 ] the same terms may also be used in other fields, such as cosmetology, optoelectronics, etc. The numerical values of the boundary between hard and soft, even within similar scientific fields, do not necessarily coincide; for example, one applied-physics publication used a boundary of 190 nm between hard and soft UV regions. [ 26 ] Very hot objects emit UV radiation (see black-body radiation). The Sun emits ultraviolet radiation at all wavelengths, including the extreme ultraviolet where it crosses into X-rays at 10 nm. Extremely hot stars (such as O- and B-type) emit proportionally more UV radiation than the Sun. Sunlight in space at the top of Earth's atmosphere (see solar constant) is composed of about 50% infrared light, 40% visible light, and 10% ultraviolet light, for a total intensity of about 1400 W/m² in vacuum. [ 27 ] The atmosphere blocks about 77% of the Sun's UV when the Sun is highest in the sky (at zenith), with absorption increasing at shorter UV wavelengths. At ground level with the sun at zenith, sunlight is 44% visible light, 3% ultraviolet, and the remainder infrared. [ 28 ] [ 29 ] Of the ultraviolet radiation that reaches the Earth's surface, more than 95% is the longer wavelengths of UVA, with the small remainder UVB. Almost no UVC reaches the Earth's surface. [ 30 ] The fraction of UVA and UVB which remains in UV radiation after passing through the atmosphere is heavily dependent on cloud cover and atmospheric conditions. On "partly cloudy" days, patches of blue sky showing between clouds are also sources of (scattered) UVA and UVB, which are produced by Rayleigh scattering in the same way as the visible blue light from those parts of the sky. UVB also plays a major role in plant development, as it affects most of the plant hormones. [ 31 ] During total overcast, the amount of absorption due to clouds is heavily dependent on the thickness of the clouds and on latitude, with no clear measurements correlating specific thickness and absorption of UVA and UVB. [ 32 ] The shorter bands of UVC, as well as even more energetic UV radiation produced by the Sun, are absorbed by oxygen and generate the ozone in the ozone layer, when single oxygen atoms produced by UV photolysis of dioxygen react with more dioxygen. The ozone layer is especially important in blocking most UVB and the remaining part of UVC not already blocked by ordinary oxygen in air. [ citation needed ]
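The roughly 10% UV share of solar output quoted above can be sanity-checked by treating the Sun as a black body near 5772 K and numerically integrating Planck's law over wavelength. This is only a sketch under that black-body assumption: the real solar spectrum departs from it, and the integration bounds are arbitrary choices.

```python
# Rough estimate of the UV share of a 5772 K black body (the Sun's
# effective temperature), by integrating Planck's law over wavelength.
import math

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(lam_m: float, t: float) -> float:
    """Spectral radiance B_lambda of a black body, W / (m^2 * sr * m)."""
    return (2 * H * C**2 / lam_m**5) / math.expm1(H * C / (lam_m * KB * t))

def band_power(lo_nm: float, hi_nm: float, t: float = 5772.0, steps: int = 2000) -> float:
    """Integrate Planck's law over a wavelength band (trapezoidal rule)."""
    total, dl = 0.0, (hi_nm - lo_nm) / steps
    for i in range(steps + 1):
        lam = (lo_nm + i * dl) * 1e-9
        w = 0.5 if i in (0, steps) else 1.0
        total += w * planck(lam, t)
    return total * dl * 1e-9

uv = band_power(100, 400)
total = band_power(100, 4000)   # covers nearly all of the emitted power
print(f"UV fraction ~ {uv / total:.0%}")   # on the order of 10%
```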
Ultraviolet absorbers are molecules used in organic materials (polymers, paints, etc.) to absorb UV radiation in order to reduce the UV degradation (photo-oxidation) of a material. The absorbers can themselves degrade over time, so monitoring of absorber levels in weathered materials is necessary. [ citation needed ] In sunscreen, ingredients that absorb UVA/UVB rays, such as avobenzone, oxybenzone [ 33 ] and octyl methoxycinnamate, are organic chemical absorbers or "blockers". They are contrasted with inorganic absorbers/"blockers" of UV radiation such as titanium dioxide and zinc oxide. [ 34 ] For clothing, the ultraviolet protection factor (UPF) represents the ratio of sunburn-causing UV without and with the protection of the fabric, similar to sun protection factor (SPF) ratings for sunscreen. [ citation needed ] Standard summer fabrics have UPFs around 6, which means that roughly one-sixth (about 17%) of incident UV will pass through. [ citation needed ] Suspended nanoparticles in stained glass prevent UV rays from causing chemical reactions that change image colors. [ citation needed ] A set of stained-glass color-reference chips is planned to be used to calibrate the color cameras for the 2019 ESA Mars rover mission, since they will remain unfaded by the high level of UV present at the surface of Mars. [ citation needed ] Common soda–lime glass, such as window glass, is partially transparent to UVA but opaque to shorter wavelengths: it passes about 90% of the light above 350 nm while blocking over 90% of the light below 300 nm. [ 35 ] [ 36 ] [ 37 ] A study found that car windows allow 3–4% of ambient UV to pass through, especially at wavelengths longer than 380 nm. [ 38 ] Other types of car windows can reduce transmission of UV at wavelengths longer than 335 nm. [ 38 ] Fused quartz, depending on quality, can be transparent even to vacuum UV wavelengths. Crystalline quartz and some crystals such as CaF₂ and MgF₂ transmit well down to 150 nm or 160 nm wavelengths. [ 39 ] Wood's glass is a deep violet-blue barium-sodium silicate glass with about 9% nickel(II) oxide, developed during World War I to block visible light for covert communications. It allows both infrared daylight and ultraviolet night-time communications by being transparent between 320 nm and 400 nm as well as to the longer infrared and just-barely-visible red wavelengths. Its maximum UV transmission is at 365 nm, one of the wavelengths of mercury lamps. [ citation needed ] A black light lamp emits long-wave UVA radiation and little visible light. Fluorescent black light lamps work similarly to other fluorescent lamps, but use a phosphor on the inner tube surface which emits UVA radiation instead of visible light. Some lamps use a deep-bluish-purple Wood's glass optical filter that blocks almost all visible light with wavelengths longer than 400 nanometers. [ 40 ] The purple glow given off by these tubes is not the ultraviolet itself, but visible purple light from mercury's 404 nm spectral line which escapes being filtered out by the coating. Other black lights use plain glass instead of the more expensive Wood's glass, so they appear light-blue to the eye when operating. [ citation needed ] Incandescent black lights are also produced, using a filter coating on the envelope of an incandescent bulb that absorbs visible light (see section below). These are cheaper but very inefficient, emitting only a small fraction of a percent of their power as UV. Mercury-vapor black lights in ratings up to 1 kW, with UV-emitting phosphor and an envelope of Wood's glass, are used for theatrical and concert displays. [ citation needed ]
Black lights are used in applications in which extraneous visible light must be minimized, mainly to observe fluorescence, the colored glow that many substances give off when exposed to UV light. UVA/UVB-emitting bulbs are also sold for other special purposes, such as tanning lamps and reptile husbandry. [ citation needed ] Shortwave UV lamps are made using a fluorescent lamp tube with no phosphor coating, composed of fused quartz or Vycor, since ordinary glass absorbs UVC. These lamps emit ultraviolet light with two peaks in the UVC band, at 253.7 nm and 185 nm, due to the mercury within the lamp, as well as some visible light. From 85% to 90% of the UV produced by these lamps is at 253.7 nm, whereas only 5–10% is at 185 nm. [ 41 ] The fused quartz tube passes the 253.7 nm radiation but blocks the 185 nm wavelength. Such tubes have two or three times the UVC power of a regular fluorescent lamp tube. These low-pressure lamps have a typical efficiency of approximately 30–40%, meaning that for every 100 watts of electricity consumed by the lamp, they will produce approximately 30–40 watts of total UV output. They also emit bluish-white visible light, due to mercury's other spectral lines. These "germicidal" lamps are used extensively for disinfection of surfaces in laboratories and food-processing industries. [ 42 ] "Black light" incandescent lamps are also made from an incandescent light bulb with a filter coating that absorbs most visible light. Halogen lamps with fused quartz envelopes are used as inexpensive UV light sources in the near-UV range, from 400 to 300 nm, in some scientific instruments. Because of its black-body spectrum, a filament light bulb is a very inefficient ultraviolet source, emitting only a fraction of a percent of its energy as UV. Specialized UV gas-discharge lamps containing different gases produce UV radiation at particular spectral lines for scientific purposes. Argon and deuterium arc lamps are often used as stable sources, either windowless or with various windows such as magnesium fluoride. [ 43 ] These are often the emitting sources in UV spectroscopy equipment for chemical analysis. [ citation needed ] Other UV sources with more continuous emission spectra include xenon arc lamps (commonly used as sunlight simulators), deuterium arc lamps, mercury-xenon arc lamps, and metal-halide arc lamps. [ citation needed ] The excimer lamp, a UV source developed in the early 2000s, is seeing increasing use in scientific fields. It has the advantages of high intensity, high efficiency, and operation at a variety of wavelength bands into the vacuum ultraviolet. [ citation needed ] Light-emitting diodes (LEDs) can be manufactured to emit radiation in the ultraviolet range. In 2019, following significant advances over the preceding five years, UVA LEDs of 365 nm and longer wavelength were available, with efficiencies of 50% at 1.0 W output. Currently, the most common types of UV LEDs are at 395 nm and 365 nm wavelengths, both of which are in the UVA spectrum. The rated wavelength is the peak wavelength that the LEDs emit, but light at both longer and shorter wavelengths is present. [ 44 ] The cheaper and more common 395 nm UV LEDs are much closer to the visible spectrum, and give off a purple color. Other UV LEDs deeper into the spectrum do not emit as much visible light.
[ 45 ] LEDs are used for applications such as UV curing, charging glow-in-the-dark objects such as paintings or toys, and lights for detecting counterfeit money and bodily fluids. UV LEDs are also used in digital print applications and inert UV curing environments. As technological advances beginning in the early 2000s have improved their output and efficiency, they have become increasingly viable alternatives to more traditional UV lamps for use in UV curing applications, and the development of new UV LED curing systems for higher-intensity applications is a major subject of research in the field of UV curing technology. [ 46 ] UVC LEDs are developing rapidly, but may require testing to verify effective disinfection; citations for large-area disinfection are for non-LED UV sources [ 47 ] known as germicidal lamps. [ 48 ] They are also used as line sources to replace deuterium lamps in liquid chromatography instruments. [ 49 ] Gas lasers, laser diodes, and solid-state lasers can be manufactured to emit ultraviolet rays, and lasers are available that cover the entire UV range. The nitrogen gas laser uses electronic excitation of nitrogen molecules to emit a beam that is mostly UV. The strongest ultraviolet lines are at 337.1 nm and 357.6 nm in wavelength. Another type of high-power gas laser is the excimer laser. Excimer lasers are widely used lasers emitting in the ultraviolet and vacuum-ultraviolet wavelength ranges. Presently, UV argon-fluoride (ArF) excimer lasers operating at 193 nm are routinely used in integrated circuit production by photolithography. The current [ timeframe? ] wavelength limit of production of coherent UV is about 126 nm, characteristic of the Ar₂* excimer laser. [ citation needed ] Direct UV-emitting laser diodes are available at 375 nm. [ 50 ] UV diode-pumped solid-state lasers have been demonstrated using cerium-doped lithium strontium aluminum fluoride crystals (Ce:LiSAF), a process developed in the 1990s at Lawrence Livermore National Laboratory. [ 51 ] Wavelengths shorter than 325 nm are commercially generated in diode-pumped solid-state lasers. Ultraviolet lasers can also be made by applying frequency conversion to lower-frequency lasers. [ citation needed ] Ultraviolet lasers have applications in industry (laser engraving), medicine (dermatology and keratectomy), chemistry (MALDI), free-air secure communications, computing (optical storage), and the manufacture of integrated circuits. [ citation needed ] The vacuum ultraviolet (V‑UV) band (100–200 nm) can be generated by nonlinear four-wave mixing in gases, by sum- or difference-frequency mixing of two or more longer-wavelength lasers. The generation is generally done in gases (e.g., krypton or hydrogen, which are two-photon resonant near 193 nm) [ 52 ] or metal vapors (e.g., magnesium). By making one of the lasers tunable, the V‑UV can be tuned. If one of the lasers is resonant with a transition in the gas or vapor, then the V‑UV production is intensified. However, resonances also generate wavelength dispersion, and thus the phase matching can limit the tunable range of the four-wave mixing. Difference-frequency mixing (i.e., f₁ + f₂ − f₃) has an advantage over sum-frequency mixing because the phase matching can provide greater tuning. [ 52 ] In particular, difference-frequency mixing two photons of an ArF (193 nm) excimer laser with a tunable visible or near-IR laser in hydrogen or krypton provides resonantly enhanced tunable V‑UV covering 100 nm to 200 nm. [ 52 ] Practically, the lack of suitable gas/vapor-cell window materials above the lithium fluoride cut-off wavelength limits the tuning range to wavelengths longer than about 110 nm. Tunable V‑UV wavelengths down to 75 nm have been achieved using window-free configurations. [ 53 ]
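The frequency bookkeeping behind difference-frequency mixing is simple to sketch. Assuming the scheme just described (two fixed 193 nm ArF photons minus one tunable photon), the generated wavelength follows from 1/λ_out = 2/λ_pump − 1/λ_tunable. Actual coverage depends on phase matching and resonances, so the numbers below are illustrative only.

```python
# Output wavelength for four-wave difference-frequency mixing,
# using f_out = f1 + f2 - f3 with f1 = f2 = the ArF pump frequency.
def dfm_wavelength_nm(pump_nm: float, tunable_nm: float) -> float:
    """Output wavelength for 2*f_pump - f_tunable mixing, in nm."""
    f_out = 2.0 / pump_nm - 1.0 / tunable_nm  # frequencies in units of 1/nm
    return 1.0 / f_out

for lam3 in (400.0, 700.0, 1000.0):
    print(f"tunable {lam3:6.1f} nm -> VUV {dfm_wavelength_nm(193.0, lam3):6.1f} nm")
# Sweeping the tunable laser from the visible into the near-IR moves the
# output across roughly 107-127 nm for these inputs.
```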
Lasers have been used to indirectly generate non-coherent extreme UV (E‑UV) radiation at 13.5 nm for extreme ultraviolet lithography. The E‑UV is not emitted by the laser, but rather by electron transitions in an extremely hot tin or xenon plasma, which is excited by an excimer laser. [ 54 ] This technique does not require a synchrotron, yet can produce UV at the edge of the X‑ray spectrum. Synchrotron light sources can also produce all wavelengths of UV, including those at the boundary of the UV and X‑ray spectra at 10 nm. [ citation needed ] The impact of ultraviolet radiation on human health has implications for the risks and benefits of sun exposure and is also implicated in issues such as fluorescent lamps and health. Getting too much sun exposure can be harmful, but in moderation, sun exposure is beneficial. [ 55 ] UV (specifically, UVB) causes the body to produce vitamin D, [ 56 ] which is essential for life. Humans need some UV radiation to maintain adequate vitamin D levels. According to the World Health Organization: [ 57 ] There is no doubt that a little sunlight is good for you! But 5–15 minutes of casual sun exposure of hands, face and arms two to three times a week during the summer months is sufficient to keep your vitamin D levels high. Vitamin D can also be obtained from food and supplementation. [ 58 ] Excess sun exposure produces harmful effects, however. [ 57 ] Vitamin D promotes the creation of serotonin, and the production of serotonin is in direct proportion to the degree of bright sunlight the body receives. [ 59 ] Serotonin is thought to provide sensations of happiness, well-being and serenity to human beings. [ 60 ] UV rays also treat certain skin conditions. Modern phototherapy has been used to successfully treat psoriasis, eczema, jaundice, vitiligo, atopic dermatitis, and localized scleroderma. [ 61 ] [ 62 ] In addition, UV radiation, in particular UVB radiation, has been shown to induce cell cycle arrest in keratinocytes, the most common type of skin cell. [ 63 ] As such, sunlight therapy can be a candidate for treatment of conditions such as psoriasis and exfoliative cheilitis, conditions in which skin cells divide more rapidly than usual or necessary. [ 64 ] In humans, excessive exposure to UV radiation can result in acute and chronic harmful effects on the eye's dioptric system and retina. The risk is elevated at high altitudes, and people living in high-latitude areas where snow covers the ground into early summer, and where the sun remains low even at its zenith, are particularly at risk. [ 65 ] Skin, the circadian system, and the immune system can also be affected. [ 66 ] The differential effects of various wavelengths of light on the human cornea and skin are sometimes called the "erythemal action spectrum". [ 67 ] The action spectrum shows that UVA does not cause immediate reaction; rather, UV begins to cause photokeratitis and skin redness (with lighter-skinned individuals being more sensitive) at wavelengths starting near the beginning of the UVB band at 315 nm, with sensitivity rapidly increasing down to 300 nm. The skin and eyes are most sensitive to damage by UV at 265–275 nm, which is in the lower UVC band.
At still shorter wavelengths of UV, damage continues to occur, but the overt effects are not as great, since so little penetrates the atmosphere. The WHO-standard ultraviolet index is a widely publicized measurement of the total strength of the UV wavelengths that cause sunburn on human skin, obtained by weighting UV exposure for action-spectrum effects at a given time and location. This standard shows that most sunburn happens due to UV at wavelengths near the boundary of the UVA and UVB bands. [ citation needed ] Overexposure to UVB radiation can cause not only sunburn but also some forms of skin cancer. However, the degree of redness and eye irritation (which are largely not caused by UVA) do not predict the long-term effects of UV, although they do mirror the direct damage of DNA by ultraviolet. [ 68 ] All bands of UV radiation damage collagen fibers and accelerate aging of the skin. Both UVA and UVB destroy vitamin A in skin, which may cause further damage. [ 69 ] UVB radiation can cause direct DNA damage. [ 70 ] This cancer connection is one reason for concern about ozone depletion and the ozone hole. The most deadly form of skin cancer, malignant melanoma, is mostly caused by DNA damage independent from UVA radiation. This can be seen from the absence of a direct UV signature mutation in 92% of all melanomas. [ 71 ] Occasional overexposure and sunburn are probably greater risk factors for melanoma than long-term moderate exposure. [ 72 ] UVC is the highest-energy, most dangerous type of ultraviolet radiation, and causes adverse effects that can variously be mutagenic or carcinogenic. [ 73 ] In the past, UVA was considered not harmful, or less harmful than UVB, but today it is known to contribute to skin cancer via indirect DNA damage (free radicals such as reactive oxygen species). [ 74 ] UVA can generate highly reactive chemical intermediates, such as hydroxyl and oxygen radicals, which in turn can damage DNA. The DNA damage caused indirectly to skin by UVA consists mostly of single-strand breaks in DNA, while the damage caused by UVB includes direct formation of thymine dimers or cytosine dimers and double-strand DNA breakage. [ 75 ] UVA is immunosuppressive for the entire body (accounting for a large part of the immunosuppressive effects of sunlight exposure), and is mutagenic for basal cell keratinocytes in skin. [ 76 ] UVB photons can cause direct DNA damage. UVB radiation excites DNA molecules in skin cells, causing aberrant covalent bonds to form between adjacent pyrimidine bases, producing a dimer. Most UV-induced pyrimidine dimers in DNA are removed by the process known as nucleotide excision repair, which employs about 30 different proteins. [ 70 ] Those pyrimidine dimers that escape this repair process can induce a form of programmed cell death (apoptosis) or can cause DNA replication errors leading to mutation. [ citation needed ] UVB also damages mRNA. [ 77 ] This triggers a fast pathway that leads to inflammation of the skin and sunburn: the mRNA damage initially triggers a response in ribosomes through a protein known as ZAK-alpha, in what is called a ribotoxic stress response, which acts as a cell-surveillance system. This detection of RNA damage then leads to inflammatory signaling and the recruitment of immune cells. It is this process, not the more slowly detected DNA damage, that produces UVB skin inflammation and acute sunburn. [ 78 ]
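The weighting behind the UV index described above is standardized as the CIE erythemal action spectrum, which is close to 1 in the UVB and falls off steeply through the UVA. The sketch below shows the computation; the flat input spectrum is a made-up placeholder, since a real UV index needs a measured or modeled solar spectrum.

```python
# CIE erythemal action spectrum and the UV-index weighting.
# The spectral irradiance below is a hypothetical placeholder.
def erythemal_weight(lam_nm: float) -> float:
    """CIE erythema action spectrum (dimensionless weighting)."""
    if lam_nm <= 298:
        return 1.0
    if lam_nm <= 328:
        return 10 ** (0.094 * (298 - lam_nm))
    if lam_nm <= 400:
        return 10 ** (0.015 * (140 - lam_nm))
    return 0.0

def uv_index(spectrum, lo=286, hi=400):
    """UVI = 40 m^2/W times the erythemally weighted irradiance (W/m^2)."""
    return 40 * sum(erythemal_weight(l) * spectrum(l) for l in range(lo, hi))  # 1 nm steps

# Hypothetical flat irradiance of 0.001 W/m^2/nm across the band:
print(f"UVI ~ {uv_index(lambda lam: 1e-3):.1f}")
```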
As a defense against UV radiation, the amount of the brown pigment melanin in the skin increases when exposed to moderate (depending on skin type) levels of radiation; this is commonly known as a sun tan. The purpose of melanin is to absorb UV radiation and dissipate the energy as harmless heat, protecting the skin against both direct and indirect DNA damage from the UV. UVA gives a quick tan that lasts for days by oxidizing melanin that is already present and by triggering the release of melanin from melanocytes. UVB yields a tan that takes roughly two days to develop because it stimulates the body to produce more melanin. [ citation needed ] Medical organizations recommend that patients protect themselves from UV radiation by using sunscreen. Five sunscreen ingredients have been shown to protect mice against skin tumors. However, some sunscreen chemicals produce potentially harmful substances if they are illuminated while in contact with living cells. [ 79 ] [ 80 ] The amount of sunscreen that penetrates into the lower layers of the skin may be large enough to cause damage. [ 81 ] Sunscreen reduces the direct DNA damage that causes sunburn by blocking UVB, and the usual SPF rating indicates how effectively this radiation is blocked. SPF is, therefore, also called UVB-PF, for "UVB protection factor". [ 82 ] This rating, however, offers no data about important protection against UVA, [ 83 ] which does not primarily cause sunburn but is still harmful, since it causes indirect DNA damage and is also considered carcinogenic. Several studies suggest that the absence of UVA filters may be the cause of the higher incidence of melanoma found in sunscreen users compared to non-users. [ 84 ] [ 85 ] [ 86 ] [ 87 ] [ 88 ] Some sunscreen lotions contain titanium dioxide, zinc oxide, and avobenzone, which help protect against UVA rays. The photochemical properties of melanin make it an excellent photoprotectant. However, sunscreen chemicals cannot dissipate the energy of the excited state as efficiently as melanin, and therefore, if sunscreen ingredients penetrate into the lower layers of the skin, the amount of reactive oxygen species may be increased. [ 89 ] [ 79 ] [ 80 ] [ 90 ] The amount of sunscreen that penetrates through the stratum corneum may or may not be large enough to cause damage. In an experiment by Hanson et al. published in 2006, the amount of harmful reactive oxygen species (ROS) was measured in untreated and in sunscreen-treated skin. In the first 20 minutes, the film of sunscreen had a protective effect and the number of ROS was smaller. After 60 minutes, however, the amount of absorbed sunscreen was so high that the amount of ROS was higher in the sunscreen-treated skin than in the untreated skin. [ 89 ] The study indicates that sunscreen must be reapplied within two hours in order to prevent UV radiation from penetrating to sunscreen-infused live skin cells. [ 89 ] Ultraviolet radiation can aggravate several skin conditions and diseases, including [ 91 ] systemic lupus erythematosus, Sjögren's syndrome, Senear–Usher syndrome, rosacea, dermatomyositis, Darier's disease, Kindler–Weary syndrome and porokeratosis. [ 92 ]
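Both SPF and UPF, discussed above, are defined as ratios of the UV dose required to produce erythema with protection to the dose required without it, so the fraction of erythemally weighted UV getting through is roughly the reciprocal of the rating. A toy illustration of that arithmetic:

```python
# Protection factors (SPF for sunscreen, UPF for fabric) are dose ratios,
# so transmitted erythemal UV is approximately 1 / factor.
def transmitted_fraction(protection_factor: float) -> float:
    return 1.0 / protection_factor

for pf in (6, 15, 30, 50):
    print(f"PF {pf:>2}: ~{transmitted_fraction(pf):.0%} transmitted")
# PF 6 (typical summer fabric) -> ~17%; SPF 30 sunscreen -> ~3%.
```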
The eye is most sensitive to damage by UV in the lower UVC band at 265–275 nm. Radiation of this wavelength is almost absent from sunlight at the surface of the Earth but is emitted by artificial sources such as the electrical arcs employed in arc welding. Unprotected exposure to these sources can cause "welder's flash" or "arc eye" (photokeratitis) and can lead to cataracts, pterygium and pinguecula formation. To a lesser extent, UVB in sunlight from 310 to 280 nm also causes photokeratitis ("snow blindness"), and the cornea, the lens, and the retina can be damaged. [ 93 ] Protective eyewear is beneficial to those exposed to ultraviolet radiation. Since light can reach the eyes from the sides, full-coverage eye protection is usually warranted if there is an increased risk of exposure, as in high-altitude mountaineering. Mountaineers are exposed to higher-than-ordinary levels of UV radiation, both because there is less atmospheric filtering and because of reflection from snow and ice. [ 94 ] [ 95 ] Ordinary, untreated eyeglasses give some protection. Most plastic lenses give more protection than glass lenses because, as noted above, glass is transparent to UVA while the common acrylic plastic used for lenses is less so. Some plastic lens materials, such as polycarbonate, inherently block most UV. [ 96 ] UV degradation is one form of polymer degradation that affects plastics exposed to sunlight. The problem appears as discoloration or fading, cracking, loss of strength or disintegration. The effects of attack increase with exposure time and sunlight intensity. The addition of UV absorbers inhibits the effect. Sensitive polymers include thermoplastics and speciality fibers like aramids. UV absorption leads to chain degradation and loss of strength at sensitive points in the chain structure. Aramid rope must be shielded with a sheath of thermoplastic if it is to retain its strength. [ citation needed ] Many pigments and dyes absorb UV and change colour, so paintings and textiles may need extra protection both from sunlight and fluorescent lamps, two common sources of UV radiation. Window glass absorbs some harmful UV, but valuable artifacts need extra shielding. Many museums place black curtains over watercolour paintings and ancient textiles, for example. Since watercolours can have very low pigment levels, they need extra protection from UV. Various forms of picture framing glass, including acrylics (plexiglass), laminates, and coatings, offer different degrees of UV (and visible light) protection. [ citation needed ] Because of its ability to cause chemical reactions and excite fluorescence in materials, ultraviolet radiation has a number of applications. The following table [ 97 ] gives some uses of specific wavelength bands in the UV spectrum. Photographic film responds to ultraviolet radiation, but the glass lenses of cameras usually block radiation shorter than 350 nm. Slightly yellow UV-blocking filters are often used for outdoor photography to prevent unwanted bluing and overexposure by UV rays. For photography in the near UV, special filters may be used. Photography with wavelengths shorter than 350 nm requires special quartz lenses which do not absorb the radiation. Digital camera sensors may have internal filters that block UV to improve color rendition accuracy. Sometimes these internal filters can be removed, or they may be absent, and an external visible-light filter prepares the camera for near-UV photography. A few cameras are designed for use in the UV. [ citation needed ] Photography by reflected ultraviolet radiation is useful for medical, scientific, and forensic investigations, in applications as widespread as detecting bruising of skin, alterations of documents, or restoration work on paintings.
Photography of the fluorescence produced by ultraviolet illumination uses visible wavelengths of light. [ citation needed ] In ultraviolet astronomy, measurements are used to discern the chemical composition of the interstellar medium, and the temperature and composition of stars. Because the ozone layer blocks many UV frequencies from reaching telescopes on the surface of the Earth, most UV observations are made from space. [ 99 ] Corona discharge on electrical apparatus can be detected by its ultraviolet emissions. Corona causes degradation of electrical insulation and emission of ozone and nitrogen oxide. [ 100 ] EPROMs (Erasable Programmable Read-Only Memory) are erased by exposure to UV radiation. These modules have a transparent (quartz) window on the top of the chip that allows the UV radiation in. Colorless fluorescent dyes that emit blue light under UV are added as optical brighteners to paper and fabrics. The blue light emitted by these agents counteracts yellow tints that may be present and causes the colors and whites to appear whiter or more brightly colored. UV fluorescent dyes that glow in the primary colors are used in paints, papers, and textiles, either to enhance color under daylight illumination or to provide special effects when lit with UV lamps. Blacklight paints that contain dyes that glow under UV are used in a number of art and aesthetic applications. [ citation needed ] Amusement parks often use UV lighting to fluoresce ride artwork and backdrops. This often has the side effect of causing riders' white clothing to glow light-purple. [ citation needed ] To help prevent counterfeiting of currency, or forgery of important documents such as driver's licenses and passports, the paper may include a UV watermark or fluorescent multicolor fibers that are visible under ultraviolet light. Postage stamps are tagged with a phosphor that glows under UV rays to permit automatic detection of the stamp and facing of the letter. UV fluorescent dyes are used in many applications (for example, biochemistry and forensics). Some brands of pepper spray will leave an invisible chemical (a UV dye) on a pepper-sprayed attacker that is not easily washed off, which can help police identify the attacker later. In some types of nondestructive testing, UV stimulates fluorescent dyes to highlight defects in a broad range of materials. These dyes may be carried into surface-breaking defects by capillary action (liquid penetrant inspection) or they may be bound to ferrite particles caught in magnetic leakage fields in ferrous materials (magnetic particle inspection). UV is an investigative tool at the crime scene, helpful in locating and identifying bodily fluids such as semen, blood, and saliva. [ 101 ] For example, ejaculated fluids or saliva can be detected by high-power UV sources, irrespective of the structure or colour of the surface the fluid is deposited upon. [ 102 ] UV–vis microspectroscopy is also used to analyze trace evidence, such as textile fibers and paint chips, as well as questioned documents. Other applications include the authentication of various collectibles and art, and detecting counterfeit currency. Even materials not specially marked with UV-sensitive dyes may have distinctive fluorescence under UV exposure, or may fluoresce differently under short-wave versus long-wave ultraviolet. Using multi-spectral imaging it is possible to read illegible papyrus, such as the burned papyri of the Villa of the Papyri or of Oxyrhynchus, or the Archimedes palimpsest.
The technique involves taking pictures of the illegible document using different filters in the infrared or ultraviolet range, finely tuned to capture certain wavelengths of light. Thus, the optimum spectral portion can be found for distinguishing ink from paper on the papyrus surface. Simple NUV sources can be used to highlight faded iron-based ink on vellum. [ 103 ] Ultraviolet light helps detect organic material deposits that remain on surfaces where periodic cleaning and sanitizing may have failed. It is used in the hotel industry, manufacturing, and other industries where levels of cleanliness or contamination are inspected. [ 104 ] [ 105 ] [ 106 ] [ 107 ] Perennial news features for many television news organizations involve an investigative reporter using a similar device to reveal unsanitary conditions in hotels, public toilets, hand rails, and such. [ 108 ] [ 109 ] UV/Vis spectroscopy is widely used as a technique in chemistry to analyze chemical structure, most notably of conjugated systems. UV radiation is often used to excite a given sample, and the fluorescent emission is measured with a spectrofluorometer. In biological research, UV radiation is used for quantification of nucleic acids or proteins. In environmental chemistry, UV radiation can also be used to detect contaminants of emerging concern in water samples. [ 110 ] In pollution control applications, ultraviolet analyzers are used to detect emissions of nitrogen oxides, sulfur compounds, mercury, and ammonia, for example in the flue gas of fossil-fired power plants. [ 111 ] Ultraviolet radiation can detect thin sheens of spilled oil on water, through the high reflectivity of oil films at UV wavelengths, the fluorescence of compounds in the oil, or the absorption of UV created by Raman scattering in water. [ 112 ] UV absorbance can also be used to quantify contaminants in wastewater; absorbance at 254 nm is the measure most commonly used as a surrogate parameter to quantify natural organic matter (NOM). [ 110 ] Another light-based detection method uses a wide spectrum of excitation–emission matrices (EEM) to detect and identify contaminants based on their fluorescence properties. [ 110 ] [ 113 ] EEM can be used to discriminate different groups of NOM based on differences in the light emission and excitation of their fluorophores. NOM with certain molecular structures is reported to have fluorescent properties across a wide range of excitation/emission wavelengths. [ 114 ] [ 110 ] Ultraviolet lamps are also used as part of the analysis of some minerals and gems.
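One widely used form of the 254 nm surrogate described above is SUVA254, the UV absorbance at 254 nm normalized by dissolved organic carbon. The sketch below uses hypothetical sample values, and the threshold in the comment is a common rule of thumb rather than a fixed standard.

```python
# SUVA254: specific UV absorbance, a common surrogate for the aromatic
# fraction of natural organic matter (NOM) in water.
# SUVA = 100 * A254 / (path_cm * DOC), in L/(mg*m).
def suva254(a254: float, doc_mg_per_l: float, path_cm: float = 1.0) -> float:
    """A254: absorbance at 254 nm; DOC: dissolved organic carbon (mg/L)."""
    return 100.0 * a254 / (path_cm * doc_mg_per_l)

# Hypothetical sample: A254 = 0.12 in a 1 cm cell, DOC = 3.0 mg/L.
print(f"SUVA254 = {suva254(a254=0.12, doc_mg_per_l=3.0):.1f} L/(mg*m)")
# Values above ~4 are commonly read as largely aromatic, humic NOM.
```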
In general, ultraviolet detectors use either a solid-state device, such as one based on silicon carbide or aluminium nitride, or a gas-filled tube as the sensing element. UV detectors that are sensitive to UV in any part of the spectrum respond to irradiation by sunlight and artificial light. A burning hydrogen flame, for instance, radiates strongly in the 185- to 260-nanometer range and only very weakly in the IR region, whereas a coal fire emits very weakly in the UV band yet very strongly at IR wavelengths; thus, a fire detector that operates using both UV and IR detectors is more reliable than one with a UV detector alone. Virtually all fires emit some radiation in the UVC band, whereas the Sun's radiation at this band is absorbed by the Earth's atmosphere. The result is that the UV detector is "solar blind", meaning it will not cause an alarm in response to radiation from the Sun, so it can easily be used both indoors and outdoors. UV detectors are sensitive to most fires, including those of hydrocarbons, metals, sulfur, hydrogen, hydrazine, and ammonia. Arc welding, electrical arcs, lightning, X-rays used in nondestructive metal testing equipment (though this is highly unlikely), and radioactive materials can produce levels that will activate a UV detection system. The presence of UV-absorbing gases and vapors will attenuate the UV radiation from a fire, adversely affecting the ability of the detector to detect flames; likewise, the presence of an oil mist in the air or an oil film on the detector window will have the same effect. Ultraviolet radiation is used for very fine resolution photolithography, a procedure wherein a chemical called a photoresist is exposed to UV radiation that has passed through a mask. The exposure causes chemical reactions to occur in the photoresist. After removal of unwanted photoresist, a pattern determined by the mask remains on the sample. Steps may then be taken to "etch" away, deposit on, or otherwise modify areas of the sample where no photoresist remains. Photolithography is used in the manufacture of semiconductors, integrated circuit components, [ 115 ] and printed circuit boards. Photolithography processes used to fabricate electronic integrated circuits presently use 193 nm UV and are experimentally using 13.5 nm UV for extreme ultraviolet lithography. Electronic components that require clear transparency for light to exit or enter (photovoltaic panels and sensors) can be potted using acrylic resins that are cured using UV energy. The advantages are low VOC emissions and rapid curing. Certain inks, coatings, and adhesives are formulated with photoinitiators and resins. When exposed to UV light, polymerization occurs, and so the adhesives harden or cure, usually within a few seconds. Applications include glass and plastic bonding, optical fiber coatings, the coating of flooring, UV coating and paper finishes in offset printing, dental fillings, and decorative fingernail "gels". UV sources for UV curing applications include UV lamps, UV LEDs, and excimer flash lamps. Fast processes such as flexo or offset printing require high-intensity light focused via reflectors onto a moving substrate and medium, so high-pressure Hg (mercury) or Fe (iron-doped) bulbs are used, energized with electric arcs or microwaves. Lower-power fluorescent lamps and LEDs can be used for static applications. Small high-pressure lamps can have light focused and transmitted to the work area via liquid-filled or fiber-optic light guides. The impact of UV on polymers is used for the modification of surface properties (roughness and hydrophobicity) of polymers. For example, a poly(methyl methacrylate) surface can be smoothed by vacuum ultraviolet. [ 116 ] UV radiation is useful in preparing low-surface-energy polymers for adhesives: polymers exposed to UV will oxidize, raising the surface energy of the polymer, and once the surface energy has been raised, the bond between the adhesive and the polymer is stronger. UV-C light is used in air conditioning systems as a method of improving indoor air quality by disinfecting the air and preventing microbial growth. UV-C light is effective at killing or inactivating harmful microorganisms, such as bacteria, viruses, mold, and mildew. When integrated into an air conditioning system, the ultraviolet light is typically placed in areas like the air handler or near the evaporator coil.
In air conditioning systems, UV-C light works by irradiating the airflow within the system, killing or neutralizing harmful microorganisms before they are recirculated into the indoor environment. Its effectiveness in air conditioning systems depends on factors such as the intensity of the light, the duration of exposure, the airflow speed, and the cleanliness of system components. [ 117 ] [ 118 ] Using a catalytic chemical reaction from titanium dioxide and UVC exposure, oxidation of organic matter converts pathogens, pollens, and mold spores into harmless inert byproducts. However, the reaction of titanium dioxide and UVC is not a straight path: several hundred reactions occur before the inert-byproduct stage, and intermediates such as formaldehyde, other aldehydes, and further VOCs can be created en route to the final stage. Thus, the use of titanium dioxide with UVC requires very specific parameters for a successful outcome. The cleansing mechanism of UV is a photochemical process. Contaminants in the indoor environment are almost entirely organic carbon-based compounds, which break down when exposed to high-intensity UV at 240 to 280 nm. Short-wave ultraviolet radiation can destroy DNA in living microorganisms. [ 119 ] UVC's effectiveness is directly related to intensity and exposure time. UV has also been shown to reduce gaseous contaminants such as carbon monoxide and VOCs. [ 120 ] [ 121 ] [ 122 ] UV lamps radiating at 184 and 254 nm can remove low concentrations of hydrocarbons and carbon monoxide if the air is recycled between the room and the lamp chamber. This arrangement prevents the introduction of ozone into the treated air. Likewise, air may be treated by passing it by a single UV source operating at 184 nm and then over iron pentaoxide to remove the ozone produced by the UV lamp. Ultraviolet lamps are used to sterilize workspaces and tools used in biology laboratories and medical facilities. Commercially available low-pressure mercury-vapor lamps emit about 86% of their radiation at 254 nanometers (nm), while the peak of the germicidal effectiveness curve is near 265 nm. UV at these germicidal wavelengths damages a microorganism's DNA/RNA so that it cannot reproduce, making it harmless (even though the organism may not be killed). [ 123 ] Since microorganisms can be shielded from ultraviolet rays in small cracks and other shaded areas, these lamps are used only as a supplement to other sterilization techniques. UVC LEDs are relatively new to the commercial market and are gaining in popularity. [ failed verification ] [ 124 ] Due to their monochromatic nature (±5 nm), [ failed verification ] these LEDs can target a specific wavelength needed for disinfection. This is especially important given that pathogens vary in their sensitivity to specific UV wavelengths. LEDs are mercury-free, turn on and off instantly, and allow unlimited cycling throughout the day. [ 125 ] Disinfection using UV radiation is commonly used in wastewater treatment applications and is finding increased usage in municipal drinking water treatment. Many bottlers of spring water use UV disinfection equipment to sterilize their water. Solar water disinfection [ 126 ] has been researched for cheaply treating contaminated water using natural sunlight. The UVA irradiation and increased water temperature kill organisms in the water.
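The statement that germicidal effectiveness is directly related to intensity and exposure time is usually formalized as a UV dose (fluence), with inactivation modeled as log-linear in dose. A minimal sketch follows, with a hypothetical D90 value, since sensitivities vary widely by organism and wavelength.

```python
# First-order UV disinfection model: surviving fraction = 10^(-dose/D90),
# where D90 is the dose giving 90% inactivation (one log reduction).
# The D90 figure used below is a hypothetical example.
def log_reduction(intensity_mw_cm2: float, seconds: float, d90_mj_cm2: float) -> float:
    """Log10 reduction for a given UVC intensity, exposure time, and D90."""
    dose_mj_cm2 = intensity_mw_cm2 * seconds   # mW*s/cm^2 equals mJ/cm^2
    return dose_mj_cm2 / d90_mj_cm2

# e.g. 0.2 mW/cm^2 for 60 s against an organism with D90 = 3 mJ/cm^2:
print(f"{log_reduction(0.2, 60, 3.0):.1f} log reduction")  # -> 4.0 logs
```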
Ultraviolet radiation is used in several food processes to kill unwanted microorganisms. UV can be used to pasteurize fruit juices by flowing the juice over a high-intensity ultraviolet source. The effectiveness of such a process depends on the UV absorbance of the juice. Pulsed light (PL) is a technique of killing microorganisms on surfaces using pulses of an intense broad spectrum, rich in UVC between 200 and 280 nm. Pulsed light works with xenon flash lamps that can produce flashes several times per second. Disinfection robots use pulsed UV. [ 127 ] A study of the antimicrobial effectiveness of filtered far-UVC (222 nm) light on a range of pathogens, including bacteria and fungi, showed inhibition of pathogen growth; since far-UVC has lesser harmful effects, the study provides essential insights for reliable disinfection in healthcare settings, such as hospitals and long-term care homes. [ 128 ] UVC has also been shown to be effective at degrading the SARS-CoV-2 virus. [ 129 ] Some animals, including birds, reptiles, and insects such as bees, can see near-ultraviolet wavelengths. Many fruits, flowers, and seeds stand out more strongly from the background in ultraviolet wavelengths as compared to human color vision. Scorpions glow or take on a yellow-to-green color under UV illumination, thus assisting in the control of these arachnids. Many birds have patterns in their plumage that are invisible at usual wavelengths but observable in ultraviolet, and the urine and other secretions of some animals, including dogs, cats, and human beings, are much easier to spot with ultraviolet. Urine trails of rodents can be detected by pest control technicians for proper treatment of infested dwellings. Butterflies use ultraviolet as a communication system for sex recognition and mating behavior. For example, in the Colias eurytheme butterfly, males rely on visual cues to locate and identify females. Instead of using chemical stimuli to find mates, males are attracted to the ultraviolet-reflecting color of female hind wings. [ 130 ] In Pieris napi butterflies, it was shown that females in northern Finland, where less UV radiation is present in the environment, possessed stronger UV signals to attract their males than those occurring further south. This suggested that it was evolutionarily more difficult to increase the UV sensitivity of the eyes of the males than to increase the UV signals emitted by the females. [ 131 ] Many insects use the ultraviolet wavelength emissions from celestial objects as references for flight navigation. A local ultraviolet emitter will normally disrupt the navigation process and will eventually attract the flying insect. The green fluorescent protein (GFP) is often used in genetics as a marker. Many substances, such as proteins, have significant light absorption bands in the ultraviolet that are of interest in biochemistry and related fields. UV-capable spectrophotometers are common in such laboratories. Ultraviolet traps called bug zappers are used to eliminate various small flying insects. They are attracted to the UV and are killed using an electric shock, or trapped once they come into contact with the device. Different designs of ultraviolet radiation traps are also used by entomologists for collecting nocturnal insects during faunistic survey studies. Ultraviolet radiation is helpful in the treatment of skin conditions such as psoriasis and vitiligo. Exposure to UVA while the skin is hyper-photosensitive from taking psoralens is an effective treatment for psoriasis.
Because psoralens can cause damage to the liver, PUVA therapy may be used only a limited number of times over a patient's lifetime. UVB phototherapy does not require additional medications or topical preparations for the therapeutic benefit; only the exposure is needed. However, phototherapy can be effective when used in conjunction with certain topical treatments such as anthralin, coal tar, and vitamin A and D derivatives, or systemic treatments such as methotrexate and Soriatane. [ 132 ] Reptiles need UVB for biosynthesis of vitamin D and other metabolic processes, [ 133 ] specifically of cholecalciferol (vitamin D3), which is needed for basic cellular and neural functioning as well as for the utilization of calcium for bone and egg production. [ citation needed ] The UVA wavelength is also visible to many reptiles and might play a significant role in their ability to survive in the wild as well as in visual communication between individuals. [ citation needed ] Therefore, in a typical reptile enclosure, a fluorescent UVA/UVB source (at the proper strength and spectrum for the species) must be available for many [ which? ] captive species to survive. Simple supplementation with cholecalciferol (vitamin D3) is not enough, because it "leapfrogs" a complete biosynthetic pathway [ which? ] (with risks of possible overdoses); the intermediate molecules and metabolites [ which? ] also play important roles in the animals' health. [ citation needed ] Natural sunlight at the right levels is always superior to artificial sources, but access to it may not be possible for keepers in different parts of the world. [ citation needed ] It is a known problem that high levels of output in the UVA part of the spectrum can cause cellular and DNA damage to sensitive parts of reptiles' bodies, especially the eyes, where an improperly chosen or placed UVA/UVB source can cause photokeratitis and blindness. [ citation needed ] For many keepers there must also be a provision for an adequate heat source; this has resulted in the marketing of heat and light "combination" products. [ citation needed ] Keepers should be careful with these "combination" light/heat and UVA/UVB generators: they typically emit high levels of UVA with lower levels of UVB, at fixed ratios that are difficult to control, so the animals' needs may not be met. [ citation needed ] A better strategy is to use individual sources of these elements, so that they can be placed and controlled by the keepers for the maximum benefit of the animals. [ 134 ] The evolution of early reproductive proteins and enzymes is attributed in modern models of evolutionary theory to ultraviolet radiation. UVB causes thymine base pairs next to each other in genetic sequences to bond together into thymine dimers, a disruption in the strand that reproductive enzymes cannot copy. This leads to frameshifting during genetic replication and protein synthesis, usually killing the cell. Before formation of the UV-blocking ozone layer, when early prokaryotes approached the surface of the ocean, they almost invariably died out. The few that survived had developed enzymes that monitored the genetic material and removed thymine dimers by nucleotide excision repair. Many enzymes and proteins involved in modern mitosis and meiosis are similar to these repair enzymes, and are believed to be evolved modifications of the enzymes originally used to overcome DNA damage caused by UV.
[ 135 ] Elevated levels of ultraviolet radiation, in particular UV-B, have also been suggested as a cause of mass extinctions in the fossil record. [ 136 ] Photobiology is the scientific study of the beneficial and harmful interactions of non-ionizing radiation with living organisms, conventionally demarcated around 10 eV, the first ionization energy of oxygen. UV photons range roughly from 3 to 30 eV in energy; hence photobiology covers some, but not all, of the UV spectrum.
https://en.wikipedia.org/wiki/Ultraviolet
Ultraviolet-sensitive beads (UV beads) are beads that become colorful in the presence of ultraviolet radiation. Ultraviolet rays are present in sunlight and in light from various artificial sources, and can cause sunburn or skin cancer. [ 1 ] The color change in the beads alerts the wearer to the presence of the radiation; the change is an instance of photochromism. When the beads are not exposed to ultraviolet rays, they are colorless and either translucent or opaque. However, when sunlight falls onto the beads, they instantly turn red, orange, yellow, blue, purple, or pink.
https://en.wikipedia.org/wiki/Ultraviolet-sensitive_bead
Ultraviolet astronomy is the observation of electromagnetic radiation at ultraviolet wavelengths between approximately 10 and 320 nanometres ; shorter wavelengths—higher energy photons—are studied by X-ray astronomy and gamma-ray astronomy . [ 1 ] Ultraviolet light is not visible to the human eye . [ 2 ] Most of the light at these wavelengths is absorbed by the Earth's atmosphere, so observations at these wavelengths must be performed from the upper atmosphere or from space. [ 1 ] Ultraviolet line spectrum measurements ( spectroscopy ) are used to discern the chemical composition, densities, and temperatures of the interstellar medium , and the temperature and composition of hot young stars. UV observations can also provide essential information about the evolution of galaxies . They can be used to discern the presence of a hot white dwarf or main sequence companion in orbit around a cooler star. [ 3 ] [ 4 ] The ultraviolet universe looks quite different from the familiar stars and galaxies seen in visible light . Most stars are actually relatively cool objects emitting much of their electromagnetic radiation in the visible or near- infrared part of the spectrum. Ultraviolet radiation is the signature of hotter objects, typically in the early and late stages of their evolution . In the Earth's sky seen in ultraviolet light, most stars would fade in prominence. Some very young massive stars and some very old stars and galaxies, growing hotter and producing higher-energy radiation near their birth or death, would be visible. Clouds of gas and dust would block the vision in many directions along the Milky Way . Space-based solar observatories such as SDO and SOHO use ultraviolet telescopes (called AIA and EIT , respectively) to view activity on the Sun and its corona . Weather satellites such as the GOES-R series also carry telescopes for observing the Sun in ultraviolet. The Hubble Space Telescope and FUSE have been the most recent major space telescopes to view the near and far UV spectrum of the sky, though other UV instruments have flown on smaller observatories such as GALEX , as well as sounding rockets and the Space Shuttle . Pioneers in ultraviolet astronomy include George Robert Carruthers , Robert Wilson , and Charles Stuart Bowyer . See also List of ultraviolet space telescopes
https://en.wikipedia.org/wiki/Ultraviolet_astronomy
The ultraviolet catastrophe, also called the Rayleigh–Jeans catastrophe, was the prediction of late 19th century and early 20th century classical physics that an ideal black body at thermal equilibrium would emit an unbounded quantity of energy as wavelength decreased into the ultraviolet range. [ 1 ] : 6–7 The term "ultraviolet catastrophe" was first used in 1911 by the Austrian physicist Paul Ehrenfest, [ 2 ] but the concept originated with the 1900 statistical derivation of the Rayleigh–Jeans law. The phrase refers to the fact that the classically derived Rayleigh–Jeans law, which accurately predicted experimental results at large wavelengths, failed to do so for short wavelengths. (See the image for further elaboration.) As the theory diverged from empirical observations when these frequencies reached the ultraviolet region of the electromagnetic spectrum, there was a problem. [ 3 ] This problem was later found to be due to a property of quanta as proposed by Max Planck: there could be no fraction of a discrete energy package already carrying minimal energy. Since the first use of this term, it has also been used for other predictions of a similar nature, as in quantum electrodynamics and such cases as ultraviolet divergence. The Rayleigh–Jeans law approximates, through classical arguments, the spectral radiance of electromagnetic radiation from a black body at a given temperature as a function of wavelength. For wavelength \(\lambda\), it is

\[ B_{\lambda}(T) = \frac{2 c k_{\mathrm{B}} T}{\lambda^{4}}, \]

where \(B_{\lambda}\) is the spectral radiance, the power emitted per unit emitting area, per steradian, per unit wavelength; \(c\) is the speed of light; \(k_{\mathrm{B}}\) is the Boltzmann constant; and \(T\) is the temperature in kelvins. For frequency \(\nu\), the expression is instead

\[ B_{\nu}(T) = \frac{2 \nu^{2} k_{\mathrm{B}} T}{c^{2}}. \]

This formula is obtained from the equipartition theorem of classical statistical mechanics, which states that all harmonic oscillator modes (degrees of freedom) of a system at equilibrium have an average energy of \(k_{\mathrm{B}} T\). The "ultraviolet catastrophe" is the expression of the fact that the formula misbehaves at higher frequencies; it predicts infinite energy emission because \(B_{\nu}(T) \to \infty\) as \(\nu \to \infty\). An example, from Mason's A History of the Sciences, [ 4 ] illustrates multi-mode vibration via a piece of string. As a natural vibrator, the string will oscillate with specific modes (the standing waves of a string in harmonic resonance), dependent on the length of the string. In classical physics, a radiator of energy will act as a natural vibrator. Since each mode will have the same energy, most of the energy in a natural vibrator will be in the smaller wavelengths and higher frequencies, where most of the modes are. According to classical electromagnetism, the number of electromagnetic modes in a 3-dimensional cavity, per unit frequency, is proportional to the square of the frequency. This implies that the radiated power per unit frequency should be proportional to frequency squared. Thus, both the power at a given frequency and the total radiated power are unlimited as higher and higher frequencies are considered: this is unphysical, as the total radiated power of a cavity is not observed to be infinite, a point that was made independently by Einstein, Lord Rayleigh, and Sir James Jeans in 1905.
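The divergence is easy to see numerically: evaluating the Rayleigh–Jeans expression at ever shorter wavelengths multiplies the predicted radiance by a factor of 10⁴ for every factor of 10 in wavelength. A brief sketch:

```python
# The Rayleigh-Jeans radiance B_lambda = 2*c*kB*T / lambda^4 grows
# without bound as the wavelength shrinks -- the "catastrophe".
C, KB = 2.99792458e8, 1.380649e-23

def rayleigh_jeans(lam_m: float, t: float) -> float:
    """Classical spectral radiance, W / (m^2 * sr * m)."""
    return 2 * C * KB * t / lam_m**4

for nm in (10000, 1000, 100, 10):
    print(f"{nm:>5} nm: B = {rayleigh_jeans(nm * 1e-9, 5000):.3e}")
# Each factor-of-10 decrease in wavelength multiplies B by 10^4.
```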
Thus, both the power at a given frequency and the total radiated power are unlimited as higher and higher frequencies are considered: this is unphysical, as the total radiated power of a cavity is not observed to be infinite, a point that was made independently by Einstein , Lord Rayleigh , and Sir James Jeans in 1905. In 1900, Max Planck derived the correct form for the intensity spectral distribution function by making some assumptions that were strange for the time. In particular, Planck assumed that electromagnetic radiation can be emitted or absorbed only in discrete packets, called quanta , of energy: $E_{\text{quanta}} = h\nu = \frac{hc}{\lambda}$, where $h$ is the Planck constant, $\nu$ is the frequency, $c$ is the speed of light, and $\lambda$ is the wavelength. By applying this new energy to the partition function in statistical mechanics , Planck's assumptions led to the correct form of the spectral distribution function: $B_{\lambda}(\lambda, T) = \frac{2hc^{2}}{\lambda^{5}} \frac{1}{\exp\left(\frac{hc}{\lambda k_{\mathrm{B}} T}\right) - 1}$, where $T$ is the absolute temperature and $k_{\mathrm{B}}$ is the Boltzmann constant, as above. In 1905, Albert Einstein solved the problem physically by postulating that Planck's quanta were real physical particles – what we now call photons , not just a mathematical fiction. He modified statistical mechanics in the style of Boltzmann to an ensemble of photons. Einstein's photon had an energy proportional to its frequency and also explained an unpublished law of Stokes and the photoelectric effect . [ 5 ] This published postulate was specifically cited by the Nobel Prize in Physics committee in their decision to award the prize for 1921 to Einstein. [ 6 ]
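To make the catastrophe concrete, here is a minimal Python sketch (assuming NumPy is available; the temperature and wavelength grid are illustrative choices, not values from the text) that evaluates both formulas above. The Rayleigh–Jeans radiance grows without bound as the wavelength shrinks, while the Planck radiance turns over and falls to zero:

```python
import numpy as np

# Physical constants (SI units, rounded)
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
kB = 1.381e-23   # Boltzmann constant, J/K

def rayleigh_jeans(lmbda, T):
    """Classical spectral radiance B_lambda = 2*c*kB*T / lambda^4."""
    return 2.0 * c * kB * T / lmbda**4

def planck(lmbda, T):
    """Planck spectral radiance B_lambda = (2hc^2/lambda^5) / (exp(hc/(lambda*kB*T)) - 1)."""
    # At very short wavelengths the exponential overflows to inf,
    # so the radiance underflows to zero, which is the physical limit.
    return (2.0 * h * c**2 / lmbda**5) / np.expm1(h * c / (lmbda * kB * T))

T = 5000.0  # kelvin, an arbitrary illustrative temperature
with np.errstate(over="ignore"):
    for lmbda_nm in (10_000, 1_000, 100, 10):  # wavelengths in nanometres
        lmbda = lmbda_nm * 1e-9
        print(f"{lmbda_nm:>6} nm  RJ = {rayleigh_jeans(lmbda, T):.3e}   "
              f"Planck = {planck(lmbda, T):.3e}")
```

The two laws agree at 10,000 nm but differ by many orders of magnitude in the ultraviolet, which is exactly the divergence the term names.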
https://en.wikipedia.org/wiki/Ultraviolet_catastrophe
In a quantum field theory , one may calculate an effective or running coupling constant that defines the coupling of the theory measured at a given momentum scale. One example of such a coupling constant is the electric charge . In approximate calculations in several quantum field theories, notably quantum electrodynamics and theories of the Higgs particle , the running coupling appears to become infinite at a finite momentum scale. This is sometimes called the Landau pole problem. It is not known whether the appearance of these inconsistencies is an artifact of the approximation, or a real fundamental problem in the theory. However, the problem can be avoided if an ultraviolet or UV fixed point appears in the theory. A quantum field theory has a UV fixed point if its renormalization group flow approaches a fixed point in the ultraviolet (i.e. short length scale/large energy) limit. [ 1 ] This is related to zeroes of the beta-function appearing in the Callan–Symanzik equation . [ 2 ] The large length scale/small energy limit counterpart is the infrared fixed point . Among other things, this means that a theory possessing a UV fixed point need not be an effective field theory , because it is well-defined at arbitrarily small distance scales. At the UV fixed point itself, the theory can behave as a conformal field theory . The converse statement, that any QFT which is valid at all distance scales (i.e. is not an effective field theory) has a UV fixed point, is false. See, for example, cascading gauge theory . Noncommutative quantum field theories have a UV cutoff even though they are not effective field theories. Physicists distinguish between trivial and nontrivial fixed points. If a UV fixed point is trivial (generally known as a Gaussian fixed point), the theory is said to be asymptotically free . On the other hand, a scenario where a non-Gaussian (i.e. nontrivial) fixed point is approached in the UV limit is referred to as asymptotic safety . [ 3 ] Asymptotically safe theories may be well defined at all scales despite being nonrenormalizable in the perturbative sense (according to the classical scaling dimensions ). Steven Weinberg has proposed that the problematic UV divergences appearing in quantum theories of gravity may be cured by means of a nontrivial UV fixed point. [ 4 ] Such an asymptotically safe theory is renormalizable in a nonperturbative sense, and due to the fixed point physical quantities are free from divergences. As yet, a general proof for the existence of the fixed point is lacking, but there is mounting evidence for this scenario. [ 3 ]
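As a purely illustrative toy model, not any specific theory discussed above, the following Python sketch integrates a one-coupling renormalization group flow dg/d(ln μ) = β(g) with β(g) = a·g³ − b·g⁵. For positive a and b this beta function has a nontrivial zero at g* = √(a/b), and flows started on either side of it approach g* as the energy scale grows, mimicking an asymptotically safe UV fixed point:

```python
import math

def beta(g, a=1.0, b=1.0):
    """Toy beta function with a nontrivial (non-Gaussian) zero at g* = sqrt(a/b)."""
    return a * g**3 - b * g**5

def run_coupling(g0, t_max=20.0, dt=1e-3):
    """Integrate dg/dt = beta(g), with t = ln(mu/mu0), by a simple Euler step."""
    g, t = g0, 0.0
    while t < t_max:
        g += beta(g) * dt
        t += dt
    return g

g_star = math.sqrt(1.0 / 1.0)  # fixed point for a = b = 1
print("g* =", g_star)
print("flow from g0 = 0.2:", run_coupling(0.2))  # approaches g* from below
print("flow from g0 = 1.5:", run_coupling(1.5))  # approaches g* from above
```

Both trajectories converge toward g* = 1, the toy analogue of a running coupling that stays finite at arbitrarily high momentum scales.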
https://en.wikipedia.org/wiki/Ultraviolet_fixed_point
Ultraviolet germicidal irradiation (UVGI) is a disinfection technique employing ultraviolet (UV) light, particularly UV-C (180–280 nm), to kill or inactivate microorganisms . UVGI primarily inactivates microbes by damaging their genetic material, thereby inhibiting their capacity to carry out vital functions. [ 1 ] The use of UVGI extends to an array of applications, encompassing food, surface, air, and water disinfection. UVGI devices can inactivate microorganisms including bacteria , viruses , fungi , molds , and other pathogens . [ 2 ] [ 3 ] Recent studies have substantiated the ability of UV-C light to inactivate SARS-CoV-2 , the strain of coronavirus that causes COVID-19 . [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] UV-C wavelengths demonstrate varied germicidal efficacy and effects on biological tissue. [ 9 ] [ 10 ] [ 11 ] Many germicidal lamps, like low-pressure mercury (LP-Hg) lamps with peak emissions around 254 nm, contain UV wavelengths that can be hazardous to humans . [ 12 ] [ 13 ] As a result, UVGI systems have been primarily limited to applications where people are not directly exposed, including hospital surface disinfection, upper-room UVGI , and water treatment . [ 14 ] [ 15 ] [ 16 ] More recently, the application of wavelengths between 200–235 nm, often referred to as far-UVC , has gained traction for surface and air disinfection. [ 11 ] [ 17 ] [ 18 ] These wavelengths are regarded as much safer due to their significantly reduced penetration into human tissue. [ 19 ] [ 20 ] [ 21 ] [ 22 ] Moreover, their efficiency relies on the fact that, in addition to the DNA damage related to the formation of pyrimidine dimers , they provoke substantial DNA photoionization , leading to oxidative damage. [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ] Notably, UV-C light is virtually absent in sunlight reaching the Earth's surface due to the absorptive properties of the ozone layer within the atmosphere . [ 28 ] The development of UVGI traces back to 1878, when Arthur Downes and Thomas Blunt found that sunlight, particularly its shorter wavelengths, hindered microbial growth. [ 29 ] [ 30 ] [ 31 ] Expanding upon this work, Émile Duclaux , in 1885, identified variations in sunlight sensitivity among different bacterial species. [ 32 ] [ 33 ] [ 34 ] A few years later, in 1890, Robert Koch demonstrated the lethal effect of sunlight on Mycobacterium tuberculosis , hinting at UVGI's potential for combating diseases like tuberculosis . [ 35 ] Subsequent studies further defined the wavelengths most efficient for germicidal inactivation. In 1892, it was noted that the UV segment of sunlight had the most potent bactericidal effect. [ 36 ] [ 37 ] Research conducted in the early 1890s demonstrated the superior germicidal efficacy of UV-C compared to UV-A and UV-B. [ 38 ] [ 39 ] [ 40 ] The mutagenic effects of UV were first unveiled in a 1914 study that observed metabolic changes in Bacillus anthracis upon exposure to sublethal doses of UV. [ 41 ] Frederick Gates, in the late 1920s, offered the first quantitative bactericidal action spectra for Staphylococcus aureus and Bacillus coli, noting peak effectiveness at 265 nm. [ 42 ] [ 43 ] [ 44 ] This matched the absorption spectrum of nucleic acids , hinting at DNA damage as the key factor in bacterial inactivation. This understanding was solidified by the 1960s through research demonstrating the ability of UV-C to form thymine dimers , leading to microbial inactivation. [ 45 ] These early findings collectively laid the groundwork for modern UVGI as a disinfection tool.
The utilization of UVGI for air disinfection began in earnest in the mid-1930s. William F. Wells demonstrated in 1935 that airborne infectious organisms, specifically aerosolized B. coli exposed to 254 nm UV, could be rapidly inactivated. [ 46 ] This built upon earlier theories of infectious droplet nuclei transmission put forth by Carl Flügge and Wells himself. [ 47 ] [ 48 ] Prior to this, UV radiation had been studied predominantly in the context of liquid or solid media, rather than airborne microbes. Shortly after Wells' initial experiments, high-intensity UVGI was employed to disinfect a hospital operating room at Duke University in 1936. [ 49 ] The method proved a success, reducing postoperative wound infections from 11.62% without the use of UVGI to 0.24% with the use of UVGI. [ 50 ] Soon, this approach was extended to other hospitals and infant wards using UVGI "light curtains", designed to prevent respiratory cross-infections, with noticeable success. [ 51 ] [ 52 ] [ 53 ] [ 54 ] Adjustments in the application of UVGI saw a shift from "light curtains" to upper-room UVGI, confining germicidal irradiation above human head level. Despite its dependency on good vertical air movement, this approach yielded favorable outcomes in preventing cross-infections. [ 55 ] [ 56 ] [ 57 ] This was exemplified by Wells' successful use of upper-room UVGI between 1937 and 1941 to curtail the spread of measles in suburban Philadelphia day schools. His study found that 53.6% of susceptibles in schools without UVGI became infected, while only 13.3% of susceptibles in schools with UVGI were infected. [ 58 ] Richard L. Riley, initially a student of Wells, continued the study of airborne infection and UVGI throughout the 1950s and 60s, conducting significant experiments in a Veterans Hospital TB ward. Riley successfully demonstrated that UVGI could efficiently inactivate airborne pathogens and prevent the spread of tuberculosis. [ 59 ] [ 60 ] [ 61 ] Despite initial successes, the use of UVGI declined in the second half of the 20th century due to various factors, including a rise in alternative infection control and prevention methods, inconsistent efficacy results, and concerns regarding its safety and maintenance requirements. [ 14 ] However, recent events like a rise in multiple drug-resistant bacteria and the COVID-19 pandemic have renewed interest in UVGI for air disinfection. [ 62 ] [ 63 ] [ 64 ] [ 65 ] The use of UV light for disinfection of drinking water dates back to 1910 in Marseille, France . [ 66 ] The prototype plant was shut down after a short time due to poor reliability. In 1955, UV water treatment systems were applied in Austria and Switzerland; by 1985 about 1,500 plants were employed in Europe. In 1998 it was discovered that protozoa such as Cryptosporidium and Giardia were more vulnerable to UV light than previously thought; this opened the way to wide-scale use of UV water treatment in North America. By 2001, over 6,000 UV water treatment plants were operating in Europe. [ 67 ] Over time, UV costs have declined as researchers develop and use new UV methods to disinfect water and wastewater. Several countries have published regulations and guidance for the use of UV to disinfect drinking water supplies, including the US [ 68 ] [ 69 ] [ 70 ] and the UK. [ 71 ] UV light is electromagnetic radiation with wavelengths shorter than visible light but longer than X-rays . UV is categorised into several wavelength ranges, with short-wavelength UV (UV-C) considered "germicidal UV".
Wavelengths between about 200 nm and 300 nm are strongly absorbed by nucleic acids . The absorbed energy can result in defects including pyrimidine dimers . These dimers can prevent replication or can prevent the expression of necessary proteins, resulting in the death or inactivation of the organism. Recently, it has been shown that these dimers are fluorescent. [ 72 ] This process is similar to, but stronger than, the effect of longer wavelengths ( UV-B ) producing sunburn in humans. Microorganisms have less protection against UV and cannot survive prolonged exposure to it. [ citation needed ] A UVGI system is designed to expose environments such as water tanks , rooms and forced air systems to germicidal UV. Exposure comes from germicidal lamps that emit germicidal UV at the correct wavelength, thus irradiating the environment. The forced flow of air or water through this environment ensures exposure of that air or water. [ citation needed ] The effectiveness of germicidal UV depends on the UV dose, i.e. how much UV light reaches the microbe (measured as radiant exposure ), and how susceptible the microbe is to the given wavelength(s) of UV light, defined by the germicidal effectiveness curve. The UV dose is measured in light energy per area, i.e. radiant exposure or fluence. The fluence a microbe is exposed to is the product of the light intensity (irradiance) and the time of exposure, according to: fluence (J/cm²) = irradiance (W/cm²) × exposure time (s). Likewise, the irradiance depends on the brightness ( radiant intensity , W/sr) of the UV source, the distance between the UV source and the microbe, the attenuation of filters (e.g. fouled glass) in the light path, the attenuation of the medium (e.g. microbes in turbid water), the presence of particles or objects that can shield the microbes from UV, and the presence of reflectors that can direct the same UV light through the medium multiple times. [ 77 ] Additionally, if the microbes are not free-flowing, such as in a biofilm , they will block each other from irradiation. The U.S. Environmental Protection Agency (EPA) published UV dosage guidelines for drinking water treatment applications in 2006. [ 70 ] It is difficult to measure UV dose directly, but it can be estimated from known or measured process inputs such as flow rate (contact time), transmittance, and lamp age or fouling. Bulbs require periodic cleaning and replacement to ensure effectiveness. The lifetime of germicidal UV bulbs varies depending on design. Also, the material that the bulb is made of can absorb some of the germicidal rays. Lamp cooling under airflow can also lower UV output. The UV dose should be calculated using the end-of-lamp-life output (EOL is specified as the number of hours at which the lamp is expected to reach 80% of its initial UV output). Some shatter-proof lamps are coated with a fluorinated ethylene polymer to contain glass shards and mercury in case of breakage; this coating reduces UV output by as much as 20%. UV source intensity is sometimes specified as irradiance at a distance of 1 meter, which can be easily converted to radiant intensity . UV intensity is inversely proportional to the square of the distance, so it decreases at longer distances; conversely, it rapidly increases at distances shorter than 1 m. In the above formula, the UV intensity must always be adjusted for distance unless the UV dose is calculated at exactly 1 m (3.3 ft) from the lamp. The UV dose should be calculated at the furthest distance from the lamp on the periphery of the target area. Increases in fluence can be achieved by using reflection, such that the same light passes through the medium several times before being absorbed.
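A minimal Python sketch of the dose bookkeeping just described (all lamp figures are invented illustrative inputs, not values from the text): fluence is irradiance multiplied by exposure time, with the irradiance first scaled by the inverse-square distance relation and derated for end-of-lamp-life output and a shatter-proof coating:

```python
def irradiance_at(distance_m, irradiance_1m, eol_factor=0.8, coating_factor=0.8):
    """Irradiance (W/cm^2) at a given distance from the lamp.

    irradiance_1m  -- rating at 1 m (manufacturer-style specification)
    eol_factor     -- end-of-lamp-life derating (80% of initial output)
    coating_factor -- shatter-proof coating can cost up to ~20% of output
    """
    return irradiance_1m * eol_factor * coating_factor / distance_m**2

def uv_dose(distance_m, irradiance_1m, seconds):
    """Fluence (J/cm^2) = irradiance x exposure time."""
    return irradiance_at(distance_m, irradiance_1m) * seconds

# Example: a hypothetical lamp rated 100 uW/cm^2 at 1 m, target 2 m away, 60 s exposure
dose = uv_dose(distance_m=2.0, irradiance_1m=100e-6, seconds=60.0)
print(f"delivered dose ~ {dose * 1e6:.0f} uJ/cm^2")  # distance alone contributes a 1/4 factor
```

Doubling the distance cuts the irradiance to a quarter, which is why the dose should be evaluated at the farthest point of the target area.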
Aluminum has the highest UV reflectivity of common metals and is recommended when using UV. [ 78 ] In static applications the exposure time can be as long as needed for an effective UV dose to be reached. In waterflow/airflow disinfection, exposure time can be increased by increasing the illuminated volume, decreasing the fluid speed, or recirculating the air or water repeatedly through the illuminated section. This ensures multiple passes so that the UV is effective against the highest number of microorganisms and will irradiate resistant microorganisms more than once to break them down. Microbes are more susceptible to certain wavelengths of UV light, a dependence described by the germicidal effectiveness curve. For E. coli , the most effective UV light has a wavelength of 265 nm; this applies to most bacteria and does not change significantly for other microbes. Dosages for a 90% kill rate of most bacteria and viruses range between 2,000 and 8,000 μJ/cm². Larger parasites such as Cryptosporidium require a lower dose for inactivation. As a result, the US EPA has accepted UV disinfection as a method for drinking water plants to obtain Cryptosporidium , Giardia or virus inactivation credits. For example, for a 90% reduction of Cryptosporidium , a minimum dose of 2,500 μW·s/cm² is required based on EPA's 2006 guidance manual. [ 70 ] : 1–7 " Sterilization " is often claimed to be achievable. While it is theoretically possible in a controlled environment, it is very difficult to prove, and the term "disinfection" is generally used by companies offering this service so as to avoid legal reprimand. Specialist companies will often advertise a certain log reduction , e.g., 6-log reduction or 99.9999% effective, instead of sterilization. This takes into consideration a phenomenon known as light and dark repair ( photoreactivation and base excision repair , respectively), in which a cell can repair DNA that has been damaged by UV light. Many UVGI systems use UV wavelengths that can be harmful to humans, resulting in both immediate and long-term effects. Acute impacts on the eyes and skin can include conditions such as photokeratitis (often termed "snow blindness") and erythema (reddening of the skin), while chronic exposure may heighten the risk of skin cancer . [ 12 ] [ 13 ] [ 79 ] However, the safety and effects of UV vary extensively by wavelength, implying that not all UVGI systems pose the same level of hazards. Humans typically encounter UV light in the form of solar UV, which comprises significant portions of UV-A and UV-B , but excludes UV-C . The UV-B band, able to penetrate deep into living, replicating tissue, is recognized as the most damaging and carcinogenic . [ 80 ] Many standard UVGI systems, such as low-pressure mercury (LP-Hg) lamps, produce broad-band emissions in the UV-C range and also peaks in the UV-B band. This often makes it challenging to attribute damaging effects to a specific wavelength. [ 81 ] Nevertheless, longer wavelengths in the UV-C band can cause conditions like photokeratitis and erythema. [ 22 ] [ 82 ] Hence, many UVGI systems are used in settings where direct human exposure is limited, such as with upper-room UVGI air cleaners and water disinfection systems. Precautions are commonly implemented to protect users of these UVGI systems. Since the early 2010s there has been growing interest in the far-UVC wavelengths of 200–235 nm for whole-room exposure.
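The log-reduction arithmetic mentioned above can be sketched as follows (Python; the D90 value is a hypothetical input within the 2,000–8,000 μJ/cm² range quoted for a 90% kill). Under a simple exponential model, which ignores light and dark repair, each additional D90 dose removes another factor of ten, so a 6-log reduction needs six times the D90 dose:

```python
import math

def surviving_fraction(dose_uj_cm2, d90_uj_cm2):
    """Single-stage model: each D90 dose increment kills 90% of what remains."""
    return 10.0 ** (-dose_uj_cm2 / d90_uj_cm2)

def log_reduction(dose_uj_cm2, d90_uj_cm2):
    return -math.log10(surviving_fraction(dose_uj_cm2, d90_uj_cm2))

d90 = 4000.0  # uJ/cm^2, hypothetical mid-range D90 for a bacterium
for dose in (4000.0, 12000.0, 24000.0):
    print(f"dose {dose:>7.0f} uJ/cm^2 -> {log_reduction(dose, d90):.0f}-log "
          f"({(1 - surviving_fraction(dose, d90)) * 100:.4f}% inactivated)")
```

The 24,000 μJ/cm² case reproduces the advertised "6-log reduction or 99.9999% effective" figure for this assumed D90.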
These wavelengths are generally considered safer due to their limited penetration depth caused by increased protein absorption. [ 83 ] [ 84 ] This feature confines far-UVC exposure to the superficial layers of tissue , such as the outer layer of dead skin (the stratum corneum ) and the tear film and surface cells of the cornea . [ 22 ] [ 85 ] [ 86 ] [ 87 ] As these tissues do not contain replicating cells, damage to them poses less carcinogenic risk. It has also been demonstrated that far-UVC does not cause erythema or damage to the cornea at levels many times those of solar UV or conventional 254 nm UVGI systems. [ 88 ] [ 89 ] [ 22 ] Exposure limits for UV, particularly the germicidal UV-C range, have evolved over time due to scientific research and changing technology. The American Conference of Governmental Industrial Hygienists (ACGIH) and the International Commission on Non-Ionizing Radiation Protection (ICNIRP) have set exposure limits to safeguard against both immediate and long-term effects of UV exposure. [ 90 ] [ 91 ] These limits, also referred to as Threshold Limit Values (TLVs), form the basis for emission limits in product safety standards. The UV-C photobiological spectral band is defined as 100–280 nm, with limits currently applying only from 180 to 280 nm. This reflects concerns about acute damage such as erythema and photokeratitis as well as long-term delayed effects like photocarcinogenesis . However, with the increased safety evidence surrounding UV-C for germicidal applications, the existing ACGIH TLVs were revised in 2022. [ 92 ] The TLVs for the 222 nm UV-C wavelength (the peak emission of KrCl excimer lamps), following the 2022 revision, are now 161 mJ/cm² for eye exposure and 479 mJ/cm² for skin exposure over an eight-hour period. [ 93 ] For the 254 nm UV wavelength, the updated exposure limit is now set at 6 mJ/cm² for eyes and 10 mJ/cm² for skin. [ 93 ] UV can influence indoor air chemistry, leading to the formation of ozone and other potentially harmful pollutants , including particulate pollution . [ 94 ] This occurs primarily through photolysis , in which UV photons break molecules apart into smaller fragments, forming radicals such as OH. [ 95 ] The radicals can react with volatile organic compounds (VOCs) to produce oxidized VOCs (OVOCs) and secondary organic aerosols (SOA). [ 96 ] Wavelengths below 242 nm can also generate ozone, which not only contributes to OVOC and SOA formation but can be harmful in itself. When inhaled in high quantities, these pollutants can irritate the eyes and respiratory system and exacerbate conditions like asthma . [ 97 ] The specific pollutants produced depend on the initial air chemistry and the UV source power and wavelength. To control ozone and other indoor pollutants, ventilation and filtration methods are used, diluting airborne pollutants and maintaining indoor air quality. [ 98 ] UVC radiation is able to break chemical bonds. This leads to rapid aging of plastics and other materials, including insulation and gaskets . Plastics sold as "UV-resistant" are tested only for the lower-energy UVB, since UVC does not normally reach the surface of the Earth. [ 99 ] When UV is used near plastic, rubber, or insulation, these materials may be protected by metal tape or aluminum foil. UVGI can be used to disinfect air with prolonged exposure. In the 1930s and 40s, an experiment in public schools in Philadelphia showed that upper-room ultraviolet fixtures could significantly reduce the transmission of measles among students.
[ 100 ] UV and violet light are able to neutralize the infectivity of SARS-CoV-2 . [ 101 ] Viral titers usually found in the sputum of COVID-19 patients are completely inactivated by levels of UV-A and UV-B irradiation that are similar to those experienced from natural sun exposure . This finding suggests that the reduced incidence of SARS-CoV-2 in the summer may be, in part, due to the neutralizing activity of solar UV irradiation. [ 101 ] Various UV-emitting devices can be used for SARS-CoV-2 disinfection, and these devices may help in reducing the spread of infection. [ 102 ] SARS-CoV-2 can be inactivated by a wide range of UVC wavelengths, with 222 nm providing the most effective disinfection performance. [ 102 ] Disinfection is a function of UV intensity and time. For this reason, it is in theory not as effective on moving air, or when the lamp is perpendicular to the flow, as exposure times are dramatically reduced. However, numerous professional and scientific publications have indicated that the overall effectiveness of UVGI actually increases when used in conjunction with fans and HVAC ventilation, which facilitate whole-room circulation that exposes more air to the UV source. [ 103 ] [ 104 ] Air purification UVGI systems can be free-standing units with shielded UV lamps that use a fan to force air past the UV light. Other systems are installed in forced air systems so that the circulation for the premises moves microorganisms past the lamps. Key to this form of sterilization is placement of the UV lamps and a good filtration system to remove the dead microorganisms. [ 105 ] For example, forced air systems by design impede line-of-sight, thus creating areas of the environment that will be shaded from the UV light. However, a UV lamp placed at the coils and drain pans of cooling systems will keep microorganisms from forming in these naturally damp places. [ 106 ] Ultraviolet disinfection of water is a purely physical, chemical-free process. Even parasites such as Cryptosporidium or Giardia , which are extremely resistant to chemical disinfectants, are efficiently reduced. UV can also be used to remove chlorine and chloramine species from water; this process is called photolysis , and requires a higher dose than normal disinfection. The dead microorganisms are not removed from the water. UV disinfection does not remove dissolved organics, inorganic compounds or particles in the water. [ 107 ] The world's largest water disinfection plant treats drinking water for New York City . The Catskill-Delaware Water Ultraviolet Disinfection Facility , commissioned on 8 October 2013, incorporates a total of 56 energy-efficient UV reactors treating up to 2.2 billion U.S. gallons (8.3 billion liters) a day. [ 108 ] [ 109 ] Ultraviolet can also be combined with ozone or hydrogen peroxide to produce hydroxyl radicals to break down trace contaminants through an advanced oxidation process . It used to be thought that UV disinfection was more effective for bacteria and viruses, which have more-exposed genetic material, than for larger pathogens that have outer coatings or that form cyst states (e.g., Giardia ) that shield their DNA from UV light. However, it was recently discovered that ultraviolet radiation can be somewhat effective for treating the microorganism Cryptosporidium . The findings resulted in the use of UV radiation as a viable method to treat drinking water.
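Because dose is intensity multiplied by exposure time, the moving-air effect can be sketched with a residence-time estimate (Python; the duct length, airflow speeds, and irradiance are hypothetical illustrative numbers): faster airflow means less time in the irradiated zone and a smaller single-pass dose, which is why recirculation and whole-room mixing raise the overall effectiveness:

```python
def single_pass_dose(irradiated_length_m, air_speed_m_s, irradiance_w_cm2):
    """Dose (J/cm^2) received by air during one pass through the irradiated duct section."""
    residence_time_s = irradiated_length_m / air_speed_m_s
    return irradiance_w_cm2 * residence_time_s

# Hypothetical in-duct UVGI section: 0.5 m long, 2 mW/cm^2 average irradiance
for speed in (0.5, 2.0, 5.0):  # m/s
    dose = single_pass_dose(0.5, speed, 2e-3)
    print(f"air speed {speed} m/s -> single-pass dose {dose * 1e6:.0f} uJ/cm^2")
```

A tenfold increase in air speed cuts the single-pass dose tenfold, so slow recirculation through the illuminated section delivers more cumulative dose than one fast pass.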
Giardia in turn has been shown to be very susceptible to UV-C when the tests were based on infectivity rather than excystation. [ 110 ] It has been found that protists are able to survive high UV-C doses in the sense of remaining viable, but even low doses leave them unable to reproduce. UV water treatment devices can be used for well water and surface water disinfection. UV treatment compares favourably with other water disinfection systems in terms of cost, labour and the need for technically trained personnel for operation. Water chlorination treats larger organisms and offers residual disinfection, but such systems are expensive because they need special operator training and a steady supply of a potentially hazardous material. Finally, boiling of water is the most reliable treatment method, but it demands labour and imposes a high economic cost. UV treatment is rapid and, in terms of primary energy use, approximately 20,000 times more efficient than boiling. [ citation needed ] UV disinfection is most effective for treating high-clarity purified water, such as reverse-osmosis or distilled water. Suspended particles are a problem because microorganisms buried within particles are shielded from the UV light and pass through the unit unaffected. However, UV systems can be coupled with a pre-filter to remove those larger organisms that would otherwise pass through the UV system unaffected. The pre-filter also clarifies the water to improve light transmittance and therefore UV dose throughout the entire water column. Another key factor of UV water treatment is the flow rate—if the flow is too high, water will pass through without sufficient UV exposure. If the flow is too low, heat may build up and damage the UV lamp. [ 111 ] A disadvantage of UVGI is that while water treated by chlorination is resistant to reinfection (until the chlorine off-gasses), UVGI-treated water is not resistant to reinfection and must be transported or delivered in such a way as to avoid reinfection. [ citation needed ] A 2006 project at the University of California, Berkeley produced a design for inexpensive water disinfection in resource-deprived settings. [ 112 ] The project was designed to produce an open source design that could be adapted to meet local conditions. In a somewhat similar proposal in 2014, Australian students designed a system using potato chip (crisp) packet foil to reflect solar UV radiation into a glass tube that disinfects water without power. [ 113 ] Sizing of a UV system is affected by three variables: flow rate, lamp power, and UV transmittance in the water. Manufacturers typically develop sophisticated computational fluid dynamics (CFD) models validated with bioassay testing. This involves testing the UV reactor's disinfection performance with either MS2 or T1 bacteriophages at various flow rates, UV transmittances, and power levels in order to develop a regression model for system sizing. For example, this is a requirement for all public water systems in the United States per the EPA UV manual. [ 70 ] : 5–2 The flow profile is produced from the chamber geometry, flow rate, and particular turbulence model selected. The radiation profile is developed from inputs such as water quality, lamp type (power, germicidal efficiency, spectral output, arc length), and the transmittance and dimension of the quartz sleeve. Proprietary CFD software simulates both the flow and radiation profiles. Once the 3D model of the chamber is built, it is populated with a grid or mesh that comprises thousands of small cubes.
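The role of water clarity in the preceding paragraph can be sketched with the Beer–Lambert relation (Python; the transmittance figures are hypothetical): a UV transmittance quoted per 1 cm of path length sets how quickly irradiance decays across the water column, so turbid water leaves the far side of the channel under-dosed:

```python
def irradiance_at_depth(surface_irradiance, uvt_per_cm, depth_cm):
    """Beer-Lambert decay: I(d) = I0 * UVT**d, with UVT the transmittance per 1 cm."""
    return surface_irradiance * uvt_per_cm ** depth_cm

I0 = 10e-3  # W/cm^2 at the quartz sleeve surface (illustrative)
for uvt in (0.95, 0.85, 0.60):              # clear, average, and poor water
    I5 = irradiance_at_depth(I0, uvt, 5.0)  # 5 cm into the water column
    print(f"UVT {uvt:.0%}/cm -> irradiance at 5 cm is {I5 / I0:.1%} of the surface value")
```

At 60% UVT, only about 8% of the surface irradiance survives 5 cm of water, which is why pre-filtration and transmittance enter directly into system sizing.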
Points of interest—such as at a bend, on the quartz sleeve surface, or around the wiper mechanism—use a higher-resolution mesh, whilst other areas within the reactor use a coarse mesh. Once the mesh is produced, hundreds of thousands of virtual particles are "fired" through the chamber. Each particle has several variables of interest associated with it, and the particles are "harvested" after the reactor. Discrete phase modeling produces delivered dose, head loss, and other chamber-specific parameters. When the modeling phase is complete, selected systems are validated using a professional third party to provide oversight and to determine how closely the model is able to predict the reality of system performance. System validation uses non-pathogenic surrogates such as MS2 phage or Bacillus subtilis to determine the Reduction Equivalent Dose (RED) ability of the reactors. Most systems are validated to deliver 40 mJ/cm² within an envelope of flow and transmittance. [ 114 ] To validate effectiveness in drinking water systems, the method described in the EPA UV guidance manual is typically used by US water utilities, whilst Europe has adopted Germany's DVGW 294 standard. For wastewater systems, the NWRI/AwwaRF Ultraviolet Disinfection Guidelines for Drinking Water and Water Reuse protocols are typically used, especially in wastewater reuse applications. [ 115 ] In sewage treatment , ultraviolet light is commonly replacing chlorination. This is in large part because of concerns that the reaction of chlorine with organic compounds in the wastewater stream could synthesize potentially toxic and long-lasting chlorinated organics, and also because of the environmental risks of storing chlorine gas or chlorine-containing chemicals. Individual waste streams to be treated by UVGI must be tested to ensure that the method will be effective due to potential interferences such as suspended solids , dyes, or other substances that may block or absorb the UV radiation. According to the World Health Organization , "UV units to treat small batches (1 to several liters) or low flows (1 to several liters per minute) of water at the community level are estimated to have costs of US$20 per megaliter, including the cost of electricity and consumables and the annualized capital cost of the unit." [ 116 ] Large-scale urban UV wastewater treatment is performed in cities such as Edmonton, Alberta . The use of ultraviolet light has now become standard practice in most municipal wastewater treatment processes. Effluent is now starting to be recognized as a valuable resource, not a problem that needs to be dumped. Many wastewater facilities are being renamed as water reclamation facilities, whether the wastewater is discharged into a river, used to irrigate crops, or injected into an aquifer for later recovery. Ultraviolet light is now being used to ensure water is free from harmful organisms. Ultraviolet sterilizers are often used to help control unwanted microorganisms in aquaria and ponds. UV irradiation ensures that pathogens cannot reproduce, thus decreasing the likelihood of a disease outbreak in an aquarium. Aquarium and pond sterilizers are typically small, with fittings for tubing that allows the water to flow through the sterilizer on its way from a separate external filter or water pump. Within the sterilizer, water flows as close as possible to the ultraviolet light source. Water pre-filtration is critical as water turbidity lowers UV-C penetration.
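A heavily simplified sketch of the particle-tracking dose computation described above (Python; the chamber geometry, velocity spread, and radiation profile are invented placeholders, not any vendor's model): virtual particles are carried through the chamber with randomized speeds and accumulate fluence from a position-dependent irradiance field, and the resulting dose distribution is summarized, loosely mirroring how discrete phase modeling yields a delivered dose:

```python
import random
import statistics

random.seed(42)

CHAMBER_LENGTH = 1.0   # m, placeholder geometry
MEAN_SPEED = 0.5       # m/s, nominal plug-flow speed
N_PARTICLES = 2_000
DT = 1e-3              # s, integration time step

def irradiance(x_frac):
    """Placeholder radiation profile (W/cm^2): brightest mid-chamber, dimmer near the ends."""
    return 5e-3 * (1.0 - (2.0 * x_frac - 1.0) ** 2)

doses = []
for _ in range(N_PARTICLES):
    # Crude stand-in for a turbulent velocity distribution
    speed = max(random.gauss(MEAN_SPEED, 0.1 * MEAN_SPEED), 0.05)
    x = dose = 0.0
    while x < CHAMBER_LENGTH:
        dose += irradiance(x / CHAMBER_LENGTH) * DT  # accumulate fluence along the path
        x += speed * DT
    doses.append(dose * 1e3)  # J/cm^2 -> mJ/cm^2

print(f"delivered dose: mean {statistics.mean(doses):.1f} mJ/cm^2, "
      f"min {min(doses):.1f}, max {max(doses):.1f}")
```

The minimum of the distribution, not the mean, is what matters for validation, since the fastest particles through the brightest region set the worst-case delivered dose.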
Many of the better UV sterilizers have long dwell times and limit the space between the UV-C source and the inside wall of the UV sterilizer device. [ 117 ] [ independent source needed ] UVGI is often used to disinfect equipment such as safety goggles , instruments, pipettors , and other devices. Lab personnel also disinfect glassware and plasticware this way. Microbiology laboratories use UVGI to disinfect surfaces inside biological safety cabinets ("hoods") between uses. Since the U.S. Food and Drug Administration issued a rule in 2001 requiring that virtually all fruit and vegetable juice producers follow HACCP controls, and mandating a 5- log reduction in pathogens, UVGI has seen some use in the sterilization of juices such as fresh-pressed ones. Germicidal UV for disinfection is most typically generated by a mercury-vapor lamp . Low-pressure mercury vapor has a strong emission line at 254 nm, which is within the range of wavelengths that demonstrate a strong disinfection effect. The optimal wavelengths for disinfection are close to 260 nm. [ 70 ] : 2–6, 2–14 Mercury vapor lamps may be categorized as either low-pressure (including amalgam) or medium-pressure lamps. Low-pressure UV lamps offer high efficiencies (approx. 35% UV-C) but lower power, typically 1 W/cm power density (power per unit of arc length). Amalgam UV lamps utilize an amalgam to control mercury pressure, allowing operation at a somewhat higher temperature and power density. They operate at higher temperatures and have a lifetime of up to 16,000 hours. Their efficiency is slightly lower than that of traditional low-pressure lamps (approx. 33% UV-C output), and power density is approximately 2–3 W/cm. Medium-pressure UV lamps operate at much higher temperatures, up to about 800 degrees Celsius, and have a polychromatic output spectrum and a high radiation output but lower UV-C efficiency of 10% or less. Typical power density is 30 W/cm or greater. Depending on the quartz glass used for the lamp body, low-pressure and amalgam UV lamps emit radiation at 254 nm and also at 185 nm, which has chemical effects. UV radiation at 185 nm is used to generate ozone. The UV lamps for water treatment consist of specialized low-pressure mercury-vapor lamps that produce ultraviolet radiation at 254 nm, or medium-pressure UV lamps that produce a polychromatic output from 200 nm to visible and infrared energy. The UV lamp never contacts the water; it is either housed in a quartz glass sleeve inside the water chamber or mounted externally to the water, which flows through a transparent UV tube. Water passing through the flow chamber is exposed to UV rays, which are absorbed by suspended solids, such as microorganisms and dirt, in the stream. [ 118 ] Recent developments in LED technology have led to commercially available UV-C LEDs. UV-C LEDs use semiconductors to emit light between 255 nm and 280 nm. [ 74 ] The wavelength emission is tuneable by adjusting the material of the semiconductor. As of 2019, the electrical-to-UV-C conversion efficiency of LEDs was lower than that of mercury lamps. The reduced size of LEDs opens up options for small reactor systems, allowing for point-of-use applications and integration into medical devices. [ 119 ] The low power consumption of semiconductors has enabled UV disinfection systems powered by small solar cells to be deployed in remote or under-resourced regions.
[ 119 ] UV-C LEDs do not necessarily last longer than traditional germicidal lamps in terms of hours used; instead, they have more-variable engineering characteristics and better tolerance for short-term operation. A UV-C LED can therefore achieve a longer installed time than a traditional germicidal lamp in intermittent use. In addition, LED degradation increases with heat, while the output wavelength of filament and HID lamps depends on temperature, so engineers can design LEDs of a particular size and cost to have a higher output and faster degradation, or a lower output and slower decline over time.
https://en.wikipedia.org/wiki/Ultraviolet_germicidal_irradiation
Ultraviolet photoelectron spectroscopy ( UPS ) refers to the measurement of kinetic energy spectra of photoelectrons emitted by molecules that have absorbed ultraviolet photons, in order to determine molecular orbital energies in the valence region. If Albert Einstein 's photoelectric law is applied to a free molecule, the kinetic energy ( $E_{\text{k}}$ ) of an emitted photoelectron is given by $E_{\text{k}} = h\nu - I$, where h is the Planck constant , ν is the frequency of the ionizing light, and I is an ionization energy for the formation of a singly charged ion in either the ground state or an excited state . According to Koopmans' theorem , each such ionization energy may be identified with the energy of an occupied molecular orbital. The ground-state ion is formed by removal of an electron from the highest occupied molecular orbital , while excited ions are formed by removal of an electron from a lower occupied orbital. Before 1960, virtually all measurements of photoelectron kinetic energies were for electrons emitted from metals and other solid surfaces. In about 1956, Kai Siegbahn developed X-ray photoelectron spectroscopy (XPS) for surface chemical analysis. This method uses X-ray sources to study energy levels of atomic core electrons , and at the time had an energy resolution of about 1 eV ( electronvolt ). [ 1 ] Ultraviolet photoelectron spectroscopy (UPS) was pioneered by Feodor I. Vilesov , a physicist at St. Petersburg (Leningrad) State University in Russia (USSR), in 1961 to study the photoelectron spectra of free molecules in the gas phase. [ 2 ] [ 3 ] The early experiments used monochromatized radiation from a hydrogen discharge and a retarding potential analyzer to measure the photoelectron energies. PES was further developed by David W. Turner , a physical chemist at Imperial College in London and then at Oxford University , in a series of publications from 1962 to 1967. [ 4 ] [ 5 ] As a photon source, he used a helium discharge lamp that emits a wavelength of 58.4 nm (corresponding to an energy of 21.2 eV) in the vacuum ultraviolet region. With this source, Turner's group obtained an energy resolution of 0.02 eV. Turner referred to the method as "molecular photoelectron spectroscopy", now usually called "ultraviolet photoelectron spectroscopy" or UPS. As compared to XPS, UPS is limited to energy levels of valence electrons , but measures them more accurately. After 1967, commercial UPS spectrometers became available. [ 6 ] One of the last commercial devices was the Perkin Elmer PS18. For the last twenty years, the systems have been homemade; one of the latest in progress, Phoenix II, is being developed at the IPREM laboratory in Pau by Dr. Jean-Marc Sotiropoulos . [ 7 ] UPS measures experimental molecular orbital energies for comparison with theoretical values from quantum chemistry , which was also extensively developed in the 1960s. The photoelectron spectrum of a molecule contains a series of peaks, each corresponding to one valence-region molecular orbital energy level. Also, the high resolution allowed the observation of fine structure due to vibrational levels of the molecular ion, which facilitates the assignment of peaks to bonding, nonbonding or antibonding molecular orbitals. The method was later extended to the study of solid surfaces, where it is usually described as photoemission spectroscopy (PES). It is particularly sensitive to the surface region (to 10 nm depth), due to the short range of the emitted photoelectrons (compared to X-rays ).
It is therefore used to study adsorbed species and their binding to the surface, as well as their orientation on the surface. [ 8 ] A useful result from characterization of solids by UPS is the determination of the work function of the material. An example of this determination is given by Park et al. [ 9 ] Briefly, the full width of the photoelectron spectrum (from the highest kinetic energy/lowest binding energy point to the low kinetic energy cutoff) is measured and subtracted from the photon energy of the exciting radiation , and the difference is the work function. Often, the sample is electrically biased negative to separate the low energy cutoff from the spectrometer response. UPS has seen a considerable revival with the increasing availability of synchrotron light sources that provide a wide range of monochromatic photon energies.
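Both relations above are simple energy balances, sketched below in Python (the measured values are invented for illustration): ionization energies follow from the He I photon energy minus the measured kinetic energies, and the work function from the photon energy minus the full spectral width:

```python
HE_I_PHOTON_EV = 21.2  # He I discharge line, eV (58.4 nm)

def ionization_energy(kinetic_energy_ev, photon_ev=HE_I_PHOTON_EV):
    """Einstein's photoelectric law rearranged for a free molecule: I = h*nu - E_k."""
    return photon_ev - kinetic_energy_ev

# Hypothetical measured photoelectron kinetic energies (eV) for one molecule
for ek in (10.7, 8.9, 5.4):
    print(f"E_k = {ek:4.1f} eV -> orbital ionization energy I = {ionization_energy(ek):4.1f} eV")

def work_function(spectrum_width_ev, photon_ev=HE_I_PHOTON_EV):
    """Solid-state UPS: phi = h*nu - (full width from Fermi edge to low-KE cutoff)."""
    return photon_ev - spectrum_width_ev

print(f"phi = {work_function(16.9):.1f} eV for a hypothetical 16.9 eV wide spectrum")
```

Under these assumed inputs the three peaks map to ionization energies of 10.5, 12.3 and 15.8 eV, and the 16.9 eV spectral width gives a work function of 4.3 eV.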
https://en.wikipedia.org/wiki/Ultraviolet_photoelectron_spectroscopy
Ultraviolet–visible spectroscopy (UV–vis) can distinguish between enantiomers by showing a distinct Cotton effect for each isomer. UV–vis spectroscopy sees only chromophores , so molecules lacking one must be prepared for analysis by chemical addition of a chromophore such as anthracene . Two methods are reported: the octant rule and the exciton chirality method . [ 1 ] The octant rule was introduced in 1961 by William Moffitt , R. B. Woodward , A. Moscowitz, William Klyne and Carl Djerassi . [ 2 ] [ 3 ] [ 4 ] This empirical rule allows the sign of the Cotton effect to be predicted by analysing the relative orientation of substituents in three dimensions, and in this way the absolute configuration of an enantiomer can be assigned.
https://en.wikipedia.org/wiki/Ultraviolet–visible_spectroscopy_of_stereoisomers
Ultrawide formats are photos, videos, [ 1 ] and displays [ 2 ] with aspect ratios greater than 2. There have been multiple moves in history towards wider formats, including one by Disney, [ 3 ] with some of them being more successful than others. Cameras usually capture ultra-wide photos and videos using an anamorphic format lens, which shrinks the extended horizontal field of view (FOV) while saving on film or disk. [ 4 ] Historically, ultrawide movie formats have varied between ~2.35 (1678:715), ~2.39 (1024:429) and 2.4. To complicate matters further, films were also produced in the following ratios: 2.55, 2.76 and 4. Developed by Rowe E. Carney Jr. and Tom F. Smith, the Smith-Carney System used a three-camera system, with a ≈4.69 (1737:370) ratio, to project movies in 180°. [ 5 ] Disney even created a 6.85 ratio, using five projectors to display 200°. The only movie filmed in Disney's 6.85 ratio is Impressions de France . [ 3 ] Suggested by Kerns H. Powers of SMPTE in the USA, the 16:9 aspect ratio was developed to unify all other aspect ratios. Subsequently, it became the universal standard for widescreen and high-definition television . Around 2007, cameras and non-television screens began to switch from 15:9 (5:3) and 16:10 (8:5) to 16:9 resolutions. Univisium is an aspect ratio of 2:1, created by Vittorio Storaro of the American Society of Cinematographers (ASC), originally intended to unify all other aspect ratios used in movies. It is popular on smartphones and cheap VR [ clarification needed ] displays. VR displays halve the screen into two, one for each eye, so a 2:1 VR screen would be halved into two 1:1 screens. Smartphones began moving to this aspect ratio in the late 2010s with the release of the Samsung Galaxy S8 , advertised as 18:9. 21:9 is a consumer electronics (CE) marketing term to describe the ultra-widescreen aspect ratio of 64:27 (21 1⁄3 :9) = 1024:432 for multiples of 1080 lines. It is used for multiple anamorphic formats and DCI 1024:429 (≈21.48:9), but also for ultrawide computer monitors, including 43:18 (21 1⁄2 :9) for resolutions based on 720 lines and 12:5 (21 3⁄5 :9) for ultrawide variants of resolutions based either on 960 pixels width or 900 lines height. The 64:27 aspect ratio is the logical extension of the existing video aspect ratios 4:3 and 16:9: it is the third power of 4:3, whereas 16:9 of widescreen HDTV is 4:3 squared. This allows electronic scalers and optical anamorphic lenses to use an easily implementable 4:3 (≈1.33) scaling factor: $\tfrac{4}{3}\cdot \tfrac{4}{3}\cdot \tfrac{4}{3} = \tfrac{64}{27}$. 21:9 movies usually refers to 1024:429 ≈ 2.387, the aspect ratio of digital ultrawide cinema formats, which is often rounded up to 2.39:1 or 2.4:1. Ultrawide resolutions can also be described by their height, such as "UW 1080" and "1080p ultrawide", both of which stand for the same 2560×1080 resolution. In 2016, IMAX announced the release of films in the Ultra-WideScreen 3.6 format, [ 6 ] [ failed verification ] with an aspect ratio of 18:5 (36:10). [ 7 ] A year later, Samsung and Philips announced 'super ultra-wide displays', with an aspect ratio of 32:9, for "iMax-style cinematic viewing". [ 8 ] Panacast developed a 32:9 webcam with three integrated cameras giving a 180° view, and a resolution matching upcoming 5K 32:9 monitors, 5120×1440. [ 9 ] In Q4 2018, Dell released the U4919DW, a 5K 32:9 monitor with a resolution of 5120×1440, and Philips announced the 499P9H with the same resolution.
32:9 ultrawide monitors are often sold as an alternative to dual 16:9 monitor setups and for more immersive gaming experiences, and many are capable of displaying two 16:9 inputs at the same time. The 32:9 aspect ratio is derived from 16:9, being exactly twice as wide. Some manufacturers therefore refer to the resulting total display resolution with a D prefix for dual or double . Super wide resolutions are those with an aspect ratio greater than 3. Ultra-WideScreen 3.6 video never spread, as cinemas in an even wider ScreenX 270° format were released. [ 10 ] Abel Gance experimented with ultrawide formats, including making a film in 4:1 (36:9). He made a rare use of Polyvision , three 35 mm 1.33:1 images projected side by side, in the 1927 film Napoléon . At NAB 2019, Sony introduced a 19.2-metre-wide by 5.4-metre-tall commercial 16K display. [ 11 ] [ 12 ] It is made up of 576 modules (48 by 12), each 360 pixels across, resulting in a 4:1, 17280×4320 screen. Developed by CJ CGV in 2012, ScreenX uses three (or more) projectors to display 270° content, [ 10 ] with an unknown aspect ratio above 4. Walls on both sides of a ScreenX theatre are used as projector screens. Developed by Barco N.V. in 2015, Barco Escape used three projectors of 2.39 ratio to display 270° content, with an aspect ratio of 7.17. The two side screens were angled at 45 degrees in order to cover peripheral vision. Barco Escape shut down in February 2018.
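The ratio arithmetic used throughout this article is easy to reproduce; the short Python sketch below (resolutions taken from the text) confirms that 64:27 is (4/3)³ and reduces a few quoted resolutions to their aspect ratios:

```python
from fractions import Fraction

# 64:27 as the cube of 4:3 (and 16:9 as its square)
four_thirds = Fraction(4, 3)
print(four_thirds ** 2, four_thirds ** 3)  # prints: 16/9 64/27

def aspect(width, height):
    """Reduce a pixel resolution to its aspect ratio and decimal value."""
    f = Fraction(width, height)
    return f"{f.numerator}:{f.denominator} ({float(f):.3f})"

for w, h in [(2560, 1080), (5120, 1440), (17280, 4320)]:
    print(f"{w}x{h} -> {aspect(w, h)}")
```

The three resolutions reduce to 64:27, 32:9 and 4:1 respectively, matching the classes discussed above.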
https://en.wikipedia.org/wiki/Ultrawide_formats
In placental mammals , the umbilical cord (also called the navel string , [ 1 ] birth cord or funiculus umbilicalis ) is a conduit between the developing embryo or fetus and the placenta . During prenatal development , the umbilical cord is physiologically and genetically part of the fetus and (in humans) normally contains two arteries (the umbilical arteries ) and one vein (the umbilical vein ), buried within Wharton's jelly . The umbilical vein supplies the fetus with oxygenated , nutrient -rich blood from the placenta . Conversely, the fetal heart pumps low-oxygen, nutrient-depleted blood through the umbilical arteries back to the placenta. The umbilical cord develops from and contains remnants of the yolk sac and allantois . It forms by the fifth week of development , replacing the yolk sac as the source of nutrients for the embryo. [ 2 ] The cord is not directly connected to the mother's circulatory system, but instead joins the placenta , which transfers materials to and from the maternal blood without allowing direct mixing. The length of the umbilical cord is approximately equal to the crown-rump length of the fetus throughout pregnancy . The umbilical cord in a full-term neonate is usually about 50 centimeters (20 inches) long and about 2 centimeters (0.79 in) in diameter. This diameter decreases rapidly within the placenta. The fully patent umbilical artery has two main layers: an outer layer consisting of circularly arranged smooth muscle cells, and an inner layer of rather irregularly and loosely arranged cells embedded in abundant ground substance that stains metachromatically . [ 3 ] The smooth muscle cells of the layer are rather poorly differentiated, contain only a few tiny myofilaments, and are thereby unlikely to contribute actively to the process of post-natal closure. [ 3 ] The umbilical cord can be detected by ultrasound by six weeks of gestation and well-visualized by eight to nine weeks of gestation. [ 4 ] The umbilical cord lining is a good source of mesenchymal and epithelial stem cells. Umbilical cord mesenchymal stem cells (UC-MSC) have been used clinically to treat osteoarthritis, autoimmune diseases, and multiple other conditions. Their advantages include easier harvesting and multiplication, and immunosuppressive properties that define their potential for use in transplantations. Their use would also overcome the ethical objections raised by the use of embryonic stem cells . [ 5 ] The umbilical cord contains Wharton's jelly , a gelatinous substance made largely from mucopolysaccharides , which protects the blood vessels inside. It contains one vein, which carries oxygenated, nutrient-rich blood to the fetus, and two arteries that carry deoxygenated, nutrient-depleted blood away. [ 6 ] Occasionally, only two vessels (one vein and one artery) are present in the umbilical cord. This is sometimes related to fetal abnormalities, but it may also occur without accompanying problems. It is unusual for a vein to carry oxygenated blood and for arteries to carry deoxygenated blood (the only other examples being the pulmonary veins and arteries , connecting the lungs to the heart). However, this naming convention reflects the fact that the umbilical vein carries blood towards the fetus' heart, while the umbilical arteries carry blood away. The blood flow through the umbilical cord is approximately 35 ml / min at 20 weeks, and 240 ml / min at 40 weeks of gestation .
[ 7 ] Adapted to the weight of the fetus, this corresponds to 115 ml / min / kg at 20 weeks and 64 ml / min / kg at 40 weeks. [ 7 ] In terms of location , the proximal part of an umbilical cord refers to the segment closest to the embryo or fetus in embryology and fetal medicine, and closest to the placenta in placental pathology; the distal part is the opposite in each case. [ 8 ] The umbilical cord enters the fetus via the abdomen , at the point which (after separation) will become the umbilicus (belly button or navel). Within the fetus, the umbilical vein continues towards the transverse fissure of the liver , where it splits into two. One of these branches joins with the hepatic portal vein (connecting to its left branch), which carries blood into the liver. The second branch (known as the ductus venosus ) bypasses the liver and flows into the inferior vena cava , which carries blood towards the heart. The two umbilical arteries branch from the internal iliac arteries and pass on either side of the urinary bladder into the umbilical cord, completing the circuit back to the placenta. [ 9 ] After birth, the umbilical cord stump will dry up and drop away by the time the baby is three weeks old. [ 10 ] If the stump still has not separated after three weeks, it might be a sign of an underlying problem, such as an infection or immune system disorder. [ 10 ] In the absence of external interventions, the umbilical cord occludes physiologically shortly after birth, explained both by a swelling and collapse of Wharton's jelly in response to a reduction in temperature and by vasoconstriction of the blood vessels by smooth muscle contraction. In effect, a natural clamp is created, halting the flow of blood. In air at 18 °C, this physiological clamping will take three minutes or less. [ 11 ] In water birth , where the water temperature is close to body temperature, normal pulsation can continue for five minutes and longer. Closure of the umbilical artery by vasoconstriction consists of multiple constrictions which increase in number and degree with time. There are segments of dilation with trapped uncoagulated blood between the constrictions before complete occlusion. [ 12 ] Both the partial constrictions and the ultimate closure are mainly produced by muscle cells of the outer circular layer. [ 3 ] In contrast, the inner layer seems to serve mainly as a plastic tissue which can easily be shifted in an axial direction and then folded into the narrowing lumen to complete the closure. [ 3 ] The vasoconstrictive occlusion appears to be mainly mediated by serotonin [ 13 ] [ 14 ] and thromboxane A 2 . [ 13 ] The artery in the cords of preterm infants contracts more strongly in response to angiotensin II and arachidonic acid , and is more sensitive to oxytocin , than that of term infants. [ 14 ] In contrast to the contribution of Wharton's jelly, cooling causes only temporary vasoconstriction. [ 14 ] Within the child, the umbilical vein and ductus venosus close up and degenerate into fibrous remnants known as the round ligament of the liver and the ligamentum venosum , respectively. Part of each umbilical artery closes up (degenerating into what are known as the medial umbilical ligaments ), while the remaining sections are retained as part of the circulatory system.
A number of abnormalities can affect the umbilical cord, which can cause problems that affect both mother and child. [ 15 ] The cord can be clamped at different times; however, delaying the clamping of the umbilical cord until at least one minute after birth improves outcomes, as long as there is the ability to treat the small risk of jaundice if it occurs. [ 18 ] Clamping is followed by cutting of the cord, which is painless due to the absence of nerves . The cord is extremely tough, like thick sinew , and so cutting it requires a suitably sharp instrument. While umbilical severance may be delayed until after the cord has stopped pulsing (one to three minutes after birth), there is ordinarily no significant loss of either venous or arterial blood while cutting the cord. Current evidence neither supports nor refutes delayed cutting of the cord, according to the American Congress of Obstetricians and Gynecologists (ACOG) guidelines. There are umbilical cord clamps which incorporate a knife. These clamps are safer and faster, allowing one to first apply the cord clamp and then cut the umbilical cord. After the cord is clamped and cut, the newborn wears a plastic clip on the navel area until the compressed region of the cord has dried and sealed sufficiently. The length of umbilical cord left attached to the newborn varies by practice; in most hospital settings the length of cord left attached after clamping and cutting is minimal. In the United States, however, when the birth occurs outside of a hospital and an emergency medical technician (EMT) clamps and cuts the cord, a longer segment, up to 18 cm (7 in) in length, [ 19 ] [ 20 ] is left attached to the newborn. The remaining umbilical stub remains for up to ten days as it dries and then falls off. A Cochrane review in 2013 came to the conclusion that delayed cord clamping (between one and three minutes after birth) is "likely to be beneficial as long as access to treatment for jaundice requiring phototherapy is available". [ 21 ] In this review, delayed clamping, as contrasted with early clamping, resulted in no difference in the risk of severe maternal postpartum hemorrhage , neonatal mortality, or low Apgar score . On the other hand, delayed clamping resulted in an increased birth weight of, on average, about 100 g, and an increased hemoglobin concentration of, on average, 1.5 g/dL, with half the risk of being iron deficient at three and six months, but an increased risk of jaundice requiring phototherapy . [ 21 ] In 2012, the American College of Obstetricians and Gynecologists officially endorsed delaying clamping of the umbilical cord for 30–60 seconds with the newborn held below the level of the placenta in all cases of preterm delivery, based largely on evidence that it reduces the risk of intraventricular hemorrhage in these children by 50%. [ 22 ] [ obsolete source ] In the same committee statement, ACOG also recognized several other likely benefits for preterm infants, including "improved transitional circulation, better establishment of red blood cell volume, and decreased need for blood transfusion". In January 2017, a revised Committee Opinion extended the recommendation to term infants, citing data that term infants benefit from increased hemoglobin levels in the newborn period and improved iron stores in the first months of life, which may result in improved developmental outcomes.
ACOG recognized a small increase in the incidence of jaundice in term infants with delayed cord clamping, and recommended policies be in place to monitor for and treat neonatal jaundice. ACOG also noted that delayed cord clamping is not associated with an increased risk of postpartum hemorrhage. [ 23 ] Several studies have shown benefits of delayed cord clamping: a meta-analysis [ 24 ] showed that delaying clamping of the umbilical cord in full-term neonates for a minimum of two minutes following birth is beneficial to the newborn, giving improved hematocrit and iron status as measured by ferritin concentration and stored iron, as well as a reduction in the risk of anemia (relative risk, 0.53; 95% CI, 0.40–0.70). [ 24 ] A decrease was also found in a study from 2008. [ 25 ] Although there is a higher hemoglobin level at 2 months, this effect did not persist beyond 6 months of age. [ 26 ] Not clamping the cord for three minutes following the birth of a baby improved outcomes at four years of age. [ 27 ] A delay of three minutes or more in umbilical cord clamping after birth reduces the prevalence of anemia in infants. [ 28 ] Negative effects of delayed cord clamping include an increased risk of polycythemia; still, this condition appeared to be benign in studies. [ 24 ] Infants whose cord clamping occurred later than 60 seconds after birth had a higher rate of neonatal jaundice requiring phototherapy. [ 26 ] Delayed clamping is not recommended in cases where the newborn is not breathing well and needs resuscitation; the recommendation is instead to immediately clamp and cut the cord and perform cardiopulmonary resuscitation. [ 29 ] The umbilical cord pulsating is not a guarantee that the baby is receiving enough oxygen. [ 30 ] Some parents choose to omit cord severance entirely, a practice called "lotus birth" or umbilical cord nonseverance (UCNS). The entire intact umbilical cord is allowed to dry and separate on its own (typically on the 3rd day after birth), falling off and leaving a healed umbilicus. [ 31 ] The Royal College of Obstetricians and Gynaecologists has warned about the risks of infection, as the decomposing placental tissue becomes a nest for infectious bacteria such as Staphylococcus. [ 32 ] In one such case, a 20-hour-old baby whose parents chose UCNS was brought to the hospital in an agonal state, was diagnosed with sepsis, and required antibiotic treatment for six weeks. [ 33 ] [ 34 ] As the umbilical vein is directly connected to the central circulation, it can be used as a route for placement of a venous catheter for infusion and medication. The umbilical vein catheter is a reliable alternative to percutaneous peripheral or central venous catheters or intraosseous cannulas and may be employed in resuscitation or intensive care of the newborn. From 24 to 34 weeks of gestation, when the fetus is typically viable, blood can be taken from the cord in order to test for abnormalities (particularly for hereditary conditions). This diagnostic genetic test procedure is known as percutaneous umbilical cord blood sampling. [ 35 ] The blood within the umbilical cord, known as cord blood, is a rich and readily available source of primitive, undifferentiated stem cells (of type CD34-positive and CD38-negative). These cord blood cells can be used for bone marrow transplant.
Some parents choose to divert this blood from the baby through early cord clamping and cutting, freezing it for long-term storage at a cord blood bank should the child ever require the cord blood stem cells (for example, to replace bone marrow destroyed when treating leukemia). This practice is controversial, with critics asserting that early cord blood withdrawal at the time of birth actually increases the likelihood of childhood disease, due to the high volume of blood taken (an average of 108 ml) in relation to the baby's total supply (typically 300 ml). [ 25 ] The Royal College of Obstetricians and Gynaecologists stated in 2006 that "there is still insufficient evidence to recommend directed commercial cord blood collection and stem-cell storage in low-risk families". [ 36 ] The American Academy of Pediatrics has stated that cord blood banking for self-use should be discouraged (as most conditions requiring the use of stem cells will already exist in the cord blood), while banking for general use should be encouraged. [ 37 ] In the future, cord blood-derived embryonic-like stem cells (CBEs) may be banked and matched with other patients, much like blood and transplanted tissues. The use of CBEs could potentially eliminate the ethical difficulties associated with embryonic stem cells (ESCs). [ 38 ] While the American Academy of Pediatrics discourages private banking except in the case of existing medical need, it also says that information about the potential benefits and limitations of cord blood banking and transplantation should be provided so that parents can make an informed decision. In the United States, cord blood education has been supported by legislators at the federal and state levels. In 2005, the National Academy of Sciences published an Institute of Medicine (IoM) report which recommended that expectant parents be given a balanced perspective on their options for cord blood banking. In response to their constituents, state legislators across the country are introducing legislation intended to help inform physicians and expectant parents on the options for donating, discarding or banking lifesaving newborn stem cells. Currently 17 states, representing two-thirds of U.S. births, have enacted legislation recommended by the IoM guidelines. The use of cord blood stem cells in treating conditions such as brain injury [ 39 ] and Type 1 Diabetes [ 40 ] is already being studied in humans, and earlier stage research is being conducted for treatments of stroke [ 41 ] [ 42 ] and hearing loss. [ 43 ] Cord blood stored with private banks is typically reserved for use of the donor child only. In contrast, cord blood stored in public banks is accessible to anyone with a closely matching tissue type and demonstrated need. [ 44 ] The use of cord blood from public banks is increasing. Currently it is used in place of a bone marrow transplant in the treatment of blood disorders such as leukemia, with donations released for transplant through one registry, Netcord.org, [ 45 ] passing 1,000,000 as of January 2013. Cord blood is used when the patient cannot find a matching bone marrow donor; this "extension" of the donor pool has driven the expansion of public banks. The umbilical cord in some mammals, including cattle and sheep, contains two distinct umbilical veins; there is only one umbilical vein in the human umbilical cord. [ 46 ] In some animals, the mother will gnaw through the cord, thus separating the placenta from the offspring.
The cord along with the placenta is often eaten by the mother, to provide nourishment and to dispose of tissues that would otherwise attract scavengers or predators. [ citation needed ] In chimpanzees, the mother leaves the cord in place and nurses her young with the cord and placenta attached until the cord dries out and separates naturally, within a day of birth, at which time the cord is discarded. (This was first documented by zoologists in the wild in 1974. [ 47 ]) Some species of shark (hammerheads, requiems and smooth-hounds) are viviparous and have an umbilical cord attached to their placenta. [ 48 ] The term "umbilical cord" or just "umbilical" has also come to be used for other cords with similar functions, such as the hose connecting surface-supplied divers to their surface supply of air and/or heating, or space-suited astronauts to their spacecraft. Engineers sometimes use the term to describe a complex or critical cable connecting a component, especially when composed of bundles of conductors of different colors, thicknesses and types, terminating in a single multi-contact disconnect. In multiple American and international studies, cancer-causing chemicals have been found in the blood of umbilical cords. These originate from certain plastics, computer circuit boards, fumes and synthetic fragrances, among others. Over 300 chemical toxicants have been found, including bisphenol A (BPA), tetrabromobisphenol A (TBBPA), Teflon-related perfluorooctanoic acid, galaxolide and synthetic musks, among others. [ 49 ] The studies in America showed higher levels in African Americans, Hispanic Americans and Asian Americans due, it is thought, to living in areas of higher pollution. [ 50 ]
https://en.wikipedia.org/wiki/Umbilical_cord
An umbilicate lichen is a lichen that is only attached to its substrate at a single point. [ 1 ] An example is Lasallia papulosa . [ 1 ]
https://en.wikipedia.org/wiki/Umbilicate_lichen
An umbo is a raised area in the center of a mushroom cap. Caps that possess this feature are called umbonate . Umbos that are sharply pointed are called acute , while those that are more rounded are broadly umbonate . If the umbo is elongated, it is cuspidate , and if the umbo is sharply delineated but not elongated (somewhat resembling the shape of a human areola ), it is called mammilate or papillate . [ 1 ]
https://en.wikipedia.org/wiki/Umbo_(mycology)
Umbrella sampling is a technique in computational physics and chemistry, used to improve sampling of a system (or different systems) where ergodicity is hindered by the form of the system's energy landscape. It was first suggested by Torrie and Valleau in 1977. [ 1 ] It is a particular physical application of the more general importance sampling in statistics. Systems in which an energy barrier separates two regions of configuration space may suffer from poor sampling. In Metropolis Monte Carlo runs, the low probability of overcoming the potential barrier can leave inaccessible configurations poorly sampled—or even entirely unsampled—by the simulation. An easily visualised example occurs with a solid at its melting point: considering the state of the system with an order parameter Q , both liquid (low Q ) and solid (high Q ) phases are low in energy, but are separated by a free-energy barrier at intermediate values of Q . This prevents the simulation from adequately sampling both phases. Umbrella sampling is a means of "bridging the gap" in this situation. The standard Boltzmann weighting for Monte Carlo sampling is replaced by a potential chosen to cancel the influence of the energy barrier present. The Markov chain generated has a distribution given by π ( r N ) ∝ w ( r N ) exp ⁡ ( − U ( r N ) / k B T ) , {\displaystyle \pi (r^{N})\propto w(r^{N})\exp(-U(r^{N})/k_{B}T),} with U the potential energy and w ( r N ) a function chosen to promote configurations that would otherwise be inaccessible to a Boltzmann-weighted Monte Carlo run. In the example above, w may be chosen such that w = w ( Q ), taking high values at intermediate Q and low values at low/high Q , facilitating barrier crossing. Values for a thermodynamic property A deduced from a sampling run performed in this manner can be transformed into canonical-ensemble values by applying the formula ⟨ A ⟩ = ⟨ A / w ⟩ π / ⟨ 1 / w ⟩ π , {\displaystyle \langle A\rangle ={\frac {\langle A/w\rangle _{\pi }}{\langle 1/w\rangle _{\pi }}},} with the π {\displaystyle \pi } subscript indicating values from the umbrella-sampled simulation. The effect of introducing the weighting function w ( r N ) is equivalent to adding a biasing potential V ( r N ) = − k B T ln ⁡ w ( r N ) {\displaystyle V(r^{N})=-k_{B}T\ln w(r^{N})} to the potential energy of the system. If the biasing potential is strictly a function of a reaction coordinate or order parameter Q {\displaystyle Q} , then the (unbiased) free-energy profile on the reaction coordinate can be calculated by subtracting the biasing potential from the biased free-energy profile, up to an additive constant: F 0 ( Q ) = F π ( Q ) − V ( Q ) , {\displaystyle F_{0}(Q)=F_{\pi }(Q)-V(Q),} where F 0 ( Q ) {\displaystyle F_{0}(Q)} is the free-energy profile of the unbiased system, and F π ( Q ) {\displaystyle F_{\pi }(Q)} is the free-energy profile calculated for the biased, umbrella-sampled system. Series of umbrella sampling simulations can be analyzed using the weighted histogram analysis method (WHAM) [ 2 ] or its generalization. [ 3 ] WHAM can be derived using the maximum likelihood method. A toy numerical illustration is given below. Subtleties exist in deciding the most computationally efficient way to apply the umbrella sampling method, as described in Frenkel and Smit's book Understanding Molecular Simulation . Alternatives to umbrella sampling for computing potentials of mean force or reaction rates are free-energy perturbation and transition interface sampling . A further alternative, which functions in full non-equilibrium, is S-PRES .
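To make the reweighting above concrete, here is a minimal sketch of a window-based umbrella sampling run on a one-dimensional double-well potential. The potential, the harmonic windows, the Metropolis walker, and the crude constant-matching used to stitch the windows together are illustrative assumptions rather than the method of any cited work; a production analysis would use WHAM for the stitching step.

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0

def U(x):
    # Double-well potential with a 5 kT barrier at x = 0 (illustrative).
    return 5.0 * (x**2 - 1.0)**2

def bias(x, x0, k=20.0):
    # Harmonic umbrella window centred at x0, i.e. w = exp(-bias/kT).
    return 0.5 * k * (x - x0)**2

def sample_window(x0, n_steps=20000, step=0.1):
    """Metropolis sampling of the biased distribution exp(-(U + bias)/kT)."""
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = x + rng.normal(0.0, step)
        dE = (U(x_new) + bias(x_new, x0)) - (U(x) + bias(x, x0))
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            x = x_new
        samples.append(x)
    return np.array(samples)

centers = np.linspace(-1.5, 1.5, 13)   # overlapping windows spanning both wells
edges = np.linspace(-2.0, 2.0, 81)
mids = 0.5 * (edges[:-1] + edges[1:])

# Per-window unbiased profile: F0(Q) = F_pi(Q) - V(Q), up to a constant.
profiles = []
for x0 in centers:
    hist, _ = np.histogram(sample_window(x0), bins=edges, density=True)
    with np.errstate(divide="ignore"):
        F_biased = -kT * np.log(hist)   # biased free-energy profile
    profiles.append(F_biased - bias(mids, x0))

# Stitch windows by matching their constants on overlapping bins
# (a crude stand-in for WHAM, which weights the windows statistically).
F = profiles[0]
for F_next in profiles[1:]:
    both = np.isfinite(F) & np.isfinite(F_next)
    F_next = F_next - np.mean(F_next[both] - F[both])
    F = np.where(np.isfinite(F), F, F_next)

print(np.round(F - np.min(F[np.isfinite(F)]), 1))  # two minima, barrier near Q = 0
```

An unbiased Metropolis run at the same temperature would rarely cross the 5 kT barrier, which is exactly the sampling failure the windows are designed to bridge.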
https://en.wikipedia.org/wiki/Umbrella_sampling
Umbrella species are species selected for making conservation -related decisions, typically because protecting these species indirectly protects the many other species that make up the ecological community of their habitat (the umbrella effect ). Species conservation can be subjective because it is hard to determine the status of many species. The umbrella species is often either a flagship species whose conservation benefits other species [ 1 ] : 280 or a keystone species which may be targeted for conservation due to its impact on an ecosystem . Umbrella species can be used to help select the locations of potential reserves, find the minimum size of these conservation areas or reserves, and determine the composition, structure, and processes of ecosystems. [ 2 ] Several definitions and descriptions of the concept are in use. Animals may also be considered umbrella species if they are charismatic. The hope is that species that appeal to popular audiences, such as pandas, will attract support for habitat conservation in general. [ 7 ] In the two decades after its inception, the use of umbrella species as a conservation tool has been highly debated. The term was first used by Bruce Wilcox in 1984, [ 8 ] who defined an umbrella species as one whose minimum area requirements are at least as comprehensive as those of the rest of the community for which protection is sought through the establishment and management of a protected area. Some scientists have found that the use of an umbrella species approach can provide a more streamlined way to manage ecological communities. [ 9 ] [ 10 ] Others have proposed that umbrella species in combination with other tools will more effectively protect other species in land management reserves than using umbrella species alone. [ 10 ] [ 11 ] Individual invertebrate species can be good umbrella species because they can protect older, unique ecosystems. There have been cases where umbrella species have protected a large amount of area, which has been beneficial to surrounding species. Dunk, Zielinski and Welsh (2006) reported that the reserves in Northern California (the Klamath - Siskiyou forests), set aside for the northern spotted owl , also protect mollusks and salamanders within that habitat. They found that the reserves set aside for the northern spotted owl "serve as a reasonable coarse-filter umbrella species for the taxa evaluated", which were mollusks and salamanders. [ 12 ] Gilby and colleagues (2017) found that using threatened species as umbrellas or "surrogates" for management targets could improve conservation outcomes in coastal areas. [ 13 ] The concept of an umbrella species is further utilized to create wildlife corridors with what are termed focal species . These focal species are chosen for a number of reasons and fall into several types, generally measured by their potential for an umbrella effect. By carefully choosing species based on this criterion, a linked or networked habitat can be created from single-species corridors. [ 14 ] These criteria are determined with the assistance of geographic information systems at the larger scale. Regardless of the location or scale of conservation, the umbrella effect is a measurement of a species' impact on others and is an important part of determining an approach. The bay checkerspot butterfly has been on the Endangered Species List since 1987. Launer and Murphy (1994) tried to determine whether this butterfly could be considered an umbrella species in protecting the native grassland it inhabits.
They discovered that the Endangered Species Act has a loophole excluding federally protected plants on private property. However, the California Environmental Quality Act reinforces state conservation regulations. [ 6 ] Using the Endangered Species Act to protect species termed umbrella species and their habitats can be controversial, because its provisions are not enforced as strongly in some states as in others (such as California) in protecting overall biodiversity . Protecting a species like the canebrake rattlesnake has practical applications, as protection measures would have broad environmental value because of an umbrella effect. That is, protecting the rattlesnakes would ensure protection of other wildlife species that use the same habitats but are less sensitive to development or require fewer resources . [ 19 ]
https://en.wikipedia.org/wiki/Umbrella_species
In crystalline materials , Umklapp scattering (also U-process or Umklapp process ) is a scattering process that results in a wave vector (usually written k ) which falls outside the first Brillouin zone . If a material is periodic, it has a Brillouin zone, and any point outside the first Brillouin zone can also be expressed as a point inside the zone. So, the wave vector is then mathematically transformed to a point inside the first Brillouin zone. This transformation allows for scattering processes which would otherwise violate the conservation of momentum : two wave vectors pointing to the right can combine to create a wave vector that points to the left. This non-conservation is why crystal momentum is not a true momentum. Examples include electron-lattice potential scattering or an anharmonic phonon -phonon (or electron -phonon) scattering process, reflecting an electronic state or creating a phonon with a momentum k -vector outside the first Brillouin zone. Umklapp scattering is one process limiting the thermal conductivity in crystalline materials, the others being phonon scattering on crystal defects and at the surface of the sample. The left panel of Figure 1 schematically shows the possible scattering processes of two incoming phonons with wave vectors ( k -vectors) k 1 and k 2 (red) creating one outgoing phonon with a wave vector k 3 (blue). As long as the sum of k 1 and k 2 stays inside the first Brillouin zone (grey squares), k 3 is the sum of the former two, thus conserving phonon momentum. This process is called normal scattering (N-process). With increasing phonon momentum and thus larger wave vectors k 1 and k 2 , their sum might point outside the first Brillouin zone ( k' 3 ). As shown in the right panel of Figure 1, k -vectors outside the first Brillouin zone are physically equivalent to vectors inside it and can be mathematically transformed into each other by the addition of a reciprocal lattice vector G . These processes are called Umklapp scattering and change the total phonon momentum; a sketch of this folding is given below. Umklapp scattering is the dominant process for electrical resistivity at low temperatures for low-defect crystals [ 1 ] (as opposed to phonon-electron scattering, which dominates at high temperatures, and high-defect lattices, which lead to scattering at any temperature). Umklapp scattering is the dominant process for thermal resistivity at high temperatures for low-defect crystals. [ citation needed ] The thermal conductivity of an insulating crystal where the U-processes are dominant has a 1/T dependence. The name derives from the German word umklappen (to turn over). Rudolf Peierls , in his autobiography Bird of Passage , states that he was the originator of this phrase and coined it during his 1929 crystal lattice studies under the tutelage of Wolfgang Pauli . Peierls wrote, "…I used the German term Umklapp (flip-over) and this rather ugly word has remained in use…". [ 2 ] The term Umklapp also appears in Wilhelm Lenz 's seminal 1920 paper on the Ising model . [ 3 ]
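As a toy illustration of the folding described above, the short sketch below maps the sum of two phonon wave vectors back into the first Brillouin zone of a one-dimensional chain; the lattice constant and the two wave vectors are arbitrary illustrative choices.

```python
import numpy as np

a = 1.0                       # lattice constant (illustrative)
G = 2 * np.pi / a             # primitive reciprocal lattice vector
BZ = np.pi / a                # first Brillouin zone boundary: [-pi/a, pi/a)

def fold(k):
    """Map a wave vector back into the first Brillouin zone by adding a multiple of G."""
    return (k + BZ) % G - BZ

k1, k2 = 0.8 * BZ, 0.7 * BZ   # two phonons travelling to the right
k3 = fold(k1 + k2)            # Umklapp: k1 + k2 = k3 + G

print(k3 / BZ)                # -0.5: the outgoing phonon travels to the left
print((k1 + k2 - k3) / G)     # 1.0: the difference is exactly one reciprocal lattice vector
```

This reproduces the defining signature of a U-process: the outgoing wave vector points opposite to the incoming ones, and crystal momentum is conserved only up to a reciprocal lattice vector G.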
https://en.wikipedia.org/wiki/Umklapp_scattering
In organic chemistry , umpolung ( German: [ˈʔʊmˌpoːlʊŋ] ) or polarity inversion is the chemical modification of a functional group with the aim of reversing the polarity of that group. [ 1 ] [ 2 ] This modification allows secondary reactions of this functional group that would otherwise not be possible. [ 3 ] The concept was introduced by D. Seebach (hence the German word umpolung for reversed polarity) and E. J. Corey . Polarity analysis during retrosynthetic analysis tells a chemist when umpolung tactics are required to synthesize a target molecule. The vast majority of important organic molecules contain heteroatoms, which polarize carbon skeletons by virtue of their electronegativity. Therefore, in standard organic reactions, the majority of new bonds are formed between atoms of opposite polarity. This can be considered to be the "normal" mode of reactivity. One consequence of this natural polarization of molecules is that 1,3- and 1,5-heteroatom-substituted carbon skeletons are extremely easy to synthesize ( Aldol reaction , Claisen condensation , Michael reaction , Claisen rearrangement , Diels-Alder reaction ), whereas 1,2-, 1,4-, and 1,6-heteroatom substitution patterns are more difficult to access via "normal" reactivity. It is therefore important to understand and develop methods to induce umpolung in organic reactions. The simplest method of obtaining 1,2-, 1,4-, and 1,6-heteroatom substitution patterns is to start with them. Biochemical and industrial processes can provide inexpensive sources of chemicals that have normally inaccessible substitution patterns. For example, amino acids, oxalic acid, succinic acid, adipic acid, tartaric acid, and glucose are abundant and provide nonroutine substitution patterns. The canonical umpolung reagent is the cyanide ion. The cyanide ion is unusual in that a carbon triply bonded to a nitrogen would be expected to have a (+) polarity due to the higher electronegativity of the nitrogen atom. Yet the negative charge of the cyanide ion is localized on the carbon, giving it a (-) formal charge. This chemical ambivalence results in umpolung in many reactions where cyanide is involved. For example, cyanide is a key catalyst in the benzoin condensation , a classical example of polarity inversion. The net result of the benzoin reaction is that a bond has been formed between two carbons that are normally electrophiles. N-heterocyclic carbenes are similar to cyanide in reactivity. Like cyanide, they have an unusual chemical ambivalence which allows them to trigger umpolung in reactions where they are involved. The carbene has six electrons: two each in the carbon-nitrogen single bonds, two in its sp 2 -hybridized orbital, and an empty p-orbital. The sp 2 lone pair acts as an electron donor, whereas the empty p-orbital is capable of acting as an electron acceptor. In this example, the β-carbon of the α,β-unsaturated ester 1 formally acts as a nucleophile, [ 4 ] whereas normally it would be expected to be a Michael acceptor . This carbene reacts with the α,β-unsaturated ester 1 at the β-position, forming the intermediate enolate 2 . Through tautomerization, 2b can displace the terminal bromine atom to give 3 . An elimination reaction regenerates the carbene and releases the product 4 . For comparison: in the Baylis-Hillman reaction the same electrophilic β-carbon atom is attacked by a reagent, but the result is activation of the α-position of the enone as the nucleophile.
Biological processes can employ cyanide-like umpolung reactivity without having to rely on the toxic cyanide ion. Thiamine pyrophosphate (TPP), whose thiazolium ring can form an N-heterocyclic carbene , serves a functionally identical role. The thiazolium ring in TPP is deprotonated within the hydrophobic core of the enzyme, [ 5 ] resulting in a carbene which is capable of umpolung. Enzymes which use TPP as a cofactor can catalyze umpolung reactivity, such as the decarboxylation of pyruvate. In the absence of TPP, the decarboxylation of pyruvate would result in the placement of a negative charge on the carbonyl carbon, which would run counter to the normal polarization of the carbon-oxygen double bond. 3-membered rings are strained moieties in organic chemistry. When a 3-membered ring contains a heteroatom, such as in an epoxide or in a bromonium intermediate, the three atoms in the ring become polarized. It is impossible to assign (+) and (-) polarities to a 3-membered ring without having two adjacent atoms with the same polarity. Therefore, whenever a polarized 3-membered ring is opened by a nucleophile, umpolung inevitably results. [ 6 ] For example, the opening of ethylene oxide with hydroxide leads to ethylene glycol . Dithiane chemistry is a classic example of polarity inversion. This can be observed in the Corey-Seebach reaction . Ordinarily the oxygen atom in the carbonyl group is more electronegative than the carbon atom, and therefore the carbonyl group reacts as an electrophile at carbon. This polarity can be reversed when the carbonyl group is converted into a dithiane or a thioacetal . In synthon terminology, the ordinary carbonyl group is an acyl cation and the dithiane is a masked acyl anion . When the dithiane is derived from an aldehyde such as acetaldehyde , the acyl proton can be abstracted by n -butyllithium in THF at low temperatures. The thus-generated 2-lithio-1,3-dithiane reacts as a nucleophile in nucleophilic displacement with alkyl halides such as benzyl bromide , with other carbonyl compounds such as cyclohexanone , or with oxiranes such as phenyl-epoxyethane, shown below. After hydrolysis of the dithiane group the final reaction products are α-alkyl-ketones or α-hydroxy-ketones . A common reagent for dithiane hydrolysis is (bis(trifluoroacetoxy)iodo)benzene . Dithiane chemistry opens the way to many new chemical transformations. One example is found in so-called anion relay chemistry, in which the negative charge of an anionic functional group resulting from one organic reaction is transferred to a different location within the same carbon framework, where it is available for a secondary reaction. [ 7 ] In this example of a multi-component reaction , both formaldehyde ( 1 ) and isopropylaldehyde ( 8 ) are converted into dithianes 3 and 9 with 1,3-propanedithiol . Sulfide 3 is first silylated by reaction with tert -butyllithium and then trimethylsilyl chloride 4 , and then the second acyl proton is removed and reacted with optically active (−)- epichlorohydrin 6 , replacing its chlorine. This compound serves as the substrate for reaction with the other dithiane 9 , giving the oxirane ring-opening product 10 . Under the influence of the polar base HMPA , 10 rearranges in a 1,4-Brook rearrangement to the silyl ether 11 , reactivating the formaldehyde dithiane group as an anion (hence the anion relay concept). This dithiane group reacts with oxirane 12 to give the alcohol 13 , and in the final step the sulfide groups are removed with (bis(trifluoroacetoxy)iodo)benzene .
The anion relay chemistry tactic has been applied elegantly in the total synthesis of complex molecules of significant biological activity, such as spongistatin 2 [ 8 ] and mandelalide A. [ 9 ] [ 10 ] It is possible to form a bond between two carbons of (-) polarity by using an oxidant such as iodine . In a total synthesis of enterolactone , [ 11 ] the 1,4-relationship of oxygen substituents was assembled by the oxidative homocoupling of a carboxylate enolate using iodine as the oxidant. Ordinarily the nitrogen atom in the amine group reacts as a nucleophile by way of its lone pair . This polarity can be reversed when a primary or secondary amine is substituted with a good leaving group (such as a halogen atom or an alkoxy group ). The resulting N-substituted compound can behave as an electrophile at the nitrogen atom and react with a nucleophile, as for example in the electrophilic amination of carbanions . [ 12 ] More recently, various carbonyls have been turned into organometallic-reagent surrogates via hydrazone umpolung by C.-J. Li et al. In the presence of a catalyst, hydrazones, like organometallic reagents, can undergo nucleophilic additions, conjugate additions, and transition-metal-catalyzed cross-couplings with various electrophiles to form new C-C bonds. [ 13 ]
https://en.wikipedia.org/wiki/Umpolung
Una Ryan (born December 18, 1941) is a British-American biologist who has conducted research on vascular biology, publishing over 300 papers. After an extended research and academic career she began a career in the biotech industry . She was Director for Health Sciences of Monsanto Company ; CEO, president and director of AVANT Immunotherapeutics; and is currently the chairman of The Bay Area BioEconomy Initiative, among many other associations. She is an angel investor and focuses her funds on women-led companies. She has won numerous awards and recognition during her career, including the National Institutes of Health 's 10-year merit award, the Order of the British Empire and the Albert Einstein Award . Una Scully was born on December 18, 1941 [ 1 ] in Kuala Lumpur , Malaysia to a British father who was interned in a Japanese camp during World War II . Scully and her mother fled by boat from Singapore to England, where she completed her education, graduating with a degree in zoology from the University of Bristol in 1963. [ 2 ] In 1965, she began publishing under the name of Una Smith and did so until 1973. [ 1 ] [ 3 ] She went on to complete a PhD at the University of Cambridge in 1967 and that same year moved to the United States, taking up a Howard Hughes Fellowship at the University of Miami to study angiotensin-converting enzymes . [ 2 ] After completion of the fellowship, Smith taught as a professor of life sciences and medicine at the University of Miami School of Medicine from 1972 to 1989. [ 4 ] From 1975, her professional publications carried the name U. S. Ryan or Una Scully Ryan. [ 1 ] [ 5 ] Her work at Miami was recognized by a 10-year Merit Award from the National Institutes of Health . [ 2 ] In 1989, Ryan married the surgeon Allan Dana Callow, [ 6 ] and beginning in 1990 she worked as a Research Professor of Surgery, Medicine and Cell Biology at Washington University School of Medicine in St. Louis, Missouri . Simultaneously, she accepted a position at Monsanto as Director for Health Sciences. She left Monsanto in 1992 and joined AVANT Immunotherapeutics Inc. as a vice-president and chief scientific researcher in May 1993. [ 4 ] Around the same time, she left Washington University for a position as a Research Professor of Medicine at Boston University School of Medicine , and she obtained US citizenship in 1994. [ 7 ] In 1996, Ryan was promoted to president at AVANT and also began serving as chief executive officer and president of Celldex Therapeutics Inc., [ 4 ] all the while continuing to research and publish papers on vaccines against viral and bacterial diseases and for cholesterol management. [ 7 ] Ryan was awarded the Order of the British Empire in 2002 for her contributions to research and development in biotechnology. [ 7 ] In 2007, she was honored with the Albert Einstein Award for her development of new vaccines to combat global infectious diseases, [ 2 ] and in 2008 she left the for-profit sector, resigning her positions at AVANT and Celldex. [ 4 ] In 2009 she was awarded an honorary doctorate from the University of Bristol. [ 2 ] In addition to vaccines, Ryan has worked on clean water solutions, and in 2009 was awarded a Cartier Women's Initiative Award for a wastewater cleaning program using blue-green algae and solar energy .
[ 8 ] Unable to secure venture capital funds for the program, Ryan turned her focus toward a program called Diagnostics for All, in an attempt to provide inexpensive diagnostic tests to developing countries. The innovation used paper tests and a drop of blood: when chemicals were applied, the paper would change color to indicate different results, with no lab work required and simple disposal, as the paper could be burned. [ 9 ] The company began with liver testing and then expanded its products to include pregnancy tests and a glucose monitoring test for diabetics. [ 10 ] Deciding to relocate to the American west coast in 2013, Ryan accepted a position as the first woman to chair the Bay Area BioEconomy Initiative. While she was in Boston, Ryan had served on the board of the Biotechnology Industry Organization, and the BioEconomy Initiative had similar aims of increasing efficiency and decreasing the time it takes for products to begin clinical trials and ultimately reach medical professionals. She also turned her focus to angel investing in an attempt to help women-run businesses find venture capital funds. Ryan served as managing director of Golden Seeds, as a partner in Astia Angel, and participated with The Angel Forum, all aimed at investing in startups and mentoring businesses in the Silicon Valley. [ 11 ] She continues to serve on the boards of several biotechnology firms. [ 4 ] In 2015, Ryan launched ULUX, a fine-art venture based on her electron micrographs. Ryan has two daughters: Tamsin Smith , a poet and social impact innovator who helped create and served as founding president of Bono's Product Red initiative, and Amy Ryan Dowsett, an interior designer. Both daughters live in San Francisco, CA. Ryan has four grandchildren.
https://en.wikipedia.org/wiki/Una_Ryan
In reliability engineering , the term availability has several related meanings, all describing the degree to which a system is in an operable state when it is required for use. Normally high availability systems might be specified as 99.98%, 99.999% or 99.9996%. The converse, unavailability , is 1 minus the availability. The simplest representation of availability ( A ) is the ratio of the expected value of the uptime of a system to the aggregate of the expected values of up and down time (which results in the "total amount of time" C of the observation window): A = E [ uptime ] / ( E [ uptime ] + E [ downtime ] ) = E [ uptime ] / C . {\displaystyle A={\frac {E[{\text{uptime}}]}{E[{\text{uptime}}]+E[{\text{downtime}}]}}={\frac {E[{\text{uptime}}]}{C}}.} Another equation for availability ( A ) is the ratio of the Mean Time To Failure (MTTF) and Mean Time Between Failure (MTBF), or A = MTTF / MTBF = MTTF / ( MTTF + MTTR ) . {\displaystyle A={\frac {\text{MTTF}}{\text{MTBF}}}={\frac {\text{MTTF}}{{\text{MTTF}}+{\text{MTTR}}}}.} If we define the status function X ( t ) {\displaystyle X(t)} as X ( t ) = 1 {\displaystyle X(t)=1} if the system functions at time t {\displaystyle t} and 0 {\displaystyle 0} otherwise, then the availability A ( t ) at time t > 0 is represented by A ( t ) = Pr [ X ( t ) = 1 ] = E [ X ( t ) ] . {\displaystyle A(t)=\Pr[X(t)=1]=E[X(t)].} Average availability must be defined on an interval of the real line. If we consider an arbitrary constant c > 0 {\displaystyle c>0} , then average availability is represented as A c = 1 c ∫ 0 c A ( t ) d t . {\displaystyle A_{c}={\frac {1}{c}}\int _{0}^{c}A(t)\,dt.} Limiting (or steady-state) availability is represented by [ 1 ] A = lim t → ∞ A ( t ) . {\displaystyle A=\lim _{t\to \infty }A(t).} Limiting average availability is also defined on an interval [ 0 , c ] {\displaystyle [0,c]} as A ∞ = lim c → ∞ 1 c ∫ 0 c A ( t ) d t . {\displaystyle A_{\infty }=\lim _{c\to \infty }{\frac {1}{c}}\int _{0}^{c}A(t)\,dt.} Availability is the probability that an item will be in an operable and committable state at the start of a mission when the mission is called for at a random time, and is generally defined as uptime divided by total time (uptime plus downtime). Suppose a series system is composed of components A, B and C. Then the following formula applies: Availability of series system = (availability of component A) × (availability of component B) × (availability of component C) [ 2 ] [ 3 ] Therefore, the combined availability of multiple components in a series is always lower than the availability of the individual components. On the other hand, the following formula applies to parallel components: Availability of parallel components = 1 − (1 − availability of component A) × (1 − availability of component B) × (1 − availability of component C) [ 2 ] [ 3 ] As a corollary, if you have N parallel components each having availability X, then: Availability of parallel components = 1 − (1 − X)^N [ 3 ] Using parallel components can exponentially increase the availability of the overall system. [ 2 ] For example, if each of your hosts has only 50% availability, by using 10 hosts in parallel you can achieve 99.9023% availability (a sketch illustrating these formulas appears at the end of this section). [ 3 ] Note that redundancy does not always lead to higher availability: redundancy increases complexity, which in turn can reduce availability, and according to Marc Brooker certain conditions must be met to take advantage of it. [ 4 ] Reliability Block Diagrams or Fault Tree Analysis are developed to calculate the availability of a system or a functional failure condition within a system, taking many factors into account. Furthermore, these methods are capable of identifying the most critical items and the failure modes or events that impact availability. Availability, inherent (A i ) [ 5 ] The probability that an item will operate satisfactorily at a given point in time when used under stated conditions in an ideal support environment. It excludes logistics time, waiting or administrative downtime, and preventive maintenance downtime. It includes corrective maintenance downtime. Inherent availability is generally derived from analysis of an engineering design: it is based on quantities under control of the designer. Availability, achieved (A a ) [ 6 ] The probability that an item will operate satisfactorily at a given point in time when used under stated conditions in an ideal support environment (i.e., that personnel, tools, spares, etc. are instantaneously available).
It excludes logistics time and waiting or administrative downtime. It includes active preventive and corrective maintenance downtime. Availability, operational (A o ) [ 7 ] The probability that an item will operate satisfactorily at a given point in time when used in an actual or realistic operating and support environment. It includes logistics time, ready time, waiting or administrative downtime, and both preventive and corrective maintenance downtime. This value is equal to the mean time between failure ( MTBF ) divided by the mean time between failure plus the mean downtime (MDT); that is, A o = MTBF / (MTBF + MDT). This measure extends the definition of availability to elements controlled by the logisticians and mission planners, such as quantity and proximity of spares, tools and manpower to the hardware item. Refer to systems engineering for more details. If we are using equipment which has a mean time to failure (MTTF) of 81.5 years and a mean time to repair (MTTR) of 1 hour, the outage due to the equipment is MTTR / MTTF = 1 hour per 81.5 years, or about 0.0123 hours per year. Availability is well established in the literature of stochastic modeling and optimal maintenance . Barlow and Proschan [1975] define availability of a repairable system as "the probability that the system is operating at a specified time t." Blanchard [1998] gives a qualitative definition of availability as "a measure of the degree of a system which is in the operable and committable state at the start of mission when the mission is called for at an unknown random point in time." This definition comes from the MIL-STD-721. Lie, Hwang, and Tillman [1977] developed a complete survey along with a systematic classification of availability. Availability measures are classified by either the time interval of interest or the mechanisms for the system downtime . If the time interval of interest is the primary concern, we consider instantaneous, limiting, average, and limiting average availability. The aforementioned definitions are developed in Barlow and Proschan [1975], Lie, Hwang, and Tillman [1977], and Nachlas [1998]. The second primary classification for availability is contingent on the various mechanisms for downtime, such as the inherent availability, achieved availability, and operational availability (Blanchard [1998], Lie, Hwang, and Tillman [1977]). Mi [1998] gives some comparison results of availability considering inherent availability. Availability considered in maintenance modeling can be found in Barlow and Proschan [1975] for replacement models, Fawzi and Hawkes [1991] for an R-out-of-N system with spares and repairs, Fawzi and Hawkes [1990] for a series system with replacement and repair, Iyer [1992] for imperfect repair models, Murdock [1995] for age replacement preventive maintenance models, Nachlas [1998, 1989] for preventive maintenance models, and Wang and Pham [1996] for imperfect maintenance models. A very comprehensive recent book is by Trivedi and Bobbio [2017]. Availability factor is used extensively in power plant engineering . For example, the North American Electric Reliability Corporation implemented the Generating Availability Data System in 1982. [ 8 ]
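The series and parallel formulas above translate directly into code. The following minimal sketch reproduces the article's ten-host example; the three series availabilities are invented for illustration.

```python
from math import prod

def series(avails):
    """Availability of components in series: the product of the availabilities."""
    return prod(avails)

def parallel(avails):
    """Availability of redundant components: 1 minus the product of the unavailabilities."""
    return 1 - prod(1 - a for a in avails)

# Three series components (values invented for illustration):
print(series([0.99, 0.95, 0.90]))   # 0.84645 -- below the weakest component

# The article's example: ten parallel hosts, each only 50% available.
print(parallel([0.5] * 10))         # 0.9990234375, i.e. ~99.9023%
```

The two results illustrate the asymmetry noted above: series composition can only lower availability, while adding parallel redundancy drives unavailability down geometrically, at the cost of added complexity.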
https://en.wikipedia.org/wiki/Unavailability
Uncacheable speculative write combining ( USWC ) is a computer BIOS setting for memory communication between a CPU and graphics card . [ 1 ] [ 2 ] [ 3 ] It allows faster communication than the "uncachable" setting (the alternative to USWC in the BIOS), as long as the graphics card supports write combining (which most modern cards do), allowing data to be temporarily stored in write combine buffers (WCB) and released in burst mode rather than single bits.
https://en.wikipedia.org/wiki/Uncacheable_speculative_write_combining
The detailed design of buildings needs to take into account various external factors, which may be subject to uncertainties . Among these factors are prevailing weather and climate ; the properties of the materials used and the standard of workmanship ; and the behaviour of occupants of the building. Several studies have indicated that the behavioural factors are the most important among these. Methods have been developed to estimate the extent of variability in these factors and the resulting need to take this variability into account at the design stage. Earlier work includes a paper by Gero and Dudnik (1978) presenting a methodology to solve the problem of designing heating, ventilation and air conditioning systems subjected to uncertain demands. Since then, other authors have shown an interest in the uncertainties that are present in building design. Ramallo-González (2013) [ 1 ] classified uncertainties in building energy assessment tools into three groups: environmental uncertainties; uncertainties due to workmanship and the quality of building elements; and uncertainties due to occupants' behaviour. Buildings have long life spans: for example, in England and Wales, around 40% of the office blocks existing in 2004 were built before 1940 (30% if considered by floor area), [ 3 ] and 38.9% of English dwellings in 2007 were built before 1944. [ 4 ] This long life span makes buildings likely to operate in climates that might change due to global warming. De Wilde and Coley (2012) showed how important it is to design buildings that take climate change into consideration and that are able to perform well in future weather conditions. [ 5 ] The use of synthetic weather data files may introduce further uncertainty. Wang et al. (2005) showed the impact that uncertainties in weather data (among others) may have on energy demand calculations. [ 6 ] The deviation in calculated energy use due to variability in the weather data was found to differ between locations, from a range of (-0.5% to 3%) in San Francisco to a range of (-4% to 6%) in Washington D.C. The ranges were calculated using a Typical Meteorological Year (TMY) as the reference. The spatial resolution of weather data files was the concern covered by Eames et al. (2011). [ 7 ] Eames showed how a low spatial resolution of weather data files can be the cause of disparities of up to 40% in the heating demand. The reason is that this uncertainty is understood not as an aleatory parameter but as an epistemic uncertainty, which can be resolved with an appropriate improvement of the data resources or with specific weather data acquisition for each project. A large study was carried out by Leeds Metropolitan University at Stamford Brook in England. This project saw 700 dwellings built to high efficiency standards. [ 8 ] The results of this project show a significant gap between the energy use expected before construction and the actual energy use once the house is occupied. The workmanship is analysed in this work. The authors emphasise the importance of thermal bridges that were not considered in the calculations, and note that the thermal bridges with the largest impact on final energy use are those originating from the internal partitions that separate dwellings. The dwellings monitored in use in this study show a large difference between the real energy use and that estimated using the UK Standard Assessment Procedure (SAP), with one of them using +176% of the expected value when in use. Hopfe has published several papers concerning uncertainties in building design. A 2007 publication [ 9 ] looks into uncertainties of types 2 and 3.
In this work the uncertainties are defined as normal distributions. The random parameters are sampled to generate 200 tests that are sent to the simulator (VA114), the results of which are analysed to identify the uncertainties with the largest impact on the energy calculations. This work showed that the uncertainty in the value used for infiltration is the factor likely to have the largest influence on cooling and heating demands. De Wilde and Tian (2009) agreed with Hopfe on the impact of uncertainties in infiltration upon energy calculations, but also introduced other factors. The work of Schnieders and Hermelink (2006) [ 10 ] showed a substantial variability in the energy demands of low-energy buildings designed under the same ( Passivhaus ) specification. Blight and Coley (2012) [ 11 ] showed that substantial variability in energy use can arise from variance in occupant behaviour, including the use of windows and doors. Their paper also demonstrated that their method of modelling occupants' behaviour accurately reproduces actual behavioural patterns of inhabitants. This modelling method was the one developed by Richardson et al. (2008), [ 12 ] using the Time-Use Survey (TUS) of the United Kingdom as a source for real behaviour of occupants, based on the activity of more than 6000 occupants as recorded in 24-hour diaries with a 10-minute resolution. Richardson's paper shows how the tool is able to generate behavioural patterns that correlate with the real data obtained from the TUS. In the work of Pettersen (1994), uncertainties of group 2 (workmanship and quality of elements) and group 3 (behaviour) of the previous grouping were considered. [ 13 ] This work shows how important occupants' behaviour is in the calculation of the energy demand of a building. Pettersen showed that the total energy use follows a normal distribution with a standard deviation of around 7.6% when the uncertainties due to occupants are considered, and of around 4.0% when considering those generated by the properties of the building elements. Wang et al. (2005) showed that deviations in energy demand due to local variability in weather data were smaller than those due to operational parameters linked with occupants' behaviour. For the latter, the ranges were (-29% to 79%) for San Francisco and (-28% to 57%) for Washington D.C. The conclusion of this paper is that occupants have a larger impact on energy calculations than the variability between synthetically generated weather data files. Another study, performed by de Wilde and Wei Tian (2009), [ 14 ] compared the impact of most of the uncertainties affecting building energy calculations, including uncertainties in weather, the U-value of windows, and other variables related to occupants' behaviour (equipment and lighting), while taking into account climate change. De Wilde and Tian used a two-dimensional Monte Carlo analysis to generate a database obtained with 7280 runs of a building simulator. A sensitivity analysis was applied to this database to obtain the most significant factors in the variability of the energy demand calculations. Standardised regression coefficients and standardised rank regression coefficients were used to compare the impacts of the uncertainties. Their paper compares many of the uncertainties using a good-sized database, providing a realistic comparison across the sampled uncertainties. A simplified illustration of this kind of analysis is sketched below.
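The following is a minimal sketch of the Monte Carlo and standardised-regression-coefficient workflow described above. The toy linear "simulator", the input distributions, and the coefficients are invented for illustration and are not taken from Hopfe or from de Wilde and Tian; only the workflow (sample uncertain inputs, run a model, regress standardised outputs on standardised inputs) mirrors the cited studies.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 7280  # same number of runs as the de Wilde and Tian database

# Uncertain inputs drawn from normal distributions (as in Hopfe's setup);
# the means and spreads below are invented for illustration.
infiltration = rng.normal(0.5, 0.1, n)    # air changes per hour
u_window     = rng.normal(1.8, 0.2, n)    # W/(m^2 K)
occupancy    = rng.normal(25.0, 8.0, n)   # internal gains proxy, W per occupant

# Toy linear surrogate standing in for a building simulator such as VA114:
energy = (120.0 + 60.0 * infiltration + 12.0 * u_window
          - 0.5 * occupancy + rng.normal(0.0, 3.0, n))  # kWh/(m^2 a)

# Standardised regression coefficients: regress the standardised output on
# the standardised inputs; a larger |SRC| means more influence on the spread.
X = np.column_stack([infiltration, u_window, occupancy])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (energy - energy.mean()) / energy.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, c in zip(["infiltration", "window U-value", "occupancy"], src):
    print(f"{name:>15}: SRC = {c:+.2f}")   # infiltration dominates here
```

In a real study the surrogate line would be replaced by thousands of dynamic building simulations, and rank-based coefficients would be used when the input-output relationship is monotonic but nonlinear.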
https://en.wikipedia.org/wiki/Uncertainties_in_building_design_and_building_energy_assessment
The uncertainty effect , also known as direct risk aversion , is a phenomenon from economics and psychology which suggests that individuals may be prone to expressing such an extreme distaste for risk that they ascribe a lower value to a risky prospect (e.g., a lottery for which outcomes and their corresponding probabilities are known) than its worst possible realization. [ 1 ] [ 2 ] For example, in the original work on the uncertainty effect by Uri Gneezy , John A. List , and George Wu (2006), individuals were willing to pay $38 for a $50 gift card, but were only willing to pay $28 for a lottery ticket that would yield a $50 or $100 gift card with equal probability. [ 1 ] This effect is considered to be a violation of "internality" (i.e., the proposition that the value of a risky prospect must lie somewhere between the value of that prospect's best and worst possible realizations), which is central to prospect theory , expected utility theory , and other models of risky choice. [ 1 ] Additionally, it has been proposed as an explanation for a host of naturalistic behaviors which cannot be explained by dominant models of risky choice, such as the popularity of insurance/extended warranties for consumer products. [ 2 ] Research on the uncertainty effect was first formally conducted by Uri Gneezy, John A. List, and George Wu in the early 2000s, though it follows in the footsteps of a large body of work devoted to understanding decision making under risk. As their starting point, Gneezy, List, and Wu noted that most models of risky choice assume that when presented with a risky prospect individuals engage in a balancing exercise of sorts, in which they compare the best possible outcomes they might realize to the worst possible outcomes they might realize (e.g., in a gamble that gives a 50-50 chance to win $500 or $1,000, individuals might compare these two outcomes to one another). Within this type of schema, individuals are also expected to weight the value (or utility) of each of these discrete outcomes in accordance with the probability that each will occur. [ 1 ] While expected utility theory and prospect theory differ in terms of how outcomes are evaluated and weighted, they both nonetheless rely upon what Gneezy, List, and Wu term the "internality axiom." This axiom specifically posits that the value of some risky prospect must lie between the value of that prospect's best and worst possible outcomes. Formally, for some risky prospect L = ( x , p , y ) {\displaystyle L=(x,p,y)} which offers p {\displaystyle p} probability of earning x {\displaystyle x} and 1 − p {\displaystyle 1-p} probability of earning y {\displaystyle y} (where x {\displaystyle x} is strictly greater than y {\displaystyle y} ), individuals' elicited values for x {\displaystyle x} , y {\displaystyle y} , and L {\displaystyle L} should satisfy the following inequality: V ( x ) ≥ V ( L ) ≥ V ( y ) {\displaystyle V(x)\geq V(L)\geq V(y)} . [ 1 ] In a series of studies conducted by Gneezy, List, and Wu, and in follow-up work conducted by Uri Simonsohn (among others), individuals were repeatedly shown to violate this assumption. Within this body of work, the uncertainty effect was also shown to extend to choice and to consideration of delayed outcomes; it was also shown not to be a consequence of poorly comprehending the lottery. [ 1 ] [ 2 ] Among other explanations, it has been proposed that the uncertainty effect might arise as a consequence of individuals experiencing some form of disutility from risk.
[ 2 ] In his follow-up work on the uncertainty effect (or, as he termed it, direct risk aversion), Simonsohn suggested that it might provide an explanation for certain types of responses to risk that cannot be explained by prospect theory and expected utility theory. One notable example is the widespread popularity of insurance for small-stakes and/or low-probability risks – e.g., warranties for consumer electronics, low-deductible insurance policies, and so on; dominant theories of risky choice do not predict that such products should be popular, and Simonsohn asserted that the uncertainty effect might help to explain why. [ 2 ] In the years after Gneezy, List, and Wu published their findings, several other scholars asserted that the uncertainty effect was simply a consequence of individuals misunderstanding the lottery utilized in initial tests of the effect. [ 3 ] [ 4 ] Such claims were partially refuted by Simonsohn, whose 2009 paper utilized revised lottery instructions, and by several other successful replications of the uncertainty effect published in subsequent years. [ 2 ] [ 5 ] [ 6 ] [ 7 ] Notably, however, in later work with Robert Mislavsky, Simonsohn suggested that the uncertainty effect might be a consequence of aversion to "weird" transaction features, as opposed to some form of disutility from risk. [ 8 ] These and other alternative explanations are briefly summarized below. In work published in 2013, Yang Yang, Joachim Vosgerau, and George Loewenstein suggested that the uncertainty effect might in fact be understood as a framing effect. Specifically, they posited that the anomalies associated with the uncertainty effect might not arise as a consequence of distaste for/disutility from risk, but rather as a consequence of the fact that in most experiments which successfully replicated the uncertainty effect, certain outcomes were contrasted with risky prospects described as lotteries, gambles, and the like. As such, they posited that the effect might instead be described as an aversion to lotteries, or – as they term it – an aversion to "bad deals." [ 9 ] Although Simonsohn initially proposed that the uncertainty effect might reflect a distaste for uncertainty, in later work he and colleague Robert Mislavsky instead explored the idea that adding "weird" features to a transaction might give rise to patterns which appear consistent with the uncertainty effect. For example, they noted that internality violations may arise as a consequence of being averse to the notion of purchasing a coin flip or other gamble in order to obtain a gift card, rather than from the uncertainty represented by the coin flip itself. In their work, Mislavsky and Simonsohn systematically explored this notion, and suggested that the aversion to weird transactions may help to provide a more parsimonious explanation for certain failures to replicate the uncertainty effect. [ 8 ]
https://en.wikipedia.org/wiki/Uncertainty_effect
The uncertainty principle , also known as Heisenberg's indeterminacy principle , is a fundamental concept in quantum mechanics . It states that there is a limit to the precision with which certain pairs of physical properties, such as position and momentum , can be simultaneously known. In other words, the more accurately one property is measured, the less accurately the other property can be known. More formally, the uncertainty principle is any of a variety of mathematical inequalities asserting a fundamental limit to the product of the accuracy of certain related pairs of measurements on a quantum system, such as position , x , and momentum, p . [ 1 ] Such paired-variables are known as complementary variables or canonically conjugate variables . First introduced in 1927 by German physicist Werner Heisenberg , [ 2 ] [ 3 ] [ 4 ] [ 5 ] the formal inequality relating the standard deviation of position σ x and the standard deviation of momentum σ p was derived by Earle Hesse Kennard [ 6 ] later that year and by Hermann Weyl [ 7 ] in 1928: σ x σ p ≥ ℏ 2 {\displaystyle \sigma _{x}\sigma _{p}\geq {\frac {\hbar }{2}}} where ℏ = h 2 π {\displaystyle \hbar ={\frac {h}{2\pi }}} is the reduced Planck constant . The quintessentially quantum mechanical uncertainty principle comes in many forms other than position–momentum. The energy–time relationship is widely used to relate quantum state lifetime to measured energy widths but its formal derivation is fraught with confusing issues about the nature of time. The basic principle has been extended in numerous directions; it must be considered in many kinds of fundamental physical measurements. It is vital to illustrate how the principle applies to relatively intelligible physical situations since it is indiscernible on the macroscopic [ 8 ] scales that humans experience. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive, but the more abstract matrix mechanics picture formulates it in a way that generalizes more easily. Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another (i.e., position and momentum are conjugate variables ). A nonzero function and its Fourier transform cannot both be sharply localized at the same time. [ 9 ] A similar tradeoff between the variances of Fourier conjugates arises in all systems underlain by Fourier analysis, for example in sound waves: A pure tone is a sharp spike at a single frequency, while its Fourier transform gives the shape of the sound wave in the time domain, which is a completely delocalized sine wave. In quantum mechanics, the two key points are that the position of the particle takes the form of a matter wave, and momentum is its Fourier conjugate, assured by the de Broglie relation p = ħk , where k is the wavenumber . In matrix mechanics , the mathematical formulation of quantum mechanics , any pair of non- commuting self-adjoint operators representing observables are subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value (the eigenvalue). For example, if a measurement of an observable A is performed, then the system is in a particular eigenstate Ψ of that observable. 
However, the particular eigenstate of the observable A need not be an eigenstate of another observable B; in that case it does not have a unique associated measurement value, as the system is not in an eigenstate of that observable. [10] The uncertainty principle can be visualized using the position- and momentum-space wavefunctions for one spinless particle with mass in one dimension. The more localized the position-space wavefunction, the more likely the particle is to be found with the position coordinates in that region, and correspondingly the momentum-space wavefunction is less localized, so the possible momentum components the particle could have are more widespread. Conversely, the more localized the momentum-space wavefunction, the more likely the particle is to be found with those values of momentum components in that region, and correspondingly the less localized the position-space wavefunction, so the position coordinates the particle could occupy are more widespread. These wavefunctions are Fourier transforms of each other: mathematically, the uncertainty principle expresses the relationship between conjugate variables in the transform. According to the de Broglie hypothesis, every object in the universe is associated with a wave. Thus every object, from an elementary particle to atoms, molecules and on up to planets and beyond, is subject to the uncertainty principle. The time-independent wave function of a single-mode plane wave of wavenumber k0 or momentum p0 is [11] {\displaystyle \psi (x)\propto e^{ik_{0}x}=e^{ip_{0}x/\hbar }~.} The Born rule states that this should be interpreted as a probability amplitude, in the sense that the probability of finding the particle between a and b is {\displaystyle \operatorname {P} [a\leq X\leq b]=\int _{a}^{b}|\psi (x)|^{2}\,\mathrm {d} x~.} In the case of the single-mode plane wave, {\displaystyle |\psi (x)|^{2}} takes the same value at every x: it is a uniform distribution. In other words, the particle position is extremely uncertain, in the sense that it could be essentially anywhere along the wave packet. On the other hand, consider a wave function that is a sum of many waves, which we may write as {\displaystyle \psi (x)\propto \sum _{n}A_{n}e^{ip_{n}x/\hbar }~,} where An represents the relative contribution of the mode pn to the overall total. With the addition of many plane waves, the wave packet can become more localized. We may take this a step further to the continuum limit, where the wave function is an integral over all possible modes {\displaystyle \psi (x)={\frac {1}{\sqrt {2\pi \hbar }}}\int _{-\infty }^{\infty }\varphi (p)\cdot e^{ipx/\hbar }\,dp~,} with φ(p) representing the amplitude of these modes; it is called the wave function in momentum space. In mathematical terms, we say that φ(p) is the Fourier transform of ψ(x) and that x and p are conjugate variables. Adding together all of these plane waves comes at a cost, namely the momentum has become less precise, having become a mixture of waves of many different momenta. [12] One way to quantify the precision of the position and momentum is the standard deviation σ.
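The localization tradeoff just described can be made concrete numerically. The following Python sketch is an illustration, not part of the article's derivation; it assumes natural units with ħ = 1 and an ad hoc Gaussian packet, builds ψ(x), obtains φ(p) with a fast Fourier transform, and compares the two spreads:

import numpy as np

# Hedged sketch: Gaussian wave packet and its momentum-space transform (hbar = 1).
hbar = 1.0
N, span = 4096, 80.0
x = np.linspace(-span/2, span/2, N, endpoint=False)
dx = x[1] - x[0]
sigma, p0 = 0.7, 3.0                                  # illustrative width and mean momentum
psi = np.exp(-x**2/(4*sigma**2) + 1j*p0*x/hbar)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)             # normalize to unit probability
p = 2*np.pi*hbar*np.fft.fftfreq(N, d=dx)              # momentum grid conjugate to x
dp = p[1] - p[0]
phi = np.fft.fft(psi)*dx/np.sqrt(2*np.pi*hbar)        # phi(p), up to an irrelevant phase
def sdev(u, w, du):
    m = np.sum(u*w)*du
    return np.sqrt(np.sum((u - m)**2*w)*du)
sx = sdev(x, np.abs(psi)**2, dx)
sp = sdev(p, np.abs(phi)**2, dp)
print(sx, sp, sx*sp)                                  # product ~0.5 = hbar/2 for a Gaussian

Narrowing sigma localizes |ψ(x)|² and visibly broadens |φ(p)|², while the product of the two spreads stays pinned near ħ/2, anticipating the Kennard bound derived next.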
Since {\displaystyle |\psi (x)|^{2}} is a probability density function for position, we calculate its standard deviation. The precision of the position is improved, i.e. σx is reduced, by using many plane waves, thereby weakening the precision of the momentum, i.e. increasing σp. Another way of stating this is that σx and σp trade off against each other: their product is bounded from below. This is the uncertainty principle, the exact limit of which is the Kennard bound. We are interested in the variances of position and momentum, defined as {\displaystyle \sigma _{x}^{2}=\int _{-\infty }^{\infty }x^{2}\cdot |\psi (x)|^{2}\,dx-\left(\int _{-\infty }^{\infty }x\cdot |\psi (x)|^{2}\,dx\right)^{2}} and {\displaystyle \sigma _{p}^{2}=\int _{-\infty }^{\infty }p^{2}\cdot |\varphi (p)|^{2}\,dp-\left(\int _{-\infty }^{\infty }p\cdot |\varphi (p)|^{2}\,dp\right)^{2}~.} Without loss of generality, we will assume that the means vanish, which just amounts to a shift of the origin of our coordinates. (A more general proof that does not make this assumption is given below.) This gives us the simpler form {\displaystyle \sigma _{x}^{2}=\int _{-\infty }^{\infty }x^{2}\cdot |\psi (x)|^{2}\,dx} and {\displaystyle \sigma _{p}^{2}=\int _{-\infty }^{\infty }p^{2}\cdot |\varphi (p)|^{2}\,dp~.} The function {\displaystyle f(x)=x\cdot \psi (x)} can be interpreted as a vector in a function space. We can define an inner product for a pair of functions u(x) and v(x) in this vector space: {\displaystyle \langle u\mid v\rangle =\int _{-\infty }^{\infty }u^{*}(x)\cdot v(x)\,dx,} where the asterisk denotes the complex conjugate. With this inner product defined, we note that the variance for position can be written as {\displaystyle \sigma _{x}^{2}=\int _{-\infty }^{\infty }|f(x)|^{2}\,dx=\langle f\mid f\rangle ~.} We can repeat this for momentum by interpreting the function {\displaystyle {\tilde {g}}(p)=p\cdot \varphi (p)} as a vector, but we can also take advantage of the fact that ψ(x) and φ(p) are Fourier transforms of each other.
We evaluate the inverse Fourier transform through integration by parts : g ( x ) = 1 2 π ℏ ⋅ ∫ − ∞ ∞ g ~ ( p ) ⋅ e i p x / ℏ d p = 1 2 π ℏ ∫ − ∞ ∞ p ⋅ φ ( p ) ⋅ e i p x / ℏ d p = 1 2 π ℏ ∫ − ∞ ∞ [ p ⋅ ∫ − ∞ ∞ ψ ( χ ) e − i p χ / ℏ d χ ] ⋅ e i p x / ℏ d p = i 2 π ∫ − ∞ ∞ [ ψ ( χ ) e − i p χ / ℏ | − ∞ ∞ − ∫ − ∞ ∞ d ψ ( χ ) d χ e − i p χ / ℏ d χ ] ⋅ e i p x / ℏ d p = − i ∫ − ∞ ∞ d ψ ( χ ) d χ [ 1 2 π ∫ − ∞ ∞ e i p ( x − χ ) / ℏ d p ] d χ = − i ∫ − ∞ ∞ d ψ ( χ ) d χ [ δ ( x − χ ℏ ) ] d χ = − i ℏ ∫ − ∞ ∞ d ψ ( χ ) d χ [ δ ( x − χ ) ] d χ = − i ℏ d ψ ( x ) d x = ( − i ℏ d d x ) ⋅ ψ ( x ) , {\displaystyle {\begin{aligned}g(x)&={\frac {1}{\sqrt {2\pi \hbar }}}\cdot \int _{-\infty }^{\infty }{\tilde {g}}(p)\cdot e^{ipx/\hbar }\,dp\\&={\frac {1}{\sqrt {2\pi \hbar }}}\int _{-\infty }^{\infty }p\cdot \varphi (p)\cdot e^{ipx/\hbar }\,dp\\&={\frac {1}{2\pi \hbar }}\int _{-\infty }^{\infty }\left[p\cdot \int _{-\infty }^{\infty }\psi (\chi )e^{-ip\chi /\hbar }\,d\chi \right]\cdot e^{ipx/\hbar }\,dp\\&={\frac {i}{2\pi }}\int _{-\infty }^{\infty }\left[{\cancel {\left.\psi (\chi )e^{-ip\chi /\hbar }\right|_{-\infty }^{\infty }}}-\int _{-\infty }^{\infty }{\frac {d\psi (\chi )}{d\chi }}e^{-ip\chi /\hbar }\,d\chi \right]\cdot e^{ipx/\hbar }\,dp\\&=-i\int _{-\infty }^{\infty }{\frac {d\psi (\chi )}{d\chi }}\left[{\frac {1}{2\pi }}\int _{-\infty }^{\infty }\,e^{ip(x-\chi )/\hbar }\,dp\right]\,d\chi \\&=-i\int _{-\infty }^{\infty }{\frac {d\psi (\chi )}{d\chi }}\left[\delta \left({\frac {x-\chi }{\hbar }}\right)\right]\,d\chi \\&=-i\hbar \int _{-\infty }^{\infty }{\frac {d\psi (\chi )}{d\chi }}\left[\delta \left(x-\chi \right)\right]\,d\chi \\&=-i\hbar {\frac {d\psi (x)}{dx}}\\&=\left(-i\hbar {\frac {d}{dx}}\right)\cdot \psi (x),\end{aligned}}} where v = ℏ − i p e − i p χ / ℏ {\displaystyle v={\frac {\hbar }{-ip}}e^{-ip\chi /\hbar }} in the integration by parts, the cancelled term vanishes because the wave function vanishes at both infinities and | e − i p χ / ℏ | = 1 {\displaystyle |e^{-ip\chi /\hbar }|=1} , and then use the Dirac delta function which is valid because d ψ ( χ ) d χ {\displaystyle {\dfrac {d\psi (\chi )}{d\chi }}} does not depend on p . The term − i ℏ d d x {\textstyle -i\hbar {\frac {d}{dx}}} is called the momentum operator in position space. Applying Plancherel's theorem , we see that the variance for momentum can be written as σ p 2 = ∫ − ∞ ∞ | g ~ ( p ) | 2 d p = ∫ − ∞ ∞ | g ( x ) | 2 d x = ⟨ g ∣ g ⟩ . {\displaystyle \sigma _{p}^{2}=\int _{-\infty }^{\infty }|{\tilde {g}}(p)|^{2}\,dp=\int _{-\infty }^{\infty }|g(x)|^{2}\,dx=\langle g\mid g\rangle .} The Cauchy–Schwarz inequality asserts that σ x 2 σ p 2 = ⟨ f ∣ f ⟩ ⋅ ⟨ g ∣ g ⟩ ≥ | ⟨ f ∣ g ⟩ | 2 . {\displaystyle \sigma _{x}^{2}\sigma _{p}^{2}=\langle f\mid f\rangle \cdot \langle g\mid g\rangle \geq |\langle f\mid g\rangle |^{2}~.} The modulus squared of any complex number z can be expressed as | z | 2 = ( Re ( z ) ) 2 + ( Im ( z ) ) 2 ≥ ( Im ( z ) ) 2 = ( z − z ∗ 2 i ) 2 . {\displaystyle |z|^{2}={\Big (}{\text{Re}}(z){\Big )}^{2}+{\Big (}{\text{Im}}(z){\Big )}^{2}\geq {\Big (}{\text{Im}}(z){\Big )}^{2}=\left({\frac {z-z^{\ast }}{2i}}\right)^{2}.} we let z = ⟨ f | g ⟩ {\displaystyle z=\langle f|g\rangle } and z ∗ = ⟨ g ∣ f ⟩ {\displaystyle z^{*}=\langle g\mid f\rangle } and substitute these into the equation above to get | ⟨ f ∣ g ⟩ | 2 ≥ ( ⟨ f ∣ g ⟩ − ⟨ g ∣ f ⟩ 2 i ) 2 . 
{\displaystyle |\langle f\mid g\rangle |^{2}\geq \left({\frac {\langle f\mid g\rangle -\langle g\mid f\rangle }{2i}}\right)^{2}~.} All that remains is to evaluate these inner products. {\displaystyle {\begin{aligned}\langle f\mid g\rangle -\langle g\mid f\rangle &=\int _{-\infty }^{\infty }\psi ^{*}(x)\,x\cdot \left(-i\hbar {\frac {d}{dx}}\right)\,\psi (x)\,dx-\int _{-\infty }^{\infty }\psi ^{*}(x)\,\left(-i\hbar {\frac {d}{dx}}\right)\cdot x\,\psi (x)\,dx\\&=i\hbar \cdot \int _{-\infty }^{\infty }\psi ^{*}(x)\left[\left(-x\cdot {\frac {d\psi (x)}{dx}}\right)+{\frac {d(x\psi (x))}{dx}}\right]\,dx\\&=i\hbar \cdot \int _{-\infty }^{\infty }\psi ^{*}(x)\left[\left(-x\cdot {\frac {d\psi (x)}{dx}}\right)+\psi (x)+\left(x\cdot {\frac {d\psi (x)}{dx}}\right)\right]\,dx\\&=i\hbar \cdot \int _{-\infty }^{\infty }\psi ^{*}(x)\psi (x)\,dx\\&=i\hbar \cdot \int _{-\infty }^{\infty }|\psi (x)|^{2}\,dx\\&=i\hbar \end{aligned}}} Plugging this into the above inequalities, we get {\displaystyle \sigma _{x}^{2}\sigma _{p}^{2}\geq |\langle f\mid g\rangle |^{2}\geq \left({\frac {\langle f\mid g\rangle -\langle g\mid f\rangle }{2i}}\right)^{2}=\left({\frac {i\hbar }{2i}}\right)^{2}={\frac {\hbar ^{2}}{4}}} and taking the square root {\displaystyle \sigma _{x}\sigma _{p}\geq {\frac {\hbar }{2}}~,} with equality if and only if the functions f and g are linearly dependent (the minimum-uncertainty states turn out to be Gaussian wave packets). Note that the only physics involved in this proof was that ψ(x) and φ(p) are wave functions for position and momentum, which are Fourier transforms of one another. A similar result would hold for any pair of conjugate variables. In matrix mechanics, observables such as position and momentum are represented by self-adjoint operators. [12] When considering pairs of observables, an important quantity is the commutator. For a pair of operators Â and B̂, one defines their commutator as {\displaystyle [{\hat {A}},{\hat {B}}]={\hat {A}}{\hat {B}}-{\hat {B}}{\hat {A}}.} In the case of position and momentum, the commutator is the canonical commutation relation {\displaystyle [{\hat {x}},{\hat {p}}]=i\hbar .} The physical meaning of the non-commutativity can be understood by considering the effect of the commutator on position and momentum eigenstates. Let |ψ⟩ be a right eigenstate of position with a constant eigenvalue x0. By definition, this means that {\displaystyle {\hat {x}}|\psi \rangle =x_{0}|\psi \rangle .} Applying the commutator to |ψ⟩ yields {\displaystyle [{\hat {x}},{\hat {p}}]|\psi \rangle =({\hat {x}}{\hat {p}}-{\hat {p}}{\hat {x}})|\psi \rangle =({\hat {x}}-x_{0}{\hat {I}}){\hat {p}}\,|\psi \rangle =i\hbar |\psi \rangle ,} where Î is the identity operator.
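Before the argument continues with the momentum eigenstate, note that the canonical commutation relation itself is easy to probe numerically. A minimal sketch, assuming ħ = 1 and a central-difference discretization of the momentum operator (one possible choice for the illustration, not the article's method):

import numpy as np

# Hedged sketch: [X, P] acting on a smooth state, with P built from a central difference.
hbar = 1.0
N, dx = 800, 0.05
x = (np.arange(N) - N//2)*dx
X = np.diag(x).astype(complex)
D = (np.diag(np.ones(N-1), 1) - np.diag(np.ones(N-1), -1))/(2*dx)  # d/dx on the grid
P = -1j*hbar*D
C = X @ P - P @ X
psi = np.exp(-x**2)                                   # smooth, rapidly decaying test state
err = np.max(np.abs((C @ psi)[1:-1] - 1j*hbar*psi[1:-1]))
print(err)                                            # O(dx**2), here ~2e-3

On the grid the commutator reproduces iħ only up to discretization error and only away from the boundary, a small-scale echo of the operator-domain subtleties discussed later in the article.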
Suppose, for the sake of proof by contradiction , that | ψ ⟩ {\displaystyle |\psi \rangle } is also a right eigenstate of momentum, with constant eigenvalue p 0 . If this were true, then one could write ( x ^ − x 0 I ^ ) p ^ | ψ ⟩ = ( x ^ − x 0 I ^ ) p 0 | ψ ⟩ = ( x 0 I ^ − x 0 I ^ ) p 0 | ψ ⟩ = 0. {\displaystyle ({\hat {x}}-x_{0}{\hat {I}}){\hat {p}}\,|\psi \rangle =({\hat {x}}-x_{0}{\hat {I}})p_{0}\,|\psi \rangle =(x_{0}{\hat {I}}-x_{0}{\hat {I}})p_{0}\,|\psi \rangle =0.} On the other hand, the above canonical commutation relation requires that [ x ^ , p ^ ] | ψ ⟩ = i ℏ | ψ ⟩ ≠ 0. {\displaystyle [{\hat {x}},{\hat {p}}]|\psi \rangle =i\hbar |\psi \rangle \neq 0.} This implies that no quantum state can simultaneously be both a position and a momentum eigenstate. When a state is measured, it is projected onto an eigenstate in the basis of the relevant observable. For example, if a particle's position is measured, then the state amounts to a position eigenstate. This means that the state is not a momentum eigenstate, however, but rather it can be represented as a sum of multiple momentum basis eigenstates. In other words, the momentum must be less precise. This precision may be quantified by the standard deviations, σ x = ⟨ x ^ 2 ⟩ − ⟨ x ^ ⟩ 2 {\displaystyle \sigma _{x}={\sqrt {\langle {\hat {x}}^{2}\rangle -\langle {\hat {x}}\rangle ^{2}}}} σ p = ⟨ p ^ 2 ⟩ − ⟨ p ^ ⟩ 2 . {\displaystyle \sigma _{p}={\sqrt {\langle {\hat {p}}^{2}\rangle -\langle {\hat {p}}\rangle ^{2}}}.} As in the wave mechanics interpretation above, one sees a tradeoff between the respective precisions of the two, quantified by the uncertainty principle. Consider a one-dimensional quantum harmonic oscillator. It is possible to express the position and momentum operators in terms of the creation and annihilation operators : x ^ = ℏ 2 m ω ( a + a † ) {\displaystyle {\hat {x}}={\sqrt {\frac {\hbar }{2m\omega }}}(a+a^{\dagger })} p ^ = i m ω ℏ 2 ( a † − a ) . {\displaystyle {\hat {p}}=i{\sqrt {\frac {m\omega \hbar }{2}}}(a^{\dagger }-a).} Using the standard rules for creation and annihilation operators on the energy eigenstates, a † | n ⟩ = n + 1 | n + 1 ⟩ {\displaystyle a^{\dagger }|n\rangle ={\sqrt {n+1}}|n+1\rangle } a | n ⟩ = n | n − 1 ⟩ , {\displaystyle a|n\rangle ={\sqrt {n}}|n-1\rangle ,} the variances may be computed directly, σ x 2 = ℏ m ω ( n + 1 2 ) {\displaystyle \sigma _{x}^{2}={\frac {\hbar }{m\omega }}\left(n+{\frac {1}{2}}\right)} σ p 2 = ℏ m ω ( n + 1 2 ) . {\displaystyle \sigma _{p}^{2}=\hbar m\omega \left(n+{\frac {1}{2}}\right)\,.} The product of these standard deviations is then σ x σ p = ℏ ( n + 1 2 ) ≥ ℏ 2 . {\displaystyle \sigma _{x}\sigma _{p}=\hbar \left(n+{\frac {1}{2}}\right)\geq {\frac {\hbar }{2}}.~} In particular, the above Kennard bound [ 6 ] is saturated for the ground state n =0 , for which the probability density is just the normal distribution . In a quantum harmonic oscillator of characteristic angular frequency ω , place a state that is offset from the bottom of the potential by some displacement x 0 as ψ ( x ) = ( m Ω π ℏ ) 1 / 4 exp ⁡ ( − m Ω ( x − x 0 ) 2 2 ℏ ) , {\displaystyle \psi (x)=\left({\frac {m\Omega }{\pi \hbar }}\right)^{1/4}\exp {\left(-{\frac {m\Omega (x-x_{0})^{2}}{2\hbar }}\right)},} where Ω describes the width of the initial state but need not be the same as ω . Through integration over the propagator , we can solve for the full time-dependent solution. 
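The ladder-operator variances quoted above can be confirmed in a truncated Fock basis (the displaced-state evolution is taken up next). A minimal sketch, assuming ħ = m = ω = 1 and an illustrative truncation dimension:

import numpy as np

# Hedged sketch: sigma_x*sigma_p = hbar*(n + 1/2) for oscillator energy eigenstates.
hbar = m = omega = 1.0
dim = 60                                              # truncation; keep n well below dim
a = np.diag(np.sqrt(np.arange(1, dim)), 1)            # annihilation operator
ad = a.conj().T                                       # creation operator
xop = np.sqrt(hbar/(2*m*omega))*(a + ad)
pop = 1j*np.sqrt(m*omega*hbar/2)*(ad - a)
for n in (0, 1, 5):
    v = np.zeros(dim); v[n] = 1.0                     # energy eigenstate |n>
    var_x = (v @ xop @ xop @ v).real - (v @ xop @ v).real**2
    var_p = (v @ pop @ pop @ v).real - (v @ pop @ v).real**2
    print(n, np.sqrt(var_x*var_p), hbar*(n + 0.5))    # the two numbers agree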
After many cancelations, the probability densities reduce to | Ψ ( x , t ) | 2 ∼ N ( x 0 cos ⁡ ( ω t ) , ℏ 2 m Ω ( cos 2 ⁡ ( ω t ) + Ω 2 ω 2 sin 2 ⁡ ( ω t ) ) ) {\displaystyle |\Psi (x,t)|^{2}\sim {\mathcal {N}}\left(x_{0}\cos {(\omega t)},{\frac {\hbar }{2m\Omega }}\left(\cos ^{2}(\omega t)+{\frac {\Omega ^{2}}{\omega ^{2}}}\sin ^{2}{(\omega t)}\right)\right)} | Φ ( p , t ) | 2 ∼ N ( − m x 0 ω sin ⁡ ( ω t ) , ℏ m Ω 2 ( cos 2 ⁡ ( ω t ) + ω 2 Ω 2 sin 2 ⁡ ( ω t ) ) ) , {\displaystyle |\Phi (p,t)|^{2}\sim {\mathcal {N}}\left(-mx_{0}\omega \sin(\omega t),{\frac {\hbar m\Omega }{2}}\left(\cos ^{2}{(\omega t)}+{\frac {\omega ^{2}}{\Omega ^{2}}}\sin ^{2}{(\omega t)}\right)\right),} where we have used the notation N ( μ , σ 2 ) {\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})} to denote a normal distribution of mean μ and variance σ 2 . Copying the variances above and applying trigonometric identities , we can write the product of the standard deviations as σ x σ p = ℏ 2 ( cos 2 ⁡ ( ω t ) + Ω 2 ω 2 sin 2 ⁡ ( ω t ) ) ( cos 2 ⁡ ( ω t ) + ω 2 Ω 2 sin 2 ⁡ ( ω t ) ) = ℏ 4 3 + 1 2 ( Ω 2 ω 2 + ω 2 Ω 2 ) − ( 1 2 ( Ω 2 ω 2 + ω 2 Ω 2 ) − 1 ) cos ⁡ ( 4 ω t ) {\displaystyle {\begin{aligned}\sigma _{x}\sigma _{p}&={\frac {\hbar }{2}}{\sqrt {\left(\cos ^{2}{(\omega t)}+{\frac {\Omega ^{2}}{\omega ^{2}}}\sin ^{2}{(\omega t)}\right)\left(\cos ^{2}{(\omega t)}+{\frac {\omega ^{2}}{\Omega ^{2}}}\sin ^{2}{(\omega t)}\right)}}\\&={\frac {\hbar }{4}}{\sqrt {3+{\frac {1}{2}}\left({\frac {\Omega ^{2}}{\omega ^{2}}}+{\frac {\omega ^{2}}{\Omega ^{2}}}\right)-\left({\frac {1}{2}}\left({\frac {\Omega ^{2}}{\omega ^{2}}}+{\frac {\omega ^{2}}{\Omega ^{2}}}\right)-1\right)\cos {(4\omega t)}}}\end{aligned}}} From the relations Ω 2 ω 2 + ω 2 Ω 2 ≥ 2 , | cos ⁡ ( 4 ω t ) | ≤ 1 , {\displaystyle {\frac {\Omega ^{2}}{\omega ^{2}}}+{\frac {\omega ^{2}}{\Omega ^{2}}}\geq 2,\quad |\cos(4\omega t)|\leq 1,} we can conclude the following (the right most equality holds only when Ω = ω ): σ x σ p ≥ ℏ 4 3 + 1 2 ( Ω 2 ω 2 + ω 2 Ω 2 ) − ( 1 2 ( Ω 2 ω 2 + ω 2 Ω 2 ) − 1 ) = ℏ 2 . {\displaystyle \sigma _{x}\sigma _{p}\geq {\frac {\hbar }{4}}{\sqrt {3+{\frac {1}{2}}\left({\frac {\Omega ^{2}}{\omega ^{2}}}+{\frac {\omega ^{2}}{\Omega ^{2}}}\right)-\left({\frac {1}{2}}\left({\frac {\Omega ^{2}}{\omega ^{2}}}+{\frac {\omega ^{2}}{\Omega ^{2}}}\right)-1\right)}}={\frac {\hbar }{2}}.} A coherent state is a right eigenstate of the annihilation operator , a ^ | α ⟩ = α | α ⟩ , {\displaystyle {\hat {a}}|\alpha \rangle =\alpha |\alpha \rangle ,} which may be represented in terms of Fock states as | α ⟩ = e − | α | 2 2 ∑ n = 0 ∞ α n n ! | n ⟩ {\displaystyle |\alpha \rangle =e^{-{|\alpha |^{2} \over 2}}\sum _{n=0}^{\infty }{\alpha ^{n} \over {\sqrt {n!}}}|n\rangle } In the picture where the coherent state is a massive particle in a quantum harmonic oscillator, the position and momentum operators may be expressed in terms of the annihilation operators in the same formulas above and used to calculate the variances, σ x 2 = ℏ 2 m ω , {\displaystyle \sigma _{x}^{2}={\frac {\hbar }{2m\omega }},} σ p 2 = ℏ m ω 2 . {\displaystyle \sigma _{p}^{2}={\frac {\hbar m\omega }{2}}.} Therefore, every coherent state saturates the Kennard bound σ x σ p = ℏ 2 m ω ℏ m ω 2 = ℏ 2 . {\displaystyle \sigma _{x}\sigma _{p}={\sqrt {\frac {\hbar }{2m\omega }}}\,{\sqrt {\frac {\hbar m\omega }{2}}}={\frac {\hbar }{2}}.} with position and momentum each contributing an amount ℏ / 2 {\textstyle {\sqrt {\hbar /2}}} in a "balanced" way. 
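The claim that coherent states saturate the Kennard bound can be checked the same way. A hedged sketch (again ħ = m = ω = 1; the value of α and the truncation are arbitrary choices of the illustration):

import numpy as np

# Hedged sketch: a coherent state |alpha> gives sigma_x*sigma_p = hbar/2.
hbar = m = omega = 1.0
dim, alpha = 80, 1.5 + 0.5j
a = np.diag(np.sqrt(np.arange(1, dim)), 1)
ad = a.conj().T
xop = np.sqrt(hbar/(2*m*omega))*(a + ad)
pop = 1j*np.sqrt(m*omega*hbar/2)*(ad - a)
c = np.zeros(dim, complex)
c[0] = np.exp(-abs(alpha)**2/2)
for k in range(1, dim):
    c[k] = c[k-1]*alpha/np.sqrt(k)                    # e^{-|a|^2/2} alpha^n / sqrt(n!)
def ev(M): return (c.conj() @ M @ c).real
sx = np.sqrt(ev(xop @ xop) - ev(xop)**2)
sp = np.sqrt(ev(pop @ pop) - ev(pop)**2)
print(sx*sp)                                          # ~0.5 = hbar/2 for any alpha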
Moreover, every squeezed coherent state also saturates the Kennard bound although the individual contributions of position and momentum need not be balanced in general. Consider a particle in a one-dimensional box of length L {\displaystyle L} . The eigenfunctions in position and momentum space are ψ n ( x , t ) = { A sin ⁡ ( k n x ) e − i ω n t , 0 < x < L , 0 , otherwise, {\displaystyle \psi _{n}(x,t)={\begin{cases}A\sin(k_{n}x)\mathrm {e} ^{-\mathrm {i} \omega _{n}t},&0<x<L,\\0,&{\text{otherwise,}}\end{cases}}} and φ n ( p , t ) = π L ℏ n ( 1 − ( − 1 ) n e − i k L ) e − i ω n t π 2 n 2 − k 2 L 2 , {\displaystyle \varphi _{n}(p,t)={\sqrt {\frac {\pi L}{\hbar }}}\,\,{\frac {n\left(1-(-1)^{n}e^{-ikL}\right)e^{-i\omega _{n}t}}{\pi ^{2}n^{2}-k^{2}L^{2}}},} where ω n = π 2 ℏ n 2 8 L 2 m {\textstyle \omega _{n}={\frac {\pi ^{2}\hbar n^{2}}{8L^{2}m}}} and we have used the de Broglie relation p = ℏ k {\displaystyle p=\hbar k} . The variances of x {\displaystyle x} and p {\displaystyle p} can be calculated explicitly: σ x 2 = L 2 12 ( 1 − 6 n 2 π 2 ) {\displaystyle \sigma _{x}^{2}={\frac {L^{2}}{12}}\left(1-{\frac {6}{n^{2}\pi ^{2}}}\right)} σ p 2 = ( ℏ n π L ) 2 . {\displaystyle \sigma _{p}^{2}=\left({\frac {\hbar n\pi }{L}}\right)^{2}.} The product of the standard deviations is therefore σ x σ p = ℏ 2 n 2 π 2 3 − 2 . {\displaystyle \sigma _{x}\sigma _{p}={\frac {\hbar }{2}}{\sqrt {{\frac {n^{2}\pi ^{2}}{3}}-2}}.} For all n = 1 , 2 , 3 , … {\displaystyle n=1,\,2,\,3,\,\ldots } , the quantity n 2 π 2 3 − 2 {\textstyle {\sqrt {{\frac {n^{2}\pi ^{2}}{3}}-2}}} is greater than 1, so the uncertainty principle is never violated. For numerical concreteness, the smallest value occurs when n = 1 {\displaystyle n=1} , in which case σ x σ p = ℏ 2 π 2 3 − 2 ≈ 0.568 ℏ > ℏ 2 . {\displaystyle \sigma _{x}\sigma _{p}={\frac {\hbar }{2}}{\sqrt {{\frac {\pi ^{2}}{3}}-2}}\approx 0.568\hbar >{\frac {\hbar }{2}}.} Assume a particle initially has a momentum space wave function described by a normal distribution around some constant momentum p 0 according to φ ( p ) = ( x 0 ℏ π ) 1 / 2 exp ⁡ ( − x 0 2 ( p − p 0 ) 2 2 ℏ 2 ) , {\displaystyle \varphi (p)=\left({\frac {x_{0}}{\hbar {\sqrt {\pi }}}}\right)^{1/2}\exp \left({\frac {-x_{0}^{2}(p-p_{0})^{2}}{2\hbar ^{2}}}\right),} where we have introduced a reference scale x 0 = ℏ / m ω 0 {\textstyle x_{0}={\sqrt {\hbar /m\omega _{0}}}} , with ω 0 > 0 {\displaystyle \omega _{0}>0} describing the width of the distribution—cf. nondimensionalization . If the state is allowed to evolve in free space, then the time-dependent momentum and position space wave functions are Φ ( p , t ) = ( x 0 ℏ π ) 1 / 2 exp ⁡ ( − x 0 2 ( p − p 0 ) 2 2 ℏ 2 − i p 2 t 2 m ℏ ) , {\displaystyle \Phi (p,t)=\left({\frac {x_{0}}{\hbar {\sqrt {\pi }}}}\right)^{1/2}\exp \left({\frac {-x_{0}^{2}(p-p_{0})^{2}}{2\hbar ^{2}}}-{\frac {ip^{2}t}{2m\hbar }}\right),} Ψ ( x , t ) = ( 1 x 0 π ) 1 / 2 e − x 0 2 p 0 2 / 2 ℏ 2 1 + i ω 0 t exp ⁡ ( − ( x − i x 0 2 p 0 / ℏ ) 2 2 x 0 2 ( 1 + i ω 0 t ) ) . {\displaystyle \Psi (x,t)=\left({\frac {1}{x_{0}{\sqrt {\pi }}}}\right)^{1/2}{\frac {e^{-x_{0}^{2}p_{0}^{2}/2\hbar ^{2}}}{\sqrt {1+i\omega _{0}t}}}\,\exp \left(-{\frac {(x-ix_{0}^{2}p_{0}/\hbar )^{2}}{2x_{0}^{2}(1+i\omega _{0}t)}}\right).} Since ⟨ p ( t ) ⟩ = p 0 {\displaystyle \langle p(t)\rangle =p_{0}} and σ p ( t ) = ℏ / ( 2 x 0 ) {\displaystyle \sigma _{p}(t)=\hbar /({\sqrt {2}}x_{0})} , this can be interpreted as a particle moving along with constant momentum at arbitrarily high precision. 
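Pausing on the box example before the free-particle evolution continues: the closed-form product above is easy to reproduce by direct numerical integration. A hedged sketch with ħ = L = 1 (the grid resolution is an arbitrary choice):

import numpy as np

# Hedged sketch: particle-in-a-box eigenstates versus the closed form.
hbar, L = 1.0, 1.0
x = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]
for n in (1, 2, 3):
    psi = np.sqrt(2/L)*np.sin(n*np.pi*x/L)
    var_x = np.sum(x**2*psi**2)*dx - (np.sum(x*psi**2)*dx)**2
    dpsi = np.gradient(psi, dx)
    var_p = hbar**2*np.sum(dpsi**2)*dx                # <p> = 0 for these real standing waves
    print(n, np.sqrt(var_x*var_p),
          (hbar/2)*np.sqrt(n**2*np.pi**2/3 - 2))      # n = 1 gives ~0.568*hbar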
On the other hand, the standard deviation of the position is σ x = x 0 2 1 + ω 0 2 t 2 {\displaystyle \sigma _{x}={\frac {x_{0}}{\sqrt {2}}}{\sqrt {1+\omega _{0}^{2}t^{2}}}} such that the uncertainty product can only increase with time as σ x ( t ) σ p ( t ) = ℏ 2 1 + ω 0 2 t 2 {\displaystyle \sigma _{x}(t)\sigma _{p}(t)={\frac {\hbar }{2}}{\sqrt {1+\omega _{0}^{2}t^{2}}}} Starting with Kennard's derivation of position-momentum uncertainty, Howard Percy Robertson developed [ 13 ] [ 1 ] a formulation for arbitrary Hermitian operators O ^ {\displaystyle {\hat {\mathcal {O}}}} expressed in terms of their standard deviation σ O = ⟨ O ^ 2 ⟩ − ⟨ O ^ ⟩ 2 , {\displaystyle \sigma _{\mathcal {O}}={\sqrt {\langle {\hat {\mathcal {O}}}^{2}\rangle -\langle {\hat {\mathcal {O}}}\rangle ^{2}}},} where the brackets ⟨ O ^ ⟩ {\displaystyle \langle {\hat {\mathcal {O}}}\rangle } indicate an expectation value of the observable represented by operator O ^ {\displaystyle {\hat {\mathcal {O}}}} . For a pair of operators A ^ {\displaystyle {\hat {A}}} and B ^ {\displaystyle {\hat {B}}} , define their commutator as [ A ^ , B ^ ] = A ^ B ^ − B ^ A ^ , {\displaystyle [{\hat {A}},{\hat {B}}]={\hat {A}}{\hat {B}}-{\hat {B}}{\hat {A}},} and the Robertson uncertainty relation is given by [ 14 ] σ A σ B ≥ | 1 2 i ⟨ [ A ^ , B ^ ] ⟩ | = 1 2 | ⟨ [ A ^ , B ^ ] ⟩ | . {\displaystyle \sigma _{A}\sigma _{B}\geq \left|{\frac {1}{2i}}\langle [{\hat {A}},{\hat {B}}]\rangle \right|={\frac {1}{2}}\left|\langle [{\hat {A}},{\hat {B}}]\rangle \right|.} Erwin Schrödinger [ 15 ] showed how to allow for correlation between the operators, giving a stronger inequality, known as the Robertson–Schrödinger uncertainty relation , [ 16 ] [ 1 ] σ A 2 σ B 2 ≥ | 1 2 ⟨ { A ^ , B ^ } ⟩ − ⟨ A ^ ⟩ ⟨ B ^ ⟩ | 2 + | 1 2 i ⟨ [ A ^ , B ^ ] ⟩ | 2 , {\displaystyle \sigma _{A}^{2}\sigma _{B}^{2}\geq \left|{\frac {1}{2}}\langle \{{\hat {A}},{\hat {B}}\}\rangle -\langle {\hat {A}}\rangle \langle {\hat {B}}\rangle \right|^{2}+\left|{\frac {1}{2i}}\langle [{\hat {A}},{\hat {B}}]\rangle \right|^{2},} where the anticommutator, { A ^ , B ^ } = A ^ B ^ + B ^ A ^ {\displaystyle \{{\hat {A}},{\hat {B}}\}={\hat {A}}{\hat {B}}+{\hat {B}}{\hat {A}}} is used. The derivation shown here incorporates and builds off of those shown in Robertson, [ 13 ] Schrödinger [ 16 ] and standard textbooks such as Griffiths. [ 17 ] : 138 For any Hermitian operator A ^ {\displaystyle {\hat {A}}} , based upon the definition of variance , we have σ A 2 = ⟨ ( A ^ − ⟨ A ^ ⟩ ) Ψ | ( A ^ − ⟨ A ^ ⟩ ) Ψ ⟩ . {\displaystyle \sigma _{A}^{2}=\langle ({\hat {A}}-\langle {\hat {A}}\rangle )\Psi |({\hat {A}}-\langle {\hat {A}}\rangle )\Psi \rangle .} we let | f ⟩ = | ( A ^ − ⟨ A ^ ⟩ ) Ψ ⟩ {\displaystyle |f\rangle =|({\hat {A}}-\langle {\hat {A}}\rangle )\Psi \rangle } and thus σ A 2 = ⟨ f ∣ f ⟩ . {\displaystyle \sigma _{A}^{2}=\langle f\mid f\rangle \,.} Similarly, for any other Hermitian operator B ^ {\displaystyle {\hat {B}}} in the same state σ B 2 = ⟨ ( B ^ − ⟨ B ^ ⟩ ) Ψ | ( B ^ − ⟨ B ^ ⟩ ) Ψ ⟩ = ⟨ g ∣ g ⟩ {\displaystyle \sigma _{B}^{2}=\langle ({\hat {B}}-\langle {\hat {B}}\rangle )\Psi |({\hat {B}}-\langle {\hat {B}}\rangle )\Psi \rangle =\langle g\mid g\rangle } for | g ⟩ = | ( B ^ − ⟨ B ^ ⟩ ) Ψ ⟩ . 
{\displaystyle |g\rangle =|({\hat {B}}-\langle {\hat {B}}\rangle )\Psi \rangle .} The product of the two deviations can thus be expressed as {\displaystyle \sigma _{A}^{2}\sigma _{B}^{2}=\langle f\mid f\rangle \langle g\mid g\rangle .} (1) In order to relate the two vectors |f⟩ and |g⟩, we use the Cauchy–Schwarz inequality, [18] which is defined as {\displaystyle \langle f\mid f\rangle \langle g\mid g\rangle \geq |\langle f\mid g\rangle |^{2},} and thus Equation (1) can be written as {\displaystyle \sigma _{A}^{2}\sigma _{B}^{2}\geq |\langle f\mid g\rangle |^{2}.} (2) Since ⟨f∣g⟩ is in general a complex number, we use the fact that the modulus squared of any complex number z is defined as |z|² = zz*, where z* is the complex conjugate of z. The modulus squared can also be expressed as {\displaystyle |z|^{2}=\left({\frac {z+z^{*}}{2}}\right)^{2}+\left({\frac {z-z^{*}}{2i}}\right)^{2}.} (3) We let z = ⟨f∣g⟩ and z* = ⟨g∣f⟩ and substitute these into the equation above to get {\displaystyle |\langle f\mid g\rangle |^{2}=\left({\frac {\langle f\mid g\rangle +\langle g\mid f\rangle }{2}}\right)^{2}+\left({\frac {\langle f\mid g\rangle -\langle g\mid f\rangle }{2i}}\right)^{2}.} (4) The inner product ⟨f∣g⟩ is written out explicitly as {\displaystyle \langle f\mid g\rangle =\langle ({\hat {A}}-\langle {\hat {A}}\rangle )\Psi |({\hat {B}}-\langle {\hat {B}}\rangle )\Psi \rangle ,} and using the fact that Â and B̂ are Hermitian operators, we find {\displaystyle {\begin{aligned}\langle f\mid g\rangle &=\langle \Psi |({\hat {A}}-\langle {\hat {A}}\rangle )({\hat {B}}-\langle {\hat {B}}\rangle )\Psi \rangle \\[4pt]&=\langle \Psi \mid ({\hat {A}}{\hat {B}}-{\hat {A}}\langle {\hat {B}}\rangle -{\hat {B}}\langle {\hat {A}}\rangle +\langle {\hat {A}}\rangle \langle {\hat {B}}\rangle )\Psi \rangle \\[4pt]&=\langle \Psi \mid {\hat {A}}{\hat {B}}\Psi \rangle -\langle \Psi \mid {\hat {A}}\langle {\hat {B}}\rangle \Psi \rangle -\langle \Psi \mid {\hat {B}}\langle {\hat {A}}\rangle \Psi \rangle +\langle \Psi \mid \langle {\hat {A}}\rangle \langle {\hat {B}}\rangle \Psi \rangle \\[4pt]&=\langle {\hat {A}}{\hat {B}}\rangle -\langle {\hat {A}}\rangle \langle {\hat {B}}\rangle -\langle {\hat {A}}\rangle \langle {\hat {B}}\rangle +\langle {\hat {A}}\rangle \langle {\hat {B}}\rangle \\[4pt]&=\langle {\hat {A}}{\hat {B}}\rangle -\langle {\hat {A}}\rangle \langle {\hat {B}}\rangle .\end{aligned}}} Similarly it can be shown that {\displaystyle \langle g\mid f\rangle =\langle {\hat {B}}{\hat {A}}\rangle -\langle {\hat {A}}\rangle \langle {\hat {B}}\rangle .} Thus, we have {\displaystyle \langle f\mid g\rangle -\langle g\mid f\rangle =\langle {\hat {A}}{\hat {B}}\rangle -\langle {\hat {A}}\rangle \langle {\hat {B}}\rangle -\langle {\hat {B}}{\hat {A}}\rangle +\langle {\hat {A}}\rangle \langle {\hat {B}}\rangle =\langle [{\hat {A}},{\hat {B}}]\rangle } and {\displaystyle \langle f\mid g\rangle +\langle g\mid f\rangle =\langle {\hat {A}}{\hat {B}}\rangle -\langle {\hat {A}}\rangle \langle {\hat {B}}\rangle +\langle {\hat {B}}{\hat {A}}\rangle -\langle {\hat {A}}\rangle \langle {\hat {B}}\rangle =\langle \{{\hat {A}},{\hat {B}}\}\rangle -2\langle {\hat {A}}\rangle \langle {\hat {B}}\rangle .}
We now substitute the two equations above back into Eq. (4) and get {\displaystyle |\langle f\mid g\rangle |^{2}={\Big (}{\frac {1}{2}}\langle \{{\hat {A}},{\hat {B}}\}\rangle -\langle {\hat {A}}\rangle \langle {\hat {B}}\rangle {\Big )}^{2}+{\Big (}{\frac {1}{2i}}\langle [{\hat {A}},{\hat {B}}]\rangle {\Big )}^{2}\,.} Substituting the above into Equation (2) we get the Schrödinger uncertainty relation {\displaystyle \sigma _{A}\sigma _{B}\geq {\sqrt {{\Big (}{\frac {1}{2}}\langle \{{\hat {A}},{\hat {B}}\}\rangle -\langle {\hat {A}}\rangle \langle {\hat {B}}\rangle {\Big )}^{2}+{\Big (}{\frac {1}{2i}}\langle [{\hat {A}},{\hat {B}}]\rangle {\Big )}^{2}}}.} This proof has an issue [19] related to the domains of the operators involved. For the proof to make sense, the vector {\displaystyle {\hat {B}}|\Psi \rangle } has to be in the domain of the unbounded operator Â, which is not always the case. In fact, the Robertson uncertainty relation is false if Â is an angle variable and B̂ is the derivative with respect to this variable. In this example, the commutator is a nonzero constant, just as in the Heisenberg uncertainty relation, and yet there are states where the product of the uncertainties is zero. [20] (See the counterexample section below.) This issue can be overcome by using a variational method for the proof, [21] [22] or by working with an exponentiated version of the canonical commutation relations. [20] Note that in the general form of the Robertson–Schrödinger uncertainty relation, there is no need to assume that the operators Â and B̂ are self-adjoint operators. It suffices to assume that they are merely symmetric operators. (The distinction between these two notions is generally glossed over in the physics literature, where the term Hermitian is used for either or both classes of operators. See Chapter 9 of Hall's book [23] for a detailed discussion of this important but technical distinction.)
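Because a finite-dimensional Hermitian matrix is a bounded, everywhere-defined operator, the domain issues above disappear and the Robertson–Schrödinger relation can be tested directly on random matrices. A hedged Python sketch (dimension and seed arbitrary; dropping the covariance term recovers the weaker Robertson bound):

import numpy as np

# Hedged sketch: var(A)var(B) >= cov_s^2 + (<[A,B]>/2i)^2 for random Hermitian A, B.
rng = np.random.default_rng(1)
d = 6
def rand_herm(d):
    M = rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d))
    return (M + M.conj().T)/2
A, B = rand_herm(d), rand_herm(d)
v = rng.normal(size=d) + 1j*rng.normal(size=d)
v /= np.linalg.norm(v)                                # random pure state
def ev(M): return v.conj() @ M @ v
varA = (ev(A @ A) - ev(A)**2).real
varB = (ev(B @ B) - ev(B)**2).real
cov = (0.5*ev(A @ B + B @ A) - ev(A)*ev(B)).real      # symmetrized covariance
comm = (ev(A @ B - B @ A)/2j).real                    # <[A,B]>/(2i), real for Hermitian A, B
print(varA*varB >= cov**2 + comm**2 - 1e-10)          # True for every state and pair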
In the phase space formulation of quantum mechanics, the Robertson–Schrödinger relation follows from a positivity condition on a real star-square function. Given a Wigner function {\displaystyle W(x,p)} with star product ★ and a function f, the following is generally true: [24] {\displaystyle \langle f^{*}\star f\rangle =\int (f^{*}\star f)\,W(x,p)\,dx\,dp\geq 0~.} Choosing {\displaystyle f=a+bx+cp}, we arrive at {\displaystyle \langle f^{*}\star f\rangle ={\begin{bmatrix}a^{*}&b^{*}&c^{*}\end{bmatrix}}{\begin{bmatrix}1&\langle x\rangle &\langle p\rangle \\\langle x\rangle &\langle x\star x\rangle &\langle x\star p\rangle \\\langle p\rangle &\langle p\star x\rangle &\langle p\star p\rangle \end{bmatrix}}{\begin{bmatrix}a\\b\\c\end{bmatrix}}\geq 0~.} Since this positivity condition is true for all a, b, and c, it follows that all the eigenvalues of the matrix are non-negative. The non-negative eigenvalues then imply a corresponding non-negativity condition on the determinant, {\displaystyle \det {\begin{bmatrix}1&\langle x\rangle &\langle p\rangle \\\langle x\rangle &\langle x\star x\rangle &\langle x\star p\rangle \\\langle p\rangle &\langle p\star x\rangle &\langle p\star p\rangle \end{bmatrix}}=\det {\begin{bmatrix}1&\langle x\rangle &\langle p\rangle \\\langle x\rangle &\langle x^{2}\rangle &\left\langle xp+{\frac {i\hbar }{2}}\right\rangle \\\langle p\rangle &\left\langle xp-{\frac {i\hbar }{2}}\right\rangle &\langle p^{2}\rangle \end{bmatrix}}\geq 0~,} or, explicitly, after algebraic manipulation, {\displaystyle \sigma _{x}^{2}\sigma _{p}^{2}=\left(\langle x^{2}\rangle -\langle x\rangle ^{2}\right)\left(\langle p^{2}\rangle -\langle p\rangle ^{2}\right)\geq \left(\langle xp\rangle -\langle x\rangle \langle p\rangle \right)^{2}+{\frac {\hbar ^{2}}{4}}~.} Since the Robertson and Schrödinger relations are for general operators, the relations can be applied to any two observables to obtain specific uncertainty relations. A few of the most common relations found in the literature are given below. The derivation of the Robertson inequality for operators Â and B̂ requires ÂB̂ψ and B̂Âψ to be defined. There are quantum systems where these conditions are not valid. [27] One example is a quantum particle on a ring, where the wave function depends on an angular variable θ in the interval [0, 2π]. Define "position" and "momentum" operators Â and B̂ by {\displaystyle {\hat {A}}\psi (\theta )=\theta \psi (\theta ),\quad \theta \in [0,2\pi ],} and {\displaystyle {\hat {B}}\psi =-i\hbar {\frac {d\psi }{d\theta }},} with periodic boundary conditions on B̂. The definition of Â depends on the choice of the θ range, here taken to run from 0 to 2π. These operators satisfy the usual commutation relations for position and momentum operators, {\displaystyle [{\hat {A}},{\hat {B}}]=i\hbar }. More precisely, {\displaystyle {\hat {A}}{\hat {B}}\psi -{\hat {B}}{\hat {A}}\psi =i\hbar \psi } whenever both ÂB̂ψ and B̂Âψ are defined, and the space of such ψ is a dense subspace of the quantum Hilbert space. [28]
Now let ψ be any of the eigenstates of B̂, which are given by {\displaystyle \psi (\theta )=e^{in\theta }} for an integer n. These states are normalizable, unlike the eigenstates of the momentum operator on the line. Also the operator Â is bounded, since θ ranges over a bounded interval. Thus, in the state ψ, the uncertainty of B is zero and the uncertainty of A is finite, so that {\displaystyle \sigma _{A}\sigma _{B}=0.} The Robertson uncertainty principle does not apply in this case: ψ is not in the domain of the operator B̂Â, since multiplication by θ disrupts the periodic boundary conditions imposed on B̂. [20] For the usual position and momentum operators X̂ and P̂ on the real line, no such counterexamples can occur. As long as σx and σp are defined in the state ψ, the Heisenberg uncertainty principle holds, even if ψ fails to be in the domain of X̂P̂ or of P̂X̂. [29] The Robertson–Schrödinger uncertainty can be improved by noting that it must hold for all components ϱk in any decomposition of the density matrix given as {\displaystyle \varrho =\sum _{k}p_{k}\varrho _{k}.} Here the probabilities satisfy pk ≥ 0 and {\displaystyle \sum _{k}p_{k}=1}. Then, using the relation {\displaystyle \sum _{k}a_{k}\sum _{k}b_{k}\geq \left(\sum _{k}{\sqrt {a_{k}b_{k}}}\right)^{2}} for ak, bk ≥ 0, it follows that [30] {\displaystyle \sigma _{A}^{2}\sigma _{B}^{2}\geq \left[\sum _{k}p_{k}L(\varrho _{k})\right]^{2},} where the function in the bound is defined as {\displaystyle L(\varrho )={\sqrt {\left|{\frac {1}{2}}\operatorname {tr} (\varrho \{A,B\})-\operatorname {tr} (\varrho A)\operatorname {tr} (\varrho B)\right|^{2}+\left|{\frac {1}{2i}}\operatorname {tr} (\varrho [A,B])\right|^{2}}}.} The above relation very often has a bound larger than that of the original Robertson–Schrödinger uncertainty relation: the Robertson–Schrödinger bound is evaluated on the mixed components ϱk of the quantum state rather than on the state itself, and the resulting bounds are averaged. The following expression is stronger than the Robertson–Schrödinger uncertainty relation {\displaystyle \sigma _{A}^{2}\sigma _{B}^{2}\geq \left[\max _{p_{k},\varrho _{k}}\sum _{k}p_{k}L(\varrho _{k})\right]^{2},} where on the right-hand side there is a concave roof over the decompositions of the density matrix. The improved relation above is saturated by all single-qubit quantum states. [30]
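The ring counterexample above is also visible numerically. A hedged sketch (ħ = 1; the FFT derivative below enforces exactly the periodic boundary conditions that make B̂ self-adjoint on the ring):

import numpy as np

# Hedged sketch: for psi = e^{i n theta}, sigma_B = 0 while sigma_A stays finite.
hbar, N, n = 1.0, 1024, 3
theta = np.linspace(0.0, 2*np.pi, N, endpoint=False)
dth = theta[1] - theta[0]
psi = np.exp(1j*n*theta)/np.sqrt(2*np.pi)
k = 2*np.pi*np.fft.fftfreq(N, d=dth)                  # integer wavenumbers on the ring
def Bop(f): return -1j*hbar*np.fft.ifft(1j*k*np.fft.fft(f))
def ev(g): return (np.sum(np.conj(psi)*g)*dth).real
EB, EB2 = ev(Bop(psi)), ev(Bop(Bop(psi)))
EA, EA2 = ev(theta*psi), ev(theta**2*psi)
sA = np.sqrt(EA2 - EA**2)                             # pi/sqrt(3): |psi|^2 is uniform in angle
sB = np.sqrt(max(EB2 - EB**2, 0.0))                   # 0: psi is an eigenstate of B
print(sA, sB, sA*sB)                                  # product ~0 < hbar/2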
With similar arguments, one can derive a relation with a convex roof on the right-hand side [30] {\displaystyle \sigma _{A}^{2}F_{Q}[\varrho ,B]\geq 4\left[\min _{p_{k},\Psi _{k}}\sum _{k}p_{k}L(\vert \Psi _{k}\rangle \langle \Psi _{k}\vert )\right]^{2}} where {\displaystyle F_{Q}[\varrho ,B]} denotes the quantum Fisher information and the density matrix is decomposed into pure states as {\displaystyle \varrho =\sum _{k}p_{k}\vert \Psi _{k}\rangle \langle \Psi _{k}\vert .} The derivation takes advantage of the fact that the quantum Fisher information is the convex roof of the variance times four. [31] [32] A simpler inequality follows without a convex roof [33] {\displaystyle \sigma _{A}^{2}F_{Q}[\varrho ,B]\geq \vert \langle i[A,B]\rangle \vert ^{2},} which is stronger than the Heisenberg uncertainty relation, since for the quantum Fisher information we have {\displaystyle F_{Q}[\varrho ,B]\leq 4\sigma _{B}^{2},} while for pure states the equality holds. The Robertson–Schrödinger uncertainty relation can be trivial if the state of the system is chosen to be an eigenstate of one of the observables. The stronger uncertainty relations proved by Lorenzo Maccone and Arun K. Pati give non-trivial bounds on the sum of the variances for two incompatible observables. [34] (Earlier works on uncertainty relations formulated as the sum of variances include, e.g., Ref. [35] due to Yichen Huang.) For two non-commuting observables A and B the first stronger uncertainty relation is given by {\displaystyle \sigma _{A}^{2}+\sigma _{B}^{2}\geq \pm i\langle \Psi \mid [A,B]|\Psi \rangle +\mid \langle \Psi \mid (A\pm iB)\mid {\bar {\Psi }}\rangle |^{2},} where {\displaystyle \sigma _{A}^{2}=\langle \Psi |A^{2}|\Psi \rangle -\langle \Psi \mid A\mid \Psi \rangle ^{2}}, {\displaystyle \sigma _{B}^{2}=\langle \Psi |B^{2}|\Psi \rangle -\langle \Psi \mid B\mid \Psi \rangle ^{2}}, and |Ψ̄⟩ is a normalized vector that is orthogonal to the state of the system |Ψ⟩; one should choose the sign of {\displaystyle \pm i\langle \Psi \mid [A,B]\mid \Psi \rangle } to make this real quantity a positive number. The second stronger uncertainty relation is given by {\displaystyle \sigma _{A}^{2}+\sigma _{B}^{2}\geq {\frac {1}{2}}|\langle {\bar {\Psi }}_{A+B}\mid (A+B)\mid \Psi \rangle |^{2}} where {\displaystyle |{\bar {\Psi }}_{A+B}\rangle } is a state orthogonal to |Ψ⟩. The form of {\displaystyle |{\bar {\Psi }}_{A+B}\rangle } implies that the right-hand side of the new uncertainty relation is nonzero unless |Ψ⟩ is an eigenstate of (A + B). One may note that |Ψ⟩ can be an eigenstate of (A + B) without being an eigenstate of either A or B.
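The first Maccone–Pati bound is simple to check for a qubit, where the orthogonal state |Ψ̄⟩ is unique up to phase. A hedged sketch with A = σx, B = σy and an arbitrarily chosen state (both sign choices of the relation are verified; the discussion of when the bounds are trivial continues below):

import numpy as np

# Hedged sketch: sum-of-variances bound for two Pauli observables in a qubit state.
A = np.array([[0, 1], [1, 0]], dtype=complex)         # sigma_x
B = np.array([[0, -1j], [1j, 0]])                     # sigma_y
v = np.array([1.0, 0.6 + 0.3j], complex)
v /= np.linalg.norm(v)
vbar = np.array([-np.conj(v[1]), np.conj(v[0])])      # the state orthogonal to v
def ev(M): return v.conj() @ M @ v
varA = (ev(A @ A) - ev(A)**2).real
varB = (ev(B @ B) - ev(B)**2).real
comm = ev(A @ B - B @ A)                              # purely imaginary expectation
for s in (+1, -1):
    bound = (s*1j*comm).real + abs(v.conj() @ ((A + s*1j*B) @ vbar))**2
    print(varA + varB >= bound - 1e-12)               # True; the positive-sign bound is the useful one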
However, when |Ψ⟩ is an eigenstate of one of the two observables the Robertson–Schrödinger uncertainty relation becomes trivial, whereas the lower bound in the new relation is nonzero unless |Ψ⟩ is an eigenstate of both. An energy–time uncertainty relation like {\displaystyle \Delta E\Delta t\gtrsim \hbar /2} has a long, controversial history; the meaning of Δt and ΔE varies and different formulations have different arenas of validity. [36] However, one well-known application is both well established [37] [38] and experimentally verified: [39] [40] the connection between the lifetime of a resonance state, {\displaystyle \tau _{\sqrt {1/2}}}, and its energy width ΔE: {\displaystyle \tau _{\sqrt {1/2}}\Delta E=\pi \hbar /4.} In particle physics, widths from experimental fits to the Breit–Wigner energy distribution are used to characterize the lifetime of quasi-stable or decaying states. [41] An informal, heuristic meaning of the principle is the following: [42] A state that only exists for a short time cannot have a definite energy. To have a definite energy, the frequency of the state must be defined accurately, and this requires the state to persist for many cycles, the reciprocal of the required accuracy. For example, in spectroscopy, excited states have a finite lifetime. By the time–energy uncertainty principle, they do not have a definite energy, and, each time they decay, the energy they release is slightly different. The average energy of the outgoing photon has a peak at the theoretical energy of the state, but the distribution has a finite width called the natural linewidth. Fast-decaying states have a broad linewidth, while slow-decaying states have a narrow linewidth. [43] The same linewidth effect also makes it difficult to specify the rest mass of unstable, fast-decaying particles in particle physics. The faster the particle decays (the shorter its lifetime), the less certain is its mass (the larger the particle's width). The concept of "time" in quantum mechanics offers many challenges. [44] There is no quantum theory of time measurement; relativity is both fundamental to time and difficult to include in quantum mechanics. [36] While position and momentum are associated with a single particle, time is a system property: it has no operator needed for the Robertson–Schrödinger relation. [1] The mathematical treatment of stable and unstable quantum systems differs. [45] These factors combine to make energy–time uncertainty principles controversial. Three notions of "time" can be distinguished: [36] external, intrinsic, and observable. External or laboratory time is seen by the experimenter; intrinsic time is inferred by changes in dynamic variables, like the hands of a clock or the motion of a free particle; observable time concerns time as an observable, the measurement of time-separated events. An external-time energy–time uncertainty principle might say that measuring the energy of a quantum system to an accuracy ΔE requires a time interval Δt > h/ΔE. [38] However, Yakir Aharonov and David Bohm [46] [36] have shown that, in some quantum systems, energy can be measured accurately within an arbitrarily short time: external-time uncertainty principles are not universal.
Intrinsic time is the basis for several formulations of energy–time uncertainty relations, including the Mandelstam–Tamm relation discussed in the next section. A physical system with an intrinsic time closely matching the external laboratory time is called a "clock". [44]: 31 Observable time, measuring time between two events, remains a challenge for quantum theories; some progress has been made using positive operator-valued measure concepts. [36] In 1945, Leonid Mandelstam and Igor Tamm derived a non-relativistic time–energy uncertainty relation as follows. [47] [36] From Heisenberg mechanics, the generalized Ehrenfest theorem for an observable B without explicit time dependence, represented by a self-adjoint operator B̂, relates the time dependence of the average value of B̂ to the average of its commutator with the Hamiltonian: {\displaystyle {\frac {d\langle {\hat {B}}\rangle }{dt}}={\frac {i}{\hbar }}\langle [{\hat {H}},{\hat {B}}]\rangle .} The value of ⟨[Ĥ, B̂]⟩ is then substituted in the Robertson uncertainty relation for the energy operator Ĥ and B̂: {\displaystyle \sigma _{H}\sigma _{B}\geq \left|{\frac {1}{2i}}\langle [{\hat {H}},{\hat {B}}]\rangle \right|,} giving {\displaystyle \sigma _{H}{\frac {\sigma _{B}}{\left|{\frac {d\langle {\hat {B}}\rangle }{dt}}\right|}}\geq {\frac {\hbar }{2}}} (whenever the denominator is nonzero). While this is a universal result, it depends upon the observable chosen and on the fact that the deviations σH and σB are computed for a particular state. Identifying ΔE ≡ σH and the characteristic time {\displaystyle \tau _{B}\equiv {\frac {\sigma _{B}}{\left|{\frac {d\langle {\hat {B}}\rangle }{dt}}\right|}}} gives an energy–time relationship {\displaystyle \Delta E\tau _{B}\geq {\frac {\hbar }{2}}.} Although τB has the dimension of time, it is different from the time parameter t that enters the Schrödinger equation. This τB can be interpreted as the time for which the expectation value of the observable, ⟨B̂⟩, changes by an amount equal to one standard deviation. [48] The meaning of the time uncertainty thus differs from case to case, according to the observable and state used. Some formulations of quantum field theory use temporary electron–positron pairs, called virtual particles, in their calculations. The mass–energy and lifetime of these particles are related by the energy–time uncertainty relation. The energy of a quantum system is not known with enough precision to limit its behavior to a single, simple history. Thus the influence of all histories must be incorporated into quantum calculations, including those with much greater or much less energy than the mean of the measured/calculated energy distribution. The energy–time uncertainty principle does not temporarily violate conservation of energy; it does not imply that energy can be "borrowed" from the universe as long as it is "returned" within a short amount of time. [17]: 145 The energy of the universe is not an exactly known parameter at all times. [1] When events transpire at very short time intervals, there is uncertainty in the energy of these events.
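Before turning to harmonic analysis, the Mandelstam–Tamm relation can be exercised on the smallest nontrivial system. A hedged sketch (ħ = 1; the two-level Hamiltonian and B = σx are illustrative choices; for this combination the bound happens to be saturated):

import numpy as np

# Hedged sketch: sigma_H * tau_B = hbar/2 for a two-level superposition.
hbar, E1 = 1.0, 2.0
Hdiag = np.array([0.0, E1])                           # H = diag(0, E1)
B = np.array([[0, 1], [1, 0]], dtype=complex)         # observable, no explicit t dependence
psi0 = np.array([1.0, 1.0], complex)/np.sqrt(2)
def meanB(t):
    psi = np.exp(-1j*Hdiag*t/hbar)*psi0               # exact evolution (H is diagonal)
    return (psi.conj() @ B @ psi).real
t, dt = 0.3, 1e-6
psi = np.exp(-1j*Hdiag*t/hbar)*psi0
dBdt = (meanB(t + dt) - meanB(t - dt))/(2*dt)         # numerical d<B>/dt
sigH = np.sqrt(np.abs(psi)**2 @ Hdiag**2 - (np.abs(psi)**2 @ Hdiag)**2)
sigB = np.sqrt(1.0 - meanB(t)**2)                     # B^2 = identity for sigma_x
tauB = sigB/abs(dBdt)                                 # time for <B> to shift by one std dev
print(sigH*tauB, hbar/2)                              # equal: the bound is saturated here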
In the context of harmonic analysis the uncertainty principle implies that one cannot at the same time localize the value of a function and its Fourier transform. To wit, the following inequality holds, {\displaystyle \left(\int _{-\infty }^{\infty }x^{2}|f(x)|^{2}\,dx\right)\left(\int _{-\infty }^{\infty }\xi ^{2}|{\hat {f}}(\xi )|^{2}\,d\xi \right)\geq {\frac {\|f\|_{2}^{4}}{16\pi ^{2}}}.} Further mathematical uncertainty inequalities, including the entropic uncertainty principle, hold between a function f and its Fourier transform f̂: [49] [50] [51] {\displaystyle H_{x}+H_{\xi }\geq \log(e/2)} In the context of time–frequency analysis uncertainty principles are referred to as the Gabor limit, after Dennis Gabor, or sometimes the Heisenberg–Gabor limit. The basic result, which follows from "Benedicks's theorem", below, is that a function cannot be both time limited and band limited (a function and its Fourier transform cannot both have bounded domain); see bandlimited versus timelimited. More accurately, the time-bandwidth or duration-bandwidth product satisfies {\displaystyle \sigma _{t}\sigma _{f}\geq {\frac {1}{4\pi }}\approx 0.08{\text{ cycles}},} where σt and σf are the standard deviations of the time and frequency energy concentrations respectively. [52] The minimum is attained for a Gaussian-shaped pulse (Gabor wavelet). [For the un-squared Gaussian (i.e. signal amplitude) and its un-squared Fourier transform magnitude, σtσf = 1/(2π); squaring reduces each σ by a factor of √2.] Another common measure is the product of the time and frequency full widths at half maximum (of the power/energy), which for the Gaussian equals 2 ln 2/π ≈ 0.44 (see bandwidth-limited pulse). Stated differently, one cannot sharply localize a signal f in both the time domain and frequency domain at once. When applied to filters, the result implies that one cannot achieve high temporal resolution and high frequency resolution at the same time; a concrete example is the resolution issue of the short-time Fourier transform: if one uses a wide window, one achieves good frequency resolution at the cost of temporal resolution, while a narrow window has the opposite trade-off. Alternate theorems give more precise quantitative results, and, in time–frequency analysis, rather than interpreting the (1-dimensional) time and frequency domains separately, one instead interprets the limit as a lower limit on the support of a function in the (2-dimensional) time–frequency plane. In practice, the Gabor limit limits the simultaneous time–frequency resolution one can achieve without interference; it is possible to achieve higher resolution, but at the cost of different components of the signal interfering with each other. As a result, in order to analyze signals where the transients are important, the wavelet transform is often used instead of the Fourier transform.
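In signal-processing terms the same computation looks as follows. A hedged sketch (ad hoc grid and pulse width) measuring the duration-bandwidth product of a Gaussian pulse from the energy concentrations |s(t)|² and |ŝ(f)|²:

import numpy as np

# Hedged sketch: sigma_t*sigma_f for a Gaussian pulse lands on 1/(4*pi).
N, span = 2**14, 400.0
t = np.linspace(-span/2, span/2, N, endpoint=False)
dt = t[1] - t[0]
s = np.exp(-t**2/(2*3.0**2))                          # Gaussian amplitude, width 3
f = np.fft.fftfreq(N, d=dt)
df = f[1] - f[0]
S = np.fft.fft(s)*dt
def sdev(u, w, du):
    w = w/(np.sum(w)*du)                              # normalize the energy density
    m = np.sum(u*w)*du
    return np.sqrt(np.sum((u - m)**2*w)*du)
st = sdev(t, np.abs(s)**2, dt)
sf = sdev(f, np.abs(S)**2, df)
print(st*sf, 1/(4*np.pi))                             # both ~0.0796 cycles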
Let {\displaystyle \left\{\mathbf {x_{n}} \right\}:=x_{0},x_{1},\ldots ,x_{N-1}} be a sequence of N complex numbers and {\displaystyle \left\{\mathbf {X_{k}} \right\}:=X_{0},X_{1},\ldots ,X_{N-1}} be its discrete Fourier transform. Denote by {\displaystyle \|x\|_{0}} the number of non-zero elements in the time sequence and by {\displaystyle \|X\|_{0}} the number of non-zero elements in the frequency sequence. Then {\displaystyle \|x\|_{0}\cdot \|X\|_{0}\geq N.} This inequality is sharp, with equality achieved when x or X is a Dirac mass, or more generally when x is a nonzero multiple of a Dirac comb supported on a subgroup of the integers modulo N (in which case X is also a Dirac comb supported on a complementary subgroup, and vice versa). More generally, if T and W are subsets of the integers modulo N, let {\displaystyle L_{T},R_{W}:\ell ^{2}(\mathbb {Z} /N\mathbb {Z} )\to \ell ^{2}(\mathbb {Z} /N\mathbb {Z} )} denote the time-limiting and band-limiting operators, respectively. Then {\displaystyle \|L_{T}R_{W}\|^{2}\leq {\frac {|T||W|}{|G|}}} where G is the group Z/NZ (so |G| = N) and the norm is the operator norm of operators on the Hilbert space {\displaystyle \ell ^{2}(\mathbb {Z} /N\mathbb {Z} )} of functions on the integers modulo N. This inequality has implications for signal reconstruction. [53] When N is a prime number, a stronger inequality holds: {\displaystyle \|x\|_{0}+\|X\|_{0}\geq N+1.} Discovered by Terence Tao, this inequality is also sharp. [54]
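Both discrete bounds can be observed directly with an FFT. A hedged sketch (the comb and the sparse vector are arbitrary test signals):

import numpy as np

# Hedged sketch: ||x||_0 * ||X||_0 >= N, tight for a Dirac comb; and Tao's
# additive bound ||x||_0 + ||X||_0 >= N + 1 for prime N.
N = 12
x = np.zeros(N)
x[::3] = 1.0                                          # comb on the subgroup {0, 3, 6, 9}
X = np.fft.fft(x)
nx = np.count_nonzero(np.abs(x) > 1e-9)
nX = np.count_nonzero(np.abs(X) > 1e-9)
print(nx, nX, nx*nX, N)                               # 4 * 3 = 12 = N: equality
Np = 13                                               # prime length
y = np.zeros(Np)
y[:2] = [1.0, 2.0]                                    # two nonzero time samples
Y = np.fft.fft(y)
print(np.count_nonzero(np.abs(y) > 1e-9)
      + np.count_nonzero(np.abs(Y) > 1e-9) >= Np + 1) # True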
The Amrein–Berthier [ 55 ] and Benedicks [ 56 ] theorems intuitively say that the set of points where f is non-zero and the set of points where ƒ̂ is non-zero cannot both be small. Specifically, it is impossible for a function f in L 2 ( R ) and its Fourier transform ƒ̂ to both be supported on sets of finite Lebesgue measure . A more quantitative version, valid for any subsets S and Σ of R d of finite measure, is [ 57 ] [ 58 ] ‖ f ‖ L 2 ( R d ) ≤ C e C | S | | Σ | ( ‖ f ‖ L 2 ( S c ) + ‖ f ^ ‖ L 2 ( Σ c ) ) . {\displaystyle \|f\|_{L^{2}(\mathbf {R} ^{d})}\leq Ce^{C|S||\Sigma |}{\bigl (}\|f\|_{L^{2}(S^{c})}+\|{\hat {f}}\|_{L^{2}(\Sigma ^{c})}{\bigr )}~.} One expects that the factor Ce C | S || Σ | may be replaced by Ce C (| S || Σ |) 1/ d , which is only known if either S or Σ is convex. The mathematician G. H. Hardy formulated the following uncertainty principle: [ 59 ] it is not possible for f and ƒ̂ to both be "very rapidly decreasing". Specifically, if f in L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} is such that | f ( x ) | ≤ C ( 1 + | x | ) N e − a π x 2 {\displaystyle |f(x)|\leq C(1+|x|)^{N}e^{-a\pi x^{2}}} and | f ^ ( ξ ) | ≤ C ( 1 + | ξ | ) N e − b π ξ 2 {\displaystyle |{\hat {f}}(\xi )|\leq C(1+|\xi |)^{N}e^{-b\pi \xi ^{2}}} ( C > 0 , N {\displaystyle C>0,N} an integer), then, if ab > 1, f = 0 , while if ab = 1 , then there is a polynomial P of degree ≤ N such that f ( x ) = P ( x ) e − a π x 2 . {\displaystyle f(x)=P(x)e^{-a\pi x^{2}}.} This was later improved as follows: if f ∈ L 2 ( R d ) {\displaystyle f\in L^{2}(\mathbb {R} ^{d})} is such that ∫ R d ∫ R d | f ( x ) | | f ^ ( ξ ) | e π | ⟨ x , ξ ⟩ | ( 1 + | x | + | ξ | ) N d x d ξ < + ∞ , {\displaystyle \int _{\mathbb {R} ^{d}}\int _{\mathbb {R} ^{d}}|f(x)||{\hat {f}}(\xi )|{\frac {e^{\pi |\langle x,\xi \rangle |}}{(1+|x|+|\xi |)^{N}}}\,dx\,d\xi <+\infty ~,} then f ( x ) = P ( x ) e − π ⟨ A x , x ⟩ , {\displaystyle f(x)=P(x)e^{-\pi \langle Ax,x\rangle }~,} where P is a polynomial of degree ( N − d )/2 and A is a real d × d positive definite matrix. This result was stated in Beurling's complete works without proof and proved in Hörmander [ 60 ] (the case d = 1 , N = 0 {\displaystyle d=1,N=0} ) and Bonami, Demange, and Jaming [ 61 ] for the general case. Note that the Hörmander–Beurling version implies the case ab > 1 in Hardy's theorem, while the version by Bonami–Demange–Jaming covers the full strength of Hardy's theorem. A different proof of Beurling's theorem based on Liouville's theorem appeared in ref. [ 62 ] A full description of the case ab < 1 as well as the following extension to Schwartz class distributions appears in ref. [ 63 ] Theorem — If a tempered distribution f ∈ S ′ ( R d ) {\displaystyle f\in {\mathcal {S}}'(\mathbb {R} ^{d})} is such that e π | x | 2 f ∈ S ′ ( R d ) {\displaystyle e^{\pi |x|^{2}}f\in {\mathcal {S}}'(\mathbb {R} ^{d})} and e π | ξ | 2 f ^ ∈ S ′ ( R d ) , {\displaystyle e^{\pi |\xi |^{2}}{\hat {f}}\in {\mathcal {S}}'(\mathbb {R} ^{d})~,} then f ( x ) = P ( x ) e − π ⟨ A x , x ⟩ , {\displaystyle f(x)=P(x)e^{-\pi \langle Ax,x\rangle }~,} for some suitable polynomial P and real positive definite d × d matrix A . In quantum metrology , and especially interferometry , the Heisenberg limit is the optimal rate at which the accuracy of a measurement can scale with the energy used in the measurement. Typically, this is the measurement of a phase (applied to one arm of a beam-splitter ) and the energy is given by the number of photons used in an interferometer . Although some claim to have broken the Heisenberg limit, this reflects disagreement on the definition of the scaling resource. [ 64 ] Suitably defined, the Heisenberg limit is a consequence of the basic principles of quantum mechanics and cannot be beaten, although the weak Heisenberg limit can be beaten. [ 65 ] The inequalities above focus on the statistical imprecision of observables as quantified by the standard deviation σ {\displaystyle \sigma } . Heisenberg's original version, however, dealt with the systematic error , a disturbance of the quantum system produced by the measuring apparatus, i.e., an observer effect. If we let ε A {\displaystyle \varepsilon _{A}} represent the error (i.e., inaccuracy ) of a measurement of an observable A and η B {\displaystyle \eta _{B}} the disturbance produced on a subsequent measurement of the conjugate variable B by the former measurement of A , then the inequality proposed by Masanao Ozawa, encompassing both systematic and statistical errors, holds: [ 66 ] ε A η B + ε A σ B + σ A η B ≥ 1 2 | ⟨ [ A ^ , B ^ ] ⟩ | {\displaystyle \varepsilon _{A}\,\eta _{B}+\varepsilon _{A}\,\sigma _{B}+\sigma _{A}\,\eta _{B}\,\geq \,{\frac {1}{2}}\,\left|{\Bigl \langle }{\bigl [}{\hat {A}},{\hat {B}}{\bigr ]}{\Bigr \rangle }\right|} Heisenberg's uncertainty principle, as originally described in the 1927 formulation, mentions only the first term of the Ozawa inequality, regarding the systematic error .
Using the notation above to describe the error/disturbance effect of sequential measurements (first A , then B ), it could be written as ε A η B ≥ 1 2 | ⟨ [ A ^ , B ^ ] ⟩ | {\displaystyle \varepsilon _{A}\,\eta _{B}\,\geq \,{\frac {1}{2}}\,\left|{\Bigl \langle }{\bigl [}{\hat {A}},{\hat {B}}{\bigr ]}{\Bigr \rangle }\right|} The formal derivation of the Heisenberg relation is possible but far from intuitive. It was not proposed by Heisenberg, but formulated in a mathematically consistent way only in recent years. [ 67 ] [ 68 ] Also, it must be stressed that the Heisenberg formulation does not take into account the intrinsic statistical errors σ A {\displaystyle \sigma _{A}} and σ B {\displaystyle \sigma _{B}} . There is increasing experimental evidence [ 69 ] [ 70 ] [ 71 ] [ 72 ] that the total quantum uncertainty cannot be described by the Heisenberg term alone, but requires the presence of all three terms of the Ozawa inequality. Using the same formalism, [ 1 ] it is also possible to introduce the other kind of physical situation, often confused with the previous one, namely the case of simultaneous measurements ( A and B at the same time): ε A ε B ≥ 1 2 | ⟨ [ A ^ , B ^ ] ⟩ | {\displaystyle \varepsilon _{A}\,\varepsilon _{B}\,\geq \,{\frac {1}{2}}\,\left|{\Bigl \langle }{\bigl [}{\hat {A}},{\hat {B}}{\bigr ]}{\Bigr \rangle }\right|} The two simultaneous measurements on A and B are necessarily [ 73 ] unsharp or weak . It is also possible to derive an uncertainty relation that, like Ozawa's, combines both the statistical and systematic error components, but keeps a form very close to Heisenberg's original inequality. By adding the Robertson [ 1 ] σ A σ B ≥ 1 2 | ⟨ [ A ^ , B ^ ] ⟩ | {\displaystyle \sigma _{A}\,\sigma _{B}\,\geq \,{\frac {1}{2}}\,\left|{\Bigl \langle }{\bigl [}{\hat {A}},{\hat {B}}{\bigr ]}{\Bigr \rangle }\right|} and Ozawa relations we obtain ε A η B + ε A σ B + σ A η B + σ A σ B ≥ | ⟨ [ A ^ , B ^ ] ⟩ | . {\displaystyle \varepsilon _{A}\eta _{B}+\varepsilon _{A}\,\sigma _{B}+\sigma _{A}\,\eta _{B}+\sigma _{A}\sigma _{B}\geq \left|{\Bigl \langle }{\bigl [}{\hat {A}},{\hat {B}}{\bigr ]}{\Bigr \rangle }\right|.} The four terms can be factored as: ( ε A + σ A ) ( η B + σ B ) ≥ | ⟨ [ A ^ , B ^ ] ⟩ | . {\displaystyle (\varepsilon _{A}+\sigma _{A})\,(\eta _{B}+\sigma _{B})\,\geq \,\left|{\Bigl \langle }{\bigl [}{\hat {A}},{\hat {B}}{\bigr ]}{\Bigr \rangle }\right|.} Defining ε ¯ A ≡ ( ε A + σ A ) {\displaystyle {\bar {\varepsilon }}_{A}\,\equiv \,(\varepsilon _{A}+\sigma _{A})} as the inaccuracy in the measured values of the variable A and η ¯ B ≡ ( η B + σ B ) {\displaystyle {\bar {\eta }}_{B}\,\equiv \,(\eta _{B}+\sigma _{B})} as the resulting fluctuation in the conjugate variable B , Kazuo Fujikawa [ 74 ] established an uncertainty relation similar to Heisenberg's original one, but valid for both systematic and statistical errors : ε ¯ A η ¯ B ≥ | ⟨ [ A ^ , B ^ ] ⟩ | {\displaystyle {\bar {\varepsilon }}_{A}\,{\bar {\eta }}_{B}\,\geq \,\left|{\Bigl \langle }{\bigl [}{\hat {A}},{\hat {B}}{\bigr ]}{\Bigr \rangle }\right|} For many distributions, the standard deviation is not a particularly natural way of quantifying the structure. For example, uncertainty relations in which one of the observables is an angle have little physical meaning for fluctuations larger than one period. [ 22 ] [ 75 ] [ 76 ] [ 77 ] Other examples include highly bimodal distributions , or unimodal distributions with divergent variance.
A solution that overcomes these issues is an uncertainty relation based on entropic uncertainty instead of the product of variances. While formulating the many-worlds interpretation of quantum mechanics in 1957, Hugh Everett III conjectured a stronger extension of the uncertainty principle based on entropic uncertainty. [ 78 ] This conjecture, also studied by I. I. Hirschman [ 79 ] and proven in 1975 by W. Beckner [ 80 ] and by Iwo Bialynicki-Birula and Jerzy Mycielski, [ 81 ] is that, for two normalized, dimensionless Fourier transform pairs f ( a ) and g ( b ), the Shannon information entropies H a = − ∫ − ∞ ∞ | f ( a ) | 2 log ⁡ | f ( a ) | 2 d a , {\displaystyle H_{a}=-\int _{-\infty }^{\infty }|f(a)|^{2}\log |f(a)|^{2}\,da,} and H b = − ∫ − ∞ ∞ | g ( b ) | 2 log ⁡ | g ( b ) | 2 d b {\displaystyle H_{b}=-\int _{-\infty }^{\infty }|g(b)|^{2}\log |g(b)|^{2}\,db} are subject to the following constraint, H a + H b ≥ log ⁡ ( e / 2 ) {\displaystyle H_{a}+H_{b}\geq \log(e/2)} where the logarithms may be in any base. The probability distribution functions associated with the position wave function ψ ( x ) and the momentum wave function φ ( p ) have dimensions of inverse length and inverse momentum respectively, but the entropies may be rendered dimensionless by H x = − ∫ | ψ ( x ) | 2 ln ⁡ ( x 0 | ψ ( x ) | 2 ) d x = − ⟨ ln ⁡ ( x 0 | ψ ( x ) | 2 ) ⟩ {\displaystyle H_{x}=-\int |\psi (x)|^{2}\ln \left(x_{0}\,|\psi (x)|^{2}\right)dx=-\left\langle \ln \left(x_{0}\,\left|\psi (x)\right|^{2}\right)\right\rangle } H p = − ∫ | φ ( p ) | 2 ln ⁡ ( p 0 | φ ( p ) | 2 ) d p = − ⟨ ln ⁡ ( p 0 | φ ( p ) | 2 ) ⟩ {\displaystyle H_{p}=-\int |\varphi (p)|^{2}\ln(p_{0}\,|\varphi (p)|^{2})\,dp=-\left\langle \ln(p_{0}\left|\varphi (p)\right|^{2})\right\rangle } where x 0 and p 0 are some arbitrarily chosen length and momentum respectively, which render the arguments of the logarithms dimensionless. Note that the entropies will be functions of these chosen parameters. Due to the Fourier transform relation between the position wave function ψ ( x ) and the momentum wavefunction φ ( p ) , the above constraint can be written for the corresponding entropies as H x + H p ≥ log ⁡ ( e h 2 x 0 p 0 ) {\displaystyle H_{x}+H_{p}\geq \log \left({\frac {e\,h}{2\,x_{0}\,p_{0}}}\right)} where h is the Planck constant . Depending on one's choice of the x 0 p 0 product, the expression may be written in many ways. If x 0 p 0 is chosen to be h , then H x + H p ≥ log ⁡ ( e 2 ) {\displaystyle H_{x}+H_{p}\geq \log \left({\frac {e}{2}}\right)} If, instead, x 0 p 0 is chosen to be ħ , then H x + H p ≥ log ⁡ ( e π ) {\displaystyle H_{x}+H_{p}\geq \log(e\,\pi )} If x 0 and p 0 are chosen to be unity in whatever system of units are being used, then H x + H p ≥ log ⁡ ( e h 2 ) {\displaystyle H_{x}+H_{p}\geq \log \left({\frac {e\,h}{2}}\right)} where h is interpreted as a dimensionless number equal to the value of the Planck constant in the chosen system of units. Note that these inequalities can be extended to multimode quantum states, or wavefunctions in more than one spatial dimension. [ 82 ] The quantum entropic uncertainty principle is more restrictive than the Heisenberg uncertainty principle.
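The following is a minimal numerical sketch of the position-momentum form of this bound (my own illustration, not from the article; it assumes NumPy and works in units where ħ = 1 and x0 = p0 = 1, so the bound reads Hx + Hp ≥ ln(eπ)):

```python
import numpy as np

# Sketch: compute both differential entropies for a Gaussian wave packet
# via the FFT and compare against the entropic bound ln(e*pi).
x = np.linspace(-40.0, 40.0, 2**14)
dx = x[1] - x[0]
sigma = 1.7                                      # arbitrary packet width
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

def entropy(prob, step):
    """Differential entropy of a sampled probability density."""
    p = prob[prob > 1e-300]                      # drop underflowed tails
    return -np.sum(p * np.log(p)) * step

# momentum wavefunction phi(p): continuum FT with hbar = 1, p = 2*pi*nu
phi = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(psi))) * dx / np.sqrt(2 * np.pi)
p = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
dp = p[1] - p[0]

Hx = entropy(np.abs(psi)**2, dx)
Hp = entropy(np.abs(phi)**2, dp)
print(Hx + Hp, ">=", np.log(np.e * np.pi))       # Gaussian saturates the bound
```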
From the inverse logarithmic Sobolev inequalities [ 83 ] H x ≤ 1 2 log ⁡ ( 2 e π σ x 2 / x 0 2 ) , {\displaystyle H_{x}\leq {\frac {1}{2}}\log(2e\pi \sigma _{x}^{2}/x_{0}^{2})~,} H p ≤ 1 2 log ⁡ ( 2 e π σ p 2 / p 0 2 ) , {\displaystyle H_{p}\leq {\frac {1}{2}}\log(2e\pi \sigma _{p}^{2}/p_{0}^{2})~,} (equivalently, from the fact that normal distributions maximize the entropy among all distributions with a given variance), it readily follows that this entropic uncertainty principle is stronger than the one based on standard deviations , because σ x σ p ≥ ℏ 2 exp ⁡ ( H x + H p − log ⁡ ( e h 2 x 0 p 0 ) ) ≥ ℏ 2 . {\displaystyle \sigma _{x}\sigma _{p}\geq {\frac {\hbar }{2}}\exp \left(H_{x}+H_{p}-\log \left({\frac {e\,h}{2\,x_{0}\,p_{0}}}\right)\right)\geq {\frac {\hbar }{2}}~.} In other words, the Heisenberg uncertainty principle is a consequence of the quantum entropic uncertainty principle, but not vice versa. A few remarks on these inequalities are in order. First, the choice of base e is a matter of popular convention in physics. The logarithm can alternatively be in any base, provided that it be consistent on both sides of the inequality. Second, recall the Shannon entropy has been used, not the quantum von Neumann entropy . Finally, the normal distribution saturates the inequality, and it is the only distribution with this property, because it is the maximum entropy probability distribution among those with fixed variance. As an example, consider the ground state of the quantum harmonic oscillator. In terms of the natural length scale x 0 = ℏ 2 m ω {\displaystyle x_{0}={\sqrt {\frac {\hbar }{2m\omega }}}} the ground-state wavefunction is ψ ( x ) = ( m ω π ℏ ) 1 / 4 exp ⁡ ( − m ω x 2 2 ℏ ) = ( 1 2 π x 0 2 ) 1 / 4 exp ⁡ ( − x 2 4 x 0 2 ) {\displaystyle {\begin{aligned}\psi (x)&=\left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\exp {\left(-{\frac {m\omega x^{2}}{2\hbar }}\right)}\\&=\left({\frac {1}{2\pi x_{0}^{2}}}\right)^{1/4}\exp {\left(-{\frac {x^{2}}{4x_{0}^{2}}}\right)}\end{aligned}}} The probability distribution is the normal distribution | ψ ( x ) | 2 = 1 x 0 2 π exp ⁡ ( − x 2 2 x 0 2 ) {\displaystyle |\psi (x)|^{2}={\frac {1}{x_{0}{\sqrt {2\pi }}}}\exp {\left(-{\frac {x^{2}}{2x_{0}^{2}}}\right)}} with Shannon entropy H x = − ∫ | ψ ( x ) | 2 ln ⁡ ( | ψ ( x ) | 2 ⋅ x 0 ) d x = − 1 x 0 2 π ∫ − ∞ ∞ exp ⁡ ( − x 2 2 x 0 2 ) ln ⁡ [ 1 2 π exp ⁡ ( − x 2 2 x 0 2 ) ] d x = 1 2 π ∫ − ∞ ∞ exp ⁡ ( − u 2 2 ) [ ln ⁡ ( 2 π ) + u 2 2 ] d u = ln ⁡ ( 2 π ) + 1 2 . {\displaystyle {\begin{aligned}H_{x}&=-\int |\psi (x)|^{2}\ln(|\psi (x)|^{2}\cdot x_{0})\,dx\\&=-{\frac {1}{x_{0}{\sqrt {2\pi }}}}\int _{-\infty }^{\infty }\exp {\left(-{\frac {x^{2}}{2x_{0}^{2}}}\right)}\ln \left[{\frac {1}{\sqrt {2\pi }}}\exp {\left(-{\frac {x^{2}}{2x_{0}^{2}}}\right)}\right]\,dx\\&={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }\exp {\left(-{\frac {u^{2}}{2}}\right)}\left[\ln({\sqrt {2\pi }})+{\frac {u^{2}}{2}}\right]\,du\\&=\ln({\sqrt {2\pi }})+{\frac {1}{2}}.\end{aligned}}} A completely analogous calculation proceeds for the momentum distribution.
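As a quick sanity check of the value just derived (a sketch of my own, assuming SciPy), the entropy integral can be evaluated numerically in the dimensionless variable u = x/x0:

```python
import numpy as np
from scipy.integrate import quad

# Sketch: verify H_x = ln(sqrt(2*pi)) + 1/2 for the Gaussian ground state.
def rho(u):
    """x0 * |psi(x)|^2 expressed in the dimensionless variable u = x/x0."""
    return np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)

# finite limits avoid log-of-underflow; the tails beyond |u| = 12 are negligible
Hx, _ = quad(lambda u: -rho(u) * np.log(rho(u)), -12, 12)
print(Hx)                                   # ~1.41894
print(np.log(np.sqrt(2 * np.pi)) + 0.5)     # same value
```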
Choosing a standard momentum of p 0 = ℏ / x 0 {\displaystyle p_{0}=\hbar /x_{0}} : φ ( p ) = ( 2 x 0 2 π ℏ 2 ) 1 / 4 exp ⁡ ( − x 0 2 p 2 ℏ 2 ) {\displaystyle \varphi (p)=\left({\frac {2x_{0}^{2}}{\pi \hbar ^{2}}}\right)^{1/4}\exp {\left(-{\frac {x_{0}^{2}p^{2}}{\hbar ^{2}}}\right)}} | φ ( p ) | 2 = 2 x 0 2 π ℏ 2 exp ⁡ ( − 2 x 0 2 p 2 ℏ 2 ) {\displaystyle |\varphi (p)|^{2}={\sqrt {\frac {2x_{0}^{2}}{\pi \hbar ^{2}}}}\exp {\left(-{\frac {2x_{0}^{2}p^{2}}{\hbar ^{2}}}\right)}} H p = − ∫ | φ ( p ) | 2 ln ⁡ ( | φ ( p ) | 2 ⋅ ℏ / x 0 ) d p = ln ⁡ ( π 2 ) + 1 2 . {\displaystyle {\begin{aligned}H_{p}&=-\int |\varphi (p)|^{2}\ln(|\varphi (p)|^{2}\cdot \hbar /x_{0})\,dp\\&=-{\sqrt {\frac {2x_{0}^{2}}{\pi \hbar ^{2}}}}\int _{-\infty }^{\infty }\exp {\left(-{\frac {2x_{0}^{2}p^{2}}{\hbar ^{2}}}\right)}\ln \left[{\sqrt {\frac {2}{\pi }}}\exp {\left(-{\frac {2x_{0}^{2}p^{2}}{\hbar ^{2}}}\right)}\right]\,dp\\&={\sqrt {\frac {2}{\pi }}}\int _{-\infty }^{\infty }\exp {\left(-2v^{2}\right)}\left[\ln \left({\sqrt {\frac {\pi }{2}}}\right)+2v^{2}\right]\,dv\\&=\ln \left({\sqrt {\frac {\pi }{2}}}\right)+{\frac {1}{2}}.\end{aligned}}} The entropic uncertainty is therefore the limiting value H x + H p = ln ⁡ ( 2 π ) + 1 2 + ln ⁡ ( π 2 ) + 1 2 = 1 + ln ⁡ π = ln ⁡ ( e π ) . {\displaystyle {\begin{aligned}H_{x}+H_{p}&=\ln({\sqrt {2\pi }})+{\frac {1}{2}}+\ln \left({\sqrt {\frac {\pi }{2}}}\right)+{\frac {1}{2}}\\&=1+\ln \pi =\ln(e\pi ).\end{aligned}}} A measurement apparatus will have a finite resolution set by the discretization of its possible outputs into bins, with the probability of lying within one of the bins given by the Born rule. We will consider the most common experimental situation, in which the bins are of uniform size. Let δx be a measure of the spatial resolution. We take the zeroth bin to be centered near the origin, with possibly some small constant offset c . The probability of lying within the jth interval of width δx is P ⁡ [ x j ] = ∫ ( j − 1 / 2 ) δ x − c ( j + 1 / 2 ) δ x − c | ψ ( x ) | 2 d x {\displaystyle \operatorname {P} [x_{j}]=\int _{(j-1/2)\delta x-c}^{(j+1/2)\delta x-c}|\psi (x)|^{2}\,dx} To account for this discretization, we can define the Shannon entropy of the wave function for a given measurement apparatus as H x = − ∑ j = − ∞ ∞ P ⁡ [ x j ] ln ⁡ P ⁡ [ x j ] . {\displaystyle H_{x}=-\sum _{j=-\infty }^{\infty }\operatorname {P} [x_{j}]\ln \operatorname {P} [x_{j}].} Under the above definition, the entropic uncertainty relation is H x + H p > ln ⁡ ( e 2 ) − ln ⁡ ( δ x δ p h ) . {\displaystyle H_{x}+H_{p}>\ln \left({\frac {e}{2}}\right)-\ln \left({\frac {\delta x\delta p}{h}}\right).} Here we note that δx δp / h is a typical infinitesimal phase space volume used in the calculation of a partition function . The inequality is also strict and not saturated. Efforts to improve this bound are an active area of research. As an example, consider again the harmonic-oscillator ground state ψ ( x ) = ( m ω π ℏ ) 1 / 4 exp ⁡ ( − m ω x 2 2 ℏ ) {\displaystyle \psi (x)=\left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\exp {\left(-{\frac {m\omega x^{2}}{2\hbar }}\right)}} The probability of lying within one of these bins can be expressed in terms of the error function .
P ⁡ [ x j ] = m ω π ℏ ∫ ( j − 1 / 2 ) δ x ( j + 1 / 2 ) δ x exp ⁡ ( − m ω x 2 ℏ ) d x = 1 π ∫ ( j − 1 / 2 ) δ x m ω / ℏ ( j + 1 / 2 ) δ x m ω / ℏ e − u 2 d u = 1 2 [ erf ⁡ ( ( j + 1 2 ) δ x ⋅ m ω ℏ ) − erf ⁡ ( ( j − 1 2 ) δ x ⋅ m ω ℏ ) ] {\displaystyle {\begin{aligned}\operatorname {P} [x_{j}]&={\sqrt {\frac {m\omega }{\pi \hbar }}}\int _{(j-1/2)\delta x}^{(j+1/2)\delta x}\exp \left(-{\frac {m\omega x^{2}}{\hbar }}\right)\,dx\\&={\sqrt {\frac {1}{\pi }}}\int _{(j-1/2)\delta x{\sqrt {m\omega /\hbar }}}^{(j+1/2)\delta x{\sqrt {m\omega /\hbar }}}e^{-u^{2}}\,du\\&={\frac {1}{2}}\left[\operatorname {erf} \left(\left(j+{\frac {1}{2}}\right)\delta x\cdot {\sqrt {\frac {m\omega }{\hbar }}}\right)-\operatorname {erf} \left(\left(j-{\frac {1}{2}}\right)\delta x\cdot {\sqrt {\frac {m\omega }{\hbar }}}\right)\right]\end{aligned}}} The momentum probabilities are completely analogous. P ⁡ [ p j ] = 1 2 [ erf ⁡ ( ( j + 1 2 ) δ p ⋅ 1 ℏ m ω ) − erf ⁡ ( ( j − 1 2 ) δ p ⋅ 1 ℏ m ω ) ] {\displaystyle \operatorname {P} [p_{j}]={\frac {1}{2}}\left[\operatorname {erf} \left(\left(j+{\frac {1}{2}}\right)\delta p\cdot {\frac {1}{\sqrt {\hbar m\omega }}}\right)-\operatorname {erf} \left(\left(j-{\frac {1}{2}}\right)\delta p\cdot {\frac {1}{\sqrt {\hbar m\omega }}}\right)\right]} For simplicity, we will set the resolutions to δ x = h m ω {\displaystyle \delta x={\sqrt {\frac {h}{m\omega }}}} δ p = h m ω {\displaystyle \delta p={\sqrt {hm\omega }}} so that the probabilities reduce to P ⁡ [ x j ] = P ⁡ [ p j ] = 1 2 [ erf ⁡ ( ( j + 1 2 ) 2 π ) − erf ⁡ ( ( j − 1 2 ) 2 π ) ] {\displaystyle \operatorname {P} [x_{j}]=\operatorname {P} [p_{j}]={\frac {1}{2}}\left[\operatorname {erf} \left(\left(j+{\frac {1}{2}}\right){\sqrt {2\pi }}\right)-\operatorname {erf} \left(\left(j-{\frac {1}{2}}\right){\sqrt {2\pi }}\right)\right]} The Shannon entropy can be evaluated numerically. H x = H p = − ∑ j = − ∞ ∞ P ⁡ [ x j ] ln ⁡ P ⁡ [ x j ] = − ∑ j = − ∞ ∞ 1 2 [ erf ⁡ ( ( j + 1 2 ) 2 π ) − erf ⁡ ( ( j − 1 2 ) 2 π ) ] ln ⁡ 1 2 [ erf ⁡ ( ( j + 1 2 ) 2 π ) − erf ⁡ ( ( j − 1 2 ) 2 π ) ] ≈ 0.3226 {\displaystyle {\begin{aligned}H_{x}=H_{p}&=-\sum _{j=-\infty }^{\infty }\operatorname {P} [x_{j}]\ln \operatorname {P} [x_{j}]\\&=-\sum _{j=-\infty }^{\infty }{\frac {1}{2}}\left[\operatorname {erf} \left(\left(j+{\frac {1}{2}}\right){\sqrt {2\pi }}\right)-\operatorname {erf} \left(\left(j-{\frac {1}{2}}\right){\sqrt {2\pi }}\right)\right]\ln {\frac {1}{2}}\left[\operatorname {erf} \left(\left(j+{\frac {1}{2}}\right){\sqrt {2\pi }}\right)-\operatorname {erf} \left(\left(j-{\frac {1}{2}}\right){\sqrt {2\pi }}\right)\right]\\&\approx 0.3226\end{aligned}}} The entropic uncertainty is indeed larger than the limiting value. H x + H p ≈ 0.3226 + 0.3226 = 0.6452 > ln ⁡ ( e 2 ) − ln ⁡ 1 ≈ 0.3069 {\displaystyle H_{x}+H_{p}\approx 0.3226+0.3226=0.6452>\ln \left({\frac {e}{2}}\right)-\ln 1\approx 0.3069} Note that despite being in the optimal case, the inequality is not saturated.
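The numerical evaluation quoted above can be reproduced with a few lines (my own sketch, assuming NumPy and SciPy):

```python
import numpy as np
from scipy.special import erf

# Sketch: reproduce the binned entropy H_x = H_p ~ 0.3226 using the erf
# expression for P[x_j] with delta_x = sqrt(h / (m * omega)).
c = np.sqrt(2 * np.pi)
j = np.arange(-50, 51)                              # far more bins than needed
P = 0.5 * (erf((j + 0.5) * c) - erf((j - 0.5) * c))
P = P[P > 0]                                        # drop bins with zero weight
H = -np.sum(P * np.log(P))
print(H, 2 * H)                                     # ~0.3226 and ~0.6452
```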
Next, consider a wavefunction that is uniform over an interval: if ψ ( x ) = { 1 / √ 2 a for | x | ≤ a , 0 for | x | > a {\displaystyle \psi (x)={\begin{cases}{1}/{\sqrt {2a}}&{\text{for }}|x|\leq a,\\[8pt]0&{\text{for }}|x|>a\end{cases}}} then its Fourier transform is the sinc function, φ ( p ) = a π ℏ ⋅ sinc ⁡ ( a p ℏ ) {\displaystyle \varphi (p)={\sqrt {\frac {a}{\pi \hbar }}}\cdot \operatorname {sinc} \left({\frac {ap}{\hbar }}\right)} which yields infinite momentum variance despite having a centralized shape. The entropic uncertainty, on the other hand, is finite. Suppose for simplicity that the spatial resolution is just a two-bin measurement, δx = a , and that the momentum resolution is δp = h / a . Partitioning the uniform spatial distribution into two equal bins is straightforward. We set the offset c = a /2 so that the two bins span the distribution. P ⁡ [ x 0 ] = ∫ − a 0 1 2 a d x = 1 2 {\displaystyle \operatorname {P} [x_{0}]=\int _{-a}^{0}{\frac {1}{2a}}\,dx={\frac {1}{2}}} P ⁡ [ x 1 ] = ∫ 0 a 1 2 a d x = 1 2 {\displaystyle \operatorname {P} [x_{1}]=\int _{0}^{a}{\frac {1}{2a}}\,dx={\frac {1}{2}}} H x = − ∑ j = 0 1 P ⁡ [ x j ] ln ⁡ P ⁡ [ x j ] = − 1 2 ln ⁡ 1 2 − 1 2 ln ⁡ 1 2 = ln ⁡ 2 {\displaystyle H_{x}=-\sum _{j=0}^{1}\operatorname {P} [x_{j}]\ln \operatorname {P} [x_{j}]=-{\frac {1}{2}}\ln {\frac {1}{2}}-{\frac {1}{2}}\ln {\frac {1}{2}}=\ln 2} The bins for momentum must cover the entire real line. As done with the spatial distribution, we could apply an offset. It turns out, however, that the Shannon entropy is minimized when the zeroth bin for momentum is centered at the origin. (The reader is encouraged to try adding an offset.) The probability of lying within an arbitrary momentum bin can be expressed in terms of the sine integral . P ⁡ [ p j ] = a π ℏ ∫ ( j − 1 / 2 ) δ p ( j + 1 / 2 ) δ p sinc 2 ⁡ ( a p ℏ ) d p = 1 π ∫ 2 π ( j − 1 / 2 ) 2 π ( j + 1 / 2 ) sinc 2 ⁡ ( u ) d u = 1 π [ Si ⁡ ( ( 4 j + 2 ) π ) − Si ⁡ ( ( 4 j − 2 ) π ) ] {\displaystyle {\begin{aligned}\operatorname {P} [p_{j}]&={\frac {a}{\pi \hbar }}\int _{(j-1/2)\delta p}^{(j+1/2)\delta p}\operatorname {sinc} ^{2}\left({\frac {ap}{\hbar }}\right)\,dp\\&={\frac {1}{\pi }}\int _{2\pi (j-1/2)}^{2\pi (j+1/2)}\operatorname {sinc} ^{2}(u)\,du\\&={\frac {1}{\pi }}\left[\operatorname {Si} ((4j+2)\pi )-\operatorname {Si} ((4j-2)\pi )\right]\end{aligned}}} The Shannon entropy can be evaluated numerically. H p = − ∑ j = − ∞ ∞ P ⁡ [ p j ] ln ⁡ P ⁡ [ p j ] = − P ⁡ [ p 0 ] ln ⁡ P ⁡ [ p 0 ] − 2 ⋅ ∑ j = 1 ∞ P ⁡ [ p j ] ln ⁡ P ⁡ [ p j ] ≈ 0.53 {\displaystyle H_{p}=-\sum _{j=-\infty }^{\infty }\operatorname {P} [p_{j}]\ln \operatorname {P} [p_{j}]=-\operatorname {P} [p_{0}]\ln \operatorname {P} [p_{0}]-2\cdot \sum _{j=1}^{\infty }\operatorname {P} [p_{j}]\ln \operatorname {P} [p_{j}]\approx 0.53} The entropic uncertainty is indeed larger than the limiting value. H x + H p ≈ 0.69 + 0.53 = 1.22 > ln ⁡ ( e 2 ) − ln ⁡ 1 ≈ 0.31 {\displaystyle H_{x}+H_{p}\approx 0.69+0.53=1.22>\ln \left({\frac {e}{2}}\right)-\ln 1\approx 0.31} For a particle of total angular momentum j {\displaystyle j} the following uncertainty relation holds σ J x 2 + σ J y 2 + σ J z 2 ≥ j , {\displaystyle \sigma _{J_{x}}^{2}+\sigma _{J_{y}}^{2}+\sigma _{J_{z}}^{2}\geq j,} where J l {\displaystyle J_{l}} are angular momentum components. The relation can be derived from ⟨ J x 2 + J y 2 + J z 2 ⟩ = j ( j + 1 ) , {\displaystyle \langle J_{x}^{2}+J_{y}^{2}+J_{z}^{2}\rangle =j(j+1),} and ⟨ J x ⟩ 2 + ⟨ J y ⟩ 2 + ⟨ J z ⟩ 2 ≤ j 2 . {\displaystyle \langle J_{x}\rangle ^{2}+\langle J_{y}\rangle ^{2}+\langle J_{z}\rangle ^{2}\leq j^{2}.} The relation can be strengthened as [ 30 ] [ 84 ] σ J x 2 + σ J y 2 + F Q [ ϱ , J z ] / 4 ≥ j , {\displaystyle \sigma _{J_{x}}^{2}+\sigma _{J_{y}}^{2}+F_{Q}[\varrho ,J_{z}]/4\geq j,} where F Q [ ϱ , J z ] {\displaystyle F_{Q}[\varrho ,J_{z}]} is the quantum Fisher information.
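A small numerical illustration of the angular-momentum relation (my own construction, assuming NumPy; the operator matrices are built from the standard ladder-operator formulas, with ħ = 1):

```python
import numpy as np

def spin_ops(s):
    """Spin-s operators Jx, Jy, Jz in the |s, m> basis, m = s, ..., -s."""
    m = np.arange(s, -s - 1, -1)
    cp = np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1))   # J+ matrix elements
    Jp = np.diag(cp, 1).astype(complex)               # raising operator
    Jx = (Jp + Jp.conj().T) / 2
    Jy = (Jp - Jp.conj().T) / 2j
    Jz = np.diag(m).astype(complex)
    return Jx, Jy, Jz

def var_sum(psi, ops):
    """Sum of the three variances in the normalized state psi."""
    psi = psi / np.linalg.norm(psi)
    total = 0.0
    for J in ops:
        mean = np.vdot(psi, J @ psi).real
        total += np.vdot(psi, J @ (J @ psi)).real - mean**2
    return total

s = 2
ops = spin_ops(s)
rng = np.random.default_rng(0)
dim = int(2 * s + 1)
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
print(var_sum(psi, ops), ">=", s)       # random state lies strictly above s
coherent = np.eye(dim)[0]               # |s, m = s>, the coherent state
print(var_sum(coherent, ops), "==", s)  # coherent state saturates the bound
```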
In 1925 Heisenberg published the Umdeutung (reinterpretation) paper, where he showed that a central aspect of quantum theory was non-commutativity: the theory implied that the relative order of position and momentum measurements was significant. Working with Max Born and Pascual Jordan , he continued to develop matrix mechanics , which would become the first modern formulation of quantum mechanics. [ 85 ] In March 1926, working in Bohr's institute, Heisenberg realized that the non-commutativity implies the uncertainty principle. Writing to Wolfgang Pauli in February 1927, he worked out the basic concepts. [ 86 ] In his celebrated 1927 paper " Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik " ("On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics"), Heisenberg established a relation of the form Δ x Δ p ≳ h as the minimum amount of unavoidable momentum disturbance caused by any position measurement, [ 2 ] but he did not give a precise definition for the uncertainties Δx and Δ p . Instead, he gave some plausible estimates in each case separately. His paper gave an analysis in terms of a microscope that Bohr showed was incorrect; Heisenberg included an addendum to the publication. In his 1930 Chicago lecture [ 87 ] he refined his principle. Later work broadened the concept: any two variables that do not commute cannot be measured simultaneously—the more precisely one is known, the less precisely the other can be known. Heisenberg wrote: It can be expressed in its simplest form as follows: One can never know with perfect accuracy both of those two important factors which determine the movement of one of the smallest particles—its position and its velocity. It is impossible to determine accurately both the position and the direction and speed of a particle at the same instant . [ 88 ] Kennard [ 6 ] [ 1 ] : 204 in 1927 first proved the modern inequality σ x σ p ≥ ℏ 2 {\displaystyle \sigma _{x}\sigma _{p}\geq {\tfrac {\hbar }{2}}} ( A2 ), where ħ = ⁠ h / 2 π ⁠ , and σ x , σ p are the standard deviations of position and momentum. (Heisenberg only proved relation ( A2 ) for the special case of Gaussian states. [ 87 ] ) In 1929 Robertson generalized the inequality to all observables, and in 1930 Schrödinger extended the form to allow non-zero covariance of the operators; this result is referred to as the Robertson–Schrödinger inequality. [ 1 ] : 204 Throughout the main body of his original 1927 paper, written in German, Heisenberg used the word "Ungenauigkeit" ("imprecision") [ 2 ] to describe the basic theoretical principle. Only in the endnote did he switch to the word "Unsicherheit" ("uncertainty"). Later on, he always used "Unbestimmtheit" ("indeterminacy"). When the English-language version of Heisenberg's textbook, The Physical Principles of the Quantum Theory , was published in 1930, however, only the English word "uncertainty" was used, and it became the term in the English language. [ 89 ] The principle is quite counter-intuitive, so the early students of quantum theory had to be reassured that naive measurements intended to violate it were bound always to be unworkable. One way in which Heisenberg originally illustrated the intrinsic impossibility of violating the uncertainty principle is by using the observer effect of an imaginary microscope as a measuring device. [ 87 ] He imagines an experimenter trying to measure the position and momentum of an electron by shooting a photon at it. [ 90 ] : 49–50
The combination of these trade-offs implies that no matter what photon wavelength and aperture size are used, the product of the uncertainty in measured position and measured momentum is greater than or equal to a lower limit, which is (up to a small numerical factor) equal to the Planck constant . [ 91 ] Heisenberg did not care to formulate the uncertainty principle as an exact limit; he preferred to use it as a heuristic quantitative statement, correct up to small numerical factors, which makes the radically new noncommutativity of quantum mechanics inevitable. Historically, the uncertainty principle has been confused [ 92 ] [ 66 ] with a related effect in physics , called the observer effect , which notes that measurements of certain systems cannot be made without affecting the system, [ 93 ] [ 94 ] that is, without changing something in a system. Heisenberg used such an observer effect at the quantum level (see above) as a physical "explanation" of quantum uncertainty. [ 95 ] It has since become clearer, however, that the uncertainty principle is inherent in the properties of all wave-like systems , [ 69 ] and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. [ 96 ] Thus, the uncertainty principle actually states a fundamental property of quantum systems and is not a statement about the observational success of current technology. [ 97 ] The Copenhagen interpretation of quantum mechanics and Heisenberg's uncertainty principle were, in fact, initially seen as twin targets by detractors. According to the Copenhagen interpretation of quantum mechanics, there is no fundamental reality that the quantum state describes, just a prescription for calculating experimental results. There is no way to say what the state of a system fundamentally is, only what the result of observations might be. Albert Einstein believed that randomness is a reflection of our ignorance of some fundamental property of reality, while Niels Bohr believed that the probability distributions are fundamental and irreducible, and depend on which measurements we choose to perform. Einstein and Bohr debated the uncertainty principle for many years. Wolfgang Pauli called Einstein's fundamental objection to the uncertainty principle "the ideal of the detached observer" (phrase translated from the German): "Like the moon has a definite position," Einstein said to me last winter, "whether or not we look at the moon, the same must also hold for the atomic objects, as there is no sharp distinction possible between these and macroscopic objects. Observation cannot create an element of reality like a position, there must be something contained in the complete description of physical reality which corresponds to the possibility of observing a position, already before the observation has been actually made." I hope, that I quoted Einstein correctly; it is always difficult to quote somebody out of memory with whom one does not agree. It is precisely this kind of postulate which I call the ideal of the detached observer. The first of Einstein's thought experiments challenging the uncertainty principle went as follows: Consider a particle passing through a slit of width d . The slit introduces an uncertainty in momentum of approximately ⁠ h / d ⁠ because the particle passes through the slit in the wall. But let us determine the momentum of the particle by measuring the recoil of the wall.
In doing so, we find the momentum of the particle to arbitrary accuracy by conservation of momentum. Bohr's response was that the wall is quantum mechanical as well, and that to measure the recoil to accuracy Δ p , the momentum of the wall must be known to this accuracy before the particle passes through. This introduces an uncertainty in the position of the wall and therefore the position of the slit equal to ⁠ h / Δ p ⁠ , and if the wall's momentum is known precisely enough to measure the recoil, the slit's position is uncertain enough to disallow a position measurement. A similar analysis with particles diffracting through multiple slits is given by Richard Feynman . [ 99 ] Bohr was present when Einstein proposed the thought experiment which has become known as Einstein's box . Einstein argued that "Heisenberg's uncertainty equation implied that the uncertainty in time was related to the uncertainty in energy, the product of the two being related to the Planck constant." [ 100 ] Consider, he said, an ideal box, lined with mirrors so that it can contain light indefinitely. The box could be weighed before a clockwork mechanism opened an ideal shutter at a chosen instant to allow one single photon to escape. "We now know, explained Einstein, precisely the time at which the photon left the box." [ 101 ] "Now, weigh the box again. The change of mass tells the energy of the emitted light. In this manner, said Einstein, one could measure the energy emitted and the time it was released with any desired precision, in contradiction to the uncertainty principle." [ 100 ] Bohr spent a sleepless night considering this argument, and eventually realized that it was flawed. He pointed out that if the box were to be weighed, say by a spring and a pointer on a scale, "since the box must move vertically with a change in its weight, there will be uncertainty in its vertical velocity and therefore an uncertainty in its height above the table. ... Furthermore, the uncertainty about the elevation above the Earth's surface will result in an uncertainty in the rate of the clock", [ 102 ] because of Einstein's own theory of gravity's effect on time . "Through this chain of uncertainties, Bohr showed that Einstein's light box experiment could not simultaneously measure exactly both the energy of the photon and the time of its escape." [ 103 ] In 1935, Einstein, Boris Podolsky and Nathan Rosen published an analysis of spatially separated entangled particles (EPR paradox). [ 104 ] According to EPR, one could measure the position of one of the entangled particles and the momentum of the second particle, and from those measurements deduce the position and momentum of both particles to any precision, violating the uncertainty principle. In order to avoid such a possibility, the measurement of one particle must modify the probability distribution of the other particle instantaneously, possibly violating the principle of locality . [ 105 ] In 1964, John Stewart Bell showed that this assumption can be falsified, since it would imply a certain inequality between the probabilities of different experiments. Experimental results confirm the predictions of quantum mechanics, ruling out EPR's basic assumption of local hidden variables . Science philosopher Karl Popper approached the problem of indeterminacy as a logician and metaphysical realist . [ 106 ]
He disagreed with the application of the uncertainty relations to individual particles rather than to ensembles of identically prepared particles, referring to them as "statistical scatter relations". [ 106 ] [ 107 ] In this statistical interpretation, a particular measurement may be made to arbitrary precision without invalidating the quantum theory. In 1934, Popper published Zur Kritik der Ungenauigkeitsrelationen ("Critique of the Uncertainty Relations") in Naturwissenschaften , [ 108 ] and in the same year Logik der Forschung (translated and updated by the author as The Logic of Scientific Discovery in 1959 [ 106 ] ), outlining his arguments for the statistical interpretation. In 1982, he further developed his theory in Quantum theory and the schism in Physics , writing: [Heisenberg's] formulae are, beyond all doubt, derivable statistical formulae of the quantum theory. But they have been habitually misinterpreted by those quantum theorists who said that these formulae can be interpreted as determining some upper limit to the precision of our measurements . [original emphasis] [ 109 ] Popper proposed an experiment to falsify the uncertainty relations, although he later withdrew his initial version after discussions with Carl Friedrich von Weizsäcker , Heisenberg, and Einstein; Popper sent his paper to Einstein and it may have influenced the formulation of the EPR paradox. [ 110 ] : 720 Some scientists, including Arthur Compton [ 111 ] and Martin Heisenberg , [ 112 ] have suggested that the uncertainty principle, or at least the general probabilistic nature of quantum mechanics, could be evidence for the two-stage model of free will. One critique, however, is that apart from the basic role of quantum mechanics as a foundation for chemistry, nontrivial biological mechanisms requiring quantum mechanics are unlikely, due to the rapid decoherence of quantum systems at room temperature. [ 113 ] Proponents of this theory commonly say that this decoherence is overcome by both screening and decoherence-free subspaces found in biological cells. [ 113 ] There is reason to believe that violating the uncertainty principle also strongly implies the violation of the second law of thermodynamics . [ 114 ] See Gibbs paradox . Uncertainty principles relate quantum particles – electrons for example – to classical concepts – position and momentum. This presumes that quantum particles have a position and momentum. Edwin C. Kemble pointed out [ 115 ] [ clarification needed ] in 1937 that such properties cannot be experimentally verified and that assuming they exist gives rise to many contradictions; similarly, Rudolf Haag notes that position in quantum mechanics is an attribute of an interaction, say between an electron and a detector, not an intrinsic property. [ 116 ] [ 117 ] From this point of view the uncertainty principle is not a fundamental quantum property but a concept "carried over from the language of our ancestors", as Kemble says. Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. All forms of spectroscopy , including particle physics , use the relationship to relate measured energy line-widths to the lifetimes of quantum states. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program. These include, for example, tests of number–phase uncertainty relations in superconducting [ 118 ] or quantum optics [ 119 ] systems.
Applications dependent on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational wave interferometers . [ 120 ]
https://en.wikipedia.org/wiki/Uncertainty_principle
In computer interface design, to unclick is to deselect a specific preference, [ 2 ] [ 3 ] typically by tapping a selected checkbox with a finger or cursor. As a result, the check mark image or dark circle inside the box is removed. As the Internet becomes an increasingly popular medium for marketing, vendors and marketers often presume that a user will prefer certain choices, [ 4 ] [ 5 ] such as receiving emails in the future, having specific computer settings, or preferring that specific programs will be operational when a computer is turned on. As a result, it is sometimes necessary for a user to unclick these choices [ 6 ] to avoid exposure to unwanted advertising, [ 7 ] or to avoid a situation in which a different website is chosen for one's home page . [ 8 ] In Internet marketing , unclicking is often required for a user to avoid being billed automatically for unnecessary services, sometimes as part of a deceptive business practice termed negative option billing . A user's Facebook privacy settings have often been chosen in advance by Facebook Inc., which presumes that a user would like particular settings, and to un-choose these options, a user may need to unclick or opt out of the Facebook-determined choices by finding the right menus . [ 9 ] According to behavioral economics , computer and Internet users have a general tendency to go along with a default setting . The term unclick has also been used in other contexts, such as when there is a latching or locking mechanism, such as a lock on a briefcase , [ 10 ] or seat belts in a car [ 11 ] [ 12 ] or airplane , [ 13 ] or a door lock , [ 14 ] or other mechanisms which typically make a "clicking" sound. In these contexts, unclicking means to open the latch or seat belt. It has also been used in the context of guns, in which a safety catch is "unclicked", [ 15 ] and of flooring materials in which pieces are interlocked. [ 16 ] The term has also been used to describe the act of answering a cell phone by pressing a button when it is ringing. [ 17 ]
https://en.wikipedia.org/wiki/Unclick
Uncomfortable science , as identified by statistician John Tukey , [ 1 ] [ 2 ] comprises situations in which there is a need to draw an inference from a limited sample of data , where further samples influenced by the same cause system will not be available. More specifically, it involves the analysis of a finite natural phenomenon for which it is difficult to overcome the problem of using a common sample of data for both exploratory data analysis and confirmatory data analysis . This leads to the danger of systematic bias through testing hypotheses suggested by the data . A typical example is Bode's law , which provides a simple numerical rule for the distances of the planets in the Solar System from the Sun . Once the rule has been derived, through the trial and error matching of various rules with the observed data (exploratory data analysis), there are not enough planets remaining for a rigorous and independent test of the hypothesis (confirmatory data analysis): we have exhausted the natural phenomenon. The agreement between the data and the numerical rule should be no surprise, as we have deliberately chosen the rule to match the data. If we are concerned about what Bode's law tells us about the cause system of planetary distribution, then we require confirmation that cannot be had until better information about other planetary systems becomes available.
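To make the Bode's law example concrete, the following short script (my own illustration, using rounded standard values for the semi-major axes; the rule and the data are well known, but the script is not from the article) shows how well the exploratory rule matches the data it was tuned on, and where it fails:

```python
# Titius-Bode rule: a = 0.4 + 0.3 * 2**n astronomical units.
actual = {  # approximate semi-major axes in AU
    "Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
    "Ceres": 2.77, "Jupiter": 5.20, "Saturn": 9.58, "Uranus": 19.2,
    "Neptune": 30.1,
}
ns = [float("-inf"), 0, 1, 2, 3, 4, 5, 6, 7]  # Mercury gets n = -infinity

for (name, a), n in zip(actual.items(), ns):
    predicted = 0.4 + 0.3 * 2**n
    print(f"{name:8s} predicted {predicted:6.2f} AU, actual {a:6.2f} AU")

# The rule fits the bodies used to construct it, but Neptune (predicted
# 38.8 AU, actual 30.1 AU) breaks it -- and no further planets remain in the
# Solar System for an independent, confirmatory test.
```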
https://en.wikipedia.org/wiki/Uncomfortable_science
Uncompressed video is digital video that either has never been compressed or was generated by decompressing previously compressed digital video. It is commonly used by video cameras, video monitors, video recording devices (including general-purpose computers), and in video processors that perform functions such as image resizing, image rotation, deinterlacing , and text and graphics overlay. It is conveyed over various types of baseband digital video interfaces, such as HDMI , DVI , DisplayPort and SDI . Standards also exist for the carriage of uncompressed video over computer networks . Some HD video cameras output uncompressed video, whereas others compress the video using a lossy compression method such as MPEG or H.264 . In any lossy compression process, some of the video information is removed, which creates compression artifacts and reduces the quality of the resulting decompressed video. When editing video, it is preferred to work with video that has never been compressed (or was losslessly compressed), as this maintains the best possible quality, with compression performed after completion of editing. [ 1 ] Uncompressed video should not be confused with raw video . Raw video represents largely unprocessed data (e.g. without demosaicing ) captured by an imaging device. A standalone video recorder is a device that receives uncompressed video and stores it in either uncompressed or compressed form. These devices typically have a video output that can be used to monitor or play back recorded video. When playing back compressed video, the compressed video is decompressed by the device before being output. Such devices may also have a communication interface, such as Ethernet or USB, which can be used to exchange video files with an external computer, and in some cases to control the recorder from an external computer as well. Recording to a computer is a relatively inexpensive alternative to implementing a digital video recorder, but the computer and its video storage device (e.g., solid-state drive , RAID ) must be fast enough to keep up with the high video data rate, which in some cases may involve HD video or multiple video sources, or both. Due to the extreme computational and storage system performance demands of real-time video processing, other unnecessary program activity (e.g., background processes , virus scanners ) and asynchronous hardware interfaces (e.g., computer networks ) may be disabled, and the process priority of the real-time recording process may be increased, to avoid disruption of the recording process. Capture devices with HDMI, DVI and HD-SDI inputs are available with PCI Express (some multi-channel), ExpressCard , USB 3.0 [ 2 ] and Thunderbolt [ 3 ] [ 4 ] [ 5 ] interfaces, including devices that support 2160p ( 4K resolution ). [ 6 ] [ 7 ] Software for recording uncompressed video is often supplied with suitable hardware or is available for free, e.g. Ingex . [ 8 ] SMPTE 2022 and 2110 are standards for professional digital video over IP networks . SMPTE 2022 includes provisions for both compressed and uncompressed video formats. SMPTE 2110 carries uncompressed video, audio, and ancillary data as separate streams . Wireless interfaces such as Wireless LAN (WLAN, Wi-Fi ), WiDi , and Wireless Home Digital Interface can be used to transmit uncompressed standard definition (SD) video but not HD video because the HD bit rates would exceed the network bandwidth. HD can be transmitted using higher-speed interfaces such as WirelessHD and WiGig .
In all cases, when video is conveyed over a network, communication disruptions or diminished bandwidth can corrupt the video or prevent its transmission. Uncompressed video has a constant bitrate that is based on pixel representation, image resolution, and frame rate: the payload bitrate is the number of bits per pixel, multiplied by the number of pixels per frame, multiplied by the number of frames per second. For example, 1080p video (1920 × 1080 pixels per frame) at 30 frames per second with 24 bits per pixel has a bitrate of about 1.49 Gbit/s. The actual data rate may be higher because some transmission media for uncompressed video require defined blanking intervals , which effectively add unused pixels around the visible image.
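A minimal sketch of this bitrate calculation (my own example values, not from the article):

```python
def uncompressed_bitrate(width, height, fps, bits_per_pixel=24):
    """Video payload bitrate in bits per second (blanking not included)."""
    return width * height * fps * bits_per_pixel

# 1080p30 with 8-bit 4:4:4 RGB (24 bits per pixel): ~1.49 Gbit/s
print(uncompressed_bitrate(1920, 1080, 30) / 1e9, "Gbit/s")

# 2160p60 with 10-bit 4:2:2 sampling (20 bits per pixel): ~9.95 Gbit/s
print(uncompressed_bitrate(3840, 2160, 60, bits_per_pixel=20) / 1e9, "Gbit/s")
```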
https://en.wikipedia.org/wiki/Uncompressed_video
In mathematics , specifically functional analysis , a series is unconditionally convergent if all reorderings of the series converge to the same value. In contrast, a series is conditionally convergent if it converges but different orderings do not all converge to that same value. Unconditional convergence is equivalent to absolute convergence in finite-dimensional vector spaces , but is a weaker property in infinite dimensions. Let X {\displaystyle X} be a topological vector space . Let I {\displaystyle I} be an index set and x i ∈ X {\displaystyle x_{i}\in X} for all i ∈ I . {\displaystyle i\in I.} The series ∑ i ∈ I x i {\displaystyle \textstyle \sum _{i\in I}x_{i}} is called unconditionally convergent to x ∈ X {\displaystyle x\in X} if the net of finite partial sums converges to x : {\displaystyle x:} for every neighborhood V {\displaystyle V} of x {\displaystyle x} there is a finite subset A 0 ⊆ I {\displaystyle A_{0}\subseteq I} such that ∑ i ∈ A x i ∈ x + V {\displaystyle \textstyle \sum _{i\in A}x_{i}\in x+V} for every finite set A {\displaystyle A} with A 0 ⊆ A ⊆ I . {\displaystyle A_{0}\subseteq A\subseteq I.} Unconditional convergence is often defined in an equivalent way: A series is unconditionally convergent if for every sequence ( ε n ) n = 1 ∞ , {\displaystyle \left(\varepsilon _{n}\right)_{n=1}^{\infty },} with ε n ∈ { − 1 , + 1 } , {\displaystyle \varepsilon _{n}\in \{-1,+1\},} the series ∑ n = 1 ∞ ε n x n {\displaystyle \sum _{n=1}^{\infty }\varepsilon _{n}x_{n}} converges. If X {\displaystyle X} is a Banach space , every absolutely convergent series is unconditionally convergent, but the converse implication does not hold in general. Indeed, if X {\displaystyle X} is an infinite-dimensional Banach space, then by the Dvoretzky–Rogers theorem there always exists an unconditionally convergent series in this space that is not absolutely convergent. However, when X = R n , {\displaystyle X=\mathbb {R} ^{n},} by the Riemann series theorem , the series ∑ n x n {\textstyle \sum _{n}x_{n}} is unconditionally convergent if and only if it is absolutely convergent. This article incorporates material from Unconditional convergence on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License .
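For a concrete sense of what fails for a merely conditionally convergent series, the following sketch (my own illustration, standard library only) greedily rearranges the alternating harmonic series toward an arbitrary target, as the Riemann series theorem guarantees is possible:

```python
import itertools
import math

def rearranged_partial_sum(target, n_terms):
    """Greedy rearrangement of sum_k (-1)^(k+1)/k steered toward `target`."""
    pos = (1.0 / k for k in itertools.count(1, 2))    # 1, 1/3, 1/5, ...
    neg = (-1.0 / k for k in itertools.count(2, 2))   # -1/2, -1/4, -1/6, ...
    s = 0.0
    for _ in range(n_terms):
        # take a positive term while below target, a negative one while above
        s += next(pos) if s <= target else next(neg)
    return s

print(rearranged_partial_sum(math.log(2), 100_000))  # ~0.6931: the usual sum
print(rearranged_partial_sum(2.0, 100_000))          # ~2.0 after rearranging
```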
https://en.wikipedia.org/wiki/Unconditional_convergence
Unconventional protein secretion (known as ER/Golgi-independent protein secretion or nonclassical protein export [ 1 ] ) represents a manner in which proteins are delivered to the surface of the plasma membrane or the extracellular matrix independently of the endoplasmic reticulum or Golgi apparatus . [ 2 ] This includes cytokines and mitogens with crucial functions in complex processes such as the inflammatory response or tumor-induced angiogenesis. Most of these proteins are involved in processes in higher eukaryotes ; however, an unconventional export mechanism has been found in lower eukaryotes too. [ 3 ] Even proteins folded in their correct conformation can cross the plasma membrane this way, unlike proteins transported via the ER/Golgi pathway . [ 1 ] The two types of unconventionally secreted proteins are signal-peptide-containing proteins, and cytoplasmic and nuclear proteins that lack an ER signal peptide. [ 2 ] The former contain a specific signal-peptide sequence, which would normally direct their translation into the endoplasmic reticulum , but they are nevertheless able to reach the cell surface unconventionally. They can be packed into a COPII-coated vesicle and directly fuse with the plasma membrane, or can fuse with an endosomal or lysosomal compartment. Alternatively, they can be packed into a non-COPII-coated vesicle and fuse with the Golgi (before reaching the plasma membrane) or be delivered directly to the plasma membrane. [ 2 ] Soluble proteins can reach the surface of the cell both by non-vesicular and vesicular mechanisms. Non-vesicular mechanisms use a carrier to get proteins into the extracellular space (for example phosphatidylinositol-4,5-bisphosphate). Vesicular mechanisms can use the lysosome-dependent pathway , microvesicle shedding or biogenesis of multivesicular bodies . [ 2 ]
https://en.wikipedia.org/wiki/Unconventional_protein_secretion
Unconventional superconductors are materials that display superconductivity which is not explained by the usual BCS theory or its extension, the Eliashberg theory . The pairing in unconventional superconductors may originate from some mechanism other than the electron–phonon interaction. [ 1 ] Alternatively, a superconductor is unconventional if the superconducting order parameter transforms according to a non-trivial irreducible representation of the point group or space group of the system. [ 2 ] By definition, superconductors that break additional symmetries beyond U (1) symmetry are known as unconventional superconductors. [ 3 ] The superconducting properties of CeCu 2 Si 2 , a type of heavy fermion material , were reported in 1979 by Frank Steglich . [ 4 ] For a long time it was believed that CeCu 2 Si 2 was a singlet d-wave superconductor, but since the mid-2010s, this notion has been strongly contested. [ 5 ] In the early eighties, many more unconventional, heavy fermion superconductors were discovered, including UBe 13 , [ 6 ] UPt 3 [ 7 ] and URu 2 Si 2 . [ 8 ] In each of these materials, the anisotropic nature of the pairing was implicated by the power-law dependence of the nuclear magnetic resonance (NMR) relaxation rate and specific heat capacity on temperature. The presence of nodes in the superconducting gap of UPt 3 was confirmed in 1986 from the polarization dependence of the ultrasound attenuation. [ 9 ] The first unconventional triplet superconductor, organic material (TMTSF) 2 PF 6 , was discovered by Denis Jerome , Klaus Bechgaard and coworkers in 1980 (TMTSF = Tetramethyltetraselenafulvalenium, see Fulvalene ). [ 10 ] Experimental works by Paul Chaikin 's and Michael Naughton's groups as well as theoretical analysis of their data by Andrei Lebed have firmly confirmed the unconventional nature of superconducting pairing in (TMTSF) 2 X (X=PF 6 , ClO 4 , etc.) organic materials. [ 11 ] High-temperature singlet d-wave superconductivity was discovered in 1986 by J.G. Bednorz and K.A. Müller, who found that the lanthanum -based cuprate perovskite material LaBaCuO 4 develops superconductivity at a critical temperature ( T c ) of approximately 35 K (−238 degrees Celsius ). This was well above the highest critical temperature known at the time ( T c = 23 K), and thus the new family of materials was called high-temperature superconductors . Bednorz and Müller received the Nobel Prize in Physics for this discovery in 1987. Since then, many other high-temperature superconductors have been synthesized. LSCO (La 2− x Sr x CuO 4 ) was discovered the same year (1986). Soon after, in January 1987, yttrium barium copper oxide (YBCO) was discovered to have a T c of 90 K, the first material to achieve superconductivity above the boiling point of liquid nitrogen (77 K). [ 12 ] This was highly significant from the point of view of the technological applications of superconductivity because liquid nitrogen is far less expensive than liquid helium , which is required to cool conventional superconductors down to their critical temperature. In 1988 bismuth strontium calcium copper oxide (BSCCO) with T c up to 107 K, [ 13 ] and thallium barium calcium copper oxide (TBCCO) (T=thallium) with T c of 125 K were discovered. The current record critical temperature is about T c = 133 K (−140 °C) at standard pressure, and somewhat higher critical temperatures can be achieved at high pressure.
Nevertheless, at present it is considered unlikely that cuprate perovskite materials will achieve room-temperature superconductivity. On the other hand, other unconventional superconductors have been discovered. These include some that do not superconduct at high temperatures, such as strontium ruthenate Sr 2 RuO 4 , but that, like high-temperature superconductors, are unconventional in other ways. (For example, the origin of the attractive force leading to the formation of Cooper pairs may be different from the one postulated in BCS theory .) In addition to this, superconductors that have unusually high values of T c but that are not cuprate perovskites have been discovered. Some of them may be extreme examples of conventional superconductors (this is suspected of magnesium diboride , MgB 2 , with T c = 39 K). Others could display more unconventional features. In 2008 a new class of superconductors that do not include copper (layered oxypnictide superconductors), for example LaOFeAs, was discovered. [ 14 ] [ 15 ] [ 16 ] An oxypnictide of samarium seemed to have a T c of about 43 K, which was higher than predicted by BCS theory. [ 17 ] Tests at up to 45 T [ 18 ] [ 19 ] suggested the upper critical field of LaFeAsO 0.89 F 0.11 to be around 64 T. Some other iron-based superconductors do not contain oxygen. As of 2009, the record for the highest-temperature superconductor (at ambient pressure) is held by mercury barium calcium copper oxide (HgBa 2 Ca 2 Cu 3 O x ), a cuprate-perovskite material, at 138 K, [ 20 ] and possibly 164 K under high pressure. [ 21 ] Other unconventional superconductors not based on the cuprate structure have also been found. [ 14 ] Some have unusually high values of the critical temperature , T c , and hence they are sometimes also called high-temperature superconductors. In 2017, scanning tunneling microscopy and spectroscopy experiments on graphene proximitized to the electron-doped (non-chiral) d -wave superconductor Pr 2− x Ce x CuO 4 (PCCO) revealed evidence for an unconventional superconducting density of states induced in graphene. [ 22 ] Publications in March 2018 provided evidence for unconventional superconducting properties of a graphene bilayer where one layer was offset by a "magic angle" of 1.1° relative to the other. [ 23 ] While the mechanism responsible for conventional superconductivity is well described by the BCS theory, [ 24 ] [ 25 ] the mechanism for unconventional superconductivity is still unknown. After more than twenty years of intense research, the origin of high-temperature superconductivity is still not clear, being one of the major unsolved problems of theoretical condensed matter physics . It appears that, unlike conventional superconductivity, which is driven by electron–phonon attraction, genuine electronic mechanisms (such as antiferromagnetic correlations) are at play. Moreover, d-wave pairing, rather than s-wave, is significant. One goal of much research is room-temperature superconductivity . [ 26 ] Despite intensive research and many promising leads, an explanation has so far eluded scientists. One reason for this is that the materials in question are generally very complex, multi-layered crystals (for example, BSCCO ), making theoretical modeling difficult. The most controversial topic in condensed matter physics has been the mechanism for high- T c superconductivity (HTS).
There have been two representative theories on the HTS : (See also Resonating valence bond theory ) Firstly, it has been suggested that the HTS emerges from antiferromagnetic spin fluctuations in a doped system. [ 27 ] According to this weak-coupling theory , the pairing wave function of the HTS should have a d x 2 − y 2 symmetry. Thus, whether the symmetry of the pairing wave function is d symmetry or not is essential for establishing the spin-fluctuation mechanism of the HTS. That is, if the HTS order parameter (pairing wave function) does not have d symmetry, then a pairing mechanism related to spin fluctuation can be ruled out. The tunneling experiment (see below) seems to detect d symmetry in some HTS. Secondly, there is the interlayer coupling model , according to which a layered structure consisting of BCS-type (s-symmetry) superconducting layers can enhance the superconductivity by itself. [ 28 ] By introducing an additional tunneling interaction between the layers, this model successfully explained the anisotropic symmetry of the order parameter in the HTS as well as the emergence of the HTS. [ citation needed ] Promising experimental results from various researchers in September 2022, including Weijiong Chen , J.C. Séamus Davis and H. Eisaki, suggested that superexchange of electrons is the most probable mechanism for high-temperature superconductivity. [ 29 ] [ 30 ] The symmetry of the HTS order parameter has been studied in nuclear magnetic resonance measurements and, more recently, by angle-resolved photoemission and measurements of the microwave penetration depth in a HTS crystal. NMR measurements probe the local magnetic field around an atom and hence reflect the susceptibility of the material. They have been of special interest for the HTS materials because many researchers have wondered whether spin correlations might play a role in the mechanism of the HTS. NMR measurements of the resonance frequency on YBCO indicated that electrons in the copper oxide superconductors are paired in spin-singlet states. This indication came from the behavior of the Knight shift , the frequency shift that occurs when the internal field is different from the applied field: In a normal metal, the magnetic moments of the conduction electrons in the neighborhood of the ion being probed align with the applied field and create a larger internal field. As these metals go superconducting, electrons with oppositely directed spins couple to form singlet states. In the anisotropic HTS, NMR measurements have found that the relaxation rate for copper depends on the direction of the applied static magnetic field, with the rate being higher when the static field is parallel to one of the axes in the copper oxide plane. While this observation by some groups supported the d symmetry of the HTS, other groups could not observe it. Also, by measuring the penetration depth , the symmetry of the HTS order parameter can be studied. The microwave penetration depth is determined by the superfluid density responsible for screening the external field. In the s-wave BCS theory, because pairs can be thermally excited across the gap Δ, the change in superfluid density per unit change in temperature falls off exponentially, as exp(−Δ/ k B T ). In that case, the penetration depth also varies exponentially with temperature T .
If there are nodes in the energy gap, as in the d symmetry HTS, electron pairs can be broken more easily, the superfluid density should have a stronger temperature dependence, and the penetration depth is expected to increase as a power of T at low temperatures. If the symmetry is specifically d x 2 - y 2 then the penetration depth should vary linearly with T at low temperatures. This technique is increasingly being used to study superconductors and is limited in application largely by the quality of available single crystals. Photoemission spectroscopy could also provide information on the symmetry of the HTS order parameter. By scattering photons off electrons in the crystal, one can sample the energy spectra of the electrons. Because the technique is sensitive to the angle of the emitted electrons, one can determine the spectrum for different wave vectors on the Fermi surface. However, within the resolution of angle-resolved photoemission spectroscopy (ARPES), researchers could not tell whether the gap goes to zero or just gets very small. Also, ARPES is sensitive only to the magnitude and not to the sign of the gap, so it could not tell if the gap goes negative at some point. This means that ARPES cannot determine whether the HTS order parameter has d symmetry or not. A clever experimental design was devised to overcome this ambiguity. An experiment based on pair tunneling and flux quantization in a three-grain ring of YBa 2 Cu 3 O 7 (YBCO) was designed to test the symmetry of the order parameter in YBCO. [ 31 ] Such a ring consists of three YBCO crystals with specific orientations consistent with the d-wave pairing symmetry to give rise to a spontaneously generated half-integer quantum vortex at the tricrystal meeting point. Furthermore, the possibility that junction interfaces can be in the clean limit (no defects) or with maximum zig-zag disorder was taken into account in this tricrystal experiment. [ 31 ] A proposal to study vortices with half magnetic flux quanta in heavy-fermion superconductors in three polycrystalline configurations was made by V. B. Geshkenbein, A. Larkin and A. Barone in 1987. [ 32 ] In the first tricrystal pairing symmetry experiment, [ 31 ] the spontaneous magnetization of half a flux quantum was clearly observed in YBCO, which convincingly supported the d-wave symmetry of the order parameter in YBCO. Because YBCO is orthorhombic , it might inherently have an admixture of s-wave symmetry. By tuning their technique further, it was found that there was an admixture of s-wave symmetry in YBCO of about 3%. [ 33 ] Also, it was demonstrated by Tsuei, Kirtley et al. that there was a pure d x 2 - y 2 order parameter symmetry in the tetragonal Tl 2 Ba 2 CuO 6 . [ 34 ]
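The temperature dependence just described can be made concrete with a short numerical sketch. The gap value and the d-wave scale factor below are illustrative assumptions rather than parameters of any real material; the point is only the qualitative contrast between thermally activated (s-wave) and power-law (d-wave) low-temperature behavior of the penetration depth.

import numpy as np

# Illustrative contrast in the low-temperature change of the penetration
# depth for a fully gapped s-wave superconductor (suppressed as
# exp(-Delta/k_B T)) versus a d_x2-y2 superconductor with line nodes
# (growing linearly in T). All parameters are made-up examples.
k_B = 8.617e-5                    # Boltzmann constant, eV/K
gap = 0.02                        # assumed gap magnitude Delta, eV
T = np.linspace(1.0, 20.0, 5)     # temperatures, K

dl_s = np.exp(-gap / (k_B * T))   # s-wave: thermally activated
dl_d = T / 100.0                  # d-wave: linear in T (arbitrary scale)

for t, s, d in zip(T, dl_s, dl_d):
    print(f"T = {t:5.1f} K   s-wave ~ {s:.3e}   d-wave ~ {d:.3e}")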
https://en.wikipedia.org/wiki/Unconventional_superconductor
In game theory an uncorrelated asymmetry is an arbitrary asymmetry in a game which is otherwise symmetrical . The name 'uncorrelated asymmetry' is due to John Maynard Smith , who called payoff-relevant asymmetries in games with similar roles for each player 'correlated asymmetries' (note that any game with correlated asymmetries must also have uncorrelated asymmetries). The explanation of an uncorrelated asymmetry usually makes reference to "informational asymmetry", which may confuse some readers, since games with uncorrelated asymmetries are still games of complete information . What differs between the same game with and without an uncorrelated asymmetry is whether the players know which role they have been assigned. If players in a symmetric game know whether they are Player 1, Player 2, etc. (or row vs. column player in a bimatrix game ) then an uncorrelated asymmetry exists. If the players do not know which player they are then no uncorrelated asymmetry exists. The information asymmetry is that one player believes he is player 1 and the other believes he is player 2. Therefore, "informational asymmetry" does not refer to knowledge in the sense of an information set in an extensive form game . The concept of uncorrelated asymmetries is important in determining which Nash equilibria are evolutionarily stable strategies in discoordination games such as the game of chicken . In these games the mixed-strategy Nash equilibrium is the ESS if there is no uncorrelated asymmetry, and the pure conditional Nash equilibria are ESSes when there is an uncorrelated asymmetry. The usual applied example of an uncorrelated asymmetry is territory ownership in the hawk-dove game . Even if the two players ("owner" and "intruder") have the same payoffs (i.e., the game is payoff symmetric), the territory owner will play Hawk, and the intruder Dove, in what is known as the 'Bourgeois strategy' (the reverse is also an ESS, known as the 'anti-bourgeois strategy', but makes little biological sense). Maynard Smith, J. (1982). Evolution and the Theory of Games. Cambridge University Press. ISBN 0-521-28884-3.
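The ESS claims above can be checked numerically. The following minimal sketch assumes the standard hawk-dove payoffs with illustrative values V = 2 (resource value) and C = 6 (fight cost, C > V), and verifies that the conditional Bourgeois strategy, which only exists once an uncorrelated asymmetry lets players condition on their role, cannot be invaded by unconditional Hawk or unconditional Dove.

# Hawk-dove game with an uncorrelated asymmetry (owner vs. intruder).
# Standard payoffs assumed:
#   H vs H: (V - C)/2,  H vs D: V,  D vs H: 0,  D vs D: V/2.
V, C = 2.0, 6.0            # illustrative values with C > V

def payoff(a, b):
    """Payoff to a player choosing action a against an opponent choosing b."""
    if a == "H":
        return (V - C) / 2 if b == "H" else V
    return 0.0 if b == "H" else V / 2

# A strategy maps the assigned role to an action.
hawk      = {"owner": "H", "intruder": "H"}
dove      = {"owner": "D", "intruder": "D"}
bourgeois = {"owner": "H", "intruder": "D"}   # Hawk if owner, Dove if not

def expected(s, t):
    """Average payoff to strategy s against t, each role equally likely."""
    return 0.5 * (payoff(s["owner"], t["intruder"])
                  + payoff(s["intruder"], t["owner"]))

resident = expected(bourgeois, bourgeois)     # equals V/2 = 1.0 here
for name, invader in [("Hawk", hawk), ("Dove", dove)]:
    invading = expected(invader, bourgeois)
    print(f"{name} vs Bourgeois: {invading} < {resident} -> {invading < resident}")

With these numbers the invaders earn 0.0 (Hawk) and 0.5 (Dove) against a Bourgeois population, strictly less than the resident's 1.0, which is the ESS condition in its simplest form.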
https://en.wikipedia.org/wiki/Uncorrelated_asymmetry
An uncoupling protein ( UCP ) is a mitochondrial inner membrane protein that is a regulated proton channel or transporter. An uncoupling protein is thus capable of dissipating the proton gradient generated by NADH -powered pumping of protons from the mitochondrial matrix to the mitochondrial intermembrane space. The energy lost in dissipating the proton gradient via UCPs is not used to do biochemical work. Instead, heat is generated. This is what links UCP to thermogenesis. However, not every type of UCP is related to thermogenesis. Although UCP2 and UCP3 are closely related to UCP1, UCP2 and UCP3 do not affect the thermoregulatory abilities of vertebrates. [ 1 ] UCPs are positioned in the same membrane as the ATP synthase , which is also a proton channel. The two proteins thus work in parallel, with one generating heat and the other generating ATP from ADP and inorganic phosphate, the last step in oxidative phosphorylation . [ 2 ] Mitochondrial respiration is coupled to ATP synthesis (ADP phosphorylation) but is regulated by UCPs. [ 3 ] [ 4 ] UCPs belong to the mitochondrial carrier (SLC25) family. [ 5 ] [ 6 ] Uncoupling proteins play a role in normal physiology, as in cold exposure or hibernation , because the energy is used to generate heat (see thermogenesis ) instead of producing ATP . Some plant species use the heat generated by uncoupling proteins for special purposes. Eastern skunk cabbage , for example, keeps the temperature of its spikes as much as 20 °C higher than the environment, spreading odor and attracting insects that fertilize the flowers. [ 7 ] Other substances, such as 2,4-dinitrophenol and carbonyl cyanide m-chlorophenyl hydrazone , also serve the same uncoupling function. Salicylic acid is also an uncoupling agent (chiefly in plants) and will decrease production of ATP and increase body temperature if taken in extreme excess. [ 8 ] Uncoupling proteins are increased by thyroid hormone , norepinephrine , epinephrine , and leptin . [ 9 ] Scientists observed thermogenic activity in brown adipose tissue , which eventually led to the discovery of UCP1, initially known simply as "Uncoupling Protein". [ 3 ] [ 4 ] The brown tissue revealed elevated levels of mitochondrial respiration, including respiration not coupled to ATP synthesis, indicating strong thermogenic activity. [ 3 ] [ 4 ] UCP1 was the protein found to be responsible for activating a proton pathway not coupled to ADP phosphorylation (ordinarily done through ATP synthase ). [ 3 ] There are five UCP homologs known in mammals (UCP1 through UCP5). While each of these performs unique functions, certain functions are shared by several of the homologs. The first uncoupling protein discovered, UCP1, was found in the brown adipose tissues of hibernators and small rodents, which provide non-shivering heat to these animals. [ 3 ] [ 4 ] These brown adipose tissues are essential to maintaining the body temperature of small rodents, and studies with (UCP1)- knockout mice show that these tissues do not function correctly without functioning uncoupling proteins. [ 3 ] [ 4 ] In fact, these studies revealed that cold-acclimation is not possible for these knockout mice, indicating that UCP1 is an essential driver of heat production in these brown adipose tissues. [ 10 ] [ 11 ] Elsewhere in the body, uncoupling protein activities are known to affect the temperature in micro-environments.
[ 12 ] [ 13 ] This is believed to affect other proteins' activity in these regions, though work is still required to determine the true consequences of uncoupling-induced temperature gradients within cells. [ 12 ] The structure of human uncoupling protein 1 (UCP1) has been solved by cryogenic electron microscopy. [ 14 ] The structure has the typical fold of a member of the SLC25 family. [ 5 ] [ 6 ] UCP1 is locked in a cytoplasmic-open state by guanosine triphosphate in a pH-dependent manner. [ 14 ] The effect of UCP2 and UCP3 on ATP concentrations varies depending on cell type. [ 12 ] For example, pancreatic beta cells experience a decrease in ATP concentration with increased activity of UCP2. [ 12 ] This is associated with cell degeneration, decreased insulin secretion, and type II diabetes. [ 12 ] [ 15 ] Conversely, UCP2 in hippocampus cells and UCP3 in muscle cells stimulate production of mitochondria . [ 12 ] [ 16 ] The larger number of mitochondria increases the combined concentration of ADP and ATP, actually resulting in a net increase in ATP concentration when these uncoupling proteins become coupled (i.e. the mechanism allowing proton leak is inhibited). [ 12 ] [ 16 ] The entire list of functions of UCP2 and UCP3 is not known. [ 17 ] However, studies indicate that these proteins are involved in a negative-feedback loop limiting the concentration of reactive oxygen species (ROS). [ 18 ] Current scientific consensus is that UCP2 and UCP3 perform proton transportation only when activating species are present. [ 19 ] Among these activators are fatty acids, ROS, and certain ROS byproducts that are also reactive. [ 18 ] [ 19 ] Therefore, higher levels of ROS directly and indirectly cause increased activity of UCP2 and UCP3. [ 18 ] This, in turn, increases proton leak from the mitochondria, lowering the proton-motive force across the mitochondrial membrane and activating the electron transport chain. [ 17 ] [ 18 ] [ 19 ] Limiting the proton-motive force through this process results in a negative feedback loop that limits ROS production. [ 18 ] In particular, UCP2 decreases the transmembrane potential of mitochondria, thus decreasing the production of ROS; for this reason, cancer cells may increase the production of UCP2 in mitochondria. [ 20 ] This theory is supported by independent studies which show increased ROS production in both UCP2 and UCP3 knockout mice. [ 19 ] This process is important to human health, as high concentrations of ROS are believed to be involved in the development of degenerative diseases. [ 19 ] By detecting the associated mRNA , UCP2, UCP4, and UCP5 were shown to reside in neurons throughout the human central nervous system. [ 22 ] These proteins play key roles in neuronal function. [ 12 ] While many study findings remain controversial, several findings are widely accepted. [ 12 ] For example, UCPs alter the free calcium concentrations in the neuron. [ 12 ] Mitochondria are a major site of calcium storage in neurons, and the storage capacity increases with the potential across mitochondrial membranes. [ 12 ] [ 23 ] Therefore, when the uncoupling proteins reduce the potential across these membranes, calcium ions are released to the surrounding environment in the neuron. [ 12 ] Given the high concentration of mitochondria near axon terminals , this implies that UCPs play a role in regulating calcium concentrations in this region. [ 12 ] Considering that calcium ions play a large role in neurotransmission, scientists predict that these UCPs directly affect neurotransmission.
[ 12 ] As discussed above, neurons in the hippocampus experience increased concentrations of ATP in the presence of these uncoupling proteins. [ 12 ] [ 16 ] This leads scientists to hypothesize that UCPs improve synaptic plasticity and transmission. [ 12 ]
https://en.wikipedia.org/wiki/Uncoupling_protein
Uncrewed spacecraft or robotic spacecraft are spacecraft without people on board. Uncrewed spacecraft may have varying levels of autonomy from human input, such as remote control or remote guidance. They may also be autonomous , in which case they have a pre-programmed list of operations that will be executed unless otherwise instructed. A robotic spacecraft for scientific measurements is often called a space probe or space observatory . Many space missions are more suited to telerobotic rather than crewed operation, due to lower cost and risk factors. In addition, some planetary destinations such as Venus or the vicinity of Jupiter are too hostile for human survival, given current technology. Outer planets such as Saturn , Uranus , and Neptune are too distant to reach with current crewed spaceflight technology, so telerobotic probes are the only way to explore them. Telerobotics also allows exploration of regions that are vulnerable to contamination by Earth micro-organisms, since spacecraft can be sterilized. Humans cannot be sterilized in the same way as a spaceship, as they coexist with numerous micro-organisms, and these micro-organisms are also hard to contain within a spaceship or spacesuit. The first uncrewed space mission was Sputnik , launched October 4, 1957, to orbit the Earth. Nearly all satellites , landers and rovers are robotic spacecraft. Not every uncrewed spacecraft is a robotic spacecraft; for example, a reflector ball is a non-robotic uncrewed spacecraft. Space missions that carry animals but no humans on board are still called uncrewed missions. Many habitable spacecraft also have varying levels of robotic features. For example, the space stations Salyut 7 and Mir , and the International Space Station module Zarya , were capable of remote guided station-keeping and docking maneuvers with both resupply craft and new modules. Uncrewed resupply spacecraft are increasingly used for crewed space stations . The first robotic spacecraft was launched by the Soviet Union (USSR) on 22 July 1951, a suborbital flight carrying two dogs, Dezik and Tsygan. [ 1 ] Four other such flights were made through the fall of 1951. The first artificial satellite , Sputnik 1 , was put into a 215-by-939-kilometer (116 by 507 nmi) Earth orbit by the USSR on 4 October 1957. On 3 November 1957, the USSR orbited Sputnik 2 . Weighing 113 kilograms (249 lb), Sputnik 2 carried the first animal into orbit, the dog Laika . [ 2 ] Since the satellite was not designed to detach from its launch vehicle 's upper stage, the total mass in orbit was 508.3 kilograms (1,121 lb). [ 3 ] In a close race with the Soviets , the United States launched its first artificial satellite, Explorer 1 , into a 357-by-2,543-kilometre (193 by 1,373 nmi) orbit on 31 January 1958. Explorer 1 was a 205-centimetre (80.75 in) long by 15.2-centimetre (6.00 in) diameter cylinder weighing 14.0 kilograms (30.8 lb), compared to Sputnik 1, a 58-centimeter (23 in) sphere which weighed 83.6 kilograms (184 lb). Explorer 1 carried sensors which confirmed the existence of the Van Allen belts, a major scientific discovery at the time, while Sputnik 1 carried no scientific sensors. On 17 March 1958, the US orbited its second satellite, Vanguard 1 , which was about the size of a grapefruit and remains in a 670-by-3,850-kilometre (360 by 2,080 nmi) orbit as of 2016. The first attempted lunar probe was the Luna E-1 No.1 , launched on 23 September 1958.
Attempts at a lunar probe repeatedly failed until 4 January 1959, when Luna 1 flew past the Moon and entered orbit around the Sun. The success of these early missions began a race between the US and the USSR to outdo each other with increasingly ambitious probes. Mariner 2 was the first probe to study another planet, revealing Venus' extremely hot temperature to scientists in 1962, while the Soviet Venera 4 was the first atmospheric probe to study Venus. Mariner 4 's 1965 Mars flyby returned the first images of its cratered surface, which the Soviets answered a few months later with images from the Moon's surface returned by Luna 9 . In 1967, America's Surveyor 3 gathered information about the Moon's surface that would prove crucial to the Apollo 11 mission that landed humans on the Moon two years later. [ 4 ] The first interstellar probe was Voyager 1 , launched 5 September 1977. It entered interstellar space on 25 August 2012, [ 5 ] followed by its twin Voyager 2 on 5 November 2018. [ 6 ] Nine other countries have successfully launched satellites using their own launch vehicles: France (1965), [ 7 ] Japan [ 8 ] and China (1970), [ 9 ] the United Kingdom (1971), [ 10 ] India (1980), [ 11 ] Israel (1988), [ 12 ] Iran (2009), [ 13 ] North Korea (2012), [ 14 ] and South Korea (2022). [ 15 ] In spacecraft design, the United States Air Force considers a vehicle to consist of the mission payload and the bus (or platform). The bus provides physical structure, thermal control, electrical power, attitude control and telemetry, tracking and commanding. [ 16 ] JPL divides the "flight system" of a spacecraft into subsystems. [ 17 ] These include the physical backbone structure; the command and data subsystem, which is responsible for command sequencing and data handling; and the attitude determination and control subsystem. The attitude subsystem is mainly responsible for correct spacecraft orientation in space (attitude) despite external disturbances (gravity-gradient effects, magnetic-field torques, solar radiation and aerodynamic drag); in addition, it may be required to reposition movable parts, such as antennas and solar arrays. [ 18 ] Integrated sensing incorporates an image transformation algorithm to interpret immediate imagery of the terrain, perform real-time detection and avoidance of terrain hazards that may impede safe landing, and increase the accuracy of landing at a desired site of interest using landmark localization techniques. Integrated sensing completes these tasks by relying on pre-recorded information and cameras to understand its location and determine its position and whether it is correct or needs to make any corrections (localization). The cameras are also used to detect any possible hazards, whether increased fuel consumption or physical hazards such as a poor landing spot in a crater or on a cliff side that would make landing far from ideal (hazard assessment). In planetary exploration missions involving robotic spacecraft, there are three key parts in the process of landing on the surface of the planet to ensure a safe and successful landing. [ 19 ] This process includes an entry into the planetary gravity field and atmosphere, a descent through that atmosphere towards an intended/targeted region of scientific value, and a safe landing that guarantees the integrity of the instrumentation on the craft is preserved. While the robotic spacecraft is going through those parts, it must also be capable of estimating its position relative to the surface in order to ensure reliable control of itself and its ability to maneuver well.
The robotic spacecraft must also efficiently perform hazard assessment and trajectory adjustments in real time to avoid hazards. To achieve this, the robotic spacecraft requires accurate knowledge of where it is located relative to the surface (localization), what terrain features may pose hazards (hazard assessment), and where it should presently be headed (hazard avoidance). Without the capability for localization, hazard assessment, and hazard avoidance, the robotic spacecraft becomes unsafe and can easily enter dangerous situations such as surface collisions, undesirable fuel consumption levels, and/or unsafe maneuvers. Components in the telecommunications subsystem include radio antennas, transmitters and receivers. These may be used to communicate with ground stations on Earth, or with other spacecraft. [ 20 ] The supply of electric power on spacecraft generally comes from photovoltaic (solar) cells or from a radioisotope thermoelectric generator . Other components of the subsystem include batteries for storing power and distribution circuitry that connects components to the power sources. [ 21 ] Spacecraft are often protected from temperature fluctuations with insulation. Some spacecraft use mirrors and sunshades for additional protection from solar heating. They also often need shielding from micrometeoroids and orbital debris. [ 22 ] Spacecraft propulsion is a method that allows a spacecraft to travel through space by generating thrust to push it forward. [ 23 ] However, there is no single universally used propulsion system: monopropellant, bipropellant, ion propulsion, and others are all in use. Each propulsion system generates thrust in a slightly different way, with each system having its own advantages and disadvantages. Most spacecraft propulsion today, however, is based on rocket engines. The general idea behind rocket engines is that when an oxidizer meets the fuel, energy and heat are released explosively, expelling gas at high speed and propelling the spacecraft forward. This follows from one basic principle, Newton's third law : "to every action there is an equal and opposite reaction." As hot gas is expelled from the back of the spacecraft, the reaction force pushes the spacecraft forward. Rocket engines are used mainly because rockets are the most powerful form of propulsion available. For a propulsion system to work, there is usually an oxidizer line and a fuel line. This way, the spacecraft propulsion is controlled. In a monopropellant system, however, there is no need for an oxidizer line; only the fuel line is required. [ 24 ] This works because the oxidizer is chemically bonded into the fuel molecule itself. For the propulsion to be controlled, combustion of the fuel can occur only in the presence of a catalyst . This is quite advantageous, making the rocket engine lighter and cheaper, easy to control, and more reliable. The drawback is that the chemical is very dangerous to manufacture, store, and transport. A bipropellant propulsion system is a rocket engine that uses a liquid propellant. [ 25 ] This means both the oxidizer and fuel lines carry liquids. This system is unique because it requires no ignition system; the two liquids spontaneously combust as soon as they come into contact with each other, producing the propulsion that pushes the spacecraft forward.
The main benefit of this technology is that these liquids have relatively high density, which allows the volume of the propellant tank to be small, thereby increasing space efficiency. The downside is the same as that of the monopropellant propulsion system: the chemicals are very dangerous to manufacture, store, and transport. An ion propulsion system is a type of engine that generates thrust by means of electron bombardment or the acceleration of ions. [ 26 ] By shooting high-energy electrons at a (neutrally charged) propellant atom, electrons are removed from the propellant atom, leaving it positively charged. The positively charged ions are guided through a positively charged grid containing thousands of precisely aligned holes held at high voltage. The aligned, positively charged ions then accelerate through a negatively charged accelerator grid that further increases the speed of the ions, up to 40 kilometres per second (90,000 mph). The momentum of these positively charged ions provides the thrust to propel the spacecraft forward. The advantage of this kind of propulsion is that it is extremely efficient at maintaining constant velocity, which is needed for deep-space travel. However, the amount of thrust produced is extremely low, and a great deal of electrical power is needed to operate it. Mechanical components often need to be moved for deployment after launch or prior to landing. In addition to the use of motors, many one-time movements are controlled by pyrotechnic devices. [ 27 ] Robotic spacecraft are systems specifically designed for a particular hostile environment. [ 28 ] Because they are specified for a particular environment, they vary greatly in complexity and capability. An uncrewed spacecraft is a spacecraft without personnel or crew, operated automatically (proceeding with an action without human intervention) or by remote control (with human intervention); the term does not by itself imply that the spacecraft is robotic. Robotic spacecraft use telemetry to radio back to Earth acquired data and vehicle status information. Although generally referred to as "remotely controlled" or "telerobotic", the earliest orbital spacecraft – such as Sputnik 1 and Explorer 1 – did not receive control signals from Earth. Soon after these first spacecraft, command systems were developed to allow remote control from the ground. Increased autonomy is important for distant probes where the light travel time prevents rapid decision and control from Earth. Newer probes such as Cassini–Huygens and the Mars Exploration Rovers are highly autonomous and use on-board computers to operate independently for extended periods of time. [ 29 ] [ 30 ] A space probe is a robotic spacecraft that does not orbit Earth, but instead explores further into outer space. Space probes have different sets of scientific instruments on board. A space probe may approach the Moon; travel through interplanetary space; flyby, orbit, or land on other planetary bodies; or enter interstellar space. Space probes send collected data to Earth. Space probes can be orbiters, landers, and rovers. Space probes can also gather materials from their targets and return them to Earth. [ 31 ] [ 32 ] Once a probe has left the vicinity of Earth, its trajectory will likely take it along an orbit around the Sun similar to the Earth's orbit. To reach another planet, the simplest practical method is a Hohmann transfer orbit .
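The delta-v cost of a Hohmann transfer follows directly from the vis-viva equation. The sketch below is a minimal illustration, assuming two circular, coplanar orbits with radii roughly those of Earth and Mars about the Sun; the constants are rounded, so the results are approximations rather than mission-design values.

import math

MU_SUN = 1.32712e20      # Sun's gravitational parameter, m^3/s^2
r1 = 1.496e11            # departure orbit radius, m (~1 AU)
r2 = 2.279e11            # arrival orbit radius, m (~1.52 AU)
a_transfer = (r1 + r2) / 2          # semi-major axis of the transfer ellipse

def vis_viva(r, a):
    """Orbital speed at radius r on an orbit with semi-major axis a."""
    return math.sqrt(MU_SUN * (2.0 / r - 1.0 / a))

dv1 = vis_viva(r1, a_transfer) - vis_viva(r1, r1)   # burn to enter transfer ellipse
dv2 = vis_viva(r2, r2) - vis_viva(r2, a_transfer)   # burn to circularize at target

print(f"burn 1: {dv1 / 1000:.2f} km/s")   # about 2.9 km/s
print(f"burn 2: {dv2 / 1000:.2f} km/s")   # about 2.6 km/s
print(f"total:  {(dv1 + dv2) / 1000:.2f} km/s")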
More complex techniques, such as gravitational slingshots , can be more fuel-efficient, though they may require the probe to spend more time in transit. Some high delta-v missions (such as those with high inclination changes ) can only be performed, within the limits of modern propulsion, using gravitational slingshots. A technique using very little propulsion, but requiring a considerable amount of time, is to follow a trajectory on the Interplanetary Transport Network . [ 33 ] A space telescope or space observatory is a telescope in outer space used to observe astronomical objects. Space telescopes avoid the atmospheric filtering and distortion of the electromagnetic radiation they observe, and avoid the light pollution which ground-based observatories encounter. They are divided into two types: satellites which map the entire sky ( astronomical survey ), and satellites which focus on selected astronomical objects or parts of the sky. Space telescopes are distinct from Earth imaging satellites , which point toward Earth for satellite imaging , applied for weather analysis , espionage , and other types of information gathering . Cargo or resupply spacecraft are robotic vehicles designed to transport supplies, such as food, propellant, and equipment, to space stations. This distinguishes them from space probes, which are primarily focused on scientific exploration. Automated cargo spacecraft have been servicing space stations since 1978, supporting missions like Salyut 6 , Salyut 7 , Mir , the International Space Station (ISS), and the Tiangong space station . Currently, the ISS relies on three types of cargo spacecraft: the Russian Progress , [ 34 ] along with the American Cargo Dragon 2 , [ 35 ] [ 36 ] and Cygnus . [ 37 ] China's Tiangong space station is solely supplied by the Tianzhou . [ 38 ] [ 39 ] [ 40 ] The American Dream Chaser [ 41 ] [ 42 ] and Japanese HTV-X are under development for future use with the ISS. The European Automated Transfer Vehicle was previously used between 2008 and 2015.
https://en.wikipedia.org/wiki/Uncrewed_spacecraft
Undark was a trade name for luminous paint made with a mixture of radioactive radium and zinc sulfide , as produced by the U.S. Radium Corporation between 1917 and 1926. [ 1 ] The U.S. Radium Corporation was based in Orange, New Jersey , but was not the only radium-painting business in the United States. Other big names in the early 1900s included the Radium Dial Company and Luminous Processes Inc. Radium was discovered by Pierre and Marie Curie in December 1898. [ 2 ] Years later, in 1902, an electrical engineer, William J. Hammer, discovered that radium could be mixed with zinc sulfide into a paint, the radiation exciting the zinc sulfide and giving off a faint blue-green light. Before Undark had its name, luminous paint was first used commercially during World War I . Soldiers used the luminous paint on their instrument dials so they would not give away their position. After World War I, the paint was marketed toward consumers and trademarked as Undark. [ 3 ] The U.S. Radium Corporation's radium-processing plant extracted and processed radium from carnotite ore. There, it was combined with other ingredients to create Undark. [ 4 ] Undark was used primarily in radium dials for watches and clocks. Undark was also used for compasses, weapon sights, speedometers, telephone mouthpieces, fish bait, locks, and many more articles of use. [ 3 ] Undark was also available as a kit for general consumer use and marketed as glow-in-the-dark paint. The people working in the industry who applied the radioactive paint became known as the Radium Girls , [ 5 ] because many of them became ill, and some died, from exposure to the radiation emitted by the radium contained within the product. The product was the direct cause of radium jaw , also referred to today as osteoradionecrosis. Radium jaw was the most common side effect because the painters were taught to put the bristles of the paintbrush between their lips to bring the brush to a fine point after each stroke. There are also many accounts of the radium girls decorating themselves with the radioactive luminous paint. [ 3 ] Even today, there is no cure for radiation poisoning; the best treatments involve surgical interventions that may remove tissue, muscle, and even bone. The surgical process is called sequestrectomy. [ 6 ] Between 1917 and 1926, radium-226 was improperly used and disposed of, contaminating the processing plant and surrounding areas in Orange, New Jersey. Radium-226 emits ionizing radiation and decays into radon gas. [ 4 ] Radium-226 has a half-life of about 1,600 years. Radium was used to illuminate watches under safer practices until around 1968. [ 7 ] Mixtures similar to Undark, consisting of radium and zinc sulfide, were sold by other companies under their own trade names.
https://en.wikipedia.org/wiki/Undark
In mathematics, the term undefined refers to a value , function , or other expression that cannot be assigned a meaning within a specific formal system . [ 1 ] Attempting to assign or use an undefined value within a particular formal system may produce contradictory or meaningless results within that system. In practice, mathematicians may use the term undefined to warn that a particular calculation or property can produce mathematically inconsistent results, and therefore it should be avoided. [ 2 ] Caution must be taken to avoid the use of such undefined values in a deduction or proof. Whether a particular function or value is undefined depends on the rules of the formal system in which it is used. For example, the imaginary number − 1 {\displaystyle {\sqrt {-1}}} is undefined within the set of real numbers . So it is meaningless to reason about the value solely within the discourse of real numbers. However, defining the imaginary number i {\displaystyle i} to be equal to − 1 {\displaystyle {\sqrt {-1}}} allows there to be a consistent set of mathematics referred to as the complex number plane. Therefore, within the discourse of complex numbers, − 1 {\displaystyle {\sqrt {-1}}} is in fact defined. Many new fields of mathematics have been created by taking previously undefined functions and values and assigning them new meanings. [ 3 ] Most mathematicians generally consider these innovations significant, to the extent that they are both internally consistent and practically useful. For example, Ramanujan summation may seem unintuitive, as it works upon divergent series that assign finite values to apparently infinite sums such as 1 + 2 + 3 + 4 + ⋯ . However, Ramanujan summation is useful for modelling a number of real-world phenomena, including the Casimir effect and bosonic string theory . A function may be said to be undefined outside of its domain . As one example, f ( x ) = 1 x {\textstyle f(x)={\frac {1}{x}}} is undefined when x = 0 {\displaystyle x=0} . As division by zero is undefined in algebra , x = 0 {\displaystyle x=0} is not part of the domain of f ( x ) {\displaystyle f(x)} . In some mathematical contexts, undefined can refer to a primitive notion which is not defined in terms of simpler concepts. [ 4 ] For example, in Elements , Euclid defines a point merely as "that of which there is no part", and a line merely as "length without breadth". [ 5 ] Although these terms are not further defined, Euclid uses them to construct more complex geometric concepts. [ 6 ] Contrast also the term undefined behavior in computer science, in which the term indicates that a function may produce or return any result, which may or may not be correct. Many fields of mathematics refer to various kinds of expressions as undefined. Therefore, the following examples of undefined expressions are not exhaustive. In arithmetic , and therefore algebra , division by zero is undefined. [ 7 ] Use of a division by zero in an arithmetical calculation or proof can produce absurd or meaningless results. Assuming that division by zero exists can produce inconsistent logical results, such as the following fallacious "proof" that one is equal to two [ 8 ] : let x = y . Then x 2 = xy , so x 2 − y 2 = xy − y 2 , and factoring both sides gives ( x + y )( x − y ) = y ( x − y ). Dividing both sides by x − y yields x + y = y ; since x = y , this gives 2 y = y , and dividing by y gives 2 = 1. The above "proof" is not meaningful. Since we know that x = y {\displaystyle x=y} , if we divide both sides of the equation by x − y {\displaystyle x-y} , we divide both sides of the equation by zero. This operation is undefined in arithmetic, and therefore deductions based on division by zero can be contradictory.
If we assume that a non-zero answer n {\displaystyle n} exists when some number k ∣ k ≠ 0 {\displaystyle k\mid k\neq 0} is divided by zero, then that would imply that k = n × 0 {\displaystyle k=n\times 0} . But there is no number which, when multiplied by zero, produces a number that is not zero. Therefore, our assumption is incorrect. [ 7 ] Depending on the particular context, mathematicians may refer to zero to the power of zero as undefined, [ 9 ] indefinite, [ 10 ] or equal to 1. [ 11 ] Controversy exists as to which definitions are mathematically rigorous, and under what conditions. [ 12 ] [ 13 ] When restricted to the field of real numbers, the square root of a negative number is undefined, as no real number exists which, when squared, equals a negative number. Mathematicians, including Gerolamo Cardano , John Wallis , Leonhard Euler , and Carl Friedrich Gauss , explored formal definitions for the square roots of negative numbers, giving rise to the field of complex analysis . [ 14 ] In trigonometry, for all n ∈ Z {\displaystyle n\in \mathbb {Z} } , the functions tan ⁡ θ {\displaystyle \tan \theta } and sec ⁡ θ {\displaystyle \sec \theta } are undefined for θ = π ( n − 1 2 ) {\textstyle \theta =\pi \left(n-{\frac {1}{2}}\right)} , while the functions cot ⁡ θ {\displaystyle \cot \theta } and csc ⁡ θ {\displaystyle \csc \theta } are undefined for all θ = π n {\displaystyle \theta =\pi n} . This is a consequence of the identities of these functions, which would imply a division by zero at those points. [ 15 ] Also, arcsin ⁡ k {\displaystyle \arcsin k} and arccos ⁡ k {\displaystyle \arccos k} are both undefined when k > 1 {\displaystyle k>1} or k < − 1 {\displaystyle k<-1} , because the range of the sin {\displaystyle \sin } and cos {\displaystyle \cos } functions is between − 1 {\displaystyle -1} and 1 {\displaystyle 1} inclusive. In complex analysis , a point z {\displaystyle z} on the complex plane where a holomorphic function is undefined is called a singularity . The different types of singularities include removable singularities, poles, and essential singularities. The term undefined should be contrasted with the term indeterminate . In the first case, undefined generally indicates that a value or property can have no meaningful definition. In the second case, indeterminate generally indicates that a value or property can have many meaningful definitions. Additionally, it seems to be generally accepted that undefined values may not be safely used within a particular formal system, whereas indeterminate values might be, depending on the relevant rules of the particular formal system. [ 16 ]
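The dependence of "undefined" on the surrounding formal system has a concrete counterpart in computing, sketched below. Python's ordinary arithmetic leaves division by zero undefined and raises an error, while IEEE 754 floating-point arithmetic (exercised here through NumPy) deliberately defines 1/0 as infinity and yields the indeterminate-like NaN for 0/0.

import numpy as np

# In Python's ordinary arithmetic, division by zero is simply an error:
try:
    1 / 0
except ZeroDivisionError as e:
    print("Python arithmetic:", e)

# IEEE 754 floating point, by contrast, assigns these cases values:
with np.errstate(divide="ignore", invalid="ignore"):
    print("IEEE 754: 1.0/0.0 =", np.float64(1.0) / np.float64(0.0))  # inf
    print("IEEE 754: 0.0/0.0 =", np.float64(0.0) / np.float64(0.0))  # nan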
https://en.wikipedia.org/wiki/Undefined_(mathematics)
Under cover removal , or UCR , is a method by which colour printers use less ink : black ink is substituted for the grey component that would otherwise be printed with the three CMY inks.
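A minimal sketch of the idea follows, assuming the simplest textbook substitution, in which the grey component shared by the C, M and Y channels is moved entirely to the K (black) channel; real printer drivers use tuned curves rather than this full replacement.

def undercolor_removal(c, m, y, amount=1.0):
    """Replace the grey component shared by C, M and Y with black (K).

    c, m, y are ink coverages in [0, 1]; amount controls how much of
    the common grey is moved to the K channel. Simplest textbook form.
    """
    grey = min(c, m, y)        # the neutral component printable as black
    k = amount * grey
    return c - k, m - k, y - k, k

# Example: a dark, nearly neutral colour uses far less coloured ink.
print(tuple(round(v, 2) for v in undercolor_removal(0.9, 0.85, 0.8)))
# -> (0.1, 0.05, 0.0, 0.8)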
https://en.wikipedia.org/wiki/Under_cover_removal
In manufacturing , an undercut is a special type of recessed surface that is inaccessible using a straight tool. In turning , it refers to a recess in a diameter, generally on the inside diameter of the part. In milling , it refers to a feature which is not visible when the part is viewed from the spindle. In molding , it refers to a feature that cannot be molded using only a single pull mold. In printed circuit board construction, it refers to the portion of the copper that is etched away under the photoresist . On turned parts an undercut is also known as a neck or "relief groove". They are often used at the end of the threaded portion of a shaft or screw to provide clearance for the cutting tool. In molding, an undercut is any indentation or protrusion in a shape that will prevent its withdrawal from a one-piece mold. In milling, the spindle is where a cutting tool is mounted. In some situations material must be cut from a direction where the feature cannot be seen from the perspective of the spindle, which requires special tooling to reach behind the visible material. The corners may be undercut to remove the radius that is usually left by the milling cutter; this is commonly referred to as a relief. Undercuts from etching (microfabrication) are a side effect, not an intentional feature.
https://en.wikipedia.org/wiki/Undercut_(manufacturing)
In turning , an undercut is a recess in a diameter, generally on the inside diameter of the part. On turned parts an undercut is also known as a neck or "relief groove". They are often used at the end of the threaded portion of a shaft or screw to provide clearance for the cutting tool, and are also referred to as thread relief in this context. A rule of thumb is that the undercut should be at least 1.5 threads long and its diameter should be at least 0.015 in (0.38 mm) smaller than the minor diameter of the thread. [ 1 ] For externally threaded products with metric thread, ISO 4755 provides recommended dimensions. Strictly speaking, the relief simply needs to be equal to or slightly smaller than the minor diameter of the thread. Thread relief can also be internal, on a bore, in which case the relief needs to be larger than the major thread diameter. Undercuts are also often used on shafts that have diameter changes so that a mating part can seat against the shoulder; if an undercut is not provided, there is always a small radius left behind even if a sharp corner is intended. These types of undercuts are called out on technical drawings by specifying the width and either the depth or the diameter of the bottom of the neck. [ 2 ]
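The rule of thumb quoted above is easy to turn into a quick check. The sketch below only encodes that guideline; the example minor diameter for an M10 x 1.5 thread is an assumed round figure for illustration, and ISO 4755 should be consulted for actual metric relief dimensions.

def relief_groove(pitch_mm, minor_diameter_mm):
    # Rule-of-thumb external thread relief: at least 1.5 thread pitches
    # long, diameter at least 0.015 in (0.38 mm) below the minor diameter.
    min_length = 1.5 * pitch_mm
    max_diameter = minor_diameter_mm - 0.38
    return min_length, max_diameter

# Example: M10 x 1.5 coarse thread, minor diameter assumed ~8.16 mm.
length, dia = relief_groove(1.5, 8.16)
print(f"relief length >= {length:.2f} mm, relief diameter <= {dia:.2f} mm")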
https://en.wikipedia.org/wiki/Undercut_(turning)
In mathematics , a system of linear equations or a system of polynomial equations is considered underdetermined if there are fewer equations than unknowns [ 1 ] (in contrast to an overdetermined system , where there are more equations than unknowns). The terminology can be explained using the concept of constraint counting . Each unknown can be seen as an available degree of freedom . Each equation introduced into the system can be viewed as a constraint that restricts one degree of freedom. Therefore, the critical case (between overdetermined and underdetermined) occurs when the number of equations and the number of free variables are equal: for every variable giving a degree of freedom, there exists a corresponding constraint removing a degree of freedom. An indeterminate system may have additional constraints that are not equations, such as restricting the solutions to integers. [ 2 ] The underdetermined case, by contrast, occurs when the system has been underconstrained—that is, when the unknowns outnumber the equations. An underdetermined linear system has either no solution or infinitely many solutions. For example, the system x + y + z = 1, x + y + z = 2 is an underdetermined system without any solution; any system of equations having no solution is said to be inconsistent . On the other hand, the system x + y + z = 1, x + y + 2 z = 3 is consistent and has an infinitude of solutions, such as ( x , y , z ) = (1, −2, 2) , (2, −3, 2) , and (3, −4, 2) . All of these solutions can be characterized by first subtracting the first equation from the second, to show that all solutions obey z = 2 ; using this in either equation shows that any value of y is possible, with x = −1 − y . More specifically, according to the Rouché–Capelli theorem , any system of linear equations (underdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix . If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution; since in an underdetermined system this rank is necessarily less than the number of unknowns, there is indeed an infinitude of solutions, with the general solution having k free parameters, where k is the difference between the number of variables and the rank. There are algorithms to decide whether an underdetermined system has solutions, and if it has any, to express all solutions as linear functions of k of the variables (same k as above). The simplest one is Gaussian elimination . See System of linear equations for more details. A homogeneous underdetermined linear system (with all constant terms equal to zero) always has non-trivial solutions (in addition to the trivial solution where all the unknowns are zero). There is an infinity of such solutions, which form a vector space whose dimension is the difference between the number of unknowns and the rank of the matrix of the system. The main property of linear underdetermined systems, of having either no solution or infinitely many, extends to systems of polynomial equations in the following way. A system of polynomial equations which has fewer equations than unknowns is said to be underdetermined . It has either infinitely many complex solutions (or, more generally, solutions in an algebraically closed field ) or is inconsistent. It is inconsistent if and only if 0 = 1 is a linear combination (with polynomial coefficients) of the equations (this is Hilbert's Nullstellensatz ).
If an underdetermined system of t equations in n variables ( t < n ) has solutions, then the set of all complex solutions is an algebraic set of dimension at least n - t . If the underdetermined system is chosen at random the dimension is equal to n - t with probability one. In general, an underdetermined system of linear equations has an infinite number of solutions, if any. However, in optimization problems that are subject to linear equality constraints, only one of the solutions is relevant, namely the one giving the highest or lowest value of an objective function . Some problems specify that one or more of the variables are constrained to take on integer values. An integer constraint leads to integer programming and Diophantine equations problems, which may have only a finite number of solutions. Another kind of constraint, which appears in coding theory , especially in error correcting codes and signal processing (for example compressed sensing ), consists in an upper bound on the number of variables which may be different from zero. In error correcting codes, this bound corresponds to the maximal number of errors that may be corrected simultaneously.
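As a concrete illustration of the linear case, the following minimal NumPy sketch applies the Rouché–Capelli rank test to the consistent example above (x + y + z = 1, x + y + 2z = 3) and parameterizes the full solution set as a particular solution plus the null space of the coefficient matrix; the variable names are chosen for the sketch.

import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
b = np.array([1.0, 3.0])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print("consistent:", rank_A == rank_Ab)            # Rouché–Capelli test
print("free parameters k =", A.shape[1] - rank_A)  # here k = 1

x_p, *_ = np.linalg.lstsq(A, b, rcond=None)        # one particular solution

# Rows of Vt beyond the rank span the null space of A.
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[rank_A:]

# General solution: x_p + t * null_basis[0] for any scalar t.
for t in (-1.0, 0.0, 1.0):
    x = x_p + t * null_basis[0]
    print("t =", t, " x =", np.round(x, 3), " A @ x =", np.round(A @ x, 3))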
https://en.wikipedia.org/wiki/Underdetermined_system
Underglaze is a method of decorating pottery in which painted decoration is applied to the surface before it is covered with a transparent ceramic glaze and fired in a kiln . Because the glaze subsequently covers it, such decoration is completely durable, and it also allows the production of pottery with a surface that has a uniform sheen. Underglaze decoration uses pigments derived from oxides which fuse with the glaze when the piece is fired in a kiln. It is also a cheaper method, as only a single firing is needed, whereas overglaze decoration requires a second firing at a lower temperature. [ 1 ] Many historical styles, for example Persian mina'i ware , Japanese Imari ware , and Chinese doucai and wucai , combine the two types of decoration. In such cases the first firing for the body, underglaze decoration and glaze is followed by the second firing after the overglaze enamels have been applied. However, because the main or glost firing is at a higher temperature than used in overglaze decoration, the range of colours available in underglaze is more limited, and was especially so for porcelain in historical times, as the firing temperature required for the porcelain body is especially high. Early porcelain was largely restricted to underglaze blue and a range of browns and reds; other colours turned black in a high-temperature firing. [ 2 ] Examples of oxides that do not lose their colour during a glost firing are the cobalt blue made famous by Chinese Ming dynasty blue and white porcelain and the cobalt and turquoise blues, pale purple, sage green, and bole red characteristic of İznik pottery – only some European centres knew how to achieve a good red. [ 3 ] The painting styles used are covered at (among other articles): china painting , blue and white pottery , tin-glazed pottery , maiolica , Egyptian faience , and Delftware . In modern times a wider range of underglaze colours is available. An archaeological excavation at the Tongguan kiln site proved that the technology of underglaze colour arose in the Tang and Five Dynasties periods and originated from Tongguan , Changsha . [ 4 ] However, cobalt blue was first used in Persian pottery . [ 5 ] The technique has been very widely used for earthenware and porcelain , but much less often on stoneware . [ 6 ] Ancient Egyptian faience production in the New Kingdom period employed underglaze to produce green and blue pieces that are distinct from those of other eras of production. This was achieved by the use of an underglaze that contrasts with the overglaze. [ 7 ] This produces the effect of highlighting and lending spectral variance to relief patterns embossed into pieces of tableware such as bowls or jars. Desired blue and green finishes were achieved with the use of copper oxide in the glazing process. Ptolemaic faience has a self-glazing process. In addition to not using successive layers of glaze after the underglaze, Ptolemaic faience also employed a lower kiln temperature. [ 8 ] [ 9 ] At the firing stage a bake between 900 and 1,000 °C (1,650 and 1,830 °F) is applied to achieve a spectrum between turquoise blue and green. Underglaze in Ptolemaic faience was widely used for ushabti dolls produced en masse as grave goods in the late Kingdom period. Through the Yuan and Ming dynasties , imperial porcelain was produced with red oxide underglazes and the more popular cobalt blue .
[ 10 ] Cobalt blue underglaze porcelain was adopted into the imperial style for both domestic production and Chinese export porcelain under the Yuan, Ming and Qing dynasties. Until late in the Xuande period the cobalt was imported from Persia; it has specks with high iron and low manganese content. [ 11 ] This cobalt had a tendency to run when used in a tin glaze, and Persian artisans relied on the experimentation of the Chinese in Jingdezhen porcelain to achieve clear blue designs in their ceramics. Chinese whiteware was prized as an import in Islamic countries, [ 12 ] which would then trade cobalt for the manufacture of more Chinese porcelain. The Persian cobalt was later replaced by a Chinese form of cobalt whose ore had a higher composition of MnFe 2 O 4 ( jacobsite ) rather than Fe 3 O 4 ( iron(II,III) oxide ). Due to the Middle Eastern demand for blue and white porcelain, this underglaze technology was primarily used to create designs with Islamic decoration. Most styles in this group, such as Delftware , mostly used blue and white pottery decoration. Classical İznik pottery from the Ottoman Empire has a stonepaste or frit body, uses lead glazing rather than tin, and has usually been painted in polychrome. [ 13 ] Persian pottery , which was aware of Chinese styles throughout the period, made great use of underglaze decoration, but mostly in a single colour, often blue using the local cobalt, but also black. [ 14 ] Underglaze normally uses a transparent glaze, and therefore reveals the undecorated parts of the fired body. In porcelain these are white, but many of the imitative types, such as Delftware , have brownish earthenware bodies, which are given a white tin-glaze and either inglaze or overglaze decoration . With the English invention of creamware and other white-bodied earthenwares in the 18th century, underglaze decoration became widely used on earthenware as well as porcelain. Transfer printing of underglaze was developed in England in Staffordshire pottery from the 1760s. The patterns were produced in the same way as printed engravings, which were in industrial production at the time. A copper printing plate engraved with the design would transfer underglaze pigment to a piece of dampened tissue paper through a rolled press, and the paper could then be adhered to earthenware. The colourants were metallic pigments such as cobalt blue but also included chromium, used to create greens and browns. [ 8 ] To ensure clean transfer, a quick firing at a low temperature might be given to fix the colours, known as "hardening on". [ 15 ] Initially most production included just one colour, but later techniques were developed for printing in several colours. [ 16 ] One type of English creamware using blue, green, orange and yellow colours is known as " Prattware ", after the leading manufacturer. [ 17 ] This technique was also used in Europe and America in the 19th century on creamware . [ 18 ] Underglaze is available in a variety of colours from commercial retailers and is used in industrial production of pottery. [ 19 ] Low-firing-temperature underglazes have been formulated, as well as application options such as liquid pens of glaze or solid chalk blocks. The application of underglaze techniques such as stained slips has diversified, and a variety of artists have created independent chemical processes of their own to achieve desired effects.
Within commercial production there has been a decline in underglaze use in comparison to the 18th century, due to the creation and improvement of other glazing techniques that do not require such high firing temperatures. The vibrancy that only underglaze was once able to supply is now achievable with a variety of overglazes, removing the advantage that underglaze held in commercial production. A well-known New York underglaze tile and pottery decorator of the 1940s, Carol Janeway (1913–1989), was diagnosed with lead poisoning after eight years of using a lead-based overglaze, retiring in 1950. Her tiles' glazes tested strongly for lead in 2010 using X-ray fluorescence technology. [ 20 ] Underglaze transfers are a technique that involves screenprinting or free-handing a pattern onto a transfer paper (often rice paper or newspaper), which is then placed, dampened, and burnished onto the surface of a leather-hard piece of clay (similar to how a lick-and-stick tattoo might be applied). [ 21 ] Artists can acquire rice paper to make their own custom designs, and can also purchase pre-printed designs online. Unlike overglaze decals, underglaze decals are often applied to greenware and bisque and fired at higher temperatures. The desirability of specific periods of blue and white underglaze Chinese porcelain has led to wide and sophisticated forgery operations. The collector market for blue and white underglaze porcelain is notable due to Orientalism's popularity in Europe. Counterfeiting operations have developed both abroad and within China [ 22 ] to profit from the collectability of Ming and Qing dynasty blue and white porcelain. From the baroque period onward, there was a slight decline in the profitability of forging Chinese porcelain as European hard-paste techniques were developed but kept as industry secrets in countries such as Germany and France. Despite this, there still was, and continues to be, a high European demand for Chinese blue and white porcelain. In the last three decades there has been a considerable increase in demand for antique Ming and Qing porcelain amongst China's rising middle class, which has led to another growth in counterfeiting efforts to supply the large number of new collectors. Unlike previous rushes centred on Europe, this counterfeiting is performed within China and sold to its own population. [ citation needed ] Due to the extensive efforts to counterfeit Chinese blue and white porcelain, there has been a promotion of detailed scientific analysis of the composition of cobalt used in the underglazes through xeroradiography , which has provided insight into the chemical makeup of original underglaze recipes. This in turn reveals historical data about the supply and manufacturing industry within China at the time of production of each piece. Multiple academic and scientific enquiries are under way to quantify the physical and chemical composition of multiple types of underglaze. X-ray fluorescence is a primary building block of this work but is not sufficient on its own for full understanding. The more prevalent techniques include the use of synchrotron radiation-based methods. [ 23 ] These are used to analyse the microstructure of underglazes and to attempt to verify and date historical porcelains, such as those of the Ming dynasty. This functions as a method to identify pigments and their origin.
Such information is conducive to understanding the trade relations of nations at given times, as pigments were sourced internationally and speak to the relationships between nations or empires. Differing cobalts used to colour underglazes in the Middle East and Asia were traded, and evidence of this can be found by inspecting the microstructures [ 24 ] of historic samples of pottery using these underglazes, thereby supporting other archaeological data on the interactions of these cultures.
https://en.wikipedia.org/wiki/Underglaze
The Underground House in Ward, Colorado , was a subterranean dwelling known for its architectural design, which embraced the concept of underground living. The house was designed by architect Julian "Jay" Swayze (1923–1981) in the 1960s. The dwelling is an example of an unconventional approach to residential construction and integration with the natural environment. It was included in the Underground World Home exhibit at the 1964 New York World's Fair . [ 1 ] [ 2 ] In 1964, Girard Henderson had an underground home built on a 320 acres (130 ha) mountain ranch located near Ward, Colorado . [ 3 ] [ 4 ] The construction was completed by builders Julian "Jay" and Kenneth Swayze, from Plainview, Texas. [ 5 ] The Swayze brothers established a company known as Underground World Homes, specializing in the design and construction of full-sized underground residences. On May 13, 1963, Swayze initiated the process of securing a patent for his underground home design. Patent US3227061A was officially granted to Swayze on January 4, 1966, recognizing the underground home concept. This patent marked a milestone in the development of underground dwelling technologies. [ 6 ] Swayze's approach led the brothers to create various underground homes, including one called Atomitat in which Jay Swayze resided with his wife and two daughters. It was the first home in the U.S. to meet civil defense specifications for a nuclear shelter. Henderson became intrigued by the idea and decided to invest, acquiring a 51 percent share of Underground World Homes. [ 1 ] [ 7 ] During that same year, Henderson undertook the construction of an almost identical underground home, sponsoring the Underground World Home exhibit at the 1964 New York World's Fair , copying the concept pioneered by the Swayze brothers. [ 8 ] [ 9 ] [ 10 ] Henderson and his wife spent time on the property. [ 11 ] The Swayze brothers authored a book titled Underground Gardens & Homes: The best of two worlds, above and below. [ 2 ] Published in 1980, the book delved into the nuclear age, addressing the imperative need for comprehensive planning to safeguard against potential adverse consequences. [ 7 ] Situated twenty-eight miles (45 km) northwest of Boulder, Colorado , and at an elevation of 9,500 ft (2,900 m) above sea level, the dwelling, dubbed "Mountain Home" by its contractors, employed a building technique known as "ship-in-a-bottle", which deployed mountain-top removal, followed by the pouring of a concrete shell, and finally the reinstatement of the mountain top. [ 1 ] [ 2 ] The one-level, 3,400 sq ft (320 m 2 ) ranch-style underground earth shelter was designed to blend with the surroundings, with earth against the walls and on the roof. It had brick veneer siding but was enclosed in a waterproof concrete shell and covered with a compacted earth berm . The entrance was created to look like an opening to a mineshaft. To make the house functional, over $104,000 (equivalent to $1,054,395 in 2024) was spent on the hydroelectric system that supplied the underground dwelling with power. Water for the system flowed from glacial snowpack on Mt. Audubon. [ 5 ] More than $200,000 (equivalent to $2,027,682 in 2024) was spent in total to make the house livable. [ 12 ] To imitate the comforts of above-ground living, the wood-frame home had three bedrooms, a swimming pool, and a fake "outdoor" patio.
[ 11 ] Because the house had no windows, artist Jewell Smith painted Trompe-l'œil murals depicting the New York City skyline from the living room and the Golden Gate Bridge from a bedroom. [ 13 ] [ 1 ] Windows within the structure revealed a narrow corridor that served as a separation between the "exterior" wall and the concrete retaining wall. As noted by architecture historian Beatriz Colomina in her book Domesticity at War, this architectural element disrupted conventional notions of inside and outside. [ 14 ] The house had a remote-controlled lighting system that could imitate the night sky and sunrise. [ 7 ] Additionally, a fireplace channeled smoke through a fake tree trunk to the surface. [ 11 ] After Henderson died on November 16, 1983, the Colorado mountain property, including the underground home, was put up for sale for $1.5 million. It was purchased for $1.17 million by the Sacred Mountain Ashram on June 9, 1988, from a mysterious reclusive millionaire who was "terrified...of being caught in a nuclear holocaust." After the sale, the exterior walls of the underground house were dug free of dirt and windows were installed to allow sunlight into the home. [ 15 ] [ 5 ]
https://en.wikipedia.org/wiki/Underground_House_Colorado
The Underground House in Las Vegas , Nevada, is a Cold War -era subterranean dwelling. The structure was built in the wake of the Cuban Missile Crisis and was completed in 1978. [ 2 ] [ 3 ] In 1969, the Avon Products executive Girard B. Henderson relocated to Las Vegas , Nevada , and embarked on the construction of the Dawson buildings on Spencer Street and an underground house across the street, which took from 1974 to 1978 to build. Oswald Gutsche, the president of Alexander Dawson Inc., oversaw the building of the new underground residence, which was inspired by, and modeled on, the designs of Jay Swayze. He enlisted Frank Zupancic, a private contractor who had previously constructed Gutsche's own home, to undertake the construction. Henderson and his wife chose to reside in this subterranean abode, located at 3970 Spencer Street. [ 4 ] [ 1 ] The underground home is accessed via a stairwell or a 23 ft (7.0 m) elevator descent below ground level, which opens into the entry of the residence. The underground property consists of several key features, including the 6,000 sq ft (560 m 2 ) home centered in the 16,000 sq ft (1,500 m 2 ) space. [ 4 ] After Henderson died on November 16, 1983, his wife Mary lived in the underground house for a short while. Following her death on October 1, 1988, businessman Thomas "Tex" Edmonson (1908–2003) acquired the underground property. As the second husband of Lucy Henderson, Tex Edmonson purchased the property under the Tex-Tex Corporation, becoming the new owner of the underground dwelling. [ 4 ] An article in The New Yorker magazine described how Susan Roy, a magazine editor and architecture historian, first saw images of family fallout shelters, including this one, in 2003 in Nest magazine (published from 1997 to 2004). The experience led her to write the book Bamboozled: How the U.S. Government Misled Itself and Its People into Believing They Could Survive a Nuclear Attack. [ 5 ] [ 6 ] The ranch-style house is 23 ft (7.0 m) underground and has brick veneer siding but is enclosed in a waterproof concrete shell measuring approximately 16,000 sq ft (1,500 m 2 ) and covered with a compacted earth berm. The Clark County, Nevada records show that the Underground House is on 1.05 acres (0.42 ha). [ 1 ] The main house encompasses three bedrooms and three bathrooms, and includes small guest quarters. [ 4 ] The home, designed to sustain life for approximately one year, was equipped with an underground generator and fuel tank. [ 4 ] The interior design of the home reflects the Cold War era during which it was constructed. The prevailing atmosphere at the time, particularly in the aftermath of the Cuban Missile Crisis , was one of heightened concern among Americans regarding the looming threat of nuclear war. The homeowner held a firm conviction that the United States and the Soviet Union might continue to intensify their conflict, ultimately leading to a catastrophic nuclear confrontation . [ 4 ] The underground area was designed to imitate an above-ground setting, including grass-like carpet as an imitation lawn, artificial trees, and wall-to-wall, floor-to-ceiling scenery. A fireplace chimney channeled smoke through a "trunk and branches" of a fake tree on the surface. The house was lit with nearly 1,000 fluorescent lights. These lights, in four colors, could simulate the night sky as well as a sunrise.
[ 4 ] [ 7 ] The muralist Jewel Smith painted Trompe-l'œil murals depicting Henderson's sheep ranch at Cecil Peak Station, New Zealand, the ranch he owned in Colorado, a view of Los Angeles from Beverly Hills, and his childhood home in Suffern, New York . [ 7 ] The underground property has changed hands over the years. The property sold in 1990 for $1.3 million after Henderson's death, and again in 2005 for $2 million. The current owners bought it in 2014 for $1,150,000. In 2019 it was again on the market for $18 million, and in 2024 the price was reduced to $5.9 million. [ 8 ] [ 9 ] [ 10 ] [ 11 ] The purchasers, under the name "Society for the Preservation of Near Extinct Species," chose to remain anonymous when acquiring the property, which is now recognized as the Stasis Foundation. [ 1 ] [ 12 ]
https://en.wikipedia.org/wiki/Underground_House_Las_Vegas
An underground personnel carrier is any heavy-duty vehicle designed specifically for the safe transport of personnel and their supplies into underground work areas. The most common underground applications are in the mining of precious metals or coal . Where tight turning in confined spaces is necessary, personnel carriers designed on tractors are common. Heavy-duty fenders, bumpers and man baskets (gondolas) are fabricated to mount on the tractor or tractor frames to provide more durability. These personnel carriers use a front loader to perform various loader applications. A front basket is also typically attached to the front loader arms so that it can be raised. These carriers are typically built to carry 5 or 7 men. Some personnel carriers use a heavy-duty hydraulic rear hitch that can tow various attachments. Towing personnel carriers can be used in conjunction with a front loader as well. Other narrow-vein personnel carriers are designed for a specific job based on their attachments. These attachments include rear and/or front lift baskets for utility and electrical work, mechanic packages, cable reels, heavy-duty cranes, ANFO loaders, and shotcrete booms. These personnel carriers are designed and built from the ground up. They are typically 5-man to 15-man carriers. These carriers may also be designed using multiple attachments for job-specific applications. In the coal mining industry, low-profile personnel carriers are the most commonly used. These carriers may be only 3 to 3.5 feet (0.91 to 1.07 m) in height, carry up to 14 men, and are typically built from the ground up; they can also be designed with job-specific attachments. A mantrip is a shuttle for transporting miners down into an underground mine at the start of their shift, and out again at the end. Mantrips usually take the form of a train , running on a mine railway and operating like a cable car . Mantrips may also be self-powered, for example by a diesel locomotive . Other types of mantrips do not require a track and take the form of a pickup truck running on rubber tires. Because many mines have low ceilings, mantrips tend to have a reduced height. In the United States, the Mine Safety and Health Administration has published safety regulations governing the operation of mantrips. [ 1 ]
https://en.wikipedia.org/wiki/Underground_personnel_carrier
An underground storage tank ( UST ) is, according to United States federal regulations, a storage tank , including any underground piping connected to the tank, that has at least 10 percent of its volume underground. "Underground storage tank" or "UST" means any one or combination of tanks, including connected underground pipes, that is used to contain regulated substances, and whose volume (including the volume of the underground pipes) is 10 percent or more beneath the surface of the ground. [ 1 ] This does not include, among other things, any farm or residential tank of 1,100 gallons or less capacity used for storing motor fuel for noncommercial purposes, tanks for storing heating oil for consumption on the premises, or septic tanks . USTs are regulated in the United States by the U.S. Environmental Protection Agency to prevent the leaking of petroleum or other hazardous substances and the resulting contamination of groundwater and soil. [ 2 ] In 1984, the U.S. Congress amended the Resource Conservation and Recovery Act to include Subtitle I: Underground Storage Tanks, calling on the U.S. Environmental Protection Agency (EPA) to regulate the tanks. In 1985, when the program was launched, there were more than 2 million tanks in the country and more than 750,000 owners and operators. The program was given 90 staff to oversee this responsibility. [ 3 ] In September 1988, the EPA published initial underground storage tank regulations, including a 10-year phase-in period that required all operators to upgrade their USTs with spill prevention and leak detection equipment. [ 4 ] For USTs in service in the United States, the EPA and states collectively require tank operators to take financial responsibility for any releases or leaks associated with the operation of those below-ground tanks. As a condition of keeping a tank in operation, a demonstrated ability to pay for any release must be shown via UST insurance, a bond, or some other ability to pay. [ 5 ] EPA updated the UST and state program approval regulations in 2015, the first major changes since 1988. [ 6 ] The revisions increase the emphasis on properly operating and maintaining UST equipment, help prevent and detect UST releases (a leading source of groundwater contamination), and help ensure that all USTs in the United States, including those in Indian country, meet the same minimum standards. The changes established federal requirements that are similar to key portions of the Energy Policy Act of 2005 . In addition, EPA added new operation and maintenance requirements and addressed UST systems deferred in the 1988 UST regulation. The changes: Underground storage tanks fall into four different types: Underground storage tanks for water are traditionally called cisterns and are usually constructed from bricks and mortar or concrete . Petroleum USTs are used throughout North America at automobile filling stations and by the US military. Many have leaked , allowing petroleum to contaminate the soil and groundwater and enter as vapor into buildings, ending up as brownfields or Superfund sites. [ citation needed ] Many USTs installed before 1980 consisted of bare steel pipes, which corrode over time. [ citation needed ] Faulty installation can also cause structural failure of the tank or piping, causing leaks.
[ 7 ] The 1984 Hazardous and Solid Waste Amendments to the Resource Conservation and Recovery Act (RCRA) required EPA to develop regulations for the underground storage of motor fuels to minimize and prevent environmental damage, by mandating owners and operators of UST systems to verify, maintain, and clean up sites damaged by petroleum contamination. [ 8 ] In December 1988, EPA regulations requiring owners to locate, remove, upgrade, or replace underground storage tanks became effective. Each state was given authority to establish such a program within its own jurisdiction, to compensate owners for the cleanup of underground petroleum leaks, to set standards and licensing for installers, and to register and inspect underground tanks. [ citation needed ] Most upgrades to USTs consisted [ when? ] of the installation of corrosion control ( cathodic protection , interior lining, or a combination of cathodic protection and interior lining), overfill protection (to prevent overfills of the tank during tank filling operations), spill containment (to catch spills when filling), and leak detection for both the tank and piping. [ citation needed ] Many USTs were removed without replacement during the 10-year program. Many thousands of old underground tanks were replaced with newer tanks made of corrosion-resistant materials (such as fiberglass , steel clad with a thick FRP shell, and well-coated steel with galvanic anodes ), and others were constructed as double-walled tanks to form an interstice between two tank walls (a tank within a tank), which allowed leaks from the inner or outer tank wall to be detected through monitoring of the interstice using vacuum, pressure or a liquid sensor probe. Piping was replaced during the same period, with much of the new piping being of double-wall construction and made of fiberglass or plastic materials. [ citation needed ] Tank monitoring systems capable of detecting small leaks (they must be capable of detecting a 0.1 gallon-per-hour leak with a probability of detection of 95% or greater and a probability of false alarm of 5% or less) were installed, and other methods were adopted to alert the tank operator of leaks and potential leaks. [ citation needed ] U.S. regulations required that UST cathodic protection systems be tested by a cathodic protection expert (at a minimum, every three years) and that systems be monitored to ensure continued compliant operation. [ 9 ] Some industrial owners, who previously stored fuel in underground tanks, switched to above-ground tanks to avoid environmental regulations that require monitoring of fuel storage. Many states, however, do not permit above-ground storage of motor fuel for resale to the public. The EPA Underground Storage Tank Program is considered to have been very successful. [ according to whom? ] The national inventory of underground tanks has been reduced by more than half, and most of the rest have been replaced or upgraded to much safer standards. [ citation needed ] Of the approximately one million underground storage tank sites in the United States as of 2008, most of which handled some type of fuel, an estimated 500,000 have had leaks. [ 10 ] As of 2009, there were approximately 600,000 active USTs at 223,000 sites subject to federal regulation. [ 11 ] In 2012, EPA published guidance on how to screen buildings vulnerable to petroleum vapor intrusion , [ 12 ] and in June 2015, U.S.
EPA released its "Technical Guide for Assessing and Mitigating the Vapor Intrusion Pathway from Subsurface Vapor Sources to Indoor Air" and "Technical Guide For Addressing Petroleum Vapor Intrusion At Leaking Underground Storage Tank Sites". [ 13 ] Similarly to the US, the UK defines an underground tank as having 10% of its combined potential volume below the ground. [ 14 ] The requirements set by the Environment Agency for decommissioning an underground tank apply to all underground storage tanks, not just those used for the storage of fuels. [ 15 ] The Agency gives extensive guidance in The Blue Book and PETEL 65/34 . The Environment Agency states that any tank no longer in use should be immediately decommissioned. This process includes both the closing and removal of a UST system (the tank and any ancillaries connected to it) as a whole and the replacing of individual tanks or lengths of pipe. Regardless of whether the decommissioning of the tank is permanent or temporary, it must be ensured that the tank and all components do not cause pollution. This is true both of removal and of any filling of the tank with inert material. Decommissioning of a tank can be carried out by removing it from the ground after any volatile gas or liquid has been removed; this is called bottoming and degassing the tank. The other option involves filling the tank with either: If any plan is made to leave the tank on site, the owner will be responsible for keeping a record of: If any tanks and their pipework have been deemed unsuitable for petroleum spirits, then they should not be used for the storage of any hydrocarbon-based products without first checking their integrity.
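As an illustration of the 10-percent criterion shared by the US and UK definitions above, the following is a minimal sketch in C; the function name, parameters, and example volumes are assumptions made for the example and are not taken from the regulations themselves.

#include <stdio.h>

/* Illustrative sketch of the 10%-of-combined-volume criterion described above.
 * All names and example volumes are hypothetical, not regulatory values. */
static int is_regulated_ust(double tank_below, double tank_above,
                            double pipe_below, double pipe_above) {
    double below = tank_below + pipe_below;          /* volume underground */
    double total = below + tank_above + pipe_above;  /* combined volume    */
    if (total <= 0.0)
        return 0;
    return (below / total) >= 0.10;                  /* 10 percent or more */
}

int main(void) {
    /* e.g. a fully buried 10,000-litre tank with 500 litres of surface piping */
    printf("%s\n", is_regulated_ust(10000.0, 0.0, 0.0, 500.0)
                       ? "meets the 10% underground-volume test"
                       : "does not meet the 10% underground-volume test");
    return 0;
}

In practice the statutory definitions also carry the exclusions listed earlier (small farm and residential motor-fuel tanks, heating-oil tanks, septic tanks), which this arithmetic sketch does not attempt to model.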
https://en.wikipedia.org/wiki/Underground_storage_tank
The Underhanded C Contest was a programming contest to turn out code that is malicious, but passes a rigorous inspection and looks like an honest mistake even if discovered . The contest rules define a task and a malicious component. Entries must perform the task in a malicious manner as defined by the contest, and hide the malice. Contestants are allowed to use C -like compiled languages to make their programs. [ 1 ] The contest was organized by Dr. Scott Craver [ 2 ] of the Department of Electrical Engineering at Binghamton University . The contest was initially inspired by Daniel Horn's Obfuscated V contest in the fall of 2004. [ 3 ] For the 2005 to 2008 contests, the prize was a $100 gift certificate to ThinkGeek . The 2009 contest had its prize increased to $200 due to the very late announcement of winners, and the prize for the 2013 contest was also a $200 gift certificate. The 2005 contest set the task of basic image processing , such as resampling or smoothing, while covertly inserting unique and useful " fingerprinting " data into the image. Winning entries from 2005 used uninitialized data structures, reuse of pointers , and an embedding of machine code in constants . The 2006 contest required entries to count word occurrences, but have vastly different runtimes on different platforms. To accomplish the task, entries used fork implementation errors, optimization problems, endian differences and various API implementation differences. The winner called strlen() in a loop, leading to quadratic complexity, which was optimized out by a Linux compiler but not by the Windows compiler. The 2007 contest required entries to encrypt and decrypt files with a strong, readily available encryption algorithm such that a low percentage (1%–0.01%) of the encrypted files could be cracked in a reasonably short time. The contest commenced on April 16 and ended on July 4. Entries used misimplementations of RC4, misused API calls, and incorrect function prototypes. The 2008 contest required entries to redact a rectangular portion of a PPM image in a way that the portion could be reconstructed. Any method of "blocking out" the rectangle was allowed, as long as the original pixels were removed, and the pixel reconstruction did not have to be perfect [ 4 ] (although the reconstruction's fidelity to the original file would be a factor in judging). The contest began on June 12, and ended on September 30. Entries tended to either xor the region with a retrievable pseudo-random mask or append the masked data to the end of the file format. The two second-place programs both used improperly defined macros, while the winner, choosing to work with an uncommon text-based format, zeroed out pixel values while keeping the number of digits intact. The 2009 contest required participants to write a program that sifts through routing directives but redirects a piece of luggage based on some innocuous-looking comment in the space-delimited input data file. The contest began December 29, 2009, and was due to end on March 1, 2010. [ 5 ] However, no activity occurred for three years. The winners were only announced on April 1, 2013, with one overall winner and six runners-up. [ 6 ] [ 7 ] The 2013 contest was announced on April 1, 2013, and was due July 4, 2013; results were announced on September 29, 2014. [ 8 ] It was about a fictional social website called "ObsessBook".
The challenge consisted of writing a function to compute the DERPCON (Degrees of Edge-Reachable Personal CONnection) between two users that "accidentally" computes a lower distance for a special user. The 2014 contest was announced on November 2, 2014, and was due January 1, 2015. The results were announced on June 1, 2015. [ 9 ] The objective was to write surveillance code for a Twitter -like social networking service , to comply with a secret government surveillance request; but for non-obvious reasons, the code had to subtly leak the act of surveillance to a user. The general approach was to disguise writes to the user's data as writes to the surveillance data, and the winning entry did so by implementing a buggy time-checking function that overwrites its input. The 2015 contest was announced on August 15, 2015, and was due November 15, 2015. The results were announced on January 15, 2016. The scenario was a nuclear disarmament process between the People's Glorious Democratic Republic of Alice and the Glorious Democratic People's Republic of Bob ( Alice and Bob ), and the mission was to write a test function for comparing potentially fissile material against a reference sample, which under certain circumstances would label a warhead as containing fissile material when it does not. Around a third of the submissions used NaN poisoning by erroneous floating-point operations, which generates more NaNs in later computations and makes any comparison evaluate to false. The winning entry used a confusion of datatypes between double and float to distort values.
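To illustrate the two floating-point failure modes mentioned above (NaN poisoning and double-to-float narrowing), the following is a minimal, stand-alone C sketch. It is not an actual contest entry; the function names and tolerance values are invented for the example.

#include <math.h>
#include <stdio.h>

/* Illustrative only: two ways a "does the sample match the reference?"
 * test can be quietly defeated.
 * 1. Any ordered comparison involving NaN is false, so a computation that
 *    has been "poisoned" with NaN upstream fails every threshold test.
 * 2. Narrowing double to float silently discards low-order bits, so two
 *    values that differ below float precision compare as equal. */

static int matches_within_tolerance(double sample, double reference, double tol) {
    double diff = fabs(sample - reference);
    return diff < tol;              /* false for ANY tol if diff is NaN */
}

static int matches_after_narrowing(double sample, double reference) {
    float s = (float)sample;        /* silent loss of precision */
    float r = (float)reference;
    return s == r;                  /* may be true even though sample != reference */
}

int main(void) {
    double zero = 0.0;
    double poisoned = zero / zero;  /* an "erroneous" operation yielding NaN */
    printf("NaN-poisoned test: %d\n",
           matches_within_tolerance(poisoned, 1.0, 1.0e9));
    printf("narrowed test:     %d\n",
           matches_after_narrowing(1.0, 1.0 + 1.0e-12));
    return 0;
}

Compiled with any C99 compiler (linking the math library), the first test prints 0 even with an enormous tolerance, while the second prints 1 despite the two inputs differing, which is the essence of the tricks described in the 2015 results.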
https://en.wikipedia.org/wiki/Underhanded_C_Contest
In particle physics , underlying event (UE) refers to the additional interactions of two particle beams at a collision point beyond the main collision under study. Specifically, the term is used for hadron collider events which do not originate from the primary hard scattering (high energy, high momentum impact) process. [ 1 ] The term was first defined in 2002. [ 2 ] [ 3 ] Underlying events can be thought of as the remnants of scattering interactions . [ 4 ] [ 5 ] The UE may involve contributions from both "hard" and "soft" processes (here "soft" refers to interactions with low transverse-momentum (p T ) transfer [ 6 ] ). These are important both in the simulation of particle experiments (often using event generators ) and in the interpretation and analysis of data, so as to filter out the desired signals . [ 7 ] Contents of the UE include initial and final state radiation , beam-beam remnants, multiple parton interactions, pile-up, and noise . [ 5 ]
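For reference (a standard kinematic definition, not taken from the cited sources): with the beam line conventionally taken as the z axis, the transverse momentum of a particle with momentum components (p_x, p_y, p_z) is p_T = √(p_x² + p_y²), i.e. the component of momentum perpendicular to the colliding beams; "soft" underlying-event activity therefore corresponds to interactions in which little momentum is exchanged transverse to the beam direction.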
https://en.wikipedia.org/wiki/Underlying_event
In construction or renovation , underpinning is the process of strengthening the foundation of an existing building or other structure . Underpinning may be necessary for a variety of reasons: Underpinning may be accomplished by extending the foundation in depth or breadth so it either rests on a more supportive soil stratum or distributes its load across a greater area. Use of micropiles [ 1 ] and jet grouting are common methods in underpinning. Underpinning may be necessary where P class (problem) soils are encountered in certain areas of the site. Through semantic change the word underpinning has evolved to encompass all abstract concepts that serve as a foundation. Mass concrete underpinning is one of the simplest forms of remedial underpinning at shallow depths. This type of underpinning is done by excavating "bays" along and under the existing foundation and filling them with mass concrete. It is sometimes called a "traditional" method to distinguish it from other types of underpinning like piling and needling, which often require underpinning specialists and may use proprietary underpinning systems. Mass concrete underpinning work is performed in compliance with the Party Wall Act (in the UK) using plans that are designed with engineering calculations to plan a sequence of excavating bays along and underneath the existing foundation without damaging existing walls. In some cases walls have collapsed because lateral support was inadequate, leading to disputes among contractors, subcontractors and architects about where the responsibility for the mistake lay. [ 2 ] [ 3 ] [ 4 ] Mass concrete underpinning is commonly used when permanent support is needed to comply with the Party Wall Act of 1996 in the construction of a new basement during a restoration, rehabilitation or redevelopment. [ 5 ] In the United Kingdom most subsidence claims are for buildings at least 40 years old with shallow strip foundations . This is one of the most common types of foundation suffering from subsidence-related damage, and according to the Building Research Establishment subsidence database, mass concrete underpinning was the most common underpinning method and was often applied only to part of a building. If the soils have a low bearing capacity, partial underpinning may increase the risk of differential settlement and localized settlement due to additional load on the soil. [ 6 ] The beam and base method of underpinning is a more technically advanced adaptation of traditional mass concrete underpinning. A reinforced concrete beam is constructed below, above or in replacement of the existing footing. The beam then transfers the load of the building to mass concrete bases, which are constructed at strategically designed locations. Base sizes and depths depend upon the prevailing ground conditions. Beam design depends upon the configuration of the building and the applied loads. Anti-heave precautions are often incorporated in schemes where potential expansion of clay soils may occur. Mini-piles have the greatest use where ground conditions are variable, where access is restrictive, where environmental pollution aspects are significant, and where structural movements in service must be minimal. Mini-piled underpinning is generally used when the loads from the foundations need to be transferred to stable soils at considerable depths – usually in excess of 5 m (16 ft). Mini-piles may be either augered or driven steel-cased, and are normally between 150 mm (5.9 in) and 300 mm (12 in) in diameter.
Structural engineers will use rigs which are specifically designed to operate in environments with restricted headroom and limited space, and which can gain access through a regular domestic doorway. They are capable of constructing piles to depths of up to 15 m (49 ft). The technique of minipiling was first applied in Italy in 1952, and has gone by many different names, reflecting worldwide acceptance and the expiration of the original patents. [ 7 ] The relatively small diameter of mini-piles is distinctive of this type of underpinning, which generally uses anchoring or tie-backs into an existing structure or rock. Conventional drilling and grouting methods are used for this method of underpinning. These mini-piles have a high slenderness ratio, feature substantial steel reinforcing elements and can sustain axial loading in both senses. [ 7 ] Mini-piles can sustain working loads of up to 1,000 kN (100 long tons-force; 110 short tons-force). In comparison to mass concrete underpinning, the engineering aspect of mini-piles is somewhat more involved, drawing on rudimentary engineering mechanics such as statics and strength of materials. These mini-piles must be designed to work in tension and compression, depending on the orientation and application of the design. In detail, analytical attention in design must be paid to settlement, bursting, buckling, cracking, and interface considerations, whereas, from a practical viewpoint, corrosion resistance and compatibility with the existing ground and structure must be considered. Mini-piled underpinning schemes include pile and beam, cantilever pile-caps and piled raft systems. Cantilevered pile-caps are usually used to avoid disturbing the inside of a building, and require the construction of tension and compression piles to each cap. These are normally linked by a beam. The pile and beam system usually involves constructing pairs of piles on either side of the wall and linking them with a pile cap to support the wall. The pile caps are usually linked by reinforced concrete beams to support the entire length of the wall. Piled raft underpinning systems are commonly used when an entire building needs to be underpinned. The internal floors are completely removed, a grid of piles is installed, and a reinforced concrete raft is then constructed over the complete floor level, picking up and fully supporting all external and internal walls. One of the uses of the soil improvement technique of pressure grouting is foundation underpinning, especially during excavations such as subway construction. [ 8 ] This technique has been used in municipal development projects of significant scope in the United States. One of the largest chemical grouting projects was the extension of the Pittsburgh Light Rail Transit subway, when the foundations of six large buildings needed to be protected from ground movement and related building settlement. [ 9 ] Pressure grouting can be low pressure or high pressure. "Jet grouting" is a general term used for high-pressure grouting, where high-pressure air, water and cementing grout are injected into the ground at high velocity. This can be done with single-tube systems to mix the grout with in situ soil to form a grouted "column" in the ground. Double-tube and triple-tube systems using air and water remove some of the soil to create larger grouted bulbs or columns. [ 8 ] Low-pressure chemical injection grouting is used to underpin structures in sandy soils. [ 10 ]
https://en.wikipedia.org/wiki/Underpinning
Underpotential deposition ( UPD ), in electrochemistry , is a phenomenon of electrodeposition of a species (typically reduction of a metal cation to a solid metal) at a potential less negative than the equilibrium ( Nernst ) potential for the reduction of this metal. The equilibrium potential for the reduction of a metal in this context is the potential at which it will deposit onto itself. Underpotential deposition can then be understood as occurring when a metal can deposit onto another material more easily than it can deposit onto itself. The occurrence of underpotential deposition is often interpreted as a result of a strong interaction between the electrodepositing metal M and the substrate S (of which the electrode is built). The M-S interaction needs to be energetically favoured over the M-M interaction in the crystal lattice of the pure metal M. This mechanism is deduced from the observation that UPD typically occurs only up to a monolayer of M (sometimes up to two monolayers). The electrodeposition of a metal on a substrate of the same metal occurs at the equilibrium potential , thus defining the reference point for underpotential deposition. Underpotential deposition is much sharper on monocrystals than on polycrystalline materials. [ 1 ]
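For context (a standard textbook relation, not taken from this article's sources): for the reduction couple M^z+ + z e− ⇌ M, the equilibrium (Nernst) potential is E_eq = E° + (RT/zF) ln a(M^z+), where E° is the standard electrode potential, R the gas constant, T the absolute temperature, z the number of electrons transferred, F the Faraday constant, and a(M^z+) the activity of the metal ion; underpotential deposition then refers to deposition of up to about a monolayer of M on a foreign substrate at potentials positive of (less negative than) E_eq.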
https://en.wikipedia.org/wiki/Underpotential_deposition
In forestry and ecology , understory ( American English ), or understorey ( Commonwealth English ), also known as underbrush or undergrowth , includes plant life growing beneath the forest canopy without penetrating it to any great extent, but above the forest floor . Only a small percentage of light penetrates the canopy, so understory vegetation is generally shade-tolerant . The understory typically consists of trees stunted through lack of light, other small trees with low light requirements, saplings, shrubs, vines, and undergrowth. Small trees such as holly and dogwood are understory specialists. In temperate deciduous forests , many understory plants start into growth earlier in the year than the canopy trees, to make use of the greater availability of light at that particular time of year. A gap in the canopy caused by the death of a tree stimulates the potential emergent trees into competitive growth as they grow upward to fill the gap. These trees tend to have straight trunks and few lower branches. At the same time, the bushes, undergrowth, and plant life on the forest floor become denser. The understory experiences greater humidity than the canopy, and the shaded ground does not vary in temperature as much as open ground. This causes a proliferation of ferns , mosses , and fungi and encourages nutrient recycling , which provides favorable habitats for many animals and plants. The understory is the underlying layer of vegetation in a forest or wooded area, especially the trees and shrubs growing between the forest canopy and the forest floor. Plants in the understory comprise an assortment of seedlings and saplings of canopy trees together with specialist understory shrubs and herbs. Young canopy trees often persist in the understory for decades as suppressed juveniles until an opening in the forest overstory permits their growth into the canopy. In contrast, understory shrubs complete their life cycles in the shade of the forest canopy. Some smaller tree species, such as dogwood and holly , rarely grow tall and generally are understory trees. The canopy of a tropical forest is typically about 10 m (33 ft) thick, and intercepts around 95% of the sunlight. [ 1 ] The understory therefore receives less intense light than plants in the canopy, and such light as does penetrate is impoverished in the wavelengths that are most effective for photosynthesis. Understory plants therefore must be shade tolerant —they must be able to photosynthesize adequately using such light as does reach their leaves. They often are able to use wavelengths that canopy plants cannot. In temperate deciduous forests , towards the end of the leafless season, understory plants take advantage of the shelter of the still leafless canopy plants to "leaf out" before the canopy trees do. This is important because it provides the understory plants with a window in which to photosynthesize without the canopy shading them. This brief period (usually 1–2 weeks) is often crucial in allowing the plant to maintain a net positive carbon balance over the course of the year. As a rule, forest understories also experience higher humidity than exposed areas. The forest canopy reduces solar radiation, so the ground does not heat up or cool down as rapidly as open ground. Consequently, the understory dries out more slowly than more exposed areas do. The greater humidity encourages epiphytes such as ferns and mosses, and allows fungi and other decomposers to flourish.
This drives nutrient cycling , and provides favorable microclimates for many animals and plants , such as the pygmy marmoset . [ 2 ]
https://en.wikipedia.org/wiki/Understory
The undervoltage-lockout ( UVLO ) is an electronic circuit used to turn off the power of an electronic device if the supply voltage drops below the operational value, below which unpredictable system behavior could otherwise occur. For instance, in battery-powered embedded devices , UVLOs can be used to monitor the battery voltage and turn off the embedded device's circuit if the battery voltage drops below a specific threshold, thus protecting the associated equipment from deep discharge . Some variants may also have distinct values for the power-up (positive-going) and power-down (negative-going) thresholds. [ 1 ] Typical usages include:
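As an illustration of the separate positive-going and negative-going thresholds mentioned above, the following is a minimal firmware-style sketch in C; the threshold voltages and function names are assumptions made for the example, not values from any particular UVLO device.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative UVLO logic with hysteresis: the output is enabled only once the
 * supply rises above UVLO_V_ON and is cut once it falls below UVLO_V_OFF.
 * Because UVLO_V_ON > UVLO_V_OFF, the output does not chatter when the voltage
 * hovers near a single threshold. All values here are hypothetical. */
#define UVLO_V_ON  3.30   /* power-up (positive-going) threshold, volts   */
#define UVLO_V_OFF 3.00   /* power-down (negative-going) threshold, volts */

static bool uvlo_update(bool output_enabled, double supply_volts) {
    if (output_enabled)
        return supply_volts > UVLO_V_OFF;  /* stay on until the lower threshold */
    return supply_volts > UVLO_V_ON;       /* turn on only above the upper threshold */
}

int main(void) {
    const double samples[] = { 3.6, 3.2, 3.1, 2.9, 3.1, 3.2, 3.4 };
    bool on = false;
    for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; ++i) {
        on = uvlo_update(on, samples[i]);
        printf("V = %.1f -> %s\n", samples[i], on ? "ON" : "OFF");
    }
    return 0;
}

In a real device the comparison is usually performed in hardware by a comparator with built-in hysteresis, or in firmware against raw ADC counts rather than floating-point volts; the separation of the two thresholds is what prevents repeated on/off cycling as the supply sags and recovers.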
https://en.wikipedia.org/wiki/Undervoltage-lockout
The Underwater Association (UA) was a research association focused on the conduct of research underwater by diving scientists and archaeologists . It was established by a group of UK scientists in Malta in 1966 to assist in the organisation and publication of British diving science. Membership grew to over 400, with approximately one third joining from outside the UK. From 1972 to 1979 the UA published a Code of Practice for scientific diving. This was expanded in 1987 and 1990 to form the UNESCO Code of practice. [ 1 ] [ 2 ] Membership declined in the late 1980s, and the UA merged with the Society for Underwater Technology in 1992. [ 3 ] The Underwater Association grew out of the popularity of scuba sports diving clubs in British universities in the 1950s and 60s. Diving became popular after the introduction of the first successful and safe open-circuit scuba set , the Aqua-Lung , in the 1940s, and clubs were established in many British universities from 1957 onwards. Members of these clubs, and of research institutions, used scuba diving to pursue various scientific projects under water. Some projects involved the study of diver physiology and psychology, while others made use of diving to study marine biology , underwater archaeology , geology, physics and other topics. [ 4 ] Members of the Cambridge University Underwater Exploration Group and Imperial College London were particularly active. They ran expeditions to Malta in the early 1960s, and enjoyed close cooperation with the Royal Navy, which provided compressed air and a recompression chamber . In 1965 five different scientific diving teams were active in Malta. The different teams were organised as follows: the Cambridge University Malta Expedition 1965, an undergraduate group studying diurnal behaviour in marine invertebrates (winners of the first ever Duke of Edinburgh/British Sub-Aqua Club award for diving science); and a group from Oxford University, studying mainly algae [ 5 ] and geology. There was also a Vision Group from various institutions, studying the visibility of underwater objects and the perception of size and distance; [ 6 ] a group led by John Woods from the Physics Department, Imperial College London, studying thermocline instability; [ 7 ] and a Helium Group, led by Nic Flemming from Cambridge University, studying the psychological and ergonomic efficiency of divers breathing heliox and air at a depth of 60 m. In total, 32 divers were involved in these projects. The groups shared many facilities, and a conference was organised to present the results in late 1965, with the papers published in 1966. [ 8 ] After this publication, the group established an association as a company limited by guarantee, which was registered as a charitable organization . The Underwater Association for Malta 1966 was established under that name in 1966 to assist in the organisation and publication of British diving science. The first organising committee consisted of John N. Lythgoe (Institute of Ophthalmology, London), John D. Woods (Physics Department, Imperial College, London), Nicholas C. Flemming (Pembroke College, Cambridge), Anthony Larkum (Botany, Cambridge), Andrew E. Dorey (Zoology, Bristol), and Christopher C. "Bill" Hemmings (Fisheries Laboratory, Aberdeen). Bill Hemmings was appointed the first Chair. Subsequent Chairs were Nic Flemming, John Lythgoe and Richard Pagett. As the Association expanded and broadened its range of international recruitment, the name was shortened to the Underwater Association.
After a few years the use of Malta as a central base for multiple projects was abandoned, and the Association expanded its objectives to promote underwater science at any location, attracting one third of its membership from outside the UK. The Underwater Association corresponded and collaborated with equivalent voluntary bodies in other countries, especially the American Academy of Underwater Sciences in the USA. The methods and goals of the Underwater Association helped equivalent bodies to develop in South Africa and Australia. [ 3 ] The UA held its first symposium in 1965 to discuss the results of that year's expeditions. It took place in the Physics Department of Imperial College London , on 29 October. The resulting papers, and those of symposia from 1966 to 1969, were published in booklet form. The proceedings volumes were reviewed in maritime research journals for the information of other marine scientists. [ 9 ] Woods and Lythgoe in 1971 [ 10 ] and Drew, Lythgoe and Woods in 1976 [ 11 ] also published books on underwater science with contributions from members of the UA and elsewhere. The Association also published a newsletter several times each year, circulated by post, which was an essential component of its success in the days before digital communications. Edward Drew and Helen Ross were newsletter editors for much of this time. Copies of most of the UA publications on the proceedings of symposia are held by the library of the National Oceanography Centre, Southampton. The Underwater Association provided a vehicle for marine scientists who used diving to pool experiences, learn different techniques, experiment with different breathing gas mixtures and diving gear, and publish their results in a way that emphasised the effectiveness of diving, as well as the skills needed to work efficiently underwater. This achievement coexisted with routine publishing in the mainstream single-discipline refereed journals, which demonstrated the high quality of the research. Published papers by members in the Proceedings volumes and other academic journals included research on diving medicine, psychology, marine biology, fisheries, marine geology, marine physics, archaeology, and oceanographic engineering. The Underwater Association enabled many young marine scientists to develop their careers at a time when diving was not a common research method. [ 4 ] The following selected citations illustrate some significant papers in the disciplines of marine physics, [ 7 ] psychology of perception, [ 6 ] geoarchaeology, [ 12 ] archaeology, [ 13 ] marine botany, [ 14 ] marine algae, [ 5 ] marine zoology, [ 15 ] the psychology of memory in different environments, [ 16 ] and diving physiology. [ 17 ] During the early 1970s there were increasing numbers of accidents to divers operating commercially in the North Sea oil fields. The UK Health and Safety Executive (HSE) published a consultative document on proposed new regulations for divers at work in 1978, and new statutory regulations came into force in 1981. These were updated in 1997. The Underwater Association, whose members had an accident-free safety record, published an advisory Code of Practice for Scientific Diving in 1972, [ 18 ] which was revised and updated in 1974, [ 19 ] after which it was adopted as the standard for the Natural Environment Research Council (NERC) [ 20 ] [ 21 ] and other governmental research diving organisations.
This code was circulated internationally, which led to the convening of a UNESCO committee of the Intergovernmental Oceanographic Commission, whose purpose was to publish an international version, compatible with the safety laws in many countries. [ 1 ] [ 2 ] NERC, a UK government agency which employed marine scientists in several institutes, adopted the UA Code of Practice and co-published it through several editions (Code in references). The Royal Geographical Society Expedition Advisory Centre recommended the UA Code for use in diving research projects (Palmer). The UA Code helped in the negotiations with the HSE, firstly to obtain Exemptions for Scientific Diving (HSE), and later to create new regulations for specialised diving groups through supervisory Approved Codes of Practice such as the Scientific and archaeological diving projects: Diving at work regulations 1997 . The negotiations on safety also led to the formation of the Scientific Diving Supervisory Committee [ 22 ] as an advisory body for UK governmental research groups. Reciprocal standards of training and safety for scientific diving were also agreed throughout European agencies and national bodies. [ 1 ] [ 2 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] The Underwater Association performed a vital role in the early stages of scientific diving, but ceased to function in 1992. The situation had changed since the 1960s, when it was very difficult for diving scientists to share expertise, and scientific papers in narrow-discipline journals seldom mentioned the use of diving. Before the days of digital search engines, it was hard to assess the scope of underwater observations, or to learn from the experimental techniques of other researchers. The best way to develop underwater research was to organise national and international meetings, and to publish the proceedings. As digital media and citation indices grew in importance, academics preferred to publish in journals specific to their own disciplines rather than in general diving publications, and they no longer needed to attend meetings devoted to diving science. Diving for research purposes had become a common practice. As membership declined, a merger with the Society for Underwater Technology (SUT) [ 23 ] was organised by John Bevan.
https://en.wikipedia.org/wiki/Underwater_Association
The Underwater Demolition Teams ( UDTs ), or frogmen , were amphibious units created by the United States Navy during World War II with specialized missions. They were predecessors of the Navy's current SEAL teams . Their primary WWII function began with reconnaissance and underwater demolition of natural or man-made obstacles obstructing amphibious landings . Postwar, they transitioned to scuba gear, changing their capabilities. With that, they came to be considered more elite and tactical during the Korean and Vietnam Wars . UDTs were pioneers in underwater demolition , closed-circuit diving , combat swimming , riverine warfare and midget submarine (dry and wet submersible) operations. They were later tasked with ensuring recovery of space capsules and astronauts after splashdown in the Mercury , Gemini and Apollo space flight programs. [ 1 ] Commando training was added, making them the forerunner of the United States Navy SEAL program that exists today. [ 2 ] By 1983, the UDTs were re-designated as SEAL Teams or Swimmer Delivery Vehicle Teams (SDVTs); however, some UDTs had already been re-designated into UCTs and special boat units. SDVTs have since been re-designated SEAL Delivery Vehicle Teams . [ 3 ] The United States Navy studied the problems encountered by the disastrous Allied amphibious landings during the Gallipoli Campaign of World War I . This contributed to the development and experimentation of new landing techniques in the mid-1930s. In August 1941, landing trials were performed, and one hazardous operation led to Army Second Lieutenant Lloyd E. Peddicord being assigned the task of analyzing the need for a human intelligence (HUMINT) capability. [ 2 ] When the U.S. entered World War II, the Navy realized that in order to strike at the Axis powers the U.S. forces would need to perform a large number of amphibious attacks. The Navy decided that men would have to go in to reconnoiter the landing beaches, locate obstacles and defenses, and guide the landing forces ashore. In August 1942, Peddicord set up a recon school for his new unit, Navy Scouts and Raiders , at the amphibious training base at Little Creek, Virginia . [ 2 ] In 1942, the Army and Navy jointly established the Amphibious Scout and Raider School at Fort Pierce , Florida . Here Lieutenant Commander Phil H. Bucklew , the "Father of Naval Special Warfare", helped organize and train what became the Navy's "first group" to specialize in amphibious raids and tactics. The need for intelligence gathering prior to landings became paramount following the amphibious assault at the Battle of Tarawa in November 1943. Although Navy and Marine Corps planners had identified coral as an issue, they incorrectly assumed landing craft would be able to crawl over the coral. Marines were forced to exit their craft in chest-deep water a thousand yards from shore, with many men drowning due to the irregularities of the reefs and Japanese gunners inflicting heavy U.S. casualties. [ 2 ] After that experience, Rear Admiral Kelly Turner , Commander of the V Amphibious Corps (VAC), directed Seabee Lt. Crist (CEC) to come up with a means to deal with the coral and the men to do it. Lt. Crist staged 30 officers and 150 enlisted men from the 7th Naval Construction Regiment [ 4 ] at Waipio Amphibious Operating Base on Oahu to form the nucleus of a reconnaissance and demolition training program. It was here that the UDTs of the Pacific were born. [ 5 ] [ 6 ] Later in the war, the Army Engineers passed down demolition jobs to the U.S. Navy.
It then became the Navy's responsibility to clear any obstacles and defenses in the near-shore area. [ citation needed ] A memorial to the founding of the UDT has been built at Bellows Air Force Station near the original Amphibious Training Base (ATB) in Oahu. In early May 1943, a two-phase "Naval Demolition Project" was ordered by the Chief of Naval Operations (CNO) "to meet a present and urgent requirement". The first phase began at Amphibious Training Base (ATB) Solomons, Maryland, with the establishment of Operational Naval Demolition Unit No. 1. Six officers and eighteen enlisted men reported from the Seabees dynamiting and demolition school at Camp Peary for a four-week course. [ 7 ] [ 8 ] Those Seabees were immediately sent to participate in the invasion of Sicily , [ 9 ] where they were divided into three groups that landed on the beaches near Licata , Gela and Scoglitti . [ 10 ] Also in May, the Navy created Naval Combat Demolition Units (NCDUs) tasked with eliminating beach obstructions in advance of amphibious assaults, going ashore in an LCRS inflatable boat . [ 11 ] Each NCDU consisted of five enlisted men led by a single, junior (CEC) officer. In early May, Chief of Naval Operations Admiral Ernest J. King picked Lieutenant Commander Draper L. Kauffman to lead the training. The first six classes graduated from "Area E" at the Seabees' Camp Peary between May and mid-July. [ 12 ] Training was moved to Fort Pierce, Florida, where the first class began in mid-July 1943. Despite the move, and having the Scouts and Raiders base close by, Camp Peary was Kauffman's primary source of recruits. He would go up to Camp Peary's Dynamite School and assemble the Seabees in the auditorium, saying: "I need volunteers for hazardous, prolonged and distant duty." [ 13 ] Kauffman's other volunteers came from the U.S. Marines and U.S. Army combat engineers . Training commenced with one grueling week designed to "separate the men from the boys". Some said that "the men had sense enough to quit, leaving Kauffman with the boys." [ 14 ] It was, and is still, considered the first " Hell Week ". In early November 1943, NCDU-11 was assigned as the advance NCDU party for Operation Overlord . They would be joined in England by 33 more NCDUs. They trained with the 146th, 277th and 299th Combat Engineers to prepare for the landing. [ 15 ] Each unit had five combat engineers attached to it. The first 10 NCDUs were divided into three groups. [ 15 ] The senior officer, by rank, was the commanding officer of Group III, Lieutenant Smith (CEC). He assumed command in an unofficial capacity. [ 15 ] His Group III worked on experimental demolitions and developed the Hagensen Pack, [ 15 ] an innovation that used 2.5 pounds (1.1 kg) of tetryl placed into rubber tubes that could be twisted around obstacles. [ 16 ] As more teams arrived, an NCDU Command was created for NCDUs 11, 22–30, 41–46, 127–28, and 130–42. [ 15 ] The Germans had constructed elaborate defenses on the French coast. These included steel posts driven into the beach and topped with explosive charges. Large 3-ton steel barricades called Belgian Gates and hedgehogs were placed throughout the tidal zone. Behind these was a network of reinforced coastal artillery , mortar and machine gun positions. The Scouts and Raiders spent weeks gathering information during nightly surveillance missions up and down the French coast. Replicas of the Belgian Gates were constructed on the south coast of England for the NCDUs to practice demolitions on.
It was possible to blow a gate to pieces, but that only created a mass of tangled iron, making more of an obstacle. The NCDUs found that the best method was to blast the structural joints of a gate so that it fell down flat. The NCDU teams (designated demolition gap-assault teams) would come in at low tide to clear the obstacles. Their mission was to open sixteen 50-foot (15 m) wide corridors for the landing at each of the U.S. landing zones ( Omaha Beach and Utah Beach ). Unfortunately, the plans were not executed as laid out. The preparatory air and naval bombardment was ineffective, leaving many German guns to fire on the assault. Also, tidal conditions caused difficulties for the NCDUs. Despite heavy German fire and casualties, the NCDUs' charges opened gaps in the defenses. As the infantry came ashore, some used obstacles for cover that had demolition charges on them. The greatest difficulty was on Omaha Beach. By nightfall thirteen of the planned sixteen gaps were open. Of the 175 NCDU men that landed, 31 were killed and 60 were wounded. The attack on Utah Beach went better, with four dead and eleven wounded. [ 5 ] Overall, NCDUs suffered a 53 percent casualty rate. NCDUs were also assigned to Operation Dragoon , the invasion of southern France , with a few units from Normandy participating there too. With Europe invaded, Admiral Turner requisitioned all available NCDUs from Fort Pierce for integration into the UDTs for the Pacific. However, the first NCDUs, 1–10, had been staged at Turner City, Florida Island in the Solomon Islands during January 1944. [ 17 ] A few were temporarily attached to UDTs. [ 17 ] Later, NCDUs 1–10 were combined to form Underwater Demolition Team Able. [ 17 ] This team was disbanded, with NCDUs 2 and 3, plus three others, assigned to MacArthur's 7th Amphibious Force; these were the only NCDUs remaining at war's end. The other men from Team Able were assigned to numbered UDTs. The first units designated as Underwater Demolition Teams were formed in the Pacific Theater . Rear Admiral Turner , the Navy's amphibious expert, ordered the formation of Underwater Demolition Teams in response to the assault debacle experienced at Tarawa . Turner recognized that amphibious operations required intelligence of underwater obstacles. [ 6 ] The personnel in Teams 1–15 were primarily Seabees who had started out in the NCDUs. UDT training was at the Waipio Amphibious Operating Base , under V Amphibious Corps operational and administrative control. Most of the instructors and trainees were graduates of the Fort Pierce NCDU or Scouts and Raiders schools, Seabees, Marines, and Army soldiers. [ citation needed ] When Teams 1 and 2 were formed they were "provisional" and were trained by a Marine Corps Amphibious Reconnaissance Battalion that had nothing to do with the Fort Pierce program. After a successful mission at Kwajalein, where two UDT men stripped down to swim trunks and effectively gathered the intelligence Admiral Turner desired, the UDT mission model evolved to daylight reconnaissance, wearing swim trunks, fins, and masks. The immediate success of the UDTs made them an indispensable part of all future amphibious landings. A UDT was organized with approximately sixteen officers and eighty enlisted men. One Marine and one Army officer were liaisons within each team. [ 18 ] They were deployed in every major amphibious landing after Tarawa, with 34 teams eventually being commissioned.
Teams 1–21 were the teams that deployed operationally, with slightly over half of the officers and enlisted men in those teams coming from the Seabees. The remaining teams were not deployed because the war ended. Prior to Tarawa , both Navy and Marine Corps planners had identified coral as an issue for amphibious operations . At Tarawa the neap tide created draft issues for the Higgins boats (LCVPs) clearing the reef. The Amtracs carrying the first wave crossed the reef successfully. The LCVPs carrying the second wave ran aground, disembarking their Marines several hundred yards from shore in full combat gear, under heavy fire. Many drowned or were killed before making the beach, forced to wade across treacherously uneven coral. The first wave was left fighting without reinforcements and took heavy casualties on the beach. This disaster made it clear to Admiral Turner that pre-assault intelligence was needed to avoid similar difficulties in future operations. To that end, Turner ordered the formation of underwater demolition teams to carry out reconnaissance of beach conditions and removal of submerged obstructions for amphibious operations. [ 6 ] After a thorough review, V Amphibious Corps found that the only people with any applicable experience with the coral were men in the Naval Construction Battalions . The Admiral tasked Lt. Thomas C. Crist (CEC) of CB 10 with developing a method for blasting coral under combat conditions and putting together a team for that purpose. [ 19 ] Lt. Crist started by recruiting others he had blasted coral with in CB 10, and by the end of November 1943 he had assembled close to 30 officers and 150 enlisted men from the 7th Naval Construction Regiment [ 4 ] at Waipio Amphibious Operating Base on Maui . [ 19 ] The first operation after Tarawa was Operation Flintlock in the Marshall Islands. It began with the island of Kwajalein in January 1944. Admiral Turner wanted the intelligence, and to get it, the men that Lt. Crist had staged were used to form Underwater Demolition Teams UDT 1 and UDT 2. Initially, the team commanders were Cmdr. E. D. Brewster (CEC) and Lt. Crist (CEC). However, Lt. Crist was made operations officer of Team 2, and Lt. John T. Koehler was made the team commander. [ 6 ] As with all Seabee military training, the Marines provided it. A Marine Corps Amphibious Reconnaissance Battalion oversaw five weeks of further training of the Seabees in UDTs 1 and 2 to prepare for the mission. [ 20 ] UDT 1 was tasked with two daylight recons. [ 21 ] The men were to follow Marine Corps recon procedure, with each two-man team getting close to the beach in an inflatable boat to make their observations wearing fatigues, boots and helmets, and life-lined to their boats. Team 1 found that the reef kept them from ascertaining conditions both in the water and on the beach as had been anticipated. In keeping with the Seabee traditions of (1) doing whatever it takes to accomplish the job and (2) not always following military rules to get it done, UDT 1 did both: the fatigues and boots came off. Ensign Lewis F. Luehrs and Seabee Chief Bill Acheson had anticipated that they would not be able to get the intelligence Admiral Turner wanted by following USMC recon protocol, and had worn swim trunks beneath their fatigues. [ 21 ] Stripping down, they swam for 45 minutes undetected across the reef, returning with sketches of gun emplacements and other intelligence. Still in their trunks, they were taken directly to Rear Admiral Turner's flagship to report.
[21] Afterwards, Turner concluded that the only way to get this kind of information was to do what these men had done as individual swimmers, which is what he relayed to Admiral Nimitz. The planning and decisions of Rear Admiral Turner, Ensign Luehrs, and Chief Acheson made Kwajalein a defining moment in UDT history, changing both the mission model and the training regimen. Luehrs would rise in rank, serving in UDT 3 until he was made XO of UDT 18. Acheson and three other UDT officers were posted to the 301st CB as blasting officers.[4] The 301st specialized in harbor dredging; it saved the UDT teams from blasting channels and clearing harbors, but it required its own blasters.

Admiral Turner ordered the formation of nine more teams, six for V Amphibious Corps and three for III Amphibious Corps (in all, Teams 3–11). Seabees made up the majority of the men in teams 1–9, 13, and 15, and the officers of those teams were primarily CEC[22] (Seabees). UDT 2 was sent to Roi-Namur, where Lt. Crist earned a Silver Star. UDTs 1 and 2 were decommissioned upon return to Hawaii, with most of the men transferred to UDTs 3, 4, 5, and 6. As more NCDUs arrived in the Pacific they were used to form still more teams; UDT 15 was an all-NCDU team. To implement these changes and grow the UDTs, Koehler was made commanding officer of the Naval Combat Demolition Training and Experimental Base on Maui. Admiral Turner also brought on Lt. Cmdr. Draper Kauffman as a combat officer.[6] It became obvious that more men were needed than the NCDUs would supply, and Cmdr. Kauffman was no longer recruiting Seabees, so Admiral Nimitz put out a call to the Pacific Fleet for volunteers; these would form three teams, of which UDT 14 was the first. Recruiting was such an issue that three lieutenant commanders who had no background in demolition were transferred from USN Beach Battalions to command UDTs 11, 12, and 13.[citation needed]

Admiral Turner requested the establishment of the Naval Combat Demolition Training and Experimental Base at Kihei, independent of Fort Pierce, expanding upon what had been learned from UDT 1 at Kwajalein. Operations began in February 1944 with Lt. Crist as the first head of training. Most of the procedures from Fort Pierce were changed, replaced with an emphasis on developing swimmers, daylight reconnaissance, and no lifelines. The uniform of the day changed to diving masks, swim trunks, and a Ka-Bar, creating the UDT image of "Naked Warriors" (swim fins were added after UDT 10 introduced them).

At Saipan and Tinian, UDTs 5, 6, and 7 were given the missions: daytime reconnaissance at Saipan and night reconnaissance at Tinian. At Saipan, UDT 7 developed a method to recover swimmers on the move without making the recovery vessel a stationary target. UDTs 3, 4, and 6 were the teams assigned to Guam. When it was over, the Seabee-dominated teams had made naval history.[23] For the Marianas operations Admiral Turner recommended over sixty Silver Stars and over three hundred Bronze Stars with Vs for UDTs 3–7,[23] which was unprecedented in U.S. Naval/Marine Corps history.[23] For UDTs 5 and 7, all officers received Silver Stars and all the enlisted men received Bronze Stars with Vs for Operation Forager (Tinian).[24] For UDTs 3 and 4, all officers received Silver Stars and all the enlisted men received Bronze Stars with Vs for Operation Forager (Guam).[24] Admiral Conolly felt the commanders of teams 3 and 4 (Lt. Crist and Lt. W.G.
Carberry) should have received Navy Crosses.[24] Teams 4 and 7 also received Navy Unit Commendations.

UDTs 6, 7, and 10 drew the Peleliu[25] assignment, while UDT 8 went to Angaur. The officers were almost all CEC and the enlisted men were Seabees.[26] At formation, UDT 10 was assigned 5 officers and 24 enlisted men who had trained as OSS Operational Swimmers (Maritime Unit: Operational Swimmer Group II). They were led by Lt. A.O. Chote Jr., who became UDT 10's commanding officer. The men were multi-service: Army, Coast Guard, Marine Corps, and Navy.[27][28] The OSS was not allowed to operate in the Pacific Theater, but Admiral Nimitz needed swimmers and approved their transfer from the OSS to his operational and administrative control. Most of their OSS gear was stored, as it was not applicable to UDT work; however, their swim fins came with them, and the other UDTs quickly adopted them. UDT 14 was the first all-Navy team (one of three from the Pacific Fleet), even though its CO and XO were CEC and some of Team Able was incorporated.

In the Philippines' Leyte Gulf operations, UDTs 10 and 15 reconnoitered beaches at Luzon, teams 3, 4, 5, and 8 were sent to Dulag, and teams 6, 9, and 10 went to Tacloban. When UDT 3 returned to Maui, the team became the instructors of the school.[29] Lt. Crist was again made training officer. Under his direction, training was broken into four two-week blocks with an emphasis on swimming and reconnaissance.[29] There were classes in night operations, unit control, and coral and lava blasting, in addition to bivouacking, small unit tactics, and small arms.[29] Lt. Crist would be promoted to lieutenant commander, and the team would remain in Hawaii until April 1945.[29] At that time the Seabees of UDT 3 were transferred to Fort Pierce to be the instructors there.[29] In all they would train teams 12 to 22.[29] Lt. Cmdr. Crist would be sent back to Hawaii.

On D-minus 2 at Iwo Jima, UDTs 12, 13, 14, and 15 reconnoitered the beaches from twelve LCI(G)s with just one man wounded, despite intense fire that sank three of their LCI(G)s and left the others seriously damaged or disabled. The LCI(G) crews suffered more than the UDTs, with the skipper of one boat earning a Medal of Honor. The next day a Japanese bomb hit UDT 15's APD, USS Blessman, killing fifteen men and wounding 23. It was the largest loss suffered by the UDTs during the war. On D-plus 2 the beachmaster requested help: there were so many broached or damaged landing craft, and the beach was so clogged with war debris, that there was no place for landing craft to get ashore. Lt. Cmdr. E. Hochuli of UDT 12 volunteered his team to deal with the problem, and teams 13 and 14 were ordered to go along.[30] Lt. Cmdr. Vincent Moranz of UDT 13 was reluctant, and radioed that his men were not salvage men.[30] It is reported that Capt. "Bull" Hanlon, the commanding officer of underwater demolition operations, radioed back that he did not want anything salvaged, he wanted that beach cleared.[30] The difference in attitude between Hochuli and Moranz would be remembered in the unit awards. The three teams worked for five days clearing the water's edge. While the teams all did the same job under the same conditions,[30] the Navy gave them different unit awards: UDT 12 a PUC, UDT 14 a NUC, and UDT 13 nothing. The USMC ground commanders felt that every man who set foot on the island during the assault had an award coming.
The Navy did not share this point of view: besides UDT 13, not a single USN beach party received a unit award either. On D-plus 2, when the UDTs set foot on beaches that were under a USMC assault, any unit award they received should have come under the USMC award protocol; the USMC Iwo Jima PUC/NUC was a mass award, with the PUC going to assault units and the NUC going to support units.

UDTs also served at Eniwetok, Ulithi, Leyte, Lingayen Gulf, Zambales, Labuan, and Brunei Bay. At Lingayen, UDT 9 was aboard USS Belknap when she was hit by a kamikaze; the strike cost the team one officer and 7 enlisted men, with 3 missing in action and 13 wounded. The largest UDT operation of WWII was the invasion of Okinawa, involving teams 7, 11, 12, 13, 14, 16, 17, and 18 (nearly 1,000 men). All prior missions had been in warm tropical waters, but the waters around Okinawa were cool enough that long immersion could cause hypothermia and severe cramps. Since thermal protection for swimmers was not available, the UDTs were exposed to these hazards working around Okinawa. Operations included both real reconnaissance and demolition at the landing beaches, and feints to create the illusion of landings in other locations. Pointed poles set into the coral reef protected the beaches on Okinawa, and Teams 11 and 16 were sent in to blast them. The charges took out all of UDT 11's targets and half of UDT 16's. UDT 16 aborted the operation due to the death of one of its men; hence, its mission was considered a failure. UDT 11 went back the next day and took out the remaining poles, after which the team remained to guide landing craft to the beach.

By war's end 34 teams had been formed, with teams 1–21 having actually deployed. The Seabees provided half of the men in the teams that saw service. The U.S. Navy did not publicize the existence of the UDTs until after the war, and when it did, it gave credit to Lt. Cmdr. Kauffman and the Seabees.[31] During WWII the Navy had neither a rating nor an insignia for the UDTs. Those men with the CB rating on their uniforms considered themselves Seabees who were doing underwater demolition. They did not call themselves "UDTs" or "frogmen" but rather "demolitioneers", a name carried over from the NCDUs[32] and from Lt. Cmdr. Kauffman's recruiting at the Seabee dynamiting and demolition school. UDTs had to meet the military's standard age guidelines; older Seabees could not volunteer. In preparation for the invasion of Japan the UDTs created a cold-water training center, and in mid-1945 the UDTs had to meet a "new physical standard"; UDT 9 lost 70% of its men to this change. The last UDT demolition operation of the war was on 4 July 1945 at Balikpapan, Borneo. The UDTs continued to prepare for the invasion of Japan until VJ Day, when the need for their services ceased. With the draw-down from the war, two half-strength UDTs were retained, one on each coast: UDT Baker and UDT Easy. The UDTs were thus the only special troops to avoid complete disbandment after the war, unlike the OSS Maritime Unit, the VAC Recon Battalion, and several Marine recon units.[6]

In 1942 the Seabees were created as a completely new component of the U.S. military, with the Marine Corps providing both training and an organizational model. One practice that either was not passed along, or that the Seabees ignored or considered unimportant, was the keeping of logs, journals, and records. The Seabees brought this lax record-keeping approach with them to the NCDUs and UDTs.
On 20 August 1945, USS Begor embarked UDT 21 at Guam as a component of the U.S. occupation force heading for Japan.[33] Nine days later UDT 21 became the first U.S. military unit to set foot on Japanese home soil when it reconnoitered the beaches at Futtsu-misaki Point in Tokyo Bay.[33] Its assessment was that the area was well suited for landing U.S. amphibious forces. UDT 21 made a large sign to greet the Marines on the beach; Team 21 was all fleet sailors, and the sign said greetings from "USN" UDT 21. The next day Begor took UDT 21 to Yokosuka Naval Base,[33] where the team cleared the docks for the first U.S. warship to dock in Japan, USS San Diego.[33] The team remained in Tokyo Bay until 8 September, when it was tasked with locating remaining kamikaze craft and two-man submarines at Katsura Wan, Uchiura Wan at Suruga Bay, Sendai, the Onohama shipyards, and Choshi.[33] Orders arrived for Begor to return the team to San Diego on 27 September.[33] From 21 to 26 September, UDT 11 was at Nagasaki and reported men getting sick from the stench.[34]

With the war over, thousands of Japanese troops remained in China, and the issue was given to the Marines' III Amphibious Corps. UDT 9 was assigned to Operation Beleaguer to reconnoiter the landings of the 1st Marine Division at Taku and Qingdao during the first two weeks of October 1945.[35] On the way to China, the Navy had UDT 8 carry out a mission at Jinaen, Korea, from 8 to 27 September 1945.[35] When UDT 9 arrived back in the States it was made one of the two post-war teams and redesignated UDT Baker.[35] UDT 8 was also sent to China and was at Taku, Yantai, and Qingdao.[36]

Bikini Atoll was chosen as the site of the nuclear tests of Operation Crossroads. "In March 1946, Project Y scientists from Los Alamos decided that the analysis of a sample of water from the immediate vicinity of the nuclear detonation was essential if the tests were to be properly evaluated. After consideration of several proposals to accomplish this, it was finally decided to employ drone boats of the type used by Naval Combat Demolition Units in France during the war".[37] UDT Easy, later named UDT 3, was given the designation TU 1.1.3 for the operation and was assigned the control and maintenance of the drone boats. On 27 April, 7 officers and 51 enlisted men embarked in USS Begor at the Seabee base at Port Hueneme, California,[37] for transit to Bikini. At Bikini the drones were controlled from the Begor. Once a water sample was taken, the drone would return to the Begor to be hosed down for decontamination. After a radiation safety officer had taken a Geiger counter reading and given the OK, the UDT men would board with a radiation chemist to retrieve the sample.[38] Begor came to have the reputation of being the most contaminated ship in the fleet.[38]

A major issue afterwards was the treatment of the displaced islanders. In November 1948 the Bikinians were relocated to the uninhabited island of Kili; however, that island was located inside a coral reef that had no channel for access to the sea.[39] In the spring of 1949, the governor of the Marshall group of the Trust Territories requested that the U.S. Navy blast a channel to change this.[39] That task was given to the Seabees on Kwajalein, whose CO quickly determined this was actually a UDT project.[39] He sent a request to CINCPACFLT, who forwarded it to COMPHIBPAC.[39] This ultimately resulted in UDT 3 being sent on a civic action program that turned out better than the politicians could have hoped.
The King of the Bikinians held a send-off feast for the UDT men the night before they departed.[39]

After WWII the UDTs continued to research new techniques for underwater and shallow-water operations. One area was the use of SCUBA equipment. Dr. Chris Lambertsen had developed the Lambertsen Amphibious Respiratory Unit (LARU), an oxygen rebreather, which was used by the Maritime Unit of the OSS. In October 1943 he demonstrated it to Lt. Cmdr. Kauffman, but was told the device was not applicable to current UDT operations.[40][41] Dr. Lambertsen and the OSS continued to work on closed-circuit oxygen diving and combat swimming. When the OSS was dissolved in 1945, Lambertsen retained the LARU inventory. He later demonstrated the LARU to Army Engineers, the Coast Guard, and the UDTs. In 1947 he demonstrated the LARU to Lt. Cmdr. Francis "Doug" Fane, then a senior UDT commander.[40][42]

Lt. Cmdr. Fane was enthusiastic about new diving techniques. He pushed for the adoption of rebreathers and SCUBA gear for future operations, but the Navy Experimental Diving Unit and the Navy Dive School, which used the old "hard-hat" diving apparatus, declared the new equipment too dangerous. Nonetheless, Lt. Cmdr. Fane invited Dr. Lambertsen to NAB Little Creek, Virginia, in January 1948 to demonstrate SCUBA operations and to train UDT personnel in them. This was the first-ever SCUBA training for USN divers. Following this training, Lt. Cmdr. Fane and Dr. Lambertsen demonstrated new UDT capabilities with a successful lock-out and re-entry from USS Grouper, an underway submarine, to show the Navy's need for this capability. Lt. Cmdr. Fane then started the classified "Submersible Operations" or SUBOPS platoon with men drawn from UDTs 2 and 4, under the direction of Lieutenant (junior grade) Bruce Dunning.[40][43]

Lt. Cmdr. Fane also brought the conventional "Aqua-lung" open-circuit SCUBA system into use by the UDTs. Open-circuit SCUBA is less useful to combat divers, as the exhausted air produces a tell-tale trail of bubbles. Nevertheless, in the early 1950s the UDTs decided they preferred open-circuit SCUBA and converted entirely to it; the remaining stock of LARUs was supposedly destroyed in a beach-party bonfire.[citation needed] Later on, the UDTs reverted to closed-circuit SCUBA, using improved rebreathers developed by Dr. Lambertsen. It was at this time that the UDTs, led by Lt. Cmdr. Fane, established training facilities at Saint Thomas in the Virgin Islands.[44] The UDTs also began developing weapons skills and procedures for commando operations on land in coastal regions.

The UDTs started experiments with insertion and extraction by helicopter, jumping from a moving helicopter into the water or rappelling like mountain climbers to the ground. Experimentation also developed a system for emergency extraction by plane called "Skyhook". Skyhook utilized a large helium balloon and a cable rig with harness; a special grabbing device on the nose of a C-130 enabled a pilot to snatch the cable tethered to the balloon and lift a person off the ground. Once airborne, the crew would winch the cable in and retrieve the person through the back of the aircraft. Training in this technique was discontinued following the death of a SEAL at NAB Coronado during a training exercise, but teams still utilize the Skyhook for equipment extraction and retain the combat capability for personnel if needed.
During the Korean War, the UDTs operated on the coasts of North Korea, with their efforts initially focused on demolition and mine disposal. The UDTs also accompanied South Korean commandos on raids into the North to demolish railroad tunnels and bridges. The higher-ranking officers of the UDT frowned upon this activity because it was a non-traditional use of naval forces that took the teams too far from the water line. Due to the nature of the war, the UDTs maintained a low operational profile; some of the better-known missions include the transport of spies into North Korea and the destruction of North Korean fishing nets.

A more traditional role for the UDT was in support of Operation CHROMITE, the amphibious landing at Inchon. UDT 1 and UDT 3 divers went in ahead of the landing craft, scouting mud flats, marking low points in the channel, clearing fouled propellers, and searching for mines. Four UDT personnel acted as wave-guides for the Marine landing.[45] The UDTs assisted in clearing mines in Wonsan harbor under fire from enemy shore batteries; two minesweepers were sunk in these operations. A UDT diver dove on the wreck of USS Pledge (AM-277), the first U.S. combat operation using SCUBA gear. The Korean War was a period of transition for the men of the UDT. They tested their previous limits and defined new parameters for their special style of warfare. These new techniques and expanded horizons positioned the UDT well to assume an even broader role as war began brewing to the south in Vietnam.[46]

Initially, the splashdowns of U.S. crewed space capsules were unassisted.[47] That changed quickly after the second crewed flight: when Liberty Bell 7 hit the water following reentry, the hatch blew and the capsule sank, nearly drowning Gus Grissom. All Mercury, Gemini, and Apollo space capsules were subsequently met by UDT 11 or UDT 12 upon splashdown. Before the hatch was opened, the UDT men would attach a flotation collar and life raft to the capsule so the astronauts could exit the craft safely.[47]

The Navy entered the Vietnam War in 1958, when the UDTs delivered a small watercraft far up the Mekong River into Laos. In 1961, naval advisers started training South Vietnamese personnel in South Vietnam. The men were called the Liên Đoàn Người Nhái (LDNN), or Vietnamese frogmen; the name translates as "Frogman Team". UDT teams carried out hydrographic surveys in South Vietnam's coastal waters and reconnaissance missions of harbors, beaches, and rivers, often under hazardous conditions and enemy fire.[48] Later, the UDTs supported the Amphibious Ready Groups operating on South Vietnam's rivers. UDTs manned riverine patrol craft and went ashore to demolish obstacles and enemy bunkers. They operated throughout South Vietnam, from the Mekong Delta (Sea Float), the Parrot's Beak, and the French canal AOs through I Corps and the Song Cui Dai estuary south of Da Nang.

In the mid-1950s, the Navy saw how the UDT's mission had expanded into a broad range of "unconventional warfare", but also that this clashed with the UDT's traditional focus on maritime work: swimming, boat, and diving operations. It was therefore decided to create a new type of unit that would build on the UDT's elite qualities and water-borne expertise, but would add land combat skills, including parachute training and guerrilla/counterinsurgency operations.[49] These new teams would come to be known as the US Navy SEALs, an acronym for Sea, Air, and Land.
There was a lag in the units' creation until President John F. Kennedy took office. Kennedy recognized the need for unconventional warfare and supported the use of special operations forces against guerrilla activity. The Navy moved forward to establish its new special operations force and in January 1962 commissioned SEAL Team ONE at NAB Coronado and SEAL Team TWO at NAB Little Creek. In 1964, Boat Support Unit ONE was established to directly support NSW operations; it was initially manned primarily by UDT men and newly established SEALs. UDTs 11 and 12 were still active on the west coast, and UDTs 21 and 22 on the east coast. The SEALs quickly earned a reputation for valor and stealth in Vietnam, where they conducted clandestine raids in perilous territory.

From 1974 to 1975, UDT 13 was redesignated; some personnel established Underwater Construction Teams, while others joined the special boat detachment. In May 1983, the remaining UDT teams were reorganized as SEAL teams: UDT 11 became SEAL Team Five, UDT 12 became SEAL Delivery Vehicle Team One, UDT 21 became SEAL Team Four, and UDT 22 became SEAL Delivery Vehicle Team Two. A new team, SEAL Team Three, was established in October 1983. Since then, teams of SEALs have taken on clandestine missions in war-torn regions around the world, tracking high-profile targets such as Panama's Manuel Noriega and Colombian drug lord Pablo Escobar, and playing integral roles in the wars in Iraq and Afghanistan.[50][51]

For those who served in an Underwater Demolition Team, the U.S. Navy authorized the Underwater Demolition operator badge in 1970. However, the UDT badge was phased out in 1971, a few months after it appeared, as was the silver badge for enlisted UDT/SEAL frogmen. After that, SEAL and UDT operators, both officer and enlisted, all wore the same gold Trident, as well as gold Navy jump wings.

The UDTs have received several unit citations and commendations. Members who participated in actions that merited an award are authorized to wear the associated medal or ribbon on their uniform. Awards and decorations of the United States Armed Forces fall into different categories (i.e., service, campaign, unit, and personal); unit citations are distinct from the other decorations.[52] Units so recognized include: Naval Combat Demolition Force O (Omaha Beach), Normandy; Naval Combat Demolition Force U (Utah Beach), Normandy; UDT 1; UDT 4; UDT 7; UDT 11; UDT 12; UDT 13; UDT 14; UDT 21; and UDT 22.
https://en.wikipedia.org/wiki/Underwater_Demolition_Team
Underwater construction is industrial construction in an underwater environment. It is a part of the marine construction industry.[1] It can involve the use of a variety of building materials, mainly concrete and steel, and there is often, but not necessarily, a significant component of commercial diving involved.[2][3] Some underwater work can be done by divers, but they are limited by depth and site conditions, and diving is hazardous work, with expensive risk reduction and mitigation and a limited range of suitable equipment. Remotely operated underwater vehicles are an alternative for some classes of work, but are also limited and expensive. When reasonably practicable, the bulk of the work is done out of the water, with underwater work restricted to installation, modification and repair, and inspection. Underwater construction is common in the civil engineering, coastal engineering, energy, and petroleum extraction industries. Coastal engineering is a branch of civil engineering concerned with the specific demands posed by constructing at or near the coast, as well as the development of the coast itself. Harbours, docks, breakwaters, jetties, piers, wharfs, and similar structures are all immediately adjacent to, or project into, coastal waters, and are supported in part by the seabed. Stormwater and sewer outfalls require pipelines to be laid underwater. Dykes, levees, navigation channels, canals, and locks are further examples. The most commonly used materials in marine construction are concrete and steel.[11] Underwater work by divers on construction sites is generally within the scope of diving regulations.[12][13] The work may also come within the scope of other occupational health and safety related regulations.
https://en.wikipedia.org/wiki/Underwater_construction
An underwater explosion (also known as an UNDEX) is a chemical or nuclear explosion that occurs under the surface of a body of water. While useful in anti-ship and anti-submarine warfare, underwater bombs are not as effective against coastal facilities. Underwater explosions differ from in-air explosions because of the physical properties of water. The effects of an underwater explosion depend on several factors, including the distance from the explosion, the energy of the explosion, the depth of the explosion, and the depth of the water.[2]

Underwater explosions are categorized by the depth of the explosion. Shallow underwater explosions are those where a crater formed at the water's surface is large in comparison with the depth of the explosion. Deep underwater explosions are those where the crater is small in comparison with the depth of the explosion,[2] or nonexistent. The overall effect of an underwater explosion depends on depth, the size and nature of the explosive charge, and the presence, composition, and distance of reflecting surfaces such as the seabed, the surface, thermoclines, etc. This phenomenon has been extensively exploited in antiship warhead design, since an underwater explosion (particularly one underneath a hull) can produce greater damage than an above-surface one of the same explosive size. Initial damage to a target will be caused by the first shockwave; this damage will be amplified by the subsequent physical movement of water and by the repeated secondary shockwaves or bubble pulse. Additionally, charge detonation away from the target can result in damage over a larger hull area.[3]

Underwater nuclear tests close to the surface can disperse radioactive water and steam over a large area, with severe effects on marine life, nearby infrastructure, and humans.[4][5] The detonation of nuclear weapons underwater was banned by the 1963 Partial Nuclear Test Ban Treaty and is also prohibited under the Comprehensive Nuclear-Test-Ban Treaty of 1996.

The Baker nuclear test at Bikini Atoll in July 1946 was a shallow underwater explosion, part of Operation Crossroads. A 20-kiloton warhead was detonated in a lagoon approximately 200 ft (61 m) deep. The first effect was illumination of the sea from the underwater fireball. A rapidly expanding gas bubble created a shock wave that caused an expanding ring of apparently dark water at the surface, called the slick, followed by an expanding ring of apparently white water, called the crack. A mound of water and spray, called the spray dome, formed at the water's surface and became more columnar as it rose. When the rising gas bubble broke the surface, it created a shock wave in the air as well. Water vapor in the air condensed as a result of Prandtl–Meyer expansion fans decreasing the air pressure, density, and temperature below the dew point, making a spherical cloud that marked the location of the shock wave. Water filling the cavity formed by the bubble caused a hollow column of water, called the chimney or plume, to rise 6,000 ft (1,800 m) in the air and break through the top of the cloud. A series of ocean surface waves moved outward from the center. The first wave was about 94 ft (29 m) high at 1,000 ft (300 m) from the center. Other waves followed, and at greater distances some of these were higher than the first wave; for example, at 22,000 ft (6,700 m) from the center, the ninth wave was the highest, at 6 ft (1.8 m).
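As an aside, the wave figures just quoted are roughly consistent with wave height falling off inversely with distance from the burst point, a common first-order model for radially spreading surface waves. The short Python sketch below is illustrative only; the 1/r law is an assumption made for this back-of-envelope check, not something stated in the test report:

    # Illustrative check: inverse-distance decay of explosion-generated surface waves.
    # Assumption: wave height falls off roughly as 1/r from the burst point.
    h0, r0 = 94.0, 1000.0  # first wave: 94 ft high at 1,000 ft from the center

    def wave_height_ft(r_ft):
        """Predicted wave height (ft) at range r_ft under the assumed 1/r model."""
        return h0 * r0 / r_ft

    print(f"{wave_height_ft(22000.0):.1f} ft")  # ~4.3 ft at 22,000 ft from the center

The predicted 4.3 ft is the same order of magnitude as the 6 ft ninth wave actually observed at 22,000 ft; that a later wave was the highest there is characteristic of the dispersive wave trains such explosions generate.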
Gravity caused the column to fall back to the surface, and a cloud of mist moved outward rapidly from the base of the column; this is called the base surge. The ultimate size of the base surge was 3.5 mi (5.6 km) in diameter and 1,800 ft (550 m) high. The base surge rose from the surface and merged with other products of the explosion to form clouds which produced moderate to heavy rainfall for nearly one hour.[6]

An example of a deep underwater explosion is the Wahoo test, carried out in 1958 as part of Operation Hardtack I. A 9 kt Mk-7 was detonated at a depth of 500 ft (150 m) in deep water. There was little evidence of a fireball. The spray dome rose to a height of 900 ft (270 m). Gas from the bubble broke through the spray dome to form jets which shot out in all directions and reached heights of up to 1,700 ft (520 m). The base surge at its maximum size was 2.5 mi (4.0 km) in diameter and 1,000 ft (300 m) high.[6]

The heights of surface waves generated by deep underwater explosions are greater because more energy is delivered to the water. During the Cold War, underwater explosions were thought to operate under the same principles as tsunamis, potentially increasing dramatically in height as they move over shallow water and flooding the land beyond the shoreline.[7] Later research and analysis suggested that water waves generated by explosions differ from those generated by tsunamis and landslides. Méhauté et al. conclude in their 1996 overview Water Waves Generated by Underwater Explosion that the surface waves from even a very large offshore undersea explosion would expend most of their energy on the continental shelf, resulting in coastal flooding no worse than that from a bad storm.[2]

The Operation Wigwam test in 1955 occurred at a depth of 2,000 ft (610 m), the deepest detonation of any nuclear device. Unless it breaks the water surface while still a hot gas bubble, an underwater nuclear explosion leaves no trace at the surface but hot, radioactive water rising from below. This is always the case with explosions deeper than about 2,000 ft (610 m).[6]

About one second after such an explosion, the hot gas bubble begins to collapse. Since water is not readily compressible, moving this much of it out of the way so quickly absorbs a massive amount of energy, all of which comes from the pressure inside the expanding bubble. Water pressure outside the bubble soon causes it to collapse back into a small sphere and rebound, expanding again. This is repeated several times, but each rebound contains only about 40% of the energy of the previous cycle. At the maximum diameter of the first oscillation, a very large nuclear bomb exploded in very deep water creates a bubble about half a mile (800 m) wide in about one second; the bubble then contracts, which also takes about a second. Blast bubbles from deep nuclear explosions have slightly longer oscillations than shallow ones. They stop oscillating and become mere hot water in about six seconds; this happens sooner with nuclear blasts than with bubbles from conventional explosives. The water pressure of a deep explosion prevents any bubbles from surviving to float up to the surface.

The drastic 60% loss of energy between oscillation cycles is caused in part by the extreme force of the nuclear explosion pushing the bubble wall outward supersonically (faster than the speed of sound in saltwater), which triggers Rayleigh–Taylor instability.
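Before examining that instability in detail, note that the quoted 40% energy retention per rebound is by itself enough to account for the bubble being spent after a handful of cycles. A minimal sketch in Python (the six-cycle printout is illustrative, not from the source):

    # Each rebound of the blast bubble retains ~40% of the previous cycle's energy.
    retention = 0.40
    energy = 1.0  # initial bubble energy, normalized

    for cycle in range(1, 7):
        energy *= retention
        print(f"after cycle {cycle}: {energy:.1%} of initial energy")

    # After six cycles only ~0.4% of the energy remains, consistent with the
    # bubble becoming "mere hot water" in about six seconds when each
    # expansion-contraction cycle lasts roughly a second.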
In this instability, the smooth water wall touching the blast face becomes turbulent and fractal, with fingers and branches of cold ocean water extending into the bubble. That cold water cools the hot gas inside and causes it to condense. The bubble becomes less of a sphere and looks more like the Crab Nebula, whose deviation from a smooth surface is likewise due to Rayleigh–Taylor instability, as ejected stellar material pushes through the interstellar medium. As might be expected, large, shallow explosions expand faster than deep, small ones.

Despite being in direct contact with a nuclear explosion fireball, the water in the expanding bubble wall does not boil; the pressure inside the bubble exceeds (by far) the vapor pressure of water. The water touching the blast can only boil during bubble contraction. This boiling is like evaporation, cooling the bubble wall, and it is another reason that an oscillating blast bubble loses most of the energy it had in the previous cycle.

During these hot gas oscillations, the bubble continually rises for the same reason a mushroom cloud does: it is less dense than its surroundings. As a result, the blast bubble is never perfectly spherical. Instead, the bottom of the bubble is flatter, and during contraction it even tends to "reach up" toward the blast center. In the last expansion cycle, the bottom of the bubble touches the top before the sides have fully collapsed, and the bubble becomes a torus in its last second of life. About six seconds after detonation, all that remains of a large, deep nuclear explosion is a column of hot water rising and cooling in the near-freezing ocean.

Relatively few underwater nuclear tests were performed before they were banned by the Partial Test Ban Treaty. Note: it is often believed that the French did extensive underwater tests at the Moruroa and Fangataufa Atolls in French Polynesia. This is incorrect; the bombs were placed in shafts drilled into the underlying coral and volcanic rock, and they did not intentionally leak fallout.

There are several methods of detecting nuclear detonations. Hydroacoustics is the primary means of determining whether a nuclear detonation has occurred underwater. Hydrophones are used to monitor the change in water pressure as sound waves propagate through the world's oceans.[9] Sound travels through 20 °C water at approximately 1482 meters per second, compared to the 332 m/s speed of sound through air.[10][11] In the world's oceans, sound travels most efficiently at a depth of approximately 1000 meters. Sound waves at this depth travel at minimum speed and are trapped in a layer known as the Sound Fixing and Ranging Channel (SOFAR).[9] Sounds can be detected in the SOFAR channel from great distances, so only a limited number of monitoring stations are required to detect oceanic activity. Hydroacoustics was originally developed in the early 20th century as a means of detecting objects like icebergs and shoals to prevent accidents at sea.[9] Three hydroacoustic stations were built before the adoption of the Comprehensive Nuclear-Test-Ban Treaty: two hydrophone stations in the North Pacific Ocean and Mid-Atlantic Ocean, and a T-phase [clarification needed] station off the west coast of Canada. When the CTBT was adopted, 8 more hydroacoustic stations were constructed to create a comprehensive network capable of identifying underwater nuclear detonations anywhere in the world.
[12] These 11 hydroacoustic stations, in addition to 326 monitoring stations and laboratories, comprise the International Monitoring System (IMS), which is monitored by the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO).[13] Two types of hydroacoustic station are currently used in the IMS network: 6 hydrophone monitoring stations and 5 T-phase stations. These 11 stations are located mostly in the southern hemisphere, which is primarily ocean.[14] Hydrophone monitoring stations consist of an array of three hydrophones suspended from cables tethered to the ocean floor. They are positioned at a depth within the SOFAR channel in order to gather readings effectively.[12] Each hydrophone records 250 samples per second, while the tethering cable supplies power and carries information to the shore.[12] This information is converted to a usable form and transmitted via a secure satellite link to other facilities for analysis. T-phase monitoring stations record seismic signals generated by sound waves that have coupled with the ocean floor or shoreline.[15] T-phase stations are generally located on steep-sloped islands in order to gather the cleanest possible seismic readings.[14] Like hydrophone stations, they send this information to the shore, from which it is transmitted via satellite link for further analysis.[15] Hydrophone stations have the benefit of gathering readings directly from the SOFAR channel, but are generally more expensive to implement than T-phase stations.[15] Hydroacoustic stations monitor frequencies from 1 to 100 Hz to determine whether an underwater detonation has occurred. If a potential detonation has been identified by one or more stations, the gathered signals will have a high bandwidth, with the frequency spectrum indicating an underwater cavity at the source.[15]
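Two of the figures quoted above can be sanity-checked directly: the roughly 1482 m/s sound speed implies propagation times of a couple of hours across an ocean basin, and the 250 samples-per-second hydrophone rate comfortably covers the 1 to 100 Hz monitoring band. A rough sketch in Python; the 10,000 km range is an assumed example distance, not a figure from the source:

    # Sanity checks on the hydroacoustic monitoring figures quoted above.
    sound_speed = 1482.0  # m/s in 20 degree C water (vs ~332 m/s in air)
    sample_rate = 250.0   # hydrophone samples per second
    band_top = 100.0      # top of the 1-100 Hz monitoring band

    range_m = 10_000_000.0  # assumed example: 10,000 km across an ocean basin
    hours = range_m / sound_speed / 3600
    print(f"travel time: {hours:.1f} hours")  # ~1.9 hours

    nyquist = sample_rate / 2  # highest frequency the sampling rate can capture
    print(f"Nyquist: {nyquist:.0f} Hz, band top: {band_top:.0f} Hz")  # 125 Hz > 100 Hz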
https://en.wikipedia.org/wiki/Underwater_explosion
Underwater habitats are underwater structures in which people can live for extended periods and carry out most of the basic human functions of a 24-hour day, such as working, resting, eating, attending to personal hygiene, and sleeping. In this context, 'habitat' is generally used in a narrow sense to mean the interior and immediate exterior of the structure and its fixtures, but not its surrounding marine environment. Most early underwater habitats lacked regenerative systems for air, water, food, electricity, and other resources; however, some underwater habitats allow these resources to be delivered by pipe, or generated within the habitat, rather than manually delivered.[1]

An underwater habitat has to meet the needs of human physiology and provide suitable environmental conditions, of which the most critical is breathing gas of suitable quality. Others concern the physical environment (pressure, temperature, light, humidity), the chemical environment (drinking water, food, waste products, toxins), and the biological environment (hazardous sea creatures, microorganisms, marine fungi). Much of the science covering underwater habitats and their technology, designed to meet human requirements, is shared with diving, diving bells, submersible vehicles, submarines, and spacecraft.

Numerous underwater habitats have been designed, built, and used around the world since as early as the start of the 1960s, either by private individuals or by government agencies.[2] They have been used almost exclusively for research and exploration, but in recent years at least one underwater habitat has been provided for recreation and tourism. Research has been devoted particularly to the physiological processes and limits of breathing gases under pressure, to aquanaut as well as astronaut training, and to research on marine ecosystems. The term 'underwater habitat' is used for a range of applications, including some structures that are not exclusively underwater while operational, but all include a significant underwater component. There may be some overlap between underwater habitats and submersible vessels, and between structures which are completely submerged and those which have some part extending above the surface when in operation. In 1970 G. Haux put it this way:[3] "At this point it must also be said that it is not easy to sharply define the term 'underwater laboratory'. One may argue whether Link's diving chamber, which was used in the 'Man-in-Sea I' project, may be called an underwater laboratory. But the Bentos 300, planned by the Soviets, is not so easy to classify, as it has a certain ability to maneuver. Therefore, the possibility exists that this diving hull is classified elsewhere as a submersible. Well, a certain generosity cannot hurt."

In an underwater habitat, observations can be carried out at any hour to study the behavior of both diurnal and nocturnal organisms.[4] Habitats in shallow water can be used to accommodate divers from greater depths for a major portion of the decompression required; this principle was used in the project Conshelf II. Saturation dives provide the opportunity to dive at shorter intervals than is possible from the surface, and the risks associated with diving and ship operations at night can be minimized. In the habitat La Chalupa, 35% of all dives took place at night.
To perform the same amount of useful work diving from the surface instead of from La Chalupa, an estimated eight hours of decompression time would have been necessary every day.[5] However, maintaining an underwater habitat is much more expensive and logistically difficult than diving from the surface, and it restricts the diving to a much more limited area.

Underwater habitats are designed to operate in two fundamental modes; a third, composite type has compartments of both types within the same habitat structure, connected via airlocks, such as Aquarius. An excursion is a visit to the environment outside the habitat. Diving excursions can be done on scuba or umbilical supply, and are limited upwards by decompression obligations incurred while on the excursion, and downwards by decompression obligations incurred while returning from the excursion. Open-circuit or rebreather scuba has the advantage of mobility, but it is critical to the safety of a saturation diver to be able to get back to the habitat, as surfacing directly from saturation is likely to cause severe and probably fatal decompression sickness. For this reason, in most of the programs, signs and guidelines are installed around the habitat in order to prevent divers from getting lost. Umbilicals or airline hoses are safer, as the breathing gas supply is unlimited and the hose is a guideline back to the habitat, but they restrict freedom of movement and can become tangled.[7] The horizontal extent of excursions is limited by the scuba air supply or the length of the umbilical. The distances above and below the level of the habitat are also limited, and depend on the depth of the habitat and the associated saturation of the divers. The volume of the underwater environment available for excursions thus takes the shape of a vertical-axis cylinder centred on the habitat. As an example, in the Tektite I program the habitat was located at a depth of 13.1 metres (43 ft); excursions were limited vertically to between 6.7 metres (22 ft) of depth (6.4 m above the habitat) and 25.9 metres (85 ft) (12.8 m below the habitat level), and horizontally to a distance of 549 metres (1,801 ft) from the habitat[5] (a small numeric sketch of this envelope follows the historical overview below).

The history of underwater habitats follows on from the earlier development of diving bells and caissons, and, since long exposure to a hyperbaric environment results in saturation of the body tissues with the ambient inert gases, it is also closely connected to the history of saturation diving. The original inspiration for the development of underwater habitats was the work of George F. Bond, who investigated the physiological and medical effects of hyperbaric saturation in the Genesis project between 1957 and 1963. Edwin Albert Link started the Man-in-the-Sea project in 1962, which exposed divers to hyperbaric conditions underwater in a diving chamber, culminating in the first aquanaut, Robert Sténuit, spending over 24 hours at a depth of 200 feet (61 m).[5] Also inspired by Genesis, Jacques-Yves Cousteau conducted the first Conshelf project in France in 1962, where two divers spent a week at a depth of 10 metres (33 ft), followed in 1963 by Conshelf II at 11 metres (36 ft) for a month and 25 metres (82 ft) for two weeks.[8] In June 1964, Robert Sténuit and Jon Lindbergh spent 49 hours at 126 m in Link's Man-in-the-Sea II project; the habitat was an inflatable structure called SPID. This was followed by a series of underwater habitats in which people stayed for several weeks at great depths.
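As promised above, here is a small numeric sketch of the Tektite I excursion envelope: a vertical-axis cylinder of water around the habitat, bounded above and below by decompression limits and horizontally by distance from the habitat. The helper function and test points are illustrative, not from the source:

    # Tektite I excursion envelope: a vertical-axis cylinder centred on the habitat.
    HABITAT_DEPTH = 13.1  # m
    MIN_DEPTH = 6.7       # m, shallowest allowed point (6.4 m above the habitat)
    MAX_DEPTH = 25.9      # m, deepest allowed point (12.8 m below the habitat)
    MAX_RANGE = 549.0     # m, horizontal limit from the habitat

    def excursion_allowed(depth_m, horizontal_m):
        """True if this depth and horizontal distance lie inside the envelope."""
        return MIN_DEPTH <= depth_m <= MAX_DEPTH and horizontal_m <= MAX_RANGE

    print(excursion_allowed(20.0, 300.0))  # True: inside the cylinder
    print(excursion_allowed(30.0, 100.0))  # False: below the 25.9 m depth limit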
Sealab II had a usable area of 63 square metres (680 sq ft) and was used at a depth of more than 60 metres (200 ft). Several countries built their own habitats at much the same time, mostly beginning with experiments in shallow waters. In Conshelf III, six aquanauts lived for several weeks at a depth of 100 metres (330 ft). In Germany, the Helgoland UWL was the first habitat to be used in cold water; the Tektite stations were more spacious and technically more advanced. The most ambitious project was Sealab III, a rebuild of Sealab II, which was to be operated at 186 metres (610 ft). When one of the divers died in the preparatory phase due to human error, all similar projects of the United States Navy were terminated. Internationally, large-scale projects were carried out but, except for the La Chalupa research laboratory, not extended, so the subsequent habitats were smaller and designed for shallower depths. The race for greater depths, longer missions, and technical advances seemed to have come to an end. For reasons such as lack of mobility, lack of self-sufficiency, a shifting focus to space travel, and the transition to surface-based saturation systems, interest in underwater habitats decreased, resulting in a noticeable decline in major projects after 1970. In the mid-eighties, the Aquarius habitat was built in the style of Sealab and Helgoland, and it is still in operation today.

The first aquanaut was Robert Sténuit, in the Man-in-the-Sea I project run by Edwin A. Link. On 6 September 1962, he spent 24 hours and 15 minutes at a depth of 61 metres (200 ft) in a steel cylinder, making several excursions. In June 1964, Sténuit and Jon Lindbergh spent 49 hours at a depth of 126 metres (413 ft) in the Man-in-the-Sea II program. The habitat consisted of a submerged portable inflatable dwelling (SPID).

Conshelf, short for Continental Shelf Station, was a series of undersea living and research stations undertaken by Jacques Cousteau's team in the 1960s. The original design was for five of these stations to be submerged to a maximum depth of 300 metres (1,000 ft) over the decade; in reality only three were completed, with a maximum depth of 100 metres (330 ft). Much of the work was funded in part by the French petrochemical industry, which, along with Cousteau, hoped that such colonies could serve as base stations for the future exploitation of the sea. Such colonies did not find a productive future, however, as Cousteau later repudiated his support for such exploitation of the sea and put his efforts toward conservation. It was also found in later years that industrial tasks underwater could be more efficiently performed by undersea robot devices and men operating from the surface or from smaller lowered structures, made possible by a more advanced understanding of diving physiology. Still, these three undersea living experiments did much to advance knowledge of undersea technology and physiology, and were valuable as "proof of concept" constructs. They also did much to publicize oceanographic research and, ironically, to usher in an age of ocean conservation through building public awareness. Along with Sealab and others, Conshelf spawned a generation of smaller, less ambitious yet longer-term undersea habitats, primarily for marine research purposes.[9][2]

Conshelf I (Continental Shelf Station), constructed in 1962, was the first inhabited underwater habitat.
Developed by Cousteau to record basic observations of life underwater, Conshelf I was submerged in 10 metres (33 ft) of water near Marseille, and the first experiment involved a team of two spending seven days in the habitat. The two oceanauts, Albert Falco and Claude Wesly, were expected to spend at least five hours a day outside the station, and were subject to daily medical exams.[citation needed]

Conshelf Two, the first ambitious attempt for men to live and work on the sea floor, was launched in 1963. In it, a half-dozen oceanauts lived 10 metres (33 ft) down in the Red Sea off Sudan in a starfish-shaped house for 30 days. The undersea living experiment also had two other structures: a submarine hangar that housed a small two-man submarine named SP-350 Denise, often referred to as the "diving saucer" for its resemblance to a science-fiction flying saucer, and a smaller "deep cabin" where two oceanauts lived at a depth of 30 metres (100 ft) for a week. They were among the first to breathe heliox, a mixture of helium and oxygen, avoiding the normal nitrogen/oxygen mixture which, when breathed under pressure, can cause narcosis. The deep cabin was also an early effort in saturation diving, in which the aquanauts' body tissues were allowed to become totally saturated by the helium in the breathing mixture, a result of breathing the gases under pressure. The necessary decompression from saturation was accelerated by using oxygen-enriched breathing gases.[citation needed] They suffered no apparent ill effects.[citation needed] The undersea colony was supported with air, water, food, power, and all the other essentials of life from a large support team above. Men on the bottom performed a number of experiments intended to determine the practicality of working on the sea floor and were subjected to continual medical examinations. Conshelf II was a defining effort in the study of diving physiology and technology, and captured wide public appeal due to its dramatic "Jules Verne" look and feel. A Cousteau-produced feature film about the effort (World Without Sun) was awarded the Academy Award for Best Documentary the following year.[10]

Conshelf III was initiated in 1965. Six divers lived in the habitat at 102.4 metres (336 ft) in the Mediterranean Sea near the Cap Ferrat lighthouse, between Nice and Monaco, for three weeks. In this effort, Cousteau was determined to make the station more self-sufficient, severing most ties with the surface. A mock oil rig was set up underwater, and divers successfully performed several industrial tasks.[citation needed]

SEALAB I, II, and III were experimental underwater habitats developed by the United States Navy in the 1960s to prove the viability of saturation diving and of humans living in isolation for extended periods of time. The knowledge gained from the SEALAB expeditions helped advance the science of deep-sea diving and rescue, and contributed to the understanding of the psychological and physiological strains humans can endure.[11][12][13] The three SEALABs were part of the United States Navy's Genesis project. Preliminary research work was undertaken by George F. Bond, who began investigations in 1957 to develop theories about saturation diving. Bond's team exposed rats, goats, monkeys, and human beings to various gas mixtures at different pressures. By 1963 they had collected enough data to test the first SEALAB habitat.
[14] The Tektite underwater habitat was constructed by General Electric and was funded by NASA, the Office of Naval Research, and the United States Department of the Interior.[15] On 15 February 1969, four Department of the Interior scientists (Ed Clifton, Conrad Mahnken, Richard Waller, and John VanDerwalker) descended to the ocean floor in Great Lameshur Bay in the United States Virgin Islands to begin an ambitious diving project dubbed "Tektite I". By 18 March 1969, the four aquanauts had established a new world record for saturated diving by a single team. On 15 April 1969, the aquanaut team returned to the surface after performing 58 days of marine scientific studies. More than 19 hours of decompression were needed to return the team safely to the surface.[citation needed] Inspired in part by NASA's budding Skylab program and an interest in better understanding the effectiveness of scientists working under extremely isolated living conditions, Tektite was the first saturation diving project to employ scientists rather than professional divers.[citation needed] The term tektite generally refers to a class of small glassy objects formed by extremely rapid cooling, including objects of celestial origin that strike the sea surface and come to rest on the bottom (note project Tektite's conceptual origins within the U.S. space program).[citation needed]

The Tektite II missions were carried out in 1970. Tektite II comprised ten missions lasting 10 to 20 days, with four scientists and an engineer on each mission. One of these missions included the first all-female aquanaut team, led by Dr. Sylvia Earle. Other scientists participating in the all-female mission included Dr. Renate True of Tulane University, as well as Ann Hartline and Alina Szmant, graduate students at the Scripps Institution of Oceanography. The fifth member of the crew was Margaret Ann Lucas, a Villanova University engineering graduate, who served as habitat engineer. The Tektite II missions were the first to undertake in-depth ecological studies.[16]

Tektite II included 24-hour behavioral and mission observations of each of the missions by a team of observers[17] from the University of Texas at Austin. Selected episodic events and discussions were videotaped using cameras in the public areas of the habitat. Data about the status, location, and activities of each of the 5 members of each mission were collected via keypunch data cards every six minutes during each mission. This information was collated and processed by BellComm[18] and used to support papers on the relative predictability of the behavior patterns of mission participants living in constrained, dangerous conditions for extended periods of time, such as those that might be encountered in crewed spaceflight.[19]

The Tektite habitat was designed and built by the General Electric Space Division at the Valley Forge Space Technology Center in King of Prussia, Pennsylvania. The project engineer responsible for the design of the habitat was Brooks Tenney, Jr., who also served as the underwater habitat engineer on the International Mission, the last mission of the Tektite II project. The program manager for the Tektite projects at General Electric was Dr. Theodore Marton.[citation needed]

Hydrolab was constructed in 1966 at a cost of $60,000[20] ($560,000 in today's currency) and used as a research station from 1970.
The project was funded in part by the National Oceanic and Atmospheric Administration (NOAA). Hydrolab could house four people. Approximately 180 Hydrolab missions were conducted: 100 missions in The Bahamas during the early to mid-1970s, and 80 missions off Saint Croix, U.S. Virgin Islands, from 1977 to 1985. These scientific missions are chronicled in the Hydrolab Journal.[21] Dr. William Fife spent 28 days in saturation, performing physiology experiments on researchers such as Dr. Sylvia Earle.[22][23] The habitat was decommissioned in 1985 and placed on display at the Smithsonian Institution's National Museum of Natural History in Washington, D.C. As of 2017, the habitat is located at the NOAA Auditorium and Science Center at National Oceanic and Atmospheric Administration (NOAA) headquarters in Silver Spring, Maryland.[citation needed]

The Engineering Design and Analysis Laboratory Habitat (Edalhab), a horizontal cylinder 2.6 m high and 3.3 m long, weighing 14 tonnes, was built by students of the Engineering Design and Analysis Laboratory in the US at a cost of $20,000[20] ($187,000 in today's currency). From 26 April 1968, four students spent 48 hours and 6 minutes in this habitat in Alton Bay, New Hampshire. Two further missions followed, to 12.2 m.[24] In the 1972 Edalhab II Florida Aquanaut Research Expedition (FLARE) experiments, the University of New Hampshire and NOAA used nitrox as a breathing gas.[25] In the three FLARE missions the habitat was positioned off Miami at a depth of 13.7 m. The conversion for this experiment increased the weight of the habitat to 23 tonnes.

BAH I (for Biological Institute Helgoland) had a length of 6 m and a diameter of 2 m. It weighed about 20 tons and was intended for a crew of two.[26] The first mission, in September 1968, with Jürgen Dorschel and Gerhard Lauckner at 10 m depth in the Baltic Sea, lasted 11 days. In June 1969 a one-week shallow-water mission took place in Lake Constance. While the habitat was being anchored at 47 m, the structure flooded with the two divers in it and sank to the seabed. It was decided to lift it with the two divers inside, following the necessary decompression profile, and nobody was harmed.[5] BAH I provided valuable experience for the much larger underwater laboratory Helgoland. In 2003 it was taken over as a technical monument by the Technical University of Clausthal-Zellerfeld and in the same year went on display at the Nautineum Stralsund on Kleiner Dänholm island.[27]

The Helgoland underwater laboratory (UWL) is an underwater habitat built in Lübeck, Germany, in 1968; it was the first of its kind in the world built for use in colder waters.[28] The 14-meter-long, 7-meter-diameter UWL allowed divers to spend several weeks under water using saturation diving techniques. The scientists and technicians would live and work in the laboratory, returning to it after every diving session. At the end of their stay they decompressed in the UWL and could resurface without decompression sickness. The UWL was used in the waters of the North and Baltic Seas and, in 1975, on Jeffreys Ledge in the Gulf of Maine, off the coast of New England in the United States.[29][30] At the end of the 1970s it was decommissioned, and in 1998 it was donated to the German Oceanographic Museum, where it can be visited at the Nautineum, a branch of the museum in Stralsund.
Bentos-300 (Bentos minus 300) was a maneuverable Soviet submersible with a diver lockout facility that could be stationed on the seabed. It was able to spend two weeks underwater at a maximum depth of 300 m with about 25 people on board. Although announced in 1966, it had its first deployment in 1977. [ 5 ] There were two vessels in the project. After Bentos-300 sank in the Russian Black Sea port of Novorossiisk in 1992, several attempts to recover it failed. Beginning in November 2011, it was cut up and recovered for scrap over the following six months. The Italian Progetto Abissi habitat, also known as La Casa in Fondo al Mare (Italian for The House at the Bottom of the Sea), was designed by the diving team Explorer Team Pellicano; it consisted of three cylindrical chambers and served as a platform for a television game show. It was deployed for the first time in September 2005 for ten days, and six aquanauts lived in the complex for 14 days in 2007. [ 31 ] The MarineLab underwater laboratory was the longest-serving seafloor habitat in history, having operated continuously from 1984 to 2018 under the direction of aquanaut Chris Olstad at Key Largo, Florida. The seafloor laboratory trained hundreds of individuals in that time, hosting an extensive array of educational and scientific investigations, from United States military investigations to pharmaceutical development. [ 32 ] [ 33 ] Beginning with a project initiated in 1973, MarineLab, then known as Midshipman Engineered & Designed Undersea Systems Apparatus (MEDUSA), was designed and built as part of an ocean engineering student program at the United States Naval Academy under the direction of Dr. Neil T. Monney. In 1983, MEDUSA was donated to the Marine Resources Development Foundation (MRDF), and in 1984 it was deployed on the seafloor in John Pennekamp Coral Reef State Park, Key Largo, Florida. The 2.4-by-4.9-metre (8 by 16 ft) shore-supported habitat supports three or four persons and is divided into a laboratory, a wet-room, and a 1.7-metre-diameter (5 ft 7 in) transparent observation sphere. From the beginning, it has been used by students for observation, research, and instruction. In 1985, it was renamed MarineLab and moved to the 9-metre-deep (30 ft) mangrove lagoon at MRDF headquarters in Key Largo, where it sits at a depth of 8.3 metres (27 ft) with a hatch depth of 6 m (20 ft). The lagoon contains artifacts and wrecks placed there for education and training. From 1993 to 1995, NASA used MarineLab repeatedly to study Controlled Ecological Life Support Systems (CELSS). These education and research programs qualify MarineLab as the world's most extensively used habitat. MarineLab was used as an integral part of the "Scott Carpenter, Man in the Sea" Program. [ 34 ] In 2018 the habitat was retired, restored to its 1985 condition, and placed on public display at the Marine Resources Development Foundation, Key Largo, Florida. [ 33 ] The Aquarius Reef Base is an underwater habitat located 5.4 miles (9 kilometers) off Key Largo in the Florida Keys National Marine Sanctuary. It is deployed on the ocean floor 62 feet (19 m) below the surface, next to a deep coral reef named Conch Reef. Aquarius is one of three undersea laboratories in the world dedicated to science and education. Two additional undersea facilities, also located in Key Largo, Florida, are owned and operated by the Marine Resources Development Foundation.
Aquarius was owned by the National Oceanic and Atmospheric Administration (NOAA) and operated by the University of North Carolina–Wilmington [ 35 ] until 2013, when Florida International University assumed operational control. [ 36 ] Florida International University (FIU) took ownership of Aquarius in October 2014. As part of the FIU Marine Education and Research Initiative, the Medina Aquarius Program is dedicated to the study and preservation of marine ecosystems worldwide and is enhancing the scope and impact of FIU in research, educational outreach, technology development, and professional training. At the heart of the program is the Aquarius Reef Base. [ 37 ] In the early 1970s, Ian Koblick, president of the Marine Resources Development Foundation, developed and operated the La Chalupa [ 38 ] research laboratory, the largest and most technologically advanced underwater habitat of its time. Koblick, who has continued his work as a pioneer in developing advanced undersea programs for ocean science and education, is the co-author of the book Living and Working in the Sea and is considered one of the foremost authorities on undersea habitation. La Chalupa was operated off Puerto Rico. During the habitat's launching for its second mission, a steel cable wrapped around Dr. Lance Rennka's left wrist, shattering his arm, which he subsequently lost to gas gangrene. [ 39 ] In the mid-1980s La Chalupa was transformed into Jules' Undersea Lodge in Key Largo, Florida. Jules' co-developer, Dr. Neil Monney, formerly served as Professor and Director of Ocean Engineering at the U.S. Naval Academy, and has extensive experience as a research scientist, aquanaut and designer of underwater habitats. La Chalupa was used as the primary platform for the Scott Carpenter Man in the Sea Program, [ 40 ] an underwater analog to Space Camp. Unlike Space Camp, which uses simulations, the program had participants perform scientific tasks using actual saturation diving systems. This program, envisioned by Ian Koblick and Scott Carpenter, was directed by Phillip Sharkey with the operational help of Chris Olstad. Also used in the program were the MarineLab Underwater Habitat, the submersible Sea Urchin (designed and built by Phil Nuytten), and an Oceaneering saturation diving system consisting of an on-deck decompression chamber and a diving bell. La Chalupa was the site of the first underwater computer chat, a session hosted on GEnie's Scuba RoundTable (the first non-computing-related area on GEnie) by then-director Sharkey from inside the habitat. Divers from all over the world were able to direct questions to him and to Commander Carpenter. The Scott Carpenter Space Analog Station (SCSAS) was launched near Key Largo on six-week missions in 1997 and 1998. [ 41 ] The station was a NASA project illustrating the analogous science and engineering concepts common to undersea and space missions. During the missions, some 20 aquanauts rotated through the undersea station, including NASA scientists, engineers and film director James Cameron. The SCSAS was designed by NASA engineer Dennis Chamberland. [ 41 ] Lloyd Godson's Biosub was an underwater habitat, built in 2007 for a competition by Australian Geographic. The Biosub [ 42 ] generated its own electricity (using a bike); its own water, using the Air2Water Dragon Fly M18 system; and its own air, using algae that produce oxygen (O2).
The algae were fed using the Cascade High School Advanced Biology Class Biocoil. [ 43 ] The habitat shelf itself was constructed by Trygons Designs. The first underwater habitat built by Jacques Rougerie, Galathée, was launched and immersed on 4 August 1977. [ 44 ] The unique feature of this semi-mobile habitat-laboratory is that it can be moored at any depth between 9 and 60 metres, which gives it the capability of phased integration into the marine environment. The habitat therefore has a limited impact on the marine ecosystem and is easy to position. Galathée was tested by Jacques Rougerie himself. [ 45 ] [ 46 ] Launched for the first time in March 1978, the Aquabulle is an underwater shelter suspended in midwater (between 0 and 60 metres), a mini scientific observatory 2.8 metres high by 2.5 metres in diameter. [ 47 ] The Aquabulle, created and tested by Jacques Rougerie, can accommodate three people for a period of several hours and acts as an underwater refuge. A series of Aquabulles were later built and some are still being used by laboratories. [ 44 ] [ 48 ] The Hippocampe, another underwater habitat created by the French architect Jacques Rougerie, was launched in 1981 to act as a scientific base suspended in midwater using the same method as Galathée. [ 47 ] Hippocampe can accommodate two people on saturation dives to a depth of 12 metres for periods of 7 to 15 days, and was also designed to act as a subsea logistics base for the offshore industry. [ 44 ] Ithaa (Dhivehi for mother of pearl) is the world's only fully glazed underwater restaurant and is located in the Conrad Maldives Rangali Island hotel. [ 49 ] It is accessible via a corridor from above the water and is open to the atmosphere, so there is no need for compression or decompression procedures. Ithaa was built by M.J. Murphy Ltd, and has an unballasted mass of 175 tonnes. [ 50 ] The "Red Sea Star" restaurant in Eilat, Israel, consisted of three modules: an entrance area above the water surface, a restaurant with 62 panorama windows 6 m under water, and a ballast area below. The entire construction weighs about 6,000 tons. The restaurant had a capacity of 105 people. [ 51 ] [ 52 ] It shut down in 2012. [ 53 ] The first part of Eilat's Coral World Underwater Observatory was built in 1975, and it was expanded in 1991 by adding a second underwater observatory connected by a tunnel. The underwater complex is accessible via a footbridge from the shore and a shaft from above the water surface. The observation area is at a depth of approximately 12 m. [ 54 ] The Alpha Deep SeaPod is located off the coast of Puerto Lindo in Portobelo. The pod was commissioned on 5 February 2024. It is currently operational and serves as the residence of its owner. The floating residence provides living quarters with 360-degree panoramic views, while the underwater capsule serves as a 300-square-foot (about 28-square-meter) functional living space. [ 55 ] [ 56 ]
https://en.wikipedia.org/wiki/Underwater_habitat
In anatomy, Underwood's septa (or maxillary sinus septa, singular septum) [ 1 ] [ 2 ] are fin-shaped projections of bone that may exist in the maxillary sinus, first described in 1910 by Arthur S. Underwood, an anatomist at King's College in London. [ 3 ] The presence of septa at or near the floor of the sinus is of interest to the dental clinician when proposing or performing sinus floor elevation procedures because of an increased likelihood of surgical complications, such as tearing of the Schneiderian membrane. [ 4 ] The prevalence of Underwood's septa in relation to the floor of the maxillary sinus has been reported at nearly 32%. [ 5 ] Underwood divided the maxillary sinus into three regions relating to zones of distinct tooth eruption activity: anterior (corresponding to the premolars), middle (corresponding to the first molar) and posterior (corresponding to the second molar). Thus, he asserted, these septa always arise between teeth and never opposite the middle of a tooth. [ 3 ] Different studies reveal different predispositions for the presence of septa based on sinus region. Recent studies have classified two types of maxillary sinus septa: primary and secondary. Primary septa are those initially described by Underwood, which form as a result of the floor of the sinus sinking along with the roots of erupting teeth; these primary septa are thus generally found in the sinus corresponding to the space between teeth, as explained by Underwood. Conversely, secondary septa form as a result of irregular pneumatization of the sinus following loss of maxillary posterior teeth. [ 6 ] Sinus pneumatization is a poorly understood phenomenon that results in an increased volume of the maxillary sinus, generally following maxillary posterior tooth loss, at the expense of the bone which used to house the roots of the maxillary posterior teeth.
https://en.wikipedia.org/wiki/Underwood's_septa
An unditching beam is a device used to aid in the recovery of armoured fighting vehicles when they become bogged or "ditched". The device is a beam that is attached to the continuous tracks to provide additional traction so the vehicle can extricate itself from a ditch or from boggy conditions. The unditching beam was first introduced into service during the First World War with the British Mark IV tank. [ 1 ] It is believed the device was designed by Philip Johnson, who was serving as an engineering officer at the British Army's depot at Érin; originally the device weighed one-half long ton (0.51 t) and was constructed of a solid beam of oak with two large steel plates bolted to two sides to provide protection. [ 2 ] When not in use it was stowed on two rails mounted on the roof of the tank that ran the entire length of the vehicle; when employed, the beam was chained to the tank's tracks, giving the vehicle something firm to drive over. [ 2 ] [ 3 ] Unditching beams remain a commonly carried standard ancillary on a number of Russian-produced armoured fighting vehicles. [ 4 ]
https://en.wikipedia.org/wiki/Unditching_beam
Unequal crossing over is a type of gene duplication or deletion event that deletes a sequence in one strand and replaces it with a duplication from its sister chromatid in mitosis or from its homologous chromosome during meiosis. It is a type of chromosomal crossover between homologous sequences that are not paired precisely, so that the exchanged segments do not correspond exactly between the two chromosomes. Along with gene conversion, it is believed to be the main driver for the generation of gene duplications and is a source of mutation in the genome. [ 1 ] During meiosis, the duplicated chromosomes (chromatids) in eukaryotic organisms are attached to each other in the centromere region and are thus paired. The maternal and paternal chromosomes then align alongside each other. During this time, recombination can take place via crossing over of sections of the paternal and maternal chromatids, leading to reciprocal or non-reciprocal recombination. [ 1 ] Unequal crossing over requires a measure of similarity between the sequences for misalignment to occur: the more similar the sequences, the more likely unequal crossing over is to occur. [ 1 ] One of the sequences is thus lost and replaced with a duplication of another sequence. When two sequences are misaligned, unequal crossing over may create a tandem repeat on one chromosome and a deletion on the other. The rate of unequal crossing over increases with the number of repeated sequences around the duplication, because these repeated sequences can pair together, allowing a mismatched crossover point to occur. [ 2 ] Unequal crossing over is the process most responsible for creating regional gene duplications in the genome. [ 1 ] Repeated rounds of unequal crossing over cause the homogenization of the two sequences. With an increase in the number of duplicates, unequal crossing over can lead to dosage imbalance in the genome and can be highly deleterious. [ 1 ] [ 2 ] In unequal crossing over, there can be large sequence exchanges between the chromosomes. Compared with gene conversion, which can only transfer a maximum of 1,500 base pairs, unequal crossing over in yeast rDNA genes has been found to transfer about 20,000 base pairs in a single crossover event. [ 1 ] [ 3 ] Unequal crossover can be followed by the concerted evolution of duplicated sequences. It has been suggested that the longer introns found between the two beta-globin genes are a response to deleterious selection from unequal crossing over in the beta-globin genes. [ 1 ] [ 4 ] Comparisons between alpha-globin genes, which do not have long introns, and beta-globin genes show that alpha-globin has a roughly 50 times higher rate of concerted evolution. When unequal crossing over creates a gene duplication, the duplicate has four possible evolutionary fates, because purifying selection acting on a duplicated copy is not very strong. Since there is a redundant copy, neutral mutations can act on the duplicate. Most commonly, neutral mutations accumulate until the duplicate becomes a pseudogene. If the duplicate copy increases the dosage of the gene product, the duplicate may be retained as a redundant copy. Neofunctionalization is also a possibility: the duplicated copy acquires a mutation that gives it a different function than its ancestor. If both copies acquire mutations, subfunctionalization may occur.
This happens when each of the duplicated sequences takes on a more specialized function than the ancestral copy. [ 5 ] Gene duplications are the main reason for increases in genome size, and as unequal crossing over is the main mechanism for gene duplication, it is the most common regional duplication event increasing the size of the genome and thus a major contributor to genome size evolution. When viewing the genome of a eukaryote, a striking observation is the large amount of tandem, repetitive DNA sequences that make up a large portion of the genome. For example, over 50% of the Dipodomys ordii genome is made up of three specific repeats. Drosophila virilis has three sequences that make up 40% of the genome, and 35% of the Absidia glauca genome consists of repetitive DNA sequences. [ 1 ] These short sequences have no selection pressure acting on them, and the frequency of the repeats can be changed by unequal crossing over. [ 6 ]
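To make the mechanism concrete, the following minimal sketch (in Python, with invented toy sequences rather than biological data) shows how a misaligned crossover between two repeat-bearing chromatids yields one recombinant carrying a tandem duplication and a reciprocal recombinant carrying a deletion.

```python
# Toy model of unequal crossing over between two chromatids.
# Each chromatid carries a repeated unit "R"; misalignment at the
# repeats shifts the crossover point, so the recombinant products
# differ in length: one gains a tandem copy, the other loses one.
# Sequence names and layout are illustrative, not biological data.

chromatid_a = ["L", "R", "R", "T"]   # left flank, two repeats, right flank
chromatid_b = ["L", "R", "R", "T"]

def unequal_crossover(a, b, break_a, break_b):
    """Exchange tails at different breakpoints on each chromatid."""
    product_1 = a[:break_a] + b[break_b:]
    product_2 = b[:break_b] + a[break_a:]
    return product_1, product_2

# Misalignment: chromatid A breaks after its second repeat (index 3),
# chromatid B breaks after its first repeat (index 2).
dup, dele = unequal_crossover(chromatid_a, chromatid_b, 3, 2)

print(dup)   # ['L', 'R', 'R', 'R', 'T'] -> tandem duplication
print(dele)  # ['L', 'R', 'T']           -> reciprocal deletion
```

The greater the number of repeats flanking the breakpoint, the more registers in which the two chromatids can pair, which is why the rate of unequal crossing over grows with repeat copy number.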
https://en.wikipedia.org/wiki/Unequal_crossing_over
The unexpected red theory is a design theory asserting that incorporating red-colored home accessories can enhance interior design. [ 1 ] Coined by Taylor Migliazzo Simon, a designer based in Williamsburg, Brooklyn, [ 2 ] the theory first attained popularity on the social media platform TikTok in January 2024 and eventually received widespread coverage across various design magazines. [ 3 ] Design journalists and publications have created listicles to highlight interior spaces, such as houses and hotels, that reflect the theory. [ 4 ] In Real Simple, journalist Morgan Noll wrote that "red is one of the most visible colors in the color spectrum so it has a strong ability to grab attention and attract the eye." [ 5 ] In The Daily Telegraph, the designer Sophie Robinson cautioned that "you can't just add red to any room – it's just not that simple. It can look jarring." [ 6 ]
https://en.wikipedia.org/wiki/Unexpected_red_theory
In quantum mechanics, an unextendible product basis is a set of orthogonal, non-entangled state vectors for a multipartite system with the property that local operations and classical communication are insufficient to distinguish one member of the set from the others. Because these states are product states and yet local measurements cannot tell them apart, they are sometimes said to exhibit "nonlocality without entanglement". [ 1 ] [ 2 ] They provide examples of non-entangled states that pass the Peres–Horodecki criterion for entanglement. [ 3 ]
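As a concrete illustration, the following sketch constructs the well-known three-qubit "Shifts" unextendible product basis from the literature and verifies numerically that its four members are mutually orthogonal product states. This particular construction is a standard example rather than something stated in the article above, so treat it as supplementary.

```python
import numpy as np

# Single-qubit basis states and their diagonal superpositions.
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])
plus = (zero + one) / np.sqrt(2)
minus = (zero - one) / np.sqrt(2)

def product(*factors):
    """Tensor product of single-qubit states -> multipartite product state."""
    state = factors[0]
    for f in factors[1:]:
        state = np.kron(state, f)
    return state

# The "Shifts" UPB on three qubits: every pair of members differs on some
# qubit by locally orthogonal states, yet no further product state is
# orthogonal to all four.
upb = [
    product(zero, one, plus),
    product(one, plus, zero),
    product(plus, zero, one),
    product(minus, minus, minus),
]

# Verify pairwise orthogonality of the four product states.
for i in range(len(upb)):
    for j in range(i + 1, len(upb)):
        overlap = abs(np.dot(upb[i], upb[j]))
        assert overlap < 1e-12, (i, j, overlap)
print("All four product states are mutually orthogonal.")
```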
https://en.wikipedia.org/wiki/Unextendible_product_basis
An unfinished building is a building (or other architectural structure, such as a bridge, a road or a tower) where construction work was abandoned or put on hold at some stage, or that exists only as a design. The term may also refer to buildings that are currently being built, particularly those that have been delayed or at which construction work progresses extremely slowly. Many construction or engineering projects have remained unfinished at various stages of development. The work may be finished as a blueprint or whiteprint and never be realised, or be abandoned during construction. One of the best-known perennially incomplete buildings is Antoni Gaudí's basilica Sagrada Família in Barcelona. [ 1 ] It has been under construction since 1882 and was planned to be completed by 2026, the centenary of Gaudí's death. [ 2 ] There are numerous unfinished buildings that remain partially constructed in countries around the world, some of which can be used in their incomplete state, while others remain as a mere shell. Some projects are intentionally left with an unfinished appearance, particularly the follies of the late 16th to 18th century. Some buildings are in a cycle of near-perpetual construction, with work lasting for decades or even centuries. Antoni Gaudí's Sagrada Família in Barcelona, Spain, has been under construction for around 140 years, having started in the 1880s. Work was delayed by the Spanish Civil War, during which the original models and parts of the building itself were destroyed. Today, even with portions of the basilica incomplete, it is still the most popular tourist destination in Barcelona, with 1.5 million visitors every year. Gaudí spent 40 years of his life overseeing the project and is buried in the crypt. [ 3 ] Germany's Cologne Cathedral took even longer to complete; construction started in 1248 and finished in 1880, a total of 632 years. [ 4 ] Many buildings were never completed and remain in that state. In other cases, construction work proceeds extremely slowly, so that structures remain incomplete for long periods. There are also roads, railway lines and canals which remain unfinished. Many projects do not get to the construction phase, halted during or after planning. Ludwig II of Bavaria commissioned several designs for Castle Falkenstein, with the fourth plan being vastly different from the first. The first two designs were turned down, one because of costs and one because the design displeased Ludwig, and the third designer withdrew from the project. The fourth and final plan was completed and some infrastructure was prepared for the site, but Ludwig died before construction work began. [ 5 ] The Palace of Whitehall, at the time the largest palace in Europe, was mostly destroyed by a fire in 1698. Sir Christopher Wren, most famous for his role in rebuilding several churches after the Great Fire of London in 1666, sketched a proposed replacement for part of the palace, but financial constraints prevented construction. Even without being constructed, many architectural designs and ideas have had a lasting influence. The Russian constructivism movement started in 1914 and was taught in the Bauhaus and other architecture schools, leading to numerous architects integrating it into their style. Computer technology has allowed 3D representations of projects to be shown before they are built. In some cases the construction is never started and the computer model is the nearest that anyone will ever get to seeing the finished piece.
For example, in 1999 Kent Larson's exhibition "Unbuilt Ruins: Digital Interpretations of Eight Projects by Louis I. Kahn" showed computer images of designs completed by the noted architect Louis Kahn but never built. [ 8 ] Computer simulations can also be used to create prototypes of projects and test them before they are actually built; this has allowed the design process to become more successful and efficient.
https://en.wikipedia.org/wiki/Unfinished_building
The unfolded protein response (UPR) is a cellular stress response related to endoplasmic reticulum (ER) stress. [ 1 ] It has been found to be conserved among mammalian species, [ 2 ] as well as in yeast [ 1 ] [ 3 ] and worm organisms. The UPR is activated in response to an accumulation of unfolded or misfolded proteins in the lumen of the endoplasmic reticulum. In this scenario, the UPR has three aims: initially to restore normal function of the cell by halting protein translation, degrading misfolded proteins, and activating the signaling pathways that lead to increased production of the molecular chaperones involved in protein folding. If these objectives are not achieved within a certain time span, or the disruption is prolonged, the UPR aims towards apoptosis. Sustained overactivation of the UPR has been implicated in prion diseases as well as several other neurodegenerative diseases, and inhibiting the UPR could become a treatment for those diseases. [ 4 ] Diseases amenable to UPR inhibition include Creutzfeldt–Jakob disease, Alzheimer's disease, Parkinson's disease, and Huntington's disease. [ 5 ] [ 6 ] The term protein folding incorporates all the processes involved in the production of a protein after the nascent polypeptide has been synthesized by the ribosome. Proteins destined to be secreted or sorted to other cell organelles carry an N-terminal signal sequence that interacts with a signal recognition particle (SRP). The SRP leads the whole complex (ribosome, RNA, polypeptide) to the ER membrane. Once the sequence has "docked", translation of the protein continues, with the resultant strand being fed through the polypeptide translocator directly into the ER. Protein folding commences as soon as the polypeptide enters the luminal environment, even as translation of the remaining polypeptide continues. Protein folding steps involve a range of enzymes and molecular chaperones to coordinate and regulate reactions, in addition to a range of substrates required for the reactions to take place. The most important of these to note are N-linked glycosylation and disulfide bond formation. N-linked glycosylation occurs as soon as the protein sequence passes into the ER through the translocon, where it is glycosylated with a sugar molecule that forms the key ligand for the lectin molecules calreticulin (CRT; soluble in the ER lumen) and calnexin (CNX; membrane-bound). [ 7 ] Favoured by the highly oxidizing environment of the ER, protein disulfide isomerases facilitate the formation of disulfide bonds, which confer structural stability on the protein so that it can withstand adverse conditions such as extremes of pH and degradative enzymes. The ER is capable of recognizing misfolded proteins without disruption to its own functioning. The aforementioned sugar molecule remains the means by which the cell monitors protein folding, as a misfolded protein becomes characteristically devoid of glucose residues, targeting it for identification and re-glycosylation by the enzyme UGGT (UDP-glucose:glycoprotein glucosyltransferase). [ 7 ] If this fails to restore the normal folding process, exposed hydrophobic residues of the misfolded protein are bound by the protein glucose-regulated protein 78 (Grp78), a member of the 70 kDa heat shock protein family, [ 8 ] which prevents the protein from further transit and secretion.
[ 9 ] Where circumstances continue to cause a particular protein to misfold, the protein is recognized as posing a threat to the proper functioning of the ER, as misfolded proteins can aggregate with one another and accumulate. In such circumstances the protein is guided through endoplasmic reticulum-associated degradation (ERAD). The chaperone EDEM guides the retrotranslocation of the misfolded protein back into the cytosol in transient complexes with PDI and Grp78. [ 10 ] Here it enters the ubiquitin-proteasome pathway, as it is tagged by multiple ubiquitin molecules targeting it for degradation by cytosolic proteasomes. Successful protein folding requires a tightly controlled environment of substrates, including glucose to meet the metabolic energy requirements of the functioning molecular chaperones; calcium, which is stored bound to resident molecular chaperones; and redox buffers that maintain the oxidizing environment required for disulfide bond formation. [ 11 ] Unsuccessful protein folding can be caused by HLA-B27, disturbing the balance of important signaling proteins (IL-10 and TNF). At least some of these disturbances depend on correct HLA-B27 folding. [ 12 ] However, where circumstances cause a more global disruption to protein folding that overwhelms the ER's coping mechanisms, the UPR is activated. The molecular chaperone BiP/Grp78 has a range of functions within the ER. It maintains specific transmembrane receptor proteins involved in initiation of the downstream signalling of the UPR in an inactive state by binding to their luminal domains. An overwhelming load of misfolded proteins, or simply the over-expression of proteins (e.g. IgG), [ 13 ] requires more of the available BiP/Grp78 to bind to the exposed hydrophobic regions of these proteins, and consequently BiP/Grp78 dissociates from the receptor sites to meet this requirement. Dissociation from the intracellular receptor domains allows them to become active. PERK dimerizes with BiP in resting cells and oligomerizes in ER-stressed cells. Although this is traditionally the accepted model, doubts have been raised over its validity. It has been argued that the genetic and structural evidence supporting the model simply shows BiP dissociation to be merely correlated with Ire1 activation, rather than specifically causing it. [ 14 ] An alternative model has been proposed, whereby unfolded proteins interact directly with the ER-lumenal domain of Ire1, causing oligomerization and trans-autophosphorylation. [ 14 ] However, these models are not mutually exclusive: it is possible that both direct interaction of Ire1 with unfolded proteins and dissociation of BiP from Ire1 contribute to activation of the Ire1 pathway. The initial phases of UPR activation have two key roles. The first is translational attenuation and cell cycle arrest, mediated by the PERK receptor. This occurs within minutes to hours of UPR activation, to prevent further translational loading of the ER. PERK (protein kinase RNA-like endoplasmic reticulum kinase) activates itself by oligomerization and autophosphorylation of the free luminal domain. The activated cytosolic domain causes translational attenuation by directly phosphorylating the α subunit of eIF2, the regulating initiator of the mRNA translation machinery. [ 15 ] This also produces translational attenuation of the protein machinery involved in running the cell cycle, producing cell cycle arrest in the G1 phase. [ 16 ] PERK deficiency may have a significant impact on physiological states associated with ER stress.
The second key role is increased production of proteins involved in the functions of the UPR. UPR activation results in upregulation of proteins involved in chaperoning misfolded proteins, protein folding and ERAD, including further production of Grp78. Ultimately this expands the molecular machinery by which the cell can deal with the misfolded protein load. The transmembrane receptor proteins that initiate these responses have been identified as PERK, Ire1 and ATF6. The aim of these responses is to remove the accumulated protein load whilst preventing any further addition to the stress, so that normal function of the ER can be restored as soon as possible. If the UPR pathway is activated in an abnormal fashion, such as when obesity triggers chronic ER stress and the pathway is constitutively active, this can lead to insensitivity to insulin signaling and thus insulin resistance. Individuals suffering from obesity have an elevated demand placed on the secretory and synthesis systems of their cells. This activates cellular stress signaling and inflammatory pathways because of the abnormal conditions disrupting ER homeostasis. A downstream effect of the ER stress is a significant decrease in insulin-stimulated phosphorylation of tyrosine residues of insulin receptor substrate 1 (IRS-1), which is the substrate for the insulin receptor tyrosine kinase. C-Jun N-terminal kinase (JNK) is also activated at high levels by IRE-1α, which itself is phosphorylated to become activated in the presence of ER stress. Subsequently, JNK phosphorylates serine residues of IRS-1, and thus inhibits insulin receptor signaling. IRE-1α also recruits tumor necrosis factor receptor-associated factor 2 (TRAF2). This kinase cascade, dependent on IRE-1α and JNK, mediates ER stress-induced inhibition of insulin action. [ 23 ] Obesity provides chronic cellular stimuli for the UPR pathway as a result of the stresses and strains placed upon the ER, and without restoration of normal cellular responsiveness to insulin hormone signaling, an individual becomes very likely to develop type 2 diabetes. Skeletal muscles are sensitive to physiological stress, as exercise can impair ER homeostasis. This causes the expression of ER chaperones to be induced by the UPR in response to exercise-induced ER stress. Muscular contraction during exercise causes calcium to be released from the sarcoplasmic reticulum (SR), a specialized ER network in skeletal muscles. This calcium then interacts with calcineurin and calcium/calmodulin-dependent kinases that in turn activate transcription factors. These transcription factors then proceed to alter the expression of exercise-regulated muscle genes. PGC-1α, a transcriptional coactivator, is a key factor involved in mediating the UPR in a tissue-specific manner in skeletal muscles by coactivating ATF6α. PGC-1α is therefore expressed in muscles after acute and long-term exercise training. The function of this coactivator is to increase the number and function of mitochondria, and to induce a switch of skeletal fibers to slow oxidative muscle fibers, as these are fatigue-resistant. Therefore, this UPR pathway mediates changes in muscles that have undergone endurance training, making them more resistant to fatigue and protecting them from future stress. [ 24 ] In conditions of prolonged stress, the goal of the UPR changes from one that promotes cellular survival to one that commits the cell to a pathway of apoptosis.
Proteins downstream of all three UPR receptor pathways have been identified as having pro-apoptotic roles. However, the point at which the 'apoptotic switch' is activated has not yet been determined, though it is logical that it should lie beyond a certain time period in which resolution of the stress has not been achieved. The two principal UPR receptors involved are Ire1 and PERK. By binding the protein TRAF2, Ire1 activates a JNK signaling pathway, [ 25 ] at which point human procaspase 4 is believed to cause apoptosis by activating downstream caspases. Although PERK is recognised to produce a translational block, certain genes can bypass this block. An important example is the proapoptotic protein CHOP (CCAAT/-enhancer-binding protein homologous protein), which is upregulated downstream of the bZIP transcription factor ATF4 (activating transcription factor 4) and is uniquely responsive to ER stress. [ 26 ] CHOP causes downregulation of the anti-apoptotic mitochondrial protein Bcl-2, [ 27 ] favouring a pro-apoptotic drive at the mitochondria by proteins that cause mitochondrial damage, cytochrome c release and caspase 3 activation. Diseases amenable to UPR inhibition include Creutzfeldt–Jakob disease, Alzheimer's disease, Parkinson's disease, and Huntington's disease. [ 28 ] Endoplasmic reticulum stress has been reported to play a major role in the induction and progression of non-alcoholic fatty liver disease (NAFLD). Rats fed a high-fat diet showed increased levels of the ER stress markers CHOP, XBP1, and GRP78. ER stress is known to activate hepatic de novo lipogenesis, inhibit VLDL secretion, promote insulin resistance and inflammatory processes, and promote cell apoptosis. It thus increases fat accumulation and worsens NAFLD towards a more serious hepatic state. [ 29 ] Zingiber officinale (ginger) extract and omega-3 fatty acids have been reported to ameliorate endoplasmic reticulum stress in a non-alcoholic fatty liver rat model. [ 29 ] As stated above, the UPR can also be activated as a compensatory mechanism in disease states. For instance, the UPR is up-regulated in an inherited form of dilated cardiomyopathy caused by a mutation in the gene encoding the protein phospholamban. [ 30 ] Further activation proved therapeutic in a human induced pluripotent stem cell model of phospholamban-mutant dilated cardiomyopathy. [ 30 ]
https://en.wikipedia.org/wiki/Unfolded_protein_response
Ungiminorine is an acetylcholinesterase inhibitor isolated from Narcissus. [ 1 ]
https://en.wikipedia.org/wiki/Ungiminorine
UniBRITE-1 is, along with TUGSAT-1, one of the first two Austrian satellites to be launched. Along with TUGSAT, it operates as part of the BRIght Target Explorer (BRITE) constellation of satellites. The two spacecraft were launched aboard the same rocket, an Indian PSLV-CA, in February 2013. UniBRITE is an optical astronomy spacecraft operated by the University of Vienna as part of the BRIght Target Explorer programme. UniBRITE-1 was manufactured by the Space Flight Laboratory (SFL) of the University of Toronto Institute for Aerospace Studies (UTIAS), based on the Generic Nanosatellite Bus, and had a mass at launch of 7 kilograms (15 lb) [ 2 ] (plus another 7 kg for the XPOD separation system). The satellite is used, along with five other spacecraft, to conduct photometric observations of stars brighter than apparent magnitude 4.0 as seen from Earth. [ 3 ] UniBRITE-1 was one of the first two BRITE satellites to be launched, along with the Austrian TUGSAT-1 spacecraft. Four more satellites, two Canadian and two Polish, were launched at later dates. UniBRITE-1 observes stars in the red colour range, whereas TUGSAT-1 observes in blue. The two-colour option allows geometrical and thermal effects to be separated in the analysis of the observed phenomena; the much larger satellites MOST and CoRoT do not have this colour option. It is extremely helpful in the diagnosis of the internal structure of stars. [ 4 ] UniBRITE-1 photometrically measures low-level oscillations and temperature variations in stars brighter than visual magnitude 4.0, with unprecedented precision and temporal coverage not achievable through ground-based methods. [ 2 ] The UniBRITE-1 satellite, along with TUGSAT-1 and AAUSAT3, was launched through the University of Toronto's Nanosatellite Launch System programme, named NLS-8. [ 5 ] The NLS-8 launch was subcontracted to the Indian Space Research Organisation, which launched the satellites using the PSLV-C20 rocket from the First Launch Pad at the Satish Dhawan Space Centre. [ 6 ] The NLS spacecraft were secondary payloads on the rocket, whose primary mission was to deploy the Indo-French SARAL ocean research satellite. Canada's Sapphire and NEOSSat-1 spacecraft, and the United Kingdom's STRaND-1, were also carried by the same rocket under separate launch contracts. [ 2 ] The launch took place at 12:31 UTC on 25 February 2013, and the rocket deployed all of its payloads successfully. [ 7 ] [ 8 ]
https://en.wikipedia.org/wiki/UniBRITE-1
UniFirst Corporation is a uniform rental company based in Wilmington, Massachusetts, United States, that manufactures, sells, and rents uniforms and protective clothing. UniFirst employs more than 14,000 people and has over 260 facilities in the United States, Canada, and Europe, including customer service centers, nuclear decontamination facilities, cleanroom locations, distribution centers, and manufacturing plants. [ 2 ] [ 3 ] [ 4 ] [ 5 ] UniFirst was founded in 1936 by the Croatti family, [ 6 ] under the name of the National Overall Dry Cleaning Company. [ 7 ] The company began in a horse barn that had been converted into a makeshift laundry, and its equipment consisted of a single washing machine and a delivery truck. It served Boston-area factory workers and other laborers, whose heavily soiled work clothing needed to be cleaned frequently. [ 7 ] The National Overall Dry Cleaning Company was incorporated in Massachusetts on October 6, 1950. [ 8 ] In the 1980s, UniFirst was sued by residents of Woburn, Massachusetts, in a class-action lawsuit. The residents alleged that UniFirst, along with two other firms, had released pollution that leaked into the water supply, and that this was a cause of increased instances of leukemia in the town. UniFirst settled with the residents without going to trial, for a sum of one million dollars. [ 9 ] This episode was featured in the non-fiction book A Civil Action by Jonathan Harr, later adapted into a film of the same name. [ 10 ] As of 2009, UniFirst's environmental record had improved; it has received awards for its water treatment processes from the Missouri Water Environment Association and the Water and Wastewater Utility Special Service Division of Austin, Texas, among others. [ 11 ] In 1991 Ronald Croatti became the chief executive officer of the company. He continued his rise in 1995, when he became the company president, and again in 2002, when he became chairman of the board. [ 12 ] In 2011, UniFirst was featured in an episode of the reality television series Undercover Boss. [ 13 ] In May 2017, Ronald Croatti died and Steven S. Sintros became president and CEO. [ 14 ] UniFirst supplies uniforms and protective clothing, as well as restroom and cleaning products such as floor mats, mops, air fresheners and soap. [ 6 ] [ 15 ] Products that it manufactures in-house include work shirts, work pants, outerwear, and flame-resistant work apparel. [ 15 ] It also manufactures a majority of the garments it places in rental programs. [ 15 ] UniFirst subsidiary companies include Green Guard, UniTech Services Group, and UniClean. Green Guard is a corporate supplier of first aid equipment; [ 16 ] UniTech provides laundering and decontamination services to the nuclear industry; [ 17 ] and UniClean supplies clothing and services related to cleanrooms. [ 6 ] [ 18 ] UniFirst also has a Canadian uniform rental subsidiary called UniFirst Canada. [ 19 ]
https://en.wikipedia.org/wiki/UniFirst
UniFrac, a shortened version of unique fraction metric, is a distance metric used for comparing biological communities. It differs from dissimilarity measures such as Bray-Curtis dissimilarity in that it incorporates information on the relative relatedness of community members, by incorporating phylogenetic distances between observed organisms in the computation. Both weighted (quantitative) and unweighted (qualitative) variants of UniFrac [ 1 ] are widely used in microbial ecology, where the former accounts for the abundance of observed organisms, while the latter only considers their presence or absence. The method was devised by Catherine Lozupone, when she was working with Rob Knight [ 2 ] of the University of Colorado at Boulder in 2005. [ 3 ] [ 4 ] The distance is calculated between pairs of samples (each sample represents an organismal community). All taxa found in one or both samples are placed on a phylogenetic tree. A branch leading to taxa from both samples is marked as "shared" and branches leading to taxa which appear in only one sample are marked as "unshared". The distance between the two samples is then calculated as the fraction of the total branch length that is unshared: U = unshared / ( shared + unshared ) {\displaystyle U={\text{unshared}}/({\text{shared}}+{\text{unshared}})} , that is, the sum of unshared branch lengths divided by the sum of all branch lengths. This definition satisfies the requirements of a distance metric, being non-negative, zero only when entities are identical, symmetric, and conforming to the triangle inequality. If there are several different samples, a distance matrix can be created by making a tree for each pair of samples and calculating their UniFrac measure. Subsequently, standard multivariate statistical methods such as data clustering and principal co-ordinates analysis can be used. One can determine the statistical significance of the UniFrac distance between two samples using Monte Carlo simulations. By randomizing the sample classification of each taxon on the tree (leaving the branch structure unchanged), one can obtain a distribution of UniFrac distance values; from this, a p-value can be assigned to the actual distance between the samples. Additionally, there is a weighted version of the UniFrac metric which accounts for the relative abundance of each of the taxa within the communities. This is commonly used in metagenomic studies, where the number of metagenomic reads can be in the tens of thousands, and it is appropriate to 'bin' these reads into operational taxonomic units, or OTUs, which can then be dealt with as taxa within the UniFrac framework. In 2012, a generalized UniFrac version, [ 5 ] which unifies the weighted and unweighted UniFrac distances in a single framework, was proposed. The authors argued that the weighted and unweighted UniFrac distances place too much emphasis on either abundant lineages or rare lineages, respectively, leading to "loss of power when the important composition change occurs in moderately abundant lineages". The generalized UniFrac distance aims to address this limitation by down-weighting the emphasis on abundant or rare lineages.
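As an illustration of the unweighted calculation, here is a minimal sketch in Python. The tree representation (each branch given as a length plus the set of samples whose taxa lie below it) and all numbers are invented for the example; a production analysis would normally use an established implementation such as the one in scikit-bio.

```python
# Minimal sketch of the unweighted UniFrac distance between two samples.
# Each branch of a (hypothetical) phylogenetic tree is represented as
# (branch_length, set_of_samples_with_descendant_taxa); a real analysis
# would derive these pairs from an actual tree structure.
branches = [
    (0.9, {"A"}),        # branch leading only to taxa observed in sample A
    (0.5, {"B"}),        # branch leading only to taxa observed in sample B
    (1.2, {"A", "B"}),   # branch shared by both samples
    (0.4, {"A", "B"}),
]

def unweighted_unifrac(branches):
    """Fraction of total branch length leading to taxa of only one sample."""
    unshared = sum(length for length, samples in branches if len(samples) == 1)
    total = sum(length for length, _ in branches)
    return unshared / total

print(unweighted_unifrac(branches))  # (0.9 + 0.5) / 3.0 = 0.4667
```

The Monte Carlo significance test described above amounts to repeating this calculation many times with the sample labels shuffled among the taxa, then comparing the observed distance with the resulting null distribution.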
https://en.wikipedia.org/wiki/UniFrac
UniHan IME is an input method based on the framework of IIIMF, developed by Hong Kong Sun Wah Hi-Tech Ltd. UniHan IME is an input method interface that maps keyboard key strings to Han characters in the latest version of the Unicode table. UniHan is the CJKV character section, which occupies more than half the code space of the Unicode table; more than 75,000 such characters were encoded in version 6.0.0, released in 2010. Chinese, Japanese, Korean and Vietnamese have shared Han characters for naming for more than a thousand years. The input methods for Han characters in Unicode are mainly keyboard typing, mouse pointing on screen, or handwriting on a pad. The most popular methods are the pinyin keyboard method and the handwriting method. A complete font set for Unihan version 6.0.0 is yet to come, and so is the Unihan IME. A similar IME, called 8 Steps Unihan, was developed by the 8 Steps Unihan company in Melbourne, Australia. [ 1 ] The 8StepsA font, coupled with the Microsoft Windows 10 SimSunExtB font, is able to display all the characters in Unihan 10.0, which includes the Extension F character set. Glyphs that are repeated have been linked together, and only one of the linked codes is used by the IME, so that all displayable characters are unique.
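To illustrate the kind of mapping such an IME performs, here is a minimal sketch in Python. The key strings and the tiny mapping table are invented for the example (real IMEs ship large dictionaries and ranked candidate lists), but the code points shown are genuine Unicode Han characters, including one from Extension B outside the Basic Multilingual Plane.

```python
# Toy input-method lookup: keyboard key strings -> Unicode Han characters.
# The key strings and table entries are illustrative only; a real IME
# would load a large dictionary and rank its candidates.
table = {
    "ren": ["\u4eba"],           # 人 (U+4EBA), CJK Unified Ideographs
    "ma": ["\u99ac", "\u5abd"],  # 馬 (U+99AC), 媽 (U+5ABD)
    "ho": ["\U00020000"],        # U+20000, first character of Extension B
}

def candidates(keys: str) -> list[str]:
    """Return candidate Han characters for a typed key string."""
    return table.get(keys, [])

for keys in ("ren", "ma", "ho"):
    chars = candidates(keys)
    print(keys, "->", chars, [f"U+{ord(c):04X}" for c in chars])
```

The Extension B entry is the practical point: characters beyond U+FFFF require fonts such as the SimSunExtB and 8StepsA fonts mentioned above before the IME's output can actually be displayed.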
https://en.wikipedia.org/wiki/UniHan_IME
The Universal PBM Resource for Oligonucleotide-Binding Evaluation (UniPROBE) is a database of the DNA-binding specificities of proteins, as determined by protein-binding microarrays. [ 1 ] [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/UniPROBE
UniProt is a freely accessible database of protein sequence and functional information, many entries being derived from genome sequencing projects. It contains a large amount of information about the biological function of proteins derived from the research literature. It is maintained by the UniProt consortium, which consists of several European bioinformatics organisations and a foundation from Washington, DC, USA. The UniProt consortium comprises the European Bioinformatics Institute (EBI), the Swiss Institute of Bioinformatics (SIB), and the Protein Information Resource (PIR). EBI, located at the Wellcome Trust Genome Campus in Hinxton, UK, hosts a large resource of bioinformatics databases and services. SIB, located in Geneva, Switzerland, maintains the ExPASy (Expert Protein Analysis System) servers, a central resource for proteomics tools and databases. PIR, hosted by the National Biomedical Research Foundation (NBRF) at the Georgetown University Medical Center in Washington, DC, US, is heir to the oldest protein sequence database, Margaret Dayhoff's Atlas of Protein Sequence and Structure, first published in 1965. [ 2 ] In 2002, EBI, SIB, and PIR joined forces as the UniProt consortium. [ 3 ] Each consortium member is heavily involved in protein database maintenance and annotation. Until recently, EBI and SIB together produced the Swiss-Prot and TrEMBL databases, while PIR produced the Protein Sequence Database (PIR-PSD). [ 4 ] [ 5 ] [ 6 ] These databases coexisted with differing protein sequence coverage and annotation priorities. Swiss-Prot was created in 1986 by Amos Bairoch during his PhD, developed by the Swiss Institute of Bioinformatics, and subsequently developed further by Rolf Apweiler at the European Bioinformatics Institute. [ 7 ] [ 8 ] [ 9 ] Swiss-Prot aimed to provide reliable protein sequences associated with a high level of annotation (such as descriptions of a protein's function, its domain structure, post-translational modifications, variants, etc.), a minimal level of redundancy, and a high level of integration with other databases. Recognizing that sequence data were being generated at a pace exceeding Swiss-Prot's ability to keep up, TrEMBL (Translated EMBL Nucleotide Sequence Data Library) was created to provide automated annotations for those proteins not in Swiss-Prot. Meanwhile, PIR maintained the PIR-PSD and related databases, including iProClass, a database of protein sequences and curated families. The consortium members pooled their overlapping resources and expertise, and launched UniProt in December 2003. [ 10 ] UniProt provides four core databases: UniProtKB (with sub-parts Swiss-Prot and TrEMBL), UniParc, UniRef and Proteomes. The UniProt Knowledgebase (UniProtKB) is a protein database partially curated by experts, consisting of two sections: UniProtKB/Swiss-Prot (containing reviewed, manually annotated entries) and UniProtKB/TrEMBL (containing unreviewed, automatically annotated entries). [ 11 ] As of 22 February 2023, release "2023_01" of UniProtKB/Swiss-Prot contains 569,213 sequence entries (comprising 205,728,242 amino acids abstracted from 291,046 references) and release "2023_01" of UniProtKB/TrEMBL contains 245,871,724 sequence entries (comprising 85,739,380,194 amino acids). [ 12 ] UniProtKB/Swiss-Prot is a manually annotated, non-redundant protein sequence database. It combines information extracted from the scientific literature and biocurator-evaluated computational analysis.
The aim of UniProtKB/Swiss-Prot is to provide all known relevant information about a particular protein. Annotation is regularly reviewed to keep up with current scientific findings. The manual annotation of an entry involves detailed analysis of the protein sequence and of the scientific literature. [ 13 ] Sequences from the same gene and the same species are merged into the same database entry. Differences between sequences are identified, and their cause documented (for example alternative splicing, natural variation, incorrect initiation sites, incorrect exon boundaries, frameshifts, unidentified conflicts). A range of sequence analysis tools is used in the annotation of UniProtKB/Swiss-Prot entries. Computer predictions are manually evaluated, and relevant results selected for inclusion in the entry. These predictions include post-translational modifications, transmembrane domains and topology, signal peptides, domain identification, and protein family classification. [ 13 ] [ 14 ] Relevant publications are identified by searching databases such as PubMed. The full text of each paper is read, and information is extracted and added to the entry; a wide range of annotation arises from the scientific literature in this way. [ 10 ] [ 13 ] [ 14 ] Annotated entries undergo quality assurance before inclusion into UniProtKB/Swiss-Prot. When new data become available, entries are updated. UniProtKB/TrEMBL contains high-quality computationally analyzed records, which are enriched with automatic annotation. It was introduced in response to the increased dataflow resulting from genome projects, as the time- and labour-consuming manual annotation process of UniProtKB/Swiss-Prot could not be broadened to include all available protein sequences. [ 10 ] The translations of annotated coding sequences in the EMBL-Bank/GenBank/DDBJ nucleotide sequence database are automatically processed and entered in UniProtKB/TrEMBL. UniProtKB/TrEMBL also contains sequences from the PDB, and from gene prediction resources including Ensembl, RefSeq and CCDS. [ 15 ] Since 22 July 2021 it has also included structures predicted with AlphaFold2. [ 16 ] The UniProt Archive (UniParc) is a comprehensive and non-redundant database which contains all the protein sequences from the main publicly available protein sequence databases. [ 17 ] Proteins may exist in several different source databases, and in multiple copies in the same database. In order to avoid redundancy, UniParc stores each unique sequence only once. Identical sequences are merged, regardless of whether they are from the same or different species. Each sequence is given a stable and unique identifier (UPI), making it possible to identify the same protein from different source databases. UniParc contains only protein sequences, with no annotation. Database cross-references in UniParc entries allow further information about the protein to be retrieved from the source databases. When sequences in the source databases change, these changes are tracked by UniParc, and the history of all changes is archived. Currently UniParc incorporates protein sequences from a range of publicly available source databases. The UniProt Reference Clusters (UniRef) consist of three databases of clustered sets of protein sequences from UniProtKB and selected UniParc records. [ 20 ] The UniRef100 database combines identical sequences and sequence fragments (from any organism) into a single UniRef entry.
The sequence of a representative protein, the accession numbers of all the merged entries and links to the corresponding UniProtKB and UniParc records are displayed. UniRef100 sequences are clustered using the CD-HIT algorithm to build UniRef90 and UniRef50. [ 20 ] [ 21 ] Each cluster is composed of sequences that have at least 90% or 50% sequence identity, respectively, to the longest sequence. Clustering sequences significantly reduces database size, enabling faster sequence searches. UniRef is available from the UniProt FTP site . UniProt is funded by grants from the National Human Genome Research Institute , the National Institutes of Health (NIH), the European Commission , the Swiss Federal Government through the Federal Office of Education and Science, NCI-caBIG , and the US Department of Defense. [ 11 ]
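For programmatic access, UniProt offers a REST interface; the following minimal Python sketch retrieves one Swiss-Prot entry in FASTA format. The endpoint shown (rest.uniprot.org) and the demonstration accession P12345 reflect the service as commonly documented, but URL layout and record content can change between releases, so treat these details as assumptions to verify against the current UniProt documentation.

```python
# Minimal sketch: fetch a UniProtKB entry in FASTA format over REST.
# Assumes the rest.uniprot.org endpoint layout; verify against current docs.
import urllib.request

accession = "P12345"  # demonstration accession used in UniProt documentation
url = f"https://rest.uniprot.org/uniprotkb/{accession}.fasta"

with urllib.request.urlopen(url) as response:
    fasta = response.read().decode("utf-8")

header, *sequence_lines = fasta.splitlines()
print(header)                        # ">sp|P12345|..." description line
print("".join(sequence_lines)[:60])  # first 60 residues of the sequence
```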
https://en.wikipedia.org/wiki/UniProt
Tensile testing, also known as tension testing, [ 1 ] is a fundamental materials science and engineering test in which a sample is subjected to a controlled tension until failure. Properties that are directly measured via a tensile test are the ultimate tensile strength, breaking strength, maximum elongation and reduction in area. [ 2 ] From these measurements the following properties can also be determined: Young's modulus, Poisson's ratio, yield strength, and strain-hardening characteristics. [ 3 ] Uniaxial tensile testing is the most commonly used method for obtaining the mechanical characteristics of isotropic materials; some materials instead require biaxial tensile testing, the main difference between the two being how the load is applied to the material. Tensile testing might have a variety of purposes, such as selecting a material for an application, predicting how a material will perform under load, or verifying that a batch meets a specification. The preparation of test specimens depends on the purposes of testing and on the governing test method or specification. A tensile specimen usually has a standardized sample cross-section. It has two shoulders and a gauge (section) in between. The shoulders and grip section are generally larger than the gauge section by 33% [ 4 ] so they can be easily gripped. The gauge section's smaller diameter also allows the deformation and failure to occur in this area. [ 2 ] [ 5 ] The shoulders of the test specimen can be manufactured in various ways to mate with various grips in the testing machine. Each system has advantages and disadvantages; for example, shoulders designed for serrated grips are easy and cheap to manufacture, but the alignment of the specimen is dependent on the skill of the technician. On the other hand, a pinned grip assures good alignment. Threaded shoulders and grips also assure good alignment, but the technician must know to thread each shoulder into the grip at least one diameter's length, otherwise the threads can strip before the specimen fractures. [ 6 ] In large castings and forgings it is common to add extra material, designed to be removed from the casting so that test specimens can be made from it. These specimens may not be an exact representation of the whole workpiece, because the grain structure may differ throughout. In smaller workpieces, or when critical parts of the casting must be tested, a workpiece may be sacrificed to make the test specimens. [ 7 ] For workpieces that are machined from bar stock, the test specimen can be made from the same piece as the bar stock. For soft and porous materials, like electrospun nonwovens made of nanofibers, the specimen is usually a sample strip supported by a paper frame to facilitate mounting on the machine and to avoid damaging the membrane. [ 8 ] [ 9 ] Common shoulder designs are: (A) a threaded shoulder for use with a threaded grip; (B) a round shoulder for use with serrated grips; (C) a butt-end shoulder for use with a split collar; and (D) a flat shoulder for use with serrated grips. The repeatability of a testing machine can be found by using special test specimens meticulously made to be as similar as possible. [ 7 ] A standard specimen is prepared with a round or a square section along the gauge length, depending on the standard used. Both ends of the specimen should have sufficient length and a surface condition such that they are firmly gripped during testing. The initial gauge length L0 is standardized (in several countries) and varies with the diameter (D0) or the cross-sectional area (A0) of the specimen. The following table gives examples of test specimen dimensions and tolerances per standard ASTM E8.
The most common testing machine used in tensile testing is the universal testing machine . This type of machine has two crossheads; one is adjusted for the length of the specimen and the other is driven to apply tension to the test specimen. Testing machines are either electromechanical or hydraulic . [ 5 ] The electromechanical machine uses an electric motor, gear reduction system and one, two or four screws to move the crosshead up or down. A range of crosshead speeds can be achieved by changing the speed of the motor. The speed of the crosshead, and consequently the load rate, can be controlled by a microprocessor in the closed-loop servo controller. A hydraulic testing machine uses either a single- or dual-acting piston to move the crosshead up or down. Manually operated testing systems are also available. Manual configurations require the operator to adjust a needle valve in order to control the load rate. A general comparison shows that the electromechanical machine is capable of a wide range of test speeds and long crosshead displacements, whereas the hydraulic machine is a cost-effective solution for generating high forces. [ 11 ] The machine must have the proper capabilities for the test specimen being tested. There are four main parameters: force capacity, speed, precision and accuracy . Force capacity refers to the fact that the machine must be able to generate enough force to fracture the specimen. The machine must be able to apply the force quickly or slowly enough to properly mimic the actual application. Finally, the machine must be able to accurately and precisely measure the gauge length and forces applied; for instance, a large machine that is designed to measure long elongations may not work with a brittle material that experiences short elongations prior to fracturing. [ 6 ] Alignment of the test specimen in the testing machine is critical, because if the specimen is misaligned, either at an angle or offset to one side, the machine will exert a bending force on the specimen. This is especially bad for brittle materials, because it will dramatically skew the results. This situation can be minimized by using spherical seats or U-joints between the grips and the test machine. [ 6 ] If the initial portion of the stress–strain curve is curved and not linear, it indicates the specimen is misaligned in the testing machine. [ 12 ] Strain is most commonly measured with an extensometer , but strain gauges are also frequently used on small test specimens or when Poisson's ratio is being measured. [ 6 ] Newer test machines have digital time, force, and elongation measurement systems consisting of electronic sensors connected to a data collection device (often a computer) and software to manipulate and output the data. However, analog machines continue to meet and exceed ASTM, NIST, and ASM metal tensile testing accuracy requirements, continuing to be used today. [ citation needed ] The test process involves placing the test specimen in the testing machine and slowly extending it until it fractures. During this process, the elongation of the gauge section is recorded against the applied force. The data is manipulated so that it is not specific to the geometry of the test sample. The elongation measurement is used to calculate the engineering strain , ε , using the following equation: [ 5 ] ε = Δ L / L 0 = ( L − L 0 ) / L 0 , where Δ L is the change in gauge length, L 0 is the initial gauge length, and L is the final length.
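To make the strain calculation concrete, here is a minimal Python sketch that converts a recorded elongation into engineering strain using the formula above; the numerical values are invented for illustration.

```python
def engineering_strain(delta_L, L0):
    """Engineering strain: change in gauge length divided by the initial gauge length."""
    return delta_L / L0

# Hypothetical example: a 50 mm gauge length that has stretched to 52.5 mm
L0 = 50.0   # initial gauge length, mm
L = 52.5    # current gauge length, mm
print(engineering_strain(L - L0, L0))   # 0.05, i.e. 5 % engineering strain
```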
The force measurement is used to calculate the engineering stress , σ, using the following equation: [ 5 ] σ = F / A , where F is the tensile force and A is the nominal cross-section of the specimen. The machine does these calculations as the force increases, so that the data points can be graphed into a stress–strain curve . [ 5 ] When dealing with porous and soft materials, such as electrospun nanofibrous membranes, the application of the above stress formula is problematic. The membrane thickness depends on the pressure applied during its measurement, leading to variable thickness values. As a consequence, the obtained stress-strain curves show high variability. In this case, the normalization of load with respect to the specimen mass instead of the cross-section area (A) is recommended to obtain reliable tensile results. [ 13 ] Tensile testing can be used to test creep in materials, a slow plastic deformation of the material from constant applied stresses over extended periods of time. Creep is generally aided by diffusion and dislocation movement. While there are many ways to test creep, tensile testing is useful for materials such as concrete and ceramics that behave differently in tension and compression, and thus possess different tensile and compressive creep rates. As such, understanding tensile creep is important in the design of concrete for structures that experience tension, such as water-holding containers, or for general structural integrity. [ 14 ] Tensile testing of creep generally follows the same testing process as standard testing, albeit at lower stresses so as to remain in the creep domain rather than the plastic-deformation regime. Additionally, specialized tensile creep testing equipment may incorporate high-temperature furnace components to aid diffusion. [ 15 ] The sample is held at constant temperature and tension, and strain on the material is measured using strain gauges or laser gauges. The measured strain can be fitted with equations governing different mechanisms of creep, such as power law creep or diffusion creep (see creep for more information). Further analysis can be obtained from examining the sample post fracture. Understanding the creep mechanism and rate can aid materials selection and design. Sample alignment is also important when tensile testing for creep. Off-center loading will result in a bending stress being applied to the sample. Bending can be measured by tracking strain on all sides of the sample. The percent bending can then be defined as the difference between strain on one face ( ε 1 {\displaystyle \varepsilon _{1}} ) and the average strain ( ε 0 {\displaystyle \varepsilon _{0}} ): [ 16 ] Percent Bending = ε 1 − ε 0 ε 0 × 100 {\displaystyle {\text{Percent Bending}}={\frac {\varepsilon _{1}-\varepsilon _{0}}{\varepsilon _{0}}}\times 100} Percent bending should be under 1% on the wider face of loaded samples, and under 2% on the thinner face. Bending can be caused by misalignment on the loading clamp and asymmetric machining of samples. [ 16 ]
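The stress and percent-bending formulas above translate directly into code. The following Python sketch assumes a round specimen, loads in newtons and dimensions in millimetres (so stress comes out in MPa), and strain-gauge readings on several faces; all numbers are hypothetical.

```python
import math

def engineering_stress(force_N, area_mm2):
    """Engineering stress = tensile force / nominal cross-sectional area (MPa for N and mm^2)."""
    return force_N / area_mm2

def percent_bending(face_strain, strains):
    """Percent bending = (strain on one face - average strain) / average strain * 100."""
    average = sum(strains) / len(strains)
    return (face_strain - average) / average * 100.0

area = math.pi * 12.5 ** 2 / 4.0                  # nominal area of a 12.5 mm round specimen
print(engineering_stress(40_000, area))           # ~326 MPa under a hypothetical 40 kN load

readings = [0.00101, 0.00099, 0.00100, 0.00100]   # hypothetical strain-gauge readings
print(percent_bending(readings[0], readings))     # ~1 % bending on the first face
```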
https://en.wikipedia.org/wiki/Uniaxial_tensile_test
A unicellular organism , also known as a single-celled organism , is an organism that consists of a single cell , unlike a multicellular organism that consists of multiple cells. Organisms fall into two general categories: prokaryotic organisms and eukaryotic organisms. Most prokaryotes are unicellular and are classified into bacteria and archaea . Many eukaryotes are multicellular, but some are unicellular, such as protozoa , unicellular algae , and unicellular fungi . Unicellular organisms are thought to be the oldest form of life, with early organisms emerging 3.5–3.8 billion years ago. [ 1 ] [ 2 ] Although some prokaryotes live in colonies , they are not specialised cells with differing functions. These organisms live together, and each cell must carry out all life processes to survive. In contrast, even the simplest multicellular organisms have cells that depend on each other to survive. Most multicellular organisms have a unicellular life-cycle stage. Gametes , for example, are reproductive unicells for multicellular organisms. [ 3 ] Additionally, multicellularity appears to have evolved independently many times in the history of life. Some organisms are partially unicellular, like Dictyostelium discoideum . Additionally, unicellular organisms can be multinucleate , like Caulerpa , Plasmodium , and Myxogastria . Primitive protocells were the precursors to today's unicellular organisms. Although the origin of life is largely still a mystery, in the currently prevailing theory, known as the RNA world hypothesis , early RNA molecules would have been the basis for catalyzing organic chemical reactions and self-replication. [ 4 ] Compartmentalization was necessary to make chemical reactions more likely and to separate them from the external environment. For example, an early RNA replicator ribozyme may have replicated other replicator ribozymes of different RNA sequences if they were not kept separate. [ 5 ] Such hypothetical cells with an RNA genome instead of the usual DNA genome are called ' ribocells ' or 'ribocytes'. [ 4 ] When amphiphiles like lipids are placed in water, the hydrophobic tails aggregate to form micelles and vesicles , with the hydrophilic ends facing outwards. [ 6 ] [ 5 ] Primitive cells likely used self-assembling fatty-acid vesicles to separate chemical reactions from the environment. [ 5 ] Because of their simplicity and ability to self-assemble in water, it is likely that these simple membranes predated other forms of early biological molecules. [ 6 ] Prokaryotes lack membrane-bound organelles, such as mitochondria or a nucleus . [ 7 ] Instead, most prokaryotes have an irregular region that contains DNA, known as the nucleoid . [ 8 ] Most prokaryotes have a single, circular chromosome , in contrast to eukaryotes, which typically have linear chromosomes. [ 9 ] Nutritionally, prokaryotes can utilize a wide range of organic and inorganic material for metabolism, including sulfur, cellulose, ammonia, or nitrite. [ 10 ] Prokaryotes are relatively ubiquitous in the environment, and some (known as extremophiles) thrive in extreme environments. [ citation needed ] Bacteria are one of the world's oldest forms of life, and are found virtually everywhere in nature. [ 10 ] Many common bacteria have plasmids , which are short, circular, self-replicating DNA molecules that are separate from the bacterial chromosome. [ 12 ] Plasmids can carry genes responsible for novel abilities, antibiotic resistance being one of critical current importance.
[ 13 ] Bacteria predominantly reproduce asexually through a process called binary fission . However, about 80 different species can undergo a sexual process referred to as natural genetic transformation . [ 14 ] Transformation is a bacterial process for transferring DNA from one cell to another, and is apparently an adaptation for repairing DNA damage in the recipient cell. [ 15 ] In addition, plasmids can be exchanged through the use of a pilus in a process known as conjugation . [ 13 ] The photosynthetic cyanobacteria are arguably the most successful bacteria, and changed the early atmosphere of the earth by oxygenating it. [ 16 ] Stromatolites , structures made up of layers of calcium carbonate and trapped sediment left over from cyanobacteria and associated community bacteria, left behind extensive fossil records. [ 16 ] [ 17 ] The existence of stromatolites gives an excellent record as to the development of cyanobacteria, which are represented across the Archaean (4 billion to 2.5 billion years ago), Proterozoic (2.5 billion to 540 million years ago), and Phanerozoic (540 million years ago to present day) eons. [ 17 ] Many of the world's fossilized stromatolites can be found in Western Australia . [ 17 ] There, some of the oldest stromatolites have been found, some dating back to about 3,430 million years ago. [ 17 ] Clonal aging occurs naturally in bacteria , and is apparently due to the accumulation of damage that can happen even in the absence of external stressors. [ 18 ] Hydrothermal vents release heat and hydrogen sulfide , allowing extremophiles to survive using chemolithotrophic growth. [ 20 ] Archaea are generally similar in appearance to bacteria, hence their original classification as bacteria, but have significant molecular differences, most notably in their membrane structure and ribosomal RNA. [ 21 ] [ 22 ] By sequencing the ribosomal RNA, it was found that the Archaea most likely split from bacteria and were the precursors to modern eukaryotes, and are actually more closely related phylogenetically to eukaryotes than to bacteria. [ 22 ] The name Archaea comes from the Greek word archaios, meaning original, ancient, or primitive. [ 23 ] Some archaea inhabit the most biologically inhospitable environments on earth, and such environments are believed to mimic in some ways the early, harsh conditions to which life was likely exposed [ citation needed ] . Examples of these archaeal extremophiles are as follows: Methanogens are a significant subset of archaea and include many extremophiles, but are also ubiquitous in wetland environments as well as the ruminant gut and hindgut of animals. [ 28 ] Their metabolic process, methanogenesis, utilizes hydrogen to reduce carbon dioxide into methane, releasing energy in the usable form of adenosine triphosphate . [ 28 ] They are the only known organisms capable of producing methane. [ 29 ] Under stressful environmental conditions that cause DNA damage , some species of archaea aggregate and transfer DNA between cells. [ 30 ] The function of this transfer appears to be to replace damaged DNA sequence information in the recipient cell with undamaged sequence information from the donor cell. [ 31 ] Eukaryotic cells contain membrane-bound organelles; examples include mitochondria, the nucleus, and the Golgi apparatus. Prokaryotic cells probably transitioned into eukaryotic cells between 2.0 and 1.4 billion years ago. [ 32 ] This was an important step in evolution. In contrast to prokaryotes, eukaryotes reproduce by using mitosis and meiosis .
Sex appears to be a ubiquitous, ancient, and inherent attribute of eukaryotic life. [ 33 ] Meiosis, a true sexual process, allows for efficient recombinational repair of DNA damage [ 15 ] and a greater range of genetic diversity by combining the DNA of the parents followed by recombination . [ 32 ] Metabolic functions in eukaryotes are also more specialized, with specific processes compartmentalized into organelles. [ citation needed ] The endosymbiotic theory holds that mitochondria and chloroplasts have bacterial origins. Both organelles contain their own sets of DNA and have bacteria-like ribosomes. It is likely that modern mitochondria were once a species similar to Rickettsia , with the parasitic ability to enter a cell. [ 34 ] However, if the bacteria were capable of respiration, it would have been beneficial for the larger cell to allow the parasite to live in return for energy and detoxification of oxygen. [ 34 ] Chloroplasts probably became symbionts through a similar set of events, and are most likely descendants of cyanobacteria. [ 35 ] While not all eukaryotes have mitochondria or chloroplasts, mitochondria are found in most eukaryotes, and chloroplasts are found in all plants and algae. Photosynthesis and respiration are essentially the reverse of one another, and the advent of respiration coupled with photosynthesis enabled much greater access to energy than fermentation alone. [ citation needed ] Protozoa are largely defined by their method of locomotion, including flagella , cilia , and pseudopodia . [ 36 ] While there has been considerable debate on the classification of protozoa owing to their sheer diversity, in one system there are currently seven phyla recognized under the kingdom Protozoa: Euglenozoa , Amoebozoa , Choanozoa sensu Cavalier-Smith , Loukozoa , Percolozoa , Microsporidia and Sulcozoa . [ 37 ] [ 38 ] Protozoa, like plants and animals, can be considered heterotrophs or autotrophs. [ 34 ] Autotrophs like Euglena are capable of producing their energy using photosynthesis, while heterotrophic protozoa consume food by either funneling it through a mouth-like gullet or engulfing it with pseudopods, a form of phagocytosis . [ 34 ] While protozoa reproduce mainly asexually, some protozoa are capable of sexual reproduction. [ 34 ] Protozoa with sexual capability include the pathogenic species Plasmodium falciparum , Toxoplasma gondii , Trypanosoma brucei , Giardia duodenalis and Leishmania species. [ 15 ] Ciliophora , or ciliates, are a group of protists that utilize cilia for locomotion. Examples include Paramecium , Stentors , and Vorticella . [ 39 ] Ciliates are widely abundant in almost all environments where water can be found, and the cilia beat rhythmically in order to propel the organism. [ 40 ] Many ciliates have trichocysts , spear-like organelles that can be discharged to catch prey, to anchor the organism, or for defense. [ 41 ] [ 42 ] Ciliates are also capable of sexual reproduction, and utilize two nuclei unique to ciliates: a macronucleus for normal metabolic control and a separate micronucleus that undergoes meiosis. [ 41 ] Examples of such ciliates are Paramecium and Tetrahymena , which likely employ meiotic recombination for repairing DNA damage acquired under stressful conditions. [ citation needed ] The Amoebozoa utilize pseudopodia and cytoplasmic flow to move in their environment. Entamoeba histolytica is the cause of amebic dysentery. [ 43 ] Entamoeba histolytica appears to be capable of meiosis .
[ 44 ] Unicellular algae are plant-like autotrophs and contain chlorophyll . [ 45 ] They include groups that have both multicellular and unicellular species: Unicellular fungi include the yeasts . Fungi are found in most habitats, although most are found on land. [ 51 ] Yeasts reproduce through mitosis, and many use a process called budding , where most of the cytoplasm is held by the mother cell. [ 51 ] Saccharomyces cerevisiae ferments carbohydrates into carbon dioxide and alcohol, and is used in the making of beer and bread. [ 52 ] S. cerevisiae is also an important model organism, since it is a eukaryotic organism that is easy to grow. It has been used to research cancer and neurodegenerative diseases as well as to understand the cell cycle . [ 53 ] [ 54 ] Furthermore, research using S. cerevisiae has played a central role in understanding the mechanism of meiotic recombination and the adaptive function of meiosis . Candida spp . are responsible for candidiasis , causing infections of the mouth and/or throat (known as thrush) and vagina (commonly called yeast infection). [ 55 ] Most unicellular organisms are of microscopic size and are thus classified as microorganisms . However, some unicellular protists and bacteria are macroscopic and visible to the naked eye. [ 56 ] Examples include:
https://en.wikipedia.org/wiki/Unicellular_organism
A Unicode font is a computer font that maps glyphs to code points defined in the Unicode Standard . [ 1 ] The vast majority of modern computer fonts use Unicode mappings, even those fonts which only include glyphs for a single writing system , or even only support the basic Latin alphabet . The distinction is historic: before Unicode, when most computer systems used only eight-bit bytes , no more than 256 characters (or control codes) could be encoded. This meant that each character repertoire had to have its own codepoint assignments – and thus a given codepoint could have multiple meanings. By assuring unique assignments, Unicode resolved this issue. Fonts which support a wide range of Unicode scripts and Unicode symbols are sometimes referred to as "pan-Unicode fonts", although as the maximum number of glyphs that can be defined in a TrueType font is restricted to 65,535, it is not possible for a single TrueType font to provide individual glyphs for all defined Unicode characters (154,998 characters, with Unicode 16.0). This article lists some widely used Unicode fonts (those shipped with an operating system or produced by a well-known commercial font company) that support a comparatively large number and broad range of Unicode characters. The Unicode Standard does not itself specify or create any font ( typeface ), that is, a collection of graphical shapes called glyphs. Rather, it defines each abstract character as a specific number (known as a code point ) and also defines the required changes of shape depending on the context the glyph is used in (e.g., combining characters , precomposed characters and letter - diacritic combinations). The choice of font, which governs how the abstract characters in the Universal Coded Character Set (UCS) are converted into a bitmap or vector output that can then be viewed on a screen or printed, is left up to the user. If a font is chosen which does not contain a glyph for a code point used in the document, it typically displays a question mark, a box, or some other substitute character . Computer fonts use various techniques to display characters or glyphs. A bitmap font contains a grid of dots known as pixels forming an image of each glyph in each face and size. Outline fonts (also known as vector fonts) use drawing instructions or mathematical formulae to describe each glyph. Stroke fonts use a series of specified lines (for the glyph's border) and additional information to define the profile , or size and shape of the line in a specific face and size, which together describe the appearance of the glyph. Fonts may also include embedded orthographic rules that cause certain combinations of letterforms (alternative symbols for the same letter) to be output as special ligature forms (combined characters). Operating systems , web browsers ( user agents ), and other software that make extensive use of typography use a font to display text on the screen or in print media, and can be programmed to use those embedded rules. Alternatively, they may use external script-shaping technologies (rendering technology or a “ smart font ” engine), and they can also be programmed to use either a single large Unicode font or multiple different fonts for different characters or languages.
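The distinction made above between precomposed characters and combining sequences can be observed directly with Python's standard unicodedata module; a Unicode font (together with the shaping engine) is expected to render both forms of the same text identically.

```python
import unicodedata

precomposed = "\u00e9"    # 'é' as one code point: LATIN SMALL LETTER E WITH ACUTE
combining = "e\u0301"     # 'e' followed by COMBINING ACUTE ACCENT

print([hex(ord(c)) for c in precomposed])   # ['0xe9']
print([hex(ord(c)) for c in combining])     # ['0x65', '0x301']

# Normalization maps the combining sequence onto the precomposed character
print(unicodedata.normalize("NFC", combining) == precomposed)   # True
```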
No single "Unicode font" includes all the characters defined in the present revision of ISO 10646 (Unicode) standard, as more and more languages and characters are continually added to it, and common font formats cannot contain more than 65,535 glyphs (about half the number of characters encoded in Unicode). As a result, font developers and foundries incorporate new characters in newer versions or revisions of a font, or in separate auxiliary fonts intended specifically for particular languages. UCS has over 1.1 million code points, but only the first 65,536 (the Plane 0: Basic Multilingual Plane , or BMP) had entered into common use before 2000. The first Unicode fonts (with very large character sets and supporting many Unicode blocks ) were Lucida Sans Unicode (released March 1993), Unihan font (1993), and Everson Mono (1995). There are typographical ambiguities in Unicode, so that some of the unified Han characters (seen in Chinese, Japanese, and Korean) will be typographically different in different regions. For example, Unicode point U+9AA8 骨 CJK UNIFIED IDEOGRAPH-9AA8 is typographically different between simplified Chinese and traditional Chinese. This has implications for the idea that a single typeface can satisfy the needs of all locales. [ 2 ] The design of Unicode ensures that such differences do not create semantic ambiguity, but the use of incorrect forms is often considered visually awkward or aesthetically inappropriate to native readers of East Asian languages. Unicode is now the standard encoding for many new standards and protocols, and is built into the architecture of operating systems ( Microsoft Windows , Apple Mac OS, and many versions of Unix and Linux ), programming languages ( Ada , Perl , Python , Java , Common LISP , APL ), and libraries (IBM International Components for Unicode (ICU), along with the Pango , Graphite , Scribe , Uniscribe , and ATSUI rendering engines), font formats ( TrueType and OpenType ) and so on. Many other standards are also getting upgraded to be Unicode-compliant. Here is a selection of some of the utility software that can identify the characters present in a font file: Of the many Unicode fonts available, those listed below are the most commonly used worldwide on mainstream computing platforms . 2015-6-4 OTF Number of characters included by the above version of fonts , for different Unicode blocks are listed below. Basic Latin (128: 0000–007F ) means that in the range called 'Basic Latin', there are 128 assigned codes, numbered 0 to 7F . The cells then show the number of those codes which are covered by each font. Unicode blocks listed are valid for Unicode version 8.0 . Unicode blocks listed are valid for Unicode version 8.0 . Unicode blocks listed are valid for Unicode version 8.0 . Unicode blocks listed are valid for Unicode version 8.0 .
https://en.wikipedia.org/wiki/Unicode_font
In computing , a Unicode symbol is a Unicode character which is not part of a script used to write a natural language, but is nonetheless available for use as part of a text. Many of the symbols are drawn from existing character sets or from ISO / IEC and other national and international standards. The Unicode Standard states that "The universe of symbols is rich and open-ended," but that in order to be considered, a symbol must have a "demonstrated need or strong desire to exchange in plain text." [ 1 ] This makes the issue of what symbols to encode and how symbols should be encoded more complicated than the issues surrounding writing systems. Unicode focuses on symbols that make sense in a one-dimensional plain-text context. For example, the typical two-dimensional arrangement of electronic diagram symbols justifies their exclusion. [ 2 ] (Legacy characters such as box-drawing characters , Symbols for Legacy Computing and the Symbols for Legacy Computing Supplement , are an exception, since these symbols largely exist for backward compatibility with past encoding systems; a number of electronic diagram symbols are indeed encoded in Unicode's Miscellaneous Technical block.) For adequate treatment in plain text, symbols must also be displayable in a monochromatic setting. Even with these limitations – monochromatic, one-dimensional and standards-based – the domain of potential Unicode symbols is extensive. (However, emojis – the ideograms and graphic symbols admitted into Unicode – allow color, although the colors are not standardized.) As of Unicode 16.0 there are 154,998 characters, [ 3 ] [ 4 ] including numerous dedicated symbol blocks.
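Whether a particular character counts as a symbol in Unicode's sense can be checked from its General Category, which for symbols begins with 'S' (Sm for mathematical, Sc for currency, Sk for modifier and So for other symbols). A small Python check using the standard unicodedata module:

```python
import unicodedata

for ch in ["∑", "€", "♥", "A"]:
    category = unicodedata.category(ch)   # e.g. 'Sm' = Symbol, math; 'Lu' = Letter, uppercase
    print(f"U+{ord(ch):04X} {ch} -> {category}, symbol: {category.startswith('S')}")
```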
https://en.wikipedia.org/wiki/Unicode_symbol
UNICOM Focal Point is a portfolio management and decision analysis tool used by the product organizations of corporations and government agencies [ 1 ] to collect information and feedback from internal and external stakeholders on the value of applications, products, systems, technologies, capabilities, ideas, and other organizational artifacts; to prioritize which ones will provide the most value to the business; and to manage the roadmap of how artifacts will be fielded, improved, or removed from the market or organization. UNICOM Focal Point is also used to manage a portfolio of projects, to understand the resources used on those projects, and to track timelines for completion. The product is also used for pure product management, where product managers use it to gather and analyze enhancement requests from customers, decide what features to put in a product, and develop roadmaps for future product versions. UNICOM Focal Point is a purely web-based tool that stores information in an underlying database (user choice of PostgreSQL , Oracle Database , or IBM Db2 ). Users may input information into the database in multiple ways: via the web interface, through automatic import of spreadsheet files, or through direct Representational State Transfer integration with other tools. Information is then analyzed with a variety of methods to make prioritization decisions and to manage roadmaps and schedules for project and product delivery. Focal Point's web-based portal allows end-users from across an organization, and external customers from all over the world, to input information into the database, analyze the information, or view analysis of that information. Focal Point provides a workflow engine so that suggested changes to an organization based on analysis can be signed off. UNICOM Focal Point is used in combination with other tools for corporate analysis and workflow, such as UNICOM System Architect for enterprise architecture and Rational Team Concert for DevOps. Integrations with these and a variety of other tools are enabled by UNICOM Focal Point's REST read/write interface and its support for Open Services Lifecycle Collaboration ( OSLC ) with other OSLC-enabled tools. [ 4 ] Joachim Karlsson, Ph.D., founded Focal Point AB; the tool originated from his doctoral dissertation at Linköping University, “A Systematic Approach for Prioritizing Software Requirements”. The first version of Focal Point was released in 1997. [ 5 ] Focal Point was acquired by Telelogic on April 13, 2005. IBM Rational acquired Telelogic in April 2008. [ 6 ] Focal Point was acquired by UNICOM Global on 1 January 2015. [ 7 ]
https://en.wikipedia.org/wiki/Unicom_Focal_Point
A unicorn horn , also known as an alicorn , [ 1 ] is a legendary object whose reality was accepted in Europe and Asia from the earliest recorded times. This "horn" comes from the creature known as a unicorn , also known in the Hebrew Bible as a re'em or wild ox. [ 2 ] Many healing powers and antidotal virtues were attributed to the alicorn, making it one of the most expensive and reputable remedies during the Renaissance , [ 3 ] and justifying its use in the highest circles. Beliefs related to the alicorn influenced alchemy through spagyric medicine . The horn's purificational properties were eventually put to the test in, for example, the book of Ambroise Paré , Discourse on unicorn . Seen as one of the most valuable assets that a person could possess, unicorn horns were given as diplomatic gifts, and chips and dust from them could be purchased at apothecaries as universal antidotes until the 18th century. Sections of horns were later displayed in cabinets of curiosities . The horn was used to create sceptres and other royal objects, such as the unicorn throne of the Danish kings , the sceptre and imperial crown of the Austrian Empire , and the scabbard and the hilt of the sword of Charles the Bold . The legendary unicorn could never be captured alive, but its symbolic association with virginity made it the symbol of innocence and the incarnation of God's Word . Belief in the power of the alicorn persisted until the 16th century, when the true source, the narwhal , was discovered. This marine mammal is the true bearer of the alicorn, actually an extended tooth found in the mouth of males and some females. Since then, the unicorn horn has been mentioned in fantasy works, role-playing games , and video games , which make use of its legendary symbolism . Around 400 BCE, the unicorn was described by Ctesias , according to Photius , as carrying a horn which princes would use to make hanaps to protect against poison . Claudius Aelianus said that drinking from this horn protects against diseases and poisons. [ 4 ] These writings influenced authors from the Middle Ages to the Renaissance: the unicorn becomes the most important and frequently mentioned fantastic animal in the West, but it was considered real. Other parts of its body were alleged to have medicinal properties, and in the 12th century abbess Hildegard of Bingen recommended an ointment against leprosy made from unicorn liver and egg yolk. [ 5 ] Wearing a unicorn leather belt was supposed to protect a person from the plague and fevers , while leather shoes of this animal prevented diseases of the feet, legs and loins. [ 6 ] The medicinal efficacy linked to its horn and its alexipharmic powers were assumed to be true in antiquity , but were not explicitly mentioned in the West again until the 14th century. Legends about these properties were the stimulus for a flourishing trade in these chips and dust up to the mid-17th century, when their true origin became widely known. The alicorn never existed as such; it was most often narwhal teeth that were known as "unicorn horns". [ 7 ] The first post-classical reference to the cleansing power of the unicorn appears in an interpretation of the Physiologus (dated perhaps to the 14th century), when reference is made to a large lake where animals congregate to drink: But before they are assembled, the serpent comes and casts his poison into the water. Now the animals mark well the poison and do not dare to drink, and they wait for the unicorn. 
It comes and immediately goes into the lake, and making with his horn the sign of the cross , renders the power of the poison harmless ( Freeman 1983 , p. 27). This theme became very popular, and in 1389 Father Johann van Hesse claimed to have seen a unicorn emerge at sunrise to decontaminate the contaminated water of the River Marah, so that the good animals could drink ( Freeman 1983 , p. 27). Symbolically , the snake that poisons the water is the devil and the unicorn represents Christ the Redeemer. [ 8 ] The origin of this legend seems to be Indian , and Greek texts report that Indian nobles drank out of unicorn horns to protect themselves from diseases and poisons. [ 9 ] The unicorn is most often represented beside a river, lake or fountain, while animals wait for him to finish his work before drinking. This scene is common in the art of the 16th and 17th centuries. [ 10 ] Studies and translations of these drawings and stories popularized the belief that the power of the animal came from its horn, which could neutralize the poison as soon as the liquid or solid touched the alicorn piece. [ 9 ] The alleged properties of the alicorn may be compared with those of the bezoar stone , another object of animal origin known to Renaissance medicine and displayed as a rarity in cabinets of curiosities . [ 11 ] The alicorn was assigned many medicinal properties and, over time, in addition to the purification of polluted water in nature, [ 12 ] its use was recommended against rubella , measles , fevers and pains . [ 13 ] The monks of the Parisian monasteries used to soak it in the drinking water given to lepers . [ 12 ] It was thought to act as an antidote and, in powder form, was reputed to facilitate wound healing, to help neutralize poisons (such as scorpion or viper venom ) [ 14 ] and to protect against the plague . [ 15 ] The horn was prepared in several ways: in solid form or by infusion. [ 16 ] Its prophylactic function and magical power were assumed for centuries; as its trade increased, "fake" horns and false powders appeared. [ 17 ] The astronomical prices paid for alicorns reflected the belief that their imaginary virtues could cause real healing. [ 12 ] Many works are devoted to the explanation and defence of the medicinal properties of the alicorn, including The Treatise of the Unicorn, its wonderful properties and its use (1573) by Andrea Bacci and Natural History, Hunting, Virtues, and Use of Lycorn (1624) by apothecary Laurent Catelan. Bacci probably wrote his book at the request of his patients, who were major investors in the unicorn horn trade. [ 18 ] Of a twisted configuration, alicorns were traded as valuable items for many centuries: according to legend, the "horn" on display at the Musée national du Moyen Âge was a gift from the Caliph of Baghdad , Harun al-Rashid , to Charlemagne in 807. [ 4 ] It measures almost three meters. [ 19 ] An eight-foot long horn is exhibited in Bruges , Flanders . [ 4 ] In the Middle Ages , the alicorn was the most valuable asset that a prince could possess. [ 14 ] Its medicinal use was attested and revived possibly in the 13th century, when pharmacists incorporated narwhal teeth (presented as unicorn horns) in their treatments; they displayed large pieces in order to distinguish them from products of other animals, such as the ox . [ 20 ] These objects would have been exchanged for up to eleven times their weight in gold .
[ 12 ] Depictions of unicorns in a religious context were discouraged indirectly by the Council of Trent in 1563, despite their display in the Saint-Denis Cathedral in Paris and St Mark's Basilica in Venice. They were often mounted on silver socles and presented as trophies that were only shown for important ceremonies. [ 14 ] Ambroise Paré explains that alicorns were used in the court of the King of France to detect the presence of poison in food and drink: if the comestible became hot and started to smoke, then the dish was poisoned. [ 21 ] Pope Clement VII offered a unicorn horn two cubits long to King Francis I of France at the wedding of his niece Catherine de' Medici in Marseille in October 1533, [ 22 ] and the king never travelled without a bag filled with unicorn powder. [ 23 ] The Grand Inquisitor Torquemada likewise always carried a piece of unicorn horn to protect himself from poison and assassins. [ 24 ]
https://en.wikipedia.org/wiki/Unicorn_horn
Unidentified decedent , or unidentified person (also abbreviated as UID or UP ), is the corpse of a person whose identity cannot be established by police and medical examiners. In many cases, it is several years before the identity of a UID is found, while in some cases it never is. [ 1 ] A UID may remain unidentified due to lack of evidence as well as absence of personal identification such as a driver's license. Where the remains have deteriorated or been mutilated to the point that the body is not easily recognized, a UID's face may be reconstructed to show what the person looked like before death. [ 2 ] UIDs are often referred to by the placeholder names "John Doe" or "Jane Doe". [ 3 ] In a database maintained by the Ontario Provincial Police, 371 unidentified decedents were found between 1964 and 2015. [ 4 ] There were approximately 14,000 UIDs in the United States as of 2023. [ 5 ] A body may go unidentified due to death in a state where the person was unrecorded, in an advanced state of decomposition, or with major facial injuries. [ 6 ] In many cases in the United States, teenagers with a history of running away would be removed from missing person files when they turned 18, thus eliminating potential matches with existing unidentified person listings. [ 7 ] Some UIDs die outside their native state. The Sumter County Does , murdered in South Carolina, were thought to have been Canadian. [ 8 ] Both were eventually identified as individuals from Pennsylvania and Minnesota. [ 9 ] Barbara Hess Precht died in Ohio in 2006, but was not identified until 2014. She had been living as a transient with her husband in California for decades but returned to her native state of Ohio, where she died under unknown circumstances. [ 10 ] In both of these cases, the UIDs were found in a recognizable state and had their fingerprints and dental records taken with ease. It is unknown if the Sumter County Does' DNA was later recovered, since their bodies would require exhumation to recover DNA. [ 8 ] Many undocumented immigrants who die in the United States after crossing the border from Mexico remain unidentified. [ 11 ] Many UIDs are found long after they die and are found to have decomposed severely. This significantly changes their facial features and may prevent identification through fingerprints. Environmental conditions often are a major factor in decomposition, as some UIDs are found months after death with little decomposition if their bodies are placed in cold areas. Some are found in warm areas shortly after death, but hot temperatures and scavenging animals had already deteriorated their features. [ 6 ] [ 12 ] [ 13 ] In some cases, warm temperatures mummify the corpse, which also distorts its features, though the tissues have survived initial decomposition. One example is the " Persian Princess ", who died in the 1990s but, in an act of archaeological forgery , was untruthfully stated in Pakistan to have been over 2,000 years old. [ 14 ] Putrefaction often occurs when bacteria decompose the remains and generate gases from the inside, causing the corpse to swell and become discolored. [ 6 ] In cases such as the Rogers family, who were murdered in 1989 by Oba Chandler , the bodies were deposited in water but later surfaced after gases in the remains caused them to float. They had been dead only a short period of time but were already severely decomposed and unrecognizable, due to the putrefaction that occurred underwater and to high temperatures.
It was not until a week later that dental records revealed their identities. [ 15 ] Skeletonization occurs when the UID has decayed to the point that bones and possibly some tissues are all that is found, usually when death occurred a significant amount of time before discovery. If a skeletonized body is found, fingerprints and toeprints are impossible to recover, unless they have survived the initial decomposition of the remains. Fingerprints are often used to identify the dead and were used widely before DNA comparison was possible. [ 6 ] In some cases, partial remains limit the available information. Skeletonized UIDs are often forensically reconstructed if searching dental records and DNA databases is unsuccessful. Often, someone who tries to conceal a body attempts to destroy it or render it unrecognizable. [ 16 ] The currently unidentified Yermo John Doe was killed approximately one hour before he was found, but was completely unrecognizable. [ 17 ] When Lynn Breeden, a Canadian model, was murdered and set ablaze in a dumpster, her body was so severely damaged that DNA processing and fingerprint analysis were impossible. She was identified some time later after her unique dentition matched her dental records and DNA extracted from her blood at a different scene was matched. [ 18 ] Linda Agostini 's body was found burned near Albury, Australia in 1934. Her remains were identified ten years later through dental comparison. [ 19 ] Bodies are usually identified by comparing their DNA , fingerprints and dental characteristics, which are generally unique to an individual. [ 20 ] DNA is considered the most accurate, but was not widely used until the 1990s. It is often obtained through hair follicles, blood, tissue and other biological material. [ 21 ] Bodies can also be identified with other physical information, such as illnesses, evidence of surgery, breaks and fractures, and height and weight information. [ 22 ] A medical examiner will often be involved with identifying a body. [ 23 ] [ 24 ] Since 2018, genetic genealogy has also been used to identify many bodies by matching the deceased's DNA with that of relatives who have uploaded their DNA to genealogy sites. Many police departments and medical examiners have made efforts to identify the deceased by placing mortuary photographs of the UID's face online. In some instances, the mortuary photographs are retouched to remove wounds before they are released to the public. [ 25 ] Dismembered remains may also be digitally altered so that they appear attached to the body. [ 26 ] This is not considered to be the most effective method, as the nature of death often distorts the UID's face. [ 27 ] An example of this is that of " Grateful Doe ," who was killed in a vehicular crash in 1995. He sustained extreme trauma that disfigured his face. [ 28 ] A Jane Doe found in a river in Milwaukee, Wisconsin, had died months earlier, but was preserved by the cold temperatures. Her morgue photographs were displayed publicly on a medical examiner's website, but her face had been distorted by swelling after absorbing water, along with additional decomposition. [ 29 ] Death masks, such as that of L'Inconnue de la Seine , a French suicide victim found in the late 1800s, have also been used to assist with identification; they have been stated to be more accurate because they are required to display "relaxed expressions," which often do not show the faces of the UIDs as they were found.
[ 30 ] However, a death mask will still depict sunken eyes or other characteristics of a long-term illness, which often do not show how the person would have looked in life. [ 6 ] When a body is found in an advanced state of decomposition or the person died violently, reconstructions are sometimes required in order to receive assistance from the public, since releasing images of the corpse is considered taboo. [ 31 ] Even those in a recognizable state are often reconstructed for the same reason. Faces can be reconstructed with a three-dimensional model or in two dimensions, which includes sketches and digital reconstructions similar to facial composites . [ 32 ] [ 33 ] Sketches have been used in a variety of cases. Forensic artist Karen T. Taylor created her own method during the 1980s, which involved much more precise techniques, such as estimating the locations and sizes of the features of a skull. This method has been shown to be fairly successful. [ 34 ] The National Center for Missing and Exploited Children has developed methods to estimate the likenesses of the faces of UIDs whose remains were too deteriorated to create a two-dimensional sketch or reconstruction due to the lack of tissue on the bones. A skull would be placed through a CT scanner and the image would then be manipulated with software intended for architectural design, to add digital layers of tissue based on the UID's age, sex and race. [ 35 ] In some cases, such as that of Colleen Orsborn, law enforcement has erroneously excluded the unidentified person's true identity as a possibility. In Orsborn's case, she had fractured one of the bones in her leg, but the medical examiner who performed the autopsy on her remains was not able to discover evidence of the injury and subsequently excluded her from the case. It was not until 2011 that DNA confirmed Orsborn was indeed the victim found in 1984. [ 36 ] In cases such as the Racine County Jane Doe , the decision to rule out one possible identity has also been subject to criticism. Aundria Bowman, a teenager who disappeared in 1989 and bore a strong resemblance to a body found in 1999, was excluded, according to the National Missing and Unidentified Persons System . [ 37 ] On an online forum, known as Websleuths , users have disagreed with this ruling. [ 38 ] In the case of Lavender Doe , the mother of a missing girl likewise disagreed with her daughter's exclusion through DNA, claiming that the reconstruction of the victim looked very similar to her daughter. [ 39 ]
https://en.wikipedia.org/wiki/Unidentified_decedent
The unidentified infrared emission (UIR or UIE) bands are discrete infrared emission features from circumstellar regions, interstellar media, star-forming regions and extragalactic objects for which the identity of the emitting materials is unknown. The main infrared features occur around 3.3, 6.2, 7.7, 8.6, 11.2, and 12.7 μm, although there are many other weak emission features within the ~ 5–19 μm spectral range. In the 1980s, astronomers discovered that the UIR emission bands originate in compounds containing aromatic C–H and C=C chemical bonds, [ 1 ] and some went on to hypothesize that the materials responsible should be polycyclic aromatic hydrocarbon (PAH) molecules. [ 2 ] [ 3 ] [ 4 ] Nevertheless, data recorded with the ESA's Infrared Space Observatory and NASA's Spitzer Space Telescope have suggested that the UIR emission bands arise from compounds that are far more complex in composition and structure than PAH molecules. Moreover, the UIR bands follow a clear evolutionary spectral trend that is linked to the lifespan of the astronomical source, from the time the UIR bands first appear around evolved stars in the protoplanetary nebula stage through to later evolved stages such as the planetary nebula phase. [ 5 ] The UIR emission phenomenon has been studied for approximately 30 years. [ 5 ]
https://en.wikipedia.org/wiki/Unidentified_infrared_emission
In logic and computer science , specifically automated reasoning , unification is an algorithmic process of solving equations between symbolic expressions , each of the form Left-hand side = Right-hand side . For example, using x , y , z as variables, and taking f to be an uninterpreted function , the singleton equation set { f (1, y ) = f ( x ,2) } is a syntactic first-order unification problem that has the substitution { x ↦ 1, y ↦ 2 } as its only solution. Conventions differ on what values variables may assume and which expressions are considered equivalent. In first-order syntactic unification, variables range over first-order terms and equivalence is syntactic. This version of unification has a unique "best" answer and is used in logic programming and programming language type system implementation, especially in Hindley–Milner based type inference algorithms. In higher-order unification, possibly restricted to higher-order pattern unification , terms may include lambda expressions, and equivalence is up to beta-reduction. This version is used in proof assistants and higher-order logic programming, for example Isabelle , Twelf , and lambdaProlog . Finally, in semantic unification or E-unification, equality is subject to background knowledge and variables range over a variety of domains. This version is used in SMT solvers , term rewriting algorithms, and cryptographic protocol analysis. A unification problem is a finite set E ={ l 1 ≐ r 1 , ..., l n ≐ r n } of equations to solve, where l i , r i are in the set T {\displaystyle T} of terms or expressions . Depending on which expressions or terms are allowed to occur in an equation set or unification problem, and which expressions are considered equal, several frameworks of unification are distinguished. If higher-order variables, that is, variables representing functions , are allowed in an expression, the process is called higher-order unification , otherwise first-order unification . If a solution is required to make both sides of each equation literally equal, the process is called syntactic or free unification , otherwise semantic or equational unification , or E-unification , or unification modulo theory . If the right side of each equation is closed (no free variables), the problem is called (pattern) matching . The left side (with variables) of each equation is called the pattern . [ 1 ] Formally, a unification approach presupposes As an example of how the set of terms and theory affects the set of solutions, the syntactic first-order unification problem { y = cons (2, y ) } has no solution over the set of finite terms . However, it has the single solution { y ↦ cons (2, cons (2, cons (2,...))) } over the set of infinite tree terms. Similarly, the semantic first-order unification problem { a ⋅ x = x ⋅ a } has each substitution of the form { x ↦ a ⋅...⋅ a } as a solution in a semigroup , i.e. if (⋅) is considered associative . But the same problem, viewed in an abelian group , where (⋅) is considered also commutative , has any substitution at all as a solution. As an example of higher-order unification, the singleton set { a = y ( x ) } is a syntactic second-order unification problem, since y is a function variable. One solution is { x ↦ a , y ↦ ( identity function ) }; another one is { y ↦ ( constant function mapping each value to a ), x ↦ (any value) }. A substitution is a mapping σ : V → T {\displaystyle \sigma :V\rightarrow T} from variables to terms; the notation { x 1 ↦ t 1 , . . . 
, x k ↦ t k } {\displaystyle \{x_{1}\mapsto t_{1},...,x_{k}\mapsto t_{k}\}} refers to a substitution mapping each variable x i {\displaystyle x_{i}} to the term t i {\displaystyle t_{i}} , for i = 1 , . . . , k {\displaystyle i=1,...,k} , and every other variable to itself; the x i {\displaystyle x_{i}} must be pairwise distinct. Applying that substitution to a term t {\displaystyle t} is written in postfix notation as t { x 1 ↦ t 1 , . . . , x k ↦ t k } {\displaystyle t\{x_{1}\mapsto t_{1},...,x_{k}\mapsto t_{k}\}} ; it means to (simultaneously) replace every occurrence of each variable x i {\displaystyle x_{i}} in the term t {\displaystyle t} by t i {\displaystyle t_{i}} . The result t τ {\displaystyle t\tau } of applying a substitution τ {\displaystyle \tau } to a term t {\displaystyle t} is called an instance of that term t {\displaystyle t} . As a first-order example, applying the substitution { x ↦ h ( a , y ), z ↦ b } to the term f ( x , a , g ( z ), y ) yields the instance f ( h ( a , y ), a , g ( b ), y ). If a term t {\displaystyle t} has an instance equivalent to a term u {\displaystyle u} , that is, if t σ ≡ u {\displaystyle t\sigma \equiv u} for some substitution σ {\displaystyle \sigma } , then t {\displaystyle t} is called more general than u {\displaystyle u} , and u {\displaystyle u} is called more special than, or subsumed by, t {\displaystyle t} . For example, x ⊕ a {\displaystyle x\oplus a} is more general than a ⊕ b {\displaystyle a\oplus b} if ⊕ is commutative , since then ( x ⊕ a ) { x ↦ b } = b ⊕ a ≡ a ⊕ b {\displaystyle (x\oplus a)\{x\mapsto b\}=b\oplus a\equiv a\oplus b} . If ≡ is literal (syntactic) identity of terms, a term may be both more general and more special than another one only if both terms differ just in their variable names, not in their syntactic structure; such terms are called variants , or renamings of each other. For example, f ( x 1 , a , g ( z 1 ) , y 1 ) {\displaystyle f(x_{1},a,g(z_{1}),y_{1})} is a variant of f ( x 2 , a , g ( z 2 ) , y 2 ) {\displaystyle f(x_{2},a,g(z_{2}),y_{2})} , since f ( x 1 , a , g ( z 1 ) , y 1 ) { x 1 ↦ x 2 , y 1 ↦ y 2 , z 1 ↦ z 2 } = f ( x 2 , a , g ( z 2 ) , y 2 ) {\displaystyle f(x_{1},a,g(z_{1}),y_{1})\{x_{1}\mapsto x_{2},y_{1}\mapsto y_{2},z_{1}\mapsto z_{2}\}=f(x_{2},a,g(z_{2}),y_{2})} and f ( x 2 , a , g ( z 2 ) , y 2 ) { x 2 ↦ x 1 , y 2 ↦ y 1 , z 2 ↦ z 1 } = f ( x 1 , a , g ( z 1 ) , y 1 ) . {\displaystyle f(x_{2},a,g(z_{2}),y_{2})\{x_{2}\mapsto x_{1},y_{2}\mapsto y_{1},z_{2}\mapsto z_{1}\}=f(x_{1},a,g(z_{1}),y_{1}).} However, f ( x 1 , a , g ( z 1 ) , y 1 ) {\displaystyle f(x_{1},a,g(z_{1}),y_{1})} is not a variant of f ( x 2 , a , g ( x 2 ) , x 2 ) {\displaystyle f(x_{2},a,g(x_{2}),x_{2})} , since no substitution can transform the latter term into the former one. The latter term is therefore properly more special than the former one. For arbitrary ≡ {\displaystyle \equiv } , a term may be both more general and more special than a structurally different term. For example, if ⊕ is idempotent , that is, if always x ⊕ x ≡ x {\displaystyle x\oplus x\equiv x} , then the term x ⊕ y {\displaystyle x\oplus y} is more general than z {\displaystyle z} , [ note 2 ] and vice versa, [ note 3 ] although x ⊕ y {\displaystyle x\oplus y} and z {\displaystyle z} are of different structure. A substitution σ {\displaystyle \sigma } is more special than, or subsumed by, a substitution τ {\displaystyle \tau } if t σ {\displaystyle t\sigma } is subsumed by t τ {\displaystyle t\tau } for each term t {\displaystyle t} . We also say that τ {\displaystyle \tau } is more general than σ {\displaystyle \sigma } .
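As a concrete rendering of the definitions above, the following Python sketch represents first-order terms as nested tuples (a function application is a tuple whose first element is the function symbol) and variables as strings, and applies a substitution simultaneously to a term; the representation is an illustrative choice, not a standard one.

```python
def is_var(term):
    """Variables are represented as plain strings such as "x"."""
    return isinstance(term, str)

def apply_subst(term, subst):
    """Apply the substitution (a dict mapping variables to terms) simultaneously to a term."""
    if is_var(term):
        return subst.get(term, term)   # replace the variable, or leave it unchanged
    return (term[0],) + tuple(apply_subst(arg, subst) for arg in term[1:])

# Applying { x ↦ h(a, y), z ↦ b } to f(x, a, g(z), y) yields the instance f(h(a, y), a, g(b), y)
sigma = {"x": ("h", ("a",), "y"), "z": ("b",)}
term = ("f", "x", ("a",), ("g", "z"), "y")
print(apply_subst(term, sigma))
# ('f', ('h', ('a',), 'y'), ('a',), ('g', ('b',)), 'y')
```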
More formally, take a nonempty infinite set V {\displaystyle V} of auxiliary variables such that no equation l i ≐ r i {\displaystyle l_{i}\doteq r_{i}} in the unification problem contains variables from V {\displaystyle V} . Then a substitution σ {\displaystyle \sigma } is subsumed by another substitution τ {\displaystyle \tau } if there is a substitution θ {\displaystyle \theta } such that for all terms X ∉ V {\displaystyle X\notin V} , X σ ≡ X τ θ {\displaystyle X\sigma \equiv X\tau \theta } . [ 2 ] For instance { x ↦ a , y ↦ a } {\displaystyle \{x\mapsto a,y\mapsto a\}} is subsumed by τ = { x ↦ y } {\displaystyle \tau =\{x\mapsto y\}} , using θ = { y ↦ a } {\displaystyle \theta =\{y\mapsto a\}} , but σ = { x ↦ a } {\displaystyle \sigma =\{x\mapsto a\}} is not subsumed by τ = { x ↦ y } {\displaystyle \tau =\{x\mapsto y\}} , as f ( x , y ) σ = f ( a , y ) {\displaystyle f(x,y)\sigma =f(a,y)} is not an instance of f ( x , y ) τ = f ( y , y ) {\displaystyle f(x,y)\tau =f(y,y)} . [ 3 ] A substitution σ is a solution of the unification problem E if l i σ ≡ r i σ for i = 1 , . . . , n {\displaystyle i=1,...,n} . Such a substitution is also called a unifier of E . For example, if ⊕ is associative , the unification problem { x ⊕ a ≐ a ⊕ x } has the solutions { x ↦ a }, { x ↦ a ⊕ a }, { x ↦ a ⊕ a ⊕ a }, etc., while the problem { x ⊕ a ≐ a } has no solution. For a given unification problem E , a set S of unifiers is called complete if each solution substitution is subsumed by some substitution in S . A complete substitution set always exists (e.g. the set of all solutions), but in some frameworks (such as unrestricted higher-order unification) the problem of determining whether any solution exists (i.e., whether the complete substitution set is nonempty) is undecidable. The set S is called minimal if none of its members subsumes another one. Depending on the framework, a complete and minimal substitution set may have zero, one, finitely many, or infinitely many members, or may not exist at all due to an infinite chain of redundant members. [ 4 ] Thus, in general, unification algorithms compute a finite approximation of the complete set, which may or may not be minimal, although most algorithms avoid redundant unifiers when possible. [ 2 ] For first-order syntactical unification, Martelli and Montanari [ 5 ] gave an algorithm that reports unsolvability or computes a single unifier that by itself forms a complete and minimal substitution set, called the most general unifier . Syntactic unification of first-order terms is the most widely used unification framework. It is based on T being the set of first-order terms (over some given set V of variables, C of constants and F n of n -ary function symbols) and on ≡ being syntactic equality . In this framework, each solvable unification problem { l 1 ≐ r 1 , ..., l n ≐ r n } has a complete, and obviously minimal, singleton solution set { σ } . Its member σ is called the most general unifier ( mgu ) of the problem. The terms on the left and the right hand side of each potential equation become syntactically equal when the mgu is applied i.e. l 1 σ = r 1 σ ∧ ... ∧ l n σ = r n σ . Any unifier of the problem is subsumed [ note 4 ] by the mgu σ . The mgu is unique up to variants: if S 1 and S 2 are both complete and minimal solution sets of the same syntactical unification problem, then S 1 = { σ 1 } and S 2 = { σ 2 } for some substitutions σ 1 and σ 2 , and xσ 1 is a variant of xσ 2 for each variable x occurring in the problem. 
For example, the unification problem { x ≐ z , y ≐ f ( x ) } has a unifier { x ↦ z , y ↦ f ( z ) }, because x { x ↦ z , y ↦ f ( z ) } = z = z { x ↦ z , y ↦ f ( z ) } and y { x ↦ z , y ↦ f ( z ) } = f ( z ) = f ( x ) { x ↦ z , y ↦ f ( z ) }. This is also the most general unifier. Other unifiers for the same problem are e.g. { x ↦ f ( x 1 ), y ↦ f ( f ( x 1 )), z ↦ f ( x 1 ) }, { x ↦ f ( f ( x 1 )), y ↦ f ( f ( f ( x 1 ))), z ↦ f ( f ( x 1 )) }, and so on; there are infinitely many similar unifiers. As another example, the problem g ( x , x ) ≐ f ( y ) has no solution with respect to ≡ being literal identity, since any substitution applied to the left and right hand side will keep the outermost g and f , respectively, and terms with different outermost function symbols are syntactically different. A simple unification algorithm can be described as follows. Symbols are ordered such that variables precede function symbols. Terms are ordered by increasing written length; equally long terms are ordered lexicographically. [ 6 ] For a set T of terms, its disagreement path p is the lexicographically least path where two member terms of T differ. Its disagreement set is the set of subterms starting at p , formally: { t | p : t ∈ T }. [ 7 ]
Algorithm: [ 8 ]
    Given a set T of terms to be unified
    Let σ initially be the identity substitution
    do forever
        if T σ is a singleton set then
            return σ
        fi
        let D be the disagreement set of T σ
        let s , t be the two lexicographically least terms in D
        if s is not a variable or s occurs in t then
            return "NONUNIFIABLE"
        fi
        σ := σ { s ↦ t }
    done
Jacques Herbrand discussed the basic concepts of unification and sketched an algorithm in 1930. [ 9 ] [ 10 ] [ 11 ] But most authors attribute the first unification algorithm to John Alan Robinson. [ 12 ] [ 13 ] [ note 5 ] Robinson's algorithm had worst-case exponential behavior in both time and space. [ 11 ] [ 15 ] Numerous authors have proposed more efficient unification algorithms. [ 16 ] Algorithms with worst-case linear-time behavior were discovered independently by Martelli & Montanari (1976) and Paterson & Wegman (1976). [ note 6 ] Baader & Snyder (2001) uses a technique similar to Paterson and Wegman's and is hence linear, [ 17 ] but like most linear-time unification algorithms it is slower than the Robinson version on small inputs, due to the overhead of preprocessing the inputs and postprocessing the output, such as constructing a DAG representation. de Champeaux (2022) is also of linear complexity in the input size but is competitive with the Robinson algorithm on small inputs. The speedup is obtained by using an object-oriented representation of the predicate calculus that avoids the need for pre- and post-processing, instead making variable objects responsible for creating a substitution and for dealing with aliasing. de Champeaux claims that the ability to add functionality to predicate calculus represented as programmatic objects provides opportunities for optimizing other logic operations as well. [ 15 ] The following algorithm is commonly presented and originates from Martelli & Montanari (1982). [ note 7 ] Given a finite set G = { s 1 ≐ t 1 , ..., s n ≐ t n } of potential equations, the algorithm applies rules to transform it to an equivalent set of equations of the form { x 1 ≐ u 1 , ..., x m ≐ u m }, where x 1 , ..., x m are distinct variables and u 1 , ..., u m are terms containing none of the x i . A set of this form can be read as a substitution.
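The disagreement-set loop above can be made concrete in a few lines. The following Python sketch is an illustration only; the term encoding, the helper names, and the naive way of picking a binding are choices made here, not prescribed by the cited sources.

# Sketch of the disagreement-set unification loop described above.
# Terms: variables are strings, compound terms and constants are tuples ("f", args...).

def is_var(t):
    return isinstance(t, str)

def apply_subst(t, s):
    if is_var(t):
        return apply_subst(s[t], s) if t in s else t
    return (t[0], *[apply_subst(a, s) for a in t[1:]])

def occurs(v, t):
    return t == v if is_var(t) else any(occurs(v, a) for a in t[1:])

def disagreement(terms):
    # Subterms at the leftmost position where the given terms differ (None if all equal).
    first = terms[0]
    if all(t == first for t in terms):
        return None
    if any(is_var(t) for t in terms) or len({(t[0], len(t)) for t in terms}) > 1:
        return list(terms)                        # they already differ at this position
    for args in zip(*[t[1:] for t in terms]):     # same head symbol: descend argument-wise
        d = disagreement(list(args))
        if d is not None:
            return d
    return None

def unify(terms):
    sigma = {}
    while True:
        current = [apply_subst(t, sigma) for t in terms]
        d = disagreement(current)
        if d is None:
            return sigma                          # all terms became equal: success
        vs = [t for t in d if is_var(t)]
        rest = [t for t in d if not is_var(t)]
        if not vs or any(occurs(vs[0], t) for t in rest):
            return None                           # "NONUNIFIABLE"
        sigma[vs[0]] = rest[0] if rest else [t for t in d if t != vs[0]][0]

# f(x, g(a)) and f(b, g(y)) unify with {x ↦ b, y ↦ a}:
print(unify([("f", "x", ("g", ("a",))), ("f", ("b",), ("g", "y"))]))
# {'x': ('b',), 'y': ('a',)}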
If there is no solution, the algorithm terminates with ⊥; other authors use "Ω" or "fail" in that case. The operation of substituting all occurrences of variable x in problem G with term t is denoted G { x ↦ t }. For simplicity, constant symbols are regarded as function symbols having zero arguments. An attempt to unify a variable x with a term containing x as a strict subterm, x ≐ f (..., x , ...), would lead to an infinite term as solution for x , since x would occur as a subterm of itself. In the set of (finite) first-order terms as defined above, the equation x ≐ f (..., x , ...) has no solution; hence the eliminate rule may only be applied if x ∉ vars ( t ). Since that additional check, called the occurs check , slows down the algorithm, it is omitted e.g. in most Prolog systems. From a theoretical point of view, omitting the check amounts to solving equations over infinite trees. For the proof of termination of the algorithm, consider a triple ⟨ n var , n lhs , n eqn ⟩ where n var is the number of variables that occur more than once in the equation set, n lhs is the number of function symbols and constants on the left hand sides of potential equations, and n eqn is the number of equations. When rule eliminate is applied, n var decreases, since x is eliminated from G and kept only in { x ≐ t }. Applying any other rule can never increase n var again. When rule decompose , conflict , or swap is applied, n lhs decreases, since at least the left hand side's outermost f disappears. Applying any of the remaining rules delete or check can't increase n lhs , but decreases n eqn . Hence, any rule application decreases the triple ⟨ n var , n lhs , n eqn ⟩ with respect to the lexicographical order, which is possible only a finite number of times. Conor McBride observes [ 18 ] that "by expressing the structure which unification exploits" in a dependently typed language such as Epigram, Robinson's unification algorithm can be made recursive on the number of variables, in which case a separate termination proof becomes unnecessary. In the Prolog syntactical convention, a symbol starting with an upper case letter is a variable name; a symbol that starts with a lowercase letter is a function symbol; the comma is used as the logical and operator. For mathematical notation, x,y,z are used as variables, f,g as function symbols, and a,b as constants. For example, the problem x ≐ f ( x ) (in Prolog: X = f(X) ) succeeds in traditional Prolog and in Prolog II, unifying x with the infinite term x=f(f(f(f(...)))). The most general unifier of a syntactic first-order unification problem of size n may have a size of 2^n. For example, the problem ((( a ∗ z ) ∗ y ) ∗ x ) ∗ w ≐ w ∗ ( x ∗ ( y ∗ ( z ∗ a ))) has the most general unifier { z ↦ a , y ↦ a ∗ a , x ↦ ( a ∗ a ) ∗ ( a ∗ a ), w ↦ (( a ∗ a ) ∗ ( a ∗ a )) ∗ (( a ∗ a ) ∗ ( a ∗ a )) }. In order to avoid exponential time complexity caused by such blow-up, advanced unification algorithms work on directed acyclic graphs (dags) rather than trees. [ 19 ] The concept of unification is one of the main ideas behind logic programming . Specifically, unification is a basic building block of resolution , a rule of inference for determining formula satisfiability.
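The rule table referred to in the termination argument (delete, decompose, conflict, swap, eliminate, and the occurs check) is not reproduced here. The following Python sketch implements the standard formulation of these rules; it should be read as an illustration, not as the exact presentation of the cited sources.

# Sketch of a rule-based solver in the style described above.
# Terms: variables are strings, compound terms and constants are tuples ("f", args...).

def is_var(t):
    return isinstance(t, str)

def vars_of(t):
    if is_var(t):
        return {t}
    return set().union(*[vars_of(a) for a in t[1:]]) if len(t) > 1 else set()

def subst_term(t, x, r):
    if is_var(t):
        return r if t == x else t
    return (t[0], *[subst_term(a, x, r) for a in t[1:]])

def solve(equations):
    """Transform a list of equations (s, t) into solved form, or return None (⊥)."""
    eqs = list(equations)
    solved = []
    while eqs:
        s, t = eqs.pop()
        if s == t:                                   # delete
            continue
        if not is_var(s) and is_var(t):              # swap
            s, t = t, s
        if is_var(s):
            if s in vars_of(t):                      # check (occurs check)
                return None
            # eliminate: substitute s by t everywhere else
            eqs = [(subst_term(l, s, t), subst_term(r, s, t)) for l, r in eqs]
            solved = [(x, subst_term(u, s, t)) for x, u in solved]
            solved.append((s, t))
        elif s[0] != t[0] or len(s) != len(t):       # conflict
            return None
        else:                                        # decompose
            eqs.extend(zip(s[1:], t[1:]))
    return dict(solved)

# The example problem {x ≐ z, y ≐ f(x)} yields the mgu {x ↦ z, y ↦ f(z)}:
print(solve([("x", "z"), ("y", ("f", "x"))]))
# {'y': ('f', 'z'), 'x': 'z'}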
In Prolog , the equality symbol = implies first-order syntactic unification. It represents the mechanism of binding the contents of variables and can be viewed as a kind of one-time assignment. Type inference algorithms are typically based on unification, particularly Hindley–Milner type inference, which is used by the functional languages Haskell and ML . For example, when attempting to infer the type of the Haskell expression True : ['x'] , the compiler will use the type a -> [a] -> [a] of the list construction function (:) , the type Bool of the first argument True , and the type [Char] of the second argument ['x'] . The polymorphic type variable a will be unified with Bool and the second argument [a] will be unified with [Char] . a cannot be both Bool and Char at the same time, therefore this expression is not correctly typed. As for Prolog, an algorithm for type inference can be given based on unification; a small sketch of this idea appears below. Unification has been used in different research areas of computational linguistics. [ 21 ] [ 22 ] Order-sorted logic allows one to assign a sort , or type , to each term, and to declare a sort s 1 a subsort of another sort s 2 , commonly written as s 1 ⊆ s 2 . For example, when reasoning about biological creatures, it is useful to declare a sort dog to be a subsort of a sort animal . Wherever a term of some sort s is required, a term of any subsort of s may be supplied instead. For example, assuming a function declaration mother : animal → animal , and a constant declaration lassie : dog , the term mother ( lassie ) is perfectly valid and has the sort animal . In order to supply the information that the mother of a dog is a dog in turn, another declaration mother : dog → dog may be issued; this is called function overloading , similar to overloading in programming languages . Walther gave a unification algorithm for terms in order-sorted logic, requiring for any two declared sorts s 1 , s 2 their intersection s 1 ∩ s 2 to be declared, too: if x 1 and x 2 are variables of sort s 1 and s 2 , respectively, the equation x 1 ≐ x 2 has the solution { x 1 = x , x 2 = x }, where x : s 1 ∩ s 2 . [ 23 ] After incorporating this algorithm into a clause-based automated theorem prover, he could solve a benchmark problem by translating it into order-sorted logic, thereby reducing its size by an order of magnitude, as many unary predicates turned into sorts. Smolka generalized order-sorted logic to allow for parametric polymorphism . [ 24 ] In his framework, subsort declarations are propagated to complex type expressions. As a programming example, a parametric sort list ( X ) may be declared (with X being a type parameter as in a C++ template ), and from a subsort declaration int ⊆ float the relation list ( int ) ⊆ list ( float ) is automatically inferred, meaning that each list of integers is also a list of floats. Schmidt-Schauß generalized order-sorted logic to allow for term declarations. [ 25 ] As an example, assuming subsort declarations even ⊆ int and odd ⊆ int , a term declaration like ∀ i : int . ( i + i ) : even allows one to declare a property of integer addition that could not be expressed by ordinary overloading.
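Returning to the Haskell example above, the clash between Bool and Char can be reproduced with a toy unifier over type expressions. This is a minimal sketch assuming a naive syntactic unifier (occurs check omitted for brevity); it is not the algorithm used by any actual compiler.

# Sketch: type expressions as terms; type variables are lowercase strings,
# constructors are tuples such as ("Bool",) and ("List", t).

def is_var(t):
    return isinstance(t, str)

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(t1, t2, s):
    """Extend substitution s so that t1 and t2 become equal, or return None."""
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        return {**s, t1: t2}
    if is_var(t2):
        return {**s, t2: t1}
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None                       # constructor clash, e.g. Bool vs Char
    for a1, a2 in zip(t1[1:], t2[1:]):
        s = unify(a1, a2, s)
        if s is None:
            return None
    return s

Bool, Char = ("Bool",), ("Char",)
# (:) :: a -> [a] -> [a]; the arguments are True :: Bool and ['x'] :: [Char].
s = unify("a", Bool, {})                      # a ~ Bool    gives {a: Bool}
s = unify(("List", "a"), ("List", Char), s)   # [a] ~ [Char] forces a ~ Char, which fails
print(s)                                      # None: the expression is ill-typed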
E-unification is the problem of finding solutions to a given set of equations , taking into account some equational background knowledge E . The latter is given as a set of universal equalities . For some particular sets E , equation solving algorithms (a.k.a. E-unification algorithms ) have been devised; for others it has been proven that no such algorithms can exist. For example, if a and b are distinct constants, the equation x ∗ a ≐ y ∗ b has no solution with respect to purely syntactic unification , where nothing is known about the operator ∗ . However, if the ∗ is known to be commutative , then the substitution { x ↦ b , y ↦ a } solves the above equation, since ( x ∗ a ) { x ↦ b , y ↦ a } = b ∗ a ≡ a ∗ b = ( y ∗ b ) { x ↦ b , y ↦ a }. The background knowledge E could state the commutativity of ∗ by the universal equality " u ∗ v = v ∗ u for all u , v ". Unification is said to be decidable for a theory if a unification algorithm has been devised for it that terminates for any input problem, and semi-decidable if an algorithm has been devised that terminates for any solvable input problem but may keep searching forever for solutions of an unsolvable input problem. Unification is decidable for some theories and only semi-decidable for others. If there is a convergent term rewriting system R available for E , the one-sided paramodulation algorithm [ 38 ] can be used to enumerate all solutions of given equations. Starting with G being the unification problem to be solved and S being the identity substitution, rules are applied nondeterministically until the empty set appears as the actual G , in which case the actual S is a unifying substitution. Depending on the order the paramodulation rules are applied, on the choice of the actual equation from G , and on the choice of R 's rules in mutate , different computation paths are possible. Only some lead to a solution, while others end at a G ≠ {} where no further rule is applicable (e.g. G = { f (...) ≐ g (...) }). As an example, consider a term rewrite system R defining the append operator for lists built from cons and nil , where cons ( x , y ) is written in infix notation as x . y for brevity: rule 1 is app ( nil , z ) → z and rule 2 is app ( x . y , z ) → x . app ( y , z ). For instance, app ( a . b . nil , c . d . nil ) → a . app ( b . nil , c . d . nil ) → a . b . app ( nil , c . d . nil ) → a . b . c . d . nil demonstrates the concatenation of the lists a . b . nil and c . d . nil , employing rules 2, 2, and 1. The equational theory E corresponding to R is the congruence closure of R , both viewed as binary relations on terms. For example, app ( a . b . nil , c . d . nil ) ≡ a . b . c . d . nil ≡ app ( a . b . c . d . nil , nil ). The paramodulation algorithm enumerates solutions to equations with respect to that E when fed with the example R . To avoid variable name clashes, rewrite rules are consistently renamed each time before their use by rule mutate ; v 2 , v 3 , ... are computer-generated variable names for this purpose. One successful computation path for the unification problem { app ( x , app ( y , x )) ≐ a . a . nil } yields the unifying substitution S = { y ↦ nil , x ↦ a . nil }. In fact, app ( x , app ( y , x )) { y ↦ nil , x ↦ a . nil } = app ( a . nil , app ( nil , a . nil )) ≡ app ( a . nil , a . nil ) ≡ a . app ( nil , a . nil ) ≡ a . a . nil solves the given problem.
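The solution S can be checked mechanically by rewriting with R. The following Python sketch assumes the two append rules stated above and simply normalizes the instantiated left hand side term; it is an illustration of the check, not part of the paramodulation algorithm itself.

# Sketch: rewriting with R: 1) app(nil, z) -> z   2) app(x.y, z) -> x.app(y, z).
# Lists are nested pairs ("cons", head, tail) ending in ("nil",); this encoding is
# illustrative only.

NIL = ("nil",)

def cons(h, t):
    return ("cons", h, t)

def rewrite(t):
    """Apply the append rules bottom-up until a normal form is reached."""
    if not isinstance(t, tuple) or t[0] != "app":
        if isinstance(t, tuple) and t[0] == "cons":
            return ("cons", rewrite(t[1]), rewrite(t[2]))
        return t
    xs, ys = rewrite(t[1]), rewrite(t[2])
    if xs == NIL:                      # rule 1: app(nil, z) -> z
        return ys
    if xs[0] == "cons":                # rule 2: app(x.y, z) -> x.app(y, z)
        return ("cons", xs[1], rewrite(("app", xs[2], ys)))
    return ("app", xs, ys)             # stuck, e.g. when an argument is a variable

a = ("a",)
# app(x, app(y, x)) with the substitution {y ↦ nil, x ↦ a.nil} applied:
term = ("app", cons(a, NIL), ("app", NIL, cons(a, NIL)))
print(rewrite(term) == cons(a, cons(a, NIL)))   # True: the normal form is a.a.nil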
A second successful computation path, obtainable by choosing "mutate(1), mutate(2), mutate(2), mutate(1)", leads to the substitution S = { y ↦ a . a . nil , x ↦ nil }; it is not shown here. No other path leads to a success. If R is a convergent term rewriting system for E , an approach alternative to the previous section consists in successive application of " narrowing steps"; this will eventually enumerate all solutions of a given equation. A narrowing step consists in instantiating a term just enough that one of its subterms can be rewritten by a rule of R , and then performing that rewrite step. Formally, if l → r is a renamed copy of a rewrite rule from R , having no variables in common with a term s , and the subterm s | p is not a variable and is unifiable with l via the mgu σ, then s can be narrowed to the term t = s σ[ r σ] p , i.e. to the term s σ, with the subterm at p replaced by r σ. The situation that s can be narrowed to t is commonly denoted as s ↝ t . Intuitively, a sequence of narrowing steps t 1 ↝ t 2 ↝ ... ↝ t n can be thought of as a sequence of rewrite steps t 1 → t 2 → ... → t n , but with the initial term t 1 being further and further instantiated, as necessary to make each of the used rules applicable. The above example paramodulation computation corresponds to a narrowing sequence that alternately instantiates and rewrites the left hand side term; its last term, v 2 . v 2 . nil , can be syntactically unified with the original right hand side term a . a . nil . The narrowing lemma [ 39 ] ensures that whenever an instance of a term s can be rewritten to a term t by a convergent term rewriting system, then s and t can be narrowed and rewritten to a term s ′ and t ′ , respectively, such that t ′ is an instance of s ′ . Formally: whenever s σ → ∗ t holds for some substitution σ, then there exist terms s ′ , t ′ such that s ↝ ∗ s ′ and t → ∗ t ′ and s ′ τ = t ′ for some substitution τ. Many applications require one to consider the unification of typed lambda-terms instead of first-order terms. Such unification is often called higher-order unification . Higher-order unification is undecidable , [ 40 ] [ 41 ] [ 42 ] and such unification problems do not have most general unifiers. For example, the unification problem { f ( a , b , a ) ≐ d ( b , a , c ) }, where the only variable is f , has the solutions { f ↦ λ x .λ y .λ z . d ( y , x , c ) }, { f ↦ λ x .λ y .λ z . d ( y , z , c ) }, { f ↦ λ x .λ y .λ z . d ( y , a , c ) }, { f ↦ λ x .λ y .λ z . d ( b , x , c ) }, { f ↦ λ x .λ y .λ z . d ( b , z , c ) } and { f ↦ λ x .λ y .λ z . d ( b , a , c ) }. A well-studied branch of higher-order unification is the problem of unifying simply typed lambda terms modulo the equality determined by αβη conversions. Gérard Huet gave a semi-decidable (pre-)unification algorithm [ 43 ] that allows a systematic search of the space of unifiers (generalizing the unification algorithm of Martelli-Montanari [ 5 ] with rules for terms containing higher-order variables) that seems to work sufficiently well in practice. Huet [ 44 ] and Gilles Dowek [ 45 ] have written articles surveying this topic. Several subsets of higher-order unification are well-behaved, in that they are decidable and have a most general unifier for solvable problems. One such subset is the set of first-order terms described above. Higher-order pattern unification , due to Dale Miller, [ 46 ] is another such subset.
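The six unifiers listed above can be verified by brute force: each candidate binding for f is applied to the arguments (a, b, a) and the result compared with d(b, a, c). The following Python sketch does exactly that; it is a check of this one example only, not a higher-order unification procedure.

# Sketch: verify the higher-order example above.  Each binding for f is a function of
# three arguments; applying it to (a, b, a) must yield d(b, a, c).

from itertools import product

a, b, c = "a", "b", "c"

def d(u, v, w):
    return ("d", u, v, w)

goal = ("d", b, a, c)

# Enumerate lambda bodies d(e1, e2, c) where e1, e2 are either a bound variable
# (projection) or a constant (imitation); keep those solving f(a, b, a) = d(b, a, c).
choices = ["x", "y", "z", a, b, c]

def value(e, x, y, z):
    return {"x": x, "y": y, "z": z}.get(e, e)

solutions = []
for e1, e2 in product(choices, repeat=2):
    f = lambda x, y, z, e1=e1, e2=e2: d(value(e1, x, y, z), value(e2, x, y, z), c)
    if f(a, b, a) == goal:
        solutions.append((e1, e2))

print(solutions)
# [('y', 'x'), ('y', 'z'), ('y', 'a'), ('b', 'x'), ('b', 'z'), ('b', 'a')]
# i.e. exactly the six unifiers λx.λy.λz.d(·,·,c) listed in the text.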
The higher-order logic programming languages λProlog and Twelf have switched from full higher-order unification to implementing only the pattern fragment; surprisingly pattern unification is sufficient for almost all programs, if each non-pattern unification problem is suspended until a subsequent substitution puts the unification into the pattern fragment. A superset of pattern unification called functions-as-constructors unification is also well-behaved. [ 47 ] The Zipperposition theorem prover has an algorithm integrating these well-behaved subsets into a full higher-order unification algorithm. [ 2 ] In computational linguistics, one of the most influential theories of elliptical construction is that ellipses are represented by free variables whose values are then determined using Higher-Order Unification. For instance, the semantic representation of "Jon likes Mary and Peter does too" is like( j , m ) ∧ R( p ) and the value of R (the semantic representation of the ellipsis) is determined by the equation like( j , m ) = R( j ) . The process of solving such equations is called Higher-Order Unification. [ 48 ] Wayne Snyder gave a generalization of both higher-order unification and E-unification, i.e. an algorithm to unify lambda-terms modulo an equational theory. [ 49 ]
https://en.wikipedia.org/wiki/Unification_(computer_science)
Unification of theories about observable fundamental phenomena of nature is one of the primary goals of physics . [ 1 ] [ 2 ] [ 3 ] The two great unifications to date are Isaac Newton 's unification of gravity and astronomy, and James Clerk Maxwell 's unification of electromagnetism ; the latter has been further unified with the weak nuclear force into the electroweak interaction . This process of "unifying" forces continues today, with the ultimate goal of finding a theory of everything . The "first great unification" was Isaac Newton 's 17th-century unification of gravity , which brought together the understandings of the observable phenomena of gravity on Earth with the observable behaviour of celestial bodies in space. [ 2 ] [ 4 ] [ 5 ] His work is credited with laying the foundations of future endeavors for a grand unified theory. For example, it has been stated that "If we have to take any single individual as the originator of the quest for a unified theory of physics, and, by implication, the whole of knowledge, it has to be Newton." [ 6 ] Physicist Steven Weinberg stated that "It is with Isaac Newton that the modern dream of a final theory really begins". [ 7 ] The ancient Chinese people observed that certain rocks, such as lodestone and magnetite , were attracted to one another by an invisible force. This effect was later called magnetism , which was first rigorously studied in the 17th century. However, even before the ancient Chinese observations of magnetism, the ancient Greeks knew of other objects, such as amber , that when rubbed with fur would cause a similar invisible attraction between the two objects. [ 8 ] This was also studied rigorously in the 17th century and came to be called electricity . Thus, physics had come to understand two observations of nature in terms of some root cause (electricity and magnetism). However, work in the 19th century revealed that these two forces were just two different aspects of one force – electromagnetism . The "second great unification" was James Clerk Maxwell 's 19th-century unification of electromagnetism . It brought together the understandings of the observable phenomena of magnetism , electricity and light (and more broadly, the spectrum of electromagnetic radiation ). [ 9 ] This was followed in the 20th century by Albert Einstein 's unification of space and time, and of mass and energy, through his theory of special relativity . [ 9 ] Later, Paul Dirac developed quantum field theory , unifying quantum mechanics and special relativity. [ 10 ] More recently, electromagnetism and the weak nuclear force have been unified and are now considered to be two aspects of the electroweak interaction . This process of "unifying" forces continues today, with the ultimate goal of finding a theory of everything – it remains perhaps the most prominent of the unsolved problems in physics . There remain four fundamental forces which have not been decisively unified: the gravitational and electromagnetic interactions, which produce significant long-range forces whose effects can be seen directly in everyday life, and the strong and weak interactions , which produce forces at minuscule, subatomic distances and govern nuclear interactions. Electromagnetism and the weak interactions are widely considered to be two aspects of the electroweak interaction .
Attempts to unify quantum mechanics and general relativity into a single theory of quantum gravity , a program ongoing for over half a century, have not yet been decisively resolved; current leading candidates are M-theory , superstring theory and loop quantum gravity . [ 2 ]
https://en.wikipedia.org/wiki/Unification_of_theories_in_physics