Second-order Jahn-Teller distortion (commonly known as pseudo Jahn-Teller distortion) is a general and powerful approach rigorously based in first-principles vibronic coupling interactions. It enables prediction and explanation of molecular geometries that are not satisfactorily, or even correctly, explained by semi-empirical theories such as Walsh diagrams, atomic state hybridization, valence shell electron pair repulsion (VSEPR), softness-hardness-based models, aromaticity and antiaromaticity, hyperconjugation, etc. [ 1 ]
The application to main-group element compounds utilizes principles of group theory and symmetry . A molecule will distort in order to maximize symmetry-allowed interactions between the highest occupied molecular orbitals and lowest unoccupied molecular orbitals, and thereby stabilize the HOMOs and destabilize the LUMOs (resulting in the overall stabilization of the molecule). The extent of second-order Jahn-Teller distortion is inversely proportional to the energy difference between orbitals. Direct products are used to determine the allowedness of a given interaction: the interaction is allowed if the product of the symmetry of the first molecular orbital, the symmetry of the vibration, and the symmetry of the second molecular orbital contains the totally symmetric irreducible representation of the molecule’s point group. For heavier main-group compounds, molecular orbital interactions are larger due to the decreasing bond strength resulting in a smaller energy difference between the interacting orbitals. [ 2 ]
Group 14 analogues of alkenes and alkynes have previously been prepared. [ 2 ] Moving down the group, the compounds experience increasing geometric distortion, becoming increasingly trans-bent from the original linear geometry and displaying increasingly limited shortening of the multiple bond. These patterns are also observed in group 13 multiply-bonded compounds. These geometry trends are rationalized [ 2 ] below.
This trend can be rationalized with hybridization [ 3 ] – moving down a group, the gap between the ns and np orbitals widens and there is an increasing mismatch between valence orbital sizes. The mismatch leads to lower hybridization – that is, increased nonbonding character on each of the heavier group 13 or 14 atoms involved in multiple bonding, which manifests as increased deviation from the typically expected linear and planar geometries. This rationalization is not especially cohesive with the typical approach to multiple bonds in organic chemistry – that is, a single σ-bond and one or two π-bonds.
This rationalization is simple and preserves the double-bond nature of the group 13 or 14 atom interaction. The multiple bond is not exactly a typical σ+π interaction; rather the two halves of the alkyne analogue are treated as singlet bent monomers and the multiple bond is treated as an aggregation between them, with the sp x p y -hybridized filled orbital on one group 13 or 14 atom donating to the vacant p z of the other.
This rationalization is consistent with valence bond theory and suggests a weakened E-E multiple bond. The electron pair is described as resonating between the two group 13 or 14 atoms, and the resonance is favored by occupation of the empty (but not mandatorily vacant) orbital.
Second-order Jahn-Teller distortion provides a rigorous and first-principles approach to the distortion problem. The interactions between the HOMOs and LUMOs to afford a new set of molecular orbitals is an example of second-order Jahn-Teller distortion.
The trans-pyramidalization distortion is taken as an example. [ 2 ] The frontier molecular orbitals of the undistorted alkene possessing D 2h symmetry have symmetries a g (HOMO-1), b 2u (HOMO), b 1g (LUMO), and b 3u (LUMO+1). The symmetry of the trans-pyramidalization vibration is b 1g . A triple product of ground state, vibrational mode, and excited state that can be taken is b 2u (HOMO) × b 1g (trans-pyramidalizing vibrational mode) × b 3u (LUMO+1) = a g . Since a g is the totally symmetric representation of the D 2h point group, the b 2u and b 3u molecular orbitals participate in an allowed interaction through the trans-pyramidalizing vibrational mode. The molecule will distort in a trans-pyramidal fashion (into C 2h symmetry) in order to enable this interaction, which produces a more stabilized HOMO and more destabilized LUMO.
This treatment can be repeated for all other combinations of HOMO-1, HOMO and LUMO, LUMO+1. Notably, it is found that the HOMO and LUMO are symmetry-disallowed to mix.
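As a rough illustration (not drawn from the cited sources), the allowedness test described above can be checked mechanically from the D 2h character table: every D 2h irreducible representation is one-dimensional, so the triple direct product is just an element-wise product of characters, and the interaction is allowed exactly when that product reduces to the totally symmetric irrep a g . The character values below are the standard D 2h table; the function names are the author's own.

```python
# Characters over the D2h operations E, C2(z), C2(y), C2(x), i, s(xy), s(xz), s(yz)
D2H = {
    "ag":  [1,  1,  1,  1,  1,  1,  1,  1],
    "b1g": [1,  1, -1, -1,  1,  1, -1, -1],
    "b2g": [1, -1,  1, -1,  1, -1,  1, -1],
    "b3g": [1, -1, -1,  1,  1, -1, -1,  1],
    "au":  [1,  1,  1,  1, -1, -1, -1, -1],
    "b1u": [1,  1, -1, -1, -1, -1,  1,  1],
    "b2u": [1, -1,  1, -1, -1,  1, -1,  1],
    "b3u": [1, -1, -1,  1, -1,  1,  1, -1],
}

def triple_product(orbital_1, vibration, orbital_2):
    """Element-wise product of the characters of the two orbitals and the vibration."""
    return [a * b * c for a, b, c in zip(D2H[orbital_1], D2H[vibration], D2H[orbital_2])]

def n_ag(chars):
    """Number of times the totally symmetric irrep a_g appears (reduction formula)."""
    return sum(chars) // len(chars)

# HOMO (b2u) mixing with LUMO+1 (b3u) through the trans-pyramidalizing b1g mode: allowed.
print(n_ag(triple_product("b2u", "b1g", "b3u")) > 0)  # True

# HOMO (b2u) mixing with LUMO (b1g) through the same b1g mode: disallowed.
print(n_ag(triple_product("b2u", "b1g", "b1g")) > 0)  # False
```

The same reduction-formula check, weighted by class sizes for point groups with degenerate irreps, applies to the D 3h pyramidalization case discussed below.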
This distortion can be treated in the same fashion, using the triple product to determine whether or not the distortion from the undistorted linear D ∞h symmetry will produce a symmetry-allowed interaction (and therefore, whether or not the distortion will occur).
The pyramidalization and energies of inversion of group 15 :MR 3 (M = N, P, As, Sb, Bi) and group 14 •MR 3 molecules can also be predicted and rationalized using a second-order Jahn-Teller distortion treatment. The “parent” planar molecule possessing D 3h symmetry has frontier orbitals of a 2 ” (HOMO) and a 1 ’ (LUMO) symmetries. The pyramidalizing vibration mode has symmetry a 2 ”. The triple product yields the totally symmetric representation a 1 ’, indicating that the molecule will indeed pyramidalize into C 3v symmetry.
The energies of inversion can also be predicted and compared. Due to lower energy overlap between the 3p and 1s orbitals in PH 3 (versus between 2p and 1s in NH 3 ), the HOMO-LUMO energy gap in PH 3 will be smaller than that of NH 3 . This allows for a stronger interaction between the HOMO and LUMO in second-order Jahn-Teller fashion. The distortion stabilizes the HOMO and destabilizes the LUMO, resulting in a larger barrier to inversion in PH 3 .
Tetravalent main-group-element hydrides of form A P H 4 (A P = B − , C, N + , O 2+ , Al − , Si, P + , and S 2+ , where A P is a tetravalent atom or ion) are known to distort from the square planar to tetrahedral geometry. For all A P H 4 systems in D 4h symmetry, the ground state is a 1g . The exact electronic configuration, however, is dependent on the electronegativity of the main group element. The distortion to tetrahedral geometry has b 2u symmetry. For these A P H 4 systems, the a 2u →b 1g * and e u →e g * one-electron charge-transfer transitions are most active in the b 2u mode. | https://en.wikipedia.org/wiki/Second-order_Jahn-Teller_distortion_in_main-group_element_compounds |
In mathematical logic , second-order arithmetic is a collection of axiomatic systems that formalize the natural numbers and their subsets . It is an alternative to axiomatic set theory as a foundation for much, but not all, of mathematics.
A precursor to second-order arithmetic that involves third-order parameters was introduced by David Hilbert and Paul Bernays in their book Grundlagen der Mathematik . [ 1 ] The standard axiomatization of second-order arithmetic is denoted by Z 2 .
Second-order arithmetic includes, but is significantly stronger than, its first-order counterpart Peano arithmetic . Unlike Peano arithmetic, second-order arithmetic allows quantification over sets of natural numbers as well as numbers themselves. Because real numbers can be represented as ( infinite ) sets of natural numbers in well-known ways, and because second-order arithmetic allows quantification over such sets, it is possible to formalize the real numbers in second-order arithmetic. For this reason, second-order arithmetic is sometimes called " analysis ". [ 2 ]
Second-order arithmetic can also be seen as a weak version of set theory in which every element is either a natural number or a set of natural numbers. Although it is much weaker than Zermelo–Fraenkel set theory , second-order arithmetic can prove essentially all of the results of classical mathematics expressible in its language.
A subsystem of second-order arithmetic is a theory in the language of second-order arithmetic each axiom of which is a theorem of full second-order arithmetic (Z 2 ). Such subsystems are essential to reverse mathematics , a research program investigating how much of classical mathematics can be derived in certain weak subsystems of varying strength. Much of core mathematics can be formalized in these weak subsystems, some of which are defined below. Reverse mathematics also clarifies the extent and manner in which classical mathematics is nonconstructive .
The language of second-order arithmetic is two-sorted . The first sort of terms and in particular variables , usually denoted by lower case letters, consists of individuals , whose intended interpretation is as natural numbers. The other sort of variables, variously called "set variables", "class variables", or even "predicates" are usually denoted by upper-case letters. They refer to classes/predicates/properties of individuals, and so can be thought of as sets of natural numbers. Both individuals and set variables can be quantified universally or existentially . A formula with no bound set variables (that is, no quantifiers over set variables) is called arithmetical . An arithmetical formula may have free set variables and bound individual variables.
Individual terms are formed from the constant 0, the unary function S (the successor function ), and the binary operations + and ⋅ (addition and multiplication). The successor function adds 1 to its input. The relations = (equality) and < (comparison of natural numbers) relate two individuals, whereas the relation ∈ (membership) relates an individual and a set (or class). Thus in notation the language of second-order arithmetic is given by the signature L = { 0 , S , + , ⋅ , = , < , ∈ }.
For example, ∀ n ( n ∈ X → S n ∈ X ) is a well-formed formula of second-order arithmetic that is arithmetical, has one free set variable X and one bound individual variable n (but no bound set variables, as is required of an arithmetical formula), whereas ∃ X ∀ n ( n ∈ X ↔ n < SSSSSS0 ⋅ SSSSSSS0 ) is a well-formed formula that is not arithmetical, having one bound set variable X and one bound individual variable n .
Several different interpretations of the quantifiers are possible. If second-order arithmetic is studied using the full semantics of second-order logic then the set quantifiers range over all subsets of the range of the individual variables. If second-order arithmetic is formalized using the semantics of first-order logic ( Henkin semantics ) then any model includes a domain for the set variables to range over, and this domain may be a proper subset of the full powerset of the domain of individual variables. [ 3 ]
The following axioms are known as the basic axioms , or sometimes the Robinson axioms. The resulting first-order theory , known as Robinson arithmetic , is essentially Peano arithmetic without induction. The domain of discourse for the quantified variables is the natural numbers , collectively denoted by N , and including the distinguished member 0, called " zero ."
The primitive functions are the unary successor function , denoted by prefix S, and two binary operations , addition and multiplication , denoted by the infix operators "+" and "⋅", respectively. There is also a primitive binary relation called order , denoted by the infix operator "<".
Axioms governing the successor function and zero :
1. ∀ m (S m ≠ 0) — the successor of a natural number is never zero
2. ∀ m ∀ n (S m = S n → m = n ) — the successor function is injective
3. ∀ n (0 = n ∨ ∃ m (S m = n )) — every natural number is zero or a successor
Addition defined recursively :
4. ∀ m ( m + 0 = m )
5. ∀ m ∀ n ( m + S n = S( m + n ))
Multiplication defined recursively:
6. ∀ m ( m ⋅ 0 = 0)
7. ∀ m ∀ n ( m ⋅ S n = ( m ⋅ n ) + m )
Axioms governing the order relation "<":
8. ∀ m (¬( m < 0)) — no natural number is smaller than zero
9. ∀ m ∀ n ( m < S n ↔ ( m < n ∨ m = n ))
10. ∀ n (0 = n ∨ 0 < n ) — every natural number is zero or greater than zero
11. ∀ m ∀ n (( m < n ∨ m = n ) ∨ n < m ) — the ordering is total
These axioms are all first-order statements . That is, all variables range over the natural numbers and not sets thereof, a fact even stronger than their being arithmetical. Moreover, there is but one existential quantifier , in Axiom 3. Axioms 1 and 2, together with an axiom schema of induction make up the usual Peano–Dedekind definition of N . Adding to these axioms any sort of axiom schema of induction makes redundant the axioms 3, 10, and 11.
If φ ( n ) is a formula of second-order arithmetic with a free individual variable n and possibly other free individual or set variables (written m 1 ,..., m k and X 1 ,..., X l ), the induction axiom for φ is the axiom:
( φ (0) ∧ ∀ n ( φ ( n ) → φ (S n ))) → ∀ n φ ( n )
where the free variables m 1 ,..., m k and X 1 ,..., X l may appear as parameters in φ .
The ( full ) second-order induction scheme consists of all instances of this axiom, over all second-order formulas.
One particularly important instance of the induction scheme is when φ is the formula " n ∈ X " expressing the fact that n is a member of X ( X being a free set variable): in this case, the induction axiom for φ is
∀ X ((0 ∈ X ∧ ∀ n ( n ∈ X → S n ∈ X )) → ∀ n ( n ∈ X ))
This sentence is called the second-order induction axiom .
If φ ( n ) is a formula with a free variable n and possibly other free variables, but not the variable Z , the comprehension axiom for φ is the formula
∃ Z ∀ n ( n ∈ Z ↔ φ ( n ))
This axiom makes it possible to form the set Z = { n | φ ( n ) } of natural numbers satisfying φ ( n ). There is a technical restriction that the formula φ may not contain the variable Z , for otherwise the formula n ∉ Z would lead to the comprehension axiom
∃ Z ∀ n ( n ∈ Z ↔ n ∉ Z ),
which is inconsistent. This convention is assumed in the remainder of this article.
The formal theory of second-order arithmetic (in the language of second-order arithmetic) consists of the basic axioms, the comprehension axiom for every formula φ (arithmetic or otherwise), and the second-order induction axiom. This theory is sometimes called full second-order arithmetic to distinguish it from its subsystems, defined below. Because full second-order semantics imply that every possible set exists, the comprehension axioms may be taken to be part of the deductive system when full second-order semantics is employed. [ 3 ]
This section describes second-order arithmetic with first-order semantics. Thus a model M {\displaystyle {\mathcal {M}}} of the language of second-order arithmetic consists of a set M (which forms the range of individual variables) together with a constant 0 (an element of M ), a function S from M to M , two binary operations + and · on M , a binary relation < on M , and a collection D of subsets of M , which is the range of the set variables. Omitting D produces a model of the language of first-order arithmetic.
When D is the full powerset of M , the model M {\displaystyle {\mathcal {M}}} is called a full model . The use of full second-order semantics is equivalent to limiting the models of second-order arithmetic to the full models. In fact, the axioms of second-order arithmetic have only one full model. This follows from the fact that the Peano axioms with the second-order induction axiom have only one model under second-order semantics.
The first-order functions that are provably total in second-order arithmetic are precisely the same as those representable in system F . [ 4 ] Almost equivalently, system F is the theory of functionals corresponding to second-order arithmetic in a manner parallel to how Gödel's system T corresponds to first-order arithmetic in the Dialectica interpretation .
When a model of the language of second-order arithmetic has certain properties, it can also be called by other names: an ω-model is a model whose first-order part is the standard natural numbers ω with the usual operations (so only the collection D of sets may be non-standard), and a β-model is an ω-model that satisfies exactly the same Π 1 1 sentences, with parameters from the model, as the full model.
There are many named subsystems of second-order arithmetic.
A subscript 0 in the name of a subsystem indicates that it includes only
a restricted portion of the full second-order induction scheme. [ 9 ] Such a restriction lowers the proof-theoretic strength of the system significantly. For example, the system ACA 0 described below is equiconsistent with Peano arithmetic . The corresponding theory ACA, consisting of ACA 0 plus the full second-order induction scheme, is stronger than Peano arithmetic.
Many of the well-studied subsystems are related to closure properties of models. For example, it can be shown that every ω-model of full second-order arithmetic is closed under Turing jump , but not every ω-model closed under Turing jump is a model of full second-order arithmetic. The subsystem ACA 0 includes just enough axioms to capture the notion of closure under Turing jump.
ACA 0 is defined as the theory consisting of the basic axioms, the arithmetical comprehension axiom scheme (in other words the comprehension axiom for every arithmetical formula φ ) and the ordinary second-order induction axiom. It would be equivalent to also include the entire arithmetical induction axiom scheme, in other words to include the induction axiom for every arithmetical formula φ .
It can be shown that a collection S of subsets of ω determines an ω-model of ACA 0 if and only if S is closed under Turing jump, Turing reducibility , and Turing join. [ 10 ]
The subscript 0 in ACA 0 indicates that not every instance of the induction axiom scheme is included this subsystem. This makes no difference for ω-models, which automatically satisfy every instance of the induction axiom. It is of importance, however, in the study of non-ω-models. The system consisting of ACA 0 plus induction for all formulas is sometimes called ACA with no subscript.
The system ACA 0 is a conservative extension of first-order arithmetic (or first-order Peano axioms), defined as the basic axioms, plus the first-order induction axiom scheme (for all formulas φ involving no class variables at all, bound or otherwise), in the language of first-order arithmetic (which does not permit class variables at all). In particular it has the same proof-theoretic ordinal ε 0 as first-order arithmetic, owing to the limited induction schema.
A formula is called bounded arithmetical , or Δ 0 0 , when all its quantifiers are of the form ∀ n < t or ∃ n < t (where n is the individual variable being quantified and t is an individual term), where ∀ n < t φ stands for ∀ n ( n < t → φ ) and ∃ n < t φ stands for ∃ n ( n < t ∧ φ ).
A formula is called Σ 0 1 (or sometimes Σ 1 ), respectively Π 0 1 (or sometimes Π 1 ) when it is of the form ∃ m φ , respectively ∀ m φ where φ is a bounded arithmetical formula and m is an individual variable (that is free in φ ). More generally, a formula is called Σ 0 n , respectively Π 0 n when it is obtained by adding existential, respectively universal, individual quantifiers to a Π 0 n −1 , respectively Σ 0 n −1 formula (and Σ 0 0 and Π 0 0 are both equal to Δ 0 0 ). By construction, all these formulas are arithmetical (no class variables are ever bound) and, in fact, by putting the formula in Skolem prenex form one can see that every arithmetical formula is logically equivalent to a Σ 0 n or Π 0 n formula for all large enough n .
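As a small sketch (an illustration assumed by the editor, not part of the article), the Σ 0 n / Π 0 n classification of a formula already in prenex form can be read off from its prefix of unbounded individual quantifiers by counting alternating blocks; the hypothetical helper below takes the prefix as a string of 'E' (∃) and 'A' (∀).

```python
def classify_prefix(prefix: str) -> str:
    """Classify a prenex arithmetical formula by its prefix of unbounded individual
    quantifiers, e.g. "EAE" -> "Sigma^0_3" and "" -> "Delta^0_0" (bounded formula).
    'E' stands for an existential block and 'A' for a universal block."""
    if not prefix:
        return "Delta^0_0"
    # Count maximal blocks of like quantifiers; each block adds one level.
    blocks = 1
    for prev, cur in zip(prefix, prefix[1:]):
        if cur != prev:
            blocks += 1
    kind = "Sigma" if prefix[0] == "E" else "Pi"
    return f"{kind}^0_{blocks}"

print(classify_prefix("E"))    # Sigma^0_1
print(classify_prefix("AE"))   # Pi^0_2
print(classify_prefix("EEA"))  # Sigma^0_2 (adjacent like quantifiers form one block)
```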
The subsystem RCA 0 is a weaker system than ACA 0 and is often used as the base system in reverse mathematics . It consists of: the basic axioms, the Σ 0 1 induction scheme, and the Δ 0 1 comprehension scheme. The former term is clear: the Σ 0 1 induction scheme is the induction axiom for every Σ 0 1 formula φ . The term "Δ 0 1 comprehension" is more complex, because there is no such thing as a Δ 0 1 formula. The Δ 0 1 comprehension scheme instead asserts the comprehension axiom for every Σ 0 1 formula that is logically equivalent to a Π 0 1 formula. This scheme includes, for every Σ 0 1 formula φ and every Π 0 1 formula ψ , the axiom:
∀ n ( φ ( n ) ↔ ψ ( n )) → ∃ Z ∀ n ( n ∈ Z ↔ φ ( n ))
The set of first-order consequences of RCA 0 is the same as those of the subsystem IΣ 1 of Peano arithmetic in which induction is restricted to Σ 0 1 formulas. In turn, IΣ 1 is conservative over primitive recursive arithmetic (PRA) for Π 0 2 sentences. Moreover, the proof-theoretic ordinal of RCA 0 is ω ω , the same as that of PRA.
It can be seen that a collection S of subsets of ω determines an ω-model of RCA 0 if and only if S is closed under Turing reducibility and Turing join. In particular, the collection of all computable subsets of ω gives an ω-model of RCA 0 . This is the motivation behind the name of this system—if a set can be proved to exist using RCA 0 , then the set is recursive (i.e. computable).
Sometimes an even weaker system than RCA 0 is desired. One such system is defined as follows: one must first augment the language of arithmetic with an exponential function symbol (in stronger systems the exponential can be defined in terms of addition and multiplication by the usual trick, but when the system becomes too weak this is no longer possible) and the basic axioms by the obvious axioms defining exponentiation inductively from multiplication; then the system consists of the (enriched) basic axioms, plus Δ 0 1 comprehension, plus Δ 0 0 induction.
Over ACA 0 , each formula of second-order arithmetic is equivalent to a Σ 1 n or Π 1 n formula for all large enough n . The system Π 1 1 -comprehension is the system consisting of the basic axioms, plus the ordinary second-order induction axiom and the comprehension axiom for every ( boldface [ 11 ] ) Π 1 1 formula φ . This is equivalent to Σ 1 1 -comprehension (on the other hand, Δ 1 1 -comprehension, defined analogously to Δ 0 1 -comprehension, is weaker).
Projective determinacy is the assertion that every two-player perfect information game with moves being natural numbers, game length ω and projective payoff set is determined, that is, one of the players has a winning strategy. (The first player wins the game if the play belongs to the payoff set; otherwise, the second player wins.) A set is projective if and only if (as a predicate) it is expressible by a formula in the language of second-order arithmetic, allowing real numbers as parameters, so projective determinacy is expressible as a schema in the language of Z 2 .
Many natural propositions expressible in the language of second-order arithmetic are independent of Z 2 and even ZFC but are provable from projective determinacy. Examples include the coanalytic perfect subset property , measurability and the property of Baire for Σ 1 2 sets, Π 1 3 uniformization , etc. Over a weak base theory (such as RCA 0 ), projective determinacy implies comprehension and provides an essentially complete theory of second-order arithmetic; natural statements in the language of Z 2 that are independent of Z 2 with projective determinacy are hard to find. [ 12 ]
ZFC + {there are n Woodin cardinals : n is a natural number} is conservative over Z 2 with projective determinacy [ citation needed ] ; that is, a statement in the language of second-order arithmetic is provable in Z 2 with projective determinacy if and only if its translation into the language of set theory is provable in ZFC + {there are n Woodin cardinals: n ∈ N}.
Second-order arithmetic directly formalizes natural numbers and sets of natural numbers. However, it is able to formalize other mathematical objects indirectly via coding techniques, a fact that was first noticed by Weyl . [ 13 ] The integers , rational numbers , and real numbers can all be formalized in the subsystem RCA 0 , along with complete separable metric spaces and continuous functions between them. [ 14 ]
The research program of reverse mathematics uses these formalizations of mathematics in second-order arithmetic to study the set-existence axioms required to prove mathematical theorems. [ 15 ] For example, the intermediate value theorem for functions from the reals to the reals is provable in RCA 0 , [ 16 ] while the Bolzano – Weierstrass theorem is equivalent to ACA 0 over RCA 0 . [ 17 ]
The aforementioned coding works well for continuous and total functions, assuming a higher-order base theory plus weak Kőnig's lemma . [ 18 ] As perhaps expected, in the case of topology , coding is not without problems. [ 19 ] | https://en.wikipedia.org/wiki/Second-order_arithmetic |
In logic and mathematics , second-order logic is an extension of first-order logic , which itself is an extension of propositional logic . [ a ] Second-order logic is in turn extended by higher-order logic and type theory .
First-order logic quantifies only over variables that range over individuals (elements of the domain of discourse ); second-order logic, in addition, quantifies over relations . For example, the second-order sentence ∀ P ∀ x (P x ∨ ¬P x ) says that for every unary predicate P , and every individual x , either Px is true or not( Px ) is true (this is the law of excluded middle ). Second-order logic also includes quantification over sets , functions , and other variables (see section below ). Both first-order and second-order logic use the idea of a domain of discourse (often called simply the "domain" or the "universe"). The domain is a set over which individual elements may be quantified.
First-order logic can quantify over individuals, but not over properties. That is, we can take an atomic sentence like Cube( b ) and obtain a quantified sentence by replacing the name with a variable and attaching a quantifier: [ 1 ]
∃ x Cube( x )
However, we cannot do the same with the predicate. That is, the following expression:
∃ P P( b )
is not a sentence of first-order logic, but this is a legitimate sentence of second-order logic. Here, P is a predicate variable and is semantically a set of individuals. [ 1 ]
As a result, second-order logic has greater expressive power than first-order logic. For example, there is no way in first-order logic to identify the set of all cubes and tetrahedrons. But the existence of this set can be asserted in second-order logic as:
∃ P ∀ x (P x ↔ (Cube( x ) ∨ Tet( x ))).
We can then assert properties of this set. For instance, the following says that the set of all cubes and tetrahedrons does not contain any dodecahedrons:
∀ P (∀ x (P x ↔ (Cube( x ) ∨ Tet( x ))) → ¬∃ x (P x ∧ Dodec( x ))).
Second-order quantification is especially useful because it gives the ability to express reachability properties. For example, if Parent( x , y ) denotes that x is a parent of y , then first-order logic cannot express the property that x is an ancestor of y . In second-order logic we can express this by saying that every set of people containing y and closed under the Parent relation contains x :
∀ P ((P y ∧ ∀ a ∀ b ((P b ∧ Parent( a , b )) → P a )) → P x ).
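To make the set quantifier concrete, the following sketch (a finite toy domain with hypothetical data, not from the source) checks the ancestor property exactly as the second-order sentence states it: x is an ancestor of y just in case x belongs to every set that contains y and is closed under passing from a person to that person's parents.

```python
from itertools import chain, combinations

people = ["alice", "bob", "carol", "dave"]
# parent(a, b) means a is a parent of b (hypothetical data).
parent = {("alice", "bob"), ("bob", "carol")}

def subsets(domain):
    """All subsets of the domain: the range of the second-order variable P."""
    return chain.from_iterable(combinations(domain, r) for r in range(len(domain) + 1))

def closed_under_parent(P):
    # P is closed if whenever b is in P and a is a parent of b, then a is also in P.
    return all(a in P for (a, b) in parent if b in P)

def is_ancestor(x, y):
    # The second-order definition: x lies in every P that contains y and is closed under Parent.
    return all(x in P for P in map(set, subsets(people))
               if y in P and closed_under_parent(P))

print(is_ancestor("alice", "carol"))  # True  (alice -> bob -> carol)
print(is_ancestor("dave", "carol"))   # False
```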
It is notable that while we have variables for predicates in second-order logic, we don't have variables for properties of predicates. We cannot say, for example, that there is a property Shape( P ) that is true of the predicates Cube, Tet, and Dodec. This would require third-order logic . [ 2 ]
The syntax of second-order logic tells which expressions are well formed formulas . In addition to the syntax of first-order logic , second-order logic includes many new sorts (sometimes called types ) of variables: for each natural number k there is a sort of variables ranging over k -ary relations on individuals (the unary case amounts to variables ranging over sets of individuals), and a sort of variables ranging over k -ary functions from individuals to individuals.
Each of the variables just defined may be universally and/or existentially quantified over, to build up formulas. Thus there are many kinds of quantifiers, two for each sort of variables. A sentence in second-order logic, as in first-order logic, is a well-formed formula with no free variables (of any sort).
It's possible to forgo the introduction of function variables in the definition given above (and some authors do this) because an n -ary function variable can be represented by a relation variable of arity n +1 and an appropriate formula for the uniqueness of the "result" in the n +1 argument of the relation. (Shapiro 2000, p. 63)
Monadic second-order logic (MSO) is a restriction of second-order logic in which only quantification over unary relations (i.e. sets) is allowed. Quantification over functions, owing to the equivalence to relations as described above, is thus also not allowed. The second-order logic without these restrictions is sometimes called full second-order logic to distinguish it from the monadic version. Monadic second-order logic is particularly used in the context of Courcelle's theorem , an algorithmic meta-theorem in graph theory . The MSO theory of the complete infinite binary tree ( S2S ) is decidable. By contrast, full second-order logic over any infinite set (or MSO logic over, for example, ( N , +)) can interpret true second-order arithmetic .
Just as in first-order logic, second-order logic may include non-logical symbols in a particular second-order language. These are restricted, however, in that all terms that they form must be either first-order terms (which can be substituted for a first-order variable) or second-order terms (which can be substituted for a second-order variable of an appropriate sort).
A formula in second-order logic is said to be of first order (and sometimes denoted Σ 1 0 or Π 1 0 ) if its quantifiers (which may be universal or existential) range only over variables of first order, although it may have free variables of second order. A Σ 1 1 (existential second-order) formula is one additionally having some existential quantifiers over second-order variables, i.e. ∃ R 0 … ∃ R m φ, where φ is a first-order formula. The fragment of second-order logic consisting only of existential second-order formulas is called existential second-order logic and abbreviated as ESO, as Σ 1 1 , or even as ∃SO. The fragment of Π 1 1 formulas is defined dually; it is called universal second-order logic. More expressive fragments are defined for any k > 0 by mutual recursion: Σ 1 k+1 has the form ∃ R 0 … ∃ R m φ, where φ is a Π 1 k formula, and similarly, Π 1 k+1 has the form ∀ R 0 … ∀ R m φ, where φ is a Σ 1 k formula. (See analytical hierarchy for the analogous construction of second-order arithmetic .)
The semantics of second-order logic establish the meaning of each sentence. Unlike first-order logic, which has only one standard semantics, there are two different semantics that are commonly used for second-order logic: standard semantics and Henkin semantics . In each of these semantics, the interpretations of the first-order quantifiers and the logical connectives are the same as in first-order logic. Only the ranges of quantifiers over second-order variables differ in the two types of semantics. [ 3 ]
In standard semantics, also called full semantics, the quantifiers range over all sets or functions of the appropriate sort. A model with this condition is called a full model, and these are the same as models in which the range of the second-order quantifiers is the powerset of the model's first-order part. [ 3 ] Thus once the domain of the first-order variables is established, the meaning of the remaining quantifiers is fixed. It is these semantics that give second-order logic its expressive power, and they will be assumed for the remainder of this article.
Leon Henkin (1950) defined an alternative kind of semantics for second-order and higher-order theories, in which the meaning of the higher-order domains is partly determined by an explicit axiomatisation, drawing on type theory , of the properties of the sets or functions ranged over. Henkin semantics is a kind of many-sorted first-order semantics, where there are a class of models of the axioms, instead of the semantics being fixed to just the standard model as in the standard semantics. A model in Henkin semantics will provide a set of sets or set of functions as the interpretation of higher-order domains, which may be a proper subset of all sets or functions of that sort. For his axiomatisation, Henkin proved that Gödel's completeness theorem and compactness theorem , which hold for first-order logic, carry over to second-order logic with Henkin semantics. Since the Löwenheim–Skolem theorems also hold for Henkin semantics, Lindström's theorem implies that Henkin models are just disguised first-order models . [ 4 ]
For theories such as second-order arithmetic, the existence of non-standard interpretations of higher-order domains isn't just a deficiency of the particular axiomatisation derived from type theory that Henkin used, but a necessary consequence of Gödel's incompleteness theorem : Henkin's axioms can't be supplemented further to ensure the standard interpretation is the only possible model. Henkin semantics are commonly used in the study of second-order arithmetic .
Jouko Väänänen argued that the distinction between Henkin semantics and full semantics for second-order logic is analogous to the distinction between provability in ZFC and truth in V , in that the former obeys model-theoretic properties like the Löwenheim–Skolem theorem and compactness, and the latter has categoricity phenomena. [ 3 ] For example, "we cannot meaningfully ask whether the V as defined in ZFC is the real V . But if we reformalize ZFC inside ZFC , then we can note that the reformalized ZFC ... has countable models and hence cannot be categorical." [ citation needed ]
Second-order logic is more expressive than first-order logic. For example, if the domain is the set of all real numbers , one can assert in first-order logic the existence of an additive inverse of each real number by writing ∀ x ∃ y ( x + y = 0) but one needs second-order logic to assert the least-upper-bound property for sets of real numbers, which states that every bounded, nonempty set of real numbers has a supremum . If the domain is the set of all real numbers, the following second-order sentence (split over two lines) expresses the least upper bound property:
∀ A ((∃ w ( w ∈ A ) ∧ ∃ z ∀ w ( w ∈ A → w ≤ z ))
→ ∃ x (∀ w ( w ∈ A → w ≤ x ) ∧ ∀ y (∀ w ( w ∈ A → w ≤ y ) → x ≤ y )))
This formula is a direct formalization of "every nonempty , bounded set A has a least upper bound ." It can be shown that any ordered field that satisfies this property is isomorphic to the real number field. On the other hand, the set of first-order sentences valid in the reals has arbitrarily large models due to the compactness theorem. Thus the least-upper-bound property cannot be expressed by any set of sentences in first-order logic. (In fact, every real-closed field satisfies the same first-order sentences in the signature ⟨ + , ⋅ , ≤ ⟩ {\displaystyle \langle +,\cdot ,\leq \rangle } as the real numbers.)
In second-order logic, it is possible to write formal sentences that say "the domain is finite " or "the domain is of countable cardinality ." To say that the domain is finite, use the sentence that says that every surjective function from the domain to itself is injective . To say that the domain has countable cardinality, use the sentence that says that there is a bijection between every two infinite subsets of the domain. It follows from the compactness theorem and the upward Löwenheim–Skolem theorem that it is not possible to characterize finiteness or countability, respectively, in first-order logic.
Certain fragments of second-order logic like ESO are also more expressive than first-order logic even though they are strictly less expressive than the full second-order logic. ESO also enjoys translation equivalence with some extensions of first-order logic that allow non-linear ordering of quantifier dependencies, like first-order logic extended with Henkin quantifiers , Hintikka and Sandu's independence-friendly logic , and Väänänen's dependence logic .
A deductive system for a logic is a set of inference rules and logical axioms that determine which sequences of formulas constitute valid proofs. Several deductive systems can be used for second-order logic, although none can be complete for the standard semantics (see below). Each of these systems is sound , which means any sentence they can be used to prove is logically valid in the appropriate semantics.
The weakest deductive system that can be used consists of a standard deductive system for first-order logic (such as natural deduction ) augmented with substitution rules for second-order terms. [ b ] This deductive system is commonly used in the study of second-order arithmetic .
The deductive systems considered by Shapiro (2000) and Henkin (1950) add to the augmented first-order deductive scheme both comprehension axioms and choice axioms. These axioms are sound for standard second-order semantics. They are sound for Henkin semantics restricted to Henkin models satisfying the comprehension and choice axioms. [ c ]
One might attempt to reduce the second-order theory of the real numbers, with full second-order semantics, to the first-order theory in the following way. First expand the domain from the set of all real numbers to a two-sorted domain, with the second sort containing all sets of real numbers. Add a new binary predicate to the language: the membership relation. Then sentences that were second-order become first-order, with the formerly second-order quantifiers ranging over the second sort instead. This reduction can be attempted in a one-sorted theory by adding unary predicates that tell whether an element is a number or a set, and taking the domain to be the union of the set of real numbers and the power set of the real numbers.
But notice that the domain was asserted to include all sets of real numbers. That requirement cannot be reduced to a first-order sentence, as the Löwenheim–Skolem theorem shows. That theorem implies that there is some countably infinite subset of the real numbers, whose members we will call internal numbers , and some countably infinite collection of sets of internal numbers, whose members we will call "internal sets", such that the domain consisting of internal numbers and internal sets satisfies exactly the same first-order sentences as are satisfied by the domain of real numbers and sets of real numbers. In particular, it satisfies a sort of least-upper-bound axiom that says, in effect:
Every nonempty internal set that has an internal upper bound has a least internal upper bound.
Countability of the set of all internal numbers (in conjunction with the fact that those form a densely ordered set) implies that that set does not satisfy the full least-upper-bound axiom. Countability of the set of all internal sets implies that it is not the set of all subsets of the set of all internal numbers (since Cantor's theorem implies that the set of all subsets of a countably infinite set is an uncountably infinite set). This construction is closely related to Skolem's paradox .
Thus the first-order theory of real numbers and sets of real numbers has many models, some of which are countable. The second-order theory of the real numbers has only one model, however.
This follows from the classical theorem that there is only one Archimedean complete ordered field , along with the fact that all the axioms of an Archimedean complete ordered field are expressible in second-order logic. This shows that the second-order theory of the real numbers cannot be reduced to a first-order theory, in the sense that the second-order theory of the real numbers has only one model but the corresponding first-order theory has many models.
There are more extreme examples showing that second-order logic with standard semantics is more expressive than first-order logic. There is a finite second-order theory whose only model is the real numbers if the continuum hypothesis holds and that has no model if the continuum hypothesis does not hold. [ 5 ] This theory consists of a finite theory characterizing the real numbers as a complete Archimedean ordered field plus an axiom saying that the domain is of the first uncountable cardinality. This example illustrates that the question of whether a sentence in second-order logic is consistent is extremely subtle.
Additional limitations of second-order logic are described in the next section.
It is a corollary of Gödel's incompleteness theorem that there is no deductive system (that is, no notion of provability ) for second-order formulas that simultaneously satisfies these three desired attributes: [ d ] ( soundness ) every provable second-order sentence is universally valid, i.e., true in all domains under standard semantics; ( completeness ) every universally valid second-order formula, under standard semantics, is provable; and ( effectiveness ) there is a proof-checking algorithm that can correctly decide whether a given sequence of symbols is a proof or not.
This corollary is sometimes expressed by saying that second-order logic does not admit a complete proof theory . In this respect second-order logic with standard semantics differs from first-order logic; Quine pointed to the lack of a complete proof system as a reason for thinking of second-order logic as not logic , properly speaking. [ 6 ]
As mentioned above, Henkin proved that the standard deductive system for first-order logic is sound, complete, and effective for second-order logic with Henkin semantics , and the deductive system with comprehension and choice principles is sound, complete, and effective for Henkin semantics using only models that satisfy these principles.
The compactness theorem and the Löwenheim–Skolem theorem do not hold for full models of second-order logic. They do hold however for Henkin models. [ 7 ]
Predicate logic was introduced to the mathematical community by C. S. Peirce , who coined the term second-order logic and whose notation is most similar to the modern form (Putnam 1982). However, today most students of logic are more familiar with the works of Frege , who published his work several years prior to Peirce but whose works remained less known until Bertrand Russell and Alfred North Whitehead made them famous. Frege used different variables to distinguish quantification over objects from quantification over properties and sets; but he did not see himself as doing two different kinds of logic. After the discovery of Russell's paradox it was realized that something was wrong with Frege's system. Eventually logicians found that restricting Frege's logic in various ways—to what is now called first-order logic —eliminated this problem: sets and properties cannot be quantified over in first-order logic alone. The now-standard hierarchy of orders of logics dates from this time.
It was found that set theory could be formulated as an axiomatized system within the apparatus of first-order logic (at the cost of several kinds of completeness , but nothing so bad as Russell's paradox), and this was done (see Zermelo–Fraenkel set theory ), as sets are vital for mathematics . Arithmetic , mereology , and a variety of other powerful logical theories could be formulated axiomatically without appeal to any more logical apparatus than first-order quantification, and this, along with Gödel and Skolem 's adherence to first-order logic, led to a general decline in work in second (or any higher) order logic. [ citation needed ]
This rejection was actively advanced by some logicians, most notably W. V. Quine . Quine advanced the view [ citation needed ] that in predicate-language sentences like Fx the " x " is to be thought of as a variable or name denoting an object and hence can be quantified over, as in "For all things, it is the case that . . ." but the " F " is to be thought of as an abbreviation for an incomplete sentence, not the name of an object (not even of an abstract object like a property). For example, it might mean " . . . is a dog." But it makes no sense to think we can quantify over something like this. (Such a position is quite consistent with Frege's own arguments on the concept-object distinction). So to use a predicate as a variable is to have it occupy the place of a name, which only individual variables should occupy. This reasoning has been rejected by George Boolos . [ citation needed ]
In recent years [ when? ] second-order logic has made something of a recovery, buoyed by Boolos' interpretation of second-order quantification as plural quantification over the same domain of objects as first-order quantification (Boolos 1984). Boolos furthermore points to the claimed nonfirstorderizability of sentences such as "Some critics admire only each other" and "Some of Fianchetto's men went into the warehouse unaccompanied by anyone else", which he argues can only be expressed by the full force of second-order quantification. However, generalized quantification and partially ordered (or branching) quantification may suffice to express a certain class of purportedly nonfirstorderizable sentences as well and these do not appeal to second-order quantification.
The expressive power of various forms of second-order logic on finite structures is intimately tied to computational complexity theory . The field of descriptive complexity studies which computational complexity classes can be characterized by the power of the logic needed to express languages (sets of finite strings) in them. A string w = w 1 ··· w n in a finite alphabet A can be represented by a finite structure with domain D = {1,..., n }, unary predicates P a for each a ∈ A , satisfied by those indices i such that w i = a , and additional predicates that serve to uniquely identify which index is which (typically, one takes the graph of the successor function on D or the order relation <, possibly with other arithmetic predicates). Conversely, the Cayley tables of any finite structure (over a finite signature ) can be encoded by a finite string.
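The encoding just described can be sketched directly (an illustration with names chosen by the editor): the domain is the set of positions of the word, each letter contributes a unary predicate holding at the positions that carry it, and the order on positions plays the role of the arithmetic predicate.

```python
def string_to_structure(w, alphabet):
    """Encode the word w as a finite structure: domain {1,...,n}, a unary predicate
    P[a] for each letter a (the set of positions carrying a), and the order relation <."""
    n = len(w)
    domain = list(range(1, n + 1))
    P = {a: {i for i in domain if w[i - 1] == a} for a in alphabet}
    less_than = {(i, j) for i in domain for j in domain if i < j}
    return domain, P, less_than

domain, P, less_than = string_to_structure("abba", {"a", "b"})
print(domain)          # [1, 2, 3, 4]
print(P["a"], P["b"])  # {1, 4} {2, 3}
# A first-order query over this structure: is there a position labelled 'b'
# that precedes a position labelled 'a'?
print(any((i, j) in less_than for i in P["b"] for j in P["a"]))  # True
```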
This identification leads to the following characterizations of variants of second-order logic over finite structures: existential second-order logic (ESO) captures exactly NP ( Fagin's theorem ), universal second-order logic captures co-NP , full second-order logic captures the polynomial hierarchy PH, second-order logic extended with a transitive closure operator captures PSPACE , and second-order logic extended with a least fixed point operator captures EXPTIME .
Relationships among these classes directly impact the relative expressiveness of the logics over finite structures; for example, if PH = PSPACE , then adding a transitive closure operator to second-order logic would not make it any more expressive over finite structures. | https://en.wikipedia.org/wiki/Second-order_logic |
In applied mathematics, the Johnson bound (named after Selmer Martin Johnson ) is a limit on the size of error-correcting codes , as used in coding theory for data transmission or communications.
Let C be a q -ary code of length n , i.e. a subset of F q n . Let d be the minimum distance of C , i.e.
d = min { d ( x , y ) : x , y ∈ C , x ≠ y },
where d ( x , y ) is the Hamming distance between x and y .
Let C q ( n , d ) be the set of all q -ary codes with length n and minimum distance d , and let C q ( n , d , w ) denote the set of codes in C q ( n , d ) such that every element has exactly w nonzero entries.
Denote by | C | the number of elements in C . Then, we define A q ( n , d ) to be the largest size of a code with length n and minimum distance d :
A q ( n , d ) = max { | C | : C ∈ C q ( n , d ) }.
Similarly, we define A q ( n , d , w ) to be the largest size of a code in C q ( n , d , w ):
A q ( n , d , w ) = max { | C | : C ∈ C q ( n , d , w ) }.
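As a concrete illustration of these definitions (a brute-force sketch, not part of the Johnson bound itself), the following code computes the minimum distance of a code and the value A q ( n , d ) for very small parameters by trying every subset of F q n ; it uses the common convention that the code's minimum distance must be at least d .

```python
from itertools import product, combinations

def hamming(x, y):
    """Hamming distance between two words (tuples over the alphabet {0,...,q-1})."""
    return sum(a != b for a, b in zip(x, y))

def min_distance(C):
    """Minimum distance of a code C (needs at least two codewords)."""
    return min(hamming(x, y) for x, y in combinations(C, 2))

def A(q, n, d):
    """Brute-force A_q(n, d): largest code of length n with minimum distance >= d.
    Exponential in q**n, so only usable for tiny parameters."""
    words = list(product(range(q), repeat=n))
    for size in range(len(words), 1, -1):
        for C in combinations(words, size):
            if min_distance(C) >= d:
                return size
    return 1  # a single codeword always works; the distance condition is vacuous

print(A(2, 3, 2))  # 4, e.g. the even-weight code {000, 011, 101, 110}
print(A(2, 3, 3))  # 2, e.g. the repetition code {000, 111}
```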
Theorem 1 (Johnson bound for A q ( n , d )):
If d = 2 t + 1,
If d = 2 t + 2,
Theorem 2 (Johnson bound for A q ( n , d , w )):
(i) If d > 2 w ,
(ii) If d ≤ 2 w , then define the variable e as follows. If d is even, then define e through the relation d = 2 e ; if d is odd, define e through the relation d = 2 e − 1. Let q ∗ = q − 1. Then,
where ⌊ ⌋ is the floor function .
Remark: Plugging the bound of Theorem 2 into the bound of Theorem 1 produces a numerical upper bound on A q ( n , d ). | https://en.wikipedia.org/wiki/Second_Johnson_bound |
Second Reality (originally titled Unreal ][ - The 2nd Reality ) is an IBM PC compatible demo created by the Finnish demogroup Future Crew . It debuted at the Assembly 1993 demoparty on July 30, 1993, [ 1 ] where it was entered into the PC demo competition , and finished in first place with its demonstration of 2D and 3D computer graphics rendering . [ 2 ] The demo was released to the public in October 1993. It is considered to be one of the best demos created during the early 1990s on the PC; in 1999 Slashdot voted it one of the "Top 10 Hacks of All Time". [ 3 ] Its source code was released in a GitHub repository as public domain software using the Unlicense [ 4 ] on the 20th anniversary of the release on 1 August 2013. [ 5 ]
Many techniques used by other demos, including Future Crew's own earlier work, were refined and reused in Second Reality. The demo had a soundtrack of Techno music composed by Skaven and Purple Motion using ScreamTracker 3 . The degree of synchronization of the visuals with the music was highly impressive for its time.
First, the introduction plays, demonstrating text rendering on a background. After that is done, several ships appear and fly away from the camera, demonstrating 3D rendering. After some distance the ships disappear, sending out a shockwave (reminiscent of the Praxis explosion effect seen in the film Star Trek VI: The Undiscovered Country ). The screen fades to display a rendition of Wendigo , at which point Purple Motion 's main musical score for the demo begins. The image then flattens and falls horizontally to become a 3D, polygonal checkerboard.
The music has now finished its introductory notes at this point and the first melody starts. Next, a glenz (additively blended) polyhedron appears and bounces on the checkered surface, in perfect timing with the orchestra hits in the score, demonstrating 3D rendering and real-time mesh deformation. After a while, another larger polyhedron appears and the smaller polyhedron begins bouncing inside the larger.
The next scene is a winding, fluid tunnel built up by discrete points that move towards the camera. This creates a feeling of rushing through the tunnel for the viewer.
The tunnel fades out into some oscillating circles which soon fade into the next scene.
A scene that could be described as a light show. The scenes consist of multiple moiré patterns interacting. Moiré patterns were quite popular in demos of that time.
Next an image of Ulik rolls in from the right, and fades away. Some leaves and water are displayed, along with text characters floating downstream. The text says "Another way to scroll" and is an example of a scroller , which was present in most demos of the time.
After the text has floated by, again the scene changes to display a demonic-like human head (inspired by the head in How to Draw Comics the Marvel Way [ 6 ] ) with a pentagram engraved on his forehead. A sphere comes down from the top left corner simulating the below surface being refracted through a magnifying sphere. This is where the soundtrack utters the cult phrase " I am not an atomic playboy ", quoting Vice Admiral William H.P. Blandy's remarks before the Bikini nuclear test . The sphere vanishes down in the lower right corner and the camera begins to spin while zooming in and out to reveal a repeating pattern of heads, demonstrating a technique known as rotozooming . The camera then falls down and bounces back up on the surface twice, after which the scene again fades out.
When the image fades in, the camera is placed close to a surface changing texture every time. This is a continuation of their work in Unreal where they first introduced the 'unreal' plasma effect .
After a few surfaces have been displayed, a cube with animated surfaces attached comes up and spins around while translating towards and away from the camera.
After a while, this scene fades and many small vector balls fall onto the screen and begin bouncing off the ground and morphing into various spiral patterns. Because of a bug, this part will crash if the demo is installed in a directory with the complete path length exceeding 30 characters.
Again there is a fade out and a fade in, this time we are looking at a scene with two spheres, the words "Ten seconds to transmission" are spoken (sampled from the 1989 movie Batman ), [ 7 ] and a sword starts translating towards the camera. The spheres will display a reflection of the sword as well as a reflection of the aforementioned reflection in the other sphere. The scene was rendered using Future Crew's homemade ray tracing software. [ citation needed ]
As the scene changes again, this time the image rendered will be of a surface changing shape, similar to that of water. This scene is rendered using a raycasting landscape rendering technique.
After this, an image will fall in from above, depicting a rider on what appears to be yet another fantasy creature. The image will hit the ground and bounce up while behaving like jelly .
The image filename is "ICEKNGDM.LBM" ("Ice Kingdom" Interleaved Bitmap ); Future Crew call the image "Ice Kingdom", [ 8 ] and it is an artistic rendering created by a member of Future Crew, but based [ 9 ] [ 10 ] on a painting used in a Rumple Minze alcohol advertisement from the early 1990s. [ 11 ]
In the next scene, a craft reminiscent of the TIE/Advanced fighter from Star Wars: A New Hope flies around in a large 3D city, leaving it and heading up right over the text "Future Crew". This was later redone by some of the previous members of Future Crew working for Remedy Entertainment as part of the benchmarking demo Final Reality . Flat shading is used for the buildings and Gouraud shading for the smooth trees and lettering at the end.
The image fades out and the final scene fades in, an image of two nuts with the text "Future Crew" written on them.
The demo can be started with a single character command line argument "2" through "5" to start from any of the later four parts.
For another part that its introductory text calls "just an experiment" start the demo with a command line argument of "u". The screen will start filling with ever more stars warping towards the screen.
In 2013, a reverse engineering analysis of SR with the now-available source code revealed a design which is built around two characteristic demoscene paradigms: teamwork and obfuscation . [ 12 ]
Internally, the demo consists of 23 separated parts which allowed independent, parallel development and free selection of programming language ( assembly , C and Turbo Pascal ) and development tools. [ 13 ]
The analysis of the source code also disproved the long-standing and popular speculation that SR uses its own memory manager that accesses the MMU directly; in fact, SR uses standard DOS memory management functions. [ 14 ]
The demo runs best on an Intel 80486 PC with a Gravis Ultrasound or a Sound Blaster Pro (or register-compatible clone). In the original version which was released, the demo had a bug which caused a slow down. A patch was later released to rectify this problem. [ 15 ]
While the demo code remains freely available on numerous Internet sites and is now hosted on GitHub , it is difficult or impossible to run Second Reality directly on a modern PC. The demo accesses video and sound hardware directly (using its own built-in device drivers ) which is incompatible with modern operating systems , and many of the timings in the demo do not scale up to modern CPU speeds. Therefore, the only way to run the demo on a modern PC with few glitches is by running it under an emulator such as DOSBox . [ 16 ] DOSBox is capable of emulating the exotic video modes and the Gravis Ultrasound card preferred by Second Reality, and can be configured to the 33 MHz recommended on the demo's configuration screen for optimal viewing. | https://en.wikipedia.org/wiki/Second_Reality |
The second battle of Tembien was fought on the northern front of the Second Italo-Ethiopian War . This battle consisted of attacks by Italian forces under Marshal Pietro Badoglio on Ethiopian forces under Ras [ nb 1 ] Kassa Haile Darge and Ras Seyoum Mangasha . This battle, which resulted in a decisive defeat of Ethiopian forces, was primarily fought in the area around the Tembien Province . The battle is notable for the large-scale use of mustard gas by the Italians.
On 3 October 1935, General Emilio De Bono advanced into Ethiopia from Eritrea without a declaration of war . De Bono advanced towards Addis Ababa with a force of approximately 100,000 Italian soldiers and 25,000 Eritreans . In December, after a brief period of inactivity and minor setbacks for the Italians, De Bono was replaced by Badoglio . [ 1 ] Ethiopian Emperor Haile Selassie launched the Christmas Offensive late in the year to test Badoglio. Though initially successful, the offensive had overly ambitious goals.
As the progress of the Christmas Offensive slowed, Italian plans to renew the advance on the northern front got under way. In addition to being granted permission to use poison gas, Badoglio received additional ground forces. Elements of the Italian III Corps and the Italian IV Corps arrived in Eritrea during early 1936. [ 2 ] By mid-January 1936, Badoglio was ready to renew the Italian advance. In response to Mussolini's frequent exhortations, Badoglio cabled: "It has always been my rule to be meticulous in preparation so that I may be swift in action." [ 3 ] [ 4 ]
In early January 1936 Ethiopian forces were in the hills overlooking the Italian positions and launching attacks against them on a regular basis. The Ethiopians facing the Italians were in three groups. In the center, near Abbi Addi and along the Beles River in the Tembien, were Ras Kassa with approximately 40,000 men and Ras Seyoum with about 30,000 men. On the Ethiopian right was Ras Mulugeta Yeggazu and his army of approximately 80,000 men in positions atop Amba Aradam . Ras Imru Haile Selassie with approximately 40,000 men was on the Ethiopian left in the area around Seleclaca in Shire Province. [ 5 ] Only a minority of the Ethiopian soldiers had received military training, there were few modern weapons and less than one rifle per man. [ 6 ]
Badoglio had five army corps at his disposal. On his right, he had the Italian IV Corps and the Italian II Corps facing Ras Imru in the Shire. In the Italian center was the Eritrean Corps facing Ras Kassa and Ras Seyoum in the Tembien. Facing Ras Mulugeta dug into Amba Aradam was the Italian I Corps and III Corps. [ 5 ] Italian dictator Benito Mussolini was impatient for an Italian offensive to get under way. [ 7 ]
Initially, Badoglio saw the destruction of Ras Mulugeta's army as his first priority. This force would have to be dislodged from its strong positions on Amba Aradam in order for the Italians to continue the advance towards Addis Ababa. But Ras Kassa and Ras Seyoum were exerting such pressure from the Tembien that Badoglio decided he would have to deal with them first. If the Ethiopian center were to advance successfully, the I Corps and III Corps facing Ras Mulugeta would be cut off from reinforcement and resupply. [ 7 ] From 20 January to 24 January, the first battle of Tembien was fought. It was fiercely contested, with the Ethiopians cutting off the Italian 1st CC.NN. Division "23 Marzo" for several days and Badoglio drawing up contingency plans for withdrawing the entire army. Eventually, Italian pressure and the large-scale use of mustard gas told, and the threat Ras Kassa posed to the I Corps and III Corps was neutralized. [ 7 ]
From 10 to 19 February, Badoglio attacked the army of Ras Mulugeta, dug in on Amba Aradam, during the Battle of Enderta . The Italians made good use of their artillery and aerial superiority, and again made heavy use of mustard gas. Ras Mulugeta was killed, and his army collapsed and was destroyed as a fighting force in the ensuing rout. With this completed, Badoglio turned back to the center to complete what he had started with the first battle of Tembien. He would leave the army of Ras Imru Haile Selassie for another day. [ 8 ]
Badoglio now had three times the men fielded by the three remaining Ethiopian armies; extra divisions had arrived in Eritrea and the network of roads he needed to guarantee resupply had been all but completed. Even so, Badoglio stockpiled 48,000 shells and 7 million rounds of ammunition in forward areas before he started the attack. [ 8 ]
Badoglio planned to send the III Corps towards Gaela to cut off the main line of withdrawal for Ras Kassa. After the III Corps had established itself across the roads running south from the Abbi Addi region, the Eritrean Corps would advance south from the Worsege (Italian: Uarieu ) and Ab'aro passes. These moves by the III Corps and the Eritrean Corps would place the armies of Ras Kassa and Ras Seyoum in a trap. [ 8 ] It is possible that Ras Kassa anticipated Badoglio's plan. He sent a wireless message to Emperor Haile Selassie requesting permission to withdraw from the Tembien. The request was superfluous; Selassie had already indicated that Ras Kassa should fall back towards Amba Aradam and link up with the remnants of Ras Mulugeta's army. [ 8 ]
In accordance with Badoglio's plan, the Eritrean Corps advanced from the mountain passes and the III Corps moved up from the Geba Valley. The second battle of the Tembien was fought on terrain which favoured the defence. It was a region of forests, ravines, and torrents where the Italians were unable to deploy artillery properly or use armoured vehicles. However, the Ethiopian soldiers of Ras Seyoum failed to take full advantage of the terrain. [ 8 ]
The right wing of the Ethiopian armies rested on Uork Amba (Amba Work, the "mountain of gold"), where the Ethiopians established a strong point. The mountain blocked the road to Abbi Addi on which the Eritrean Corps and the III Corps planned to converge. One hundred and fifty Alpini and Blackshirt commandos were ordered to capture it under cover of darkness. Armed with grenades and knives, the commandos found the Ethiopians on the summit unprepared when they scaled the peak. [ 8 ]
Early on the morning of 27 February, the army of Ras Seyoum was drawn up in battle array in front of Abbi Addi. Heralded by the wail of battle horns and the roll of the war drums ( negarait ), a large force of Ethiopians left the shelter of the woods covering Debra Ansa to attack the Italians in the open. From 8:00 am to 4:00 pm, wave after wave of Ethiopians attempted to break through or get around the positions established by the Alpini and the Blackshirts of the Eritrean columns. Armed for the most part with swords and clubs, the attackers were mowed down and turned back by concentrated machine-gun fire. [ 9 ] As the attacks wavered, the Italian commander counterattacked . Ras Seyoum decided that his men could take no more. His army left more than one thousand dead on the battlefield as it fled. [ 9 ]
With his right flank in the air, Ras Seyoum ordered his army to pull back to the Tekezé fords. But, as his men straggled back along the one road open to them, they were bombed repeatedly. The rocky ravine where they were to cross the river turned out to be a bottleneck. The Italian bombers focused on the concentrated solid mass of defeated Ethiopians and soon the area was turned into a charnel house . [ 9 ]
Meanwhile, Ras Kassa and his army on Debra Amba had not yet seen action. Ras Kassa now decided to do what the Emperor had indicated and started to withdraw his army towards Amba Aradam. His army in turn was heavily bombed. [ 9 ]
On 29 February, the III Corps and the Eritrean Corps linked up about three miles west of Abbi Addi and the trap was complete. Even so, a large portion of both Ethiopian armies managed to escape Badoglio's dragnet. However, the men who escaped were demoralized and had little or no equipment. By the time Ras Kassa and Ras Seyoum reached Haile Selassie's headquarters at Quorom two weeks later, they were accompanied by little more than the men of their personal bodyguards. [ 9 ]
Writing as a correspondent at Italian Military Headquarters, Herbert L. Matthews of the New York Times , cabled the following to his paper:
Ras Kassa's army in the Tembien region of Ethiopia, northwest of Makale, has been destroyed. He himself is fleeing for his life with a few followers. Now between the Italian forces and Addis Ababa all Northern Ethiopia lies open and almost defenseless. Only Emperor Haile Selassie's private army can offer resistance, and it is not expected to be serious. [ 10 ]
A United Press correspondent wrote:
Using his entire northern army of 300,000, Badoglio shattered the armies of Ras Kassa and Ras Seyoum... The Victory saw Fascist legions occupy strategic Golden Mountain [Amba Work], giving Badoglio control of northern Ethiopia. [ 10 ]
Ras Mulugeta was dead. Ras Kassa and Ras Seyoum were beaten. All three armies commanded by these men had been effectively destroyed. Only one of the four main northern armies remained intact. Badoglio now turned his attention towards Ras Imru and his forces in the Shire. Both Ras Kassa and Ras Seyoum were present at Maychew , the final battle of the war. [ 11 ] | https://en.wikipedia.org/wiki/Second_battle_of_Tembien |
In calculus , the second derivative , or the second-order derivative , of a function f is the derivative of the derivative of f . Informally, the second derivative can be phrased as "the rate of change of the rate of change"; for example, the second derivative of the position of an object with respect to time is the instantaneous acceleration of the object, or the rate at which the velocity of the object is changing with respect to time. In Leibniz notation : a = d v d t = d 2 x d t 2 , {\displaystyle a={\frac {dv}{dt}}={\frac {d^{2}x}{dt^{2}}},} where a is acceleration, v is velocity, t is time, x is position, and d is the instantaneous "delta" or change. The last expression d 2 x d t 2 {\displaystyle {\tfrac {d^{2}x}{dt^{2}}}} is the second derivative of position ( x ) with respect to time.
On the graph of a function , the second derivative corresponds to the curvature or concavity of the graph. The graph of a function with a positive second derivative is upwardly concave, while the graph of a function with a negative second derivative curves in the opposite way.
The power rule for the first derivative, if applied twice, will produce the second derivative power rule as follows: d 2 d x 2 x n = d d x d d x x n = d d x ( n x n − 1 ) = n d d x x n − 1 = n ( n − 1 ) x n − 2 . {\displaystyle {\frac {d^{2}}{dx^{2}}}x^{n}={\frac {d}{dx}}{\frac {d}{dx}}x^{n}={\frac {d}{dx}}\left(nx^{n-1}\right)=n{\frac {d}{dx}}x^{n-1}=n(n-1)x^{n-2}.}
The second derivative of a function f ( x ) {\displaystyle f(x)} is usually denoted f ″ ( x ) {\displaystyle f''(x)} . [ 1 ] [ 2 ] That is: f ″ = ( f ′ ) ′ {\displaystyle f''=\left(f'\right)'} When using Leibniz's notation for derivatives, the second derivative of a dependent variable y with respect to an independent variable x is written d 2 y d x 2 . {\displaystyle {\frac {d^{2}y}{dx^{2}}}.} This notation is derived from the following formula: d 2 y d x 2 = d d x ( d y d x ) . {\displaystyle {\frac {d^{2}y}{dx^{2}}}\,=\,{\frac {d}{dx}}\left({\frac {dy}{dx}}\right).}
Given the function f ( x ) = x 3 , {\displaystyle f(x)=x^{3},} the derivative of f is the function f ′ ( x ) = 3 x 2 . {\displaystyle f'(x)=3x^{2}.} The second derivative of f is the derivative of f ′ {\displaystyle f'} , namely f ″ ( x ) = 6 x . {\displaystyle f''(x)=6x.}
The second derivative of a function f can be used to determine the concavity of the graph of f . [ 2 ] A function whose second derivative is positive is said to be concave up (also referred to as convex), meaning that the tangent line near the point where it touches the function will lie below the graph of the function. Similarly, a function whose second derivative is negative will be concave down (sometimes simply called concave), and its tangent line will lie above the graph of the function near the point of contact.
If the second derivative of a function changes sign, the graph of the function will switch from concave down to concave up, or vice versa. A point where this occurs is called an inflection point . Assuming the second derivative is continuous, it must take a value of zero at any inflection point, although not every point where the second derivative is zero is necessarily a point of inflection.
The relation between the second derivative and the graph can be used to test whether a stationary point for a function (i.e., a point where f ′ ( x ) = 0 {\displaystyle f'(x)=0} ) is a local maximum or a local minimum . Specifically, if f ″ ( x ) < 0 {\displaystyle f''(x)<0} the stationary point is a local maximum; if f ″ ( x ) > 0 {\displaystyle f''(x)>0} it is a local minimum; and if f ″ ( x ) = 0 {\displaystyle f''(x)=0} the test is inconclusive.
The reason the second derivative produces these results can be seen by way of a real-world analogy. Consider a vehicle that at first is moving forward at a great velocity, but with a negative acceleration. Clearly, the position of the vehicle at the point where the velocity reaches zero will be the maximum distance from the starting position – after this time, the velocity will become negative and the vehicle will reverse. The same is true for the minimum, with a vehicle that at first has a very negative velocity but positive acceleration.
It is possible to write a single limit for the second derivative: f ″ ( x ) = lim h → 0 f ( x + h ) − 2 f ( x ) + f ( x − h ) h 2 . {\displaystyle f''(x)=\lim _{h\to 0}{\frac {f(x+h)-2f(x)+f(x-h)}{h^{2}}}.}
The limit is called the second symmetric derivative . [ 3 ] [ 4 ] The second symmetric derivative may exist even when the (usual) second derivative does not.
The expression on the right can be written as a difference quotient of difference quotients: f ( x + h ) − 2 f ( x ) + f ( x − h ) h 2 = f ( x + h ) − f ( x ) h − f ( x ) − f ( x − h ) h h . {\displaystyle {\frac {f(x+h)-2f(x)+f(x-h)}{h^{2}}}={\frac {{\dfrac {f(x+h)-f(x)}{h}}-{\dfrac {f(x)-f(x-h)}{h}}}{h}}.} This limit can be viewed as a continuous version of the second difference for sequences .
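As a brief numerical sketch (not part of the original discussion; the function, evaluation point, and step sizes are arbitrary choices), the symmetric difference quotient above can be evaluated directly and compared with the exact second derivative:

```python
# Approximate f''(x) by the symmetric second difference
# (f(x+h) - 2 f(x) + f(x-h)) / h**2 and compare with the exact value.
import math

def second_difference(f, x, h):
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

f = math.sin                 # exact second derivative is -sin(x)
x = 1.0
for h in (1e-1, 1e-2, 1e-3):
    approx = second_difference(f, x, h)
    print(f"h = {h:g}: approx = {approx:.8f}, exact = {-math.sin(x):.8f}")
```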
However, the existence of the above limit does not mean that the function f {\displaystyle f} has a second derivative. The limit above just gives a possibility for calculating the second derivative—but does not provide a definition. A counterexample is the sign function sgn ( x ) {\displaystyle \operatorname {sgn}(x)} , which is defined as: sgn ( x ) = { − 1 if x < 0 , 0 if x = 0 , 1 if x > 0. {\displaystyle \operatorname {sgn}(x)={\begin{cases}-1&{\text{if }}x<0,\\0&{\text{if }}x=0,\\1&{\text{if }}x>0.\end{cases}}}
The sign function is not continuous at zero, and therefore the second derivative for x = 0 {\displaystyle x=0} does not exist. But the above limit exists for x = 0 {\displaystyle x=0} : lim h → 0 sgn ( 0 + h ) − 2 sgn ( 0 ) + sgn ( 0 − h ) h 2 = lim h → 0 sgn ( h ) − 2 ⋅ 0 + sgn ( − h ) h 2 = lim h → 0 sgn ( h ) + ( − sgn ( h ) ) h 2 = lim h → 0 0 h 2 = 0. {\displaystyle {\begin{aligned}\lim _{h\to 0}{\frac {\operatorname {sgn}(0+h)-2\operatorname {sgn}(0)+\operatorname {sgn}(0-h)}{h^{2}}}&=\lim _{h\to 0}{\frac {\operatorname {sgn}(h)-2\cdot 0+\operatorname {sgn}(-h)}{h^{2}}}\\&=\lim _{h\to 0}{\frac {\operatorname {sgn}(h)+(-\operatorname {sgn}(h))}{h^{2}}}=\lim _{h\to 0}{\frac {0}{h^{2}}}=0.\end{aligned}}}
Just as the first derivative is related to linear approximations , the second derivative is related to the best quadratic approximation for a function f . This is the quadratic function whose first and second derivatives are the same as those of f at a given point. The formula for the best quadratic approximation to a function f around the point x = a is f ( x ) ≈ f ( a ) + f ′ ( a ) ( x − a ) + 1 2 f ″ ( a ) ( x − a ) 2 . {\displaystyle f(x)\approx f(a)+f'(a)(x-a)+{\tfrac {1}{2}}f''(a)(x-a)^{2}.} This quadratic approximation is the second-order Taylor polynomial for the function centered at x = a .
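A small illustrative sketch (the function and expansion point are arbitrary choices) of the best quadratic approximation described above:

```python
# Compare f(x) = exp(x) with its second-order Taylor polynomial about a = 0,
# f(a) + f'(a)(x - a) + (1/2) f''(a)(x - a)**2, which here is 1 + x + x**2/2.
import math

def quadratic_approximation(x, a=0.0):
    fa = math.exp(a)          # for exp, f(a) = f'(a) = f''(a) = e**a
    return fa + fa * (x - a) + 0.5 * fa * (x - a) ** 2

for x in (0.1, 0.5, 1.0):
    print(f"x = {x}: exp(x) = {math.exp(x):.6f}, "
          f"quadratic approximation = {quadratic_approximation(x):.6f}")
```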
For many combinations of boundary conditions explicit formulas for eigenvalues and eigenvectors of the second derivative can be obtained. For example, assuming x ∈ [ 0 , L ] {\displaystyle x\in [0,L]} and homogeneous Dirichlet boundary conditions (i.e., v ( 0 ) = v ( L ) = 0 {\displaystyle v(0)=v(L)=0} where v is the eigenvector), the eigenvalues are λ j = − j 2 π 2 L 2 {\displaystyle \lambda _{j}=-{\tfrac {j^{2}\pi ^{2}}{L^{2}}}} and the corresponding eigenvectors (also called eigenfunctions ) are v j ( x ) = 2 L sin ( j π x L ) {\displaystyle v_{j}(x)={\sqrt {\tfrac {2}{L}}}\sin \left({\tfrac {j\pi x}{L}}\right)} . Here, v j ″ ( x ) = λ j v j ( x ) {\displaystyle v''_{j}(x)=\lambda _{j}v_{j}(x)} , for j = 1 , … , ∞ {\displaystyle j=1,\ldots ,\infty } .
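These eigenvalues can be checked numerically. The sketch below (the finite-difference discretization and grid size are choices made here, not part of the text) builds the standard three-point approximation of d²/dx² on [0, L] with homogeneous Dirichlet boundary conditions and compares its least-negative eigenvalues with −j²π²/L²:

```python
# Finite-difference check of the Dirichlet eigenvalues of d^2/dx^2 on [0, L].
import numpy as np

L, n = 1.0, 200                      # domain length and number of interior grid points
h = L / (n + 1)
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

numerical = np.sort(np.linalg.eigvalsh(A))[::-1]   # least negative eigenvalues first
for j in (1, 2, 3):
    exact = -(j * np.pi / L) ** 2
    print(f"j = {j}: numerical = {numerical[j - 1]:.4f}, exact = {exact:.4f}")
```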
For other well-known cases, see Eigenvalues and eigenvectors of the second derivative .
The second derivative generalizes to higher dimensions through the notion of second partial derivatives . For a function f : R 3 → R , these include the three second-order partials ∂ 2 f ∂ x 2 , ∂ 2 f ∂ y 2 , and ∂ 2 f ∂ z 2 {\displaystyle {\frac {\partial ^{2}f}{\partial x^{2}}},\;{\frac {\partial ^{2}f}{\partial y^{2}}},{\text{ and }}{\frac {\partial ^{2}f}{\partial z^{2}}}} and the mixed partials ∂ 2 f ∂ x ∂ y , ∂ 2 f ∂ x ∂ z , and ∂ 2 f ∂ y ∂ z . {\displaystyle {\frac {\partial ^{2}f}{\partial x\,\partial y}},\;{\frac {\partial ^{2}f}{\partial x\,\partial z}},{\text{ and }}{\frac {\partial ^{2}f}{\partial y\,\partial z}}.}
Provided the mixed partial derivatives are continuous, so that they are equal by Clairaut's theorem, these second-order partials fit together into a symmetric matrix known as the Hessian . The eigenvalues of this matrix can be used to implement a multivariable analogue of the second derivative test. (See also the second partial derivative test .)
Another common generalization of the second derivative is the Laplacian . This is the differential operator ∇ 2 {\displaystyle \nabla ^{2}} (or Δ {\displaystyle \Delta } ) defined by ∇ 2 f = ∂ 2 f ∂ x 2 + ∂ 2 f ∂ y 2 + ∂ 2 f ∂ z 2 . {\displaystyle \nabla ^{2}f={\frac {\partial ^{2}f}{\partial x^{2}}}+{\frac {\partial ^{2}f}{\partial y^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}}.} The Laplacian of a function is equal to the divergence of the gradient , and the trace of the Hessian matrix . | https://en.wikipedia.org/wiki/Second_derivative |
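As a minimal sketch (the function and evaluation point are arbitrary choices), the Laplacian can be approximated by summing central second differences in each coordinate direction; for f(x, y, z) = x² + y² + z² the exact value is 6 everywhere:

```python
# Central-difference approximation of the Laplacian of f(x, y, z) = x^2 + y^2 + z^2.
def f(x, y, z):
    return x**2 + y**2 + z**2

def laplacian(f, x, y, z, h=1e-3):
    return ((f(x + h, y, z) - 2 * f(x, y, z) + f(x - h, y, z)) / h**2
            + (f(x, y + h, z) - 2 * f(x, y, z) + f(x, y - h, z)) / h**2
            + (f(x, y, z + h) - 2 * f(x, y, z) + f(x, y, z - h)) / h**2)

print(laplacian(f, 0.3, -1.2, 2.5))   # approximately 6
```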
In differential geometry , the second fundamental form (or shape tensor ) is a quadratic form on the tangent plane of a smooth surface in the three-dimensional Euclidean space , usually denoted by I I {\displaystyle \mathrm {I\!I} } (read "two"). Together with the first fundamental form , it serves to define extrinsic invariants of the surface, its principal curvatures . More generally, such a quadratic form is defined for a smooth immersed submanifold in a Riemannian manifold .
The second fundamental form of a parametric surface S in R 3 was introduced and studied by Gauss . First suppose that the surface is the graph of a twice continuously differentiable function, z = f ( x , y ) , and that the plane z = 0 is tangent to the surface at the origin. Then f and its partial derivatives with respect to x and y vanish at (0,0). Therefore, the Taylor expansion of f at (0,0) starts with quadratic terms:
and the second fundamental form at the origin in the coordinates ( x , y ) is the quadratic form
For a smooth point P on S , one can choose the coordinate system so that the plane z = 0 is tangent to S at P , and define the second fundamental form in the same way.
The second fundamental form of a general parametric surface is defined as follows. Let r = r ( u , v ) be a regular parametrization of a surface in R 3 , where r is a smooth vector-valued function of two variables. It is common to denote the partial derivatives of r with respect to u and v by r u and r v . Regularity of the parametrization means that r u and r v are linearly independent for any ( u , v ) in the domain of r , and hence span the tangent plane to S at each point. Equivalently, the cross product r u × r v is a nonzero vector normal to the surface. The parametrization thus defines a field of unit normal vectors n :
The second fundamental form is usually written as
its matrix in the basis { r u , r v } of the tangent plane is
The coefficients L , M , N at a given point in the parametric uv -plane are given by the projections of the second partial derivatives of r at that point onto the normal line to S and can be computed with the aid of the dot product as follows:
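A hedged symbolic sketch of this computation (the surface is an arbitrary choice made here, and the projections referred to above are taken in the standard way, L = r_uu · n, M = r_uv · n, N = r_vv · n, with n the unit normal), for the graph surface r(u, v) = (u, v, u² + v²):

```python
# Second fundamental form coefficients L, M, N for r(u, v) = (u, v, u^2 + v^2),
# obtained by projecting the second partial derivatives of r onto the unit normal.
import sympy as sp

u, v = sp.symbols('u v', real=True)
r = sp.Matrix([u, v, u**2 + v**2])

r_u, r_v = r.diff(u), r.diff(v)
normal = r_u.cross(r_v)
n = normal / normal.norm()            # unit normal field n = (r_u x r_v)/|r_u x r_v|

L = sp.simplify(r.diff(u, 2).dot(n))
M = sp.simplify(r.diff(u).diff(v).dot(n))
N = sp.simplify(r.diff(v, 2).dot(n))
print(L, M, N)
# Mathematically L = N = 2/sqrt(1 + 4u^2 + 4v^2) and M = 0; at the origin L = N = 2.
```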
For a signed distance field of Hessian H , the second fundamental form coefficients can be computed as follows:
The second fundamental form of a general parametric surface S is defined as follows.
Let r = r ( u 1 , u 2 ) be a regular parametrization of a surface in R 3 , where r is a smooth vector-valued function of two variables. It is common to denote the partial derivatives of r with respect to u α by r α , α = 1, 2 . Regularity of the parametrization means that r 1 and r 2 are linearly independent for any ( u 1 , u 2 ) in the domain of r , and hence span the tangent plane to S at each point. Equivalently, the cross product r 1 × r 2 is a nonzero vector normal to the surface. The parametrization thus defines a field of unit normal vectors n :
The second fundamental form is usually written as
The equation above uses the Einstein summation convention .
The coefficients b αβ at a given point in the parametric u 1 u 2 -plane are given by the projections of the second partial derivatives of r at that point onto the normal line to S and can be computed in terms of the normal vector n as follows:
In Euclidean space , the second fundamental form is given by
where ν {\displaystyle \nu } is the Gauss map , and d ν {\displaystyle d\nu } the differential of ν {\displaystyle \nu } regarded as a vector-valued differential form , and the brackets denote the metric tensor of Euclidean space.
More generally, on a Riemannian manifold, the second fundamental form is an equivalent way to describe the shape operator (denoted by S ) of a hypersurface,
where ∇ v w denotes the covariant derivative of the ambient manifold and n a field of normal vectors on the hypersurface. (If the affine connection is torsion-free , then the second fundamental form is symmetric.)
The sign of the second fundamental form depends on the choice of direction of n (which is called a co-orientation of the hypersurface - for surfaces in Euclidean space, this is equivalently given by a choice of orientation of the surface).
The second fundamental form can be generalized to arbitrary codimension . In that case it is a quadratic form on the tangent space with values in the normal bundle and it can be defined by
where ( ∇ v w ) ⊥ {\displaystyle (\nabla _{v}w)^{\bot }} denotes the orthogonal projection of covariant derivative ∇ v w {\displaystyle \nabla _{v}w} onto the normal bundle.
In Euclidean space , the curvature tensor of a submanifold can be described by the following formula:
This is called the Gauss equation , as it may be viewed as a generalization of Gauss's Theorema Egregium .
For general Riemannian manifolds one has to add the curvature of ambient space; if N is a manifold embedded in a Riemannian manifold ( M , g ) then the curvature tensor R N of N with induced metric can be expressed using the second fundamental form and R M , the curvature tensor of M : | https://en.wikipedia.org/wiki/Second_fundamental_form |
The second law of thermodynamics is a physical law based on universal empirical observation concerning heat and energy interconversions . A simple statement of the law is that heat always flows spontaneously from hotter to colder regions of matter (or 'downhill' in terms of the temperature gradient). Another statement is: "Not all heat can be converted into work in a cyclic process ." [ 1 ] [ 2 ] [ 3 ]
The second law of thermodynamics establishes the concept of entropy as a physical property of a thermodynamic system . It predicts whether processes are forbidden despite obeying the requirement of conservation of energy as expressed in the first law of thermodynamics and provides necessary criteria for spontaneous processes . For example, the first law allows the process of a cup falling off a table and breaking on the floor, as well as allowing the reverse process of the cup fragments coming back together and 'jumping' back onto the table, while the second law allows the former and denies the latter. The second law may be formulated by the observation that the entropy of isolated systems left to spontaneous evolution cannot decrease, as they always tend toward a state of thermodynamic equilibrium where the entropy is highest at the given internal energy. [ 4 ] An increase in the combined entropy of system and surroundings accounts for the irreversibility of natural processes, often referred to in the concept of the arrow of time . [ 5 ] [ 6 ]
Historically, the second law was an empirical finding that was accepted as an axiom of thermodynamic theory . Statistical mechanics provides a microscopic explanation of the law in terms of probability distributions of the states of large assemblies of atoms or molecules . The second law has been expressed in many ways. Its first formulation, which preceded the proper definition of entropy and was based on caloric theory , is Carnot's theorem , formulated by the French scientist Sadi Carnot , who in 1824 showed that the efficiency of conversion of heat to work in a heat engine has an upper limit. [ 7 ] [ 8 ] The first rigorous definition of the second law based on the concept of entropy came from German scientist Rudolf Clausius in the 1850s and included his statement that heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.
The second law of thermodynamics allows the definition of the concept of thermodynamic temperature , but this has been formally delegated to the zeroth law of thermodynamics .
The first law of thermodynamics provides the definition of the internal energy of a thermodynamic system , and expresses its change for a closed system in terms of work and heat . [ 9 ] It can be linked to the law of conservation of energy . [ 10 ] Conceptually, the first law describes the fundamental principle that systems do not consume or 'use up' energy, that energy is neither created nor destroyed, but is simply converted from one form to another.
The second law is concerned with the direction of natural processes. [ 11 ] It asserts that a natural process runs only in one sense, and is not reversible. That is, the state of a natural system itself can be reversed, but only at the cost of increasing the entropy of the system's surroundings; the state of the system and the state of its surroundings cannot both be fully reversed together without implying the destruction of entropy.
For example, when a path for conduction or radiation is made available, heat always flows spontaneously from a hotter to a colder body. Such phenomena are accounted for in terms of entropy change . [ 12 ] [ 13 ] A heat pump can reverse this heat flow, but the reversal process and the original process, both cause entropy production, thereby increasing the entropy of the system's surroundings. If an isolated system containing distinct subsystems is held initially in internal thermodynamic equilibrium by internal partitioning by impermeable walls between the subsystems, and then some operation makes the walls more permeable, then the system spontaneously evolves to reach a final new internal thermodynamic equilibrium , and its total entropy, S {\displaystyle S} , increases.
In a reversible or quasi-static , idealized process of transfer of energy as heat to a closed thermodynamic system of interest, (which allows the entry or exit of energy – but not transfer of matter), from an auxiliary thermodynamic system, an infinitesimal increment ( d S {\displaystyle \mathrm {d} S} ) in the entropy of the system of interest is defined to result from an infinitesimal transfer of heat ( δ Q {\displaystyle \delta Q} ) to the system of interest, divided by the common thermodynamic temperature ( T ) {\displaystyle (T)} of the system of interest and the auxiliary thermodynamic system: [ 14 ]
Different notations are used for an infinitesimal amount of heat ( δ ) {\displaystyle (\delta )} and infinitesimal change of entropy ( d ) {\displaystyle (\mathrm {d} )} because entropy is a function of state , while heat, like work, is not.
For an actually possible infinitesimal process without exchange of mass with the surroundings, the second law requires that the increment in system entropy fulfills the inequality [ 15 ] [ 16 ]
This is because a general process for this case (no mass exchange between the system and its surroundings) may include work being done on the system by its surroundings, which can have frictional or viscous effects inside the system, because a chemical reaction may be in progress, or because heat transfer actually occurs only irreversibly, driven by a finite difference between the system temperature ( T ) and the temperature of the surroundings ( T surr ). [ 17 ] [ 18 ]
The equality still applies for pure heat flow (only heat flow, no change in chemical composition and mass),
which is the basis of the accurate determination of the absolute entropy of pure substances from measured heat capacity curves and entropy changes at phase transitions, i.e. by calorimetry. [ 19 ] [ 15 ]
The zeroth law of thermodynamics in its usual short statement allows recognition that two bodies in a relation of thermal equilibrium have the same temperature, especially that a test body has the same temperature as a reference thermometric body. [ 20 ] For a body in thermal equilibrium with another, there are indefinitely many empirical temperature scales, in general respectively depending on the properties of a particular reference thermometric body. The second law singles out a distinguished temperature scale, which defines an absolute, thermodynamic temperature , independent of the properties of any particular reference thermometric body. [ 21 ] [ 22 ]
The second law of thermodynamics may be expressed in many specific ways, [ 23 ] the most prominent classical statements [ 24 ] being the statement by Rudolf Clausius (1854), the statement by Lord Kelvin (1851), and the statement in axiomatic thermodynamics by Constantin Carathéodory (1909). These statements cast the law in general physical terms citing the impossibility of certain processes. The Clausius and the Kelvin statements have been shown to be equivalent. [ 25 ]
The historical origin [ 26 ] of the second law of thermodynamics was in Sadi Carnot 's theoretical analysis of the flow of heat in steam engines (1824). The centerpiece of that analysis, now known as a Carnot engine , is an ideal heat engine fictively operated in the limiting mode of extreme slowness known as quasi-static, so that the heat and work transfers are between subsystems that are always in their own internal states of thermodynamic equilibrium . It represents the theoretical maximum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures. Carnot's principle was recognized by Carnot at a time when the caloric theory represented the dominant understanding of the nature of heat, before the recognition of the first law of thermodynamics , and before the mathematical expression of the concept of entropy. Interpreted in the light of the first law, Carnot's analysis is physically equivalent to the second law of thermodynamics, and remains valid today. Some samples from his book are:
In modern terms, Carnot's principle may be stated more precisely:
The German scientist Rudolf Clausius laid the foundation for the second law of thermodynamics in 1850 by examining the relation between heat transfer and work. [ 36 ] His formulation of the second law, which was published in German in 1854, is known as the Clausius statement :
Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time. [ 37 ]
The statement by Clausius uses the concept of 'passage of heat'. As is usual in thermodynamic discussions, this means 'net transfer of energy as heat', and does not refer to contributory transfers one way and the other.
Heat cannot spontaneously flow from cold regions to hot regions without external work being performed on the system, which is evident from ordinary experience of refrigeration , for example. In a refrigerator, heat is transferred from cold to hot, but only when forced by an external agent, the refrigeration system.
Lord Kelvin expressed the second law in several wordings.
Suppose there is an engine violating the Kelvin statement: i.e., one that drains heat and converts it completely into work (the drained heat is fully converted to work) in a cyclic fashion without any other result. Now pair it with a reversed Carnot engine as shown by the right figure. The efficiency of a normal heat engine is η and so the efficiency of the reversed heat engine is 1/ η . The net and sole effect of the combined pair of engines is to transfer heat Δ Q = Q ( 1 η − 1 ) {\textstyle \Delta Q=Q\left({\frac {1}{\eta }}-1\right)} from the cooler reservoir to the hotter one, which violates the Clausius statement. This is a consequence of the first law of thermodynamics , as for the total system's energy to remain the same; Input + Output = 0 ⟹ ( Q + Q c ) − Q η = 0 {\textstyle {\text{Input}}+{\text{Output}}=0\implies (Q+Q_{c})-{\frac {Q}{\eta }}=0} , so therefore Q c = Q ( 1 η − 1 ) {\textstyle Q_{c}=Q\left({\frac {1}{\eta }}-1\right)} , where (1) the sign convention of heat is used in which heat entering into (leaving from) an engine is positive (negative) and (2) Q η {\displaystyle {\frac {Q}{\eta }}} is obtained by the definition of efficiency of the engine when the engine operation is not reversed. Thus a violation of the Kelvin statement implies a violation of the Clausius statement, i.e. the Clausius statement implies the Kelvin statement. We can prove in a similar manner that the Kelvin statement implies the Clausius statement, and hence the two are equivalent.
Planck offered the following proposition as derived directly from experience. This is sometimes regarded as his statement of the second law, but he regarded it as a starting point for the derivation of the second law.
It is almost customary in textbooks to speak of the "Kelvin–Planck statement" of the law, as for example in the text by ter Haar and Wergeland . [ 41 ] This version, also known as the heat engine statement , of the second law states that
Max Planck stated the second law as follows.
Rather like Planck's statement is that of George Uhlenbeck and G. W. Ford for irreversible phenomena .
Constantin Carathéodory formulated thermodynamics on a purely mathematical axiomatic foundation. His statement of the second law is known as the Principle of Carathéodory, which may be formulated as follows: [ 46 ]
In every neighborhood of any state S of an adiabatically enclosed system there are states inaccessible from S. [ 47 ]
With this formulation, he described the concept of adiabatic accessibility for the first time and provided the foundation for a new subfield of classical thermodynamics, often called geometrical thermodynamics . It follows from Carathéodory's principle that the quantity of energy quasi-statically transferred as heat is a holonomic process function , in other words, δ Q = T d S {\displaystyle \delta Q=TdS} . [ 48 ]
Though it is almost customary in textbooks to say that Carathéodory's principle expresses the second law and to treat it as equivalent to the Clausius or to the Kelvin-Planck statements, such is not the case. To get all the content of the second law, Carathéodory's principle needs to be supplemented by Planck's principle, that isochoric work always increases the internal energy of a closed system that was initially in its own internal thermodynamic equilibrium. [ 18 ] [ 49 ] [ 50 ] [ 51 ]
In 1926, Max Planck wrote an important paper on the basics of thermodynamics. [ 50 ] [ 52 ] He indicated the principle
This formulation does not mention heat and does not mention temperature, nor even entropy, and does not necessarily implicitly rely on those concepts, but it implies the content of the second law. A closely related statement is that "Frictional pressure never does positive work." [ 53 ] Planck wrote: "The production of heat by friction is irreversible." [ 54 ] [ 55 ]
Not mentioning entropy, this principle of Planck is stated in physical terms. It is very closely related to the Kelvin statement given just above. [ 56 ] It is relevant that for a system at constant volume and mole numbers , the entropy is a monotonic function of the internal energy. Nevertheless, this principle of Planck is not actually Planck's preferred statement of the second law, which is quoted in a previous sub-section above and relies on the concept of entropy.
A statement that in a sense is complementary to Planck's principle is made by Claus Borgnakke and Richard E. Sonntag. They do not offer it as a full statement of the second law:
Differing from Planck's just foregoing principle, this one is explicitly in terms of entropy change. Removal of matter from a system can also decrease its entropy.
The second law has been shown to be equivalent to the internal energy U defined as a convex function of the other extensive properties of the system. [ 58 ] That is, when a system is described by stating its internal energy U , an extensive variable, as a function of its entropy S , volume V , and mol number N , i.e. U = U ( S , V , N ), then the temperature is equal to the partial derivative of the internal energy with respect to the entropy [ 59 ] (essentially equivalent to the first TdS equation for V and N held constant):
The Clausius inequality, as well as some other statements of the second law, must be re-stated to have general applicability for all forms of heat transfer, i.e. scenarios involving radiative fluxes. For example, the integrand (đQ/T) of the Clausius expression applies to heat conduction and convection, and the case of ideal infinitesimal blackbody radiation (BR) transfer, but does not apply to most radiative transfer scenarios and in some cases has no physical meaning whatsoever. Consequently, the Clausius inequality was re-stated [ 60 ] so that it is applicable to cycles with processes involving any form of heat transfer. The entropy transfer with radiative fluxes ( δ S NetRad {\displaystyle \delta S_{\text{NetRad}}} ) is taken separately from that due to heat transfer by conduction and convection ( δ Q C C {\displaystyle \delta Q_{CC}} ), where the temperature is evaluated at the system boundary where the heat transfer occurs. The modified Clausius inequality, for all heat transfer scenarios, can then be expressed as, ∫ cycle ( δ Q C C T b + δ S NetRad ) ≤ 0 {\displaystyle \int _{\text{cycle}}({\frac {\delta Q_{CC}}{T_{b}}}+\delta S_{\text{NetRad}})\leq 0}
In a nutshell, the Clausius inequality is saying that when a cycle is completed, the change in the state property S will be zero, so the entropy that was produced during the cycle must have transferred out of the system by heat transfer. The δ {\displaystyle \delta } (or đ) indicates a path dependent integration.
Due to the inherent emission of radiation from all matter, most entropy flux calculations involve incident, reflected and emitted radiative fluxes. The energy and entropy of unpolarized blackbody thermal radiation are calculated using the spectral energy and entropy radiance expressions derived by Max Planck [ 61 ] using equilibrium statistical mechanics, K ν = 2 h c 2 ν 3 exp ( h ν k T ) − 1 , {\displaystyle K_{\nu }={\frac {2h}{c^{2}}}{\frac {\nu ^{3}}{\exp \left({\frac {h\nu }{kT}}\right)-1}},} L ν = 2 k ν 2 c 2 ( ( 1 + c 2 K ν 2 h ν 3 ) ln ( 1 + c 2 K ν 2 h ν 3 ) − ( c 2 K ν 2 h ν 3 ) ln ( c 2 K ν 2 h ν 3 ) ) {\displaystyle L_{\nu }={\frac {2k\nu ^{2}}{c^{2}}}((1+{\frac {c^{2}K_{\nu }}{2h\nu ^{3}}})\ln(1+{\frac {c^{2}K_{\nu }}{2h\nu ^{3}}})-({\frac {c^{2}K_{\nu }}{2h\nu ^{3}}})\ln({\frac {c^{2}K_{\nu }}{2h\nu ^{3}}}))} where c is the speed of light, k is the Boltzmann constant, h is the Planck constant, ν is frequency, and the quantities K v and L v are the energy and entropy fluxes per unit frequency, area, and solid angle. In deriving this blackbody spectral entropy radiance, with the goal of deriving the blackbody energy formula, Planck postulated that the energy of a photon was quantized (partly to simplify the mathematics), thereby starting quantum theory.
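The two spectral expressions above can be evaluated numerically. In the sketch below (the frequency and temperature are arbitrary example values), K_ν and L_ν are computed in SI units:

```python
# Planck's spectral energy radiance K_nu and spectral entropy radiance L_nu
# for blackbody radiation, in SI units per unit frequency, area and solid angle.
import math

c = 2.99792458e8       # speed of light, m/s
h = 6.62607015e-34     # Planck constant, J s
k = 1.380649e-23       # Boltzmann constant, J/K

def K_nu(nu, T):
    return (2.0 * h / c**2) * nu**3 / (math.exp(h * nu / (k * T)) - 1.0)

def L_nu(nu, T):
    x = c**2 * K_nu(nu, T) / (2.0 * h * nu**3)   # equals 1/(exp(h nu / (k T)) - 1)
    return (2.0 * k * nu**2 / c**2) * ((1.0 + x) * math.log(1.0 + x) - x * math.log(x))

nu, T = 3.0e14, 5800.0   # example: ~1 micrometre radiation, roughly solar temperature
print(K_nu(nu, T), L_nu(nu, T))
```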
A non-equilibrium statistical mechanics approach has also been used to obtain the same result as Planck, indicating it has wider significance and represents a non-equilibrium entropy. [ 62 ] A plot of K v versus frequency (v) for various values of temperature ( T) gives a family of blackbody radiation energy spectra, and likewise for the entropy spectra. For non-blackbody radiation (NBR) emission fluxes, the spectral entropy radiance L v is found by substituting K v spectral energy radiance data into the L v expression (noting that emitted and reflected entropy fluxes are, in general, not independent). For the emission of NBR, including graybody radiation (GR), the resultant emitted entropy flux, or radiance L , has a higher ratio of entropy-to-energy ( L/K ), than that of BR. That is, the entropy flux of NBR emission is farther removed from the conduction and convection q / T result, than that for BR emission. [ 63 ] This observation is consistent with Max Planck's blackbody radiation energy and entropy formulas and is consistent with the fact that blackbody radiation emission represents the maximum emission of entropy for all materials with the same temperature, as well as the maximum entropy emission for all radiation with the same energy radiance.
Second law analysis is valuable in scientific and engineering analysis in that it provides a number of benefits over energy analysis alone, including the basis for determining energy quality (exergy content [ 64 ] [ 65 ] [ 66 ] ), understanding fundamental physical phenomena, and improving performance evaluation and optimization. As a result, a conceptual statement of the principle is very useful in engineering analysis. Thermodynamic systems can be categorized by the four combinations of either entropy (S) up or down, and uniformity (Y) – between system and its environment – up or down. Of these, category IV is 'special': it is characterized by movement in the direction of low disorder and low uniformity, counteracting the second law tendency towards uniformity and disorder. [ 67 ]
The second law can be conceptually stated [ 67 ] as follows: Matter and energy have the tendency to reach a state of uniformity or internal and external equilibrium, a state of maximum disorder (entropy). Real non-equilibrium processes always produce entropy, causing increased disorder in the universe, while idealized reversible processes produce no entropy and no process is known to exist that destroys entropy. The tendency of a system to approach uniformity may be counteracted, and the system may become more ordered or complex, by the combination of two things, a work or exergy source and some form of instruction or intelligence. Here, 'exergy' is the thermal, mechanical, electric or chemical work potential of an energy source or flow, and 'instruction or intelligence', although subjective, is in the context of the set of category IV processes.
Consider a category IV example of robotic manufacturing and assembly of vehicles in a factory. The robotic machinery requires electrical work input and instructions, but when completed, the manufactured products have less uniformity with their surroundings, or more complexity (higher order) relative to the raw materials they were made from. Thus, system entropy or disorder decreases while the tendency towards uniformity between the system and its environment is counteracted. In this example, the instructions, as well as the source of work may be internal or external to the system, and they may or may not cross the system boundary. To illustrate, the instructions may be pre-coded and the electrical work may be stored in an energy storage system on-site. Alternatively, the control of the machinery may be by remote operation over a communications network, while the electric work is supplied to the factory from the local electric grid. In addition, humans may directly play, in whole or in part, the role that the robotic machinery plays in manufacturing. In this case, instructions may be involved, but intelligence is either directly responsible, or indirectly responsible, for the direction or application of work in such a way as to counteract the tendency towards disorder and uniformity.
There are also situations where the entropy spontaneously decreases by means of energy and entropy transfer. When thermodynamic constraints are not present, energy or mass, as well as accompanying entropy, may spontaneously be transferred out of a system as it progresses towards external equilibrium or uniformity in intensive properties of the system with its surroundings. This occurs spontaneously because the energy or mass transferred from the system to its surroundings results in a higher entropy in the surroundings, that is, it results in higher overall entropy of the system plus its surroundings. Note that this transfer of entropy requires dis-equilibrium in properties, such as a temperature difference. One example of this is the cooling crystallization of water that can occur when the system's surroundings are below freezing temperatures. Unconstrained heat transfer can spontaneously occur, leading to water molecules freezing into a crystallized structure of reduced disorder (sticking together in a certain order due to molecular attraction). The entropy of the system decreases, but the system approaches uniformity with its surroundings (category III).
On the other hand, consider the refrigeration of water in a warm environment. Due to refrigeration, as heat is extracted from the water, the temperature and entropy of the water decrease, as the system moves further away from uniformity with its warm surroundings or environment (category IV). The main take-away is that refrigeration not only requires a source of work, it requires designed equipment, as well as pre-coded or direct operational intelligence or instructions to achieve the desired refrigeration effect.
Before the establishment of the second law, many people who were interested in inventing a perpetual motion machine had tried to circumvent the restrictions of the first law of thermodynamics by extracting the massive internal energy of the environment as the power of the machine. Such a machine is called a "perpetual motion machine of the second kind". The second law declared the impossibility of such machines.
Carnot's theorem (1824) is a principle that limits the maximum efficiency for any possible engine. The efficiency solely depends on the temperature difference between the hot and cold thermal reservoirs. Carnot's theorem states:
In his ideal model, the heat of caloric converted into work could be reinstated by reversing the motion of the cycle, a concept subsequently known as thermodynamic reversibility . Carnot, however, further postulated that some caloric is lost, not being converted to mechanical work. Hence, no real heat engine could realize the Carnot cycle 's reversibility and was condemned to be less efficient.
Though formulated in terms of caloric (see the obsolete caloric theory ), rather than entropy , this was an early insight into the second law.
The Clausius theorem (1854) states that in a cyclic process ∮ δ Q T surr ≤ 0. {\displaystyle \oint {\frac {\delta Q}{T_{\text{surr}}}}\leq 0.}
The equality holds in the reversible case [ 68 ] and the strict inequality holds in the irreversible case, with T surr as the temperature of the heat bath (surroundings) here. The reversible case is used to introduce the state function entropy . This is because in cyclic processes the variation of a state function is zero from state functionality.
For an arbitrary heat engine, the efficiency is: η = W n q H = q H + q C q H = 1 − | q C | q H {\displaystyle \eta ={\frac {W_{n}}{q_{H}}}={\frac {q_{H}+q_{C}}{q_{H}}}=1-{\frac {|q_{C}|}{q_{H}}}}
where W n is the net work done by the engine per cycle, q H > 0 is the heat added to the engine from a hot reservoir, and q C = −| q C | < 0 [ 69 ] is waste heat given off to a cold reservoir from the engine. Thus the efficiency depends only on the ratio | q C | / | q H |.
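A minimal numeric sketch of this bookkeeping (all values invented for illustration), together with the Carnot bound implied by the reservoir temperatures:

```python
# Efficiency of a heat engine from its per-cycle heat flows, compared with the
# Carnot limit 1 - T_C/T_H for the same pair of reservoirs (illustrative values).
q_H = 1000.0              # heat taken in from the hot reservoir per cycle, J
q_C = -700.0              # waste heat rejected to the cold reservoir, J (negative by convention)
T_H, T_C = 500.0, 300.0   # reservoir temperatures, K

W_net = q_H + q_C                  # net work per cycle, from energy conservation
eta = W_net / q_H                  # 1 - |q_C|/q_H = 0.3
eta_carnot = 1.0 - T_C / T_H       # 0.4, the second-law maximum for these reservoirs

print(f"efficiency = {eta:.2f}, Carnot limit = {eta_carnot:.2f}")
```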
Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures T H and T C must have the same efficiency, that is to say, the efficiency is a function of temperatures only:
In addition, a reversible heat engine operating between temperatures T 1 and T 3 must have the same efficiency as one consisting of two cycles, one between T 1 and another (intermediate) temperature T 2 , and the second between T 2 and T 3 , where T 1 > T 2 > T 3 . This is because, if a part of the two-cycle engine is hidden such that it is recognized as an engine between the reservoirs at the temperatures T 1 and T 3 , then the efficiency of this engine must be the same as that of the other engine at the same reservoirs. If we choose engines such that the work done by the one-cycle engine and the two-cycle engine is the same, then the efficiency of each heat engine is written as below.
Here, the engine 1 is the one cycle engine, and the engines 2 and 3 make the two cycle engine where there is the intermediate reservoir at T 2 . We also have used the fact that the heat q 2 {\displaystyle q_{2}} passes through the intermediate thermal reservoir at T 2 {\displaystyle T_{2}} without losing its energy. (I.e., q 2 {\displaystyle q_{2}} is not lost during its passage through the reservoir at T 2 {\displaystyle T_{2}} .) This fact can be proved by the following.
For consistency in the last equation, the heat q 2 {\displaystyle q_{2}} flowing from engine 2 to the intermediate reservoir must be equal to the heat q 2 ∗ {\displaystyle q_{2}^{*}} flowing out from the reservoir to engine 3.
Then
Now consider the case where T 1 {\displaystyle T_{1}} is a fixed reference temperature: the temperature of the triple point of water as 273.16 K; T 1 = 273.16 K {\displaystyle T_{1}=\mathrm {273.16~K} } . Then for any T 2 and T 3 ,
Therefore, if thermodynamic temperature T * is defined by
then the function f , viewed as a function of thermodynamic temperatures, is simply
and the reference temperature T 1 * = 273.16 K × f ( T 1 , T 1 ) = 273.16 K. (Any reference temperature and any positive numerical value could be used – the choice here corresponds to the Kelvin scale.)
According to the Clausius equality , for a reversible process
That means the line integral ∫ L δ Q T {\displaystyle \int _{L}{\frac {\delta Q}{T}}} is path independent for reversible processes.
So we can define a state function S called entropy, which for a reversible process or for pure heat transfer satisfies
With this we can only obtain the difference of entropy by integrating the above formula. To obtain the absolute value, we need the third law of thermodynamics , which states that S = 0 at absolute zero for perfect crystals.
For any irreversible process, since entropy is a state function, we can always connect the initial and terminal states with an imaginary reversible process and integrate along that path to calculate the difference in entropy.
Now reverse the reversible process and combine it with the said irreversible process. Applying the Clausius inequality on this loop, with T surr as the temperature of the surroundings,
Thus,
where the equality holds if the transformation is reversible. If the process is an adiabatic process , then δ Q = 0 {\displaystyle \delta Q=0} , so Δ S ≥ 0 {\displaystyle \Delta S\geq 0} .
An important and revealing idealized special case is to consider applying the second law to the scenario of an isolated system (called the total system or universe), made up of two parts: a sub-system of interest, and the sub-system's surroundings. These surroundings are imagined to be so large that they can be considered as an unlimited heat reservoir at temperature T R and pressure P R – so that no matter how much heat is transferred to (or from) the sub-system, the temperature of the surroundings will remain T R ; and no matter how much the volume of the sub-system expands (or contracts), the pressure of the surroundings will remain P R .
Whatever changes to dS and dS R occur in the entropies of the sub-system and the surroundings individually, the entropy S tot of the isolated total system must not decrease according to the second law of thermodynamics:
According to the first law of thermodynamics , the change dU in the internal energy of the sub-system is the sum of the heat δq added to the sub-system, minus any work δw done by the sub-system, plus any net chemical energy entering the sub-system d Σ μ iR N i , so that:
where μ iR are the chemical potentials of chemical species in the external surroundings.
Now the heat leaving the reservoir and entering the sub-system is
where we have first used the definition of entropy in classical thermodynamics (alternatively, in statistical thermodynamics, the relation between entropy change, temperature and absorbed heat can be derived); and then the second law inequality from above.
It therefore follows that any net work δw done by the sub-system must obey
It is useful to separate the work δw done by the subsystem into the useful work δw u that can be done by the sub-system, over and beyond the work p R dV done merely by the sub-system expanding against the surrounding external pressure, giving the following relation for the useful work (exergy) that can be done:
It is convenient to define the right-hand-side as the exact derivative of a thermodynamic potential, called the availability or exergy E of the subsystem,
The second law therefore implies that for any process which can be considered as divided simply into a subsystem, and an unlimited temperature and pressure reservoir with which it is in contact,
i.e. the change in the subsystem's exergy plus the useful work done by the subsystem (or, the change in the subsystem's exergy less any work, additional to that done by the pressure reservoir, done on the system) must be less than or equal to zero.
In sum, if a proper infinite-reservoir-like reference state is chosen as the system surroundings in the real world, then the second law predicts a decrease in E for an irreversible process and no change for a reversible process.
This expression together with the associated reference state permits a design engineer working at the macroscopic scale (above the thermodynamic limit ) to utilize the second law without directly measuring or considering entropy change in a total isolated system (see also Process engineer ). Those changes have already been considered by the assumption that the system under consideration can reach equilibrium with the reference state without altering the reference state. An efficiency for a process or collection of processes that compares it to the reversible ideal may also be found (see Exergy efficiency ).
This approach to the second law is widely utilized in engineering practice, environmental accounting , systems ecology , and other disciplines.
The second law determines whether a proposed physical or chemical process is forbidden or may occur spontaneously. For isolated systems , no energy is provided by the surroundings and the second law requires that the entropy of the system alone cannot decrease: Δ S ≥ 0. Examples of spontaneous physical processes in isolated systems include the following:
However, for some non-isolated systems which can exchange energy with their surroundings, the surroundings exchange enough heat with the system, or do sufficient work on the system, so that the processes occur in the opposite direction. In such a case, the reverse process can occur because it is coupled to a simultaneous process that increases the entropy of the surroundings. The coupled process will go forward provided that the total entropy change of the system and surroundings combined is nonnegative as required by the second law: Δ S tot = Δ S + Δ S R ≥ 0. For the three examples given above:
For a spontaneous chemical process in a closed system at constant temperature and pressure without non- PV work, the Clausius inequality Δ S > Q/T surr transforms into a condition for the change in Gibbs free energy
or d G < 0. For a similar process at constant temperature and volume, the change in Helmholtz free energy must be negative, Δ A < 0 {\displaystyle \Delta A<0} . Thus, a negative value of the change in free energy ( G or A ) is a necessary condition for a process to be spontaneous. This is the most useful form of the second law of thermodynamics in chemistry, where free-energy changes can be calculated from tabulated enthalpies of formation and standard molar entropies of reactants and products. [ 19 ] [ 15 ] The chemical equilibrium condition at constant T and p without electrical work is d G = 0.
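A short sketch of this criterion (the thermochemical numbers are invented for illustration; at constant temperature and pressure ΔG = ΔH − TΔS):

```python
# Spontaneity check at constant temperature and pressure:
# Delta G = Delta H - T * Delta S must be negative for a spontaneous process.
def gibbs_change(delta_H, delta_S, T):
    """delta_H in J/mol, delta_S in J/(mol K), T in K; returns Delta G in J/mol."""
    return delta_H - T * delta_S

delta_H = -50_000.0   # hypothetical exothermic enthalpy change, J/mol
delta_S = -100.0      # hypothetical entropy decrease of the system, J/(mol K)

for T in (200.0, 400.0, 600.0):
    dG = gibbs_change(delta_H, delta_S, T)
    verdict = "spontaneous" if dG < 0 else "non-spontaneous"
    print(f"T = {T:.0f} K: Delta G = {dG / 1000:+.1f} kJ/mol -> {verdict}")
```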
The first theory of the conversion of heat into mechanical work is due to Nicolas Léonard Sadi Carnot in 1824. He was the first to realize correctly that the efficiency of this conversion depends on the difference of temperature between an engine and its surroundings.
Recognizing the significance of James Prescott Joule's work on the conservation of energy, Rudolf Clausius was the first to formulate the second law in 1850, in this form: heat does not flow spontaneously from cold to hot bodies. While common knowledge now, this was contrary to the caloric theory of heat popular at the time, which considered heat as a fluid. From there he was able to infer the principle of Sadi Carnot and the definition of entropy (1865).
Established during the 19th century, the Kelvin-Planck statement of the second law says, "It is impossible for any device that operates on a cycle to receive heat from a single reservoir and produce a net amount of work." This statement was shown to be equivalent to the statement of Clausius.
The ergodic hypothesis is also important for the Boltzmann approach. It says that, over long periods of time, the time spent in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e. that all accessible microstates are equally probable over a long period of time. Equivalently, it says that time average and average over the statistical ensemble are the same.
There is a traditional doctrine, starting with Clausius, that entropy can be understood in terms of molecular 'disorder' within a macroscopic system . This doctrine is obsolescent. [ 70 ] [ 71 ] [ 72 ]
In 1865, the German physicist Rudolf Clausius stated what he called the "second fundamental theorem in the mechanical theory of heat " in the following form: [ 73 ]
where Q is heat, T is temperature and N is the "equivalence-value" of all uncompensated transformations involved in a cyclical process. Later, in 1865, Clausius would come to define "equivalence-value" as entropy. On the heels of this definition, that same year, the most famous version of the second law was read in a presentation at the Philosophical Society of Zurich on April 24, at the end of which Clausius concludes:
The entropy of the universe tends to a maximum.
This statement is the best-known phrasing of the second law. Because of the looseness of its language, e.g. universe , as well as lack of specific conditions, e.g. open, closed, or isolated, many people take this simple statement to mean that the second law of thermodynamics applies virtually to every subject imaginable. This is not true; this statement is only a simplified version of a more extended and precise description.
In terms of time variation, the mathematical statement of the second law for an isolated system undergoing an arbitrary transformation is:
where
The equality sign applies after equilibration. An alternative way of formulating the second law for isolated systems is:
with Ṡ_i the sum of the rates of entropy production by all processes inside the system. The advantage of this formulation is that it shows the effect of the entropy production. The rate of entropy production is a very important concept since it determines (limits) the efficiency of thermal machines. Multiplied by the ambient temperature T_a , it gives the so-called dissipated power P_diss = T_a Ṡ_i .
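As a sketch of how the dissipated power is used (the numerical values below are assumptions made for illustration, not data from this article), consider steady heat conduction of Q̇ = 100 W from a reservoir at 500 K to one at 300 K. The entropy production rate is Ṡ_i = Q̇(1/T_cold − 1/T_hot), and multiplying by an ambient temperature equal to the cold reservoir gives the power that a reversible engine could have delivered from the same heat flow but that is lost instead:

```python
# Entropy production and dissipated power for steady heat conduction.
# All numerical values are illustrative assumptions.
Q_DOT = 100.0      # W, heat flow from hot to cold reservoir
T_HOT = 500.0      # K
T_COLD = 300.0     # K
T_AMBIENT = 300.0  # K, taken equal to the cold reservoir here

s_dot_i = Q_DOT * (1.0 / T_COLD - 1.0 / T_HOT)   # W/K, entropy production rate
p_diss = T_AMBIENT * s_dot_i                      # W, dissipated power

# For comparison: the work rate a reversible (Carnot) engine could extract
# from the same heat flow operating between the same two reservoirs.
p_carnot = Q_DOT * (1.0 - T_COLD / T_HOT)

print(f"Entropy production rate: {s_dot_i:.4f} W/K")
print(f"Dissipated power:        {p_diss:.1f} W")
print(f"Reversible work rate:    {p_carnot:.1f} W")
```

With these numbers both the dissipated power and the reversible work rate come out to 40 W, illustrating that the dissipated power is exactly the work-producing potential destroyed by letting the heat flow irreversibly.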
The expression of the second law for closed systems (so, allowing heat exchange and moving boundaries, but not exchange of matter) is:
Here,
The equality sign holds in the case that only reversible processes take place inside the system. If irreversible processes take place (which is the case in real systems in operation) the >-sign holds. If heat is supplied to the system at several places we have to take the algebraic sum of the corresponding terms.
For open systems (also allowing exchange of matter):
Here, Ṡ is the flow of entropy into the system associated with the flow of matter entering the system. It should not be confused with the time derivative of the entropy. If matter is supplied at several places we have to take the algebraic sum of these contributions.
Statistical mechanics gives an explanation for the second law by postulating that a material is composed of atoms and molecules which are in constant motion. A particular set of positions and velocities for each particle in the system is called a microstate of the system and because of the constant motion, the system is constantly changing its microstate. Statistical mechanics postulates that, in equilibrium, each microstate that the system might be in is equally likely to occur, and when this assumption is made, it leads directly to the conclusion that the second law must hold in a statistical sense. That is, the second law will hold on average, with a statistical variation on the order of 1/ √ N where N is the number of particles in the system. For everyday (macroscopic) situations, the probability that the second law will be violated is practically zero. However, for systems with a small number of particles, thermodynamic parameters, including the entropy, may show significant statistical deviations from that predicted by the second law. Classical thermodynamic theory does not deal with these statistical variations.
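A small simulation makes the 1/√N scaling of the statistical variation concrete (a sketch; treating the system as N particles with independently drawn energies is an assumption made purely for illustration):

```python
# Relative size of fluctuations in an extensive quantity versus particle number N.
import numpy as np

rng = np.random.default_rng(0)
TRIALS = 200

for n in (100, 10_000, 1_000_000):
    # Total "energy" of TRIALS independent realizations of an n-particle system,
    # with particle energies drawn from an arbitrary (here exponential) distribution.
    totals = np.array([rng.exponential(1.0, n).sum() for _ in range(TRIALS)])
    relative_spread = totals.std() / totals.mean()
    print(f"N = {n:9d}: relative fluctuation = {relative_spread:.1e}, "
          f"1/sqrt(N) = {1 / np.sqrt(n):.1e}")
```

For a macroscopic sample with N of order 10^22 the same scaling makes the relative fluctuation, and hence the chance of observing a violation of the second law, immeasurably small.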
The first mechanical argument of the Kinetic theory of gases that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium was due to James Clerk Maxwell in 1860; [ 74 ] Ludwig Boltzmann with his H-theorem of 1872 also argued that due to collisions gases should over time tend toward the Maxwell–Boltzmann distribution .
Due to Loschmidt's paradox , derivations of the second law have to make an assumption regarding the past, namely that the system is uncorrelated at some time in the past; this allows for simple probabilistic treatment. This assumption is usually thought as a boundary condition , and thus the second law is ultimately a consequence of the initial conditions somewhere in the past, probably at the beginning of the universe (the Big Bang ), though other scenarios have also been suggested. [ 75 ] [ 76 ] [ 77 ]
Given these assumptions, in statistical mechanics the second law is not a postulate; rather, it is a consequence of the fundamental postulate , also known as the equal prior probability postulate, so long as one is clear that simple probability arguments are applied only to the future, while for the past there are auxiliary sources of information which tell us that it was low entropy. The first part of the second law, which states that the entropy of a thermally isolated system can only increase, is a trivial consequence of the equal prior probability postulate, if we restrict the notion of the entropy to systems in thermal equilibrium. The entropy of an isolated system in thermal equilibrium containing an amount of energy E is S = k_B ln Ω(E),
where Ω(E) is the number of quantum states in a small interval between E and E + δE. Here δE is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of δE. However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on δE.
Suppose we have an isolated system whose macroscopic state is specified by a number of variables. These macroscopic variables can, e.g., refer to the total volume, the positions of pistons in the system, etc. Then Ω will depend on the values of these variables. If a variable is not fixed (e.g. we do not clamp a piston in a certain position), then, because all the accessible states are equally likely in equilibrium, the free variable in equilibrium will be such that Ω is maximized at the given energy of the isolated system [ 78 ] as that is the most probable situation in equilibrium.
If the variable was initially fixed to some value then upon release and when the new equilibrium has been reached, the fact that the variable will adjust itself so that Ω is maximized implies that the entropy will have increased or stayed the same (if the value at which the variable was fixed happened to be the equilibrium value).
Suppose we start from an equilibrium situation and we suddenly remove a constraint on a variable. Then right after we do this, there are a number Ω of accessible microstates, but equilibrium has not yet been reached, so the actual probabilities of the system being in some accessible state are not yet equal to the prior probability of 1/Ω. We have already seen that in the final equilibrium state, the entropy will have increased or have stayed the same relative to the previous equilibrium state. Boltzmann's H-theorem , however, proves that the quantity H decreases monotonically as a function of time during the intermediate out-of-equilibrium state, which corresponds to a monotonic increase of the entropy.
The second part of the second law states that the entropy change of a system undergoing a reversible process is given by dS = δQ/T,
where the temperature is defined by 1/(k_B T) ≡ β ≡ d ln Ω(E) / dE.
See Microcanonical ensemble for the justification for this definition. Suppose that the system has some external parameter, x , that can be changed. In general, the energy eigenstates of the system will depend on x . According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in.
The generalized force, X , corresponding to the external variable x is defined such that X dx is the work performed by the system if x is increased by an amount dx . For example, if x is the volume, then X is the pressure. The generalized force for a system known to be in energy eigenstate E_r is given by:
Since the system can be in any energy eigenstate within an interval of δE , we define the generalized force for the system as the expectation value of the above expression:
To evaluate the average, we partition the Ω(E) energy eigenstates by counting how many of them have a value for dE_r/dx within a range between Y and Y + δY . Calling this number Ω_Y(E) , we have:
The average defining the generalized force can now be written:
We can relate this to the derivative of the entropy with respect to x at constant energy E as follows. Suppose we change x to x + dx . Then Ω(E) will change because the energy eigenstates depend on x , causing energy eigenstates to move into or out of the range between E and E + δE . Let's focus again on the energy eigenstates for which dE_r/dx lies within the range between Y and Y + δY . Since these energy eigenstates increase in energy by Y dx , all such energy eigenstates that are in the interval ranging from E − Y dx to E move from below E to above E . There are
such energy eigenstates. If Y dx ≤ δE , all these energy eigenstates will move into the range between E and E + δE and contribute to an increase in Ω . The number of energy eigenstates that move from below E + δE to above E + δE is given by N_Y(E + δE) . The difference
is thus the net contribution to the increase in Ω . If Y dx is larger than δE there will be energy eigenstates that move from below E to above E + δE . They are counted in both N_Y(E) and N_Y(E + δE) , therefore the above expression is also valid in that case.
Expressing the above expression as a derivative with respect to E and summing over Y yields the expression:
The logarithmic derivative of Ω with respect to x is thus given by:
The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and will thus vanish in the thermodynamic limit. We have thus found that:
Combining this with
gives:
If a system is in thermal contact with a heat bath at some temperature T then, in equilibrium, the probability distribution over the energy eigenvalues is given by the canonical ensemble :
Here Z is a factor that normalizes the sum of all the probabilities to 1; this function is known as the partition function . We now consider an infinitesimal reversible change in the temperature and in the external parameters on which the energy levels depend. It follows from the general formula for the entropy:
that
Inserting the formula for P_j for the canonical ensemble here gives:
As elaborated above, it is thought that the second law of thermodynamics is a result of the very low-entropy initial conditions at the Big Bang . From a statistical point of view, these were very special conditions. On the other hand, they were quite simple, as the universe - or at least the part thereof from which the observable universe developed - seems to have been extremely uniform. [ 79 ]
This may seem somewhat paradoxical, since in many physical systems uniform conditions (e.g. mixed rather than separated gases) have high entropy. The paradox is resolved once one realizes that gravitational systems have negative heat capacity , so that when gravity is important, uniform conditions (e.g. gas of uniform density) in fact have lower entropy compared to non-uniform ones (e.g. black holes in empty space). [ 80 ] Yet another approach is that the universe had high (or even maximal) entropy given its size, but as the universe grew it rapidly came out of thermodynamic equilibrium, its entropy only slightly increased compared to the increase in maximal possible entropy, and thus it has arrived at a very low entropy when compared to the much larger possible maximum given its later size. [ 81 ]
As for the reason why initial conditions were such, one suggestion is that cosmological inflation was enough to wipe off non-smoothness, while another is that the universe was created spontaneously where the mechanism of creation implies low-entropy initial conditions. [ 82 ]
There are two principal ways of formulating thermodynamics, (a) through passages from one state of thermodynamic equilibrium to another, and (b) through cyclic processes, by which the system is left unchanged, while the total entropy of the surroundings is increased. These two ways help to understand the processes of life. The thermodynamics of living organisms has been considered by many authors, including Erwin Schrödinger (in his book What is Life? ) and Léon Brillouin . [ 83 ]
To a fair approximation, living organisms may be considered as examples of (b). Approximately, an animal's physical state cycles by the day, leaving the animal nearly unchanged. Animals take in food, water, and oxygen, and, as a result of metabolism , give out breakdown products and heat. Plants take in radiative energy from the sun, which may be regarded as heat, and carbon dioxide and water. They give out oxygen. In this way they grow. Eventually they die, and their remains rot away, turning mostly back into carbon dioxide and water. This can be regarded as a cyclic process. Overall, the sunlight is from a high temperature source, the sun, and its energy is passed to a lower temperature sink, i.e. radiated into space. This is an increase of entropy of the surroundings of the plant. Thus animals and plants obey the second law of thermodynamics, considered in terms of cyclic processes.
Furthermore, the ability of living organisms to grow and increase in complexity, as well as to form correlations with their environment in the form of adaption and memory, is not opposed to the second law – rather, it is akin to general results following from it: Under some definitions, an increase in entropy also results in an increase in complexity, [ 84 ] and for a finite system interacting with finite reservoirs, an increase in entropy is equivalent to an increase in correlations between the system and the reservoirs. [ 85 ]
Living organisms may be considered as open systems, because matter passes into and out from them. Thermodynamics of open systems is currently often considered in terms of passages from one state of thermodynamic equilibrium to another, or in terms of flows in the approximation of local thermodynamic equilibrium. The problem for living organisms may be further simplified by the approximation of assuming a steady state with unchanging flows. General principles of entropy production for such approximations are a subject of ongoing research .
Commonly, systems for which gravity is not important have a positive heat capacity , meaning that their temperature rises with their internal energy. Therefore, when energy flows from a high-temperature object to a low-temperature object, the source temperature decreases while the sink temperature is increased; hence temperature differences tend to diminish over time.
This is not always the case for systems in which the gravitational force is important: systems that are bound by their own gravity, such as stars, can have negative heat capacities. As they contract, both their total energy and their entropy decrease [ 86 ] but their internal temperature may increase . This can be significant for protostars and even gas giant planets such as Jupiter . When the entropy of the black-body radiation emitted by the bodies is included, however, the total entropy of the system can be shown to increase even as the entropy of the planet or star decreases. [ 87 ]
The theory of classical or equilibrium thermodynamics is idealized. A main postulate or assumption, often not even explicitly stated, is the existence of systems in their own internal states of thermodynamic equilibrium. In general, a region of space containing a physical system at a given time, that may be found in nature, is not in thermodynamic equilibrium, read in the most stringent terms. In looser terms, nothing in the entire universe is or has ever been truly in exact thermodynamic equilibrium. [ 88 ] [ 89 ]
For purposes of physical analysis, it is often convenient to make an assumption of thermodynamic equilibrium . Such an assumption may rely on trial and error for its justification. If the assumption is justified, it can often be very valuable and useful because it makes available the theory of thermodynamics. Elements of the equilibrium assumption are that a system is observed to be unchanging over an indefinitely long time, and that there are so many particles in a system that its particulate nature can be entirely ignored. Under such an equilibrium assumption, in general, there are no macroscopically detectable fluctuations . There is an exception, the case of critical states , which exhibit to the naked eye the phenomenon of critical opalescence . For laboratory studies of critical states, exceptionally long observation times are needed.
In all cases, the assumption of thermodynamic equilibrium, once made, implies as a consequence that no putative candidate "fluctuation" alters the entropy of the system.
It can easily happen that a physical system exhibits internal macroscopic changes that are fast enough to invalidate the assumption of the constancy of the entropy. Or that a physical system has so few particles that the particulate nature is manifest in observable fluctuations. Then the assumption of thermodynamic equilibrium is to be abandoned. There is no unqualified general definition of entropy for non-equilibrium states. [ 90 ]
There are intermediate cases, in which the assumption of local thermodynamic equilibrium is a very good approximation, [ 91 ] [ 92 ] [ 93 ] [ 94 ] but strictly speaking it is still an approximation, not theoretically ideal.
For non-equilibrium situations in general, it may be useful to consider statistical mechanical definitions of other quantities that may be conveniently called 'entropy', but they should not be confused or conflated with thermodynamic entropy properly defined for the second law. These other quantities indeed belong to statistical mechanics, not to thermodynamics, the primary realm of the second law.
The physics of macroscopically observable fluctuations is beyond the scope of this article.
The second law of thermodynamics is a physical law that is not symmetric to reversal of the time direction. This does not conflict with symmetries observed in the fundamental laws of physics (particularly CPT symmetry ) since the second law applies statistically on time-asymmetric boundary conditions. [ 95 ] The second law has been related to the difference between moving forwards and backwards in time, or to the principle that cause precedes effect ( the causal arrow of time , or causality ). [ 96 ]
Irreversibility in thermodynamic processes is a consequence of the asymmetric character of thermodynamic operations, and not of any internally irreversible microscopic properties of the bodies. Thermodynamic operations are macroscopic external interventions imposed on the participating bodies, not derived from their internal properties. There are reputed "paradoxes" that arise from failure to recognize this.
Loschmidt's paradox , also known as the reversibility paradox, is the objection that it should not be possible to deduce an irreversible process from the time-symmetric dynamics that describe the microscopic evolution of a macroscopic system.
In the opinion of Schrödinger, "It is now quite obvious in what manner you have to reformulate the law of entropy – or for that matter, all other irreversible statements – so that they be capable of being derived from reversible models. You must not speak of one isolated system but at least of two, which you may for the moment consider isolated from the rest of the world, but not always from each other." [ 97 ] The two systems are isolated from each other by the wall, until it is removed by the thermodynamic operation, as envisaged by the law. The thermodynamic operation is externally imposed, not subject to the reversible microscopic dynamical laws that govern the constituents of the systems. It is the cause of the irreversibility. The statement of the law in this present article complies with Schrödinger's advice. The cause–effect relation is logically prior to the second law, not derived from it. This is compatible with special and general relativity : although the rate at which time passes is relative, as shown by the time dilation and length contraction exhibited by objects approaching the velocity of light or within proximity of a super-dense region of mass and energy (e.g. black holes, neutron stars, magnetars and quasars), the temporal order of cause and effect is preserved for causally connected events. The ladder paradox , by contrast, concerns the relativity of simultaneity for events that are not causally connected, and so does not affect this ordering.
The Poincaré recurrence theorem considers a theoretical microscopic description of an isolated physical system. This may be considered as a model of a thermodynamic system after a thermodynamic operation has removed an internal wall. The system will, after a sufficiently long time, return to a microscopically defined state very close to the initial one. The Poincaré recurrence time is the length of time elapsed until the return. It is exceedingly long, likely longer than the life of the universe, and depends sensitively on the geometry of the wall that was removed by the thermodynamic operation. The recurrence theorem may be perceived as apparently contradicting the second law of thermodynamics. More obviously, however, it is simply a microscopic model of thermodynamic equilibrium in an isolated system formed by removal of a wall between two systems. For a typical thermodynamical system, the recurrence time is so large (many many times longer than the lifetime of the universe) that, for all practical purposes, one cannot observe the recurrence. One might wish, nevertheless, to imagine that one could wait for the Poincaré recurrence, and then re-insert the wall that was removed by the thermodynamic operation. It is then evident that the appearance of irreversibility is due to the utter unpredictability of the Poincaré recurrence given only that the initial state was one of thermodynamic equilibrium, as is the case in macroscopic thermodynamics. Even if one could wait for it, one has no practical possibility of picking the right instant at which to re-insert the wall. The Poincaré recurrence theorem provides a solution to Loschmidt's paradox. If an isolated thermodynamic system could be monitored over increasingly many multiples of the average Poincaré recurrence time, the thermodynamic behavior of the system would become invariant under time reversal.
James Clerk Maxwell imagined one container divided into two parts, A and B . Both parts are filled with the same gas at equal temperatures and placed next to each other, separated by a wall. Observing the molecules on both sides, an imaginary demon guards a microscopic trapdoor in the wall. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to B . The average speed of the molecules in B will have increased while in A they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in A and increases in B , contrary to the second law of thermodynamics. [ 98 ]
One response to this question was suggested in 1929 by Leó Szilárd and later by Léon Brillouin . Szilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy. [ 99 ] Likewise, Brillouin demonstrated that the decrease in entropy caused by the demon would be less than the entropy produced by choosing molecules based on their speed. [ 98 ]
Maxwell's 'demon' repeatedly alters the permeability of the wall between A and B . It is therefore performing thermodynamic operations on a microscopic scale, not just observing ordinary spontaneous or natural macroscopic thermodynamic processes. [ 99 ]
The law that entropy always increases holds, I think, the supreme position among the laws of Nature . If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations – then so much the worse for Maxwell's equations. If it is found to be contradicted by observation – well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.
There have been nearly as many formulations of the second law as there have been discussions of it.
Clausius is the author of the sibyllic utterance, "The energy of the universe is constant; the entropy of the universe tends to a maximum." The objectives of continuum thermomechanics stop far short of explaining the "universe", but within that theory we may easily derive an explicit statement in some ways reminiscent of Clausius, but referring only to a modest object: an isolated body of finite size. | https://en.wikipedia.org/wiki/Second_law_of_thermodynamics |
In mathematics, the second moment method is a technique used in probability theory and analysis to show that a random variable has positive probability of being positive. More generally, the "moment method" consists of bounding the probability that a random variable fluctuates far from its mean, by using its moments. [ 1 ]
The method is often quantitative, in that one can often deduce a lower bound on the probability that the random variable is larger than some constant times its expectation. The method involves comparing the second moment of random variables to the square of the first moment.
The first moment method is a simple application of Markov's inequality for integer-valued variables. For a non-negative , integer-valued random variable X , we may want to prove that X = 0 with high probability. To obtain an upper bound for Pr( X > 0) , and thus a lower bound for Pr( X = 0) , we first note that since X takes only integer values, Pr( X > 0) = Pr( X ≥ 1) . Since X is non-negative we can now apply Markov's inequality to obtain Pr( X ≥ 1) ≤ E[ X ] . Combining these we have Pr( X > 0) ≤ E[ X ] ; the first moment method is simply the use of this inequality.
In the other direction, E[ X ] being "large" does not directly imply that Pr( X = 0) is small. However, we can often use the second moment to derive such a conclusion, using the Cauchy–Schwarz inequality .
Theorem — If X ≥ 0 is a random variable with finite variance, then
Pr(X > 0) ≥ (E[X])² / E[X²].
Using the Cauchy–Schwarz inequality , we have E[X] = E[X 1_{X > 0}] ≤ (E[X²])^{1/2} (Pr(X > 0))^{1/2}. Solving for Pr(X > 0), the desired inequality then follows. Q.E.D.
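A quick Monte Carlo check of both the first moment upper bound and the second moment lower bound (a sketch; the binomial distribution and its parameters are arbitrary choices made for illustration):

```python
# Compare Pr(X > 0) with the first moment bound E[X] (upper bound)
# and the second moment bound (E[X])^2 / E[X^2] (lower bound).
import numpy as np

rng = np.random.default_rng(1)
n, p, trials = 50, 0.015, 200_000   # illustrative parameters

x = rng.binomial(n, p, size=trials).astype(float)

prob_positive = np.mean(x > 0)
first_moment = x.mean()
second_moment = np.mean(x ** 2)

print(f"Pr(X > 0)                  ~= {prob_positive:.3f}")
print(f"upper bound E[X]            = {first_moment:.3f}")
print(f"lower bound E[X]^2 / E[X^2] = {first_moment**2 / second_moment:.3f}")
```

With these parameters the estimate of Pr(X > 0) is roughly 0.53, sitting between the lower bound of about 0.43 and the upper bound of about 0.75.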
The method can also be used on distributional limits of random variables. Furthermore, the estimate of the previous theorem can be refined by means of the so-called Paley–Zygmund inequality . Suppose that X_n is a sequence of non-negative real-valued random variables which converge in law to a random variable X . If there are finite positive constants c_1 , c_2 such that
E[X_n²] ≤ c_1 E[X_n]²  and  E[X_n] ≥ c_2
hold for every n , then it follows from the Paley–Zygmund inequality that for every n and θ in (0, 1)
Pr(X_n ≥ c_2 θ) ≥ (1 − θ)² / c_1 .
Consequently, the same inequality is satisfied by X .
The Bernoulli bond percolation subgraph of a graph G at parameter p is a random subgraph obtained from G by deleting every edge of G with probability 1− p , independently. The infinite complete binary tree T is an infinite tree where one vertex (called the root) has two neighbors and every other vertex has three neighbors. The second moment method can be used to show that at every parameter p ∈ (1/2, 1] with positive probability the connected component of the root in the percolation subgraph of T is infinite.
Let K be the percolation component of the root, and let T n be the set of vertices of T that are at distance n from the root. Let X n be the number of vertices in T n ∩ K .
To prove that K is infinite with positive probability, it is enough to show that Pr(X_n > 0 for all n ) > 0. Since the events {X_n > 0} form a decreasing sequence, by continuity of probability measures this is equivalent to showing that inf_n Pr(X_n > 0) > 0.
The Cauchy–Schwarz inequality gives E[X_n]² ≤ E[X_n²] E[(1_{X_n > 0})²] = E[X_n²] Pr(X_n > 0). Therefore, it is sufficient to show that inf_n E[X_n]² / E[X_n²] > 0, that is, that the second moment is bounded from above by a constant times the first moment squared (and both are nonzero). In many applications of the second moment method, one is not able to calculate the moments precisely, but can nevertheless establish this inequality.
In this particular application, these moments can be calculated. For every specific v in T_n , Pr(v ∈ K) = p^n . Since |T_n| = 2^n , it follows that E[X_n] = 2^n p^n , which is the first moment. Now comes the second moment calculation:
E[X_n²] = E[ Σ_{v ∈ T_n} Σ_{u ∈ T_n} 1_{v ∈ K} 1_{u ∈ K} ] = Σ_{v ∈ T_n} Σ_{u ∈ T_n} Pr(v, u ∈ K).
For each pair v , u in T_n let w(v, u) denote the vertex in T that is farthest away from the root and lies on the simple path in T to each of the two vertices v and u , and let k(v, u) denote the distance from w to the root. In order for v , u to both be in K , it is necessary and sufficient for the three simple paths from w(v, u) to v , u and the root to be in K . Since the number of edges contained in the union of these three paths is 2n − k(v, u) , we obtain Pr(v, u ∈ K) = p^{2n − k(v, u)} . The number of pairs (v, u) such that k(v, u) = s is equal to 2^s · 2^{n−s} · 2^{n−s−1} = 2^{2n−s−1} for s = 0, 1, …, n − 1 , and equal to 2^n for s = n . Hence, for p > 1/2 ,
E[X_n²] = (2p)^n + Σ_{s=0}^{n−1} 2^{2n−s−1} p^{2n−s} = [ (2p)^{n+1} − 2(2p)^n + (2p)^{2n+1} ] / (4p − 2),
so that
(E[X_n])² / E[X_n²] = (4p − 2) / [ (2p)^{1−n} − 2(2p)^{−n} + 2p ] → 2 − 1/p > 0,
which completes the proof.
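A simulation makes the result concrete (a sketch; the retention probability p = 0.7 and the depth are arbitrary choices). Because each vertex connected to the root keeps each of its two child edges independently with probability p, the level sizes X_n can be generated level by level as binomial draws, and the estimated Pr(X_n > 0) can be compared with the limiting lower bound 2 − 1/p derived above:

```python
# Bernoulli bond percolation on the infinite complete binary tree.
# Estimate Pr(X_n > 0), the probability that some depth-n vertex is still
# connected to the root, and compare with the second moment bound 2 - 1/p.
import numpy as np

rng = np.random.default_rng(2)
P = 0.7        # edge retention probability (illustrative)
DEPTH = 25     # depth n at which survival is tested
TRIALS = 20_000

survivals = 0
for _ in range(TRIALS):
    count = 1                          # X_0 = 1: the root itself
    for _ in range(DEPTH):
        # Each of the `count` connected vertices has 2 child edges,
        # each kept independently with probability P.
        count = rng.binomial(2 * count, P)
        if count == 0:
            break
    survivals += count > 0

print(f"estimated Pr(X_{DEPTH} > 0) = {survivals / TRIALS:.3f}")
print(f"second moment lower bound  = {2 - 1 / P:.3f}")
```

For p = 0.7 the estimated survival probability is around 0.82, comfortably above the lower bound of about 0.57, consistent with the bound being valid but not tight. | https://en.wikipedia.org/wiki/Second_moment_method |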
The second moment of area , or second area moment , or quadratic moment of area and also known as the area moment of inertia , is a geometrical property of an area which reflects how its points are distributed with regard to an arbitrary axis. The second moment of area is typically denoted with either an I {\displaystyle I} (for an axis that lies in the plane of the area) or with a J {\displaystyle J} (for an axis perpendicular to the plane). In both cases, it is calculated with a multiple integral over the object in question. Its dimension is L (length) to the fourth power. Its unit of dimension, when working with the International System of Units , is meters to the fourth power, m 4 , or inches to the fourth power, in 4 , when working in the Imperial System of Units or the US customary system .
In structural engineering , the second moment of area of a beam is an important property used in the calculation of the beam's deflection and the calculation of stress caused by a moment applied to the beam. In order to maximize the second moment of area, a large fraction of the cross-sectional area of an I-beam is located at the maximum possible distance from the centroid of the I-beam's cross-section. The planar second moment of area provides insight into a beam's resistance to bending due to an applied moment, force , or distributed load perpendicular to its neutral axis , as a function of its shape. The polar second moment of area provides insight into a beam's resistance to torsional deflection, due to an applied moment parallel to its cross-section, as a function of its shape.
Different disciplines use the term moment of inertia (MOI) to refer to different moments . It may refer to either of the planar second moments of area (often I_x = ∬_R y² dA or I_y = ∬_R x² dA , with respect to some reference plane), or the polar second moment of area ( I = ∬_R r² dA , where r is the distance to some reference axis). In each case the integral is over all the infinitesimal elements of area , dA , in some two-dimensional cross-section. In physics , moment of inertia is strictly the second moment of mass with respect to distance from an axis: I = ∫_Q r² dm , where r is the distance to some potential rotation axis, and the integral is over all the infinitesimal elements of mass , dm , in a three-dimensional space occupied by an object Q . The MOI, in this sense, is the analog of mass for rotational problems. In engineering (especially mechanical and civil), moment of inertia commonly refers to the second moment of the area. [ 1 ]
The second moment of area for an arbitrary shape R with respect to an arbitrary axis BB′ (an axis lying in the plane of the x and y axes and perpendicular to the line segment ρ ) is defined as J_{BB′} = ∬_R ρ² dA , where ρ is the perpendicular distance from the axis BB′ to the element of area dA .
For example, when the desired reference axis is the x-axis, the second moment of area I_{xx} (often denoted as I_x ) can be computed in Cartesian coordinates as
I_x = ∬_R y² dx dy
The second moment of the area is crucial in Euler–Bernoulli theory of slender beams.
More generally, the product moment of area is defined as [ 3 ] I_{xy} = ∬_R yx dx dy
It is sometimes necessary to calculate the second moment of area of a shape with respect to an x′ axis different to the centroidal axis of the shape. However, it is often easier to derive the second moment of area with respect to its centroidal axis, x , and use the parallel axis theorem to derive the second moment of area with respect to the x′ axis. The parallel axis theorem states I_{x′} = I_x + A d² , where A is the area of the shape and d is the perpendicular distance between the x axis and the x′ axis.
A similar statement can be made about a y′ axis and the parallel centroidal y axis. Or, in general, any centroidal B axis and a parallel B′ axis.
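A short numerical check of the parallel axis theorem (a sketch; the rectangle dimensions, offset and grid resolution are arbitrary choices). It integrates y² over a b × h rectangle on a fine grid, once about the centroidal axis and once about a parallel axis a distance d away, and compares the latter with I_x + A d²:

```python
# Numerical check of the parallel axis theorem for a b x h rectangle.
import numpy as np

b, h, d = 0.30, 0.50, 0.20   # metres; illustrative values
n = 2000                     # number of horizontal strips

# Midpoints of n horizontal strips, with the centroid at the origin.
y = (np.arange(n) + 0.5) / n * h - h / 2
strip_area = b * h / n       # area of one strip

i_centroid = np.sum(y**2) * strip_area           # about the centroidal x axis
i_shifted = np.sum((y + d) ** 2) * strip_area    # about an axis a distance d away
i_theorem = i_centroid + (b * h) * d**2          # parallel axis theorem

print(f"direct integration about shifted axis: {i_shifted:.6e} m^4")
print(f"parallel axis theorem prediction:      {i_theorem:.6e} m^4")
print(f"exact centroidal value b*h^3/12:       {b * h**3 / 12:.6e} m^4")
```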
For the simplicity of calculation, it is often desired to define the polar moment of area (with respect to a perpendicular axis) in terms of two area moments of inertia (both with respect to in-plane axes). The simplest case relates J_z to I_x and I_y .
J_z = ∬_R ρ² dA = ∬_R (x² + y²) dA = ∬_R x² dA + ∬_R y² dA = I_x + I_y
This relationship relies on the Pythagorean theorem , which relates x and y to ρ , and on the linearity of integration .
For more complex areas, it is often easier to divide the area into a series of "simpler" shapes. The second moment of area for the entire shape is the sum of the second moment of areas of all of its parts about a common axis. This can include shapes that are "missing" (i.e. holes, hollow shapes, etc.), in which case the second moment of area of the "missing" areas are subtracted, rather than added. In other words, the second moment of area of "missing" parts are considered negative for the method of composite shapes.
See list of second moments of area for other shapes.
Consider a rectangle with base b and height h whose centroid is located at the origin. I_x represents the second moment of area with respect to the x-axis; I_y represents the second moment of area with respect to the y-axis; J_z represents the polar moment of inertia with respect to the z-axis.
I_x = ∬_R y² dA = ∫_{−b/2}^{b/2} ∫_{−h/2}^{h/2} y² dy dx = ∫_{−b/2}^{b/2} (1/3)(h³/4) dx = b h³ / 12
I_y = ∬_R x² dA = ∫_{−b/2}^{b/2} ∫_{−h/2}^{h/2} x² dy dx = ∫_{−b/2}^{b/2} h x² dx = b³ h / 12
Using the perpendicular axis theorem we get the value of J_z .
J_z = I_x + I_y = b h³ / 12 + h b³ / 12 = (b h / 12)(b² + h²)
Consider an annulus whose center is at the origin, outside radius is r_2 , and inside radius is r_1 . Because of the symmetry of the annulus, the centroid also lies at the origin. We can determine the polar moment of inertia, J_z , about the z axis by the method of composite shapes. This polar moment of inertia is equivalent to the polar moment of inertia of a circle with radius r_2 minus the polar moment of inertia of a circle with radius r_1 , both centered at the origin. First, let us derive the polar moment of inertia of a circle with radius r with respect to the origin. In this case, it is easier to directly calculate J_z as we already have r² , which has both an x and y component. Instead of obtaining the second moment of area from Cartesian coordinates as done in the previous section, we shall calculate I_x and J_z directly using polar coordinates .
I_{x,circle} = ∬_R y² dA = ∬_R (r sin θ)² dA = ∫_0^{2π} ∫_0^{r} (r sin θ)² (r dr dθ) = ∫_0^{2π} ∫_0^{r} r³ sin² θ dr dθ = ∫_0^{2π} (r⁴ sin² θ / 4) dθ = (π/4) r⁴
J_{z,circle} = ∬_R r² dA = ∫_0^{2π} ∫_0^{r} r² (r dr dθ) = ∫_0^{2π} ∫_0^{r} r³ dr dθ = ∫_0^{2π} (r⁴/4) dθ = (π/2) r⁴
Now, the polar moment of inertia about the z axis for an annulus is simply, as stated above, the difference of the second moments of area of a circle with radius r_2 and a circle with radius r_1 .
J_z = J_{z,r_2} − J_{z,r_1} = (π/2) r_2⁴ − (π/2) r_1⁴ = (π/2) (r_2⁴ − r_1⁴)
Alternatively, we could change the limits on the dr integral the first time around to reflect the fact that there is a hole. This would be done like this.
J_z = ∬_R r² dA = ∫_0^{2π} ∫_{r_1}^{r_2} r² (r dr dθ) = ∫_0^{2π} ∫_{r_1}^{r_2} r³ dr dθ = ∫_0^{2π} [ r_2⁴/4 − r_1⁴/4 ] dθ = (π/2)(r_2⁴ − r_1⁴)
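A quick numerical cross-check of the annulus result (a sketch; the radii below are arbitrary). It evaluates the radial integral on a fine grid and compares with (π/2)(r_2⁴ − r_1⁴):

```python
# Numerical check of the polar second moment of area of an annulus.
import numpy as np

r1, r2 = 0.05, 0.12   # metres; illustrative inner and outer radii
nr = 4000             # radial grid resolution

r = np.linspace(r1, r2, nr)
dr = r[1] - r[0]
r_mid = (r[:-1] + r[1:]) / 2            # midpoints for the radial integration

# J_z = integral of r^2 dA with dA = r dr dtheta; the theta integral gives 2*pi.
j_numeric = 2 * np.pi * np.sum(r_mid**3) * dr
j_exact = 0.5 * np.pi * (r2**4 - r1**4)

print(f"numerical: {j_numeric:.6e} m^4")
print(f"exact:     {j_exact:.6e} m^4")
```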
The second moment of area about the origin for any simple polygon on the XY-plane can be computed in general by summing contributions from each segment of the polygon after dividing the area into a set of triangles. This formula is related to the shoelace formula and can be considered a special case of Green's theorem .
A polygon is assumed to have n {\displaystyle n} vertices, numbered in counter-clockwise fashion. If polygon vertices are numbered clockwise, returned values will be negative, but absolute values will be correct.
I_y = (1/12) Σ_{i=1}^{n} ( x_i y_{i+1} − x_{i+1} y_i ) ( x_i² + x_i x_{i+1} + x_{i+1}² )
I_x = (1/12) Σ_{i=1}^{n} ( x_i y_{i+1} − x_{i+1} y_i ) ( y_i² + y_i y_{i+1} + y_{i+1}² )
I_{xy} = (1/24) Σ_{i=1}^{n} ( x_i y_{i+1} − x_{i+1} y_i ) ( x_i y_{i+1} + 2 x_i y_i + 2 x_{i+1} y_{i+1} + x_{i+1} y_i )
where x_i , y_i are the coordinates of the i -th polygon vertex, for 1 ≤ i ≤ n . Also, x_{n+1} , y_{n+1} are assumed to be equal to the coordinates of the first vertex, i.e., x_{n+1} = x_1 and y_{n+1} = y_1 . [ 6 ] [ 7 ] [ 8 ] [ 9 ]
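These sums translate directly into code. The sketch below (the function name and variable names are mine, not taken from the cited references) implements the three formulas and checks them against the known centroidal results b h³/12 and b³ h/12 for a rectangle centred on the origin:

```python
# Second moments of area of a simple polygon from its vertex coordinates.
from typing import List, Tuple

def second_moments(vertices: List[Tuple[float, float]]) -> Tuple[float, float, float]:
    """Return (I_x, I_y, I_xy) about the origin for a simple polygon.

    Vertices must be listed in counter-clockwise order; a clockwise list
    gives the same magnitudes with opposite signs.
    """
    i_x = i_y = i_xy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]      # wrap around to the first vertex
        cross = x0 * y1 - x1 * y0
        i_y += cross * (x0 * x0 + x0 * x1 + x1 * x1)
        i_x += cross * (y0 * y0 + y0 * y1 + y1 * y1)
        i_xy += cross * (x0 * y1 + 2 * x0 * y0 + 2 * x1 * y1 + x1 * y0)
    return i_x / 12.0, i_y / 12.0, i_xy / 24.0

# Check against a centred b x h rectangle: I_x = b*h^3/12, I_y = b^3*h/12, I_xy = 0.
b, h = 0.3, 0.5
rect = [(-b/2, -h/2), (b/2, -h/2), (b/2, h/2), (-b/2, h/2)]   # counter-clockwise
print(second_moments(rect))
print(b * h**3 / 12, b**3 * h / 12, 0.0)
```

Running the check reproduces b h³/12 and b³ h/12 to machine precision, and the product moment vanishes by symmetry. | https://en.wikipedia.org/wiki/Second_moment_of_area |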
The second solar spectrum is an electromagnetic spectrum of the Sun that shows the degree of linear polarization . The term was coined by V. V. Ivanov in 1991. The polarization is at a maximum close to the limb (edge) of the Sun, thus the best place to observe such a spectrum is from just inside the limb. [ 1 ] It is also possible to get polarized light from outside the limb, but since this is much dimmer compared to the disk of the Sun, it is very easily polluted by scattered light.
The second solar spectrum differs significantly from the solar spectrum determined by the intensity of light. [ 1 ] Large effects appear around the Ca II K and H lines. These have broad effects 200 Å wide and show a sign reversal at their centers. [ 1 ] Molecular lines with stronger polarization than the background due to MgH and C 2 are common. [ 1 ] Rare-earth elements stand out far more than expected from the intensity spectrum. [ 1 ]
Other odd lines include Li I at 6708 Å, which has 0.005% more polarization at its peak, but is almost unobservable in the intensity spectrum. The Ba II line at 4554 Å appears as a triplet in the second solar spectrum. This is due to differing isotopes and hyperfine structure . [ 1 ]
Two lines, at 5896 Å and 4934 Å, the D 1 lines of sodium and barium respectively, were predicted not to be polarized, but are nevertheless present in this spectrum. [ 1 ]
The continuum in the spectrum is the light with wavelengths between the lines. Polarization in the continuum is due to Rayleigh scattering by neutral hydrogen atoms (H I) and Thomson scattering by free electrons . Most of the opacity in the sun is due to the hydride ion, H − which however does not alter polarization. [ 2 ] In 1950 Subrahmanyan Chandrasekhar came up with a solution for the degree of polarization due to scattering, and predicted 11.7% polarization at the limb of the Sun. But nowhere near this level is observed. What happens at the limb is that there is a forest of spicules poking out from the edge, so it is not possible to get parallel to such a rough surface. [ 2 ]
For most of the solar disk the degree of linear polarization of the continuum is under 0.1%, but it rises to 1% at the limb. The polarization also depends strongly on the wavelength, and for near ultraviolet 3000 Å the light near the limb is 100 times more polarized than red light at 7000 Å. [ 2 ] At the limit of the Balmer series a change happens where at shorter wavelengths more bound-bound Balmer series transitions cause more opacity. This extra opacity drops the polarization degree by a factor of two near 3746 Å. [ 2 ] | https://en.wikipedia.org/wiki/Second_solar_spectrum |
In condensed matter physics , second sound is a quantum mechanical phenomenon in which heat transfer occurs by wave -like motion, rather than by the more usual mechanism of diffusion . Its presence leads to a very high thermal conductivity . It is known as "second sound" because the wave motion of entropy and temperature is similar to the propagation of pressure waves in air ( sound ). [ 1 ] The phenomenon of second sound was first described by Lev Landau in 1941. [ 2 ]
Normal sound waves are fluctuations in the displacement and density of molecules in a substance; [ 3 ] [ 4 ] second sound waves are fluctuations in the density of quasiparticle thermal excitations ( rotons and phonons [ 5 ] ). Second sound can be observed in any system in which most phonon-phonon collisions conserve momentum, like superfluids [ 6 ] and in some dielectric crystals [ 1 ] [ 7 ] [ 8 ] when Umklapp scattering is small.
Unlike molecules in a gas, quasiparticles are not necessarily conserved in number. In addition, while gas molecules in a box conserve momentum (except at the boundaries of the box), quasiparticles may fail to conserve momentum in the presence of impurities or Umklapp scattering. Umklapp phonon-phonon scattering exchanges momentum with the crystal lattice, so phonon momentum is not conserved, but Umklapp processes can be reduced at low temperatures. [ 9 ]
Normal sound in gases is a consequence of the mean time τ between molecular collisions being short compared to the period of the sound wave, ω ≪ 1/ τ . For second sound, the Umklapp scattering time τ u has to be long compared to the oscillation period, ω ≫ 1/ τ u , so that energy and momentum are effectively conserved. However, analogous to gases, the relaxation time τ N describing the momentum-conserving collisions has to be short with respect to the oscillation period, ω ≪ 1/ τ N , leaving a window 1/ τ u ≪ ω ≪ 1/ τ N [ 9 ]
for sound-like behaviour or second sound. The second sound thus behaves as oscillations of the local number of quasiparticles (or of the local energy carried by these particles). Contrary to the normal sound where energy is related to pressure and temperature, in a crystal the local energy density is purely a function of the temperature. In this sense, the second sound can also be considered as oscillations of the local temperature. Second sound is a wave-like phenomenon which makes it very different from usual heat diffusion . [ 9 ]
Second sound is observed in liquid helium at temperatures below the lambda point , 2.1768 K , where 4 He becomes a superfluid known as helium II . Helium II has the highest thermal conductivity of any known material (several hundred times higher than copper ). [ 10 ] Second sound can be observed either as pulses or in a resonant cavity. [ 11 ]
The speed of second sound is close to zero near the lambda point, increasing to approximately 20 m/s around 1.8 K, [ 12 ] about ten times slower than normal sound waves. [ 13 ] At temperatures below 1 K, the speed of second sound in helium II increases as the temperature decreases. [ 14 ]
Second sound is also observed in superfluid helium-3 below its superfluid transition temperature of about 2.5 mK. [ 15 ]
According to the two-fluid model, the speed of second sound is given by
c_2 = ( (T S² / C) (ρ_s / ρ_n) )^{1/2}
where T is the temperature, S is the entropy, C is the specific heat, ρ_s is the superfluid density and ρ_n is the normal fluid density. As T → 0, c_2 → c/√3, where c = ( (∂p/∂ρ)_S )^{1/2} ≈ ( (∂p/∂ρ)_T )^{1/2} is the ordinary (or first) sound speed.
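As a rough numerical illustration of the low-temperature limit (a sketch; the first-sound speed of about 238 m/s for liquid helium near absolute zero is an assumed literature value, not a figure taken from this article):

```python
# Low-temperature limit of the second sound speed in superfluid helium,
# c2 -> c / sqrt(3), using an assumed first-sound speed.
import math

FIRST_SOUND_SPEED = 238.0   # m/s, approximate value for He II near T = 0 (assumed)

c2_limit = FIRST_SOUND_SPEED / math.sqrt(3)
print(f"c2 in the T -> 0 limit: about {c2_limit:.0f} m/s")
```

The resulting limit of roughly 140 m/s is much larger than the roughly 20 m/s quoted above near 1.8 K, consistent with the statement that the second sound speed rises as the temperature is lowered below 1 K.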
Second sound has been observed in solid 4 He and 3 He, [ 16 ] [ 17 ] and in some dielectric solids such as Bi in the temperature
range of 1.2 to 4.0 K with a velocity of 780 ± 50 m/s, [ 18 ] or solid sodium fluoride (NaF) around 10 to 20 K. [ 19 ] In 2021 this effect was observed in a BKT superfluid [ 20 ] as well as in a germanium semiconductor [ 21 ] [ 22 ]
In 2019 it was reported that ordinary graphite exhibits second sound at 120 K . This feature was both predicted theoretically and observed experimentally, and
was by far the highest temperature at which second sound has been observed. [ 23 ] However, this second sound is observed only at the microscale, because the wave dies out exponentially with characteristic length 1-10 microns. Therefore, presumably graphite in the right temperature regime has extraordinarily high thermal conductivity but only for the purpose of transferring heat pulses distances of order 10 microns, and for pulses of duration on the order of 10 nanoseconds. For more "normal" heat-transfer, graphite's observed thermal conductivity is less than that of, e.g., copper. The theoretical models, however, predict longer absorption lengths would be seen in isotopically pure graphite, and perhaps over a wider temperature range, e.g. even at room temperature. (As of March 2019, that experiment has not yet been tried.)
Measuring the speed of second sound in 3 He- 4 He mixtures can be
used as a thermometer in the range 0.01-0.7 K. [ 24 ]
Oscillating superleak transducers (OST) [ 25 ] use second sound to locate defects in superconducting accelerator cavities . [ 26 ] [ 27 ] | https://en.wikipedia.org/wiki/Second_sound |
Second wind is a phenomenon in endurance sports, such as marathons or road running (as well as other sports), whereby an athlete who is out of breath and too tired to continue (known as " hitting the wall ") finds the strength to press on at top performance with less exertion. The feeling may be similar to that of a " runner's high ", the most obvious difference being that the runner's high occurs after the race is over. [ 1 ] In muscle glycogenoses (muscle GSDs), an inborn error of carbohydrate metabolism impairs either the formation or utilization of muscle glycogen. As such, those with muscle glycogenoses do not need to do prolonged exercise to experience "hitting the wall". Instead, signs of exercise intolerance , such as an inappropriately rapid heart rate response to exercise, are experienced from the beginning of an activity, and in some muscle GSDs a second wind can be achieved within about 10 minutes from the beginning of the aerobic activity, such as walking. (See below in pathology .)
In experienced athletes, "hitting the wall" is conventionally believed to be due to the body's glycogen stores being depleted, with "second wind" occurring when fatty acids become the predominant source of energy. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] The delay between "hitting the wall" and "second wind" occurring has to do with the slow rate at which fatty acids produce sufficient ATP (energy): fatty acids take approximately 10 minutes, whereas muscle glycogen is considerably faster at about 30 seconds. [ 5 ] [ 7 ] Some scientists believe the second wind to be a result of the body finding the proper balance of oxygen to counteract the buildup of lactic acid in the muscles. [ 8 ] Others claim second winds are due to endorphin production.
Heavy breathing during exercise also provides cooling for the body. After some time the veins and capillaries dilate and cooling takes place more through the skin, so less heavy breathing is needed. The increase in the temperature of the skin can be felt at the same time as the "second wind" takes place.
Documented experiences of the second wind go back at least 100 years, when it was taken to be a commonly held fact of exercise. [ 9 ] The phenomenon has come to be used as a metaphor for continuing on with renewed energy past the point thought to be one's prime, whether in other sports, careers, or life in general. [ 10 ] [ 11 ] [ 12 ]
When non-aerobic glycogen metabolism is insufficient to meet energy demands, physiologic mechanisms utilize alternative sources of energy such as fatty acids and proteins via aerobic respiration. Second-wind phenomena in metabolic disorders such as McArdle's disease are attributed to this metabolic switch and the same or a similar phenomenon may occur in healthy individuals (see symptoms of McArdle's disease ).
Muscular exercise, like other cellular functions, requires oxygen to produce ATP and function properly. This normal function is called aerobic metabolism and does not produce lactic acid if enough oxygen is present. During heavy exercise, such as long-distance running or any other demanding exercise, the body's need for oxygen to produce energy is higher than the oxygen supplied in the blood from respiration. Anaerobic metabolism then takes place in the muscle to some degree, and this less efficient energy production yields lactic acid as a waste metabolite. If the oxygen supply is not soon restored, this may lead to accumulation of lactic acid.
This is the case even without exercise in people with respiratory disease , challenged circulation of blood to parts of the body or any other situation when oxygen cannot be supplied to the tissues involved.
Some people's bodies may take more time than others to be able to balance the amount of oxygen they need to counteract the lactic acid. This theory of the second wind posits that, by pushing past the point of pain and exhaustion, runners may give their systems enough time to warm up and begin to use the oxygen to its fullest potential. For this reason, well-conditioned Olympic-level runners do not generally experience a second wind (or they experience it much sooner) because their bodies are trained to perform properly from the start of the race. [ 8 ]
Endorphins are credited as the cause of the feeling of euphoria and wellbeing found in many forms of exercise, so proponents of this theory believe that the second wind is caused by their early release. [ 13 ] Many of these proponents feel that the second wind is very closely related to—or even interchangeable with—the runner's high. [ 14 ]
A second wind phenomenon is also seen in some medical conditions, such as McArdle disease (GSD-V) and Phosphoglucomutase deficiency (PGM1-CDG/CDG1T/GSD-XIV). [ 15 ] [ 16 ] Unlike non-affected individuals that have to do long-distance running to deplete their muscle glycogen, in GSD-V individuals their muscle glycogen is unavailable, so second wind is achieved after 6–10 minutes of light to moderate aerobic activity (such as walking without an incline). [ 17 ] [ 18 ] [ 19 ]
Skeletal muscle relies predominantly on glycogenolysis for the first few minutes as it transitions from rest to activity, as well as throughout high-intensity aerobic activity and all anaerobic activity. [ 17 ] In GSD-V, due to a glycolytic block, there is an energy shortage in the muscle cells after the phosphagen system has been depleted. The heart tries to compensate for the energy shortage by increasing heart rate to maximize delivery of oxygen and blood-borne fuels to the muscle cells for oxidative phosphorylation . [ 17 ] Exercise intolerance, such as muscle fatigue and pain , an inappropriately rapid heart rate in response to exercise ( tachycardia ), and heavy ( hyperpnea ) and rapid breathing ( tachypnea ), is experienced until sufficient energy is produced via oxidative phosphorylation , primarily from free fatty acids . [ 17 ] [ 18 ] [ 20 ]
Oxidative phosphorylation by free fatty acids is more easily achievable for light to moderate aerobic activity (below the aerobic threshold ), as high-intensity (fast-paced) aerobic activity relies more on muscle glycogen due to its high ATP consumption. Oxidative phosphorylation by free fatty acids is not achievable with isometric and other anaerobic activity (such as lifting weights), as contracted muscles restrict blood flow (leaving oxygen and blood-borne fuels unable to be delivered to muscle cells adequately for oxidative phosphorylation). [ 17 ] [ 18 ]
The second wind phenomenon in GSD-V individuals can be demonstrated by measuring heart rate during a 12 Minute Walk Test. [ 21 ] [ 22 ] [ 23 ] A "third wind" phenomenon is also seen in GSD-V individuals, where after approximately 2 hours, they see a further improvement of symptoms as the body becomes even more fat adapted. [ 24 ] [ 25 ]
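For illustration only, the sketch below (Python) flags the heart-rate signature described above (an early peak followed by a sustained drop at constant walking speed); the drop threshold and the sample readings are hypothetical assumptions, and this is not a clinical tool.

```python
# Illustrative sketch (not a clinical tool): flag a "second wind" signature in
# heart-rate samples taken at a constant walking speed, i.e. an early peak
# followed by a sustained drop. Threshold and data below are hypothetical.
def second_wind_detected(heart_rates, drop_bpm=10):
    """heart_rates: one sample per minute at constant speed."""
    if len(heart_rates) < 8:
        return False
    peak = max(heart_rates[:8])              # peak within the first ~8 minutes
    peak_idx = heart_rates.index(peak)
    later = heart_rates[peak_idx + 1:]
    # sustained fall of at least `drop_bpm` while the speed is unchanged
    return bool(later) and min(later) <= peak - drop_bpm

# Hypothetical 12-minute walk test, one reading per minute:
hr = [92, 110, 128, 140, 146, 148, 138, 127, 122, 120, 118, 117]
print(second_wind_detected(hr))  # True: rate peaks early, then drops >10 bpm
```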
Without muscle glycogen, it is important to get into second wind without going too fast, too soon, and without trying to push through the pain. Going too fast, too soon encourages protein metabolism over fat metabolism, and the muscle pain in this circumstance is a result of muscle damage due to a severely low ATP reservoir. [ 18 ] [ 19 ] Aiming for ATP production primarily from fat metabolism rather than protein metabolism is also why the preferred method for getting into second wind is to slowly increase speed during aerobic activity for 10 minutes, rather than to go quickly from the outset and then rest for 10 minutes before resuming. [ 18 ] In muscle glycogenoses, second wind is achieved gradually over 6–10 minutes from the beginning of aerobic activity, and individuals may struggle to get into second wind within that timeframe if they accelerate too soon or try to push through the pain. [ 18 ] By learning which types of activity allow second wind to be achieved and which external factors affect it (such as walking into a headwind, on sand, or on an icy surface), and by practising while paying attention to the sensations in their muscles and using a heart rate monitor to check that their heart rate does not climb too high, individuals can learn to get into second wind safely, to the point where it becomes almost second nature (much like riding a bicycle or driving). [ 18 ] [ 19 ]
Painkillers and muscle relaxants dull the sensations in the muscles that signal when a person is going too fast, so they are best taken after exercise; if they must be taken during exercise, extra attention to pace is needed. [ 18 ] Otherwise, individuals might find themselves in a spiral of taking painkillers or muscle relaxants, inadvertently causing muscle damage because they cannot feel the early warning signals their muscles are giving them, then having to take more because of the increased pain from muscle damage, then causing even more muscle damage while exercising on the increased dosage, which in turn causes more pain, and so on. [ 18 ] Due to the glycolytic block, those with McArdle disease and certain other muscle glycogenoses do not produce enough lactic acid to feel the usual kind of pain that unaffected individuals do during exercise, so the phrase “no pain, no gain” should be disregarded; muscle pain and tightness should be recognized as signals to slow down or rest briefly. [ 17 ] [ 18 ] [ 19 ]
Going too fast, too soon encourages protein metabolism over fat metabolism. [ 18 ] [ 19 ] Protein metabolism occurs through amino acid degradation, which converts amino acids into pyruvate ; the breakdown of protein to maintain the amino acid pool; the myokinase (adenylate kinase) reaction; and the purine nucleotide cycle . [ 26 ] Amino acids are vital to the purine nucleotide cycle as they are precursors for purines, nucleotides, and nucleosides; in addition, branched-chain amino acids are converted into glutamate and aspartate for use in the cycle ( see Aspartate and glutamate synthesis ). Severe breakdown of muscle leads to rhabdomyolysis and myoglobinuria . Excessive use of the myokinase reaction and purine nucleotide cycle leads to myogenic hyperuricemia . [ 27 ]
For McArdle disease (GSD-V), regular aerobic exercise utilizing "second wind" to enable the muscles to become aerobically conditioned, as well as anaerobic exercise (strength training) that follows the activity adaptations so as not to cause muscle injury, helps to improve exercise intolerance symptoms and maintain overall health. [ 17 ] [ 18 ] [ 28 ] [ 22 ] Studies have shown that regular low-moderate aerobic exercise increases peak power output, increases peak oxygen uptake ( VO 2 peak ), lowers heart rate, and lowers serum CK in individuals with McArdle disease. [ 28 ] [ 22 ] [ 29 ] [ 30 ]
Regardless of whether the patient experiences symptoms of muscle pain, muscle fatigue, or cramping, the achievement of second wind is demonstrated by the sign of an elevated heart rate dropping while the same treadmill speed is maintained. [ 22 ] Inactive patients experienced second wind, demonstrated through relief of typical symptoms and the drop in an elevated heart rate, while performing low-moderate aerobic exercise (walking or brisk walking). [ 22 ]
Conversely, patients who were regularly active did not experience the typical symptoms during low-moderate aerobic exercise (walking or brisk walking), but still demonstrated second wind by the drop in an elevated heart rate. [ 22 ] [ 31 ] For the regularly active patients, it took more strenuous exercise (very brisk walking/jogging or bicycling) to experience both the typical symptoms and relief thereof, along with the drop in an elevated heart rate, demonstrating second wind. [ 22 ] [ 31 ] [ 19 ]
In young children (<10 years old) with McArdle disease (GSD-V), it may be more difficult to detect the second wind phenomenon. They may show a normal heart rate, with normal or above normal peak cardio-respiratory capacity ( VO 2max ). [ 17 ] [ 32 ] That said, patients with McArdle disease typically experience symptoms of exercise intolerance before the age of 10 years, [ 17 ] with the median symptomatic age of 3 years. [ 33 ] [ 34 ]
Tarui disease ( GSD-VII ) patients do not experience the "second wind" phenomenon; instead, they are said to be "out-of-wind". [ 6 ] [ 17 ] [ 18 ] [ 35 ] However, they can achieve sub-maximal benefit from lipid metabolism of free fatty acids during aerobic activity following a warm-up. [ 17 ] | https://en.wikipedia.org/wiki/Second_wind |
Secondary is a term used in organic chemistry to classify various types of compounds (e.g. alcohols, alkyl halides, amines) or reactive intermediates (e.g. alkyl radicals, carbocations). An atom is considered secondary if it has two 'R' groups attached to it. [ 1 ] An 'R' group is a carbon-containing group such as a methyl group (CH 3 ). A secondary compound is most often classified on an alpha carbon (middle carbon) or a nitrogen. The word secondary comes from the root word 'second', which means two.
Secondary central atoms compared with primary , tertiary and quaternary central atoms.
This nomenclature can be used in many cases and further used to explain relative reactivity. The reactivity of molecules varies with respect to the attached atoms. Thus, primary, secondary, tertiary and quaternary molecules of the same functional group will have different reactivities.
Secondary alcohols have the formula RCH(OH)R' where R and R' are organyl. [ 2 ]
A secondary amine has the formula RR'NH where R and R' are organyl.
Secondary amides have the formula RC(O)NHR' where R can be H or organyl and R' is organyl; such amides can be deprotonated by loss of the single proton bonded to the nitrogen. [ citation needed ]
Secondary phosphines have two 'R' groups attached to a phosphorus atom and again, a P-H bond. [ 3 ]
"Secondary" is a general term used in chemistry that can be applied to many molecules, even more than the ones listed here; the principles seen in these examples can be further applied to other functional group containing molecules. The ones shown above are common molecules seen in many organic reactions. By classifying a molecule as secondary it then be compared with a molecule of primary or tertiary nature to determine the relative reactivity. | https://en.wikipedia.org/wiki/Secondary_(chemistry) |
A secondary carbon is a carbon atom bound to two other carbon atoms and has sp3 hybridization. [ 1 ] For this reason, secondary carbon atoms are found in almost all hydrocarbons having at least three carbon atoms (neopentane, for example, does not have any secondary carbon atoms). In unbranched alkanes , the inner carbon atoms are always secondary carbon atoms (see figure). [ 2 ] | https://en.wikipedia.org/wiki/Secondary_carbon |
The secondary cell wall is a structure found in many plant cells , located between the primary cell wall and the plasma membrane . The cell starts producing the secondary cell wall after the primary cell wall is complete and the cell has stopped expanding. [ 1 ] It is most prevalent in the ground tissue found in vascular plants, with collenchyma having little to no lignin, and sclerenchyma having lignified secondary cell walls. [ 2 ] [ 3 ]
Secondary cell walls provide additional protection to cells and rigidity and strength to the larger plant. These walls are constructed of layered sheaths of cellulose microfibrils, wherein the fibers are in parallel within each layer. The inclusion of lignin makes the secondary cell wall less flexible and less permeable to water than the primary cell wall. [ 4 ] In addition to making the walls more resistant to degradation, the hydrophobic nature of lignin within these tissues is essential for containing water within the vascular tissues that carry it throughout the plant.
The secondary cell wall consists primarily of cellulose , along with other polysaccharides , lignin , and glycoprotein . It sometimes consists of three distinct layers - S1, S2 and S3 - where the direction of the cellulose microfibrils differs between the layers. [ 1 ] The direction of the microfibrils is called the microfibril angle (MFA). In the secondary cell wall of tree fibres, a low microfibril angle is found in the S2 layer, while the S1 and S3 layers show a higher MFA. However, the MFA can also change depending on the loads on the tissue. It has been shown that in reaction wood the MFA in the S2 layer can vary. Tension wood has a low MFA, meaning that the microfibrils are oriented parallel to the axis of the fibre. In compression wood the MFA is high and reaches up to 45°. [ 5 ] These variations influence the mechanical properties of the cell wall. [ 6 ]
The secondary cell wall has different ratios of constituents compared to the primary wall . An example of this is that secondary wall in wood contains polysaccharides called xylan , whereas the primary wall contains the polysaccharide xyloglucan .
The cellulose fraction in secondary walls is also higher. [ 7 ] Pectins may also be absent from the secondary wall, and unlike primary walls, no structural proteins or enzymes have been identified. [ 4 ] Because of the low permeability of the secondary cell wall, cellular transport is carried out through openings in the wall called pits .
Wood consists mostly of secondary cell wall, and holds the plant up against gravity. [ 8 ]
Some secondary cell walls store nutrients, such as those in the cotyledons and the endosperm . These contain little cellulose, and mostly other polysaccharides . [ 1 ]
The first lignified secondary walls evolved 430 million years ago, creating the structure necessary for vascular plants. The genes used to form the constituents of secondary cell walls have also been found in Physcomitrella patens . This suggests that a duplication of these genes was the driver of secondary cell wall formation. [ 2 ]
The secondary cell wall plays an active role in pathogen resistance. It has been shown to accumulate anti-microbial peptides that prevent bacteria and fungi from entering the cell. Lignin has also been shown to prevent the infection of cells. Plant cells will increase the production of lignin-generating enzymes when stressed by some pathogens, further lignifying the secondary cell wall. Increased lignin content is particularly effective at resisting vascular pathogens that use the secondary xylem to spread. [ 9 ] | https://en.wikipedia.org/wiki/Secondary_cell_wall |
Secondary contact is the process in which two allopatrically distributed populations of a species are geographically reunited. This contact allows for the potential for the exchange of genes, dependent on how reproductively isolated the two populations have become. There are several primary outcomes of secondary contact: extinction of one species, fusion of the two populations back into one, reinforcement , the formation of a hybrid zone , and the formation of a new species through hybrid speciation . [ 1 ]
One of the two populations may go extinct due to competitive exclusion after secondary contact. This tends to happen when the two populations have strong reproductive isolation and significant overlap in their niche. A possible way to prevent extinction is if there is an advantage to being rare. For example, sexual imprinting and male-male competition [ clarification needed ] may prevent extinction. [ 2 ]
The population that goes extinct may leave behind some of its genes in the surviving population if they hybridize. For example, the secondary contact between Homo sapiens and Neanderthals , as well as the Denisovans , left traces of their genes in modern humans. However, if hybridization is so common that the resulting population receives a significant genetic contribution from both populations, the result should be considered a fusion.
The two populations may fuse back into one population. This tends to occur when there is little to no reproductive isolation between the two. During the process of fusion, a hybrid zone may occur. This is sometimes called introgressive hybridization or reverse speciation. Concerns have been raised that the homogenizing of the environment may contribute to more and more fusion, leading to the loss of biodiversity . [ 3 ]
A hybrid zone may appear during secondary contact, meaning there would be an area where the two populations cohabitate and produce hybrids, often arranged in a cline . The width of the zone may vary from tens of meters to several hundred kilometers. A hybrid zone may be stable, or it may not. Some shift in one direction, which may eventually lead to the extinction of the receding population. Some expand over time until the two populations fuse. [ 4 ]
Reinforcement may occur in hybrid zones.
Hybrid zones are important study systems for speciation. [ 4 ]
Reinforcement is the evolution towards increased reproductive isolation due to selection against hybridization. This occurs when the populations already have some reproductive isolation, but still hybridize to some extent. Because hybridization is costly (e.g. giving birth and raising a weak offspring), natural selection favors strong isolation mechanisms that can avoid such outcome, such as assortative mating. [ 5 ] Evidence for speciation by reinforcement has been accumulating since the 1990s.
Occasionally, the hybrids may be able to survive and reproduce, but not backcross with either of the two parental lineages, thus becoming a new species. This often occurs in plants through polyploidy , including in many important food crops. [ 6 ]
Occasionally, the hybrids may lead to the extinction of one or both parental lineages. | https://en.wikipedia.org/wiki/Secondary_contact |
Secondary electrons are electrons generated as ionization products. They are called 'secondary' because they are generated by other radiation (the primary radiation). This radiation can be in the form of ions , electrons, or photons with sufficiently high energy, i.e. exceeding the ionization potential . Photoelectrons can be considered an example of secondary electrons where the primary radiation consists of photons; in some discussions photoelectrons with higher energy (>50 eV ) are still considered "primary" while the electrons freed by the photoelectrons are "secondary".
Secondary electrons are also the main means of viewing images in the scanning electron microscope (SEM). The range of secondary electrons depends on the energy. Plotting the inelastic mean free path as a function of energy often shows characteristics of the "universal curve" [ 1 ] familiar to electron spectroscopists and surface analysts. This distance is on the order of a few nanometers in metals and tens of nanometers in insulators. [ 2 ] [ 3 ] This small distance allows such fine resolution to be achieved in the SEM.
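To illustrate the shape of that "universal curve", the sketch below (Python) evaluates one widely quoted empirical fit for the inelastic mean free path in elemental solids, in the style of Seah and Dench; the coefficients and the assumed monolayer thickness are assumptions made for illustration, not values taken from this article.

```python
# Hedged illustration: an empirical "universal curve" style fit for the electron
# inelastic mean free path (IMFP) in elemental solids, in monolayers,
# lambda(E) ~ 538/E**2 + 0.41*sqrt(a*E), with E in eV and a the monolayer
# thickness in nm (coefficients and a_nm are assumptions for illustration).
import math

def imfp_monolayers(energy_ev, a_nm=0.25):   # a_nm: assumed monolayer thickness
    return 538.0 / energy_ev**2 + 0.41 * math.sqrt(a_nm * energy_ev)

for e in (10, 50, 100, 500, 1000):
    lam_ml = imfp_monolayers(e)
    print(f"E = {e:5d} eV  ->  IMFP ~ {lam_ml:5.2f} monolayers "
          f"(~ {lam_ml * 0.25:.2f} nm)")
# The minimum near a few tens of eV and the nanometre scale reflect the
# "universal curve" behaviour described above.
```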
For SiO 2 , for a primary electron energy of 100 eV , the secondary electron range is up to 20 nm from the point of incidence. [ 4 ] [ 5 ] | https://en.wikipedia.org/wiki/Secondary_electrons |
In particle physics , secondary emission is a phenomenon where primary incident particles of sufficient energy , when hitting a surface or passing through some material, induce the emission of secondary particles. The term often refers to the emission of electrons when charged particles like electrons or ions in a vacuum tube strike a metal surface; these are called secondary electrons . [ 1 ] In this case, the number of secondary electrons emitted per incident particle is called secondary emission yield . If the secondary particles are ions, the effect is termed secondary ion emission . Secondary electron emission is used in photomultiplier tubes and image intensifier tubes to amplify the small number of photoelectrons produced by photoemission, making the tube more sensitive. It also occurs as an undesirable side effect in electronic vacuum tubes when electrons from the cathode strike the anode , and can cause parasitic oscillation .
Commonly used secondary emissive materials include
In a photomultiplier tube, [ 2 ] one or more electrons are emitted from a photocathode and accelerated towards a polished metal electrode (called a dynode ).
They hit the electrode surface with sufficient energy to release a number of electrons through secondary emission. These new electrons are then accelerated towards another dynode, and the process is repeated several times, resulting in an overall gain ('electron multiplication') in the order of typically one million and thus generating an electronically detectable current pulse at the last dynodes.
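As a rough illustration of where a gain on the order of one million comes from, the sketch below (Python) simply compounds an assumed per-dynode secondary emission yield over an assumed number of dynodes; both numbers are hypothetical, chosen only to show the arithmetic.

```python
# Minimal sketch: overall photomultiplier gain as the per-dynode secondary
# emission yield compounded over the dynode chain (gain = delta ** n_dynodes).
# delta = 4 and n_dynodes = 10 are illustrative assumptions, not measured values.
delta = 4          # secondary electrons released per incident electron
n_dynodes = 10     # number of dynodes in the chain

gain = delta ** n_dynodes
print(f"overall gain ~ {gain:.2e}")   # ~1e6, i.e. on the order of one million
```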
Similar electron multipliers can be used for detection of fast particles like electrons or ions .
In the 1930s special amplifying tubes were developed which deliberately "folded" the electron beam, by having it strike a dynode to be reflected into the anode. This had the effect of increasing the plate-grid distance for a given tube size, increasing the transconductance of the tube and reducing its noise figure. A typical such "orbital beam hexode" was the RCA 1630, introduced in 1939. Because the heavy electron current in such tubes damaged the dynode surface rapidly, their lifetime tended to be very short compared to conventional tubes. [ 3 ]
The first random access computer memory used a type of cathode-ray tube called the Williams tube that used secondary emission to store bits on the tube face. Another random access computer memory tube based on secondary emission was the Selectron tube . Both were made obsolete by the invention of magnetic-core memory .
Secondary emission can be undesirable such as in the tetrode thermionic valve (tube). In this instance the positively charged screen grid can accelerate the electron stream sufficiently to cause secondary emission at the anode ( plate ). This can give rise to excessive screen grid current. It is also partly responsible for this type of valve (tube), particularly early types with anodes not treated to reduce secondary emission, exhibiting a ' negative resistance ' characteristic, which could cause the tube to become unstable. This side effect could be put to use by using some older valves (e.g., type 77 pentode) as dynatron oscillators . This effect was prevented by adding a third grid to the tetrode, called the suppressor grid, to repel the electrons back toward the plate. This tube was called the pentode . | https://en.wikipedia.org/wiki/Secondary_emission |
In fluid dynamics , flow can be decomposed into primary flow plus secondary flow , a relatively weaker flow pattern superimposed on the stronger primary flow pattern. The primary flow is often chosen to be an exact solution to simplified or approximated governing equations, such as potential flow around a wing or geostrophic current or wind on the rotating Earth. In that case, the secondary flow usefully spotlights the effects of complicated real-world terms neglected in those approximated equations. For instance, the consequences of viscosity are spotlighted by secondary flow in the viscous boundary layer , resolving the tea leaf paradox . As another example, if the primary flow is taken to be a balanced flow approximation with net force equated to zero, then the secondary circulation helps spotlight acceleration due to the mild imbalance of forces. A smallness assumption about secondary flow also facilitates linearization .
In engineering , secondary flow also identifies an additional flow path.
The basic principles of physics and the Coriolis effect define an approximate geostrophic wind or gradient wind , balanced flows that are parallel to the isobars . Measurements of wind speed and direction at heights well above ground level confirm that wind matches these approximations quite well. However, nearer the Earth's surface, the wind speed is less than predicted by the barometric pressure gradient, and the wind direction is partly across the isobars rather than parallel to them. This flow of air across the isobars is a secondary flow, a difference from the primary flow, which is parallel to the isobars. Interference by surface roughness elements such as terrain, waves, trees and buildings causes drag on the wind and prevents the air from accelerating to the speed necessary to achieve balanced flow. As a result, the wind direction near ground level is partly parallel to the isobars in the region, and partly across the isobars in the direction from higher pressure to lower pressure.
As a result of the slower wind speed at the earth's surface, in a region of low pressure the barometric pressure is usually significantly higher at the surface than would be expected, given the barometric pressure at mid altitudes, due to Bernoulli's principle . Hence, the secondary flow toward the center of a region of low pressure is also drawn upward by the significantly lower pressure at mid altitudes. This slow, widespread ascent of the air in a region of low pressure can cause widespread cloud and rain if the air is of sufficiently high relative humidity .
In a region of high pressure (an anticyclone ) the secondary flow includes a slow, widespread descent of air from mid altitudes toward ground level, and then outward across the isobars. This descent causes a reduction in relative humidity and explains why regions of high pressure usually experience cloud-free skies for many days.
The flow around a tropical cyclone is often well approximated as parallel to circular isobars , such as in a vortex . A strong pressure gradient draws air toward the center of the cyclone, a centripetal force nearly balanced by Coriolis and centrifugal forces in gradient wind balance. The viscous secondary flow near the Earth's surface converges toward the center of the cyclone, ascending in the eyewall to satisfy mass continuity . As the secondary flow is drawn upward the air cools as its pressure falls, causing extremely heavy rainfall and releasing latent heat which is an important driver of the storm's energy budget.
Tornadoes and dust devils display localised vortex flow. Their fluid motion is similar to tropical cyclones but on a much smaller scale so that the Coriolis effect is not significant. The primary flow is circular around the vertical axis of the tornado or dust devil. As with all vortex flow, the speed of the flow is fastest at the core of the vortex. In accordance with Bernoulli's principle where the wind speed is fastest the air pressure is lowest; and where the wind speed is slowest the air pressure is highest. Consequently, near the center of the tornado or dust devil the air pressure is low. There is a pressure gradient toward the center of the vortex. This gradient, coupled with the slower speed of the air near the earth's surface, causes a secondary flow toward the center of the tornado or dust devil, rather than in a purely circular pattern.
The slower speed of the air at the surface prevents the air pressure from falling as low as would normally be expected from the air pressure at greater heights. This is compatible with Bernoulli's principle. The secondary flow is toward the center of the tornado or dust devil, and is then drawn upward by the significantly lower pressure several thousands of feet above the surface in the case of a tornado, or several hundred feet in the case of a dust devil. Tornadoes can be very destructive and the secondary flow can cause debris to be swept into a central location and carried to low altitudes.
Dust devils can be seen by the dust stirred up at ground level, swept up by the secondary flow and concentrated in a central location. The accumulation of dust then accompanies the secondary flow upward into the region of intense low pressure that exists outside the influence of the ground.
When water in a circular bowl or cup is moving in circular motion the water displays free-vortex flow – the water at the center of the bowl or cup spins at relatively high speed, and the water at the perimeter spins more slowly. The water is a little deeper at the perimeter and a little more shallow at the center, and the surface of the water is not flat but displays the characteristic depression toward the axis of the spinning fluid. At any elevation within the water the pressure is a little greater near the perimeter of the bowl or cup where the water is a little deeper, than near the center. The water pressure is a little greater where the water speed is a little slower, and the pressure is a little less where the speed is faster, and this is consistent with Bernoulli's principle .
There is a pressure gradient from the perimeter of the bowl or cup toward the center. This pressure gradient provides the centripetal force necessary for the circular motion of each parcel of water. The pressure gradient also accounts for a secondary flow of the boundary layer in the water flowing across the floor of the bowl or cup. The slower speed of the water in the boundary layer is unable to balance the pressure gradient. The boundary layer spirals inward toward the axis of circulation of the water. On reaching the center the secondary flow is then upward toward the surface, progressively mixing with the primary flow. Near the surface there may also be a slow secondary flow outward toward the perimeter.
The secondary flow along the floor of the bowl or cup can be seen by sprinkling heavy particles such as sugar, sand, rice or tea leaves into the water and then setting the water in circular motion by stirring with a hand or spoon. The boundary layer spirals inward and sweeps the heavier solids into a neat pile in the center of the bowl or cup. With water circulating in a bowl or cup, the primary flow is purely circular and might be expected to fling heavy particles outward to the perimeter. Instead, heavy particles can be seen to congregate in the center as a result of the secondary flow along the floor. [ 1 ]
Water flowing through a bend in a river must follow curved streamlines to remain within the banks of the river. The water surface is slightly higher near the concave bank than near the convex bank. (The "concave bank" has the greater radius. The "convex bank" has the smaller radius.) As a result, at any elevation within the river, water pressure is slightly higher near the concave bank than near the convex bank. A pressure gradient results from the concave bank toward the other bank. Centripetal forces are necessary for the curved path of each parcel of water, which is provided by the pressure gradient. [ 1 ]
The primary flow around the bend approximates a free vortex – fastest speed where the radius of curvature of the stream itself is smallest and slowest speed where the radius is largest. [ 2 ] The higher pressure near the concave (outer) bank is accompanied by slower water speed, and the lower pressure near the convex bank is accompanied by faster water speed, and all this is consistent with Bernoulli's principle .
A secondary flow is produced in the boundary layer along the floor of the river bed. The boundary layer is not moving fast enough to balance the pressure gradient and so its path is partly downstream and partly across the stream from the concave bank toward the convex bank, driven by the pressure gradient. [ 3 ] The secondary flow is then upward toward the surface where it mixes with the primary flow or moves slowly across the surface, back toward the concave bank. [ 4 ] This motion is called helicoidal flow .
On the floor of the river bed the secondary flow sweeps sand, silt and gravel across the river and deposits the solids near the convex bank, in similar fashion to sugar or tea leaves being swept toward the center of a bowl or cup as described above. [ 1 ] This process can lead to accentuation or creation of D-shaped islands, meanders through creation of cut banks and opposing point bars which in turn may result in an oxbow lake . The convex (inner) bank of river bends tends to be shallow and made up of sand, silt and fine gravel; the concave (outer) bank tends to be steep and elevated due to heavy erosion.
Different definitions have been put forward for secondary flow in turbomachinery, such as "Secondary flow in broad terms means flow at right angles to intended primary flow". [ 5 ]
Secondary flows occur in the main, or primary, flowpath in turbomachinery compressors and turbines (see also unrelated use of term for flow in the secondary air system of a gas turbine engine). They are always present when a wall boundary layer is turned through an angle by a curved surface. [ 6 ] They are a source of total pressure loss and limit the efficiency that can be achieved for the compressor or turbine. Modelling the flow enables blade, vane and end-wall surfaces to be shaped to reduce the losses. [ 7 ] [ 8 ]
Secondary flows occur throughout the impeller in a centrifugal compressor but are less marked in axial compressors due to shorter passage lengths. [ 9 ] Flow turning is low in axial compressors but boundary layers are thick on the annulus walls which gives significant secondary flows. [ 10 ] Flow turning in turbine blading and vanes is high and generates strong secondary flow. [ 11 ]
Secondary flows also occur in pumps for liquids and include inlet prerotation, or intake vorticity, tip clearance flow (tip leakage), flow separation when operating away from the design condition, and secondary vorticity. [ 12 ]
The following, from Dixon, [ 13 ] shows the secondary flow generated by flow turning in an axial compressor blade or stator passage. Consider flow with an approach velocity c1. The velocity profile will be non-uniform due to friction between the annulus wall and the fluid. The vorticity of this boundary layer is normal to the approach velocity $c_1$ and of magnitude $w_1 = \frac{dc_1}{dz}$, where z is the distance to the wall.
As the vorticity of each blade onto each other will be of opposite directions, a secondary vorticity will be generated. If the deflection angle, e, between the guide vanes is small, the magnitude of the secondary vorticity is represented as $w_s = -2e\left(\frac{dc_1}{dz}\right)$
This secondary flow will be the integrated effect of the distribution of secondary vorticity along the blade length.
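A minimal numerical sketch of the relation above is given below (Python); the inlet velocity profile, wall-distance grid and deflection angle are hypothetical values chosen only to show how the secondary vorticity distribution follows from the boundary-layer gradient.

```python
# Illustrative sketch of the relation w_s = -2*e*(dc1/dz) from the passage
# above: estimate the secondary vorticity from a sampled inlet velocity
# profile c1(z). The profile, grid and deflection angle are hypothetical.
import numpy as np

z = np.linspace(0.0, 0.02, 21)                 # distance from annulus wall, m
c1 = 100.0 * (z / z[-1]) ** (1.0 / 7.0)        # assumed 1/7th-power-law profile, m/s
c1[0] = 0.0                                    # enforce no-slip at the wall

e = np.radians(10.0)                           # assumed small deflection angle, rad

dc1_dz = np.gradient(c1, z)                    # inlet (normal) vorticity w1
w_s = -2.0 * e * dc1_dz                        # secondary vorticity distribution

print(f"near-wall secondary vorticity ~ {w_s[1]:.0f} 1/s")
```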
Gas turbine engines have a power-producing primary airflow passing through the compressor. They also have a substantial (25% of core flow in a Pratt & Whitney PW2000 ) [ 14 ] secondary flow obtained from the primary flow and which is pumped from the compressor and used by the secondary air system. Like the secondary flow in turbomachinery this secondary flow is also a loss to the power-producing capability of the engine.
Thrust-producing flow which passes through an engine's thermal cycle is called primary airflow. The use of cycle flow alone, as in the turbojet engine, was relatively short-lived. Airflow through a propeller or a turbomachine fan is called secondary flow and is not part of the thermal cycle. [ 15 ] This use of secondary flow reduces losses and increases the overall efficiency of the propulsion system. The secondary flow may be many times that through the engine.
During the 1960s, cruising at speeds between Mach 2 and 3 was pursued for commercial and military aircraft. Concorde , North American XB-70 and Lockheed SR-71 used ejector-type supersonic nozzles which had a secondary flow obtained from the inlet upstream of the engine compressor. The secondary flow was used to purge the engine compartment, cool the engine case, cool the ejector nozzle and cushion the primary expansion. The secondary flow was ejected by the pumping action of the primary gas flow through the engine nozzle and the ram pressure in the inlet. | https://en.wikipedia.org/wiki/Secondary_flow |
Secondary metabolites , also called specialised metabolites , secondary products , or natural products , are organic compounds produced by any lifeform, e.g. bacteria , archaea , fungi , animals , or plants , which are not directly involved in the normal growth , development , or reproduction of the organism. Instead, they generally mediate ecological interactions , which may produce a selective advantage for the organism by increasing its survivability or fecundity . Specific secondary metabolites are often restricted to a narrow set of species within a phylogenetic group. Secondary metabolites often play an important role in plant defense against herbivory and other interspecies defenses. Humans use secondary metabolites as medicines, flavourings, pigments, and recreational drugs. [ 2 ]
The term secondary metabolite was first coined by Albrecht Kossel , the 1910 Nobel Prize laureate for medicine and physiology. [ 3 ] 30 years later a Polish botanist Friedrich Czapek described secondary metabolites as end products of nitrogen metabolism . [ 4 ]
Secondary metabolites commonly mediate antagonistic interactions, such as competition and predation , as well as mutualistic ones such as pollination and resource mutualisms . Usually, secondary metabolites are confined to a specific lineage or even species, [ 5 ] though there is considerable evidence that horizontal transfer across species or genera of entire pathways plays an important role in bacterial (and, likely, fungal) evolution. [ 6 ] Research also shows that secondary metabolism can affect different species in varying ways. In the same forest, four separate species of arboreal marsupial folivores reacted differently to a secondary metabolite in eucalypts. [ 7 ] This shows that differing types of secondary metabolites can be the split between two herbivore ecological niches. [ 7 ] Additionally, certain species evolve to resist secondary metabolites and even use them for their own benefit. For example, monarch butterflies have evolved to be able to eat milkweed ( Asclepias ) despite the presence of toxic cardiac glycosides . [ 8 ] The butterflies are not only resistant to the toxins, but are actually able to benefit by actively sequestering them, which can lead to the deterrence of predators. [ 8 ]
Plants are capable of producing and synthesizing diverse groups of organic compounds and are divided into two major groups: primary and secondary metabolites. [ 9 ] Secondary metabolites are metabolic intermediates or products which are not essential to growth and life of the producing plants but rather required for interaction of plants with their environment and produced in response to stress. Their antibiotic, antifungal and antiviral properties protect the plant from pathogens. Some secondary metabolites such as phenylpropanoids protect plants from UV damage. [ 10 ] The biological effects of plant secondary metabolites on humans have been known since ancient times. The herb Artemisia annua , which contains artemisinin , was widely used in traditional Chinese medicine more than two thousand years ago. [ 11 ] Plant secondary metabolites are classified by their chemical structure and can be divided into four major classes: terpenes , phenylpropanoids (i.e. phenolics ), polyketides , and alkaloids . [ 12 ]
Terpenes constitute a large class of natural products which are composed of isoprene units. Terpenes are pure hydrocarbons, while terpenoids are oxygenated hydrocarbons. The general molecular formula of terpenes is (C 5 H 8 ) n , where 'n' is the number of linked isoprene units. Hence, terpenes are also termed isoprenoid compounds. Classification is based on the number of isoprene units present in their structure. Some terpenoids (i.e. many sterols ) are primary metabolites. Some terpenoids that may have originated as secondary metabolites have subsequently been recruited as plant hormones, such as gibberellins , brassinosteroids , and strigolactones .
Examples of terpenoids built from hemiterpene oligomerization are:
Phenolics are chemical compounds characterized by the presence of an aromatic ring structure bearing one or more hydroxyl groups . Phenolics are the most abundant secondary metabolites of plants, ranging from simple molecules such as phenolic acid to highly polymerized substances such as tannins . Classes of phenolics have been characterized on the basis of their basic skeleton.
An example of a plant phenol is:
Alkaloids are a diverse group of nitrogen-containing basic compounds. They are typically derived from plant sources and contain one or more nitrogen atoms. Chemically they are very heterogeneous. Based on chemical structures, they may be classified into two broad categories:
Examples of alkaloids produced by plants are:
Many alkaloids affect the central nervous system of animals by binding to neurotransmitter receptors .
Glucosinolates are secondary metabolites that include both sulfur and nitrogen atoms , and are derived from glucose , an amino acid and sulfate .
An example of a glucosinolate in plants is Glucoraphanin , from broccoli ( Brassica oleracea var. italica ).
Many drugs used in modern medicine are derived from plant secondary metabolites.
The two most commonly known terpenoids are artemisinin and paclitaxel . Artemisinin was widely used in traditional Chinese medicine and later rediscovered as a powerful antimalarial by the Chinese scientist Tu Youyou . She was awarded the Nobel Prize for the discovery in 2015. Currently, the malaria parasite , Plasmodium falciparum , has become resistant to artemisinin alone, and the World Health Organization recommends its use with other antimalarial drugs for successful therapy. Paclitaxel , the active compound found in Taxol , is a chemotherapy drug used to treat many forms of cancer including ovarian cancer , breast cancer , lung cancer , Kaposi sarcoma , cervical cancer , and pancreatic cancer . [ 15 ] Taxol was first isolated in 1973 from the bark of a coniferous tree, the Pacific yew . [ 16 ]
Morphine and codeine both belong to the class of alkaloids and are derived from opium poppies . Morphine was discovered in 1804 by the German pharmacist Friedrich Sertürner . It was the first active alkaloid extracted from the opium poppy . It is mostly known for its strong analgesic effects; however, morphine is also used to treat shortness of breath and addiction to stronger opiates such as heroin . [ 17 ] [ 18 ] Despite its positive effects on humans, morphine has very strong adverse effects, such as addiction, hormone imbalance or constipation. [ 18 ] [ 19 ] Due to its highly addictive nature, morphine is a strictly controlled substance around the world, used only in very severe cases, with some countries underusing it compared to the global average due to the social stigma around it. [ 20 ]
Codeine, also an alkaloid derived from the opium poppy, is considered the most widely used drug in the world according to the World Health Organization . It was first isolated in 1832 by the French chemist Pierre Jean Robiquet , also known for the discovery of caffeine and the widely used red dye alizarin . [ 22 ] Codeine is primarily used to treat mild pain and relieve coughing, [ 23 ] although in some cases it is used to treat diarrhea and some forms of irritable bowel syndrome . [ 23 ] Codeine has a potency of 0.1–0.15 relative to orally ingested morphine, [ 24 ] hence it is much safer to use. Although codeine can be extracted from the opium poppy, the process is not economically feasible due to the low abundance of pure codeine in the plant. Chemical methylation of the much more abundant morphine is the main method of production. [ 25 ]
Atropine is an alkaloid first found in Atropa belladonna , a member of the nightshade family . While atropine was first isolated in the 19th century, its medical use dates back to at least the fourth century B.C., when it was used for wounds, gout, and sleeplessness. Currently, atropine is administered intravenously to treat bradycardia and as an antidote to organophosphate poisoning . Overdosing of atropine may lead to atropine poisoning, which results in side effects such as blurred vision , nausea , lack of sweating, dry mouth and tachycardia . [ 26 ]
Resveratrol is a phenolic compound of the flavonoid class. It is highly abundant in grapes , blueberries , raspberries and peanuts . It is commonly taken as a dietary supplement for extending life and reducing the risk of cancer and heart disease; however, there is no strong evidence supporting its efficacy. [ 27 ] [ 28 ] Nevertheless, flavonoids are in general thought to have beneficial effects for humans. [ 29 ] Certain studies have shown that flavonoids have direct antibiotic activity. [ 30 ] A number of in vitro and limited in vivo studies have shown that flavonoids such as quercetin have synergistic activity with antibiotics and are able to suppress bacterial loads. [ 31 ]
Digoxin is a cardiac glycoside first derived by William Withering in 1785 from the foxglove (Digitalis) plant. It is typically used to treat heart conditions such as atrial fibrillation , atrial flutter or heart failure . [ 32 ] Digoxin can, however, have side effects such as nausea , bradycardia , diarrhea or even life-threatening arrhythmia .
The three main classes of fungal secondary metabolites are polyketides , nonribosomal peptides and terpenes . Although fungal secondary metabolites are not required for growth, they play an essential role in the survival of fungi in their ecological niche. [ 33 ] The best-known fungal secondary metabolite is penicillin , discovered by Alexander Fleming in 1928. Later, in 1945, Fleming, alongside Ernst Chain and Howard Florey , received a Nobel Prize for its discovery, which was pivotal in reducing the number of deaths in World War II by over 100,000. [ 34 ]
Examples of other fungal secondary metabolites are:
Lovastatin was the first FDA-approved secondary metabolite used to lower cholesterol levels. Lovastatin occurs naturally in low concentrations in oyster mushrooms , [ 35 ] red yeast rice , [ 36 ] and Pu-erh . [ 37 ] Lovastatin's mode of action is competitive inhibition of HMG-CoA reductase , the rate-limiting enzyme responsible for converting HMG-CoA to mevalonate .
Fungal secondary metabolites can also be dangerous to humans. Claviceps purpurea , a member of the ergot group of fungi typically growing on rye, can result in death when ingested. The build-up of poisonous alkaloids found in C. purpurea leads to symptoms such as seizures and spasms , diarrhea , paresthesias , itching , psychosis or gangrene . Currently, removal of ergot bodies requires putting the rye in a brine solution, in which healthy grains sink and infected ones float. [ 38 ]
Bacterial production of secondary metabolites starts in the stationary phase as a consequence of a lack of nutrients or in response to environmental stress. Secondary metabolite synthesis in bacteria is not essential for their growth; however, it allows them to better interact with their ecological niche. The main synthetic pathways of secondary metabolite production in bacteria are the β-lactam, oligosaccharide, shikimate, polyketide and non-ribosomal pathways. [ 39 ] Many bacterial secondary metabolites are toxic to mammals . When secreted, these poisonous compounds are known as exotoxins , whereas those found in the prokaryotic cell wall are endotoxins .
Examples of bacterial secondary metabolites are:
Archaea are capable of producing a variety of secondary metabolites, which may have significant biotechnological applications. [ 46 ] Despite this, the biosynthetic pathways for secondary metabolites in archaea are less well understood than those in bacteria. Notably, archaea often lack some biosynthesis genes commonly present in bacteria, which suggests that they may possess unique metabolic pathways for synthesizing these compounds. [ 46 ]
Extracellular polymeric substances can effectively adsorb and degrade hazardous organic chemicals. While these compounds are produced by various organisms, archaea are particularly promising for wastewater treatment due to their high tolerance to saline concentrations and their ability to grow anaerobically. [ 47 ]
Selective breeding was one of the first biotechnological techniques used to reduce unwanted secondary metabolites in food, such as naringin , which causes bitterness in grapefruit. [ 48 ] In some cases, increasing the content of secondary metabolites in a plant is the desired outcome. Traditionally, this was done using in-vitro plant tissue culture techniques, which allow control of growth conditions, mitigate the seasonality of plants, and protect them from parasites and harmful microbes. [ 49 ] Synthesis of secondary metabolites can be further enhanced by introducing elicitors, such as jasmonic acid , UV-B or ozone , into a plant tissue culture. These compounds induce stress in the plant, leading to increased production of secondary metabolites.
To further increase the yield of secondary metabolites, new approaches have been developed. A novel approach used by Evolva uses recombinant strains of the yeast S. cerevisiae to produce secondary metabolites normally found in plants. The first chemical compound successfully synthesised by Evolva was vanillin, widely used in the food and beverage industry as a flavouring. The process involves inserting the desired secondary metabolite gene into an artificial chromosome in the recombinant yeast, leading to synthesis of vanillin. Currently, Evolva produces a wide array of chemicals such as stevia , resveratrol or nootkatone .
With the development of recombinant technologies the Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity was signed in 2010. The protocol regulates the conservation and protection of genetic resources to prevent the exploitation of smaller and poorer countries. If genetic, protein or small molecule resources sourced from biodiverse countries become profitable a compensation scheme was put in place for the countries of origin. [ 50 ] | https://en.wikipedia.org/wiki/Secondary_metabolite |
Secondary poisoning , or relay toxicity , is the poisoning that results when one organism comes into contact with or ingests another organism that has poison in its system. It typically occurs when a predator eats an animal, such as a mouse , rat , or insect , that has previously been poisoned by a commercial pesticide . If the level of toxicity in the prey animal is sufficiently high, it will harm the predator.
Animals susceptible to secondary poisoning include mammals such as humans and pets such as cats and dogs , as well as wild birds . [ not verified in body ]
Various pesticides such as rodenticides may cause secondary poisoning. [ 1 ] Some pesticides require multiple feedings spanning several days; this increases the time a target organism continues to move after ingestion, raising the risk of secondary poisoning of a predator.
Most slow-acting poisons for pests have cumulative effects and so can cause secondary poisoning and environmental pollution. | https://en.wikipedia.org/wiki/Secondary_poisoning |
In mathematics , the secondary polynomials $\{q_n(x)\}$ associated with a sequence $\{p_n(x)\}$ of polynomials orthogonal with respect to a density $\rho(x)$ are defined by
$$q_n(x) = \int_{\mathbb{R}} \frac{p_n(t) - p_n(x)}{t - x}\, \rho(t)\, dt .$$
To see that the functions $q_n(x)$ are indeed polynomials, consider the simple example of $p_0(x) = x^3$. Then,
$$q_0(x) = \int_{\mathbb{R}} \frac{t^3 - x^3}{t - x}\, \rho(t)\, dt = \int_{\mathbb{R}} \left( t^2 + t x + x^2 \right) \rho(t)\, dt = x^2 \int_{\mathbb{R}} \rho(t)\, dt + x \int_{\mathbb{R}} t\, \rho(t)\, dt + \int_{\mathbb{R}} t^2\, \rho(t)\, dt ,$$
which is a polynomial in $x$ provided that the three integrals in $t$ (the moments of the density $\rho$ ) are convergent.
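As a quick check of the definition above, the following sketch (Python with SymPy) computes q₀ for p₀(x) = x³ using an example density; the choice ρ(t) = e^(−t²) is an assumption made only for illustration.

```python
# Minimal sketch: verify that q_0 is a polynomial for p_0(x) = x**3 using an
# example density rho(t) = exp(-t**2) (an illustrative assumption).
import sympy as sp

x, t = sp.symbols('x t', real=True)
rho = sp.exp(-t**2)
p0 = x**3

integrand = sp.cancel((p0.subs(x, t) - p0) / (t - x))   # = t**2 + t*x + x**2
q0 = sp.integrate(integrand * rho, (t, -sp.oo, sp.oo))

print(sp.expand(q0))   # sqrt(pi)*x**2 + sqrt(pi)/2, a polynomial in x
```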
| https://en.wikipedia.org/wiki/Secondary_polynomials |
Secondary spill containment is the containment of hazardous liquids in order to prevent pollution of soil and water . Common techniques include the use of spill berms to contain oil -filled equipment, fuel tanks , truck washing decks, or any other places or items that may leak hazardous liquids.
Secondary spill containment involves the sequestration of hazardous waste to prevent the contamination of local soils and water. [ 1 ]
United States Environmental Protection Agency (EPA) Spill Prevention, Control, and Countermeasure (SPCC) guidelines require that facilities that store large quantities of petroleum ( products ) must have a plan in place to contain a spill. [ 2 ] The purpose of the SPCC rule is to establish requirements for facilities to prevent a discharge of oil into navigable waters or adjoining shorelines. Within the electric utility industry , oil-filled transformers are often in need of secondary containment. [ 3 ] Outdated secondary containment techniques such as concrete catch-basins are quickly losing ground to solutions that offer more cost-effective cleanup in case of a spill or leak. [ citation needed ] One example of a more cost-effective method involves placing a geotextile boom filled with oil solidifying polymers around a transformer. These geotextile barriers allow for flow of water, but completely solidify oil in the event of a leak and effectively seal the spill. [ citation needed ] Many electrical utility companies are switching to this method because it saves them significant amounts of money when a spill occurs, because there is no need to employ vac-trucks afterwards to clean up a spill inside a catch-basin. [ citation needed ]
Portable containment berms are essentially a basin that can catch many different types of hazardous liquids and chemicals. They are a form of secondary spill containment useful for containing mobile equipment such as oil drums , trucks, tankers and trailers. Unlike geotextile berms, portable berms usually do not solidify oil. [ citation needed ] Many companies involved in fracking use spill containment berms to capture contaminated water that is a by-product of the operation. [ citation needed ] Each well site has multiple trucks that transport water used in deep well drilling procedure. [ citation needed ] | https://en.wikipedia.org/wiki/Secondary_spill_containment |
Secondary stability , also known as reserve stability , is a boat or ship's ability to right itself at large angles of heel (lateral tilt), as opposed to primary or initial stability , the boat's tendency to stay laterally upright when tilted to low (<10°) angles. [ 1 ]
The study of initial and secondary stability are part of naval architecture as applied to small watercraft (as distinct from the study of ship stability concerning large ships ).
A greater lateral width ( beam ) and more initial stability decrease the secondary stability: once tilted past a certain angle, the boat is conversely harder to restore to its stable upright position.
Johnson, Shelley (2009). The Complete Sea-Kayakers Handbook, Second Edition . Asbjorn Jokstad. p. 20. ISBN 978-0071748711 .
| https://en.wikipedia.org/wiki/Secondary_stability |
Secondary treatment (mostly biological wastewater treatment ) is the removal of biodegradable organic matter (in solution or suspension) from sewage or similar kinds of wastewater . [ 1 ] : 11 The aim is to achieve a certain degree of effluent quality in a sewage treatment plant suitable for the intended disposal or reuse option. A " primary treatment " step often precedes secondary treatment, whereby physical phase separation is used to remove settleable solids . During secondary treatment, biological processes are used to remove dissolved and suspended organic matter measured as biochemical oxygen demand (BOD). These processes are performed by microorganisms in a managed aerobic or anaerobic process depending on the treatment technology . Bacteria and protozoa consume biodegradable soluble organic contaminants (e.g. sugars , fats, and organic short-chain carbon molecules from human waste, food waste , soaps and detergent) while reproducing to form cells of biological solids. Secondary treatment is widely used in sewage treatment and is also applicable to many agricultural and industrial wastewaters .
Secondary treatment systems are classified as fixed-film or suspended-growth systems, and as aerobic versus anaerobic. Fixed-film or attached growth systems include trickling filters , constructed wetlands , bio-towers, and rotating biological contactors , where the biomass grows on media and the sewage passes over its surface. [ 2 ] : 11–13 The fixed-film principle has further developed into moving bed biofilm reactors (MBBR) [ 3 ] and Integrated Fixed-Film Activated Sludge (IFAS) processes. [ 4 ] Suspended-growth systems include activated sludge , which is an aerobic treatment system, based on the maintenance and recirculation of a complex biomass composed of micro-organisms ( bacteria and protozoa ) able to absorb and adsorb the organic matter carried in the wastewater. Constructed wetlands are also being used. An example for an anaerobic secondary treatment system is the upflow anaerobic sludge blanket reactor .
Fixed-film systems are more able to cope with drastic changes in the amount of biological material and can provide higher removal rates for organic material and suspended solids than suspended growth systems. [ 2 ] : 11–13 Most of the aerobic secondary treatment systems include a secondary clarifier to settle out and separate biological floc or filter material grown in the secondary treatment bioreactor.
Primary treatment settling removes about half of the solids and a third of the BOD from raw sewage. [ 7 ] Secondary treatment is defined as the "removal of biodegradable organic matter (in solution or suspension) and suspended solids. Disinfection is also typically included in the definition of conventional secondary treatment." [ 1 ] : 11 Biological nutrient removal is regarded by some sanitary engineers as secondary treatment and by others as tertiary treatment. [ 1 ] : 11
After this kind of treatment, the wastewater may be called secondary-treated wastewater. [ citation needed ]
Secondary treatment systems are classified as fixed-film or suspended-growth systems. A great number of secondary treatment processes exist (see List of wastewater treatment technologies ); the main ones are explained below.
In older plants and those receiving variable loadings, trickling filter beds are used where the settled sewage liquor is spread onto the surface of a bed made up of coke (carbonized coal), limestone chips or specially fabricated plastic media. Such media must have large surface areas to support the biofilms that form. The liquor is typically distributed through perforated spray arms. The distributed liquor trickles through the bed and is collected in drains at the base. These drains also provide a source of air which percolates up through the bed, keeping it aerobic. Biofilms of bacteria, protozoa and fungi form on the media’s surfaces and eat or otherwise reduce the organic content. [ 8 ] : 12 The filter removes a small percentage of the suspended organic matter, while the majority of the organic matter supports microorganism reproduction and cell growth from the biological oxidation and nitrification taking place in the filter. With this aerobic oxidation and nitrification, the organic solids are converted into biofilm grazed by insect larvae, snails, and worms which help maintain an optimal thickness. Overloading of beds may increase biofilm thickness leading to anaerobic conditions and possible bioclogging of the filter media and ponding on the surface. [ 9 ]
Activated sludge is a common suspended-growth method of secondary treatment. Activated sludge plants encompass a variety of mechanisms and processes using dissolved oxygen to promote growth of biological floc that substantially removes organic material. [ 8 ] : 12–13 Biological floc is an ecosystem of living biota subsisting on nutrients from the inflowing primary clarifier effluent. These mostly carbonaceous dissolved solids undergo aeration to be broken down and either biologically oxidized to carbon dioxide or converted to additional biological floc of reproducing micro-organisms. Nitrogenous dissolved solids (amino acids, ammonia , etc.) are similarly converted to biological floc or oxidized by the floc to nitrites , nitrates , and, in some processes, to nitrogen gas through denitrification . While denitrification is encouraged in some treatment processes, denitrification often impairs the settling of the floc causing poor quality effluent in many suspended aeration plants. Overflow from the activated sludge mixing chamber is sent to a clarifier where the suspended biological floc settles out while the treated water moves into tertiary treatment or disinfection. Settled floc is returned to the mixing basin to continue growing in primary effluent. Like most ecosystems, population changes among activated sludge biota can reduce treatment efficiency. Nocardia , a filamentous bacterium whose floating brown foam is sometimes misidentified as sewage fungus , is the best known of the many organisms that can overpopulate the floc and cause process upsets. Elevated concentrations of toxic wastes including pesticides, industrial metal plating waste, or extreme pH, can kill the biota of an activated sludge reactor ecosystem. [ 18 ]
One type of system that combines secondary treatment and settlement is the cyclic activated sludge (CASSBR), or sequencing batch reactor (SBR). Typically, activated sludge is combined with raw incoming sewage and then mixed and aerated. The settled sludge is run off and re-aerated before a proportion is returned to the headworks. [ 19 ]
The disadvantage of the CASSBR process is that it requires precise control of timing, mixing and aeration. This precision is typically achieved with computer controls linked to sensors. Such a complex, fragile system is unsuited to places where controls may be unreliable or poorly maintained, or where the power supply may be intermittent. [ citation needed ]
Extended aeration package plants use separate basins for aeration and settling; they are somewhat larger than SBR plants but have reduced timing sensitivity. [ 23 ]
Membrane bioreactors (MBR) are activated sludge systems using a membrane liquid-solid phase separation process. The membrane component uses low pressure microfiltration or ultrafiltration membranes and eliminates the need for a secondary clarifier or filtration. The membranes are typically immersed in the aeration tank; however, some applications utilize a separate membrane tank. One of the key benefits of an MBR system is that it effectively overcomes the limitations associated with poor settling of sludge in conventional activated sludge (CAS) processes. The technology permits bioreactor operation with considerably higher mixed liquor suspended solids (MLSS) concentration than CAS systems, which are limited by sludge settling. The process is typically operated at MLSS in the range of 8,000–12,000 mg/L, while CAS are operated in the range of 2,000–3,000 mg/L. The elevated biomass concentration in the MBR process allows for very effective removal of both soluble and particulate biodegradable materials at higher loading rates. Thus increased sludge retention times, usually exceeding 15 days, ensure complete nitrification even in extremely cold weather.
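As a rough back-of-the-envelope illustration of why the higher MLSS matters, the snippet below compares the reactor volume needed to treat the same BOD load at a fixed food-to-microorganism (F/M) ratio, using mid-range values of the CAS and MBR biomass concentrations quoted above. The F/M value and BOD load are assumed for illustration only; they are not taken from the text above.

```python
def reactor_volume_m3(bod_load_kg_per_day, f_to_m, mlss_mg_per_l):
    """Volume needed so that F/M = BOD load / (MLSS mass in the reactor).

    F/M is expressed as kg BOD per kg MLSS per day; MLSS in mg/L equals g/m^3,
    so MLSS mass (kg) = mlss_mg_per_l * volume_m3 / 1000.
    """
    return bod_load_kg_per_day * 1000.0 / (f_to_m * mlss_mg_per_l)

bod_load = 500.0   # kg BOD/day, assumed example load
f_to_m = 0.15      # kg BOD per kg MLSS per day, assumed typical design value

v_cas = reactor_volume_m3(bod_load, f_to_m, mlss_mg_per_l=2500)    # mid-range CAS
v_mbr = reactor_volume_m3(bod_load, f_to_m, mlss_mg_per_l=10000)   # mid-range MBR

print(f"CAS reactor volume: {v_cas:,.0f} m^3")
print(f"MBR reactor volume: {v_mbr:,.0f} m^3  ({v_cas / v_mbr:.1f}x smaller)")
```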
The cost of building and operating an MBR is often higher than conventional methods of sewage treatment. Membrane filters can be blinded with grease or abraded by suspended grit and lack a clarifier's flexibility to pass peak flows. However, the technology has become increasingly popular for reliably pretreated waste streams and has gained wider acceptance where infiltration and inflow have been controlled, and life-cycle costs have been steadily decreasing. The small footprint of MBR systems, and the high quality effluent produced, make them particularly useful for water reuse applications. [ 24 ]
Aerobic granular sludge can be formed by applying specific process conditions that favour slow-growing organisms such as PAOs (polyphosphate accumulating organisms) and GAOs (glycogen accumulating organisms). Another key part of granulation is selective wasting, whereby slow-settling, floc-like sludge is discharged as waste sludge and faster-settling biomass is retained. This process has been commercialized as the Nereda process . [ 25 ]
Aerated lagoons are a low-technology suspended-growth method of secondary treatment using motor-driven aerators floating on the water surface to increase atmospheric oxygen transfer to the lagoon and to mix the lagoon contents. The floating surface aerators are typically rated to deliver oxygen at a rate equivalent to 1.8 to 2.7 kg O 2 / kW·h . Aerated lagoons provide less effective mixing than conventional activated sludge systems and do not achieve the same performance level. The basins may range in depth from 1.5 to 5.0 metres. Surface-aerated basins achieve 80 to 90 percent removal of BOD with retention times of 1 to 10 days. [ 26 ] Many small municipal sewage systems in the United States (1 million gal./day or less) use aerated lagoons. [ 27 ]
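As a quick illustration of what the aerator rating implies, the sketch below estimates the aerator energy needed per day to supply oxygen for a given BOD load. It assumes, purely for illustration, roughly 1 kg of O 2 per kg of BOD removed; the actual ratio depends on the process and is not stated in the text above.

```python
def daily_aerator_energy_kwh(bod_removed_kg_per_day,
                             o2_per_bod=1.0,          # kg O2 per kg BOD, assumed ratio
                             transfer_rate=2.0):      # kg O2 per kWh, within the 1.8-2.7 range
    """Estimate aerator energy use from the oxygen demand of the BOD load."""
    oxygen_needed = bod_removed_kg_per_day * o2_per_bod
    return oxygen_needed / transfer_rate

# Example: a small lagoon removing 200 kg BOD/day (assumed figure).
energy = daily_aerator_energy_kwh(200.0)
print(f"Approximate aerator energy: {energy:.0f} kWh/day "
      f"(~{energy / 24:.1f} kW of continuously running aerators)")
```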
The United States Environmental Protection Agency (EPA) defined secondary treatment based on the performance observed at late 20th-century bioreactors treating typical United States municipal sewage. [ 31 ] Secondary treated sewage is expected to produce effluent with a monthly average of less than 30 mg/L BOD and less than 30 mg/L suspended solids . Weekly averages may be up to 50 percent higher. A sewage treatment plant providing both primary and secondary treatment is expected to remove at least 85 percent of the BOD and suspended solids from domestic sewage. The EPA regulations describe stabilization ponds as providing treatment equivalent to secondary treatment removing 65 percent of the BOD and suspended solids from incoming sewage and discharging approximately 50 percent higher effluent concentrations than modern bioreactors. The regulations also recognize the difficulty of meeting the specified removal percentages from combined sewers , dilute industrial wastewater, or Infiltration/Inflow . [ 32 ]
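The numeric benchmarks above lend themselves to a simple compliance check. The function below is a minimal sketch, not the actual EPA regulatory procedure (which involves defined averaging periods and sampling rules); it only applies the 30 mg/L monthly averages, the 50-percent-higher weekly allowance, and the 85 percent removal requirement quoted in the text.

```python
def meets_secondary_treatment_limits(monthly_avg_bod, monthly_avg_tss,
                                     weekly_avg_bod, weekly_avg_tss,
                                     influent_bod, influent_tss):
    """Rough check of effluent data against the benchmarks quoted above:
    30 mg/L monthly averages, weekly averages up to 50% higher (45 mg/L),
    and at least 85% removal of BOD and suspended solids."""
    monthly_ok = monthly_avg_bod < 30 and monthly_avg_tss < 30
    weekly_ok = weekly_avg_bod < 45 and weekly_avg_tss < 45
    removal_ok = ((1 - monthly_avg_bod / influent_bod) >= 0.85
                  and (1 - monthly_avg_tss / influent_tss) >= 0.85)
    return monthly_ok and weekly_ok and removal_ok

# Example with assumed influent and effluent concentrations (mg/L):
print(meets_secondary_treatment_limits(
    monthly_avg_bod=20, monthly_avg_tss=22,
    weekly_avg_bod=35, weekly_avg_tss=40,
    influent_bod=200, influent_tss=220))   # True: limits and 85% removal both met
```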
Process upsets are temporary decreases in treatment plant performance caused by significant population change within the secondary treatment ecosystem. [ 33 ] Conditions likely to create upsets include toxic chemicals and unusually high or low concentrations of organic waste BOD providing food for the bioreactor ecosystem.
Measures creating uniform wastewater loadings tend to reduce the probability of upsets. Fixed-film or attached growth secondary treatment bioreactors are similar to a plug flow reactor model circulating water over surfaces colonized by biofilm , while suspended-growth bioreactors resemble a continuous stirred-tank reactor keeping microorganisms suspended while water is being treated. Secondary treatment bioreactors may be followed by a physical phase separation to remove biological solids from the treated water. Upset duration of fixed film secondary treatment systems may be longer because of the time required to recolonize the treatment surfaces. Suspended growth ecosystems may be restored from a population reservoir. Activated sludge recycle systems provide an integrated reservoir if upset conditions are detected in time for corrective action. Sludge recycle may be temporarily turned off to prevent sludge washout during peak storm flows when dilution keeps BOD concentrations low. Suspended growth activated sludge systems can be operated in a smaller space than fixed-film trickling filter systems that treat the same amount of water; but fixed-film systems are better able to cope with drastic changes in the amount of biological material and can provide higher removal rates for organic material and suspended solids than suspended growth systems. [ 8 ] : 11–13
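The reactor analogy can be made concrete with the textbook first-order removal formulas for the two idealized reactor types. The rate constant and residence time below are assumed illustrative values, not data from the text; the point is only that, for identical first-order kinetics, the plug flow model predicts a different residual fraction than the stirred-tank model.

```python
import math

def remaining_fraction_pfr(k_per_hour, residence_time_h):
    """Ideal plug flow reactor: C/C0 = exp(-k*tau) for first-order removal."""
    return math.exp(-k_per_hour * residence_time_h)

def remaining_fraction_cstr(k_per_hour, residence_time_h):
    """Ideal continuous stirred-tank reactor: C/C0 = 1 / (1 + k*tau)."""
    return 1.0 / (1.0 + k_per_hour * residence_time_h)

k, tau = 0.5, 6.0   # assumed first-order rate constant (1/h) and residence time (h)
print(f"PFR  residual fraction: {remaining_fraction_pfr(k, tau):.3f}")
print(f"CSTR residual fraction: {remaining_fraction_cstr(k, tau):.3f}")
```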
Wastewater flow variations may be reduced by limiting stormwater collection by the sewer system, and by requiring industrial facilities to discharge batch process wastes to the sewer over a time interval rather than immediately after creation. Discharge of appropriate organic industrial wastes may be timed to sustain the secondary treatment ecosystem through periods of low residential waste flow. [ 34 ] Sewage treatment systems experiencing holiday waste load fluctuations may provide alternative food to sustain secondary treatment ecosystems through periods of reduced use. Small facilities may prepare a solution of soluble sugars. Others may find compatible agricultural wastes, or offer disposal incentives to septic tank pumpers during low use periods.
Waste containing biocide concentrations exceeding the secondary treatment ecosystem tolerance level may kill a major fraction of one or more important ecosystem species. BOD reduction normally accomplished by that species temporarily ceases until other species reach a suitable population to utilize that food source, or the original population recovers as biocide concentrations decline. [ 35 ]
Waste containing unusually low BOD concentrations may fail to sustain the secondary treatment population required for normal waste concentrations. The reduced population surviving the starvation event may be unable to completely utilize available BOD when waste loads return to normal. Dilution may be caused by addition of large volumes of relatively uncontaminated water such as stormwater runoff into a combined sewer. Smaller sewage treatment plants may experience dilution from cooling water discharges, major plumbing leaks, firefighting, or draining large swimming pools.
A similar problem occurs as BOD concentrations drop when low flow increases waste residence time within the secondary treatment bioreactor. Secondary treatment ecosystems of college communities acclimated to waste loading fluctuations from student work/sleep cycles may have difficulty surviving school vacations. Secondary treatment systems accustomed to routine production cycles of industrial facilities may have difficulty surviving industrial plant shutdown. Populations of species feeding on incoming waste initially decline as concentration of those food sources decrease. Population decline continues as ecosystem predator populations compete for a declining population of lower trophic level organisms. [ 36 ]
High BOD concentrations initially exceed the ability of the secondary treatment ecosystem to utilize available food. Ecosystem populations of aerobic organisms increase until oxygen transfer limitations of the secondary treatment bioreactor are reached. Secondary treatment ecosystem populations may shift toward species with lower oxygen requirements, but failure of those species to use some food sources may produce higher effluent BOD concentrations. More extreme increases in BOD concentrations may drop oxygen concentrations before the secondary treatment ecosystem population can adjust, and cause an abrupt population decrease among important species. Normal BOD removal efficiency will not be restored until populations of aerobic species recover after oxygen concentrations rise to normal.
Biological oxidation processes are sensitive to temperature: between 0 °C and 40 °C, the rate of biological reactions increases with temperature. Most surface-aerated vessels operate between 4 °C and 32 °C. [ 26 ] | https://en.wikipedia.org/wiki/Secondary_treatment |
In mathematics , particularly differential topology , the secondary vector bundle structure refers to the natural vector bundle structure ( TE , p ∗ , TM ) on the total space TE of the tangent bundle of a smooth vector bundle ( E , p , M ) , induced by the push-forward p ∗ : TE → TM of the original projection map p : E → M .
This gives rise to a double vector bundle structure ( TE , E , TM , M ) .
In the special case ( E , p , M ) = ( TM , π TM , M ) , where TE = TTM is the double tangent bundle , the secondary vector bundle ( TTM , ( π TM ) ∗ , TM ) is isomorphic to the tangent bundle ( TTM , π TTM , TM ) of TM through the canonical flip .
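In adapted coordinates the canonical flip has a compact expression: writing a point of TTM as ( x , v , X , Y ), where ( x , v ) are the coordinates induced on TM by a chart on M and ( X , Y ) are the corresponding fibre coordinates on TTM, the flip exchanges the two middle slots. The display below is the standard textbook coordinate formula, given here only as a quick sketch of the map referred to above.

```latex
\kappa_{M} : TTM \to TTM , \qquad
\kappa_{M}(x, v, X, Y) = (x, X, v, Y) , \qquad
\kappa_{M} \circ \kappa_{M} = \operatorname{id}_{TTM} .
```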
Let ( E , p , M ) be a smooth vector bundle of rank N . Then the preimage ( p ∗ ) −1 ( X ) ⊂ TE of any tangent vector X in TM in the push-forward p ∗ : TE → TM of the canonical projection p : E → M is a smooth submanifold of dimension 2 N , and it becomes a vector space with the push-forwards

+_{*} : T(E \times E) \to TE , \qquad \lambda_{*} : TE \to TE

of the original addition and scalar multiplication

+ : E \times E \to E , \qquad \lambda : \mathbb{R} \times E \to E

as its vector space operations. The triple ( TE , p ∗ , TM ) becomes a smooth vector bundle with these vector space operations on its fibres.
Let ( U , φ ) be a local coordinate system on the base manifold M with φ ( x ) = ( x 1 , ..., x n ) and let

\psi : W \to \mathbb{R}^{n+N} , \qquad \psi\bigl(v^{k} e_{k}(x)\bigr) := \bigl(x^{1},\ldots,x^{n},v^{1},\ldots,v^{N}\bigr)

be a coordinate system on W := p −1 ( U ) ⊂ E adapted to it. Then

p_{*}\Bigl(X^{k}\tfrac{\partial}{\partial x^{k}}\Big|_{v} + Y^{\ell}\tfrac{\partial}{\partial v^{\ell}}\Big|_{v}\Bigr) = X^{k}\tfrac{\partial}{\partial x^{k}}\Big|_{p(v)} ,

so the fiber of the secondary vector bundle structure at X in T x M is of the form

(p_{*})^{-1}(X) = \Bigl\{\, X^{k}\tfrac{\partial}{\partial x^{k}}\Big|_{v} + Y^{\ell}\tfrac{\partial}{\partial v^{\ell}}\Big|_{v} \;:\; v\in E_{x},\; Y^{1},\ldots,Y^{N}\in\mathbb{R} \,\Bigr\} .

Now it turns out that

\chi\Bigl(X^{k}\tfrac{\partial}{\partial x^{k}}\Big|_{v} + Y^{\ell}\tfrac{\partial}{\partial v^{\ell}}\Big|_{v}\Bigr) = \bigl(X,\, (v^{1},\ldots,v^{N},Y^{1},\ldots,Y^{N})\bigr)

gives a local trivialization χ : TW → TU × R 2 N for ( TE , p ∗ , TM ) , and the push-forwards of the original vector space operations read in the adapted coordinates as

\Bigl(X^{k}\tfrac{\partial}{\partial x^{k}}\Big|_{v} + Y^{\ell}\tfrac{\partial}{\partial v^{\ell}}\Big|_{v}\Bigr) +_{*} \Bigl(X^{k}\tfrac{\partial}{\partial x^{k}}\Big|_{w} + Z^{\ell}\tfrac{\partial}{\partial v^{\ell}}\Big|_{w}\Bigr) = X^{k}\tfrac{\partial}{\partial x^{k}}\Big|_{v+w} + (Y^{\ell}+Z^{\ell})\tfrac{\partial}{\partial v^{\ell}}\Big|_{v+w}

and

\lambda_{*}\Bigl(X^{k}\tfrac{\partial}{\partial x^{k}}\Big|_{v} + Y^{\ell}\tfrac{\partial}{\partial v^{\ell}}\Big|_{v}\Bigr) = X^{k}\tfrac{\partial}{\partial x^{k}}\Big|_{\lambda v} + \lambda Y^{\ell}\tfrac{\partial}{\partial v^{\ell}}\Big|_{\lambda v} ,

so each fibre ( p ∗ ) −1 ( X ) ⊂ TE is a vector space and the triple ( TE , p ∗ , TM ) is a smooth vector bundle.
The general Ehresmann connection TE = HE ⊕ VE on a vector bundle ( E , p , M ) can be characterized in terms of the connector map

\kappa : T_{v}E \to E_{p(v)} , \qquad \kappa(X) := \mathrm{vl}_{v}^{-1}\bigl(\mathrm{vpr}_{v}(X)\bigr) ,

where vl v : E → V v E is the vertical lift , and vpr v : T v E → V v E is the vertical projection . The mapping

\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E) , \qquad \nabla_{X}\sigma := \kappa\bigl(\sigma_{*}(X)\bigr) ,

induced by an Ehresmann connection is a covariant derivative on Γ( E ) in the sense that

\nabla_{X+Y}\sigma = \nabla_{X}\sigma + \nabla_{Y}\sigma , \qquad \nabla_{\lambda X}\sigma = \lambda\nabla_{X}\sigma ,

\nabla_{X}(\sigma+\eta) = \nabla_{X}\sigma + \nabla_{X}\eta , \qquad \nabla_{X}(\lambda\sigma) = \lambda\nabla_{X}\sigma ,

\nabla_{X}(f\sigma) = X[f]\,\sigma + f\,\nabla_{X}\sigma

if and only if the connector map is linear with respect to the secondary vector bundle structure ( TE , p ∗ , TM ) on TE . Then the connection is called linear . Note that the connector map is automatically linear with respect to the tangent bundle structure ( TE , π TE , E ) .
Secotioid fungi produce an intermediate fruiting body form that is between the mushroom-like hymenomycetes and the closed bag-shaped gasteromycetes , where an evolutionary process of gasteromycetation has started but not run to completion. Secotioid fungi may or may not have opening caps, but in any case they often lack the vertical geotropic orientation of the hymenophore needed to allow the spores to be dispersed by wind, and the basidiospores are not forcibly discharged or are otherwise prevented from being dispersed (e.g. the gills are completely enclosed and never exposed, as in the secotioid form of Lentinus tigrinus ). Note that some mycologists do not consider a species to be secotioid unless it has lost ballistospory . [ 1 ]
Historically agarics and boletes (which bear their spores on a hymenium of gills or tubes respectively) were classified quite separately from the gasteroid fungi , such as puff-balls and truffles , of which the spores are formed in a large mass enclosed in an outer skin. However, in spite of this apparently very great difference in form, recent mycological research, both at microscopic [ 2 ] and molecular [ 3 ] level has shown that sometimes species of open mushrooms are much more closely related to particular species of gasteroid fungi than they are to each other. Fungi which do not open up to let their spores be dispersed in the air, but which show a clear morphological relation to agarics or boletes, constitute an intermediate form and are called secotioid . [ 2 ]
The word is derived from the name of the genus Secotium , which was defined in 1840 by Kunze for a South African example, S. gueinzii , which is the type species. In the following years numerous secotioid species were added to this genus, including ones which according to modern taxonomy belong to other genera or families. [ 4 ] [ 5 ] [ 6 ]
On a microscopic scale, secotioid fungi do not expel their spores forcibly from the basidium; their spores are "statismospores". Like gasteroid fungi, secotioid species rely on animals such as rodents or insects to distribute their spores.
It can at times be disadvantageous for a mushroom to open up and free its spores in the usual way. If this development is aborted, a secotioid form arises, perhaps to be followed eventually by an evolutionary progression to a fully gasteroid form. This type of progression is called gasteromycetation and seems to have happened several times independently starting from various genera of "normal" mushrooms. This means that the secotioid and also the gasteroid fungi are polyphyletic . According to the paper by Thiers, [ 2 ] in certain climates and certain seasons, it may be an advantage to remain closed, because moisture can be conserved in that way.
For example, the gasteroid genus Hymenogaster has been shown to be closely related to agaric genera such as Hebeloma , which were formerly placed in family Cortinariaceae or Strophariaceae . This has been shown by DNA analysis and is also indicated on a microscopic scale by the resemblance of the spores and basidia. According to a current classification system, Hebeloma now belongs to family Hymenogastraceae , and is considered more closely related to the closed Hymenogaster fungi than, for instance, to the ordinary mushrooms in genus Cortinarius . [ 7 ] [ 8 ]
A similar case is the well-known "Deceiver" mushroom Laccaria laccata which is now classified in the Hydnangiaceae , Hydnangium being a gasteroid genus.
It has been found that a change at a single gene locus in the gilled mushroom Lentinus tigrinus causes it to have a closed fruiting body. This suggests that the emergence of a secotioid species may not require many mutations. [ 3 ]
There is a spectrum of secotioid species, ranging from nearly open forms to fully closed forms.
The adjective " sequestrate " is sometimes used as a general term to mean "either secotioid or gasteroid".
Cortinarius is a very widespread genus of agarics, but also contains some secotioid species, such as C. leucocephalus , C. coneae and C. cartilagineus .
Pholiota nubigena is a secotioid species found early in the year at high altitude in the western United States. It was originally assigned to Secotium and later to a more specific secotioid genus Nivatogastrium , but in fact it is closely allied to Pholiota squarrosa [ 2 ] and it has now been moved to genus Pholiota itself, although the latter consists primarily of agarics . [ 9 ] [ 10 ] [ 11 ]
Gastroboletus is a secotioid bolete genus where the fruiting bodies may or may not open, but in any case the tubes are not aligned vertically as in a true bolete. [ 2 ]
Secotioid mushrooms of the genus Endoptychum (such as E. agaricoides , E. arizonicum ) have now been moved to the genus Chlorophyllum , which is closely related to Macrolepiota .
Agaricus deserticola is a secotioid species of Agaricus (the genus of common cultivated mushrooms) which at one time was placed in the genus Secotium . Similarly, Agaricus inapertus was formerly known as Endoptychum depressum until molecular analysis revealed it to be closely aligned with Agaricus . [ 12 ] | https://en.wikipedia.org/wiki/Secotioid |
Gene and protein identifiers (human / mouse): 6422 / 20377 (Entrez); ENSG00000104332 / ENSMUSG00000031548 (Ensembl); Q8N474 / Q8C4U3 (UniProt); NM_003012 / NM_013834 (RefSeq mRNA); NP_003003 / NP_038862 (RefSeq protein).
Secreted frizzled-related protein 1 , also known as SFRP1 , is a protein which in humans is encoded by the SFRP1 gene . [ 5 ]
Secreted frizzled-related protein 1 (SFRP1) is a member of the SFRP family that contains a cysteine -rich domain homologous to the putative Wnt-binding site of Frizzled proteins. SFRPs act as soluble modulators of Wnt signaling . SFRP1 and SFRP5 may be involved in determining the polarity of photoreceptor cells in the retina. SFRP1 is expressed in several human tissues, with the highest levels in the heart. [ 5 ]
The Secreted frizzled-related protein (SFRP) family consists of five secreted glycoproteins in humans (SFRP1, SFRP2 , SFRP3 , SFRP4 , SFRP5 ) that act as extracellular signaling ligands. Each SFRP is ~300 amino acids in length and contains a cysteine-rich domain (CRD) that shares 30–50% sequence homology with the CRD of Frizzled (Fz) receptors. SFRPs are able to bind Wnt proteins and Fz receptors in the extracellular compartment. The interaction between SFRPs and Wnt proteins prevents the latter from binding the Fz receptors. [ 6 ] SFRPs are also able to downregulate Wnt signaling by the formation of an inhibitory complex with the Frizzled receptors. [ 7 ] The Wnt pathway plays a key role in embryonic development, cell differentiation and cell proliferation. It has been shown that the deregulation of this critical developmental pathway occurs in several human tumor entities. [ 8 ]
SFRP1 is a 35 kDa prototypical member of the SFRP family. It acts as a biphasic modulator of Wnt signaling, counteracting Wnt-induced effects at high concentrations and promoting them at lower concentrations. [ 9 ] It is located in a chromosomal region (8p12-p11.1) that is frequently deleted in breast cancer and is thought to harbour a tumor suppressor gene. [ 10 ]
Tumor suppressor genes are commonly divided into three types. [ 11 ] SFRP1 appears to fall into the first category: genes that affect cell growth.
The role of SFRP1 as a tumor suppressor has been proposed in many cancers, based on its loss in patient tumors. Its frequent inactivation by methylation-induced silencing is consistent with it behaving as a tumor suppressor. [ 12 ] Also, the SFRP1 gene is located in a region on chromosome 8 that is frequently lost in many cancer types. [ 13 ] Expression levels of several targets of the Wnt signaling pathways are increased in tumor tissue compared with normal, and the expression of SFRP1 is lost in patient tumor samples. The role for the Wnt/β-catenin signaling in cancer has been well defined: β-catenin drives transcription of genes that contribute to the tumor phenotype by regulating processes such as proliferation, survival and invasion. [ 12 ]
Gumz et al. showed that SFRP1 expression in UMRC3 cells (clear cell renal cell carcinoma cell line) resulted in a growth-inhibited phenotype. SFRP1 expression not only reduced the expression of Wnt target genes, but also markedly inhibited tumor cell growth in culture, soft agar and xenografts in athymic nude mice. Growth in culture and anchorage-independent growth were inhibited in SFRP1-expressing UMRC3 cells. The growth-inhibitory effects of SFRP1 were due primarily to decreased cell proliferation rather than an increase in apoptosis. [ 12 ] This was consistent with the effect of SFRP1 on cellular proliferation as seen in prostate cancer, where retroviral-mediated expression of SFRP1 resulted in inhibited cellular proliferation but had no effect on apoptosis. [ 14 ] Also, restoration of SFRP1 expression attenuated the malignant phenotype of cRCC; moreover, other studies showed reexpression of SFRP1 resulted in decreased colony formation in colon and lung cancer models. [ 15 ] [ 16 ]
The Wnt signaling pathways are initiated by the binding of the Wnt ligand to the Fz receptor. There are three different molecular pathways downstream of the Wnt/Fz interaction. The majority of research has focused on the Wnt/ β-catenin pathway (also known as the "canonical" Wnt pathway), which manages cell fate determination by regulating gene expression. The Wnt/Ca 2+ and Wnt/polarity pathways are known as the "non-canonical pathways". The decision of which pathway is activated most likely depends on which Wnt ligand and Fz receptor are present, as well as the cellular context. Nineteen Wnt ligands and ten different members of the Fz seven-transmembrane receptor family have been described in the human genome. As a result, a large variety of responses could be initiated from the Wnt/Fz interactions. [ 17 ]
The Wnt/β-catenin pathway starts with the binding of Wnt to a receptor complex encompassing a Fz receptor and LRP co-receptor. After Wnt binds, an intracellular protein named Dishevelled (Dvl) is activated via phosphorylation. β-catenin degradation complexes in the cytoplasm are composed of adenomatous polyposis coli (APC), glycogen synthase kinase 3β (GSK3β) and Axin. APC promotes the degradation of β-catenin by increasing the affinity of the degradation complex to β-catenin. Axin is a scaffolding protein which holds the degradation complex together. The activated Dvl associates with Axin and prevents GSK3β and casein kinase 1α (CK1α) from phosphorylating critical substrates, such as β-catenin. Phosphorylation of β-catenin marks the protein for ubiquitylation and rapid degradation by proteasomes. Thus, the binding of Wnt to the receptor results in a non-phosphorylated form of β-catenin which localizes to the nucleus and, after displacing the Groucho corepressor protein, forms a complex with Tcf/Lef transcription factors and co-activators (such as CREB binding protein) and induces the expression of downstream target genes. [ 17 ]
β-catenin is actively stabilized in over 50% of breast cancers and its nuclear localization correlates with poor patient prognosis. Several target genes of the Wnt signaling pathway, such as cyclin D1, are activated in a significant proportion of breast tumours. [ 6 ] It has been shown that SFRP1 transcription can be driven by β-catenin in normal intestinal epithelial cells. Neoplastic epithelial cells were treated with lithium chloride , which inhibits GSK3β and thus stabilizes β-catenin. Lithium chloride is widely used to mimic Wnt signaling. Rather than suppressing SFRP1 expression, β-catenin/TCF activity was associated with the induction of SFRP1. This is consistent with a negative feedback response restricting the exposure of a normal cell to a prolonged Wnt growth factor signal. [ 18 ]
Hedgehog signaling in the intestinal epithelium represses the canonical Wnt signaling to restrict expression of Wnt target genes to stem or progenitor cells. It was thought that the Hedgehog signaling pathway does this via the induction of the secreted-type Wnt inhibitor. Katoh et al. searched for the GLI-binding site within the promoter region of Wnt inhibitor genes. GLI are transcription factors that activate the transcription of Hedgehog target genes. The GLI-binding site was identified within the 5’-flanking promoter region of the human SFRP1 gene. The GLI-binding site was conserved among promoter regions of mammalian SFRP1 orthologs . These facts indicate that the SFRP1 gene was identified as the evolutionarily conserved target of the Hedgehog-GLI signaling pathway. SFRP1 was found to be expressed in mesenchymal cells. Hedgehog is secreted from differentiated epithelial cells to induce SFRP1 expression in mesenchymal cells, which keeps differentiated epithelial cells away from the effect of canonical Wnt signaling. Thus, SFRP1 is most likely the Hedgehog target to confine canonical Wnt signaling within stem or progenitor cells. Epigenetic CpG hypermethylation of the SFRP1 promoter during chronic persistent inflammation and aging leads to the occurrence of gastrointestinal cancers, such as colorectal cancer and gastric cancer, through the breakdown of Hedgehog-dependent Wnt signal inhibition. [ 19 ]
Regions of the short arm of chromosome 8 are frequently deleted in a range of solid tumors, indicating that tumor suppressor genes reside at these loci. [ 20 ] Caldwell et al. have shown frequent interstitial deletions in a series of prostate cancers, squamous cell head and neck cancers and colorectal carcinomas. There was also an association between 8p11.2 deletion and local invasion. [ 13 ]
The first coding exon contains the whole of the frizzled-related cysteine-rich domain (CRD), while the third exon ( COOH-terminal domain) contains the netrin-related domain. Netrin is a regulator of apoptosis; the SFRP1 netrin-related motif is also found in a range of other proteins and is thought to mediate protein-protein interactions. The middle exon most likely represents a spacer between the first and third exons. There are 2 introns present within the coding sequence of SFRP1. [ 13 ]
Three out of 10 advanced colorectal tumors had mutations leading to premature termination of the SFRP1 translation product. The mutations were two single-base deletions (26delG and 67delG) and a single-base change (G450A), which generates an in-frame stop codon. These three mutations were found within the first exon, which was shown previously to be sufficient for Wnt antagonist activity by itself [26, 32]. Of the 10 tumors analyzed, no truncating mutations were found in the second or third exons of SFRP1. [ 13 ]
An additional 51 tumors were analyzed via direct sequence analysis, yielding 49 clearly interpretable results. Only the first exon was sequenced for stop codon mutations, but none were found. This indicates that point mutation is not a frequent method of inactivation of the SFRP1 gene in colorectal cancer. [ 13 ]
The primary translation product of SFRP1 contains an atypical signal sequence, in which a chain of 15 hydrophilic amino acids precedes the hydrophobic domain. In 7 tumors without the truncating mutation, the retained SFRP1 allele contained an in-frame three-base insertion after nucleotide 37. This is thought to lead to an extra alanine in the protein after codon 13. However, no significant association was found between the development of colorectal cancer and the presence of the 3-bp insertion. [ 13 ]
An unspliced form of SFRP1 is the dominant form in the lung and liver, leading to an extended protein. This extended sequence contains a hydrophobic region that may act as a transmembrane anchor, modifying the localization of the protein. This may then influence the function of SFRP1 in different tissues because an untethered protein may be more effective in antagonizing Wnt signaling to tumor cells than would a membrane-bound form. [ 13 ]
DNA methylation involves the addition of a methyl group to the carbon-5 position of the cytosine ring in the CpG dinucleotide, converting it to methylcytosine. This process is catalyzed by DNA methyltransferase. In numerous cancers, the CpG islands of selected genes are aberrantly methylated (hypermethylated), which results in transcriptional repression. This may be an alternate mechanism of gene inactivation. [ 7 ]
Multiple genes have been discovered to be frequently methylated in cancers and leukemias. [ 23 ] [ 24 ] More specifically, the deregulation of the Wnt signaling pathway has been implicated in a wide array of cancers [ 25 ] [ 26 ] that is mainly seen as a result of loss-of-function mutations of APC and axin or as a gain-of-function mutation of CTNNB1 (B-catenin). [ 25 ] [ 27 ] The GC content of the SFRP1 promoter in humans is 56.3%. [ 19 ]
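Since promoter methylation targets CpG dinucleotides, CpG-island character is usually summarized by the GC content and the observed/expected CpG ratio of a sequence. The snippet below computes both for a made-up example fragment; the 50% GC and 0.6 observed/expected thresholds are the commonly cited Gardiner-Garden and Frommer criteria, mentioned here as general background rather than values taken from the text above.

```python
def cpg_island_stats(seq):
    """Return GC fraction and observed/expected CpG ratio for a DNA sequence."""
    seq = seq.upper()
    n = len(seq)
    g, c = seq.count("G"), seq.count("C")
    cpg = seq.count("CG")
    gc_fraction = (g + c) / n
    # Observed/expected CpG: (#CpG * N) / (#C * #G), guarding against division by zero.
    obs_exp = (cpg * n) / (c * g) if c and g else 0.0
    return gc_fraction, obs_exp

# Made-up promoter-like fragment, for illustration only.
fragment = "GCGCGGCTCGCGAGCGCCGCGCATCGCGGGCGCTAGCGCGATCGCGCGCC"
gc, oe = cpg_island_stats(fragment)
print(f"GC content: {gc:.1%}, CpG observed/expected: {oe:.2f}")
print("CpG-island-like" if gc > 0.5 and oe > 0.6 else "not CpG-island-like")
```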
It has been found that the overexpression of β-catenin may lead to enhanced proliferation in myeloma plasma cells; thus, soluble Wnt inhibitors are potential tumor suppressor genes and, if inactivated, may contribute to myeloma pathogenesis. This led Chim et al. to investigate the role of aberrant gene methylation of a panel of soluble Wnt antagonists, including SFRP1. Complete methylation led to silencing of respective genes (no transcripts), whereas absence of gene methylation was associated with constitutive gene expression. Methylation of soluble Wnt antagonists would be important in the pathogenesis of multiple myeloma if Wnt signaling were regulated by an autocrine loop of Wnt and Fz. If an autocrine loop exists, then both the ligand (Wnt) and receptor (Fzd) should be simultaneously expressed in myeloma cells and growth of tumour cells should be inhibited upon addition of SFRP1. Chim et al. demonstrated simultaneous expression of Wnt and Fzd in myeloma plasma cells. Moreover, treatment with recombinant SFRP1 inhibited the growth of myeloma cells in a dose-dependent manner. These findings implicate soluble Wnt inhibitors as tumor suppressors that could be inactivated by methylation. [ 7 ]
Veeck and colleagues found all of their eight breast cancer cell lines had complete methylation in the SFRP1 promoter region, while no methylation was detectable in non-malignant cell lines. After treatment with 5-Aza-2’-deoxycytidine (DAC), an inhibitor of DNA methyltransferase, SFRP1 expression was restored in all four treated breast cancer cell lines, supporting the hypothesis of methylation-mediated SFRP1 gene silencing in breast cancer. [ 6 ]
Furthermore, the transcriptional silencing brought about by DNA methylation, which occurs through hypermethylation of CpG-rich islands in the promoter regions of genes, can cooperate with histone deacetylation to change chromatin structure to a repressed form. Lo and colleagues looked at the effects of DAC and trichostatin A (TSA, which selectively inhibits the mammalian histone deacetylase family of enzymes) on cancer cells. In 4 breast cancer cell lines, SFRP1 expression was significantly restored after treatment with DAC alone. TSA, only in combination with DAC, had a slightly enhanced effect on SFRP1 expression in these cell lines. A different breast cancer cell line (SKBR3) showed loss of SFRP1 expression without significant methylation of the SFRP1 promoter. Lo et al. hypothesized that this may be due to silencing via histone deacetylation. After SKBR3 cells were treated with TSA, SFRP1 expression was restored in a dose- and time-dependent manner. Yet another breast cancer cell line (T47D) required both DAC and TSA to upregulate SFRP1 expression. This indicates that T47D cells are tightly regulated by two layers of epigenetic control (DNA methylation and histone deacetylation) and relieving inhibition by both mechanisms is necessary for reactivation of SFRP1. This study shows that both the epigenetic mechanisms, DNA methylation and histone deacetylation, are involved in silencing of SFRP1. [ 28 ]
Uterine leiomyomas are the most common tumors found in the female genital tract. Leiomyomas have been reported to grow under the influence of ovarian steroids ( estrogen and progesterone ). Aberrations of wnt signaling, as well as SFRPs, can contribute to the neoplastic process. This led Fukuhara et al. to investigate whether SFRP1 is associated with the pathogenesis of uterine leiomyomas by analyzing mRNA and protein expression of SFRP1 in leiomyomas and matched normal myometrium. [ 29 ] The following outlines their findings:
Twenty-three out of 25 patients showed higher expression of SFRP1 mRNA in leiomyoma than in the matched normal myometrium . During the menstrual cycle, the level of SFRP1 mRNA in leiomyoma was highest in the follicular phase. Gonadotropin releasing hormone analogue (GnRHa) decreases estrogen secretion from the ovary. Patients treated with GnRHa presurgically showed the lowest expression of SFRP1 in both myometrial and leiomyoma tissues. These findings suggest that SFRP1 could be under the control of estrogen. Gene expression of estrogen receptors in leiomyomas is stronger than that in the myometrium. This suggests that leiomyomas possess increased sensitivity to E2 (estradiol, a form of estrogen) and the estrogen-dependent expression of SFRP1 in leiomyoma could be associated with the growth and pathogenesis of leiomyoma. [ 29 ]
Smooth muscle cells cultured from the myometrium showed no significant induction of SFRP1 mRNA in response to treatment with E2 and/or progesterone. Conversely, cells cultured from leiomyomas showed significant dose-dependent induction of SFRP1 mRNA in response to treatment with E2; however, progesterone had no effect on SFRP1 even when coapplied with E2. [ 29 ]
Both hypoxic conditions and serum deprivation induced increased expression of SFRP1 in leiomyoma cells. However, the smooth muscle cells cultured from the myometrium showed no significant correlation between SFRP1 expression and oxygen concentration. This suggests that SFRP1 may protect the cells from the damage caused by these stresses. [ 29 ]
The formation of new blood capillaries is an important component of pathological tissue repair in response to ischemia. The angiogenic process is complex and involves endothelial cell (EC) movement and proliferation. [ 30 ]
SFRP1 has been shown to have a role in new vascularization after an ischemic event and to act as a potent angiogenic factor. In vitro SFRP1 modulated the EC angiogenic response (migration, differentiation) and in vivo SFRP1 stimulated neovascularization in plug or tumor models. The directed movements of EC during de novo vessel formation are coordinated through cellular adhesion mechanisms, cytoskeletal reorganization and association with elevated expression of angiogenic factors such as the key factor vascular endothelial growth factor. The regulation of the EC cytoskeleton is critical to EC spreading and motility. SFRP1 was found to have a major role in mediating EC spreading by regulating reorganization of the actin network and focal contact formations. [ 30 ]
In vivo data support a critical role for SFRP1 in ischemia-induced angiogenesis in adults. Adenovirus-mediated expression of SFRP1 impaired the canonical Wnt/Fzd pathway in the early phase of ischemia and as a result reduced vascular cell proliferation and delayed vessel formation. When SFRP1 was induced specifically in ECs along the kinetics of ischemia repair, a biphasic response was seen: a delay in capillary formation until day 15 and then an increase in vascular formation at day 25. This indicates that SFRP1 can fine-tune the outcome of Wnt/Fzd signaling at different steps in the course of neovessel formation. [ 30 ]
Loss of SFRP1 protein expression is associated with poor overall survival (OS) in patients with early breast cancer (pT1 tumours); this indicates that SFRP1 may be a putative tumor suppressor gene. SFRP1 methylation has been shown to be an independent risk factor for OS. [ 6 ] Veeck and colleagues demonstrate, via Kaplan-Meier analysis, that clear SFRP1 promoter methylation is associated with unfavourable prognosis. Furthermore, a correlation between SFRP1 methylation and OS in breast cancer is dependent on a gene dose effect. In order for the OS to be affected, a sufficient amount of tumour cells may be required to lose SFRP1 expression due to promoter methylation. [ 6 ]
Heparin and heparan sulfate (HS) are mammalian glycosaminoglycans with the highest negative charge density of known biological macromolecules. They bind by ionic interactions with a variety of proteins. Heparin is widely used as an injectable anticoagulant . SFRP1 is a heparin-binding protein, with the heparin-binding domain located within the C-terminal region of the protein. In vitro studies show that SFRP1 is stabilized by heparin, suggesting that heparin or endogenous heparan-sulfate proteoglycan (HSPG) has the potential to promote SFRP1/Wnt binding by serving as a scaffold to facilitate interaction between SFRP1 and Wnt proteins. [ 31 ] [ 32 ] Lowering HSPG levels in tissue has been shown to impair Wnt signaling in vivo, supporting the idea that HSPG plays an important role in Wnt signaling regulation. Furthermore, SFRP1 is tyrosine -sulfated at two N-terminal tyrosines; this modification is, however, inhibited by heparin. Tyrosine sulfation could partially destabilize the SFRP1 protein, which is supported by previous studies showing that SFRP1 is susceptible to degradation in the absence of heparin. [ 31 ] The finding that heparin can inhibit intracellular post-translational modification of SFRP1 was surprising. This indicates that heparin may inhibit the process of tyrosine sulfation, for example, by tyrosyl-protein sulfotransferase enzymes or sulfate donor pathways. Since heparin is highly negatively charged and cannot permeate the membrane, it must activate a signal transduction pathway to carry out its effect. It is well known that fibroblast growth factors (FGFs) bind heparin with relatively high affinity. HSPGs have also been shown to be involved in FGF cell signaling. [ 33 ] [ 34 ] Zhong et al. revealed a specificity of FGFs and FGF receptors on SFRP1 accumulation, demonstrating that FGFs and their receptors are involved in post-translational modification of SFRP1. [ 31 ] As stated above, SFRP1 has been shown to attenuate the malignant phenotype and decrease the growth of tumors. Thus, heparin is a potential drug that could be used to stabilize and accumulate SFRP1 in cancer cells. [ 31 ]
Aberrant promoter hypermethylation of SFRP1 occurs frequently during the pathogenesis of human cancers and has been found to be one of the primary mechanisms in SFRP1 down-regulation. Methylation-specific PCR (MSP) is able to detect this epigenetic change and could be used for cancer detection. [ 35 ] Detection and quantification of promoter CpG methylation in body fluid is both feasible and noninvasive. Combined MSP analyses of multiple genes in voided urine could provide a reliable way to improve cancer diagnosis. [ 36 ]
Urakami et al. were able to detect cancer cells using conventional MSP analysis of Wnt-antagonist genes (including SFRP1) in voided urine of patients with bladder tumor. Their results showed a high percentage of identical methylation with tumor-tissue DNA. Conversely, no aberrant methylation was detected in >90% of urine DNA from normal controls. This demonstrates that methylation detection of SFRP1 is both feasible and reliable and that the urine methylation score (M score) of Wnt antagonist genes could be used as an excellent noninvasive diagnostic biomarker for bladder tumor. Furthermore, the M score of Wnt-antagonist genes may reflect the presence of a bladder tumor progressing to invasive disease, which would signal the need for more aggressive treatment. An optimal hypermethylation panel of Wnt-antagonist genes could contribute significantly to early detection of bladder tumor and predict bladder tumor aggressiveness. In fact, the methylation of SFRP1 genes in fecal DNA isolated from stool samples has been used to screen for colorectal cancer. [ 37 ]
Immunotherapy is a treatment used to produce immunity to a disease or to enhance the resistance of the immune system to an active disease process, such as cancer.
Wnt and Fz genes are frequently overexpressed in head and neck squamous cell carcinoma (HNSCC). Treatment of a HNSCC cell line (SNU 1076) with anti-Wnt1 antibodies reduced the activity of the Wnt/Fz dependent transcription factor LEF /TCF and diminished the expression of cyclin D1 and B-catenin proteins. Similar to anti-Wnt antibodies, treatment with recombinant SFRP1 inhibited growth of SNU 1076 cells as well. This suggests that Wnt and Fz receptors may be attractive targets for immunotherapy and drug therapy of HNSCC. [ 38 ]
Epigenetic therapy is the use of drugs or other epigenome-influencing techniques to treat medical conditions.
It has recently been regarded as a promising therapy for non-small-cell lung cancer (NSCLC). [ 39 ] As described above, SFRP1 is epigenetically downregulated in NSCLC and has recently been proposed as a target of epigenetic therapy. [ 40 ]
Secreted frizzled-related protein 1 has been shown to interact with FZD6 . [ 32 ] | https://en.wikipedia.org/wiki/Secreted_frizzled-related_protein_1 |
Secretion is the movement of material from one point to another, such as a secreted chemical substance from a cell or gland . In contrast, excretion is the removal of certain substances or waste products from a cell or organism. The classical mechanism of cell secretion is via secretory portals at the plasma membrane called porosomes . [ 1 ] Porosomes are permanent cup-shaped lipoprotein structures embedded in the cell membrane, where secretory vesicles transiently dock and fuse to release intra-vesicular contents from the cell.
Secretion in bacterial species means the transport or translocation of effector molecules, for example proteins , enzymes or toxins (such as cholera toxin in pathogenic bacteria, e.g. Vibrio cholerae ), from the interior ( cytoplasm or cytosol ) of a bacterial cell to its exterior. Secretion is a very important mechanism in bacterial functioning and operation in their natural surrounding environment, allowing adaptation and survival.
Eukaryotic cells , including human cells , have a highly evolved process of secretion. Proteins targeted for the outside are synthesized by ribosomes docked to the rough endoplasmic reticulum (ER). As they are synthesized, these proteins translocate into the ER lumen , where they are glycosylated and where molecular chaperones aid protein folding . Misfolded proteins are usually identified here and retrotranslocated by ER-associated degradation to the cytosol , where they are degraded by a proteasome . The vesicles containing the properly folded proteins then enter the Golgi apparatus .
In the Golgi apparatus, the glycosylation of the proteins is modified and further post-translational modifications , including cleavage and functionalization, may occur. The proteins are then moved into secretory vesicles which travel along the cytoskeleton to the edge of the cell. More modification can occur in the secretory vesicles (for example insulin is cleaved from proinsulin in the secretory vesicles).
Eventually, the vesicle fuses with the cell membrane at a porosome, in a process called exocytosis , releasing its contents into the cell's surroundings. [ 2 ]
Strict biochemical control is maintained over this sequence by usage of a pH gradient: the pH of the cytosol is 7.4, the ER's pH is 7.0, and the cis-golgi has a pH of 6.5. Secretory vesicles have pHs ranging between 5.0 and 6.0; some secretory vesicles evolve into lysosomes , which have a pH of 4.8.
There are many proteins like FGF1 (aFGF), FGF2 (bFGF), interleukin-1 (IL1) etc. which do not have a signal sequence. They do not use the classical ER-Golgi pathway. These are secreted through various nonclassical pathways.
At least four nonclassical (unconventional) protein secretion pathways have been described. [ 3 ] They include:
In addition, proteins can be released from cells by mechanical or physiological wounding [ 4 ] and through non-lethal, transient oncotic pores in the plasma membrane induced by washing cells with serum-free media or buffers. [ 5 ]
Many human cell types have the ability to be secretory cells. They have a well-developed endoplasmic reticulum , and Golgi apparatus to fulfill this function. Tissues that produce secretions include the gastrointestinal tract which secretes digestive enzymes and gastric acid , the lungs which secrete surfactants , and sebaceous glands which secrete sebum to lubricate the skin and hair. Meibomian glands in the eyelid secrete meibum to lubricate and protect the eye.
Secretion is not unique to eukaryotes – it is also present in bacteria and archaea . ATP binding cassette (ABC) type transporters are common to the three domains of life. Some secreted proteins are translocated across the cytoplasmic membrane by the SecYEG translocon , one of two translocation systems, which requires the presence of an N-terminal signal peptide on the secreted protein. Others are translocated across the cytoplasmic membrane by the twin-arginine translocation pathway (Tat). Gram-negative bacteria have two membranes, thus making secretion topologically more complex. There are at least six specialized secretion systems in Gram-negative bacteria. [ 6 ]
Type I secretion is a chaperone-dependent secretion system employing the Hly and Tol gene clusters. The process begins as a leader sequence on the protein to be secreted (HlyA) is recognized and binds HlyB on the membrane. This signal sequence is extremely specific for the ABC transporter. The HlyAB complex stimulates HlyD, which begins to uncoil and reaches the outer membrane, where TolC recognizes a terminal molecule or signal on HlyD. HlyD recruits TolC to the inner membrane and HlyA is excreted outside of the outer membrane via a long-tunnel protein channel.
The type I secretion system transports various molecules, from ions and drugs to proteins of various sizes (20 – 900 kDa). The molecules secreted vary in size from the small Escherichia coli peptide colicin V (10 kDa) to the Pseudomonas fluorescens cell adhesion protein LapA of 520 kDa. [ 7 ] The best characterized are the RTX toxins and the lipases. Type I secretion is also involved in export of non-proteinaceous substrates like cyclic β-glucans and polysaccharides.
Proteins secreted through the type II system, or main terminal branch of the general secretory pathway, depend on the Sec or Tat system for initial transport into the periplasm . Once there, they pass through the outer membrane via a multimeric (12–14 subunits) complex of pore forming secretin proteins. In addition to the secretin protein, 10–15 other inner and outer membrane proteins compose the full secretion apparatus, many with as yet unknown function. Gram-negative type IV pili use a modified version of the type II system for their biogenesis, and in some cases certain proteins are shared between a pilus complex and type II system within a single bacterial species.
The type III secretion system (T3SS) is homologous to the basal body in bacterial flagella. It is like a molecular syringe through which a bacterium (e.g. certain types of Salmonella , Shigella , Yersinia , Vibrio ) can inject proteins into eukaryotic cells. The low Ca 2+ concentration in the cytosol opens the gate that regulates T3SS. One such mechanism to detect low calcium concentration has been illustrated by the lcrV (Low Calcium Response) antigen utilized by Yersinia pestis , which is used to detect low calcium concentrations and elicits T3SS attachment. The Hrp system in plant pathogens injects harpins and pathogen effector proteins through similar mechanisms into plants. This secretion system was first discovered in Yersinia pestis and showed that toxins could be injected directly from the bacterial cytoplasm into the cytoplasm of its host's cells rather than simply be secreted into the extracellular medium. [ 8 ]
The type IV secretion system is homologous to the conjugation machinery of bacteria, the conjugative pili . It is capable of transporting both DNA and proteins. It was discovered in Agrobacterium tumefaciens , which uses this system to introduce the T-DNA portion of the Ti plasmid into the plant host, which in turn causes the affected area to develop into a crown gall (tumor). Helicobacter pylori uses a type IV secretion system to deliver CagA into gastric epithelial cells, which is associated with gastric carcinogenesis. [ 9 ] Bordetella pertussis , the causative agent of whooping cough, secretes the pertussis toxin partly through the type IV system. Legionella pneumophila , the causing agent of legionellosis (Legionnaires' disease) utilizes a type IVB secretion system , known as the icm/dot ( i ntra c ellular m ultiplication / d efect in o rganelle t rafficking genes) system, to translocate numerous effector proteins into its eukaryotic host. [ 10 ] The prototypic Type IVA secretion system is the VirB complex of Agrobacterium tumefaciens . [ 11 ]
Protein members of this family are components of the type IV secretion system. They mediate intracellular transfer of macromolecules via a mechanism ancestrally related to that of bacterial conjugation machineries. [ 12 ] [ 13 ]
The Type IV secretion system (T4SS) is the general mechanism by which bacterial cells secrete or take up macromolecules. Their precise mechanism remains unknown. T4SS is encoded on Gram-negative conjugative elements in bacteria . T4SS are cell envelope-spanning complexes, or, in other words, 11–13 core proteins that form a channel through which DNA and proteins can travel from the cytoplasm of the donor cell to the cytoplasm of the recipient cell. T4SS also secrete virulence factor proteins directly into host cells as well as taking up DNA from the medium during natural transformation . [ 14 ]
TraC, in particular, consists of a three-helix bundle and a loose globular appendage. [ 13 ]
T4SS has two effector proteins: ATS-1 ( Anaplasma translocated substrate 1) and AnkA (ankyrin repeat domain-containing protein A). Additionally, the T4SS coupling protein VirD4 binds to VirE2. [ 15 ]
Also called the autotransporter system, [ 16 ] type V secretion involves use of the Sec system for crossing the inner membrane. Proteins which use this pathway have the capability to form a beta-barrel with their C-terminus which inserts into the outer membrane, allowing the rest of the peptide (the passenger domain) to reach the outside of the cell. Often, autotransporters are cleaved, leaving the beta-barrel domain in the outer membrane and freeing the passenger domain. Some researchers believe remnants of the autotransporters gave rise to the porins which form similar beta-barrel structures. [ citation needed ] A common example of an autotransporter that uses this secretion system is the Trimeric Autotransporter Adhesins . [ 17 ]
Type VI secretion systems were originally identified in 2006 by the group of John Mekalanos at the Harvard Medical School (Boston, USA) in two bacterial pathogens, Vibrio cholerae and Pseudomonas aeruginosa . [ 18 ] [ 19 ] These were identified when mutations in the Hcp and VrgG genes in Vibrio cholerae led to decreased virulence and pathogenicity. Since then, Type VI secretion systems have been found in a quarter of all proteobacterial genomes, including animal, plant, human pathogens, as well as soil, environmental or marine bacteria. [ 20 ] [ 21 ] While most of the early studies of Type VI secretion focused on its role in the pathogenesis of higher organisms, more recent studies suggested a broader physiological role in defense against simple eukaryotic predators and its role in inter-bacteria interactions. [ 22 ] [ 23 ] The Type VI secretion system gene clusters contain from 15 to more than 20 genes, two of which, Hcp and VgrG, have been shown to be nearly universally secreted substrates of the system. Structural analysis of these and other proteins in this system bear a striking resemblance to the tail spike of the T4 phage, and the activity of the system is thought to functionally resemble phage infection. [ 24 ]
In addition to the use of the multiprotein complexes listed above, Gram-negative bacteria possess another method for release of material: the formation of bacterial outer membrane vesicles . [ 25 ] Portions of the outer membrane pinch off, forming nano-scale spherical structures made of a lipopolysaccharide-rich lipid bilayer enclosing periplasmic materials, and are deployed for membrane vesicle trafficking to manipulate the environment or to invade at the host–pathogen interface . Vesicles from a number of bacterial species have been found to contain virulence factors, some have immunomodulatory effects, and some can directly adhere to and intoxicate host cells. While release of vesicles has been demonstrated as a general response to stress conditions, the process of loading cargo proteins seems to be selective. [ 26 ]
In some Staphylococcus and Streptococcus species, the accessory secretory system handles the export of highly repetitive adhesion glycoproteins. [ 27 ] | https://en.wikipedia.org/wiki/Secretion |
A secretory protein is any protein, whether it be endocrine or exocrine , which is secreted by a cell. Secretory proteins include many hormones , enzymes , toxins , and antimicrobial peptides . Secretory proteins are synthesized in the endoplasmic reticulum. [ 1 ]
The production of a secretory protein starts like any other protein. The mRNA is produced and transported to the cytosol where it interacts with a free cytosolic ribosome . The part that is produced first, the N-terminal, contains a signal sequence consisting of 6 to 12 amino acids with hydrophobic side chains. This sequence is recognised by a cytosolic protein, SRP (Signal Recognition Particle), which stops the translation and aids in the transport of the mRNA-ribosome complex to an SRP receptor found in the membrane of the endoplasmic reticulum . When it arrives at the ER, the signal sequence is transferred to the translocon , a protein-conducting channel in the membrane that allows the newly synthesized polypeptide to be translocated to the ER lumen. The dissociation of SRP from the ribosome restores the translation of the secretory protein. The signal sequence is removed and the translation continues while the produced chain moves through the translocon (cotranslational translocation).
After the production of the protein is completed, it interacts with several other proteins to gain its final state.
After translation, proteins within the ER make sure that the protein is folded correctly. If after a first attempt the folding is unsuccessful, a second folding is attempted. If this fails too, the protein is exported to the cytosol and labelled for destruction. Aside from folding, a sugar chain is also added to the protein. After these changes, the protein is transported to the Golgi apparatus in a coated vesicle using the coat protein COPII.
In the Golgi apparatus , the sugar chains are modified by adding or removing certain sugars. The secretory protein leaves the Golgi apparatus by an uncoated vesicle.
Membrane proteins with functional areas on the cytosolic side of both the vesicle and cell membrane make sure the vesicle associates with the membrane. The vesicle membrane fuses with the cell membrane and so the protein leaves the cell.
Some vesicles don't fuse immediately but await a signal before fusing. This is seen in vesicles carrying neurotransmitter in presynaptic cells. This process constitutes an effective cell-cell signaling mechanism via membrane vesicle trafficking from the secretory cell to the target cells in the human or animal body.
The process has been extended to the host–pathogen interface , wherein gram-negative bacteria secrete outer membrane vesicles containing fully conformed signal proteins and virulence factors via exocytosis of nano-sized vesicles, in order to control host or target cell activities and exploit their environment. | https://en.wikipedia.org/wiki/Secretory_protein |
Section beams are made of steel and have specific lengths and shapes, such as the Ɪ-beam , 'L' angle, C-channel and flanged I-beam. These types of section are usually used in steel structures, and it is common to connect them with steel plates.
There are three connection types:
Rivets are the strongest and most common type. [ 1 ] [ citation needed ]
There are some calculation methods to determine whether the design and construction of the truss is safe. [ 1 ] [ citation needed ] | https://en.wikipedia.org/wiki/Section_beam |
A sectional cooler is a type of liquid cooled rotary drum cooler used for continuous processes in chemical engineering . [ 1 ] Sectional coolers consist of a rotating cylinder ("drum" or "shell"), a drive unit and a support structure. At each end of the drum there are stationary chutes for material feed and outlet. Depending on the size of the cooler the drum is pivoted on a shaft running through its axis, or is supported on running treads or external ring gears. The interior of the drum consists of several wedge-shaped chambers arranged around a central hollow shaft. This arrangement is completely surrounded by a jacket of water. In the wedge-shaped chambers are angled fins or shovels which move material through the cooler as the drum is rotated. [ 2 ]
Sectional coolers work by indirectly cooling the material passing through them with water . [ 3 ] The water enters the cooler and reaches the space between the wedge-shaped chambers through the central hollow shaft, circulates between and around the chambers, then leaves the cooler. The material to be cooled usually falls into the feed chute directly from another machine . As the drum rotates, the fins on the inside faces of the wedge-shaped sections convey the material to the other end of the cooler. The rotation causes a constant mixing of the product in the chambers, providing good heat transfer and increasing efficiency . Sectional coolers can be rotated by chains , gears or by friction drive onto the drum itself.
Any free flowing bulk material can be cooled in a sectional cooler. They are often found after rotary kilns in calcination or other similar processes. [ 4 ] The purpose of sectional coolers is usually to reduce the material temperature enough that it can be handled with other machines such as conveyor belts and mills . Often the cooling itself is also an important part in the production process. Typical products processed with sectional coolers include calcined petroleum coke , zinc calcine, soda ash and pigments. The entry temperature of the products can reach up to 1400 °C. | https://en.wikipedia.org/wiki/Sectional_cooler |
A sector instrument is a general term for a class of mass spectrometer that uses a static electric (E) or magnetic (B) sector or some combination of the two (separately in space) as a mass analyzer. [ 1 ] Popular combinations of these sectors have been the EB, BE (of so-called reverse geometry), three-sector BEB and four-sector EBEB (electric-magnetic-electric-magnetic) instruments. Most modern sector instruments are double-focusing instruments (first developed by Francis William Aston , Arthur Jeffrey Dempster , Kenneth Bainbridge and Josef Mattauch in 1936 [ 2 ] ) in that they focus the ion beams both in direction and velocity. [ 3 ]
The behavior of ions in a homogeneous, linear, static electric or magnetic field (separately), as is found in a sector instrument, is simple. The physics is described by a single equation called the Lorentz force law. This equation is the fundamental equation of all mass spectrometric techniques; it also applies in non-linear, non-homogeneous cases and is an important equation in the field of electrodynamics in general.
The law states that F = q(E + v × B), where E is the electric field strength, B is the magnetic field induction, q is the charge of the particle, v is its current velocity (expressed as a vector), and × is the cross product .
So the force on an ion in a linear homogeneous electric field (an electric sector) is F = qE, in the direction of the electric field for positive ions and opposite to it for negative ions.
The force is only dependent on the charge and electric field strength. The lighter ions will be deflected more and heavier ions less due to the difference in inertia and the ions will physically separate from each other in space into distinct beams of ions as they exit the electric sector.
And the force on an ion in a linear homogeneous magnetic field (a magnetic sector) is F = qv × B, perpendicular to both the magnetic field and the velocity vector of the ion itself, in the direction determined by the right-hand rule of cross products and the sign of the charge.
The force in the magnetic sector is complicated by the velocity dependence but with the right conditions (uniform velocity for example) ions of different masses will separate physically in space into different beams as with the electric sector.
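To make the vector relationships concrete, the following is a minimal sketch that evaluates the Lorentz force on a singly charged ion with NumPy; the field and velocity values are arbitrary illustrative numbers, not taken from any particular instrument.

```python
import numpy as np

# Elementary charge (C) for a singly charged positive ion
q = 1.602e-19

# Illustrative, arbitrary field and velocity values (SI units)
E = np.array([1.0e4, 0.0, 0.0])   # electric field (V/m), along x
B = np.array([0.0, 0.0, 0.5])     # magnetic field (T), along z
v = np.array([2.0e5, 0.0, 0.0])   # ion velocity (m/s), along x

# Lorentz force law: F = q (E + v x B)
F = q * (E + np.cross(v, B))

print("Force vector (N):", F)
```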
These are some of the classic geometries from mass spectrographs which are often used to distinguish different types of sector arrangements, although most current instruments do not fit precisely into any of these categories as the designs have evolved further.
The sector instrument geometry consists of a 127.30° (π/√2 radians) electric sector without an initial drift length, followed by a 60° magnetic sector with the same direction of curvature. Sometimes called a "Bainbridge mass spectrometer," this configuration is often used to determine isotopic masses . A beam of positive particles is produced from the isotope under study. The beam is subject to the combined action of perpendicular electric and magnetic fields . Since the forces due to these two fields are equal and opposite when the particles have a velocity given by v = E/B,
they do not experience a resultant force ; they pass freely through a slit, and are then subject to another magnetic field, transversing a semi-circular path and striking a photographic plate . The mass of the isotope is determined through subsequent calculation.
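As a worked illustration of the velocity-selector condition and the subsequent mass determination, the sketch below computes the selected speed v = E/B and then recovers the mass from the radius of the semicircular path in the second magnetic field using r = mv/(qB'). All numerical values are invented for illustration only.

```python
# Illustrative values; not from any particular instrument
q  = 1.602e-19   # ion charge (C), singly charged
E  = 2.0e5       # electric field in the velocity selector (V/m)
B  = 0.50        # magnetic field in the velocity selector (T)
B2 = 0.50        # magnetic field in the analysing region (T)
r  = 0.25        # measured radius of the semicircular path (m)

# Velocity selector: qE = qvB  =>  v = E / B
v = E / B

# Circular motion in the analysing field: qvB2 = m v^2 / r  =>  m = q B2 r / v
m = q * B2 * r / v

print(f"selected speed: {v:.3e} m/s")
print(f"ion mass: {m:.3e} kg  ({m / 1.66054e-27:.2f} u)")
```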
The Mattauch–Herzog geometry consists of a 31.82° (π/(4√2) radians) electric sector, a drift length which is followed by a 90° magnetic sector of opposite curvature direction. [ 4 ] The entry of the ions sorted primarily by charge into the magnetic field produces an energy focussing effect and much higher transmission than a standard energy filter. This geometry is often used in applications with a high energy spread in the ions produced where sensitivity is nonetheless required, such as spark source mass spectrometry (SSMS) and secondary ion mass spectrometry (SIMS). [ 5 ] The advantage of this geometry over the Nier–Johnson geometry is that the ions of different masses are all focused onto the same flat plane. This allows the use of a photographic plate or other flat detector array.
The Nier–Johnson geometry consists of a 90° electric sector, a long intermediate drift length and a 60° magnetic sector of the same curvature direction. [ 6 ] [ 7 ]
The Hinterberger–Konig geometry consists of a 42.43° electric sector, a long intermediate drift length and a 130° magnetic sector of the same curvature direction.
The Takeshita geometry consists of a 54.43° electric sector, a short drift length, and a second electric sector of the same curvature direction, followed by another drift length before a 180° magnetic sector of opposite curvature direction.
The Matsuda geometry consists of an 85° electric sector, a quadrupole lens and a 72.5° magnetic sector of the same curvature direction. [ 8 ] This geometry is used in the SHRIMP and Panorama (gas source, high-resolution, multicollector to measure isotopologues in geochemistry). | https://en.wikipedia.org/wiki/Sector_mass_spectrometer |
Secular Review (1876–1907) was a freethought / secularist weekly publication in nineteenth and early twentieth-century Britain that appeared under a variety of names. It represented a "relatively moderate style of Secularism," more open to old Owenite and new socialist influences in contrast to the individualism and social conservatism of Charles Bradlaugh and his National Reformer . [ 1 ] It was edited during the period 1882–1906 by William Stewart Ross (1844–1906), who signed himself "Saladin." [ 2 ]
The journal was founded in August 1876 by George Jacob Holyoake , after he and George William Foote experienced difficulties with their collaborative editorship of the Secularist: A Liberal Weekly Review (1876–1877).
In February 1877, Charles Watts assumed the editorship. A new series was started in June 1877, merging it with Foote's Secularist , under joint editorship, to form the Secular Review and Secularist . Foote served as co-editor with Watts until March 1878, after which Watts edited the paper on his own until 1882.
William Stewart Ross joined Watts as co-editor in January 1882 and assumed sole editorship in July 1884, using the pseudonym "Saladin". In December 1888, Ross rechristened it the Agnostic Journal and Secular Review , and shortly thereafter changed the name again to the Agnostic Journal and Eclectic Review . Ross died in November 1906 and the last issue was published in June 1907. [ 1 ] [ 3 ] [ 4 ] | https://en.wikipedia.org/wiki/Secular_Review |
Secular Thought (1887–1911) was a Canadian periodical , published in Toronto , dedicated to promoting the principles of freethought and secularism . Founded and edited during its first several years by English freethinker Charles Watts , the editorship was assumed by Toronto printer and publisher James Spencer Ellis in 1891 when Watts returned to England. During that period, Secular Thought was the principal organ of the freethought movement in Canada, publishing large amounts of material from England and the United States in addition to commenting on Canadian affairs. [ 1 ] | https://en.wikipedia.org/wiki/Secular_Thought |
In nuclear physics , secular equilibrium is a situation in which the quantity of a radioactive isotope remains constant because its production rate (e.g., due to decay of a parent isotope) is equal to its decay rate.
Secular equilibrium can occur in a radioactive decay chain only if the half-life of the daughter radionuclide B is much shorter than the half-life of the parent radionuclide A. In such a case, the decay rate of A and hence the production rate of B is approximately constant, because the half-life of A is very long compared to the time scales considered. The quantity of radionuclide B builds up until the number of B atoms decaying per unit time becomes equal to the number being produced per unit time. The quantity of radionuclide B then reaches a constant, equilibrium value. Assuming the initial concentration of radionuclide B is zero, full equilibrium usually takes several half-lives of radionuclide B to establish.
The quantity of radionuclide B when secular equilibrium is reached is determined by the quantity of its parent A and the half-lives of the two radionuclides. This can be seen from the time rate of change of the number of atoms of radionuclide B: dN_B/dt = λ_A N_A − λ_B N_B,
where λ_A and λ_B are the decay constants of radionuclides A and B , related to their half-lives t_1/2 by λ = ln(2)/t_1/2 , and N_A and N_B are the number of atoms of A and B at a given time.
Secular equilibrium occurs when dN_B/dt = 0 , or equivalently when λ_B N_B = λ_A N_A .
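To make the approach to equilibrium concrete, the following is a minimal numerical sketch, with invented half-lives and an arbitrary initial parent quantity, that evaluates the closed-form solution of dN_B/dt = λ_A N_A − λ_B N_B with N_B(0) = 0 and prints how the daughter-to-parent activity ratio approaches 1 after several half-lives of B.

```python
import numpy as np

# Hypothetical half-lives chosen so that A lives much longer than B
t_half_A = 1.0e6   # seconds
t_half_B = 1.0e2   # seconds
lam_A = np.log(2) / t_half_A
lam_B = np.log(2) / t_half_B

N_A0 = 1.0e20      # initial number of parent atoms (arbitrary)

# Closed-form solution of dN_B/dt = lam_A*N_A - lam_B*N_B with N_B(0) = 0
def N_B(t):
    return (lam_A / (lam_B - lam_A)) * N_A0 * (np.exp(-lam_A * t) - np.exp(-lam_B * t))

for t in [0, 100, 300, 1000, 5000]:
    activity_B = lam_B * N_B(t)
    activity_A = lam_A * N_A0 * np.exp(-lam_A * t)
    print(f"t = {t:6.0f} s   A_B/A_A = {activity_B / activity_A:.4f}")
```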
Over long enough times, comparable to the half-life of radionuclide A , the secular equilibrium is only approximate; N_A decays away according to N_A(t) = N_A(0) e^(−λ_A t) ,
and the "equilibrium" quantity of radionuclide B declines in turn. For times short compared to the half-life of A , λ A t ≪ 1 {\displaystyle \lambda _{A}t\ll 1} and the exponential can be approximated as 1. | https://en.wikipedia.org/wiki/Secular_equilibrium |
SecureEasySetup , or SES is a proprietary technology developed by Broadcom to easily set up wireless LANs with Wi-Fi Protected Access . A user presses a button on the wireless access point , then a button on the device to be set up (printer, etc.) and the wireless network is automatically set up.
This technology has been succeeded by the industry-standard Wi-Fi Protected Setup . However, Wi-Fi Protected Setup has itself been shown to be vulnerable to brute-force attacks .
| https://en.wikipedia.org/wiki/SecureEasySetup |
In computer science , secure transmission refers to the transfer of data such as confidential or proprietary information over a secure channel . Many secure transmission methods require a type of encryption . The most common form of email encryption is PKI . In order to open the encrypted file, a key exchange is performed.
Many infrastructures such as banks rely on secure transmission protocols to prevent a catastrophic breach of security. Secure transmissions are put in place to prevent attacks such as ARP spoofing and general data loss . Software and hardware implementations which attempt to detect and prevent the unauthorized transmission of information from the computer systems to an organization on the outside may be referred to as Information Leak Detection and Prevention (ILDP), Information Leak Prevention (ILP), Content Monitoring and Filtering (CMF) or Extrusion Prevention systems and are used in connection with other methods to ensure secure transmission of data.
WEP is a deprecated algorithm to secure IEEE 802.11 wireless networks. Wireless networks broadcast messages using radio, so are more susceptible to eavesdropping than wired networks. When introduced in 1999, WEP was intended to provide confidentiality comparable to that of a traditional wired network. A later system, called Wi-Fi Protected Access (WPA) has since been developed to provide stronger security.
Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications on the Internet for such things as web browsing, e-mail, Internet faxing, instant messaging and other data transfers. There are slight differences between SSL and TLS, but they are substantially the same. [ 1 ]
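As a minimal illustration of how an application might obtain an encrypted channel with TLS, the sketch below uses Python's standard ssl module to wrap a TCP connection; the host name is only an example, and the request sent is illustrative.

```python
import socket
import ssl

# Create a default client context: validates certificates and hostnames
context = ssl.create_default_context()

# Example host; any TLS-enabled server would do
host = "example.org"

with socket.create_connection((host, 443)) as raw_sock:
    # Wrap the plain TCP socket in a TLS session
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print("negotiated protocol:", tls_sock.version())
        # Data written here is encrypted on the wire
        tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n\r\n")
        print(tls_sock.recv(200))
```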
| https://en.wikipedia.org/wiki/Secure_transmission |
Securin is a protein involved in control of the metaphase-anaphase transition and anaphase onset. Following bi-orientation of chromosome pairs and inactivation of the spindle checkpoint system, the underlying regulatory system, which includes securin, produces an abrupt stimulus that induces highly synchronous chromosome separation in anaphase . [ 1 ]
Securin is initially present in the cytoplasm and binds to separase , a protease that degrades the cohesin rings that link the two sister chromatids. Separase is vital for onset of anaphase. This securin-separase complex is maintained when securin is phosphorylated by Cdk1 , inhibiting ubiquitination. When bound to securin, separase is not functional. [ 1 ]
In addition, both securin and separase are well conserved proteins (Figure 1). [ 1 ] Note that separase cannot function without initially forming the securin-separase complex. This is because securin helps properly fold separase into the functional conformation. However, yeast does not appear to require securin to form functional separase as anaphase occurs in yeast with a securin deletion mutation. [ 1 ]
Securin has five known phosphorylation sites that are targets of Cdk1; two sites at the N-terminal in the Ken-Box and D-box region are known to affect APC recognition and ubiquitination (Figure 2). [ 2 ] To initiate the onset of anaphase, securin is dephosphorylated by Cdc14 and other phosphatases. Dephosphorylated securin is recognized by the Anaphase-Promoting Complex (APC) bound primarily to Cdc20 (Cdh1 is also an activating substrate of APC). The APC Cdc20 complex ubiquitinates securin and targets it for degradation by 26S proteasome. This results in free separase that is able to destroy cohesin and initiate chromosome separation. [ 1 ] [ 2 ]
It is thought that securin integrates multiple regulatory inputs to make separase activation switch like, resulting in sudden, coordinated anaphase. This likely involves a network with several feedback loops, including positive feedback which leads to switch-like behavior. One proposed signaling pathway generating switch-like behavior contains a positive feedback loop for activation of Cdc14 by separase, [ 3 ] leading to dephosphorylation and degradation of securin (Figure 3). [ 2 ]
David Morgan’s group found that the segregation time of chromosomes 4 and 5 is significantly prolonged in budding-yeast strains with mutations in the 2 N-terminal securin phosphorylation sites and in securin deletion strains. In addition, these mutant strains exhibited very high rates of mis-segregation compared to normal behavior. Switch-like characteristics are necessary to trigger quick, coordinated chromosomal segregation in anaphase. This means that strong inactivation of separase by securin followed by sudden, rapid destruction of securin and activation of separase is vital for proper anaphase.
Overall, securin and separase act in an anaphase-regulating network. Figure 4 depicts a potential network diagram. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Securin |
SecurityFocus was an online computer security news portal and purveyor of information security services. Home to the well-known Bugtraq mailing list, SecurityFocus columnists and writers included former Department of Justice cybercrime prosecutor Mark Rasch , and hacker-turned-journalist Kevin Poulsen . [ 1 ]
| https://en.wikipedia.org/wiki/SecurityFocus |
A security alarm is a system designed to detect intrusions, such as unauthorized entry, into a building or other areas, such as a home or school. Security alarms protect against burglary ( theft ) or property damage , as well as against intruders. Examples include personal systems, neighborhood security alerts, car alarms , and prison alarms.
Some alarm systems serve a single purpose of burglary protection; combination systems provide fire and intrusion protection. Intrusion-alarm systems are combined with closed-circuit television surveillance (CCTV) systems to record intruders' activities and interface to access control systems for electrically locked doors. There are many types of security systems. Homeowners typically have small, self-contained noisemakers. These devices can also be complicated, multirole systems with computer monitoring and control. A system may even include two-way voice, which allows communication between the control panel and the monitoring station.
The most basic alarm consists of at least one sensor to detect trespassers and an alerting device to indicate the intrusion. However, a typical premises security alarm employs the following components:
In addition to the system itself, security alarms often offer a monitoring service. In the event of an alarm, the premises control unit contacts a central monitoring station. Operators at the station take appropriate action, such as contacting property owners, notifying the police, or dispatching private security forces. Such alerts transmit via dedicated alarm circuits, telephone lines, or the internet.
The hermetically sealed reed switch is a common type of two-piece sensor. It contains an electrically conductive switch that is either normally open or normally closed under the influence of a magnetic field, depending on its proximity to the second piece, which contains a magnet . When the magnet moves away from the reed switch, the reed switch either closes or opens, based on the normally closed or open design. This action, coupled with an electric current, allows an alarm control panel to detect a fault on that zone or circuit. These sensors are common and are found wired directly to an alarm control panel, or as sub-components in wireless door or window contacts.
The passive infrared (PIR) motion detector is one of the most common sensors found in household and small business environments. This sensor does not generate or radiate energy; it works entirely by detecting the heat energy given off by other objects.
PIR sensors identify abrupt changes in temperature at a given point. As an intruder walks in front of the sensor, the temperature at that point will rise from room temperature to body temperature and then back again. This quick change triggers the detection.
PIR sensors designed to be wall- or ceiling-mounted come in various fields of view . PIRs require a power supply in addition to the detection signaling circuit.
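The digital output of a typical PIR module simply goes high while motion is detected, so reading it can be as simple as polling an input pin. A minimal sketch follows, assuming a hypothetical module wired to pin 17 of a Raspberry Pi and read with the RPi.GPIO library; the pin number and delays are assumptions.

```python
import time
import RPi.GPIO as GPIO  # available on Raspberry Pi systems

PIR_PIN = 17  # assumed wiring; adjust to the actual pin used

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)

try:
    while True:
        if GPIO.input(PIR_PIN):       # output goes high while a heat change is seen
            print("motion detected")
            time.sleep(2)             # simple lockout to avoid repeated triggers
        time.sleep(0.1)
finally:
    GPIO.cleanup()
```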
The infrasound detector works by detecting infrasound, or sound waves at frequencies below 20 Hz. Sounds at those frequencies are inaudible to the human ear. [ 1 ] Due to its inherent properties, infrasound can travel distances of many hundreds of kilometers. [ 2 ]
The entire infrasound detection system consists of the following components: a speaker (infrasound sensor) as a microphone input, a frequency filter, an analog-to-digital (A/D) converter, and a microcomputer to analyze the recorded signal.
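Because the useful signal lies below 20 Hz, the analysis stage essentially amounts to low-pass filtering the microphone samples and watching the residual energy. A rough sketch of that step follows, assuming samples already digitized at 1 kHz and using SciPy; the sampling rate, filter order, and threshold are all assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 1000.0          # assumed sampling rate of the A/D converter (Hz)
CUTOFF = 20.0        # keep only the infrasonic band below 20 Hz

# 4th-order Butterworth low-pass filter in second-order sections
sos = butter(4, CUTOFF, btype="low", fs=FS, output="sos")

def infrasound_energy(samples: np.ndarray) -> float:
    """Return the mean power of the sub-20 Hz content of a sample block."""
    low = sosfilt(sos, samples)
    return float(np.mean(low ** 2))

# Example: a synthetic 5 Hz disturbance buried in higher-frequency noise
t = np.arange(0, 2.0, 1.0 / FS)
signal = 0.5 * np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(t.size)

if infrasound_energy(signal) > 0.01:   # threshold would be tuned per site
    print("possible intrusion attempt")
```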
If potential intruders try to enter a house, they test whether it is closed and locked, use tools on openings, and/or apply pressure, creating low-frequency sound vibrations. The infrasound detector automatically detects these actions before the intruder breaks in.
The purpose of such a system is to detect burglars before they enter the house to avoid both theft and vandalism. The sensitivity is dependent on the size of a home and the presence of animals.
These active detectors transmit ultrasonic sound waves that are inaudible to humans, using frequencies between 15 kHz and 75 kHz. The underlying method of operation is the Doppler shift principle, which detects a change in frequency due to object motion: detection occurs when a moving object causes a change in the ultrasonic frequency at the receiver relative to the transmitted frequency.
The ultrasonic detector operates by the transmitter emitting an ultrasonic signal into the area to be protected. Solid objects (such as the surrounding floor, walls, and ceiling) reflect sound waves, which the receiver will detect. Because ultrasonic waves are transmitted through air, hard-surfaced objects tend to reflect most of the ultrasonic energy, while soft surfaces tend to absorb the most energy.
When the surfaces are stationary, the frequency of the waves detected by the receiver will be equal to the transmitted frequency. However, a change in frequency will occur as a result of the Doppler principle when a person or object is moving towards or away from the detector. Such an event initiates an alarm signal. This technology is no longer active in many properties, as many consider it obsolete.
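The Doppler shift that such a detector looks for is small but easily computed: for a target moving at speed v toward the transceiver, the reflected frequency is shifted by approximately Δf = 2·v·f/c, where c is the speed of sound. A quick illustration with assumed values:

```python
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def doppler_shift(f_transmit_hz: float, target_speed_ms: float) -> float:
    """Approximate two-way Doppler shift for a reflecting target (Hz)."""
    return 2.0 * target_speed_ms * f_transmit_hz / SPEED_OF_SOUND

# A 40 kHz ultrasonic transmitter and a person walking at 1 m/s
shift = doppler_shift(40_000.0, 1.0)
print(f"expected shift: {shift:.1f} Hz")   # roughly 233 Hz
```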
This device emits microwaves from a transmitter and detects any reflected microwaves or reduction in beam intensity using a receiver. The transmitter and receiver are usually combined inside a single housing (monostatic) for indoor applications and separate housings (bistatic) for the protection of outdoor perimeters of high-risk sites and critical infrastructures such as fuel storage, petrochemical facilities, military sites, civil and military airports , nuclear facilities and more. To reduce false alarms this type of detector is usually combined with a passive infrared detector or similar alarm. Compared to the monostatic, the bistatic units work over longer distances: typical distances for transmitter-receivers are up to 200 m for X-band frequencies and up to 500 m for K-band frequencies. [ 3 ]
Microwave detectors respond to a Doppler shift in the frequency of the reflected energy, to a phase shift, or to a sudden reduction of the level of received energy. Any of these effects may indicate motion of an intruder. Microwave detectors are low cost, easy to install, provide an invisible perimeter barrier, and are not affected by fog, rain, snow, sand storms, or wind. They may be affected by the presence of water dripping on the ground and typically need a sterile clearance area to prevent partial blocking of the detection field.
The microwave generator is equipped with an antenna that allows it to concentrate the beam of electromagnetic waves in one preferred location and the beam is intercepted by the receiver, equipped with a similar antenna to the transmitter.
The graphical representation of the beam is similar to a cigar, and, when not disturbed, it runs between the transmitter and the receiver and generates a continuous signal. When an individual tries to cross this beam, it produces a disturbance that is caught by the receiver as a variation of amplitude of the received signal.
These barriers are immune to harsh weather, such as fog , heavy rain , snow and sandstorms : none of these atmospheric phenomena affect in any way the behaviour and the reliability of the microwave detection. Furthermore, the working temperature range of this technology goes from -35 °C to +70 °C. [ 4 ]
The more recent and higher performance models of these detectors generate a detection whether the intruder is rolling, crossing, crawling or moving very slowly within the electromagnetic field, [ 5 ] reducing false alarms. The ellipsoidal shape of the longitudinal section however does not allow a good detection capability close to the receiver or transmitter heads, and those areas are commonly referred to as "dead zones". A solution to avoid this problem, when installing 2 or more barriers, is to cross the respective transmitter and receiver heads some meters from the respective heads, or to use a mono-head sensor to cover the dead zones. [ 6 ]
Compact surveillance radar emits microwaves from a transmitter and detects any reflected microwaves. They are similar to microwave detectors but can detect the precise location and a GPS coordinate of intruders in areas extending over hundreds of acres. It has the capability of measuring the range, angle, velocity, direction, and size of the target. This target information is typically displayed on a map, user interface or situational awareness software that defines geographical alert zones or geofences with different types of actions initiated depending on time of day and other factors. CSR is commonly used to protect outside the fence line of critical facilities such as electrical substations , power plants, dams, and bridges.
Photoelectric beam systems detect the presence of an intruder by transmitting invisible infrared light beams across an area, where these beams may be obstructed. To improve the detection surface area, the beams are often employed in stacks of two or more. However, if an intruder is aware of the technology's presence, it can be avoided. The technology can be an effective long-range detection system, if installed in stacks of three or more where the transmitters and receivers are staggered to create a fence-like barrier. To prevent a clandestine attack using a secondary light source being used to hold the detector in a sealed condition whilst an intruder passes through, most systems use and detect a modulated light source. These sensors are low cost, easy to install, and require very little sterile clearance area to operate. However, it may be affected by fog or very high luminosity, and the position of the transmitter can be located with cameras.
A glass-break detector may be used for internal perimeter building protection. Glass-break acoustic detectors are mounted in close proximity to the glass panes and listen for sound frequencies associated with glass breaking.
Seismic glass-break detectors, generally referred to as shock sensors, are different in that they are installed on the glass pane. When glass breaks it produces specific shock frequencies which travel through the glass and often through the window frame and the surrounding walls and ceiling. Typically, the most intense frequencies generated are between 3 and 5 kHz, depending on the type of glass and the presence of a plastic interlayer. Seismic glass-break detectors feel these shock frequencies and in turn generate an alarm condition.
Window foil is a less advanced detection method that involves gluing a thin strip of conducting foil on the inside of the glass and putting low-power electric current through it. Breaking the glass will tear the foil and break the circuit.
Most systems can also be equipped with smoke, heat, and/or carbon monoxide detectors. These are also known as 24-hour zones (which are on at all times). Smoke and heat detectors protect from the risk of fire using different detection methods. Carbon monoxide detectors help protect from the risk of carbon monoxide poisoning. Although an intruder alarm panel may also have these detectors connected, it may not meet all the local fire code requirements of a fire alarm system.
Traditional smoke detectors are ionization smoke detectors which create an electric current between two metal plates, which sound an alarm when disrupted by smoke entering the chamber. Ionization smoke alarms can quickly detect the small amounts of particles produced by fast-flaming fires, such as cooking fires or those fueled by paper or flammable liquids. A newer type of the smoke detector is the photoelectric smoke detector. It contains a light source, which is positioned indirectly to the light sensitive electric sensor. Normally, light from the light source shoots straight across and misses the sensor. When smoke enters the chamber, it scatters the light, which then hits the sensor and triggers the alarm. Photoelectric smoke detectors typically respond faster to a fire in its early, smoldering stage, before the source of the fire bursts into flames.
Motion sensors are devices that use various forms of technology to detect movement. The technology typically found in motion sensors to trigger an alarm includes infrared, ultrasonic, vibration and contact. Dual technology sensors combine two or more forms of detection in order to reduce false alarms as each method has its advantages and disadvantages. Traditionally motion sensors are an integral part of a home security system. These devices are typically installed to cover a large area as they commonly cover up to 40 ft (12 m), with a 135° field of vision.
A type of motion sensor has been used by the Japanese since ancient times. In the past, "(m)any people in Japan kept singing crickets and used them like watch dogs." [ 7 ] Although a dog would bark when it senses an intruder, a cricket stops singing when approached by an intruder. The crickets are kept in decorative cages resembling bird cages, and these cages are placed in contact with the floor. During the day, the house is busy with normal daytime tasks. When activity reduces at night, the crickets start singing. If someone comes into the house at night, the floor starts to vibrate. "The vibration frightens the crickets and they stop singing. Then everyone wakes up --- from the silence." [ 8 ] The family is used to hearing crickets at night and knows something is wrong if the crickets aren't singing. A similar observation was made in England about millers who lived in their mills. A mill wheel makes a great deal of noise, but the miller only awakens when the mill wheel stops turning.
Driveway alarm systems can be combined with most security and automation systems. They are designed to alert residents to unexpected visitors, intruders, or deliveries arriving at the property. Types of driveway sensors include magnetic and infrared motion sensors. Driveway alarms can be found in both hard-wired and wireless systems. They are common in rural security systems as well as for commercial applications.
These electro-mechanical devices are mounted on barriers and are used primarily to detect an attack on the structure itself. The technology relies on an unstable mechanical configuration that forms part of the electrical circuit. When movement or vibration occurs, the unstable portion of the circuit moves and breaks the current flow, which produces an alarm. The medium transmitting the vibration must be correctly selected for the specific sensor as they are best suited to different types of structures and configurations. These systems are low cost and easily installed on existing fences, but can only be fence mounted and are unable to analyze differences in the pattern of vibrations (for example, the difference between gusts of wind and a person climbing the fence). For this reason, this technology is gradually being replaced by digital accelerometer -based systems.
MEMS sensors are micro-electro-mechanical devices created using photolithography , etching and ion implantation. This produces a very compact and small device. In this device, in addition to the mechanical system, there are electronic circuits for control, acquisition and conditioning of the signal, able to sense the environment. [ 9 ]
MEMS accelerometers can be divided into two groups: piezoresistive and capacitive accelerometers. The former consist of a single-degree-of-freedom system of a mass suspended by a spring. They also have a beam with a proof mass at the beam’s tip and a piezoresistive patch on the beam web. [ 10 ]
In contrast, capacitive-based accelerometers, also known as vibration sensors, rely on a change in electrical capacitance in response to acceleration. [ 11 ]
Current technology allows the realization of suspended silicon structures that are attached to the substrate at points called anchors and that constitute the sensitive mass of the MEMS accelerometer. These structures are free to move in the direction of the acceleration being detected. They constitute the mobile plate of a pair of capacitors connected in a half bridge .
In this way, the acquired signals are amplified, filtered and converted into digital signals under the supervision of specific control circuits. MEMS devices have evolved from single, stand-alone sensors to the integrated inertial measurement units that are available today. [ 12 ]
This technology uses a variety of transduction mechanisms to detect the displacement. They include capacitive, piezoresistive, thermal, optical, piezoelectric and tunneling. [ 13 ]
In recent decades, much technological progress has been made in this area, and MEMS accelerometers are used in high-reliability environments and are starting to replace other established technologies.
MEMS accelerometers can be applied as sensors in earthquake disaster prevention, since one of their main characteristics is a linear frequency response from DC to about 500 Hz, and this capability offers an improvement in measuring ground motion in the lower-frequency band. [ 14 ]
Another practical application of MEMS accelerometers is in machine condition monitoring to reduce machine maintenance. Wireless and embedded technologies such as micro-electro-mechanical sensors offer wireless smart vibration measurement of a machine’s condition. [ 15 ]
Moving to the defence field, MEMS accelerometers can be applied in fence-mounted intrusion detection systems. Since MEMS sensors are able to work over a wide temperature range, they can help detect intrusions outdoors and along very extended perimeters.
An advantage offered by MEMS accelerometers is the ability to measure static accelerations, such as the acceleration due to gravity. This enables a system based on a MEMS accelerometer to constantly verify that the positioning of the sensor remains unaltered from the time of installation.
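Because the static gravity vector is always present in the reading, a fence-mounted sensor can check that its orientation has not changed since installation. A minimal sketch of that idea follows, assuming raw axis readings expressed in units of g; the example values and the 5° threshold are invented.

```python
import math

def tilt_angle_deg(ax: float, ay: float, az: float) -> float:
    """Angle between the sensor's z axis and the gravity vector, in degrees."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    return math.degrees(math.acos(az / g))

# Orientation recorded at installation time (example values, in g)
reference = tilt_angle_deg(0.02, -0.01, 0.99)

# A later reading; a large deviation suggests the sensor was moved or tampered with
current = tilt_angle_deg(0.30, 0.05, 0.95)

if abs(current - reference) > 5.0:       # threshold is an assumption
    print("sensor orientation changed - possible tampering")
```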
MEMS accelerometers’ significant advantages also stem from their small size and high measurement frequency; additionally, they can be integrated with multiple sensors with different functions. [ 16 ]
Change in the local magnetic field due to the presence of ferrous metals induces a current in the buried sensors (buried cables or discrete sensors) which are analyzed by the system. If the change exceeds a predetermined threshold, an alarm is generated. [ 17 ] This type of sensor can be used to detect intruders carrying substantial amounts of metal, such as a firearm, making it ideally suited for anti-poaching applications. [ 18 ]
Sometimes referred to as E-field, this volumetric sensor uses Electric field proximity sensing and can be installed on buildings, perimeters, fences, and walls. It also has the ability to be installed free-standing on dedicated poles. The system uses an electromagnetic field generator powering one wire, with another sensing wire running parallel to it. The sensing wire is connected to a signal processor that analyses amplitude change (mass of intruder), rate change (movement of intruder), and preset disturbance time (time the intruder is in the pattern). These items define the characteristics of an intruder and when all three are detected simultaneously, an alarm signal is generated.
The barrier can provide vertical protection from the ground to the height of the mounting posts (typically 4–6 meters), depending on the number of sensor wires installed. It is usually configured in zones of about 200 metre lengths. Electrostatic field sensors are high-security, difficult to defeat, and have a tall vertical detection field. However, these sensors are expensive and have short zones, which requires more electronics (and thus a higher cost).
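The decision logic described above (alarm only when amplitude change, rate of change, and dwell time all cross their thresholds at the same time) can be expressed very compactly. The sketch below is a hypothetical illustration; the threshold values are invented and would be tuned on site.

```python
from dataclasses import dataclass

@dataclass
class FieldSample:
    amplitude_change: float   # relative change in field amplitude (mass of intruder)
    rate_of_change: float     # how fast the amplitude is changing (movement)
    dwell_time_s: float       # how long the disturbance has persisted

# Invented thresholds; a real installation would tune these on site
AMPLITUDE_THRESHOLD = 0.2
RATE_THRESHOLD = 0.05
DWELL_THRESHOLD_S = 1.5

def should_alarm(sample: FieldSample) -> bool:
    """Alarm only when all three intruder characteristics are present at once."""
    return (sample.amplitude_change > AMPLITUDE_THRESHOLD
            and sample.rate_of_change > RATE_THRESHOLD
            and sample.dwell_time_s > DWELL_THRESHOLD_S)

print(should_alarm(FieldSample(0.35, 0.08, 2.0)))   # True: likely intruder
print(should_alarm(FieldSample(0.35, 0.08, 0.3)))   # False: too brief (e.g. a bird)
```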
Microphonic systems vary in design (for example, time-domain reflectometer or piezo-electric ) but each is generally based on the detection of an intruder attempting to cut or climb over a fence. Usually the microphonic detection systems are installed as sensor cables attached to rigid chain-wire fences, however, some specialized versions of these systems can also be installed buried underground. Depending on the type, it can be sensitive to different frequencies or levels of noise or vibration. The system is based on coaxial or electro-magnetic sensor cable with the controller having the ability to differentiate between signals from the cable or chain-wire being cut, an intruder climbing the fence, or bad weather conditions.
The systems are designed to detect and analyze incoming electronic signals received from the sensor cable, and then to generate alarms from signals which exceed pre-set conditions. The systems have adjustable electronics to permit installers to change the sensitivity of the alarm detectors to suit specific environmental conditions. The tuning of the system is usually done during commissioning of the detection devices.
Microphonic systems are relatively inexpensive compared to other systems and easy to install, but older systems may have a high rate of false alarms caused by wind and other disturbances. Some newer systems use DSP to process the signal and reduce false alarms.
A taut wire perimeter security system is an independent screen of tensioned tripwires usually mounted on a fence or wall. Alternatively, the screen can be made thicker to avoid the need for a supporting chain-wire fence. These systems are designed to detect any physical attempt to penetrate the barrier. Taut wire systems can operate with a variety of switches or detectors that sense movement at each end of the tense wires. These switches or detectors can be a simple mechanical contact, static force transducer or an electronic strain gauge. Unwanted alarms caused by birds and other animals can be avoided by adjusting the sensors to ignore objects that exert small amounts of pressure on the wires. This type of system is vulnerable to intruders digging under the fence. A concrete footing directly below the fence is installed to prevent this type of attack.
Taut wire fence systems have low false alarm rates, reliable sensors, and high detection rates, but are expensive and complicated to install.
A fiber-optic cable can be used to detect intruders by measuring the difference in the amount of light sent through the fiber core. A variety of fiber optic sensing technologies may be used, including Rayleigh scattering or interferometry . If the cable is disturbed, the light will change and the intrusion is detected. The cable can be attached directly to a chain-wire fence or bonded into a barbed steel tape that is used to protect the tops of walls and fences. This type of barbed tape provides a good physical deterrent as well as giving an immediate alarm if the tape is cut or severely distorted.
Being cable-based, fiber optic systems are very similar to microphonic systems: they are easy to install and can cover large perimeters. However, despite performing in a similar manner to microphonic-based systems, fiber optic cables have a higher cost and are more complex due to the use of fiber-optic technology.
This system employs an electro-magnetic field disturbance principle based on two unshielded coaxial cables. [ 19 ] The transmitter emits continuous radio frequency (RF) energy along one cable and the energy is received by the other cable. When the change in field strength weakens due to the presence of an object and reaches a pre-set lower threshold, an alarm condition is generated. The system is covert after installation. The surrounding soil must offer good drainage in order to avoid nuisance alarms. Ported coaxial cables are concealed as a buried form but can be affected by RF noise and can be difficult to install.
Security electric fences consist of wires that carry pulses of electric current to provide a non-lethal shock to deter potential intruders. Tampering with the fence also results in an alarm that is logged by the security electric fence energiser, and can also trigger a siren, strobe, and/or notifications to a control room or directly to the owner via email or phone. In practical terms, security electric fences are a type of sensor array that acts as a (or part of a) physical barrier, a psychological deterrent to potential intruders, and as part of a security alarm system.
Electric fences are less expensive than many other methods, are less likely to give false alarms than many alternative perimeter security methods, and have the highest psychological deterrent effect of all methods, but there is a potential for unintended shock.
Trigger signals from sensors are transmitted to one or more control units either through wires or by wireless means, such as radio, line carrier, and infrared.
Wired systems are convenient when sensors, such as passive infrared motion sensors and smoke detectors require external power to operate correctly; however, they may be more costly to install. Basic wired systems utilize a star network topology, where the panel is at the center logically, and all devices home run their line wires back to the panel. More complex panels use a Bus network topology where the wire basically is a data loop around the perimeter of the facility, and has drops for the sensor devices which must include a unique device identifier integrated into the sensor device itself. Wired systems also have the advantage, if wired properly for example by dual loop, of being tamper-evident .
Wireless systems, on the other hand, often use battery-powered transmitters which are easier to install and have less expensive start-up costs, but may fail if the batteries are not maintained. Depending on distance and construction materials, one or more wireless repeaters may be required to bring the signal to the alarm panel reliably. A wireless system can be moved to a new property easily. An important wireless connection for security is between the control panel and the monitoring station. Wireless monitoring of the alarm system protects against a burglar cutting cables or from failures of an internet provider. This setup is commonly referred to as fully wireless.
Hybrid systems use both wired and wireless sensors to achieve the benefits of both. Transmitters can also be connected through the premises' electrical circuits to transmit coded signals to the control unit (line carrier). The control unit usually has a separate channel or zone for burglar and fire sensors, and more advanced systems have a separate zone for every different sensor, as well as internal trouble indicators, such as mains power loss, low battery, and broken wires.
Depending upon the application, the alarm output may be local, remote or a combination of both. Local alarms do not include monitoring, but may include indoor and/or outdoor sounders, such as motorized bells or electronic sirens, and lights, such as strobe lights, which may be useful for signaling an evacuation notice during fire emergencies, or to scare off an amateur burglar quickly. However, with the widespread use of alarm systems, especially in cars, false alarms are very frequent and many urbanites tend to ignore alarms rather than investigating or contacting the necessary authorities. In rural areas where not many may hear the fire bell or burglar siren, lights or sounds may not make much difference, as the nearest emergency responders may arrive too late to avoid losses.
Remote alarm systems are used to connect the control unit to a predetermined monitor of some sort, and they are available in many different configurations. Advanced systems connect to a central station or first responder (e.g. police/fire/medical) via a direct phone wire, a cellular network, a radio network, or an IP path. In the case of a dual signaling system two of these options are utilized simultaneously. The alarm monitoring includes not only the sensors, but also the communication transmitter itself. While direct phone circuits are still available in some areas from phone companies, because of their high cost and the advent of dual signaling with its comparatively lower cost, their use is being phased out. Direct connections are now most usually seen only in federal, state, and local government buildings, or on a school campus that has a dedicated security, police, fire, or emergency medical department. In the United Kingdom, communication is only possible to an alarm receiving centre, and communication directly to the emergency services is not permitted.
More typical systems incorporate a digital cellular communication unit that will contact the central station or a monitoring station via the Public Switched Telephone Network (PSTN) and raise the alarm, either with a synthesized voice or increasingly via an encoded message string that the central station decodes. These may connect to the regular phone system on the system side of the demarcation point , but typically connect on the customer side ahead of all phones within the monitored premises so that the alarm system can seize the line by cutting-off any active calls and call the monitoring company if needed. A dual signaling system would raise the alarm wirelessly via a radio path or cellular path using the phone line or broadband line as a backup overcoming any compromise to the phone line. Encoders can be programmed to indicate which specific sensor was triggered, and monitors can show the physical location of the sensor on a list or even a map of the protected premises, which can make the resulting response more effective.
Many alarm panels are equipped with a backup communication path for use when the primary PSTN circuit is not functioning. The redundant dialer may be connected to a second communication path, or a specialized encoded cellular phone , radio, or internet interface device to bypass the PSTN entirely, to thwart intentional tampering with the phone lines. Tampering with the line could trigger a supervisory alarm via the radio network, giving early warning of an imminent problem. In some cases a remote building may not have PSTN phone service, and the cost of trenching and running a direct line may be prohibitive. It is possible to use a wireless cellular or radio device as the primary communication method.
In the UK, the most popular solution of this kind is similar in principle to the above but with the primary and backup paths reversed. Utilizing a radio path as the primary signaling path is not only quicker than PSTN but also allows significant cost savings as unlimited amounts of data can be sent at no extra expense.
Increasing deployment of voice over IP technology (VoIP) is driving the adoption of broadband signaling for alarm reporting. Many sites requiring alarm installations no longer have conventional telephone lines (POTS), and alarm panels with conventional telephone dialer capability do not work reliably over some types of VoIP service.
Dial-up analogue alarm panels or systems with serial/parallel data ports may be migrated to broadband through the addition of an alarm server device which converts telephone signaling signals or data port traffic to IP messages suitable for broadband transmission. However, the direct use of VoIP to transport analogue alarms without an alarm server device is problematic as the audio codecs used throughout the entire network transmission path cannot guarantee a suitable level of reliability or quality of service acceptable for the application.
In response to the changing public communications network, new alarm systems often use broadband signaling as a method of alarm transmission, and manufacturers are including IP reporting capability directly in their alarm panel products. When the Internet is used as a primary signaling method for critical security and life safety applications, frequent supervision messages are configured to overcome concerns about backup power for network equipment and signal delivery time. But for typical applications, connectivity concerns are controlled by normal supervision messages.
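As a purely hypothetical illustration of broadband supervision, and not any real alarm-reporting protocol, a panel could periodically send a short heartbeat message to the monitoring receiver, which would treat a missed interval as a connectivity fault. The receiver address, message format, account identifier, and interval below are all invented.

```python
import socket
import time

# Hypothetical receiver address and supervision interval
RECEIVER = ("monitoring.example.net", 9999)
INTERVAL_S = 60

ACCOUNT_ID = "0042"   # made-up panel identifier

def send_heartbeat() -> None:
    """Send one supervision message; the receiver notes the arrival time."""
    with socket.create_connection(RECEIVER, timeout=10) as sock:
        sock.sendall(f"HEARTBEAT {ACCOUNT_ID} {int(time.time())}\n".encode())

while True:
    try:
        send_heartbeat()
    except OSError:
        # In a real panel this would trigger fallback to a cellular or radio path
        print("primary IP path failed, switching to backup")
    time.sleep(INTERVAL_S)
```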
Dual signaling is a method of alarm transmission that uses a mobile phone network and a telephone and/or IP path to transmit intruder, fire and personal attack signals at high speed from the protected premises to an Alarm Receiving Centre (ARC). It most commonly uses GPRS or GSM, a high-speed signaling technology used to send and receive ‘packets’ of data, with a telephone line in addition. IP is not used as frequently due to issues with installation and configuration, as high levels of expertise are often required in addition to alarm installation knowledge.
A dual signaling communication device is attached to a control panel on a security installation and is the component that transmits the alarm to the ARC. It can do this in a number of different ways: via the GPRS radio path, via the GSM radio path, or via the telephone line or IP. These multiple signaling paths are all present and live at the same time, backing each other up to minimize exposure of the property to intruders. Should one fail, there is always a backup, and depending on the manufacturer chosen, up to three paths can work simultaneously at any one time.
Dual paths allow distinction between hardware failures and a genuine attack on the alarm. This helps eliminate false alarms and unnecessary responses. Dual signaling has helped considerably with the restoration of police response: in an instance where a phone line is cut, the dual signaling device can continue to send alarm calls via one of its alternative paths, either confirming or denying the alarm from the initial path.
In the UK, CSL DualCom Ltd pioneered dual signaling in 1996. The company offered an alternative to existing alarm signaling while setting the current standard for professional dual path security monitoring. Dual signaling is now firmly regarded as the standard format for alarm signaling and is duly specified by all of the leading insurance companies. [ 20 ]
Monitored alarms and speaker phones allow for the central station to speak with the homeowner or intruder. This may be beneficial to the owner for medical emergencies. For actual break-ins, the speaker phones allow the central station to urge the intruder to cease and desist as response units have been dispatched. Listen-in alarm monitoring is also known as Immediate Audio-Response monitoring or Speaking Alarm Systems in the UK.
In the United States, police respond to at least 36 million alarm activations each year, at an estimated annual cost of $1.8 billion. [ 21 ]
Depending upon the zone triggered, number and sequence of zones, time of day, and other factors, the alarm monitoring center may automatically initiate various actions. Central station operators might be instructed to call emergency services immediately, or to first call the protected premises or property manager to try to determine if the alarm is genuine. Operators could also start calling a list of phone numbers provided by the customer to contact someone to go check on the protected premises. Some zones may trigger a call to the local heating oil company to check on the system, or a call to the owner with details of which room may be flooded. Some alarm systems are tied to video surveillance systems so that current video of the intrusion area can be instantly displayed on a remote monitor and recorded.
Some alarm systems use real-time audio and video monitoring technology to verify the legitimacy of an alarm. In some municipalities around the United States, this type of alarm verification allows the property it is protecting to be placed on a "verified response" list, allowing for quicker and safer police responses.
To be useful, an intrusion alarm system must be deactivated or reconfigured when authorized personnel are present. Authorization may be indicated in any number of ways, often with keys or codes used at the control panel or a remote panel near an entry. High-security alarms may require multiple codes, or a fingerprint, badge, hand-geometry scan, retinal scan, encrypted-response generator, or other means that are deemed sufficiently secure for the purpose.
Failed authorizations result in an alarm or a timed lockout to prevent experimenting with possible codes. Some systems can be configured to permit deactivation of individual sensors or groups. Others can also be programmed to bypass or ignore individual sensors and leave the remainder of the system armed. This feature is useful for permitting a single door to be opened and closed before the alarm is armed, or to permit a person to leave, but not return. High-end systems allow multiple access codes, and may only permit them to be used once, or on particular days, or only in combination with other users' codes (i.e., escorted). In any case, a remote monitoring center should arrange an oral code to be provided by an authorized person in case of false alarms, so the monitoring center can be assured that a further alarm response is unnecessary. As with access codes, there can also be a hierarchy of oral codes, for example, allowing a furnace repairperson to enter the kitchen and basement sensor areas but not the silver vault in the pantry. There are also systems that permit a duress code to be entered which silences the local alarm but still triggers the remote alarm to summon the police to a robbery.
Fire sensors can be isolated, meaning that when triggered, they will not trigger the main alarm network. This is important when smoke and heat are intentionally produced. The owners of buildings can be fined for generating false alarms that waste the time of emergency personnel.
The United States Department of Justice estimates that between 94% and 98% of all alarm calls to law enforcement are false alarms . [ 21 ]
System reliability and user error are the cause of most false alarms, sometimes called "nuisance alarms." False alarms can be very costly to local governments, local law enforcement, security system users and members of local communities. In 2007, the Department of Justice reported that in just one year, false alarms cost local municipalities and their constituents at least $1.8 billion. [ 21 ]
In many municipalities across the United States, policies have been adopted to fine home and business owners for multiple false alarm activations from their security system. If multiple false alarms from the same property persist, that property could be added to a "no response" list, which prevents police dispatch to the property except in the event of verified emergency. Approximately 1% of police alarm calls actually involve a crime. [ 21 ] Nuisance alarms occur when an unintended event evokes an alarm status by an otherwise properly working alarm system. A false alarm also occurs when there is an alarm system malfunction that results in an alarm state. In any of these circumstances, the source of the problem should be immediately found and fixed, so that responders will not lose confidence in the alarm reports. It is easier to know when there are false alarms, because the system is designed to react to that condition. Failures to alarm are more troublesome because they usually require periodic testing to make sure the sensors are working and that the correct signals are getting through to the monitor. Some systems are designed to detect problems internally, such as low or dead batteries, loose connections, phone circuit trouble, etc. While earlier nuisance alarms could be set off by small disturbances, like insects or pets, newer model alarms have technology to measure the size/weight of the object causing the disturbance, and thus are able to decide how serious the threat is, which is especially useful in burglar alarms.
Some municipalities across the United States require alarm verification before police are dispatched. Under this approach, alarm monitoring companies must verify the legitimacy of alarms (except holdup, duress, and panic alarms ) before calling the police. Verified response typically involves visual on-scene verification of a break-in, or remote audio or video verification. [ 21 ]
Alarms that utilize audio, video, or combination of both audio and video verification technology give security companies, dispatchers, police officers, and property managers more reliable data to assess the threat level of a triggered alarm. [ 22 ]
Audio and video verification techniques use microphones and cameras to record audio frequencies, video signals, or image snapshots. The source audio and video streams are sent over a communication link, usually an Internet protocol (IP) network, to the central station where monitors retrieve the images through proprietary software. The information is then relayed to law enforcement and recorded to an event file, which can be used to plan a more strategic and tactical approach of a property, and later as prosecution evidence.
In one example of this system, when a passive infrared or other sensor is triggered, a designated number of video frames from before and after the event is sent to the central station.
A second video solution can be incorporated into a standard panel, which sends the central station an alarm. When a signal is received, a trained monitoring professional accesses the on-site digital video recorder (DVR) through an IP link to determine the cause of the activation. For this type of system, the camera input to the DVR reflects the alarm panel's zones and partitioning, which allows personnel to look for an alarm source in multiple areas.
The United States Department of Justice states that legislation requiring alarm companies to verify the legitimacy of an alarm, before contacting law enforcement (commonly known as "verified response") is the most effective way to reduce false burglar alarms. The Department of Justice considers audio, video, or an eye-witness account as verification for the legitimacy of a burglar alarm. [ 21 ]
Cross-zoning is a strategy that uses multiple sensors to monitor activity in one area, with software analysing input from all the sources. For example, if a motion detector trips in one area, the signal is recorded and the central-station monitor notifies the customer. A second alarm signal, received in an adjacent zone within a short time, is the confirmation the central-station monitor needs to request a dispatch immediately. This cross-confirmation provides increased confidence that the alarm is genuine before a response is requested.
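The confirmation logic can be sketched in a few lines of Python; the zone names, the 60-second window, and the decision labels below are assumptions made for illustration rather than any vendor's actual rules.

```python
from datetime import datetime, timedelta

CONFIRMATION_WINDOW = timedelta(seconds=60)  # assumed window; real systems vary

def cross_zone_decision(events):
    """Given (timestamp, zone) sensor trips, decide whether to request dispatch.

    A single trip is logged and the customer is notified; a second trip in a
    different zone within the window is treated as confirmation.
    """
    events = sorted(events)
    for (t1, z1), (t2, z2) in zip(events, events[1:]):
        if z1 != z2 and (t2 - t1) <= CONFIRMATION_WINDOW:
            return "dispatch"          # confirmed by an adjacent zone
    return "notify_customer" if events else "no_action"

# Example: motion in the hallway followed 20 s later by motion in the kitchen
now = datetime.now()
print(cross_zone_decision([(now, "hallway"), (now + timedelta(seconds=20), "kitchen")]))
```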
Enhanced call verification (ECV) helps reduce false dispatches while still protecting citizens, and is mandated in several US jurisdictions, although the alarm industry has successfully opposed it in others. [ 21 ] ECV requires central station personnel to attempt to verify the alarm activation by making a minimum of two phone calls to two different responsible party telephone numbers before dispatching law enforcement to the scene.
The first alarm-verification call goes to the location the alarm originated. If contact with a person is not made, a second call is placed to a different number. The secondary number, best practices dictate, should be to a telephone that is answered even after hours, preferably a cellular phone of a decision maker authorized to request or bypass emergency response.
ECV, as it cannot confirm an actual intrusion event and will not prompt a priority law enforcement dispatch, is not considered true alarm verification by the security industry.
Some insurance companies and local agencies require that alarm systems be installed to code or be certified by an independent third party. The alarm system is required to have a maintenance check carried out every 6 – 12 months. In the UK, 'Audible Only' intruder alarm systems require a routine service visit once every 12 months and monitored intruder alarm systems require a check twice in every 12-month period. This is to ensure all internal components, sensors and PSUs are functioning correctly. In the past, this would require an alarm service engineer to attend site and carry the checks out. With the use of the Internet or radio path and a compatible IP/radio transmitting device (at the alarmed premises), some checks can now be carried out remotely from the central station. | https://en.wikipedia.org/wiki/Security_alarm |
The iOS operating system utilizes many security features in both hardware and software . These include a secure boot chain, biometric authentication ( Face ID and Touch ID ), data encryption, app sandboxing, and the Secure Enclave—a dedicated coprocessor for sensitive data. iOS also employs memory protection techniques like address space layout randomization (ASLR) and non-executable memory, and includes features like App Transport Security and two-factor authentication to enhance user privacy. Apple's ecosystem further ensures app integrity through code signing and App Store policies, although some controversies have arisen around enterprise certificate misuse and emerging threats like malicious apps slipping past vetting processes.
Before fully booting into iOS, there is low-level code that runs from the Boot ROM . Its task is to verify that the Low-Level Bootloader is signed by the Apple Root CA public key before running it. This process is to ensure that no malicious or otherwise unauthorized software can be run on an iOS device. After the Low-Level Bootloader finishes its tasks, it runs the higher level bootloader, known as iBoot . If all goes well, iBoot will then proceed to load the iOS kernel as well as the rest of the operating system. [ 1 ]
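The chain-of-trust idea can be illustrated with a simplified sketch. It uses Ed25519 signatures from the Python cryptography package, and a single signing key and made-up stage images stand in for Apple's actual key hierarchy and image formats.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

root_key = Ed25519PrivateKey.generate()          # stand-in for the Apple Root CA key
root_public = root_key.public_key()              # the only key trusted by the Boot ROM in this sketch

# Each stage's image is signed; the previous stage verifies it before handing over control.
stages = [b"low-level bootloader image", b"iBoot image", b"kernel image"]
signatures = [root_key.sign(image) for image in stages]

def boot(images, sigs):
    for name, image, sig in zip(["LLB", "iBoot", "kernel"], images, sigs):
        try:
            root_public.verify(sig, image)       # refuse to run unsigned or modified code
        except InvalidSignature:
            raise SystemExit(f"boot halted: {name} failed signature check")
        print(f"{name} verified, executing")

boot(stages, signatures)
```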
The Secure Enclave is a coprocessor found in iOS devices as part of the A7 and newer chips and is used for data protection. It holds the user data pertaining to Touch ID , Face ID , and Apple Pay , among other sensitive data. [ 2 ] The purpose of the Secure Enclave is to handle keys and other information, such as biometrics, that is too sensitive to be handled by the Application Processor (AP). It is isolated with a hardware filter so the AP cannot access it. [ 2 ] It shares RAM with the AP, but its portion of the RAM (known as TZ0) is encrypted. The Secure Enclave itself is a flashable 4 MB AKF processor core called the Secure Enclave Processor (SEP), as documented in Apple Patent Application 20130308838 . The technology used is similar to ARM's TrustZone/SecurCore but contains proprietary code for Apple KF cores in general and SEP specifically. It is also responsible for generating the UID key on A9 or newer chips that protects user data at rest. [ citation needed ]
It has its own secure boot process to ensure that it is completely secure. A hardware random number generator is also included as a part of this coprocessor. Each device's Secure Enclave has a unique ID that is fused into the SoC at manufacturing time and cannot be changed. Starting with A9 devices, the unique ID is generated by the Secure Enclave's random number generator and is never exposed outside of the device. This identifier is used to create a temporary key that encrypts the memory in this portion of the system. The Secure Enclave also contains an anti-replay counter to prevent brute force attacks . [ 1 ]
The SEP is located in the devicetree under IODeviceTree:/arm-io/sep and managed by the AppleSEPManager driver. [ 3 ]
In 2020, security flaws in the SEP were discovered, causing concerns about Apple devices such as iPhones. [ 4 ]
Face ID is a face scanner that is embedded in the notch on iPhone models X , XS , XS Max , XR , 11 , 11 Pro , 11 Pro Max , 12 , 12 Mini , 12 Pro , 12 Pro Max , 13 , 13 Mini , 13 Pro , 13 Pro Max , 14 , and the 14 Plus . On the iPhone 14 Pro , 14 Pro Max , iPhone 15 , iPhone 15 Plus , iPhone 15 Pro , iPhone 15 Pro Max , iPhone 16 , iPhone 16 Plus , iPhone 16 Pro , and iPhone 16 Pro Max , it is embedded in the Dynamic Island . [ 5 ] It can be used to unlock the device, make purchases, and log into applications among other functions. When used, Face ID only temporarily stores the face data in encrypted memory in the Secure Enclave, as described above. There is no way for the device's main processor or any other part of the system to access the raw data that is obtained from the Face ID sensor. [ 1 ]
iOS devices can have a passcode that is used to unlock the device, make changes to system settings, and encrypt the device's contents. Until recently, these were typically four numerical digits long. However, since unlocking the devices with a fingerprint by using Touch ID has become more widespread, six-digit passcodes are now the default on iOS with the option to switch back to four or use an alphanumeric passcode. [ 1 ]
Touch ID is a fingerprint scanner that is embedded in the home button and can be used to unlock the device, make purchases, and log into applications among other functions. When used, Touch ID only temporarily stores the fingerprint data in encrypted memory in the Secure Enclave, as described above. Like Face ID, there is no way for the device's main processor or any other part of the system to access the raw fingerprint data that is obtained from the Touch ID sensor. [ 1 ]
Address Space Layout Randomization (ASLR) is a low-level technique of preventing memory corruption attacks such as buffer overflows . It involves placing data in randomly selected locations in memory in order to make it more difficult to predict ways to corrupt the system and create exploits. ASLR makes app bugs more likely to crash the app than to silently overwrite memory, regardless of whether the behavior is accidental or malicious. [ citation needed ]
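A conceptual simulation of the effect (not how iOS implements ASLR): the module base address is re-randomized on each launch, so an address hardcoded by an attacker from one observed run is very unlikely to be valid in the next. All addresses and offsets below are invented for the sketch.

```python
import secrets

def load_module():
    """Pick a fresh random, page-aligned base address, as an ASLR'd loader would."""
    slide = secrets.randbelow(0x4000) * 0x1000   # random page-aligned slide (assumed range)
    return 0x100000000 + slide                    # assumed unslid base for the sketch

def exploit(hardcoded_gadget):
    base = load_module()                          # re-randomized on every launch
    gadget = base + 0x1234                        # where the gadget actually lives this run
    return "exploit lands" if hardcoded_gadget == gadget else "exploit crashes"

# The attacker hardcodes an address observed in one run...
observed = load_module() + 0x1234
# ...but the next launch is randomized again, so the guess almost always misses.
print(exploit(observed))
```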
iOS utilizes the ARM architecture 's Execute Never (XN) feature. This allows some portions of the memory to be marked as non-executable, working alongside ASLR to prevent buffer overflow attacks including return-to-libc attacks . [ 1 ]
As mentioned above, one use of encryption in iOS is in the memory of the Secure Enclave . When a passcode is utilized on an iOS device, the contents of the device are encrypted. This is done by using a hardware AES 256 implementation that is very efficient because it is placed directly between the flash storage and RAM. [ 1 ]
iOS, in combination with its specific hardware, uses crypto-shredding when erasing all content and settings by obliterating all the keys in ' effaceable storage'. This renders all user data on the device cryptographically inaccessible. [ 6 ]
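A minimal sketch of the crypto-shredding idea, using AES-GCM from the Python cryptography package; the single in-memory key stands in for the key hierarchy that iOS keeps in effaceable storage.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # stand-in for keys kept in effaceable storage
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"user photos, messages, keychain items", None)

# "Erase all content and settings": destroy the key rather than overwriting the data.
key = None

# Without the key the ciphertext is cryptographically inaccessible, even though the
# encrypted bytes may still physically exist in flash.
print(len(ciphertext), "bytes of ciphertext remain, but cannot be decrypted without the key")
```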
The iOS keychain is a database of login information that can be shared across apps written by the same person or organization. [ 1 ] This service is often used for storing passwords for web applications. [ 7 ]
Third-party applications such as those distributed through the App Store must be code signed with an Apple-issued certificate . In principle, this continues the chain of trust all the way from the Secure Boot process as mentioned above to the actions of the applications installed on the device by users. Applications are also sandboxed , meaning that they can only modify the data within their individual home directory unless explicitly given permission to do otherwise. For example, they cannot access data owned by other user-installed applications on the device. There is a very extensive set of privacy controls contained within iOS with options to control apps' ability to access a wide variety of permissions such as the camera, contacts, background app refresh, cellular data, and access to other data and services. Most of the code in iOS, including third-party applications, runs as the "mobile" user which does not have root privileges . This ensures that system files and other iOS system resources remain hidden and inaccessible to user-installed applications. [ 1 ]
In February 2025, SparkCat, the first OCR-based infostealer, was found in the iOS App Store. [ 8 ]
Companies can apply to Apple for enterprise developer certificates. These can be used to sign apps such that iOS will install them directly (sometimes called " sideloading "), without the app needing to be distributed via the App Store. [ 9 ] The terms under which they are granted make clear that they are only to be used for companies who wish to distribute apps directly to their employees. [ 9 ]
Circa January–February 2019, it emerged that a number of software developers were misusing enterprise developer certificates to distribute software directly to non-employees, thereby bypassing the App Store. Facebook was found to be abusing an Apple enterprise developer certificate to distribute an application to underage users that would give Facebook access to all private data on their devices. [ 10 ] [ 11 ] [ 12 ] Google was abusing an Apple enterprise developer certificate to distribute an app to adults to collect data from their devices, including unencrypted data belonging to third parties. [ 13 ] [ 9 ] Certificates are also used by services such as AltStore , AppValley , Panda Helper, TweakBox and TutuApp to distribute apps that offer pirated software . [ 14 ]
iOS supports TLS with both low- and high-level APIs for developers. By default, the App Transport Security (ATS) framework requires that servers use at least TLS 1.2. However, developers are free to override this framework and utilize their own methods of communicating over networks. When Wi-Fi is enabled, iOS uses a randomized MAC address so that devices cannot be tracked by anyone sniffing wireless traffic. [ 1 ]
Two-factor authentication is an option in iOS to ensure that even if an unauthorized person knows an Apple ID and password combination, they cannot gain access to the account. It works by requiring not only the Apple ID and password, but also a verification code that is sent to an iDevice or mobile phone number that is already known to be trusted. [ 1 ] If an unauthorized user attempts to sign in using another user's Apple ID, the owner of the Apple ID receives a notification that allows them to deny access to the unrecognized device. [ 15 ]
iOS features a hardened memory allocator known as kalloc_type that was introduced in iOS 15 . Since the XNU kernel is primarily written in memory unsafe languages such as C and C++ , [ 16 ] kalloc_type is designed to mitigate the large amount of vulnerabilities that result from the use of these languages in the kernel. In order to achieve this, kalloc_type implements mitigations such as type isolation in order to prevent type confusion and buffer overflow vulnerabilities. Ultimately, the prevention of privilege escalation is intended. [ 17 ] | https://en.wikipedia.org/wiki/Security_and_privacy_of_iOS |
In telecommunications , the term security kernel has the following meanings: | https://en.wikipedia.org/wiki/Security_kernel |
A security log is used to track security-related information on a computer system. Examples include:
According to Stefan Axelsson, "Most UNIX installations do not run any form of security logging software, mainly because the security logging facilities are expensive in terms of disk storage, processing time, and the cost associated with analyzing the audit trail, either manually or by special software." [ 1 ] | https://en.wikipedia.org/wiki/Security_log |
The Java software platform provides a number of features designed for improving the security of Java applications. This includes enforcing runtime constraints through the use of the Java Virtual Machine (JVM), a security manager that sandboxes untrusted code from the rest of the operating system, and a suite of security APIs that Java developers can utilise. Despite this, criticism has been directed at the programming language and at Oracle, due to an increase in malicious programs that revealed security vulnerabilities in the JVM which Oracle subsequently did not address in a timely manner.
The binary form of programs running on the Java platform is not native machine code but an intermediate bytecode . The JVM performs verification on this bytecode before running it to prevent the program from performing unsafe operations such as branching to incorrect locations, which may contain data rather than instructions. It also allows the JVM to enforce runtime constraints such as array bounds checking . This means that Java programs are significantly less likely to suffer from memory safety flaws such as buffer overflow than programs written in languages such as C which do not provide such memory safety guarantees.
The platform does not allow programs to perform certain potentially unsafe operations such as pointer arithmetic or unchecked type casts . It manages memory allocation and initialization and provides automatic garbage collection which in many cases (but not all) relieves the developer from manual memory management. This contributes to type safety and memory safety.
The platform provides a security manager which allows users to run untrusted bytecode in a "sandboxed" environment designed to protect them from malicious or poorly written software by preventing the untrusted code from accessing certain platform features and APIs. For example, untrusted code might be prevented from reading or writing files on the local filesystem, running arbitrary commands with the current user's privileges, accessing communication networks, accessing the internal private state of objects using reflection, or causing the JVM to exit.
The security manager also allows Java programs to be cryptographically signed ; users can choose to allow code with a valid digital signature from a trusted entity to run with full privileges in circumstances where it would otherwise be untrusted.
Users can also set fine-grained access control policies for programs from different sources. For example, a user may decide that only system classes should be fully trusted, that code from certain trusted entities may be allowed to read certain specific files, and that all other code should be fully sandboxed.
The Java Class Library provides a number of APIs related to security, such as standard cryptographic algorithms, authentication, and secure communication protocols.
There are a number of possible sources of security vulnerabilities in Java applications, some of which are common to non-Java applications and some of which are specific to the Java platform. (Note that these refer to potential sources of vulnerabilities which need to be kept in mind by security-conscious programmers: this is not intended as a list of actual vulnerabilities.)
Examples of potential sources of vulnerability common to Java and non-Java applications are:
However, much discussion of Java security focusses on potential sources of vulnerability specific to the Java platform. These include:
A vulnerability in the Java platform will not necessarily make all Java applications vulnerable. When vulnerabilities and patches are announced, for example by Oracle, the announcement will normally contain a breakdown of which types of application are affected ( example ).
For example, a hypothetical security flaw which affects only the security manager sandboxing mechanism of a particular JVM implementation would mean that only Java applications which run arbitrary untrusted bytecode would be compromised: applications where the user fully trusts and controls all bytecode being executed would not. This would mean that, say, a web browser plugin based on that JVM would be vulnerable to malicious applets downloaded from public websites, but a server-side web application running on the same version of the JVM where the administrator has full control over the classpath would be unaffected. [ 1 ]
As with non-Java applications, security vulnerabilities can stem from parts of the platform which may not initially appear to be security-related. For example, in 2011, Oracle issued a security fix for a bug in the Double.parseDouble method. [ 2 ] This method converts a string such as "12.34" into the equivalent double-precision floating point number. The bug caused this method to enter an infinite loop when called on a specific input. This bug had security implications because, for example, if a web server converts a string typed into a form by the user using this method, a malicious user could type in the string which triggers the bug. This would cause the web server thread processing the malicious request to enter an infinite loop and become unavailable for serving requests from other users. Doing this repeatedly to a vulnerable web server would be an easy denial-of-service attack : all the web server's threads for responding to user requests would soon be stuck in the infinite loop and the web server would be unable to serve any legitimate users at all.
The security manager in the Java platform (which, as mentioned above, is designed to allow the user to safely run untrusted bytecode) has been criticized in recent years for making users vulnerable to malware , especially in web browser plugins which execute Java applets downloaded from public websites, more informally known as "Java in the browser".
Oracle's efforts to address these vulnerabilities resulted in a delay to the release of Java 8. [ 3 ]
In 2012, an OS X trojan referred to as Flashback exploited a vulnerability in Java which had not been patched by Apple , although Oracle had already released a patch. [ 4 ] In April 2012, Apple released a removal tool for Lion users without Java. [ 5 ] With Java 7 Update 4, Oracle began to release Java directly for Lion and later . [ 6 ]
In October 2012, Apple released an update that removed the Java plugin from all browsers . [ 7 ] This was seen as a move by Apple to distance OS X from Java. [ 8 ]
In January 2013, a zero-day vulnerability was found in all versions of Java 7, including the latest version, Java 7 Update 10, and was already being exploited in the wild. [ 9 ] The vulnerability was caused by a patch to fix an earlier vulnerability. [ 10 ] In response, Apple blacklisted the latest version of the Java plugin. [ 11 ] Oracle released a patch (Update 11) within three days. [ 12 ] Microsoft also released a patch for Internet Explorer versions 6 , 7 , and 8 . [ 13 ]
Cyberespionage malware Red October was found exploiting a Java vulnerability that was patched in October 2011. [ 14 ] The website for Reporters Without Borders was also compromised by a Java vulnerability in versions prior to Update 11. [ 15 ]
After the release of Update 11, another vulnerability began circulating online, [ 16 ] which was later confirmed. [ 17 ] It was also found that Java's security mode itself was vulnerable due to a bug. [ 18 ] In response, Mozilla disabled Java (as well as Adobe Reader and Microsoft Silverlight ) in Firefox by default, [ 19 ] while Apple blacklisted the latest Java plugin again. [ 20 ]
In February 2013, Twitter reported that it had shut down an attack. Twitter advised users to disable Java, although it did not explain why. [ 21 ] Later in the month, Facebook reported that it had been hacked by a zero-day Java attack. [ 22 ] Apple also reported an attack. [ 23 ] It was found that a breach of an iPhone developer forum was used to attack Twitter, Facebook, and Apple. [ 24 ] The forum itself was unaware of the breach. [ 25 ] Following Twitter, Facebook, and Apple, Microsoft reported that it was also similarly compromised. [ 26 ]
Another vulnerability that was discovered allowed the Java security sandbox to be completely bypassed in the original release of Java 7, as well as Updates 11 and 15. [ 27 ] In March 2013, a trojan called McRat was found exploiting a zero-day Java vulnerability. [ 28 ] Oracle then released another patch to address the vulnerability. [ 29 ] | https://en.wikipedia.org/wiki/Security_of_the_Java_software_platform
Securus Inc. , was a Cary , North Carolina –based provider of GPS tracking and personal emergency response technology. [ 1 ]
The company was founded in 2008 as Positioning Animals Worldwide Inc. [ 2 ] [ 3 ] [ 4 ] [ 5 ] and launched SpotLight and SpotLite GPS pet locator in collaboration with American Kennel Club Companion Animal Recovery (AKC CAR). [ 6 ] In 2010 the company changed its name to Securus, Inc., to reflect its diversification into new markets. The company's chief executive officer is Chris Newton. [ 7 ]
In 2011, Securus, Inc. acquired Zoombak , LLC from TruePosition, Inc., a subsidiary of Liberty Media . [ 8 ] Zoombak was known for developing GPS -based products that help people track things, ranging from teenage drivers to employees, pets and automobiles.
In March 2015, it was announced that Securus was selling its personal emergency response service (PERS) platform to Ogden, UT–based Freeus, LLC, sister company of AvantGuard Monitoring Systems. [ 9 ] That same month, it was announced that BrickHouse Security acquired Securus, including Zoombak and related brands. [ 10 ]
Securus has been involved in buying and selling location data of private citizens that use cell-phones. [ 11 ] This has been raised as a privacy concern as it allows Securus to sell for its own profit the "location of nearly any phone in the US without authorization," including to bounty hunters. [ 12 ]
This United States corporation or company article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Securus,_Inc. |
SedDB is an online database for sediment geochemistry .
SedDB is based on a relational database that contains the full range of analytical values for sediment samples, primarily from marine sediment cores, including major and trace element concentrations, radiogenic and stable isotope ratios, and data for all types of material such as organic and inorganic components, leachates , and size fractions. SedDB also archives a vast array of metadata relating to the individual sample. Examples of SedDB metadata are: sample latitude and longitude; elevation below sea surface; material analyzed; analytical methodology ; analytical precision and reference standard measurements. As of April 2013, SedDB contains nearly 750,000 individual analytical data points from 104,000 samples. SedDB contents have been migrated to The EarthChem Portal .
SedDB was developed to complement current geological data systems ( PetDB , EarthChem, NavDat and Georoc) with an integrated and easily accessible compilation of geochemical data of marine and continental sediments to be utilized for sedimentological, geochemical, petrological, oceanographic, and paleoclimate research, as well as for educational purposes.
SedDB was developed, operated and maintained by a joint team of disciplinary scientists, data scientists, data managers and information technology developers at the Lamont–Doherty Earth Observatory as part of the Integrated Earth Data Applications ( IEDA ) Research Group funded by the US National Science Foundation . SedDB was built collaboratively by researchers and information technologists at the Lamont–Doherty Earth Observatory , Oregon State University , Boston University , and Boise State University .
This geochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/SedDB |
Sedaxane is a broad spectrum fungicide used as a seed treatment in agriculture to protect crops from fungal diseases. It was first marketed by Syngenta in 2011 using their brand name Vibrance . The compound is an amide which combines a pyrazole acid with an aryl amine to give an inhibitor of succinate dehydrogenase . [ 1 ] [ 2 ]
The compound is widely registered for use, including in Australia, the EU, UK and US.
Inhibition of succinate dehydrogenase, the complex II in the mitochondrial respiration chain, has been known as a fungicidal mechanism of action since the first examples were marketed in the 1960s. The first compound in this class was carboxin , which had a narrow spectrum of useful biological activity, mainly on basidiomycetes and was used as a seed treatment . [ 3 ] [ 4 ] By 2016, at least 17 further examples of this mechanism of action were developed by crop protection companies, with the market leader being boscalid , owing to its broader spectrum of fungal species controlled. However, it lacked full control of important cereal diseases, especially septoria leaf blotch Zymoseptoria tritici . [ 3 ]
A group of compounds which did control septoria were amides of 3-(difluoromethyl)-1-methyl-1H-pyrazole-4-carboxylic acid . These included fluxapyroxad and pydiflumetofen as well as sedaxane. [ 5 ] [ 6 ]
Sedaxane combines the acid chloride of the pyrazole carboxylic acid with a novel amine derivative which was made from 2-chlorobenzaldehyde .
A base-catalysed aldol condensation between the aldehyde and cyclopropyl methyl ketone forms an α,β-unsaturated carbonyl compound which, when combined with hydrazine gives a dihydropyrazole derivative. Further treatment with potassium hydroxide forms the second cyclopropyl ring and this material is converted to the aniline required for sedaxane formation by Buchwald–Hartwig amination using benzophenone imine in the presence of a palladium catalyst, followed by hydroxylamine. [ 3 ] : 413–4 [ 7 ]
Owing to a lack of stereoselectivity in the formation of the second cyclopropane ring, sedaxane consists of two diastereomers : two pairs of enantiomers which are cis–trans isomers . [ 1 ] : 1828 [ 3 ] : 414 The commercial product consists of >80% of the trans isomers. [ 8 ]
SDHIs of this type act by binding at the quinone reduction site of the enzyme complex, preventing ubiquinone from doing so. As a consequence, the tricarboxylic acid cycle and electron transport chain cannot function. [ 9 ] [ 10 ]
Sedaxane is used as a seed treatment to control, for example, common bunt , Rhizoctonia species and Ustilago species (smuts). As a result, it has potential use in crops including cereals, cotton, potato and soybean. As of 2023 it is registered for use in Argentina, Australia, Canada, Chile, China, the EU, Mexico, the UK, Uruguay and the US. [ 8 ]
Sedaxane has low toxicity [ 8 ] and its use was found to leave no residues in human food; [ 1 ] however, the Codex Alimentarius database maintained by the FAO lists the maximum residue limits for it in various food products. [ 11 ]
Fungal populations have the ability to develop resistance to SDHI inhibitors. This potential can be mitigated by careful management . Reports of individual pest species becoming resistant [ 8 ] are monitored by manufacturers, regulatory bodies such as the EPA and the Fungicides Resistance Action Committee (FRAC). [ 12 ] The risks of resistance developing can be reduced by using a mixture of two or more fungicides which each have activity on relevant pests but with unrelated mechanisms of action. FRAC assigns fungicides into classes so as to facilitate this and sedaxane is frequently used in combination with other active ingredients as seed treatments. [ 13 ] [ 14 ]
Sedaxane is the ISO common name [ 15 ] for the active ingredient which is formulated into the branded product sold to end-users. Vibrance is the brand name for Syngenta's suspension concentrate . [ 14 ] | https://en.wikipedia.org/wiki/Sedaxane |
Sediment is a solid material that is transported to a new location where it is deposited. [ 1 ] It occurs naturally and, through the processes of weathering and erosion , is broken down and subsequently transported by the action of wind, water, or ice or by the force of gravity acting on the particles. For example, sand and silt can be carried in suspension in river water and on reaching the sea bed deposited by sedimentation ; if buried, they may eventually become sandstone and siltstone ( sedimentary rocks ) through lithification .
Sediments are most often transported by water ( fluvial processes ), but also wind ( aeolian processes ) and glaciers . Beach sands and river channel deposits are examples of fluvial transport and deposition , though sediment also often settles out of slow-moving or standing water in lakes and oceans. Desert sand dunes and loess are examples of aeolian transport and deposition. Glacial moraine deposits and till are ice-transported sediments.
Sediment can be classified based on its grain size , grain shape, and composition.
Sediment size is measured on a log base 2 scale, called the "Phi" scale, which classifies particles by size from "colloid" to "boulder".
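The phi value of a grain is the negative base-2 logarithm of its diameter relative to a 1 mm reference. A small helper illustrates the conversion; the size-class boundaries used here are approximate and vary slightly between classification schemes.

```python
import math

def phi(diameter_mm, reference_mm=1.0):
    """Krumbein phi value: phi = -log2(d / d0), with d0 = 1 mm by convention."""
    return -math.log2(diameter_mm / reference_mm)

# Approximate size-class boundaries (mm), fine to coarse; exact limits vary by scheme.
classes = [(0.0039, "clay"), (0.0625, "silt"), (2.0, "sand"),
           (64.0, "gravel"), (256.0, "cobble"), (float("inf"), "boulder")]

def classify(diameter_mm):
    return next(name for limit, name in classes if diameter_mm < limit)

print(phi(0.25), classify(0.25))   # a 0.25 mm grain: phi = 2.0, sand
```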
The shape of particles can be defined in terms of three parameters. The form is the overall shape of the particle, with common descriptions being spherical, platy, or rodlike. The roundness is a measure of how sharp grain corners are. This varies from well-rounded grains with smooth corners and edges to poorly rounded grains with sharp corners and edges. Finally, surface texture describes small-scale features such as scratches, pits, or ridges on the surface of the grain. [ 2 ]
Form (also called sphericity ) is determined by measuring the size of the particle on its major axes. William C. Krumbein proposed formulas for converting these numbers to a single measure of form, [ 3 ] such as

$$\psi_l = \sqrt[3]{\frac{D_I\,D_S}{D_L^{2}}}$$

where $D_L$, $D_I$, and $D_S$ are the long, intermediate, and short axis lengths of the particle. [ 4 ] The form $\psi_l$ varies from 1 for a perfectly spherical particle to very small values for a platelike or rodlike particle.
An alternate measure was proposed by Sneed and Folk: [ 5 ]

$$\psi_p = \sqrt[3]{\frac{D_S^{2}}{D_L\,D_I}}$$

which, again, varies from 0 to 1 with increasing sphericity.
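A short sketch computes both sphericity measures from the three axis lengths defined above; the axis values are illustrative only.

```python
def krumbein_sphericity(d_long, d_int, d_short):
    """Intercept sphericity: cube root of (D_I * D_S / D_L**2)."""
    return (d_int * d_short / d_long**2) ** (1 / 3)

def sneed_folk_sphericity(d_long, d_int, d_short):
    """Maximum projection sphericity: cube root of (D_S**2 / (D_L * D_I))."""
    return (d_short**2 / (d_long * d_int)) ** (1 / 3)

# A perfectly spherical grain scores 1; a flat, platy grain scores much lower.
print(krumbein_sphericity(1.0, 1.0, 1.0), sneed_folk_sphericity(1.0, 1.0, 1.0))  # 1.0 1.0
print(round(krumbein_sphericity(10.0, 8.0, 1.0), 2))                             # platy grain, ~0.43
```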
Roundness describes how sharp the edges and corners of a particle are. Complex mathematical formulas have been devised for its precise measurement, but these are difficult to apply, and most geologists estimate roundness from comparison charts. Common descriptive terms range from very angular to angular to subangular to subrounded to rounded to very rounded, with increasing degree of roundness. [ 6 ]
Surface texture describes the small-scale features of a grain, such as pits, fractures, ridges, and scratches. These are most commonly evaluated on quartz grains, because these retain their surface markings for long periods of time. Surface texture varies from polished to frosted, and can reveal the history of transport of the grain; for example, frosted grains are particularly characteristic of aeolian sediments, transported by wind. Evaluation of these features often requires the use of a scanning electron microscope . [ 7 ]
Composition of sediment can be measured in terms of:
This leads to an ambiguity in which clay can be used as both a size-range and a composition (see clay minerals ).
Sediment is transported based on the strength of the flow that carries it and its own size, volume, density, and shape. Stronger flows will increase the lift and drag on the particle, causing it to rise, while larger or denser particles will be more likely to fall through the flow.
In geography and geology , fluvial sediment processes or fluvial sediment transport are associated with rivers and streams and the deposits and landforms created by sediments. It can result in the formation of ripples and dunes , in fractal -shaped patterns of erosion, in complex patterns of natural river systems, and in the development of floodplains and the occurrence of flash floods . Sediment moved by water can be larger than sediment moved by air because water has both a higher density and viscosity . In typical rivers the largest carried sediment is of sand and gravel size, but larger floods can carry cobbles and even boulders .
Wind results in the transportation of fine sediment and the formation of sand dune fields and soils from airborne dust.
Glaciers carry a wide range of sediment sizes, and deposit it in moraines .
The overall balance between sediment in transport and sediment being deposited on the bed is given by the Exner equation . This expression states that the rate of increase in bed elevation due to deposition is proportional to the amount of sediment that falls out of the flow. This equation is important in that changes in the power of the flow change the ability of the flow to carry sediment, and this is reflected in the patterns of erosion and deposition observed throughout a stream. This can be localized, and simply due to small obstacles; examples are scour holes behind boulders, where flow accelerates, and deposition on the inside of meander bends. Erosion and deposition can also be regional; erosion can occur due to dam removal and base level fall. Deposition can occur due to dam emplacement that causes the river to pool and deposit its entire load, or due to base level rise.
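A minimal one-dimensional illustration of Exner-style bed evolution follows, with made-up numbers and the sediment flux held fixed; a real morphodynamic model would couple the flux to the evolving flow.

```python
# 1-D Exner equation: d(eta)/dt = -1/(1 - porosity) * d(qs)/dx
# Bed elevation rises where sediment flux decreases downstream (deposition)
# and falls where flux increases (erosion).
import numpy as np

dx, dt, porosity = 10.0, 1.0, 0.4           # assumed grid spacing (m), time step (s), bed porosity
x = np.arange(0.0, 200.0, dx)
eta = np.zeros_like(x)                       # initially flat bed
qs = 1e-3 * np.exp(-x / 100.0)               # assumed sediment flux decaying downstream (m^2/s)

for _ in range(1000):
    dqs_dx = np.gradient(qs, dx)
    eta -= dt / (1.0 - porosity) * dqs_dx    # deposition wherever dqs/dx < 0

print(f"max aggradation after 1000 s: {eta.max():.4f} m")
```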
Seas, oceans, and lakes accumulate sediment over time. The sediment can consist of terrigenous material, which originates on land, but may be deposited in either terrestrial, marine, or lacustrine (lake) environments, or of sediments (often biological) originating in the body of water. Terrigenous material is often supplied by nearby rivers and streams or reworked marine sediment (e.g. sand ). In the mid-ocean, the exoskeletons of dead organisms are primarily responsible for sediment accumulation.
Deposited sediments are the source of sedimentary rocks , which can contain fossils of the inhabitants of the body of water that were, upon death, covered by accumulating sediment. Lake bed sediments that have not solidified into rock can be used to determine past climatic conditions.
The major areas for deposition of sediments in the marine environment include:
One other depositional environment which is a mixture of fluvial and marine is the turbidite system, which is a major source of sediment to the deep sedimentary and abyssal basins as well as the deep oceanic trenches .
Any depression in a marine environment where sediments accumulate over time is known as a sediment trap .
The null point theory explains how sediment deposition undergoes a hydrodynamic sorting process within the marine environment leading to a seaward fining of sediment grain size.
One cause of high sediment loads is slash and burn and shifting cultivation of tropical forests. When the ground surface is stripped of vegetation and then seared of all living organisms, the upper soils are vulnerable to both wind and water erosion. In a number of regions of the earth, entire sectors of a country have become erodible. For example, on the Madagascar high central plateau , which constitutes approximately ten percent of that country's land area, most of the land area is devegetated, and gullies have eroded into the underlying soil, forming distinctive features called lavakas . These are typically 40 meters (130 ft) wide, 80 meters (260 ft) long and 15 meters (49 ft) deep. [ 12 ] Some areas have as many as 150 lavakas/square kilometer, [ 13 ] and lavakas may account for 84% of all sediments carried off by rivers. [ 14 ] This siltation results in discoloration of rivers to a dark red brown color and leads to fish kills. In addition, sedimentation of river basins implies sediment management and siltation costs. The cost of removing an estimated 135 million m³ of accumulated sediments due to water erosion alone likely exceeds €2.3 billion annually in the EU and UK, with large regional differences between countries. [ 15 ]
Erosion is also an issue in areas of modern farming, where the removal of native vegetation for the cultivation and harvesting of a single type of crop has left the soil unsupported. [ 16 ] Many of these regions are near rivers and drainages. Loss of soil due to erosion removes useful farmland, adds to sediment loads, and can help transport anthropogenic fertilizers into the river system, which leads to eutrophication . [ 17 ]
The Sediment Delivery Ratio (SDR) is the fraction of gross erosion (interrill, rill, gully and stream erosion) that is expected to be delivered to the outlet of the river. [ 18 ] The sediment transfer and deposition can be modelled with sediment distribution models such as WaTEM/SEDEM. [ 19 ] In Europe, according to WaTEM/SEDEM model estimates, the Sediment Delivery Ratio is about 15%. [ 20 ]
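The ratio itself is simple to compute; the catchment figures below are purely illustrative and chosen to match the roughly 15% European average mentioned above.

```python
def sediment_delivery_ratio(sediment_yield_t, gross_erosion_t):
    """SDR = sediment delivered at the catchment outlet / gross erosion upslope."""
    return sediment_yield_t / gross_erosion_t

# Illustrative catchment: 12,000 t/yr eroded, 1,800 t/yr reaching the outlet
print(f"SDR = {sediment_delivery_ratio(1_800, 12_000):.0%}")   # SDR = 15%
```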
Watershed development near coral reefs is a primary cause of sediment-related coral stress. The stripping of natural vegetation in the watershed for development exposes soil to increased wind and rainfall and, as a result, can cause exposed sediment to become more susceptible to erosion and delivery to the marine environment during rainfall events. Sediment can negatively affect corals in many ways, such as by physically smothering them, abrading their surfaces, causing corals to expend energy during sediment removal, and causing algal blooms that can ultimately lead to less space on the seafloor where juvenile corals (polyps) can settle.
When sediments are introduced into the coastal regions of the ocean, the proportions of land-derived, marine, and organically derived sediment that characterize the seafloor near sediment sources are altered. In addition, because the source of sediment (land, ocean, or organic) is often correlated with how coarse or fine its grains typically are, the grain size distribution of the sediment will shift according to the relative inputs of land-derived (typically fine), marine (typically coarse), and organically derived (variable with age) sediment. These alterations in marine sediment influence the amount of sediment suspended in the water column at any given time and the degree of sediment-related coral stress. [ 21 ]
In July 2020, marine biologists reported that aerobic microorganisms (mainly), in " quasi-suspended animation ", were found in organically-poor sediments, up to 101.5 million years old, 250 feet below the seafloor in the South Pacific Gyre (SPG) ("the deadest spot in the ocean"), and could be the longest-living life forms ever found. [ 22 ] [ 23 ] | https://en.wikipedia.org/wiki/Sediment |
A sediment basin is a temporary pond built on a construction site to capture eroded or disturbed soil that is washed off during rain storms, and protect the water quality of a nearby stream, river, lake, or bay. The sediment -laden soil settles in the pond before the runoff is discharged. Sediment basins are typically used on construction sites of 5 acres (20,000 m 2 ) or more, where there is sufficient room. They are often used in conjunction with erosion controls and other sediment control practices. On smaller construction sites, where a basin is not practical, sediment traps may be used. [ 1 ]
On some construction projects, the sediment basin is cleaned out after the soil disturbance (earth-moving) phase of the project, and modified to function as a permanent stormwater management system for the completed site, either as a detention basin or a retention basin . [ 2 ]
A sediment trap is a temporary settling basin installed on a construction site to capture eroded or disturbed soil that is washed off during rain storms, and protect the water quality of a nearby stream, river, lake, or bay. The trap is basically an embankment built along a waterway or low-lying area on the site. They are typically installed at the perimeter of a site and above storm drain inlets, to keep sediment from entering the drainage system. Sediment traps are commonly used on small construction sites, where a sediment basin is not practical. Sediment basins are typically used on construction sites of 5 acres (20,000 m 2 ) or more, where there is sufficient room. [ 3 ]
Sediment traps are installed before land disturbance (earth moving, grading ) begins on a construction site. The traps are often used in conjunction with erosion controls and other sediment control practices. [ 4 ] | https://en.wikipedia.org/wiki/Sediment_basin |
A sediment control is a practice or device designed to keep eroded soil on a construction site, so that it does not wash off and cause water pollution to a nearby stream, river, lake, or sea. Sediment controls are usually employed together with erosion controls , which are designed to prevent or minimize erosion and thus reduce the need for sediment controls. Sediment controls are generally designed to be temporary measures, however, some can be used for storm water management purposes. [ 1 ]
Treatment of silt-impacted water using equipment and chemical addition, commonly called an active treatment system, is a relatively new form of sediment control for the construction industry. These systems are designed to reduce the total suspended solids (TSS) entering nearby water bodies where silt pollution can be of environmental concern. Sediment-laden stormwater is collected and/or pumped, and a chemical flocculant is added to aid in clarification . Types of flocculant include chitosan, polyacrylamide, and polyDADMAC.
Extreme caution should be observed when using cationic flocculants such as chitosan, positively charged polyacrylamide, or polyDADMAC, which can cause hypoxia in fish . The use of anionic (negatively charged) flocculants is best practice in open-loop treatment systems to ensure the protection of aquatic habitat, fish and invertebrates .
The water is then either filtered ( sand or cartridge filter,) or settled ( lamella clarifier or weir tank ) prior to discharge. Chemical sediment control is currently used on some construction sites around the United States and Europe, typically larger sites where there is a high potential for damage to nearby streams. [ 3 ] Another active treatment system design uses electrocoagulation to flocculate suspended particles in the stormwater, followed by a filtration stage. [ 4 ] Active treatment systems require technical expertise to operate effectively as multiple types of equipment are utilized.
Chemical treatment of water to remove sediment may also be accomplished passively. Passive treatment systems use the energy of water flowing by gravity through ditches, canals, culverts or other constructed conveyances to effect treatment. Self-dosing products, such as Gel Flocculants , are placed in the flowing water where sediment particles, colloids and flow energy combine to release the required dosage, thereby creating heavy flocs which can then be easily filtered or settled. Natural woven fibers like jute are often used in ditch bottoms to act as filtration media. Silt retention mats can also be placed in situ to capture floccules. Sedimentation ponds are often utilized as a deposition area to clarify the water and concentrate the material. Mining, heavy construction and other industries have used passive systems for more than twenty years. These types of systems are low-carbon, as no external power source is needed; they require little skill to operate and minimal maintenance, and are effective at reducing total suspended solids , some heavy metals and the nutrient phosphorus .
Stormwater treatment can also be achieved passively. Stormwater management facilities (SWMFs) are generally designed, using Stokes' law , to remove particulate matter larger than 40 microns in size, or to detain water to reduce downstream flooding. However, regulation on the effluent from SWMFs is becoming more stringent, as nutrients such as phosphorus, either dissolved (from fertilizers) or bound to sediment particles from construction or agricultural runoff, cause algal and toxic cyanobacteria (blue-green algae ) blooms in receiving lakes. Cyanotoxin is of particular concern as many drinking water treatment plants cannot effectively remove this toxin. In a recent municipal stormwater treatment study, [ 5 ] an advanced sedimentation technology was used passively in large-diameter stormwater mains upstream of SWMFs to remove an average of 90% of TSS and phosphorus during a near-50-year rain event.
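The Stokes' law settling velocity behind such sizing calculations can be sketched as follows, assuming a quartz-grain density of 2650 kg/m³ and water at roughly 20 °C; the formula only holds for the small, slowly settling particles (low Reynolds number) considered here.

```python
def stokes_settling_velocity(diameter_m, rho_particle=2650.0, rho_fluid=1000.0,
                             viscosity=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in laminar flow:
    v = g * (rho_p - rho_f) * d**2 / (18 * mu)
    """
    return g * (rho_particle - rho_fluid) * diameter_m**2 / (18.0 * viscosity)

v = stokes_settling_velocity(40e-6)          # the 40 micron design particle
print(f"a 40 um particle settles at about {v * 1000:.2f} mm/s")
```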
All states in the U.S. have laws requiring installation of erosion and sediment controls (ESCs) on construction sites of a specified size. Federal regulations require ESCs on sites 1 acre (0.40 ha) and larger. Smaller sites which are part of a common plan of development (e.g. a residential subdivision ) are also required to have ESCs. [ 6 ] In some states, non-contiguous sites under 1-acre (4,000 m 2 ) are also required to have ESCs. For example, the State of Maryland requires ESCs on sites of 5,000 sq ft (460 m 2 ) or more. [ 7 ] The sediment controls must be installed before the beginning of land disturbance (i.e. land clearing, grubbing and grading ) and must be maintained during the entire disturbance phase of construction. Approval for use of any chemical flocculant must be obtained prior to its deployment. | https://en.wikipedia.org/wiki/Sediment_control |
In aquatic toxicology , the sediment quality triad (SQT) approach has been used as an assessment tool to evaluate the extent of sediment degradation resulting from contaminants released due to human activity present in aquatic environments (Chapman, 1990). [ 1 ] This evaluation focuses on three main components: 1.) sediment chemistry, 2.) sediment toxicity tests using aquatic organisms, and 3.) the field effects on the benthic organisms (Chapman, 1990). [ 1 ] Often used in risk assessment, the combination of three lines of evidence can lead to a comprehensive understanding of the possible effects to the aquatic community (Chapman, 1997). [ 2 ] Although the SQT approach does not provide a cause-and-effect relationship linking concentrations of individual chemicals to adverse biological effects, it does provide an assessment of sediment quality commonly used to explain sediment characteristics quantitatively. The information provided by each portion of the SQT is unique and complementary, and the combination of these portions is necessary because no single characteristic provides comprehensive information regarding a specific site (Chapman, 1997) [ 2 ]
Sediment chemistry provides information on contamination; however, it does not provide information on biological effects (Chapman, 1990). [ 1 ] Sediment chemistry is used as a screening tool to determine the contaminants that are most likely to be destructive to organisms present in the benthic community at a specific site. During analysis, sediment chemistry data does not depend strictly on comparisons to sediment quality guidelines when utilizing the triad approach. Rather, sediment chemistry data, once collected for the specific site, is compared to the most relevant guide values, based on site characteristics, to assess which chemicals are of the greatest concern. This technique is used because no one set of data is adequate for all situations. This allows identification of the chemicals of concern, i.e., those which most frequently exceed effects-based guidelines. Once the chemical composition of the sediment is determined and the most concerning contaminants have been identified, toxicity tests are conducted to link environmental concentrations to potential adverse effects.
Sediment toxicity is evaluated based on bioassay analysis. Standard bioassay toxicity tests are utilized and are not organism restricted (Chapman, 1997). [ 2 ] Differences in mechanisms of exposure and organism physiology must be taken into account when selecting test organisms, and the use of a particular organism must be adequately justified. These bioassay tests evaluate effects based on different toxicological endpoints. The toxicity tests are conducted with respect to the chemicals of concern at environmentally relevant concentrations identified by the sediment chemistry portion of the triad approach. Chapman (1990) [ 1 ] lists typically used endpoints, which include lethal endpoints such as mortality, and sublethal endpoints such as growth, behavior, reproduction, cytotoxicity and optionally bioaccumulation . Often pilot studies are utilized to assist in the selection of the appropriate test organism and endpoints. Multiple endpoints are recommended and each of the selected endpoints must adequately complement each of the others (Chapman, 1997). [ 2 ] Effects are evaluated using statistical methods that allow for the distinction between responses that are significantly different from negative controls. If sufficient data is generated, minimum significant differences (MSDs) are calculated using power analyses and applied to toxicity tests to distinguish between statistical significance and ecological relevance.
The function of the toxicity portion of the triad approach is to allow estimation of effects in the field. While laboratory-based experiments simplify a complex and dynamic environment, toxicity results allow the potential for field extrapolation. This creates a link between exposure and effect and allows the determination of an exposure-response relationship. When combined with the other two components of the Sediment Quality Triad, it allows for a holistic understanding of cause and effect.
The analysis of field effects on benthic organisms serves to assess the potential for community-level effects resulting from the identified contaminants. Benthic organisms are used because they are sessile and location-specific, making them accurate markers of contaminant effects (Chapman, 1990). [ 1 ] This is done through field-based surveys that analyze changes in benthic community structure, focusing on the number of species, abundance, and percentage of major taxonomic groups (Chapman, 1997). [ 2 ] Changes in benthic communities are typically quantified using principal component analysis and classification (Chapman, 1997). [ 2 ] There is no single prescribed method for conducting these field assessments; however, the different multivariate analyses typically identify relationships between variables when a robust correlation exists.
Knowledge of the site-specific ecosystem and of the ecological roles of dominant species within that ecosystem is critical to producing biological evidence of alteration in the benthic community resulting from contaminant exposure. When possible, it is recommended to observe changes in community structure that relate directly to the test species used in the sediment toxicity portion of the triad approach, in order to produce the most reliable evidence.
Bioaccumulation should be considered during application of the triad approach, depending on the study goals. In preparation for measuring bioaccumulation, it must be specified whether the test will serve to assess secondary poisoning or biomagnification (Chapman, 1997). [ 2 ] Bioaccumulation analysis should be conducted appropriately for the contaminants of concern (for example, most metals do not biomagnify). This can be done with field-collected organisms, caged organisms, or laboratory-exposed organisms (Chapman, 1997). [ 2 ] While the bioaccumulation portion is recommended, it is not required. However, it serves an important role in quantifying effects due to trophic transfer of contaminants through consumption of contaminated prey.
Site-specific, pollution-induced degradation is measured by combining the three portions of the sediment quality triad. The sediment chemistry, sediment toxicity, and field effects on benthic organisms are compared quantitatively. Data are most useful when they have been normalized to reference-site values by converting them to ratio-to-reference values (Chapman et al. 1986; Chapman 1989). [ 3 ] [ 4 ] The reference site is chosen as the site with the least contamination among the sites sampled. Once normalized, data from the different portions of the triad can be compared even when large differences in measurements or units exist (Chapman, 1990), [ 1 ] as in the sketch below. From the combination of the results from each portion of the triad, a multivariate figure is developed and used to determine the level of degradation.
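The normalization step can be sketched as follows: each triad measurement at a test site is divided by the corresponding value at the reference site, yielding ratio-to-reference values so that chemistry, toxicity, and benthic metrics expressed in very different units become directly comparable. The metric names and numbers below are hypothetical.

```python
# Hypothetical ratio-to-reference normalization of triad data.  Each test-site
# value is divided by the reference-site value for the same metric.

reference_site = {   # least-contaminated site among those sampled (assumed)
    "total_PAH_mg_kg": 0.4,
    "amphipod_mortality_pct": 8.0,
    "taxa_richness": 25.0,
}

test_site = {        # site of concern (assumed)
    "total_PAH_mg_kg": 3.2,
    "amphipod_mortality_pct": 42.0,
    "taxa_richness": 11.0,
}

rtr = {metric: test_site[metric] / reference_site[metric]
       for metric in reference_site}

# Note: for some metrics (e.g. taxa richness) degradation shows up as a ratio
# below 1, while for others (e.g. mortality) it shows up as a ratio above 1.
for metric, value in rtr.items():
    print(f"{metric}: ratio-to-reference = {value:.2f}")
```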
No single method can assess the impact of contamination-induced sediment degradation across aquatic communities. The methods for each component of the triad should be selected for efficacy and relevance in laboratory and field tests. Application of the SQT is typically location-specific and can be used to compare differences in sediment quality over time or across regions (Chapman, 1997). [ 2 ]
The SQT incorporates three lines of evidence (LOE) to provide a direct assessment of sediment quality. The chemistry, toxicity, and benthic components of the triad each provide a LOE, which are then integrated into a weight of evidence.
To qualify for SQT assessment, the chemistry, toxicity, and in situ measurements must be collected synoptically using standardized sediment quality methods. A control sample is necessary to evaluate the impact at contaminated sites. An appropriate reference is a whole-sediment sample (particles and associated pore water) collected near the area of concern that is representative of background conditions in the absence of contaminants. Evidence of both contaminant exposure and biological effect is required before a site can be classified as chemically impacted.
The chemistry component incorporates both bioavailability and potential effects on the benthic community. The potential for sediment toxicity at a given site is based on a logistic regression model (LRM). A chemical score index (CSI) describes the magnitude of contaminant exposure relative to benthic community disturbance. An optimal set of index-specific thresholds is selected for the chemistry component by statistically comparing several candidate sets to evaluate which exhibits the greatest overall agreement (Bay and Weisberg, 2012). [ 5 ] The magnitude of sediment toxicity is determined by multiple toxicity tests conducted in the laboratory to complement the chemistry component. The toxicity LOE is determined from the mean toxicity category score across all relevant tests. Development of the LOE for the benthic component is based on community metrics and abundance. Several indices, such as the benthic response index (BRI), benthic biotic integrity (IBI), and relative biotic index (RBI), are used to assess the biological response of the benthic community. The median score of all individual indices establishes the benthic LOE.
Each component of the triad is assigned a response category: minimal, low, moderate, or high disturbance relative to background conditions. Individual LOEs are ranked into categories by comparing the test results for each component to established thresholds (Bay and Weisberg, 2012). [ 5 ] Integration of the benthos and toxicity LOEs classifies the severity and effects of contamination, while the chemistry and toxicity LOEs are combined to assign the potential for chemically mediated effects.
A site is assigned an impact category by integrating the severity of effect and the potential for chemically mediated effects. The condition of each site of concern is assigned an impact category between 1 and 5 (with 1 being unimpacted and 5 being clearly impacted by contamination). The SQT can also classify an impact as inconclusive in cases where the LOEs from different components are in disagreement or additional information is required (Bay and Weisberg, 2012), [ 5 ] as in the simplified sketch below.
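The sketch below shows, in highly simplified form, how three categorical lines of evidence might be combined into a single site impact score. The category scores, the median-based combination, and the inconclusive rule are assumptions made for illustration; they do not reproduce the actual decision rules of Bay and Weisberg (2012).

```python
# Illustrative combination of categorical lines of evidence (LOE).  The scoring
# and combination rules are assumptions, not the published framework.
from statistics import median

CATEGORY_SCORE = {"minimal": 1, "low": 2, "moderate": 3, "high": 4}

def site_assessment(chemistry, toxicity, benthos):
    """Return a crude impact score (1-5), or 'inconclusive', from three LOEs."""
    scores = [CATEGORY_SCORE[chemistry],
              CATEGORY_SCORE[toxicity],
              CATEGORY_SCORE[benthos]]
    # If the LOEs span three or more categories, treat the case as inconclusive
    # rather than forcing a single category.
    if max(scores) - min(scores) >= 3:
        return "inconclusive"
    # Reserve category 5 for sites where every LOE indicates high disturbance.
    if all(s == 4 for s in scores):
        return 5
    return median(scores)   # otherwise 1 (unimpacted) to 4

print(site_assessment("low", "moderate", "moderate"))   # 3
print(site_assessment("minimal", "minimal", "high"))    # inconclusive
print(site_assessment("high", "high", "high"))          # 5
```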
SQT measurements are scaled proportionately by relative impact and represented visually on triaxial graphs. Sediment integrity and the interrelationships between components can be evaluated from the size and shape of the resulting triangle: its magnitude indicates the relative impact of contamination, and an equilateral triangle implies agreement among the components (USEPA, 1994). [ 6 ]
The SQT approach has been praised on several grounds as a technique for characterizing sediment conditions. Relative to the depth and inclusiveness of the information it provides, it is very cost-effective. It can be applied to all sediment classifications and has even been adapted to soil and water-column assessments (Chapman and McDonald 2005). [ 7 ] A decision matrix can be employed so that all three measures are analyzed simultaneously and possible ecological impacts are deduced (USEPA 1994). [ 6 ]
Other advantages of the SQT include information on the potential bioaccumulation and biomagnification effects of contaminants, and its flexibility in application, which results from its design as a framework rather than as a formula or standard method. Because it uses multiple lines of evidence, SQT data can be analyzed and interpreted in many ways (Bay and Weisberg 2012). [ 5 ] It has been accepted internationally as the most comprehensive approach to assessing sediment (Chapman and McDonald 2005). [ 7 ] The SQT approach to sediment testing has been used in North America, Europe, Australia, South America, and the Antarctic.
Under the National Pollutant Discharge Elimination System (NPDES) EPA permitting guidelines, point and nonpoint discharges may adversely affect sediment quality. Depending on state regulatory criteria, information on point and nonpoint source contamination and its effects on sediment quality may be required to assess compliance. For example, the Washington State Sediment Management Standards, Part IV, mandate sediment control standards that allow for the establishment of discharge sediment monitoring requirements and criteria for the creation and maintenance of sediment impact zones (WADOE 2013). [ 8 ] In such cases, the SQT can be particularly useful because it encompasses multiple relevant analyses simultaneously.
Although the SQT approach offers numerous benefits, drawbacks have also been identified. The major limitations include the lack of statistical criteria developed within the framework, large database requirements, difficulties in applying the approach to chemical mixtures, and laboratory-intensive data interpretation (Chapman 1989). [ 4 ] The SQT does not explicitly consider the bioavailability of complexed or sediment-associated contaminants (FDEP 1994). [ 9 ] Lastly, it is difficult to translate laboratory toxicity results into the biological effects observed in the field (Kamlet 1989). [ 10 ] | https://en.wikipedia.org/wiki/Sediment_quality_triad
Sediment transport is the movement of solid particles ( sediment ), typically due to a combination of gravity acting on the sediment, and the movement of the fluid in which the sediment is entrained. Sediment transport occurs in natural systems where the particles are clastic rocks ( sand , gravel , boulders , etc.), mud , or clay ; the fluid is air, water, or ice; and the force of gravity acts to move the particles along the sloping surface on which they are resting. Sediment transport due to fluid motion occurs in rivers , oceans , lakes , seas , and other bodies of water due to currents and tides . Transport is also caused by glaciers as they flow, and on terrestrial surfaces under the influence of wind . Sediment transport due only to gravity can occur on sloping surfaces in general, including hillslopes , scarps , cliffs , and the continental shelf —continental slope boundary.
Sediment transport is important in the fields of sedimentary geology , geomorphology , civil engineering , hydraulic engineering and environmental engineering (see applications , below). Knowledge of sediment transport is most often used to determine whether erosion or deposition will occur, the magnitude of this erosion or deposition, and the time and distance over which it will occur.
Aeolian or eolian (depending on the parsing of æ ) is the term for sediment transport by wind . This process results in the formation of ripples and sand dunes . Typically, the size of the transported sediment is fine sand (<1 mm) and smaller, because air is a fluid with low density and viscosity , and can therefore not exert very much shear on its bed.
Bedforms are generated by aeolian sediment transport in the terrestrial near-surface environment. Ripples [ 1 ] and dunes [ 2 ] form as a natural self-organizing response to sediment transport.
Aeolian sediment transport is common on beaches and in the arid regions of the world, because it is in these environments that vegetation does not prevent the presence and motion of fields of sand.
Wind-blown very fine-grained dust is capable of entering the upper atmosphere and moving across the globe. Dust from the Sahara deposits on the Canary Islands and islands in the Caribbean , [ 3 ] and dust from the Gobi Desert has deposited on the western United States . [ 4 ] This sediment is important to the soil budget and ecology of several islands.
Deposits of fine-grained wind-blown glacial sediment are called loess .
In geography and geology , fluvial sediment processes or fluvial sediment transport are associated with rivers and streams and the deposits and landforms created by sediments . It can result in the formation of ripples and dunes , in fractal -shaped patterns of erosion, in complex patterns of natural river systems, and in the development of floodplains and the occurrence of flash floods . Sediment moved by water can be larger than sediment moved by air because water has both a higher density and viscosity . In typical rivers the largest carried sediment is of sand and gravel size, but larger floods can carry cobbles and even boulders .
Coastal sediment transport takes place in near-shore environments due to the motions of waves and currents. At the mouths of rivers, coastal sediment and fluvial sediment transport processes mesh to create river deltas .
Coastal sediment transport results in the formation of characteristic coastal landforms such as beaches , barrier islands , and capes. [ 9 ]
As glaciers move over their beds, they entrain and move material of all sizes. Glaciers can carry the largest sediment, and areas of glacial deposition often contain a large number of glacial erratics , many of which are several metres in diameter. Glaciers also pulverize rock into " glacial flour ", which is so fine that it is often carried away by winds to create loess deposits thousands of kilometres afield. Sediment entrained in glaciers often moves approximately along the glacial flowlines , causing it to appear at the surface in the ablation zone .
In hillslope sediment transport, a variety of processes move regolith downslope. These include:
These processes generally combine to give the hillslope a profile that looks like a solution to the diffusion equation , where the diffusivity is a parameter that relates to the ease of sediment transport on the particular hillslope. For this reason, the tops of hills generally have a parabolic concave-up profile, which grades into a convex-up profile around valleys.
As hillslopes steepen, however, they become more prone to episodic landslides and other mass wasting events. Therefore, hillslope processes are better described by a nonlinear diffusion equation in which classic diffusion dominates for shallow slopes and erosion rates go to infinity as the hillslope reaches a critical angle of repose . [ 10 ]
Large masses of material are moved in debris flows , hyperconcentrated mixtures of mud, clasts that range up to boulder-size, and water. Debris flows move as granular flows down steep mountain valleys and washes. Because they transport sediment as a granular mixture, their transport mechanisms and capacities scale differently from those of fluvial systems.
Sediment transport is applied to solve many environmental, geotechnical, and geological problems. Measuring or quantifying sediment transport or erosion is therefore important for coastal engineering . Several sediment erosion devices have been designed in order to quantify sediment erosion (e.g., Particle Erosion Simulator (PES)). One such device, also referred to as the BEAST (Benthic Environmental Assessment Sediment Tool) has been calibrated in order to quantify rates of sediment erosion. [ 11 ]
Movement of sediment is important in providing habitat for fish and other organisms in rivers. Therefore, managers of highly regulated rivers, which are often sediment-starved due to dams, are often advised to stage short floods to refresh the bed material and rebuild bars. This is also important, for example, in the Grand Canyon of the Colorado River , to rebuild shoreline habitats also used as campsites.
Sediment discharge into a reservoir formed by a dam forms a reservoir delta . This delta will fill the basin, and eventually, either the reservoir will need to be dredged or the dam will need to be removed. Knowledge of sediment transport can be used to properly plan to extend the life of a dam.
Geologists can use inverse solutions of transport relationships to understand flow depth, velocity, and direction, from sedimentary rocks and young deposits of alluvial materials.
Flow in culverts, over dams, and around bridge piers can cause erosion of the bed. This erosion can damage the environment and expose or unsettle the foundations of the structure. Therefore, good knowledge of the mechanics of sediment transport in a built environment are important for civil and hydraulic engineers.
When suspended sediment transport is increased due to human activities, causing environmental problems including the filling of channels, it is called siltation after the grain-size fraction dominating the process.
For a fluid to begin transporting sediment that is currently at rest on a surface, the boundary (or bed) shear stress τ b {\displaystyle \tau _{b}} exerted by the fluid must exceed the critical shear stress τ c {\displaystyle \tau _{c}} for the initiation of motion of grains at the bed. This basic criterion for the initiation of motion can be written as τ b > τ c {\displaystyle \tau _{b}>\tau _{c}} .
This is typically represented by a comparison between a dimensionless shear stress τ b ∗ {\displaystyle \tau _{b}*} and a dimensionless critical shear stress τ c ∗ {\displaystyle \tau _{c}*} . The nondimensionalization allows the driving force on a particle (shear stress) to be compared with the resisting forces that would keep it stationary (particle density and size). This dimensionless shear stress, τ ∗ {\displaystyle \tau *} , is called the Shields parameter and is defined as τ ∗ = τ b / ( ( ρ s − ρ f ) g D ) {\displaystyle \tau *=\tau _{b}/((\rho _{s}-\rho _{f})gD)} . [ 12 ]
The criterion for the initiation of motion then becomes τ b ∗ > τ c ∗ {\displaystyle \tau _{b}*>\tau _{c}*} , as illustrated in the sketch below.
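As a minimal numerical sketch of this criterion, the code below computes the Shields parameter from an assumed bed shear stress, grain size, and densities, and checks it against a critical value. The input numbers and the particular critical value are illustrative assumptions; in practice τ*c comes from the Shields diagram, as discussed below.

```python
# Dimensionless (Shields) shear stress and a simple initiation-of-motion check.
# Input values are illustrative; tau_star_c should come from the Shields
# diagram or comparable empirical data.

def shields_stress(tau_b, rho_s, rho_f, g, D):
    """tau* = tau_b / ((rho_s - rho_f) * g * D)."""
    return tau_b / ((rho_s - rho_f) * g * D)

tau_b = 12.0      # bed shear stress, Pa (assumed)
rho_s = 2650.0    # sediment density, kg/m^3 (quartz)
rho_f = 1000.0    # water density, kg/m^3
g = 9.81          # gravitational acceleration, m/s^2
D = 0.01          # grain diameter, m (assumed)

tau_star = shields_stress(tau_b, rho_s, rho_f, g, D)
tau_star_c = 0.06  # critical Shields stress for a uniform bed (from the text)

print(f"tau* = {tau_star:.3f}, motion initiated: {tau_star > tau_star_c}")
```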
The equations included here describe sediment transport for clastic , or granular, sediment. They do not work for clays and muds because these types of flocculating sediments do not fit the geometric simplifications in the equations, and they also interact through electrostatic forces. The equations were also designed for fluvial sediment transport of particles carried along in a liquid flow, such as that in a river, canal, or other open channel.
Only one size of particle is considered in these equations. However, river beds are often formed by a mixture of sediment of various sizes. In the case of partial motion, where only part of the sediment mixture moves, the river bed becomes enriched in large gravel as the smaller sediments are washed away. The smaller sediments remaining under this layer of large gravel have a lower probability of movement, and total sediment transport decreases. This is called the armouring effect. [ 13 ] Other forms of sediment armouring, or decreased rates of sediment erosion, can be caused by carpets of microbial mats under conditions of high organic loading. [ 14 ]
The Shields diagram empirically shows how the dimensionless critical shear stress (i.e. the dimensionless shear stress required for the initiation of motion) is a function of a particular form of the particle Reynolds number , R e p {\displaystyle \mathrm {Re} _{p}} or Reynolds number related to the particle. This allows the criterion for the initiation of motion to be rewritten in terms of a solution for a specific version of the particle Reynolds number, called R e p ∗ {\displaystyle \mathrm {Re} _{p}*} .
This can then be solved by using the empirically derived Shields curve to find τ c ∗ {\displaystyle \tau _{c}*} as a function of a specific form of the particle Reynolds number called the boundary Reynolds number. The mathematical solution of the equation was given by Dey . [ 15 ]
In general, a particle Reynolds number has the form R e p = U p D / ν {\displaystyle \mathrm {Re} _{p}=U_{p}D/\nu } :
Where U p {\displaystyle U_{p}} is a characteristic particle velocity, D {\displaystyle D} is the grain diameter (a characteristic particle size), and ν {\displaystyle \nu } is the kinematic viscosity, which is given by the dynamic viscosity, μ {\displaystyle \mu } , divided by the fluid density, ρ f {\displaystyle {\rho _{f}}} .
The specific particle Reynolds number of interest is called the boundary Reynolds number, and it is formed by replacing the velocity term in the particle Reynolds number by the shear velocity , u ∗ {\displaystyle u_{*}} , which is a way of rewriting shear stress in terms of velocity.
where τ b {\displaystyle \tau _{b}} is the bed shear stress (described below), and κ {\displaystyle \kappa } is the von Kármán constant, with κ ≈ 0.4 {\displaystyle \kappa \approx 0.4} .
The particle Reynolds number is therefore given by:
The boundary Reynolds number can be used with the Shields diagram to empirically solve the equation
which solves the right-hand side of the equation
In order to solve the left-hand side, expanded as
the bed shear stress needs to be found, τ b {\displaystyle {\tau _{b}}} . There are several ways to solve for the bed shear stress. The simplest approach is to assume the flow is steady and uniform, using the reach-averaged depth and slope. Because it is difficult to measure shear stress in situ , this method is also one of the most commonly used. The method is known as the depth-slope product .
For a river undergoing approximately steady, uniform equilibrium flow, of approximately constant depth h and slope angle θ over the reach of interest, and whose width is much greater than its depth, the bed shear stress is given by some momentum considerations stating that the gravity force component in the flow direction equals exactly the friction force. [ 16 ] For a wide channel, it yields:
For shallow slope angles, which are found in almost all natural lowland streams, the small-angle formula shows that sin ( θ ) {\displaystyle \sin(\theta )} is approximately equal to tan ( θ ) {\displaystyle \tan(\theta )} , which is given by S {\displaystyle S} , the slope. Rewritten with this:
For the steady case, by extrapolating the depth-slope product and the equation for shear velocity:
The depth-slope product can be rewritten as:
u ∗ {\displaystyle u*} is related to the mean flow velocity, u ¯ {\displaystyle {\bar {u}}} , through the generalized Darcy–Weisbach friction factor , C f {\displaystyle C_{f}} , which is equal to the Darcy-Weisbach friction factor divided by 8 (for mathematical convenience). [ 17 ] Inserting this friction factor,
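The following sketch applies the steady, uniform, wide-channel assumptions above to compute a reach-averaged bed shear stress from the depth-slope product and the corresponding shear velocity; the channel depth and slope are illustrative values only.

```python
import math

# Depth-slope product for reach-averaged bed shear stress in a steady, uniform,
# wide channel (small-angle approximation), plus the resulting shear velocity.

rho_f = 1000.0   # water density, kg/m^3
g = 9.81         # gravitational acceleration, m/s^2
h = 1.5          # reach-averaged flow depth, m (assumed)
S = 0.002        # channel slope, dimensionless (assumed)

tau_b = rho_f * g * h * S           # bed shear stress, Pa
u_star = math.sqrt(tau_b / rho_f)   # shear velocity, m/s

print(f"tau_b = {tau_b:.1f} Pa, u* = {u_star:.3f} m/s")
```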
For all flows that cannot be simplified as a single-slope infinite channel (as in the depth-slope product , above), the bed shear stress can be locally found by applying the Saint-Venant equations for continuity , which consider accelerations within the flow.
The criterion for the initiation of motion, established earlier, states that
In this equation,
For a particular particle Reynolds number, τ c ∗ {\displaystyle \tau _{c}*} will be an empirical constant given by the Shields Curve or by another set of empirical data (depending on whether or not the grain size is uniform).
Therefore, the final equation to solve is:
Some assumptions allow the solution of the above equation.
The first assumption is that a good approximation of reach-averaged shear stress is given by the depth-slope product. The equation then can be rewritten as:
Moving and re-combining the terms produces:
where R is the submerged specific gravity of the sediment.
The second assumption is that the particle Reynolds number is high. This typically applies to particles of gravel-size or larger in a stream, and means the critical shear stress is constant. The Shields curve shows that for a bed with a uniform grain size,
Later researchers [ 18 ] have shown this value is closer to
for more uniformly sorted beds. Therefore the replacement
is used to insert both values at the end.
The equation now reads:
This final expression shows the product of the channel depth and slope is equal to the Shield's criterion times the submerged specific gravity of the particles times the particle diameter.
For a typical situation, such as quartz-rich sediment ( ρ s = 2650 k g m 3 ) {\displaystyle \left(\rho _{s}=2650{\frac {kg}{m^{3}}}\right)} in water ( ρ = 1000 k g m 3 ) {\displaystyle \left(\rho =1000{\frac {kg}{m^{3}}}\right)} , the submerged specific gravity is equal to 1.65.
Plugging this into the equation above,
For the Shields criterion of τ c ∗ = 0.06 {\displaystyle \tau _{c}*=0.06} , the product 0.06 × 1.65 = 0.099 is well within the standard margin of error of 0.1. Therefore, for a uniform bed,
For these situations, the product of the depth and slope of the flow should be 10% of the diameter of the median grain diameter.
The mixed-grain-size bed value is τ c ∗ = 0.03 {\displaystyle \tau _{c}*=0.03} , which is supported by more recent research as being more broadly applicable because most natural streams have mixed grain sizes. [ 18 ] If this value is used, and D is changed to D_50 ("50" for the 50th percentile, or the median grain size, as an appropriate value for a mixed-grain-size bed), the equation becomes:
which means that the depth times the slope should be about 5% of the median grain diameter in the case of a mixed-grain-size bed.
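Treating the last two results as rules of thumb, the short check below tests, for an assumed depth, slope, and median grain size, whether the depth-slope product reaches roughly 10% of the grain diameter (uniform bed) or roughly 5% of the median grain diameter (mixed bed); all input values are illustrative.

```python
# Quick check of the uniform-bed (h*S ~ 0.10*D) and mixed-bed (h*S ~ 0.05*D50)
# rules of thumb derived above.  Input values are illustrative.

h = 0.8      # flow depth, m (assumed)
S = 0.004    # channel slope (assumed)
D50 = 0.05   # median grain diameter, m (assumed, coarse gravel)

depth_slope = h * S
print(f"h*S = {depth_slope:.4f} m")
print("uniform bed, needs ~0.10*D50:", depth_slope > 0.10 * D50)
print("mixed bed,   needs ~0.05*D50:", depth_slope > 0.05 * D50)
```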
The sediments entrained in a flow can be transported along the bed as bed load in the form of sliding and rolling grains, or in suspension as suspended load advected by the main flow. [ 16 ] Some sediment materials may also come from the upstream reaches and be carried downstream in the form of wash load .
The location in the flow at which a particle is entrained is determined by the Rouse number . The density ρ s and diameter d of the sediment particle, together with the density ρ and kinematic viscosity ν of the fluid, determine in which part of the flow the sediment particle will be carried. [ 19 ]
Here, the Rouse number is given by P = w s / ( κ u ∗ ) {\displaystyle P=w_{s}/(\kappa u_{*})} . The term in the numerator is the (downward) sediment settling velocity w s , which is discussed below. The upward velocity scale on the grain is given as the product of the von Kármán constant , κ = 0.4, and the shear velocity , u ∗ .
The following table gives the approximate required Rouse numbers for transport as bed load , suspended load , and wash load . [ 19 ] [ 20 ]
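Since the table itself is not reproduced here, the sketch below uses commonly quoted approximate Rouse-number thresholds (treat them as assumptions) to classify the transport mode of a grain from its settling velocity and the shear velocity.

```python
# Rouse number P = w_s / (kappa * u_star) and a simple transport-mode
# classifier.  The thresholds are commonly quoted approximations and are an
# assumption here, since the table is not reproduced in the text.

def rouse_number(w_s, u_star, kappa=0.4):
    return w_s / (kappa * u_star)

def transport_mode(P):
    if P > 2.5:
        return "bed load"
    if P > 1.2:
        return "suspended load (partly suspended)"
    if P > 0.8:
        return "suspended load (fully suspended)"
    return "wash load"

P = rouse_number(w_s=0.03, u_star=0.10)   # illustrative values, m/s
print(f"P = {P:.2f} -> {transport_mode(P)}")
```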
The settling velocity (also called the "fall velocity" or " terminal velocity ") is a function of the particle Reynolds number . Generally, for small particles (laminar approximation), it can be calculated with Stokes' Law . For larger particles (turbulent particle Reynolds numbers), fall velocity is calculated with the turbulent drag law. Dietrich (1982) compiled a large amount of published data to which he empirically fit settling velocity curves. [ 21 ] Ferguson and Church (2006) analytically combined the expressions for Stokes flow and a turbulent drag law into a single equation that works for all sizes of sediment, and successfully tested it against the data of Dietrich. [ 22 ] Their equation is
In this equation w s is the sediment settling velocity, g is acceleration due to gravity, and D is mean sediment diameter. ν {\displaystyle \nu } is the kinematic viscosity of water , which is approximately 1.0 x 10 −6 m 2 /s for water at 20 °C.
C 1 {\displaystyle C_{1}} and C 2 {\displaystyle C_{2}} are constants related to the shape and smoothness of the grains.
The expression for fall velocity can be simplified so that it can be solved only in terms of D . We use the sieve diameters for natural grains, g = 9.8 {\displaystyle g=9.8} , and values given above for ν {\displaystyle \nu } and R {\displaystyle R} . From these parameters, the fall velocity is given by the expression:
Alternatively, settling velocity for a particle of sediment can be derived using Stokes Law assuming quiescent (or still) fluid in steady state . The resulting formulation for settling velocity is,
w s = g ( ρ s − ρ ρ ) d s e d 2 18 ν , {\displaystyle {\displaystyle w_{s}={\frac {g~({\frac {\rho _{s}-\rho }{\rho }})~d_{sed}^{2}}{18\nu }}},}
where g {\displaystyle g} is the gravitational acceleration ; ρ s {\displaystyle \rho _{s}} is the density of the sediment; ρ {\displaystyle \rho } is the density of water ; d s e d {\displaystyle d_{sed}} is the sediment particle diameter (commonly taken as the median particle diameter, often referred to as d 50 {\displaystyle d_{50}} in field studies); and ν {\displaystyle \nu } is the kinematic viscosity of water. The Stokes settling velocity can be thought of as the terminal velocity at which a particle's submerged weight (proportional to its volume) is balanced by the viscous drag of the fluid. Small particles have a slower settling velocity than larger particles of the same density, as seen in the figure. This has implications for many aspects of sediment transport, for example, how far downstream a particle might be advected in a river.
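The Stokes formulation above translates directly into code. The grain size and fluid properties below are illustrative, and the result is valid only for small particles in the laminar (low particle Reynolds number) regime, so the sketch also checks the particle Reynolds number.

```python
# Stokes settling velocity, w_s = g * ((rho_s - rho) / rho) * d^2 / (18 * nu),
# valid only for small particles at low particle Reynolds number.

g = 9.81         # gravitational acceleration, m/s^2
rho_s = 2650.0   # sediment density, kg/m^3 (quartz)
rho = 1000.0     # water density, kg/m^3
nu = 1.0e-6      # kinematic viscosity of water at 20 degC, m^2/s
d = 50e-6        # particle diameter, m (coarse silt, illustrative)

w_s = g * ((rho_s - rho) / rho) * d**2 / (18.0 * nu)
print(f"w_s = {w_s * 1000:.2f} mm/s")

# Check that the laminar assumption holds: Re_p = w_s * d / nu should be << 1.
Re_p = w_s * d / nu
print(f"particle Reynolds number = {Re_p:.2f}")
```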
In 1935, Filip Hjulström created the Hjulström curve , a graph which shows the relationship between the size of sediment and the velocity required to erode (lift it), transport it, or deposit it. [ 23 ] The graph is logarithmic .
Åke Sundborg later modified the Hjulström curve to show separate curves for the movement threshold corresponding to several water depths, as is necessary if the flow velocity rather than the boundary shear stress (as in the Shields diagram) is used for the flow strength. [ 24 ]
Nowadays this curve is mainly of historical interest, although its simplicity is still attractive. Among its drawbacks are that it does not take the water depth into account and, more importantly, that it does not show that sedimentation is caused by flow velocity deceleration while erosion is caused by flow acceleration . The dimensionless Shields diagram is now widely accepted for the initiation of sediment motion in rivers.
Formulas to calculate sediment transport rate exist for sediment moving in several different parts of the flow. These formulas are often segregated into bed load , suspended load , and wash load . They may sometimes also be segregated into bed material load and wash load.
Bed load moves by rolling, sliding, and hopping (or saltating ) over the bed, and moves at a small fraction of the fluid flow velocity. Bed load is generally thought to constitute 5–10% of the total sediment load in a stream, making it less important in terms of mass balance. However, the bed material load (the bed load plus the portion of the suspended load which comprises material derived from the bed) is often dominated by bed load, especially in gravel-bed rivers. This bed material load is the only part of the sediment load that actively interacts with the bed. As the bed load is an important component of that, it plays a major role in controlling the morphology of the channel.
Bed load transport rates are usually expressed as a function of excess dimensionless shear stress raised to some power. Excess dimensionless shear stress is a nondimensional measure of bed shear stress above the threshold for motion.
Bed load transport rates may also be given by a ratio of bed shear stress to critical shear stress, which is equivalent in both the dimensional and nondimensional cases. This ratio is called the "transport stage" ( T s or ϕ ) {\displaystyle (T_{s}{\text{ or }}\phi )} and is important in that it expresses bed shear stress as a multiple of the value of the criterion for the initiation of motion.
When used for sediment transport formulae, this ratio is typically raised to a power.
The majority of the published relations for bedload transport are given in dry sediment weight per unit channel width, b {\displaystyle b} (" breadth "):
Due to the difficulty of estimating bed load transport rates, these equations are typically only suitable for the situations for which they were designed.
The transport formula of Meyer-Peter and Müller, originally developed in 1948, [ 25 ] was designed for well- sorted fine gravel at a transport stage of about 8. [ 19 ] The formula uses the above nondimensionalization for shear stress, [ 19 ]
and Hans Einstein's nondimensionalization for sediment volumetric discharge per unit width [ 19 ]
Their formula reads:
Their experimentally determined value for τ ∗ c {\displaystyle \tau *_{c}} is 0.047, and is the third commonly used value for this (in addition to Parker's 0.03 and Shields' 0.06).
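As a sketch, the code below uses the classical form of the Meyer-Peter and Müller relation, q* = 8 (τ* − τ*c)^1.5 with τ*c = 0.047 (the coefficient 8 and critical value quoted in the text), and converts the result back to a dimensional volumetric transport rate per unit width via the Einstein nondimensionalization q_s = q* D sqrt(R g D). The input values are illustrative, and the original reference should be consulted for real applications.

```python
import math

# Classical Meyer-Peter and Mueller form with the Einstein nondimensionalization
# used to recover a dimensional transport rate.  Inputs are illustrative.

def mpm_qstar(tau_star, tau_star_c=0.047, coeff=8.0):
    """Dimensionless bed load transport rate, zero below the threshold."""
    excess = tau_star - tau_star_c
    return coeff * excess**1.5 if excess > 0.0 else 0.0

g = 9.81
R = 1.65          # submerged specific gravity of quartz in water
D = 0.005         # grain diameter, m (fine gravel, illustrative)
tau_star = 0.09   # dimensionless bed shear stress (illustrative)

q_star = mpm_qstar(tau_star)
q_s = q_star * D * math.sqrt(R * g * D)   # volumetric rate per unit width, m^2/s

print(f"q* = {q_star:.4f}, q_s = {q_s:.2e} m^2/s")
```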
Because of its broad use, some revisions to the formula have taken place over the years that show that the coefficient on the left ("8" above) is a function of the transport stage: [ 19 ] [ 26 ] [ 27 ] [ 28 ]
The variations in the coefficient were later generalized as a function of dimensionless shear stress: [ 19 ] [ 29 ]
In 2003, Peter Wilcock and Joanna Crowe (now Joanna Curran) published a sediment transport formula that works with multiple grain sizes across the sand and gravel range. [ 30 ] Their formula works with surface grain size distributions, as opposed to older models which use subsurface grain size distributions (and thereby implicitly infer a surface grain sorting ).
Their expression is more complicated than the basic sediment transport rules (such as that of Meyer-Peter and Müller) because it takes into account multiple grain sizes: this requires consideration of reference shear stresses for each grain size, the fraction of the total sediment supply that falls into each grain size class, and a "hiding function".
The "hiding function" takes into account the fact that, while small grains are inherently more mobile than large grains, on a mixed-grain-size bed, they may be trapped in deep pockets between large grains. Likewise, a large grain on a bed of small particles will be stuck in a much smaller pocket than if it were on a bed of grains of the same size. In gravel-bed rivers, this can cause "equal mobility", in which small grains can move just as easily as large ones. [ 31 ] As sand is added to the system, it moves away from the "equal mobility" portion of the hiding function to one in which grain size again matters. [ 30 ]
Their model is based on the transport stage, or ratio of bed shear stress to critical shear stress for the initiation of grain motion. Because their formula works with several grain sizes simultaneously, they define the critical shear stress for each grain size class, τ c , D i {\displaystyle \tau _{c,D_{i}}} , to be equal to a "reference shear stress", τ r i {\displaystyle \tau _{ri}} . [ 30 ]
They express their equations in terms of a dimensionless transport parameter, W i ∗ {\displaystyle W_{i}^{*}} (with the " ∗ {\displaystyle *} " indicating nondimensionality and the " i {\displaystyle _{i}} " indicating that it is a function of grain size):
q b i {\displaystyle q_{bi}} is the volumetric bed load transport rate of size class i {\displaystyle i} per unit channel width b {\displaystyle b} . F i {\displaystyle F_{i}} is the proportion of size class i {\displaystyle i} that is present on the bed.
They came up with two equations, depending on the transport stage, ϕ {\displaystyle \phi } . For ϕ < 1.35 {\displaystyle \phi <1.35} :
and for ϕ ≥ 1.35 {\displaystyle \phi \geq 1.35} :
This equation asymptotically reaches a constant value of W i ∗ {\displaystyle W_{i}^{*}} as ϕ {\displaystyle \phi } becomes large.
In 2002, Peter Wilcock and T. A. Kenworthy, following Peter Wilcock (1998), [ 32 ] published a sediment bed-load transport formula that works with only two sediment fractions, i.e. sand and gravel. [ 33 ] A mixed-size bed-load transport model using only two fractions offers practical advantages for both computation and conceptual modeling, because it takes into account the nonlinear effects that the presence of sand in a gravel bed has on the bed-load transport rates of both fractions. Compared with the Meyer-Peter and Müller formula, the two-fraction formula introduces a new ingredient: the proportion F i {\displaystyle F_{i}} of fraction i {\displaystyle i} on the bed surface, where the subscript i {\displaystyle _{i}} represents either the sand (s) or gravel (g) fraction. The proportion F i {\displaystyle F_{i}} , as a function of sand content f s {\displaystyle f_{s}} , physically represents the relative influence of the mechanisms controlling sand and gravel transport, associated with the change from a clast-supported to a matrix-supported gravel bed. Moreover, since f s {\displaystyle f_{s}} spans the range from 0 to 1, phenomena that vary with f s {\displaystyle f_{s}} include the relative size effects producing "hiding" of fine grains and "exposure" of coarse grains.
The "hiding" effect takes into account the fact that, while small grains are inherently more mobile than large grains, on a mixed-grain-size bed, they may be trapped in deep pockets between large grains. Likewise, a large grain on a bed of small particles will be stuck in a much smaller pocket than if it were on a bed of grains of the same size, which the Meyer-Peter and Müller formula refers to. In gravel-bed rivers, this can cause "equal mobility", in which small grains can move just as easily as large ones. [ 31 ] As sand is added to the system, it moves away from the "equal mobility" portion of the hiding function to one in which grain size again matters. [ 33 ]
Their model is based on the transport stage, i.e. ϕ {\displaystyle \phi } , or ratio of bed shear stress to critical shear stress for the initiation of grain motion. Because their formula works with only two fractions simultaneously, they define the critical shear stress for each of the two grain size classes, τ r i {\displaystyle \tau _{ri}} , where i {\displaystyle _{i}} represents either the sand (s) or gravel (g) fraction. The critical shear stress that represents the incipient motion for each of the two fractions is consistent with established values in the limit of pure sand and gravel beds and shows a sharp change with increasing sand content over the transition from a clast- to matrix-supported bed. [ 33 ]
They express their equations in terms of a dimensionless transport parameter, W i ∗ {\displaystyle W_{i}^{*}} (with the " ∗ {\displaystyle *} " indicating nondimensionality and the " i {\displaystyle _{i}} " indicating that it is a function of grain size):
q b i {\displaystyle q_{bi}} is the volumetric bed load transport rate of size class i {\displaystyle i} per unit channel width b {\displaystyle b} . F i {\displaystyle F_{i}} is the proportion of size class i {\displaystyle i} that is present on the bed.
They came up with two equations, depending on the transport stage, ϕ {\displaystyle \phi } . For ϕ < ϕ ′ {\displaystyle \phi <\phi ^{'}} :
and for ϕ ≥ ϕ ′ {\displaystyle \phi \geq \phi ^{'}} :
This equation asymptotically reaches a constant value of W i ∗ {\displaystyle W_{i}^{*}} as ϕ {\displaystyle \phi } becomes large and the symbols A , ϕ ′ , χ {\displaystyle A,\phi ^{'},\chi } have the following values:
In order to apply the above formulation, it is necessary to specify the characteristic grain sizes D s {\displaystyle D_{s}} for the sand portion and D g {\displaystyle D_{g}} for the gravel portion of the surface layer, the fractions F s {\displaystyle F_{s}} and F g {\displaystyle F_{g}} of sand and gravel, respectively in the surface layer, the submerged specific gravity of the sediment R and shear velocity associated with skin friction u ∗ {\displaystyle u_{*}} .
For the case in which the sand fraction is transported by the current over and through an immobile gravel bed, Kuhnle et al. (2013), [ 34 ] following the theoretical analysis of Pellachini (2011), [ 35 ] provide a new relationship for the bed load transport of the sand fraction when the gravel particles remain at rest. It is worth mentioning that Kuhnle et al. (2013) [ 34 ] applied the Wilcock and Kenworthy (2002) [ 33 ] formula to their experimental data and found that the predicted bed load rates of the sand fraction were about 10 times greater than measured, with the ratio approaching 1 as the sand elevation neared the top of the gravel layer. [ 34 ] They also hypothesized that the mismatch between predicted and measured sand bed load rates arises because the bed shear stress used in the Wilcock and Kenworthy (2002) [ 33 ] formula was larger than that actually available for transport within the gravel bed, owing to the sheltering effect of the gravel particles. [ 34 ] To overcome this mismatch, following Pellachini (2011), [ 35 ] they assumed that the bed shear stress available for transporting the sand would be some function of the so-called "Roughness Geometry Function" (RGF), [ 36 ] which represents the distribution of gravel bed elevations. The sand bed load formula then follows as: [ 34 ]
where
the subscript s {\displaystyle _{s}} refers to the sand fraction, s represents the ratio ρ s / ρ w {\displaystyle \rho _{s}/\rho _{w}} where ρ s {\displaystyle \rho _{s}} is the sand fraction density, A ( z s ) {\displaystyle A(z_{s})} is the RGF as a function of the sand level z s {\displaystyle z_{s}} within the gravel bed, τ b {\displaystyle \tau _{b}} is the bed shear stress available for sand transport and τ c s {\displaystyle \tau _{cs}} is the critical shear stress for incipient motion of the sand fraction, which was calculated graphically using the updated Shields-type relation of Miller et al. (1977). [ 37 ]
Suspended load is carried in the lower to middle parts of the flow, and moves at a large fraction of the mean flow velocity in the stream.
A common characterization of suspended sediment concentration in a flow is given by the Rouse Profile. This characterization works for the situation in which sediment concentration c 0 {\displaystyle c_{0}} at one particular elevation above the bed z 0 {\displaystyle z_{0}} can be quantified. It is given by the expression:
Here, z {\displaystyle z} is the elevation above the bed, c s {\displaystyle c_{s}} is the concentration of suspended sediment at that elevation, h {\displaystyle h} is the flow depth, P {\displaystyle P} is the Rouse number, and α {\displaystyle \alpha } relates the eddy viscosity for momentum K m {\displaystyle K_{m}} to the eddy diffusivity for sediment, which is approximately equal to one. [ 38 ]
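Assuming the classical form of the Rouse profile, c_s(z)/c_0 = [((h − z)/z) · (z_0/(h − z_0))]^P (the expression itself is not reproduced above, so this form is an assumption), the profile can be evaluated as in the sketch below with illustrative inputs.

```python
# Classical Rouse profile for relative suspended sediment concentration.
# The functional form and all input values are illustrative assumptions.

def rouse_concentration(z, c0, z0, h, P):
    """Concentration at elevation z given reference concentration c0 at z0."""
    return c0 * (((h - z) / z) * (z0 / (h - z0))) ** P

h = 2.0    # flow depth, m
z0 = 0.05  # reference elevation above the bed, m
c0 = 1.0   # reference concentration at z0 (relative units)
P = 1.0    # Rouse number (illustrative)

for z in (0.1, 0.5, 1.0, 1.5, 1.9):
    print(f"z = {z:.1f} m: c/c0 = {rouse_concentration(z, c0, z0, h, P):.3f}")
```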
Experimental work has shown that α {\displaystyle \alpha } ranges from 0.93 to 1.10 for sands and silts. [ 39 ]
The Rouse profile characterizes sediment concentrations because the Rouse number includes both turbulent mixing and settling under the weight of the particles. Turbulent mixing results in the net motion of particles from regions of high concentrations to low concentrations. Because particles settle downward, for all cases where the particles are not neutrally buoyant or sufficiently light that this settling velocity is negligible, there is a net negative concentration gradient as one goes upward in the flow. The Rouse Profile therefore gives the concentration profile that provides a balance between turbulent mixing (net upwards) of sediment and the downwards settling velocity of each particle.
Bed material load comprises the bed load and the portion of the suspended load that is sourced from the bed.
Three common bed material transport relations are the "Ackers-White", [ 40 ] "Engelund-Hansen", and "Yang" formulae. The first is for sand to granule -size gravel, and the second and third are for sand, [ 41 ] though Yang later expanded his formula to include fine gravel. That all of these formulae cover the sand-size range, and that two of them are exclusively for sand, reflects the fact that sediment in sand-bed rivers is commonly moved simultaneously as bed load and suspended load.
The bed material load formula of Engelund and Hansen is the only one to not include some kind of critical value for the initiation of sediment transport. It reads:
where q s ∗ {\displaystyle q_{s}*} is the Einstein nondimensionalization for sediment volumetric discharge per unit width, c f {\displaystyle c_{f}} is a friction factor, and τ ∗ {\displaystyle \tau *} is the Shields stress. The Engelund–Hansen formula is one of the few sediment transport formulae in which a threshold "critical shear stress" is absent.
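A minimal sketch of this relation, using the commonly published coefficient 0.05 and exponent 5/2 (an assumption here, since the formula is not written out above), is:

```python
# Engelund-Hansen bed material load in the commonly quoted form
#   q* = (0.05 / c_f) * tau*^(5/2),
# where c_f is a dimensionless friction factor and tau* the Shields stress.
# Coefficient, exponent, and input values are illustrative assumptions.

def engelund_hansen_qstar(tau_star, c_f):
    return 0.05 / c_f * tau_star**2.5

print(f"q* = {engelund_hansen_qstar(tau_star=0.2, c_f=0.005):.3f}")
```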
Wash load is carried within the water column as part of the flow, and therefore moves with the mean velocity of the main stream. Wash load concentrations are approximately uniform in the water column. This is described by the endmember case in which the Rouse number equals 0 (i.e. the settling velocity is far less than the turbulent mixing velocity), which leads to the prediction of a perfectly uniform vertical concentration profile of material.
Some authors have attempted formulations for the total sediment load carried in water. [ 42 ] [ 43 ] These formulas are designed largely for sand, as (depending on flow conditions) sand often can be carried as both bed load and suspended load in the same stream or shoreface.
Riverside intake structures used in water supply , canal diversions, and water cooling can experience entrainment of bed load (sand-size) sediments. These entrained sediments produce multiple deleterious effects such as reduction or blockage of intake capacity, feedwater pump impeller damage or vibration, and result in sediment deposition in downstream pipelines and canals. Structures that modify local near-field secondary currents are useful to mitigate these effects and limit or prevent bed load sediment entry. [ 44 ] | https://en.wikipedia.org/wiki/Sediment_transport |
Sedimentation is the deposition of sediments . [ 1 ] It takes place when particles in suspension settle out of the fluid in which they are entrained and come to rest against a barrier. This is due to their motion through the fluid in response to the forces acting on them: these forces can be due to gravity , centrifugal acceleration , or electromagnetism . Settling is the falling of suspended particles through the liquid, whereas sedimentation is the final result of the settling process.
In geology , sedimentation is the deposition of sediments which results in the formation of sedimentary rock . The term is broadly applied to the entire range of processes that result in the formation of sedimentary rock, from initial erosion through sediment transport and settling to the lithification of the sediments. However, the strict geological definition of sedimentation is the mechanical deposition of sediment particles from an initial suspension in air or water.
Sedimentation may pertain to objects of various sizes, ranging from large rocks in flowing water, to suspensions of dust and pollen particles, to cellular suspensions, to solutions of single molecules such as proteins and peptides . Even small molecules supply a sufficiently strong force to produce significant sedimentation.
Settling is the process by which particulates move towards the bottom of a liquid and form a sediment . Particles that experience a force, either due to gravity or due to centrifugal motion will tend to move in a uniform manner in the direction exerted by that force. For gravity settling, this means that the particles will tend to fall to the bottom of the vessel, forming sludge or slurry at the vessel base.
Settling is an important operation in many applications, such as mining , wastewater and drinking water treatment, biological science, and space propellant reignition. [ 2 ]
Classification of sedimentation: [ 3 ]
When particles settling from a suspension reach a hard boundary, the build-up of particle concentration at the boundary is opposed by the diffusion of the particles. The distribution of sediment near the boundary comes into sedimentation equilibrium . Measurements of the distribution yield information on the nature of the particles. [ 4 ] [ 5 ]
In geology , the term sedimentation is broadly applied to the entire range of processes that result in the formation of sedimentary rock, from initial formation of sediments by erosion of particles from rock outcrops, through sediment transport and settling, to the lithification of the sediments. However, the term is more particularly applied to the deposition of sediments, and in the strictest sense, it applies only to the mechanical deposition of sediment particles from an initial suspension in air or water. Sedimentation results in the formation of depositional landforms and the rocks that constitute the sedimentary record . [ 6 ] The building up of land surfaces by sedimentation, particularly in river valleys, is called aggradation . [ 7 ]
The rate of sedimentation is the thickness of sediment accumulated per unit time. [ 8 ] For suspended load, this can be expressed mathematically by the Exner equation , as sketched below. [ 9 ] Rates of sedimentation vary from less than 3 millimeters (0.12 in) per thousand years for pelagic sediment to several meters per thousand years in portions of major river deltas . However, long-term accumulation of sediments is determined less by the rate of sedimentation than by the rate of subsidence, which creates accommodation space for sediments to accumulate over geological time scales. Most sedimentation in the geologic record occurred in relatively brief depositional episodes separated by long intervals of nondeposition or even erosion. [ 10 ]
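A minimal numerical sketch of the Exner equation, dη/dt = −(1/(1 − λ_p)) dq_s/dx, is given below; the sediment flux profile, porosity, grid, and the simplification that the flux does not evolve with the bed are all illustrative assumptions.

```python
# One-dimensional Exner equation stepped forward with an explicit finite
# difference.  A real morphodynamic model would recompute q_s from the flow
# at every step; here the flux profile is held fixed for illustration.

nx, dx, dt = 50, 10.0, 100.0   # grid points, spacing (m), time step (s)
lambda_p = 0.4                 # bed porosity (assumed)

eta = [0.0] * nx               # bed elevation, m
# Assumed downstream-decreasing sediment flux (m^2/s), which forces deposition.
q_s = [1e-4 * (1.0 - i / (nx - 1)) for i in range(nx)]

for _ in range(1000):          # advance 1000 time steps (1e5 s total)
    for i in range(1, nx - 1):
        dqdx = (q_s[i + 1] - q_s[i - 1]) / (2.0 * dx)
        eta[i] -= dt / (1.0 - lambda_p) * dqdx

print(f"maximum deposition after 1e5 s: {max(eta):.3f} m")
```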
In estuarine environments, settling can be influenced by the presence or absence of vegetation. Trees such as mangroves are crucial to the attenuation of waves or currents, promoting the settlement of suspended particles. [ 11 ]
An undesired increased transport and sedimentation of suspended material is called siltation , and it is a major source of pollution in waterways in some parts of the world. [ 12 ] [ 13 ] High sedimentation rates can be a result of poor land management and a high frequency of flooding events. If not managed properly, it can be detrimental to fragile ecosystems on the receiving end, such as coral reefs. [ 14 ] Climate change also affects siltation rates. [ 15 ]
In chemistry, sedimentation has been used to measure the size of large molecules ( macromolecule ), where the force of gravity is augmented with centrifugal force in an ultracentrifuge . | https://en.wikipedia.org/wiki/Sedimentation |
The physical process of sedimentation (the act of depositing sediment ) has applications in water treatment , whereby gravity acts to remove suspended solids from water. [ 1 ] Solid particles entrained by the turbulence of moving water may be removed naturally by sedimentation in the still water of lakes and oceans. Settling basins are ponds constructed for the purpose of removing entrained solids by sedimentation. [ 2 ] Clarifiers are tanks built with mechanical means for continuous removal of solids being deposited by sedimentation; [ 3 ] however, clarification does not remove dissolved solids . [ 4 ]
Suspended solids (SS) is the mass of dry solids retained by a filter of a given porosity, relative to the volume of the water sample. This includes particles 10 μm and greater.
Colloids are particles of a size between 1 nm (0.001 μm) and 1 μm depending on the method of quantification. Because of Brownian motion and electrostatic forces balancing the gravity, they are not likely to settle naturally.
The limit sedimentation velocity of a particle is its theoretical descending speed in clear and still water. In settling process theory, a particle will settle only if:
Removal of suspended particles by sedimentation depends upon the size, zeta potential and specific gravity of those particles. Suspended solids retained on a filter may remain in suspension if their specific gravity is similar to water while very dense particles passing through the filter may settle. Settleable solids are measured as the visible volume accumulated at the bottom of an Imhoff cone after water has settled for one hour. [ 5 ]
Gravitational theory is employed, alongside the derivation from Newton's second law and the Navier–Stokes equations .
Stokes' law explains the relationship between the settling rate and the particle diameter. Under specific conditions, the particle settling rate is directly proportional to the square of particle diameter and inversely proportional to liquid viscosity. [ 6 ]
The settling velocity, together with the residence time needed for particles to settle in the tank, enables calculation of the tank volume. Precise design and operation of a sedimentation tank is highly important in order to keep the amount of sediment entering the diversion system to a minimum, while maintaining the transport system and stream stability so that the sediment diverted from the system can be removed. This is achieved by keeping the stream velocity as low as possible for as long as possible, which is feasible by widening the approach channel and lowering its floor to reduce the flow velocity, thus allowing sediment to settle out of suspension under gravity. The settling behavior of heavier particulates is also affected by turbulence. [ 7 ]
Although sedimentation might occur in tanks of other shapes, removal of accumulated solids is easiest with conveyor belts in rectangular tanks or with scrapers rotating around the central axis of circular tanks. [ 8 ] Settling basins and clarifiers should be designed based on the settling velocity (v s ) of the smallest particle to be theoretically 100% removed. The overflow rate is defined as the flow rate divided by the plan (surface) area of the basin, v o = Q/A. [ citation needed ]
In many countries this value is named as surface loading in m 3 /h per m 2 . Overflow rate is often used for flow over an edge (for example a weir) in the unit m 3 /h per m.
The unit of overflow rate is usually meters (or feet) per second, a velocity. Any particle with settling velocity ( v s ) greater than the overflow rate will settle out, while other particles will settle in the ratio v s / v o .
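The sketch below computes the overflow rate v_o = Q/A for an ideal basin and the resulting removal fraction for a particle of given settling velocity, following the v_s / v_o rule just described; the tank dimensions, flow rate, and settling velocity are illustrative.

```python
# Overflow rate v_o = Q / A for an ideal settling basin and the removal
# fraction for particles of a given settling velocity.  Values are illustrative.

Q = 0.05          # flow rate, m^3/s (assumed)
L, W = 20.0, 4.0  # basin length and width, m (assumed)

A = L * W         # plan (surface) area, m^2
v_o = Q / A       # overflow rate, m/s

def removal_fraction(v_s, v_o):
    """Ideal-basin removal: complete if v_s >= v_o, otherwise v_s / v_o."""
    return min(1.0, v_s / v_o)

v_s = 4.0e-4      # particle settling velocity, m/s (assumed)
print(f"v_o = {v_o:.2e} m/s, removal = {removal_fraction(v_s, v_o):.0%}")
```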
There are recommendations on the overflow rates for each design that ideally take into account the change in particle size as the solids move through the operation:
However, factors such as flow surges, wind shear, scour, and turbulence reduce the effectiveness of settling. To compensate for these less-than-ideal conditions, it is recommended to double the area calculated by the previous equation. [ 9 ] It is also important to equalize flow distribution at each point across the cross-section of the basin. Poor inlet and outlet designs can produce extremely poor flow characteristics for sedimentation. [ citation needed ]
Settling basins and clarifiers can be designed as long rectangles (Figure 1.a), that are hydraulically more stable and easier to control for large volumes. Circular clarifiers (Fig. 1.b) work as a common thickener (without the usage of rakes), or as upflow tanks (Fig. 1.c). [ citation needed ]
Sedimentation efficiency does not depend on the tank depth. Provided the forward velocity is low enough that the settled material is not re-suspended from the tank floor, the surface area remains the main parameter when designing a settling basin or clarifier, taking care that the depth is not too shallow. [ citation needed ]
Settling basins and clarifiers are designed to retain water so that suspended solids can settle. Based on sedimentation principles, suitable treatment technologies should be chosen according to the specific gravity, size, and shear resistance of the particles. Depending on the size, density, and physical properties of the solids, there are four types of sedimentation processes: discrete (unhindered) settling, flocculent settling, hindered (zone) settling, and compression settling.
Different factors control the sedimentation rate in each. [ 10 ]
Unhindered settling is a process that removes the discrete particles in a very low concentration without interference from nearby particles. In general, if the concentration of the solutions is lower than 500 mg/L total suspended solids, sedimentation will be considered discrete. [ 11 ] Concentrations of raceway effluent total suspended solids (TSS) in the west are usually less than 5 mg/L net. TSS concentrations of off-line settling basin effluent are less than 100 mg/L net. [ 12 ] The particles keep their size and shape during discrete settling, with an independent velocity. With such low concentrations of suspended particles, the probability of particle collisions is very low and consequently the rate of flocculation is small enough to be neglected for most calculations. Thus the surface area of the settling basin becomes the main factor of sedimentation rate. All continuous flow settling basins are divided into four parts: inlet zone, settling zone, sludge zone and outlet zone (Figure 2).
In the inlet zone, flow is established in a uniform forward direction. Sedimentation occurs in the settling zone as the water flows toward the outlet zone. The clarified liquid then flows out through the outlet zone.
In the sludge zone, settled solids collect; it is usually assumed that particles are removed from the water flow once they reach the sludge zone. [ 9 ]
In an ideal rectangular sedimentation tank, the critical particle enters at the top of the settling zone, and its settling velocity is the smallest value that still allows it to reach the sludge zone by the end of the settling zone. The velocity components of this critical particle are the settling velocity in the vertical direction (v s ) and the flow velocity in the horizontal direction (v h ).
From Figure 1, the time needed for the particle to settle is:
Since the surface area of the tank is WL, it follows that v s = Q/WL and v h = Q/WH, where Q is the flow rate and W, L, and H are the width, length, and depth of the tank.
According to Eq. 1, this is also a basic factor controlling sedimentation tank performance, which is called the overflow rate. [ 13 ]
Eq. 2 also shows that sedimentation efficiency is independent of the tank depth, provided the forward velocity is low enough that the settled mass is not re-suspended from the tank floor.
In a horizontal sedimentation tank, some particles do not follow the diagonal line in Fig. 1; they settle faster as they grow. This means that particles can grow and develop a higher settling velocity given a greater depth and a longer retention time. However, the chance of collisions would be even greater if the same retention time were spread over a longer, shallower tank. In practice, in order to avoid hydraulic short-circuiting, tanks are usually made 3–6 m deep with retention times of a few hours.
As the concentration of particles in a suspension is increased, a point is reached where particles are so close together that they no longer settle independently of one another and the velocity fields of the fluid displaced by adjacent particles overlap. There is also a net upward flow of liquid displaced by the settling particles. This results in a reduced particle-settling velocity, and the effect is known as hindered settling.
A common case of hindered settling occurs when the whole suspension tends to settle as a ‘blanket’ because of its extremely high particle concentration. This is known as zone settling, because several distinct zones, separated by concentration discontinuities, can be distinguished. Fig. 3 represents a typical batch-settling column test on a suspension exhibiting zone-settling characteristics. When such a suspension is left to stand in a settling column, a clear interface forms near the top of the column, separating the settling sludge mass from the clarified supernatant. As the suspension settles, this interface moves downward at a uniform speed. At the same time, an interface forms near the bottom between the settled material and the suspended blanket. The bottom interface moves upward and eventually meets the top interface moving downward, at which point settling of the suspension is complete.
At very high particle concentrations near the floor of the sedimentation tank, the settling particles come into contact with one another, and further settling can only occur through adjustment of the particle matrix, so the sedimentation rate decreases. This is illustrated by the lower region of the zone-settling diagram (Figure 3). In the compression zone, the settled solids are compressed by gravity under the weight of the overlying solids, and water is squeezed out as the pore space between particles shrinks.
Sedimentation in potable water treatment generally follows a step of chemical coagulation and flocculation , which groups particles together into larger flocs. This increases the settling speed of suspended solids and allows colloids to settle.
Sedimentation has been used to treat wastewater for millennia. [ 14 ]
Primary treatment of sewage is removal of floating and settleable solids through sedimentation. [ 15 ] Primary clarifiers reduce the content of suspended solids as well as the pollutant embedded in the suspended solids. [ 16 ] : 5–9 Because of the large amount of reagent necessary to treat domestic wastewater, preliminary chemical coagulation and flocculation are generally not used, remaining suspended solids being reduced by following stages of the system. However, coagulation and flocculation can be used for building a compact treatment plant (also called a "package treatment plant"), or for further polishing of the treated water. [ 17 ]
Sedimentation tanks called "secondary clarifiers" remove flocs of biological growth created in some methods of secondary treatment including activated sludge , trickling filters and rotating biological contactors . [ 16 ] : 13 | https://en.wikipedia.org/wiki/Sedimentation_(water_treatment) |
In chemistry , the sedimentation coefficient ( s ) of a particle characterizes its sedimentation (tendency to settle out of suspension ) during centrifugation . It is defined as the ratio of a particle's sedimentation velocity to the applied acceleration causing the sedimentation. [ 1 ] {\displaystyle s={\frac {v_{t}}{a}}}
The sedimentation speed v t is also the terminal velocity . It is constant because the force applied to a particle by gravity or by a centrifuge (typically in multiples of tens of thousands of gravities in an ultracentrifuge ) is balanced by the viscous resistance (or "drag") of the fluid (normally water ) through which the particle is moving. The applied acceleration a can be either the gravitational acceleration g , or more commonly the centrifugal acceleration ω 2 r . In the latter case, ω is the angular velocity of the rotor and r is the distance of a particle to the rotor axis ( radius ).
The viscous resistance for a spherical particle is given by Stokes' law : {\displaystyle F_{d}=6\pi \eta r_{0}v} where η is the viscosity of the medium, r 0 is the radius of the particle and v is the velocity of the particle. Stokes' law applies to small spheres in an effectively infinite body of fluid in the small-Reynolds-number limit.
The centrifugal force is given by the equation: {\displaystyle F_{c}=mr\omega ^{2}} where m is the excess mass of the particle over and above the mass of an equivalent volume of the fluid in which the particle is situated (see Archimedes' principle ) and r is the distance of the particle from the axis of rotation. When the two opposing forces, viscous and centrifugal, balance, the particle moves at constant (terminal) velocity. The terminal velocity for a spherical particle is given by the equation:
{\displaystyle v_{t}={\frac {mr\omega ^{2}}{6\pi \eta r_{0}}}}
Rearranging this equation gives the final formula:
{\displaystyle s={\frac {v_{t}}{r\omega ^{2}}}={\frac {m}{6\pi \eta r_{0}}}}
The sedimentation coefficient has units of time , expressed in svedbergs . One svedberg is 10 −13 s . The sedimentation coefficient normalizes the sedimentation rate of a particle to its applied acceleration. The result no longer depends on acceleration, but only on the properties of the particle and the fluid in which it is suspended. Sedimentation coefficients quoted in literature usually pertain to sedimentation in water at 20 °C.
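For a concrete sense of scale, the sketch below (all parameter values are assumptions chosen for illustration) evaluates the formulas above for a small sphere in water spun in an ultracentrifuge, and reports the sedimentation coefficient in svedbergs.

```python
import math

# Illustrative parameter values (assumed, order-of-magnitude only).
eta = 1.0e-3           # viscosity of water near 20 °C, Pa*s
r0 = 5.0e-9            # particle radius, m (a ~10 nm globular particle)
rho_p = 1350.0         # particle density, kg/m^3 (protein-like, assumed)
rho_f = 998.0          # fluid density, kg/m^3
omega = 2 * math.pi * 60000 / 60   # rotor speed of 60,000 rpm in rad/s
r = 0.07               # distance of the particle from the rotor axis, m

# Excess (buoyant) mass: particle mass minus the mass of displaced fluid.
V = 4.0 / 3.0 * math.pi * r0 ** 3
m_excess = (rho_p - rho_f) * V

# Centrifugal acceleration and terminal velocity where Stokes drag balances it:
# v_t = m * r * omega^2 / (6 * pi * eta * r0)
a = r * omega ** 2
v_t = m_excess * a / (6 * math.pi * eta * r0)

# Sedimentation coefficient s = v_t / a = m / (6 * pi * eta * r0)
s = m_excess / (6 * math.pi * eta * r0)

print(f"applied acceleration : {a / 9.81:.0f} g")
print(f"terminal velocity    : {v_t:.3e} m/s")
print(f"sedimentation coeff. : {s:.3e} s = {s / 1e-13:.1f} S (svedberg)")
```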
The sedimentation coefficient is in fact the amount of time it would take the particle to reach its terminal velocity under the given acceleration if there were no drag.
The above equation shows that s is proportional to m and inversely proportional to r 0 . Also for non-spherical particles of a given shape, s is proportional to m and inversely proportional to some characteristic dimension with units of length.
For a given shape, m is proportional to the size to the third power, so larger, heavier particles sediment faster and have higher svedberg, or s , values. Sedimentation coefficients are, however, not additive. When two particles bind together, the shape will be different from the shapes of the original particles. Even if the shape were the same, the ratio of excess mass to size would not be equal to the sum of the ratios for the starting particles. Thus, when measured separately they have svedberg values that do not add up to that of the bound particle. For example ribosomes are typically identified by their sedimentation coefficient. The 70 S ribosome from bacteria has a sedimentation coefficient of 70 svedberg, although it is composed of a 50 S subunit and a 30 S subunit.
The sedimentation coefficient is typically dependent on the concentration of the solute (i.e. a macromolecular solute such as a protein). Despite 80+ years of study, there is not yet a consensus on the way to perfectly model this relationship while also taking into account all possible non-ideal terms to account for the diverse possible sizes, shapes, and densities of molecular solutes. [ 2 ] But in most simple cases, one of two equations can be used to describe the relationship between the sedimentation coefficient and the solute concentration:
{\displaystyle {\frac {1}{s}}={\frac {1}{s^{\circ }}}(1+k_{s}c)}
For compact and symmetrical macromolecular solutes (i.e. globular proteins), a weaker dependence of the sedimentation coefficient vs concentration allows adequate accuracy through an approximated form of the previous equation: [ 2 ] [ 3 ]
{\displaystyle s=s^{\circ }(1-k_{s}c)}
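A short sketch (with an assumed s° and k s ) shows how the exact and linearized forms agree at low concentration and diverge as c grows.

```python
# Compare the two concentration-dependence models for s (assumed parameters).
s0 = 4.5e-13   # sedimentation coefficient at infinite dilution, s (assumed)
ks = 0.009     # concentration-dependence coefficient, mL/mg (assumed)

for c in (0.1, 1.0, 5.0, 10.0):          # solute concentration, mg/mL
    s_exact = s0 / (1.0 + ks * c)        # from 1/s = (1/s0)(1 + ks*c)
    s_linear = s0 * (1.0 - ks * c)       # linearized form for compact solutes
    print(f"c = {c:5.1f} mg/mL   exact = {s_exact / 1e-13:.3f} S   "
          f"linear = {s_linear / 1e-13:.3f} S")
```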
During a single ultracentrifuge experiment, the sedimentation coefficient of compounds with a significant concentration dependence changes over time. Using the differential equation for the ultracentrifuge, s may be expressed as the following power series in time for any particular relation between s and c .
{\displaystyle s_{t}=s_{i}(1+at+bt^{2}+...)} | https://en.wikipedia.org/wiki/Sedimentation_coefficient
Sedimentation equilibrium in a suspension of different particles, such as molecules , exists when the rate of transport of each material in any one direction due to sedimentation equals the rate of transport in the opposite direction due to diffusion . Sedimentation is due to an external force, such as gravity or centrifugal force in a centrifuge.
It was discovered for colloids by Jean Baptiste Perrin for which he received the Nobel Prize in Physics in 1926. [ 1 ]
In a colloid , the colloidal particles are said to be in sedimentation equilibrium if the rate of sedimentation is equal to the rate of movement from Brownian motion . For dilute colloids, this is described using the Laplace-Perrin distribution law:
{\displaystyle \Phi (z)=\Phi _{0}\exp {\biggl (}-{\frac {m^{*}g}{k_{B}T}}z{\biggr )}=\Phi _{0}e^{-z/l_{g}}}
where
Φ ( z ) {\displaystyle \Phi (z)} is the colloidal particle volume fraction as a function of vertical distance z {\displaystyle z} above reference point z = 0 {\displaystyle z=0} ,
Φ 0 {\displaystyle \Phi _{0}} is the colloidal particle volume fraction at reference point z = 0 {\displaystyle z=0} ,
m ∗ {\displaystyle m^{*}} is the buoyant mass of the colloidal particles,
g {\displaystyle g} is the standard acceleration due to gravity ,
k B {\displaystyle k_{B}} is the Boltzmann constant ,
T {\displaystyle T} is the absolute temperature ,
and l g {\displaystyle l_{g}} is the sedimentation length.
The buoyant mass is calculated using {\displaystyle m^{*}=\Delta \rho V_{P}={\frac {4}{3}}\pi \Delta \rho R^{3}}
where Δ ρ {\displaystyle \Delta \rho } is the difference in mass density between the colloidal particles and the suspension medium, and V P {\displaystyle V_{P}} is the colloidal particle volume found using the volume of a sphere ( R {\displaystyle R} is the radius of the colloidal particle).
The Laplace-Perrin distribution law can be rearranged to give the sedimentation length l g {\displaystyle l_{g}} . The sedimentation length describes the probability of finding a colloidal particle at a height z {\displaystyle z} above the point of reference z = 0 {\displaystyle z=0} . At the length l g {\displaystyle l_{g}} above the reference point, the concentration of colloidal particles decreases by a factor of e {\displaystyle e} .
{\displaystyle l_{g}={\frac {k_{B}T}{m^{*}g}}}
If the sedimentation length is much greater than the diameter d {\displaystyle d} of the colloidal particles ( l g >> d {\displaystyle l_{g}>>d} ), the particles can diffuse a distance greater than this diameter, and the substance remains a suspension. However, if the sedimentation length is less than the diameter ( l g < d {\displaystyle l_{g}<d} ), the particles can only diffuse over a much shorter length. They will sediment under the influence of gravity and settle to the bottom of the container. The substance can no longer be considered a colloidal suspension. It may become a colloidal suspension again if an action is taken to re-suspend the colloidal particles, such as stirring the colloid. [ 2 ]
The difference in mass density Δ ρ {\displaystyle \Delta \rho } between the colloidal particles of mass density ρ 1 {\displaystyle \rho _{1}} and the medium of suspension of mass density ρ 2 {\displaystyle \rho _{2}} , together with the diameter of the particles, influences the value of l g {\displaystyle l_{g}} . As an example, consider a colloidal suspension of polyethylene particles in water, and three different values for the diameter of the particles: 0.1 μm, 1 μm and 10 μm. The volume of a colloidal particle can be calculated using the volume of a sphere {\displaystyle V={\frac {4}{3}}\pi R^{3}} .
ρ 1 {\displaystyle \rho _{1}} is the mass density of polyethylene, approximately 920 kg/m 3 on average, [ 3 ] and ρ 2 {\displaystyle \rho _{2}} is the mass density of water, approximately 1000 kg/m 3 at room temperature (293 K). [ 4 ] Therefore Δ ρ = ρ 1 − ρ 2 {\displaystyle \Delta \rho =\rho _{1}-\rho _{2}} is −80 kg/m 3 .
Generally, the magnitude of l g {\displaystyle l_{g}} decreases with d 3 {\displaystyle d^{3}} . For the 0.1 μm diameter particle, l g {\displaystyle l_{g}} is larger than the diameter, and the particles will be able to diffuse. For the 10 μm diameter particle, the magnitude of l g {\displaystyle l_{g}} is much smaller than the diameter. Because l g {\displaystyle l_{g}} is negative (the particles are less dense than water), the particles will cream, and the substance will no longer be a colloidal suspension.
In this example, the difference in mass density Δ ρ {\displaystyle \Delta \rho } is relatively small. Consider a colloid with particles much denser than polyethylene, for example silicon with a mass density of approximately 2330 kg/m 3 . [ 4 ] If these particles are suspended in water, Δ ρ {\displaystyle \Delta \rho } will be 1330 kg/m 3 . l g {\displaystyle l_{g}} will decrease as Δ ρ {\displaystyle \Delta \rho } increases. For example, if the particles had a diameter of 10 μm, the sedimentation length would be 5.92×10 −4 μm, one order of magnitude smaller than for polyethylene particles. Also, because the particles are denser than water, l g {\displaystyle l_{g}} is positive and the particles will sediment.
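The arithmetic of this example can be reproduced with the short sketch below (constants and densities as quoted above; the particle diameters are the three illustrative values used in the text).

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 293.0           # temperature, K (room temperature as in the text)
g = 9.81            # standard acceleration due to gravity, m/s^2

def sedimentation_length(diameter_m, rho_particle, rho_medium):
    """l_g = kB*T / (m* g), with buoyant mass m* = (rho_p - rho_m) * (4/3) pi R^3."""
    R = diameter_m / 2.0
    m_star = (rho_particle - rho_medium) * (4.0 / 3.0) * math.pi * R ** 3
    return kB * T / (m_star * g)

rho_water = 1000.0
for name, rho in (("polyethylene", 920.0), ("silicon", 2330.0)):
    for d_um in (0.1, 1.0, 10.0):
        lg = sedimentation_length(d_um * 1e-6, rho, rho_water)
        # A negative l_g means the particles are less dense than the medium and cream.
        print(f"{name:12s} d = {d_um:5.1f} um   l_g = {lg * 1e6:12.3e} um")
```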
Modern applications use the analytical ultracentrifuge . The theoretical basis for the measurements is developed from the Mason-Weaver equation . The advantage of using analytical sedimentation equilibrium analysis for the molecular weight of proteins and their interacting mixtures is that it avoids the need to derive a frictional coefficient , which is otherwise required for the interpretation of dynamic sedimentation .
Sedimentation equilibrium can be used to determine molecular mass . It forms the basis for an analytical ultracentrifugation method for measuring molecular masses, such as those of proteins , in solution. | https://en.wikipedia.org/wiki/Sedimentation_equilibrium |
Sedimentation potential occurs when dispersed particles move under the influence of either gravity or centrifugation or electricity in a medium. This motion disrupts the equilibrium symmetry of the particle's double layer . While the particle moves, the ions in the electric double layer lag behind due to the liquid flow. This causes a slight displacement between the surface charge and the electric charge of the diffuse layer . As a result, the moving particle creates a dipole moment . The sum of all of the dipoles generates an electric field which is called sedimentation potential . It can be measured with an open electrical circuit, which is also called sedimentation current .
There are detailed descriptions of this effect in many books on colloid and interface science . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ]
Electrokinetic phenomena are a family of several different effects that occur in heterogeneous fluids or in porous bodies filled with fluid. Each involves the effect of some outside influence on a particle, resulting in a net electrokinetic effect.
The common source of all these effects is the interfacial 'double layer' of charges. Particles influenced by an external force generate tangential motion of a fluid with respect to an adjacent charged surface. This force may arise from an electric field, a pressure gradient, a concentration gradient, or gravity. In addition, the moving phase might be either the continuous fluid or the dispersed phase.
Sedimentation potential is the field of electrokinetic phenomena dealing with the generation of an electric field by sedimenting colloid particles.
This phenomenon was first discovered by Friedrich Ernst Dorn in 1879. He observed that a vertical electric field had developed in a suspension of glass beads in water, as the beads were settling. This was the origin of sedimentation potential, which is often referred to as the Dorn effect.
Smoluchowski built the first models to calculate the potential in the early 1900s. Booth created a general theory on sedimentation potential in 1954 based on Overbeek's 1943 theory on electrophoresis. In 1980, Stigter extended Booth's model to allow for higher surface potentials. Ohshima created a model based on O'Brien and White 's 1978 model used to analyze the sedimentation velocity of a single charged sphere and the sedimentation potential of a dilute suspension.
As a charged particle moves under a gravitational or centrifugal force, an electric potential is induced. While the particle moves, ions in the electric double layer lag behind due to the liquid flow, creating a net dipole moment. The sum of all dipoles on the particles is what causes the sedimentation potential. Sedimentation potential has the opposite effect compared to electrophoresis , where an electric field is applied to the system. Ionic conductivity is often referred to when dealing with sedimentation potential.
The following relation provides a measure of the sedimentation potential due to the settling of charged spheres. It was first derived by Smoluchowski in 1903 and 1921. The relationship only holds true for non-overlapping electric double layers and for dilute suspensions. In 1954, Booth showed that it held for Pyrex glass powder settling in a KCl solution. From this relation, the sedimentation potential E S is independent of the particle radius, and E S → 0 as Φ p → 0 (a single particle).
Smoluchowski's sedimentation potential is defined where ε 0 is the permittivity of free space, D the dimensionless dielectric constant, ξ the zeta potential, g the acceleration due to gravity, Φ the particle volume fraction, ρ the particle density, ρ o the medium density, λ the specific volume conductivity, and η the viscosity. [ 8 ]
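The equation itself is not reproduced in this text. A commonly quoted Smoluchowski form consistent with the variables listed above is E S = ε 0 D ξ g Φ (ρ − ρ o ) / (λ η); the sketch below evaluates it for illustrative parameter values, and both the exact form and the numbers should be treated as assumptions rather than as values from the cited reference.

```python
# Hedged sketch: Smoluchowski-type estimate of the sedimentation potential (field)
# for a dilute suspension of settling charged spheres. The algebraic form and every
# parameter value below are assumptions for illustration only.

eps0 = 8.854e-12   # permittivity of free space, F/m
D = 78.5           # dimensionless dielectric constant of water
zeta = -0.030      # zeta potential, V (assumed -30 mV)
g = 9.81           # acceleration due to gravity, m/s^2
phi = 0.01         # particle volume fraction (dilute, assumed)
rho_p = 2650.0     # particle density, kg/m^3 (silica-like, assumed)
rho_0 = 1000.0     # medium density, kg/m^3
lam = 0.01         # specific conductivity of the medium, S/m (assumed)
eta = 1.0e-3       # viscosity of the medium, Pa*s

# Assumed form: E_S = eps0 * D * zeta * g * phi * (rho_p - rho_0) / (lam * eta)
E_s = eps0 * D * zeta * g * phi * (rho_p - rho_0) / (lam * eta)
print(f"estimated sedimentation field E_S = {E_s * 1e3:.3f} mV/m")
```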
Smoluchowski developed the equation under five assumptions:
Where D i is the diffusion coefficient of the ith solute species, and n i∞ is the number concentration of electrolyte solution.
Ohshima's model was developed in 1984 and was originally used to analyze the sedimentation velocity of a single charged sphere and the sedimentation potential of a dilute suspension. The model provided below holds true for dilute suspensions of low zeta potential, i.e. eζ/k B T ≤ 2
Sedimentation potential is measured by attaching electrodes to a glass column filled with the dispersion of interest. A voltmeter is attached to measure the potential generated from the suspension. To account for different geometries of the electrode, the column is typically rotated 180 degrees while measuring the potential. This difference in potential through rotation by 180 degrees is twice the sedimentation potential. The zeta potential can be determined through measurement by sedimentation potential, as the concentration, conductivity of the suspension, density of the particle, and potential difference are known. By rotating the column 180 degrees, drift and geometry differences of the column can be ignored. [ 9 ]
When dealing with the case of concentrated systems, the zeta potential can be determined through measurement of the sedimentation potential E s {\displaystyle E_{s}} , from the potential difference relative to the distance between the electrodes. The other parameters represent the following: η {\displaystyle \eta } the viscosity of the medium; λ {\displaystyle \lambda } the bulk conductivity; ε r {\displaystyle \varepsilon _{r}} the relative permittivity of the medium; ε 0 {\displaystyle \varepsilon _{0}} the permittivity of free space; ρ {\displaystyle \rho } the density of the particle; ρ 0 {\displaystyle \rho _{0}} the density of the medium; g {\displaystyle g} is the acceleration due to gravity; and σ ∞ is the electrical conductivity of the bulk electrolyte solution. [ 9 ]
An improved design cell was developed to determine sedimentation potential, specific conductivity, volume fraction of the solids as well as pH. Two pairs of electrodes are used in this set up, one to measure potential difference and the other for resistance. A flip switch is utilized to avoid polarization of the resistance electrodes and buildup of charge by alternating the current. The pH of the system could be monitored and the electrolyte was drawn into the tube using a vacuum pump. [ 10 ]
Sedimentation field flow fractionation (SFFF) is a non-destructive separation technique which can be used for both separation, and collecting fractions. Some applications of SFFF include characterization of particle size of latex materials for adhesives, coatings and paints, colloidal silica for binders, coatings and compounding agents, titanium oxide pigments for paints, paper and textiles, emulsion for soft drinks, and biological materials like viruses and liposomes. [ 11 ]
Some main aspects of SFFF include: it provides high-resolution possibilities for size distribution measurements with high precision, the resolution is dependent on experimental conditions, the typical analysis time is 1 to 2 hours, and it is a non-destructive technique which offers the possibility of collecting fractions. [ 11 ]
As sedimentation field flow fractionation (SFFF) is one of the field flow fractionation separation techniques, it is appropriate for fractionation and characterization of particulate materials and soluble samples in the colloid size range. Differences in the interaction between a centrifugal force field and particles of different masses or sizes lead to the separation. An exponential distribution of particles of a certain size or weight results from Brownian motion. Some of the assumptions made to develop the theoretical equations include that there is no interaction between individual particles and that equilibrium can occur anywhere in the separation channels. [ 11 ]
Various combinations of the driving force and moving phase determine various electrokinetic effects. Following "Fundamentals of Interface and Colloid Science" by Lyklema (1995), the complete family of electrokinetic phenomena includes: | https://en.wikipedia.org/wiki/Sedimentation_potential |
A sednoid is a trans-Neptunian object with a large semi-major axis and a high perihelion , similar to the orbit of the dwarf planet Sedna . The consensus among astronomers is that there are only four objects that are known from this population: Sedna, 2012 VP 113 , 541132 Leleākūhonua ( 2015 TG 387 ), and 2023 KQ 14 [ citation needed ] . All four have perihelia greater than 60 AU . [ 1 ] These objects lie outside an apparently nearly empty gap in the Solar System and have no significant interaction with the planets. [ citation needed ] They are usually grouped [ by whom? ] with the detached objects . Some astronomers [ 2 ] consider the sednoids to be Inner Oort Cloud (IOC) objects , though the inner Oort cloud , or Hills cloud , was originally predicted to lie beyond 2,000 AU, beyond the aphelia of the known sednoids. [ citation needed ]
One attempt at a precise definition of sednoids is any body with a perihelion greater than 50 AU and a semi-major axis greater than 150 AU . [ 3 ] [ 4 ] However, this definition applies to the objects 2013 SY 99 , 2020 MQ 53 , and 2021 RR 205 [ 5 ] which have perihelia beyond 50 AU and semi-major axes over 700 AU. Despite this, these objects are thought [ by whom? ] to not belong to the sednoids, but rather to the same dynamical class [ which? ] as 474640 Alicanto , 2014 SR 349 and 2010 GB 174 . [ 6 ] [ 7 ]
With their high eccentricities (greater than 0.8), sednoids are distinguished from the high-perihelion objects with moderate eccentricities that are in a stable resonance with Neptune, namely 2015 KQ 174 , 2015 FJ 345 , (612911) 2004 XR 190 ("Buffy"), (690420) 2014 FC 72 and 2014 FZ 71 . [ 8 ]
The sednoids' orbits cannot be explained by perturbations from the giant planets , [ 9 ] nor by interaction with the galactic tides . [ 3 ] If they formed in their current locations, their orbits must originally have been circular; otherwise accretion (the coalescence of smaller bodies into larger ones) would not have been possible because the large relative velocities between planetesimals would have been too disruptive. [ 10 ] Their present elliptical orbits can be explained by several hypotheses:
The first three known sednoids, like all of the more extreme detached objects (objects with semi-major axes > 150 AU and perihelia > 30 AU; the orbit of Neptune ), have a similar orientation ( argument of perihelion ) of ≈ 0° ( 338° ± 38° ). This is not due to an observational bias and is unexpected, because interaction with the giant planets should have randomized their arguments of perihelion (ω), [ 3 ] with precession periods between 40 Myr and 650 Myr and 1.5 Gyr for Sedna. [ 13 ] This suggests that one [ 3 ] or more [ 23 ] undiscovered massive perturbers may exist in the outer Solar System. A super-Earth at 250 AU would cause these objects to librate around ω = 0° ± 60° for billions of years. There are multiple possible configurations and a low-albedo super-Earth at that distance would have an apparent magnitude below the current all-sky-survey detection limits. This hypothetical super-Earth has been dubbed Planet Nine . Larger, more-distant perturbers would also be too faint to be detected. [ 3 ]
As of 2016 [update] , [ needs update ] 27 known objects have a semi-major axis greater than 150 AU, a perihelion beyond Neptune, an argument of perihelion of 340° ± 55° , and an observation arc of more than 1 year. [ 24 ] 2013 SY 99 , 2014 ST 373 , 2015 FJ 345 , 2021 RW 209 , (612911) 2004 XR 190 , (690420) 2014 FC 72 , 2014 US 277 , 2014 FZ 71 , and 2021 RR 205 are near the limit of perihelion of 50 AU, but are not considered sednoids.
On 1 October 2018, Leleākūhonua , then known as 2015 TG 387 , was announced with perihelion of 65 AU and a semi-major axis of 1094 AU. With an aphelion over 2100 AU, it brings the object further out than Sedna .
In late 2015, V774104 was announced at the Division for Planetary Science conference as a further candidate sednoid, but its observation arc was too short to know whether its perihelion was even outside Neptune's influence. [ 25 ] The talk about V774104 was probably meant to refer to Leleākūhonua ( 2015 TG 387 ) even though V774104 is the internal designation for non-sednoid 2015 TH 367 .
Sednoids might constitute a proper dynamical class, but they may have a heterogeneous origin; the spectral slope of 2012 VP 113 is very different from that of Sedna. [ 26 ]
Malena Rice and Gregory Laughlin applied a targeted shift-stacking search algorithm to analyze data from TESS sectors 18 and 19 looking for candidate outer Solar System objects. [ 27 ] Their search recovered known objects like Sedna and produced 17 new outer Solar System body candidates located at geocentric distances in the range 80–200 AU, that need follow-up observations with ground-based telescope resources for confirmation. Early results from a survey with the William Herschel Telescope aimed at recovering these distant TNO candidates have failed to confirm two of them. [ 28 ] [ 29 ]
Each of the proposed mechanisms for Sedna's extreme orbit would leave a distinct mark on the structure and dynamics of any wider population. If a trans-Neptunian planet were responsible, all such objects would share roughly the same perihelion (≈80 AU). If Sedna had been captured from another planetary system that rotated in the same direction as the Solar System, then all of its population would have orbits on relatively low inclinations and have semi-major axes ranging from 100 to 500 AU. If it rotated in the opposite direction, then two populations would form, one with low and one with high inclinations. The perturbations from passing stars would produce a wide variety of perihelia and inclinations, each dependent on the number and angle of such encounters. [ 30 ]
Acquiring a larger sample of such objects would therefore help in determining which scenario is most likely. [ 31 ] "I call Sedna a fossil record of the earliest Solar System", said Brown in 2006. "Eventually, when other fossil records are found, Sedna will help tell us how the Sun formed and the number of stars that were close to the Sun when it formed." [ 32 ] A 2007–2008 survey by Brown, Rabinowitz and Schwamb attempted to locate another member of Sedna's hypothetical population. Although the survey was sensitive to movement out to 1,000 AU and discovered the likely dwarf planet Gonggong , it detected no new sednoids. [ 31 ] Subsequent simulations incorporating the new data suggested about 40 Sedna-sized objects probably exist in this region, with the brightest being about Eris 's magnitude (−1.0). [ 31 ]
Following the discovery of Leleākūhonua, Sheppard et al. concluded that it implies a population of about 2 million Inner Oort Cloud objects larger than 40 km, with a total mass in the range of 1 × 10 22 kg , about the mass of Pluto and several times the mass of the asteroid belt . [ 33 ]
| https://en.wikipedia.org/wiki/Sednoid
Sedoheptulose-bisphosphatase (also sedoheptulose-1,7-bisphosphatase or SBPase , EC number 3.1.3.37; systematic name sedoheptulose-1,7-bisphosphate 1-phosphohydrolase ) is an enzyme that catalyzes the removal of a phosphate group from sedoheptulose 1,7-bisphosphate to produce sedoheptulose 7-phosphate . SBPase is an example of a phosphatase , or, more generally, a hydrolase . This enzyme participates in the Calvin cycle .
SBPase is a homodimeric protein, meaning that it is made up of two identical subunits. [ 2 ] The size of this protein varies between species, but is about 92,000 Da (two 46,000 Da subunits) in cucumber plant leaves. [ 3 ] The key functional domain controlling SBPase function involves a disulfide bond between two cysteine residues. [ 4 ] These two cysteine residues, Cys52 and Cys57, appear to be located in a flexible loop between the two subunits of the homodimer, [ 5 ] near the active site of the enzyme. Reduction of this regulatory disulfide bond by thioredoxin incites a conformational change in the active site, activating the enzyme. [ 6 ] Additionally, SBPase requires the presence of magnesium (Mg 2+ ) to be functionally active. [ 7 ] SBPase is bound to the stroma -facing side of the thylakoid membrane in the chloroplast in a plant. Some studies have suggested the SBPase may be part of a large (900 kDa) multi-enzyme complex along with a number of other photosynthetic enzymes. [ 8 ]
SBPase is involved in the regeneration of 5-carbon sugars during the Calvin cycle. Although SBPase has not historically been emphasized as an important control point in the Calvin cycle, it plays a large part in controlling the flux of carbon through the Calvin cycle. [ 9 ] Additionally, SBPase activity has been found to have a strong correlation with the amount of photosynthetic carbon fixation. [ 10 ] Like many Calvin cycle enzymes, SBPase is activated in the presence of light through a ferredoxin/thioredoxin system. [ 11 ] In the light reactions of photosynthesis, light energy powers the transport of electrons to eventually reduce ferredoxin. The enzyme ferredoxin-thioredoxin reductase uses reduced ferredoxin to reduce thioredoxin from the disulfide form to the dithiol. Finally, the reduced thioredoxin is used to reduce a cysteine-cysteine disulfide bond in SBPase to a dithiol, which converts the SBPase into its active form. [ 7 ]
SBPase has additional levels of regulation beyond the ferredoxin/thioredoxin system. Mg2+ concentration has a significant impact on the activity of SBPase and the rate of the reactions it catalyzes. [ 12 ] SBPase is inhibited by acidic conditions (low pH). This is a large contributor to the overall inhibition of carbon fixation when the pH is low inside the stroma of the chloroplast. [ 13 ] Finally, SBPase is subject to negative feedback regulation by sedoheptulose-7-phosphate and inorganic phosphate, the products of the reaction it catalyzes. [ 14 ]
SBPase and FBPase (fructose-1,6-bisphosphatase, EC 3.1.3.11) are both phosphatases that catalyze similar reactions during the Calvin cycle. The genes for SBPase and FBPase are related. Both genes are found in the nucleus in plants, and have bacterial ancestry. [ 15 ] SBPase is found across many species. In addition to being universally present in photosynthetic organisms, SBPase is found in a number of evolutionarily related, non-photosynthetic microorganisms. SBPase likely originated in red algae. [ 16 ]
Moreso than other enzymes in the Calvin cycle, SBPase levels have a significant impact on plant growth, photosynthetic ability, and response to environmental stresses. Small decreases in SBPase activity result in decreased photosynthetic carbon fixation and reduced plant biomass. [ 17 ] Specifically, decreased SBPase levels result in stunted plant organ growth and development compared to wild-type plants, [ 18 ] and starch levels decrease linearly with decreases in SBPase activity, suggesting that SBPase activity is a limiting factor to carbon assimilation. [ 19 ] This sensitivity of plants to decreased SBPase activity is significant, as SBPase itself is sensitive to oxidative damage and inactivation from environmental stresses. SBPase contains several catalytically relevant cysteine residues that are vulnerable to irreversible oxidative carbonylation by reactive oxygen species (ROS) , [ 20 ] particularly from hydroxyl radicals created during the production of hydrogen peroxide . [ 21 ] Carbonylation results in SBPase enzyme inactivation and subsequent growth retardation due to inhibition of carbon assimilation. [ 18 ] Oxidative carbonylation of SBPase can be induced by environmental pressures such as chilling, which causes an imbalance in metabolic processes leading to increased production of reactive oxygen species, particularly hydrogen peroxide. [ 21 ] Notably, chilling inhibits SBPase and a related enzyme, fructose bisphosphatase , but does not affect other reductively activated Calvin cycle enzymes. [ 22 ]
The sensitivity of plants to synthetically reduced or inhibited SBPase levels provides an opportunity for crop engineering. There are significant indications that transgenic plants which overexpress SBPase may be useful in improving food production efficiency by producing crops that are more resilient to environmental stresses, as well as have earlier maturation and higher yield. Overexpression of SBPase in transgenic tomato plants provided resistance to chilling stress, with the transgenic plants maintaining higher SBPase activity, increased carbon dioxide fixation, reduced electrolyte leakage and increased carbohydrate accumulation relative to wild-type plants under the same chilling stress. [ 21 ] It is also likely that transgenic plants would be more resilient to osmotic stress caused by drought or salinity, as the activation of SBPase is shown to be inhibited in chloroplasts exposed to hypertonic conditions, [ 23 ] though this has not been directly tested. Overexpression of SBPase in transgenic tobacco plants resulted in enhanced photosynthetic efficiency and growth. Specifically, transgenic plants exhibited greater biomass and improved carbon dioxide fixation, as well as an increase in RuBisCO activity. The plants grew significantly faster and larger than wild-type plants, with increased sucrose and starch levels. [ 24 ] | https://en.wikipedia.org/wiki/Sedoheptulose-bisphosphatase |
Sedoheptulose 7-phosphate is an intermediate in the pentose phosphate pathway . [ 1 ]
It is formed by transketolase and acted upon by transaldolase .
Sedoheptulokinase is an enzyme that uses sedoheptulose and ATP to produce ADP and sedoheptulose 7-phosphate.
Sedoheptulose-bisphosphatase is an enzyme that uses sedoheptulose 1,7-bisphosphate and H 2 O to produce sedoheptulose 7-phosphate and phosphate.
| https://en.wikipedia.org/wiki/Sedoheptulose_7-phosphate
Mass spectrometry is a scientific technique for measuring the mass-to-charge ratio of ions. It is often coupled to chromatographic techniques such as gas- or liquid chromatography and has found widespread adoption in the fields of analytical chemistry and biochemistry where it can be used to identify and characterize small molecules and proteins ( proteomics ). The large volume of data produced in a typical mass spectrometry experiment requires that computers be used for data storage and processing. Over the years, different manufacturers of mass spectrometers have developed various proprietary data formats for handling such data which makes it difficult for academic scientists to directly manipulate their data. To address this limitation, several open , XML -based data formats have recently been developed by the Trans-Proteomic Pipeline at the Institute for Systems Biology to facilitate data manipulation and innovation in the public sector. [ 1 ] These data formats are described here.
This format was one of the earliest attempts to supply a standardized file format for data exchange in mass spectrometry. JCAMP-DX was initially developed for infrared spectrometry. JCAMP-DX is an ASCII based format and therefore not very compact even though it includes standards for file compression. JCAMP was officially released in 1988. [ 2 ] Together with the American Society for Mass Spectrometry a JCAMP-DX format for mass spectrometry was developed with aim to preserve legacy data. [ 3 ]
The Analytical Data Interchange Format for Mass Spectrometry is a format for exchanging data. Many mass spectrometry software packages can read or write ANDI files. ANDI is specified in the ASTM E1947 Standard. [ 4 ] ANDI is based on netCDF which is a software tool library for writing and reading data files. ANDI was initially developed for chromatography-MS data and therefore was not used in the proteomics gold rush where new formats based on XML were developed. [ 5 ]
AnIML is a joint effort of IUPAC and ASTM International to create an XML-based standard that covers a wide variety of analytical techniques including mass spectrometry. [ 6 ]
mzData was the first attempt by the Proteomics Standards Initiative (PSI) from the Human Proteome Organization (HUPO) to create a standardized format for Mass Spectrometry data. [ 7 ] This format is now deprecated, and replaced by mzML. [ 8 ]
mzXML is a XML (eXtensible Markup Language) based common file format for proteomics mass spectrometric data. [ 9 ] [ 10 ] This format was developed at the Seattle Proteome Center/Institute for Systems Biology while the HUPO-PSI was trying to specify the standardized mzData format, and is still in use in the proteomics community.
Yet Another Format for Mass Spectrometry (YAFMS) is a proposal to save data in a four-table, serverless relational database schema, with data extraction and appending performed using SQL queries. [ 11 ]
Since having two formats (mzData and mzXML) for representing the same information was an undesirable state, a joint effort was set up by HUPO-PSI, the SPC/ISB and instrument vendors to create a unified standard borrowing the best aspects of both mzData and mzXML, and intended to replace them. Originally called dataXML, it was officially announced as mzML. [ 12 ] The first specification was published in June 2008. [ 13 ] This format was officially released at the 2008 American Society for Mass Spectrometry Meeting, and has since been relatively stable with very few updates.
On 1 June 2009, mzML 1.1.0 was released. There are no planned further changes as of 2013.
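As a rough illustration of how such XML-based raw-data files are laid out, the sketch below uses only the Python standard library to count the spectra and chromatograms in an mzML file. The file name is a placeholder, the element names reflect the general run → spectrumList → spectrum layout rather than the full schema, and dedicated reader libraries would normally be used in practice.

```python
# Minimal sketch: count spectra and chromatograms in an mzML file using only the
# standard library. "example.mzML" is a placeholder path; real workflows would
# normally use a dedicated mzML reader rather than raw XML parsing.
import xml.etree.ElementTree as ET

def local_name(tag):
    """Strip the XML namespace, e.g. '{http://psi.hupo.org/ms/mzml}spectrum'."""
    return tag.rsplit('}', 1)[-1]

def count_mzml_entries(path):
    spectra = 0
    chromatograms = 0
    # iterparse streams the file, so large runs do not have to fit in memory.
    for _, elem in ET.iterparse(path, events=("end",)):
        name = local_name(elem.tag)
        if name == "spectrum":
            spectra += 1
        elif name == "chromatogram":
            chromatograms += 1
        elem.clear()  # free the element once it has been counted
    return spectra, chromatograms

if __name__ == "__main__":
    n_spec, n_chrom = count_mzml_entries("example.mzML")
    print(f"spectra: {n_spec}, chromatograms: {n_chrom}")
```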
Instead of defining new file formats and writing converters for proprietary vendor formats a group of scientists proposed to define a common application program interface to shift the burden of standards compliance to the instrument manufacturers' existing data access libraries. [ 14 ]
The mz5 format addresses the performance problems of the previous XML based formats. It uses the mzML ontology, but saves the data using the HDF5 backend for reduced storage space requirements and improved read/write speed. [ 15 ]
The imzML standard was proposed to exchange data from mass spectrometry imaging in a standardized XML file based on the mzML ontology. It splits experimental data into XML and spectral data in a binary file. Both files are linked by a universally unique identifier . [ 16 ]
mzDB saves data in an SQLite database to save on storage space and improve access times as the data points can be queried from a relational database . [ 17 ]
Toffee is an open lossless file format for data-independent acquisition mass spectrometry. It leverages HDF5 and aims to achieve file sizes similar to those from the proprietary and closed vendor formats. [ 18 ]
mzMLb is another take on using a HDF5 backend for performant raw data saving. It, however, preserves the mzML XML data structure and stays compliant to the existing standard. [ 19 ]
The Allotrope Foundation curates a HDF5 and Triplestore based file format named Allotrope Data Format (ADF) and a flat JSON representation ASM short for Allotrope Simple Model. Both are based on the Allotrope Foundation Ontologies (AFO) and contain schemas for mass spectrometry and chromatography coupled with MS detectors. [ 20 ]
Below is a table of different file format extensions.
(*) Note that the RAW formats of each vendor are not interchangeable; software from one cannot handle the RAW files from another. (**) Micromass was acquired by Waters in 1997 (***) Finnigan is a division of Thermo
There are several viewers for mzXML, mzML and mzData. These viewers are of two types: Free Open Source Software (FOSS) or proprietary.
In the FOSS viewer category, one can find MZmine, [ 22 ] mineXpert2 (mzXML, mzML, native timsTOF, xy, MGF, BafAscii) [ 23 ] MS-Spectre, [ 24 ] TOPPView (mzXML, mzML and mzData), [ 25 ] Spectra Viewer, [ 26 ] SeeMS, [ 27 ] msInspect, [ 28 ] jmzML. [ 29 ]
In the proprietary category, one can find PEAKS, [ 30 ] Insilicos , [ 31 ] Mascot Distiller, [ 32 ] Elsci Peaksel. [ 33 ]
There is a viewer for ITA images. [ 34 ] ITA and ITM images can be parsed with the pySPM python library. [ 35 ]
Known converters for mzData to mzXML:
Known converters for mzXML:
Known converters for mzML:
Converters for proprietary formats:
Currently available converters are : | https://en.wikipedia.org/wiki/SeeMS |
Seeblatt ( [ˈzeː.blat] , German for 'lake leaf', plural Seeblätter ; Danish : søblad ; West Frisian : pompeblêd ; East Frisian: Pupkeblad) is the term for the stylized leaf of a water lily , used as a charge in heraldry. [ 1 ]
This charge is used in the heraldry of Germany, the Netherlands and Scandinavia, but not so much in France and Britain. Seeblätter feature prominently on the coat of arms of Denmark as well as on Danish coins . [ citation needed ]
In West Frisian , the term pompeblêd is used. The name is used to indicate the seven red lily leaf-shaped blades on the Frisian flag . The seven red pompeblêden (leaves of the yellow water lily and the European white waterlily ) refer to the medieval Frisian 'sea districts': more or less autonomous regions along the Southern North Sea coast from the city of Alkmaar to the Weser River. There never have been exactly seven of these administrative units, the number of seven bears the suggestion of 'a lot'. Late medieval sources identify seven Frisian districts, though with different names. The most important regions were West Friesland , Westergo , Oostergo , Hunsingo , Fivelingo , Reiderland , Emsingo , Brokmerland , Harlingerland and Rüstringen ( Jeverland and Butjadingen ). [ citation needed ] | https://en.wikipedia.org/wiki/Seeblatt |
Seed counting machines count seeds for research and packaging purposes. The machines typically provide total counts of seeds or batch sizes for packaging.
The first seed counters were developed to count legumes and other seeds which were large. [ 1 ]
Traditionally, the seed packaging industry packed seeds by weight but sold them by number. In order to assure the correct quantity of seeds, the distributors added a safety margin to the packed weight, like a bakers' dozen . This safety margin increased cost. By counting the seeds, the margin of error could be reduced and so costs reduced. [ 2 ]
Originally people counted seeds by hand, or used a trip board. The first seed-counting machine was the vibratory mechanical seed counter. Modern day electronic seed counters are faster and more accurate. [ 3 ]
In 1929 the US Bureau of Plant Industry worked with several seed companies to perfect a seed counter. [ 4 ] In 1962 an electric seed counter was developed by the USDA's Agricultural Marketing Service . The electronic counter operated by vibrating the seeds so that they moved to the edge of the counting machine. [ 1 ]
The machine pays for itself by replacing the labor-intensive, tedious task of manually counting seeds, which is inherently subject to human error . By contrast, the new devices, even in the early 1960s, boasted increased speed and “about 1 error in counting 10,000 seeds counted.” The accuracy helps lessen the need to build in safety margins for quantity , and the cost of the machinery can be more than paid for by reduced labor costs . [ 2 ]
In the 1970s other electronic seed counting advancements included an electric eye to count the seeds. Seed counting still involved vibrating the seed, but now the seed would fall through a seed hole. [ 5 ]
If the items are put onto the conveyor in a single file, then a simple counting mechanism may provide satisfactory results. However, such a mechanism is inherently slower than if the items were freely placed on the conveyor without posing such limitations. Thus, in the 2000s other parallel counting of multiple objects evolved, including devices that use multiple electromagnetic energy sources and receptors. [ 6 ]
At one time, the methodology included use of vacuum tubes , vacuum pumps , a light source and a photo transistor . The size needed to be adjusted so only one seed passes through at a time. To be useful, batch counters need to be commercially available. A single preset count facility is a plus, as is “adequate count capacity, the ability to provide external power supplies and [control of] ... the means to stop the picking up and counting of seeds.” [ 7 ]
In commercial operations, it is important for the counter to be automatic and accurate. For example, one commercial counter is capable of measuring the hundredth/thousandth grain weight for seeds , tablets , pearls , and small components. It adopts a far-infrared area sensor with a photosensitive area large enough to be "suitable for the sensitivity of all crops (millet-peanut)." Blockages and splashes are to be avoided. Adaptable speed variation adjustment helps "solve the contradiction between speed and accuracy, and ensure error-free counting (counting error of 0/1000)", while options such as manual feeding versus automatic cup changing "improve the counting efficiency, reduce labor intensity." Automatic discharge can obviate the need for the operator to constantly feed the vibrating plate. One counter is so fast that millet "counting can reach 2000 grains/min", and wheat and rice counting rates are also quoted. Suitability of the vibrating plate for different seeds is a consideration; it is useful to have an adjustable baffle at the exit of the bowl set "according to the diameter of the seeds (workpieces), only one seed (workpiece) at a time, not side by side, for all large and small seeds." [ 8 ]
Some seed counters use laser light . [ 9 ]
In counting, it is important to position one seed at a time by manipulating slit width when using a Photoelectric seed counter. [ 10 ]
Some are able to handle up to 23 sample containers. They can do this while maintaining notable accuracy. [ A ] General purpose electronic seed counters usually count seeds during free fall . They have achieved satisfactory error rates. [ 9 ] For example: "Counting errors of less than 0.4% at counting speeds of 400 to 1,180 seeds/min were obtained for seeds of nine different species ranging in size from corn (Zea mays L.) to trefoil (Lotus corniculatus L.). Under some conditions, the seed dispenser, a vibratory small parts feeder, segregated wheat kernels (Triticum aestivum L.) into weight classes dispensing heavier kernels first into the counting system." [ 11 ] [ 12 ] | https://en.wikipedia.org/wiki/Seed-counting_machine |
Seed balls , also known as earth balls or nendo dango ( Japanese : 粘土団子 ) , consist of seeds rolled within a ball of clay and other matter to assist germination. They are then thrown into vacant lots and over fences as a form of guerilla gardening . Matter such as humus and compost are often placed around the seeds to provide microbial inoculants . Cotton-fibres or liquefied paper are sometimes added to further protect the clay ball in particularly harsh habitats. An ancient technique, it was re-discovered by Japanese natural farming pioneer Masanobu Fukuoka .
The technique for creating seed balls was rediscovered by Japanese natural farming pioneer Masanobu Fukuoka . [ 1 ] The technique was also used, for instance, in ancient Egypt to repair farms after the annual spring flooding of the Nile. Masanobu Fukuoka developed his technique during the period of the Second World War, while working in a Japanese government lab as a plant scientist on the mountainous island of Shikoku. He wanted to find a technique that would increase food production without taking away from the land already allocated for traditional rice production which thrived in the volcanic rich soils of Japan. [ 2 ] [ 3 ]
To make a seed ball, generally about five measures of red clay by volume are combined with one measure of seeds. The balls are formed between 10 mm and 80 mm (about 1 ⁄ 2 " to 3") in diameter. After the seed balls have been formed, they must dry for 24–48 hours before use.
Seed bombing is the practice of introducing vegetation to land by throwing or dropping seed balls. It is used in modern aerial seeding as a way to deter seed predation. It has also been popularized by green movements such as guerrilla gardening as a way to introduce new plants to an environment.
The term "seed green-aide" was first used by Liz Christy in 1973 when she started the Green Guerillas . [ 4 ] The first seed green-aides were made from condoms filled with tomato seeds, and fertilizer. [ 5 ] They were tossed over fences onto empty lots in New York City in order to make the neighborhoods look better. It was the start of the guerrilla gardening movement. [ 6 ] | https://en.wikipedia.org/wiki/Seed_ball |
Seed blanking is a plant disease injury causing the seed producing anatomy to contain no seeds despite otherwise normal development. This term is used to contrast with other causes of seed production failure, including but not limited to earlier or more widespread damage to the plant. [ 1 ] [ 2 ] [ 3 ] For one example, wheat blast causes widespread seed blanking. [ 4 ]
| https://en.wikipedia.org/wiki/Seed_blanking
In spermatophyte plants, seed dispersal is the movement, spread or transport of seeds away from the parent plant. [ 1 ] Plants have limited mobility and rely upon a variety of dispersal vectors to transport their seeds, including both abiotic vectors, such as the wind, and living ( biotic ) vectors such as birds. Seeds can be dispersed away from the parent plant individually or collectively, as well as dispersed in both space and time.
The patterns of seed dispersal are determined in large part by the dispersal mechanism and this has important implications for the demographic and genetic structure of plant populations, as well as migration patterns and species interactions. There are five main modes of seed dispersal: gravity , wind, ballistic, water, and by animals. Some plants are serotinous and only disperse their seeds in response to an environmental stimulus.
These modes are typically inferred based on adaptations, such as wings or fleshy fruit. [ 1 ] However, this simplified view may ignore complexity in dispersal. Plants can disperse via modes without possessing the typical associated adaptations and plant traits may be multifunctional. [ 2 ] [ 3 ]
Seed dispersal is likely to have several benefits for different plant species. Seed survival is often higher away from the parent plant. This higher survival may result from the actions of density-dependent seed and seedling predators and pathogens , which often target the high concentrations of seeds beneath adults. [ 4 ] Competition with adult plants may also be lower when seeds are transported away from their parent.
Seed dispersal also allows plants to reach specific habitats that are favorable for survival, a hypothesis known as directed dispersal . For example, Ocotea endresiana (Lauraceae) is a tree species from Latin America which is dispersed by several species of birds, including the three-wattled bellbird . Male bellbirds perch on dead trees in order to attract mates, and often defecate seeds beneath these perches where the seeds have a high chance of survival because of high light conditions and escape from fungal pathogens. [ 5 ] In the case of fleshy-fruited plants, seed-dispersal in animal guts (endozoochory) often enhances the amount, the speed, and the asynchrony of germination, which can have important plant benefits. [ 6 ]
Seeds dispersed by ants ( myrmecochory ) are not only dispersed short distances but are also buried underground by the ants. These seeds can thus avoid adverse environmental effects such as fire or drought, reach nutrient-rich microsites and survive longer than other seeds. [ 7 ] These features are peculiar to myrmecochory, which may thus provide additional benefits not present in other dispersal modes. [ 8 ]
Seed dispersal may also allow plants to colonize vacant habitats and even new geographic regions. [ 9 ] Dispersal distances and deposition sites depend on the movement range of the disperser, and longer dispersal distances are sometimes accomplished through diplochory , the sequential dispersal by two or more different dispersal mechanisms. In fact, recent evidence suggests that the majority of seed dispersal events involves more than one dispersal phase. [ 10 ]
Seed dispersal is sometimes split into autochory (when dispersal is attained using the plant's own means) and allochory (when obtained through external means).
Long-distance seed dispersal (LDD) is a type of spatial dispersal that is currently defined by two forms, proportional and actual distance. A plant's fitness and survival may heavily depend on this method of seed dispersal depending on certain environmental factors. The first form of LDD, proportional distance, measures the percentage of seeds (1% out of total number of seeds produced) that travel the farthest distance out of a 99% probability distribution. [ 11 ] [ 12 ] The proportional definition of LDD is in actuality a descriptor for more extreme dispersal events. An example of LDD would be that of a plant developing a specific dispersal vector or morphology in order to allow for the dispersal of its seeds over a great distance. The actual or absolute method identifies LDD as a literal distance. It classifies 1 km as the threshold distance for seed dispersal. Here, threshold means the minimum distance a plant can disperse its seeds and have it still count as LDD. [ 13 ] [ 12 ] There is a second, unmeasurable, form of LDD besides proportional and actual. This is known as the non-standard form. Non-standard LDD is when seed dispersal occurs in an unusual and difficult-to-predict manner. An example would be a rare or unique incident in which a normally-lemur-dependent deciduous tree of Madagascar was to have seeds transported to the coastline of South Africa via attachment to a mermaid purse (egg case) laid by a shark or skate. [ 14 ] [ 15 ] [ 16 ] A driving factor for the evolutionary significance of LDD is that it increases plant fitness by decreasing neighboring plant competition for offspring. However, it is still unclear today as to how specific traits, conditions and trade-offs (particularly within short seed dispersal) affect LDD evolution.
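As an illustration of the two definitions, the sketch below applies them to simulated dispersal distances; the log-normal kernel and its parameters are assumptions made purely for demonstration.

```python
# Illustrative sketch of the two LDD definitions applied to simulated data.
import random
import math

random.seed(42)
# Assumed log-normal dispersal kernel (median ~40 m), chosen only for illustration.
distances_m = [random.lognormvariate(math.log(40.0), 1.2) for _ in range(100_000)]

# Proportional definition: the distance exceeded only by the farthest-travelling 1% of seeds.
distances_m.sort()
threshold_99 = distances_m[int(0.99 * len(distances_m))]

# Absolute definition: fraction of seeds dispersing beyond the 1 km threshold.
frac_beyond_1km = sum(d > 1000.0 for d in distances_m) / len(distances_m)

print(f"99th-percentile dispersal distance: {threshold_99:.0f} m")
print(f"fraction of seeds beyond 1 km: {frac_beyond_1km:.2%}")
```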
Autochorous plants disperse their seed without any help from an external vector, which considerably limits the distance over which they can disperse their seed. [ 17 ] Two other types of autochory not described in detail here are blastochory , where the stem of the plant crawls along the ground to deposit its seed far from the base of the plant; and herpochory , where the seed crawls by means of trichomes or hygroscopic appendages (awns) and changes in humidity . [ 18 ]
Barochory , the use of gravity by the plant for dispersal, is a simple means of achieving seed dispersal. The effect of gravity on heavier fruits causes them to fall from the plant when ripe. Fruits exhibiting this type of dispersal include apples , coconuts and passionfruit and those with harder shells (which often roll away from the plant to gain more distance). Gravity dispersal also allows for later transmission by water or animals. [ 19 ]
Ballochory is a type of dispersal where the seed is forcefully ejected by explosive dehiscence of the fruit. Often the force that generates the explosion results from turgor pressure within the fruit or from internal hygroscopic tensions within the fruit. [ 17 ] Some examples of plants which disperse their seeds autochorously include: Arceuthobium spp. , Cardamine hirsuta , Ecballium elaterium , Euphorbia heterophylla , [ 20 ] Geranium spp. , Impatiens spp. , Sucrea spp , Raddia spp. [ 21 ] and others. An exceptional example of ballochory is Hura crepitans —this plant is commonly called the dynamite tree due to the sound of the fruit exploding. The explosions are powerful enough to throw the seed up to 100 meters. [ 22 ]
Witch hazel uses ballistic dispersal without explosive mechanisms by simply squeezing the seeds out at approx. 45 km/h (28 mph). [ 23 ]
Allochory refers to any of many types of seed dispersal where a vector or secondary agent is used to disperse seeds. These vectors may include wind, water, animals or others.
Wind dispersal ( anemochory ) is one of the more primitive means of dispersal. Wind dispersal can take on one of two primary forms: seeds or fruits can float on the breeze or, alternatively, they can flutter to the ground. [ 24 ] The classic examples of these dispersal mechanisms, in the temperate northern hemisphere, include dandelions , which have a feathery pappus attached to their fruits ( achenes ) and can be dispersed long distances, and maples , which have winged fruits ( samaras ) that flutter to the ground.
An important constraint on wind dispersal is the need for abundant seed production to maximize the likelihood of a seed landing in a site suitable for germination . Some wind-dispersed plants, such as the dandelion, can adjust their morphology in order to increase or decrease the rate of diaspore detachment. [ 25 ] There are also strong evolutionary constraints on this dispersal mechanism. For instance, Cody and Overton (1996) found that species in the Asteraceae on islands tended to have reduced dispersal capabilities (i.e., larger seed mass and smaller pappus) relative to the same species on the mainland. [ 26 ] Also, Helonias bullata , a species of perennial herb native to the United States, evolved to utilize wind dispersal as the primary seed dispersal mechanism; however, limited wind in its habitat prevents the seeds from successfully dispersing away from its parents, resulting in clustered populations. [ 27 ] Reliance on wind dispersal is common among many weedy or ruderal species. Unusual mechanisms of wind dispersal include tumbleweeds , where the entire plant (except for the roots) is blown by the wind. Physalis fruits, when not fully ripe, may sometimes be dispersed by wind due to the space between the fruit and the covering calyx , which acts as an air bladder.
Many aquatic (water dwelling) and some terrestrial (land dwelling) species use hydrochory , or seed dispersal through water. Seeds can travel for extremely long distances, depending on the specific mode of water dispersal; this especially applies to fruits which are waterproof and float on water.
The water lily is an example of such a plant. Water lilies' flowers make a fruit that floats in the water for a while and then drops down to the bottom to take root on the floor of the pond.
The seeds of palm trees can also be dispersed by water. If they grow near oceans, the seeds can be transported by ocean currents over long distances, allowing the seeds to be dispersed as far as other continents .
Mangrove trees grow directly out of the water; when their seeds are ripe they fall from the tree and grow roots as soon as they touch any kind of soil. During low tide, they might fall in soil instead of water and start growing right where they fell. If the water level is high, however, they can be carried far away from where they fell. Mangrove trees often form small islands as dirt and detritus collect in their roots.
Animals can disperse plant seeds in several ways, all named zoochory . Seeds can be transported on the outside of vertebrate animals (mostly mammals), a process known as epizoochory . Plant species transported externally by animals can have a variety of adaptations for dispersal, including adhesive mucus, and a variety of hooks, spines and barbs. [ 28 ] A typical example of an epizoochorous plant is Trifolium angustifolium , a species of Old World clover which adheres to animal fur by means of stiff hairs covering the seed . [ 9 ] Epizoochorous plants tend to be herbaceous plants, with many representative species in the families Apiaceae and Asteraceae . [ 28 ] However, epizoochory is a relatively rare dispersal syndrome for plants as a whole; the percentage of plant species with seeds adapted for transport on the outside of animals is estimated to be below 5%. [ 28 ] Nevertheless, epizoochorous transport can be highly effective if the seeds attach to animals that travel widely. This form of seed dispersal has been implicated in rapid plant migration and the spread of invasive species. [ 9 ]
Seed dispersal via ingestion and defecation by vertebrate animals (mostly birds and mammals), or endozoochory , is the dispersal mechanism for most tree species. [ 29 ] Endozoochory is generally a coevolved mutualistic relationship in which a plant surrounds seeds with an edible, nutritious fruit as a good food resource for animals that consume it. Such plants may advertise the presence of food resource by using colour. [ 30 ] Birds and mammals are the most important seed dispersers, but a wide variety of other animals, including turtles, fish, and insects (e.g. tree wētā and scree wētā ), can transport viable seeds. [ 31 ] [ 32 ] The exact percentage of tree species dispersed by endozoochory varies between habitats , but can range to over 90% in some tropical rainforests. [ 29 ] Seed dispersal by animals in tropical rainforests has received much attention, and this interaction is considered an important force shaping the ecology and evolution of vertebrate and tree populations. [ 33 ] In the tropics, large-animal seed dispersers (such as tapirs , chimpanzees , black-and-white colobus , toucans and hornbills ) may disperse large seeds that have few other seed dispersal agents. The extinction of these large frugivores from poaching and habitat loss may have negative effects on the tree populations that depend on them for seed dispersal and reduce genetic diversity among trees. [ 34 ] [ 35 ] Seed dispersal through endozoochory can lead to quick spread of invasive species, such as in the case of prickly acacia in Australia. [ 36 ] A variation of endozoochory is regurgitation of seeds rather than their passage in faeces after passing through the entire digestive tract. [ 37 ]
Seed dispersal by ants ( myrmecochory ) is a dispersal mechanism of many shrubs of the southern hemisphere or understorey herbs of the northern hemisphere. [ 7 ] Seeds of myrmecochorous plants have a lipid-rich attachment called the elaiosome , which attracts ants. Ants carry such seeds into their colonies, feed the elaiosome to their larvae and discard the otherwise intact seed in an underground chamber. [ 38 ] Myrmecochory is thus a coevolved mutualistic relationship between plants and seed-disperser ants. Myrmecochory has independently evolved at least 100 times in flowering plants and is estimated to be present in at least 11 000 species, but likely up to 23 000 (which is 9% of all species of flowering plants). [ 7 ] Myrmecochorous plants are most frequent in the fynbos vegetation of the Cape Floristic Region of South Africa, the kwongan vegetation and other dry habitat types of Australia, dry forests and grasslands of the Mediterranean region and northern temperate forests of western Eurasia and eastern North America, where up to 30–40% of understorey herbs are myrmecochorous. [ 7 ] Seed dispersal by ants is a mutualistic relationship and benefits both the ant and the plant. [ 39 ]
Seed dispersal by bees ( melittochory ) is an unusual dispersal mechanism for a small number of tropical plants. As of 2023 it has only been documented in five plant species including Corymbia torelliana , Coussapoa asperifolia subsp. magnifolia , Zygia racemosa , Vanilla odorata , and Vanilla planifolia . The first three are tropical trees and the last two are tropical vines. [ 40 ]
Seed predators, which include many rodents (such as squirrels) and some birds (such as jays) may also disperse seeds by hoarding the seeds in hidden caches. [ 41 ] The seeds in caches are usually well-protected from other seed predators and if left uneaten will grow into new plants. Rodents may also disperse seeds when the presence of secondary metabolites in ripe fruits causes them to spit out certain seeds rather than consuming them. [ 42 ] Finally, seeds may be secondarily dispersed from seeds deposited by primary animal dispersers, a process known as diplochory . For example, dung beetles are known to disperse seeds from clumps of feces in the process of collecting dung to feed their larvae. [ 43 ]
Other types of zoochory are chiropterochory (by bats), malacochory (by molluscs, mainly terrestrial snails), ornithochory (by birds) and saurochory (by non-bird sauropsids). Zoochory can occur in more than one phase, for example through diploendozoochory , where a primary disperser (an animal that ate a seed) along with the seeds it is carrying is eaten by a predator that then carries the seed further before depositing it. [ 44 ]
Dispersal by humans ( anthropochory ) used to be seen as a form of dispersal by animals. In its most widespread and intense form, agriculture, it accounts for the planting of much of the planet's land area. In this case, human societies form a long-term relationship with plant species, and create conditions for their growth.
Recent research points out that human dispersers differ from animal dispersers by having a much higher mobility, based on the technical means of human transport. [ 45 ] On the one hand, dispersal by humans also acts on smaller, regional scales and drives the dynamics of existing biological populations . On the other hand, dispersal by humans may act on large geographical scales and lead to the spread of invasive species . [ 46 ]
Humans may disperse seeds by many different means and some surprisingly high distances have been repeatedly measured. [ 47 ] Examples are: dispersal on human clothes (up to 250 m), [ 48 ] on shoes (up to 5 km), [ 45 ] or by cars (regularly ~ 250 m, single cases > 100 km). [ 49 ] Humans can unintentionally transport seeds by car, which can carry the seeds much greater distances than other conventional methods of dispersal. [ 50 ] Soil on cars can contain viable seeds. A study by Dunmail J. Hodkinson and Ken Thompson found that the most common seeds carried by vehicle were broadleaf plantain ( Plantago major ), annual meadow grass ( Poa annua ), rough meadow grass ( Poa trivialis ), stinging nettle ( Urtica dioica ) and wild chamomile ( Matricaria discoidea ). [ 50 ]
Deliberate seed dispersal also occurs as seed bombing . This has risks, as it may introduce genetically unsuitable plants to new environments.
Seed dispersal has many consequences for the ecology and evolution of plants. Dispersal is necessary for species migrations, and in recent times dispersal ability is an important factor in whether or not a species transported to a new habitat by humans will become an invasive species. [ 51 ] Dispersal is also predicted to play a major role in the origin and maintenance of species diversity. For example, myrmecochory increased the rate of diversification more than twofold in plant groups in which it has evolved, because myrmecochorous lineages contain more than twice as many species as their non-myrmecochorous sister groups. [ 52 ] Dispersal of seeds away from the parent organism has a central role in two major theories for how biodiversity is maintained in natural ecosystems, the Janzen-Connell hypothesis and recruitment limitation. [ 4 ] Seed dispersal is essential in allowing forest migration of flowering plants. It can be influenced by the production of different fruit morphs in plants, a phenomenon known as heterocarpy. [ 53 ] These fruit morphs are different in size and shape and have different dispersal ranges, which allows seeds to be dispersed over varying distances and adapt to different environments. [ 53 ] Dispersal distance also shapes the seed dispersal kernel , the distribution of dispersal distances around the parent plant. The lowest distances of seed dispersal were found in wetlands , whereas the longest were in dry landscapes. [ 54 ]
In addition, the speed and direction of wind are highly influential in the dispersal process and in turn the deposition patterns of floating seeds in stagnant water bodies. Seed transport follows the wind direction, which affects colonization of sites on the banks of a river or of wetlands adjacent to streams, relative to the prevailing wind. The wind dispersal process can also affect connections between water bodies. Essentially, wind plays a larger role in the dispersal of waterborne seeds over short periods of time (days and seasons), but the ecological process tends to become balanced over a period of several years. The time period over which the dispersal occurs is essential when considering the consequences of wind on the ecological process. [ citation needed ] | https://en.wikipedia.org/wiki/Seed_dispersal
Seed dispersal syndromes are morphological characters of seeds correlated to particular seed dispersal agents. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Dispersal is the event by which individuals move from the site of their parents to establish in a new area. [ 5 ] A seed disperser is the vector by which a seed moves from its parent to the resting place where the individual will establish, for instance an animal . Similar to the term syndrome, a diaspore is a morphological functional unit of a seed for dispersal purposes. [ 6 ]
Characteristics for seed dispersal syndromes are commonly fruit colour, mass, and persistence. [ 4 ] These syndrome characteristics are often associated with the fruit that carries the seeds. Fruits are packages for seeds, composed of nutritious tissues to feed animals. However, fruit pulp is not commonly used as a seed dispersal syndrome because pulp nutritional value does not enhance seed dispersal success. [ 5 ] Animals interact with these fruits because they are a common food source for them. However, not all seed dispersal syndromes involve fruits, because not all seeds are dispersed by animals. Suitable biological and environmental conditions, such as temperature and moisture , are needed for seed dispersal [ 2 ] and invasion success. [ 1 ]
Seed dispersal syndromes are parallel to pollination syndromes , which are defined as floral characteristics that attract organisms as pollinators . [ 7 ] They are considered parallels because they are both plant-animal interactions, which increase the reproductive success of a plant. However, seed dispersal syndromes are more common in gymnosperms , while pollination syndromes are found in angiosperms . [ 5 ] Seeds disperse to increase the reproductive success of the plant. The farther away a seed is from a parent, the better its chances of survival and germination . Therefore, a plant should select certain traits that increase dispersal by a vector (e.g. a bird ) and thereby increase the reproductive success of the plant.
Seeds have evolved traits to reward animals to enhance their dispersal abilities. [ 5 ] Differing foraging behaviours of animals can lead to selection of dispersal traits and spatial variation [ 3 ] [ 8 ] such as increase in seed size for mammal dispersal, which can limit seed production . [ 9 ] Seed production is limited by some seed syndromes because of their cost to the plant. Therefore, seed dispersal syndromes will evolve in a plant when the trait benefit outweighs the cost. [ 1 ] The seed dispersers themselves play an essential role in syndrome evolution. [ 10 ] For example, birds put strong selection pressure on seeds for colour of fruits because of their enhanced vision. Illustrations of such colour evolution include green colour being produced because its photosynthesis abilities are less costly [ 11 ] while red colour emerges as a byproduct for protection from arthropods . [ 9 ]
For visible characteristic differences to develop between dispersers and non-dispersers, a few conditions need to be met: (1) specialization, whether morphological, physiological or behavioural, must increase dispersal success; (2) energy invested in dispersal is taken from energy that would be invested in other traits; and (3) dispersal traits benefit dispersers over non-dispersers. Phenotypic (visible characteristic) differences between non-dispersers and dispersers can be caused by external factors, kin competition , intraspecific competition and habitat quality. [ 1 ]
In 1930, Ridley wrote an important book called The dispersal of plants throughout the world, which goes into detail about each form of dispersal: dispersal by wind, water, animals, birds, reptiles and fish, adhesion , and people. He details the morphology and traits for each dispersal method, which are later described as seed dispersal syndromes. This began the idea of seed trait selection being associated with a form of seed dispersal. Then in 1969 van der Pijl identified seed dispersal syndromes based on each mechanism of seed dispersal in his book Principles of Dispersal in Higher Plants. His work is the foundation of the study of seed dispersal syndromes and is cited by many scientists in the field. He describes the morphology of interactions between fruits and flowers , and classifies dispersal in invertebrates, fish, reptiles, birds, mammals, ants, wind, water and the plant itself. Janson in 1983 continued the study on seed dispersal syndromes and classified seed dispersal syndromes of fruit by size, colour and husk or no husks in species of Peruvian tropical forest . He went in depth about the interaction between plants that have adapted to seed dispersal by birds and mammals. Willson, Irvine & Walsh in 1989 added more factors to the study of seed dispersal syndromes and looked at differing fleshy fruits and their correlation to moisture and differing ecological factors. They looked at bird-dispersal and mammal-dispersal and how the fruits differed in dispersal syndromes such as colour and size. These scientists began the theory and ideas behind seed dispersal syndromes that are crucial to the evolution of reproduction in plants .
Dispersal syndromes have previously been classified by size, colour, weight, protection, flesh type, number of seeds, and start time of ripening. [ 3 ] [ 4 ] [ 9 ] [ 12 ] Syndromes are often associated with the type of dispersal and morphology . Chemical composition can also influence the disperser's fruit choice. [ 4 ] The following are types of seed dispersal and their syndromes.
Anemochory is defined as seed dispersal by wind. Common dispersal syndromes of anemochory are wing structures [ 8 ] and brown or dull coloured seeds without further rewards. [ 12 ] Van der Pijl named seeds for anemochory flyers, rollers, or throwers to represent the seed dispersal syndromes and their behaviour. Flyers are typically categorized as dust diaspores, balloons, plumed or winged. Dust diaspores are small flat structures on seeds that appear to be the transition to wing diaspores, balloons are inflated seed characteristics and plumes are hairs or elongation seed characteristics. [ 13 ] Wings have evolved to increase dispersal distance to promote gene flow. [ 8 ] Anemochory is commonly found in open habitats, [ 14 ] canopy trees , [ 10 ] and dry season deciduous forests. [ 14 ] Wind dispersers mature in the dry season for optimal long-distance dispersal [ 2 ] [ 12 ] to increase success of germination.
Barochory is seed dispersal by gravity alone in which a plant's seeds fall beneath the parent plant. [ 8 ] These seeds commonly have heavy seed dispersal syndromes. [ 13 ] However, heavy seeds may not be a form of seed dispersal syndrome, but a random seed characteristic that has no dispersal purpose. It has been thought [ by whom? ] that barochory does not develop a seed dispersal syndrome because it does not select for characters to enhance dispersal. It is questionable whether barochory is dispersal at all.
Hydrochory is seed dispersal by water. [ 13 ] Seeds can disperse by rain or ice or be submerged in water. Seeds dispersed by water need to have the ability to float and resist water damage. They often have hairs to assist with enlargement and floating. More features that cause floating are air space, lightweight tissues and corky tissues. Hydrochory syndromes are most common in aquatic plants . [ 13 ] [ 15 ]
Zoochory is the dispersal of seeds by animals and can be further divided into three classes.
Endozoochory syndrome characteristics develop based on the palatability of the fruit to an organism. For example, mammals are attracted to the scent of a seed and birds are attracted to colour. Endozoochory syndromes have evolved to be ingested by animals and later passed in a new environment so the seed can germinate. [ 16 ] Seeds dispersed by synzoochory typically possess hard coats to protect them from damage by mouthparts, for example the sharp beaks of animals such as birds or turtles . Epizoochory commonly involves burrs or spines that transport seeds on the outside of animals. These syndromes are highly associated with animals that have fur , [ 13 ] while burrs would be lacking on seeds that are dispersed by reptiles because of their smooth skin. It is believed that not all animals that interact with plant fruits are dispersers because some animals do not increase the successful dispersal of seeds but consume and destroy them. Therefore, some animals are dispersers and some are consumers.
Mammalochory is specifically the seed dispersal by mammals. The dispersal syndromes for mammalochory include large fleshy fruit, green or dull coloured fruits, and husked or unhusked. [ 3 ] [ 4 ] [ 9 ] [ 11 ] [ 13 ] [ 17 ] The seeds tend to have more protection to prevent mechanical destruction. Mammals rely on smell more than vision for foraging, which causes the seeds they disperse to be more scented compared to bird-dispersed seeds. [ 18 ] Animal-dispersed seeds ripen in rainy season when foraging activity is high, resulting in fleshy diaspores. [ 2 ] [ 10 ] [ 12 ] Mammals consume fruits whole or in smaller pieces, [ 11 ] which explains the larger seed syndromes. Mammalochory syndromes can increase the reproductive success of the plant compared to seed dispersal syndromes of a plant associated with barochory for example. An example of seed dispersal syndromes associated with mammals that increases reproductive success would be seed-consuming rodents that increase germination by burial of seeds. [ 19 ]
Ornithochory is seed dispersal by birds. Common syndrome characteristics include small fleshy fruits with bright colours and without husks. [ 3 ] [ 9 ] [ 11 ] [ 13 ] [ 14 ] Ornithochory is common in temperate zones [ 12 ] and oceanic islands because of absence of native mammals. [ 14 ] Birds have heightened colour vision and swallow seeds and fruits whole, [ 11 ] explaining the small and coloured characteristics of dispersal syndromes. Birds have a weak sense of smell , therefore ornithochory syndromes would specialize more in colour than scent, [ 13 ] in comparison to mammalochory. Ornithochory can increase the reproductive success of a plant because a bird's digestive tract increases seed germination [ 19 ] after it has been bypassed and dispersed by the bird.
Myrmecochory is seed dispersal by ants . Myrmecochory is considered an ant-plant mutualistic relationship. [ 2 ] [ 8 ] The common syndrome traits for myrmecochory are elaiosomes , and the seeds are often hard and difficult to damage. [ 8 ] [ 13 ] Elaiosomes are structures that attract ants because they are high in lipid content, providing important nutrients for the ant. [ 8 ] Without ants, seed dispersal becomes barochory and dispersal success declines. [ 2 ] [ 8 ] It is debated whether ants are good dispersers and whether plants would select for ant dispersal. Ants do clearly interact with seeds, but they cannot travel very long distances. It is therefore questionable whether a plant would select for ant dispersal over bird dispersal, given that birds can disperse seeds much farther than ants and thereby do more to increase the plant's reproductive success.
Some scientists are skeptical whether seed dispersal syndromes actually exist because their parallel, pollination syndromes, are often disputed in scientific literature . Seed dispersal syndromes themselves, however, do not seem to generate much disagreement among scientists. It is unclear whether this is due to a lack of research or interest in seed dispersal syndromes, or because scientists agree with the idea of seed dispersal syndromes. It may also be that seed dispersal syndromes are harder to test because once seeds disperse they are difficult to collect and study. Jordano (1995) states that the evolution of fruit traits for seed dispersal success is dependent only on diameter. [ 16 ] This is one scientist's perspective but does not appear to be the common consensus among scientists. Colour and olfaction are other common seed dispersal syndromes tested and discussed in scientific literature, with equivocal results. One possible reason is that adaptive variation in fruit colours could be scale dependent, occurring only on broad taxonomic scales rather than within assemblages of either bird-dispersed or mammal-dispersed fruit species. [ 20 ] One limitation to seed dispersal syndromes mentioned is the limited definition of syndrome characteristics such as odour or texture. [ 11 ] It is possible that there has not been enough research to test these characteristics or that they do not play a role in seed dispersal syndromes.
The differences in seed dispersal syndromes appear to be weak, but they do exist. Consideration needs to be given to the possibility that these syndromes evolved not to benefit seed dispersal but possibly to combat other selective pressures . [ 16 ] For example, syndromes may have developed to combat predation or environmental hazards. Predation could produce a secondary metabolite syndrome. Secondary metabolites are compounds that are not used for the primary function of a plant and are normally used as defense mechanisms . [ 21 ]
Seed dispersal syndromes have not been studied in complete breadth for every seed dispersal method. Therefore, further research should be conducted to fill the gaps in knowledge about dispersal syndromes. The following are problem areas or directions in which research on seed dispersal syndromes can continue. There is a lack of understanding of morphology in correlation to behavioural traits of dispersers. [ 1 ] [ 12 ] Research in this area would assist in the understanding of why particular dispersers are selected by plants to enhance reproductive success. Also, understanding movement strategies and the factors affecting departure and settlement [ 1 ] is important in determining whether seed dispersal syndromes are only affected by plant selection for a disperser. There are few studies concerning phenotype-dependent dispersal and how it affects spatial structures of populations. [ 1 ] [ 12 ] Distance of dispersal has not been researched in enough detail to correlate it to a seed dispersal syndrome. More experimental field studies on plant-animal interactions regarding seed dispersal need to be conducted [ 9 ] [ 14 ] for a thorough understanding of seed dispersal syndromes. There is limited knowledge about the presence of elaiosomes and ant behaviour affecting seed dispersal, and how ant-plant interactions evolved under various plant traits. [ 8 ] Understanding these interactions would help clarify whether myrmecochory did evolve seed dispersal syndromes. Studies of micro- and macroevolutionary processes are needed to determine the effects of biological dispersal of seeds. [ 9 ] There cannot be inferences about seed dispersal syndromes without robust phylogenies and evolutionary studies. There is also a gap in the understanding of the genetic consequences of zoochory. [ 16 ] Using genetics could help clarify whether these syndromes were formed at random or whether they correspond to the evolution of seed dispersal. It is unclear if these seed dispersal syndromes evolved for specialization between plants and animals to increase seed dispersal success or if these syndromes are simply formed from generalist plant-animal interactions. Understanding these relationships would clarify the confusion about seed dispersal syndromes and whether they are true examples of evolution increasing plant reproductive success or whether they have developed without selective pressures. | https://en.wikipedia.org/wiki/Seed_dispersal_syndrome
A seed nucleus is an isotope that is the starting point for any of a variety of fusion chain reactions . The mix of nuclei produced at the conclusion of the chain reaction generally depends strongly on the relative availability of the seed nucleus or nuclei and the component being fused—whether neutrons as in the r-process and s-process or protons as in the rp-process . A smaller proportion of seed nuclei will generally result in products of larger mass , whereas a larger seed-to-neutron or seed-to-proton ratio will tend to produce comparatively lighter masses.
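As a rough back-of-the-envelope illustration of that inverse relation (an assumption-laden sketch, not a figure from the article): if iron-group seed nuclei of mass number about 56 share the available neutrons equally and capture all of them, the mean product mass grows with the neutron-to-seed ratio,

\[
\bar{A}_{\text{product}} \;\approx\; A_{\text{seed}} + \frac{N_{\text{neutrons}}}{N_{\text{seeds}}},
\qquad \text{e.g.}\quad 56 + 100 \approx 156,
\qquad 56 + 200 \approx 256 .
\]

Halving the abundance of seed nuclei for a fixed neutron supply doubles the ratio and correspondingly pushes the products toward heavier masses.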
| https://en.wikipedia.org/wiki/Seed_nucleus
A seed plant or spermatophyte ( lit. ' seed plant ' , from New Latin spermat- 'seed' and Greek φυτόν ( phytón ) 'plant'), [ 1 ] also known as a phanerogam (taxon Phanerogamae ) or a phaenogam (taxon Phaenogamae ), is any plant that produces seeds . It is a category of embryophyte (i.e. land plant) that includes most of the familiar land plants, including the flowering plants and the gymnosperms , but not ferns , mosses , or algae .
The term phanerogam or phanerogamae is derived from the Greek φανερός ( phanerós ), meaning "visible", in contrast to the term "cryptogam" or " cryptogamae " (from Ancient Greek κρυπτός (kruptós) ' hidden ' , and γαμέω ( gaméō ), 'to marry'). These terms distinguish those plants with hidden sexual organs (cryptogamae) from those with visible ones (phanerogamae).
The extant spermatophytes form five divisions, the first four of which are classified as gymnosperms , plants that have unenclosed, "naked seeds": [ 2 ] : 172 the cycads (Cycadophyta), Ginkgo (Ginkgophyta), the gnetophytes (Gnetophyta) and the conifers (Pinophyta).
The fifth extant division is the flowering plants , also known as angiosperms or magnoliophytes, the largest and most diverse group of spermatophytes:
In addition to the five living taxa listed above, the fossil record contains evidence of many extinct taxa of seed plants, among those:
By the Triassic period, seed ferns had declined in ecological importance, and representatives of modern gymnosperm groups were abundant and dominant through the end of the Cretaceous , when the angiosperms radiated.
A series of evolutionary changes began with a whole genome duplication event that occurred in the ancestor of seed plants about 319 million years ago . [ 3 ]
A middle Devonian (385-million-year-old) precursor to seed plants from Belgium has been identified predating the earliest seed plants by about 20 million years. Runcaria , small and radially symmetrical, is an integumented megasporangium surrounded by a cupule. The megasporangium bears an unopened distal extension protruding above the multilobed integument . It is suspected that the extension was involved in anemophilous (wind) pollination . Runcaria sheds new light on the sequence of character acquisition leading to the seed. Runcaria has all of the qualities of seed plants except for a solid seed coat and a system to guide the pollen to the seed. [ 4 ]
Runcaria was followed shortly after by plants with a more condensed cupule, such as Spermasporites and Moresnetia . Seed-bearing plants had diversified substantially by the Famennian , the last stage of the Devonian. Examples include Elkinsia , Xenotheca , Archaeosperma , " Hydrasperma ", Aglosperma , and Warsteinia . Some of these Devonian seeds are now classified within the order Lyginopteridales . [ 5 ]
Seed-bearing plants are a clade within the vascular plants (tracheophytes). [ 6 ]
The spermatophytes were traditionally divided into angiosperms , or flowering plants, and gymnosperms , which includes the gnetophytes, cycads, [ 6 ] ginkgo, and conifers. Older morphological studies believed in a close relationship between the gnetophytes and the angiosperms, [ 7 ] in particular based on vessel elements . However, molecular studies (and some more recent morphological [ 8 ] [ 9 ] and fossil [ 10 ] papers) have generally shown a clade of gymnosperms , with the gnetophytes in or near the conifers. For example, one common proposed set of relationships is known as the gne-pine hypothesis and looks like: [ 11 ] [ 12 ] [ 13 ]
[Cladogram of the gne-pine hypothesis: the angiosperms (flowering plants) are sister to a gymnosperm clade in which the cycads branch first, then Ginkgo, with the gnetophytes nested within the conifers as sister to Pinaceae (the pine family) and the other conifers as sister to that pair.]
However, the relationships between these groups should not be considered settled. [ 7 ] [ 14 ]
Other classifications group all the seed plants in a single division , with classes for the five groups: [ citation needed ]
A more modern classification ranks these groups as separate divisions (sometimes under the Superdivision Spermatophyta ): [ citation needed ]
Unassigned extinct spermatophyte orders, some of them formerly grouped as " Pteridospermatophyta ", the polyphyletic "seed ferns". [ 15 ] | https://en.wikipedia.org/wiki/Seed_plant |
Seeding is a fundamental technique in fluid dynamics used to visualize and measure fluid flow. Researchers introduce small particles, called seed particles, into a fluid; these particles move with the fluid, allowing researchers to observe and analyze the fluid's movement under different conditions.
The significance of seeding lies in its ability to provide insights into complex fluid behaviors that are otherwise invisible to the naked eye. Techniques like Particle Image Velocimetry (PIV) and Laser Doppler Velocimetry (LDV) rely on seeding to obtain accurate data. Seeding is an indispensable tool in experimental fluid mechanics , enabling precise measurements and detailed visualizations that drive advancements in science and engineering, such as investigating airflow over aircraft wings, analyzing blood flow through arteries, and studying the dispersion of pollutants in the environment. [ 1 ]
Seeding in fluid dynamics is the process of introducing small particles, called seed particles, into a fluid. This allows the visualization and measurement of the fluid's motion. The particles are chosen to closely follow the fluid's flow. They act as tracers, making the invisible flow patterns visible when illuminated by light sources, such as lasers . The movement of these particles can be captured and analyzed using various techniques. This provides insights into the velocity , turbulence , and other dynamic properties of the fluid. [ 1 ] [ 2 ]
The use of seeding techniques in fluid dynamics has a long history dating back to the early 20th century. Initially, researchers used simple methods like injecting dye or smoke into fluids to observe flow patterns. These early techniques provided a basic understanding of fluid behavior but lacked the precision needed for detailed analysis.
In the mid-20th century, the development of more sophisticated seeding techniques began with the advent of modern experimental methods like Particle Image Velocimetry (PIV) and Laser Doppler Velocimetry (LDV). PIV, developed in the 1980s, revolutionized fluid dynamics research by allowing for the detailed measurement of flow velocities across entire fields of view, rather than just single points. LDV, developed in the 1960s, provided a way to measure fluid velocity at precise points using laser beams and seed particles. These advancements marked a significant evolution in seeding techniques, enabling researchers to conduct more accurate and comprehensive studies of fluid dynamics.
PIV is a technique in which seed particles are introduced into a fluid and their movement is captured by high-speed cameras. By analyzing the sequential images, the velocity of the fluid can be determined across the entire field of view. This method is widely used for studying the complex flow patterns found in various applications, such as aerodynamics and biomedical research.
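A minimal sketch of the core idea behind PIV analysis, assuming NumPy is available: the displacement of the particle pattern between two exposures is estimated from the peak of their cross-correlation, computed here via FFT for a single interrogation window. The synthetic images and the known shift are invented for the example; real PIV software adds windowing, sub-pixel peak fitting and outlier validation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "particle image": random bright specks on a dark background.
n = 64
frame_a = (rng.random((n, n)) > 0.97).astype(float)

# Second exposure: the same pattern shifted by a known displacement (dy, dx).
true_shift = (3, 5)
frame_b = np.roll(frame_a, shift=true_shift, axis=(0, 1))

# Cross-correlation via FFT (correlation theorem), one interrogation window.
a = frame_a - frame_a.mean()
b = frame_b - frame_b.mean()
corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real

# The peak location gives the displacement; wrap indices above n/2 to negative.
peak = np.unravel_index(np.argmax(corr), corr.shape)
shift = [p if p <= n // 2 else p - n for p in peak]

print("estimated displacement (dy, dx):", shift)   # -> [3, 5]
# Dividing this pixel displacement by the time between exposures (and the
# image magnification) would convert it into a velocity vector.
```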
LDV shines laser beams into a fluid containing seed particles. As the particles move through the intersection of the laser beams, they scatter light, which is then detected, allowing the velocity of the fluid to be measured at precise points. LDV is particularly useful for obtaining accurate velocity measurements in turbulent or high-speed flows.
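A small worked example of how an LDV signal is commonly converted to a velocity (a generic textbook relation, not specific to any instrument): the two crossing beams form interference fringes of known spacing, and the velocity component perpendicular to the fringes is the measured Doppler burst frequency times that spacing. The wavelength, beam angle and burst frequency below are invented numbers.

```python
import math

# Assumed (illustrative) instrument parameters
wavelength = 532e-9          # laser wavelength in metres
full_angle_deg = 10.0        # full angle between the two crossing beams
doppler_freq = 1.2e6         # measured Doppler burst frequency in Hz

# Fringe spacing in the measurement volume: d = lambda / (2 sin(theta/2))
half_angle = math.radians(full_angle_deg) / 2.0
fringe_spacing = wavelength / (2.0 * math.sin(half_angle))

# Velocity component perpendicular to the fringes
velocity = doppler_freq * fringe_spacing

print(f"fringe spacing: {fringe_spacing * 1e6:.2f} micrometres")
print(f"velocity component: {velocity:.2f} m/s")
```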
Flow visualization techniques use seeding to make fluid flows visible. These techniques include methods like dye injection, smoke seeding, or the use of reflective particles. These techniques help researchers observe and analyze flow patterns, vortices, and other fluid behaviors in both experimental and educational settings.
The selection of seed particles is crucial for accurate measurements. The particles must be small enough to closely follow the fluid flow without affecting it, but large enough to be detected by imaging or laser systems. The density, size, and material of the particles are carefully chosen based on the fluid properties and the specific technique used.
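One common (though not the only) way to quantify "small enough to closely follow the flow" is the particle Stokes number, the ratio of the particle's aerodynamic response time to a characteristic flow time scale; values well below 1 indicate faithful tracing. The sketch below uses the standard Stokes-drag response time with invented particle and flow values.

```python
def stokes_number(particle_density, particle_diameter, fluid_viscosity,
                  flow_length_scale, flow_velocity):
    """Ratio of particle response time to flow time scale (Stokes-drag regime)."""
    particle_response_time = (particle_density * particle_diameter**2
                              / (18.0 * fluid_viscosity))
    flow_time_scale = flow_length_scale / flow_velocity
    return particle_response_time / flow_time_scale

# Illustrative values: a ~1 micrometre oil droplet seeding an air flow
stk = stokes_number(particle_density=912.0,      # kg/m^3 (oil-like droplet)
                    particle_diameter=1e-6,      # m
                    fluid_viscosity=1.8e-5,      # Pa*s (air)
                    flow_length_scale=0.01,      # m
                    flow_velocity=10.0)          # m/s

print(f"Stokes number: {stk:.2e}  (<< 1, so the particle follows the flow)")
```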
Seeding is important in aerospace engineering for studying airflow over aircraft wings and other aerodynamic surfaces. Researchers use seeding in wind tunnel tests to visualize and measure how air moves over wings, fuselages, and control surfaces. This information is essential for improving aircraft design: it helps increase lift, reduce drag, and enhance overall performance. Seeding techniques are also used in engine testing to study airflow within jet engines, helping engineers improve efficiency, increase thrust, and reduce emissions. [ 3 ]
In biomedical engineering , seeding is used to study blood flow in arteries and airflow in the respiratory system . For example, in cardiovascular research, small particles are added to fluids that mimic blood to visualize and measure flow patterns within arteries , especially at locations where blockages or aneurysms may occur. This helps understand the hemodynamics involved in various cardiovascular diseases. Similarly, seeding techniques are used in respiratory studies to track airflow in the lungs and nasal passages , which assists in the design of medical devices like inhalers and the treatment of respiratory conditions. [ 4 ]
Seeding is essential in environmental engineering for tracking the spread of pollutants in air or water. Researchers introduce seed particles or tracers into water or air to study how pollutants spread over time. This information is crucial for modeling the impact of industrial discharges, oil spills, or air pollution, and for developing strategies to reduce environmental damage. Seeding techniques also help in studying natural processes, such as the movement of sediments in rivers and the spread of nutrients in marine ecosystems .
In various industrial processes , seeding is used to optimize operations. For example, in chemical reactors , seeding can help visualize and measure the mixing of different reactants, ensuring uniformity and improving reaction efficiency. In combustion research, seeding particles are introduced into fuel-air mixtures to study flame propagation and combustion efficiency, which is vital for improving the performance of engines and industrial burners. [ 5 ]
| https://en.wikipedia.org/wiki/Seeding_(fluid_dynamics)
Disphenoidal or seesaw (also known as sawhorse [ 1 ] ) is a type of molecular geometry where there are four bonds to a central atom with overall C 2v molecular symmetry . The name "seesaw" comes from the observation that it looks like a playground seesaw . Four bonds to a central atom most commonly result in tetrahedral or, less commonly, square planar geometry .
The seesaw geometry occurs when a molecule has a steric number of 5, with the central atom being bonded to 4 other atoms and 1 lone pair (AX 4 E 1 in AXE notation ). An atom bonded to 5 other atoms (and no lone pairs) forms a trigonal bipyramid with two axial and three equatorial positions, but in the seesaw geometry one of the atoms is replaced by a lone pair of electrons, which is always in an equatorial position. This is true because the lone pair occupies more space near the central atom (A) than does a bonding pair of electrons. An equatorial lone pair is repelled by only two bonding pairs at 90°, whereas a hypothetical axial lone pair would be repelled by three bonding pairs at 90° which would make the molecule unstable. Repulsion by bonding pairs at 120° is much smaller and less important. [ 2 ] [ 1 ]
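A minimal sketch (a simplification that ignores formal charges and multiple bonds, so it only covers simple neutral central atoms) of how the AXE tally behind this argument can be made: lone pairs on the central atom are the valence electrons left over after one electron is committed to each single bond, and the steric number is bonds plus lone pairs.

```python
def axe_classification(valence_electrons, bonded_atoms):
    """Crude AXE count for a neutral central atom with only single bonds."""
    lone_pairs = (valence_electrons - bonded_atoms) // 2
    steric_number = bonded_atoms + lone_pairs
    return f"AX{bonded_atoms}E{lone_pairs}", steric_number

# Sulfur tetrafluoride: S has 6 valence electrons and 4 bonds -> AX4E1, seesaw
print(axe_classification(valence_electrons=6, bonded_atoms=4))  # ('AX4E1', 5)
# Water for comparison: O has 6 valence electrons and 2 bonds -> AX2E2, bent
print(axe_classification(valence_electrons=6, bonded_atoms=2))  # ('AX2E2', 4)
```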
Compounds with disphenoidal (see-saw) geometry have two types of ligands : axial and equatorial. The axial pair lie along a common bond axis so that they are related by a bond angle of 180°. The equatorial pair of ligands is situated in a plane orthogonal to the axis of the axial pair. Typically the bond distance to the axial ligands is longer than to the equatorial ligands. The ideal angle between the axial ligands and the equatorial ligands is 90°, whereas the ideal angle between the two equatorial ligands themselves is 120°.
Disphenoidal molecules, like trigonal bipyramidal ones, are subject to Berry pseudorotation in which the axial ligands move to equatorial positions and vice versa. This exchange of positions results in similar time-averaged environments for the two types of ligands. Thus, the 19 F NMR spectrum of SF 4 (like that of PF 5 ) consists of a single resonance near room temperature. [ 3 ] The four atoms in motion act as a lever about the central atom; for example, the four fluorine atoms of sulfur tetrafluoride rotate around the sulfur atom. [ 4 ]
Sulfur tetrafluoride is the premier example of a molecule with the disphenoidal molecular geometry (see image at upper right). The following compounds and ions have disphenoidal geometry: [ 5 ] | https://en.wikipedia.org/wiki/Seesaw_molecular_geometry |
Segal's Burnside ring conjecture , or, more briefly, the Segal conjecture , is a theorem in homotopy theory , a branch of mathematics . The theorem relates the Burnside ring of a finite group G to the stable cohomotopy of the classifying space BG . The conjecture was made in the mid 1970s by Graeme Segal and proved in 1984 by Gunnar Carlsson . This statement is still commonly referred to as the Segal conjecture, even though it now has the status of a theorem.
The Segal conjecture has several different formulations, not all of which are equivalent. Here is a weak form: there exists, for every finite group G , an isomorphism

\[
\varprojlim_{k}\; \pi_S^0\!\big((BG^{k})_{+}\big) \;\cong\; \widehat{A}(G).
\]
Here, lim denotes the inverse limit , π S * denotes the stable cohomotopy ring, B denotes the classifying space, the superscript k denotes the k - skeleton , and the subscript + denotes the addition of a disjoint basepoint. On the right-hand side, the hat denotes the completion of the Burnside ring with respect to its augmentation ideal .
The Burnside ring of a finite group G is constructed from the category of finite G -sets as a Grothendieck group . More precisely, let M ( G ) be the commutative monoid of isomorphism classes of finite G -sets, with addition the disjoint union of G -sets and identity element the empty set (which is a G -set in a unique way). Then A ( G ), the Grothendieck group of M ( G ), is an abelian group. It is in fact a free abelian group with basis elements represented by the G -sets G / H , where H varies over the subgroups of G . (Note that H is not assumed here to be a normal subgroup of G , for while G / H is not a group in this case, it is still a G -set.) The ring structure on A ( G ) is induced by the direct product of G -sets; the multiplicative identity is the (isomorphism class of any) one-point set, which becomes a G -set in a unique way.
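As a concrete illustration (a standard textbook example rather than anything specific to the conjecture), take G = Z/2. It has only two subgroups, so the Burnside ring has a two-element basis, and the multiplication is determined by how the corresponding G-sets decompose under the direct product:

\[
A(\mathbb{Z}/2) \;=\; \mathbb{Z}\cdot[\mathbb{Z}/2\,/\,\mathbb{Z}/2] \;\oplus\; \mathbb{Z}\cdot[\mathbb{Z}/2\,/\,\{e\}],
\qquad
[\mathbb{Z}/2/\{e\}]\cdot[\mathbb{Z}/2/\{e\}] \;=\; 2\,[\mathbb{Z}/2/\{e\}],
\]

so writing \(x = [\mathbb{Z}/2/\{e\}]\) gives \(A(\mathbb{Z}/2) \cong \mathbb{Z}[x]/(x^2 - 2x)\), with the one-point set \([\mathbb{Z}/2/\mathbb{Z}/2]\) as the multiplicative identity.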
The Burnside ring is the analogue of the representation ring in the category of finite sets, as opposed to the category of finite-dimensional vector spaces over a field (see motivation below). It has proven to be an important tool in the representation theory of finite groups.
For any topological group G admitting the structure of a CW-complex , one may consider the category of principal G -bundles . One can define a functor from the category of CW-complexes to the category of sets by assigning to each CW-complex X the set of principal G -bundles on X . This functor descends to a functor on the homotopy category of CW-complexes, and it is natural to ask whether the functor so obtained is representable . The answer is affirmative, and the representing object is called the classifying space of the group G and typically denoted BG . If we restrict our attention to the homotopy category of CW-complexes, then BG is unique. Any CW-complex that is homotopy equivalent to BG is called a model for BG .
For example, if G is the group of order 2, then a model for BG is infinite-dimensional real projective space. It can be shown that if G is finite, then any CW-complex modelling BG has cells of arbitrarily large dimension. On the other hand, if G = Z , the integers, then the classifying space BG is homotopy equivalent to the circle S 1 .
The content of the theorem becomes somewhat clearer if it is placed in its historical context. In the theory of representations of finite groups, one can form an object R [ G ] called the representation ring of G in a way entirely analogous to the construction of the Burnside ring outlined above. The stable cohomotopy is in a sense the natural analog to complex K-theory , which is denoted K U ∗ . Segal was inspired to make his conjecture after Michael Atiyah proved the existence of an isomorphism

\[
K(BG) \;\cong\; \widehat{R[G]}
\]

(the completion of the representation ring with respect to its augmentation ideal),
which is a special case of the Atiyah–Segal completion theorem . | https://en.wikipedia.org/wiki/Segal's_conjecture |
Segmental analysis is a method of anatomical analysis for describing the connective morphology of the human body . Instead of describing anatomy in terms of spatial relativity, as in the anatomical position method, segmental analysis describes anatomy in terms of which organs, tissues, etc. connect to each other, and the characteristics of those connections.
| https://en.wikipedia.org/wiki/Segmental_analysis_(biology)
Segmentation in biology is the division of some animal and plant body plans into a linear series of repetitive segments that may or may not be interconnected to each other. This article focuses on the segmentation of animal body plans, specifically using the examples of the taxa Arthropoda , Chordata , and Annelida . These three groups form segments by using a "growth zone" to direct and define the segments. While all three have a generally segmented body plan and use a growth zone, they use different mechanisms for generating this patterning. Even within these groups, different organisms have different mechanisms for segmenting the body. Segmentation of the body plan is important for allowing free movement and development of certain body parts. It also allows for regeneration in specific individuals.
Segmentation is a difficult process to satisfactorily define. Many taxa (for example the molluscs) have some form of serial repetition in their units but are not conventionally thought of as segmented. Segmented animals are those considered to have organs that were repeated, or to have a body composed of self-similar units, but usually it is the parts of an organism that are referred to as being segmented. [ 1 ]
Segmentation in animals typically falls into three types, characteristic of different arthropods , vertebrates , and annelids . Arthropods such as the fruit fly form segments from a field of equivalent cells based on transcription factor gradients. Vertebrates like the zebrafish use oscillating gene expression to define segments known as somites . Annelids such as the leech use smaller blast cells budded off from large teloblast cells to define segments. [ 2 ]
Although Drosophila segmentation is not representative of the arthropod phylum in general, it is the most highly studied. Early screens to identify genes involved in cuticle development led to the discovery of a class of genes that was necessary for proper segmentation of the Drosophila embryo. [ 3 ]
To properly segment the Drosophila embryo, the anterior - posterior axis is defined by maternally supplied transcripts giving rise to gradients of these proteins. [ 2 ] [ 3 ] [ 4 ] This gradient then defines the expression pattern for gap genes , which set up the boundaries between the different segments. The gradients produced from gap gene expression then define the expression pattern for the pair-rule genes . [ 2 ] [ 4 ] The pair-rule genes are mostly transcription factors , expressed in regular stripes down the length of the embryo. [ 4 ] These transcription factors then regulate the expression of segment polarity genes , which define the polarity of each segment. Boundaries and identities of each segment are later defined. [ 4 ]
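A deliberately toy sketch of the gradient-and-threshold principle in the first step of this hierarchy (it is not a model of the real gene network, and the gradient shape and threshold values are invented): an anterior-to-posterior maternal gradient is read out by concentration thresholds to produce discrete expression domains along the axis.

```python
import numpy as np

# Position along the anterior-posterior axis, 0 = anterior, 1 = posterior.
x = np.linspace(0.0, 1.0, 100)

# Toy maternal gradient (Bicoid-like): exponentially decaying from the anterior.
maternal = np.exp(-x / 0.25)

# Toy "gap gene" domains defined by two concentration thresholds.
high, low = 0.5, 0.15
domain = np.where(maternal > high, "anterior gap domain",
         np.where(maternal > low, "middle gap domain", "posterior gap domain"))

# Report where each domain starts and ends along the axis.
for name in ("anterior gap domain", "middle gap domain", "posterior gap domain"):
    positions = x[domain == name]
    print(f"{name}: x = {positions.min():.2f} to {positions.max():.2f}")
```

The point of the toy is only that smooth positional information plus thresholds yields sharp domain boundaries; the real embryo layers several rounds of cross-regulation on top of this.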
Within the arthropods, the body wall, nervous system, kidneys, muscles and body cavity are segmented, as are the appendages (when they are present). Some of these elements (e.g. musculature) are not segmented in their sister taxon, the onychophora . [ 1 ]
While not as well studied as in Drosophila and zebrafish , segmentation in the leech has been described as “budding” segmentation. Early divisions within the leech embryo result in teloblast cells, which are stem cells that divide asymmetrically to create bandlets of blast cells. [ 2 ] Furthermore, there are five different teloblast lineages (N, M, O, P, and Q), with one set for each side of the midline. The N and Q lineages contribute two blast cells for each segment, while the M, O, and P lineages only contribute one cell per segment. [ 5 ] Finally, the number of segments within the embryo is defined by the number of divisions and blast cells. [ 2 ] Segmentation appears to be regulated by the gene Hedgehog , suggesting its common evolutionary origin in the ancestor of arthropods and annelids. [ 6 ]
Within the annelids, as with the arthropods, the body wall, nervous system, kidneys, muscles and body cavity are generally segmented. However, this is not true for all of the traits all of the time: many lack segmentation in the body wall, coelom and musculature. [ 1 ]
Although perhaps not as well understood as Drosophila , the embryological process of segmentation has been studied in many vertebrate groups, such as fish ( Zebrafish , Medaka ), reptiles ( Corn Snake ), birds ( Chicken ), and mammals ( Mouse ). Segmentation in chordates is characterized as the formation of a pair of somites on either side of the midline. This is often referred to as somitogenesis .
In vertebrates, segmentation is most often explained in terms of the clock and wavefront model . The "clock" refers to the periodic oscillation in abundance of specific gene products, such as members of the Hairy and Enhancer of Split (Hes) gene family. Expression starts at the posterior end of the embryo and moves towards the anterior , creating travelling waves of gene expression. The "wavefront" is where clock oscillations arrest, initiating gene expression that leads to the patterning of somite boundaries. The position of the wavefront is defined by a decreasing posterior-to-anterior gradient of FGF signalling. In higher vertebrates including Mouse and Chick , (but not Zebrafish ), the wavefront also depends upon an opposing anterior-to-posterior decreasing gradient of retinoic acid which limits the anterior spreading of FGF8 ; retinoic acid repression of Fgf8 gene expression defines the wavefront as the point at which the concentrations of both retinoic acid and diffusible FGF8 protein are at their lowest. Cells at this point will mature and form a pair of somites. [ 7 ] [ 8 ] The interaction of other signaling molecules, such as myogenic regulatory factors , with this gradient promotes the development of other structures, such as muscles, across the basic segments. [ 9 ] Lower vertebrates such as zebrafish do not require retinoic acid repression of caudal Fgf8 for somitogenesis due to differences in gastrulation and neuromesodermal progenitor function compared to higher vertebrates. [ 10 ]
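A heavily simplified toy simulation of the clock and wavefront idea (the parameter values are assumed, and this is not a model of Hes or FGF dynamics): all cells oscillate with the same period, a wavefront passes along the axis at constant speed and freezes each cell's clock phase as it goes, and a new somite boundary is declared every time the frozen phase completes a full cycle, so the segment length comes out as front speed times clock period.

```python
import numpy as np

clock_period = 2.0      # oscillation period of the segmentation clock (arbitrary units)
front_speed = 1.5       # speed at which the wavefront passes along the axis
axis_length = 30.0
n_cells = 600

positions = np.linspace(0.0, axis_length, n_cells)

# Time at which the wavefront reaches each cell and arrests its oscillation.
arrest_time = positions / front_speed

# Phase of the clock (in cycles) frozen into each cell at arrest.
frozen_phase = arrest_time / clock_period

# A new somite boundary forms wherever the frozen phase passes a whole number of cycles.
boundaries = positions[np.diff(np.floor(frozen_phase), prepend=0.0) > 0]

print("somite boundaries at:", np.round(boundaries, 2))
print("segment length ~", front_speed * clock_period)   # boundaries are evenly spaced
```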
In other taxa, there is some evidence of segmentation in some organs, but this segmentation is not pervasive to the full list of organs mentioned above for arthropods and annelids. One might think of the serially repeated units in many Cycloneuralia , or the segmented body armature of the chitons (which is not accompanied by a segmented coelom). [ 1 ]
Segmentation can be seen as originating in two ways. To caricature, the 'amplification' pathway would involve a single-segment ancestral organism becoming segmented by repeating itself. This seems implausible, and the 'parcellization' framework is generally preferred – where existing organization of organ systems is 'formalized' from loosely defined packets into more rigid segments. [ 1 ] As such, organisms with a loosely defined metamerism, whether internal (as some molluscs) or external (as onychophora), can be seen as 'precursors' to eusegmented organisms such as annelids or arthropods. [ 1 ] | https://en.wikipedia.org/wiki/Segmentation_(biology) |
Segmented filamentous bacteria or Candidatus Savagella are members of the gut microbiota of rodents, fish and chickens, and have been shown to potently induce immune responses in mice. [ 2 ] They form a distinct lineage within the Clostridiaceae and the name Candidatus Savagella has been proposed for this lineage. [ 1 ]
They were previously named Candidatus Arthromitus because of their morphological resemblance to bacterial filaments previously observed in the guts of insects by Joseph Leidy . [ 3 ]
Despite the fact that they have been widely referred to as segmented filamentous bacteria, this term is somewhat problematic as it does not allow one to distinguish between bacteria that colonize various hosts or even if segmented filamentous bacteria are actually several different bacterial species. In mice, these bacteria grow primarily in the terminal ileum in close proximity to the intestinal epithelium where they are thought to help induce T helper 17 cell responses. [ 4 ]
Intriguingly, Segmented Filamentous Bacteria were found to expand in AID -deficient mice, which lack the ability to mount an appropriate humoral immune response because of impaired somatic hypermutation ; parabiotic experiments revealed the importance of IgA in eliminating Segmented Filamentous Bacteria. [ 5 ] This goes hand in hand with an earlier study demonstrating the ability of monocolonization with Segmented Filamentous Bacteria to dramatically increase mucosal IgA levels. [ 6 ] Segmented Filamentous Bacteria are species specific, and may be important to immune development.
| https://en.wikipedia.org/wiki/Segmented_filamentous_bacteria
In projective geometry , Segre's theorem , named after the Italian mathematician Beniamino Segre , is the statement: every oval in a finite pappian projective plane of odd order is a nondegenerate projective conic section.
This statement was conjectured in 1949 by the two Finnish mathematicians G. Järnefelt and P. Kustaanheimo and its proof was published in 1955 by B. Segre.
A finite pappian projective plane can be imagined as the projective closure of the real plane (by a line at infinity), where the real numbers are replaced by a finite field K . Odd order means that | K | = n is odd. An oval is a curve similar to a circle (see definition below): any line meets it in at most 2 points and through any point of it there is exactly one tangent. The standard examples are the nondegenerate projective conic sections.
In pappian projective planes of even order greater than four there are ovals that are not conics. Ovals that are not conics also exist in infinite planes: in the real plane, for example, one can smoothly glue together half of a circle and a suitable ellipse.
The proof of Segre's theorem, shown below, uses the 3-point version of Pascal's theorem and a property of a finite field of odd order, namely, that the product of all the nonzero elements equals -1.
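The following small check (not part of the source article; the field sizes and the brute-force product are illustrative choices) verifies this second ingredient numerically for prime fields GF(p) of odd order.

```python
# Illustrative check: for an odd prime p, the product of all nonzero elements
# of GF(p) is congruent to -1 (mod p).  This is the finite-field fact used in
# the proof of Segre's theorem below.  Brute force over small primes only.

def product_of_nonzero_elements(p: int) -> int:
    prod = 1
    for k in range(1, p):
        prod = (prod * k) % p
    return prod

for p in (3, 5, 7, 11, 13, 17):          # small odd primes, i.e. odd field order
    assert product_of_nonzero_elements(p) == p - 1   # p - 1 represents -1 mod p
print("product of nonzero elements is -1 (mod p) for all primes tested")
```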
If | g ∩ o | = 0 {\displaystyle |g\cap {\mathfrak {o}}|=0} the line g {\displaystyle g} is an exterior (or passing ) line; in case | g ∩ o | = 1 {\displaystyle |g\cap {\mathfrak {o}}|=1} a tangent line and if | g ∩ o | = 2 {\displaystyle |g\cap {\mathfrak {o}}|=2} the line is a secant line .
For finite planes (i.e. the set of points is finite) we have a more convenient characterization:
Let o be an oval in a pappian projective plane of characteristic ≠ 2. Then o is a nondegenerate conic if and only if statement (P3) holds:
Let the projective plane be coordinatized inhomogeneously over a field K such that P3 = (0), g∞ is the tangent at P3, (0,0) ∈ o, the x-axis is the tangent at the point (0,0), and o contains the point (1,1). Furthermore, set P1 = (x1, y1) and P2 = (x2, y2) (see figure). The oval o can be described by a function f : K → K such that:
The tangent at point ( x 0 , f ( x 0 ) ) {\displaystyle (x_{0},f(x_{0}))} will be described using a function f ′ {\displaystyle f'} such that its equation is
Hence (see figure)
I: If o is a nondegenerate conic, we have f(x) = x² and f′(x) = 2x, and one easily calculates that P4, P5, P6 are collinear.
II: If o {\displaystyle {\mathfrak {o}}} is an oval with property (P3) , the slope of the line P 4 P 5 ¯ {\displaystyle {\overline {P_{4}P_{5}}}} is equal to the slope of the line P 1 P 2 ¯ {\displaystyle {\overline {P_{1}P_{2}}}} , that means:
With f ( 0 ) = f ′ ( 0 ) = 0 {\displaystyle f(0)=f'(0)=0} one gets
(i) and (ii) yield
A consequence of (ii) and (v) is
Hence o {\displaystyle {\mathfrak {o}}} is a nondegenerate conic.
Remark: Property (P3) is fulfilled for any oval in a pappian projective plane of characteristic 2 with a nucleus (all tangents meet at the nucleus). Hence in this case (P3) is also true for non-conic ovals. [ 2 ]
Any oval o {\displaystyle {\mathfrak {o}}} in a finite pappian projective plane of odd order is a nondegenerate conic section.
For the proof we show that the oval has property (P3) of the 3-point version of Pascal's theorem.
Let P1, P2, P3 be any triangle on o and let P4, P5, P6 be defined as described in (P3) .
The pappian plane will be coordinatized inhomogeneously over a finite field K such that P3 = (∞), P2 = (0), P1 = (1,1), and (0,0) is the common point of the tangents at P2 and P3. The oval o can be described using a bijective function f : K* := K ∖ {0} → K*:
For a point P = ( x , y ) , x ∈ K ∖ { 0 , 1 } {\displaystyle P=(x,y),\;x\in K\setminus \{0,1\}} , the expression m ( x ) = f ( x ) − 1 x − 1 {\displaystyle m(x)={\tfrac {f(x)-1}{x-1}}} is the slope of the secant P P 1 ¯ . {\displaystyle {\overline {PP_{1}}}\;.} Because both the functions x ↦ f ( x ) − 1 {\displaystyle x\mapsto f(x)-1} and x ↦ x − 1 {\displaystyle x\mapsto x-1} are bijections from K ∖ { 0 , 1 } {\displaystyle K\setminus \{0,1\}} to K ∖ { 0 , − 1 } {\displaystyle K\setminus \{0,-1\}} , and x ↦ m ( x ) {\displaystyle x\mapsto m(x)} a bijection from K ∖ { 0 , 1 } {\displaystyle K\setminus \{0,1\}} onto K ∖ { 0 , m 1 } {\displaystyle K\setminus \{0,m_{1}\}} , where m 1 {\displaystyle m_{1}} is the slope of the tangent at P 1 {\displaystyle P_{1}} , for K ∗ ∗ := K ∖ { 0 , 1 } : {\displaystyle K^{**}:=K\setminus \{0,1\}\;:} we get
(Remark: For K ∗ := K ∖ { 0 } {\displaystyle K^{*}:=K\setminus \{0\}} we have: ∏ k ∈ K ∗ k = − 1 . {\displaystyle \displaystyle \prod _{k\in K^{*}}k=-1\;.} ) Hence
Because the slopes of line P 5 P 6 ¯ {\displaystyle {\overline {P_{5}P_{6}}}} and tangent P 1 P 1 ¯ {\displaystyle {\overline {P_{1}P_{1}}}} both are − 1 {\displaystyle -1} , it follows that P 1 P 1 ¯ ∩ P 2 P 3 ¯ = P 4 ∈ P 5 P 6 ¯ {\displaystyle {\overline {P_{1}P_{1}}}\cap {\overline {P_{2}P_{3}}}=P_{4}\in {\overline {P_{5}P_{6}}}} .
This is true for any triangle P 1 , P 2 , P 3 ∈ o {\displaystyle P_{1},P_{2},P_{3}\in {\mathfrak {o}}} .
So (P3) of the 3-point version of Pascal's theorem holds, and the oval is a nondegenerate conic. | https://en.wikipedia.org/wiki/Segre's_theorem |
The Segre classification is an algebraic classification of rank two symmetric tensors . It was proposed by the Italian mathematician Corrado Segre in 1884.
The resulting types are then known as Segre types . It is most commonly applied to the energy–momentum tensor (or the Ricci tensor ) and primarily finds application in the classification of exact solutions in general relativity .
| https://en.wikipedia.org/wiki/Segre_classification |
In taxonomy , a segregate , or a segregate taxon is created when a taxon is split off from another taxon. This other taxon will be better known, usually bigger, and will continue to exist, even after the segregate taxon has been split off. A segregate will be either new or ephemeral: there is a tendency for taxonomists to disagree on segregates, and later workers often reunite a segregate with the 'mother' taxon. [ 1 ]
If a segregate is generally accepted as a 'good' taxon it ceases to be a segregate. Thus, this is a way of indicating change in the taxonomic status. It should not be confused with, for example, the subdivision of a genus into subgenera . [ 2 ] | https://en.wikipedia.org/wiki/Segregate_(taxonomy) |
The Segregated Runge–Kutta (SRK) method [ 1 ] is a family of IMplicit–EXplicit (IMEX) Runge–Kutta methods [ 2 ] [ 3 ] developed to approximate the solution of differential algebraic equations (DAE) of index 2.
The SRK method was motivated as a numerical method for the time integration of the incompressible Navier–Stokes equations with two salient properties. First, velocity and pressure computations are segregated. Second, the method keeps the same order of accuracy for both velocities and pressures. However, the SRK method can also be applied to any other DAE of index 2.
Consider an index 2 DAE defined as follows:
In the previous equations y {\displaystyle y} is known as the differential variable, while z {\displaystyle z} is known as the algebraic variable. The time derivative of the differential variable, y ˙ {\displaystyle {\dot {y}}} , depends on itself, y {\displaystyle y} , on the algebraic variable, z {\displaystyle z} , and on the time, t {\displaystyle t} . The second equation can be seen as a constraint on differential variable, y {\displaystyle y} .
Let us take the time derivative of the second equation. Assuming that the function g {\displaystyle g} is linear and does not depend on time, and that the function f {\displaystyle f} is linear with respect to z {\displaystyle z} , we have that
A Runge–Kutta time integration scheme is defined as a multistage integration in which each stage is computed as a combination of the unknowns evaluated in other stages. Depending on the definition of the parameters, this combination can lead to an implicit scheme or an explicit scheme. Implicit and explicit schemes can be combined, leading to IMEX schemes.
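As a rough illustration of the implicit–explicit idea (a minimal one-stage sketch, assuming an additive splitting f = F + G with a linear stiff part; it is not the SRK scheme itself, which is a multistage generalisation defined below):

```python
# Minimal IMEX sketch (not the SRK scheme): one implicit-explicit Euler step
# for y' = F(y) + G(y), where F(y) = A @ y is a stiff linear operator treated
# implicitly and G is a non-stiff term treated explicitly.  The operators and
# step size below are illustrative assumptions.
import numpy as np

def imex_euler_step(y, dt, A, G):
    """Solve (I - dt*A) y_new = y + dt*G(y) for y_new."""
    rhs = y + dt * G(y)
    return np.linalg.solve(np.eye(len(y)) - dt * A, rhs)

A = np.array([[-1000.0, 0.0],
              [0.0,     -1.0]])                     # stiff part, handled implicitly
G = lambda y: np.array([0.0, 0.1 * y[0] * y[1]])    # mild nonlinearity, handled explicitly

y = np.array([1.0, 1.0])
for _ in range(100):
    y = imex_euler_step(y, dt=0.01, A=A, G=G)
print(y)
```

A full SRK scheme adds several implicit and explicit stages and, in the Navier–Stokes application, segregates the algebraic (pressure) variable stage by stage; the single step above only conveys the splitting idea.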
Suppose that the function f {\displaystyle f} can be split into two operators F {\displaystyle {\mathcal {F}}} and G {\displaystyle {\mathcal {G}}} such that
The SRK method is based on the use of IMEX Runge–Kutta schemes and can be defined by the following scheme: | https://en.wikipedia.org/wiki/Segregated_Runge–Kutta_methods |
In materials science , segregation is the enrichment of atoms, ions, or molecules at a microscopic region in a materials system. While the terms segregation and adsorption are essentially synonymous, in practice, segregation is often used to describe the partitioning of molecular constituents to defects from solid solutions, [ 1 ] whereas adsorption is generally used to describe such partitioning from liquids and gases to surfaces. The molecular-level segregation discussed in this article is distinct from other types of materials phenomena that are often called segregation, such as particle segregation in granular materials , and phase separation or precipitation, wherein molecules are segregated in to macroscopic regions of different compositions. Segregation has many practical consequences, ranging from the formation of soap bubbles, to microstructural engineering in materials science, [ 2 ] to the stabilization of colloidal suspensions.
Segregation can occur in various materials classes. In polycrystalline solids, segregation occurs at defects , such as dislocations, grain boundaries , stacking faults, or the interface between two phases. In liquid solutions, chemical gradients exist near second phases and surfaces due to combinations of chemical and electrical effects.
Segregation that occurs in well-equilibrated systems due to the intrinsic chemical properties of the system is termed equilibrium segregation. Segregation that occurs due to the processing history of the sample (but that would disappear at long times) is termed non-equilibrium segregation.
Equilibrium segregation is associated with the lattice disorder at interfaces, where there are sites of energy different from those within the lattice at which the solute atoms can deposit themselves. The equilibrium segregation is so termed because the solute atoms segregate themselves to the interface or surface in accordance with the statistics of thermodynamics in order to minimize the overall free energy of the system. This sort of partitioning of solute atoms between the grain boundary and the lattice was predicted by McLean in 1957. [ 3 ]
Non-equilibrium segregation, first theorized by Westbrook in 1964, [ 4 ] occurs as a result of solutes coupling to vacancies which are moving to grain boundary sources or sinks during quenching or application of stress. It can also occur as a result of solute pile-up at a moving interface. [ 5 ]
There are two main features of non-equilibrium segregation, by which it is most easily distinguished from equilibrium segregation. In the non-equilibrium effect, the magnitude of the segregation increases with increasing temperature and the alloy can be homogenized without further quenching because its lowest energy state corresponds to a uniform solute distribution. In contrast, the equilibrium segregated state, by definition, is the lowest energy state in a system that exhibits equilibrium segregation, and the extent of the segregation effect decreases with increasing temperature. The details of non-equilibrium segregation are not going to be discussed here, but can be found in the review by Harries and Marwick. [ 6 ]
Segregation of a solute to surfaces and grain boundaries in a solid produces a section of material with a discrete composition and its own set of properties that can have important (and often deleterious) effects on the overall properties of the material. These 'zones' with an increased concentration of solute can be thought of as the cement between the bricks of a building. The structural integrity of the building depends not only on the material properties of the brick, but also greatly on the properties of the long lines of mortar in between.
Segregation to grain boundaries, for example, can lead to grain boundary fracture as a result of temper brittleness, creep embrittlement, stress relief cracking of weldments, hydrogen embrittlement , environmentally assisted fatigue, grain boundary corrosion, and some kinds of intergranular stress corrosion cracking . [ 7 ] A very interesting and important field of study of impurity segregation processes involves AES of grain boundaries of materials. This technique includes tensile fracturing of special specimens directly inside the UHV chamber of the Auger Electron Spectrometer that was developed by Ilyin. [ 8 ] [ 9 ] Segregation to grain boundaries can also affect their respective migration rates, and so affects sinterability, as well as the grain boundary diffusivity (although sometimes these effects can be used advantageously). [ 10 ]
Segregation to free surfaces also has important consequences involving the purity of metallurgical samples. Because of the favorable segregation of some impurities to the surface of the material, a very small concentration of impurity in the bulk of the sample can lead to a very significant coverage of the impurity on a cleaved surface of the sample. In applications where an ultra-pure surface is needed (for example, in some nanotechnology applications), the segregation of impurities to surfaces requires a much higher purity of bulk material than would be needed if segregation effects did not exist. The following figure illustrates this concept with two cases in which the total fraction of impurity atoms is 0.25 (25 impurity atoms in 100 total). In the representation on the left, these impurities are equally distributed throughout the sample, and so the fractional surface coverage of impurity atoms is also approximately 0.25. In the representation to the right, however, the same number of impurity atoms are shown segregated on the surface, so that an observation of the surface composition would yield a much higher impurity fraction (in this case, about 0.69). In fact, in this example, were impurities to completely segregate to the surface, an impurity fraction of just 0.36 could completely cover the surface of the material. In an application where surface interactions are important, this result could be disastrous.
While the intergranular failure problems noted above are sometimes severe, they are rarely the cause of major service failures (in structural steels, for example), as suitable safety margins are included in the designs. Perhaps the greater concern is that with the development of new technologies and materials with new and more extensive mechanical property requirements, and with the increasing impurity contents as a result of the increased recycling of materials, we may see intergranular failure in materials and situations not seen currently. Thus, a greater understanding of all of the mechanisms surrounding segregation might lead to being able to control these effects in the future. [ 11 ] Modeling potentials, experimental work, and related theories are still being developed to explain these segregation mechanisms for increasingly complex systems.
Several theories describe the equilibrium segregation activity in materials. The adsorption theories for the solid-solid interface and the solid-vacuum surface are direct analogues of theories well known in the field of gas adsorption on the free surfaces of solids. [ 12 ]
This is the earliest theory specifically for grain boundaries, in which McLean [ 3 ] uses a model of P solute atoms distributed at random amongst N lattice sites and p solute atoms distributed at random amongst n independent grain boundary sites. The total free energy due to the solute atoms is then:
where E and e are the energies of the solute atom in the lattice and in the grain boundary, respectively, and the k ln term represents the configurational entropy of the arrangement of the solute atoms in the bulk and grain boundary. McLean used basic statistical mechanics to find the fractional monolayer of segregant, X b {\displaystyle X_{b}} , at which the system energy was minimized (at the equilibrium state ), differentiating G with respect to p , noting that the sum of p and P is constant. Here the grain boundary analogue of Langmuir adsorption at free surfaces becomes:
Here, X b 0 {\displaystyle X_{b}^{0}} is the fraction of the grain boundary monolayer available for segregated atoms at saturation, X b {\displaystyle X_{b}} is the actual fraction covered with segregant, X c {\displaystyle X_{c}} is the bulk solute molar fraction, and Δ G {\displaystyle \Delta G} is the free energy of segregation per mole of solute.
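A small numerical sketch of this relationship follows. It assumes the commonly quoted form of the Langmuir–McLean isotherm, X_b/(X_b⁰ − X_b) = [X_c/(1 − X_c)]·exp(−ΔG/RT); the equation as written in the source is not reproduced above, and the numbers used are purely illustrative.

```python
# Hedged sketch of the Langmuir-McLean grain-boundary isotherm, assuming the
# common form  X_b / (X_b0 - X_b) = X_c / (1 - X_c) * exp(-dG / (R*T)),
# solved in closed form for the boundary coverage X_b.  Inputs are invented
# for illustration; they are not values from the article.
import math

R = 8.314  # gas constant, J / (mol K)

def langmuir_mclean(X_c, dG, T, X_b0=1.0):
    A = X_c / (1.0 - X_c) * math.exp(-dG / (R * T))
    return X_b0 * A / (1.0 + A)

# a dilute solute (1 at.%) with a segregation free energy of -40 kJ/mol
print(langmuir_mclean(X_c=0.01, dG=-40e3, T=800))    # strong enrichment at 800 K
print(langmuir_mclean(X_c=0.01, dG=-40e3, T=1400))   # weaker enrichment at 1400 K
```

Consistent with the description of equilibrium segregation above, the predicted boundary coverage falls as the temperature rises.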
Values of Δ G {\displaystyle \Delta G} were estimated by McLean using the elastic strain energy , E el {\displaystyle E_{\text{el}}} , released by the segregation of solute atoms. The solute atom is represented by an elastic sphere fitted into a spherical hole in an elastic matrix continuum. The elastic energy associated with the solute atom is given by:
where K {\displaystyle \mathrm {K} } is the solute bulk modulus , μ 0 , {\displaystyle \mu _{0},} is the matrix shear modulus , and r 0 , {\displaystyle r_{0},} and r 1 , {\displaystyle r_{1},} are the atomic radii of the matrix and impurity atoms, respectively. This method gives values correct to within a factor of two (as compared with experimental data for grain boundary segregation), but a greater accuracy is obtained using the method of Seah and Hondros, [ 10 ] described in the following section.
Using truncated BET theory (the gas adsorption theory developed by Brunauer, Emmett, and Teller), Seah and Hondros [ 10 ] write the solid-state analogue as:
where Δ G = Δ G ′ + Δ G sol {\displaystyle \Delta G=\Delta G'+\Delta G_{\text{sol}}}
X c 0 {\displaystyle X_{c}^{0}} is the solid solubility , which is known for many elements (and can be found in metallurgical handbooks). In the dilute limit, a slightly soluble substance has X c 0 = exp ( Δ G sol R T ) {\displaystyle X_{c}^{0}=\exp \left({\frac {\Delta G_{\text{sol}}}{RT}}\right)} , so the above equation reduces to that found with the Langmuir-McLean theory. This equation is only valid for X c ≤ X c 0 {\displaystyle X_{c}\leq X_{c}^{0}} . If there is an excess of solute such that a second phase appears, the solute content is limited to X c 0 {\displaystyle X_{c}^{0}} and the equation becomes
This theory for grain boundary segregation, derived from truncated BET theory, provides excellent agreement with experimental data obtained by Auger electron spectroscopy and other techniques. [ 12 ]
Other models exist to model more complex binary systems. [ 12 ] The above theories operate on the assumption that the segregated atoms are non-interacting. If, in a binary system , adjacent adsorbate atoms are allowed an interaction energy ω {\displaystyle \omega \,} , such that they can attract (when ω {\displaystyle \omega \,} is negative) or repel (when ω {\displaystyle \omega \,} is positive) each other, the solid-state analogue of the Fowler adsorption theory is developed as
When ω {\displaystyle \omega \,} is zero, this theory reduces to that of Langmuir and McLean. However, as ω {\displaystyle \omega \,} becomes more negative, the segregation shows progressively sharper rises as the temperature falls until eventually the rise in segregation is discontinuous at a certain temperature, as shown in the following figure.
Guttman, in 1975, extended the Fowler theory to allow for interactions between two co-segregating species in multicomponent systems. This modification is vital to explaining the segregation behavior that results in the intergranular failures of engineering materials. More complex theories are detailed in the work by Guttmann [ 13 ] and McLean and Guttmann. [ 14 ]
The Langmuir–McLean equation for segregation, when using the regular solution model for a binary system, is valid for surface segregation (although sometimes the equation will be written replacing X b {\displaystyle X_{b}} with X s {\displaystyle X_{s}} ). [ 15 ] The free energy of surface segregation is Δ G s = Δ H s − T Δ S {\displaystyle \Delta G_{s}=\Delta H_{s}-T\,\Delta S} . The enthalpy is given by
where γ 0 {\displaystyle \gamma _{0}} and γ 1 {\displaystyle \gamma _{1}} are matrix surface energies without and with solute, H 1 {\displaystyle H_{1}} is their heat of mixing, Z and Z 1 {\displaystyle Z_{1}} are the coordination numbers in the matrix and at the surface, and Z v {\displaystyle Z_{v}} is the coordination number for surface atoms to the layer below. The last term in this equation is the elastic strain energy E el {\displaystyle E_{\text{el}}} , given above, and is governed by the mismatch between the solute and the matrix atoms. For solid metals, the surface energies scale with the melting points . The surface segregation enrichment ratio increases when the solute atom size is larger than the matrix atom size and when the melting point of the solute is lower than that of the matrix. [ 12 ]
A chemisorbed gaseous species on the surface can also have an effect on the surface composition of a binary alloy. In the presence of a coverage of a chemisorbed species theta, it is proposed that the Langmuir-McLean model is valid with the free energy of surface segregation given by Δ G chem {\displaystyle \Delta G_{\text{chem}}} , [ 16 ] where
E A {\displaystyle E_{A}} and E B {\displaystyle E_{B}} are the chemisorption energies of the gas on solute A and matrix B and Θ {\displaystyle \Theta } is the fractional coverage. At high temperatures, evaporation from the surface can take place, causing a deviation from the McLean equation. At lower temperatures, both grain boundary and surface segregation can be limited by the diffusion of atoms from the bulk to the surface or interface.
In some situations where segregation is important, the segregant atoms do not have sufficient time to reach their equilibrium level as defined by the above adsorption theories. The kinetics of segregation become a limiting factor and must be analyzed as well. Most existing models of segregation kinetics follow the McLean approach. In the model for equilibrium monolayer segregation, the solute atoms are assumed to segregate to a grain boundary from two infinite half-crystals or to a surface from one infinite half-crystal. The diffusion in the crystals is described by Fick's laws. The ratio of the solute concentration in the grain boundary to that in the adjacent atomic layer of the bulk is given by an enrichment ratio, β {\displaystyle \beta } . Most models assume β {\displaystyle \beta } to be a constant, but in practice this is only true for dilute systems with low segregation levels. In this dilute limit, if X b 0 {\displaystyle X_{b}^{0}} is one monolayer, β {\displaystyle \beta } is given as β = X b X c = exp ( − Δ G ′ R T ) X c 0 {\displaystyle \beta ={\frac {X_{b}}{X_{c}}}={\frac {\exp \left({\frac {-\Delta G'}{RT}}\right)}{X_{c}^{0}}}} .
The kinetics of segregation can be described by the following equation: [ 11 ]
where F = 4 {\displaystyle F=4} for grain boundaries and 1 for the free surface, X b ( t ) {\displaystyle X_{b}(t)} is the boundary content at time t {\displaystyle t} , D {\displaystyle D} is the solute bulk diffusivity, f {\displaystyle f} is related to the atomic sizes of the solute and the matrix, b {\displaystyle b} and a {\displaystyle a} , respectively, by f = a 3 b − 2 {\displaystyle f=a^{3}b^{-2}} . For short times, this equation is approximated by: [ 11 ]
In practice, β {\displaystyle \beta } is not a constant but generally falls as segregation proceeds due to saturation. If β {\displaystyle \beta } starts high and falls rapidly as the segregation saturates, the above equation is valid until the point of saturation. [ 12 ]
All metal castings experience segregation to some extent, and a distinction is made between macro segregation and micro segregation. Microsegregation refers to localized differences in composition between dendrite arms, and can be significantly reduced by a homogenizing heat treatment. This is possible because the distances involved (typically on the order of 10 to 100 μm) are sufficiently small for diffusion to be a significant mechanism. This is not the case in macrosegregation. Therefore, macrosegregation in metal castings cannot be remedied or removed using heat treatment. [ 17 ] | https://en.wikipedia.org/wiki/Segregation_(materials_science) |
Segrosomes are protein complexes that ensure accurate segregation (partitioning) of plasmids or chromosomes during bacterial cell division .
Just as higher forms of life have evolved a complex mitotic apparatus to partition duplicated DNA during cell division , bacteria require a specialized apparatus to partition their duplicated DNA. In bacteria, segrosomes perform the function similar to that performed by mitotic spindle . Therefore, segrosomes can be thought of as minimalist spindles.
Segrosomes are usually composed of three basic components- the DNA (plasmid or chromosome) that needs to be segregated into daughter cells, a motor protein that provides the necessary physical forces for accomplishing the segregation and a DNA binding protein that connects the DNA and the motor protein, to form the complete segrosome complex.
The majority of motor proteins participating in plasmid segrosomes are Walker-type or ParM type ATPases . Segrosome formation could be a highly regulated and ordered process to ensure its coupling with the other events of the bacterial cell cycle. Recently segrosomal complexes derived from the tubulin family of cytoskeletal proteins , which are GTPases have been discovered in megaplasmids found in Bacillus species. | https://en.wikipedia.org/wiki/Segrosome |
The Segrè–Silberberg effect is a fluid dynamic separation effect where a dilute suspension of neutrally buoyant particles flowing (in laminar flow ) in a tube equilibrates at a distance of 0.6R from the tube's centre. This effect was first observed by Gino Segrè and Alexander Silberberg. [ 1 ] [ 2 ] The solid particles are subjected to both viscous drag forces and inertial lift forces. The drag forces are responsible for driving particles along the flow streamlines, whereas the inertial forces are responsible for the lateral migration of particles across the flow streamlines. The parabolic nature of the laminar velocity profile in Poiseuille flow produces a shear-induced inertial lift force that drives particles towards the channel walls. As particles migrate closer to the channel walls, the flow around the particle induces a pressure increase between the particle and the wall which prevents particles from moving closer. [ 3 ] The opposing lift forces are dependent on the particle diameter to channel diameter
ratio ( d / D {\displaystyle d/D} ), and dominate for d / D ≥ 0.07 {\displaystyle d/D\geq 0.07} . | https://en.wikipedia.org/wiki/Segrè–Silberberg_effect |
In quantum field theory , Seiberg duality , conjectured by Nathan Seiberg in 1994, [ 1 ] is an S-duality relating two different supersymmetric QCDs . The two theories are not identical, but they agree at low energies. More precisely under a renormalization group flow they flow to the same IR fixed point , and so are in the same universality class . It is an extension to nonabelian gauge theories with N=1 supersymmetry of Montonen–Olive duality in N=4 theories and electromagnetic duality in abelian theories.
Seiberg duality is an equivalence of the IR fixed points in an N =1 theory with SU(N c ) as the gauge group and N f flavors of fundamental chiral multiplets and N f flavors of antifundamental chiral multiplets in the chiral limit (no bare masses ) and an N=1 chiral QCD with N f -N c colors and N f flavors, where N c and N f are positive integers satisfying
A stronger version of the duality relates not only the chiral limit but also the full deformation space of the theory. In the special case in which
the IR fixed point is a nontrivial interacting superconformal field theory . For a superconformal field theory, the anomalous scaling dimension of a chiral superfield D = 3 2 R {\displaystyle D={\frac {3}{2}}R} where R is the R-charge. This is an exact result.
The dual theory contains a fundamental "meson" chiral superfield M which is color neutral but transforms as a bifundamental under the flavor symmetries.
The dual theory contains the superpotential W = α M Q c ~ Q ~ {\displaystyle W=\alpha M{\tilde {Q^{c}}}{\tilde {Q}}} .
Being an S-duality, Seiberg duality relates the strong coupling regime with the weak coupling regime, and interchanges chromoelectric fields ( gluons ) with chromomagnetic fields (gluons of the dual gauge group), and chromoelectric charges ( quarks ) with nonabelian 't Hooft–Polyakov monopoles . In particular, the Higgs phase is dual to the confinement phase as in the dual superconducting model .
The mesons and baryons are preserved by the duality. However, in the electric theory the meson is a quark bilinear ( M ≡ Q c Q {\displaystyle M\equiv Q^{c}Q} ), while in the magnetic theory it is a fundamental field. In both theories the baryons are constructed from quarks, but the number of quarks in one baryon is the rank of the gauge group, which differs in the two dual theories.
The gauge symmetries of the theories do not agree, which is not problematic as the gauge symmetry is a feature of the formulation and not of the fundamental physics. The global symmetries relate distinct physical configurations, and so they need to agree in any dual description.
The moduli spaces of the dual theories are identical.
The global symmetries agree, as do the charges of the mesons and baryons.
In certain cases it reduces to ordinary electromagnetic duality.
It may be embedded in string theory via Hanany–Witten brane cartoons consisting of intersecting D-branes . There it is realized as the motion of an NS5-brane which is conjectured to preserve the universality class.
Six nontrivial anomalies may be computed on both sides of the duality, and they agree as they must in accordance with Gerard 't Hooft 's anomaly matching conditions . The role of the additional fundamental meson superfield M in the dual theory is very crucial in matching the anomalies. The global gravitational anomalies also match up as the parity of the number of chiral fields is the same in both theories. The R-charge of the Weyl fermion in a chiral superfield is one less than the R-charge of the superfield. The R-charge of a gaugino is +1.
Another evidence for Seiberg duality comes from identifying the superconformal index, which is a generalization of the Witten index , for the electric and the magnetic phase. The identification gives rise to complicated integral identities which have been studied in the mathematical literature. [ 2 ]
Seiberg duality has been generalized in many directions. One generalization applies to quiver gauge theories , in which the flavor symmetries are also gauged. The simplest of these is a super QCD with the flavor group gauged and an additional term in the superpotential . It leads to a series of Seiberg dualities known as a duality cascade, introduced by Igor Klebanov and Matthew Strassler . [ 3 ]
Whether Seiberg duality exists in 3-dimensional nonabelian gauge theories with only 4 supercharges is not known, although it is conjectured in some special cases with Chern–Simons terms. [ 4 ] | https://en.wikipedia.org/wiki/Seiberg_duality |
In mathematics , in graph theory , the Seidel adjacency matrix of a simple undirected graph G is a symmetric matrix with a row and column for each vertex, having 0 on the diagonal, −1 for positions whose rows and columns correspond to adjacent vertices, and +1 for positions corresponding to non-adjacent vertices.
It is also called the Seidel matrix or – its original name – the (−1,1,0)- adjacency matrix .
It can be interpreted as the result of subtracting the adjacency matrix of G from the adjacency matrix of the complement of G .
The multiset of eigenvalues of this matrix is called the Seidel spectrum .
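A short sketch of this construction in code (the 5-cycle used as the test graph is an arbitrary illustrative choice, not an example from the source):

```python
# Sketch: build the Seidel matrix S = J - I - 2A from a graph's 0/1 adjacency
# matrix A (0 on the diagonal, -1 for adjacent pairs, +1 for non-adjacent
# pairs) and compute its eigenvalues, i.e. the Seidel spectrum.
import numpy as np

def seidel_matrix(A):
    n = A.shape[0]
    return np.ones((n, n)) - np.eye(n) - 2.0 * A

# adjacency matrix of the 5-cycle C5, chosen only as an illustration
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)

S = seidel_matrix(A)
print(S)
print(np.round(np.linalg.eigvalsh(S), 3))   # the Seidel spectrum of C5
```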
The Seidel matrix was introduced by J. H. van Lint and Johan Jacob Seidel in 1966 and extensively exploited by Seidel and coauthors.
The Seidel matrix of G is also the adjacency matrix of a signed complete graph K G in which the edges of G are negative and the edges not in G are positive. It is also the adjacency matrix of the two-graph associated with G and K G .
The eigenvalue properties of the Seidel matrix are valuable in the study of strongly regular graphs .
| https://en.wikipedia.org/wiki/Seidel_adjacency_matrix |
Seidor is a technology consulting firm with headquarters in Barcelona , Spain . It was founded in 1982 in Vic . [ 1 ] As of 2024, it has a team of 9,000 people and a direct presence in 45 countries in Europe, the United States, Latin America, the Middle East, Africa and Asia. The Carlyle Group joined Seidor as a major shareholder in August 2024. [ 2 ]
It has a comprehensive portfolio of technology services and solutions covering AI , enterprise resource planning (ERP), customer experience (CX), employee experience , data , application modernisation, cloud , edge , connectivity and cyber security . [ 3 ]
Seidor was established in 1982 in Vic (Barcelona). It was founded by the brothers Santiago and Andreu Benito. The company's initial focus was on developing customised business management software for small and medium-sized companies.
In 1983, Seidor opened its Barcelona office, the company's global headquarters. A year later, it created Microsistemes to manage its microcomputer business, and in 1984 it began its alliance with IBM . [ 4 ]
The first office in Madrid was opened in 1991. In the same year, the company began implementing standard ERP business management solutions, which has been one of its main areas of activity ever since. [ 5 ] In 1992, Josep Benito, who would later become the company's CEO, joined the company; [ 6 ] in 1996, it partnered with SAP , a German business management software company, in order to have a strategic partner for its global growth. [ 4 ] It was also during this decade that Seidor sealed its alliance with Microsoft , a partnership that has been strengthened over the years. [ 7 ]
In 2003, Seidor acquired Saytel, which marked the start of its activities in the large enterprise sector; [ 8 ] in 2005, the company began its international expansion with the opening of offices in Chile . It also entered the SAP business for SMEs.
The first corporate operation outside Spain took place in 2010, with the creation of Crystal Solutions ( Brazil ), specialising in data analysis; [ 9 ] in 2012, the company entered the cloud computing business; in 2014, it continued its international expansion and started working with global institutions and agencies such as the European Union , the United Nations and the World Bank ; [ 10 ] in 2016, Seidor expanded its global presence and continued diversifying its activities in areas such as digital transformation , cybersecurity and online education , and in the same year, established its alliance with AWS .
In 2017, it created the first global competence centre in the field of SAP CX in Peru, Valencia and Taiwan.
In 2021, Seidor became a partner of Salesforce and Google Cloud ; in 2022, it entered the field of connectivity services and established an alliance with CISCO .
Since 2023, it has been developing the fields of Artificial Intelligence and Edge Technologies . In 2024, The Carlyle Group became a shareholder of the company; Sergi Biosca was appointed CEO and Josep Benito, Executive Chairman. [ 11 ]
In order to expand its geographic presence and capabilities, the group has made a number of strategic acquisitions and integrations of other companies. Key transactions include the following:
The company has been expanding its international presence through a combination of its own office openings and local acquisitions. The chronology of this growth is as follows:
The company began by developing business management applications for small and medium-sized enterprises. It has gradually expanded its range of services and technological solutions and, in turn, its geographical presence to serve customers in a variety of sectors, as well as large companies. [ 26 ]
Sectors served include: government, agri-food , food & beverage , banking & insurance, ceramics, construction, consumer, pharmaceutical distribution, education, pharmaceuticals, automotive & aerospace, engineering & machinery, real estate, process products, chemicals, retail, healthcare, professional services and transportation. [ 27 ]
The Group offers solutions in the areas of AI, standard ERP, customer experience, employee experience, data, application modernisation, cloud, edge technology, networking and cyber security.
There are centres of innovation and excellence in several countries:
CX competence centres in Bilbao, Bogota, Lima, Madrid, Santiago, Taipei and Valencia; AI and Innovation competence centres in Barcelona, Bilbao, Dubai and Santiago; Data competence centres in Barcelona, Buenos Aires and Dubai; Workplace competence centres in Barcelona; [ 28 ] cloud computing competence centres in Barcelona, Lima, Sao Paulo and Zaragoza; [ 29 ] and an Application Development competence centre in Kerala. It also has Cybersecurity competence centres in Barcelona, Mexico City and Lima. [ 30 ] [ 28 ]
The technology consultancy collaborates with different organisations in the academic and educational field, such as ESADE , [ 31 ] IESE , San Telmo Business School, [ 32 ] Universitat Autónoma de Barcelona , [ 33 ] Deusto , Universidad de Castilla la Mancha , [ 34 ] Universidad de Nebrija, Universidad del País Vasco , Universitat Politècnica de Catalunya , Universitat Politècnica de Valencia , [ 35 ] Universitat Oberta de Catalunya ( UOC ), Universidad Internacional de La Rioja (UNIR) , Universitat de València , [ 36 ] University of Vic, and Universidad del Desarrollo , in Santiago de Chile. It also participates in various technological meetings with universities and companies, [ 33 ] and takes on and trains students through internships in the IT field. [ 34 ]
The company has received a number of awards and has been recognised as a reference technology partner by a range of technology companies including IBM, SAP, [ 37 ] Microsoft, [ 38 ] Salesforce, [ 39 ] Google and AWS. | https://en.wikipedia.org/wiki/Seidor_(company) |
Seiffert's spherical spiral is a curve on a sphere made by moving on the sphere with constant speed and angular velocity with respect to a fixed diameter. If the selected diameter is the line from the north pole to the south pole, then the requirement of constant angular velocity means that the longitude of the moving point changes at a constant rate. [ 1 ] The cylindrical coordinates of the varying point on this curve are given by the Jacobian elliptic functions .
The Seiffert's spherical spiral can be expressed in cylindrical coordinates as
r = sn(s, k), θ = k·s and z = cn(s, k)
or expressed as Jacobi theta functions
r = ϑ₃(0)·ϑ₁(s·ϑ₃⁻²(0)) / (ϑ₂(0)·ϑ₄(s·ϑ₃⁻²(0))), θ = (ϑ₂²(q) / ϑ₃²(q))·s and z = ϑ₄(0)·ϑ₃(s·ϑ₃⁻²(0)) / (ϑ₃(0)·ϑ₄(s·ϑ₃⁻²(0))) . [ 5 ]
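A numerical sketch of the first parametrisation using SciPy's Jacobi elliptic functions follows; note that scipy.special.ellipj takes the parameter m = k², and the modulus value chosen below is an arbitrary assumption, not a value from the source.

```python
# Sketch: sample Seiffert's spherical spiral from r = sn(s, k), theta = k*s,
# z = cn(s, k) and convert to Cartesian coordinates.  scipy.special.ellipj
# expects the parameter m = k**2; the modulus k = 0.15 is an arbitrary choice.
import numpy as np
from scipy.special import ellipj

k = 0.15
s = np.linspace(0.0, 40.0, 2000)
sn, cn, dn, ph = ellipj(s, k**2)

r, theta, z = sn, k * s, cn
x, y = r * np.cos(theta), r * np.sin(theta)

# sn^2 + cn^2 = 1, so every sampled point lies on the unit sphere
print(np.allclose(x**2 + y**2 + z**2, 1.0))
```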
| https://en.wikipedia.org/wiki/Seiffert's_spiral |
Seipin is a homo-oligomeric integral membrane protein in the endoplasmic reticulum (ER) that concentrates at junctions with cytoplasmic lipid droplets (LDs). Alternatively, seipin can be referred to as Berardinelli–Seip congenital lipodystrophy type 2 protein (BSCL2), and it is encoded by the corresponding gene of the same name, i.e. BSCL2 . At protein level, seipin is expressed in cortical neurons in the frontal lobes, as well as motor neurons in the spinal cord. It is highly expressed in areas like the brain, testis and adipose tissue. [ 1 ] Seipin's function is still unclear but it has been localized close to lipid droplets, and cells knocked out in seipin have anomalous droplets. [ 2 ] Hence, recent evidence suggests that seipin plays a crucial role in lipid droplet biogenesis. [ 3 ]
Though it was initially dubbed "mysterious protein", [ 4 ] recent empirical studies are gradually starting to unveil some of seipin's most compelling physiological functions. [ 2 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] Among these, the following have been identified: central regulation of energy homeostasis, lipid catabolism (essential for adipocyte differentiation), lipid storage and lipid droplet maintenance, as well as prevention of ectopic lipid droplet formation in non-adipose tissues. [ 9 ] Additionally, mutations of BSCL2 have been recently linked to the Silver syndrome [ 10 ] (hereditary spastic periplegia type 17 [ 11 ] ) and Celia's encephalopathy . [ 12 ] [ 13 ]
The seipin gene BSCL2 was originally identified in mammals and the fruit fly, and later extended to fungi and plants. [ 14 ] The human seipin gene is located on chromosome 11q 13, with protein coding on the Crick strand. [ 15 ]
There are three validated coding transcripts in GenBank. The primary transcript originally described, contained 11 exons with protein coding beginning on exon 2 and ending in exon 11 (transcript variant 2), resulting in a 398 amino acid protein with two strongly predicted transmembrane domains (TMDs) , coded in exons 2 and 7 (isoform 2).
However, a longer transcript (variant 1) is generated with an alternative first exon containing a translational start site that results in an additional 64 amino acids at the N-terminal extension , 462 amino acids in total (isoform 1).
A third coding transcript (variant 3) splices out exon 7 and produces a shortened and altered carboxy terminus in exon 10, generating a protein of 287 amino acids (isoform 3). [ 3 ] Celia's encephalopathy is associated with a mutation in BSCL2 that leads to increased alternative splicing of the pre-mRNA to an mRNA that lacks the seventh exon, corresponding to the second transmembrane domain of the protein product. [ 16 ]
The secondary structure of seipin includes a conserved central core domain, and diverse cytosolic N- and C-termini. [ 17 ]
The protein has a short cytoplasmic region, a transmembrane alpha-helix, a water-soluble beta-sandwich domain located in the endoplasmic reticulum, and a second TM helix. [ citation needed ]
There are three different variations of seipin's amino acid sequence: [ 18 ]
All seipin mutations occur within its loop domain. Among these, four large deletions have been identified, which indicates that at least exons 4 and 5 are required for seipin function in humans. In addition, six other mutations have been identified in the loop domain. The majority of these cluster at the single asparagine -linked glycosylation site (NVS) in seipin. [ 3 ] The two mutations that cause neuronal seipinopathy, N88S and S90L, are located directly within this site. [ 19 ] Apart from abolishing glycosylation, these mutations cause aggregation of seipin and, consequently, initiation of the ER stress response. The seipin protein can also carry modified residues: the serines at positions 289 and 372 can be converted into phosphoserine , an ester of serine and phosphoric acid .
Overexpression of the mutated seipin proteins N88S or S90L can also activate autophagy and substantially alter the sub-cellular distribution of the autophagosome marker GFP-LC3, leading to the appearance of a number of large vacuoles in the cytoplasm . The sub-cellular locations of GFP-LC3 and the mutated seipin proteins overlap to a high degree. Moreover, these mutant seipin proteins can cause small lipid droplets to fuse into larger lipid droplets.
Seipin mutations have been associated with congenital generalized lipodystrophy (see below), and mutations in an N-glycosylation motif links seipin to two other disorders, i.e. Silver syndrome and autosomal-dominant distal hereditary motor neuropathy type V. [ 20 ]
CGL ( congenital generalized lipodystrophy ) is a heterogeneous genetic disorder characterized by almost complete loss of adipose tissue (both metabolic and mechanical adipose depots) and an increase of ectopic fat storage in liver and muscle. Of the four CGL types, BSCL2 (Berardinelli-Seip Congenital lipodystrophy type 2), resulting from mutations in the BSCL2/seipin gene, exhibits the most severe lipodystrophic phenotype. [ 21 ]
Furthermore, these patients could suffer dyslipidemia , hepatic steatosis , insulin resistance and hypertrophic cardiomyopathy due to a cell-autonomous defect in cardiomyocytes. [ 3 ]
For many years mutations of the seipin gene were associated with a loss of function, such as in CGL (see above). However, recent studies show that mutations such as N88S and S90L seem to have a gain-of-toxic-function which may result in autosomal dominant motor neuron diseases and distal hereditary motor neuropathy , such as Silver syndrome and distal hereditary motor neuropathy type V. [ 22 ]
Owing to the wide clinical spectrum of these mutations, it has been proposed to collectively refer to seipin-related motor neuron diseases as seipinopathies. [ 23 ]
Symptoms can vary and include: developmental regression of motor and cognitive skills in the first years of life leading to death ( encephalopathy ), muscle weakness and spasticity in lower limbs ( spastic paraplegia type XVII), weakness of distal muscles of upper limbs (distal hereditary motor neuropathy type V) as well as wasting of the hand muscles (in both cases). Complex forms of seipinopathies may include deafness, dementia or mental retardation. [ 3 ]
Testicular tissue-derived seipin is essential for male fertility by modulating testicular phospholipid homeostasis. The lack of seipin in germ cells results in complete male infertility and teratozoospermia . Spermatids devoid of seipin in germ cells are morphologically abnormal with large ectopic lipid droplets and aggregate in dysfunctional clusters. Elevated levels of phosphatidic acid accompanied with an altered ratio of polyunsaturated to monounsaturated and saturated fatty acids show impaired phospholipid homeostasis during spermiogenesis . [ 24 ] | https://en.wikipedia.org/wiki/Seipin |
In reflection seismology , a seismic attribute is a quantity extracted or derived from seismic data that can be analysed in order to enhance information that might be more subtle in a traditional seismic image, leading to a better geological or geophysical interpretation of the data. [ 1 ] Examples of seismic attributes can include measured time, amplitude , frequency and attenuation , in addition to combinations of these. Most seismic attributes are post-stack , but those that use CMP gathers , such as amplitude versus offset (AVO), must be analysed pre-stack . [ 2 ] They can be measured along a single seismic trace or across multiple traces within a defined window.
The first attributes developed were related to the 1D complex seismic trace and included: envelope amplitude , instantaneous phase , instantaneous frequency , and apparent polarity . Acoustic impedance obtained from seismic inversion can also be considered an attribute and was among the first developed. [ 3 ]
Other attributes commonly used include: coherence , azimuth , dip , instantaneous amplitude , response amplitude , response phase , instantaneous bandwidth , AVO , and spectral decomposition .
A seismic attribute that can indicate the presence or absence of hydrocarbons is known as a direct hydrocarbon indicator .
Amplitude attributes use the seismic signal amplitude as the basis for their computation.
A post-stack attribute that computes the arithmetic mean of the amplitudes of a trace within a specified window. This can be used to observe the trace bias which could indicate the presence of a bright spot .
A post-stack attribute that computes the sum of the squared amplitudes divided by the number of samples within the specified window used. This provides a measure of reflectivity and allows one to map direct hydrocarbon indicators within a zone of interest.
A post-stack attribute that computes the square root of the sum of squared amplitudes divided by the number of samples within the specified window used. With this root mean square amplitude , one can measure reflectivity in order to map direct hydrocarbon indicators in a zone of interest. However, RMS is sensitive to noise as it squares every value within the window.
A post-stack attribute that computes the maximum value of the absolute value of the amplitudes within a window. This can be used to map the strongest direct hydrocarbon indicator within a zone of interest.
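A minimal sketch of the windowed amplitude attributes described above for a single trace follows; the synthetic trace and the 50-sample window length are illustrative assumptions, not values from the source.

```python
# Sketch: simple sliding-window amplitude attributes (mean, RMS, maximum
# magnitude) for one seismic trace.  The synthetic trace and the 50-sample
# window are illustrative choices only.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 2000)
trace = np.sin(2 * np.pi * 10 * t) * np.exp(-t) + 0.05 * rng.standard_normal(t.size)

def windowed_attributes(trace, win=50):
    n = trace.size - win + 1
    mean_amp = np.array([trace[i:i + win].mean() for i in range(n)])
    rms_amp = np.array([np.sqrt(np.mean(trace[i:i + win] ** 2)) for i in range(n)])
    max_abs = np.array([np.abs(trace[i:i + win]).max() for i in range(n)])
    return mean_amp, rms_amp, max_abs

mean_amp, rms_amp, max_abs = windowed_attributes(trace)
print(mean_amp[:3], rms_amp[:3], max_abs[:3])
```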
AVO (amplitude versus offset) attributes are pre-stack attributes that have as the basis for their computation, the variation in amplitude of a seismic reflection with varying offset. These attributes include: AVO intercept, AVO gradient, intercept multiplied by gradient, far minus near, fluid factor, etc. [ 4 ]
The anelastic attenuation factor (or Q) is a seismic attribute that can be determined from seismic reflection data for both reservoir characterisation and advanced seismic processing .
A post-stack attribute that measures the continuity between seismic traces in a specified window along a picked horizon. It can be used to map the lateral extent of a formation. It can also be used to see faults, channels or other discontinuous features.
Although it should be used along a specified horizon, many software packages compute this attribute along arbitrary time-slices.
A post-stack attribute that computes, for each trace, the best fit plane (3D) or line (2D) between its immediate neighbor traces on a horizon and outputs the magnitude of dip (gradient) of said plane or line measured in degrees. This can be used to create a pseudo paleo geological map on a horizon slice.
A post-stack attribute that computes, for each trace, the best fit plane (3D) between its immediate neighbor traces on a horizon and outputs the direction of maximum slope (dip direction) measured in degrees, clockwise from north. This is not to be confused with the geological concept of azimuth, which is equivalent to strike and is measured 90° counterclockwise from the dip direction.
A group of post-stack attributes that are computed from the curvature of a specified horizon. These attributes include: magnitude or direction of maximum curvature, magnitude or direction of minimum curvature, magnitude of curvature along the horizon's azimuth (dip) direction, magnitude of curvature along the horizon's strike direction, magnitude of curvature of a contour line along a horizon.
These attributes involve separating and classifying seismic events within each trace based on their frequency content. The application of these attributes is commonly called spectral decomposition . The starting point of spectral decomposition is to decompose each 1D trace from the time domain into its corresponding 2D representation in the time-frequency domain by means of any method of time-frequency decomposition such as: short-time Fourier transform , continuous wavelet transform , Wigner-Ville distribution , matching pursuit , among many others. Once each trace has been transformed into the time-frequency domain, a bandpass filter can be applied to view the amplitudes of seismic data at any frequency or range of frequencies.
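The following is a hedged sketch of the short-time-Fourier-transform route just described, applied to a single synthetic trace; the sampling rate, window length and 30 Hz target frequency are assumptions made for illustration.

```python
# Sketch: spectral decomposition of one trace via a short-time Fourier
# transform, then extracting the amplitude at the bin nearest a target
# frequency.  Sampling rate, window length and the 30 Hz target are assumed.
import numpy as np
from scipy.signal import stft

fs = 500.0                                      # samples per second (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-2 * t)   # toy 30 Hz wavelet train

freqs, times, Zxx = stft(trace, fs=fs, nperseg=128)
band = np.argmin(np.abs(freqs - 30.0))          # bin nearest 30 Hz
iso_frequency_amplitude = np.abs(Zxx[band, :])  # amplitude of that band vs time
print(freqs[band], iso_frequency_amplitude.max())
```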
Technically, each individual frequency or band of frequencies could be considered an attribute. The seismic data is usually filtered at various frequency ranges in order to show certain geological patterns that may not be obvious in the other frequency bands. There is an inverse relationship between the thickness of a rock layer and the corresponding peak frequency of its seismic reflection. That is, thinner rock layers are much more apparent at higher frequencies and thicker rock layers are much more apparent at lower frequencies. This can be used to qualitatively identify thinning or thickening of a rock unit in different directions.
Spectral decomposition has also been widely used as a direct hydrocarbon indicator. | https://en.wikipedia.org/wiki/Seismic_attribute |
Seismic base isolation , also known as base isolation , [ 3 ] or base isolation system , [ 4 ] is one of the most popular means of protecting a structure against earthquake forces. [ 5 ] It is a collection of structural elements which should substantially decouple a superstructure from its substructure that is in turn resting on the shaking ground, thus protecting a building or non-building structure 's integrity. [ 6 ]
Base isolation is one of the most powerful tools of earthquake engineering pertaining to the passive structural vibration control technologies.
The isolation can be obtained by the use of various techniques like rubber bearings, friction bearings, ball bearings, spring systems and other means. It is meant to enable a building or non-building structure to survive a potentially devastating seismic impact through a proper initial design or subsequent modifications. In some cases, application of base isolation can raise both a structure's seismic performance and its seismic sustainability considerably. Contrary to popular belief, base isolation does not make a building earthquake proof.
Base isolation system consists of isolation units with or without isolation components , where:
Isolation units could consist of shear or sliding units. [ 7 ] [ unreliable source? ] [ 8 ] [ unreliable source? ]
This technology can be used for both new structural design [ 9 ] and seismic retrofit . In process of seismic retrofit , some of the most prominent U.S. monuments, e.g. Pasadena City Hall , San Francisco City Hall , Salt Lake City and County Building or LA City Hall were mounted on base isolation systems . It required creating rigidity diaphragms and moats around the buildings, as well as making provisions against overturning and P-Delta Effect .
Base isolation is also used on a smaller scale—sometimes down to a single room in a building. Isolated raised-floor systems are used to safeguard essential equipment against earthquakes. The technique has been incorporated to protect statues and other works of art—see, for instance, Rodin 's Gates of Hell at the National Museum of Western Art in Tokyo 's Ueno Park . [ 10 ]
Base isolation units consist of Linear-motion bearings , that allow the building to move, oil dampers that absorb the forces generated by the movement of the building, and laminated rubber bearings that allow the building to return to its original position when the earthquake has ended. [ 11 ]
Base isolator bearings were pioneered in New Zealand by Dr Bill Robinson during the 1970s. [ 12 ] The bearing, which consists of layers of rubber and steel with a lead core, was invented by Dr Robinson in 1974. Later, in 2018, the technology was commercialized by Kamalakannan Ganesan and subsequently made patent-free, allowing for broader access and application of this earthquake-resistant technology. [ 13 ]

The earliest uses of base isolation systems date back to 550 B.C. and the construction of the Tomb of Cyrus the Great in Pasargadae , Iran. [ 14 ] More than 90% of Iran's territory, including this historic site, is located in the Alpine-Himalaya belt, which is one of the Earth's most active seismic zones. Historians discovered that this structure, predominantly composed of limestone, was designed with two foundations. The first and lower foundation, composed of stones bonded together with a lime plaster and sand mortar known as Saroj mortar, was designed to move in the case of an earthquake. The top foundation layer, which formed a large plate that was in no way attached to the structure's base, was composed of polished stones. This second foundation was left untied so that, in the case of an earthquake, the plate-like layer would be able to slide freely over the structure's first foundation. As historians discovered thousands of years later, this system worked exactly as its designers had predicted, and as a result the Tomb of Cyrus the Great still stands today. The development of the idea of base isolation can be divided into two eras: in ancient times the isolation was achieved with multilayered cut stones (or by laying sand or gravel under the foundation), while in more recent history, besides layers of gravel or sand as an isolation interface, wooden logs between the ground and the foundation have been used. [ 15 ]
Through the George E. Brown, Jr. Network for Earthquake Engineering Simulation ( NEES ), researchers are studying the performance of base isolation systems. [ 16 ] The project, a collaboration among researchers at the University of Nevada, Reno ; the University of California, Berkeley ; the University of Wisconsin, Green Bay ; and the University at Buffalo , is conducting a strategic assessment of the economic, technical, and procedural barriers to the widespread adoption of seismic isolation in the United States.
NEES resources have been used for experimental and numerical simulation, data mining, networking and collaboration to understand the complex interrelationship among the factors controlling the overall performance of an isolated structural system.
This project involves earthquake shaking table and hybrid tests at the NEES experimental facilities at the University of California, Berkeley, and the University at Buffalo, aimed at understanding ultimate performance limits by examining how local isolation failures (e.g., bumping against stops, bearing failures, uplift) propagate to the system-level response. These tests will include a full-scale, three-dimensional test of an isolated 5-story steel building on the E-Defense shake table in Miki, Hyōgo, Japan. [ 17 ] Seismic isolation research in the middle and late 1970s was largely predicated on the observation that most strong-motion records recorded up to that time had very low spectral acceleration values in the long-period range (periods of about 2 seconds and longer).
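The spectral acceleration values referred to above are read from response spectra, which are obtained by sweeping a single-degree-of-freedom oscillator through a range of periods and recording its peak response to a ground-motion record. The sketch below is a minimal illustration of that computation, assuming a unit-mass oscillator, 5% damping, the Newmark average-acceleration integrator, and a purely synthetic record; none of these choices come from the projects described above.

```python
import numpy as np

def response_spectrum(ag, dt, periods, zeta=0.05):
    """Pseudo-acceleration response spectrum Sa(T) of a ground-acceleration
    record ag (m/s^2, sampled every dt seconds), for a unit-mass oscillator,
    using the Newmark average-acceleration method."""
    Sa = np.zeros(len(periods))
    for j, T in enumerate(periods):
        wn = 2.0 * np.pi / T
        k = wn ** 2                       # unit mass
        c = 2.0 * zeta * wn
        khat = k + 2.0 * c / dt + 4.0 / dt ** 2
        u = v = 0.0
        a = -ag[0] - c * v - k * u        # initial relative acceleration
        umax = 0.0
        for agi in ag[1:]:
            p = -agi                      # effective force per unit mass
            phat = p + (4.0 * u / dt ** 2 + 4.0 * v / dt + a) + c * (2.0 * u / dt + v)
            unew = phat / khat
            vnew = 2.0 * (unew - u) / dt - v
            a = 4.0 * (unew - u) / dt ** 2 - 4.0 * v / dt - a
            u, v = unew, vnew
            umax = max(umax, abs(u))
        Sa[j] = wn ** 2 * umax            # pseudo-spectral acceleration
    return Sa

# Illustrative use with a synthetic record (a decaying 2 Hz burst, not real data).
dt = 0.01
t = np.arange(0.0, 20.0, dt)
ag = 2.0 * np.exp(-0.2 * t) * np.sin(2.0 * np.pi * 2.0 * t)      # m/s^2
periods = np.linspace(0.1, 4.0, 40)
Sa = response_spectrum(ag, dt, periods)
print(f"Sa near T = 0.5 s: {np.interp(0.5, periods, Sa):.2f} m/s^2")
print(f"Sa near T = 3.0 s: {np.interp(3.0, periods, Sa):.2f} m/s^2")
```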
Records obtained from lakebed sites in the 1985 Mexico City earthquake raised concerns of the possibility of resonance, but such examples were considered exceptional and predictable.
One of the early examples of an earthquake design strategy of this kind was proposed by Dr. J.A. Calantariens in 1909. He suggested that a building could be built on a layer of fine sand, mica or talc that would allow it to slide in an earthquake, thereby reducing the forces transmitted to the building.
A detailed literature review of semi-active control systems by Michael D. Symans et al. (1999) provides references to both theoretical and experimental research but concentrates on describing the results of experimental work. Specifically, the review focuses on the dynamic behavior and distinguishing features of various systems that have been experimentally tested, both at the component level and within small-scale structural models.
An adaptive base isolation system includes a tunable isolator that can adjust its properties based on the input to minimize the transferred vibration. Magnetorheological fluid dampers [ 18 ] and isolators with Magnetorheological elastomer [ 19 ] have been suggested as adaptive base isolators. | https://en.wikipedia.org/wiki/Seismic_base_isolation |
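One simple control law that is often discussed for semi-active devices of this kind is the on-off "skyhook" rule, which commands high damping only when the damper force would oppose the absolute motion of the isolated mass. The sketch below is a generic illustration of that rule, not the specific algorithm of the cited references, and all numbers are invented.

```python
def skyhook_damping(v_abs, v_rel, c_min, c_max):
    """On-off 'skyhook' rule often used as a simple semi-active control law:
    command high damping only when the damper force (which always opposes the
    relative velocity) also opposes the absolute velocity of the isolated mass;
    otherwise back off to the minimum achievable damping.
    v_abs : absolute velocity of the isolated structure (m/s)
    v_rel : velocity of the structure relative to the ground (m/s)
    """
    return c_max if v_abs * v_rel > 0.0 else c_min

# Example instant: the product is negative, so the controller softens the isolator.
print(skyhook_damping(v_abs=-0.05, v_rel=0.12, c_min=5e4, c_max=5e5))
```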
A seismic metamaterial is a metamaterial designed to counteract the adverse effects of seismic waves on artificial structures on or near the surface of the Earth. [ 1 ] [ 2 ] [ 3 ] Current designs of seismic metamaterials utilize configurations of boreholes, [ 4 ] trees [ 5 ] [ 6 ] or proposed underground resonators to act as a large-scale material. Experiments have observed both reflections and bandgap attenuation of artificially induced seismic waves. These were the first experiments to verify that seismic metamaterials have measurable effects at frequencies below 100 Hz, where damage from Rayleigh waves is most harmful to artificial structures.
More than a million earthquakes are recorded each year by a worldwide system of earthquake detection stations. The propagation velocity of seismic waves depends on the density and elasticity of the earth materials; in other words, the speed of seismic waves varies as they travel through different materials in the Earth . The two main components of a seismic event are body waves and surface waves , each with its own modes of wave propagation. [ 7 ]
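The dependence of propagation velocity on density and elasticity can be made concrete with the standard body-wave formulas. The sketch below uses rough textbook values for crustal rock; they are illustrative only.

```python
import math

def body_wave_speeds(K, mu, rho):
    """P- and S-wave speeds from bulk modulus K (Pa), shear modulus mu (Pa)
    and density rho (kg/m^3): Vp = sqrt((K + 4*mu/3)/rho), Vs = sqrt(mu/rho)."""
    vp = math.sqrt((K + 4.0 * mu / 3.0) / rho)
    vs = math.sqrt(mu / rho)
    return vp, vs

# Rough textbook values for granitic crustal rock (illustrative only).
vp, vs = body_wave_speeds(K=50e9, mu=30e9, rho=2700.0)
print(f"Vp ~ {vp / 1000:.1f} km/s, Vs ~ {vs / 1000:.1f} km/s")
```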
Computations showed that seismic waves traveling toward a building could be directed around it, leaving the building unscathed, by using seismic metamaterials . The very long wavelengths of earthquake waves would be shortened as they interact with the metamaterials; the waves would pass around the building and continue in phase on the far side, as if the building were not there. The mathematical models produce the regular pattern provided by metamaterial cloaking . This method was first understood with electromagnetic cloaking metamaterials, in which electromagnetic energy is in effect directed around an object or hole; protecting buildings from seismic waves employs the same principle. [ 1 ] [ 2 ]
Giant polymer split-ring resonators combined with other metamaterials are designed to couple at the seismic wavelength . Concentric layers of this material would be stacked, each layer separated by an elastic medium. The design that worked consists of ten layers of six different materials, which can be easily deployed in building foundations. As of 2009, the project was still in the design stage. [ 1 ] [ 2 ]
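The way buried resonators can behave as a bulk material with a stop band is often introduced with the textbook one-dimensional "mass-in-mass" lattice, whose dispersion relation opens a bandgap near the internal resonance. The sketch below uses that generic toy model, not the polymer split-ring design described above, and all parameter values are invented.

```python
import numpy as np

# Toy one-dimensional "mass-in-mass" metamaterial chain (all values invented):
# outer masses m1 coupled by springs k, each carrying an internal resonator (m2, k2).
m1, m2 = 100.0, 50.0          # kg
k, k2 = 4.0e4, 2.0e4          # N/m
w_res = np.sqrt(k2 / m2)      # internal resonance, rad/s

def propagates(w):
    """True if a wave of angular frequency w can travel along the chain.
    Effective-mass form of the dispersion relation:
        2*k*(1 - cos(q*a)) = w**2 * m_eff(w),
        m_eff(w) = m1 + m2*w_res**2 / (w_res**2 - w**2)."""
    m_eff = m1 + m2 * w_res ** 2 / (w_res ** 2 - w ** 2)
    cos_qa = 1.0 - w ** 2 * m_eff / (2.0 * k)
    return -1.0 <= cos_qa <= 1.0

# Scan upward in frequency and report the first stop band
# (the one created by the internal resonance).
gap = []
for f in np.linspace(0.1, 8.0, 800):
    if not propagates(2.0 * np.pi * f):
        gap.append(f)
    elif gap:
        break
print(f"Internal resonance: {w_res / (2.0 * np.pi):.2f} Hz")
print(f"First stop band:    about {gap[0]:.2f} - {gap[-1]:.2f} Hz")
```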
For seismic metamaterials to protect surface structures, the proposal includes a layered structure of metamaterials, separated by elastic plates in a cylindrical configuration. A prior simulation showed that it is possible to create concealment from electromagnetic radiation with concentric, alternating layers of electromagnetic metamaterials. That study is in contrast to concealment by inclusions in a split ring resonator designed as an anisotropic metamaterial. [ 8 ]
The configuration can be viewed as alternating layers of homogeneous isotropic dielectric material A and homogeneous isotropic dielectric material B. Each dielectric layer is much thinner than the radiated wavelength; as a whole, such a structure behaves as an anisotropic medium. The layered dielectric materials surround an infinite conducting cylinder: the cylinder is encased in the first layer, and each subsequent layer alternates materials and surrounds the previous one, extending outward concentrically. Electromagnetic wave scattering was calculated and simulated for the layered (metamaterial) structure and for the split-ring resonator anisotropic metamaterial, to show the effectiveness of the layered metamaterial. [ 8 ]
The theory and ultimate development for the seismic metamaterial is based on coordinate transformations achieved when concealing a small cylindrical object with electromagnetic waves . This was followed by an analysis of acoustic cloaking, and whether or not coordinate transformations could be applied to artificially fabricated acoustic materials . [ 3 ]
Applying the concepts used to understand electromagnetic materials to material properties in other systems shows them to be closely analogous. Wave vector , wave impedance , and direction of power flow are universal. By understanding how permittivity and permeability control these components of wave propagation , applicable analogies can be used for other material interactions. [ 9 ]
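As a small illustration of how wave speed and impedance play the same role in both settings, the sketch below computes them side by side from the governing material parameters, using standard handbook values for vacuum and water.

```python
import math

def em_speed_impedance(eps, mu):
    """Plane-wave speed c = 1/sqrt(mu*eps) and wave impedance Z = sqrt(mu/eps)."""
    return 1.0 / math.sqrt(mu * eps), math.sqrt(mu / eps)

def acoustic_speed_impedance(K, rho):
    """Acoustic speed c = sqrt(K/rho) and characteristic impedance Z = rho*c
    for a fluid of bulk modulus K and density rho."""
    c = math.sqrt(K / rho)
    return c, rho * c

# Vacuum and water as familiar reference media (standard handbook values).
print(em_speed_impedance(eps=8.854e-12, mu=4e-7 * math.pi))   # ~3.0e8 m/s, ~377 ohm
print(acoustic_speed_impedance(K=2.2e9, rho=1000.0))          # ~1483 m/s, ~1.48e6 rayl
```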
In most instances, applying coordinate transformation to engineered artificial elastic media is not possible. However, there is at least one special case where there is a direct equivalence between electromagnetics and elastodynamics . Furthermore, this case appears practically useful. In two dimensions, isotropic acoustic media and isotropic electromagnetic media are exactly equivalent. Under these conditions, the isotropic characteristic works in anisotropic media as well. [ 9 ]
It has been demonstrated mathematically that the 2D Maxwell equations with normal incidence apply to 2D acoustic equations when replacing the electromagnetic parameters with the following acoustic parameters: pressure , vector fluid velocity , fluid mass density and the fluid bulk modulus . The compressional wave solutions used in the electromagnetic cloaking are transferred to material fluidic solutions where fluid motion is parallel to the wavevector. The computations then show that coordinate transformations can be applied to acoustic media when restricted to normal incidence in two dimensions. [ 9 ]
Next, the electromagnetic cloaking shell is used as an exact equivalent for a simulated demonstration of the acoustic cloaking shell. Bulk modulus and mass density determine the spatial dimensions of the cloak, which can bend any incident wave around the center of the shell. In a simulation with perfect conditions, chosen because it is easier to demonstrate the principles involved, there is zero scattering in any direction. [ 9 ]
However, it can be demonstrated through computation and visual simulation that the waves are in fact directed around the location of the building. This capability is shown to hold regardless of the radiated frequency . The cloak itself exhibits no forward or back scattering ; hence, the seismic cloak behaves as an effective medium. [ 3 ]
In 2012, researchers conducted an experimental field test near Grenoble, France, with the aim of highlighting the analogy with phononic crystals. [ 4 ]
At the geophysics scale, in a forest in the Landes region of France in 2016, an ambitious seismic experiment called the METAFORET experiment [ 6 ] demonstrated that trees arranged at a subwavelength scale could significantly modify the surface wavefield through their coupled resonances. A follow-up field experiment, the META-WT experiment, was performed in the Nauen wind farm. [ 10 ] It demonstrated for the first time that, at the city scale, the collective resonance of wind turbine structures can modify seismic waves propagating through the array. These new observations have implications for seismic hazard in cities, where dense urban structures such as tall buildings can strongly modify the wavefield. | https://en.wikipedia.org/wiki/Seismic_metamaterial |
Seismic retrofitting is the modification of existing structures to make them more resistant to seismic activity , ground motion , or soil failure due to earthquakes . With better understanding of seismic demand on structures and with recent experiences of large earthquakes near urban centers, the need for seismic retrofitting is well acknowledged. Prior to the introduction of modern seismic codes in the late 1960s for developed countries (US, Japan etc.) and the late 1970s for many other parts of the world (Turkey, China etc.), [ 1 ] many structures were designed without adequate detailing and reinforcement for seismic protection. In view of this problem, extensive research has been carried out. State-of-the-art technical guidelines for seismic assessment, retrofit and rehabilitation have been published around the world – such as ASCE-SEI 41 [ 2 ] and the New Zealand Society for Earthquake Engineering (NZSEE)'s guidelines. [ 3 ] These codes must be regularly updated; the 1994 Northridge earthquake brought to light the brittleness of welded steel frames, for example. [ 4 ]
The retrofit techniques outlined here are also applicable to other natural hazards such as tropical cyclones , tornadoes , and severe winds from thunderstorms . While current practice of seismic retrofitting is predominantly concerned with structural improvements to reduce the seismic hazard posed by the structures themselves, it is similarly essential to reduce the hazards and losses from non-structural elements. It is also important to keep in mind that there is no such thing as an earthquake-proof structure, although seismic performance can be greatly enhanced through proper initial design or subsequent modifications.
Seismic retrofit (or rehabilitation) strategies have been developed in the past few decades following the introduction of new seismic provisions and the availability of advanced materials (e.g. fiber-reinforced polymers (FRP) , fiber reinforced concrete and high strength steel). [ 5 ]
Recently more holistic approaches to building retrofitting are being explored, including combined seismic and energy retrofitting. Such combined strategies aim to exploit cost savings by applying energy retrofitting and seismic strengthening interventions at once, hence improving the seismic and thermal performance of buildings. [ 8 ] [ 9 ] [ 10 ]
In the past, seismic retrofit was primarily applied to achieve public safety, with engineering solutions limited by economic and political considerations. However, with the development of Performance-based earthquake engineering (PBEE), several levels of performance objectives are gradually recognised:
Common seismic retrofitting techniques fall into several categories:
The use of external post-tensioning for new structural systems has been developed in the past decade. Under the PRESS (Precast Seismic Structural Systems) program, [ 11 ] a large-scale U.S./Japan joint research program, unbonded post-tensioned high-strength steel tendons have been used to achieve a moment-resisting system that has self-centering capacity.
An extension of the same idea for seismic retrofitting has been experimentally tested for the seismic retrofit of California bridges under a Caltrans research project [ 12 ] and for the seismic retrofit of non-ductile reinforced concrete frames. [ 13 ] Pre-stressing can increase the capacity of structural elements such as beams, columns and beam-column joints. External pre-stressing has been used for structural upgrades for gravity/live loading since the 1970s. [ 14 ]
Base isolation is a collection of structural elements of a building that substantially decouple the building's structure from the shaking ground, thus protecting the building's integrity and enhancing its seismic performance . This earthquake engineering technology, which is a kind of seismic vibration control , can be applied both to newly designed buildings and to the seismic upgrading of existing structures. [ 15 ] [ 16 ] Normally, excavations are made around the building and the building is separated from the foundations. Steel or reinforced concrete beams replace the connections to the foundations, while under these, the isolating pads, or base isolators, replace the material removed. While base isolation tends to restrict the transmission of ground motion to the building, it also keeps the building positioned properly over the foundation. Careful attention to detail is required where the building interfaces with the ground, especially at entrances, stairways and ramps, to ensure that sufficient relative motion of those structural elements can be accommodated.
Supplementary dampers absorb the energy of motion and convert it to heat, thus damping resonant effects in structures that are rigidly attached to the ground. In addition to adding energy dissipation capacity to the structure, supplementary damping can reduce the displacement and acceleration demand within the structures. [ 17 ] In some cases, the threat of damage does not come from the initial shock itself, but rather from the periodic resonant motion of the structure that repeated ground motion induces. In the practical sense, supplementary dampers act similarly to Shock absorbers used in automotive suspensions .
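The benefit of added damping near resonance can be estimated from the steady-state amplification of a single-degree-of-freedom oscillator under harmonic forcing, which peaks at roughly 1/(2ζ) for a damping ratio ζ. The damping ratios in the sketch below are illustrative, not values from any particular retrofit.

```python
import math

def amplification(r, zeta):
    """Steady-state dynamic amplification (ratio of dynamic to static displacement)
    of a damped SDOF oscillator under harmonic forcing,
    r = forcing frequency / natural frequency."""
    return 1.0 / math.sqrt((1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2)

# Resonant (r = 1) response of a lightly damped frame versus the same frame
# with supplementary dampers (damping ratios are illustrative).
for zeta in (0.02, 0.05, 0.20):
    print(f"zeta = {zeta:.2f}: amplification at resonance ~ {amplification(1.0, zeta):.1f}x")
```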
Tuned mass dampers (TMD) employ movable weights on some sort of springs. These are typically employed to reduce wind sway in very tall, light buildings. Similar designs may be employed to impart earthquake resistance in eight to ten story buildings that are prone to destructive earthquake induced resonances. [ 18 ]
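A common starting point for sizing such a damper is Den Hartog's classical tuning for an idealized undamped main structure. The sketch below evaluates those formulas for a hypothetical mass ratio; real designs also account for structural damping and code requirements.

```python
import math

def den_hartog_tmd(mass_ratio):
    """Classical Den Hartog tuning for a tuned mass damper on an (idealized)
    undamped main structure: returns (frequency ratio, absorber damping ratio)."""
    mu = mass_ratio
    f_opt = 1.0 / (1.0 + mu)
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, zeta_opt

# Example: a TMD weighing 2% of the modal mass of the structure (illustrative).
f_opt, zeta_opt = den_hartog_tmd(0.02)
print(f"Tune the absorber to {f_opt:.3f} x the structure's natural frequency,")
print(f"with an absorber damping ratio of about {zeta_opt:.3f}.")
```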
A slosh tank is a large container of low viscosity fluid (usually water) that may be placed at locations in a structure where lateral swaying motions are significant, such as the roof, and tuned to counter the local resonant dynamic motion. During a seismic (or wind) event the fluid in the tank will slosh back and forth with the fluid motion usually directed and controlled by internal baffles – partitions that prevent the tank itself becoming resonant with the structure, see Slosh dynamics . The net dynamic response of the overall structure is reduced due to both the counteracting movement of mass, as well as energy dissipation or vibration damping which occurs when the fluid's kinetic energy is converted to heat by the baffles. Generally the temperature rise in the system will be minimal and is passively cooled by the surrounding air. One Rincon Hill in San Francisco is a skyscraper with a rooftop slosh tank which was designed primarily to reduce the magnitude of lateral swaying motion from wind. A slosh tank is a passive tuned mass damper . In order to be effective the mass of the liquid is usually on the order of 1% to 5% of the mass it is counteracting, and often this requires a significant volume of liquid. In some cases these systems are designed to double as emergency water cisterns for fire suppression.
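Tuning a slosh (tuned liquid) damper amounts to matching the tank's first sloshing frequency to the structure's sway frequency. The sketch below uses the linear-wave-theory estimate for a rectangular tank; the tank dimensions and the 0.25 Hz target are invented for illustration and do not describe One Rincon Hill.

```python
import math

def sloshing_frequency(L, h, g=9.81):
    """First lateral sloshing frequency (Hz) of liquid in a rectangular tank of
    length L (m, in the direction of motion) and liquid depth h (m), from
    linear wave theory: omega^2 = (pi*g/L) * tanh(pi*h/L)."""
    omega = math.sqrt(math.pi * g / L * math.tanh(math.pi * h / L))
    return omega / (2.0 * math.pi)

# Illustrative check of candidate tank lengths against a 0.25 Hz target
# (i.e. a building with a ~4 s sway period); L ~ 6 m with 1 m of water comes close.
for L in (4.0, 6.0, 8.0, 10.0):
    print(f"L = {L:4.1f} m, h = 1.0 m -> {sloshing_frequency(L, 1.0):.3f} Hz")
```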
Very tall buildings (" skyscrapers "), when built using modern lightweight materials, might sway uncomfortably (but not dangerously) in certain wind conditions. A solution to this problem is to include at some upper story a large mass, constrained, but free to move within a limited range, and moving on some sort of bearing system such as an air cushion or hydraulic film. Hydraulic pistons , powered by electric pumps and accumulators, are actively driven to counter the wind forces and natural resonances. These may also, if properly designed, be effective in controlling excessive motion – with or without applied power – in an earthquake. In general, though, modern steel frame high rise buildings are not as subject to dangerous motion as are medium rise (eight to ten story ) buildings, as the resonant period of a tall and massive building is longer than the approximately one second shocks applied by an earthquake.
The most common form of seismic retrofit to lower buildings is adding strength to the existing structure to resist seismic forces. The strengthening may be limited to connections between existing building elements or it may involve adding primary resisting elements such as walls or frames, particularly in the lower stories. Common retrofit measures for unreinforced masonry buildings in the Western United States include the addition of steel frames, the addition of reinforced concrete walls, and in some cases, the addition of base isolation.
Frequently, building additions will not be strongly connected to the existing structure, but simply placed adjacent to it, with only minor continuity in flooring, siding, and roofing. As a result, the addition may have a different resonant period than the original structure, and they may easily detach from one another. The relative motion will then cause the two parts to collide, causing severe structural damage. Seismic modification will either tie the two building components rigidly together so that they behave as a single mass or it will employ dampers to expend the energy from relative motion, with appropriate allowance for this motion, such as increased spacing and sliding bridges between sections.
Historic buildings, made of unreinforced masonry, may have culturally important interior detailing or murals that should not be disturbed. In this case, the solution may be to add a number of steel, reinforced concrete, or poststressed concrete columns to the exterior. Careful attention must be paid to the connections with other members such as footings, top plates, and roof trusses.
Shown here is an exterior shear reinforcement of a conventional reinforced concrete dormitory building. In this case, there was sufficient vertical strength in the building columns and sufficient shear strength in the lower stories that only limited shear reinforcement was required to make it earthquake resistant for this location near the Hayward fault .
In other circumstances, far greater reinforcement is required. In the structure shown at right – a parking garage over shops – the placement, detailing, and painting of the reinforcement becomes itself an architectural embellishment.
This collapse mode is known as soft story collapse, and it arises because in many buildings the ground level is designed for different uses than the upper levels. Low-rise residential structures may be built over a parking garage with large doors on one side. Hotels may have a tall ground floor to allow for a grand entrance or ballrooms. Office buildings may have retail stores on the ground floor with continuous display windows .
Traditional seismic design assumes that the lower stories of a building are stronger than the upper stories; where this is not the case—if the lower story is less strong than the upper structure—the structure will not respond to earthquakes in the expected fashion. Using modern design methods, it is possible to take a weak lower story into account. Several failures of this type in one large apartment complex caused most of the fatalities in the 1994 Northridge earthquake .
Typically, where this type of problem is found, the weak story is reinforced to make it stronger than the floors above by adding shear walls or moment frames. Moment frames consisting of inverted U bents are useful in preserving lower story garage access, while a lower cost solution may be to use shear walls or trusses in several locations, which partially reduce the usefulness for automobile parking but still allow the space to be used for other storage.
Beam-column joint connections are a common structural weakness in seismic retrofitting. Prior to the introduction of modern seismic codes in the early 1970s, beam-column joints were typically not engineered or detailed for seismic loading. Laboratory tests have confirmed the seismic vulnerability of these poorly detailed and under-designed connections. [ 19 ] [ 20 ] [ 21 ] [ 22 ] Failure of beam-column joint connections can lead to the catastrophic collapse of a frame building, as often observed in recent earthquakes. [ 23 ] [ 24 ]
For reinforced concrete beam-column joints, various retrofit solutions have been proposed and tested in the past 20 years. Philosophically, the various seismic retrofit strategies discussed above can be implemented for reinforced concrete joints. Concrete or steel jacketing was a popular retrofit technique until the advent of composite materials such as carbon fiber-reinforced polymer (FRP). Composite materials such as carbon FRP and aramid FRP have been extensively tested for use in seismic retrofit with some success. [ 25 ] [ 26 ] [ 27 ] One novel technique includes the use of selective weakening of the beam and added external post-tensioning to the joint [ 28 ] in order to achieve flexural hinging in the beam, which is more desirable in terms of seismic design.
Widespread weld failures at beam-column joints of low-to-medium-rise steel buildings during the 1994 Northridge earthquake, for example, revealed the structural deficiencies of these 'modern-designed' post-1970s welded moment-resisting connections. [ 29 ] A subsequent SAC research project [4] has documented, tested and proposed several retrofit solutions for these welded steel moment-resisting connections. Various retrofit solutions have been developed for these welded joints – such as (a) weld strengthening and (b) the addition of a steel haunch or a 'dog-bone' shaped flange. [ 30 ]
Following the Northridge earthquake, a number of steel moment-frame buildings were found to have experienced brittle fractures of beam-to-column connections. Discovery of these unanticipated brittle fractures of framing connections was alarming to engineers and the building industry. Starting in the 1960s, engineers began to regard welded steel moment-frame buildings as being among the most ductile systems contained in the building code. Many engineers believed that steel moment-frame buildings were essentially invulnerable to earthquake-induced damage and thought that should damage occur, it would be limited to ductile yielding of members and connections. Observation of damage sustained by buildings in the 1994 Northridge earthquake indicated that contrary to the intended behavior, in many cases, brittle fractures initiated within the connections at very low levels of plastic demand. In September 1994, the SAC Joint Venture, AISC, AISI, and NIST jointly convened an international workshop in Los Angeles to coordinate the efforts of various participants and to lay the foundation for systematic investigation and resolution of the problem. In September 1995 the SAC Joint Venture entered into a contractual agreement with FEMA to conduct Phase II of the SAC Steel project. Under Phase II, SAC continued its extensive problem-focused study of the performance of moment-resisting steel frames and connections of various configurations, with the ultimate goal of developing seismic design criteria for steel construction. As a result of these studies it is now known that the typical moment-resisting connection detail employed in steel moment frame construction prior to the 1994 Northridge earthquake had a number of features that rendered it inherently susceptible to brittle fracture. [ 31 ]
Floors in wooden buildings are usually constructed upon relatively deep spans of wood, called joists , covered with a diagonal wood planking or plywood to form a subfloor upon which the finish floor surface is laid. In many structures these are all aligned in the same direction. To prevent the beams from tipping over onto their side, blocking is used at each end, and for additional stiffness, blocking or diagonal wood or metal bracing may be placed between beams at one or more points in their spans. At the outer edge it is typical to use a single depth of blocking and a perimeter beam overall.
If the blocking or nailing is inadequate, each beam can be laid flat by the shear forces applied to the building. In this position they lack most of their original strength and the structure may further collapse. As part of a retrofit the blocking may be doubled, especially at the outer edges of the building. It may be appropriate to add additional nails between the sill plate of the perimeter wall erected upon the floor diaphragm, although this will require exposing the sill plate by removing interior plaster or exterior siding. As the sill plate may be quite old and dry and substantial nails must be used, it may be necessary to pre-drill a hole for the nail in the old wood to avoid splitting. When the wall is opened for this purpose it may also be appropriate to tie vertical wall elements into the foundation using specialty connectors and bolts glued with epoxy cement into holes drilled in the foundation.
Single or two-story wood-frame domestic structures built on a perimeter or slab foundation are relatively safe in an earthquake, but in many structures built before 1950 the sill plate that sits between the concrete foundation and the floor diaphragm (perimeter foundation) or studwall (slab foundation) may not be sufficiently bolted in. Additionally, older attachments (without substantial corrosion-proofing) may have corroded to a point of weakness. A sideways shock can slide the building entirely off of the foundations or slab.
Often such buildings, especially if constructed on a moderate slope, are erected on a platform connected to a perimeter foundation through low stud-walls called "cripple wall" or pin-up . This low wall structure itself may fail in shear or in its connections to itself at the corners, leading to the building moving diagonally and collapsing the low walls. The likelihood of failure of the pin-up can be reduced by ensuring that the corners are well reinforced in shear and that the shear panels are well connected to each other through the corner posts. This requires structural grade sheet plywood, often treated for rot resistance. This grade of plywood is made without interior unfilled knots and with more, thinner layers than common plywood. New buildings designed to resist earthquakes will typically use OSB ( oriented strand board ), sometimes with metal joins between panels, and with well attached stucco covering to enhance its performance. In many modern tract homes, especially those built upon expansive (clay) soil the building is constructed upon a single and relatively thick monolithic slab, kept in one piece by high tensile rods that are stressed after the slab has set. This poststressing places the concrete under compression – a condition under which it is extremely strong in bending and so will not crack under adverse soil conditions.
Some older low-cost structures are elevated on tapered concrete pylons set into shallow pits, a method frequently used to attach outdoor decks to existing buildings. This is seen in conditions of damp soil, especially in tropical conditions, as it leaves a dry ventilated space under the house, and in far northern conditions of permafrost (frozen mud) as it keeps the building's warmth from destabilizing the ground beneath. During an earthquake, the pylons may tip, spilling the building to the ground. This can be overcome by using deep-bored holes to contain cast-in-place reinforced pylons, which are then secured to the floor panel at the corners of the building. Another technique is to add sufficient diagonal bracing or sections of concrete shear wall between pylons.
Reinforced concrete columns typically contain large diameter vertical rebar (reinforcing bars) arranged in a ring, surrounded by lighter-gauge hoops of rebar. Upon analysis of failures due to earthquakes, it has been realized that the weakness was not in the vertical bars, but rather in inadequate strength and quantity of hoops. Once the integrity of the hoops is breached, the vertical rebar can flex outward, stressing the central column of concrete. The concrete then simply crumbles into small pieces, now unconstrained by the surrounding rebar. In new construction a greater amount of hoop-like structures are used.
One simple retrofit is to surround the column with a jacket of steel plates formed and welded into a single cylinder. The space between the jacket and the column is then filled with concrete, a process called grouting. Where soil or structure conditions require such additional modification, additional pilings may be driven near the column base and concrete pads linking the pilings to the pylon are fabricated at or below ground level. In the example shown not all columns needed to be modified to gain sufficient seismic resistance for the conditions expected. (This location is about a mile from the Hayward Fault Zone .)
Concrete walls are often used at the transition between elevated road fill and overpass structures. The wall is used both to retain the soil and so enable the use of a shorter span and also to transfer the weight of the span directly downward to footings in undisturbed soil. If these walls are inadequate they may crumble under the stress of an earthquake's induced ground motion.
One form of retrofit is to drill numerous holes into the surface of the wall, and secure short L -shaped sections of rebar to the surface of each hole with epoxy adhesive . Additional vertical and horizontal rebar is then secured to the new elements, a form is erected, and an additional layer of concrete is poured. This modification may be combined with additional footings in excavated trenches and additional support ledgers and tie-backs to retain the span on the bounding walls.
In masonry structures, brick building structures have been reinforced with coatings of glass fiber and appropriate resin (epoxy or polyester). In lower floors these may be applied over entire exposed surfaces, while in upper floors this may be confined to narrow areas around window and door openings. This application provides tensile strength that stiffens the wall against bending away from the side with the application. The efficient protection of an entire building requires extensive analysis and engineering to determine the appropriate locations to be treated.
In reinforced concrete buildings, masonry infill walls are considered non-structural elements, but damage to infills can lead to large repair costs and change the behaviour of a structure, even leading to aforementioned soft-storey or beam-column joint shear failures. Local failure of the infill panels due to in and out-of-plane mechanisms, but also due to their combination, can lead to a sudden drop in capacity and hence cause global brittle failure of the structure. Even at lower intensity earthquakes, damage to infilled frames can lead to high economic losses and loss of life. [ 32 ]
To prevent masonry infill damage and failure, typical retrofit strategies aim to strengthen the infills and provide adequate connection to the frame. Examples of retrofit techniques for masonry infills include steel reinforced plasters, [ 33 ] [ 34 ] engineered cementitious composites , [ 35 ] [ 36 ] thin layers fibre-reinforced polymers (FRP), [ 37 ] [ 38 ] and most recently also textile-reinforced mortars (TRM). [ 39 ] [ 40 ]
Where moist or poorly consolidated alluvial soil interfaces in a "beach like" structure against underlying firm material, seismic waves traveling through the alluvium can be amplified, just as are water waves against a sloping beach . In these special conditions, vertical accelerations up to twice the force of gravity have been measured. If a building is not secured to a well-embedded foundation it is possible for the building to be thrust from (or with) its foundations into the air, usually with severe damage upon landing. Even if it is well-founded, higher portions such as upper stories or roof structures or attached structures such as canopies and porches may become detached from the primary structure.
Good practices in modern, earthquake-resistant structures dictate that there be good vertical connections throughout every component of the building, from undisturbed or engineered earth to foundation to sill plate to vertical studs to plate cap through each floor and continuing to the roof structure. Above the foundation and sill plate the connections are typically made using steel strap or sheet stampings, nailed to wood members using special hardened high-shear strength nails, and heavy angle stampings secured with through bolts, using large washers to prevent pull-through. Where inadequate bolts are provided between the sill plates and a foundation in existing construction (or are not trusted due to possible corrosion), special clamp plates may be added, each of which is secured to the foundation using expansion bolts inserted into holes drilled in an exposed face of concrete. Other members must then be secured to the sill plates with additional fittings.
One of the most difficult retrofits is that required to prevent damage due to soil failure. Soil failure can occur on a slope, a slope failure or landslide , or in a flat area due to liquefaction of water-saturated sand and/or mud. Generally, deep pilings must be driven into stable soil (typically hard mud or sand) or to underlying bedrock or the slope must be stabilized. For buildings built atop previous landslides the practicality of retrofit may be limited by economic factors, as it is not practical to stabilize a large, deep landslide. The likelihood of landslide or soil failure may also depend upon seasonal factors, as the soil may be more stable at the beginning of a wet season than at the beginning of the dry season. Such a "two season" Mediterranean climate is seen throughout California .
In some cases, the best that can be done is to reduce the entrance of water runoff from higher, stable elevations by capturing and bypassing through channels or pipes, and to drain water infiltrated directly and from subsurface springs by inserting horizontal perforated tubes. There are numerous locations in California where extensive developments have been built atop archaic landslides, which have not moved in historic times but which (if both water-saturated and shaken by an earthquake) have a high probability of moving en masse , carrying entire sections of suburban development to new locations. While the most modern of house structures (well tied to monolithic concrete foundation slabs reinforced with post tensioning cables) may survive such movement largely intact, the building will no longer be in its proper location.
Natural gas and propane supply pipes to structures often prove especially dangerous during and after earthquakes. Should a building move from its foundation or fall due to cripple wall collapse, the ductile iron pipes transporting the gas within the structure may be broken, typically at the location of threaded joints. The gas may then still be provided to the pressure regulator from higher pressure lines and so continue to flow in substantial quantities; it may then be ignited by a nearby source such as a lit pilot light or arcing electrical connection.
There are two primary methods of automatically restraining the flow of gas after an earthquake, installed on the low pressure side of the regulator, and usually downstream of the gas meter.
It appears that the most secure configuration would be to use one of each of these devices in series.
Unless the tunnel penetrates a fault likely to slip, the greatest danger to tunnels is a landslide blocking an entrance. Additional protection around the entrance may be applied to divert any falling material (similar as is done to divert snow avalanches ) or the slope above the tunnel may be stabilized in some way. Where only small- to medium-sized rocks and boulders are expected to fall, the entire slope may be covered with wire mesh, pinned down to the slope with metal rods. This is also a common modification to highway cuts where appropriate conditions exist.
The safety of underwater tubes is highly dependent upon the soil conditions through which the tunnel was constructed, the materials and reinforcements used, the maximum predicted earthquake, and other factors, some of which may remain unknown under current knowledge.
A tube of particular structural, seismic, economic, and political interest is the BART (Bay Area Rapid Transit) transbay tube . This tube was constructed at the bottom of San Francisco Bay through an innovative process. Rather than pushing a shield through the soft bay mud, the tube was constructed on land in sections. Each section consisted of two inner train tunnels of circular cross section, a central access tunnel of rectangular cross section, and an outer oval shell encompassing the three inner tubes. The intervening space was filled with concrete. At the bottom of the bay a trench was excavated and a flat bed of crushed stone prepared to receive the tube sections. The sections were then floated into place and sunk, then joined with bolted connections to previously placed sections. An overfill was then placed atop the tube to hold it down. Once completed from San Francisco to Oakland, the tracks and electrical components were installed. The predicted response of the tube during a major earthquake was likened to that of a string of (cooked) spaghetti in a bowl of gelatin dessert . To avoid overstressing the tube due to differential movements at each end, a sliding slip joint was included at the San Francisco terminus under the landmark Ferry Building .
The engineers of the construction consortium PBTB (Parsons Brinckerhoff-Tudor-Bechtel) used the best estimates of ground motion available at the time, now known to be insufficient given modern computational analysis methods and geotechnical knowledge. Unexpected settlement of the tube has reduced the amount of slip that can be accommodated without failure. These factors have resulted in the slip joint being designed too short to ensure survival of the tube under possible (perhaps even likely) large earthquakes in the region. To correct this deficiency the slip joint must be extended to allow for additional movement, a modification expected to be both expensive and technically and logistically difficult. Other retrofits to the BART tube include vibratory consolidation of the tube's overfill to avoid potential liquefying of the overfill, which has now been completed. (Should the overfill fail there is a danger of portions of the tube rising from the bottom, an event which could potentially cause failure of the section connections.)
Bridges have several failure modes.
Many short bridge spans are statically anchored at one end and attached to rockers at the other. This rocker gives vertical and transverse support while allowing the bridge span to expand and contract with temperature changes. The change in the length of the span is accommodated over a gap in the roadway by comb-like expansion joints . During severe ground motion, the rockers may jump from their tracks or be moved beyond their design limits, causing the bridge to unship from its resting point and then either become misaligned or fail completely. Motion can be constrained by adding ductile or high-strength steel restraints that are friction-clamped to beams and designed to slide under extreme stress while still limiting the motion relative to the anchorage.
Suspension bridges may respond to earthquakes with a side-to-side motion exceeding that which was designed for wind gust response. Such motion can cause fragmentation of the road surface, damage to bearings, and plastic deformation or breakage of components. Devices such as hydraulic dampers or clamped sliding connections and additional diagonal reinforcement may be added.
Lattice girders consist of two "I"-beams connected with a criss-cross lattice of flat strap or angle stock. These can be greatly strengthened by replacing the open lattice with plate members. This is usually done in concert with the replacement of hot rivets with bolts.
Many older structures were fabricated by inserting red-hot rivets into pre-drilled holes; the soft rivets are then peened using an air hammer on one side and a bucking bar on the head end. As these cool slowly, they are left in an annealed (soft) condition, while the plate, having been hot rolled and quenched during manufacture, remains relatively hard. Under extreme stress the hard plates can shear the soft rivets, resulting in failure of the joint.
The solution is to burn out each rivet with an oxygen torch . The hole is then prepared to a precise diameter with a reamer . A special locator bolt , consisting of a head, a shaft matching the reamed hole, and a threaded end is inserted and retained with a nut, then tightened with a wrench . As the bolt has been formed from an appropriate high-strength alloy and has also been heat-treated, it is not subject to either the plastic shear failure typical of hot rivets nor the brittle fracture of ordinary bolts. Any partial failure will be in the plastic flow of the metal secured by the bolt; with proper engineering any such failure should be non-catastrophic.
Elevated roadways are typically built on sections of elevated earth fill connected with bridge-like segments, often supported with vertical columns. If the soil fails where a bridge terminates, the bridge may become disconnected from the rest of the roadway and break away. The retrofit for this is to add additional reinforcement to any supporting wall, or to add deep caissons adjacent to the edge at each end and connect them with a supporting beam under the bridge.
Another failure occurs when the fill at each end moves (through resonant effects) in bulk, in opposite directions. If there is an insufficient founding shelf for the overpass, then it may fall. Additional shelf and ductile stays may be added to attach the overpass to the footings at one or both ends. The stays, rather than being fixed to the beams, may instead be clamped to them. Under moderate loading, these keep the overpass centered in the gap so that it is less likely to slide off its founding shelf at one end. The ability for the fixed ends to slide, rather than break, will prevent the complete drop of the structure if it should fail to remain on the footings.
Large sections of roadway may consist entirely of viaduct, sections with no connection to the earth other than through vertical columns. When concrete columns are used, the detailing is critical. Typical failure may be in the toppling of a row of columns due either to soil connection failure or to insufficient cylindrical wrapping with rebar. Both failures were seen in the 1995 Great Hanshin earthquake in Kobe, Japan , where an entire viaduct, centrally supported by a single row of large columns, was laid down to one side. Such columns are reinforced by excavating to the foundation pad, driving additional pilings, and adding a new, larger pad, well connected with rebar alongside or into the column. A column with insufficient wrapping bar, which is prone to burst and then hinge at the bursting point, may be completely encased in a circular or elliptical jacket of welded steel sheet and grouted as described above.
Sometimes viaducts may fail in the connections between components. This was seen in the failure of the Cypress Freeway in Oakland, California , during the Loma Prieta earthquake . This viaduct was a two-level structure, and the upper portions of the columns were not well connected to the lower portions that supported the lower level; this caused the upper deck to collapse upon the lower deck. Weak connections such as these require additional external jacketing – either through external steel components or by a complete jacket of reinforced concrete, often using stub connections that are glued (using epoxy adhesive) into numerous drilled holes. These stubs are then connected to additional wrappings, external forms (which may be temporary or permanent) are erected, and additional concrete is poured into the space. Large connected structures similar to the Cypress Viaduct must also be properly analyzed in their entirety using dynamic computer simulations.
Side-to-side forces cause most earthquake damage. Bolting of the mudsill to the foundation and application of plywood to cripple walls are basic retrofit techniques which homeowners may apply to wood-framed residential structures to mitigate the effects of seismic activity. The City of San Leandro has published guidelines for these procedures. Public awareness and initiative are critical to the retrofit and preservation of the existing building stock, and efforts such as those of the Association of Bay Area Governments are instrumental in providing informational resources to seismically active communities.
Most houses in North America are wood-framed structures. Wood is one of the best materials for earthquake-resistant construction since it is lightweight and more flexible than masonry. It is easy to work with and less expensive than steel, masonry, or concrete. In older homes the most significant weaknesses are the connection from the wood-framed walls to the foundation and the relatively weak "cripple-walls." (Cripple walls are the short wood walls that extend from the top of the foundation to the lowest floor level in houses that have raised floors.) Adding connections from the base of the wood-framed structure to the foundation is almost always an important part of a seismic retrofit. Bracing the cripple-walls to resist side-to-side forces is essential in houses with cripple walls; bracing is usually done with plywood . Oriented strand board (OSB) does not perform as consistently as plywood, and is not the favored choice of retrofit designers or installers.
Retrofit methods for older wood-frame structures may include foundation bolting, cripple-wall bracing, and other measures not described here.
Wooden framing is efficient when combined with masonry, if the structure is properly designed. In Turkey, the traditional houses (bagdadi) are made with this technology. In El Salvador , wood and bamboo are used for residential construction.
In many parts of developing countries such as Pakistan, Iran and China, unreinforced or, in some cases, reinforced masonry is the predominant form of construction for rural residences and dwellings. Masonry was also a common construction form in the early part of the 20th century, which implies that a substantial number of these at-risk masonry structures would have significant heritage value. Masonry walls that are not reinforced are especially hazardous. Such structures may be more appropriate for replacement than retrofit, but if the walls are the principal load-bearing elements in structures of modest size, they may be appropriately reinforced. It is especially important that floor and ceiling beams be securely attached to the walls. Additional vertical supports in the form of steel or reinforced concrete may be added.
In the western United States, much of what is seen as masonry is actually brick or stone veneer. Current construction rules dictate the amount of tie-back required; tie-backs consist of metal straps secured to vertical structural elements. These straps extend into the mortar courses, securing the veneer to the primary structure. Older structures may not secure the veneer sufficiently for seismic safety. A weakly secured veneer in a house interior (sometimes used to face a fireplace from floor to ceiling) can be especially dangerous to occupants. Older masonry chimneys are also dangerous if they have substantial vertical extension above the roof. These are prone to breakage at the roofline and may fall into the house in a single large piece. For retrofit, additional supports may be added; however, it is extremely expensive to strengthen an existing masonry chimney to conform with contemporary design standards. It is often best to simply remove the extension and replace it with lighter materials, with a special metal flue replacing the flue tile and a wood structure replacing the masonry. This may be matched against existing brickwork by using very thin veneer (similar to a tile, but with the appearance of a brick). | https://en.wikipedia.org/wiki/Seismic_retrofit |
A seismometer is an instrument that responds to ground displacement and shaking, such as that caused by earthquakes , volcanic eruptions , and explosions . Seismometers are usually combined with a timing device and a recording device to form a seismograph . [ 1 ] The output of such a device—formerly recorded on paper or film, now recorded and processed digitally—is a seismogram . Such data are used to locate and characterize earthquakes , and to study the internal structure of Earth .
A simple seismometer, sensitive to up-down motions of the Earth, is like a weight hanging from a spring, both suspended from a frame that moves along with the ground. The relative motion between the weight (called the mass) and the frame provides a measurement of the vertical ground motion . A rotating drum is attached to the frame and a pen is attached to the weight, thus recording any ground motion in a seismogram .
Any movement from the ground moves the frame. The mass tends not to move because of its inertia , and by measuring the movement between the frame and the mass, the motion of the ground can be determined.
Early seismometers used optical levers or mechanical linkages to amplify the small motions involved, recording on soot-covered paper or photographic paper. Modern instruments use electronics. In some systems, the mass is held nearly motionless relative to the frame by an electronic negative feedback loop . The motion of the mass relative to the frame is measured, and the feedback loop applies a magnetic or electrostatic force to keep the mass nearly motionless. The voltage needed to produce this force is the output of the seismometer, which is recorded digitally.
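The mass-and-frame principle described above can be sketched numerically by integrating the proof mass's relative motion under a given ground acceleration. The toy example below (a 1 Hz, heavily damped sensor observing a 5 Hz ground vibration) shows the relative motion closely tracking the ground displacement, which is why such a sensor acts as a displacement meter well above its natural frequency. The parameters are invented, and this is not a model of any particular instrument or feedback design.

```python
import numpy as np

def relative_motion(ground_acc, dt, f0=1.0, zeta=0.7):
    """Relative displacement z(t) of a seismometer's proof mass for a given
    ground-acceleration record, from  z'' + 2*zeta*w0*z' + w0^2*z = -a_ground,
    integrated with a simple semi-implicit Euler scheme (adequate for a sketch)."""
    w0 = 2.0 * np.pi * f0
    z, v = 0.0, 0.0
    out = np.empty(len(ground_acc))
    for i, ag in enumerate(ground_acc):
        a = -ag - 2.0 * zeta * w0 * v - w0 ** 2 * z
        v += a * dt
        z += v * dt
        out[i] = z
    return out

# Illustrative: a 5 Hz ground vibration seen by a 1 Hz, heavily damped sensor.
dt = 0.001
t = np.arange(0.0, 5.0, dt)
ug = 1e-4 * np.sin(2.0 * np.pi * 5.0 * t)            # ground displacement, m
ag = -(2.0 * np.pi * 5.0) ** 2 * ug                   # its acceleration, m/s^2
z = relative_motion(ag, dt)
print(f"peak ground displacement:  {np.max(np.abs(ug)):.2e} m")
print(f"peak relative mass motion: {np.max(np.abs(z[2000:])):.2e} m  (close to the ground displacement)")
```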
In other systems the weight is allowed to move, and its motion induces a voltage in a coil attached to the mass as it moves through the magnetic field of a magnet attached to the frame. This design is often used in a geophone , which is used in exploration for oil and gas.
Seismic observatories usually have instruments measuring three axes: north-south (y-axis), east–west (x-axis), and vertical (z-axis). If only one axis is measured, it is usually the vertical because it is less noisy and gives better records of some seismic waves. [ citation needed ]
The foundation of a seismic station is critical. [ 2 ] A professional station is sometimes mounted on bedrock . The best mountings may be in deep boreholes, which avoid thermal effects, ground noise and tilting from weather and tides. Other instruments are often mounted in insulated enclosures on small buried piers of unreinforced concrete. Reinforcing rods and aggregates would distort the pier as the temperature changes. A site is always surveyed for ground noise with a temporary installation before pouring the pier and laying conduit. Originally, European seismographs were placed in a particular area after a destructive earthquake. Today, they are spread to provide appropriate coverage (in the case of weak-motion seismology ) or concentrated in high-risk regions ( strong-motion seismology ). [ 3 ]
The word derives from the Greek σεισμός, seismós , a shaking or quake, from the verb σείω, seíō , to shake; and μέτρον, métron , to measure, and was coined by David Milne-Home in 1841, to describe an instrument designed by Scottish physicist James David Forbes . [ 4 ]
Seismograph is another Greek term from seismós and γράφω, gráphō , to draw. It is often used to mean seismometer , though it is more applicable to the older instruments in which the measuring and recording of ground motion were combined, than to modern systems, in which these functions are separated. Both types provide a continuous record of ground motion; this record distinguishes them from seismoscopes , which merely indicate that motion has occurred, perhaps with some simple measure of how large it was. [ 5 ]
The technical discipline concerning such devices is called seismometry , [ 6 ] a branch of seismology .
The concept of measuring the "shaking" of something means that the word "seismograph" might be used in a more general sense. For example, a monitoring station that tracks changes in electromagnetic noise affecting amateur radio waves presents an rf seismograph . [ 7 ] And helioseismology studies the "quakes" on the Sun . [ 8 ]
The first seismometer was made in China during the 2nd century. [ 9 ] It was invented by Zhang Heng , a Chinese mathematician and astronomer. The first Western description of the device comes from the French physicist and priest Jean de Hautefeuille in 1703. [ 10 ] The modern seismometer was developed in the 19th century. [ 3 ]
Seismometers were placed on the Moon starting in 1969 as part of the Apollo Lunar Surface Experiments Package . In December 2018, a seismometer was deployed on the planet Mars by the InSight lander, the first time a seismometer was placed onto the surface of another planet. [ 11 ]
In Ancient Egypt , Amenhotep, son of Hapu , invented a precursor of the seismometer: vertical wooden poles connected by wooden gutters along a central axis, which functioned to fill a vessel with water until full in order to detect earthquakes.
In AD 132 , Zhang Heng of China's Han dynasty is said to have invented the first seismoscope (by the definition above), which was called Houfeng Didong Yi (translated as, "instrument for measuring the seasonal winds and the movements of the Earth"). The description we have, from the History of the Later Han Dynasty , says that it was a large bronze vessel, about 2 meters in diameter; at eight points around the top were dragon's heads holding bronze balls. When there was an earthquake, one of the dragons' mouths would open and drop its ball into a bronze toad at the base, making a sound and supposedly showing the direction of the earthquake. On at least one occasion, probably at the time of a large earthquake in Gansu in AD 143, the seismoscope indicated an earthquake even though one was not felt. The available text says that inside the vessel was a central column that could move along eight tracks; this is thought to refer to a pendulum, though it is not known exactly how this was linked to a mechanism that would open only one dragon's mouth. The first earthquake recorded by this seismoscope was supposedly "somewhere in the east". Days later, a rider from the east reported this earthquake. [ 9 ] [ 12 ]
By the 13th century, seismographic devices existed in the Maragheh observatory (founded 1259) in Persia, though it is unclear whether these were constructed independently or based on the first seismoscope. [ 13 ] French physicist and priest Jean de Hautefeuille described a seismoscope in 1703, [ 10 ] which used a bowl filled with mercury which would spill into one of eight receivers equally spaced around the bowl, though there is no evidence that he actually constructed the device. [ 14 ] A mercury seismoscope was constructed in 1784 or 1785 by Atanasio Cavalli , [ 15 ] a copy of which can be found at the University Library in Bologna, and a further mercury seismoscope was constructed by Niccolò Cacciatore in 1818. [ 14 ] James Lind also built a seismological tool of unknown design or efficacy (known as an earthquake machine) in the late 1790s. [ 16 ]
Pendulum devices were being developed at the same time. The Neapolitan naturalist Nicola Cirillo set up a network of pendulum earthquake detectors following the 1731 Puglia earthquake, in which the amplitude was determined using a protractor to measure the swinging motion. The Benedictine monk Andrea Bina further developed this concept in 1751, having the pendulum create trace marks in sand beneath the mechanism, providing both the magnitude and direction of motion. The Neapolitan clockmaker Domenico Salsano produced a similar pendulum which recorded using a paintbrush in 1783, labelling it a geo-sismometro , possibly the first use of a word similar to seismometer . The naturalist Nicolo Zupo devised an instrument to detect electrical disturbances and earthquakes at the same time (1784). [ 14 ]
The first moderately successful device for detecting the time of an earthquake was devised by Ascanio Filomarino in 1796, who improved upon Salsano's pendulum instrument, using a pencil to mark, and using a hair attached to the mechanism to inhibit the motion of a clock's balance wheel. This meant that the clock would only start once an earthquake took place, allowing determination of the time of incidence. [ 14 ]
After an earthquake on October 4, 1834, Luigi Pagani observed that the mercury seismoscope held at Bologna University had completely spilled over and did not provide useful information. He therefore devised a portable device that used lead shot to detect the direction of an earthquake: the lead fell into four bins arranged in a circle, indicating the quadrant of earthquake incidence. He completed the instrument in 1841. [ 14 ]
In response to a series of earthquakes near Comrie in Scotland in 1839, a committee was formed in the United Kingdom in order to produce better detection devices for earthquakes. The outcome of this was an inverted pendulum seismometer constructed by James David Forbes , first presented in a report by David Milne-Home in 1842, which recorded the measurements of seismic activity through the use of a pencil placed on paper above the pendulum. The designs provided did not prove effective, according to Milne's reports. [ 14 ] It was Milne who coined the word seismometer in 1841, to describe this instrument. [ 4 ] In 1843, the first horizontal pendulum was used in a seismometer, reported by Milne (though it is unclear if he was the original inventor). [ 17 ] After these inventions, Robert Mallet published an 1848 paper where he suggested ideas for seismometer design, suggesting that such a device would need to register time, record amplitudes horizontally and vertically, and ascertain direction. His suggested design was funded, and construction was attempted, but his final design did not fulfill his expectations and suffered from the same problems as the Forbes design, being inaccurate and not self-recording. [ 17 ]
Karl Kreil constructed a seismometer in Prague between 1848 and 1850, which used a point-suspended rigid cylindrical pendulum covered in paper, drawn upon by a fixed pencil. The cylinder was rotated every 24 hours, providing an approximate time for a given quake. [ 14 ]
Luigi Palmieri , influenced by Mallet's 1848 paper, [ 17 ] invented a seismometer in 1856 that could record the time of an earthquake. This device used metallic pendulums which closed an electric circuit with vibration, which then powered an electromagnet to stop a clock. Palmieri seismometers were widely distributed and used for a long time. [ 18 ]
By 1872, a committee in the United Kingdom led by James Bryce expressed their dissatisfaction with the current available seismometers, still using the large 1842 Forbes device located in Comrie Parish Church, and requested a seismometer which was compact, easy to install and easy to read. In 1875 they settled on a large example of the Mallet device, consisting of an array of cylindrical pins of various sizes installed at right angles to each other on a sand bed, where larger earthquakes would knock down larger pins. This device was constructed in 'Earthquake House' near Comrie, which can be considered the world's first purpose-built seismological observatory. [ 17 ] As of 2013, no earthquake has been large enough to cause any of the cylinders to fall in either the original device or replicas.
The first seismographs were invented in the 1870s and 1880s. The first seismograph was produced by Filippo Cecchi in around 1875. A seismoscope would trigger the device to begin recording, and then a recording surface would produce a graphical illustration of the tremors automatically (a seismogram). However, the instrument was not sensitive enough, and the first seismogram produced by the instrument was in 1887, by which time John Milne had already demonstrated his design in Japan . [ 19 ]
In 1880, the first horizontal pendulum seismometer was developed by the team of John Milne , James Alfred Ewing and Thomas Gray , who worked as foreign-government advisors in Japan from 1880 to 1895. [ 3 ] Milne, Ewing and Gray, all having been hired by the Meiji Government in the previous five years to assist Japan's modernization efforts, founded the Seismological Society of Japan in response to an earthquake that took place on February 22, 1880, at Yokohama (the Yokohama earthquake). Two instruments were constructed by Ewing over the next year, one being a common-pendulum seismometer and the other being the first seismometer using a damped horizontal pendulum. Their innovative recording system was the first to allow a continuous record. The first seismogram was recorded on 3 November 1880 on both of Ewing's instruments. [ 19 ] Modern seismometers would eventually descend from these designs. Milne has been referred to as the 'Father of modern seismology' [ 20 ] and his seismograph design has been called the first modern seismometer. [ 21 ]
This produced the first effective measurement of horizontal motion. Gray would produce the first reliable method for recording vertical motion, which produced the first effective 3-axis recordings. [ 19 ]
An early special-purpose seismometer consisted of a large, stationary pendulum , with a stylus on the bottom. As the earth started to move, the heavy mass of the pendulum had the inertia to stay still within the frame . The result is that the stylus scratched a pattern corresponding with the Earth's movement. This type of strong-motion seismometer recorded upon a smoked glass (glass with carbon soot ). While not sensitive enough to detect distant earthquakes, this instrument could indicate the direction of the pressure waves and thus help find the epicenter of a local quake. Such instruments were useful in the analysis of the 1906 San Francisco earthquake . Further analysis was performed in the 1980s, using these early recordings, enabling a more precise determination of the initial fault break location in Marin county and its subsequent progression, mostly to the south.
Later, professional suites of instruments for the worldwide standard seismographic network had one set of instruments tuned to oscillate at fifteen seconds, and the other at ninety seconds, each set measuring in three directions. Amateurs or observatories with limited means tuned their smaller, less sensitive instruments to ten seconds.
The basic damped horizontal pendulum seismometer swings like the gate of a fence. A heavy weight is mounted on the point of a long (from 10 cm to several meters) triangle, hinged at its vertical edge. As the ground moves, the weight stays unmoving, swinging the "gate" on the hinge.
The advantage of a horizontal pendulum is that it achieves very low frequencies of oscillation in a compact instrument. The "gate" is slightly tilted, so the weight tends to return slowly to a central position. The pendulum is adjusted (before the damping is installed) to oscillate once per three seconds, or once per thirty seconds; the general-purpose instruments of small stations or amateurs usually oscillate once per ten seconds. A pan of oil is placed under the arm, and a small sheet of metal mounted on the underside of the arm drags in the oil to damp oscillations. The level of oil, the position of the sheet on the arm, and its angle and size are adjusted until the damping is "critical", that is, just on the verge of oscillation. The hinge is very low friction, often made of torsion wires, so the only friction is the internal friction of the wire. Small seismographs with low proof masses are placed in a vacuum to reduce disturbances from air currents.
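The damping tuning described above can be illustrated with a minimal damped-oscillator sketch. The code below is not a model of any particular instrument; the boom period, damping ratios and integration step are assumed values chosen only to show the difference between an underdamped and a critically damped boom.

```python
import numpy as np

def simulate_boom(period_s=10.0, damping_ratio=1.0, dt=0.01, t_max=60.0):
    """Integrate x'' + 2*zeta*w0*x' + w0**2*x = 0 with semi-implicit Euler steps.

    damping_ratio (zeta) = 1.0 is "critical" damping: the boom returns to rest
    as quickly as possible without overshooting, which is what the oil pan,
    sheet size and sheet angle are adjusted to approximate.
    """
    w0 = 2.0 * np.pi / period_s      # natural angular frequency of the boom
    x, v = 1.0, 0.0                  # start displaced and at rest
    history = []
    for _ in range(int(t_max / dt)):
        a = -2.0 * damping_ratio * w0 * v - w0 ** 2 * x
        v += a * dt
        x += v * dt
        history.append(x)
    return np.array(history)

# An underdamped boom (zeta = 0.2) keeps ringing; a critically damped one settles.
ringing = simulate_boom(damping_ratio=0.2)
settled = simulate_boom(damping_ratio=1.0)
```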
Zollner described torsionally suspended horizontal pendulums as early as 1869, but developed them for gravimetry rather than seismometry.
Early seismometers had an arrangement of levers on jeweled bearings, to scratch smoked glass or paper. Later, mirrors reflected a light beam to a direct-recording plate or roll of photographic paper. Briefly, some designs returned to mechanical movements to save money. In mid-twentieth-century systems, the light was reflected to a pair of differential electronic photosensors called a photomultiplier. The voltage generated in the photomultiplier was used to drive galvanometers which had a small mirror mounted on the axis. The moving reflected light beam would strike the surface of the turning drum, which was covered with photo-sensitive paper. The expense of developing photo-sensitive paper caused many seismic observatories to switch to ink or thermal-sensitive paper.
After World War II, the seismometers developed by Milne, Ewing and Gray were adapted into the widely used Press-Ewing seismometer .
Modern instruments use electronic sensors, amplifiers, and recording devices. Most are broadband covering a wide range of frequencies. Some seismometers can measure motions with frequencies from 500 Hz to 0.00118 Hz (1/500 = 0.002 seconds per cycle, to 1/0.00118 = 850 seconds per cycle). The mechanical suspension for horizontal instruments remains the garden-gate described above. Vertical instruments use some kind of constant-force suspension, such as the LaCoste suspension. The LaCoste suspension uses a zero-length spring to provide a long period (high sensitivity). [ 22 ] [ 23 ] Some modern instruments use a "triaxial" or "Galperin" design , in which three identical motion sensors are set at the same angle to the vertical but 120 degrees apart on the horizontal. Vertical and horizontal motions can be computed from the outputs of the three sensors.
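A rough sketch of the Galperin reconstruction is shown below. The tilt angle (equal inclination from vertical, axes 120 degrees apart in azimuth) and the axis ordering are assumptions made for illustration; actual instruments document their own conventions. The point is only that the three sensor axes form an invertible basis, so vertical and horizontal motion can be solved for.

```python
import numpy as np

# Assumed geometry: three identical sensors tilted by the same angle from the
# vertical and spaced 120 degrees apart in azimuth; the angle and the axis
# ordering below are illustrative, not a specific manufacturer's convention.
TILT_FROM_VERTICAL = np.degrees(np.arccos(1.0 / np.sqrt(3.0)))  # about 54.7 degrees
AZIMUTHS_DEG = (0.0, 120.0, 240.0)

def axis_unit_vector(azimuth_deg, tilt_deg):
    """Unit vector (east, north, up) along one sensor axis."""
    az, tilt = np.radians(azimuth_deg), np.radians(tilt_deg)
    return np.array([np.sin(tilt) * np.sin(az),
                     np.sin(tilt) * np.cos(az),
                     np.cos(tilt)])

# Rows of A project ground motion (E, N, Z) onto the three sensor axes (U, V, W).
A = np.vstack([axis_unit_vector(az, TILT_FROM_VERTICAL) for az in AZIMUTHS_DEG])

def uvw_to_enz(uvw):
    """Recover east, north and vertical motion from the raw U, V, W outputs."""
    return np.linalg.solve(A, np.asarray(uvw, dtype=float))

# Purely vertical ground motion appears equally on all three sensors;
# solving the system recovers approximately (0, 0, 1).
print(uvw_to_enz(A @ np.array([0.0, 0.0, 1.0])))
```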
Seismometers unavoidably introduce some distortion into the signals they measure, but professionally designed systems have carefully characterized frequency transforms.
Modern sensitivities come in three broad ranges: geophones , 50 to 750 V /m; local geologic seismographs, about 1,500 V/m; and teleseismographs, used for world survey, about 20,000 V/m. Instruments come in three main varieties: short-period, long-period and broadband. The short- and long-period instruments measure velocity and are very sensitive; however they 'clip' the signal or go off-scale for ground motion that is strong enough to be felt by people. A 24-bit analog-to-digital conversion channel is commonplace. Practical devices are linear to roughly one part per million.
Delivered seismometers come with two styles of output: analog and digital. Analog seismographs require analog recording equipment, possibly including an analog-to-digital converter. The output of a digital seismograph can be simply input to a computer. It presents the data in a standard digital format (often "SE2" over Ethernet ).
The modern broadband seismograph can record a very broad range of frequencies . It consists of a small "proof mass", confined by electrical forces, driven by sophisticated electronics . As the earth moves, the electronics attempt to hold the mass steady through a feedback circuit. The amount of force necessary to achieve this is then recorded.
In most designs the electronics holds a mass motionless relative to the frame. This device is called a "force balance accelerometer". It measures acceleration instead of velocity of ground movement. Basically, the distance between the mass and some part of the frame is measured very precisely, by a linear variable differential transformer . Some instruments use a linear variable differential capacitor .
That measurement is then amplified by electronic amplifiers attached to parts of an electronic negative feedback loop . One of the amplified currents from the negative feedback loop drives a coil very like a loudspeaker . The result is that the mass stays nearly motionless.
Most instruments measure the ground motion directly using the distance sensor. The voltage generated in a sense coil on the mass by the magnet directly measures the instantaneous velocity of the ground. The current to the drive coil provides a sensitive, accurate measurement of the force between the mass and frame, and thus of the ground's acceleration (using F = ma, where F is force, m is mass and a is acceleration).
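The force-balance idea can be sketched as a toy feedback simulation. The controller below is a simple proportional-derivative loop acting on a point mass, not the circuit of any real instrument; the mass, gains, time step and test signal are arbitrary illustrative values.

```python
import numpy as np

def force_balance(ground_accel, dt=1e-4, mass=0.1, kp=5000.0, kd=100.0):
    """Toy force-balance loop: a feedback force holds the proof mass near the frame.

    The recorded quantity is the feedback force divided by the proof mass
    (F = m*a), which tracks the ground acceleration once the loop has settled.
    """
    rel_pos, rel_vel = 0.0, 0.0          # proof-mass position/velocity relative to frame
    recorded = []
    for a_ground in ground_accel:
        feedback_force = -kp * rel_pos - kd * rel_vel   # drive-coil force on the mass
        rel_acc = feedback_force / mass - a_ground      # relative acceleration
        rel_vel += rel_acc * dt
        rel_pos += rel_vel * dt
        recorded.append(feedback_force / mass)          # estimate of ground acceleration
    return np.array(recorded)

t = np.arange(0.0, 2.0, 1e-4)
ground = 0.01 * np.sin(2.0 * np.pi * 1.0 * t)   # 1 Hz test acceleration, arbitrary units
estimate = force_balance(ground)                # closely tracks `ground` after settling
```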
One of the continuing problems with sensitive vertical seismographs is the buoyancy of their masses. The uneven changes in pressure caused by wind blowing on an open window can easily change the density of the air in a room enough to cause a vertical seismograph to show spurious signals. Therefore, most professional seismographs are sealed in rigid gas-tight enclosures. For example, this is why a common Streckeisen model has a thick glass base that must be glued to its pier without bubbles in the glue.
It might seem logical to make the heavy magnet serve as the mass, but that subjects the seismograph to errors when the Earth's magnetic field moves. This is also why a seismograph's moving parts are constructed from materials that interact minimally with magnetic fields. A seismograph is also sensitive to changes in temperature, so many instruments are constructed from low-expansion materials such as nonmagnetic invar .
The hinges on a seismograph are usually patented, and by the time the patent has expired, the design has been improved. The most successful public domain designs use thin foil hinges in a clamp.
Another issue is that the transfer function of a seismograph must be accurately characterized, so that its frequency response is known. This is often the crucial difference between professional and amateur instruments. Most are characterized on a variable frequency shaking table.
Another type of seismometer is a digital strong-motion seismometer, or accelerograph . The data from such an instrument is essential to understand how an earthquake affects man-made structures, through earthquake engineering . The recordings of such instruments are crucial for the assessment of seismic hazard , through engineering seismology .
A strong-motion seismometer measures acceleration. This can be mathematically integrated later to give velocity and position. Strong-motion seismometers are not as sensitive to ground motions as teleseismic instruments but they stay on scale during the strongest seismic shaking.
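A sketch of that post-processing step is shown below: the acceleration record is integrated twice using the trapezoidal rule. Real strong-motion processing also applies baseline, filter and instrument corrections, which are omitted here; the signal and sampling rate are synthetic.

```python
import numpy as np

def integrate_record(accel, dt):
    """Trapezoidal integration of acceleration to velocity and displacement.

    Only the integration itself is shown; real processing also applies
    baseline, filter and instrument corrections.
    """
    accel = np.asarray(accel, dtype=float)
    accel = accel - accel.mean()    # crude zero-mean baseline correction
    velocity = np.concatenate(([0.0], np.cumsum((accel[1:] + accel[:-1]) * dt / 2.0)))
    displacement = np.concatenate(([0.0], np.cumsum((velocity[1:] + velocity[:-1]) * dt / 2.0)))
    return velocity, displacement

# Synthetic decaying 2 Hz acceleration pulse sampled at 200 samples per second.
dt = 0.005
t = np.arange(0.0, 10.0, dt)
accel = 0.5 * np.sin(2.0 * np.pi * 2.0 * t) * np.exp(-t)
velocity, displacement = integrate_record(accel, dt)
```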
Strong motion sensors are used for intensity meter applications.
Accelerographs and geophones are often heavy cylindrical magnets with a spring-mounted coil inside. As the case moves, the coil tends to stay stationary, so the magnetic field cuts the wires, inducing current in the output wires. They receive frequencies from several hundred hertz down to 1 Hz. Some have electronic damping, a low-budget way to get some of the performance of the closed-loop wide-band geologic seismographs.
Strain-beam accelerometers constructed as integrated circuits are too insensitive for geologic seismographs (2002), but are widely used in geophones.
Some other sensitive designs measure the current generated by the flow of a non-corrosive ionic fluid through an electret sponge or a conductive fluid through a magnetic field .
Seismometers spaced in a seismic array can also be used to precisely locate, in three dimensions, the source of an earthquake, using the time it takes for seismic waves to propagate away from the hypocenter , the initiating point of fault rupture (See also Earthquake location ). Interconnected seismometers are also used, as part of the International Monitoring System to detect underground nuclear test explosions, as well as for Earthquake early warning systems. These seismometers are often used as part of a large-scale governmental or scientific project, but some organizations such as the Quake-Catcher Network , can use residential size detectors built into computers to detect earthquakes as well.
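As a rough illustration of locating a source from arrival times, the sketch below performs a grid search over candidate epicenters, assuming a flat geometry, a single constant wave speed and a surface source; the station coordinates, wave speed and test event are invented for the example, and real methods invert for depth and origin time with 3-D velocity models.

```python
import numpy as np

def locate_epicenter(stations_km, arrival_times_s, wave_speed_km_s=6.0, grid_step_km=1.0):
    """Grid search for the epicenter that best explains the observed arrival times.

    Assumes a flat geometry, a single constant wave speed and a surface source;
    the unknown origin time is removed by comparing demeaned residuals.
    """
    best, best_misfit = None, np.inf
    for x in np.arange(-100.0, 100.0, grid_step_km):
        for y in np.arange(-100.0, 100.0, grid_step_km):
            travel = np.hypot(stations_km[:, 0] - x, stations_km[:, 1] - y) / wave_speed_km_s
            residual = arrival_times_s - travel
            misfit = np.sum((residual - residual.mean()) ** 2)
            if misfit < best_misfit:
                best, best_misfit = (x, y), misfit
    return best

stations = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [40.0, 40.0]])
true_source = np.array([20.0, 10.0])
times = np.hypot(*(stations - true_source).T) / 6.0 + 100.0   # origin time of 100 s
print(locate_epicenter(stations, times))                      # approximately (20.0, 10.0)
```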
In reflection seismology , an array of seismometers image sub-surface features. The data are reduced to images using algorithms similar to tomography . The data reduction methods resemble those of computer-aided tomographic medical imaging X-ray machines (CAT-scans), or imaging sonars .
A worldwide array of seismometers can actually image the interior of the Earth in wave-speed and transmissivity. This type of system uses events such as earthquakes, impact events or nuclear explosions as wave sources. The first efforts at this method used manual data reduction from paper seismograph charts. Modern digital seismograph records are better adapted to direct computer use. With inexpensive seismometer designs and internet access, amateurs and small institutions have even formed a "public seismograph network". [ 24 ]
Seismographic systems used for petroleum or other mineral exploration historically used an explosive and a wireline of geophones unrolled behind a truck. Now most short-range systems use "thumpers" that hit the ground, and some small commercial systems have such good digital signal processing that a few sledgehammer strikes provide enough signal for short-distance refractive surveys. Exotic cross or two-dimensional arrays of geophones are sometimes used to perform three-dimensional reflective imaging of subsurface features. Basic linear refractive geomapping software (once a black art) is available off-the-shelf, running on laptop computers, using strings as small as three geophones. Some systems now come in an 18" (0.5 m) plastic field case with a computer, display and printer in the cover.
Small seismic imaging systems are now sufficiently inexpensive to be used by civil engineers to survey foundation sites, locate bedrock, and find subsurface water.
A new technique for detecting earthquakes has been found, using fiber optic cables. [ 25 ] In 2016 a team of metrologists running frequency metrology experiments in England observed noise with a wave-form resembling the seismic waves generated by earthquakes. This was found to match seismological observations of an M w 6.0 earthquake in Italy, ~1400 km away. Further experiments in England, Italy, and with a submarine fiber optic cable to Malta detected additional earthquakes, including one 4,100 km away, and an M L 3.4 earthquake 89 km away from the cable.
Seismic waves are detectable because they cause micrometer -scale changes in the length of the cable. As the length changes so does the time it takes a packet of light to traverse to the far end of the cable and back (using a second fiber). Using ultra-stable metrology-grade lasers, these extremely minute shifts of timing (on the order of femtoseconds ) appear as phase-changes.
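A back-of-the-envelope calculation shows why micrometer-scale length changes correspond to femtosecond-scale timing shifts. The refractive index below is a typical assumed value for silica fiber, not a figure from the experiments described above.

```python
# Round-trip delay change caused by a small change in cable length.
c = 299_792_458.0      # speed of light in vacuum, m/s
n = 1.468              # assumed effective refractive index of silica fiber
delta_length = 1e-6    # 1 micrometer change in cable length

# Light goes out on one fiber and returns on the second, so the extra optical
# path is roughly twice the length change.
delta_t = 2.0 * n * delta_length / c
print(f"timing shift: {delta_t * 1e15:.1f} fs")   # roughly 10 femtoseconds
```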
The point of the cable first disturbed by an earthquake's p wave (essentially a sound wave in rock) can be determined by sending packets in both directions in the looped pair of optical fibers; the difference in the arrival times of the first pair of perturbed packets indicates the distance along the cable. This point is also the point closest to the earthquake's epicenter, which should be on a plane perpendicular to the cable. The difference between the P wave/S wave arrival times provides a distance (under ideal conditions), constraining the epicenter to a circle. A second detection on a non-parallel cable is needed to resolve the ambiguity of the resulting solution. Additional observations constrain the location of the earthquake's epicenter, and may resolve the depth.
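The distance constraint from the S-minus-P time can be sketched with the standard travel-time relation below; the P and S wave speeds are assumed average crustal values, and real analyses use proper travel-time models.

```python
def distance_from_s_minus_p(delta_t_s, vp_km_s=6.0, vs_km_s=3.5):
    """Rough source distance from the S-minus-P arrival-time difference.

    Both waves travel the same path, so delta_t = d/vs - d/vp, giving
    d = delta_t / (1/vs - 1/vp). The velocities are assumed crustal averages.
    """
    return delta_t_s / (1.0 / vs_km_s - 1.0 / vp_km_s)

# With these velocities, each second of S-minus-P time is roughly 8 km of distance.
print(distance_from_s_minus_p(10.0))   # about 84 km for a 10-second gap
```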
This technique is expected to be a boon in observing earthquakes, especially the smaller ones, in vast portions of the global ocean where there are no seismometers, and at much lower cost than ocean-bottom seismometers.
Researchers at Stanford University created a deep-learning algorithm called UrbanDenoiser which can detect earthquakes, particularly in urban areas. [ 26 ] The algorithm filters background noise out of seismic data gathered in busy cities in order to detect earthquakes. [ 26 ] [ 27 ]
Today, the most common recorder is a computer with an analog-to-digital converter, a disk drive and an internet connection; for amateurs, a PC with a sound card and associated software is adequate. Most systems record continuously, but some record only when a signal is detected, as shown by a short-term increase in the variation of the signal compared to its long-term average (which can vary slowly because of changes in seismic noise) [ citation needed ] , also known as an STA/LTA trigger.
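A minimal STA/LTA trigger might look like the following sketch; the window lengths, threshold and synthetic test trace are arbitrary illustrative choices rather than values from any operational system.

```python
import numpy as np

def sta_lta_trigger(signal, sta_len=50, lta_len=1000, threshold=4.0):
    """Return indices where the short-term average energy exceeds `threshold`
    times the long-term average energy (trailing windows)."""
    energy = np.square(np.asarray(signal, dtype=float))
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    triggers = []
    for i in range(lta_len, len(energy)):
        sta = (csum[i + 1] - csum[i + 1 - sta_len]) / sta_len
        lta = (csum[i + 1] - csum[i + 1 - lta_len]) / lta_len
        if lta > 0.0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

# Synthetic trace: background noise with a burst ("event") starting at sample 2500.
rng = np.random.default_rng(0)
trace = rng.normal(scale=0.1, size=5000)
trace[2500:2600] += rng.normal(scale=1.0, size=100)
print(sta_lta_trigger(trace)[:5])   # first triggered samples, shortly after index 2500
```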
Prior to the availability of digital processing of seismic data in the late 1970s, records were made in several forms on different types of media. A "Helicorder" drum was a device used to record data onto photographic paper or onto paper with ink. A "Develocorder" was a machine that recorded data from up to 20 channels onto 16-mm film; the recorded film could be viewed by a machine, and reading and measuring from these types of media were done by hand. After digital processing came into use, archives of seismic data were recorded on magnetic tape. Due to the deterioration of older magnetic tape media, a large number of waveforms from the archives are not recoverable. [ 28 ] [ 29 ] | https://en.wikipedia.org/wiki/Seismometer
Seita Emori (born 1970 in Kanagawa, Japan ) is a Japanese environmental scientist whose best-known work focuses on the worldwide effects of global warming . [ 1 ] He completed his doctorate at the University of Tokyo in 1997 and thereafter joined the National Institute for Environmental Studies , where he is currently the Chief of the Climate Risk Assessment Research Section at the Center for Global Environmental Research. [ 2 ] Emori was a contributing author of the Fourth , Fifth and Sixth Assessment Reports of the Intergovernmental Panel on Climate Change (IPCC) [ 1 ] [ 3 ] and a member of the IPCC Steering Committee for the "Expert Meeting on New Scenarios"; the IPCC received the Nobel Peace Prize in 2007. [ 4 ]
Among Emori's publications is the academic paper "Sensitivity Map of LAI to Precipitation and Surface Air Temperature Variations in a Global Scale", co-authored with his Japanese colleague Hiroshi Kanzawa and with Jiahua Zhang and Congbin Fu of START, Institute of Atmospheric Physics, Beijing, China. [ 5 ]
| https://en.wikipedia.org/wiki/Seita_Emori
Seked (or seqed ) is an ancient Egyptian term describing the inclination of the triangular faces of a right pyramid. [ 1 ] The system was based on the Egyptians' length measure known as the royal cubit . It was subdivided into seven palms , each of which was sub-divided into four digits .
The inclination of measured slopes was therefore expressed as the number of horizontal palms and digits relative to each royal cubit rise.
The seked is proportional to the reciprocal of our modern measure of slope or gradient, and to the cotangent of the angle of elevation. [ 2 ] Specifically, if s is the seked, m the slope (rise over run), and ϕ the angle of elevation from horizontal, then
$$s = \frac{7}{m} = 7\cot\phi ,$$
the factor of 7 arising because one royal cubit of rise equals seven palms.
The most famous example of a seked slope is of the Great Pyramid of Giza in Egypt built around 2550 BC. Based on modern surveys, the faces of this monument had a seked of 5 + 1 / 2 , or 5 palms and 2 digits, in modern terms equivalent to a slope of 1.27, a gradient of 127%, and an elevation of 51.84° from the horizontal (in our 360° system).
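These conversions can be checked with a short script using the relation given above (a seked is the horizontal run in palms per one-cubit rise of seven palms):

```python
import math

def seked_to_slope_and_angle(seked_palms):
    """Convert a seked (horizontal palms per one-cubit rise, 7 palms = 1 cubit)
    to a modern slope (rise over run) and an elevation angle in degrees."""
    slope = 7.0 / seked_palms                 # 7 palms of rise over `seked` palms of run
    return slope, math.degrees(math.atan(slope))

slope, angle = seked_to_slope_and_angle(5.5)  # 5 palms and 2 digits
print(f"slope {slope:.2f}, gradient {slope * 100:.0f}%, elevation {angle:.2f} degrees")
# prints: slope 1.27, gradient 127%, elevation 51.84 degrees
```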
Information on the use of the seked in the design of pyramids has been obtained from two mathematical papyri: the Rhind Mathematical Papyrus in the British Museum and the Moscow Mathematical Papyrus in the Museum of Fine Arts. [ 3 ]
Although there is no direct evidence of its application from the archaeology of the Old Kingdom, there are a number of examples from the two mathematical papyri, which date to the Middle Kingdom, that show the use of this system for defining the slopes of the sides of pyramids, based on their height and base dimensions. The most widely quoted example is perhaps problem 56 from the Rhind Mathematical Papyrus .
The most famous of all the pyramids of Egypt is the Great Pyramid of Giza built around 2550 BC. Based on the surveys of this structure that have been carried out by Flinders Petrie and others, the slopes of the faces of this monument were a seked of 5 + 1 / 2 , or 5 palms and 2 digits [see figure above] which equates to a slope of 51.84° from the horizontal, using the modern 360° system. [ 4 ] [ 5 ]
This slope would probably have been accurately applied during construction by way of 'A frame' shaped wooden tools with plumb bobs, marked to the correct incline, so that slopes could be measured out and checked efficiently. [ 6 ]
Furthermore, according to Petrie's survey data in "The Pyramids and Temples of Gizeh" [ 7 ] the mean slope of the Great Pyramid's entrance passage is 26° 31' 23" ± 5". This is less than 1/20 of one degree in deviation from an ideal slope of 1 in 2, which is 26° 33' 54". This equates to a seked of 14 palms, and is generally considered to have been the intentional designed slope applied by the Old Kingdom builders for internal passages. [ citation needed ]
The seked of a pyramid is described by Richard Gillings in his book 'Mathematics in the Time of the Pharaohs' as follows:
The seked of a right pyramid is the inclination of any one of the four triangular faces to the horizontal plane of its base, and is measured as so many horizontal units per one vertical unit rise. It is thus a measure equivalent to our modern cotangent of the angle of slope.
In general, the seked of a pyramid is a kind of fraction, given as so many palms horizontally for each cubit of vertical rise, where 7 palms = 1 cubit. The Egyptian word 'seked' is thus related [in meaning, not origin] to our modern word 'gradient'. [ 2 ]
Many of the smaller pyramids in Egypt have varying slopes; however, like the Great Pyramid of Giza, the pyramid at Meidum is thought to have had sides that sloped by [ 8 ] 51.842° or 51° 50' 35", which is a seked of 5 + 1 / 2 palms.
The Great Pyramid scholar Professor I E S Edwards considered this to have been the 'normal' or most typical slope choice for pyramids. [ 9 ] Flinders Petrie also noted the similarity of the slope of this pyramid to that of the Great Pyramid at Giza, and both Egyptologists considered it to have been a deliberate choice, based on a desire to ensure that the circuit of the base of the pyramids precisely equalled the circumference of a circle that would be swept out if the pyramid's height were used as a radius. [ 10 ] [ clarification needed ] Petrie wrote "...these relations of areas and of circular ratio are so systematic that we should grant that they were in the builder's design". [ 11 ]
Slopes of edges are simpler ratios than slopes of faces. [ 12 ] | https://en.wikipedia.org/wiki/Seked |
Sel gris (pl. sels gris , "gray salt" in French ) is a coarse granular sea salt popularized by the French. Sel gris comes from the same solar evaporation salt pans as fleur de sel but is harvested differently; it is allowed to come into contact with the bottom of the salt pan before being raked, hence its gray color. Sel gris is coarser than fleur de sel but is also a moist salt, typically containing 13 percent residual moisture. [ 1 ]
The bottom of the salt pan ( French : oeillet ) is typically composed of clay lined pools, basalt, sand, concrete, or tile. This keeps the salt from coming into contact with the silt beneath and becoming dirty. Every few days, or on occasion daily, the harvester (French: paludier ) pushes or pulls the salt with a long wooden rake. This must be done carefully as the depth of the brine may be as little as 1 ⁄ 4 inch (6.4 mm) and the clay bottom must not be penetrated at the risk of contaminating the salt. [ 2 ] The salt is raked toward the sides of the pan where it is then shoveled into a pile and left to dry slightly before storing. [ 3 ] 90 to 165 pounds (41 to 75 kg) of sel gris can be harvested in one day, whereas for fleurs de sel the daily yield is only 4.5 to 6.6 pounds (2.0 to 3.0 kg). [ 2 ]
Because of its mineral complexity and coarse grain size, sel gris can be used both as a cooking salt and a finishing salt. Because it is much denser than table and kosher salt , an equivalent volume of sel gris contains considerably more salt.
Because it is a moist salt, it does not suck all the moisture out of food when used as a finishing salt, unlike kosher salt (which is designed to absorb blood and other fluids from meat). [ 1 ]
Most producers of fleur de sel also produce sel gris. | https://en.wikipedia.org/wiki/Sel_gris |
In number theory , Selberg's identity is an approximate identity involving logarithms of primes named after Atle Selberg . The identity, discovered jointly by Selberg and Paul Erdős , was used in the first elementary proof for the prime number theorem .
There are several different but equivalent forms of Selberg's identity. One form is
$$\sum_{p\le x}\log^{2}p \;+\; \sum_{pq\le x}\log p\,\log q \;=\; 2x\log x + O(x),$$
where the sums are over primes p and q .
The strange-looking expression on the left side of Selberg's identity is (up to smaller terms) the sum
$$\sum_{n<x}c_{n},$$
where the numbers
$$c_{n}=\Lambda(n)\log n+\sum_{d\mid n}\Lambda(d)\Lambda\!\left(\frac{n}{d}\right)$$
are the coefficients of the Dirichlet series
$$\sum_{n=1}^{\infty}\frac{c_{n}}{n^{s}}=\frac{\zeta''(s)}{\zeta(s)}.$$
This function has a pole of order 2 at s = 1 with coefficient 2, which gives the dominant term 2x log(x) in the asymptotic expansion of $\sum_{n<x}c_{n}$.
Selberg's identity sometimes also refers to the following divisor-sum identity involving the von Mangoldt function and the Möbius function when n ≥ 1: [ 1 ]
$$\Lambda(n)\log n+\sum_{d\mid n}\Lambda(d)\Lambda\!\left(\frac{n}{d}\right)=\sum_{d\mid n}\mu(d)\log^{2}\frac{n}{d}.$$
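The identity as stated above can be checked numerically. The sketch below implements the von Mangoldt and Möbius functions directly from their definitions and compares both sides for small n; the range and tolerance are arbitrary.

```python
import math

def prime_factorization(n):
    """Prime factorization of n as a dict {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def mangoldt(n):
    """Von Mangoldt function: log p if n is a power of a prime p, else 0."""
    f = prime_factorization(n)
    return math.log(next(iter(f))) if len(f) == 1 else 0.0

def mobius(n):
    """Moebius function: 0 if n has a squared factor, else (-1)^(number of prime factors)."""
    f = prime_factorization(n)
    if any(e > 1 for e in f.values()):
        return 0
    return -1 if len(f) % 2 else 1

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# Both sides of the identity agree (up to floating-point error) for small n.
for n in range(1, 200):
    left = mangoldt(n) * math.log(n) + sum(mangoldt(d) * mangoldt(n // d) for d in divisors(n))
    right = sum(mobius(d) * math.log(n / d) ** 2 for d in divisors(n))
    assert abs(left - right) < 1e-9, n
```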
This variant of Selberg's identity is proved using the concept of taking derivatives of arithmetic functions, defined by $f'(n)=f(n)\cdot\log(n)$, in Section 2.18 of Apostol's book. | https://en.wikipedia.org/wiki/Selberg's_identity
In mathematics, the Selberg conjecture , named after Atle Selberg , is a theorem about the density of zeros of the Riemann zeta function ζ(1/2 + it ). It is known that the function has infinitely many zeroes on this line in the complex plane: the point at issue is how densely they are clustered. Results on this can be formulated in terms of N ( T ), the function counting zeroes on the line for which the value of t satisfies 0 ≤ t ≤ T .
In 1942 Atle Selberg investigated the problem of the Hardy–Littlewood conjecture 2; he proved that for any ε > 0 there exist $T_{0}=T_{0}(\varepsilon)>0$ and $c=c(\varepsilon)>0$ such that for $T\ge T_{0}$ and $H=T^{0.5+\varepsilon}$ the inequality
$$N(T+H)-N(T)\ge cH\log T$$
holds true.
In his turn, Selberg stated a conjecture relating to shorter intervals, [ 1 ] namely that it is possible to decrease the value of the exponent a = 0.5 in $H=T^{a+\varepsilon}$.
In 1984 Anatolii Karatsuba proved [ 2 ] [ 3 ] [ 4 ] that for a fixed ε satisfying the condition 0 < ε < 0.001, a sufficiently large T and $H=T^{a+\varepsilon}$, $a=\tfrac{27}{82}=\tfrac{1}{3}-\tfrac{1}{246}$, the interval of ordinates (T, T + H] contains at least cH ln T real zeros of the Riemann zeta function $\zeta\!\left(\tfrac{1}{2}+it\right)$, and thereby confirmed the Selberg conjecture. The estimates of Selberg and Karatsuba cannot be improved in respect of the order of growth as T → +∞.
In 1992 Karatsuba proved [ 5 ] that an analog of the Selberg conjecture holds for "almost all" intervals ( T , T + H ], $H=T^{\varepsilon}$, where ε is an arbitrarily small fixed positive number. The Karatsuba method permits one to investigate zeros of the Riemann zeta function on "supershort" intervals of the critical line, that is, on intervals ( T , T + H ] whose length H grows more slowly than any power of T, even an arbitrarily small one.
In particular, he proved that for any given numbers ε, ε₁ satisfying the conditions 0 < ε, ε₁ < 1, almost all intervals ( T , T + H ] for $H\ge\exp[(\ln T)^{\varepsilon}]$ contain at least $H\,(\ln T)^{1-\varepsilon_{1}}$ zeros of the function $\zeta\!\left(\tfrac{1}{2}+it\right)$. This estimate is quite close to the conditional result that follows from the Riemann hypothesis . | https://en.wikipedia.org/wiki/Selberg's_zeta_function_conjecture