Dataset schema (one record per article below): id (int64, 39 to 79M), url (string, 32 to 168 chars), text (string, 7 to 145k chars), source (string, 2 to 105 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items).
100,267
https://en.wikipedia.org/wiki/Dihedral%20group
In mathematics, a dihedral group is the group of symmetries of a regular polygon, which includes rotations and reflections. Dihedral groups are among the simplest examples of finite groups, and they play an important role in group theory, geometry, and chemistry. The notation for the dihedral group differs in geometry and abstract algebra. In geometry, or refers to the symmetries of the -gon, a group of order . In abstract algebra, refers to this same dihedral group. This article uses the geometric convention, . Definition The word "dihedral" comes from "di-" and "-hedron". The latter comes from the Greek word hédra, which means "face of a geometrical solid". Overall it thus refers to the two faces of a polygon. Elements A regular polygon with sides has different symmetries: rotational symmetries and reflection symmetries. Usually, we take here. The associated rotations and reflections make up the dihedral group . If is odd, each axis of symmetry connects the midpoint of one side to the opposite vertex. If is even, there are axes of symmetry connecting the midpoints of opposite sides and axes of symmetry connecting opposite vertices. In either case, there are axes of symmetry and elements in the symmetry group. Reflecting in one axis of symmetry followed by reflecting in another axis of symmetry produces a rotation through twice the angle between the axes. Group structure As with any geometric object, the composition of two symmetries of a regular polygon is again a symmetry of this object. With composition of symmetries to produce another as the binary operation, this gives the symmetries of a polygon the algebraic structure of a finite group. The following Cayley table shows the effect of composition in the group D3 (the symmetries of an equilateral triangle). r0 denotes the identity; r1 and r2 denote counterclockwise rotations by 120° and 240° respectively, and s0, s1 and s2 denote reflections across the three lines shown in the adjacent picture. For example, , because the reflection s1 followed by the reflection s2 results in a rotation of 120°. The order of elements denoting the composition is right to left, reflecting the convention that the element acts on the expression to its right. The composition operation is not commutative. In general, the group Dn has elements r0, ..., rn−1 and s0, ..., sn−1, with composition given by the following formulae: In all cases, addition and subtraction of subscripts are to be performed using modular arithmetic with modulus n. Matrix representation If we center the regular polygon at the origin, then elements of the dihedral group act as linear transformations of the plane. This lets us represent elements of Dn as matrices, with composition being matrix multiplication. This is an example of a (2-dimensional) group representation. For example, the elements of the group D4 can be represented by the following eight matrices: In general, the matrices for elements of Dn have the following form: rk is a rotation matrix, expressing a counterclockwise rotation through an angle of . sk is a reflection across a line that makes an angle of with the x-axis. Other definitions is the semidirect product of acting on via the automorphism . It hence has presentation Using the relation , we obtain the relation . It follows that is generated by and . This substitution also shows that has the presentation In particular, belongs to the class of Coxeter groups. Small dihedral groups is isomorphic to , the cyclic group of order 2. is isomorphic to , the Klein four-group. 
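Below is a minimal sketch, in Python with numpy, of the matrix representation just described: r_k as a counterclockwise rotation through 2πk/n and s_k as a reflection across a line at angle πk/n to the x-axis, with composition given by matrix multiplication. The choice n = 4 and the function names r and s are illustrative assumptions, not part of any standard library.

```python
import numpy as np

# Sketch of the 2x2 matrix representation of D_n (here n = 4, chosen as an example).
# r(k) is a counterclockwise rotation by 2*pi*k/n; s(k) reflects across a line
# making an angle of pi*k/n with the x-axis.
n = 4

def r(k):
    a = 2 * np.pi * k / n
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

def s(k):
    a = 2 * np.pi * k / n
    return np.array([[np.cos(a),  np.sin(a)],
                     [np.sin(a), -np.cos(a)]])

# Composition is matrix multiplication.  Two checks of the group structure:
# a reflection followed by a reflection is a rotation through twice the angle
# between the axes, and the group is non-abelian for n > 2.
assert np.allclose(s(2) @ s(1), r(1))             # s1 then s2 gives the rotation r1
assert not np.allclose(r(1) @ s(0), s(0) @ r(1))  # rotation and reflection do not commute
```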
and are exceptional in that: and are the only abelian dihedral groups. Otherwise, is non-abelian. is a subgroup of the symmetric group for . Since for or , for these values, is too large to be a subgroup. The inner automorphism group of is trivial, whereas for other even values of , this is . The cycle graphs of dihedral groups consist of an n-element cycle and n 2-element cycles. The dark vertex in the cycle graphs below of various dihedral groups represents the identity element, and the other vertices are the other elements of the group. A cycle consists of successive powers of either of the elements connected to the identity element. The dihedral group as symmetry group in 2D and rotation group in 3D An example of abstract group , and a common way to visualize it, is the group of Euclidean plane isometries which keep the origin fixed. These groups form one of the two series of discrete point groups in two dimensions. consists of rotations of multiples of about the origin, and reflections across lines through the origin, making angles of multiples of with each other. This is the symmetry group of a regular polygon with sides (for ; this extends to the cases and where we have a plane with respectively a point offset from the "center" of the "1-gon" and a "2-gon" or line segment). is generated by a rotation of order and a reflection of order 2 such that In geometric terms: in the mirror a rotation looks like an inverse rotation. In terms of complex numbers: multiplication by and complex conjugation. In matrix form, by setting and defining and for we can write the product rules for Dn as (Compare coordinate rotations and reflections.) The dihedral group D2 is generated by the rotation r of 180 degrees, and the reflection s across the x-axis. The elements of D2 can then be represented as {e, r, s, rs}, where e is the identity or null transformation and rs is the reflection across the y-axis. D2 is isomorphic to the Klein four-group. For n > 2 the operations of rotation and reflection in general do not commute and Dn is not abelian; for example, in D4, a rotation of 90 degrees followed by a reflection yields a different result from a reflection followed by a rotation of 90 degrees. Thus, beyond their obvious application to problems of symmetry in the plane, these groups are among the simplest examples of non-abelian groups, and as such arise frequently as easy counterexamples to theorems which are restricted to abelian groups. The elements of can be written as , , , ... , , , , , ... , . The first listed elements are rotations and the remaining elements are axis-reflections (all of which have order 2). The product of two rotations or two reflections is a rotation; the product of a rotation and a reflection is a reflection. So far, we have considered to be a subgroup of , i.e. the group of rotations (about the origin) and reflections (across axes through the origin) of the plane. However, notation is also used for a subgroup of SO(3) which is also of abstract group type : the proper symmetry group of a regular polygon embedded in three-dimensional space (if n ≥ 3). Such a figure may be considered as a degenerate regular solid with its face counted twice. Therefore, it is also called a dihedron (Greek: solid with two faces), which explains the name dihedral group (in analogy to tetrahedral, octahedral and icosahedral group, referring to the proper symmetry groups of a regular tetrahedron, octahedron, and icosahedron respectively). 
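The presentation ⟨r, s | r^n = s^2 = 1, s r s = r^(-1)⟩ can also be checked directly. The following self-contained Python sketch encodes an element s^i r^k as a pair (i, k) (an ad hoc convention chosen here, with n = 5 as an arbitrary example) and verifies the product rules quoted above: two rotations or two reflections compose to a rotation, a rotation and a reflection compose to a reflection, and every reflection has order 2.

```python
# Sketch of D_n as an abstract group via the presentation
# <r, s | r^n = s^2 = 1, s r s = r^(-1)>.  An element is a pair (i, k) standing
# for s^i * r^k, with i in {0, 1} and k in {0, ..., n-1}.  n = 5 is arbitrary.
n = 5

def mul(a, b):
    """Compose (i, k) * (j, m) = s^i r^k s^j r^m, using r^k s = s r^(-k)."""
    i, k = a
    j, m = b
    if j == 0:
        return (i, (k + m) % n)
    return ((i + 1) % 2, (m - k) % n)

elements = [(i, k) for i in (0, 1) for k in range(n)]
assert len(elements) == 2 * n

rotations   = [e for e in elements if e[0] == 0]
reflections = [e for e in elements if e[0] == 1]

# Product of two reflections is a rotation; rotation * reflection is a reflection.
assert all(mul(a, b) in rotations for a in reflections for b in reflections)
assert all(mul(a, b) in reflections for a in rotations for b in reflections)

# Every reflection has order 2.
assert all(mul(t, t) == (0, 0) for t in reflections)
```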
Examples of 2D dihedral symmetry Properties The properties of the dihedral groups with depend on whether is even or odd. For example, the center of consists only of the identity if n is odd, but if n is even the center has two elements, namely the identity and the element rn/2 (with Dn as a subgroup of O(2), this is inversion; since it is scalar multiplication by −1, it is clear that it commutes with any linear transformation). In the case of 2D isometries, this corresponds to adding inversion, giving rotations and mirrors in between the existing ones. For n twice an odd number, the abstract group is isomorphic with the direct product of and . Generally, if m divides n, then has n/m subgroups of type , and one subgroup m. Therefore, the total number of subgroups of (n ≥ 1), is equal to d(n) + σ(n), where d(n) is the number of positive divisors of n and σ(n) is the sum of the positive divisors of n. See list of small groups for the cases n ≤ 8. The dihedral group of order 8 (D4) is the smallest example of a group that is not a T-group. Any of its two Klein four-group subgroups (which are normal in D4) has as normal subgroup order-2 subgroups generated by a reflection (flip) in D4, but these subgroups are not normal in D4. Conjugacy classes of reflections All the reflections are conjugate to each other whenever n is odd, but they fall into two conjugacy classes if n is even. If we think of the isometries of a regular n-gon: for odd n there are rotations in the group between every pair of mirrors, while for even n only half of the mirrors can be reached from one by these rotations. Geometrically, in an odd polygon every axis of symmetry passes through a vertex and a side, while in an even polygon there are two sets of axes, each corresponding to a conjugacy class: those that pass through two vertices and those that pass through two sides. Algebraically, this is an instance of the conjugate Sylow theorem (for n odd): for n odd, each reflection, together with the identity, form a subgroup of order 2, which is a Sylow 2-subgroup ( is the maximum power of 2 dividing ), while for n even, these order 2 subgroups are not Sylow subgroups because 4 (a higher power of 2) divides the order of the group. For n even there is instead an outer automorphism interchanging the two types of reflections (properly, a class of outer automorphisms, which are all conjugate by an inner automorphism). Automorphism group The automorphism group of is isomorphic to the holomorph of /n, i.e., to and has order nϕ(n), where ϕ is Euler's totient function, the number of k in coprime to n. It can be understood in terms of the generators of a reflection and an elementary rotation (rotation by k(2π/n), for k coprime to n); which automorphisms are inner and outer depends on the parity of n. For n odd, the dihedral group is centerless, so any element defines a non-trivial inner automorphism; for n even, the rotation by 180° (reflection through the origin) is the non-trivial element of the center. Thus for n odd, the inner automorphism group has order 2n, and for n even (other than ) the inner automorphism group has order n. For n odd, all reflections are conjugate; for n even, they fall into two classes (those through two vertices and those through two faces), related by an outer automorphism, which can be represented by rotation by π/n (half the minimal rotation). The rotations are a normal subgroup; conjugation by a reflection changes the sign (direction) of the rotation, but otherwise leaves them unchanged. 
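The subgroup count and the conjugacy classes of reflections described above can be checked numerically. The sketch below uses sympy's divisor_count and divisor_sigma for the formula d(n) + σ(n), and a brute-force conjugation search over the same ad hoc pair encoding (i, k) for s^i r^k used earlier; the helper names are invented for this illustration.

```python
from sympy import divisor_count, divisor_sigma

# Total number of subgroups of D_n is d(n) + sigma(n), as stated above.
for n in range(1, 9):
    print(n, divisor_count(n) + divisor_sigma(n))

# Brute-force check of the conjugacy classes of reflections, using the pair
# encoding (i, k) for s^i r^k (ad hoc helpers, same convention as before).
def classes_of_reflections(n):
    def mul(a, b):
        (i, k), (j, m) = a, b
        return (i, (k + m) % n) if j == 0 else ((i + 1) % 2, (m - k) % n)
    def inv(a):
        i, k = a
        return (0, (-k) % n) if i == 0 else a   # reflections are involutions
    G = [(i, k) for i in (0, 1) for k in range(n)]
    refl = [g for g in G if g[0] == 1]
    classes = []
    for t in refl:
        orbit = frozenset(mul(mul(g, t), inv(g)) for g in G)
        if orbit not in classes:
            classes.append(orbit)
    return len(classes)

print([classes_of_reflections(n) for n in range(3, 9)])   # 1, 2, 1, 2, 1, 2
```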
Thus automorphisms that multiply angles by k (coprime to n) are outer unless . Examples of automorphism groups has 18 inner automorphisms. As 2D isometry group D9, the group has mirrors at 20° intervals. The 18 inner automorphisms provide rotation of the mirrors by multiples of 20°, and reflections. As isometry group these are all automorphisms. As abstract group there are in addition to these, 36 outer automorphisms; e.g., multiplying angles of rotation by 2. has 10 inner automorphisms. As 2D isometry group D10, the group has mirrors at 18° intervals. The 10 inner automorphisms provide rotation of the mirrors by multiples of 36°, and reflections. As isometry group there are 10 more automorphisms; they are conjugates by isometries outside the group, rotating the mirrors 18° with respect to the inner automorphisms. As abstract group there are in addition to these 10 inner and 10 outer automorphisms, 20 more outer automorphisms; e.g., multiplying rotations by 3. Compare the values 6 and 4 for Euler's totient function, the multiplicative group of integers modulo n for n = 9 and 10, respectively. This triples and doubles the number of automorphisms compared with the two automorphisms as isometries (keeping the order of the rotations the same or reversing the order). The only values of n for which φ(n) = 2 are 3, 4, and 6, and consequently, there are only three dihedral groups that are isomorphic to their own automorphism groups, namely (order 6), (order 8), and (order 12). Inner automorphism group The inner automorphism group of is isomorphic to: if n is odd; if is even (for , ). Generalizations There are several important generalizations of the dihedral groups: The infinite dihedral group is an infinite group with algebraic structure similar to the finite dihedral groups. It can be viewed as the group of symmetries of the integers. The orthogonal group O(2), i.e., the symmetry group of the circle, also has similar properties to the dihedral groups. The family of generalized dihedral groups includes both of the examples above, as well as many other groups. The quasidihedral groups are family of finite groups with similar properties to the dihedral groups. See also Coordinate rotations and reflections Cycle index of the dihedral group Dicyclic group Dihedral group of order 6 Dihedral group of order 8 Dihedral symmetry groups in 3D Dihedral symmetry in three dimensions References External links Dihedral Group n of Order 2n by Shawn Dudzik, Wolfram Demonstrations Project. Dihedral group at Groupprops Dihedral groups on GroupNames Euclidean symmetries Finite reflection groups Properties of groups
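As a quick sanity check of the order formula |Aut(D_n)| = nφ(n) stated above, the two worked examples give 9·φ(9) = 54 (18 inner plus 36 outer automorphisms for D_9) and 10·φ(10) = 40 (10 inner plus 30 outer for D_10). A one-liner with sympy's totient:

```python
from sympy import totient

# The automorphism group of D_n has order n * phi(n).
for n in (9, 10):
    print(n, n * totient(n))   # 54 and 40, matching the counts above
```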
Dihedral group
[ "Physics", "Mathematics" ]
3,022
[ "Functions and mappings", "Mathematical structures", "Euclidean symmetries", "Mathematical objects", "Properties of groups", "Algebraic structures", "Mathematical relations", "Symmetry" ]
100,303
https://en.wikipedia.org/wiki/List%20of%20group%20theory%20topics
In mathematics and abstract algebra, group theory studies the algebraic structures known as groups. The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces, can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right. Various physical systems, such as crystals and the hydrogen atom, may be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography. Structures and operations Central extension Direct product of groups Direct sum of groups Extension problem Free abelian group Free group Free product Generating set of a group Group cohomology Group extension Presentation of a group Product of group subsets Schur multiplier Semidirect product Sylow theorems Hall subgroup Wreath product Basic properties of groups Butterfly lemma Center of a group Centralizer and normalizer Characteristic subgroup Commutator Composition series Conjugacy class Conjugate closure Conjugation of isometries in Euclidean space Core (group) Coset Derived group Euler's theorem Fitting subgroup Generalized Fitting subgroup Hamiltonian group Identity element Lagrange's theorem Multiplicative inverse Normal subgroup Perfect group p-core Schreier refinement theorem Subgroup Transversal (combinatorics) Torsion subgroup Zassenhaus lemma Group homomorphisms Automorphism Automorphism group Factor group Fundamental theorem on homomorphisms Group homomorphism Group isomorphism Homomorphism Isomorphism theorem Inner automorphism Order automorphism Outer automorphism group Quotient group Basic types of groups Examples of groups Abelian group Cyclic group Rank of an abelian group Dicyclic group Dihedral group Divisible group Finitely generated abelian group Group representation Klein four-group List of small groups Locally cyclic group Nilpotent group Non-abelian group Solvable group P-group Pro-finite group Simple groups and their classification Classification of finite simple groups Alternating group Borel subgroup Chevalley group Conway group Feit–Thompson theorem Fischer group General linear group Group of Lie type Group scheme HN group Janko group Lie group Simple Lie group Linear algebraic group List of finite simple groups Mathieu group Monster group Baby Monster group Bimonster Projective group Reductive group Simple group Quasisimple group Special linear group Symmetric group Thompson group (finite) Tits group Weyl group Permutation and symmetry groups Arithmetic group Braid group Burnside's lemma Cayley's theorem Coxeter group Crystallographic group Crystallographic point group, Schoenflies notation Discrete group Euclidean group Even and odd permutations Frieze group Frobenius group Fuchsian group Geometric group theory Group action Homogeneous space Hyperbolic group Isometry group Orbit (group theory) Permutation Permutation group Rubik's Cube group Space group Stabilizer subgroup Steiner system Strong generating set Symmetry Symmetric group Symmetry group Wallpaper group Concepts groups share with other mathematics Associativity Bijection Bilinear operator Binary operation Commutative Congruence relation 
Equivalence class Equivalence relation Lattice (group) Lattice (discrete subgroup) Multiplication table Prime number Up to Mathematical objects making use of a group operation Abelian variety Algebraic group Banach–Tarski paradox Category of groups Dimensional analysis Elliptic curve Galois group Gell-Mann matrices Group object Hilbert space Integer Lie group Matrix Modular arithmetic Number Pauli matrices Real number Quaternion Quaternion group Tensor Mathematical fields and topics making important use of group theory Algebraic geometry Algebraic topology Discrete space Fundamental group Geometry Homology Minkowski's theorem Topological group Algebraic structures related to groups Field Finite field Galois theory Grothendieck group Group ring Group with operators Heap Linear algebra Magma Module Monoid Monoid ring Quandle Quasigroup Quantum group Ring Semigroup Vector space Group representations Affine representation Character theory Great orthogonality theorem Maschke's theorem Monstrous moonshine Projective representation Representation theory Schur's lemma Computational group theory Coset enumeration Schreier's subgroup lemma Schreier–Sims algorithm Todd–Coxeter algorithm Applications Computer algebra system Cryptography Discrete logarithm Triple DES Caesar cipher Exponentiating by squaring Knapsack problem Shor's algorithm Standard Model Symmetry in physics Famous problems Burnside's problem Classification of finite simple groups Herzog–Schönheim conjecture Subset sum problem Whitehead problem Word problem for groups Other topics Amenable group Capable group Commensurability (group theory) Compact group Compactly generated group Complete group Complex reflection group Congruence subgroup Continuous symmetry Frattini subgroup Growth rate Heisenberg group, discrete Heisenberg group Molecular symmetry Nielsen transformation Reflection group Tarski monster group Thompson groups Tietze transformation Transfer (group theory) Group theorists N. Abel M. Aschbacher R. Baer R. Brauer W. Burnside R. Carter A. Cauchy A. Cayley J.H. Conway R. Dedekind L.E. Dickson M. Dunwoody W. Feit B. Fischer H. Fitting G. Frattini G. Frobenius E. Galois G. Glauberman D. Gorenstein R.L. Griess M. Hall, Jr. P. Hall G. Higman D. Hilbert O. Hölder B. Huppert K. Iwasawa Z. Janko C. Jordan F. Klein A. Kurosh J.L. Lagrange C. Leedham-Green F.W. Levi Sophus Lie W. Magnus E. Mathieu G.A. Miller B.H. Neumann H. Neumann J. Nielson Emmy Noether Ø. Ore O. Schreier I. Schur R. Steinberg M. Suzuki L. Sylow J. Thompson J. Tits Helmut Wielandt H. Zassenhaus M. Zorn See also List of abstract algebra topics List of category theory topics List of Lie group topics Mathematics-related lists Outlines of mathematics and logic Outlines
List of group theory topics
[ "Mathematics" ]
1,266
[ "Mathematical structures", "Properties of groups", "Group theory", "Fields of abstract algebra", "Algebraic structures", "nan" ]
100,349
https://en.wikipedia.org/wiki/Legendre%20polynomials
In mathematics, Legendre polynomials, named after Adrien-Marie Legendre (1782), are a system of complete and orthogonal polynomials with a wide number of mathematical properties and numerous applications. They can be defined in many ways, and the various definitions highlight different aspects as well as suggest generalizations and connections to different mathematical structures and physical and numerical applications. Closely related to the Legendre polynomials are associated Legendre polynomials, Legendre functions, Legendre functions of the second kind, big q-Legendre polynomials, and associated Legendre functions. Definition and representation Definition by construction as an orthogonal system In this approach, the polynomials are defined as an orthogonal system with respect to the weight function over the interval . That is, is a polynomial of degree , such that With the additional standardization condition , all the polynomials can be uniquely determined. We then start the construction process: is the only correctly standardized polynomial of degree 0. must be orthogonal to , leading to , and is determined by demanding orthogonality to and , and so on. is fixed by demanding orthogonality to all with . This gives conditions, which, along with the standardization fixes all coefficients in . With work, all the coefficients of every polynomial can be systematically determined, leading to the explicit representation in powers of given below. This definition of the 's is the simplest one. It does not appeal to the theory of differential equations. Second, the completeness of the polynomials follows immediately from the completeness of the powers 1, . Finally, by defining them via orthogonality with respect to the Lebesgue measure on , it sets up the Legendre polynomials as one of the three classical orthogonal polynomial systems. The other two are the Laguerre polynomials, which are orthogonal over the half line with the weight , and the Hermite polynomials, orthogonal over the full line with weight . Definition via generating function The Legendre polynomials can also be defined as the coefficients in a formal expansion in powers of of the generating function The coefficient of is a polynomial in of degree with . Expanding up to gives Expansion to higher orders gets increasingly cumbersome, but is possible to do systematically, and again leads to one of the explicit forms given below. It is possible to obtain the higher 's without resorting to direct expansion of the Taylor series, however. Equation  is differentiated with respect to on both sides and rearranged to obtain Replacing the quotient of the square root with its definition in Eq. , and equating the coefficients of powers of in the resulting expansion gives Bonnet’s recursion formula This relation, along with the first two polynomials and , allows all the rest to be generated recursively. The generating function approach is directly connected to the multipole expansion in electrostatics, as explained below, and is how the polynomials were first defined by Legendre in 1782. Definition via differential equation A third definition is in terms of solutions to Legendre's differential equation: This differential equation has regular singular points at so if a solution is sought using the standard Frobenius or power series method, a series about the origin will only converge for in general. When is an integer, the solution that is regular at is also regular at , and the series for this solution terminates (i.e. it is a polynomial). 
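Bonnet's recursion formula, (n+1)P_{n+1}(x) = (2n+1)x P_n(x) - n P_{n-1}(x), together with P_0 = 1 and P_1 = x, is easy to run mechanically. The sketch below builds the first few polynomials this way in sympy and compares them against sympy's built-in legendre; it is an illustration of the recursion, not of any particular reference implementation.

```python
from sympy import symbols, expand, legendre

# Bonnet's recursion: (n + 1) P_{n+1}(x) = (2n + 1) x P_n(x) - n P_{n-1}(x),
# started from P_0 = 1 and P_1 = x.
x = symbols('x')
P = [1, x]
for n in range(1, 6):
    P.append(expand(((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1)))

for n, p in enumerate(P):
    assert expand(p - legendre(n, x)) == 0   # agrees with the standard P_n
    print(n, p)
```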
The orthogonality and completeness of these solutions is best seen from the viewpoint of Sturm–Liouville theory. We rewrite the differential equation as an eigenvalue problem, with the eigenvalue in lieu of . If we demand that the solution be regular at , the differential operator on the left is Hermitian. The eigenvalues are found to be of the form , with and the eigenfunctions are the . The orthogonality and completeness of this set of solutions follows at once from the larger framework of Sturm–Liouville theory. The differential equation admits another, non-polynomial solution, the Legendre functions of the second kind . A two-parameter generalization of (Eq. ) is called Legendre's general differential equation, solved by the Associated Legendre polynomials. Legendre functions are solutions of Legendre's differential equation (generalized or not) with non-integer parameters. In physical settings, Legendre's differential equation arises naturally whenever one solves Laplace's equation (and related partial differential equations) by separation of variables in spherical coordinates. From this standpoint, the eigenfunctions of the angular part of the Laplacian operator are the spherical harmonics, of which the Legendre polynomials are (up to a multiplicative constant) the subset that is left invariant by rotations about the polar axis. The polynomials appear as where is the polar angle. This approach to the Legendre polynomials provides a deep connection to rotational symmetry. Many of their properties which are found laboriously through the methods of analysis — for example the addition theorem — are more easily found using the methods of symmetry and group theory, and acquire profound physical and geometrical meaning. Rodrigues' formula and other explicit formulas An especially compact expression for the Legendre polynomials is given by Rodrigues' formula: This formula enables derivation of a large number of properties of the 's. Among these are explicit representations such as Expressing the polynomial as a power series, , the coefficients of powers of can also be calculated using a general formula:The Legendre polynomial is determined by the values used for the two constants and , where if is odd and if is even. In the fourth representation, stands for the largest integer less than or equal to . The last representation, which is also immediate from the recursion formula, expresses the Legendre polynomials by simple monomials and involves the generalized form of the binomial coefficient. The first few Legendre polynomials are: The graphs of these polynomials (up to ) are shown below: Main properties Orthogonality The standardization fixes the normalization of the Legendre polynomials (with respect to the norm on the interval ). Since they are also orthogonal with respect to the same norm, the two statements can be combined into the single equation, (where denotes the Kronecker delta, equal to 1 if and to 0 otherwise). This normalization is most readily found by employing Rodrigues' formula, given below. Completeness That the polynomials are complete means the following. Given any piecewise continuous function with finitely many discontinuities in the interval , the sequence of sums converges in the mean to as , provided we take This completeness property underlies all the expansions discussed in this article, and is often stated in the form with and . 
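Both Rodrigues' formula and the orthogonality relation ∫ P_m(x) P_n(x) dx = 2/(2n+1) δ_{mn} over [-1, 1] can be verified symbolically for small n. The following sympy sketch does exactly that; the function name rodrigues is an ad hoc label for this example.

```python
from sympy import symbols, diff, factorial, integrate, Rational, simplify, legendre

x = symbols('x')

def rodrigues(n):
    """P_n from Rodrigues' formula: (1 / (2^n n!)) d^n/dx^n (x^2 - 1)^n."""
    return simplify(diff((x**2 - 1)**n, x, n) / (2**n * factorial(n)))

# The Rodrigues polynomials agree with the standard Legendre polynomials, and
# integrate to 2/(2n+1) * delta_{mn} over [-1, 1], as stated above.
for n in range(4):
    assert simplify(rodrigues(n) - legendre(n, x)) == 0
    for m in range(4):
        val = integrate(legendre(m, x) * legendre(n, x), (x, -1, 1))
        expected = Rational(2, 2 * n + 1) if m == n else 0
        assert val == expected
```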
Applications Expanding an inverse distance potential The Legendre polynomials were first introduced in 1782 by Adrien-Marie Legendre as the coefficients in the expansion of the Newtonian potential where and are the lengths of the vectors and respectively and is the angle between those two vectors. The series converges when . The expression gives the gravitational potential associated to a point mass or the Coulomb potential associated to a point charge. The expansion using Legendre polynomials might be useful, for instance, when integrating this expression over a continuous mass or charge distribution. Legendre polynomials occur in the solution of Laplace's equation of the static potential, , in a charge-free region of space, using the method of separation of variables, where the boundary conditions have axial symmetry (no dependence on an azimuthal angle). Where is the axis of symmetry and is the angle between the position of the observer and the axis (the zenith angle), the solution for the potential will be and are to be determined according to the boundary condition of each problem. They also appear when solving the Schrödinger equation in three dimensions for a central force. In multipole expansions Legendre polynomials are also useful in expanding functions of the form (this is the same as before, written a little differently): which arise naturally in multipole expansions. The left-hand side of the equation is the generating function for the Legendre polynomials. As an example, the electric potential (in spherical coordinates) due to a point charge located on the -axis at (see diagram right) varies as If the radius of the observation point is greater than , the potential may be expanded in the Legendre polynomials where we have defined and . This expansion is used to develop the normal multipole expansion. Conversely, if the radius of the observation point is smaller than , the potential may still be expanded in the Legendre polynomials as above, but with and exchanged. This expansion is the basis of interior multipole expansion. In trigonometry The trigonometric functions , also denoted as the Chebyshev polynomials , can also be multipole expanded by the Legendre polynomials . The first several orders are as follows: Another property is the expression for , which is In recurrent neural networks A recurrent neural network that contains a -dimensional memory vector, , can be optimized such that its neural activities obey the linear time-invariant system given by the following state-space representation: In this case, the sliding window of across the past units of time is best approximated by a linear combination of the first shifted Legendre polynomials, weighted together by the elements of at time : When combined with deep learning methods, these networks can be trained to outperform long short-term memory units and related architectures, while using fewer computational resources. Additional properties Legendre polynomials have definite parity. That is, they are even or odd, according to Another useful property is which follows from considering the orthogonality relation with . It is convenient when a Legendre series is used to approximate a function or experimental data: the average of the series over the interval is simply given by the leading expansion coefficient . 
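The multipole identity is just the generating function evaluated numerically: 1/sqrt(1 - 2xt + t^2) = Σ_n P_n(x) t^n with x = cos γ and t = r'/r < 1. A short numpy check, with arbitrarily chosen example values for x and t and a truncation order picked so the tail is negligible:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Numerical sketch of the generating-function / multipole identity
#   1 / sqrt(1 - 2 x t + t^2) = sum_n P_n(x) t^n,   |t| < 1.
x, t = np.cos(0.7), 0.4           # arbitrary example values
lhs = 1.0 / np.sqrt(1.0 - 2.0 * x * t + t * t)

N = 40                            # truncation order of the series
# legval(x, c) evaluates sum_k c[k] * P_k(x); here c[k] = t**k.
rhs = L.legval(x, [t**k for k in range(N + 1)])

print(lhs, rhs)                   # the two agree to about machine precision
assert abs(lhs - rhs) < 1e-12
```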
Since the differential equation and the orthogonality property are independent of scaling, the Legendre polynomials' definitions are "standardized" (sometimes called "normalization", but the actual norm is not 1) by being scaled so that The derivative at the end point is given by The Askey–Gasper inequality for Legendre polynomials reads The Legendre polynomials of a scalar product of unit vectors can be expanded with spherical harmonics using where the unit vectors and have spherical coordinates and , respectively. The product of two Legendre polynomials where is the complete elliptic integral of the first kind. Recurrence relations As discussed above, the Legendre polynomials obey the three-term recurrence relation known as Bonnet's recursion formula given by and or, with the alternative expression, which also holds at the endpoints Useful for the integration of Legendre polynomials is From the above one can see also that or equivalently where is the norm over the interval Asymptotics Asymptotically, for , the Legendre polynomials can be written as and for arguments of magnitude greater than 1 where , , and are Bessel functions. Zeros All zeros of are real, distinct from each other, and lie in the interval . Furthermore, if we regard them as dividing the interval into subintervals, each subinterval will contain exactly one zero of . This is known as the interlacing property. Because of the parity property it is evident that if is a zero of , so is . These zeros play an important role in numerical integration based on Gaussian quadrature. The specific quadrature based on the 's is known as Gauss-Legendre quadrature. From this property and the facts that , it follows that has local minima and maxima in . Equivalently, has zeros in . Pointwise evaluations The parity and normalization implicate the values at the boundaries to be At the origin one can show that the values are given by Variants with transformed argument Shifted Legendre polynomials The shifted Legendre polynomials are defined as Here the "shifting" function is an affine transformation that bijectively maps the interval to the interval , implying that the polynomials are orthogonal on : An explicit expression for the shifted Legendre polynomials is given by The analogue of Rodrigues' formula for the shifted Legendre polynomials is The first few shifted Legendre polynomials are: Legendre rational functions The Legendre rational functions are a sequence of orthogonal functions on [0, ∞). They are obtained by composing the Cayley transform with Legendre polynomials. A rational Legendre function of degree n is defined as: They are eigenfunctions of the singular Sturm–Liouville problem: with eigenvalues See also Gaussian quadrature Gegenbauer polynomials Turán's inequalities Legendre wavelet Legendre function Jacobi polynomials Romanovski polynomials Laplace expansion (potential) Notes References External links A quick informal derivation of the Legendre polynomial in the context of the quantum mechanics of hydrogen Wolfram MathWorld entry on Legendre polynomials Dr James B. Calvert's article on Legendre polynomials from his personal collection of mathematics The Legendre Polynomials by Carlyle E. Moore Legendre Polynomials from Hyperphysics Special hypergeometric functions Orthogonal polynomials Polynomials
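The zeros of P_n are exactly the nodes of n-point Gauss-Legendre quadrature mentioned above, which integrates polynomials of degree up to 2n - 1 exactly on [-1, 1]. A brief numpy illustration using leggauss, with x^8 as an example integrand:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Gauss-Legendre quadrature: the nodes are the zeros of P_n, and an n-point
# rule integrates polynomials of degree up to 2n - 1 exactly on [-1, 1].
n = 5
nodes, weights = leggauss(n)      # zeros of P_5 and the associated weights

# Exact for x^8 (degree 8 <= 2*5 - 1 = 9): the integral over [-1, 1] is 2/9.
approx = np.sum(weights * nodes**8)
print(approx, 2.0 / 9.0)
assert abs(approx - 2.0 / 9.0) < 1e-14

# All nodes lie strictly inside (-1, 1), as stated in the zeros section above.
assert np.all(np.abs(nodes) < 1.0)
```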
Legendre polynomials
[ "Mathematics" ]
2,648
[ "Polynomials", "Algebra" ]
100,439
https://en.wikipedia.org/wiki/Wingspan
The wingspan (or just span) of a bird or an airplane is the distance from one wingtip to the opposite wingtip. For example, the Boeing 777–200 has a wingspan of , and a wandering albatross (Diomedea exulans) caught in 1965 had a wingspan of , the official record for a living bird. The term wingspan, more technically extent, is also used for other winged animals such as pterosaurs, bats, insects, etc., and other aircraft such as ornithopters. In humans, the term wingspan also refers to the arm span, which is the distance between the length from the end of an individual's arm (measured at the fingertips) to the individual's fingertips on the other arm when raised parallel to the ground at shoulder height. Wingspan of aircraft The wingspan of an aircraft is always measured in a straight line, from wingtip to wingtip, regardless of wing shape or sweep. Implications for aircraft design and animal evolution The lift from wings is proportional to their area, so the heavier the animal or aircraft the bigger that area must be. The area is the product of the span times the width (mean chord) of the wing, so either a long, narrow wing or a shorter, broader wing will support the same mass. For efficient steady flight, the ratio of span to chord, the aspect ratio, should be as high as possible (the constraints are usually structural) because this lowers the lift-induced drag associated with the inevitable wingtip vortices. Long-ranging birds, like albatrosses, and most commercial aircraft maximize aspect ratio. Alternatively, animals and aircraft which depend on maneuverability (fighters, predators and prey, as well as those who live amongst trees and bushes, insect catchers, etc.) need to be able to roll fast to turn, and the high moment of inertia of long narrow wings, as well as the high angular drag and quick balancing of aileron lift with wing lift at a low rotation rate, produce lower roll rates. For them, short-span, broad wings are preferred. Additionally, ground handling in aircraft is a significant problem for very high aspect ratios and flying animals may encounter similar issues. The highest aspect ratio of man-made wings are aircraft propellers, in their most extreme form as helicopter rotors. Wingspan of flying animals To measure the wingspan of a bird, a live or freshly-dead specimen is placed flat on its back, the wings are grasped at the wrist joints and the distance is measured between the tips of the longest primary feathers on each wing. The wingspan of an insect refers to the wingspan of pinned specimens, and may refer to the distance between the centre of the thorax and the apex of the wing doubled or to the width between the apices with the wings set with the trailing wing edge perpendicular to the body. Wingspan in sports In basketball and gridiron football, a fingertip-to-fingertip measurement is used to determine the player's wingspan, also called armspan. This is called reach in boxing terminology. The wingspan of 16-year-old BeeJay Anya, a top basketball Junior Class of 2013 prospect who played for the NC State Wolfpack, was officially measured at across, one of the longest of all National Basketball Association draft prospects, and the longest ever for a non-7-foot player, though Anya went undrafted in 2017. The wingspan of Manute Bol, at , is (as of 2013) the longest in NBA history, and his vertical reach was . 
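A small arithmetic sketch of the span/chord trade-off described above may help: for a fixed wing area S (roughly fixed lift at a given speed), S = span x mean chord, so the aspect ratio is span/chord = span^2/S. The numbers below are invented purely for illustration.

```python
# Span/chord trade-off at fixed wing area S: AR = span / chord = span**2 / S.
S = 20.0                                   # wing area in m^2, assumed fixed
for span in (8.0, 12.0, 20.0):             # shorter and broader -> longer and narrower
    chord = S / span
    aspect_ratio = span / chord            # equivalently span**2 / S
    print(f"span={span:5.1f} m  chord={chord:5.2f} m  AR={aspect_ratio:5.1f}")
```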
Wingspan records Largest wingspan Aircraft (current): Scaled Composites Stratolaunch — 117 m (385 ft) Bat: Large flying fox – Bird: Wandering albatross – Bird (extinct): Argentavis – Estimated Reptile (extinct): Quetzalcoatlus pterosaur – Insect: White witch moth – Insect (extinct): Meganeuropsis (relative of dragonflies) – estimated up to Smallest wingspan Aircraft (biplane): Starr Bumble Bee II – Aircraft (jet): Bede BD-5 – Aircraft (twin engine): Colomban Cri-cri – Bat: Bumblebee bat – Bird: Bee hummingbird – Insect: Tanzanian parasitic wasp (Fairyfly) – References Aerospace engineering Aircraft wing design Birds Length Anatomical terminology
Wingspan
[ "Physics", "Mathematics", "Engineering", "Biology" ]
883
[ "Scalar physical quantities", "Animals", "Physical quantities", "Distance", "Quantity", "Size", "Length", "Aerospace engineering", "Birds", "Wikipedia categories named after physical quantities" ]
100,563
https://en.wikipedia.org/wiki/System%20on%20a%20chip
A system on a chip or system-on-chip (SoC ; pl. SoCs ) is an integrated circuit that integrates most or all components of a computer or electronic system. These components usually include an on-chip central processing unit (CPU), memory interfaces, input/output devices and interfaces, and secondary storage interfaces, often alongside other components such as radio modems and a graphics processing unit (GPU) – all on a single substrate or microchip. SoCs may contain digital and also analog, mixed-signal and often radio frequency signal processing functions (otherwise it may be considered on a discrete application processor). High-performance SoCs are often paired with dedicated and physically separate memory and secondary storage (such as LPDDR and eUFS or eMMC, respectively) chips that may be layered on top of the SoC in what is known as a package on package (PoP) configuration, or be placed close to the SoC. Additionally, SoCs may use separate wireless modems (especially WWAN modems). An SoC integrates a microcontroller, microprocessor or perhaps several processor cores with peripherals like a GPU, Wi-Fi and cellular network radio modems or one or more coprocessors. Similar to how a microcontroller integrates a microprocessor with peripheral circuits and memory, an SoC can be seen as integrating a microcontroller with even more advanced peripherals. Compared to a multi-chip architecture, an SoC with equivalent functionality will have reduced power consumption as well as a smaller semiconductor die area. This comes at the cost of reduced replaceability of components. By definition, SoC designs are fully or nearly fully integrated across different component modules. For these reasons, there has been a general trend towards tighter integration of components in the computer hardware industry, in part due to the influence of SoCs and lessons learned from the mobile and embedded computing markets. SoCs are very common in the mobile computing (as in smart devices such as smartphones and tablet computers) and edge computing markets. Types In general, there are three distinguishable types of SoCs: SoCs built around a microcontroller, SoCs built around a microprocessor, often found in mobile phones; Specialized application-specific integrated circuit SoCs designed for specific applications that do not fit into the above two categories. Applications SoCs can be applied to any computing task. However, they are typically used in mobile computing such as tablets, smartphones, smartwatches, and netbooks as well as embedded systems and in applications where previously microcontrollers would be used. Embedded systems Where previously only microcontrollers could be used, SoCs are rising to prominence in the embedded systems market. Tighter system integration offers better reliability and mean time between failure, and SoCs offer more advanced functionality and computing power than microcontrollers. Applications include AI acceleration, embedded machine vision, data collection, telemetry, vector processing and ambient intelligence. Often embedded SoCs target the internet of things, multimedia, networking, telecommunications and edge computing markets. Some examples of SoCs for embedded applications include: AMD Zynq 7000 SoC Zynq UltraScale+ MPSoC Zynq UltraScale+ RFSoC Versal Adaptive SoC Mobile computing Mobile computing based SoCs always bundle processors, memories, on-chip caches, wireless networking capabilities and often digital camera hardware and firmware. 
With increasing memory sizes, high end SoCs will often have no memory and flash storage and instead, the memory and flash memory will be placed right next to, or above (package on package), the SoC. Some examples of mobile computing SoCs include: Samsung Electronics: list, typically based on ARM Exynos, used mainly by Samsung's Galaxy series of smartphones Qualcomm: Snapdragon (list), used in many smartphones. In 2018, Snapdragon SoCs were being used as the backbone of laptop computers running Windows 10, marketed as "Always Connected PCs". MediaTek, typically based on ARM Dimensity & Kompanio Series. Standalone application & tablet processors that power devices such as Amazon Echo Show Personal computers In 1992, Acorn Computers produced the A3010, A3020 and A4000 range of personal computers with the ARM250 SoC. It combined the original Acorn ARM2 processor with a memory controller (MEMC), video controller (VIDC), and I/O controller (IOC). In previous Acorn ARM-powered computers, these were four discrete chips. The ARM7500 chip was their second-generation SoC, based on the ARM700, VIDC20 and IOMD controllers, and was widely licensed in embedded devices such as set-top-boxes, as well as later Acorn personal computers. Tablet and laptop manufacturers have learned lessons from embedded systems and smartphone markets about reduced power consumption, better performance and reliability from tighter integration of hardware and firmware modules, and LTE and other wireless network communications integrated on chip (integrated network interface controllers). On modern laptops and mini PCs, the low-power variants of AMD Ryzen and Intel Core processors, are use SoC design integrating CPU, IGPU, chipset and other processors in a single package. However, such x86 processors still require external memory and storage chips. Structure An SoC consists of hardware functional units, including microprocessors that run software code, as well as a communications subsystem to connect, control, direct and interface between these functional modules. Functional components Processor cores An SoC must have at least one processor core, but typically an SoC has more than one core. Processor cores can be a microcontroller, microprocessor (μP), digital signal processor (DSP) or application-specific instruction set processor (ASIP) core. ASIPs have instruction sets that are customized for an application domain and designed to be more efficient than general-purpose instructions for a specific type of workload. Multiprocessor SoCs have more than one processor core by definition. The ARM architecture is a common choice for SoC processor cores because some ARM-architecture cores are soft processors specified as IP cores. Memory SoCs must have semiconductor memory blocks to perform their computation, as do microcontrollers and other embedded systems. Depending on the application, SoC memory may form a memory hierarchy and cache hierarchy. In the mobile computing market, this is common, but in many low-power embedded microcontrollers, this is not necessary. Memory technologies for SoCs include read-only memory (ROM), random-access memory (RAM), Electrically Erasable Programmable ROM (EEPROM) and flash memory. As in other computer systems, RAM can be subdivided into relatively faster but more expensive static RAM (SRAM) and the slower but cheaper dynamic RAM (DRAM). When an SoC has a cache hierarchy, SRAM will usually be used to implement processor registers and cores' built-in caches whereas DRAM will be used for main memory. 
"Main memory" may be specific to a single processor (which can be multi-core) when the SoC has multiple processors, in this case it is distributed memory and must be sent via on-chip to be accessed by a different processor. For further discussion of multi-processing memory issues, see cache coherence and memory latency. Interfaces SoCs include external interfaces, typically for communication protocols. These are often based upon industry standards such as USB, Ethernet, USART, SPI, HDMI, I²C, CSI, etc. These interfaces will differ according to the intended application. Wireless networking protocols such as Wi-Fi, Bluetooth, 6LoWPAN and near-field communication may also be supported. When needed, SoCs include analog interfaces including analog-to-digital and digital-to-analog converters, often for signal processing. These may be able to interface with different types of sensors or actuators, including smart transducers. They may interface with application-specific modules or shields. Or they may be internal to the SoC, such as if an analog sensor is built in to the SoC and its readings must be converted to digital signals for mathematical processing. Digital signal processors Digital signal processor (DSP) cores are often included on SoCs. They perform signal processing operations in SoCs for sensors, actuators, data collection, data analysis and multimedia processing. DSP cores typically feature very long instruction word (VLIW) and single instruction, multiple data (SIMD) instruction set architectures, and are therefore highly amenable to exploiting instruction-level parallelism through parallel processing and superscalar execution. SP cores most often feature application-specific instructions, and as such are typically application-specific instruction set processors (ASIP). Such application-specific instructions correspond to dedicated hardware functional units that compute those instructions. Typical DSP instructions include multiply-accumulate, Fast Fourier transform, fused multiply-add, and convolutions. Other As with other computer systems, SoCs require timing sources to generate clock signals, control execution of SoC functions and provide time context to signal processing applications of the SoC, if needed. Popular time sources are crystal oscillators and phase-locked loops. SoC peripherals including counter-timers, real-time timers and power-on reset generators. SoCs also include voltage regulators and power management circuits. Intermodule communication SoCs comprise many execution units. These units must often send data and instructions back and forth. Because of this, all but the most trivial SoCs require communications subsystems. Originally, as with other microcomputer technologies, data bus architectures were used, but recently designs based on sparse intercommunication networks known as networks-on-chip (NoC) have risen to prominence and are forecast to overtake bus architectures for SoC design in the near future. Bus-based communication Historically, a shared global computer bus typically connected the different components, also called "blocks" of the SoC. A very common bus for SoC communications is ARM's royalty-free Advanced Microcontroller Bus Architecture (AMBA) standard. Direct memory access controllers route data directly between external interfaces and SoC memory, bypassing the CPU or control unit, thereby increasing the data throughput of the SoC. This is similar to some device drivers of peripherals on component-based multi-chip module PC architectures. 
Wire delay is not scalable due to continued miniaturization, system performance does not scale with the number of cores attached, the SoC's operating frequency must decrease with each additional core attached for power to be sustainable, and long wires consume large amounts of electrical power. These challenges are prohibitive to supporting manycore systems on chip. Network on a chip In the late 2010s, a trend of SoCs implementing communications subsystems in terms of a network-like topology instead of bus-based protocols has emerged. A trend towards more processor cores on SoCs has caused on-chip communication efficiency to become one of the key factors in determining the overall system performance and cost. This has led to the emergence of interconnection networks with router-based packet switching known as "networks on chip" (NoCs) to overcome the bottlenecks of bus-based networks. Networks-on-chip have advantages including destination- and application-specific routing, greater power efficiency and reduced possibility of bus contention. Network-on-chip architectures take inspiration from communication protocols like TCP and the Internet protocol suite for on-chip communication, although they typically have fewer network layers. Optimal network-on-chip network architectures are an ongoing area of much research interest. NoC architectures range from traditional distributed computing network topologies such as torus, hypercube, meshes and tree networks to genetic algorithm scheduling to randomized algorithms such as random walks with branching and randomized time to live (TTL). Many SoC researchers consider NoC architectures to be the future of SoC design because they have been shown to efficiently meet power and throughput needs of SoC designs. Current NoC architectures are two-dimensional. 2D IC design has limited floorplanning choices as the number of cores in SoCs increase, so as three-dimensional integrated circuits (3DICs) emerge, SoC designers are looking towards building three-dimensional on-chip networks known as 3DNoCs. Design flow A system on a chip consists of both the hardware, described in , and the software controlling the microcontroller, microprocessor or digital signal processor cores, peripherals and interfaces. The design flow for an SoC aims to develop this hardware and software at the same time, also known as architectural co-design. The design flow must also take into account optimizations () and constraints. Most SoCs are developed from pre-qualified hardware component IP core specifications for the hardware elements and execution units, collectively "blocks", described above, together with software device drivers that may control their operation. Of particular importance are the protocol stacks that drive industry-standard interfaces like USB. The hardware blocks are put together using computer-aided design tools, specifically electronic design automation tools; the software modules are integrated using a software integrated development environment. SoCs components are also often designed in high-level programming languages such as C++, MATLAB or SystemC and converted to RTL designs through high-level synthesis (HLS) tools such as C to HDL or flow to HDL. HLS products called "algorithmic synthesis" allow designers to use C++ to model and synthesize system, circuit, software and verification levels all in one high level language commonly known to computer engineers in a manner independent of time scales, which are typically specified in HDL. 
Other components can remain software and be compiled and embedded onto soft-core processors included in the SoC as modules in HDL as IP cores. Once the architecture of the SoC has been defined, any new hardware elements are written in an abstract hardware description language termed register transfer level (RTL) which defines the circuit behavior, or synthesized into RTL from a high level language through high-level synthesis. These elements are connected together in a hardware description language to create the full SoC design. The logic specified to connect these components and convert between possibly different interfaces provided by different vendors is called glue logic. Design verification Chips are verified for validation correctness before being sent to a semiconductor foundry. This process is called functional verification and it accounts for a significant portion of the time and energy expended in the chip design life cycle, often quoted as 70%. With the growing complexity of chips, hardware verification languages like SystemVerilog, SystemC, e, and OpenVera are being used. Bugs found in the verification stage are reported to the designer. Traditionally, engineers have employed simulation acceleration, emulation or prototyping on reprogrammable hardware to verify and debug hardware and software for SoC designs prior to the finalization of the design, known as tape-out. Field-programmable gate arrays (FPGAs) are favored for prototyping SoCs because FPGA prototypes are reprogrammable, allow debugging and are more flexible than application-specific integrated circuits (ASICs). With high capacity and fast compilation time, simulation acceleration and emulation are powerful technologies that provide wide visibility into systems. Both technologies, however, operate slowly, on the order of MHz, which may be significantly slower – up to 100 times slower – than the SoC's operating frequency. Acceleration and emulation boxes are also very large and expensive at over US$1 million. FPGA prototypes, in contrast, use FPGAs directly to enable engineers to validate and test at, or close to, a system's full operating frequency with real-world stimuli. Tools such as Certus are used to insert probes in the FPGA RTL that make signals available for observation. This is used to debug hardware, firmware and software interactions across multiple FPGAs with capabilities similar to a logic analyzer. In parallel, the hardware elements are grouped and passed through a process of logic synthesis, during which performance constraints, such as operational frequency and expected signal delays, are applied. This generates an output known as a netlist describing the design as a physical circuit and its interconnections. These netlists are combined with the glue logic connecting the components to produce the schematic description of the SoC as a circuit which can be printed onto a chip. This process is known as place and route and precedes tape-out in the event that the SoCs are produced as application-specific integrated circuits (ASIC). Optimization goals SoCs must optimize power use, area on die, communication, positioning for locality between modular units and other factors. Optimization is necessarily a design goal of SoCs. If optimization was not necessary, the engineers would use a multi-chip module architecture without accounting for the area use, power consumption or performance of the system to the same extent. Common optimization targets for SoC designs follow, with explanations of each. 
In general, optimizing any of these quantities may be a hard combinatorial optimization problem, and can indeed be NP-hard fairly easily. Therefore, sophisticated optimization algorithms are often required and it may be practical to use approximation algorithms or heuristics in some cases. Additionally, most SoC designs contain multiple variables to optimize simultaneously, so Pareto efficient solutions are sought after in SoC design. Oftentimes the goals of optimizing some of these quantities are directly at odds, further adding complexity to design optimization of SoCs and introducing trade-offs in system design. For broader coverage of trade-offs and requirements analysis, see requirements engineering. Targets Power consumption SoCs are optimized to minimize the electrical power used to perform the SoC's functions. Most SoCs must use low power. SoC systems often require long battery life (such as smartphones), can potentially spend months or years without a power source while needing to maintain autonomous function, and often are limited in power use by a high number of embedded SoCs being networked together in an area. Additionally, energy costs can be high and conserving energy will reduce the total cost of ownership of the SoC. Finally, waste heat from high energy consumption can damage other circuit components if too much heat is dissipated, giving another pragmatic reason to conserve energy. The amount of energy used in a circuit is the integral of power consumed with respect to time, and the average rate of power consumption is the product of current by voltage. Equivalently, by Ohm's law, power is current squared times resistance or voltage squared divided by resistance: SoCs are frequently embedded in portable devices such as smartphones, GPS navigation devices, digital watches (including smartwatches) and netbooks. Customers want long battery lives for mobile computing devices, another reason that power consumption must be minimized in SoCs. Multimedia applications are often executed on these devices, including video games, video streaming, image processing; all of which have grown in computational complexity in recent years with user demands and expectations for higher-quality multimedia. Computation is more demanding as expectations move towards 3D video at high resolution with multiple standards, so SoCs performing multimedia tasks must be computationally capable platform while being low power to run off a standard mobile battery. Performance per watt SoCs are optimized to maximize power efficiency in performance per watt: maximize the performance of the SoC given a budget of power usage. Many applications such as edge computing, distributed processing and ambient intelligence require a certain level of computational performance, but power is limited in most SoC environments. Waste heat SoC designs are optimized to minimize waste heat output on the chip. As with other integrated circuits, heat generated due to high power density are the bottleneck to further miniaturization of components. The power densities of high speed integrated circuits, particularly microprocessors and including SoCs, have become highly uneven. Too much waste heat can damage circuits and erode reliability of the circuit over time. High temperatures and thermal stress negatively impact reliability, stress migration, decreased mean time between failures, electromigration, wire bonding, metastability and other performance degradation of the SoC over time. 
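A worked example of the relations quoted above (P = IV = I^2 R = V^2/R, with energy as the time integral of power, which for constant power reduces to E = P t) is given below. All numbers are invented for illustration.

```python
# P = I*V = I^2*R = V^2/R; energy is the time integral of power (constant here).
V = 1.1            # supply voltage in volts (invented)
R = 2.2            # effective resistance in ohms (invented)
I = V / R          # current in amperes, by Ohm's law

P = I * V
assert abs(P - I**2 * R) < 1e-12 and abs(P - V**2 / R) < 1e-12

t = 3600.0                       # one hour of operation, in seconds
E = P * t                        # energy in joules
print(f"P = {P:.3f} W, E over one hour = {E:.1f} J ({E / 3600:.3f} Wh)")
```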
In particular, most SoCs are in a small physical area or volume and therefore the effects of waste heat are compounded because there is little room for it to diffuse out of the system. Because of high transistor counts on modern devices, oftentimes a layout of sufficient throughput and high transistor density is physically realizable from fabrication processes but would result in unacceptably high amounts of heat in the circuit's volume. These thermal effects force SoC and other chip designers to apply conservative design margins, creating less performant devices to mitigate the risk of catastrophic failure. Due to increased transistor densities as length scales get smaller, each process generation produces more heat output than the last. Compounding this problem, SoC architectures are usually heterogeneous, creating spatially inhomogeneous heat fluxes, which cannot be effectively mitigated by uniform passive cooling. Throughput SoCs are optimized to maximize computational and communications throughput. Latency SoCs are optimized to minimize latency for some or all of their functions. This can be accomplished by laying out elements with proper proximity and locality to each other to minimize the interconnection delays and maximize the speed at which data is communicated between modules, functional units and memories. In general, optimizing to minimize latency is an NP-complete problem equivalent to the Boolean satisfiability problem. For tasks running on processor cores, latency and throughput can be improved with task scheduling. Some tasks run in application-specific hardware units, however, and even task scheduling may not be sufficient to optimize all software-based tasks to meet timing and throughput constraints. Methodologies Systems on chip are modeled with standard hardware verification and validation techniques, but additional techniques are used to model and optimize SoC design alternatives to make the system optimal with respect to multiple-criteria decision analysis on the above optimization targets. Task scheduling Task scheduling is an important activity in any computer system with multiple processes or threads sharing a single processor core. It is important to reduce latency and increase throughput for embedded software running on an SoC's processor cores. Not every important computing activity in a SoC is performed in software running on on-chip processors, but scheduling can drastically improve performance of software-based tasks and other tasks involving shared resources. Software running on SoCs often schedules tasks according to network scheduling and randomized scheduling algorithms. Pipelining Hardware and software tasks are often pipelined in processor design. Pipelining is an important principle for speedup in computer architecture. Pipelines are frequently used in GPUs (graphics pipeline) and RISC processors (evolutions of the classic RISC pipeline), but are also applied to application-specific tasks such as digital signal processing and multimedia manipulations in the context of SoCs. Probabilistic modeling SoCs are often analyzed through probabilistic models, queueing networks, and Markov chains. For instance, Little's law allows SoC states and NoC buffers to be modeled as arrival processes and analyzed through Poisson random variables and Poisson processes. Markov chains SoCs are often modeled with Markov chains, both discrete time and continuous time variants.
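As a minimal sketch of the steady-state analysis described next, the following Python snippet computes the stationary distribution of a hypothetical two-state (active/idle) discrete-time Markov model of an SoC; the transition probabilities and per-state power figures are invented purely for illustration.

import numpy as np

# Hypothetical discrete-time Markov chain over two SoC power states.
# Rows: current state; columns: next state; each row sums to 1.
#                active  idle
P = np.array([[0.90, 0.10],    # active -> active, active -> idle
              [0.30, 0.70]])   # idle   -> active, idle   -> idle

# The stationary distribution pi satisfies pi = pi P with sum(pi) = 1.
# Stack (P^T - I) pi = 0 with the normalization row and solve by least squares.
A = np.vstack([P.T - np.eye(2), np.ones((1, 2))])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("stationary distribution:", pi)            # approximately [0.75, 0.25]

# The long-run average power follows directly from the stationary distribution.
power_per_state = np.array([2.0, 0.2])           # watts, invented figures
print("long-run average power:", pi @ power_per_state, "W")   # about 1.55 W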
Markov chain modeling allows asymptotic analysis of the SoC's steady state distribution of power, heat, latency and other factors to allow design decisions to be optimized for the common case. Fabrication SoC chips are typically fabricated using metal–oxide–semiconductor (MOS) technology. The netlists described above are used as the basis for the physical design (place and route) flow to convert the designers' intent into the design of the SoC. Throughout this conversion process, the design is analyzed with static timing modeling, simulation and other tools to ensure that it meets the specified operational parameters such as frequency, power consumption and dissipation, functional integrity (as described in the register transfer level code) and electrical integrity. When all known bugs have been rectified and these have been re-verified and all physical design checks are done, the physical design files describing each layer of the chip are sent to the foundry's mask shop where a full set of glass lithographic masks will be etched. These are sent to a wafer fabrication plant to create the SoC dice before packaging and testing. SoCs can be fabricated by several technologies, including: Full custom ASIC Standard cell ASIC Field-programmable gate array (FPGA) ASICs consume less power and are faster than FPGAs but cannot be reprogrammed and are expensive to manufacture. FPGA designs are more suitable for lower volume designs, but after enough units of production ASICs reduce the total cost of ownership. SoC designs consume less power and have a lower cost and higher reliability than the multi-chip systems that they replace. With fewer packages in the system, assembly costs are reduced as well. However, like most very-large-scale integration (VLSI) designs, the total cost is higher for one large chip than for the same functionality distributed over several smaller chips, because of lower yields and higher non-recurring engineering costs. When it is not feasible to construct an SoC for a particular application, an alternative is a system in package (SiP) comprising a number of chips in a single package. When produced in large volumes, SoC is more cost-effective than SiP because its packaging is simpler. Another reason SiP may be preferred is waste heat may be too high in a SoC for a given purpose because functional components are too close together, and in an SiP heat will dissipate better from different functional modules since they are physically further apart. Examples Some examples of systems on a chip are: Apple A series Cell processor Adapteva's Epiphany architecture Xilinx Zynq UltraScale Qualcomm Snapdragon Benchmarks SoC research and development often compares many options. Benchmarks, such as COSMIC, are developed to help such evaluations. See also Chiplet List of system on a chip suppliers Post-silicon validation ARM architecture family RISC-V Single-board computer System in a package Network on a chip Cypress PSoC Application-specific instruction set processor (ASIP) Platform-based design Lab-on-a-chip Organ-on-a-chip in biomedical technology Multi-chip module Parallel computing ARM big.LITTLE co-architecture Hardware acceleration Notes References Further reading 465 pages. 
External links SOCC Annual IEEE International SoC Conference Baya free SoC platform assembly and IP integration tool Systems on Chip for Embedded Applications, Auburn University seminar in VLSI Instant SoC SoC for FPGAs defined by C++ MPSoC – Annual Conference on MPSoC Annual Symposium Computer engineering Electronic design Microtechnology Hardware acceleration Computer systems Application-specific integrated circuits
System on a chip
[ "Materials_science", "Technology", "Engineering" ]
5,526
[ "Hardware acceleration", "Computer engineering", "Microtechnology", "Electronic design", "Materials science", "Computer systems", "Computer science", "Electronic engineering", "Application-specific integrated circuits", "Electrical engineering", "Design", "Computers" ]
101,219
https://en.wikipedia.org/wiki/Pearson%20hashing
Pearson hashing is a non-cryptographic hash function designed for fast execution on processors with 8-bit registers. Given an input consisting of any number of bytes, it produces as output a single byte that is strongly dependent on every byte of the input. Its implementation requires only a few instructions, plus a 256-byte lookup table containing a permutation of the values 0 through 255. This hash function is a CBC-MAC that uses an 8-bit substitution cipher implemented via the substitution table. An 8-bit cipher has negligible cryptographic security, so the Pearson hash function is not cryptographically strong, but it is useful for implementing hash tables or as a data integrity check code, for which purposes it offers these benefits: It is extremely simple. It executes quickly on resource-limited processors. There is no simple class of inputs for which collisions (identical outputs) are especially likely. Given a small, privileged set of inputs (e.g., reserved words for a compiler), the permutation table can be adjusted so that those inputs yield distinct hash values, producing what is called a perfect hash function. Two input strings differing by exactly one character never collide. E.g., applying the algorithm on the strings ABC and AEC will never produce the same value. One of its drawbacks when compared with other hashing algorithms designed for 8-bit processors is the suggested 256 byte lookup table, which can be prohibitively large for a small microcontroller with a program memory size on the order of hundreds of bytes. A workaround to this is to use a simple permutation function instead of a table stored in program memory. However, using too simple a function, such as T[i] = 255-i, partly defeats the usability as a hash function as anagrams will result in the same hash value; using too complex a function, on the other hand, will affect speed negatively. Using a function rather than a table also allows extending the block size. Such functions naturally have to be bijective, like their table variants. The algorithm can be described by the following pseudocode, which computes the hash of message C using the permutation table T:

algorithm pearson hashing is
    h := 0
    for each c in C loop
        h := T[ h xor c ]
    end loop
    return h

The hash variable (h) may be initialized differently, e.g. to the length of the data (C) modulo 256. Example implementations C#, 8-bit

using System.Text;  // required for Encoding.UTF8

public class PearsonHashing
{
    public static byte Hash(string input)
    {
        byte[] T = { /* Permutation of 0-255 */ };
        byte hash = 0;
        byte[] bytes = Encoding.UTF8.GetBytes(input);
        foreach (byte b in bytes)
        {
            hash = T[hash ^ b];  // look up (current hash XOR next byte) in the table
        }
        return hash;
    }
}

See also Non-cryptographic hash functions References Error detection and correction Hash function (non-cryptographic) Articles with example pseudocode
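For comparison with the C# implementation above, here is a Python sketch of the same scheme. The permutation table below is just a seeded shuffle of 0 through 255, chosen so the example is reproducible; the originally published algorithm specifies a particular permutation, so this table is an arbitrary stand-in rather than the canonical one.

import random

# Build an example 256-entry permutation table. Any permutation of 0..255
# yields a valid (if not necessarily well-distributed) Pearson-style hash.
rng = random.Random(0)
T = list(range(256))
rng.shuffle(T)

def pearson_hash(data: bytes, h: int = 0) -> int:
    # h may be seeded differently, e.g. len(data) % 256, as noted above.
    for byte in data:
        h = T[h ^ byte]
    return h

# Strings differing in exactly one character never collide, because each
# step applies a bijection (T is a permutation).
print(pearson_hash(b"ABC"), pearson_hash(b"AEC"))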
Pearson hashing
[ "Engineering" ]
625
[ "Error detection and correction", "Reliability engineering" ]
101,336
https://en.wikipedia.org/wiki/High-temperature%20superconductivity
High-temperature superconductivity (high-c or HTS) is superconductivity in materials with a critical temperature (the temperature below which the material behaves as a superconductor) above , the boiling point of liquid nitrogen. They are only "high-temperature" relative to previously known superconductors, which function at colder temperatures, close to absolute zero. The "high temperatures" are still far below ambient (room temperature), and therefore require cooling. The first breakthrough of high-temperature superconductor was discovered in 1986 by IBM researchers Georg Bednorz and K. Alex Müller. Although the critical temperature is around , this new type of superconductor was readily modified by Ching-Wu Chu to make the first high-temperature superconductor with critical temperature . Bednorz and Müller were awarded the Nobel Prize in Physics in 1987 "for their important break-through in the discovery of superconductivity in ceramic materials". Most high-c materials are type-II superconductors. The major advantage of high-temperature superconductors is that they can be cooled using liquid nitrogen, in contrast to the previously known superconductors that require expensive and hard-to-handle coolants, primarily liquid helium. A second advantage of high-c materials is they retain their superconductivity in higher magnetic fields than previous materials. This is important when constructing superconducting magnets, a primary application of high-c materials. The majority of high-temperature superconductors are ceramic materials, rather than the previously known metallic materials. Ceramic superconductors are suitable for some practical uses but they still have many manufacturing issues. For example, most ceramics are brittle, which makes the fabrication of wires from them very problematic. However, overcoming these drawbacks is the subject of considerable research, and progress is ongoing. The main class of high-temperature superconductors is copper oxides combined with other metals, especially the rare-earth barium copper oxides (REBCOs) such as yttrium barium copper oxide (YBCO). The second class of high-temperature superconductors in the practical classification is the iron-based compounds. Magnesium diboride is sometimes included in high-temperature superconductors: It is relatively simple to manufacture, but it superconducts only below , which makes it unsuitable for liquid nitrogen cooling. History Superconductivity was discovered by Kamerlingh Onnes in 1911, in a metal solid. Ever since, researchers have attempted to observe superconductivity at increasing temperatures with the goal of finding a room-temperature superconductor. By the late 1970s, superconductivity was observed in several metallic compounds (in particular Nb-based, such as NbTi, Nb3Sn, and Nb3Ge) at temperatures that were much higher than those for elemental metals and which could even exceed . In 1986, at the IBM research lab near Zürich in Switzerland, Bednorz and Müller were looking for superconductivity in a new class of ceramics: the copper oxides, or cuprates. Bednorz encountered a particular copper oxide whose resistance dropped to zero at a temperature around . Their results were soon confirmed by many groups, notably Paul Chu at the University of Houston and Shoji Tanaka at the University of Tokyo. In 1987, Philip W. Anderson gave the first theoretical description of these materials, based on the resonating valence bond (RVB) theory, but a full understanding of these materials is still developing today. 
These superconductors are now known to possess a d-wave pair symmetry. The first proposal that high-temperature cuprate superconductivity involves d-wave pairing was made in 1987 by N. E. Bickers, Douglas James Scalapino and R. T. Scalettar, followed by three subsequent theories in 1988 by Masahiko Inui, Sebastian Doniach, Peter J. Hirschfeld and Andrei E. Ruckenstein, using spin-fluctuation theory, and by Claudius Gros, Didier Poilblanc, Maurice T. Rice and F. C. Zhang, and by Gabriel Kotliar and Jialin Liu identifying d-wave pairing as a natural consequence of the RVB theory. The confirmation of the d-wave nature of the cuprate superconductors was made by a variety of experiments, including the direct observation of the d-wave nodes in the excitation spectrum through angle resolved photoemission spectroscopy (ARPES), the observation of a half-integer flux in tunneling experiments, and indirectly from the temperature dependence of the penetration depth, specific heat and thermal conductivity. As of 2021, the superconductor with the highest transition temperature at ambient pressure is the cuprate of mercury, barium, and calcium, at around . There are other superconductors with higher recorded transition temperatures, for example lanthanum superhydride at , but these only occur at very high pressures. The origin of high-temperature superconductivity is still not clear, but it seems that instead of electron–phonon attraction mechanisms, as in conventional superconductivity, one is dealing with genuine electronic mechanisms (e.g. by antiferromagnetic correlations), and instead of conventional, purely s-wave pairing, more exotic pairing symmetries are thought to be involved (d-wave in the case of the cuprates; primarily extended s-wave, but occasionally d-wave, in the case of the iron-based superconductors). In 2014, evidence that fractional particles can occur in quasi two-dimensional magnetic materials was found by École Polytechnique Fédérale de Lausanne (EPFL) scientists, lending support to Anderson's theory of high-temperature superconductivity. Selected list of superconductors Properties The "high-temperature" superconductor class has had many definitions. The label high-c should be reserved for materials with critical temperatures greater than the boiling point of liquid nitrogen. However, a number of materials, including the original discovery and recently discovered pnictide superconductors, have critical temperatures below but nonetheless are commonly referred to in publications as high-c class. A substance with a critical temperature above the boiling point of liquid nitrogen, together with a high critical magnetic field and critical current density (above which superconductivity is destroyed), would greatly benefit technological applications. In magnet applications, the high critical magnetic field may prove more valuable than the high c itself. Some cuprates have an upper critical field of about 100 tesla. However, cuprate materials are brittle ceramics that are expensive to manufacture and not easily turned into wires or other useful shapes. Furthermore, high-temperature superconductors do not form large, continuous superconducting domains, but rather clusters of microdomains within which superconductivity occurs. They are therefore unsuitable for applications requiring actual superconductive currents, such as magnets for magnetic resonance spectrometers. For a solution to this (powders), see HTS wire.
There has been considerable debate regarding high-temperature superconductivity coexisting with magnetic ordering in YBCO, iron-based superconductors, several ruthenocuprates and other exotic superconductors, and the search continues for other families of materials. HTS are Type-II superconductors, which allow magnetic fields to penetrate their interior in quantized units of flux, meaning that much higher magnetic fields are required to suppress superconductivity. The layered structure also gives a directional dependence to the magnetic field response. All known high-c superconductors are Type-II superconductors. In contrast to Type-I superconductors, which expel all magnetic fields due to the Meissner effect, Type-II superconductors allow magnetic fields to penetrate their interior in quantized units of flux, creating "holes" or "tubes" of normal metallic regions in the superconducting bulk called vortices. Consequently, high-c superconductors can sustain much higher magnetic fields. Cuprates Cuprates are layered materials, consisting of superconducting layers of copper oxide, separated by spacer layers. Cuprates generally have a structure close to that of a two-dimensional material. Their superconducting properties are determined by electrons moving within weakly coupled copper-oxide (CuO2) layers. Neighbouring layers contain ions such as lanthanum, barium, strontium, or other atoms which act to stabilize the structures and dope electrons or holes onto the copper-oxide layers. The undoped "parent" or "mother" compounds are Mott insulators with long-range antiferromagnetic order at sufficiently low temperatures. Single band models are generally considered to be enough to describe the electronic properties. The cuprate superconductors adopt a perovskite structure. The copper-oxide planes are checkerboard lattices with squares of O2− ions with a Cu2+ ion at the centre of each square. The unit cell is rotated by 45° from these squares. Chemical formulae of superconducting materials generally contain fractional numbers to describe the doping required for superconductivity. There are several families of cuprate superconductors and they can be categorized by the elements they contain and the number of adjacent copper-oxide layers in each superconducting block. For example, YBCO and BSCCO can alternatively be referred to as "Y123" and Bi2201/Bi2212/Bi2223 depending on the number of layers in each superconducting block (). The superconducting transition temperature has been found to peak at an optimal doping value (=0.16) and an optimal number of layers in each superconducting block, typically =3. Possible mechanisms for superconductivity in the cuprates continue to be the subject of considerable debate and further research. Certain aspects common to all materials have been identified. Similarities between the antiferromagnetic low-temperature state of undoped materials and the superconducting state that emerges upon doping, primarily the x2−y2 orbital state of the Cu2+ ions, suggest that electron–electron interactions are more significant than electron–phonon interactions in cuprates, making the superconductivity unconventional. Recent work on the Fermi surface has shown that nesting occurs at four points in the antiferromagnetic Brillouin zone where spin waves exist and that the superconducting energy gap is larger at these points. The weak isotope effects observed for most cuprates contrast with conventional superconductors that are well described by BCS theory.
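For reference, the quantized unit of magnetic flux mentioned above in connection with Type-II behaviour is the superconducting flux quantum, fixed by fundamental constants (the factor of 2e reflects the charge of a Cooper pair):

\Phi_0 = \frac{h}{2e} \approx 2.07 \times 10^{-15}\ \text{Wb}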
Similarities and differences in the properties of hole-doped and electron doped cuprates: Presence of a pseudogap phase up to at least optimal doping. Different trends in the Uemura plot relating transition temperature to the superfluid density. The inverse square of the London penetration depth appears to be proportional to the critical temperature for a large number of underdoped cuprate superconductors, but the constant of proportionality is different for hole- and electron-doped cuprates. The linear trend implies that the physics of these materials is strongly two-dimensional. Universal hourglass-shaped feature in the spin excitations of cuprates measured using inelastic neutron diffraction. Nernst effect evident in both the superconducting and pseudogap phases. The electronic structure of superconducting cuprates is highly anisotropic (see the crystal structure of YBCO or BSCCO). Therefore, the Fermi surface of HTSC is very close to the Fermi surface of the doped CuO2 plane (or multi-planes, in case of multi-layer cuprates) and can be presented on the 2‑D reciprocal space (or momentum space) of the CuO2 lattice. The typical Fermi surface within the first CuO2 Brillouin zone is sketched in Fig. 1 (left). It can be derived from the band structure calculations or measured by angle resolved photoemission spectroscopy (ARPES). Fig. 1 (right) shows the Fermi surface of BSCCO measured by ARPES. In a wide range of charge carrier concentration (doping level), in which the hole-doped HTSC are superconducting, the Fermi surface is hole-like (i.e. open, as shown in Fig. 1). This results in an inherent in-plane anisotropy of the electronic properties of HTSC. In 2018, the full three dimensional Fermi surface structure was derived from soft x-ray ARPES. Iron-based Iron-based superconductors contain layers of iron and a pnictogensuch as arsenic or phosphorus, a chalcogen, or a crystallogen. This is currently the family with the second highest critical temperature, behind the cuprates. Interest in their superconducting properties began in 2006 with the discovery of superconductivity in LaFePO at and gained much greater attention in 2008 after the analogous material LaFeAs(O,F) was found to superconduct at up to under pressure. The highest critical temperatures in the iron-based superconductor family exist in thin films of FeSe, where a critical temperature in excess of was reported in 2014. Since the original discoveries several families of iron-based superconductors have emerged: LnFeAs(O,F) or LnFeAsO1−x (Ln=lanthanide) with c up to , referred to as 1111 materials. A fluoride variant of these materials was subsequently found with similar c values. (Ba,K)Fe2As2 and related materials with pairs of iron-arsenide layers, referred to as 122 compounds. c values range up to . These materials also superconduct when iron is replaced with cobalt. LiFeAs and NaFeAs with c up to around . These materials superconduct close to stoichiometric composition and are referred to as 111 compounds. FeSe with small off-stoichiometry or tellurium doping. LaFeSiH with c around in its stoichiometric composition. This superconducting crystallogenide has oxide and fluoride variants LaFeSiOx and LaFeSiFx. Most undoped iron-based superconductors show a tetragonal-orthorhombic structural phase transition followed at lower temperature by magnetic ordering, similar to the cuprate superconductors. However, they are poor metals rather than Mott insulators and have five bands at the Fermi surface rather than one. 
The phase diagram emerging as the iron-arsenide layers are doped is remarkably similar, with the superconducting phase close to or overlapping the magnetic phase. Strong evidence that the c value varies with the As–Fe–As bond angles has already emerged and shows that the optimal c value is obtained with undistorted FeAs4 tetrahedra. The symmetry of the pairing wavefunction is still widely debated, but an extended s-wave scenario is currently favoured. Magnesium diboride Magnesium diboride is occasionally referred to as a high-temperature superconductor because its c value of is above that historically expected for BCS superconductors. However, it is more generally regarded as the highest c conventional superconductor, the increased c resulting from two separate bands being present at the Fermi level. Carbon-based In 1991 Hebard et al. discovered Fulleride superconductors, where alkali-metal atoms are intercalated into C60 molecules. In 2008 Ganin et al. demonstrated superconductivity at temperatures of up to for Cs3C60. P-doped Graphane was proposed in 2010 to be capable of sustaining high-temperature superconductivity. On 31 December 2023, "Global Room-Temperature Superconductivity in Graphite" was published in the journal "Advanced Quantum Technologies", claiming to demonstrate superconductivity at room temperature and ambient pressure in highly oriented pyrolytic graphite with dense arrays of nearly parallel line defects. Nickelates In 1999, Anisimov et al. conjectured superconductivity in nickelates, proposing nickel oxides as direct analogs to the cuprate superconductors. Superconductivity in an infinite-layer nickelate, Nd0.8Sr0.2NiO2, was reported at the end of 2019 with a superconducting transition temperature between . This superconducting phase is observed in oxygen-reduced thin films created by the pulsed laser deposition of Nd0.8Sr0.2NiO3 onto SrTiO3 substrates that is then reduced to Nd0.8Sr0.2NiO2 via annealing the thin films at in the presence of CaH2. The superconducting phase is only observed in the oxygen reduced film and is not seen in oxygen reduced bulk material of the same stoichiometry, suggesting that the strain induced by the oxygen reduction of the Nd0.8Sr0.2NiO2 thin film changes the phase space to allow for superconductivity. It is further important to extract excess hydrogen from the reduction with CaH2, as otherwise topotactic hydrogen may prevent superconductivity. Cuprates The structure of cuprates which are superconductors is often closely related to the perovskite structure, and the structure of these compounds has been described as a distorted, oxygen deficient multi-layered perovskite structure. One of the properties of the crystal structure of oxide superconductors is an alternating multi-layer of CuO2 planes with superconductivity taking place between these layers. The more layers of CuO2, the higher c. This structure causes a large anisotropy in normal conducting and superconducting properties, since electrical currents are carried by holes induced in the oxygen sites of the CuO2 sheets. The electrical conduction is highly anisotropic, with a much higher conductivity parallel to the CuO2 plane than in the perpendicular direction. Generally, critical temperatures depend on the chemical compositions, cation substitutions and oxygen content.
They can be classified as superstripes; i.e., particular realizations of superlattices at atomic limit made of superconducting atomic layers, wires, dots separated by spacer layers, that gives multiband and multigap superconductivity. Yttrium–barium cuprate An yttrium–barium cuprate, YBa2Cu3O7−x (or Y123), was the first superconductor found above liquid nitrogen boiling point. There are two atoms of Barium for each atom of Yttrium. The proportions of the three different metals in the YBa2Cu3O7 superconductor are in the mole ratio of 1 to 2 to 3 for yttrium to barium to copper, respectively: this particular superconductor has also often been referred to as the 123 superconductor. The unit cell of YBa2Cu3O7 consists of three perovskite unit cells, which is pseudocubic, nearly orthorhombic. The other superconducting cuprates have another structure: they have a tetragonal cell. Each perovskite cell contains a Y or Ba atom at the center: Ba in the bottom unit cell, Y in the middle one, and Ba in the top unit cell. Thus, Y and Ba are stacked in the sequence [Ba–Y–Ba] along the c-axis. All corner sites of the unit cell are occupied by Cu, which has two different coordinations, Cu(1) and Cu(2), with respect to oxygen. There are four possible crystallographic sites for oxygen: O(1), O(2), O(3) and O(4). The coordination polyhedra of Y and Ba with respect to oxygen are different. The tripling of the perovskite unit cell leads to nine oxygen atoms, whereas YBa2Cu3O7 has seven oxygen atoms and, therefore, is referred to as an oxygen-deficient perovskite structure. The structure has a stacking of different layers: (CuO)(BaO)(CuO2)(Y)(CuO2)(BaO)(CuO). One of the key feature of the unit cell of YBa2Cu3O7−x (YBCO) is the presence of two layers of CuO2. The role of the Y plane is to serve as a spacer between two CuO2 planes. In YBCO, the Cu–O chains are known to play an important role for superconductivity. c is maximal near when x ≈ 0.15 and the structure is orthorhombic. Superconductivity disappears at x ≈ 0.6, where the structural transformation of YBCO occurs from orthorhombic to tetragonal. Other cuprates The preparation of other cuprates is more difficult than the YBCO preparation. They also have a different crystal structure: they are tetragonal where YBCO is orthorhombic. Problems in these superconductors arise because of the existence of three or more phases having a similar layered structure. Moreover, the crystal structure of other tested cuprate superconductors are very similar. Like YBCO, the perovskite-type feature and the presence of simple copper oxide (CuO2) layers also exist in these superconductors. However, unlike YBCO, Cu–O chains are not present in these superconductors. The YBCO superconductor has an orthorhombic structure, whereas the other high-c superconductors have a tetragonal structure. There are three main classes of superconducting cuprates: bismuth-based, thallium-based and mercury-based. The second cuprate by practical importance is currently BSCCO, a compound of Bi–Sr–Ca–Cu–O. The content of bismuth and strontium creates some chemical issues. It has three superconducting phases forming a homologous series as Bi2Sr2Can−1CunO4+2n+x (n=1, 2 and 3). These three phases are Bi-2201, Bi-2212 and Bi-2223, having transition temperatures of , and , respectively, where the numbering system represent number of atoms for Bi Sr, Ca and Cu respectively. The two phases have a tetragonal structure which consists of two sheared crystallographic unit cells. 
The unit cell of these phases has double Bi–O planes which are stacked in a way that the Bi atom of one plane sits below the oxygen atom of the next consecutive plane. The Ca atom forms a layer within the interior of the CuO2 layers in both Bi-2212 and Bi-2223; there is no Ca layer in the Bi-2201 phase. The three phases differ with each other in the number of cuprate planes; Bi-2201, Bi-2212 and Bi-2223 phases have one, two and three CuO2 planes, respectively. The c axis lattice constants of these phases increases with the number of cuprate planes (see table below). The coordination of the Cu atom is different in the three phases. The Cu atom forms an octahedral coordination with respect to oxygen atoms in the 2201 phase, whereas in 2212, the Cu atom is surrounded by five oxygen atoms in a pyramidal arrangement. In the 2223 structure, Cu has two coordinations with respect to oxygen: one Cu atom is bonded with four oxygen atoms in square planar configuration and another Cu atom is coordinated with five oxygen atoms in a pyramidal arrangement. Cuprate of Tl–Ba–Ca: The first series of the Tl-based superconductor containing one Tl–O layer has the general formula TlBa2Can−1CunO2n+3, whereas the second series containing two Tl–O layers has a formula of Tl2Ba2Can−1CunO2n+4 with n =1, 2 and 3. In the structure of Tl2Ba2CuO6 (Tl-2201), there is one CuO2 layer with the stacking sequence (Tl–O) (Tl–O) (Ba–O) (Cu–O) (Ba–O) (Tl–O) (Tl–O). In Tl2Ba2CaCu2O8 (Tl-2212), there are two Cu–O layers with a Ca layer in between. Similar to the Tl2Ba2CuO6 structure, Tl–O layers are present outside the Ba–O layers. In Tl2Ba2Ca2Cu3O10 (Tl-2223), there are three CuO2 layers enclosing Ca layers between each of these. In Tl-based superconductors, c is found to increase with the increase in CuO2 layers. However, the value of c decreases after four CuO2 layers in TlBa2Can−1CunO2n+3, and in the Tl2Ba2Can−1CunO2n+4 compound, it decreases after three CuO2 layers. Cuprate of Hg–Ba–Ca The crystal structure of HgBa2CuO4 (Hg-1201), HgBa2CaCu2O6 (Hg-1212) and HgBa2Ca2Cu3O8 (Hg-1223) is similar to that of Tl-1201, Tl-1212 and Tl-1223, with Hg in place of Tl. It is noteworthy that the c of the Hg compound (Hg-1201) containing one CuO2 layer is much larger as compared to the one-CuO2-layer compound of thallium (Tl-1201). In the Hg-based superconductor, c is also found to increase as the CuO2 layer increases. For Hg-1201, Hg-1212 and Hg-1223, the values of c are 94, 128, and the record value at ambient pressure , respectively, as shown in table below. The observation that the c of Hg-1223 increases to under high pressure indicates that the c of this compound is very sensitive to the structure of the compound. Preparation and manufacturing The simplest method for preparing ceramic superconductors is a solid-state thermochemical reaction involving mixing, calcination and sintering. The appropriate amounts of precursor powders, usually oxides and carbonates, are mixed thoroughly using a Ball mill. Solution chemistry processes such as coprecipitation, freeze-drying and sol–gel methods are alternative ways for preparing a homogeneous mixture. These powders are calcined in the temperature range from for several hours. The powders are cooled, reground and calcined again. This process is repeated several times to get homogeneous material. The powders are subsequently compacted to pellets and sintered. 
The sintering environment such as temperature, annealing time, atmosphere and cooling rate play a very important role in getting good high-c superconducting materials. The YBa2Cu3O7−x compound is prepared by calcination and sintering of a homogeneous mixture of Y2O3, BaCO3 and CuO in the appropriate atomic ratio. Calcination is done at , whereas sintering is done at in an oxygen atmosphere. The oxygen stoichiometry in this material is very crucial for obtaining a superconducting YBa2Cu3O7−x compound. At the time of sintering, the semiconducting tetragonal YBa2Cu3O6 compound is formed, which, on slow cooling in oxygen atmosphere, turns into superconducting YBa2Cu3O7−x. The uptake and loss of oxygen are reversible in YBa2Cu3O7−x. A fully oxygenated orthorhombic YBa2Cu3O7−x sample can be transformed into tetragonal YBa2Cu3O6 by heating in a vacuum at temperature above . The preparation of Bi-, Tl- and Hg-based high-c superconductors is more difficult than the YBCO preparation. Problems in these superconductors arise because of the existence of three or more phases having a similar layered structure. Thus, syntactic intergrowth and defects such as stacking faults occur during synthesis and it becomes difficult to isolate a single superconducting phase. For Bi–Sr–Ca–Cu–O, it is relatively simple to prepare the Bi-2212 (c ≈ 85 K) phase, whereas it is very difficult to prepare a single phase of Bi-2223 (c ≈ 110 K). The Bi-2212 phase appears only after few hours of sintering at , but the larger fraction of the Bi-2223 phase is formed after a long reaction time of more than a week at . Although the substitution of Pb in the Bi–Sr–Ca–Cu–O compound has been found to promote the growth of the high-c phase, a long sintering time is still required. Ongoing research The question of how superconductivity arises in high-temperature superconductors is one of the major unsolved problems of theoretical condensed matter physics. The mechanism that causes the electrons in these crystals to form pairs is not known. Despite intensive research and many promising leads, an explanation has so far eluded scientists. One reason for this is that the materials in question are generally very complex, multi-layered crystals (for example, BSCCO), making theoretical modelling difficult. Improving the quality and variety of samples also gives rise to considerable research, both with the aim of improved characterisation of the physical properties of existing compounds, and synthesizing new materials, often with the hope of increasing c. Technological research focuses on making HTS materials in sufficient quantities to make their use economically viable as well as in optimizing their properties in relation to applications. Metallic hydrogen has been proposed as a room-temperature superconductor, some experimental observations have detected the occurrence of the Meissner effect. LK-99, copper-doped lead-apatite, has also been proposed as a room-temperature superconductor. Theoretical models There have been two representative theories for high-temperature or unconventional superconductivity. Firstly, weak coupling theory suggests superconductivity emerges from antiferromagnetic spin fluctuations in a doped system. According to this theory, the pairing wave function of the cuprate HTS should have a dx2-y2 symmetry. Thus, determining whether the pairing wave function has d-wave symmetry is essential to test the spin fluctuation mechanism. 
That is, if the HTS order parameter (a pairing wave function like in Ginzburg–Landau theory) does not have d-wave symmetry, then a pairing mechanism related to spin fluctuations can be ruled out. (Similar arguments can be made for iron-based superconductors but the different material properties allow a different pairing symmetry.) Secondly, there was the interlayer coupling model, according to which a layered structure consisting of BCS-type (s-wave symmetry) superconductors can enhance the superconductivity by itself. By introducing an additional tunnelling interaction between each layer, this model successfully explained the anisotropic symmetry of the order parameter as well as the emergence of the HTS. Thus, in order to solve this unsettled problem, there have been numerous experiments such as photoemission spectroscopy, NMR, specific heat measurements, etc. Up to date the results were ambiguous, some reports supported the d symmetry for the HTS whereas others supported the s symmetry. This muddy situation possibly originated from the indirect nature of the experimental evidence, as well as experimental issues such as sample quality, impurity scattering, twinning, etc. This summary makes an implicit assumption: superconductive properties can be treated by mean-field theory. It also fails to mention that in addition to the superconductive gap, there is a second gap, the pseudogap. The cuprate layers are insulating, and the superconductors are doped with interlayer impurities to make them metallic. The superconductive transition temperature can be maximized by varying the dopant concentration. The simplest example is La2CuO4, which consist of alternating CuO2 and LaO layers which are insulating when pure. When 8% of the La is replaced by Sr, the latter act as dopants, contributing holes to the CuO2 layers, and making the sample metallic. The Sr impurities also act as electronic bridges, enabling interlayer coupling. Proceeding from this picture, some theories argue that the basic pairing interaction is still interaction with phonons, as in the conventional superconductors with Cooper pairs. While the undoped materials are antiferromagnetic, even a few percent of impurity dopants introduce a smaller pseudogap in the CuO2 planes which is also caused by phonons. The gap decreases with increasing charge carriers, and as it nears the superconductive gap, the latter reaches its maximum. The reason for the high transition temperature is then argued to be due to the percolating behaviour of the carriersthe carriers follow zig-zag percolative paths, largely in metallic domains in the CuO2 planes, until blocked by charge density wave domain walls, where they use dopant bridges to cross over to a metallic domain of an adjacent CuO2 plane. The transition temperature maxima are reached when the host lattice has weak bond-bending forces, which produce strong electron–phonon interactions at the interlayer dopants. D symmetry in YBCO An experiment based on flux quantization of a three-grain ring of YBa2Cu3O7 (YBCO) was proposed to test the symmetry of the order parameter in the HTS. The symmetry of the order parameter could best be probed at the junction interface as the Cooper pairs tunnel across a Josephson junction or weak link. It was expected that a half-integer flux, that is, a spontaneous magnetization could only occur for a junction of d symmetry superconductors. 
But, even if the junction experiment is the strongest method to determine the symmetry of the HTS order parameter, the results have been ambiguous. John R. Kirtley and C. C. Tsuei thought that the ambiguous results came from the defects inside the HTS, so that they designed an experiment where both clean limit (no defects) and dirty limit (maximal defects) were considered simultaneously. In the experiment, the spontaneous magnetization was clearly observed in YBCO, which supported the d symmetry of the order parameter in YBCO. But, since YBCO is orthorhombic, it might inherently have an admixture of s symmetry. So, by tuning their technique further, they found that there was an admixture of s symmetry in YBCO within about 3%. Also, they found that there was a pure dx2−y2 order parameter symmetry in the tetragonal Tl2Ba2CuO6. Spin-fluctuation mechanism Despite all these years, the mechanism of high-c superconductivity is still highly controversial, mostly due to the lack of exact theoretical computations on such strongly interacting electron systems. However, most rigorous theoretical calculations, including phenomenological and diagrammatic approaches, converge on magnetic fluctuations as the pairing mechanism for these systems. The qualitative explanation is as follows: In a superconductor, the flow of electrons cannot be resolved into individual electrons, but instead consists of many pairs of bound electrons, called Cooper pairs. In conventional superconductors, these pairs are formed when an electron moving through the material distorts the surrounding crystal lattice, which in turn attracts another electron and forms a bound pair. This is sometimes called the "water bed" effect. Each Cooper pair requires a certain minimum energy to be displaced, and if the thermal fluctuations in the crystal lattice are smaller than this energy the pair can flow without dissipating energy. This ability of the electrons to flow without resistance leads to superconductivity. In a high-c superconductor, the mechanism is extremely similar to a conventional superconductor, except, in this case, phonons virtually play no role and their role is replaced by spin-density waves. Just as all known conventional superconductors are strong phonon systems, all known high-c superconductors are strong spin-density wave systems, within close vicinity of a magnetic transition to, for example, an antiferromagnet. When an electron moves in a high-c superconductor, its spin creates a spin-density wave around it. This spin-density wave in turn causes a nearby electron to fall into the spin depression created by the first electron (water-bed effect again). Hence, again, a Cooper pair is formed. When the system temperature is lowered, more spin density waves and Cooper pairs are created, eventually leading to superconductivity. Note that in high-c systems, as these systems are magnetic systems due to the Coulomb interaction, there is a strong Coulomb repulsion between electrons. This Coulomb repulsion prevents pairing of the Cooper pairs on the same lattice site. The pairing of the electrons occur at near-neighbor lattice sites as a result. This is the so-called d-wave pairing, where the pairing state has a node (zero) at the origin. Examples Examples of high-c cuprate superconductors include YBCO and BSCCO, which are the most known materials that achieve superconductivity above the boiling point of liquid nitrogen. 
See also References External links High-temperature superconductors Correlated electrons Unsolved problems in physics
High-temperature superconductivity
[ "Physics", "Materials_science" ]
7,992
[ "Unsolved problems in physics", "Condensed matter physics", "Correlated electrons" ]
101,700
https://en.wikipedia.org/wiki/Diophantine%20set
In mathematics, a Diophantine equation is an equation of the form P(x1, ..., xj, y1, ..., yk) = 0 (usually abbreviated P(, ) = 0) where P(, ) is a polynomial with integer coefficients, where x1, ..., xj indicate parameters and y1, ..., yk indicate unknowns. A Diophantine set is a subset S of , the set of all j-tuples of natural numbers, so that for some Diophantine equation P(, ) = 0, That is, a parameter value is in the Diophantine set S if and only if the associated Diophantine equation is satisfiable under that parameter value. The use of natural numbers both in S and the existential quantification merely reflects the usual applications in computability theory and model theory. It does not matter whether natural numbers refer to the set of nonnegative integers or positive integers since the two definitions for Diophantine sets are equivalent. We can also equally well speak of Diophantine sets of integers and freely replace quantification over natural numbers with quantification over the integers. Also it is sufficient to assume P is a polynomial over and multiply P by the appropriate denominators to yield integer coefficients. However, whether quantification over rationals can also be substituted for quantification over the integers is a notoriously hard open problem. The MRDP theorem (so named for the initials of the four principal contributors to its solution) states that a set of integers is Diophantine if and only if it is computably enumerable. A set of integers S is computably enumerable if and only if there is an algorithm that, when given an integer, halts if that integer is a member of S and runs forever otherwise. This means that the concept of general Diophantine set, apparently belonging to number theory, can be taken rather in logical or computability-theoretic terms. This is far from obvious, however, and represented the culmination of some decades of work. Matiyasevich's completion of the MRDP theorem settled Hilbert's tenth problem. Hilbert's tenth problem was to find a general algorithm that can decide whether a given Diophantine equation has a solution among the integers. While Hilbert's tenth problem is not a formal mathematical statement as such, the nearly universal acceptance of the (philosophical) identification of a decision algorithm with a total computable predicate allows us to use the MRDP theorem to conclude that the tenth problem is unsolvable. Examples In the following examples, the natural numbers refer to the set of positive integers. The equation is an example of a Diophantine equation with a parameter x and unknowns y1 and y2. The equation has a solution in y1 and y2 precisely when x can be expressed as a product of two integers greater than 1, in other words x is a composite number. Namely, this equation provides a Diophantine definition of the set {4, 6, 8, 9, 10, 12, 14, 15, 16, 18, ...} consisting of the composite numbers. Other examples of Diophantine definitions are as follows: The equation with parameter x and unknowns y1, y2 only has solutions in when x is a sum of two perfect squares. The Diophantine set of the equation is {2, 5, 8, 10, 13, 17, 18, 20, 25, 26, ...}. The equation with parameter x and unknowns y1, y2. This is a Pell equation, meaning it only has solutions in when x is not a perfect square. The Diophantine set is {2, 3, 5, 6, 7, 8, 10, 11, 12, 13, ...}. The equation is a Diophantine equation with two parameters x1, x2 and an unknown y, which defines the set of pairs (x1, x2) such that x1 < x2. 
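The first example above can be checked mechanically. The following Python sketch (illustrative only) tests membership in the Diophantine set of composite numbers defined by the equation x = (y1 + 1)(y2 + 1), using a brute-force search over the unknowns; a finite search bound happens to exist for this particular equation (any solution has y1, y2 < x), whereas for a general Diophantine equation no such bound is available, which is why Diophantine sets are in general only guaranteed to be computably enumerable.

def in_composite_diophantine_set(x: int) -> bool:
    # x belongs to the set iff x = (y1 + 1) * (y2 + 1) for some positive
    # integers y1, y2, i.e. iff x is a product of two integers greater than 1.
    for y1 in range(1, x):
        for y2 in range(1, x):
            if x == (y1 + 1) * (y2 + 1):
                return True
    return False

print([x for x in range(2, 20) if in_composite_diophantine_set(x)])
# -> [4, 6, 8, 9, 10, 12, 14, 15, 16, 18], matching the set listed above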
Matiyasevich's theorem Matiyasevich's theorem, also called the Matiyasevich–Robinson–Davis–Putnam or MRDP theorem, says: Every computably enumerable set is Diophantine, and the converse. A set S of integers is computably enumerable if there is an algorithm such that: For each integer input n, if n is a member of S, then the algorithm eventually halts; otherwise it runs forever. That is equivalent to saying there is an algorithm that runs forever and lists the members of S. A set S is Diophantine precisely if there is some polynomial with integer coefficients f(n, x1, ..., xk) such that an integer n is in S if and only if there exist some integers x1, ..., xk such that f(n, x1, ..., xk) = 0. Conversely, every Diophantine set is computably enumerable: consider a Diophantine equation f(n, x1, ..., xk) = 0. Now we make an algorithm that simply tries all possible values for n, x1, ..., xk (in, say, some simple order consistent with the increasing order of the sum of their absolute values), and prints n every time f(n, x1, ..., xk) = 0. This algorithm will obviously run forever and will list exactly the n for which f(n, x1, ..., xk) = 0 has a solution in x1, ..., xk. Proof technique Yuri Matiyasevich utilized a method involving Fibonacci numbers, which grow exponentially, in order to show that solutions to Diophantine equations may grow exponentially. Earlier work by Julia Robinson, Martin Davis and Hilary Putnam – hence, MRDP – had shown that this suffices to show that every computably enumerable set is Diophantine. Application to Hilbert's tenth problem Hilbert's tenth problem asks for a general algorithm deciding the solvability of Diophantine equations. The conjunction of Matiyasevich's result with the fact that most recursively enumerable languages are not decidable implies that a solution to Hilbert's tenth problem is impossible. Refinements Later work has shown that the question of solvability of a Diophantine equation is undecidable even if the equation only has 9 natural number variables (Matiyasevich, 1977) or 11 integer variables (Zhi Wei Sun, 1992). Further applications Matiyasevich's theorem has since been used to prove that many problems from calculus and differential equations are unsolvable. One can also derive the following stronger form of Gödel's first incompleteness theorem from Matiyasevich's result: Corresponding to any given consistent axiomatization of number theory, one can explicitly construct a Diophantine equation that has no solutions, but such that this fact cannot be proved within the given axiomatization. According to the incompleteness theorems, a powerful-enough consistent axiomatic theory is incomplete, meaning the truth of some of its propositions cannot be established within its formalism. The statement above says that this incompleteness must include the solvability of a diophantine equation, assuming that the theory in question is a number theory. Notes References English translation in Soviet Mathematics 11 (2), pp. 354–357. External links Matiyasevich theorem article on Scholarpedia. Diophantine equations Hilbert's problems fr:Diophantien it:Teorema di Matiyasevich he:הבעיה העשירית של הילברט pt:Teorema de Matiyasevich ru:Диофантово множество
Diophantine set
[ "Mathematics" ]
1,691
[ "Mathematical objects", "Equations", "Hilbert's problems", "Diophantine equations", "Mathematical problems", "Number theory" ]
101,851
https://en.wikipedia.org/wiki/Hilbert%27s%20tenth%20problem
Hilbert's tenth problem is the tenth on the list of mathematical problems that the German mathematician David Hilbert posed in 1900. It is the challenge to provide a general algorithm that, for any given Diophantine equation (a polynomial equation with integer coefficients and a finite number of unknowns), can decide whether the equation has a solution with all unknowns taking integer values. For example, the Diophantine equation has an integer solution: . By contrast, the Diophantine equation has no such solution. Hilbert's tenth problem has been solved, and it has a negative answer: such a general algorithm cannot exist. This is the result of combined work of Martin Davis, Yuri Matiyasevich, Hilary Putnam and Julia Robinson that spans 21 years, with Matiyasevich completing the theorem in 1970. The theorem is now known as Matiyasevich's theorem or the MRDP theorem (an initialism for the surnames of the four principal contributors to its solution). When all coefficients and variables are restricted to be positive integers, the related problem of polynomial identity testing becomes a decidable (exponentiation-free) variation of Tarski's high school algebra problem, sometimes denoted Background Original formulation Hilbert formulated the problem as follows: Given a Diophantine equation with any number of unknown quantities and with rational integral numerical coefficients: To devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers. The words "process" and "finite number of operations" have been taken to mean that Hilbert was asking for an algorithm. The term "rational integral" simply refers to the integers, positive, negative or zero: 0, ±1, ±2, ... . So Hilbert was asking for a general algorithm to decide whether a given polynomial Diophantine equation with integer coefficients has a solution in integers. Hilbert's problem is not concerned with finding the solutions. It only asks whether, in general, we can decide whether one or more solutions exist. The answer to this question is negative, in the sense that no "process can be devised" for answering that question. In modern terms, Hilbert's 10th problem is an undecidable problem. Diophantine sets In a Diophantine equation, there are two kinds of variables: the parameters and the unknowns. The Diophantine set consists of the parameter assignments for which the Diophantine equation is solvable. A typical example is the linear Diophantine equation in two unknowns, , where the equation is solvable if and only if the greatest common divisor evenly divides . The set of all ordered triples satisfying this restriction is called the Diophantine set defined by . In these terms, Hilbert's tenth problem asks whether there is an algorithm to determine if the Diophantine set corresponding to an arbitrary polynomial is non-empty. The problem is generally understood in terms of the natural numbers (that is, the non-negative integers) rather than arbitrary integers. However, the two problems are equivalent: any general algorithm that can decide whether a given Diophantine equation has an integer solution could be modified into an algorithm that decides whether a given Diophantine equation has a natural-number solution, and vice versa. By Lagrange's four-square theorem, every natural number is the sum of the squares of four integers, so we could rewrite every natural-valued parameter in terms of the sum of the squares of four new integer-valued parameters. 
Similarly, since every integer is the difference of two natural numbers, we could rewrite every integer parameter as the difference of two natural parameters. Furthermore, we can always rewrite a system of simultaneous equations (where each is a polynomial) as a single equation . Recursively enumerable sets A recursively enumerable set can be characterized as one for which there exists an algorithm that will ultimately halt when a member of the set is provided as input, but may continue indefinitely when the input is a non-member. It was the development of computability theory (also known as recursion theory) that provided a precise explication of the intuitive notion of algorithmic computability, thus making the notion of recursive enumerability perfectly rigorous. It is evident that Diophantine sets are recursively enumerable (also known as semi-decidable). This is because one can arrange all possible tuples of values of the unknowns in a sequence and then, for a given value of the parameter(s), test these tuples, one after another, to see whether they are solutions of the corresponding equation. The unsolvability of Hilbert's tenth problem is a consequence of the surprising fact that the converse is true: Every recursively enumerable set is Diophantine. This result is variously known as Matiyasevich's theorem (because he provided the crucial step that completed the proof) and the MRDP theorem (for Yuri Matiyasevich, Julia Robinson, Martin Davis, and Hilary Putnam). Because there exists a recursively enumerable set that is not computable, the unsolvability of Hilbert's tenth problem is an immediate consequence. In fact, more can be said: there is a polynomial with integer coefficients such that the set of values of for which the equation has solutions in natural numbers is not computable. So, not only is there no general algorithm for testing Diophantine equations for solvability, but there is none even for this family of single-parameter equations. History Applications The Matiyasevich/MRDP theorem relates two notions—one from computability theory, the other from number theory—and has some surprising consequences. Perhaps the most surprising is the existence of a universal Diophantine equation: There exists a polynomial such that, given any Diophantine set there is a number such that This is true simply because Diophantine sets, being equal to recursively enumerable sets, are also equal to Turing machines. It is a well known property of Turing machines that there exist universal Turing machines, capable of executing any algorithm. Hilary Putnam has pointed out that for any Diophantine set of positive integers, there is a polynomial such that consists of exactly the positive numbers among the values assumed by as the variables range over all natural numbers. This can be seen as follows: If provides a Diophantine definition of , then it suffices to set So, for example, there is a polynomial for which the positive part of its range is exactly the prime numbers. (On the other hand, no polynomial can only take on prime values.) The same holds for other recursively enumerable sets of natural numbers: the factorial, the binomial coefficients, the fibonacci numbers, etc. Other applications concern what logicians refer to as propositions, sometimes also called propositions of Goldbach type. These are like Goldbach's conjecture, in stating that all natural numbers possess a certain property that is algorithmically checkable for each particular number. 
The Matiyasevich/MRDP theorem implies that each such proposition is equivalent to a statement that asserts that some particular Diophantine equation has no solutions in natural numbers. A number of important and celebrated problems are of this form: in particular, Fermat's Last Theorem, the Riemann hypothesis, and the four color theorem. In addition the assertion that particular formal systems such as Peano arithmetic or ZFC are consistent can be expressed as sentences. The idea is to follow Kurt Gödel in coding proofs by natural numbers in such a way that the property of being the number representing a proof is algorithmically checkable. sentences have the special property that if they are false, that fact will be provable in any of the usual formal systems. This is because the falsity amounts to the existence of a counter-example that can be verified by simple arithmetic. So if a sentence is such that neither it nor its negation is provable in one of these systems, that sentence must be true. A particularly striking form of Gödel's incompleteness theorem is also a consequence of the Matiyasevich/MRDP theorem: Let provide a Diophantine definition of a non-computable set. Let be an algorithm that outputs a sequence of natural numbers such that the corresponding equation has no solutions in natural numbers. Then there is a number that is not output by while in fact the equation has no solutions in natural numbers. To see that the theorem is true, it suffices to notice that if there were no such number , one could algorithmically test membership of a number in this non-computable set by simultaneously running the algorithm to see whether is output while also checking all possible -tuples of natural numbers seeking a solution of the equation and we may associate an algorithm with any of the usual formal systems such as Peano arithmetic or ZFC by letting it systematically generate consequences of the axioms and then output a number whenever a sentence of the form is generated. Then the theorem tells us that either a false statement of this form is proved or a true one remains unproved in the system in question. Further results We may speak of the degree of a Diophantine set as being the least degree of a polynomial in an equation defining that set. Similarly, we can call the dimension of such a set the fewest unknowns in a defining equation. Because of the existence of a universal Diophantine equation, it is clear that there are absolute upper bounds to both of these quantities, and there has been much interest in determining these bounds. Already in the 1920s Thoralf Skolem showed that any Diophantine equation is equivalent to one of degree 4 or less. His trick was to introduce new unknowns by equations setting them equal to the square of an unknown or the product of two unknowns. Repetition of this process results in a system of second degree equations; then an equation of degree 4 is obtained by summing the squares. So every Diophantine set is trivially of degree 4 or less. It is not known whether this result is best possible. Julia Robinson and Yuri Matiyasevich showed that every Diophantine set has dimension no greater than 13. Later, Matiyasevich sharpened their methods to show that 9 unknowns suffice. Although it may well be that this result is not the best possible, there has been no further progress. So, in particular, there is no algorithm for testing Diophantine equations with 9 or fewer unknowns for solvability in natural numbers. 
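Skolem's degree-reduction trick mentioned just above can be seen on a toy case. The sketch below (SymPy is assumed; the specific equation and variable names are illustrative, not Skolem's original presentation) renames repeated squares with new unknowns and then sums the squares of the resulting low-degree equations, producing a single equation of degree 4 with the same integer solutions.

```python
# Toy illustration of Skolem's reduction: x^8 - 256 = 0 (degree 8, with solution x = 2)
# becomes a system of equations of degree <= 2 by naming y = x^2 and z = y^2,
# and summing squares turns that system into one equation of total degree 4.
import sympy as sp

x, y, z = sp.symbols("x y z", integer=True)

system = [y - x**2, z - y**2, z**2 - 256]          # each equation has degree <= 2
combined = sp.expand(sum(eq**2 for eq in system))  # one equation; solutions match x^8 = 256

print(combined)
print("total degree:", sp.Poly(combined, x, y, z).total_degree())   # 4
print(combined.subs({x: 2, y: 4, z: 16}))                           # 0, matching x = 2
```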
For the case of rational integer solutions (as Hilbert had originally posed it), the 4-squares trick shows that there is no algorithm for equations with no more than 36 unknowns. But Zhi Wei Sun showed that the problem for integers is unsolvable even for equations with no more than 11 unknowns. Martin Davis studied algorithmic questions involving the number of solutions of a Diophantine equation. Hilbert's tenth problem asks whether or not that number is 0. Let and let be a proper non-empty subset of . Davis proved that there is no algorithm to test a given Diophantine equation to determine whether the number of its solutions is a member of the set . Thus there is no algorithm to determine whether the number of solutions of a Diophantine equation is finite, odd, a perfect square, a prime, etc. The proof of the MRDP theorem has been formalized in Coq. Extensions of Hilbert's tenth problem Although Hilbert posed the problem for the rational integers, it can be just as well asked for many rings (in particular, for any ring whose number of elements is countable). Obvious examples are the rings of integers of algebraic number fields as well as the rational numbers. There has been much work on Hilbert's tenth problem for the rings of integers of algebraic number fields. Basing themselves on earlier work by Jan Denef and Leonard Lipschitz and using class field theory, Harold N. Shapiro and Alexandra Shlapentokh were able to prove: Hilbert's tenth problem is unsolvable for the ring of integers of any algebraic number field whose Galois group over the rationals is abelian. Shlapentokh and Thanases Pheidas (independently of one another) obtained the same result for algebraic number fields admitting exactly one pair of complex conjugate embeddings. The problem for the ring of integers of algebraic number fields other than those covered by the results above remains open. Likewise, despite much interest, the problem for equations over the rationals remains open. Barry Mazur has conjectured that for any variety over the rationals, the topological closure over the reals of the set of solutions has only finitely many components. This conjecture implies that the integers are not Diophantine over the rationals and so if this conjecture is true a negative answer to Hilbert's Tenth Problem would require a different approach than that used for other rings. See also Tarski's high school algebra problem Notes References Works cited Further reading Reprinted in The Collected Works of Julia Robinson, Solomon Feferman, editor, pp. 269–378, American Mathematical Society 1996. Martin Davis, "Hilbert's Tenth Problem is Unsolvable," American Mathematical Monthly, vol.80(1973), pp. 233–269; reprinted as an appendix in Martin Davis, Computability and Unsolvability, Dover reprint 1982. Jan Denef, Leonard Lipschitz, Thanases Pheidas, Jan van Geel, editors, "Hilbert's Tenth Problem: Workshop at Ghent University, Belgium, November 2–5, 1999." Contemporary Mathematics vol. 270(2000), American Mathematical Society. M. Ram Murty and Brandon Fodden: "Hilbert’s Tenth Problem: An Introduction to Logic, Number Theory, and Computability", American Mathematical Society, (June, 2019). External links Hilbert's Tenth Problem: a History of Mathematical Discovery Hilbert's Tenth Problem page! Zhi Wei Sun: On Hilbert's Tenth Problem and Related Topics 10 Diophantine equations Undecidable problems
Hilbert's tenth problem
[ "Mathematics" ]
2,937
[ "Mathematical objects", "Computational problems", "Equations", "Hilbert's problems", "Diophantine equations", "Undecidable problems", "Mathematical problems", "Number theory" ]
101,863
https://en.wikipedia.org/wiki/Linear%20independence
In the theory of vector spaces, a set of vectors is said to be linearly independent if there exists no nontrivial linear combination of the vectors that equals the zero vector. If such a linear combination exists, then the vectors are said to be linearly dependent. These concepts are central to the definition of dimension. A vector space can be of finite dimension or infinite dimension depending on the maximum number of linearly independent vectors. The definition of linear dependence and the ability to determine whether a subset of vectors in a vector space is linearly dependent are central to determining the dimension of a vector space. Definition A sequence of vectors from a vector space is said to be linearly dependent if there exist scalars not all zero, such that where denotes the zero vector. This implies that at least one of the scalars is nonzero, say , and the above equation can be written as if and if Thus, a set of vectors is linearly dependent if and only if one of them is zero or a linear combination of the others. A sequence of vectors is said to be linearly independent if it is not linearly dependent, that is, if the equation can only be satisfied by for This implies that no vector in the sequence can be represented as a linear combination of the remaining vectors in the sequence. In other words, a sequence of vectors is linearly independent if the only representation of as a linear combination of its vectors is the trivial representation in which all the scalars are zero. Even more concisely, a sequence of vectors is linearly independent if and only if can be represented as a linear combination of its vectors in a unique way. If a sequence of vectors contains the same vector twice, it is necessarily dependent. The linear dependence of a sequence of vectors does not depend on the order of the terms in the sequence. This allows defining linear independence for a finite set of vectors: A finite set of vectors is linearly independent if the sequence obtained by ordering them is linearly independent. In other words, one has the following result that is often useful. A sequence of vectors is linearly independent if and only if it does not contain the same vector twice and the set of its vectors is linearly independent. Infinite case An infinite set of vectors is linearly independent if every nonempty finite subset is linearly independent. Conversely, an infinite set of vectors is linearly dependent if it contains a finite subset that is linearly dependent, or equivalently, if some vector in the set is a linear combination of other vectors in the set. An indexed family of vectors is linearly independent if it does not contain the same vector twice, and if the set of its vectors is linearly independent. Otherwise, the family is said to be linearly dependent. A set of vectors which is linearly independent and spans some vector space forms a basis for that vector space. For example, the vector space of all polynomials in over the reals has the (infinite) subset as a basis. Geometric examples and are independent and define the plane P. , and are dependent because all three are contained in the same plane. and are dependent because they are parallel to each other. , and are independent because and are independent of each other and is not a linear combination of them or, equivalently, because they do not belong to a common plane. The three vectors define a three-dimensional space. 
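In computational practice the definition above is usually checked with rank: a finite list of vectors is linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors. The sketch below is a minimal illustration assuming NumPy and some example vectors; it is not drawn from the article's sources.

```python
# Minimal rank-based test of linear independence (illustrative NumPy sketch).
import numpy as np

def linearly_independent(vectors) -> bool:
    """True iff the given vectors are linearly independent."""
    A = np.column_stack(vectors)                    # the vectors become the columns of A
    return np.linalg.matrix_rank(A) == len(vectors)

print(linearly_independent([np.array([1.0, 0.0]), np.array([0.0, 1.0])]))  # True
print(linearly_independent([np.array([1.0, 2.0]), np.array([2.0, 4.0])]))  # False: scalar multiples
```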
The vectors (null vector, whose components are equal to zero) and are dependent since Geographic location A person describing the location of a certain place might say, "It is 3 miles north and 4 miles east of here." This is sufficient information to describe the location, because the geographic coordinate system may be considered as a 2-dimensional vector space (ignoring altitude and the curvature of the Earth's surface). The person might add, "The place is 5 miles northeast of here." This last statement is true, but it is not necessary to find the location. In this example, the "3 miles north" vector and the "4 miles east" vector are linearly independent. That is to say, the north vector cannot be described in terms of the east vector, and vice versa. The third "5 miles northeast" vector is a linear combination of the other two vectors, and it makes the set of vectors linearly dependent, that is, one of the three vectors is unnecessary to define a specific location on a plane. Also note that if altitude is not ignored, it becomes necessary to add a third vector to the linearly independent set. In general, linearly independent vectors are required to describe all locations in -dimensional space. Evaluating linear independence The zero vector If one or more vectors from a given sequence of vectors is the zero vector then the vectors are necessarily linearly dependent (and consequently, they are not linearly independent). To see why, suppose that is an index (i.e. an element of ) such that Then let (alternatively, letting be equal to any other non-zero scalar will also work) and then let all other scalars be (explicitly, this means that for any index other than (i.e. for ), let so that consequently ). Simplifying gives: Because not all scalars are zero (in particular, ), this proves that the vectors are linearly dependent. As a consequence, the zero vector cannot possibly belong to any collection of vectors that is linearly independent. Now consider the special case where the sequence of has length (i.e. the case where ). A collection of vectors that consists of exactly one vector is linearly dependent if and only if that vector is zero. Explicitly, if is any vector then the sequence (which is a sequence of length ) is linearly dependent if and only if alternatively, the collection is linearly independent if and only if Linear dependence and independence of two vectors This example considers the special case where there are exactly two vectors and from some real or complex vector space. The vectors and are linearly dependent if and only if at least one of the following is true: is a scalar multiple of (explicitly, this means that there exists a scalar such that ) or is a scalar multiple of (explicitly, this means that there exists a scalar such that ). If then by setting we have (this equality holds no matter what the value of is), which shows that (1) is true in this particular case. Similarly, if then (2) is true because If (for instance, if they are both equal to the zero vector ) then both (1) and (2) are true (by using for both). If then is only possible if and ; in this case, it is possible to multiply both sides by to conclude This shows that if and then (1) is true if and only if (2) is true; that is, in this particular case either both (1) and (2) are true (and the vectors are linearly dependent) or else both (1) and (2) are false (and the vectors are linearly independent). If but instead then at least one of and must be zero. 
Moreover, if exactly one of and is (while the other is non-zero) then exactly one of (1) and (2) is true (with the other being false). The vectors and are linearly independent if and only if is not a scalar multiple of and is not a scalar multiple of . Vectors in R2 Three vectors: Consider the set of vectors and then the condition for linear dependence seeks a set of non-zero scalars, such that or Row reduce this matrix equation by subtracting the first row from the second to obtain, Continue the row reduction by (i) dividing the second row by 5, and then (ii) multiplying by 3 and adding to the first row, that is Rearranging this equation allows us to obtain which shows that non-zero ai exist such that can be defined in terms of and Thus, the three vectors are linearly dependent. Two vectors: Now consider the linear dependence of the two vectors and and check, or The same row reduction presented above yields, This shows that which means that the vectors and are linearly independent. Vectors in R4 In order to determine if the three vectors in are linearly dependent, form the matrix equation, Row reduce this equation to obtain, Rearrange to solve for v3 and obtain, This equation is easily solved to define non-zero ai, where can be chosen arbitrarily. Thus, the vectors and are linearly dependent. Alternative method using determinants An alternative method relies on the fact that vectors in are linearly independent if and only if the determinant of the matrix formed by taking the vectors as its columns is non-zero. In this case, the matrix formed by the vectors is We may write a linear combination of the columns as We are interested in whether for some nonzero vector Λ. This depends on the determinant of , which is Since the determinant is non-zero, the vectors and are linearly independent. Otherwise, suppose we have vectors of coordinates, with Then A is an n×m matrix and Λ is a column vector with entries, and we are again interested in AΛ = 0. As we saw previously, this is equivalent to a list of equations. Consider the first rows of , the first equations; any solution of the full list of equations must also be true of the reduced list. In fact, if is any list of rows, then the equation must be true for those rows. Furthermore, the reverse is true. That is, we can test whether the vectors are linearly dependent by testing whether for all possible lists of rows. (In case , this requires only one determinant, as above. If , then it is a theorem that the vectors must be linearly dependent.) This fact is valuable for theory; in practical calculations more efficient methods are available. More vectors than dimensions If there are more vectors than dimensions, the vectors are linearly dependent. This is illustrated in the example above of three vectors in Natural basis vectors Let and consider the following elements in , known as the natural basis vectors: Then are linearly independent. Linear independence of functions Let be the vector space of all differentiable functions of a real variable . Then the functions and in are linearly independent. Proof Suppose and are two real numbers such that Take the first derivative of the above equation: for values of We need to show that and In order to do this, we subtract the first equation from the second, giving . Since is not zero for some , It follows that too. Therefore, according to the definition of linear independence, and are linearly independent. 
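The determinant criterion described above is easy to state in code. The following sketch (NumPy assumed; the two vectors are small illustrative examples of the kind used in this section) computes the determinant of the matrix whose columns are the vectors; a non-zero value certifies the linear independence of n vectors in n-dimensional space.

```python
# Determinant test for n vectors in R^n (illustrative sketch, NumPy assumed).
import numpy as np

v1 = np.array([1.0, 1.0])
v2 = np.array([-3.0, 2.0])

A = np.column_stack([v1, v2])     # columns are v1 and v2
print(np.linalg.det(A))           # approximately 5.0 (non-zero), so v1 and v2 are independent

# Note: with more vectors than dimensions (e.g. three vectors in R^2) no such square
# matrix exists, and the vectors are automatically linearly dependent.
```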
Space of linear dependencies A linear dependency or linear relation among vectors is a tuple with scalar components such that If such a linear dependence exists with at least a nonzero component, then the vectors are linearly dependent. Linear dependencies among form a vector space. If the vectors are expressed by their coordinates, then the linear dependencies are the solutions of a homogeneous system of linear equations, with the coordinates of the vectors as coefficients. A basis of the vector space of linear dependencies can therefore be computed by Gaussian elimination. Generalizations Affine independence A set of vectors is said to be affinely dependent if at least one of the vectors in the set can be defined as an affine combination of the others. Otherwise, the set is called affinely independent. Any affine combination is a linear combination; therefore every affinely dependent set is linearly dependent. Conversely, every linearly independent set is affinely independent. Consider a set of vectors of size each, and consider the set of augmented vectors of size each. The original vectors are affinely independent if and only if the augmented vectors are linearly independent. Linearly independent vector subspaces Two vector subspaces and of a vector space are said to be if More generally, a collection of subspaces of are said to be if for every index where The vector space is said to be a of if these subspaces are linearly independent and See also References External links Linearly Dependent Functions at WolframMathWorld. Tutorial and interactive program on Linear Independence. Introduction to Linear Independence at KhanAcademy. Abstract algebra Linear algebra Articles containing proofs
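The computation of the space of linear dependencies by Gaussian elimination, described at the start of the section above, corresponds to finding the null space of the matrix whose columns are the given vectors. A minimal sketch follows, assuming SymPy and some illustrative vectors; the method names are SymPy's, but the example itself is not from the article's sources.

```python
# The linear dependencies among v1, v2, v3 form the null space of the matrix with those
# columns; Matrix.nullspace() performs the Gaussian elimination and returns a basis of it.
import sympy as sp

v1, v2, v3 = sp.Matrix([1, 2]), sp.Matrix([3, 4]), sp.Matrix([5, 6])
A = sp.Matrix.hstack(v1, v2, v3)

for c in A.nullspace():           # each c = (c1, c2, c3) satisfies c1*v1 + c2*v2 + c3*v3 = 0
    print(c.T)                    # here: Matrix([[1, -2, 1]]), i.e. v1 - 2*v2 + v3 = 0
```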
Linear independence
[ "Mathematics" ]
2,521
[ "Linear algebra", "Articles containing proofs", "Algebra", "Abstract algebra" ]
102,024
https://en.wikipedia.org/wiki/Wetland
A wetland is a distinct semi-aquatic ecosystem whose groundcovers are flooded or saturated in water, either permanently, for years or decades, or only seasonally. Flooding results in oxygen-poor (anoxic) processes taking place, especially in the soils. Wetlands form a transitional zone between waterbodies and dry lands, and are different from other terrestrial or aquatic ecosystems due to their vegetation's roots having adapted to oxygen-poor waterlogged soils. They are considered among the most biologically diverse of all ecosystems, serving as habitats to a wide range of aquatic and semi-aquatic plants and animals, with often improved water quality due to plant removal of excess nutrients such as nitrates and phosphorus. Wetlands exist on every continent, except Antarctica. The water in wetlands is either freshwater, brackish or saltwater. The main types of wetland are defined based on the dominant plants and the source of the water. For example, marshes are wetlands dominated by emergent herbaceous vegetation such as reeds, cattails and sedges. Swamps are dominated by woody vegetation such as trees and shrubs (although reed swamps in Europe are dominated by reeds, not trees). Mangrove forest are wetlands with mangroves, halophytic woody plants that have evolved to tolerate salty water. Examples of wetlands classified by the sources of water include tidal wetlands, where the water source is ocean tides; estuaries, water source is mixed tidal and river waters; floodplains, water source is excess water from overflowed rivers or lakes; and bogs and vernal ponds, water source is rainfall or meltwater. The world's largest wetlands include the Amazon River basin, the West Siberian Plain, the Pantanal in South America, and the Sundarbans in the Ganges-Brahmaputra delta. Wetlands contribute many ecosystem services that benefit people. These include for example water purification, stabilization of shorelines, storm protection and flood control. In addition, wetlands also process and condense carbon (in processes called carbon fixation and sequestration), and other nutrients and water pollutants. Wetlands can act as a sink or a source of carbon, depending on the specific wetland. If they function as a carbon sink, they can help with climate change mitigation. However, wetlands can also be a significant source of methane emissions due to anaerobic decomposition of soaked detritus, and some are also emitters of nitrous oxide. Humans are disturbing and damaging wetlands in many ways, including oil and gas extraction, building infrastructure, overgrazing of livestock, overfishing, alteration of wetlands including dredging and draining, nutrient pollution, and water pollution. Wetlands are more threatened by environmental degradation than any other ecosystem on Earth, according to the Millennium Ecosystem Assessment from 2005. Methods exist for assessing wetland ecological health. These methods have contributed to wetland conservation by raising public awareness of the functions that wetlands can provide. Since 1971, work under an international treaty seeks to identify and protect "wetlands of international importance." Definitions and terminology Technical definitions A simplified definition of wetland is "an area of land that is usually saturated with water". More precisely, wetlands are areas where "water covers the soil, or is present either at or near the surface of the soil all year or for varying periods of time during the year, including during the growing season". 
A patch of land that develops pools of water after a rain storm would not necessarily be considered a "wetland", even though the land is wet. Wetlands have unique characteristics: they are generally distinguished from other water bodies or landforms based on their water level and on the types of plants that live within them. Specifically, wetlands are characterized as having a water table that stands at or near the land surface for a long enough period each year to support aquatic plants. A more concise definition is a community composed of hydric soil and hydrophytes. Wetlands have also been described as ecotones, providing a transition between dry land and water bodies. Wetlands exist "...at the interface between truly terrestrial ecosystems and aquatic systems, making them inherently different from each other, yet highly dependent on both." In environmental decision-making, there are subsets of definitions that are agreed upon to make regulatory and policy decisions. Under the Ramsar international wetland conservation treaty, wetlands are defined as follows: Article 1.1: "...wetlands are areas of marsh, fen, peatland or water, whether natural or artificial, permanent or temporary, with water that is static or flowing, fresh, brackish or salt, including areas of marine water the depth of which at low tide does not exceed six meters." Article 2.1: "[Wetlands] may incorporate riparian and coastal zones adjacent to the wetlands, and islands or bodies of marine water deeper than six meters at low tide lying within the wetlands." An ecological definition of a wetland is "an ecosystem that arises when inundation by water produces soils dominated by anaerobic and aerobic processes, which, in turn, forces the biota, particularly rooted plants, to adapt to flooding". Sometimes a precise legal definition of a wetland is required. The definition used for regulation by the United States government is: 'The term "wetlands" means those areas that are inundated or saturated by surface or ground water at a frequency and duration to support, and that under normal circumstances do support, a prevalence of vegetation typically adapted for life in saturated soil conditions. Wetlands generally included swamps, marshes, bogs, and similar areas.' For each of these definitions and others, regardless of the purpose, hydrology is emphasized (shallow waters, water-logged soils). The soil characteristics and the plants and animals controlled by the wetland hydrology are often additional components of the definitions. Types Wetlands can be tidal (inundated by tides) or non-tidal. The water in wetlands is either freshwater, brackish, saline, or alkaline. There are four main kinds of wetlands – marsh, swamp, bog, and fen (bogs and fens being types of peatlands or mires). Some experts also recognize wet meadows and aquatic ecosystems as additional wetland types. Sub-types include mangrove forests, carrs, pocosins, floodplains, peatlands, vernal pools, sinks, and many others. The following three groups are used within Australia to classify wetland by type: Marine and coastal zone wetlands, inland wetlands and human-made wetlands. In the US, the best known classifications are the Cowardin classification system and the hydrogeomorphic (HGM) classification system. The Cowardin system includes five main types of wetlands: marine (ocean-associated), estuarine (mixed ocean- and river-associated), riverine (within river channels), lacustrine (lake-associated) and palustrine (inland nontidal habitats). 
Peatlands Peatlands are a unique kind of wetland where lush plant growth and slow decay of dead plants (under anoxic conditions) results in organic peat accumulating; bogs, fens, and mires are different names for peatlands. Wetland names Variations of names for wetland systems: Bayou Flooded grasslands and savannas Marsh Brackish marsh Freshwater marsh Mire Fen Bog Riparian zone Swamp Freshwater swamp forest Tidal Freshwater forest Coniferous swamp Peat swamp forest Mangrove swamp Vernal pool Some wetlands have localized names unique to a region such as the prairie potholes of North America's northern plain, pocosins, Carolina bays and baygalls of the Southeastern US, mallines of Argentina, Mediterranean seasonal ponds of Europe and California, turloughs of Ireland, billabongs of Australia, among many others. Locations By temperature zone Wetlands are found throughout the world in different climates. Temperatures vary greatly depending on the location of the wetland. Many of the world's wetlands are in the temperate zones, midway between the North or South Poles and the equator. In these zones, summers are warm and winters are cold, but temperatures are not extreme. In subtropical zone wetlands, such as along the Gulf of Mexico, average temperatures might be . Wetlands in the tropics are subjected to much higher temperatures for a large portion of the year. Temperatures for wetlands on the Arabian Peninsula can exceed and these habitats would therefore be subject to rapid evaporation. In northeastern Siberia, which has a polar climate, wetland temperatures can be as low as . Peatlands in arctic and subarctic regions insulate the permafrost, thus delaying or preventing its thawing during summer, as well as inducing its formation. By precipitation amount The amount of precipitation a wetland receives varies widely according to its area. Wetlands in Wales, Scotland, and western Ireland typically receive about per year. In some places in Southeast Asia, where heavy rains occur, they can receive up to . In some drier regions, wetlands exist where as little as precipitation occurs each year. Temporal variation: Perennial systems Seasonal systems Episodic (periodic or intermittent) systems Ephemeral (short-lived) systems Surface flow may occur in some segments, with subsurface flow in other segments. Processes Wetlands vary widely due to local and regional differences in topography, hydrology, vegetation, and other factors, including human involvement. Other important factors include fertility, natural disturbance, competition, herbivory, burial and salinity. When peat accumulates, bogs and fens arise. Hydrology The most important factor producing wetlands is hydrology, or flooding. The duration of flooding or prolonged soil saturation by groundwater determines whether the resulting wetland has aquatic, marsh or swamp vegetation. Other important factors include soil fertility, natural disturbance, competition, herbivory, burial, and salinity. When peat from dead plants accumulates, bogs and fens develop. Wetland hydrology is associated with the spatial and temporal dispersion, flow, and physio-chemical attributes of surface and ground waters. Sources of hydrological flows into wetlands are predominantly precipitation, surface water (saltwater or freshwater), and groundwater. Water flows out of wetlands by evapotranspiration, surface flows and tides, and subsurface water outflow. 
Hydrodynamics (the movement of water through and from a wetland) affects hydro-periods (temporal fluctuations in water levels) by controlling the water balance and water storage within a wetland. Landscape characteristics control wetland hydrology and water chemistry. The O2 and CO2 concentrations of water depend upon temperature, atmospheric pressure and mixing with the air (from winds or water flows). Water chemistry within wetlands is determined by the pH, salinity, nutrients, conductivity, soil composition, hardness, and the sources of water. Water chemistry varies across landscapes and climatic regions. Wetlands are generally minerotrophic (waters contain dissolved materials from soils) with the exception of ombrotrophic bogs that are fed only by water from precipitation. Because bogs receive most of their water from precipitation and humidity from the atmosphere, their water usually has low mineral ionic composition. In contrast, wetlands fed by groundwater or tides have a higher concentration of dissolved nutrients and minerals. Fen peatlands receive water both from precipitation and ground water in varying amounts so their water chemistry ranges from acidic with low levels of dissolved minerals to alkaline with high accumulation of calcium and magnesium. Role of salinity Salinity has a strong influence on wetland water chemistry, particularly in coastal wetlands and in arid and semiarid regions with large precipitation deficits. Natural salinity is regulated by interactions between ground and surface water, which may be influenced by human activity. Soil Carbon is the major nutrient cycled within wetlands. Most nutrients, such as sulfur, phosphorus, carbon, and nitrogen are found within the soil of wetlands. Anaerobic and aerobic respiration in the soil influences the nutrient cycling of carbon, hydrogen, oxygen, and nitrogen, and the solubility of phosphorus thus contributing to the chemical variations in its water. Wetlands with low pH and saline conductivity may reflect the presence of acid sulfates and wetlands with average salinity levels can be heavily influenced by calcium or magnesium. Biogeochemical processes in wetlands are determined by soils with low redox potential. Biology The life forms of a wetland system includes its plants (flora) and animals (fauna) and microbes (bacteria, fungi). The most important factor is the wetland's duration of flooding. Other important factors include fertility and salinity of the water or soils. The chemistry of water flowing into wetlands depends on the source of water, the geological material that it flows through and the nutrients discharged from organic matter in the soils and plants at higher elevations. Plants and animals may vary within a wetland seasonally or in response to flood regimes. Flora There are four main groups of hydrophytes that are found in wetland systems throughout the world. Submerged wetland vegetation can grow in saline and fresh-water conditions. Some species have underwater flowers, while others have long stems to allow the flowers to reach the surface. Submerged species provide a food source for native fauna, habitat for invertebrates, and also possess filtration capabilities. Examples include seagrasses and eelgrass. Floating water plants or floating vegetation are usually small, like those in the Lemnoideae subfamily (duckweeds). Emergent vegetation like the cattails (Typha spp.), sedges (Carex spp.) and arrow arum (Peltandra virginica) rise above the surface of the water. 
When trees and shrubs comprise much of the plant cover in saturated soils, those areas in most cases are called swamps. The upland boundary of swamps is determined partly by water levels. This can be affected by dams. Some swamps can be dominated by a single species, such as silver maple swamps around the Great Lakes. Others, like those of the Amazon basin, have large numbers of different tree species. Other examples include cypress (Taxodium) and mangrove swamps. Fauna Many species of fish are highly dependent on wetland ecosystems. Seventy-five percent of the United States' commercial fish and shellfish stocks depend solely on estuaries to survive. Amphibians such as frogs and salamanders need both terrestrial and aquatic habitats in which to reproduce and feed. Because amphibians often inhabit depressional wetlands like prairie potholes and Carolina bays, the connectivity among these isolated wetlands is an important control of regional populations. While tadpoles feed on algae, adult frogs forage on insects. Frogs are sometimes used as an indicator of ecosystem health because their thin skin permits absorption of nutrients and toxins from the surrounding environment, resulting in increased extinction rates in unfavorable and polluted environmental conditions. Reptiles such as snakes, lizards, turtles, alligators and crocodiles are common in wetlands of some regions. In freshwater wetlands of the Southeastern US, alligators are common, and a freshwater species of crocodile occurs in South Florida. The Florida Everglades is the only place in the world where both crocodiles and alligators coexist. The saltwater crocodile inhabits estuaries and mangroves. Snapping turtles also inhabit wetlands. Birds, particularly waterfowl and waders, use wetlands extensively. Mammals of wetlands include numerous small and medium-sized species such as voles, bats, muskrats and platypus, in addition to large herbivorous and apex predator species such as the beaver, coypu, swamp rabbit, Florida panther, jaguar, and moose. Wetlands attract many mammals due to abundant seeds, berries, and other vegetation as food for herbivores, as well as abundant populations of invertebrates, small reptiles and amphibians as prey for predators. Invertebrates of wetlands include aquatic insects such as dragonflies, aquatic bugs and beetles, midges, mosquitos, crustaceans such as crabs, crayfish, shrimps, microcrustaceans, mollusks like clams, mussels, snails and worms. Invertebrates comprise more than half of the known animal species in wetlands, and are considered the primary food web link between plants and higher animals (such as fish and birds). Ecosystem services Depending on a wetland's geographic and topographic location, the functions it performs can support multiple ecosystem services, values, or benefits. 
United Nations Millennium Ecosystem Assessment and Ramsar Convention described wetlands as a whole to be of biosphere significance and societal importance in the following areas: Water storage (flood control) Groundwater replenishment Shoreline stabilization and storm protection Water purification Wastewater treatment (in constructed wetlands) Reservoirs of biodiversity Pollination Wetland products Cultural values Recreation and tourism Climate change mitigation and adaptation According to the Ramsar Convention: The economic worth of the ecosystem services provided to society by intact, naturally functioning wetlands is frequently much greater than the perceived benefits of converting them to 'more valuable' intensive land use – particularly as the profits from unsustainable use often go to relatively few individuals or corporations, rather than being shared by society as a whole. To replace these wetland ecosystem services, enormous amounts of money would need to be spent on water purification plants, dams, levees, and other hard infrastructure, and many of the services are impossible to replace. Storage reservoirs and flood protection Floodplains and closed-depression wetlands can provide the functions of storage reservoirs and flood protection. The wetland system of floodplains is formed from major rivers downstream from their headwaters. "The floodplains of major rivers act as natural storage reservoirs, enabling excess water to spread out over a wide area, which reduces its depth and speed. Wetlands close to the headwaters of streams and rivers can slow down rainwater runoff and spring snowmelt so that it does not run straight off the land into water courses. This can help prevent sudden, damaging floods downstream." Notable river systems that produce wide floodplains include the Nile River, the Niger river inland delta, the Zambezi River flood plain, the Okavango River inland delta, the Kafue River flood plain, the Lake Bangweulu flood plain (Africa), Mississippi River (US), Amazon River (South America), Yangtze River (China), Danube River (Central Europe) and Murray-Darling River (Australia). Groundwater replenishment Groundwater replenishment can be achieved for example by marsh, swamp, and subterranean karst and cave hydrological systems. The surface water visibly seen in wetlands only represents a portion of the overall water cycle, which also includes atmospheric water (precipitation) and groundwater. Many wetlands are directly linked to groundwater and they can be a crucial regulator of both the quantity and quality of water found below the ground. Wetlands that have permeable substrates like limestone or occur in areas with highly variable and fluctuating water tables have especially important roles in groundwater replenishment or water recharge. Substrates that are porous allow water to filter down through the soil and underlying rock into aquifers which are the source of much of the world's drinking water. Wetlands can also act as recharge areas when the surrounding water table is low and as a discharge zone when it is high. Shoreline stabilization and storm protection Mangroves, coral reefs, salt marsh can help with shoreline stabilization and storm protection. Tidal and inter-tidal wetland systems protect and stabilize coastal zones. Coral reefs provide a protective barrier to coastal shoreline. Mangroves stabilize the coastal zone from the interior and will migrate with the shoreline to remain adjacent to the boundary of the water. 
The main conservation benefit these systems have against storms and storm surges is the ability to reduce the speed and height of waves and floodwaters. The United Kingdom has begun the concept of managed coastal realignment. This management technique provides shoreline protection through restoration of natural wetlands rather than through applied engineering. In East Asia, reclamation of coastal wetlands has resulted in widespread transformation of the coastal zone, and up to 65% of coastal wetlands have been destroyed by coastal development. One analysis using the impact of hurricanes versus storm protection provided naturally by wetlands projected the value of this service at US$33,000/hectare/year. Water purification Water purification can be provided by floodplains, closed-depression wetlands, mudflat, freshwater marsh, salt marsh, mangroves. Wetlands cycle both sediments and nutrients, sometimes serving as buffers between terrestrial and aquatic ecosystems. A natural function of wetland vegetation is the up-take, storage, and (for nitrate) the removal of nutrients found in runoff water from the surrounding landscapes. Precipitation and surface runoff induces soil erosion, transporting sediment in suspension into and through waterways. All types of sediments whether composed of clay, silt, sand or gravel and rock can be carried into wetland systems through erosion. Wetland vegetation acts as a physical barrier to slow water flow and then trap sediment for both short or long periods of time. Suspended sediment can contain heavy metals that are also retained when wetlands trap the sediment. The ability of wetland systems to store or remove nutrients and trap sediment is highly efficient and effective but each system has a threshold. An overabundance of nutrient input from fertilizer run-off, sewage effluent, or non-point pollution will cause eutrophication. Upstream erosion from deforestation can overwhelm wetlands making them shrink in size and cause dramatic biodiversity loss through excessive sedimentation load. Wastewater treatment Constructed wetlands are built for wastewater treatment. An example of how a natural wetland is used to provide some degree of sewage treatment is the East Kolkata Wetlands in Kolkata, India. The wetlands cover , and are used to treat Kolkata's sewage. The nutrients contained in the wastewater sustain fish farms and agriculture. Reservoirs of biodiversity Wetland systems' rich biodiversity has become a focal point catalysed by the Ramsar Convention and World Wildlife Fund. The impact of maintaining biodiversity is seen at the local level through job creation, sustainability, and community productivity. A good example is the Lower Mekong basin which runs through Cambodia, Laos, and Vietnam, supporting over 55 million people. A key fish species which is overfished, the Piramutaba catfish, Brachyplatystoma vaillantii, migrates more than from its nursery grounds near the mouth of the Amazon River to its spawning grounds in Andean tributaries, above sea level, distributing plant seeds along the route. Intertidal mudflats have a level of productivity similar to that of some wetlands even while possessing a low number of species. The abundant invertebrates found within the mud are a food source for migratory waterfowl. Mudflats, saltmarshes, mangroves, and seagrass beds have high levels of both species richness and productivity, and are home to important nursery areas for many commercial fish stocks. 
Populations of many species are confined geographically to only one or a few wetland systems, often due to the long period of time that the wetlands have been physically isolated from other aquatic sources. For example, the number of endemic species in the Selenga River Delta of Lake Baikal in Russia classifies it as a hotspot for biodiversity and one of the most biodiverse wetlands in the entire world. Wetland products Wetlands naturally produce an array of vegetation and other ecological products that can be harvested for personal and commercial use. Many fishes have all or part of their life-cycle occurring within a wetland system. Fresh and saltwater fish are the main source of protein for about one billion people and comprise 15% of an additional 3.5 billion people's protein intake. Another food staple found in wetland systems is rice, a popular grain that is consumed at the rate of one fifth of the total global calorie count. In Bangladesh, Cambodia and Vietnam, where rice paddies are predominant on the landscape, rice consumption reach 70%. Some native wetland plants in the Caribbean and Australia are harvested sustainably for medicinal compounds; these include the red mangrove (Rhizophora mangle) which possesses antibacterial, wound-healing, anti-ulcer effects, and antioxidant properties. Other mangrove-derived products include fuelwood, salt (produced by evaporating seawater), animal fodder, traditional medicines (e.g. from mangrove bark), fibers for textiles and dyes and tannins. Additional services and uses of wetlands Some types of wetlands can serve as fire breaks that help slow the spread of minor wildfires. Larger wetland systems can influence local precipitation patterns. Some boreal wetland systems in catchment headwaters may help extend the period of flow and maintain water temperature in connected downstream waters. Pollination services are supported by many wetlands which may provide the only suitable habitat for pollinating insects, birds, and mammals in highly developed areas. Disturbances and human impacts Wetlands, the functions and services they provide as well as their flora and fauna, can be affected by several types of disturbances. The disturbances (sometimes termed stressors or alterations) can be human-associated or natural, direct or indirect, reversible or not, and isolated or cumulative. Disturbances include exogenous factors such as flooding or drought. Humans are disturbing and damaging wetlands for example by oil and gas extraction, building infrastructure, overgrazing of livestock, overfishing, alteration of wetlands including dredging and draining, nutrient pollution and water pollution. Disturbance puts different levels of stress on an environment depending on the type and duration of disturbance. Predominant disturbances of wetlands include: Enrichment/eutrophication Organic loading and reduced dissolved oxygen Contaminant toxicity Acidification Salinization Sedimentation Altered solar input (turbidity/shade) Vegetation removal Thermal alteration Drying/aridification Inundation/flooding Habitat fragmentation Other human impacts Disturbances can be further categorized as follows: Minor disturbance: Stress that maintains ecosystem integrity. Moderate disturbance: Ecosystem integrity is damaged but can recover in time without assistance. Impairment or severe disturbance: Human intervention may be needed in order for ecosystem to recover. 
Nutrient pollution comes from nitrogen inputs to aquatic systems and has drastically affected the dissolved nitrogen content of wetlands, introducing higher nutrient availability, which leads to eutrophication. Biodiversity loss occurs in wetland systems through land use changes, habitat destruction, pollution, exploitation of resources, and invasive species. For example, the introduction of water hyacinth, a plant native to South America, into Lake Victoria in East Africa, as well as duckweed into non-native areas of Queensland, Australia, has overtaken entire wetland systems, overwhelming the habitats and reducing the diversity of native plants and animals. Conversion to dry land To increase economic productivity, wetlands are often converted into dry land with dykes and drains and used for agricultural purposes. The construction of dykes and dams has negative consequences for individual wetlands and entire watersheds. Their proximity to lakes and rivers means that they are often developed for human settlement. Once settlements are constructed and protected by dykes, the settlements then become vulnerable to land subsidence and ever-increasing risk of flooding. The Mississippi River Delta around New Orleans, Louisiana, is a well-known example; the Danube Delta in Europe is another. Drainage of floodplains Drainage of floodplains or development activities that narrow floodplain corridors (such as the construction of levees) reduces the ability of coupled river-floodplain systems to control flood damage. That is because modified and less expansive systems must still manage the same amount of precipitation, causing flood peaks to be higher or deeper and floodwaters to travel faster. Water management engineering developments in the past century have degraded floodplain wetlands through the construction of artificial embankments such as dykes, bunds, levees, weirs, barrages and dams. All concentrate water into a main channel, and waters that historically spread slowly over a large, shallow area are concentrated. Loss of wetland floodplains results in more severe and damaging flooding. Catastrophic human impact in the Mississippi River floodplains was seen in the deaths of several hundred individuals during a levee breach in New Orleans caused by Hurricane Katrina. Human-made embankments along the Yangtze River floodplains have caused the main channel of the river to become prone to more frequent and damaging flooding. Some of these events include the loss of riparian vegetation, a 30% loss of the vegetation cover throughout the river's basin, a doubling of the percentage of the land affected by soil erosion, and a reduction in reservoir capacity through siltation build-up in floodplain lakes. Overfishing Overfishing is a major problem for sustainable use of wetlands. Concerns are developing over certain aspects of farm fishing, which uses natural wetlands and waterways to harvest fish for human consumption. Aquaculture is continuing to develop rapidly throughout the Asia-Pacific region, especially in China, where 90% of the total number of aquaculture farms occur, contributing 80% of global value. Some aquaculture has eliminated massive areas of wetland through practices such as the shrimp farming industry's destruction of mangroves. Even though the damaging impact of large-scale shrimp farming on the coastal ecosystem in many Asian countries has been widely recognized for quite some time now, it has proved difficult to mitigate since other employment avenues for people are lacking. 
Also, burgeoning global demand for shrimp has provided a large and ready market. Conservation Wetlands have historically been subjected to large draining efforts for development (real estate or agriculture), and flooding to create recreational lakes or generate hydropower. Some of the world's most important agricultural areas were wetlands that have been converted to farmland. Since the 1970s, more focus has been put on preserving wetlands for their natural functions. Since 1900, 65–70% of the world's wetlands have been lost. In order to maintain wetlands and sustain their functions, alterations and disturbances that are outside the normal range of variation should be minimized. Balancing wetland conservation with the needs of people Wetlands are vital ecosystems that enhance the livelihoods of the millions of people who live in and around them. Studies have shown that it is possible to conserve wetlands while improving the livelihoods of people living among them. Case studies conducted in Malawi and Zambia looked at how dambos – wet, grassy valleys or depressions where water seeps to the surface – can be farmed sustainably. Project outcomes included a high yield of crops, development of sustainable farming techniques, and water management strategies that generate enough water for irrigation. Ramsar Convention The Ramsar Convention (full title: Convention on Wetlands of International Importance, especially as Waterfowl Habitat) is an international treaty designed to address global concerns regarding wetland loss and degradation. The primary purposes of the treaty are to list wetlands of international importance and to promote their wise use, with the ultimate goal of preserving the world's wetlands. Methods include restricting access to some wetland areas, as well as educating the public to combat the misconception that wetlands are wastelands. The Convention works closely with five International Organisation Partners (IOPs). These are: Birdlife International, the IUCN, the International Water Management Institute, Wetlands International and the World Wide Fund for Nature. The partners provide technical expertise, help conduct or facilitate field studies and provide financial support. Restoration Restoration ecologists intend to return wetlands to their natural trajectory by aiding directly with the natural processes of the ecosystem. These direct methods vary with respect to the degree of physical manipulation of the natural environment, and each is associated with a different level of restoration. Restoration is needed after disturbance or perturbation of a wetland. There is no one way to restore a wetland, and the level of restoration required will be based on the level of disturbance, although each method of restoration does require preparation and administration. Levels of restoration Factors influencing the selected approach may include budget, time scale limitations, project goals, level of disturbance, landscape and ecological constraints, political and administrative agendas and socioeconomic priorities. Prescribed natural or assisted regeneration For this strategy, there is no biophysical manipulation, and the ecosystem is left to recover based on the process of succession alone. The focus is to eliminate and prevent further disturbance from occurring, and this type of restoration requires prior research to understand the probability that the wetland will recover naturally. 
This is likely to be the first method of approach since it is the least intrusive and least expensive although some biophysical non-intrusive manipulation may be required to enhance the rate of succession to an acceptable level. Example methods include prescribed burns to small areas, promotion of site specific soil microbiota and plant growth using nucleation planting whereby plants radiate from an initial planting site, and promotion of niche diversity or increasing the range of niches to promote use by a variety of different species. These methods can make it easier for the natural species to flourish by removing environmental impediments and can speed up the process of succession. Partial reconstruction For this strategy, a mixture of natural regeneration and manipulated environmental control is used. This may require some engineering, and more intensive biophysical manipulations including ripping of subsoil, agrichemical applications of herbicides or insecticides, laying of mulch, mechanical seed dispersal, and tree planting on a large scale. In these circumstances the wetland is impaired and without human assistance it would not recover within an acceptable period of time as determined by ecologists. Methods of restoration used will have to be determined on a site by site basis as each location will require a different approach based on levels of disturbance and the local ecosystem dynamics. Complete reconstruction This most expensive and intrusive method of reconstruction requires engineering and ground up reconstruction. Because there is a redesign of the entire ecosystem it is important that the natural trajectory of the ecosystem be considered and that the plant species promoted will eventually return the ecosystem towards its natural trajectory. In many cases constructed wetlands are often designed to treat stormwater/wastewater runoff. They can be used in developments as part of water-sensitive urban design systems and have benefits such as flood mitigation, removing pollutants, carbon sequestration, providing habitat for wildlife and biodiversity in often highly urbanised and fragmented landscapes. Traditional knowledge The ideas from traditional ecological knowledge can be applied as a holistic approach to the restoration of wetlands. These ideas focus more on responding to the observations detected from the environment considering that each part of a wetland ecosystem is interconnected. Applying these practices on specific locations of wetlands increase productivity, biodiversity, and improve its resilience. These practices include monitoring wetland resources, planting propagules, and addition of key species in order to create a self-sustaining wetland ecosystem. Climate change aspects Greenhouse gas emissions In Southeast Asia, peat swamp forests and soils are being drained, burnt, mined, and overgrazed, contributing to climate change. As a result of peat drainage, the organic carbon that had built up over thousands of years and is normally under water is suddenly exposed to the air. The peat decomposes and is converted into carbon dioxide (CO2), which is then released into the atmosphere. Peat fires cause the same process to occur rapidly and in addition create enormous clouds of smoke that cross international borders, which now happens almost yearly in Southeast Asia. While peatlands constitute only 3% of the world's land area, their degradation produces 7% of all CO2 emissions. 
Climate change mitigation Studies have favorably identified the potential for coastal wetlands (also called blue carbon ecosystems) to provide some degree of climate change mitigation in two ways: by conservation, reducing the greenhouse gas emissions arising from the loss and degradation of such habitats, and by restoration, to increase carbon dioxide drawdown and its long-term storage. However, CO2 removal using coastal blue carbon restoration has questionable cost-effectiveness when considered only as a climate mitigation action, either for carbon-offsetting or for inclusion in Nationally Determined Contributions. When wetlands are restored they have mitigation effects through their ability to sink carbon, converting a greenhouse gas (carbon dioxide) to solid plant material through the process of photosynthesis, and also through their ability to store and regulate water. Wetlands store approximately 44.6 million tonnes of carbon per year globally (estimate from 2003). In salt marshes and mangrove swamps in particular, the average carbon sequestration rate is while peatlands sequester approximately . Coastal wetlands, such as tropical mangroves and some temperate salt marshes, are known to be sinks for carbon that otherwise contribute to climate change in its gaseous forms (carbon dioxide and methane). The ability of many tidal wetlands to store carbon and minimize methane flux from tidal sediments has led to sponsorship of blue carbon initiatives that are intended to enhance those processes. Climate change adaptation The restoration of coastal blue carbon ecosystems is highly advantageous for climate change adaptation, coastal protection, food provision and biodiversity conservation. Since the middle of the 20th century, human-caused climate change has resulted in observable changes in the global water cycle. A warming climate makes extremely wet and very dry occurrences more severe, causing more severe floods and droughts. For this reason, some of the ecosystem services that wetlands provide (e.g. water storage and flood control, groundwater replenishment, shoreline stabilization and storm protection) are important for climate change adaptation measures. In most parts of the world and under all emission scenarios, water cycle variability and accompanying extremes are anticipated to rise more quickly than the changes of average values. Valuation The value of a wetland to local communities typically involves first mapping a region's wetlands, then assessing the functions and ecosystem services the wetlands provide individually and cumulatively, and finally evaluating that information to prioritize or rank individual wetlands or wetland types for conservation, management, restoration, or development. Over the longer term, it requires keeping inventories of known wetlands and monitoring a representative sample of the wetlands to determine changes due to both natural and human factors. Assessment Rapid assessment methods are used to score, rank, rate, or categorize various functions, ecosystem services, species, communities, levels of disturbance, and/or ecological health of a wetland or group of wetlands. This is often done to prioritize particular wetlands for conservation (avoidance) or to determine the degree to which loss or alteration of wetland functions should be compensated, such as by restoring degraded wetlands elsewhere or providing additional protections to existing wetlands. 
Rapid assessment methods are also applied before and after a wetland has been restored or altered, to help monitor or predict the effects of those actions on various wetland functions and the services they provide. Assessments are typically considered to be "rapid" when they require only a single visit to the wetland lasting less than one day, which in some cases may include interpretation of aerial imagery and geographic information system (GIS) analyses of existing spatial data, but not detailed post-visit laboratory analyses of water or biological samples. To achieve consistency among persons doing the assessment, rapid methods present indicator variables as questions or checklists on standardized data forms, and most methods standardize the scoring or rating procedure that is used to combine question responses into estimates of the levels of specified functions relative to the levels estimated in other wetlands ("calibration sites") assessed previously in a region. Rapid assessment methods, partly because they often use dozens of indicators pertaining to conditions surrounding a wetland as well as within the wetland itself, aim to provide estimates of wetland functions and services that are more accurate and repeatable than simply describing a wetland's class type. A need for wetland assessments to be rapid arises mostly when government agencies set deadlines for decisions affecting a wetland, or when the number of wetlands needing information on their functions or condition is large. Inventory Although developing a global inventory of wetlands has proven to be a large and difficult undertaking, many efforts at more local scales have been successful. Current efforts are based on available data, but both classification and spatial resolution have sometimes proven to be inadequate for regional or site-specific environmental management decision making. It is difficult to identify small, long, and narrow wetlands within the landscape. Many of today's remote sensing satellites do not have sufficient spatial and spectral resolution to monitor wetland conditions, although multispectral IKONOS and QuickBird data may offer improved spatial resolutions once it is 4 m or higher. Majority of the pixels are just mixtures of several plant species or vegetation types and are difficult to isolate which translates into an inability to classify the vegetation that defines the wetland. The growing availability of 3D vegetation and topography data from LiDAR has partially addressed the limitation of traditional multispectral imagery, as demonstrated in some case studies across the world. Monitoring and mapping A wetland needs to be monitored over time to assess whether it is functioning at an ecologically sustainable level or whether it is becoming degraded. Degraded wetlands will suffer a loss in water quality, loss of sensitive species, and aberrant functioning of soil geochemical processes. Practically, many natural wetlands are difficult to monitor from the ground as they quite often are difficult to access and may require exposure to dangerous plants and animals as well as diseases borne by insects or other invertebrates. Remote sensing such as aerial imagery and satellite imaging provides effective tools to map and monitor wetlands across large geographic regions and over time. Many remote sensing methods can be used to map wetlands. 
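One simple example of such a method is thresholding a spectral water index computed from multispectral imagery. The following is a minimal hypothetical sketch in Python: the reflectance values and the threshold are made up for illustration, and real mapping workflows add calibration, cloud masking, field validation, and much more.

import numpy as np

# Hypothetical reflectance rasters for the green and near-infrared bands
# (tiny 2x2 arrays stand in for real satellite imagery).
green = np.array([[0.10, 0.30], [0.25, 0.05]])
nir = np.array([[0.40, 0.10], [0.08, 0.45]])

# Normalized Difference Water Index: high values indicate open water or very
# wet surfaces, one simple ingredient of wetland mapping from imagery.
ndwi = (green - nir) / (green + nir)

wet_mask = ndwi > 0.0        # illustrative threshold; real studies calibrate this
print(ndwi.round(2))
print(wet_mask)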
The integration of multi-sourced data such as LiDAR and aerial photos proves more effective at mapping wetlands than the use of aerial photos alone, especially with the aid of modern machine learning methods (e.g., deep learning). Overall, using digital data provides a standardized data-collection procedure and an opportunity for data integration within a geographic information system. Legislation International efforts National efforts United States Each country and region tends to have a codified definition of wetlands for legal purposes. In the United States, wetlands are defined as "those areas that are inundated or saturated by surface or groundwater at a frequency and duration sufficient to support, and that under normal circumstances do support, a prevalence of vegetation typically adapted for life in saturated soil conditions. Wetlands generally include swamps, marshes, bogs and similar areas". This definition has been used in the enforcement of the Clean Water Act. Some US states, such as Massachusetts and New York, have separate definitions that may differ from the federal government's. In the United States Code, the term wetland is defined "as land that (A) has a predominance of hydric soils, (B) is inundated or saturated by surface or groundwater at a frequency and duration sufficient to support a prevalence of hydrophytic vegetation typically adapted for life in saturated soil conditions and (C) under normal circumstances supports a prevalence of such vegetation." Related to these legal definitions, "normal circumstances" are expected to occur during the wet portion of the growing season under normal climatic conditions (not unusually dry or unusually wet) and in the absence of significant disturbance. It is not uncommon for a wetland to be dry for long portions of the growing season. Still, under normal environmental conditions, the soils will be inundated to the surface, creating anaerobic conditions persisting through the wet portion of the growing season. Canada Wetlands and wetland policies in Canada Other Individual Provincial and Territorial Based Policies Examples The world's largest wetlands include the swamp forests of the Amazon River basin, the peatlands of the West Siberian Plain, the Pantanal in South America, and the Sundarbans in the Ganges-Brahmaputra delta. See also References External links Aquatic ecology Environmental terminology Freshwater ecology Habitat Terrestrial biomes Bodies of water Articles containing video clips
Wetland
[ "Biology", "Environmental_science" ]
8,741
[ "Wetlands", "Aquatic ecology", "Ecosystems", "Hydrology" ]
102,140
https://en.wikipedia.org/wiki/Perturbation%20theory
In mathematics and applied mathematics, perturbation theory comprises methods for finding an approximate solution to a problem, by starting from the exact solution of a related, simpler problem. A critical feature of the technique is a middle step that breaks the problem into "solvable" and "perturbative" parts. In regular perturbation theory, the solution is expressed as a power series in a small parameter The first term is the known solution to the solvable problem. Successive terms in the series at higher powers of usually become smaller. An approximate 'perturbation solution' is obtained by truncating the series, often keeping only the first two terms, the solution to the known problem and the 'first order' perturbation correction. Perturbation theory is used in a wide range of fields and reaches its most sophisticated and advanced forms in quantum field theory. Perturbation theory (quantum mechanics) describes the use of this method in quantum mechanics. The field in general remains actively and heavily researched across multiple disciplines. Description Perturbation theory develops an expression for the desired solution in terms of a formal power series known as a perturbation series in some "small" parameter, that quantifies the deviation from the exactly solvable problem. The leading term in this power series is the solution of the exactly solvable problem, while further terms describe the deviation in the solution, due to the deviation from the initial problem. Formally, we have for the approximation to the full solution a series in the small parameter (here called ), like the following: In this example, would be the known solution to the exactly solvable initial problem, and the terms represent the first-order, second-order, third-order, and higher-order terms, which may be found iteratively by a mechanistic but increasingly difficult procedure. For small these higher-order terms in the series generally (but not always) become successively smaller. An approximate "perturbative solution" is obtained by truncating the series, often by keeping only the first two terms, expressing the final solution as a sum of the initial (exact) solution and the "first-order" perturbative correction Some authors use big O notation to indicate the order of the error in the approximate solution: If the power series in converges with a nonzero radius of convergence, the perturbation problem is called a regular perturbation problem. In regular perturbation problems, the asymptotic solution smoothly approaches the exact solution. However, the perturbation series can also diverge, and the truncated series can still be a good approximation to the true solution if it is truncated at a point at which its elements are minimum. This is called an asymptotic series. If the perturbation series is divergent or not a power series (for example, if the asymptotic expansion must include non-integer powers or negative powers ) then the perturbation problem is called a singular perturbation problem. Many special techniques in perturbation theory have been developed to analyze singular perturbation problems. Prototypical example The earliest use of what would now be called perturbation theory was to deal with the otherwise unsolvable mathematical problems of celestial mechanics: for example the orbit of the Moon, which moves noticeably differently from a simple Keplerian ellipse because of the competing gravitation of the Earth and the Sun. 
Perturbation methods start with a simplified form of the original problem, which is simple enough to be solved exactly. In celestial mechanics, this is usually a Keplerian ellipse. Under Newtonian gravity, an ellipse is exactly correct when there are only two gravitating bodies (say, the Earth and the Moon) but not quite correct when there are three or more objects (say, the Earth, Moon, Sun, and the rest of the Solar System) and not quite correct when the gravitational interaction is stated using formulations from general relativity. Perturbative expansion Keeping the above example in mind, one follows a general recipe to obtain the perturbation series. The perturbative expansion is created by adding successive corrections to the simplified problem. The corrections are obtained by forcing consistency between the unperturbed solution, and the equations describing the system in full. Write for this collection of equations; that is, let the symbol stand in for the problem to be solved. Quite often, these are differential equations, thus, the letter "D". The process is generally mechanical, if laborious. One begins by writing the equations so that they split into two parts: some collection of equations which can be solved exactly, and some additional remaining part for some small The solution (to ) is known, and one seeks the general solution to Next the approximation is inserted into . This results in an equation for which, in the general case, can be written in closed form as a sum over integrals over Thus, one has obtained the first-order correction and thus is a good approximation to It is a good approximation, precisely because the parts that were ignored were of size The process can then be repeated, to obtain corrections and so on. In practice, this process rapidly explodes into a profusion of terms, which become extremely hard to manage by hand. Isaac Newton is reported to have said, regarding the problem of the Moon's orbit, that "It causeth my head to ache." This unmanageability has forced perturbation theory to develop into a high art of managing and writing out these higher order terms. One of the fundamental breakthroughs in quantum mechanics for controlling the expansion are the Feynman diagrams, which allow quantum mechanical perturbation series to be represented by a sketch. Examples Perturbation theory has been used in a large number of different settings in physics and applied mathematics. Examples of the "collection of equations" include algebraic equations, differential equations (e.g., the equations of motion and commonly wave equations), thermodynamic free energy in statistical mechanics, radiative transfer, and Hamiltonian operators in quantum mechanics. Examples of the kinds of solutions that are found perturbatively include the solution of the equation of motion (e.g., the trajectory of a particle), the statistical average of some physical quantity (e.g., average magnetization), and the ground state energy of a quantum mechanical problem. Examples of exactly solvable problems that can be used as starting points include linear equations, including linear equations of motion (harmonic oscillator, linear wave equation), statistical or quantum-mechanical systems of non-interacting particles (or in general, Hamiltonians or free energies containing only terms quadratic in all degrees of freedom). 
Examples of systems that can be solved with perturbations include systems with nonlinear contributions to the equations of motion, interactions between particles, terms of higher powers in the Hamiltonian/free energy. For physical problems involving interactions between particles, the terms of the perturbation series may be displayed (and manipulated) using Feynman diagrams. History Perturbation theory was first devised to solve otherwise intractable problems in the calculation of the motions of planets in the solar system. For instance, Newton's law of universal gravitation explained the gravitation between two astronomical bodies, but when a third body is added, the problem was, "How does each body pull on each?" Kepler's orbital equations only solve Newton's gravitational equations when the latter are limited to just two bodies interacting. The gradually increasing accuracy of astronomical observations led to incremental demands in the accuracy of solutions to Newton's gravitational equations, which led many eminent 18th and 19th century mathematicians, notably Joseph-Louis Lagrange and Pierre-Simon Laplace, to extend and generalize the methods of perturbation theory. These well-developed perturbation methods were adopted and adapted to solve new problems arising during the development of quantum mechanics in 20th century atomic and subatomic physics. Paul Dirac developed quantum perturbation theory in 1927 to evaluate when a particle would be emitted in radioactive elements. This was later named Fermi's golden rule. Perturbation theory in quantum mechanics is fairly accessible, mainly because quantum mechanics is limited to linear wave equations, but also since the quantum mechanical notation allows expressions to be written in fairly compact form, thus making them easier to comprehend. This resulted in an explosion of applications, ranging from the Zeeman effect to the hyperfine splitting in the hydrogen atom. Despite the simpler notation, perturbation theory applied to quantum field theory still easily gets out of hand. Richard Feynman developed the celebrated Feynman diagrams by observing that many terms repeat in a regular fashion. These terms can be replaced by dots, lines, squiggles and similar marks, each standing for a term, a denominator, an integral, and so on; thus complex integrals can be written as simple diagrams, with absolutely no ambiguity as to what they mean. The one-to-one correspondence between the diagrams, and specific integrals is what gives them their power. Although originally developed for quantum field theory, it turns out the diagrammatic technique is broadly applicable to many other perturbative series (although not always worthwhile). In the second half of the 20th century, as chaos theory developed, it became clear that unperturbed systems were in general completely integrable systems, while the perturbed systems were not. This promptly lead to the study of "nearly integrable systems", of which the KAM torus is the canonical example. At the same time, it was also discovered that many (rather special) non-linear systems, which were previously approachable only through perturbation theory, are in fact completely integrable. This discovery was quite dramatic, as it allowed exact solutions to be given. This, in turn, helped clarify the meaning of the perturbative series, as one could now compare the results of the series to the exact solutions. 
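As a minimal illustration of such a comparison, the following hypothetical sketch (the perturbed equation x² + εx − 1 = 0 and the use of SymPy are choices made here for illustration, not taken from the article) builds a perturbation series order by order and checks it against the exact solution:

import sympy as sp

eps = sp.symbols('epsilon', positive=True)
x0, x1, x2 = sp.symbols('x0 x1 x2')

# Solvable problem: x**2 - 1 = 0 (take the root x0 = 1).
# Illustrative perturbed problem: x**2 + eps*x - 1 = 0.
# Ansatz x = x0 + eps*x1 + eps**2*x2; collect and solve order by order.
ansatz = x0 + eps*x1 + eps**2*x2
residual = sp.expand(ansatz**2 + eps*ansatz - 1)

sol = {}
sol[x0] = sp.solve(residual.coeff(eps, 0), x0)[1]            # order eps**0: x0 = 1
sol[x1] = sp.solve(residual.coeff(eps, 1).subs(sol), x1)[0]  # order eps**1: x1 = -1/2
sol[x2] = sp.solve(residual.coeff(eps, 2).subs(sol), x2)[0]  # order eps**2: x2 = 1/8

approx = ansatz.subs(sol)                       # 1 - eps/2 + eps**2/8
exact = (-eps + sp.sqrt(eps**2 + 4)) / 2        # exact positive root

# The truncated series matches the expansion of the exact root to this order.
print(sp.simplify(sp.series(exact, eps, 0, 3).removeO() - approx))   # prints 0

Keeping only the first two terms, 1 − ε/2, corresponds to the "first-order" perturbative solution described earlier.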
The improved understanding of dynamical systems coming from chaos theory helped shed light on what was termed the small denominator problem or small divisor problem. In the 19th century, Poincaré observed (as perhaps had earlier mathematicians) that sometimes 2nd and higher order terms in the perturbative series have "small denominators": That is, they have the general form where and are some complicated expressions pertinent to the problem to be solved, and and are real numbers; very often they are the energy of normal modes. The small divisor problem arises when the difference is small, causing the perturbative correction to "blow up", becoming as large as or larger than the zeroth-order term. This situation signals a breakdown of perturbation theory: It stops working at this point, and cannot be expanded or summed any further. In formal terms, the perturbative series is an asymptotic series: a useful approximation for a few terms, but one that at some point becomes less accurate if more terms are added. The breakthrough from chaos theory was an explanation of why this happened: The small divisors occur whenever perturbation theory is applied to a chaotic system. The one signals the presence of the other. Beginnings in the study of planetary motion Since the planets are very remote from each other, and since their mass is small compared to the mass of the Sun, the gravitational forces between the planets can be neglected, and the planetary motion is considered, to a first approximation, as taking place along Kepler's orbits, which are defined by the equations of the two-body problem, the two bodies being the planet and the Sun. As astronomical data came to be known with much greater accuracy, it became necessary to consider how the motion of a planet around the Sun is affected by other planets. This was the origin of the three-body problem; thus, in studying the system Moon–Earth–Sun, the mass ratio between the Moon and the Earth was chosen as the "small parameter". Lagrange and Laplace were the first to advance the view that the so-called "constants" which describe the motion of a planet around the Sun gradually change: They are "perturbed", as it were, by the motion of other planets and vary as a function of time; hence the name "perturbation theory". Perturbation theory was investigated by the classical scholars – Laplace, Siméon Denis Poisson, Carl Friedrich Gauss – as a result of which the computations could be performed with a very high accuracy. The discovery of the planet Neptune in 1846 by Urbain Le Verrier, based on the deviations in the motion of the planet Uranus, was a dramatic demonstration of these methods. Le Verrier sent the predicted coordinates to J.G. Galle, who successfully observed Neptune through his telescope – a triumph of perturbation theory. Perturbation orders The standard exposition of perturbation theory is given in terms of the order to which the perturbation is carried out: first-order perturbation theory or second-order perturbation theory, and whether the perturbed states are degenerate, which requires singular perturbation. In the singular case extra care must be taken, and the theory is slightly more elaborate. In chemistry Many of the ab initio quantum chemistry methods use perturbation theory directly or are closely related methods. Implicit perturbation theory works with the complete Hamiltonian from the very beginning and never specifies a perturbation operator as such.
Møller–Plesset perturbation theory uses the difference between the Hartree–Fock Hamiltonian and the exact non-relativistic Hamiltonian as the perturbation. The zero-order energy is the sum of orbital energies. The first-order energy is the Hartree–Fock energy and electron correlation is included at second-order or higher. Calculations to second, third or fourth order are very common and the code is included in most ab initio quantum chemistry programs. A related but more accurate method is the coupled cluster method. Shell-crossing A shell-crossing (sc) occurs in perturbation theory when matter trajectories intersect, forming a singularity. This limits the predictive power of physical simulations at small scales. See also Boundary layer Cosmological perturbation theory Deformation (mathematics) Dynamic nuclear polarisation Eigenvalue perturbation Homotopy perturbation method Interval finite element Lyapunov stability Method of dominant balance Order of approximation Perturbation theory (quantum mechanics) Structural stability References External links Alternative approach to quantum perturbation theory Concepts in physics Functional analysis Ordinary differential equations Mathematical physics Computational chemistry Asymptotic analysis
Perturbation theory
[ "Physics", "Chemistry", "Mathematics" ]
2,999
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Applied mathematics", "Theoretical physics", "Mathematical objects", "Quantum mechanics", "Theoretical chemistry", "Computational chemistry", "Mathematical relations", "Asymptotic analysis", "nan", "Mathematical physi...
102,182
https://en.wikipedia.org/wiki/Celestial%20mechanics
Celestial mechanics is the branch of astronomy that deals with the motions of objects in outer space. Historically, celestial mechanics applies principles of physics (classical mechanics) to astronomical objects, such as stars and planets, to produce ephemeris data. History Modern analytic celestial mechanics started with Isaac Newton's Principia (1687). The name celestial mechanics is more recent than that. Newton wrote that the field should be called "rational mechanics". The term "dynamics" came in a little later with Gottfried Leibniz, and over a century after Newton, Pierre-Simon Laplace introduced the term celestial mechanics. Prior to Kepler, there was little connection between exact, quantitative prediction of planetary positions, using geometrical or numerical techniques, and contemporary discussions of the physical causes of the planets' motion. Laws of planetary motion Johannes Kepler was the first to closely integrate predictive geometrical astronomy, which had been dominant from Ptolemy in the 2nd century to Copernicus, with physical concepts to produce a New Astronomy, Based upon Causes, or Celestial Physics in 1609. His work led to the laws of planetary orbits, which he developed using his physical principles and the planetary observations made by Tycho Brahe. Kepler's elliptical model greatly improved the accuracy of predictions of planetary motion, years before Newton developed his law of gravitation in 1686. Newtonian mechanics and universal gravitation Isaac Newton is credited with introducing the idea that the motion of objects in the heavens, such as planets, the Sun, and the Moon, and the motion of objects on the ground, like cannon balls and falling apples, could be described by the same set of physical laws. In this sense he unified celestial and terrestrial dynamics. Using his law of gravity, Newton confirmed Kepler's laws for elliptical orbits by deriving them from the gravitational two-body problem, which Newton included in his epochal Philosophiæ Naturalis Principia Mathematica in 1687. Three-body problem After Newton, Joseph-Louis Lagrange attempted to solve the three-body problem in 1772, analyzed the stability of planetary orbits, and discovered the existence of the Lagrange points. Lagrange also reformulated the principles of classical mechanics, emphasizing energy more than force, and developed a method to use a single polar coordinate equation to describe any orbit, even those that are parabolic and hyperbolic. This is useful for calculating the behaviour of planets and comets and such (parabolic and hyperbolic orbits are conic-section extensions of Kepler's elliptical orbits). More recently, it has also become useful to calculate spacecraft trajectories. Henri Poincaré published two now classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic behavior, and so on). Poincaré showed that the three-body problem is not integrable. In other words, the general solution of the three-body problem cannot be expressed in terms of algebraic and transcendental functions through unambiguous coordinates and velocities of the bodies. His work in this area was the first major achievement in celestial mechanics since Isaac Newton.
These monographs include an idea of Poincaré, which later became the basis for mathematical "chaos theory" (see, in particular, the Poincaré recurrence theorem) and the general theory of dynamical systems. He introduced the important concept of bifurcation points and proved the existence of equilibrium figures such as the non-ellipsoids, including ring-shaped and pear-shaped figures, and their stability. For this discovery, Poincaré received the Gold Medal of the Royal Astronomical Society (1900). Standardisation of astronomical tables Simon Newcomb was a Canadian-American astronomer who revised Peter Andreas Hansen's table of lunar positions. In 1877, assisted by George William Hill, he recalculated all the major astronomical constants. After 1884 he conceived, with A.M.W. Downing, a plan to resolve much international confusion on the subject. By the time he attended a standardisation conference in Paris, France, in May 1886, the international consensus was that all ephemerides should be based on Newcomb's calculations. A further conference as late as 1950 confirmed Newcomb's constants as the international standard. Anomalous precession of Mercury Albert Einstein explained the anomalous precession of Mercury's perihelion in his 1916 paper The Foundation of the General Theory of Relativity. General relativity led astronomers to recognize that Newtonian mechanics did not provide the highest accuracy. Examples of problems Celestial motion, without additional forces such as drag forces or the thrust of a rocket, is governed by the reciprocal gravitational acceleration between masses. A generalization is the n-body problem, where a number n of masses are mutually interacting via the gravitational force. Although analytically not integrable in the general case, the integration can be well approximated numerically. Examples: 4-body problem: spaceflight to Mars (for parts of the flight the influence of one or two bodies is very small, so that there we have a 2- or 3-body problem; see also the patched conic approximation) 3-body problem: Quasi-satellite Spaceflight to, and stay at a Lagrangian point In the case (two-body problem) the configuration is much simpler than for . In this case, the system is fully integrable and exact solutions can be found. Examples: A binary star, e.g., Alpha Centauri (approx. the same mass) A binary asteroid, e.g., 90 Antiope (approx. the same mass) A further simplification is based on the "standard assumptions in astrodynamics", which include that one body, the orbiting body, is much smaller than the other, the central body. This is also often approximately valid. Examples: The Solar System orbiting the center of the Milky Way A planet orbiting the Sun A moon orbiting a planet A spacecraft orbiting Earth, a moon, or a planet (in the latter cases the approximation only applies after arrival at that orbit) Perturbation theory Perturbation theory comprises mathematical methods that are used to find an approximate solution to a problem which cannot be solved exactly. (It is closely related to methods used in numerical analysis, which are ancient.) The earliest use of modern perturbation theory was to deal with the otherwise unsolvable mathematical problems of celestial mechanics: Newton's solution for the orbit of the Moon, which moves noticeably differently from a simple Keplerian ellipse because of the competing gravitation of the Earth and the Sun. 
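As noted above, such motion can be approximated numerically even when no closed-form solution exists. The following is a minimal hypothetical sketch in Python (the leapfrog integrator, the units, and the circular starting orbit are choices made here for illustration, not taken from the article) that propagates a planet around the Sun in the two-body approximation:

import numpy as np

# Two-body problem in units where G*M_sun = 4*pi**2 (distances in AU, times in years).
GM = 4 * np.pi**2

def accel(r):
    """Gravitational acceleration of the planet, with the Sun fixed at the origin."""
    return -GM * r / np.linalg.norm(r)**3

r = np.array([1.0, 0.0])            # start 1 AU from the Sun
v = np.array([0.0, 2 * np.pi])      # speed of a circular orbit at 1 AU
n_steps = 1000
dt = 1.0 / n_steps                  # time step in years (one full orbit = 1 year)

energy0 = 0.5 * v @ v - GM / np.linalg.norm(r)
for _ in range(n_steps):            # leapfrog (kick-drift-kick) integration
    v = v + 0.5 * dt * accel(r)     # kick
    r = r + dt * v                  # drift
    v = v + 0.5 * dt * accel(r)     # kick
energy1 = 0.5 * v @ v - GM / np.linalg.norm(r)

print(r)                            # returns close to the starting point [1, 0]
print(abs(energy1 - energy0))       # small energy drift (leapfrog is symplectic)

A perturbing third body would simply add another term to accel; that is exactly the situation the perturbative treatment discussed next addresses analytically.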
Perturbation methods start with a simplified form of the original problem, which is carefully chosen to be exactly solvable. In celestial mechanics, this is usually a Keplerian ellipse, which is correct when there are only two gravitating bodies (say, the Earth and the Moon), or a circular orbit, which is only correct in special cases of two-body motion, but is often close enough for practical use. The solved, but simplified problem is then "perturbed" to make its time-rate-of-change equations for the object's position closer to the values from the real problem, such as including the gravitational attraction of a third, more distant body (the Sun). The slight changes that result from the terms in the equations – which themselves may have been simplified yet again – are used as corrections to the original solution. Because simplifications are made at every step, the corrections are never perfect, but even one cycle of corrections often provides a remarkably better approximate solution to the real problem. There is no requirement to stop at only one cycle of corrections. A partially corrected solution can be re-used as the new starting point for yet another cycle of perturbations and corrections. In principle, for most problems the recycling and refining of prior solutions to obtain a new generation of better solutions could continue indefinitely, to any desired finite degree of accuracy. The common difficulty with the method is that the corrections usually progressively make the new solutions very much more complicated, so each cycle is much more difficult to manage than the previous cycle of corrections. Newton is reported to have said, regarding the problem of the Moon's orbit "It causeth my head to ache." This general procedure – starting with a simplified problem and gradually adding corrections that make the starting point of the corrected problem closer to the real situation – is a widely used mathematical tool in advanced sciences and engineering. It is the natural extension of the "guess, check, and fix" method used anciently with numbers. Reference frame Problems in celestial mechanics are often posed in simplifying reference frames, such as the synodic reference frame applied to the three-body problem, where the origin coincides with the barycenter of the two larger celestial bodies. Other reference frames for n-body simulations include those that place the origin to follow the center of mass of a body, such as the heliocentric and the geocentric reference frames. The choice of reference frame gives rise to many phenomena, including the retrograde motion of superior planets while on a geocentric reference frame. Orbital mechanics See also Astrometry is a part of astronomy that deals with measuring the positions of stars and other celestial bodies, their distances and movements. Astrophysics Celestial navigation is a position fixing technique that was the first system devised to help sailors locate themselves on a featureless ocean. Developmental Ephemeris or the Jet Propulsion Laboratory Developmental Ephemeris (JPL DE) is a widely used model of the solar system, which combines celestial mechanics with numerical analysis and astronomical and spacecraft data. Dynamics of the celestial spheres concerns pre-Newtonian explanations of the causes of the motions of the stars and planets. Dynamical time scale Ephemeris is a compilation of positions of naturally occurring astronomical objects as well as artificial satellites in the sky at a given time or times. 
Gravitation Lunar theory attempts to account for the motions of the Moon. Numerical analysis is a branch of mathematics, pioneered by celestial mechanicians, for calculating approximate numerical answers (such as the position of a planet in the sky) which are too difficult to solve down to a general, exact formula. Creating a numerical model of the solar system was the original goal of celestial mechanics, and has only been imperfectly achieved. It continues to motivate research. An orbit is the path that an object makes, around another object, whilst under the influence of a source of centripetal force, such as gravity. Orbital elements are the parameters needed to specify a Newtonian two-body orbit uniquely. Osculating orbit is the temporary Keplerian orbit about a central body that an object would continue on, if other perturbations were not present. Retrograde motion is orbital motion in a system, such as a planet and its satellites, that is contrary to the direction of rotation of the central body, or more generally contrary in direction to the net angular momentum of the entire system. Apparent retrograde motion is the periodic, apparently backwards motion of planetary bodies when viewed from the Earth (an accelerated reference frame). Satellite is an object that orbits another object (known as its primary). The term is often used to describe an artificial satellite (as opposed to natural satellites, or moons). The common noun ‘moon’ (not capitalized) is used to mean any natural satellite of the other planets. Tidal force is the combination of out-of-balance forces and accelerations of (mostly) solid bodies that raises tides in bodies of liquid (oceans), atmospheres, and strains planets' and satellites' crusts. Two solutions, called VSOP82 and VSOP87 are versions one mathematical theory for the orbits and positions of the major planets, which seeks to provide accurate positions over an extended period of time. Notes References Forest R. Moulton, Introduction to Celestial Mechanics, 1984, Dover, John E. Prussing, Bruce A. Conway, Orbital Mechanics, 1993, Oxford Univ. Press William M. Smart, Celestial Mechanics, 1961, John Wiley. J.M.A. Danby, Fundamentals of Celestial Mechanics, 1992, Willmann-Bell Alessandra Celletti, Ettore Perozzi, Celestial Mechanics: The Waltz of the Planets, 2007, Springer-Praxis, . Michael Efroimsky. 2005. Gauge Freedom in Orbital Mechanics. Annals of the New York Academy of Sciences, Vol. 1065, pp. 346-374 Alessandra Celletti, Stability and Chaos in Celestial Mechanics. Springer-Praxis 2010, XVI, 264 p., Hardcover Further reading Encyclopedia:Celestial mechanics Scholarpedia Expert articles External links Astronomy of the Earth's Motion in Space, high-school level educational web site by David P. Stern Newtonian Dynamics Undergraduate level course by Richard Fitzpatrick. This includes Lagrangian and Hamiltonian Dynamics and applications to celestial mechanics, gravitational potential theory, the 3-body problem and Lunar motion (an example of the 3-body problem with the Sun, Moon, and the Earth). Research Marshall Hampton's research page: Central configurations in the n-body problem Artwork Celestial Mechanics is a Planetarium Artwork created by D. S. Hessels and G. Dunne Course notes Professor Tatum's course notes at the University of Victoria Associations Italian Celestial Mechanics and Astrodynamics Association Simulations Classical mechanics Astronomical sub-disciplines Astrometry
Celestial mechanics
[ "Physics", "Astronomy" ]
2,794
[ "Classical mechanics", "Astrophysics", "Astrometry", "Mechanics", "Celestial mechanics", "Astronomical sub-disciplines" ]
102,213
https://en.wikipedia.org/wiki/Essential%20amino%20acid
An essential amino acid, or indispensable amino acid, is an amino acid that cannot be synthesized from scratch by the organism fast enough to supply its demand, and must therefore come from the diet. Of the 21 amino acids common to all life forms, the nine amino acids humans cannot synthesize are valine, isoleucine, leucine, methionine, phenylalanine, tryptophan, threonine, histidine, and lysine. Six other amino acids are considered conditionally essential in the human diet, meaning their synthesis can be limited under special pathophysiological conditions, such as prematurity in the infant or individuals in severe catabolic distress. These six are arginine, cysteine, glycine, glutamine, proline, and tyrosine. Six amino acids are non-essential (dispensable) in humans, meaning they can be synthesized in sufficient quantities in the body. These six are alanine, aspartic acid, asparagine, glutamic acid, serine, and selenocysteine (considered the 21st amino acid). Pyrrolysine (considered the 22nd amino acid), which is proteinogenic only in certain microorganisms, is not used by and therefore non-essential for most organisms, including humans. The limiting amino acid is the essential amino acid which is furthest from meeting nutritional requirements. This concept is important when determining the selection, number, and amount of foods to consume because even when total protein and all other essential amino acids are satisfied if the limiting amino acid is not satisfied then the meal is considered to be nutritionally limited by that amino acid. Overview (*) Pyrrolysine, sometimes considered the "22nd amino acid", is not used by the human body. Essentiality in humans Of the twenty amino acids common to all life forms (not counting selenocysteine), humans cannot synthesize nine: histidine, isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan and valine. Additionally, the amino acids arginine, cysteine, glutamine, glycine, proline and tyrosine are considered conditionally essential, which means that specific populations who do not synthesize it in adequate amounts, such as newborn infants and people with diseased livers who are unable to synthesize cysteine, must obtain one or more of these conditionally essential amino acids from their diet. For example, enough arginine is synthesized by the urea cycle to meet the needs of an adult but perhaps not those of a growing child. Amino acids that must be obtained from the diet are called essential amino acids. Eukaryotes can synthesize some of the amino acids from other substrates. Consequently, only a subset of the amino acids used in protein synthesis are essential nutrients. From intermediates of the citric acid cycle and other pathways Nonessential amino acids are produced in the body. The pathways for the synthesis of nonessential amino acids come from basic metabolic pathways. Glutamate dehydrogenase catalyzes the reductive amination of α-ketoglutarate to glutamate. A transamination reaction takes place in the synthesis of most amino acids. At this step, the chirality of the amino acid is established. Alanine and aspartate are synthesized by the transamination of pyruvate and oxaloacetate, respectively. Glutamine is synthesized from NH4+ and glutamate, and asparagine is synthesized similarly. Proline and arginine are both derived from glutamate. Serine, formed from 3-phosphoglycerate, which comes from glycolysis, is the precursor of glycine and cysteine. 
Tyrosine is synthesized by the hydroxylation of phenylalanine, which is an essential amino acid. Recommended daily intake Estimating the daily requirement for the indispensable amino acids has proven to be difficult; these numbers have undergone considerable revision over the last 20 years. The following table lists the recommended daily amounts currently in use for essential amino acids in adult humans (unless specified otherwise), together with their standard one-letter abbreviations. The recommended daily intakes for children aged three years and older is 10% to 20% higher than adult levels and those for infants can be as much as 150% higher in the first year of life. Cysteine (or sulfur-containing amino acids), tyrosine (or aromatic amino acids), and arginine are always required by infants and growing children. Methionine and cysteine are grouped together because one of them can be synthesized from the other using the enzyme methionine S-methyltransferase and the catalyst methionine synthase. Phenylalanine and tyrosine are grouped together because tyrosine can be synthesized from phenylalanine using the enzyme phenylalanine hydroxylase. Amino acid requirements and the amino acid content of food Historically, amino acid requirements were determined by calculating the balance between dietary nitrogen intake and nitrogen excreted in the liquid and solid wastes, because proteins represent the largest nitrogen content in a body. A positive balance occurs when more nitrogen is consumed than is excreted, which indicates that some of the nitrogen is being used by the body to build proteins. A negative nitrogen balance occurs when more nitrogen is excreted than is consumed, which indicates that there is insufficient intake for the body to maintain its health. Graduate students at the University of Illinois were fed an artificial diet so that there was a slightly positive nitrogen balance. Then one amino acid was omitted and the nitrogen balance recorded. If a positive balance continued, then that amino acid was deemed not essential. If a negative balance occurred, then that amino acid was slowly restored until a slightly positive nitrogen balance stabilized and the minimum amount recorded. A similar method was used to determine the protein content of foods. Test subjects were fed a diet containing no protein and the nitrogen losses recorded. During the first week or more there is a rapid loss of labile proteins. Once the nitrogen losses stabilize, this baseline is determined to be the minimum required for maintenance. Then the test subjects were fed a measured amount of the food being tested. The difference between the nitrogen in that food and the nitrogen losses above baseline was the amount the body retained to rebuild proteins. The amount of nitrogen retained divided by the total nitrogen intake is called net protein utilization. The amount of nitrogen retained divided by the (nitrogen intake minus nitrogen loss above baseline) is called biological value and is usually given as a percentage. Modern techniques make use of ion exchange chromatography to determine the actual amino acid content of foods. The USDA used this technique in their own labs to determine the content of 7793 foods across 28 categories. The USDA published the final database in 2018 to the public. The limiting amino acid depends on the human requirements and there are currently two sets of human requirements from authoritative sources: one published by WHO and the other published by USDA. 
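To make the idea of a limiting amino acid concrete, the following is a small hypothetical sketch in Python; the amino acid contents and the reference requirement pattern are illustrative placeholder numbers, not values taken from WHO or USDA. It scores a protein against a requirement pattern and reports which essential amino acid is furthest from meeting it.

# Illustrative essential amino acid content of a food protein (mg per g of protein).
food_mg_per_g_protein = {
    "lysine": 45, "leucine": 66, "isoleucine": 35, "valine": 48,
    "threonine": 23, "methionine+cysteine": 25, "phenylalanine+tyrosine": 63,
    "tryptophan": 8, "histidine": 22,
}

# Illustrative reference requirement pattern (mg per g of protein).
reference_mg_per_g_protein = {
    "lysine": 48, "leucine": 61, "isoleucine": 32, "valine": 43,
    "threonine": 25, "methionine+cysteine": 23, "phenylalanine+tyrosine": 41,
    "tryptophan": 7, "histidine": 16,
}

def limiting_amino_acid(food, reference):
    """Return the essential amino acid with the lowest content-to-requirement
    ratio; a ratio below 1.0 means the protein is limited by that amino acid."""
    ratios = {aa: food[aa] / reference[aa] for aa in reference}
    limiting = min(ratios, key=ratios.get)
    return limiting, ratios[limiting]

print(limiting_amino_acid(food_mg_per_g_protein, reference_mg_per_g_protein))
# -> ('threonine', 0.92) with these placeholder numbers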
Protein quality Various attempts have been made to express the "quality" or "value" of various kinds of protein. Measures include the biological value, net protein utilization, protein efficiency ratio, protein digestibility corrected amino acid score and the complete proteins concept. These concepts are important in the livestock industry, because the relative lack of one or more of the essential amino acids in animal feeds would have a limiting effect on growth and thus on feed conversion ratio. Thus, various feedstuffs may be fed in combination to increase net protein utilization, or a supplement of an individual amino acid (methionine, lysine, threonine, or tryptophan) can be added to the feed. Protein per calorie Protein content in foods is often measured in protein per serving rather than protein per calorie. For instance, the USDA lists 6 grams of protein per large whole egg (a 50-gram serving) rather than 84 mg of protein per calorie (71 calories total). For comparison, there are 2.8 grams of protein in a serving of raw broccoli (100 grams) or 82 mg of protein per calorie (34 calories total), or the Daily Value of 47.67g of protein after eating 1,690g of raw broccoli a day at 574 cal. An egg contains 12.5g of protein per 100g, but 4 mg more protein per calorie, or the protein DV after 381g of egg, which is 545 cal. The ratio of essential amino acids (the quality of protein) is not taken into account, one would actually need to eat more than 3 kg of broccoli a day to have a healthy protein profile, and almost 6 kg to get enough calories. It is recommended that adult humans obtain between 10–35% of their 2000 calories a day as protein. Complete proteins in non-human animals Scientists had known since the early 20th century that rats could not survive on a diet whose only protein source was zein, which comes from maize (corn), but recovered if they were fed casein from cow's milk. This led William Cumming Rose to the discovery of the essential amino acid threonine. Through manipulation of rodent diets, Rose was able to show that ten amino acids are essential for rats: lysine, tryptophan, histidine, phenylalanine, leucine, isoleucine, methionine, valine, and arginine, in addition to threonine. Rose's later work showed that eight amino acids are essential for adult human beings, with histidine also being essential for infants. Longer-term studies established histidine as also essential for adult humans. Interchangeability The distinction between essential and non-essential amino acids is somewhat unclear, as some amino acids can be produced from others. The sulfur-containing amino acids, methionine and homocysteine, can be converted into each other but neither can be synthesized de novo in humans. Likewise, cysteine can be made from homocysteine but cannot be synthesized on its own. So, for convenience, sulfur-containing amino acids are sometimes considered a single pool of nutritionally equivalent amino acids as are the aromatic amino acid pair, phenylalanine and tyrosine. Likewise arginine, ornithine, and citrulline, which are interconvertible by the urea cycle, are considered a single group. Effects of deficiency If one of the essential amino acids is not available in the required quantities, protein synthesis will be inhibited, irrespective of the availability of the other amino acids. 
Protein deficiency has been shown to affect all of the body's organs and many of its systems, for example affecting brain development in infants and young children; inhibiting upkeep of the immune system, increasing risk of infection; affecting gut mucosal function and permeability, thereby reducing absorption and increasing vulnerability to systemic disease; and impacting kidney function. The physical signs of protein deficiency include edema, failure to thrive in infants and children, poor musculature, dull skin, and thin and fragile hair. Biochemical changes reflecting protein deficiency include low serum albumin and low serum transferrin. The amino acids that are essential in the human diet were established in a series of experiments led by William Cumming Rose. The experiments involved elemental diets to healthy male graduate students. These diets consisted of corn starch, sucrose, butterfat without protein, corn oil, inorganic salts, the known vitamins, a large brown "candy" made of liver extract flavored with peppermint oil (to supply any unknown vitamins), and mixtures of highly purified individual amino acids. The main outcome measure was nitrogen balance. Rose noted that the symptoms of nervousness, exhaustion, and dizziness were encountered to a greater or lesser extent whenever human subjects were deprived of an essential amino acid. Essential amino acid deficiency should be distinguished from protein-energy malnutrition, which can manifest as marasmus or kwashiorkor. Kwashiorkor was once attributed to pure protein deficiency in individuals who were consuming enough calories ("sugar baby syndrome"). However, this theory has been challenged by the finding that there is no difference in the diets of children developing marasmus as opposed to kwashiorkor. Still, for instance in Dietary Reference Intakes (DRI) maintained by the USDA, lack of one or more of the essential amino acids is described as protein-energy malnutrition. See also Essential fatty acid Essential genes List of standard amino acids Low-protein diet, High-protein diet Orthomolecular medicine Ketogenic amino acid Glucogenic amino acid References External links Amino acid content of some vegetarian foods at veganhealth.org. Amino Acid Profiles of Some Common Feeds at Virginia Tech. Molecular Expressions: The Amino Acid Collection at Florida State University. Features detailed information and crystal photographs of each amino acid. vProtein, an online software tool to analyze the essential amino acid profiles of single and pairs of plant based foods based on human requirements. Amino acids Nitrogen cycle Nutrition
Essential amino acid
[ "Chemistry" ]
2,785
[ "Amino acids", "Biomolecules by chemical classification", "Nitrogen cycle", "Metabolism" ]
102,219
https://en.wikipedia.org/wiki/Net%20protein%20utilization
The net protein utilization (NPU) is the percentage of ingested nitrogen that is retained in the body. Rating NPU is used to determine the nutritional efficiency of protein in the diet; that is, it serves as a measure of "protein quality" for human nutritional purposes. As a value, NPU can range from 0 to 1 (or 100), with a value of 1 (or 100) indicating 100% utilization of dietary nitrogen as protein and a value of 0 indicating that none of the nitrogen supplied was converted to protein. Certain foodstuffs, such as eggs or milk, rate as 1 on an NPU chart. Experimentally, this value can be determined by recording dietary protein intake and then measuring nitrogen excretion. One formula for apparent NPU, in which the estimated nitrogen retained is divided by the estimated nitrogen ingested, is: NPU = [{0.16 × (24-hour protein intake in grams)} − {(24-hour urinary urea nitrogen) + 2} − {0.1 × (ideal body weight in kilograms)}] / {0.16 × (24-hour protein intake in grams)} NPU and biological value (BV) both measure nitrogen retention; the difference is that biological value is calculated from nitrogen absorbed, whereas net protein utilization is calculated from nitrogen ingested. Another closely related quantity is the net postprandial protein utilization (NPPU), which is the maximum potential NPU of a dietary protein source under ideal conditions. The Protein Digestibility Corrected Amino Acid Score (PDCAAS) is a more modern rating for determining protein quality, and the current ranking standard used by the FDA. The Digestible Indispensable Amino Acid Score (DIAAS) is a protein quality method proposed in March 2013 by the Food and Agriculture Organization to replace PDCAAS as the ranking standard. The proposition is contested, however, due to lack of data. See also Protein efficiency ratio Nitrogen balance References Amino acids Proteins Nutrition
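A direct transcription of the apparent-NPU formula above into code, as a hypothetical sketch in Python; the example intake, urinary urea nitrogen, and body-weight figures are invented for illustration.

def apparent_npu(protein_intake_g, urinary_urea_nitrogen_g, ideal_body_weight_kg):
    """Fraction of ingested nitrogen retained, per the formula above.
    Nitrogen intake is estimated as 16% of protein intake."""
    nitrogen_in = 0.16 * protein_intake_g
    nitrogen_out = (urinary_urea_nitrogen_g + 2) + 0.1 * ideal_body_weight_kg
    return (nitrogen_in - nitrogen_out) / nitrogen_in

# Invented example: 150 g protein/day, 5 g urinary urea nitrogen, 70 kg ideal weight.
print(round(apparent_npu(150, 5, 70), 2))   # 0.42 with these made-up inputs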
Net protein utilization
[ "Chemistry" ]
401
[ "Amino acids", "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
102,352
https://en.wikipedia.org/wiki/Polyacrylamide%20gel%20electrophoresis
Polyacrylamide gel electrophoresis (PAGE) is a technique widely used in biochemistry, forensic chemistry, genetics, molecular biology and biotechnology to separate biological macromolecules, usually proteins or nucleic acids, according to their electrophoretic mobility. Electrophoretic mobility is a function of the length, conformation, and charge of the molecule. Polyacrylamide gel electrophoresis is a powerful tool used to analyze RNA samples. When polyacrylamide gel is denatured after electrophoresis, it provides information on the sample composition of the RNA species. Hydration of acrylonitrile results in formation of acrylamide molecules () by nitrile hydratase. Acrylamide monomer is in a powder state before addition of water. Acrylamide is toxic to the human nervous system, therefore all safety measures must be followed when working with it. Acrylamide is soluble in water and upon addition of free-radical initiators it polymerizes resulting in formation of polyacrylamide. It is useful to make polyacrylamide gel via acrylamide hydration because pore size can be regulated. Increased concentrations of acrylamide result in decreased pore size after polymerization. Polyacrylamide gel with small pores helps to examine smaller molecules better since the small molecules can enter the pores and travel through the gel while large molecules get trapped at the pore openings. As with all forms of gel electrophoresis, molecules may be run in their native state, preserving the molecules' higher-order structure. This method is called native-PAGE. Alternatively, a chemical denaturant may be added to remove this structure and turn the molecule into an unstructured molecule whose mobility depends only on its length (because the protein-SDS complexes all have a similar mass-to-charge ratio). This procedure is called SDS-PAGE. Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) is a method of separating molecules based on the difference of their molecular weight. At the pH at which gel electrophoresis is carried out the SDS molecules are negatively charged and bind to proteins in a set ratio, approximately one molecule of SDS for every 2 amino acids. In this way, the detergent provides all proteins with a uniform charge-to-mass ratio. By binding to the proteins the detergent destroys their secondary, tertiary and/or quaternary structure denaturing them and turning them into negatively charged linear polypeptide chains. When subjected to an electric field in PAGE, the negatively charged polypeptide chains travel toward the anode with different mobility. Their mobility, or the distance traveled by molecules, is inversely proportional to the logarithm of their molecular weight. By comparing the relative ratio of the distance traveled by each protein to the length of the gel (Rf) one can make conclusions about the relative molecular weight of the proteins, where the length of the gel is determined by the distance traveled by a small molecule like a tracking dye. For nucleic acids, urea is the most commonly used denaturant. For proteins, sodium dodecyl sulfate (SDS) is an anionic detergent applied to protein samples to coat proteins in order to impart two negative charges (from every SDS molecule) to every two amino acids of the denatured protein. 2-Mercaptoethanol may also be used to disrupt the disulfide bonds found between the protein complexes, which helps further denature the protein. 
In most proteins, the binding of SDS to the polypeptide chains imparts an even distribution of charge per unit mass, thereby resulting in a fractionation by approximate size during electrophoresis. Proteins that have a greater hydrophobic content – for instance, many membrane proteins, and those that interact with surfactants in their native environment – are intrinsically harder to treat accurately using this method, due to the greater variability in the ratio of bound SDS. Procedurally, native PAGE and SDS-PAGE can be used together to purify and separate the various subunits of a protein. Native-PAGE keeps the oligomeric form intact and will show a band on the gel that is representative of the level of activity. SDS-PAGE will denature and separate the oligomeric form into its monomers, showing bands that are representative of their molecular weights. These bands can be used to identify and assess the purity of the protein. Procedure Sample preparation Samples may be any material containing proteins or nucleic acids. These may be biologically derived, for example from prokaryotic or eukaryotic cells, tissues, viruses, environmental samples, or purified proteins. In the case of solid tissues or cells, these are often first broken down mechanically using a blender (for larger sample volumes), using a homogenizer (smaller volumes), by sonication or by cycling of high pressure, and a combination of biochemical and mechanical techniques – including various types of filtration and centrifugation – may be used to separate different cell compartments and organelles prior to electrophoresis. Synthetic biomolecules such as oligonucleotides may also be used as analytes. The sample to analyze is optionally mixed with a chemical denaturant, usually SDS for proteins or urea for nucleic acids. SDS is an anionic detergent that denatures secondary and non–disulfide–linked tertiary structures, and additionally applies a negative charge to each protein in proportion to its mass. Urea breaks the hydrogen bonds between the base pairs of the nucleic acid, causing the constituent strands to separate. Heating the samples to at least 60 °C further promotes denaturation. In addition to SDS, proteins may optionally be briefly heated to near boiling in the presence of a reducing agent, such as dithiothreitol (DTT) or 2-mercaptoethanol (beta-mercaptoethanol/BME), which further denatures the proteins by reducing disulfide linkages, thus overcoming some forms of tertiary protein folding, and breaking up quaternary protein structure (oligomeric subunits). This is known as reducing SDS-PAGE. A tracking dye may be added to the solution. This typically has a higher electrophoretic mobility than the analytes to allow the experimenter to track the progress of the solution through the gel during the electrophoretic run. Preparing acrylamide gels The gels typically consist of acrylamide, bisacrylamide, the optional denaturant (SDS or urea), and a buffer with an adjusted pH. The solution may be degassed under a vacuum to prevent the formation of air bubbles during polymerization. Alternatively, butanol may be added to the resolving gel (for proteins) after it is poured, as butanol removes bubbles and makes the surface smooth. A source of free radicals and a stabilizer, such as ammonium persulfate and TEMED, are added to initiate polymerization. The polymerization reaction creates a gel because of the added bisacrylamide, which can form cross-links between two acrylamide molecules.
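As a toy illustration of the native-PAGE versus SDS-PAGE comparison above, the apparent oligomer mass read from a native gel can be divided by the monomer mass read from a denaturing gel to estimate the number of subunits. The masses below are hypothetical placeholders, not values from the source.

# Hypothetical apparent masses read from calibrated gels (kDa).
native_mass_kda = 238.0   # band on a native gel (oligomeric form kept intact)
sds_monomer_kda = 59.5    # band on a denaturing SDS-PAGE gel (single subunit)

subunits = round(native_mass_kda / sds_monomer_kda)
print(f"Estimated oligomeric state: {subunits}-mer")  # prints "4-mer" for these values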
The ratio of bisacrylamide to acrylamide can be varied for special purposes, but is generally about 1 part in 35. The acrylamide concentration of the gel can also be varied, generally in the range from 5% to 25%. Lower percentage gels are better for resolving very high molecular weight molecules, while much higher percentages of acrylamide are needed to resolve smaller proteins. The average pore diameter of polyacrylamide gels is determined by the total concentration of acrylamides (%T with T = total concentration of acrylamide and bisacrylamide) and the concentration of the cross-linker bisacrylamide (%C with C = bisacrylamide concentration). The pore size decreases as %T increases. Concerning %C, a concentration of 5% produces the smallest pores, since the influence of bisacrylamide on the pore size follows a parabola with its vertex at 5%. Gels are usually polymerized between two glass plates in a gel caster, with a comb inserted at the top to create the sample wells. After the gel is polymerized, the comb can be removed and the gel is ready for electrophoresis. Electrophoresis Various buffer systems are used in PAGE depending on the nature of the sample and the experimental objective. The buffers used at the anode and cathode may be the same or different. An electric field is applied across the gel, causing the negatively charged proteins or nucleic acids to migrate across the gel away from the negative electrode (which is the cathode, since this is an electrolytic rather than a galvanic cell) and towards the positive electrode (the anode). Depending on their size, each biomolecule moves differently through the gel matrix: small molecules more easily fit through the pores in the gel, while larger ones have more difficulty. The gel is run usually for a few hours, though this depends on the voltage applied across the gel; migration occurs more quickly at higher voltages, but these results are typically less accurate than those at lower voltages. After the set amount of time, the biomolecules have migrated different distances based on their size. Smaller biomolecules travel farther down the gel, while larger ones remain closer to the point of origin. Biomolecules may therefore be separated roughly according to size, which depends mainly on molecular weight under denaturing conditions, but also depends on higher-order conformation under native conditions. The gel mobility is defined as the rate of migration with a voltage gradient of 1 V/cm and has units of cm2/sec/V. For analytical purposes, the relative mobility of biomolecules, Rf, the ratio of the distance the molecule traveled on the gel to the total travel distance of a tracking dye, is plotted versus the molecular weight of the molecule (or sometimes the log of MW, or rather the Mr, relative molecular mass). Such typically linear plots represent the standard markers or calibration curves that are widely used for the quantitative estimation of a variety of biomolecular sizes. Certain glycoproteins, however, behave anomalously on SDS gels. Additionally, the analysis of larger proteins ranging from 250,000 to 600,000 Da is also reported to be problematic because such polypeptides migrate anomalously in the normally used gel systems.
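The %T and %C bookkeeping described above is simple arithmetic; a short sketch follows, with a made-up 10 mL recipe chosen so that the bis:acrylamide ratio is roughly the 1-part-in-35 figure quoted in the text.

# Hypothetical resolving-gel recipe.
acrylamide_g = 1.167      # grams of acrylamide monomer
bisacrylamide_g = 0.033   # grams of bisacrylamide (about 1 part in 35)
gel_volume_ml = 10.0

total_monomer_g = acrylamide_g + bisacrylamide_g
percent_T = 100.0 * total_monomer_g / gel_volume_ml    # total monomer, % (w/v)
percent_C = 100.0 * bisacrylamide_g / total_monomer_g  # crosslinker share of total monomer

print(f"%T = {percent_T:.1f}%, %C = {percent_C:.1f}%")
# Higher %T gives smaller pores; %C of about 5% gives the smallest pores.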
Further processing Following electrophoresis, the gel may be stained (for proteins, most commonly with Coomassie brilliant blue R-250 or autoradiography; for nucleic acids, ethidium bromide; or for either, silver stain), allowing visualization of the separated proteins, or processed further (e.g. Western blot). After staining, different species biomolecules appear as distinct bands within the gel. It is common to run molecular weight size markers of known molecular weight in a separate lane in the gel to calibrate the gel and determine the approximate molecular mass of unknown biomolecules by comparing the distance traveled relative to the marker. For proteins, SDS-PAGE is usually the first choice as an assay of purity due to its reliability and ease. The presence of SDS and the denaturing step make proteins separate, approximately based on size, but aberrant migration of some proteins may occur. Different proteins may also stain differently, which interferes with quantification by staining. PAGE may also be used as a preparative technique for the purification of proteins. For example, preparative native PAGE is a method for separating native metalloproteins in complex biological matrices. Chemical ingredients and their roles Polyacrylamide gel (PAG) had been known as a potential embedding medium for sectioning tissues as early as 1964, and two independent groups employed PAG in electrophoresis in 1959. It possesses several electrophoretically desirable features that make it a versatile medium. It is a synthetic, thermo-stable, transparent, strong, chemically relatively inert gel, and can be prepared with a wide range of average pore sizes. The pore size of a gel and the reproducibility in gel pore size are determined by three factors, the total amount of acrylamide present (%T) (T = Total concentration of acrylamide and bisacrylamide monomer), the amount of cross-linker (%C) (C = bisacrylamide concentration), and the time of polymerization of acrylamide (cf. QPNC-PAGE). Pore size decreases with increasing %T; with cross-linking, 5%C gives the smallest pore size. Any increase or decrease in %C from 5% increases the pore size, as pore size with respect to %C is a parabolic function with vertex as 5%C. This appears to be because of non-homogeneous bundling of polymer strands within the gel. This gel material can also withstand high voltage gradients, is amenable to various staining and destaining procedures, and can be digested to extract separated fractions or dried for autoradiography and permanent recording. Components Polyacrylamide gels are composed of a stacking gel and separating gel. Stacking gels have a higher porosity relative to the separating gel, and allow for proteins to migrate in a concentrated area. Additionally, stacking gels usually have a pH of 6.8, since the neutral glycine molecules allow for faster protein mobility. Separating gels have a pH of 8.8, where the anionic glycine slows down the mobility of proteins. Separating gels allow for the separation of proteins and have a relatively lower porosity. Here, the proteins are separated based on size (in SDS-PAGE) and size/ charge (Native PAGE). Chemical buffer stabilizes the pH value to the desired value within the gel itself and in the electrophoresis buffer. The choice of buffer also affects the electrophoretic mobility of the buffer counterions and thereby the resolution of the gel. The buffer should also be unreactive and not modify or react with most proteins. 
Different buffers may be used as cathode and anode buffers, respectively, depending on the application. Multiple pH values may be used within a single gel, for example in DISC electrophoresis. Common buffers in PAGE include Tris, Bis-Tris, or imidazole. Counterions balance the intrinsic charge of the buffer ion and also affect the electric field strength during electrophoresis. Highly charged and mobile ions are often avoided in SDS-PAGE cathode buffers, but may be included in the gel itself, where they migrate ahead of the protein. In applications such as DISC SDS-PAGE, the pH values within the gel may vary to change the average charge of the counterions during the run to improve resolution. Popular counterions are glycine and tricine. Glycine has been used as the source of trailing ion or slow ion because its pKa is 9.69 and the mobility of glycinate is such that the effective mobility can be set at a value below that of the slowest known proteins of net negative charge in the pH range. The minimum pH of this range is approximately 8.0. Acrylamide (mW: 71.08): when dissolved in water, slow, spontaneous autopolymerization of acrylamide takes place, joining molecules together in a head-to-tail fashion to form long single-chain polymers. The presence of a free radical-generating system greatly accelerates polymerization. This kind of reaction is known as vinyl addition polymerisation. A solution of these polymer chains becomes viscous but does not form a gel, because the chains simply slide over one another. Gel formation requires linking various chains together. Acrylamide is carcinogenic, a neurotoxin, and a reproductive toxin. It is also essential to store acrylamide in a cool, dark and dry place to reduce autopolymerisation and hydrolysis. Bisacrylamide (N,N′-Methylenebisacrylamide) (mW: 154.17) is the most frequently used cross-linking agent for polyacrylamide gels. Chemically it can be thought of as two acrylamide molecules coupled head to head at their non-reactive ends. Bisacrylamide can crosslink two polyacrylamide chains to one another, thereby resulting in a gel. Sodium dodecyl sulfate (SDS) (mW: 288.38) (only used in denaturing protein gels) is a strong detergent agent used to denature native proteins to individual polypeptides. This denaturation, which is referred to as reconstructive denaturation, is not accomplished by the total linearization of the protein, but instead, through a conformational change to a combination of random coil and α helix secondary structures. When a protein mixture is heated to 100 °C in the presence of SDS, the detergent wraps around the polypeptide backbone. It binds to polypeptides in a constant weight ratio of 1.4 g SDS/g of polypeptide. In this process, the intrinsic charges of polypeptides become negligible when compared to the negative charges contributed by SDS. Thus polypeptides after treatment become rod-like structures possessing a uniform charge density, that is, the same net negative charge per unit weight. The electrophoretic mobilities of these proteins are a linear function of the logarithms of their molecular weights. Without SDS, different proteins with similar molecular weights would migrate differently due to differences in mass-charge ratio, as each protein has an isoelectric point and molecular weight particular to its primary structure. This is known as native PAGE. Adding SDS solves this problem, as it binds to and unfolds the protein, giving a near uniform negative charge along the length of the polypeptide.
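The constant binding ratio of 1.4 g SDS per g of polypeptide quoted above lends itself to a quick estimate of how much detergent a loaded sample actually carries. The sample amount below is a hypothetical value chosen for illustration.

SDS_MW = 288.38     # g/mol, from the text
protein_mg = 0.05   # hypothetical load: 50 micrograms of protein

sds_bound_mg = 1.4 * protein_mg                 # constant binding ratio from the text
sds_bound_nmol = sds_bound_mg / SDS_MW * 1e6    # mg / (g/mol) = mmol; mmol -> nmol
print(f"~{sds_bound_mg * 1000:.0f} ug SDS bound (~{sds_bound_nmol:.0f} nmol)")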
Urea (mW: 60.06) is a chaotropic agent that increases the entropy of the system by interfering with intramolecular interactions mediated by non-covalent forces such as hydrogen bonds and van der Waals forces. Macromolecular structure is dependent on the net effect of these forces, therefore it follows that an increase in chaotropic solutes denatures macromolecules. Ammonium persulfate (APS) (mW: 228.2) is a source of free radicals and is often used as an initiator for gel formation. An alternative source of free radicals is riboflavin, which generates free radicals in a photochemical reaction. TEMED (N,N,N′,N′-tetramethylethylenediamine) (mW: 116.21) stabilizes free radicals and improves polymerization. The rate of polymerisation and the properties of the resulting gel depend on the concentrations of free radicals. Increasing the amount of free radicals results in a decrease in the average polymer chain length, an increase in gel turbidity and a decrease in gel elasticity. Decreasing the amount shows the reverse effect. The lowest catalytic concentrations that allow polymerisation in a reasonable period of time should be used. APS and TEMED are typically used at approximately equimolar concentrations in the range of 1 to 10 mM. Chemicals for processing and visualization The following chemicals and procedures are used for processing of the gel and the protein samples visualized in it. Tracking dye; as proteins and nucleic acids are mostly colorless, their progress through the gel during electrophoresis cannot be easily followed. Anionic dyes of a known electrophoretic mobility are therefore usually included in the PAGE sample buffer. A very common tracking dye is Bromophenol blue (BPB, 3',3",5',5" tetrabromophenolsulfonphthalein). This dye is coloured at alkaline and neutral pH and is a small negatively charged molecule that moves towards the anode. Being a highly mobile molecule it moves ahead of most proteins. As it reaches the anodic end of the electrophoresis medium, electrophoresis is stopped. It can weakly bind to some proteins and impart a blue colour. Other common tracking dyes are xylene cyanol, which has lower mobility, and Orange G, which has a higher mobility. Loading aids; most PAGE systems are loaded from the top into wells within the gel. To ensure that the sample sinks to the bottom of the gel, sample buffer is supplemented with additives that increase the density of the sample. These additives should be non-ionic and non-reactive towards proteins to avoid interfering with electrophoresis. Common additives are glycerol and sucrose. Coomassie brilliant blue R-250 (CBB) (mW: 825.97) is the most popular protein stain. It is an anionic dye, which non-specifically binds to proteins. The structure of CBB is predominantly non-polar, and it is usually used in methanolic solution acidified with acetic acid. Proteins in the gel are fixed by acetic acid and simultaneously stained. The excess dye incorporated into the gel can be removed by destaining with the same solution without the dye. The proteins are detected as blue bands on a clear background. As SDS is also anionic, it may interfere with the staining process. Therefore, a large volume of staining solution is recommended, at least ten times the volume of the gel. Ethidium bromide (EtBr) is a popular nucleic acid stain. EtBr allows one to easily visualize DNA or RNA on a gel as EtBr fluoresces an orange color under UV light. Ethidium bromide binds nucleic acid chains through the process of intercalation.
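For the APS and TEMED concentrations mentioned above (roughly equimolar, 1–10 mM), the volume of a concentrated stock needed for a given gel mix is a one-line calculation. The 10% (w/v) stock strength, gel volume and 3 mM target below are illustrative assumptions, not recommendations from the source.

APS_MW = 228.2            # g/mol, from the text
stock_percent_w_v = 10.0  # hypothetical stock: 10 g per 100 mL
gel_volume_ml = 10.0      # hypothetical gel mix volume
target_mM = 3.0           # hypothetical target within the 1-10 mM range

stock_molarity = stock_percent_w_v * 10.0 / APS_MW   # g/L divided by g/mol = mol/L
needed_mmol = target_mM * gel_volume_ml / 1000.0      # mmol of APS required
volume_ul = needed_mmol / stock_molarity * 1000.0     # mmol / (mol/L) = mL, then mL -> uL

print(f"Add ~{volume_ul:.0f} uL of the APS stock per {gel_volume_ml:.0f} mL of gel mix")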
While ethidium bromide is a popular stain, it is important to exercise caution when using EtBr as it is a known carcinogen. Because of this, many researchers opt to use stains such as SYBR Green and SYBR Safe, which are safer alternatives to EtBr. EtBr is used by simply adding it to the gel mixture. Once the gel has run, the gel may be viewed through the use of a photo-documentation system. Silver staining is used when a more sensitive detection method is needed: classical Coomassie Brilliant Blue staining can usually detect a 50 ng protein band, whereas silver staining typically increases the sensitivity 10-100 fold. This is based on the chemistry of photographic development. The proteins are fixed to the gel with a dilute methanol solution, then incubated with an acidic silver nitrate solution. Silver ions are reduced to their metallic form by formaldehyde at alkaline pH. An acidic solution, such as acetic acid, stops development. Silver staining was introduced by Kerenyi and Gallyas as a sensitive procedure to detect trace amounts of proteins in gels. The technique has been extended to the study of other biological macromolecules that have been separated in a variety of supports. Many variables can influence the colour intensity and every protein has its own staining characteristics; clean glassware, pure reagents and water of the highest purity are key to successful staining. Silver staining was developed in the 14th century for colouring the surface of glass. It has been used extensively for this purpose since the 16th century. The colour produced by the early silver stains ranged between light yellow and an orange-red. Camillo Golgi perfected the silver staining for the study of the nervous system. Golgi's method stains a limited number of cells at random in their entirety. Autoradiography, also used for protein band detection post gel electrophoresis, uses radioactive isotopes to label proteins, which are then detected by using X-ray film. Western blotting is a process by which proteins separated in the acrylamide gel are electrophoretically transferred to a stable, manipulable membrane such as a nitrocellulose, nylon, or PVDF membrane. It is then possible to apply immunochemical techniques to visualise the transferred proteins, as well as accurately identify relative increases or decreases of the protein of interest. See also Agarose gel electrophoresis Capillary electrophoresis DNA electrophoresis Eastern blotting Electroblotting Fast parallel proteolysis (FASTpp) History of electrophoresis Isoelectric focusing Isotachophoresis Native gel electrophoresis Northern blotting Protein electrophoresis QPNC-PAGE Southern blotting Two dimensional SDS-PAGE Zymography References External links SDS-PAGE: How it Works Demystifying SDS-PAGE Video Demystifying SDS-PAGE SDS-PAGE Calculator for customised recipes for TRIS Urea gels. 2-Dimensional Protein Gelelectrophoresis Hempelmann E. SDS-Protein PAGE and Proteindetection by Silverstaining and Immunoblotting of Plasmodium falciparum proteins. in: Moll K, Ljungström J, Perlmann H, Scherf A, Wahlgren M (eds) Methods in Malaria Research, 5th edition, 2008, 263-266 Molecular biology techniques Electrophoresis
Polyacrylamide gel electrophoresis
[ "Chemistry", "Biology" ]
5,386
[ "Instrumental analysis", "Biochemical separation processes", "Molecular biology techniques", "Molecular biology", "Electrophoresis" ]
102,505
https://en.wikipedia.org/wiki/Protein%20Data%20Bank
The Protein Data Bank (PDB) is a database for the three-dimensional structural data of large biological molecules such as proteins and nucleic acids, which is overseen by the Worldwide Protein Data Bank (wwPDB). These structural data are obtained and deposited by biologists and biochemists worldwide through the use of experimental methodologies such as X-ray crystallography, NMR spectroscopy, and, increasingly, cryo-electron microscopy. All submitted data are reviewed by expert biocurators and, once approved, are made freely available on the Internet under the CC0 Public Domain Dedication. Global access to the data is provided by the websites of the wwPDB member organisations (PDBe, PDBj, RCSB PDB, and BMRB). The PDB is a key in areas of structural biology, such as structural genomics. Most major scientific journals and some funding agencies now require scientists to submit their structure data to the PDB. Many other databases use protein structures deposited in the PDB. For example, SCOP and CATH classify protein structures, while PDBsum provides a graphic overview of PDB entries using information from other sources, such as Gene Ontology. History Two forces converged to initiate the PDB: a small but growing collection of sets of protein structure data determined by X-ray diffraction; and the newly available (1968) molecular graphics display, the Brookhaven RAster Display (BRAD), to visualize these protein structures in 3-D. In 1969, with the sponsorship of Walter Hamilton at the Brookhaven National Laboratory, Edgar Meyer (Texas A&M University) began to write software to store atomic coordinate files in a common format to make them available for geometric and graphical evaluation. By 1971, one of Meyer's programs, SEARCH, enabled researchers to remotely access information from the database to study protein structures offline. SEARCH was instrumental in enabling networking, thus marking the functional beginning of the PDB. The Protein Data Bank was announced in October 1971 in Nature New Biology as a joint venture between Cambridge Crystallographic Data Centre, UK and Brookhaven National Laboratory, US. Upon Hamilton's death in 1973, Tom Koetzle took over direction of the PDB for the subsequent 20 years. In January 1994, Joel Sussman of Israel's Weizmann Institute of Science was appointed head of the PDB. In October 1998, the PDB was transferred to the Research Collaboratory for Structural Bioinformatics (RCSB); the transfer was completed in June 1999. The new director was Helen M. Berman of Rutgers University (one of the managing institutions of the RCSB, the other being the San Diego Supercomputer Center at UC San Diego). In 2003, with the formation of the wwPDB, the PDB became an international organization. The founding members are PDBe (Europe), RCSB (US), and PDBj (Japan). The BMRB joined in 2006. Each of the four members of wwPDB can act as deposition, data processing and distribution centers for PDB data. The data processing refers to the fact that wwPDB staff review and annotate each submitted entry. The data are then automatically checked for plausibility (the source code for this validation software has been made available to the public at no charge). Contents The PDB database is updated weekly (UTC+0 Wednesday), along with its holdings list. , the PDB comprised: 162,041 structures in the PDB have a structure factor file. 11,242 structures have an NMR restraint file. 5,774 structures in the PDB have a chemical shifts file. 
13,388 structures in the PDB have a 3DEM map file deposited in EM Data Bank Most structures are determined by X-ray diffraction, but about 7% of structures are determined by protein NMR. When using X-ray diffraction, approximations of the coordinates of the atoms of the protein are obtained, whereas using NMR, the distance between pairs of atoms of the protein is estimated. The final conformation of the protein is obtained from NMR by solving a distance geometry problem. After 2013, a growing number of proteins are determined by cryo-electron microscopy. For PDB structures determined by X-ray diffraction that have a structure factor file, their electron density map may be viewed. The data of such structures may be viewed on the three PDB websites. Historically, the number of structures in the PDB has grown at an approximately exponential rate, with 100 registered structures in 1982, 1,000 structures in 1993, 10,000 in 1999, 100,000 in 2014, and 200,000 in January 2023. File format The file format initially used by the PDB was called the PDB file format. The original format was restricted by the width of computer punch cards to 80 characters per line. Around 1996, the "macromolecular Crystallographic Information file" format, mmCIF, which is an extension of the CIF format was phased in. mmCIF became the standard format for the PDB archive in 2014. In 2019, the wwPDB announced that depositions for crystallographic methods would only be accepted in mmCIF format. An XML version of PDB, called PDBML, was described in 2005. The structure files can be downloaded in any of these three formats, though an increasing number of structures do not fit the legacy PDB format. Individual files are easily downloaded into graphics packages from Internet URLs: For PDB format files, use, e.g., http://www.pdb.org/pdb/files/4hhb.pdb.gz or http://pdbe.org/download/4hhb For PDBML (XML) files, use, e.g., http://www.pdb.org/pdb/files/4hhb.xml.gz or http://pdbe.org/pdbml/4hhb The "4hhb" is the PDB identifier. Each structure published in PDB receives a four-character alphanumeric identifier, its PDB ID. (This is not a unique identifier for biomolecules, because several structures for the same molecule—in different environments or conformations—may be contained in PDB with different PDB IDs.) Viewing the data The structure files may be viewed using one of several free and open source computer programs, including Jmol, Pymol, VMD, Molstar and Rasmol. Other non-free, shareware programs include ICM-Browser, MDL Chime, UCSF Chimera, Swiss-PDB Viewer, StarBiochem (a Java-based interactive molecular viewer with integrated search of protein databank), Sirius, and VisProt3DS (a tool for Protein Visualization in 3D stereoscopic view in anaglyph and other modes), and Discovery Studio. The RCSB PDB website contains an extensive list of both free and commercial molecule visualization programs and web browser plugins. 
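Given the download URL patterns listed above, fetching a structure by its four-character PDB ID takes only a few lines. This is a hedged sketch: it reuses the legacy-format URL pattern and the 4hhb identifier quoted in the text, and simply assumes that host still serves gzipped PDB files; no particular wwPDB mirror or API is implied.

import gzip
import urllib.request

pdb_id = "4hhb"  # example identifier from the text (hemoglobin)
url = f"http://www.pdb.org/pdb/files/{pdb_id}.pdb.gz"  # legacy PDB-format URL pattern from the text

# Download the gzipped coordinate file and decompress it in memory.
with urllib.request.urlopen(url) as response:
    data = gzip.decompress(response.read())

# Quick sanity check: count ATOM records in the downloaded coordinate file.
atom_lines = [line for line in data.decode().splitlines() if line.startswith("ATOM")]
print(f"{pdb_id}: {len(atom_lines)} ATOM records")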
See also Crystallographic database Protein structure Protein structure prediction Protein structure database PDBREPORT lists all anomalies (also errors) in PDB structures PDBsum—extracts data from other databases about PDB structures Proteopedia—a collaborative 3D encyclopedia of proteins and other molecules References External links The Worldwide Protein Data Bank (wwPDB)—parent site to regional hosts (below) RCSB Protein Data Bank (US) PDBe (Europe) PDBj (Japan) BMRB, Biological Magnetic Resonance Data Bank (US) wwPDB Documentation—documentation on both the PDB and PDBML file formats Looking at Structures —The RCSB's introduction to crystallography PDBsum Home Page—Extracts data from other databases about PDB structures. Nucleic Acid Database, NDB—a PDB mirror especially for searching for nucleic acids Introductory PDB tutorial sponsored by PDB PDBe: Quick Tour on EBI Train OnLine Protein databases Crystallographic databases Protein structure Science and technology in the United States Bioinformatics Computational biology
Protein Data Bank
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
1,706
[ "Biological engineering", "Crystallographic databases", "Bioinformatics", "Crystallography", "Structural biology", "Computational biology", "Protein structure" ]
102,575
https://en.wikipedia.org/wiki/Dihydrofolate%20reductase
Dihydrofolate reductase, or DHFR, is an enzyme that reduces dihydrofolic acid to tetrahydrofolic acid, using NADPH as an electron donor, which can be converted to the kinds of tetrahydrofolate cofactors used in one-carbon transfer chemistry. In humans, the DHFR enzyme is encoded by the DHFR gene. It is found in the q14.1 region of chromosome 5. There are two structural classes of DHFR, evolutionarily unrelated to each other. The first is usually just called DHFR and is found in bacterial chromosomes and animals. In bacteria, however, antibiotic pressure has caused this class to evolve different patterns of binding diaminoheterocyclic molecules, leading to many "types" named under this class, while mammalian ones remain highly similar. The second (type II), represented by the plasmid-encoded R67, is a tiny enzyme that works by forming a homotetramer. Function Dihydrofolate reductase converts dihydrofolate into tetrahydrofolate, a methyl group shuttle required for the de novo synthesis of purines, thymidylic acid, and certain amino acids. While the functional dihydrofolate reductase gene has been mapped to chromosome 5, multiple intronless processed pseudogenes or dihydrofolate reductase-like genes have been identified on separate chromosomes. Found in all organisms, DHFR has a critical role in regulating the amount of tetrahydrofolate in the cell. Tetrahydrofolate and its derivatives are essential for purine and thymidylate synthesis, which are important for cell proliferation and cell growth. DHFR plays a central role in the synthesis of nucleic acid precursors, and it has been shown that mutant cells that completely lack DHFR require glycine, a purine, and thymidine to grow. DHFR has also been demonstrated as an enzyme involved in the salvage of tetrahydrobiopterin from dihydrobiopterin. Structure A central eight-stranded beta-pleated sheet makes up the main feature of the polypeptide backbone folding of DHFR. Seven of these strands are parallel and the eighth runs antiparallel. Four alpha helices connect successive beta strands. Residues 9–24 are termed "Met20" or "loop 1" and, along with other loops, are part of the major subdomain that surrounds the active site. The active site is situated in the N-terminal half of the sequence, which includes a conserved Pro-Trp dipeptide; the tryptophan has been shown to be involved in the binding of substrate by the enzyme. Mechanism General mechanism DHFR catalyzes the transfer of a hydride from NADPH to dihydrofolate with an accompanying protonation to produce tetrahydrofolate. In the end, dihydrofolate is reduced to tetrahydrofolate and NADPH is oxidized to NADP+. The high flexibility of Met20 and other loops near the active site plays a role in promoting the release of the product, tetrahydrofolate. In particular, the Met20 loop helps stabilize the nicotinamide ring of the NADPH to promote the transfer of the hydride from NADPH to dihydrofolate. The mechanism of this enzyme is stepwise and steady-state random. Specifically, the catalytic reaction begins with the NADPH and the substrate attaching to the binding site of the enzyme, followed by the protonation and the hydride transfer from the cofactor NADPH to the substrate. However, the two latter steps do not take place simultaneously in the same transition state. In a study using computational and experimental approaches, Liu et al. conclude that the protonation step precedes the hydride transfer.
DHFR's enzymatic mechanism is shown to be pH dependent, particularly the hydride transfer step, since pH changes are shown to have a remarkable influence on the electrostatics of the active site and the ionization state of its residues. The acidity of the targeted nitrogen on the substrate is important in the binding of the substrate to the enzyme's binding site, which has been shown to be hydrophobic even though it has direct contact with water. Asp27 is the only charged hydrophilic residue in the binding site, and neutralization of the charge on Asp27 may alter the pKa of the enzyme. Asp27 plays a critical role in the catalytic mechanism by helping with protonation of the substrate and restraining the substrate in the conformation favorable for the hydride transfer. The protonation step is shown to be associated with enol tautomerization even though this conversion is not considered favorable for proton donation. A water molecule has been shown to be involved in the protonation step. Entry of the water molecule to the active site of the enzyme is facilitated by the Met20 loop. Conformational changes of DHFR The catalytic cycle of the reaction catalyzed by DHFR incorporates five important intermediates: holoenzyme (E:NADPH), Michaelis complex (E:NADPH:DHF), ternary product complex (E:NADP+:THF), tetrahydrofolate binary complex (E:THF), and THF·NADPH complex (E:NADPH:THF). The product (THF) dissociation step from E:NADPH:THF to E:NADPH is the rate-determining step during steady-state turnover. Conformational changes are critical in DHFR's catalytic mechanism. The Met20 loop of DHFR is able to open, close or occlude the active site. Correspondingly, three different conformations classified as the opened, closed and occluded states are assigned to Met20. In addition, an extra distorted conformation of Met20 was defined due to its indistinct characterization results. The Met20 loop is observed in its occluded conformation in the three product-ligating intermediates, where the nicotinamide ring is occluded from the active site. This conformational feature accounts for the fact that the substitution of NADP+ by NADPH occurs prior to product dissociation. Thus, the next round of reaction can occur upon the binding of substrate. R67 DHFR Due to its unique structure and catalytic features, R67 DHFR is widely studied. R67 DHFR is a type II R-plasmid-encoded DHFR without genetic or structural relation to the E. coli chromosomal DHFR. It is a homotetramer that possesses 222 symmetry, with a single active site pore that is exposed to solvent. This symmetry of the active site results in a distinctive binding mode for the enzyme: it can bind two dihydrofolate (DHF) molecules with positive cooperativity, or two NADPH molecules with negative cooperativity, or one substrate plus one cofactor, but only the last combination is catalytically active. Compared with E. coli chromosomal DHFR, it has a higher Km for binding dihydrofolate (DHF) and NADPH. Its much slower catalytic kinetics show that hydride transfer is the rate-determining step rather than product (THF) release. In the R67 DHFR structure, the homotetramer forms an active site pore. In the catalytic process, DHF and NADPH enter the pore from opposite sides. The π-π stacking interaction between NADPH's nicotinamide ring and DHF's pteridine ring tightly connects the two reactants in the active site. However, flexibility of the p-aminobenzoylglutamate tail of DHF is observed upon binding, which can promote formation of the transition state.
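The Km and rate comparisons above can be made concrete with a toy Michaelis-Menten calculation. The constants below are invented placeholders chosen only to mimic the qualitative picture (chromosomal-type DHFR: lower Km, faster turnover; R67-like enzyme: higher Km, slower turnover); they are not measured values.

def mm_rate(vmax, km, s):
    # Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])
    return vmax * s / (km + s)

substrate_uM = 5.0  # hypothetical DHF concentration
v_chromosomal = mm_rate(vmax=30.0, km=1.0, s=substrate_uM)  # arbitrary units
v_r67_like = mm_rate(vmax=2.0, km=8.0, s=substrate_uM)

print(f"chromosomal-type: {v_chromosomal:.1f}, R67-like: {v_r67_like:.1f} (arbitrary units)")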
Clinical significance DHFR mutations cause dihydrofolate reductase deficiency, a rare autosomal recessive inborn error of folate metabolism that results in megaloblastic anemia, pancytopenia and severe cerebral folate deficiency. These issues can be overcome by supplementation with a reduced form of folate, usually folinic acid. Therapeutic applications DHFR is an attractive pharmaceutical target for inhibition due to its pivotal role in DNA precursor (thymine) synthesis. Trimethoprim, an antibiotic, inhibits bacterial DHFR while methotrexate, a chemotherapy agent, inhibits mammalian DHFR. However, resistance has developed against some drugs, as a result of mutational changes in DHFR itself. Cancer DHFR is responsible for the levels of tetrahydrofolate in a cell, and the inhibition of DHFR can limit the growth and proliferation of cells that are characteristic of cancer and bacterial infections. Methotrexate, a competitive inhibitor of DHFR, is one such anticancer drug that inhibits DHFR. Folate is necessary for growth, and the pathway of the metabolism of folate is a target in developing treatments for cancer. DHFR is one such target. A regimen of fluorouracil, doxorubicin, and methotrexate was shown to prolong survival in patients with advanced gastric cancer. Further studies into inhibitors of DHFR can lead to more ways to treat cancer. Infection Bacteria also need DHFR to grow and multiply and hence inhibitors selective for bacterial DHFR have found application as antibacterial agents. Trimethoprim has shown to have activity against a variety of Gram-positive bacterial pathogens. However, resistance to trimethoprim and other drugs aimed at DHFR can arise due to a variety of mechanisms, limiting the success of their therapeutical uses. Resistance can arise from DHFR gene amplification, mutations in DHFR, decrease in the uptake of the drugs, among others. Regardless, trimethoprim and sulfamethoxazole in combination has been used as an antibacterial agent for decades. Pyrimethamine is a widely used antiprotozoal agent. Other classes of compounds that target DHFR in general, and bacterial DHFRs in particular, belong to the classes such as diaminopteridines, diaminotriazines, diaminopyrroloquinazolines, stilbenes, chalcones, deoxybenzoins, diaminoquinazolines, diaminopyrroloquinazolines, to name but a few. Potential anthrax treatment Dihydrofolate reductase from Bacillus anthracis (BaDHFR) is a validated drug target in the treatment of the infectious disease, anthrax. BaDHFR is less sensitive to trimethoprim analogs than is dihydrofolate reductase from other species such as Escherichia coli, Staphylococcus aureus, and Streptococcus pneumoniae. A structural alignment of dihydrofolate reductase from all four species shows that only BaDHFR has the combination phenylalanine and tyrosine in positions 96 and 102, respectively. BaDHFR's resistance to trimethoprim analogs is due to these two residues (F96 and Y102), which also confer improved kinetics and catalytic efficiency. Current research uses active site mutants in BaDHFR to guide lead optimization for new antifolate inhibitors. As a research tool DHFR has been used as a tool to detect protein–protein interactions in a protein-fragment complementation assay (PCA), using a split-protein approach. DHFR-lacking CHO cells are the most commonly used cell line for the production of recombinant proteins. 
These cells are transfected with a plasmid carrying the dhfr gene and the gene for the recombinant protein in a single expression system, and then subjected to selective conditions in thymidine-lacking medium. Only the cells with the exogenous DHFR gene along with the gene of interest survive. Supplementation of this medium with methotrexate, a competitive inhibitor of DHFR, can further select for those cells expressing the highest levels of DHFR, and thus, select for the top recombinant protein producers. Interactions Dihydrofolate reductase has been shown to interact with GroEL and Mdm2. Interactive pathway map References Further reading External links 1988 Nobel lecture in Medicine Proteopedia: Dihydrofolate reductase Protein domains EC 1.5.1 Enzymes of known structure
Dihydrofolate reductase
[ "Biology" ]
2,617
[ "Protein domains", "Protein classification" ]
10,225,184
https://en.wikipedia.org/wiki/Oxygen%20evolution
Oxygen evolution is the chemical process of generating elemental diatomic oxygen (O2) by a chemical reaction, usually from water, the most abundant oxide compound in the universe. Oxygen evolution on Earth is effected by biotic oxygenic photosynthesis, photodissociation, hydroelectrolysis, and thermal decomposition of various oxides and oxyacids. When relatively pure oxygen is required industrially, it is isolated by distilling liquefied air. Natural oxygen evolution is essential to the biological process of all complex life on Earth, as aerobic respiration has become the most important biochemical process of eukaryotic thermodynamics since eukaryotes evolved through symbiogenesis during the Proterozoic eon, and such consumption can only continue if oxygen is cyclically replenished by photosynthesis. The various oxygenation events during Earth's history had not only influenced changes in Earth's biosphere, but also significantly altered the atmospheric chemistry. The transition of Earth's atmosphere from an anoxic prebiotic reducing atmosphere high in methane and hydrogen sulfide to an oxidative atmosphere of which free nitrogen and oxygen make up 99% of the mole fractions, had led to major climate changes and caused numerous icehouse phenomena and global glaciations. In industries, oxygen evolution reaction (OER) is a limiting factor in the process of generating molecular oxygen through chemical reactions such as water splitting and electrolysis, and improved OER electrocatalysis is the key to the advancement of a number of renewable energy technologies such as solar fuels, regenerative fuel cells and metal–air batteries. Oxygen evolution in nature Photosynthetic oxygen evolution is the fundamental process by which oxygen is generated in the earth's biosphere. The reaction is part of the light-dependent reactions of photosynthesis in cyanobacteria and the chloroplasts of green algae and plants. It utilizes the energy of light to split a water molecule into its protons and electrons for photosynthesis. Free oxygen, generated as a by-product of this reaction, is released into the atmosphere. Water oxidation is catalyzed by a manganese-containing cofactor contained in photosystem II, known as the oxygen-evolving complex (OEC) or the water-splitting complex. Manganese is an important cofactor, and calcium and chloride are also required for the reaction to occur. The stoichiometry of this reaction is as follows: 2H2O ⟶ 4e− + 4H+ + O2 The protons are released into the thylakoid lumen, thus contributing to the generation of a proton gradient across the thylakoid membrane. This proton gradient is the driving force for adenosine triphosphate (ATP) synthesis via photophosphorylation and the coupling of the absorption of light energy and the oxidation of water for the creation of chemical energy during photosynthesis. History of discovery It was not until the end of the 18th century that Joseph Priestley accidentally discovered the ability of plants to "restore" air that had been "injured" by the burning of a candle. He followed up on the experiment by showing that air "restored" by vegetation was "not at all inconvenient to a mouse." He was later awarded a medal for his discoveries that "...no vegetable grows in vain... but cleanses and purifies our atmosphere." Priestley's experiments were further evaluated by Jan Ingenhousz, a Dutch physician, who then showed that the "restoration" of air only worked while in the presence of light and green plant parts. 
Water electrolysis Together with hydrogen (H2), oxygen is evolved by the electrolysis of water. The point of water electrolysis is to store energy in the form of hydrogen gas, a clean-burning fuel. The "oxygen evolution reaction (OER) is the major bottleneck [to water electrolysis] due to the sluggish kinetics of this four-electron transfer reaction." All practical catalysts are heterogeneous. Electrons (e−) are transferred from the cathode to protons to form hydrogen gas. The half reaction, balanced with acid, is: 2 H+ + 2e− → H2 At the positively charged anode, an oxidation reaction occurs, generating oxygen gas and releasing electrons to the anode to complete the circuit: 2 H2O → O2 + 4 H+ + 4e− Combining the two half-reactions yields the same overall decomposition of water into oxygen and hydrogen: Overall reaction: 2 H2O → 2 H2 + O2 Chemical oxygen generation Although some metal oxides eventually release O2 when heated, these conversions generally require high temperatures. A few compounds release O2 at mild temperatures. Chemical oxygen generators consist of chemical compounds that release O2 when stimulated, usually by heat. They are used in submarines and commercial aircraft to provide emergency oxygen. Oxygen is generated by the high-temperature decomposition of sodium chlorate: 2 NaClO3 → 2 NaCl + 3 O2 Potassium permanganate also releases oxygen upon heating, but the yield is modest. 2 KMnO4 → MnO2 + K2MnO4 + O2 See also Geological history of oxygen Great Oxygenation Event Neoproterozoic oxygenation event Silurian-Devonian Terrestrial Revolution Oxygen cycle References External links Plant Physiology Online, 4th edition: Topic 7.7 - Oxygen Evolution Oxygen evolution - Lecture notes by Antony Crofts, UIUC Evolution of the atmosphere – Lecture notes, Regents of the University of Michigan How to make oxygen and hydrogen from water using electrolysis Photosynthesis Breathing gases Oxygen Biological evolution
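Returning to the electrolysis half-reactions above: since four electrons are transferred per O2 produced at the anode, Faraday's law gives the amount of oxygen evolved for a given charge. The current, run time and ideal-gas conditions below are hypothetical, chosen only to illustrate the arithmetic.

FARADAY = 96485.0   # C per mol of electrons
R = 8.314           # J/(mol K)

current_A = 2.0     # hypothetical cell current
time_s = 3600.0     # one hour of electrolysis
T = 298.15          # K
P = 101325.0        # Pa

mol_electrons = current_A * time_s / FARADAY
mol_O2 = mol_electrons / 4.0              # 2 H2O -> O2 + 4 H+ + 4 e-
volume_L = mol_O2 * R * T / P * 1000.0    # ideal-gas volume, m^3 converted to L

print(f"{mol_O2:.4f} mol O2 (~{volume_L:.2f} L at 25 C and 1 atm)")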
Oxygen evolution
[ "Chemistry", "Biology" ]
1,174
[ "Biochemistry", "Photosynthesis" ]
10,225,620
https://en.wikipedia.org/wiki/Marie-Th%C3%A9r%C3%A8se%20Morlet
Marie-Thérèse Morlet (Guise, Aisne, November 18, 1913 - July 9, 2005) was a French scholar (specialist in onomastics) and honorary director of research at CNRS. Her publications include Dictionnaire étymologique des noms de famille (Etymological Dictionary of Family Names). Les noms de personne sur le territoire de l'ancienne Gaule du VIe au XIIe siècle Her book Les noms de personne sur le territoire de l'ancienne Gaule du VIe au XIIe siècle (Personal Names in the Territory of Ancient Gaul from the 6th to the 12th Century, abbreviated NPAG) is an anthroponymical dictionary covering the evolution of names in France up to the Middle Ages. The work is published by the CNRS, structured as a series of alphabetical lists and made up of three volumes. The title of the third volume is slightly different: Les noms de personne sur le territoire de l'ancienne Gaule (Personal Names in the Territory of Ancient Gaul). Structure Personal Names in the Territory of Ancient Gaul from the 6th to the 12th Century. - I. - Names from continental Germanic languages and Gallo-Germanic creations. 237 pages - published in 1971 as a photomechanical copy of the text of the 1968 edition (no ISBN). The first volume contains almost exclusively an alphabetical list of names of the origin indicated by the volume's subtitle. Personal Names in the Territory of Ancient Gaul. - III. - Personal names contained in place names. 563 pages - published in 1985. The second volume contains an alphabetical list of names of the origin indicated by the volume's subtitle. Volume 3, subtitled Personal Names Contained in Place Names, contains: First part: Latin names or names transmitted via Latin: alphabetical lexicon indicating the associated place names for each anthroponym; Second part: Personal names derived from continental Germanic: alphabetical lexicon indicating the associated place names for each anthroponym. A general index with an alphabetical list of place names (pages 505 - 540) NPAG and toponyms The three volumes of NPAG are referenced in Ernest Nègre's Toponymie générale de la France (volume 1, 1990). Nègre's study covers the geographical area of modern France, which corresponds to the ancient Gaul treated in Marie-Thérèse Morlet's publications. A number of personal names in NPAG are preceded by an asterisk (*). Examples are the forms ending in -acum built on an owner's name; one such case is *Stirpius (reconstructed from *Stirpiacum, which is the etymology of Étréchy), with the explanation: the name [of a person] comes from stirps, "souche" in French; from E. Nègre (TGF § 6359). Works Toponymie de la Thiérache (Toponymy of Thiérache), Artrey, Paris, 1957, 137 p. [no ISBN] Étude d'anthroponymie picarde : les noms de personne en Haute Picardie aux XIIIe, XIVe, XVe siècles (A Study of Picard Anthroponymy: Personal Names in Upper Picardy in the 13th, 14th and 15th Centuries), 468 pages, published by la Société de linguistique picarde (Picard Linguistics Society) for the Musée de Picardie, Amiens, 1967, no ISBN Le vocabulaire de la Champagne septentrionale au Moyen âge : essai d'inventaire méthodique, Klincksieck, Paris, 1969, 429 p.
[no ISBN] Les noms de personne sur le territoire de l'ancienne Gaule du VIe au XIIe siècle, 1971, 1973 et 1985 Les Études d'onomastique en France : de 1938 à 1970 (The Onomastic Studies In France: From 1938 To 1970), Société d'études linguistiques et anthropologiques de France, Paris, 1981, 214 Dictionnaire étymologique des noms de famille (Etymological Dictionary Of Family Names), 1st edition: Perrin, Paris, 1991, p. 983 References French science writers 20th-century French non-fiction writers Women science writers 20th-century French women writers 1913 births 2005 deaths Research directors of the French National Centre for Scientific Research
Marie-Thérèse Morlet
[ "Technology" ]
952
[ "Women science writers", "Women in science and technology" ]
10,233,756
https://en.wikipedia.org/wiki/Low-density%20lipoprotein%20receptor-related%20protein%208
Low-density lipoprotein receptor-related protein 8 (LRP8), also known as apolipoprotein E receptor 2 (ApoER2), is a protein that in humans is encoded by the LRP8 gene. ApoER2 is a cell surface receptor that is part of the low-density lipoprotein receptor family. These receptors function in signal transduction and endocytosis of specific ligands. Through interactions with one of its ligands, reelin, ApoER2 plays an important role in embryonic neuronal migration and postnatal long-term potentiation. Another LDL family receptor, VLDLR, also interacts with reelin, and together these two receptors influence brain development and function. Decreased expression of ApoER2 is associated with certain neurological diseases. Structure ApoER2 is a protein made up of 870 amino acids. It is separated into a ligand binding domain of eight ligand binding regions, an EGF-like domain containing three cysteine-rich repeats, an O-linked glycosylation domain of 89 amino acids, a transmembrane domain of 24 amino acids, and a cytoplasmic domain of 115 amino acids, including an NPXY motif. Each letter in the NPXY motif represents a certain amino acid where N is arginine, P is proline, X is any amino acid, and Y is tyrosine. Cytoplasmic tail All LDL receptor family proteins contain a cytoplasmic tail with at least one NPXY motif. This motif is important for binding intracellular adapter proteins and endocytosis. ApoER2 is distinct from most other members of the LDL family of receptors due to a unique insert on its cytoplasmic tail. In ApoER2, there is a proline-rich 59 amino acid insert encoded by the alternatively spliced exon 19. This insert allows for protein interactions that are unable to occur with other LDL receptors. It binds the PSD-95 adapter protein, cross-linking ApoER2 and the NMDA receptors during the process of long-term potentiation, and is also bound specifically by JIP-2, an important interaction in the JNK signalling pathway. It is also speculated that this insert may diminish the function of ApoER2 in lipoprotein endocytosis by somehow disrupting the NPXY motif. Function Reelin/Dab1 signalling pathway ApoER2 plays a critical role as a receptor in the reelin signalling pathway, which is important for brain development and postnatal function of the brain. This pathway specifically affects cortical migration and long-term potentiation. Cortical migration In development, reelin is secreted by Cajal-Retzius cells. Reelin acts as an extracellular ligand binding to ApoER2 and VLDLR on migrating neurons. A specific lysine residue on reelin binds to the first repeat on the ligand binding domain of ApoER2. This interaction with the two receptors activates intracellular processes that begin with the phosphorylation of Dab1, a tyrosine kinase phosphorylated protein which is encoded by the DAB1 gene. This protein associates with the NPXY motifs on the intracellular tails of ApoER2 and VLDLR. Upon reelin binding, Dab1 is phosphorylated by two tyrosine kinases, Fyn and Src. The phosphorylated Dab1 then causes further activation of these two kinases and others, including a phosphatidylinositol-3-kinase (PI3K). PI3K activation leads to inhibitory phosphorylation of the tau kinase glycogen synthase kinase 3 beta (GSK3B), which alters the activity of tau protein, a protein involved in stabilizing microtubules. This transduction is combined with the activation of other pathways that influence the cytoskeletal rearrangement necessary for proper cortical cell migration. 
The result of proper neuronal migration through the cortical plate (CP) is an inside-out arrangement of neurons, where the younger neurons migrate past the older neurons to their proper locations. Studies in reeler mutant mice show that knocking out the reeler gene results in aberrant migration as well as outside-in layering, in which younger neurons are unable to travel past the older ones. Such abnormal layering is also seen in VLDLR−apoER2− and dab1- mutants, indicating the importance of this entire pathway in cortical migration of the developing embryo. There is some confusion as to the exact function of the reelin-signalling pathway in the process of cortical migration. Some studies have shown that reelin release is necessary for the initiation of cell movement to its proper location, whereas others have shown that it is part of the process of terminating migration. These conflicting results have led researchers to speculate that it plays a role in both processes through interactions with different molecules at different stages of neuronal migration. Long-term potentiation After development, reelin is secreted in the cortex and hippocampus by gamma-aminobutyric acid-ergic interneurons. Through binding of ApoER2 in the hippocampus, it plays a role in the NMDA receptor activation that is required for long-term potentiation, a mechanism by which two neurons gain a stronger, longer-lasting transmission due to simultaneous firing. The increased synaptic plasticity associated with this process is essential in development of memory and spatial learning. Studies with mice have shown less expression of ApoER2 leads to impaired spatial learning, fear conditioned learning, and a mild disruption to the hippocampus. In the hippocampus, ApoER2 is complexed with NMDA receptors through the PSD-95 adapter protein. When reelin binds ApoER2, it initiates tyrosine phosphorylation of NMDA receptors. This occurs through Dab-1 activation of Src family kinases, which have been shown to play a role in regulating synaptic plasticity. VLDLR also acts as a receptor coupled to ApoER2 as it does during development, but its role is not well understood. ApoER2 plays a more important role in this process, most likely due to its ability to bind the PSD-95 adapter protein through the 59 amino acid insert on its cytoplasmic tail. Studies with mice have shown that knocking out ApoER2 or just the alternatively spliced exon 19 causes a much greater impairment of LTP than knocking out VLDLR. Other interacting proteins Apolipoprotein E Apolipoprotein E (ApoE) plays an important role in phospholipid and cholesterol homeostasis. After binding ApoER2, ApoE is taken up into the cell and may remain in the intracellular space, be shipped to the cell surface, or be degraded. ApoE binding leads to the cleavage of ApoER2 into secreted proteins by the actions of the plasma membrane protein gamma secretase. ApoE may be the signalling ligand responsible for ApoER2's role in modulating the JNK signalling pathway. FE65 FE65 is an intracellular protein that binds to the NPXY motif of ApoER2 and plays a role in linking other proteins, such as amyloid precursor protein, to ApoER2. This protein aids in a cell's migrational functions. Knockout studies of FE65 have shown a link to lissencephaly. JIP1 and JIP2 JIP1 and JIP2 are involved in the JNK-signaling pathway and interact with exon 19 of ApoER2. For JIP2, interaction with exon 19 of ApoER2 is through the PID domain. 
This interaction has led researchers to believe that ApoER2 is involved in many interactions at the surface of cells. Selenoprotein P Selenoprotein P transports the trace element selenium from the liver to the testes and brain, and binds to ApoER2 in these areas. ApoER2 functions to internalize this complex to maintain normal levels of selenium in these cells. Selenium is necessary in the testes for proper spermatozoa development. Mice that have had their ApoER2 or Selenoprotein P expression knocked out show impaired spermatozoa development and decreased fertility. In the brain, deficiencies in selenium and selenium uptake mechanisms result in brain damage. Thrombospondin and F-spondin Thrombospondin is a protein found in the extracellular matrix that competes with reelin to bind ApoER2. It is involved with cell-to-cell communication and migration of neurons, and causes the activation of Dab1. F-spondin is a secreted protein that also binds ApoER2 and leads to phosphorylation of Dab1. Clinical significance Alzheimer's disease Alzheimer's disease is the most common form of dementia, and studies have shown that manipulation of pathways involving LRP8/ApoER2 can lead to the disease. Certain alleles, such as apoe, app, ps1 and ps2, may lead to being genetically predisposed to the disease. A decrease in LRP8 expression is observed in patients with Alzheimer's disease. An example of a decrease in expression of LRP8 is when gamma secretase cleaves LRP8 as well as the ligand amyloid precursor protein (APP). The degradation products control transcription factors, which lead to the expression of a tau protein. The cascade dysfunction caused by the altered gene expression may be implicated with Alzheimer's disease. The presence of amyloid beta (Aβ) protein deposits in neuronal extracellular space is one of the hallmarks of Alzheimer's disease. The role of ApoER2 in Alzheimer's disease is relevant, yet incompletely understood. New evidence suggests ApoER2 plays a major role in the regulation of amyloid-β formation in the brain. The amyloid-β peptide is derived from the cleavage of APP by gamma secretase. ApoER2 works to reduce APP trafficking by altering break down. This interaction decreases APP endocytosis leading to an increase in amyloid-β production. In addition, the expression of ApoER2 within intracellular compartments leads to increased gamma secretase activity, a protease which works to cleave APP into Aβ. ApoER2 splice variants can act as a receptor for alpha-2-macroglobulin which can have a role in clearance of alpha-2-macroglobulin/proteinase complex. Proteases may play a role in synaptic plasticity balancing proteolytic activity and inhibition, which is controlled by proteolytic inhibitors such as alpha-2-macroglobulin. Studies have shown that a high presence of alpha-2-macroglobulin is present in the neuritic plaques in many Alzheimer patients. Isolation of cDNA encoding proteins associated with Aβ was used to discover alpha-2-macroglobulin. These discoveries may link alpha-2-macroglobulin and its receptors, one of them being ApoER2, to Alzheimer's disease. ApoER2 interaction with reelin and ApoE has implications with Alzheimer's disease. Binding of reelin to ApoER2 leads to cascade of signals that modulate NMDA receptor functions. ApoE competes with reelin in binding to ApoER2 resulting in weakened reelin signaling. 
Reduced reelin signaling leads to impaired plasticity in neurons and increases in the phosphorylation of tau protein, which is a microtubule stabilizing protein that is abundant in the Central Nervous System (CNS), producing neurofibrillary tangles which are implicated in Alzheimer's disease. Antiphospholipid syndrome Antiphospholipid syndrome is an autoimmune disease characterized by thrombosis and complications during pregnancy, often leading to fetal death. It is caused by the presence of antibodies against anionic phospholipids and β2-glycoprotein I (β2GPI). The anti-β2GPI antibodies are most prevalent in causing the symptoms of the disease. When bound by an antibody, β2GPI begins to interact with monocytes, endothelial cells, and platelets. ApoER2 is thought to play a key role in the process of platelet binding. β2GPI has the proper binding site for interaction with ApoER2 and other LDL family receptors, and it is speculated that the antibody/β2GPI complexes interact with ApoER2 on platelets. This causes the phosphorylation of a p38MAPkinase, resulting in the production of thromboxane A2. Thromboxane A2 functions to activate more platelets, and this leads to a greater chance for blood clots to form. There is also speculation that the antibody/β2GPI complexes sensitize other cell types through various LDL family receptors to lead to less common symptoms other than thrombosis. Cancer ApoER2 has been found to promote ferroptosis resistance in cancer. Loss of ApoER2 results in insufficient selenium levels, leading to failed translation of the key ferroptosis regulator and selenoprotein GPX4. Major depressive disorder Reduced expression of ApoER2 in peripheral blood lymphocytes can contribute to major depressive disorder (MDD) in some patients. Major depressive disorder is the most common psychiatric disorder, where people show symptoms of low self-esteem and a loss of interest in pleasure. By studying the levels of ApoER2 mRNA, low levels of ApoER2 were discovered. Results from experiments have shown that this could be because of transcriptional alterations in lymphocytes. However, low levels of ApoER2 do not appear to correlate with the severity or duration of the disease. It only aids as a trait marker in identification of the disease. The impact of the low levels of ApoER2 mRNA function relating to the disease remains unknown. References Further reading External links Are You reelin in the Years? Not without Alternative Splicing Low-density lipoprotein receptor gene family Receptors
Low-density lipoprotein receptor-related protein 8
[ "Chemistry" ]
3,032
[ "Receptors", "Signal transduction" ]
7,922,560
https://en.wikipedia.org/wiki/Stagnation%20enthalpy
In thermodynamics and fluid mechanics, the stagnation enthalpy of a fluid is the static enthalpy of the fluid at a stagnation point. The stagnation enthalpy is also called total enthalpy. At a point where the flow does not stagnate, it corresponds to the static enthalpy of the fluid at that point assuming it was brought to rest from velocity $v$ isentropically. That means all the kinetic energy was converted to internal energy without losses and is added to the local static enthalpy. When the potential energy of the fluid is negligible, the mass-specific stagnation enthalpy represents the total energy of a flowing fluid stream per unit mass. Stagnation enthalpy, or total enthalpy, is the sum of the static enthalpy (associated with the temperature and static pressure at that point) plus the enthalpy associated with the dynamic pressure, or velocity. This can be expressed in a formula in various ways. Often it is expressed in specific quantities, where specific means mass-specific, to get an intensive quantity: $h_0 = h + \frac{v^2}{2}$, where: $h_0$ is the mass-specific total enthalpy, in [J/kg]; $h$ is the mass-specific static enthalpy, in [J/kg]; $v$ is the fluid velocity at the point of interest, in [m/s]; and $\frac{v^2}{2}$ is the mass-specific kinetic energy, in [J/kg]. The volume-specific version of this equation (in units of energy per volume, [J/m^3]) is obtained by multiplying the equation by the fluid density $\rho$: $\rho h_0 = \rho h + \rho \frac{v^2}{2}$, where: $\rho h_0$ is the volume-specific total enthalpy, in [J/m^3]; $\rho h$ is the volume-specific static enthalpy, in [J/m^3]; $v$ is the fluid velocity at the point of interest, in [m/s]; $\rho$ is the fluid density at the point of interest, in [kg/m^3]; and $\rho \frac{v^2}{2}$ is the volume-specific kinetic energy, in [J/m^3]. The non-specific version of this equation, in which extensive quantities are used, is: $H_0 = H + \frac{m v^2}{2}$, where: $H_0$ is the total enthalpy, in [J]; $H$ is the static enthalpy, in [J]; $m$ is the fluid mass, in [kg]; $v$ is the fluid velocity at the point of interest, in [m/s]; and $\frac{m v^2}{2}$ is the kinetic energy, in [J]. The suffix ‘0’ usually denotes the stagnation condition and is used as such here. Enthalpy is the energy associated with the temperature plus the energy associated with the pressure. The stagnation enthalpy adds a term associated with the kinetic energy of the fluid mass. The total enthalpy for a real or ideal gas does not change across a shock. The total enthalpy cannot be measured directly. Instead, the static enthalpy and the fluid velocity can be measured. Static enthalpy is often used in the energy equation for a fluid. See also Stagnation pressure Stagnation temperature Rothalpy References External links http://ocw.mit.edu/ans7870/16/16.unified/thermoF03/chapter_6.htm Fluid dynamics Enthalpy
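The mass-specific relation above lends itself to a short numerical illustration. The following Python sketch is not part of the original article; the ideal-gas assumption h = cp*T and all numeric values are assumptions chosen only to show the arithmetic.

```python
# Minimal sketch of the mass-specific stagnation enthalpy relation h0 = h + v**2 / 2.
# The ideal-gas relation h = cp * T and the numbers below are illustrative assumptions.

def stagnation_enthalpy(h_static: float, velocity: float) -> float:
    """Return mass-specific stagnation enthalpy in J/kg.

    h_static : static enthalpy in J/kg
    velocity : flow speed in m/s
    """
    return h_static + 0.5 * velocity ** 2

cp = 1005.0          # J/(kg*K), treated as constant for air (assumption)
T_static = 288.15    # K
v = 200.0            # m/s
h = cp * T_static    # static enthalpy under the ideal-gas assumption
h0 = stagnation_enthalpy(h, v)
print(f"static h = {h:.0f} J/kg, stagnation h0 = {h0:.0f} J/kg")
```

The kinetic-energy term (20,000 J/kg in this example) is simply added to the static enthalpy, which is all the relation states when potential energy is negligible.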
Stagnation enthalpy
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
641
[ "Thermodynamic properties", "Physical quantities", "Chemical engineering", "Quantity", "Enthalpy", "Piping", "Fluid dynamics" ]
7,923,779
https://en.wikipedia.org/wiki/Industrial%20%26%20Engineering%20Chemistry%20Research
Industrial & Engineering Chemistry Research is a peer-reviewed scientific journal published by the American Chemical Society covering all aspects of chemical engineering. The editor-in-chief is Michael Baldea (University of Texas at Austin). History The journal was established in 1909 as the Journal of Industrial & Engineering Chemistry. It was renamed in 1930 as Industrial & Engineering Chemistry before obtaining its current title in 1970. From 1911 to 1916 it was edited by Milton C. Whitaker. From 1921 to 1942 it was edited by Harrison E. Howe. From 1962 to 1986, Industrial & Engineering Chemistry Fundamentals was edited by Robert L. Pigford. From 1986 to 2013 the journal was edited by Donald R. Paul, and from 2014 to 2023 by Phillip E. Savage. The journal I&EC Product Research and Development was established in 1962. It was renamed Product R&D in 1969 and renamed again in 1978 as Industrial & Engineering Chemistry Product Research and Development. In 1986, it and the journals Industrial & Engineering Chemistry Fundamentals and Industrial & Engineering Chemistry Process Design and Development, both also established in 1962, were combined into Industrial & Engineering Chemistry Research. Abstracting and indexing The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2022 impact factor of 4.2. References External links American Chemical Society academic journals Biweekly journals English-language journals Publications established in 1909 Chemical engineering journals
Industrial & Engineering Chemistry Research
[ "Chemistry", "Engineering" ]
286
[ "Chemical engineering", "Chemical engineering journals" ]
15,369,660
https://en.wikipedia.org/wiki/Ceramic%20flux
Fluxes are substances, usually oxides, used in glasses, glazes and ceramic bodies to lower the high melting point of the main glass-forming constituents, usually silica and alumina. A ceramic flux functions by promoting partial or complete liquefaction. The most commonly used fluxing oxides in a ceramic glaze contain lead, sodium, potassium, lithium, calcium, magnesium, barium, zinc, strontium, and manganese. These are introduced to the raw glaze as compounds, for example lead as lead oxide. Boron is considered by many to be a glass former rather than a flux. Some oxides, such as calcium oxide, flux significantly only at high temperature. Lead oxide is the traditional low-temperature flux used for crystal glass, but it is now avoided because it is toxic even in small quantities. It is being replaced by other substances, especially boron and zinc oxides. In clay bodies a flux creates a limited and controlled amount of glass, which works to cement crystalline phases together. Fluxes play a key role in the vitrification of clay bodies by lowering the overall melting point. The most common fluxes used in clay bodies are potassium oxide and sodium oxide, which are found in feldspars. A predominant flux in glazes is calcium oxide, which is usually obtained from limestone. The two most common feldspars in the ceramic industry are potash feldspar (orthoclase) and soda feldspar (albite). See also Flux (metallurgy) Loss on ignition Secondary flux References Ceramic materials
Ceramic flux
[ "Engineering" ]
336
[ "Ceramic engineering", "Ceramic materials" ]
15,369,988
https://en.wikipedia.org/wiki/Axiomatic%20product%20development%20lifecycle
Axiomatic product development lifecycle (APDL), also known as transdisciplinary system development lifecycle (TSDL) and transdisciplinary product development lifecycle (TPDL), is a systems engineering product development model proposed by Bulent Gumus that extends the Axiomatic design (AD) method. APDL covers the whole product lifecycle including early factors that affect the entire cycle such as development testing, input constraints and system components. APDL provides an iterative and incremental way for a team of transdisciplinary members to approach holistic product development. A practical outcome includes capturing and managing product design knowledge. The APDL model addresses some weak patterns experienced in previous development models regarding quality of the design, requirements management, change management, project management, and communication between stakeholders. Practicing APDL may reduce development time and project cost. Overview APDL adds the Test domain and four new characteristics to Axiomatic design (AD): Input Constraints in the Functional Domain; System Components in the Physical Domain; Process Variables tied to System Components instead of Design Parameters; and Customer Needs mapped to Functional Requirements and Input Constraints. APDL proposes a V-shaped process to develop the Design Parameters and System Components (detailed design). The process starts top-down with the Process Variables (PV) and Component Test Cases (CTC), completing the PV, CTC, and Functional Test Cases (FTC); after the build, the product is tested with a bottom-up approach. APDL Domains Customer domain Customer Needs (CN) are elements that the customer seeks in a product or system. Functional domain Functional Requirements (FR) completely characterize the minimum performance to be met by the design solution, product, etc. FR are documented in requirement specifications (RS). Input Constraints (IC) are included in the functional domain along with the FR. IC are specific to overall design goals and are imposed externally by CN, product users or conditions of use, such as regulations. IC are derived from CN and then revised based on other constraints that the product has to comply with but that are not mentioned in the Customer Domain. Physical domain The Design Parameters (DP) are the elements of the design solution in the physical domain that are chosen to satisfy the specified FRs. DPs can be conceptual design solutions, subsystems, components, or component attributes. System Components (SC) provide a categorical design solution in the DP, where the categories represent physical parts in the Physical Domain. The SC hierarchy represents the physical system architecture or product tree. The method for categorizing varies. Eppinger (2001) portrays general categories as system, subsystem, and component. NASA uses system, segment, element, subsystem, assembly, subassembly, and part (NASA, 1995). SC make it possible to perform Design Structure Matrices (DSM), change management, component-based cost management and impact analysis, and provide a framework for capturing structural information and requirement traceability. Process domain Process Variables (PV) identify and describe the controls and processes to produce SC. Test domain A functional test consists of a set of Functional Test Cases (FTC). FTC are system tests used to verify that FR are satisfied by the system. Black-box testing is the software analog to FTC.
At the end of the system development, a functional test verifies that the requirements of the system are met. Component Test Cases (CTC) are a physical analog to white-box testing. CTC verify that components satisfy the allocated FRs and ICs. Each system component is tested before it is integrated into the system to make sure that the requirements and constraints allocated to that component are all satisfied. See also Systems development life-cycle New product development Product lifecycle management Engineering design process Design–build Integrated project delivery References Further reading B. Gumus, A. Ertas, D. Tate and I. Cicek, "Transdisciplinary Product Development Lifecycle", Journal of Engineering Design, 19(03), pp. 185–200, June 2008. B. Gumus, A. Ertas, and D. Tate, "Transdisciplinary Product Development Lifecycle Framework And Its Application To An Avionics System", Integrated Design and Process Technology Conference, June 2006. B. Gumus and A. Ertas, "Requirements Management and Axiomatic Design", Journal of Integrated Design and Process Science, Vol. 8, No. 4, pp. 19–31, Dec 2004. Suh, Complexity: Theory and Applications, Oxford University Press, 2005. Suh, Axiomatic Design: Advances and Applications, Oxford University Press, 2001. Engineering concepts Product development Quality management Systems engineering
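To make the domain mappings and traceability described in this article concrete, the following Python sketch represents CN, FR, IC, DP, SC, PV and test cases as plain data structures. It is only an illustration: the element names and the dictionary-based representation are assumptions, not something taken from the APDL literature.

```python
# Illustrative only: APDL domain elements and their mappings as simple data
# structures, sketching the requirement traceability and impact analysis the
# article mentions. All element names below are hypothetical.

customer_needs = {"CN1": "Easy to carry"}
functional_requirements = {"FR1": "Total mass under 1.5 kg"}
input_constraints = {"IC1": "Comply with applicable transport regulations"}
design_parameters = {"DP1": "Lightweight alloy housing"}
system_components = {"SC1": "Housing assembly"}
process_variables = {"PV1": "Die-casting process for the housing"}

# Directed links between domains: CN -> FR/IC, FR -> DP, DP -> SC, SC -> PV,
# SC -> CTC (component test case), FR -> FTC (functional test case).
links = {
    ("CN1", "FR1"), ("CN1", "IC1"),
    ("FR1", "DP1"),
    ("DP1", "SC1"),
    ("SC1", "PV1"),
    ("SC1", "CTC1"),
    ("FR1", "FTC1"),
}

def related(element: str) -> set:
    """Return every element directly linked to `element` (a one-step impact analysis)."""
    return {b for a, b in links if a == element} | {a for a, b in links if b == element}

print(related("FR1"))  # CN1, DP1 and FTC1 (set order may vary)
```

A change proposed to FR1 can then be traced upward to the customer need it serves and downward to the design parameter and test cases it affects, which is the kind of change management and impact analysis the System Component hierarchy is meant to support.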
Axiomatic product development lifecycle
[ "Engineering" ]
969
[ "Systems engineering", "nan" ]
15,374,375
https://en.wikipedia.org/wiki/Bartell%20mechanism
The Bartell mechanism is a pseudorotational mechanism similar to the Berry mechanism. It occurs only in molecules with a pentagonal bipyramidal molecular geometry, such as IF7. This mechanism was first predicted by H. B. Bartell. The mechanism exchanges the axial atoms with one pair of the equatorial atoms with an energy requirement of about 2.7 kcal/mol. Similarly to the Berry mechanism in square planar molecules, the symmetry of the intermediary phase of the vibrational mode is "chimeric" of other mechanisms; it displays characteristics of the Berry mechanism, a "lever" mechanism seen in pseudorotation of disphenoidal molecules, and a "turnstile" mechanism (which can be seen in trigonal bipyramidal molecules under certain conditions). References See also Pseudorotation Bailar twist Berry mechanism Ray–Dutt twist Fluxional molecule Molecular geometry Chemical kinetics
Bartell mechanism
[ "Physics", "Chemistry" ]
189
[ "Chemical reaction engineering", "Molecular geometry", "Molecules", "Stereochemistry", "Stereochemistry stubs", "Chemical kinetics", "Matter" ]
15,376,172
https://en.wikipedia.org/wiki/Atmospheric%20instability
Atmospheric instability is a condition where the Earth's atmosphere is considered to be unstable and as a result local weather is highly variable through distance and time. Atmospheric instability encourages vertical motion, which is directly correlated to different types of weather systems and their severity. For example, under unstable conditions, a lifted parcel of air will find cooler and denser surrounding air, making the parcel prone to further ascent, in a positive feedback loop. In meteorology, instability can be described by various indices such as the Bulk Richardson Number, lifted index, K-index, convective available potential energy (CAPE), the Showalter index, and the vertical totals index. These indices, as well as atmospheric instability itself, involve temperature changes through the troposphere with height, or lapse rate. Effects of atmospheric instability in moist atmospheres include thunderstorm development, which over warm oceans can lead to tropical cyclogenesis, and turbulence. In dry atmospheres, inferior mirages, dust devils, steam devils, and fire whirls can form. Stable atmospheres can be associated with drizzle, fog, increased air pollution, a lack of turbulence, and undular bore formation. Forms There are two primary forms of atmospheric instability. Under convective instability, thermal mixing through convection in the form of rising warm air leads to the development of clouds and possibly precipitation or convective storms. Dynamic instability is produced through the horizontal movement of air and the physical forces it is subjected to such as the Coriolis force and pressure gradient force; resulting dynamic lifting and mixing produces cloud, precipitation and storms often on a synoptic scale. Cause of instability Whether or not the atmosphere has stability depends partially on the moisture content. In a very dry troposphere, a temperature decrease with height of less than about 9.8 °C per kilometer of ascent indicates stability, while greater changes indicate instability. This lapse rate is known as the dry adiabatic lapse rate. In a completely moist troposphere, a temperature decrease with height of less than the moist adiabatic lapse rate (which is smaller than the dry rate and varies with temperature, averaging roughly 5 °C per kilometer) indicates stability, while greater changes indicate instability. For temperature decreases per kilometer of ascent between these two values, the term conditionally unstable is used. Indices used for its determination Lifted Index The lifted index (LI), usually expressed in kelvins, is the temperature difference between the temperature of the environment Te(p) and an air parcel lifted adiabatically Tp(p) at a given pressure height in the troposphere, usually 500 hPa (mb). When the value is positive, the atmosphere (at the respective height) is stable and when the value is negative, the atmosphere is unstable. Thunderstorms are expected with values below −2, and severe weather is anticipated with values below −6. K Index The K index is derived arithmetically: K-index = (850 hPa temperature – 500 hPa temperature) + 850 hPa dew point – 700 hPa dew point depression The temperature difference between 850 hPa (about 1.5 km above sea level) and 500 hPa (about 5.5 km above sea level) is used to parameterize the vertical temperature lapse rate. The 850 hPa dew point provides information on the moisture content of the lower atmosphere. The vertical extent of the moist layer is represented by the difference of the 700 hPa temperature (about 3 km above sea level) and 700 hPa dew point.
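The lifted index and K index just described can be illustrated with a short sketch. This Python example is not from the article; the sounding values are invented and the function names are assumptions used only to show the arithmetic.

```python
# Minimal sketch of the two indices described above, using made-up sounding data.

def lifted_index(t_env_500: float, t_parcel_500: float) -> float:
    """Lifted index (K or degC): environment temperature minus lifted-parcel
    temperature at 500 hPa. Negative values indicate instability."""
    return t_env_500 - t_parcel_500

def k_index(t850: float, t500: float, td850: float, t700: float, td700: float) -> float:
    """K index = (T850 - T500) + Td850 - (T700 - Td700), all temperatures in degC."""
    return (t850 - t500) + td850 - (t700 - td700)

# Illustrative sounding values (degC):
li = lifted_index(t_env_500=-14.0, t_parcel_500=-10.0)  # parcel warmer than environment -> unstable
ki = k_index(t850=20.0, t500=-14.0, td850=15.0, t700=8.0, td700=4.0)
print(f"Lifted index: {li:+.1f}  (below -2 suggests thunderstorms)")
print(f"K index: {ki:.0f}")
```

With these invented numbers the lifted index is -4.0, in the range where thunderstorms are expected, and the K index of 45 reflects a deep, moist layer with a steep 850-500 hPa lapse rate.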
CAPE and CIN Convective available potential energy (CAPE), sometimes simply called available potential energy (APE), is the amount of energy a parcel of air would have if lifted a certain distance vertically through the atmosphere. CAPE is effectively the positive buoyancy of an air parcel and is an indicator of atmospheric instability, which makes it valuable in predicting severe weather. CIN, convective inhibition, is effectively negative buoyancy, expressed B-; the opposite of convective available potential energy (CAPE), which is expressed as B+ or simply B. As with CAPE, CIN is usually expressed in J/kg but may also be expressed as m²/s², as the values are equivalent. In fact, CIN is sometimes referred to as negative buoyant energy (NBE). It is a form of fluid instability found in thermally stratified atmospheres in which a colder fluid overlies a warmer one. When an air mass is unstable, the element of the air mass that is displaced upwards is accelerated by the pressure differential between the displaced air and the ambient air at the (higher) altitude to which it was displaced. This usually creates vertically developed clouds from convection, due to the rising motion, which can eventually lead to thunderstorms. It could also be created in other phenomena, such as a cold front. Even if the air is cooler on the surface, there is still warmer air in the mid-levels that can rise into the upper levels. However, if there is not enough water vapor present, there is no ability for condensation, and thus storms, clouds, and rain will not form. Bulk Richardson Number The Bulk Richardson Number (BRN) is a dimensionless number relating vertical stability and vertical wind shear (generally, stability divided by shear). It represents the ratio of thermally produced turbulence to turbulence generated by vertical shear. Practically, its value determines whether convection is free or forced. High values indicate unstable and/or weakly sheared environments; low values indicate weak instability and/or strong vertical shear. Generally, values in the range of around 10 to 45 suggest environmental conditions favorable for supercell development. Showalter index The Showalter index is computed by taking a parcel at the 850 hPa level, lifting it dry adiabatically up to saturation and then moist adiabatically up to the 500 hPa level, and subtracting the resulting parcel temperature from the observed 500 hPa temperature. If the value is negative, then the lower portion of the atmosphere is unstable, with thunderstorms expected when the value is below −3. The application of the Showalter index is especially helpful when there is a cool, shallow air mass below 850 hPa that conceals the potential convective lifting. However, the index will underestimate the potential convective lifting if there are cool layers that extend above 850 hPa, and it does not consider diurnal radiative changes or moisture below 850 hPa. Effects Stable atmosphere Stable conditions, such as during a clear and calm night, will cause pollutants to become trapped near ground level. Drizzle occurs within a moist air mass when it is stable. Air within a stable layer is not turbulent. Conditions associated with a marine layer, a stable atmosphere common on the west side of continents near cold water currents, lead to overnight and morning fog. Undular bores can form when a low level boundary such as a cold front or outflow boundary approaches a layer of cold, stable air.
The approaching boundary will create a disturbance in the atmosphere producing a wave-like motion, known as a gravity wave. Although the undular bore waves appear as bands of clouds across the sky, they are transverse waves, and are propelled by the transfer of energy from an oncoming storm and are shaped by gravity. The ripple-like appearance of this wave is described as the disturbance in the water when a pebble is dropped into a pond or when a moving boat creates waves in the surrounding water. The object displaces the water or medium the wave is travelling through and the medium moves in an upward motion. However, because of gravity, the water or medium is pulled back down and the repetition of this cycle creates the transverse wave motion. Unstable atmosphere Within an unstable layer in the troposphere, the lifting of air parcels will occur, and continue for as long as the nearby atmosphere remains unstable. Once overturning through the depth of the troposphere occurs (with convection being capped by the relatively warmer, more stable layer of the stratosphere), deep convective currents lead to thunderstorm development when enough moisture is present. Over warm ocean waters and within a region of the troposphere with light vertical wind shear and significant low level spin (or vorticity), such thunderstorm activity can grow in coverage and develop into a tropical cyclone. Over hot surfaces during warm days, unstable dry air can lead to significant refraction of the light within the air layer, which causes inferior mirages. When winds are light, dust devils can develop on dry days within a region of instability at ground level. Small-scale, tornado-like circulations can occur over or near any intense surface heat source, which would have significant instability in its vicinity. Those that occur near intense wildfires are called fire whirls, which can spread a fire beyond its previous bounds. A steam devil is a rotating updraft that involves steam or smoke. They can form from smoke issuing from a power plant smokestack. Hot springs and warm lakes are also suitable locations for a steam devil to form, when cold arctic air passes over the relatively warm water. See also Atmospheric thermodynamics Buoyancy Stable and unstable stratification References Atmospheric dynamics Atmospheric thermodynamics
Atmospheric instability
[ "Chemistry" ]
1,849
[ "Atmospheric dynamics", "Fluid dynamics" ]
15,377,005
https://en.wikipedia.org/wiki/TerraVia
TerraVia Holdings, Inc. (formerly Solazyme) was a publicly held biotechnology company in the United States. TerraVia used proprietary technology to transform a range of low-cost plant-based sugars into high-value oils and whole algae ingredients. TerraVia supplied a variety of sustainable algae-based food ingredients to a number of brands, which included Hormel Food Corporation, Utz Quality Foods Inc and enjoy Life Foods. TerraVia also sold its own culinary algae oil under the Thrive Algae Oil brand. In 2017, the firm declared bankruptcy. Company history Founding Solazyme, Inc., was founded on 31 March 2003, with the mission of utilizing microalgae to create a renewable source of energy and transportation fuels. Founders Jonathan S. Wolfson and Harrison Dillon, who met while attending Emory University, started the company in a garage in Palo Alto. Regarding their partnership, Dillon said: "Neither of us wanted to go work for some giant organization where we were a tiny cog in a huge wheel. We wanted to make a difference and create something that had never existed before.” In 2013 Dillon announced his decision to step down from his full-time position as CTO and member of the Board of Directors of Solazyme and shift to a long-term consulting role focused on further developing the breadth of the technology platform and advising on intellectual property strategy. Wolfson continued on as Chairman and CEO of Solazyme until August 2016 when he stepped down from management while he stayed on the Board. TerraVia appointed Apu Mody, former President of Mars Food America, as new CEO and a member of the Board of Directors, in August 2016. Initial focus and technology In 2004 and 2005, Solazyme began development of an algal molecular biology platform, identified and initiated a platform for microalgae-based oil production. The company then expanded focus on skin and personal care products. Solazyme used a technique to grow microalgae, which allows the production process to be extremely efficient in terms of cost, scale, time, and sustainability. In contrast to common open pond and photo bioreactor approaches, TerraVia grows microalgae in the dark, inside huge stainless-steel containers. In September 2007, Solazyme received a $2 million grant from the National Institute of Standards and Technology, signed a joint development agreement with Chevron through its division Chevron Technology Ventures, began operating in commercially sized standard industrial fermentation equipment (75,000-liter scale), worked with a third party refiner to demonstrate the compatibility of the oil with standard refining equipment, and produced over 400 liters of microalgae-based oils. In January 2008, Solazyme was featured in Fields of Fuel, which was played at the Sundance Film Festival in Park City, Utah. At the event, it presented a Mercedes-Benz C320 fueled with its Soladiesel brand of algal fuel. Also in January 2008, the company announced a partnership with Chevron Technology Ventures to explore the commercialization of algal fuel. Later that year, the company stated that it had produced the world's first jet fuel derived from an algal source. In 2009, Solazyme was awarded approximately $22 million from the United States Department of Energy for the construction of an integrated biorefinery project. It also formed a contract with the United States Department of Defense to deliver microalgae-based marine (renewable F-76) diesel fuel to the United States Navy. 
In 2011, the company announced it had produced over 283,000 liters of military-spec diesel (HRF-76) for the United States Navy. The initial fuel production for phase 1 of a 550,000 liter contract was completed ahead of schedule. Also in 2011 Solazyme and United Airlines partnered to fly the first ever commercial flight on biofuels including the signing of an LOI for United to purchase 20 million gallons of Solajet fuel. In 2012, Solazyme partnered with the Navy for the Rim of the Pacific Exercises (RIMPAC) Great Green Fleet (GGF) demonstration in which numerous naval vessels and fighter jets ran on a blend of traditional and renewable fuels in the largest demonstration of its kind. Initial public offering In May 2011, Solazyme set terms for its initial public offering. The company planned to raise $160 million by offering 10 million shares at a price range of $15 to $17 and ended up selling 7,901,800 shares for $20 each in its first day of trading. Investment banking and securities firm Goldman Sachs reported in July 2011 that with the commercialization of new oil products, Solazyme stock had become less risky. The bank initiated coverage with a top rating and $31 target. In a note to clients, it said Solazyme () stood to boost sales and become more stable now that it had partnered with major agribusinesses like Bunge Limited. Joint venture with Bunge Limited On 8 August 2011, Solazyme announced a joint venture with agribusiness company Bunge Limited to develop renewable oils in Brazil using Solazyme's algae-based sugar-to-oil technology. In April 2012, Solazyme and Bunge announced a plan to construct a shared commercial-scale production facility in Brazil. As one of the world's largest vegetable oil distributors, Bunge would supply the facility with sugarcane feedstock from its sugarcane processor in Brazil for use in Solazyme algae oil production process. Construction of the new facility began in June 2012 and in May 2014, the joint venture plant began oil production. In late 2015, Solazyme and Bunge announced an expansion of their joint venture, which included an agreement to have Bunge market the food oils produced through the joint venture. The production of AlgaPrime DHA was announced in May 2016 and is the first product under the joint venture of TerraVia and Bunge. AlgaPrime DHA is a new algae-based specialty feed ingredient designed to reduce the aquaculture industry's dependence on wild fish populations. Products and brands Food In 2010, Solazyme launched its first products, the Golden Chlorella line of dietary supplements, as part of a market development initiative. Products incorporating Golden Chlorella could be found at retailers including Whole Foods Market and General Nutrition Centers. AlgaVia and AlgaWise supply algae-based ingredients to food manufacturers, such as South Coast Baking Company, Follow Your Heart, and So Delicious Dairy Free. TerraVia is also responsible for the Thrive Algae Oil brand. Thrive is marketed as "The Best Oil For Your Heart" due to its high levels of monounsaturated fats, which are known to decrease bad cholesterol and reduce the risk of heart disease and stroke. Thrive has been praised by food industry professionals for its neutral taste, versatility, and high smoke point. The Gelson's grocery chain began supplying Thrive to the Los Angeles area in 2015. Due to its success in this market, TerraVia announced it would expand the distribution of Thrive throughout the West Coast in May 2016. 
In 2024 the original Solazyme/TerraVia team relaunched Thrive Algae oil. Aquaculture As part of its joint venture with Bunge Limited, Solazyme/TerraVia supplied aquaculture feed producer BioMar with an algae-based feed ingredient known as AlgaPrime DHA. AlgaPrime DHA is a source of DHA omega-3 fatty acid. which is intended to reduce the aquaculture industry's dependence on wild fish as a source of DHA. AlgaPrime DHA therefore has the potential to boost the sustainability of aquaculture and allow the aquaculture industry to grow in spite of the world's fixed supply of ocean-based DHA. As of 2024, Corbion’s AlgaPrime™ DHA (originally developed and commercialized by Solazyme) is the world’s leading source of algae Omega-3. Personal care TerraVia currently manufactures the Algenist brand for the luxury skin care market through marketing and distribution arrangements with Sephora and QVC. It is sold in Canada, France, and the United Kingdom. The idea of using algae for a skincare line came from Arthur Grossman, the Chief of Genetics at Solazyme and Staff Scientist at the Carnegie Institution. Grossman is an expert in algal photosynthesis and has studied how algae protect themselves from harsh environments and reasoned they could be used to protect human skin. TerraVia also markets its AlgaPūr Algae Oil brand to personal care producers. Unilever, a leading consumer goods company, is one of TerraVia's biggest partners. In 2010, Solazyme and Unilever started its partnership to develop renewable algae oils for use in soaps and other personal care products. In 2011, Unilever funded TerraVia's research and development efforts, which ended in Unilever agreeing to the terms of a multi-year supply agreement. In September 2013 Unilever agreed to purchase 3 million gallons of algae oil. In March 2016 the companies signed a multi-year supply agreement, where Unilever has agreed to purchase over $200 million worth of a broad portfolio of renewable algae oils. Solazyme Industrials The company has formerly engaged in development activities with Chevron, Dow Chemical, Ecopetrol, Qantas, and Unilever. Additionally, Solazyme started a brand of industrial drilling lubricant known as Encapso. Scientists were able to harness the prolific oil-producing capabilities of microalgae to create a first-of-its kind product, microencapsulated oil cells that burst only under sufficient pressure, friction, and shear. As part of the change in focus, TerraVia stated that its fuels, industrial oils, and Encapso business would be grouped together under Solazyme Industrials starting in March 2016. Furthermore, "The company will be pursuing strategic alternatives over the next 12–18 months to unlock the value created [in Solazyme Industrials]. Solazyme's objective is to identify partners who have the operational capabilities needed to realize the potential of those businesses." TerraVia Solazyme officially changed its name to TerraVia Holdings Inc. in March 2016 with a redefined focus on food, nutrition, and personal care. As part of the change, the company stated that its previous fuel and industrial oil products and workings would operate under Solazyme Industrials. Bankruptcy filing and sale On 2 August 2017, TerraVia filed for bankruptcy protection under Chapter 11. The company announced a "stalking horse" offer from Corbion, N.V., a Netherlands food and biochemical company. The offer valued TerraVia's assets at $20 million plus the assumption of debt associated with the Brazilian manufacturing facility. 
By comparison, Solazyme's IPO raised over $197 million. The sale closed in September of 2017 and by 2023 the legacy Solazyme technology and products were the fastest growing part of Corbion’s portfolio, growing at over 40% yoy with over €110M in sales. Sustainability principles and commitments Principles and sustainability TerraVia sets a goal of providing replacements for unsustainable products like palm and soybean oil. The brand's food ingredients seek to provide a healthy alternative to animal-based lipids and proteins. According to its sustainability report, TerraVia states that "transparency is central to all of our sustainability principles." TerraVia supports this goal with a number of commitments, which include engaging with and asking for stakeholders' inputs, publishing product data in peer-reviewed journals, and openly communicating about the production of its products. As part of this transparency, TerraVia has made information regarding the environmental impacts of its products publicly available. Life cycle analysis TerraVia had the third party organization Thinkstep conduct life cycle analysis on its products in order to assess their environmental impacts in comparison to other oils. The results show that algae oil production often has significantly lower environmental impacts in comparison to traditional oil sources. A life cycle analysis conducted in 2016 compared the sugarcane-fed algae oil produced at the Solazyme Bunge joint venture facility in Brazil against soybean, tallow, palm kernel, olive, palm, canola, and sunflower oil in terms of its carbon emissions and water consumption. Thinkstep found that the algae oil was on par with sunflower oil in terms of global warming potential per kilogram of oil produced. All other oils compared had higher carbon emissions. Algae oil also had the second lowest impact in water consumption, second to canola oil. The company also analyzed the land impact of its algae oil compared to traditional oils. It found that the algae oil is comparable to palm oil in land use efficiency, with both algae and palm yielding more than six times the amount of oil per hectare than any other oil producing crop. The Thinkstep life cycle analysis highlights the advantages of producing algae oil over traditional oil sources in regard to the environmental impacts of oil production. Awards In 2016, Thrive Culinary Algae Oil won the Gama Innovation Award. Solazyme won the 2014 Presidential Green Chemistry Challenge Award. Solazyme's Algenist skincare brand won the 2014 Marie Claire Prix d'Excellence de la Beaute in France. World Economic Forum named Solazyme as one of their 2012 Technology Pioneers. Inc. Magazine named Solazyme "America's Fastest-Growing Manufacturing Company" in its September 2011 issue. In June 2011 Solazyme won San Francisco Business Times Cleantech and Sustainability Award in the Fuels, Chemicals, and Specialty Products category. BREATHE California awarded Solazyme the 2010 Clean Air Award for their renewable oil production technology which significantly reduces greenhouse gas emissions. Solazyme won the No. 1 Sustainable Biofuels Technology award at the World Biofuels Markets Conference in 2010. Solazyme was ranked No. 1 among the 2009–2010 "50 Hottest Companies in Bioenergy" in Biofuels Digest. Solazyme was selected by AlwaysOn as a GoingGreen 100 Top Private Company Award Winner for 2007. 
See also List of meat substitutes References External links TerraVia website Algal fuel producers Companies based in South San Francisco, California Biotechnology companies established in 2003 American companies established in 2003 Algae biomass producers 2011 initial public offerings Companies that filed for Chapter 11 bankruptcy in 2017
TerraVia
[ "Engineering", "Biology" ]
2,921
[ "Synthetic biology", "Algae biomass producers", "Genetic engineering" ]
15,378,722
https://en.wikipedia.org/wiki/DASB
DASB, also known as 3-amino-4-(2-dimethylaminomethylphenylsulfanyl)-benzonitrile, is a compound that binds to the serotonin transporter. Labeled with carbon-11 — a radioactive isotope — it has been used as a radioligand in neuroimaging with positron emission tomography (PET) since around year 2000. In this context it is regarded as one of the superior radioligands for PET study of the serotonin transporter in the brain, since it has high selectivity for the serotonin transporter. The DASB image from a human PET scan shows high binding in the midbrain, thalamus and striatum, moderate binding in the medial temporal lobe and anterior cingulate, and low binding in neocortex. The cerebellum is often regarded as a region with no specific serotonin transporter binding and the brain region is used as a reference in some studies. Since the serotonin transporter is the target of SSRIs used in the treatment of major depression it has been natural to examine DASB binding in depressed patients. Several such research studies have been performed. There are a number of alternative PET radioligands for imaging the serotonin transporter: [11C]ADAM, [11C]MADAM, [11C]AFM, [11C]DAPA, [11C]McN5652, and [11C]-NS 4194. A related molecule to DASB, that can be labeled with fluorine-18, has also been suggested as a PET radioligand. With single-photon emission computed tomography (SPECT) using the radioisotope iodine-123 there are further radioligands available: [123I]ODAM, [123I]IDAM, [123I]ADAM, and [123I]β-CIT. A few studies have examined the difference in binding between the radioligands in nonhuman primates, as well as in pigs. Other compounds that can be labeled to work as PET radioligands for the study of the serotonin system are, e.g., altanserin and WAY-100635. Methodological issues The binding potential of DASB can be estimated with kinetic modeling on a series of brain scans. A test-retest reproducibility PET study indicates that [11C]DASB can be used to measure the serotonin transporter parameters with high reliability in receptor-rich brain regions. When the DASB neuroimages are analyzed the kinetic models suggested by Ichise and coworkers can be employed to estimate the binding potential. A test-retest reproducibility experiment has been performed to evaluate this approach. Studies Besides the studies listed below a few occupancy studies have been reported. References Treatment of bipolar disorder Nitriles Thioethers PET radiotracers
DASB
[ "Chemistry" ]
617
[ "Medicinal radiochemistry", "Functional groups", "PET radiotracers", "Chemicals in medicine", "Nitriles" ]
15,380,061
https://en.wikipedia.org/wiki/Water%20scarcity
Water scarcity (closely related to water stress or water crisis) is the lack of fresh water resources to meet the standard water demand. There are two types of water scarcity. One is physical. The other is economic water scarcity. Physical water scarcity is where there is not enough water to meet all demands. This includes water needed for ecosystems to function. Regions with a desert climate often face physical water scarcity. Central Asia, West Asia, and North Africa are examples of arid areas. Economic water scarcity results from a lack of investment in infrastructure or technology to draw water from rivers, aquifers, or other water sources. It also results from weak human capacity to meet water demand. Many people in Sub-Saharan Africa are living with economic water scarcity. There is enough freshwater available globally and averaged over the year to meet demand. As such, water scarcity is caused by a mismatch between when and where people need water, and when and where it is available. This can happen due to an increase in the number of people in a region, changing living conditions and diets, and expansion of irrigated agriculture. Climate change (including droughts or floods), deforestation, water pollution and wasteful use of water can also mean there is not enough water. These variations in scarcity may also be a function of prevailing economic policy and planning approaches. Water scarcity assessments look at many types of information. They include green water (soil moisture), water quality, environmental flow requirements, and virtual water trade. Water stress is one parameter to measure water scarcity. It is useful in the context of Sustainable Development Goal 6. Half a billion people live in areas with severe water scarcity throughout the year, and around four billion people face severe water scarcity at least one month per year. Half of the world's largest cities experience water scarcity. There are 2.3 billion people who reside in nations with water scarcities (meaning less than 1700 m3 of water per person per year). There are different ways to reduce water scarcity. It can be done through supply and demand side management, cooperation between countries and water conservation. Expanding sources of usable water can help. Reusing wastewater and desalination are ways to do this. Others are reducing water pollution and changes to the virtual water trade. Definitions Water scarcity has been defined as the "volumetric abundance, or lack thereof, of freshwater resources" and it is thought to be "human-driven". This can also be called "physical water scarcity". There are two types of water scarcity. One is physical water scarcity and the other is economic water scarcity. Some definitions of water scarcity look at environmental water requirements. This approach varies from one organization to another. Related concepts are water stress and water risk. The CEO Water Mandate, an initiative of the UN Global Compact, proposed to harmonize these in 2014. In their discussion paper they state that these three terms should not be used interchangeably. Some organizations define water stress as a broader concept. It would include aspects of water availability, water quality and accessibility. Accessibility depends on existing infrastructure. It also depends on whether customers can afford to pay for the water. Some experts call this economic water scarcity. The FAO defines water stress as the "symptoms of water scarcity or shortage". 
Such symptoms could be "growing conflict between users, and competition for water, declining standards of reliability and service, harvest failures and food insecurity". This is measured with a range of Water Stress Indices. A group of scientists provided another definition for water stress in 2016: "Water stress refers to the impact of high water use (either withdrawals or consumption) relative to water availability." This means water stress would be a demand-driven scarcity. Types Experts have defined two types of water scarcity. One is physical water scarcity. The other is economic water scarcity. These terms were first defined in a 2007 study led by the International Water Management Institute. This examined the use of water in agriculture over the previous 50 years. It aimed to find out if the world had sufficient water resources to produce food for the growing population in the future. Physical water scarcity Physical water scarcity occurs when natural water resources are not enough to meet all demands. This includes water needed for ecosystems to function well. Dry regions often suffer from physical water scarcity. Human influence on climate has intensified water scarcity in areas where it was already a problem. It also occurs where water seems abundant but where resources are over-committed. One example is overdevelopment of hydraulic infrastructure. This can be for irrigation or energy generation. There are several symptoms of physical water scarcity. They include severe environmental degradation, declining groundwater and water allocations favouring some groups over others. Experts have proposed another indicator. This is called ecological water scarcity. It considers water quantity, water quality, and environmental flow requirements. Water is scarce in densely populated arid areas. These are projected to have less than 1000 cubic meters available per capita per year. Examples are Central and West Asia, and North Africa). A study in 2007 found that more than 1.2 billion people live in areas of physical water scarcity. This water scarcity relates to water available for food production, rather than for drinking water which is a much smaller amount. Some academics favour adding a third type which would be called ecological water scarcity. It would focus on the water demand of ecosystems. It would refer to the minimum quantity and quality of water discharge needed to maintain sustainable and functional ecosystems. Some publications argue that this is simply part of the definition of physical water scarcity. Economic water scarcity Economic water scarcity is due to a lack of investment in infrastructure or technology to draw water from rivers, aquifers, or other water sources. It also reflects insufficient human capacity to meet the demand for water. It causes people without reliable water access to travel long distances to fetch water for household and agricultural uses. Such water is often unclean. The United Nations Development Programme says economic water scarcity is the most common cause of water scarcity. This is because most countries or regions have enough water to meet household, industrial, agricultural, and environmental needs. But they lack the means to provide it in an accessible manner. Around a fifth of the world's population currently live in regions affected by physical water scarcity. A quarter of the world's population is affected by economic water scarcity. It is a feature of much of Sub-Saharan Africa. 
So better water infrastructure there could help to reduce poverty. Investing in water retention and irrigation infrastructure would help increase food production. This is especially the case for developing countries that rely on low-yield agriculture. Providing water that is adequate for consumption would also benefit public health. This is not only a question of new infrastructure. Economic and political interventions are necessary to tackle poverty and social inequality. The lack of funding means there is a need for planning. The emphasis is usually on improving water sources for drinking and domestic purposes. But more water is used for purposes such as bathing, laundry, livestock and cleaning than drinking and cooking. This suggests that too much emphasis on drinking water addresses only part of the problem. So it can limit the range of solutions available. Challenges Simple indicators There are several indicators for measuring water scarcity. One is the water use to availability ratio. This is also known as the criticality ratio. Another is the IWMI Indicator. This measures physical and economic water scarcity. Another is the water poverty index. "Water stress" is a criterion to measure water scarcity. Experts use it in the context of Sustainable Development Goal 6. A report by the FAO in 2018 provided a definition of water stress. It described it as "the ratio between total freshwater withdrawn (TFWW) by all major sectors and total renewable freshwater resources (TRWR), after taking into account environmental flow requirements (EFR)". This means that the value of TFWW is divided by the difference between TRWR and EFR. Environmental flows are water flows required to sustain freshwater and estuarine ecosystems. A previous definition in Millennium Development Goal 7, target 7.A, was simply the proportion of total water resources used, without taking EFR into consideration. This definition sets out several categories for water stress. Below 10% is low stress; 10-20% is low-to-medium; 20-40% medium-to-high; 40-80% high; above 80% very high. Indicators are used to measure the extent of water scarcity. One way to measure water scarcity is to calculate the amount of water resources available per person each year. One example is the "Falkenmark Water Stress Indicator". This was developed by Malin Falkenmark. This indicator says a country or region experiences "water stress" when annual water supplies drop below 1,700 cubic meters per person per year. Levels between 1,700 and 1,000 cubic meters will lead to periodic or limited water shortages. When water supplies drop below 1,000 cubic meters per person per year the country faces "water scarcity". However, the Falkenmark Water Stress Indicator does not help to explain the true nature of water scarcity. Renewable freshwater resources It is also possible to measure water scarcity by looking at renewable freshwater. Experts use it when evaluating water scarcity. This metric can describe the total available water resources each country contains. This total available water resource gives an idea of whether a country tends to experience physical water scarcity. This metric has a drawback because it is an average. Precipitation delivers water unevenly across the planet each year. So annual renewable water resources vary from year to year. This metric does not describe how easy it is for individuals, households, industries or government to access water. Lastly this metric gives a description of a whole country.
So it does not accurately portray whether a country is experiencing water scarcity. For example, Canada and Brazil both have very high levels of available water supply. But they still face various water-related problems. Some tropical countries in Asia and Africa have low levels of freshwater resources. More sophisticated indicators Water scarcity assessments must include several types of information. They include data on green water (soil moisture), water quality, environmental flow requirements, globalisation, and virtual water trade. Since the early 2000s, water scarcity assessments have used more complex models. These benefit from spatial analysis tools. Green-blue water scarcity is one of these. Footprint-based water scarcity assessment is another. Another is cumulative abstraction to demand ratio, which considers temporal variations. Further examples are LCA-based water stress indicators and integrated water quantity–quality environment flow. Since the early 2010s assessments have looked at water scarcity from both quantity and quality perspectives. Experts have proposed a further indicator. This is called ecological water scarcity. It considers water quantity, water quality, and environmental flow requirements. Results from a modelling study in 2022 show that northern China suffered more severe ecological water scarcity than southern China. The driving factor of ecological water scarcity in most provinces was water pollution rather than human water use. A successful assessment will bring together experts from several scientific disciplines. These include the hydrological, water quality, aquatic ecosystem science, and social science communities. Available water The United Nations estimates that only 200,000 cubic kilometers of the total 1.4 billion cubic kilometers of water on Earth is freshwater available for human consumption. A mere 0.014% of all water on Earth is both fresh and easily accessible. Of the remaining water, 97% is saline, and a little less than 3% is difficult to access. The fresh water available to us on the planet is around 1% of the total water on Earth. The total amount of easily accessible freshwater on Earth is 14,000 cubic kilometers. This takes the form of surface water such as rivers and lakes or groundwater, for example in aquifers. Of this total amount, humanity uses and reuses just 5,000 cubic kilometers. Technically, there is a sufficient amount of freshwater on a global scale. So in theory there is more than enough freshwater available to meet the demands of the current world population of 8 billion people. There is even enough to support population growth to 9 billion or more. But unequal geographical distribution and unequal consumption of water make it a scarce resource in some regions and groups of people. Rivers and lakes provide common surface sources of freshwater. But other water resources such as groundwater and glaciers have become more developed sources of freshwater. They have become the main source of clean water. Groundwater is water that has pooled below the surface of the Earth. It can provide a usable quantity of water through springs or wells. These areas of groundwater are also known as aquifers. It is becoming harder to use conventional sources because of pollution and climate change. So people are drawing more and more on these other sources. Population growth is encouraging greater use of these types of water resources.
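The two indicator families described in the Simple indicators section above, the FAO withdrawal-to-availability water stress ratio and the Falkenmark per-capita thresholds, can be illustrated with a short sketch. This Python example is not from the article; all numbers and names in it are made-up assumptions used only to show how the classifications work.

```python
# Illustrative sketch of the FAO-style water stress ratio and the Falkenmark
# per-capita thresholds described above. All input values are invented.

def water_stress_percent(tfww_km3: float, trwr_km3: float, efr_km3: float) -> float:
    """FAO-style water stress: TFWW / (TRWR - EFR), expressed as a percentage."""
    return 100.0 * tfww_km3 / (trwr_km3 - efr_km3)

def falkenmark_category(m3_per_person_per_year: float) -> str:
    """Classify per-capita renewable water using the Falkenmark thresholds."""
    if m3_per_person_per_year < 1000:
        return "water scarcity"
    if m3_per_person_per_year < 1700:
        return "water stress"
    return "no stress"

stress = water_stress_percent(tfww_km3=30.0, trwr_km3=100.0, efr_km3=20.0)
print(f"Water stress: {stress:.0f}%")   # 38%, in the medium-to-high band (20-40%)
print(falkenmark_category(1500))        # between 1,000 and 1,700 m3 -> "water stress"
```

The sketch makes the difference between the two approaches visible: the FAO ratio depends on how much water is actually withdrawn relative to what remains after environmental flows, while the Falkenmark indicator only looks at how much renewable water exists per person.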
Scale Current estimates In 2019 the World Economic Forum listed water scarcity as one of the largest global risks in terms of potential impact over the next decade. Water scarcity can take several forms. One is a failure to meet demand for water, partially or totally. Other examples are economic competition for water quantity or quality, disputes between users, irreversible depletion of groundwater, and negative impacts on the environment. About half of the world's population currently experience severe water scarcity for at least some part of the year. Half a billion people in the world face severe water scarcity all year round. Half of the world's largest cities experience water scarcity. Almost two billion people do not currently have access to clean drinking water. A study in 2016 calculated that the number of people suffering from water scarcity increased from 0.24 billion or 14% of global population in the 1900s to 3.8 billion (58%) in the 2000s. This study used two concepts to analyse water scarcity. One is shortage, or impacts due to low availability per capita. The other is stress, or impacts due to high consumption relative to availability. Future predictions In the 20th century, water use has been growing at more than twice the rate of the population increase. Specifically, water withdrawals are likely to rise by 50 percent by 2025 in developing countries, and 18 per cent in developed countries. One continent, for example, Africa, has been predicted to have 75 to 250 million inhabitants lacking access to fresh water. By 2025, 1.8 billion people will be living in countries or regions with absolute water scarcity, and two-thirds of the world population could be under stress conditions. By 2050, more than half of the world's population will live in water-stressed areas, and another billion may lack sufficient water, MIT researchers find. With the increase in global temperatures and an increase in water demand, six out of ten people are at risk of being water-stressed. The drying out of wetlands globally, at around 67%, was a direct cause of a large number of people at risk of water stress. As global demand for water increases and temperatures rise, it is likely that two thirds of the population will live under water stress in 2025. According to a projection by the United Nations, by 2040, there can be about 4.5 billion people affected by a water crisis (or water scarcity). Additionally, with the increase in population, there will be a demand for food, and for the food output to match the population growth, there would be an increased demand for water to irrigate crops. The World Economic Forum estimates that global water demand will surpass global supply by 40% by 2030. Increasing the water demand as well as increasing the population results in a water crisis where there is not enough water to share in healthy levels. The crises are not only due to quantity but quality also matters. A study found that 6-20% of about 39 million groundwater wells are at high risk of running dry if local groundwater levels decline by a few meters. In many areas and with possibly more than half of major aquifers this would apply if they simply continue to decline. Impacts Water supply shortages Controllable factors such as the management and distribution of the water supply can contribute to scarcity. A 2006 United Nations report focuses on issues of governance as the core of the water crisis. The report noted that: "There is enough water for everyone". 
It also said: "Water insufficiency is often due to mismanagement, corruption, lack of appropriate institutions, bureaucratic inertia and a shortage of investment in both human capacity and physical infrastructure". Economists and others have argued that a lack of property rights, government regulations and water subsidies have given rise to this situation. These factors cause prices to be too low and consumption too high, making a point for water privatization. The clean water crisis is an emerging global crisis affecting approximately 785 million people around the world. 1.1 billion people lack access to water and 2.7 billion experience water scarcity at least one month in a year. 2.4 billion people suffer from contaminated water and poor sanitation. Contamination of water can lead to deadly diarrheal diseases such as cholera and typhoid fever and other waterborne diseases. These account for 80% of illnesses around the world. Environment Using water for domestic, food and industrial uses has major impacts on ecosystems in many parts of the world. This can apply even to regions not considered "water scarce". Water scarcity damages the environment in many ways. These include adverse effects on lakes, rivers, ponds, wetlands and other fresh water resources. It also results in overuse of the water that is available. This often occurs in areas of irrigation agriculture. It can harm the environment in several ways. This includes increased salinity, nutrient pollution, and the loss of floodplains and wetlands. Water scarcity also makes it harder to use flow to rehabilitate urban streams. Over the last hundred years, more than half of the Earth's wetlands have been destroyed and have disappeared. These wetlands are important as the habitats of numerous creatures such as mammals, birds, fish, amphibians, and invertebrates. They also support the growing of rice and other food crops. And they provide water filtration and protection from storms and flooding. Lakes such as the Aral Sea in central Asia have also suffered. It was once the fourth largest lake in the world. But it has lost more than 58,000 square km of area and vastly increased in salt concentration over the span of three decades. Subsidence is another result of water scarcity. The U.S. Geological Survey estimates that subsidence has affected more than 17,000 square miles in 45 U.S. states, 80 percent of it due to groundwater usage. Vegetation and wildlife need sufficient freshwater. Marshes, bogs and riparian zones are more clearly dependent upon a sustainable water supply. Forests and other upland ecosystems are equally at risk as water becomes less available. In the case of wetlands, a lot of ground has been simply taken from wildlife use to feed and house the expanding human population. Other areas have also suffered from a gradual fall in freshwater inflow as upstream water is diverted for human use. Potential for conflict Other impacts include growing conflict between users and growing competition for water. Examples of the potential for conflict arising from water scarcity include: Food insecurity in the Middle East and North Africa Region and regional conflicts over scarce water resources. Causes and contributing factors Population growth Around fifty years ago, the common view was that water was an infinite resource. At that time, there were fewer than half the current number of people on the planet.
People were not as wealthy as today, consumed fewer calories and ate less meat, so less water was needed to produce their food. They required a third of the volume of water we presently take from rivers. Today, the competition for water resources is much more intense. This is because there are now seven billion people on the planet and their consumption of water-thirsty meat is rising. And industry, urbanization, biofuel crops, and water-reliant food items are competing more and more for water. In the future, even more water will be needed to produce food because the Earth's population is forecast to rise to 9 billion by 2050. In 2000, the world population was 6.2 billion. The UN estimates that by 2050 there will be an additional 3.5 billion people, with most of the growth in developing countries that already suffer water stress. This will increase demand for water unless there are corresponding increases in water conservation and recycling. Building on the data presented by the UN, the World Bank goes on to explain that access to water for producing food will be one of the main challenges in the decades to come. It will be necessary to balance access to water with managing water in a sustainable way. At the same time it will be necessary to take the impact of climate change and other environmental and social variables into account. In 60% of European cities with more than 100,000 people, groundwater is being used at a faster rate than it can be replenished. Over-exploitation of groundwater The increase in the number of people is increasing competition for water. This is depleting many of the world's major aquifers. It has two causes. One is direct human consumption. The other is agricultural irrigation. Millions of pumps of all sizes are currently extracting groundwater throughout the world. Irrigation in dry areas such as northern China, Nepal and India draws on groundwater. And it is extracting groundwater at an unsustainable rate. Many cities have experienced aquifer drops of between 10 and 50 meters. They include Mexico City, Bangkok, Beijing, Chennai and Shanghai. Until recently, groundwater was not a highly used resource. In the 1960s, more and more groundwater aquifers were developed. Improved knowledge, technology and funding have made it possible to focus more on drawing water from groundwater resources instead of surface water. These developments made the agricultural groundwater revolution possible. They expanded the irrigation sector which made it possible to increase food production and development in rural areas. Groundwater supplies nearly half of all drinking water in the world. The large volumes of water stored underground in most aquifers have a considerable buffer capacity. This makes it possible to withdraw water during periods of drought or little rainfall. This is crucial for people who live in regions that cannot depend on precipitation or surface water alone for their supplies. It provides reliable access to water all year round. As of 2010, the world's aggregated groundwater abstraction is estimated at 1,000 km3 per year. Of this, 67% goes on irrigation, 22% on domestic purposes and 11% on industrial purposes. The top ten major consumers of abstracted water make up 72% of all abstracted water use worldwide. They are India, China, the United States of America, Pakistan, Iran, Bangladesh, Mexico, Saudi Arabia, Indonesia, and Italy. Groundwater sources are quite plentiful. But one major area of concern is the renewal or recharge rate of some groundwater sources.
Extracting from non-renewable groundwater sources could exhaust them if they are not properly monitored and managed. Increasing use of groundwater can also reduce water quality over time. Groundwater systems often show falls in natural outflows, stored volumes, and water levels as well as water degradation. Groundwater depletion can cause harm in many ways. These include more costly groundwater pumping and changes in salinity and other types of water quality. They can also lead to land subsidence, degraded springs and reduced baseflows. Expansion of agricultural and industrial users The main cause of water scarcity as a result of consumption is the extensive use of water in agriculture/livestock breeding and industry. People in developed countries generally use about 10 times more water a day than people in developing countries. A large part of this is indirect use in water-intensive agricultural and industrial production of consumer goods. Examples are fruit, oilseed crops and cotton. Many of these production chains are globalized, so a lot of water consumption and pollution in developing countries occurs to produce goods for consumption in developed countries. Many aquifers have been over-pumped and are not recharging quickly. This does not use up the total fresh water supply. But it means that much has become polluted, salted, unsuitable or otherwise unavailable for drinking, industry and agriculture. To avoid a global water crisis, farmers will have to increase productivity to meet growing demands for food. At the same time, industry and cities will have to find ways to use water more efficiently. Business activities such as tourism are continuing to expand. They create a need for increases in water supply and sanitation. This in turn can lead to more pressure on water resources and natural ecosystems. The approximate 50% growth in world energy use by 2040 will also increase the need for efficient water use. It may mean that some water use shifts from irrigation to industry. This is because thermal power generation uses water for steam generation and cooling. Water pollution Climate change Climate change could have a big impact on water resources around the world because of the close connections between the climate and the hydrological cycle. Rising temperatures will increase evaporation and lead to increases in precipitation. However, there will be regional variations in rainfall. Both droughts and floods may become more frequent and more severe in different regions at different times. There will be generally less snowfall and more rainfall in a warmer climate. Changes in snowfall and snow melt in mountainous areas will also take place. Higher temperatures will also affect water quality in ways that scientists do not fully understand. Possible impacts include increased eutrophication. Climate change could also boost demand for irrigation systems in agriculture. There is now ample evidence that greater hydrologic variability and climate change have had a profound impact on the water sector, and will continue to do so. This will show up in the hydrologic cycle, water availability, water demand, and water allocation at the global, regional, basin, and local levels. The United Nations' FAO states that by 2025 1.9 billion people will live in countries or regions with absolute water scarcity. It says two thirds of the world's population could be under stress conditions. The World Bank says that climate change could profoundly alter future patterns of water availability and use.
This will make water stress and insecurity worse, at the global level and in sectors that depend on water. Scientists have found that population change is four times more important than long-term climate change in its effects on water scarcity. Retreat of mountain glaciers Options for improvements Supply and demand side management A review in 2006 stated that "It is surprisingly difficult to determine whether water is truly scarce in the physical sense at a global scale (a supply problem) or whether it is available but should be used better (a demand problem)". The International Resource Panel of the UN states that governments have invested heavily in inefficient solutions. These are mega-projects like dams, canals, aqueducts, pipelines and water reservoirs. They are generally neither environmentally sustainable nor economically viable. According to the panel, the most cost-effective way of decoupling water use from economic growth is for governments to create holistic water management plans. These would take into account the entire water cycle: from source to distribution, economic use, treatment, recycling, reuse and return to the environment. In general, there is enough water on an annual and global scale. The issue is more one of variation in supply by time and by region. Reservoirs and pipelines would deal with this variable water supply. Well-planned infrastructure with demand side management is necessary. Both supply-side and demand-side management have advantages and disadvantages. Co-operation between countries Lack of cooperation may give rise to regional water conflicts. This is especially the case in developing countries. The main reason is disputes regarding the availability, use and management of water. One example is the dispute between Egypt and Ethiopia over the Grand Ethiopian Renaissance Dam, which escalated in 2020. Egypt sees the dam as an existential threat, fearing that the dam will reduce the amount of water it receives from the Nile. Water conservation Expanding sources of usable water Wastewater treatment and reclaimed water Desalination Virtual water trade Regional examples Overview of regions The Consultative Group on International Agricultural Research (CGIAR) published a map showing the countries and regions suffering the most water stress. They are North Africa, the Middle East, India, Central Asia, China, Chile, Colombia, South Africa, Canada and Australia. Water scarcity is also increasing in South Asia. As of 2016, about four billion people, or two thirds of the world's population, were facing severe water scarcity. The more developed countries of North America, Europe and Russia will not see a serious threat to water supply by 2025 in general. This is not only because of their relative wealth. Their populations will also be more in line with available water resources. North Africa, the Middle East, South Africa and northern China will face very severe water shortages. This is due to physical scarcity and too many people for the water that is available. Most of South America, Sub-Saharan Africa, southern China and India will face water supply shortages by 2025. For these regions, scarcity will be due to economic constraints on developing safe drinking water, and excessive population growth. Africa West Africa and North Africa Water scarcity in Yemen (see: Water supply and sanitation in Yemen) is a growing problem. Population growth and climate change are among the causes.
Others are poor water management, shifts in rainfall, water infrastructure deterioration, poor governance, and other anthropogenic effects. As of 2011, water scarcity is having political, economic and social impacts in Yemen. As of 2015, Yemen is one of the countries suffering most from water scarcity. Most people in Yemen experience water scarcity for at least one month a year. In Nigeria, some reports have suggested that increases in extreme heat, drought and the shrinking of Lake Chad are causing water shortages and environmental migration. This is forcing thousands to migrate to neighboring Chad and to other towns. Asia A major report in 2019 by more than 200 researchers found that the Himalayan glaciers could lose 66 percent of their ice by 2100. These glaciers are the sources of Asia's biggest rivers – Ganges, Indus, Brahmaputra, Yangtze, Mekong, Salween and Yellow. Approximately 2.4 billion people live in the drainage basin of the Himalayan rivers. India, China, Pakistan, Bangladesh, Nepal and Myanmar could experience floods followed by droughts in coming decades. In India alone, the Ganges provides water for drinking and farming for more than 500 million people. Even with the overpumping of its aquifers, China is developing a grain deficit. When this happens, it will almost certainly drive grain prices upward. Most of the 3 billion people projected to be added worldwide by mid-century will be born in countries already experiencing water shortages. Unless population growth can be slowed quickly, it is feared that there may not be a practical non-violent or humane solution to the emerging world water shortage. It is highly likely that climate change in Turkey will cause its southern river basins to be water scarce before 2070 and will increase drought in Turkey. America In the Rio Grande Valley, intensive agribusiness has made water scarcity worse. It has sparked jurisdictional disputes regarding water rights on both sides of the U.S.-Mexico border. Scholars such as Mexico's Armand Peschard-Sverdrup have argued that this tension has created the need for new strategic transnational water management. Some have likened the disputes to a war over diminishing natural resources. The west coast of North America, which gets much of its water from glaciers in mountain ranges such as the Rocky Mountains and Sierra Nevada, is also vulnerable. Australia By far the largest part of Australia is desert or semi-arid lands commonly known as the outback. Water restrictions are in place in many regions and cities of Australia in response to chronic shortages resulting from drought. Environmentalist Tim Flannery predicted that Perth in Western Australia could become the world's first ghost metropolis. This would mean it was an abandoned city with no more water to sustain its population, said Flannery, who was Australian of the Year 2007. In 2010, Perth suffered its second-driest winter on record and the water corporation tightened water restrictions for spring. Some countries have already proven that decoupling water use from economic growth is possible. For example, in Australia, water consumption declined by 40% between 2001 and 2009 while the economy grew by more than 30%. By country Water scarcity or water crisis in particular countries: Society and culture Global goals Sustainable Development Goal 6 aims for clean water and sanitation for all. It is one of 17 Sustainable Development Goals established by the United Nations General Assembly in 2015. The fourth target of SDG 6 refers to water scarcity.
It states: "By 2030, substantially increase water-use efficiency across all sectors and ensure sustainable withdrawals and supply of freshwater to address water scarcity and substantially reduce the number of people suffering from water scarcity". See also References External links The World Bank's work and publications on water resources Climate change adaptation Environmental economics Environmental issues with water Global natural environment Risk management Water Water supply Water treatment Human impact on the environment
Water scarcity
[ "Chemistry", "Engineering", "Environmental_science" ]
6,793
[ "Hydrology", "Water", "Water treatment", "Water pollution", "Environmental economics", "Environmental engineering", "Water technology", "Environmental social science", "Water supply" ]
19,296,207
https://en.wikipedia.org/wiki/Hanna%20Neumann%20conjecture
In the mathematical subject of group theory, the Hanna Neumann conjecture is a statement about the rank of the intersection of two finitely generated subgroups of a free group. The conjecture was posed by Hanna Neumann in 1957. In 2011, a strengthened version of the conjecture (see below) was proved independently by Joel Friedman and by Igor Mineyev. In 2017, a third proof of the Strengthened Hanna Neumann conjecture, based on homological arguments inspired by pro-p-group considerations, was published by Andrei Jaikin-Zapirain. History The subject of the conjecture was originally motivated by a 1954 theorem of Howson who proved that the intersection of any two finitely generated subgroups of a free group is always finitely generated, that is, has finite rank. In this paper Howson proved that if H and K are subgroups of a free group F(X) of finite ranks n ≥ 1 and m ≥ 1 then the rank s of H ∩ K satisfies: s − 1 ≤ 2mn − m − n. In a 1956 paper Hanna Neumann improved this bound by showing that: s − 1 ≤ 2mn − 2m − n. In a 1957 addendum, Hanna Neumann further improved this bound to show that under the above assumptions s − 1 ≤ 2(m − 1)(n − 1). She also conjectured that the factor of 2 in the above inequality is not necessary and that one always has s − 1 ≤ (m − 1)(n − 1). This statement became known as the Hanna Neumann conjecture. Formal statement Let H, K ≤ F(X) be two nontrivial finitely generated subgroups of a free group F(X) and let L = H ∩ K be the intersection of H and K. The conjecture says that in this case rank(L) − 1 ≤ (rank(H) − 1)(rank(K) − 1). Here for a group G the quantity rank(G) is the rank of G, that is, the smallest size of a generating set for G. Every subgroup of a free group is known to be free itself and the rank of a free group is equal to the size of any free basis of that free group. Strengthened Hanna Neumann conjecture If H, K ≤ G are two subgroups of a group G and if a, b ∈ G define the same double coset HaK = HbK then the subgroups H ∩ aKa−1 and H ∩ bKb−1 are conjugate in G and thus have the same rank. It is known that if H, K ≤ F(X) are finitely generated subgroups of a finitely generated free group F(X) then there exist at most finitely many double coset classes HaK in F(X) such that H ∩ aKa−1 ≠ {1}. Suppose that at least one such double coset exists and let a1,...,an be all the distinct representatives of such double cosets. The strengthened Hanna Neumann conjecture, formulated by her son Walter Neumann (1990), states that in this situation (rank(H ∩ a1Ka1−1) − 1) + ... + (rank(H ∩ anKan−1) − 1) ≤ (rank(H) − 1)(rank(K) − 1). The strengthened Hanna Neumann conjecture was proved in 2011 by Joel Friedman. Shortly after, another proof was given by Igor Mineyev. Partial results and other generalizations In 1971 Burns improved Hanna Neumann's 1957 bound and proved that under the same assumptions as in Hanna Neumann's paper one has s ≤ 2mn − 3m − 2n + 4. In a 1990 paper, Walter Neumann formulated the strengthened Hanna Neumann conjecture (see statement above). Tardos (1992) established the strengthened Hanna Neumann Conjecture for the case where at least one of the subgroups H and K of F(X) has rank two. As with most other approaches to the Hanna Neumann conjecture, Tardos used the technique of Stallings subgroup graphs for analyzing subgroups of free groups and their intersections. Warren Dicks (1994) established the equivalence of the strengthened Hanna Neumann conjecture and a graph-theoretic statement that he called the amalgamated graph conjecture.
Arzhantseva (2000) proved that if H is a finitely generated subgroup of infinite index in F(X), then, in a certain statistical meaning, for a generic finitely generated subgroup K in F(X), we have H ∩ gKg−1 = {1} for all g in F(X). Thus, the strengthened Hanna Neumann conjecture holds for every H and a generic K. In 2001 Dicks and Formanek established the strengthened Hanna Neumann conjecture for the case where at least one of the subgroups H and K of F(X) has rank at most three. Khan (2002) and, independently, Meakin and Weil (2002), showed that the conclusion of the strengthened Hanna Neumann conjecture holds if one of the subgroups H, K of F(X) is positively generated, that is, generated by a finite set of words that involve only elements of X but not of X−1 as letters. Ivanov and Dicks and Ivanov obtained analogs and generalizations of Hanna Neumann's results for the intersection of subgroups H and K of a free product of several groups. Wise (2005) claimed that the strengthened Hanna Neumann conjecture implies another long-standing group-theoretic conjecture which says that every one-relator group with torsion is coherent (that is, every finitely generated subgroup in such a group is finitely presented). See also Geometric group theory References Group theory Geometric group theory Conjectures that have been proved
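For reference, the ordinary and strengthened inequalities discussed above can be written out in display form. The LaTeX below is a sketch of the standard formulations, using the double coset representatives a1, ..., an introduced in the section on the strengthened conjecture; it is a restatement for convenience, not additional material from the cited works.

% Hanna Neumann conjecture (now a theorem): for nontrivial finitely generated
% subgroups H, K of a free group F(X),
\[
  \operatorname{rank}(H \cap K) - 1
  \le \bigl(\operatorname{rank}(H) - 1\bigr)\bigl(\operatorname{rank}(K) - 1\bigr).
\]
% Strengthened version: summing over representatives a_1, ..., a_n of the
% double cosets HaK for which H \cap a K a^{-1} is nontrivial,
\[
  \sum_{i=1}^{n} \Bigl(\operatorname{rank}\bigl(H \cap a_i K a_i^{-1}\bigr) - 1\Bigr)
  \le \bigl(\operatorname{rank}(H) - 1\bigr)\bigl(\operatorname{rank}(K) - 1\bigr).
\]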
Hanna Neumann conjecture
[ "Physics", "Mathematics" ]
1,113
[ "Mathematical theorems", "Geometric group theory", "Group actions", "Group theory", "Fields of abstract algebra", "Conjectures that have been proved", "Mathematical problems", "Symmetry" ]
19,296,323
https://en.wikipedia.org/wiki/%28Cymene%29ruthenium%20dichloride%20dimer
(Cymene)ruthenium dichloride dimer is the organometallic compound with the formula [(cymene)RuCl2]2. This red-coloured, diamagnetic solid is a reagent in organometallic chemistry and homogeneous catalysis. The complex is structurally similar to (benzene)ruthenium dichloride dimer. Preparation and reactions The dimer is prepared by the reaction of α-phellandrene with hydrated ruthenium trichloride. At high temperatures, [(cymene)RuCl2]2 exchanges with other arenes: [(cymene)RuCl2]2 + 2 C6Me6 → [(C6Me6)RuCl2]2 + 2 cymene (Cymene)ruthenium dichloride dimer reacts with Lewis bases to give monometallic adducts: [(cymene)RuCl2]2 + 2 PPh3 → 2 (cymene)RuCl2(PPh3) Such monomers adopt pseudo-octahedral piano-stool structures. Precursor to catalysts Treatment of [(cymene)RuCl2]2 with the chelating ligand TsDPENH gives (cymene)Ru(TsDPEN-H), a catalyst for asymmetric transfer hydrogenation. [(cymene)RuCl2]2 is also used to prepare catalysts (by monomerization with dppf) used in borrowing hydrogen catalysis, a catalytic reaction that is based on the activation of alcohols towards nucleophilic attack. It can also be used to prepare other ruthenium–arene complexes. References Organoruthenium compounds Chloro complexes Dimers (chemistry) Half sandwich compounds Ruthenium(II) compounds
(Cymene)ruthenium dichloride dimer
[ "Chemistry", "Materials_science" ]
371
[ "Organometallic chemistry", "Dimers (chemistry)", "Half sandwich compounds", "Polymer chemistry" ]
19,307,674
https://en.wikipedia.org/wiki/Bloch%20equations
In physics and chemistry, specifically in nuclear magnetic resonance (NMR), magnetic resonance imaging (MRI), and electron spin resonance (ESR), the Bloch equations are a set of macroscopic equations that are used to calculate the nuclear magnetization M = (Mx, My, Mz) as a function of time when relaxation times T1 and T2 are present. These are phenomenological equations that were introduced by Felix Bloch in 1946. Sometimes they are called the equations of motion of nuclear magnetization. They are analogous to the Maxwell–Bloch equations. In the laboratory (stationary) frame of reference Let M(t) = (Mx(t), My(t), Mz(t)) be the nuclear magnetization. Then the Bloch equations read: where γ is the gyromagnetic ratio and B(t) = (Bx(t), By(t), B0 + ΔBz(t)) is the magnetic field experienced by the nuclei. The z component of the magnetic field B is sometimes composed of two terms: one, B0, is constant in time, the other one, ΔBz(t), may be time dependent. It is present in magnetic resonance imaging and helps with the spatial decoding of the NMR signal. M(t) × B(t) is the cross product of these two vectors. M0 is the steady state nuclear magnetization (that is, for example, when t → ∞); it is in the z direction. Physical background With no relaxation (that is both T1 and T2 → ∞) the above equations simplify to: or, in vector notation: This is the equation for Larmor precession of the nuclear magnetization M in an external magnetic field B. The relaxation terms, represent an established physical process of transverse and longitudinal relaxation of nuclear magnetization M. As macroscopic equations These equations are not microscopic: they do not describe the equation of motion of individual nuclear magnetic moments. Those are governed and described by laws of quantum mechanics. Bloch equations are macroscopic: they describe the equations of motion of macroscopic nuclear magnetization that can be obtained by summing up all nuclear magnetic moment in the sample. Alternative forms Opening the vector product brackets in the Bloch equations leads to: The above form is further simplified assuming where i = . After some algebra one obtains: . where . is the complex conjugate of Mxy. The real and imaginary parts of Mxy correspond to Mx and My respectively. Mxy is sometimes called transverse nuclear magnetization. Matrix form The Bloch equations can be recast in matrix-vector notation: In a rotating frame of reference In a rotating frame of reference, it is easier to understand the behaviour of the nuclear magnetization M. This is the motivation: Solution of Bloch equations with T1, T2 → ∞ Assume that: at t = 0 the transverse nuclear magnetization Mxy(0) experiences a constant magnetic field B(t) = (0, 0, B0); B0 is positive; there are no longitudinal and transverse relaxations (that is T1 and T2 → ∞). Then the Bloch equations are simplified to: , . These are two (not coupled) linear differential equations. Their solution is: , . Thus the transverse magnetization, Mxy, rotates around the z axis with angular frequency ω0 = γB0 in clockwise direction (this is due to the negative sign in the exponent). The longitudinal magnetization, Mz remains constant in time. This is also how the transverse magnetization appears to an observer in the laboratory frame of reference (that is to a stationary observer). Mxy(t) is translated in the following way into observable quantities of Mx(t) and My(t): Since then , , where Re(z) and Im(z) are functions that return the real and imaginary part of complex number z. 
In this calculation it was assumed that Mxy(0) is a real number. Transformation to rotating frame of reference This is the conclusion of the previous section: in a constant magnetic field B0 along z axis the transverse magnetization Mxy rotates around this axis in clockwise direction with angular frequency ω0. If the observer were rotating around the same axis in clockwise direction with angular frequency Ω, Mxy it would appear to her or him rotating with angular frequency ω0 - Ω. Specifically, if the observer were rotating around the same axis in clockwise direction with angular frequency ω0, the transverse magnetization Mxy would appear to her or him stationary. This can be expressed mathematically in the following way: Let (x, y, z) the Cartesian coordinate system of the laboratory (or stationary) frame of reference, and (x′, y′, z′) = (x′, y′, z) be a Cartesian coordinate system that is rotating around the z axis of the laboratory frame of reference with angular frequency Ω. This is called the rotating frame of reference. Physical variables in this frame of reference will be denoted by a prime. Obviously: . What is Mxy′(t)? Expressing the argument at the beginning of this section in a mathematical way: . Equation of motion of transverse magnetization in rotating frame of reference What is the equation of motion of Mxy′(t)? Substitute from the Bloch equation in laboratory frame of reference: But by assumption in the previous section: Bz′(t) = Bz(t) = B0 + ΔBz(t) and Mz(t) = Mz′(t). Substituting into the equation above: This is the meaning of terms on the right hand side of this equation: i (Ω - ω0) Mxy′(t) is the Larmor term in the frame of reference rotating with angular frequency Ω. Note that it becomes zero when Ω = ω0. The -i γ ΔBz(t) Mxy′(t) term describes the effect of magnetic field inhomogeneity (as expressed by ΔBz(t)) on the transverse nuclear magnetization; it is used to explain T2*. It is also the term that is behind MRI: it is generated by the gradient coil system. The i γ Bxy′(t) Mz(t) describes the effect of RF field (the Bxy′(t) factor) on nuclear magnetization. For an example see below. - Mxy′(t) / T2 describes the loss of coherency of transverse magnetization. Similarly, the equation of motion of Mz in the rotating frame of reference is: Time independent form of the equations in the rotating frame of reference When the external field has the form: , We define: and , and get (in the matrix-vector notation): Simple solutions Relaxation of transverse nuclear magnetization Mxy Assume that: The nuclear magnetization is exposed to constant external magnetic field in the z direction Bz′(t) = Bz(t) = B0. Thus ω0 = γB0 and ΔBz(t) = 0. There is no RF, that is Bxy' = 0. The rotating frame of reference rotates with an angular frequency Ω = ω0. Then in the rotating frame of reference, the equation of motion for the transverse nuclear magnetization, Mxy'(t) simplifies to: This is a linear ordinary differential equation and its solution is . where Mxy'(0) is the transverse nuclear magnetization in the rotating frame at time t = 0. This is the initial condition for the differential equation. Note that when the rotating frame of reference rotates exactly at the Larmor frequency (this is the physical meaning of the above assumption Ω = ω0), the vector of transverse nuclear magnetization, Mxy(t) appears to be stationary. 
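For convenience, the main formulas used in this and the preceding sections can be summarised as follows. This is a standard presentation of the Bloch equations and of the rotating-frame behaviour discussed above, written in LaTeX with the notation of the text (M0, T1, T2, ω0 = γB0); it is a summary rather than a derivation.

% Bloch equations in the laboratory frame:
\[
  \frac{dM_x}{dt} = \gamma\,(\mathbf{M}\times\mathbf{B})_x - \frac{M_x}{T_2}, \qquad
  \frac{dM_y}{dt} = \gamma\,(\mathbf{M}\times\mathbf{B})_y - \frac{M_y}{T_2}, \qquad
  \frac{dM_z}{dt} = \gamma\,(\mathbf{M}\times\mathbf{B})_z - \frac{M_z - M_0}{T_1}.
\]
% Free precession (B = (0, 0, B_0), no relaxation): the transverse magnetization
% rotates clockwise at the Larmor frequency and M_z stays constant:
\[
  M_{xy}(t) = M_{xy}(0)\, e^{-i\omega_0 t}, \qquad M_z(t) = M_z(0), \qquad \omega_0 = \gamma B_0 .
\]
% Rotating frame at resonance (\Omega = \omega_0), no RF field: the transverse
% magnetization simply decays with time constant T_2:
\[
  \frac{dM_{xy}'}{dt} = -\frac{M_{xy}'}{T_2}
  \quad\Longrightarrow\quad
  M_{xy}'(t) = M_{xy}'(0)\, e^{-t/T_2}.
\]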
Relaxation of longitudinal nuclear magnetization Mz Assume that: The nuclear magnetization is exposed to constant external magnetic field in the z direction Bz′(t) = Bz(t) = B0. Thus ω0 = γB0 and ΔBz(t) = 0. There is no RF, that is Bxy' = 0. The rotating frame of reference rotates with an angular frequency Ω = ω0. Then in the rotating frame of reference, the equation of motion for the longitudinal nuclear magnetization, Mz(t) simplifies to: This is a linear ordinary differential equation and its solution is where Mz(0) is the longitudinal nuclear magnetization in the rotating frame at time t = 0. This is the initial condition for the differential equation. 90 and 180° RF pulses Assume that: Nuclear magnetization is exposed to constant external magnetic field in z direction Bz′(t) = Bz(t) = B0. Thus ω0 = γB0 and ΔBz(t) = 0. At t = 0 an RF pulse of constant amplitude and frequency ω0 is applied. That is B'xy(t) = B'xy is constant. Duration of this pulse is τ. The rotating frame of reference rotates with an angular frequency Ω = ω0. T1 and T2 → ∞. Practically this means that τ ≪ T1 and T2. Then for 0 ≤ t ≤ τ: See also The Bloch–Torrey equation is a generalization of the Bloch equations, which includes added terms due to the transfer of magnetization by diffusion. References Further reading Charles Kittel, Introduction to Solid State Physics'', John Wiley & Sons, 8th edition (2004), . Chapter 13 is on Magnetic Resonance. Eponymous equations of physics Nuclear magnetic resonance Magnetic resonance imaging
Bloch equations
[ "Physics", "Chemistry" ]
2,010
[ "Nuclear magnetic resonance", "Equations of physics", "Magnetic resonance imaging", "Eponymous equations of physics", "Nuclear physics" ]
14,241,105
https://en.wikipedia.org/wiki/Van%20der%20Waals%20surface
The van der Waals surface of a molecule is an abstract representation or model of that molecule, illustrating where, in very rough terms, a surface might reside for the molecule based on the hard cutoffs of van der Waals radii for individual atoms, and it represents a surface through which the molecule might be conceived as interacting with other molecules. Also referred to as a van der Waals envelope, the van der Waals surface is named for Johannes Diderik van der Waals, a Dutch theoretical physicist and thermodynamicist who developed theory to provide a liquid-gas equation of state that accounted for the non-zero volume of atoms and molecules, and for their exhibiting an attractive force when they interacted (theoretical constructions that also bear his name). van der Waals surfaces are therefore a tool used in the abstract representations of molecules, whether accessed, as they were originally, via hand calculation, or via physical wood/plastic models, or now digitally, via computational chemistry software. Practically speaking, CPK models, developed by and named for Robert Corey, Linus Pauling, and Walter Koltun, were the first widely used physical molecular models based on van der Waals radii, and allowed broad pedagogical and research use of a model showing the van der Waals surfaces of molecules. van der Waals volume and van der Waals surface area Related to the title concept are the ideas of a van der Waals volume, Vw, and a van der Waals surface area, abbreviated variously as Aw, vdWSA, VSA, and WSA. A van der Waals surface area is an abstract conception of the surface area of atoms or molecules from a mathematical estimation, either computing it from first principles or by integrating over a corresponding van der Waals volume. In the simplest case, for a spherical monatomic gas, it is simply the computed surface area of a sphere of radius equal to the van der Waals radius of the gaseous atom: Aw = 4πrw². The van der Waals volume, a type of atomic or molecular volume, is a property directly related to the van der Waals radius, and is defined as the volume occupied by an individual atom, or in a combined sense, by all atoms of a molecule. It may be calculated for atoms if the van der Waals radius is known, and for molecules if its atoms' radii and the inter-atomic distances and angles are known. As above, in the simplest case, for a spherical monatomic gas, Vw is simply the computed volume of a sphere of radius equal to the van der Waals radius of the gaseous atom: Vw = (4/3)πrw³. For a molecule, Vw is the volume enclosed by the van der Waals surface; hence, computation of Vw presumes the ability to describe and compute a van der Waals surface. van der Waals volumes of molecules are always smaller than the sum of the van der Waals volumes of their constituent atoms, due to the fact that the interatomic distances resulting from chemical bonding are less than the sum of the atomic van der Waals radii. In this sense, a van der Waals surface of a homonuclear diatomic molecule can be viewed as a pictorial overlap of the two spherical van der Waals surfaces of the individual atoms, likewise for larger molecules like methane, ammonia, etc. (see images).
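To make the two formulas above concrete, the short Python sketch below evaluates Aw and Vw for single atoms treated as spheres. The radius values are approximate Bondi-type van der Waals radii in picometres and are assumptions of the example; as the next paragraph notes, different tables give slightly different values.

import math

# Approximate van der Waals radii in picometres (Bondi-type values; different
# tabulations differ slightly, so these numbers are illustrative only).
VDW_RADIUS_PM = {"H": 120.0, "C": 170.0, "N": 155.0, "O": 152.0, "Ar": 188.0}

def vdw_surface_area_pm2(radius_pm):
    """Aw = 4*pi*r^2 for a sphere of the given van der Waals radius."""
    return 4.0 * math.pi * radius_pm ** 2

def vdw_volume_pm3(radius_pm):
    """Vw = (4/3)*pi*r^3 for a sphere of the given van der Waals radius."""
    return (4.0 / 3.0) * math.pi * radius_pm ** 3

for symbol, r in VDW_RADIUS_PM.items():
    area_nm2 = vdw_surface_area_pm2(r) * 1e-6   # pm^2 -> nm^2
    volume_nm3 = vdw_volume_pm3(r) * 1e-9       # pm^3 -> nm^3
    print(f"{symbol:>2}: r = {r:5.1f} pm, Aw ≈ {area_nm2:.3f} nm^2, Vw ≈ {volume_nm3:.4f} nm^3")

For a molecule the same idea applies, but because bonded atoms overlap, the molecular Vw is smaller than the sum of the atomic values, as explained above.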
van der Waals radii and volumes may be determined from the mechanical properties of gases (the original method, determining the van der Waals constant), from the critical point (e.g., of a fluid), from crystallographic measurements of the spacing between pairs of unbonded atoms in crystals, or from measurements of electrical or optical properties (i.e., polarizability or molar refractivity). In all cases, measurements are made on macroscopic samples and results are expressed as molar quantities. van der Waals volumes of a single atom or molecules are arrived at by dividing the macroscopically determined volumes by the Avogadro constant. The various methods give radius values which are similar, but not identical—generally within 1–2 Å (100–200 pm). Useful tabulated values of van der Waals radii are obtained by taking a weighted mean of a number of different experimental values, and, for this reason, different tables will be seen to present different values for the van der Waals radius of the same atom. As well, it has been argued that the van der Waals radius is not a fixed property of an atom in all circumstances, rather, that it will vary with the chemical environment of the atom. Gallery See also Molecular surface (disambiguation) van der Waals force van der Waals molecule van der Waals radius van der Waals strain References and notes Further reading DC Whitley, van der Waals surface graphs and molecular shape, Journal of Mathematical Chemistry, Volume 23, Numbers 3-4, 1998, pp. 377–397(21). M. Petitjean, On the Analytical Calculation of van der Waals Surfaces and Volumes: Some Numerical Aspects, Journal of Computational Chemistry, Volume 15, Number 5, 1994, pp. 507–523. External links VSAs for various molecules by Anton Antonov, The Wolfram Demonstrations Project, 2007. van der Waals radii, Structural Biology Glossary, Image Library of Biological Macromolecules. Analytical calculation of van der Waals surfaces and volumes. Intermolecular forces Physical chemistry Surface
Van der Waals surface
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,131
[ "Molecular physics", "Applied and interdisciplinary physics", "Materials science", "Intermolecular forces", "nan", "Physical chemistry" ]
14,241,662
https://en.wikipedia.org/wiki/8-OH-DPAT
8-OH-DPAT is a research chemical of the aminotetralin chemical class which was developed in the 1980s and has been widely used to study the function of the 5-HT1A receptor. It was one of the first major 5-HT1A receptor full agonists to have been discovered. Originally believed to be selective for the 5-HT1A receptor, 8-OH-DPAT was later found to act as a 5-HT7 receptor agonist and serotonin reuptake inhibitor/releasing agent as well. In animal studies, 8-OH-DPAT has been shown to possess antidepressant, anxiolytic, serenic, anorectic, antiemetic, hypothermic, hypotensive, bradycardic, hyperventilative, and analgesic effects. See also 5-OH-DPAT 7-OH-DPAT Bay R 1531 MDAT UH-301 References External links Yves Aubert, Thesis, Leiden University. (Dec 11, 2012) Sex, aggression and pair-bond: a study on the serotonergic regulation of female sexual function in the marmoset monkey 2-Aminotetralins 5-HT1A agonists 5-HT7 agonists Serotonin reuptake inhibitors Serotonin releasing agents Hydroxyarenes
8-OH-DPAT
[ "Chemistry" ]
294
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
14,242,021
https://en.wikipedia.org/wiki/Offline%20private%20key%20protocol
The Offline Private Key Protocol (OPKP) is a cryptographic protocol to prevent unauthorized access to back up or archive data. The protocol results in a public key that can be used to encrypt data and an offline private key that can later be used to decrypt that data. The protocol is based on three rules regarding the key. An offline private key should: not be stored with the encrypted data (obviously) not be kept by the organization that physically stores the encrypted data, to ensure privacy not be stored at the same system as the original data, to avoid the possibility that theft of only the private key would give access to all data at the storage provider; and to avoid that when the key would be needed to restore a backup, the key would be lost together with the data loss that made the restore necessary in the first place To comply with these rules, the offline private key protocol uses a method of asymmetric key wrapping. Security As the protocol does not provide rules on the strength of the encryption methods and keys to be used, the security of the protocol depends on the actual cryptographic implementation. When used in combination with strong encryption methods, the protocol can provide extreme security. Operation Initially: a client program (program) on a system (local system) with data to back up or archive generates a random private key PRIV program creates a public key PUB based on PRIV program stores PUB on the local system program presents PRIV to user who can store the key, e.g. printed as a trusted paper key, or on a memory card program destroys PRIV on the local system When archiving or creating a backup, for each session or file: program generates a one-time random key OTRK program encrypts data using OTRK and a symmetric encryption method program encrypts the (optionally padded) key OTRK using PUB to OTRKCR program stores the OTRKCR and the encrypted data to a server program destroys OTRK on the local system program destroys OTRKCR on the local system the server stores OTRKCR and stores the encrypted data To restore backed up or archived data: user feeds PRIV into program program downloads data with the respective OTRKCR program decrypts OTRKCR using PRIV, giving OTRK program decrypts data using OTRK program destroys PRIV on the local system References Cryptographic protocols Data security
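The operation steps above correspond to a standard hybrid-encryption (key-wrapping) pattern. The following Python sketch illustrates them using the third-party cryptography package, with RSA-OAEP standing in for the public/offline private key pair and AES-GCM for the one-time symmetric key; the protocol itself does not prescribe particular algorithms, so these choices, and the helper names, are assumptions of the example.

# pip install cryptography  -- illustrative sketch of the OPKP key-wrapping pattern
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Initial setup: generate PRIV, derive PUB, hand PRIV to the user (e.g. printed
# or on a memory card) and keep only PUB on the local system.
priv = rsa.generate_private_key(public_exponent=65537, key_size=3072)
pub = priv.public_key()
priv_pem = priv.private_bytes(serialization.Encoding.PEM,
                              serialization.PrivateFormat.PKCS8,
                              serialization.NoEncryption())  # stored offline by the user

def backup(pub_key, plaintext: bytes):
    """Encrypt data with a one-time key OTRK, wrap OTRK with PUB, return both."""
    otrk = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    encrypted_data = nonce + AESGCM(otrk).encrypt(nonce, plaintext, None)
    otrkcr = pub_key.encrypt(otrk, OAEP)   # OTRK encrypted under PUB
    return otrkcr, encrypted_data          # OTRK itself is now discarded

def restore(priv_pem_bytes, otrkcr, encrypted_data):
    """The user supplies PRIV, which unwraps OTRK and then decrypts the data."""
    key = serialization.load_pem_private_key(priv_pem_bytes, password=None)
    otrk = key.decrypt(otrkcr, OAEP)
    nonce, ciphertext = encrypted_data[:12], encrypted_data[12:]
    return AESGCM(otrk).decrypt(nonce, ciphertext, None)

wrapped_key, blob = backup(pub, b"archive contents")
assert restore(priv_pem, wrapped_key, blob) == b"archive contents"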
Offline private key protocol
[ "Engineering" ]
501
[ "Cybersecurity engineering", "Data security" ]
14,245,428
https://en.wikipedia.org/wiki/More%20O%27Ferrall%E2%80%93Jencks%20plot
More O’Ferrall–Jencks plots are two-dimensional representations of multiple reaction coordinate potential energy surfaces for chemical reactions that involve simultaneous changes in two bonds. As such, they are a useful tool to explain or predict how changes in the reactants or reaction conditions can affect the position and geometry of the transition state of a reaction for which there are possible competing pathways. Brief history These plots were first introduced in a 1970 paper by R. A. More O’Ferrall to discuss mechanisms of β-eliminations and later adopted by W. P. Jencks in an attempt to clarify the finer details involved in the general acid-base catalysis of reversible addition reactions to carbon electrophiles such as the hydration of carbonyls. Description In this type of plot (Figure 1), each axis represents a unique reaction coordinate, the corners represent local minima along the potential surface such as reactants, products or intermediates and the energy axis projects vertically out of the page. Changing a single reaction parameter can change the height of one or more of the corners of the plot. These changes are transmitted across the surface such that the position of the transition state (the saddle point) is altered. Consider a generic example in which the initial transition state along a concerted pathway is represented by a black dot on a red diagonal (Figure 1). Changing the height of the corners can have two effects on the position of the transition state: it can move along the diagonal, reflecting a change in the Gibbs free energy of the reaction (ΔG°), or perpendicular to it, reflecting a change in the energy of competing pathways. Thus, in accordance with the Hammond postulate, the transition state moves along the diagonal towards the corner that is raised in energy (a Hammond effect) and perpendicular to the diagonal towards the corner that is lowered (an anti-Hammond effect). In this example, R is raised in energy and I(2) is lowered in energy. The transition state moves accordingly and the vector sum of both movements gives the real change in its position. Applications Elimination reactions Initially, More O’Ferrall introduced this type of analysis to discuss the continuity between concerted and step-wise β-elimination reaction mechanisms. The model also provided a framework within which to explain the effects of substituents and reaction conditions on the mechanism. The appropriate lower energy species were placed at the corners of the two dimensional plot (Figure 2). These were the reactants (top left), the products (bottom right) and the intermediates of the two possible stepwise reactions: the carbocation for E1 (bottom left) and the carbanion for E1cB (top-right). Thus, the horizontal axes represent the extent of deprotonation (C-H bond distance) and the vertical axes represent the extent of leaving group departure (C-LG distance). By applying the Hammond and anti-Hammond effects, he predicted the effects of various changes in the reactants or reaction conditions. For example, the effects of introducing a better leaving group on a substrate that initially eliminates via an E2 mechanism are illustrated in Figure 2. A better leaving group increases the energy of the reactants and of the carbanion intermediate. Thus, the transition state moves towards the reactants and away from the carbanion intermediate. The model does not predict any change in leaving group departure at the transition state. Instead the extent of deprotonation is expected to decrease. 
This can be explained by the fact that a better leaving group needs less assistance from a developing neighbouring negative charge in order to depart. The true change predicts more carbocation character at the transition state and a mechanism that is more E1-like. These observations can be correlated with Hammett ρ-values. Poor leaving groups correlate with large positive ρ-values. Gradually increasing the leaving group ability decreases the ρ-value until it becomes large and negative, indicating the development of positive charge in the transition state. Substitution reactions A similar analysis, done by J. M. Harris, has been applied to the competing SN1 and SN2 nucleophilic aliphatic substitution pathways. The effects of increasing the nucleophilicity of the nucleophile are shown as an example in Figure 3. An agreement with Hammett ρ-values is also apparent in this application. Addition to carbonyls Finally, this type of plot can readily be drawn to illustrate the effects of changing parameters in the acid-catalyzed nucleophilic addition to carbonyls. The example in Figure 4 demonstrates the effects of increasing the strength of the acid. In this case, the extent of protonation is the α-value in the Brønsted catalysis equation. The fact that the α-value remains unchanged explains the linearity of Brønsted plots for such a reaction. Ultimately, the More O’Ferrall–Jencks plots have qualitative predictive and explanatory power regarding the effects of changing substituents and reaction conditions for a wide variety of reactions. See also Potential energy surface Reaction coordinate Transition state References Plots (graphics) Chemical kinetics Physical organic chemistry
More O'Ferrall–Jencks plot
[ "Chemistry" ]
1,069
[ "Chemical reaction engineering", "Chemical kinetics", "Physical organic chemistry" ]
18,289,011
https://en.wikipedia.org/wiki/Continuous%20reactor
Continuous reactors (alternatively referred to as flow reactors) carry material as a flowing stream. Reactants are continuously fed into the reactor and emerge as a continuous stream of product. Continuous reactors are used for a wide variety of chemical and biological processes within the food, chemical and pharmaceutical industries. A survey of the continuous reactor market will throw up a daunting variety of shapes and types of machine. Beneath this variation however lies a relatively small number of key design features which determine the capabilities of the reactor. When classifying continuous reactors, it can be more helpful to look at these design features rather than the whole system. Batch versus continuous Reactors can be divided into two broad categories, batch reactors and continuous reactors. Batch reactors are stirred tanks sufficiently large to handle the full inventory of a complete batch cycle. In some cases, batch reactors may be operated in semi-batch mode where one chemical is charged to the vessel and a second chemical is added slowly. Continuous reactors are generally smaller than batch reactors and handle the product as a flowing stream. Continuous reactors may be designed as pipes with or without baffles or a series of interconnected stages. The advantages of the two options are considered below. Benefits of batch reactors Batch reactors are very versatile and are used for a variety of different unit operations (batch distillation, storage, crystallisation, liquid-liquid extraction etc.) in addition to chemical reactions. There is a large installed base of batch reactors within industry and their method of use is well established. Batch reactors are excellent at handling difficult materials like slurries or products with a tendency to foul. Batch reactors represent an effective and economic solution for many types of slow reactions. Benefits of continuous reactors The rate of many chemical reactions is dependent on reactant concentration. Continuous reactors are generally able to cope with much higher reactant concentrations due to their superior heat transfer capacities. Plug flow reactors have the additional advantage of greater separation between reactants and products giving a better concentration profile. The small size of continuous reactors makes higher mixing rates possible. The output from a continuous reactor can be altered by varying the run time. This increases operating flexibility for manufacturers. Heat transfer capacity The rate of heat transfer within a reactor can be determined from the following relationship: qx = U × A × (Tj − Tp), where: qx: the heat liberated or absorbed by the process (W) U: the heat transfer coefficient of the heat exchanger (W/(m2K)) A: the heat transfer area (m2) Tp: process temperature (K) Tj: jacket temperature (K) From a reactor design perspective, heat transfer capacity is heavily influenced by channel size since this determines the heat transfer area per unit volume. Channel size can be categorised in various ways; however, in the broadest terms, the categories are as follows: Industrial batch reactors: 1–10 m2/m3 (depending on reactor capacity) Laboratory batch reactors: 10–100 m2/m3 (depending on reactor capacity) Continuous reactors (non-micro): 100–5,000 m2/m3 (depending on channel size) Micro reactors: 5,000–50,000 m2/m3 (depending on channel size) Small diameter channels have the advantage of high heat transfer capacity. Against this, however, they have lower flow capacity, higher pressure drop and an increased tendency to block.
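The relationship above can be made concrete with a short calculation. The Python sketch below compares the jacket-to-process temperature difference needed to absorb a given volumetric heat load for the specific heat transfer areas quoted above; the heat transfer coefficient and heat load are assumed values chosen purely for illustration.

# Illustrative use of qx = U * A * (Tj - Tp), rearranged to find the temperature
# difference needed for a given heat load. U and the heat load are assumed values.

U = 500.0                  # overall heat transfer coefficient, W/(m2*K) (assumed)
heat_load_per_m3 = 50e3    # process heat generation per unit volume, W/m3 (assumed)

# Representative specific heat transfer areas (m2 per m3 of product) from the text.
area_per_m3 = {
    "industrial batch reactor": 5.0,
    "laboratory batch reactor": 50.0,
    "continuous reactor (non-micro)": 1000.0,
    "micro reactor": 20000.0,
}

for reactor, a in area_per_m3.items():
    delta_t = heat_load_per_m3 / (U * a)   # |Tj - Tp| needed so U*A*dT matches the load
    print(f"{reactor:32s}: A/V = {a:7.0f} m2/m3 -> required |Tj - Tp| ≈ {delta_t:8.3f} K")

With the same assumed duty, the vessel with the smallest specific area needs a temperature difference several thousand times larger than the micro reactor, which is the point developed in the following paragraphs on channel size and temperature control.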
In many cases, the physical structure and fabrication techniques for micro reactors make cleaning and unblocking very difficult to achieve. Temperature control Temperature control is one of the key functions of a chemical reactor. Poor temperature control can severely affect both yield and product quality. It can also lead to boiling or freezing within the reactor which may stop the reactor from working altogether. In extreme cases, poor temperature control can lead to severe overpressure, which can be destructive to the equipment and potentially dangerous. Single stage systems with high heating or cooling flux In a batch reactor, good temperature control is achieved when the heat added or removed by the heat exchange surface (qx) equals the heat generated or absorbed by the process material (qp). For flowing reactors made up of tubes or plates, satisfying the relationship qx = qp does not deliver good temperature control since the rate of process heat liberation/absorption varies at different points within the reactor. Controlling the outlet temperature does not prevent hot/cold spots within the reactor. Hot or cold spots caused by exothermic or endothermic activity can be eliminated by relocating the temperature sensor (T) to the point where the hot/cold spots exist. This however leads to overheating or overcooling downstream of the temperature sensor. Many different types of plate or tube reactors use simple feedback control of the product temperature. From a user’s perspective, this approach is only suitable for processes where the effects of hot/cold spots do not compromise safety, quality or yield. Single stage systems with low heating or cooling flux Micro reactors can be tubes or plates and have the key feature of small diameter flow channels (typically less than 1 mm). The significance of micro reactors is that the heat transfer area (A) per unit volume (of product) is very large. A large heat transfer area means that high values of qx can be achieved with low values of Tp – Tj. The low value of Tp – Tj limits the extent of overcooling that can occur. Thus the product temperature can be controlled by regulating the temperature of the heat transfer fluid (or the product). The feedback signal for controlling the process temperature can be the product temperature or the heat transfer fluid temperature. It is often more practical to control the temperature of the heat transfer fluid. Although micro reactors are efficient heat transfer devices, the narrow channels can result in high pressure drops, limited flow capacity and a tendency to block. They are also often fabricated in a manner which makes cleaning and dismantling difficult or impossible. Multistage systems with high heating or cooling flux Conditions within a continuous reactor change as the product passes along the flow channel. In an ideal reactor the design of the flow channel is optimised to cope with this change. In practice, this is achieved by breaking the reactor into a series of stages. Within each stage the ideal heat transfer conditions can be achieved by varying the surface to volume ratio or the cooling/heating flux. Thus stages where process heat output is very high either use extreme heat transfer fluid temperatures or have high surface to volume ratios (or both). By tackling the problem as a series of stages, extreme cooling/heating conditions can be employed at the hot/cold spots without suffering overheating or overcooling elsewhere. The significance of this is that larger flow channels can be used.
Larger flow channels are generally desirable as they permit higher flow rates, lower pressure drop and a reduced tendency to block. Mixing Mixing is another important classifying feature for continuous reactors. Good mixing improves the efficiency of heat and mass transfer. In terms of trajectory through the reactor, the ideal flow condition for a continuous reactor is plug flow (since this delivers uniform residence time within the reactor). There is however a measure of conflict between good mixing and plug flow since mixing generates axial as well as radial movement of the fluid. In tube type reactors (with or without static mixing), adequate mixing can be achieved without seriously compromising plug flow. For this reason, these types of reactor are sometimes referred to as plug flow reactors. Continuous reactors can be classified in terms of the mixing mechanism as follows: Mixing by diffusion Diffusion mixing relies on concentration or temperature gradients within the product. This approach is common with micro reactors where the channel thicknesses are very small and heat can be transmitted to and from the heat transfer surface by conduction. In larger channels and for some types of reaction mixture (especially immiscible fluids), mixing by diffusion is not practical. Mixing with the product transfer pump In a continuous reactor, the product is continuously pumped through the reactor. This pump can also be used to promote mixing. If the fluid velocity is sufficiently high, turbulent flow conditions exist (which promotes mixing). The disadvantage with this approach is that it leads to long reactors with high pressure drops and high minimum flow rates. This is particularly true where the reaction is slow or the product has high viscosity. This problem can be reduced with the use of static mixers. Static mixers are baffles in the flow channel which are used to promote mixing. They are able to work with or without turbulent conditions. Static mixers can be effective but still require relatively long flow channels and generate relatively high pressure drops. The oscillatory baffled reactor is a specialised form of static mixer where the direction of process flow is cycled. This permits static mixing with low net flow through the reactor. This has the benefit of allowing the reactor to be kept comparatively short. Mixing with a mechanical agitator Some continuous reactors use mechanical agitation for mixing (rather than the product transfer pump). Whilst this adds complexity to the reactor design, it offers significant advantages in terms of versatility and performance. With independent agitation, efficient mixing can be maintained irrespective of product throughput or viscosity. It also eliminates the need for long flow channels and high pressure drops. One less desirable feature associated with mechanical agitators is the strong axial mixing they generate. This problem can be managed by breaking up the reactor into a series of mixed stages separated by small plug flow channels. The most familiar form of continuous reactor of this type is the continuously stirred tank reactor (CSTR). This is essentially a batch reactor used in a continuous flow. The disadvantage with a single-stage CSTR is that it can be relatively wasteful of product during start-up and shutdown. The reactants are also added to a mixture which is rich in product. For some types of process, this can affect quality and yield. These problems are managed by using multi-stage CSTRs.
At the large scale, conventional batch reactors can be used for the CSTR stages. See also Batch reactor Chemical reactor References External links ReelReactor Continuous Chemical and Biological Reactor ThalesNano Continuous Reactors Syrris Continuous Reactors Fluitec Contiplant Continuous Reactors Uniqsis Continuous Reactors Amtechuk Continuous Reactors Alfa Laval Continuous Reactors LIST Continuous Kneader Reactors NiTech Solutions Continuous Crystallisers Chemical reactors
Continuous reactor
[ "Chemistry", "Engineering" ]
2,070
[ "Chemical reactors", "Chemical reaction engineering", "Chemical equipment" ]
6,029,210
https://en.wikipedia.org/wiki/Phased%20array%20ultrasonics
Phased array ultrasonics (PA) is an advanced method of ultrasonic testing that has applications in medical imaging and industrial nondestructive testing. Common applications are to noninvasively examine the heart or to find flaws in manufactured materials such as welds. Single-element (non-phased array) probes, known technically as monolithic probes, emit a beam in a fixed direction. To test or interrogate a large volume of material, a conventional probe must be physically scanned (moved or turned) to sweep the beam through the area of interest. In contrast, the beam from a phased array probe can be focused and swept electronically without moving the probe. The beam is controllable because a phased array probe is made up of multiple small elements, each of which can be pulsed individually at a computer-calculated timing. The term phased refers to the timing, and the term array refers to the multiple elements. Phased array ultrasonic testing is based on principles of wave physics, which also have applications in fields such as optics and electromagnetic antennae. Principle of operation The PA probe consists of many small ultrasonic transducers, each of which can be pulsed independently. By varying the timing, for instance by making the pulse from each transducer progressively delayed going up the line, a pattern of constructive interference is set up that results in radiating a quasi-plane ultrasonic beam at a set angle depending on the progressive time delay. In other words, by changing the progressive time delay the beam can be steered electronically. It can be swept like a search-light through the tissue or object being examined, and the data from multiple beams are put together to make a visual image showing a slice through the object. Use in industry Phased array is widely used for nondestructive testing (NDT) in several industrial sectors, such as construction, pipelines, and power generation. This method is an advanced NDT method that is used to detect discontinuities i.e. cracks or flaws and thereby determine component quality. Due to the possibility to control parameters such as beam angle and focal distance, this method is very efficient regarding the defect detection and speed of testing. Apart from detecting flaws in components, phased array can also be used for wall thickness measurements in conjunction with corrosion testing. Phased array can be used for the following industrial purposes: Inspection of welds Thickness measurements Corrosion inspection PAUT Validation/Demonstration Blocks Rolling stock inspection (wheels and axles) PAUT & TOFD Standard Calibration Blocks Features The method most commonly used for medical ultrasonography. Multiple probe elements produce a steerable and focused beam. Focal spot size depends on probe active aperture (A), wavelength (λ) and focal length (F). Focusing is limited to the near field of the phased array probe. Produces an image that shows a slice through the object. Compared to conventional, single-element ultrasonic testing systems, PA instruments and probes are more complex and expensive. In industry, PA technicians require more experience and training than conventional UT technicians. 
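The electronic steering described under the principle of operation follows the usual delay law for a linear array: to tilt the beam by an angle θ, neighbouring elements are fired with a progressive delay of p·sin(θ)/c, where p is the element pitch and c is the speed of sound in the material. The Python sketch below is a minimal illustration with assumed pitch, angle and sound speed; it is not the parameter set of any particular probe or instrument.

```python
# Minimal phased-array delay-law sketch (assumed values, not a real probe setup).
# Firing element i with delay i * pitch * sin(theta) / c tilts the quasi-plane
# wavefront of a linear array by the angle theta.
import math

def steering_delays(n_elements, pitch_m, steer_angle_deg, sound_speed_m_s):
    """Return per-element firing delays (seconds) for a given steering angle."""
    theta = math.radians(steer_angle_deg)
    dt = pitch_m * math.sin(theta) / sound_speed_m_s   # delay increment per element
    delays = [i * dt for i in range(n_elements)]
    # Shift so the earliest-firing element has zero delay (handles negative angles).
    t0 = min(delays)
    return [d - t0 for d in delays]

# Example: 16 elements, 0.6 mm pitch, steel (~5900 m/s longitudinal), 30 degree steer.
for i, d in enumerate(steering_delays(16, 0.6e-3, 30.0, 5900.0)):
    print(f"element {i:2d}: delay = {d * 1e9:6.1f} ns")
```

Focusing works in the same way, with the delays chosen so that the travel times from every element to the focal point are equalised.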
Standards European Committee for Standardization (CEN) prEN 16018, Non destructive testing - Terminology - Terms used in ultrasonic testing with phased arrays ISO/WD 13588, Non-destructive testing of welds – Ultrasonic testing – Use of (semi-) automated phased array technology See also Phased array (general theory and electromagnetic telecommunications). Phased array optics References Books ASME Boiler and Pressure Vessel Code. American Society Of Mechanical Engineers, 2013. Section V — Nondestructive Examination. [See Article 4 — Ultrasonic Examination Methods for Welds. Para E-474 UT-Phased Array Technique] External links FOCUS - Fast Object-oriented C++ Ultrasound Simulator [MATLAB routines for creating and simulating phased arrays] Phased array animated simulator [registration required after 5 minutes' use] Phased Array Tutorial Education resource from Olympus Nondestructive testing Ultrasound Medical ultrasonography
Phased array ultrasonics
[ "Materials_science" ]
802
[ "Nondestructive testing", "Materials testing" ]
6,030,185
https://en.wikipedia.org/wiki/Limosilactobacillus%20reuteri
Limosilactobacillus reuteri is a lactic acid bacterium found in a variety of natural environments, including the gastrointestinal tract of humans and other animals. It does not appear to be pathogenic and may have health effects. Discovery At the turn of the 20th century, L. reuteri was recorded in scientific classifications of lactic acid bacteria, though at this time it was mistakenly grouped as a member of Lactobacillus fermentum. In the 1960s, further work by microbiologist Gerhard Reuter, for whom the species eventually was named, reclassified the species as L. fermentum biotype II. Significant differences were found between biotype II and other biotypes of L. fermentum, to the point that in 1980 it was identified as a distinct species and the formal species identity, L. reuteri, was proposed. In April 2020, L. reuteri was reassigned to the genus Limosilactobacillus. Prevalence Limosilactobacillus reuteri is found in a variety of natural environments. It has been isolated from many foods, especially meats and dairy products. It appears to be essentially ubiquitous in the animal kingdom, having been found in the gastrointestinal tracts and feces of healthy humans, sheep, chickens, pigs, and rodents. It is the only species to constitute a "major component" of the Lactobacillus species present in the gut of each of the tested host animals, and each host seems to harbor its own specific strain of L. reuteri. It is possible that L. reuteri contributes to the health of its host organism in some manner. Limosilactobacillus reuteri is present as a dominant member of fermenting organisms in type II sourdoughs; several metabolic traits of L. reuteri, including exopolysaccharide formation and conversion of glutamine to glutamate, improve bread quality. Effects Antimicrobial Limosilactobacillus reuteri is known to produce reuterin, reutericin 6 and reutericyclin. Reuterin In the late 1980s, Walter Dobrogosz, Ivan Casas and colleagues discovered that L. reuteri produced a novel broad-spectrum antibiotic substance via the organism's fermentation of glycerol. They named this substance reuterin, after Reuter. Reuterin is a multiple-compound dynamic equilibrium (HPA system, HPA) consisting of 3-hydroxypropionaldehyde, its hydrate, and its dimer. At concentrations above 1.4 M, the HPA dimer was predominant. However, at concentrations relevant for biological systems, HPA hydrate was the most abundant, followed by the aldehyde form. Reuterin inhibits the growth of some harmful Gram-negative and Gram-positive bacteria, along with yeasts, fungi and protozoa. Researchers found that L. reuteri can secrete sufficient amounts of reuterin to achieve the desired antimicrobial effects. Furthermore, since about four to five times the amount of reuterin is needed to kill "good" gut bacteria (i.e. L. reuteri and other Lactobacillus species) as "bad", this would allow L. reuteri to remove gut invaders without harming other gut microbiota. Some studies questioned whether reuterin production is essential for L. reuteris health-promoting activity. The discovery that it produces an antibiotic substance led to a great deal of further research. In early 2008, L. reuteri was confirmed to be capable of producing reuterin in the gastrointestinal tract, improving its ability to inhibit the growth of E. coli. The gene cluster controlling the biosynthesis of reuterin and cobalamin in the L. reuteri genome is a genomic island acquired from an anomalous source. Clinical results in humans Although L. 
reuteri occurs naturally in humans, it is not found in all individuals. Dietary supplementation can sustain high levels of it in those with deficiencies. Oral intake of L. reuteri has been shown to effectively colonize the intestines of healthy individuals. Colonization begins within days of ingestion, although levels drop months later if intake is stopped. L. reuteri is found in breast milk. Oral intake on the mother's part increases the amount of L. reuteri present in her milk, and the likelihood that it will be transferred to the child. Safety Manipulation of gut microbiota is a complex process that may cause bacteria-host interactions. Although probiotics in general are considered safe, concerns exist about their use in certain cases. Some people, such as those with compromised immune systems, short bowel syndrome, central venous catheters, heart valve disease, and premature infants, may be at higher risk for adverse events. Rarely, consumption of probiotics may cause bacteremia, fungemia and sepsis, potentially fatal infections, in children with compromised immune systems or who are already critically ill. Intestinal health One of the better documented effects of L. reuteri is a significant reduction of symptom duration in pediatric diarrheal disease. L. reuteri is effective as a prophylactic for this illness; children fed it while healthy are less likely to fall ill with diarrhea. With regard to prevention of gut infections, comparative research found L. reuteri to be more potent than other probiotics. Animal research found it to reduce motor complexes and thus intestinal motility. Limosilactobacillus reuteri may be effective in treating necrotizing enterocolitis in preterm infants. Meta-analysis of randomized studies suggests that L. reuteri can reduce the incidence of sepsis and shorten the required duration of hospital treatment in this population. Limosilactobacillus reuteri is an effective treatment against infant colic. Studies suggest that colicky infants treated with L. reuteri experience a reduction in time spent crying compared to those treated with simethicone or placebo. However, colic is still poorly understood, and it is not clear why or how L. reuteri ameliorates its symptoms. One theory holds that affected infants cry because of gastrointestinal discomfort; if this is the case, it is plausible that L. reuteri somehow acts to lessen this discomfort, since its primary residence is inside the gut. Gastric health Limosilactobacillus reuteri has a pronounced anti-Helicobacter activity and its use as adjuvant therapy against H. pylori in children appears to be very promising, especially in cases where H. pylori infection is detected but there is no absolute indication for eradication. Growing evidence indicates L. reuteri is capable of fighting the gut pathogen Helicobacter pylori, which causes peptic ulcers and is endemic in parts of the developing world. One study showed dietary supplementation of L. reuteri alone reduces, but does not eradicate, H. pylori in the gut. Another study found the addition of L. reuteri to omeprazole therapy dramatically increased (from 0% to 60%) the cure rate of H. pylori-infected patients compared to the drug alone. Yet another study showed that L. reuteri effectively suppressed H. pylori infection and decreased the occurrence of dyspeptic symptoms, although it did not improve the outcome of antibiotic therapy. Limosilactobacillus reuteri has the potential to suppress H. pylori infection and may lead to an improvement of H. 
pylori-associated gastrointestinal symptoms, reducing specific symptoms such as diarrhea and frequent abdominal distention. In the future, L. reuteri could become a central part of a strategy to avoid antibiotic use and to combat antibiotic resistance in H. pylori infections; beyond this, L. reuteri may be a useful alternative treatment for H. pylori, causing fewer side effects than antibiotics. Oral health Limosilactobacillus reuteri may be capable of promoting dental health, as it has been shown to kill Streptococcus mutans, a bacterium responsible for tooth decay. A screen of several probiotic bacteria found L. reuteri was the only tested species able to block S. mutans. Before testing in humans began, another study showed L. reuteri had no harmful effects on teeth. Clinical trials showed that people whose mouths are colonized with L. reuteri (via dietary supplementation) have significantly less S. mutans. Since these studies were short-term, it is not known whether L. reuteri prevents tooth decay. However, since it is able to reduce the numbers of an important decay-causing bacterium, this would be expected. Gingivitis may be ameliorated by consumption of L. reuteri. Patients afflicted with severe gingivitis showed decreased gum bleeding, plaque formation and other gingivitis-associated symptoms compared with placebo after chewing gum containing L. reuteri. Bone density Lactobacillus reuteri and other probiotics may influence the gut microbiome in ways that protect against bone loss, which is common in post-menopausal women. General health By protecting against many common infections, L. reuteri promotes overall wellness in both children and adults. Double-blind, randomized studies in child care centers have found L. reuteri-fed infants fall sick less often, require fewer doctor visits and are absent fewer days from the center compared to placebo and to the competing probiotic Bifidobacterium lactis. Similar results have been found in adults; those consuming L. reuteri daily end up falling ill 50% less often, as measured by their decreased use of sick leave. Results in animal models Scientific studies that require harming the subjects (for example, exposing them to a dangerous virus) cannot be conducted in humans. Therefore, many of L. reuteri's benefits have been studied only in different animal species, such as pigs and mice. In general, animal studies on L. reuteri are done using the species-specific strain of the bacterium. Protection against pathogens Limosilactobacillus reuteri confers a high level of resistance to the pathogen Salmonella typhimurium, halving mortality rates in mice. The same is true for chickens and turkeys; L. reuteri greatly moderates the morbidity and mortality caused by this dangerous food-borne pathogen. Limosilactobacillus reuteri is effective in stopping harmful strains of E. coli from affecting their hosts. A study performed in chickens showed L. reuteri was as potent as the antibiotic gentamicin in preventing E. coli-related deaths. The protozoan parasite Cryptosporidium parvum causes severe watery diarrhea, which can become life-threatening in immunocompromised patients (such as individuals infected with HIV). L. reuteri is known to lessen the symptoms of C. parvum infection in mice and pigs. Some protective effect against the yeast Candida albicans has been found in mice, but in this case, L. reuteri did not work as well as other probiotic organisms, such as L. acidophilus and L. casei. 
Body weight and growth In juvenile commercial livestock, such as turkey poults and piglets, body weight and growth rate are good health indicators. Animals raised in the dirty, crowded environments of commercial farms are generally less healthy (and therefore weigh less) than their counterparts born and bred in cleaner spaces. In turkeys, for example, this phenomenon is known as "poult growth depression", or PGD. Supplementing the diets of these young animals with L. reuteri helps them to largely overcome the stresses imposed by unhealthy environs. Commercial turkeys fed L. reuteri from birth had nearly a 10% higher adult body weight than their peers raised in the same conditions. A similar study on piglets showed L. reuteri is at least as effective as synthetic antibiotics in improving body weight under crowded conditions. The mechanism by which L. reuteri is able to support healthy growth is not entirely understood. It possibly serves to protect against illness caused by S. typhimurium and other pathogens (see above), which are much more common in crowded commercial farms. However, other studies found that it can help when the growth depression is caused entirely by a lack of dietary protein, and not by contagious disease. This raises the possibility that L. reuteri somehow improves the intestines' ability to absorb and process nutrients. Chemical and trauma-induced injury Treating colonic tissue from rats with acetic acid causes an injury similar to the human condition ulcerative colitis. Treating the injured tissue with L. reuteri immediately after removing the acid almost completely reverses any ill effects, leading to the possibility that L. reuteri may be beneficial in the treatment of human colitis patients. In addition to its role in digestion, the intestinal wall is also vital in preventing harmful bacteria, endotoxins, etc., from "leaking" into the bloodstream. This leaking, known as bacterial "translocation", can lead to lethal conditions such as sepsis. In humans, translocation is more likely to occur following such events as liver injury and ingestion of some poisons. In rodent studies, L. reuteri was found to greatly reduce the amount of bacterial translocation following either the surgical removal of the liver or injection with D-galactosamine, a chemical which causes liver damage. The anticancer drug methotrexate causes severe enterocolitis in high doses. L. reuteri greatly mitigates the symptoms of methotrexate-induced enterocolitis in rats, one of which is bacterial translocation. Links to fat in diet of mice, and reversible symptoms of behavioral abnormalities In mice, the absence of L. reuteri has been causally linked to maternal diet. A gut microbial imbalance, lacking in L. reuteri, was linked to behavioral abnormalities consistent with autism in humans. These symptoms were reversible by supplementing L. reuteri. References External links Joint Genome Institute on L. reuteri Type strain of Lactobacillus reuteri at BacDive - the Bacterial Diversity Metadatabase Digestive system Probiotics Lactobacillaceae Gut flora bacteria Bacteria described in 1982
Limosilactobacillus reuteri
[ "Biology" ]
3,091
[ "Digestive system", "Organ systems", "Gut flora bacteria", "Bacteria" ]
6,031,956
https://en.wikipedia.org/wiki/Muscularis%20mucosae
The muscularis mucosae (or lamina muscularis mucosae) is a thin layer (lamina) of muscle of the gastrointestinal tract, located outside the lamina propria, and separating it from the submucosa. It is present in a continuous fashion from the esophagus to the upper rectum (the exact nomenclature of the rectum's muscle layers is still being debated). A discontinuous muscularis mucosae–like muscle layer is present in the urinary tract, from the renal pelvis to the bladder; as it is discontinuous, it should not be regarded as a true muscularis mucosae. In the gastrointestinal tract, the term mucosa or mucous membrane refers to the combination of epithelium, lamina propria, and (where it occurs) muscularis mucosae. The etymology suggests this, since the Latin names translate to "the mucosa's own special layer" (lamina propria mucosae) and "muscular layer of the mucosa" (lamina muscularis mucosae). The muscularis mucosae is composed of several thin layers of smooth muscle fibers oriented in different ways which keep the mucosal surface and underlying glands in a constant state of gentle agitation to expel contents of glandular crypts and enhance contact between epithelium and the contents of the lumen. Additional images References Stacey E. Mills — Histology for Pathologists: 3rd (third) Edition, page 670. External links  — "Lung" - "Mammal, whole system (LM, Low)" Digestive system
Muscularis mucosae
[ "Biology" ]
346
[ "Digestive system", "Organ systems" ]
6,032,086
https://en.wikipedia.org/wiki/See-through%20clothing
See-through clothing is any garment made with lace, mesh or sheer fabric that allows the wearer's body or undergarments to be seen through its fabric. See-through fabrics were fashionable in Europe in the eighteenth century. There was a "sheer fashion trend" starting with designer clothing from 2008. See-through or sheer fabric, particularly in skintone (called "nude") colours, is sometimes called illusion, as in 'illusion bodice' (or sleeve) due to giving the impression of exposed flesh, or a revealing ensemble. Mesh, web, or net fabric may have many connected or woven pieces with many closely spaced holes, frequently used for modern sports jerseys. A sheer fabric is a thin cloth which is semi-transparent. These include chiffon, georgette, and gauze. Some are fine-denier knits used in tights, stockings, bodystockings, dancewear and lingerie. Sheer fabric can also be used in tops, pants, skirts, dresses, and gowns. Latex rubber, which is naturally translucent, or plastics, can be made into clothing material of any level of transparency. Clear plastic is typically only found in over-garments, such as raincoats. The use of translucent latex rubber for clothing can also be found in fetish clothing. Some materials become transparent when wet or when extreme light is shone on them, such as by a flashbulb. 18th and 19th centuries During the 1770s and 1780s, there was a fashion for wrap-over dresses which were sometimes worn by actresses in Oriental roles. These were criticised by Horace Walpole among others for resembling dressing gowns too closely, while others objected to their revealingly thin materials, such as silk gauze and muslin. In the 1780s the chemise a la Reine, as worn by Marie Antoinette in a notorious portrait of 1783 by Élisabeth Vigée Le Brun, became very popular. It was a filmy white muslin dress similar to the undergarment also called a chemise. In 1784 Abigail Adams visited Paris, where she was shocked to observe that fashionable Frenchwomen, including Madame Helvétius, favoured the more revealing and sheer versions of this gown. By the end of the 1790s, Louis-Sébastien Mercier, observing the dress of Frenchwomen, noted that they were dressing in a manner he described as "a la sauvage", comprising a semi-sheer muslin gown worn only over a flesh-coloured bodystocking, with the breasts, arms and feet bare. Mercier blamed the public display of nude or lightly draped statues for encouraging this immodesty. In the very late 18th century and for the first decade of the 19th, neoclassical gowns made of lightweight translucent muslin were fashionable. As the fabric clung to the body, revealing what was beneath, it made nudity à la grecque a centrepiece of public spectacle. The concept of transparency in women's dress was often satirised by caricaturists of the day such as Isaac Cruikshank. Throughout the 19th century, women's dresses, particularly for summer or evening wear, often featured transparent fabrics. However, these were almost always lined or worn over opaque undergarments or an underdress so that the wearer's modesty was preserved. Gallery Marie Antoinette in a Muslin Dress, or Chemise a la Reine, by Vigée Le Brun or Absolutely no agreement by Louis-Léopold Boilly. An is shown propositioning a woman dressed a la sauvage 1807 caricature showing an exaggeratedly transparent dress. Portrait of Lady Elizabeth Leveson-Gower, showing a sheer gauze overdress with long sleeves over a white silk underdress. Fashion plate showing a ball dress of sheer material over a pink underdress. 
Portrait of Elena Chertkova Stroganova in a black satin dress with transparent white gauze sleeves. Portrait of two sisters by James Tissot showing a muslin summer dress with a transparent bodice clearly showing the arms and a low-necked camisole. The Gallery of H.M.S. Calcutta by Tissot. Summer dresses of sheer fabric, one with clearly visible low-cut back lining. Portrait of Sonja Knips by Gustav Klimt. Afternoon dress in densely gathered sheer pink chiffon over a solid foundation lining. 20th century 1900s–1910s A fashionable garment in the early 20th century was the "peekaboo waist", a blouse made from or sheer fabric, which led to complaints that flesh could be seen through the eyelets in the embroidery or through the thin fabric. In 1913 the so-called "x-ray dress", defined as a woman's dress that was considered to be too sheer or revealing, caused similar consternation. In August that year, the chief of police of Los Angeles stated his intention to recommend a law banning women from wearing the "diaphanous" x‑ray dress on the streets. H. Russell Albee, the mayor of Portland, Oregon, ordered the arrest of any woman caught wearing an x‑ray dress on the street, which was defined as a gown cut too low at the neck or split to the knee. The following year in 1914, Jean-Philippe Worth, designer for the renowned Paris couture House of Worth, had a client object to the thickness of the taffeta lining of her dress, which was described as "thinner than a cigarette paper". Worth stated that using an even thinner, sheerer lining fabric would have had the effect of an "x-ray dress". In Australia, an article was published in The Daily Telegraph on the 24 November 1913 strongly opposed to "freak dresses" and "peek-a-boo blouses" that had lately become the fashion in "other Capitals". The editorial complains of dresses of "exiguous transparancy and undue scantiness" and "the low-cut blouse that invites pneumonia". 1960s See-through and transparent clothing became very fashionable in the latter part of the 1960s. In 1967, Missoni presented a show at the in Florence, where Rosita Missoni noticed the models' bras showed through their knit dresses and requested they remove them. However, under the catwalk lights, the garments became unexpectedly transparent, revealing nude breasts beneath. The see-through look was subsequently presented by Yves Saint Laurent the following year, and in London, Ossie Clark presented sheer chiffon dresses intended to be worn without underwear. The trend led to jewellery designers such as Daniel Stoenescu at Cadoro creating "body jewellery" to be worn with sheer blouses and low-cut dresses. Stoenescu designed metal filigree "breastplates" inspired by a statue of Venus found at Pompeii, which functioned like a brassiere and were designed to be visible through the transparent shirts while preserving the wearer's modesty. 1970s Punk rock artist Patti Smith wears a see-through slip inside-out on the cover of her 1978 album Easter. 21st century fashion A see-through dress worn by Kate Middleton, princess of Wales, to a charity fashion show in 2002 was sold at auction on 17 March 2011 for $127,500. See-through materials of various kinds continue to be available for a wide range of clothing styles. See-through fabrics have been featured heavily on high-fashion runways since 2006. This use of see-through fabrics as a common element in designer clothing resulted in the "sheer fashion trend" that has been predominant in fashion circles since 2008. 
In 2021, Megan Fox wore a see-through dress that revealed her torso and lingerie at the 2021 MTV Video Music Awards. In 2023, Fox wore another see-through dress, this one black, which showed her midriff in more detail, with her navel clearly visible. See also Bralessness Fetish fashion Fishnet Lace Plastic clothing Sheer fabric Skin-tight garment Stockings Wet T-shirt contest References Clothing by material Fetish clothing Modesty Transparent materials
See-through clothing
[ "Physics" ]
1,647
[ "Physical phenomena", "Optical phenomena", "Materials", "Transparent materials", "Matter" ]
6,032,227
https://en.wikipedia.org/wiki/Cardanolide
Cardanolide is a steroid with a molecular weight of 344.54 g/mol. See also Cardiac glycoside External links Steroids
Cardanolide
[ "Chemistry" ]
33
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
1,787,483
https://en.wikipedia.org/wiki/Mina%20%28unit%29
The mina (; ; ; ; ; ; ) is an ancient Near Eastern unit of weight for silver or gold, equivalent to approximately , which was divided into 60 shekels. The mina, like the shekel, eventually also became a unit of currency. History Sumerian From earliest Sumerian times, a mina was a unit of weight. At first, talents and shekels had not yet been introduced. By the time of Ur-Nammu (shortly before 2000 BCE), the mina had a value of talent as well as 60 shekels. The weight of this mina is calculated at , or 570 grams of silver (18 troy ounces). Semitic languages The word mina comes from the ancient Semitic root / 'to count', Akkadian , (), / (/), (), . It is mentioned in the Bible, where Solomon is reported to have made 300 shields, each with 3 "mina" of gold (), or later after the Edict of Cyrus II of Persia the people are reported to have donated 5000 minas of silver for the reconstruction of Solomon's Temple in Jerusalem. In the Code of Hammurabi which is considered one of the first examples of written law, the mina is one of the most used terms denoting the weight of gold to be paid for crimes or to resolve civil conflicts. In the Biblical story of Belshazzar's feast, the words mene, mene, tekel, upharsin appear on the wall (Daniel 5:25), which according to one interpretation can mean "mina, mina, shekel, and half-pieces", although Daniel interprets the words differently for King Belshazzar. Writings from Ugarit give the value of a mina as equivalent to fifty shekels. The prophet Ezekiel refers to a mina (maneh in the King James Version) also as 60 shekels, in the Book of Ezekiel 45:12. Jesus of Nazareth tells the "parable of the minas" in Luke 19:11–27, also told as the "parable of the talents" in Matthew 25:14–30. In later Jewish usage, the is equal in weight to 100 . From the Akkadian period, 2 mina was equal to 1 of water (cf. clepsydra, water clock). Greek In ancient Greece, the mina was known as the (). It originally equalled 70 drachmae but later, at the time of the statesman Solon (c. 594 BC), was increased to 100 drachmae. The Greek word () was borrowed from Semitic. Different city states used minae of different weights. The Aeginetan mina weighed . The Attic mina weighed . In Solon's day, according to Plutarch, the price of a sheep was one drachma or a medimnos (about 40 kg) of wheat. Thus a mina was worth 100 sheep. Latin The word also occurs in Latin literature, but mainly in plays of Plautus and Terence adapted from Greek originals. In Terence's play Heauton Timorumenos, adapted from a play of the same name by the Greek playwright Menander, a certain sum of money is referred to in one place as "ten minae" (line 724) and in another as "1000 drachmas of silver" (line 601). Usually the word referred to a mina of silver, but Plautus also twice mentions a mina of gold. In the 4th century BC, gold was worth about 10 times the same weight of silver. In Plautus, 20 minae is mentioned as the price of buying a slave. It was also the price of hiring a courtesan for a year. 40 minae is given as the price of a house. In classical Latin the approximate equivalent of a mina was the (the word also meant "balance" or "weighing scales"). With a weight of only , however, the Roman was lighter than either a Greek mina or a modern pound of 16 ounces. It was divided into 12 Roman ounces. Sometimes the word was used together with the word "in weight", e.g. "a pound in weight" (Livy, 3.29); but often was used alone; e.g. "five (pounds) in weight of gold" (Cicero, pro Cluentio 179). 
Hence the word by itself came to mean "pound(s)". From Latin comes the English word "pound", and from come the abbreviations "lb" (for weight) and the pound sign "£" (for money). Images Notes References Bibliography Units of mass Ancient Near East Coins of ancient Greece Pound (currency)
Mina (unit)
[ "Physics", "Mathematics" ]
973
[ "Matter", "Quantity", "Units of mass", "Mass", "Units of measurement" ]
1,789,624
https://en.wikipedia.org/wiki/Dewetting
In fluid mechanics, dewetting is one of the processes that can occur at a solid–liquid, solid–solid or liquid–liquid interface. Generally, dewetting describes the process of retraction of a fluid from a non-wettable surface it was forced to cover. The opposite process—spreading of a liquid on a substrate—is called wetting. The factor determining the spontaneous spreading and dewetting for a drop of liquid placed on a solid substrate with ambient gas, is the so-called spreading coefficient : where is the solid-gas surface tension, is the solid-liquid surface tension and is the liquid-gas surface tension (measured for the mediums before they are brought in contact with each other). When , the spontaneous spreading occurs, and if , partial wetting is observed, meaning the liquid will only cover the substrate to some extent. The equilibrium contact angle is determined from the Young–Laplace equation. Spreading and dewetting are important processes for many applications, including adhesion, lubrication, painting, printing, and protective coating. For most applications, dewetting is an unwanted process, because it destroys the applied liquid film. Dewetting can be inhibited or prevented by photocrosslinking the thin film prior to annealing, or by incorporating nanoparticle additives into the film. Surfactants can have a significant effect on the spreading coefficient. When a surfactant is added, its amphiphilic properties cause it to be more energetically favorable to migrate to the surface, decreasing the interfacial tension and thus increasing the spreading coefficient (i.e. making S more positive). As more surfactant molecules are absorbed into the interface, the free energy of the system decreases in tandem to the surface tension decreasing, eventually causing the system to become completely wetting. In biology, by analogy with the physics of liquid dewetting, the process of tunnel formation through endothelial cells has been referred to as cellular dewetting. Dewetting of polymer thin films In most dewetting studies a thin polymer film is spin-cast onto a substrate. Even in the case of the film does not dewet immediately if it is in a metastable state, e.g. if the temperature is below the glass transition temperature of the polymer. Annealing such a metastable film above its glass transition temperature increases the mobility of the polymer-chain molecules and dewetting takes place. The process of dewetting occurs by the nucleation and growth of randomly formed holes, which coalesce to form a network of filaments, before breaking into droplets. When starting from a continuous film, an irregular pattern of droplets is formed. The droplet size and droplet spacing may vary over several orders of magnitude, since the dewetting starts from randomly formed holes in the film. There is no spatial correlation between the dry patches that develop. These dry patches grow and the material is accumulated in the rim surrounding the growing hole. In the case where the initially homogeneous film is thin (in the range of 100 nm), a polygon network of connected strings of material is formed, like a Voronoi pattern of polygons. These strings then can break up into droplets, a process which is known as the Plateau-Rayleigh instability. At other film thicknesses, other complicated patterns of droplets on the substrate can be observed, which stem from a fingering instability of the growing rim around the dry patch. 
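The sign criterion for the spreading coefficient introduced at the start of this article can be checked numerically. The following Python sketch is illustrative only: the surface tension values are assumed round numbers rather than data for any particular system. It evaluates S = γ_sg − γ_sl − γ_lg and, when wetting is only partial, the equilibrium contact angle given by Young's equation, cos θ = (γ_sg − γ_sl)/γ_lg.

```python
# Illustrative sketch: spreading coefficient and Young contact angle.
# Surface tensions (N/m) are assumed round values, not data for a specific system.
import math

def spreading_coefficient(gamma_sg, gamma_sl, gamma_lg):
    return gamma_sg - gamma_sl - gamma_lg

def young_contact_angle(gamma_sg, gamma_sl, gamma_lg):
    """Equilibrium contact angle (degrees) from Young's equation."""
    c = (gamma_sg - gamma_sl) / gamma_lg
    if c >= 1.0:
        return 0.0          # complete wetting
    if c <= -1.0:
        return 180.0        # complete dewetting
    return math.degrees(math.acos(c))

cases = {
    "spreads (S > 0)":         (0.073, 0.010, 0.050),
    "partial wetting (S < 0)": (0.040, 0.020, 0.072),
}
for label, (g_sg, g_sl, g_lg) in cases.items():
    s = spreading_coefficient(g_sg, g_sl, g_lg)
    theta = young_contact_angle(g_sg, g_sl, g_lg)
    print(f"{label}: S = {s:+.3f} N/m, contact angle ~ {theta:5.1f} degrees")
```

With the first set of assumed values S is positive and the drop spreads completely; with the second it is negative and a finite contact angle of roughly 74 degrees results, the partial wetting case in which a forced film can later dewet.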
Dewetting of metal thin films Solid-state dewetting of metal thin films describes the transformation of a thin film into an energetically favoured set of droplets or particles at temperatures well below the melting point. The driving force for dewetting is the minimization of the total energy of the free surfaces of the film and substrate as well as of the film-substrate interface. Dedicated heating stages in scanning electron microscopes (SEM) are widely used to control the sample temperature accurately via a thermocouple, so that the in-situ behaviour of the material can be observed and recorded as video. At the same time, the two-dimensional morphology can be directly observed and characterised. For example, a partially dewetted Ni film is itself a workable fuel electrode for SOCs, since it provides long triple-phase-boundary (TPB) lines if the structure is fine enough; the connectivity of the nickel and pore phases, as well as the TPB lines, can be used for SOFC characterisation. References External links Fluid mechanics Surface science
Dewetting
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
915
[ "Civil engineering", "Fluid mechanics", "Condensed matter physics", "Surface science" ]
20,464,688
https://en.wikipedia.org/wiki/Xiao-Gang%20Wen
Xiao-Gang Wen (; born November 26, 1961) is a Chinese-American physicist. He is a Cecil and Ida Green Professor of Physics at the Massachusetts Institute of Technology and Distinguished Visiting Research Chair at the Perimeter Institute for Theoretical Physics. His expertise is in condensed matter theory in strongly correlated electronic systems. In October 2016, he was awarded the Oliver E. Buckley Condensed Matter Prize. He is the author of a book in advanced quantum many-body theory entitled Quantum Field Theory of Many-body Systems: From the Origin of Sound to an Origin of Light and Electrons (Oxford University Press, 2004). Early life and education Wen attended the University of Science and Technology of China and earned a B.S. in Physics in 1982. In 1982, Wen came to the US for graduate school via the CUSPEA program, which was organized by Prof. T. D. Lee. He attended Princeton University, from which he attained an M.A. in Physics in 1983 and a Ph.D. in Physics in 1987. Work Wen studied superstring theory under theoretical physicist Edward Witten at Princeton University where he received his Ph.D. degree in 1987. He later switched his research field to condensed matter physics while working with theoretical physicists Robert Schrieffer, Frank Wilczek and Anthony Zee at the Institute for Theoretical Physics, UC Santa Barbara (1987–1989). Wen introduced the notion of topological order (1989) and quantum order (2002) to describe a new class of matter states. This opened up a new research direction in condensed matter physics. He found that states with topological order contain non-trivial boundary excitations and developed chiral Luttinger theory for the boundary states (1990). Boundary states can become ideal conduction channels which may lead to device applications of topological phases. He proposed the simplest topological order, Z2 topological order (1990), which turns out to be the topological order in the toric code. He also proposed a special class of topological order: non-Abelian quantum Hall states. They contain emergent particles with non-Abelian statistics which generalizes the well-known Bose and Fermi statistics. Non-Abelian particles may allow us to perform fault-tolerant quantum computations. With Michael Levin, he found that string-net condensations can give rise to a large class of topological orders (2005). In particular, string-net condensation provides a unified origin of photons, electrons, and other elementary particles (2003). It unifies two fundamental phenomena: gauge interactions and Fermi statistics. He pointed out that topological order is nothing but the pattern of long-range entanglement. This led to a notion of symmetry-protected topological (SPT) order (short-range entangled states with symmetry) and its description by group cohomology of the symmetry group (2011). The notion of SPT order generalizes the notion of topological insulator to interacting cases. He also proposed the SU(2) gauge theory of high-temperature superconductors (1996). Professional record Professor, MIT, 2000–present Isaac Newton Research Chair, Perimeter Institute for Theoretical Physics, 2012–2014 Associate professor, MIT, 1995–2000 Assistant professor, MIT, 1991–1995 Five-year member of IAS, 1989–1991 Member of ITP, UC Santa Barbara, 1987–1989 Honors A.P. 
Sloan Foundation fellow (1992) Overseas Chinese Physics Association outstanding young researcher award (1994) Changjiang professor, Center for Advanced Study, Tsinghua University (2000—2004) Fellow of American Physical Society (2002) Cecil and Ida Green Professor of Physics, MIT (2004—present) Distinguished Moore Scholar, Caltech (2006) Distinguished Research Chair, Perimeter Institute (2009) Isaac Newton Chair, Perimeter Institute (announced Sep 2011) 2017 Oliver E. Buckley Condensed Matter Prize (announced Oct. 2016) Member of National Academy of Sciences (2018) 2018 Dirac Medal of the ICTP Selected publications See also Topological order String-net Topological entanglement entropy References External links https://xgwen.mit.edu http://physics.stackexchange.com/users/9444/xiao-gang-wen 1961 births Living people 21st-century American physicists Chinese emigrants to the United States Massachusetts Institute of Technology School of Science faculty Princeton University alumni Theoretical physicists University of Science and Technology of China alumni Members of the United States National Academy of Sciences Physicists from Shaanxi People from Xi'an Educators from Shaanxi Sloan Research Fellows Fellows of the American Physical Society Oliver E. Buckley Condensed Matter Prize winners
Xiao-Gang Wen
[ "Physics" ]
935
[ "Theoretical physics", "Theoretical physicists" ]
20,468,824
https://en.wikipedia.org/wiki/De%20Sitter%20invariant%20special%20relativity
In mathematical physics, de Sitter invariant special relativity is the speculative idea that the fundamental symmetry group of spacetime is the indefinite orthogonal group SO(4,1), that of de Sitter space. In the standard theory of general relativity, de Sitter space is a highly symmetrical special vacuum solution, which requires a cosmological constant or the stress–energy of a constant scalar field to sustain. The idea of de Sitter invariant relativity is to require that the laws of physics are not fundamentally invariant under the Poincaré group of special relativity, but under the symmetry group of de Sitter space instead. With this assumption, empty space automatically has de Sitter symmetry, and what would normally be called the cosmological constant in general relativity becomes a fundamental dimensional parameter describing the symmetry structure of spacetime. First proposed by Luigi Fantappiè in 1954, the theory remained obscure until it was rediscovered in 1968 by Henri Bacry and Jean-Marc Lévy-Leblond. In 1972, Freeman Dyson popularized it as a hypothetical road by which mathematicians could have guessed part of the structure of general relativity before it was discovered. The discovery of the accelerating expansion of the universe has led to a revival of interest in de Sitter invariant theories, in conjunction with other speculative proposals for new physics, like doubly special relativity. Introduction De Sitter suggested that spacetime curvature might not be due solely to gravity but he did not give any mathematical details of how this could be accomplished. In 1968 Henri Bacry and Jean-Marc Lévy-Leblond showed that the de Sitter group was the most general group compatible with isotropy, homogeneity and boost invariance. Later, Freeman Dyson advocated this as an approach to making the mathematical structure of general relativity more self-evident. Minkowski's unification of space and time within special relativity replaces the Galilean group of Newtonian mechanics with the Lorentz group. This is called a unification of space and time because the Lorentz group is simple, while the Galilean group is a semi-direct product of rotations and Galilean boosts. This means that the Lorentz group mixes up space and time such that they cannot be disentangled, while the Galilean group treats time as a parameter with different units of measurement than space. An analogous thing can be made to happen with the ordinary rotation group in three dimensions. If you imagine a nearly flat world, one in which pancake-like creatures wander around on a pancake flat world, their conventional unit of height might be the micrometre (μm), since that is how high typical structures are in their world, while their unit of distance could be the metre, because that is their body's horizontal extent. Such creatures would describe the basic symmetry of their world as SO(2), being the known rotations in the horizontal (x–y) plane. Later on, they might discover rotations around the x- and y-axes—and in their everyday experience such rotations might always be by an infinitesimal angle, so that these rotations would effectively commute with each other. The rotations around the horizontal axes would tilt objects by an infinitesimal amount. The tilt in the x–z plane (the "x-tilt") would be one parameter, and the tilt in the y–z plane (the "y-tilt") another. 
The symmetry group of this pancake world is then SO(2) semidirect product with R2, meaning that a two-dimensional rotation plus two extra parameters, the x-tilt and the y-tilt. The reason it is a semidirect product is that, when you rotate, the x-tilt and the y-tilt rotate into each other, since they form a vector and not two scalars. In this world, the difference in height between two objects at the same x, y would be a rotationally invariant quantity unrelated to length and width. The z-coordinate is effectively separate from x and y. Eventually, experiments at large angles would convince the creatures that the symmetry of the world is SO(3). Then they would understand that z is really the same as x and y, since they can be mixed up by rotations. The SO(2) semidirect product R2 limit would be understood as the limit that the free parameter μ, the ratio of the height range μm to the length range m, approaches 0. The Lorentz group is analogous—it is a simple group that turns into the Galilean group when the time range is made long compared to the space range, or where velocities may be regarded as infinitesimal, or equivalently, may be regarded as the limit , where relativistic effects become observable "as good as at infinite velocity". The symmetry group of special relativity is not entirely simple, due to translations. The Lorentz group is the set of the transformations that keep the origin fixed, but translations are not included. The full Poincaré group is the semi-direct product of translations with the Lorentz group. If translations are to be similar to elements of the Lorentz group, then as boosts are non-commutative, translations would also be non-commutative. In the pancake world, this would manifest if the creatures were living on an enormous sphere rather than on a plane. In this case, when they wander around their sphere, they would eventually come to realize that translations are not entirely separate from rotations, because if they move around on the surface of a sphere, when they come back to where they started, they find that they have been rotated by the holonomy of parallel transport on the sphere. If the universe is the same everywhere (homogeneous) and there are no preferred directions (isotropic), then there are not many options for the symmetry group: they either live on a flat plane, or on a sphere with a constant positive curvature, or on a Lobachevski plane with constant negative curvature. If they are not living on the plane, they can describe positions using dimensionless angles, the same parameters that describe rotations, so that translations and rotations are nominally unified. In relativity, if translations mix up nontrivially with rotations, but the universe is still homogeneous and isotropic, the only option is that spacetime has a uniform scalar curvature. If the curvature is positive, the analog of the sphere case for the two-dimensional creatures, the spacetime is de Sitter space and its symmetry group is the de Sitter group rather than the Poincaré group. De Sitter special relativity postulates that the empty space has de Sitter symmetry as a fundamental law of nature. This means that spacetime is slightly curved even in the absence of matter or energy. This residual curvature implies a positive cosmological constant to be determined by observation. Due to the small magnitude of the constant, special relativity with its Poincaré group is indistinguishable from de Sitter space for most practical purposes. Modern proponents of this idea, such as S. 
Cacciatori, V. Gorini and A. Kamenshchik, have reinterpreted this theory as physics, not just mathematics. They postulate that the acceleration of the expansion of the universe is not entirely due to vacuum energy, but at least partly due to the kinematics of the de Sitter group, which would replace the Poincaré group. A modification of this idea allows to change with time, so that inflation may come from the cosmological constant being larger near the Big Bang than nowadays. It can also be viewed as a different approach to the problem of quantum gravity. High energy The Poincaré group contracts to the Galilean group for low-velocity kinematics, meaning that when all velocities are small the Poincaré group "morphs" into the Galilean group. (This can be made precise with İnönü and Wigner's concept of group contraction.) Similarly, the de Sitter group contracts to the Poincaré group for short-distance kinematics, when the magnitudes of all translations considered are very small compared to the de Sitter radius. In quantum mechanics, short distances are probed by high energies, so that for energies above a very small value related to the cosmological constant, the Poincaré group is a good approximation to the de Sitter group. In de Sitter relativity, the cosmological constant is no longer a free parameter of the same type; it is determined by the de Sitter radius, a fundamental quantity that determines the commutation relation of translation with rotations/boosts. This means that the theory of de Sitter relativity might be able to provide insight on the value of the cosmological constant, perhaps explaining the cosmic coincidence. Unfortunately, the de Sitter radius, which determines the cosmological constant, is an adjustable parameter in de Sitter relativity, so the theory requires a separate condition to determine its value in relation to the measurement scale. When a cosmological constant is viewed as a kinematic parameter, the definitions of energy and momentum must be changed from those of special relativity. These changes could significantly modify the physics of the early universe if the cosmological constant was greater back then. Some speculate that a high energy experiment could modify the local structure of spacetime from Minkowski space to de Sitter space with a large cosmological constant for a short period of time, and this might eventually be tested in the existing or planned particle collider. Doubly special relativity Since the de Sitter group naturally incorporates an invariant length parameter, de Sitter relativity can be interpreted as an example of the so-called doubly special relativity. There is a fundamental difference, though: whereas in all doubly special relativity models the Lorentz symmetry is violated, in de Sitter relativity it remains as a physical symmetry. A drawback of the usual doubly special relativity models is that they are valid only at the energy scales where ordinary special relativity is supposed to break down, giving rise to a patchwork relativity. On the other hand, de Sitter relativity is found to be invariant under a simultaneous re-scaling of mass, energy and momentum, and is consequently valid at all energy scales. A relationship between doubly special relativity, de Sitter space and general relativity is described by Derek Wise. See also MacDowell–Mansouri action. Newton–Hooke: de Sitter special relativity in the limit v ≪ c In the limit as , the de Sitter group contracts to the Newton–Hooke group. 
This has the effect that in the nonrelativistic limit, objects in de Sitter space have an extra "repulsion" from the origin: objects have a tendency to move away from the center with an outward pointing fictitious force proportional to their distance from the origin. While it looks as though this might pick out a preferred point in space—the center of repulsion, it is more subtly isotropic. Moving to the uniformly accelerated frame of reference of an observer at another point, all accelerations appear to have a repulsion center at the new point. What this means is that in a spacetime with non-vanishing curvature, gravity is modified from Newtonian gravity. At distances comparable to the radius of the space, objects feel an additional linear repulsion from the center of coordinates. History of de Sitter invariant special relativity "de Sitter relativity" is the same as the theory of "projective relativity" of Luigi Fantappiè and Giuseppe Arcidiacono first published in 1954 by Fantappiè and the same as another independent discovery in 1976. In 1968 Henri Bacry and Jean-Marc Lévy-Leblond published a paper on possible kinematics In 1972 Freeman Dyson further explored this. In 1973 Eliano Pessa described how Fantappié–Arcidiacono projective relativity relates to earlier conceptions of projective relativity and to Kaluza Klein theory. R. Aldrovandi, J.P. Beltrán Almeida and J.G. Pereira have used the terms "de Sitter special relativity" and "de Sitter relativity" starting from their 2007 paper "de Sitter special relativity". This paper was based on previous work on amongst other things: the consequences of a non-vanishing cosmological constant, on doubly special relativity and on the Newton–Hooke group and early work formulating special relativity with a de Sitter space In 2008 S. Cacciatori, V. Gorini and A. Kamenshchik published a paper about the kinematics of de Sitter relativity. Papers by other authors include: dSR and the fine structure constant; dSR and dark energy; dSR Hamiltonian Formalism; and De Sitter Thermodynamics from Diamonds's Temperature, Triply special relativity from six dimensions, Deformed General Relativity and Torsion. Quantum de Sitter special relativity There are quantized or quantum versions of de Sitter special relativity. Early work on formulating a quantum theory in a de Sitter space includes: See also Noncommutative geometry Quantum field theory in curved spacetime References Further reading Special relativity General relativity Physical cosmology Quantum gravity Kinematics Riemannian geometry Group theory
De Sitter invariant special relativity
[ "Physics", "Astronomy", "Mathematics", "Technology" ]
2,748
[ "Physical phenomena", "Unsolved problems in physics", "Physics beyond the Standard Model", "Astronomical sub-disciplines", "Kinematics", "Group theory", "Motion (physics)", "Theoretical physics", "General relativity", "Fields of abstract algebra", "Mechanics", "Theory of relativity", "Physic...
20,474,388
https://en.wikipedia.org/wiki/Lax%E2%80%93Friedrichs%20method
The Lax–Friedrichs method, named after Peter Lax and Kurt O. Friedrichs, is a numerical method for the solution of hyperbolic partial differential equations based on finite differences. The method can be described as the FTCS (forward in time, centered in space) scheme with a numerical dissipation term of 1/2. One can view the Lax–Friedrichs method as an alternative to Godunov's scheme, where one avoids solving a Riemann problem at each cell interface, at the expense of adding artificial viscosity. Illustration for a Linear Problem Consider a one-dimensional, linear hyperbolic partial differential equation for of the form: on the domain with initial condition and the boundary conditions If one discretizes the domain to a grid with equally spaced points with a spacing of in the -direction and in the -direction, we introduce an approximation of where are integers representing the number of grid intervals. Then the Lax–Friedrichs method to approximate the partial differential equation is given by: Or, rewriting this to solve for the unknown Where the initial values and boundary nodes are taken from Extensions to Nonlinear Problems A nonlinear hyperbolic conservation law is defined through a flux function : In the case of , we end up with a scalar linear problem. Note that in general, is a vector with equations in it. The generalization of the Lax-Friedrichs method to nonlinear systems takes the form This method is conservative and first order accurate, hence quite dissipative. It can, however be used as a building block for building high-order numerical schemes for solving hyperbolic partial differential equations, much like Euler time steps can be used as a building block for creating high-order numerical integrators for ordinary differential equations. We note that this method can be written in conservation form: where Without the extra terms and in the discrete flux, , one ends up with the FTCS scheme, which is well known to be unconditionally unstable for hyperbolic problems. Stability and accuracy This method is explicit and first order accurate in time and first order accurate in space ( provided are sufficiently-smooth functions. Under these conditions, the method is stable if and only if the following condition is satisfied: (A von Neumann stability analysis can show the necessity of this stability condition.) The Lax–Friedrichs method is classified as having second-order dissipation and third order dispersion. For functions that have discontinuities, the scheme displays strong dissipation and dispersion; see figures at right. References Numerical differential equations Computational fluid dynamics
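The scheme described above for the linear problem is short enough to state in code. The following Python sketch is a minimal illustration for the linear advection equation u_t + a·u_x = 0 with an assumed constant speed, periodic boundary conditions and a CFL-limited time step; it is not a general-purpose solver. Each step applies the Lax–Friedrichs update u_i^(n+1) = (u_(i+1)^n + u_(i-1)^n)/2 − (a·Δt / 2Δx)·(u_(i+1)^n − u_(i-1)^n).

```python
# Lax-Friedrichs for linear advection u_t + a*u_x = 0 on a periodic domain.
# Minimal sketch with assumed parameters; stability requires |a|*dt/dx <= 1.
import math

def lax_friedrichs_advection(u, a, dx, dt, n_steps):
    nu = a * dt / (2.0 * dx)
    for _ in range(n_steps):
        up = u[1:] + [u[0]]      # u_{i+1} with periodic wrap-around
        um = [u[-1]] + u[:-1]    # u_{i-1} with periodic wrap-around
        u = [0.5 * (p + m) - nu * (p - m) for p, m in zip(up, um)]
    return u

# Assumed setup: 200 cells on [0, 1), advection speed a = 1, CFL number 0.9.
n, a = 200, 1.0
dx = 1.0 / n
dt = 0.9 * dx / abs(a)
u0 = [math.sin(2.0 * math.pi * i * dx) for i in range(n)]

u_final = lax_friedrichs_advection(u0, a, dx, dt, n_steps=int(0.5 / dt))
print("max |u| after advecting half a period:", max(abs(v) for v in u_final))
```

With a smooth initial profile and a CFL number near 1 the damping per period is modest; for discontinuous data, or at smaller CFL numbers, the dissipation and dispersion noted above become much more pronounced.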
Lax–Friedrichs method
[ "Physics", "Chemistry" ]
524
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
3,390,453
https://en.wikipedia.org/wiki/Regular%20measure
In mathematics, a regular measure on a topological space is a measure for which every measurable set can be approximated from above by open measurable sets and from below by compact measurable sets. Definition Let (X, T) be a topological space and let Σ be a σ-algebra on X. Let μ be a measure on (X, Σ). A measurable subset A of X is said to be inner regular if This property is sometimes referred to in words as "approximation from within by compact sets." Some authors use the term tight as a synonym for inner regular. This use of the term is closely related to tightness of a family of measures, since a finite measure μ is inner regular if and only if, for all ε > 0, there is some compact subset K of X such that μ(X \ K) < ε. This is precisely the condition that the singleton collection of measures {μ} is tight. It is said to be outer regular if A measure is called inner regular if every measurable set is inner regular. Some authors use a different definition: a measure is called inner regular if every open measurable set is inner regular. A measure is called outer regular if every measurable set is outer regular. A measure is called regular if it is outer regular and inner regular. Examples Regular measures The Lebesgue measure on the real line is a regular measure: see the regularity theorem for Lebesgue measure. Any Baire probability measure on any locally compact σ-compact Hausdorff space is a regular measure. Any Borel probability measure on a locally compact Hausdorff space with a countable base for its topology, or compact metric space, or Radon space, is regular. Inner regular measures that are not outer regular An example of a measure on the real line with its usual topology that is not outer regular is the measure where , , and for any other set . The Borel measure on the plane that assigns to any Borel set the sum of the (1-dimensional) measures of its horizontal sections is inner regular but not outer regular, as every non-empty open set has infinite measure. A variation of this example is a disjoint union of an uncountable number of copies of the real line with Lebesgue measure. An example of a Borel measure on a locally compact Hausdorff space that is inner regular, σ-finite, and locally finite but not outer regular is given by as follows. The topological space has as underlying set the subset of the real plane given by the y-axis together with the points (1/n,m/n2) with m,n positive integers. The topology is given as follows. The single points (1/n,m/n2) are all open sets. A base of neighborhoods of the point (0,y) is given by wedges consisting of all points in X of the form (u,v) with |v − y| ≤ |u| ≤ 1/n for a positive integer n. This space X is locally compact. The measure μ is given by letting the y-axis have measure 0 and letting the point (1/n,m/n2) have measure 1/n3. This measure is inner regular and locally finite, but is not outer regular as any open set containing the y-axis has measure infinity. Outer regular measures that are not inner regular If μ is the inner regular measure in the previous example, and M is the measure given by M(S) = infU⊇S μ(U) where the inf is taken over all open sets containing the Borel set S, then M is an outer regular locally finite Borel measure on a locally compact Hausdorff space that is not inner regular in the strong sense, though all open sets are inner regular so it is inner regular in the weak sense. The measures M and μ coincide on all open sets, all compact sets, and all sets on which M has finite measure. 
The y-axis has infinite M-measure though all compact subsets of it have measure 0. A measurable cardinal with the discrete topology has a Borel probability measure such that every compact subset has measure 0, so this measure is outer regular but not inner regular. The existence of measurable cardinals cannot be proved in ZF set theory but (as of 2013) is thought to be consistent with it. Measures that are neither inner nor outer regular The space of all ordinals at most equal to the first uncountable ordinal Ω, with the topology generated by open intervals, is a compact Hausdorff space. The measure that assigns measure 1 to Borel sets containing an unbounded closed subset of the countable ordinals and assigns 0 to other Borel sets is a Borel probability measure that is neither inner regular nor outer regular. See also Borel regular measure Radon measure Regularity theorem for Lebesgue measure References Bibliography (See chapter 2) Measures (measure theory)
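For reference, the inner- and outer-regularity conditions invoked throughout the article can be written, in the usual textbook formulation (a standard phrasing supplied here as a sketch, not a quotation from this article), as

\[
  \mu(A) \;=\; \sup\{\, \mu(K) : K \subseteq A,\ K \text{ compact and measurable} \,\}
\]
\[
  \mu(A) \;=\; \inf\{\, \mu(U) : U \supseteq A,\ U \text{ open and measurable} \,\},
\]

the first being inner regularity and the second outer regularity of the measurable set A.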
Regular measure
[ "Physics", "Mathematics" ]
1,018
[ "Measures (measure theory)", "Quantity", "Physical quantities", "Size" ]
3,391,336
https://en.wikipedia.org/wiki/Yield%20%28engineering%29
In materials science and engineering, the yield point is the point on a stress–strain curve that indicates the limit of elastic behavior and the beginning of plastic behavior. Below the yield point, a material will deform elastically and will return to its original shape when the applied stress is removed. Once the yield point is passed, some fraction of the deformation will be permanent and non-reversible and is known as plastic deformation. The yield strength or yield stress is a material property and is the stress corresponding to the yield point at which the material begins to deform plastically. The yield strength is often used to determine the maximum allowable load in a mechanical component, since it represents the upper limit to forces that can be applied without producing permanent deformation. For most metals, such as aluminium and cold-worked steel, there is a gradual onset of non-linear behavior, and no precise yield point. In such a case, the offset yield point (or proof stress) is taken as the stress at which 0.2% plastic deformation occurs. Yielding is a gradual failure mode which is normally not catastrophic, unlike ultimate failure. For ductile materials, the yield strength is typically distinct from the ultimate tensile strength, which is the load-bearing capacity for a given material. The ratio of yield strength to ultimate tensile strength is an important parameter for applications such as steel for pipelines, and has been found to be proportional to the strain hardening exponent. In solid mechanics, the yield point can be specified in terms of the three-dimensional principal stresses () with a yield surface or a yield criterion. A variety of yield criteria have been developed for different materials. Definitions It is often difficult to precisely define yielding due to the wide variety of stress–strain curves exhibited by real materials. In addition, there are several possible ways to define yielding: True elastic limit The lowest stress at which dislocations move. This definition is rarely used since dislocations move at very low stresses, and detecting such movement is very difficult. Proportionality limit Up to this amount of stress, stress is proportional to strain (Hooke's law), so the stress-strain graph is a straight line, and the gradient will be equal to the elastic modulus of the material. Elastic limit (yield strength) Beyond the elastic limit, permanent deformation will occur. The elastic limit is, therefore, the lowest stress point at which permanent deformation can be measured. This requires a manual load-unload procedure, and the accuracy is critically dependent on the equipment used and operator skill. For elastomers, such as rubber, the elastic limit is much larger than the proportionality limit. Also, precise strain measurements have shown that plastic strain begins at very low stresses. Yield point The point in the stress-strain curve at which the curve levels off and plastic deformation begins to occur. Offset yield point () When a yield point is not easily defined on the basis of the shape of the stress-strain curve, an offset yield point is arbitrarily defined. The value for this is commonly set at 0.1% or 0.2% plastic strain. The offset value is given as a subscript, e.g., MPa or MPa. For most practical engineering uses, is multiplied by a factor of safety to obtain a lower value of the offset yield point. High strength steel and aluminum alloys do not exhibit a yield point, so this offset yield point is used on these materials. 
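The 0.2% offset construction described above is easy to automate: fit the elastic modulus to the initial linear portion of the stress–strain data, shift that line by 0.002 strain, and find where it crosses the measured curve. A minimal Python sketch; the data below are made up for illustration, not values from the article.

import numpy as np

def offset_yield(strain, stress, offset=0.002, elastic_points=10):
    """Estimate the offset yield point from monotonic stress-strain data."""
    # 1. Estimate Young's modulus E from the first few (assumed elastic) points.
    E = np.polyfit(strain[:elastic_points], stress[:elastic_points], 1)[0]
    # 2. Offset line: sigma_line = E * (epsilon - offset).
    line = E * (strain - offset)
    # 3. Find the first crossing of the measured curve and the offset line.
    diff = stress - line            # positive at small strain, negative after yielding
    idx = np.argmax(diff < 0)       # first index where the offset line overtakes the curve
    # 4. Linear interpolation between the bracketing points.
    f = diff[idx - 1] / (diff[idx - 1] - diff[idx])
    eps_y = strain[idx - 1] + f * (strain[idx] - strain[idx - 1])
    sig_y = stress[idx - 1] + f * (stress[idx] - stress[idx - 1])
    return eps_y, sig_y

# Illustrative data: elastic slope ~200 GPa, smooth transition to plasticity (hypothetical alloy).
eps = np.linspace(0.0, 0.02, 400)
sig = 350e6 * (1 - np.exp(-eps / 0.0017))   # stress in Pa
print("0.2%% offset yield stress: %.0f MPa" % (offset_yield(eps, sig)[1] / 1e6))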
Upper and lower yield points Some metals, such as mild steel, reach an upper yield point before dropping rapidly to a lower yield point. The material response is linear up until the upper yield point, but the lower yield point is used in structural engineering as a conservative value. If a metal is only stressed to the upper yield point, and beyond, Lüders bands can develop. Usage in structural engineering Yielded structures have a lower stiffness, leading to increased deflections and decreased buckling strength. The structure will be permanently deformed when the load is removed, and may have residual stresses. Engineering metals display strain hardening, which implies that the yield stress is increased after unloading from a yield state. Testing Yield strength testing involves taking a small sample with a fixed cross-section area and then pulling it with a controlled, gradually increasing force until the sample changes shape or breaks. This is called a tensile test. Longitudinal and/or transverse strain is recorded using mechanical or optical extensometers. Indentation hardness correlates roughly linearly with tensile strength for most steels, but measurements on one material cannot be used as a scale to measure strengths on another. Hardness testing can therefore be an economical substitute for tensile testing, as well as providing local variations in yield strength due to, e.g., welding or forming operations. For critical situations, tension testing is often done to eliminate ambiguity. However, it is possible to obtain stress-strain curves from indentation-based procedures, provided certain conditions are met. These procedures are grouped under the term Indentation plastometry. Strengthening mechanisms There are several ways in which crystalline materials can be engineered to increase their yield strength. By altering dislocation density, impurity levels, grain size (in crystalline materials), the yield strength of the material can be fine-tuned. This occurs typically by introducing defects such as impurities dislocations in the material. To move this defect (plastically deforming or yielding the material), a larger stress must be applied. This thus causes a higher yield stress in the material. While many material properties depend only on the composition of the bulk material, yield strength is extremely sensitive to the materials processing as well. These mechanisms for crystalline materials include Work hardening Solid solution strengthening Precipitation strengthening Grain boundary strengthening Work hardening Where deforming the material will introduce dislocations, which increases their density in the material. This increases the yield strength of the material since now more stress must be applied to move these dislocations through a crystal lattice. Dislocations can also interact with each other, becoming entangled. The governing formula for this mechanism is: where is the yield stress, G is the shear elastic modulus, b is the magnitude of the Burgers vector, and is the dislocation density. Solid solution strengthening By alloying the material, impurity atoms in low concentrations will occupy a lattice position directly below a dislocation, such as directly below an extra half plane defect. This relieves a tensile strain directly below the dislocation by filling that empty lattice space with the impurity atom. 
The relationship of this mechanism goes as: where is the shear stress, related to the yield stress, and are the same as in the above example, is the concentration of solute and is the strain induced in the lattice due to adding the impurity. Particle/precipitate strengthening Where the presence of a secondary phase will increase yield strength by blocking the motion of dislocations within the crystal. A line defect that, while moving through the matrix, will be forced against a small particle or precipitate of the material. Dislocations can move through this particle either by shearing the particle or by a process known as bowing or ringing, in which a new ring of dislocations is created around the particle. The shearing formula goes as: and the bowing/ringing formula: In these formulas, is the particle radius, is the surface tension between the matrix and the particle, is the distance between the particles. Grain boundary strengthening Where a buildup of dislocations at a grain boundary causes a repulsive force between dislocations. As grain size decreases, the surface area to volume ratio of the grain increases, allowing more buildup of dislocations at the grain edge. Since it requires much energy to move dislocations to another grain, these dislocations build up along the boundary, and increase the yield stress of the material. Also known as Hall-Petch strengthening, this type of strengthening is governed by the formula: where is the stress required to move dislocations, is a material constant, and is the grain size. Theoretical yield strength The theoretical yield strength of a perfect crystal is much higher than the observed stress at the initiation of plastic flow. That experimentally measured yield strength is significantly lower than the expected theoretical value can be explained by the presence of dislocations and defects in the materials. Indeed, whiskers with perfect single crystal structure and defect-free surfaces have been shown to demonstrate yield stress approaching the theoretical value. For example, nanowhiskers of copper were shown to undergo brittle fracture at 1 GPa, a value much higher than the strength of bulk copper and approaching the theoretical value. The theoretical yield strength can be estimated by considering the process of yield at the atomic level. In a perfect crystal, shearing results in the displacement of an entire plane of atoms by one interatomic separation distance, b, relative to the plane below. In order for the atoms to move, considerable force must be applied to overcome the lattice energy and move the atoms in the top plane over the lower atoms and into a new lattice site. The applied stress to overcome the resistance of a perfect lattice to shear is the theoretical yield strength, τmax. The stress displacement curve of a plane of atoms varies sinusoidally as stress peaks when an atom is forced over the atom below and then falls as the atom slides into the next lattice point. where is the interatomic separation distance. Since τ = G γ and dτ/dγ = G at small strains (i.e. Single atomic distance displacements), this equation becomes: For small displacement of γ=x/a, where a is the spacing of atoms on the slip plane, this can be rewritten as: Giving a value of τmax equal to: The theoretical yield strength can be approximated as . Yield point elongation (YPE) During monotonic tensile testing, some metals such as annealed steel exhibit a distinct upper yield point or a delay in work hardening. 
These tensile testing phenomena, wherein the strain increases but stress does not increase as expected, are two types of yield point elongation. Yield Point Elongation (YPE) significantly impacts the usability of steel. In the context of tensile testing and the engineering stress-strain curve, the Yield Point is the initial stress level, below the maximum stress, at which an increase in strain occurs without an increase in stress. This characteristic is typical of certain materials, indicating the presence of YPE. The mechanism for YPE has been related to carbon diffusion, and more specifically to Cottrell atmospheres. YPE can lead to issues such as coil breaks, edge breaks, fluting, stretcher strain, and reel kinks or creases, which can affect both aesthetics and flatness. Coil and edge breaks may occur during either initial or subsequent customer processing, while fluting and stretcher strain arise during forming. Reel kinks, transverse ridges on successive inner wraps of a coil, are caused by the coiling process. When these conditions are undesirable, it is essential for suppliers to be informed to provide appropriate materials. The presence of YPE is influenced by chemical composition and mill processing methods such as skin passing or temper rolling, which temporarily eliminate YPE and improve surface quality. However, YPE can return over time due to aging, which is holding at a temperature usually 200-400 °C. Despite its drawbacks, YPE offers advantages in certain applications, such as roll forming, and reduces springback. Generally, steel with YPE is highly formable. See also Plasticity (physics) Specified minimum yield strength Ultimate tensile strength Yield curve (physics) Yield surface References Bibliography . . Boresi, A. P., Schmidt, R. J., and Sidebottom, O. M. (1993). Advanced Mechanics of Materials, 5th edition John Wiley & Sons. . Oberg, E., Jones, F. D., and Horton, H. L. (1984). Machinery's Handbook, 22nd edition. Industrial Press. Shigley, J. E., and Mischke, C. R. (1989). Mechanical Engineering Design, 5th edition. McGraw Hill. Engineer's Handbook Elasticity (physics) Mechanics Plasticity (physics) Solid mechanics Deformation (mechanics) Structural analysis
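Returning to the grain boundary strengthening mechanism described earlier, the Hall–Petch relation is commonly written in the form sigma_y = sigma_0 + k_y * d^(-1/2). A short Python sketch with rough, assumed constants loosely typical of a mild steel (illustrative only, not data from the article):

import numpy as np

def hall_petch(d, sigma0, k_y):
    """Yield stress (Pa) from grain size d (m), friction stress sigma0 (Pa)
    and Hall-Petch coefficient k_y (Pa*m^0.5), via sigma_y = sigma0 + k_y/sqrt(d)."""
    return sigma0 + k_y / np.sqrt(d)

sigma0 = 70e6       # Pa (assumed)
k_y = 0.74e6        # Pa*m^0.5 (assumed)
for d_um in (100.0, 10.0, 1.0):             # grain sizes in micrometres
    d = d_um * 1e-6
    print(f"d = {d_um:6.1f} um  ->  sigma_y = {hall_petch(d, sigma0, k_y)/1e6:6.1f} MPa")
# Finer grains give higher yield stress, as the article's qualitative discussion describes.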
Yield (engineering)
[ "Physics", "Materials_science", "Engineering" ]
2,542
[ "Structural engineering", "Solid mechanics", "Physical phenomena", "Elasticity (physics)", "Deformation (mechanics)", "Structural analysis", "Materials science", "Plasticity (physics)", "Mechanics", "Mechanical engineering", "Aerospace engineering", "Physical properties" ]
3,392,451
https://en.wikipedia.org/wiki/National%20Centre%20of%20Scientific%20Research%20%22Demokritos%22
The National Centre of Scientific Research "Demokritos" (NCSRD; ) is a research center in Greece, employing over 1,000 researchers, engineers, technicians and administrative personnel. It focuses on several fields of natural sciences and engineering and hosts laboratory facilities. The facilities cover approximately of land at Aghia Paraskevi, Athens, ten kilometers from the center of the city, on the northern side of Hymettus mountain. The buildings cover an area of approximately 35,000 m² (8.6 acres). The NCSRD is a self-administered governmental legal entity, under the supervision of the General Secretariat of Research and Innovation of the Ministry of Development and Investment. History The Centre was established in 1961 as an independent division of the public sector under the name Research Centre for Nuclear Research "Demokritos", named in honour of the Greek philosopher Democritus. In 1985 it was renamed and given self-governing jurisdiction under the auspices of the General Secretariat of Research and Technology. The original objective of the newly created center was the advancement of nuclear research and technology for peaceful purposes. Today, its activities cover several fields of science and engineering. Research Institutes The research activities of the centre are conducted in six administratively independent institutes: Institute of Informatics and Telecommunications (IIT) Institute of Biosciences and Applications (IBA) Institute of Nuclear and Particle Physics (INPP) Institute of Nanoscience and Nanotechnology (INN) Institute of Nuclear & Radiological Sciences and Technology, Energy & Safety (INRASTES) Institute of Quantum Computing and Quantum Technology (IQCQT) The INPP at Demokritos operates Greece's only nuclear reactor, a 5 MW research reactor. The Environmental Research Laboratory (EREL) The EREL is part of the Institute of Nuclear Technology - Radiation Protection (INT-RP) and it is one of the largest scientific research teams in Greece dealing with environmental research. The staff consists of 13 Ph.D. and 5 M.Sc. scientists, 4 Ph.D. students, 1 technician and 1 administrative assistant. The research and development activities of the EREL include: Weather Forecasting Urban air quality Emissions inventories Air quality measurement, analysis and predictions Air pollutant dispersion modelling over terrain of high complexity on local, urban and regional scales. Transport in porous media and characterization of porous materials Soil remediation The DETRACT atmospheric dispersion modeling system The DETRACT atmospheric dispersion modelling system, developed by the EREL, is an integrated set of modules for modelling air pollution dispersion over highly complex terrain. The Media Networks Laboratory The Laboratory is a part of the “Digital Communications” research programme of the Institute of Informatics and Telecommunications, NCSR “Demokritos”. It is located on the premises of NCSR in Athens, Greece and employs highly qualified research personnel specialised in networking and multimedia technologies. The laboratory has been active for more than a decade in a number of national and European research projects, building on the reputation of the Institute of Informatics and Telecommunications in the global research community. In this context, it maintains close collaboration with Greek and international partners, both industrial and academic. The research achievements of the laboratory are reflected in a considerable number of publications in journals and conferences. 
The lab pioneered in 2001 by developing the first digital television platform to support fully interactive services in Greece (and one of the first ones in Europe). Gallery See also Atmospheric dispersion modelling List of atmospheric dispersion models Czech Hydrometeorological Institute Finnish Meteorological Institute List of nuclear reactors List of research institutes in Greece Met Office, the UK meteorological service National Center for Atmospheric Research NERI, the National Environmental Research Institute of Denmark NILU, the Norwegian Institute for Air Research UK Atmospheric Dispersion Modelling Liaison Committee UK Dispersion Modelling Bureau References External links NCSR Demokritos website (in Greek and English) Website of the Institute of Informatics and Telecommunications (IIT) Environmental Research Laboratory official website Website of the Media Networks Laboratory Website devoted to the DETRACT dispersion modeling system Website devoted to the DIPCOT dispersion model Scientific organizations established in 1959 1959 establishments in Greece Research institutes in Greece Nuclear technology in Greece Nuclear research institutes Multidisciplinary research institutes Atmospheric dispersion modeling Organizations based in Athens Agia Paraskevi
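As a flavour of the air pollutant dispersion modelling work mentioned in the EREL section, the textbook Gaussian plume estimate for a continuous point source can be sketched in a few lines of Python. This is a generic illustration of dispersion modelling, not the DETRACT system, and every parameter below is invented.

import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Gaussian plume concentration (kg/m^3) downwind of a continuous point source.

    Q: emission rate (kg/s), u: wind speed (m/s), H: effective stack height (m),
    sigma_y, sigma_z: lateral/vertical dispersion parameters (m) at the receptor."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + H)**2 / (2 * sigma_z**2)))   # ground-reflection term
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Invented example: 1 kg/s source, 5 m/s wind, 50 m stack, receptor at ground level on the plume axis.
c = gaussian_plume(Q=1.0, u=5.0, y=0.0, z=0.0, H=50.0, sigma_y=60.0, sigma_z=30.0)
print(f"Ground-level centreline concentration: {c*1e6:.1f} mg/m^3")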
National Centre of Scientific Research "Demokritos"
[ "Chemistry", "Engineering", "Environmental_science" ]
886
[ "Nuclear research institutes", "Nuclear organizations", "Atmospheric dispersion modeling", "Environmental engineering", "Environmental modelling" ]
3,393,596
https://en.wikipedia.org/wiki/Stokes%20number
The Stokes number (Stk), named after George Gabriel Stokes, is a dimensionless number characterising the behavior of particles suspended in a fluid flow. The Stokes number is defined as the ratio of the characteristic time of a particle (or droplet) to a characteristic time of the flow or of an obstacle, or where is the relaxation time of the particle (the time constant in the exponential decay of the particle velocity due to drag), is the fluid velocity of the flow well away from the obstacle, and is the characteristic dimension of the obstacle (typically its diameter) or a characteristic length scale in the flow (like boundary layer thickness). A particle with a low Stokes number follows fluid streamlines (perfect advection), while a particle with a large Stokes number is dominated by its inertia and continues along its initial trajectory. In the case of Stokes flow, which is when the particle (or droplet) Reynolds number is less than about one, the particle drag coefficient is inversely proportional to the Reynolds number itself. In that case, the characteristic time of the particle can be written as where is the particle density, is the particle diameter and is the fluid dynamic viscosity. In experimental fluid dynamics, the Stokes number is a measure of flow tracer fidelity in particle image velocimetry (PIV) experiments where very small particles are entrained in turbulent flows and optically observed to determine the speed and direction of fluid movement (also known as the velocity field of the fluid). For acceptable tracing accuracy, the particle response time should be faster than the smallest time scale of the flow. Smaller Stokes numbers represent better tracing accuracy; for , particles will detach from a flow especially where the flow decelerates abruptly. For , particles follow fluid streamlines closely. If , tracing accuracy errors are below 1%. Relaxation time and tracking error in particle image velocimetry (PIV) The Stokes number provides a means of estimating the quality of PIV data sets, as previously discussed. However, a definition of a characteristic velocity or length scale may not be evident in all applications. Thus, a deeper insight of how a tracking delay arises could be drawn by simply defining the differential equations of a particle in the Stokes regime. A particle moving with the fluid at some velocity will encounter a variable fluid velocity field as it advects. Let's assume the velocity of the fluid, in the Lagrangian frame of reference of the particle, is . It is the difference between these velocities that will generate the drag force necessary to correct the particle path: The stokes drag force is then: The particle mass is: Thus, the particle acceleration can be found through Newton's second law: Note the relaxation time can be replaced to yield: The first-order differential equation above can be solved through the Laplace transform method: The solution above, in the frequency domain, characterizes a first-order system with a characteristic time of . Thus, the −3 dB gain (cut-off) frequency will be: The cut-off frequency and the particle transfer function, plotted on the side panel, allows for the assessment of PIV error in unsteady flow applications and its effect on turbulence spectral quantities and kinetic energy. 
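The relaxation time and Stokes number defined above are easy to evaluate for a candidate PIV seeding particle. A small Python sketch using the Stokes-regime expression tau = rho_p * d_p^2 / (18 * mu); the particle and flow values are illustrative assumptions, not taken from the article.

def relaxation_time(rho_p, d_p, mu):
    """Stokes-regime particle relaxation time tau = rho_p * d_p**2 / (18 * mu)."""
    return rho_p * d_p**2 / (18.0 * mu)

def stokes_number(tau, u0, l0):
    """Stk = tau * u0 / l0 for flow velocity u0 and characteristic length l0."""
    return tau * u0 / l0

# Illustrative case: a 1 micron oil droplet seeding an air flow past a 10 mm obstacle.
rho_p = 900.0        # particle density, kg/m^3 (assumed)
d_p = 1e-6           # particle diameter, m (assumed)
mu_air = 1.8e-5      # dynamic viscosity of air, Pa*s
tau = relaxation_time(rho_p, d_p, mu_air)
stk = stokes_number(tau, u0=50.0, l0=0.01)
print(f"tau = {tau:.2e} s, Stk = {stk:.2e}")   # small Stk: the particle is a faithful tracer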
Particles through a shock wave The bias error in particle tracking discussed in the previous section is evident in the frequency domain, but it can be difficult to appreciate in cases where the particle motion is being tracked to perform flow field measurements (like in particle image velocimetry). A simple but insightful solution to the above-mentioned differential equation is possible when the forcing function is a Heaviside step function; representing particles going through a shockwave. In this case, is the flow velocity upstream of the shock; whereas is the velocity drop across the shock. The step response for a particle is a simple exponential: To convert the velocity as a function of time to a particle velocity distribution as a function of distance, let's assume a 1-dimensional velocity jump in the direction. Let's assume is positioned where the shock wave is, and then integrate the previous equation to get: Considering a relaxation time of (time to 95% velocity change), we have: This means the particle velocity would be settled to within 5% of the downstream velocity at from the shock. In practice, this means a shock wave would look, to a PIV system, blurred by approximately this distance. For example, consider a normal shock wave of Mach number at a stagnation temperature of 298 K. A propylene glycol particle of would blur the flow by ; whereas a would blur the flow by (which would, in most cases, yield unacceptable PIV results). Although a shock wave is the worst-case scenario of abrupt deceleration of a flow, it illustrates the effect of particle tracking error in PIV, which results in a blurring of the velocity fields acquired at the length scales of order . Non-Stokesian drag regime The preceding analysis will not be accurate in the ultra-Stokesian regime. i.e. if the particle Reynolds number is much greater than unity. Assuming a Mach number much less than unity, a generalized form of the Stokes number was demonstrated by Israel & Rosner. Where is the "particle free-stream Reynolds number", An additional function was defined by; this describes the non-Stokesian drag correction factor, It follows that this function is defined by, Considering the limiting particle free-stream Reynolds numbers, as then and therefore . Thus as expected there correction factor is unity in the Stokesian drag regime. Wessel & Righi evaluated for from the empirical correlation for drag on a sphere from Schiller & Naumann. Where the constant . The conventional Stokes number will significantly underestimate the drag force for large particle free-stream Reynolds numbers. Thus overestimating the tendency for particles to depart from the fluid flow direction. This will lead to errors in subsequent calculations or experimental comparisons. Application to anisokinetic sampling of particles For example, the selective capture of particles by an aligned, thin-walled circular nozzle is given by Belyaev and Levin as: where is particle concentration, is speed, and the subscript 0 indicates conditions far upstream of the nozzle. The characteristic distance is the diameter of the nozzle. Here the Stokes number is calculated, where is the particle's settling velocity, is the sampling tube's inner diameter, and is the acceleration of gravity. See also Stokes' law – For the drag force in fluids on particles whose Reynolds number is less than one References Further reading Discrete-phase flow Aerosols Dimensionless numbers of fluid mechanics Fluid dynamics
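The step-response blurring described in the shock-wave discussion can also be reproduced numerically: integrate du_p/dt = (u2 - u_p)/tau from the upstream velocity u1 and track how far the particle travels before it is within 5% of the velocity jump. The velocities and relaxation time below are illustrative assumptions, not the article's propylene glycol example.

def blur_distance(u1, u2, tau, dt=1e-9):
    """Distance travelled until the particle velocity is within 5% of the jump (u1 - u2)."""
    u_p, x, t = u1, 0.0, 0.0
    target = u2 + 0.05 * (u1 - u2)
    while u_p > target:
        u_p += dt * (u2 - u_p) / tau     # Stokes-drag relaxation toward the downstream velocity
        x += dt * u_p                    # accumulate distance travelled
        t += dt
    return x, t

# Illustrative shock: 500 m/s upstream, 300 m/s downstream, tau = 2 microseconds.
x, t = blur_distance(u1=500.0, u2=300.0, tau=2e-6)
print(f"settling time ~ {t*1e6:.1f} us (about 3*tau), blur distance ~ {x*1e3:.2f} mm")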
Stokes number
[ "Chemistry", "Engineering" ]
1,356
[ "Discrete-phase flow", "Chemical engineering", "Colloids", "Aerosols", "Piping", "Fluid dynamics" ]
3,393,911
https://en.wikipedia.org/wiki/Arcata%20Wastewater%20Treatment%20Plant%20and%20Wildlife%20Sanctuary
Arcata Wastewater Treatment Plant and Wildlife Sanctuary is an innovative sewer management system employed by the city of Arcata, California. A series of oxidation ponds, treatment wetlands and enhancement marshes are used to filter sewage waste. The marshes also serve as a wildlife refuge, and are on the Pacific Flyway. The Arcata Marsh is a popular destination for birders. The marsh has been awarded the Innovations in Government award from the Ford Foundation/Harvard Kennedy School. Numerous holding pools in the marsh, called "lakes," are named after donors and citizens who helped start the marsh project, including Cal Poly Humboldt professors George Allen and Robert A. Gearheart who were instrumental in the creation of the Arcata Marsh. In 1969 Allen also started an aquaculture project at the marsh to raise salmonids in mixtures of sea water and partially treated wastewater. Despite being effectively a sewer, the series of open-air lakes do not have an odor, and are a popular destination for birdwatching, cycling and jogging. Sewage treatment The sewage treatment process takes place in stages: Primary Treatment (completed in 1949): Sewage is held in sedimentation tanks where the sludge is removed and processed for use as fertilizer. Secondary Treatment (completed in 1957): Primary effluent is pumped into oxidation ponds (here bacteria break down the waste). Disinfection (completed in 1966): Secondary effluent is chlorinated to kill pathogens and dechlorinated to avoid damage to natural environments. Tertiary Treatment (completed in 1986): Disinfected secondary effluent is put into artificial marshes where it is cleansed by reeds, cattails, and bacteria. Disinfection: Tertiary effluent is chlorinated to kill pathogens from bird droppings and dechlorinated to avoid damage to natural environments. Treatment and enhancement Sewage from the city of Arcata is treated and released to Humboldt Bay via complex flow routing through a number of contiguous ponds, wetlands, and marshes. Resemblance of treatment features to natural bay environments may cause potential ambiguity about where wastewater ceases to be considered partially treated sewage and meets enhancement objectives of the California Bays and Estuaries Policy. The wastewater treatment system includes both treatment wetlands and enhancement marshes. Treatment wetlands improve oxidation pond effluent quality to meet the federal definition of secondary treatment. Disinfection and dechlorination are the final steps of the wastewater treatment process. Disinfected wastewater may be discharged either to Humboldt Bay or to enhancement marshes. Enhancement marshes purify the wastewater and provide wetland habitat. Enhancement marsh effluent is disinfected to counteract coliform index increases from birds using the tertiary treatment enhancement marsh habitat. After leaving the treatment wetlands the effluent is mixed with water returning from the enhancement marshes. Wild bird feces contain coliform bacteria similar to those found in human sewage. Recreational access is limited to areas where effluent has received secondary treatment and disinfection. Conventional pollutants or wetland detritus Wetland plants use the energy of sunlight to produce five to ten times as much carbohydrate biomass per acre as a wheat field. Detritus from decomposing wetlands vegetation forms the base of a food chain for aquatic organisms, birds and mammals. 
Individuals who value wetland environments may not realize the effluent characteristics necessary for release of treated wastewater to Humboldt Bay. Although there is no evidence of harm to wildlife, some regulators suggest potential risk to wildlife using treatment wetlands because of an absence of significant research on wildlife exposure to partially treated effluent and to potential accumulation of chemicals being removed from effluent in wetland treatment processes. Ongoing research at Cal Poly Humboldt minimizes potential risk to Humboldt Bay wetlands and wildlife. The City of Arcata generates an average volume of of sewage per day. Winter rainfall onto treatment ponds and marshes increases the volume of effluent discharged from the wetland treatment system to as much as per day. National Pollutant Discharge Elimination System regulations require monthly average effluent concentrations of biochemical oxygen demand and of total suspended solids to be no greater than 30 mg/L, with an additional requirement for removal of 85 percent of the quantities measured in untreated sewage from the City of Arcata. Unfortunately, when measuring concentrations leaving treatment wetlands, neither of these analytical methods can distinguish between unremoved conventional pollutants originally arriving in sewage, or detritus of decomposing wetland vegetation; so the limitations may apply to the sum of both. Wildlife The Arcata Marsh and Wildlife Sanctuary encompasses 307 acres of land situated along the Pacific Flyway. Over 327 species of birds have been recorded at the sanctuary. Numerous plant, mammal, fish, insect, reptile and amphibian species inhabit the marsh. These include river otters, gray foxes, red-legged frog, tidewater goby, bobcat, striped skunk, praying mantis and red-sided garter snake. Arcata Marsh Interpretive Center The Friends of the Arcata Marsh (FOAM) operate the Arcata Marsh Interpretive Center that contains exhibits about the operations of the treatment plant, the importance of the marsh, and about the plants and animals that live there. Volunteer docents give tours of the marsh. Education programs are offered for school, scout and other groups, as well as summer camp programs. References External links City of Arcata - Arcata Marsh and Wildlife Sanctuary Cal Poly Humboldt - Arcata Marsh and Wildlife Sanctuary Appropedia.org - Arcata Marsh Overview Arcata, California Sewage treatment plants in California Nature reserves in California Wildlife sanctuaries of the United States Constructed wetlands Waste treatment technology Museums in Humboldt County, California Nature centers in California Natural history museums in California Protected areas of Humboldt County, California 1949 establishments in California Protected areas established in 1949
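The discharge requirement quoted above (monthly average BOD and TSS no greater than 30 mg/L, plus 85 percent removal relative to the influent) can be expressed as a tiny check. The sample concentrations are invented for illustration, not measurements from the Arcata plant.

def npdes_secondary_ok(influent_mg_l, effluent_mg_l, limit=30.0, min_removal=0.85):
    """True if the effluent meets both the 30 mg/L cap and the 85% removal requirement."""
    removal = 1.0 - effluent_mg_l / influent_mg_l
    return effluent_mg_l <= limit and removal >= min_removal

# Invented monthly averages (mg/L) for BOD: raw sewage 220, treated effluent 25.
print(npdes_secondary_ok(influent_mg_l=220.0, effluent_mg_l=25.0))   # True: both criteria met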
Arcata Wastewater Treatment Plant and Wildlife Sanctuary
[ "Chemistry", "Engineering", "Biology" ]
1,180
[ "Constructed wetlands", "Water treatment", "Environmental engineering", "Bioremediation", "Waste treatment technology" ]
3,394,238
https://en.wikipedia.org/wiki/Activation
In chemistry and biology, activation is the process whereby something is prepared or excited for a subsequent reaction. Chemistry In chemistry, "activation" refers to the reversible transition of a molecule into a nearly identical chemical or physical state, with the defining characteristic being that this resultant state exhibits an increased propensity to undergo a specified chemical reaction. Thus, activation is conceptually the opposite of protection, in which the resulting state exhibits a decreased propensity to undergo a certain reaction. The energy of activation specifies the amount of free energy the reactants must possess (in addition to their rest energy) in order to initiate their conversion into corresponding products—that is, in order to reach the transition state for the reaction. The energy needed for activation can be quite small, and often it is provided by the natural random thermal fluctuations of the molecules themselves (i.e. without any external sources of energy). The branch of chemistry that deals with this topic is called chemical kinetics. Biology Biochemistry In biochemistry, activation, specifically called bioactivation, is where enzymes or other biologically active molecules acquire the ability to perform their biological function, such as inactive proenzymes being converted into active enzymes that are able to catalyze their substrates' reactions into products. Bioactivation may also refer to the process where inactive prodrugs are converted into their active metabolites, or the toxication of protoxins into actual toxins. An enzyme may be reversibly or irreversibly bioactivated. A major mechanism of irreversible bioactivation is where a piece of a protein is cut off by cleavage, producing an enzyme that will then stay active. A major mechanism of reversible bioactivation is substrate presentation where an enzyme translocates near its substrate. Another reversible reaction is where a cofactor binds to an enzyme, which then remains active while the cofactor is bound, and stops being active when the cofactor is removed. In protein synthesis, amino acids are carried by transfer RNA (tRNA) molecules and added to a growing polypeptide chain on the ribosome. In order to transfer the amino acids to the ribosome, tRNAs must first be covalently bonded to the amino acid through their 3' CCA terminal. This binding is catalyzed by aminoacyl-tRNA synthetase, and requires a molecule of ATP. The amino acid bound to the tRNA is called an aminoacyl-tRNA, and is considered the activated molecule in protein translation. Once activated, the aminoacyl-tRNA may move to the ribosome and add the amino acid to the growing polypeptide chain. Immunology In immunology, activation is the transition of leucocytes and other cell types involved in the immune system. On the other hand, deactivation is the transition in the reverse direction. This balance is tightly regulated, since a too small degree of activation causes susceptibility to infections, while, on the other hand, a too large degree of activation causes autoimmune diseases. Activation and deactivation results from a variety of factors, including cytokines, soluble receptors, arachidonic acid metabolites, steroids, receptor antagonists, adhesion molecules, bacterial products and viral products. Electrophysiology Activation refers to the opening of ion channels, i.e. the conformational change that allows ions to pass. References Chemical kinetics Biological processes Chemical reactions Biochemical reactions
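In the chemical kinetics picture sketched in the chemistry section, the fraction of molecular encounters energetic enough to cross the activation barrier grows steeply with temperature; the Arrhenius expression k = A*exp(-Ea/(R*T)) is the standard way to quantify this. A small Python illustration with an assumed pre-exponential factor and activation energy (not values from the article):

import math

R = 8.314            # gas constant, J/(mol*K)

def arrhenius(A, Ea, T):
    """Rate constant from the Arrhenius equation k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

A = 1e13             # pre-exponential factor, 1/s (assumed)
Ea = 75e3            # activation energy, J/mol (assumed)
for T in (298.0, 310.0, 350.0):
    print(f"T = {T:5.1f} K  ->  k = {arrhenius(A, Ea, T):.3e} 1/s")
# The rate constant rises by roughly two orders of magnitude between 298 K and 350 K.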
Activation
[ "Chemistry", "Biology" ]
722
[ "Chemical reaction engineering", "Biochemical reactions", "nan", "Biochemistry", "Chemical kinetics" ]
3,396,016
https://en.wikipedia.org/wiki/Material%20properties%20%28thermodynamics%29
The thermodynamic properties of materials are intensive thermodynamic parameters which are specific to a given material. Each is directly related to a second-order derivative of a thermodynamic potential. Examples for a simple 1-component system are: Compressibility (or its inverse, the bulk modulus) Isothermal compressibility Adiabatic compressibility Specific heat (Note - the extensive analog is the heat capacity) Specific heat at constant pressure Specific heat at constant volume Coefficient of thermal expansion where P is pressure, V is volume, T is temperature, S is entropy, and N is the number of particles. For a single component system, only three second derivatives are needed in order to derive all others, and so only three material properties are needed to derive all others. For a single component system, the "standard" three parameters are the isothermal compressibility , the specific heat at constant pressure , and the coefficient of thermal expansion . For example, the following equations are true: The three "standard" properties are in fact the three possible second derivatives of the Gibbs free energy with respect to temperature and pressure. Moreover, considering mixed second derivatives and the related Schwarz relations shows that the triplet of properties is not independent. In fact, one property function can be given as an expression of the two others, up to a reference state value. The second law of thermodynamics has implications for the sign of some thermodynamic properties, such as the isothermal compressibility. See also List of materials properties (thermal properties) Heat capacity ratio Statistical mechanics Thermodynamic equations Thermodynamic databases for pure substances Heat transfer coefficient Latent heat Specific heat of melting (Enthalpy of fusion) Specific heat of vaporization (Enthalpy of vaporization) Thermal mass External links The Dortmund Data Bank is a factual data bank for thermodynamic and thermophysical data. References Thermodynamic properties
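The statement that only three material properties are independent can be made concrete with the classic relation c_p - c_v = T*v*alpha^2/kappa_T (per mole, with molar volume v), which lets the specific heat at constant volume be computed from the three "standard" parameters. A Python sketch using rough room-temperature values for liquid water; the numbers are order-of-magnitude illustrations, not precise reference data.

def cv_from_standard_properties(cp, T, v, alpha, kappa_T):
    """Molar c_v from c_p, temperature T, molar volume v, thermal expansion alpha and
    isothermal compressibility kappa_T, via c_p - c_v = T*v*alpha**2/kappa_T."""
    return cp - T * v * alpha**2 / kappa_T

# Rough values for liquid water near 298 K (illustrative):
cp = 75.3            # J/(mol*K)
T = 298.0            # K
v = 1.807e-5         # m^3/mol
alpha = 2.57e-4      # 1/K
kappa_T = 4.52e-10   # 1/Pa
cv = cv_from_standard_properties(cp, T, v, alpha, kappa_T)
print(f"c_v ~ {cv:.1f} J/(mol*K)")   # slightly below c_p, as expected for a liquid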
Material properties (thermodynamics)
[ "Physics", "Chemistry", "Mathematics" ]
402
[ "Thermodynamic properties", "Quantity", "Thermodynamics", "Physical quantities" ]
3,396,310
https://en.wikipedia.org/wiki/Secular%20equilibrium
In nuclear physics, secular equilibrium is a situation in which the quantity of a radioactive isotope remains constant because its production rate (e.g., due to decay of a parent isotope) is equal to its decay rate. In radioactive decay Secular equilibrium can occur in a radioactive decay chain only if the half-life of the daughter radionuclide B is much shorter than the half-life of the parent radionuclide A. In such a case, the decay rate of A and hence the production rate of B is approximately constant, because the half-life of A is very long compared to the time scales considered. The quantity of radionuclide B builds up until the number of B atoms decaying per unit time becomes equal to the number being produced per unit time. The quantity of radionuclide B then reaches a constant, equilibrium value. Assuming the initial concentration of radionuclide B is zero, full equilibrium usually takes several half-lives of radionuclide B to establish. The quantity of radionuclide B when secular equilibrium is reached is determined by the quantity of its parent A and the half-lives of the two radionuclides. That can be seen from the time rate of change of the number of atoms of radionuclide B, dNB/dt = λANA − λBNB, where λA and λB are the decay constants of radionuclide A and B, related to their half-lives t1/2 by λ = ln(2)/t1/2, and NA and NB are the number of atoms of A and B at a given time. Secular equilibrium occurs when dNB/dt = 0, or λBNB = λANA. Over long enough times, comparable to the half-life of radionuclide A, the secular equilibrium is only approximate; NA decays away according to NA(t) = NA(0)exp(−λAt) and the "equilibrium" quantity of radionuclide B declines in turn. For times short compared to the half-life of A, λAt ≪ 1 and the exponential can be approximated as 1. See also Bateman equation Transient equilibrium References "Secular equilibrium", IUPAC definition (IUPAC Compendium of Chemical Terminology, 2nd Edition, 1997) Radioactive Equilibrium, EPA definition Radioactive Equilibrium. An equilibrium as old as the Earth, radioactivity.eu.com, IN2P3, EDP Science Radioactivity
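A short numerical sketch of the build-up towards secular equilibrium: evaluate the solution of dNB/dt = λA*NA − λB*NB for a long-lived parent and a short-lived daughter and watch the daughter activity approach the parent activity. The half-lives below are generic illustrative values, not taken from the article.

import math

def daughter_atoms(N_A0, lam_A, lam_B, t):
    """Exact two-member Bateman solution for N_B(t) with N_B(0) = 0."""
    return N_A0 * lam_A / (lam_B - lam_A) * (math.exp(-lam_A * t) - math.exp(-lam_B * t))

N_A0 = 1e20                              # initial parent atoms (illustrative)
lam_A = math.log(2) / (1e9 * 3.15e7)     # parent half-life ~1e9 years, converted to 1/s
lam_B = math.log(2) / (3.8 * 86400)      # daughter half-life ~3.8 days, in 1/s
for days in (1, 5, 20, 60):
    t = days * 86400.0
    # Parent decay is negligible over days, so N_A is approximated by N_A0 in the denominator.
    ratio = lam_B * daughter_atoms(N_A0, lam_A, lam_B, t) / (lam_A * N_A0)
    print(f"after {days:3d} d: daughter activity / parent activity = {ratio:.3f}")
# The ratio climbs from ~0.17 after one day towards 1.0, i.e. secular equilibrium.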
Secular equilibrium
[ "Physics", "Chemistry" ]
459
[ "Radioactivity", "Nuclear physics" ]
3,396,759
https://en.wikipedia.org/wiki/Pooper-scooper
A pooper-scooper, or poop scoop, is a device used to pick up animal feces from public places and yards, particularly those of dogs. Pooper-scooper devices often have a bag or bag attachment. 'Poop bags' are alternatives to pooper scoopers, and are simply a bag, usually turned inside out, to carry the feces to a proper disposal area. Sometimes, the person performing the cleanup is also known as the pooper-scooper. History The invention is credited to Brooke Miller, of Anaheim, California. The design she patented is a metal bin with a rake-like edge attached to a wooden stick. It also includes a rake-like device to scoop the poop into the scooper and a hatch that can be attached to a garbage bag that fits onto the base. The generic term pooper-scooper has been included in dictionaries since the early 1970s. Legislation Around 1935, "Curb Your Dog" signs started appearing in NYC, initiating discussions and correspondence with the Department of Sanitation. The Village of Great Neck Estates was one of the earliest communities to enact a local ordinance, in 1975, requiring residents to remove pollution on private and public property caused by dogs. Murray Seeman, Jay S. Goodman and Howard Zelikow, advocated in the face of heated opposition. In 1978, New York State passed the Pooper-Scooper Law. It was so controversial that Mayor Koch needed the New York State Legislature to pass it, after being unable to convince the New York City Council. The New York Times called actress and consumer advocate Fran Lee "New York's foremost fighter against dog dirt". October 20, 1978, KQED San Francisco news footage featured scenes from a Harvey Milk press conference in Duboce Park in which he discussed the city's new "pooper scooper law" with a how-to demonstration. Marking the 25th anniversary of the Pooper-scooper law, NYC Mayor Ed Koch was quoted saying, "If you’ve ever stepped in dog doo, you know how important it is to enforce the canine waste law. New Yorkers overwhelmingly do their duty and self-enforce. Those who don’t are not fit to call friend." In 2018, the City of San Francisco allocated budget funds in the amount of $830,977 to address this issue. A number of jurisdictions, including New York City, San Francisco and Chicago have laws requiring pet owners to clean up after their pets: a) A person who owns, possesses or controls a dog, cat or other animal shall not permit the animal to commit a nuisance on a sidewalk of any public place, on a floor, wall, stairway or roof of any public or private premises used in common by the public, or on a fence, wall or stairway of a building abutting on a public . Authorized employees of New York City Departments of Health (including Animal Care & Control), of Sanitation, or of Parks and Recreation can issue tickets. Such laws are often nicknamed "pooper-scooper laws", though the laws only stipulate that dog owners remove their dogs' feces, not the method or device used (thus using a hand-held plastic bag to remove feces complies with these laws). Some apartment complexes, condos, and neighborhoods require residents to pick up dog poop and use DNA testing on poop to fine people who did not pick up after their pet. Health concerns Dog droppings are one of the leading sources of E. coli (fecal coliforms) bacterial pollution, Toxocara canis and Neospora caninum helminth parasite pollution. One gram of dog feces contains over 20,000,000 E. coli cells. 
While an individual animal's deposit of feces will not measurably affect the environment, the cumulative effect of thousands of dogs and cats in a metropolitan area can create serious problems due to contamination of soil and water supplies. The runoff from neglected pet waste contaminates water, creating health hazards for people, fish, ducks, etc. In Germany an estimated of feces are deposited daily on public property. A citizen commission (2005) overwhelmingly recommended a plan that would break even at about seven months. DNA samples would be required when pet licenses come up for renewal. Within a year, a database of some 12,500 registration-required canine residents would be available to sanitation workers with sample-test kits. Evidence would be submitted to a forensics laboratory where technicians could readily match the waste to its dog. The prospect of a prompt fine equivalent to $600 US (at 2005 exchange rate) would help assure preventive compliance, as well as cover costs. In adult dogs, the infection by Toxocara canis is usually asymptomatic but can be fatal in puppies. A number of various vertebrates, including humans, and some invertebrates can become infected by Toxocara canis. Humans are infected, like other paratenic hosts, by ingestion of embryonated T. canis eggs. The disease caused by migrating T. canis larvae (toxocariasis) results in visceralis larva migrans and ocularis larva migrans. Clinically infected people have helminth infection and rarely blindness. See also Motocrotte – motorcycle-based solution for cleaning the streets of Paris Mutt Mitt – a plastic mitt used to pick up waste from pets References Sources ROMP (Responsible Owners of Mannerly Pets), metropolitan Twin Cities recreation-advocacy group; June 1996 in Roseville, MN, nonprofit incorporation April 2000 [About ROMP] New York attorney and dog lawyer; External links Sanitation Pet equipment Waste collection Dog equipment Feces
Pooper-scooper
[ "Biology" ]
1,176
[ "Excretion", "Feces", "Animal waste products" ]
3,397,404
https://en.wikipedia.org/wiki/Thomae%27s%20function
Thomae's function is a real-valued function of a real variable that can be defined as: It is named after Carl Johannes Thomae, but has many other names: the popcorn function, the raindrop function, the countable cloud function, the modified Dirichlet function, the ruler function (not to be confused with the integer ruler function), the Riemann function, or the Stars over Babylon (John Horton Conway's name). Thomae mentioned it as an example for an integrable function with infinitely many discontinuities in an early textbook on Riemann's notion of integration. Since every rational number has a unique representation with coprime (also termed relatively prime) and , the function is well-defined. Note that is the only number in that is coprime to It is a modification of the Dirichlet function, which is 1 at rational numbers and 0 elsewhere. Properties Related probability distributions Empirical probability distributions related to Thomae's function appear in DNA sequencing. The human genome is diploid, having two strands per chromosome. When sequenced, small pieces ("reads") are generated: for each spot on the genome, an integer number of reads overlap with it. Their ratio is a rational number, and typically distributed similarly to Thomae's function. If pairs of positive integers are sampled from a distribution and used to generate ratios , this gives rise to a distribution on the rational numbers. If the integers are independent the distribution can be viewed as a convolution over the rational numbers, . Closed form solutions exist for power-law distributions with a cut-off. If (where is the polylogarithm function) then . In the case of uniform distributions on the set , which is very similar to Thomae's function. The ruler function For integers, the exponent of the highest power of 2 dividing gives 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, ... . If 1 is added, or if the 0s are removed, 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, ... . The values resemble tick-marks on a 1/16th graduated ruler, hence the name. These values correspond to the restriction of the Thomae function to the dyadic rationals: those rational numbers whose denominators are powers of 2. Related functions A natural follow-up question one might ask is if there is a function which is continuous on the rational numbers and discontinuous on the irrational numbers. This turns out to be impossible. The set of discontinuities of any function must be an set. If such a function existed, then the irrationals would be an set. The irrationals would then be the countable union of closed sets , but since the irrationals do not contain an interval, neither can any of the . Therefore, each of the would be nowhere dense, and the irrationals would be a meager set. It would follow that the real numbers, being the union of the irrationals and the rationals (which, as a countable set, is evidently meager), would also be a meager set. This would contradict the Baire category theorem: because the reals form a complete metric space, they form a Baire space, which cannot be meager in itself. A variant of Thomae's function can be used to show that any subset of the real numbers can be the set of discontinuities of a function. If is a countable union of closed sets , define Then a similar argument as for Thomae's function shows that has A as its set of discontinuities. 
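Thomae's function assigns the value 1/q to a rational p/q in lowest terms (with q > 0) and 0 to irrationals, so a direct Python sketch is straightforward for exact rational inputs. Treating anything that is not an exact rational as "irrational" is an implementation convenience here, not a faithful handling of irrational arguments.

from fractions import Fraction

def thomae(x):
    """Thomae's function: 1/q if x = p/q in lowest terms (q > 0), 0 if x is irrational.

    Exact rational inputs (int or Fraction) are handled exactly; any other input
    is used as a stand-in for an irrational argument."""
    if isinstance(x, int):
        return 1                              # integers are n/1, so the value is 1
    if isinstance(x, Fraction):
        return Fraction(1, x.denominator)     # Fraction is always stored in lowest terms
    return 0                                  # placeholder for irrational arguments

print(thomae(Fraction(3, 8)))   # 1/8
print(thomae(Fraction(4, 8)))   # 1/2, since 4/8 reduces to 1/2
print(thomae(7))                # 1
print(thomae(2 ** 0.5))         # 0 (float treated as the "irrational" case)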
See also Blumberg theorem Cantor function Dirichlet function Euclid's orchard – Thomae's function can be interpreted as a perspective drawing of Euclid's orchard Volterra's function References (Example 5.1.6 (h)) External links Calculus Eponymous functions General topology Special functions
Thomae's function
[ "Mathematics" ]
847
[ "General topology", "Functions and mappings", "Mathematical analysis", "Special functions", "Calculus", "Mathematical objects", "Fractals", "Combinatorics", "Topology", "Mathematical relations" ]
2,473,068
https://en.wikipedia.org/wiki/Vapour%20phase%20decomposition
Vapour phase decomposition (VPD) is a method used in the semiconductor industry to improve the sensitivity of total-reflection x-ray fluorescence spectroscopy by changing the contaminant from a thin layer (which has an angle-dependent fluorescence intensity in the TXRF-domain) to a granular residue. Method When using granular residue the limits of detection are improved because of a more intense fluorescence signal at angles smaller than the isokinetic angle. This can be achieved by enhancing the impurity concentration in the solution to be analyzed. In standard atomic absorption spectroscopy (AAS), the impurity is dissolved together with the matrix element. In VPD, the surface of the wafer is exposed to hydrofluoric acid vapour, which causes the surface oxide to dissolve together with the impurity metals. The acid droplets, condensed on the surface, are then analyzed using AAS. Advantages The method has yielded good results for the detection and measurement of nickel and iron. To improve the range of elemental impurities and lower detection limits, the acid droplets obtained from the silicon wafers are analyzed by ICP-MS (inductively coupled plasma mass spectrometry). This technique, VPD ICP-MS, provides accurate measurement of up to 60 elements and detection limits in the range of 1E6 to 1E10 atoms/sq.cm on the silicon wafer. Related Techniques One related technique is VPD-DC (vapour phase decomposition-droplet collection), where the wafer is scanned with a droplet that collects the metal ions that were dissolved in the decomposition step. This procedure affords better limits of detection when applying AAS in order to detect metal impurities of very small concentrations on wafer surfaces. References Semiconductor device fabrication
Vapour phase decomposition
[ "Physics", "Chemistry", "Materials_science", "Astronomy" ]
391
[ "Spectroscopy stubs", "Spectrum (physical sciences)", "Microtechnology", "Astronomy stubs", "Semiconductor device fabrication", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
2,473,208
https://en.wikipedia.org/wiki/Frisch%E2%80%93Waugh%E2%80%93Lovell%20theorem
In econometrics, the Frisch–Waugh–Lovell (FWL) theorem is named after the econometricians Ragnar Frisch, Frederick V. Waugh, and Michael C. Lovell. The Frisch–Waugh–Lovell theorem states that if the regression we are concerned with is expressed in terms of two separate sets of predictor variables: where and are matrices, and are vectors (and is the error term), then the estimate of will be the same as the estimate of it from a modified regression of the form: where projects onto the orthogonal complement of the image of the projection matrix . Equivalently, MX1 projects onto the orthogonal complement of the column space of X1. Specifically, and this particular orthogonal projection matrix is known as the residual maker matrix or annihilator matrix. The vector is the vector of residuals from regression of on the columns of . The most relevant consequence of the theorem is that the parameters in do not apply to but to , that is: the part of uncorrelated with . This is the basis for understanding the contribution of each single variable to a multivariate regression (see, for instance, Ch. 13 in ). The theorem also implies that the secondary regression used for obtaining is unnecessary when the predictor variables are uncorrelated: using projection matrices to make the explanatory variables orthogonal to each other will lead to the same results as running the regression with all non-orthogonal explanators included. Moreover, the standard errors from the partial regression equal those from the full regression. History The origin of the theorem is uncertain, but it was well-established in the realm of linear regression before the Frisch and Waugh paper. George Udny Yule's comprehensive analysis of partial regressions, published in 1907, included the theorem in section 9 on page 184. Yule emphasized the theorem's importance for understanding multiple and partial regression and correlation coefficients, as mentioned in section 10 of the same paper. Yule 1907 also introduced the partial regression notation which is still in use today. The theorem, later associated with Frisch, Waugh, and Lovell, and Yule's partial regression notation, were included in chapter 10 of Yule's successful statistics textbook, first published in 1911. The book reached its tenth edition by 1932. In a 1931 paper co-authored with Mudgett, Frisch explicitly quoted Yule's results. Yule's formulas for partial regressions were quoted and explicitly attributed to him in order to rectify a misquotation by another author. Although Yule was not explicitly mentioned in the 1933 paper by Frisch and Waugh, they utilized the notation for partial regression coefficients initially introduced by Yule in 1907, which by 1933 was well known due to the success of Yule's textbook. In 1963, Lovell published a proof considered more straightforward and intuitive. In recognition, people generally add his name to the theorem name. References Further reading Economics theorems Regression analysis Theorems in statistics
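The theorem is easy to verify numerically: the coefficients on X2 from the full regression coincide with those from regressing the residualized y on the residualized X2. A small NumPy sketch with simulated data; all data-generating choices below are illustrative, not taken from the article.

import numpy as np

rng = np.random.default_rng(0)
n = 500
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])   # includes an intercept
X2 = rng.normal(size=(n, 2)) + 0.5 * X1[:, [1]]          # deliberately correlated with X1
beta1, beta2 = np.array([1.0, -2.0]), np.array([0.5, 3.0])
y = X1 @ beta1 + X2 @ beta2 + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Full regression of y on [X1, X2]: the last two coefficients estimate beta2.
b_full = ols(np.hstack([X1, X2]), y)[-2:]

# Partialled-out regression: premultiply by the annihilator M1 = I - X1 (X1'X1)^{-1} X1'.
M1 = np.eye(n) - X1 @ np.linalg.inv(X1.T @ X1) @ X1.T
b_fwl = ols(M1 @ X2, M1 @ y)

print(np.allclose(b_full, b_fwl))   # True: the two estimates of beta2 agree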
Frisch–Waugh–Lovell theorem
[ "Mathematics" ]
621
[ "Mathematical theorems", "Mathematical problems", "Theorems in statistics" ]
2,473,210
https://en.wikipedia.org/wiki/Sterile%20neutrino
Sterile neutrinos (or inert neutrinos) are hypothetical particles (neutral leptons – neutrinos) that interact only via gravity and not via any of the other fundamental interactions of the Standard Model. The term sterile neutrino is used to distinguish them from the known, ordinary active neutrinos in the Standard Model, which carry an isospin charge of and engage in the weak interaction. The term typically refers to neutrinos with right-handed chirality (see right-handed neutrino), which may be inserted into the Standard Model. Particles that possess the quantum numbers of sterile neutrinos and masses great enough such that they do not interfere with the current theory of Big Bang nucleosynthesis are often called neutral heavy leptons (NHLs) or heavy neutral leptons (HNLs). The existence of right-handed neutrinos is theoretically well-motivated, because the known active neutrinos are left-handed and all other known fermions have been observed with both left and right chirality. They could also explain in a natural way the small active neutrino masses inferred from neutrino oscillation. The mass of the right-handed neutrinos themselves is unknown and could have any value between and less than 1 eV. To comply with theories of leptogenesis and dark matter, there must be at least 3 flavors of sterile neutrinos (if they exist). This is in contrast to the number of active neutrino types required to ensure the electroweak interaction is free of anomalies, which must be exactly 3: the number of charged leptons and quark generations. The search for sterile neutrinos is an active area of particle physics. If they exist and their mass is smaller than the energies of particles in the experiment, they can be produced in the laboratory, either by mixing between active and sterile neutrinos or in high energy particle collisions. If they are heavier, the only directly observable consequence of their existence would be the observed active neutrino masses. They may, however, be responsible for a number of unexplained phenomena in physical cosmology and astrophysics, including dark matter, baryogenesis or hypothetical dark radiation. In May 2018, physicists of the MiniBooNE experiment reported a stronger neutrino oscillation signal than expected, a possible hint of sterile neutrinos. However, results of the MicroBooNE experiment showed no evidence of sterile neutrinos in October 2021. Motivation Experimental results show that all produced and observed neutrinos have left-handed helicities (spin antiparallel to momentum), and all antineutrinos have right-handed helicities, within the margin of error. In the massless limit, it means that only one of two possible chiralities is observed for either particle. These are the only helicities (and chiralities) allowed in the Standard Model of particle interactions; particles with the contrary helicities are explicitly excluded from the formulas. Recent experiments such as neutrino oscillation, however, have shown that neutrinos have a non-zero mass, which is not predicted by the Standard Model and suggests new, unknown physics. This unexpected mass explains neutrinos with right-handed helicity and antineutrinos with left-handed helicity: Since they do not move at the speed of light, their helicity is not relativistic invariant (it is possible to move faster than them and observe the opposite helicity). Yet all neutrinos have been observed with left-handed chirality, and all antineutrinos right-handed. (See for the difference.) 
Chirality is a fundamental property of particles and is relativistically invariant: It is the same regardless of the particle's speed and mass in every inertial reference frame. However, a particle with mass that starts out with left-handed chirality can develop a right-handed component as it travels – unless it is massless, chirality is not conserved during the propagation of a free particle through space (nominally, through interaction with the Higgs field). The question, thus, remains: Do neutrinos and antineutrinos differ only in their chirality? Or do exotic right-handed neutrinos and left-handed antineutrinos exist as separate particles from the common left-handed neutrinos and right-handed antineutrinos? Properties Such particles would belong to a singlet representation with respect to the strong interaction and the weak interaction, having zero electric charge, zero weak hypercharge, zero weak isospin, and, as with the other leptons, zero color charge, although they are conventionally represented to have a quantum number of −1. If the Standard Model is embedded in a hypothetical SO(10) grand unified theory, they can be assigned an X charge of −5. The left-handed anti-neutrino has a of +1 and an X charge of +5. Due to the lack of electric charge, hypercharge, and color charge, sterile neutrinos would not interact via the electromagnetic, weak, or strong interactions, making them extremely difficult to detect. They have Yukawa interactions with ordinary leptons and Higgs bosons, which via the Higgs mechanism leads to mixing with ordinary neutrinos. In experiments involving energies larger than their mass, sterile neutrinos would participate in all processes in which ordinary neutrinos take part, but with a quantum mechanical probability that is suppressed by a small mixing angle. That makes it possible to produce them in experiments, if they are light enough to be within the reach of current particle accelerators. They would also interact gravitationally due to their mass, and if they are heavy enough, could explain cold dark matter or warm dark matter. In some grand unification theories, such as SO(10), they also interact via gauge interactions which are extremely suppressed at ordinary energies because their SO(10)-derived gauge boson is extremely massive. They do not appear at all in some other GUTs, such as the Georgi–Glashow model (i.e., all its SU(5) charges or quantum numbers are zero). Mass All particles are initially massless under the Standard Model, since there are no Dirac mass terms in the Standard Model's Lagrangian. The only mass terms are generated by the Higgs mechanism, which produces non-zero Yukawa couplings between the left-handed components of fermions, the Higgs field, and their right-handed components. This occurs when the SU(2) doublet Higgs field acquires its non-zero vacuum expectation value, , spontaneously breaking its SU(2) × U(1) symmetry, and thus yielding non-zero Yukawa couplings: Such is the case for charged leptons, like the electron, but within the Standard Model the right-handed neutrino does not exist. So absent the sterile right chiral neutrinos to pair up with the left chiral neutrinos, even with Yukawa coupling the active neutrinos remain massless. 
In other words, there are no mass-generating terms for neutrinos under the Standard Model: For each generation, the model only contains a left-handed neutrino and its antiparticle, a right-handed antineutrino, each of which is produced in weak eigenstates during weak interactions; the "sterile" neutrinos are omitted. (See neutrino masses in the Standard Model for a detailed explanation.) In the seesaw mechanism, the model is extended to include the missing right-handed neutrinos and left-handed antineutrinos; one of the eigenvectors of the neutrino mass matrix is then hypothesized to be remarkably heavier than the other. A sterile (right-chiral) neutrino would have the same weak hypercharge, weak isospin, and electric charge as its antiparticle, because all of these are zero and hence are unaffected by sign reversal. Dirac and Majorana terms Sterile neutrinos allow the introduction of a Dirac mass term as usual. This can yield the observed neutrino mass, but it requires that the strength of the Yukawa coupling be much weaker for the electron neutrino than the electron, without explanation. Similar problems (although less severe) are observed in the quark sector, where the top and bottom masses differ by a factor of 40. Unlike for the left-handed neutrino, a Majorana mass term can be added for a sterile neutrino without violating local symmetries (weak isospin and weak hypercharge) since it has no weak charge. However, this would still violate total lepton number. It is possible to include both Dirac and Majorana terms; this is done in the seesaw mechanism (below). In addition to satisfying the Majorana equation, if the neutrino were also its own antiparticle, then it would be the first Majorana fermion. In that case, it could annihilate with another neutrino, allowing neutrinoless double beta decay. The other case is that it is a Dirac fermion, which is not its own antiparticle. To put this in mathematical terms, we have to make use of the transformation properties of particles. For free fields, a Majorana field is defined as an eigenstate of charge conjugation. However, neutrinos interact only via the weak interactions, which are not invariant under charge conjugation (C), so an interacting Majorana neutrino cannot be an eigenstate of C. The generalized definition is: "a Majorana neutrino field is an eigenstate of the CP transformation". Consequently, Majorana and Dirac neutrinos would behave differently under CP transformations (actually Lorentz and CPT transformations). Also, a massive Dirac neutrino would have nonzero magnetic and electric dipole moments, whereas a Majorana neutrino would not. However, the Majorana and Dirac neutrinos are different only if their rest mass is not zero. For Dirac neutrinos, the dipole moments are proportional to mass and would vanish for a massless particle. Both Majorana and Dirac mass terms however can be inserted into the mass Lagrangian. Seesaw mechanism In addition to the left-handed neutrino, which couples to its family charged lepton in weak charged currents, if there is also a right-handed sterile neutrino partner (a weak isosinglet with zero charge) then it is possible to add a Majorana mass term without violating electroweak symmetry. Both left-handed and right-handed neutrinos could then have mass and handedness which are no longer exactly preserved (thus "left-handed neutrino" would mean that the state is mostly left and "right-handed neutrino" would mean mostly right-handed). 
To get the neutrino mass eigenstates, we have to diagonalize the general mass matrix where is the neutral heavy lepton's mass, which is big, and are intermediate-size mass terms, which interconnect the sterile and active neutrino masses. The matrix nominally assigns active neutrinos zero mass, but the terms provide a route for some small part of the sterile neutrinos' enormous mass, to "leak into" the active neutrinos. Apart from empirical evidence, there is also a theoretical justification for the seesaw mechanism in various extensions to the Standard Model. Both Grand Unification Theories (GUTs) and left-right symmetrical models predict the following relation: According to GUTs and left-right models, the right-handed neutrino is extremely heavy: while the smaller eigenvalue is approximately given by This is the seesaw mechanism: As the sterile right-handed neutrino gets heavier, the normal left-handed neutrino gets lighter. The left-handed neutrino is a mixture of two Majorana neutrinos, and this mixing process is how sterile neutrino mass is generated. Sterile neutrinos as dark matter For a particle to be considered a dark matter candidate, it must have non-zero mass and no electromagnetic charge. Naturally, neutrinos and neutrino-like particles are of interest in the search for dark matter because they possess both these properties. Observations suggest that there is more cold dark matter (non-relativistic) than hot dark matter (relativistic). The active neutrinos of the Standard Model, having very low mass (and therefore very high speeds) are therefore unlikely to account for all dark matter. Since no bounds on the mass of sterile neutrinos are known, the possibility that the sterile neutrino is dark matter has not yet been ruled out, as it has for active neutrinos. If dark matter consists of sterile neutrinos then certain constraints can be applied to their properties. Firstly, in order to produce the structure of the universe observed today the mass of the sterile neutrino would need to be on the keV scale, based on parameter space of the remaining supersymmetric models that have not yet been excluded by experiment. Secondly, while it is not required that dark matter be stable, the lifetime of the particles must be longer than the current age of the universe. This places an upper bound on the strength of the mixing between sterile and active neutrinos in the seesaw mechanism. From what is known about the particle thus far, the sterile neutrino is a promising dark matter candidate, but as with every other proposed dark matter particle, it has yet to be confirmed to exist. Detection attempts The production and decay of sterile neutrinos could happen through the mixing with virtual ("off mass shell") neutrinos. There were several experiments set up to discover or observe NHLs, for example the NuTeV (E815) experiment at Fermilab or LEP-L3 at CERN. They all led to establishing limits to observation, rather than actual observation of those particles. If they are indeed a constituent of dark matter, sensitive X-ray detectors would be needed to observe the radiation emitted by their decays. Sterile neutrinos may mix with ordinary neutrinos via a Dirac mass after electroweak symmetry breaking, in analogy to quarks and charged leptons. Sterile neutrinos and (in more-complicated models) ordinary neutrinos may also have Majorana masses. 
In the type 1 seesaw mechanism both Dirac and Majorana masses are used to drive ordinary neutrino masses down and make the sterile neutrinos much heavier than the Standard Model's interacting neutrinos. In GUT scale seesaw models the heavy neutrinos can be as heavy as the GUT scale (). In other models, such as the νMSM model where their masses are in the keV to GeV range, they could be lighter than the weak gauge bosons W and Z. A light (with the mass ) sterile neutrino was suggested as a possible explanation of the results of the Liquid Scintillator Neutrino Detector experiment. On 11 April 2007, researchers at the MiniBooNE experiment at Fermilab announced that they had not found any evidence supporting the existence of such a sterile neutrino. More-recent results and analysis have provided some support for the existence of the sterile neutrino. Two separate detectors near a nuclear reactor in France found 3% of anti-neutrinos missing. They suggested the existence of a fourth neutrino with a mass of 1.2 eV. Daya Bay has also searched for a light sterile neutrino and excluded some mass regions. Daya Bay collaboration measured the anti-neutrino energy spectrum, and found that anti-neutrinos at an energy of around 5 MeV are in excess relative to theoretical expectations. It also recorded 6% missing anti-neutrinos. This could suggest either that sterile neutrinos exist or that our understanding of some other aspect of neutrinos is incomplete. The number of neutrinos and the masses of the particles can have large-scale effects that shape the appearance of the cosmic microwave background. The total number of neutrino species, for instance, affects the rate at which the cosmos expanded in its earliest epochs: More neutrinos means a faster expansion. The Planck Satellite 2013 data release is compatible with the existence of a sterile neutrino. The implied mass range is from 0–3 eV. In 2016, scientists at the IceCube Neutrino Observatory did not find any evidence for the sterile neutrino. However, in May 2018, physicists of the MiniBooNE experiment reported a stronger neutrino oscillation signal than expected, a possible hint of sterile neutrinos. Since then, in October 2021, the MicroBooNE experiment's first results showed no hints of sterile neutrinos, rather finding the results aligning with the Standard Model's three neutrino flavours. This result had not found an explanation for MiniBooNE's anomalous results, however. In June 2022, the BEST experiment released two papers observing a 20–24% deficit in the production of the isotope germanium expected from the reaction . The so-called "Gallium anomaly" suggests that a sterile neutrino explanation could be consistent with the data. In January 2023, the STEREO experiment published its final result, reporting the most precise measurement of the antineutrino energy spectrum associated with the fission of uranium-235. The data is consistent with the Standard Model and rejects the hypothesis of a light sterile neutrino with a mass of around 1 eV. In 2023 results of searches by the CMS set new limits for sterile neutrinos with masses of 2–3 GeV. See also List of hypothetical particles MiniBooNE at Fermilab Weakly Interacting Slender Particle Footnotes References Sources External links Neutrinos Hypothetical elementary particles Dark matter
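As a purely illustrative check of the seesaw structure discussed above, the snippet below evaluates the eigenvalues of a one-generation toy mass matrix with a Dirac entry much smaller than the Majorana entry. The numerical values are arbitrary assumptions chosen only to show the scale separation, not measured parameters, and the light eigenvalue is written in a numerically stable form because the hierarchy is far too large for naive floating-point diagonalization.

```python
import numpy as np

# One-generation type-1 seesaw toy model (values in GeV, chosen arbitrarily).
m_D = 100.0        # Dirac mass term, of order the electroweak scale
M   = 1.0e14       # Majorana mass of the heavy, mostly sterile state

# Magnitudes of the eigenvalues of the 2x2 mass matrix [[0, m_D], [m_D, M]].
heavy = (M + np.hypot(M, 2.0 * m_D)) / 2.0
light = 2.0 * m_D**2 / (M + np.hypot(M, 2.0 * m_D))

print(f"heavy state ~ {heavy:.3e} GeV")                    # ~ M
print(f"light state ~ {light:.3e} GeV")                    # ~ m_D**2 / M
print(f"m_D**2 / M  = {m_D**2 / M:.3e} GeV (~0.1 eV)")     # seesaw estimate of the light mass
```

Making the Majorana mass heavier drives the light eigenvalue down, which is the seesaw behaviour described in the text.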
Sterile neutrino
[ "Physics", "Astronomy" ]
3,824
[ "Dark matter", "Unsolved problems in astronomy", "Concepts in astronomy", "Unsolved problems in physics", "Exotic matter", "Hypothetical elementary particles", "Physics beyond the Standard Model", "Matter" ]
2,473,617
https://en.wikipedia.org/wiki/HD%20ready
HD ready is a certification program introduced in 2005 by EICTA (European Information, Communications and Consumer Electronics Technology Industry Associations), now DIGITALEUROPE. HD ready minimum native resolution is 720 rows in widescreen ratio. There are currently four different labels: "HD ready", "HD TV", "HD ready 1080p", "HD TV 1080p". The logos are assigned to television equipment capable of certain features. In the United States, a similar "HD Ready" term usually refers to any display that is capable of accepting and displaying a high-definition signal at either 720p, 1080i or 1080p using a component video or digital input, but does not have a built-in HD-capable tuner. History The "HD ready" certification program was introduced on January 19, 2005. The labels and relevant specifications are based on agreements between over 60 broadcasters and manufacturers of the European HDTV Forum at its second session in June 2004, held at the Betzdorf, Luxembourg headquarters of founding member SES Astra. The "HD ready" logo is used on television equipment capable of displaying High Definition (HD) pictures from an external source. However, it does not have to feature a digital tuner to decode an HD signal; devices with tuners were certified under a separate "HD TV" logo, which does not require a "HD ready" display device. Before the introduction of the "HD ready" certification, many TV sources and displays were being promoted as capable of displaying high definition pictures when they were in fact SDTV devices; according to Alexander Oudendijk, senior VP of marketing for Astra, in early 2005 there were 74 different devices being sold as ready for HD that were not. Devices advertised as HD-compatible or HD ready could take HDTV-signal as an input (via analog -YPbPr or digital DVI or HDMI), but they did not have enough pixels for true representation of even the lower HD resolution (1280 × 720) (plasma-based sets with 853 × 480 resolution, CRT based sets only capable of SDTV-resolution or VGA-resolution, 640×480 pixels), much less the higher HD resolution (1920 × 1080), and so were unable to display the HD picture without downscaling to a lower resolution. Industry-sponsored labels such as "Full HD" were misleading as well, as they can refer to devices which do not fulfil some essential requirements such as having 1:1 pixel mapping with no overscan or accepting a 1080p signal. A UK BBC television programme found that separate labels for display devices and TV tuners/decoders confused purchasers, many of whom bought HD-ready equipment expecting to be able to receive HD with no additional equipment; they were sometimes actively misled by salespeople—a 2007 Ofcom survey found that 12% were told explicitly that they could view analog SDTV transmissions in HD, 7% that no extra equipment was needed, and 14% that HD-ready sets would receive existing digital SDTV transmissions in HD. On August 30, 2007, 1080p versions of the logos and licensing agreements were introduced; as an improvement to the earlier scheme, "HD TV 1080p" logo now requires "HD ready 1080p" certification. Requirements and logos HD ready and HD ready 1080p logos are assigned to displays (including integrated television sets, computer monitors and projectors) which have certain capabilities to process and display high-definition source video signal, outlined in a table below. 
The HD TV logo is assigned to either integrated digital television sets (containing a display conforming to "HD ready" requirements) or standalone set-top boxes which are capable of receiving, decoding and outputting or displaying high-definition broadcasts (that is, they include a DVB tuner for cable, terrestrial or satellite broadcasting, a video decoder which supports H.264/MPEG-4 AVC compression in 720p and 1080i signal formats, and either video outputs or an integrated display capable of handling such signals). The HD TV 1080p logo is assigned to integrated digital television sets which have a display conforming to "HD ready 1080p" requirements, a DVB tuner and a decoder capable of processing a 1080p signal. In order to be labelled with the "HD ready" or "HD ready 1080p" logo, a display device has to meet the corresponding set of requirements covering minimum native resolution and the accepted high-definition input formats and connectors.
References
External links
HD ready official UK website
High Definition Television and Logos - EICTA
EICTA: Broadcast License agreement and HD Ready 1080p requirements
HD Ready 1080p press release
DVDActive article - Are You Ready for HDTV?
Television technology High-definition television Audiovisual introductions in 2005 Symbols introduced in 2005 2005 establishments in the European Union
HD ready
[ "Technology" ]
984
[ "Information and communications technology", "Television technology" ]
2,476,693
https://en.wikipedia.org/wiki/List%20of%20Croton%20sections
The sections and subsections of the genus Croton:
sect. Cleodora (Klotzsch) Baill.?
sect. Cyclostigma Griseb.
subsect. Cyclostigma (Griseb.) Müll. Arg.
subsect. Sampatik G.L.Webster
subsect. Palanostigma Mart. ex Baill.
sect. Klotzschiphytum (Baill.) Baill.
sect. Eutropia (Klotzsch) Baill.
sect. Luntia (Raf.) G.L. Webster
subsect. Cuneati G.L. Webster
subsect. Matourenses G.L. Webster
sect. Eluteria Griseb.
sect. Croton
sect. Ocalia (Klotzsch) Baill.
sect. Corylocroton G.L.Webster
sect. Anadenocroton G.L.Webster
sect. Tiglium (Klotzsch) Baill.
sect. Quadrilobus Müll. Arg.
sect. Cascarilla Griseb.
sect. Velamea Baill.
sect. Andrichnia Baill.
sect. Anisophyllum Baill.
sect. Furcaria Boivin ex Baill.
sect. Monguia Baill.
sect. Decapetalon Müll. Arg.
sect. Podostachys (Klotzsch) Baill.
sect. Octolobium Chodat & Hassl.
sect. Geiseleria (Klotzsch) Baill.
sect. Pilinophyton (Klotzsch) A. Gray
sect. Eremocarpus (Benth.) G.L.Webster
sect. Gynamblosis (Torr.) A. Gray
sect. Crotonopsis (Michx.) G.L.Webster
sect. Argyrocroton (Müll. Arg.) G.L.Webster
sect. Lamprocroton (Müll. Arg.) Pax
sect. Julocroton (Mart.) G.L.Webster
sect. Adenophyllum Griseb.
sect. Barhamia (Klotzsch) Baill.
sect. Decalobium Müll. Arg.
sect. Micranthis Baill.
sect. Medea (Klotzsch) Baill.
sect. Lasiogyne (Klotzsch) Baill.
sect. Argyroglossum Baill.
sect. Astraeopsis Baill.
sect. Codonacalyx Klotzsch ex Baill.
sect. Astraea (Klotzsch) Baill.
sect. Drepadenium (Raf.) Müll. Arg.
References
Taxonomic lists Plant sections
List of Croton sections
[ "Biology" ]
596
[ "Lists of biota", "Taxonomy (biology)", "Taxonomic lists" ]
2,476,993
https://en.wikipedia.org/wiki/Barometer%20question
The barometer question is an example of an incorrectly designed examination question demonstrating functional fixedness that causes a moral dilemma for the examiner. In its classic form, popularized by American test designer professor Alexander Calandra in the 1960s, the question asked the student to "show how it is possible to determine the height of a tall building with the aid of a barometer." The examiner was confident that there was one, and only one, correct answer, which is found by measuring the difference in pressure at the top and bottom of the building and solving for height. Contrary to the examiner's expectations, the student responded with a series of completely different answers. These answers were also correct, yet none of them proved the student's competence in the specific academic field being tested. The barometer question achieved the status of an urban legend; according to an internet meme, the question was asked at the University of Copenhagen and the student was Niels Bohr. The Kaplan, Inc. ACT preparation textbook describes it as an "MIT legend", and an early form is found in a 1958 American humor book. However, Calandra presented the incident as a real-life, first-person experience that occurred during the Sputnik crisis. Calandra's essay, "Angels on a Pin", was published in 1959 in Pride, a magazine of the American College Public Relations Association. It was reprinted in Current Science in 1964, in Saturday Review in 1968 and included in the 1969 edition of Calandra's The Teaching of Elementary Science and Mathematics. Calandra's essay became a subject of academic discussion. It was frequently reprinted since 1970, making its way into books on subjects ranging from teaching, writing skills, workplace counseling and investment in real estate to chemical industry, computer programming and integrated circuit design. Calandra's account A colleague of Calandra posed the barometer question to a student, expecting the correct answer: "the height of the building can be estimated in proportion to the difference between the barometer readings at the bottom and at the top of the building". The student provided a different, and also correct answer: "Take the barometer to the top of the building. Attach a long rope to it, lower the barometer to the street, then bring it up, measuring the length of the rope. The length of the rope is the height of the building." The examiner and Calandra, who was called to advise on the case, faced a moral dilemma. According to the format of the exam, a correct answer deserved a full credit. But issuing a full credit would have violated academic standards by rewarding a student who had not demonstrated competence in the academic field that had been tested (physics). Neither of two available options (pass or fail) was morally acceptable. By mutual agreement with the student and the examiner, Calandra gave the student another opportunity to answer, warning the student the answer would require demonstrating some knowledge of physics. The student came up with several possible answers, but settled on dropping the barometer from the top of the building, timing its fall, and using the equation of motion to derive the height. The examiner agreed that this satisfied the requirement and gave the student “almost full credit”. 
When Calandra asked about the other answers, the student gave the examples: using the proportion between the lengths of the building's shadow and that of the barometer to calculate the building's height from the height of the barometer using the barometer as a measuring rod to mark off its height on the wall while climbing the stairs, then counting the number of marks suspending the barometer from a string to create a pendulum, then using the pendulum to measure the strength of Earth's gravity at the top and bottom of the building, and calculating the height of the building from the difference in the two measurements (see Newton's law of universal gravitation) There were, the student said, many other possible solutions. The student admitted that he knew the expected “conventional” answer, but was fed up with the professor's "teaching him how to think ... rather than teaching him the structure of the subject." Internet meme According to Snopes.com, more recent (1999 and 1988) versions identify the problem as a question in "a physics degree exam at the University of Copenhagen" and the student was Niels Bohr, and includes the following answers: Tying a piece of string to the barometer, lowering the barometer from the roof to the ground, and measuring the length of the string and barometer. Dropping the barometer off the roof, measuring the time it takes to hit the ground, and calculating the building's height assuming constant acceleration under gravity. When the sun is shining, standing the barometer up, measuring the height of the barometer and the lengths of the shadows of both barometer and building, and finding the building's height using similar triangles. Tying a piece of string to the barometer, and swinging it like a pendulum both on the ground and on the roof, and from the known pendulum length and swing period, calculate the gravitational field for the two cases. Use Newton's law of gravitation to calculate the radial altitude of both the ground and the roof. The difference will be the height of the building. Tying a piece of string to the barometer, which is as long as the height of the building, and swinging it like a pendulum, and from the swing period, calculate the pendulum length. Marking off the number of barometer lengths vertically along the emergency staircase, and multiplying this with the length of the barometer. Trading the barometer for the correct information with the building's janitor or superintendent. Measuring the pressure difference between ground and roof and calculating the height difference (the expected answer). Interpretations Professor of physics Mark Silverman used what he called "The Barometer-Story formula" precisely for explaining the subject of pressure and recommended it to physics teachers. Silverman called Calandra's story "a delightful essay that I habitually read to my class whenever we study fluids ... the essay is short, hilarious and satisfying (at least to me and my class)." Financial advisor Robert G. Allen presented Calandra's essay to illustrate the process and role of creativity in finance. "Creativity is born when you have a problem to solve. And as you can see from this story ["Angels on a Pin"] there are many ways of solving a problem. Creativity is the art of looking for solutions that are out of the ordinary, different, unorthodox." 
O'Meara used the barometer question to illustrate the art of steering students' activities to a desired outcome: "if the question is not aligned [with the desired learning outcome] then the problem becomes an exercise of problem solving for its own value." The teacher can steer the students either through careful design of the questions (this rules out barometer questions), or through guiding the students to the desired choices. In case of the original barometer question, the examiner may explicitly say that the problem has more than one solution, insist on applying the laws of physics, or give them the "ending point" of the solution: "How did I discover that the building was 410 feet in height with only a barometer?" Herson used the Calandra account as an illustration of the difference between academic tests and assessment in education. Tests, even the ones designed for reliability and validity, are useful, but they are not sufficient in real-world education. Sanders interpreted Calandra's story as a conflict between perfection and optimal solutions: "We struggle to determine a 'best' answer, when a simple call to a building superintendent (the resource man) would quickly provide adequate information." Footnotes References Robert G. Allen (2004). Nothing Down for the 2000s: Dynamic New Wealth Strategies in Real Estate. Simon and Schuster. . Louis B. Barnes, Carl Roland Christensen, Abby J. Hansen (1994). Teaching and the case method: text, cases, and readings. Harvard Business Press. . Walter Grarzer (2004). Eurekas and euphorias: the Oxford book of scientific anecdotes. Oxford University Press. . Kaplan ACT Premier Program 2009. Kaplan, Inc. . Naomi L. Herson (1986), Evaluation for Excellence in Education, in: Evaluation for excellence in education: presentations given at a workshop/seminar. Canadian Education Association. . Jodi O'Meara (2010). Beyond Differentiated Instruction. Corwin Press. . Roy E. Sanders (2005). Chemical process safety: learning from case histories. Gulf Professional Publishing. . Mark P. Silverman (2002). A universe of atoms, an atom in the universe. Springer. . The cited chapter reproduces an earlier publication: Mark P. Silverman (1998). Flying High, Thinking Low? What Every Aeronaut Needs To Know. The Physics Teacher. 1998, vol. 36. pp. 288–293. The solution sought by Calandra's examiner is indexed with (4) in the very end of p. 289. Maryellen Weimer (2002). Learner-centered teaching: five key changes to practice. John Wiley and Sons. . See also Manhole cover question Microsoft interview Professional ethics Urban legends Tests Educational assessment and evaluation Physics education
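The physical answers quoted in the story, the free fall, the shadow proportion and the expected pressure-difference method, are each one-line calculations. The sketch below works them through with invented numbers for the drop time, the shadow lengths and the pressure readings; none of these figures come from the original account, and they are chosen only so that the three estimates land near one another.

```python
import math

g = 9.81                          # m/s^2, standard gravity

# 1) Drop the barometer and time its fall: h = g * t^2 / 2
t_fall = 5.0                      # s, assumed drop time
h_drop = 0.5 * g * t_fall**2

# 2) Shadow proportion: building height = barometer height * (building shadow / barometer shadow)
h_shadow = 0.3 * (160.0 / 0.4)    # 0.3 m barometer, shadows of 160 m and 0.4 m (assumed)

# 3) The "expected" answer: hydrostatic pressure difference, h = dP / (rho_air * g)
p_bottom, p_top = 101_325.0, 99_900.0    # Pa, assumed readings at street level and roof
rho_air = 1.2                            # kg/m^3, near-surface air density
h_pressure = (p_bottom - p_top) / (rho_air * g)

print(f"free fall : {h_drop:6.1f} m")
print(f"shadows   : {h_shadow:6.1f} m")
print(f"pressure  : {h_pressure:6.1f} m")
```

All three estimates come out near 120 m for these assumed inputs, which illustrates why each answer can legitimately claim full credit while demonstrating very different amounts of physics.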
Barometer question
[ "Physics" ]
1,905
[ "Applied and interdisciplinary physics", "Physics education" ]
2,477,282
https://en.wikipedia.org/wiki/Lambda%20transition
The λ (lambda) universality class is a grouping used in condensed matter physics. It brings together several systems that show strong analogies, namely superfluids, superconductors and smectics (liquid crystals). All these systems are expected to belong to the same universality class for the thermodynamic critical properties of the phase transition. While these systems look quite different at first glance, they are all described by similar formalisms and their typical phase diagrams are identical.
See also
Superfluid
Superconductor
Liquid crystal
Phase transition
Renormalization group
Topological defect
References
Books
Chaikin P. M. and Lubensky T. C., Principles of Condensed Matter Physics (Cambridge University Press, Cambridge) 1995, sect. 9.
Feynman R. P., Progress in Low Temperature Physics Vol. 1, edited by C. Gorter (North Holland, Amsterdam) 1955.
Condensed matter physics Critical phenomena Phase transitions Phases of matter
Lambda transition
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
202
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Critical phenomena", "Materials science", "Condensed matter physics", "Statistical mechanics", "Matter", "Dynamical systems" ]
2,477,581
https://en.wikipedia.org/wiki/Poly-cap
A poly-cap is a small bushing made of soft, rubber-like polyethylene, used in scale models to create smooth joints or to hold parts in place without glue. In model kit descriptions, they are sometimes referred to under the material acronym PE. They are usually found in kits from manufacturers such as Tamiya and Bandai, who have employed them for a long time. Although regarded as a quick and reliable way to create stiff joints that hold poses, they have been shown to deteriorate in strength if too much friction is applied by repeatedly shifting poses, and to lose grip strength over the course of a few years even without handling, so that the final build can no longer hold more complex poses over time. As a result, manufacturers such as Bandai have begun to move away from the part in more recent releases in favor of more solid, peg-reliant engineering and harder plastics to keep poses held, although the material is still used in re-releases of older kits. A typical "cap" is a small cylinder, but many other shapes exist. Bandai, for example, has a wide range of designs used in its mecha model kits (Gunpla, etc.). Typical uses include:
Wheel or sprocket hubs
Pivot points for guns
Joints in robotic limbs
Hardware (mechanical)
Poly-cap
[ "Physics", "Technology", "Engineering" ]
267
[ "Physical systems", "Machines", "Hardware (mechanical)", "Construction" ]
1,180,641
https://en.wikipedia.org/wiki/Stochastic%20gradient%20descent
Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data). Especially in high-dimensional optimization problems this reduces the very high computational burden, achieving faster iterations in exchange for a lower convergence rate. The basic idea behind stochastic approximation can be traced back to the Robbins–Monro algorithm of the 1950s. Today, stochastic gradient descent has become an important optimization method in machine learning. Background Both statistical estimation and machine learning consider the problem of minimizing an objective function that has the form of a sum: where the parameter that minimizes is to be estimated. Each summand function is typically associated with the -th observation in the data set (used for training). In classical statistics, sum-minimization problems arise in least squares and in maximum-likelihood estimation (for independent observations). The general class of estimators that arise as minimizers of sums are called M-estimators. However, in statistics, it has been long recognized that requiring even local minimization is too restrictive for some problems of maximum-likelihood estimation. Therefore, contemporary statistical theorists often consider stationary points of the likelihood function (or zeros of its derivative, the score function, and other estimating equations). The sum-minimization problem also arises for empirical risk minimization. There, is the value of the loss function at -th example, and is the empirical risk. When used to minimize the above function, a standard (or "batch") gradient descent method would perform the following iterations: The step size is denoted by (sometimes called the learning rate in machine learning) and here "" denotes the update of a variable in the algorithm. In many cases, the summand functions have a simple form that enables inexpensive evaluations of the sum-function and the sum gradient. For example, in statistics, one-parameter exponential families allow economical function-evaluations and gradient-evaluations. However, in other cases, evaluating the sum-gradient may require expensive evaluations of the gradients from all summand functions. When the training set is enormous and no simple formulas exist, evaluating the sums of gradients becomes very expensive, because evaluating the gradient requires evaluating all the summand functions' gradients. To economize on the computational cost at every iteration, stochastic gradient descent samples a subset of summand functions at every step. This is very effective in the case of large-scale machine learning problems. Iterative method In stochastic (or "on-line") gradient descent, the true gradient of is approximated by a gradient at a single sample: As the algorithm sweeps through the training set, it performs the above update for each training sample. Several passes can be made over the training set until the algorithm converges. If this is done, the data can be shuffled for each pass to prevent cycles. Typical implementations may use an adaptive learning rate so that the algorithm converges. 
In pseudocode, stochastic gradient descent can be presented as : Choose an initial vector of parameters and learning rate . Repeat until an approximate minimum is obtained: Randomly shuffle samples in the training set. For , do: A compromise between computing the true gradient and the gradient at a single sample is to compute the gradient against more than one training sample (called a "mini-batch") at each step. This can perform significantly better than "true" stochastic gradient descent described, because the code can make use of vectorization libraries rather than computing each step separately as was first shown in where it was called "the bunch-mode back-propagation algorithm". It may also result in smoother convergence, as the gradient computed at each step is averaged over more training samples. The convergence of stochastic gradient descent has been analyzed using the theories of convex minimization and of stochastic approximation. Briefly, when the learning rates decrease with an appropriate rate, and subject to relatively mild assumptions, stochastic gradient descent converges almost surely to a global minimum when the objective function is convex or pseudoconvex, and otherwise converges almost surely to a local minimum. This is in fact a consequence of the Robbins–Siegmund theorem. Linear regression Suppose we want to fit a straight line to a training set with observations and corresponding estimated responses using least squares. The objective function to be minimized is The last line in the above pseudocode for this specific problem will become: Note that in each iteration or update step, the gradient is only evaluated at a single . This is the key difference between stochastic gradient descent and batched gradient descent. In general, given a linear regression problem, stochastic gradient descent behaves differently when (underparameterized) and (overparameterized). In the overparameterized case, stochastic gradient descent converges to . That is, SGD converges to the interpolation solution with minimum distance from the starting . This is true even when the learning rate remains constant. In the underparameterized case, SGD does not converge if learning rate remains constant. History In 1951, Herbert Robbins and Sutton Monro introduced the earliest stochastic approximation methods, preceding stochastic gradient descent. Building on this work one year later, Jack Kiefer and Jacob Wolfowitz published an optimization algorithm very close to stochastic gradient descent, using central differences as an approximation of the gradient. Later in the 1950s, Frank Rosenblatt used SGD to optimize his perceptron model, demonstrating the first applicability of stochastic gradient descent to neural networks. Backpropagation was first described in 1986, with stochastic gradient descent being used to efficiently optimize parameters across neural networks with multiple hidden layers. Soon after, another improvement was developed: mini-batch gradient descent, where small batches of data are substituted for single samples. In 1997, the practical performance benefits from vectorization achievable with such small batches were first explored, paving the way for efficient optimization in machine learning. As of 2023, this mini-batch approach remains the norm for training neural networks, balancing the benefits of stochastic gradient descent with gradient descent. By the 1980s, momentum had already been introduced, and was added to SGD optimization techniques in 1986. 
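As a concrete illustration of the pseudocode and the straight-line least-squares example above, here is a minimal NumPy implementation of plain per-sample SGD. It is a sketch rather than a reference implementation: the synthetic data, learning rate and epoch count are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: y is roughly 2 + 3*x plus noise
x = rng.uniform(-1.0, 1.0, size=200)
y = 2.0 + 3.0 * x + rng.normal(scale=0.1, size=200)

w = np.zeros(2)      # w[0]: intercept, w[1]: slope
eta = 0.05           # learning rate (step size)

for epoch in range(50):
    # Randomly shuffle the training set, then update on one sample at a time.
    for i in rng.permutation(len(x)):
        pred = w[0] + w[1] * x[i]
        error = pred - y[i]
        grad = np.array([2.0 * error, 2.0 * error * x[i]])   # gradient of (pred - y_i)**2
        w -= eta * grad

print(w)   # close to [2, 3]
```

Replacing the inner loop with updates on small random batches, averaging the per-sample gradients, gives the mini-batch variant discussed above. Note that the learning rate eta stays fixed throughout this sketch; the adaptive schemes discussed below change exactly that.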
However, these optimization techniques assumed constant hyperparameters, i.e. a fixed learning rate and momentum parameter. In the 2010s, adaptive approaches to applying SGD with a per-parameter learning rate were introduced with AdaGrad (for "Adaptive Gradient") in 2011 and RMSprop (for "Root Mean Square Propagation") in 2012. In 2014, Adam (for "Adaptive Moment Estimation") was published, applying the adaptive approaches of RMSprop to momentum; many improvements and branches of Adam were then developed such as Adadelta, Adagrad, AdamW, and Adamax. Within machine learning, approaches to optimization in 2023 are dominated by Adam-derived optimizers. TensorFlow and PyTorch, by far the most popular machine learning libraries, as of 2023 largely only include Adam-derived optimizers, as well as predecessors to Adam such as RMSprop and classic SGD. PyTorch also partially supports Limited-memory BFGS, a line-search method, but only for single-device setups without parameter groups. Notable applications Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the back propagation algorithm, it is the de facto standard algorithm for training artificial neural networks. Its use has been also reported in the Geophysics community, specifically to applications of Full Waveform Inversion (FWI). Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for training linear regression models, originally under the name ADALINE. Another stochastic gradient descent algorithm is the least mean squares (LMS) adaptive filter. Extensions and variants Many improvements on the basic stochastic gradient descent algorithm have been proposed and used. In particular, in machine learning, the need to set a learning rate (step size) has been recognized as problematic. Setting this parameter too high can cause the algorithm to diverge; setting it too low makes it slow to converge. A conceptually simple extension of stochastic gradient descent makes the learning rate a decreasing function of the iteration number , giving a learning rate schedule, so that the first iterations cause large changes in the parameters, while the later ones do only fine-tuning. Such schedules have been known since the work of MacQueen on -means clustering. Practical guidance on choosing the step size in several variants of SGD is given by Spall. Implicit updates (ISGD) As mentioned earlier, classical stochastic gradient descent is generally sensitive to learning rate . Fast convergence requires large learning rates but this may induce numerical instability. The problem can be largely solved by considering implicit updates whereby the stochastic gradient is evaluated at the next iterate rather than the current one: This equation is implicit since appears on both sides of the equation. It is a stochastic form of the proximal gradient method since the update can also be written as: As an example, consider least squares with features and observations . We wish to solve: where indicates the inner product. Note that could have "1" as the first element to include an intercept. Classical stochastic gradient descent proceeds as follows: where is uniformly sampled between 1 and . 
Although theoretical convergence of this procedure happens under relatively mild assumptions, in practice the procedure can be quite unstable. In particular, when is misspecified so that has large absolute eigenvalues with high probability, the procedure may diverge numerically within a few iterations. In contrast, implicit stochastic gradient descent (shortened as ISGD) can be solved in closed-form as: This procedure will remain numerically stable virtually for all as the learning rate is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and normalized least mean squares filter (NLMS). Even though a closed-form solution for ISGD is only possible in least squares, the procedure can be efficiently implemented in a wide range of models. Specifically, suppose that depends on only through a linear combination with features , so that we can write , where may depend on as well but not on except through . Least squares obeys this rule, and so does logistic regression, and most generalized linear models. For instance, in least squares, , and in logistic regression , where is the logistic function. In Poisson regression, , and so on. In such settings, ISGD is simply implemented as follows. Let , where is scalar. Then, ISGD is equivalent to: The scaling factor can be found through the bisection method since in most regular models, such as the aforementioned generalized linear models, function is decreasing, and thus the search bounds for are . Momentum Further proposals include the momentum method or the heavy ball method, which in ML context appeared in Rumelhart, Hinton and Williams' paper on backpropagation learning and borrowed the idea from Soviet mathematician Boris Polyak's 1964 article on solving functional equations. Stochastic gradient descent with momentum remembers the update at each iteration, and determines the next update as a linear combination of the gradient and the previous update: that leads to: where the parameter which minimizes is to be estimated, is a step size (sometimes called the learning rate in machine learning) and is an exponential decay factor between 0 and 1 that determines the relative contribution of the current gradient and earlier gradients to the weight change. The name momentum stems from an analogy to momentum in physics: the weight vector , thought of as a particle traveling through parameter space, incurs acceleration from the gradient of the loss ("force"). Unlike in classical stochastic gradient descent, it tends to keep traveling in the same direction, preventing oscillations. Momentum has been used successfully by computer scientists in the training of artificial neural networks for several decades. The momentum method is closely related to underdamped Langevin dynamics, and may be combined with simulated annealing. In mid-1980s the method was modified by Yurii Nesterov to use the gradient predicted at the next point, and the resulting so-called Nesterov Accelerated Gradient was sometimes used in ML in the 2010s. Averaging Averaged stochastic gradient descent, invented independently by Ruppert and Polyak in the late 1980s, is ordinary stochastic gradient descent that records an average of its parameter vector over time. That is, the update is the same as for ordinary stochastic gradient descent, but the algorithm also keeps track of When optimization is done, this averaged parameter vector takes the place of . 
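The momentum update described above amounts to keeping a running "velocity" vector that blends the previous update with the new gradient. The following sketch is illustrative only; the learning rate, decay factor and the toy objective are assumptions, not values from the text.

```python
import numpy as np

def sgd_momentum_step(w, velocity, grad, eta=0.01, beta=0.9):
    """One SGD-with-momentum update: decay the previous update and add the new gradient step."""
    velocity = beta * velocity - eta * grad
    return w + velocity, velocity

# Toy usage: minimize f(w) = ||w||^2 / 2, whose gradient is simply w.
w = np.array([5.0, -3.0])
v = np.zeros_like(w)
for _ in range(200):
    w, v = sgd_momentum_step(w, v, grad=w, eta=0.1)
print(w)   # close to the minimizer [0, 0]
```

Polyak–Ruppert averaging, as described in the preceding paragraph, would simply maintain a running mean of the iterates w alongside this loop and report that mean at the end.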
AdaGrad
AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative. Examples of such applications include natural language processing and image recognition. It still has a base learning rate , but this is multiplied with the elements of a vector which is the diagonal of the outer product matrix where , the gradient, at iteration . The diagonal is given by This vector essentially stores a historical sum of gradient squares by dimension and is updated after every iteration. The formula for an update is now or, written as per-parameter updates, Each gives rise to a scaling factor for the learning rate that applies to a single parameter . Since the denominator in this factor, is the ℓ2 norm of previous derivatives, extreme parameter updates get dampened, while parameters that get few or small updates receive higher learning rates. While designed for convex problems, AdaGrad has been successfully applied to non-convex optimization.
RMSProp
RMSProp (for Root Mean Square Propagation) is a method invented in 2012 by James Martens and Ilya Sutskever, at the time both PhD students in Geoffrey Hinton's group, in which the learning rate is, as in AdaGrad, adapted for each of the parameters. The idea is to divide the learning rate for a weight by a running average of the magnitudes of recent gradients for that weight. Unusually, it was not published in a formal article but merely described in a Coursera lecture (see the RMSProp entry at https://deepai.org/machine-learning-glossary-and-terms/rmsprop and Hinton's lecture video at 36:37, https://www.youtube.com/watch?v=-eyhCTvrEtE&t=36m37s). So, first the running average is calculated in terms of the mean square, where is the forgetting factor. The concept of storing the historical gradient as a sum of squares is borrowed from AdaGrad, but "forgetting" is introduced to solve AdaGrad's diminishing learning rates in non-convex problems by gradually decreasing the influence of old data. And the parameters are updated as . RMSProp has shown good adaptation of learning rate in different applications. RMSProp can be seen as a generalization of Rprop and is capable of working with mini-batches as well, as opposed to only full batches.
Adam
Adam (short for Adaptive Moment Estimation) is a 2014 update to the RMSProp optimizer combining it with the main feature of the momentum method. In this optimization algorithm, running averages with exponential forgetting of both the gradients and the second moments of the gradients are used. Given parameters and a loss function , where indexes the current training iteration (indexed at ), Adam's parameter update is given by: where is a small scalar (e.g. ) used to prevent division by 0, and (e.g. 0.9) and (e.g. 0.999) are the forgetting factors for gradients and second moments of gradients, respectively. Squaring and square-rooting is done element-wise. The initial proof establishing the convergence of Adam was incomplete, and subsequent analysis has revealed that Adam does not converge for all convex objectives.
Despite this, Adam continues to be used due to its strong performance in practice. Variants The popularity of Adam inspired many variants and enhancements. Some examples include: Nesterov-enhanced gradients: NAdam, FASFA varying interpretations of second-order information: Powerpropagation and AdaSqrt. Using infinity norm: AdaMax AMSGrad, which improves convergence over Adam by using maximum of past squared gradients instead of the exponential average. AdamX further improves convergence over AMSGrad. AdamW, which improves the weight decay. Sign-based stochastic gradient descent Even though sign-based optimization goes back to the aforementioned Rprop, in 2018 researchers tried to simplify Adam by removing the magnitude of the stochastic gradient from being taken into account and only considering its sign. Backtracking line search Backtracking line search is another variant of gradient descent. All of the below are sourced from the mentioned link. It is based on a condition known as the Armijo–Goldstein condition. Both methods allow learning rates to change at each iteration; however, the manner of the change is different. Backtracking line search uses function evaluations to check Armijo's condition, and in principle the loop in the algorithm for determining the learning rates can be long and unknown in advance. Adaptive SGD does not need a loop in determining learning rates. On the other hand, adaptive SGD does not guarantee the "descent property" – which Backtracking line search enjoys – which is that for all n. If the gradient of the cost function is globally Lipschitz continuous, with Lipschitz constant L, and learning rate is chosen of the order 1/L, then the standard version of SGD is a special case of backtracking line search. Second-order methods A stochastic analogue of the standard (deterministic) Newton–Raphson algorithm (a "second-order" method) provides an asymptotically optimal or near-optimal form of iterative optimization in the setting of stochastic approximation. A method that uses direct measurements of the Hessian matrices of the summands in the empirical risk function was developed by Byrd, Hansen, Nocedal, and Singer. However, directly determining the required Hessian matrices for optimization may not be possible in practice. Practical and theoretically sound methods for second-order versions of SGD that do not require direct Hessian information are given by Spall and others. (A less efficient method based on finite differences, instead of simultaneous perturbations, is given by Ruppert.) Another approach to the approximation Hessian matrix is replacing it with the Fisher information matrix, which transforms usual gradient to natural. These methods not requiring direct Hessian information are based on either values of the summands in the above empirical risk function or values of the gradients of the summands (i.e., the SGD inputs). In particular, second-order optimality is asymptotically achievable without direct calculation of the Hessian matrices of the summands in the empirical risk function. When the objective is a nonlinear least-squres loss where is the predictive model (e.g., a deep neural network) the objective's structure can be exploited to estimate 2nd order information using gradients only. The resulting methods are simple and often effective Approximations in continuous time For small learning rate stochastic gradient descent can be viewed as a discretization of the gradient flow ODE subject to additional stochastic noise. 
This approximation is only valid on a finite time horizon in the following sense: assume that all the coefficients are sufficiently smooth and let g be a sufficiently smooth test function. Then there exists a constant such that, for all iterations within the time horizon, the expected value of g along the stochastic gradient descent iterates stays within that constant, scaled by the learning rate, of its value along the gradient flow, where the expectation is taken with respect to the random choice of indices in the stochastic gradient descent scheme. Since this approximation does not capture the random fluctuations around the mean behavior of stochastic gradient descent, solutions to stochastic differential equations (SDEs) have been proposed as limiting objects. More precisely, the solution to an SDE whose noise term is an Itô integral with respect to a Brownian motion is a more precise approximation, in the sense that the analogous error bound holds to a higher order in the learning rate. However, this SDE only approximates the one-point motion of stochastic gradient descent. For an approximation of the stochastic flow one has to consider SDEs with infinite-dimensional noise. See also Backtracking line search Broken Neural Scaling Law Coordinate descent – changes one coordinate at a time, rather than one example Linear classifier Online machine learning Stochastic hill climbing Stochastic variance reduction Notes References Further reading External links Interactive paper explaining momentum. Stochastic optimization Computational statistics Gradient methods M-estimators Machine learning algorithms Convex optimization Statistical approximations
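The sketch promised in the backtracking line search discussion above follows here: a minimal Python example of gradient descent whose step size is repeatedly shrunk until the Armijo sufficient-decrease condition holds. The test function, shrink factor, and Armijo constant are illustrative assumptions.

```python
# Minimal sketch of gradient descent with backtracking (Armijo) line search
# on an assumed quadratic test function.
import numpy as np

def f(w):
    return 0.5 * (10.0 * w[0] ** 2 + w[1] ** 2)

def grad_f(w):
    return np.array([10.0 * w[0], w[1]])

def backtracking_gd(w, iters=50, alpha0=1.0, beta=0.5, c=1e-4):
    for _ in range(iters):
        g = grad_f(w)
        alpha = alpha0
        # Shrink the step until the Armijo sufficient-decrease condition holds:
        # f(w - alpha * g) <= f(w) - c * alpha * ||g||^2
        while f(w - alpha * g) > f(w) - c * alpha * np.dot(g, g):
            alpha *= beta
        w = w - alpha * g   # accepted step: the objective is guaranteed not to increase
    return w

print(backtracking_gd(np.array([1.0, 1.0])))
```

The inner while-loop is the "loop of unknown length" mentioned above; each accepted step satisfies the descent property by construction.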
Stochastic gradient descent
[ "Mathematics" ]
4,493
[ "Computational mathematics", "Mathematical relations", "Statistical approximations", "Computational statistics", "Approximations" ]
1,181,004
https://en.wikipedia.org/wiki/Knight%20shift
The Knight shift is a shift in the nuclear magnetic resonance (NMR) frequency of a paramagnetic substance, first published in 1949 by the UC Berkeley physicist Walter D. Knight. For an ensemble of N spins in a magnetic induction field, the nuclear Hamiltonian for the Knight shift is expressed in Cartesian form in terms of, for the ith spin, the gyromagnetic ratio, a vector of the Cartesian nuclear angular momentum operators, and a matrix that is a second-rank tensor similar to the chemical shift shielding tensor. The Knight shift refers to the relative shift K in NMR frequency for atoms in a metal (e.g. sodium) compared with the same atoms in a nonmetallic environment (e.g. sodium chloride). The observed shift reflects the local magnetic field produced at the sodium nucleus by the magnetization of the conduction electrons. The average local field in sodium augments the applied resonance field by approximately one part per 1000. In nonmetallic sodium chloride the local field is negligible in comparison. The Knight shift is due to the conduction electrons in metals. They introduce an "extra" effective field at the nuclear site, due to the spin orientations of the conduction electrons in the presence of an external field. This is responsible for the shift observed in the nuclear magnetic resonance. The shift comes from two sources: one is the Pauli paramagnetic spin susceptibility; the other is the s-component wavefunctions at the nucleus. Depending on the electronic structure, the Knight shift may be temperature dependent. However, in metals which normally have a broad featureless electronic density of states, Knight shifts are temperature independent. References Nuclear magnetic resonance spectroscopy Chemical physics
Knight shift
[ "Physics", "Chemistry", "Astronomy" ]
354
[ "Spectroscopy stubs", "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Nuclear magnetic resonance", "Nuclear magnetic resonance spectroscopy", "Astronomy stubs", "Nuclear chemistry stubs", "Nuclear magnetic resonance stubs", "nan", "Molecular physics stubs", "Spectroscop...
1,182,058
https://en.wikipedia.org/wiki/Double%20switching
Double switching, double cutting, or double breaking is the practice of using a multipole switch to close or open both the positive and negative sides of a DC electrical circuit, or both the hot and neutral sides of an AC circuit. This technique is used to prevent shock hazard in electric devices connected with unpolarised AC power plugs and sockets. Double switching is a crucial safety engineering practice in railway signalling, wherein it is used to ensure that a single false feed of current to a relay is unlikely to cause a wrong-side failure. It is an example of using redundancy to increase safety and reduce the likelihood of failure, analogous to double insulation. Double switching increases the cost and complexity of systems in which it is employed, for example by extra relay contacts and extra relays, so the technique is applied selectively where it can provide a cost-effective safety improvement. Examples Landslip and Washaway Detectors A landslip or washaway detector is buried in the earth embankment, and opens a circuit should a landslide occur. It is not possible to guarantee that the wet earth of the embankment will not complete the circuit which is supposed to break. If the circuit is double cut with positive and negative wires, any wet conductive earth is likely to blow a fuse on the one hand, and short the detecting relay on the other hand, either of which is almost certain to apply the correct warning signal. Accidents Clapham The Clapham Junction rail crash of 1988 was caused in part by the lack of double switching (known as "double cutting" in the British railway industry). The signal relay in question was switched only on the hot side, while the return current came back on an unswitched wire. A loose wire bypassed the contacts by which the train detection relays switched the signal, allowing the signal to show green when in fact there was a stationary train ahead. 35 people were killed in the resultant collision. United Flight 811 A similar accident on the United Airlines Flight 811 was caused in part by a single-switched safety circuit for the baggage door mechanism. Failure of the wiring insulation in that circuit allowed the baggage door to be unlocked by a false feed, leading to a catastrophic de-pressurisation, and the deaths of nine passengers. Signalling in NSW A study of railway electrical signalling in New South Wales from the 1900s, shows an ever increasing proportion of double switching compared to single switching. Double switching does of course cost more wires, more relay contacts, and testing. On the other hand double switching is inherently less prone to wrong side failures; it helps overcome short-circuit faults that are hard to test for. Partial double switching might double switch the lever controls, and the track circuits between one signal and the next, while single switching the track circuits in the less critical overlap beyond the next signal. Double switching is facilitated by more modern relays that have more contacts in less space: Pre-1950 Shelf Type Relay - 12 contacts (front (make) and back (break)) - full size Post-1950 Q-type plug in relay - 16 contacts (front (make) and back (break)) - about half size See also Redundancy (engineering) Double insulation Single-wire earth return References Couplers Railway signalling Safety Fault tolerance
Double switching
[ "Engineering" ]
657
[ "Reliability engineering", "Fault tolerance" ]
1,182,359
https://en.wikipedia.org/wiki/Clock%20synchronization
Clock synchronization is a topic in computer science and engineering that aims to coordinate otherwise independent clocks. Even when initially set accurately, real clocks will differ after some amount of time due to clock drift, caused by clocks counting time at slightly different rates. There are several problems that occur as a result of clock rate differences and several solutions, some being more acceptable than others in certain contexts. Terminology In serial communication, clock synchronization can refer to clock recovery which achieves frequency synchronization, as opposed to full phase synchronization. Such clock synchronization is used in synchronization in telecommunications and automatic baud rate detection. Plesiochronous or isochronous operation refers to a system with frequency synchronization and loose constraints on phase synchronization. Synchronous operation implies a tighter synchronization based on time perhaps in addition to frequency. Problems As a result of the difficulties managing time at smaller scales, there are problems associated with clock skew that take on more complexity in distributed computing in which several computers will need to realize the same global time. For instance, in Unix systems the make command is used to compile new or modified code and seeks to avoid recompiling unchanged code. The make command uses the clock of the machine it runs on to determine which source files need to be recompiled. If the sources reside on a separate file server and the two machines have unsynchronized clocks, the make program might not produce the correct results. Synchronization is required for accurate reproduction of streaming media. Clock synchronization is a significant component of audio over Ethernet systems. Solutions In a system with a central server, the synchronization solution is trivial; the server will dictate the system time. Cristian's algorithm and the Berkeley algorithm are potential solutions to the clock synchronization problem in this environment. In distributed computing, the problem takes on more complexity because a global time is not easily known. The most used clock synchronization solution on the Internet is the Network Time Protocol (NTP) which is a layered client-server architecture based on User Datagram Protocol (UDP) message passing. Lamport timestamps and vector clocks are concepts of the logical clock in distributed computing. In a wireless network, the problem becomes even more challenging due to the possibility of collision of the synchronization packets on the wireless medium and the higher drift rate of clocks on low-cost wireless devices. Berkeley algorithm The Berkeley algorithm is suitable for systems where a radio clock is not present. This system has no way of making sure of the actual time other than by maintaining a global average time as the global time. A time server will periodically fetch the time from all the time clients, average the results, and then report back to the clients the adjustment that needs be made to their local clocks to achieve the average. This algorithm highlights the fact that internal clocks may vary not only in the time they contain but also in the clock rate. Clock-sampling mutual network synchronization Clock-sampling mutual network synchronization (CS-MNS) is suitable for distributed and mobile applications. It has been shown to be scalable over mesh networks that include indirectly-linked non-adjacent nodes, and is compatible with IEEE 802.11 and similar standards. 
It can be accurate to the order of few microseconds, but requires direct physical wireless connectivity with negligible link delay (less than 1 microsecond) on links between adjacent nodes, limiting the distance between neighboring nodes to a few hundred meters. Cristian's algorithm Cristian's algorithm relies on the existence of a time server. The time server maintains its clock by using a radio clock or other accurate time source, then all other computers in the system stay synchronized with it. A time client will maintain its clock by making a procedure call to the time server. Variations of this algorithm make more precise time calculations by factoring in network radio propagation time. Satellite navigation systems In addition to its use in navigation, the Global Positioning System (GPS) can also be used for clock synchronization. The accuracy of GPS time signals is ±10 nanoseconds. Using GPS (or other satellite navigation systems) for synchronization requires a receiver connected to an antenna with unobstructed view of the sky. Inter-range Instrumentation Group time codes IRIG timecodes are standard formats for transferring timing information. Atomic frequency standards and GPS receivers designed for precision timing are often equipped with an IRIG output. The standards were created by the Telecommunications Working Group of the United States military's Inter-Range Instrumentation Group (IRIG), the standards body of the Range Commanders Council. Work on these standards started in October 1956, and the original standards were accepted in 1960. Network Time Protocol Network Time Protocol (NTP) is a highly robust protocol, widely deployed throughout the Internet. Well tested over the years, it is generally regarded as the state of the art in distributed time synchronization protocols for unreliable networks. It can reduce synchronization offsets to times of the order of a few milliseconds over the public Internet, and to sub-millisecond levels over local area networks. A simplified version of the NTP protocol, Simple Network Time Protocol (SNTP), can also be used as a pure single-shot stateless primary/secondary synchronization protocol, but lacks the sophisticated features of NTP, and thus has much lower performance and reliability levels. Precision Time Protocol Precision Time Protocol (PTP) is a master/slave protocol for delivery of highly accurate time over local area networks. Reference broadcast synchronization The Reference Broadcast Time Synchronization (RBS) algorithm is often used in wireless networks and sensor networks. In this scheme, an initiator broadcasts a reference message to urge the receivers to adjust their clocks. Reference Broadcast Infrastructure Synchronization The Reference Broadcast Infrastructure Synchronization (RBIS) protocol is a master/slave synchronization protocol, like RBS, based on a receiver/receiver synchronization paradigm. It is specifically tailored to be used in IEEE 802.11 wireless networks configured in infrastructure mode (i.e., coordinated by an access point). The protocol does not require any modification to the access point. Synchronous Ethernet Synchronous Ethernet uses Ethernet in a synchronous manner such that when combined with synchronization protocols such as PTP in the case of the White Rabbit Project, sub-nanosecond synchronization accuracy is achieved. 
Wireless ad hoc networks Synchronization is achieved in wireless ad hoc networks through sending synchronization messages in a multi-hop manner and each node progressively synchronizing with the node that is the immediate sender of a synchronization message. Examples include Flooding Time Synchronization Protocol (FTSP), and Harmonia, both able to achieve synchronization with accuracy on the order of microseconds. Huygens Researchers from Stanford and Google introduced Huygens, a probe-based, end-to-end clock synchronization algorithm. Huygens is implemented in software and thus can be deployed in data centers or in public cloud environments. By leveraging some key aspects of modern data centers, and applying novel estimation algorithms and signal processing techniques, the Huygens algorithm achieved an accuracy of tens of nanoseconds even at high network load. The findings of this research are being tested in financial market applications. See also Einstein synchronisation International Atomic Time Network Identity and Time Zone Synchronization (computer science) Time and frequency transfer Time signal Time standard Reference Broadcast Infrastructure Synchronization References Further reading Synchronization Clocks Distributed computing problems
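As an illustration of Cristian's algorithm described above, the following minimal Python sketch simulates a client that queries a time server and corrects its clock using half of the measured round-trip time. The simulated clock offset and network delays are illustrative assumptions, not parameters from any particular implementation.

```python
# Minimal simulation of Cristian's algorithm with assumed offsets and delays.
import random

SERVER_OFFSET = 0.0      # the server clock is taken as the reference
CLIENT_OFFSET = 1.75     # the client clock runs 1.75 s ahead (unknown to the client)

def server_time(t_true):
    return t_true + SERVER_OFFSET

def cristian_sync(t_true):
    delay_out = random.uniform(0.01, 0.05)    # request latency (assumed)
    delay_back = random.uniform(0.01, 0.05)   # response latency (assumed)
    t_request = t_true + CLIENT_OFFSET                          # client clock at send
    t_server = server_time(t_true + delay_out)                  # server timestamp in reply
    t_reply = t_true + delay_out + delay_back + CLIENT_OFFSET   # client clock at receive
    rtt = t_reply - t_request
    # Cristian's estimate of the current time: server timestamp plus half the round trip.
    estimate = t_server + rtt / 2.0
    return estimate - t_reply                                   # correction to apply locally

correction = cristian_sync(t_true=1000.0)
print(f"estimated correction: {correction:+.3f} s (true offset is {-CLIENT_OFFSET:+.3f} s)")
```

The residual error is half the difference between the outbound and return delays, which is why variations of the algorithm try to account for propagation time explicitly.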
Clock synchronization
[ "Physics", "Mathematics", "Technology", "Engineering" ]
1,606
[ "Machines", "Telecommunications engineering", "Distributed computing problems", "Clocks", "Computational problems", "Measuring instruments", "Physical systems", "Mathematical problems", "Synchronization" ]
1,182,993
https://en.wikipedia.org/wiki/Pomeron
In physics, the pomeron is a Regge trajectory — a family of particles with increasing spin — postulated in 1961 to explain the slowly rising cross section of hadronic collisions at high energies. It is named after Isaak Pomeranchuk. Overview While other trajectories lead to falling cross sections, the pomeron can lead to logarithmically rising cross sections — which, experimentally, are approximately constant ones. The identification of the pomeron and the prediction of its properties was a major success of the Regge theory of strong interaction phenomenology. In later years, a BFKL pomeron was derived in further kinematic regimes from perturbative calculations in QCD, but its relationship to the pomeron seen in soft high energy scattering is still not fully understood. One consequence of the pomeron hypothesis is that the cross sections of proton–proton and proton–antiproton scattering should be equal at high enough energies. This was demonstrated by the Soviet physicist Isaak Pomeranchuk by analytic continuation assuming only that the cross sections do not fall. The pomeron itself was introduced by Vladimir Gribov, and it incorporated this theorem into Regge theory. Geoffrey Chew and Steven Frautschi introduced the pomeron in the West. The modern interpretation of Pomeranchuk's theorem is that the pomeron has no conserved charges—the particles on this trajectory have the quantum numbers of the vacuum. The pomeron was well accepted in the 1960s despite the fact that the measured cross sections of proton–proton and proton–antiproton scattering at the energies then available were unequal. The pomeron carries no charges. The absence of electric charge implies that pomeron exchange does not lead to the usual shower of Cherenkov radiation, while the absence of color charge implies that such events do not radiate pions. This is in accord with experimental observation. In high energy proton–proton and proton–antiproton collisions in which it is believed that pomerons have been exchanged, a rapidity gap is often observed: This is a large angular region in which no outgoing particles are detected. Odderon The odderon, the counterpart of the pomeron that carries odd charge parity, was introduced in 1973 by Leszek Łukaszuk and Basarab Nicolescu. Odderons exist in QCD as a compound state of three reggeized gluons. Its observability was further predicted theoretically in 2015, and the odderon was potentially observed only in 2017 by the TOTEM experiment at the LHC. This observation was later confirmed in a joint analysis with the DØ experiment at the Tevatron and appeared in the media as the particle's discovery in March 2021. String theory In early particle physics, the 'pomeron sector' was what is now called the 'closed string sector' while what was called the 'reggeon sector' is now the 'open string theory'. See also Giuseppe Cocconi Tamás Csörgő References Further reading External links Pomerons at Fermilab Subatomic particles Quantum chromodynamics Hadrons Hypothetical particles
Pomeron
[ "Physics" ]
645
[ "Hypothetical particles", "Matter", "Hadrons", "Unsolved problems in physics", "Particle physics", "Nuclear physics", "Atoms", "Physics beyond the Standard Model", "Subatomic particles" ]
1,183,118
https://en.wikipedia.org/wiki/Total%20internal%20reflection%20fluorescence%20microscope
A total internal reflection fluorescence microscope (TIRFM) is a type of microscope with which a thin region of a specimen, usually less than 200 nanometers can be observed. TIRFM is an imaging modality which uses the excitation of fluorescent cells in a thin optical specimen section that is supported on a glass slide. The technique is based on the principle that when excitation light is totally internally reflected in a transparent solid coverglass at its interface with a liquid medium, an electromagnetic field, also known as an evanescent wave, is generated at the solid-liquid interface with the same frequency as the excitation light. The intensity of the evanescent wave exponentially decays with distance from the surface of the solid so that only fluorescent molecules within a few hundred nanometers of the solid are efficiently excited. Two-dimensional images of the fluorescence can then be obtained, although there are also mechanisms in which three-dimensional information on the location of vesicles or structures in cells can be obtained. History Widefield fluorescence was introduced in 1910 which was an optical technique that illuminates the entire sample. Confocal microscopy was then introduced in 1960 which decreased the background and exposure time of the sample by directing light to a pinpoint and illuminating cones of light into the sample. In the 1980s, the introduction of TIRFM further decreased background and exposure time by only illuminating the thin section of the sample being examined. Background There are two common methods for producing the evanescent wave for TIRFM. The first is the prism method which uses a prism to direct the laser toward the interface between the coverglass and the media/cells at an incident angle sufficient to cause total internal reflection. This configuration has been applied to cellular microscopy for over 30 years but has never become a mainstream tool due to several limitations. Although there are many variations of the prism configuration, most restrict access to the specimen which makes it difficult to perform manipulations, inject media into the specimen space, or carry out physiological measurements. Another disadvantage is that in most configurations based on the inverted microscope designs, the illumination is introduced on the specimen side opposite of the objective optics which requires imaging of the evanescent field region through the bulk of the specimen. There is great complexity and precision required in imaging this system which meant that the prism method was not used by many biologists but rather limited to use by physicists. The other method is known as the objective lens method which has increased the use of TIRFM in cellular microscopy and increased furthermore since a commercial solution became available. In this mechanism, one can easily switch between standard widefield fluorescence and TIRF by changing the off-axis position of the beam focus at the objective's back focal plane. There are several developed ways to change the positions of the beam such as using an actuator that can change the position in relation to the fluorescence illuminator that is attached to the microscope. Application In cell and molecular biology, a large number of molecular events in cellular surfaces such as cell adhesion, binding of cells by hormones, secretion of neurotransmitters, and membrane dynamics have been studied with conventional fluorescence microscopes. 
However, fluorophores that are bound to the specimen surface and those in the surrounding medium exist in an equilibrium state. When these molecules are excited and detected with a conventional fluorescence microscope, the resulting fluorescence from those fluorophores bound to the surface is often overwhelmed by the background fluorescence due to the much larger population of non-bound molecules. TIRFM allows for selective excitation of the surface-bound fluorophores, while non-bound molecules are not excited and do not fluoresce. Due to the fact of sub-micron surface selectivity, TIRFM has become a method of choice for single molecule detection. There are many applications of TIRFM in cellular microscopy. Some of these applications include: Measuring the kinetics of receptor endocytosis in response to ligand binding and receptor movement Observing exocytic events through the loading of vesicles undergoing exocytosis with fluorescent dyes Qualitatively and quantitatively describing the roles different proteins play in endocytosis/exocytosis Observing the size, movement, and distance apart of the regions of contact between a cell and a solid substrate With the ability to resolve individual vesicles optically and follow the dynamics of their interactions directly, TIRFM provides the capability to study the vast number of proteins involved in neurobiological processes in a manner that was not possible before. Benefits TIRFM provides several benefits over standard widefield and confocal fluorescence microscopy such as: The background is substantially decreased so structures can be seen clearly There is virtually no out-of-focus fluorescence collected which decrease blurring effects Cells are exposed to a significantly smaller amount of light which limits phototoxicity to cells Overview The idea of using total internal reflection to illuminate cells contacting the surface of glass was first described by E.J. Ambrose in 1956. This idea was then extended by Daniel Axelrod at the University of Michigan, Ann Arbor in the early 1980s as TIRFM. A TIRFM uses an evanescent wave to selectively illuminate and excite fluorophores in a restricted region of the specimen immediately adjacent to the glass-water interface. The evanescent electromagnetic field decays exponentially from the interface, and thus penetrates to a depth of only approximately 100 nm into the sample medium. Thus the TIRFM enables a selective visualization of surface regions such as the basal plasma membrane (which are about 7.5 nm thick) of cells. Note, however, that the region visualized is at least a few hundred nanometers wide, so the cytoplasmic zone immediately beneath the plasma membrane is necessarily visualized in addition to the plasma membrane during TIRF microscopy. The selective visualization of the plasma membrane renders the features and events on the plasma membrane in living cells with high axial resolution. TIRF can also be used to observe the fluorescence of a single molecule, making it an important tool of biophysics and quantitative biology. TIRF microscopy has also been applied in the single molecule detection of DNA biomarkers and SNP discrimination. Cis-geometry (through-objective TIRFM) and trans-geometry (prism- and lightguide based TIRFM) have been shown to provide different quality of the effect of total internal reflection. 
In the case of trans-geometry, the excitation lightpath and the emission channel are separated, while in the case of objective-type TIRFM they share the objective and other optical elements of the microscope. Prism-based geometry was shown to generate clean evanescent wave, which exponential decay is close to theoretically predicted function. In the case of objective-based TIRFM, however, the evanescent wave is contaminated with intense stray light. The intensity of stray light was shown to amount 10–15% of the evanescent wave, which makes it difficult to interpret data obtained by objective-type TIRFM Mechanism The basic components of the TIRFM device include: Excitation beam light source Cover slip and immersion oil Objective lens Sample specimen Detector Objective-based vs prism-based Key differences between objective-based (cis) and prism-based (trans) TIRFM are that prism based TIRFM requires usage of a prism/solution interface to generate the evanescent field, while objective-based TIRFM does not require a prism and utilizes a cover slip/solution interface to generate the evanescent field. Typically objective-based TIRFM are more popularly used, however have lowered imaging quality due to stray light noise within the evanescent wave. Prism-based Less extraneous scattering Much cheaper (hundreds instead of thousands of dollars) Best for low-mid range magnification and water immersion objectives Easiest with free collimated laser sources Larger range of incidence angles Desirable to achieve smallest evanescent field depth Objective-based High magnification and aperture Stable, easy to set up and align Works with free collimated laser, optical fiber, or conventional arc sources Methodology Fundamental physics TIRFM is predicated on the optical phenomena of total internal reflection, in which waves arriving at a medium interface do not transmit into medium 2 but are completely reflected back into medium 1. Total internal reflection requires medium 2 to have a lower refractive index than medium 1, and for the waves must be incident at sufficiently oblique angles on the interface. An observed phenomena accompanying total internal reflection is the evanescent wave, which spatially extends away perpendicularly from the interface into medium 2, and decays exponentially, as a factor of wavelength, refractive index, and incident angle. It is the evanescent wave which is used to achieve increased excitation of the fluorophores close to the surface of the sample, and diminished excitation of superfluous fluorophores within solution. For practical purposes, in objective based TIRF, medium 1 is typically a high refractive index glass coverslip, and medium 2 is the sample in solution with a lower refractive index. There may be immersion oil between the lens and the glass coverslip to prevent significant refraction through air. Evanescent wave The critical angle for excitatory light incidence can be derived from Snell's law: For the refractive index of sample, the refractive index of the cover slip. Thus, as the angle of incidence reaches , we begin observing effects of total internal reflection and evanescent wave, and as it surpasses these effects are more prevalent. The intensity of the evanescent wave is given by: With penetration depth given by: Typically, ≤~100 nanometers, which is typically much smaller than the wavelength of light, and much thinner than a slice from confocal microscopes. 
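As a rough numerical illustration of the relations above, the following Python sketch evaluates the critical angle from Snell's law and the 1/e penetration depth of the evanescent intensity using the standard expression d = λ/(4π)·(n1²·sin²θ − n2²)^(−1/2). The refractive indices, excitation wavelength, and incidence angle are illustrative assumptions typical of a glass coverslip and an aqueous sample, not values taken from the text.

```python
# Minimal numeric sketch of the critical angle and evanescent penetration depth,
# using assumed values for a glass/water interface.
import math

n1 = 1.52                    # coverslip (glass), assumed
n2 = 1.33                    # aqueous sample, assumed
wavelength = 488e-9          # excitation wavelength in vacuum (m), assumed
theta = math.radians(70.0)   # incidence angle, chosen above the critical angle

theta_c = math.asin(n2 / n1)
print(f"critical angle: {math.degrees(theta_c):.1f} degrees")

# Standard expression for the 1/e penetration depth of the evanescent intensity.
d = wavelength / (4.0 * math.pi * math.sqrt((n1 * math.sin(theta)) ** 2 - n2 ** 2))
print(f"penetration depth: {d * 1e9:.0f} nm")

# Relative intensity I(z)/I(0) = exp(-z/d) at a few distances from the interface.
for z in (0.0, 50e-9, 100e-9, 200e-9):
    print(f"z = {z * 1e9:>5.0f} nm  ->  I/I0 = {math.exp(-z / d):.2f}")
```

With these assumed values the depth comes out to roughly 75 nm, consistent with the "about 100 nm" scale quoted above, and the intensity table shows why fluorophores a few hundred nanometres from the interface are barely excited.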
For TIRFM imaging the wavelength of the excitation beam within the sample can be selected for by filtering. Additionally, the range of incident angles is determined by the numerical aperture (NA) of the objective, and requires that NA > . This parameter can be adjusted by changing the angle the excitation beam enters the objective lens. Finally, the reflective indices () of the solution and cover slip can be experimentally found or reported by manufacturers. Excitation beam For complex fluoroscope microscopy techniques, lasers are the preferred light source as they are highly uniform, intense, and near-monochromatic. However, it is noted that ARC LAMP light sources and other types of sources may also work. Typically the wavelength of excitation beam is designated by the requirements of the fluorophores within the sample, with most common excitation wavelengths being in the 400–700 nm range for biological samples. In practice, a lightbox will generate a high intensity multichromatic laser, which will then be filtered to allow the desired wavelengths through to excite the sample. For objective-based TIRFM, the excitation beam and fluoresced emission beam will be captured via the same objective lens. Thus, to split the beams, a dichromatic mirror is used to reflect the incoming excitation beam towards the objective lens, and allow the emission beam to pass through into the detector. Additional filtering may be required to further separate emission and excitation wavelengths. Emission beam When excited with specific wavelengths of light, fluorophore dyes will reemit light at longer wavelengths (which contain less energy). In the context of TIRFM, only fluorophores close to the interface will be readily excited by the evanescent field, while those past ~100 nm will be highly attenuated. Light emitted by the fluorophores will be undirected, and thus will pass through the objective lens at varying locations with varying intensities. This signal will then pass through the dichromatic mirror and onward to the detector. Cover slip and immersion oil Glass cover slips typically have a reflective index around , while the immersion oil refractive index is a comparable . The medium of air, which has a refractive index of , would cause refraction of the excitation beam between the objective and the coverslip, thus the oil is used to buffer the region and prevent superfluous interface interactions before the beam reaches the interface between coverslip and sample. Objective lens The objective lens numerical aperture (NA) specifies the range of angles over which the system can accept or emit light. To achieve the greatest incident angles, it is desirable to pass light at an off-axis angle through the peripheries of the lens. Back focal plane (BPF) The back focal plane (also called "aperture plane") is the plane through which the excitatory beam is focused before passing through the objective. Adjusting the distance between the objective and BPF can yield different imaging magnification, as the incident angle will become less or more steep. The beam must be passed through the BPF off-axis in order to pass through the objective at its ends, allowing for the angle to be sufficiently greater than the critical angle. The beam must also be focused at the BPF because this ensures that the light passing through the objective is collimated, interacting with the cover slip at the same angle and thus all totally internally reflecting. 
Sample The sample should be adsorbed to the surface of the glass cover slide and stained with appropriate fluorophores to resolve the features desired within the sample. This is in protocol with any other fluorescent microscopy technique. Dichroic (dichromatic) filter The dichroic filter is an edge filter used at an oblique angle of incidence (typically 45°) to efficiently reflect light in the excitation band and to transmit light in the emission band. The 45° angle of the filter separates the path of the excitation and emission beam. The filter is composed of a complex system of multiple layers of metals, metal salts and dielectrics which have been vacuum-deposited onto thin glass. This coating is designed to have high reflectivity for shorter wavelengths and high transmission for longer wavelengths. While the filter transmits the selected excitation light (shorter wavelength) through the objective and onto the plane of the specimen, it also passes emission fluorescence light (longer wavelength) to the barrier filter and reflecting any scattered excitation light back in the direction of the laser source. This maximizes the amount of exciting radiation passing through the filter and emitted fluorescence beam that is detected by the detector. Barrier filter The barrier filter mainly blocks off undesired wavelengths, especially shorter excitation light wavelengths. It is typically a bandpass filter that passes only the wavelengths emitted by the fluorophore and blocks all undesired light outside this band. More modern microscopes enable the barrier filter to be changed according to the wavelength of the fluorophore's specific emission. Image detection and resolution The image is detected by a charged-coupled device (CCD) digital camera. CCD cameras have photon detectors, which are thin silicon wafers, assembled into 2D arrays of light-sensitive regions. The detector arrays capture and store image information in the form of localized electrical charge that varies with incident light intensity. As shown in the schematic the photons are transform to electrons by the detectors and the electrons are converted to readable electrical signal in the circuit board. The electrical signal is then convoluted with a point spread function (PSF) to sample the original signal. As such, image resolution is highly dependent on the number of detectors and the point spread function will determine the image resolution. Image artifact and noise Most fluorescence imaging techniques exhibit background noise due to illuminating and reconstructing large slices (in the z-direction) of the samples. Since TIRFM uses an evanescent wave to fluoresce a thin slice of the sample, there is inherently less background noise and artifacts. However, there are still other noises and artifacts such as poisson noise, optical aberrations, photobleaching, and other fluorescence molecules. Poissonian noise are fundamental uncertainties with the measurement of light. This will cause uncertainties during the detection of fluorescence photons. If N photons are measured in a particular measurement, there is a 63% probability that the true average value is in the range between N +√N and N −√N. This noise may cause misrepresentation of the object at incorrect pixel locations. Optical aberrations can arise from diffraction of fluorescence light or microscope and objective misalignment. Diffraction of light on the sample slide can spread the fluorescence signal and result in blurring in the convoluted images. 
Similarly, if there is a misalignment between the objective lens, filter, and detector, the excitation or emission beam may not be in focus and can cause blurring in the images. Photobleaching can occur when the covalent or noncovalent bonds in the fluorophores are destructed by the excitation light and can no longer fluoresce. The fluorescing substances will always degrade to some extent by the energy of the exciting radiation and will cause the fluorescence to fade and result in a dark blurry image. Photobleaching is inevitable but can be minimized by avoiding unwanted light exposure and using immersion oils to minimize light scattering. Autofluorescence can occur in certain cell structures where the natural compound in the structure would fluoresce after being excited at relatively shorter wavelengths (similar to that of the excitation wavelength). Induced fluorescence can also occur when certain non-autofluorescent compounds become fluorescent after binding to certain chemicals (such as formaldehyde). These fluorescence can result in artifacts or background noise in the image. Noise from other fluorescence compounds can be effectively eliminated by using filters to capture the desired fluorescence wavelength, or by making sure the autofluorescence compounds are not present in the sample. Current and future work Modern fluorescence techniques attempt to incorporate methods to eliminate some blurring and noises. Optical aberrations are generally deterministic (it is constant throughout the image process and across different samples). Deterministic blurring can be eliminated by deconvoluting the signal and subtracting the known artifact. The deconvolution technique is simply using an inverse fourier transform to obtain the original fluorescence signal and remove the artifact. Nevertheless, deconvolution has only been shown to work if there is a strong fluorescence signal or when the noise is clearly identified. In addition, deconvolution performs poorly because it does not include statistical information and can not reduce non-deterministic noise such as poissonian noise. To obtain better image resolution and quality, researchers have used statistical techniques to model the probability where photons may be distributed on the detector. This technique, called the maximum likelihood method, is being further improved by algorithms to improve its performance speed. References External links Interactive Fluorescence Dye and Filter Database Carl Zeiss Interactive Fluorescence Dye and Filter Database. TIRF Microscopy: Introduction and Applications TIRF Tutorial from Microscopy U TIRF Microscopy: Overview TIRF Tutorial from Olympus Microscopy Resource Center Olympus TIRFM Microscopes commercial TIRF microscope systems Carl Zeiss Laser TIRF 3 commercial TIRF microscope systems Lightguide- and prism-based TIRF microscopy TIRF-Labs.com :Commercial TIRF Microscopy and Spectroscopy. Selecting TIRFM geometry for your application TIRF FLIM microscopy Lambert Instruments TIRF - FLIM microscopy Schwartz Research Group, CU-Boulder Single Molecule Imaging Research Group American inventions Cell imaging Scientific techniques Fluorescence techniques Laboratory equipment Microscopes Optical microscopy techniques
Total internal reflection fluorescence microscope
[ "Chemistry", "Technology", "Engineering", "Biology" ]
4,037
[ "Cell imaging", "Measuring instruments", "Microscopes", "Microscopy", "Fluorescence techniques" ]
1,183,122
https://en.wikipedia.org/wiki/Water%20potential
Water potential is the potential energy of water per unit volume relative to pure water in reference conditions. Water potential quantifies the tendency of water to move from one area to another due to osmosis, gravity, mechanical pressure and matrix effects such as capillary action (which is caused by surface tension). The concept of water potential has proved useful in understanding and computing water movement within plants, animals, and soil. Water potential is typically expressed in potential energy per unit volume and very often is represented by the Greek letter ψ. Water potential integrates a variety of different potential drivers of water movement, which may operate in the same or different directions. Within complex biological systems, many potential factors may be operating simultaneously. For example, the addition of solutes lowers the potential (negative vector), while an increase in pressure increases the potential (positive vector). If the flow is not restricted, water will move from an area of higher water potential to an area that is lower potential. A common example is water with dissolved salts, such as seawater or the fluid in a living cell. These solutions have negative water potential, relative to the pure water reference. With no restriction on flow, water will move from the locus of greater potential (pure water) to the locus of lesser (the solution); flow proceeds until the difference in potential is equalized or balanced by another water potential factor, such as pressure or elevation. Components of water potential Many different factors may affect the total water potential, and the sum of these potentials determines the overall water potential and the direction of water flow: where: is the reference correction, is the solute or osmotic potential, is the pressure component, is the gravimetric component, is the potential due to humidity, and is the potential due to matrix effects (e.g., fluid cohesion and surface tension.) All of these factors are quantified as potential energies per unit volume, and different subsets of these terms may be used for particular applications (e.g., plants or soils). Different conditions are also defined as reference depending on the application: for example, in soils, the reference condition is typically defined as pure water at the soil surface. Pressure potential Pressure potential is based on mechanical pressure and is an important component of the total water potential within plant cells. Pressure potential increases as water enters a cell. As water passes through the cell wall and cell membrane, it increases the total amount of water present inside the cell, which exerts an outward pressure that is opposed by the structural rigidity of the cell wall. By creating this pressure, the plant can maintain turgor, which allows the plant to keep its rigidity. Without turgor, plants will lose structure and wilt. The pressure potential in a plant cell is usually positive. In plasmolysed cells, pressure potential is almost zero. Negative pressure potentials occur when water is pulled through an open system such as a plant xylem vessel. Withstanding negative pressure potentials (frequently called tension) is an important adaptation of the xylem. This tension can be measured empirically using the Pressure bomb. Osmotic potential (solute potential) Pure water is usually defined as having an osmotic potential () of zero, and in this case, solute potential can never be positive. 
The relationship of solute concentration (in molarity) to solute potential is given by the van 't Hoff equation: Ψ_s = −iMRT, where M is the concentration in molarity of the solute, i is the van 't Hoff factor (the ratio of the amount of particles in solution to the amount of formula units dissolved), R is the ideal gas constant, and T is the absolute temperature. For example, when a solute is dissolved in water, water molecules are less likely to diffuse away via osmosis than when there is no solute. A solution will have a lower and hence more negative water potential than that of pure water. Furthermore, the more solute molecules present, the more negative the solute potential is. Osmotic potential has important implications for many living organisms. If a living cell is surrounded by a more concentrated solution, the cell will tend to lose water to the more negative water potential (Ψ_w) of the surrounding environment. This can be the case for marine organisms living in sea water and halophytic plants growing in saline environments. In the case of a plant cell, the flow of water out of the cell may eventually cause the plasma membrane to pull away from the cell wall, leading to plasmolysis. Most plants, however, have the ability to increase solute inside the cell to drive the flow of water into the cell and maintain turgor. This effect can be used to power an osmotic power plant. A soil solution also experiences osmotic potential. The osmotic potential is made possible due to the presence of both inorganic and organic solutes in the soil solution. As water molecules increasingly clump around solute ions or molecules, the freedom of movement, and thus the potential energy, of the water is lowered. As the concentration of solutes is increased, the osmotic potential of the soil solution is reduced. Since water has a tendency to move toward lower energy levels, water will want to travel toward the zone of higher solute concentrations. However, liquid water will only move in response to such differences in osmotic potential if a semipermeable membrane exists between the zones of high and low osmotic potential. A semipermeable membrane is necessary because it allows water through its membrane while preventing solutes from moving through its membrane. If no membrane is present, movement of the solute, rather than of the water, largely equalizes concentrations. Since regions of soil are usually not divided by a semipermeable membrane, the osmotic potential typically has a negligible influence on the mass movement of water in soils. On the other hand, osmotic potential has an extreme influence on the rate of water uptake by plants. If soils are high in soluble salts, the osmotic potential is likely to be lower in the soil solution than in the plant root cells. In such cases, the soil solution would severely restrict the rate of water uptake by plants. In salty soils, the osmotic potential of soil water may be so low that the cells in young seedlings start to collapse (plasmolyze). Matrix potential (Matric potential) When water is in contact with solid particles (e.g., clay or sand particles within soil), adhesive intermolecular forces between the water and the solid can be large and important. The forces between the water molecules and the solid particles in combination with attraction among water molecules promote surface tension and the formation of menisci within the solid matrix. Force is then required to break these menisci.
The magnitude of matrix potential depends on the distances between solid particles—the width of the menisci (also capillary action and differing Pa at ends of the capillary)—and the chemical composition of the solid matrix (meniscus, macroscopic motion due to ionic attraction). In many cases, the absolute value of matrix potential can be relatively large in comparison to the other components of water potential discussed above. Matrix potential markedly reduces the energy state of water near particle surfaces. Although water movement due to matrix potential may be slow, it is still extremely important in supplying water to plant roots and in engineering applications. The matrix potential is always negative because the water attracted by the soil matrix has an energy state lower than that of pure water. Matrix potential only occurs in unsaturated soil above the water table. If the matrix potential approaches a value of zero, nearly all soil pores are completely filled with water, i.e. fully saturated and at maximum retentive capacity. The matrix potential can vary considerably among soils. In the case that water drains into less-moist soil zones of similar porosity, the matrix potential is generally in the range of −10 to −30 kPa. Empirical examples Soil-plant-air continuum At a potential of 0 kPa, the soil is in a saturation state. At saturation, all soil pores are filled with water, and water typically drains from large pores by gravity. At a potential of −33 kPa, or −1/3 bar, (−10 kPa for sand), soil is at field capacity. Typically, at field capacity, air is in the macropores, and water is in the micropores. Field capacity is the optimal condition for plant growth and microbial activity. At a potential of −1500 kPa, the soil is at its permanent wilting point, at which plant roots cannot extract the water through osmotic diffusion. Soil waterways still evaporate at more negative potentials down to a hygroscopic level, at which soil water is held by solid particles in a thin film by molecular adhesion forces. In contrast, atmospheric water potentials are much more negative—a typical value for dry air is −100 MPa, though this value depends on the temperature and the humidity. Root water potential must be more negative than the soil, and the stem water potential must be an intermediate lower value than the roots but higher than the leaf water potential to create a passive flow of water from the soil to the roots, up the stem, to the leaves and then into the atmosphere. Measurement techniques A tensiometer, electrical resistance gypsum block, neutron probes, or time-domain reflectometry (TDR) can be used to determine soil water potential energy. Tensiometers are limited to 0 to −85 kPa, electrical resistance blocks are limited to −90 to −1500 kPa, neutron probes are limited to 0 to −1500 kPa, and a TDR is limited to 0 to −10,000 kPa. A scale can estimate water weight (percentage composition) if special equipment is not on hand. See also Water retention curve Pore water pressure Notes External links Plant physiology Hydrostatics Soil physics Potentials
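As a worked illustration of the van 't Hoff relation discussed above (Ψ_s = −iMRT), here is a minimal Python sketch that evaluates the osmotic component of water potential for a few concentrations. The concentrations, temperature, and the simple comparison of cell versus medium are illustrative assumptions for demonstration only.

```python
# Minimal sketch of the van 't Hoff osmotic potential, with assumed example values.
R = 0.008314  # ideal gas constant in L * MPa / (mol * K), convenient units for MPa results

def osmotic_potential(C, i, T=298.15):
    """Osmotic potential in MPa for molar concentration C (mol/L),
    van 't Hoff factor i, and absolute temperature T (K)."""
    return -i * C * R * T

# Pure water reference: no solute, zero osmotic potential.
print(osmotic_potential(0.0, 1))
# 0.1 mol/L NaCl (i ~ 2 when fully dissociated): about -0.5 MPa.
print(round(osmotic_potential(0.1, 2), 2))
# Water tends to move toward the more negative potential if a membrane separates the zones.
cell, medium = osmotic_potential(0.3, 1), osmotic_potential(0.1, 1)
print("water tends to flow into the cell" if cell < medium else "water tends to flow out of the cell")
```

The sign convention matches the text: adding solute makes the potential more negative, and unrestricted water moves from higher toward lower potential.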
Water potential
[ "Physics", "Biology" ]
2,070
[ "Plant physiology", "Applied and interdisciplinary physics", "Plants", "Soil physics" ]
1,183,272
https://en.wikipedia.org/wiki/Alan%20J.%20Heeger
Alan Jay Heeger (born January 22, 1936) is an American physicist, academic and Nobel Prize laureate in chemistry. Heeger was elected a member of the National Academy of Engineering in 2002 for co-founding the field of conducting polymers and for pioneering work in making these novel materials available for technological applications. Life and career Heeger was born in Sioux City, Iowa, into a Jewish family. He grew up in Akron, Iowa, where his father owned a general store. At age nine, following his father's death, the family moved to Sioux City. Heeger earned a B.S. in physics and mathematics from the University of Nebraska-Lincoln in 1957, and a Ph.D. in physics from the University of California, Berkeley in 1961. From 1962 to 1982 he was on the faculty of the University of Pennsylvania. In 1982 he commenced his present appointment as a professor in the Physics Department and the Materials Department at the University of California, Santa Barbara. His research has led to the formation of numerous start-up companies including Uniax, Konarka, and Sirigen, founded in 2003 by Guillermo C. Bazan, Patrick J. Dietzen, and Brent S. Gaylord. Alan Heeger was a founder of Uniax, which was acquired by DuPont. He won the Nobel Prize for Chemistry in 2000 along with Alan G. MacDiarmid and Hideki Shirakawa "for their discovery and development of conductive polymers"; they published their results on polyacetylene, a conductive polymer, in 1977. This led to the construction of the Su–Schrieffer–Heeger model, a simple model for topological insulators. He had won the Oliver E. Buckley Prize of the American Physical Society in 1983 and, in 1995, the Balzan Prize for Science of Non-Biological Materials. His sons are the neuroscientist David Heeger and the immunologist Peter Heeger. In October 2010, Heeger participated in the USA Science and Engineering Festival's Lunch with a Laureate program where middle and high school students engage in an informal conversation with a Nobel Prize-winning scientist over a brown-bag lunch. Heeger is also a member of the USA Science and Engineering Festival's Advisory Board. Heeger has been a judge of the STAGE International Script Competition three times (2006, 2007, 2010). "Perhaps the greatest pleasure of being a scientist is to have an abstract idea, then to do an experiment (more often a series of experiments is required) that demonstrates the idea was correct; that is, Nature actually behaves as conceived in the mind of the scientist. This process is the essence of creativity in science. I have been fortunate to have experienced this intense pleasure many times in my life." Alan J Heeger, Never Lose Your Nerve! Publication list Journal Articles: Technical Reports: Heeger, A. J. and A. G. MacDiarmid. "Polyacetylene, (CH)x, as an Emerging Material for Solar Cell Applications. Final Technical Report, March 19, 1979 – March 18, 1980," University of Pennsylvania, United States Department of Energy, (June 5, 1980). Heeger, A. J., Sinclair, M., et al. "Subgap Absorption in Conjugated Polymers," Sandia National Laboratory, United States Department of Energy, (1991). Heeger, A. J., Sinclair, M., et al. "Measurements of Photo-induced Changes in Conjugated Polymers," Sandia National Laboratory, United States Department of Energy, (1991). Autobiography, World Scientific Publishing. See also List of Nobel laureates affiliated with the University of California, Santa Barbara List of Jewish Nobel laureates References External links Curriculum Vitae of Alan J.
Heeger, posted at University of California, Santa Barbara. Retrieved November 18, 2007 including the Nobel lecture December 8, 2000 Semiconducting and Metallic Polymers: The Fourth Generation of Polymeric Materials Free to view video interview, Harry Kroto NL talks to Alan Heeger, 2005, provided by the Vega Science Trust UCSB profile Photos and video from presentation in Brno University of Technology, Czech Republic 1936 births Living people Nobel laureates in Chemistry American Nobel laureates Jewish Nobel laureates Members of the United States National Academy of Sciences Foreign members of the Chinese Academy of Sciences 21st-century American physicists Jewish American physicists Molecular electronics Organic semiconductors Polymer scientists and engineers People from Sioux City, Iowa University of Nebraska–Lincoln alumni University of California, Berkeley alumni University of California, Santa Barbara faculty Members of the United States National Academy of Engineering Fellows of the American Physical Society Oliver E. Buckley Condensed Matter Prize winners People from Akron, Iowa
Alan J. Heeger
[ "Chemistry", "Materials_science" ]
962
[ "Molecular physics", "Semiconductor materials", "Molecular electronics", "Nanotechnology", "Organic semiconductors" ]
1,183,515
https://en.wikipedia.org/wiki/Scratch%20drive%20actuator
A scratch drive actuator (SDA) is a microelectromechanical system device that converts electrical energy into one-dimensional motion. Description The actuator component can come in many shapes and sizes, depending on the fabrication method used. It can be visualised as an 'L'. The smaller end is called the 'bushing'. The actuator sits on top of a substrate that has a thin insulating dielectric layer on top. A voltage is applied between the actuator and the substrate, and the resulting electrostatic force pulls the body of the actuator downwards. When this occurs, the bushing is pushed forwards by a small amount, and energy is stored in the strained actuator. When the voltage is removed, the actuator springs back into shape while the bushing remains in its new position. By applying a pulsed voltage, the SDA can be made to move forward. The voltage is usually applied to the actuator by means of a 'tether'. This can consist of a rigid connector or a rail which the SDA follows. The size of an SDA is typically measured on the μm scale. References Actuators Microtechnology Microelectronic and microelectromechanical systems
Scratch drive actuator
[ "Materials_science", "Engineering" ]
262
[ "Materials science stubs", "Microelectronic and microelectromechanical systems", "Materials science", "Microtechnology" ]
727,067
https://en.wikipedia.org/wiki/Artemisinin
Artemisinin () and its semisynthetic derivatives are a group of drugs used in the treatment of malaria due to Plasmodium falciparum. It was discovered in 1972 by Tu Youyou, who shared the 2015 Nobel Prize in Physiology or Medicine for her discovery. Artemisinin-based combination therapies (ACTs) are now standard treatment worldwide for P. falciparum malaria as well as malaria due to other species of Plasmodium. Artemisinin is extracted from the plant Artemisia annua (sweet wormwood) an herb employed in Chinese traditional medicine. A precursor compound can be produced using a genetically engineered yeast, which is much more efficient than using the plant. Artemisinin and its derivatives are all sesquiterpene lactones containing an unusual peroxide bridge. This endoperoxide 1,2,4-trioxane ring is responsible for their antimalarial properties. Few other natural compounds with such a peroxide bridge are known. Artemisinin and its derivatives have been used for the treatment of malarial and parasitic worm (helminth) infections. Advantages of such treatments over other anti-parasitics include faster parasite elimination and broader efficacy across the parasite life-cycle; disadvantages include their low bioavailability, poor pharmacokinetic properties, and high cost. Moreover, use of the drug by itself as a monotherapy is explicitly discouraged by the World Health Organization, as there have been signs that malarial parasites are developing resistance to the drug. Combination therapies, featuring artemisinin or its derivatives alongside some other antimalarial drug, constitute the contemporary standard-of-care treatment regimen for malaria. Medical use The World Health Organization (WHO) recommends artemisinin or one of its derivatives ― typically in combination with a longer-lasting partner drug ― as frontline therapy for all cases of malaria. For uncomplicated malaria, the WHO recommends three days of oral treatment with any of five artemisinin-based combination therapies (ACTs): artemether/lumefantrine, artesunate/amodiaquine (ASAQ), artesunate/mefloquine, dihydroartemisinin/piperaquine, or artesunate/sulfadoxine/pyrimethamine. In each of these combinations, the artemisinin derivative rapidly kills the parasites, but is itself rapidly cleared from the body. The longer-lived partner drug kills the remaining parasites and provides some lingering protection from reinfection. For severe malaria, the WHO recommends intravenous or intramuscular treatment with the artemisinin derivative artesunate for at least 24 hours. Artesunate treatment is continued until the treated person is well enough to take oral medication. They are then given a three-day course of an ACT, for uncomplicated malaria. Where artesunate is not available, the WHO recommends intramuscular injection of the less potent artemisinin derivative artemether. For children less than six years old, if injected artesunate is not available the WHO recommends rectal administration of artesunate, followed by referral to a facility with the resources for further care. Artemisinins are not used for malaria prevention because of the extremely short activity (half-life) of the drug. To be effective, it would have to be administered multiple times each day. Contraindications The WHO recommends avoiding ACT for women in their first trimester of pregnancy due to a lack of research on artemisinin's safety in early pregnancy. Instead the WHO recommends a seven-day course of clindamycin and quinine. 
For pregnant women in their second or third trimesters, the WHO recommends a normal treatment course with an ACT. For some other groups, certain ACTs are avoided due to side effects of the partner drug: sulfadoxine-pyrimethamine is avoided during the first few weeks of life as it interferes with the action of bilirubin and can worsen neonatal jaundice. In HIV-positive people, the combination of trimethoprim/sulfamethoxazole, zidovudine-containing antiretroviral treatments, and ASAQ is associated with neutropenia. The combination of the HIV drug efavirenz and ASAQ is associated with liver toxicity. Adverse effects Artemisinins are generally well tolerated at the doses used to treat malaria. The side effects from the artemisinin class of medications are similar to the symptoms of malaria: nausea, vomiting, loss of appetite, and dizziness. Mild blood abnormalities have also been noted. A rare but serious adverse effect is allergic reaction. One case of significant liver inflammation has been reported in association with prolonged use of a relatively high-dose of artemisinin for an unclear reason (the patient did not have malaria). The drugs used in combination therapies can contribute to the adverse effects experienced by those undergoing treatment. Adverse effects in patients with acute P. falciparum malaria treated with artemisinin derivatives tend to be higher. Chemistry An unusual component of the artemisinin molecules is an endoperoxide 1,2,4-trioxane ring. This is the main antimalarial centre of the molecule. Modifications at carbon 10 (C10) position give rise to a variety of derivatives which are more powerful than the original compound. Because the physical properties of artemisinin itself, such as poor bioavailability, limit its effectiveness, semisynthetic derivatives of artemisinin have been developed. Derivatives of dihydroartemisinin were made since 1976. Artesunate, arteether and artemether were first synthesized in 1986. Many derivatives have been produced of which artelinic acid, artemotil, artemisone, SM735, SM905, SM933, SM934, and SM1044 are among the most powerful compounds. There are also simplified analogs in preclinical development. Over 120 other derivatives have been prepared, but clinical testing has not been possible due to lack of financial support. Artemisinin is poorly soluble in oils and water. Therefore, it is typically administered via the digestive tract, either by oral or rectal administration. Artesunate however can be administered via the intravenous and intramuscular, as well as the oral and rectal routes. A synthetic compound with a similar trioxolane structure (a ring containing three oxygen atoms) named RBx-11160 showed promise in in vitro testing. Phase II testing in patients with malaria was not as successful as hoped, but the manufacturer decided to start Phase III testing anyway. Mechanism of action As of 2018, the exact mechanism of action of artemisinins has not been fully elucidated. Artemisinin itself is a prodrug of the biologically active dihydroartemisinin. This metabolite undergoes cleavage of its endoperoxide ring inside the erythrocytes. As the drug molecules come in contact with the haem (associated with the hemoglobin of the red blood cells), the iron(II) oxide breaks the endoperoxide ring. This process produces free radicals that in turn damage susceptible proteins, resulting in the death of the parasite. In 2016 artemisinin was shown to bind to a large number of targets suggesting that it acts in a promiscuous manner. 
Artemisinin's endoperoxide moiety is however less sensitive to free iron(II) oxide, and therefore more active in the intraerythrocytic stages of P. falciparum. In contrast, clinical practice shows that unlike other antimalarials, artemisinin is active during all life cycle stages of the parasite. Resistance Clinical evidence for artemisinin drug resistance in southeast Asia was first reported in 2008, and was subsequently confirmed by a detailed study from western Cambodia. Resistance in neighbouring Thailand was reported in 2012, and in northern Cambodia, Vietnam and eastern Myanmar in 2014. Emerging resistance was reported in southern Laos, central Myanmar and northeastern Cambodia in 2014. The parasite's kelch gene on chromosome 13 appears to be a reliable molecular marker for clinical resistance in Southeast Asia. In 2011, the WHO stated that resistance to the most effective antimalarial drug, artemisinin, could unravel national Indian malaria control programs, which have achieved significant progress in the last decade. WHO advocates the rational use of antimalarial drugs and acknowledges the crucial role of community health workers in reducing malaria in the region. Artemisinins can be used alone, but this leads to a high rate of return of parasites and other drugs are required to clear the body of all parasites and prevent a recurrence. The WHO is pressuring manufacturers to stop making the uncompounded drug available to the medical community at large, aware of the catastrophe that would result if the malaria parasite developed resistance to artemisinins. Two main mechanisms of resistance drive Plasmodium resistance to antimalarial drugs. The first one is an efflux of the drug away from its action site due to mutations in different transporter genes (like pfcrt in chloroquine resistance) or an increased number of the gene copies (like pfmdr1 copy number in mefloquine resistance). The second is a change in the parasite target due to mutations in corresponding genes (like, at the cytosol level, dhfr and dhps in sulfadoxine-pyrimethamine resistance or, at the mitochondrion level, cytochrome b in atovaquone resistance). Resistance of P. falciparum to the new artemisinin compounds involves a novel mechanism corresponding to a quiescence phenomenon. future resistance research will make use of transgenic mice to discover relevant molecular markers. Synthesis Biosynthesis in Artemisia annua The biosynthesis of artemisinin is believed to involve the mevalonate pathway (MVA) and the cyclization of farnesyl diphosphate (FDP). It is not clear whether the non-mevalonate pathway can also contribute 5-carbon precursors (IPP or DMAPP), as occurs in other sesquiterpene biosynthetic systems. The routes from artemisinic alcohol to artemisinin remain controversial, and they differ mainly in when the reduction step takes place. Both routes suggested dihydroartemisinic acid as the final precursor to artemisinin. Dihydroartemisinic acid then undergoes photo-oxidation to produce dihydroartemisinic acid hydroperoxide. Ring expansion by the cleavage of hydroperoxide and a second oxygen-mediated hydroperoxidation finish the biosynthesis of artemisinin. Chemical synthesis The total synthesis of artemisinin has been performed from available organic starting materials, using basic organic reagents, many times. 
The first two total syntheses were a stereoselective synthesis by Schmid and Hofheinz at Hoffmann-La Roche in Basel starting from (−)-isopulegol (13 steps, ~5% overall yield), and a concurrent synthesis by Zhou and coworkers at the Shanghai Institute of Organic Chemistry from (R)-(+)-citronellal (20 steps, ~0.3% overall yield). Key steps of the Schmid–Hofheinz approach included an initial Ohrloff stereoselective hydroboration/oxidation to establish the "off-ring" methyl stereocenter on the propene side chain; two sequential lithium-reagent mediated alkylations that introduced all needed carbon atoms and that were, together highly diastereoselective; and further reduction, oxidation, and desilylation steps performed on this mono-carbocyclic intermediate, including a final singlet oxygen-utilizing photooxygenation and ene reaction, which, after acidic workup closed the three remaining oxacyclic rings of the desired product, artemisinin, in a single step.(In essence, the final oxidative ring-closing operation in these syntheses accomplishes the closing three biosynthetic steps shown above.) A wide variety of further routes continue to be explored, from early days until today, including total synthesis routes from (R)-(+)-pulegone, isomenthene, and even 2-cyclohexen-1-one, as well as routes better described as partial or semisyntheses from a more plentiful biosynthetic precursor, artemisinic acid—in the latter case, including some very short and very high yielding biomimetic synthesis examples (of Roth and Acton, and Haynes et al., 3 steps, 30% yield), which again feature the singlet oxygen ene chemistry. Synthesis in engineered organisms The partnership to develop semisynthetic artemisinin was led by PATH's Drug Development program (through an affiliation with OneWorld Health), with funding from the Bill & Melinda Gates Foundation. The project began in 2004, and initial project partners included the University of California, Berkeley (which provided the technology on which the project was based – a process that genetically altered yeast to produce artemisinic acid) and Amyris (a biotechnology firm in California, which refined the process to enable large-scale production and developed scalable processes for transfer to an industrial partner). In 2006, a team from UC Berkeley reported they had engineered Saccharomyces cerevisiae yeast to produce a small amount of the precursor artemisinic acid. The synthesized artemisinic acid can then be transported out, purified and chemically converted into artemisinin that they claim will cost roughly US$0.25 per dose. In this effort of synthetic biology, a modified mevalonate pathway was used, and the yeast cells were engineered to express the enzyme amorphadiene synthase and a cytochrome P450 monooxygenase (CYP71AV1), both from Artemisia annua. A three-step oxidation of amorpha-4,11-diene gives the resulting artemisinic acid. The UC Berkeley method was augmented using technology from various other organizations. The final successful technology is based on inventions licensed from UC Berkeley and the National Research Council (NRC) Plant Biotechnology Institute of Canada. Commercial production of semisynthetic artemisinin is now underway at Sanofi's site in Garessio, Italy. This second source of artemisinin is poised to enable a more stable flow of key antimalarial treatments to those who need them most. The production goal is set at 35 tonnes for 2013. 
It is expected to increase to 50–60 tons per year in 2014, supplying approximately one-third of the global annual need for artemisinin. In 2013, WHO's Prequalification of Medicines Programme announced the acceptability of semisynthetic artemisinin for use in the manufacture of active pharmaceutical ingredients submitted to WHO for prequalification, or that have already been qualified by WHO. Sanofi's active pharmaceutical ingredient (API) produced from semisynthetic artemisinin (artesunate) was also prequalified by WHO on May 8, 2013, making it the first semisynthetic artemisinin derivative prequalified. In 2010, a team from Wageningen University and Research reported they had engineered a close relative of tobacco, Nicotiana benthamiana, that can also produce the precursor, artemisinic acid. Production and price China and Vietnam provide 70% and East Africa 20% of the raw plant material. Seedlings are grown in nurseries and then transplanted into fields. It takes about 8 months for them to reach full size. The plants are harvested, the leaves are dried and sent to facilities where the artemisinin is extracted using a solvent, typically hexane. Alternative extraction methods have been proposed. The market price for artemisinin has fluctuated widely, between US$120 and $1,200 per kilogram from 2005 to 2008. The Chinese company Artepharm created a combination artemisinin and piperaquine drug marketed as Artequick. In addition to clinical research performed in China and southeast Asia, Artequick was used in large-scale malaria eradication efforts in the Comoros. Those efforts, conducted in 2007, 2012, and 2013–14, produced a 95–97% reduction in the number of malaria cases in the Comoros. After negotiation with the WHO, Novartis and Sanofi provide ACT drugs at cost on a nonprofit basis; however, these drugs are still more expensive than other malaria treatments. Artesunate injection for severe malaria treatment is made by the Guilin Pharmaceutical factory in China where production has received WHO prequalification. High-yield varieties of Artemisia are being produced by the Centre for Novel Agricultural Products at the University of York using molecular breeding techniques. Using seed supplied by Action for Natural Medicine (ANAMED), the World Agroforestry Centre (ICRAF) has developed a hybrid, dubbed A3, which can grow to a height of 3 meters and produce 20 times more artemisinin than wild varieties. In northwestern Mozambique, ICRAF is working together with a medical organization, Médecins Sans Frontières, ANAMED and the Ministry of Agriculture and Rural Development to train farmers on how to grow the shrub from cuttings, and to harvest and dry the leaves to make artemisia tea. However, the WHO does not recommend the use of A. annua plant materials, including tea, for the prevention and treatment of malaria. In 2013, Sanofi announced the launch of a production facility in Garessio, Italy, to manufacture the antiplasmodial drug on a large scale. The partnership to create a new pharmaceutical manufacturing process was led by PATH's Drug Development program (through an affiliation with OneWorld Health), with funding from the Bill & Melinda Gates Foundation and based on a modified biosynthetic process for artemisinic acid, initially designed by Jay Keasling at UC Berkeley and optimized by Amyris. The reaction is followed by a photochemical process creating singlet oxygen to obtain the end product. 
Sanofi expects to produce 25 tons of artemisinin in 2013, ramping up the production to 55–60 tonnes in 2014. The price per kilogram will be US$350–400, roughly the same as the botanical source. Despite concerns that this equivalent source would lead to the demise of companies, which produce this substance conventionally through extraction of A. annua biomass, an increased supply of this drug will likely produce lower prices and therefore increase the availability for ACT treatment. In 2014, Sanofi announced the release of the first batch of semisynthetic artemisinin. 1.7 million doses of Sanofi's ASAQ, a fixed-dose artemisinin-based combination therapy will be shipped to half a dozen African countries over the next few months. A 2016 systematic review of four studies from East Africa concluded that subsidizing ACT in the private retail sector in combination with training and marketing has led to the increased availability of ACT in stores, increased use of ACT for febrile children under five years of age, and decrease in the use of older, less effective antimalarials among children under five years of age. The underlying studies did not determine if the children had malaria nor determine if there were health benefits. Metabolism After ingestion or injection, artemisinin and its derivatives (arteether, artemether, and artesunate) are all rapidly converted in the bloodstream to dihydroartemisinin (DHA), which has 5–10 times greater antimalarial potency than artemisinin. DHA is eventually converted in the liver into metabolites such as deoxyartemisinin, deoxydihydroartemisinin, and 9,10-dihydrodeoxyartemisinin. These reactions are catalyzed by the enzymes CYP2A6, CYP3A4, and CYP3A5, which belong to the cytochrome P450 group present in the smooth endoplasmic reticulum. These metabolites lack antimalarial properties due to the loss of the endoperoxide group (deoxyartemisinin however has anti-inflammatory and antiulcer properties.) All these metabolites undergo glucuronidation, after which they are excreted through the urine or feces. Glucuronosyltransferases, in particular UGT1A9 and UGT2B7, are responsible for this process. DHA is also removed through bile as minor glucuronides. Due to their rapid metabolism, artemisinin and its derivatives are relatively safe drugs with a relatively high therapeutic index. History Etymology Artemisinin is an antimalarial lactone derived from qinghao (, Artemisia annua or sweet wormwood). In 1596, Li Shizhen recommended tea made from qinghao specifically to treat malaria symptoms in his Compendium of Materia Medica. The genus name is derived from the Greek goddess Artemis and, more specifically, may have been named after Queen Artemisia II of Caria, a botanist and medical researcher in the fourth century BC. Discovery Artemisia annua a common herb found in many parts of the world. In 1967, a plant screening research program, under a secret military program code-named "Project 523", was set up by the People's Liberation Army to find an adequate treatment for malaria; the program and early clinical work were ordered by Mao Zedong at the request of North Vietnamese leaders to provide assistance for their malaria-ridden army. In the course of this research in 1972, Tu Youyou discovered artemisinin in the leaves of Artemisia annua. Named qinghaosu (), it was one of many candidates tested as possible treatments for malaria by Chinese scientists, from a list of nearly 2,000 traditional Chinese medicines. 
Tu Youyou also discovered that a low-temperature extraction process could be used to isolate an effective antimalarial substance from the plant. Tu says she was influenced by a traditional Chinese herbal medicine source The Handbook of Prescriptions for Emergency Treatments written in 340 CE by Ge Hong saying that this herb should be steeped in cold water. This book contained the useful reference to the herb: "A handful of qinghao immersed with two litres of water, wring out the juice and drink it all." Tu's team subsequently isolated an extract. Results were published in the Chinese Medical Journal in 1979. The extracted substance, once subject to purification, proved to be a useful starting point to obtain purified artemisinin. A 2012 review reported that artemisinin-based therapies were the most effective drugs for treatment of malaria at that time; it was also reported to clear malaria parasites from patients' bodies faster than other drugs. In addition to artemisinin, Project 523 developed a number of products that can be used in combination with artemisinin, including lumefantrine, piperaquine, and pyronaridine. In the late 1990s, Novartis filed a new Chinese patent for a combination treatment with artemether/lumefantrine, providing the first artemisinin-based combination therapies (Coartem) at reduced prices to the WHO. In 2006, after artemisinin had become the treatment of choice for malaria, the WHO called for an immediate halt to single-drug artemisinin preparations in favor of combinations of artemisinin with another malaria drug, to reduce the risk of parasites developing resistance. In 2011, Tu Youyou was awarded the Lasker-DeBakey Clinical Medical Research Award for her role in the discovery and development of artemisinin. On October 5, 2015, she was awarded half of the 2015 Nobel Prize in Physiology or Medicine for discovering artemisinin, "a drug that has significantly reduced the mortality rates for patients suffering from malaria". The other half of the prize was awarded jointly to William C. Campbell and Satoshi Ōmura for discovering avermectin, "the derivatives of which have radically lowered the incidence of river blindness and lymphatic filariasis, as well as showing efficacy against an expanding number of other parasitic diseases". Research New artemisinin-based combination therapies The WHO notes four additional ACTs that are in preliminary clinical trials or regionally used for which there is no evidence to recommend widespread use: artesunate/pyronaridine, arterolane-piperaquine, artemisinin-piperaquine base, and artemisinin/naphthoquine. Helminthiasis A serendipitous discovery was made in China in the early 1980s while searching for novel anthelmintics for schistosomiasis that artemisinin was effective against schistosomes, the human blood flukes, which are the second-most prevalent parasitic infections, after malaria. Artemisinin and its derivatives are all potent antihelmintics. Artemisinins were later found to possess a broad spectrum of activity against a wide range of trematodes, including Schistosoma japonicum, S. mansoni, S. haematobium, Clonorchis sinensis, Fasciola hepatica, and Opisthorchis viverrini. Cancer Artemisinin and its derivatives are under laboratory research for their potential anti-cancer effects. As of 2018, only preliminary clinical research had been conducted using artemisininin derivatives in various cancers, with no approved clinical applications. 
Autoimmune disease Artemisinin derivatives may suppress immune reactions, such as inflammation. One derivative, SM934, was approved in 2015 by the Chinese National Medical Products Administration for a clinical trial as a drug for systemic lupus erythematosus. Polycystic ovary syndrome Artemisinin may be potentially useful for treating symptoms of polycystic ovary syndrome (PCOS). See also Artemisia (genus) Artemisin Santonin Pharmacognosy References Further reading External links Antimalarial agents Chinese discoveries Experimental cancer drugs Organic peroxides Sesquiterpene lactones Trioxanes Oxygen heterocycles Heterocyclic compounds with 4 rings ATPase inhibitors Traditional Chinese medicine Commercialization of traditional medicines Chinese inventions
Artemisinin
[ "Chemistry" ]
5,478
[ "Organic compounds", "Organic peroxides" ]
727,088
https://en.wikipedia.org/wiki/Azimuth%20thruster
An azimuth thruster is a configuration of marine propellers placed in pods that can be rotated to any horizontal angle (azimuth), making a rudder redundant. These give ships better maneuverability than a fixed propeller and rudder system. Types of azimuth thrusters There are two major variants, based on the location of the motor: Mechanical transmission, which connects a motor inside the ship to the outboard unit by gearing. The motor may be diesel or diesel-electric. Depending on the shaft arrangement, mechanical azimuth thrusters are divided into L-drive and Z-drive. An L-drive thruster has a vertical input shaft and a horizontal output shaft with one right-angle gear. A Z-drive thruster has a horizontal input shaft, a vertical shaft in the rotating column and a horizontal output shaft, with two right-angle gears. Electrical transmission, more commonly called pods, where an electric motor is fitted in the pod itself, connected directly to the propeller without gears. The electricity is produced by an onboard engine, usually diesel or gas turbine. Invented in 1955 by Friedrich W. Pleuger and Friedrich Busmann (Pleuger Unterwasserpumpen GmbH), ABB Group's Azipod was the first product using this technology. The most powerful podded thrusters in use are the four 21.5 MW Rolls-Royce Mermaid units fitted to . Mechanical azimuth thrusters can be fixed installed, retractable or underwater-mountable. They may have fixed pitch propellers or controllable pitch propellers. Fixed installed thrusters are used for tugboats, ferries and supply-boats. Retractable thrusters are used as auxiliary propulsion for dynamically positioned vessels and take-home propulsion for military vessels. Underwater-mountable thrusters are used as dynamic positioning propulsion for very large vessels such as semi-submersible drilling rigs and drillships. Advantages and disadvantages Primary advantages are maneuverability, electrical efficiency, better use of ship space, and lower maintenance costs. Ships with azimuth thrusters do not need tugboats to dock, though they may still require tugs to maneuver in difficult places. The major disadvantage of azimuth drive systems is that a ship with azimuth drive maneuvers differently from one with a standard propeller and rudder configuration, necessitating specialized pilot training. Another disadvantage is they increase the draught of the ship. History English inventor Francis Ronalds described what he called a propelling rudder in 1859 that combined the propulsion and steering mechanisms of a boat in a single apparatus. The propeller was placed in a frame having an outer profile similar to a rudder and attached to a vertical shaft that allowed the device to rotate in plane while spin was transmitted to the propeller. The modern azimuth thruster using the Z-drive transmission was invented in 1951 by Joseph Becker, the founder of Schottel in Germany, and marketed as the Ruderpropeller. Becker was awarded the 2004 Elmer A. Sperry Award for the invention. This kind of propulsion was first patented in 1955 by Pleuger. In the late 1980s Wärtsilä Marine, Strömberg and the Finnish National Board of Navigation developed the Azipod thruster with the motor located in the pod itself. See also References Marine propulsion
Azimuth thruster
[ "Engineering" ]
666
[ "Marine propulsion", "Marine engineering" ]
728,168
https://en.wikipedia.org/wiki/Monstrous%20moonshine
In mathematics, monstrous moonshine, or moonshine theory, is the unexpected connection between the monster group M and modular functions, in particular the j function. The initial numerical observation was made by John McKay in 1978, and the phrase was coined by John Conway and Simon P. Norton in 1979. Monstrous moonshine is now known to be underlain by a vertex operator algebra called the moonshine module (or monster vertex algebra) constructed by Igor Frenkel, James Lepowsky, and Arne Meurman in 1988, which has the monster group as its group of symmetries. This vertex operator algebra is commonly interpreted as a structure underlying a two-dimensional conformal field theory, allowing physics to form a bridge between two mathematical areas. The conjectures made by Conway and Norton were proven by Richard Borcherds for the moonshine module in 1992 using the no-ghost theorem from string theory and the theory of vertex operator algebras and generalized Kac–Moody algebras. History In 1978, John McKay found that the first few terms in the Fourier expansion of the normalized J-invariant could be expressed in terms of linear combinations of the dimensions of the irreducible representations of the monster group M with small non-negative coefficients. The J-invariant is
J(τ) = q^−1 + 196884q + 21493760q^2 + 864299970q^3 + 20245856256q^4 + 333202640600q^5 + ...
with q = e^(2πiτ) and τ as the half-period ratio, and the M expressions, letting r1 = 1, r2 = 196883, r3 = 21296876, r4 = 842609326, r5 = 18538750076, r6 = 19360062527, r7 = 293553734298, ..., are
196884 = r1 + r2
21493760 = r1 + r2 + r3
864299970 = 2r1 + 2r2 + r3 + r4
20245856256 = 3r1 + 3r2 + r3 + 2r4 + r5
333202640600 = 4r1 + 5r2 + 3r3 + 2r4 + r5 + r6 + r7
The LHS are the coefficients of the powers of q in the expansion of J, while on the RHS the integers r1, r2, r3, ... are the dimensions of irreducible representations of the monster group M. (Since there can be several linear relations between these dimensions, such as r1 − r3 + r4 + r5 − r6 = 0, the decomposition may be written in more than one way.) McKay viewed this as evidence that there is a naturally occurring infinite-dimensional graded representation of M, whose graded dimension is given by the coefficients of J, and whose lower-weight pieces decompose into irreducible representations as above. After he informed John G. Thompson of this observation, Thompson suggested that because the graded dimension is just the graded trace of the identity element, the graded traces of nontrivial elements g of M on such a representation may be interesting as well. Conway and Norton computed the lower-order terms of such graded traces, now known as McKay–Thompson series Tg, and found that all of them appeared to be the expansions of Hauptmoduln. In other words, if Gg is the subgroup of SL2(R) which fixes Tg, then the quotient of the upper half of the complex plane by Gg is a sphere with a finite number of points removed, and furthermore, Tg generates the field of meromorphic functions on this sphere. Based on their computations, Conway and Norton produced a list of Hauptmoduln, and conjectured the existence of an infinite dimensional graded representation of M, whose graded traces Tg are the expansions of precisely the functions on their list. In 1980, A.O.L. Atkin, Paul Fong and Stephen D. Smith produced strong computational evidence that such a graded representation exists, by decomposing a large number of coefficients of J into representations of M. A graded representation whose graded dimension is J, called the moonshine module, was explicitly constructed by Igor Frenkel, James Lepowsky, and Arne Meurman, giving an effective solution to the McKay–Thompson conjecture, and they also determined the graded traces for all elements in the centralizer of an involution of M, partially settling the Conway–Norton conjecture.
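The decompositions displayed above are easy to verify directly; the following minimal Python sketch re-checks the arithmetic using only the coefficient values and representation dimensions already quoted in the text (no external data or libraries are assumed).
# Check the decompositions of the first few J-coefficients quoted above.
# r lists the dimensions of the smallest irreducible representations of the
# monster group M; c lists the coefficients of q, q^2, ..., q^5 in J.
r = [1, 196883, 21296876, 842609326, 18538750076, 19360062527, 293553734298]
c = [196884, 21493760, 864299970, 20245856256, 333202640600]
# Multiplicities (m1, ..., m7) in each decomposition c_n = m1*r1 + ... + m7*r7,
# in the same order as the equations displayed above.
mults = [
    (1, 1, 0, 0, 0, 0, 0),
    (1, 1, 1, 0, 0, 0, 0),
    (2, 2, 1, 1, 0, 0, 0),
    (3, 3, 1, 2, 1, 0, 0),
    (4, 5, 3, 2, 1, 1, 1),
]
for coefficient, m in zip(c, mults):
    assert coefficient == sum(mi * ri for mi, ri in zip(m, r))
# The linear relation mentioned above, which is why alternative decompositions exist.
assert r[0] - r[2] + r[3] + r[4] - r[5] == 0
print("All decompositions check out.")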
Furthermore, they showed that the vector space they constructed, called the Moonshine Module , has the additional structure of a vertex operator algebra, whose automorphism group is precisely M. In 1985, the Atlas of Finite Groups was published by a group of mathematicians, including John Conway. The Atlas, which enumerates all sporadic groups, included "Moonshine" as a section in its list of notable properties of the monster group. Borcherds proved the Conway–Norton conjecture for the Moonshine Module in 1992. He won the Fields Medal in 1998 in part for his solution of the conjecture. The moonshine module The Frenkel–Lepowsky–Meurman construction starts with two main tools: The construction of a lattice vertex operator algebra VL for an even lattice L of rank n. In physical terms, this is the chiral algebra for a bosonic string compactified on a torus Rn/L. It can be described roughly as the tensor product of the group ring of L with the oscillator representation in n dimensions (which is itself isomorphic to a polynomial ring in countably infinitely many generators). For the case in question, one sets L to be the Leech lattice, which has rank 24. The orbifold construction. In physical terms, this describes a bosonic string propagating on a quotient orbifold. The construction of Frenkel–Lepowsky–Meurman was the first time orbifolds appeared in conformal field theory. Attached to the –1 involution of the Leech lattice, there is an involution h of VL, and an irreducible h-twisted VL-module, which inherits an involution lifting h. To get the Moonshine Module, one takes the fixed point subspace of h in the direct sum of VL and its twisted module. Frenkel, Lepowsky, and Meurman then showed that the automorphism group of the moonshine module, as a vertex operator algebra, is M. Furthermore, they determined that the graded traces of elements in the subgroup 21+24.Co1 match the functions predicted by Conway and Norton (). Borcherds' proof Richard Borcherds' proof of the conjecture of Conway and Norton can be broken into the following major steps: One begins with a vertex operator algebra V with an invariant bilinear form, an action of M by automorphisms, and with known decomposition of the homogeneous spaces of seven lowest degrees into irreducible M-representations. This was provided by Frenkel–Lepowsky–Meurman's construction and analysis of the Moonshine Module. A Lie algebra , called the monster Lie algebra, is constructed from V using a quantization functor. It is a generalized Kac–Moody Lie algebra with a monster action by automorphisms. Using the Goddard–Thorn "no-ghost" theorem from string theory, the root multiplicities are found to be coefficients of J. One uses the Koike–Norton–Zagier infinite product identity to construct a generalized Kac–Moody Lie algebra by generators and relations. The identity is proved using the fact that Hecke operators applied to J yield polynomials in J. By comparing root multiplicities, one finds that the two Lie algebras are isomorphic, and in particular, the Weyl denominator formula for is precisely the Koike–Norton–Zagier identity. Using Lie algebra homology and Adams operations, a twisted denominator identity is given for each element. These identities are related to the McKay–Thompson series Tg in much the same way that the Koike–Norton–Zagier identity is related to J. 
The twisted denominator identities imply recursion relations on the coefficients of Tg, and unpublished work of Koike showed that Conway and Norton's candidate functions satisfied these recursion relations. These relations are strong enough that one only needs to check that the first seven terms agree with the functions given by Conway and Norton. The lowest terms are given by the decomposition of the seven lowest degree homogeneous spaces given in the first step. Thus, the proof is completed (). Borcherds was later quoted as saying "I was over the moon when I proved the moonshine conjecture", and "I sometimes wonder if this is the feeling you get when you take certain drugs. I don't actually know, as I have not tested this theory of mine." More recent work has simplified and clarified the last steps of the proof. Jurisich (, ) found that the homology computation could be substantially shortened by replacing the usual triangular decomposition of the Monster Lie algebra with a decomposition into a sum of gl2 and two free Lie algebras. Cummins and Gannon showed that the recursion relations automatically imply the McKay-Thompson series are either Hauptmoduln or terminate after at most 3 terms, thus eliminating the need for computation at the last step. Generalized moonshine Conway and Norton suggested in their 1979 paper that perhaps moonshine is not limited to the monster, but that similar phenomena may be found for other groups. While Conway and Norton's claims were not very specific, computations by Larissa Queen in 1980 strongly suggested that one can construct the expansions of many Hauptmoduln from simple combinations of dimensions of irreducible representations of sporadic groups. In particular, she decomposed the coefficients of McKay-Thompson series into representations of subquotients of the Monster in the following cases: T2B and T4A into representations of the Conway group Co0 T3B and T6B into representations of the Suzuki group 3.2.Suz T3C into representations of the Thompson group Th = F3 T5A into representations of the Harada–Norton group HN = F5 T5B and T10D into representations of the Hall–Janko group 2.HJ T7A into representations of the Held group He = F7 T7B and T14C into representations of 2.A7 T11A into representations of the Mathieu group 2.M12 Queen found that the traces of non-identity elements also yielded q-expansions of Hauptmoduln, some of which were not McKay–Thompson series from the Monster. In 1987, Norton combined Queen's results with his own computations to formulate the Generalized Moonshine conjecture. This conjecture asserts that there is a rule that assigns to each element g of the monster, a graded vector space V(g), and to each commuting pair of elements (g, h) a holomorphic function f(g, h, τ) on the upper half-plane, such that: Each V(g) is a graded projective representation of the centralizer of g in M. Each f(g, h, τ) is either a constant function, or a Hauptmodul. Each f(g, h, τ) is invariant under simultaneous conjugation of g and h in M, up to a scalar ambiguity. For each (g, h), there is a lift of h to a linear transformation on V(g), such that the expansion of f(g, h, τ) is given by the graded trace. For any , is proportional to . f(g, h, τ) is proportional to J if and only if g = h = 1. This is a generalization of the Conway–Norton conjecture, because Borcherds's theorem concerns the case where g is set to the identity. 
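The compatibility condition whose formula is missing from the list above is usually stated as an SL2(Z)-equivariance; a commonly quoted form (the ordering conventions for the exponents vary between sources, so this should be read as a sketch rather than as Norton's exact wording) is
f(g^a h^c, g^b h^d, τ) ∝ f(g, h, (aτ + b)/(cτ + d)) for every matrix (a b; c d) in SL2(Z),
which, for g = h = 1, reduces to the SL2(Z)-invariance of J up to scalars.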
Like the Conway–Norton conjecture, Generalized Moonshine also has an interpretation in physics, proposed by Dixon–Ginsparg–Harvey in 1988 (). They interpreted the vector spaces V(g) as twisted sectors of a conformal field theory with monster symmetry, and interpreted the functions f(g, h, τ) as genus one partition functions, where one forms a torus by gluing along twisted boundary conditions. In mathematical language, the twisted sectors are irreducible twisted modules, and the partition functions are assigned to elliptic curves with principal monster bundles, whose isomorphism type is described by monodromy along a basis of 1-cycles, i.e., a pair of commuting elements. Modular moonshine In the early 1990s, the group theorist A. J. E. Ryba discovered remarkable similarities between parts of the character table of the monster, and Brauer characters of certain subgroups. In particular, for an element g of prime order p in the monster, many irreducible characters of an element of order kp whose kth power is g are simple combinations of Brauer characters for an element of order k in the centralizer of g. This was numerical evidence for a phenomenon similar to monstrous moonshine, but for representations in positive characteristic. In particular, Ryba conjectured in 1994 that for each prime factor p in the order of the monster, there exists a graded vertex algebra over the finite field Fp with an action of the centralizer of an order p element g, such that the graded Brauer character of any p-regular automorphism h is equal to the McKay-Thompson series for gh (). In 1996, Borcherds and Ryba reinterpreted the conjecture as a statement about Tate cohomology of a self-dual integral form of . This integral form was not known to exist, but they constructed a self-dual form over Z[1/2], which allowed them to work with odd primes p. The Tate cohomology for an element of prime order naturally has the structure of a super vertex algebra over Fp, and they broke up the problem into an easy step equating graded Brauer super-trace with the McKay-Thompson series, and a hard step showing that Tate cohomology vanishes in odd degree. They proved the vanishing statement for small odd primes, by transferring a vanishing result from the Leech lattice (). In 1998, Borcherds showed that vanishing holds for the remaining odd primes, using a combination of Hodge theory and an integral refinement of the no-ghost theorem (, ). The case of order 2 requires the existence of a form of over a 2-adic ring, i.e., a construction that does not divide by 2, and this was not known to exist at the time. There remain many additional unanswered questions, such as how Ryba's conjecture should generalize to Tate cohomology of composite order elements, and the nature of any connections to generalized moonshine and other moonshine phenomena. Conjectured relationship with quantum gravity In 2007, E. Witten suggested that AdS/CFT correspondence yields a duality between pure quantum gravity in (2 + 1)-dimensional anti de Sitter space and extremal holomorphic CFTs. Pure gravity in 2 + 1 dimensions has no local degrees of freedom, but when the cosmological constant is negative, there is nontrivial content in the theory, due to the existence of BTZ black hole solutions. Extremal CFTs, introduced by G. Höhn, are distinguished by a lack of Virasoro primary fields in low energy, and the moonshine module is one example. 
Under Witten's proposal (), gravity in AdS space with maximally negative cosmological constant is AdS/CFT dual to a holomorphic CFT with central charge c=24, and the partition function of the CFT is precisely j-744, i.e., the graded character of the moonshine module. By assuming Frenkel-Lepowsky-Meurman's conjecture that moonshine module is the unique holomorphic VOA with central charge 24 and character j-744, Witten concluded that pure gravity with maximally negative cosmological constant is dual to the monster CFT. Part of Witten's proposal is that Virasoro primary fields are dual to black-hole-creating operators, and as a consistency check, he found that in the large-mass limit, the Bekenstein-Hawking semiclassical entropy estimate for a given black hole mass agrees with the logarithm of the corresponding Virasoro primary multiplicity in the moonshine module. In the low-mass regime, there is a small quantum correction to the entropy, e.g., the lowest energy primary fields yield ln(196883) ~ 12.19, while the Bekenstein–Hawking estimate gives 4 ~ 12.57. Later work has refined Witten's proposal. Witten had speculated that the extremal CFTs with larger cosmological constant may have monster symmetry much like the minimal case, but this was quickly ruled out by independent work of Gaiotto and Höhn. Work by Witten and Maloney () suggested that pure quantum gravity may not satisfy some consistency checks related to its partition function, unless some subtle properties of complex saddles work out favorably. However, Li–Song–Strominger () have suggested that a chiral quantum gravity theory proposed by Manschot in 2007 may have better stability properties, while being dual to the chiral part of the monster CFT, i.e., the monster vertex algebra. Duncan–Frenkel () produced additional evidence for this duality by using Rademacher sums to produce the McKay–Thompson series as (2 + 1)-dimensional gravity partition functions by a regularized sum over global torus-isogeny geometries. Furthermore, they conjectured the existence of a family of twisted chiral gravity theories parametrized by elements of the monster, suggesting a connection with generalized moonshine and gravitational instanton sums. At present, all of these ideas are still rather speculative, in part because 3d quantum gravity does not have a rigorous mathematical foundation. Mathieu moonshine In 2010, Tohru Eguchi, Hirosi Ooguri, and Yuji Tachikawa observed that the elliptic genus of a K3 surface can be decomposed into characters of the superconformal algebra, such that the multiplicities of massive states appear to be simple combinations of irreducible representations of the Mathieu group M24. This suggests that there is a sigma-model conformal field theory with K3 target that carries M24 symmetry. However, by the Mukai–Kondo classification, there is no faithful action of this group on any K3 surface by symplectic automorphisms, and by work of Gaberdiel–Hohenegger–Volpato, there is no faithful action on any K3 sigma-model conformal field theory, so the appearance of an action on the underlying Hilbert space is still a mystery. By analogy with McKay–Thompson series, Cheng suggested that both the multiplicity functions and the graded traces of nontrivial elements of M24 form mock modular forms. 
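For the semiclassical comparison quoted in the quantum-gravity paragraph above, the Bekenstein–Hawking figure being compared is 4π ≈ 12.57; the two numbers can be reproduced in a couple of lines of Python (nothing beyond the values in the text is assumed):
import math
quantum = math.log(196883)   # log of the lowest-level Virasoro primary multiplicity, about 12.19
semiclassical = 4 * math.pi  # Bekenstein-Hawking estimate for the corresponding black hole, about 12.57
print(round(quantum, 2), round(semiclassical, 2))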
In 2012, Gannon proved that all but the first of the multiplicities are non-negative integral combinations of representations of M24, and Gaberdiel–Persson–Ronellenfitsch–Volpato computed all analogues of generalized moonshine functions, strongly suggesting that some analogue of a holomorphic conformal field theory lies behind Mathieu moonshine. Also in 2012, Cheng, Duncan, and Harvey amassed numerical evidence of an umbral moonshine phenomenon where families of mock modular forms appear to be attached to Niemeier lattices. The special case of the A lattice yields Mathieu Moonshine, but in general the phenomenon does not yet have an interpretation in terms of geometry. Origin of the term The term "monstrous moonshine" was coined by Conway, who, when told by John McKay in the late 1970s that the coefficient of (namely 196884) was precisely one more than the degree of the smallest faithful complex representation of the monster group (namely 196883), replied that this was "moonshine" (in the sense of being a crazy or foolish idea). Thus, the term not only refers to the monster group M; it also refers to the perceived lunacy of the intricate relationship between M and the theory of modular functions. Related observations The monster group was investigated in the 1970s by mathematicians Jean-Pierre Serre, Andrew Ogg and John G. Thompson; they studied the quotient of the hyperbolic plane by subgroups of SL2(R), particularly, the normalizer Γ0(p)+ of the Hecke congruence subgroup Γ0(p) in SL(2,R). They found that the Riemann surface resulting from taking the quotient of the hyperbolic plane by Γ0(p)+ has genus zero exactly for p = 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 41, 47, 59 or 71. When Ogg heard about the monster group later on, and noticed that these were precisely the prime factors of the size of M, he published a paper offering a bottle of Jack Daniel's whiskey to anyone who could explain this fact (). These 15 primes are known as the supersingular primes, not to be confused with the use of the same phrase with a different meaning in algebraic number theory. Notes References Sources . . . . . . . . . . . (Provides introductory reviews to applications in physics). . . (The first book about the Monster Group written in Japanese). . . . . . . . . (Concise introduction for the lay reader). . . External links Mathematicians Chase Moonshine's Shadow 1970s neologisms Group theory John Horton Conway Sporadic groups
Monstrous moonshine
[ "Mathematics" ]
4,405
[ "Group theory", "Fields of abstract algebra" ]
729,189
https://en.wikipedia.org/wiki/A%20Treatise%20on%20Electricity%20and%20Magnetism
A Treatise on Electricity and Magnetism is a two-volume treatise on electromagnetism written by James Clerk Maxwell in 1873. Maxwell was revising the Treatise for a second edition when he died in 1879. The revision was completed by William Davidson Niven for publication in 1881. A third edition was prepared by J. J. Thomson for publication in 1892. The treatise is said to be notoriously hard to read, containing plenty of ideas but lacking both the clear focus and orderliness that may have allowed it catch on more easily. It was noted by one historian of science that Maxwell's attempt at a comprehensive treatise on all of electrical science tended to bury the important results of his work under "long accounts of miscellaneous phenomena discussed from several points of view." He goes on to say that, outside the treatment of the Faraday effect, Maxwell failed to expound on his earlier work, especially the generation of electromagnetic waves and the derivation of the laws governing reflection and refraction. Maxwell introduced the use of vector fields, and his labels have been perpetuated: A (vector potential), B (magnetic induction), C (electric current), D (displacement), E (electric field – Maxwell's electromotive intensity), F (mechanical force), H (magnetic field – Maxwell's magnetic force). Maxwell's work is considered an exemplar of rhetoric of science: Lagrange's equations appear in the Treatise as the culmination of a long series of rhetorical moves, including (among others) Green's theorem, Gauss's potential theory and Faraday's lines of force – all of which have prepared the reader for the Lagrangian vision of a natural world that is whole and connected: a veritable sea change from Newton's vision. Contents Preliminary. On the Measurement of Quantities. Part I. Electrostatics. Description of Phenomena. Elementary Mathematical Theory of Electricity. On Electrical Work and Energy in a System of Conductors. General Theorems. Mechanical Action Between Two Electrical Systems. Points and Lines of Equilibrium. Forms of Equipotential Surfaces and Lines of Flow. Simple Cases of Electrification. Spherical Harmonics. Confocal Surfaces of the Second Degree. Theory of Electric Images. Conjugate Functions in Two Dimensions. Electrostatic Instruments. Part II. Electrokinematics. The Electric Current. Conduction and Resistance. Electromotive Force Between Bodies in Contact. Electrolysis. Electrolytic Polarization. Mathematical Theory of the Distribution of Electric Currents. Conduction in Three Dimensions. Resistance and Conductivity in Three Dimensions. Conduction through Heterogeneous Media. Conduction in Dielectrics. Measurement of the Electric Resistance of Conductors. Electric Resistance of Substances. Part III. Magnetism Elementary Theory of Magnetism. Magnetic Force and Magnetic Induction. Particular Forms of Magnets. Induced Magnetization. Magnetic Problems. Weber's Theory of Magnetic Induction. Magnetic Measurements. Terrestrial Magnetism. Part IV. Electromagnetism. Electromagnetic Force. Mutual Action of Electric Currents. Induction of Electric Currents. Induction of a Current on Itself. General Equations of Dynamics. Application of Dynamics to Electromagnetism. Electrokinetics. Exploration of the Field by means of the Secondary Circuit. General Equations. Dimensions of Electric Units. Energy and Stress. Current-Sheets. Parallel Currents. Circular Currents. Electromagnetic Instruments. Electromagnetic Observations. Electrical Measurement of Coefficients of Induction. 
Determination of Resistance in Electromagnetic Measure. Comparison of Electrostatic With Electromagnetic Units. Electromagnetic Theory of Light. Magnetic Action on Light. Electric Theory of Magnetism. Theories of Action at a distance. Reception Reviews On April 24, 1873, Nature announced the publication with an extensive description and much praise. When the second edition was published in 1881, George Chrystal wrote the review for Nature. Pierre Duhem published a critical essay outlining mistakes he found in Maxwell's Treatise. Duhem's book was reviewed in Nature. Comments Hermann von Helmholtz (1881): "Now that the mathematical interpretations of Faraday's conceptions regarding the nature of electric and magnetic force has been given by Clerk Maxwell, we see how great a degree of exactness and precision was really hidden behind Faraday's words…it is astonishing in the highest to see what a large number of general theories, the mechanical deduction of which requires the highest powers of mathematical analysis, he has found by a kind of intuition, with the security of instinct, without the help of a single mathematical formula." Oliver Heaviside (1893):”What is Maxwell's theory? The first approximation is to say: There is Maxwell's book as he wrote it; there is his text, and there are his equations: together they make his theory. But when we come to examine it closely, we find that this answer is unsatisfactory. To begin with, it is sufficient to refer to papers by physicists, written say during the first twelve years following the first publication of Maxwell's treatise to see that there may be much difference of opinion as to what his theory is. It may be, and has been, differently interpreted by different men, which is a sign that is not set forth in a perfectly clear and unmistakable form. There are many obscurities and some inconsistencies. Speaking for myself, it was only by changing its form of presentation that I was able to see it clearly, and so as to avoid the inconsistencies. Now there is no finality in a growing science. It is, therefore, impossible to adhere strictly to Maxwell's theory as he gave it to the world, if only on account of its inconvenient form. Alexander Macfarlane (1902): "This work has served as the starting point of many advances made in recent years. Maxwell is the scientific ancestor of Hertz, Hertz of Marconi and all other workers at wireless telegraphy. Oliver Lodge (1907) "Then comes Maxwell, with his keen penetration and great grasp of thought, combined with mathematical subtlety and power of expression; he assimilates the facts, sympathizes with the philosophic but untutored modes of expression invented by Faraday, links the theorems of Green and Stokes and Thomson to the facts of Faraday, and from the union rears the young modern science of electricity..." E. T. Whittaker (1910): "In this celebrated work is comprehended almost every branch of electric and magnetic theory, but the intention of the writer was to discuss the whole from a single point of view, namely, that of Faraday, so that little or no account was given of the hypotheses that had been propounded in the two preceding decades by the great German electricians...The doctrines peculiar to Maxwell ... were not introduced in the first volume, or in the first half of the second." 
Albert Einstein (1931): "Before Maxwell people conceived of physical reality – in so far as it is supposed to represent events in nature – as material points, whose changes consist exclusively of motions, which are subject to total differential equations. After Maxwell they conceived physical reality as represented by continuous fields, not mechanically explicable, which are subject to partial differential equations. This change in the conception of reality is the most profound and fruitful one that has come to physics since Newton; but it has at the same time to be admitted that the program has by no means been completely carried out yet." Richard P. Feynman (1964): "From a long view of the history of mankind—seen from, say, ten thousand years from now—there can be little doubt that the most significant event of the 19th century will be judged as Maxwell's discovery of the laws of electrodynamics. The American Civil War will pale into provincial insignificance in comparison with this important scientific event of the same decade." L. Pearce Williams (1991): "In 1873, James Clerk Maxwell published a rambling and difficult two-volume Treatise on Electricity and Magnetism that was destined to change the orthodox picture of physical reality. This treatise did for electromagnetism what Newton's Principia had done for classical mechanics. It not only provided the mathematical tools for the investigation and representation of the whole of electromagnetic theory, but it altered the very framework of both theoretical and experimental physics. Although the process had been going on throughout the nineteenth century, it was this work that finally displaced action at a distance physics and substituted the physics of the field." Mark P. Silverman (1998) "I studied the principles on my own – in this case with Maxwell's Treatise as both my inspiration and textbook. This is not an experience that I would necessarily recommend to others. For all his legendary gentleness, Maxwell is a demanding teacher, and his magnum opus is anything but coffee-table reading...At the same time, the experience was greatly rewarding in that I had come to understand, as I realized much later, aspects of electromagnetism that are rarely taught at any level today and that reflect the unique physical insight of their creator. Andrew Warwick (2003): "In developing the mathematical theory of electricity and magnetism in the Treatise, Maxwell made a number of errors, and for students with only a tenuous grasp of the physical concepts of basic electromagnetic theory and the specific techniques to solve some problems, it was extremely difficult to discriminate between cases where Maxwell made an error and cases where they simply failed to follow the physical or mathematical reasoning." See also "On Physical Lines of Force" "A Dynamical Theory of the Electromagnetic Field" Introduction to Electrodynamics Classical Electrodynamics References Further reading External links Reprint from Dover Publications () A Treatise on Electricity And Magnetism – Volume 1 – 1873 – Posner Memorial Collection – Carnegie Mellon University. Volume 2 A Treatise on Electricity and Magnetism at Internet Archive 1st edition 1873 Volume 1, Volume 2 2nd edition 1881 Volume 1, Volume 2 3rd edition 1892 (ed. J. J. 
Thomson) Volume 1, Volume 2 3rd edition 1892 (Dover reprint 1954) Volume 1, Volume 2 Original Maxwell Equations – Maxwell's 20 Equations in 20 Unknowns – PDF Physics books Historical physics publications 1873 books Electromagnetism 1870s in science Works by James Clerk Maxwell Treatises
A Treatise on Electricity and Magnetism
[ "Physics" ]
2,114
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions" ]
729,237
https://en.wikipedia.org/wiki/A%20Dynamical%20Theory%20of%20the%20Electromagnetic%20Field
"A Dynamical Theory of the Electromagnetic Field" is a paper by James Clerk Maxwell on electromagnetism, published in 1865. Physicist Freeman Dyson called the publishing of the paper the "most important event of the nineteenth century in the history of the physical sciences." The paper was key in establishing the classical theory of electromagnetism. Maxwell derives an electromagnetic wave equation with a velocity for light in close agreement with measurements made by experiment, and also deduces that light is an electromagnetic wave. Publication Following standard procedure for the time, the paper was first read to the Royal Society on 8 December 1864, having been sent by Maxwell to the society on 27 October. It then underwent peer review, being sent to William Thomson (later Lord Kelvin) on 24 December 1864. It was then sent to George Gabriel Stokes, the Society's physical sciences secretary, on 23 March 1865. It was approved for publication in the Philosophical Transactions of the Royal Society on 15 June 1865, by the Committee of Papers (essentially the society's governing council) and sent to the printer the following day (16 June). During this period, Philosophical Transactions was only published as a bound volume once a year, and would have been prepared for the society's anniversary day on 30 November (the exact date is not recorded). However, the printer would have prepared and delivered to Maxwell offprints, for the author to distribute as he wished, soon after 16 June. Maxwell's original equations In part III of the paper, which is entitled "General Equations of the Electromagnetic Field", Maxwell formulated twenty equations which were to become known as Maxwell's equations, until this term became applied instead to a vectorized set of four equations selected in 1884, which had all appeared in his 1861 paper "On Physical Lines of Force". Heaviside's versions of Maxwell's equations are distinct by virtue of the fact that they are written in modern vector notation. They actually only contain one of the original eight—equation "G" (Gauss's Law). Another of Heaviside's four equations is an amalgamation of Maxwell's law of total currents (equation "A") with Ampère's circuital law (equation "C"). This amalgamation, which Maxwell himself had actually originally made at equation (112) in "On Physical Lines of Force", is the one that modifies Ampère's Circuital Law to include Maxwell's displacement current. Heaviside's equations Eighteen of Maxwell's twenty original equations can be vectorized into six equations, labeled (A) to (F) below, each of which represents a group of three original equations in component form. The 19th and 20th of Maxwell's component equations appear as (G) and (H) below, making a total of eight vector equations. These are listed below in Maxwell's original order, designated by the letters that Maxwell assigned to them in his 1864 paper. (A) The law of total currents (B) Definition of the magnetic potential (C) Ampère's circuital law (D) The Lorentz force and Faraday's law of induction (E) The electric elasticity equation (F) Ohm's law (G) Gauss's law (H) Equation of continuity of charge . Notation is the magnetic field, which Maxwell called the "magnetic intensity". is the electric current density (with being the total current density including displacement current). is the displacement field (called the "electric displacement" by Maxwell). is the free charge density (called the "quantity of free electricity" by Maxwell). 
is the magnetic potential (called the "angular impulse" by Maxwell). is the force per unit charge (called the "electromotive force" by Maxwell, not to be confused with the scalar quantity that is now called electromotive force; see below). is the electric potential (which Maxwell also called "electric potential"). is the electrical conductivity (Maxwell called the inverse of conductivity the "specific resistance", what is now called the resistivity). is the vector operator del. Clarifications Maxwell did not consider completely general materials; his initial formulation used linear, isotropic, nondispersive media with permittivity ϵ and permeability μ, although he also discussed the possibility of anisotropic materials. Gauss's law for magnetism () is not included in the above list, but follows directly from equation (B) by taking divergences (because the divergence of the curl is zero). Substituting (A) into (C) yields the familiar differential form of the Maxwell-Ampère law. Equation (D) implicitly contains the Lorentz force law and the differential form of Faraday's law of induction. For a static magnetic field, vanishes, and the electric field becomes conservative and is given by , so that (D) reduces to . This is simply the Lorentz force law on a per-unit-charge basis — although Maxwell's equation (D) first appeared at equation (77) in "On Physical Lines of Force" in 1861, 34 years before Lorentz derived his force law, which is now usually presented as a supplement to the four "Maxwell's equations". The cross-product term in the Lorentz force law is the source of the so-called motional emf in electric generators (see also Moving magnet and conductor problem). Where there is no motion through the magnetic field — e.g., in transformers — we can drop the cross-product term, and the force per unit charge (called ) reduces to the electric field , so that Maxwell's equation (D) reduces to . Taking curls, noting that the curl of a gradient is zero, we obtain which is the differential form of Faraday's law. Thus the three terms on the right side of equation (D) may be described, from left to right, as the motional term, the transformer term, and the conservative term. In deriving the electromagnetic wave equation, Maxwell considers the situation only from the rest frame of the medium, and accordingly drops the cross-product term. But he still works from equation (D), in contrast to modern textbooks which tend to work from Faraday's law (see below). The constitutive equations (E) and (F) are now usually written in the rest frame of the medium as and . Maxwell's equation (G), viewed in isolation as printed in the 1864 paper, at first seems to say that .  However, if we trace the signs through the previous two triplets of equations, we see that what seem to be the components of are in fact the components of . The notation used in Maxwell's later Treatise on Electricity and Magnetism is different, and avoids the misleading first impression. Maxwell – electromagnetic light wave In part VI of "A Dynamical Theory of the Electromagnetic Field", subtitled "Electromagnetic theory of light", Maxwell uses the correction to Ampère's Circuital Law made in part III of his 1862 paper, "On Physical Lines of Force", which is defined as displacement current, to derive the electromagnetic wave equation. He obtained a wave equation with a speed in close agreement to experimental determinations of the speed of light. 
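The displayed formulas in this part of the article did not survive in this copy. As a rough reconstruction in modern SI notation (not Maxwell's original variables), the result of Part VI corresponds to the vacuum wave equations

$$\nabla^2\mathbf{E} \;=\; \mu_0\varepsilon_0\,\frac{\partial^2\mathbf{E}}{\partial t^2}, \qquad \nabla^2\mathbf{B} \;=\; \mu_0\varepsilon_0\,\frac{\partial^2\mathbf{B}}{\partial t^2},$$

describing disturbances that propagate at speed $c = 1/\sqrt{\mu_0\varepsilon_0}$; with $\mu_0 = 4\pi\times10^{-7}\,\mathrm{H/m}$ and $\varepsilon_0 \approx 8.854\times10^{-12}\,\mathrm{F/m}$ this gives $c \approx 2.998\times10^{8}\,\mathrm{m/s}$, the value Maxwell found to agree closely with the measured speed of light.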
He commented, Maxwell's derivation of the electromagnetic wave equation has been replaced in modern physics by a much less cumbersome method which combines the corrected version of Ampère's Circuital Law with Faraday's law of electromagnetic induction. Modern equation methods To obtain the electromagnetic wave equation in a vacuum using the modern method, we begin with the modern 'Heaviside' form of Maxwell's equations. Using (SI units) in a vacuum, these equations are If we take the curl of the curl equations we obtain If we note the vector identity where is any vector function of space, we recover the wave equations where meters per second is the speed of light in free space. Legacy and impact Of this paper and Maxwell's related works, fellow physicist Richard Feynman said: "From the long view of this history of mankind – seen from, say, 10,000 years from now – there can be little doubt that the most significant event of the 19th century will be judged as Maxwell's discovery of the laws of electromagnetism." Albert Einstein used Maxwell's equations as the starting point for his special theory of relativity, presented in The Electrodynamics of Moving Bodies, one of Einstein's 1905 Annus Mirabilis papers. In it is stated: the same laws of electrodynamics and optics will be valid for all frames of reference for which the equations of mechanics hold good and Any ray of light moves in the "stationary" system of co-ordinates with the determined velocity c, whether the ray be emitted by a stationary or by a moving body. Maxwell's equations can also be derived by extending general relativity into five physical dimensions. See also A Treatise on Electricity and Magnetism Gauge theory References Further reading Darrigol, Olivier (2000). Electromagnetism from Ampère to Einstein. Oxford University Press. ISBN 978-0198505945 1860s in science Electromagnetism Physics papers Works by James Clerk Maxwell Maxwell's equations 1865 documents Works originally published in Philosophical Transactions of the Royal Society
A Dynamical Theory of the Electromagnetic Field
[ "Physics" ]
1,921
[ "Electromagnetism", "Physical phenomena", "Equations of physics", "Fundamental interactions", "Maxwell's equations" ]
729,572
https://en.wikipedia.org/wiki/Bloch%20sphere
In quantum mechanics and computing, the Bloch sphere is a geometrical representation of the pure state space of a two-level quantum mechanical system (qubit), named after the physicist Felix Bloch. Mathematically each quantum mechanical system is associated with a separable complex Hilbert space . A pure state of a quantum system is represented by a non-zero vector in . As the vectors and (with ) represent the same state, the level of the quantum system corresponds to the dimension of the Hilbert space and pure states can be represented as equivalence classes, or, rays in a projective Hilbert space . For a two-dimensional Hilbert space, the space of all such states is the complex projective line This is the Bloch sphere, which can be mapped to the Riemann sphere. The Bloch sphere is a unit 2-sphere, with antipodal points corresponding to a pair of mutually orthogonal state vectors. The north and south poles of the Bloch sphere are typically chosen to correspond to the standard basis vectors and , respectively, which in turn might correspond e.g. to the spin-up and spin-down states of an electron. This choice is arbitrary, however. The points on the surface of the sphere correspond to the pure states of the system, whereas the interior points correspond to the mixed states. The Bloch sphere may be generalized to an n-level quantum system, but then the visualization is less useful. The natural metric on the Bloch sphere is the Fubini–Study metric. The mapping from the unit 3-sphere in the two-dimensional state space to the Bloch sphere is the Hopf fibration, with each ray of spinors mapping to one point on the Bloch sphere. Definition Given an orthonormal basis, any pure state of a two-level quantum system can be written as a superposition of the basis vectors and , where the coefficient of (or contribution from) each of the two basis vectors is a complex number. This means that the state is described by four real numbers. However, only the relative phase between the coefficients of the two basis vectors has any physical meaning (the phase of the quantum system is not directly measurable), so that there is redundancy in this description. We can take the coefficient of to be real and non-negative. This allows the state to be described by only three real numbers, giving rise to the three dimensions of the Bloch sphere. We also know from quantum mechanics that the total probability of the system has to be one: , or equivalently . Given this constraint, we can write using the following representation: , where and . The representation is always unique, because, even though the value of is not unique when is one of the states (see Bra-ket notation) or , the point represented by and is unique. The parameters and , re-interpreted in spherical coordinates as respectively the colatitude with respect to the z-axis and the longitude with respect to the x-axis, specify a point on the unit sphere in . For mixed states, one considers the density operator. Any two-dimensional density operator can be expanded using the identity and the Hermitian, traceless Pauli matrices , , where is called the Bloch vector. It is this vector that indicates the point within the sphere that corresponds to a given mixed state. Specifically, as a basic feature of the Pauli vector, the eigenvalues of are . Density operators must be positive-semidefinite, so it follows that . For pure states, one then has in comportance with the above. 
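The inline formulas in the paragraph above were lost in this copy. As a hedged reconstruction of the standard parametrization being described (conventions for the phase ranges vary slightly between texts), a pure state is written

$$|\psi\rangle \;=\; \cos\frac{\theta}{2}\,|0\rangle + e^{i\phi}\sin\frac{\theta}{2}\,|1\rangle, \qquad 0 \le \theta \le \pi,\; 0 \le \phi < 2\pi,$$

with the corresponding point on the unit sphere at $(\sin\theta\cos\phi,\ \sin\theta\sin\phi,\ \cos\theta)$, while a general density operator expands as

$$\rho \;=\; \frac{1}{2}\left(I + \vec{a}\cdot\vec{\sigma}\right), \qquad \|\vec{a}\| \le 1,$$

where $\vec{a}$ is the Bloch vector and the pure states are exactly those with $\|\vec{a}\| = 1$.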
As a consequence, the surface of the Bloch sphere represents all the pure states of a two-dimensional quantum system, whereas the interior corresponds to all the mixed states. u, v, w representation The Bloch vector can be represented in the following basis, with reference to the density operator : where This basis is often used in laser theory, where is known as the population inversion. In this basis, the numbers are the expectations of the three Pauli matrices , allowing one to identify the three coordinates with x y and z axes. Pure states Consider an n-level quantum mechanical system. This system is described by an n-dimensional Hilbert space Hn. The pure state space is by definition the set of rays of Hn. Theorem. Let U(n) be the Lie group of unitary matrices of size n. Then the pure state space of Hn can be identified with the compact coset space To prove this fact, note that there is a natural group action of U(n) on the set of states of Hn. This action is continuous and transitive on the pure states. For any state , the isotropy group of , (defined as the set of elements of U(n) such that ) is isomorphic to the product group In linear algebra terms, this can be justified as follows. Any of U(n) that leaves invariant must have as an eigenvector. Since the corresponding eigenvalue must be a complex number of modulus 1, this gives the U(1) factor of the isotropy group. The other part of the isotropy group is parametrized by the unitary matrices on the orthogonal complement of , which is isomorphic to U(n − 1). From this the assertion of the theorem follows from basic facts about transitive group actions of compact groups. The important fact to note above is that the unitary group acts transitively on pure states. Now the (real) dimension of U(n) is n2. This is easy to see since the exponential map is a local homeomorphism from the space of self-adjoint complex matrices to U(n). The space of self-adjoint complex matrices has real dimension n2. Corollary. The real dimension of the pure state space of Hn is 2n − 2. In fact, Let us apply this to consider the real dimension of an m qubit quantum register. The corresponding Hilbert space has dimension 2m. Corollary. The real dimension of the pure state space of an m-qubit quantum register is 2m+1 − 2. Plotting pure two-spinor states through stereographic projection Mathematically the Bloch sphere for a two-spinor state can be mapped to a Riemann sphere , i.e., the projective Hilbert space with the 2-dimensional complex Hilbert space a representation space of SO(3). Given a pure state where and are complex numbers which are normalized so that and such that and , i.e., such that and form a basis and have diametrically opposite representations on the Bloch sphere, then let be their ratio. If the Bloch sphere is thought of as being embedded in with its center at the origin and with radius one, then the plane z = 0 (which intersects the Bloch sphere at a great circle; the sphere's equator, as it were) can be thought of as an Argand diagram. Plot point u in this plane — so that in it has coordinates . Draw a straight line through u and through the point on the sphere that represents . (Let (0,0,1) represent and (0,0,−1) represent .) This line intersects the sphere at another point besides . (The only exception is when , i.e., when and .) Call this point P. Point u on the plane z = 0 is the stereographic projection of point P on the Bloch sphere. 
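The u, v, w representation described earlier in this section says that the Bloch coordinates are the expectation values of the three Pauli matrices. A minimal Python sketch of that computation (the function names and example states are mine, not from the article):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_vector(rho):
    """Bloch coordinates (u, v, w) of a 2x2 density matrix,
    computed as the expectation values tr(rho sigma_i)."""
    return np.real([np.trace(rho @ s) for s in (sx, sy, sz)])

def pure_state_density(alpha, beta):
    """Density matrix |psi><psi| for |psi> = alpha|0> + beta|1>."""
    psi = np.array([alpha, beta], dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

# |+> = (|0> + |1>)/sqrt(2) lies on the +x axis of the sphere,
# while the maximally mixed state I/2 sits at the centre.
print(bloch_vector(pure_state_density(1, 1)))        # ~ [1, 0, 0]
print(bloch_vector(0.5 * np.eye(2, dtype=complex)))  # ~ [0, 0, 0]
```

Vectors of unit length come from pure states and interior points from mixed states, matching the geometric picture above.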
The vector with tail at the origin and tip at P is the direction in 3-D space corresponding to the spinor . The coordinates of P are Density operators Formulations of quantum mechanics in terms of pure states are adequate for isolated systems; in general quantum mechanical systems need to be described in terms of density operators. The Bloch sphere parametrizes not only pure states but mixed states for 2-level systems. The density operator describing the mixed-state of a 2-level quantum system (qubit) corresponds to a point inside the Bloch sphere with the following coordinates: where is the probability of the individual states within the ensemble and are the coordinates of the individual states (on the surface of Bloch sphere). The set of all points on and inside the Bloch sphere is known as the Bloch ball. For states of higher dimensions there is difficulty in extending this to mixed states. The topological description is complicated by the fact that the unitary group does not act transitively on density operators. The orbits moreover are extremely diverse as follows from the following observation: Theorem. Suppose A is a density operator on an n level quantum mechanical system whose distinct eigenvalues are μ1, ..., μk with multiplicities n1, ..., nk. Then the group of unitary operators V such that V A V* = A is isomorphic (as a Lie group) to In particular the orbit of A is isomorphic to It is possible to generalize the construction of the Bloch ball to dimensions larger than 2, but the geometry of such a "Bloch body" is more complicated than that of a ball. Rotations A useful advantage of the Bloch sphere representation is that the evolution of the qubit state is describable by rotations of the Bloch sphere. The most concise explanation for why this is the case is that the Lie algebra for the group of unitary and hermitian matrices is isomorphic to the Lie algebra of the group of three dimensional rotations . Rotation operators about the Bloch basis The rotations of the Bloch sphere about the Cartesian axes in the Bloch basis are given by Rotations about a general axis If is a real unit vector in three dimensions, the rotation of the Bloch sphere about this axis is given by: An interesting thing to note is that this expression is identical under relabelling to the extended Euler formula for quaternions. Derivation of the Bloch rotation generator Ballentine presents an intuitive derivation for the infinitesimal unitary transformation. This is important for understanding why the rotations of Bloch spheres are exponentials of linear combinations of Pauli matrices. Hence a brief treatment on this is given here. A more complete description in a quantum mechanical context can be found here. Consider a family of unitary operators representing a rotation about some axis. Since the rotation has one degree of freedom, the operator acts on a field of scalars such that: where We define the infinitesimal unitary as the Taylor expansion truncated at second order. By the unitary condition: Hence For this equality to hold true (assuming is negligible) we require . This results in a solution of the form: where is any Hermitian transformation, and is called the generator of the unitary family. 
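As a hedged illustration of the rotation operators discussed above (their explicit matrices were dropped from this copy), the closed form $\exp(-i\,\theta\,\hat{n}\cdot\vec{\sigma}/2) = \cos(\theta/2)\,I - i\sin(\theta/2)\,\hat{n}\cdot\vec{\sigma}$ can be checked numerically; the helper names below are mine:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = (sx, sy, sz)

def rotation(axis, theta):
    """Unitary exp(-i*theta/2 * n.sigma) for a unit axis n = (nx, ny, nz)."""
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    n_dot_sigma = sum(c * s for c, s in zip(n, paulis))
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * n_dot_sigma

def bloch(rho):
    # Bloch vector as Pauli expectation values.
    return np.real([np.trace(rho @ s) for s in paulis])

# Rotating |0> (the north pole) by pi/2 about the y axis should carry
# the Bloch vector (0, 0, 1) onto (1, 0, 0).
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
U = rotation((0, 1, 0), np.pi / 2)
print(np.round(bloch(U @ rho0 @ U.conj().T), 6))  # ~ [1, 0, 0]
```

The unitary acts on the two-dimensional state space, yet its effect on the Bloch vector is an ordinary three-dimensional rotation, which is the point made above.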
Hence Since the Pauli matrices are unitary Hermitian matrices and have eigenvectors corresponding to the Bloch basis, , we can naturally see how a rotation of the Bloch sphere about an arbitrary axis is described by with the rotation generator given by External links Online Bloch sphere visualization by Konstantin Herb See also Atomic electron transition Gyrovector space Versors Specific implementations of the Bloch sphere are enumerated under the qubit article. Notes References Quantum mechanics Projective geometry
Bloch sphere
[ "Physics" ]
2,298
[ "Theoretical physics", "Quantum mechanics" ]
729,900
https://en.wikipedia.org/wiki/Typhoon%20Lee
Typhoon Lee (; born 1948) is an astrophysicist and geochemist at Academia Sinica, Taiwan, where he specializes in isotope geochemistry and nuclear astrophysics. Lee received his B.S. in physics from National Tsing Hua University and his PhD in astronomy from the University of Texas in 1977. His research spans chondrules, mineralogy, the Allende meteorite, chondrites, and astrophysics. Within mineralogy, his focus extends to subjects such as seawater, with close links to studies of coral and Porites. His work also involves analytical chemistry, stable isotope ratios, and magnesium isotopes in the context of his Allende meteorite investigations. His honors include the Robert J. Trumpler Award in 1978 from the Astronomical Society of the Pacific, and Outstanding Researcher Awards from the National Science Council in 1985–87 and again in 1988–90. A selection of his publications includes: X-wind, Refractory IDPs and Cometary Nuclei, 1999, in Proc. IAU Colloquium 168, Astro. Soc. Pacific, San Francisco. Proto-stellar Cosmic Rays and Extinct Radioactivities in Meteorites, 1998, Ap. J. 506, 898–912. Coral Sr/Ca as a High Precision High Time-Resolution Paleo Thermometer for Sea Surface Temperature: Looking for ENSO Effects in Kuroshio near Taiwan, 1996, Proc. 1995 Nagoya ICBP-PAGES/PEP-II Symposium, 211–216. U-Disequilibrium Dating of Corals in Southern Taiwan by Mass Spectrometry, 1993, J. Geol. Soc. China. 36, 57–66. Model-Dependent Be-10 Sedimentation Rates for the Taiwan Strait and their Tectonic Significance, 1993, Geology, 21, 423–426. First Detection of Fallout Cs-135 and Potential Application of 137Cs/135Cs, 1993, Geochim. Cosmochim. Acta (Letters), 57, 3493–3497. References External links Typhoon Lee personal website Taiwanese astronomers Planetary scientists National Tsing Hua University alumni University of Texas at Austin College of Natural Sciences alumni Living people Members of Academia Sinica Taiwanese geochemists 1948 births
Typhoon Lee
[ "Chemistry", "Astronomy" ]
473
[ "Geochemists", "Astronomers", "Astronomy stubs", "Taiwanese geochemists", "Astronomer stubs" ]
729,933
https://en.wikipedia.org/wiki/Kenichi%20Fukui
was a Japanese chemist. He became the first person of East Asian ancestry to be awarded the Nobel Prize in Chemistry when he won the 1981 prize with Roald Hoffmann, for their independent investigations into the mechanisms of chemical reactions. Fukui's prize-winning work focused on the role of frontier orbitals in chemical reactions: specifically that molecules share loosely bonded electrons which occupy the frontier orbitals, that is, the Highest Occupied Molecular Orbital (HOMO) and the Lowest Unoccupied Molecular Orbital (LUMO). Early life Fukui was the eldest of three sons of Ryokichi Fukui, a foreign trade merchant, and Chie Fukui. He was born in Nara, Japan. In his student days between 1938 and 1941, Fukui's interest was stimulated by quantum mechanics and Erwin Schrödinger's equation. He also developed the belief that a breakthrough in science occurs through the unexpected fusion of remotely related fields. In an interview with The Chemical Intelligencer, Kenichi discusses his path towards chemistry starting from middle school: "The reason for my selection of chemistry is not easy to explain, since chemistry was never my favorite branch in middle school and high school years. Actually, the fact that my respected Fabre had been a genius in chemistry had captured my heart latently, the most decisive occurrence in my education career came when my father asked the advice of Professor Gen-itsu Kita of the Kyoto Imperial University concerning the course I should take." On the advice of Kita, a personal friend of the elder Fukui, young Kenichi was directed to the Department of Industrial Chemistry, with which Kita was then affiliated. He also explains that chemistry was difficult for him because it seemed to require memorization, and that he preferred subjects with a more logical character. He followed the advice of a mentor he deeply respected and never looked back, and he followed in Kita's footsteps by attending Kyoto University in Japan. During that same interview Kenichi also discussed his reasons for preferring theoretical chemistry over experimental chemistry. Although he certainly excelled at theoretical science, he actually spent much of his early research on experimental work. Kenichi quickly completed more than 100 experimental projects and papers, and he rather enjoyed the experimental phenomena of chemistry. In fact, later on when teaching he would recommend experimental thesis projects for his students to balance them out: theoretical work came more naturally to his students, but by suggesting or assigning experimental projects they could come to understand both, as all scientists should. Following his graduation from Kyoto Imperial University in 1941, Fukui was engaged in the Army Fuel Laboratory of Japan during World War II. In 1943, he was appointed a lecturer in fuel chemistry at Kyoto Imperial University and began his career as an experimental organic chemist. Research He was professor of physical chemistry at Kyoto University from 1951 to 1982, president of the Kyoto Institute of Technology between 1982 and 1988, and a member of the International Academy of Quantum Molecular Science and honorary member of the International Academy of Science, Munich. He was also director of the Institute for Fundamental Chemistry from 1988 till his death.
He also served as President of the Chemical Society of Japan from 1983 to 1984 and received multiple awards aside from his Nobel Prize, including the Japan Academy Prize in 1962, Person of Cultural Merit in 1981, and the Imperial honour of the Grand Cordon of the Order of the Rising Sun in 1988, along with many other less prestigious awards. In 1952, Fukui, with his young collaborators T. Yonezawa and H. Shingu, presented his molecular orbital theory of reactivity in aromatic hydrocarbons, which appeared in the Journal of Chemical Physics. At that time, his concept failed to garner adequate attention among chemists. Fukui observed in his Nobel lecture in 1981 that his original paper 'received a number of controversial comments. This was in a sense understandable, because for lack of my experiential ability, the theoretical foundation for this conspicuous result was obscure or rather improperly given.' The frontier orbitals concept came to be recognized following the 1965 publication by Robert B. Woodward and Roald Hoffmann of the Woodward-Hoffmann stereoselection rules, which could predict the reaction rates between two reactants. These rules, depicted in diagrams, explain why some pairs react easily while other pairs do not. The basis for these rules lies in the symmetry properties of the molecules and especially in the disposition of their electrons. Fukui acknowledged in his Nobel lecture that 'It is only after the remarkable appearance of the brilliant work by Woodward and Hoffmann that I have become fully aware that not only the density distribution but also the nodal property of the particular orbitals have significance in such a wide variety of chemical reactions.' What has been striking in Fukui's significant contributions is that he developed his ideas before chemists had access to large computers for modeling. Apart from exploring the theory of chemical reactions, Fukui's contributions to chemistry also include the statistical theory of gelation, organic synthesis by inorganic salts and polymerization kinetics. In a 1985 interview with New Scientist magazine, Fukui was highly critical of the practices adopted in Japanese universities and industries to foster science. He noted, "Japanese universities have a chair system that is a fixed hierarchy. This has its merits when trying to work as a laboratory on one theme. But if you want to do original work you must start young, and young people are limited by the chair system. Even if students cannot become assistant professors at an early age they should be encouraged to do original work." Fukui also admonished Japanese industrial research, stating, "Industry is more likely to put its research effort into its daily business. It is very difficult for it to become involved in pure chemistry. There is a need to encourage long-range research, even if we don't know its goal and if its application is unknown." In another interview with The Chemical Intelligencer he further elaborates on his criticism by saying, "As is known worldwide, Japan has tried to catch up with the western countries since the beginning of this century by importing science from them." Japan is, in a sense, relatively new to fundamental science as a part of its society, and its relative lack of originality and of funding, areas in which the western countries hold the advantage, has held the country back in fundamental science. He has also stated, however, that the situation in Japan is improving, especially funding for fundamental science, which has seen a steady increase for years.
Recognition Fukui was awarded the Nobel Prize for his realization that a good approximation for reactivity could be found by looking at the frontier orbitals (HOMO/LUMO). This was based on three main observations of molecular orbital theory as two molecules interact. The occupied orbitals of different molecules repel each other. Positive charges of one molecule attract the negative charges of the other. The occupied orbitals of one molecule and the unoccupied orbitals of the other (especially HOMO and LUMO) interact with each other causing attraction. From these observations, frontier molecular orbital (FMO) theory simplifies reactivity to interactions between HOMO of one species and the LUMO of the other. This helps to explain the predictions of the Woodward-Hoffman rules for thermal pericyclic reactions, which are summarized in the following statement: "A ground-state pericyclic change is symmetry-allowed when the total number of (4q+2)s and (4r)a components is odd" Fukui was elected a Foreign Member of the Royal Society (ForMemRS) in 1989. Bibliography See also List of Japanese Nobel laureates List of Nobel laureates affiliated with Kyoto University References External links 1918 births 1998 deaths 20th-century Japanese chemists Academic staff of Kyoto University Japanese Nobel laureates Foreign associates of the National Academy of Sciences Foreign members of the Royal Society Imperial Japanese Army personnel of World War II Nobel laureates in Chemistry Kyoto University alumni Members of the International Academy of Quantum Molecular Science People from Nara, Nara Recipients of the Order of Culture Theoretical chemists
Kenichi Fukui
[ "Chemistry" ]
1,636
[ "Quantum chemistry", "Theoretical chemistry", "Theoretical chemists", "Physical chemists" ]
730,378
https://en.wikipedia.org/wiki/Goddard%E2%80%93Thorn%20theorem
In mathematics, and in particular in the mathematical background of string theory, the Goddard–Thorn theorem (also called the no-ghost theorem) is a theorem describing properties of a functor that quantizes bosonic strings. It is named after Peter Goddard and Charles Thorn. The name "no-ghost theorem" stems from the fact that in the original statement of the theorem, the natural inner product induced on the output vector space is positive definite. Thus, there were no so-called ghosts (Pauli–Villars ghosts), or vectors of negative norm. The name "no-ghost theorem" is also a word play on the no-go theorem of quantum mechanics. Statement This statement is that of Borcherds (1992). Suppose that is a unitary representation of the Virasoro algebra , so is equipped with a non-degenerate bilinear form and there is an algebra homomorphism so that where the adjoint is defined with respect to the bilinear form, and Suppose also that decomposes into a direct sum of eigenspaces of with non-negative, integer eigenvalues , denoted , and that each is finite dimensional (giving a -grading). Assume also that admits an action from a group that preserves this grading. For the two-dimensional even unimodular Lorentzian lattice II1,1, denote the corresponding lattice vertex algebra by . This is a II1,1-graded algebra with a bilinear form and carries an action of the Virasoro algebra. Let be the subspace of the vertex algebra consisting of vectors such that for . Let be the subspace of of degree . Each space inherits a -action which acts as prescribed on and trivially on . The quotient of by the nullspace of its bilinear form is naturally isomorphic as a -module with an invariant bilinear form, to if and if . II1,1 The lattice II1,1 is the rank 2 lattice with bilinear form This is even, unimodular and integral with signature (+,-). Formalism There are two naturally isomorphic functors that are typically used to quantize bosonic strings. In both cases, one starts with positive-energy representations of the Virasoro algebra of central charge 26, equipped with Virasoro-invariant bilinear forms, and ends up with vector spaces equipped with bilinear forms. Here, "Virasoro-invariant" means Ln is adjoint to L−n for all integers n. The first functor historically is "old canonical quantization", and it is given by taking the quotient of the weight 1 primary subspace by the radical of the bilinear form. Here, "primary subspace" is the set of vectors annihilated by Ln for all strictly positive n, and "weight 1" means L0 acts by identity. A second, naturally isomorphic functor, is given by degree 1 BRST cohomology. Older treatments of BRST cohomology often have a shift in the degree due to a change in choice of BRST charge, so one may see degree −1/2 cohomology in papers and texts from before 1995. A proof that the functors are naturally isomorphic can be found in Section 4.4 of Polchinski's String Theory text. The Goddard–Thorn theorem amounts to the assertion that this quantization functor more or less cancels the addition of two free bosons, as conjectured by Lovelace in 1971. Lovelace's precise claim was that at critical dimension 26, Virasoro-type Ward identities cancel two full sets of oscillators. Mathematically, this is the following claim: Let V be a unitarizable Virasoro representation of central charge 24 with Virasoro-invariant bilinear form, and let be the irreducible module of the R1,1 Heisenberg Lie algebra attached to a nonzero vector λ in R1,1. 
Then the image of V ⊗  under quantization is canonically isomorphic to the subspace of V on which L0 acts by 1-(λ,λ). The no-ghost property follows immediately, since the positive-definite Hermitian structure of V is transferred to the image under quantization. Applications The bosonic string quantization functors described here can be applied to any conformal vertex algebra of central charge 26, and the output naturally has a Lie algebra structure. The Goddard–Thorn theorem can then be applied to concretely describe the Lie algebra in terms of the input vertex algebra. Perhaps the most spectacular case of this application is Richard Borcherds's proof of the monstrous moonshine conjecture, where the unitarizable Virasoro representation is the monster vertex algebra (also called "moonshine module") constructed by Frenkel, Lepowsky, and Meurman. By taking a tensor product with the vertex algebra attached to a rank-2 hyperbolic lattice, and applying quantization, one obtains the monster Lie algebra, which is a generalized Kac–Moody algebra graded by the lattice. By using the Goddard–Thorn theorem, Borcherds showed that the homogeneous pieces of the Lie algebra are naturally isomorphic to graded pieces of the moonshine module, as representations of the monster simple group. Earlier applications include Frenkel's determination of upper bounds on the root multiplicities of the Kac–Moody Lie algebra whose Dynkin diagram is the Leech lattice, and Borcherds's construction of a generalized Kac–Moody Lie algebra that contains Frenkel's Lie algebra and saturates Frenkel's 1/∆ bound. References I. Frenkel, Representations of Kac-Moody algebras and dual resonance models Applications of group theory in theoretical physics, Lect. Appl. Math. 21 A.M.S. (1985) 325–353. Theorems in linear algebra String theory Theorems in mathematical physics No-go theorems
Goddard–Thorn theorem
[ "Physics", "Astronomy", "Mathematics" ]
1,256
[ "Theorems in linear algebra", "Astronomical hypotheses", "No-go theorems", "Mathematical theorems", "Equations of physics", "Theorems in algebra", "Theorems in mathematical physics", "String theory", "Mathematical problems", "Physics theorems" ]
12,568,006
https://en.wikipedia.org/wiki/Histidine-tryptophan-ketoglutarate
Histidine-tryptophan-ketoglutarate, or Custodiol HTK solution, is a high-flow, low-potassium preservation solution used for organ transplantation. The solution was initially developed by Hans-Jürgen Bretschneider. HTK solution is intended for perfusion and flushing of donor liver, kidney, heart, lung and pancreas prior to removal from the donor and for preserving these organs during hypothermic storage and transport to the recipient. HTK solution is based on the principle of inactivating organ function by withdrawal of extracellular sodium and calcium, together with intensive buffering of the extracellular space by means of histidine/histidine hydrochloride, so as to prolong the period during which the organs will tolerate interruption of oxygenated blood. The composition of HTK is similar to that of intracellular fluid. All of the components of HTK occur naturally in the body. The osmolarity of HTK is 310 mOsm/L. Composition Sodium: 15 mmol/L Potassium: 9 mmol/L Magnesium: 4 mmol/L Calcium: 0.015 mmol/L Ketoglutarate/glutamic acid: 1 mmol/L Histidine: 198 mmol/L Mannitol: 30 mmol/L Tryptophan: 2 mmol/L Clinical Application HTK (branded as Custodiol® by Essential Pharmaceuticals LLC), has been presented by industry to surgeons as an alternative solution that exceeds other cardioplegias in myocardial protection during cardiac surgery. This claim relies on the single-dose administration of HTK compared with other multidose cardioplegias (MDC), sparing time in the adjustment of equipment during cardioplegia re-administration, allowing greater time to operate and thus a decreased CPB duration. Other benefits include a lower concentration of sodium, calcium, and potassium compared with other cardioplegias with cardiac arrest arising from the deprivation of sodium. Finally, histidine is thought to aid buffering, mannitol and tryptophan to improve membrane stability, and ketoglutarate to help ATP production during reperfusion. A 2021 meta-analysis demonstrated no statistical advantage of HTK over blood or other crystalloid cardioplegias during adult cardiac surgery. The only practical advantage of HTK, therefore, is the single-dose administration compared to multi-dose requirements of blood and other crystalloid cardioplegia. See also Viaspan (UW Solution) Biostasis Organ transplant References 1. 510(k) Summary. Custodiol HTK Solution Common/Classification Name: Isolated Kidney Perfusion and Transport System and Accessories, 21 CFR 876.5880; Franz Kohler. Prepared December 14, 2004. https://www.fda.gov/cdrh/pdf4/K043461.pdf 2. Ringe B., et al. Safety and efficacy of living donor liver preservation with HTK solution. Transplant Proc. 2005;37:316–319. 3. Agarawal A., et al. Follow-up experience using histidine-tryptophan-ketoglutarate solution in clinical pancreas transplantation Transplant Proc. 2005;37:3523–3526. 4. Pokorny H., et al.: Histidine-tryptophan-ketoglutarate solution for organ preservation in human liver transplantation — a prospective multi-centre observation study. Transpl Int. 2004;17:256-60. 5. de Boer J., et al.: Eurotransplant randomized multicenter kidney graft preservation study comparing HTK with UW and Euro-Collins. Transpl lnt. 1999;12:447-453. 6. Hesse U.J., et al.: Organ preservation with HTK and UW solution. Pabst Sci. Publishers, D-49525 Lengerich, 1999. 7. Hatano E., et al.: Hepatic preservation with histidine-tryptophan-ketoglutarate solution in living related and cadaveric liver transplantation. Clin Sci. 1997;93:81-88. 
External links Official Custodiol Website Custodiol HTK Canadian Website Cryobiology Transplantation medicine
Histidine-tryptophan-ketoglutarate
[ "Physics", "Chemistry", "Biology" ]
940
[ "Biochemistry", "Physical phenomena", "Phase transitions", "Cryobiology" ]
18,296,474
https://en.wikipedia.org/wiki/Magnetic%20resonance%20neurography
Magnetic resonance neurography (MRN) is the direct imaging of nerves in the body by optimizing selectivity for unique MRI water properties of nerves. It is a modification of magnetic resonance imaging. This technique yields a detailed image of a nerve from the resonance signal that arises from in the nerve itself rather than from surrounding tissues or from fat in the nerve lining. Because of the intraneural source of the image signal, the image provides a medically useful set of information about the internal state of the nerve such as the presence of irritation, nerve swelling (edema), compression, pinch or injury. Standard magnetic resonance images can show the outline of some nerves in portions of their courses but do not show the intrinsic signal from nerve water. Magnetic resonance neurography is used to evaluate major nerve compressions such as those affecting the sciatic nerve (e.g. piriformis syndrome), the brachial plexus nerves (e.g. thoracic outlet syndrome), the pudendal nerve, or virtually any named nerve in the body. A related technique for imaging neural tracts in the brain and spinal cord is called magnetic resonance tractography or diffusion tensor imaging. History and physical basis Magnetic resonance imaging (MRI) is based on differences in the physical properties of protons in water molecules in different tissues in the body. The protons and the water molecules of which they are part have subtly different movement characteristics that relate to their biophysical surroundings. Because of this, MRI is capable of differentiating one tissue from another; this provides "tissue contrast." From the time of the first clinical use of MRI in the mid-1970s until 1992, however, despite the active work of many thousands of researchers, there was no reliable method for visualizing nerve. In some parts of the body, nerves could be observed as areas of absent signal delineated by bright fat, or as bland grey structures that could not be reliably distinguished from other similar-appearing structures in cross sectional images. In 1992, Aaron Filler and Franklyn Howe, working at St. George's Hospital Medical School in London, succeeded in identifying the unique water properties of nerve water that would make it possible to generate tissue-specific nerve images. The result was an initial "pure" nerve image in which every other tissue was made to disappear leaving behind only the image of the nerves. The initial pure nerve image served as the basis of image processing techniques leading to discovery of a series of other MRI pulse sequence techniques that would make nerves imageable as well. Further, because they demonstrate water signal arising in the neural tissue itself, they can also reveal abnormalities that affect only the nerve and that do not affect surrounding tissues. More than three million patients seek medical attention every year for nerve-related disorders such as sciatica, carpal tunnel syndrome or various other nerve injuries, yet before 1992, no radiologists were trained to image nerves. There are two main physical bases for the imaging discovery. Firstly, it was known at the time that water diffused preferentially along the long axis of neural tissue in the brain – a property called "anisotropic diffusion". Diffusion MRI had been developed to take advantage of this phenomenon to show contrast between white matter and grey matter in the brain. However, diffusion MRI proved ineffective for imaging of nerves for reasons that were not initially clear. 
Filler and Howe discovered that the problem was that most of the image signal in nerves came from protons that were not involved in anisotropic diffusion. They developed a collection of methods to suppress the "isotropic signal" and this resulted in allowing the anisotropic signal to be unmasked. This was based on the discovery that Chemical Shift Selection could be used to suppress "short T2 water" in the nerve and that this mostly affected isotropic water. The endoneurial fluid compartment in nerve can be unmasked by similar techniques resulting in a "T2" based neurography as well as the original diffusion based neurography technique. Endoneurial fluid increases when nerve is compressed, irritated or injured, leading to nerve image hyperintensity in a magnetic resonance neurography image. Subsequent research has further demonstrated the biophysical basis for the ability of MR Neurography to show nerve injury and irritation. Measurements of the T2 relaxation rate of nerve by Filler and Howe revealed that previous reports of a short relaxation time were wrong and that—once signal from lipid protons was suppressed—the primary image signal from nerve had long T2 relaxation rates best imaged with pulse sequence echo times in the range of 50 to 100 milliseconds. In addition, they later showed that T2-neurography differs from most other MR imaging in that the conspicuity or relative prominence of nerve is affected by the angle of voxel orientation during the acquisition of the image. When acquisitions are done with echo times below 40 milliseconds, there can be "magic angle effects" that provide some spurious information, so MR Neurography is always done with echo times greater than 40 milliseconds. The need for long echo times also characterizes the type of inversion recovery fat suppression sequences used for neurography nerve imaging. Within a few months of the initial findings on diffusion-based nerve imaging, the diffusion technique for nerve imaging was adapted to permit for visualization of neural tracts in the spinal cord and brain via Diffusion Tensor Imaging. Clinical uses The most significant impact of magnetic resonance neurography is on the evaluation of the large proximal nerve elements such as the brachial plexus (the nerves between the cervical spine and the underarm that innervate shoulder, arm and hand), the lumbosacral plexus (nerves between the lumbosacral spine and legs), the sciatic nerve in the pelvis, as well as other nerves such as the pudendal nerve that follow deep or complex courses. Neurography has also been helpful for improving image diagnosis in spine disorders. It can help identify which spinal nerve is actually irritated as a supplement to routine spinal MRI. Standard spinal MRI only demonstrates the anatomy and numerous disk bulges, bone spurs or stenoses that may or may not actually cause nerve impingement symptoms. Many nerves, such as the median and ulnar nerve in the arm or the tibial nerve in the tarsal tunnel, are just below the skin surface and can be tested for pathology with electromyography, but this technique has always been difficult to apply for deep proximal nerves. Magnetic resonance neurography has greatly expanded the efficacy of nerve diagnosis by allowing uniform evaluation of virtually any nerve in the body. 
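As a purely illustrative sketch of why long echo times favour nerve conspicuity, one can compare mono-exponential T2 decay curves; the specific T2 values below are assumptions chosen only to make the qualitative point, not measurements from the studies described here:

```python
import numpy as np

def t2_signal(te_ms, t2_ms, s0=1.0):
    # Simple mono-exponential transverse decay: S(TE) = S0 * exp(-TE / T2)
    return s0 * np.exp(-float(te_ms) / t2_ms)

T2_NERVE_WATER = 75.0   # ms, assumed long-T2 endoneurial water
T2_BACKGROUND = 20.0    # ms, assumed short-T2 background after fat suppression

for te in (10, 40, 70, 100):
    nerve = t2_signal(te, T2_NERVE_WATER)
    background = t2_signal(te, T2_BACKGROUND)
    print(f"TE={te:3d} ms  nerve={nerve:.2f}  background={background:.2f}  "
          f"ratio={nerve / background:.1f}")
```

With these assumed values the nerve-to-background ratio grows from roughly 1.4 at TE = 10 ms to roughly 40 at TE = 100 ms, which is the qualitative behaviour behind acquiring with echo times above 40 ms.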
There are numerous reports dealing with specialized uses of magnetic resonance neurography for nerve pathology such as traumatic brachial plexus root avulsions, cervical radiculopathy, guidance for nerve blocks, demonstration of cysts in nerves, carpal tunnel syndrome, and obstetrical brachial plexus palsy. In addition several formal large scale outcome trials carried out with high quality "Class A" methodology have been published that have verified the clinical efficacy and validity of MR Neurography. Use of magnetic resonance neurography is increasing in neurology and neurosurgery as the implications of its value in diagnosing various causes of sciatica becomes more widespread. There are 1.5 million lumbar MRI scans performed in the US each year for sciatica, leading to surgery for a herniated disk in about 300,000 patients per year. Of these, about 100,000 surgeries fail. Therefore, there is successful treatment for sciatica in just 200,000 and failure of diagnosis or treatment in up to 1.3 million annually in the US alone. The success rate of the paradigm of lumbar MRI and disk resection for treatment of sciatica is therefore about 15%(Filler 2005). Neurography has been applied increasingly to evaluate the distal nerve roots, lumbo-sacral plexus and proximal sciatic nerve in the pelvis and thigh to find other causes of sciatica. It is increasingly important for brachial plexus imaging and for the diagnosis of thoracic outlet syndrome. Research and development in the clinical use of diagnostic neurography has taken place at Johns Hopkins, the Mayo Clinic, UCLA, UCSF, Harvard, the University of Washington in Seattle, University of London, and Oxford University (see references below) as well as through the Neurography Institute. Recent patent litigation concerning MR Neurography has led some unlicensed centers to discontinue offering the technique. Courses have been offered for radiologists at the annual meetings of the Radiological Society of North America (RSNA), and at the International Society for Magnetic Resonance in Medicine and for surgeons at the annual meetings of the American Association of Neurological Surgeons and the Congress of Neurological Surgeons. The use of imaging for diagnosis of nerve disorders represents a change from the way most physicians were trained to practice over the past several decades, as older routine tests fail to identify the diagnosis for nerve related disorders. The New England Journal of Medicine in July 2009 published a report on whole body neurography using a diffusion based neurography technique. In 2010, RadioGraphics - a publication of the Radiological Society of North America that serves to provide continuing medical education to radiologists - published an article series taking the position that Neurography has an important role in the evaluation of entrapment neuropathies. Magnetic resonance neurography does not pose any diagnostic disadvantage relative to standard magnetic resonance imaging because neurography studies typically include high resolution standard MRI image series for anatomical reference along with the neurographic sequences. However, the patient will generally have a slightly longer time in the scanner compared to a routine MRI scan. Magnetic resonance neurography can only be performed in 1.5 tesla and 3 tesla cylindrical type scanners and can't really be done effectively in lower power "open" MR scanners - this can pose significant challenges for claustrophobic patients. 
Although it has been in use for fifteen years and is the subject of more than 150 research publications, most insurance companies still classify this test as experimental and may decline reimbursement, resulting in the need to file appeals. Patients in some plans obtain standard insurance coverage for this widely used procedure. References External links Neurography Institute Neurography
Magnetic resonance neurography
[ "Chemistry" ]
2,131
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
18,298,594
https://en.wikipedia.org/wiki/Crossing%20number%20%28graph%20theory%29
In graph theory, the crossing number of a graph is the lowest number of edge crossings of a plane drawing of the graph . For instance, a graph is planar if and only if its crossing number is zero. Determining the crossing number continues to be of great importance in graph drawing, as user studies have shown that drawing graphs with few crossings makes it easier for people to understand the drawing. The study of crossing numbers originated in Turán's brick factory problem, in which Pál Turán asked for a factory plan that minimized the number of crossings between tracks connecting brick kilns to storage sites. Mathematically, this problem can be formalized as asking for the crossing number of a complete bipartite graph. The same problem arose independently in sociology at approximately the same time, in connection with the construction of sociograms. Turán's conjectured formula for the crossing numbers of complete bipartite graphs remains unproven, as does an analogous formula for the complete graphs. The crossing number inequality states that, for graphs where the number of edges is sufficiently larger than the number of vertices, the crossing number is at least proportional to . It has applications in VLSI design and incidence geometry. Without further qualification, the crossing number allows drawings in which the edges may be represented by arbitrary curves. A variation of this concept, the rectilinear crossing number, requires all edges to be straight line segments, and may differ from the crossing number. In particular, the rectilinear crossing number of a complete graph is essentially the same as the minimum number of convex quadrilaterals determined by a set of points in general position. The problem of determining this number is closely related to the happy ending problem. Definitions For the purposes of defining the crossing number, a drawing of an undirected graph is a mapping from the vertices of the graph to disjoint points in the plane, and from the edges of the graph to curves connecting their two endpoints. No vertex should be mapped onto an edge that it is not an endpoint of, and whenever two edges have curves that intersect (other than at a shared endpoint) their intersections should form a finite set of proper crossings, where the two curves are transverse. A crossing is counted separately for each of these crossing points, for each pair of edges that cross. The crossing number of a graph is then the minimum, over all such drawings, of the number of crossings in a drawing. Some authors add more constraints to the definition of a drawing, for instance that each pair of edges have at most one intersection point (a shared endpoint or crossing). For the crossing number as defined above, these constraints make no difference, because a crossing-minimal drawing cannot have edges with multiple intersection points. If two edges with a shared endpoint cross, the drawing can be changed locally at the crossing point, leaving the rest of the drawing unchanged, to produce a different drawing with one fewer crossing. And similarly, if two edges cross two or more times, the drawing can be changed locally at two crossing points to make a different drawing with two fewer crossings. However, these constraints are relevant for variant definitions of the crossing number that, for instance, count only the numbers of pairs of edges that cross rather than the number of crossings. Special cases As of April 2015, crossing numbers are known for very few graph families. 
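Computing the crossing number itself is hard (see below), but counting the crossings of one given straight-line drawing, the quantity the definition minimises over, is straightforward. A small illustrative sketch, with hypothetical helper names:

```python
from itertools import combinations
import math

def _ccw(a, b, c):
    # Positive if a -> b -> c turns counterclockwise.
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def segments_cross(p1, p2, p3, p4):
    # Proper crossing test: segments p1p2 and p3p4 meet at a single
    # interior point (shared endpoints are not counted as crossings).
    d1, d2 = _ccw(p3, p4, p1), _ccw(p3, p4, p2)
    d3, d4 = _ccw(p1, p2, p3), _ccw(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def count_crossings(positions, edges):
    """Count pairwise crossings in a straight-line drawing.
    positions: dict vertex -> (x, y); edges: list of vertex pairs."""
    total = 0
    for (u1, v1), (u2, v2) in combinations(edges, 2):
        if {u1, v1} & {u2, v2}:
            continue  # edges with a shared endpoint cannot properly cross
        if segments_cross(positions[u1], positions[v1],
                          positions[u2], positions[v2]):
            total += 1
    return total

# K5 drawn with its vertices on a circle (convex position).
pts = {i: (math.cos(2*math.pi*i/5), math.sin(2*math.pi*i/5)) for i in range(5)}
k5_edges = list(combinations(range(5), 2))
print(count_crossings(pts, k5_edges))  # 5
```

The convex drawing of K5 has 5 crossings, one for each of its C(5,4) = 5 quadrilaterals, whereas the crossing number of K5 is 1; finding the drawing that attains the minimum is the hard part.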
In particular, except for a few initial cases, the crossing number of complete graphs, bipartite complete graphs, and products of cycles all remain unknown, although there has been some progress on lower bounds. Complete bipartite graphs During World War II, Hungarian mathematician Pál Turán was forced to work in a brick factory, pushing wagon loads of bricks from kilns to storage sites. The factory had tracks from each kiln to each storage site, and the wagons were harder to push at the points where tracks crossed each other, from which Turán was led to ask his brick factory problem: how can the kilns, storage sites, and tracks be arranged to minimize the total number of crossings? Mathematically, the kilns and storage sites can be formalized as the vertices of a complete bipartite graph, with the tracks as its edges. A factory layout can be represented as a drawing of this graph, so the problem becomes: what is the minimum possible number of crossings in a drawing of a complete bipartite graph? Kazimierz Zarankiewicz attempted to solve Turán's brick factory problem; his proof contained an error, but he established a valid upper bound of for the crossing number of the complete bipartite graph . This bound has been conjectured to be the optimal number of crossings for all complete bipartite graphs. Complete graphs and graph coloring The problem of determining the crossing number of the complete graph was first posed by Anthony Hill, and appeared in print in 1960. Hill and his collaborator John Ernest were two constructionist artists fascinated by mathematics. They not only formulated this problem but also originated a conjectural formula for this crossing number, which Richard K. Guy published in 1960. Namely, it is known that there always exists a drawing with crossings. This formula gives values of for ; see sequence in the On-line Encyclopedia of Integer Sequences. The conjecture is that there can be no better drawing, so that this formula gives the optimal number of crossings for the complete graphs. An independent formulation of the same conjecture was made by Thomas L. Saaty in 1964. Saaty further verified that this formula gives the optimal number of crossings for and Pan and Richter showed that it also is optimal for . The Albertson conjecture, formulated by Michael O. Albertson in 2007, states that, among all graphs with chromatic number , the complete graph has the minimum number of crossings. That is, if the conjectured formula for the crossing number of the complete graph is correct, then every -chromatic graph has crossing number at least equal to the same formula. The Albertson conjecture is now known to hold for . Cubic graphs The smallest cubic graphs with crossing numbers 1–11 are known . The smallest 1-crossing cubic graph is the complete bipartite graph , with 6 vertices. The smallest 2-crossing cubic graph is the Petersen graph, with 10 vertices. The smallest 3-crossing cubic graph is the Heawood graph, with 14 vertices. The smallest 4-crossing cubic graph is the Möbius-Kantor graph, with 16 vertices. The smallest 5-crossing cubic graph is the Pappus graph, with 18 vertices. The smallest 6-crossing cubic graph is the Desargues graph, with 20 vertices. None of the four 7-crossing cubic graphs, with 22 vertices, are well known. The smallest 8-crossing cubic graphs include the Nauru graph and the McGee graph or (3,7)-cage graph, with 24 vertices. The smallest 11-crossing cubic graphs include the Coxeter graph with 28 vertices. 
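The displayed formulas for the complete bipartite and complete graph conjectures above were dropped from this copy. As a reconstruction of the standard statements, Zarankiewicz's bound is

$$\operatorname{cr}(K_{m,n}) \;\le\; \left\lfloor\frac{m}{2}\right\rfloor\left\lfloor\frac{m-1}{2}\right\rfloor\left\lfloor\frac{n}{2}\right\rfloor\left\lfloor\frac{n-1}{2}\right\rfloor$$

(conjectured to hold with equality), and the Guy conjecture for complete graphs reads

$$\operatorname{cr}(K_{n}) \;=\; \frac{1}{4}\left\lfloor\frac{n}{2}\right\rfloor\left\lfloor\frac{n-1}{2}\right\rfloor\left\lfloor\frac{n-2}{2}\right\rfloor\left\lfloor\frac{n-3}{2}\right\rfloor.$$

A few lines of Python reproduce the first few values of the OEIS sequence referred to above; the function names are mine:

```python
def zarankiewicz(m, n):
    # Conjectured crossing number of the complete bipartite graph K_{m,n}.
    return (m // 2) * ((m - 1) // 2) * (n // 2) * ((n - 1) // 2)

def guy(n):
    # Conjectured crossing number of the complete graph K_n.
    return (n // 2) * ((n - 1) // 2) * ((n - 2) // 2) * ((n - 3) // 2) // 4

print([guy(n) for n in range(5, 13)])          # [1, 3, 9, 18, 36, 60, 100, 150]
print(zarankiewicz(3, 3), zarankiewicz(4, 4))  # 1 4
```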
In 2009, Pegg and Exoo conjectured that the smallest cubic graph with crossing number 13 is the Tutte–Coxeter graph and the smallest cubic graph with crossing number 170 is the Tutte 12-cage. Connections to the bisection width The 2/3-bisection width of a simple graph is the minimum number of edges whose removal results in a partition of the vertex set into two separated sets so that no set has more than vertices. Computing is NP-hard. Leighton proved that , provided that has bounded vertex degrees. This fundamental inequality can be used to derive an asymptotic lower bound for , when , or an estimate of it is known. In addition, this inequality has algorithmic application. Specifically, Bhat and Leighton used it (for the first time) for deriving an upper bound on the number of edge crossings in a drawing which is obtained by a divide and conquer approximation algorithm for computing . Complexity and approximation In general, determining the crossing number of a graph is hard; Garey and Johnson showed in 1983 that it is an NP-hard problem. In fact the problem remains NP-hard even when restricted to cubic graphs and to near-planar graphs (graphs that become planar after removal of a single edge). A closely related problem, determining the rectilinear crossing number, is complete for the existential theory of the reals. On the positive side, there are efficient algorithms for determining whether the crossing number is less than a fixed constant . In other words, the problem is fixed-parameter tractable. It remains difficult for larger , such as . There are also efficient approximation algorithms for approximating on graphs of bounded degree which use the general and previously developed framework of Bhat and Leighton. In practice heuristic algorithms are used, such as the simple algorithm which starts with no edges and continually adds each new edge in a way that produces the fewest additional crossings possible. These algorithms are used in the Rectilinear Crossing Number distributed computing project. The crossing number inequality For an undirected simple graph with vertices and edges such that the crossing number is always at least This relation between edges, vertices, and the crossing number was discovered independently by Ajtai, Chvátal, Newborn, and Szemerédi, and by Leighton . It is known as the crossing number inequality or crossing lemma. The constant is the best known to date, and is due to Ackerman. The constant can be lowered to , but at the expense of replacing with the worse constant of . The motivation of Leighton in studying crossing numbers was for applications to VLSI design in theoretical computer science. Later, Székely also realized that this inequality yielded very simple proofs of some important theorems in incidence geometry, such as Beck's theorem and the Szemerédi-Trotter theorem, and Tamal Dey used it to prove upper bounds on geometric k-sets. Variations If edges are required to be drawn as straight line segments, rather than arbitrary curves, then some graphs need more crossings. The rectilinear crossing number is defined to be the minimum number of crossings of a drawing of this type. It is always at least as large as the crossing number, and is larger for some graphs. It is known that, in general, the rectilinear crossing number can not be bounded by a function of the crossing number. The rectilinear crossing numbers for through are , () and values up to are known, with requiring either 7233 or 7234 crossings. 
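The inequality itself was dropped from the crossing-number-inequality passage above; in its commonly quoted form (these are the constants the passage discusses), a simple graph with $n$ vertices and $e \ge 7n$ edges satisfies

$$\operatorname{cr}(G) \;\ge\; \frac{e^{3}}{29\,n^{2}},$$

while relaxing the hypothesis to $e \ge 4n$ yields the weaker bound $\operatorname{cr}(G) \ge e^{3}/(64\,n^{2})$.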
Further values are collected by the Rectilinear Crossing Number project. A graph has local crossing number if it can be drawn with at most crossings per edge, but not fewer. The graphs that can be drawn with at most crossings per edge are also called -planar. Other variants of the crossing number include the pairwise crossing number (the minimum number of pairs of edges that cross in any drawing) and the odd crossing number (the number of pairs of edges that cross an odd number of times in any drawing). The odd crossing number is at most equal to the pairwise crossing number, which is at most equal to the crossing number. However, by the Hanani–Tutte theorem, whenever one of these numbers is zero, they all are. surveys many such variants. See also Planarization, a planar graph formed by replacing each crossing by a new vertex Three utilities problem, the puzzle that asks whether can be drawn with 0 crossings References Topological graph theory Graph invariants Graph drawing Geometric intersection NP-complete problems
Crossing number (graph theory)
[ "Mathematics" ]
2,332
[ "Graph theory", "Computational problems", "Graph invariants", "Topology", "Mathematical relations", "Mathematical problems", "Topological graph theory", "NP-complete problems" ]
18,298,785
https://en.wikipedia.org/wiki/Plancherel%20theorem%20for%20spherical%20functions
In mathematics, the Plancherel theorem for spherical functions is an important result in the representation theory of semisimple Lie groups, due in its final form to Harish-Chandra. It is a natural generalisation in non-commutative harmonic analysis of the Plancherel formula and Fourier inversion formula in the representation theory of the group of real numbers in classical harmonic analysis and has a similarly close interconnection with the theory of differential equations. It is the special case for zonal spherical functions of the general Plancherel theorem for semisimple Lie groups, also proved by Harish-Chandra. The Plancherel theorem gives the eigenfunction expansion of radial functions for the Laplacian operator on the associated symmetric space X; it also gives the direct integral decomposition into irreducible representations of the regular representation on . In the case of hyperbolic space, these expansions were known from prior results of Mehler, Weyl and Fock. The main reference for almost all this material is the encyclopedic text of . History The first versions of an abstract Plancherel formula for the Fourier transform on a unimodular locally compact group G were due to Segal and Mautner. At around the same time, Harish-Chandra and Gelfand & Naimark derived an explicit formula for SL(2,R) and complex semisimple Lie groups, so in particular the Lorentz groups. A simpler abstract formula was derived by Mautner for a "topological" symmetric space G/K corresponding to a maximal compact subgroup K. Godement gave a more concrete and satisfactory form for positive definite spherical functions, a class of special functions on G/K. Since when G is a semisimple Lie group these spherical functions φλ were naturally labelled by a parameter λ in the quotient of a Euclidean space by the action of a finite reflection group, it became a central problem to determine explicitly the Plancherel measure in terms of this parametrization. Generalizing the ideas of Hermann Weyl from the spectral theory of ordinary differential equations, Harish-Chandra introduced his celebrated c-function c(λ) to describe the asymptotic behaviour of the spherical functions φλ and proposed c(λ)−2 dλ as the Plancherel measure. He verified this formula for the special cases when G is complex or real rank one, thus in particular covering the case when G/K is a hyperbolic space. The general case was reduced to two conjectures about the properties of the c-function and the so-called spherical Fourier transform. Explicit formulas for the c-function were later obtained for a large class of classical semisimple Lie groups by Bhanu-Murthy. In turn these formulas prompted Gindikin and Karpelevich to derive a product formula for the c-function, reducing the computation to Harish-Chandra's formula for the rank 1 case. Their work finally enabled Harish-Chandra to complete his proof of the Plancherel theorem for spherical functions in 1966. In many special cases, for example for complex semisimple group or the Lorentz groups, there are simple methods to develop the theory directly. Certain subgroups of these groups can be treated by techniques generalising the well-known "method of descent" due to Jacques Hadamard. In particular gave a general method for deducing properties of the spherical transform for a real semisimple group from that of its complexification. One of the principal applications and motivations for the spherical transform was Selberg's trace formula. 
The classical Poisson summation formula combines the Fourier inversion formula on a vector group with summation over a cocompact lattice. In Selberg's analogue of this formula, the vector group is replaced by G/K, the Fourier transform by the spherical transform and the lattice by a cocompact (or cofinite) discrete subgroup. The original paper of implicitly invokes the spherical transform; it was who brought the transform to the fore, giving in particular an elementary treatment for SL(2,R) along the lines sketched by Selberg. Spherical functions Let G be a semisimple Lie group and K a maximal compact subgroup of G. The Hecke algebra Cc(K \G/K), consisting of compactly supported K-biinvariant continuous functions on G, acts by convolution on the Hilbert space H=L2(G / K). Because G / K is a symmetric space, this *-algebra is commutative. The closure of its (the Hecke algebra's) image in the operator norm is a non-unital commutative C* algebra , so by the Gelfand isomorphism can be identified with the continuous functions vanishing at infinity on its spectrum X. Points in the spectrum are given by continuous *-homomorphisms of into C, i.e. characters of . If S denotes the commutant of a set of operators S on H, then can be identified with the commutant of the regular representation of G on H. Now leaves invariant the subspace H0 of K-invariant vectors in H. Moreover, the abelian von Neumann algebra it generates on H0 is maximal Abelian. By spectral theory, there is an essentially unique measure μ on the locally compact space X and a unitary transformation U between H0 and L2(X, μ) which carries the operators in onto the corresponding multiplication operators. The transformation U is called the spherical Fourier transform or sometimes just the spherical transform and μ is called the Plancherel measure. The Hilbert space H0 can be identified with L2(K\G/K), the space of K-biinvariant square integrable functions on G. The characters χλ of (i.e. the points of X) can be described by positive definite spherical functions φλ on G, via the formula for f in Cc(K\G/K), where π(f) denotes the convolution operator in and the integral is with respect to Haar measure on G. The spherical functions φλ on G are given by Harish-Chandra's formula: {| border="1" cellspacing="0" cellpadding="5" | |} In this formula: the integral is with respect to Haar measure on K; λ is an element of A* =Hom(A,T) where A is the Abelian vector subgroup in the Iwasawa decomposition G =KAN of G; λ' is defined on G by first extending λ to a character of the solvable subgroup AN, using the group homomorphism onto A, and then setting for k in K and x in AN, where ΔAN is the modular function of AN. Two different characters λ1 and λ2 give the same spherical function if and only if λ1 = λ2·s, where s is in the Weyl group of A the quotient of the normaliser of A in K by its centraliser, a finite reflection group. It follows that X can be identified with the quotient space A*/W. Spherical principal series The spherical function φλ can be identified with the matrix coefficient of the spherical principal series of G. If M is the centraliser of A in K, this is defined as the unitary representation πλ of G induced by the character of B = MAN given by the composition of the homomorphism of MAN onto A and the character λ. 
The induced representation is defined on functions f on G with for b in B by where The functions f can be identified with functions in L2(K / M) and As proved, the representations of the spherical principal series are irreducible and two representations πλ and πμ are unitarily equivalent if and only if μ = σ(λ) for some σ in the Weyl group of A. Example: SL(2, C) The group G = SL(2,C) acts transitively on the quaternionic upper half space by Möbius transformations. The complex matrix acts as The stabiliser of the point j is the maximal compact subgroup K = SU(2), so that It carries the G-invariant Riemannian metric with associated volume element and Laplacian operator Every point in can be written as k(etj) with k in SU(2) and t determined up to a sign. The Laplacian has the following form on functions invariant under SU(2), regarded as functions of the real parameter t: The integral of an SU(2)-invariant function is given by Identifying the square integrable SU(2)-invariant functions with L2(R) by the unitary transformation Uf(t) = f(t) sinh t, Δ is transformed into the operator By the Plancherel theorem and Fourier inversion formula for R, any SU(2)-invariant function f can be expressed in terms of the spherical functions by the spherical transform and the spherical inversion formula Taking with fi in Cc(G / K) and , and evaluating at i yields the Plancherel formula For biinvariant functions this establishes the Plancherel theorem for spherical functions: the map is unitary and sends the convolution operator defined by into the multiplication operator defined by . The spherical function Φλ is an eigenfunction of the Laplacian: Schwartz functions on R are the spherical transforms of functions f belonging to the Harish-Chandra Schwartz space By the Paley-Wiener theorem, the spherical transforms of smooth SU(2)-invariant functions of compact support are precisely functions on R which are restrictions of holomorphic functions on C satisfying an exponential growth condition As a function on G, Φλ is the matrix coefficient of the spherical principal series defined on L2(C), where C is identified with the boundary of . The representation is given by the formula The function is fixed by SU(2) and The representations πλ are irreducible and unitarily equivalent only when the sign of λ is changed. The map W of onto (with measure λ2 dλ on the first factor) given by is unitary and gives the decomposition of as a direct integral of the spherical principal series. Example: SL(2, R) The group G = SL(2,R) acts transitively on the Poincaré upper half plane by Möbius transformations. The real matrix acts as The stabiliser of the point i is the maximal compact subgroup K = SO(2), so that = G / K. It carries the G-invariant Riemannian metric with associated area element and Laplacian operator Every point in can be written as k( et i ) with k in SO(2) and t determined up to a sign. 
The Laplacian has the following form on functions invariant under SO(2), regarded as functions of the real parameter t: The integral of an SO(2)-invariant function is given by There are several methods for deriving the corresponding eigenfunction expansion for this ordinary differential equation including: the classical spectral theory of ordinary differential equations applied to the hypergeometric equation (Mehler, Weyl, Fock); variants of Hadamard's method of descent, realising 2-dimensional hyperbolic space as the quotient of 3-dimensional hyperbolic space by the free action of a 1-parameter subgroup of SL(2,C); Abel's integral equation, following Selberg and Godement; orbital integrals (Harish-Chandra, Gelfand & Naimark). The second and third technique will be described below, with two different methods of descent: the classical one due Hadamard, familiar from treatments of the heat equation and the wave equation on hyperbolic space; and Flensted-Jensen's method on the hyperboloid. Hadamard's method of descent If f(x,r) is a function on and then where Δn is the Laplacian on . Since the action of SL(2,C) commutes with Δ3, the operator M0 on S0(2)-invariant functions obtained by averaging M1f by the action of SU(2) also satisfies The adjoint operator M1* defined by satisfies The adjoint M0*, defined by averaging M*f over SO(2), satisfies for SU(2)-invariant functions F and SO(2)-invariant functions f. It follows that The function is SO(2)-invariant and satisfies On the other hand, since the integral can be computed by integrating around the rectangular indented contour with vertices at ±R and ±R + πi. Thus the eigenfunction satisfies the normalisation condition φλ(i) = 1. There can only be one such solution either because the Wronskian of the ordinary differential equation must vanish or by expanding as a power series in sinh r. It follows that Similarly it follows that If the spherical transform of an SO(2)-invariant function on is defined by then Taking f=M1*F, the SL(2, C) inversion formula for F immediately yields the spherical inversion formula for SO(2)-invariant functions on . As for SL(2,C), this immediately implies the Plancherel formula for fi in Cc(SL(2,R) / SO(2)): The spherical function φλ is an eigenfunction of the Laplacian: Schwartz functions on R are the spherical transforms of functions f belonging to the Harish-Chandra Schwartz space The spherical transforms of smooth SO(2)-invariant functions of compact support are precisely functions on R which are restrictions of holomorphic functions on C satisfying an exponential growth condition Both these results can be deduced by descent from the corresponding results for SL(2,C), by verifying directly that the spherical transform satisfies the given growth conditions and then using the relation . As a function on G, φλ is the matrix coefficient of the spherical principal series defined on L2(R), where R is identified with the boundary of . The representation is given by the formula The function is fixed by SO(2) and The representations πλ are irreducible and unitarily equivalent only when the sign of λ is changed. The map with measure on the first factor, is given by the formula is unitary and gives the decomposition of as a direct integral of the spherical principal series. Flensted–Jensen's method of descent Hadamard's method of descent relied on functions invariant under the action of 1-parameter subgroup of translations in the y parameter in . 
Flensted–Jensen's method uses the centraliser of SO(2) in SL(2,C) which splits as a direct product of SO(2) and the 1-parameter subgroup K1 of matrices The symmetric space SL(2,C)/SU(2) can be identified with the space H3 of positive 2×2 matrices A with determinant 1 with the group action given by Thus So on the hyperboloid , gt only changes the coordinates y and a. Similarly the action of SO(2) acts by rotation on the coordinates (b,x) leaving a and y unchanged. The space H2 of real-valued positive matrices A with y = 0 can be identified with the orbit of the identity matrix under SL(2,R). Taking coordinates (b,x,y) in H3 and (b,x) on H2 the volume and area elements are given by where r2 equals b2 + x2 + y2 or b2 + x2, so that r is related to hyperbolic distance from the origin by . The Laplacian operators are given by the formula where and For an SU(2)-invariant function F on H3 and an SO(2)-invariant function on H2, regarded as functions of r or t, If f(b,x) is a function on H2, Ef is defined by Thus If f is SO(2)-invariant, then, regarding f as a function of r or t, On the other hand, Thus, setting Sf(t) = f(2t), leading to the fundamental descent relation of Flensted-Jensen for M0 = ES: The same relation holds with M0 by M, where Mf is obtained by averaging M0f over SU(2). The extension Ef is constant in the y variable and therefore invariant under the transformations gs. On the other hand, for F a suitable function on H3, the function QF defined by is independent of the y variable. A straightforward change of variables shows that Since K1 commutes with SO(2), QF is SO(2)--invariant if F is, in particular if F is SU(2)-invariant. In this case QF is a function of r or t, so that M*F can be defined by The integral formula above then yields and hence, since for f SO(2)-invariant, the following adjoint formula: As a consequence Thus, as in the case of Hadamard's method of descent. with and It follows that Taking f=M*F, the SL(2,C) inversion formula for F then immediately yields Abel's integral equation The spherical function φλ is given by so that Thus so that defining F by the spherical transform can be written The relation between F and f is classically inverted by the Abel integral equation: In fact The relation between F and is inverted by the Fourier inversion formula: Hence This gives the spherical inversion for the point i. Now for fixed g in SL(2,R) define another rotation invariant function on with f1(i)=f(g(i)). On the other hand, for biinvariant functions f, so that where w = g(i). Combining this with the above inversion formula for f1 yields the general spherical inversion formula: Other special cases All complex semisimple Lie groups or the Lorentz groups SO0(N,1) with N odd can be treated directly by reduction to the usual Fourier transform. The remaining real Lorentz groups can be deduced by Flensted-Jensen's method of descent, as can other semisimple Lie groups of real rank one. Flensted-Jensen's method of descent also applies to the treatment of real semisimple Lie groups for which the Lie algebras are normal real forms of complex semisimple Lie algebras. The special case of SL(N,R) is treated in detail in ; this group is also the normal real form of SL(N,C). The approach of applies to a wide class of real semisimple Lie groups of arbitrary real rank and yields the explicit product form of the Plancherel measure on * without using Harish-Chandra's expansion of the spherical functions φλ in terms of his c-function, discussed below. 
Although less general, it gives a simpler approach to the Plancherel theorem for this class of groups. Complex semisimple Lie groups If G is a complex semisimple Lie group, it is the complexification of its maximal compact subgroup U, a compact semisimple Lie group. If and are their Lie algebras, then Let T be a maximal torus in U with Lie algebra Then setting there is the Cartan decomposition: The finite-dimensional irreducible representations πλ of U are indexed by certain λ in . The corresponding character formula and dimension formula of Hermann Weyl give explicit formulas for These formulas, initially defined on and , extend holomorphic to their complexifications. Moreover, where W is the Weyl group and δ(eX) is given by a product formula (Weyl's denominator formula) which extends holomorphically to the complexification of . There is a similar product formula for d(λ), a polynomial in λ. On the complex group G, the integral of a U-biinvariant function F can be evaluated as where . The spherical functions of G are labelled by λ in and given by the Harish-Chandra-Berezin formula They are the matrix coefficients of the irreducible spherical principal series of G induced from the character of the Borel subgroup of G corresponding to λ; these representations are irreducible and can all be realized on L2(U/T). The spherical transform of a U-biinvariant function F is given by and the spherical inversion formula by where is a Weyl chamber. In fact the result follows from the Fourier inversion formula on since so that is just the Fourier transform of . Note that the symmetric space G/U has as compact dual the compact symmetric space U x U / U, where U is the diagonal subgroup. The spherical functions for the latter space, which can be identified with U itself, are the normalized characters χλ/d(λ) indexed by lattice points in the interior of and the role of A is played by T. The spherical transform of f of a class function on U is given by and the spherical inversion formula now follows from the theory of Fourier series on T: There is an evident duality between these formulas and those for the non-compact dual. Real semisimple Lie groups Let G0 be a normal real form of the complex semisimple Lie group G, the fixed points of an involution σ, conjugate linear on the Lie algebra of G. Let τ be a Cartan involution of G0 extended to an involution of G, complex linear on its Lie algebra, chosen to commute with σ. The fixed point subgroup of τσ is a compact real form U of G, intersecting G0 in a maximal compact subgroup K0. The fixed point subgroup of τ is K, the complexification of K0. Let G0= K0·P0 be the corresponding Cartan decomposition of G0 and let A be a maximal Abelian subgroup of P0. proved that where A+ is the image of the closure of a Weyl chamber in under the exponential map. Moreover, Since it follows that there is a canonical identification between K \ G / U, K0 \ G0 /K0 and A+. Thus K0-biinvariant functions on G0 can be identified with functions on A+ as can functions on G that are left invariant under K and right invariant under U. Let f be a function in and define Mf in by Here a third Cartan decomposition of G = UAU has been used to identify U \ G / U with A+. Let Δ be the Laplacian on G0/K0 and let Δc be the Laplacian on G/U. Then For F in , define M*F in by Then M and M* satisfy the duality relations In particular There is a similar compatibility for other operators in the center of the universal enveloping algebra of G0. 
It follows from the eigenfunction characterisation of spherical functions that is proportional to φλ on G0, the constant of proportionality being given by Moreover, in this case If f = M*F, then the spherical inversion formula for F on G implies that for f on G0: since The direct calculation of the integral for b(λ), generalising the computation of for SL(2,R), was left as an open problem by . An explicit product formula for b(λ) was known from the prior determination of the Plancherel measure by , giving where α ranges over the positive roots of the root system in and C is a normalising constant, given as a quotient of products of Gamma functions. Harish-Chandra's Plancherel theorem Let G be a noncompact connected real semisimple Lie group with finite center. Let denote its Lie algebra. Let K be a maximal compact subgroup given as the subgroup of fixed points of a Cartan involution σ. Let be the ±1 eigenspaces of σ in , so that is the Lie algebra of K and give the Cartan decomposition Let be a maximal Abelian subalgebra of and for α in let If α ≠ 0 and , then α is called a restricted root and is called its multiplicity. Let A = exp , so that G = KAK.The restriction of the Killing form defines an inner product on and hence , which allows to be identified with . With respect to this inner product, the restricted roots Σ give a root system. Its Weyl group can be identified with . A choice of positive roots defines a Weyl chamber . The reduced root system Σ0 consists of roots α such that α/2 is not a root. Defining the spherical functions φ λ as above for λ in , the spherical transform of f in Cc∞(K \ G / K) is defined by The spherical inversion formula states that where Harish-Chandra's c-function c(λ) is defined by with and the constant c0 chosen so that c(−iρ) = 1 where The Plancherel theorem for spherical functions states that the map is unitary and transforms convolution by into multiplication by . Harish-Chandra's spherical function expansion Since G = KAK, functions on G/K that are invariant under K can be identified with functions on A, and hence , that are invariant under the Weyl group W. In particular since the Laplacian Δ on G/K commutes with the action of G, it defines a second order differential operator L on , invariant under W, called the radial part of the Laplacian. In general if X is in , it defines a first order differential operator (or vector field) by L can be expressed in terms of these operators by the formula where Aα in is defined by and is the Laplacian on , corresponding to any choice of orthonormal basis (Xi). Thus where so that L can be regarded as a perturbation of the constant-coefficient operator L0. Now the spherical function φλ is an eigenfunction of the Laplacian: and therefore of L, when viewed as a W-invariant function on . Since eiλ–ρ and its transforms under W are eigenfunctions of L0 with the same eigenvalue, it is natural look for a formula for φλ in terms of a perturbation series with Λ the cone of all non-negative integer combinations of positive roots, and the transforms of fλ under W. The expansion leads to a recursive formula for the coefficients aμ(λ). In particular they are uniquely determined and the series and its derivatives converges absolutely on , a fundamental domain for W. Remarkably it turns out that fλ is also an eigenfunction of the other G-invariant differential operators on G/K, each of which induces a W-invariant differential operator on . 
It follows that φλ can be expressed in terms as a linear combination of fλ and its transforms under W: Here c(λ) is Harish-Chandra's c-function. It describes the asymptotic behaviour of φλ in , since for X in and t > 0 large. Harish-Chandra obtained a second integral formula for φλ and hence c(λ) using the Bruhat decomposition of G: where B = MAN and the union is disjoint. Taking the Coxeter element s0 of W, the unique element mapping onto , it follows that σ(N) has a dense open orbit G/B = K/M whose complement is a union of cells of strictly smaller dimension and therefore has measure zero. It follows that the integral formula for φλ initially defined over K/M can be transferred to σ(N): for X in . Since for X in , the asymptotic behaviour of φλ can be read off from this integral, leading to the formula: Harish-Chandra's c-function The many roles of Harish-Chandra's c-function in non-commutative harmonic analysis are surveyed in . Although it was originally introduced by Harish-Chandra in the asymptotic expansions of spherical functions, discussed above, it was also soon understood to be intimately related to intertwining operators between induced representations, first studied in this context by . These operators exhibit the unitary equivalence between πλ and πsλ for s in the Weyl group and a c-function cs(λ) can be attached to each such operator: namely the value at 1 of the intertwining operator applied to ξ0, the constant function 1, in L2(K/M). Equivalently, since ξ0 is up to scalar multiplication the unique vector fixed by K, it is an eigenvector of the intertwining operator with eigenvalue cs(λ). These operators all act on the same space L2(K/M), which can be identified with the representation induced from the 1-dimensional representation defined by λ on MAN. Once A has been chosen, the compact subgroup M is uniquely determined as the centraliser of A in K. The nilpotent subgroup N, however, depends on a choice of a Weyl chamber in , the various choices being permuted by the Weyl group W = M ' / M, where M ' is the normaliser of A in K. The standard intertwining operator corresponding to (s, λ) is defined on the induced representation by where σ is the Cartan involution. It satisfies the intertwining relation The key property of the intertwining operators and their integrals is the multiplicative cocycle property whenever for the length function on the Weyl group associated with the choice of Weyl chamber. For s in W, this is the number of chambers crossed by the straight line segment between X and sX for any point X in the interior of the chamber. The unique element of greatest length s0, namely the number of positive restricted roots, is the unique element that carries the Weyl chamber onto . By Harish-Chandra's integral formula, it corresponds to Harish-Chandra's c-function: The c-functions are in general defined by the equation where ξ0 is the constant function 1 in L2(K/M). The cocycle property of the intertwining operators implies a similar multiplicative property for the c-functions: provided This reduces the computation of cs to the case when s = sα, the reflection in a (simple) root α, the so-called "rank-one reduction" of . In fact the integral involves only the closed connected subgroup Gα corresponding to the Lie subalgebra generated by where α lies in Σ0+. Then Gα is a real semisimple Lie group with real rank one, i.e. dim Aα = 1, and cs is just the Harish-Chandra c-function of Gα. 
In this case the c-function can be computed directly by various means: by noting that φλ can be expressed in terms of the hypergeometric function for which the asymptotic expansion is known from the classical formulas of Gauss for the connection coefficients; by directly computing the integral, which can be expressed as an integral in two variables and hence a product of two beta functions. This yields the following formula: where The general Gindikin–Karpelevich formula for c(λ) is an immediate consequence of this formula and the multiplicative properties of cs(λ). Paley–Wiener theorem The Paley-Wiener theorem generalizes the classical Paley-Wiener theorem by characterizing the spherical transforms of smooth K-bivariant functions of compact support on G. It is a necessary and sufficient condition that the spherical transform be W-invariant and that there is an R > 0 such that for each N there is an estimate In this case f is supported in the closed ball of radius R about the origin in G/K. This was proved by Helgason and Gangolli ( pg. 37). The theorem was later proved by independently of the spherical inversion theorem, using a modification of his method of reduction to the complex case. Rosenberg's proof of inversion formula noticed that the Paley-Wiener theorem and the spherical inversion theorem could be proved simultaneously, by a trick which considerably simplified previous proofs. The first step of his proof consists in showing directly that the inverse transform, defined using Harish-Chandra's c-function, defines a function supported in the closed ball of radius R about the origin if the Paley-Wiener estimate is satisfied. This follows because the integrand defining the inverse transform extends to a meromorphic function on the complexification of ; the integral can be shifted to for μ in and t > 0. Using Harish-Chandra's expansion of φλ and the formulas for c'''(λ) in terms of Gamma functions, the integral can be bounded for t large and hence can be shown to vanish outside the closed ball of radius R about the origin. This part of the Paley-Wiener theorem shows that defines a distribution on G/K with support at the origin o. A further estimate for the integral shows that it is in fact given by a measure and that therefore there is a constant C such that By applying this result to it follows that A further scaling argument allows the inequality C = 1 to be deduced from the Plancherel theorem and Paley-Wiener theorem on . Schwartz functions The Harish-Chandra Schwartz space can be defined as Under the spherical transform it is mapped onto the space of W-invariant Schwartz functions on The original proof of Harish-Chandra was a long argument by induction. found a short and simple proof, allowing the result to be deduced directly from versions of the Paley-Wiener and spherical inversion formula. He proved that the spherical transform of a Harish-Chandra Schwartz function is a classical Schwartz function. His key observation was then to show that the inverse transform was continuous on the Paley-Wiener space endowed with classical Schwartz space seminorms, using classical estimates. Notes References , Appendix to Chapter VI, The Plancherel Formula for Complex Semisimple Lie Groups''. , section 21. (a general introduction for physicists) . Representation theory of Lie groups Theorems in harmonic analysis Theorems in functional analysis
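As a compact summary of the statements discussed in this article, and in a standard normalization that may differ from the source by constants depending on the choice of Haar measures, the spherical transform of a K-biinvariant function f, its inversion, and the Plancherel formula can be written in LaTeX form as

\tilde f(\lambda) = \int_G f(g)\,\varphi_{-\lambda}(g)\,dg,
\qquad
f(g) = c_G \int_{\mathfrak a^{*}_{+}} \tilde f(\lambda)\,\varphi_{\lambda}(g)\,|c(\lambda)|^{-2}\,d\lambda,
\qquad
\int_G |f(g)|^{2}\,dg = c_G \int_{\mathfrak a^{*}_{+}} |\tilde f(\lambda)|^{2}\,|c(\lambda)|^{-2}\,d\lambda,

where c(\lambda) is Harish-Chandra's c-function and c_G is a normalizing constant; the notation is the standard one and should be read as a schematic restatement rather than a verbatim quotation of the formulas above.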
Plancherel theorem for spherical functions
[ "Mathematics" ]
7,114
[ "Theorems in mathematical analysis", "Theorems in functional analysis", "Theorems in harmonic analysis" ]
18,302,417
https://en.wikipedia.org/wiki/Microwave%20Imaging%20Radiometer%20with%20Aperture%20Synthesis
Microwave Imaging Radiometer with Aperture Synthesis (MIRAS) is the major instrument on the Soil Moisture and Ocean Salinity satellite (SMOS). MIRAS employs a planar antenna composed of a central body (the so-called hub) and three telescoping, deployable arms, carrying a total of 69 receivers. Each receiver is composed of one Lightweight Cost-Effective Front-end (LICEF) module, which detects radiation in the microwave L-band in both horizontal and vertical polarizations. The apertures of the LICEF detectors, arranged in a plane on MIRAS, point directly toward the Earth's surface as the satellite orbits. This arrangement and orientation make the instrument a 2-D interferometric radiometer that generates brightness temperature images, from which the two geophysical variables of the mission, soil moisture and ocean salinity, are computed. The salinity measurement places demanding requirements on the instrument in terms of calibration and stability. The prime contractor for the MIRAS instrument was EADS CASA Espacio, which manufactured the SMOS payload under contract to ESA. LICEF The LICEF detector is composed of a round patch antenna element, with two pairs of probes for orthogonal linear polarisations, feeding two receiver channels in a compact, lightweight package behind the antenna. It picks up thermal radiation emitted by the Earth near 1.4 GHz in the microwave L-band, amplifies it by 100 dB, and digitises it with 1-bit quantisation. References Embedded systems Space imagers Earth observation satellite sensors
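The brightness-temperature imaging mentioned above rests on the general principle of interferometric aperture synthesis: each pair of receivers measures a visibility that is, to first approximation, one Fourier component of the brightness-temperature map. The sketch below is a deliberately simplified illustration of that relation only; the toy scene, grid and baselines are assumptions for the example, and it ignores antenna patterns, obliquity and fringe-washing effects, so it is not a description of the actual SMOS processing chain.

import numpy as np

# Toy brightness-temperature map T_B(xi, eta) on direction cosines (xi, eta)
xi = np.linspace(-0.5, 0.5, 128)
eta = np.linspace(-0.5, 0.5, 128)
XI, ETA = np.meshgrid(xi, eta)
T_B = 200.0 + 50.0 * np.exp(-((XI - 0.1) ** 2 + ETA ** 2) / 0.01)   # kelvin, made-up scene

def visibility(u, v):
    # Idealized visibility sample for a baseline (u, v) in wavelengths:
    # V(u, v) ~ integral of T_B(xi, eta) * exp(-2j*pi*(u*xi + v*eta)) dxi deta
    phase = np.exp(-2j * np.pi * (u * XI + v * ETA))
    d_xi = xi[1] - xi[0]
    d_eta = eta[1] - eta[0]
    return np.sum(T_B * phase) * d_xi * d_eta

print(visibility(0.0, 0.0))    # zero baseline: proportional to the scene-averaged brightness
print(visibility(5.0, 2.0))    # one Fourier component, as sampled by one receiver pair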
Microwave Imaging Radiometer with Aperture Synthesis
[ "Technology", "Engineering" ]
314
[ "Embedded systems", "Computer science", "Computer engineering", "Computer systems" ]
18,303,445
https://en.wikipedia.org/wiki/Kapitsa%E2%80%93Dirac%20effect
The Kapitza–Dirac effect is a quantum mechanical effect consisting of the diffraction of matter by a standing wave of light, in complete analogy to the diffraction of light by a periodic grating, but with the roles of matter and light reversed. The effect was first predicted, as the diffraction of electrons from a standing wave of light, by Paul Dirac and Pyotr Kapitsa (or Peter Kapitza) in 1933. The effect relies on the wave–particle duality of matter as stated by the de Broglie hypothesis in 1924. Matter-wave diffraction by a standing wave of light was first observed using a beam of neutral atoms. Later, the Kapitza–Dirac effect as originally proposed was observed in 2001. Overview In 1924, French physicist Louis de Broglie postulated that matter exhibits a wave-like nature given by λ = h/p, where h is the Planck constant, p is the particle's momentum, and λ is the wavelength of the matter wave. From this, it follows that interference effects between particles of matter will occur. This forms the basis of the Kapitza–Dirac effect: the diffraction of a matter wave by a standing wave of light. A coherent beam of light will diffract into several peaks once it passes through a periodic diffraction grating. Due to wave–particle duality, matter can be diffracted by a periodic grating as well. Such a diffraction grating can be made out of physical matter, but it can also be created by a standing wave of light formed by a pair of counterpropagating light beams, through light–matter interaction. Here, the standing wave of light forms the spatially periodic grating that diffracts the matter wave, as we will now explain. The original idea proposes that a beam of electrons can be diffracted by a standing wave formed by a superposition of two counterpropagating beams of light. The diffraction is caused by light–matter interaction: each electron absorbs a photon from one of the beams and re-emits a photon into the other beam, traveling in the opposite direction. This describes stimulated Compton scattering of photons by the electrons, since the re-emission is stimulated by the presence of the second beam of light. Because of the nature of stimulated Compton scattering, the re-emitted photon must have the same frequency as, and the opposite direction to, the absorbed one. Consequently, the momentum transferred to the electron must have a magnitude of 2ħk, where k is the wavevector of the light forming the standing wave pattern. Although the original proposal focused on electrons, the above analysis can be generalized to other types of matter waves that interact with the light. Cold neutral atoms, for example, can also experience the Kapitza–Dirac effect; indeed, one of the first observations of the Kapitza–Dirac effect used a beam of cold sodium atoms. Today, the Kapitza–Dirac effect is a standard tool for calibrating the depth of optical lattices, which are formed by standing waves of light. Different regimes of diffraction Diffraction from a periodic grating, whether of light or of matter waves, can be roughly divided into two regimes: the Bragg regime and the Raman-Nath regime. In the Bragg regime, essentially only one diffraction peak is produced. In the Raman-Nath regime, multiple diffraction peaks can be observed. It is helpful to go back to the familiar example of light diffraction from a material grating: in this case, the Bragg regime is reached with a thick grating, whereas the Raman-Nath regime is obtained with a thin grating.
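As a numerical illustration of the scales involved, the short Python sketch below evaluates the de Broglie wavelength of an electron, the two-photon recoil momentum 2ħk it picks up from a standing wave, and the recoil frequency that controls which of the two regimes applies (quantified in the next section). The beam energy and laser wavelength are illustrative assumptions, not values taken from this article.

import math

h = 6.62607015e-34        # Planck constant, J*s
hbar = h / (2 * math.pi)
m_e = 9.1093837015e-31    # electron mass, kg
e = 1.602176634e-19       # elementary charge, C

# Assumed example parameters: a 380 eV electron beam and a 532 nm standing wave
E_kin = 380 * e                        # kinetic energy in joules
p = math.sqrt(2 * m_e * E_kin)         # non-relativistic momentum
lambda_dB = h / p                      # de Broglie wavelength, lambda = h/p

lambda_light = 532e-9
k = 2 * math.pi / lambda_light         # optical wavevector
delta_p = 2 * hbar * k                 # two-photon recoil momentum per scattering event
omega_rec = hbar * k**2 / (2 * m_e)    # recoil frequency; Raman-Nath regime needs interaction time << 1/omega_rec

print(lambda_dB, delta_p, omega_rec)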
The same language can be applied to Kapitza-Dirac effect. Here, the concept of "thickness" of the grating can be transferred to the amount of time the matter wave spent in the light field. Here we give an example in the Raman-Nath regime, where the matter spends an amount of time in the standing wave that is short compared to the so-called recoil frequency of the particle. This approximation holds if the interaction time is less than the inverse of the recoil frequency of the particle, where . A coherent beam of particles incident on a standing wave of electromagnetic radiation (typically light) will be diffracted according to the equation: where n is an integer, λ is the de Broglie wavelength of the incident particles, d is the spacing of the grating and θ is the angle of incidence. Diffraction pattern in the Raman-Nath regime Here we present an analysis of the diffraction pattern of the Kapitza-Dirac effect in the Raman-Nath regime For a matter wave interacting in a standing wave of light, the effect of the light-matter interaction can be parametrized by the potential energy where is the strength of the potential energy, and describes the pulse shape of applied standing wave. For example, for ultracold atoms trapped in an optical lattice, due to the AC Stark shift. As described previously, the Raman-Nath regime is reached when the duration is short. In this case, the kinetic energy can be ignored and the resulting Schrodinger equation is greatly simplified. For a given initial state , the time-evolution within the Raman-Nath regime is then given by where and the integral is over the duration of the interaction. Using the Jacobi–Anger expansion for Bessel functions of the first kind, , the above wavefunction becomes where in the second line has been taken to be . It can now be seen that momentum states are populated with a probability of where and the pulse area (duration and amplitude of the interaction) . The transverse RMS momentum of the diffracted particles is therefore linearly proportional to the pulse area: Realisation The invention of the laser in 1960 allowed the production of coherent light and therefore the ability to construct the standing waves of light that are required to observe the effect experimentally. Kapitsa–Dirac scattering of sodium atoms by a near resonant standing wave laser field was experimentally demonstrated in 1985 by the group of D. E. Pritchard at the Massachusetts Institute of Technology. A supersonic atomic beam with sub-recoil transverse momentum was passed through a near resonant standing wave and diffraction up to 10ħk was observed. The scattering of electrons by an intense optical standing wave was experimentally realised by the group of M. Bashkansky at AT&T Bell Laboratories, New Jersey, in 1988. The Kapitza-Dirac effect is routinely used in calibration of the depth of the optical lattices. References Diffraction Quantum mechanics Paul Dirac
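To make the Raman-Nath result above concrete, the following sketch evaluates the populations of the diffraction orders, which the analysis above gives as squared Bessel functions of the pulse area, and checks that they sum to one. The pulse area theta is an assumed input for the example rather than a value from the article, and the precise relation between theta and the potential amplitude follows the convention described above.

import numpy as np
from scipy.special import jv

def raman_nath_populations(theta, n_max=20):
    """Population of the n-th diffraction order (momentum 2*n*hbar*k) for pulse area theta,
    P_n = J_n(theta)**2, in the Raman-Nath (thin-grating) limit."""
    orders = np.arange(-n_max, n_max + 1)
    return orders, jv(orders, theta) ** 2

orders, populations = raman_nath_populations(theta=2.0)
print(populations.sum())        # ~1.0, by the Bessel identity sum_n J_n(x)^2 = 1
rms_order = np.sqrt(np.sum(populations * orders.astype(float) ** 2))
print(rms_order)                # the RMS order, and hence the RMS transverse momentum, grows linearly with the pulse area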
Kapitsa–Dirac effect
[ "Physics", "Chemistry", "Materials_science" ]
1,397
[ "Spectrum (physical sciences)", "Theoretical physics", "Quantum mechanics", "Diffraction", "Crystallography", "Spectroscopy", "Quantum physics stubs" ]
18,304,412
https://en.wikipedia.org/wiki/Fiber-optic%20sensor
A fiber-optic sensor is a sensor that uses optical fiber either as the sensing element ("intrinsic sensors"), or as a means of relaying signals from a remote sensor to the electronics that process the signals ("extrinsic sensors"). Fibers have many uses in remote sensing. Depending on the application, fiber may be used because of its small size, or because no electrical power is needed at the remote location, or because many sensors can be multiplexed along the length of a fiber by using light wavelength shift for each sensor, or by sensing the time delay as light passes along the fiber through each sensor. Time delay can be determined using a device such as an optical time-domain reflectometer and wavelength shift can be calculated using an instrument implementing optical frequency domain reflectometry. Fiber-optic sensors are also immune to electromagnetic interference, and do not conduct electricity so they can be used in places where there is high voltage electricity or flammable material such as jet fuel. Fiber-optic sensors can be designed to withstand high temperatures as well. Intrinsic sensors Optical fibers can be used as sensors to measure strain, temperature, pressure and other quantities by modifying a fiber so that the quantity to be measured modulates the intensity, phase, polarization, wavelength or transit time of light in the fiber. Sensors that vary the intensity of light are the simplest, since only a simple source and detector are required. A particularly useful feature of intrinsic fiber-optic sensors is that they can, if required, provide distributed sensing over very large distances. Temperature can be measured by using a fiber that has evanescent loss that varies with temperature, or by analyzing the Rayleigh Scattering, Raman scattering or the Brillouin scattering in the optical fiber. Electrical voltage can be sensed by nonlinear optical effects in specially-doped fiber, which alter the polarization of light as a function of voltage or electric field. Angle measurement sensors can be based on the Sagnac effect. Special fibers like long-period fiber grating (LPG) optical fibers can be used for direction recognition . Photonics Research Group of Aston University in UK has some publications on vectorial bend sensor applications. Optical fibers are used as hydrophones for seismic and sonar applications. Hydrophone systems with more than one hundred sensors per fiber cable have been developed. Hydrophone sensor systems are used by the oil industry as well as a few countries' navies. Both bottom-mounted hydrophone arrays and towed streamer systems are in use. The German company Sennheiser developed a laser microphone for use with optical fibers. A fiber-optic microphone and fiber-optic based headphone are useful in areas with strong electrical or magnetic fields, such as communication amongst the team of people working on a patient inside a magnetic resonance imaging (MRI) machine during MRI-guided surgery. Optical fiber sensors for temperature and pressure have been developed for downhole measurement in oil wells. The fiber-optic sensor is well suited for this environment as it functions at temperatures too high for semiconductor sensors (distributed temperature sensing). Optical fibers can be made into interferometric sensors such as fiber-optic gyroscopes, which are used in the Boeing 767 and in some car models (for navigation purposes). They are also used to make hydrogen sensors. 
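Wavelength-multiplexed sensing of the kind mentioned above is most commonly done with fiber Bragg gratings, discussed next. As a rough illustration of the underlying relation, the sketch below computes a Bragg wavelength and its shift under strain and temperature; the effective index, grating period and sensitivity coefficients are typical textbook values assumed for the example, not figures taken from this article.

def bragg_wavelength(n_eff, period_m):
    # Bragg condition: lambda_B = 2 * n_eff * Lambda
    return 2.0 * n_eff * period_m

def bragg_shift(lambda_b, strain, delta_T,
                p_e=0.22, alpha=0.55e-6, xi=8.6e-6):
    # First-order model: dLambda/Lambda = (1 - p_e)*strain + (alpha + xi)*dT
    # p_e: effective photo-elastic coefficient, alpha: thermal expansion,
    # xi: thermo-optic coefficient (all assumed, typical silica-fiber values)
    return lambda_b * ((1.0 - p_e) * strain + (alpha + xi) * delta_T)

lam = bragg_wavelength(n_eff=1.468, period_m=528e-9)     # roughly 1550 nm
print(lam)
print(bragg_shift(lam, strain=100e-6, delta_T=0.0))      # ~0.12 nm for 100 microstrain
print(bragg_shift(lam, strain=0.0, delta_T=10.0))        # ~0.14 nm for a 10 K change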
Fiber-optic sensors have been developed to measure co-located temperature and strain simultaneously with very high accuracy using fiber Bragg gratings. This is particularly useful when acquiring information from small or complex structures. Fiber optic sensors are also particularly well suited for remote monitoring, and they can be interrogated 290 km away from the monitoring station using an optical fiber cable. Brillouin scattering effects can also be used to detect strain and temperature over large distances (20–120 kilometers). Other examples A fiber-optic AC/DC voltage sensor in the middle and high voltage range (100–2000 V) can be created by inducing measurable amounts of Kerr nonlinearity in single-mode optical fiber by exposing a calculated length of fiber to the external electric field. The measurement technique is based on polarimetric detection and high accuracy is achieved in a hostile industrial environment. High frequency (5 MHz–1 GHz) electromagnetic fields can be detected by induced nonlinear effects in fiber with a suitable structure. The fiber used is designed such that the Faraday and Kerr effects cause considerable phase change in the presence of the external field. With appropriate sensor design, this type of fiber can be used to measure different electrical and magnetic quantities and different internal parameters of fiber material. Electrical power can be measured in a fiber by using a structured bulk fiber ampere sensor coupled with proper signal processing in a polarimetric detection scheme. Experiments have been carried out in support of the technique. Fiber-optic sensors are used in electrical switchgear to transmit light from an electrical arc flash to a digital protective relay to enable fast tripping of a breaker to reduce the energy in the arc blast. Fiber Bragg grating based fiber-optic sensors significantly enhance performance, efficiency and safety in several industries. With FBG integrated technology, sensors can provide detailed analysis and comprehensive reports on insights with very high resolution. These type of sensors are used extensively in several industries like telecommunication, automotive, aerospace, energy, etc. Fiber Bragg gratings are sensitive to the static pressure, mechanical tension and compression and fiber temperature changes. The efficiency of fiber Bragg grating based fiber-optic sensors can be provided by means of central wavelength adjustment of light emitting source in accordance with the current Bragg gratings reflection spectra. Extrinsic sensors Extrinsic fiber-optic sensors use an optical fiber cable, normally a multimode one, to transmit modulated light from either a non-fiber optical sensor, or an electronic sensor connected to an optical transmitter. A major benefit of extrinsic sensors is their ability to reach places which are otherwise inaccessible. An example is the measurement of temperature inside aircraft jet engines by using a fiber to transmit radiation into a radiation pyrometer located outside the engine. Extrinsic sensors can also be used in the same way to measure the internal temperature of electrical transformers, where the extreme electromagnetic fields present make other measurement techniques impossible. Extrinsic fiber-optic sensors provide excellent protection of measurement signals against noise corruption. Unfortunately, many conventional sensors produce electrical output which must be converted into an optical signal for use with fiber. 
For example, in the case of a platinum resistance thermometer, the temperature changes are translated into resistance changes. The PRT must therefore have an electrical power supply. The modulated voltage level at the output of the PRT can then be injected into the optical fiber via the usual type of transmitter. This complicates the measurement process and means that low-voltage power cables must be routed to the transducer. Extrinsic sensors are used to measure vibration, rotation, displacement, velocity, acceleration, torque, and temperature. Chemical sensors and biosensors It is well known that the propagation of light in an optical fiber is confined to the core of the fiber by the total internal reflection (TIR) principle, with near-zero propagation loss within the cladding. This confinement is very important for optical communication, but it limits sensing applications because the guided light does not interact with the surroundings. It is therefore essential to exploit novel fiber-optic structures that disturb the light propagation, thereby enabling the light to interact with its surroundings and allowing fiber-optic sensors to be constructed. To date, several methods, including polishing, chemical etching, tapering, bending and femtosecond grating inscription, have been proposed to tailor the light propagation and promote the interaction of light with sensing materials. In such fiber-optic structures, enhanced evanescent fields can be efficiently excited, exposing the light to, and allowing it to interact with, the surrounding medium. However, the fibers themselves can sense only a few kinds of analytes, with low sensitivity and essentially no selectivity, which greatly limits their development and applications, especially for biosensors that require both high sensitivity and high selectivity. To overcome this issue, an efficient approach is to use responsive materials, which change their properties, such as refractive index (RI), absorption or conductivity, when the surrounding environment changes. Thanks to the rapid progress of functional materials in recent years, various sensing materials are available for the fabrication of fiber-optic chemical sensors and biosensors, including graphene, metals and metal oxides, carbon nanotubes, nanowires, nanoparticles, polymers and quantum dots. Generally, these materials reversibly change their shape or volume upon stimulation by the surrounding environment (the target analytes), which in turn changes the refractive index or absorption of the sensing material. Consequently, changes in the surroundings are recorded and interrogated by the optical fiber, realizing its sensing function. Various fiber-optic chemical sensors and biosensors have been proposed and demonstrated. See also Distributed acoustic sensing Fiber Optic Sensing Association References Sensors Fiber optics
Fiber-optic sensor
[ "Technology", "Engineering" ]
1,838
[ "Sensors", "Measuring instruments" ]
18,305,300
https://en.wikipedia.org/wiki/Circle%20criterion
In nonlinear control and stability theory, the circle criterion is a stability criterion for nonlinear time-varying systems. It can be viewed as a generalization of the Nyquist stability criterion for linear time-invariant (LTI) systems. Overview Consider a linear system subject to nonlinear feedback, i.e., a nonlinear element is present in the feedback loop. Assume that the nonlinear element satisfies a sector condition [μ1, μ2], meaning that its output always lies between the lines of slope μ1 and μ2 through the origin, and (to keep things simple) that the open-loop system is stable. Then the closed-loop system is globally asymptotically stable if the Nyquist locus of the linear part does not penetrate the circle having as diameter the segment between −1/μ1 and −1/μ2 located on the x-axis (assuming 0 < μ1 < μ2). General description Consider a nonlinear system consisting of a stable linear time-invariant part in feedback with a memoryless, possibly time-varying nonlinearity. If the nonlinearity satisfies the sector condition and the frequency response of the linear part satisfies a corresponding frequency condition, then the origin of the closed-loop system is globally asymptotically stable, i.e., every solution of the system converges to it. External links Sufficient Conditions for Dynamical Output Feedback Stabilization via the Circle Criterion Popov and Circle Criterion (Cam UK) Stability analysis using the circle criterion in Mathematica References Nonlinear control Stability theory
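As a simple illustration of how the criterion is applied in practice, the sketch below samples the Nyquist locus of an assumed stable example plant G(s) = 1/((s + 1)(s + 2)) and checks whether it penetrates the critical disk whose diameter is the real-axis segment between −1/μ1 and −1/μ2; the plant and sector bounds are illustrative choices, not part of the article, and only non-penetration (not encirclement) is checked, which suffices for an open-loop stable plant.

import numpy as np

# Nyquist locus of the assumed stable plant G(s) = 1 / ((s + 1)(s + 2)),
# sampled along the imaginary axis s = j*omega.
omegas = np.logspace(-3, 3, 20000)
s = 1j * omegas
G = 1.0 / ((s + 1.0) * (s + 2.0))

def enters_critical_disk(G_vals, mu1, mu2):
    # Critical disk: its diameter is the real-axis segment between
    # -1/mu1 and -1/mu2 (sector [mu1, mu2] with 0 < mu1 < mu2).
    center = -0.5 * (1.0 / mu1 + 1.0 / mu2)
    radius = 0.5 * abs(1.0 / mu1 - 1.0 / mu2)
    return bool(np.any(np.abs(G_vals - center) <= radius))

print(enters_critical_disk(G, mu1=0.5, mu2=4.0))   # False: the locus avoids the disk for this sector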
Circle criterion
[ "Mathematics" ]
216
[ "Stability theory", "Dynamical systems" ]
19,309,200
https://en.wikipedia.org/wiki/T7%20DNA%20polymerase
T7 DNA polymerase is an enzyme used during the DNA replication of the T7 bacteriophage. During this process, the DNA polymerase “reads” existing DNA strands and creates two new strands that match the existing ones. The T7 DNA polymerase requires a host factor, E. coli thioredoxin, in order to carry out its function. This helps stabilize the binding of the necessary protein to the primer-template to improve processivity by more than 100-fold, which is a feature unique to this enzyme. It is a member of the Family A DNA polymerases, which include E. coli DNA polymerase I and Taq DNA polymerase. This polymerase has various applications in site-directed mutagenesis as well as a high-fidelity enzyme suitable for PCR. It has also served as the precursor to Sequenase, an engineered-enzyme optimized for DNA sequencing. Mechanism Phosphoryl transfer Figure 2. Nucleotidyl transfer by DNA polymerase. T7 DNA polymerase catalyzes the phosphoryl transfer during DNA replication of the T7 phage. As shown in Figure 2, the 3’ hydroxyl group of a primer acts as a nucleophile and attacks the phosphodiester bond of nucleoside 5’-triphosphate (dTMP-PP). This reaction adds a nucleoside monophosphate into DNA and releases a pyrophosphate (PPi). Generally, the reaction is metal-dependent and cations such as Mg2+ are often present in the enzyme active site. For T7 DNA polymerase, the fingers, palm and thumb (Figure 1) position the primer-template so that the 3’-end of the primer strand is positioned next to the nucleotide-binding site (located at the intersection of the fingers and thumb). The base pair formed between the nucleotide and the template base fits nicely into a groove between the fingers and the 3’-end of the primer. Two Mg2+ ions form an octahedral coordinate network with oxygen ligand and also bring the reactive primer hydroxyl and the nucleotide α-phosphate close together, thereby lowering the entropic cost of nucleophilic addition. The rate-limiting step in the catalytic cycle occurs after the nucleoside triphosphate binds and before it is incorporated into the DNA (corresponding to the closure of the fingers subdomain around the DNA and nucleotide). Role of Mg2+ ions and amino acid residues in the active site The amino acids present in the active site assist in creating a stabilizing environment for the reaction to proceed. Amino acids such as Lys522, Tyr526, His506 and Arg518 act as hydrogen bond donors. The backbone carbonyl of Ala476, Asp475 and Asp654 form coordinate bonds with the Mg2+ ions. Asp475 and Asp654 form a bridge with the Mg2+ cations to orient them properly. The Mg2+ ion on the right (Figure 3) interacts with negatively charged oxygens of the alpha(α), beta(β) and gamma(γ) phosphates to align the scissile bond for the primer to attack. Even if there is no general base within the active site to deprotonate the primer hydroxyl, the lowered pka of the metal-bound hydroxyl favors the formation of the 3’-hydroxide nucleophile. Metal ions and Lys522 contact non-bridging oxygens on the α-phosphate to stabilize the negative charge developing on the α-phosphorus during bond formation with the nucleophile. Moreover, the Lys522 sidechain also moves to neutralize the negatively charged pyrophosphate group. Tyr526, His506, Arg518 side chains and the oxygen from the backbone carbonyl group of Ala476 take part in the hydrogen bond network and assist in aligning the substrate for phosphoryl transfer. 
Accessory proteins While phage T7 mediates DNA replication in very similar manner to higher organisms, T7 system is generally simpler compared to other replication systems. In addition to T7 DNA polymerase (also known as gp5), T7 replisome requires only four accessory proteins for proper function: host thioredoxin, gp4, gp2.5, and gp1.7. Host thioredoxin T7 polymerase by itself has a very low processivity. It dissociates from the primer-template after incorporating about 15 nucleotides. Upon infection of the host, T7 polymerase binds to host thioredoxin in 1:1 ratio. The hydrophobic interaction between thioredoxin and T7 polymerase helps to stabilize the binding of T7 polymerase to primer-template. In addition, the binding of thioredoxin increases T7 polymerase processivity to nearly 80-fold. The precise mechanism for how the thioredoxin-T7 polymerase complex is able to achieve such increase in processivity is still unknown. Binding of thioredoxin exposes a large number of basic amino acid residues in the thumb region of T7 polymerase. Several studies suggest that the electrostatic interaction between these positively charged basic residues with the negatively charged phosphate backbone of DNA and other accessory proteins is responsible for increased processivity in gp5/thioredoxin complex. gp4 gp4 is a hexameric protein containing two functional domains: helicase domain and primase domain. The helicase domain unwinds double-stranded DNA to provide template for replication. The C-terminal tail of helicase domain contains several negatively charged acidic residues which make contact with the exposed basic residue of T7 polymerase/thioredoxin. These interactions help to load T7 polymerase/thioredoxin complex onto replication fork. The primase domain catalyzes the synthesis of short oligoribonucleotides. These oligoribonucleotides, called primers, are complementary to the template strand and used to initiate DNA replication. In T7 system, primase domain of one subunit interacts with primase domain of adjacent subunit. This interaction between primase domains acts as a brake to stop helicase when needed, which ensure the leading stand synthesis in-pace with lagging stand synthesis. gp2.5 gp2.5 has similar function to single-stranded DNA binding protein. gp2.5 protects single-stranded DNA produced during replication and coordinates synthesis of leading and lagging strands through interaction between its acidic C-terminal tail and gp5/thioredoxin. gp1.7 gp1.7 is a nucleoside monophosphate kinase, which catalyzes the conversion of deoxynucleoside 5'-monophosphates to di and triphosphate nucleotides, which accounts for the sensitivity of T7 polymerase to dideoxynucleotides (see Sequenase below). Properties Processivity The primary gp5 subunit of T7 DNA Polymerase by itself has low processivity and dissociates from DNA after the incorporation of just a few nucleotides. In order to become efficiently processive, T7 DNA polymerase recruits host thioredoxin to form a thioredoxin-gp5 complex. Thioredoxin binds the thioredoxin binding domain of gp5 thereby stabilizes a flexible DNA binding region of gp5. The stabilization of this region of gp5 allosterically increases the amount of protein surface interaction with the duplex portion of the primer-template. The resulting thioredoxin-gp5 complex increases the affinity of T7 polymerase for the primer terminus by ~80-fold and acts processively around 800 nucleotide incorporation steps. 
The mechanism adopted by T7 polymerase to achieve its processivity differs from that of many other polymerases in that it does not rely on a DNA clamp or a clamp loader. Instead, the T7 DNA polymerase complex requires only three proteins for processive DNA polymerization: T7 polymerase (gp5), Escherichia coli thioredoxin, and the single-stranded DNA-binding protein gp2.5. Although these three proteins are the only ones required for polymerization on a single-stranded DNA template, in a native biological setting the thioredoxin-gp5 complex interacts with the gp4 helicase, which provides the single-stranded DNA template (Figure 4). During leading-strand synthesis, thioredoxin-gp5 and gp4 form a high-affinity complex, increasing overall polymerase processivity to around 5 kb.

Exonuclease activity

T7 DNA polymerase possesses a 3′→5′ exonuclease activity on both single- and double-stranded DNA. This exonuclease activity is triggered when a newly synthesized base does not correctly pair with the template strand. Excision of incorrectly incorporated bases acts as a proofreading mechanism, thereby increasing the fidelity of T7 polymerase. During early characterization of the exonuclease activity, it was discovered that iron-catalyzed oxidation of T7 polymerase produced a modified enzyme with greatly reduced exonuclease activity. This discovery led to the development and use of T7 polymerase as Sequenase in early DNA sequencing methods. The mechanism by which T7 DNA polymerase senses that a mismatched base has been incorporated is still a topic of study. However, some studies have provided evidence suggesting that changes in the tension of the template DNA strand caused by a base-pair mismatch may induce exonuclease activation. Wuite et al. observed that applying a tension of above 40 pN to the template DNA resulted in a 100-fold increase in exonuclease activity.

Applications

Strand extensions in site-directed mutagenesis

Site-directed mutagenesis is a molecular biology method used to make specific and intentional changes to the DNA sequence of a gene and any gene products. The technique was developed at a time when the highest-quality commercially available DNA polymerase for converting an oligonucleotide into a complete complementary DNA strand was the large (Klenow) fragment of E. coli DNA polymerase I. However, the ligation step can become an issue with oligonucleotide mutagenesis: when the DNA ligase operates inefficiently relative to the DNA polymerase, strand displacement of the oligonucleotide can reduce the mutant frequency. T7 DNA polymerase, on the other hand, does not perform strand-displacement synthesis and thus can be used to obtain high mutant frequencies for point mutants independent of ligation.

Second strand synthesis of cDNA

cDNA cloning is a major technology for analysis of the expression of genomes. The full-length first strand can be synthesized with commercially available reverse transcriptases. Synthesis of the second strand was once a major limitation of cDNA cloning. Two groups of methods, differing in the mechanism of initiation, were developed to synthesize the second strand. In the first group of methods, initiation of second-strand synthesis takes place within the sequence of the first strand; however, digestion of the 3′ end of the first strand is required, which results in the loss of the sequences corresponding to the 5′ end of the mRNA. In the second group of methods, initiation of second-strand synthesis takes place outside the sequence of the first strand.
This second group of methods does not require digestion of the 3′ end of the first strand. However, its limitation lies in the elongation step. Cloning with T7 DNA polymerase helps overcome this limitation by allowing digestion of the poly(dT) tract during the second-strand synthesis reaction. Therefore, the tract synthesized with terminal transferase is not required to be within a given size range, and the resulting clones contain a tract of limited size. Moreover, owing to the high 3′ exonuclease activity of T7 DNA polymerase, a high yield of the full-length second strand can be obtained.

Sequenase (DNA sequencing)

In Sanger sequencing, one of the major problems with DNA polymerases is their discrimination against dideoxynucleotides, the chain-terminating nucleotides. Most known DNA polymerases strongly discriminate against ddNTPs, so a high ratio of ddNTP to dNTP must be used for efficient chain termination. T7 DNA polymerase discriminates against ddNTPs only severalfold and therefore requires much lower concentrations of ddNTP, giving highly uniform DNA bands on the gel. However, its strong 3′→5′ exonuclease activity can disrupt sequencing: as the concentration of dNTPs falls, the exonuclease activity increases, resulting in no net DNA synthesis or in degradation of the DNA. For use in DNA sequencing, T7 DNA polymerase has therefore been modified to remove its exonuclease activity, either chemically (Sequenase 1.0) or by deletion of residues (Sequenase Version 2.0).

References

EC 2.7.7 DNA replication Phage proteins T-phages
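To see why the low ddNTP discrimination discussed in the Sequenase section matters, chain termination can be modelled as a simple per-step probability. The sketch below is only an illustrative toy model; the discrimination factors and concentrations are hypothetical numbers chosen for the example, not data from the article.

def expected_fragment_length(ddntp, dntp, discrimination):
    """Mean number of bases incorporated before a chain-terminating ddNTP."""
    p_terminate = ddntp / (ddntp + discrimination * dntp)
    return 1.0 / p_terminate

dntp = 100.0          # arbitrary concentration units (hypothetical)
target_length = 400   # desired average read length in bases
for name, d in [("weakly discriminating enzyme (T7-like)", 3),
                ("strongly discriminating enzyme", 1000)]:
    ddntp = d * dntp / (target_length - 1)  # [ddNTP] giving ~400-base reads
    print(f"{name}: needs [ddNTP] of about {ddntp:.1f} "
          f"(mean length {expected_fragment_length(ddntp, dntp, d):.0f})")

With the same target read length, the weakly discriminating enzyme needs roughly 300 times less ddNTP, which is essentially the practical advantage the text attributes to T7 polymerase.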
T7 DNA polymerase
[ "Biology" ]
2,789
[ "Genetics techniques", "DNA replication", "Molecular genetics" ]
19,315,523
https://en.wikipedia.org/wiki/Architectural%20geometry
Architectural geometry is an area of research that combines applied geometry and architecture and looks at the design, analysis and manufacturing processes. It lies at the core of architectural design and strongly challenges contemporary practice, the so-called architectural practice of the digital age. Architectural geometry is influenced by the following fields: differential geometry, topology, fractal geometry, and cellular automata.

Topics include:
creation of freeform curves and surfaces
developable surfaces
discretisation
generative design
digital prototyping and manufacturing

See also
Geometric design
Computer-aided architectural design
Mathematics and architecture
Fractal geometry
Blobitecture

References

External links
Theory
Charles Jencks: The New Paradigm in Architecture
Institutions
Geometric Modeling and Industrial Geometry
Städelschule Architecture Class
SIAL - The Spatial Information Architecture Laboratory
Companies
Evolute Research and Consulting
Events
Smart Geometry
Advances in Architectural Geometry (Conference Proceedings, 80 MB)
Resource collections
Geometry in Action: Architecture
Tools
K3DSurf — A program to visualize and manipulate mathematical models in three, four, five and six dimensions. K3DSurf supports parametric equations and isosurfaces.
JavaView — a 3D geometry viewer and a mathematical visualization software.
Generative Components — Generative design software that captures and exploits the critical relationships between design intent and geometry.
ParaCloud GEM — A software for components population based on points of interest, with no requirement for scripting.
Grasshopper — a graphical algorithm editor tightly integrated with Rhino's 3-D modeling tools.

Computer-aided design Computer-aided design software
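Several of the topics listed above, in particular the discretisation of freeform surfaces, are easy to illustrate in code. The short sketch below samples a doubly curved surface into a quad mesh; the particular surface (z = x·y) and the grid resolution are arbitrary choices made for this example rather than anything prescribed by the field, and real architectural workflows add panelling, planarity and fabrication constraints on top of such a mesh.

import numpy as np

# Sample a freeform (here hyperbolic-paraboloid-like) surface z = x*y on a
# regular parameter grid, producing vertices and quad faces: the simplest
# form of discretisation carried out before panelling or optimisation.
def discretise(nu=10, nv=10, size=2.0):
    u = np.linspace(-size, size, nu)
    v = np.linspace(-size, size, nv)
    uu, vv = np.meshgrid(u, v)
    vertices = np.column_stack([uu.ravel(), vv.ravel(), (uu * vv).ravel()])
    quads = [(i * nu + j, i * nu + j + 1, (i + 1) * nu + j + 1, (i + 1) * nu + j)
             for i in range(nv - 1) for j in range(nu - 1)]
    return vertices, quads

verts, faces = discretise()
print(len(verts), "vertices,", len(faces), "quad faces")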
Architectural geometry
[ "Engineering" ]
323
[ "Architecture stubs", "Computer-aided design", "Design engineering", "Architecture" ]
19,316,427
https://en.wikipedia.org/wiki/Ron%20Rivera%20%28public%20health%29
Ronald Rivera (August 22, 1948 – September 3, 2008) was an American activist of Puerto Rican descent who is best known for promoting an inexpensive ceramic water filter developed in Guatemala by the chemist Fernando Mazariegos and used to treat gray water in impoverished communities and for establishing community-based factories to produce the filters around the world. Early years Rivera was born in the Bronx borough of New York City, of Puerto Rican parents. He was raised in both New York City and Puerto Rico. Rivera graduated from The World University in San Juan, Puerto Rico. He also studied at the School for International Training. Rivera worked with the Peace Corps in Panama and Ecuador, and with Catholic Relief Services in Bolivia. He founded the local consultancy office for the Inter American Foundation in Ecuador where he worked until 1988, when he moved to Nicaragua. Career and work with ceramics Rivera first became passionate about ceramics in the early 1970s when he studied in Cuernavaca, Mexico with Paulo Freire and Ivan Illich, who taught that human beings had lost their connection with the earth. Rivera then went to live with an experienced potter and learned the art of ceramics. After moving to Nicaragua in the late 1980s during the Contra War, where he reunited with and eventually married his high-school sweetheart, Kathy McBride, Rivera worked for over two decades with potters from rural communities in Nicaragua, helping them to enhance their production methods, including the implementation of a more fuel-efficient kiln developed by Manny Hernandez, a professor at Northern Illinois University. He also worked with potters around the country to develop new designs and to connect to new markets. Ceramic water filter He first learned of ceramic pot filters from its inventor Guatemalan chemist Fernando Mazariegos. Rivera produced this inexpensive filter developed in Guatemala by Mr. Mazariegos from a mix of local terra-cotta clay and sawdust or other combustible materials, such as rice husks. The combustible ingredient, which has been milled and screened, burns out in the firing, leaving a network of fine pores. After firing, the filter is coated with colloidal silver. This combination of fine pore size and the bactericidal properties of colloidal silver produce an effective filter, killing over 98 percent of the contaminants that cause diarrhea, thus dramatically reducing public health problems in the communities that use them to purify potable water. He designed a mold for the filter and a special clay press that was operated with a tire jack. The Family of the Americas Association, a Guatemalan organization, conducted a one-year follow-up study on the initial Mazariegos-developed filter project, concluding that this filter helped to reduce the incidence of diarrhea in participating households by as much as 50 percent. Laboratory testing and field studies have been performed on the filter by various institutions, including MIT, Tulane University, University of Colorado and University of North Carolina. Rivera began manufacturing the pots through Potters for Peace in Nicaragua, eventually helping to establish an independent enterprise to produce the filters. Beginning in 1998, Rivera traveled throughout Latin America, Africa and Asia to establish 30 filter microenterprises in Guatemala, Honduras, Mexico, Cambodia, Bangladesh, Ghana, Nigeria, El Salvador, the Darfur region of Sudan, Myanmar and other countries. 
These factories have produced over 300,000 filters, and the filters are used by about 1.5 million people to date. An additional 13 filter workshops are scheduled to begin operating by the end of next year. The filter has been cited by the United Nations’ Appropriate Technology Handbook, and tens of thousands of filters have been distributed worldwide by organizations such as International Federation of the Red Cross and Red Crescent, Doctors Without Borders, UNICEF, Plan International, Project Concern International, International Development Enterprises, Oxfam and USAID. Rivera wanted to share this Guatemalan invention with the world and posted his experience in manufacturing ceramic pot filters in painstaking detail, on the Internet. Written work Ron Rivera, Lynette Yetter, Jeff Rogers and Reid Harvey co-authored the paper, "A Sustainable Ceramic Water Filter for Household Purification," which Lynette Yetter presented at a NSF Conference in 2000. Legacy Rivera's filters were included in an exhibition at the Cooper-Hewitt National Design Museum called "Design for the Other 90 Percent." Rivera died in Managua, Nicaragua on September 3, 2008, after contracting falciparum malaria while working in Nigeria. A memorial service held in Managua on September 6 at the Universidad Centroamericana was attended by hundreds, including scores of local potters. During his stay in Nigeria he worked endlessly to put together a ceramic water filter factory. See also List of Puerto Ricans Puerto Rican scientists and inventors References External links Ron Rivera Memorial Ron Rivera profile via Changemakers Design for the other 90%: Ron Rivera Coordinator of Ceramic Water Filter and International Projects, Potters for Peace Potters for Peace (U.S.) Potters without Borders(Canada) Puerto Rican inventors American people of Puerto Rican descent Water filters Water technology SIT Graduate Institute alumni Infectious disease deaths in Nicaragua Deaths from malaria 1948 births 2008 deaths 20th-century American ceramists 20th-century Puerto Rican scientists Puerto Rican writers American emigrants to Nicaragua 20th-century Puerto Rican writers 20th-century American inventors
Ron Rivera (public health)
[ "Chemistry" ]
1,080
[ "Water treatment", "Water filters", "Filters", "Water technology" ]
19,316,605
https://en.wikipedia.org/wiki/Non-photochemical%20quenching
Non-photochemical quenching (NPQ) is a mechanism employed by plants and algae to protect themselves from the adverse effects of high light intensity. It involves the quenching of singlet excited state chlorophylls (Chl) via enhanced internal conversion to the ground state (non-radiative decay), thus harmlessly dissipating excess excitation energy as heat through molecular vibrations. NPQ occurs in almost all photosynthetic eukaryotes (algae and plants), and helps to regulate and protect photosynthesis in environments where light energy absorption exceeds the capacity for light utilization in photosynthesis.

Process

When a molecule of chlorophyll absorbs light it is promoted from its ground state to its first singlet excited state. The excited state then has three main fates:
1. The energy is passed to another chlorophyll molecule by Förster resonance energy transfer; in this way excitation is gradually passed to the photochemical reaction centers (photosystem I and photosystem II), where the energy is used in photosynthesis (photochemical quenching).
2. The excited state returns to the ground state by emitting the energy as heat (non-photochemical quenching).
3. The excited state returns to the ground state by emitting a photon (fluorescence).

In higher plants, the absorption of light continues to increase as light intensity increases, while the capacity for photosynthesis tends to saturate. Therefore, there is the potential for the absorption of excess light energy by photosynthetic light-harvesting systems. This excess excitation energy leads to an increase in the lifetime of singlet excited chlorophyll, increasing the chances of the formation of long-lived chlorophyll triplet states by inter-system crossing. Triplet chlorophyll is a potent photosensitiser of molecular oxygen, forming singlet oxygen, which can cause oxidative damage to the pigments, lipids and proteins of the photosynthetic thylakoid membrane. To counter this problem, one photoprotective mechanism is the so-called non-photochemical quenching (NPQ), which relies upon the conversion and dissipation of the excess excitation energy into heat. NPQ involves conformational changes within the light-harvesting proteins of photosystem (PS) II that bring about a change in pigment interactions, causing the formation of energy traps. The conformational changes are stimulated by a combination of the transmembrane proton gradient, the photosystem II subunit S (PsbS) and the enzymatic conversion of the carotenoid violaxanthin to zeaxanthin (the xanthophyll cycle). Violaxanthin is a carotenoid downstream from chlorophyll a and b within the antenna of PS II and nearest to the special chlorophyll a located in the reaction center of the antenna. As light intensity increases, acidification of the thylakoid lumen takes place through the stimulation of carbonic anhydrase, which in turn converts bicarbonate (HCO3−) into carbon dioxide, causing an influx of CO2 and inhibiting Rubisco oxygenase activity.
This acidification also leads to the protonation of the PsbS subunit of PS II, which promotes the conversion of violaxanthin to zeaxanthin and is involved in altering the orientation of the photosystems at times of high light absorption, so as to reduce the quantity of carbon dioxide created and to start non-photochemical quenching. It is accompanied by the activation of the enzyme violaxanthin de-epoxidase, which eliminates an epoxide and forms an alkene on a six-membered ring of violaxanthin, giving rise to another carotenoid known as antheraxanthin. Violaxanthin contains two epoxides, each bonded to a six-membered ring; when both are eliminated by the de-epoxidase, the carotenoid zeaxanthin is formed. Only violaxanthin is able to transfer a photon's energy to the special chlorophyll a. Antheraxanthin and zeaxanthin instead dissipate the energy from the photon as heat, preserving the integrity of photosystem II. This dissipation of energy as heat is one form of non-photochemical quenching.

Measurement of NPQ

Non-photochemical quenching is measured by the quenching of chlorophyll fluorescence and is distinguished from photochemical quenching by applying a bright light pulse under actinic light to transiently saturate the photosystem II reaction centers and comparing the maximal yield of fluorescence emission in the light-adapted and dark-adapted states. Non-photochemical quenching is not affected if the pulse of light is short. During this pulse, the fluorescence reaches the level that would be reached in the absence of any photochemical quenching, known as the maximum fluorescence, Fm. For further discussion, see Measuring chlorophyll fluorescence and Plant stress measurement. Chlorophyll fluorescence can easily be measured with a chlorophyll fluorometer. Some fluorometers can calculate NPQ and the photochemical and non-photochemical quenching coefficients (including qP, qN, qE and NPQ), as well as light- and dark-adaptation parameters (including Fo, Fm, and Fv/Fm).

See also
Chlorophyll fluorescence
Measuring chlorophyll fluorescence
Integrated fluorometer

References

Photosynthesis
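The quantities mentioned in the Measurement of NPQ section are usually combined into the standard (Stern–Volmer type) quenching parameters below. These are the conventional definitions from the chlorophyll fluorescence literature, supplied here for reference rather than stated explicitly in the article:

\[
\frac{F_v}{F_m}=\frac{F_m-F_o}{F_m},\qquad \mathrm{NPQ}=\frac{F_m-F_m'}{F_m'},
\]

where Fo and Fm are the minimal and maximal fluorescence of the dark-adapted sample and Fm′ is the maximal fluorescence reached during a saturating pulse applied under actinic light.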
Non-photochemical quenching
[ "Chemistry", "Biology" ]
1,160
[ "Biochemistry", "Photosynthesis" ]
10,240,442
https://en.wikipedia.org/wiki/CM-field
In mathematics, a CM-field is a particular type of number field, so named for a close connection to the theory of complex multiplication. Another name used is J-field. The abbreviation "CM" was introduced by Shimura and Taniyama.

Formal definition

A number field K is a CM-field if it is a quadratic extension K/F where the base field F is totally real but K is totally imaginary. That is, every embedding of F into the complex numbers lies entirely within the real numbers, but there is no embedding of K into the real numbers.

In other words, there is a subfield F of K such that K is generated over F by a single square root of an element α, say β = √α, in such a way that the minimal polynomial of β over the rational number field has all its roots non-real complex numbers. For this, α should be chosen totally negative, so that for each embedding σ of F into the real number field, σ(α) < 0.

Properties

One feature of a CM-field is that complex conjugation on the complex numbers induces an automorphism of the field which is independent of its embedding into the complex numbers. In the notation given, it must change the sign of β.

A number field K is a CM-field if and only if it has a "units defect", i.e. if it contains a proper subfield F whose unit group has the same Z-rank as that of K. In fact, F is the totally real subfield of K mentioned above. This follows from Dirichlet's unit theorem.

Examples

The simplest, and motivating, example of a CM-field is an imaginary quadratic field, for which the totally real subfield is just the field of rationals.

One of the most important examples of a CM-field is the cyclotomic field Q(ζn), which is generated by a primitive nth root of unity ζn. It is a totally imaginary quadratic extension of the totally real field Q(ζn + ζn⁻¹). The latter is the fixed field of complex conjugation, and Q(ζn) is obtained from it by adjoining a square root of (ζn − ζn⁻¹)², which is totally negative.

The union QCM of all CM fields is similar to a CM field except that it has infinite degree. It is a quadratic extension of the union of all totally real fields QR. The absolute Galois group Gal(Q̄/QR) is generated (as a closed subgroup) by all elements of order 2 in Gal(Q̄/Q), and Gal(Q̄/QCM) is a subgroup of index 2 in Gal(Q̄/QR). The Galois group Gal(QCM/Q) has a center generated by an element of order 2 (complex conjugation) and the quotient by its center is the group Gal(QR/Q).

If V is a complex abelian variety of dimension n, then any abelian algebra F of endomorphisms of V has rank at most 2n over Z. If it has rank 2n and V is simple then F is an order in a CM-field. Conversely any CM field arises like this from some simple complex abelian variety, unique up to isogeny.

One example of a totally imaginary field which is not CM is the number field defined by the polynomial .

References

Field (mathematics) Algebraic number theory Complex numbers
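To make the cyclotomic example concrete, the case n = 5 can be worked out explicitly; the short computation below only instantiates the general statement above and is not part of the original article.

\[
K=\mathbf{Q}(\zeta_5),\qquad F=\mathbf{Q}(\zeta_5+\zeta_5^{-1})=\mathbf{Q}(\sqrt{5}),\qquad \zeta_5+\zeta_5^{-1}=2\cos\tfrac{2\pi}{5}=\tfrac{-1+\sqrt{5}}{2}.
\]

Setting β = ζ5 − ζ5⁻¹ gives

\[
\beta^{2}=\zeta_5^{2}+\zeta_5^{-2}-2=-4\sin^{2}\tfrac{2\pi k}{5}<0
\]

under every embedding ζ5 ↦ e^{2πik/5} of K into the complex numbers, so β² is a totally negative element of F and K = F(β) is a totally imaginary quadratic extension of the totally real field Q(√5), that is, a CM-field of degree 4.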
CM-field
[ "Mathematics" ]
656
[ "Mathematical objects", "Algebraic number theory", "Complex numbers", "Numbers", "Number theory" ]
10,253,266
https://en.wikipedia.org/wiki/Rigid%20unit%20modes
Rigid unit modes (RUMs) represent a class of lattice vibrations or phonons that exist in network materials such as quartz, cristobalite or zirconium tungstate. Network materials can be described as three-dimensional networks of polyhedral groups of atoms such as SiO4 tetrahedra or TiO6 octahedra. A RUM is a lattice vibration in which the polyhedra are able to move, by translation and/or rotation, without distorting. RUMs in crystalline materials are the counterparts of floppy modes in glasses, as introduced by Jim Phillips and Mike Thorpe.

The interest in rigid unit modes

The idea of rigid unit modes was developed for crystalline materials to enable an understanding of the origin of displacive phase transitions in materials such as silicates, which can be described as infinite three-dimensional networks of corner-linked SiO4 and AlO4 tetrahedra. The idea was that rigid unit modes could act as the soft modes for displacive phase transitions. The original work in silicates showed that many of the phase transitions in silicates could be understood in terms of soft modes that are RUMs.

After the original work on displacive phase transitions, the RUM model was also applied to understanding the nature of the disordered high-temperature phases of materials such as cristobalite, the dynamics and localised structural distortions in zeolites, and negative thermal expansion.

Why rigid unit modes can exist

The simplest way to understand the origin of RUMs is to consider the balance between the numbers of constraints and degrees of freedom of the network, an engineering analysis that dates back to James Clerk Maxwell and which was introduced to amorphous materials by Jim Phillips and Mike Thorpe. If the number of constraints exceeds the number of degrees of freedom, the structure will be rigid. On the other hand, if the number of degrees of freedom exceeds the number of constraints, the structure will be floppy.

For a structure that consists of corner-linked tetrahedra (such as the SiO4 tetrahedra in silica, SiO2), we can count the numbers of constraints and degrees of freedom as follows. For a given tetrahedron, the position of any corner has to have its three spatial coordinates (x, y, z) match the spatial coordinates of the corresponding corner of a linked tetrahedron. Thus each corner carries three constraints. These are shared by the two linked tetrahedra, so contribute 1.5 constraints to each tetrahedron. There are 4 corners, so we have a total of 6 constraints per tetrahedron. A rigid three-dimensional object has 6 degrees of freedom: 3 translations and 3 rotations. Thus there is an exact balance between the numbers of constraints and degrees of freedom. (Note that we can get an identical result by considering the atoms to be the basic units. There are 5 atoms in the structural tetrahedron, but 4 of these are shared by two tetrahedra, so that there are 3 + 4×3/2 = 9 degrees of freedom per tetrahedron. The number of constraints needed to hold together such a tetrahedron is also 9: 4 distances and 5 angles.)

What this balance means is that a structure composed of structural tetrahedra joined at corners is exactly on the border between being rigid and floppy. What appears to happen is that symmetry reduces the number of constraints, so that structures such as quartz and cristobalite are slightly floppy and thus support some RUMs. The above analysis can be applied to any network structure composed of polyhedral groups of atoms.
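The constraint counting just described can be written as a few lines of code. The sketch below applies the same Maxwell-style bookkeeping to corner-linked tetrahedra and octahedra, and to the mixed zirconium tungstate framework discussed in the next paragraph; it is only an illustration of the counting argument, not a substitute for the full symmetry analysis the text refers to.

def floppiness(n_shared_corners):
    """Degrees of freedom minus constraints for one rigid polyhedron whose
    listed corners are each shared with exactly one neighbouring polyhedron."""
    return 6 - n_shared_corners * 3 / 2

print("corner-linked tetrahedra (silica):   ", floppiness(4))   # 0.0  -> exactly marginal
print("corner-linked octahedra (perovskite):", floppiness(6))   # -3.0 -> over-constrained
# Zirconium tungstate: one ZrO6 plus two WO4 per formula unit, each WO4
# having one corner with no linkage (only 3 shared corners).
print("ZrW2O8 per formula unit:             ", floppiness(6) + 2 * floppiness(3))  # 0.0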
One example is the perovskite family of structures, which consist of corner-linked BX6 octahedra such as TiO6 or ZrO6. A simple counting analysis would in fact suggest that such structures are rigid, but in the ideal cubic phase symmetry allows some degree of flexibility. Zirconium tungstate, the archetypal material showing negative thermal expansion, contains ZrO6 octahedra and WO4 tetrahedra, with one of the corners of each WO4 tetrahedron having no linkage. The counting analysis shows that, like silica, zirconium tungstate has an exact balance between the numbers of constraints and degrees of freedom, and further analysis has shown the existence of RUMs in this material.

References

Andrew P. Giddy, Martin T. Dove, G. S. Pawley and Volker Heine. The determination of rigid-unit modes as potential soft modes for displacive phase transitions in framework crystal structures.
Kenton D. Hammonds, Martin T. Dove, Andrew P. Giddy, Volker Heine, and Björn Winkler. Rigid-unit phonon modes and structural phase transitions in framework silicates.
Martin T. Dove. Theory of displacive phase transitions in minerals.

Crystallography Materials science
Rigid unit modes
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,014
[ "Applied and interdisciplinary physics", "Materials science", "Crystallography", "Condensed matter physics", "nan" ]
4,593,361
https://en.wikipedia.org/wiki/Peter%20J.%20Freyd
Peter John Freyd (born February 5, 1936) is an American mathematician, a professor at the University of Pennsylvania, known for work in category theory and for founding the False Memory Syndrome Foundation.

Mathematics

Freyd obtained his Ph.D. from Princeton University in 1960; his dissertation, on Functor Theory, was written under the supervision of Norman Steenrod and David Buchsbaum. Freyd is best known for his adjoint functor theorem. He was the author of the foundational book Abelian Categories: An Introduction to the Theory of Functors (1964). This work culminates in a proof of the Freyd–Mitchell embedding theorem. In addition, Freyd's name is associated with the HOMFLYPT polynomial of knot theory, and he and Andre Scedrov originated the concept of (mathematical) allegories. In 2012, he became a fellow of the American Mathematical Society.

False Memory Syndrome Foundation

Freyd and his wife Pamela founded the False Memory Syndrome Foundation in 1992, after Freyd was accused of childhood sexual abuse by his daughter Jennifer. Peter Freyd denied the accusations. Three years after its founding, it had more than 7,500 members. As of December 2019, the False Memory Syndrome Foundation was dissolved.

Publications

Reprinted with a foreword as Peter J. Freyd and Andre Scedrov: Categories, Allegories. North-Holland (1999).

References

External links

Printable versions of Abelian categories, an introduction to the theory of functors.

Living people 20th-century American mathematicians 21st-century American mathematicians Category theorists University of Pennsylvania faculty Mathematicians at the University of Pennsylvania Princeton University alumni Fellows of the American Mathematical Society 1936 births People from Evanston, Illinois Mathematicians from Illinois
Peter J. Freyd
[ "Mathematics" ]
354
[ "Category theorists", "Mathematical structures", "Category theory" ]
4,593,828
https://en.wikipedia.org/wiki/Samay%C4%81
Samaya or Samayam is a Sanskrit term referring to the "appointed or proper time, [the] right moment for doing anything." In Indian languages, samayam, or samay in Indo-Aryan languages, is a unit of time.

Meaning

In contemporary usage, samayam means time in Dravidian languages such as Kannada, Malayalam, and Tamil, and in Indo-Aryan languages such as Bengali, Hindi, Marathi and Gujarati.

Jainism

Meaning

Samaya represents the most infinitesimal part of time that cannot be divided further. The blink of an eye, or about a quarter of a second, has innumerable samaya in it. For all practical purposes a second happens to be the finest measurement of time. Jainism, however, recognizes a very small measurement of time known as samaya, which is an infinitely small part of a second.

Measurements

The following are measures of time as adopted by Jainism:
indivisible time = 1 samaya
innumerable samaya = 1 avalika
16,777,216 avalika = 1 muhurta
30 muhurta = 1 day and night
15 days and nights = 1 paksha (fortnight)
2 paksha = 1 month
12 months = 1 year
innumerable years = 1 palyopama
10 million million palyopama = 1 sagaropama
10 million million sagaropama = 1 utsarpini or 1 avasarpini
1 utsarpini + 1 avasarpini = 1 kalachakra (one time cycle)

Example

When an Arihant reaches the stage of moksha (liberation), the soul travels to the Siddhashila (the highest realm in the universe) in one samaya.

Hinduism

Samaya is the basic unit of time in Hindu mythology. It is stated to be an epithet of Shiva in the Agni Purana.

Other uses

The samayachakra is the great chariot wheel of time which turns relentlessly forward.

Samayam is a term used in Indian classical music to loosely categorize ragas into times of day. Each raga has a specific period of the day (praharam) when it is performed. In Gandharva-Veda the day is divided into three-hour-long intervals: 4–7 a.m., 7–10 a.m., etc. The time concept in Gandharva-Veda is more strictly adhered to than it would be, for example, in Carnatic music.

See also
Hindu cosmology
Jain cosmology
Palya
Hasta
Religious cosmology

References

Units of time Hindu cosmology Sanskrit words and phrases
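For a sense of scale, the relations in the Measurements list can be combined into an approximate modern value; the arithmetic below is a rough conversion made for illustration, not a figure given in Jain sources. Taking the unit of which 30 make one day and night,

\[
1\ \text{unit} = \frac{24\times 60\ \text{min}}{30} = 48\ \text{min} = 2880\ \text{s},\qquad
\frac{2880\ \text{s}}{16{,}777{,}216} \approx 1.7\times 10^{-4}\ \text{s},
\]

so the unit of which 16,777,216 make up the 48-minute span lasts on the order of a sixth of a millisecond, and a single samaya is an "innumerable" fraction of even that.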
Samayā
[ "Physics", "Mathematics" ]
490
[ "Physical quantities", "Time", "Units of time", "Quantity", "Spacetime", "Units of measurement" ]