A scalar is an element of a field which is used to define a vector space . In linear algebra , real numbers or generally elements of a field are called scalars and relate to vectors in an associated vector space through the operation of scalar multiplication (defined in the vector space), in which a vector can be multiplied by a scalar in the defined way to produce another vector. [ 1 ] [ 2 ] [ 3 ] Generally speaking, a vector space may be defined by using any field instead of real numbers (such as complex numbers ). Then scalars of that vector space will be elements of the associated field (such as complex numbers). A scalar product operation – not to be confused with scalar multiplication – may be defined on a vector space, allowing two vectors to be multiplied in the defined way to produce a scalar. A vector space equipped with a scalar product is called an inner product space . A quantity described by multiple scalars, such as having both direction and magnitude, is called a vector . [ 4 ] The term scalar is also sometimes used informally to mean a vector, matrix , tensor , or other, usually, "compound" value that is actually reduced to a single component. Thus, for example, the product of a 1 × n matrix and an n × 1 matrix, which is formally a 1 × 1 matrix, is often said to be a scalar . The real component of a quaternion is also called its scalar part . The term scalar matrix is used to denote a matrix of the form kI where k is a scalar and I is the identity matrix . The word scalar derives from the Latin word scalaris , an adjectival form of scala (Latin for "ladder"), from which the English word scale also comes. The first recorded usage of the word "scalar" in mathematics occurs in François Viète 's Analytic Art ( In artem analyticem isagoge ) (1591): [ 5 ] [ 6 ] According to a citation in the Oxford English Dictionary the first recorded usage of the term "scalar" in English came with W. R. Hamilton in 1846, referring to the real part of a quaternion: A vector space is defined as a set of vectors (additive abelian group ), a set of scalars ( field ), and a scalar multiplication operation that takes a scalar k and a vector v to form another vector k v . For example, in a coordinate space , the scalar multiplication k ( v 1 , v 2 , … , v n ) {\displaystyle k(v_{1},v_{2},\dots ,v_{n})} yields ( k v 1 , k v 2 , … , k v n ) {\displaystyle (kv_{1},kv_{2},\dots ,kv_{n})} . In a (linear) function space , kf is the function x ↦ k ( f ( x )) . The scalars can be taken from any field, including the rational , algebraic , real, and complex numbers, as well as finite fields . According to a fundamental theorem of linear algebra, every vector space has a basis . It follows that every vector space over a field K is isomorphic to the corresponding coordinate vector space where each coordinate consists of elements of K (E.g., coordinates ( a 1 , a 2 , ..., a n ) where a i ∈ K and n is the dimension of the vector space in consideration.). For example, every real vector space of dimension n is isomorphic to the n -dimensional real space R n . Alternatively, a vector space V can be equipped with a norm function that assigns to every vector v in V a scalar || v ||. By definition, multiplying v by a scalar k also multiplies its norm by | k |. If || v || is interpreted as the length of v , this operation can be described as scaling the length of v by k . A vector space equipped with a norm is called a normed vector space (or normed linear space ). 
The norm is usually defined to be an element of V 's scalar field K , which restricts the latter to fields that support the notion of sign. Moreover, if V has dimension 2 or more, K must be closed under square root, as well as the four arithmetic operations; thus the rational numbers Q are excluded, but the surd field is acceptable. For this reason, not every scalar product space is a normed vector space. When the requirement that the set of scalars form a field is relaxed so that it need only form a ring (so that, for example, the division of scalars need not be defined, or the scalars need not be commutative ), the resulting more general algebraic structure is called a module . In this case the "scalars" may be complicated objects. For instance, if R is a ring, the vectors of the product space R n can be made into a module with the n × n matrices with entries from R as the scalars. Another example comes from manifold theory , where the space of sections of the tangent bundle forms a module over the algebra of real functions on the manifold. The scalar multiplication of vector spaces and modules is a special case of scaling , a kind of linear transformation .
https://en.wikipedia.org/wiki/Scalar_(mathematics)
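As a rough illustration of the scalar multiplication and norm-scaling behaviour described above, the following minimal Python sketch (the function names are invented for this example) multiplies a coordinate vector by a scalar and checks that the Euclidean norm scales by |k|:

```python
import math

def scalar_multiply(k, v):
    """Scalar multiplication in a coordinate space: k*(v1, ..., vn) = (k*v1, ..., k*vn)."""
    return [k * component for component in v]

def norm(v):
    """Euclidean norm, one common choice of norm on R^n."""
    return math.sqrt(sum(component ** 2 for component in v))

v = [3.0, 4.0]   # a vector in R^2
k = -2.5         # a scalar from the field R

kv = scalar_multiply(k, v)
print(kv)                            # [-7.5, -10.0]
# Scaling property: ||k v|| == |k| * ||v||
print(norm(kv), abs(k) * norm(v))    # 12.5 12.5
```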
In mathematics and physics , a scalar field is a function associating a single [ dubious – discuss ] number to each point in a region of space – possibly physical space . The scalar may either be a pure mathematical number ( dimensionless ) or a scalar physical quantity (with units ). In a physical context, scalar fields are required to be independent of the choice of reference frame. That is, any two observers using the same units will agree on the value of the scalar field at the same absolute point in space (or spacetime ) regardless of their respective points of origin. Examples used in physics include the temperature distribution throughout space, the pressure distribution in a fluid, and spin -zero quantum fields, such as the Higgs field . These fields are the subject of scalar field theory . Mathematically, a scalar field on a region U is a real or complex-valued function or distribution on U . [ 1 ] [ 2 ] The region U may be a set in some Euclidean space , Minkowski space , or more generally a subset of a manifold , and it is typical in mathematics to impose further conditions on the field, such that it be continuous or often continuously differentiable to some order. A scalar field is a tensor field of order zero, [ 3 ] and the term "scalar field" may be used to distinguish a function of this kind with a more general tensor field, density , or differential form . Physically, a scalar field is additionally distinguished by having units of measurement associated with it. In this context, a scalar field should also be independent of the coordinate system used to describe the physical system—that is, any two observers using the same units must agree on the numerical value of a scalar field at any given point of physical space. Scalar fields are contrasted with other physical quantities such as vector fields , which associate a vector to every point of a region, as well as tensor fields and spinor fields . [ citation needed ] More subtly, scalar fields are often contrasted with pseudoscalar fields. In physics, scalar fields often describe the potential energy associated with a particular force . The force is a vector field , which can be obtained as a factor of the gradient of the potential energy scalar field. Examples include:
https://en.wikipedia.org/wiki/Scalar_field
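The following sketch in Python (the quadratic potential and all names are illustrative assumptions, not taken from the article) treats a scalar field as a function assigning one number to each point of a region and recovers the associated force field as the negative gradient of that potential:

```python
import numpy as np

# A scalar field assigns one number to each point of a region; here V(x, y) is a simple
# quadratic potential-energy field chosen purely for illustration.
def potential(x, y):
    return 0.5 * (x**2 + y**2)

# Sample the field on a grid covering the region U = [-2, 2] x [-2, 2].
xs = np.linspace(-2.0, 2.0, 101)
ys = np.linspace(-2.0, 2.0, 101)
X, Y = np.meshgrid(xs, ys, indexing="ij")
V = potential(X, Y)

# The associated force is a vector field obtained from the gradient of the potential,
# F = -grad V, estimated here with finite differences.
dV_dx, dV_dy = np.gradient(V, xs, ys)
Fx, Fy = -dV_dx, -dV_dy

# Near the point (1, 0) the force should be approximately (-1, 0) for this potential.
i, j = 75, 50            # xs[75] == 1.0, ys[50] == 0.0
print(Fx[i, j], Fy[i, j])
```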
In astrophysics and cosmology scalar field dark matter is a classical, minimally coupled, scalar field postulated to account for the inferred dark matter . [ 2 ] The universe may be accelerating, fueled perhaps by a cosmological constant or some other field possessing long range 'repulsive' effects. A model must predict the correct form for the large scale clustering spectrum, [ 3 ] account for cosmic microwave background anisotropies on large and intermediate angular scales, and provide agreement with the luminosity distance relation obtained from observations of high redshift supernovae . The modeled evolution of the universe includes a large amount of unknown matter and energy in order to agree with such observations. This energy density has two components: cold dark matter and dark energy . Each contributes to the theory of the origination of galaxies and the expansion of the universe. The universe must have a critical density, a density not explained by baryonic matter (ordinary matter ) alone. The dark matter can be modeled as a scalar field using two fitted parameters, mass and self-interaction . [ 4 ] [ 5 ] In this model the dark matter consists of an ultralight particle with a mass of ~10 −22 eV when there is no self-interaction. [ 6 ] [ 7 ] [ 8 ] If there is a self-interaction a wider mass range is allowed. [ 9 ] The uncertainty in position of a particle is larger than its Compton wavelength (a particle with mass 10 −22 eV has a Compton wavelength of 1.3 light years ), and for some reasonable estimates of particle mass and density of dark matter there is no point talking about the individual particles' positions and momenta. By some dynamical measurements, we can deduce that the mass density of the dark matter is about 0.4 G e V c m − 3 {\displaystyle 0.4\ GeV\ cm^{-3}} . One can calculate the average separation between these particles by deducing the de-Broglie wavelength: λ = 2 π / m v {\displaystyle \lambda =2\pi /mv} , here m is the mass of the dark matter particle and v is the dispersion velocity of the halo. The average number of the particles in cubic volume having the dimension equal to the de Broglie wavelength, λ 3 {\displaystyle \lambda ^{3}} is given by, N d b = ( 34 e V m ) 4 ( 250 k m / s v ) 3 {\displaystyle N_{db}=\left({\frac {34eV}{m}}\right)^{4}\left({\frac {250km/s}{v}}\right)^{3}} The occupation number of these particles is so huge that we can consider the wave nature of these particles in the classical description. To satisfy Pauli's exclusion principle the particle must be bosons especially spin zero (scalar) particles, hence these ultra-light dark matter would be more like a wave than a particle, and the galactic halos are giant systems of condensed bose liquid , possibly superfluid . The dark matter can be described as a Bose–Einstein condensate of the ultralight quanta of the field [ 10 ] and as boson stars. [ 9 ] The enormous Compton wavelength of these particles prevents structure formation on small, subgalactic scales, which is a major problem in traditional cold dark matter models. The collapse of initial over-densities is studied in the references. [ 11 ] [ 12 ] [ 13 ] [ 14 ] There are not many models in which we consider dark matter as the scalar field. Axion -like particle (ALP) in string theory can be considered as a model of scalar field dark matter, as its mass density satisfies the relic density of the dark matter. The most common production mechanism of ALP is misalignment mechanism . 
This yields a mass of around $10^{-22}$–$10^{-20}$ eV, consistent with the relic abundance of the observed dark matter. [ 15 ] This dark matter model is also known as BEC dark matter or wave dark matter. Fuzzy dark matter and ultra-light axions are examples of scalar field dark matter.
https://en.wikipedia.org/wiki/Scalar_field_dark_matter
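As a quick numerical check of the Compton-wavelength figure quoted above (a particle with rest energy $10^{-22}$ eV having a Compton wavelength of roughly 1.3 light years), here is a short Python sketch; the constants are standard values, lightly rounded:

```python
import math

hbar_c_eV_m = 197.3269631e-9   # reduced Planck constant times c, in eV*m (~197.3 eV*nm)
light_year_m = 9.4607e15       # metres in one light year

m_c2_eV = 1e-22                # rest energy of the ultralight dark-matter particle, in eV

# Compton wavelength  lambda = h / (m c) = 2*pi*hbar*c / (m c^2)
compton_m = 2.0 * math.pi * hbar_c_eV_m / m_c2_eV
print(compton_m / light_year_m)   # ~1.3 light years, matching the figure quoted above
```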
In general relativity , a scalar field solution is an exact solution of the Einstein field equation in which the gravitational field is due entirely to the field energy and momentum of a scalar field . Such a field may or may not be massless , and it may be taken to have minimal curvature coupling , or some other choice, such as conformal coupling . In general relativity, the geometric setting for physical phenomena is a Lorentzian manifold , which is physically interpreted as a curved spacetime, and which is mathematically specified by defining a metric tensor g a b {\displaystyle g_{ab}} (or by defining a frame field ). The curvature tensor R a b c d {\displaystyle R^{a}{}_{bcd}} of this manifold and associated quantities such as the Einstein tensor G a b {\displaystyle G_{ab}} , are well-defined even in the absence of any physical theory, but in general relativity they acquire a physical interpretation as geometric manifestations of the gravitational field . In addition, we must specify a scalar field by giving a function ψ {\displaystyle \psi } . This function is required to satisfy two following conditions: G a b = κ ( ψ ; a ψ ; b − 1 2 ψ ; m ψ ; m g a b ) {\displaystyle G_{ab}=\kappa \left(\psi _{;a}\psi _{;b}-{\frac {1}{2}}\psi _{;m}\psi ^{;m}g_{ab}\right)} . Both conditions follow from varying the Lagrangian density for the scalar field, which in the case of a minimally coupled massless scalar field is Here, gives the wave equation, while gives the Einstein equation (in the case where the field energy of the scalar field is the only source of the gravitational field). Scalar fields are often interpreted as classical approximations, in the sense of effective field theory , to some quantum field. In general relativity, the speculative quintessence field can appear as a scalar field. For example, a flux of neutral pions can in principle be modeled as a minimally coupled massless scalar field. The components of a tensor computed with respect to a frame field rather than the coordinate basis are often called physical components , because these are the components which can (in principle) be measured by an observer. In the special case of a minimally coupled massless scalar field , an adapted frame (the first is a timelike unit vector field , the last three are spacelike unit vector fields) can always be found in which the Einstein tensor takes the simple form G a ^ b ^ = 8 π σ [ − 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ] {\displaystyle G_{{\hat {a}}{\hat {b}}}=8\pi \sigma \,\left[{\begin{matrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{matrix}}\right]} where σ {\displaystyle \sigma } is the energy density of the scalar field. The characteristic polynomial of the Einstein tensor in a minimally coupled massless scalar field solution must have the form In other words, we have a simple eigenvalue and a triple eigenvalue, each being the negative of the other. Multiply out and using Gröbner basis methods, we find that the following three invariants must vanish identically: Using Newton's identities , we can rewrite these in terms of the traces of the powers. We find that We can rewrite this in terms of index gymnastics as the manifestly invariant criteria: Notable individual scalar field solutions include This relativity -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Scalar_field_solution
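The frame-field form of the Einstein tensor given above, $G_{\hat a\hat b} = 8\pi\sigma\,\mathrm{diag}(-1,1,1,1)$, can be probed numerically. The small sketch below (with σ an arbitrary illustrative value) simply confirms the stated eigenvalue structure, a simple eigenvalue and a triple eigenvalue that are negatives of each other:

```python
import numpy as np

sigma = 0.7   # energy density of the scalar field; an arbitrary positive illustrative value
G_frame = 8.0 * np.pi * sigma * np.diag([-1.0, 1.0, 1.0, 1.0])

eigenvalues = np.sort(np.linalg.eigvals(G_frame).real)
print(eigenvalues)           # one eigenvalue -8*pi*sigma and a triple eigenvalue +8*pi*sigma
print(8.0 * np.pi * sigma)   # for comparison

# Trace in this frame: -8*pi*sigma + 3 * 8*pi*sigma = 16*pi*sigma
print(np.trace(G_frame), 16.0 * np.pi * sigma)
```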
In linear algebra , a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices . Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is [ 3 0 0 2 ] {\displaystyle \left[{\begin{smallmatrix}3&0\\0&2\end{smallmatrix}}\right]} , while an example of a 3×3 diagonal matrix is [ 6 0 0 0 5 0 0 0 4 ] {\displaystyle \left[{\begin{smallmatrix}6&0&0\\0&5&0\\0&0&4\end{smallmatrix}}\right]} . An identity matrix of any size, or any multiple of it is a diagonal matrix called a scalar matrix , for example, [ 0.5 0 0 0.5 ] {\displaystyle \left[{\begin{smallmatrix}0.5&0\\0&0.5\end{smallmatrix}}\right]} . In geometry , a diagonal matrix may be used as a scaling matrix , since matrix multiplication with it results in changing scale (size) and possibly also shape ; only a scalar matrix results in uniform change in scale. As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix D = ( d i , j ) with n columns and n rows is diagonal if ∀ i , j ∈ { 1 , 2 , … , n } , i ≠ j ⟹ d i , j = 0. {\displaystyle \forall i,j\in \{1,2,\ldots ,n\},i\neq j\implies d_{i,j}=0.} However, the main diagonal entries are unrestricted. The term diagonal matrix may sometimes refer to a rectangular diagonal matrix , which is an m -by- n matrix with all the entries not of the form d i , i being zero. For example: [ 1 0 0 0 4 0 0 0 − 3 0 0 0 ] or [ 1 0 0 0 0 0 4 0 0 0 0 0 − 3 0 0 ] {\displaystyle {\begin{bmatrix}1&0&0\\0&4&0\\0&0&-3\\0&0&0\\\end{bmatrix}}\quad {\text{or}}\quad {\begin{bmatrix}1&0&0&0&0\\0&4&0&0&0\\0&0&-3&0&0\end{bmatrix}}} More often, however, diagonal matrix refers to square matrices, which can be specified explicitly as a square diagonal matrix . A square diagonal matrix is a symmetric matrix , so this can also be called a symmetric diagonal matrix . The following matrix is square diagonal matrix: [ 1 0 0 0 4 0 0 0 − 2 ] {\displaystyle {\begin{bmatrix}1&0&0\\0&4&0\\0&0&-2\end{bmatrix}}} If the entries are real numbers or complex numbers , then it is a normal matrix as well. In the remainder of this article we will consider only square diagonal matrices, and refer to them simply as "diagonal matrices". A diagonal matrix D can be constructed from a vector a = [ a 1 … a n ] T {\displaystyle \mathbf {a} ={\begin{bmatrix}a_{1}&\dots &a_{n}\end{bmatrix}}^{\textsf {T}}} using the diag {\displaystyle \operatorname {diag} } operator: D = diag ⁡ ( a 1 , … , a n ) . {\displaystyle \mathbf {D} =\operatorname {diag} (a_{1},\dots ,a_{n}).} This may be written more compactly as D = diag ⁡ ( a ) {\displaystyle \mathbf {D} =\operatorname {diag} (\mathbf {a} )} . The same operator is also used to represent block diagonal matrices as A = diag ⁡ ( A 1 , … , A n ) {\displaystyle \mathbf {A} =\operatorname {diag} (\mathbf {A} _{1},\dots ,\mathbf {A} _{n})} where each argument A i is a matrix. The diag operator may be written as diag ⁡ ( a ) = ( a 1 T ) ∘ I , {\displaystyle \operatorname {diag} (\mathbf {a} )=\left(\mathbf {a} \mathbf {1} ^{\textsf {T}}\right)\circ \mathbf {I} ,} where ∘ {\displaystyle \circ } represents the Hadamard product , and 1 is a constant vector with elements 1. 
The inverse matrix-to-vector diag operator is sometimes denoted by the identically named diag ⁡ ( D ) = [ a 1 … a n ] T , {\displaystyle \operatorname {diag} (\mathbf {D} )={\begin{bmatrix}a_{1}&\dots &a_{n}\end{bmatrix}}^{\textsf {T}},} where the argument is now a matrix, and the result is a vector of its diagonal entries. The following property holds: diag ⁡ ( A B ) = ∑ j ( A ∘ B T ) i j = ( A ∘ B T ) 1 . {\displaystyle \operatorname {diag} (\mathbf {A} \mathbf {B} )=\sum _{j}\left(\mathbf {A} \circ \mathbf {B} ^{\textsf {T}}\right)_{ij}=\left(\mathbf {A} \circ \mathbf {B} ^{\textsf {T}}\right)\mathbf {1} .} A diagonal matrix with equal diagonal entries is a scalar matrix ; that is, a scalar multiple λ of the identity matrix I . Its effect on a vector is scalar multiplication by λ . For example, a 3×3 scalar matrix has the form: [ λ 0 0 0 λ 0 0 0 λ ] ≡ λ I 3 {\displaystyle {\begin{bmatrix}\lambda &0&0\\0&\lambda &0\\0&0&\lambda \end{bmatrix}}\equiv \lambda {\boldsymbol {I}}_{3}} The scalar matrices are the center of the algebra of matrices: that is, they are precisely the matrices that commute with all other square matrices of the same size. [ a ] By contrast, over a field (like the real numbers), a diagonal matrix with all diagonal elements distinct only commutes with diagonal matrices (its centralizer is the set of diagonal matrices). That is because if a diagonal matrix D = diag ⁡ ( a 1 , … , a n ) {\displaystyle \mathbf {D} =\operatorname {diag} (a_{1},\dots ,a_{n})} has a i ≠ a j , {\displaystyle a_{i}\neq a_{j},} then given a matrix M with m i j ≠ 0 , {\displaystyle m_{ij}\neq 0,} the ( i , j ) term of the products are: ( D M ) i j = a i m i j {\displaystyle (\mathbf {DM} )_{ij}=a_{i}m_{ij}} and ( M D ) i j = m i j a j , {\displaystyle (\mathbf {MD} )_{ij}=m_{ij}a_{j},} and a j m i j ≠ m i j a i {\displaystyle a_{j}m_{ij}\neq m_{ij}a_{i}} (since one can divide by m ij ), so they do not commute unless the off-diagonal terms are zero. [ b ] Diagonal matrices where the diagonal entries are not all equal or all distinct have centralizers intermediate between the whole space and only diagonal matrices. [ 1 ] For an abstract vector space V (rather than the concrete vector space K n ), the analog of scalar matrices are scalar transformations . This is true more generally for a module M over a ring R , with the endomorphism algebra End( M ) (algebra of linear operators on M ) replacing the algebra of matrices. Formally, scalar multiplication is a linear map, inducing a map R → End ⁡ ( M ) , {\displaystyle R\to \operatorname {End} (M),} (from a scalar λ to its corresponding scalar transformation, multiplication by λ ) exhibiting End( M ) as a R - algebra . For vector spaces, the scalar transforms are exactly the center of the endomorphism algebra, and, similarly, scalar invertible transforms are the center of the general linear group GL( V ) . The former is more generally true free modules M ≅ R n , {\displaystyle M\cong R^{n},} for which the endomorphism algebra is isomorphic to a matrix algebra. Multiplying a vector by a diagonal matrix multiplies each of the terms by the corresponding diagonal entry. Given a diagonal matrix D = diag ⁡ ( a 1 , … , a n ) {\displaystyle \mathbf {D} =\operatorname {diag} (a_{1},\dots ,a_{n})} and a vector v = [ x 1 ⋯ x n ] T {\displaystyle \mathbf {v} ={\begin{bmatrix}x_{1}&\dotsm &x_{n}\end{bmatrix}}^{\textsf {T}}} , the product is: D v = diag ⁡ ( a 1 , … , a n ) [ x 1 ⋮ x n ] = [ a 1 ⋱ a n ] [ x 1 ⋮ x n ] = [ a 1 x 1 ⋮ a n x n ] . 
{\displaystyle \mathbf {D} \mathbf {v} =\operatorname {diag} (a_{1},\dots ,a_{n}){\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}={\begin{bmatrix}a_{1}\\&\ddots \\&&a_{n}\end{bmatrix}}{\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}={\begin{bmatrix}a_{1}x_{1}\\\vdots \\a_{n}x_{n}\end{bmatrix}}.} This can be expressed more compactly by using a vector instead of a diagonal matrix, d = [ a 1 ⋯ a n ] T {\displaystyle \mathbf {d} ={\begin{bmatrix}a_{1}&\dotsm &a_{n}\end{bmatrix}}^{\textsf {T}}} , and taking the Hadamard product of the vectors (entrywise product), denoted d ∘ v {\displaystyle \mathbf {d} \circ \mathbf {v} } : D v = d ∘ v = [ a 1 ⋮ a n ] ∘ [ x 1 ⋮ x n ] = [ a 1 x 1 ⋮ a n x n ] . {\displaystyle \mathbf {D} \mathbf {v} =\mathbf {d} \circ \mathbf {v} ={\begin{bmatrix}a_{1}\\\vdots \\a_{n}\end{bmatrix}}\circ {\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}={\begin{bmatrix}a_{1}x_{1}\\\vdots \\a_{n}x_{n}\end{bmatrix}}.} This is mathematically equivalent, but avoids storing all the zero terms of this sparse matrix . This product is thus used in machine learning , such as computing products of derivatives in backpropagation or multiplying IDF weights in TF-IDF , [ 2 ] since some BLAS frameworks, which multiply matrices efficiently, do not include Hadamard product capability directly. [ 3 ] The operations of matrix addition and matrix multiplication are especially simple for diagonal matrices. Write diag( a 1 , ..., a n ) for a diagonal matrix whose diagonal entries starting in the upper left corner are a 1 , ..., a n . Then, for addition , we have diag ⁡ ( a 1 , … , a n ) + diag ⁡ ( b 1 , … , b n ) = diag ⁡ ( a 1 + b 1 , … , a n + b n ) {\displaystyle \operatorname {diag} (a_{1},\,\ldots ,\,a_{n})+\operatorname {diag} (b_{1},\,\ldots ,\,b_{n})=\operatorname {diag} (a_{1}+b_{1},\,\ldots ,\,a_{n}+b_{n})} and for matrix multiplication , diag ⁡ ( a 1 , … , a n ) diag ⁡ ( b 1 , … , b n ) = diag ⁡ ( a 1 b 1 , … , a n b n ) . {\displaystyle \operatorname {diag} (a_{1},\,\ldots ,\,a_{n})\operatorname {diag} (b_{1},\,\ldots ,\,b_{n})=\operatorname {diag} (a_{1}b_{1},\,\ldots ,\,a_{n}b_{n}).} The diagonal matrix diag( a 1 , ..., a n ) is invertible if and only if the entries a 1 , ..., a n are all nonzero. In this case, we have diag ⁡ ( a 1 , … , a n ) − 1 = diag ⁡ ( a 1 − 1 , … , a n − 1 ) . {\displaystyle \operatorname {diag} (a_{1},\,\ldots ,\,a_{n})^{-1}=\operatorname {diag} (a_{1}^{-1},\,\ldots ,\,a_{n}^{-1}).} In particular, the diagonal matrices form a subring of the ring of all n -by- n matrices. Multiplying an n -by- n matrix A from the left with diag( a 1 , ..., a n ) amounts to multiplying the i -th row of A by a i for all i ; multiplying the matrix A from the right with diag( a 1 , ..., a n ) amounts to multiplying the i -th column of A by a i for all i . As explained in determining coefficients of operator matrix , there is a special basis, e 1 , ..., e n , for which the matrix A takes the diagonal form. Hence, in the defining equation A e j = ∑ i a i , j e i {\textstyle \mathbf {Ae} _{j}=\sum _{i}a_{i,j}\mathbf {e} _{i}} , all coefficients a i, j with i ≠ j are zero, leaving only one term per sum. The surviving diagonal elements, a i, j , are known as eigenvalues and designated with λ i in the equation, which reduces to A e i = λ i e i . {\displaystyle \mathbf {Ae} _{i}=\lambda _{i}\mathbf {e} _{i}.} The resulting equation is known as eigenvalue equation [ 4 ] and used to derive the characteristic polynomial and, further, eigenvalues and eigenvectors . 
In other words, the eigenvalues of diag( λ 1 , ..., λ n ) are λ 1 , ..., λ n with associated eigenvectors of e 1 , ..., e n . Diagonal matrices occur in many areas of linear algebra. Because of the simple description of the matrix operation and eigenvalues/eigenvectors given above, it is typically desirable to represent a given matrix or linear map by a diagonal matrix. In fact, a given n -by- n matrix A is similar to a diagonal matrix (meaning that there is a matrix X such that X −1 AX is diagonal) if and only if it has n linearly independent eigenvectors. Such matrices are said to be diagonalizable . Over the field of real or complex numbers, more is true. The spectral theorem says that every normal matrix is unitarily similar to a diagonal matrix (if AA ∗ = A ∗ A then there exists a unitary matrix U such that UAU ∗ is diagonal). Furthermore, the singular value decomposition implies that for any matrix A , there exist unitary matrices U and V such that U ∗ AV is diagonal with positive entries. In operator theory , particularly the study of PDEs , operators are particularly easy to understand and PDEs easy to solve if the operator is diagonal with respect to the basis with which one is working; this corresponds to a separable partial differential equation . Therefore, a key technique to understanding operators is a change of coordinates—in the language of operators, an integral transform —which changes the basis to an eigenbasis of eigenfunctions : which makes the equation separable. An important example of this is the Fourier transform , which diagonalizes constant coefficient differentiation operators (or more generally translation invariant operators), such as the Laplacian operator, say, in the heat equation . Especially easy are multiplication operators , which are defined as multiplication by (the values of) a fixed function–the values of the function at each point correspond to the diagonal entries of a matrix.
https://en.wikipedia.org/wiki/Scalar_transformation
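Several of the diagonal-matrix identities stated above (Dv = d ∘ v, entrywise products and inverses of diagonal matrices, and commutation of scalar matrices with arbitrary square matrices) are easy to verify numerically; the NumPy sketch below uses arbitrary example values:

```python
import numpy as np

a = np.array([2.0, 3.0, 5.0])
b = np.array([1.0, 4.0, 6.0])
v = np.array([7.0, 8.0, 9.0])

D = np.diag(a)   # diag(a1, ..., an) as a square diagonal matrix

# Multiplying a vector by a diagonal matrix == entrywise (Hadamard) product with its diagonal.
print(np.allclose(D @ v, a * v))                              # True

# Addition and multiplication act entrywise on the diagonals.
print(np.allclose(np.diag(a) @ np.diag(b), np.diag(a * b)))   # True

# diag(a) is invertible iff every a_i is nonzero, with inverse diag(1/a_1, ..., 1/a_n).
print(np.allclose(np.linalg.inv(D), np.diag(1.0 / a)))        # True

# A scalar matrix lambda*I commutes with every square matrix of the same size.
lam = 2.5
M = np.arange(9.0).reshape(3, 3)
print(np.allclose((lam * np.eye(3)) @ M, M @ (lam * np.eye(3))))   # True
```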
In theoretical physics , a scalar–tensor theory is a field theory that includes both a scalar field and a tensor field to represent a certain interaction. For example, the Brans–Dicke theory of gravitation uses both a scalar field and a tensor field to mediate the gravitational interaction . Modern physics tries to derive all physical theories from as few principles as possible. In this way, Newtonian mechanics as well as quantum mechanics are derived from Hamilton 's principle of least action . In this approach, the behavior of a system is not described via forces , but by functions which describe the energy of the system. Most important are the energetic quantities known as the Hamiltonian function and the Lagrangian function. Their derivatives in space are known as Hamiltonian density and the Lagrangian density . Going to these quantities leads to the field theories. Modern physics uses field theories to explain reality. These fields can be scalar , vectorial or tensorial . An example of a scalar field is the temperature field. An example of a vector field is the wind velocity field. An example of a tensor field is the stress tensor field in a stressed body, used in continuum mechanics . In physics, forces (as vectorial quantities) are given as the derivative (gradient) of scalar quantities named potentials. In classical physics before Einstein , gravitation was given in the same way, as consequence of a gravitational force (vectorial), given through a scalar potential field, dependent of the mass of the particles. Thus, Newtonian gravity is called a scalar theory . The gravitational force is dependent of the distance r of the massive objects to each other (more exactly, their centre of mass). Mass is a parameter and space and time are unchangeable. Einstein's theory of gravity, the General Relativity (GR) is of another nature. It unifies space and time in a 4-dimensional manifold called space-time. In GR there is no gravitational force, instead, the actions we ascribed to being a force are the consequence of the local curvature of space-time. That curvature is defined mathematically by the so-called metric , which is a function of the total energy, including mass, in the area. The derivative of the metric is a function that approximates the classical Newtonian force in most cases. The metric is a tensorial quantity of degree 2 (it can be given as a 4x4 matrix, an object carrying 2 indices). Another possibility to explain gravitation in this context is by using both tensor (of degree n>1) and scalar fields, i.e. so that gravitation is given neither solely through a scalar field nor solely through a metric. These are scalar–tensor theories of gravitation. The field theoretical start of General Relativity is given through the Lagrange density. It is a scalar and gauge invariant (look at gauge theories ) quantity dependent on the curvature scalar R. This Lagrangian, following Hamilton's principle, leads to the field equations of Hilbert and Einstein . If in the Lagrangian the curvature (or a quantity related to it) is multiplied with a square scalar field, field theories of scalar–tensor theories of gravitation are obtained. In them, the gravitational constant of Newton is no longer a real constant but a quantity dependent of the scalar field. 
An action of such a gravitational scalar–tensor theory can be written as follows: where g {\displaystyle g} is the metric determinant, R {\displaystyle R} is the Ricci scalar constructed from the metric g μ ν {\displaystyle g_{\mu \nu }} , μ {\displaystyle \mu } is a coupling constant with the dimensions L − 1 M − 1 T 2 {\displaystyle L^{-1}M^{-1}T^{2}} , V ( Φ ) {\displaystyle V(\Phi )} is the scalar-field potential, L m {\displaystyle {\mathcal {L}}_{m}} is the material Lagrangian and Ψ {\displaystyle \Psi } represents the non-gravitational fields. Here, the Brans–Dicke parameter ω {\displaystyle \omega } has been generalized to a function. Although μ {\displaystyle \mu } is often written as being 8 π G / c 4 {\displaystyle 8\pi G/c^{4}} , one has to keep in mind that the fundamental constant G {\displaystyle G} there, is not the constant of gravitation that can be measured with, for instance, Cavendish type experiments . Indeed, the empirical gravitational constant is generally no longer a constant in scalar–tensor theories, but a function of the scalar field Φ {\displaystyle \Phi } . The metric and scalar-field equations respectively write: and Also, the theory satisfies the following conservation equation, implying that test-particles follow space-time geodesics such as in general relativity: where T μ σ {\displaystyle T^{\mu \sigma }} is the stress-energy tensor defined as Developing perturbatively the theory defined by the previous action around a Minkowskian background, and assuming non-relativistic gravitational sources, the first order gives the Newtonian approximation of the theory. In this approximation, and for a theory without potential, the metric writes with U {\displaystyle U} satisfying the following usual Poisson equation at the lowest order of the approximation: where ρ {\displaystyle \rho } is the density of the gravitational source and G e f f = 2 ω 0 + 4 2 ω 0 + 3 G Φ 0 {\displaystyle G_{\mathrm {eff} }={\frac {2\omega _{0}+4}{2\omega _{0}+3}}{\frac {G}{\Phi _{0}}}} (the subscript 0 {\displaystyle _{0}} indicates that the corresponding value is taken at present cosmological time and location). Therefore, the empirical gravitational constant is a function of the present value of the scalar-field background Φ 0 {\displaystyle \Phi _{0}} and therefore theoretically depends on time and location. [ 1 ] However, no deviation from the constancy of the Newtonian gravitational constant has been measured, [ 2 ] implying that the scalar-field background Φ 0 {\displaystyle \Phi _{0}} is pretty stable over time. Such a stability is not theoretically generally expected but can be theoretically explained by several mechanisms. [ 3 ] Developing the theory at the next level leads to the so-called first post-Newtonian order. For a theory without potential and in a system of coordinates respecting the weak isotropy condition [ 4 ] (i.e., g i j ∝ δ i j + O ( c − 3 ) {\displaystyle g_{ij}\propto \delta _{ij}+{\mathcal {O}}(c^{-3})\,} ), the metric takes the following form: with [ 5 ] where J {\displaystyle J} is a function depending on the coordinate gauge It corresponds to the remaining diffeomorphism degree of freedom that is not fixed by the weak isotropy condition. The sources are defined as the so-called post-Newtonian parameters are and finally the empirical gravitational constant G e f f {\displaystyle G_{\mathrm {eff} }} is given by where G {\displaystyle G} is the (true) constant that appears in the coupling constant μ {\displaystyle \mu } defined previously. 
Current observations indicate that γ − 1 = ( 2.1 ± 2.3 ) × 10 − 5 {\displaystyle \gamma -1=(2.1\pm 2.3)\times 10^{-5}} , [ 2 ] which means that ω 0 > 40000 {\displaystyle \omega _{0}>40000} . Although explaining such a value in the context of the original Brans–Dicke theory is impossible, Damour and Nordtvedt found that the field equations of the general theory often lead to an evolution of the function ω {\displaystyle \omega } toward infinity during the evolution of the universe. [ 3 ] Hence, according to them, the current high value of the function ω {\displaystyle \omega } could be a simple consequence of the evolution of the universe. Seven years of data from the NASA MESSENGER mission constraints the post-Newtonian parameter β {\displaystyle \beta } for Mercury's perihelion shift to | β − 1 | < 1.6 × 10 − 5 {\displaystyle |\beta -1|<1.6\times 10^{-5}} . [ 6 ] Both constraints show that while the theory is still a potential candidate to replace general relativity, the scalar field must be very weakly coupled in order to explain current observations. Generalized scalar-tensor theories have also been proposed as explanation for the accelerated expansion of the universe but the measurement of the speed of gravity with the gravitational wave event GW170817 has ruled this out. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] After the postulation of the General Relativity of Einstein and Hilbert, Theodor Kaluza and Oskar Klein proposed in 1917 a generalization in a 5-dimensional manifold: Kaluza–Klein theory . This theory possesses a 5-dimensional metric (with a compactified and constant 5th metric component, dependent on the gauge potential ) and unifies gravitation and electromagnetism , i.e. there is a geometrization of electrodynamics. This theory was modified in 1955 by P. Jordan in his Projective Relativity theory, in which, following group-theoretical reasonings, Jordan took a functional 5th metric component that led to a variable gravitational constant G . In his original work, he introduced coupling parameters of the scalar field, to change energy conservation as well, according to the ideas of Dirac . Following the Conform Equivalence theory , multidimensional theories of gravity are conform equivalent to theories of usual General Relativity in 4 dimensions with an additional scalar field. One case of this is given by Jordan's theory, which, without breaking energy conservation (as it should be valid, following from microwave background radiation being of a black body), is equivalent to the theory of C. Brans and Robert H. Dicke of 1961, so that it is usually spoken about the Brans–Dicke theory . The Brans–Dicke theory follows the idea of modifying Hilbert-Einstein theory to be compatible with Mach's principle . For this, Newton's gravitational constant had to be variable, dependent of the mass distribution in the universe, as a function of a scalar variable, coupled as a field in the Lagrangian. It uses a scalar field of infinite length scale (i.e. long-ranged), so, in the language of Yukawa 's theory of nuclear physics, this scalar field is a massless field . This theory becomes Einsteinian for high values for the parameter of the scalar field. In 1979, R. Wagoner proposed a generalization of scalar–tensor theories using more than one scalar field coupled to the scalar curvature. JBD theories although not changing the geodesic equation for test particles, change the motion of composite bodies to a more complex one. 
The coupling of a universal scalar field directly to the gravitational field gives rise to potentially observable effects for the motion of matter configurations to which gravitational energy contributes significantly. This is known as the "Dicke–Nordtvedt" effect, which leads to possible violations of the Strong as well as the Weak Equivalence Principle for extended masses. JBD-type theories with short-ranged scalar fields use, according to Yukawa's theory, massive scalar fields . The first of this theories was proposed by A. Zee in 1979. He proposed a Broken-Symmetric Theory of Gravitation, combining the idea of Brans and Dicke with the one of Symmetry Breakdown, which is essential within the Standard Model SM of elementary particles , where the so-called Symmetry Breakdown leads to mass generation (as a consequence of particles interacting with the Higgs field). Zee proposed the Higgs field of SM as scalar field and so the Higgs field to generate the gravitational constant. The interaction of the Higgs field with the particles that achieve mass through it is short-ranged (i.e. of Yukawa-type) and gravitational-like (one can get a Poisson equation from it), even within SM, so that Zee's idea was taken 1992 for a scalar–tensor theory with Higgs field as scalar field with Higgs mechanism. There, the massive scalar field couples to the masses, which are at the same time the source of the scalar Higgs field, which generates the mass of the elementary particles through Symmetry Breakdown. For vanishing scalar field, this theories usually go through to standard General Relativity and because of the nature of the massive field, it is possible for such theories that the parameter of the scalar field (the coupling constant) does not have to be as high as in standard JBD theories. Though, it is not clear yet which of these models explains better the phenomenology found in nature nor if such scalar fields are really given or necessary in nature. Nevertheless, JBD theories are used to explain inflation (for massless scalar fields then it is spoken of the inflaton field) after the Big Bang as well as the quintessence . Further, they are an option to explain dynamics usually given through the standard cold dark matter models, as well as MOND , Axions (from Breaking of a Symmetry, too), MACHOS ,... A generic prediction of all string theory models is that the spin-2 graviton has a spin-0 partner called the dilaton . [ 12 ] Hence, string theory predicts that the actual theory of gravity is a scalar–tensor theory rather than general relativity. However, the precise form of such a theory is not currently known because one does not have the mathematical tools in order to address the corresponding non-perturbative calculations. Besides, the precise effective 4-dimensional form of the theory is also confronted to the so-called landscape issue .
https://en.wikipedia.org/wiki/Scalar–tensor_theory
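The Newtonian limit quoted above gives an effective gravitational constant $G_{\mathrm{eff}} = \frac{2\omega_0+4}{2\omega_0+3}\,\frac{G}{\Phi_0}$. The short Python sketch below (purely illustrative) evaluates the correction factor and shows that, for the observational bound $\omega_0 > 40000$, it exceeds 1 by only about $10^{-5}$, consistent with the statement above that the theory becomes Einsteinian for large values of the coupling parameter:

```python
def g_eff_factor(omega0):
    """Correction factor multiplying G / Phi_0 in the Newtonian limit quoted above."""
    return (2.0 * omega0 + 4.0) / (2.0 * omega0 + 3.0)

for omega0 in (1.0, 10.0, 1e3, 4e4, 1e6):
    print(f"omega_0 = {omega0:g}:  factor = {g_eff_factor(omega0):.8f}")

# For the observational bound omega_0 > 40000 the factor exceeds 1 by only ~1.2e-5,
# i.e. the effective constant is very close to its omega_0 -> infinity (Einsteinian) limit.
print(g_eff_factor(4e4) - 1.0)
```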
Scalar–tensor–vector gravity ( STVG ) [ 1 ] is a modified theory of gravity developed by John Moffat , a researcher at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario . The theory is also often referred to by the acronym MOG ( MO dified G ravity ). Scalar–tensor–vector gravity theory, [ 2 ] also known as MOdified Gravity (MOG), is based on an action principle and postulates the existence of a vector field , while elevating the three constants of the theory to scalar fields . In the weak-field approximation , STVG produces a Yukawa -like modification of the gravitational force due to a point source. Intuitively, this result can be described as follows: far from a source gravity is stronger than the Newtonian prediction, but at shorter distances, it is counteracted by a repulsive fifth force due to the vector field. STVG has been used successfully to explain galaxy rotation curves , [ 3 ] the mass profiles of galaxy clusters, [ 4 ] gravitational lensing in the Bullet Cluster , [ 5 ] and cosmological observations [ 6 ] without the need for dark matter . On a smaller scale, in the Solar System, STVG predicts no observable deviation from general relativity. [ 7 ] The theory may also offer an explanation for the origin of inertia . [ 8 ] STVG is formulated using the action principle. In the following discussion, a metric signature of [ + , − , − , − ] {\displaystyle [+,-,-,-]} will be used; the speed of light is set to c = 1 {\displaystyle c=1} , and we are using the following definition for the Ricci tensor : We begin with the Einstein–Hilbert Lagrangian : where R {\displaystyle R} is the trace of the Ricci tensor, G {\displaystyle G} is the gravitational constant, g {\displaystyle g} is the determinant of the metric tensor g α β {\displaystyle g_{\alpha \beta }} , while Λ {\displaystyle \Lambda } is the cosmological constant. We introduce the Maxwell-Proca Lagrangian for the STVG covector field ϕ α {\displaystyle \phi _{\alpha }} : where B α β = ∂ α ϕ β − ∂ β ϕ α = ( d ϕ ) α β {\displaystyle B_{\alpha \beta }=\partial _{\alpha }\phi _{\beta }-\partial _{\beta }\phi _{\alpha }=(\mathrm {d} \phi )_{\alpha \beta }} is the field strength of ϕ α {\displaystyle \phi _{\alpha }} (given by the exterior derivative ), μ {\displaystyle \mu } is the mass of the vector field, ω {\displaystyle \omega } characterizes the strength of the coupling between the fifth force and matter, and V ϕ {\displaystyle V_{\phi }} is a self-interaction potential. The three constants of the theory, G , μ , {\displaystyle G,\mu ,} and ω , {\displaystyle \omega ,} are promoted to scalar fields by introducing associated kinetic and potential terms in the Lagrangian density: where V G , V μ , {\displaystyle V_{G},V_{\mu },} and V ω {\displaystyle V_{\omega }} are the self-interaction potentials associated with the scalar fields. The STVG action integral takes the form where L M {\displaystyle {\mathcal {L}}_{M}} is the ordinary matter Lagrangian density. The field equations of STVG can be developed from the action integral using the variational principle . First a test particle Lagrangian is postulated in the form where m {\displaystyle m} is the test particle mass, α {\displaystyle \alpha } is a factor representing the nonlinearity of the theory, q 5 {\displaystyle q_{5}} is the test particle's fifth-force charge, and u μ = d x μ / d s {\displaystyle u^{\mu }=dx^{\mu }/ds} is its four-velocity. 
Assuming that the fifth-force charge is proportional to mass, i.e., q 5 = κ m , {\displaystyle q_{5}=\kappa m,} the value of κ = G N / ω {\displaystyle \kappa ={\sqrt {G_{N}/\omega }}} is determined and the following equation of motion is obtained in the spherically symmetric, static gravitational field of a point mass of mass M {\displaystyle M} : where G N {\displaystyle G_{N}} is Newton's constant of gravitation. Further study of the field equations allows a determination of α {\displaystyle \alpha } and μ {\displaystyle \mu } for a point gravitational source of mass M {\displaystyle M} in the form [ 9 ] where G ∞ ≃ 20 G N {\displaystyle G_{\infty }\simeq 20G_{N}} is determined from cosmological observations, while for the constants D {\displaystyle D} and E {\displaystyle E} galaxy rotation curves yield the following values: where M ⊙ {\displaystyle M_{\odot }} is the mass of the Sun . These results form the basis of a series of calculations that are used to confront the theory with observation. STVG/MOG has been applied successfully to a range of astronomical, astrophysical, and cosmological phenomena. On the scale of the Solar System, the theory predicts no deviation [ 7 ] from the results of Newton and Einstein. This is also true for star clusters containing no more than a few million solar masses. [ 7 ] The theory accounts for the rotation curves of spiral galaxies, [ 3 ] correctly reproducing the Tully–Fisher law . [ 9 ] STVG is in good agreement with the mass profiles of galaxy clusters. [ 4 ] STVG can also account for key cosmological observations, including: [ 6 ] A 2017 article on Forbes by Ethan Siegel states that the Bullet Cluster still "proves dark matter exists, but not for the reason most physicists think". There he argues in favor of dark matter over non-local gravity theories, such as STVG/MOG. Observations show that in "undisturbed" galaxy clusters the reconstructed mass from gravitational lensing is located where matter is distributed, and a separation of matter from gravitation only seems to appear after a collision or interaction has taken place. According to Ethan Siegel: "Adding dark matter makes this work, but non-local gravity would make differing before-and-after predictions that can't both match up, simultaneously, with what we observe." [ 10 ]
https://en.wikipedia.org/wiki/Scalar–tensor–vector_gravity
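The qualitative behaviour described above, gravity stronger than the Newtonian prediction far from a point source but counteracted by a repulsive Yukawa-type fifth force at shorter distances, can be illustrated with a generic Yukawa-modified acceleration. The functional form and the parameter values in the sketch below are assumptions made only for illustration, not the actual STVG expressions for α and μ (those are in the cited references); the placeholder α ≈ 19 is chosen so that the large-distance enhancement is roughly the $G_\infty \simeq 20\,G_N$ quoted above.

```python
import numpy as np

def newtonian(r, GN_M=1.0):
    """Newtonian acceleration from a point source (G_N * M set to 1 for illustration)."""
    return GN_M / r**2

def yukawa_modified(r, GN_M=1.0, alpha=19.0, mu=0.1):
    """Generic Yukawa-like modification: enhanced gravity far from the source, with the
    enhancement cancelled by a repulsive short-range term. alpha and mu are placeholders."""
    return (GN_M / r**2) * (1.0 + alpha * (1.0 - np.exp(-mu * r) * (1.0 + mu * r)))

r = np.array([0.1, 1.0, 10.0, 100.0])
print(yukawa_modified(r) / newtonian(r))
# -> close to 1 at small r (Newtonian/GR regime), approaching 1 + alpha at large r.
```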
A scale-down bioreactor is a miniature model designed to mimic or reproduce large-scale bio-processes or specific process steps on a smaller scale. These models play an important role during process development stage by fine-tuning the minute parameters and steps without the need for substantial investments in both materials and consumables. [ 1 ] [ 2 ] Vessel geometry like aspect ratios, impeller designs, and sparger placements should be nearly identical between the small and large scales. For this purpose computer fluid dynamics (CFD) are used as they can be employed to investigate the scalability of mixing processes from small-scale models to larger production scales. Scientists use outcome of these studies on scale down systems to derive and facilitate the transition from laboratory-scale studies to industrial large-scale conditions. [ 3 ] Stirred tank bioreactors are systems further developed to two compartment systems to provide a fundamental structure for Scale down bioreactors. Two commonly used developed systems are cells which are circulated between either two stirred tank reactors (STR–STR), or from a STR through a plug flow reactor (STR–PFR). The application of coupled stirred-tank reactors in scale-down models is a powerful technical model for simulating and studying the complex conditions of large-scale industrial bioreactors. It provides a controlled environment to replicate non-homogeneous conditions, these models offer valuable insights into optimizing bioprocesses, ensuring consistent product quality, and reducing costs and time in biotechnological production. Co-cultures, meaning that more than two microbes complementing cultivation can be conducted. One such recent study conducted for two compartment bioreactor is the production of Violecin. [ 4 ] Scale down reactors can be two compartment bioreactor. In a two-compartment bioreactor setup, the first compartment can be operated as an STR for initial growth/biomass buildup, while the second compartment functions as a PFR for the production phase with a defined residence time. Fusing a mixed stirred tank reactor (STR) with a plug flow reactor (PFR) in a two-compartment system offers significant options in flow characteristics to meet specific process requirements. This configuration allows for precise control over various factors, including improved bioprocess results by enhancing residence time distribution and substrate gradients. The integration of this system results in a portion of the culture being exposed to varying environmental cues, such as altered mixing times, nutrient deprivation, aeration, pH, or temperature, before being recirculated in to the main STR. The formed perturbations simulate transient stresses encountered in large-scale industrial reactors. The residence time in the PFR zone is calibrated to match the typical timescale experienced in large scale industrial bioprocesses. This system is further optimized to explore shorter timescales and they are termed dynamic microfluidic systems. Computational fluid dynamics (CFD) simulations can predict and model the flow patterns in STR-PFR complex systems. [ 5 ] During process development, a wide range of operating conditions should be deployed, in order to identify the optimal parameter ranges, and is crucial to achieve successful large scale bioprocesses . However, due to number of experiments in large-scale fermenters can be time-consuming, resource-intensive, and cost. 
Hence, smaller scale-down systems, in the form of miniaturized bioreactors ranging from micro liters to milliliters in scale. [ 6 ] Miniaturized bioreactors enable researchers to conduct numerous experiments simultaneously, exploring various combinations of process parameters such as temperature , pH , agitation rates, and nutrient concentrations. These models facilitate efficient process optimization at a small scale, the insights gained from these experiments can be seamlessly transferred to larger-scale systems. The scalability of the process parameters and operating conditions identified through scale-down models ensures a smooth transition to pilot and commercial-scale production. This high-throughput approach allows for rapid screening and identification of optimal operating conditions, which would be impractical and costly with larger-scale systems. By working at a smaller scale, these miniaturized bioreactors significantly reduce the consumption of raw materials, media components, and other consumables needed for reactor fermentations runs. This resource-efficient approach not only minimizes costs but also aligns with sustainable practices, reducing waste and environmental impact. [ 7 ] Bioprocess Engineering strategies are applied to upgrade and enhance the overall productivity of the cultivation experiments. Some important parameters like oxygen transfer rate (OTR), dissolved oxygen concentration, superficial gas velocity, volume‐specific power input P/V, mixing time, could be modified and optimized to obtain high titre formation according to the desired requirements. These titre values could be comparable to values obtained in large scale industrial bioprocesses. [ 8 ] Microbial strain Engineering and cell factory engineering is a developing area of interest and important in determining the outcome of large scale fermentation . With the development in metabolic engineering and synthetic biology new strains are constructed, which need to be tested in large scale like conditions. [ 9 ] This is an instance where scale down bioreactors could be coupled with microbial strain engineering to broaden the scope of research and bridge the gap between two interdisciplinary fields of studies. By developing and applying computational fluid dynamics simulations, process scientists and engineers can gain valuable insights into the fluid flow patterns and mixing dynamics within various geometries.The ability to run multiple experiments in parallel, combined with the reduced resource requirements, translates into accelerated process development timelines. Researchers can quickly iterate through various conditions, analyze results, and make informed decisions, ultimately shortening the overall development cycle. Two parameters that need to be focused on are the Reynolds number and power number , as non-dimensional values for technical know-how and scaling processes, both upscaling and scale-down processes. [ 10 ] [ 11 ] By understanding this relationship between power number and reynold's number, it becomes possible to predict the power requirements for a given flow regime and impeller configuration. This knowledge is crucial for designing and operating agitated systems at different scales while maintaining consistent mixing performance.
https://en.wikipedia.org/wiki/Scale-down_bioreactor
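As a sketch of how the two dimensionless groups highlighted above are used in practice, the following Python snippet computes an impeller Reynolds number and the power drawn from a power number at a bench scale and at a production scale; the fluid properties, impeller geometries, and the power number are illustrative assumptions, not values from the article:

```python
def impeller_reynolds(rho, N, D, mu):
    """Impeller Reynolds number Re = rho * N * D**2 / mu (N in rev/s, D in m)."""
    return rho * N * D**2 / mu

def power_draw(Np, rho, N, D):
    """Impeller power draw P = Np * rho * N**3 * D**5, from the power number Np."""
    return Np * rho * N**3 * D**5

rho, mu = 1000.0, 1e-3   # water-like broth: density in kg/m^3, viscosity in Pa.s (assumed)
Np = 5.0                 # turbulent power number for a Rushton-type turbine (assumed)

# Illustrative bench-scale vs production-scale impellers (speed N in rev/s, diameter D in m).
for label, N, D in [("bench scale", 10.0, 0.05), ("production scale", 1.5, 0.8)]:
    Re = impeller_reynolds(rho, N, D, mu)
    P = power_draw(Np, rho, N, D)   # divide by liquid volume to get the P/V mentioned above
    print(f"{label}: Re = {Re:.2e}, P = {P:.1f} W")
```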
The scale of a chemical process refers to the rough ranges in mass or volume of a chemical reaction or process that define the appropriate category of chemical apparatus and equipment required to accomplish it, and the concepts, priorities, and economies that operate at each. While the specific terms used—and limits of mass or volume that apply to them—can vary between specific industries, the concepts are used broadly across industry and the fundamental scientific fields that support them. Use of the term "scale" is unrelated to the concept of weighing; rather it is related to cognate terms in mathematics (e.g., geometric scaling , the linear transformation that enlarges or shrinks objects, and scale parameters in probability theory ), and in applied areas (e.g., in the scaling of images in architecture, engineering , cartography , etc.). Practically speaking, the scale of chemical operations also relates to the training required to carry them out, and can be broken out roughly as follows: For instance, the production of the streptomycin -class of antibiotics, which combined biotechnologic and chemical operations, involved use of a 130,000 liter fermenter , an operational scale approximately one million-fold larger than the microbial shake flasks used in the early laboratory scale studies. [ 2 ] [ 3 ] As noted, nomenclature can vary between manufacturing sectors; some industries use the scale terms pilot plant and demonstration plant interchangeably. Apart from defining the category of chemical apparatus and equipment required at each scale, the concepts, priorities and economies that obtain, and the skill-sets needed by the practicing scientists at each, defining scale allows for theoretical work prior to actual plant operations (e.g., defining relevant process parameters used in the numerical simulation of large-scale production processes), and allows economic analyses that ultimately define how manufacturing will proceed. Besides the chemistry and biology expertises involved in scaling designs and decisions, varied aspects of process engineering and mathematical modeling, simulations, and operations research are involved.
https://en.wikipedia.org/wiki/Scale_(chemistry)
In zoology , a scale ( Ancient Greek : λεπίς , romanized : lepís ; Latin : squāma ) is a small rigid plate made out of keratin that grows out of Vertebrate animals ' skin to provide protection. In lepidopterans ( butterflies and moths ), scales are plates on the surface of the insect wing , made out of chitin instead of keratin, and provide coloration. Scales are quite common and have evolved multiple times through convergent evolution , with varying structure and function. Scales are generally classified as part of an organism's integumentary system . There are various types of scales according to the shape and class of an animal. Fish scales are dermally derived, specifically in the mesoderm . This fact distinguishes them from reptile scales paleontologically. Genetically, the same genes involved in tooth and hair development in mammals are also involved in scale development. [ 1 ] True cosmoid scales can only be found on the Sarcopterygians . The inner layer of the scale is made of lamellar bone. On top of this lies a layer of spongy or vascular bone and then a layer of dentine -like material called cosmine . The upper surface is keratin . The coelacanth has modified cosmoid scales that lack cosmine and are thinner than true cosmoid scales. Ganoid scales can be found on gars (family Lepisosteidae ), bichirs , and reedfishes (family Polypteridae ). Ganoid scales are similar to cosmoid scales, but a layer of ganoin lies over the cosmine layer and under the enamel [ clarification needed ] . Ganoin scales are diamond shaped, shiny, and hard. Within the ganoin are guanine compounds, iridescent derivatives of guanine found in a DNA molecule. [ 2 ] The iridescent property of these chemicals provide the ganoin its shine. Placoid scales are found on cartilaginous fish including sharks and stingrays . These scales, also called denticles, are similar in structure to teeth , and have one median spine and two lateral spines. The modern jawed fish ancestors, the jawless ostracoderms and later jawed placoderms , may have had scales with the properties of both placoid and ganoid scales. Leptoid scales are found on higher-order bony fish. As they grow they add concentric layers. They are arranged so as to overlap in a head-to-tail direction, like roof tiles, allowing a smoother flow of water over the body and therefore reducing drag . [ 3 ] They come in two forms: Reptile scale types include: cycloid, granular (which appear bumpy), and keeled (which have a center ridge). Scales usually vary in size, the stouter, larger scales cover parts that are often exposed to physical stress (usually the feet, tail and head), while scales are small around the joints for flexibility. Most snakes have extra broad scales on the belly, each scale covering the belly from side to side. The scales of all reptiles have an epidermal component (what one sees on the surface), but many reptiles, such as crocodilians and turtles, have osteoderms underlying the epidermal scale. Such scales are more properly termed scutes . Snakes, tuataras and many lizards lack osteoderms. All reptilian scales have a dermal papilla underlying the epidermal part, and it is there that the osteoderms, if present, would be formed. Many reptiles possess large scales not supported by osteoderms known as feature scales. The green iguana possesses large feature scales on the ventral sides of its neck, and dorsal spines not supported by osteoderms. 
Many extinct non-avian dinosaurs such as Carnotaurus and Brachylophosaurus are known to possess feature scales from skin impressions. Birds' scales are found mainly on the toes and metatarsus, but may be found further up on the ankle in some birds. The scales and scutes of birds were thought to be homologous to those of reptiles, [ 4 ] but are now agreed to have evolved independently, being degenerate feathers. [ 5 ] [ 6 ] Carcharodontosaurid theropod dinosaur Concavenator , is known to have possessed these feather-derived tarsal scutes. An example of a scaled mammal is the pangolin . Its scales are made of keratin and are used for protection, similar to an armadillo 's armor. They have been convergently evolved, being unrelated to mammals' distant reptile-like ancestors (since therapsids lost scales), except that they use a similar gene. On the other hand, the musky rat-kangaroo has scales on its feet and tail. [ 7 ] The precise nature of its purported scales has not been studied in detail, but they appear to be structurally different from pangolin scales. Anomalures also have scales on their tail undersides. [ 8 ] Foot pad epidermal tissues in most mammal species have been compared to the scales of other vertebrates. They are likely derived from cornification processes or stunted fur much like avian reticulae are derived from stunted feathers. [ 9 ] Butterflies and moths - the order Lepidoptera ( Greek "scale-winged") - have membranous wings covered in delicate, powdery scales, which are modified setae . Each scale consists of a series of tiny stacked platelets of organic material, and butterflies tend to have the scales broad and flattened, while moths tend to have the scales narrower and more hair like. Scales are usually pigmented , but some types of scales are iridescent, without pigments; because the thickness of the platelets is on the same order as the wavelength of visible light the plates lead to structural coloration and iridescence through the physical phenomenon described as thin-film optics . The most common color produced in this fashion is blue , such as in the Morpho butterflies. Some types of spiders also have scales. Spider scales are flattened setae that overlay the surface of the cuticle . They come in a wide variety of shapes, sizes, and colors. At least 13 different spider families are known to possess cuticular scales, although they have only been well described for jumping spiders (Salticidae) and lynx spiders (Oxyopidae). [ 10 ] [ 11 ] Some crustaceans such as Glyptonotus antarcticus have knobbly scales. [ 12 ] Some crayfish have been shown to use antennal scales that are activated in rapid response movements. [ 13 ]
https://en.wikipedia.org/wiki/Scale_(zoology)
Scale analysis (or order-of-magnitude analysis ) is a powerful tool used in the mathematical sciences for the simplification of equations with many terms. First the approximate magnitude of individual terms in the equations is determined. Then some negligibly small terms may be ignored. Consider for example the momentum equation of the Navier–Stokes equations in the vertical coordinate direction of the atmosphere, where R is the Earth's radius, Ω is the frequency of rotation of the Earth, g is gravitational acceleration , φ is latitude, ρ is the density of air and ν is the kinematic viscosity of air (turbulence in the free atmosphere can be neglected). At synoptic scale we can expect horizontal velocities of about U = 10^1 m·s^-1 and vertical velocities of about W = 10^-2 m·s^-1. The horizontal scale is L = 10^6 m and the vertical scale is H = 10^4 m. The typical time scale is T = L/U = 10^5 s. Pressure differences in the troposphere are ΔP = 10^4 Pa and the density of air is ρ = 10^0 kg·m^-3; other physical properties, such as the kinematic viscosity, take their typical atmospheric values. Estimates of the different terms in equation ( A1 ) can be made using these scales. Introducing the scales and their values into equation ( A1 ) shows that all terms except the first and second on the right-hand side are negligibly small. Thus the vertical momentum equation can be simplified to the hydrostatic equilibrium equation, ∂p/∂z = -ρg. Scale analysis is a very useful and widely used tool for solving problems in heat transfer and fluid mechanics, including pressure-driven wall jets, separating flows behind backward-facing steps, jet diffusion flames, and the study of linear and non-linear dynamics. Scale analysis is an effective shortcut for obtaining approximate solutions to equations that are often too complicated to solve exactly. The object of scale analysis is to use the basic principles of convective heat transfer to produce order-of-magnitude estimates for the quantities of interest. When done properly, scale analysis anticipates, within a factor of order one, the expensive results produced by exact analyses. The rules of scale analysis are as follows. Rule 1: The first step in scale analysis is to define the domain of extent in which the analysis applies; any scale analysis of a flow region that is not uniquely defined is not valid. Rule 2: One equation constitutes an equivalence between the scales of two dominant terms appearing in the equation; in such an equivalence the left-hand side is of the same order of magnitude as the right-hand side. Rule 3: In a sum of two terms, if the order of magnitude of one term is greater than the order of magnitude of the other, then the order of magnitude of the sum is dictated by the dominant term; the same conclusion holds for the difference of two terms. Rule 4: In a sum of two terms of the same order of magnitude, the sum is also of that order of magnitude. Rule 5: The order of magnitude of a product of two terms equals the product of the orders of magnitude of the two factors, and the analogous statement holds for ratios. Here O(a) represents the order of magnitude of a, ~ indicates that two terms are of the same order of magnitude, and > means greater than in the order-of-magnitude sense. Consider the steady laminar flow of a viscous fluid inside a circular tube. Let the fluid enter with a uniform velocity over the flow cross section. As the fluid moves down the tube, a boundary layer of low-velocity fluid forms and grows on the surface because the fluid immediately adjacent to the surface has zero velocity.
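The order-of-magnitude bookkeeping described above can be reproduced numerically. The sketch below assumes the standard form of the vertical momentum equation for large-scale atmospheric flow (acceleration, curvature, Coriolis, pressure-gradient, gravity and viscous terms), since the document's displayed equation ( A1 ) is not shown here; the numerical values follow the synoptic scales quoted in the text.

```python
# Rough magnitudes of the terms in the vertical momentum equation at
# synoptic scale.  Term forms follow the usual large-scale atmospheric
# equation; the scales are those quoted in the text.

U   = 1e1      # horizontal velocity scale, m/s
W   = 1e-2     # vertical velocity scale, m/s
L   = 1e6      # horizontal length scale, m
H   = 1e4      # vertical length scale, m
T   = L / U    # advective time scale, s (= 1e5 s)
g   = 9.8      # gravitational acceleration, m/s^2
R   = 6.4e6    # Earth radius, m
Om  = 7.3e-5   # Earth's rotation rate, 1/s
nu  = 1.5e-5   # kinematic viscosity of air, m^2/s

terms = {
    "vertical acceleration  W/T":      W / T,
    "curvature              U^2/R":    U**2 / R,
    "Coriolis               2*Om*U":   2 * Om * U,
    "viscous                nu*W/H^2": nu * W / H**2,
    "gravity                g":        g,
}

for name, scale in terms.items():
    print(f"{name:35s} ~ {scale:9.1e} m/s^2")

# Every term except gravity (and the pressure-gradient term that balances it)
# is at least three orders of magnitude smaller than g, which is why the
# vertical momentum equation collapses to hydrostatic equilibrium,
# dp/dz = -rho * g, at synoptic scale.
```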
A particular and simplifying feature of viscous flow inside cylindrical tubes is the fact that the boundary layer must meet itself at the tube centerline, and the velocity distribution then establishes a fixed pattern that is invariant thereafter. The hydrodynamic entrance length is that part of the tube in which the momentum boundary layer grows and the velocity distribution changes with length. The fixed velocity distribution in the fully developed region is called the fully developed velocity profile. The steady-state continuity and conservation of momentum equations, written in two-dimensional (Cartesian) form as equations ( 1 )–( 3 ), can be simplified by using scale analysis. At any point x ~ L in the fully developed zone, we have y ~ δ and u ~ U∞. From the continuity equation ( 1 ), the transverse velocity component in the fully developed region scales as v ~ U∞ δ / L (equation ( 4 )). In the fully developed region L ≫ δ, so the scale of the transverse velocity given by equation ( 4 ) is negligible. Therefore, in fully developed flow, the continuity equation requires that ∂u/∂x ≈ 0, i.e. u = u(y) only (equation ( 5 )). Based on equation ( 5 ), the y momentum equation ( 3 ) reduces to ∂P/∂y ≈ 0, which means that P is a function of x only. From this, the x momentum equation becomes a balance between the axial pressure gradient dP/dx and the transverse viscous diffusion term; each side must be constant, because the left side is a function of x only and the right side is a function of y only (equation ( 7 )). Solving equation ( 7 ) subject to the no-slip boundary condition at the walls results in the well-known Hagen–Poiseuille-type solution for fully developed flow; in this two-dimensional Cartesian formulation it is the plane (parallel-plate) analogue of flow in a circular tube, where y is measured away from the center of the channel. The velocity profile is parabolic, and its magnitude is proportional to the pressure drop per unit duct length in the direction of the flow.
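The conclusion of the derivation can be checked symbolically. The sketch below solves the reduced x-momentum balance for the plane, parallel-plate case under a constant pressure gradient; the symbol names (h for the channel half-height, G for the negative pressure gradient) are illustrative choices, not notation from the source.

```python
# Fully developed plane Poiseuille flow: mu * u''(y) = dP/dx = -G (constant),
# with no-slip walls at y = +h and y = -h.

import sympy as sp

y = sp.symbols("y", real=True)
h, mu, G = sp.symbols("h mu G", positive=True)   # G stands in for -dP/dx
u = sp.Function("u")

ode = sp.Eq(mu * u(y).diff(y, 2), -G)            # reduced x-momentum balance
sol = sp.dsolve(ode, u(y), ics={u(h): 0, u(-h): 0})
print(sp.simplify(sol.rhs))                      # parabolic: proportional to (h**2 - y**2)

# The profile is quadratic in y and scales linearly with the pressure
# gradient G, matching the qualitative conclusion stated in the text.
```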
https://en.wikipedia.org/wiki/Scale_analysis_(mathematics)
The scale cube is a technology model that describes three methods (or approaches) by which technology platforms may be scaled to meet increasing levels of demand on the system in question. The three approaches defined by the model are scaling through replication or cloning (the “X axis”), scaling through segmentation along service boundaries or dissimilar components (the “Y axis”), and segmentation or partitioning along similar components (the “Z axis”). [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] The model was first published in book form in the first edition of The Art of Scalability . [ 7 ] The authors claim to have first published the model online in 2007 on their company blog. [ 6 ] Subsequent versions of the model were published in the first edition of Scalability Rules in 2011, [ 8 ] the second edition of The Art of Scalability in 2015 [ 1 ] [ 4 ] and the second edition of Scalability Rules in 2016. [ 9 ] The X axis of the model describes scaling a technology solution through multiple instances of the same component, either by cloning a service or by replicating a data set. Web and application servers performing the same function may sit behind a load balancer to scale a solution, and data persistence systems such as a database may be replicated for higher transaction throughput. [ 1 ] The Y axis of the model describes scaling a technology solution by separating a monolithic application into services using action words (verbs), that is, by separating “dissimilar” things; data may likewise be separated along noun (resource) boundaries. Services should have the data upon which they act separated and isolated to that service. [ 1 ] [ 10 ] The Z axis of the cube describes scaling a technology solution by separating components along “similar” boundaries. Such separations may be done on a geographic basis, along customer identity numbers, etc. [ 1 ] [ 11 ] X axis scaling is the most commonly used approach and tends to be the easiest to implement. Although potentially costly, the speed at which it can be implemented and start alleviating issues tends to offset the cost. X axis scaling is typically a simple copy of a service that is then load balanced, either to absorb spikes in traffic or to ride out server outages; the costs can become overwhelming, particularly in the persistence tier. [ 6 ] Y axis scaling breaks away chunks of a monolithic code base and creates separate services, or sometimes microservices. [ 12 ] This separation creates clearly defined lanes not only for responsibility and accountability, but also for fault isolation: if one service fails, it should bring down only itself and not other services. [ 6 ] [ 13 ] Z axis scaling usually looks at similar use cases of data, whether geographic in nature, based on how customers use the website, or even just a simple modulus of the customer dataset. The Z axis breaks customers into sequestered sections to improve response time and to limit the impact if a particular region or section goes down. [ 6 ] [ 14 ]
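The X and Z axes in particular lend themselves to a compact illustration. The toy sketch below routes requests either round-robin across identical clones (X axis) or deterministically to a shard chosen from the customer identity (Z axis); the instance and shard names are hypothetical and not taken from the cited books.

```python
# Toy illustration of X-axis vs. Z-axis scaling decisions.

import hashlib
from itertools import cycle

CLONES = ["app-1", "app-2", "app-3"]             # X axis: identical clones
SHARDS = ["shard-eu", "shard-us", "shard-apac"]  # Z axis: similar data, split up

_round_robin = cycle(CLONES)

def route_x_axis() -> str:
    """X axis: any clone can serve any request, so simply load-balance."""
    return next(_round_robin)

def route_z_axis(customer_id: str) -> str:
    """Z axis: a deterministic function of the customer picks one shard."""
    digest = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16)
    return SHARDS[digest % len(SHARDS)]

if __name__ == "__main__":
    print([route_x_axis() for _ in range(4)])   # clones used in rotation
    print(route_z_axis("customer-42"))          # always the same shard
    print(route_z_axis("customer-42"))          # ...for the same customer
```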
https://en.wikipedia.org/wiki/Scale_cube
A scale model is a physical model that is geometrically similar to an object (known as the prototype ). Scale models are generally smaller than large prototypes such as vehicles, buildings, or people; but may be larger than small prototypes such as anatomical structures or subatomic particles. Models built to the same scale as the prototype are called mockups . Scale models are used as tools in engineering design and testing, promotion and sales, filmmaking special effects, military strategy, and hobbies such as rail transport modeling , wargaming and racing; and as toys. Model building is also pursued as a hobby for the sake of artisanship . Scale models are constructed of plastic , wood, or metal. They are usually painted with enamel , lacquer , or acrylics . Model prototypes include all types of vehicles (railroad trains, cars, trucks, military vehicles, aircraft, and spacecraft), buildings, people, and science fiction themes (spaceships and robots). Models are built to scale , defined as the ratio of any linear dimension of the model to the equivalent dimension on the full-size subject (called the "prototype"), expressed either as a ratio with a colon (ex. 1:8 scale), or as a fraction with a slash (1/8 scale). This designates that 1 length unit on the model represents 8 such units on the prototype. In English-speaking countries, the scale is sometimes expressed as the number of feet on the prototype corresponding to one inch on the model, e.g. 1:48 scale = "1 inch to 4 feet", 1:96 = "1 inch to 8 feet", etc. Models are obtained by three different means: kit assembly , scratch building , and collecting pre-assembled models. Scratch building is the only option available to structural engineers, and among hobbyists requires the highest level of skill, craftsmanship, and time; scratch builders tend to be the most concerned with accuracy and detail. [ citation needed ] Kit assembly is done either "out of the box", or with modifications (known as " kitbashing "). Many kit manufacturers, for various reasons leave something to be desired in terms of accuracy, but using the kit parts as a baseline and adding after-market conversion kits, alternative decal sets, and some scratch building can correct this without the master craftsmanship or time expenditure required by scratch building. Scale models are generally of two types: static and animated . They are used for several purposes in many fields, including: Most hobbyist's models are built for static display, but some have operational features, such as railroad trains that roll, and airplanes and rockets that fly. Flying airplane models may be simple unpowered gliders, or have sophisticated features such as radio control powered by miniature methanol/nitromethane engines . Cars in 1:24, 1:32, or HO scale are fitted with externally powered electric motors which run on plastic road track fitted with metal rails on slots. The track may or may not be augmented with miniature buildings, trees, and people. Children can build and race their own gravity-powered, uncontrolled cars carved out of a wood such as pine, with plastic wheels on metal axles, which run on inclined tracks. The most famous wood racing event is the Boy Scouts of America 's annual Pinewood Derby which debuted in 1953. Entry is open to Cub Scouts . Entrants are supplied with a kit containing a wooden block out of which to carve the body, four plastic wheels, and four axle nails; or they may purchase their own commercially available kit. 
Regulations generally limit the car's weight to 5 ounces (141.7 g), width to 2.75 inches (7.0 cm), and length to 7 inches (17.8 cm). The rules permit the cars to be augmented with tungsten carbide weights up to the limit, and graphite axle lubricant. Miniature wargames are played using miniature soldiers, artillery, vehicles, and scenery built by the players. Before the advent of computer-generated imagery (CGI), visual effects of vehicles such as marine ships and spaceships were created by filming "miniature" models. These were considerably larger scale than hobby versions to allow inclusion of a high degree of surface detail, and electrical features such as interior lighting and animation. For Star Trek: The Original Series , a 33-inch (0.84 m) pre-production model of the Starship Enterprise was created in December 1964, mostly of pine, with Plexiglass and brass details, at a cost of $600. [ 1 ] This was followed by a 135.5-inch (3.44 m) production model constructed from plaster, sheet metal, and wood, at ten times the cost of the first. [ 2 ] [ 3 ] As the Enterprise was originally reckoned to be 947 feet (289 m) long, this put the models at 1:344 and 1:83.9 scale respectively. The Polar Lights company sells a large plastic Enterprise model kit essentially the same size as the first TV model, in 1:350 scale (32 inches long). It can be purchased with an optional electronic lighting and animation (rotating engine domes) kit. Although structural engineering has been a field of study for thousands of years and many of the great problems have been solved using analytical and numerical techniques, many problems are still too complicated to understand in an analytical manner or the current numerical techniques lack real world confirmation. When this is the case, for example a complicated reinforced concrete beam-column-slab interaction problem, scale models can be constructed observing the requirements of similitude to study the problem. Many structural labs exist to test these structural scale models such as the Newmark Civil Engineering Laboratory at the University of Illinois, UC. [ 5 ] For structural engineering scale models, it is important for several specific quantities to be scaled according to the theory of similitude. These quantities can be broadly grouped into three categories: loading, geometry, and material properties. A good reference for considering scales for a structural scale model under static loading conditions in the elastic regime is presented in Table 2.2 of the book Structural Modeling and Experimental Techniques . [ 6 ] Structural engineering scale models can use different approaches to satisfy the similitude requirements of scale model fabrication and testing. A practical introduction to scale model design and testing is discussed in the paper "Pseudodynamic Testing of Scaled Models". [ 7 ] Aerodynamic models may be used for testing new aircraft designs in a wind tunnel or in free flight. Models of scale large enough to permit piloting may be used for testing of a proposed design. Architecture firms usually employ model makers or contract model making firms to make models of projects to sell their designs to builders and investors. These models are traditionally hand-made, but advances in technology have turned the industry into a very high tech process than can involve Class IV laser cutters , five-axis CNC machines as well as rapid prototyping or 3D printing . Typical scales are 1:12, 1:24, 1:48, 1:50, 1:100, 1:200, 1:500, etc. 
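Because a scale designation is simply the ratio of prototype length to model length, figures such as those quoted for the Enterprise models can be checked with a few lines of arithmetic; the helper below is an illustrative sketch, not a tool from the source.

```python
# Scale ratio = prototype length / model length (both in the same units).

def scale_ratio(prototype_length_in: float, model_length_in: float) -> float:
    return prototype_length_in / model_length_in

enterprise_prototype_in = 947 * 12          # 947 feet, expressed in inches

print(round(scale_ratio(enterprise_prototype_in, 33), 1))     # ~344.4 -> "1:344"
print(round(scale_ratio(enterprise_prototype_in, 135.5), 1))  # ~83.9  -> "1:83.9"

# The same arithmetic underlies the "inches to feet" convention:
# 1:48 means 1 inch on the model represents 48 inches (4 feet) on the prototype.
print(48 / 12)   # -> 4.0 feet per inch
```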
With elements similar to miniature wargaming , building models and architectural models , a plan-relief is a means of geographical representation in relief as a scale model for military use, to visualize building projects on fortifications or campaigns involving fortifications. In the first half of the 20th century, navies used hand-made models of warships for identification and instruction in a variety of scales. That of 1:500 was called "teacher scale." Besides models made in 1:1200 and 1:2400 scales, there were also ones made to 1:2000 and 1:5000. Some, made in Britain , were labelled "1 inch to 110 feet", which would be 1:1320 scale, but are not necessarily accurate. Many research workers, hydraulics specialists and engineers have used scale models for over a century, in particular in towing tanks. Manned models are small scale models that can carry and be handled by at least one person on an open expanse of water. They must behave just like real ships, giving the shiphandler the same sensations. Physical conditions such as wind, currents, waves, water depths, channels, and berths must be reproduced realistically. Manned models are used for research (e.g. ship behaviour), engineering (e.g. port layout) and for training in shiphandling (e.g. maritime pilots , masters and officers ). They are usually at 1:25 scale. Models, and their constituent parts, can be built out of a variety of materials, such as: This includes injection molded or extruded plastics such as polystyrene , acrylonitrile butadiene styrene (ABS), butyrate , and clear acrylic and copolyester ( PETG ). Parts can also be cast from synthetic resins . Pine wood is sometimes used; balsa wood , a light wood, is good for flying airplane models. Aluminum or brass can be used in tubing form, or can be used in flat sheets with photo-etched surface detail. Model figures used in wargaming can be made of white metal . Styrene parts are welded together using plastic cement , which comes both in a thick form to be carefully applied to a bonding surface, or in a thin liquid which is applied into a joint by capillary action using a brush or syringe needle. Ethyl cyanoacrylate (ECA) aka "super-glue", or fast-setting epoxy , must be used to bond styrene to other materials. Glossy colors are generally used for car and commercial truck exteriors. Flat colors are generally desirable for military vehicles, aircraft, and spacecraft. Metallic colors simulate the various metals (silver, gold, aluminum, steel, copper, brass, etc.) Enamel paint has classically been used for model making and is generally considered the most durable paint for plastics. It is available in small bottles for brushing and airbrushing , and aerosol spray cans . Disadvantages include toxicity and a strong chemical smell of the paint and its mineral spirit thinner /brush cleaner. Modern enamels are made of alkyd resin to limit toxicity. Popular brands include Testor 's in the US and Humbrol (now Hornby ) in the UK. Lacquer paint produces a hard, durable finish, and requires its own lacquer thinner . Enamels have been generally replaced in popularity by acrylic paint , which is water-based. Advantages include decreased toxicity and chemical smell, and brushes clean with soap and water. Disadvantages include possibly limited durability on plastic, requiring priming coats, at least two color coats, and allowing adequate cure time. Popular brands include the Japanese import Tamiya . 
Some beginner's level kits avoid the necessity to paint the model by adding pigments and chrome plating to the plastic. Decals are generally applied to models after painting and assembly, to add details such as lettering, flags, insignia, or other decorations too small to paint. Water transfer (slide-on) decals are generally used, but beginner's kits may use dry transfer stickers instead. Model railroading (US and Canada; known as railway modelling in UK, Australia, New Zealand, and Ireland) is done in a variety of scales from 1:4 to 1:450 ( T scale ). Each scale has its own strengths and weaknesses, and fills a different niche in the hobby: gauge gauge Model railroads originally used the term gauge , which refers to the distance between the rails , just as full-size railroads continue to do. Although model railroads were also built to different gauges, standard gauge in full-size railroads is 4' 8.5". Therefore, a model railroad reduces that standard to scale. An HO scale model railroad runs on track that is 1/87 of 4' 8.5", or 0.649" from rail to rail. Today model railroads are more typically referred to using the term scale instead of "gauge" in most usages. Confusion arises from indiscriminate use of "scale" and "gauge" synonymously. The word "scale" strictly refers to the proportional size of the model, while "gauge" strictly applies to the measurement between the inside faces of the rails. It is completely incorrect to refer to the mainstream scales as "HO gauge", "N gauge, "Z gauge", etc. This is further complicated by the fact some scales use several different gauges; for example, HO scale uses 16.5 mm as the standard gauge of 4 ft 8 + 1 ⁄ 2 in ( 1,435 mm ), 12 mm to represent 1,000 mm ( 3 ft 3 + 3 ⁄ 8 in ) gauge (HOm), and 3 ft 6 in ( 1,067 mm ) (HOn3-1/2), and 9 mm to represent a prototype gauge of 2 ft ( 610 mm ). The most popular scale to go with a given gauge was often arrived at through the following roundabout process: German artisans would take strips of metal of standard metric size to construct their products from blueprints dimensioned in inches. "Four mm to the foot" yielded the 1:76.2 size of the British "OO scale", which is anomalously used on the standard HO/OO scale (16.5 mm gauge from 3.5 mm/foot scale) tracks, because early electric motors weren't available commercially in smaller sizes. Today, most scale sizes are internationally standardized, with the notable exceptions of O scale and N scale. There are three different versions of the "O" scale, each of which uses tracks of 32 mm for the standard gauge. The American version follows a dollhouse scale of 1:48, sometimes called "quarter-gauge" as in "one-quarter-inch to the foot". The British version continued the pattern of sub-contracting to Germans, so, at 7 mm to the foot, it works out to a scale of 1:43.5. Later, the European authority of model railroad firms MOROP declared that the "O" gauge (still 32 mm) must use the scale of 1:45, to allow wheel, tire , and splasher clearance for smaller than realistic curved sections. N scale trains were first commercially produced at 1:160 scale in 1962 by the Arnold company of Nuremberg . [ 13 ] [ 12 ] This standard size was imported to the US by firms such as the Aurora Plastics Corporation . However, the early N-scale motors would not fit in the smaller models of British locomotives, so the British N gauge was standardized to allow a slightly larger body size. Similar sizing problems with Japanese prototypes led to adoption of a 1:150 scale standard there. 
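The scale-to-gauge relationships quoted above follow from dividing the prototype gauge by the scale denominator; the short check below is illustrative only and rounds to the nominal commercial gauges.

```python
# Model track gauge = prototype gauge / scale denominator.

PROTOTYPE_STANDARD_GAUGE_MM = 1435   # 4 ft 8.5 in
PROTOTYPE_METRE_GAUGE_MM = 1000

def model_gauge_mm(prototype_gauge_mm: float, scale_denominator: float) -> float:
    return prototype_gauge_mm / scale_denominator

print(round(model_gauge_mm(PROTOTYPE_STANDARD_GAUGE_MM, 87), 1))         # ~16.5 mm (HO)
print(round(model_gauge_mm(PROTOTYPE_STANDARD_GAUGE_MM, 87) / 25.4, 3))  # ~0.649 in
print(round(model_gauge_mm(PROTOTYPE_METRE_GAUGE_MM, 87), 1))            # ~11.5 mm (HOm uses a nominal 12 mm)
print(round(model_gauge_mm(PROTOTYPE_STANDARD_GAUGE_MM, 160), 1))        # ~9.0 mm (N)
```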
Since space is more limited in Japanese houses, N scale has become more popular there than HO scale. Static model aircraft are commonly built using plastic, but wood, metal, card and paper can also be used. Models are sold painted and assembled, painted but not assembled ( snap-fit ), or unpainted and not assembled. The most popular types of aircraft to model are commercial airliners and military aircraft. Popular aircraft scales are, in order of increasing size: 1:144 , 1:87 (also known as HO, or "half-O scale") , 1:72 (the most numerous), 1:48 (known as "O scale") , 1:32 , 1:24 , 1:16 , 1:6, and 1:4 . Some European models are available at more metric scales such as 1:50 . The highest quality models are made from injection molded plastic or cast resin . Models made from Vacuum formed plastic are generally for the more skilled builder. More inexpensive models are made from heavy paper or card stock. Ready-made die-cast metal models are also very popular. As well as the traditional scales, die-cast models are available in 1:200 , 1:250 , 1:350 , 1:400 , 1:500 and 1:600 scale . The majority of aircraft modelers concern themselves with depiction of real-life aircraft, but there are some modelers who 'bend' history by modeling aircraft that either never actually flew or existed, or by painting them in a color scheme that did not actually exist. This is commonly referred to as 'What-if' or 'Alternative' modeling, and the most common theme is 'Luftwaffe 1946' or 'Luftwaffe '46'. This theme stems from the idea of modeling German secret projects that never saw the light of day due to the close of World War II. This concept has been extended to include British, Russian, and US experimental projects that never made it into production. Flying model aircraft are built for aerodynamic research and for recreation ( aeromodeling ). Recreational models are often made to resemble some real type. However the aerodynamic requirements of a small model are different from those of a full-size craft, so flying models are seldom fully accurate to scale. Flying model aircraft are one of three types: free flight , control line , and radio controlled . Some flying model kits take many hours to put together, and some kits are almost ready to fly or ready to fly . Model rocketry dates back to the Space Race of the 1950s. The first model rocket engine was designed in 1954 by Orville Carlisle , a licensed pyrotechnics expert, and his brother Robert, a model airplane enthusiast. [ 14 ] Static model rocket kits began as a development of model aircraft kits, yet the scale of 1:72 [V.close to 4 mm.::1foot] never caught on. Scales 1:48 and 1:96 are most frequently used. There are some rockets of scales 1:128, 1:144 , and 1:200 , but Russian firms put their large rockets in 1:288. Heller SA offers some models in the scale of 1:125. Science fiction space ships are heavily popular in the modeling community. In 1966, with the release of the television show Star Trek: The Original Series , AMT corporation released an 18-inch (46 cm) model of the Starship Enterprise . This has been followed over the decades by a complete array of various starships, shuttlecraft , and space stations from the Star Trek franchise. 
The 1977 release of the first Star Wars film and the 1978 TV series Battlestar Galactica also spawned lines of licensed model kits in scales ranging from 1:24 for fighters and smaller ships, to 1:1000, 1:1400, and 1:2500 for most main franchise ships, and up to 1:10000 for the larger Star Wars ships (for especially objects like the Death Stars and Super Star Destroyers , even smaller scales are used). Finemolds in Japan have recently released a series of high quality injection molded Star Wars kits in 1:72 , and this range is supplemented by resin kits from Fantastic Plastic . Although the British scale for 0 gauge was first used for model cars made of rectilinear and circular parts, it was the origin of the European scale for cast or injection molded model cars. MOROP's specification of 1:45 scale for European 0 does not alter the series of cars in 1:43 scale , as it has the widest distribution in the world. In America, a series of cars was developed from at first cast metal and later styrene models ("promos") offered at new-car dealerships to drum up interest. The firm Monogram , and later Tamiya , first produced them in a scale derived from the Architect's scale: 1:24 scale , while the firms AMT , Jo-Han , and Revell chose the scale of 1:25. Monogram later switched to this scale after the firm was purchased by Revell. Some cars are also made in 1:32 scale , and rolling toys are often made on the scale 1:64 scale . Chinese die-cast manufacturers have introduced 1/72 scale into their range. The smaller scales are usually die-cast cars and not the in the class as model cars. Except in rare occasions, Johnny Lightning and Ertl-made die-cast cars were sold as kits for buyers to assemble. Model cars are also used in car design . Typically found in 1:50 scale , most manufacturers of commercial vehicles and heavy equipment commission scale models made of die-cast metal as promotional items to give to prospective customers. These are also popular children's toys and collectibles. The major manufacturers of these items are Conrad and NZG in Germany. Corgi also makes some 1:50 models, as well as Dutch maker Tekno . Trucks are also found as diecast models in 1:43 scale and injection molded kits (and children's toys) in 1:24 scale . Recently some manufacturers have appeared in 1:64 scale like Code 3 . A model construction vehicle (or engineering vehicle ) is a scale model or die-cast toy that represents a construction vehicle such as a bulldozer , excavator , crane , concrete pump , backhoe , etc. Construction vehicle models are almost always made in 1:50 scale , particularly because the cranes at this scale are often three to four feet tall when extended and larger scales would be unsuited for display on a desk or table. These models are popular as children's toys in Germany . In the US they are commonly sold as promotional models for new construction equipment, commissioned by the manufacturer of the prototype real-world equipment. The major manufacturers in Germany are Conrad and NZG, with some competition from Chinese firms that have been entering the market. Japanese firms have marketed toys and models of what are often called mecha , nimble humanoid fighting robots. The robots, which appear in animated shows ( anime ), are often depicted at a size between 15-20m in height, and so scales of 1:100 and 1:144 are common for these subjects, though other scales such as 1:72 are commonly used for robots and related subjects of different size. 
The most prolific manufacturer of mecha models is Bandai , whose Gundam kit lines were a strong influence in the genre in the 1980s. Even today, Gundam kits are the most numerous in the mecha modeling genre, usually with dozens of new releases every year. The features of modern Gundam kits, such as color molding and snap-fit construction , have become the standard expectations for other mecha model kits. Due to the fantasy nature of most anime robots, and the necessary simplicity of cel-animated designs, mecha models lend themselves well to stylized work, improvisations, and simple scratchbuilds . One of Gundam 's contributions to the genre was the use of a gritty wartime backstory as a part of the fantasy, and so it is almost equally fashionable to build the robots in a weathered, beaten style, as would often be expected for AFV kits as to build them in a more stylish, pristine manner. Scale models of people and animals are found in a wide variety of venues, and may be either single-piece objects or kits that must be assembled, usually depending on the purpose of the model. For instance, models of people as well as both domestic and wild animals are often produced for display in model cities or railroads to provide a measure of detail or realism, and scaled relative to the trains, buildings, and other accessories of a certain line of models. If a line of trains or buildings does not feature models of living creatures, those who build the models often buy these items separately from another line so they can feature people or animals. In other cases, scale model lines feature living creatures exclusively, often focusing on educational interests. Model kits of superheroes and super-villains from popular franchises such as DC Entertainment and Marvel Entertainment are also sold, as are models of real-world celebrities, such as Marilyn Monroe and Elvis Presley . One type of assembly kit sold as educational features skeletons and anatomical structure of humans and animals. Such kits may have unique features such as glow-in-the-dark pieces. Dinosaurs are a popular subject for such models. There are also garage kits , which are often figures of anime characters in multiple parts that require assembly. Michele Morciano says small scale ship models were produced in about 1905 linked to the wargaming rules and other publications of Fred T. Jane . The company that standardized on 1:1200 was Bassett-Lowke in 1908. The British Admiralty subsequently contracted with Bassett-Lowke and other companies and individual craftsmen to produce large numbers of recognition models, to this scale, in 1914–18. [ 15 ] Just before the Second World War, the American naval historian (and science fiction author) Fletcher Pratt published a book on naval wargaming as could be done by civilians using ship models cut off at the waterline to be moved on the floors of basketball courts and similar locales. The scale he used was non-standard (reported as 1:666), and may have been influenced by toy ships then available, but as the hobby progressed, and other rule sets came into use, it was progressively supplemented by the series 1:600, 1:1200, and 1:2400. In Britain, 1:3000 became popular and these models also have come into use in the USA. These had the advantage of approximating the nautical mile as 120 inches, 60 inches, and 30 inches, respectively. As the knot is based on this mile and a 60-minute hour, this was quite handy. After the war, firms emerged to produce models from the same white metal used to make toy soldiers. 
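The convenience of the 1:600, 1:1200 and 1:2400 series mentioned above is easy to verify: dividing one nautical mile by each scale denominator gives tabletop distances of roughly 120, 60 and 30 inches. A quick check (illustrative only) follows.

```python
# One (international) nautical mile in inches, divided by the common naval
# wargaming scales to show the "120 / 60 / 30 inch" approximations.

NAUTICAL_MILE_M = 1852.0
INCHES_PER_M = 39.3701

nmi_in = NAUTICAL_MILE_M * INCHES_PER_M          # ~72,913 inches

for denom in (600, 1200, 2400):
    print(f"1:{denom}: one nautical mile ~ {nmi_in / denom:.1f} in on the table")
# -> ~121.5 in, ~60.8 in, ~30.4 in, i.e. roughly 120, 60 and 30 inches
```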
Lines Bros. Ltd , a British firm, offered a tremendously wide range of waterline merchant and naval ships as well as dockyard equipment in the scale 1:1200 which were die-cast in Zamak . In the US, at least one manufacturer, of the wartime 1:1200 recognition models, Comet, made them available for the civilian market postwar, which also drove the change to this scale. In addition, continental European manufacturers and European ship book publishers had adopted the 1:1250 drawing scale because of its similar convenience in size for both models and comparison drawings in books. A prestige scale for boats , comparable to that of 1:32 for fighter planes, is 1:72, producing huge models, but there are very few kits marketed in this scale. There are now several clubs around the world for those who choose to scratch-build radio-controlled model ships and submarines in 1:72, which is often done because of the compatibility with naval aircraft kits. For the smaller ships, plank-on-frame or other wood construction kits are offered in the traditional shipyard scales of 1:96, 1:108, or 1:192 (half of 1:96). In injection-molded plastic kits, Airfix makes full-hull models in the scale the Royal Navy has used to compare the relative sizes of ships: 1:600. Revell makes some kits to half the scale of the US Army standard: 1:570. Some American and foreign firms have made models in a proportion from the Engineer's scale: "one-sixtieth-of-an-inch-to-the-foot", or 1:720. Early in the 20th century, the British historian and science fiction author H. G. Wells published a book, Little Wars , on how to play at battles in miniature. His books use 2" lead figures, [ 16 ] particularly those manufactured by Britains . His fighting system employed spring-loaded model guns that shot matchsticks . This use of physical mechanisms was echoed in the later games of Fred Jane, whose rules required throwing darts at ship silhouettes; his collection of data on the world's fleets was later published and became renowned. Dice have largely replaced this toy mayhem for consumers. For over a century, toy soldiers were made of white metal , a lead-based alloy, often in architect's scale-based ratios in the English-speaking countries, and called tin soldiers . After the Second World War, such toys were on the market for children but now made of a safe plastic softer than styrene . American children called these " army men ". Many sets were made in the new scale of 1:40 . A few styrene model kits of land equipment were offered in this and in 1:48 and 1:32 scales. However, these were swept away by the number of kits in the scale of 1:35 . Those who continued to develop miniature wargaming preferred smaller scale models, the soldiers still made of soft plastic. Airfix particularly wanted people to buy 1:76 scale soldiers and tanks to go with "00" gauge train equipment. Roco offered 1:87 scale styrene military vehicles to go with "HO" gauge model houses. However, although there is no 1:72 scale model railroad, more toy soldiers are now offered in this scale because it is the same as the popular aircraft scale. The number of fighting vehicles in this scale is also increasing, although the number of auxiliary vehicles available is far fewer than in 1:87 scale . A more recent development, especially in wargaming of land battles, is 15 mm white metal miniatures, often referred to as 1:100. 
The use of 15 mm scale metals has grown quickly since the early 1990s as they allow a more affordable option over 28 mm if large battles are to be refought, or a large number of vehicles represented. The rapid rise in the detail and quality of castings at 15 mm scale has also helped to fuel their uptake by the wargaming community. Armies use smaller scales still. The US Army specifies models of the scale 1:285 for its sand table wargaming. There are metal ground vehicles and helicopters in this scale, which is a near "one-quarter-inch-to-six-feet" scale. The continental powers of NATO have developed the similar scale of 1:300, even though metric standardizers really don't like any divisors other than factors of 10, 5, and 2, so maps are not commonly offered in Europe in scales with a "3" in the denominator. Consumer wargaming has since expanded into fantasy realms, employing scales large enough to be painted in imaginative detail - so called "heroic" 28 mm figures, (roughly 1:64 scale ). Firms that produce these make small production lots of white metal . Alternatively to the commercial models, some modelers also tend to use scraps to achieve home-made warfare models. While it doesn't always involve wargaming, some modelers insert realistic procedures, enabling a certain realism such as firing guns or shell deflection on small scale models. Kits for building an engine model are available, especially for kids. The most popular are the internal combustion , steam , jet , and Stirling model engine . Usually they move using an electric motor or a hand crank , and many of them have a transparent case to show the internal process in action. Most hobbyists who build models of buildings do so as part of a diorama to enhance their other models, such as a model railroad or model war machines. As a stand-alone hobby, building models are probably most popular among enthusiasts of construction toys such as Erector , Lego and K'Nex . Famous landmarks such as the Empire State Building, Big Ben and the White House are common subjects. Standard scales have not emerged in this hobby. Model railroaders use railroad scales for their buildings: HO scale (1:87), OO scale (1:76), N scale (1:160), and O scale (1:43). Lego builders use miniland scale (1:20), minifig scale (1:48), and micro scale (1:192) [ note 1 ] Generally, the larger the building, the smaller the scale. Model buildings are commonly made from plastic, foam, balsa wood or paper. Card models are published in the form of a book, and some models are manufactured like 3-D puzzles. Professionally, building models are used by architects and salesmen. Typically found in 1:50 scale and also called model house , model home or display house, this type of model is usually found in stately homes or specially designed houses. Sometimes this kind of model is commissioned to mark a special date like an anniversary or the completion of the architecture, or these models might be used by salesmen selling homes in a new neighborhood. Miniatures and model kits are used in contemporary art whereby artists use both scratch built miniaturizations or commercially manufactured model kits to construct a dialogue between object and viewer. The role of the artist in this type of miniature is not necessarily to re-create an historical event or achieve naturalist realism, but rather to use scale as a mode of articulation in generating conceptual or theoretical exploration. 
Political, conceptual, and architectural examples are provided by noted artists such as Bodys Isek Kingelez , Jake and Dinos Chapman (otherwise known as the Chapman Brothers), Ricky Swallow , Shaun Wilson , Sven Christoffersen, or the Psikhelekedana artists from Mozambique , James Casebere , Oliver Boberg , and Daniel Dorall .
https://en.wikipedia.org/wiki/Scale_model
Scale models of the Bastille were produced between 1789 and 1790 by the businessman Pierre-François Palloy using stones from the demolition of the Bastille , the building they portray. Following the fall of the Bastille on 14 July 1789, Palloy decided to take charge of its demolition, gaining official authorisation to do so on 16 July and completing the work on 21 May 1791. Stones from the former fortress were used in several building projects, notably the Pont de la Concorde , but Palloy also converted salvaged stones and other materials into souvenirs, such as stone plaques made from the stones of the dungeons, [ 1 ] [ 2 ] medals made from chains, [ 3 ] and so on, thus launching a fashion for representations of the fall. [ 3 ] When the 83 départements were founded at the end of 1789, Palloy decided to make a scale model of the Bastille from its stones for each of the départements' capitals. He set up a studio dedicated to producing them, initially carved from the stones, then mass-produced by casting a mix of stone powder and mortar. [ 4 ] He offered the scale models to the départements at the end of 1790, sending with them other products and an "apostle of Liberty" (from an association set up by Palloy himself), who would give a speech when the gifts were handed over. [ 3 ] [ 4 ] Scale models were also offered to government ministers, to Louis XVI and to foreign dignitaries such as George Washington (his is still on display at Mount Vernon ). The scale models were around 40 cm high, 100 cm wide and 60 cm deep; they were presented to the public at patriotic festivals [ 5 ] and contributed to turning the Bastille's fall into a republican myth and symbol of liberty. [ 3 ] Several examples survive.
https://en.wikipedia.org/wiki/Scale_models_of_the_Bastille
A scale of chords may be used to set or read an angle in the absence of a protractor . To draw an angle, compasses describe an arc from the origin with a radius taken from the 60 mark. The required angle is then copied from the scale with the compasses, and an arc of this radius is drawn from the point where the first arc meets the base line (the sixty mark) so that it intersects the first arc. The line drawn from this intersection point to the origin will be at the target angle. [ 1 ] A chord is a line drawn between two points on the circumference of a circle. For a circle of radius r , a chord subtending an angle θ at the centre is bisected into two halves of length r sin(θ/2), so the full chord has length 2r sin(θ/2). The line of chords scale represents each of these values linearly on a scale running from 0 to 60. It appears on Gunter's scale and the Foster Serle dialing scales . The commercial company Stanley marketed a metal version (the Stanley 60R Line of Chords Rule) in 2015.
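Since every mark on a line of chords is just the chord length 2r·sin(θ/2) for the corresponding angle, the scale can be generated numerically; the sketch below is illustrative and normalises the radius (the 60-degree chord) to 1.

```python
# Generating the marks of a line-of-chords scale for a unit radius.

import math

def chord_length(angle_deg: float, radius: float = 1.0) -> float:
    return 2.0 * radius * math.sin(math.radians(angle_deg) / 2.0)

# The 60-degree chord equals the radius, which is why the compass span taken
# from the 60 mark serves as the radius of the construction arc.
assert abs(chord_length(60.0) - 1.0) < 1e-12

for a in range(0, 61, 10):
    print(f"{a:3d} deg -> {chord_length(a):.4f} r")
```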
https://en.wikipedia.org/wiki/Scale_of_chords
Scale of temperature is a methodology of calibrating the physical quantity temperature in metrology . Empirical scales measure temperature in relation to convenient and stable parameters or reference points , such as the freezing and boiling point of water . Absolute temperature is based on thermodynamic principles: using the lowest possible temperature as the zero point, and selecting a convenient incremental unit. Celsius , Kelvin , and Fahrenheit are common temperature scales . Other scales used throughout history include Rankine , Rømer , Newton , Delisle , Réaumur , Gas mark , Leiden , and Wedgwood . The zeroth law of thermodynamics describes thermal equilibrium between thermodynamic systems in form of an equivalence relation . Accordingly, the set of all thermal systems may be divided into a quotient set of equivalence classes , denoted as M {\displaystyle M} , where any element of M {\displaystyle M} collects all systems that are in thermal equilibrium with one another. If the set M {\displaystyle M} has cardinality at most c {\displaystyle {\mathfrak {c}}} (the cardinality of the continuum ), then one can construct an injective function f : M → R {\displaystyle f\colon M\to \mathbb {R} } into the real numbers by which every thermal system has a parameter – a specific real number – associated with it: the property of temperature. By definition, when two systems are in thermal equilibrium, they belong to the same equivalence class, hence to the same element of M {\displaystyle M} , and are assigned the same temperature. Conversely, two systems not in thermal equilibrium belong to different equivalence classes, and since f {\displaystyle f} is injective, they are assigned different temperatures. Temperature depends on the specific choice of f {\displaystyle f} , and any suitable f {\displaystyle f} – any specific way of assigning numerical values for temperature – establishes a scale of temperature . [ 1 ] [ 2 ] [ 3 ] In practical terms, a temperature scale is always based on usually a single physical property of a simple thermodynamic system, called a thermometer , that defines a scaling function for mapping the temperature to the measurable thermometric parameter. Such temperature scales that are purely based on measurement are called empirical temperature scales . The second law of thermodynamics provides a fundamental, natural definition of thermodynamic temperature starting with a null point of absolute zero . A scale for thermodynamic temperature is established similarly to the empirical temperature scales, however, needing only one additional fixing point. Empirical scales are based on the measurement of physical parameters that express the property of interest to be measured through some formal, most commonly a simple linear, functional relationship. For the measurement of temperature, the formal definition of thermal equilibrium in terms of the thermodynamic coordinate spaces of thermodynamic systems, expressed in the zeroth law of thermodynamics , provides the framework to measure temperature. All temperature scales, including the modern thermodynamic temperature scale used in the International System of Units , are calibrated according to thermal properties of a particular substance or device. Typically, this is established by fixing two well-defined temperature points and defining temperature increments via a linear function of the response of the thermometric device. 
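A two-fixed-point empirical scale of the kind just described is simply a linear map from the thermometric reading to an assigned temperature. The sketch below illustrates the idea with invented readings; the numbers are not from the source.

```python
# Two-point empirical calibration: given the thermometric readings X_ice and
# X_steam at two fixed points (assigned temperatures T_ice and T_steam), any
# other reading X maps linearly onto the scale.  The readings below are
# invented for illustration (e.g. a mercury column length in mm).

def make_empirical_scale(x_ice: float, x_steam: float,
                         t_ice: float = 0.0, t_steam: float = 100.0):
    def to_temperature(x: float) -> float:
        return t_ice + (x - x_ice) * (t_steam - t_ice) / (x_steam - x_ice)
    return to_temperature

celsius_from_column = make_empirical_scale(x_ice=42.0, x_steam=287.0)
print(round(celsius_from_column(164.5), 2))   # midway reading -> 50.0 degrees C

# Two thermometers using different working substances agree at the fixed
# points by construction, but need not agree in between, which is the
# limitation of empirical scales noted in the text.
```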
For example, both the old Celsius scale and Fahrenheit scale were originally based on the linear expansion of a narrow mercury column within a limited range of temperature, [ 4 ] each using different reference points and scale increments. Different empirical scales may not be compatible with each other, except for small regions of temperature overlap. If an alcohol thermometer and a mercury thermometer have the same two fixed points, namely the freezing and boiling point of water, their readings will not agree with each other except at the fixed points, as the linear 1:1 relationship of expansion between any two thermometric substances may not be guaranteed. Empirical temperature scales are not reflective of the fundamental, microscopic laws of matter. Temperature is a universal attribute of matter, yet empirical scales map a narrow range onto a scale that is known to have a useful functional form for a particular application. Thus, their range is limited. The working material only exists in a form under certain circumstances, beyond which it no longer can serve as a scale. For example, mercury freezes below 234.32 K, so temperatures lower than that cannot be measured in a scale based on mercury. Even ITS-90 , which interpolates among different ranges of temperature, has a range of only 0.65 K to approximately 1358 K (−272.5 °C to 1085 °C). When pressure approaches zero, all real gas will behave like ideal gas, that is, pV of a mole of gas relying only on temperature. Therefore, we can design a scale with pV as its argument. Of course any bijective function will do, but for convenience's sake a linear function is the best. Therefore, we define it as [ 5 ] The ideal gas scale is in some sense a "mixed" scale. It relies on the universal properties of gas, a big advance from just a particular substance. But still it is empirical since it puts gas at a special position and thus has limited applicability—at some point no gas can exist. One distinguishing characteristic of ideal gas scale, however, is that it precisely equals thermodynamical scale when it is well defined (see § Equality to ideal gas scale ). ITS-90 is designed to represent the thermodynamic temperature scale (referencing absolute zero ) as closely as possible throughout its range. Many different thermometer designs are required to cover the entire range. These include helium vapor pressure thermometers, helium gas thermometers, standard platinum resistance thermometers (known as SPRTs, PRTs or Platinum RTDs) and monochromatic radiation thermometers . Although the Kelvin and Celsius scales are defined using absolute zero (0 K) and the triple point of water (273.16 K and 0.01 °C), it is impractical to use this definition at temperatures that are very different from the triple point of water. Accordingly, ITS–90 uses numerous defined points, all of which are based on various thermodynamic equilibrium states of fourteen pure chemical elements and one compound (water). Most of the defined points are based on a phase transition ; specifically the melting / freezing point of a pure chemical element. However, the deepest cryogenic points are based exclusively on the vapor pressure /temperature relationship of helium and its isotopes whereas the remainder of its cold points (those less than room temperature) are based on triple points . Examples of other defining points are the triple point of hydrogen (−259.3467 °C) and the freezing point of aluminum (660.323 °C). 
Thermometers calibrated per ITS–90 use complex mathematical formulas to interpolate between its defined points. ITS–90 specifies rigorous control over variables to ensure reproducibility from lab to lab. For instance, the small effect that atmospheric pressure has upon the various melting points is compensated for (an effect that typically amounts to no more than half a millikelvin across the different altitudes and barometric pressures likely to be encountered). The standard even compensates for the pressure effect due to how deeply the temperature probe is immersed into the sample. ITS–90 also draws a distinction between "freezing" and "melting" points. The distinction depends on whether heat is going into (melting) or out of (freezing) the sample when the measurement is made. Only gallium is measured while melting, all the other metals are measured while the samples are freezing. There are often small differences between measurements calibrated per ITS–90 and thermodynamic temperature. For instance, precise measurements show that the boiling point of VSMOW water under one standard atmosphere of pressure is actually 373.1339 K (99.9839 °C) when adhering strictly to the two-point definition of thermodynamic temperature. When calibrated to ITS–90, where one must interpolate between the defining points of gallium and indium, the boiling point of VSMOW water is about 10 mK less, about 99.974 °C. The virtue of ITS–90 is that another lab in another part of the world will measure the very same temperature with ease due to the advantages of a comprehensive international calibration standard featuring many conveniently spaced, reproducible, defining points spanning a wide range of temperatures. OV is a specialized scale used in Japan to measure female basal body temperature for fertility awareness . The range of 35.5 °C (OV 0) to 38.0 °C (OV 50) is divided into 50 equal parts. [ 6 ] Celsius (known until 1948 as centigrade) is a temperature scale that is named after the Swedish astronomer Anders Celsius (1701–1744), who developed a similar temperature scale two years before his death. The degree Celsius (°C) can refer to a specific temperature on the Celsius scale as well as a unit to indicate a temperature interval (a difference between two temperatures). From 1744 until 1954, 0 °C was defined as the freezing point of water and 100 °C was defined as the boiling point of water, both at a pressure of one standard atmosphere . [ citation needed ] Although these defining correlations are commonly taught in schools today, by international agreement, between 1954 and 2019 the unit degree Celsius and the Celsius scale were defined by absolute zero and the triple point of VSMOW (specially prepared water). This definition also precisely related the Celsius scale to the Kelvin scale, which defines the SI base unit of thermodynamic temperature with symbol K. Absolute zero, the lowest temperature possible, is defined as being exactly 0 K and −273.15 °C. Until 19 May 2019, the temperature of the triple point of water was defined as exactly 273.16 K (0.01 °C). This means that a temperature difference of one degree Celsius and that of one kelvin are exactly the same. On 20 May 2019, the kelvin was redefined so that its value is now determined by the definition of the Boltzmann constant rather than being defined by the triple point of VSMOW. This means that the triple point is now a measured value, not a defined value. 
The newly-defined exact value of the Boltzmann constant was selected so that the measured value of the VSMOW triple point is exactly the same as the older defined value to within the limits of accuracy of contemporary metrology . The degree Celsius remains exactly equal to the kelvin, and 0 K remains exactly −273.15 °C. Thermodynamic scale differs from empirical scales in that it is absolute. It is based on the fundamental laws of thermodynamics or statistical mechanics instead of some arbitrary chosen working material. Besides it covers full range of temperature and has simple relation with microscopic quantities like the average kinetic energy of particles (see equipartition theorem ). In experiments ITS-90 is used to approximate thermodynamic scale due to simpler realization. Lord Kelvin devised the thermodynamic scale based on the efficiency of heat engines as shown below: The efficiency of an engine is the work divided by the heat introduced to the system or where w cy is the work done per cycle. Thus, the efficiency depends only on q C / q H . Because of Carnot theorem , any reversible heat engine operating between temperatures T 1 and T 2 must have the same efficiency, meaning, the efficiency is the function of the temperatures only: In addition, a reversible heat engine operating between temperatures T 1 and T 3 must have the same efficiency as one consisting of two cycles, one between T 1 and another (intermediate) temperature T 2 , and the second between T 2 and T 3 . This can only be the case if Specializing to the case that T 1 {\displaystyle T_{1}} is a fixed reference temperature: the temperature of the triple point of water. Then for any T 2 and T 3 , Therefore, if thermodynamic temperature is defined by then the function f , viewed as a function of thermodynamic temperature, is and the reference temperature T 1 has the value 273.16. (Of course any reference temperature and any positive numerical value could be used—the choice here corresponds to the Kelvin scale.) It follows immediately that Substituting Equation 3 back into Equation 1 gives a relationship for the efficiency in terms of temperature: This is identical to the efficiency formula for Carnot cycle , which effectively employs the ideal gas scale. This means that the two scales equal numerically at every point.
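The derivation above refers to several displayed equations that are not shown here; the standard Kelvin/Carnot argument it follows can be sketched as below. The equation numbering is illustrative and may not match the original, and the sign convention (both heats taken positive) is an assumption.

```latex
% Sketch of the standard argument for the thermodynamic temperature scale.
% Sign convention: q_H absorbed from the hot reservoir, q_C rejected to the
% cold one, both taken positive.

% (1) Efficiency of a cyclic engine:
\eta \;=\; \frac{w_{\mathrm{cy}}}{q_H} \;=\; \frac{q_H - q_C}{q_H} \;=\; 1 - \frac{q_C}{q_H}

% (2) By Carnot's theorem the ratio depends only on the two temperatures, and
%     composing two reversible engines in series requires
\frac{q_C}{q_H} = f(T_H, T_C), \qquad f(T_1, T_3) = f(T_1, T_2)\, f(T_2, T_3)

% (3) which is satisfied by f(T_1, T_2) = g(T_2)/g(T_1); choosing g(T) = T and
%     fixing the reference at the triple point of water defines
T \;=\; 273.16\,\mathrm{K} \times \frac{q}{q_{\mathrm{tp}}}

% (4) Substituting back into (1) gives the familiar Carnot efficiency,
%     numerically identical to the ideal-gas-scale result:
\eta \;=\; 1 - \frac{T_C}{T_H}
```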
https://en.wikipedia.org/wiki/Scale_of_temperature
A scale test car is a type of railroad car in maintenance of way service. Its purpose is to calibrate the weighing scales used to weigh loaded railroad cars. Scale test cars are of a precisely known weight so that the track scale can be calibrated against them. [ 1 ] Cars are weighed for various purposes. These include ensuring that cars are within the axle load limits of the railroad, and determining the amount of cargo loaded (by subtracting the car's unloaded, or tare, weight from the total weight). The latter is used to bill the railroad's customers for the carriage of bulk commodities, so it is essential that the track scales be accurate. Many scale test cars were small, old railroad cars carrying heavy metal weights as their superstructure. Scale test cars needed special handling so they would not suffer damage, which might alter their weight. They were reweighed periodically on accurate scales at the railroad's shops. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Scale_test_car
The Scaled Particle Theory (SPT) is an equilibrium theory of hard-sphere fluids which gives an approximate expression for the equation of state of hard-sphere mixtures and for their thermodynamic properties such as the surface tension. [ 1 ] [ 2 ] Consider the one-component homogeneous hard-sphere fluid with molecule radius R. To obtain its equation of state in the form p = p(ρ, T) (where p is the pressure, ρ is the density of the fluid and T is the temperature) one can find the expression for the chemical potential μ and then use the Gibbs–Duhem equation to express p as a function of ρ. [ 3 ] The chemical potential of the fluid can be written as a sum of an ideal-gas contribution and an excess part: μ = μ_id + μ_ex. The excess chemical potential is equivalent to the reversible work of inserting an additional molecule into the fluid. Note that inserting a spherical particle of radius R_0 is equivalent to creating a cavity of radius R_0 + R in the hard-sphere fluid. SPT gives an approximate expression for this work W(R_0). In the case of inserting a molecule (R_0 = R), the resulting expression involves the packing fraction η ≡ (4/3)πR³ρ and the Boltzmann constant k. This leads to an equation of state which is equivalent to the compressibility equation of state of the Percus–Yevick theory.
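The compressibility equation of state of the Percus–Yevick theory mentioned above has the well-known closed form Z = p/(ρkT) = (1 + η + η²)/(1 − η)³ for the one-component hard-sphere fluid. The following minimal Python sketch is an illustration under that assumption, not material taken from the original article; it evaluates the packing fraction and this compressibility factor.

import math

def packing_fraction(radius, number_density):
    # eta = (4/3) * pi * R**3 * rho, as defined in the text
    return (4.0 / 3.0) * math.pi * radius**3 * number_density

def compressibility_factor_py(eta):
    # Percus-Yevick (compressibility route) hard-sphere equation of state:
    # Z = p / (rho * k * T) = (1 + eta + eta**2) / (1 - eta)**3
    return (1.0 + eta + eta**2) / (1.0 - eta)**3

eta = 0.3                                # a moderately dense hard-sphere fluid
print(compressibility_factor_py(eta))    # ~4.05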
https://en.wikipedia.org/wiki/Scaled_particle_theory
" Candidatus Scalindua brodae " is a bacterial member of the order Planctomycetales [ 1 ] and therefore lacks peptidoglycan in its cell wall , has a compartmentalized cytoplasm . It is an ammonium oxidising bacteria. This bacteria -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Scalindua_brodae
Candidatus Scalindua wagneri is a Gram-negative coccoid-shaped bacterium that was first isolated from a wastewater treatment plant. [ 1 ] This bacterium is an obligate anaerobic chemolithotroph that undergoes anaerobic ammonium oxidation ( anammox ). [ 1 ] It can be used in the wastewater treatment industry in nitrogen reactors to remove nitrogenous wastes from wastewater without contributing to fixed nitrogen loss and greenhouse gas emission. [ 2 ] Candidatus Scalindua wagneri is a coccoid-shaped bacterium with a diameter of 1 μm. [ 1 ] Like other Planctomycetota, S. wagneri is Gram-negative and does not have peptidoglycan in its cell wall. [ 1 ] In addition, the bacterium contains two inner membranes instead of having one inner membrane and one outer membrane that surrounds the cell wall. [ 3 ] Some of the near neighbors are other species within the new Scalindua genus, such as " Candidatus S. sorokinii" and " Candidatus S. brodae". [ 1 ] Other neighbors include " Candidatus Kuenenia stuttgartiensis" and " Candidatus Brocadia anammoxidans". [ 1 ] S. wagneri and its genus share only about 85% similarity with other members in its evolutionary line, which suggests that it is distantly related to other anaerobic ammonium oxidizing (anammox) bacteria. [ 1 ] Markus Schmid from the Jetten lab first discovered S. wagneri in a landfill leachate treatment plant located in Pitsea, UK on August 1, 2001. [ 1 ] These bacteria doubled in number about every three weeks in laboratory conditions, which made them very difficult to isolate. [ 1 ] Therefore, the researchers used 16S rRNA ( ribosomal RNA ) gene analysis on the biofilm of wastewater samples to detect the presence of these bacteria. [ 1 ] They amplified and isolated the 16S rRNA gene from the biofilm using PCR and gel electrophoresis. Then, they cloned the DNA into TOPO vectors. [ 1 ] Once the researchers sequenced the DNA, they aligned the 16S rRNA gene sequences to a genome database and found that the sequences are related to the anammox bacteria. [ 1 ] One of the sequences showed a 93% similarity to Candidatus Scalindua sorokinii, which suggests that this sequence belonged to a new species within the genus Scalindua; the researchers named it Candidatus Scalindua wagneri after Michael Wagner, a microbial ecologist. [ 1 ] S. wagneri is an obligate anaerobic chemolithoautotroph and undergoes anaerobic ammonium oxidation (anammox) in the intracytoplasmic compartment called an anammoxosome. [ 1 ] [ 3 ] During the anammox process, ammonium is oxidized using nitrite as an electron acceptor and forms dinitrogen gas as a product. [ 1 ] It is proposed that this mechanism occurs through the production of a hydrazine intermediate using hydroxylamine, which is derived from nitrite. [ 1 ] In addition, S. wagneri uses nitrite as an electron donor to fix carbon dioxide and forms nitrate as a byproduct. [ 1 ] To test the metabolic properties of S. wagneri, Nakajima et al. performed anammox activity tests using nitrogen compounds labeled with the 15 N isotope and measured 28 N 2 , 29 N 2 , and 30 N 2 concentrations after 15 days. [ 4 ] The researchers found that the concentrations of the 28 N 2 and 29 N 2 gases increased significantly. [ 4 ] These results suggest that ammonia and nitrite are used in equal amounts to make 29 N 2 , and that denitrification occurs concurrently with anammox metabolism. [ 4 ] Currently, genomic information about S. wagneri is very limited.
[ 5 ] Current genome sequences were collected from DNA isolated from the bacteria growing in a marine anammox bacteria (MAB) reactor. [ 4 ] Then, the 16S rRNA genes on the DNA were amplified using a specific oligonucleotide primer for Planctomycetales , separated using gel electrophoresis, and sequenced using a CEQ 2000 DNA Sequencer. [ 4 ] Analysis of the 16S rRNA gene sequences was performed using the GENETYX program, and the alignments and phylogenetic trees were made using BLAST , CLUSTALW and neighbor joining , respectively. [ 4 ] To have a better understanding of the genome , S. wagneri can be compared to one of its better-known relatives. For example, Candidatus Scalindua profunda has a genome length of 5.14 million base pairs with a GC content of 39.1%. [ 6 ] There is no genomic information about the length or % GC content for S. wagneri. However, there are hundreds of 476 base pair partial sequences for its 16S rRNA gene. [ 5 ] Using fluorescent in situ hybridization (FISH) analysis, a technique used to detect specific DNA sequences on chromosomes , researchers were not able to detect hybridization between the chromosome of S. wagneri and the putative anammox DNA probe. [ 1 ] This suggests that S. wagneri is not very similar to the known anammox bacteria, so the researchers categorized the bacterium into its own genus. [ 1 ] Although researchers are unable to isolate pure cultures of S. wagneri, it is believed to encompass a broad niche . [ 7 ] Using 16S rRNA gene analysis, Schmid first found evidence of the bacteria in wastewater treatment plants. [ 1 ] Other researchers also found 16S rRNA gene evidence in a petroleum reservoir held at a temperature range between 55 °C and 75 °C in addition to freshwater and marine ecosystems , such as estuaries . [ 7 ] [ 8 ] S. wagneri allows wastewater treatment plants to reduce operation costs while reducing the adverse effects of nitrification and denitrification on the environment. [ 2 ] These bacteria contribute to the development of new technologies for wastewater management by aiding in the efficient removal of nitrogenous compounds in wastewater. [ 1 ] Usually, nitrogen reactors use both nitrification and denitrification to remove nitrogenous wastes. [ 2 ] These processes have high operation costs due to the continuous maintenance of aerobic conditions in the reactor. [ 2 ] Denitrification also produces nitrous oxide (N 2 O), which is a greenhouse gas that is detrimental to the environment. [ 9 ] Production of N 2 O contributes to the loss of fixed nitrogen, which regulates the biological productivity of ecosystems . [ 10 ] [ 11 ] By inoculating wastewater reactors with the anaerobic S. wagneri, operation costs can be reduced by about ninety percent without the production of greenhouse gases. [ 2 ] This allows for better wastewater management in a more cost-efficient manner without contributing to climate change . [ 2 ] [ 9 ]
https://en.wikipedia.org/wiki/Scalindua_wagneri
In affine geometry , uniform scaling (or isotropic scaling [ 1 ] ) is a linear transformation that enlarges (increases) or shrinks (diminishes) objects by a scale factor that is the same in all directions ( isotropically ). The result of uniform scaling is similar (in the geometric sense) to the original. A scale factor of 1 is normally allowed, so that congruent shapes are also classed as similar. Uniform scaling happens, for example, when enlarging or reducing a photograph , or when creating a scale model of a building, car, airplane, etc. More general is scaling with a separate scale factor for each axis direction. Non-uniform scaling ( anisotropic scaling ) is obtained when at least one of the scaling factors is different from the others; a special case is directional scaling or stretching (in one direction). Non-uniform scaling changes the shape of the object; e.g. a square may change into a rectangle, or into a parallelogram if the sides of the square are not parallel to the scaling axes (the angles between lines parallel to the axes are preserved, but not all angles). It occurs, for example, when a faraway billboard is viewed from an oblique angle , or when the shadow of a flat object falls on a surface that is not parallel to it. When the scale factor is larger than 1, (uniform or non-uniform) scaling is sometimes also called dilation or enlargement . When the scale factor is a positive number smaller than 1, scaling is sometimes also called contraction or reduction . In the most general sense, a scaling includes the case in which the directions of scaling are not perpendicular. It also includes the case in which one or more scale factors are equal to zero ( projection ), and the case of one or more negative scale factors (a directional scaling by -1 is equivalent to a reflection ). Scaling is a linear transformation , and a special case of homothetic transformation (scaling about a point). In most cases, the homothetic transformations are non-linear transformations. A scale factor is usually a decimal which scales, or multiplies, some quantity. In the equation y = Cx , C is the scale factor for x . C is also the coefficient of x , and may be called the constant of proportionality of y to x . For example, doubling distances corresponds to a scale factor of two for distance, while cutting a cake in half results in pieces with a scale factor for volume of one half. The basic equation for it is image over preimage. In the field of measurements, the scale factor of an instrument is sometimes referred to as sensitivity. The ratio of any two corresponding lengths in two similar geometric figures is also called a scale. A scaling can be represented by a scaling matrix . To scale an object by a vector v = ( v x , v y , v z ), each point p = ( p x , p y , p z ) would need to be multiplied with this scaling matrix: As shown below, the multiplication will give the expected result: Such a scaling changes the diameter of an object by a factor between the scale factors, the area by a factor between the smallest and the largest product of two scale factors, and the volume by the product of all three. The scaling is uniform if and only if the scaling factors are equal ( v x = v y = v z ). If all except one of the scale factors are equal to 1, we have directional scaling. In the case where v x = v y = v z = k , scaling increases the area of any surface by a factor of k 2 and the volume of any solid object by a factor of k 3 . 
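The scaling matrix and the multiplication referred to above did not survive extraction; the following short numpy sketch (an illustration, not a reproduction of the article's original figures) shows the diagonal form of the matrix and the effect of applying it to a point.

import numpy as np

v = np.array([2.0, 1.0, 0.5])   # scale factors v = (vx, vy, vz)
S = np.diag(v)                  # the scaling matrix: diag(vx, vy, vz)

p = np.array([3.0, 4.0, 6.0])   # a point p = (px, py, pz)
print(S @ p)                    # [6. 4. 3.], i.e. (vx*px, vy*py, vz*pz)

# For uniform scaling (vx = vy = vz = k), areas scale by k**2 and volumes by k**3:
k = 2.0
print(k**2, k**3)               # 4.0 8.0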
In n {\displaystyle n} -dimensional space R n {\displaystyle \mathbb {R} ^{n}} , uniform scaling by a factor v {\displaystyle v} is accomplished by scalar multiplication with v {\displaystyle v} , that is, multiplying each coordinate of each point by v {\displaystyle v} . As a special case of linear transformation, it can be achieved also by multiplying each point (viewed as a column vector) with a diagonal matrix whose entries on the diagonal are all equal to v {\displaystyle v} , namely v I {\displaystyle vI} . Non-uniform scaling is accomplished by multiplication with any symmetric matrix . The eigenvalues of the matrix are the scale factors, and the corresponding eigenvectors are the axes along which each scale factor applies. A special case is a diagonal matrix, with arbitrary numbers v 1 , v 2 , … v n {\displaystyle v_{1},v_{2},\ldots v_{n}} along the diagonal: the axes of scaling are then the coordinate axes, and the transformation scales along each axis i {\displaystyle i} by the factor v i {\displaystyle v_{i}} . In uniform scaling with a non-zero scale factor, all non-zero vectors retain their direction (as seen from the origin), or all have the direction reversed, depending on the sign of the scaling factor. In non-uniform scaling only the vectors that belong to an eigenspace will retain their direction. A vector that is the sum of two or more non-zero vectors belonging to different eigenspaces will be tilted towards the eigenspace with largest eigenvalue. In projective geometry , often used in computer graphics , points are represented using homogeneous coordinates . To scale an object by a vector v = ( v x , v y , v z ), each homogeneous coordinate vector p = ( p x , p y , p z , 1) would need to be multiplied with this projective transformation matrix: As shown below, the multiplication will give the expected result: Since the last component of a homogeneous coordinate can be viewed as the denominator of the other three components, a uniform scaling by a common factor s (uniform scaling) can be accomplished by using this scaling matrix: For each vector p = ( p x , p y , p z , 1) we would have which would be equivalent to Given a point P ( x , y ) {\displaystyle P(x,y)} , the dilation associates it with the point P ′ ( x ′ , y ′ ) {\displaystyle P'(x',y')} through the equations Therefore, given a function y = f ( x ) {\displaystyle y=f(x)} , the equation of the dilated function is If n = 1 {\displaystyle n=1} , the transformation is horizontal; when m > 1 {\displaystyle m>1} , it is a dilation, when m < 1 {\displaystyle m<1} , it is a contraction. If m = 1 {\displaystyle m=1} , the transformation is vertical; when n > 1 {\displaystyle n>1} it is a dilation, when n < 1 {\displaystyle n<1} , it is a contraction. If m = 1 / n {\displaystyle m=1/n} or n = 1 / m {\displaystyle n=1/m} , the transformation is a squeeze mapping .
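Returning to the homogeneous-coordinate representation described above, the projective scaling matrices (which likewise did not survive extraction) can be sketched as follows; this is an illustration of the stated idea, not a reproduction of the article's figures.

import numpy as np

# Scaling a homogeneous point (px, py, pz, 1) by v = (vx, vy, vz):
def scaling_matrix(vx, vy, vz):
    return np.diag([vx, vy, vz, 1.0])

# Uniform scaling by a common factor s, encoded in the last component
# (which acts as the denominator of the other three):
def uniform_scaling_matrix(s):
    return np.diag([1.0, 1.0, 1.0, 1.0 / s])

p = np.array([1.0, 2.0, 3.0, 1.0])

print(scaling_matrix(2.0, 3.0, 4.0) @ p)   # [2. 6. 12. 1.]

q = uniform_scaling_matrix(5.0) @ p        # [1. 2. 3. 0.2]
print(q[:3] / q[3])                        # [5. 10. 15.], i.e. uniform scaling by s = 5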
https://en.wikipedia.org/wiki/Scaling_(geometry)
In mathematics , a change of variables is a basic technique used to simplify problems in which the original variables are replaced with functions of other variables. The intent is that when expressed in new variables, the problem may become simpler, or equivalent to a better understood problem. Change of variables is an operation that is related to substitution . However these are different operations, as can be seen when considering differentiation ( chain rule ) or integration ( integration by substitution ). A very simple example of a useful variable change can be seen in the problem of finding the roots of the sixth-degree polynomial: Sixth-degree polynomial equations are generally impossible to solve in terms of radicals (see Abel–Ruffini theorem ). This particular equation, however, may be written (this is a simple case of a polynomial decomposition ). Thus the equation may be simplified by defining a new variable u = x 3 {\displaystyle u=x^{3}} . Substituting x by u 3 {\displaystyle {\sqrt[{3}]{u}}} into the polynomial gives which is just a quadratic equation with the two solutions: The solutions in terms of the original variable are obtained by substituting x 3 back in for u , which gives Then, assuming that one is interested only in real solutions, the solutions of the original equation are Consider the system of equations where x {\displaystyle x} and y {\displaystyle y} are positive integers with x > y {\displaystyle x>y} . (Source: 1991 AIME ) Solving this normally is not very difficult, but it may get a little tedious. However, we can rewrite the second equation as x y ( x + y ) = 880 {\displaystyle xy(x+y)=880} . Making the substitutions s = x + y {\displaystyle s=x+y} and t = x y {\displaystyle t=xy} reduces the system to s + t = 71 , s t = 880 {\displaystyle s+t=71,st=880} . Solving this gives ( s , t ) = ( 16 , 55 ) {\displaystyle (s,t)=(16,55)} and ( s , t ) = ( 55 , 16 ) {\displaystyle (s,t)=(55,16)} . Back-substituting the first ordered pair gives us x + y = 16 , x y = 55 , x > y {\displaystyle x+y=16,xy=55,x>y} , which gives the solution ( x , y ) = ( 11 , 5 ) . {\displaystyle (x,y)=(11,5).} Back-substituting the second ordered pair gives us x + y = 55 , x y = 16 , x > y {\displaystyle x+y=55,xy=16,x>y} , which gives no solutions. Hence the solution that solves the system is ( x , y ) = ( 11 , 5 ) {\displaystyle (x,y)=(11,5)} . Let A {\displaystyle A} , B {\displaystyle B} be smooth manifolds and let Φ : A → B {\displaystyle \Phi :A\rightarrow B} be a C r {\displaystyle C^{r}} - diffeomorphism between them, that is: Φ {\displaystyle \Phi } is a r {\displaystyle r} times continuously differentiable, bijective map from A {\displaystyle A} to B {\displaystyle B} with r {\displaystyle r} times continuously differentiable inverse from B {\displaystyle B} to A {\displaystyle A} . Here r {\displaystyle r} may be any natural number (or zero), ∞ {\displaystyle \infty } ( smooth ) or ω {\displaystyle \omega } ( analytic ). The map Φ {\displaystyle \Phi } is called a regular coordinate transformation or regular variable substitution , where regular refers to the C r {\displaystyle C^{r}} -ness of Φ {\displaystyle \Phi } . Usually one will write x = Φ ( y ) {\displaystyle x=\Phi (y)} to indicate the replacement of the variable x {\displaystyle x} by the variable y {\displaystyle y} by substituting the value of Φ {\displaystyle \Phi } in y {\displaystyle y} for every occurrence of x {\displaystyle x} . Some systems can be more easily solved when switching to polar coordinates . 
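Before the polar-coordinate example that follows, here is a small sympy sketch of the u = x³ substitution from the polynomial example above. The particular sextic used here is an illustrative one in which x appears only through x³; it is not necessarily the polynomial of the original example.

import sympy as sp

x, u = sp.symbols('x u', real=True)

poly = x**6 - 9*x**3 + 8            # illustrative sixth-degree polynomial

# Substituting u = x**3 turns the sextic into a quadratic in u.
quadratic = poly.subs(x**3, u)
print(quadratic)                    # u**2 - 9*u + 8
roots_u = sp.solve(quadratic, u)
print(roots_u)                      # [1, 8]

# Back-substitute u = x**3 and keep the real solutions.
for r in roots_u:
    print(sp.solve(sp.Eq(x**3, r), x))   # x = 1 and x = 2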
Consider for example the equation This may be a potential energy function for some physical problem. If one does not immediately see a solution, one might try the substitution Note that if θ {\displaystyle \theta } runs outside a 2 π {\displaystyle 2\pi } -length interval, for example, [ 0 , 2 π ] {\displaystyle [0,2\pi ]} , the map Φ {\displaystyle \Phi } is no longer bijective. Therefore, Φ {\displaystyle \Phi } should be limited to, for example ( 0 , ∞ ] × [ 0 , 2 π ) {\displaystyle (0,\infty ]\times [0,2\pi )} . Notice how r = 0 {\displaystyle r=0} is excluded, for Φ {\displaystyle \Phi } is not bijective in the origin ( θ {\displaystyle \theta } can take any value, the point will be mapped to (0, 0)). Then, replacing all occurrences of the original variables by the new expressions prescribed by Φ {\displaystyle \Phi } and using the identity sin 2 ⁡ x + cos 2 ⁡ x = 1 {\displaystyle \sin ^{2}x+\cos ^{2}x=1} , we get Now the solutions can be readily found: sin ⁡ ( θ ) = 0 {\displaystyle \sin(\theta )=0} , so θ = 0 {\displaystyle \theta =0} or θ = π {\displaystyle \theta =\pi } . Applying the inverse of Φ {\displaystyle \Phi } shows that this is equivalent to y = 0 {\displaystyle y=0} while x ≠ 0 {\displaystyle x\not =0} . Indeed, we see that for y = 0 {\displaystyle y=0} the function vanishes, except for the origin. Note that, had we allowed r = 0 {\displaystyle r=0} , the origin would also have been a solution, though it is not a solution to the original problem. Here the bijectivity of Φ {\displaystyle \Phi } is crucial. The function is always positive (for x , y ∈ R {\displaystyle x,y\in \mathbb {R} } ), hence the absolute values. The chain rule is used to simplify complicated differentiation. For example, consider the problem of calculating the derivative Let y = sin ⁡ u {\displaystyle y=\sin u} with u = x 2 . {\displaystyle u=x^{2}.} Then: Difficult integrals may often be evaluated by changing variables; this is enabled by the substitution rule and is analogous to the use of the chain rule above. Difficult integrals may also be solved by simplifying the integral using a change of variables given by the corresponding Jacobian matrix and determinant . [ 1 ] Using the Jacobian determinant and the corresponding change of variable that it gives is the basis of coordinate systems such as polar, cylindrical, and spherical coordinate systems. The following theorem allows us to relate integrals with respect to Lebesgue measure to an equivalent integral with respect to the pullback measure under a parameterization G. [ 2 ] The proof is due to approximations of the Jordan content. Suppose that Ω {\displaystyle \Omega } is an open subset of R n {\displaystyle \mathbb {R} ^{n}} and G : Ω → R n {\displaystyle G:\Omega \to \mathbb {R} ^{n}} is a C 1 {\displaystyle C^{1}} diffeomorphism. As a corollary of this theorem, we may compute the Radon–Nikodym derivatives of both the pullback and pushforward measures of m {\displaystyle m} under T {\displaystyle T} . The pullback measure in terms of a transformation T {\displaystyle T} is defined as T ∗ μ := μ ( T ( A ) ) {\displaystyle T^{*}\mu :=\mu (T(A))} . The change of variables formula for pullback measures is ∫ T ( Ω ) g d μ = ∫ Ω g ∘ T d T ∗ μ {\displaystyle \int _{T(\Omega )}gd\mu =\int _{\Omega }g\circ TdT^{*}\mu } . Pushforward measure and transformation formula The pushforward measure in terms of a transformation T {\displaystyle T} , is defined as T ∗ μ := μ ( T − 1 ( A ) ) {\displaystyle T_{*}\mu :=\mu (T^{-1}(A))} . 
The change of variables formula for pushforward measures is ∫ Ω g ∘ T d μ = ∫ T ( Ω ) g d T ∗ μ {\displaystyle \int _{\Omega }g\circ Td\mu =\int _{T(\Omega )}gdT_{*}\mu } . As a corollary of the change of variables formula for Lebesgue measure, we have that From which we may obtain Variable changes for differentiation and integration are taught in elementary calculus and the steps are rarely carried out in full. The very broad use of variable changes is apparent when considering differential equations, where the independent variables may be changed using the chain rule or the dependent variables are changed resulting in some differentiation to be carried out. Exotic changes, such as the mingling of dependent and independent variables in point and contact transformations , can be very complicated but allow much freedom. Very often, a general form for a change is substituted into a problem and parameters picked along the way to best simplify the problem. Probably the simplest change is the scaling and shifting of variables, that is replacing them with new variables that are "stretched" and "moved" by constant amounts. This is very common in practical applications to get physical parameters out of problems. For an n th order derivative, the change simply results in where This may be shown readily through the chain rule and linearity of differentiation. This change is very common in practical applications to get physical parameters out of problems, for example, the boundary value problem describes parallel fluid flow between flat solid walls separated by a distance δ; μ is the viscosity and d p / d x {\displaystyle dp/dx} the pressure gradient , both constants. By scaling the variables the problem becomes where Scaling is useful for many reasons. It simplifies analysis both by reducing the number of parameters and by simply making the problem neater. Proper scaling may normalize variables, that is make them have a sensible unitless range such as 0 to 1. Finally, if a problem mandates numeric solution, the fewer the parameters the fewer the number of computations. Consider a system of equations for a given function H ( x , v ) {\displaystyle H(x,v)} . The mass can be eliminated by the (trivial) substitution Φ ( p ) = 1 / m ⋅ p {\displaystyle \Phi (p)=1/m\cdot p} . Clearly this is a bijective map from R {\displaystyle \mathbb {R} } to R {\displaystyle \mathbb {R} } . Under the substitution v = Φ ( p ) {\displaystyle v=\Phi (p)} the system becomes Given a force field φ ( t , x , v ) {\displaystyle \varphi (t,x,v)} , Newton 's equations of motion are Lagrange examined how these equations of motion change under an arbitrary substitution of variables x = Ψ ( t , y ) {\displaystyle x=\Psi (t,y)} , v = ∂ Ψ ( t , y ) ∂ t + ∂ Ψ ( t , y ) ∂ y ⋅ w . {\displaystyle v={\frac {\partial \Psi (t,y)}{\partial t}}+{\frac {\partial \Psi (t,y)}{\partial y}}\cdot w.} He found that the equations are equivalent to Newton's equations for the function L = T − V {\displaystyle L=T-V} , where T is the kinetic, and V the potential energy. In fact, when the substitution is chosen well (exploiting for example symmetries and constraints of the system) these equations are much easier to solve than Newton's equations in Cartesian coordinates.
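The derivative rule alluded to in the scaling-and-shifting discussion above can be written out explicitly. The following is a sketch of the standard result (the article's own displayed formulas did not survive extraction, so the notation here, with hatted symbols for the new variables, is an assumption). If the variables are stretched and moved by constant amounts,

{\displaystyle x=x_{s}\,{\hat {x}}+x_{0},\qquad y=y_{s}\,{\hat {y}}+y_{0},}

then each derivative simply picks up a constant factor,

{\displaystyle {\frac {d^{n}y}{dx^{n}}}={\frac {y_{s}}{x_{s}^{\,n}}}\,{\frac {d^{n}{\hat {y}}}{d{\hat {x}}^{\,n}}},}

which follows from the chain rule and the linearity of differentiation.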
https://en.wikipedia.org/wiki/Scaling_and_shifting
In spatial ecology and macroecology , scaling pattern of occupancy ( SPO ), also known as the area-of-occupancy ( AOO ) is the way in which species distribution changes across spatial scales. In physical geography and image analysis , it is similar to the modifiable areal unit problem . Simon A. Levin (1992) [ 1 ] states that the problem of relating phenomena across scales is the central problem in biology and in all of science . Understanding the SPO is thus one central theme in ecology. This pattern is often plotted as log-transformed grain (cell size) versus log-transformed occupancy. Kunin (1998) [ 2 ] presented a log-log linear SPO and suggested a fractal nature for species distributions. It has since been shown to follow a logistic shape, reflecting a percolation process. Furthermore, the SPO is closely related to the intraspecific occupancy-abundance relationship . For instance, if individuals are randomly distributed in space, the number of individuals in an α -size cell follows a Poisson distribution , with the occupancy being P α = 1 − exp(− μα ), where μ is the density. [ 3 ] Clearly, P α in this Poisson model for randomly distributed individuals is also the SPO. Other probability distributions, such as the negative binomial distribution , can also be applied for describing the SPO and the occupancy-abundance relationship for non-randomly distributed individuals. [ 4 ] Other occupancy-abundance models that can be used to describe the SPO includes Nachman's exponential model, [ 5 ] Hanski and Gyllenberg's metapopulation model, [ 6 ] He and Gaston's [ 7 ] improved negative binomial model by applying Taylor's power law between the mean and variance of species distribution, [ 8 ] and Hui and McGeoch's droopy-tail percolation model. [ 9 ] One important application of the SPO in ecology is to estimate species abundance based on presence-absence data, or occupancy alone. [ 10 ] This is appealing because obtaining presence-absence data is often cost-efficient. Using a dipswitch test consisting of 5 subtests and 15 criteria, Hui et al. [ 11 ] confirmed that using the SPO is robust and reliable for assemblage-scale regional abundance estimation . The other application of SPOs includes trends identification in populations, which is extremely valuable for biodiversity conservation . [ 12 ] Models providing explanations to the observed scaling pattern of occupancy include the fractal model, the cross-scale model and the Bayesian estimation model. The fractal model can be configured by dividing the landscape into quadrats of different sizes, [ 13 ] [ 14 ] or bisecting into grids with special width-to-length ratio (2:1), [ 15 ] [ 16 ] and yields the following SPO: where D is the box-counting fractal dimension . If during each step a quadrat is divided into q sub-quadrats, we will find a constant portion ( f ) of sub-quadrats is also present in the fractal model, i.e. D = 2(1 + log ƒ /log q ). Since this assumption that f is scale independent is not always the case in nature, [ 17 ] a more general form of ƒ can be assumed, ƒ = q − λ ( λ is a constant), which yields the cross-scale model: [ 18 ] The Bayesian estimation model follows a different way of thinking. Instead of providing the best-fit model as above, the occupancy at different scales can be estimated by Bayesian rule based on not only the occupancy but also the spatial autocorrelation at one specific scale. For the Bayesian estimation model, Hui et al. 
[ 19 ] provide the following formula to describe the SPO and join-count statistics of spatial autocorrelation: where Ω = p ( a ) 0 − q ( a ) 0/+ p ( a ) + and ℧ {\displaystyle \mho } = p ( a ) 0 (1 − p ( a ) + 2 (2 q ( a ) +/+ − 3) + p(a) + ( q ( a ) +/+ 2 − 3)). p ( a ) + is occupancy; q ( a ) +/+ is the conditional probability that a randomly chosen adjacent quadrat of an occupied quadrat is also occupied. The conditional probability q ( a ) 0/+ = 1 − q ( a ) +/+ is the absence probability in a quadrate adjacent to an occupied one; a and 4 a are the grains. The R-code of the Bayesian estimation model has been provided elsewhere. The key point of the Bayesian estimation model is that the scaling pattern of species distribution, measured by occupancy and spatial pattern, can be extrapolated across scales. Later on, Hui [ 20 ] provides the Bayesian estimation model for continuously changing scales: where b , c , and h are constants. This SPO becomes the Poisson model when b = c = 1. In the same paper, the scaling pattern of join-count spatial autocorrelation and multi-species association (or co-occurrence ) were also provided by the Bayesian model, suggesting that " the Bayesian model can grasp the statistical essence of species scaling patterns. " The probability of species extinction and ecosystem collapse increases rapidly as range size declines. In risk assessment protocols such as the IUCN Red List of Species or the IUCN Red List of Ecosystems , area of occupancy (AOO) is used as a standardized, complementary and widely applicable measure of risk spreading against spatially explicit threats. [ 21 ] [ 22 ]
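Returning to the Poisson model quoted earlier in this article, P_α = 1 − exp(−μα), the following minimal Python sketch (illustrative density and grain sizes, not data from the article) shows how occupancy rises with grain when individuals are randomly distributed in space.

import math

def poisson_occupancy(mu, alpha):
    # P_alpha = 1 - exp(-mu * alpha): probability that an alpha-sized cell
    # contains at least one individual when individuals are placed at random.
    return 1.0 - math.exp(-mu * alpha)

mu = 0.5                                  # mean density (individuals per unit area), illustrative
for alpha in [0.25, 1.0, 4.0, 16.0]:      # coarsening grain (cell) sizes
    print(alpha, round(poisson_occupancy(mu, alpha), 4))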
https://en.wikipedia.org/wiki/Scaling_pattern_of_occupancy
In physics, the scallop theorem states that a swimmer that performs a reciprocal motion cannot achieve net displacement in a low- Reynolds number Newtonian fluid environment, i.e. a fluid that is highly viscous . Such a swimmer deforms its body into a particular shape through a sequence of motions and then reverts to the original shape by going through the sequence in reverse. At low Reynolds number, time or inertia does not come into play, and the swimming motion is purely determined by the sequence of shapes that the swimmer assumes. Edward Mills Purcell stated this theorem in his 1977 paper Life at Low Reynolds Number explaining physical principles of aquatic locomotion . [ 1 ] The theorem is named for the motion of a scallop which opens and closes a simple hinge during one period. Such motion is not sufficient to create migration at low Reynolds numbers. The scallop is an example of a body with one degree of freedom to use for motion. Bodies with a single degree of freedom deform in a reciprocal manner and subsequently, bodies with one degree of freedom do not achieve locomotion in a highly viscous environment. The scallop theorem is a consequence of the subsequent forces applied to the organism as it swims from the surrounding fluid. For an incompressible Newtonian fluid with density ρ {\displaystyle \rho } and dynamic viscosity η {\displaystyle \eta } , the flow satisfies the Navier–Stokes equations : where u {\displaystyle \mathbf {u} } denotes the velocity of the fluid. However, at the low Reynolds number limit, the inertial terms of the Navier-Stokes equations on the left-hand side tend to zero. This is made more apparent by nondimensionalizing the Navier–Stokes equations . By defining a characteristic velocity and length, u 0 {\displaystyle u_{0}} and L {\displaystyle L} , we can cast our variables to dimensionless form: where the dimensionless pressure is appropriately scaled for flow with significant viscous effects. Plugging these quantities into the Navier-Stokes equations gives us: And by rearranging terms, we arrive at a dimensionless form: where Re = ρ u 0 L / η {\displaystyle {\text{Re}}=\rho u_{0}L/\eta } is the Reynolds number. In the low Reynolds number limit (as R e → 0 {\displaystyle \mathrm {Re} \rightarrow 0} ), the LHS tends to zero and we arrive at a dimensionless form of Stokes equations. Redimensionalizing yields: The consequences of having no inertial terms at low Reynolds number are: In particular, for a swimmer moving in the low Reynolds number regime, its motion satisfies: This is closer in spirit to the proof sketch given by Purcell. [ 1 ] The key result is to show that a swimmer in a Stokes fluid does not depend on time. That is, a one cannot detect if a movie of a swimmer motion is slowed down, sped up, or reversed. The other results then are simple corollaries. The stress tensor of the fluid is σ i j = − p δ i j + μ ( ∂ i u j + ∂ j u i ) {\displaystyle \sigma _{ij}=-p\delta _{ij}+\mu (\partial _{i}u_{j}+\partial _{j}u_{i})} . Let r {\displaystyle r} be a nonzero real constant. Suppose we have a swimming motion, then we can do the following scaling: p ↦ r p ; u ↦ r u ; σ ↦ r σ {\displaystyle p\mapsto rp;\quad u\mapsto ru;\quad \sigma \mapsto r\sigma } and obtain another solution to the Stokes equation. That is, if we scale hydrostatic pressure, flow-velocity, and stress tensor all by r {\displaystyle r} , we still obtain a solution to the Stokes equation. 
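As a brief numerical aside on the low-Reynolds-number premise used above (the scaling argument continues below), the following sketch evaluates Re = ρu₀L/η for order-of-magnitude values typical of a bacterium in water; the particular numbers are assumptions chosen for illustration, not values from the article.

def reynolds_number(rho, u0, L, eta):
    # Re = rho * u0 * L / eta, the dimensionless group defined above.
    return rho * u0 * L / eta

rho = 1000.0    # kg/m^3, water
u0 = 30e-6      # m/s, ~30 micrometres per second swimming speed
L = 2e-6        # m, ~2 micrometre body length
eta = 1e-3      # Pa*s, water at room temperature

print(reynolds_number(rho, u0, L, eta))   # ~6e-5, far below 1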
Since the motion is in the low Reynolds number regime, inertial forces are negligible, and the instantaneous total force and torque on the swimmer must both balance to zero. Since the instantaneous total force and torque on the swimmer is computed by integrating the stress tensor σ {\displaystyle \sigma } over its surface, the instantaneous total force and torque increase by r {\displaystyle r} as well, which are still zero. Thus, scaling both the swimmer's motion and the motion of the surrounding fluid scales by the same factor, we still obtain a motion that respects the Stokes equation. The proof of the scallop theorem can be represented in a mathematically elegant way. To do this, we must first understand the mathematical consequences of the linearity of Stokes equations. To summarize, the linearity of Stokes equations allows us to use the reciprocal theorem to relate the swimming velocity of the swimmer to the velocity field of the fluid around its surface (known as the swimming gait), which changes according to the periodic motion it exhibits. This relation allows us to conclude that locomotion is independent of swimming rate. Subsequently, this leads to the discovery that reversal of periodic motion is identical to the forward motion due to symmetry, allowing us to conclude that there can be no net displacement. [ 3 ] The reciprocal theorem describes the relationship between two Stokes flows in the same geometry where inertial effects are insignificant compared to viscous effects. Consider a fluid filled region V {\displaystyle V} bounded by surface S {\displaystyle S} with a unit normal n ^ {\displaystyle {\hat {\mathbf {n} }}} . Suppose we have solutions to Stokes equations in the domain V {\displaystyle V} possessing the form of the velocity fields u {\displaystyle \mathbf {u} } and u ′ {\displaystyle \mathbf {u} '} . The velocity fields harbor corresponding stress fields σ {\displaystyle \mathbf {\sigma } } and σ ′ {\displaystyle \mathbf {\sigma } '} respectively. Then the following equality holds: The reciprocal theorem allows us to obtain information about a certain flow by using information from another flow. This is preferable to solving Stokes equations, which is difficult due to not having a known boundary condition. This particularly useful if one wants to understand flow from a complicated problem by studying the flow of a simpler problem in the same geometry. One can use the reciprocal theorem to relate the swimming velocity, U {\displaystyle \mathbf {U} } , of a swimmer subject to a force F {\displaystyle \mathbf {F} } to its swimming gait u S {\displaystyle \mathbf {u} _{S}} : Now that we have established that the relationship between the instantaneous swimming velocity in the direction of the force acting on the body and its swimming gait follow the general form where u S ≡ r S ˙ = d r S / d t {\displaystyle \mathbf {u} _{S}\equiv {\dot {\mathbf {r} _{S}}}=\mathrm {d} \mathbf {r} _{S}/\mathrm {d} t} and r S {\displaystyle \mathbf {r} _{S}} denote the positions of points on the surface of the swimmer, we can establish that locomotion is independent of rate. Consider a swimmer that deforms in a periodic fashion through a sequence of motions between the times t 0 {\displaystyle t_{0}} and t 1 . {\displaystyle t_{1}.} The net displacement of the swimmer is Now consider the swimmer deforming in the same manner but at a different rate. 
We describe this with the mapping Using this mapping, we see that This result means that the net distance traveled by the swimmer does not depend on the rate at which it is being deformed, but only on the geometrical sequence of shape. This is the first key result. If a swimmer is moving in a periodic fashion that is time invariant, we know that the average displacement during one period must be zero. To illustrate the proof, let us consider a swimmer deforming during one period that starts and ends at times t 0 {\displaystyle t_{0}} and t 1 {\displaystyle t_{1}} . That means its shape at the start and end are the same, i.e. r S ( t 0 ) = r S ( t 1 ) . {\displaystyle \mathbf {r} _{S}(t_{0})=\mathbf {r} _{S}(t_{1}).} Next, we consider motion obtained by time-reversal symmetry of the first motion that occurs during the period starting and ending at times t 2 {\displaystyle t_{2}} and t 3 . {\displaystyle t_{3}.} using a similar mapping as in the previous section, we define t 2 = f ( t 1 ) {\displaystyle t_{2}=f(t_{1})} and t 3 = f ( t 0 ) {\displaystyle t_{3}=f(t_{0})} and define the shape in the reverse motion to be the same as the shape in the forward motion, r S ( t ) = r ′ S ( t ′ ) . {\displaystyle \mathbf {r} _{S}(t)=\mathbf {r'} _{S}(t').} Now we find the relationship between the net displacements in these two cases: This is the second key result. Combining with our first key result from the previous section, we see that Δ X ′ = Δ X = − Δ X → Δ X = 0. {\displaystyle \Delta X'=\Delta X=-\Delta X\rightarrow \Delta X=0.} We see that a swimmer that reverses its motion by reversing its sequence of shape changes leads to the opposite distance traveled. In addition, since the swimmer exhibits reciprocal body deformation, the sequence of motion is the same between t 2 {\displaystyle t_{2}} and t 3 {\displaystyle t_{3}} and t 0 {\displaystyle t_{0}} and t 1 . {\displaystyle t_{1}.} Thus, the distance traveled should be the same independently of the direction of time, meaning that reciprocal motion cannot be used for net motion in low Reynolds number environments. The scallop theorem holds if we assume that a swimmer undergoes reciprocal motion in an infinite quiescent Newtonian fluid in the absence of inertia and external body forces. However, there are instances where the assumptions for the scallop theorem are violated. [ 4 ] In one case, successful swimmers in viscous environments must display non-reciprocal body kinematics. In another case, if a swimmer is in a non-Newtonian fluid , locomotion can be achieved as well. In his original paper, Purcell proposed a simple example of non-reciprocal body deformation, now commonly known as the Purcell swimmer. This simple swimmer possess two degrees of freedom for motion: a two-hinged body composed of three rigid links rotating out-of-phase with each other. However, any body with more than one degree of freedom of motion can achieve locomotion as well. In general, microscopic organisms like bacteria have evolved different mechanisms to perform non-reciprocal motion: Geometrically, the rotating flagellum is a one-dimensional swimmer, and it works because its motion is going around a circle-shaped configuration space, and a circle is not a reciprocating motion. The flexible arm is a multi-dimensional swimmer, and it works because its motion is going around a circle in a square-shaped configuration space. Notice that the first kind of motion has nontrivial homotopy , but the second kind has trivial homotopy. 
The assumption of a Newtonian fluid is essential since Stokes equations will not remain linear and time-independent in an environment that possesses complex mechanical and rheological properties. It is also common knowledge that many living microorganisms live in complex non-Newtonian fluids, which are common in biologically relevant environments. For instance, crawling cells often migrate in elastic polymeric fluids. Non-Newtonian fluids have several properties that can be manipulated to produce small scale locomotion. [ 4 ] First, one such exploitable property is normal stress differences. These differences will arise from the stretching of the fluid by the flow of the swimmer. Another exploitable property is stress relaxation. Such time evolution of such stresses contain a memory term, though the extent in which this can be utilized is largely unexplored. Last, non-Newtonian fluids possess viscosities that are dependent on the shear rate. In other words, a swimmer would experience a different Reynolds number environment by altering its rate of motion. Many biologically relevant fluids exhibit shear-thinning, meaning viscosity decreases with shear rate. In such an environment, the rate at which a swimmer exhibits reciprocal motion would be significant as it would no longer be time invariant. This is in stark contrast to what we established where the rate in which a swimmer moves is irrelevant for establishing locomotion. Thus, a reciprocal swimmer can be designed in a non-Newtonian fluid. Qiu et al . (2014) were able to design a micro scallop in a non-Newtonian fluid. [ 8 ]
https://en.wikipedia.org/wiki/Scallop_theorem
Scam call centers operate in Ukraine, [ 1 ] [ 2 ] including in major cities such as Kyiv and Dnipro. These organized cells specialize in fraudulent schemes aimed at extracting funds from citizens. Under Article 190 of the Criminal Code of Ukraine, the activities of these call centers are illegal. [ 3 ] According to the Russian Sberbank, the "capital" of phone fraud is Dnipro, [ 4 ] and up to 95% of scam calls to Russians originate from Ukraine. [ 5 ] Scammers may call individuals, posing as bank and mobile operator security personnel, to obtain card information and SMS codes for the purpose of misappropriating victims' funds. After obtaining data supposedly meant to prevent fraud, scammers siphon money from victims and transfer it to specific accounts. The operations of such groups are not confined to Ukraine and Russia but target countries with a significant Russian-speaking population, such as Kazakhstan, Poland, and the Czech Republic. However, there are also call centers focused on English-speaking countries. [ 2 ] According to the Geneva-based Global Initiative, which monitors such activities, the proceeds from fraudulent call centers often do not contribute to the development of the Ukrainian economy but are transferred to offshore accounts or cryptocurrencies to avoid taxation. [ 1 ] International media and organizations such as The Times of Israel, Dagens Nyheter, and the Organized Crime and Corruption Reporting Project (OCCRP) have exposed information about scam call centers based in Kyiv. One of the companies associated with these call centers is named "Milton Group". The Russian government has said that Ukrainian scam call centers have played a significant role in the many unusual fires that have occurred in Russia since its invasion of Ukraine in February 2022. In December 2022, the Russian Federal Security Service (FSB) issued a warning about Ukrainian scam calls, saying that scammers "incline gullible citizens to commit arson attacks on social infrastructure facilities, as well as cars in crowded places". According to the FSB, most of these arsonists were told that they were participating in an operation to catch pro-Ukrainian criminals. The BBC believes that the claim is not supported by concrete evidence. [ 6 ] [ 7 ] In August 2023, the Russian Prosecutor General's Office and the Ministry of Internal Affairs issued official warnings about a new form of phone fraud in which Russians are forced to set fire to military enlistment offices through pressure or deception. The authorities claim that scammers call from the territory of Ukraine and choose elderly Russians as their victims. The Russian government has not yet offered any evidence of these claims. [ 8 ] [ 9 ] Russian business newspaper Kommersant claims that fraudsters support the Armed Forces of Ukraine and organize "terrorist attacks". [ 10 ]
https://en.wikipedia.org/wiki/Scam_call_centers_in_Ukraine
The ScanPyramids [ 1 ] mission is an Egyptian-International project designed and led by Cairo University and the French HIP Institute (Heritage Innovation Preservation). [ 2 ] This project aims at scanning Old Kingdom Egyptian Pyramids ( Khufu , Khafre , the Bent and the Red ) to detect the presence of unknown internal voids and structures. [ 3 ] The project, launched in October 2015, [ 4 ] combines several non-invasive and non-destructive techniques which may help to get a better understanding of their structure and their construction processes and techniques. [ clarification needed ] The team was using Infrared thermography , muon tomography , 3D simulation and reconstruction techniques. [ 5 ] [ 6 ] ScanPyramids is an interdisciplinary project mixing art, science and technology. [ 7 ] On November 2, 2017, the ScanPyramids team announced, through a publication in Nature , [ 8 ] its third discovery in the Great Pyramid , a "plane-sized" [ 9 ] previously unknown void named the "ScanPyramids Big Void". [ 10 ] On October 15, 2016, ScanPyramids confirmed their first unknown void discoveries thanks to muon tomography in the Great Pyramid . [ 11 ] [ 12 ] A previously unknown cavity [ 13 ] was confirmed on the North-Eastern Edge, [ 14 ] roughly at 110 metres (360 ft) high with similar void volume characteristics as a known "cave" located at 83 metres (272 ft) on the same edge. [ 15 ] A second void was discovered behind the chevrons area of Khufu 's North Face above the Descending Corridor (referred to as "SP-NFC" in papers). This area was investigated after thermal anomalies observation that led the team to position muon emulsion plates in the Descending Corridor. [ 5 ] [ 16 ] This void was further investigated during 2017 to provide more information about its shape, size, and exact position. [ 5 ] [ 17 ] In 2017 more muon-sensitive emulsion plates were positioned in the descending corridor and in Al-Mamun's tunnel [ broken anchor ] . The void behind the chevrons could be confirmed through different points of view and its characteristics refined. Named "ScanPyramids North-Face Corridor" (SP-NFC), this void is located between 17 and 23 metres (56 and 75 ft) from the Great Pyramid 's ground level, between 0.7 and 2 metres (2 ft 4 in and 6 ft 7 in) from the North Face. It could be horizontal or sloping upwards and it has a corridor-like shape. [ 19 ] On November 2, 2017, the ScanPyramids team published its third discovery in Nature , which was named "ScanPyramids Big Void", or "SP-BV" for short. It describes a newly discovered huge void in a circumscribed area above the Grand Gallery . It is estimated to have a length of at least 30 metres (98 ft) and a similar cross-section as the Grand Gallery . The ScanPyramids Big Void has been observed by three teams of physicists from different points of view (2 points of view in the Queen's Chamber and from outside in front of the North Face). [ 8 ] Three scientific institutions specializing in particle physics have worked independently and each one used a different and complementary muography technique: Like the work done on the "ScanPyramids North Face Corridor", more muography observations, from new viewpoints, need to be conducted in order to better determine the Big Void's shape, so that functional inferences can be drawn. [ 22 ] [ unreliable source? ] [ 23 ] As long the exact layout and function of the void is still unknown, the scientists have been cautious about using architectural nomenclature. 
[ 24 ] In March 2023, the team published [ 25 ] its finding of the North Facing Corridor (NFC) behind the original entrance ; the void is called "SP-NFC" (ScanPyramids - North Face Corridor) in the paper. [ 26 ] On November 2, 2017, the Egyptologist Zahi Hawass told the New York Times : "They found nothing...This paper offers nothing to Egyptology. Zero." [ 27 ] Though this later contrasted his comments once the North Face Corridor was inspected with an Edoscope, commenting that this represented a "major discovery" that would "enter houses and homes of people all over the world for the first time". [ 28 ] On November 3, 2017, Egypt's Ministry of Tourism and Antiquities said in a statement, "The existence of several void spaces inside the pyramid is not a new thing. Egyptologists and scholars knew about it several years ago," adding, "the ministry sees that the ScanPyramid team should not have rushed [ sic ] to publish their findings in media at that stage of their research because it requires more research and it is too early to say that there was a new discovery." [ 29 ] On November 4, Khaled al-Anany , Egyptian Minister of Antiquities said, during a press conference, that the void space found inside the Great Pyramid of Khufu by the ScanPyramids project is a new revelation that brought the world's attention to Egypt . He added "What was discovered is new and larger than the known cavities, and we'll continue in our scientific steps". [ 30 ] Other Egyptologists have welcomed the discovery. Yukinori Kawae told National Geographic "This is definitely the discovery of the century...There have been many hypotheses about the pyramid, but no one even imagined that such a big void is located above the Grand Gallery ." [ 31 ] [ 32 ] Peter der Manuelian , from Harvard University , said that "This is an exciting new discovery, and potentially a major contribution to our knowledge about the Great Pyramid ." [ 33 ] [ 23 ] Lee Thompson, an expert in particle physics at the University of Sheffield (UK) told Science : "The scientists have "seen" the void using three different muon detectors in three independent experiments, which makes their finding very robust." [ 34 ] Christopher Morris, physicist at Los Alamos National Laboratory called the findings "pretty amazing". [ 35 ] Jerry Anderson who worked on Khafre's Pyramid and was a member of the team of Luis Walter Alvarez , the first scientist to use muography inside a pyramid in 1965, [ 36 ] said to Los Angeles Times , with a laugh: "I am very excited and very pleased,...I wish we had worked in the Great Pyramid , now that I look back on it". [ 37 ] This discovery has been featured in many international media as one of the top discoveries of the year 2017 ( NBC News , [ 38 ] Euronews , [ 39 ] Physics World , [ 40 ] Science News , [ 41 ] Global News , [ 42 ] Gizmodo , [ 43 ] Business Insider , [ 44 ] Altmetric , [ 45 ] Egypt Today , [ 46 ] NBC , [ 47 ] MSN News , [ 48 ] Le Monde , [ 49 ] CTV , [ 50 ] The Franklin Institute , [ 51 ] Radio Canada , [ 52 ] Sciences et Avenir , [ 53 ] RTÉ , [ 54 ] PBS , [ 55 ] Yahoo , [ 56 ] La Vanguardia , [ 57 ] France Info [ 58 ] ).
https://en.wikipedia.org/wiki/ScanPyramids
The Scandinavian Logic Society, abbreviated as SLS, is a not-for-profit organization whose objective is to organize, promote, and support logic-related events and other activities of relevance for the development of logic-related research and education in the Nordic Region of Europe. [ 1 ] The society is a member of the Division of Logic, Methodology and Philosophy of Science and Technology. The SLS was founded on 20 August 2012, at the 8th Scandinavian Logic Symposium in Roskilde, Denmark. Today the society has its seat in Stockholm, Sweden. It unites academics from Denmark, Finland, Iceland, Norway and Sweden working primarily on the theory and applications of logic to computer science, philosophy, mathematics and linguistics. The SLS is led by an Executive Committee. The Society organizes regular Scandinavian Logic Symposia (SLSS) every 2–4 years on a geographically rotating principle. The primary aim of the Symposium is to promote research in the field of logic (broadly conceived) carried out in research communities in Scandinavia. The 11th symposium, scheduled for 2020 in Bergen, Norway, was postponed to 2022 due to the COVID-19 pandemic. The Society also organizes regular Nordic Logic Schools every 2–4 years. The intended audience is advanced master's students, PhD students, postdocs and experienced researchers wishing to learn the state of the art in a particular subject. The 4th summer school, scheduled for 2020 in Bergen, Norway, was postponed to 2022 due to the COVID-19 pandemic. General meetings of the Society are held regularly during the Scandinavian Logic Symposium. Membership in the SLS is open to all interested persons who agree with and support the objectives of the Society.
https://en.wikipedia.org/wiki/Scandinavian_Logic_Society
Scandium(III) sulfide is a chemical compound of scandium and sulfur with the chemical formula Sc 2 S 3 . It is a yellow solid. The crystal structure of Sc 2 S 3 is closely related to that of sodium chloride , in that it is based on a cubic close packed array of anions. Whereas NaCl has all the octahedral interstices in the anion lattice occupied by cations, Sc 2 S 3 has one third of them vacant. The vacancies are ordered, but in a very complicated pattern, leading to a large, orthorhombic unit cell belonging to the space group Fddd. [ 1 ] Metal sulfides are usually prepared by heating mixtures of the two elements, but in the case of scandium, this method yields scandium monosulfide , ScS. Sc 2 S 3 can be prepared by heating scandium(III) oxide under flowing hydrogen sulfide in a graphite crucible to 1550 °C or above for 2–3 hours. The crude product is then purified by chemical vapor transport at 950 °C using iodine as the transport agent. [ 1 ] Scandium(III) sulfide can be prepared by reacting scandium(III) chloride with dry hydrogen sulfide at elevated temperature: [ 2 ] Above 1100 °C, Sc 2 S 3 loses sulfur, forming nonstoichiometric compounds such as Sc 1.37 S 2 . [ 1 ]
https://en.wikipedia.org/wiki/Scandium(III)_sulfide
Scandium sulfate is the scandium salt of sulfuric acid and has the formula Sc 2 (SO 4 ) 3 .
https://en.wikipedia.org/wiki/Scandium_sulfate
In condensed matter physics , scanning SQUID microscopy is a technique where a superconducting quantum interference device (SQUID) is used to image surface magnetic field strength with micrometre -scale resolution . A tiny SQUID is mounted onto a tip which is then rastered near the surface of the sample to be measured. As the SQUID is the most sensitive detector of magnetic fields available and can be constructed at submicrometre widths via lithography , the scanning SQUID microscope allows magnetic fields to be measured with unparalleled resolution and sensitivity. The first scanning SQUID microscope was built in 1992 by Black et al. [ 2 ] Since then the technique has been used to confirm unconventional superconductivity in several high-temperature superconductors including YBCO and BSCCO compounds. The scanning SQUID microscope is based upon the thin-film DC SQUID. A DC SQUID consists of superconducting electrodes in a ring pattern connected by two weak-link Josephson junctions (see figure). Above the critical current of the Josephson junctions, the idealized difference in voltage between the electrodes is given by [ 3 ] V = R√(I² − I₀²), where R is the resistance between the electrodes, I is the current , I 0 is the maximum supercurrent (equal to 2 I c |cos(πΦ/Φ 0 )|), I c is the critical current of the Josephson junctions, Φ is the total magnetic flux through the ring, and Φ 0 is the magnetic flux quantum . Hence, a DC SQUID can be used as a flux-to-voltage transducer . However, as noted by the figure, the voltage across the electrodes oscillates sinusoidally with respect to the amount of magnetic flux passing through the device. As a result, a SQUID alone can only be used to measure the change in magnetic field from some known value, unless the magnetic field or device size is very small such that Φ < Φ 0 . To use the DC SQUID to measure standard magnetic fields, one must either count the number of oscillations in the voltage as the field is changed, which is very difficult in practice, or use a separate DC bias magnetic field parallel to the device to maintain a constant voltage and consequently constant magnetic flux through the loop. The strength of the field being measured will then be equal to the strength of the bias magnetic field passing through the SQUID. Although it is possible to read the DC voltage between the two terminals of the SQUID directly, because noise tends to be a problem in DC measurements, an alternating current technique is used. In addition to the DC bias magnetic field, an AC magnetic field of constant amplitude, with field strength generating Φ << Φ 0 , is also applied through the bias coil. This AC field produces an AC voltage with amplitude proportional to the DC component in the SQUID. The advantage of this technique is that the frequency of the voltage signal can be chosen to be far away from that of any potential noise sources. By using a lock-in amplifier the device can read only the frequency corresponding to the magnetic field, ignoring many other sources of noise. A Scanning SQUID Microscope is a sensitive near-field imaging system for the measurement of weak magnetic fields by moving a Superconducting Quantum Interference Device ( SQUID ) across an area. The microscope can map out buried current-carrying wires by measuring the magnetic fields produced by the currents, or can be used to image fields produced by magnetic materials.
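To make the flux-to-voltage behaviour described above concrete, the following short sketch (in Python) evaluates an idealized DC SQUID response of the form V = R·sqrt(I² − I₀(Φ)²) with I₀(Φ) = 2·I_c·|cos(πΦ/Φ₀)|. Both the functional form and the numerical parameters are illustrative assumptions rather than values taken from the article; the point is only the Φ₀-periodic response that makes a bare SQUID a relative, rather than absolute, field sensor.

import numpy as np

# Illustrative (assumed) device parameters -- not from the article.
PHI_0  = 2.07e-15   # magnetic flux quantum, Wb
R      = 5.0        # resistance between the electrodes, ohms
I_C    = 10e-6      # critical current of each Josephson junction, A
I_BIAS = 25e-6      # constant bias current above the critical current, A

def squid_voltage(flux):
    """Idealized DC-SQUID voltage for a given total flux through the loop."""
    # Flux-dependent maximum supercurrent of the two-junction loop.
    i0 = 2 * I_C * np.abs(np.cos(np.pi * flux / PHI_0))
    # Simple RSJ-style expression; zero voltage below the maximum supercurrent.
    return R * np.sqrt(np.clip(I_BIAS**2 - i0**2, 0.0, None))

# The response repeats every Phi_0, which is why a SQUID alone measures only
# *changes* in flux unless the total flux stays below one flux quantum.
for flux in np.linspace(0, 2 * PHI_0, 9):
    print(f"Phi = {flux / PHI_0:4.2f} Phi_0  ->  V = {squid_voltage(flux) * 1e6:6.1f} uV")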
By mapping out the current in an integrated circuit or a package, short circuits can be localized and chip designs can be verified to see that current is flowing where expected. As the SQUID material must be superconducting, measurements must be performed at low temperatures. Typically, experiments are carried out below liquid helium temperature (4.2 K) in a helium-3 refrigerator or dilution refrigerator . However, advances in high-temperature superconductor thin-film growth have allowed relatively inexpensive liquid nitrogen cooling to instead be used. It is even possible to measure room-temperature samples by only cooling a high- T c SQUID and maintaining thermal separation from the sample. In either case, due to the extreme sensitivity of the SQUID probe to stray magnetic fields, in general some form of magnetic shielding is used. Most common is a shield made of mu-metal , possibly in combination with a superconducting "can" (all superconductors repel magnetic fields via the Meissner effect ). The actual SQUID probe is generally made via thin-film deposition with the SQUID area outlined via lithography . A wide variety of superconducting materials can be used, but the two most common are niobium , due to its relatively good resistance to damage from thermal cycling , and YBCO , for its high T c > 77 K and relative ease of deposition compared to other high- T c superconductors. In either case, a superconductor with a critical temperature higher than the operating temperature should be chosen. The SQUID itself can be used as the pickup coil for measuring the magnetic field, in which case the resolution of the device is proportional to the size of the SQUID. However, currents in or near the SQUID generate magnetic fields which are then registered in the coil and can be a source of noise. To reduce this effect it is also possible to make the size of the SQUID itself very small, but attach the device to a larger external superconducting loop located far from the SQUID. The flux through the loop will then be detected and measured, inducing a voltage in the SQUID. The resolution and sensitivity of the device are both proportional to the size of the SQUID. A smaller device will have greater resolution but less sensitivity. The change in voltage induced is proportional to the inductance of the device, and limitations in the control of the bias magnetic field as well as electronics issues prevent a perfectly constant voltage from being maintained at all times. However, in practice, the sensitivity in most scanning SQUID microscopes is sufficient for almost any SQUID size for many applications, and therefore the tendency is to make the SQUID as small as possible to enhance resolution. Via e-beam lithography techniques it is possible to fabricate devices with a total area of 1–10 μm 2 , although devices in the tens to hundreds of square micrometres are more common. The SQUID itself is mounted onto a cantilever and operated either in direct contact with or just above the sample surface. The position of the SQUID is usually controlled by some form of electric stepping motor . Depending on the particular application, different levels of precision may be required in the height of the apparatus. Operating at lower tip-sample distances increases the sensitivity and resolution of the device, but requires more advanced mechanisms for controlling the height of the probe. In addition, such devices require extensive vibration damping if precise height control is to be maintained.
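A rough back-of-the-envelope sketch of the resolution-versus-sensitivity trade-off noted above: the flux threading a pickup loop is Φ = B·A, so a smaller loop localizes the measurement but collects far less flux from the same field. The field value and loop sizes below are arbitrary illustrative assumptions.

PHI_0 = 2.07e-15   # magnetic flux quantum, Wb
B = 1e-9           # assumed uniform field of 1 nT over the pickup loop, T

# Loop sizes spanning the e-beam-defined (1-10 um^2) and the more common
# tens-to-hundreds of square micrometres ranges mentioned above.
for side_um in (1, 3, 10, 30):
    area = (side_um * 1e-6) ** 2        # loop area, m^2
    flux = B * area                     # flux threading the loop, Wb
    print(f"{side_um:3d} um loop: Phi = {flux / PHI_0:.1e} Phi_0")
# The smallest loop sees orders of magnitude less flux, hence less sensitivity.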
A high temperature Scanning SQUID Microscope using a YBCO SQUID is capable of measuring magnetic fields as small as 20 pT (about 2 million times weaker than the Earth's magnetic field ). The SQUID sensor is sensitive enough that it can detect a wire even if it is carrying only 10 nA of current at a distance of 100 μm from the SQUID sensor with 1 second averaging. The microscope uses a patented design to allow the sample under investigation to be at room temperature and in air while the SQUID sensor is under vacuum and cooled to less than 80 K using a cryocooler. No liquid nitrogen is used. During non-contact, non-destructive imaging of room temperature samples in air, the system achieves a raw, unprocessed spatial resolution equal to the distance separating the sensor from the current or the effective size of the sensor, whichever is larger. To best locate a wire short in a buried layer, however, a Fast Fourier Transform (FFT) back-evolution technique can be used to transform the magnetic field image into an equivalent map of the current in an integrated circuit or printed wiring board. [ 4 ] [ 5 ] The resulting current map can then be compared to a circuit diagram to determine the fault location. With this post-processing of a magnetic image and the low noise present in SQUID images, it is possible to enhance the spatial resolution by factors of 5 or more over the near-field limited magnetic image. The system's output is displayed as a false-color image of magnetic field strength or current magnitude (after processing) versus position on the sample. After processing to obtain current magnitude, this microscope has been successful at locating shorts in conductors to within ±16 μm at a sensor-current distance of 150 μm. [ 6 ] Operation of a scanning SQUID microscope consists of simply cooling down the probe and sample, and rastering the tip across the area where measurements are desired. As the change in voltage corresponding to the measured magnetic field is quite rapid, the strength of the bias magnetic field is typically controlled by feedback electronics. This field strength is then recorded by a computer system that also keeps track of the position of the probe. An optical camera can also be used to track the position of the SQUID with respect to the sample. As the name implies, SQUIDs are made from superconducting material. As a result, they need to be cooled to cryogenic temperatures of less than 90 K (liquid nitrogen temperatures) for high temperature SQUIDs and less than 9 K (liquid helium temperatures) for low temperature SQUIDs. For magnetic current imaging systems, a small (about 30 μm wide) high temperature SQUID is used. This system has been designed to keep a high temperature SQUID, made from YBa 2 Cu 3 O 7 , cooled below 80 K and in vacuum while the device under test is at room temperature and in air. A SQUID consists of two Josephson tunnel junctions that are connected together in a superconducting loop (see Figure 1). A Josephson junction is formed by two superconducting regions that are separated by a thin insulating barrier. Current exists in the junction without any voltage drop, up to a maximum value, called the critical current, I 0 . When the SQUID is biased with a constant current that exceeds the critical current of the junction, then changes in the magnetic flux, Φ, threading the SQUID loop produce changes in the voltage drop across the SQUID (see Figure 1).
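As a rough consistency check on the sensitivity figures quoted above (a 20 pT field resolution and detection of a 10 nA wire at 100 μm), the buried trace can be idealized as an infinitely long straight wire, for which the Biot-Savart law gives B = μ₀I/(2πr). Treating the trace this way is an assumption for illustration only.

import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def field_of_long_wire(current_a, distance_m):
    """Field magnitude of an (idealized) infinite straight wire."""
    return MU_0 * current_a / (2 * math.pi * distance_m)

# 10 nA viewed from 100 um away, as in the text above:
b = field_of_long_wire(10e-9, 100e-6)
print(f"B = {b * 1e12:.0f} pT")   # about 20 pT, matching the quoted sensitivity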
Figure 2(a) shows the I-V characteristic of a SQUID, where ∆V is the modulation depth of the SQUID due to external magnetic fields. The voltage across a SQUID is a nonlinear periodic function of the applied magnetic field, with a periodicity of one flux quantum, Φ 0 = 2.07×10 −15 T·m 2 (see Figure 2(b)). In order to convert this nonlinear response to a linear response, a negative feedback circuit is used to apply a feedback flux to the SQUID so as to keep the total flux through the SQUID constant. In such a flux locked loop, the magnitude of this feedback flux is proportional to the external magnetic field applied to the SQUID. Further description of the physics of SQUIDs and SQUID microscopy can be found elsewhere. [ 7 ] [ 8 ] [ 9 ] [ 10 ] Magnetic current imaging uses the magnetic fields produced by currents in electronic devices to obtain images of those currents. This is accomplished through the fundamental physics relationship between magnetic fields and current, the Biot-Savart law: B ( r ) = (μ 0 /4π) ∫ I d l × r̂ / r². As a result, the current can be directly calculated from the magnetic field knowing only the separation between the current and the magnetic field sensor. The details of this mathematical calculation can be found elsewhere, [ 11 ] but what is important to know here is that this is a direct calculation that is not influenced by other materials or effects, and that through the use of Fast Fourier Transforms these calculations can be performed very quickly. A magnetic field image can be converted to a current density image in about 1 or 2 seconds. The scanning SQUID microscope was originally developed for an experiment to test the pairing symmetry of the high-temperature cuprate superconductor YBCO. Standard superconductors are isotropic with respect to their superconducting properties, that is, for any direction of electron momentum k {\displaystyle k} in the superconductor, the magnitude of the order parameter and consequently the superconducting energy gap will be the same. However, in the high-temperature cuprate superconductors, the order parameter instead follows the equation Δ( k ) = Δ 0 (cos( k x a ) − cos( k y a )) {\displaystyle \Delta (k)=\Delta _{0}(\cos(k_{x}a)-\cos(k_{y}a))} , meaning that when crossing over any of the [110] directions in momentum space one will observe a sign change in the order parameter. The form of this function is equal to that of the l = 2 spherical harmonic function, giving it the name d-wave superconductivity. As the superconducting electrons are described by a single coherent wavefunction, proportional to exp(− i φ), where φ is known as the phase of the wavefunction, this property can also be interpreted as a phase shift of π under a 90 degree rotation. This property was exploited by Tsuei et al. [ 13 ] by manufacturing a series of YBCO ring Josephson junctions which crossed [110] Bragg planes of a single YBCO crystal (figure). In a Josephson junction ring the superconducting electrons form a coherent wave function, just as in a superconductor. As the wavefunction must have only one value at each point, the overall phase factor obtained after traversing the entire Josephson circuit must be an integer multiple of 2π, as otherwise, one would obtain a different value of the probability density depending on the number of times one traversed the ring. In YBCO, upon crossing the [110] planes in momentum (and real) space, the wavefunction will undergo a phase shift of π.
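The field-to-current conversion described above can be sketched numerically. The toy model below assumes a divergence-free sheet current at z = 0 and a map of B_z sampled at a height z, related in Fourier space through a μ₀/2 prefactor and an exp(−kz) propagator (the standard sheet-current result); the grid, standoff, and test pattern are made-up illustrative values, and a real instrument would additionally have to regularize the exp(+kz) noise amplification in the inverse step.

import numpy as np

MU_0 = 4 * np.pi * 1e-7   # vacuum permeability, T*m/A

# Illustrative grid: 128 x 128 pixels, 10 um pixels, 150 um standoff.
n, dx, z = 128, 10e-6, 150e-6
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)        # angular wavenumbers, rad/m
KX, KY = np.meshgrid(k, k, indexing="ij")
K = np.hypot(KX, KY)
K_safe = np.where(K == 0, 1.0, K)              # avoid division by zero at DC

# Test current: a 400 um square loop carrying 1 mA, built from a stream
# function psi so that (jx, jy) = (d psi/dy, -d psi/dx) is divergence-free.
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
psi_k = np.fft.fft2(1e-3 * ((np.abs(X) < 200e-6) & (np.abs(Y) < 200e-6)))
psi_k *= np.exp(-(K * 2 * dx) ** 2)            # mild smoothing of the sharp edges
jx = np.real(np.fft.ifft2(1j * KY * psi_k))    # sheet current density, A/m
jy = np.real(np.fft.ifft2(-1j * KX * psi_k))

# Forward model: Bz at height z produced by the sheet current at z = 0.
bz_k = 1j * MU_0 / (2 * K_safe) * np.exp(-K * z) * (
    KX * np.fft.fft2(jy) - KY * np.fft.fft2(jx))
bz = np.real(np.fft.ifft2(bz_k))

# "Back-evolution": invert the same relation to recover the current map.
inv = 2j / (MU_0 * K_safe) * np.exp(K * z) * np.fft.fft2(bz)
jx_rec = np.real(np.fft.ifft2(KY * inv))
jy_rec = np.real(np.fft.ifft2(-KX * inv))

print(f"max |Bz| at {z * 1e6:.0f} um standoff: {np.abs(bz).max() * 1e6:.2f} uT")
print(f"max relative error in recovered jx: "
      f"{np.abs(jx_rec - jx).max() / np.abs(jx).max():.1e}")
# In a real measurement the exp(+K*z) factor also amplifies noise, which is
# why resolution is ultimately limited by the sensor-to-current distance.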
Hence if one forms a Josephson ring device where this plane is crossed (2 n +1) times, a phase difference of (2 n +1)π will be observed between the two junctions. For an even number of crossings, 2 n , as in B, C, and D, a phase difference of (2 n )π will be observed. Compared to the case of standard s-wave junctions, where no phase shift is observed, no anomalous effects were expected in the B, C, and D cases, as the single-valued property is conserved, but for device A the system must do something for the φ = 2 n π condition to be maintained. By the same property exploited in the scanning SQUID microscope, the phase of the wavefunction is also altered by the amount of magnetic flux passing through the ring, following the relationship Δφ = 2πΦ/Φ 0 . As was predicted by Sigrist and Rice, [ 14 ] the phase condition can then be maintained in the junction by a spontaneous flux in the ring of value Φ 0 /2. Tsuei et al. used a scanning SQUID microscope to measure the local magnetic field at each of the devices in the figure, and observed a field in ring A approximately equal in magnitude to Φ 0 /(2 A ), where A was the area of the ring. Zero field was observed at rings B, C, and D. The results provided one of the earliest and most direct experimental confirmations of d-wave pairing in YBCO. The scanning SQUID microscope can detect all types of shorts and conductive paths, as well as resistive open (RO) defects such as cracked or voided bumps, delaminated vias, cracked traces (mouse bites) and cracked plated through holes (PTH). It can map power distributions in packages as well as in 3D integrated circuits (IC) with through-silicon vias (TSV), system-in-package (SiP), multi-chip module (MCM) and stacked-die devices. SQUID scanning can also isolate defective components in assembled devices or printed circuit boards (PCB). [ 15 ] Advanced wire-bond packages, unlike traditional Ball Grid Array (BGA) packages, have multiple pad rows on the die and multiple tiers on the substrate. This package technology has brought new challenges to failure analysis. To date, Scanning Acoustic Microscopy (SAM), Time Domain Reflectometry (TDR) analysis, and Real-Time X-ray (RTX) inspection have been the non-destructive tools used to detect short faults. Unfortunately, these techniques do not work very well in advanced wire-bond packages. Because of the high density wire bonding in advanced wire-bond packages, it is extremely hard to localize the short with conventional RTX inspection. Without detailed information as to where the short might occur, attempting destructive decapsulation to expose both die surface and bond wires is full of risk. Wet chemical etching to remove mold compound in a large area often results in over-etching. Furthermore, even if the package is successfully decapped, visual inspection of the multi-tiered bond wires is a blind search. The Scanning SQUID Microscopy (SSM) data are current density images and current peak images. The current density images give the magnitude of the current, while the current peak images reveal the current path with a ±3 μm resolution. Obtaining the SSM data from scanning advanced wire-bond packages is only half the task; fault localization is still necessary. The critical step is to overlay the SSM current images or current path images with CAD files such as bonding diagrams or RTX images to pinpoint the fault location. To make alignment of the overlay possible, an optical two-point reference alignment is made.
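The phase bookkeeping in the preceding paragraphs can be written out explicitly. The lines below are a sketch of the standard fluxoid-quantization argument (in LaTeX), not a quotation from the cited papers; δ_i denotes the gauge-invariant phase drop across junction i and n is an integer.

% Single-valuedness of the pair wavefunction around the ring requires
\[
  \sum_i \delta_i \;+\; 2\pi\,\frac{\Phi}{\Phi_0} \;=\; 2\pi n .
\]
% A ring whose junctions cross the [110] plane an odd number of times
% acquires an extra intrinsic shift of \pi, so
\[
  \pi \;+\; 2\pi\,\frac{\Phi}{\Phi_0} \;=\; 2\pi n
  \quad\Longrightarrow\quad
  \Phi \;=\; \Bigl(n - \tfrac{1}{2}\Bigr)\Phi_0 ,
\]
% whose lowest-energy states carry the spontaneous half flux quantum
% \Phi = \pm\Phi_0/2 seen in ring A; an even number of crossings gives
% \Phi = n\,\Phi_0 instead, favouring zero flux, as observed at B, C and D.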
The package edge and package fiducial are the most convenient package markings to align to. Based on the data analysis, fault localization by SSM should isolate the short to the die, bond wires or package substrate. After all non-destructive approaches are exhausted, the final step is destructive deprocessing to verify the SSM data. Depending on the fault isolation, the deprocessing techniques include decapsulation, parallel lapping or cross-sectioning. Electric shorts in multi-stacked die packages can be very difficult to isolate non-destructively, especially when a large number of bond wires are somehow shorted. For instance, when an electric short is produced by two bond wires touching each other, X-ray analysis may help to identify potential defect locations; however, defects like metal migration produced at wirebond pads, or bond wires somehow touching any other conductive structures, may be very difficult to catch with non-destructive techniques that are not electrical in nature. Here, the availability of analytical tools that can map out the flow of electric current inside the package provides valuable information to guide the failure analyst to potential defect locations. Figure 1a shows the schematic of our first case study, consisting of a triple-stacked die package. The X-ray image of figure 1b illustrates the challenge that finding potential short locations presents for failure analysts. In particular, this is one of a set of units that were inconsistently failing and recovering under reliability tests. Time domain reflectometry and X-ray analysis were performed on these units with no success in isolating the defects. There was also no clear indication of defects that could potentially produce the observed electrical short failure mode. Two of those units were analyzed with SSM. Electrically connecting the failing pin to a ground pin produced the electric current path shown in figure 2. This electrical path strongly suggests that the current is somehow flowing through all the ground nets through a conductive path located very close to the wirebond pads in the top-down view of the package. Based on electrical and layout analysis of the package, it can be inferred that current is either flowing through the wirebond pads or that the wirebonds are somehow touching a conductive structure at the specified location. After obtaining similar SSM results on the two units under test, further destructive analysis focused on the small potential short region and showed that the failing pin's wirebond was touching the bottom of one of the stacked dice at the specific XY position highlighted by SSM analysis. The cross-sectional view of one of those units is shown in figure 3. A similar defect was found in the second unit. The failure in this example was characterized as an eight-ohm short between two adjacent pins. The bond wires to the pins of interest were cut with no effect on the short as measured at the external pins, indicating that the short was present in the package. Initial attempts to identify the failure with conventional radiographic analysis were unsuccessful. Arguably the most difficult part of the procedure is identifying the physical location of the short with a high enough degree of confidence to permit destructive techniques to be used to reveal the shorting material. Fortunately, two analytical techniques are now available that can significantly increase the effectiveness of the fault localization process.
One characteristic that all shorts have in common is the movement of electrons from a high potential to a lower one. This physical movement of electrical charge creates a small magnetic field around the electron. With enough electrons moving, the aggregate magnetic field can be detected by superconducting sensors. Instruments equipped with such sensors can follow the path of a short circuit along its course through a part. The SQUID detector has been used in failure analysis for many years, [ 19 ] and is now commercially available for use at the package level. The ability of the SQUID to track the flow of current provides a virtual roadmap of the short, including the location in plan view of the shorting material in a package. We used the SQUID facilities at Neocera to investigate the failure in the package of interest, with pins carrying 1.47 milliamps at 2 volts. SQUID analysis of the part revealed a clear current path between the two pins of interest, including the location of the conductive material that bridged the two pins. The SQUID scan of the part is shown in Figure 1. The second fault location technique will be taken somewhat out of turn, as it was used to characterize this failure after the SQUID analysis, as an evaluation sample for an equipment vendor. The ability to focus and resolve low-power X-rays and detect their presence or absence has improved to the point that radiography can now be used to identify features heretofore impossible to detect. The equipment at Xradia was used to inspect the failure of interest in this analysis. An example of their findings is shown in Figure 2. The feature shown (which is also the material responsible for the failure) is a copper filament approximately three micrometres wide in cross-section, which was impossible to resolve in our in-house radiography equipment. The principal drawback of this technique is that the depth of field is extremely short, requiring many ‘cuts’ on a given specimen to detect very small particles or filaments. At the high magnification required to resolve micrometre-sized features, the technique can become prohibitively expensive in both time and money to perform. In effect, to get the most out of it, the analyst really needs to know already where the failure is located. This makes low-power radiography a useful supplement to SQUID, but not a generally effective replacement for it. It would likely best be used immediately after SQUID to characterize the morphology and depth of the shorting material once SQUID had pinpointed its location. Examination of the module shown in Figure 1 in the Failure Analysis Laboratory found no external evidence of the failure. [ 20 ] Coordinate axes of the device were chosen as shown in Figure 1. Radiography was performed on the module in three orthogonal views: side, end, and top-down, as shown in Figure 2. For the purposes of this paper, the top-down X-ray view shows the x-y plane of the module. The side view shows the x-z plane, and the end view shows the y-z plane. No anomalies were noted in the radiographic images. Excellent alignment of components on the mini-boards permitted an uncluttered top-down view of the mini-circuit boards. The internal construction of the module was seen to consist of eight stacked mini-boards, each with a single microcircuit and capacitor. The mini-boards connected to the external module pins using the gold-plated exterior of the package.
External inspection showed that laser-cut trenches created an external circuit on the device, which is used to enable, read, or write to any of the eight EEPROM devices in the encapsulated vertical stack. Regarding nomenclature, the laser-trenched gold panels on the exterior walls of the package were labeled with the pin numbers. The eight miniboards were labeled TSOP01 through TSOP08, beginning at the bottom of the package near the device pins. Pin-to-pin electrical testing confirmed that Vcc Pins 12, 13, 14, and 15 were electrically common, presumably through the common exterior gold panel on the package wall. Likewise, Vss Pins 24, 25, 26, and 27 were common. Comparison to the X-ray images showed that these four pins funneled into a single wide trace on the mini-boards. All of the Vss pins were shorted to the Vcc pins with a resistance, determined from the I-V slope, of approximately 1.74 ohms; the low resistance indicated something other than an ESD defect. Similarly, electrical overstress was considered an unlikely cause of failure as the part had not been under power since the time it was qualified at the factory. The three-dimensional geometry of the EEPROM module suggested the use of magnetic current imaging (MCI) on three or more flat sides in order to construct the current path of the short within the module. As noted, the coordinate axes selected for this analysis are shown in Figure 1. SQUIDs are the most sensitive magnetic sensors known. [ 4 ] This allows one to scan currents of 500 nA at a working distance of about 400 micrometres. As for all near-field situations, the resolution is limited by the scanning distance or, ultimately, by the sensor size (typical SQUIDs are about 30 μm wide), although software and data acquisition improvements allow locating currents within 3 micrometres. To operate, the SQUID sensor must be kept cool (about 77 K) and in vacuum, while the sample, at room temperature, is raster-scanned under the sensor at some working distance z, separated from the SQUID enclosure by a thin, transparent diamond window. This allows one to reduce the scanning distance to tens of micrometres from the sensor itself, improving the resolution of the tool. The typical MCI sensor configuration is sensitive to magnetic fields in the perpendicular z direction (i.e., sensitive to the in-plane xy current distribution in the DUT). This does not mean that we are missing vertical information; in the simplest situation, if a current path jumps from one plane to another, getting closer to the sensor in the process, this will be revealed as stronger magnetic field intensity for the section closer to the sensor and also as higher intensity in the current density map. This way, vertical information can be extracted from the current density images. Further details about MCI can be found elsewhere. [ 21 ]
https://en.wikipedia.org/wiki/Scanning_SQUID_microscope
Scanning electrochemical microscopy ( SECM ) is a technique within the broader class of scanning probe microscopy (SPM) that is used to measure the local electrochemical behavior of liquid/solid, liquid/gas and liquid/liquid interfaces. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] Initial characterization of the technique is credited to the University of Texas electrochemist Allen J. Bard in 1989. [ 6 ] Since then, the theoretical underpinnings have matured to allow widespread use of the technique in chemistry, biology and materials science. Spatially resolved electrochemical signals can be acquired by measuring the current at an ultramicroelectrode (UME) tip as a function of precise tip position over a substrate region of interest. Interpretation of the SECM signal is based on the concept of diffusion-limited current . [ 7 ] Two-dimensional raster scan information can be compiled to generate images of surface reactivity and chemical kinetics . The technique is complementary to other surface characterization methods such as surface plasmon resonance (SPR), [ 8 ] electrochemical scanning tunneling microscopy (ESTM), [ 9 ] and atomic force microscopy (AFM) [ 10 ] in the interrogation of various interfacial phenomena. In addition to yielding topographic information, SECM is often used to probe the surface reactivity of solid-state materials, electrocatalyst materials, enzymes and other biophysical systems. [ 11 ] SECM and variations of the technique have also found use in microfabrication , surface patterning, and microstructuring. [ 12 ] The emergence of ultramicroelectrodes (UMEs) around 1980 was pivotal to the development of sensitive electroanalytical techniques like SECM. UMEs employed as probes enabled the study of fast or localized electrochemical reactions. The first SECM-like experiment was performed in 1986 by Engstrom to yield direct observation of reaction profiles and short-lived intermediates. [ 13 ] Simultaneous experiments by Allen J. Bard using an electrochemical scanning tunneling microscope ( ESTM ) demonstrated current at large tip-to-sample distances that was inconsistent with electron tunneling . This phenomenon was attributed to Faradaic current , compelling a more thorough analysis of electrochemical microscopy. [ 14 ] The theoretical basis was presented in 1989 by Bard, who also coined the term scanning electrochemical microscopy. In addition to the simple collection modes used at the time, Bard illustrated the widespread utility of SECM through the implementation of various feedback modes. [ 6 ] As the theoretical foundation developed, annual SECM-related publications steadily rose from 10 to around 80 in 1999, when the first commercial SECM became available. [ 15 ] SECM continues to increase in popularity due to theoretical and technological advances that expand experimental modes while broadening substrate scope and enhancing sensitivity. [ 16 ] The electric potential is manipulated through the UME tip in a bulk solution containing a redox-active couple (e.g. Fe 2+ /Fe 3+ ). When a sufficiently negative potential is applied, Fe 3+ is reduced to Fe 2+ at the UME tip, generating a diffusion-limited current.
[ 13 ] The steady-state current is governed by the flux of oxidized species in solution to the UME disc and is given by: i T , ∞ = 4 n F C D a {\displaystyle i_{T,\infty }=4nFCDa} where i T,∞ is the diffusion-limited current, n is the number of electrons transferred at the electrode tip (O + n e − → R), F is Faraday's constant , C is the concentration of the oxidized species in solution, D is the diffusion coefficient and a is the radius of the UME disc. In order to probe a surface of interest, the tip is moved closer to the surface and changes in current are measured. There are two predominant modes of operation, which are feedback mode and collection-generation mode. In a bulk solution, the oxidized species is reduced at the tip, producing a steady-state current that is limited by hemispherical diffusion. As the tip approaches a conductive substrate in the solution, the reduced species formed at the tip is oxidized at the conductive surface, yielding an increase in the tip current and creating a regenerative "positive" feedback loop. [ 6 ] The opposite effect is observed when probing insulating surfaces, as the oxidized species cannot be regenerated and diffusion to the electrode is inhibited as a result of physical obstruction as the tip approaches the substrate, creating a "negative" feedback loop and decreasing the tip current. An additional parameter to consider when probing insulating surfaces is the electrode sheath diameter, r g , since it contributes to the physical obstruction of diffusion. The change in tip current as a function of distance d can be plotted as an "approach curve" as shown. Due to the rate dependent nature of SECM measurements, it is also employed to study electron-transfer kinetics. [ 17 ] Another mode of operation that is employed is tip generation/substrate collection (TG/SC). In TG/SC mode, the tip is held at a potential sufficient for an electrode reaction to occur and "generate" a product while the substrate is held at a potential sufficient for the electrode product to react with or be "collected" by the substrate. [ 6 ] The reciprocal to this method is substrate generation/tip collection (SG/TC), where the substrate acts to generate a species that is measured at the tip. Both TG/SC and SG/TC variations are also categorized as "direct" modes. [ 7 ] Two currents are generated: the tip current, i T , and the substrate current, i S . Since the substrate is generally much larger than the tip, the efficiency of collection, i S / i T , is 1 if no reactions occur during the transfer of tip-generated species to the substrate. As the distance between tip and substrate, d , decreases, the collection efficiency, i S / i T , approaches 1. In ac-SECM a sinusoidal bias is applied to the dc bias of the SECM probe allowing the impedance of a sample to be measured, as is the case in electrochemical impedance spectroscopy . [ 18 ] Unlike dc-SECM techniques ac-SECM does not require the use of a redox mediator. This is particularly advantageous for measurements where the redox mediator could affect the chemistry of the system under study. [ 19 ] Examples include corrosion studies where a redox mediator may act to inhibit or enhance the rate of corrosion, and biological studies where a redox mediator may be toxic to the living cell under study. In ac-SECM the feedback response measured is dependent on both the sample type and the experimental conditions. [ 20 ] When a sample is insulating the measured impedance will always increase with decreasing probe to sample distance. 
This is not the case for a conductive sample, however. For a conductive sample measured in a high conductivity electrolyte, or measured with a low ac frequency, decreasing the probe to sample distance will lead to an increase in impedance. If, however, a conductive sample is measured in a low conductivity electrolyte, or with a high ac frequency, decreasing the probe to sample distance will result in a lower measured impedance. Changes in current as a function of distance between electrode tip and substrate surface allow imaging of insulating and conducting surfaces for topography and reactivity information by moving the tip across surfaces and measuring tip current. The most common scanning mode is constant-height mode, [ 7 ] where the tip height is unchanging and is scanned across the surface in the x-y plane. Alternatively, constant distance measurements are possible, which change the z position to maintain the probe to sample distance as the probe is scanned across the surface in the x-y plane. The constant distance measurement can be based on an electrical signal, as is the case in the constant-current mode, [ 7 ] where the device attempts to maintain a constant current by changing the substrate to tip distance, d , and recording the change in d . A mechanical signal can also be used to control the probe to sample distance. Examples of this are the intermittent contact (ic)-SECM [ 21 ] and shear force [ 22 ] techniques, which use changes in probe vibration to maintain the probe to sample distance. Spatial resolution is dependent on the tip radius, the substrate to tip distance, the precision of the electronics, and other considerations. Early SECMs were constructed solely by individual lab groups from a set of common components, including a potentiostat (or bipotentiostat) and potential programmer, current amplifier, piezoelectric positioner and controller, computer, and UME. [ 4 ] Many SECM experiments are highly specific in nature, and in-house assembly of SECMs remains common. The development of new techniques toward the reliable nanofabrication of electrodes has been a primary focus in the literature due to several distinct advantages, including high mass-transfer rates and low levels of reactant adsorption in kinetic experiments. [ 23 ] [ 24 ] Additionally, the enhanced spatial resolution afforded by reduced tip size expands the scope of SECM studies to smaller and faster phenomena. The following methods encompass an abbreviated summary of fabrication techniques in a rapidly developing field. SECM probes generally use platinum as the active core material; however, carbon, gold, mercury, and silver have all been used. [ 25 ] Typical preparation of a microscale electrode is performed by heat-sealing a microwire or carbon fiber in a glass capillary under vacuum . This tip can be connected to a larger copper electrode through the use of silver epoxy and then polished to yield a sharpened tip. Nanofabrication of electrodes can be performed by etching a metal wire with sodium cyanide and sodium hydroxide. Etched metal wires can then be coated with wax, varnish, molten paraffin or glass, poly(α-methylstyrene), polyimide , [ 26 ] electropolymerized phenol, and electrophoretic paint . [ 27 ] Nanotips produced by these methods are conical; however, disc-shaped tips can be obtained by micropipette pulling of glass-sealed electrodes. Nanoscale electrodes allow high-resolution experiments on sub-micron biological features and single-molecule analysis.
"Penetration" experiments, where the tip is inserted into a microstructure (such as a thin polymer film with fixed redox centers) to probe kinetic and concentration parameters, also require the use of nanoscale electrodes. [ 28 ] However, microelectrodes remain ideal for quantitative kinetic and feedback mode experiments due to their increased surface area. Modification of electrodes has developed beyond the size parameter. SECM-AFM probes can act as both a force sensor and electrode through the utilization of a flattened, etched metal wire coated by electrophoretic paint. In this system, the flattened wire acts as a flexible cantilever to measure the force against a sample (AFM) as the wire electrode measures the current (SECM). [ 2 ] Similarly, SECM functionality can be imparted into standard AFM probes by sputtering the surface with a conductive metal or by milling an insulated tip with a focused ion beam (FIB). Electron-beam lithography has also been demonstrated to reproducibly generate SECM-AFM probes using silicon wafers. [ 29 ] AFM probe manufacturers, such as Scuba Probe Technologies fabricate SECM-AFM probes with reliable electrical contacts for operation in liquids. [ 30 ] Images of the chemical environment that is decoupled from localized topographies are also desirable to study larger or uneven surfaces. "Soft stylus probes" were recently developed by filling a microfabricated track on a polyethylene terephthalate sheet with a conductive carbon ink. Lamination with a polymer film produced v-shaped stylus that was cut to expose the carbon tip. The flexibility inherent in the probe design allows for constant contact with the substrate that bends the probe. When dragged across a sample, probe bending accommodates for topographical differences in the substrate and provides a quasi-constant tip-to-substrate distance, d . [ 31 ] Micro-ITIES probes represent another type of specialty probe that utilizes the Interface between Two Immiscible Electrolyte Solutions ( ITIES ). These tips feature a tapered pipette containing a solution containing a metal counter electrode, and are used to measure electron and ion transfer events when immersed in a second, immiscible liquid phase containing a counter-reference electrode. [ 1 ] Often the probing of liquid/liquid and air/liquid interfaces via SECM require the use of a submarine electrode. [ 32 ] In this configuration, the electrode is fashioned into a hook shape where the electrode can be inverted and submerged within the liquid layer. The UME tip points upwards and can be positioned directly beneath the liquid/liquid or air/liquid interface. The portion of the electrode passing through the interface region is electrically insulated to prevent indirect interfacial perturbations. Increases in the complexity of electrodes along with decreases in size have prompted the need for high resolution characterization techniques. Scanning electron microscopy (SEM), cyclic voltammetry (CV), and SECM approach curve measurements are frequently applied to identify the dimension and geometry of fabricated probes. The potentiostat biases and measures the voltage using the standard three electrode system of voltammetry experiments. The UME acts as the working electrode to apply a controlled potential to the substrate. The auxiliary electrode (or counter electrode) acts to balance the current generated at the working electrode, often through a redox reaction with the solvent or supporting electrolyte. 
Voltage is measured with respect to the well-defined reduction potential of the reference electrode , although this electrode itself does not pass any current. SECM utilizes many of the same positioning components that are available to other materials characterization techniques. Precise positioning between the tip and sample is an important factor that is complementary to tip size. The position of the probe relative to a given point on the material surface in the x, y, and z directions is typically controlled by a motor for rough positioning coupled with a piezoelectric motor for finer control. More specifically, systems may feature an inchworm motor that directs coarse positioning, with additional z control governed by a PZT piezo pusher. Stepper motors with an XYZ piezo block positioner or closed-loop controller systems have also been used. [ 15 ] SECM has been employed to probe the topography and surface reactivity of solid-state materials, track the dissolution kinetics of ionic crystals in aqueous environments, screen electrocatalytic prospects, elucidate enzymatic activities, and investigate dynamic transport across synthetic/natural membranes and other biophysical systems. Early experiments focused on these solid/liquid interfaces and the characterization of typical solution-based electrochemical systems at higher spatial resolution and sensitivity than bulk electrochemical experiments typically afford. More recently, the SECM technique has been adapted to explore the chemical transfer dynamics at liquid/liquid and liquid/gas interfaces. SECM and variations of the technique have also found use in microfabrication, surface patterning, and microstructuring. [ 12 ] A multitude of surface reactions within this context have been explored, including metal deposition, etching, and patterning of surfaces by enzymes. Scanning probe lithography (SPL) of surfaces can be performed using the SECM configuration. Due to size limitations in the microfabrication procedures for the UMEs, spatial resolution is decreased, affording larger feature sizes compared to other SPL techniques. An early example demonstrated patterning of dodecylthiolate self-assembled monolayers (SAMs) by moving the UME in a two-dimensional array in close proximity to the surface while applying an oxidative or reductive potential, thus locally desorbing the chemical species. [ 12 ] Micron-sized features were effectively patterned into the SAM. An inherent benefit of SECM over other SPL techniques for surface patterning is its ability to simultaneously acquire surface-related electrochemical information while performing lithography. Other studies have demonstrated the utility of SECM for the deposition of local gold islands as templates for the attachment of biomolecules and fluorescent dyes . [ 33 ] Such studies are suggestive of the technique's potential for the fabrication of nanoscale assemblies, making it particularly suited to explore previously studied systems tethered to small gold clusters. Varieties of SECM employing the micropipette tip geometry have been used to generate spatially resolved microcrystals of a solid solution . [ 34 ] Here, glass microcapillaries with sub-micron-sized orifices replace the standard UME, allowing femtoliter -sized droplets to be suspended from the capillary over a conductive surface acting as the working electrode . Upon contact with the positively biased surface, the droplets of salt solutions achieve supersaturation and crystallize with well-defined, microscale geometries.
Such technology could lend itself well to solid-state electrochemical sensors on microdevices. The dissolution of ionic crystals in aqueous environments is fundamentally important to the characterization of a host of naturally occurring and synthetic systems. [ 35 ] The high spatial resolution and three-dimensional mobility provided by the UME allow one to probe the dissolution kinetics on specific faces of single ionic crystals, whereas previous characterization techniques relied on a bulk or ensemble average measurement. Due to the high mass transfer rates associated with UMEs in the SECM configuration, it is possible to quantify systems defined by very fast reaction kinetics . In addition, UMEs allow monitoring over a wide dynamic range , making possible the study of ionic solids with large differences in solubility . Early examples demonstrating the utility of SECM for extracting quantitative rate data from such systems were carried out on CuSO 4 crystals in an aqueous solution saturated with Cu 2+ and SO 4 2− ions. [ 36 ] By positioning a UME in the SECM configuration approximately one electrode radius away from the (100) face of a CuSO 4 crystal, it was possible to perturb the dissolution equilibrium by locally reducing Cu 2+ at the UME surface. As the crystal face locally dissolved into copper and sulfate ions, a visible pit was formed, and the chronoamperometric signal could be monitored as a function of distance between the UME and the crystal. Assuming first- or second-order kinetic behavior, the dissolution rate constant could then be extracted from the data. Similar studies have been performed on additional crystal systems without a supporting electrolyte. [ 37 ] The search for novel catalytic materials to replace the precious metals used in fuel cells demands extensive knowledge of the oxygen reduction reaction (ORR) occurring at the metal surface. Often even more pressing are the physical limitations imposed by the need to survey and assess the electrocatalytic viability of large numbers of potential catalytic candidates. Some groups studying electrocatalysis have demonstrated the use of SECM as a rapid screening technique that provides local quantitative electrochemical information about catalytic mixtures and materials. [ 38 ] [ 39 ] A variety of approaches have been suggested for high-throughput assessment of novel metallic electrocatalysts. One functional, non-SECM approach enabled the electrocatalytic activities of a large number of catalysts to be assessed optically by employing a technique that detected proton production on deposited arrays of proton-sensitive fluorescent dyes . [ 40 ] Though of certain utility, the technique suffers from the failure to extract quantitative electrochemical information from any catalytic system of interest, thus requiring the quantitative electrochemical information to be obtained off-line from the array experiment. Bard et al. have demonstrated assessment of electrocatalytic activities at high volume using the SECM configuration. [ 38 ] With this approach, direct quantitative electrochemical information from multicomponent systems can be acquired on a rapid screening platform. Such high-throughput screening significantly assists the search for abundant, efficient and cost-effective electrocatalytic materials as substitutes for platinum and other precious metals . The ability to probe non-conductive surfaces makes SECM a feasible method for analyzing membranes, redox-active enzymes, and other biophysical systems.
Changes in intracellular redox activity may be related to conditions such as oxidative stress and cancer. Redox processes of individual living cells can be probed by SECM, which serves as a non-invasive method for monitoring intracellular charge transfer. In such measurements, the cell of interest is immobilized on a surface submerged in a solution with the oxidized form of the redox mediator and feedback mode is employed. A potential is applied to the tip, which reduces the oxidized species, generating a steady-state current, i T . When the tip product enters the cell, it is re-oxidized by processes within the cell and sent back out. Depending on the rate at which tip product is regenerated by the cell, the tip current will change. A study by Liu et al. [ 41 ] employed this method and showed that the redox states within three human breast cell lines (nonmotile, motile , and metastatic ) were consistently different. SECM can not only examine immobilized cells, but also be used to study the kinetics of immobilized redox-active enzymes. [ 42 ] Transport of ions such as K + and Na + across membranes or other biological interfaces is vital to many cell processes; SECM has been employed in studying transport of redox active species across cell membranes. In feedback mode, the transfer of molecules across a membrane can be induced by collecting the transferred species at the tip and forming a concentration gradient. [ 4 ] The changes in current can be measured as a function of molecule transport rate. The interface between two immiscible electrolyte solutions (ITIES) can be studied using SECM with a micro-ITIES probe. The probe lies in one layer, and is moved closer to the junction while applying a potential. Oxidation or reduction depletes the substrate concentration, resulting in diffusion from either layer. At close tip-interface distances, rates of diffusion between the organic/aqueous layer for a substrate or ionic species are observed. [ 43 ] Electron transfer rates have also been studied extensively at the ITIES. In such experiments, redox couples are dissolved in separate phases and the current at the ITIES is recorded. [ 1 ] This is also the fundamental principle in studying transport across membranes. The transfer of chemical species across air/liquid interfaces is integral to almost every physical, physiological, biological and environmental system on some level. Thus far, a major thrust in the field has been the quantification of molecular transfer dynamics across monolayer films in order to gain insight into chemical transport properties of cellular membrane systems and chemical diffusion at environmental interfaces. [ 44 ] Though much work has been done in the area of evaporation through monolayers at air/water interfaces, it was the introduction of SECM that provided researchers an alternative method for exploring the permeability of monolayers to small solute molecules across such interfaces. By precisely positioning a submarine electrode beneath an organic monolayer that separates an air/water interface, researchers were able to perturb the oxygen diffusion equilibrium by local reduction of oxygen in the aqueous layer, thereby eliciting diffusion across the monolayer. [ 45 ] Diffusion dynamics of the system can be elucidated by measuring the current response at the UME with high spatial and temporal resolution . 
SECM is quite amenable to such kinetics studies since the current response can be monitored with high sensitivity due to the rapid mass transfer rates associated with UMEs in the SECM configuration. The three dimensional mobility of the UME also affords spatial probing of membranes to identify points of high flux or permeability. A very similar approach has been employed for diffusion studies at liquid/liquid and solid/liquid interfaces.
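As a numerical illustration of the diffusion-limited steady-state current expression i T,∞ = 4nFCDa introduced earlier for a disc UME, the following is a minimal sketch. The mediator concentration, diffusion coefficient, and tip radius are typical order-of-magnitude assumptions, not values taken from the article.

```python
# Minimal sketch: diffusion-limited steady-state current at a disc UME,
# i_T,inf = 4 n F C D a, evaluated with assumed (order-of-magnitude) parameters.
FARADAY = 96485.33212   # Faraday constant, C/mol

def ume_steady_state_current(n_electrons, conc_mol_per_m3, diff_coeff_m2_s, radius_m):
    """Return i_T,inf in amperes for a disc ultramicroelectrode."""
    return 4.0 * n_electrons * FARADAY * conc_mol_per_m3 * diff_coeff_m2_s * radius_m

# Assumed example: 1 mM mediator (1 mol/m^3), D = 7e-10 m^2/s, 5 um tip radius.
i_inf = ume_steady_state_current(1, 1.0, 7e-10, 5e-6)
print(f"i_T,inf ~ {i_inf*1e9:.2f} nA")  # of order a nanoampere for these values
```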
https://en.wikipedia.org/wiki/Scanning_electrochemical_microscopy
Scanning flow cell (SFC) is an electrochemical technique based on the principle of the channel electrode. The electrolyte flows continuously over a substrate that is introduced externally on a translation stage, in contrast to the reference and counter electrodes, which are integrated in the main channel or placed in side compartments connected by a salt bridge. The SFC uses a V-shaped geometry with a small opening at the bottom (in the range of 0.2–1 mm in diameter) that establishes contact with the sample. The convective flow is also sustained in the non-contact mode of operation, which allows easy exchange of the working electrode. [ 1 ] The SFC is employed for combinatorial and high-throughput electrochemical studies. Because of its non-homogeneous flow profile, it is currently used for comparative kinetic studies. SFC is predominantly used to couple electrochemical measurements with downstream analytical techniques such as UV-Vis , ICP-MS , and ICP-OES . This makes possible a direct correlation of the electrochemical and spectrometric signals. This methodology has been successfully applied to corrosion studies. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Scanning_flow_cell
The scanning helium microscope (SHeM) is a form of microscopy that uses low-energy (5–100 meV) neutral helium atoms to image the surface of a sample without any damage to the sample caused by the imaging process. Since helium is inert and neutral, it can be used to study delicate and insulating surfaces. Images are formed by rastering a sample underneath an atom beam and monitoring the flux of atoms that are scattered into a detector at each point. The technique is different from a scanning helium ion microscope , which uses charged helium ions that can cause damage to a surface. Microscopes can be divided into two general classes: those that illuminate the sample with a beam, and those that use a physical scanning probe. Scanning probe microscopies raster a small probe across the surface of a sample and monitor the interaction of the probe with the sample. The resolution of scanning probe microscopies is set by the size of the interaction region between the probe and the sample, which can be sufficiently small to allow atomic resolution. Using a physical tip (e.g. AFM or STM ) does have some disadvantages, though, including a reasonably small imaging area and difficulty in observing structures with a large height variation over a small lateral distance. Microscopes that use a beam have a fundamental limit on the minimum resolvable feature size, d A {\displaystyle d_{\text{A}}} , which is given by the Abbe diffraction limit, d A = λ 2 n sin ⁡ θ {\displaystyle d_{\text{A}}={\frac {\lambda }{2n\sin \theta }}} , where λ {\displaystyle \lambda } is the wavelength of the probing wave, n {\displaystyle n} is the refractive index of the medium the wave is travelling in and the wave is converging to a spot with a half-angle of θ {\displaystyle \theta } . While it is possible to overcome the diffraction limit on resolution by using a near-field technique , it is usually quite difficult. Since the denominator of the above equation for the Abbe diffraction limit will be approximately two at best, the wavelength of the probe is the main factor in determining the minimum resolvable feature, which is typically about 1 μm for optical microscopy. To overcome the diffraction limit, a probe that has a smaller wavelength is needed, which can be achieved using either light with a higher energy or a matter wave . X-rays have a much smaller wavelength than visible light, and therefore can achieve superior resolutions when compared to optical techniques. Projection X-ray imaging is conventionally used in medical applications, but high resolution imaging is achieved through scanning transmission X-ray microscopy (STXM). By focussing the X-rays to a small point and rastering across a sample, a very high resolution can be obtained with light. The small wavelength of X-rays comes at the expense of a high photon energy , meaning that X-rays can cause radiation damage. Additionally, X-rays are weakly interacting, so they will primarily interact with the bulk of the sample, making investigations of a surface difficult. Matter waves have a much shorter wavelength than visible light and therefore can be used to study features below about 1 μm. The advent of electron microscopy opened up a variety of new materials that could be studied due to the enormous improvement in the resolution when compared to optical microscopy.
The de Broglie wavelength , λ {\displaystyle \lambda } , of a matter wave in terms of its kinetic energy , E {\displaystyle E} , and particle mass, m {\displaystyle m} , is given by λ = h 2 m E {\displaystyle \lambda ={\frac {h}{\sqrt {2mE}}}} . Hence, for an electron beam to resolve atomic structure, the wavelength of the matter wave would need to be about λ {\displaystyle \lambda } = 1 Å or smaller, and therefore the beam energy would need to be E {\displaystyle E} > 100 eV. Since electrons are charged, they can be manipulated using electromagnetic optics to form extremely small spot sizes on a surface. Because the wavelength of an electron beam is so small, the Abbe diffraction limit can be pushed below atomic resolution, and electromagnetic lenses can be used to form very intense spots on the surface of a material. The optics in a scanning electron microscope usually require the beam energy to be in excess of 1 keV to produce the best-quality electron beam. The high energy of the electrons leads to the electron beam interacting not only with the surface of a material but also forming a tear-drop interaction volume underneath the surface. While the spot size on the surface can be extremely small, the electrons will travel into the bulk and continue interacting with the sample. Transmission electron microscopy avoids the bulk interaction by only using thin samples; however, usually the electron beam interacting with the bulk will limit the resolution of a scanning electron microscope. The electron beam can also damage the material, destroying the structure that is to be studied due to the high beam energy. Electron beam damage can occur through a variety of different processes that are specimen-specific. [ 1 ] Examples of beam damage include the breaking of bonds in a polymer, which changes the structure, and knock-on damage in metals that creates a vacancy in the lattice, which changes the surface chemistry. Additionally, the electron beam is charged, which means that the surface of the sample needs to be conducting to avoid artefacts of charge accumulation in images. One method to mitigate the issue when imaging insulating surfaces is to use an environmental scanning electron microscope (ESEM). Therefore, in general, electrons are often not particularly suited to studying delicate surfaces due to the high beam energy and lack of exclusive surface sensitivity. Instead, an alternative beam is required for the study of surfaces at low energy without disturbing the structure. Given the equation for the de Broglie wavelength above, the same wavelength of a beam can be achieved at lower energies by using a beam of particles that have a higher mass. Thus, if the objective were to study the surface of a material at a resolution that is below that which can be achieved with optical microscopy, it may be appropriate to use atoms as a probe instead. While neutrons can be used as a probe, they interact weakly with matter and can only study the bulk structure of a material. [ 2 ] Neutron imaging also requires a high flux of neutrons, which usually can only be provided by a nuclear reactor or particle accelerator. A beam of helium atoms with a wavelength λ {\displaystyle \lambda } = 1 Å has an energy of E ≈ {\displaystyle E\approx } 20 meV, which is about the same as the thermal energy. Using particles of a higher mass than that of an electron means that it is possible to obtain a beam with a wavelength suitable to probe length scales down to the atomic level with a much lower energy.
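A quick numerical check of the de Broglie relation above makes the comparison between electrons and helium atoms concrete. The sketch below uses standard physical constants and a few example energies chosen for illustration.

```python
# Minimal sketch: de Broglie wavelength lambda = h / sqrt(2 m E) for electrons
# and helium atoms, illustrating why thermal-energy helium reaches ~1 Å.
import math

H = 6.62607015e-34        # Planck constant, J*s
EV = 1.602176634e-19      # J per eV
M_E = 9.1093837015e-31    # electron mass, kg
M_HE = 6.6464731e-27      # helium-4 mass, kg

def de_broglie_wavelength(mass_kg, energy_eV):
    """Wavelength in metres of a particle of given mass and kinetic energy."""
    return H / math.sqrt(2.0 * mass_kg * energy_eV * EV)

for label, m, e in [("electron, 100 eV", M_E, 100.0),
                    ("electron, 1 keV", M_E, 1000.0),
                    ("helium, 20 meV", M_HE, 0.020),
                    ("helium, 100 meV", M_HE, 0.100)]:
    lam = de_broglie_wavelength(m, e)
    print(f"{label:18s} lambda = {lam*1e10:6.3f} Å")
```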
Thermal energy helium atom beams are exclusively surface sensitive, giving helium scattering an advantage over other techniques such as electron and X-ray scattering for surface studies. For the beam energies that are used, the helium atoms will have classical turning points 2–3 Å away from the surface atom cores. [ 3 ] The turning point is well above the surface atom cores, meaning that the beam will only interact with the outermost electrons. The first discussion of obtaining an image of a surface using atoms was by King and Bigas, [ 4 ] who showed that an image of a surface can be obtained by heating a sample and monitoring the atoms that evaporate from the surface. King and Bigas suggested that it could be possible to form an image by scattering atoms from the surface, though it was some time before this was demonstrated. The idea of imaging with atoms instead of light was subsequently widely discussed in the literature. [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] The initial approach to producing a helium microscope assumed that a focussing element is required to produce a high intensity beam of atoms. An early approach was to develop an atomic mirror , [ 8 ] which is appealing since the focussing is independent of the velocity distribution of the incoming atoms. However, producing an appropriate surface that is macroscopically curved and defect-free on an atomic length-scale has proved too challenging so far. [ 10 ] [ 11 ] Metastable atoms are atoms that have been excited out of the ground state but remain in an excited state for a significant period of time. Microscopy using metastable atoms has been shown to be possible, where the metastable atoms release stored internal energy into the surface, releasing electrons that provide information on the electronic structure. [ 12 ] [ 13 ] The kinetic energy of the metastable atoms means that only the surface electronic structure is probed, but the large energy exchange when the metastable atom de-excites will still perturb delicate sample surfaces. The first two-dimensional neutral helium images were obtained using a conventional Fresnel zone plate [ 9 ] by Koch et al. [ 14 ] in a transmission setup. Helium will not pass through a solid material; therefore, a large change in the measured signal is obtained when a sample is placed between the source and the detector. By maximising the contrast and using transmission mode, it was much easier to verify the feasibility of the technique. However, the setup used by Koch et al. with a zone plate did not produce a high enough signal to observe the reflected signal from the surface at the time. Nevertheless, the focussing obtained with a zone plate offers the potential for improved resolution due to the small beam spot size in the future. Research into neutral helium microscopes that use a Fresnel zone plate is an active area in Holst's group at the University of Bergen.
Since using a zone plate proved difficult due to the low focussing efficiency, alternative methods for forming a helium beam to produce images with atoms were explored. Recent efforts have avoided focussing elements and instead directly collimate a beam with a pinhole. The lack of atom optics means that the beam width will be significantly larger than in an electron microscope . The first published demonstration of a two-dimensional image formed by helium reflecting from the surface was by Witham and Sánchez, who used a pinhole to form the helium beam. [ 15 ] A small pinhole is placed very close to a sample, and the helium scattered into a large solid angle is fed to a detector. Images are collected by moving the sample around underneath the beam and monitoring how the scattered helium flux changes. In parallel to the work by Witham and Sánchez, a proof-of-concept machine named the scanning helium microscope (SHeM) was being developed in Cambridge in collaboration with Dastoor's group from the University of Newcastle. [ 16 ] The approach that was adopted was to simplify previous attempts that involved an atom mirror by using a pinhole, but to still use a conventional helium source to produce a high quality beam. Other differences from the Witham and Sánchez design include using a larger sample-to-pinhole distance, so that a larger variety of samples can be used, and using a smaller collection solid angle, so that it may be possible to observe more subtle contrast. These changes also reduced the total flux in the detector, meaning that higher-efficiency detectors are required (which is itself an active area of research). [ 17 ] [ 18 ] The atomic beam is formed through a supersonic expansion , which is a standard technique used in helium atom scattering . The centreline of the gas is selected by a skimmer to form an atom beam with a narrow velocity distribution. The gas is then further collimated by a pinhole to form a narrow beam, which is typically between 1 and 10 μm wide. The use of a focusing element (such as a zone plate) allows beam spot sizes below 1 μm to be achieved, but currently still comes with low signal intensity. The gas then scatters from the surface and is collected into a detector. In order to measure the flux of the neutral helium atoms, they must first be ionised. The inertness of helium that makes it a gentle probe also means that it is difficult to ionise, and therefore reasonably aggressive electron bombardment is typically used to create the ions. A mass spectrometer setup is then used to select only the helium ions for detection. Once the flux from a specific part of the surface is collected, the sample is moved underneath the beam to generate an image. By obtaining the value of the scattered flux across a grid of positions, these values can then be converted to an image. The observed contrast in helium images has typically been dominated by the variation in topography of the sample. Since the wavelength of the atom beam is small, surfaces typically appear extremely rough to the incoming atom beam. Therefore, the atoms are diffusely scattered and roughly follow Knudsen's law (the atom equivalent of Lambert's cosine law in optics). However, more recent work has begun to see divergence from diffuse scattering due to effects such as diffraction [ 18 ] and chemical contrast effects.
[ 19 ] However, the exact mechanisms for forming contrast in a helium microscope are an active field of research. Most cases involve some complex combination of several contrast mechanisms, making it difficult to disentangle the different contributions. Combinations of images from multiple perspectives allow stereophotogrammetry to produce partial three-dimensional images, especially valuable for biological samples subject to degradation in electron microscopes. [ 20 ] The optimal configurations of scanning helium microscopes are geometrical configurations that maximise the intensity of the imaging beam within a given lateral resolution and under certain technological constraints . [ 21 ] [ 22 ] When designing a scanning helium microscope, scientists strive to maximise the intensity of the imaging beam while minimising its width. The reason behind this is that the beam's width gives the resolution of the microscope, while its intensity is proportional to its signal-to-noise ratio. Due to their neutrality and high ionisation energy , neutral helium atoms are hard to detect. [ 22 ] This makes high-intensity beams a crucial requirement for a viable scanning helium microscope. In order to generate a high-intensity beam, scanning helium microscopes are designed to generate a supersonic expansion of the gas into vacuum, which accelerates neutral helium atoms to high velocities. [ 23 ] Scanning helium microscopes exist in two different configurations: the pinhole configuration [ 24 ] and the zone plate configuration. [ 25 ] In the pinhole configuration, a small opening (the pinhole) selects a section of the supersonic expansion far away from its origin, which has previously been collimated by a skimmer (essentially, another small pinhole). This section then becomes the imaging beam. In the zone plate configuration, a Fresnel zone plate focuses the atoms coming from a skimmer into a small focal spot. Each of these configurations has a different optimal design, as they are governed by different optics equations. For the pinhole configuration, the width of the beam (which we aim to minimise) is largely given by geometrical optics . The size of the beam at the sample plane is given by the lines connecting the skimmer edges with the pinhole edges. When the Fresnel number is very small ( F ≪ 1 {\displaystyle F\ll 1} ), the beam width is also affected by Fraunhofer diffraction (see equation below). In this equation Φ {\displaystyle \Phi } is the Full Width at Half Maximum of the beam, δ {\displaystyle \delta } is the geometrical projection of the beam and σ A {\displaystyle \sigma _{A}} is the Airy diffraction term. θ {\displaystyle \theta } is the Heaviside step function used here to indicate that the presence of the diffraction term depends on the value of the Fresnel number. Note that there are variations of this equation depending on what is defined as the "beam width" (for details compare [ 21 ] and [ 22 ] ). Due to the small wavelength of the helium beam, the Fraunhofer diffraction term can usually be omitted.
The intensity of the beam (which we aim to maximise) is given by the following equation (according to the Sikora and Andersen model): [ 26 ] Where I 0 {\displaystyle I_{0}} is the total intensity stemming from the supersonic expansion nozzle (taken as a constant in the optimisation problem), r p h {\displaystyle r_{ph}} is the radius of the pinhole, S is the speed ratio of the beam, r S {\displaystyle r_{S}} is the radius of the skimmer, R F {\displaystyle R_{F}} is the radius of the supersonic expansion quitting surface (the point in the expansion from which atoms can be considered to travel in a straight line), x S {\displaystyle x_{S}} is the distance between the nozzle and the skimmer and a {\displaystyle a} is the distance between the skimmer and the pinhole. There are several other versions of this equation that depend on the intensity model, but they all show a quadratic dependence on the pinhole radius (the bigger the pinhole, the more intensity) and an inverse quadratic dependence on the distance between the skimmer and the pinhole (the more the atoms spread, the less intensity). By combining the two equations shown above, one finds that, for a given beam width Φ {\displaystyle \Phi } in the geometrical-optics regime, the following values correspond to intensity maxima: Here, W D {\displaystyle W_{D}} represents the working distance of the microscope and K = 2 2 ln ⁡ 2 / 3 {\displaystyle K=2{\sqrt {2\ln 2/3}}} is a constant that stems from the definition of the beam width. Note that both equations are given with respect to the distance between the skimmer and the pinhole, a . The global maximum of intensity can then be obtained numerically by substituting these values into the intensity equation above. In general, smaller skimmer radii coupled with smaller distances between the skimmer and the pinhole are preferred, leading in practice to the design of increasingly smaller pinhole microscopes. The zone plate microscope uses a zone plate (which acts roughly like a classical lens ) instead of a pinhole to focus the atom beam into a small focal spot. This means that the beam width equation changes significantly (see below). Here, M {\displaystyle M} is the zone plate magnification and Δ r {\displaystyle \Delta r} is the width of the smallest zone. Note the presence of chromatic aberrations ( σ c m {\displaystyle \sigma _{cm}} ). The approximation sign indicates the regime in which the distance between the zone plate and the skimmer is much bigger than its focal length. The first term in this equation is similar to the geometric contribution δ {\displaystyle \delta } in the pinhole case: a bigger zone plate (taking all other parameters as constant) corresponds to a bigger focal spot size. The third term differs from the pinhole configuration optics, as it includes a quadratic relation with the skimmer size (which is imaged through the zone plate) and a linear relation with the zone plate magnification, which will at the same time depend on its radius. The equation to maximise, the intensity, is the same as in the pinhole case with the substitution r p h ↔ r Z P {\displaystyle r_{ph}\leftrightarrow r_{ZP}} . By substitution of the magnification equation: where λ {\displaystyle \lambda } is the average de Broglie wavelength of the beam.
Taking a constant Δ r {\displaystyle \Delta r} , which should be made equal to the smallest achievable value, the maxima of the intensity equation with respect to the zone plate radius and the skimmer-zone plate distance a {\displaystyle a} can be obtained analytically. The derivative of the intensity with respect to the zone plate radius can be reduced to the following cubic equation (once it has been set equal to zero): Here some groupings are used: Γ {\displaystyle \Gamma } is a constant that gives the relative size of the smallest aperture of the zone plate compared with the average wavelength of the beam, and Φ ′ {\displaystyle \Phi '} is the modified beam width, which is used through the derivation to avoid explicitly operating with the constant Airy term: Φ ′ 2 = σ c m 2 + ( M r S 3 ) 2 {\displaystyle \Phi '^{2}=\sigma _{cm}^{2}+\left({\frac {Mr_{S}}{\sqrt {3}}}\right)^{2}} . This cubic equation is obtained under a series of geometrical assumptions and has a closed-form analytical solution that can be consulted in the original paper [ 27 ] or obtained with any modern computer algebra system. The practical consequence of this equation is that zone plate microscopes are optimally designed when the distances between the components are small and the radius of the zone plate is also small. This is in line with the results obtained for the pinhole configuration and, in practice, likewise points toward the design of smaller scanning helium microscopes.
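The pinhole-configuration trade-off described above — intensity grows roughly with the square of the pinhole radius but falls with the square of the skimmer–pinhole distance, while the geometric projection fixes the beam width — can be illustrated with a small numerical sketch. The intensity scaling and all dimensions below are simplified, illustrative assumptions (diffraction is ignored), not values from the cited design studies.

```python
# Schematic sketch (simplified model, illustrative numbers): for a fixed target
# beam radius at the sample, scan the skimmer-pinhole distance a, compute the
# largest pinhole radius allowed by the geometric projection, and evaluate a
# relative intensity ~ r_ph**2 / a**2 as described qualitatively in the text.
import numpy as np

r_skimmer = 50e-6       # skimmer radius (assumed), m
working_dist = 3e-3     # pinhole-to-sample working distance (assumed), m
target_radius = 2.5e-6  # desired beam radius at the sample (assumed), m

def beam_radius(r_ph, a):
    """Beam radius at the sample from straight lines joining skimmer and pinhole edges."""
    return r_ph + (r_skimmer + r_ph) * working_dist / a

a_values = np.linspace(5e-3, 200e-3, 400)   # skimmer-pinhole distances to try
best = None
for a in a_values:
    # Largest pinhole radius that still meets the target radius (linear solve).
    r_ph = (target_radius - r_skimmer * working_dist / a) / (1.0 + working_dist / a)
    if r_ph <= 0:
        continue  # geometry infeasible at this distance
    intensity = (r_ph / a) ** 2             # relative intensity, arbitrary units
    if best is None or intensity > best[2]:
        best = (a, r_ph, intensity)

a_opt, r_opt, i_opt = best
print(f"toy-model optimum: a = {a_opt*1e3:.1f} mm, r_pinhole = {r_opt*1e6:.2f} um, "
      f"beam radius check = {beam_radius(r_opt, a_opt)*1e6:.2f} um, I_rel = {i_opt:.3e}")
```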
https://en.wikipedia.org/wiki/Scanning_helium_microscopy
A scanning mobility particle sizer ( SMPS ) is an analytical instrument that measures the size and number concentration of aerosol particles with diameters from 2.5 nm to 1000 nm. [ 1 ] It employs a continuous, fast-scanning technique to provide high-resolution measurements. The particles that are investigated can be of a biological or chemical nature. The instrument can be used for indoor air quality measurement, vehicle exhaust studies, [ 2 ] research on bioaerosols , atmospheric studies, and toxicology testing. The air to be analyzed is pumped through an ionizing source (or neutralizer), which establishes a known charge distribution. Then, exposure to an electric field in the DMA isolates a certain particle diameter, which is a function of the voltage generating the field (each voltage value corresponds to a particular particle diameter that passes through the DMA). Finally, these particles of the same diameter are counted by an optical device (CPC). [ 3 ] The air inlet to be analyzed can be equipped with an impaction head. An impaction head, or fractionator head, is a device that uses the principles of fluid mechanics to trap, by their inertia, the largest particles present in the air. The sampling inlet of the SMPS is thus protected from large dust and insects, so the air that enters it contains only the fine particles to be quantified. [ 4 ] These are usually called "PM10 inlets" or "PM2.5 inlets". The air flow then passes through an ionizing source. The sampled air is exposed to high concentrations of positive and negative ions; after a certain number of collisions, the charge distribution becomes stable and known. The neutralizer is also used to eliminate electrostatic charges from aerosol particles. The charge distribution from the neutralizer is a balanced charge distribution that follows Boltzmann's law. [ 5 ] The sample then enters a differential mobility analyzer. The air and aerosol (whose charge distribution is now balanced and known) are introduced into an air flow channel. A central tubular electrode and a concentric outer electrode generate an electric field in this fluid path. In the channel, the particles are subjected to a uniform electric field and an air flow. The particles then move at a speed that depends on their electrical mobility. At a given voltage, only particles of a certain diameter will follow this channel until they exit; smaller and larger particles crash into the electrodes. [ 6 ] The air now contains only particles of a certain diameter. The flow is introduced into a CPC, a condensation particle counter , which measures the concentration of particles in an aerosol sample. The CPC works by condensing butanol vapor on the particles present in the air sample. The particles are exposed to butanol vapor heated to 39 °C. The butanol vapor condenses on the particles, increasing their size and thus facilitating their optical detection. The particles are then exposed to a laser beam, and each particle scatters light. The peaks of scattered light intensity are continuously counted and expressed in particles/cm 3 . The device therefore provides a continuous measurement of the particle size distribution in the air. The DMA scans the voltage between its electrodes back and forth from 0 to 10,000 V, corresponding to a measurement range of 8 nm to 800 or 1000 nm, and the CPC quantifies each of these diameters.
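The voltage-to-diameter mapping of the DMA described above is commonly expressed through the particle's electrical mobility. The sketch below uses the standard cylindrical-DMA mobility expression and the Cunningham slip correction; the DMA dimensions, sheath flow, and gas properties are illustrative assumptions rather than the specifications of any particular instrument.

```python
# Minimal sketch (standard textbook relations, illustrative dimensions): map a
# DMA voltage to the selected particle diameter for singly charged particles.
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
MU_AIR = 1.81e-5             # dynamic viscosity of air, Pa*s (assumed ~20 C)
MFP_AIR = 67.3e-9            # mean free path of air, m (assumed ~1 atm, 20 C)

# Illustrative cylindrical-DMA geometry and sheath flow (not a real instrument spec).
R_INNER, R_OUTER, LENGTH = 9.37e-3, 19.61e-3, 0.44     # m
SHEATH_FLOW = 3.0 / 60000.0                            # 3 L/min in m^3/s

def slip_correction(d_p):
    """Cunningham slip correction factor for particle diameter d_p (m)."""
    kn = 2.0 * MFP_AIR / d_p
    return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def selected_mobility(voltage):
    """Electrical mobility selected by the DMA at a given centre-rod voltage (m^2 V^-1 s^-1)."""
    return SHEATH_FLOW * math.log(R_OUTER / R_INNER) / (2.0 * math.pi * voltage * LENGTH)

def selected_diameter(voltage, n_charges=1, tol=1e-12):
    """Invert Z_p = n e Cc(d) / (3 pi mu d) for d with a damped fixed-point iteration."""
    z_p = selected_mobility(voltage)
    d = 10e-9  # initial guess, m
    for _ in range(500):
        d_model = n_charges * E_CHARGE * slip_correction(d) / (3.0 * math.pi * MU_AIR * z_p)
        d_next = 0.5 * (d + d_model)   # damping keeps the iteration stable
        if abs(d_next - d) < tol:
            return d_next
        d = d_next
    return d

for v in (10, 100, 1000, 10000):
    print(f"V = {v:6d} V  ->  d_p ~ {selected_diameter(v)*1e9:7.1f} nm")
```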
https://en.wikipedia.org/wiki/Scanning_mobility_particle_sizer
Scanning probe lithography [ 1 ] ( SPL ) describes a set of nanolithographic methods to pattern material on the nanoscale using scanning probes. It is a direct-write, mask-less approach which bypasses the diffraction limit and can reach resolutions below 10 nm. [ 2 ] It is considered an alternative lithographic technology often used in academic and research environments. The term scanning probe lithography was coined after the first patterning experiments with scanning probe microscopes (SPM) in the late 1980s. [ 3 ] The different approaches towards SPL can be classified by their goal to either add or remove material, by the general nature of the process (either chemical or physical), or according to the driving mechanism of the probe-surface interaction used in the patterning process: mechanical , thermal , diffusive and electrical . Mechanical scanning probe lithography (m-SPL) is a nanomachining or nano-scratching [ 4 ] top-down approach without the application of heat. [ 5 ] Thermo-mechanical SPL applies heat together with a mechanical force, e.g. for indenting polymers in the Millipede memory . Thermal scanning probe lithography (t-SPL) uses a heatable scanning probe in order to efficiently remove material from a surface without the application of significant mechanical forces. The patterning depth can be controlled to create high-resolution 3D structures. [ 6 ] [ 7 ] Thermochemical scanning probe lithography (tc-SPL) or thermochemical nanolithography (TCNL) employs the scanning probe tips to induce thermally activated chemical reactions to change the chemical functionality or the phase of surfaces. Such thermally activated reactions have been shown in proteins , [ 8 ] organic semiconductors , [ 9 ] electroluminescent conjugated polymers, [ 10 ] and nanoribbon resistors. [ 11 ] Furthermore, deprotection of functional groups [ 12 ] (sometimes involving temperature gradients [ 13 ] ), reduction of oxides, [ 14 ] and the crystallization of piezoelectric/ferroelectric ceramics [ 15 ] have been demonstrated. Dip-pen scanning probe lithography (dp-SPL) or dip-pen nanolithography (DPN) is a scanning probe lithography technique based on diffusion , where the tip is employed to create patterns on a range of substances by deposition of a variety of liquid inks . [ 16 ] [ 17 ] [ 18 ] Thermal dip-pen scanning probe lithography or thermal dip-pen nanolithography (TDPN) extends the usable inks to solids, which can be deposited in their liquid form when the probes are pre-heated. [ 19 ] [ 20 ] [ 21 ] Oxidation scanning probe lithography (o-SPL), also called local oxidation nanolithography (LON), scanning probe oxidation, nano-oxidation, local anodic oxidation, or AFM oxidation lithography, is based on the spatial confinement of an oxidation reaction. [ 22 ] [ 23 ] Bias-induced scanning probe lithography (b-SPL) uses the high electric fields created at the apex of a probe tip when voltages are applied between tip and sample to facilitate and confine a variety of chemical reactions that decompose gases [ 24 ] or liquids [ 2 ] [ 25 ] in order to locally deposit and grow materials on surfaces. In current-induced scanning probe lithography (c-SPL), in addition to the high electric fields of b-SPL, a focused electron current emanating from the SPM tip is also used to create nanopatterns, e.g. in polymers [ 26 ] and molecular glasses.
[ 27 ] Various scanning probe techniques have been developed to write magnetization patterns into ferromagnetic structures which are often described as magnetic SPL techniques. Thermally-assisted magnetic scanning probe lithography (tam-SPL) [ 28 ] operates by employing a heatable scanning probe to locally heat and cool regions of an exchange-biased ferromagnetic layer in the presence of an external magnetic field. This causes a shift in the hysteresis loop of exposed regions, pinning the magnetization in a different orientation compared to unexposed regions. The pinned regions become stable even in the presence of external fields after cooling, allowing arbitrary nanopatterns to be written into the magnetization of the ferromagnetic layer. In arrays of interacting ferromagnetic nano-islands such as artificial spin ice , scanning probe techniques have been used to write arbitrary magnetic patterns by locally reversing the magnetization of individual islands. Topological defect-driven magnetic writing (TMW) [ 29 ] uses the dipolar field of a magnetized scanning probe to induce topological defects in the magnetization field of individual ferromagnetic islands. These topological defects interact with the island edges and annihilate, leaving the magnetization reversed. Another way of writing such magnetic patterns is field-assisted magnetic force microscopy patterning, [ 30 ] where an external magnetic field a little below the switching field of the nano-islands is applied and a magnetized scanning probe is used to locally raise the field strength above that required to reverse the magnetization of selected islands. In magnetic systems where interfacial Dzyaloshinskii–Moriya interactions stabilize magnetic textures known as magnetic skyrmions , scanning-probe magnetic nanolithography has been employed for the direct writing of skyrmions and skyrmion lattices. [ 31 ] [ 32 ] Being a serial technology, SPL is inherently slower than e.g. photolithography or nanoimprint lithography , while parallelization as required for mass-fabrication is considered a large systems engineering effort ( see also Millipede memory ). As for resolution, SPL methods bypass the optical diffraction limit due to their use of scanning probes compared with photolithographic methods. Some probes have integrated in-situ metrology capabilities, allowing for feedback control during the write process. [ 33 ] SPL works under ambient atmospheric conditions , without the need for ultra high vacuum ( UHV ), unlike e-beam or EUV lithography .
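The throughput limitation of serial, direct-write patterning mentioned above can be made concrete with a back-of-the-envelope estimate. The pixel size, pixel rate, and probe count below are illustrative assumptions, not figures from the cited work.

```python
# Rough sketch (illustrative numbers): estimate the time needed to write an area
# serially, pixel by pixel, which is why single-probe SPL is slow compared with
# parallel, full-field techniques and why probe arrays are of interest.
def write_time_hours(area_mm2, pixel_nm, pixel_rate_hz, n_probes=1):
    """Time in hours to expose every pixel of `area_mm2` with square pixels of side `pixel_nm`."""
    pixels = (area_mm2 * 1e12) / (pixel_nm ** 2)   # 1 mm^2 = 1e12 nm^2
    return pixels / (pixel_rate_hz * n_probes) / 3600.0

# A 1 mm^2 field at 10 nm pixels and an assumed 100 kHz pixel rate:
print(f"single probe: {write_time_hours(1.0, 10.0, 1e5):8.1f} h")
# The same field shared across an assumed 1000-probe array (Millipede-style parallelization):
print(f"1000 probes : {write_time_hours(1.0, 10.0, 1e5, n_probes=1000):8.1f} h")
```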
https://en.wikipedia.org/wiki/Scanning_probe_lithography
Scanning tunneling spectroscopy (STS) , an extension of scanning tunneling microscopy (STM), is used to provide information about the density of electrons in a sample as a function of their energy. In scanning tunneling microscopy, a metal tip is moved over a conducting sample without making physical contact. A bias voltage applied between the sample and tip allows a current to flow between the two. This is a result of quantum tunneling across a barrier, which in this instance is the physical gap between the tip and the sample. The scanning tunneling microscope is used to obtain "topographs" - topographic maps - of surfaces. The tip is rastered across a surface and, in constant-current mode, a constant current is maintained between the tip and the sample by adjusting the height of the tip. A plot of the tip height at all measurement positions provides the topograph. These topographic images can obtain atomically resolved information on metallic and semiconducting surfaces. However, the scanning tunneling microscope does not measure the physical height of surface features. One example of this limitation is an atom adsorbed onto a surface: the image will show some perturbation of the apparent height at this point. A detailed analysis of the way in which an image is formed shows that the transmission of the electric current between the tip and the sample depends on two factors: (1) the geometry of the sample and (2) the arrangement of the electrons in the sample. The arrangement of the electrons in the sample is described quantum mechanically by an "electron density". The electron density is a function of both position and energy, and is formally described as the local density of electron states, abbreviated as local density of states (LDOS), which is a function of energy. Spectroscopy, in its most general sense, refers to a measurement of the number of something as a function of energy. For scanning tunneling spectroscopy the scanning tunneling microscope is used to measure the number of electrons (the LDOS) as a function of the electron energy. The electron energy is set by the electrical potential difference (voltage) between the sample and the tip. The location is set by the position of the tip. At its simplest, a "scanning tunneling spectrum" is obtained by placing a scanning tunneling microscope tip above a particular place on the sample. With the height of the tip fixed, the electron tunneling current is then measured as a function of electron energy by varying the voltage between the tip and the sample (the tip-to-sample voltage sets the electron energy). The change of the current with the energy of the electrons is the simplest spectrum that can be obtained; it is often referred to as an I-V curve. As is shown below, it is the slope of the I-V curve at each voltage (often called the dI/dV curve) which is more fundamental, because dI/dV corresponds to the electron density of states at the local position of the tip, the LDOS. Scanning tunneling spectroscopy is an experimental technique which uses a scanning tunneling microscope (STM) to probe the local density of electronic states (LDOS) and the band gap of surfaces and materials on surfaces at the atomic scale. [ 1 ] Generally, STS involves observation of changes in constant-current topographs with tip-sample bias , local measurement of the tunneling current versus tip-sample bias (I-V) curve, measurement of the tunneling conductance , d I / d V {\displaystyle dI/dV} , or more than one of these.
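Because the slope of the I-V curve is the quantity of interest, the most basic STS data treatment is simply numerical differentiation of a measured I-V curve. The following sketch illustrates this on synthetic data; the model DOS, the crude way the current is built from it, and the noise level are arbitrary choices for illustration only.

```python
# Minimal sketch: build a synthetic I-V curve from an assumed sample DOS and
# recover dI/dV by numerical differentiation, the simplest STS data treatment.
import numpy as np

def sample_dos(energy_eV):
    """Toy LDOS (arbitrary units): flat background with a gap-like suppression near zero."""
    return 1.0 + np.tanh((np.abs(energy_eV) - 0.3) / 0.05)

bias = np.linspace(-1.0, 1.0, 401)          # tip-sample bias, V

# With a constant tip DOS and transmission, I(V) is modelled here as the integral
# of the sample DOS over the bias window (a crude caricature of the convolution).
dos = sample_dos(bias)
current = np.cumsum(dos) * (bias[1] - bias[0])
current -= np.interp(0.0, bias, current)    # enforce I(0) = 0
current += np.random.normal(0.0, 5e-4, bias.size)  # add a little measurement noise

didv = np.gradient(current, bias)           # numerical dI/dV, tracks the toy LDOS

for i in range(0, bias.size, 80):
    print(f"V = {bias[i]:+.2f} V   dI/dV = {didv[i]:.3f} (arb.)")
```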
Since the tunneling current in a scanning tunneling microscope only flows in a region with diameter ~5 Å, STS is unusual in comparison with other surface spectroscopy techniques, which average over a larger surface region. The origins of STS are found in some of the earliest STM work of Gerd Binnig and Heinrich Rohrer , in which they observed changes in the appearance of some atoms in the (7 x 7) unit cell of the Si(111) – (7 x 7) surface with tip-sample bias . [ 2 ] STS provides the possibility for probing the local electronic structure of metals , semiconductors , and thin insulators on a scale unobtainable with other spectroscopic methods. Additionally, topographic and spectroscopic data can be recorded simultaneously. Since STS relies on tunneling phenomena and measurement of the tunneling current or its derivative , understanding the expressions for the tunneling current is very important. Using the modified Bardeen transfer Hamiltonian method, which treats tunneling as a perturbation , the tunneling current ( I {\displaystyle I} ) is found to be an expression in which f ( E ) {\displaystyle f\left(E\right)} is the Fermi distribution function, ρ s {\displaystyle \rho _{s}} and ρ T {\displaystyle \rho _{T}} are the density of states (DOS) in the sample and tip, respectively, and M μ ν {\displaystyle M_{\mu \nu }} is the tunneling matrix element between the modified wavefunctions of the tip and the sample surface. The tunneling matrix element describes the energy lowering due to the interaction between the two states. Here ψ {\displaystyle \psi } and χ {\displaystyle \chi } are the sample wavefunction modified by the tip potential, and the tip wavefunction modified by the sample potential, respectively. [ 3 ] For low temperatures and a constant tunneling matrix element, the tunneling current reduces to a convolution of the DOS of the tip and the sample. [ 3 ] Generally, STS experiments attempt to probe the sample DOS, but equation (3) shows that the tip DOS must be known for the measurement to have meaning. Under the gross assumption that the tip DOS is constant, equation (3) implies that the conductance tracks the sample DOS; for these ideal assumptions, the tunneling conductance is directly proportional to the sample DOS. [ 3 ] For higher bias voltages, the predictions of simple planar tunneling models using the Wentzel–Kramers–Brillouin (WKB) approximation are useful. In the WKB theory, the tunneling current is predicted to be an integral over energy in which ρ s {\displaystyle \rho _{s}} and ρ t {\displaystyle \rho _{t}} are the density of states (DOS) in the sample and tip, respectively. [ 2 ] The energy- and bias-dependent electron tunneling transition probability, T, depends on ϕ s {\displaystyle \phi _{s}} and ϕ t {\displaystyle \phi _{t}} , the respective work functions of the sample and tip, and on Z {\displaystyle Z} , the distance from the sample to the tip. [ 2 ] The tip is often regarded as a single spherical apex, essentially neglecting further shape-induced effects. This is the Tersoff-Hamann approximation, which treats the tip as a ball-shaped apex of a certain radius. The tunneling current therefore becomes proportional to the local density of states (LDOS). Acquiring standard STM topographs at many different tip-sample biases and comparing them to experimental topographic information is perhaps the most straightforward spectroscopic method. The tip-sample bias can also be changed on a line-by-line basis during a single scan. This method creates two interleaved images at different biases.
Since only the states between the Fermi levels of the sample and the tip contribute to I {\displaystyle I} , this method is a quick way to determine whether there are any interesting bias-dependent features on the surface. However, only limited information about the electronic structure can be extracted by this method, since the constant I {\displaystyle I} topographs depend on the tip and sample DOS's and the tunneling transmission probability, which depends on the tip-sample spacing, as described in equation (5). [ 4 ] By using modulation techniques, a constant current topograph and the spatially resolved d I / d V {\displaystyle dI/dV} can be acquired simultaneously. A small, high frequency sinusoidal modulation voltage is superimposed on the D.C. tip-sample bias. The A.C. component of the tunneling current is recorded using a lock-in amplifier, and the component in-phase with the tip-sample bias modulation gives d I / d V {\displaystyle dI/dV} directly. The amplitude of the modulation V m has to be kept smaller than the spacing of the characteristic spectral features. The broadening caused by the modulation amplitude is 2 eVm and it has to be added to the thermal broadening of 3.2 k B T. [ 5 ] In practice, the modulation frequency is chosen slightly higher than the bandwidth of the STM feedback system. [ 4 ] This choice prevents the feedback control from compensating for the modulation by changing the tip-sample spacing and minimizes the displacement current 90° out-of-phase with the applied bias modulation. Such effects arise from the capacitance between the tip and the sample, which grows as the modulation frequency increases. [ 2 ] In order to obtain I-V curves simultaneously with a topograph, a sample-and-hold circuit is used in the feedback loop for the z piezo signal. The sample-and-hold circuit freezes the voltage applied to the z piezo, which freezes the tip-sample distance, at the desired location allowing I-V measurements without the feedback system responding. [ 6 ] [ 7 ] The tip-sample bias is swept between the specified values, and the tunneling current is recorded. After the spectra acquisition, the tip-sample bias is returned to the scanning value, and the scan resumes. Using this method, the local electronic structure of semiconductors in the band gap can be probed. [ 4 ] There are two ways to record I-V curves in the manner described above. In constant-spacing scanning tunneling spectroscopy (CS-STS), the tip stops scanning at the desired location to obtain an I-V curve. The tip-sample spacing is adjusted to reach the desired initial current, which may be different from the initial current setpoint, at a specified tip-sample bias. A sample-and-hold amplifier freezes the z piezo feedback signal, which holds the tip-sample spacing constant by preventing the feedback system from changing the bias applied to the z piezo. [ 7 ] The tip-sample bias is swept through the specified values, and the tunneling current is recorded. Either numerical differentiation of I(V) or lock-in detection as described above for modulation techniques can be used to find d I / d V {\displaystyle dI/dV} . If lock-in detection is used, then an A.C. modulation voltage is applied to the D.C. tip-sample bias during the bias sweep and the A.C. component of the current in-phase with the modulation voltage is recorded. In variable-spacing scanning tunneling spectroscopy (VS-STS), the same steps occur as in CS-STS through turning off the feedback. 
As the tip-sample bias is swept through the specified values, the tip-sample spacing is decreased continuously as the magnitude of the bias is reduced. [ 6 ] [ 8 ] Generally, a minimum tip-sample spacing is specified to prevent the tip from crashing into the sample surface at the 0 V tip-sample bias. Lock-in detection and modulation techniques are used to find the conductivity, because the tunneling current is a function also of the varying tip-sample spacing. Numerical differentiation of I(V) with respect to V would include the contributions from the varying tip-sample spacing. [ 9 ] Introduced by Mårtensson and Feenstra to allow conductivity measurements over several orders of magnitude, VS-STS is useful for conductivity measurements on systems with large band gaps. Such measurements are necessary to properly define the band edges and examine the gap for states. [ 8 ] Current-imaging-tunneling spectroscopy (CITS) is an STS technique where an I-V curve is recorded at each pixel in the STM topograph. [ 6 ] Either variable-spacing or constant-spacing spectroscopy may be used to record the I-V curves. The conductance, d I / d V {\displaystyle dI/dV} , can be obtained by numerical differentiation of I with respect to V or acquired using lock-in detection as described above. [ 10 ] Because the topographic image and the tunneling spectroscopy data are obtained nearly simultaneously, there is nearly perfect registry of topographic and spectroscopic data. As a practical concern, the number of pixels in the scan or the scan area may be reduced to prevent piezo creep or thermal drift from moving the feature of study or the scan area during the duration of the scan. While most CITS data are obtained on a time scale of several minutes, some experiments may require stability over longer periods of time. One approach to improving the experimental design is to apply the feature-oriented scanning (FOS) methodology. [ 11 ] From the obtained I-V curves, the band gap of the sample at the location of the I-V measurement can be determined. By plotting the magnitude of I on a log scale versus the tip-sample bias, the band gap can clearly be determined. Although determination of the band gap is possible from a linear plot of the I-V curve, the log scale increases the sensitivity. [ 9 ] Alternatively, a plot of the conductance, d I / d V {\displaystyle dI/dV} , versus the tip-sample bias, V, allows one to locate the band edges that determine the band gap. The structure in the d I / d V {\displaystyle dI/dV} , as a function of the tip-sample bias, is associated with the density of states of the surface when the tip-sample bias is less than the work functions of the tip and the sample. Usually, the WKB approximation for the tunneling current is used to interpret these measurements at low tip-sample bias relative to the tip and sample work functions. Differentiating equation (5), the WKB expression for I, with respect to the bias gives equation (7), in which ρ s {\displaystyle \rho _{s}} is the sample density of states, ρ t {\displaystyle \rho _{t}} is the tip density of states, and T is the tunneling transmission probability. [ 2 ] Although the tunneling transmission probability T is generally unknown, at a fixed location T increases smoothly and monotonically with the tip-sample bias in the WKB approximation. Hence, structure in the d I / d V {\displaystyle dI/dV} is usually assigned to features in the density of states in the first term of equation (7). [ 4 ] Interpretation of d I / d V {\displaystyle dI/dV} as a function of position is more complicated.
Spatial variations in T show up in measurements of d I / d V {\displaystyle dI/dV} as an inverted topographic background. When obtained in constant current mode, images of the spatial variation of d I / d V {\displaystyle dI/dV} contain a convolution of topographic and electronic structure. An additional complication arises since d I / d V = I / V {\displaystyle dI/dV=I/V} in the low-bias limit. Thus, d I / d V {\displaystyle dI/dV} diverges as V approaches 0, preventing investigation of the local electronic structure near the Fermi level. [ 4 ] Since both the tunneling current, equation (5), and the conductance, equation (7), depend on the tip DOS and the tunneling transition probability, T, quantitative information about the sample DOS is very difficult to obtain. Additionally, the voltage dependence of T, which is usually unknown, can vary with position due to local fluctuations in the electronic structure of the surface. [ 2 ] For some cases, normalizing d I / d V {\displaystyle dI/dV} by dividing by I / V {\displaystyle I/V} can minimize the effect of the voltage dependence of T and the influence of the tip-sample spacing. Using the WKB approximation, equations (5) and (7), we obtain: [ 12 ] Feenstra et al. argued that the dependencies of T ( E , e V , r ) {\displaystyle T\left(E,eV,r\right)} and T ( e V , e V , r ) {\displaystyle T\left(eV,eV,r\right)} on tip-sample spacing and tip-sample bias tend to cancel, since they appear as ratios. [ 13 ] This cancellation reduces the normalized conductance to the following form: where B ( V ) {\displaystyle B\left(V\right)} normalizes T to the DOS and A ( V ) {\displaystyle A\left(V\right)} describes the influence of the electric field in the tunneling gap on the decay length. Under the assumption that A ( V ) {\displaystyle A\left(V\right)} and B ( V ) {\displaystyle B\left(V\right)} vary slowly with tip-sample bias, the features in ( d I / d V ) / ( I / V ) {\displaystyle \left(dI/dV\right)/\left(I/V\right)} reflect the sample DOS, ρ s {\displaystyle \rho _{s}} . [ 2 ] While STS can provide spectroscopic information with amazing spatial resolution, there are some limitations. The STM and STS lack chemical sensitivity. Since the tip-sample bias range in tunneling experiments is limited to ± ϕ / e {\displaystyle \pm \phi /e} , where ϕ {\displaystyle \phi } is the apparent barrier height, STM and STS only sample valence electron states. Element-specific information is generally impossible to extract from STM and STS experiments, since the chemical bond formation greatly perturbs the valence states. [ 4 ] At finite temperatures, the thermal broadening of the electron energy distribution due to the Fermi-distribution limits spectroscopic resolution. At T = 300 K {\displaystyle T=300\,\mathrm {K} } , k B T ≈ 0.026 e V {\displaystyle k_{\text{B}}T\approx 0.026\,\mathrm {eV} } , and the sample and tip energy distribution spread are both 2 k B T ≈ 0.052 e V {\displaystyle 2k_{\text{B}}T\approx 0.052\,\mathrm {eV} } . Hence, the total energy deviation is Δ E ≈ 0.1 e V {\displaystyle \Delta E\approx 0.1\,\mathrm {eV} } . [ 3 ] Assuming the dispersion relation for simple metals, it follows from the uncertainty relation Δ x Δ k ≥ 1 / 2 {\displaystyle \Delta x\Delta k\geq 1/2} that where E F {\displaystyle E_{F}} is the Fermi energy , E 0 {\displaystyle E_{0}} is the bottom of the valence band, k F {\displaystyle k_{F}} is the Fermi wave vector, and r {\displaystyle r} is the lateral resolution. 
Since spatial resolution depends on the tip-sample spacing, smaller tip-sample spacings and higher topographic resolution blur the features in tunneling spectra. [ 3 ] Despite these limitations, STS and STM provide the possibility for probing the local electronic structure of metals, semiconductors, and thin insulators on a scale unobtainable with other spectroscopic methods. Additionally, topographic and spectroscopic data can be recorded simultaneously.
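As a rough numerical companion to the WKB transmission probability T discussed in this article, the sketch below evaluates one common textbook form, T ≈ exp(−2Z·sqrt(2m((φs+φt)/2 + eV/2 − E))/ħ). That explicit form, and every parameter value used here, is an assumption for illustration; the article refers to T but its exact expression is not reproduced above.

```python
# Minimal sketch (assumed standard WKB form, illustrative parameters only):
#   T(E, eV, Z) ~ exp(-2 Z sqrt(2 m ((phi_s + phi_t)/2 + eV/2 - E)) / hbar)
import numpy as np

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg
EV = 1.602176634e-19     # J per eV

def wkb_transmission(E_eV, bias_V, gap_m, phi_s_eV=4.5, phi_t_eV=4.5):
    """Approximate WKB transmission for an electron at energy E (eV above E_F)."""
    barrier_eV = 0.5 * (phi_s_eV + phi_t_eV) + 0.5 * bias_V - E_eV
    barrier_eV = max(barrier_eV, 0.0)          # no barrier left -> transmission ~ 1
    kappa = np.sqrt(2.0 * M_E * barrier_eV * EV) / HBAR
    return np.exp(-2.0 * gap_m * kappa)

print(wkb_transmission(E_eV=0.0, bias_V=0.1, gap_m=5e-10))  # ~5 Å tip-sample gap
```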
https://en.wikipedia.org/wiki/Scanning_tunneling_spectroscopy
Scanning vibrating electrode technique ( SVET ), also known as vibrating probe within the field of biology , is a scanning probe microscopy (SPM) technique which visualizes electrochemical processes at a sample. It was originally introduced in 1974 by Jaffe and Nuccitelli to investigate the electrical current densities near living cells. [ 1 ] Starting in the 1980s Hugh Isaacs began to apply SVET to a number of different corrosion studies. [ 2 ] SVET measures local current density distributions in the solution above the sample of interest, to map electrochemical processes in situ as they occur. It utilizes a probe, vibrating perpendicular to the sample of interest, to enhance the measured signal. [ 1 ] It is related to scanning ion-selective electrode technique (SIET), which can be used with SVET in corrosion studies, [ 3 ] and scanning reference electrode technique (SRET), which is a precursor to SVET. [ 4 ] Scanning vibrating electrode technique was originally introduced to sensitively measure extracellular currents by Jaffe and Nuccitelli in 1974. [ 1 ] Jaffe and Nuccitelli then demonstrated the ability of the technique through the measurement of the extracellular currents involved with amputated and re-generating newt limbs, [ 5 ] developmental currents of chick embryos, [ 6 ] and the electrical currents associated with amoeboid movement. [ 7 ] In corrosion, the scanning reference electrode technique (SRET) existed as the precursor to SVET, and was first introduced commercially and trademarked by Uniscan Instruments, [ 8 ] now part of Bio-Logic Science Instruments. [ 9 ] SRET is an in situ technique in which a reference electrode is scanned near a sample surface to map the potential distribution in the electrolyte above the sample. Using SRET it is possible to determine the anodic and cathodic sites of a corroding sample without the probe altering the corrosion process. [ 10 ] SVET was first applied to and developed for the local investigation of corrosion processes by Hugh Isaacs. [ 2 ] SVET measures the currents associated with a sample in solution with natural electrochemical activity, or which is biased to force electrochemical activity. In both cases the current radiates into solution from the active regions of the sample. In a typical SVET instrument the probe is mounted on a piezoelectric vibrator on an x,y stage. The probe is vibrated perpendicular to the plane of the sample, resulting in the measurement of an ac signal . The resulting ac signal is detected and demodulated using an input phase angle by a lock-in amplifier to produce a dc signal. [ 1 ] [ 11 ] [ 12 ] The input phase angle is typically found by manually adjusting the phase input of the lock-in amplifier until there is no response; 90 degrees is then added to determine the optimum phase. [ 13 ] The reference phase can also be found automatically by some commercial instruments. [ 14 ] The demodulated dc signal which results can then be plotted to reflect the local activity distribution. In SVET, the probe vibration results in a more sensitive measurement than its non-vibrating predecessors, [ 1 ] as well as giving rise to an improvement of the signal-to-noise ratio . [ 13 ] The probe vibration does not affect the process under study under normal experimental conditions. [ 15 ] [ 16 ] The SVET signal is affected by a number of factors including the probe to sample distance, solution conductivity , and the SVET probe. The signal strength in a SVET measurement is influenced by the probe to sample distance.
When all other variables are equal, a smaller probe to sample distance will result in the measurement of a higher magnitude signal. [ 17 ] The solution conductivity affects the signal strength in SVET measurements. With increasing solution conductivity, the signal strength of the SVET measurement decreases. [ 18 ] Corrosion is a major application area for SVET. SVET is used to follow the corrosion process and provide information not obtainable with any other technique. [ 19 ] In corrosion it has been used to investigate a variety of processes including, but not limited to, local corrosion, self-healing coatings, and self-assembled monolayers (SAMs). SVET has also been used to investigate the effect of different local features on the corrosion properties of a system. For example, using SVET, the influence of the grains and grain boundaries of X70 steel was measured. A difference in current densities existed between the grains and grain boundaries, with the SVET data suggesting that the grain was anodic and the boundary relatively cathodic. [ 20 ] Through the use of SVET it has been possible to investigate the effect of changing the aluminum spacer width on the galvanic coupling between steel and magnesium , a pairing which can be found on automobiles. Increasing the spacer width reduced the coupling between magnesium and steel. [ 21 ] More generally, localized corrosion processes have been followed using SVET. For a variety of systems it has been possible to use SVET to follow the corrosion front as it moves across the sample over extended periods, providing insight into the corrosion mechanism. [ 22 ] [ 23 ] [ 24 ] A number of groups have used SVET to analyze the efficiency of self-healing coatings, mapping the changes in surface activity over time. When SVET measurements of the bare metals are compared to the same metal with the smart coating, it can be seen that the current density is lower for the coated surface. Furthermore, when a defect is made in the smart coating, the current over the defect can be seen to decrease as the coating recovers. [ 25 ] [ 26 ] [ 27 ] Mekhalif et al. have performed a number of studies on SAMs formed on different metals to investigate their corrosion inhibition using SVET. The SVET studies revealed that the bare surfaces experience corrosion, with inhomogeneous activity measured by SVET. SVET was then used to investigate the effect of modification time, [ 28 ] and exposure to corrosive solution. [ 29 ] When a defect-free SAM was investigated, SVET showed homogeneous activity. [ 30 ] [ 31 ] In the field of biology the vibrating probe technique has been used to investigate a variety of processes. Vibrating probe measurements of lung cancer tumor cells have shown that the electric fields above the tumor cell were statistically larger than those measured over the intact epithelium , with the tumor cell behaving as the anode. Furthermore, it was noted that the application of an electric field resulted in the migration of the tumor cells. [ 32 ] Using vibrating probe, the electrical currents involved in the biological processes occurring at leaves have been measured. Through vibrating probe it has been possible to correlate electrical currents with the stomatal aperture, suggesting that stomatal opening was related to proton efflux.
[ 33 ] Based on this work, further vibrating probe measurements also indicated a relationship between the photosynthetic activity of a plant and the flow of electrical current on its leaf surfaces, with the measured current changing as the leaf was exposed to different types of light and to darkness. [ 34 ] [ 35 ] As a final example, the vibrating probe technique has been used in the investigation of currents associated with wounding in plants and animals. A vibrating probe measurement of maize roots found that large inward currents were associated with wounding of the root, with the current decreasing in magnitude away from the center of the wound. [ 36 ] When similar experiments were performed on rat skin wounds, large outward currents were measured at the wound, with the strongest current measured at the wound edge. [ 37 ] The ability of the vibrating probe to investigate wounding has even led to the development of a hand-held prototype vibrating probe device. [ 38 ] SVET has been used to investigate the photoconductive nature of semiconductor materials, by following changes in current density related to photoelectrochemical reactions. [ 39 ] Using SVET, the lithium/organic electrolyte interface, as found in lithium battery systems, has also been investigated. [ 40 ] Although SVET has almost exclusively been applied for the measurement of samples in aqueous environments, its application in non-aqueous environments has recently been demonstrated by Bastos et al. [ 41 ]
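As a rough illustration of the lock-in demodulation step described earlier in this article, the sketch below multiplies a simulated ac probe signal by a reference at the vibration frequency and averages to recover the dc signal. It is a minimal, assumed setup: the sampling rate, vibration frequency, and signal and noise amplitudes are all invented for illustration.

```python
# Minimal sketch (illustrative assumptions throughout): recovering the dc SVET
# signal from the ac probe signal with a software lock-in: multiply by a reference
# at the vibration frequency, then low-pass filter (here, simply average).
import numpy as np

def lock_in(signal, t, f_ref, phase_rad):
    ref = np.sin(2 * np.pi * f_ref * t + phase_rad)
    return 2.0 * np.mean(signal * ref)          # in-phase (X) component

fs, f_vib = 50_000.0, 80.0                       # sample rate and vibration frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)
true_amplitude = 3e-6                            # "local activity" signal (arbitrary units)
noise = 1e-5 * np.random.default_rng(0).normal(size=t.size)
probe = true_amplitude * np.sin(2 * np.pi * f_vib * t) + noise

print(lock_in(probe, t, f_vib, phase_rad=0.0))   # ~3e-6 despite the larger noise
```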
https://en.wikipedia.org/wiki/Scanning_vibrating_electrode_technique
The scapular line , also known as the linea scapularis, is a vertical line passing through the inferior angle of the scapula . [ 1 ] It has been used in the evaluation of brachial plexus birth palsy. [ 2 ]
https://en.wikipedia.org/wiki/Scapular_line
Scarab was a professional fraternity in the field of architecture . It was founded in 1909 at the University of Illinois at Urbana-Champaign as the first group of its type for architecture. [ 1 ] Scarab was founded on February 25, 1909, at the University of Illinois at Urbana-Champaign . [ 1 ] Its members were students of architecture, landscape architecture, or architectural engineering. [ 2 ] Annually, each chapter held an exhibition of its best work. [ 3 ] Chapters also issued a bronze or silver medal annually for excellence in architectural design in a competition that was open to any student at its institution. [ 3 ] [ 4 ] The national fraternity sponsored the annual Scarab National Competition. [ 2 ] The fraternity was governed by a supreme council that met during the annual convention. [ 2 ] Its publication was The Hieratic . It also published the Scarab Bulletin twice a year. [ 2 ] Archival materials related to Scarab are housed at Carnegie Mellon University Libraries, Rensselaer Polytechnic Institute Archives, and the University of Illinois Archives. [ 5 ] [ 6 ] [ 7 ] It is unknown when most chapters ceased operations; the mother chapter, at Illinois, ceased activity circa 1971. Scarab's chapters were called temples. [ 2 ] A list of its temples follows. [ 8 ] [ 9 ]
https://en.wikipedia.org/wiki/Scarab_(fraternity)
The Scarborough criterion is a sufficient condition for the convergence of an iterative method used to solve a system of linear equations . Analytical solutions for certain systems of equations can be difficult or impossible to obtain. A well-known example is the Navier-Stokes equations describing the flow of Newtonian fluids. Solutions of such equations can be obtained numerically , at discrete points of the solution domain (e.g. at discrete time points and points in space). Numerical solutions based on the integration of the equations at discrete control volumes of the solution domain (for example the Finite Volume Method ) result in a system of algebraic equations, one for each nodal point (corresponding to a particular control volume). These algebraic equations are usually referred to as discretised equations . The Scarborough criterion, formulated by Scarborough (1958), can be expressed in terms of the values of the coefficients of the discretised equations: the ratio of the sum of the magnitudes of the neighbour coefficients to the magnitude of the central coefficient, Σ|a nb |/|a' P |, must be at most 1 at all nodes and strictly less than 1 at at least one node. [ 1 ] [ 2 ] Here a' P is the net coefficient of an arbitrary central node P and the summation in the numerator is taken over all the neighbouring nodes. For a one-, two- and three-dimensional problem there will be two (east & west), four (east, west, south & north), and six (east, west, south, north, top & bottom) neighbours for each node, respectively. If the Scarborough criterion is not satisfied, then the Gauss–Seidel iterative procedure is not guaranteed to converge to a solution. The criterion is a sufficient condition, [ 3 ] not a necessary one. If the criterion is satisfied, the system of equations will be converged by at least one iterative method . The Scarborough criterion is therefore used as a sufficient condition for a convergent iterative method. The finite volume method uses this criterion for obtaining a convergent solution and implementing boundary conditions . If the differencing scheme produces coefficients that satisfy the above criterion, the resulting matrix of coefficients is diagonally dominant . [ 4 ] To achieve diagonal dominance we need large values of the net coefficient, so the linearisation practice of source terms should ensure that S P is always negative. If this is the case, −S P is always positive and adds to a P . Diagonal dominance is a desirable feature for satisfying the boundedness criterion. This states that, in the absence of sources, the internal nodal values of the property φ should be bounded by its boundary values. Hence, in a steady state conduction problem without sources and with boundary temperatures of 500 °C and 200 °C, all interior values of T should be less than 500 °C and greater than 200 °C. [ 2 ]
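The following is a minimal sketch, not taken from the article, of how the criterion above can be checked for a given matrix of discretised-equation coefficients. The 1-D diffusion matrix used as input is an illustrative assumption.

```python
# Minimal sketch (not from the article): checking the Scarborough criterion for a
# matrix of discretised-equation coefficients.
import numpy as np

def satisfies_scarborough(A):
    """True if sum|a_nb|/|a'_P| <= 1 for every row and < 1 for at least one row."""
    diag = np.abs(np.diag(A))
    off = np.sum(np.abs(A), axis=1) - diag
    ratios = off / diag
    return bool(np.all(ratios <= 1.0) and np.any(ratios < 1.0))

# 1-D steady diffusion with fixed boundary values: a_P = 2, neighbours = -1; at the
# boundary-adjacent nodes one neighbour link goes to the boundary (source term).
A = np.array([[ 2.0, -1.0,  0.0,  0.0],
              [-1.0,  2.0, -1.0,  0.0],
              [ 0.0, -1.0,  2.0, -1.0],
              [ 0.0,  0.0, -1.0,  2.0]])
print(satisfies_scarborough(A))   # True: the ratios are 0.5, 1, 1, 0.5
```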
https://en.wikipedia.org/wiki/Scarborough_criterion
The Scatchard equation is an equation used in molecular biology to calculate the affinity and number of binding sites of a receptor for a ligand . [ 1 ] It is named after the American chemist George Scatchard. [ 2 ] Throughout this article, [ RL ] denotes the concentration of a receptor-ligand complex, [ R ] the concentration of free receptor, and [ L ] the concentration of free ligand (so that the total concentration of the receptor and ligand are [ R ]+[ RL ] and [ L ]+[ RL ], respectively). Let n be the number of binding sites for ligand on each receptor molecule, and let n̄ represent the average number of ligands bound to a receptor. Let K d denote the dissociation constant between the ligand and receptor. The Scatchard equation is given by n̄ /[ L ] = ( n − n̄ )/ K d . By plotting n̄ /[ L ] versus n̄ , the Scatchard plot shows that the slope equals −1/ K d while the x-intercept equals the number of ligand binding sites n . When each receptor has a single ligand binding site, the system is described by the binding reaction R + L ⇌ RL, with an on-rate ( k on ) and off-rate ( k off ) related to the dissociation constant through K d = k off / k on . When the system equilibrates, k on [ R ][ L ] = k off [ RL ], so that the average number of ligands bound to each receptor is given by n̄ = [ RL ]/([ R ]+[ RL ]) = [ L ]/( K d + [ L ]), which is the Scatchard equation for n =1. When each receptor has two ligand binding sites, the system is governed by the two sequential binding equilibria R + L ⇌ RL and RL + L ⇌ RL 2 . At equilibrium, the average number of ligands bound to each receptor can be computed from these equilibria, and the result is equivalent to the Scatchard equation. For a receptor with n binding sites that independently bind to the ligand, each binding site will have an average occupancy of [ L ]/( K d + [ L ]). Hence, by considering all n binding sites, there will be, on average, n̄ = n [ L ]/( K d + [ L ]) ligands bound to each receptor, from which the Scatchard equation follows. The Scatchard method is less used nowadays because of the availability of computer programs that directly fit parameters to binding data. Mathematically, the Scatchard equation is related to the Eadie-Hofstee method , which is used to infer kinetic properties from enzyme reaction data. Many modern methods for measuring binding such as surface plasmon resonance and isothermal titration calorimetry provide additional binding parameters that are globally fit by computer-based iterative methods. [ citation needed ]
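As a quick numerical illustration of the plot described above, the sketch below fits a straight line to synthetic n̄/[L]-versus-n̄ data and reads off K_d from the slope and n from the x-intercept. It is only a sketch; the data and parameter values are invented for illustration.

```python
# Minimal sketch (synthetic data, illustrative parameters): extracting K_d and the
# number of binding sites n from a Scatchard plot, nbar/[L] = (n - nbar)/K_d, by a
# straight-line fit. Slope = -1/K_d and x-intercept = n, as stated in the text.
import numpy as np

Kd_true, n_true = 2.0e-6, 2.0                        # molar, sites per receptor
L = np.logspace(-7, -4, 25)                          # free ligand concentrations (M)
nbar = n_true * L / (Kd_true + L)                    # average occupancy per receptor

slope, intercept = np.polyfit(nbar, nbar / L, 1)     # fit nbar/[L] versus nbar
Kd_fit = -1.0 / slope
n_fit = -intercept / slope                           # x-intercept of the fitted line
print(f"K_d ~ {Kd_fit:.2e} M, n ~ {n_fit:.2f}")
```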
https://en.wikipedia.org/wiki/Scatchard_equation
In mathematical order theory , a scattered order is a linear order that contains no densely ordered subset with more than one element. [ 1 ] For example, the integers with their usual ordering form a scattered order, whereas the rational numbers do not, since they are themselves densely ordered. A characterization due to Hausdorff states that the class of all scattered orders is the smallest class of linear orders that contains the singleton orders and is closed under well-ordered and reverse well-ordered sums . Laver's theorem (generalizing a conjecture of Roland Fraïssé on countable orders) states that the embedding relation on the class of countable unions of scattered orders is a well-quasi-order . [ 2 ] The order topology of a scattered order is scattered . The converse implication does not hold, as witnessed by the lexicographic order on Q × Z {\displaystyle \mathbb {Q} \times \mathbb {Z} } .
https://en.wikipedia.org/wiki/Scattered_order
In physics, scattering is a wide range of physical processes where moving particles or radiation of some form, such as light or sound , are forced to deviate from a straight trajectory by localized non-uniformities (including particles and radiation) in the medium through which they pass. In conventional use, this also includes deviation of reflected radiation from the angle predicted by the law of reflection . Reflections of radiation that undergo scattering are often called diffuse reflections and unscattered reflections are called specular (mirror-like) reflections. Originally, the term was confined to light scattering (going back at least as far as Isaac Newton in the 17th century [ 1 ] ). As more "ray"-like phenomena were discovered, the idea of scattering was extended to them, so that William Herschel could refer to the scattering of "heat rays" (not then recognized as electromagnetic in nature) in 1800. [ 2 ] John Tyndall , a pioneer in light scattering research, noted the connection between light scattering and acoustic scattering in the 1870s. [ 3 ] Near the end of the 19th century, the scattering of cathode rays (electron beams) [ 4 ] and X-rays [ 5 ] was observed and discussed. With the discovery of subatomic particles (e.g. Ernest Rutherford in 1911 [ 6 ] ) and the development of quantum theory in the 20th century, the sense of the term became broader as it was recognized that the same mathematical frameworks used in light scattering could be applied to many other phenomena. Scattering can refer to the consequences of particle-particle collisions between molecules, atoms, electrons , photons and other particles. Examples include: cosmic ray scattering in the Earth's upper atmosphere; particle collisions inside particle accelerators ; electron scattering by gas atoms in fluorescent lamps; and neutron scattering inside nuclear reactors . [ 7 ] The types of non-uniformities which can cause scattering, sometimes known as scatterers or scattering centers , are too numerous to list, but a small sample includes particles , bubbles , droplets , density fluctuations in fluids , crystallites in polycrystalline solids, defects in monocrystalline solids, surface roughness , cells in organisms, and textile fibers in clothing. The effects of such features on the path of almost any type of propagating wave or moving particle can be described in the framework of scattering theory . Some areas where scattering and scattering theory are significant include radar sensing, medical ultrasound , semiconductor wafer inspection, polymerization process monitoring, acoustic tiling, free-space communications and computer-generated imagery . [ 8 ] Particle-particle scattering theory is important in areas such as particle physics , atomic, molecular, and optical physics , nuclear physics and astrophysics . In particle physics the quantum interaction and scattering of fundamental particles is described by the Scattering Matrix or S-Matrix , introduced and developed by John Archibald Wheeler and Werner Heisenberg . [ 9 ] Scattering is quantified using many different concepts, including scattering cross section (σ), attenuation coefficients , the bidirectional scattering distribution function (BSDF), S-matrices , and mean free path . When radiation is only scattered by one localized scattering center, this is called single scattering . It is more common that scattering centers are grouped together; in such cases, radiation may scatter many times, in what is known as multiple scattering . 
[ 11 ] The main difference between the effects of single and multiple scattering is that single scattering can usually be treated as a random phenomenon, whereas multiple scattering, somewhat counterintuitively, can be modeled as a more deterministic process because the combined results of a large number of scattering events tend to average out. Multiple scattering can thus often be modeled well with diffusion theory . [ 12 ] Because the location of a single scattering center is not usually well known relative to the path of the radiation, the outcome, which tends to depend strongly on the exact incoming trajectory, appears random to an observer. This type of scattering would be exemplified by an electron being fired at an atomic nucleus. In this case, the atom's exact position relative to the path of the electron is unknown and would be unmeasurable, so the exact trajectory of the electron after the collision cannot be predicted. Single scattering is therefore often described by probability distributions. With multiple scattering, the randomness of the interaction tends to be averaged out by a large number of scattering events, so that the final path of the radiation appears to be a deterministic distribution of intensity. This is exemplified by a light beam passing through thick fog . Multiple scattering is highly analogous to diffusion , and the terms multiple scattering and diffusion are interchangeable in many contexts. Optical elements designed to produce multiple scattering are thus known as diffusers . [ 13 ] Coherent backscattering , an enhancement of backscattering that occurs when coherent radiation is multiply scattered by a random medium, is usually attributed to weak localization . Not all single scattering is random, however. A well-controlled laser beam can be exactly positioned to scatter off a microscopic particle with a deterministic outcome, for instance. Such situations are encountered in radar scattering as well, where the targets tend to be macroscopic objects such as people or aircraft. Similarly, multiple scattering can sometimes have somewhat random outcomes, particularly with coherent radiation. The random fluctuations in the multiply scattered intensity of coherent radiation are called speckles . Speckle also occurs if multiple parts of a coherent wave scatter from different centers. In certain rare circumstances, multiple scattering may only involve a small number of interactions such that the randomness is not completely averaged out. These systems are considered to be some of the most difficult to model accurately. The description of scattering and the distinction between single and multiple scattering are tightly related to wave–particle duality . Scattering theory is a framework for studying and understanding the scattering of waves and particles . Wave scattering corresponds to the collision and scattering of a wave with some material object, for instance (sunlight) scattered by rain drops to form a rainbow . Scattering also includes the interaction of billiard balls on a table, the Rutherford scattering (or angle change) of alpha particles by gold nuclei , the Bragg scattering (or diffraction) of electrons and X-rays by a cluster of atoms, and the inelastic scattering of a fission fragment as it traverses a thin foil. 
More precisely, scattering consists of the study of how solutions of partial differential equations , propagating freely "in the distant past", come together and interact with one another or with a boundary condition , and then propagate away "to the distant future". The direct scattering problem is the problem of determining the distribution of scattered radiation/particle flux based on the characteristics of the scatterer. The inverse scattering problem is the problem of determining the characteristics of an object (e.g., its shape, internal constitution) from measurement data of radiation or particles scattered from the object. When the target is a set of many scattering centers whose relative position varies unpredictably, it is customary to think of a range equation whose arguments take different forms in different application areas. In the simplest case consider an interaction that removes particles from the "unscattered beam" at a uniform rate that is proportional to the incident number of particles per unit area per unit time ( I {\displaystyle I} ), i.e. that d I / d x = − Q I {\displaystyle dI/dx=-QI} , where Q is an interaction coefficient and x is the distance traveled in the target. The above ordinary first-order differential equation has solutions of the form: I = I o e − Q Δ x = I o e − Δ x / λ = I o e − η σ Δ x = I o e − ρ Δ x / τ {\displaystyle I=I_{o}e^{-Q\Delta x}=I_{o}e^{-\Delta x/\lambda }=I_{o}e^{-\eta \sigma \Delta x}=I_{o}e^{-\rho \Delta x/\tau }} , where I o is the initial flux, path length Δx ≡ x − x o , the second equality defines an interaction mean free path λ, the third uses the number of targets per unit volume η to define an area cross-section σ, and the last uses the target mass density ρ to define a density mean free path τ. Hence one converts between these quantities via Q = 1/ λ = ησ = ρ/τ . In electromagnetic absorption spectroscopy, for example, interaction coefficient (e.g. Q in cm −1 ) is variously called opacity , absorption coefficient , and attenuation coefficient . In nuclear physics, area cross-sections (e.g. σ in barns or units of 10 −24 cm 2 ), density mean free path (e.g. τ in grams/cm 2 ), and its reciprocal the mass attenuation coefficient (e.g. in cm 2 /gram) or area per nucleon are all popular, while in electron microscopy the inelastic mean free path [ 14 ] (e.g. λ in nanometers) is often discussed [ 15 ] instead. The term "elastic scattering" implies that the internal states of the scattering particles do not change, and hence they emerge unchanged from the scattering process. In inelastic scattering, by contrast, the particles' internal state is changed, which may amount to exciting some of the electrons of a scattering atom, or the complete annihilation of a scattering particle and the creation of entirely new particles. The example of scattering in quantum chemistry is particularly instructive, as the theory is reasonably complex while still having a good foundation on which to build an intuitive understanding. When two atoms are scattered off one another, one can understand them as being the bound state solutions of some differential equation. Thus, for example, the hydrogen atom corresponds to a solution to the Schrödinger equation with a negative inverse-power (i.e., attractive Coulombic) central potential . The scattering of two hydrogen atoms will disturb the state of each atom, resulting in one or both becoming excited, or even ionized , representing an inelastic scattering process. The term " deep inelastic scattering " refers to a special kind of scattering experiment in particle physics. In mathematics , scattering theory deals with a more abstract formulation of the same set of concepts.
For example, if a differential equation is known to have some simple, localized solutions, and the solutions are a function of a single parameter, that parameter can take the conceptual role of time . One then asks what might happen if two such solutions are set up far away from each other, in the "distant past", and are made to move towards each other, interact (under the constraint of the differential equation) and then move apart in the "future". The scattering matrix then pairs solutions in the "distant past" to those in the "distant future". Solutions to differential equations are often posed on manifolds . Frequently, the means to the solution requires the study of the spectrum of an operator on the manifold. As a result, the solutions often have a spectrum that can be identified with a Hilbert space , and scattering is described by a certain map, the S matrix , on Hilbert spaces. Solutions with a discrete spectrum correspond to bound states in quantum mechanics, while a continuous spectrum is associated with scattering states. The study of inelastic scattering then asks how discrete and continuous spectra are mixed together. An important, notable development is the inverse scattering transform , central to the solution of many exactly solvable models . In mathematical physics , scattering theory is a framework for studying and understanding the interaction or scattering of solutions to partial differential equations . In acoustics , the differential equation is the wave equation , and scattering studies how its solutions, the sound waves , scatter from solid objects or propagate through non-uniform media (such as sound waves, in sea water , coming from a submarine ). In the case of classical electrodynamics , the differential equation is again the wave equation, and the scattering of light or radio waves is studied. In particle physics , the equations are those of Quantum electrodynamics , Quantum chromodynamics and the Standard Model , the solutions of which correspond to fundamental particles . In regular quantum mechanics , which includes quantum chemistry , the relevant equation is the Schrödinger equation , although equivalent formulations, such as the Lippmann-Schwinger equation and the Faddeev equations , are also largely used. The solutions of interest describe the long-term motion of free atoms, molecules, photons, electrons, and protons. The scenario is that several particles come together from an infinite distance away. These reagents then collide, optionally reacting, getting destroyed or creating new particles. The products and unused reagents then fly away to infinity again. (The atoms and molecules are effectively particles for our purposes. Also, under everyday circumstances, only photons are being created and destroyed.) The solutions reveal which directions the products are most likely to fly off to and how quickly. They also reveal the probability of various reactions, creations, and decays occurring. There are two predominant techniques of finding solutions to scattering problems: partial wave analysis , and the Born approximation . Electromagnetic waves are one of the best known and most commonly encountered forms of radiation that undergo scattering. [ 16 ] Scattering of light and radio waves (especially in radar) is particularly important. Several different aspects of electromagnetic scattering are distinct enough to have conventional names. 
Major forms of elastic light scattering (involving negligible energy transfer) are Rayleigh scattering and Mie scattering . Inelastic scattering includes Brillouin scattering , Raman scattering , inelastic X-ray scattering and Compton scattering . Light scattering is one of the two major physical processes that contribute to the visible appearance of most objects, the other being absorption. Surfaces described as white owe their appearance to multiple scattering of light by internal or surface inhomogeneities in the object, for example by the boundaries of transparent microscopic crystals that make up a stone or by the microscopic fibers in a sheet of paper. More generally, the gloss (or lustre or sheen ) of the surface is determined by scattering. Highly scattering surfaces are described as being dull or having a matte finish, while the absence of surface scattering leads to a glossy appearance, as with polished metal or stone. Spectral absorption, the selective absorption of certain colors, determines the color of most objects with some modification by elastic scattering . The apparent blue color of veins in skin is a common example where both spectral absorption and scattering play important and complex roles in the coloration. Light scattering can also create color without absorption, often shades of blue, as with the sky (Rayleigh scattering), the human blue iris , and the feathers of some birds (Prum et al. 1998). However, resonant light scattering in nanoparticles can produce many different highly saturated and vibrant hues, especially when surface plasmon resonance is involved (Roqué et al. 2006). [ 17 ] [ 18 ] Models of light scattering can be divided into three domains based on a dimensionless size parameter, α , which is defined as: α = π D p / λ , {\displaystyle \alpha =\pi D_{\text{p}}/\lambda ,} where πD p is the circumference of a particle and λ is the wavelength of incident radiation in the medium. Based on the value of α , these domains are: α ≪ 1 ( Rayleigh scattering , for particles small compared with the wavelength), α ≈ 1 ( Mie scattering , for particles about the same size as the wavelength), and α ≫ 1 (geometric scattering, for particles much larger than the wavelength). Rayleigh scattering is a process in which electromagnetic radiation (including light) is scattered by a small spherical volume of variant refractive indexes, such as a particle, bubble, droplet, or even a density fluctuation. This effect was first modeled successfully by Lord Rayleigh , from whom it gets its name. In order for Rayleigh's model to apply, the sphere must be much smaller in diameter than the wavelength ( λ ) of the scattered wave; typically the upper limit is taken to be about 1/10 the wavelength. In this size regime, the exact shape of the scattering center is usually not very significant and can often be treated as a sphere of equivalent volume. The inherent scattering that radiation undergoes passing through a pure gas is due to microscopic density fluctuations as the gas molecules move around, which are normally small enough in scale for Rayleigh's model to apply. This scattering mechanism is the primary cause of the blue color of the Earth's sky on a clear day, as the shorter blue wavelengths of sunlight passing overhead are more strongly scattered than the longer red wavelengths according to Rayleigh's famous 1/ λ 4 relation. Along with absorption, such scattering is a major cause of the attenuation of radiation by the atmosphere . [ 19 ] The degree of scattering varies as a function of the ratio of the particle diameter to the wavelength of the radiation, along with many other factors including polarization , angle, and coherence .
[ 20 ] For larger diameters, the problem of electromagnetic scattering by spheres was first solved by Gustav Mie , and scattering by spheres larger than the Rayleigh range is therefore usually known as Mie scattering. In the Mie regime, the shape of the scattering center becomes much more significant and the theory only applies well to spheres and, with some modification, spheroids and ellipsoids . Closed-form solutions for scattering by certain other simple shapes exist, but no general closed-form solution is known for arbitrary shapes. Both Mie and Rayleigh scattering are considered elastic scattering processes, in which the energy (and thus wavelength and frequency) of the light is not substantially changed. However, electromagnetic radiation scattered by moving scattering centers does undergo a Doppler shift , which can be detected and used to measure the velocity of the scattering center/s in forms of techniques such as lidar and radar . This shift involves a slight change in energy. At values of the ratio of particle diameter to wavelength more than about 10, the laws of geometric optics are mostly sufficient to describe the interaction of light with the particle. Mie theory can still be used for these larger spheres, but the solution often becomes numerically unwieldy. For modeling of scattering in cases where the Rayleigh and Mie models do not apply such as larger, irregularly shaped particles, there are many numerical methods that can be used. The most common are finite-element methods which solve Maxwell's equations to find the distribution of the scattered electromagnetic field. Sophisticated software packages exist which allow the user to specify the refractive index or indices of the scattering feature in space, creating a 2- or sometimes 3-dimensional model of the structure. For relatively large and complex structures, these models usually require substantial execution times on a computer. Electrophoresis involves the migration of macromolecules under the influence of an electric field. [ 21 ] Electrophoretic light scattering involves passing an electric field through a liquid which makes particles move. The bigger the charge is on the particles, the faster they are able to move. [ 22 ]
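As a small numerical companion to the size parameter α = πD_p/λ and the 1/λ⁴ dependence of Rayleigh scattering quoted earlier in this article, the sketch below evaluates both. The particle sizes and wavelengths are arbitrary illustrative choices.

```python
# Minimal sketch of the size parameter alpha = pi*D_p/lambda and of Rayleigh's
# 1/lambda^4 scaling for the relative strength of scattering. Values are illustrative.
import math

def size_parameter(diameter_m, wavelength_m):
    return math.pi * diameter_m / wavelength_m

print(size_parameter(50e-9, 550e-9))       # ~0.29: Rayleigh regime for a 50 nm particle
print(size_parameter(5e-6, 550e-9))        # ~28.6: geometric-optics regime

blue, red = 450e-9, 700e-9
print((red / blue) ** 4)                   # ~5.9: blue light scattered ~6x more strongly
```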
https://en.wikipedia.org/wiki/Scattering
In quantum physics , the scattering amplitude is the probability amplitude of the outgoing spherical wave relative to the incoming plane wave in a stationary-state scattering process . [ 1 ] Scattering in quantum mechanics begins with a physical model based on the Schrödinger wave equation for probability amplitude ψ {\displaystyle \psi } : − ℏ 2 2 μ ∇ 2 ψ + V ψ = E ψ {\displaystyle -{\frac {\hbar ^{2}}{2\mu }}\nabla ^{2}\psi +V\psi =E\psi } where μ {\displaystyle \mu } is the reduced mass of two scattering particles and E is the energy of relative motion. For scattering problems, a stationary (time-independent) wavefunction is sought with behavior at large distances ( asymptotic form) in two parts. First a plane wave represents the incoming source and, second, a spherical wave emanating from the scattering center placed at the coordinate origin represents the scattered wave: [ 2 ] : 114 ψ ( r → ∞ ) ∼ e i k i ⋅ r + f ( k f , k i ) e i k f ⋅ r r {\displaystyle \psi (r\rightarrow \infty )\sim e^{i\mathbf {k} _{i}\cdot \mathbf {r} }+f(\mathbf {k} _{f},\mathbf {k} _{i}){\frac {e^{i\mathbf {k} _{f}\cdot \mathbf {r} }}{r}}} The scattering amplitude, f ( k f , k i ) {\displaystyle f(\mathbf {k} _{f},\mathbf {k} _{i})} , represents the amplitude that the target will scatter into the direction k f {\displaystyle \mathbf {k} _{f}} . [ 3 ] : 194 In general the scattering amplitude requires knowing the full scattering wavefunction: f ( k f , k i ) = − μ 2 π ℏ 2 ∫ ψ f ∗ V ( r ) ψ i d 3 r {\displaystyle f(\mathbf {k} _{f},\mathbf {k} _{i})=-{\frac {\mu }{2\pi \hbar ^{2}}}\int \psi _{f}^{*}V(\mathbf {r} )\psi _{i}d^{3}r} For weak interactions a perturbation series can be applied; the lowest order is called the Born approximation . For a spherically symmetric scattering center, the asymptotic wavefunction takes the form [ 4 ] ψ ( r ) = e i k z + f ( θ ) e i k r r {\displaystyle \psi (\mathbf {r} )=e^{ikz}+f(\theta ){\frac {e^{ikr}}{r}}} where r ≡ ( x , y , z ) {\displaystyle \mathbf {r} \equiv (x,y,z)} is the position vector; r ≡ | r | {\displaystyle r\equiv |\mathbf {r} |} ; e i k z {\displaystyle e^{ikz}} is the incoming plane wave with the wavenumber k along the z axis; e i k r / r {\displaystyle e^{ikr}/r} is the outgoing spherical wave; θ is the scattering angle (angle between the incident and scattered direction); and f ( θ ) {\displaystyle f(\theta )} is the scattering amplitude. The dimension of the scattering amplitude is length . The scattering amplitude is a probability amplitude ; the differential cross-section as a function of scattering angle is given as its modulus squared, d σ / d Ω = | f ( θ ) | 2 {\displaystyle d\sigma /d\Omega =|f(\theta )|^{2}} . When conservation of number of particles holds true during scattering, it leads to a unitary condition for the scattering amplitude. In the general case this takes the form of an integral relation between the amplitude and its complex conjugate, [ 4 ] and the optical theorem follows from it by setting n = n ′ . {\displaystyle \mathbf {n} =\mathbf {n} '.} In the centrally symmetric field, the unitary condition becomes a corresponding relation in which γ {\displaystyle \gamma } and γ ′ {\displaystyle \gamma '} are the angles between n {\displaystyle \mathbf {n} } and n ′ {\displaystyle \mathbf {n} '} and some direction n ″ {\displaystyle \mathbf {n} ''} . This condition puts a constraint on the allowed form for f ( θ ) {\displaystyle f(\theta )} , i.e., the real and imaginary part of the scattering amplitude are not independent in this case.
For example, if | f ( θ ) | {\displaystyle |f(\theta )|} in f = | f | e 2 i α {\displaystyle f=|f|e^{2i\alpha }} is known (say, from the measurement of the cross section), then α ( θ ) {\displaystyle \alpha (\theta )} can be determined such that f ( θ ) {\displaystyle f(\theta )} is uniquely determined within the alternative f ( θ ) → − f ∗ ( θ ) {\displaystyle f(\theta )\rightarrow -f^{*}(\theta )} . [ 4 ] In the partial wave expansion the scattering amplitude is represented as a sum over the partial waves, [ 5 ] f ( θ ) = ∑ ℓ = 0 ∞ ( 2 ℓ + 1 ) f ℓ P ℓ ( cos ⁡ θ ) {\displaystyle f(\theta )=\sum _{\ell =0}^{\infty }(2\ell +1)f_{\ell }P_{\ell }(\cos \theta )} , where f ℓ is the partial scattering amplitude and P ℓ are the Legendre polynomials . The partial amplitude can be expressed via the partial wave S-matrix element S ℓ ( = e 2 i δ ℓ {\displaystyle =e^{2i\delta _{\ell }}} ) and the scattering phase shift δ ℓ as f ℓ = S ℓ − 1 2 i k = e i δ ℓ sin ⁡ δ ℓ k {\displaystyle f_{\ell }={\frac {S_{\ell }-1}{2ik}}={\frac {e^{i\delta _{\ell }}\sin \delta _{\ell }}{k}}} . Then the total cross section [ 6 ] can be expanded as [ 4 ] σ = ∑ ℓ σ ℓ {\displaystyle \sigma =\sum _{\ell }\sigma _{\ell }} , where σ ℓ = 4 π k 2 ( 2 ℓ + 1 ) sin 2 ⁡ δ ℓ {\displaystyle \sigma _{\ell }={\frac {4\pi }{k^{2}}}(2\ell +1)\sin ^{2}\delta _{\ell }} is the partial cross section. The total cross section is also equal to σ = ( 4 π / k ) I m f ( 0 ) {\displaystyle \sigma =(4\pi /k)\,\mathrm {Im} f(0)} due to the optical theorem . For θ ≠ 0 {\displaystyle \theta \neq 0} , a corresponding expression for the scattering amplitude in terms of the phase shifts can be written. [ 4 ] The scattering length for X-rays is the Thomson scattering length or classical electron radius , r 0 . The nuclear neutron scattering process involves the coherent neutron scattering length, often described by b . A quantum mechanical approach is given by the S matrix formalism. The scattering amplitude can be determined by the scattering length in the low-energy regime.
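The sketch below evaluates the partial-wave formulas quoted above for a set of made-up phase shifts and checks the result against the optical theorem. It is a numerical illustration only; the wave number and phase-shift values are assumptions.

```python
# Minimal sketch of the partial-wave formulas above: total cross section
# sigma = (4*pi/k^2) * sum_l (2l+1) sin^2(delta_l), checked against the optical
# theorem sigma = (4*pi/k) * Im f(0). Phase shifts and k are illustrative.
import numpy as np

def amplitude_forward(k, deltas):
    """f(0) from f_l = exp(i*delta_l)*sin(delta_l)/k and P_l(1) = 1."""
    l = np.arange(len(deltas))
    return np.sum((2 * l + 1) * np.exp(1j * deltas) * np.sin(deltas)) / k

def total_cross_section(k, deltas):
    l = np.arange(len(deltas))
    return 4 * np.pi / k**2 * np.sum((2 * l + 1) * np.sin(deltas) ** 2)

k = 1.2                                   # wave number (arbitrary units)
deltas = np.array([0.8, 0.3, 0.05])       # illustrative s-, p-, d-wave phase shifts

print(total_cross_section(k, deltas))
print(4 * np.pi / k * amplitude_forward(k, deltas).imag)   # same value (optical theorem)
```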
https://en.wikipedia.org/wiki/Scattering_amplitude
In scattering theory , a scattering channel is a quantum state of the colliding system before or after the collision ( t → ± ∞ {\displaystyle t\to \pm \infty } ). The Hilbert space spanned by the states before the collision (in states) is equal to the space spanned by the states after the collision (out states); both are Fock spaces if there is a mass gap . This is the reason why the S matrix , which maps the in states onto the out states, must be unitary . Scattering channels are also called scattering asymptotes . The Møller operators map the scattering channels onto the corresponding states that are solutions of the Schrödinger equation taking the interaction Hamiltonian into account. The Møller operators are isometric .
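Purely as a toy illustration of the unitarity requirement stated above, the sketch below builds a two-channel S matrix from a mixing angle and two phase shifts and checks that it maps a normalized in state to a normalized out state. The construction and all numbers are assumptions, unrelated to any particular physical system.

```python
# Minimal toy sketch (illustrative assumptions only): a unitary two-channel S matrix
# preserves the norm when mapping "in" states to "out" states.
import numpy as np

theta, d1, d2 = 0.4, 0.7, -0.2                          # made-up mixing angle and phases
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = U @ np.diag([np.exp(2j * d1), np.exp(2j * d2)]) @ U.T

print(np.allclose(S.conj().T @ S, np.eye(2)))           # True: S is unitary
psi_in = np.array([0.6, 0.8])                           # normalized in state
psi_out = S @ psi_in
print(np.linalg.norm(psi_in), np.linalg.norm(psi_out))  # both equal 1.0
```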
https://en.wikipedia.org/wiki/Scattering_channel
Surface roughness scattering or interface roughness scattering is the elastic scattering of particles against a rough solid surface or an imperfect interface between two different materials. This effect has been observed in classical systems, such as microparticle scattering, [ 1 ] as well as quantum systems , where it arises in electronic devices such as field effect transistors and quantum cascade lasers . [ 2 ] In the classical mechanics framework, a rough surface, such as a machined metal surface, randomizes the probability distribution function governing the incoming particles, leading to net momentum loss of the particle flux. [ 3 ] In the quantum mechanical framework, this scattering is most noticeable in confined systems, in which the energies for charge carriers are determined by the locations of interfaces. An example of such a system is a quantum well , which may be constructed from a sandwich of different layers of semiconductor. Variations in the thickness of these layers therefore cause the energy of particles to be dependent on their in-plane location in the layer. [ 4 ] Classification of the roughness at a given position, Δ z ( r ) {\displaystyle \Delta _{z}(\mathbf {r} )} , is complex, but as in the classical models, it has been modeled as a Gaussian distribution by some researchers. [ 5 ] This assumption may be formulated in terms of the ensemble average for some given characteristic height, Δ {\displaystyle \Delta } , and correlation length, Λ {\displaystyle \Lambda } , such that ⟨ Δ z ( r ) Δ z ( r ′ ) ⟩ = Δ 2 e − | r − r ′ | 2 / Λ 2 {\displaystyle \langle \Delta _{z}(\mathbf {r} )\Delta _{z}(\mathbf {r} ')\rangle =\Delta ^{2}e^{-|\mathbf {r} -\mathbf {r} '|^{2}/\Lambda ^{2}}} . Selective scattering: In selective scattering, the scattering depends upon the wavelength of light. [ citation needed ] Mie scattering: Mie theory can describe how electromagnetic waves interact with homogeneous spherical particles. However, a theory for homogeneous spheres will completely fail to predict polarization effects. [ 6 ] [ 7 ] When the size of the molecules is greater than the wavelength of light, the result is a non-uniform scattering of light. [ citation needed ] Lambertian scattering: This type of scattering occurs when a surface has microscopic irregularities that scatter light perfectly uniformly in all directions, causing it to appear equally bright from all viewing angles. Subsurface scattering: This type of scattering occurs when light scatters within a material before exiting the surface at a different point. Isotropic crystal scattering (aka powder diffraction ): This type of scattering occurs when every crystalline orientation is represented equally in a powdered sample. Powder X-ray diffraction (PXRD) operates under the assumption that the sample is randomly arranged such that each plane will be represented in the signal.
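The sketch below generates a one-dimensional random roughness profile with Gaussian height statistics of rms height Δ and an approximately Gaussian autocorrelation of length Λ, by smoothing white noise with a Gaussian kernel and rescaling. It is an illustrative construction, not taken from the article, and the grid size and parameter values are assumptions.

```python
# Minimal sketch (illustrative, not from the article): a 1-D rough profile with
# Gaussian height statistics (rms height delta) and autocorrelation ~ exp(-x^2/lam^2).
import numpy as np

def rough_profile(n_points, dx, delta, lam, seed=0):
    rng = np.random.default_rng(seed)
    white = rng.normal(size=n_points)
    x = (np.arange(n_points) - n_points // 2) * dx
    kernel = np.exp(-2 * x**2 / lam**2)           # width chosen so the autocorrelation
    profile = np.convolve(white, kernel, mode="same")  # of the result is ~exp(-x^2/lam^2)
    return delta * profile / profile.std()        # rescale to the target rms height

z = rough_profile(n_points=4096, dx=0.1e-9, delta=0.3e-9, lam=5e-9)
print(z.std())                                    # ~0.3 nm rms roughness
```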
https://en.wikipedia.org/wiki/Scattering_from_rough_surfaces
The scattering length in quantum mechanics describes low-energy scattering . For potentials that decay faster than 1 / r 3 {\displaystyle 1/r^{3}} as r → ∞ {\displaystyle r\to \infty } , it is defined as the following low-energy limit : lim k → 0 k cot ⁡ δ ( k ) = − 1 a , {\displaystyle \lim _{k\to 0}k\cot \delta (k)=-{\frac {1}{a}},} where a {\displaystyle a} is the scattering length, k {\displaystyle k} is the wave number , and δ ( k ) {\displaystyle \delta (k)} is the phase shift of the outgoing spherical wave. The elastic cross section , σ e {\displaystyle \sigma _{e}} , at low energies is determined solely by the scattering length: σ e = 4 π a 2 {\displaystyle \sigma _{e}=4\pi a^{2}} as k → 0 {\displaystyle k\to 0} . When a slow particle scatters off a short-ranged scatterer (e.g. an impurity in a solid or a heavy particle) it cannot resolve the structure of the object since its de Broglie wavelength is very long. The idea is that then it should not be important what precise potential V ( r ) {\displaystyle V(r)} one scatters off, but only how the potential looks at long length scales. The formal way to solve this problem is to do a partial wave expansion (somewhat analogous to the multipole expansion in classical electrodynamics ), where one expands in the angular momentum components of the outgoing wave. At very low energy the incoming particle does not see any structure, therefore to lowest order one has only a spherical outgoing wave, called the s-wave in analogy with the atomic orbital at angular momentum quantum number l =0. At higher energies one also needs to consider p and d-wave ( l =1,2) scattering and so on. The idea of describing low energy properties in terms of a few parameters and symmetries is very powerful, and is also behind the concept of renormalization . The concept of the scattering length can also be extended to potentials that decay slower than 1 / r 3 {\displaystyle 1/r^{3}} as r → ∞ {\displaystyle r\to \infty } . A famous example, relevant for proton-proton scattering, is the Coulomb-modified scattering length. As an example of how to compute the s-wave (i.e. angular momentum l = 0 {\displaystyle l=0} ) scattering length for a given potential we look at the infinitely repulsive spherical potential well of radius r 0 {\displaystyle r_{0}} in 3 dimensions. The radial Schrödinger equation ( l = 0 {\displaystyle l=0} ) outside of the well is just the same as for a free particle: − ℏ 2 2 m u ″ ( r ) = E u ( r ) , {\displaystyle -{\frac {\hbar ^{2}}{2m}}u''(r)=Eu(r),} where the hard core potential requires that the wave function u ( r ) {\displaystyle u(r)} vanishes at r = r 0 {\displaystyle r=r_{0}} , u ( r 0 ) = 0 {\displaystyle u(r_{0})=0} . The solution is readily found: u ( r ) = A sin ⁡ ( k r + δ s ) . {\displaystyle u(r)=A\sin(kr+\delta _{s}).} Here k = 2 m E / ℏ {\displaystyle k={\sqrt {2mE}}/\hbar } and δ s = − k ⋅ r 0 {\displaystyle \delta _{s}=-k\cdot r_{0}} is the s-wave phase shift (the phase difference between incoming and outgoing wave), which is fixed by the boundary condition u ( r 0 ) = 0 {\displaystyle u(r_{0})=0} ; A {\displaystyle A} is an arbitrary normalization constant. One can show that in general δ s ( k ) ≈ − k ⋅ a s + O ( k 2 ) {\displaystyle \delta _{s}(k)\approx -k\cdot a_{s}+O(k^{2})} for small k {\displaystyle k} (i.e. low energy scattering). The parameter a s {\displaystyle a_{s}} of dimension length is defined as the scattering length . For our potential we have therefore a = r 0 {\displaystyle a=r_{0}} , in other words the scattering length for a hard sphere is just the radius. (Alternatively one could say that an arbitrary potential with s-wave scattering length a s {\displaystyle a_{s}} has the same low energy scattering properties as a hard sphere of radius a s {\displaystyle a_{s}} .)
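To make the hard-sphere example concrete, the short sketch below evaluates the phase shift δ_s = −k·r_0 at progressively lower wave numbers and checks that −δ_s/k reproduces the sphere radius, i.e. the scattering length. The helper names and numerical values are illustrative only.

```python
def s_wave_phase_shift(k, r0):
    """s-wave phase shift delta_s = -k*r0 for an impenetrable sphere of radius r0."""
    return -k * r0

def scattering_length_estimate(k, r0):
    """Low-energy estimate a_s ~ -delta_s / k (exact at every k for the hard sphere)."""
    return -s_wave_phase_shift(k, r0) / k

r0 = 1.0                        # sphere radius in arbitrary length units
for k in (0.5, 0.1, 0.01):      # progressively lower wave numbers
    print(k, scattering_length_estimate(k, r0))   # prints 1.0 each time: a_s = r0
```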
To relate the scattering length to physical observables that can be measured in a scattering experiment we need to compute the cross section σ {\displaystyle \sigma } . In scattering theory one writes the asymptotic wavefunction as (we assume there is a finite-ranged scatterer at the origin and there is an incoming plane wave along the z {\displaystyle z} -axis): ψ ( r , θ ) ≈ e i k z + f ( θ ) e i k r r , {\displaystyle \psi (r,\theta )\approx e^{ikz}+f(\theta ){\frac {e^{ikr}}{r}},} where f {\displaystyle f} is the scattering amplitude . According to the probability interpretation of quantum mechanics the differential cross section is given by d σ / d Ω = | f ( θ ) | 2 {\displaystyle d\sigma /d\Omega =|f(\theta )|^{2}} (the probability per unit time to scatter into the direction k {\displaystyle \mathbf {k} } ). If we consider only s-wave scattering the differential cross section does not depend on the angle θ {\displaystyle \theta } , and the total scattering cross section is just σ = 4 π | f | 2 {\displaystyle \sigma =4\pi |f|^{2}} . The s-wave part of the wavefunction ψ ( r , θ ) {\displaystyle \psi (r,\theta )} is projected out by using the standard expansion of a plane wave in terms of spherical waves and Legendre polynomials P l ( cos ⁡ θ ) {\displaystyle P_{l}(\cos \theta )} : e i k z = ∑ l = 0 ∞ ( 2 l + 1 ) i l j l ( k r ) P l ( cos ⁡ θ ) . {\displaystyle e^{ikz}=\sum _{l=0}^{\infty }(2l+1)i^{l}j_{l}(kr)P_{l}(\cos \theta ).} By matching the l = 0 {\displaystyle l=0} component of ψ ( r , θ ) {\displaystyle \psi (r,\theta )} to the s-wave solution ψ ( r ) = A sin ⁡ ( k r + δ s ) / r {\displaystyle \psi (r)=A\sin(kr+\delta _{s})/r} (where we normalize A {\displaystyle A} such that the incoming wave e i k z {\displaystyle e^{ikz}} has a prefactor of unity) one has: f = e i δ s sin ⁡ δ s k . {\displaystyle f={\frac {e^{i\delta _{s}}\sin \delta _{s}}{k}}.} This gives: σ = 4 π k 2 sin 2 ⁡ δ s = 4 π a s 2 {\displaystyle \sigma ={\frac {4\pi }{k^{2}}}\sin ^{2}\delta _{s}=4\pi a_{s}^{2}}
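The closing relation can be checked numerically: the sketch below evaluates σ = (4π/k²)·sin²δ_s for a hard sphere, with δ_s = −k·a_s, and compares it with the low-energy value 4π·a_s². Parameter values and function names are illustrative.

```python
import math

def total_s_wave_cross_section(k, delta_s):
    """Total cross section from the s-wave phase shift: sigma = (4*pi/k**2) * sin(delta_s)**2."""
    return 4.0 * math.pi / k**2 * math.sin(delta_s) ** 2

a_s = 1.0                       # hard-sphere scattering length = sphere radius
for k in (0.5, 0.1, 0.01):
    sigma = total_s_wave_cross_section(k, delta_s=-k * a_s)
    print(k, sigma, 4.0 * math.pi * a_s**2)   # sigma approaches 4*pi*a_s**2 as k -> 0
```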
https://en.wikipedia.org/wiki/Scattering_length
In physics, scattering is a wide range of physical processes where moving particles or radiation of some form, such as light or sound , are forced to deviate from a straight trajectory by localized non-uniformities (including particles and radiation) in the medium through which they pass. In conventional use, this also includes deviation of reflected radiation from the angle predicted by the law of reflection . Reflections of radiation that undergo scattering are often called diffuse reflections and unscattered reflections are called specular (mirror-like) reflections. Originally, the term was confined to light scattering (going back at least as far as Isaac Newton in the 17th century [ 1 ] ). As more "ray"-like phenomena were discovered, the idea of scattering was extended to them, so that William Herschel could refer to the scattering of "heat rays" (not then recognized as electromagnetic in nature) in 1800. [ 2 ] John Tyndall , a pioneer in light scattering research, noted the connection between light scattering and acoustic scattering in the 1870s. [ 3 ] Near the end of the 19th century, the scattering of cathode rays (electron beams) [ 4 ] and X-rays [ 5 ] was observed and discussed. With the discovery of subatomic particles (e.g. Ernest Rutherford in 1911 [ 6 ] ) and the development of quantum theory in the 20th century, the sense of the term became broader as it was recognized that the same mathematical frameworks used in light scattering could be applied to many other phenomena. Scattering can refer to the consequences of particle-particle collisions between molecules, atoms, electrons , photons and other particles. Examples include: cosmic ray scattering in the Earth's upper atmosphere; particle collisions inside particle accelerators ; electron scattering by gas atoms in fluorescent lamps; and neutron scattering inside nuclear reactors . [ 7 ] The types of non-uniformities which can cause scattering, sometimes known as scatterers or scattering centers , are too numerous to list, but a small sample includes particles , bubbles , droplets , density fluctuations in fluids , crystallites in polycrystalline solids, defects in monocrystalline solids, surface roughness , cells in organisms, and textile fibers in clothing. The effects of such features on the path of almost any type of propagating wave or moving particle can be described in the framework of scattering theory . Some areas where scattering and scattering theory are significant include radar sensing, medical ultrasound , semiconductor wafer inspection, polymerization process monitoring, acoustic tiling, free-space communications and computer-generated imagery . [ 8 ] Particle-particle scattering theory is important in areas such as particle physics , atomic, molecular, and optical physics , nuclear physics and astrophysics . In particle physics the quantum interaction and scattering of fundamental particles is described by the Scattering Matrix or S-Matrix , introduced and developed by John Archibald Wheeler and Werner Heisenberg . [ 9 ] Scattering is quantified using many different concepts, including scattering cross section (σ), attenuation coefficients , the bidirectional scattering distribution function (BSDF), S-matrices , and mean free path . When radiation is only scattered by one localized scattering center, this is called single scattering . It is more common that scattering centers are grouped together; in such cases, radiation may scatter many times, in what is known as multiple scattering . 
[ 11 ] The main difference between the effects of single and multiple scattering is that single scattering can usually be treated as a random phenomenon, whereas multiple scattering, somewhat counterintuitively, can be modeled as a more deterministic process because the combined results of a large number of scattering events tend to average out. Multiple scattering can thus often be modeled well with diffusion theory . [ 12 ] Because the location of a single scattering center is not usually well known relative to the path of the radiation, the outcome, which tends to depend strongly on the exact incoming trajectory, appears random to an observer. This type of scattering would be exemplified by an electron being fired at an atomic nucleus. In this case, the atom's exact position relative to the path of the electron is unknown and would be unmeasurable, so the exact trajectory of the electron after the collision cannot be predicted. Single scattering is therefore often described by probability distributions. With multiple scattering, the randomness of the interaction tends to be averaged out by a large number of scattering events, so that the final path of the radiation appears to be a deterministic distribution of intensity. This is exemplified by a light beam passing through thick fog . Multiple scattering is highly analogous to diffusion , and the terms multiple scattering and diffusion are interchangeable in many contexts. Optical elements designed to produce multiple scattering are thus known as diffusers . [ 13 ] Coherent backscattering , an enhancement of backscattering that occurs when coherent radiation is multiply scattered by a random medium, is usually attributed to weak localization . Not all single scattering is random, however. A well-controlled laser beam can be exactly positioned to scatter off a microscopic particle with a deterministic outcome, for instance. Such situations are encountered in radar scattering as well, where the targets tend to be macroscopic objects such as people or aircraft. Similarly, multiple scattering can sometimes have somewhat random outcomes, particularly with coherent radiation. The random fluctuations in the multiply scattered intensity of coherent radiation are called speckles . Speckle also occurs if multiple parts of a coherent wave scatter from different centers. In certain rare circumstances, multiple scattering may only involve a small number of interactions such that the randomness is not completely averaged out. These systems are considered to be some of the most difficult to model accurately. The description of scattering and the distinction between single and multiple scattering are tightly related to wave–particle duality . Scattering theory is a framework for studying and understanding the scattering of waves and particles . Wave scattering corresponds to the collision and scattering of a wave with some material object, for instance (sunlight) scattered by rain drops to form a rainbow . Scattering also includes the interaction of billiard balls on a table, the Rutherford scattering (or angle change) of alpha particles by gold nuclei , the Bragg scattering (or diffraction) of electrons and X-rays by a cluster of atoms, and the inelastic scattering of a fission fragment as it traverses a thin foil. 
More precisely, scattering consists of the study of how solutions of partial differential equations , propagating freely "in the distant past", come together and interact with one another or with a boundary condition , and then propagate away "to the distant future". The direct scattering problem is the problem of determining the distribution of scattered radiation/particle flux based on the characteristics of the scatterer. The inverse scattering problem is the problem of determining the characteristics of an object (e.g., its shape, internal constitution) from measurement data of radiation or particles scattered from the object. When the target is a set of many scattering centers whose relative position varies unpredictably, it is customary to think of a range equation whose arguments take different forms in different application areas. In the simplest case consider an interaction that removes particles from the "unscattered beam" at a uniform rate that is proportional to the incident number of particles per unit area per unit time ( I {\displaystyle I} ), i.e. that d I d x = − Q I , {\displaystyle {\frac {dI}{dx}}=-QI,} where Q is an interaction coefficient and x is the distance traveled in the target. The above ordinary first-order differential equation has solutions of the form: I = I o e − Q Δ x = I o e − Δ x / λ = I o e − η σ Δ x = I o e − ρ Δ x / τ , {\displaystyle I=I_{o}e^{-Q\,\Delta x}=I_{o}e^{-\Delta x/\lambda }=I_{o}e^{-\eta \sigma \,\Delta x}=I_{o}e^{-\rho \,\Delta x/\tau },} where I o is the initial flux, path length Δx ≡ x − x o , the second equality defines an interaction mean free path λ, the third uses the number of targets per unit volume η to define an area cross-section σ, and the last uses the target mass density ρ to define a density mean free path τ. Hence one converts between these quantities via Q = 1/ λ = ησ = ρ/τ . In electromagnetic absorption spectroscopy, for example, the interaction coefficient (e.g. Q in cm −1 ) is variously called opacity , absorption coefficient , and attenuation coefficient . In nuclear physics, area cross-sections (e.g. σ in barns or units of 10 −24 cm 2 ), density mean free path (e.g. τ in grams/cm 2 ), and its reciprocal the mass attenuation coefficient (e.g. in cm 2 /gram) or area per nucleon are all popular, while in electron microscopy the inelastic mean free path [ 14 ] (e.g. λ in nanometers) is often discussed [ 15 ] instead. The term "elastic scattering" implies that the internal states of the scattering particles do not change, and hence they emerge unchanged from the scattering process. In inelastic scattering, by contrast, the particles' internal state is changed, which may amount to exciting some of the electrons of a scattering atom, or the complete annihilation of a scattering particle and the creation of entirely new particles. The example of scattering in quantum chemistry is particularly instructive, as the theory is reasonably complex while still having a good foundation on which to build an intuitive understanding. When two atoms are scattered off one another, one can understand them as being the bound state solutions of some differential equation. Thus, for example, the hydrogen atom corresponds to a solution to the Schrödinger equation with a negative inverse-power (i.e., attractive Coulombic) central potential . The scattering of two hydrogen atoms will disturb the state of each atom, resulting in one or both becoming excited, or even ionized , representing an inelastic scattering process. The term " deep inelastic scattering " refers to a special kind of scattering experiment in particle physics. In mathematics , scattering theory deals with a more abstract formulation of the same set of concepts.
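The range equation and the equivalent expressions for the interaction coefficient can be illustrated with a short numerical sketch; the material parameters below are invented for the example and the helper names are not from the article.

```python
import math

def transmitted_flux(I0, Q, dx):
    """Unscattered flux after a path length dx, from dI/dx = -Q*I."""
    return I0 * math.exp(-Q * dx)

# Equivalent ways of stating the same interaction strength (illustrative values):
eta = 1.0e22        # targets per cm^3
sigma = 2.0e-24     # cross section per target in cm^2 (2 barns)
Q = eta * sigma     # interaction coefficient in 1/cm
lam = 1.0 / Q       # interaction mean free path in cm
rho = 2.0           # target mass density in g/cm^3
tau = rho / Q       # density mean free path in g/cm^2

print(Q, lam, tau)                        # 0.02 /cm, 50 cm, 100 g/cm^2
print(transmitted_flux(1.0, Q, 10.0))     # fraction of the beam surviving 10 cm of target
```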
For example, if a differential equation is known to have some simple, localized solutions, and the solutions are a function of a single parameter, that parameter can take the conceptual role of time . One then asks what might happen if two such solutions are set up far away from each other, in the "distant past", and are made to move towards each other, interact (under the constraint of the differential equation) and then move apart in the "future". The scattering matrix then pairs solutions in the "distant past" to those in the "distant future". Solutions to differential equations are often posed on manifolds . Frequently, the means to the solution requires the study of the spectrum of an operator on the manifold. As a result, the solutions often have a spectrum that can be identified with a Hilbert space , and scattering is described by a certain map, the S matrix , on Hilbert spaces. Solutions with a discrete spectrum correspond to bound states in quantum mechanics, while a continuous spectrum is associated with scattering states. The study of inelastic scattering then asks how discrete and continuous spectra are mixed together. An important, notable development is the inverse scattering transform , central to the solution of many exactly solvable models . In mathematical physics , scattering theory is a framework for studying and understanding the interaction or scattering of solutions to partial differential equations . In acoustics , the differential equation is the wave equation , and scattering studies how its solutions, the sound waves , scatter from solid objects or propagate through non-uniform media (such as sound waves, in sea water , coming from a submarine ). In the case of classical electrodynamics , the differential equation is again the wave equation, and the scattering of light or radio waves is studied. In particle physics , the equations are those of Quantum electrodynamics , Quantum chromodynamics and the Standard Model , the solutions of which correspond to fundamental particles . In regular quantum mechanics , which includes quantum chemistry , the relevant equation is the Schrödinger equation , although equivalent formulations, such as the Lippmann-Schwinger equation and the Faddeev equations , are also largely used. The solutions of interest describe the long-term motion of free atoms, molecules, photons, electrons, and protons. The scenario is that several particles come together from an infinite distance away. These reagents then collide, optionally reacting, getting destroyed or creating new particles. The products and unused reagents then fly away to infinity again. (The atoms and molecules are effectively particles for our purposes. Also, under everyday circumstances, only photons are being created and destroyed.) The solutions reveal which directions the products are most likely to fly off to and how quickly. They also reveal the probability of various reactions, creations, and decays occurring. There are two predominant techniques of finding solutions to scattering problems: partial wave analysis , and the Born approximation . Electromagnetic waves are one of the best known and most commonly encountered forms of radiation that undergo scattering. [ 16 ] Scattering of light and radio waves (especially in radar) is particularly important. Several different aspects of electromagnetic scattering are distinct enough to have conventional names. 
Major forms of elastic light scattering (involving negligible energy transfer) are Rayleigh scattering and Mie scattering . Inelastic scattering includes Brillouin scattering , Raman scattering , inelastic X-ray scattering and Compton scattering . Light scattering is one of the two major physical processes that contribute to the visible appearance of most objects, the other being absorption. Surfaces described as white owe their appearance to multiple scattering of light by internal or surface inhomogeneities in the object, for example by the boundaries of transparent microscopic crystals that make up a stone or by the microscopic fibers in a sheet of paper. More generally, the gloss (or lustre or sheen ) of the surface is determined by scattering. Highly scattering surfaces are described as being dull or having a matte finish, while the absence of surface scattering leads to a glossy appearance, as with polished metal or stone. Spectral absorption, the selective absorption of certain colors, determines the color of most objects with some modification by elastic scattering . The apparent blue color of veins in skin is a common example where both spectral absorption and scattering play important and complex roles in the coloration. Light scattering can also create color without absorption, often shades of blue, as with the sky (Rayleigh scattering), the human blue iris , and the feathers of some birds (Prum et al. 1998). However, resonant light scattering in nanoparticles can produce many different highly saturated and vibrant hues, especially when surface plasmon resonance is involved (Roqué et al. 2006). [ 17 ] [ 18 ] Models of light scattering can be divided into three domains based on a dimensionless size parameter, α , which is defined as: α = π D p / λ , {\displaystyle \alpha =\pi D_{\text{p}}/\lambda ,} where πD p is the circumference of a particle and λ is the wavelength of incident radiation in the medium. Based on the value of α , these domains are: α ≪ 1 (Rayleigh scattering, particle small compared to the wavelength), α ≈ 1 (Mie scattering, particle about the same size as the wavelength), and α ≫ 1 (geometric scattering, particle much larger than the wavelength). Rayleigh scattering is a process in which electromagnetic radiation (including light) is scattered by a small spherical volume of variant refractive indexes, such as a particle, bubble, droplet, or even a density fluctuation. This effect was first modeled successfully by Lord Rayleigh , from whom it gets its name. In order for Rayleigh's model to apply, the sphere must be much smaller in diameter than the wavelength ( λ ) of the scattered wave; typically the upper limit is taken to be about 1/10 the wavelength. In this size regime, the exact shape of the scattering center is usually not very significant and can often be treated as a sphere of equivalent volume. The inherent scattering that radiation undergoes passing through a pure gas is due to microscopic density fluctuations as the gas molecules move around, which are normally small enough in scale for Rayleigh's model to apply. This scattering mechanism is the primary cause of the blue color of the Earth's sky on a clear day, as the shorter blue wavelengths of sunlight passing overhead are more strongly scattered than the longer red wavelengths according to Rayleigh's famous 1/ λ 4 relation. Along with absorption, such scattering is a major cause of the attenuation of radiation by the atmosphere . [ 19 ] The degree of scattering varies as a function of the ratio of the particle diameter to the wavelength of the radiation, along with many other factors including polarization , angle, and coherence .
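The size-parameter classification can be written out explicitly. The sketch below computes α = πD_p/λ and assigns the conventional regime; the numeric cutoffs (roughly α below about 0.1 for Rayleigh and above about 10 for geometric optics) are common rules of thumb echoing the 1/10-wavelength and ratio-of-about-10 limits mentioned in the text, not exact boundaries.

```python
import math

def size_parameter(diameter, wavelength):
    """Dimensionless size parameter alpha = pi * D_p / lambda (same length units)."""
    return math.pi * diameter / wavelength

def scattering_regime(alpha, lower=0.1, upper=10.0):
    """Rough regime classification; the thresholds are conventional rules of thumb."""
    if alpha < lower:
        return "Rayleigh"
    if alpha > upper:
        return "geometric optics"
    return "Mie"

# Example: 10 nm, 500 nm and 50 um particles in 550 nm (green) light.
for d in (10e-9, 500e-9, 50e-6):
    a = size_parameter(d, 550e-9)
    print(d, round(a, 3), scattering_regime(a))
```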
[ 20 ] For larger diameters, the problem of electromagnetic scattering by spheres was first solved by Gustav Mie , and scattering by spheres larger than the Rayleigh range is therefore usually known as Mie scattering. In the Mie regime, the shape of the scattering center becomes much more significant and the theory only applies well to spheres and, with some modification, spheroids and ellipsoids . Closed-form solutions for scattering by certain other simple shapes exist, but no general closed-form solution is known for arbitrary shapes. Both Mie and Rayleigh scattering are considered elastic scattering processes, in which the energy (and thus wavelength and frequency) of the light is not substantially changed. However, electromagnetic radiation scattered by moving scattering centers does undergo a Doppler shift , which can be detected and used to measure the velocity of the scattering center(s) in techniques such as lidar and radar . This shift involves a slight change in energy. At values of the ratio of particle diameter to wavelength more than about 10, the laws of geometric optics are mostly sufficient to describe the interaction of light with the particle. Mie theory can still be used for these larger spheres, but the solution often becomes numerically unwieldy. For modeling of scattering in cases where the Rayleigh and Mie models do not apply, such as larger, irregularly shaped particles, there are many numerical methods that can be used. The most common are finite-element methods, which solve Maxwell's equations to find the distribution of the scattered electromagnetic field. Sophisticated software packages exist which allow the user to specify the refractive index or indices of the scattering feature in space, creating a 2- or sometimes 3-dimensional model of the structure. For relatively large and complex structures, these models usually require substantial execution times on a computer. Electrophoresis involves the migration of macromolecules under the influence of an electric field. [ 21 ] Electrophoretic light scattering involves passing an electric field through a liquid which makes particles move. The bigger the charge on the particles, the faster they move. [ 22 ]
https://en.wikipedia.org/wiki/Scattering_theory
Scavengers are animals that consume dead organisms that have died from causes other than predation or have been killed by other predators. [ 1 ] While scavenging generally refers to carnivores feeding on carrion , it is also a herbivorous feeding behavior . [ 2 ] Scavengers play an important role in the ecosystem by consuming dead animal and plant material. Decomposers and detritivores complete this process, by consuming the remains left by scavengers. Scavengers aid in overcoming fluctuations of food resources in the environment. [ 3 ] The process and rate of scavenging are affected by both biotic and abiotic factors, such as carcass size, habitat, temperature, and seasons. [ 4 ] Scavenger is an alteration of scavager, from Middle English skawager meaning " customs collector", from skawage meaning "customs", from Old North French escauwage meaning "inspection", from schauwer meaning "to inspect", of Germanic origin; akin to Old English scēawian and German schauen meaning "to look at", and modern English "show" (with semantic drift ). Obligate scavenging (subsisting entirely or mainly on dead animals) is rare among vertebrates, due to the difficulty of finding enough carrion without expending too much energy. Well-known invertebrate scavengers of animal material include burying beetles and blowflies , which are obligate scavengers, and yellowjackets . Fly larvae are also common scavengers for organic materials at the bottom of freshwater bodies. For example, Tokunagayusurika akamusi is a species of midge fly whose larvae live as obligate scavengers at the bottom of lakes and whose adults almost never feed and only live up to a few weeks. Most scavenging animals are facultative scavengers that gain most of their food through other methods, especially predation . Many large carnivores that hunt regularly, such as hyenas and jackals , but also animals rarely thought of as scavengers, such as African lions , leopards , and wolves , will scavenge if given the chance. They may also use their size and ferocity to intimidate the original hunters (the cheetah is a notable victim, rather than a perpetrator). Almost all scavengers above insect size are predators and will hunt if not enough carrion is available, as few ecosystems provide enough dead animals year-round to keep their scavengers fed on that alone. Scavenging wild dogs and crows frequently exploit roadkill . Scavengers of dead plant material include termites that build nests in grasslands and then collect dead plant material for consumption within the nest. The interaction between scavenging animals and humans is seen today most commonly in suburban settings with animals such as opossums, polecats and raccoons . In some African towns and villages, scavenging from hyenas is also common. In prehistoric times, the species Tyrannosaurus rex may have been an apex predator , preying upon hadrosaurs , ceratopsians , and possibly juvenile sauropods, [ 5 ] although some experts have suggested the dinosaur was primarily a scavenger. The debate about whether Tyrannosaurus was an apex predator or scavenger was among the longest ongoing feuds in paleontology ; however, most scientists now agree that Tyrannosaurus was an opportunistic carnivore, acting mostly as a predator but also scavenging when it could. [ 6 ] Recent research also shows that while an adult T.
rex would energetically gain little through scavenging, smaller theropods of approximately 500 kg (1,100 lb) might have gained levels similar to those of hyenas, though not enough for them to rely on scavenging. [ 7 ] Other research suggests that carcasses of giant sauropods may have made scavenging much more profitable to carnivores than it is now. For example, a single 40 tonne Apatosaurus carcass would have been worth roughly 6 years of calories for an average allosaur. As a result of this resource oversupply, it is possible that some theropods evolved to get most of their calories by scavenging giant sauropod carcasses, and may not have needed to consistently hunt in order to survive. [ 8 ] [ 9 ] The same study suggested that theropods in relatively sauropod-free environments, such as tyrannosaurs, were not exposed to the same type of carrion oversupply, and were therefore forced to hunt in order to survive. Animals which consume feces , such as dung beetles , are referred to as coprovores . Animals that collect small particles of dead organic material of both animal and plant origin are referred to as detritivores . Scavengers play a fundamental role in the environment through the removal of decaying organisms, serving as a natural sanitation service. [ 10 ] While microscopic and invertebrate decomposers break down dead organisms into simple organic matter which is used by nearby autotrophs , scavengers help conserve energy and nutrients obtained from carrion within the upper trophic levels , and are able to disperse the energy and nutrients farther away from the site of the carrion than decomposers. [ 11 ] Scavenging unites animals which normally would not come into contact, [ 12 ] and results in the formation of highly structured and complex communities which engage in nonrandom interactions. [ 13 ] Scavenging communities function in the redistribution of energy obtained from carcasses and reducing diseases associated with decomposition. Oftentimes, scavenger communities differ in consistency due to carcass size and carcass types, as well as by seasonal effects as a consequence of differing invertebrate and microbial activity. [ 4 ] Competition for carrion results in the inclusion or exclusion of certain scavengers from access to carrion, shaping the scavenger community. When carrion decomposes at a slower rate during cooler seasons, competition between scavengers decreases, while the number of scavenger species present increases. [ 4 ] Alterations in scavenging communities may result in drastic changes to the scavenging community in general, reduce ecosystem services and have detrimental effects on animals and humans. [ 13 ] The reintroduction of gray wolves ( Canis lupus ) into Yellowstone National Park in the United States caused drastic changes to the prevalent scavenging community, resulting in the provision of carrion to many mammalian and avian species. [ 4 ] Likewise, the reduction of vulture species in India led to the increase of opportunistic species such as feral dogs and rats. The presence of both species at carcasses resulted in the increase of diseases such as rabies and bubonic plague in wildlife and livestock, as feral dogs and rats are transmitters of such diseases. Furthermore, the decline of vulture populations in India has been linked to the increased rates of anthrax in humans due to the handling and ingestion of infected livestock carcasses.
An increase of disease transmission has been observed in mammalian scavengers in Kenya due to the decrease in vulture populations in the area, as the decrease in vulture populations resulted in an increase of the number of mammalian scavengers at a given carcass along with the time spent at a carcass. [ 10 ] Scavenging may provide a direct and indirect method for transmitting disease between animals. [ 14 ] Scavengers of infected carcasses may become hosts for certain pathogens and consequently vectors of disease themselves. [ 14 ] An example of this phenomenon is the increased transmission of tuberculosis observed when scavengers engage in eating infected carcasses. [ 15 ] Likewise, the ingestion of bat carcasses infected with rabies by striped skunks ( Mephitis mephitis ) resulted in increased infection of these organisms with the virus. Various bird species are major vectors of disease transmission, with outbreaks being influenced by such carrier birds and their environment. An avian cholera outbreak from 2006 to 2007 off the coast of Newfoundland, Canada resulted in the mortality of many marine bird species. The transmission, perpetuation and spread of the outbreak was mainly restricted to gull species that scavenge for food in the area. [ 16 ] Similarly, an increase in the transmission of avian influenza virus to chickens by domestic ducks that were permitted to scavenge the areas surrounding Indonesian farms was observed in 2007. The scavenging of ducks in rice paddy fields in particular resulted in increased contact with other bird species feeding on leftover rice, which may have contributed to increased infection and transmission of the avian influenza virus. The domestic ducks may not have demonstrated symptoms of infection themselves, though were observed to excrete high concentrations of the avian influenza virus. [ 17 ] Many species that scavenge face persecution globally. [ citation needed ] Vultures, in particular, have faced incredible persecution and threats by humans. Before its ban by regional governments in 2006, the veterinary drug Diclofenac resulted in at least a 95% decline of Gyps vultures in Asia. Habitat loss and food shortage have contributed to the decline of vulture species in West Africa due to the growing human population and over-hunting of vulture food sources, as well as changes in livestock husbandry. Poisoning certain predators to increase the number of game animals is still a common hunting practice in Europe and contributes to the poisoning of vultures when they consume the carcasses of poisoned predators. [ 10 ] Highly efficient scavengers, also known as dominant or apex-scavengers, can have benefits to humans. Increases in dominant scavenger populations, such as vultures, can reduce populations of smaller opportunistic scavengers, such as rats. [ 18 ] These smaller scavengers are often pests and disease vectors. In the 1980s, Lewis Binford suggested that early humans primarily obtained meat via scavenging , not through hunting . [ 19 ] In 2010, Dennis Bramble and Daniel Lieberman proposed that early carnivorous human ancestors subsequently developed long-distance running behaviors which improved the ability to scavenge and hunt: they could reach scavenging sites more quickly and also pursue a single animal until it could be safely killed at close range due to exhaustion and hyperthermia.
[ 20 ] In Tibetan Buddhism , the practice of excarnation —that is, the exposure of dead human bodies to carrion birds and/or other scavenging animals—is the distinctive characteristic of sky burial , which involves the dismemberment of human cadavers whose remains are fed to vultures , and is traditionally the main funerary rite (alongside cremation ) used to dispose of the human body. [ 21 ] A similar funerary practice that features excarnation can be found in Zoroastrianism ; in order to prevent the pollution of the sacred elements (fire, earth, and water) from contact with decomposing bodies , human cadavers are exposed on the Towers of Silence to be eaten by vultures and wild dogs. [ 22 ] Studies in behavioral ecology and ecological epidemiology have shown that cannibalistic necrophagy , although rare, has been observed as a survival behavior in several social species , including anatomically modern humans ; [ 14 ] however, episodes of human cannibalism occur rarely in most human societies. [ 14 ] [ Note 1 ] Many instances have occurred in human history , especially in times of war and famine , where necrophagy and human cannibalism emerged as a survival behavior, although anthropologists report the usage of ritual cannibalism among funerary practices and as the preferred means of disposal of the dead in some tribal societies . [ 23 ] [ 24 ] [ 25 ]
https://en.wikipedia.org/wiki/Scavenger
A scavenger in chemistry is a chemical substance added to a mixture in order to remove or deactivate impurities and unwanted reaction products, for example oxygen, to make sure that they will not cause any unfavorable reactions. Their uses are wide-ranging.
https://en.wikipedia.org/wiki/Scavenger_(chemistry)
Scavenger resins are polymers (resins) with bound functional groups that react with specific by-products, impurities, or excess reagents produced in a reaction. Polymer-bound functional groups permit the use of many different scavengers, as the functional groups are confined within a resin or are simply bound to the solid support of a bead. Simply, the functional groups of one scavenger will react minimally with the functional groups of another. [ 1 ] [ 2 ] Employment of scavenger resins has become increasingly popular in solution-phase combinatorial chemistry . [ 3 ] Used primarily in the synthesis of medicinal drugs , [ 4 ] solution-phase combinatorial chemistry allows for the creation of large libraries of structurally related compounds. [ 2 ] When purifying a solution, many approaches can be taken. In general chemical synthesis laboratories, a number of traditional techniques for purification are used as opposed to the employment of scavenger resins. Whether or not scavenger resins are used often depends on the quantity of product desired, the time available to produce the desired product, and the intended use of the product. Some of the advantages and disadvantages of using scavenger resins as a means of purification are described later. Traditional methods of purification of these compounds become time consuming and do not always produce entirely pure products. [ 4 ] The ability to specialize a scavenger resin allows for significantly reduced purification times and purer products. Furthermore, the use of scavenger resins creates a situation where the product can remain in solution and the reaction can be monitored. Conversely, many scavenger resins must be used in large amounts to purify a given product, presenting physical purification issues. [ 3 ] Furthermore, when discussing the use of scavenger resins it is important to think about the different types of solid support "beads" that will hold the selected functional group . These polymer beads are most often described in two ways: lightly crosslinked and highly crosslinked. The different solid supports are chosen at the preference of the chemist. Lightly crosslinked refers to the lightly woven polymer portion of the scavenger. This type of resin becomes swollen in a particular solvent , allowing an impurity to react with a specified functional group. In many cases a single solvent is not sufficient to expand the resin, in which case a second solvent must be added. An example of a secondary solvent, or co-solvent, is tetrahydrofuran (THF). Lightly crosslinked resins typically contain 1–3% divinylbenzene . [ 2 ] [ 5 ] Highly crosslinked resins typically swell much less than lightly crosslinked ones. The property that allows these types of resins to work efficiently lies in their porous properties. The reacting compound can diffuse through the porous layer of the resin to converge with the scavenger's functional group. These types of resins are utilized in situations where swelling of the resins may cause a physical barrier to reaction purification. They contain a much higher content of divinylbenzene. [ 4 ] Organic scavenger resins have been used commercially in water filters as early as 1997. As an alternative to reverse osmosis , organic anion resins (scavenger resins) have been used to remove impurities from drinking water. These types of resins are able to remove the negatively charged organic [ verification needed ] molecules in water, like bicarbonates , sulfates , and nitrates .
It has been estimated that 60–80% of organic impurities in water may be removed using these methods.
https://en.wikipedia.org/wiki/Scavenger_resin
In the field of vehicular automation , a scenario denotes a sequence of snapshots of the environment and the actions of a vehicle. Scenarios are created to represent real-world situations and are used for development, testing, and validation purposes. [ 1 ] [ 2 ] According to ASAM 's OpenODD concept paper, scenarios are related to the operational design domain (ODD), but they are not the same. Defining the appropriate behavior of actors within an ODD creates a scenario that is not dependent on any ODD definition. [ 5 ] In 2022, the Netherlands Organisation for Applied Scientific Research announced scenario-based safety validation of self-driving trucks in cooperation with Torc Robotics . [ 6 ]
https://en.wikipedia.org/wiki/Scenario_(vehicular_automation)
Scenario planning , scenario thinking , scenario analysis , [ 1 ] scenario prediction [ 2 ] and the scenario method [ 3 ] all describe a strategic planning method that some organizations use to make flexible long-term plans. It is in large part an adaptation and generalization of classic methods used by military intelligence . [ 4 ] In the most common application of the method, analysts generate simulation games for policy makers . The method combines known facts, such as demographics , geography and mineral reserves , with military , political , and industrial information, and key driving forces identified by considering social, technical, economic, environmental, and political ("STEEP") trends. In business applications, the emphasis on understanding the behavior of opponents has been reduced while more attention is now paid to changes in the natural environment. At Royal Dutch Shell for example, scenario planning has been described as changing mindsets about the exogenous part of the world prior to formulating specific strategies. [ 5 ] [ 6 ] Scenario planning may involve aspects of systems thinking , specifically the recognition that many factors may combine in complex ways to create sometimes surprising futures (due to non-linear feedback loops ). The method also allows the inclusion of factors that are difficult to formalize, such as novel insights about the future, deep shifts in values, and unprecedented regulations or inventions. [ 7 ] Systems thinking used in conjunction with scenario planning leads to plausible scenario storylines because the causal relationship between factors can be demonstrated. [ 8 ] These cases, in which scenario planning is integrated with a systems thinking approach to scenario development, are sometimes referred to as "dynamic scenarios". Critics of using a subjective and heuristic methodology to deal with uncertainty and complexity argue that the technique has not been examined rigorously, nor influenced sufficiently by scientific evidence. They caution against using such methods to "predict" based on what can be described as arbitrary themes and "forecasting techniques". A challenge and a strength of scenario-building is that "predictors are part of the social context about which they are trying to make a prediction and may influence that context in the process". [ 9 ] As a consequence, societal predictions can become self-destructing. For example, a scenario in which a large percentage of a population will become HIV infected based on existing trends may cause more people to avoid risky behavior and thus reduce the HIV infection rate, invalidating the forecast (which might have remained correct if it had not been publicly known). Or, a prediction that cybersecurity will become a major issue may cause organizations to implement more secure cybersecurity measures, thus limiting the issue. [ 9 ] Combinations and permutations of fact and related social changes are called " scenarios ". Scenarios usually include plausible, but unexpectedly important, situations and problems that exist in some nascent form in the present day. Any particular scenario is unlikely. However, futures studies analysts select scenario features so they are both possible and uncomfortable. Scenario planning helps policy-makers and firms anticipate change, prepare responses, and create more robust strategies. [ 10 ] [ 11 ] Scenario planning helps a firm anticipate the impact of different scenarios and identify weaknesses. 
When anticipated years in advance, those weaknesses can be avoided or their impacts reduced more effectively than when similar real-life problems are considered under the duress of an emergency. For example, a company may discover that it needs to change contractual terms to protect against a new class of risks, or collect cash reserves to purchase anticipated technologies or equipment. Flexible business continuity plans with " PREsponse protocols " can help cope with similar operational problems and deliver measurable future value. Strategic military intelligence organizations also construct scenarios. The methods and organizations are almost identical, except that scenario planning is applied to a wider variety of problems than merely military and political problems. As in military intelligence, the chief challenge of scenario planning is to find out the real needs of policy-makers, when policy-makers may not themselves know what they need to know, or may not know how to describe the information that they really want. Good analysts design wargames so that policy makers have great flexibility and freedom to adapt their simulated organisations. [ 12 ] Then these simulated organizations are "stressed" by the scenarios as a game plays out. Usually, particular groups of facts become more clearly important. These insights enable intelligence organizations to refine and repackage real information more precisely to better serve the policy-makers' real-life needs. Usually the games' simulated time runs hundreds of times faster than real life, so policy-makers experience several years of policy decisions, and their simulated effects, in less than a day. The chief value of scenario planning is that it allows policy-makers to make and learn from mistakes without risking career-limiting failures in real life. Further, policymakers can make these mistakes in a safe, unthreatening, game-like environment, while responding to a wide variety of concretely presented situations based on facts. This is an opportunity to "rehearse the future", an opportunity that does not present itself in day-to-day operations where every action and decision counts. The basic concepts of the process are relatively simple. In terms of the overall approach to forecasting, they can be divided into three main groups of activities (which are, generally speaking, common to all long-range forecasting processes): [ 13 ] The first of these groups quite simply comprises the normal environmental analysis . This is almost exactly the same as that which should be undertaken as the first stage of any serious long-range planning. However, the quality of this analysis is especially important in the context of scenario planning. The central part represents the specific techniques – covered here – which differentiate the scenario forecasting process from the others in long-range planning. The final group represents all the subsequent processes which go towards producing the corporate strategy and plans. Again, the requirements are slightly different but in general they follow all the rules of sound long-range planning. In the past, strategic plans have often considered only the "official future", which was usually a straight-line graph of current trends carried into the future. Often the trend lines were generated by the accounting department, and lacked discussions of demographics, or qualitative differences in social conditions.
[ 5 ] These simplistic guesses are surprisingly good most of the time, but fail to consider qualitative social changes that can affect a business or government. Paul J. H. Schoemaker offered a strong managerial case for the use of scenario planning in business, and it had wide impact. [ 14 ] The approach may have had more impact outside Shell than within, as many other firms and consultancies started to benefit as well from scenario planning. Scenario planning is as much art as science, and prone to a variety of traps (both in process and content) as enumerated by Paul J. H. Schoemaker . [ 14 ] More recently scenario planning has been discussed as a tool to improve strategic agility, by cognitively preparing not only multiple scenarios but also multiple consistent strategies. [ 10 ] Scenario planning is also extremely popular with military planners. Most states' departments of war maintain a continuously updated series of strategic plans to cope with well-known military or strategic problems. [ citation needed ] These plans are almost always based on scenarios, and often the plans and scenarios are kept up-to-date by war games, sometimes played out with real troops. [ citation needed ] This process was first carried out by (arguably the method was invented by) the Prussian general staff of the mid-19th century. [ citation needed ] In economics and finance , a financial institution might use scenario analysis to forecast several possible scenarios for the economy (e.g. rapid growth, moderate growth, slow growth) and for financial returns (for bonds, stocks, cash, etc.) in each of those scenarios. It might consider sub-sets of each of the possibilities. It might further seek to determine correlations and assign probabilities to the scenarios (and sub-sets if any). Then it will be in a position to consider how to distribute assets between asset types (i.e. asset allocation ); the institution can also calculate the scenario-weighted expected return (which figure will indicate the overall attractiveness of the financial environment). It may also perform stress testing , using adverse scenarios. [ 15 ] Depending on the complexity of the problem, scenario analysis can be a demanding exercise. It can be difficult to foresee what the future holds (e.g. the actual future outcome may be entirely unexpected), i.e. to foresee what the scenarios are, and to assign probabilities to them; and this is true of the general forecasts, never mind the implied financial market returns. The outcomes can be modeled mathematically/statistically, e.g. taking account of possible variability within single scenarios as well as possible relationships between scenarios. In general, one should take care when assigning probabilities to different scenarios as this could invite a tendency to consider only the scenario with the highest probability. [ 16 ] In politics or geopolitics, scenario analysis involves reflecting on the possible alternative paths of a social or political environment and possibly diplomatic and war risks. Most authors attribute the introduction of scenario planning to Herman Kahn through his work for the US Military in the 1950s at the RAND Corporation where he developed a technique of describing the future in stories as if written by people in the future. He adopted the term "scenarios" to describe these stories. In 1961 he founded the Hudson Institute where he expanded his scenario work to social forecasting and public policy.
[ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] One of his most controversial uses of scenarios was to suggest that a nuclear war could be won. [ 22 ] Though Kahn is often cited as the father of scenario planning, at the same time that Kahn was developing his methods at RAND, Gaston Berger was developing similar methods at the Centre d’Etudes Prospectives which he founded in France. His method, which he named 'La Prospective', was to develop normative scenarios of the future which were to be used as a guide in formulating public policy. During the mid-1960s various authors from the French and American institutions began to publish scenario planning concepts such as 'La Prospective' by Berger in 1964 [ 23 ] and 'The Next Thirty-Three Years' by Kahn and Wiener in 1967. [ 24 ] By the 1970s scenario planning was in full swing, with a number of institutions now established to provide support to business, including the Hudson Foundation, the Stanford Research Institute (now SRI International ), and the SEMA Metra Consulting Group in France. Several large companies also began to embrace scenario planning, including DHL Express , Royal Dutch Shell and General Electric . [ 19 ] [ 21 ] [ 25 ] [ 26 ] Possibly as a result of these very sophisticated approaches, and of the difficult techniques they employed (which usually demanded the resources of a central planning staff), scenarios earned a reputation for difficulty (and cost) in use. Even so, the theoretical importance of the use of alternative scenarios, to help address the uncertainty implicit in long-range forecasts, was dramatically underlined by the widespread confusion which followed the Oil Shock of 1973. As a result, many of the larger organizations started to use the technique in one form or another. By 1983 Diffenbach reported that 'alternate scenarios' were the third most popular technique for long-range forecasting – used by 68% of the large organizations he surveyed. [ 27 ] Practical development of scenario forecasting, to guide strategy rather than for the more limited academic uses which had previously been the case, was started by Pierre Wack in 1971 at the Royal Dutch Shell group of companies – and it, too, was given impetus by the Oil Shock two years later. Shell has, since that time, led the commercial world in the use of scenarios – and in the development of more practical techniques to support these. Indeed, as – in common with most forms of long-range forecasting – the use of scenarios has (during the depressed trading conditions of the last decade) reduced to only a handful of private-sector organisations, Shell remains almost alone amongst them in keeping the technique at the forefront of forecasting. [ 28 ] There has only been anecdotal evidence offered in support of the value of scenarios, even as aids to forecasting; and most of this has come from one company – Shell. In addition, with so few organisations making consistent use of them – and with the timescales involved reaching into decades – it is unlikely that any definitive supporting evidence will be forthcoming in the foreseeable future. For the same reasons, though, a lack of such proof applies to almost all long-range planning techniques.
In the absence of proof, but taking account of Shell's well-documented experiences of using it over several decades (where, in the 1990s, its then CEO ascribed its success to its use of such scenarios), there can be significant benefit to be obtained from extending the horizons of managers' long-range forecasting in the way that the use of scenarios uniquely does. [ 13 ] The part of the overall process which is radically different from most other forms of long-range planning is the central section, the actual production of the scenarios. Even this, though, is relatively simple, at its most basic level. As derived from the approach most commonly used by Shell, [ 29 ] it follows six steps: [ 30 ] The first stage is to examine the results of environmental analysis to determine which are the most important factors that will decide the nature of the future environment within which the organisation operates. These factors are sometimes called 'variables' (because they will vary over the time being investigated, though the terminology may confuse scientists who use it in a more rigorous manner). Users tend to prefer the term 'drivers' (for change), since this terminology is not laden with quasi-scientific connotations and reinforces the participants' commitment to search for those forces which will act to change the future. Whatever the nomenclature, the main requirement is that these will be informed assumptions. This is partly a process of analysis, needed to recognise what these 'forces' might be. However, it is likely that some work on this element will already have taken place during the preceding environmental analysis. By the time the formal scenario planning stage has been reached, the participants may have already decided – probably in their sub-conscious rather than formally – what the main forces are. In the ideal approach, the first stage should be to carefully decide the overall assumptions on which the scenarios will be based. Only then, as a second stage, should the various drivers be specifically defined. Participants, though, seem to have problems in separating these stages. Perhaps the most difficult aspect, though, is freeing the participants from the preconceptions they take into the process with them. In particular, most participants will want to look at the medium term, five to ten years ahead, rather than the required longer term, ten or more years ahead. However, a time horizon of anything less than ten years often leads participants to extrapolate from present trends, rather than consider the alternatives which might face them. When, however, they are asked to consider timescales in excess of ten years they almost all seem to accept the logic of the scenario planning process, and no longer fall back on that of extrapolation. There is a similar problem with expanding participants' horizons to include the whole external environment. Brainstorming: In any case, the brainstorming which should then take place, to ensure that the list is complete, may unearth more variables – and, in particular, the combination of factors may suggest yet others. A very simple technique which is especially useful at this – brainstorming – stage, and in general for handling scenario planning debates, is derived from use in Shell, where this type of approach is often used. An especially easy approach, it only requires a conference room with a bare wall and copious supplies of 3M Post-It Notes .
The six to ten people ideally taking part in such face-to-face debates should be in a conference room environment which is isolated from outside interruptions. The only special requirement is that the conference room has at least one clear wall on which Post-It notes will stick. At the start of the meeting itself, any topics which have already been identified during the environmental analysis stage are written (preferably with a thick magic marker, so they can be read from a distance) on separate Post-It Notes. These Post-It Notes are then, at least in theory, randomly placed on the wall. In practice, even at this early stage the participants will want to cluster them in groups which seem to make sense. The only requirement (which is why Post-It Notes are ideal for this approach) is that there is no bar to taking them off again and moving them to a new cluster. A similar technique – using 5" by 3" index cards – has also been described (as the 'Snowball Technique'), by Backoff and Nutt, for grouping and evaluating ideas in general. [ 31 ] As in any form of brainstorming, the initial ideas almost invariably stimulate others. Indeed, everyone should be encouraged to add their own Post-It Notes to those on the wall. However it differs from the 'rigorous' form described in 'creative thinking' texts, in that it is much slower paced and the ideas are discussed immediately. In practice, as many ideas may be removed, as not being relevant, as are added. Even so, it follows many of the same rules as normal brainstorming and typically lasts the same length of time – say, an hour or so only. It is important that all the participants feel they 'own' the wall – and are encouraged to move the notes around themselves. The result is a very powerful form of creative decision-making for groups, which is applicable to a wide range of situations (but is especially powerful in the context of scenario planning). It also offers a very good introduction for those who are coming to the scenario process for the first time. Since the workings are largely self-evident, participants very quickly come to understand exactly what is involved. Important and uncertain This step is, though, also one of selection – since only the most important factors will justify a place in the scenarios. The 80:20 Rule here means that, at the end of the process, management's attention must be focused on a limited number of most important issues. Experience has proved that offering a wider range of topics merely allows them to select those few which interest them, and not necessarily those which are most important to the organisation. In addition, as scenarios are a technique for presenting alternative futures, the factors to be included must be genuinely 'variable'. They should be subject to significant alternative outcomes. Factors whose outcome is predictable, but important, should be spelled out in the introduction to the scenarios (since they cannot be ignored). The Important Uncertainties Matrix, as reported by Kees van der Heijden of Shell, is a useful check at this stage. [ 32 ] At this point it is also worth pointing out that a great virtue of scenarios is that they can accommodate the input from any other form of forecasting. They may use figures, diagrams or words in any combination. No other form of forecasting offers this flexibility. The next step is to link these drivers together to provide a meaningful framework. This may be obvious, where some of the factors are clearly related to each other in one way or another. 
For instance, a technological factor may lead to market changes, but may be constrained by legislative factors. On the other hand, some of the 'links' (or at least the 'groupings') may need to be artificial at this stage. At a later stage more meaningful links may be found, or the factors may then be rejected from the scenarios. In the most theoretical approaches to the subject, probabilities are attached to the event strings. This is difficult to achieve, however, and generally adds little – except complexity – to the outcomes. This is probably the most (conceptually) difficult step. It is where managers' 'intuition' – their ability to make sense of complex patterns of 'soft' data which more rigorous analysis would be unable to handle – plays an important role. There are, however, a range of techniques which can help; and again the Post-It-Notes approach is especially useful: Thus, the participants try to arrange the drivers, which have emerged from the first stage, into groups which seem to make sense to them. Initially there may be many small groups. The intention should, therefore, be to gradually merge these (often having to reform them from new combinations of drivers to make these bigger groups work). The aim of this stage is eventually to make 6–8 larger groupings; 'mini-scenarios'. Here the Post-It Notes may be moved dozens of times over the length – perhaps several hours or more – of each meeting. While this process is taking place the participants will probably want to add new topics – so more Post-It Notes are added to the wall. In the opposite direction, the unimportant ones are removed (possibly to be grouped, again as an 'audit trail' on another wall). More important, the 'certain' topics are also removed from the main area of debate – in this case they must be grouped in clearly labelled area of the main wall. As the clusters – the 'mini-scenarios' – emerge, the associated notes may be stuck to each other rather than individually to the wall; which makes it easier to move the clusters around (and is a considerable help during the final, demanding stage to reducing the scenarios to two or three). The great benefit of using Post-It Notes is that there is no bar to participants changing their minds. If they want to rearrange the groups – or simply to go back (iterate) to an earlier stage – then they strip them off and put them in their new position. The outcome of the previous step is usually between seven and nine logical groupings of drivers. This is usually easy to achieve. The 'natural' reason for this may be that it represents some form of limit as to what participants can visualise. Having placed the factors in these groups, the next action is to work out, very approximately at this stage, what is the connection between them. What does each group of factors represent? The main action, at this next stage, is to reduce the seven to nine mini-scenarios/groupings detected at the previous stage to two or three larger scenarios There is no theoretical reason for reducing to just two or three scenarios, only a practical one. It has been found that the managers who will be asked to use the final scenarios can only cope effectively with a maximum of three versions! Shell started, more than three decades ago, by building half a dozen or more scenarios – but found that the outcome was that their managers selected just one of these to concentrate on. 
As a result, the planners reduced the number to three, which managers could handle easily but could no longer so easily justify the selection of only one! This is the number now recommended most frequently in most of the literature. Complementary scenarios As used by Shell, and as favoured by a number of the academics, two scenarios should be complementary; the reason being that this helps avoid managers 'choosing' just one, 'preferred', scenario – and lapsing once more into single-track forecasting (negating the benefits of using 'alternative' scenarios to allow for alternative, uncertain futures). This is, however, a potentially difficult concept to grasp, where managers are used to looking for opposites; a good and a bad scenario, say, or an optimistic one versus a pessimistic one – and indeed this is the approach (for small businesses) advocated by Foster. In the Shell approach, the two scenarios are required to be equally likely, and between them to cover all the 'event strings'/drivers. Ideally they should not be obvious opposites, which might once again bias their acceptance by users, so the choice of 'neutral' titles is important. For example, Shell's two scenarios at the beginning of the 1990s were titled 'Sustainable World' and 'Global Mercantilism'[xv]. In practice, we found that this requirement, much to our surprise, posed few problems for the great majority, 85%, of those in the survey; who easily produced 'balanced' scenarios. The remaining 15% mainly fell into the expected trap of 'good versus bad'. We have found that our own relatively complex (OBS) scenarios can also be made complementary to each other; without any great effort needed from the teams involved; and the resulting two scenarios are both developed further by all involved, without unnecessary focusing on one or the other. Testing Having grouped the factors into these two scenarios, the next step is to test them, again, for viability. Do they make sense to the participants? This may be in terms of logical analysis, but it may also be in terms of intuitive 'gut-feel'. Once more, intuition often may offer a useful – if academically less respectable – vehicle for reacting to the complex and ill-defined issues typically involved. If the scenarios do not intuitively 'hang together', why not? The usual problem is that one or more of the assumptions turns out to be unrealistic in terms of how the participants see their world. If this is the case then you need to return to the first step – the whole scenario planning process is above all an iterative one (returning to its beginnings a number of times until the final outcome makes the best sense). The scenarios are then 'written up' in the most suitable form. The flexibility of this step often confuses participants, for they are used to forecasting processes which have a fixed format. The rule, though, is that you should produce the scenarios in the form most suitable for use by the managers who are going to base their strategy on them. Less obviously, the managers who are going to implement this strategy should also be taken into account. They will also be exposed to the scenarios, and will need to believe in these. This is essentially a 'marketing' decision, since it will be very necessary to 'sell' the final results to the users. On the other hand, a not inconsiderable consideration may be to use the form the author also finds most comfortable. 
If the form is alien to him or her the chances are that the resulting scenarios will carry little conviction when it comes to the 'sale'. Most scenarios will, perhaps, be written in word form (almost as a series of alternative essays about the future); especially where they will almost inevitably be qualitative which is hardly surprising where managers, and their audience, will probably use this in their day to day communications. Some, though use an expanded series of lists and some enliven their reports by adding some fictional 'character' to the material – perhaps taking literally the idea that they are stories about the future – though they are still clearly intended to be factual. On the other hand, they may include numeric data and/or diagrams – as those of Shell do (and in the process gain by the acid test of more measurable 'predictions'). The final stage of the process is to examine these scenarios to determine what are the most critical outcomes; the 'branching points' relating to the 'issues' which will have the greatest impact (potentially generating 'crises') on the future of the organisation. The subsequent strategy will have to address these – since the normal approach to strategy deriving from scenarios is one which aims to minimise risk by being 'robust' (that is it will safely cope with all the alternative outcomes of these 'life and death' issues) rather than aiming for performance (profit) maximisation by gambling on one outcome. Scenarios may be used in a number of ways: a) Containers for the drivers/event strings Most basically, they are a logical device, an artificial framework, for presenting the individual factors/topics (or coherent groups of these) so that these are made easily available for managers' use – as useful ideas about future developments in their own right – without reference to the rest of the scenario. It should be stressed that no factors should be dropped, or even given lower priority, as a result of producing the scenarios. In this context, which scenario contains which topic (driver), or issue about the future, is irrelevant. b) Tests for consistency At every stage it is necessary to iterate, to check that the contents are viable and make any necessary changes to ensure that they are; here the main test is to see if the scenarios seem to be internally consistent – if they are not then the writer must loop back to earlier stages to correct the problem. Though it has been mentioned previously, it is important to stress once again that scenario building is ideally an iterative process. It usually does not just happen in one meeting – though even one attempt is better than none – but takes place over a number of meetings as the participants gradually refine their ideas. c) Positive perspectives Perhaps the main benefit deriving from scenarios, however, comes from the alternative 'flavors' of the future their different perspectives offer. It is a common experience, when the scenarios finally emerge, for the participants to be startled by the insight they offer – as to what the general shape of the future might be – at this stage it no longer is a theoretical exercise but becomes a genuine framework (or rather set of alternative frameworks) for dealing with that. Scenario planning differs from contingency planning , sensitivity analysis and computer simulations . [ 33 ] Contingency planning is a "What if" tool, that only takes into account one uncertainty. However, scenario planning considers combinations of uncertainties in each scenario. 
Planners also try to select especially plausible but uncomfortable combinations of social developments. Sensitivity analysis analyzes changes in one variable only, which is useful for simple changes, while scenario planning tries to expose policy makers to significant interactions of major variables. While scenario planning can benefit from computer simulations, scenario planning is less formalized, and can be used to make plans for qualitative patterns that show up in a wide variety of simulated events. During the past 5 years, computer supported Morphological Analysis has been employed as aid in scenario development by the Swedish Defence Research Agency in Stockholm . [ 34 ] This method makes it possible to create a multi-variable morphological field which can be treated as an inference model – thus integrating scenario planning techniques with contingency analysis and sensitivity analysis . Scenario analysis is a process of analyzing future events by considering alternative possible outcomes (sometimes called "alternative worlds"). Thus, scenario analysis, which is one of the main forms of projection, does not try to show one exact picture of the future. Instead, it presents several alternative future developments. Consequently, a scope of possible future outcomes is observable. Not only are the outcomes observable, also the development paths leading to the outcomes. In contrast to prognoses , the scenario analysis is not based on extrapolation of the past or the extension of past trends. It does not rely on historical data and does not expect past observations to remain valid in the future. Instead, it tries to consider possible developments and turning points, which may only be connected to the past. In short, several scenarios are fleshed out in a scenario analysis to show possible future outcomes. Each scenario normally combines optimistic, pessimistic, and more and less probable developments. However, all aspects of scenarios should be plausible. Although highly discussed, experience has shown that around three scenarios are most appropriate for further discussion and selection. More scenarios risks making the analysis overly complicated. [ 35 ] [ 36 ] Scenarios are often confused with other tools and approaches to planning. Scenario-building is designed to allow improved decision-making by allowing deep consideration of outcomes and their implications. A scenario is a tool used during requirements analysis to describe a specific use of a proposed system. Scenarios capture the system, as viewed from the outside Scenario analysis can also be used to illuminate "wild cards." For example, analysis of the possibility of the earth being struck by a meteor suggests that whilst the probability is low, the damage inflicted is so high that the event is much more important (threatening) than the low probability (in any one year) alone would suggest. However, this possibility is usually disregarded by organizations using scenario analysis to develop a strategic plan since it has such overarching repercussions. Scenario planning concerns planning based on the systematic examination of the future by picturing plausible and consistent images of that future. The Delphi method attempts to develop systematically expert opinion consensus concerning future developments and events. It is a judgmental forecasting procedure in the form of an anonymous, written, multi-stage survey process, where feedback of group opinion is provided after each round. 
Numerous researchers have stressed that both approaches are best suited to be combined. [ 37 ] [ 38 ] Due to their process similarity, the two methodologies can be easily combined. The output of the different phases of the Delphi method can be used as input for the scenario method and vice versa. A combination makes a realization of the benefits of both tools possible. In practice, usually one of the two tools is considered the dominant methodology and the other one is added on at some stage. The variant that is most often found in practice is the integration of the Delphi method into the scenario process (see e.g. Rikkonen, 2005; [ 39 ] von der Gracht, 2008; [ 40 ] ). Authors refer to this type as Delphi-scenario (writing), expert-based scenarios, or Delphi panel derived scenarios. Von der Gracht (2010) [ 41 ] is a scientifically valid example of this method. Since scenario planning is “information hungry”, Delphi research can deliver valuable input for the process. There are various types of information output of Delphi that can be used as input for scenario planning. Researchers can, for example, identify relevant events or developments and, based on expert opinion, assign probabilities to them. Moreover, expert comments and arguments provide deeper insights into relationships of factors that can, in turn, be integrated into scenarios afterwards. Also, Delphi helps to identify extreme opinions and dissent among the experts. Such controversial topics are particularly suited for extreme scenarios or wildcards. In his doctoral thesis, Rikkonen (2005) [ 39 ] examined the utilization of Delphi techniques in scenario planning and, concretely, in construction of scenarios. The author comes to the conclusion that the Delphi technique has instrumental value in providing different alternative futures and the argumentation of scenarios. It is therefore recommended to use Delphi in order to make the scenarios more profound and to create confidence in scenario planning. Further benefits lie in the simplification of the scenario writing process and the deep understanding of the interrelations between the forecast items and social factors. While there is utility in weighting hypotheses and branching potential outcomes from them, reliance on scenario analysis without reporting some parameters of measurement accuracy (standard errors, confidence intervals of estimates, metadata, standardization and coding, weighting for non-response, error in reportage, sample design, case counts, etc.) is a poor second to traditional prediction. Especially in “complex” problems, factors and assumptions do not correlate in lockstep fashion. Once a specific sensitivity is undefined, it may call the entire study into question. It is faulty logic to think, when arbitrating results, that a better hypothesis will render empiricism unnecessary. In this respect, scenario analysis tries to defer statistical laws (e.g., Chebyshev's inequality Law), because the decision rules occur outside a constrained setting. Outcomes are not permitted to “just happen”; rather, they are forced to conform to arbitrary hypotheses ex post, and therefore there is no footing on which to place expected values. In truth, there are no ex ante expected values, only hypotheses, and one is left wondering about the roles of modeling and data decision. In short, comparisons of "scenarios" with outcomes are biased by not deferring to the data; this may be convenient, but it is indefensible. 
“Scenario analysis” is no substitute for complete and factual exposure of survey error in economic studies. In traditional prediction, given the data used to model the problem, with a reasoned specification and technique, an analyst can state, within a certain percentage of statistical error, the likelihood of a coefficient being within a certain numerical bound. This exactitude need not come at the expense of very disaggregated statements of hypotheses. R Software, specifically the module “WhatIf,” [ 42 ] (in the context, see also Matchit and Zelig) has been developed for causal inference, and to evaluate counterfactuals. These programs have fairly sophisticated treatments for determining model dependence, in order to state with precision how sensitive the results are to models not based on empirical evidence. Another challenge of scenario-building is that "predictors are part of the social context about which they are trying to make a prediction and may influence that context in the process". [ 9 ] As a consequence, societal predictions can become self-destructing. [ 9 ] For example, a scenario in which a large percentage of a population will become HIV infected based on existing trends may cause more people to avoid risky behavior and thus reduce the HIV infection rate, invalidating the forecast (which might have remained correct if it had not been publicly known). Or, a prediction that cybersecurity will become a major issue may cause organizations to implement more secure cybersecurity measures, thus limiting the issue. In the 1970s, many energy companies were surprised by both environmentalism and the OPEC cartel, and thereby lost billions of dollars of revenue by mis-investment. The dramatic financial effects of these changes led at least one organization, Royal Dutch Shell, to implement scenario planning. The analysts of this company publicly estimated that this planning process made their company the largest in the world. [ 43 ] However other observers [ who? ] of Shell's use of scenario planning have suggested that few if any significant long-term business advantages accrued to Shell from the use of scenario methodology [ citation needed ] . Whilst the intellectual robustness of Shell's long term scenarios was seldom in doubt their actual practical use was seen as being minimal by many senior Shell executives [ citation needed ] . A Shell insider has commented "The scenario team were bright and their work was of a very high intellectual level. However neither the high level "Group scenarios" nor the country level scenarios produced with operating companies really made much difference when key decisions were being taken". [ citation needed ] The use of scenarios was audited by Arie de Geus 's team in the early 1980s and they found that the decision-making processes following the scenarios were the primary cause of the lack of strategic implementation [ clarification needed ] ), rather than the scenarios themselves. Many practitioners today spend as much time on the decision-making process as on creating the scenarios themselves. [ 44 ]
https://en.wikipedia.org/wiki/Scenario_planning
SceneKit , sometimes rendered Scene Kit , is a 3D graphics application programming interface (API) for Apple Inc. platforms written in Objective-C . It is a high-level framework designed to provide an easy-to-use layer over the lower level APIs like OpenGL and Metal . [ 1 ] SceneKit maintains an object based scene graph , along with a physics engine , particle system , and links to Core Animation and other frameworks to easily animate that display. SceneKit views can be mixed with other views, for instance, allowing a SpriteKit 2D display to be mapped onto the surface of an object in SceneKit, or a UIBezierPath from Core Graphics to define the geometry of a SceneKit object. SceneKit also supports import and export of 3D scenes using the COLLADA format. SceneKit was first released for macOS in 2012, and iOS in 2014. SceneKit maintains a scene graph based on a root object, an instance of the class SCNScene. The SCNScene object is roughly equivalent to the view objects found in most 2D libraries, and is intended to be embedded in a display container like a window or another view object. The only major content of the SCNScene is a link to the rootNode, which points to an SCNNode object. SCNNodes are the primary contents of the SceneKit hierarchy. Each Node has a Name, and pointers to optional Camera, Light and Geometry objects, as well as an array of childNodes and a pointer to its own parent. A typical scene will contain a single Scene object pointed to a conveniently named Node (often "root") whose primary purpose is to hold a collection of children Nodes. The children nodes can be used to represent cameras, lights, or the various geometry objects in the Scene. A simple Scene can be created by making a single SCNGeometry object, typically with one of the constructor classes like SCNBox, a single SCNCamera, one or more SCNLights, and then assigning all of these objects to separate Nodes. A single additional generic Node is then created and assigned to the SCNScene object's rootNode, and then all of the objects are added as children of that rootNode. Nevertheless, the number of lights is limited to 8. SCNScenes also contain a number of built-in user interface controls and input/output libraries to greatly ease implementing simple viewers and similar tasks. For instance, setting the Scene's autoenablesDefaultLighting and allowsCameraControl to true, and then adding an object tree read from a COLLADA file will produce viewable content of arbitrary complexity with a few lines of code. The integration with Xcode allows the Scene itself to be placed in a window in Interface Builder , without any code at all. There is a Scenekit archive file format, using the filename extension .scn. This computing article is a stub . You can help Wikipedia by expanding it .
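To make the scene-graph hierarchy described above easier to picture, the following Python sketch mirrors its structure (a scene holding only a root node, with each node carrying a name, optional camera, light or geometry, a list of child nodes, and a parent pointer). The class and attribute names are illustrative only; this is not the SceneKit API, which is used from Swift or Objective-C.

```python
# Illustrative sketch of the scene-graph hierarchy described above.
# These classes only mirror the structure (scene -> root node -> child nodes);
# they are NOT the SceneKit API.
class Node:
    def __init__(self, name, camera=None, light=None, geometry=None):
        self.name = name
        self.camera = camera        # optional camera attached to this node
        self.light = light          # optional light attached to this node
        self.geometry = geometry    # optional geometry (e.g. a box)
        self.child_nodes = []
        self.parent = None

    def add_child(self, node):
        node.parent = self
        self.child_nodes.append(node)

class Scene:
    def __init__(self):
        self.root_node = Node("root")   # the scene only holds a link to the root


# A minimal scene: one box, one camera, one light, all children of the root.
scene = Scene()
scene.root_node.add_child(Node("box", geometry="box 1x1x1"))
scene.root_node.add_child(Node("camera", camera="perspective camera"))
scene.root_node.add_child(Node("light", light="omnidirectional light"))
```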
https://en.wikipedia.org/wiki/SceneKit
Scene statistics is a discipline within the field of perception . It is concerned with the statistical regularities related to scenes . It is based on the premise that a perceptual system is designed to interpret scenes . Biological perceptual systems have evolved in response to physical properties of natural environments. [ 1 ] Therefore natural scenes receive a great deal of attention. [ 2 ] Natural scene statistics are useful for defining the behavior of an ideal observer in a natural task, typically by incorporating signal detection theory , information theory , or estimation theory . Geisler (2008) [ 4 ] distinguishes between four kinds of domains: (1) Physical environments, (2) Images/Scenes, (3) Neural responses, and (4) Behavior. Within the domain of images/scenes, one can study the characteristics of information related to redundancy and efficient coding. Across-domain statistics determine how an autonomous system should make inferences about its environment, process information, and control its behavior. To study these statistics, it is necessary to sample or register information in multiple domains simultaneously. One of the most successful applications of Natural Scenes Statistics Models has been perceptual picture and video quality prediction. For example, the Visual Information Fidelity (VIF) algorithm, which is used to measure the degree of distortion of pictures and videos, is used extensively by the image and video processing communities to assess perceptual quality, often after processing, such as compression, which can degrade the appearance of a visual signal. The premise is that the scene statistics are changed by distortion, and that the visual system is sensitive to the changes in the scene statistics. VIF is heavily used in the streaming television industry. Other popular picture quality models that use natural scene statistics include BRISQUE, [ 5 ] and NIQE [ 6 ] both of which are no-reference, since they do not require any reference picture to measure quality against.
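As a concrete illustration of the kind of statistic such quality models rely on, BRISQUE- and NIQE-style approaches start from locally mean-subtracted, contrast-normalized (MSCN) luminance coefficients, whose distribution is altered by distortion. The sketch below computes MSCN coefficients with NumPy/SciPy; the window width and stabilizing constant are illustrative choices, not values prescribed by any particular model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7 / 6, c=1.0):
    """Mean-subtracted, contrast-normalized coefficients of a grayscale image."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                   # local mean
    var = gaussian_filter(image * image, sigma) - mu * mu
    sigma_map = np.sqrt(np.maximum(var, 0.0))            # local standard deviation
    return (image - mu) / (sigma_map + c)                # c avoids division by zero

# For a pristine natural image the MSCN histogram is close to Gaussian;
# distortions (blur, compression, noise) change its shape, which is what
# scene-statistics-based quality models measure.
example = np.random.rand(64, 64) * 255    # stand-in for a real luminance image
coeffs = mscn(example)
print(coeffs.mean(), coeffs.std())
```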
https://en.wikipedia.org/wiki/Scene_statistics
Scenechronize (stylized as scenechronize) is a computer software platform , developed by Clever Machine Inc., for television and movie production companies. Its purpose is to reduce the need for paper materials used during the production process, in order to reduce waste. Clever Machine was founded in December 2003 and incorporated in California . The founders, Hunter Hancock, chief executive officer, and Darren Ehlers, chief operations officer, and five engineers had originally provided customized solutions to financial services companies, assisting in marketing and engineering positions with multiple enterprise software companies. The company's first project was to provide an outsourced information technology team to a financial services company. [ citation needed ] From there, the company was able to begin its own software company, Scenechronize, after purchasing a business plan from Rhys Ryan, who also joined the company. Using a Web-based user interface , Scenechronize organizes different production aspects — the script, locations, casting, breakdown elements, and schedule. Tools have been specifically created for assistant directors, line producers, above-the-line and below-the-line crews. Each team member has access to his or her own department, while the unit production manager or line producer maintains an overall view, with the option to share that information with other crew members on an as-needed basis. [ citation needed ] In 2008, a preview release of Scenechronize was demonstrated to art directors and unit production managers. The beta release of Scenechronize was announced at the Sundance Film Festival in 2009, with the initial public release announced a year later, also at Sundance. [ 1 ] The company's engineering offices are in San Francisco , while the sales and support offices were in Burbank, California . In November 2012, Ease Entertainment, a payroll and production accounting/financial tracking software firm, acquired the assets of Scenechronize. All existing employees of Scenechronize were retained with operations in their Burbank offices being relocated to Ease's headquarters in Beverly Hills . [ 2 ] In August 2015, Entertainment Partners (EP), a payroll and production accounting/financial tracking software firm, acquired the assets of Ease Entertainment including Scenechronize. [ 3 ] All existing employees of Scenechronize were retained with operations in the Beverly Hills office relocated to EP's offices in Burbank, California . [ 4 ] In April 2019, Entertainment Partners was acquired by private equity firm Texas Pacific Group (TPG). [ 5 ]
https://en.wikipedia.org/wiki/Scenechronize
The Scenedesmus obliquus mitochondrial code (translation table 22) is a genetic code found in the mitochondria of Scenedesmus obliquus , a species of green algae . [ 1 ] Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U). Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V) This article incorporates text from the United States National Library of Medicine , which is in the public domain . [ 2 ] This genetics article is a stub . You can help Wikipedia by expanding it .
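For readers who want to work with this code programmatically, NCBI translation table 22 is available in common bioinformatics toolkits. The snippet below uses Biopython (an assumption; the library is not mentioned in the article) to inspect table 22 and to translate a short, invented sequence with it and with the standard code for comparison.

```python
from Bio.Seq import Seq
from Bio.Data import CodonTable

# NCBI translation table 22 is the Scenedesmus obliquus mitochondrial code.
table22 = CodonTable.unambiguous_dna_by_id[22]
print(table22.stop_codons)     # codons read as stops under this code
print(table22.start_codons)

# Translating a hypothetical coding sequence (invented for illustration) with
# table 22 versus the standard code (table 1) can give different products,
# because some codons are reassigned in this mitochondrial code.
cds = Seq("ATGGCTTAGTTA")
print(cds.translate(table=22))
print(cds.translate(table=1))
```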
https://en.wikipedia.org/wiki/Scenedesmus_obliquus_mitochondrial_code
Scenic design , also known as stage design or set design , is the creation of scenery for theatrical productions including plays and musicals . The term can also be applied to film and television [ 1 ] productions, where it may be referred to as production design . [ 2 ] Scenic designers create sets and scenery to support the overall artistic goals of the production. Scenic design is an aspect of scenography , which includes theatrical set design as well as light and sound. Modern scenic designers are increasingly taking on the role of co-creators in the artistic process, shaping not only the physical space of a production but also influencing its blocking, pacing, and tone. As Richard Foreman famously stated, scenic design is a way to "create the world through which you perceive things happening." [ 3 ] These designers work closely with the director, playwright, and other creative members of the team to develop a visual concept that complements the narrative and emotional tone of the production. Notable scenic designers who have embraced this collaborative role include Robin Wagner , Eugene Lee, and Jim Clayburgh The origins of scenic design may be found in the outdoor amphitheaters of ancient Greece, when acts were staged using basic props and scenery. Because of improvements in stage equipment and drawing perspectives throughout the Renaissance, more complex and realistic sets could be created for scenic design. Scenic design evolved in conjunction with technological and theatrical improvements over the 19th and 20th centuries. [ 3 ] In the early 20th century, American scenic design underwent a dramatic transformation with the introduction of the New Stagecraft. [ 3 ] [ 4 ] Drawing inspiration from European pioneers like Adolphe Appia and Edward Gordon Craig, American designers began moving away from the overly detailed naturalism of the 19th century. [ 3 ] Instead, they embraced simplified realism, abstraction, mood-driven environments, and symbolic imagery. Leaders of this movement, including Robert Edmond Jones, Lee Simonson, and Norman Bel Geddes, laid the foundation for a more interpretive and artistic approach to stage design in the United States. Following the New Stagecraft, designers like Jo Mielziner and Boris Aronson helped define a style known as poetic realism. [ 3 ] Characterized by soft lighting, romantic imagery, scrims, and fragmented sets, this style prioritized the emotional tone of a production over strict realism. These designers often collaborated closely with playwrights and directors, shaping the mood and meaning of American theater classics like the early works of Arthur Miller and Tennessee Williams. A key element of modern trends is the integration of spectacle . [ 3 ] This movement towards larger-than-life visuals, mechanized scenery, and intricate special effects has reshaped both Broadway productions and regional theater. Designers like David Mitchell, known for his work on kinetic sets, exemplify the push towards spectacle that mirrors the influence of cinema on stage design. This trend emphasizes the audience's sensory experience, focusing on visual impact and technical prowess rather than traditional storytelling techniques alone. At the same time, many designers are exploring minimalism and abstraction, moving away from overly realistic representations to create symbolic and suggestive environments that focus on mood rather than realism. 
The evolving role of the designer as a collaborator with directors and playwrights has also reinforced these trends, as designers today have a more equal voice in shaping the vision and narrative of a production. Scenic design involves several key elements: A scenic designer works with the theatre director and other members of the creative team to establish a visual concept for the production and to design the stage environment. They are responsible for developing a complete set of design drawings that include: In planning, scenic designers often make multiple scale models and renderings . Models are often made before final drawings are completed for construction. [ 6 ] These precise drawings help the scenic designer effectively communicate with other production staff, especially the technical director , production manager , charge scenic artist , and prop master . In Europe and Australia , [ 7 ] [ 8 ] many scenic designers are also responsible for costume design , lighting design and sound design . They are commonly referred to as theatre designers, scenographers , or production designers. Scenic design often involves skills such as carpentry , architecture , textual analysis , and budgeting . [ 1 ] In addition, successful scenic designers must have a strong understanding of theatrical collaboration, including the ability to communicate ideas clearly, engage with the director’s vision, and address technical challenges in the design. [ 9 ] Many modern scenic designers use 3D CAD models to produce design drawings that used to be done by hand. [ 10 ] CAD tools have revolutionized the way designers create technical drawings, allowing for precise, scalable plans that are easier to adjust and communicate to the entire production team. [ 4 ] Some of the most influential scenic designers include: Robin Wagner : Known for his work on Broadway musicals like A Chorus Line and The Producers , Wagner's designs often blur the boundaries between traditional and modern aesthetics. His sets are celebrated for their dramatic flair and innovative use of space, enhancing both the storytelling and the audience’s emotional engagement. [ 5 ] Eugene Lee : A key figure in contemporary scenic design, Lee's work on Sweeney Todd and The Glass Menagerie showcases his ability to create immersive environments that serve as a vital part of the narrative. His work often integrates lighting design with set elements to create an emotional connection with the audience. [ 5 ] Jim Clayburgh : Clayburgh's sets for productions like The Red Shoes and Pippin have demonstrated his collaborative process with directors and designers, focusing on creating highly theatrical and dynamic spaces that support the narrative’s emotional core. [ 5 ] [ 3 ] Bob Crowley : Recognized for his work on the Broadway musical The Lion King , Crowley’s designs are iconic for their ability to integrate traditional African aesthetics with a modern theatrical approach. His work has influenced the integration of puppetry and stagecraft , making the set an active part of the storytelling process. [ 5 ] Scenic design varies significantly across different cultures, reflecting diverse traditions, artistic sensibilities, and historical contexts. These differences are particularly evident when comparing European , American , and Australian scenic design practices, as well as in non-Western theater traditions. 
[ 4 ] Designers in countries like Germany and France are typically referred to as scenographers , a term that emphasizes their role in integrating set design , lighting , and costume design into a cohesive artistic vision. This approach to design is especially well known in European operas. [ 5 ] American scenic design traditionally focuses more on set construction and the physical environment of a production. Designers are often responsible for creating the illusion of realism , particularly in Broadway musicals and dramatic plays. [ 3 ] [ 4 ] In Australia , scenic designers frequently take on multi-disciplinary roles. Many Australian designers, especially in regional theater , are involved in the design of both the sets and costumes , and they often collaborate closely with lighting and sound designers from the early stages of production. [ 9 ] In non-Western theater traditions , such as Chinese , Indian , and Japanese theater, often employ vastly different scenic approaches, relying heavily on symbolic elements, minimalistic sets, and dynamic stage movements . [ 4 ] For example, Kabuki theater in Japan uses elaborate costumes and stylized, symbolic sets to convey meaning, with a heavy focus on color symbolism and abstract designs rather than realistic representations. [ 4 ] In Chinese opera , the use of large, symbolic backdrops and the minimalistic set serves to enhance the performance of actors and emphasize the gestural language and music . [ 4 ] Some notable scenic designers include: Adolphe Appia , Boris Aronson , Alexandre Benois , Alison Chitty , Antony McDonald , Barry Kay , Caspar Neher , Cyro Del Nero , Aleksandra Ekster , David Gallo , Edward Gordon Craig , Es Devlin , Ezio Frigerio , Christopher Gibbs , Franco Zeffirelli , George Tsypin , Howard Bay , Inigo Jones , Jean-Pierre Ponnelle , Jo Mielziner , John Lee Beatty , Josef Svoboda , Ken Adam , Léon Bakst , Luciano Damiani , Maria Björnson , Ming Cho Lee , Philip James de Loutherbourg , Natalia Goncharova , Nathan Altman , Nicholas Georgiadis , Oliver Smith , Ralph Koltai , Emanuele Luzzati , Neil Patel , Robert Wilson , Russell Patterson , Brian Sidney Bembridge , Santo Loquasto , Sean Kenny , Todd Rosenthal , Robin Wagner , Tony Walton , Louis Daguerre , Ralph Funicello , and Roger Kirk .
https://en.wikipedia.org/wiki/Scenic_design
Sceptrum Brandenburgicum (or Sceptrum Brandenburgium – Latin for scepter of Brandenburg ) was a constellation created in 1688 by Gottfried Kirch , astronomer of the Prussian Royal Society of Sciences. It represented the scepter used by the royal family of the Brandenburgs . It lay to the west of the constellation Lepus . The constellation was quickly forgotten and is no longer in use. Its name was, however, partially inherited by one of its brightest stars, Sceptrum , which today is designated 53 Eridani ; that name remains in use.
https://en.wikipedia.org/wiki/Sceptrum_Brandenburgicum
Sceptrum et Manus Iustitiae ( Latin for scepter and hand of justice ) was a constellation created by Augustin Royer in 1679 to honor king Louis XIV of France . It was formed from stars of what are today the constellation Lacerta and western Andromeda . Due to its awkward name, the constellation was modified and renamed a couple of times; for example, some old star maps show Sceptrum Imperiale , Stellio and Scettro , and Johannes Hevelius 's star map divides the area between the new Lacerta and a chain end fettering Andromeda. The connection with the later constellation Frederici Honores , which occupied the chain end of Andromeda, is unclear, except that both represent a regal symbol attributed to varying regents. [ 1 ]
https://en.wikipedia.org/wiki/Sceptrum_et_Manus_Iustitiae
ScerTF is a comprehensive database of position weight matrices for the transcription factors of Saccharomyces . [ 1 ] This Biological database -related article is a stub . You can help Wikipedia by expanding it .
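As a generic illustration of the kind of data such a database stores (not the ScerTF API or its file formats), the sketch below scans a DNA sequence with a small position weight matrix expressed as per-position log-odds scores; the matrix values and the sequence are invented for the example.

```python
# Generic position-weight-matrix scanning, for illustration only; the matrix
# values below are invented and are not taken from ScerTF.
pwm = [  # one dict of log-odds scores per motif position
    {"A": 1.2, "C": -0.8, "G": -0.5, "T": -1.0},
    {"A": -0.9, "C": 1.5, "G": -1.1, "T": -0.7},
    {"A": -0.6, "C": -0.4, "G": 1.3, "T": -1.2},
]

def score_window(window, pwm):
    """Sum the log-odds score of each base at its position in the motif."""
    return sum(col[base] for base, col in zip(window, pwm))

def best_site(sequence, pwm):
    """Slide the PWM along the sequence and return (score, offset) of the best window."""
    w = len(pwm)
    scored = [(score_window(sequence[i:i + w], pwm), i)
              for i in range(len(sequence) - w + 1)]
    return max(scored)

print(best_site("TTACGGACGT", pwm))
```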
https://en.wikipedia.org/wiki/ScerTF
The Schaeffer–Fulton stain is a technique designed to isolate endospores by staining any present endospores green, and any other bacterial bodies red. [ 1 ] The primary stain is malachite green , and the counterstain is safranin , which dyes any other bacterial bodies red. Endospores cannot be stained by normal staining procedures because their walls are practically impermeable to all chemicals. The Schaeffer–Fulton endospore stain uses heat to drive the primary stain (malachite green) into the endospore. After cooling, the slide is decolorized with water and counterstained with safranin. Using an aseptic technique, bacteria are placed on a slide and heat fixed . The slide is then suspended over a water bath with some sort of porous paper, such as a paper towel, over it, so that the slide is steamed . Malachite green, which can penetrate the tough walls of the endospores, is applied to the slide, staining them green. After five minutes, the slide is removed from the steam, and the paper towel is removed. After cooling, the slide is rinsed with water for thirty seconds. The slide is then stained with diluted safranin for two minutes, which stains most other microorganic bodies red or pink. The slide is then rinsed again, and blotted dry with bibulous paper . [ 2 ] After drying, the slide can then be viewed under a light microscope . The procedure was designed by Alice B. Schaeffer and MacDonald Fulton, two microbiologists at Middlebury College , during the 1930s. The procedure also goes by the name Wirtz–Conklin method , referring to two bacteriologists of the early 1900s. [ 2 ]
https://en.wikipedia.org/wiki/Schaeffer–Fulton_stain
In mathematics , the Schauenburg–Ng theorem is a theorem about the modular group representations of modular tensor categories proved by Siu-Hung Ng and Peter Schauenburg in 2010. It asserts that the kernels of the modular representations of all modular tensor categories are congruence subgroups of SL 2 ( Z ) {\displaystyle {\text{SL}}_{2}(\mathbb {Z} )} . [ 1 ] Since congruence subgroups all have finite index in SL 2 ( Z ) {\displaystyle {\text{SL}}_{2}(\mathbb {Z} )} , this implies in particular that the modular representations of all modular tensor categories have finite image. On physical grounds coming from conformal field theory , it had been conjectured since 1987 by Greg Moore and others that the kernels of the modular group representations should be congruence subgroups. [ 2 ] [ 3 ] [ 4 ] The proof by Schauenburg and Ng came after a series of partial results by other mathematicians, which proved the theorem in special cases. [ 5 ] [ 6 ] [ 7 ] To prove their result, Schauenburg and Ng introduced the notion of 'generalized Frobenius–Schur indicators', which have since found separate applications to mathematical physics . [ 8 ]
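For context, the relevant definitions are the standard ones (recalled here for the reader; they are not specific to this theorem or its proof): the principal congruence subgroup of level N is Γ ( N ) = { A ∈ SL 2 ( Z ) : A ≡ I ( mod N ) } {\displaystyle \Gamma (N)=\{A\in {\text{SL}}_{2}(\mathbb {Z} )\,:\,A\equiv I{\pmod {N}}\}} , and a subgroup of SL 2 ( Z ) {\displaystyle {\text{SL}}_{2}(\mathbb {Z} )} is called a congruence subgroup if it contains Γ ( N ) {\displaystyle \Gamma (N)} for some N ≥ 1. Since Γ ( N ) {\displaystyle \Gamma (N)} is the kernel of the reduction map SL 2 ( Z ) → SL 2 ( Z / N Z ) {\displaystyle {\text{SL}}_{2}(\mathbb {Z} )\to {\text{SL}}_{2}(\mathbb {Z} /N\mathbb {Z} )} , it has finite index, and therefore so does every congruence subgroup; this is why a modular representation whose kernel is a congruence subgroup automatically has finite image.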
https://en.wikipedia.org/wiki/Schauenburg–Ng_theorem
A scheduled-task pattern is a type of software design pattern used with real-time systems. [ 1 ] It is not to be confused with the " scheduler pattern ". While the scheduler pattern delays access to a resource (be it a function, variable, or otherwise) only as long as absolutely needed, the scheduled-task pattern delays execution until a determined time. This is important in real-time systems for a variety of reasons. This computing article is a stub . You can help Wikipedia by expanding it .
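As a minimal illustration of the idea (delaying execution until a predetermined time, rather than merely serializing access to a resource as the scheduler pattern does), the sketch below uses Python's standard sched module; the task, the two-second delay, and the time source are arbitrary choices for the example, not a canonical rendering of the pattern.

```python
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def task(name):
    print(f"{name} executed at {time.strftime('%H:%M:%S')}")

# Run the task at an absolute wall-clock time (here: 2 seconds from now),
# not immediately when it is submitted.
run_at = time.time() + 2
scheduler.enterabs(run_at, priority=1, action=task, argument=("scheduled task",))
scheduler.run()   # blocks until all scheduled events have fired
```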
https://en.wikipedia.org/wiki/Scheduled-task_pattern
The schedules of substances annexed to the Chemical Weapons Convention (CWC) list toxic substances and their precursors which can be used for the production of chemical weapons , the use of which is permitted by State Parties to the Chemical Weapons Convention only to a limited extent under the supervision of the Organisation for the Prohibition of Chemical Weapons (OPCW). State Parties are required to provide the OPCW with an annual summary of the production and use in their territory of substances listed in these Schedules, in accordance with the Convention. The Chemical Weapons Convention (CWC) provides for a worldwide prohibition of the development, production, proliferation and use of chemical weapons, prohibition of their stockpiling, and destruction of existing chemical weapons that a Member State has in its possession or has abandoned elsewhere. The Annex on Chemicals to the CWC lists toxic substances that can be used to make chemical weapons and are under supervision by the OPCW. [ 1 ] The Annex on Chemicals contains three lists of substances whose use is prohibited or restricted under the Convention. Part A of the Annex contains guidelines for inclusion in the lists; Part B contains the lists themselves. [ 2 ] The lists are compiled according to decreasing probability that the substance is intended for military use. The Lists themselves are divided into part A with toxic substances that can be used directly as chemical weapons and part B containing their precursors (chemicals with which those substances can be made). The Lists are subject to the following criteria, among others: [ 2 ] [ 3 ] The State Parties to the Convention are required to provide the monitoring Organisation for the Prohibition of Chemical Weapons (OPCW) annually with an overview of the production of substances covered by the Convention. Any (intended) production of Schedule 1 chemicals or transfers thereof to other Contracting States must be declared. [ 5 ] Schedules 2 and 3 chemicals produced on their territory must also be reported in the annual statement insofar as the quantity thereof has exceeded a certain threshold value. [ 8 ] With a few exceptions, only annually produced quantities above 100 kg of List 2 substances from Part A or 1000 kg of a precursor from Part B must be reported to the OPCW by the Contracting States. A limit of 1 kg applies to the substance BZ in List 2. For substances in List 3, a threshold value of 30 tonnes applies. [ 3 ]
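As a small worked illustration of the declaration thresholds quoted above, the following sketch checks hypothetical annual production figures against them; the threshold values are those stated in this article, while the facility figures and the simplified rule structure are invented for the example.

```python
# Illustrative check of annual production against the declaration thresholds
# quoted above (Schedule 2 Part A: 100 kg, BZ: 1 kg, Schedule 2 Part B:
# 1000 kg, Schedule 3: 30 tonnes). The production figures are invented.
THRESHOLDS_KG = {
    "schedule_2A": 100,
    "schedule_2A_BZ": 1,
    "schedule_2B": 1_000,
    "schedule_3": 30_000,   # 30 tonnes
}

production_kg = {            # hypothetical annual production at one site
    "schedule_2A": 42,
    "schedule_2A_BZ": 1.5,
    "schedule_3": 12_000,
}

for category, produced in production_kg.items():
    must_declare = produced > THRESHOLDS_KG[category]
    print(f"{category}: {produced} kg -> declare: {must_declare}")
```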
https://en.wikipedia.org/wiki/Schedules_of_substances_annexed_to_the_Chemical_Weapons_Convention
In metallurgy , the Scheil-Gulliver equation (or Scheil equation ) describes solute redistribution during solidification of an alloy . Four key assumptions in Scheil analysis enable determination of phases present in a cast part. These assumptions are: no diffusion takes place in the solid phases once they are formed; the liquid is perfectly mixed (diffusion in the liquid is infinitely fast); local equilibrium exists at the solid–liquid interface, as given by the phase diagram; and the solidus and liquidus are straight segments. The fourth condition (straight solidus/liquidus segments) may be relaxed when numerical techniques are used, such as those used in CALPHAD software packages, though these calculations rely on calculated equilibrium phase diagrams. Calculated diagrams may include odd artifacts (e.g. retrograde solubility) that influence Scheil calculations. The hatched areas in the figure represent the amount of solute in the solid and liquid. Considering that the total amount of solute in the system must be conserved, the areas are set equal as follows: ( C L − C S ) d f S = ( 1 − f S ) d C L {\displaystyle (C_{L}-C_{S})\,df_{S}=(1-f_{S})\,dC_{L}} . Since the partition coefficient (related to solute distribution) is k = C S / C L {\displaystyle k=C_{S}/C_{L}} (determined from the phase diagram) and mass must be conserved, the mass balance may be rewritten as C L ( 1 − k ) d f S = ( 1 − f S ) d C L {\displaystyle C_{L}(1-k)\,df_{S}=(1-f_{S})\,dC_{L}} . Using the boundary condition C L = C 0 {\displaystyle C_{L}=C_{0}} at f S = 0 {\displaystyle f_{S}=0} , the following integration may be performed: ∫ C 0 C L d C L C L = ( 1 − k ) ∫ 0 f S d f S 1 − f S {\displaystyle \int _{C_{0}}^{C_{L}}{\frac {dC_{L}}{C_{L}}}=(1-k)\int _{0}^{f_{S}}{\frac {df_{S}}{1-f_{S}}}} . Integrating results in the Scheil-Gulliver equation for composition of the liquid during solidification: C L = C 0 ( 1 − f S ) k − 1 {\displaystyle C_{L}=C_{0}(1-f_{S})^{k-1}} or for the composition of the solid: C S = k C 0 ( 1 − f S ) k − 1 {\displaystyle C_{S}=kC_{0}(1-f_{S})^{k-1}} . In certain analytical solutions of the Scheil equation, C L = C 0 ( 1 − f S ) k − 1 {\displaystyle \ C_{L}=C_{0}(1-f_{S})^{k-1}} , transcendental equations arise due to the implicit coupling between the liquid-phase solute concentration ( C L {\displaystyle C_{L}} ) and the solid fraction ( f S {\displaystyle f_{S}} ). For example, rearranging terms under specific boundary conditions leads to equations of the form: ( C S C L ) ln ⁡ ( 1 − f S ) e ( C S C L ln ⁡ ( 1 − f S ) ) = ( C S C 0 ) ( 1 − f S ) ln ⁡ ( 1 − f S ) {\displaystyle \left({\frac {C_{S}}{C_{L}}}\right)\ln(1-f_{S})\,e^{\left({\frac {C_{S}}{C_{L}}}\ln(1-f_{S})\right)}=\left({\frac {C_{S}}{C_{0}}}\right)(1-f_{S})\ln(1-f_{S})} Such equations can be solved explicitly using the Lambert W function , W ( x ) {\displaystyle W(x)} , which is defined as the inverse function of x = W ( x ) e W ( x ) {\displaystyle x=W(x)e^{W(x)}} . By rearranging terms into this canonical form, the solution for C L {\displaystyle C_{L}} becomes: C L = C S ln ⁡ ( 1 − f S ) W ( z ) {\displaystyle C_{L}={\frac {C_{S}\ln(1-f_{S})}{W(z)}}} or C L = C 0 ( 1 − f S ) e W ( z ) {\displaystyle C_{L}={\frac {C_{0}}{(1-f_{S})}}e^{W(z)}} where z = ( C S C 0 ) ( 1 − f S ) ln ⁡ ( 1 − f S ) {\displaystyle z=\left({\frac {C_{S}}{C_{0}}}\right)(1-f_{S})\ln(1-f_{S})} . This application of the Lambert W function is particularly valuable in modeling impurity segregation during crystal growth, which optimizes melt utilization and enhances crystal growth efficiency. For further details, see https://doi.org/10.1016/j.jcrysgro.2014.03.028 and https://doi.org/10.1016/j.jcrysgro.2024.127605 . Nowadays, several Calphad software packages are available - in a framework of computational thermodynamics - to simulate solidification in systems with more than two components; these have recently been described as Calphad tools for the metallurgy of solidification. In recent years, Calphad-based methodologies have reached maturity in several important fields of metallurgy, and especially in solidification-related processes such as semi-solid casting, 3D printing, and welding, to name a few.
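A short numerical sketch of the closed-form results above: it evaluates the Scheil liquid and solid compositions over the solid fraction and cross-checks the Lambert W form at one point using SciPy. The nominal composition and partition coefficient are arbitrary illustrative values, not data for any particular alloy.

```python
import numpy as np
from scipy.special import lambertw

C0, k = 4.5, 0.17                      # illustrative nominal composition (wt%) and partition coefficient
f_s = np.linspace(0.0, 0.99, 100)

C_L = C0 * (1.0 - f_s) ** (k - 1.0)    # Scheil liquid composition
C_S = k * C_L                          # solid composition at the interface

# Cross-check the Lambert W form at a single point. Here k*ln(1 - f) lies in
# (-1, 0), so the principal branch of W is the relevant one.
f = 0.3
C_S_f = k * C0 * (1.0 - f) ** (k - 1.0)
z = (C_S_f / C0) * (1.0 - f) * np.log(1.0 - f)
C_L_from_W = C_S_f * np.log(1.0 - f) / lambertw(z).real
print(C_L_from_W, C0 * (1.0 - f) ** (k - 1.0))   # the two values should agree
```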
While there are important studies devoted to the progress of Calphad methodology, there is still room for a systematization of the field, which proceeds from the ability of most Calphad-based software to simulate solidification curves and includes both fundamental and applied studies on solidification, so that it can be appreciated by a wider community than it is today. The three applied fields mentioned above could be widened by specific successful examples of simple modelling, with the aim of widening the application of simple and effective tools related to Calphad and metallurgy. See also "Calphad Tools for the Metallurgy of Solidification", an ongoing special issue of an open-access journal: https://www.mdpi.com/journal/metals/special_issues/Calphad_Solidification Given a specific chemical composition, the Scheil curve can be calculated using software for computational thermodynamics - open or commercial - provided a thermodynamic database is available. A point in favour of some commercial packages is that installation is straightforward on Windows-based systems, for instance for use with students or for self-training. Open, chiefly binary, databases (extension *.tdb) can be obtained, after registering, from the Computational Phase Diagram Database (CPDDB) of the National Institute for Materials Science of Japan (NIMS), https://cpddb.nims.go.jp/index_en.html . They are available free of charge and the collection is rather complete: currently 507 binary systems are available in the thermodynamic database (tdb) format. Some wider and more specific alloy systems, partly open and in tdb-compatible format, are available (with minor corrections for Pandat use) at Matcalc, https://www.matcalc.at/index.php/databases/open-databases . A key concept for applications is the (numerical) derivative of the temperature with respect to the solid fraction fS. A numerical example using a copper–zinc alloy with 30% Zn by weight is proposed here, with the opposite sign used so that the temperature and its derivative can be plotted in the same graph. Q = lim f S → 0 ∂ T ∂ f S {\displaystyle Q=\lim _{f_{S}\to 0}{\frac {\partial T}{\partial f_{S}}}} This Calphad-calculated value of the numerical derivative, Q, has some interesting applications in the field of metal solidification. In fact, Q reflects the phase diagram of the alloy system, and its reciprocal has been found to have a relationship with the grain size d on solidification, which in some cases has empirically been found to be linear: d = a + b / Q {\displaystyle d=a+{\frac {b}{Q}}} where a and b are constants, as illustrated by some examples from the literature for Mg and Al alloys. Before Calphad use, Q values were calculated from the conventional relationship Q = m c 0 ( k − 1 ) {\displaystyle Q=mc_{0}(k-1)} , where m is the slope of the liquidus, c0 is the solute concentration, and k is the equilibrium distribution coefficient. More recently, other possible correlations of Q with the grain size d have been found, for instance: d = B / Q 1 / 3 {\displaystyle d={\frac {B}{Q^{1/3}}}} where B is a constant independent of alloy composition. In recent publications,
Prof. Sindo Kou has proposed an approach to evaluate susceptibility to solidification cracking. It is based on the quantity lim f S → 1 ∂ T ∂ f S 1 / 2 {\displaystyle \lim _{f_{S}\to 1}{\frac {\partial T}{\partial f_{S}^{1/2}}}} , which has the dimensions of a temperature and is proposed as an index of cracking susceptibility. Again, Scheil-based solidification curves can be exploited to link this index to the slope of the solidification curve via the chain rule: ∂ T ∂ f S 1 / 2 = ∂ T ∂ f S ⋅ ∂ f S ∂ f S 1 / 2 = 2 f S 1 / 2 ∂ T ∂ f S {\displaystyle {\frac {\partial T}{\partial f_{S}^{1/2}}}={\frac {\partial T}{\partial f_{S}}}\cdot {\frac {\partial f_{S}}{\partial f_{S}^{1/2}}}=2\,f_{S}^{1/2}\,{\frac {\partial T}{\partial f_{S}}}} Last but not least, Prof. E. J. Zoqui has summarized in his work the criteria proposed by several researchers for semi-solid processing, which involve the stability of the solid fraction fS with temperature: to process semi-solid alloys, the sensitivity of the solid fraction to variations in temperature should be minimal, since in one direction the material could evolve into a solid that is difficult to deform and in the other into a liquid that may be difficult to shape without a proper mould. It turns out that this criterion can again be expressed through the slope of the solidification curve: ∂fS/∂T should be less than a certain threshold, commonly accepted in the scientific and technical literature to be 0.03 K−1. Mathematically this may be expressed by the inequality ∂ f S ∂ T < 0.03 K − 1 {\displaystyle {\frac {\partial f_{S}}{\partial T}}<0.03\,\mathrm {K} ^{-1}} , which can be taken as a rough criterion for both of the main semi-solid casting routes: rheocasting (0.3 < fS < 0.4) and thixoforming (0.6 < fS < 0.7). In terms of the (numerical) approach above, one should consider the reciprocal value, i.e. ∂T/∂fS > 33 K.
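To make these quantities concrete, the sketch below builds a Scheil solidification curve T(fS) for a hypothetical binary alloy with a straight liquidus, evaluates the limiting slope (whose magnitude reproduces the conventional Q = m c0 (k − 1)), and finds where the semi-solid criterion ∂fS/∂T < 0.03 K⁻¹ holds. All parameter values are invented for illustration and do not describe a real alloy.

```python
import numpy as np

# Hypothetical binary alloy with a straight liquidus T = Tm + m*C (m < 0),
# solidifying according to the Scheil equation. All values are illustrative.
Tm, m = 933.0, -3.4        # pure-solvent melting point (K) and liquidus slope (K per wt%)
C0, k = 4.5, 0.17          # nominal composition (wt%) and partition coefficient

f_s = np.linspace(0.0, 0.98, 500)
T = Tm + m * C0 * (1.0 - f_s) ** (k - 1.0)     # Scheil solidification curve T(f_s)

dT_dfs = np.gradient(T, f_s)                   # numerical derivative along the curve
print("-dT/df_s at f_s -> 0 (opposite sign):", -dT_dfs[0])
print("conventional  m*C0*(k-1)           :", m * C0 * (k - 1.0))  # same magnitude

# Semi-solid processing window: |d(f_s)/dT| below about 0.03 1/K,
# i.e. |dT/df_s| above about 33 K.
dfs_dT = 1.0 / dT_dfs
ok = np.abs(dfs_dT) < 0.03
print("fraction-solid range satisfying the criterion:",
      f_s[ok].min(), "to", f_s[ok].max())
```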
https://en.wikipedia.org/wiki/Scheil_equation
In mathematics , Scheinerman's conjecture , now a theorem, states that every planar graph is the intersection graph of a set of line segments in the plane. This conjecture was formulated by E. R. Scheinerman in his Ph.D. thesis (1984) , following earlier results that every planar graph could be represented as the intersection graph of a set of simple curves in the plane ( Sinden 1966 ) ( Ehrlich, Even & Tarjan 1976 ). It was proven by Jeremie Chalopin and Daniel Gonçalves ( 2009 ). For instance, the graph G shown in figure 1 may be represented as the intersection graph of the set of segments shown in figure 2. Here, vertices of G are represented by straight line segments and edges of G are represented by intersection points. Scheinerman also conjectured that segments with only three directions would be sufficient to represent 3- colorable graphs, and West (1991) conjectured that analogously every planar graph could be represented using four directions. If a graph is represented with segments having only k directions and no two segments belong to the same line, then the graph can be colored using k colors, one color for each direction. Therefore, if every planar graph can be represented in this way with only four directions, then the four color theorem follows. Gonçalves (2020) proved that some planar graphs cannot be represented in that way. Hartman, Newman & Ziv (1991) and de Fraysseix, Ossona de Mendez & Pach (1991) proved that every bipartite planar graph can be represented as an intersection graph of horizontal and vertical line segments; for this result see also Czyzowicz, Kranakis & Urrutia (1998) . De Castro et al. (2002) proved that every triangle-free planar graph can be represented as an intersection graph of line segments having only three directions; this result implies Grötzsch's theorem ( Grötzsch 1959 ) that triangle-free planar graphs can be colored with three colors. de Fraysseix & Ossona de Mendez (2005) proved that if a planar graph G can be 4-colored in such a way that no separating cycle uses all four colors, then G has a representation as an intersection graph of segments. Chalopin, Gonçalves & Ochem (2007) proved that planar graphs are in 1-STRING, the class of intersection graphs of simple curves in the plane that intersect each other in at most one crossing point per pair. This class is intermediate between the intersection graphs of segments appearing in Scheinerman's conjecture and the intersection graphs of unrestricted simple curves from the result of Ehrlich et al. It can also be viewed as a generalization of the circle packing theorem , which shows the same result when curves are allowed to intersect in a tangent. The proof of the conjecture by Chalopin & Gonçalves (2009) was based on an improvement of this result.
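The coloring argument in the previous paragraph is easy to check computationally. The following Python sketch (illustrative only, with a hand-made set of segments) builds the intersection graph of a few segments drawn in three directions, no two on the same line, and verifies that coloring each segment by its direction yields a proper coloring.

```python
# Illustrative sketch (not part of the article): for segments no two of which lie on
# the same line, color each segment by its direction and check that this is a proper
# coloring of the intersection graph, as argued above.
from itertools import combinations

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, q1, q2):
    """True if closed segments p1p2 and q1q2 share at least one point."""
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    if ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0)):
        return True
    def on(a, b, c):  # c collinear with a-b and inside its bounding box
        return (cross(a, b, c) == 0 and min(a[0], b[0]) <= c[0] <= max(a[0], b[0])
                and min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))
    return on(q1, q2, p1) or on(q1, q2, p2) or on(p1, p2, q1) or on(p1, p2, q2)

# Hypothetical example: segments in 3 directions (0 = horizontal, 1 = vertical, 2 = diagonal).
segments = [((0, 0), (4, 0), 0), ((1, -1), (1, 3), 1), ((3, -1), (3, 2), 1),
            ((0, 2), (4, 2), 0), ((-1, -1), (2, 2), 2)]

edges = [(i, j) for i, j in combinations(range(len(segments)), 2)
         if segments_intersect(segments[i][0], segments[i][1],
                               segments[j][0], segments[j][1])]

# Intersecting segments cannot be parallel (they would have to share a line),
# so their direction labels always differ: the coloring is proper.
assert all(segments[i][2] != segments[j][2] for i, j in edges)
print("intersection graph edges:", edges, "- direction-coloring is proper")
```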
https://en.wikipedia.org/wiki/Scheinerman's_conjecture
Schellman loops (also called Schellman motifs or paperclips ) [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] are commonly occurring structural features of proteins and polypeptides . [ 11 ] Each has six amino acid residues (labelled residues i to i +5) with two specific inter-mainchain hydrogen bonds (as in lower figure, i) and a characteristic main chain dihedral angle conformation. The CO group of residue i is hydrogen-bonded to the NH of residue i +5 (colored orange in upper figure), and the CO group of residue i +1 is hydrogen-bonded to the NH of residue i +4 ( beta turn , colored purple). Residues i +1, i +2, and i +3 have negative φ (phi) angle values and the phi value of residue i +4 is positive. Schellman loops incorporate a three amino acid residue RL nest (protein structural motif) , [ 12 ] [ 13 ] in which three mainchain NH groups (from Schellman loop residues i +3 to i +5) form a concavity for hydrogen bonding to carbonyl oxygens. About 2.5% of amino acids in proteins belong to Schellman loops. Two websites are available for examining small motifs in proteins, Motivated Proteins: [1] ; [ 14 ] or PDBeMotif: [2] . [ 15 ] The majority of Schellman loops (82%) occur at the C-terminus of an alpha-helix such that residues i , i +1, i +2 and i +3 are part of the helix. Over a quarter of helices (28%) have a C-terminal Schellman loop. [ 10 ] Occasional Schellman loops occur with seven instead of six residues. In these, the CO group of residue i is hydrogen-bonded to the NH of residue i +6, and the CO group of residue i +1 is hydrogen-bonded to the NH of residue i +5. Rare “left-handed” six-residue Schellman loops occur; these have the same hydrogen bonds, but residues i +1, i +2, and i +3 have positive φ values while the φ value of residue i +4 is negative; the nest is of the LR, rather than the RL, kind. Amino acid propensities for the residues of the common type of Schellman loop have been described. [ 16 ] Residue i +4 is the one most-highly conserved; it has positive φ values; 70% of amino acids are glycine and none are proline . Consideration of the hydrogen bonding in the nests of Schellman loops bound to mainchain oxygens reveals two main types of arrangement: 1,3-bridged or not. In one (lower figure, ii) the first and third nest NH groups are bridged by an oxygen atom. In the other (lower figure, iv) the first NH group is hydrogen bonded to the CO group of an amino acid four residues behind in the sequence, and none of the nest NH groups are bridged. [ 17 ] It seems that Schellman loops are less homogeneous than might have been expected. The original Schellman criteria [ 1 ] result in the inclusion of features not now regarded as Schellman loops. A newer set of criteria is given in the first paragraph.
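The criteria in the first paragraph can be turned into a simple computational check. The Python sketch below is an illustrative simplification, not the detection method used by the cited websites: it tests a six-residue window for the two characteristic hydrogen bonds and the φ-angle sign pattern of the common right-handed Schellman loop, using made-up residue numbers and angles.

```python
# Illustrative sketch (not from the cited sources): a simplified check for the common
# right-handed Schellman loop, using only the criteria listed above - the
# CO(i)...HN(i+5) and CO(i+1)...HN(i+4) hydrogen bonds and the phi-angle signs.
def is_schellman_loop(phi, hbonds, i):
    """phi: dict residue -> phi angle (degrees); hbonds: set of (CO residue, NH residue)."""
    has_bonds = (i, i + 5) in hbonds and (i + 1, i + 4) in hbonds
    phi_signs = all(phi[j] < 0 for j in (i + 1, i + 2, i + 3)) and phi[i + 4] > 0
    return has_bonds and phi_signs

# Hypothetical helix C-terminus: residues 10-15 with typical phi values.
phi = {10: -60, 11: -65, 12: -60, 13: -90, 14: 60, 15: -120}
hbonds = {(10, 15), (11, 14)}        # (carbonyl residue, amide residue) pairs

print(is_schellman_loop(phi, hbonds, 10))   # True for this made-up example
```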
https://en.wikipedia.org/wiki/Schellman_loop
A schema for horizontal dials is a set of instructions used to construct horizontal sundials using compass and straightedge construction techniques, which were widely used in Europe from the late fifteenth century to the late nineteenth century. The common horizontal sundial is a geometric projection of an equatorial sundial onto a horizontal plane. The special properties of the polar-pointing gnomon (axial gnomon) were first known to the Moorish astronomer Abdul Hassan Ali in the early thirteenth century [ 1 ] and this led the way to the dial plates with which we are familiar - dial plates where the style and hour lines have a common root. Through the centuries artisans have marked up the hour lines of sundials using the methods that were familiar to them; in addition, the topic has fascinated mathematicians and become a subject of study. Graphical projection was once commonly taught, though it has been superseded by trigonometry , logarithms , slide rules and computers, which made arithmetical calculations increasingly trivial. Graphical projection was once the mainstream method for laying out a sundial but has been sidelined and is now only of academic interest. The first known document in English describing a schema for graphical projection was published in Scotland in 1440, leading to a series of distinct schemata for horizontal dials, each with characteristics that suited the target latitude and the construction method of the time. The art of sundial design is to produce a dial that accurately displays local time. Sundial designers have also been fascinated by the mathematics of the dial and possible new ways of displaying the information. Modern dialling started in the tenth century when Arab astronomers made the great discovery that a gnomon parallel to the Earth's axis will produce sundials whose hour lines show equal hours or legal hours on any day of the year: the dial of Ibn al-Shatir in the Umayyad Mosque in Damascus is the oldest dial of this type. [ a ] Dials of this type appeared in Austria and Germany in the 1440s. [ 2 ] A dial plate can be laid out pragmatically, by observing and marking the shadow at regular intervals throughout the day on each day of the year. If the latitude is known, the dial plate can be laid out using geometrical construction techniques which rely on projection geometry , or by calculation using the known formulas and trigonometric tables, usually with logarithms or slide rules, or more recently computers or mobile phones . Linear algebra has provided a useful language to describe the transformations . A sundial schema uses a compass and a straightedge first to derive the essential angles for that latitude, and then to use these to draw the hour lines on the dial plate. In modern terminology this would mean that graphical techniques were used to derive {\displaystyle \sin x} and {\displaystyle m\tan y} and from them {\displaystyle \sin x.\tan y} . [ b ] Such geometric constructions were well known and remained part of the high school (UK grammar school) curriculum until the New Maths revolution in the 1970s. [ 3 ] The schema shown above, used by Dürer in 1525 (from an earlier work of 1440), is still used today. The simpler schemata were more suitable for dials designed for the lower latitudes, where a narrow sheet of paper sufficed for the construction, than for those intended for the higher latitudes. This prompted the quest for other constructions. The first part of the process is common to many methods.
It establishes a point on the north-south line that is sin φ from the meridian line. The significant problem is the width of the paper needed in the higher latitudes. [ 5 ] Giambattista Benedetti , an impoverished nobleman, worked as a mathematician at the court of Savoy. His book describing this method, De gnomonum umbrarumque solarium usu , was published in 1574. It describes a method for displaying the legal hours, that is, equal hours as we use today, at a time when most people still used unequal hours, which divided the hours of daylight into 12 equal parts - parts whose length changed as the year progressed. Benedetti's method divides the quadrant into 15° segments. Two constructions are made: a parallel horizontal line that defines the tan h distances, and a gnomonic polar line GT which represents sin φ. Benedetti included instructions for drawing a point gnomon so unequal hours could be plotted. [ 6 ] Clavius described his method in Fabrica et usus instrumenti ad horologiorum descriptionem (Rome, Italy). The Clavius method looks at a quarter of the dial. It views the horizontal plane and the plane perpendicular to the polar axis as two rectangles hinged around the top edge of both dials. The polar axis will be at φ degrees to the horizontal, and the hour lines will be equispaced (15°) on the polar plane, as on an equatorial dial. Hour points on the polar plane are connected to the matching points on the horizontal plane, and the horizontal hour lines are plotted to the origin. [ 7 ] [ 5 ] The Jesuit Mario Bettini penned a method which was posthumously published in the book Recreationum Mathematicarum Apiaria Novissima in 1660. [ 8 ] William Leybourn published his " Art of Dialling " [ d ] in 1669, and with it a six-stage method. His description relies heavily on the term line of chords , for which a modern diallist substitutes a protractor . The line of chords was a scale found on the sector, which was used in conjunction with a set of dividers or compasses. It was still used by navigators up to the end of the 19th century. [ e ] This method requires a far smaller piece of paper, [ 5 ] a great advantage for higher latitudes. [ 5 ] This method uses the properties of chords to establish the distance {\displaystyle m.\sin \theta } in the top quadrant, and then transfers this distance into the bottom quadrant so that {\displaystyle \sin \phi \sin \theta } is established. Again, this measure is transferred to the chords in the top quadrant. The final lines establish the formula {\displaystyle \tan \kappa ={\frac {\sin \theta \sin \phi }{\cos \theta }}=\tan \theta \sin \phi } This is then transferred by symmetry to all quadrants. It was used in the Encyclopædia Britannica , First Edition 1771 and Sixth Edition 1823. [ 11 ] The Dom Francois Bedos de Celles method (1760) [ 13 ] is otherwise known as the Waugh method (1973). [ 14 ] [ 5 ] Another method first appeared in Peter Nicholson's A popular Course of Pure and Mixed Mathematics in 1825. It was copied by School World in June 1903, then in Kenneth Lynch's Sundial and Spheres , 1971. [ 15 ] It starts by drawing the well-known triangle, and takes the vertices to draw two circles at radius (OB) sin φ and (AB) tan φ. The 15° lines are drawn, intersecting these circles. Lines are taken horizontally and vertically from these circles, and their intersection point (OB sin t, AB cos t) is on the hour line. That is, tan κ = OB sin t / (AB cos t), which resolves to sin φ · tan t.
[ 15 ] [ 16 ] This was an early and convenient method for anyone with access to an astrolabe, as many astrologers and mathematicians of the time had. The method involved copying the projections of the celestial sphere onto a plane surface. A vertical line was drawn, with a line at the angle of the latitude drawn on the bisection of the vertical with the celestial sphere. [ 17 ]
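All of the constructions above ultimately reproduce the same relationship between the hour angle t, the latitude φ and the hour-line angle κ, namely tan κ = sin φ · tan t. As a modern illustration (not one of the historical schemata), the short Python sketch below computes the hour-line angles for a dial plate directly from that formula; the latitude 52° N is an arbitrary example.

```python
# Illustrative sketch: hour-line angles for a horizontal dial, computed directly from
# tan(kappa) = sin(latitude) * tan(hour angle), with 15 degrees of hour angle per hour.
import math

def hour_line_angles(latitude_deg, hours=range(-6, 7)):
    """Return {hours from noon: angle of the hour line from the noon line, in degrees}."""
    sin_lat = math.sin(math.radians(latitude_deg))
    return {h: math.degrees(math.atan(sin_lat * math.tan(math.radians(15 * h))))
            for h in hours}

for h, kappa in hour_line_angles(52.0).items():   # e.g. a dial for latitude 52 N
    print(f"{12 + h:2d} o'clock: {kappa:+7.2f} deg")
```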
https://en.wikipedia.org/wiki/Schema_for_horizontal_dials
The Schenck ene reaction or the Schenk reaction is the reaction of singlet oxygen with alkenes to yield hydroperoxides . The hydroperoxides can be reduced to allylic alcohols or undergo elimination to form unsaturated carbonyl compounds. It is a type II photooxygenation reaction , and was discovered in 1944 by Günther Otto Schenck . [ 1 ] Its results are similar to those of ene reactions , hence its name. [ 2 ] The singlet oxygen reagent can be produced via photochemical activation of triplet oxygen (regular oxygen) in the presence of photosensitizers like rose bengal . Chemical processes like the reaction between hydrogen peroxide and sodium hypochlorite are also viable. Historically, four mechanisms have been proposed: [ 3 ] Experimental and computational studies show that the reaction actually proceeds via a two-step no-intermediate process. One can loosely interpret it as a mix of the perepoxide mechanism and the concerted mechanism. There is no perepoxide intermediate in the classical sense of reaction intermediates , for there exists no energy barrier between it and the hydroperoxide product. [ 4 ] Such a mechanism can account for the selectivity of the Schenck ene reaction. The singlet oxygen is more likely to abstract hydrogen from the side with more C-H bonds due to favorable interactions in the transition state: [ 2 ] Very bulky groups, like the tertiary butyl group, will hinder hydrogen abstraction on that side. The Schenck ene reaction is utilized in the biological and biomimetic synthesis of rhodonoids. Many hydroperoxides derived from fatty acids, steroids, and terpenes are also formed via the Schenck ene reaction, for instance in the generation of cis-3-hexenal from linolenic acid . It must be noted, however, that this enzyme-catalyzed path follows a different mechanism from the usual Schenck ene reaction: radicals are involved, and triplet oxygen is used instead of singlet oxygen.
https://en.wikipedia.org/wiki/Schenck_ene_reaction
In theoretical physics , the Scherk–Schwarz mechanism (named after Joël Scherk and John Henry Schwarz ) for a field φ basically means that φ is a section of a non- trivializable fiber bundle (not necessarily a vector bundle since φ needn't be linear) which is fixed by the model. This is called a twist by physicists. Note that this can never occur in a spacetime which is homeomorphic to R n , which is a contractible space . However, for Kaluza–Klein theories , the Scherk–Schwarz mechanism is a possibility which can't be neglected.
https://en.wikipedia.org/wiki/Scherk–Schwarz_mechanism
Scheutjens–Fleer theory is a lattice-based self-consistent field theory that is the basis for many computational analyses of polymer adsorption .
https://en.wikipedia.org/wiki/Scheutjens–Fleer_theory
The Schick test , developed in 1913, [ 1 ] is a skin test used to determine whether or not a person is susceptible to diphtheria . [ 2 ] It was named after its inventor, Béla Schick (1877–1967), a Hungarian-born American pediatrician. [ citation needed ] The test is a simple procedure. A small amount (0.1 ml) of diluted (1/50 MLD ) diphtheria toxin is injected intradermally into one arm of the person and a heat inactivated toxin on the other as a control. If a person does not have enough antibodies to fight it off, the skin around the injection will become red and swollen, indicating a positive result. This swelling disappears after a few days. If the person has an immunity, then little or no swelling and redness will occur, indicating a negative result. Results can be interpreted as: [ 3 ] The test was created when immunizing agents were scarce and not very safe; however, as newer and safer toxoids became available, susceptibility tests were no longer required.
https://en.wikipedia.org/wiki/Schick_test
In organic chemistry , a Schiff base (named after Hugo Schiff ) is a compound with the general structure R 1 R 2 C=NR 3 ( R 3 = alkyl or aryl , but not hydrogen ). [ 1 ] [ 2 ] They can be considered a sub-class of imines , being either secondary ketimines or secondary aldimines depending on their structure. Anil refers to a common subset of Schiff bases: imines derived from anilines . [ 3 ] The term can be synonymous with azomethine which refers specifically to secondary aldimines (i.e. R−CH=NR' where R' ≠ H). [ 4 ] Schiff bases can be synthesized from an aliphatic or aromatic amine and a carbonyl compound by nucleophilic addition forming a hemiaminal , followed by a dehydration to generate an imine . In a typical reaction, 4,4'-oxydianiline reacts with o - vanillin : [ 5 ] Schiff bases can also be synthesized via the Aza-Wittig reaction . Schiff bases have been investigated in relation to a wide range of contexts, including antimicrobial, antiviral and anticancer activity. They have also been considered for the inhibition of amyloid-β aggregation. [ 6 ] Schiff bases are common enzymatic intermediates where an amine, such as the terminal group of a lysine residue, reversibly reacts with an aldehyde or ketone of a cofactor or substrate. The common enzyme cofactor pyridoxal phosphate (PLP) forms a Schiff base with a lysine residue and is transaldiminated to the substrate(s). [ 7 ] Similarly, the cofactor retinal forms a Schiff base in rhodopsins , including human rhodopsin (via Lysine 296), which is key in the photoreception mechanism. The term Schiff base is normally applied to these compounds when they are being used as ligands to form coordination complexes with metal ions . [ 8 ] One example is Jacobsen's catalyst . The imine nitrogen is basic and exhibits pi-acceptor properties . Several, especially the diiminopyridines are noninnocent ligands . Many Schiff base ligands are derived from alkyl diamines and aromatic aldehydes. [ 9 ] Chiral Schiff bases were one of the first ligands used for asymmetric catalysis . In 1968 Ryōji Noyori developed a copper-Schiff base complex for the metal-carbenoid cyclopropanation of styrene . [ 10 ] Schiff bases have also been incorporated into metal–organic frameworks (MOF). [ 11 ] Conjugated Schiff bases absorb strongly in the UV-visible region of the electromagnetic spectrum. This absorption is the basis of the anisidine value , which is a measure of oxidative spoilage for fats and oils.
https://en.wikipedia.org/wiki/Schiff_base
The Schiff test is an early organic chemistry named reaction developed by Hugo Schiff , [ 1 ] and is a relatively general chemical test for detection of many organic aldehydes that has also found use in the staining of biological tissues. [ 2 ] The Schiff reagent is the reaction product of a dye formulation such as fuchsin and sodium bisulfite ; pararosaniline (which lacks an aromatic methyl group) and new fuchsin (which is uniformly mono-methylated ortho to the dye's amine functionalities) are not dye alternatives with comparable detection chemistry. In its use as a qualitative test for aldehydes, the unknown sample is added to the decolorized Schiff reagent; when aldehyde is present a characteristic magenta color develops. Schiff-type reagents are used for various biological tissue staining methods, e.g. Feulgen stain and periodic acid-Schiff stain . Human skin also contains aldehyde functional groups in the termini of saccharides and so is stained as well. Fuchsin solutions appear colored due to the visible wavelength absorbance of its central quinoid structure—see also for example viologen —but are "decolorized" upon sulfonation of the dye at its central carbon atom by sulfurous acid or its conjugate base, bisulfite. This reaction disrupts the otherwise favored delocalized extended pi-electron system and resonance in the parent molecule. [ 3 ] The further reaction of the Schiff reagent with aldehydes is complex with several research groups reporting multiple reaction products with model compounds. In the currently accepted mechanism, the pararosaniline and bisulfite combine to yield the "decolorized" adduct with sulfonation at the central carbon as described and shown. The free, uncharged aromatic amine groups then react with the aldehyde being tested to form two aldimine groups; these groups have also been named for their discoverer as Schiff bases ( azomethines ), with the usual carbinolamine ( hemiaminal ) intermediate being formed and dehydrated en route to the Schiff base. These electrophilic aldimine groups then react with further bisulfite, and the Ar-NH-CH(R)-SO 3 − product (and other resonance -stabilized species in equilibrium with the product) give rise to the magenta color of a positive test. [ 4 ] Prior formation of classical bisulfite adducts of the tested aldehyde may, when the adducts are stable, give rise to false negative tests such as in the case of testing for the aldehydic terminus of glucose. [ 4 ] Schiff's reagent on reaction with Acetaldehyde gives pink colour. Such an imine-mediated mechanism was first proposed by Paul Rumpf (1908–1999) in 1935, [ 5 ] and experimental evidence was provided by Hardonk and van Duijn in 1964. [ 6 ] In 1980, Robins, Abrams and Pincock provided substantial NMR evidence for the mechanism, leading to its general acceptance. [ 7 ] Stoward had examined the mechanism in 1966 and, on the whole, considered this mechanism to be correct. [ 8 ] A second, earlier mechanism continues to appear in the literature. [ 9 ] The mechanism was proposed in 1921 by the eminent German organic chemist Heinrich Wieland and his student Georg Scheuing (1895–1949). [ 10 ] [ 11 ] Bisulphite was believed to react with the available aromatic amine functional groups to form N-sulfinic acid groups, Ar-NH-SO 2 H, followed by reaction with aldehyde to form sulfonamides , Ar-NH-SO 2 CH(OH)-R. The 1980 NMR data that allowed visualization of intermediates does not support this mechanism or the sulfonamides as the chromogenic product. [ 7 ]
https://en.wikipedia.org/wiki/Schiff_test
The Schikorr reaction formally describes the conversion of iron(II) hydroxide (Fe(OH) 2 ) into iron(II,III) oxide (Fe 3 O 4 ). This transformation reaction was first studied by Gerhard Schikorr . The global reaction follows: 3 Fe(OH) 2 → Fe 3 O 4 + H 2 + 2 H 2 O It is of special interest in the context of serpentinization , the formation of hydrogen by the action of water on a common mineral. [ 1 ] The Schikorr reaction can be viewed as two distinct processes: the anaerobic oxidation of two iron(II) ions by the protons of water, with evolution of hydrogen, and the dehydration of the resulting mixed iron(II)–iron(III) hydroxides into iron(II,III) oxide. The global reaction can thus be decomposed into half redox reactions as follows: 2 Fe 2+ → 2 Fe 3+ + 2 e − (oxidation) and 2 H 2 O + 2 e − → H 2 + 2 OH − (reduction) to give: 2 Fe 2+ + 2 H 2 O → 2 Fe 3+ + H 2 + 2 OH − Adding to this reaction one intact iron(II) ion for each two oxidized iron(II) ions leads to: 3 Fe 2+ + 2 H 2 O → Fe 2+ + 2 Fe 3+ + H 2 + 2 OH − Electroneutrality requires the iron cations on both sides of the equation to be counterbalanced by 6 hydroxyl anions (OH − ): 3 Fe(OH) 2 + 2 H 2 O → Fe(OH) 2 + 2 Fe(OH) 3 + H 2 For completing the main reaction, two companion reactions have still to be taken into account. The first is the autoprotolysis of the hydroxyl anions - a proton exchange between two OH − , as in a classical acid–base reaction : 2 OH − → O 2− + H 2 O It is then possible to reorganize the global reaction as: 3 Fe(OH) 2 → FeO + Fe 2 O 3 + 2 H 2 O + H 2 Considering then the formation reaction of iron(II,III) oxide : FeO + Fe 2 O 3 → Fe 3 O 4 it is possible to write the balanced global reaction in its final form, known as the Schikorr reaction : 3 Fe(OH) 2 → Fe 3 O 4 + H 2 + 2 H 2 O The Schikorr reaction can occur in the process of anaerobic corrosion of iron and carbon steel in various conditions. Anaerobic corrosion of metallic iron gives iron(II) hydroxide and hydrogen: Fe + 2 H 2 O → Fe(OH) 2 + H 2 followed by the Schikorr reaction: 3 Fe(OH) 2 → Fe 3 O 4 + H 2 + 2 H 2 O These give the following global reaction: 3 Fe + 4 H 2 O → Fe 3 O 4 + 4 H 2 At low temperature, the anaerobic corrosion of iron can give rise to the formation of "green rust" ( fougerite ), an unstable layered double hydroxide (LDH). Depending on the geochemical conditions prevailing in the environment of the corroding steel, iron(II) hydroxide and green rust can progressively transform into iron(II,III) oxide, or, if bicarbonate ions are present in solution, they can also evolve towards more stable carbonate phases such as iron carbonate (FeCO 3 ), or iron(II) hydroxycarbonate (Fe 2 (OH) 2 (CO 3 ), chukanovite ), isomorphic to copper(II) hydroxycarbonate (Cu 2 (OH) 2 (CO 3 ), malachite ) in the copper system. Anaerobic oxidation of iron and steel commonly takes place in oxygen-depleted environments, such as in permanently water-saturated soils , peat bogs or wetlands in which archaeological iron artefacts are often found. Anaerobic oxidation of the carbon steel of canisters and overpacks is also expected to occur in deep geological formations in which high-level radioactive waste and spent fuels should ultimately be disposed of. Nowadays, in the framework of corrosion studies related to HLW disposal, anaerobic corrosion of steel is receiving renewed and continued attention. Indeed, it is essential to understand this process to guarantee the total containment of HLW in an engineered barrier during the first centuries or millennia when the radiotoxicity of the waste is high and when it emits a significant quantity of heat . The question is also relevant for the corrosion of the reinforcement bars ( rebars ) in concrete (Aligizaki et al. , 2000). This deals then with the service life of concrete structures, amongst others the near-surface vaults intended for hosting low-level radioactive waste . The slow but continuous production of hydrogen in deep low-permeability argillaceous formations could represent a problem for the long-term disposal of radioactive waste (Ortiz et al. , 2001; Nagra, 2008; recent Nagra NTB reports).
Indeed, a gas pressure build-up could occur if the rate of hydrogen production by the anaerobic corrosion of carbon steel and by the subsequent transformation of green rust into magnetite were to exceed the rate of diffusion of dissolved H 2 in the pore water of the formation. The question is presently the object of many studies (King, 2008; King and Kolar, 2009; Nagra Technical Reports 2000–2009) in the countries (Belgium, Switzerland, France, Canada) envisaging the option of disposal in clay formations. When nascent hydrogen is produced by the anaerobic corrosion of iron by the protons of water, the atomic hydrogen can diffuse into the metal crystal lattice because of the existing concentration gradient. After diffusion , hydrogen atoms can recombine into molecular hydrogen, giving rise to the formation of high-pressure micro-bubbles of H 2 in the metallic lattice. The tendency of the H 2 bubbles to expand and the resulting tensile stress can generate cracks in metallic alloys sensitive to this effect, also known as hydrogen embrittlement . Several recent studies (Turnbull, 2009; King, 2008; King and Kolar, 2009) address this question in the framework of radioactive waste disposal in Switzerland and Canada. For detailed reports on iron corrosion issues related to high-level waste disposal, see the following links:
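As a quick arithmetic check of the stoichiometry written above, the following Python sketch (illustrative only) counts atoms on both sides of the Schikorr reaction and of the overall anaerobic-corrosion reaction and confirms that they balance.

```python
# Illustrative check: verify that the Schikorr reaction and the overall
# anaerobic-corrosion reaction written above are balanced, by counting atoms.
from collections import Counter

COMPOSITION = {"Fe": {"Fe": 1}, "H2O": {"H": 2, "O": 1}, "H2": {"H": 2},
               "Fe(OH)2": {"Fe": 1, "O": 2, "H": 2}, "Fe3O4": {"Fe": 3, "O": 4}}

def atoms(side):
    """Total elemental composition of one side, given as {species: coefficient}."""
    total = Counter()
    for species, coeff in side.items():
        for element, n in COMPOSITION[species].items():
            total[element] += coeff * n
    return total

# Schikorr reaction: 3 Fe(OH)2 -> Fe3O4 + H2 + 2 H2O
assert atoms({"Fe(OH)2": 3}) == atoms({"Fe3O4": 1, "H2": 1, "H2O": 2})
# Overall anaerobic corrosion: 3 Fe + 4 H2O -> Fe3O4 + 4 H2
assert atoms({"Fe": 3, "H2O": 4}) == atoms({"Fe3O4": 1, "H2": 4})
print("both equations are balanced")
```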
https://en.wikipedia.org/wiki/Schikorr_reaction
In pharmacology , Schild regression analysis , based upon the Schild equation , both named for Heinz Otto Schild , are tools for studying the effects of agonists and antagonists on the response caused by the receptor or on ligand-receptor binding. Dose-response curves can be constructed to describe response or ligand-receptor complex formation as a function of the ligand concentration. Antagonists make it harder to form these complexes by inhibiting interactions of the ligand with its receptor. This is seen as a change in the dose response curve: typically a rightward shift or a lowered maximum. A reversible competitive antagonist should cause a rightward shift in the dose response curve, such that the new curve is parallel to the old one and the maximum is unchanged. This is because reversible competitive antagonists are surmountable antagonists. The magnitude of the rightward shift can be quantified with the dose ratio, r. The dose ratio r is the dose of agonist required for half maximal response with the antagonist B {\displaystyle {\ce {B}}} present divided by the dose of agonist required for half maximal response without antagonist ("control"). In other words, it is the ratio of the EC50s of the inhibited and un-inhibited curves. Thus, r represents both the strength of an antagonist and the concentration of the antagonist that was applied. An equation derived from the Gaddum equation can be used to relate r to [ B ] {\displaystyle [{\ce {B}}]} , as follows: r = 1 + [ B ] / K B {\displaystyle r=1+{\frac {[{\ce {B}}]}{K_{B}}}} where K B {\displaystyle K_{B}} is the equilibrium dissociation constant of the antagonist. A Schild plot is a double logarithmic plot, typically log 10 ⁡ ( r − 1 ) {\displaystyle \log _{10}(r-1)} as the ordinate and log 10 ⁡ [ B ] {\displaystyle \log _{10}[{\ce {B}}]} as the abscissa . This is done by taking the base-10 logarithm of both sides of the previous equation after subtracting 1: log 10 ⁡ ( r − 1 ) = log 10 ⁡ [ B ] − log 10 ⁡ K B {\displaystyle \log _{10}(r-1)=\log _{10}[{\ce {B}}]-\log _{10}K_{B}} This equation is linear with respect to log 10 ⁡ [ B ] {\displaystyle \log _{10}[{\ce {B}}]} , allowing for easy construction of graphs without computations. This was particularly valuable before the use of computers in pharmacology became widespread. The y-intercept of the equation represents the negative logarithm of K B {\displaystyle K_{B}} and can be used to quantify the strength of the antagonist. These experiments must be carried out on a very wide range (therefore the logarithmic scale) as the mechanisms differ over a large scale, such as at high concentration of drug. [ citation needed ] The fitting of the Schild plot to observed data points can be done with regression analysis . Although most experiments use cellular response as a measure of the effect, the effect is, in essence, a result of the binding kinetics; so, in order to illustrate the mechanism, ligand binding is used. A ligand A will bind to a receptor R according to an equilibrium constant : K d = k − 1 / k 1 {\displaystyle K_{d}=k_{-1}/k_{1}} Although the equilibrium constant is more meaningful, texts often mention its inverse, the affinity constant (K aff = k 1 /k −1 ): a better binding means an increase of binding affinity. The equation for simple ligand binding to a single homogeneous receptor is [ A R ] = [ R ] t [ A ] / ( [ A ] + K d ) {\displaystyle [AR]={\frac {[R]_{t}[A]}{[A]+K_{d}}}} This is the Hill-Langmuir equation, which is practically the Hill equation described for the agonist binding. In chemistry, this relationship is called the Langmuir equation , which describes the adsorption of molecules onto sites of a surface (see adsorption ).
[ R ] t {\displaystyle [R]_{t}} is the total number of binding sites, and when the equation is plotted it is the horizontal asymptote to which the plot tends; more binding sites will be occupied as the ligand concentration increases, but there will never be 100% occupancy. The binding affinity is the concentration needed to occupy 50% of the sites; the lower this value is, the easier it is for the ligand to occupy the binding site. The binding of the ligand to the receptor at equilibrium follows the same kinetics as an enzyme at steady-state ( Michaelis–Menten equation ) without the conversion of the bound substrate to product. Agonists and antagonists can have various effects on ligand binding. They can change the maximum number of binding sites, the affinity of the ligand to the receptor, both effects together, or even more bizarre effects when the system being studied is more intact, such as in tissue samples. (Tissue absorption, desensitization, and other non-equilibrium steady-state effects can be a problem.) A surmountable drug changes the binding affinity: A nonsurmountable drug changes the maximum binding: The Schild regression can also reveal whether there is more than one type of receptor, and it can show whether the experiment was done incorrectly, for example if the system has not reached equilibrium. The first radio-receptor assay (RRA) was done in 1970 by Lefkowitz et al., [ dubious – discuss ] using a radiolabeled hormone to determine the binding affinity for its receptor. [ 1 ] A radio-receptor assay requires the separation of the bound from the free ligand. This is done by filtration , centrifugation or dialysis . [ 2 ] A method that does not require separation is the scintillation proximity assay , which relies on the fact that β-rays from 3 H travel extremely short distances. The receptors are bound to beads coated with a polyhydroxy scintillator, so only the bound ligands are detected. Today, the fluorescence method is preferred to radioactive materials due to a much lower cost, lower hazard, and the possibility of multiplexing the reactions in a high-throughput manner. One problem is that fluorescent-labeled ligands have to bear a bulky fluorophore that may hinder the ligand binding. Therefore, the fluorophore used, the length of the linker, and its position must be carefully selected. An example is by using FRET , where the ligand's fluorophore transfers its energy to the fluorophore of an antibody raised against the receptor. Other detection methods such as surface plasmon resonance do not even require fluorophores.
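A minimal worked example of the Schild regression itself may help make the procedure concrete. The Python sketch below is illustrative only: the antagonist concentrations and dose ratios are invented numbers, and the fit simply follows the log-transformed Schild equation given earlier, returning the slope and an estimate of pA2 = −log10(KB).

```python
# Illustrative sketch (hypothetical data): a Schild regression.
# Fit log10(r - 1) against log10([B]); the x-intercept estimates log10(KB).
import numpy as np

B = np.array([1e-8, 3e-8, 1e-7, 3e-7, 1e-6])   # antagonist concentrations, M (made up)
r = np.array([2.1, 4.2, 11.0, 32.0, 105.0])    # dose ratios from shifted EC50s (made up)

x = np.log10(B)
y = np.log10(r - 1)

slope, intercept = np.polyfit(x, y, 1)   # simple least-squares Schild regression
x_intercept = -intercept / slope         # log10 of [B] giving a dose ratio of 2
pA2 = -x_intercept                       # estimate of -log10(KB) when the slope is ~1

print(f"Schild slope: {slope:.2f} (expected ~1 for simple competitive antagonism)")
print(f"pA2 estimate: {pA2:.2f}")
```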
https://en.wikipedia.org/wiki/Schild_equation
Schill+Seilacher , also known by its brand name Struktol , is a German chemical company. It was founded in 1877, and produces chemicals for the textile and paper industry. On November 1, 1877, the brothers-in-law Christoph Seilacher and Karl Schill founded the company in Heilbronn as a chemical factory for the manufacture of specialty products for the leather industry. Due to rapid growth, four years later they decided to relocate the business to Feuerbach, now a district of Stuttgart . [ 2 ] In October 1886, Schill+Seilacher applied for a patent for a process for the production of dégras tanning fat under the patent number D.R.P. 39952. Because it combined the properties of a fat and an emulsifier, it was one of the most important products in emulsion technology, but was not used outside the leather industry. [ 3 ] At the turn of the century, chrome tanning became more important in leather processing, and eventually replaced dégras-based products altogether. [ 4 ] Christoph Seilacher feared a loss of sales for his dégras products, and, therefore developed a photo gelatine as well as the necessary pouring, cooling and application machines. [ 5 ] The product was supplied to firms such as Kodak , Lumière in Paris and Agfa . The company later expanded its production of process additives. Schill+Seilacher opened a branch office in Hamburg in 1925 to benefit from the port's international trade routes. Meanwhile, the company began producing more chemicals for textile finishing. In the late 1920s, the first additives for rubber processing were produced. During World War II , the plant facilities in Feuerbach and Hamburg were destroyed in a bombing raid. Afterwards, it was decided to relocate production to Böblingen, southwest of Stuttgart. [ 6 ] After the end of the war, the plant in Hamburg was rebuilt almost on the same site. The Böblingen plant was enlarged and modernized as business expanded. It produces chemicals such as sophorolipids , and amino acid surfactants . [ 7 ] After World War II, the company was handed over by Christoph Seilacher to his granddaughter Ingeborg Gross, who managed it until shortly before her death in 2019. [ 8 ] In the early 1950s, the company developed a process for tanning and fatliquoring animal hides with the salts of sulfonated fatty acid amides . [ 9 ] In the 1960s, Schiller+Seilacher began producing leather softeners at its Hamburg plant. [ 10 ] In 1977, the company expanded into the USA and founded the Struktol Company of America in Stow, Ohio . [ 8 ] It produces various chemicals, e.g. additives for the tire industry, such as compatibilizers . [ 11 ] In 1997, Schill + Seilacher opened a new plant in Neundorf , near Pirna [ 6 ] where it produces chemicals such as the flame retardant 9,10-dihydro-9-oxa-10-phosphaphenanthrene 10-oxide. [ 12 ] On December 1, 2014, 5 metric tons of trimethyl phosphite exploded there during an Arbuzov reaction , killing one worker and causing injuries to four. [ 13 ] According to the Saxon State Office for the Environment, the cause of the explosion was the addition of too little solvent toluene, i.e. human error. [ 14 ] After controversies about the reconstruction, the Freiberg University of Mining and Technology developed a new safety concept for the plant. It was reopened in August 2019. [ 15 ] From its foundation in 1877 until 2019, the Schill+Seilacher group was family-owned. The last owner of the founding family was Ingeborg Gross. 
Before her death in 2019, she transferred the group to the Ingeborg Gross Foundation based in Hamburg, and the Pro Humanitate Foundation based in Liechtenstein , which she had established specifically for this purpose. [ 8 ]
https://en.wikipedia.org/wiki/Schill+Seilacher
The Schilling test was a medical investigation used for patients with vitamin B 12 (cobalamin) deficiency. [ 1 ] The purpose of the test was to determine how well a patient is able to absorb B 12 from their intestinal tract. The test is now considered obsolete and is rarely performed, and is no longer available at many medical centers. It is named for Robert F. Schilling . [ 2 ] The Schilling test has multiple stages. [ 3 ] As noted below, it can be done at any time after vitamin B 12 supplementation and body store replacement, and some clinicians recommend that in severe deficiency cases, at least several weeks of vitamin repletion be done before the test (more than one B 12 shot, and also oral folic acid), in order to ensure that impaired absorption of B 12 (with or without intrinsic factor) is not occurring due to damage to the intestinal mucosa from the B 12 and folate deficiency themselves. In the first part of the test, the patient is given radiolabeled vitamin B 12 to drink or eat. The most commonly used radiolabels are 57 Co and 58 Co . An intramuscular injection of unlabeled vitamin B 12 is given an hour later. This is not enough to replete [ clarification needed ] or saturate body stores of B 12 . The purpose of the single injection is to temporarily saturate B 12 receptors in the liver with enough normal vitamin B 12 to prevent radioactive vitamin B 12 binding in body tissues (especially in the liver), so that if absorbed from the G.I. tract, it will pass into the urine. The patient's urine is then collected over the next 24 hours to assess the absorption. Normally, the ingested radiolabeled vitamin B 12 will be absorbed into the body. Since the body already has liver receptors for transcobalamin /vitamin B 12 saturated by the injection, much of the ingested vitamin B 12 will be excreted in the urine. The normal test will result in a higher amount of the radiolabeled cobalamin in the urine because it would have been absorbed by the intestinal epithelium, but passed into the urine because all hepatic B 12 receptors were occupied. An abnormal result is caused by less of the labeled cobalamin to appear in the urine because it will remain in the intestine and be passed into the feces. If an abnormality is found, i.e. the B 12 in the urine is only present in low levels, the test is repeated, this time with additional oral intrinsic factor . This stage is useful for identifying patients with bacterial overgrowth syndrome . The physician will provide a course of 2 weeks of antibiotics to eliminate any possible bacterial overgrowth and repeat the test to check whether radio-labeled Vitamin B 12 would be found in urine or not. This stage, in which pancreatic enzymes are administered, can be useful in identifying patients with pancreatic insufficiency . The physician will give 3 days of pancreatic enzymes followed by repeating the test to check if radio-labeled Vitamin B12 would be detected in urine. In some versions of the Schilling test, B 12 can be given both with and without intrinsic factor at the same time, using different cobalt radioisotopes 57 Co and 58 Co, which have different radiation signatures, in order to differentiate the two forms of B 12 . This is performed with the 'Dicopac' kitset. This allows for only a single radioactive urine collection. [ 4 ] Note that the B 12 shot which begins the Schilling test is enough to go a considerable way toward treating B 12 deficiency, so the test is also a partial treatment for B 12 deficiency. 
Also, the classic Schilling test can be performed at any time, even after full B 12 repletion and correction of the anemia, and it will still show if the cause of the B 12 deficiency was intrinsic-factor related. In fact, some clinicians have suggested that folate and B 12 replacement for several weeks be normally performed before a Schilling test is done, since folate and B 12 deficiencies are both known to interfere with intestinal cell function, and thus cause malabsorption of B 12 on their own, even if intrinsic factor is being made. This state would then tend to cause a false-positive test for both simple B 12 and intrinsic factor-related B 12 malabsorption. Several weeks of vitamin replacement are necessary, before epithelial damage to the G.I. tract from B 12 deficiency is corrected. Many labs have stopped performing the Schilling test, [ 5 ] due to lack of production of the cobalt radioisotopes and labeled-B 12 test substances. Also, injection replacement of B 12 has become relatively inexpensive, and can be self-administered by patients, as well as megadose oral B 12 . Since these are the same treatments which would be administered for most causes of B 12 malabsorption even if the exact cause were identified, the diagnostic test may be omitted without damage to the patient (so long as follow-up treatment and occasional serum B 12 testing is not allowed to lapse). It is possible for use of other radiopharmaceuticals to interfere with interpretation of the test. [ 6 ]
https://en.wikipedia.org/wiki/Schilling_test
Schisandrins [ 1 ] ( schizandrins ) are a group of bioactive chemical compounds found in Schisandra rubriflora , Schisandra sphenanthera , and Schisandra chinensis . [ 2 ] [ 3 ] Schizandrin is a tannin . [ 4 ] Its IUPAC name is 3,4,5,14,15,16-hexamethoxy-9,10-dimethyltricyclo[10.4.0.02,7]hexadeca-1(16),2,4,6,12,14-hexaen-9-ol, its CAS registry number is 7432-28-2, and its molecular formula is C 24 H 32 O 7 {\displaystyle {\ce {C24H32O7}}} , corresponding to a molar mass of 432.5 g/mol. [ 9 ] Schisandra chinensis ( S. chinensis ) berries, originally a component of traditional herbal medicine in China, Korea, and other east Asian countries, are also valuable agents in modern phytotherapy. S. chinensis berry preparations, including extracts and their chemical components, demonstrate anti-cancer, hepatoprotective, anti-inflammatory , and antioxidant properties, among others. These valuable properties, and their therapeutic potential, are conditioned by the unique chemical composition of S. chinensis berries, particularly their lignan content. About 40 of these compounds, mainly of the dibenzocyclooctane type, have been isolated from S. chinensis . The most important bioactive lignans are schisandrin (also denoted as schizandrin or schisandrol A), schisandrin B, schisantherin A, schisantherin B, schisanhenol, deoxyschisandrin, and gomisin A. Recent literature reviews the cardioprotective potential of S. chinensis berries and their individual components, with special emphasis on the cardioprotective properties of selected lignans related to their antioxidant and anti-inflammatory characteristics. [ 10 ]
https://en.wikipedia.org/wiki/Schisandrin
Schizocoely (adjective forms: schizocoelous or schizocoelic ) is a process by which some animal embryos develop . The schizocoely mechanism occurs when secondary body cavities ( coeloms ) are formed by splitting a solid mass of mesodermal embryonic tissue. [ 1 ] [ 2 ] All schizocoelomates are protostomians [ contradictory ] and they show holoblastic, spiral, determinate cleavage. The term schizocoely derives from the Ancient Greek words σχίζω ( skhízō ), meaning 'to split', and κοιλία ( koilía ), meaning 'cavity'. [ 3 ] [ 4 ] This refers to the fact that fluid-filled body cavities are formed by splitting of mesodermal cells. Animals called protostomes develop through schizocoely for which they are also known as schizocoelomates . Schizocoelous development often occurs in protostomes , [ 1 ] [ 5 ] [ 6 ] as in phyla Mollusca , Annelida , and Arthropoda . Deuterostomes usually exhibit enterocoely ; [ 7 ] however, some deuterostomes like enteropneusts can exhibit schizocoely as well. [ 8 ] The term refers to the order of organization of cells in the gastrula leading to development of the coelom . In mollusks, annelids, and arthropods, the mesoderm (the middle germ layer ) forms as a solid mass of migrated cells from the single layer of the gastrula. The new mesoderm then splits, creating the pocket-like cavity of the coelom.
https://en.wikipedia.org/wiki/Schizocoely
The Schlenk equilibrium, named after its discoverer Wilhelm Schlenk , is a chemical equilibrium taking place in solutions of Grignard reagents [ 1 ] [ 2 ] and Hauser bases : [ 3 ] [ 4 ] 2 RMgX ⇌ R 2 Mg + MgX 2 The process described is an equilibrium between two equivalents of an alkyl or aryl magnesium halide on the left of the equation and one equivalent each of the dialkyl- or diarylmagnesium compound and the magnesium halide salt on the right. Organomagnesium halides in solution also form dimers and higher oligomers , especially at high concentration. Alkyl magnesium chlorides in ether are present as dimers. [ 5 ] The position of the equilibrium is influenced by solvent, temperature, and the nature of the various substituents. It is known that the magnesium center in Grignard reagents typically coordinates two molecules of an ether such as diethyl ether or tetrahydrofuran (THF). Thus they are more precisely described as having the formula RMgXL 2 where L = an ether. In the presence of monoethers, the equilibrium typically favors the alkyl- or arylmagnesium halide. Addition of dioxane to such solutions, however, leads to precipitation of the coordination polymer MgX 2 (μ-dioxane) 2 , [ 6 ] driving the equilibrium completely to the right. [ 7 ] The dialkylmagnesium compounds are popular in the synthesis of organometallic compounds.
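For a rough idea of what the equilibrium position means in terms of concentrations, the following Python sketch (illustrative only; the equilibrium constant K and the 1.0 M starting concentration are assumed values, since K depends on R, X and the solvent) solves the Schlenk equilibrium for a solution prepared from pure RMgX.

```python
# Illustrative sketch: equilibrium speciation for 2 RMgX <=> R2Mg + MgX2 at an assumed
# equilibrium constant K, starting from pure RMgX at concentration c0.
# With extent of reaction x per litre: [RMgX] = c0 - 2x and [R2Mg] = [MgX2] = x,
# so K = x**2 / (c0 - 2x)**2, which gives x = c0*sqrt(K)/(1 + 2*sqrt(K)).
import math

def schlenk_speciation(c0, K):
    s = math.sqrt(K)
    x = c0 * s / (1.0 + 2.0 * s)
    return {"RMgX": c0 - 2.0 * x, "R2Mg": x, "MgX2": x}

# Hypothetical values: 1.0 M Grignard solution, K = 0.25.
print(schlenk_speciation(1.0, 0.25))
# -> {'RMgX': 0.5, 'R2Mg': 0.25, 'MgX2': 0.25}
```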
https://en.wikipedia.org/wiki/Schlenk_equilibrium
A Schlenk flask , or Schlenk tube , is a reaction vessel typically used in air-sensitive chemistry, invented by Wilhelm Schlenk . It has a side arm fitted with a PTFE or ground glass stopcock , which allows the vessel to be evacuated or filled with gases (usually inert gases like nitrogen or argon ). These flasks are often connected to Schlenk lines , which allow both operations to be done easily. Schlenk flasks and Schlenk tubes, like most laboratory glassware , are made from borosilicate glass such as Pyrex . Schlenk flasks are round-bottomed, while Schlenk tubes are elongated. They may be purchased off-the-shelf from laboratory suppliers or made from round-bottom flasks or glass tubing by a skilled glassblower . Typically, before solvent or reagents are introduced into a Schlenk flask, the flask is dried and the atmosphere of the flask is exchanged with an inert gas. A common method of exchanging the atmosphere of the flask is to flush the flask out with an inert gas. The gas can be introduced through the sidearm of the flask, or via a wide bore needle (attached to a gas line). The displaced atmosphere exits through the neck of the flask. The needle method has the advantage that the needle can be placed at the bottom of the flask to better flush out the atmosphere of the flask. Flushing a flask out with an inert gas can be inefficient for large flasks and is impractical for complex apparatus. [ 1 ] An alternative way to exchange the atmosphere of a Schlenk flask is to use one or more "vac-refill" cycles, typically using a vacuum-gas manifold , also known as a Schlenk line . This involves pumping the air out of the flask and replacing the resulting vacuum with an inert gas. For example, evacuation of the flask to 1 mmHg (130 Pa; 0.0013 atm) and then replenishing the atmosphere with 760 mmHg (1 atm) inert gas leaves 0.13% of the original atmosphere ( 1 ⁄ 760 ). Two such vac-refill cycles leave 0.000173% ( 1 ⁄ 760 2 ). Most Schlenk lines easily and quickly achieve a vacuum of 1 mmHg (~1.3 mBar). [ 2 ] When using Schlenk systems, including flasks, the use of grease is often necessary at stopcock valves and ground glass joints to provide a gas-tight seal and prevent glass pieces from fusing. In contrast, teflon plug valves may have a trace of oil as a lubricant but generally no grease. In the following text any "connection" is assumed to be rendered mostly air-free through a series of vac-refill cycles. The standard Schlenk flask is a round bottom, pear-shaped, or tubular flask with a ground glass joint and a side arm. The side arm contains a valve, usually a greased stopcock , used to control the flask's exposure to a manifold or the atmosphere. This allows a material to be added to a flask through the ground glass joint, which is then capped with a septum . This operation can, for example, be done in a glove box . The flask can then be removed from the glove box and taken to a Schlenk line. Once connected to the Schlenk line, the inert gas and/or vacuum can be applied to the flask as required. While the flask is connected to the line under a positive pressure of inert gas, the septum can be replaced with other apparatus, for example a reflux condenser. Once the manipulations are complete, the contents can be vacuum dried and placed under a static vacuum by closing the side arm valve. These evacuated flasks can be taken back into a glove box for further manipulation or storage of the flasks' contents.
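The arithmetic of the vac-refill technique is easy to generalize. The short Python sketch below (illustrative only) computes the residual fraction of the original atmosphere after a given number of cycles, assuming each cycle pumps the flask down to a chosen base pressure and refills it to 1 atm; with the 1 mmHg base pressure quoted above it reproduces the 0.13% and 0.000173% figures.

```python
# Illustrative sketch: residual fraction of the original atmosphere after n
# "vac-refill" cycles, assuming each cycle evacuates to base_pressure_mmHg and
# refills to fill_pressure_mmHg with inert gas.
def residual_fraction(n_cycles, base_pressure_mmHg=1.0, fill_pressure_mmHg=760.0):
    return (base_pressure_mmHg / fill_pressure_mmHg) ** n_cycles

for n in (1, 2, 3):
    f = residual_fraction(n)
    print(f"{n} cycle(s): {f:.3e} ({100 * f:.6f} % of the original atmosphere)")
# 1 cycle ~0.13 %, 2 cycles ~0.00017 %, matching the figures quoted above.
```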
A "bomb" flask is subclass of Schlenk flask which includes all flasks that have only one opening accessed by opening a Teflon plug valve. This design allows a Schlenk bomb to be sealed more completely than a standard Schlenk flask even if its septum or glass cap is wired on. Schlenk bombs include structurally sound shapes such as round bottoms and heavy walled tubes. Schlenk bombs are often used to conduct reactions at elevated pressures and temperatures as a closed system. In addition, all Schlenk bombs are designed to withstand the pressure differential created by the ante-chamber when pumping solvents into a glove box. In practice Schlenk bombs can perform many of the functions of a standard Schlenk flask. Even when the opening is used to fit a bomb to a manifold, the plug can still be removed to add or remove material from the bomb. In some situations, however, Schlenk bombs are less convenient than standard Schlenk flasks: they lack an accessible ground glass joint to attach additional apparatus; the opening provided by plug valves can be difficult to access with a spatula , and it can be much simpler to work with a septum designed to fit a ground glass joint than with a Teflon plug. The name "bomb" is often applied to containers used under pressure such as a bomb calorimeter . While glass does not equal the pressure rating and mechanical strength of most metal containers, it does have several advantages. Glass allows visual inspection of a reaction in progress, it is inert to a wide range of reaction conditions and substrates, it is generally more compatible with common laboratory glassware, and it is more easily cleaned and checked for cleanliness. A Straus flask (often misspelled "Strauss") is subclass of "bomb" flask originally developed by Kontes Glass Company, [ 3 ] commonly used for storing dried and degassed solvents. Straus flasks are sometimes referred to as solvent bombs — a name which applies to any Schlenk bomb dedicated to storing solvent. Straus flasks are mainly differentiated from other "bombs" by their neck structure. Two necks emerge from a round bottom flask, one larger than the other. The larger neck ends in a ground glass joint and is permanently partitioned by blown glass from direct access to the flask. The smaller neck includes the threading required for a teflon plug to be screwed in perpendicular to the flask. The two necks are joined through a glass tube. The ground glass joint can be connected to a manifold directly or through an adapter and hosing. Once connected, the plug valve can be partially opened to allow the solvent in the Straus flask to be vacuum transferred to other vessels. Or, once connected to the line, the neck can be placed under a positive pressure of inert gas and the plug valve can be fully removed. This allows direct access to the flask through a narrow glass tube now protected by a curtain of inert gas. The solvent can then be transferred through cannula to another flask. In contrast, other bomb flask plugs are not necessarily ideally situated to protect the atmosphere of the flask from the external atmosphere. Straus flasks are distinct from "solvent pots", which are flasks that contain a solvent as well as drying agents. Solvent pots are not usually bombs, or even Schlenk flasks in the classic sense. The most common configuration of a solvent pot is a simple round bottom flask attached to a 180° adapter fitted with some form of valve. 
The pot can be attached to a manifold and the contents distilled or vacuum transferred to other flasks free of soluble drying agents, water, oxygen or nitrogen. The term "solvent pot" can also refer to the flask containing the drying agents in a classic solvent still system. Due to fire risks, solvent stills have largely been replaced by solvent columns in which degassed solvent is forced through an insoluble drying agent before being collected. Solvent is usually collected from solvent columns through a needle connected to the column which pierces the septum of a flask or through a ground glass joint connected to the column, as in the case of a Straus flask.
https://en.wikipedia.org/wiki/Schlenk_flask
The Schlenk line (also vacuum gas manifold ) is a commonly used chemistry apparatus developed by Wilhelm Schlenk . [ 1 ] It consists of a dual manifold with several ports. [ 2 ] One manifold is connected to a source of purified inert gas , while the other is connected to a vacuum pump . The inert-gas line is vented through an oil bubbler , while solvent vapors and gaseous reaction products are prevented from contaminating the vacuum pump by a liquid-nitrogen or dry-ice / acetone cold trap . Special stopcocks or Teflon taps allow vacuum or inert gas to be selected without the need for placing the sample on a separate line. [ 3 ] Schlenk lines are useful for manipulating moisture- and air-sensitive compounds. The vacuum is used to remove air or other gasses present in closed, connected glassware to the line. It often also removes the last traces of solvent from a sample. Vacuum and gas manifolds often have many ports and lines, and with care, it is possible for several reactions or operations to be run simultaneously in inert conditions. When the reagents are highly susceptible to oxidation , traces of oxygen may pose a problem. Then, for the removal of oxygen below the ppm level, the inert gas needs to be purified by passing it through a deoxygenation catalyst. [ 4 ] This is usually a column of copper(I) or manganese(II) oxide, which reacts with oxygen traces present in the inert gas. In other cases, a purge-cycle technique is often employed, where the closed, reaction vessel connected to the line is filled with inert gas, evacuated with the vacuum and then refilled. This process is repeated 3 or more times to make sure air is rigorously removed. Moisture can be removed by heating the reaction vessel with a heat gun . [ 5 ] The main techniques associated with the use of a Schlenk line include: Glassware are usually connected by tightly fitting and greased ground glass joints . Round bends of glass tubing with ground glass joints may be used to adjust the orientation of various vessels. Glassware is necessarily purged of outside air by using the purge cycling technique. The solvents and reagents that are used can use a technique called "sparging" to remove air. This is where a cannula needle, which is connected to the inert gas on the line, is inserted into the reaction vessel containing the solvent; this effectively bubbles the inert gas into the solution, which will actively push out trapped gas molecules from the solvent. [ 5 ] Filtration under inert conditions poses a special challenge. It is usually achieved using a "cannula filter". [ 3 ] Classically, filtration is tackled with a Schlenk filter, which consists of a sintered glass funnel fitted with joints and stopcocks that is sometimes called a Schlenk frit. [ 8 ] By fitting the pre-dried funnel and receiving flask to the reaction flask against a flow of nitrogen, carefully inverting the set-up and turning on the vacuum appropriately, the filtration may be accomplished with minimal exposure to air. [ 5 ] A glovebox is often used in conjunction with the Schlenk line for storing and reusing air- and moisture-sensitive solvents in a lab. The main dangers associated with the use of a Schlenk line are the risks of an implosion or explosion . An implosion can occur due to the use of vacuum and flaws in the glass apparatus. An explosion can occur due to the common use of liquid nitrogen in the cold trap , used to protect the vacuum pump from solvents. 
If a significant amount of air is allowed to enter the Schlenk line, liquid oxygen can condense in the cold trap as a pale blue liquid. An explosion may occur due to the reaction of the liquid oxygen with any organic compounds also in the trap.
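A rough idea of why three or more purge cycles suffice can be obtained from an idealised dilution model. The sketch below assumes perfect mixing and a refill with pure inert gas; the model, function name and pressure values are illustrative assumptions, not taken from the text above.

```python
def residual_air_fraction(p_vacuum, p_refill, cycles):
    """Idealised fraction of the original atmosphere remaining after
    repeated evacuate/refill purge cycles: each cycle dilutes the
    residual air by p_vacuum / p_refill.  Assumes perfect mixing and a
    refill with pure inert gas; pressures in any consistent unit."""
    return (p_vacuum / p_refill) ** cycles

# A modest 1 mbar vacuum refilled to 1000 mbar of inert gas leaves about
# one part in 10^9 of the original air after three cycles.
print(residual_air_fraction(1.0, 1000.0, 3))  # 1e-09
```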
https://en.wikipedia.org/wiki/Schlenk_line
The Schlichting jet is a steady, laminar, round jet, emerging into a stationary fluid of the same kind with a very high Reynolds number . The problem was formulated and solved by Hermann Schlichting in 1933, [ 1 ] who also formulated the corresponding planar Bickley jet problem in the same paper. [ 2 ] The Landau-Squire jet from a point source is an exact solution of the Navier-Stokes equations , valid for all Reynolds numbers, which reduces to the Schlichting jet solution at high Reynolds numbers, for distances far away from the jet origin. Consider an axisymmetric jet emerging from an orifice, located at the origin of a cylindrical polar coordinate system ( r , x ) {\displaystyle (r,x)} , with x {\displaystyle x} being the jet axis and r {\displaystyle r} being the radial distance from the axis of symmetry. Since the jet is at constant pressure, the momentum flux in the x {\displaystyle x} direction is constant and equal to the momentum flux at the origin, where ρ {\displaystyle \rho } is the constant density, ( v , u ) {\displaystyle (v,u)} are the velocity components in the r {\displaystyle r} and x {\displaystyle x} directions, respectively, and J {\displaystyle J} is the known momentum flux at the origin. The quantity K = J / ρ {\displaystyle K=J/\rho } is called the kinematic momentum flux . The boundary layer equations are where ν {\displaystyle \nu } is the kinematic viscosity . The boundary conditions are The Reynolds number of the jet, is a large number for the Schlichting jet. A self-similar solution exists for the problem posed. The self-similar variables are Then the boundary layer equation reduces to with boundary conditions F ( 0 ) = F ′ ( 0 ) = 0 {\displaystyle F(0)=F'(0)=0} . If F ( η ) {\displaystyle F(\eta )} is a solution, then F ( γ η ) {\displaystyle F(\gamma \eta )} is also a solution. A particular solution which satisfies the condition at η = 0 {\displaystyle \eta =0} is given by The constant γ {\displaystyle \gamma } can be evaluated from the momentum condition, Thus the solution is Unlike the momentum flux, the volume flow rate in the x {\displaystyle x} direction is not constant: it increases linearly with distance along the axis due to slow entrainment of the outer fluid by the jet. Schneider flow describes the flow induced by the jet due to the entrainment. [ 3 ] The Schlichting jet for a compressible fluid has been solved by M.Z. Krzywoblocki [ 4 ] and D.C. Pack. [ 5 ] Similarly, the Schlichting jet with swirling motion was studied by H. Görtler. [ 6 ]
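The closed form of this self-similar solution is a standard textbook result; the sketch below evaluates the axial velocity profile from that standard form (the constants come from the textbook expression, not from the equations omitted above, and the function name is illustrative).

```python
import numpy as np

def schlichting_axial_velocity(r, x, J, rho, nu):
    """Axial velocity u(r, x) of the round laminar jet, from the standard
    closed-form self-similar solution
        u = 3*K / (8*pi*nu*x) * (1 + eta**2 / 4)**-2,
        eta = sqrt(3*K / (16*pi)) * r / (nu*x),   K = J / rho,
    with J the momentum flux, rho the density and nu the kinematic
    viscosity (all SI units)."""
    K = J / rho                                    # kinematic momentum flux
    eta = np.sqrt(3.0 * K / (16.0 * np.pi)) * r / (nu * x)
    return 3.0 * K / (8.0 * np.pi * nu * x) / (1.0 + 0.25 * eta**2) ** 2

# The centreline velocity decays like 1/x while the jet width grows
# linearly with x, consistent with a constant momentum flux.
```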
https://en.wikipedia.org/wiki/Schlichting_jet
In geometry , the Schläfli symbol is a notation of the form { p , q , r , . . . } {\displaystyle \{p,q,r,...\}} that defines regular polytopes and tessellations . The Schläfli symbol is named after the 19th-century Swiss mathematician Ludwig Schläfli , [ 1 ] : 143 who generalized Euclidean geometry to more than three dimensions and discovered all their convex regular polytopes, including the six that occur in four dimensions. The Schläfli symbol is a recursive description, [ 1 ] : 129 starting with { p } {\displaystyle \{p\}} for a p {\displaystyle p} -sided regular polygon that is convex . For example, {3} is an equilateral triangle , {4} is a square , {5} a convex regular pentagon , etc. Regular star polygons are not convex, and their Schläfli symbols { p / q } {\displaystyle \{p/q\}} contain irreducible fractions p / q {\displaystyle p/q} , where p {\displaystyle p} is the number of vertices, and q {\displaystyle q} is their turning number . Equivalently, { p / q } {\displaystyle \{p/q\}} is created from the vertices of { p } {\displaystyle \{p\}} , connected every q {\displaystyle q} . For example, { 5 / 2 } {\displaystyle \{5/2\}} is a pentagram ; { 5 / 1 } {\displaystyle \{5/1\}} is a pentagon . A regular polyhedron that has q {\displaystyle q} regular p {\displaystyle p} -sided polygon faces around each vertex is represented by { p , q } {\displaystyle \{p,q\}} . For example, the cube has 3 squares around each vertex and is represented by {4,3}. A regular 4-dimensional polytope , with r { p , q } {\displaystyle r\{p,q\}} regular polyhedral cells around each edge is represented by { p , q , r } {\displaystyle \{p,q,r\}} . For example, a tesseract , {4,3,3}, has 3 cubes , {4,3}, around an edge. In general, a regular polytope { p , q , r , … , y , z } {\displaystyle \{p,q,r,\dots ,y,z\}} has z { p , q , r , … , y } {\displaystyle z\{p,q,r,\dots ,y\}} facets around every peak , where a peak is a vertex in a polyhedron, an edge in a 4-polytope, a face in a 5-polytope, and an ( n −3)-face in an n -polytope. A regular polytope has a regular vertex figure . The vertex figure of a regular polytope { p , q , r ,..., y , z } is { q , r ,..., y , z }. Regular polytopes can have star polygon elements, like the pentagram , with symbol { 5 ⁄ 2 }, represented by the vertices of a pentagon but connected alternately. The Schläfli symbol can represent a finite convex polyhedron , an infinite tessellation of Euclidean space , or an infinite tessellation of hyperbolic space , depending on the angle defect of the construction. A positive angle defect allows the vertex figure to fold into a higher dimension and loops back into itself as a polytope. A zero angle defect tessellates space of the same dimension as the facets. A negative angle defect cannot exist in ordinary space, but can be constructed in hyperbolic space. Usually, a facet or a vertex figure is assumed to be a finite polytope, but can sometimes itself be considered a tessellation. A regular polytope also has a dual polytope , represented by the Schläfli symbol elements in reverse order. A self-dual regular polytope will have a symmetric Schläfli symbol. In addition to describing Euclidean polytopes, Schläfli symbols can be used to describe spherical polytopes or spherical honeycombs. [ 1 ] : 138 Schläfli's work was almost unknown in his lifetime, and his notation for describing polytopes was rediscovered independently by several others. In particular, Thorold Gosset rediscovered the Schläfli symbol which he wrote as | p | q | r | ... 
| z | rather than with brackets and commas as Schläfli did. [ 1 ] : 144 Gosset's form has greater symmetry, so the number of dimensions is the number of vertical bars, and the symbol exactly includes the sub-symbols for facet and vertex figure. Gosset regarded | p as an operator, which can be applied to | q | ... | z | to produce a polytope with p -gonal faces whose vertex figure is | q | ... | z |. Schläfli symbols are closely related to (finite) reflection symmetry groups , which correspond precisely to the finite Coxeter groups and are specified with the same indices, but square brackets instead [ p , q , r ,...]. Such groups are often named by the regular polytopes they generate. For example, [3,3] is the Coxeter group for reflective tetrahedral symmetry , [3,4] is reflective octahedral symmetry , and [3,5] is reflective icosahedral symmetry . The Schläfli symbol of a convex regular polygon with p edges is { p }. For example, a regular pentagon is represented by {5}. For nonconvex star polygons , the constructive notation { p ⁄ q } is used, where p is the number of vertices and q −1 is the number of vertices skipped when drawing each edge of the star. For example, { 5 ⁄ 2 } represents the pentagram . The Schläfli symbol of a regular polyhedron is { p , q } if its faces are p -gons, and each vertex is surrounded by q faces (the vertex figure is a q -gon). For example, {5,3} is the regular dodecahedron . It has pentagonal (5 edges) faces, and 3 pentagons around each vertex. See the 5 convex Platonic solids , the 4 nonconvex Kepler-Poinsot polyhedra . Topologically, a regular 2-dimensional tessellation may be regarded as similar to a (3-dimensional) polyhedron, but such that the angular defect is zero. Thus, Schläfli symbols may also be defined for regular tessellations of Euclidean or hyperbolic space in a similar way as for polyhedra. The analogy holds for higher dimensions. For example, the hexagonal tiling is represented by {6,3}. The Schläfli symbol of a regular 4-polytope is of the form { p , q , r }. Its (two-dimensional) faces are regular p -gons ({ p }), the cells are regular polyhedra of type { p , q }, the vertex figures are regular polyhedra of type { q , r }, and the edge figures are regular r -gons (type { r }). See the six convex regular and 10 regular star 4-polytopes . For example, the 120-cell is represented by {5,3,3}. It is made of dodecahedron cells {5,3}, and has 3 cells around each edge. There is one regular tessellation of Euclidean 3-space: the cubic honeycomb , with a Schläfli symbol of {4,3,4}, made of cubic cells and 4 cubes around each edge. There are also 4 regular compact hyperbolic tessellations including {5,3,4}, the hyperbolic small dodecahedral honeycomb , which fills space with dodecahedron cells. If a 4-polytope's symbol is palindromic (e.g. {3,3,3} or {3,4,3}), its bitruncation will only have truncated forms of the vertex figure as cells. For higher-dimensional regular polytopes , the Schläfli symbol is defined recursively as { p 1 , p 2 , ..., p n − 1 } if the facets have Schläfli symbol { p 1 , p 2 , ..., p n − 2 } and the vertex figures have Schläfli symbol { p 2 , p 3 , ..., p n − 1 } . A vertex figure of a facet of a polytope and a facet of a vertex figure of the same polytope are the same: { p 2 , p 3 , ..., p n − 2 } . There are only 3 regular polytopes in 5 dimensions and above: the simplex , {3, 3, 3, ..., 3}; the cross-polytope , {3, 3, ..., 3, 4}; and the hypercube , {4, 3, 3, ..., 3}. 
There are no non-convex regular polytopes above 4 dimensions. If a polytope of dimension n ≥ 2 has Schläfli symbol { p 1 , p 2 , ..., p n − 1 } then its dual has Schläfli symbol { p n − 1 , ..., p 2 , p 1 }. If the sequence is palindromic , i.e. the same forwards and backwards, the polytope is self-dual . Every regular polytope in 2 dimensions (polygon) is self-dual. Uniform prismatic polytopes can be defined and named as a Cartesian product (with operator "×") of lower-dimensional regular polytopes. The prismatic duals, or bipyramids , can be represented as composite symbols, but with the addition operator, "+". Pyramidal polytopes containing vertices orthogonally offset can be represented using a join operator, "∨". Every pair of vertices between joined figures is connected by an edge. In 2D, an isosceles triangle can be represented as ( ) ∨ { } = ( ) ∨ [( ) ∨ ( )]; analogous joins exist in 3D and 4D. When mixing operators, the order of operations from highest to lowest is ×, +, ∨. Axial polytopes containing vertices on parallel offset hyperplanes can be represented by the ‖ operator. A uniform prism is { n }‖{ n } and an antiprism is { n }‖ r { n }. A truncated regular polygon doubles in sides. A regular polygon with even sides can be halved. An altered even-sided regular 2n-gon generates a star figure compound, 2{n}. Coxeter expanded his usage of the Schläfli symbol to quasiregular polyhedra by adding a vertical dimension to the symbol. It was a starting point toward the more general Coxeter diagram . Norman Johnson simplified the notation for vertical symbols with an r prefix. The t-notation is the most general, and directly corresponds to the rings of the Coxeter diagram. Symbols have a corresponding alternation , replacing rings with holes in a Coxeter diagram, with an h prefix standing for half ; the construction is limited by the requirement that neighboring branches must be even-ordered, and it cuts the symmetry order in half. A related operator, a for altered , is shown with two nested holes and represents a compound polyhedron containing both alternated halves, retaining the original full symmetry. A snub is a half form of a truncation, and a holosnub is both halves of an alternated truncation. Alternations have half the symmetry of the Coxeter groups and are represented by unfilled rings. There are two possible choices of which half of the vertices is taken, but the symbol does not imply which one. Quarter forms are shown here with a + inside a hollow ring to imply they are two independent alternations. Altered and holosnubbed forms have the full symmetry of the Coxeter group, and are represented by double unfilled rings, but may be represented as compounds.
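A small sketch can make the dual-reversal rule and the angle-defect classification of {p, q} concrete. The helper names are illustrative; the classification criterion simply restates the angle-defect discussion above.

```python
def dual(symbol):
    """Dual polytope: the Schläfli symbol read in reverse order."""
    return list(reversed(symbol))

def is_self_dual(symbol):
    """A palindromic symbol means the polytope is self-dual."""
    return list(symbol) == dual(symbol)

def classify_pq(p, q):
    """Classify {p, q} by the angle sum at a vertex: q regular p-gons meet
    there, each contributing an interior angle of (1 - 2/p)*180 degrees.
    A sum below 360 degrees (positive angle defect) gives a spherical
    polyhedron, exactly 360 a Euclidean tiling, above 360 a hyperbolic
    tiling."""
    angle_sum = q * (1.0 - 2.0 / p) * 180.0
    if angle_sum < 360.0:
        return "spherical (convex polyhedron)"
    if angle_sum == 360.0:
        return "Euclidean tiling"
    return "hyperbolic tiling"

# Examples: {4,3} (cube) is spherical, {6,3} is a Euclidean tiling,
# {5,4} is hyperbolic; {3,4,3} is self-dual while {4,3,3} is not.
print(classify_pq(4, 3), classify_pq(6, 3), classify_pq(5, 4))
print(is_self_dual([3, 4, 3]), is_self_dual([4, 3, 3]))
```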
https://en.wikipedia.org/wiki/Schläfli_symbol
Schlömilch's series is a Fourier series type expansion of a twice continuously differentiable function in the interval ( 0 , π ) {\displaystyle (0,\pi )} in terms of the Bessel function of the first kind , named after the German mathematician Oskar Schlömilch , who derived the series in 1857. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] The real-valued function f ( x ) {\displaystyle f(x)} has the following expansion: where Some examples of Schlömilch's series are the following:
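Since the closed-form coefficient integrals are omitted above, the following sketch only illustrates the idea of such an expansion by fitting a truncated basis 1, J0(x), J0(2x), ... numerically on (0, π); it does not reproduce Schlömilch's coefficients, and the function names are illustrative.

```python
import numpy as np
from scipy.special import j0

def fit_bessel_series(f, n_terms=8, n_samples=400):
    """Least-squares fit of f on (0, pi) in the truncated basis
    1, J0(x), J0(2x), ..., J0((n_terms - 1) x).  Purely a numerical
    illustration of a Schlömilch-type expansion."""
    x = np.linspace(1e-3, np.pi - 1e-3, n_samples)
    basis = np.column_stack([j0(m * x) for m in range(n_terms)])  # m = 0 gives 1
    coeffs, *_ = np.linalg.lstsq(basis, f(x), rcond=None)
    return coeffs

# Example: a numerical expansion of f(x) = x on (0, pi).
print(fit_bessel_series(lambda x: x))
```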
https://en.wikipedia.org/wiki/Schlömilch's_series
In materials science , Schmid's law (also Schmid factor [ a ] ) states that slip begins in a crystalline material when the resolved shear stress on a slip system reaches a critical value, known as the critical resolved shear stress . [ 2 ] A slip system can be described by two vectors: a vector normal to the slip plane and a vector parallel to the slip direction. The resolved shear stress on a slip system ( τ {\displaystyle \tau } ) is given by [ 2 ] τ = σ cos ⁡ ϕ cos ⁡ λ {\displaystyle \tau =\sigma \cos \phi \cos \lambda } , where σ {\displaystyle \sigma } is the magnitude of the applied tensile stress , ϕ {\displaystyle \phi } is the angle between the slip plane normal and the direction of the applied stress, and λ {\displaystyle \lambda } is the angle between the slip direction and the direction of the applied stress. This equation can also be expressed in terms of the Schmid factor ( m {\displaystyle m} ), given by [ 3 ] m = cos ⁡ ϕ cos ⁡ λ {\displaystyle m=\cos \phi \cos \lambda } According to Schmid's law, slip begins on the slip system when τ = τ c {\displaystyle \tau =\tau _{c}} , where τ c {\displaystyle \tau _{c}} is the critical resolved shear stress. The corresponding tensile stress at which slip begins is the yield stress ( σ y {\displaystyle \sigma _{y}} ), which is related to the critical resolved shear stress by [ 2 ] τ c = m σ y {\displaystyle \tau _{c}=m\sigma _{y}} . The Schmid factor is limited to the range 0 ≤ m ≤ 0.5 {\displaystyle 0\leq m\leq 0.5} . The Schmid factor is minimized when the tensile stress is perpendicular to the slip plane normal ( cos ⁡ ϕ = 0 {\displaystyle \cos \phi =0} ) or perpendicular to the slip direction ( cos ⁡ λ = 0 {\displaystyle \cos \lambda =0} ). The Schmid factor is maximized when ϕ = λ = 45 ∘ {\displaystyle \phi =\lambda =45^{\circ }} . [ 3 ] For crystals with multiple slip systems, Schmid's law indicates that the slip system with the largest Schmid factor will yield first. [ 2 ] The Schmid factor is named after Erich Schmid who coauthored a book with Walter Boas introducing the concept in 1935. [ 4 ]
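Since the Schmid factor is a simple product of two direction cosines, it is easy to evaluate for a given loading axis and slip system. A minimal sketch of the formula stated above (the function name and the FCC example are illustrative, not from the article):

```python
import numpy as np

def schmid_factor(loading_axis, plane_normal, slip_direction):
    """Schmid factor m = cos(phi) * cos(lambda).  phi is the angle between
    the loading axis and the slip-plane normal, lambda the angle between
    the loading axis and the slip direction; vectors need not be
    normalised."""
    a = np.asarray(loading_axis, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    d = np.asarray(slip_direction, dtype=float)
    cos_phi = a @ n / (np.linalg.norm(a) * np.linalg.norm(n))
    cos_lam = a @ d / (np.linalg.norm(a) * np.linalg.norm(d))
    return cos_phi * cos_lam

# Hypothetical FCC example: tension along [100], slip system (111)[-101].
m = schmid_factor([1, 0, 0], [1, 1, 1], [-1, 0, 1])
# |m| = 1/sqrt(6) ~ 0.41; the resolved shear stress is tau = m * sigma.
```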
https://en.wikipedia.org/wiki/Schmid's_law
The double bond rule , postulated by Otto Schmidt in 1932, relates to the enhanced reactivity of sigma bonds attached to an atom adjacent to a double bond. [ 1 ] Examples of this phenomenon include the difference in reactivity of allyl bromides as compared to bromo alkenes and of benzyl bromides as compared to bromobenzenes . The first to observe the phenomenon was Conrad Laar in 1885.
https://en.wikipedia.org/wiki/Schmidt_double_bond_rule
In fluid dynamics , the Schmidt number (denoted Sc ) of a fluid is a dimensionless number defined as the ratio of momentum diffusivity ( kinematic viscosity ) to mass diffusivity , and it is used to characterize fluid flows in which there are simultaneous momentum and mass diffusion convection processes. It was named after the German engineer Ernst Heinrich Wilhelm Schmidt (1892–1975). The Schmidt number is the ratio of the shear component for diffusivity (viscosity divided by density ) to the diffusivity for mass transfer D . It physically relates the relative thickness of the hydrodynamic layer and the mass-transfer boundary layer . [ 1 ] It is defined [ 2 ] as S c = ν D = μ ρ D {\displaystyle \mathrm {Sc} ={\frac {\nu }{D}}={\frac {\mu }{\rho D}}} where (in SI units ) ν is the kinematic viscosity (m 2 /s), D is the mass diffusivity (m 2 /s), μ is the dynamic viscosity (Pa·s) and ρ is the density (kg/m 3 ). The heat transfer analog of the Schmidt number is the Prandtl number ( Pr ). The ratio of thermal diffusivity to mass diffusivity is the Lewis number ( Le ). The turbulent Schmidt number is commonly used in turbulence research and is defined as [ 3 ] S c t = ν t D t {\displaystyle \mathrm {Sc} _{\mathrm {t} }={\frac {\nu _{\mathrm {t} }}{D_{\mathrm {t} }}}} where ν t is the turbulent (eddy) viscosity (m 2 /s) and D t is the turbulent (eddy) mass diffusivity (m 2 /s). The turbulent Schmidt number describes the ratio between the rates of turbulent transport of momentum and the turbulent transport of mass (or any passive scalar). It is related to the turbulent Prandtl number , which is concerned with turbulent heat transfer rather than turbulent mass transfer. It is useful for solving the mass transfer problem of turbulent boundary layer flows. The simplest model for Sc t is the Reynolds analogy, which yields a turbulent Schmidt number of 1. From experimental data and CFD simulations, Sc t ranges from 0.2 to 6. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] For Stirling engines , the Schmidt number is related to the specific power . Gustav Schmidt of the German Polytechnic Institute of Prague published an analysis in 1871 for the now-famous closed-form solution for an idealized isothermal Stirling engine model. [ 9 ] [ 10 ]
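As a quick numerical illustration of the definition (the property values below are typical order-of-magnitude figures assumed for the example, not taken from the article):

```python
def schmidt_number(nu, D):
    """Sc = nu / D: kinematic viscosity over mass diffusivity."""
    return nu / D

def schmidt_number_from_dynamic(mu, rho, D):
    """Equivalent form Sc = mu / (rho * D)."""
    return mu / (rho * D)

# Water vapour diffusing in room-temperature air has roughly
# nu ~ 1.5e-5 m^2/s and D ~ 2.5e-5 m^2/s, giving Sc ~ 0.6.
print(schmidt_number(1.5e-5, 2.5e-5))  # 0.6
```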
https://en.wikipedia.org/wiki/Schmidt_number
In organic chemistry , the Schmidt reaction is an organic reaction in which an azide reacts with a carbonyl derivative, usually an aldehyde , ketone , or carboxylic acid , under acidic conditions to give an amine or amide , with expulsion of nitrogen . [ 1 ] [ 2 ] [ 3 ] It is named after Karl Friedrich Schmidt (1887–1971), who first reported it in 1924 by successfully converting benzophenone and hydrazoic acid to benzanilide . [ 4 ] The intramolecular reaction was not reported until 1991 [ 5 ] but has become important in the synthesis of natural products. [ 6 ] The reaction is effective with carboxylic acids to give amines (above), and with ketones to give amides (below). The reaction is closely related to the Curtius rearrangement except that in this reaction the acyl azide is produced by reaction of the carboxylic acid with hydrazoic acid via the protonated carboxylic acid, in a process akin to a Fischer esterification . An alternative, involving the formation of an acylium ion, becomes more important when the reaction takes place in concentrated acid (>90% sulfuric acid ). [ 7 ] (In the Curtius rearrangement, sodium azide and an acyl chloride are combined to quantitatively generate the acyl azide intermediate, and the rest of the reaction takes place under neutral conditions.) The carboxylic acid Schmidt reaction starts with acylium ion 1 obtained from protonation and loss of water. Reaction with hydrazoic acid forms the protonated azido ketone 2 , which goes through a rearrangement reaction with the alkyl group R migrating over the C-N bond with expulsion of nitrogen. The protonated isocyanate is attacked by water forming carbamate 4 , which after deprotonation loses carbon dioxide to give the amine . In the reaction mechanism for the Schmidt reaction of ketones , the carbonyl group is activated by protonation for nucleophilic addition by the azide, forming azidohydrin 3 , which loses water in an elimination reaction to give diazoiminium 5 . One of the alkyl or aryl groups migrates from carbon to nitrogen with loss of nitrogen to give a nitrilium intermediate 6 , as in the Beckmann rearrangement . Attack by water converts 6 to protonated imidic acid 7 , which undergoes loss of a proton to arrive at the imidic acid tautomer of the final amide . In an alternative mechanism, the migration occurs at 9 , directly after protonation of intermediate 3 , in a manner similar to the Baeyer–Villiger oxidation to give protonated amide 10 . Loss of a proton again furnishes the amide. It has been proposed that the dehydration of 3 to give 5 (and, hence, the Beckmann pathway) is favored by nonaqueous acids like conc. H 2 SO 4 , while aqueous acids like conc. HCl favor migration from 9 (the Baeyer-Villiger pathway). These possibilities have been used to account for the fact that, for certain substrates like α-tetralone , the group that migrates can sometimes change, depending on the conditions used, to deliver either of the two possible amides. [ 8 ] The scope of this reaction has been extended to reactions of carbonyls with alkyl azides R-N 3 . This extension was first reported by J.H. Boyer in 1955 [ 9 ] (hence the name Boyer reaction ), for example, the reaction of 3-nitrobenzaldehyde with β-azido ethanol: Variations involving intramolecular Schmidt reactions have been known since 1991. [ 5 ] These are annulation reactions and have some utility in the synthesis of natural products, [ 6 ] [ 10 ] such as lactams [ 11 ] and alkaloids . [ 12 ]
https://en.wikipedia.org/wiki/Schmidt_reaction
In electronics , a Schmitt trigger is a comparator circuit with hysteresis implemented by applying positive feedback to the noninverting input of a comparator or differential amplifier. It is an active circuit which converts an analog input signal to a digital output signal. The circuit is named a trigger because the output retains its value until the input changes sufficiently to trigger a change. In the non-inverting configuration, when the input is higher than a chosen threshold, the output is high. When the input is below a different (lower) chosen threshold the output is low, and when the input is between the two levels the output retains its value. This dual threshold action is called hysteresis and implies that the Schmitt trigger possesses memory and can act as a bistable multivibrator (latch or flip-flop ). There is a close relation between the two kinds of circuits: a Schmitt trigger can be converted into a latch and a latch can be converted into a Schmitt trigger. Schmitt trigger devices are typically used in signal conditioning applications to remove noise from signals used in digital circuits, particularly mechanical contact bounce in switches . They are also used in closed loop negative feedback configurations to implement relaxation oscillators , used in function generators and switching power supplies . In signal theory, a schmitt trigger is essentially a one-bit quantizer . The Schmitt trigger was invented by American scientist Otto H. Schmitt in 1934 while he was a graduate student, [ 1 ] later described in his doctoral dissertation (1937) as a thermionic trigger . [ 2 ] It was a direct result of Schmitt's study of the neural impulse propagation in squid nerves. [ 2 ] Circuits with hysteresis are based on positive feedback. Any active circuit can be made to behave as a Schmitt trigger by applying positive feedback so that the loop gain is more than one. The positive feedback is introduced by adding a part of the output voltage to the input voltage. These circuits contain an attenuator (the B box in the figure on the right) and an adder (the circle with "+" inside) in addition to an amplifier acting as a comparator. There are three specific techniques for implementing this general idea. The first two of them are dual versions (series and parallel) of the general positive feedback system. In these configurations, the output voltage increases the effective difference input voltage of the comparator by "decreasing the threshold" or by "increasing the circuit input voltage"; the threshold and memory properties are incorporated in one element. In the third technique , the threshold and memory properties are separated. Dynamic threshold (series feedback): when the input voltage crosses the threshold in either direction, the circuit itself changes its own threshold to the opposite direction. For this purpose, it subtracts a part of its output voltage from the threshold (it is equal to adding voltage to the input voltage). Thus the output affects the threshold and does not affect the input voltage. These circuits are implemented by a differential amplifier with "series positive feedback" where the input is connected to the inverting input and the inverted output to the non-inverting input. In this arrangement, attenuation and summation are separated: a voltage divider acts as an attenuator and the loop acts as a simple series voltage summer . Examples are the classic transistor emitter-coupled Schmitt trigger , the op-amp inverting Schmitt trigger , etc. 
Modified input voltage (parallel feedback): when the input voltage crosses the threshold in either direction the circuit changes its input voltage in the same direction (now it adds a part of its output voltage directly to the input voltage). Thus the output augments the input voltage and does not affect the threshold. These circuits can be implemented by a single-ended non-inverting amplifier with "parallel positive feedback" where the input and the output sources are connected through resistors to the input. The two resistors form a weighted parallel summer incorporating both the attenuation and summation. Examples are the less familiar collector-base coupled Schmitt trigger , the op-amp non-inverting Schmitt trigger , etc. Some circuits and elements exhibiting negative resistance can also act in a similar way: negative impedance converters (NIC), neon lamps , tunnel diodes (e.g., a diode with an N-shaped current–voltage characteristic in the first quadrant), etc. In the last case, an oscillating input will cause the diode to move from one rising leg of the "N" to the other and back again as the input crosses the rising and falling switching thresholds. Two different unidirectional thresholds are assigned in this case to two separate open-loop comparators (without hysteresis) driving a bistable multivibrator (latch) or flip-flop . The trigger is toggled high when the input voltage crosses down to up the high threshold and low when the input voltage crosses up to down the low threshold. Again, there is a positive feedback, but now it is concentrated only in the memory cell. Examples are the 555 timer and the switch debouncing circuit. [ 3 ] The symbol for Schmitt triggers in circuit diagrams is a triangle with a symbol inside representing its ideal hysteresis curve. The original Schmitt trigger is based on the dynamic threshold idea that is implemented by a voltage divider with a switchable upper leg (the collector resistors R C1 and R C2 ) and a steady lower leg (R E ). Q1 acts as a comparator with a differential input (Q1 base-emitter junction) consisting of an inverting (Q1 base) and a non-inverting (Q1 emitter) inputs. The input voltage is applied to the inverting input; the output voltage of the voltage divider is applied to the non-inverting input thus determining its threshold. The comparator output drives the second common collector stage Q2 (an emitter follower ) through the voltage divider R 1 -R 2 . The emitter-coupled transistors Q1 and Q2 actually compose an electronic double throw switch that switches over the upper legs of the voltage divider and changes the threshold in a different (to the input voltage) direction. This configuration can be considered as a differential amplifier with series positive feedback between its non-inverting input (Q2 base) and output (Q1 collector) that forces the transition process. There is also a smaller negative feedback introduced by the emitter resistor R E . To make the positive feedback dominate over the negative one and to obtain a hysteresis, the proportion between the two collector resistors is chosen so that R C1 > R C2 . Thus less current flows through and there is less voltage drop across R E when Q1 is switched on than in the case when Q2 is switched on. As a result, the circuit has two different thresholds in regard to ground (V − in the image). Initial state. 
For the NPN transistors shown on the right, imagine the input voltage is below the shared emitter voltage (high threshold for concreteness) so that the Q1 base-emitter junction is reverse-biased and Q1 does not conduct. The Q2 base voltage is determined by the divider described above so that Q2 is conducting and the trigger output is in the low state. The two resistors R C2 and R E form another voltage divider that determines the high threshold. Neglecting V BE , the high threshold value is approximately The output voltage is low but well above ground. It is approximately equal to the high threshold and may not be low enough to be a logical zero for subsequent digital circuits. This may require an additional level shifting circuit following the trigger circuit. Crossing up the high threshold. When the input voltage (Q1 base voltage) rises slightly above the voltage across the emitter resistor R E (the high threshold), Q1 begins conducting. Its collector voltage goes down and Q2 starts toward cutoff, because the voltage divider now provides lower Q2 base voltage. The common emitter voltage follows this change and goes down, making Q1 conduct more. The current begins to steer from the right leg of the circuit to the left one. Although Q1 is conducting more, it passes less current through R E (since R C1 > R C2 ); the emitter voltage continues dropping and the effective Q1 base-emitter voltage continuously increases. This avalanche-like process continues until Q1 becomes completely turned on (saturated) and Q2 turned off. The trigger transitions to the high state and the output (Q2's collector) voltage is close to V+. Now the two resistors R C1 and R E form a voltage divider that determines the low threshold. Its value is approximately Crossing down the low threshold. With the trigger now in the high state, if the input voltage drops enough (below the low threshold), Q1 begins cutting off. Its collector current reduces; as a result, the shared emitter voltage drops slightly and Q1's collector voltage rises significantly. The R 1 -R 2 voltage divider conveys this change to the Q2 base voltage and it begins conducting. The voltage across R E rises, further reducing the Q1 base-emitter potential in the same avalanche-like manner, and Q1 ceases to conduct. Q2 becomes completely turned on (saturated) and the output voltage becomes low again. Non-inverting circuit. The classic non-inverting Schmitt trigger can be turned into an inverting trigger by taking V out from the emitters instead of from a Q2 collector. In this configuration, the output voltage is equal to the dynamic threshold (the shared emitter voltage) and both the output levels stay away from the supply rails. Another disadvantage is that the load changes the thresholds so, it has to be high enough. The base resistor R B is obligatory to prevent the impact of the input voltage through Q1 base-emitter junction on the emitter voltage. Direct-coupled circuit. To simplify the circuit, the R 1 –R 2 voltage divider can be omitted connecting Q1 collector directly to Q2 base. The base resistor R B can be omitted as well so that the input voltage source drives directly Q1's base. [ 4 ] In this case, the common emitter voltage and Q1 collector voltage are not suitable for outputs. Only Q2 collector should be used as an output since, when the input voltage exceeds the high threshold and Q1 saturates, its base-emitter junction is forward biased and transfers the input voltage variations directly to the emitters. 
As a result, the common emitter voltage and Q1 collector voltage follow the input voltage. This situation is typical for over-driven transistor differential amplifiers and ECL gates. Like every latch, the fundamental collector-base coupled bistable circuit operates with hysteresis. It can be converted to a Schmitt trigger by connecting an additional base resistor R to one of the inputs (Q1's base in the figure). The two resistors R and R 4 form a parallel voltage summer (the circle in the block diagram above ) that sums the output (Q2's collector) voltage and the input voltage, and drives the single-ended transistor "comparator" Q1. When the base voltage crosses the threshold (V BE0 ≈ 0.65 V) in either direction, a part of Q2's collector voltage is added in the same direction to the input voltage. Thus the output modifies the input voltage by means of parallel positive feedback and does not affect the threshold (the base-emitter voltage). The emitter-coupled version has the advantage that the input transistor is reverse biased when the input voltage is well below the high threshold, so the transistor is definitely cut off. This was important when germanium transistors were used for implementing the circuit, and this configuration has continued to be popular. The input base resistor can be omitted, since the emitter resistor limits the current when the input base-emitter junction is forward-biased. An emitter-coupled Schmitt trigger logical zero output level may not be low enough and might need an additional output level shifting circuit. The collector-coupled Schmitt trigger has extremely low (almost zero) output at logical zero . Schmitt triggers are commonly implemented using an operational amplifier or a dedicated comparator . [ nb 2 ] An open-loop op-amp and comparator may be considered as an analog-digital device having analog inputs and a digital output that extracts the sign of the voltage difference between its two inputs. [ nb 3 ] The positive feedback is applied by adding a part of the output voltage to the input voltage in a series or parallel manner. Due to the extremely high op-amp gain, the loop gain is also high enough and provides the avalanche-like process. In this circuit, the two resistors R 1 and R 2 form a parallel voltage summer. It adds a part of the output voltage to the input voltage thus augmenting it during and after switching that occurs when the resulting voltage is near ground. This parallel positive feedback creates the needed hysteresis that is controlled by the proportion between the resistances of R 1 and R 2 . The output of the parallel voltage summer is single-ended (it produces voltage with respect to ground) so the circuit does not need an amplifier with a differential input. Since conventional op-amps have a differential input, the inverting input is grounded to make the reference point zero volts. The output voltage always has the same sign as the op-amp input voltage but it does not always have the same sign as the circuit input voltage (the signs of the two input voltages can differ). When the circuit input voltage is above the high threshold or below the low threshold, the output voltage has the same sign as the circuit input voltage (the circuit is non-inverting). It acts like a comparator that switches at a different point depending on whether the output of the comparator is high or low.
When the circuit input voltage is between the thresholds, the output voltage is undefined and it depends on the last state (the circuit behaves as an elementary latch ). For instance, if the Schmitt trigger is currently in the high state, the output will be at the positive power supply rail (+V S ). The output voltage V + of the resistive summer can be found by applying the superposition theorem : The comparator will switch when V + =0. Then R 2 ⋅ V i n = − R 1 ⋅ V s {\displaystyle {R_{2}}\cdot V_{\mathrm {in} }=-{R_{1}}\cdot V_{\mathrm {s} }} (the same result can be obtained by applying the current conservation principle). So V in {\displaystyle V_{\text{in}}} must drop below − R 1 R 2 V s {\displaystyle -{\frac {R_{1}}{R_{2}}}{V_{s}}} to get the output to switch. Once the comparator output has switched to − V S , the threshold becomes + R 1 R 2 V s {\displaystyle +{\frac {R_{1}}{R_{2}}}{V_{s}}} to switch back to high. So this circuit creates a switching band centered on zero, with trigger levels ± R 1 R 2 V s {\displaystyle \pm {\frac {R_{1}}{R_{2}}}{V_{s}}} (it can be shifted to the left or the right by applying a bias voltage to the inverting input). The input voltage must rise above the top of the band, and then below the bottom of the band, for the output to switch on (plus) and then back off (minus). If R 1 is zero or R 2 is infinity (i.e., an open circuit ), the band collapses to zero width, and it behaves as a standard comparator. The transfer characteristic is shown in the picture on the left. The value of the threshold T is given by R 1 R 2 V s {\displaystyle {\frac {R_{1}}{R_{2}}}{V_{s}}} and the maximum value of the output M is the power supply rail. A unique property of circuits with parallel positive feedback is the impact on the input source. [ citation needed ] In circuits with negative parallel feedback (e.g., an inverting amplifier), the virtual ground at the inverting input separates the input source from the op-amp output. Here there is no virtual ground, and the steady op-amp output voltage is applied through R 1 -R 2 network to the input source. The op-amp output passes an opposite current through the input source (it injects current into the source when the input voltage is positive and it draws current from the source when it is negative). A practical Schmitt trigger with precise thresholds is shown in the figure on the right. The transfer characteristic has exactly the same shape of the previous basic configuration, and the threshold values are the same as well. On the other hand, in the previous case, the output voltage was depending on the power supply, while now it is defined by the Zener diodes (which could also be replaced with a single double-anode Zener diode ). In this configuration, the output levels can be modified by appropriate choice of Zener diode, and these levels are resistant to power supply fluctuations (i.e., they increase the PSRR of the comparator). The resistor R 3 is there to limit the current through the diodes, and the resistor R 4 minimizes the input voltage offset caused by the comparator's input leakage currents (see limitations of real op-amps ). In the inverting version, the attenuation and summation are separated. The two resistors R 1 and R 2 act only as a "pure" attenuator (voltage divider). The input loop acts as a series voltage summer that adds a part of the output voltage in series to the circuit input voltage. 
This series positive feedback creates the needed hysteresis that is controlled by the proportion between the resistances of R 1 and the whole resistance (R 1 and R 2 ). The effective voltage applied to the op-amp input is floating so the op-amp must have a differential input. The circuit is named inverting since the output voltage always has an opposite sign to the input voltage when it is out of the hysteresis cycle (when the input voltage is above the high threshold or below the low threshold). However, if the input voltage is within the hysteresis cycle (between the high and low thresholds), the circuit can be inverting as well as non-inverting. The output voltage is undefined and it depends on the last state so the circuit behaves like an elementary latch. To compare the two versions, the circuit operation will be considered under the same conditions as above. If the Schmitt trigger is currently in the high state, the output will be at the positive power supply rail (+V S ). The output voltage V + of the voltage divider is V + = R 1 R 1 + R 2 V s {\displaystyle V_{+}={\frac {R_{1}}{R_{1}+R_{2}}}{V_{s}}} . The comparator will switch when V in = V + . So V in {\displaystyle V_{\text{in}}} must rise above this voltage to get the output to switch. Once the comparator output has switched to − V S , the threshold becomes − R 1 R 1 + R 2 V s {\displaystyle -{\frac {R_{1}}{R_{1}+R_{2}}}{V_{s}}} to switch back to high. So this circuit creates a switching band centered on zero, with trigger levels ± R 1 R 1 + R 2 V s {\displaystyle \pm {\frac {R_{1}}{R_{1}+R_{2}}}{V_{s}}} (it can be shifted to the left or the right by connecting R 1 to a bias voltage). The input voltage must rise above the top of the band, and then below the bottom of the band, for the output to switch off (minus) and then back on (plus). If R 1 is zero (i.e., a short circuit ) or R 2 is infinity, the band collapses to zero width, and it behaves as a standard comparator. In contrast with the parallel version, this circuit does not impact on the input source since the source is separated from the voltage divider output by the high op-amp input differential impedance. In the inverting configuration, the voltage drop across resistor R 1 sets the reference voltages, i.e., the upper threshold voltage (V+) and the lower threshold voltage (V−) against which the input signal is compared. These voltages are fixed because the output voltage and the resistor values are fixed, so by changing the drop across R 1 the threshold voltages can be varied. Adding a bias voltage in series with resistor R 1 varies the drop across it and hence the threshold voltages; the desired reference voltages can be obtained by choosing the bias voltage, and the above equations are modified accordingly. Schmitt triggers are typically used in open loop configurations for noise immunity and closed loop configurations to implement function generators . One application of a Schmitt trigger is to increase the noise immunity in a circuit with only a single input threshold. With only one input threshold, a noisy input signal [ nb 4 ] near that threshold could cause the output to switch rapidly back and forth from noise alone. A noisy input signal to a Schmitt trigger near one threshold can cause only one switch in output value, after which it would have to move beyond the other threshold in order to cause another switch. For example, an amplified infrared photodiode may generate an electric signal that switches frequently between its absolute lowest value and its absolute highest value.
This signal is then low-pass filtered to form a smooth signal that rises and falls corresponding to the relative amount of time the switching signal is on and off. That filtered output passes to the input of a Schmitt trigger. The net effect is that the output of the Schmitt trigger only passes from low to high after a received infrared signal excites the photodiode for longer than some known period, and once the Schmitt trigger is high, it only moves low after the infrared signal ceases to excite the photodiode for longer than a similar known period. Whereas the photodiode is prone to spurious switching due to noise from the environment, the delay added by the filter and Schmitt trigger ensures that the output only switches when there is certainly an input stimulating the device. Schmitt triggers are common in many switching circuits for similar reasons (e.g., for switch debouncing ). The following 7400 series devices include a Schmitt trigger on their input(s): (see List of 7400-series integrated circuits ) A number of 4000 series devices include a Schmitt trigger on their input(s): (see List of 4000-series integrated circuits ) Schmitt input configurable single-gate chips: (see List of 7400-series integrated circuits#One gate chips ) A Schmitt trigger is a bistable multivibrator , and it can be used to implement another type of multivibrator, the relaxation oscillator . This is achieved by connecting a single RC integrating circuit between the output and the input of an inverting Schmitt trigger. The output will be a continuous square wave whose frequency depends on the values of R and C, and the threshold points of the Schmitt trigger. Since multiple Schmitt trigger circuits can be provided by a single integrated circuit (e.g. the 4000 series CMOS device type 40106 contains 6 of them), a spare section of the IC can be quickly pressed into service as a simple and reliable oscillator with only two external components. Here, a comparator-based Schmitt trigger is used in its inverting configuration . Additionally, slow negative feedback is added with an integrating RC network . The result, which is shown on the right, is that the output automatically oscillates from V SS to V DD as the capacitor charges from one Schmitt trigger threshold to the other.
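The threshold expressions derived above, together with the standard expression for the period of the comparator-based relaxation oscillator, can be collected in a short sketch. The period formula is a textbook result assumed here rather than quoted from the article, and the function names and component values are illustrative.

```python
import math

def noninverting_thresholds(r1, r2, v_s):
    """Trigger levels +/-(R1/R2)*Vs of the op-amp non-inverting Schmitt
    trigger described above (symmetric supply rails +/-Vs assumed)."""
    t = (r1 / r2) * v_s
    return (-t, +t)

def inverting_thresholds(r1, r2, v_s):
    """Trigger levels +/-R1/(R1+R2)*Vs of the op-amp inverting Schmitt
    trigger described above."""
    t = r1 / (r1 + r2) * v_s
    return (-t, +t)

def relaxation_oscillator_period(r, c, r1, r2):
    """Oscillation period of the relaxation oscillator formed by adding an
    RC integrator to the inverting Schmitt trigger, using the standard
    result T = 2*R*C*ln((1+beta)/(1-beta)) with beta = R1/(R1+R2)."""
    beta = r1 / (r1 + r2)
    return 2.0 * r * c * math.log((1.0 + beta) / (1.0 - beta))

# Example: R1 = R2 = 10 kOhm, Vs = 12 V gives thresholds of +/-6 V
# (inverting) and +/-12 V (non-inverting); with R = 10 kOhm and
# C = 100 nF the oscillator period is about 2.2 ms.
```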
https://en.wikipedia.org/wiki/Schmitt_trigger
Schmutzdecke [ 1 ] ( German , "dirt cover" or dirty skin, sometimes wrongly spelled schmutzedecke ) is a hypogeal biological layer formed on the surface of a slow sand filter and a form of periphyton . [ 2 ] The schmutzdecke is the layer that provides the effective purification in potable water treatment, the underlying sand providing the support medium for this biological treatment layer. The composition of any particular schmutzdecke varies, but will typically consist of a gelatinous biofilm matrix of bacteria , fungi , protozoa , rotifera and a range of aquatic insect larvae. As a schmutzdecke ages, more algae tend to develop, and larger aquatic organisms may be present including some bryozoa , snails and annelid worms.
https://en.wikipedia.org/wiki/Schmutzdecke
Schneider flow describes the axisymmetric outer flow induced by a laminar or turbulent jet having a large jet Reynolds number or by a laminar plume with a large Grashof number , in the case where the fluid domain is bounded by a wall. When the jet Reynolds number or the plume Grashof number is large, the full flow field consists of two regions of different extent: a thin boundary-layer flow that may be identified as the jet or as the plume, and a slowly moving fluid in the large outer region encompassing the jet or the plume. The Schneider flow describing the latter motion is an exact solution of the Navier-Stokes equations , discovered by Wilhelm Schneider in 1981. [ 1 ] The solution was also discovered by A. A. Golubinskii and V. V. Sychev in 1979, [ 2 ] [ 3 ] however, it was never applied to flows entrained by jets. The solution is an extension of Taylor's potential flow solution [ 4 ] to arbitrary Reynolds number . For laminar or turbulent jets and for laminar plumes, the volumetric entrainment rate per unit axial length is constant, as can be seen from the solutions of the Schlichting jet and the Yih plume. Thus, the jet or plume can be considered as a line sink that drives the motion in the outer region, as was first done by G. I. Taylor . Prior to Schneider, it was assumed that this outer fluid motion is also a large Reynolds number flow; hence the outer fluid motion was assumed to be a potential flow solution, which was solved by G. I. Taylor in 1958. For a turbulent plume, the entrainment is not constant; nevertheless, the outer fluid is still governed by Taylor's solution. Though Taylor's solution is still true for a turbulent jet, for a laminar jet or a laminar plume the effective Reynolds number of the outer fluid is found to be of order unity, since the entrainment by the sink in these cases is such that the flow is not inviscid. In this case, the full Navier-Stokes equations have to be solved for the outer fluid motion and, at the same time, since the fluid is bounded from below by a solid wall, the solution has to satisfy the no-slip condition. Schneider obtained a self-similar solution for this outer fluid motion, which naturally reduces to Taylor's potential flow solution as the entrainment rate by the line sink is increased. Suppose a conical wall of semi-angle α {\displaystyle \alpha } with the polar axis along the cone axis, and assume the vertex of the solid cone sits at the origin of the spherical coordinates ( r , θ , ϕ ) {\displaystyle (r,\theta ,\phi )} , with the cone extending along the negative axis. Now, put the line sink along the positive side of the polar axis. Set this way, α = π / 2 {\displaystyle \alpha =\pi /2} represents the common case of a flat wall with a jet or plume emerging from the origin. The case α = π {\displaystyle \alpha =\pi } corresponds to a jet/plume issuing from a thin injector. The flow is axisymmetric with zero azimuthal motion, i.e., the velocity components are ( v r , v θ , 0 ) {\displaystyle (v_{r},v_{\theta },0)} . The usual technique to study the flow is to introduce the Stokes stream function ψ {\displaystyle \psi } such that v r = 1 r 2 sin ⁡ θ ∂ ψ ∂ θ , v θ = − 1 r sin ⁡ θ ∂ ψ ∂ r {\displaystyle v_{r}={\frac {1}{r^{2}\sin \theta }}{\frac {\partial \psi }{\partial \theta }},\qquad v_{\theta }=-{\frac {1}{r\sin \theta }}{\frac {\partial \psi }{\partial r}}} . Introducing ξ = cos ⁡ θ {\displaystyle \xi =\cos \theta } as the replacement for θ {\displaystyle \theta } and introducing the self-similar form ψ = K ν r f ( ξ ) {\displaystyle \psi =K\nu rf(\xi )} into the axisymmetric Navier-Stokes equations, we obtain [ 5 ] where the constant K {\displaystyle K} is such that the volumetric entrainment rate per unit axial length is equal to 2 π K ν {\displaystyle 2\pi K\nu } .
For a laminar jet, K = 4 {\displaystyle K=4} and for a laminar plume, it depends on the Prandtl number P r {\displaystyle Pr} ; for example, with P r = 1 {\displaystyle Pr=1} , we have K = 6 {\displaystyle K=6} and with P r = 2 {\displaystyle Pr=2} , we have K = 4 {\displaystyle K=4} . For a turbulent jet, this constant is of the order of the jet Reynolds number, which is a large number. The above equation can easily be reduced to a Riccati equation by integrating thrice, a procedure that is the same as in the Landau–Squire jet (the main difference between the Landau-Squire jet and the current problem lies in the boundary conditions). The boundary conditions on the conical wall ξ = ξ w = cos ⁡ α {\displaystyle \xi =\xi _{w}=\cos \alpha } become and along the line sink ξ = 1 {\displaystyle \xi =1} , we have The problem has been solved numerically from here. The numerical solution also provides the values f ′ ( 1 ) {\displaystyle f'(1)} (the radial velocity at the axis), which must be accounted for in the first-order boundary analysis of the inner jet problem at the axis. For a turbulent jet, K ≫ 1 {\displaystyle K\gg 1} , the linear terms in the equation can be neglected everywhere except near a small boundary layer along the wall. Then, neglecting the no-slip conditions ( f ′ ( ξ w ) = 0 {\displaystyle f'(\xi _{w})=0} ) at the wall, the solution, which was provided by G. I. Taylor in 1958, is given by [ 4 ] In the case of axisymmetric turbulent plumes where the entrainment rate per unit axial length of the plume increases like r 2 / 3 {\displaystyle r^{2/3}} , [ 6 ] Taylor's solution is given by ψ = C B 1 / 3 r 5 / 3 g ( ξ ) {\displaystyle \psi =CB^{1/3}r^{5/3}g(\xi )} where C {\displaystyle C} is a constant, B {\displaystyle B} is the specific buoyancy flux and [ 5 ] in which P 2 / 3 1 {\displaystyle P_{2/3}^{1}} denotes the associated Legendre function of the first kind with degree 2 / 3 {\displaystyle 2/3} and order 1 {\displaystyle 1} . The Schneider flow describes the outer motion driven by the jets or plumes and it becomes invalid in a thin region encompassing the axis where the jet or plume resides. For laminar jets, the inner solution is described by the Schlichting jet and for laminar plumes, the inner solution is prescribed by the Yih plume. A composite solution stitching together the inner thin Schlichting solution and the outer Schneider solution can be constructed by the method of matched asymptotic expansions . For the laminar jet, the composite solution is given by [ 5 ] in which the first term represents the Schlichting jet (with a characteristic jet thickness R e θ {\displaystyle Re\,\theta } ), the second term represents the Schneider flow and the third term is the subtraction of the matching conditions. Here R e = ν − 1 ( J / 2 π ρ ) 1 / 2 {\displaystyle Re=\nu ^{-1}(J/2\pi \rho )^{1/2}} is the Reynolds number of the jet and J / ρ {\displaystyle J/\rho } is the kinematic momentum flux of the jet. A similar composite solution can be constructed for the laminar plumes. The exact solution of the Navier-Stokes equations was verified experimentally by Zauner in 1985. [ 7 ] Further analysis [ 8 ] [ 9 ] showed that the axial momentum flux decays slowly along the axis, unlike in the Schlichting jet solution, and it is found that the Schneider flow becomes invalid when the distance from the origin increases to a distance of the order of the exponential of the square of the jet Reynolds number; thus the domain of validity of the Schneider solution increases with increasing jet Reynolds number.
The presence of swirling motion, i.e., v ϕ ≠ 0 {\displaystyle v_{\phi }\neq 0} is shown not to influence the axial motion given by ψ = K ν r f ( ξ ) {\displaystyle \psi =K\nu rf(\xi )} provided K ∼ O ( 1 ) {\displaystyle K\sim O(1)} . If K {\displaystyle K} is very large, the presence of swirl completely alters the motion on the axial plane. For K ∼ O ( 1 ) {\displaystyle K\sim O(1)} , the azimuthal solution can be solved in terms of the circulation 2 π Γ {\displaystyle 2\pi \Gamma } , where Γ = r sin ⁡ θ v ϕ {\displaystyle \Gamma =r\sin \theta v_{\phi }} . The solution can be described in terms of the self-similar solution of the second kind , Γ = A r λ Λ ( ξ ) {\displaystyle \Gamma =Ar^{\lambda }\Lambda (\xi )} , where A {\displaystyle A} is an unknown constant and λ {\displaystyle \lambda } is an eigenvalue. The function Λ ( ξ ) {\displaystyle \Lambda (\xi )} satisfies [ 5 ] subjected to the boundary conditions Λ ( ξ w ) = 0 {\displaystyle \Lambda (\xi _{w})=0} and ( 1 − ξ ) 1 / 2 Λ ′ → 0 {\displaystyle (1-\xi )^{1/2}\Lambda '\rightarrow 0} as ξ → 1 {\displaystyle \xi \rightarrow 1} .
https://en.wikipedia.org/wiki/Schneider_flow
In mathematics , the Schneider–Lang theorem is a refinement by Lang (1966) of a theorem of Schneider (1949) about the transcendence of values of meromorphic functions . The theorem implies both the Hermite–Lindemann and Gelfond–Schneider theorems , and implies the transcendence of some values of elliptic functions and elliptic modular functions . Fix a number field K and meromorphic functions f 1 , ..., f N , of which at least two are algebraically independent and have orders ρ 1 and ρ 2 , and such that f j ′ ∈ K [ f 1 , ..., f N ] for any j . Then there are at most (ρ 1 + ρ 2 )[ K : Q ] distinct complex numbers ω 1 , ..., ω m such that f i ( ω j ) ∈ K for all combinations of i and j . To prove the result Lang took two algebraically independent functions from f 1 , ..., f N , say, f and g , and then created an auxiliary function F ∈ K [ f , g ] . Using Siegel's lemma , he then showed that one could assume F vanished to a high order at the ω 1 , ..., ω m . Thus a high-order derivative of F takes a value of small size at one of the ω i , "size" here referring to an algebraic property of a number . Using the maximum modulus principle , Lang also found a separate estimate for absolute values of derivatives of F . Standard results connect the size of a number and its absolute value, and the combined estimates imply the claimed bound on m . Bombieri & Lang (1970) and Bombieri (1970) generalized the result to functions of several variables. Bombieri showed that if K is an algebraic number field and f 1 , ..., f N are meromorphic functions of d complex variables of order at most ρ generating a field K ( f 1 , ..., f N ) of transcendence degree at least d + 1 that is closed under all partial derivatives , then the set of points where all the functions f n have values in K is contained in an algebraic hypersurface in C d whose degree is bounded in terms of d , ρ and [ K : Q ]. Waldschmidt (1979 , theorem 5.1.1) gave a simpler proof of Bombieri's theorem, with a slightly stronger bound of d (ρ 1 + ... + ρ d +1 )[ K : Q ] for the degree, where the ρ j are the orders of d + 1 algebraically independent functions. The special case d = 1 gives the Schneider–Lang theorem, with a bound of (ρ 1 + ρ 2 )[ K : Q ] for the number of points. If p {\displaystyle p} is a polynomial with integer coefficients then the functions z 1 , . . . , z n , e p ( z 1 , . . . , z n ) {\displaystyle z_{1},...,z_{n},e^{p(z_{1},...,z_{n})}} are all algebraic at a dense set of points of the hypersurface p = 0 {\displaystyle p=0} , so such a hypersurface cannot in general be avoided.
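As an illustration of how the bound is used, the following sketch deduces the Hermite–Lindemann theorem; this is a standard application written out under the stated assumptions, not text from the article.

```latex
% Sketch: Hermite--Lindemann from Schneider--Lang.
% Assume \alpha \neq 0 and e^{\alpha} are both algebraic, and set
% K = \mathbf{Q}(\alpha, e^{\alpha}),\quad f_1(z) = z,\quad f_2(z) = e^{z}.
% The functions are algebraically independent and satisfy
\[
  f_1' = 1 \in K[f_1, f_2], \qquad f_2' = f_2 \in K[f_1, f_2],
  \qquad \rho_1 + \rho_2 \le 2 .
\]
% They take values in K at every point \omega_m = m\alpha:
\[
  f_1(m\alpha) = m\alpha \in K, \qquad
  f_2(m\alpha) = \bigl(e^{\alpha}\bigr)^{m} \in K
  \qquad (m = 1, 2, 3, \dots),
\]
% giving infinitely many such points and contradicting the bound
% (\rho_1 + \rho_2)\,[K : \mathbf{Q}].  Hence e^{\alpha} is transcendental.
```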
https://en.wikipedia.org/wiki/Schneider–Lang_theorem
In additive number theory , the Schnirelmann density of a sequence of numbers is a way to measure how "dense" the sequence is. It is named after Russian mathematician Lev Schnirelmann , who was the first to study it. [ 1 ] [ 2 ] The Schnirelmann density of a set of natural numbers A is defined as σ A = inf n A ( n ) n {\displaystyle \sigma A=\inf _{n}{\frac {A(n)}{n}}} where A ( n ) denotes the number of elements of A not exceeding n and inf is the infimum . [ 3 ] The Schnirelmann density is well-defined even if the limit of A ( n )/ n as n → ∞ fails to exist (see upper and lower asymptotic density ). By definition, 0 ≤ A ( n ) ≤ n and n σ A ≤ A ( n ) for all n , and therefore 0 ≤ σ A ≤ 1 , and σ A = 1 if and only if A = N . Furthermore, σ A ≤ A ( n ) / n {\displaystyle \sigma A\leq A(n)/n} for every n , so the Schnirelmann density is sensitive to the first values of a set: in particular, if 1 ∉ A then σ A = 0, and if 2 ∉ A then σ A ≤ 1/2. Consequently, the Schnirelmann densities of the even numbers and the odd numbers, which one might expect to agree, are 0 and 1/2 respectively. Schnirelmann and Yuri Linnik exploited this sensitivity. If we set G 2 = { k 2 } k = 1 ∞ {\displaystyle {\mathfrak {G}}^{2}=\{k^{2}\}_{k=1}^{\infty }} , then Lagrange's four-square theorem can be restated as σ ( G 2 ⊕ G 2 ⊕ G 2 ⊕ G 2 ) = 1 {\displaystyle \sigma ({\mathfrak {G}}^{2}\oplus {\mathfrak {G}}^{2}\oplus {\mathfrak {G}}^{2}\oplus {\mathfrak {G}}^{2})=1} . (Here the symbol A ⊕ B {\displaystyle A\oplus B} denotes the sumset of A ∪ { 0 } {\displaystyle A\cup \{0\}} and B ∪ { 0 } {\displaystyle B\cup \{0\}} .) It is clear that σ G 2 = 0 {\displaystyle \sigma {\mathfrak {G}}^{2}=0} . In fact, we still have σ ( G 2 ⊕ G 2 ) = 0 {\displaystyle \sigma ({\mathfrak {G}}^{2}\oplus {\mathfrak {G}}^{2})=0} , and one might ask at what point the sumset attains Schnirelmann density 1 and how it increases. It actually is the case that σ ( G 2 ⊕ G 2 ⊕ G 2 ) = 5 / 6 {\displaystyle \sigma ({\mathfrak {G}}^{2}\oplus {\mathfrak {G}}^{2}\oplus {\mathfrak {G}}^{2})=5/6} and one sees that sumsetting G 2 {\displaystyle {\mathfrak {G}}^{2}} once again yields a more populous set, namely all of N {\displaystyle \mathbb {N} } . Schnirelmann further succeeded in developing these ideas into the following theorems, aiming towards additive number theory, and proved them to be a novel resource (if not a greatly powerful one) for attacking important problems, such as Waring's problem and Goldbach's conjecture . Theorem. Let A {\displaystyle A} and B {\displaystyle B} be subsets of N {\displaystyle \mathbb {N} } . Then σ ( A ⊕ B ) ≥ σ A + σ B − σ A ⋅ σ B . {\displaystyle \sigma (A\oplus B)\geq \sigma A+\sigma B-\sigma A\cdot \sigma B.} Note that σ A + σ B − σ A ⋅ σ B = 1 − ( 1 − σ A ) ( 1 − σ B ) {\displaystyle \sigma A+\sigma B-\sigma A\cdot \sigma B=1-(1-\sigma A)(1-\sigma B)} . Inductively, we have the following generalization. Corollary. Let A i ⊆ N {\displaystyle A_{i}\subseteq \mathbb {N} } be a finite family of subsets of N {\displaystyle \mathbb {N} } . Then σ ( ⨁ i A i ) ≥ 1 − ∏ i ( 1 − σ A i ) . {\displaystyle \sigma \left(\bigoplus _{i}A_{i}\right)\geq 1-\prod _{i}\left(1-\sigma A_{i}\right).} The theorem provides the first insight into how sumsets accumulate. It seems unfortunate that its conclusion stops short of showing that σ {\displaystyle \sigma } is superadditive . Yet, Schnirelmann provided us with the following results, which sufficed for most of his purpose. Theorem. Let A {\displaystyle A} and B {\displaystyle B} be subsets of N {\displaystyle \mathbb {N} } . If σ A + σ B ≥ 1 {\displaystyle \sigma A+\sigma B\geq 1} , then A ⊕ B = N . {\displaystyle A\oplus B=\mathbb {N} .} Theorem.
( Schnirelmann ) Let A ⊆ N {\displaystyle A\subseteq \mathbb {N} } . If σ A > 0 {\displaystyle \sigma A>0} then there exists k {\displaystyle k} such that ⨁ i = 1 k A = N . {\displaystyle \bigoplus _{i=1}^{k}A=\mathbb {N} .} A subset A ⊆ N {\displaystyle A\subseteq \mathbb {N} } with the property that A ⊕ A ⊕ ⋯ ⊕ A = N {\displaystyle A\oplus A\oplus \cdots \oplus A=\mathbb {N} } for a finite sum, is called an additive basis , and the least number of summands required is called the degree (sometimes order ) of the basis. Thus, the last theorem states that any set with positive Schnirelmann density is an additive basis. In this terminology, the set of squares G 2 = { k 2 } k = 1 ∞ {\displaystyle {\mathfrak {G}}^{2}=\{k^{2}\}_{k=1}^{\infty }} is an additive basis of degree 4. (About an open problem for additive bases, see Erdős–Turán conjecture on additive bases .) Historically the theorems above were pointers to the following result, at one time known as the α + β {\displaystyle \alpha +\beta } hypothesis. It was used by Edmund Landau and was finally proved by Henry Mann in 1942. Theorem. ( Mann 1942 ) Let A {\displaystyle A} and B {\displaystyle B} be subsets of N {\displaystyle \mathbb {N} } . In case that A ⊕ B ≠ N {\displaystyle A\oplus B\neq \mathbb {N} } , we still have σ ( A ⊕ B ) ≥ σ A + σ B . {\displaystyle \sigma (A\oplus B)\geq \sigma A+\sigma B.} An analogue of this theorem for lower asymptotic density was obtained by Kneser. [ 4 ] At a later date, E. Artin and P. Scherk simplified the proof of Mann's theorem. [ 5 ] Let k {\displaystyle k} and N {\displaystyle N} be natural numbers. Let G k = { i k } i = 1 ∞ {\displaystyle {\mathfrak {G}}^{k}=\{i^{k}\}_{i=1}^{\infty }} . Define r N k ( n ) {\displaystyle r_{N}^{k}(n)} to be the number of non-negative integral solutions to the equation and R N k ( n ) {\displaystyle R_{N}^{k}(n)} to be the number of non-negative integral solutions to the inequality in the variables x i {\displaystyle x_{i}} , respectively. Thus R N k ( n ) = ∑ i = 0 n r N k ( i ) {\displaystyle R_{N}^{k}(n)=\sum _{i=0}^{n}r_{N}^{k}(i)} . We have The volume of the N {\displaystyle N} -dimensional body defined by 0 ≤ x 1 k + x 2 k + ⋯ + x N k ≤ n {\displaystyle 0\leq x_{1}^{k}+x_{2}^{k}+\cdots +x_{N}^{k}\leq n} , is bounded by the volume of the hypercube of size n 1 / k {\displaystyle n^{1/k}} , hence R N k ( n ) = ∑ i = 0 n r N k ( i ) ≤ n N / k {\displaystyle R_{N}^{k}(n)=\sum _{i=0}^{n}r_{N}^{k}(i)\leq n^{N/k}} . The hard part is to show that this bound still works on the average, i.e., Lemma. ( Linnik ) For all k ∈ N {\displaystyle k\in \mathbb {N} } there exists N ∈ N {\displaystyle N\in \mathbb {N} } and a constant c = c ( k ) {\displaystyle c=c(k)} , depending only on k {\displaystyle k} , such that for all n ∈ N {\displaystyle n\in \mathbb {N} } , r N k ( m ) < c n N k − 1 {\displaystyle r_{N}^{k}(m)<cn^{{\frac {N}{k}}-1}} for all 0 ≤ m ≤ n . {\displaystyle 0\leq m\leq n.} With this at hand, the following theorem can be elegantly proved. Theorem. For all k {\displaystyle k} there exists N {\displaystyle N} for which σ ( N G k ) > 0 {\displaystyle \sigma (N{\mathfrak {G}}^{k})>0} . We have thus established the general solution to Waring's Problem: Corollary. ( Hilbert 1909 ) For all k {\displaystyle k} there exists N {\displaystyle N} , depending only on k {\displaystyle k} , such that every positive integer n {\displaystyle n} can be expressed as the sum of at most N {\displaystyle N} many k {\displaystyle k} -th powers. 
In 1930 Schnirelmann used these ideas in conjunction with the Brun sieve to prove Schnirelmann's theorem , [ 1 ] [ 2 ] that any natural number greater than 1 can be written as the sum of not more than C prime numbers , where C is an effectively computable constant: [ 6 ] Schnirelmann obtained C < 800000. [ 7 ] Schnirelmann's constant is the lowest number C with this property. [ 6 ] Olivier Ramaré showed in ( Ramaré 1995 ) that Schnirelmann's constant is at most 7, [ 6 ] improving the earlier upper bound of 19 obtained by Hans Riesel and R. C. Vaughan . Schnirelmann's constant is at least 3; Goldbach's conjecture implies that this is the constant's actual value. [ 6 ] In 2013, Harald Helfgott proved Goldbach's weak conjecture for all odd numbers. Therefore, Schnirelmann's constant is at most 4. [ 8 ] [ 9 ] [ 10 ] [ 11 ] Khintchin proved that the sequence of squares, though of zero Schnirelmann density, when added to a sequence of Schnirelmann density between 0 and 1, increases the density: This was soon simplified and extended by Erdős , who showed, that if A is any sequence with Schnirelmann density α and B is an additive basis of order k then and this was improved by Plünnecke to Sequences with this property, of increasing density less than one by addition, were named essential components by Khintchin. Linnik showed that an essential component need not be an additive basis [ 14 ] as he constructed an essential component that has x o(1) elements less than x . More precisely, the sequence has elements less than x for some c < 1. This was improved by E. Wirsing to For a while, it remained an open problem how many elements an essential component must have. Finally, Ruzsa determined that for every ε > 0 there is an essential component which has at most c (log x ) 1+ ε elements up to x , but there is no essential component which has c (log x ) 1+ o (1) elements up to x . [ 15 ] [ 16 ]
https://en.wikipedia.org/wiki/Schnirelmann-Landau_conjecture
In additive number theory , the Schnirelmann density of a sequence of numbers is a way to measure how "dense" the sequence is. It is named after Russian mathematician Lev Schnirelmann , who was the first to study it. [ 1 ] [ 2 ] The Schnirelmann density of a set of natural numbers A is defined as σ A = inf n ≥ 1 A ( n )/ n , where A ( n ) denotes the number of elements of A not exceeding n and inf is the infimum . [ 3 ] The Schnirelmann density is well-defined even if the limit of A ( n )/ n as n → ∞ fails to exist (see upper and lower asymptotic density ). By definition, 0 ≤ A ( n ) ≤ n and n σ A ≤ A ( n ) for all n , and therefore 0 ≤ σ A ≤ 1 , and σ A = 1 if and only if A = N . Furthermore, the Schnirelmann density is sensitive to the first values of a set: if k ∉ A , then σ A ≤ 1 − 1/ k . In particular, 1 ∉ A implies σ A = 0 , and 2 ∉ A implies σ A ≤ 1/2 . Consequently, the Schnirelmann densities of the even numbers and the odd numbers, which one might expect to agree, are 0 and 1/2 respectively. Schnirelmann and Yuri Linnik exploited this sensitivity. If we set G 2 = { k 2 } k = 1 ∞ {\displaystyle {\mathfrak {G}}^{2}=\{k^{2}\}_{k=1}^{\infty }} , then Lagrange's four-square theorem can be restated as σ ( G 2 ⊕ G 2 ⊕ G 2 ⊕ G 2 ) = 1 {\displaystyle \sigma ({\mathfrak {G}}^{2}\oplus {\mathfrak {G}}^{2}\oplus {\mathfrak {G}}^{2}\oplus {\mathfrak {G}}^{2})=1} . (Here the symbol A ⊕ B {\displaystyle A\oplus B} denotes the sumset of A ∪ { 0 } {\displaystyle A\cup \{0\}} and B ∪ { 0 } {\displaystyle B\cup \{0\}} .) It is clear that σ G 2 = 0 {\displaystyle \sigma {\mathfrak {G}}^{2}=0} . In fact, we still have σ ( G 2 ⊕ G 2 ) = 0 {\displaystyle \sigma ({\mathfrak {G}}^{2}\oplus {\mathfrak {G}}^{2})=0} , and one might ask at what point the sumset attains Schnirelmann density 1 and how it increases. It is in fact the case that σ ( G 2 ⊕ G 2 ⊕ G 2 ) = 5 / 6 {\displaystyle \sigma ({\mathfrak {G}}^{2}\oplus {\mathfrak {G}}^{2}\oplus {\mathfrak {G}}^{2})=5/6} , and taking the sumset with G 2 {\displaystyle {\mathfrak {G}}^{2}} once more yields a more populous set, namely all of N {\displaystyle \mathbb {N} } . Schnirelmann developed these ideas into the following theorems, which proved to be a novel (if not especially powerful) tool for attacking important problems of additive number theory, such as Waring's problem and Goldbach's conjecture . Theorem. Let A {\displaystyle A} and B {\displaystyle B} be subsets of N {\displaystyle \mathbb {N} } . Then σ ( A ⊕ B ) ≥ σ A + σ B − σ A ⋅ σ B . {\displaystyle \sigma (A\oplus B)\geq \sigma A+\sigma B-\sigma A\cdot \sigma B.} Note that σ A + σ B − σ A ⋅ σ B = 1 − ( 1 − σ A ) ( 1 − σ B ) {\displaystyle \sigma A+\sigma B-\sigma A\cdot \sigma B=1-(1-\sigma A)(1-\sigma B)} . Inductively, we have the following generalization. Corollary. Let A i ⊆ N {\displaystyle A_{i}\subseteq \mathbb {N} } be a finite family of subsets of N {\displaystyle \mathbb {N} } . Then σ ( ⨁ i A i ) ≥ 1 − ∏ i ( 1 − σ A i ) . {\displaystyle \sigma \left(\bigoplus _{i}A_{i}\right)\geq 1-\prod _{i}\left(1-\sigma A_{i}\right).} The theorem gives a first insight into how sumsets accumulate. It is unfortunate that its conclusion stops short of showing that σ {\displaystyle \sigma } is superadditive . Nevertheless, Schnirelmann provided the following results, which sufficed for most of his purposes. Theorem. Let A {\displaystyle A} and B {\displaystyle B} be subsets of N {\displaystyle \mathbb {N} } . If σ A + σ B ≥ 1 {\displaystyle \sigma A+\sigma B\geq 1} , then A ⊕ B = N . {\displaystyle A\oplus B=\mathbb {N} .} Theorem.
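The definitions above translate directly into a short computation. The following is a minimal sketch in plain Python (the function names are illustrative, not from any particular library); note that truncating the infimum at a finite n_max only gives an upper bound on the true Schnirelmann density, although for the even and odd numbers the bound is already exact at small n.

def counting_function(A, n):
    """A(n): the number of elements of A not exceeding n."""
    return sum(1 for a in A if a <= n)

def schnirelmann_density(A, n_max):
    """inf over 1 <= n <= n_max of A(n)/n (an upper bound on the true density)."""
    return min(counting_function(A, n) / n for n in range(1, n_max + 1))

def sumset(A, B, limit):
    """(A union {0}) oplus (B union {0}), truncated to [1, limit]."""
    A0, B0 = set(A) | {0}, set(B) | {0}
    return {a + b for a in A0 for b in B0 if 1 <= a + b <= limit}

LIMIT = 1000
evens = set(range(2, LIMIT + 1, 2))
odds = set(range(1, LIMIT + 1, 2))
print(schnirelmann_density(evens, LIMIT))   # 0.0
print(schnirelmann_density(odds, LIMIT))    # 0.5

# Numerical spot-check (not a proof) of Schnirelmann's inequality:
# sigma(A + B) >= sigma(A) + sigma(B) - sigma(A) * sigma(B)
sA = schnirelmann_density(odds, LIMIT)
sB = schnirelmann_density(evens, LIMIT)
sAB = schnirelmann_density(sumset(odds, evens, LIMIT), LIMIT)
print(sAB >= sA + sB - sA * sB)             # True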
( Schnirelmann ) Let A ⊆ N {\displaystyle A\subseteq \mathbb {N} } . If σ A > 0 {\displaystyle \sigma A>0} then there exists k {\displaystyle k} such that ⨁ i = 1 k A = N . {\displaystyle \bigoplus _{i=1}^{k}A=\mathbb {N} .} A subset A ⊆ N {\displaystyle A\subseteq \mathbb {N} } with the property that A ⊕ A ⊕ ⋯ ⊕ A = N {\displaystyle A\oplus A\oplus \cdots \oplus A=\mathbb {N} } for some finite number of summands is called an additive basis , and the least number of summands required is called the degree (sometimes order ) of the basis. Thus, the last theorem states that any set with positive Schnirelmann density is an additive basis. In this terminology, the set of squares G 2 = { k 2 } k = 1 ∞ {\displaystyle {\mathfrak {G}}^{2}=\{k^{2}\}_{k=1}^{\infty }} is an additive basis of degree 4. (For an open problem concerning additive bases, see the Erdős–Turán conjecture on additive bases .) Historically, the theorems above pointed toward the following result, at one time known as the α + β {\displaystyle \alpha +\beta } hypothesis. It was used by Edmund Landau and was finally proved by Henry Mann in 1942. Theorem. ( Mann 1942 ) Let A {\displaystyle A} and B {\displaystyle B} be subsets of N {\displaystyle \mathbb {N} } . In case that A ⊕ B ≠ N {\displaystyle A\oplus B\neq \mathbb {N} } , we still have σ ( A ⊕ B ) ≥ σ A + σ B . {\displaystyle \sigma (A\oplus B)\geq \sigma A+\sigma B.} An analogue of this theorem for lower asymptotic density was obtained by Kneser. [ 4 ] At a later date, E. Artin and P. Scherk simplified the proof of Mann's theorem. [ 5 ] Let k {\displaystyle k} and N {\displaystyle N} be natural numbers. Let G k = { i k } i = 1 ∞ {\displaystyle {\mathfrak {G}}^{k}=\{i^{k}\}_{i=1}^{\infty }} . Define r N k ( n ) {\displaystyle r_{N}^{k}(n)} to be the number of non-negative integral solutions to the equation x 1 k + x 2 k + ⋯ + x N k = n {\displaystyle x_{1}^{k}+x_{2}^{k}+\cdots +x_{N}^{k}=n} and R N k ( n ) {\displaystyle R_{N}^{k}(n)} to be the number of non-negative integral solutions to the inequality 0 ≤ x 1 k + x 2 k + ⋯ + x N k ≤ n {\displaystyle 0\leq x_{1}^{k}+x_{2}^{k}+\cdots +x_{N}^{k}\leq n} in the variables x i {\displaystyle x_{i}} , respectively. Thus R N k ( n ) = ∑ i = 0 n r N k ( i ) {\displaystyle R_{N}^{k}(n)=\sum _{i=0}^{n}r_{N}^{k}(i)} . The volume of the N {\displaystyle N} -dimensional body defined by 0 ≤ x 1 k + x 2 k + ⋯ + x N k ≤ n {\displaystyle 0\leq x_{1}^{k}+x_{2}^{k}+\cdots +x_{N}^{k}\leq n} is bounded by the volume of the hypercube of side n 1 / k {\displaystyle n^{1/k}} , hence R N k ( n ) = ∑ i = 0 n r N k ( i ) ≤ n N / k {\displaystyle R_{N}^{k}(n)=\sum _{i=0}^{n}r_{N}^{k}(i)\leq n^{N/k}} . The hard part is to show that this bound still holds on average, i.e., Lemma. ( Linnik ) For all k ∈ N {\displaystyle k\in \mathbb {N} } there exist N ∈ N {\displaystyle N\in \mathbb {N} } and a constant c = c ( k ) {\displaystyle c=c(k)} , depending only on k {\displaystyle k} , such that for all n ∈ N {\displaystyle n\in \mathbb {N} } , r N k ( m ) < c n N k − 1 {\displaystyle r_{N}^{k}(m)<cn^{{\frac {N}{k}}-1}} for all 0 ≤ m ≤ n . {\displaystyle 0\leq m\leq n.} With this at hand, the following theorem can be elegantly proved. Theorem. For all k {\displaystyle k} there exists N {\displaystyle N} for which σ ( N G k ) > 0 {\displaystyle \sigma (N{\mathfrak {G}}^{k})>0} . We have thus established the general solution to Waring's problem: Corollary. ( Hilbert 1909 ) For all k {\displaystyle k} there exists N {\displaystyle N} , depending only on k {\displaystyle k} , such that every positive integer n {\displaystyle n} can be expressed as the sum of at most N {\displaystyle N} k {\displaystyle k} -th powers.
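The counting functions r and R above can be computed by brute force for small parameters. The following is a minimal plain-Python sketch (function names are illustrative); it checks the cumulative identity relating R to r and, in the spirit of the restatement of Lagrange's four-square theorem above, that r with N = 4 and k = 2 is positive for every n in a finite range. This is a numerical illustration only, not a proof.

def r(n, N, k):
    """Number of ordered N-tuples of non-negative integers with x_1^k + ... + x_N^k = n."""
    ways = [0] * (n + 1)
    ways[0] = 1
    for _ in range(N):                       # add one variable at a time
        new = [0] * (n + 1)
        for total, w in enumerate(ways):
            if w:
                x = 0
                while total + x ** k <= n:
                    new[total + x ** k] += w
                    x += 1
        ways = new
    return ways[n]

def R(n, N, k):
    """Number of non-negative integral solutions of 0 <= x_1^k + ... + x_N^k <= n."""
    return sum(r(m, N, k) for m in range(n + 1))

print(R(10, 4, 2) == sum(r(m, 4, 2) for m in range(11)))   # True
print(all(r(n, 4, 2) > 0 for n in range(1, 201)))          # True (four-square theorem)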
In 1930 Schnirelmann used these ideas in conjunction with the Brun sieve to prove Schnirelmann's theorem , [ 1 ] [ 2 ] that any natural number greater than 1 can be written as the sum of not more than C prime numbers , where C is an effectively computable constant. [ 6 ] Schnirelmann obtained C < 800000. [ 7 ] Schnirelmann's constant is the lowest number C with this property. [ 6 ] Olivier Ramaré showed in ( Ramaré 1995 ) that Schnirelmann's constant is at most 7, [ 6 ] improving the earlier upper bound of 19 obtained by Hans Riesel and R. C. Vaughan . Schnirelmann's constant is at least 3; Goldbach's conjecture implies that this is the constant's actual value. [ 6 ] In 2013, Harald Helfgott proved Goldbach's weak conjecture for all odd numbers. Therefore, Schnirelmann's constant is at most 4. [ 8 ] [ 9 ] [ 10 ] [ 11 ] Khintchin proved that the sequence of squares, though of zero Schnirelmann density, when added to a sequence of Schnirelmann density between 0 and 1, increases the density: σ ( A ⊕ G 2 ) > σ A whenever 0 < σ A < 1. This was soon simplified and extended by Erdős , who showed that if A is any sequence with Schnirelmann density α and B is an additive basis of order k then σ ( A ⊕ B ) ≥ α + α ( 1 − α ) / ( 2 k ) , and this was improved by Plünnecke to σ ( A ⊕ B ) ≥ α^(1 − 1/k) . Sequences with this property, that adding them increases any Schnirelmann density strictly between 0 and 1, were named essential components by Khintchin. Linnik showed that an essential component need not be an additive basis [ 14 ] as he constructed an essential component that has x^(o(1)) elements less than x . More precisely, the sequence has elements less than x for some c < 1. This was improved by E. Wirsing. For a while, it remained an open problem how many elements an essential component must have. Finally, Ruzsa determined that for every ε > 0 there is an essential component which has at most c (log x )^(1+ε) elements up to x , but there is no essential component which has c (log x )^(1+o(1)) elements up to x . [ 15 ] [ 16 ]
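A finite-range illustration of the statement on sums of primes is easy to script. The sketch below, in plain Python with illustrative function names, verifies that every integer from 2 to 500 is a sum of at most four primes; this is only a spot-check of small cases, not evidence about Schnirelmann's constant itself.

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def representable(n, primes, max_summands=4):
    """True if n is a sum of at most max_summands primes."""
    reachable = {0}
    for _ in range(max_summands):
        new = set(reachable)
        for total in reachable:
            for p in primes:               # primes are sorted ascending
                if total + p > n:
                    break
                new.add(total + p)
        reachable = new
        if n in reachable:
            return True
    return False

primes = primes_up_to(500)
print(all(representable(n, primes) for n in range(2, 501)))   # True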
https://en.wikipedia.org/wiki/Schnirelmann_density
A Schnorr group , proposed by Claus P. Schnorr , is a large prime-order subgroup of Z p × {\displaystyle \mathbb {Z} _{p}^{\times }} , the multiplicative group of integers modulo p {\displaystyle p} for some prime p {\displaystyle p} . To generate such a group, generate p {\displaystyle p} , q {\displaystyle q} , r {\displaystyle r} such that p = q r + 1 with p {\displaystyle p} , q {\displaystyle q} prime. Then choose any h {\displaystyle h} in the range 1 < h < p {\displaystyle 1<h<p} until you find one such that h^r ≢ 1 ( mod p ) . The value g = h^r mod p is then a generator of a subgroup of Z p × {\displaystyle \mathbb {Z} _{p}^{\times }} of order q {\displaystyle q} . Schnorr groups are useful in discrete log based cryptosystems including Schnorr signatures and DSA . In such applications, p {\displaystyle p} is typically chosen to be large enough to resist index calculus and related methods of solving the discrete-log problem (perhaps 1024 to 3072 bits), while q {\displaystyle q} is large enough to resist the birthday attack on discrete log problems, which works in any group (perhaps 160 to 256 bits). Because the Schnorr group is of prime order, it has no non-trivial proper subgroups, thwarting confinement attacks due to small subgroups. Implementations of protocols that use Schnorr groups must verify, where appropriate, that integers supplied by other parties are in fact members of the Schnorr group; x {\displaystyle x} is a member of the group if 0 < x < p {\displaystyle 0<x<p} and x q ≡ 1 ( mod p ) {\displaystyle x^{q}\equiv 1\;({\text{mod }}p)} . Any member of the group except the element 1 {\displaystyle 1} is also a generator of the group.
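The construction above is easy to exercise with toy parameters. The following plain-Python sketch (function names are illustrative, and the numbers are far too small for any real cryptographic use) generates a small Schnorr group and runs the membership test described above.

import random

def is_prime(n):
    """Trial division; adequate for toy-sized numbers only."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def generate_schnorr_group(q):
    """Given a prime q, find r with p = q*r + 1 prime, then a generator g of the order-q subgroup of Z_p^*."""
    assert is_prime(q)
    r = 2
    while not is_prime(q * r + 1):
        r += 1
    p = q * r + 1
    while True:
        h = random.randrange(2, p)       # 1 < h < p
        g = pow(h, r, p)
        if g != 1:                       # h^r != 1 (mod p), so g has order exactly q
            return p, q, g

def is_member(x, p, q):
    """Group membership test: 0 < x < p and x^q == 1 (mod p)."""
    return 0 < x < p and pow(x, q, p) == 1

p, q, g = generate_schnorr_group(101)    # q = 101 is prime; here p = 607
print(is_member(g, p, q))                # True
print(is_member(g * g % p, p, q))        # True: powers of g stay in the subgroup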
https://en.wikipedia.org/wiki/Schnorr_group
In graph theory , Schnyder's theorem is a characterization of planar graphs in terms of the order dimension of their incidence posets . It is named after Walter Schnyder, who published its proof in 1989. The incidence poset P ( G ) of an undirected graph G with vertex set V and edge set E is the partially ordered set of height 2 that has V ∪ E as its elements. In this partial order, there is an order relation x < y when x is a vertex, y is an edge, and x is one of the two endpoints of y . The order dimension of a partial order is the smallest number of total orderings whose intersection is the given partial order; such a set of orderings is called a realizer of the partial order. Schnyder's theorem states that a graph G is planar if and only if the order dimension of P ( G ) is at most three. This theorem has been generalized by Brightwell and Trotter ( 1993 , 1997 ) to a tight bound on the dimension of the height-three partially ordered sets formed analogously from the vertices, edges and faces of a convex polyhedron , or more generally of an embedded planar graph: in both cases, the order dimension of the poset is at most four. However, this result cannot be generalized to higher-dimensional convex polytopes , as there exist four-dimensional polytopes whose face lattices have unbounded order dimension. Even more generally, for abstract simplicial complexes , the order dimension of the face poset of the complex is at most 1 + d , where d is the minimum dimension of a Euclidean space in which the complex has a geometric realization (Ossona de Mendez 1999 , 2002 ). As Schnyder observes, the incidence poset of a graph G has order dimension two if and only if the graph is a path or a subgraph of a path. For, when an incidence poset has order dimension two, its only possible realizer consists of two total orders that (when restricted to the graph's vertices) are the reverse of each other. Any other two orders would have an intersection that includes an order relation between two vertices, which is not allowed for incidence posets. For these two orders on the vertices, an edge between consecutive vertices can be included in the ordering by placing it immediately following the later of the two edge endpoints, but no other edges can be included. If a graph can be colored with four colors, then its incidence poset has order dimension at most four ( Schnyder 1989 ). The incidence poset of a complete graph on n vertices has order dimension Θ ( log ⁡ log ⁡ n ) {\displaystyle \Theta (\log \log n)} ( Spencer 1971 ).
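The two-order realizer for a path described above can be checked mechanically. The plain-Python sketch below (names are illustrative) builds the incidence poset of a path on five vertices, constructs the two total orders by inserting each edge immediately after the later of its endpoints, and verifies that their intersection is exactly the incidence poset.

def path_incidence_poset(n):
    """Vertices 0..n-1 and edges (i, i+1); relations are (endpoint, edge) pairs."""
    vertices = list(range(n))
    edges = [(i, i + 1) for i in range(n - 1)]
    relations = {(v, e) for e in edges for v in e}
    return vertices, edges, relations

def realizer_orders(vertices, edges):
    """Two total orders on V union E: vertices forward and reversed, each edge
    placed immediately after the later of its two endpoints."""
    orders = []
    for vs in (vertices, list(reversed(vertices))):
        order = []
        for v in vs:
            order.append(v)
            for e in edges:
                if v in e and all(u in order for u in e):
                    order.append(e)
        orders.append(order)
    return orders

def intersection_relations(orders, elements):
    """Pairs (x, y) with x strictly before y in every total order."""
    positions = [{x: i for i, x in enumerate(o)} for o in orders]
    return {(x, y) for x in elements for y in elements
            if x != y and all(pos[x] < pos[y] for pos in positions)}

vertices, edges, relations = path_incidence_poset(5)
orders = realizer_orders(vertices, edges)
common = intersection_relations(orders, vertices + edges)
print(common == relations)   # True: the two orders realize the incidence poset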
https://en.wikipedia.org/wiki/Schnyder's_theorem