Semi-global matching ( SGM ) is a computer vision algorithm for the estimation of a dense disparity map from a rectified stereo image pair , introduced in 2005 by Heiko Hirschmüller while working at the German Aerospace Center . [ 1 ] Given its predictable run time, its favourable trade-off between quality of the results and computing time, and its suitability for fast parallel implementation in ASIC or FPGA , it has seen wide adoption in real-time stereo vision applications such as robotics and advanced driver assistance systems . [ 2 ] [ 3 ] Pixelwise stereo matching allows real-time calculation of disparity maps by measuring the similarity of each pixel in one stereo image to each pixel within a subset in the other stereo image. Given a rectified stereo image pair, for a pixel with coordinates ( x , y ) {\displaystyle (x,y)} the set of candidate pixels in the other image is usually selected as { ( x ^ , y ) | x ^ ≥ x , x ^ ≤ x + D } {\displaystyle \{({\hat {x}},y)|{\hat {x}}\geq x,{\hat {x}}\leq x+D\}} , where D {\displaystyle D} is the maximum allowed disparity shift. [ 1 ] A simple search for the best matching pixel produces many spurious matches, and this problem can be mitigated with the addition of a regularisation term that penalises jumps in disparity between adjacent pixels, with a cost function of the form E ( d ) = ∑ p D ( p , d p ) + ∑ ( p , q ) ∈ N R ( p , d p , q , d q ) {\displaystyle E(d)=\sum _{p}D(p,d_{p})+\sum _{(p,q)\in {\mathcal {N}}}R(p,d_{p},q,d_{q})} where D ( p , d p ) {\displaystyle D(p,d_{p})} is the pixel-wise dissimilarity cost at pixel p {\displaystyle p} with disparity d p {\displaystyle d_{p}} , and R ( p , d p , q , d q ) {\displaystyle R(p,d_{p},q,d_{q})} is the regularisation cost between pixels p {\displaystyle p} and q {\displaystyle q} with disparities d p {\displaystyle d_{p}} and d q {\displaystyle d_{q}} respectively, summed over the set N {\displaystyle {\mathcal {N}}} of all pairs of neighbouring pixels. Such a constraint can be efficiently enforced on a per-scanline basis by using dynamic programming (e.g. 
the Viterbi algorithm ), but such a limitation can still introduce streaking artefacts in the depth map, because little or no regularisation is performed across scanlines. [ 4 ] A possible solution is to perform global optimisation in 2D, which is however an NP-complete problem in the general case. For some families of cost functions (e.g. submodular functions ) a solution with strong optimality properties can be found in polynomial time using graph cut optimization , however such global methods are generally too expensive for real-time processing. [ 5 ] The idea behind SGM is to perform line optimisation along multiple directions and to compute an aggregated cost S ( p , d ) {\displaystyle S(p,d)} by summing the costs to reach pixel p {\displaystyle p} with disparity d {\displaystyle d} from each direction. The number of directions affects the run time of the algorithm, and while 16 directions usually ensure good quality, a lower number can be used to achieve faster execution. [ 6 ] A typical 8-direction implementation of the algorithm can compute the cost in two passes, a forward pass accumulating the cost from the left, top-left, top, and top-right, and a backward pass accumulating the cost from the right, bottom-right, bottom, and bottom-left. [ 7 ] A single-pass algorithm can be implemented with only five directions. [ 8 ] The cost is composed of a matching term D ( p , d ) {\displaystyle D(p,d)} and a binary regularisation term R ( d p , d q ) {\displaystyle R(d_{p},d_{q})} . The former can in principle be any local image dissimilarity measure; commonly used functions are absolute or squared intensity difference (usually summed over a window around the pixel, and after applying a high-pass filter to the images to gain some illumination invariance), Birchfield–Tomasi dissimilarity , Hamming distance of the census transform , and Pearson correlation ( normalized cross-correlation ). 
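Among the measures above, the Hamming distance of the census transform is popular because it is robust to radiometric differences between the two cameras. The following NumPy sketch is an illustration only (window size, bit layout, and the sign convention of the disparity shift vary between implementations), not a reference implementation:

```python
import numpy as np

def census_transform(img, window=5):
    """5x5 census transform: each pixel becomes a bit string encoding,
    for every neighbour in the window, whether it is darker than the
    centre pixel. 24 comparison bits fit in a uint32."""
    h, w = img.shape
    r = window // 2
    census = np.zeros((h, w), dtype=np.uint32)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            census = (census << 1) | (shifted < img)
    return census

def matching_cost_volume(left, right, max_disp):
    """Cost volume C[y, x, d] = Hamming distance between the census code
    of the left pixel (x, y) and the right pixel shifted by disparity d."""
    cl, cr = census_transform(left), census_transform(right)
    h, w = left.shape
    cost = np.zeros((h, w, max_disp), dtype=np.float32)
    for d in range(max_disp):
        xor = cl ^ np.roll(cr, d, axis=1)
        # popcount: view each uint32 as 4 bytes, unpack to bits, sum
        cost[:, :, d] = np.unpackbits(
            xor.view(np.uint8).reshape(h, w, 4), axis=-1).sum(-1)
    return cost
```

With a right image that is an exact (wrapped) 3-pixel shift of the left image, the cost at disparity 3 is zero everywhere, as expected.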
Even mutual information can be approximated as a sum over the pixels, and thus used as a local similarity metric. [ 9 ] The regularisation term has the form R ( d p , d q ) = { 0 if  d p = d q P 1 if  | d p − d q | = 1 P 2 if  | d p − d q | > 1 {\displaystyle R(d_{p},d_{q})={\begin{cases}0&{\text{if }}d_{p}=d_{q}\\P_{1}&{\text{if }}|d_{p}-d_{q}|=1\\P_{2}&{\text{if }}|d_{p}-d_{q}|>1\end{cases}}} where P 1 {\displaystyle P_{1}} and P 2 {\displaystyle P_{2}} are two constant parameters, with P 1 < P 2 {\displaystyle P_{1}<P_{2}} . The three-way comparison assigns a smaller penalty to unitary changes in disparity, allowing smooth transitions corresponding e.g. to slanted surfaces, and penalises larger jumps while still preserving discontinuities thanks to the constant penalty term. To further preserve discontinuities, the gradient of the intensity can be used to adapt the penalty term, because discontinuities in depth usually correspond to a discontinuity in image intensity I {\displaystyle I} , by setting P 2 = P 2 ′ | I ( p ) − I ( q ) | {\displaystyle P_{2}={\frac {P_{2}'}{|I(p)-I(q)|}}} for each pair of pixels p {\displaystyle p} and q {\displaystyle q} . [ 10 ] The accumulated cost S ( p , d ) = ∑ r L r ( p , d ) {\displaystyle S(p,d)=\sum _{r}L_{r}(p,d)} is the sum of all costs L r ( p , d ) {\displaystyle L_{r}(p,d)} to reach pixel p {\displaystyle p} with disparity d {\displaystyle d} along direction r {\displaystyle r} . Each term can be expressed recursively as L r ( p , d ) = D ( p , d ) + min ( L r ( p − r , d ) , L r ( p − r , d − 1 ) + P 1 , L r ( p − r , d + 1 ) + P 1 , min k L r ( p − r , k ) + P 2 ) − min k L r ( p − r , k ) {\displaystyle L_{r}(p,d)=D(p,d)+\min \left(L_{r}(p-r,d),\,L_{r}(p-r,d-1)+P_{1},\,L_{r}(p-r,d+1)+P_{1},\,\min _{k}L_{r}(p-r,k)+P_{2}\right)-\min _{k}L_{r}(p-r,k)} where the minimum cost at the previous pixel min k L r ( p − r , k ) {\displaystyle \min _{k}L_{r}(p-r,k)} is subtracted for numerical stability, since it is constant for all values of disparity at the current pixel and therefore does not affect the optimisation. [ 6 ] The value of disparity at each pixel is given by d ∗ ( p ) = argmin d ⁡ S ( p , d ) {\displaystyle d^{*}(p)=\operatorname {argmin} _{d}S(p,d)} , and sub-pixel accuracy can be achieved by fitting a curve through the cost at d ∗ ( p ) {\displaystyle d^{*}(p)} and its neighbouring values and taking the minimum along the curve. 
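The recursive accumulation of the path cost along one direction can be sketched as follows, here for the horizontal left-to-right direction only, assuming a precomputed cost volume `C[y, x, d]`. This is an illustrative NumPy vectorisation over rows, and the values of `P1` and `P2` are arbitrary placeholders:

```python
import numpy as np

def aggregate_left_to_right(C, P1=10.0, P2=120.0):
    """One-direction SGM cost aggregation:
    L(p,d) = C(p,d) + min(L(p-r,d),
                          L(p-r,d-1)+P1, L(p-r,d+1)+P1,
                          min_k L(p-r,k)+P2) - min_k L(p-r,k)"""
    h, w, D = C.shape
    L = np.zeros_like(C, dtype=np.float32)
    L[:, 0, :] = C[:, 0, :]                         # no predecessor at x=0
    for x in range(1, w):
        prev = L[:, x - 1, :]                        # (h, D)
        prev_min = prev.min(axis=1, keepdims=True)   # min_k L(p-r, k)
        same = prev                                  # same disparity
        minus = np.full_like(prev, np.inf)
        minus[:, 1:] = prev[:, :-1] + P1             # disparity d-1
        plus = np.full_like(prev, np.inf)
        plus[:, :-1] = prev[:, 1:] + P1              # disparity d+1
        jump = prev_min + P2                         # any larger jump
        best = np.minimum(np.minimum(same, minus), np.minimum(plus, jump))
        L[:, x, :] = C[:, x, :] + best - prev_min
    return L
```

The full algorithm would run this recursion along each chosen direction and sum the resulting volumes into S ( p , d ). Note that the subtraction of the previous minimum keeps every accumulated value within P 2 of the local matching cost.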
Since the two images in the stereo pair are not treated symmetrically in the calculations, a consistency check can be performed by computing the disparity a second time in the opposite direction, swapping the role of the left and right image, and invalidating the result for the pixels where the result differs between the two calculations. Further post-processing techniques for the refinement of the disparity image include morphological filtering to remove outliers, intensity consistency checks to refine textureless regions, and interpolation to fill in pixels invalidated by consistency checks. [ 11 ] The cost volume C ( p , d ) {\displaystyle C(p,d)} for all values of p = ( x , y ) {\displaystyle p=(x,y)} and d {\displaystyle d} can be precomputed, and in an implementation of the full algorithm, using D {\displaystyle D} possible disparity shifts and R {\displaystyle R} directions, each pixel is subsequently visited R {\displaystyle R} times; therefore the computational complexity of the algorithm for an image of size W × H {\displaystyle W\times H} is O ( W H D ) {\displaystyle O(WHD)} . [ 7 ] The main drawback of SGM is its memory consumption. An implementation of the two-pass 8-direction version of the algorithm requires storing W × H × D + 3 × W × D + D {\displaystyle W\times H\times D+3\times W\times D+D} elements, since the accumulated cost volume has a size of W × H × D {\displaystyle W\times H\times D} and to compute the cost for a pixel during each pass it is necessary to keep track of the D {\displaystyle D} path costs of its left or right neighbour along one direction and of the W × D {\displaystyle W\times D} path costs of the pixels in the row above or below along 3 directions. [ 7 ] One solution to reduce memory consumption is to compute SGM on partially overlapping image tiles, interpolating the values over the overlapping regions. This method also makes it possible to apply SGM to very large images that would not fit within memory in the first place. 
[ 12 ] A memory-efficient approximation of SGM stores for each pixel only the costs for the disparity values that represent a minimum along some direction, instead of all possible disparity values. The true minimum is highly likely to be predicted by the minima along the eight directions, thus yielding similar quality of the results. The algorithm uses eight directions and three passes, and during the first pass it stores for each pixel the cost for the optimal disparity along the four top-down directions, plus the two closest lower and higher values (for sub-pixel interpolation). Since the cost volume is stored in a sparse fashion, the four values of optimal disparity also need to be stored. In the second pass, the other four bottom-up directions are computed, completing the calculations for the four disparity values selected in the first pass, which have now been evaluated along all eight directions. An intermediate value of cost and disparity is computed from the output of the first pass and stored, and the memory of the four outputs from the first pass is replaced with the four optimal disparity values and their costs from the directions in the second pass. A third pass goes again along the same directions used in the first pass, completing the calculations for the disparity values from the second pass. The final result is then selected among the four minima from the third pass and the intermediate result computed during the second pass. [ 13 ] In each pass four disparity values are stored, together with three cost values each (the minimum and its two closest neighbouring costs), plus the disparity and cost values of the intermediate result, for a total of eighteen values for each pixel, making the total memory consumption equal to 18 × W × H + 3 × W × D + D {\displaystyle 18\times W\times H+3\times W\times D+D} , at the cost in time of an additional pass over the image. [ 13 ]
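The two memory formulas above are easy to compare directly. A small sketch, using a hypothetical 1280×960 image with 128 disparity levels as an example, shows the approximation needing roughly one seventh of the elements:

```python
def sgm_memory_elements(W, H, D):
    """Elements stored by the full two-pass 8-direction SGM:
    the accumulated cost volume plus the path-cost buffers."""
    return W * H * D + 3 * W * D + D

def esgm_memory_elements(W, H, D):
    """Elements stored by the memory-efficient approximation:
    18 values per pixel plus the same path-cost buffers."""
    return 18 * W * H + 3 * W * D + D

full = sgm_memory_elements(1280, 960, 128)     # about 158 million elements
sparse = esgm_memory_elements(1280, 960, 128)  # about 23 million elements
```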
https://en.wikipedia.org/wiki/Semi-global_matching
In high energy particle physics nucleon - lepton scattering, semi-inclusive deep inelastic scattering ( SIDIS ) is a method to obtain information on the nucleon structure. [ 1 ] It expands the traditional method of deep inelastic scattering (DIS). In DIS, only the scattered lepton is detected while the remnants of the shattered nucleon are ignored (an inclusive experiment). In SIDIS, a high momentum hadron , known as the leading hadron , is detected in addition to the scattered lepton. This allows additional details about the kinematics of the scattering process to be obtained. The leading hadron results from the hadronization of the struck quark . The latter retains the information on its motion inside the nucleon, including its transverse momentum, which gives access to the transverse momentum distributions (TMDs) of partons . Likewise, by detecting the leading hadron, one essentially tags (i.e. identifies) the quark on which the scattering occurred. For example, if the leading hadron is a kaon , we know that the scattering occurred on one of the strange quarks of the nucleon's quark sea . In DIS the struck quark is not identified and the information is an indistinguishable sum over all the quark flavors . SIDIS makes it possible to disentangle this information. SIDIS measurements were pioneered at DESY by the HERMES experiment. They are currently (2021) being carried out at CERN by the COMPASS experiment and by several experiments at Jefferson Lab . SIDIS will be an important technique in the future Electron Ion Collider scientific program. [ 2 ]
https://en.wikipedia.org/wiki/Semi-inclusive_deep_inelastic_scattering
In mathematics, semi-infinite objects are objects which are infinite or unbounded in some but not all possible ways. Generally, a semi-infinite set is bounded in one direction and unbounded in another. For instance, the natural numbers are semi-infinite considered as a subset of the integers; similarly, the intervals ( c , ∞ ) {\displaystyle (c,\infty )} and ( − ∞ , c ) {\displaystyle (-\infty ,c)} and their closed counterparts are semi-infinite subsets of R {\displaystyle \mathbb {R} } if c {\displaystyle c} is finite. [ 1 ] Half-spaces and half-lines are sometimes described as semi-infinite regions. Semi-infinite regions occur frequently in the study of differential equations . [ 2 ] [ 3 ] For instance, one might study solutions of the heat equation in an idealised semi-infinite metal bar. A semi-infinite integral is an improper integral over a semi-infinite interval. More generally, objects indexed or parametrised by semi-infinite sets may be described as semi-infinite. [ 4 ] Most forms of semi-infiniteness are boundedness properties, not cardinality or measure properties: semi-infinite sets are typically infinite in cardinality and measure. Many optimization problems involve some set of variables and some set of constraints. A problem is called semi-infinite if one (but not both) of these sets is finite. The study of such problems is known as semi-infinite programming . [ 5 ]
https://en.wikipedia.org/wiki/Semi-infinite
In science and engineering , a semi-log plot / graph or semi-logarithmic plot / graph has one axis on a logarithmic scale , the other on a linear scale . It is useful for data with exponential relationships, where one variable covers a large range of values. [ 1 ] All equations of the form y = λ a γ x {\displaystyle y=\lambda a^{\gamma x}} form straight lines when plotted semi-logarithmically, since taking logs of both sides gives log a ⁡ y = γ x + log a ⁡ λ {\displaystyle \log _{a}y=\gamma x+\log _{a}\lambda } This is a line with slope γ {\displaystyle \gamma } and vertical intercept log a ⁡ λ {\displaystyle \log _{a}\lambda } . The logarithmic scale is usually labeled in base 10, occasionally in base 2. A log–linear (sometimes log–lin) plot has the logarithmic scale on the y -axis, and a linear scale on the x -axis; a linear–log (sometimes lin–log) is the opposite. The naming is output–input ( y – x ), the opposite order from ( x , y ). On a semi-log plot the spacing of the scale on the y -axis (or x -axis) is proportional to the logarithm of the number, not the number itself. It is equivalent to converting the y values (or x values) to their log, and plotting the data on linear scales. A log–log plot uses the logarithmic scale for both axes, and hence is not a semi-log plot. The equation of a line on a linear–log plot, where the abscissa axis is scaled logarithmically (with a logarithmic base of n ), would be F ( x ) = m log n ⁡ ( x ) + b {\displaystyle F(x)=m\log _{n}(x)+b} The equation for a line on a log–linear plot, with an ordinate axis logarithmically scaled (with a logarithmic base of n ), would be log n ⁡ ( F ( x ) ) = m x + b {\displaystyle \log _{n}(F(x))=mx+b} , that is, F ( x ) = n m x + b {\displaystyle F(x)=n^{mx+b}} . On a linear–log plot, pick some fixed point ( x 0 , F 0 ), where F 0 is shorthand for F ( x 0 ), somewhere on the straight line in the above graph, and further some other arbitrary point ( x 1 , F 1 ) on the same graph. 
The slope formula of the plot is: m = F 1 − F 0 log n ⁡ ( x 1 ) − log n ⁡ ( x 0 ) {\displaystyle m={\frac {F_{1}-F_{0}}{\log _{n}(x_{1})-\log _{n}(x_{0})}}} which leads to F 1 − F 0 = m ( log n ⁡ ( x 1 ) − log n ⁡ ( x 0 ) ) {\displaystyle F_{1}-F_{0}=m\left(\log _{n}(x_{1})-\log _{n}(x_{0})\right)} which means that F ( x ) = m log n ⁡ ( x ) + c o n s t a n t {\displaystyle F(x)=m\log _{n}(x)+\mathrm {constant} } In other words, F is proportional to the logarithm of x times the slope of the straight line of its lin–log graph, plus a constant. Specifically, a straight line on a lin–log plot containing points ( x 0 , F 0 ) and ( x 1 , F 1 ) will have the function: F ( x ) = F 0 + ( F 1 − F 0 ) log n ⁡ ( x / x 0 ) log n ⁡ ( x 1 / x 0 ) {\displaystyle F(x)=F_{0}+(F_{1}-F_{0}){\frac {\log _{n}(x/x_{0})}{\log _{n}(x_{1}/x_{0})}}} On a log–linear plot (logarithmic scale on the y-axis), pick some fixed point ( x 0 , F 0 ), where F 0 is shorthand for F ( x 0 ), somewhere on the straight line in the above graph, and further some other arbitrary point ( x 1 , F 1 ) on the same graph. The slope formula of the plot is: m = log n ⁡ ( F 1 ) − log n ⁡ ( F 0 ) x 1 − x 0 {\displaystyle m={\frac {\log _{n}(F_{1})-\log _{n}(F_{0})}{x_{1}-x_{0}}}} which leads to log n ⁡ ( F 1 ) = m ( x 1 − x 0 ) + log n ⁡ ( F 0 ) {\displaystyle \log _{n}(F_{1})=m(x_{1}-x_{0})+\log _{n}(F_{0})} Notice that n log n ⁡ ( F 1 ) = F 1 {\displaystyle n^{\log _{n}(F_{1})}=F_{1}} . Therefore, the logs can be inverted to find: F 1 = F 0 n m ( x 1 − x 0 ) {\displaystyle F_{1}=F_{0}\,n^{m(x_{1}-x_{0})}} This can be generalized for any point, instead of just F 1 : F ( x ) = F 0 n m ( x − x 0 ) {\displaystyle F(x)=F_{0}\,n^{m(x-x_{0})}} In physics and chemistry , a plot of logarithm of pressure against temperature can be used to illustrate the various phases of a substance, as in the following for water : While ten is the most common base , there are times when other bases are more appropriate, as in this example: [ further explanation needed ] Notice that while the horizontal (time) axis is linear, with the dates evenly spaced, the vertical (cases) axis is logarithmic, with the evenly spaced divisions being labelled with successive powers of two. The semi-log plot makes it easier to see when the infection has stopped spreading at its maximum rate, i.e. the straight line on this exponential plot, and starts to curve to indicate a slower rate. This might indicate that some form of mitigation action is working, e.g. social distancing. In biology and biological engineering , the change in numbers of microbes due to asexual reproduction and nutrient exhaustion is commonly illustrated by a semi-log plot. Time is usually the independent axis, with the logarithm of the number or mass of bacteria or other microbe as the dependent variable. 
This forms a plot with four distinct phases, as shown below.
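The slope formulas above can be checked numerically. This small Python sketch (illustrative only; the point coordinates are arbitrary) recovers the slope of a line on a linear–log plot, and extrapolates an exponential from two samples on a log–linear plot:

```python
import math

def lin_log_slope(x0, F0, x1, F1, base=10):
    """Slope of a straight line on a linear-log plot (log-scaled x-axis):
    m = (F1 - F0) / (log_base(x1) - log_base(x0))."""
    return (F1 - F0) / (math.log(x1, base) - math.log(x0, base))

def log_lin_extrapolate(x0, F0, x1, F1, x, base=10):
    """On a log-linear plot (log-scaled y-axis) a straight line is an
    exponential: F(x) = F0 * base**(m * (x - x0)), where
    m = (log_base(F1) - log_base(F0)) / (x1 - x0)."""
    m = (math.log(F1, base) - math.log(F0, base)) / (x1 - x0)
    return F0 * base ** (m * (x - x0))

# y = 5 * 2**x appears straight on a log-linear plot, so two samples
# (at x = 0 and x = 3) are enough to recover any third point.
y10 = log_lin_extrapolate(0, 5 * 2**0, 3, 5 * 2**3, 10)  # close to 5 * 2**10
```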
https://en.wikipedia.org/wiki/Semi-log_plot
The term semi-monocoque or semimonocoque refers to a stressed shell structure that is similar to a true monocoque , but which derives at least some of its strength from conventional reinforcement. Semi-monocoque construction is used for, among other things, aircraft fuselages, car bodies and motorcycle frames. Semi-monocoque aircraft fuselages differ from true monocoque construction in being reinforced with longitudinal stringers. [ 1 ] [ 2 ] The Mooney range of four-seat aircraft, for instance, uses a steel tube truss frame around the passenger compartment with a monocoque structure behind. [ 3 ] The British ARV Super2 light aircraft has a fuselage constructed mainly of aluminium alloy, but with some fibreglass elements. The cockpit is a stiff monocoque of "Supral" alloy, but aft of the cockpit bulkhead, the ARV is conventionally built, with frames, longerons and stressed skin forming a semi-monocoque. [ 4 ] Peter Williams ' 1973 Formula 750 TT -winning John Player Norton racer was an early example of a semi-monocoque motorcycle. [ 5 ]
https://en.wikipedia.org/wiki/Semi-monocoque
In the area of mathematics known as functional analysis , a semi-reflexive space is a locally convex topological vector space (TVS) X such that the canonical evaluation map from X into its bidual (which is the strong dual of X ) is bijective. If this map is also an isomorphism of TVSs then it is called reflexive . Semi-reflexive spaces play an important role in the general theory of locally convex TVSs. Since a normable TVS is semi-reflexive if and only if it is reflexive, the concept of semi-reflexivity is primarily used with TVSs that are not normable. Suppose that X is a topological vector space (TVS) over the field F {\displaystyle \mathbb {F} } (which is either the real or complex numbers) whose continuous dual space , X ′ {\displaystyle X^{\prime }} , separates points on X (i.e. for any x ∈ X {\displaystyle x\in X} there exists some x ′ ∈ X ′ {\displaystyle x^{\prime }\in X^{\prime }} such that x ′ ( x ) ≠ 0 {\displaystyle x^{\prime }(x)\neq 0} ). Let X b ′ {\displaystyle X_{b}^{\prime }} and X β ′ {\displaystyle X_{\beta }^{\prime }} both denote the strong dual of X , which is the vector space X ′ {\displaystyle X^{\prime }} of continuous linear functionals on X endowed with the topology of uniform convergence on bounded subsets of X ; this topology is also called the strong dual topology and it is the "default" topology placed on a continuous dual space (unless another topology is specified). If X is a normed space, then the strong dual of X is the continuous dual space X ′ {\displaystyle X^{\prime }} with its usual norm topology. The bidual of X , denoted by X ′ ′ {\displaystyle X^{\prime \prime }} , is the strong dual of X b ′ {\displaystyle X_{b}^{\prime }} ; that is, it is the space ( X b ′ ) b ′ {\displaystyle \left(X_{b}^{\prime }\right)_{b}^{\prime }} . 
[ 1 ] For any x ∈ X , {\displaystyle x\in X,} let J x : X ′ → F {\displaystyle J_{x}:X^{\prime }\to \mathbb {F} } be defined by J x ( x ′ ) = x ′ ( x ) {\displaystyle J_{x}\left(x^{\prime }\right)=x^{\prime }(x)} , where J x {\displaystyle J_{x}} is called the evaluation map at x ; since J x : X b ′ → F {\displaystyle J_{x}:X_{b}^{\prime }\to \mathbb {F} } is necessarily continuous, it follows that J x ∈ ( X b ′ ) ′ {\displaystyle J_{x}\in \left(X_{b}^{\prime }\right)^{\prime }} . Since X ′ {\displaystyle X^{\prime }} separates points on X , the map J : X → ( X b ′ ) ′ {\displaystyle J:X\to \left(X_{b}^{\prime }\right)^{\prime }} defined by J ( x ) := J x {\displaystyle J(x):=J_{x}} is injective where this map is called the evaluation map or the canonical map . This map was introduced by Hans Hahn in 1927. [ 2 ] We call X semireflexive if J : X → ( X b ′ ) ′ {\displaystyle J:X\to \left(X_{b}^{\prime }\right)^{\prime }} is bijective (or equivalently, surjective ) and we call X reflexive if in addition J : X → X ′ ′ = ( X b ′ ) b ′ {\displaystyle J:X\to X^{\prime \prime }=\left(X_{b}^{\prime }\right)_{b}^{\prime }} is an isomorphism of TVSs. [ 1 ] If X is a normed space then J is a TVS-embedding as well as an isometry onto its range; furthermore, by Goldstine's theorem (proved in 1938), the range of J is a dense subset of the bidual ( X ′ ′ , σ ( X ′ ′ , X ′ ) ) {\displaystyle \left(X^{\prime \prime },\sigma \left(X^{\prime \prime },X^{\prime }\right)\right)} . [ 2 ] A normable space is reflexive if and only if it is semi-reflexive. A Banach space is reflexive if and only if its closed unit ball is σ ( X ′ , X ) {\displaystyle \sigma \left(X^{\prime },X\right)} -compact. [ 2 ] Let X be a topological vector space over the field F {\displaystyle \mathbb {F} } of real numbers R {\displaystyle \mathbb {R} } or complex numbers C {\displaystyle \mathbb {C} } . 
Consider its strong dual space X b ′ {\displaystyle X_{b}^{\prime }} , which consists of all continuous linear functionals f : X → F {\displaystyle f:X\to \mathbb {F} } and is equipped with the strong topology b ( X ′ , X ) {\displaystyle b\left(X^{\prime },X\right)} , that is, the topology of uniform convergence on bounded subsets in X . The space X b ′ {\displaystyle X_{b}^{\prime }} is a topological vector space (to be more precise, a locally convex space), so one can consider its strong dual space ( X b ′ ) b ′ {\displaystyle \left(X_{b}^{\prime }\right)_{b}^{\prime }} , which is called the strong bidual space for X . It consists of all continuous linear functionals h : X b ′ → F {\displaystyle h:X_{b}^{\prime }\to {\mathbb {F} }} and is equipped with the strong topology b ( ( X b ′ ) ′ , X b ′ ) {\displaystyle b\left(\left(X_{b}^{\prime }\right)^{\prime },X_{b}^{\prime }\right)} . Each vector x ∈ X {\displaystyle x\in X} generates a map J ( x ) : X b ′ → F {\displaystyle J(x):X_{b}^{\prime }\to \mathbb {F} } by the following formula: J ( x ) ( f ) = f ( x ) , f ∈ X ′ . {\displaystyle J(x)(f)=f(x),\qquad f\in X'.} This is a continuous linear functional on X b ′ {\displaystyle X_{b}^{\prime }} , that is, J ( x ) ∈ ( X b ′ ) b ′ {\displaystyle J(x)\in \left(X_{b}^{\prime }\right)_{b}^{\prime }} . One obtains a map called the evaluation map or the canonical injection : J : X → ( X b ′ ) b ′ . {\displaystyle J:X\to \left(X_{b}^{\prime }\right)_{b}^{\prime }.} which is a linear map. If X is locally convex, from the Hahn–Banach theorem it follows that J is injective and open (that is, for each neighbourhood of zero U {\displaystyle U} in X there is a neighbourhood of zero V in ( X b ′ ) b ′ {\displaystyle \left(X_{b}^{\prime }\right)_{b}^{\prime }} such that J ( U ) ⊇ V ∩ J ( X ) {\displaystyle J(U)\supseteq V\cap J(X)} ). But it can be non-surjective and/or discontinuous. 
A locally convex space X {\displaystyle X} is called semi-reflexive if the evaluation map J : X → ( X b ′ ) b ′ {\displaystyle J:X\to \left(X_{b}^{\prime }\right)_{b}^{\prime }} is surjective (hence bijective); it is called reflexive if the evaluation map is surjective and continuous, in which case J will be an isomorphism of TVSs. If X is a Hausdorff locally convex space then the following are equivalent: Theorem [ 4 ] — A locally convex Hausdorff space X {\displaystyle X} is semi-reflexive if and only if X {\displaystyle X} with the σ ( X , X ′ ) {\displaystyle \sigma \left(X,X^{\prime }\right)} -topology has the Heine–Borel property (i.e. weakly closed and bounded subsets of X {\displaystyle X} are weakly compact). Every semi-Montel space is semi-reflexive and every Montel space is reflexive. If X {\displaystyle X} is a Hausdorff locally convex space then the canonical injection from X {\displaystyle X} into its bidual is a topological embedding if and only if X {\displaystyle X} is infrabarrelled. [ 5 ] Every semi-reflexive space is quasi-complete . [ 3 ] Every semi-reflexive normed space is a reflexive Banach space. [ 6 ] The strong dual of a semireflexive space is barrelled . [ 7 ] If X is a Hausdorff locally convex space then the following are equivalent: If X is a normed space then the following are equivalent: Every non- reflexive infinite-dimensional Banach space is a distinguished space that is not semi-reflexive. [ 11 ] If X {\displaystyle X} is a dense proper vector subspace of a reflexive Banach space then X {\displaystyle X} is a normed space that is not semi-reflexive but whose strong dual space is a reflexive Banach space. [ 11 ] There exists a semi-reflexive countably barrelled space that is not barrelled . [ 11 ]
https://en.wikipedia.org/wiki/Semi-reflexive_space
In mathematics, semi-simplicity is a widespread concept in disciplines such as linear algebra , abstract algebra , representation theory , category theory , and algebraic geometry . A semi-simple object is one that can be decomposed into a sum of simple objects, and simple objects are those that do not contain non-trivial proper sub-objects. The precise definitions of these words depend on the context. For example, if G is a finite group , then a nontrivial finite-dimensional representation V over a field is said to be simple if the only subrepresentations it contains are either {0} or V (these are also called irreducible representations ). Now Maschke's theorem says that any finite-dimensional representation of a finite group is a direct sum of simple representations (provided the characteristic of the base field does not divide the order of the group). So in the case of finite groups with this condition, every finite-dimensional representation is semi-simple. Especially in algebra and representation theory, "semi-simplicity" is also called complete reducibility . For example, Weyl's theorem on complete reducibility says a finite-dimensional representation of a semisimple compact Lie group is semisimple. A square matrix (in other words a linear operator T : V → V {\displaystyle T:V\to V} with V a finite-dimensional vector space) is said to be simple if its only invariant linear subspaces under T are {0} and V . If the field is algebraically closed (such as the complex numbers ), then the only simple matrices are of size 1-by-1. A semi-simple matrix is one that is similar to a direct sum of simple matrices ; if the field is algebraically closed, this is the same as being diagonalizable . These notions of semi-simplicity can be unified using the language of semi-simple modules , and generalized to semi-simple categories . 
If one considers all vector spaces (over a field , such as the real numbers ), the simple vector spaces are those that contain no proper nontrivial subspaces. Therefore, the one-dimensional vector spaces are the simple ones. It is thus a basic result of linear algebra that any finite-dimensional vector space is the direct sum of simple vector spaces; in other words, all finite-dimensional vector spaces are semi-simple. A square matrix or, equivalently, a linear operator T on a finite-dimensional vector space V is called semi-simple if every T - invariant subspace has a complementary T -invariant subspace. [ 1 ] [ 2 ] This is equivalent to the minimal polynomial of T being square-free. For vector spaces over an algebraically closed field F , semi-simplicity of a matrix is equivalent to diagonalizability . [ 1 ] This is because such an operator always has an eigenvector; if it is, in addition, semi-simple, then it has a complementary invariant hyperplane , which itself has an eigenvector, and thus by induction is diagonalizable. Conversely, diagonalizable operators are easily seen to be semi-simple, as invariant subspaces are direct sums of eigenspaces, and any eigenbasis for this subspace can be extended to an eigenbasis of the full space. For a fixed ring R , a nontrivial R -module M is simple if it has no submodules other than 0 and M . An R -module M is semi-simple if every R -submodule of M is an R -module direct summand of M (the trivial module 0 is semi-simple, but not simple). For an R -module M , M is semi-simple if and only if it is the direct sum of simple modules (the trivial module is the empty direct sum). Finally, R is called a semi-simple ring if it is semi-simple as an R -module. As it turns out, this is equivalent to requiring that any finitely generated R -module M is semi-simple. [ 3 ] Examples of semi-simple rings include fields and, more generally, finite direct products of fields. 
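The square-free criterion for semi-simplicity of a matrix can be tested symbolically. The SymPy sketch below (an illustrative check, not an optimised routine) tests whether the square-free part of the characteristic polynomial annihilates the matrix, which holds exactly when the minimal polynomial is square-free:

```python
from sympy import Matrix, Poly, Symbol, diff, eye, gcd, quo, zeros

def is_semisimple(M):
    """A square matrix is semi-simple iff its minimal polynomial is
    square-free, which holds iff the square-free part of the
    characteristic polynomial already annihilates the matrix."""
    x = Symbol('x')
    p = M.charpoly(x).as_expr()
    r = quo(p, gcd(p, diff(p, x)), x)   # square-free part of charpoly
    n = M.rows
    R = zeros(n, n)
    for c in Poly(r, x).all_coeffs():   # Horner evaluation of r(M)
        R = R * M + c * eye(n)
    return R == zeros(n, n)
```

Note that, unlike diagonalizability over the base field, this criterion correctly classifies a real rotation matrix such as [[0, -1], [1, 0]] as semi-simple (its minimal polynomial x² + 1 is square-free), while the Jordan block [[0, 1], [0, 0]] is not semi-simple.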
For a finite group G Maschke's theorem asserts that the group ring R [ G ] over some ring R is semi-simple if and only if R is semi-simple and | G | is invertible in R . Since the theory of modules of R [ G ] is the same as the representation theory of G on R -modules, this fact is an important dichotomy, which causes modular representation theory , i.e., the case when the characteristic of R does divide | G | , to be more difficult than the case when it does not divide | G | , in particular when R is a field of characteristic zero. By the Artin–Wedderburn theorem , a unital Artinian ring R is semisimple if and only if it is (isomorphic to) M n 1 ( D 1 ) × M n 2 ( D 2 ) × ⋯ × M n r ( D r ) {\displaystyle M_{n_{1}}(D_{1})\times M_{n_{2}}(D_{2})\times \cdots \times M_{n_{r}}(D_{r})} , where each D i {\displaystyle D_{i}} is a division ring and M n ( D ) {\displaystyle M_{n}(D)} is the ring of n -by- n matrices with entries in D . An operator T is semi-simple in the sense above if and only if the subalgebra F [ T ] ⊆ End F ⁡ ( V ) {\displaystyle F[T]\subseteq \operatorname {End} _{F}(V)} generated by the powers (i.e., iterations) of T inside the ring of endomorphisms of V is semi-simple. As indicated above, the theory of semi-simple rings is much easier than that of general rings. For example, any short exact sequence of modules over a semi-simple ring must split, i.e., M ≅ M ′ ⊕ M ″ {\displaystyle M\cong M'\oplus M''} . From the point of view of homological algebra , this means that there are no non-trivial extensions . The ring Z of integers is not semi-simple: Z is not the direct sum of n Z and Z / n . Many of the above notions of semi-simplicity are recovered by the concept of a semi-simple category C . Briefly, a category is a collection of objects and maps between such objects, the idea being that the maps between the objects preserve some structure inherent in these objects. 
For example, R -modules and R -linear maps between them form a category, for any ring R . An abelian category [ 4 ] C is called semi-simple if there is a collection of simple objects X α ∈ C {\displaystyle X_{\alpha }\in C} , i.e., ones with no subobject other than the zero object 0 and X α {\displaystyle X_{\alpha }} itself, such that any object X is the direct sum (i.e., coproduct or, equivalently, product) of finitely many simple objects. It follows from Schur's lemma that the endomorphism ring in a semi-simple category is a product of matrix rings over division rings, i.e., semi-simple. Moreover, a ring R is semi-simple if and only if the category of finitely generated R -modules is semisimple. An example from Hodge theory is the category of polarizable pure Hodge structures , i.e., pure Hodge structures equipped with a suitable positive definite bilinear form . The presence of this so-called polarization causes the category of polarizable Hodge structures to be semi-simple. [ 5 ] Another example from algebraic geometry is the category of pure motives of smooth projective varieties over a field k Mot ⁡ ( k ) ∼ {\displaystyle \operatorname {Mot} (k)_{\sim }} modulo an adequate equivalence relation ∼ {\displaystyle \sim } . As was conjectured by Grothendieck and shown by Jannsen , this category is semi-simple if and only if the equivalence relation is numerical equivalence . [ 6 ] This fact is a conceptual cornerstone in the theory of motives. Semisimple abelian categories also arise from a combination of a t -structure and a (suitably related) weight structure on a triangulated category . [ 7 ] One can ask whether the category of finite-dimensional representations of a group or a Lie algebra is semisimple, that is, whether every finite-dimensional representation decomposes as a direct sum of irreducible representations. The answer, in general, is no. For example, the representation of R {\displaystyle \mathbb {R} } given by x ↦ ( 1 x 0 1 ) {\displaystyle x\mapsto {\begin{pmatrix}1&x\\0&1\end{pmatrix}}} is not a direct sum of irreducibles. 
[ 8 ] (There is precisely one nontrivial invariant subspace, the span of the first basis element, e 1 {\displaystyle e_{1}} .) On the other hand, if G {\displaystyle G} is compact , then every finite-dimensional representation Π {\displaystyle \Pi } of G {\displaystyle G} admits an inner product with respect to which Π {\displaystyle \Pi } is unitary, showing that Π {\displaystyle \Pi } decomposes as a sum of irreducibles. [ 9 ] Similarly, if g {\displaystyle {\mathfrak {g}}} is a complex semisimple Lie algebra, every finite-dimensional representation of g {\displaystyle {\mathfrak {g}}} is a sum of irreducibles. [ 10 ] Weyl's original proof of this used the unitarian trick : Every such g {\displaystyle {\mathfrak {g}}} is the complexification of the Lie algebra of a simply connected compact Lie group K {\displaystyle K} . Since K {\displaystyle K} is simply connected, there is a one-to-one correspondence between the finite-dimensional representations of K {\displaystyle K} and of g {\displaystyle {\mathfrak {g}}} . [ 11 ] Thus, the just-mentioned result about representations of compact groups applies. It is also possible to prove semisimplicity of representations of g {\displaystyle {\mathfrak {g}}} directly by algebraic means, as in Section 10.3 of Hall's book. See also: Fusion category (fusion categories are semisimple).
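The failure of semisimplicity here can be checked numerically. The following numpy sketch uses the standard example of a non-semisimple representation of the real line (the unipotent matrices x ↦ [[1, x], [0, 1]], as in Hall's book); it verifies the homomorphism property, the invariance of the span of e1, and the absence of a second invariant line:

```python
import numpy as np

def pi(x):
    # Standard example of a non-semisimple representation of R:
    # x -> [[1, x], [0, 1]]  (upper-triangular unipotent matrices).
    return np.array([[1.0, x], [0.0, 1.0]])

# Homomorphism property: pi(x) @ pi(y) == pi(x + y).
assert np.allclose(pi(2.0) @ pi(3.0), pi(5.0))

# span(e1) is invariant: pi(x) fixes e1 for every x.
e1 = np.array([1.0, 0.0])
assert np.allclose(pi(7.0) @ e1, e1)

# No invariant complement: pi(1) - I has rank 1, so the eigenspace of
# the only eigenvalue 1 is one-dimensional, and there is no second
# invariant line that could split off span(e1) as a direct summand.
assert np.linalg.matrix_rank(pi(1.0) - np.eye(2)) == 1
```

Since every invariant complement of span(e1) would have to be spanned by a second eigenvector, the rank computation is exactly the obstruction to a direct-sum decomposition.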
https://en.wikipedia.org/wiki/Semi-simplicity
A semi-submarine ( semi-sub ) is a surface vessel that is not capable of diving, but has accommodation space below the waterline featuring underwater windows. [ 1 ] Such watercraft are similar to glass-bottom boats , but with a deeper draft . Both types of boats are mainly used to provide sight-seeing trips for tourists in clear, calm, and often shallow waters. The most common design is similar to a small ship . The passenger cabin is deep within the hull, a few meters below the waterline, and is equipped with large underwater windows so the passengers can observe the marine environment passed during the voyage. There are significant engineering differences between a true submarine and a semi-submarine. Submarines are human-occupied pressure vessels subjected to high external pressure, while semi-submarines are only subjected to the same pressures as other surface vessels of similar draft operating in similar conditions. As the hydrostatic pressure close to the water surface is relatively low, the viewing windows can be large; in some designs, the windows enclose the majority of the immersed hull. Passengers can climb up from the submerged cabin to the unsubmerged deck level at any time. Since the semi-sub interior is always open to the atmosphere, no special measures need be taken to ensure a supply of breathable air to its occupants. Semi-submarines can be used for research , but they are most commonly used in the tourism business. Large tourism-oriented semi-submarines should not, however, be confused with narco-submarines , which are smaller, home-made semi-submersible craft used to smuggle drugs . Semi-submarines have no international classification status, and their operating range from their home port may be limited by the local authorities.
https://en.wikipedia.org/wiki/Semi-submarine
In the mathematical field of graph theory , a semi-symmetric graph is an undirected graph that is edge-transitive and regular , but not vertex-transitive . In other words, a graph is semi-symmetric if each vertex has the same number of incident edges, and there is a symmetry taking any of the graph's edges to any other of its edges, but there is some pair of vertices such that no symmetry maps the first into the second. A semi-symmetric graph must be bipartite , and its automorphism group must act transitively on each of the two vertex sets of the bipartition (in fact, regularity is not required for this property to hold). For instance, in the diagram of the Folkman graph shown here, green vertices can not be mapped to red ones by any automorphism, but every two vertices of the same color are symmetric with each other. Semi-symmetric graphs were first studied by E. Dauber, a student of F. Harary, in a paper, no longer available, titled "On line- but not point-symmetric graphs". This was seen by Jon Folkman , whose paper, published in 1967, includes the smallest semi-symmetric graph, now known as the Folkman graph , on 20 vertices. [ 1 ] The term "semi-symmetric" was first used by Klin et al. in a paper they published in 1978. [ 2 ] The smallest cubic semi-symmetric graph (that is, one in which each vertex is incident to exactly three edges) is the Gray graph on 54 vertices. It was first observed to be semi-symmetric by Bouwer (1968) . It was proven to be the smallest cubic semi-symmetric graph by Dragan Marušič and Aleksander Malnič. [ 3 ] All the cubic semi-symmetric graphs on up to 10,000 vertices are known. According to Conder , Malnič, Marušič and Potočnik, the four smallest possible cubic semi-symmetric graphs after the Gray graph are the Iofinova–Ivanov graph on 110 vertices, the Ljubljana graph on 112 vertices, [ 4 ] a graph on 120 vertices with girth 8, and the Tutte 12-cage . [ 5 ]
https://en.wikipedia.org/wiki/Semi-symmetric_graph
A semi-synchronous orbit is an orbit with a period equal to half the average rotational period of the body being orbited, and in the same direction as that body's rotation. For Earth , a semi-synchronous orbit is considered a medium Earth orbit , with a period of just under 12 hours. For circular Earth orbits, the altitude is approximately 20,200 kilometres (12,600 mi). [ 1 ] [ 2 ] Semi-synchronous orbits are typical for GPS satellites .
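The quoted altitude follows directly from Kepler's third law. A short sketch (the gravitational parameter and sidereal day are standard reference values, not from the text above):

```python
import math

# Altitude of a circular semi-synchronous Earth orbit from Kepler's
# third law.  GM and the sidereal day are standard values.
GM = 3.986004418e14            # Earth's gravitational parameter, m^3 s^-2
sidereal_day = 86164.0905      # s
T = sidereal_day / 2.0         # orbital period, just under 12 hours

# a^3 = GM * T^2 / (4 pi^2)  ->  semi-major axis of the orbit
a = (GM * T**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
altitude_km = (a - 6.371e6) / 1000.0   # subtract the mean Earth radius

assert 20000 < altitude_km < 20400     # approximately 20,200 km
```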
https://en.wikipedia.org/wiki/Semi-synchronous_orbit
In both chemical and biological engineering , semibatch (semiflow) reactors operate much like batch reactors in that the reaction takes place in a single stirred tank with similar equipment. [ 1 ] However, they are modified to allow reactant addition and/or product removal over time. A normal batch reactor is filled with reactants in a single stirred tank at time t = 0 {\displaystyle t=0} and the reaction proceeds. A semibatch reactor, however, allows partial filling of reactants, with the flexibility of adding more as time progresses. Stirring in both types is very efficient, which allows batch and semibatch reactors to be assumed to have a uniform composition and temperature throughout. The flexibility of adding more reactants over time through semibatch operation has several advantages over a batch reactor. These include: Sometimes a particular reactant can go through parallel paths that yield two different products, only one of which is desired. Consider the simple example below: A→U (desired product) A→W (undesired product) The mole balances, accounting for the variability of the reaction volume and with the reaction rates taken per unit volume, are: V d C A d t + C A d V d t = F A − ( k 1 C A α + k 2 C A β ) V {\displaystyle V{\frac {dC_{A}}{dt}}+C_{A}{\frac {dV}{dt}}=F_{A}-\left(k_{1}C_{A}^{\alpha }+k_{2}C_{A}^{\beta }\right)V} V d C U d t + C U d V d t = k 1 C A α V − F U {\displaystyle V{\frac {dC_{U}}{dt}}+C_{U}{\frac {dV}{dt}}=k_{1}C_{A}^{\alpha }V-F_{U}} V d C W d t + C W d V d t = k 2 C A β V − F W {\displaystyle V{\frac {dC_{W}}{dt}}+C_{W}{\frac {dV}{dt}}=k_{2}C_{A}^{\beta }V-F_{W}} where F A {\displaystyle F_{A}} is the molar rate of addition of the reactant A. Note that these addition terms, which can be negative in the case of product removal (e.g. by fractional distillation ), are what distinguish the semibatch equations from the simpler batch case.
For standard batch reactors (no addition terms), the selectivity of the desired product is defined as: S = d ( V C U ) d ( V C W ) {\displaystyle {\frac {d(VC_{U})}{d(VC_{W})}}} = k 1 k 2 C A α − β {\displaystyle {\frac {k_{1}}{k_{2}}}C_{A}^{\alpha -\beta }} which reduces to S = d C U d C W {\displaystyle {\frac {dC_{U}}{dC_{W}}}} = k 1 k 2 C A α − β {\displaystyle {\frac {k_{1}}{k_{2}}}C_{A}^{\alpha -\beta }} for constant-volume (i.e. batch) reactions. If β > α {\displaystyle \beta >\alpha } , the concentration of the reactant should be kept at a low level in order to maximize selectivity. This can be accomplished using a semibatch reactor. Exothermic reactions release heat, and ones that are highly exothermic can cause safety concerns. Semibatch reactors allow for slow addition of reactants in order to control the heat released, and thus the temperature, in the reactor. In order to minimize the reversibility of a reaction, one must minimize the concentration of the product. This can be done in a semibatch reactor by using a purge stream to remove products and increase the net reaction rate by favoring the forward reaction. These advantages bear chiefly on the decision between using a batch, a semibatch, or a continuous reactor in a given process. Both batch and semibatch reactors are more suitable for liquid-phase reactions and small-scale production, because they usually require lower capital costs than a continuous stirred-tank reactor (CSTR), but incur greater costs per unit if production needs to be scaled up. These per-unit costs include labor, materials handling (filling, emptying, cleaning), protective measures, and nonproductive periods that result from changeovers when switching batches. Hence, the capital costs must be weighed against operating costs to determine the correct reactor design to be implemented.
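The selectivity argument can be illustrated numerically. In the sketch below (hypothetical rate constants and feed schedule, with α = 1 and β = 2 so that β > α), a slow feed keeps the reactant concentration low, and the semibatch run ends with a better ratio of desired to undesired product than the all-at-once batch run:

```python
# Illustrative sketch with hypothetical rate constants: parallel paths
# A -> U (rate k1*CA, alpha = 1) and A -> W (rate k2*CA^2, beta = 2).
# Since beta > alpha, a low CA favours the desired product U; a
# semibatch reactor keeps CA low by feeding A slowly.
k1, k2 = 1.0, 1.0
dt, n_steps = 1e-2, 20_000   # simple explicit-Euler integration

def run(V0, nA0, feed_rate, feed_time):
    V, nA, nU, nW = V0, nA0, 0.0, 0.0
    for i in range(n_steps):
        t = i * dt
        FA = feed_rate if t < feed_time else 0.0  # molar feed of A
        CA = nA / V
        rU, rW = k1 * CA, k2 * CA**2              # rates per unit volume
        nA += (FA - (rU + rW) * V) * dt
        nU += rU * V * dt
        nW += rW * V * dt
        V += FA * dt                              # feed concentration 1 mol/L
    return nU, nW

# Batch: all 1 mol of A present at t = 0 in 1 L of solution.
U_b, W_b = run(V0=1.0, nA0=1.0, feed_rate=0.0, feed_time=0.0)
# Semibatch: 1 L of solvent; the same 1 mol of A fed in over 100 time units.
U_s, W_s = run(V0=1.0, nA0=0.0, feed_rate=0.01, feed_time=100.0)

assert U_s / W_s > U_b / W_b   # semibatch achieves better selectivity
```

Because the integration conserves moles step by step, the two runs convert the same 1 mol of A; only the split between U and W differs.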
https://en.wikipedia.org/wiki/Semibatch_reactor
Semicarbazide is the chemical compound with the formula OC(NH 2 )(N 2 H 3 ). It is a water-soluble white solid and a derivative of urea . The compound is prepared by treating urea with hydrazine : [ 2 ] A further reaction can occur to give carbohydrazide : Semicarbazide is frequently reacted with aldehydes and ketones to produce semicarbazones via a condensation reaction . This is an example of imine formation resulting from the reaction of a primary amine with a carbonyl group . The reaction is useful because semicarbazones, like oximes and 2,4-DNPs , typically have high melting points and crystallize , facilitating purification or identification of reaction products. [ 3 ] Semicarbazide derivatives (semicarbazones and thiosemicarbazones) are known to have antiviral, anti-infective, and antineoplastic activity, mediated through binding to copper or iron in cells. Semicarbazide is used in preparing pharmaceuticals including nitrofuran antibacterials ( furazolidone , nitrofurazone , nitrofurantoin ) and related compounds. It is also a degradation product of the blowing agent azodicarbonamide (ADC): semicarbazide forms in heat-treated flour containing ADC as well as in breads made from ADC-treated flour. [ 4 ] [ 5 ] Semicarbazide is used as a detection reagent in thin layer chromatography (TLC). It stains α-keto acids on the TLC plate, which can then be viewed under ultraviolet light.
https://en.wikipedia.org/wiki/Semicarbazide
In organic chemistry , a semicarbazone is a derivative of imines formed by a condensation reaction between a ketone or aldehyde and semicarbazide . They are classified as imine derivatives because they are formed from the reaction of an aldehyde or ketone with the terminal -NH 2 group of semicarbazide, which behaves very similarly to primary amines . For example, the semicarbazone of acetone would have the structure (CH 3 ) 2 C=NNHC(=O)NH 2 . Some semicarbazones, such as nitrofurazone , and thiosemicarbazones are known to have anti-viral and anti-cancer activity, usually mediated through binding to copper or iron in cells. Many semicarbazones are crystalline solids, useful for the identification of the parent aldehydes/ketones by melting point analysis. [ 1 ] A thiosemicarbazone is an analog of a semicarbazone which contains a sulfur atom in place of the oxygen atom.
https://en.wikipedia.org/wiki/Semicarbazone
The semicircle law , in condensed matter physics , is a mathematical relationship that occurs between quantities measured in the quantum Hall effect . It describes a relationship between the anisotropic and isotropic components of the macroscopic conductivity tensor σ , and, when plotted, appears as a semicircle . The semicircle law was first described theoretically in Dykhne and Ruzin's analysis of the quantum Hall effect as a mixture of 2 phases: a free electron gas , and a free hole gas. [ 1 ] [ 2 ] Mathematically, it states that σ x x 2 + ( σ x y − σ x y 0 ) 2 = ( σ x x 0 ) 2 {\displaystyle \sigma _{xx}^{2}+(\sigma _{xy}-\sigma _{xy}^{0})^{2}=(\sigma _{xx}^{0})^{2}} where σ is the mean-field Hall conductivity, and σ 0 is a parameter that encodes the classical conductivity of each phase. A similar law also holds for the resistivity . [ 1 ] A convenient reformulation of the law mixes conductivity and resistivity: σ x y 0 = ρ x y 0 ( ρ x y 0 ) 2 + ( ρ x x 0 ) 2 = e 2 h ( n + 1 2 ) {\displaystyle \sigma _{xy}^{0}={\frac {\rho _{xy}^{0}}{(\rho _{xy}^{0})^{2}+(\rho _{xx}^{0})^{2}}}={\frac {e^{2}}{h}}\left(n+{\frac {1}{2}}\right)} where n is an integer, the Hall divisor . [ 3 ] Although Dykhne and Ruzin's original analysis assumed little scattering, an assumption that proved empirically unsound, the law holds in the coherent-transport limits commonly observed in experiment. [ 2 ] [ 4 ] Theoretically, the semicircle law originates from a representation of the modular group Γ 0 (2) , which describes a symmetry between different Hall phases. (Note that this is not a symmetry in the conventional sense; there is no conserved current .) [ 5 ] [ 6 ] That group's strong connections to number theory also appear: Hall phase transitions (in a single layer) [ 5 ] exhibit a selection rule | p q ′ − p ′ q | = 1 {\displaystyle |pq'-p'q|=1} that also governs the Farey sequence . [ 5 ] [ 6 ] Indeed, plots of the semicircle law are also Farey diagrams . 
In striped quantum Hall phases, the relationship is slightly more complex, because of the broken symmetry: { σ 1 σ 2 + ( σ h − σ h 0 ) 2 = ( e 2 / ( 2 h ) ) 2 σ h 0 = ( N + 1 / 2 ) e 2 / h {\displaystyle {\begin{cases}\sigma _{1}\sigma _{2}+(\sigma _{h}-\sigma _{h}^{0})^{2}=(e^{2}/(2h))^{2}\\\sigma _{h}^{0}=(N+1/2)e^{2}/h\end{cases}}} Here σ 1 and σ 2 describe the macroscopic conductivity in directions aligned with and perpendicular to the stripes. [ 7 ]
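The statement above that "a similar law also holds for the resistivity" can be checked numerically: tensor inversion of an isotropic 2D conductivity acts on σ_xy + iσ_xx as a Möbius-type map, which sends circles to circles. A small sketch (units of e²/h throughout; n = 1 and radius 1/2 are assumed example values, not from the source):

```python
import math

def circumcenter(a, b, c):
    # Centre of the circle through three points of the complex plane.
    d = 2.0 * (a.real * (b.imag - c.imag) + b.real * (c.imag - a.imag)
               + c.real * (a.imag - b.imag))
    ux = (abs(a)**2 * (b.imag - c.imag) + abs(b)**2 * (c.imag - a.imag)
          + abs(c)**2 * (a.imag - b.imag)) / d
    uy = (abs(a)**2 * (c.real - b.real) + abs(b)**2 * (a.real - c.real)
          + abs(c)**2 * (b.real - a.real)) / d
    return complex(ux, uy)

# Conductivities (units of e^2/h) on the semicircle with centre
# sigma_xy^0 = n + 1/2; n = 1 and radius 1/2 are illustrative choices.
n, radius = 1, 0.5
sigmas = [complex(n + 0.5 + radius * math.cos(t), radius * math.sin(t))
          for t in (0.3, 1.0, 1.6, 2.4, 2.9)]

# Inverting the 2x2 conductivity tensor amounts to
# rho_xy + i*rho_xx = 1 / conj(sigma_xy + i*sigma_xx).
rhos = [1.0 / s.conjugate() for s in sigmas]

# The image points again lie on a single circle: a "similar law"
# indeed holds for the resistivity.
c = circumcenter(*rhos[:3])
r = abs(rhos[0] - c)
assert all(abs(abs(w - c) - r) < 1e-12 for w in rhos)
```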
https://en.wikipedia.org/wiki/Semicircle_law_(quantum_Hall_effect)
A semi-circular bund (also known as a demi-lune or half-moon ) is a rainwater harvesting technique that consists of digging semi-lunar (half-moon-shaped) pits in the ground with the opening perpendicular to the flow of water. [ 1 ] [ 2 ] The technique is particularly beneficial in areas where rainfall is scarce and irregular, namely arid and semi-arid regions. Semi-circular bunds primarily serve to slow down and retain runoff , ensuring that the plants inside them receive the necessary water. Crop cultivation, grazing, and forestry are particularly challenging in drylands . Local communities often lack the financial and practical resources to establish irrigation systems or use chemical fertilizers , so these are generally considered infeasible solutions for such areas. As a result, rainfall harvesting techniques are widely adopted to efficiently retain rainwater while minimizing the need for additional materials and financial investment. [ 3 ] There are various rainfall harvesting techniques, all sharing the fundamental principle of constructing or excavating structures using natural materials such as soil and stones. These techniques include planting pits, infiltration basins and microbasins, and cross-slope barriers. [ 4 ] Semi-circular bunds fall into the subcategory of microcatchment water harvesting . Beyond their primary function of reducing runoff for agricultural purposes, these methods offer additional benefits, such as providing extra drinking water for livestock, enabling land reclamation, enhancing soil fertility, accelerating timber growth for firewood, and influencing regional atmospheric patterns, potentially leading to increased precipitation. [ 5 ] Semi-circular bunds have a long history as a response to the challenges of water scarcity and soil erosion in dry climates. Their use can be traced back to traditional farming practices in various parts of the world, especially in Africa and the Middle East.
In these areas, local communities developed and refined the technique over generations as a way to improve agricultural productivity and restore degraded lands . [ 6 ] While the practice has ancient roots, semi-circular bunds gained significant attention in the scientific and development communities during the latter half of the 20th century, especially in the Sahel region of Africa. [ 7 ] As climate variability and land degradation intensified in this area, these structures became an important tool for land rehabilitation and agricultural improvement . Research conducted in the early 21st century has further validated the effectiveness of semi-circular bunds in improving soil properties, increasing vegetation cover, and enhancing biodiversity in arid and semi-arid ecosystems. [ 8 ] [ 9 ] Semi-circular bunds are pits dug in a semi-circular shape, with the excavated material (earth and stone) accumulated along the edges. They usually have a diameter of 2-8 meters (up to 12 meters) and are 30-50 cm high. The bunds are arranged in a staggered pattern across a plot, meaning that runoff flowing between structures in one row is captured by the row below, and so forth. The catchment-to-cultivated area (C:A) ratio varies between 1:1 and 3:1. They are generally implemented on slopes of up to 15%, though earthen bunds are rarely used on slopes steeper than 5% when annual rainfall exceeds 300 mm. In drier conditions, the bunds are larger, whereas in wetter areas, more bunds with smaller radii are built. Larger, more widely spaced half-moons are primarily used for rehabilitating grazing land or producing fodder, while smaller, closely spaced half-moons support the growth of trees and shrubs. [ 10 ] In addition to their primary function of capturing rainfall, the accumulated detritus attracts termites and other invertebrates, whose activity creates tunnels and pores in the organic matter. 
This process enhances humus formation, improves water infiltration , and ultimately enriches soil quality. [ 11 ] [ 12 ] When combined with other nutrient-rich material such as animal manure, semi-circular bunds have been shown to significantly reduce the risk of crop failure and boost agricultural productivity, potentially tripling yields compared to more conventional methods. [ 13 ] [ 14 ] However, while these techniques require minimal artificial materials and financial investment, they are highly labor-intensive: preparing one hectare of semi-circular bunds can require up to four person-months of work, with additional work for annual maintenance. [ 11 ] Additional challenges include a lack of knowledge and the absence of training programs. [ 15 ] In March 2023, the Wyss Academy for Nature, in partnership with the Naibunga Community Conservancy and Justdiggit, started a project introducing semi-circular bunds to the Ewaso Ng’iro basin in Kenya. [ 10 ] The first bunds were dug in March 2023, [ 10 ] and the total number of bunds has since reached 100,000. [ 16 ] As of December 2023, preliminary results showed that soil had become greener, and elephants had returned to the area. [ 10 ] Dry and wet season biodiversity and socio-ecological assessments of local livelihoods were undertaken in July and August of 2023, as well as April of 2024, and local natural resources, including habitat restoration, are continuously monitored. [ 17 ] A 2014 study investigated the effects of semi-circular bunds on vegetation cover and soil properties in Koteh rangeland, Sistan and Baloochestan province, in Iran. Vegetation samples (using 5x5 m 2 plots) and soil samples (taken from a depth of 0-30 cm) showed that areas with bunds exhibited more vegetation cover, plant production, and density compared to the control (no bund) area.
The highest richness and diversity of species was measured in the semi-circular bund area, and the amounts of organic carbon, nitrogen, potassium, phosphorus and calcium carbonate in the soil increased significantly. The study concluded that the building of semi-circular bunds had a positive effect on vegetation cover and soil properties in the rangelands of the study area. [ 18 ] Semi-circular bunds implemented in the Téra and Tahuoa regions of Niger between 1970 and 2010 show positive results in terms of many restoration goals. [ 19 ] Impacts of the intervention include increased crop production (millet yields increased by 180 kg and straw yields by 400 kg per hectare per year), increased fodder production, and increased wood production. Both plant and animal diversity increased. Additionally, the demand for irrigation water decreased, and food security and self-sufficiency increased. The harvesting and collection of water improved, and both soil moisture and nutrient cycling increased. [ 19 ]
https://en.wikipedia.org/wiki/Semicircular_bund
In physics , semiclassical refers to a theory in which one part of a system is described quantum mechanically , whereas the other is treated classically . For example, external fields are treated as constant, or, when they change, as classically prescribed. In general, it incorporates an expansion in powers of the Planck constant , with classical physics recovered at order 0 and the first nontrivial approximation appearing at order (−1). In this case, there is a clear link between the quantum-mechanical system and the associated semi-classical and classical approximations, analogous to the transition from physical optics to geometric optics . Max Planck was the first to introduce the idea of quanta of energy in 1900 while studying black-body radiation . In 1906, he was also the first to write that quantum theory should replicate classical mechanics at some limit, particularly if the Planck constant h were infinitesimal. [ 1 ] [ 2 ] With this idea he showed that Planck's law for thermal radiation leads to the Rayleigh–Jeans law , the classical prediction (valid for large wavelengths ). [ 1 ] [ 2 ] Some examples of a semiclassical approximation include:
https://en.wikipedia.org/wiki/Semiclassical_physics
Semiclassical Transition State Theory (SCTST) [ 1 ] [ 2 ] is an efficient chemical rate theory which aims to calculate accurate rate constants of chemical reactions, including nuclear quantum effects such as tunnelling , from ab initio quantum chemistry . [ 3 ] [ 4 ] [ 5 ] The method makes use of the semiclassical WKB wavefunction, Bohr–Sommerfeld theory, and vibrational perturbation theory to derive an analytical relation for the probability of a particle transmitting through a potential barrier at a given energy E. It was first developed by Bill Miller and coworkers in the 1970s, and has been further developed to allow application to larger systems [ 6 ] and the use of more accurate potentials. [ 7 ]
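At zeroth order, the barrier-penetration integral of a parabolic barrier yields a familiar closed form for the transmission probability, P(E) = 1/(1 + exp(2πθ(E))). A minimal sketch (illustrative parameter values; the full SCTST θ(E) includes anharmonic corrections from vibrational perturbation theory, which are omitted here):

```python
import math

# Semiclassical transmission through a parabolic barrier, using the
# harmonic (zeroth-order) barrier-penetration integral
#   theta(E) = -(E - V0) / (hbar * omega_b).
# Parameter values are illustrative, in reduced units with hbar = 1.
hbar = 1.0
V0 = 10.0        # barrier height
omega_b = 2.0    # magnitude of the imaginary barrier frequency

def transmission(E):
    theta = -(E - V0) / (hbar * omega_b)
    return 1.0 / (1.0 + math.exp(2.0 * math.pi * theta))

assert transmission(V0) == 0.5             # exactly 1/2 at the barrier top
assert 0.0 < transmission(V0 - 1.0) < 0.5  # finite tunnelling below the top
assert transmission(V0 + 1.0) > 0.5        # incomplete reflection above it
```

The smooth switch from tunnelling below the barrier to near-unit transmission above it is what distinguishes this semiclassical form from the classical step function at E = V0.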
https://en.wikipedia.org/wiki/Semiclassical_transition_state_theory
In computability theory , a semicomputable function is a partial function f : Q → R {\displaystyle f:\mathbb {Q} \rightarrow \mathbb {R} } that can be approximated either from above or from below by a computable function . More precisely, a partial function f : Q → R {\displaystyle f:\mathbb {Q} \rightarrow \mathbb {R} } is upper semicomputable , meaning it can be approximated from above, if there exists a computable function ϕ ( x , k ) : Q × N → Q {\displaystyle \phi (x,k):\mathbb {Q} \times \mathbb {N} \rightarrow \mathbb {Q} } , where x {\displaystyle x} is the desired parameter for f ( x ) {\displaystyle f(x)} and k {\displaystyle k} is the level of approximation, such that ϕ ( x , k ) ≥ f ( x ) {\displaystyle \phi (x,k)\geq f(x)} , ϕ ( x , k + 1 ) ≤ ϕ ( x , k ) {\displaystyle \phi (x,k+1)\leq \phi (x,k)} , and lim k → ∞ ϕ ( x , k ) = f ( x ) {\displaystyle \lim _{k\rightarrow \infty }\phi (x,k)=f(x)} . Completely analogously, a partial function f : Q → R {\displaystyle f:\mathbb {Q} \rightarrow \mathbb {R} } is lower semicomputable if and only if − f ( x ) {\displaystyle -f(x)} is upper semicomputable, or equivalently if there exists a computable function ϕ ( x , k ) {\displaystyle \phi (x,k)} such that ϕ ( x , k ) ≤ f ( x ) {\displaystyle \phi (x,k)\leq f(x)} , ϕ ( x , k + 1 ) ≥ ϕ ( x , k ) {\displaystyle \phi (x,k+1)\geq \phi (x,k)} , and lim k → ∞ ϕ ( x , k ) = f ( x ) {\displaystyle \lim _{k\rightarrow \infty }\phi (x,k)=f(x)} . If a partial function is both upper and lower semicomputable, it is called computable.
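As an illustration, the square root is upper semicomputable on the non-negative rationals: bisection from above yields a computable, rational-valued φ(x, k) that decreases monotonically to √x. (Of course √x is in fact computable, being approximable from below as well; the sketch only exhibits the upper-approximation scheme.)

```python
from fractions import Fraction

# phi(x, k) is computable, rational-valued, non-increasing in k, and
# converges to sqrt(x) from above: an upper-semicomputable presentation.
def phi(x, k):
    x = Fraction(x)
    lo, hi = Fraction(0), max(Fraction(1), x)   # invariant: hi**2 >= x
    for _ in range(k):
        mid = (lo + hi) / 2
        if mid * mid >= x:
            hi = mid
        else:
            lo = mid
    return hi   # always an upper bound on sqrt(x)

approx = [phi(2, k) for k in range(1, 30)]
# Monotone non-increasing upper bounds on sqrt(2) ...
assert all(a >= b for a, b in zip(approx, approx[1:]))
assert all(a * a >= 2 for a in approx)
# ... converging: at k = 29 the bound is accurate to better than 2**-18.
assert approx[-1] ** 2 - 2 < Fraction(1, 2 ** 18)
```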
https://en.wikipedia.org/wiki/Semicomputable_function
The semiconductor Bloch equations [ 1 ] (abbreviated as SBEs) describe the optical response of semiconductors excited by coherent classical light sources, such as lasers . They are based on a full quantum theory , and form a closed set of integro-differential equations for the quantum dynamics of microscopic polarization and charge carrier distribution . [ 2 ] [ 3 ] The SBEs are named after the structural analogy to the optical Bloch equations that describe the excitation dynamics in a two-level atom interacting with a classical electromagnetic field . As the major complication beyond the atomic approach, the SBEs must address the many-body interactions resulting from Coulomb force among charges and the coupling among lattice vibrations and electrons. The optical response of a semiconductor follows if one can determine its macroscopic polarization P {\displaystyle \mathbf {P} } as a function of the electric field E {\displaystyle \mathbf {E} } that excites it. The connection between P {\displaystyle \mathbf {P} } and the microscopic polarization P k {\displaystyle P_{\mathbf {k} }} is given by P = d ∑ k P k + c . c . , {\displaystyle \mathbf {P} =\mathbf {d} \,\sum _{\mathbf {k} }P_{\mathbf {k} }+\operatorname {c.c.} \;,} where the sum involves crystal-momenta ℏ k {\displaystyle \hbar {\mathbf {k} }} of all relevant electronic states. In semiconductor optics, one typically excites transitions between a valence and a conduction band . In this connection, d {\displaystyle \mathbf {d} } is the dipole matrix element between the conduction and valence band and P k {\displaystyle P_{\mathbf {k} }} defines the corresponding transition amplitude. The derivation of the SBEs starts from a system Hamiltonian that fully includes the free-particles , Coulomb interaction , dipole interaction between classical light and electronic states, as well as the phonon contributions. 
[ 3 ] Like almost always in many-body physics , it is most convenient to apply the second-quantization formalism after the appropriate system Hamiltonian H ^ S y s t e m {\displaystyle {\hat {H}}_{\mathrm {System} }} is identified. One can then derive the quantum dynamics of relevant observables O ^ {\displaystyle {\hat {\mathcal {O}}}} by using the Heisenberg equation of motion i ℏ d d t ⟨ O ^ ⟩ = ⟨ [ O ^ , H ^ S y s t e m ] − ⟩ . {\displaystyle \mathrm {i} \hbar {\frac {\mathrm {d} }{\mathrm {d} t}}\langle {\hat {\mathcal {O}}}\rangle =\langle [{\hat {\mathcal {O}}},{\hat {H}}_{\mathrm {System} }]_{-}\rangle \;.} Due to the many-body interactions within H ^ S y s t e m {\displaystyle {\hat {H}}_{\mathrm {System} }} , the dynamics of the observable O ^ {\displaystyle {\hat {\mathcal {O}}}} couples to new observables and the equation structure cannot be closed. This is the well-known BBGKY hierarchy problem that can be systematically truncated with different methods such as the cluster-expansion approach . [ 4 ] At operator level, the microscopic polarization is defined by an expectation value for a single electronic transition between a valence and a conduction band. In second quantization, conduction-band electrons are defined by fermionic creation and annihilation operators a ^ c , k † {\displaystyle {\hat {a}}_{c,{\mathbf {k} }}^{\dagger }} and a ^ c , k {\displaystyle {\hat {a}}_{c,{\mathbf {k} }}} , respectively. An analogous identification, i.e., a ^ v , k † {\displaystyle {\hat {a}}_{v,{\mathbf {k} }}^{\dagger }} and a ^ v , k {\displaystyle {\hat {a}}_{v,{\mathbf {k} }}} , is made for the valence band electrons. 
The corresponding electronic interband transition then becomes P k ⋆ = ⟨ a ^ c , k † a ^ v , k ⟩ , P k = ⟨ a ^ v , k † a ^ c , k ⟩ , {\displaystyle P_{\mathbf {k} }^{\star }=\langle {\hat {a}}_{c,\mathbf {k} }^{\dagger }{\hat {a}}_{v,\mathbf {k} }\rangle \,,\qquad P_{\mathbf {k} }=\langle {\hat {a}}_{v,\mathbf {k} }^{\dagger }{\hat {a}}_{c,\mathbf {k} }\rangle \,,} that describe transition amplitudes for moving an electron from conduction to valence band ( P k ⋆ {\displaystyle P_{\mathbf {k} }^{\star }} term) or vice versa ( P k {\displaystyle P_{\mathbf {k} }} term). At the same time, an electron distribution follows from f k e = ⟨ a ^ c , k † a ^ c , k ⟩ . {\displaystyle f_{\mathbf {k} }^{e}=\langle {\hat {a}}_{c,\mathbf {k} }^{\dagger }{\hat {a}}_{c,\mathbf {k} }\rangle \;.} It is also convenient to follow the distribution of electronic vacancies, i.e., the holes , f k h = 1 − ⟨ a ^ v , k † a ^ v , k ⟩ = ⟨ a ^ v , k a ^ v , k † ⟩ {\displaystyle f_{\mathbf {k} }^{h}=1-\langle {\hat {a}}_{v,\mathbf {k} }^{\dagger }{\hat {a}}_{v,\mathbf {k} }\rangle =\langle {\hat {a}}_{v,\mathbf {k} }{\hat {a}}_{v,\mathbf {k} }^{\dagger }\rangle } that are left to the valence band due to optical excitation processes. 
The quantum dynamics of optical excitations yields an integro-differential equations that constitute the SBEs [ 1 ] [ 3 ] i ℏ ∂ ∂ t P k = ε ~ k P k − [ 1 − f k e ( t ) − f k h ( t ) ] Ω k + i ℏ ∂ ∂ t P k | s c a t t e r , {\displaystyle \mathrm {i} \hbar {\frac {\partial }{\partial t}}P_{\mathbf {k} }={\tilde {\varepsilon }}_{\mathbf {k} }P_{\mathbf {k} }-\left[1-f_{\mathbf {k} }^{e}(t)-f_{\mathbf {k} }^{h}(t)\right]\Omega _{\mathbf {k} }+\mathrm {i} \hbar \left.{\frac {\partial }{\partial t}}P_{\mathbf {k} }\right|_{\mathrm {scatter} }\,,} ℏ ∂ ∂ t f k e = 2 Im ⁡ [ Ω k ⋆ P k ] + ℏ ∂ ∂ t f k e | s c a t t e r , {\displaystyle \hbar {\frac {\partial }{\partial t}}f_{\mathbf {k} }^{e}=2\operatorname {Im} \left[\Omega _{\mathbf {k} }^{\star }P_{\mathbf {k} }\right]+\hbar \left.{\frac {\partial }{\partial t}}f_{\mathbf {k} }^{e}\right|_{\mathrm {scatter} }\,,} ℏ ∂ ∂ t f k h = 2 Im ⁡ [ Ω k ⋆ P k ] + ℏ ∂ ∂ t f k h | s c a t t e r . {\displaystyle \hbar {\frac {\partial }{\partial t}}f_{\mathbf {k} }^{h}=2\operatorname {Im} \left[\Omega _{\mathbf {k} }^{\star }P_{\mathbf {k} }\right]+\hbar \left.{\frac {\partial }{\partial t}}f_{\mathbf {k} }^{h}\right|_{\mathrm {scatter} }\;.} These contain the renormalized Rabi energy Ω k = d ⋅ E + ∑ k ′ ≠ k V k − k ′ P k ′ {\displaystyle \Omega _{\mathbf {k} }=\mathbf {d} \cdot \mathbf {E} +\sum _{\mathbf {k} '\neq \mathbf {k} }V_{\mathbf {k} -\mathbf {k} '}P_{\mathbf {k} '}} as well as the renormalized carrier energy ε ~ k = ε k − ∑ k ′ ≠ k V k − k ′ [ f k ′ e + f k ′ h ] , {\displaystyle {\tilde {\varepsilon }}_{\mathbf {k} }=\varepsilon _{\mathbf {k} }-\sum _{\mathbf {k} '\neq \mathbf {k} }V_{\mathbf {k} -\mathbf {k} '}\left[f_{\mathbf {k} '}^{e}+f_{\mathbf {k} '}^{h}\right]\,,} where ε k {\displaystyle \varepsilon _{\mathbf {k} }} corresponds to the energy of free electron–hole pairs and V k {\displaystyle V_{\mathbf {k} }} is the Coulomb matrix element, given here in terms of the carrier wave vector k {\displaystyle \mathbf {k} } . 
The symbolically denoted ⋯ | s c a t t e r {\displaystyle \left.\cdots \right|_{\mathrm {scatter} }} contributions stem from the hierarchical coupling due to many-body interactions. Conceptually, P k {\displaystyle P_{\mathbf {k} }} , f k e {\displaystyle f_{\mathbf {k} }^{e}} , and f k h {\displaystyle f_{\mathbf {k} }^{h}} are single-particle expectation values, while the hierarchical coupling originates from two-particle correlations such as polarization-density correlations or polarization-phonon correlations. Physically, these two-particle correlations introduce several nontrivial effects such as screening of the Coulomb interaction, Boltzmann-type scattering of f k e {\displaystyle f_{\mathbf {k} }^{e}} and f k h {\displaystyle f_{\mathbf {k} }^{h}} toward a Fermi–Dirac distribution , excitation-induced dephasing, and further renormalization of energies due to correlations. All these correlation effects can be systematically included by solving also the dynamics of two-particle correlations. [ 5 ] At this level of sophistication, the SBEs can be used to predict the optical response of semiconductors without phenomenological parameters, which gives them a very high degree of predictive power. Indeed, the SBEs can be used to identify suitable laser designs through the accurate knowledge they produce about the semiconductor's gain spectrum . One can even use the SBEs to deduce the existence of correlations, such as bound excitons, from quantitative measurements. [ 6 ] The SBEs presented here are formulated in momentum space, since the carrier's crystal momentum is given by ℏ k {\displaystyle \hbar \mathbf {k} } . An equivalent set of equations can also be formulated in position space. [ 7 ] However, the correlation computations in particular are much simpler to perform in momentum space.
The P k {\displaystyle P_{\mathbf {k} }} dynamics show a structure where an individual P k {\displaystyle P_{\mathbf {k} }} is coupled to all other microscopic polarizations due to the Coulomb interaction V k {\displaystyle V_{\mathbf {k} }} . Therefore, the transition amplitude P k {\displaystyle P_{\mathbf {k} }} is collectively modified by the presence of other transition amplitudes. Only if one sets V k {\displaystyle V_{\mathbf {k} }} to zero does one find isolated transitions within each k {\displaystyle {\mathbf {k} }} state that follow exactly the same dynamics as the optical Bloch equations predict. Therefore, already the Coulomb interaction among the P k {\displaystyle P_{\mathbf {k} }} produces a new solid-state effect compared with optical transitions in simple atoms. Conceptually, P k {\displaystyle P_{\mathbf {k} }} is just a transition amplitude for exciting an electron from the valence to the conduction band. At the same time, the homogeneous part of the P k {\displaystyle P_{\mathbf {k} }} dynamics yields an eigenvalue problem that can be expressed through the generalized Wannier equation . The eigenstates of the Wannier equation are analogous to the bound solutions of the hydrogen problem of quantum mechanics. These are often referred to as exciton solutions, and they formally describe Coulombic binding between oppositely charged electrons and holes. However, a real exciton is a true two-particle correlation, because one must then have a correlation between one electron and one hole. Therefore, the appearance of exciton resonances in the polarization does not by itself signify the presence of excitons, because P k {\displaystyle P_{\mathbf {k} }} is a single-particle transition amplitude. The excitonic resonances are a direct consequence of the Coulomb coupling among all transitions possible in the system. 
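The eigenvalue problem behind the Wannier equation can be illustrated with a small numerical sketch. The code below diagonalizes a one-dimensional finite-difference Hamiltonian with a softened electron–hole Coulomb attraction (units ℏ = μ = e = 1; the softening length, grid, and box size are assumptions made to keep the toy model regular at the origin); its negative eigenvalues play the role of the bound exciton solutions.

```python
import numpy as np

# Discretize H = -1/2 d^2/dx^2 - 1/sqrt(x^2 + a^2) on a finite grid.
n, L, a = 400, 80.0, 1.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Second-order finite-difference kinetic energy (hbar = reduced mass = 1).
T = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / (2.0 * dx**2)

# Softened Coulomb attraction between electron and hole.
V = np.diag(-1.0 / np.sqrt(x**2 + a**2))

energies, states = np.linalg.eigh(T + V)   # eigh returns ascending eigenvalues
bound = energies[energies < 0.0]           # Rydberg-like "exciton" levels
```

Just as for the hydrogen atom, the bound spectrum forms a series converging toward the continuum edge; in the full SBEs these states show up as resonances in the polarization even without real carrier populations.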
In other words, the single-particle transitions themselves are influenced by the Coulomb interaction, making it possible to detect excitonic resonances in the optical response even when true excitons are not present. [ 8 ] Therefore, it is often customary to specify optical resonances as excitonic instead of exciton resonances. The actual role of excitons in the optical response can only be deduced from the quantitative changes they induce in the linewidth and energy shift of the excitonic resonances. [ 6 ] The solutions of the Wannier equation produce valuable insight into the basic properties of a semiconductor's optical response. In particular, one can solve the steady-state solutions of the SBEs to predict the optical absorption spectrum analytically with the so-called Elliott formula . In this form, one can verify that an unexcited semiconductor shows several excitonic absorption resonances well below the fundamental bandgap energy. Obviously, this situation cannot correspond to probing excitons because the initial many-body system does not contain electrons and holes to begin with. Furthermore, the probing can, in principle, be performed so gently that one essentially does not excite electron–hole pairs. This gedanken experiment illustrates nicely why one can detect excitonic resonances without having excitons in the system, all by virtue of the Coulomb coupling among the transition amplitudes. The SBEs are particularly useful when solving the light propagation through a semiconductor structure. In this case, one needs to solve the SBEs together with Maxwell's equations driven by the optical polarization. This self-consistent set is called the Maxwell–SBEs and is frequently applied to analyze present-day experiments and to simulate device designs. 
At this level, the SBEs provide an extremely versatile method that describes linear as well as nonlinear phenomena such as excitonic effects, propagation effects, semiconductor microcavity effects, four-wave-mixing , polaritons in semiconductor microcavities, gain spectroscopy , and so on. [ 4 ] [ 8 ] [ 9 ] One can also generalize the SBEs by including excitation with terahertz (THz) fields [ 5 ] that are typically resonant with intraband transitions. One can also quantize the light field and investigate quantum-optical effects that result. In this situation, the SBEs become coupled to the semiconductor luminescence equations .
https://en.wikipedia.org/wiki/Semiconductor_Bloch_equations
The Semiconductor Chip Protection Act of 1984 (or SCPA ) is an act of the US Congress that makes the layouts of integrated circuits legally protected upon registration, and hence illegal to copy without permission. It is an integrated circuit layout design protection law. Prior to 1984, it was not necessarily illegal to produce a competing chip with an identical layout. As the legislative history for the SCPA explained, patent and copyright protection for chip layouts, chip topographies , was largely unavailable. [ 1 ] This led to considerable complaint by American chip manufacturers—notably, Intel , which, along with the Semiconductor Industry Association (SIA), took the lead in seeking remedial legislation—against what they termed "chip piracy." During the hearings that led to enactment of the SCPA, chip industry representatives asserted that, for $100,000 and in 3 to 5 months, a pirate could copy a chip design that had cost its original manufacturer upwards of $1 million to design. [ 2 ] In 1984 the United States enacted the Semiconductor Chip Protection Act of 1984 (the SCPA ) to protect the topography of semiconductor chips. The SCPA is found in title 17, U.S. Code, sections 901-914 ( 17 U.S.C. §§ 901-914 ). Japan [ 3 ] and European Community (EC) countries soon followed suit [ 4 ] and enacted their own, similar laws protecting the topography of semiconductor chips. [ 5 ] Chip topographies are also protected by TRIPS , an international treaty. [ 6 ] Although the U.S. SCPA is codified in title 17 (copyrights), the SCPA is not a copyright or patent law. Rather, it is a sui generis law resembling a utility model law or Gebrauchsmuster . It has some aspects of copyright law, some aspects of patent law, and in some ways, it is completely different from either. From Brooktree , ¶ 23: The Semiconductor Chip Protection Act of 1984 was an innovative solution to this new problem of technology-based industry. 
While some copyright principles underlie the law, as do some attributes of patent law, the Act was uniquely adapted to semiconductor mask works, in order to achieve appropriate protection for original designs while meeting the competitive needs of the industry and serving the public interest. In general, the chip topography laws of other nations are also sui generis laws. Nevertheless, copyright and patent case law illuminate many aspects of the SCPA and its interpretation. Chip protection is acquired under the SCPA by filing with the US Copyright Office an application for "mask work" registration under the SCPA, together with a filing fee. The application must be accompanied by identifying material, such as pictorial representations of the IC layers so that in the event of infringement litigation, it can be determined what the registration covers. Protection continues for ten years from the date of registration. The SCPA repeatedly refers to "mask works." The term is a relic of the original form of the bill that became the SCPA and was passed in the Senate as an amendment to the Copyright Act. The term mask work is parallel to and consistent with the terminology of the 1976 Copyright Act, which introduced the concept of "literary works," "pictorial works," "audiovisual works," and the like and protected physical embodiments of such works, such as books, paintings, video game cassettes, and the like against unauthorized copying and distribution. The terminology became unnecessary when the House of Representatives insisted on the substitution of a sui generis bill, but the SCPA as enacted still continued its use. [ 7 ] The term "mask work" is not limited to actual masks used in chip manufacture but is defined broadly in the SCPA to include the topographic creation embodied in the masks and chips. Moreover, the SCPA protects any physical embodiment of a mask work. 
[ 8 ] The owner of mask work rights may pursue an alleged infringer ("chip pirate") by bringing an action for mask work infringement in federal district court. The remedies available correspond generally to those of copyright law and patent law. The SCPA does not protect functional aspects of chip designs, which are reserved to patent law . Although EPROM and other memory chip topographies are protectable under the SCPA, such protection does not extend to the information stored in chips, such as computer programs. Such information is protected, if at all, only by copyright law . The SCPA permits competitive emulation of a chip by means of reverse engineering . The ordinary test for illegal copying (mask work infringement) is the " substantial similarity " test of copyright law, [ 9 ] but when the defense of reverse engineering is involved and supported by probative evidence (usually, the so-called paper trail of design and development work), the similarity must be greater. [ 10 ] Then, the accused chip topography must be substantially identical (truly copied by rote, so-called slavish copying) rather than just substantially similar for the defendant to be liable for infringement. [ 11 ] Most world chip topography protection laws provide for a reverse engineering privilege. As the legislative history explained: In the semiconductor industry, innovation is indispensable; research breakthroughs are essential to the life and health of the industry. But research and innovation in the design of semiconductor chips are threatened by the inadequacies of existing legal protection against piracy and unauthorized copying. This problem, which is so critical to this essential sector of the American economy, is addressed by the Semiconductor Chip Protection Act of 1984.... [The bill] would prohibit "chip piracy"--the unauthorized copying and distribution of semiconductor chip products copied from the original creators of such works.
https://en.wikipedia.org/wiki/Semiconductor_Chip_Protection_Act_of_1984
Semiconductor device fabrication is the process used to manufacture semiconductor devices , typically integrated circuits (ICs) such as microprocessors , microcontrollers , and memories (such as RAM and flash memory ). It is a multiple-step photolithographic and physico-chemical process (with steps such as thermal oxidation , thin-film deposition, ion implantation, and etching) during which electronic circuits are gradually created on a wafer , typically made of pure single-crystal semiconducting material. Silicon is almost always used, but various compound semiconductors are used for specialized applications. This article focuses on the manufacture of integrated circuits; however, steps such as etching and photolithography can be used to manufacture other devices such as LCD and OLED displays. [ 1 ] The fabrication process is performed in highly specialized semiconductor fabrication plants , also called foundries or "fabs", [ 2 ] with the central part being the " clean room ". In more advanced semiconductor devices, such as modern 14 / 10 / 7 nm nodes, fabrication can take up to 15 weeks, with 11–13 weeks being the industry average. [ 3 ] Production in advanced fabrication facilities is completely automated, with automated material handling systems taking care of the transport of wafers from machine to machine. [ 4 ] A wafer often has several integrated circuits, which are called dies as they are pieces diced from a single wafer. Individual dies are separated from a finished wafer in a process called die singulation , also called wafer dicing. The dies can then undergo further assembly and packaging. [ 5 ] Within fabrication plants, the wafers are transported inside special sealed plastic boxes called FOUPs . [ 4 ] FOUPs in many fabs contain an internal nitrogen atmosphere [ 6 ] [ 7 ] which helps prevent copper from oxidizing on the wafers. Copper is used in modern semiconductors for wiring. 
[ 8 ] The insides of the processing equipment and FOUPs are kept cleaner than the surrounding air in the cleanroom. This internal atmosphere is known as a mini-environment and helps improve yield, which is the proportion of working devices on a wafer. This mini-environment is within an EFEM (equipment front end module) [ 9 ] which allows a machine to receive FOUPs, and introduces wafers from the FOUPs into the machine. Additionally, many machines also handle wafers in clean nitrogen or vacuum environments to reduce contamination and improve process control. [ 4 ] Fabrication plants need large amounts of liquid nitrogen to maintain the atmosphere inside production machinery and FOUPs, which are constantly purged with nitrogen. [ 6 ] [ 7 ] There can also be an air curtain or a mesh [ 10 ] between the FOUP and the EFEM which helps reduce the amount of humidity that enters the FOUP and improves yield. [ 11 ] [ 12 ] Companies that manufacture machines used in the industrial semiconductor fabrication process include ASML , Applied Materials , Tokyo Electron and Lam Research . Feature size is determined by the width of the smallest lines that can be patterned in a semiconductor fabrication process; this measurement is known as the linewidth. [ 13 ] [ 14 ] Patterning often refers to photolithography which allows a device design or pattern to be defined on the device during fabrication. [ 15 ] F 2 is used as a measurement of area for different parts of a semiconductor device, based on the feature size of a semiconductor manufacturing process. Many semiconductor devices are designed in sections called cells, and each cell represents a small part of the device, such as a memory cell to store data. Thus F 2 is used to measure the area taken up by these cells or sections. [ 16 ] A specific semiconductor process has specific rules on the minimum size (width or CD/Critical Dimension) and spacing for features on each layer of the chip. 
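The F² convention can be made concrete with a small calculation (the figures below are hypothetical, not tied to any real process): a memory cell quoted as 6F² on a process with feature size F = 20 nm occupies 6 × (20 nm)².

```python
def cell_area_nm2(feature_nm: float, size_f2: float) -> float:
    """Physical area (nm^2) of a cell whose size is quoted in F^2 units."""
    return size_f2 * feature_nm**2

# Hypothetical 6F^2 cell on an F = 20 nm process:
area = cell_area_nm2(20.0, 6.0)   # 6 * 400 = 2400 nm^2
```

Because the area scales with F², halving the feature size quarters the cell area, which is why cell sizes are quoted in F² units independently of any particular process generation.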
[ 17 ] Normally a new semiconductor process has smaller minimum sizes and tighter spacing. In some cases, this allows a simple die shrink of a currently produced chip design to reduce costs, improve performance, [ 17 ] and increase transistor density (number of transistors per unit area) without the expense of a new design. Early semiconductor processes had arbitrary names for generations (viz., HMOS I/II/III/IV and CHMOS III/III-E/IV/V). Later, each new generation process became known as a technology node [ 18 ] or process node , [ 19 ] [ 20 ] designated by the process's minimum feature size in nanometers (or historically micrometers ) of the process's transistor gate length, such as the " 90 nm process ". However, this has not been the case since 1994, [ 21 ] and the number of nanometers used to name process nodes (see the International Technology Roadmap for Semiconductors ) has become more of a marketing term that has no standardized relation with functional feature sizes or with transistor density (number of transistors per unit area). [ 22 ] Initially, the transistor gate length was smaller than that suggested by the process node name (e.g. the 350 nm node); however, this trend reversed in 2009. [ 21 ] Feature sizes may bear no relation to the nanometer figures used in marketing. For example, Intel's former 10 nm process actually has features (the tips of FinFET fins) with a width of 7 nm, so the Intel 10 nm process is similar in transistor density to TSMC 's 7 nm process . As another example, GlobalFoundries' 12 and 14 nm processes have similar feature sizes. [ 23 ] [ 24 ] [ 22 ] In 1955, Carl Frosch and Lincoln Derick, working at Bell Telephone Laboratories , accidentally grew a layer of silicon dioxide over a silicon wafer and observed surface passivation effects. 
[ 26 ] [ 27 ] By 1957, Frosch and Derick, using masking and predeposition, were able to manufacture silicon dioxide transistors, the first planar field-effect transistors, in which drain and source were adjacent at the same surface. [ 28 ] At Bell Labs, the importance of their discoveries was immediately realized. Memos describing the results of their work circulated around Bell Labs before being formally published in 1957. At Shockley Semiconductor , Shockley had circulated the preprint of their article in December 1956 to all his senior staff, including Jean Hoerni , [ 29 ] [ 30 ] [ 31 ] [ 32 ] who would later invent the planar process in 1959 while at Fairchild Semiconductor . [ 33 ] [ 34 ] In 1948, Bardeen patented an insulated-gate transistor (IGFET) with an inversion layer; Bardeen's concept forms the basis of MOSFET technology today. [ 35 ] An improved type of MOSFET technology, CMOS , was developed by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963. [ 36 ] [ 37 ] CMOS was commercialised by RCA in the late 1960s. [ 36 ] RCA commercially used CMOS for its 4000-series integrated circuits in 1968, starting with a 20 μm process before gradually scaling to a 10 μm process over the next several years. [ 38 ] Many early semiconductor device manufacturers developed and built their own equipment, such as ion implanters. [ 39 ] In 1963, Harold M. Manasevit was the first to document epitaxial growth of silicon on sapphire while working at the Autonetics division of North American Aviation (now Boeing ). In 1964, he published his findings with colleague William Simpson in the Journal of Applied Physics . [ 40 ] In 1965, C.W. Mueller and P.H. Robinson fabricated a MOSFET (metal–oxide–semiconductor field-effect transistor) using the silicon-on-sapphire process at RCA Laboratories . [ 41 ] Semiconductor device manufacturing has since spread from Texas and California in the 1960s to the rest of the world, including Asia , Europe , and the Middle East . 
Wafer size has grown over time, from 25 mm (1 inch) in 1960, to 50 mm (2 inches) in 1969, 100 mm (4 inches) in 1976, 125 mm (5 inches) in 1981, 150 mm (6 inches) in 1983 and 200 mm in 1992. [ 42 ] [ 43 ] In the era of 2-inch wafers, these were handled manually using tweezers and held manually for the time required for a given process. Tweezers were replaced by vacuum wands, as they generate fewer particles [ 44 ] which can contaminate the wafers. Wafer carriers or cassettes, which can hold several wafers at once, were developed to carry wafers between process steps, but wafers had to be individually removed from the carrier, processed, and returned to the carrier. To eliminate this time-consuming process, acid-resistant carriers were developed so that the entire cassette with wafers could be dipped into wet etching and wet cleaning tanks. When wafer sizes increased to 100 mm, the entire cassette would often not be dipped as uniformly, and the quality of the results across the wafer became hard to control. By the time 150 mm wafers arrived, the cassettes were no longer dipped and were only used as wafer carriers and holders to store wafers, and robotics became prevalent for handling wafers. With 200 mm wafers, manual handling of wafer cassettes became risky, as they are heavier. [ 45 ] In the 1970s and 1980s, several companies migrated their semiconductor manufacturing technology from bipolar to MOSFET technology. Semiconductor manufacturing equipment has been considered costly since 1978. [ 46 ] [ 47 ] [ 48 ] [ 49 ] In 1984, KLA developed the first automatic reticle and photomask inspection tool. [ 50 ] In 1985, KLA developed an automatic inspection tool for silicon wafers, which replaced manual microscope inspection. [ 51 ] In 1985, SGS (now STMicroelectronics ) invented BCD, also called BCDMOS , a semiconductor manufacturing process using bipolar, CMOS and DMOS devices. 
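The pull toward larger wafers can be quantified with a common first-order dies-per-wafer estimate, which subtracts a term for the partial dies lost at the wafer edge (the formula and the die size used here are illustrative; real counts also depend on scribe lines, edge exclusion, and die placement).

```python
import math

def dies_per_wafer(wafer_mm: float, die_area_mm2: float) -> int:
    """First-order estimate: usable wafer area minus an edge-loss correction."""
    return int(math.pi * (wafer_mm / 2.0)**2 / die_area_mm2
               - math.pi * wafer_mm / math.sqrt(2.0 * die_area_mm2))

# A hypothetical 100 mm^2 die on 200 mm vs 300 mm wafers:
on_200 = dies_per_wafer(200.0, 100.0)
on_300 = dies_per_wafer(300.0, 100.0)
# The 300 mm wafer yields more than the 2.25x raw area ratio suggests,
# because proportionally less area is lost at the edge.
```

This edge effect is one reason the economics of each wafer-size transition improved more than the simple area ratio would indicate.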
[ 52 ] Applied Materials developed the first practical multi-chamber, or cluster, wafer processing tool, the Precision 5000. [ 53 ] Physical vapor deposition was the primary technique for depositing materials onto wafers until the advent of chemical vapor deposition in the 1980s. [ 54 ] Equipment with diffusion pumps was replaced with equipment using turbomolecular pumps, as the latter do not use oil, which often contaminated wafers during processing in vacuum. [ 55 ] 200 mm diameter wafers were first used in 1990 and became the standard until the introduction of 300 mm diameter wafers in 2000. [ 56 ] [ 57 ] Bridge tools were used in the transition from 150 mm wafers to 200 mm wafers [ 58 ] and in the transition from 200 mm to 300 mm wafers. [ 59 ] [ 60 ] The semiconductor industry has adopted larger wafers to cope with the increased demand for chips, as larger wafers provide more surface area per wafer. [ 61 ] Over time, the industry shifted to 300 mm wafers, which brought along the adoption of FOUPs, [ 62 ] but many products that are not advanced are still produced on 200 mm wafers, such as analog ICs, RF chips, power ICs, BCDMOS and MEMS devices. [ 63 ] Some processes such as cleaning, [ 64 ] ion implantation, [ 65 ] [ 66 ] etching, [ 67 ] annealing [ 68 ] and oxidation [ 69 ] started to adopt single wafer processing instead of batch wafer processing in order to improve the reproducibility of results. [ 70 ] [ 71 ] A similar trend existed in MEMS manufacturing. [ 72 ] In 1998, Applied Materials introduced the Producer, a cluster tool that had chambers grouped in pairs for processing wafers, which shared common vacuum and supply lines but were otherwise isolated; this was revolutionary at the time, as it offered higher productivity than other cluster tools without sacrificing quality, due to the isolated chamber design. [ 73 ] [ 58 ] The semiconductor industry is a global business today. 
The leading semiconductor manufacturers typically have facilities all over the world. Samsung Electronics , the world's largest manufacturer of semiconductors, has facilities in South Korea and the US. Intel , the second-largest manufacturer, has facilities in Europe and Asia as well as the US. TSMC , the world's largest pure play foundry , has facilities in Taiwan, China, Singapore, and the US. Qualcomm and Broadcom are among the biggest fabless semiconductor companies, outsourcing their production to companies like TSMC. [ 74 ] They also have facilities spread across different countries. As the average utilization of semiconductor devices increased, durability became an issue, and manufacturers started to design their devices to last long enough for the market each device is designed for. This especially became a problem at the 10 nm node. [ 75 ] [ 76 ] Silicon on insulator (SOI) technology has been used in AMD 's 130 nm, 90 nm, 65 nm, 45 nm and 32 nm single, dual, quad, six and eight core processors made since 2001. [ 77 ] During the transition from 200 mm to 300 mm wafers in 2001, many bridge tools were used which could process both 200 mm and 300 mm wafers. [ 78 ] At the time, 18 companies could manufacture chips in the leading-edge 130 nm process. [ 79 ] In 2006, 450 mm wafers were expected to be adopted in 2012, and 675 mm wafers were expected to be used by 2021. [ 80 ] Since 2009, "node" has become a commercial name for marketing purposes that indicates new generations of process technologies, without any relation to gate length, metal pitch or gate pitch. [ 81 ] [ 82 ] [ 83 ] For example, GlobalFoundries ' 7 nm process was similar to Intel's 10 nm process , thus the conventional notion of a process node has become blurred. [ 84 ] Additionally, TSMC and Samsung's 10 nm processes are only slightly denser than Intel's 14 nm in transistor density. 
They are actually much closer to Intel's 14 nm process than they are to Intel's 10 nm process (e.g. Samsung's 10 nm process has the same fin pitch as Intel's 14 nm process: 42 nm). [ 85 ] [ 86 ] Intel has changed the name of its 10 nm process to position it as a 7 nm process. [ 87 ] As transistors become smaller, new effects start to influence design decisions, such as self-heating of the transistors, and other effects such as electromigration have become more evident since the 16 nm node. [ 88 ] [ 89 ] In 2011, Intel demonstrated Fin field-effect transistors (FinFETs), where the gate surrounds the channel on three sides, allowing for increased energy efficiency and lower gate delay—and thus greater performance—over planar transistors at the 22 nm node, because planar transistors, which have only one surface acting as a channel, had started to suffer from short-channel effects. [ 90 ] [ 91 ] [ 92 ] [ 93 ] [ 94 ] A startup called SuVolta created a technology called Deeply Depleted Channel (DDC) to compete with FinFET transistors, which uses very lightly doped planar transistors at the 65 nm node. [ 95 ] By 2018, a number of transistor architectures had been proposed for the eventual replacement of FinFET , most of which were based on the concept of GAAFET : [ 96 ] horizontal and vertical nanowires, horizontal nanosheet transistors [ 97 ] [ 98 ] (Samsung MBCFET, Intel Nanoribbon), vertical FET (VFET) and other vertical transistors, [ 99 ] [ 100 ] complementary FET (CFET), stacked FET, vertical TFETs, FinFETs with III-V semiconductor materials (III-V FinFET), [ 101 ] [ 102 ] several kinds of horizontal gate-all-around transistors such as nano-ring, hexagonal wire, square wire, and round wire gate-all-around transistors [ 103 ] and negative-capacitance FET (NC-FET), which uses drastically different materials. [ 104 ] FD-SOI was seen as a potential low cost alternative to FinFETs. 
[ 105 ] As of 2019, 14 nanometer and 10 nanometer chips are in mass production by Intel, UMC , TSMC, Samsung, Micron , SK Hynix , Toshiba Memory and GlobalFoundries, with 7 nanometer process chips in mass production by TSMC and Samsung, although their 7 nanometer node definition is similar to Intel's 10 nanometer process. The 5 nanometer process began being produced by Samsung in 2018. [ 106 ] As of 2019, the node with the highest transistor density is TSMC's 5 nanometer N5 node, [ 107 ] with a density of 171.3 million transistors per square millimeter. [ 108 ] In 2019, Samsung and TSMC announced plans to produce 3 nanometer nodes. GlobalFoundries has decided to stop the development of new nodes beyond 12 nanometers in order to save resources, as it has determined that setting up a new fab to handle sub-12 nm orders would be beyond the company's financial abilities. [ 109 ] From 2020 to 2022, there was a global chip shortage . During this shortage, caused by the COVID-19 pandemic, many semiconductor manufacturers banned employees from leaving company grounds. [ 110 ] Many countries granted subsidies to semiconductor companies for building new fabrication plants or fabs. Many companies were affected by counterfeit chips. [ 111 ] Semiconductors have become vital to the world economy and the national security of some countries. [ 112 ] [ 113 ] [ 114 ] The US has asked TSMC not to produce semiconductors for Huawei, a Chinese company. [ 115 ] CFET transistors, which stack NMOS and PMOS transistors on top of each other, were explored. Two approaches were evaluated for constructing these transistors: a monolithic approach, which built both types of transistors in one process, and a sequential approach, which built the two types of transistors separately and then stacked them. 
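Transistor-density figures like the 171.3 million transistors per square millimeter quoted above can be translated into an average per-transistor footprint; this is a back-of-the-envelope conversion, not a statement about any specific cell design.

```python
def avg_transistor_footprint_nm2(mtr_per_mm2: float) -> float:
    """Average area per transistor in nm^2, given a density in millions of
    transistors per square millimeter (1 mm^2 = 1e12 nm^2)."""
    return 1e12 / (mtr_per_mm2 * 1e6)

footprint = avg_transistor_footprint_nm2(171.3)   # ~5838 nm^2 per transistor
```

Such conversions make density claims comparable across vendors even when the marketing node names are not.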
[ 116 ] This is a list of processing techniques that are employed numerous times throughout the construction of a modern electronic device. This list does not necessarily imply a specific order, nor that all techniques are used during manufacture: in practice, the order and the choice of techniques are often specific to process offerings by foundries, or to an integrated device manufacturer (IDM) for its own products, and a semiconductor device might not need all techniques. Equipment for carrying out these processes is made by a handful of companies . All equipment needs to be tested before a semiconductor fabrication plant is started. [ 117 ] These processes are done after integrated circuit design . A semiconductor fab operates 24/7 [ 118 ] and many fabs use large amounts of water, primarily for rinsing the chips. [ 119 ] Additionally, steps such as Wright etch may be carried out. When feature widths were far greater than about 10 micrometres , semiconductor purity was not as big an issue as it is today in device manufacturing. In the 1960s, workers could work on semiconductor devices in street clothing. [ 140 ] As devices become more integrated, cleanrooms must become even cleaner. Today, fabrication plants are pressurized with filtered air to remove even the smallest particles, which could come to rest on the wafers and contribute to defects. The ceilings of semiconductor cleanrooms have fan filter units (FFUs) at regular intervals to constantly replace and filter the air in the cleanroom; semiconductor capital equipment may also have its own FFUs to clean air in the equipment's EFEM, which allows the equipment to receive wafers in FOUPs. The FFUs, combined with raised floors with grills, help ensure a laminar air flow, to ensure that particles are immediately brought down to the floor and do not stay suspended in the air due to turbulence. 
The workers in a semiconductor fabrication facility are required to wear cleanroom suits to protect the devices from contamination by humans. [ 141 ] To increase yield, FOUPs and semiconductor capital equipment may have a mini environment with ISO class 1 level of dust, and FOUPs can have an even cleaner micro environment. [ 12 ] [ 9 ] FOUPs and SMIF pods isolate the wafers from the air in the cleanroom, increasing yield because they reduce the number of defects caused by dust particles. Also, fabs have as few people as possible in the cleanroom to make maintaining the cleanroom environment easier, since people, even when wearing cleanroom suits, shed large amounts of particles, especially when walking. [ 142 ] [ 141 ] [ 143 ] A typical wafer is made out of extremely pure silicon that is grown into mono-crystalline cylindrical ingots ( boules ) up to 300 mm (slightly less than 12 inches) in diameter using the Czochralski process . These ingots are then sliced into wafers about 0.75 mm thick and polished to obtain a very regular and flat surface. During the production process wafers are often grouped into lots, which are represented by a FOUP, SMIF or a wafer cassette, which are wafer carriers. FOUPs and SMIFs can be transported in the fab between machines and equipment with an automated OHT (Overhead Hoist Transport) AMHS (Automated Material Handling System). [ 62 ] Besides SMIFs and FOUPs, wafer cassettes can be placed in a wafer box or a wafer carrying box. [ 144 ] In semiconductor device fabrication, the various processing steps fall into four general categories: deposition, removal, patterning, and modification of electrical properties. Modification of electrical properties now also extends to the reduction of a material's dielectric constant in low-κ insulators via exposure to ultraviolet light in UV processing (UVP). 
Modification is frequently achieved by oxidation , which can be carried out to create semiconductor-insulator junctions, such as in the local oxidation of silicon ( LOCOS ) to fabricate metal–oxide–semiconductor field-effect transistors . Modern chips can have eleven or more metal levels, produced in 300 or more sequenced processing steps. A recipe in semiconductor manufacturing is a list of conditions under which a wafer will be processed by a particular machine in a processing step during manufacturing. [ 161 ] Process variability is a challenge in semiconductor processing, in which wafers are not processed evenly, or the quality or effectiveness of processes carried out on a wafer is not even across the wafer surface. [ 162 ] Wafer processing is separated into FEOL and BEOL stages. FEOL processing refers to the formation of the transistors directly in the silicon . The raw wafer is engineered by the growth of an ultrapure, virtually defect-free silicon layer through epitaxy . [ 163 ] [ 164 ] In the most advanced logic devices , prior to the silicon epitaxy step, additional steps are performed to improve the performance of the transistors to be built. One method involves introducing a straining step wherein a silicon variant such as silicon-germanium (SiGe) is deposited. Once the epitaxial silicon is deposited, the crystal lattice becomes stretched somewhat, resulting in improved electronic mobility. Another method, called silicon on insulator technology, involves the insertion of an insulating layer between the raw silicon wafer and the thin layer of subsequent silicon epitaxy. This method results in the creation of transistors with reduced parasitic effects . Semiconductor equipment may have several chambers which process wafers in processes such as deposition and etching. Many pieces of equipment handle wafers between these chambers in an internal nitrogen or vacuum environment to improve process control. 
[ 4 ] Wet benches with tanks containing chemical solutions were historically used for cleaning and etching wafers. [ 165 ] At the 90nm node, transistor channels made with strain engineering were introduced to improve drive current in PMOS transistors by adding silicon-germanium regions to the transistor. The same was done in NMOS transistors at the 20nm node. [ 128 ] In 2007, HKMG (high-k/metal gate) transistors were introduced by Intel at the 45nm node; these replaced polysilicon gates, which in turn had replaced metal gate (aluminum gate) [ 166 ] technology in the 1970s. [ 167 ] High-k dielectrics such as hafnium oxide (HfO₂) replaced silicon oxynitride (SiON) in order to prevent large amounts of leakage current in the transistor while allowing for continued scaling or shrinking of the transistors. However, HfO₂ is not compatible with polysilicon gates, which necessitates the use of a metal gate. Two approaches were used in production: gate-first and gate-last. Gate-first consists of depositing the high-k dielectric and then the gate metal, such as tantalum nitride, whose work function depends on whether the transistor is NMOS or PMOS, followed by polysilicon deposition, gate line patterning, source and drain ion implantation, dopant anneal, and silicidation of the polysilicon and the source and drain. [ 168 ] [ 169 ] In DRAM memories this technology was first adopted in 2015. [ 170 ] Gate-last consists of first depositing the high-k dielectric, creating dummy gates, manufacturing sources and drains by ion implantation and dopant annealing, depositing an "interlevel dielectric (ILD)" and then polishing, and removing the dummy gates to replace them with a metal whose work function depends on whether the transistor is NMOS or PMOS, thus creating the metal gate. A third process, full silicidation (FUSI), [ 171 ] was not pursued due to manufacturing problems. [ 172 ] Gate-first became dominant at the 22nm/20nm node. 
[ 173 ] [ 174 ] HKMG has been extended from planar transistors for use in FinFET and nanosheet transistors. [ 175 ] Hafnium silicon oxynitride can also be used instead of Hafnium oxide. [ 176 ] [ 177 ] [ 4 ] [ 178 ] [ 179 ] Since the 16nm/14nm node, Atomic layer etching (ALE) is increasingly used for etching as it offers higher precision than other etching methods. In production, plasma ALE is commonly used, which removes materials unidirectionally, creating structures with vertical walls. Thermal ALE can also be used to remove materials isotropically, in all directions at the same time but without the capability to create vertical walls. Plasma ALE was initially adopted for etching contacts in transistors, and since the 7nm node it is also used to create transistor structures by etching them. [ 127 ] Front-end surface engineering is followed by growth of the gate dielectric (traditionally silicon dioxide ), patterning of the gate, patterning of the source and drain regions, and subsequent implantation or diffusion of dopants to obtain the desired complementary electrical properties. In dynamic random-access memory (DRAM) devices, storage capacitors are also fabricated at this time, typically stacked above the access transistor (the now defunct DRAM manufacturer Qimonda implemented these capacitors with trenches etched deep into the silicon surface). Once the various semiconductor devices have been created , they must be interconnected to form the desired electrical circuits. This occurs in a series of wafer processing steps collectively referred to as BEOL (not to be confused with back end of chip fabrication, which refers to the packaging and testing stages). BEOL processing involves creating metal interconnecting wires that are isolated by dielectric layers. 
The insulating material has traditionally been a form of SiO₂ or a silicate glass, but recently new low dielectric constant materials, also called low-κ dielectrics, are being used (such as silicon oxycarbide), typically providing dielectric constants around 2.7 (compared to 3.82 for SiO₂), although materials with constants as low as 2.2 are being offered to chipmakers. BEOL processing has been used since 1995 at the 350nm and 250nm nodes (0.35 and 0.25 micron nodes), at the same time chemical mechanical polishing began to be employed. At the time, 2 metal layers for interconnect, also called metallization, [ 180 ] was state-of-the-art. [ 181 ] Since the 22nm node, some manufacturers have added a new process called middle-of-line (MOL), which connects the transistors to the rest of the interconnect made in the BEOL process. The MOL is often based on tungsten and has upper and lower layers: the lower layer connects the junctions of the transistors, and the upper layer is a tungsten plug that connects the transistors to the interconnect. At the 10nm node, Intel introduced contact-over-active-gate (COAG), which, instead of placing the contact for connecting the transistor beside the gate of the transistor, places it directly over the gate to improve transistor density. [ 182 ] Historically, the metal wires have been composed of aluminum . In this approach to wiring (often called subtractive aluminum ), blanket films of aluminum are deposited first, patterned, and then etched, leaving isolated wires. Dielectric material is then deposited over the exposed wires. The various metal layers are interconnected by etching holes (called "vias") in the insulating material and then depositing tungsten in them with a CVD technique using tungsten hexafluoride ; this approach can still be (and often is) used in the fabrication of many memory chips such as dynamic random-access memory (DRAM), because the number of interconnect levels can be small (no more than four). 
The aluminum was sometimes alloyed with copper for preventing recrystallization. Gold was also used in interconnects in early chips. [ 183 ] More recently, as the number of interconnect levels for logic has substantially increased due to the large number of transistors that are now interconnected in a modern microprocessor , the timing delay in the wiring has become so significant as to prompt a change in wiring material (from aluminum to copper interconnect layer) [ 184 ] alongside a change in dielectric material in the interconnect (from silicon dioxides to newer low-κ insulators). [ 185 ] [ 186 ] This performance enhancement also comes at a reduced cost via damascene processing, which eliminates processing steps. As the number of interconnect levels increases, planarization of the previous layers is required to ensure a flat surface prior to subsequent lithography. Without it, the levels would become increasingly crooked, extending outside the depth of focus of available lithography, and thus interfering with the ability to pattern. CMP ( chemical-mechanical planarization ) is the primary processing method to achieve such planarization, although dry etch back is still sometimes employed when the number of interconnect levels is no more than three. Copper interconnects use an electrically conductive barrier layer to prevent the copper from diffusing into ("poisoning") its surroundings, often made of tantalum nitride. [ 187 ] [ 182 ] In 1997, IBM was the first to adopt copper interconnects. [ 188 ] In 2014, Applied Materials proposed the use of cobalt in interconnects at the 22nm node, used for encapsulating copper interconnects in cobalt to prevent electromigration, replacing tantalum nitride since it needs to be thicker than cobalt in this application. [ 182 ] [ 189 ] The highly serialized nature of wafer processing has increased the demand for metrology in between the various processing steps. 
For example, thin film metrology based on ellipsometry or reflectometry is used to tightly control the thickness of gate oxide, as well as the thickness, refractive index, and extinction coefficient of photoresist and other coatings. [ 190 ] Wafer metrology equipment/tools, or wafer inspection tools, are used to verify that the wafers have not been damaged by previous processing steps up until testing; if too many dies on one wafer have failed, the entire wafer is scrapped to avoid the costs of further processing. Virtual metrology has been used to predict wafer properties based on statistical methods without performing the physical measurement itself. [ 2 ] Once the front-end process has been completed, the semiconductor devices or chips are subjected to a variety of electrical tests to determine if they function properly. The percentage of devices on the wafer found to perform properly is referred to as the yield . Manufacturers are typically secretive about their yields, [ 191 ] but yield can be as low as 30%, meaning that only 30% of the chips on the wafer work as intended. Process variation is one among many reasons for low yield. Testing is carried out to prevent faulty chips from being assembled into relatively expensive packages. The yield is often but not necessarily related to device (die or chip) size. As an example, in December 2019, TSMC announced an average yield of ~80%, with a peak yield per wafer of >90%, for their 5nm test chips with a die size of 17.92 mm². The yield went down to 32% with an increase in die size to 100 mm². [ 192 ] The number of killer defects on a wafer, regardless of die size, can be noted as the defect density (or D₀ ) of the wafer per unit area, usually cm². The fab tests the chips on the wafer with an electronic tester that presses tiny probes against the chip. The machine marks each bad chip with a drop of dye. 
Currently, electronic dye marking is possible if wafer test data (results) are logged into a central computer database and chips are "binned" (i.e. sorted into virtual bins) according to predetermined test limits such as maximum operating frequencies/clocks, number of working (fully functional) cores per chip, etc. The resulting binning data can be graphed, or logged, on a wafer map to trace manufacturing defects and mark bad chips. This map can also be used during wafer assembly and packaging. Binning allows chips that would otherwise be rejected to be reused in lower-tier products, as is the case with GPUs and CPUs, increasing device yield, especially since very few chips are fully functional (have all cores functioning correctly, for example). eFUSEs may be used to disconnect parts of chips such as cores, either because they did not work as intended during binning, or as part of market segmentation (using the same chip for low, mid and high-end tiers). Chips may have spare parts to allow the chip to fully pass testing even if it has several non-working parts. Chips are also tested again after packaging, as the bond wires may be missing, or analog performance may be altered by the package. This is referred to as the "final test". Chips may also be imaged using x-rays. Usually, the fab charges for testing time, with prices on the order of cents per second. Testing times vary from a few milliseconds to a couple of seconds, and the test software is optimized for reduced testing time. Multiple chip (multi-site) testing is also possible because many testers have the resources to perform most or all of the tests in parallel and on several chips at once. Chips are often designed with "testability features" such as scan chains or a " built-in self-test " to speed testing and reduce testing costs. 
In certain designs that use specialized analog fab processes, wafers are also laser-trimmed during testing, in order to achieve tightly distributed resistance values as specified by the design. Good designs try to test and statistically manage corners (extremes of silicon behavior caused by a high operating temperature combined with the extremes of fab processing steps). Most designs cope with at least 64 corners. Device yield or die yield is the number of working chips or dies on a wafer, given in percentage since the number of chips on a wafer (Die per wafer, DPW) can vary depending on the chips' size and the wafer's diameter. Yield degradation is a reduction in yield, which historically was mainly caused by dust particles, however since the 1990s, yield degradation is mainly caused by process variation, the process itself and by the tools used in chip manufacturing, although dust still remains a problem in many older fabs. Dust particles have an increasing effect on yield as feature sizes are shrunk with newer processes. Automation and the use of mini environments inside of production equipment, FOUPs and SMIFs have enabled a reduction in defects caused by dust particles. Device yield must be kept high to reduce the selling price of the working chips since working chips have to pay for those chips that failed, and to reduce the cost of wafer processing. Yield can also be affected by the design and operation of the fab. Tight control over contaminants and the production process are necessary to increase yield. Contaminants may be chemical contaminants or be dust particles. "Killer defects" are those caused by dust particles that cause complete failure of the device (such as a transistor). There are also harmless defects. A particle needs to be 1/5 the size of a feature to cause a killer defect. So if a feature is 100 nm across, a particle only needs to be 20 nm across to cause a killer defect. Electrostatic electricity can also affect yield adversely. 
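The dependence of die-per-wafer (DPW) on die size and wafer diameter mentioned above is often estimated with a first-order formula that corrects the wafer-to-die area ratio for dies lost along the circular edge. The exact form varies by source, so the version below is an illustrative approximation rather than an industry standard, and it ignores scribe lines, edge exclusion, and die aspect ratio:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order die-per-wafer estimate.

    The first term is the wafer-area / die-area ratio; the second term
    subtracts partial dies lost along the circular wafer edge.
    Illustrative only -- real fabs use layout-aware calculators.
    """
    d, s = wafer_diameter_mm, die_area_mm2
    return int(d * math.pi * (d / (4 * s) - 1 / math.sqrt(2 * s)))

# A 300 mm wafer fits roughly twice as many 50 mm^2 dies as 100 mm^2 dies,
# which is why wafers priced as a whole favour small dies.
print(dies_per_wafer(300, 100))
print(dies_per_wafer(300, 50))
```

Halving the die area more than doubles the gross die count here, because the edge-loss term also shrinks with smaller dies.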
Chemical contaminants or impurities include heavy metals such as iron, copper, nickel, zinc, chromium, gold, mercury and silver, alkali metals such as sodium, potassium and lithium, and elements such as aluminum, magnesium, calcium, chlorine, sulfur, carbon, and fluorine. It is important that these elements do not remain in contact with the silicon, as they could reduce yield. Chemical mixtures may be used to remove these elements from the silicon; different mixtures are effective against different elements. Several models are used to estimate yield. They are Murphy's model, Poisson's model, the binomial model, Moore's model and Seeds' model. There is no universal model; a model has to be chosen based on actual yield distribution (the location of defective chips). For example, Murphy's model assumes that yield loss occurs more at the edges of the wafer (non-working chips are concentrated on the edges of the wafer), Poisson's model assumes that defective dies are spread relatively evenly across the wafer, and Seeds' model assumes that defective dies are clustered together. [ 193 ] Smaller dies cost less to produce (since more fit on a wafer, and wafers are processed and priced as a whole), and can help achieve higher yields since smaller dies have a lower chance of having a defect, due to their lower surface area on the wafer. However, smaller dies require smaller features to achieve the same functions of larger dies or surpass them, and smaller features require reduced process variation and increased purity (reduced contamination) to maintain high yields. Metrology tools are used to inspect the wafers during the production process and predict yield, so wafers predicted to have too many defects may be scrapped to save on processing costs. 
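Of the models above, the Poisson model has the simplest closed form: with defect density D₀ (defects per unit area) and die area A, the fraction of defect-free dies is Y = exp(−D₀·A). The sketch below uses it to illustrate how yield falls with die size; the defect density is hypothetical, chosen only so the numbers resemble the yields quoted earlier in this section, not a published figure:

```python
import math

def poisson_yield(die_area_cm2: float, defect_density_per_cm2: float) -> float:
    """Poisson yield model: Y = exp(-D0 * A), the probability that a die
    of area A lands zero killer defects at defect density D0."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

d0 = 1.25  # hypothetical defect density, defects per cm^2
print(f"17.92 mm^2 die: {poisson_yield(0.1792, d0):.0%}")  # small die, high yield
print(f"100 mm^2 die:   {poisson_yield(1.0, d0):.0%}")     # large die, much lower yield
```

Because the die area sits in the exponent, yield decays multiplicatively as dies grow, which matches the qualitative trend described above.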
[ 191 ] Once tested, a wafer is typically reduced in thickness in a process also known as "backlap", [ 120 ] : 6 "backfinish", "wafer backgrind" or "wafer thinning" [ 194 ] before the wafer is scored and then broken into individual dies, a process known as wafer dicing . Only the good, unmarked chips are packaged. After the dies are tested for functionality and binned, they are packaged. Plastic or ceramic packaging involves mounting the die, connecting the die/bond pads to the pins on the package, and sealing the die. Tiny bondwires are used to connect the pads to the pins. In the 'old days' (1970s), wires were attached by hand, but now specialized machines perform the task. Traditionally, these wires have been composed of gold, leading to a lead frame (pronounced "leed frame") of solder -plated copper; lead is poisonous, so lead-free "lead frames" are now mandated by RoHS . Traditionally the bond pads are located on the edges of the die, however, Flip-chip packaging can be used to place bond pads across the entire surface of the die. Chip scale package (CSP) is another packaging technology. A plastic dual in-line package , like most packages, is many times larger than the actual die hidden inside, whereas CSP chips are nearly the size of the die; a CSP can be constructed for each die before the wafer is diced. The packaged chips are retested to ensure that they were not damaged during packaging and that the die-to-pin interconnect operation was performed correctly. A laser then etches the chip's name and numbers on the package. The steps involving testing and packaging of dies, followed by final testing of finished, packaged chips, are called the back end, [ 120 ] post-fab, [ 195 ] ATMP (Assembly, Test, Marking, and Packaging) [ 196 ] or ATP (Assembly, Test and Packaging) of semiconductor manufacturing, and may be carried out by OSAT (OutSourced Assembly and Test) companies which are separate from semiconductor foundries. 
A foundry is a company or fab performing manufacturing processes such as photolithography and etching that are part of the front end of semiconductor manufacturing. [ 197 ] Many toxic materials are used in the fabrication process. [ 198 ] These include: It is vital that workers not be directly exposed to these dangerous substances. The high degree of automation common in the IC fabrication industry helps to reduce the risks of exposure. Most fabrication facilities employ exhaust management systems, such as wet scrubbers, combustors, heated absorber cartridges, etc., [ 203 ] [ 204 ] [ 205 ] to control the risk to workers and to the environment.
https://en.wikipedia.org/wiki/Semiconductor_device_fabrication
Semidiones are radical anions analogous to semiquinones , obtained from the one-electron reduction of non- quinone conjugated dicarbonyls . [ 1 ] The simplest possible semidiones are derived from 1,2-dicarbonyls and have structure R−C(−O − )=C(−O • )−R' ↔ R−C(−O • )=C(−O − )−R' , making them the second member of a homologous series starting with ketyl radicals and continuing with semitriones . [ 1 ] They are often transient intermediates , appearing in reactions such as the final reduction step of the acyloin condensation . [ 2 ] Benzil semidione ( Ph−C(−O − )=C(−O • )−Ph ↔ Ph−C(−O • )=C(−O − )−Ph ), synthesized by Auguste Laurent in 1836, is believed to have been the first radical ion ever characterized. [ 3 ] : 425 [ 4 ] Semidehydroascorbate is a relatively stable semitrione produced by hydrogen abstraction from ascorbate ( Vitamin C ). [ 5 ]
https://en.wikipedia.org/wiki/Semidione
Semigroup Forum (print ISSN 0037-1912 , electronic ISSN 1432-2137 ) is a mathematics research journal published by Springer . The journal serves as a platform for the speedy and efficient transmission of information on current research in semigroup theory . Coverage in the journal includes: Semigroups of operators were initially considered off-topic, but began being included in the journal in 1985. [ 1 ] Semigroup Forum features survey and research articles. It also contains research announcements, which describe new results, mostly without proofs, of full length papers appearing elsewhere, as well as short notes, which detail such information as new proofs, significant generalizations of known facts, comments on unsolved problems, and historical remarks. In addition, the journal contains research problems; announcements of conferences, seminars, and symposia on semigroup theory; abstracts and bibliographical items; as well as listings of books, papers, and lecture notes of interest. The journal published its first issue in 1970. [ 1 ] It is indexed in Science Citation Index Expanded, Journal Citation Reports/Science Edition, SCOPUS, and Zentralblatt Math. [ 2 ] " Semigroup Forum was a pioneering journal ... one of the early instances of a highly specialized journal, of which there are now many. Indeed, it was during the 1960s that many of the current specialised journals began to appear, probably in connection with research specialization ... among many other examples, the journals Topology , Journal of Algebra , Journal of Combinatorial Theory , and Journal of Number Theory were launched in 1962, 1964, 1966 and 1969 respectively. Semigroup Forum simply followed in this trend, with academic publishers realizing that there was a market for such narrowly focused journals." [ 1 ] : 330 This journal has been called "in many ways a point of crystallization for semigroup theory and its community", [ 3 ] and "an indicator of a field which is mathematically active". [ 4 ]
https://en.wikipedia.org/wiki/Semigroup_Forum
In linear algebra , particularly projective geometry , a semilinear map between vector spaces V and W over a field K is a function that is a linear map "up to a twist", hence semi -linear, where "twist" means " field automorphism of K ". Explicitly, it is a function T : V → W that is: Where the domain and codomain are the same space (i.e. T : V → V ), it may be termed a semilinear transformation . The invertible semilinear transforms of a given vector space V (for all choices of field automorphism) form a group, called the general semilinear group and denoted Γ L ⁡ ( V ) , {\displaystyle \operatorname {\Gamma L} (V),} by analogy with and extending the general linear group . The special case where the field is the complex numbers C {\displaystyle \mathbb {C} } and the automorphism is complex conjugation , a semilinear map is called an antilinear map . Similar notation (replacing Latin characters with Greek ones) is used for semilinear analogs of more restricted linear transformations; formally, the semidirect product of a linear group with the Galois group of field automorphisms. For example, PΣU is used for the semilinear analogs of the projective special unitary group PSU. Note, however, that it was only recently noticed that these generalized semilinear groups are not well-defined, as pointed out in ( Bray, Holt & Roney-Dougal 2009 ) – isomorphic classical groups G and H (subgroups of SL) may have non-isomorphic semilinear extensions. At the level of semidirect products, this corresponds to different actions of the Galois group on a given abstract group, a semidirect product depending on two groups and an action. If the extension is non-unique, there are exactly two semilinear extensions; for example, symplectic groups have a unique semilinear extension, while SU( n , q ) has two extensions if n is even and q is odd, and likewise for PSU. 
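The defining properties are additivity together with the twisted homogeneity f(λx) = σ(λ)f(x) for a field automorphism σ. These can be checked numerically for the antilinear case mentioned above (K = ℂ, σ = complex conjugation); the map below, built from a fixed random matrix, is a hypothetical example constructed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

def T(v):
    """Antilinear (conjugate-linear) map on C^3: T(v) = A @ conj(v)."""
    return A @ np.conj(v)

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)
lam = 2.0 - 3.0j

# Additivity: T(x + y) = T(x) + T(y)
assert np.allclose(T(x + y), T(x) + T(y))
# Semilinearity with sigma = conjugation: T(lam * x) = conj(lam) * T(x)
assert np.allclose(T(lam * x), np.conj(lam) * T(x))
# ...so T is not linear: T(lam * x) != lam * T(x) in general
assert not np.allclose(T(lam * x), lam * T(x))
```

Composing two such conjugate-linear maps gives an ordinary linear map, mirroring the fact that the automorphisms compose inside ΓL(V).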
A map f : V → W for vector spaces V and W over fields K and L respectively is σ -semilinear, or simply semilinear , if there exists a field homomorphism σ : K → L such that for all x , y in V and λ in K it holds that A given embedding σ of a field K in L allows us to identify K with a subfield of L , making a σ -semilinear map a K - linear map under this identification. However, a map that is τ -semilinear for a distinct embedding τ ≠ σ will not be K -linear with respect to the original identification σ , unless f is identically zero. More generally, a map ψ : M → N between a right R - module M and a left S -module N is σ - semilinear if there exists a ring antihomomorphism σ : R → S such that for all x , y in M and λ in R it holds that The term semilinear applies for any combination of left and right modules with suitable adjustment of the above expressions, with σ being a homomorphism as needed. [ 1 ] [ 2 ] The pair ( ψ , σ ) is referred to as a dimorphism . [ 3 ] Let σ : R → S {\displaystyle \sigma :R\to S} be a ring isomorphism, M {\displaystyle M} a right R {\displaystyle R} -module and N {\displaystyle N} a right S {\displaystyle S} -module, and ψ : M → N {\displaystyle \psi :M\to N} a σ {\displaystyle \sigma } -semilinear map. Define the transpose of ψ {\displaystyle \psi } as the mapping t ψ : N ∗ → M ∗ {\displaystyle {}^{t}\psi :N^{*}\to M^{*}} that satisfies [ 4 ] ⟨ y , ψ ( x ) ⟩ = σ ( ⟨ t ψ ( y ) , x ⟩ ) for all y ∈ N ∗ , and all x ∈ M . {\displaystyle \langle y,\psi (x)\rangle =\sigma \left(\left\langle {}^{\text{t}}\psi (y),x\right\rangle \right)\quad {\text{ for all }}y\in N^{*},{\text{ and all }}x\in M.} This is a σ − 1 {\displaystyle \sigma ^{-1}} -semilinear map. Let σ : R → S {\displaystyle \sigma :R\to S} be a ring isomorphism, M {\displaystyle M} a right R {\displaystyle R} -module and N {\displaystyle N} a right S {\displaystyle S} -module, and ψ : M → N {\displaystyle \psi :M\to N} a σ {\displaystyle \sigma } -semilinear map. 
The mapping M → R : x ↦ σ − 1 ( ⟨ y , ψ ( x ) ⟩ ) , y ∈ N ∗ {\displaystyle M\to R:x\mapsto \sigma ^{-1}(\langle y,\psi (x)\rangle ),\quad y\in N^{*}} defines an R {\displaystyle R} -linear form. [ 5 ] Given a vector space V , the set of all invertible semilinear transformations V → V (over all field automorphisms) is the group ΓL( V ). Given a vector space V over K , ΓL( V ) decomposes as the semidirect product where Aut( K ) is the group of automorphisms of K . Similarly, semilinear transforms of other linear groups can be defined as the semidirect product with the automorphism group, or more intrinsically as the group of semilinear maps of a vector space preserving some properties. We identify Aut( K ) with a subgroup of ΓL( V ) by fixing a basis B for V and defining the semilinear maps: for any σ ∈ Aut ⁡ ( K ) {\displaystyle \sigma \in \operatorname {Aut} (K)} . We shall denote this subgroup by Aut( K ) B . We also see that these complements to GL( V ) in ΓL( V ) are acted on regularly by GL( V ), as they correspond to a change of basis . Every linear map is semilinear, thus GL ⁡ ( V ) ≤ Γ L ⁡ ( V ) {\displaystyle \operatorname {GL} (V)\leq \operatorname {\Gamma L} (V)} . Fix a basis B of V . Now, given any semilinear map f with respect to a field automorphism σ ∈ Aut( K ) , define g : V → V by As f ( B ) is also a basis of V , it follows that g is simply a basis exchange of V and so linear and invertible: g ∈ GL( V ) . Set h := f g − 1 {\displaystyle h:=fg^{-1}} . For every v = ∑ b ∈ B ℓ b b {\displaystyle v=\sum _{b\in B}\ell _{b}b} in V , thus h is in the Aut( K ) subgroup relative to the fixed basis B. This factorization is unique to the fixed basis B . Furthermore, GL( V ) is normalized by the action of Aut( K ) B , so ΓL( V ) = GL( V ) ⋊ Aut( K ) . The Γ L ⁡ ( V ) {\displaystyle \operatorname {\Gamma L} (V)} groups extend the typical classical groups in GL( V ). The importance in considering such maps follows from the consideration of projective geometry . 
The induced action of Γ L ⁡ ( V ) {\displaystyle \operatorname {\Gamma L} (V)} on the associated projective space P( V ) yields the projective semilinear group , denoted P Γ L ⁡ ( V ) {\displaystyle \operatorname {P\Gamma L} (V)} , extending the projective linear group , PGL( V ). The projective geometry of a vector space V , denoted PG( V ), is the lattice of all subspaces of V . Although the typical semilinear map is not a linear map, it does follow that every semilinear map f : V → W {\displaystyle f\colon V\to W} induces an order-preserving map f : PG ⁡ ( V ) → PG ⁡ ( W ) {\displaystyle f\colon \operatorname {PG} (V)\to \operatorname {PG} (W)} . That is, every semilinear map induces a projectivity . The converse of this observation (except for the projective line) is the fundamental theorem of projective geometry . Thus semilinear maps are useful because they define the automorphism group of the projective geometry of a vector space. The group PΓL(3,4) can be used to construct the Mathieu group M 24 , which is one of the sporadic simple groups ; PΓL(3,4) is a maximal subgroup of M 24 , and there are many ways to extend it to the full Mathieu group. This article incorporates material from semilinear transformation on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License .
https://en.wikipedia.org/wiki/Semilinear_map
Semi-linear response theory (SLRT) is an extension of linear response theory (LRT) for mesoscopic circumstances: LRT applies if the driven transitions are much weaker/slower than the environmental relaxation/dephasing effect, while SLRT assumes the opposite conditions. SLRT uses a resistor network analogy (see illustration) in order to calculate the rate of energy absorption: the driving induces transitions between energy levels, and connected sequences of transitions are essential in order to have a non-vanishing result, as in the theory of percolation. The original motivation for introducing SLRT was the study of mesoscopic conductance. [ 1 ] [ 2 ] [ 3 ] [ 4 ] The term SLRT was coined in [ 5 ] , where it was applied to the calculation of energy absorption by metallic grains. The theory was later applied to analysing the rate of heating of atoms in vibrating traps . [ 6 ] Consider a system that is driven by a source f ( t ) {\displaystyle f(t)} that has a power spectrum S ~ ( ω ) {\displaystyle {\tilde {S}}(\omega )} . The latter is defined as the Fourier transform of ⟨ f ( t ) f ( 0 ) ⟩ {\displaystyle \langle f(t)f(0)\rangle } . In linear response theory (LRT) the driving source induces a steady state which is only slightly different from the equilibrium state. In such circumstances the response ( G {\displaystyle G} ) is a linear functional of the power spectrum: In the traditional LRT context G {\displaystyle G} represents the rate of heating, and η ( ω ) {\displaystyle \eta (\omega )} can be defined as the absorption coefficient. Whenever such a relation applies, the response satisfies two properties: [A] homogeneity (scaling the power spectrum scales the response) and [B] additivity (the response to a sum of power spectra is the sum of the responses). If the driving is very strong the response becomes non-linear, meaning that both properties [A] and [B] fail to hold. But there is a class of systems whose response becomes semi-linear, i.e. the first property [A] still holds, but not [B]. SLRT applies whenever the driving is strong enough such that relaxation to the steady state is slow compared with the driven dynamics. 
Yet one assumes that the system can be modeled as a resistor network, mathematically expressed as G = [ [ G n m ] ] {\displaystyle G=[[G_{nm}]]} . The notation [ [ G n m ] ] {\displaystyle [[G_{nm}]]} stands for the usual electrical engineering calculation of a two terminal conductance of a given resistor network. For example, parallel connections imply G = ∑ n m G n m {\displaystyle G=\sum _{nm}G_{nm}} , while serial connections imply G = [ ∑ n m G n m − 1 ] − 1 {\displaystyle G=\left[\sum _{nm}G_{nm}^{-1}\right]^{-1}} . Resistor network calculation is manifestly semi-linear because it satisfies [ [ λ A ] ] = λ [ [ A ] ] {\displaystyle [[\lambda A]]=\lambda [[A]]} , but in general [ [ A + B ] ] ≠ [ [ A ] ] + [ [ B ] ] {\displaystyle [[A+B]]\neq [[A]]+[[B]]} . In the quantum mechanical calculation of energy absorption, the G n m {\displaystyle G_{nm}} represent Fermi-golden-rule transition rates between energy levels. If only neighboring levels are coupled, serial addition implies which is manifestly semi-linear. Results for sparse networks, that are encountered in the analysis of weakly chaotic driven systems, are more interesting and can be obtained using a generalized variable range hopping (VRH) scheme.
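The semi-linearity of the two-terminal network calculation can be demonstrated directly: scaling all transition rates scales the result (homogeneity, property [A]), while the serial, harmonic composition is not additive (property [B] fails). The sketch below is a minimal toy illustration with plain serial and parallel compositions, not a calculation from the SLRT literature:

```python
def serial(rates):
    """Two-terminal conductance of rates in series: harmonic composition."""
    return 1.0 / sum(1.0 / g for g in rates)

def parallel(rates):
    """Two-terminal conductance of rates in parallel: plain sum."""
    return float(sum(rates))

a = [1.0, 3.0]
b = [2.0, 2.0]

# Homogeneity [[lam * A]] = lam * [[A]] holds for both compositions:
assert abs(serial([5 * g for g in a]) - 5 * serial(a)) < 1e-12
assert abs(parallel([5 * g for g in a]) - 5 * parallel(a)) < 1e-12

# ...but additivity fails for the serial (harmonic) case, so the
# composition is semi-linear rather than linear:
lhs = serial([ga + gb for ga, gb in zip(a, b)])  # [[A + B]] = 1.875
rhs = serial(a) + serial(b)                      # [[A]] + [[B]] = 1.75
assert abs(lhs - rhs) > 0.1
```

The parallel composition is fully linear; it is the harmonic (series) averaging, where the weakest link dominates, that makes the overall response only semi-linear.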
https://en.wikipedia.org/wiki/Semilinear_response
In mathematics , a generalized arithmetic progression (or multiple arithmetic progression ) is a generalization of an arithmetic progression equipped with multiple common differences – whereas an arithmetic progression is generated by a single common difference, a generalized arithmetic progression can be generated by multiple common differences. For example, the sequence 17 , 20 , 22 , 23 , 25 , 26 , 27 , 28 , 29 , … {\displaystyle 17,20,22,23,25,26,27,28,29,\dots } is not an arithmetic progression, but is instead generated by starting with 17 and adding either 3 or 5, thus allowing multiple common differences to generate it. A semilinear set generalizes this idea to multiple dimensions – it is a set of vectors of integers, rather than a set of integers. A finite generalized arithmetic progression , or sometimes just generalized arithmetic progression (GAP) , of dimension d is defined to be a set of the form where x 0 , x 1 , … , x d , L 1 , … , L d ∈ Z {\displaystyle x_{0},x_{1},\dots ,x_{d},L_{1},\dots ,L_{d}\in \mathbb {Z} } . The product L 1 L 2 ⋯ L d {\displaystyle L_{1}L_{2}\cdots L_{d}} is called the size of the generalized arithmetic progression; the cardinality of the set can differ from the size if some elements of the set have multiple representations. If the cardinality equals the size, the progression is called proper . Generalized arithmetic progressions can be thought of as a projection of a higher dimensional grid into Z {\displaystyle \mathbb {Z} } . This projection is injective if and only if the generalized arithmetic progression is proper. 
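The opening example can be reproduced directly: the set generated from 17 with differences 3 and 5 is {17 + 3a + 5b : a, b ≥ 0}. A short sketch enumerating its smallest elements (the bounded-coefficient form 0 ≤ l_i < L_i of a finite GAP is analogous):

```python
def gap_elements(x0, diffs, limit):
    """Elements of the generalized arithmetic progression
    {x0 + sum(l_i * d_i) : l_i >= 0} that do not exceed `limit`,
    found by breadth-first closure under adding each difference."""
    elements = {x0}
    frontier = [x0]
    while frontier:
        v = frontier.pop()
        for d in diffs:
            w = v + d
            if w <= limit and w not in elements:
                elements.add(w)
                frontier.append(w)
    return sorted(elements)

print(gap_elements(17, [3, 5], 29))
# -> [17, 20, 22, 23, 25, 26, 27, 28, 29], matching the sequence above
```

With a single difference the function reduces to an ordinary arithmetic progression, e.g. `gap_elements(0, [2], 6)` gives `[0, 2, 4, 6]`.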
Formally, an arithmetic progression of N d {\displaystyle \mathbb {N} ^{d}} is an infinite sequence of the form v , v + v ′ , v + 2 v ′ , v + 3 v ′ , … {\displaystyle \mathbf {v} ,\mathbf {v} +\mathbf {v} ',\mathbf {v} +2\mathbf {v} ',\mathbf {v} +3\mathbf {v} ',\ldots } , where v {\displaystyle \mathbf {v} } and v ′ {\displaystyle \mathbf {v} '} are fixed vectors in N d {\displaystyle \mathbb {N} ^{d}} , called the initial vector and common difference respectively. A subset of N d {\displaystyle \mathbb {N} ^{d}} is said to be linear if it is of the form { v + a 1 v 1 + ⋯ + a m v m | a 1 , … , a m ∈ N } {\displaystyle \{\mathbf {v} +a_{1}\mathbf {v} _{1}+\cdots +a_{m}\mathbf {v} _{m}\,|\,a_{1},\dots ,a_{m}\in \mathbb {N} \}} , where m {\displaystyle m} is some integer and v , v 1 , … , v m {\displaystyle \mathbf {v} ,\mathbf {v} _{1},\dots ,\mathbf {v} _{m}} are fixed vectors in N d {\displaystyle \mathbb {N} ^{d}} . A subset of N d {\displaystyle \mathbb {N} ^{d}} is said to be semilinear if it is a finite union of linear sets. The semilinear sets are exactly the sets definable in Presburger arithmetic . [ 1 ]
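A small sketch of a semilinear set as a finite union of linear sets (the helper name is ours; since linear sets are infinite, the coefficients are truncated to a finite window for display):

```python
from itertools import product

def linear_set(v, periods, bound):
    """Elements of the linear set { v + a1*v1 + ... + am*vm : ai in N },
    restricted to coefficients 0 <= ai < bound (a finite window)."""
    d = len(v)
    out = set()
    for coeffs in product(range(bound), repeat=len(periods)):
        point = tuple(
            v[i] + sum(a * p[i] for a, p in zip(coeffs, periods))
            for i in range(d)
        )
        out.add(point)
    return out

# A semilinear subset of N^2: the union of two linear sets.
semilinear = linear_set((0, 0), [(1, 2)], 3) | linear_set((1, 0), [(0, 1)], 3)
print(sorted(semilinear))  # -> [(0, 0), (1, 0), (1, 1), (1, 2), (2, 4)]
```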
https://en.wikipedia.org/wiki/Semilinear_set
A semimetal is a material with a small energy overlap between the bottom of the conduction band and the top of the valence band , where the two bands do not overlap in momentum space . According to electronic band theory , solids can be classified as insulators , semiconductors , semimetals, or metals . In insulators and semiconductors the filled valence band is separated from an empty conduction band by a band gap . For insulators, the magnitude of the band gap is larger (e.g., > 4 eV ) than that of a semiconductor (e.g., < 4 eV). Because of the slight overlap between the conduction and valence bands, semimetals have no band gap and a small density of states at the Fermi level . A metal, by contrast, has an appreciable density of states at the Fermi level because the conduction band is partially filled. [ 1 ] The insulating/semiconducting states differ from the semimetallic/metallic states in the temperature dependence of their electrical conductivity . In a metal, the conductivity decreases with increasing temperature (due to the increasing interaction of electrons with phonons (lattice vibrations)). In an insulator or semiconductor (which have two types of charge carriers – holes and electrons), both the carrier mobilities and carrier concentrations contribute to the conductivity, and these have different temperature dependencies. Ultimately, the conductivity of insulators and semiconductors is observed to increase with initial increases in temperature above absolute zero (as more electrons are shifted to the conduction band), then decrease at intermediate temperatures, and then increase again at still higher temperatures. The semimetallic state is similar to the metallic state, but in semimetals both holes and electrons contribute to electrical conduction.
With some semimetals, like arsenic and antimony , there is a temperature-independent carrier density below room temperature (as in metals), while in bismuth this is true at very low temperatures, but at higher temperatures the carrier density increases with temperature, giving rise to a semimetal-semiconductor transition. A semimetal also differs from an insulator or semiconductor in that a semimetal's conductivity is always non-zero, whereas a semiconductor has zero conductivity at zero temperature and an insulator has zero conductivity even at ambient temperatures (due to a wider band gap). To classify semiconductors and semimetals, the energies of their filled and empty bands must be plotted against the crystal momentum of conduction electrons. According to the Bloch theorem the conduction of electrons depends on the periodicity of the crystal lattice in different directions. In a semimetal, the bottom of the conduction band is typically situated in a different part of momentum space (at a different k -vector ) than the top of the valence band. One could say that a semimetal is a semiconductor with a negative indirect bandgap , although they are seldom described in those terms. Classifying a material as either a semiconductor or a semimetal can become tricky when its band gap is extremely small or slightly negative. The well-known compound Fe 2 VAl, for example, was historically thought of as a semi-metal (with a negative gap of ~ -0.1 eV) for over two decades before it was shown to be a small-gap (~ 0.03 eV) semiconductor [ 2 ] using self-consistent analysis of the transport properties, electrical resistivity and Seebeck coefficient . Commonly used experimental techniques for investigating the band gap can be sensitive to many factors, such as the size of the band gap, features of the electronic structure (direct versus indirect gap) and the number of free charge carriers (which can frequently depend on synthesis conditions).
The band gap obtained from transport-property modeling is essentially independent of such factors. Theoretical techniques for calculating the electronic structure, on the other hand, can often underestimate the band gap. The figure is schematic, showing only the lowest-energy conduction band and the highest-energy valence band in one dimension of momentum space (or k-space). In typical solids, k-space is three-dimensional, and there are an infinite number of bands. Unlike a regular metal , semimetals have charge carriers of both types (holes and electrons), so one could argue that they should be called 'double-metals' rather than semimetals. However, the charge carriers typically occur in much smaller numbers than in a real metal. In this respect they resemble degenerate semiconductors more closely. This explains why the electrical properties of semimetals are partway between those of metals and semiconductors . As semimetals have fewer charge carriers than metals, they typically have lower electrical and thermal conductivities . They also have small effective masses for both holes and electrons, because the overlap in energy usually results from both energy bands being broad. In addition they typically show high diamagnetic susceptibilities and high lattice dielectric constants . The classic semimetallic elements are arsenic , antimony , bismuth , α- tin (gray tin) and graphite , an allotrope of carbon . The first two (As, Sb) are also considered metalloids , but the terms semimetal and metalloid are not synonymous. Semimetals, in contrast to metalloids, can also be chemical compounds , such as mercury telluride (HgTe), [ 3 ] and tin , bismuth , and graphite are typically not considered metalloids. [ 4 ] Transient semimetal states have been reported at extreme conditions. [ 5 ] It has recently been shown that some conductive polymers can behave as semimetals. [ 6 ]
https://en.wikipedia.org/wiki/Semimetal
Bovine seminal RNase (BS-RNase) is a member of the ribonuclease superfamily produced by the bovine seminal vesicles. The enzyme shares most of its features with the other members of its family, and relatively few are unique to it. Research into how new functions arise in proteins during evolution led scientists to an unusual consequence of a common biological event, gene conversion , in the ribonuclease (RNase) protein family. [ 1 ] The most well-known member of this family, RNase A (also called pancreatic RNase), is expressed in the pancreas of oxen. It serves to digest RNA in the intestine arising from bacteria fermenting in the first stomach of the ox. [ 2 ] The homologous RNase, called seminal RNase, differs from RNase A by 23 amino acids and is expressed in seminal plasma at a concentration of 1-1.5 mg/ml, which constitutes more than 3% of the fluid's protein content. [ 3 ] Bovine seminal ribonuclease (BS-RNase) is a homologue of RNase A with specific antitumor activity. The physiological role of this enzyme has not yet been identified, and it therefore remains a mystery why bovine seminal fluid contains it at such a high concentration. In the course of evolution it has acquired new behaviors that the ancestral RNase does not possess: it is a dimer with composite active sites that binds firmly to anionic glycolipids, [ 4 ] [ 5 ] [ 6 ] [ 7 ] including the seminolipid of bovine spermatozoa, a fusogenic sulfated galactolipid, [ 8 ] and it has immunosuppressive and cytostatic activities. [ 9 ] [ 10 ] In the immunoregulation of both male and female genital systems, the seminal plasma plays a prominent role in immunosuppression.
[ 11 ] The direct or indirect interference of seminal plasma with the function of many types of immunocompetent cells, including T cells , B cells , NK cells and macrophages , [ 12 ] [ 13 ] has been demonstrated. These immunosuppressive effects are not species-specific [ 14 ] and occur at the physiological concentrations normally seen in the female urogenital tract. [ 11 ] [ 15 ] RNase secretion has not been detected in the seminal fluid of any other mammal. After gene duplication , established proteins can be recruited to perform new biomolecular functions. [ 16 ] [ 17 ] Among the models that have been proposed, one suggests that after a gene duplication one of the two gene copies continues to evolve under the functional constraints dictated by the ancestral role, while the duplicate, no longer restricted by a functional role, is free to explore protein "structure space". It may eventually come to encode new behaviors that are required for a new physiological function and thereby confer a selective advantage. This model is problematic, however, since most duplicates become pseudogenes , unexpressed genetic information (referred to as " junk DNA "), within just a few million years. Because selective pressure does little to preserve duplicated genes, they accumulate deleterious mutations that leave them unable to encode a protein useful for any function. [ 18 ] [ 19 ] This limits the usefulness of a functionally unconstrained gene duplicate as a tool for exploring the protein structure space of new behaviors that might confer a selectable physiological function. How, then, do new functions arise in proteins? One possibility is the resurrection of pseudogenes through biological events such as gene conversion . One such example is the resurrection of the bovine seminal RNase gene.
From laboratory reconstructions of ancient RNases, [ 20 ] it has been shown that each of these traits was absent in the most recent common ancestor of seminal and pancreatic RNase and arose later in the seminal lineage, after the divergence of the two protein families. The researchers analyzed the RNase genes from all taxa in a true-ruminant phylogenetic tree constructed by parsimony analysis, and revealed that pancreatic and seminal RNases separated early after the gene duplication, about 35 million years ago (MYA). Several marker substitutions, including Pro 19, Cys 32 and Lys 62, were introduced into seminal RNase genes, distinguishing them from their pancreatic cousins. [ 21 ] On this basis, the seminal RNase family includes the taxa saiga, sheep , duiker, kudu and cape buffalo , while peccary is excluded from it. Subsequent sequence analyses, mass spectrometry and western blotting studies on taxa within the seminal RNase gene family revealed results consistent with a model in which the seminal RNase gene gained a physiological function immediately after duplication, retained this function throughout divergent evolution (with each gene copy evolving independently), and later lost it in all species, including modern kudu and cape buffalo, except modern oxen . This would require, however, that the function was lost independently multiple times in different lineages. [ 21 ] After the divergence of cape buffalo, in the lineage leading to modern oxen, the seminal RNase gene was resurrected very recently. It is intriguing to ask whether the domestication of the ox is related to the emergence of seminal RNase as a functioning protein. The question that then arises is whether the seminal RNase gene has a function in modern oxen.
To address this question, one can consider the ratio of non-silent to silent substitutions in these gene families. [ 21 ] The average ratio of non-silent to silent substitutions is 2:1 for unexpressed seminal RNase sequences, which is consistent with the model that these seminal RNases are pseudogenes and is close to the ratio expected for random substitution in a gene serving no selected function. On the other hand, the average ratio is less than 1:1 for pancreatic RNases, consistent with the model that pancreatic RNases are functional, with selective pressure constraining amino acid replacements. However, when the expressed ox seminal RNase is compared with its nearest unexpressed homologs in buffalo and kudu, a most remarkable ratio of non-silent to silent substitutions, 4:1, is observed. Such a ratio is expected only of a pseudogene that is searching protein "structure space" through rapidly introduced amino acid replacements in order to perform a new function and provide newly selected properties. The resurrection of the seminal RNase gene is evidently associated with the introduction of Cys 31. [ 21 ] How, then, was this pseudogene resurrected? This is not entirely clear, but one can note that the similarity between the region of the kudu deletion and the sequence of the expressed seminal RNase gene extends some 70 base pairs into the 3'-untranslated region, where the sequences are 89% identical (62 of the 70 nucleotides ). [ 21 ] A plausible explanation is that a gene conversion event between the damaged seminal RNase gene and the pancreatic gene repaired it, creating a gene with a new physiological function. Gene conversion is of two types - interallelic and interlocus gene conversion.
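The non-silent to silent substitution ratio discussed above can be estimated by classifying codon differences between aligned coding sequences as synonymous (silent) or non-synonymous (non-silent). Here is a minimal, hypothetical sketch; its tiny codon table covers only the codons in the toy example, not the full genetic code, and real analyses use more sophisticated counting:

```python
# Partial codon table (amino acids for the codons used below only).
CODON_TABLE = {
    "TTT": "Phe", "TTC": "Phe",   # third-position change is silent
    "AAA": "Lys", "AGA": "Arg",   # second-position change is non-silent
    "GGT": "Gly", "GGC": "Gly",
    "GAT": "Asp", "GAA": "Glu",
}

def substitution_ratio(seq_a, seq_b):
    """Count (non-silent, silent) codon substitutions between two
    aligned, gap-free coding sequences of equal length."""
    silent = nonsilent = 0
    for i in range(0, len(seq_a), 3):
        ca, cb = seq_a[i:i + 3], seq_b[i:i + 3]
        if ca == cb:
            continue
        if CODON_TABLE[ca] == CODON_TABLE[cb]:
            silent += 1
        else:
            nonsilent += 1
    return nonsilent, silent

# TTT->TTC and GGT->GGC are silent; AAA->AGA and GAT->GAA change the amino acid.
print(substitution_ratio("TTTAAAGGTGAT", "TTCAGAGGCGAA"))  # -> (2, 2)
```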
The resurrection of seminal RNase gene function is believed to be the unexpected consequence of an interlocus gene conversion event between the seminal RNase pseudogene and its homologous functional gene. In such recombination events, genetic information is transferred from a functional donor locus to a non-functional acceptor pseudogene. Thus the non-functional seminal RNase pseudogene acquired new physiological functions after lying dead for many million years. This may be the first example in the literature of the resurrection of a pseudogene by a gene conversion event, and it would be interesting to test it further with more sequencing data. [ 21 ] Later, another evolutionary aspect was proposed for seminal RNase, [ 22 ] showing that it exists in two quaternary forms: one exhibiting special biological actions and the other acting simply as an RNA-degrading enzyme. On this proposal, the evolution of seminal RNase into two coexisting structures, more versatile both structurally and biologically, can be regarded as evolutionary progress. Researchers around the world have studied and recognized a great many pseudogenes, and have launched several worldwide projects, such as ENCODE , to identify and study their potential roles. Even though pseudogenes complicate molecular analysis, they are regarded as genome fossils that provide a sound record of evolution, since they offer a wealth of diverse information for molecular analysis. Researchers worldwide are developing different computational schemes and criteria to identify pseudogenes consistently. Sometimes resurrected pseudogenes have been identified as functional; they may also revert to being non-functional, and this change can again be reversed.
Not all the pseudogenes in a genome should be considered " junk DNA ". The evidence for functional pseudogenes strengthens their significance, and they have become a hotspot of research due to their significance and possible resurrection. Researchers are working to characterize these silent fossils in humans and other organisms. In the near future, the real evolutionary fates of pseudogenes may be revealed as part of the broader picture of genome annotation .
https://en.wikipedia.org/wiki/Seminal_RNase
Seminal fluid proteins (SFPs) or accessory gland proteins (Acps) are one of the non- sperm components of semen . In many animals with internal fertilization , males transfer a complex cocktail of proteins in their semen to females during copulation. These seminal fluid proteins often have diverse, potent effects on female post-mating phenotypes . [ 2 ] SFPs are produced by the male accessory glands . Seminal fluid proteins frequently show evidence of elevated evolutionary rates and are often cited as an example of sexual conflict . [ 2 ] SFPs are best studied in mammals and insects , [ 3 ] especially in the common fruit fly, Drosophila melanogaster . Most species produce a wide variety of proteins that are transferred to females. For example, approximately 290 SFPs have been identified in D. melanogaster , [ 4 ] [ 5 ] [ 6 ] 46 in the mosquito Anopheles gambiae , [ 7 ] and around 160 in humans. [ 8 ] Even between closely related species, the seminal fluid proteome can vary greatly. SFPs show elevated rates of DNA sequence change compared to non-reproductive genes (measured by K a /K s ratio) in many orders, including Diptera (flies), [ 9 ] [ 10 ] Lepidoptera (butterflies and moths), [ 1 ] Rodentia , [ 11 ] and Primates . [ 12 ] [ 13 ] [ 14 ] Additionally, SFPs show high rates of gene turnover compared to non-reproductive genes. [ 10 ] The function of SFPs is best understood in D. melanogaster . SFPs play a role in male–male sperm competition . One study that manipulated the amount of SFPs male D. melanogaster produced found that when males were in competition, males that produced more SFPs sired a larger proportion of offspring. [ 15 ] Many D. melanogaster SFP genes are expressed by the female reproductive tract, particularly within the sperm storage organs, which may be more consistent with roles supporting spermatozoa than in sexual conflict.
[ 16 ] In many insect species, significant changes occur in female behavior and physiology following mating; the isolated receipt of SFPs has been shown to be responsible for many of these changes. In D. melanogaster females, over 160 genes show either up or down-regulation following isolated SFP receipt. [ 17 ] These transcriptomic changes are not limited to the female's reproductive tract . [ 18 ] SFPs lengthen the refractory period (when the female is disinterested in mating) and stimulate ovulation ; additionally they can affect processes such as sperm storage , metabolism , and activity levels. [ 3 ] Though SFPs seem to play a role in coordinating male and female reproductive efforts (e.g. in timing of ovulation), SFPs may also be a source of sexual conflict . Studies of D. melanogaster have revealed that females who received SFPs suffered decreased lifespan and fitness . [ 19 ] Frequent mating in D. melanogaster is associated with a reduction in female lifespan, [ 20 ] and this cost of mating in females has been shown to be primarily mediated by receipt of SFPs. [ 21 ] As SFPs play an important role in reproductive processes in disease-carrying species of mosquito and additionally tend to be highly species-specific, manipulation of SFPs may hold potential for highly targeted control of these mosquito populations . [ 22 ]
https://en.wikipedia.org/wiki/Seminal_fluid_protein
Seminaphtharhodafluor or SNARF is a fluorescent dye that changes color with pH . It can be used to construct optical biosensors that use enzymes that change pH. The absorption peak of the derivative carboxy-SNARF [ specify ] at pH 6.0 is at 515 and 550 nm, while that at pH 9.0 is at 575 nm. [ citation needed ] The emission peak of carboxy-SNARF at pH 6.0 is at 585 nm, while that at pH 9.0 is at 640 nm. [ 1 ] SNARF-1 can serve as a substrate for the MRP1 (multidrug resistance-associated protein-1) drug transporter, and can thus be used to measure the activity of the MRP1 transporter. For this purpose, an acetoxymethyl ester group is added to SNARF-1. Cellular esterases cleave this group off, releasing SNARF-1, whose transport out of the cells can be measured by following the loss of fluorescence from the cells. [ 2 ]
https://en.wikipedia.org/wiki/Seminaphtharhodafluor
The Seminar for Applied Mathematics (SAM; from 1948 to 1969 Institute for Applied Mathematics) was founded in 1948 by Prof. Eduard Stiefel . It is part of the Department of Mathematics (D-MATH) of the Swiss Federal Institute of Technology , ETH Zurich . The Seminar consists of four regular professorships (as of 2014), two assistant professorships, two permanent senior scientists, approximately 14 positions for assistants which are either filled by senior assistants, postdoctoral fellows or Ph.D. students, as well as secretarial staff and a systems administrator. It is represented by the Head of SAM. The SAM is a center for research and teaching in numerical mathematics, mathematical modelling and computing in Science and Technology in the D-MATH of the ETH Zürich .
https://en.wikipedia.org/wiki/Seminar_for_Applied_Mathematics
A semiochemical , from the Greek σημεῖον ( semeion ), meaning "signal", is a chemical substance or mixture released by an organism that affects the behaviors of other individuals. [ 1 ] Semiochemical communication can be divided into two broad classes: communication between individuals of the same species (intraspecific) or communication between different species ( interspecific ). [ 2 ] It is usually used in the field of chemical ecology to encompass pheromones , allomones , kairomones , attractants and repellents . [ 1 ] [ 3 ] Many insects, including parasitic insects , use semiochemicals. Pheromones are intraspecific signals that aid in finding mates, food and habitat resources, warning of enemies, and avoiding competition. Interspecific signals known as allomones and kairomones have similar functions. [ 4 ] A pheromone (from Greek phero "to bear" + hormone from Greek – "impetus") is a secreted or excreted chemical factor that triggers a social response in members of the same species. Pheromones are chemicals capable of acting outside the body of the secreting individual to impact the behavior of the receiving individual. [ 5 ] There are alarm pheromones, food trail pheromones, sex pheromones , and many others that affect behavior or physiology. [ 6 ] Their use among insects has been particularly well documented. In addition, some vertebrates and plants communicate by using pheromones. A notable example of pheromone usage to indicate sexual receptivity in insects can be seen in the female Dawson's burrowing bee , which uses a particular mixture of cuticular hydrocarbons to signal sexual receptivity to mating, and then another mixture to indicate sexual disinterest. These hydrocarbons, in association with other chemical signals produced in the Dufour's gland, have been implicated in male repulsion signaling as well. 
[ 7 ] The term "pheromone" was introduced by Peter Karlson and Martin Lüscher in 1959, based on the Greek word pherein (to transport) and hormone (to stimulate). [ 8 ] They are also sometimes classified as ecto-hormones. [ 9 ] German Biochemist Adolf Butenandt characterized the first such chemical, Bombykol (a chemically well-characterized pheromone released by the female silkworm to attract mates). [ 10 ] An allomone is any chemical substance released by an individual of one species that affects the behavior of a member of another species to the benefit of the originator but not the receiver. [ 11 ] Production of allomones is a common form of defense, such as by plant species against insect herbivores or prey species against predators. Sometimes species produce the sex pheromones of the organisms they exploit as prey or pollinators (such as bolas spiders [ 12 ] and some orchids [ 13 ] ). Male sex pheromone of Dacini fruit flies, besides acting as aggregation pheromone to form lek, also acts as an allomone to deter lizard predation. [ 14 ] [ 15 ] [ 16 ] The term "Allomone" was proposed by Brown, Eisner, and Whittaker [ 17 ] to denote those substances which confer an advantage upon the emitter. A kairomone is a semiochemical, emitted by an organism, which mediates interspecific interactions in a way that benefits an individual of another species which receives it, without benefitting the emitter. Two main ecological cues are provided by kairomones; they generally either indicate a food source for the receiver, or give warning of the presence of a predator. Often a pheromone may be utilized as a kairomone by a predator or parasitoid to locate the emitting organism. [ 18 ] A synomone is an interspecific semiochemical that is beneficial to both interacting organisms, the emitter and receiver, e.g. floral synomone of certain Bulbophyllum species ( Orchidaceae ) attracts fruit fly males ( Tephritidae : Diptera ) as pollinators, so can be classed as an attractant . 
In this true mutualistic inter-relationship, both organisms gain benefits in their respective sexual reproductive systems – i.e. orchid flowers are pollinated and the Dacini fruit fly males are rewarded with a sex pheromone precursor or booster. The floral synomone, which also acts as a reward to pollinators, is either in the form of a phenylpropanoid (e.g. methyl eugenol [ 19 ] [ 20 ] [ 21 ] ) or a phenylbutanoid (e.g. raspberry ketone [ 22 ] and zingerone [ 23 ] [ 24 ] ). Another example of a synomone is trans -2-hexenal , emitted by trees in the Mimosa / Acacia clade of the Fabaceae . These trees form distinctive hollow structures in which ants nest. When a leaf is disrupted by an herbivore, the damaged cells emit trans -2-hexenal (among other volatiles), which is detected by the ants. The ants swarm to the herbivore, biting and stinging to defend their host plant. The tree repays them in turn by providing sugary nectar and fat- and protein-rich Beltian bodies to feed the ant colony. The goals of using semiochemicals in pest control are
https://en.wikipedia.org/wiki/Semiochemical
In order theory , a branch of mathematics, a semiorder is a type of ordering for items with numerical scores, where items with widely differing scores are compared by their scores and where scores within a given margin of error are deemed incomparable . Semiorders were introduced and applied in mathematical psychology by Duncan Luce ( 1956 ) as a model of human preference. They generalize strict weak orderings , in which items with equal scores may be tied but there is no margin of error. They are a special case of partial orders and of interval orders , and can be characterized among the partial orders by additional axioms , or by two forbidden four-item suborders. The original motivation for introducing semiorders was to model human preferences without assuming that incomparability is a transitive relation . For instance, suppose that x {\displaystyle x} , y {\displaystyle y} , and z {\displaystyle z} represent three quantities of the same material, and that x {\displaystyle x} is larger than z {\displaystyle z} by the smallest amount that is perceptible as a difference, while y {\displaystyle y} is halfway between the two of them. Then, a person who desires more of the material would prefer x {\displaystyle x} to z {\displaystyle z} , but would not have a preference between the other two pairs. In this example, x {\displaystyle x} and y {\displaystyle y} are incomparable in the preference ordering, as are y {\displaystyle y} and z {\displaystyle z} , but x {\displaystyle x} and z {\displaystyle z} are comparable, so incomparability does not obey the transitive law. [ 1 ] To model this mathematically, suppose that objects are given numerical utility values , by letting u {\displaystyle u} be any utility function that maps the objects to be compared (a set X {\displaystyle X} ) to real numbers . 
Set a numerical threshold (which may be normalized to 1) such that utilities within that threshold of each other are declared incomparable, and define a binary relation < {\displaystyle <} on the objects, by setting x < y {\displaystyle x<y} whenever u ( x ) ≤ u ( y ) − 1 {\displaystyle u(x)\leq u(y)-1} . Then ( X , < ) {\displaystyle (X,<)} forms a semiorder. [ 2 ] If, instead, objects are declared comparable whenever their utilities differ, the result would be a strict weak ordering , for which incomparability of objects (based on equality of numbers) would be transitive. [ 1 ] A semiorder, defined from a utility function as above, is a partially ordered set with the following two properties: [ 3 ] Conversely, every finite partial order that avoids the two forbidden four-point orderings described above can be given utility values making it into a semiorder. [ 4 ] Therefore, rather than being a consequence of a definition in terms of utility, these forbidden orderings, or equivalent systems of axioms , can be taken as a combinatorial definition of semiorders. [ 5 ] If a semiorder on n {\displaystyle n} elements is given only in terms of the order relation between its pairs of elements, obeying these axioms, then it is possible to construct a utility function that represents the order in time O ( n 2 ) {\displaystyle O(n^{2})} , where the O {\displaystyle O} is an instance of big O notation . [ 6 ] For orderings on infinite sets of elements, the orderings that can be defined by utility functions and the orderings that can be defined by forbidden four-point orders differ from each other. For instance, if a semiorder ( X , < ) {\displaystyle (X,<)} (as defined by forbidden orders) includes an uncountable totally ordered subset then there do not exist sufficiently many sufficiently well-spaced real-numbers for it to be representable by a utility function. Fishburn (1973) supplies a precise characterization of the semiorders that may be defined numerically. 
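The threshold definition can be sketched directly (the function name is ours; the utility table mirrors the x, y, z example above, with x exceeding z by exactly the threshold):

```python
def semiorder_less(u, x, y, threshold=1.0):
    """x < y in the semiorder induced by utility u: u(x) <= u(y) - threshold.
    Pairs whose utilities differ by less than the threshold are incomparable."""
    return u[x] <= u[y] - threshold

u = {"x": 2.0, "y": 1.5, "z": 1.0}
print(semiorder_less(u, "z", "x"))  # -> True: z < x
print(semiorder_less(u, "z", "y"))  # -> False: z and y are incomparable
print(semiorder_less(u, "y", "x"))  # -> False: y and x are incomparable
```

Note that incomparability is not transitive here: z is incomparable to y and y to x, yet z < x, exactly as in the perception example above.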
[ 7 ] One may define a partial order ( X , ≤ ) {\displaystyle (X,\leq )} from a semiorder ( X , < ) {\displaystyle (X,<)} by declaring that x ≤ y {\displaystyle x\leq y} whenever either x < y {\displaystyle x<y} or x = y {\displaystyle x=y} . Of the axioms that a partial order is required to obey, reflexivity ( x ≤ x {\displaystyle x\leq x} ) follows automatically from this definition. Antisymmetry (if x ≤ y {\displaystyle x\leq y} and y ≤ x {\displaystyle y\leq x} then x = y {\displaystyle x=y} ) follows from the first semiorder axiom. Transitivity (if x ≤ y {\displaystyle x\leq y} and y ≤ z {\displaystyle y\leq z} then x ≤ z {\displaystyle x\leq z} ) follows from the second semiorder axiom. Therefore, the binary relation ( X , ≤ ) {\displaystyle (X,\leq )} defined in this way meets the three requirements of a partial order that it be reflexive, antisymmetric, and transitive. Conversely, suppose that ( X , ≤ ) {\displaystyle (X,\leq )} is a partial order that has been constructed in this way from a semiorder. Then the semiorder may be recovered by declaring that x < y {\displaystyle x<y} whenever x ≤ y {\displaystyle x\leq y} and x ≠ y {\displaystyle x\neq y} . Not every partial order leads to a semiorder in this way, however: The first of the semiorder axioms listed above follows automatically from the axioms defining a partial order, but the others do not. A partial order that includes four elements forming two two-element chains would lead to a relation ( X , < ) {\displaystyle (X,<)} that violates the second semiorder axiom, and a partial order that includes four elements forming a three-element chain and an unrelated item would violate the third semiorder axiom (cf. pictures in section #Axiomatics ). Every strict weak ordering < is also a semi-order. More particularly, transitivity of < and transitivity of incomparability with respect to < together imply the above axiom 2, while transitivity of incomparability alone implies axiom 3. 
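The two forbidden four-point configurations mentioned above, two disjoint two-element chains and a three-element chain plus an unrelated item, can be checked by brute force on small examples. A sketch (the helper name is ours, and the input is assumed to already be a strict partial order given as a set of ordered pairs):

```python
from itertools import permutations

def is_semiorder(elements, less):
    """Check a strict partial order `less` for the two forbidden
    four-point suborders that characterize semiorders."""
    for a, b, c, d in permutations(elements, 4):
        # Forbidden 2+2: a<b and c<d with no relation across the chains.
        if (a, b) in less and (c, d) in less and not (
            {(a, c), (c, a), (a, d), (d, a),
             (b, c), (c, b), (b, d), (d, b)} & less
        ):
            return False
        # Forbidden 3+1: a<b<c with d unrelated to all of a, b, c.
        if (a, b) in less and (b, c) in less and not (
            {(a, d), (d, a), (b, d), (d, b), (c, d), (d, c)} & less
        ):
            return False
    return True

# Two disjoint two-element chains violate the semiorder axioms:
print(is_semiorder("abcd", {("a", "b"), ("c", "d")}))  # -> False
# A four-element chain is a semiorder (e.g. utilities 0, 2, 4, 6):
chain = {("a", "b"), ("a", "c"), ("a", "d"),
         ("b", "c"), ("b", "d"), ("c", "d")}
print(is_semiorder("abcd", chain))  # -> True
```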
The semiorder shown in the top image is not a strict weak ordering, since the rightmost vertex is incomparable to its two closest left neighbors, but they are comparable. The semiorder defined from a utility function u {\displaystyle u} may equivalently be defined as the interval order defined by the intervals [ u ( x ) , u ( x ) + 1 ] {\displaystyle [u(x),u(x)+1]} , [ 8 ] so every semiorder is an example of an interval order. A relation is a semiorder if, and only if, it can be obtained as an interval order of unit length intervals ( ℓ i , ℓ i + 1 ) {\displaystyle (\ell _{i},\ell _{i}+1)} . According to Amartya K. Sen , [ 9 ] semiorders were examined by Dean T. Jamison and Lawrence J. Lau [ 10 ] and found to be a special case of quasitransitive relations . In fact, every semiorder is quasitransitive, [ 11 ] and quasitransitivity is invariant to adding all pairs of incomparable items. [ 12 ] Removing all non-vertical red lines from the topmost image results in a Hasse diagram for a relation that is still quasitransitive, but violates both axioms 2 and 3; this relation might no longer be useful as a preference ordering. The number of distinct semiorders on n {\displaystyle n} unlabeled items is given by the Catalan numbers [ 13 ] 1/( n + 1) · (2 n choose n ) {\displaystyle {\frac {1}{n+1}}{\binom {2n}{n}},} while the number of semiorders on n {\displaystyle n} labeled items is given by the sequence [ 14 ] Any finite semiorder has order dimension at most three. [ 15 ] Among all partial orders with a fixed number of elements and a fixed number of comparable pairs, the partial orders that have the largest number of linear extensions are semiorders. [ 16 ] Semiorders are known to obey the 1/3–2/3 conjecture : in any finite semiorder that is not a total order, there exists a pair of elements x {\displaystyle x} and y {\displaystyle y} such that x {\displaystyle x} appears earlier than y {\displaystyle y} in between 1/3 and 2/3 of the linear extensions of the semiorder.
[ 3 ] The set of semiorders on an n {\displaystyle n} -element set is well-graded : if two semiorders on the same set differ from each other by the addition or removal of k {\displaystyle k} order relations, then it is possible to find a path of k {\displaystyle k} steps from the first semiorder to the second one, in such a way that each step of the path adds or removes a single order relation and each intermediate state in the path is itself a semiorder. [ 17 ] The incomparability graphs of semiorders are called indifference graphs , and are a special case of the interval graphs . [ 18 ]
https://en.wikipedia.org/wiki/Semiorder
A semipermeable membrane is a type of synthetic or biological polymeric membrane that allows certain molecules or ions to pass through it by osmosis . The rate of passage depends on the pressure , concentration , and temperature of the molecules or solutes on either side, as well as the permeability of the membrane to each solute. Depending on the membrane and the solute, permeability may depend on solute size, solubility , or other chemical properties. How the membrane is constructed to be selective determines the rate of passage and which solutes can permeate. Many natural and synthetic materials that are rather thick are also semipermeable. One example of this is the thin film on the inside of an egg. [ 1 ] Biological membranes are selectively permeable , [ 2 ] with the passage of molecules controlled by facilitated diffusion , passive transport or active transport regulated by proteins embedded in the membrane. A phospholipid bilayer is an example of a biological semipermeable membrane. It consists of two parallel, opposite-facing layers of uniformly arranged phospholipids . Each phospholipid is made of one phosphate head and two fatty acid tails. [ 3 ] The plasma membrane that surrounds all biological cells is an example of a phospholipid bilayer . [ 2 ] The plasma membrane is very specific in its permeability , meaning it carefully controls which substances enter and leave the cell. Because they are attracted to the water content within and outside the cell (i.e., they are hydrophilic ), the phosphate heads assemble along the outer and inner surfaces of the plasma membrane, while the hydrophobic tails form the layer hidden in the interior of the membrane. Cholesterol molecules are also found throughout the plasma membrane and act as a buffer of membrane fluidity . [ 3 ] The phospholipid bilayer is most permeable to small, uncharged solutes . Protein channels are embedded in or through the phospholipids, [ 4 ] and, collectively, this model is known as the fluid mosaic model .
Aquaporins are protein channel pores permeable to water. Information can also pass through the plasma membrane when signaling molecules bind to receptors in the cell membrane. The signaling molecules bind to the receptors, which alters the structure of these proteins. [ 5 ] A change in the protein structure initiates a signaling cascade. [ 5 ] G protein-coupled receptor signaling is an important subset of such signaling processes. [ 6 ] Because the lipid bilayer is semipermeable, it is subject to osmotic pressure . [ 7 ] When the solutes around a cell become more or less concentrated, osmotic pressure causes water to flow into or out of the cell to equilibrate . [ 8 ] This osmotic stress inhibits cellular functions that depend on the activity of water in the cell, such as the functioning of its DNA and protein systems and proper assembly of its plasma membrane. [ 9 ] This can lead to osmotic shock and cell death . Osmoregulation is the method by which cells counteract osmotic stress, and includes osmosensory transporters in the membrane that allow K+ [ note 1 ] and other molecules to flow through the membrane. [ 8 ] Artificial semipermeable membranes see wide usage in research and the medical field. Artificial lipid membranes can easily be manipulated and experimented upon to study biological phenomena. [ 10 ] Other artificial membranes include those involved in drug delivery, dialysis, and bioseparations. [ 11 ] The bulk flow of water through a selectively permeable membrane because of an osmotic pressure difference is called osmosis . This allows only certain particles, including water, to pass through, leaving behind solutes such as salt and other contaminants. In the process of reverse osmosis , water is purified by applying high pressure to a solution, thereby pushing water through a thin-film composite membrane (TFC or TFM). These are semipermeable membranes manufactured principally for use in water purification or desalination systems.
They also have use in chemical applications such as batteries and fuel cells. In essence, a TFC material is a molecular sieve constructed in the form of a film from two or more layered materials. Sidney Loeb and Srinivasa Sourirajan invented the first practical synthetic semi-permeable membrane. [ 12 ] Membranes used in reverse osmosis are, in general, made out of polyamide , chosen primarily for its permeability to water and relative impermeability to various dissolved impurities, including salt ions and other small molecules that cannot be filtered. Reverse osmosis membrane modules have a limited life cycle, and several studies have endeavored to improve the performance of the process and extend the lifespan of RO membranes. However, even with appropriate pretreatment of the feed water, the membranes' lifespan is generally limited to five to seven years. Discarded RO membrane modules are currently classified worldwide as inert solid waste and are often disposed of in landfills, with limited reuse. Estimates indicated that the mass of membranes discarded annually worldwide reached 12,000 tons. At the current rate, the disposal of RO modules represents a significant and growing adverse impact on the environment, giving rise to the need to limit the direct discarding of these modules. Discarded RO membranes from desalination operations could be recycled for other processes that do not require the intensive filtration criteria of desalination; for example, they could be used in applications requiring nanofiltration (NF) membranes. [ 13 ] Regeneration process steps:
1. Chemical treatment: chemical procedures aimed at removing fouling from the spent membrane. Several chemical agents are used, such as:
- sodium hydroxide (alkaline)
- hydrochloric acid (acidic)
- chelating agents such as citric and oxalic acids
There are three ways of exposing membranes to chemical agents: simple immersion, recirculation of the cleaning agent, or immersion in an ultrasound bath.
2. Oxidative treatment: the membrane is exposed to oxidant solutions in order to remove its dense aromatic polyamide active layer and convert it into a porous membrane. Oxidizing agents such as sodium hypochlorite (NaClO, 10–12%) and potassium permanganate (KMnO₄) are used. [ 14 ] These agents remove organic and biological fouling from RO membranes; they also disinfect the membrane surface, preventing the growth of bacteria and other microorganisms. Sodium hypochlorite is the most efficient oxidizing agent in terms of the permeability and salt rejection of the regenerated membrane. Dialysis tubing is used in hemodialysis to purify blood in the case of kidney failure . The tubing uses a semipermeable membrane to remove waste before returning the purified blood to the patient. [ 15 ] Differences in the semipermeable membrane, such as the size of its pores, change the rate and identity of removed molecules. Traditionally, cellulose membranes were used, but they could cause inflammatory responses in patients. Synthetic membranes have been developed that are more biocompatible and lead to fewer inflammatory responses. [ 16 ] However, despite the increased biocompatibility, synthetic membranes have not been linked to decreased mortality. [ 15 ] Other types of semipermeable membranes are cation-exchange membranes (CEMs), anion-exchange membranes (AEMs), alkali anion-exchange membranes (AAEMs) and proton-exchange membranes (PEMs).
https://en.wikipedia.org/wiki/Semipermeable_membrane
Semipermeable membrane devices ( SPMD ) are passive sampling devices used to monitor trace levels of organic compounds with a log Kow > 3. SPMDs are an effective way of monitoring the concentrations of chemicals from anthropogenic runoff and pollution in the marine environment because of their ability to detect minuscule levels of chemical. The data collected from a passive sampler are important for examining the amount of chemical in the environment and can therefore be used to inform other scientific research about the effects of those chemicals on organisms as well as the environment. Examples of chemicals commonly measured using SPMDs include polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs), dioxins , and furans , as well as hydrophobic waste-water effluents such as fragrances , triclosan , and phthalates . Passive samplers may be used to monitor and record short-lived pulses of contaminants found in surface water that would otherwise be missed. SPMDs can accumulate contaminants from the water column because the lipid membrane housed within the canister is composed of triolein (glyceryl trioleate). [ 1 ] However, they are most successful in accumulating trace chemicals in surface water with a calculable flow. The amount of chemical measured using an SPMD is related to the surface area of the sampling device; therefore, using a larger SPMD increases the amount of chemical sampled. SPMDs can be deployed within a wide range of water bodies, although flowing shallow water is preferable. These devices need to be secured to nearby structures to allow the SPMD to remain in a fixed position in its environment. To aid in the stability of SPMDs, various methods can be employed, including attaching the device to a buoy system, an anchor , a boat, or structures/debris in shallow water.
For the best possible data to be collected using passive samplers, some degree of stability and the ability of the SPMD to remain stationary are required. As long as there are openings on the canister of the device, there is no particular way that the SPMD needs to be facing while deployed. [ 2 ] Depending on the type of analysis to be conducted on the extract of the sampler, one or more devices may need to be deployed. SPMDs normally are deployed for up to 30 days in the field, depending on how much accumulation of trace chemicals occurs in the passive sampler itself. The advantage of deploying SPMDs into flowing water like streams or rivers is that these systems will increase the volume of water sampled. However, areas of extremely high flow should be avoided, as floating debris (rocks, sediment, or wood) presents a danger to the integrity of the SPMD and can move the device downstream. If the stream or river carries suspended solids at regular intervals, it may be advantageous for the device to be deployed with most of the openings facing away from the direction of flow. Placing the SPMD canister behind an obstacle in flowing water may also reduce the amount of suspended solids that interact with the device in these types of systems. [ 2 ] The SPMD can be deployed in areas with low rates of flow or even in deep water areas. To ensure the safety and correct deployment of the SPMD within deep areas, it is important to attach the device to an anchor as well as a flotation device. To retrieve SPMDs from the field, a boat or a diver may be required, depending on the depth of the canister. An SPMD can concentrate chemicals from the water as well as from the sediments in the water column; therefore, it is important to know the composition of the sediment and benthic surface before deployment. To reduce interference from chemicals of unwanted sources, anchoring the SPMD at certain depths (e.g.
higher for muddy sediments in aquatic systems) can be very beneficial. [ 2 ] Biofilms may grow on the canister, which reduces the amount of contaminant collected because membrane pores become covered. In marine systems, a common problem involves barnacles growing on and in the canister, which can moderately to greatly reduce the amount of contaminant collected, as well as make it difficult to retrieve the devices. The advantage of working with an SPMD passive sampler, as opposed to a normal field test with an organism, is that SPMDs are able to be deployed in extremely toxic waters that might be too toxic for an organism to live in, or that are simply not inhabited by sessile filter feeders. [ 3 ] The design of the SPMD makes it imitate the process of accumulating contaminants the way a mussel or oyster would, but without the issues of mortality or metabolism of any contaminants that may be present. [ 3 ] They can also be deployed for a long period of time and can account for surge runoff events, chemical spills, or other abnormal pollution events. The physical structure of an SPMD, with its stainless steel covering, protects it and allows it to be suspended on a sessile anchor in the water column. [ 3 ] The main advantage of the SPMD over a different passive sampler, the polar organic chemical integrative sampler ( POCIS ), is that SPMDs will detect contaminants that have not fully dissolved in the water. [ 3 ] Although SPMDs have been around since the mid-1990s, they are still relatively new in the toxicology world and are still being studied as reliable forms of data collection. Because they are sessile, they do not always paint an accurate picture of the environment, because many organisms in the water are mobile and can move away from contamination. Since the SPMD is made of a low-density polyethylene membrane, it is transparent to UVA and UVB radiation. As a result, chemicals that are sensitive to light, like PAHs, can degrade before correct concentrations are measured. SPMDs are designed to accumulate low-level concentrations of chemicals, and those that are exposed to air for more than 30 minutes can concentrate airborne pollutants. Surface waters that are covered with oils or other layers must be disturbed, and the water cleared, before the SPMD can be placed into the water; otherwise false data will be collected. [ 1 ] In addition, while an SPMD can account for surge events of contaminants, it is difficult to determine when such an event took place during the sampling period because the SPMD does not track time. [ 3 ] Another large disadvantage is that an SPMD will not be able to detect contaminants that readily dissolve in water, whereas a device like POCIS can. [ 3 ] Two types of information are provided by passive samplers: the concentration of contaminant inside the sampler and a predicted concentration of the contaminant in the water surrounding the sampler. The concentration of chemical inside the sampler is determined from SPMD dialysis, and the water concentration from a set of calculations based on the dialysis results and sampling methods. [ 3 ] The dialysis method starts with a thorough cleaning of the device to remove salts. It is then submerged in hexane and incubated for 18 to 24 hours. This process is done twice, and the hexane from both dialysis periods is combined. The samples are then processed by an analytical chemistry lab to determine the content of the mixture. [ 3 ] As laid out by the USEPA , the dissolved concentration outside the sampler can be predicted by the following equation: Cw is the dissolved concentration in the water (in μg/L); Cps is the concentration of the contaminant in the SPMD (μg contaminant/g sampler); Kow is the phase-partitioning coefficient (L/kg); and 1000 is a multiplier to correct for a change in units.
[ 4 ] This equation is for general passive samplers, while a more refined equation includes the length of time the SPMD was monitoring: Cw is the concentration of contaminant in the water; Cspmd is the concentration in the SPMD (usually ng/L); Vspmd is the volume of the SPMD (usually L); Rs is the effective sampling rate (L/day); and t is the time of deployment (day). [ 5 ] SPMDs are currently being used by the United States Environmental Protection Agency (EPA) as a tool to assess management strategies of contaminants in water and sediments. At a Superfund cleanup site in South Carolina, three versions of an SPMD were used to measure PCBs: one was kept in contact with surface sediments; a second was suspended at the water-sediment interface; and a third in the water column. [ 6 ] In June 2005, at a Superfund site in North Providence, Rhode Island , SPMDs were deployed in six locations. They were set in the water column and sediment for a 27-day exposure to test for TCDD, furans, dioxins, and volatile organic compounds . [ 7 ]
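The time-integrated relationship described above, Cw = (Cspmd · Vspmd) / (Rs · t), can be sketched as a small helper. The function name, variable names, and example numbers below are illustrative, not taken from the cited studies.

```python
# A sketch of the time-integrated estimate described above:
#   Cw = (Cspmd * Vspmd) / (Rs * t)
# Names and example values are illustrative, not from the cited sources.

def spmd_water_concentration(c_spmd, v_spmd, rs, t_days):
    """Estimate the dissolved concentration in the surrounding water.

    c_spmd : concentration of contaminant in the SPMD (e.g. ng/L)
    v_spmd : volume of the SPMD (L)
    rs     : effective sampling rate (L/day)
    t_days : deployment time (days)
    """
    return (c_spmd * v_spmd) / (rs * t_days)

# e.g. 500 ng/L accumulated in a 5 mL sampler, Rs = 4 L/day, 30-day deployment
print(spmd_water_concentration(500.0, 0.005, 4.0, 30.0))  # ng/L in the water
```

Note that the estimate assumes uptake stayed in the linear (integrative) regime for the whole deployment; near equilibrium the simple division overstates dilution.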
https://en.wikipedia.org/wiki/Semipermeable_membrane_device
Semiquinones (or ubisemiquinones , if their origin is ubiquinone ) are free radicals resulting from the removal of one hydrogen atom with its electron during the process of dehydrogenation of a hydroquinone, such as hydroquinone itself or catechol , to a quinone, or alternatively from the addition of a single hydrogen atom with its electron to a quinone. [ 1 ] Semiquinones are highly unstable. For example, ubisemiquinone is the first of two stages in reducing the supplementary form of CoQ 10 (ubiquinone) to its active form ubiquinol .
https://en.wikipedia.org/wiki/Semiquinone
In mathematics , a linear operator T : V → V on a vector space V is semisimple if every T - invariant subspace has a complementary T -invariant subspace. [ 1 ] If T is a semisimple linear operator on V , then V is a semisimple representation of T . Equivalently, a linear operator is semisimple if its minimal polynomial is a product of distinct irreducible polynomials . [ 2 ] A linear operator on a finite-dimensional vector space over an algebraically closed field is semisimple if and only if it is diagonalizable . [ 1 ] [ 3 ] Over a perfect field , the Jordan–Chevalley decomposition expresses an endomorphism x : V → V {\displaystyle x:V\to V} as a sum of a semisimple endomorphism s and a nilpotent endomorphism n such that both s and n are polynomials in x .
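The minimal-polynomial criterion can be tested mechanically. The following is a sketch using the sympy library (assumed available): over an algebraically closed field, T is semisimple exactly when the squarefree part of its characteristic polynomial annihilates T, which is equivalent to the minimal polynomial having no repeated irreducible factors.

```python
from sympy import Matrix, Poly, symbols, gcd, div

x = symbols('x')

def is_semisimple(A):
    """Check semisimplicity of a square sympy Matrix over the algebraic closure.

    T is semisimple iff the squarefree part (radical) of its characteristic
    polynomial annihilates T, i.e. iff the minimal polynomial of T has
    distinct irreducible factors.
    """
    p = Poly(A.charpoly(x).as_expr(), x)
    radical, _ = div(p, gcd(p, p.diff(x)))  # squarefree part of charpoly
    n = A.shape[0]
    result = Matrix.zeros(n)
    for c in radical.all_coeffs():          # Horner evaluation of radical at A
        result = result * A + c * Matrix.eye(n)
    return result == Matrix.zeros(n)

D = Matrix([[2, 0], [0, 3]])  # diagonal, hence semisimple
N = Matrix([[0, 1], [0, 0]])  # nonzero nilpotent: minimal polynomial x^2
print(is_semisimple(D), is_semisimple(N))  # True False
```

A real rotation matrix such as [[0, −1], [1, 0]] also passes the check: it is not diagonalizable over the reals, but its minimal polynomial x² + 1 is squarefree, matching the algebraically-closed-field criterion.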
https://en.wikipedia.org/wiki/Semisimple_operator
Semisynthesis , or partial chemical synthesis , is a type of chemical synthesis that uses chemical compounds isolated from natural sources (such as microbial cell cultures or plant material) as the starting materials to produce novel compounds with distinct chemical and medicinal properties. The novel compounds generally have a high molecular weight or a complex molecular structure, more so than those produced by total synthesis from simple starting materials. Semisynthesis is a means of preparing many medicines more cheaply than by total synthesis since fewer chemical steps are necessary. Drugs derived from natural sources are commonly produced either by isolation from their natural source or, as described here, through semisynthesis of an isolated agent. From the perspective of chemical synthesis , living organisms act as highly efficient chemical factories, capable of producing structurally complex compounds through biosynthesis . In contrast, engineered chemical synthesis, although powerful, tends to be simpler and less chemically diverse than the complex biosynthetic pathways essential to life. Due to these differences, certain functional groups are easier to synthesize using engineered chemical methods, such as acetylation . However, biological pathways are often able to generate complex groups and structures with minimal economic input, making certain biosynthetic processes far more efficient than total synthesis for producing complex molecules. This efficiency drives the preference for natural sources in the preparation of certain compounds, especially when synthesizing them from simpler molecules would be cost-prohibitive. Plants , animals , fungi , and bacteria are all valuable sources of complex precursor molecules , with bioreactors representing an intersection of biological and engineered synthesis. 
In drug discovery, semisynthesis is employed to retain the medicinal properties of a natural compound while modifying other molecular characteristics—such as adverse effects or oral bioavailability —in just a few chemical steps. Semisynthesis contrasts with total synthesis , which constructs the target molecule entirely from inexpensive, low-molecular-weight precursors, often petrochemicals or minerals. [ 3 ] While there is no strict boundary between total synthesis and semisynthesis, they differ primarily in the degree of engineered synthesis employed. Complex or fragile functional groups are often more cost-effective to extract directly from an organism than to prepare from simpler precursors, making semisynthesis the preferred approach for complex natural products. Practical applications of semisynthesis include the groundbreaking isolation of the antibiotic chlortetracycline and the subsequent semisynthesis of antibiotics such as tetracycline , doxycycline, and tigecycline . [ 4 ] [ 5 ] Other notable examples include the early commercial production of the anti-cancer agent paclitaxel from 10-deacetylbaccatin, isolated from Taxus baccata (European yew), [ 1 ] the semisynthesis of LSD from ergotamine (derived from fungal cultures of ergot ), [ citation needed ] and the preparation of the antimalarial drug artemether from the naturally occurring compound artemisinin . [ 2 ] [ non-primary source needed ] As synthetic chemistry advances, transformations that were previously too costly or difficult to achieve become more feasible, influencing the economic viability of semisynthetic routes. [ 3 ]
https://en.wikipedia.org/wiki/Semisynthesis
The versine or versed sine is a trigonometric function found in some of the earliest ( Sanskrit Aryabhatia , [ 1 ] Section I) trigonometric tables . The versine of an angle is 1 minus its cosine . There are several related functions, most notably the coversine and haversine . The latter, half a versine, is of particular importance in the haversine formula of navigation. The versine [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] or versed sine [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] is a trigonometric function already appearing in some of the earliest trigonometric tables. It is symbolized in formulas using the abbreviations versin , sinver , [ 13 ] [ 14 ] vers , or siv . [ 15 ] [ 16 ] In Latin , it is known as the sinus versus (flipped sine), versinus , versus , or sagitta (arrow). [ 17 ] Expressed in terms of the common trigonometric functions sine, cosine, and tangent, the versine is equal to versin θ = 1 − cos θ = 2 sin²(θ/2) = sin θ tan(θ/2) {\displaystyle \operatorname {versin} \theta =1-\cos \theta =2\sin ^{2}{\frac {\theta }{2}}=\sin \theta \,\tan {\frac {\theta }{2}}} There are several related functions corresponding to the versine. Special tables were also made of half of the versed sine, because of its particular use in the haversine formula used historically in navigation : hav θ = sin²(θ/2) = (1 − cos θ)/2 {\displaystyle {\text{hav}}\ \theta =\sin ^{2}\left({\frac {\theta }{2}}\right)={\frac {1-\cos \theta }{2}}} The ordinary sine function ( see note on etymology ) was sometimes historically called the sinus rectus ("straight sine"), to contrast it with the versed sine ( sinus versus ). [ 31 ] The meaning of these terms is apparent if one looks at the functions in the original context for their definition, a unit circle : For a vertical chord AB of the unit circle, the sine of the angle θ (representing half of the subtended angle Δ ) is the distance AC (half of the chord).
On the other hand, the versed sine of θ is the distance CD from the center of the chord to the center of the arc. Thus, the sum of cos( θ ) (equal to the length of line OC ) and versin( θ ) (equal to the length of line CD ) is the radius OD (with length 1). Illustrated this way, the sine is vertical ( rectus , literally "straight") while the versine is horizontal ( versus , literally "turned against, out-of-place"); both are distances from C to the circle. This figure also illustrates the reason why the versine was sometimes called the sagitta , Latin for arrow . [ 17 ] [ 30 ] If the arc ADB of the double-angle Δ = 2 θ is viewed as a " bow " and the chord AB as its "string", then the versine CD is clearly the "arrow shaft". In further keeping with the interpretation of the sine as "vertical" and the versed sine as "horizontal", sagitta is also an obsolete synonym for the abscissa (the horizontal axis of a graph). [ 30 ] In 1821, Cauchy used the terms sinus versus ( siv ) for the versine and cosinus versus ( cosiv ) for the coversine. [ 15 ] [ 16 ] [ nb 1 ] As θ goes to zero, versin( θ ) is the difference between two nearly equal quantities, so a user of a trigonometric table for the cosine alone would need a very high accuracy to obtain the versine in order to avoid catastrophic cancellation , making separate tables for the latter convenient. [ 12 ] Even with a calculator or computer, round-off errors make it advisable to use the sin 2 formula for small θ . Another historical advantage of the versine is that it is always non-negative, so its logarithm is defined everywhere except for the single angle ( θ = 0, 2 π , …) where it is zero—thus, one could use logarithmic tables for multiplications in formulas involving versines. 
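The identities quoted earlier and the cancellation hazard described above are easy to verify numerically with the standard library (this sketch is illustrative):

```python
import math

# Illustrative check (standard library only) of the identities quoted
# earlier, and of the cancellation problem described above.

def versin(theta):
    """versin θ = 1 − cos θ."""
    return 1.0 - math.cos(theta)

# versin θ = 2 sin²(θ/2) = sin θ · tan(θ/2)
for theta in (0.1, 0.7, 1.3, 2.9):
    assert abs(versin(theta) - 2.0 * math.sin(theta / 2.0) ** 2) < 1e-12
    assert abs(versin(theta) - math.sin(theta) * math.tan(theta / 2.0)) < 1e-12

# For small θ, 1 − cos θ subtracts two nearly equal numbers and loses
# precision, while the algebraically equal 2 sin²(θ/2) stays accurate
# (the true value is ≈ θ²/2).
theta = 1e-8
print(versin(theta))                     # cancellation: cos θ rounds to 1
print(2.0 * math.sin(theta / 2.0) ** 2)  # ≈ 5e-17, accurate
```

This is exactly why separate versine and haversine tables were worth printing: the directly tabulated value avoids the subtraction of nearly equal cosine entries.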
In fact, the earliest surviving table of sine (half- chord ) values (as opposed to the chords tabulated by Ptolemy and other Greek authors), calculated from the Surya Siddhanta of India dated back to the 3rd century BC, was a table of values for the sine and versed sine (in 3.75° increments from 0 to 90°). [ 31 ] The versine appears as an intermediate step in the application of the half-angle formula sin²(θ/2) = 1/2 · versin(θ), derived by Ptolemy , that was used to construct such tables. The haversine, in particular, was important in navigation because it appears in the haversine formula , which is used to reasonably accurately compute distances on an astronomic spheroid (see issues with the Earth's radius vs. sphere ) given angular positions (e.g., longitude and latitude ). One could also use sin²(θ/2) directly, but having a table of the haversine removed the need to compute squares and square roots. [ 12 ] An early utilization by José de Mendoza y Ríos of what later would be called haversines is documented in 1801. [ 14 ] [ 32 ] The first known English equivalent to a table of haversines was published by James Andrew in 1805, under the name "Squares of Natural Semi-Chords". [ 33 ] [ 34 ] [ 17 ] In 1835, the term haversine (notated naturally as hav. or base-10 logarithmically as log. haversine or log. havers. ) was coined [ 35 ] by James Inman [ 14 ] [ 36 ] [ 37 ] in the third edition of his work Navigation and Nautical Astronomy: For the Use of British Seamen to simplify the calculation of distances between two points on the surface of the Earth using spherical trigonometry for applications in navigation. [ 3 ] [ 35 ] Inman also used the terms nat. versine and nat. vers. for versines. [ 3 ] Other highly regarded tables of haversines were those of Richard Farley in 1856 [ 33 ] [ 38 ] and John Caulfield Hannyngton in 1876.
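The navigational use described above can be sketched in a few lines. The mean Earth radius and the example coordinates below are illustrative assumptions, not values from the text:

```python
import math

# A standard haversine great-circle distance. The mean Earth radius and the
# example coordinates are illustrative assumptions.

EARTH_RADIUS_KM = 6371.0

def haversine_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # hav(c) = hav(Δφ) + cos φ1 · cos φ2 · hav(Δλ), with hav θ = sin²(θ/2)
    h = math.sin(dphi / 2.0) ** 2 \
        + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2.0) ** 2
    return 2.0 * EARTH_RADIUS_KM * math.asin(math.sqrt(h))

# Roughly London to Paris (on the order of a few hundred km)
print(haversine_distance(51.5074, -0.1278, 48.8566, 2.3522))
```

Working through hav(c) rather than cos(c) is the modern floating-point analogue of the historical motivation: it stays accurate for the small central angles typical of navigation.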
[ 33 ] [ 39 ] The haversine continues to be used in navigation and has found new applications in recent decades, as in Bruce D. Stark's method for clearing lunar distances utilizing Gaussian logarithms since 1995 [ 40 ] [ 41 ] or in a more compact method for sight reduction since 2014. [ 29 ] While the usage of the versine, coversine and haversine as well as their inverse functions can be traced back centuries, the names for the other five cofunctions appear to be of much younger origin. One period (0 < θ < 2 π ) of a versine or, more commonly, a haversine waveform is also commonly used in signal processing and control theory as the shape of a pulse or a window function (including Hann , Hann–Poisson and Tukey windows ), because it smoothly ( continuous in value and slope ) "turns on" from zero to one (for haversine) and back to zero. [ nb 2 ] In these applications, it is named Hann function or raised-cosine filter . The functions are circular rotations of each other. Inverse functions like arcversine (arcversin, arcvers, [ 8 ] avers, [ 43 ] [ 44 ] aver), arcvercosine (arcvercosin, arcvercos, avercos, avcs), arccoversine (arccoversin, arccovers, [ 8 ] acovers, [ 43 ] [ 44 ] acvs), arccovercosine (arccovercosin, arccovercos, acovercos, acvc), archaversine (archaversin, archav, haversin −1 , [ 45 ] invhav, [ 46 ] [ 47 ] [ 48 ] ahav, [ 43 ] [ 44 ] ahvs, ahv, hav −1 [ 49 ] [ 50 ] ), archavercosine (archavercosin, archavercos, ahvc), archacoversine (archacoversin, ahcv) or archacovercosine (archacovercosin, archacovercos, ahcc) exist as well. These functions can be extended into the complex plane . [ 42 ] [ 19 ] [ 24 ] Maclaurin series : [ 24 ] When the versine v is small in comparison to the radius r , it may be approximated from the half-chord length L (the distance AC shown above) by the formula [ 51 ] v ≈ L²/(2 r ) {\displaystyle v\approx {\frac {L^{2}}{2r}}.} Alternatively, if the versine is small and the versine, radius, and half-chord length are known, they may be used to estimate the arc length s ( AD in the figure above) by the formula s ≈ L + v ²/ r {\displaystyle s\approx L+{\frac {v^{2}}{r}}} This formula was known to the Chinese mathematician Shen Kuo , and a more accurate formula also involving the sagitta was developed two centuries later by Guo Shoujing . [ 52 ] A more accurate approximation used in engineering [ 53 ] is v ≈ s^{3/2} L^{1/2} / (8 r ) {\displaystyle v\approx {\frac {s^{\frac {3}{2}}L^{\frac {1}{2}}}{8r}}} The term versine is also sometimes used to describe deviations from straightness in an arbitrary planar curve, of which the above circle is a special case. Given a chord between two points in a curve, the perpendicular distance v from the chord to the curve (usually at the chord midpoint) is called a versine measurement. For a straight line, the versine of any chord is zero, so this measurement characterizes the straightness of the curve. In the limit as the chord length L goes to zero, the ratio 8 v / L ² goes to the instantaneous curvature . This usage is especially common in rail transport , where it describes measurements of the straightness of the rail tracks [ 54 ] and it is the basis of the Hallade method for rail surveying . The term sagitta (often abbreviated sag ) is used similarly in optics , for describing the surfaces of lenses and mirrors .
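The small-versine approximation v ≈ L²/(2r) can be compared against the exact circular value v = r − √(r² − L²); the radius and half-chord values below are arbitrary illustrations:

```python
import math

# Compares the approximation v ≈ L²/(2r) quoted above with the exact
# versine of a circular arc, v = r − sqrt(r² − L²). Values are arbitrary.

def versine_exact(r, half_chord):
    """Exact sagitta of a circular arc with radius r and half-chord L."""
    return r - math.sqrt(r * r - half_chord * half_chord)

def versine_approx(r, half_chord):
    """Small-versine approximation v ≈ L² / (2r)."""
    return half_chord ** 2 / (2.0 * r)

r, L = 1000.0, 20.0  # e.g. rail-survey scale: 1 km radius, 20 m half-chord
print(versine_exact(r, L), versine_approx(r, L))  # agree to roughly 0.01%
```

Note that the exact formula itself suffers the same cancellation problem discussed for 1 − cos θ when L ≪ r, which is one reason the approximation is convenient in practice.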
https://en.wikipedia.org/wiki/Semiversus
Send tracks (sometimes simply called Sends ) are the software audio routing equivalent to the aux-sends found on multitrack sound mixing/sequencing consoles. In audio recording, a given song is almost always made up of multiple tracks, with each instrument or sound on its own track (for example, one track for the drums, one for the guitar, one for a vocal, and so on). Further, each track can be separately adjusted in many ways, such as changing the volume, adding effects, and so on. This can be done with individual hardware components (working "outside the box") or via software applications known as DAWs (Digital Audio Workstations, working "inside the box"). Send tracks are tracks that aren't (normally) used to record sound on themselves, but to apply those adjustments to multiple, perhaps even all, tracks the same way. For example: if the drums are not on one track, but are instead spread out across multiple tracks (which is common), there is often the desire to treat them all the same in terms of volume, effects, etc. Instead of doing that for each track, you can set up a single send track to apply to all of them. Because a single send track can treat numerous tracks uniformly, it can save a lot of time and resources. Send tracks are also inherently more flexible than their hardware equivalent, since any number of them can be created as needed. For more complicated effect chains, send tracks also allow their output to be routed to other send tracks, which can route to further send tracks in turn. The solutions offered by most multi-track software provide musicians with an easier (although arguably less hands-on) approach to controlling sends and their respective effects on the audio.
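The routing idea can be sketched in a few lines of code (a toy model with made-up names, not any particular DAW's API): several tracks each tap part of their signal into one shared bus, a single effect processes the bus once, and the wet result is mixed back with the dry tracks.

```python
def mix_with_send(tracks, send_levels, effect, return_level=1.0):
    """tracks: equal-length lists of samples; send_levels: per-track send gains."""
    n = len(tracks[0])
    bus = [0.0] * n
    for track, level in zip(tracks, send_levels):
        for i, sample in enumerate(track):
            bus[i] += level * sample              # tap each track into the send bus
    wet = effect(bus)                             # the effect runs once, not per track
    dry = [sum(t[i] for t in tracks) for i in range(n)]
    return [d + return_level * w for d, w in zip(dry, wet)]

def halve(samples):
    """Stand-in 'effect' (a real send bus might host a reverb instead)."""
    return [0.5 * s for s in samples]

# Two drum tracks share one send, so both reach the same effect with one setting.
drums_left = [1.0, 0.0, 0.0]
drums_right = [0.0, 1.0, 0.0]
print(mix_with_send([drums_left, drums_right], [0.8, 0.8], halve))
```

Changing the effect or the send levels in one place changes the treatment of every routed track, which is the economy the text describes.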
https://en.wikipedia.org/wiki/Send_track
Sending , or to send , is the action of conveying or directing something or someone to another physical, virtual, or conceptual location for a specific purpose. The initiator of the action of sending is the sender . With respect to humans, "sending" also encompasses instructing others to go to another physical location, whether voluntarily or by force. Sending is generally an act of volition , requiring the intent and purpose of the sender to cause a thing to be sent. English language authority James C. Fernald , in his 1896 English Synonyms and Antonyms, with Notes on the Correct Use of Prepositions , provided a lengthy examination of concepts falling within the rubric of sending: [ 1 ] To send is to cause to go or pass from one place to another, and always in fact or thought away from the agent or agency that controls the act. Send in its most common use involves personal agency without personal presence; according to the adage, "If you want your business done, go; if not, send "; one sends a letter or a bullet, a messenger or a message. In all the derived uses this same idea controls; if one sends a ball into his own heart, the action is away from the directing hand, and he is viewed as the passive recipient of his own act; it is with an approach to personification that we speak of the bow sending the arrow, or the gun the shot. To despatch is to send hastily or very promptly, ordinarily with a destination in view; to dismiss is to send away from oneself without reference to a destination; as, to dismiss a clerk, an application, or an annoying subject. To discharge is to send away so as to relieve a person or thing of a load; we discharge a gun or discharge the contents; as applied to persons, discharge is a harsher term than dismiss . To emit is to send forth from within, with no reference to a destination; as, the sun emits light and heat. 
Transmit , from the Latin, is a dignified term, often less vigorous than the Saxon send , but preferable at times in literary or scientific use; as, to transmit the crown, or the feud, from generation to generation; to transmit a charge of electricity. Transmit fixes the attention more on the intervening agency, as send does upon the points of departure and destination. [ 1 ] A message may be sent by physical means of conveyance, such as mail, or by electronic means, such as email and texting. The practice of communication by written documents carried by an intermediary from one person or place to another almost certainly dates back nearly to the invention of writing. However, the development of formal postal systems occurred much later. The first documented use of an organized courier service for the dissemination of written documents is in Egypt, where Pharaohs used couriers to send out decrees throughout the territory of the state (2400 BCE). [ 2 ] The earliest surviving piece of mail is also Egyptian, dating to 255 BCE. [ 3 ] The phrase "send a message" or "sending a message" is also used with respect to actions taken by a party to convey that party's attitude towards a certain thing. For example, a government that executes people who commit acts of treason can be said to be "sending a message" that treason will not be tolerated. [ 4 ] Conversely, a party that appears through its actions to endorse something that it actually opposes can be said to be "sending the wrong message", [ 4 ] while one which appears to simultaneously endorse contradictory things can be said to be "sending mixed messages". [ 5 ] The sending of mixed messages is a common source of miscommunication, particularly where the words of a message convey one thing, but accompanying nonverbal cues convey another.
[ 6 ] Mixed messages are also common in dating, where one member of a potential romantic couple may appear at different times receptive or dismissive of the pursuit of a relationship, for a variety of reasons including obliviousness to the likely interpretation of communications, internal uncertainty about pursuit of a relationship, or deliberate efforts to "appear cool and coy". [ 6 ] Communications are not necessarily things that are sent at all. An alternative to sending a communication somewhere is to create a sufficiently durable means of conveying the communication, such as carving or painting on a surface, or sculpting a three-dimensional representation, and placing it where persons arriving at that location will receive the communication. Some scientists have proposed the possibility of using quantum effects to convey messages without "sending" information at all, [ 7 ] though this proposition depends on a semantic distinction between different meanings of the word "sending". Physical items or objects can similarly be sent from one place to another for a wide variety of reasons, for the benefit of the sender, the recipient, or others. Things may be sent by a merchant in response to a remote purchase, or as a gift. International trade is primarily focused on the sending of goods from one place to another. Packaging, containerization and the like have developed to help facilitate the sending of cargo. Items as well as messages may be sent through the services of a courier or a post office. The sending of objects as gifts may involve multiple models of sending. For example, if a person orders a gift for another through a third-party website, from a social perspective the person making the order is sending the gift, while from the physical and economic perspective, it is the third-party website, or a vendor doing business with it, that is sending the item to the recipient. Sending of small objects is done through package delivery or parcel delivery.
The service is provided by most postal systems , express mail , private courier companies, and less than truckload shipping carriers. [ 8 ] With respect to sending large items such as pieces of furniture , specialized less-than-truckload shipping carriers handle shipping furniture and other heavy goods from the manufacturer to a last mile hub. [ 9 ] [ 10 ] The last mile problem can also include the challenge of making deliveries in urban areas. Deliveries to retail stores, restaurants, and other merchants in a central business district often contribute to congestion and safety problems. [ 9 ] [ 11 ] A person or group of people can be sent to places for various reasons, and the fact of one person sending another person somewhere often indicates that the person sent was not sent of their own volition. For example, persons who engage in disfavored conduct may be sent to prison or detention , expelled from a school, banished from a place, or sent to a remote or inhospitable place. An unruly or unwanted child may be sent to a boarding school , or to live with a different family. Conversely, people may volunteer or even campaign to be sent places in order to explore, or achieve some personal benefit or public good. In some cases a person might be sent away to protect them from danger, without a specific destination being determined in advance. The sending of military personnel to positions from which they can prepare for or engage in combat is called deployment . The word "deploy" can be used in multiple senses within this framework, so that "it could mean, on the one hand, the sending of troops forward from their peacetime bases. The Navy, for example, calls extended cruises 'deployments' even when no combat operations are anticipated. In another sense, it might be countered that 'deploying the troops' means sending them onto the field of battle from their forward staging bases". 
[ 12 ] Many religions incorporate beliefs in a supreme being "sending" things in a variety of ways, including sending messengers or prophets, and sending people (or components of people, such as souls) to specific afterlives. In some religions this raises the question of why a benevolent god would send souls to afterlives of eternal torment, which is resolved by claiming that the condemned souls have actually chosen to send themselves to that afterlife. [ 13 ]
https://en.wikipedia.org/wiki/Sending
Sendust is a magnetic metal powder that was invented by Hakaru Masumoto at Tohoku Imperial University in Sendai, Japan circa 1936 as an alternative to permalloy in inductor applications for telephone networks. Sendust composition is typically 85% iron, 9% silicon and 6% aluminium. The powder is sintered into cores to manufacture inductors. Sendust cores have high magnetic permeability (up to 140 000) [ clarification needed ] , low loss, low coercivity (5 A/m), good temperature stability and saturation flux density up to 1 T [ clarification needed ] . Due to its chemical composition and crystallographic structure, Sendust exhibits simultaneously zero magnetostriction and zero magnetocrystalline anisotropy constant K₁. Sendust is harder than permalloy, and is thus useful in abrasive-wear applications such as magnetic recording heads .
https://en.wikipedia.org/wiki/Sendust
The Z-Mill or Sendzimir Mill is a machine for rolling steels . [ 1 ] [ 2 ] Unlike a traditional rolling mill, this 20-high cluster mill configuration utilizes cascaded supporting rolls to apply force on the small work rolls in the center. This allows the application of a higher roll pressure without bending the work rolls, which would result in poor metal quality. Thus very hard and elastic materials can be rolled. An early version of a Sendzimir Mill, one with fewer back-up rollers, was installed in the Old Park Silver Mills in Sheffield, England. Its primary function was to roll sheet nickel silver for the city's renowned cutlery and flatware trades. The installation was frequently cited in the Metallurgy degree course at Sheffield University. The Old Park Mills held a key place in the Industrial Revolution. They were originally laid down by Joseph Hancock in 1762–65 to produce Old Sheffield Plate (a thinner layer of silver fused onto thicker copper, then rolled into sheets) for the emerging Sheffield silverware industry. As such, the factory was probably one of the first in the world solely producing a semi-manufacture for other businesses. The original Old Sheffield Plate, now much sought-after by collectors, lasted for almost a century until it was superseded by electroplate technology in the mid-nineteenth century. Then the Old Park business switched to rolling nickel silver which, after manufacture, was electroplated. Details of the Old Park Silver Mills and the silverware industry are explained in Mary Walton's book "Sheffield: its Story and its Achievements", pp. 116–120. Although the book was published in 1948 by the "Sheffield Telegraph", it remains an indispensable source on the city's history. The evolution to the current 20-High Cluster Sendzimir Mill from the early 12-High configuration [ 1 ] was pioneered by Tadeusz Sendzimir in the early 1930s. The designs for the Sendzimir Mill were patented. [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Sendzimir_mill
The Seneca effect , or Seneca cliff or Seneca collapse , is a mathematical model proposed by Ugo Bardi to describe situations where a system's rate of decline is much sharper than its earlier rate of growth. In 2017, Bardi published a book titled The Seneca Effect: When Growth is Slow but Collapse is Rapid , named after the Roman philosopher and writer Seneca , who wrote "Fortune is of sluggish growth, but ruin is rapid" ( Letters to Lucilius , 91.6): [ 1 ] Whatever structure has been reared by a long sequence of years, at the cost of great toil and through the great kindness of the gods, is scattered and dispersed by a single day. Nay, he who has said "a day" has granted too long a postponement to swift-coming misfortune; an hour, an instant of time, suffices for the overthrow of empires! It would be some consolation for the feebleness of our selves and our works, if all things should perish as slowly as they come into being; but as it is, increases are of sluggish growth, but the way to ruin is rapid. Bardi's book looked at cases of rapid decline across societies (including the fall of empires, financial crises, and major famines), in nature (including avalanches), and through man-made systems (including cracks in metal objects). Bardi concluded that rapid collapse is not a flaw, or "bug" as he terms it, but a "varied and ubiquitous phenomena" with multiple causes and resultant pathways. The collapse of a system can often clear the path for new, and better adapted, structures. [ 2 ] In a 2019 book titled Before the Collapse: A Guide to the Other Side of Growth , Bardi describes a "Seneca Rebound" that often takes place where new systems replace the collapsed system, often at a rate faster than preceding growth rates, as the collapse has eliminated many of the impediments or constraints of the previous system. [ 2 ] The "Seneca effect" model is related to the " World3 " model from the 1972 report The Limits to Growth , issued by the Club of Rome .
[ 1 ] [ 3 ] One of the model's main practical applications has been to describe the outcomes of a global shortage of fossil fuels . [ 1 ] Unlike the symmetrical Hubbert curve fossil fuel model, the Seneca cliff model shows material asymmetry , where the global rate of decline in fossil fuel production is far steeper than forecast by the Hubbert curve. [ 4 ] The term has also been used to describe rapid declines in businesses that had grown for decades, with the rapid post-2005 decline and resultant bankruptcy of Kodak as a cited example. [ 5 ]
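The shape Bardi describes can be illustrated with a toy overshoot-and-collapse simulation in the spirit of the World3-style models mentioned above (these equations and parameter values are invented for illustration and are not Bardi's): an economy C grows by drawing down a finite resource R, while accumulated pollution P drags it down.

```python
def simulate(steps=20000, dt=0.01):
    """Euler-integrate a toy resource/economy/pollution system (illustrative only)."""
    R, C, P = 1.0, 0.01, 0.0
    history = []
    for _ in range(steps):
        production = 0.5 * R * C           # output drawn from the finite resource
        decline = 0.1 * C + 2.0 * C * P    # depreciation plus pollution damage
        R -= production * dt
        P += 0.05 * C * dt                 # pollution accumulates as a by-product
        C += (production - decline) * dt
        history.append(C)
    return history

history = simulate()
peak = max(history)
peak_step = history.index(peak)
above_half = [i for i, c in enumerate(history) if c > peak / 2.0]
rise = peak_step - above_half[0]    # steps above half-peak while still growing
fall = above_half[-1] - peak_step   # steps above half-peak while declining
print(f"peak C = {peak:.3f} at step {peak_step}; rise = {rise} steps, fall = {fall} steps")
```

With these made-up parameters the economy grows, peaks, and then collapses to near zero well before the end of the run; printing the rise and fall counts lets one compare the two sides of the curve directly.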
https://en.wikipedia.org/wiki/Seneca_effect
Senescence (/sɪˈnɛsəns/) or biological aging is the gradual deterioration of functional characteristics in living organisms. Whole-organism senescence involves an increase in death rates or a decrease in fecundity with increasing age, at least in the later part of an organism's life cycle . [ 1 ] [ 2 ] However, the resulting effects of senescence can be delayed. The 1934 discovery that calorie restriction can extend lifespans by 50% in rats, the existence of species having negligible senescence , and the existence of potentially immortal organisms such as members of the genus Hydra have motivated research into delaying senescence and thus age-related diseases . Rare human mutations can cause accelerated aging diseases . Environmental factors may affect aging – for example, overexposure to ultraviolet radiation accelerates skin aging . Different parts of the body may age at different rates and in distinct ways, including the brain , the cardiovascular system , and muscle. Similarly, different functions may decline with aging, including movement control and memory . Two organisms of the same species can also age at different rates, making biological aging and chronological aging distinct concepts. Organismal senescence is the aging of whole organisms. Actuarial senescence can be defined as an increase in mortality or a decrease in fecundity with age. The Gompertz–Makeham law of mortality says that the age-dependent component of the mortality rate increases exponentially with age. Aging is characterized by the declining ability to respond to stress, increased homeostatic imbalance, and increased risk of aging-associated diseases including cancer and heart disease . Aging has been defined as "a progressive deterioration of physiological function, an intrinsic age-related process of loss of viability and increase in vulnerability."
[ 3 ] In 2013, a group of scientists defined nine hallmarks of aging that are common between organisms, with emphasis on mammals; in a decadal update, three hallmarks were added, for a total of 12 proposed hallmarks. The environment induces damage at various levels, e.g. damage to DNA , and damage to tissues and cells by oxygen radicals (widely known as free radicals ), and some of this damage is not repaired and thus accumulates with time. [ 6 ] Cloning from somatic cells rather than germ cells may begin life with a higher initial load of damage. Dolly the sheep died young from a contagious lung disease, but data on an entire population of cloned individuals would be necessary to measure mortality rates and quantify aging. [ citation needed ] The evolutionary theorist George Williams wrote, "It is remarkable that after a seemingly miraculous feat of morphogenesis , a complex metazoan should be unable to perform the much simpler task of merely maintaining what is already formed." [ 7 ] Different speeds with which mortality increases with age correspond to different maximum life spans among species . For example, a mouse is elderly at 3 years, a human is elderly at 80 years, [ 8 ] and ginkgo trees show little effect of age even at 667 years. [ 9 ] Almost all organisms senesce, including bacteria, which have asymmetries between "mother" and "daughter" cells upon cell division , with the mother cell experiencing aging while the daughter is rejuvenated. [ 10 ] [ 11 ] There is negligible senescence in some groups, such as the genus Hydra . [ 12 ] Planarian flatworms have "apparently limitless telomere regenerative capacity fueled by a population of highly proliferative adult stem cells ." [ 13 ] These planarians are not biologically immortal , but rather their death rate slowly increases with age.
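The Gompertz–Makeham law mentioned above is easy to state in code. A minimal sketch of the hazard μ(x) = A + B·e^(γx), using illustrative parameter values not fitted to any real life table:

```python
import math

A = 2e-4      # Makeham term: age-independent (extrinsic) mortality
B = 3e-5      # scale of the age-dependent Gompertz term
g = 0.085     # growth rate of the Gompertz term per year of age

def hazard(age):
    """Gompertz-Makeham force of mortality: mu(age) = A + B*exp(g*age)."""
    return A + B * math.exp(g * age)

for age in (20, 40, 60, 80):
    print(f"age {age}: mu = {hazard(age):.5f} per year")

# The age-dependent component doubles every ln(2)/g years (about 8.2 here),
# which is the "mortality doubling time" often quoted for such models.
doubling = math.log(2) / g
ratio = (hazard(40 + doubling) - A) / (hazard(40) - A)
print(f"mortality doubling time ~ {doubling:.1f} years; ratio = {ratio:.3f}")
```

The exponential age-dependent term quickly dominates the constant extrinsic term, which is the sense in which the age-dependent component of mortality "increases exponentially with age".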
One organism thought to be biologically immortal is Turritopsis dohrnii , also known as the "immortal jellyfish", due to its ability to revert to its youth when it undergoes stress during adulthood. [ 14 ] The reproductive system is observed to remain intact, and even the gonads of Turritopsis dohrnii remain present. [ 15 ] Some species exhibit "negative senescence", in which reproduction capability increases or is stable, and mortality falls with age, resulting from the advantages of increased body size during aging. [ 16 ] More than 300 different theories have been posited to explain the nature (mechanisms) and causes (reasons for natural emergence or factors) of aging. [ 17 ] [ additional citation(s) needed ] Good theories would both explain past observations and predict the results of future experiments. Some of the theories may complement each other, overlap, contradict, or may not preclude various other theories. [ citation needed ] Theories of aging fall into two broad categories, evolutionary theories of aging and mechanistic theories of aging. Evolutionary theories of aging primarily explain why aging happens, [ 18 ] but do not concern themselves with the molecular mechanism(s) that drive the process. All evolutionary theories of aging rest on the basic observation that the force of natural selection declines with age. [ 19 ] [ 20 ] Mechanistic theories of aging can be divided into theories that propose aging is programmed, and damage accumulation theories, i.e. those that propose aging to be caused by specific molecular changes occurring over time. One theory was proposed by George C. Williams [ 7 ] and involves antagonistic pleiotropy . A single gene may affect multiple traits. Some traits that increase fitness early in life may also have negative effects later in life.
But, because many more individuals are alive at young ages than at old ages, even small positive effects early can be strongly selected for, and large negative effects later may be very weakly selected against. Williams suggested the following example: Perhaps a gene codes for calcium deposition in bones, which promotes juvenile survival and will therefore be favored by natural selection; however, this same gene promotes calcium deposition in the arteries, causing negative atherosclerotic effects in old age. Thus, harmful biological changes in old age may result from selection for pleiotropic genes that are beneficial early in life but harmful later on. In this case, selection pressure is relatively high when Fisher's reproductive value is high and relatively low when Fisher's reproductive value is low. Senescent cells within a multicellular organism can be purged by competition between cells, but this increases the risk of cancer. This leads to an inescapable dilemma between two possibilities—the accumulation of physiologically useless senescent cells, and cancer—both of which lead to increasing rates of mortality with age. [ 2 ] The disposable soma theory of aging was proposed by Thomas Kirkwood in 1977. [ 1 ] [ 21 ] The theory suggests that aging occurs due to a strategy in which an individual only invests in maintenance of the soma for as long as it has a realistic chance of survival. [ 22 ] A species that uses resources more efficiently will live longer, and therefore be able to pass on genetic information to the next generation. The demands of reproduction are high, so less effort is invested in repair and maintenance of somatic cells, compared to germline cells , in order to focus on reproduction and species survival. [ 23 ] Programmed theories of aging posit that aging is adaptive, normally invoking selection for evolvability or group selection . 
The reproductive-cell cycle theory suggests that aging is regulated by changes in hormonal signaling over the lifespan. [ 24 ] One of the most prominent theories of aging was first proposed by Harman in 1956. [ 25 ] It posits that free radicals produced by dissolved oxygen, radiation, cellular respiration and other sources cause damage to the molecular machines in the cell and gradually wear them down. This is also known as oxidative stress . There is substantial evidence to back up this theory. Old animals have larger amounts of oxidized proteins, DNA and lipids than their younger counterparts. [ 26 ] [ 27 ] One of the earliest aging theories was the Rate of Living Hypothesis described by Raymond Pearl in 1928 [ 28 ] (based on earlier work by Max Rubner ), which states that fast basal metabolic rate corresponds to short maximum life span . While there may be some validity to the idea that, for the various types of damage detailed below that are by-products of metabolism , all other things being equal a fast metabolism may reduce lifespan, in general this theory does not adequately explain the differences in lifespan either within, or between, species. Calorically restricted animals process as many, or more, calories per gram of body mass as their ad libitum fed counterparts, yet exhibit substantially longer lifespans. [ citation needed ] Similarly, metabolic rate is a poor predictor of lifespan for birds, bats and other species that, it is presumed, have reduced mortality from predation, and therefore have evolved long lifespans even in the presence of very high metabolic rates. [ 29 ] In a 2007 analysis it was shown that, when modern statistical methods for correcting for the effects of body size and phylogeny are employed, metabolic rate does not correlate with longevity in mammals or birds.
[ 30 ] With respect to specific types of chemical damage caused by metabolism, it is suggested that damage to long-lived biopolymers , such as structural proteins or DNA , caused by ubiquitous chemical agents in the body such as oxygen and sugars , is in part responsible for aging. The damage can include breakage of biopolymer chains, cross-linking of biopolymers, or chemical attachment of unnatural substituents ( haptens ) to biopolymers. [ citation needed ] Under normal aerobic conditions, approximately 4% of the oxygen metabolized by mitochondria is converted to superoxide ion, which can subsequently be converted to hydrogen peroxide , hydroxyl radical and eventually other reactive species including other peroxides and singlet oxygen , which can, in turn, generate free radicals capable of damaging structural proteins and DNA. [ 6 ] Certain metal ions found in the body, such as copper and iron , may participate in the process. (In Wilson's disease , a hereditary defect that causes the body to retain copper, some of the symptoms resemble accelerated senescence.) These processes, termed oxidative stress , are linked to the potential benefits of dietary polyphenol antioxidants , for example in coffee [ 31 ] and tea . [ 32 ] However, their typically positive effects on lifespans when consumption is moderate [ 33 ] [ 34 ] [ 35 ] have also been explained by effects on autophagy , [ 36 ] glucose metabolism [ 37 ] and AMPK . [ 38 ] Sugars such as glucose and fructose can react with certain amino acids such as lysine and arginine and certain DNA bases such as guanine to produce sugar adducts, in a process called glycation . These adducts can further rearrange to form reactive species, which can then cross-link the structural proteins or DNA to similar biopolymers or other biomolecules such as non-structural proteins.
People with diabetes , who have elevated blood sugar , develop senescence-associated disorders much earlier than the general population, but can delay such disorders by rigorous control of their blood sugar levels. There is evidence that sugar damage is linked to oxidant damage in a process termed glycoxidation . Free radicals can damage proteins, lipids or DNA . Glycation mainly damages proteins. Damaged proteins and lipids accumulate in lysosomes as lipofuscin . Chemical damage to structural proteins can lead to loss of function; for example, damage to collagen of blood vessel walls can lead to vessel-wall stiffness and, thus, hypertension , and vessel wall thickening and reactive tissue formation ( atherosclerosis ); similar processes in the kidney can lead to kidney failure . Damage to enzymes reduces cellular functionality. Lipid peroxidation of the inner mitochondrial membrane reduces the electric potential and the ability to generate energy. It is probably no accident that nearly all of the so-called " accelerated aging diseases " are due to defective DNA repair enzymes. [ 39 ] [ 40 ] It is believed that the impact of alcohol on aging can be partly explained by alcohol's activation of the HPA axis , which stimulates glucocorticoid secretion, long-term exposure to which produces symptoms of aging. [ 41 ] DNA damage was proposed in a 2021 review to be the underlying cause of aging because of the mechanistic link of DNA damage to nearly every aspect of the aging phenotype. [ 42 ] Slower rate of accumulation of DNA damage as measured by the DNA damage marker gamma H2AX in leukocytes was found to correlate with longer lifespans in comparisons of dolphins , goats , reindeer , American flamingos and griffon vultures . [ 43 ] DNA damage-induced epigenetic alterations, such as DNA methylation and many histone modifications, appear to be of particular importance to the aging process. 
[ 42 ] Evidence for the theory that DNA damage is the fundamental cause of aging was first reviewed in 1981. [ 44 ] Natural selection can support lethal and harmful alleles , if their effects are felt after reproduction. The geneticist J. B. S. Haldane wondered why the dominant mutation that causes Huntington's disease remained in the population, and why natural selection had not eliminated it. The onset of this neurological disease is (on average) at age 45 and is invariably fatal within 10–20 years. Haldane assumed that, in human prehistory, few survived until age 45. Since few were alive at older ages and their contribution to the next generation was therefore small relative to the large cohorts of younger age groups, the force of selection against such late-acting deleterious mutations was correspondingly small. Therefore, a genetic load of late-acting deleterious mutations could be substantial at mutation–selection balance . This concept came to be known as the selection shadow . [ 45 ] Peter Medawar formalised this observation in his mutation accumulation theory of aging. [ 46 ] [ 47 ] "The force of natural selection weakens with increasing age—even in a theoretically immortal population, provided only that it is exposed to real hazards of mortality. If a genetic disaster... happens late enough in individual life, its consequences may be completely unimportant". Age-independent hazards such as predation, disease, and accidents, called ' extrinsic mortality ', mean that even a population with negligible senescence will have fewer individuals alive in older age groups. A study concluded that retroviruses in the human genome can awaken from dormant states and contribute to aging, which can be blocked by neutralizing antibodies , alleviating "cellular senescence and tissue degeneration and, to some extent, organismal aging".
[ 48 ] The stem cell theory of aging postulates that the aging process is the result of the inability of various types of stem cells to continue to replenish the tissues of an organism with functional differentiated cells capable of maintaining that tissue's (or organ's ) original function. Damage and error accumulation in genetic material is always a problem for systems regardless of age. The number of stem cells in young people is very much higher than in older people, and thus creates a better and more efficient replacement mechanism in the young compared to the old. In other words, aging is not a matter of the increase in damage, but a matter of failure to replace it due to a decreased number of stem cells. Stem cells decrease in number and tend to lose the ability to differentiate into progeny of the lymphoid and myeloid lineages. Maintaining the dynamic balance of stem cell pools requires several conditions. Balancing proliferation and quiescence, along with homing (see niche ) and self-renewal of hematopoietic stem cells , favors maintenance of the stem cell pool, while differentiation, mobilization and senescence are detrimental. These detrimental effects will eventually cause apoptosis . If different individuals age at different rates, then fecundity, mortality, and functional capacity might be better predicted by biomarkers than by chronological age. [ 58 ] [ 59 ] However, graying of hair , [ 60 ] face aging , skin wrinkles and other common changes seen with aging are not better indicators of future functionality than chronological age. Biogerontologists have continued efforts to find and validate biomarkers of aging, but success thus far has been limited. Levels of CD4 and CD8 memory T cells and naive T cells have been used to give good predictions of the expected lifespan of middle-aged mice. [ 61 ] There is interest in an epigenetic clock as a biomarker of aging, based on its ability to predict human chronological age.
[ 62 ] Basic blood biochemistry and cell counts can also be used to accurately predict chronological age. [ 63 ] It is also possible to predict human chronological age using transcriptomic aging clocks. [ 64 ] There is research and development of further biomarkers, detection systems and software systems to measure the biological age of different tissues or systems, or of the organism overall. For example, deep learning (DL) software using anatomic magnetic resonance images estimated brain age with relatively high accuracy, including detecting early signs of Alzheimer's disease and varying neuroanatomical patterns of neurological aging, [ 65 ] and a DL tool was reported to calculate a person's inflammatory age based on patterns of systemic age-related inflammation. [ 66 ] Aging clocks have been used to evaluate the impacts of interventions on humans, including combination therapies . [ 67 ] [ additional citation(s) needed ] Employing aging clocks to identify and evaluate longevity interventions represents a fundamental goal in aging biology research. However, achieving this goal requires overcoming numerous challenges and implementing additional validation steps. [ 68 ] [ 69 ] A number of genetic components of aging have been identified using model organisms, ranging from the simple budding yeast Saccharomyces cerevisiae to worms such as Caenorhabditis elegans and fruit flies ( Drosophila melanogaster ). Study of these organisms has revealed the presence of at least two conserved aging pathways. Gene expression is imperfectly controlled, and it is possible that random fluctuations in the expression levels of many genes contribute to the aging process, as suggested by a study of such genes in yeast. [ 70 ] Individual cells, which are genetically identical, nonetheless can have substantially different responses to outside stimuli and markedly different lifespans, indicating that epigenetic factors play an important role in gene expression and aging, as well as genetic factors. 
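As a rough illustration of how such aging clocks work, the sketch below fits a simple linear clock to synthetic biomarker data; real clocks (e.g. epigenetic clocks) are typically penalized regressions over hundreds of markers, and every variable and value here is invented for illustration, not taken from the studies cited above.

```python
# Minimal sketch (illustrative only) of an "aging clock": a linear
# model mapping many biomarker measurements (CpG methylation levels,
# blood-test values, transcript abundances, ...) to a predicted age.
# All data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_markers = 200, 50

# Simulate biomarkers and chronological ages that depend on them.
true_weights = rng.normal(size=n_markers)
markers = rng.normal(size=(n_samples, n_markers))
ages = 40 + markers @ true_weights + rng.normal(scale=2.0, size=n_samples)

# Fit the clock by ordinary least squares; published clocks typically
# use elastic-net regression to select a sparse panel of markers.
X = np.column_stack([np.ones(n_samples), markers])
coef, *_ = np.linalg.lstsq(X, ages, rcond=None)

predicted = X @ coef
mae = float(np.mean(np.abs(predicted - ages)))
print(f"mean absolute error on training data: {mae:.2f} years")
```

The gap between an individual's predicted and chronological age ("age acceleration") is what such clocks use as a biomarker of biological aging.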
There is research into the epigenetics of aging . The ability to repair DNA double-strand breaks declines with aging in mice [ 71 ] and humans. [ 72 ] A set of rare hereditary disorders, each called progeria , has been known for some time. Sufferers exhibit symptoms resembling accelerated aging , including wrinkled skin . The cause of Hutchinson–Gilford progeria syndrome was reported in the journal Nature in May 2003. [ 73 ] This report suggests that DNA damage , not oxidative stress , is the cause of this form of accelerated aging. A study indicates that aging may shift activity toward short genes or shorter transcript lengths, and that this can be countered by interventions. [ 74 ] Healthspan can broadly be defined as the period of one's life during which one is healthy , such as free of significant diseases [ 76 ] or declines of capacities (e.g. of the senses, muscle , endurance and cognition ). Biological aging or the LHG comes with a great cost burden to society, including potentially rising health care costs (also depending on the types and costs of treatments ). [ 75 ] [ 79 ] This, along with global quality of life or wellbeing , highlights the importance of extending healthspans. [ 75 ] Many measures that may extend lifespans may simultaneously also extend healthspans, albeit that is not necessarily the case, indicating that "lifespan can no longer be the sole parameter of interest" in related research. [ 80 ] While recent life expectancy increases were not followed by a "parallel" healthspan expansion, [ 75 ] awareness of the concept and issues of healthspan lagged as of 2017. [ 76 ] Scientists have noted that " [c]hronic diseases of aging are increasing and are inflicting untold costs on human quality of life". [ 79 ] Life extension is the concept of extending the human lifespan , either modestly through improvements in medicine or dramatically by increasing the maximum lifespan beyond its generally settled biological limit of around 125 years . 
[ 81 ] Several researchers in the area, along with "life extensionists", " immortalists ", or " longevists " (those who wish to achieve longer lives themselves), postulate that future breakthroughs in tissue rejuvenation , stem cells , regenerative medicine , molecular repair, gene therapy , pharmaceuticals, and organ replacement (such as with artificial organs or xenotransplantations ) will eventually enable humans to have indefinite lifespans through complete rejuvenation to a healthy youthful condition (agerasia [ 82 ] ). The ethical ramifications, if life extension becomes a possibility, are debated by bioethicists .
https://en.wikipedia.org/wiki/Senescence
Senescence-associated beta-galactosidase ( SA-β-gal or SABG ) is a hypothetical hydrolase enzyme that catalyzes the hydrolysis of β-galactosides into monosaccharides . Senescence-associated beta-galactosidase, along with p16 Ink4A , is regarded to be a biomarker of cellular senescence . [ 1 ] [ 2 ] Its existence was proposed in 1995 by Dimri et al. [ 3 ] following the observation that when beta-galactosidase assays were carried out at pH 6.0, only cells in a senescent state developed staining. They proposed a cytochemical assay based on production of a blue-dyed precipitate that results from the cleavage of the chromogenic substrate X-Gal , which stains blue when cleaved by galactosidase. Since then, even more specific quantitative assays have been developed for its detection at pH 6.0. [ 4 ] [ 5 ] [ 6 ] Today this phenomenon is explained by the overexpression and accumulation of the endogenous lysosomal beta-galactosidase specifically in senescent cells. [ 7 ] Its expression is not required for senescence. [ 7 ] However, it remains the most widely used biomarker for senescent and aging cells, because it is easy to detect and reliable both in situ and in vitro . [ citation needed ]
https://en.wikipedia.org/wiki/Senescence-associated_beta-galactosidase
Hydra ( /ˈhaɪdrə/ HY-drə ) is a genus of small freshwater hydrozoans of the phylum Cnidaria . They are solitary, carnivorous jellyfish-like animals, [ 2 ] native to temperate and tropical regions. [ 3 ] [ 4 ] The genus was named by Linnaeus in 1758 after the Hydra , the many-headed beast of myth defeated by Heracles : when the animal has a part severed, it will regenerate much like the mythical hydra's heads. Biologists are especially interested in Hydra because of their regenerative ability ; they do not appear to die of old age, or to age at all. Hydras are often found in freshwater bodies, but some hydras are found in open water. They live attached to submerged rocks using a sticky secretion from their base. [ 2 ] Hydra has a tubular, radially symmetric body up to 10 mm (0.39 in) long when extended, secured by a simple adhesive foot known as the basal disc. Gland cells in the basal disc secrete a sticky fluid that accounts for its adhesive properties. At the free end of the body is a mouth opening surrounded by one to twelve thin, mobile tentacles . Each tentacle is clothed with highly specialised stinging cells called cnidocytes . Cnidocytes contain specialized structures called nematocysts , which look like miniature light bulbs with a coiled thread inside. At the narrow outer edge of the cnidocyte is a short trigger hair called a cnidocil. Upon contact with prey, the contents of the nematocyst are explosively discharged, firing a dart-like thread containing neurotoxins into whatever triggered the release. This can paralyze the prey, especially if many hundreds of nematocysts are fired. Hydra has two main body layers, which makes it " diploblastic ". The layers are separated by mesoglea , a gel-like substance. The outer layer is the epidermis , and the inner layer is called the gastrodermis , because it lines the stomach. The cells making up these two body layers are relatively simple. 
Hydramacin [ 5 ] is a bactericide recently discovered in Hydra ; it protects the outer layer against infection. A single Hydra is composed of 50,000 to 100,000 cells, which comprise three specific stem cell populations that create many different cell types. These stem cells continually renew themselves in the body column . [ 6 ] Hydras have two significant structures on their body: the "head" and the "foot". When a Hydra is cut in half, each half regenerates and forms into a small Hydra ; the "head" regenerates a "foot" and the "foot" regenerates a "head". If the Hydra is sliced into many segments, then the middle slices form both a "head" and a "foot". [ 7 ] Respiration and excretion occur by diffusion across the surface of the epidermis , while larger excreta are discharged through the mouth. [ 8 ] [ 9 ] The nervous system of Hydra is a nerve net , which is structurally simple compared to more derived animal nervous systems. Hydra does not have a recognizable brain or true muscles . Nerve nets connect sensory photoreceptors and touch-sensitive nerve cells located in the body wall and tentacles. The structure of the nerve net has two levels; some species have only two sheets of neurons . [ 10 ] If Hydra are alarmed or attacked, the tentacles can be retracted to small buds, and the body column itself can be retracted to a small gelatinous sphere. Hydra generally react in the same way regardless of the direction of the stimulus, which may be due to the simplicity of the nerve nets. Hydra are generally sedentary or sessile , but do occasionally move quite readily, especially when hunting. They have two distinct methods for moving – 'looping' and 'somersaulting'. In looping, the animal bends over, attaches itself to the substrate with the mouth and tentacles, and then relocates the foot, which provides the usual attachment. In somersaulting, the body instead bends over and makes a new place of attachment with the foot. 
By this process of "looping" or "somersaulting", a Hydra can move several inches (c. 100 mm) in a day. Hydra may also move by amoeboid motion of their bases or by detaching from the substrate and floating away in the current. Most Hydra species do not have a permanent sex; when food is plentiful, many reproduce asexually by budding . A section of the body wall and an extension of the digestive cavity develop, creating a bud. [ 2 ] The buds grow into miniature adults and break away when mature. When a hydra is well fed, a new bud can form every two days. [ 11 ] When conditions are harsh, often before winter or in poor feeding conditions, sexual reproduction occurs in some Hydra . Swellings in the body wall develop into either ovaries or testes. The testes release free-swimming gametes into the water, and these can fertilize the egg in the ovary of another individual. The fertilized eggs secrete a tough outer coating, and, as the adult dies (due to starvation or cold), these resting eggs fall to the bottom of the lake or pond to await better conditions, whereupon they hatch into nymph Hydra . Some Hydra species, like Hydra circumcincta and Hydra viridissima , are hermaphrodites [ 12 ] and may produce both testes and ovaries at the same time. Many members of the Hydrozoa go through a body change from a polyp to an adult form called a medusa , which is usually the life stage where sexual reproduction occurs, but Hydra do not progress beyond the polyp phase. [ 13 ] Hydra mainly feed on aquatic invertebrates such as Daphnia and Cyclops . The mouth of the hydra is surrounded by four to eight tentacles. [ 2 ] When feeding, Hydra extend their body to maximum length and then slowly extend their tentacles. Despite their simple construction, the tentacles of Hydra are extraordinarily extensible and can be four to five times the length of the body. Once fully extended, the tentacles are slowly maneuvered around waiting for contact with a suitable prey animal. 
Upon contact, nematocysts on the tentacle fire into the prey, and the tentacle itself coils around the prey. Most of the tentacles join in the attack within 30 seconds to subdue the struggling prey. Within two minutes, the tentacles surround the prey and move it into the open mouth aperture. Within ten minutes, the prey is engulfed within the body cavity, and digestion commences. Hydra can stretch their body wall considerably. [ citation needed ] The hydra's mouth is not permanent, as when the hydra closes its mouth, the cells surrounding the open mouth fuse together. These joints are then broken when the hydra feeds again. [ 2 ] The feeding behaviour of Hydra demonstrates the sophistication of what appears to be a simple nervous system. Some species of Hydra exist in a mutual relationship with various types of unicellular algae . The algae are protected from predators by Hydra ; in return, photosynthetic products from the algae are beneficial as a food source to Hydra [ 14 ] [ 15 ] and even help to maintain the Hydra microbiome. [ 16 ] The feeding response in Hydra is induced by glutathione (specifically in the reduced state as GSH) released from damaged tissue of injured prey. [ 17 ] There are several methods conventionally used for quantification of the feeding response. In some, the duration for which the mouth remains open is measured. [ 18 ] Other methods rely on counting the number of Hydra among a small population showing the feeding response after addition of glutathione. [ 19 ] Recently, an assay for measuring the feeding response in hydra has been developed. [ 20 ] In this method, the linear two-dimensional distance between the tip of the tentacle and the mouth of hydra was shown to be a direct measure of the extent of the feeding response. This method has been validated using a starvation model, as starvation is known to cause enhancement of the Hydra feeding response. 
[ 20 ] The species Hydra oligactis is preyed upon by the flatworm Microstomum lineare . [ 21 ] [ 22 ] Hydras undergo morphallaxis (tissue regeneration) when injured or severed. Typically, Hydras reproduce by just budding off a whole new individual; the bud occurs around two-thirds of the way down the body axis. When a Hydra is cut in half, each half regenerates and forms into a small Hydra ; the "head" regenerates a "foot" and the "foot" regenerates a "head". This regeneration occurs without cell division. If the Hydra is sliced into many segments, the middle slices form both a "head" and a "foot". [ 7 ] The polarity of the regeneration is explained by two pairs of positional value gradients: a head activation and inhibition gradient, and a foot activation and inhibition gradient. The head gradients work in the opposite direction to the foot gradients. [ 23 ] The evidence for these gradients was shown in the early 1900s with grafting experiments. The inhibitors for both gradients have been shown to be important in blocking bud formation. The location where the bud forms is where both the head and foot gradients are low. [ 7 ] Hydras are capable of regenerating not only from pieces of tissue excised from the body column, but also from re-aggregates of dissociated single cells. [ 23 ] It was found that in these aggregates, cells initially distributed randomly undergo sorting and form two epithelial cell layers, in which the endodermal epithelial cells play the more active role. Active mobility of these endodermal epithelial cells forms two layers in both the re-aggregate and the regenerating tip of the excised tissue. Once these two layers are established, a patterning process takes place to form heads and feet. [ 24 ] Daniel Martinez claimed in a 1998 article in Experimental Gerontology that Hydra are biologically immortal . 
[ 25 ] This publication has been widely cited as evidence that Hydra do not senesce (do not age), and that they are proof of the existence of non-senescing organisms generally. In 2010, Preston Estep published (also in Experimental Gerontology ) a letter to the editor arguing that the Martinez data refutes the hypothesis that Hydra do not senesce. [ 26 ] The controversial unlimited lifespan of Hydra has attracted much attention from scientists. Research today appears to confirm Martinez' study. [ 27 ] Hydra stem cells have a capacity for indefinite self-renewal. The transcription factor " forkhead box O " (FoxO) has been identified as a critical driver of the continuous self-renewal of Hydra . [ 27 ] In experiments, a drastically reduced population growth resulted from FoxO down-regulation . [ 27 ] In bilaterally symmetrical organisms ( Bilateria ), the transcription factor FoxO affects stress response, lifespan, and increase in stem cells. If this transcription factor is knocked down in bilaterian model organisms, such as fruit flies and nematodes , their lifespan is significantly decreased. In experiments on H. vulgaris (a radially symmetrical member of phylum Cnidaria ), when FoxO levels were decreased, there was a negative effect on many key features of the Hydra , but no death was observed, thus it is believed other factors may contribute to the apparent lack of aging in these creatures. [ 6 ] Hydra are capable of two types of DNA repair : nucleotide excision repair and base excision repair . [ 28 ] The repair pathways facilitate DNA replication by removing DNA damage. Their identification in hydra was based, in part, on the presence in its genome of genes homologous to ones present in other genetically well studied species playing key roles in these DNA repair pathways. [ 28 ] An ortholog comparison analysis done within the last decade [ as of? ] demonstrated that Hydra share a minimum of 6,071 genes with humans. 
Hydra is becoming an increasingly better model system as more genetic approaches become available. [ 6 ] Transgenic hydra have become attractive model organisms for the study of the evolution of immunity . [ 29 ] A draft of the genome of Hydra magnipapillata was reported in 2010. [ 30 ] The genomes of cnidarians are usually less than 500 Mb ( megabases ) in size, as in Hydra viridissima , which has a genome size of approximately 300 Mb. In contrast, the genomes of brown hydras are approximately 1 Gb in size, because the brown hydra genome is the result of an expansion event involving LINEs , a type of transposable element , in particular a single family of the CR1 class. This expansion is unique to this subgroup of the genus Hydra and is absent in the green hydra, whose repeat landscape is similar to that of other cnidarians. These genome characteristics make Hydra attractive for studies of transposon-driven speciation and genome expansion. [ 31 ] Due to the simplicity of their life cycle compared to other hydrozoans, hydras have lost many genes that correspond to cell types or metabolic pathways whose ancestral function is still unknown. The Hydra genome shows a preference for proximal promoters . Thanks to this feature, many reporter cell lines have been created with regions around 500 to 2000 bases upstream of the gene of interest. Its cis-regulatory elements ( CREs ) are mostly located less than 2000 base pairs upstream of the closest transcription initiation site, but some CREs are located further away. Its chromatin has a Rabl configuration: there are interactions between the centromeres of different chromosomes, and between the centromeres and telomeres of the same chromosome. It presents a large number of intercentromeric interactions compared to other cnidarians, probably due to the loss of multiple subunits of condensin II . 
It is organized in domains that span dozens to hundreds of megabases, containing epigenetically co-regulated genes and flanked by boundaries located within heterochromatin . [ 32 ] Different Hydra cell types express gene families of different evolutionary ages. Progenitor cells (stem cells, neuron and nematocyst precursors, and germ cells) express genes from families that predate metazoans . Among differentiated cells some express genes from families that date from the base of metazoans, like gland and neuronal cells, and others express genes from newer families, originating from the base of cnidaria or medusozoa , like nematocysts. Interstitial cells contain translation factors with a function that has been conserved for at least 400 million years. [ 32 ]
https://en.wikipedia.org/wiki/Senescence_in_Hydra
A senior cat diet is generally considered to be a diet for cats that are mature (7–10 years old), senior (11–15 years old), or geriatric (over 15 years old). [ 1 ] Nutritional considerations arise when choosing an appropriate diet for a healthy senior cat. [ 2 ] Dietary management of many conditions becomes more important in senior cats because changes in their physiology and metabolism may alter how their system responds to medications and treatments. [ 3 ] [ 4 ] Diets should be managed for each individual cat to ensure that they maintain an ideal body and muscle condition. [ 5 ] [ 6 ] Unlike in many other species, the energy requirements of cats do not decrease with age and may even increase; seniors therefore require the same or more energy than adults. [ 7 ] [ 8 ] Scientific studies have indicated that after 12 years of age, and again after 13 years of age, energy requirements for cats increase significantly. [ 9 ] Obesity is common in adult cats, but much less so in senior cats. [ 4 ] Of all feline life stages, senior cats have been demonstrated to be the most often underweight. [ 9 ] Research has shown that fat and protein digestibility decrease with age in cats, giving seniors a higher dietary requirement for these macronutrients . [ 8 ] The fat and protein sources need to be highly digestible to maximize energy capture from the food. [ 4 ] This may help to explain the body condition differences between adult and senior cats, given the consistency of food intake. There is little research on the reasons for decreased fat and protein digestibility; however, some speculations have been made based on age-related changes observed in other species. [ 10 ] Decreased secretion of digestive enzymes may be related to decreased digestive function in humans and rats; however, more research is required to explain this in cats. 
[ 10 ] Vitamin B12 is important in methionine synthesis, DNA synthesis, and is a vital part of an enzyme important for metabolic pathways . [ 11 ] Lower nutrient digestibility may be due to gastrointestinal disease , including pancreatic and intestinal disease , which are often found with low levels of vitamin B12. [ 11 ] One study has shown that fat digestibility in senior cats could be reduced by as much as 9% when associated with B12 deficiency and pancreatic disease. [ 11 ] Due to this lower digestibility seen in seniors, it is important to look at metabolizable energy values, which provide a more accurate assessment of nutrient availability than a gross energy calculation. [ 12 ] The metabolizable energy of food is determined by the Atwater system and calculates the amount of energy available to the animal after digestion and absorption. [ 12 ] A gross energy calculation may overestimate digested energy, as it provides the total available energy in the food rather than what is actually being utilized by the cat. [ 12 ] Senior cats are often prone to arthritis , [ 13 ] [ 14 ] periodontal disease , [ 4 ] and a decline in cognitive and sensory function. [ 15 ] What an owner may perceive as a normal age-related change could actually be subtle signs of arthritis, such as increased inactivity and reluctance to perform normal activities, such as stair climbing and descent. [ 13 ] [ 14 ] Arthritis has been found in approximately 80-90% of senior cats showing little or no lameness ; in many cases this is not a result of damage to the joints but natural degeneration specific to cats. [ 13 ] [ 14 ] Cats that suffer from arthritis have been shown in some studies to display significant signs of improvement when chondroprotectants , [ 16 ] substances which help maintain the integrity of connective tissue , are added to the diet. 
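The modified Atwater factors commonly used for pet foods assign roughly 3.5 kcal/g to protein, 8.5 kcal/g to fat, and 3.5 kcal/g to carbohydrate, which discounts gross energy for typical digestion and absorption losses. The short sketch below applies them; the function name and the example composition are illustrative, not from this article.

```python
# Hedged sketch: estimating the metabolizable energy (ME) of a cat
# food with the modified Atwater factors commonly used for pet foods.
# These factors already discount typical digestion/absorption losses,
# unlike a gross-energy calculation. Example values are hypothetical.

ATWATER_KCAL_PER_G = {"protein": 3.5, "fat": 8.5, "carbohydrate": 3.5}

def metabolizable_energy(protein_g: float, fat_g: float, carb_g: float) -> float:
    """ME in kcal for the given grams of each macronutrient."""
    return (ATWATER_KCAL_PER_G["protein"] * protein_g
            + ATWATER_KCAL_PER_G["fat"] * fat_g
            + ATWATER_KCAL_PER_G["carbohydrate"] * carb_g)

# Per 100 g dry matter of a hypothetical senior-cat food:
print(metabolizable_energy(protein_g=35, fat_g=20, carb_g=25))  # 380.0 kcal
```

Because the factors are lower than gross-energy (bomb calorimetry) values, the same food composition yields a smaller, more realistic energy estimate for the cat.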
Antioxidants ( vitamin C and E , and beta-carotene ), [ 17 ] omega-3 fatty acids such as eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) for inflammation, [ 18 ] L‑carnitine [ 19 ] and lysine [ 20 ] have been shown to be beneficial in other species. Cognitive decline similar to that seen in humans and dogs has been observed in senior cats, with ongoing research into the causes and treatment. [ 15 ] Changes in the structure of the brain, including those similar to the causes of Alzheimer's disease in humans, are considered to be a significant factor in cognitive issues in cats. [ 21 ] [ 22 ] [ 23 ] Studies in other species have shown that supplementing dietary omega-3 and -6 fatty acids, [ 24 ] particularly EPA, DHA, and arachidonic acid , along with antioxidants such as beta-carotene and vitamin B12, [ 25 ] may aid in the prevention of decline of cognitive function and slow the progression of symptoms. Senior cats tend to become particularly picky with their food, as a reduced ability to taste and smell is associated with age; palatability is therefore an important factor to consider. [ 26 ] Cats have shown a preference in studies for diets with a higher protein content regardless of the flavouring of the food. [ 27 ] [ 28 ] Additionally, cats are unable to effectively regulate their water intake, and seniors are particularly prone to dehydration. [ 29 ] Wet diets should be considered to increase water intake and enhance palatability, [ 4 ] as well as to alleviate discomfort associated with periodontal disease, a common concern with senior cats. [ 4 ] Dry dental kibble could be considered to help prevent plaque buildup on teeth; [ 4 ] however, as this has only been shown to be effective when fed as the sole diet, [ 30 ] brushing of the teeth or dental chews would be a better alternative in combination with a canned food in order to optimize water intake, palatability and dental health. 
As they do not digest as much energy per meal as an adult cat, it is important to feed senior cats smaller, more frequent meals of a highly digestible diet throughout the day. [ 29 ] It is also important to monitor the cat's health closely, with regular visits to the veterinarian, as they are very good at hiding symptoms of disease. [ 31 ] By carefully selecting a diet that considers a senior cat's changing needs, such as digestion, mobility, cognition, dental health and body condition, it may be possible to manage, or even prevent the progression of many of these age associated conditions.
https://en.wikipedia.org/wiki/Senior_cat_diet
A senolytic (from the words senescence and -lytic , "destroying") is among a class of small molecules under basic research to determine if they can selectively induce death of senescent cells and improve health in humans. [ 1 ] A goal of this research is to discover or develop agents to delay, prevent, alleviate, or reverse age-related diseases. [ 2 ] [ 3 ] Removal of senescent cells with senolytics has been proposed as a method of enhancing immunity during aging. [ 4 ] A related concept is "senostatic", which means to suppress senescence. [ 5 ] Possible senolytic agents are under preliminary research, including some which are in early-stage human trials . [ 6 ] [ 7 ] [ clarification needed ] The majority of candidate senolytic compounds are repurposed anti-cancer molecules, such as the chemotherapeutic drug dasatinib and the experimental small molecule navitoclax . [ 8 ] [ 9 ] The soluble urokinase plasminogen activator surface receptor has been found to be highly expressed on senescent cells, leading researchers to use chimeric antigen receptor T cells to eliminate senescent cells in mice. [ 10 ] [ 11 ] According to reviews, it is thought that senolytics can be administered intermittently while remaining as effective as continuous administration. This could be an advantage of senolytic drugs and could decrease adverse effects, for instance by circumventing potential off-target effects. [ 6 ] [ 12 ] [ 13 ] [ 14 ] Recently, artificial intelligence has been used to discover new senolytics, resulting in the identification of structurally distinct senolytic compounds with more favorable medicinal chemistry properties than previous senolytic candidates. [ 15 ] [ 16 ] Senolytics eliminate senescent cells, whereas senomorphics – with candidates such as apigenin , rapamycin and the rapalog everolimus – modulate properties of senescent cells without eliminating them, suppressing phenotypes of senescence, including the SASP . [ 13 ] [ 12 ]
https://en.wikipedia.org/wiki/Senolytic
Senotherapeutics refers to therapeutic agents/strategies that specifically target cellular senescence . [ 1 ] Senotherapeutics include emerging senolytic /senoptotic small molecules that specifically induce cell death in senescent cells [ 2 ] and agents that inhibit the pro-inflammatory senescent secretome. [ 3 ] Senescent cells can be targeted for immune clearance, but an ageing immune system likely impairs senescent cell clearance leading to their accumulation. [ 4 ] Therefore, agents which can enhance immune clearance of senescent cells can also be considered as senotherapeutic. [ citation needed ]
https://en.wikipedia.org/wiki/Senotherapy
SenseGlove is a Dutch technology company that develops and manufactures wearable haptic products for the hand, for use in virtual reality (VR), VR/AR training, research and other applications. [ 1 ] [ 2 ] The company is headquartered in Delft , Netherlands. SenseGlove develops force- and haptic-feedback gloves that let users feel the virtual objects they interact with, including their size, stiffness, and resistance. [ 3 ] [ 4 ] Their products are mainly used for enterprise training in the automotive, aviation, defense, healthcare and research sectors. [ 5 ] [ 6 ] [ 7 ] [ 8 ] SenseGlove began as a graduation project of its two founders, Johannes Luijten and Gijs den Butter, at the Delft University of Technology . The initial working prototype was introduced in 2017, when the developers aimed to create a rehabilitation device for stroke victims. [ 9 ] The company's first haptic force-feedback glove, the SenseGlove DK1, was designed as an exoskeleton for the hands that was capable of providing haptic sensations and offered finger tracking, vibrotactile feedback and force feedback. It was mainly used for research and telerobotics applications. [ 10 ] In 2021, SenseGlove announced the SenseGlove Nova, a new version of its haptic force-feedback gloves designed for VR training purposes. It was redesigned to be wireless and more compact than the DK1. [ 11 ] [ 12 ] In 2022, the DK1 was used in a project that won the ANA Avatar XPRIZE from the X Prize Foundation . [ 13 ] [ 14 ] In his Dope Tech review in 2022, Marques Brownlee , an American tech commentator, reviewed the SenseGlove and praised the technology behind the product, while a CNET reviewer, after testing the SenseGlove Nova, commented that haptic technology has yet to become a common fixture in home VR setups. 
[ 15 ] [ 16 ] In May 2023, SenseGlove announced the second generation of the Nova – the "Nova 2". The Nova 2 features palm feedback to simulate a feeling inside the user's hand and provide an increased level of realism in virtual training, research, and social multiplayer interactions. [ 17 ] The SenseGlove Nova is a wireless haptic glove that provides force feedback by applying resistance through its magnetic friction brakes. Its vibrotactile feedback renders the feeling of realistic button clicks, vibrations, and impact simulations. [ 18 ] The SenseGlove Nova 2 adds active contact feedback to the force feedback and vibrotactile feedback that are already key features of the Nova. This function, as the developers describe it, physically presses on the user's palm depending on the virtual object being carried, making the interaction feel even more realistic. [ 19 ]
https://en.wikipedia.org/wiki/SenseGlove
In molecular biology and genetics , the sense of a nucleic acid molecule, particularly of a strand of DNA or RNA , refers to the nature of the roles of the strand and its complement in specifying a sequence of amino acids . [ citation needed ] Depending on the context, sense may have slightly different meanings. For example, the negative-sense strand of DNA is equivalent to the template strand, whereas the positive-sense strand is the non-template strand whose nucleotide sequence is equivalent to the sequence of the mRNA transcript. Because of the complementary nature of base-pairing between nucleic acid polymers, a double-stranded DNA molecule will be composed of two strands with sequences that are reverse complements of each other. To help molecular biologists specifically identify each strand individually, the two strands are usually differentiated as the "sense" strand and the "antisense" strand. An individual strand of DNA is referred to as positive-sense (also positive (+) or simply sense ) if its nucleotide sequence corresponds directly to the sequence of an RNA transcript which is translated or translatable into a sequence of amino acids (provided that any thymine bases in the DNA sequence are replaced with uracil bases in the RNA sequence). The other strand of the double-stranded DNA molecule is referred to as negative-sense (also negative (−) or antisense ), and is reverse complementary to both the positive-sense strand and the RNA transcript. It is actually the antisense strand that is used as the template from which RNA polymerases construct the RNA transcript, but the complementary base-pairing by which nucleic acid polymerization occurs means that the sequence of the RNA transcript will look identical to the positive-sense strand, apart from the RNA transcript's use of uracil instead of thymine. 
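As a concrete, simplified illustration of these relationships, a short Python sketch can express the sense strand, its reverse-complementary antisense strand, and the transcript that mirrors the sense strand with U in place of T (the sequence used is an arbitrary example):

```python
# Strand relationships sketched in Python; the sequence is an arbitrary example.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Reverse complement of a DNA strand (input and output read 5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

def transcribe(template: str) -> str:
    """mRNA made from the antisense (template) strand: its reverse
    complement, with uracil (U) in place of thymine (T)."""
    return reverse_complement(template).replace("T", "U")

sense = "ATGGCC"                       # positive-sense strand, 5'->3'
antisense = reverse_complement(sense)  # negative-sense (template) strand
mrna = transcribe(antisense)

print(antisense)  # GGCCAT
print(mrna)       # AUGGCC -- identical to the sense strand except U for T
```

The transcript comes out identical to the positive-sense strand, letter for letter, apart from the U/T substitution, which is exactly why that strand is called "sense".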
Sometimes the phrases coding strand and template strand are encountered in place of sense and antisense, respectively, and in the context of a double-stranded DNA molecule the usage of these terms is essentially equivalent. However, the coding/sense strand need not always contain a code that is used to make a protein; both protein-coding and non-coding RNAs may be transcribed. The terms "sense" and "antisense" are relative only to the particular RNA transcript in question, and not to the DNA strand as a whole. In other words, either DNA strand can serve as the sense or antisense strand. Most organisms with sufficiently large genomes make use of both strands, with each strand functioning as the template strand for different RNA transcripts in different places along the same DNA molecule. In some cases, RNA transcripts can be transcribed in both directions (i.e. on either strand) from a common promoter region, or be transcribed from within introns on either strand (see "ambisense" below). [ 1 ] [ 2 ] [ 3 ] The DNA sense strand looks like the messenger RNA (mRNA) transcript, and can therefore be used to read the expected codon sequence that will ultimately be used during translation (protein synthesis) to build an amino acid sequence and then a protein. For example, the sequence "ATG" within a DNA sense strand corresponds to an "AUG" codon in the mRNA, which codes for the amino acid methionine . However, the DNA sense strand itself is not used as the template for the mRNA; it is the DNA antisense strand that serves as the source for the protein code, because, with bases complementary to the DNA sense strand, it is used as a template for the mRNA. Since transcription results in an RNA product complementary to the DNA template strand, the mRNA is complementary to the DNA antisense strand. 
Hence, a base triplet 3′-TAC-5′ in the DNA antisense strand (complementary to the 5′-ATG-3′ of the DNA sense strand) is used as the template which results in a 5′-AUG-3′ base triplet in the mRNA. The DNA sense strand will have the triplet ATG, which looks similar to the mRNA triplet AUG but will not be used to make methionine because it will not be directly used to make mRNA. The DNA sense strand is called a "sense" strand not because it will be used to make protein (it won't be), but because it has a sequence that corresponds directly to the RNA codon sequence. By this logic, the RNA transcript itself is sometimes described as "sense". Some regions within a double-stranded DNA molecule code for genes , which are usually instructions specifying the order in which amino acids are assembled to make proteins, as well as regulatory sequences, splicing sites, non-coding introns , and other gene products . For a cell to use this information, one strand of the DNA serves as a template for the synthesis of a complementary strand of RNA . The transcribed DNA strand is called the template strand, with antisense sequence, and the mRNA transcript produced from it is said to be sense sequence (the complement of antisense). The untranscribed DNA strand, complementary to the transcribed strand, is also said to have sense sequence; it has the same sense sequence as the mRNA transcript (though T bases in DNA are substituted with U bases in RNA). The names assigned to each strand actually depend on which direction you are writing the sequence that contains the information for proteins (the "sense" information), not on which strand is depicted as "on the top" or "on the bottom" (which is arbitrary). 
The only biological information that is important for labeling strands is the relative locations of the terminal 5′ phosphate group and the terminal 3′ hydroxyl group (at the ends of the strand or sequence in question), because these ends determine the direction of transcription and translation. A sequence written 5′-CGCTAT-3′ is equivalent to a sequence written 3′-TATCGC-5′ as long as the 5′ and 3′ ends are noted. If the ends are not labeled, convention is to assume that both sequences are written in the 5′-to-3′ direction. The "Watson strand" refers to 5′-to-3′ top strand (5′→3′), whereas the "Crick strand" refers to the 5′-to-3′ bottom strand (3′←5′). [ 4 ] Both Watson and Crick strands can be either sense or antisense strands depending on the specific gene product made from them. For example, the notation "YEL021W", an alias of the URA3 gene used in the National Center for Biotechnology Information (NCBI) database, denotes that this gene is in the 21st open reading frame (ORF) from the centromere of the left arm (L) of Yeast (Y) chromosome number V (E), and that the expression coding strand is the Watson strand (W). "YKL074C" denotes the 74th ORF to the left of the centromere of chromosome XI and that the coding strand is the Crick strand (C). Another confusing term referring to "Plus" and "Minus" strand is also widely used. Whether the strand is sense (positive) or antisense (negative), the default query sequence in NCBI BLAST alignment is "Plus" strand. A single-stranded genome that is used in both positive-sense and negative-sense capacities is said to be ambisense . Some viruses have ambisense genomes. Bunyaviruses have three single-stranded RNA (ssRNA) fragments, some of them containing both positive-sense and negative-sense sections; arenaviruses are also ssRNA viruses with an ambisense genome, as they have three fragments that are mainly negative-sense except for part of the 5′ ends of the large and small segments of their genome. 
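The writing-direction convention discussed above is purely notational: the same strand written from either end differs only by reversal of the letter order, as a one-line check makes explicit (example sequence from the text):

```python
# End labels, not page orientation, identify a strand: the two notations
# below describe the same molecule, differing only in reading direction.
seq_5to3 = "CGCTAT"  # written 5'-CGCTAT-3'
seq_3to5 = "TATCGC"  # the same strand written 3'-TATCGC-5'

assert seq_5to3 == seq_3to5[::-1]
print("same strand")
```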
An RNA sequence that is complementary to an endogenous mRNA transcript is sometimes called " antisense RNA ". In other words, it is a non-coding strand complementary to the coding sequence of RNA; this is similar to negative-sense viral RNA. When mRNA forms a duplex with a complementary antisense RNA sequence, translation is blocked. This process is related to RNA interference . Cells can produce antisense RNA molecules naturally, called microRNAs , which interact with complementary mRNA molecules and inhibit their expression . The concept has also been exploited as a molecular biology technique, by artificially introducing a transgene coding for antisense RNA in order to block the expression of a gene of interest. Radioactively or fluorescently labelled antisense RNA can be used to show the level of transcription of genes in various cell types. Some alternative antisense structural types have been experimentally applied as antisense therapy . In the United States, the Food and Drug Administration (FDA) has approved the phosphorothioate antisense oligonucleotides fomivirsen (Vitravene) [ 5 ] and mipomersen (Kynamro) [ 6 ] for human therapeutic use. In virology , the term "sense" has a slightly different meaning. The genome of an RNA virus can be said to be either positive-sense , also known as a "plus-strand", or negative-sense , also known as a "minus-strand". In most cases, the terms "sense" and "strand" are used interchangeably, making terms such as "positive-strand" equivalent to "positive-sense", and "plus-strand" equivalent to "plus-sense". Whether a viral genome is positive-sense or negative-sense can be used as a basis for classifying viruses. Positive-sense ( 5′ -to- 3′ ) viral RNA signifies that a particular viral RNA sequence may be directly translated into viral proteins (e.g., those needed for viral replication). 
Therefore, in positive-sense RNA viruses, the viral RNA genome can be considered viral mRNA, and can be immediately translated by the host cell. Unlike negative-sense RNA, positive-sense RNA is of the same sense as mRNA. Some viruses (e.g. Coronaviridae ) have positive-sense genomes that can act as mRNA and be used directly to synthesize proteins without the help of a complementary RNA intermediate. Because of this, these viruses do not need to have an RNA polymerase packaged into the virion —the RNA polymerase will be one of the first proteins produced by the host cell, since it is needed in order for the virus's genome to be replicated. Negative-sense (3′-to-5′) viral RNA is complementary to the viral mRNA, thus a positive-sense RNA must be produced by an RNA-dependent RNA polymerase from it prior to translation. Like DNA, negative-sense RNA has a nucleotide sequence complementary to the mRNA that it encodes; also like DNA, this RNA cannot be translated into protein directly. Instead, it must first be transcribed into a positive-sense RNA that acts as an mRNA. Some viruses (e.g. influenza viruses) have negative-sense genomes and so must carry an RNA polymerase inside the virion. Gene silencing can be achieved by introducing into cells a short "antisense oligonucleotide" that is complementary to an RNA target. This experiment was first done by Zamecnik and Stephenson in 1978 [ 7 ] and continues to be a useful approach, both for laboratory experiments and potentially for clinical applications ( antisense therapy ). [ 8 ] Several viruses, such as influenza viruses [ 9 ] [ 10 ] [ 11 ] [ 12 ] Respiratory syncytial virus (RSV) [ 9 ] and SARS coronavirus (SARS-CoV), [ 9 ] have been targeted using antisense oligonucleotides to inhibit their replication in host cells. If the antisense oligonucleotide contains a stretch of DNA or a DNA mimic (phosphorothioate DNA, 2′F-ANA, or others) it can recruit RNase H to degrade the target RNA. 
This makes the mechanism of gene silencing catalytic. Double-stranded RNA can also act as a catalytic, enzyme-dependent antisense agent through the RNAi / siRNA pathway, involving target mRNA recognition through sense-antisense strand pairing followed by target mRNA degradation by the RNA-induced silencing complex (RISC). The R1 plasmid hok/sok system provides yet another example of an enzyme-dependent antisense regulation process through enzymatic degradation of the resulting RNA duplex. Other antisense mechanisms are not enzyme-dependent, but involve steric blocking of their target RNA (e.g. to prevent translation or to induce alternative splicing). Steric blocking antisense mechanisms often use oligonucleotides that are heavily modified. Since there is no need for RNase H recognition, this can include chemistries such as 2′-O-alkyl, peptide nucleic acid (PNA), locked nucleic acid (LNA), and Morpholino oligomers.
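As a toy illustration of the sequence logic behind antisense oligonucleotides, the sketch below derives a complementary DNA oligo for an mRNA region and checks that it pairs with the target. The sequences are invented for illustration, not real therapeutic oligos:

```python
# Toy antisense-oligo design and check; sequences invented for illustration.
RNA_TO_DNA_PARTNER = {"A": "T", "U": "A", "G": "C", "C": "G"}

def antisense_oligo_for(mrna_region: str) -> str:
    """DNA oligo (5'->3') complementary to an mRNA region (5'->3')."""
    return "".join(RNA_TO_DNA_PARTNER[b] for b in reversed(mrna_region))

def targets(oligo: str, mrna: str) -> bool:
    """True if the oligo is complementary to some stretch of the mRNA."""
    n = len(oligo)
    return any(antisense_oligo_for(mrna[i:i + n]) == oligo
               for i in range(len(mrna) - n + 1))

mrna = "AUGGCUUACGGA"                   # hypothetical target transcript
oligo = antisense_oligo_for(mrna[3:9])  # aimed at the "GCUUAC" stretch
print(oligo, targets(oligo, mrna))      # GTAAGC True
```

Real oligo design must additionally weigh length, binding energy, off-target matches, and chemistry (RNase H-recruiting vs. steric-blocking), none of which this sketch models.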
https://en.wikipedia.org/wiki/Sense_(molecular_biology)
In the philosophy of language , the distinction between sense and reference was an idea of the German philosopher and mathematician Gottlob Frege in 1892 (in his paper " On Sense and Reference "; German: "Über Sinn und Bedeutung"), [ 1 ] reflecting the two ways he believed a singular term may have meaning . The reference (or " referent "; Bedeutung ) of a proper name is the object it means or indicates ( bedeuten ), whereas its sense ( Sinn ) is what the name expresses. The reference of a sentence is its truth value , whereas its sense is the thought that it expresses. [ 1 ] Frege justified the distinction in a number of ways. Much of analytic philosophy is traceable to Frege's philosophy of language. [ 4 ] Frege's views on logic (i.e., his idea that some parts of speech are complete by themselves, and are analogous to the arguments of a mathematical function ) led to his views on a theory of reference . [ 4 ] Frege developed his original theory of meaning in early works like Begriffsschrift (concept script) of 1879 and Grundlagen (Foundations of Arithmetic) of 1884. On this theory, the meaning of a complete sentence consists in its being true or false, [ 5 ] and the meaning of each significant expression in the sentence is an extralinguistic entity which Frege called its Bedeutung , literally meaning or significance, but rendered by Frege's translators as reference, referent, 'Meaning', nominatum, etc. Frege supposed that some parts of speech are complete by themselves, and are analogous to the arguments of a mathematical function , but that other parts are incomplete, and contain an empty place, by analogy with the function itself. [ 6 ] Thus "Caesar conquered Gaul" divides into the complete term "Caesar", whose reference is Caesar himself, and the incomplete term "—conquered Gaul", whose reference is a concept. Only when the empty place is filled by a proper name does the reference of the completed sentence – its truth value – appear. 
This early theory of meaning explains how the significance or reference of a sentence (its truth value) depends on the significance or reference of its parts. Frege introduced the notion of "sense" (German: Sinn ) to accommodate difficulties in his early theory of meaning. [ 7 ] : 965 First, if the entire significance of a sentence consists of its truth value, it follows that the sentence will have the same significance if we replace a word of the sentence with one having an identical reference, as this will not change its truth value. [ 8 ] The reference of the whole is determined by the reference of the parts. If the evening star has the same reference as the morning star , it follows that the evening star is a body illuminated by the Sun has the same truth value as the morning star is a body illuminated by the Sun . But it is possible for someone to think that the first sentence is true while also thinking that the second is false. Therefore, the thought corresponding to each sentence cannot be its reference, but something else, which Frege called its sense . Second, sentences that contain proper names with no reference cannot have a truth value at all. Yet the sentence 'Odysseus was set ashore at Ithaca while sound asleep' obviously has a sense, even though 'Odysseus' has no reference. The thought remains the same whether or not 'Odysseus' has a reference. [ 8 ] Furthermore, a thought cannot contain the objects that it is about. For example, Mont Blanc , 'with its snowfields', cannot be a component of the thought that Mont Blanc is more than 4,000 metres high. Nor can a thought about Etna contain lumps of solidified lava. [ 9 ] Frege's notion of sense is somewhat obscure, and neo-Fregeans have come up with different candidates for its role. [ 10 ] Accounts based on the work of Carnap [ 11 ] and Church [ 12 ] treat sense as an intension , or a function from possible worlds to extensions . 
For example, the intension of ‘number of planets’ is a function that maps any possible world to the number of planets in that world. John McDowell supplies cognitive and reference-determining roles. [ 13 ] Michael Devitt treats senses as causal-historical chains connecting names to referents, allowing that repeated "groundings" in an object account for reference change. [ 14 ] In his theory of descriptions , Bertrand Russell held the view that most proper names in ordinary language are in fact disguised definite descriptions . For example, 'Aristotle' can be understood as "The pupil of Plato and teacher of Alexander", or by some other uniquely applying description. This is known as the descriptivist theory of names . Because Frege used definite descriptions in many of his examples, he is often taken to have endorsed the descriptivist theory. Thus Russell's theory of descriptions was conflated with Frege's theory of sense, and for most of the twentieth century this "Frege–Russell" view was the orthodox view of proper name semantics. Saul Kripke argued influentially against the descriptivist theory, asserting that proper names are rigid designators which designate the same object in every possible world. [ 15 ] : 48–49 Descriptions, however, such as "the President of the U.S. in 1969" do not designate the same entity in every possible world. For example, someone other than Richard Nixon , e.g. Hubert H. Humphrey , might have been the President in 1969. Hence a description (or cluster of descriptions) cannot be a rigid designator, and thus a proper name cannot mean the same as a description. [ 16 ] : 49 However, the Russellian descriptivist reading of Frege has been rejected by many scholars, in particular by Gareth Evans in The Varieties of Reference [ 17 ] and by John McDowell in "The Sense and Reference of a Proper Name", [ 18 ] following Michael Dummett , who argued that Frege's notion of sense should not be equated with a description. 
Evans further developed this line, arguing that a sense without a referent was not possible. He and McDowell both take the line that Frege's discussion of empty names, and of the idea of sense without reference, are inconsistent, and that his apparent endorsement of descriptivism rests only on a small number of imprecise and perhaps offhand remarks. And both point to the power that the sense-reference distinction does have (i.e., to solve at least the first two problems), even if it is not given a descriptivist reading. As noted above, translators of Frege have rendered the German Bedeutung in various ways. The term 'reference' has been the most widely adopted, but this fails to capture the meaning of the original German ('meaning' or 'significance'), and does not reflect the decision to standardise key terms across different editions of Frege's works published by Blackwell . [ 19 ] The decision was based on the principle of exegetical neutrality : that "if at any point in a text there is a passage that raises for the native speaker legitimate questions of exegesis , then, if at all possible, a translator should strive to confront the reader of his version with the same questions of exegesis and not produce a version which in his mind resolves those questions". [ 20 ] The term 'meaning' best captures the standard German meaning of Bedeutung . However, while Frege's own use of the term can sound as odd in German for modern readers as when translated into English, the related term deuten does mean 'to point towards'. Though Bedeutung is not usually used with this etymological proximity in mind in German, German speakers can well make sense of Bedeutung as signifying 'reference', in the sense of it being what Bedeutung points, i.e. refers to. 
Moreover, 'meaning' captures Frege's early use of Bedeutung well, [ 21 ] and it would be problematic to translate Frege's early use as 'meaning' and his later use as 'reference', suggesting a change in terminology not evident in the original German. The Greek philosopher Antisthenes , a pupil of Socrates , apparently distinguished "a general object that can be aligned with the meaning of the utterance" from "a particular object of extensional reference". According to Susan Prince, this "suggests that he makes a distinction between sense and reference". [ 22 ] : 20 The principal basis of Prince's claim is a passage in Alexander of Aphrodisias ' "Comments on Aristotle 's 'Topics'" that draws a three-way distinction. The Stoic doctrine of lekta refers to a correspondence between speech and the object referred to in speech, as distinct from the speech itself. British classicist R. W. Sharples cites lekta as an anticipation of the distinction between sense and reference. [ 24 ] : 23 The sense-reference distinction is commonly confused with that between connotation and denotation , which originates with John Stuart Mill . [ 25 ] According to Mill, a common term like 'white' denotes all white things, such as snow and paper. [ 26 ] : 11–13 But according to Frege, a common term does not refer to any individual white thing, but rather to an abstract concept ( Begriff ). We must distinguish between the relation of reference, which holds between a proper name and the object it refers to, such as between the name 'Earth' and the planet Earth , and the relation of 'falling under', such as when the Earth falls under the concept planet . The relation of a proper name to the object it designates is direct, whereas a word like 'planet' does not have such a direct relation to the Earth; instead, it refers to a concept under which the Earth falls. Moreover, judging of anything that it falls under this concept is not in any way part of our knowledge of what the word 'planet' means. 
[ 27 ] The distinction between connotation and denotation is closer to that between concept and object than to that between 'sense' and 'reference'.
https://en.wikipedia.org/wiki/Sense_and_reference
Sensemaking or sense-making is the process by which people give meaning to their collective experiences. It has been defined as "the ongoing retrospective development of plausible images that rationalize what people are doing" ( Weick, Sutcliffe, & Obstfeld, 2005, p. 409 ). The concept was introduced to organizational studies by Karl E. Weick in the late 1960s and has affected both theory and practice. Weick intended to encourage a shift away from the traditional focus of organization theorists on decision-making and towards the processes that constitute the meaning of the decisions that are enacted in behavior. There is no single agreed-upon definition of sensemaking, but there is consensus that it is a process that allows people to understand ambiguous, equivocal or confusing issues or events. [ 1 ] : 266 Disagreements about the meaning of sensemaking exist around whether sensemaking is a mental process within the individual, a social process or a process that occurs as part of discussion; whether it is an ongoing daily process or only occurs in response to rare events; and whether sensemaking describes past events or considers the future. [ 1 ] : 268 Sensemaking further refers not only to a process, be it mental or social, that occurs in organizations but also a broader perspective on what organization is and what it means for people to be organized. [ 2 ] Overall five distinct schools of sensemaking/sense-making have been identified. [ 3 ] In 1966, Daniel Katz and Robert L. Kahn published The Social Psychology of Organizations ( Katz & Kahn, 1966 ). In 1969, Karl Weick played on this title in his book The Social Psychology of Organizing , shifting the focus from organizations as entities to organizing as an activity. It was especially the second edition, published ten years later ( Weick, 1979 ) that established Weick's approach in organization studies. 
Weick identified seven properties of sensemaking ( Weick, 1995 ): Each of these seven aspects interact and intertwine as individuals interpret events. Their interpretations become evident through narratives – written and spoken – which convey the sense they have made of events ( Currie & Brown, 2003 ), as well as through diagrammatic reasoning and associated material practices ( Huff, 1990 ; Stigliani & Ravasi, 2012 ). The rise of the sensemaking perspective marks a shift of focus in organization studies from how decisions shape organizations to how meaning drives organizing ( Weick, 1993 ). The aim was to focus attention on the largely cognitive activity of framing experienced situations as meaningful. It is a collaborative process of creating shared awareness and understanding out of different individuals' perspectives and varied interests. Weick described the social and historical context in the late 1960s as being important in this shift from decision-making to sensemaking: "These ideas coincided with a growing societal realization that administrators in Washington were trying to justify committing more resources to a war in Vietnam that the United States was clearly losing. One could not escape the feeling that rationality had a demonstrable retrospective core, that people looked forward with anxiety and put the best face on it after the fact, and that the vaunted prospective skills of McNamara’s “whiz kids” in the Pentagon were a chimera. It was easy to put words to this mess. People create their own fate. Organizations enact their own environments. The point seemed obvious." [ 4 ] Sensemaking scholars are less interested in the intricacies of planning than in the details of action ( Weick, 1995, p. 55 ). The sensemaking approach is often used to provide insight into factors that surface as organizations address either uncertain or ambiguous situations ( Weick 1988 , 1993 ; Weick et al., 2005 ), including deep uncertainty. 
[ 5 ] Beginning in the 1980s with an influential re-analysis of the Bhopal disaster , Weick's name has come to be associated with the study of the situated sensemaking that influences the outcomes of disasters ( Weick 1993 ). A 2014 review of the literature on sensemaking in organizations identified a dozen different categories of sensemaking and a half-dozen sensemaking related concepts ( Maitlis & Christianson, 2014 ). The categories of sensemaking included: constituent-minded, cultural, ecological, environmental, future-oriented, intercultural, interpersonal, market, political, prosocial, prospective, and resourceful. The sensemaking-related concepts included: sensebreaking, sensedemanding, sense-exchanging, sensegiving, sensehiding, and sense specification. Sensemaking is central to the conceptual framework for military network-centric operations (NCO) espoused by the United States Department of Defense ( Garstka and Alberts, 2004 ). In a joint/coalition military environment, sensemaking is complicated by numerous technical, social, organizational, cultural, and operational factors. A central hypothesis of NCO is that the quality of shared sensemaking and collaboration will be better in a "robustly networked" force than in a platform-centric force, empowering people to make better decisions. According to NCO theory, there is a mutually-reinforcing relationship among and between individual sensemaking, shared sensemaking, and collaboration. In defense applications, sensemaking theorists have primarily focused on how shared awareness and understanding are developed within command and control organizations at the operational level. At the tactical level, individuals monitor and assess their immediate physical environment in order to predict where different elements will be in the next moment. 
At the operational level, where the situation is far broader, more complex and more uncertain, and evolves over hours and days, the organization must collectively make sense of enemy dispositions, intentions and capabilities, as well as anticipate the (often unintended) effects of own-force actions on a complex system of systems . Sensemaking has been studied in the patient safety literature ( Battles, et al. 2006 ). It has been used as a conceptual framework for identifying and detecting high risk patient situations. For example, Rhodes, et al. (2015) examined sensemaking and the co-production of safety of primary medical care patients.
https://en.wikipedia.org/wiki/Sensemaking
Sensible heat is heat exchanged by a body or thermodynamic system in which the exchange of heat changes the temperature of the body or system, and some macroscopic variables of the body or system, but leaves unchanged certain other macroscopic variables of the body or system, such as volume or pressure. [ 1 ] [ 2 ] [ 3 ] [ 4 ] The term is used in contrast to latent heat , which is the amount of heat exchanged that is hidden, meaning it occurs without change of temperature. For example, during a phase change such as the melting of ice, the temperature of the system containing the ice and the liquid is constant until all ice has melted. Latent and sensible heat are complementary terms. The sensible heat of a thermodynamic process may be calculated as the product of the body's mass (m), its specific heat capacity (c) and the change in temperature (ΔT): Q = m c ΔT. Sensible heat and latent heat are not special forms of energy. Rather, they describe exchanges of heat under conditions specified in terms of their effect on a material or a thermodynamic system. In the writings of the early scientists who provided the foundations of thermodynamics , sensible heat had a clear meaning in calorimetry . James Prescott Joule characterized it in 1847 as an energy that was indicated by the thermometer. [ 5 ] Both sensible and latent heats are observed in many processes while transporting energy in nature. Latent heat is associated with changes of state, measured at constant temperature, especially the phase changes of atmospheric water vapor , mostly vaporization and condensation , whereas sensible heat directly affects the temperature of the atmosphere. In meteorology, the term 'sensible heat flux' means the conductive heat flux from the Earth's surface to the atmosphere . [ 6 ] It is an important component of Earth's surface energy budget. Sensible heat flux is commonly measured with the eddy covariance method.
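As a minimal worked example of the product relation above, using an approximate specific heat for liquid water of c ≈ 4186 J/(kg·K):

```python
def sensible_heat(mass_kg: float, c_j_per_kg_k: float, delta_t_k: float) -> float:
    """Q = m * c * dT: heat whose exchange shows up as a temperature change."""
    return mass_kg * c_j_per_kg_k * delta_t_k

# Heating 2 kg of liquid water by 10 K:
q = sensible_heat(2.0, 4186.0, 10.0)
print(f"{q:.0f} J")  # 83720 J
```

The same mass and temperature change in a different material scales the result by that material's specific heat capacity.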
https://en.wikipedia.org/wiki/Sensible_heat
Sensing of phage-triggered ion cascades ( SEPTIC ) [ 1 ] [ 2 ] is a prompt bacterium identification method based on fluctuation-enhanced sensing in fluid medium. The advantages of SEPTIC are the specificity and speed (only a few minutes are needed) offered by the characteristics of phage infection, the sensitivity due to fluctuation-enhanced sensing , and the durability originating from the robustness of phages. An idealized SEPTIC device may be as small as a pen and may be able to identify a library of different bacteria within a measurement window of a few minutes. SEPTIC utilizes bacteriophages as indicators to trigger an ionic response by the bacteria during phage infection. [ 3 ] [ 4 ] [ 5 ] Microscopic metal electrodes detect the random fluctuations of the electrochemical potential due to the stochastic fluctuations of the ionic concentration gradient caused by the phage infection of bacteria. The electrode pair in the electrolyte, with different local ion concentrations in the vicinity of the electrodes, forms an electrochemical cell that produces a voltage depending on the instantaneous ratio of local concentrations. [ 6 ] While the concentrations are fluctuating, an alternating random voltage difference appears between the electrodes. According to the experimental studies, whenever there is an ongoing phage infection, the power density spectrum of the measured electronic noise has a 1/f² shape, whereas without phage infection it has a 1/f noise spectrum. In order to achieve high sensitivity, a DC electric field attracts the infected bacteria (which are charged due to ion imbalance) to the electrode with the relevant polarization. [ 5 ] 
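The 1/f² spectrum reported during active infection is the signature of Brownian (random-walk-like) fluctuations. The following synthetic sketch, using generated data rather than SEPTIC measurements, shows how such a spectral slope can be estimated from a signal's periodogram:

```python
import numpy as np

# Synthetic illustration (not SEPTIC data): a random walk has a power
# spectrum close to 1/f^2, the shape reported during active infection.
rng = np.random.default_rng(0)
n = 2 ** 14
walk = np.cumsum(rng.standard_normal(n))  # Brownian-like signal

psd = np.abs(np.fft.rfft(walk)) ** 2      # periodogram
freqs = np.fft.rfftfreq(n)
positive = freqs > 0                      # drop the DC bin

# Least-squares slope of log(PSD) vs log(f); for 1/f^2 noise it comes out
# near -2, whereas white noise would give a slope near 0.
slope = np.polyfit(np.log(freqs[positive]), np.log(psd[positive]), 1)[0]
print(f"spectral slope ~ {slope:.1f}")
```

Distinguishing a slope near -2 from one near -1 is, in essence, the classification task the SEPTIC read-out performs on the measured electrode noise.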
The SEPTIC concept was first conceived by Laszlo B. Kish and Maria Dobozi-King [ 2 ] in 2004, and developed and demonstrated at the Electrical and Computer Engineering department of Texas A&M University in collaboration with Mosong Cheng, Ry Young, Sergey M. Bezrukov (NIH), and Bob Biard. [ 7 ] A new, related scheme, BIPIF, [ 6 ] recently conceived and analyzed by Laszlo B. Kish and Gabor Schmera (SPAWAR, United States Navy ), utilizes the AC impedance fluctuations in related arrangements, and it promises higher sensitivity and less dependence on the electrodes.
https://en.wikipedia.org/wiki/Sensing_of_phage-triggered_ion_cascades
A sensitive flame is a gas flame which under suitable adjustment of pressure resonates readily with sounds or air vibrations in the vicinity. [ 1 ] The American scientist John LeConte and the English physicist William Fletcher Barrett both recorded the effect that a shrill note had upon a gas flame issuing from a tapering jet. The phenomenon caught the attention of the Irish physicist John Tyndall , who gave a lecture on the process to the Royal Institution in January 1867. [ 2 ] While not necessary to observe the effect, a higher flame temperature allows for easier observation. Sounds at lower to mid-range frequencies have little to no effect on the flame. However, shrill noises at high frequencies produce noticeable effects on the flame. Even the ticking of a pocket watch was observed as producing a high enough frequency to affect the flame. [ 3 ]
https://en.wikipedia.org/wiki/Sensitive_flame
Sensitivity auditing is an extension of sensitivity analysis for use in policy-relevant modelling studies. [ 1 ] Its use is recommended, inter alia, in the European Commission Impact assessment guidelines [ 2 ] and by the European Science Academies, [ 3 ] when a sensitivity analysis (SA) of a model-based study is meant to demonstrate the robustness of the evidence provided by the model in a context whereby the inference feeds into a policy or decision-making process. In settings where scientific work feeds into policy, the framing of the analysis, its institutional context, and the motivations of its author may become highly relevant, and a pure SA, with its focus on quantified uncertainty, may be insufficient. The emphasis on the framing may, among other things, derive from the relevance of the policy study to different constituencies that are characterized by different norms and values, and hence by a different story about 'what the problem is' and foremost about 'who is telling the story'. Most often the framing includes implicit assumptions, ranging from political (e.g. which group needs to be protected) all the way to technical (e.g. which variable can be treated as a constant). In order to take these concerns into due consideration, sensitivity auditing extends the instruments of sensitivity analysis to provide an assessment of the entire knowledge- and model-generating process. It takes inspiration from NUSAP , [ 4 ] a method used to communicate the quality of quantitative information with the generation of 'Pedigrees' of numbers. Likewise, sensitivity auditing has been developed to provide pedigrees of models and model-based inferences. Sensitivity auditing is especially suitable in an adversarial context, where not only the nature of the evidence, but also the degree of certainty and uncertainty associated with the evidence, is the subject of partisan interests. These are the settings considered in Post-normal science [ 5 ] or in Mode 2 [ 6 ] science.
Post-normal science (PNS) is a concept developed by Silvio Funtowicz and Jerome Ravetz , [ 5 ] [ 7 ] [ 8 ] which proposes a methodology of inquiry that is appropriate when “facts are uncertain, values in dispute, stakes high and decisions urgent” (Funtowicz and Ravetz, 1992: [ 8 ] 251–273). Mode 2 Science, coined in 1994 by Gibbons et al., refers to a mode of production of scientific knowledge that is context-driven, problem-focused and interdisciplinary. Sensitivity auditing consists of a seven-point checklist:
1. Use Math Wisely: Ask if complex math is being used when simpler math could do the job. Check if the model is being stretched beyond its intended use.
2. Look for Assumptions: Find out what assumptions were made in the study, and see if they were clearly stated or hidden.
3. Avoid Garbage In, Garbage Out: Check if the data used in the model were manipulated to make the results look more certain than they really are, or if they were made overly uncertain to avoid regulation.
4. Prepare for Criticism: It's better to find problems in your study before others do. Do robust checks for uncertainty and sensitivity before publishing.
5. Be Transparent: Don't keep your model a secret. Make it clear and understandable to the public.
6. Focus on the Right Problem: Ensure your model is addressing the correct issue and not just solving a problem that isn't really there.
7. Do Thorough Analyses: Conduct in-depth tests to measure uncertainty and sensitivity using the best methods available.
These rules are meant to help an analyst anticipate criticism, in particular relating to model-based inference feeding into an impact assessment. What questions and objections may be received by the modeler? Sensitivity auditing is described in the European Commission Guidelines for impact assessment. [ 2 ] Relevant excerpts are (pp.
392): The European Academies’ association of science for policy SAPEA describes in detail sensitivity auditing in its 2019 report entitled “ Making sense of science for policy under conditions of complexity and uncertainty ”. [ 3 ]
https://en.wikipedia.org/wiki/Sensitivity_auditing
The sensitivity index or discriminability index or detectability index is a dimensionless statistic used in signal detection theory . A higher index indicates that the signal can be more readily detected. The discriminability index is the separation between the means of two distributions (typically the signal and the noise distributions), in units of the standard deviation . For two univariate distributions a {\displaystyle a} and b {\displaystyle b} with the same standard deviation, it is denoted by d ′ {\displaystyle d'} ('dee-prime'): d ′ = | μ b − μ a | σ {\displaystyle d'={\frac {\left|\mu _{b}-\mu _{a}\right|}{\sigma }}} . In higher dimensions, i.e. with two multivariate distributions with the same variance-covariance matrix Σ {\displaystyle \mathbf {\Sigma } } , (whose symmetric square-root, the standard deviation matrix, is S {\displaystyle \mathbf {S} } ), this generalizes to the Mahalanobis distance between the two distributions: where σ μ = 1 / ‖ S − 1 μ ‖ {\displaystyle \sigma _{\boldsymbol {\mu }}=1/\lVert \mathbf {S} ^{-1}{\boldsymbol {\mu }}\rVert } is the 1d slice of the sd along the unit vector μ {\displaystyle {\boldsymbol {\mu }}} through the means, i.e. the d ′ {\displaystyle d'} equals the d ′ {\displaystyle d'} along the 1d slice through the means. [ 1 ] For two bivariate distributions with equal variance-covariance, this is given by: where ρ {\displaystyle \rho } is the correlation coefficient, and here d x ′ = μ b x − μ a x σ x {\displaystyle d'_{x}={\frac {{\mu _{b}}_{x}-{\mu _{a}}_{x}}{\sigma _{x}}}} and d y ′ = μ b y − μ a y σ y {\displaystyle d'_{y}={\frac {{\mu _{b}}_{y}-{\mu _{a}}_{y}}{\sigma _{y}}}} , i.e. including the signs of the mean differences instead of the absolute. [ 1 ] d ′ {\displaystyle d'} is also estimated as Z ( hit rate ) − Z ( false alarm rate ) {\displaystyle Z({\text{hit rate}})-Z({\text{false alarm rate}})} . 
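The estimate d' = Z(hit rate) − Z(false alarm rate) mentioned above can be computed directly with the inverse standard-normal CDF; a small Python sketch (the rates are made-up example numbers):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """d' = Z(hit rate) - Z(false-alarm rate), Z = inverse standard-normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A hypothetical detector with 84% hits and 16% false alarms gives d' close to 2;
# a detector at chance (50% / 50%) gives d' = 0.
example = d_prime(0.84, 0.16)
```

Rates of exactly 0 or 1 make Z diverge, so in practice they are usually nudged slightly toward 0.5 before applying the formula.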
[ 2 ] : 8 When the two distributions have different standard deviations (or in general dimensions, different covariance matrices), there exist several contending indices, all of which reduce to d ′ {\displaystyle d'} for equal variance/covariance. This is the maximum (Bayes-optimal) discriminability index for two distributions, based on the amount of their overlap, i.e. the optimal (Bayes) error of classification e b {\displaystyle e_{b}} by an ideal observer, or its complement, the optimal accuracy a b {\displaystyle a_{b}} : where Z {\displaystyle Z} is the inverse cumulative distribution function of the standard normal. The Bayes discriminability between univariate or multivariate normal distributions can be numerically computed [ 1 ] ( Matlab code ), and may also be used as an approximation when the distributions are close to normal. d b ′ {\displaystyle d'_{b}} is a positive-definite statistical distance measure that is free of assumptions about the distributions, like the Kullback-Leibler divergence D KL {\displaystyle D_{\text{KL}}} . D KL ( a , b ) {\displaystyle D_{\text{KL}}(a,b)} is asymmetric, whereas d b ′ ( a , b ) {\displaystyle d'_{b}(a,b)} is symmetric for the two distributions. However, d b ′ {\displaystyle d'_{b}} does not satisfy the triangle inequality, so it is not a full metric. [ 1 ] In particular, for a yes/no task between two univariate normal distributions with means μ a , μ b {\displaystyle \mu _{a},\mu _{b}} and variances v a > v b {\displaystyle v_{a}>v_{b}} , the Bayes-optimal classification accuracies are: [ 1 ] where χ ′ 2 {\displaystyle \chi '^{2}} denotes the non-central chi-squared distribution , λ = ( μ a − μ b v a − v b ) 2 {\displaystyle \lambda =\left({\frac {\mu _{a}-\mu _{b}}{v_{a}-v_{b}}}\right)^{2}} , and c = λ + ln ⁡ v a − ln ⁡ v b v a − v b {\displaystyle c=\lambda +{\frac {\ln v_{a}-\ln v_{b}}{v_{a}-v_{b}}}} . The Bayes discriminability d b ′ = 2 Z ( p ( A | a ) + p ( B | b ) 2 ) . 
{\displaystyle d'_{b}=2Z\left({\frac {p\left(A|a\right)+p\left(B|b\right)}{2}}\right).} d b ′ {\displaystyle d'_{b}} can also be computed from the ROC curve of a yes/no task between two univariate normal distributions with a single shifting criterion. It can also be computed from the ROC curve of any two distributions (in any number of variables) with a shifting likelihood-ratio, by locating the point on the ROC curve that is farthest from the diagonal. [ 1 ] For a two-interval task between these distributions, the optimal accuracy is a b = p ( χ ~ w , k , λ , 0 , 0 2 > 0 ) {\displaystyle a_{b}=p\left({\tilde {\chi }}_{{\boldsymbol {w}},{\boldsymbol {k}},{\boldsymbol {\lambda }},0,0}^{2}>0\right)} ( χ ~ 2 {\displaystyle {\tilde {\chi }}^{2}} denotes the generalized chi-squared distribution ), where w = [ σ s 2 − σ n 2 ] , k = [ 1 1 ] , λ = μ s − μ n σ s 2 − σ n 2 [ σ s 2 σ n 2 ] {\displaystyle {\boldsymbol {w}}={\begin{bmatrix}\sigma _{s}^{2}&-\sigma _{n}^{2}\end{bmatrix}},\;{\boldsymbol {k}}={\begin{bmatrix}1&1\end{bmatrix}},\;{\boldsymbol {\lambda }}={\frac {\mu _{s}-\mu _{n}}{\sigma _{s}^{2}-\sigma _{n}^{2}}}{\begin{bmatrix}\sigma _{s}^{2}&\sigma _{n}^{2}\end{bmatrix}}} . [ 1 ] The Bayes discriminability d b ′ = 2 Z ( a b ) {\displaystyle d'_{b}=2Z\left(a_{b}\right)} . A common approximate (i.e. sub-optimal) discriminability index that has a closed-form is to take the average of the variances, i.e. the rms of the two standard deviations: d a ′ = | μ a − μ b | / σ rms {\displaystyle d'_{a}=\left\vert \mu _{a}-\mu _{b}\right\vert /\sigma _{\text{rms}}} [ 3 ] (also denoted by d a {\displaystyle d_{a}} ). It is 2 {\displaystyle {\sqrt {2}}} times the z {\displaystyle z} -score of the area under the receiver operating characteristic curve (AUC) of a single-criterion observer. This index is extended to general dimensions as the Mahalanobis distance using the pooled covariance, i.e. 
with S rms = [ ( Σ a + Σ b ) / 2 ] 1 2 {\displaystyle \mathbf {S} _{\text{rms}}=\left[\left(\mathbf {\Sigma } _{a}+\mathbf {\Sigma } _{b}\right)/2\right]^{\frac {1}{2}}} as the common sd matrix. [ 1 ] Another index is d e ′ = | μ a − μ b | / σ avg {\displaystyle d'_{e}=\left\vert \mu _{a}-\mu _{b}\right\vert /\sigma _{\text{avg}}} , extended to general dimensions using S avg = ( S a + S b ) / 2 {\displaystyle \mathbf {S} _{\text{avg}}=\left(\mathbf {S} _{a}+\mathbf {S} _{b}\right)/2} as the common sd matrix. [ 1 ] It has been shown that for two univariate normal distributions, d a ′ ≤ d e ′ ≤ d b ′ {\displaystyle d'_{a}\leq d'_{e}\leq d'_{b}} , and for multivariate normal distributions, d a ′ ≤ d e ′ {\displaystyle d'_{a}\leq d'_{e}} still. [ 1 ] Thus, d a ′ {\displaystyle d'_{a}} and d e ′ {\displaystyle d'_{e}} underestimate the maximum discriminability d b ′ {\displaystyle d'_{b}} of univariate normal distributions. d a ′ {\displaystyle d'_{a}} can underestimate d b ′ {\displaystyle d'_{b}} by a maximum of approximately 30%. At the limit of high discriminability for univariate normal distributions, d e ′ {\displaystyle d'_{e}} converges to d b ′ {\displaystyle d'_{b}} . These results often hold true in higher dimensions, but not always. [ 1 ] Simpson and Fitter [ 3 ] promoted d a ′ {\displaystyle d'_{a}} as the best index, particularly for two-interval tasks, but Das and Geisler [ 1 ] have shown that d b ′ {\displaystyle d'_{b}} is the optimal discriminability in all cases, and d e ′ {\displaystyle d'_{e}} is often a better closed-form approximation than d a ′ {\displaystyle d'_{a}} , even for two-interval tasks. The approximate index d g m ′ {\displaystyle d'_{gm}} , which uses the geometric mean of the sd's, is less than d b ′ {\displaystyle d'_{b}} at small discriminability, but greater at large discriminability. 
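The two closed-form approximations discussed above, d'_a (rms of the two standard deviations) and d'_e (their average), are easy to compare numerically for the univariate case; a sketch with illustrative values:

```python
import math

def d_a(mu_a, mu_b, sd_a, sd_b):
    """Approximate index using the rms of the two standard deviations."""
    return abs(mu_a - mu_b) / math.sqrt((sd_a ** 2 + sd_b ** 2) / 2)

def d_e(mu_a, mu_b, sd_a, sd_b):
    """Approximate index using the average of the two standard deviations."""
    return abs(mu_a - mu_b) / ((sd_a + sd_b) / 2)

# With sds 1 and 3: rms sd = sqrt(5) > mean sd = 2, so d_a < d_e,
# consistent with d'_a <= d'_e; they coincide when the sds are equal.
```

Both reduce to the ordinary d' when sd_a == sd_b, matching the equal-variance definition.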
[ 1 ] In general, the contribution to the total discriminability by each dimension or feature may be measured using the amount by which the discriminability drops when that dimension is removed. If the total Bayes discriminability is d ′ {\displaystyle d'} and the Bayes discriminability with dimension i {\displaystyle i} removed is d − i ′ {\displaystyle d'_{-i}} , we can define the contribution of dimension i {\displaystyle i} as d ′ 2 − d − i ′ 2 {\displaystyle {\sqrt {d'^{2}-{d'_{-i}}^{2}}}} . This is the same as the individual discriminability of dimension i {\displaystyle i} when the covariance matrices are equal and diagonal, but in the other cases, this measure more accurately reflects the contribution of a dimension than its individual discriminability. [ 1 ] We may sometimes want to scale the discriminability of two data distributions by moving them closer or farther apart. One such case is when we are modeling a detection or classification task, and the model performance exceeds that of the subject or observed data. In that case, we can move the model variable distributions closer together so that it matches the observed performance, while also predicting which specific data points should start overlapping and be misclassified. There are several ways of doing this. One is to compute the mean vector and covariance matrix of the two distributions, then effect a linear transformation to interpolate the mean and sd matrix (square root of the covariance matrix) of one of the distributions towards the other. [ 1 ] Another way is by computing the decision variables of the data points (the log likelihood ratio that a point belongs to one distribution vs the other) under a multinormal model, then moving these decision variables closer together or farther apart. [ 1 ]
https://en.wikipedia.org/wiki/Sensitivity_index
In immunology , the term sensitization is used for the following concepts: [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] Those particles themselves are biologically inactive except for serving as antigens against the primary antibodies or as carriers of the antigens . [ 5 ] When antibodies are used in the preparation, they are bound to the erythrocytes or particles by their Fab regions. Thus the step that follows requires secondary antibodies against those primary antibodies; that is, the secondary antibodies must have binding specificity to the primary antibodies, including their Fc regions .
https://en.wikipedia.org/wiki/Sensitization_(immunology)
A sensor is often defined as a device that receives and responds to a signal or stimulus. The stimulus is the quantity, property, or condition that is sensed and converted into an electrical signal. [ 1 ] In the broadest definition, a sensor is a device, module, machine, or subsystem that detects events or changes in its environment and sends the information to other electronics, frequently a computer processor. Sensors are used in everyday objects such as touch-sensitive elevator buttons ( tactile sensor ) and lamps which dim or brighten by touching the base, and in innumerable applications of which most people are never aware. With advances in micromachinery and easy-to-use microcontroller platforms, the uses of sensors have expanded beyond the traditional fields of temperature, pressure and flow measurement, [ 2 ] for example into MARG sensors . Analog sensors such as potentiometers and force-sensing resistors are still widely used. Their applications include manufacturing and machinery, airplanes and aerospace, cars, medicine, robotics and many other aspects of our day-to-day life. There is a wide range of other sensors that measure chemical and physical properties of materials, including optical sensors for refractive index measurement, vibrational sensors for fluid viscosity measurement, and electro-chemical sensors for monitoring pH of fluids. A sensor's sensitivity indicates how much its output changes when the input quantity it measures changes. For instance, if the mercury in a thermometer moves 1 cm when the temperature changes by 1 °C, its sensitivity is 1 cm/°C (it is basically the slope dy/dx assuming a linear characteristic). Some sensors can also affect what they measure; for instance, a room temperature thermometer inserted into a hot cup of liquid cools the liquid while the liquid heats the thermometer. 
Sensors are usually designed to have a small effect on what is measured; making the sensor smaller often improves this and may introduce other advantages. [ 3 ] Technological progress allows more and more sensors to be manufactured on a microscopic scale as microsensors using MEMS technology. In most cases, a microsensor reaches a significantly faster measurement time and higher sensitivity compared with macroscopic approaches. [ 3 ] [ 4 ] Due to the increasing demand for rapid, affordable and reliable information in today's world, disposable sensors—low-cost and easy-to-use devices for short-term monitoring or single-shot measurements—have recently gained growing importance. Using this class of sensors, critical analytical information can be obtained by anyone, anywhere and at any time, without the need for recalibration and worrying about contamination. [ 5 ] A good sensor obeys the following rules: [ 5 ] Most sensors have a linear transfer function . The sensitivity is then defined as the ratio between the output signal and measured property. For example, if a sensor measures temperature and has a voltage output, the sensitivity is constant with the units [V/K]. The sensitivity is the slope of the transfer function. Converting the sensor's electrical output (for example V) to the measured units (for example K) requires dividing the electrical output by the slope (or multiplying by its reciprocal). In addition, an offset is frequently added or subtracted. For example, −40 must be added to the output if 0 V output corresponds to −40 °C input. For an analog sensor signal to be processed or used in digital equipment, it needs to be converted to a digital signal, using an analog-to-digital converter . Since sensors cannot replicate an ideal transfer function , several types of deviations can occur which limit sensor accuracy : All these deviations can be classified as systematic errors or random errors . 
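Inverting a linear transfer function, as described above, is a division by the slope followed by an offset; a hypothetical Python sketch (the 10 mV/°C sensitivity and the −40 °C offset are made-up values for illustration):

```python
def to_measured_units(output_v, sensitivity_v_per_unit, offset_units=0.0):
    """Invert a linear transfer function: divide by the slope, then add the offset."""
    return output_v / sensitivity_v_per_unit + offset_units

# Hypothetical temperature sensor: 10 mV/degC, and 0 V output corresponds to -40 degC.
temp_c = to_measured_units(0.5, 0.010, offset_units=-40.0)  # 0.5 V -> about 10 degC
```

Real drivers additionally correct for the deviations listed below (offset drift, nonlinearity, noise) before or after this conversion.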
Systematic errors can sometimes be compensated for by means of some kind of calibration strategy. Noise is a random error that can be reduced by signal processing , such as filtering, usually at the expense of the dynamic behavior of the sensor. The sensor resolution or measurement resolution is the smallest change that can be detected in the quantity that is being measured. The resolution of a sensor with a digital output is usually the numerical resolution of the digital output. The resolution is related to the precision with which the measurement is made, but they are not the same thing. A sensor's accuracy may be considerably worse than its resolution. A chemical sensor is a self-contained analytical device that can provide information about the chemical composition of its environment, that is, a liquid or a gas phase . [ 6 ] [ 7 ] The information is provided in the form of a measurable physical signal that is correlated with the concentration of a certain chemical species (termed the analyte ). Two main steps are involved in the functioning of a chemical sensor, namely, recognition and transduction . In the recognition step, analyte molecules interact selectively with receptor molecules or sites included in the structure of the recognition element of the sensor. Consequently, a characteristic physical parameter varies and this variation is reported by means of an integrated transducer that generates the output signal. A chemical sensor based on recognition material of biological nature is a biosensor . However, as synthetic biomimetic materials will to some extent substitute for recognition biomaterials, a sharp distinction between a biosensor and a standard chemical sensor is superfluous. Typical biomimetic materials used in sensor development are molecularly imprinted polymers and aptamers . 
[ 8 ] In biomedicine and biotechnology , sensors which detect analytes thanks to a biological component, such as cells, protein, nucleic acid or biomimetic polymers , are called biosensors . A non-biological sensor, even an organic one (carbon chemistry), for biological analytes is referred to as a sensor or nanosensor . This terminology applies for both in-vitro and in vivo applications. The encapsulation of the biological component in biosensors presents a slightly different problem than ordinary sensors; this can either be done by means of a semipermeable barrier , such as a dialysis membrane or a hydrogel , or a 3D polymer matrix, which either physically constrains the sensing macromolecule or chemically constrains the macromolecule by binding it to the scaffold. Neuromorphic sensors are sensors that physically mimic structures and functions of biological neural entities. [ 13 ] One example of this is the event camera . The MOSFET was invented at Bell Labs between 1955 and 1960. [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] MOSFET sensors (MOS sensors) were later developed, and they have since been widely used to measure physical , chemical , biological and environmental parameters. [ 20 ] The earliest MOSFET sensors include the open-gate field-effect transistor (OGFET) introduced by Johannessen in 1970, [ 20 ] the ion-sensitive field-effect transistor (ISFET) invented by Piet Bergveld in 1970, [ 21 ] the adsorption FET (ADFET) patented by P.F. Cox in 1974, and a hydrogen -sensitive MOSFET demonstrated by I. Lundstrom, M.S. Shivaraman, C.S. Svenson and L. Lundkvist in 1975. [ 20 ] The ISFET is a special type of MOSFET with a gate at a certain distance, [ 20 ] and where the metal gate is replaced by an ion -sensitive membrane , electrolyte solution and reference electrode . 
[ 22 ] The ISFET is widely used in biomedical applications, such as the detection of DNA hybridization , biomarker detection from blood , antibody detection, glucose measurement, pH sensing, and genetic technology . [ 22 ] By the mid-1980s, numerous other MOSFET sensors had been developed, including the gas sensor FET (GASFET), surface accessible FET (SAFET), charge flow transistor (CFT), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), biosensor FET (BioFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). [ 20 ] By the early 2000s, BioFET types such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed. [ 22 ] MOS technology is the basis for modern image sensors , including the charge-coupled device (CCD) and the CMOS active-pixel sensor (CMOS sensor), used in digital imaging and digital cameras . [ 23 ] Willard Boyle and George E. Smith developed the CCD in 1969. While researching the MOS process, they realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. [ 23 ] The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting . [ 24 ] The MOS active-pixel sensor (APS) was developed by Tsutomu Nakamura at Olympus in 1985. [ 25 ] The CMOS active-pixel sensor was later developed by Eric Fossum and his team in the early 1990s. [ 26 ] MOS image sensors are widely used in optical mouse technology. The first optical mouse, invented by Richard F. Lyon at Xerox in 1980, used a 5 μm NMOS sensor chip. 
[ 27 ] [ 28 ] Since the first commercial optical mouse, the IntelliMouse introduced in 1999, most optical mouse devices use CMOS sensors. [ 29 ] MOS monitoring sensors are used for house monitoring , office and agriculture monitoring, traffic monitoring (including car speed , traffic jams , and traffic accidents ), weather monitoring (such as for rain , wind , lightning and storms ), defense monitoring, and monitoring temperature , humidity , air pollution , fire , health , security and lighting . [ 31 ] MOS gas detector sensors are used to detect carbon monoxide , sulfur dioxide , hydrogen sulfide , ammonia , and other gas substances. [ 32 ] Other MOS sensors include intelligent sensors [ 33 ] and wireless sensor network (WSN) technology. [ 34 ] Typical modern CPUs , GPUs and SoCs usually integrate electronic sensors to detect chip temperature, voltage and power. [ 35 ]
https://en.wikipedia.org/wiki/Sensor
In industrial automation , sensor-based sorting is an umbrella term for all applications in which particles are detected using a sensor technique and rejected by an amplified mechanical , hydraulic or pneumatic process. The technique is generally applied in mining , recycling and food processing and used in the particle size range between 0.5 and 300 mm (0.020 and 11.811 in). Since sensor-based sorting is a single particle separation technology, the throughput is proportional to the average particle size and weight fed onto the machine. The main subprocesses of sensor-based sorting are material conditioning, material presentation, detection, data processing and separation. [ 1 ] There are two types of sensor-based sorters: the chute type and the belt type. For both types the first step in acceleration is spreading out the particles by a vibrating feeder followed by either a fast belt or a chute. On the belt type the sensor usually detects the particles horizontally while they pass it on the belt. For the chute type the material detection is usually done vertically while the material passes the sensor in a free fall. The data processing is done in real time by a computer. The computer transfers the result of the data processing to an ultra-fast ejection unit which, depending on the sorting decision, ejects a particle or lets it pass. [ 3 ] Sensor-based ore sorting is the terminology used in the mining industry. It is a physical coarse-particle separation technology usually applied in the size range of 25–100 mm (0.98–3.94 in). The aim is either to create a lumpy product in ferrous metals, coal or industrial minerals applications or to reject waste before it enters production bottlenecks and more expensive comminution and concentration steps in the process. In the majority of all mining processes, particles of sub-economic grade enter the traditional comminution, classification and concentration steps. 
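The detect-process-eject pipeline described above reduces, in the simplest case, to thresholding one sensor feature per particle to produce a yes/no ejection decision; a toy sketch (feature values, threshold and function names are all illustrative):

```python
def sort_particles(readings, threshold, eject_above=True):
    """For each per-particle sensor reading, decide whether the ejection
    unit fires (True) or the particle passes unhindered (False)."""
    return [(r > threshold) == eject_above for r in readings]

# Hypothetical per-particle sensor feature values; eject everything above 0.7:
decisions = sort_particles([0.2, 0.9, 0.5, 0.8], threshold=0.7)
# -> [False, True, False, True]
```

Industrial sorters derive the feature from the sensor data in real time (e.g. color, atomic-density or NIR signatures) and must also map each decision to the right ejector nozzle and firing time, which this sketch omits.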
If the amount of sub-economic material in the above-mentioned fraction is roughly 25% or more, there is good potential that sensor-based ore sorting is a technically and financially viable option. High added value can be achieved with relatively low capital expenditure , especially when increasing the productivity through downstream processing of higher grade feed and through increased overall recovery when rejecting deleterious waste. Sensor-based sorting is a coarse particle separation technology applied in mining for the dry separation of bulk materials . The functional principle does not limit the technology to any kind of segment or mineral application but makes the technical viability mainly depend on the liberation characteristics at the size range 25–100 mm (0.98–3.94 in), which is usually sorted. If physical liberation is present there is a good potential that one of the sensors available on industrial scale sorting machines can differentiate between valuable and non-valuable particles. The separation is based on features measured with a detection technology that are used to derive a yes/no decision for the actuation of usually pneumatic impulses. Sensor-based sorting is a disruptive technology in the mining industry which is universally applicable for all commodities. A comprehensive study examines both the technology's potential and its limitations, whilst providing a framework for application development and evaluation. All relevant aspects, from sampling to plant design and integration into mining and mineral processing systems, are covered. [ 4 ] Other terminologies used in the industry include ore sorting, automated sorting, electronic sorting, and optical sorting . Sensor-based sorting was introduced by Wotruba and Harbeck as an umbrella term for all applications in which particles are individually detected by a sensor technique and then rejected by an amplified mechanical, hydraulic or pneumatic process. 
[ 5 ] As for any other physical separation process, liberation is a prerequisite for possible separation. Liberation characteristics are well known and relatively easy to study for particulate lots in smaller size ranges, e.g. flotation feed and products. The analysis is essential for understanding the possible results of physical separation and relatively easy to conduct in the laboratory on a couple of dozen grams of sample, which can be studied using optical methods or systems such as the QEMSCAN . For larger particles above 10 mm (0.39 in), liberation is widely known for applications that are treated using density separation methods, such as coal or iron ore . Here, the washability analysis can be conducted on sample masses up to 10 tonnes in equipped laboratories. For sensor-based sorting, where laboratory methods can only describe the liberation characteristics when the describing feature is the density (e.g. iron ore, coal), hand counting, single-particle tests and bulk tests can reveal the liberation characteristics of a bulk material. Hereby, only single particle tests reveal the true liberation, while hand counting and bulk testing give a result which also incorporates the separation efficiency of the type of analysis. More information on the testing procedures used in technical feasibility evaluation can be found in the respective chapter. The oldest form of mineral processing, practiced since the Stone Age, is hand-picking. Georgius Agricola also describes hand-picking in his book De re metallica in 1556. [ 6 ] Sensor-based sorting is the automation and extension of hand picking. In addition to sensors that measure visible differences like color (and the further interpretation of the data regarding texture and shape), other sensors are available on industrial scale sorters that are able to measure differences invisible to the human eye (EM, XRT, NIR). The principles of the technology and the first machinery have been developed since the 1920s. 
[ 7 ] Nevertheless, it is a widely applied and standard technology only in the industrial minerals and gemstone segments. Mining is benefiting from the step-change developments in sensing and computing technologies and from machine development in the recycling and food processing industries. In 2002, Cutmore and Eberhard stated that the relatively small installed base of sensor-based sorters in mining is more a result of insufficient industry interest than of any technical barriers to their effective use. [ 8 ] Nowadays sensor-based sorting is beginning to reveal its potential in various applications in basically all segments of mineral production (industrial minerals, gemstones , base metals, precious metals , ferrous metals, fuel). The precondition is physical liberation in coarse size ranges (~10–300 mm (0.39–11.81 in)) to make physical separation possible. Either the product fraction or, more often, the waste fraction needs to be liberated. If liberation is present, there is good potential that one of the detection technologies available on today's sensor-based sorters can positively or negatively identify one of the two desired fractions. A size range coefficient of approximately three is advisable. As little undersized fine material as possible should enter the machines, to optimize availability. Moisture of the feed is not important if the material is sufficiently dewatered and the undersize fraction is efficiently removed. For surface detection technologies, spray water on the classifying screen is sometimes required to clean the surfaces. Surface detection technologies would otherwise measure the reflectance of the adhesions on the surface, and a correlation to the particle's content would not be given. During the more than 80 years of technical development of sensor-based ore sorting equipment, various types of machines have been developed. These include the channel-type, bucket-wheel-type and cone-type sorters.
[ 9 ] [ 10 ] The main machine types being installed in the mining industry today are belt-type and chute-type machines. Harbeck made a good comparison of the disadvantages and advantages of both systems for different sorting applications. [ 11 ] The selection of a machine type for an application depends on various case-dependent factors, including the detection system applied, particle size, moisture and yield, amongst others. The chute-type machine has a smaller footprint and fewer moving parts, which results in lower investment and operating costs. In general, it is more applicable to well liberated material and surface detection, because double-sided scanning is possible and more reliable on this system. The applicable top size of the chute-type machine is bigger, as material handling of particles up to 300 mm (12 in) is only technically viable on this setup. The belt-type machine is generally more applicable to smaller and to adhesive feed. In addition, the feed presentation is more stable, which makes it more applicable for more difficult and heterogeneous applications. The separation in both machine types comprises the following sub-processes: A sized screen fraction with a size range coefficient (d95/d5) of 2–5 (optimally 2–3) is fed onto a vibratory feeder, which has the function of creating a mono-layer by pre-accelerating the particles. A common misunderstanding in plant design is that the vibratory feeder can also be used to discharge from a buffer bunker; a separate unit needs to be applied, since the feed distribution is very important for the efficiency of the sensor-based sorter and different loads on the feeder change its position and vibration characteristics.
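The size range coefficient (d95/d5) mentioned above can be estimated directly from a measured particle size distribution. A minimal sketch, assuming invented sample data and hypothetical guideline bounds taken from the ranges quoted in the text:

```python
# Hypothetical illustration: checking the d95/d5 size range coefficient
# of a screened feed fraction. All particle sizes are invented sample data.
import numpy as np

def size_range_coefficient(sizes_mm):
    """Return d95/d5, the size range coefficient of a particle lot."""
    d5, d95 = np.percentile(sizes_mm, [5, 95])
    return d95 / d5

def is_suitable_feed(sizes_mm, optimal=(2.0, 3.0), acceptable=(2.0, 5.0)):
    """Classify a screen fraction by the guideline ranges quoted above."""
    c = size_range_coefficient(sizes_mm)
    if optimal[0] <= c <= optimal[1]:
        return "optimal"
    if acceptable[0] <= c <= acceptable[1]:
        return "acceptable"
    return "re-screen"

# A well-screened 25-60 mm fraction (invented data).
feed = np.random.default_rng(0).uniform(25, 60, size=1000)
print(size_range_coefficient(feed), is_suitable_feed(feed))
```

For a uniformly screened 25–60 mm fraction the coefficient lands near 2.2, i.e. inside the optimal 2–3 window.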
The feed is then transferred to the presentation mechanism, which is the belt or the chute in the two main machine types respectively. This sub-process has the function of passing single particles of the material stream in a stable and predictable manner, thus in a unidirectional movement orthogonal to the detection line with a uniform speed profile. In the detection sub-process, location and property vectors are recorded to allow particle localization for ejection and material classification for discrimination purposes. All detection technologies applied have in common that they are contactless, fast and relatively cheap. The technologies are subdivided into transmitting and reflecting groups, the former measuring the inner content of a particle while the latter only uses the surface reflection for discrimination. Surface, or reflection, technologies have the disadvantage that the surfaces need to be representative of the content, and thus need to be clean of clay and dust adhesions. By default, surface reflection technologies violate the Fundamental Sampling Principle, because not all components of a particle have the same probability of being detected. The main transmitting technologies are EM ( electromagnetics ) and XRT ( X-ray transmission). EM detection is based on the conductivity of the material passing an alternating electromagnetic field . The principle of XRT is widely known through its application in medical diagnostics and airport luggage scanners. The main surface or reflection technologies are traditionally X-ray luminescence detectors, capturing the fluorescence of diamonds under the excitation of X-ray radiation, and color cameras, detecting brightness and colour differences. Spectroscopic methods such as near-infrared spectroscopy , known from remote sensing in exploration in mining for decades, have found their way into industrial-scale sensor-based sorters.
An advantage of the application of near-infrared spectroscopy is that evidence can be measured of the presence of specific molecular bonds , and thus of the mineral composition of the near-infrared-active minerals. [ 12 ] There are more detection technologies available on industrial-scale sensor-based ore sorters. Readers who want to go into detail can find more in the literature. [ 5 ] Spectral and spatial data are collected by the detection system. The spatial component captures the position of the particles across the width of the sorting machine, which is then used in case the ejection mechanism is activated for a single particle. Spectral data comprise the features that are used for material discrimination. In a superseding processing step, spectral and spatial data can be combined to include patterns in the separation criterion. A huge amount of data is collected in real time, and multiple processing and filtering steps bring the data down to the yes/no decision – either to eject a particle or to keep the ejection mechanism still for that one. The state-of-the-art ejection mechanism of today's sensor-based ore sorters is pneumatic. Here, a combination of high-speed air valves and an array of nozzles perpendicular to the acceleration belt or chute allows precise application of air pulses to change the direction of flight of single particles. The nozzle pitch and diameter are adapted to the particle size. The air impulse must be precise enough to change the direction of flight of a single particle by applying the drag force to this single particle and directing it over the mechanical splitter plate. Sensor-based sorting installations normally comprise the following basic units: crusher , screen, sensor-based sorter and compressor. There are principally two different kinds of installations, described in the following paragraphs – stationary and semi-mobile installations.
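The reduction of detection data to a per-particle yes/no decision and a nozzle actuation can be sketched in a few lines. This is a hedged illustration only: the single spectral feature, the threshold, the nozzle pitch and all values are invented, and real sorters apply far more elaborate filtering and classification.

```python
# Minimal sketch (all names and numbers hypothetical): reducing detection
# data to a per-particle yes/no ejection decision, as described above.

NOZZLE_PITCH_MM = 12.5   # assumed spacing of the air-valve array

def eject_decision(mean_reflectance, threshold=0.4):
    """Spectral step: classify one particle as eject (True) or pass (False)."""
    return mean_reflectance < threshold

def nozzle_index(x_position_mm):
    """Spatial step: map the particle's transverse position to a valve."""
    return int(x_position_mm // NOZZLE_PITCH_MM)

# One detected particle: transverse position across the machine width and
# a filtered spectral feature (invented values).
particle = {"x_mm": 137.0, "reflectance": 0.31}
if eject_decision(particle["reflectance"]):
    print("fire valve", nozzle_index(particle["x_mm"]))
```

The spatial component selects which valve fires; the spectral component decides whether it fires at all.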
Transportable semi-mobile installations have gained increasing popularity in the last two decades. They are enabled by the fact that complete sensor-based sorting systems are relatively compact in relation to their capacity in tonnes per hour. This is mainly because little infrastructure is needed. The picture shows a containerised sensor-based sorter which is applied in chromitite sorting. The system is operated in conjunction with a diesel-powered mobile crusher and screen. Material handling of the feed, undersize fraction, product and waste fraction is conducted using a wheel loader . The system is powered by a diesel generator, and a compressor station delivers the instrument-quality air needed for the operation. Semi-mobile installations are applied primarily to minimise material handling and save transport costs. Another reason for choosing the semi-mobile option for an installation is bulk testing of new ore bodies. The capacity of a system very much depends on the size fraction sorted, but 250 t/h is a good estimate for semi-mobile installations, considering a capacity of 125 t/h sorter feed and 125 t/h undersize material. During the last decade both generic plant designs and customised designs have been developed, for example in the framework of the i2mine project. [ 13 ] To cope with high-volume mass flows, and for applications where a changing physical location of the sensor-based sorting process is of no benefit for the financial feasibility of the operation, stationary installations are applied. Another reason for applying stationary installations is multistage (rougher, scavenger, cleaner) sensor-based ore sorting processes. Within stationary installations, sorters are usually located in parallel, which allows transport of the discharge fractions with one product belt and one waste belt respectively, which decreases plant footprint and the number of conveyors .
For higher grade applications such as ferrous metals , coal and industrial minerals, sensor-based ore sorting can be applied to create a final product. The precondition is that the liberation allows for the creation of a sellable product. Undersize material is usually bypassed as product, but can also be diverted to the waste fraction if the composition does not meet the required specifications. This is case- and application-dependent. The most prominent example of the application of sensor-based ore sorting is the rejection of barren waste before transport and comminution. Waste rejection is also known under the term pre-concentration. A discrimination has been introduced by Robben. [ 4 ] A rule of thumb is that at least 25% of liberated barren waste must be present in the fraction to be treated by sensor-based ore sorting to make waste rejection financially feasible. Reduction of waste before it enters comminution and grinding processes not only reduces the costs in those processes, but also releases capacity that can be filled with higher grade material, and thus implies higher productivity of the system. A prejudice against the application of a waste rejection process is that the valuable content lost in this process is a penalty higher than the savings that can be achieved. But it is reported in the literature that the overall recovery even increases through bringing higher grade material as feed into the mill. In addition, the higher productivity is an additional source of income. If noxious waste such as acid-consuming calcite is removed, the downstream recovery increases and the downstream costs decrease disproportionally, as reported for example by Bergmann. [ 14 ] The coarse waste rejected can be an additional source of income if there is a local market for aggregates. Sensor-based ore sorting is financially especially attractive for low grade or marginal ore or waste dump material.
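The productivity argument for waste rejection can be made concrete with a back-of-envelope mass balance: rejecting barren waste ahead of a fixed mill bottleneck lets the same tonnage carry an upgraded feed grade. All tonnages, grades and efficiencies below are invented example values, not figures from the cited sources.

```python
# Back-of-envelope sketch of the waste-rejection argument above.
# All tonnages, grades and efficiencies are invented example values.

def mill_metal_output(mill_capacity_t, feed_grade, mill_recovery):
    return mill_capacity_t * feed_grade * mill_recovery

mill_capacity = 1000.0          # t/h, fixed bottleneck
rom_grade = 0.010               # 1.0 % valuable content in run-of-mine ore
waste_fraction = 0.25           # rule-of-thumb minimum of liberated waste
sorter_metal_recovery = 0.98    # sorter set for maximum recovery
mill_recovery = 0.85

# Base case: the mill treats run-of-mine ore directly.
base = mill_metal_output(mill_capacity, rom_grade, mill_recovery)

# Sorting case: reject 25 % barren waste and top the mill up with extra
# sorted feed, so the same 1000 t/h now carries an upgraded grade.
sorted_grade = rom_grade * sorter_metal_recovery / (1 - waste_fraction)
sorted_case = mill_metal_output(mill_capacity, sorted_grade, mill_recovery)

print(f"base {base:.2f} t/h metal, with sorting {sorted_case:.2f} t/h metal")
```

Even with a small recovery penalty at the sorter, the upgraded feed more than compensates, which is the "higher productivity" effect described in the text.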
[ 4 ] In this scenario, waste dump material or marginal ore is sorted and added to the run-of-mine production. The capacity needed for the sensor-based ore sorting step is lower in this case, as are the costs involved. A requirement is that two crude material streams are fed in parallel, requiring two crushing stations. Alternatively, marginal and high grade ore can be buffered on an intermediate stockpile and dispatched in an alternating operation. The latter option has the disadvantage that the planned production time, the loading, of the sensor-based ore sorter is low, unless a significant intermediate stockpile or bunker is installed. Treating the marginal ore separately has the advantage that less equipment is needed, since the processed material stream is smaller, but it has the disadvantage that the potential of the technology is not unfolded for the higher grade material, where sensor-based sorting would also add benefit. Pebble circuits are a very advantageous location for the application of sensor-based ore sorters. Usually it is hard waste that recirculates and limits the total mill capacity. In addition, the tonnage is significantly lower in comparison to the total run-of-mine stream, the size range is applicable and usually uniform, and the particles' surfaces are clean. A high impact on total mill capacity is reported in the literature. [ 15 ] Sensor-based sorting can be applied to separate the coarse fraction of the run-of-mine material according to its characteristics. Possible separation criteria are grade, mineralogy and grindability, amongst others. Treating different ore types separately results either in an optimised cash flow, in the sense that revenue is shifted to an earlier point in time, or in increased overall recovery, which translates to higher productivity and thus revenue .
If two separate plant lines are installed, the increased productivity must compensate for the overall higher capital expenditure and operating costs . Sensor-based ore sorting is, in comparison to other coarse particle separation technologies, relatively cheap. While the costs for the equipment itself are relatively high in capital expenditure and operating costs, the absence of extensive infrastructure in a system results in operating costs that are comparable to jigging. The specific costs depend very much on the average particle size of the feed and on the ease of the separation. Coarser particles imply higher capacity and thus lower costs. Detailed costing can be conducted after the mini-bulk stage in the technical feasibility evaluation. A widely spread prejudice against waste rejection with sensor-based sorting is that the loss of valuables, thus the recovery penalty of this process, supersedes the potential downstream cost savings and is therefore economically not viable. It must be noted that for waste rejection the aim of the separation with sensor-based ore sorting must be maximum recovery, which means that only low grade or barren waste is rejected, because the financial feasibility is very sensitive to that factor. Nevertheless, through the rejection of waste before comminution and concentration steps, recovery can often be increased in the downstream process, meaning that the overall recovery is equal to or even higher than in the base case. Instead of losing product, additional product can be produced, which adds additional revenue to the cost savings on the positive side of the cash flow . If the rejected material is replaced with additional higher grade material, the main economic benefit unfolds through the additional production.
This implies that, in conjunction with sensor-based ore sorting, the capacity of the crushing station is increased to allow for the additional mass flow that is subsequently taken out by the sensor-based ore sorters as waste. A precondition for the applicability of sensor-based ore sorting is the presence of liberation at the particle size of interest. Before entering into sensor-based ore sorting testing procedures, there is the possibility to assess the degree of liberation through the inspection of drill cores, hand counting and washability analysis. The quantification of liberation does not include any process efficiencies, but gives an estimate of the possible sorting result and can thus be applied for desktop financial feasibility analysis. Both for green-field and brown-field applications, inspection of drill core in combination with the grade distribution and mineralogical description is a good option for estimating the liberation characteristics and the possible success of sensor-based ore sorting. In combination with the mining method and mine plan, an estimation of the possible grade distribution in coarse particles can be made. Hand counting is a cheap and easy-to-conduct method for estimating the liberation characteristics of a bulk sample, either originating from run-of-mine material, a waste dump or, for example, exploration trenching. Analysis of particles in the size range 10–100 mm has been conducted on a total sample mass of 10 tonnes. By visual inspection by trained personnel, a classification of each particle into different bins (e.g. lithology , grade) is possible, and the distribution is determined by weighing each bin. A trained professional can quickly estimate the efficiency of a specific detection and the process efficiency of sensor-based ore sorting, knowing the sensor response of the mineralogy of the ore in question and other process efficiency parameters.
The washability analysis is widely known in bulk material analysis, where the specific density is the physical property describing the liberation and the separation results, the latter then in the form of the partition curve. The partition curve is defined as the curve which gives, as a function of a physical property or characteristic, the proportions in which different elemental classes of raw feed having the same property are split into separate products. [ 16 ] It is thus per its definition not limited to, but predominantly applied in, analysis of liberation and process efficiency of density separation processes. For sensor-based ore sorting, the partition (also called Tromp) curves for chromite, iron ore and coal are known and can thus be applied for process modelling. Single-particle testing is an extensive but powerful laboratory procedure developed by Tomra. Out of a sample set, multiple hundreds of fragments in the size range 30–60 mm are measured individually on each of the available detection technologies. After recording of the raw data, all the fragments are comminuted and assayed individually, which then allows plotting of the liberation function of the sample set and, in addition, the detection efficiency of each detection technology in combination with the calibration method applied. This makes the evaluation of detection and calibration, and subsequently the selection of the most powerful combination, possible. This analysis can also be applied to quarter or half sections of drill core. Mini-bulk tests are conducted with 1–100 t of sample on industrial-scale sensor-based ore sorters. The size fraction intervals to be treated are prepared using screen classification. Full capacity is then established with each fraction, and multiple cut-points are programmed in the sorting software. After creating multiple sorting fractions in rougher, scavenger and cleaner steps, these fractions are weighed and sent for assay.
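The partition curve defined above is simply, per property class, the share of feed reporting to one product. A minimal sketch with invented sink-float data (the density classes and masses are illustrative, not measured values):

```python
# Illustrative partition (Tromp) curve computed from invented sink-float
# data: for each density class, the partition number is the share of that
# class reporting to the reject (sink) product.

density_classes = [1.3, 1.4, 1.5, 1.6, 1.7, 1.8]       # mean SG of each class
feed_mass      = [30.0, 25.0, 15.0, 10.0, 10.0, 10.0]  # kg per class in feed
sink_mass      = [ 0.6,  1.5,  3.0,  7.0,  9.2,  9.9]  # kg reporting to sinks

partition = [s / f for s, f in zip(sink_mass, feed_mass)]

for rho, p in zip(density_classes, partition):
    print(f"SG {rho:.1f}: partition number {p:.2f}")
```

The density at which the partition number crosses 0.5 is the cut point, and the steepness of the curve around it characterises the separation efficiency; the same construction applies when the describing property is a sensor response rather than density.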
The resulting data delivers all input for flow-sheet development. Since the tests are conducted on industrial-scale equipment, there is no scale-up factor involved when designing a flow-sheet and installation of sensor-based ore sorting. To gather relevant statistical data, higher sample masses are needed in some cases. Then transportation of the sample to the mini-bulk testing facility becomes unviable, and the equipment is set up in the field. Containerised units in conjunction with diesel-powered crushing and screening equipment are often applied and used for production test runs under full-scale operating conditions. The process efficiency of sensor-based ore sorting is described in detail by C. Robben in 2014. [ 4 ] The total process efficiency is subdivided into the following sub-process efficiencies: platform efficiency, preparation efficiency, presentation efficiency, detection efficiency and separation efficiency. All the sub-processes contribute to the total process efficiency, of course in combination with the liberation characteristics of the bulk material to which the technology is applied. The detailed description of the sub-processes and their contribution to the total process efficiency can be found in the literature. Steinert provides sorting technologies for the recycling and mining industries using a variety of sensors, like X-ray, inductive, NIR and color optical sensors and a 3D laser camera, which can be combined for sorting a variety of materials. NIR technology is used in the recycling field. Tomra is a sensor-based sorting equipment supplier with a large installed base in the mining, recycling and food industries. Tomra's sensor-based sorting equipment and services for the precious metals and base metals segment are marketed through a cooperation agreement with Outotec from Finland , which brings the extensive comminution, processing and application experience of Outotec together with Tomra's sensor-based ore sorting technology and application expertise.
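The subdivision of the total process efficiency into sub-process efficiencies described earlier can be modelled multiplicatively. This is an assumption for illustration only, with invented values; the cited literature gives the detailed treatment.

```python
# Hedged sketch: treating the total process efficiency as the product of
# the sub-process efficiencies named in the text. The multiplicative model
# and all values are assumptions for illustration.
efficiencies = {
    "platform":     0.99,
    "preparation":  0.97,
    "presentation": 0.95,
    "detection":    0.96,
    "separation":   0.94,
}

total = 1.0
for value in efficiencies.values():
    total *= value

print(f"total process efficiency = {total:.2f}")
```

Under this simple model, five individually high sub-process efficiencies still compound to a noticeably lower total, which is why each sub-process matters in plant design.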
Raytec Vision is a camera and sensor-based manufacturer based in Parma and specialized in food sorting. The applications of Raytec Vision's machines are many: tomatoes, tubers, fruit, fresh-cut produce, vegetables and confectionery products. Each machine can separate good products from waste, foreign bodies and defects, and guarantees high levels of food safety for the final consumer. Comex provides sorting technologies for the mining industry using a multi-sensor solution integrated in the same sorting units, like X-ray, hyperspectral IR and color optical sensors and 3D cameras, which can be very effective in identifying and sorting various mineral particles. Integration of AI models for sensor data processing is of critical importance to achieve good sorting results. The expert conference "Sensor-Based Sorting" addresses new developments and applications in the field of automatic sensor separation techniques for primary and secondary raw materials. The conference provides a platform for plant operators, manufacturers, developers and scientists to exchange know-how and experiences. The congress is hosted by the Department of Processing and Recycling and the Unit for Mineral Processing (AMR) of RWTH Aachen University in cooperation with the GDMB Society of Metallurgists and Miners, Clausthal. Scientific supervisors are Professor Thomas Pretz and Professor Hermann Wotruba. [ 17 ] Tungsten plays a large and indispensable role in modern high-tech industry. Up to 500,000 tons of raw tungsten ore are mined each year by Wolfram Bergbau und Hütten AG (WHB) in Felbertal, Austria, which is the largest scheelite deposit in Europe. 25% of the run-of-mine ore is separated as waste before entering the mill. [ 18 ]
https://en.wikipedia.org/wiki/Sensor-based_sorting
A sensor network query processor (SNQP) , also called a sensorDB, is a user-friendly interface for programming and running applications which translates instructions from declarative programming language with high-level instructions to low-level instructions understood by the operating system . The basic idea of SNQP is the addition of a layer modeling the WSN as a distributed database searchable by a query language similar to SQL . [ 1 ] [ 2 ] TinyDB is a query processing system for extracting information from a network of TinyOS sensors . Unlike existing solutions for data processing in TinyOS , TinyDB does not require embedded C code for sensors. Instead, TinyDB provides a simple, SQL-like interface to specify the data desired, along with additional parameters, as the rate at which data should be refreshed— much like a traditional database . [ 3 ] Given a query specifying data interests, TinyDB collects that data from motes in the environment, filters it, aggregates it, and routes it to a PC . TinyDB does this via power-efficient in-network processing algorithms . QLowpan is a sensor network queries processor for resource-constrained sensor devices. In order to guarantee interoperability between the different platforms, QLowpan is based on RPL / 6LoWPAN protocol . It is the first sensor network queries processor which is compatible with 6lowpan protocol. [ 4 ]
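The declarative idea behind such query processors can be illustrated with a small simulation: a SQL-like request is evaluated against collected mote readings instead of hand-written embedded C. The query text, schema and readings below are invented for illustration; real TinyDB evaluates queries in-network on TinyOS motes.

```python
# Hypothetical illustration of the declarative idea behind TinyDB-style
# query processors. Query text, schema and data are invented.

query = "SELECT nodeid, AVG(temp) FROM sensors GROUP BY nodeid"

# Simulated readings collected from three motes.
readings = [
    {"nodeid": 1, "temp": 21.5},
    {"nodeid": 1, "temp": 22.5},
    {"nodeid": 2, "temp": 19.0},
    {"nodeid": 3, "temp": 25.0},
]

def run_query(rows):
    """Evaluate the fixed query above: average temperature per node."""
    groups = {}
    for row in rows:
        groups.setdefault(row["nodeid"], []).append(row["temp"])
    return {node: sum(t) / len(t) for node, t in groups.items()}

print(run_query(readings))
```

The point of the SNQP layer is that the user states *what* data is wanted (the query string), while the system decides *how* to collect, filter and aggregate it across the network.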
https://en.wikipedia.org/wiki/Sensor_network_query_processor
The sensorimotor network ( SMN ), also known as somatomotor network , is a large-scale brain network that primarily includes somatosensory ( postcentral gyrus ) and motor ( precentral gyrus ) regions and extends to the supplementary motor areas (SMA). [ 1 ] The auditory cortex may also be included, [ 2 ] as well as the visual cortex . [ 3 ] The SMN is activated during motor tasks, such as finger tapping, [ 4 ] indicating that the network readies the brain when performing and coordinating motor tasks. [ 1 ] Dysfunction in the SMN has been implicated in various neuropsychiatric disorders . In 2019, Uddin et al. proposed that pericentral network ( PN ) be used as a standard anatomical name for the network. [ 2 ]
https://en.wikipedia.org/wiki/Sensorimotor_network
Sensorization is a modern technology trend to insert many similar sensors into any device or application. Some scientists believe that sensorization is one of the main requirements for the third technological revolution . [ 1 ] As a result of significant price drops in recent years, there is a trend to include a large number of sensors with the same or different functions in one device. [ 2 ] An example is the evolution of the iPhone .
https://en.wikipedia.org/wiki/Sensorization
Sensormatic is a subsidiary of Tyco International (now owned by Johnson Controls ) that manufactures and sells electronic article surveillance equipment. They manufacture acousto-magnetic (AM) electronic article surveillance systems. Sensormatic Electronics Corporation was purchased by Tyco International in 2001. The acquisition was executed by a merger of Sensormatic with a subsidiary of Tyco. Sensormatic is frequently called by the name of its parent company ADT, formerly ADT/Tyco. A product from Sensormatic is the Supertag, a hard loss-prevention tag. The Supertag replaced the Sensormatic Ultragator tag. The Ultragator is a tan Ultra Max tag that is sold to retail companies. Ultragator tags were improved upon during the design of the Supertag. Sensormatic specializes in the area of source tagging. A source tag is a security tag or label applied during the manufacturing process. Sensormatic has also introduced disposable hard source tags at a few retail chains. Johnson Controls announced the SuperTag 4 as part of the Sensormatic portfolio. The SuperTag 4 is an iteration of the SuperTag product. [ 1 ]
https://en.wikipedia.org/wiki/Sensormatic
Sensors and Materials is a monthly peer-reviewed open access scientific journal covering all aspects of sensor technology, including materials science as applied to sensors. It is published by Myu Scientific Publishing and the editor-in-chief is Makoto Ishida ( Toyohashi University of Technology ). The journal was established in 1988 by a group of Japanese academics to promote the publication of research by Asian authors in English. [ 1 ] The journal is abstracted and indexed in: According to the Journal Citation Reports , the journal has a 2022 impact factor of 1.2. [ 7 ]
https://en.wikipedia.org/wiki/Sensors_and_Materials
The hw.sensors framework is a kernel -level hardware sensors framework originating from OpenBSD , which uses the sysctl kernel interface as the transport layer between the kernel and the userland . As of 2019 [update] , the framework is used by over a hundred device drivers in OpenBSD to export various environmental sensors, with temperature sensors being the most common type. [ 1 ] [ 2 ] Consumption and monitoring of sensors is done in the userland with the help of sysctl , systat , sensorsd, OpenBSD NTP Daemon ( OpenNTPD , ntpd), Simple Network Management Protocol (snmpd), ports/sysutils/symon and GKrellM . [ 3 ] [ 4 ] In OpenBSD, the framework is integrated with Dell 's ESM, Intelligent Platform Management Interface (IPMI) and I 2 C , [ 5 ] [ 6 ] in addition to several popular Super I/O chips through lm(4) . [ 2 ] A major difference compared to other solutions like lm_sensors is simplicity and a works-by-default approach in the drivers, which don't need or support any configurability; no installation or configuration actions are required by the system administrator to get the sensors going. [ 7 ] [ 6 ] This is coupled with a fine-tuned ad-hoc read-only scan procedure on the I 2 C bus, written by Theo de Raadt in a centralised way with a cache, making it possible to leave it enabled by default at all times, unlike the competing solutions. [ 7 ] [ 6 ] [ 8 ] Support for automatic monitoring of RAID drives is also provided through the sensors framework, [ 5 ] this concept of sensors of drive type has been backported by NetBSD back into envsys in 2007. [ 2 ] OpenNTPD uses sensors of type timedelta to synchronise time. [ 9 ] These are provided by NMEA and other drivers. [ 10 ] [ 11 ] The framework was originally devised in 2003 by Alexander Yurchenko, when he was porting several envsys -based drivers from NetBSD . Instead of porting NetBSD's envsys, a simpler sysctl -based mechanism was developed. 
[ 2 ] Framework use by the device drivers rose sharply with the release of OpenBSD 3.9. Then, in only 6 months, the number of individual drivers using the framework rose from 9 in OpenBSD 3.8 (released 1 November 2005 ) to 33 in OpenBSD 3.9 (released 1 May 2006 ). [ 2 ] As of 23 December 2006 [update] , the framework was used by 44 device drivers. At this time, a patchset was committed converting a simple one-level addressing scheme into a more stable multi-layer addressing. [ 12 ] [ 13 ] In 2007, the framework was ported to FreeBSD as part of a Google Summer of Code grant. It was adopted by DragonFly BSD later that year. [ 14 ] The usability of sensorsd(8) , the sensor monitoring daemon , was vastly improved in 2007, partly via the same GSoC grant. [ 15 ] As of 1 November 2008 [update] , the total number of drivers stood at 68 in OpenBSD 4.4, growing by 7 drivers in a 6-month release cycle. [ 16 ] This rate of growth, of one new driver per month on average, has been common throughout the history of the framework since OpenBSD 3.9. [ 2 ] The values exported by the drivers through the framework are read-only; however, an external patch exists that implements fan control functions in both the framework and in one of the drivers for the most popular family of Super I/O chips. This patchset was provided for both OpenBSD and DragonFly BSD. [ 17 ] [ 1 ]
https://en.wikipedia.org/wiki/Sensorsd
Sensory design aims to establish an overall diagnosis of the sensory perceptions of a product, and to define appropriate means to design or redesign it on that basis. It involves an observation of the diverse and varying situations in which a given product or object is used, in order to measure the users' overall opinion of the product and its positive and negative aspects in terms of tactility , appearance , sound and so on. Sensory assessment aims to quantify and describe, in a systematic manner, all human perceptions when confronted with a product or object. Contrary to traditional laboratory analysis, a sensory analysis of a product is either carried out by a panel of trained testers , or by specialized test equipment designed to mimic the perception of humans. The result allows researchers to establish a list of specifications and to set out a precise and quantified requirement. These are applied to materials and objects using various criteria: In the transportation sphere, sensory analyses are sometimes translated into minor enhancements to the design of a vehicle interior, information system, or station environment to smooth some of the rougher edges of the travel experience. [ 1 ] For example, specialized air purifying equipment can be used to design a more pleasant odor in train compartments. [ 2 ] Sensory design plays a critical role in the modern food and beverage industry . [ 3 ] The food and beverage industry attempts to maintain specific sensory experiences. In addition to smell and flavor, the color (e.g. ripe fruits) [ 4 ] and texture of food (e.g. potato chips) are also important. Even the environment is important, as "Color affects the appetite, in essence, the taste of food". [ 2 ] Food is a multi-sensorial experience in which all five senses (vision, touch, sound, smell and taste) come together to create a strong memory.
[ 5 ] In food marketing, the goal of marketers is to design their products so that food and beverages stimulate as many of the customers' senses as possible. At restaurants, many sensory aspects such as the interior design (vision), the texture of the chairs and tables (touch), the background music and noise level (sound), the openness of the kitchen and the cooking scene (smell and vision), and of course the food itself (taste) all come together before a customer decides whether he or she likes the experience and would want to revisit. [ 6 ] While multi-sensory experiences were limited to only a few categories in the past, the spectrum has since expanded to acknowledge the importance of sensory design. Food used to be considered strictly an experience of taste. Now that the multi-sensorial nature of food is recognized, marketers of food products and restaurants focus more on providing services that extend beyond the sense of taste. In recent research, the role of the vestibular sense , a system that contributes to the sense of balance and space, has been highlighted in relation to food. Often referred to as "the sixth sense", research shows that vestibular signals, exhibited through people's postures while eating, can shape their perception of food. In general, people tend to rate food as better-tasting when they consume it while sitting down compared to standing up. The researchers conclude that the link between food perception and the vestibular system results from the different stress levels associated with each posture. Similar to food, which used to be regarded merely as an experience of taste, architecture in the past used to be subjected only to the sense of vision, which is why much architectural work relied on visual forms such as photographs or television.
In contrast, architecture has become a multi-sensorial experience in which people visit architectural sites and take in various sensory aspects such as the texture of the building, the background noise and scent of the surrounding area, and the overall look of the building in coordination with nature and the area. [ 7 ] Furthermore, there is a type of design in the architecture field called "responsive architecture", a design that interacts with people. [ 8 ] This kind of architecture can support the occupants' lifestyle if sensory design is properly applied. For instance, if a responsive architecture is helping an occupant with a goal to exercise more, sensory design can arrange environmental stimuli in time along the occupant's path, so that a space feeds occupants through their senses to inspire and teach exercise at just the right time and in just the right way. [ 8 ] When it comes to the experience of architecture, our visual senses only play a small part. [ 9 ] This is also why, when architects are designing, they need to think of the "after-the-moment" experience instead of merely the "in-the-moment" experience for the occupants. While classically limited to the perception of trained sensory experts, advances in sensors and computation have allowed objective measurements of sensory information to be acquired, quantified and communicated, leading to improved design communication, translation from prototype to production, and quality assurance. Sensory areas that have been objectively quantified include vision, touch, and smell. In vision, both light and color are considered in sensory design. Early light meters (called extinction meters) relied on the human eye to gauge and quantify the amount of light . Subsequently, analog and digital light meters have been popularized for photography. Work by Lawrence Herbert in the 1960s led to a systematic combination of lighting and color samples required to quantify colors by the human eye.
This became the basis for the Pantone Matching System . Combining this with specialized light meters allowed digital color meters to be invented and popularized. Touch plays an important role in a variety of products and is increasingly considered in product design and marketing efforts, which has led to a more scientific approach to tactile design and marketing. [ 10 ] Classically, the field of tribology has developed various tests to evaluate interacting surfaces in relative motion, with a focus on measuring friction, lubrication, and wear. However, these measurements do not correlate with human perception. [ 11 ] Alternative methods for evaluating how materials feel were first popularized from work initiated at Kyoto University. [ 12 ] The Kawabata evaluation system developed six measurements [ 13 ] of how fabrics feel. The SynTouch Haptics Profiles [ 14 ] are produced by the SynTouch Toccare Haptics Measurement System, which incorporates a biomimetic tactile sensor to quantify 15 dimensions of touch based on psychophysics research performed with over 3000 materials. [ 11 ] Measuring odors has remained difficult. A variety of techniques have been attempted, but "Most measures have had a subjective component that makes them anachronistic with modern methodology in experimental behavioral science, indeterminate regarding the extent of individual differences, unusable with infra humans and of unproved ability to discern small differences". [ 15 ] New methods for robotic exploration of smell are being proposed. [ 16 ]
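As a sketch of the kind of objective colour quantification described above, the widely used CIE76 formula expresses the difference between two colours as the Euclidean distance between their coordinates in CIELAB space; the sample colour values below are made up for illustration.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB space.
    A delta-E of about 2.3 is often cited as a just-noticeable difference."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Two illustrative (L*, a*, b*) samples, e.g. a target colour and a print proof.
target = (52.0, 42.5, 20.0)
proof = (53.1, 41.0, 21.5)
print(delta_e_cie76(target, proof))
```

Later formulas (CIE94, CIEDE2000) refine this distance to better match human perception, but the Euclidean version conveys the core idea: colour agreement becomes a number rather than a judgment by eye.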
https://en.wikipedia.org/wiki/Sensory_design
Sensory ecology is a relatively new field focusing on the information organisms obtain about their environment. It includes questions of what information is obtained, how it is obtained (the mechanism ), and why the information is useful to the organism (the function ). Sensory ecology is the study of how organisms acquire, process, and respond to information from their environment. All individual organisms interact with their environment (consisting of both animate and inanimate components), and exchange materials, energy, and sensory information. Ecology has generally focused on the exchanges of matter and energy, while sensory interactions have generally been studied as influences on behavior and functions of certain physiological systems (sense organs). The relatively new area of sensory ecology has emerged as more researchers focus on questions concerning information in the environment. [ 1 ] [ 2 ] This field covers topics ranging from the neurobiological mechanisms of sensory systems to the behavioral patterns employed in the acquisition of sensory information to the role of sensory ecology in larger evolutionary processes such as speciation and reproductive isolation. While human perception is largely visual, [ 3 ] other species may rely more heavily on different senses. In fact, how organisms perceive and filter information from their environment varies widely. Organisms experience different perceptual worlds, also known as “ umwelten ”, as a result of their sensory filters. [ 4 ] These senses range from smell (olfaction), taste (gustation), hearing (mechanoreception), and sight (vision) to pheromone detection, pain detection (nociception), electroreception and magnetoreception. 
For example, magnetoreception provides a magnetic compass for many different species, helping migratory birds move in their respective migratory directions, steering marine turtles away from the shore, guiding honeybees' building activities, and aiding salamanders in identifying borders between water and land. [ 5 ] Because different species rely on different senses, sensory ecologists seek to understand which environmental and sensory cues are most important in determining the behavioral patterns of certain species. In recent years, this information has been widely applied in conservation and management fields. Communication is the key to many species interactions. In particular, many species rely on vocalizations for information such as potential mates, nearby predators, or food availability. Human changes in the habitat modify acoustic environments and may make it more difficult for animals to communicate. Humans may alter acoustic environments by modifying background noise levels, modifying habitat, or changing species composition . [ 6 ] These changes in acoustic environments can mask the vocalizations of various species. Because humans can exert such strong changes on acoustic environments, sensory ecologists have been particularly interested in researching and understanding how organisms react to these changes. Anthropogenic changes to acoustic environments have had perhaps the most significant impacts on species that rely on auditory cues for foraging and communication. Bats, for example, rely on ultrasonic echolocation to locate and catch prey. When these auditory cues are masked by loud background noises, the bats become less efficient at finding prey. [ 7 ] Sensory ecologists have also found that foraging bats avoid noisy habitats, perhaps as a result of this decrease in foraging efficiency.
[ 8 ] Meanwhile, in bird communities, ecologists found that increased noise led to changes in avian community composition, decreases in diversity, and even decreases in reproductive success. [ 9 ] [ 10 ] One study showed that to avoid noise pollution, some birds changed the frequency of their calls. [ 10 ] To gauge the degree of noise that impacts species, one study found that many bird species will alter their overall behavior (reproduction, abundance, stress hormone levels and species richness) at levels greater than or equal to 45 decibels, and that terrestrial mammals experienced heightened stress levels and lower reproductive efficiency at noise ranging between 52 and 68 decibels. Anthropogenic noise can be characterized as intentionally produced, such as by sonars or seismic exploration (measuring seismic waves), or unintentionally produced as a by-product of human activity, such as construction or traffic corridors. [ 11 ] These studies demonstrate the importance of auditory cues and have resulted in calls for the preservation of “soundscapes", or the collective sounds of ecosystems. [ 12 ] Hearing is a particularly important sense for marine species. Because of low light penetration, hearing is often more useful than vision in marine environments. In addition, sound travels about five times faster in water than in air, and over greater distances. [ 13 ] Sounds are important for the survival and reproduction of marine species. [ 14 ] Over the last century, human activities have increasingly added sounds to water environments. These activities can impede the ability of fish to hear sounds, and can interfere with communication, predator avoidance, prey detection, and even navigation. [ 13 ] Whales, for example, are at risk of reductions in foraging efficiency and mating opportunities as a result of noise pollution.
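The decibel figures quoted above are logarithmic sound pressure levels. As a minimal sketch (not tied to any of the cited studies), the conversion from sound pressure to dB SPL in air can be written as:

```python
import math

# Standard reference pressure for dB SPL in air: 20 micropascals.
# Underwater acoustics conventionally uses a 1 micropascal reference,
# so air and water decibel figures are not directly comparable.
P_REF_AIR = 20e-6  # pascals

def spl_db(pressure_pa, p_ref=P_REF_AIR):
    """Sound pressure level in decibels relative to a reference pressure."""
    return 20 * math.log10(pressure_pa / p_ref)

print(spl_db(20e-6))   # 0 dB: nominal threshold of human hearing
print(spl_db(200e-6))  # a tenfold pressure increase adds 20 dB
```

On this scale, the 45 dB and 52 to 68 dB thresholds cited above correspond to pressures tens to hundreds of times the nominal threshold of human hearing.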
[ 15 ] Intense noise can also cause ear damage for whales, impacting their use of echolocation for orientation, and may even startle them so badly that they engage in stranding as a flight response (commonly known as beaching). [ 16 ] In recent years, the creation of offshore wind turbines has led conservationists and ecologists to study how the noises produced by these turbines may affect marine species. Studies have found that the sounds created by wind turbines may have significant effects on the communication of marine mammal species such as seals and porpoises. [ 17 ] [ 18 ] [ 19 ] This research has been applied to development projects. For example, a recent report assessed the risks of the acoustic changes brought on by offshore wind farms on fish communities. [ 20 ] Humans have strongly altered nighttime lighting. This light pollution has had serious impacts on species that rely on visual cues for navigation. One recent study of rodent communities showed that brighter nights led to community-level changes in foraging behavior; while less predator-susceptible species foraged heavily, those species susceptible to predation reduced their foraging activity as a result of their increased nighttime visibility. [ 21 ] Birds are also heavily influenced by light pollution. For example, ecologists have found that lights on tall structures can disorient migrating birds, leading to millions of deaths each year. [ 22 ] These findings have guided recent conservation efforts. The US Fish and Wildlife Service has created a set of guidelines to reduce the impacts of lighting on migratory birds, such as limiting tower construction, limiting the height of towers, and keeping towers away from migratory zones. [ 23 ] In addition, programs such as the Fatal Light Awareness Program (FLAP) in Toronto have reduced bird collisions by reducing the light emissions of tall buildings. [ 24 ] Studies have also found that artificial lighting disrupts the orientation of baby sea turtles.
[ 25 ] This, in turn, has increased mortality in sea turtle populations. This information has led to the proposed implementation of a number of conservation and management strategies. The same researchers, for example, have suggested pairing light reduction with dune restoration to improve hatchling orientation and success. In addition, researchers have used information on the sensory ecology of sea turtles to decrease their bycatch rate by fishermen. Bycatch is the term for non-target fish, turtles, or marine mammals that are incidentally captured by fishermen. [ 26 ] Because researchers know that fishes and sea turtles differ in their responses to visual sensory cues, they have devised a baiting system that is undetectable to fish, but less attractive or even repellent to sea turtles. [ 27 ] In this recent study, this method led to decreases in turtle bycatch while imposing no noticeable reduction in fishing yield. A goal of sensory ecologists has been to study what environmental information is most important in determining how these organisms perceive their world. This information has been particularly relevant in understanding how organisms might respond to rapid environmental change and novel human-modified environments. [ 3 ] Recently, scientists have called for an integration of sensory ecology into conservation and management strategies. [ 28 ] Sensory ecology can thus be used as a tool to understand (1) why different species may react to anthropogenic and environmental change in different ways, and (2) how negative impacts of environmental and anthropogenic change might be mitigated. For example, research on soundscapes and their role in ecosystem restoration has shown that playing recordings of diverse, healthy soundscapes can actually help restore a once-degraded coral reef.
[ 29 ] Such restored ecosystems offer a beacon of hope for future generations of coral reefs, thus supporting long-term stability and conservation management. In addition, sensory ecology has been employed as a tool to shape management strategies for the control and eradication of pests and invasive species as diverse as crop pests, marine animals, cane toads, and brown snakes. Understanding the breadth of perceptual worlds experienced by different species can better inform conservation strategies to prevent ecological and sensory traps, as well as reduce human-wildlife interactions. This would in turn increase the success of re-introducing species into habitats and aid in the process of capture and release to protect species from potentially dangerous sites. [ 30 ] An ecological trap is an instance where organisms choose poor-quality habitats over better, available habitats because of their incorrect evaluation of habitat quality. [ 31 ] Man-made landscapes present novel environments to organisms. In addition, man-made materials can be mistaken for natural materials, leading some organisms to choose poor-quality habitats over better-quality habitat locations. Sensory ecology can be used to mitigate the effects of these ecological traps by clarifying which particular information organisms are using to make “bad” decisions. Organisms often misinterpret man-made surfaces such as asphalt and solar panels as natural surfaces. Solar panels, for example, reflect horizontally polarized light that is perceived by many insects to be water. Since insects lay their eggs in water, they will try to oviposit on the solar panels. This leads to widespread juvenile insect mortality on solar panels. [ 32 ] To mitigate the effects of this ecological trap, researchers broke up the shape of the solar-active area on the panels. In doing so, the panels became less attractive to insects, thus reducing mortality.
[ 32 ] A number of bat species also fall prey to ecological traps that are the result of man-made surfaces. A recent study by Greif and Siemers [ 33 ] found that bats determine water location based on the smoothness of a surface, not by the actual presence of water. Bats thus attempt to drink from smooth surfaces that are not in fact water, such as glass. As a result, the bats waste energy and time, which could lead to decreases in fitness. [ 33 ] Bird species are also often subject to ecological traps as a result of their sensory ecology. One recent area of focus in avian sensory ecology has been how birds may perceive large wind turbines and other buildings. Each year, countless birds die after colliding with power lines, fences, wind turbines, and buildings. [ 34 ] The flight paths around these structures act as forms of ecological traps; while birds may perceive areas around buildings as “good habitat” and viable flight corridors, these areas can actually increase bird mortality because of collisions. Sensory ecologists have linked these ecological traps to avian sensory ecology. Researchers have found that while human vision is binocular, bird vision is much less so. In addition, birds do not possess high-resolution frontal vision. [ 34 ] As a result, birds may not see large structures directly in front of them, leading to collisions. A number of solutions to this problem have been proposed. One study showed that the response of birds to different airport lighting schemes differed, and that bird strikes could be reduced by altering lighting patterns. [ 35 ] Other researchers have suggested that warning sounds or visual cues placed on the ground may help reduce bird collisions. [ 36 ] By adjusting the other sensory cues of birds, ecologists may help reduce the presence of avian ecological traps around these structures.
In addition to using sensory ecology as a tool to inform conservation strategies, scientists have also used sensory ecology concepts and findings to inform pest management strategies. In particular, the exploitation of senses has been used to control insect, marine, and amphibious pests. Managers have used sensory ecology to create highly individualized visual, pheromonal, and chemical traps for pests. Visual traps are important in the management of a number of insect species. For example, Tsetse flies , a vector of African Trypanosomiasis (sleeping sickness), are attracted to blue colors. Flies can therefore be lured in and killed by blue fabric traps imbued with pesticides. [ 37 ] Scientists believe that flies are attracted to these blue fabrics because blue colors are similar to the color of the ground under a shady tree. Since flies must seek out cool places in the heat of the day, blue colors are more attractive. [ 38 ] Exploitation of visual cues has also been used for the control of aphids and whiteflies. Many aphid species show a strong preference for yellow colors. Scientists have suggested that this may be the result of a preference for yellow leaves, which tend to have higher flows of accessible nitrogen sources. [ 39 ] Pheromones are species-specific chemical cues. When released, pheromones can strongly influence the behavior and physiology of other organisms of the same species. Because pheromones are largely species-specific, and because they often elicit strong behavioral responses, scientists and managers have used pheromones to lure and trap an array of species. This method has been particularly exploited in insect populations. This method has been used to capture and control species such as sugarcane weevils, [ 40 ] gypsy moths, [ 41 ] invasive oriental fruit flies, [ 42 ] bark beetles, [ 43 ] and Carpophilus spp.
https://en.wikipedia.org/wiki/Sensory_ecology
Many types of sense loss occur due to a dysfunctional sensation process, whether it be ineffective receptors , nerve damage , or cerebral impairment. Unlike agnosia , these impairments are due to damages prior to the perception process. Degrees of vision loss vary dramatically, although the ICD-9 released in 1979 categorized them into three tiers: normal vision , low vision , and blindness . Two significant causes of vision loss due to sensory failures include media opacity and optic nerve diseases, although hypoxia and retinal disease can also lead to blindness. Most causes of vision loss can cause varying degrees of damage, from total blindness to a negligible effect. Media opacity occurs in the presence of opacities in the eye tissues or fluid, distorting and/or blocking the image prior to contact with the photoreceptor cells . Vision loss often results despite correctly functioning retinal receptors. Optic nerve diseases such as optic neuritis or retrobulbar neuritis lead to dysfunction in the afferent nerve pathway once the signal has been correctly transmitted from retinal photoreceptors. Partial or total vision loss may affect every single area of a person's life. Though loss of eyesight may occur naturally with age, trauma to the eye or exposure to hazardous conditions may also cause this serious condition. Workers in virtually any field may be at risk of sustaining eye injuries through trauma or exposure. A traumatic eye injury occurs when the eye itself sustains some form of trauma, whether a penetrating injury such as a laceration or a non-penetrating injury such as an impact. Because the eye is a delicate and complex organ, even a slight injury may have a temporary or permanent effect on eyesight. Similarly to vision loss, hearing loss can vary from a full to a partial inability to detect some or all frequencies of sound that can typically be heard by members of a given species .
For humans , this range is approximately 20 Hz to 20 kHz at ~6.5 dB , although a 10 dB correction is often allowed for the elderly . [ 1 ] Primary causes of hearing loss due to an impaired sensory system include long-term exposure to environmental noise , which can damage the mechanoreceptors responsible for receiving sound vibrations , as well as multiple diseases, such as CMV or meningitis , which damage the cochlea and auditory nerve , respectively. [ 2 ] Hearing loss may be gradual or sudden. Hearing loss may be very mild, resulting in minor difficulties with conversation, or as severe as complete deafness. The speed with which hearing loss occurs may give clues as to the cause. If hearing loss is sudden, it may be from trauma or a problem with blood circulation . A gradual onset is suggestive of other causes such as aging or a tumor . Associated neurological problems, such as tinnitus or vertigo , may indicate a problem with the nerves in the ear or brain. Hearing loss may be unilateral or bilateral. Unilateral hearing loss is most often associated with conductive causes, trauma, and acoustic neuromas. Pain in the ear is associated with ear infections, trauma, and obstruction in the canal. Anosmia is the inability to perceive odor , or in other words a lack of functioning olfaction . Many patients may experience unilateral or bilateral anosmia. A temporary loss of smell can be caused by a blocked nose or infection. In contrast, a permanent loss of smell may be caused by death of olfactory receptor neurons in the nose or by brain injury in which there is damage to the olfactory nerve or damage to brain areas that process smell. The lack of the sense of smell at birth, usually due to genetic factors, is referred to as congenital anosmia . 
The diagnosis of anosmia as well as the degree of impairment can now be tested much more efficiently and effectively than ever before thanks to "smell testing kits" that have been made available, as well as screening tests which use materials that most clinics would readily have. [ 3 ] Many cases of congenital anosmia remain unreported and undiagnosed. Since the disorder is present from birth, the individual may have little or no understanding of the sense of smell, and hence is unaware of the deficit. [ 4 ] The somatosensory system is a complex sensory system made up of a number of different receptors, including thermoreceptors, nociceptors, mechanoreceptors and chemoreceptors. It also comprises essential processing centres, or sensory modalities, such as proprioception , touch, temperature, and nociception. The sensory receptors cover the skin and epithelia, skeletal muscles, bones and joints, internal organs, and the cardiovascular system. While touch (also called tactile or tactual perception) is considered one of the five traditional senses, the impression of touch is formed from several modalities. In medicine, the colloquial term "touch" is usually replaced with "somatic senses" to better reflect the variety of mechanisms involved. Insensitivity to somatosensory stimuli, such as heat , cold , touch , and pain , is most commonly a result of a more general physical impairment associated with paralysis . Damage to the spinal cord or other major nerve fiber may lead to a termination of both afferent and efferent signals to varying areas of the body, causing both a loss of touch and a loss of motor coordination . Other types of somatosensory loss include hereditary sensory and autonomic neuropathy , which consists of ineffective afferent neurons with fully functioning efferent neurons; essentially, motor movement without somatosensation. [ 5 ] Sensory loss can occur due to a minor nick or lesion on the spinal cord which creates a problem within the nervous system.
This can lead to loss of smell, taste, touch, sight, and hearing. In most cases it often leads to issues with touch. Sometimes people cannot feel touch at all, while other times a light finger tap feels like someone has punched them. There are medications and therapies [ example needed ] that can help control the symptoms of sensory loss and deprivation. Ageusia is the loss of taste, particularly the inability to detect sweetness , sourness , bitterness , saltiness , and umami (meaning "pleasant/savory taste"). It is sometimes confused with anosmia (a loss of the sense of smell). Because the tongue can only indicate texture and differentiate between sweet, sour, bitter, salty, and umami, most of what is perceived as the sense of taste is actually derived from smell. True ageusia is relatively rare compared to hypogeusia (a partial loss of taste) and dysgeusia (a distortion or alteration of taste). Tissue damage to the nerves that support the tongue can cause ageusia, especially damage to the lingual nerve and the glossopharyngeal nerve . The lingual nerve carries taste for the front two-thirds of the tongue and the glossopharyngeal nerve carries taste for the back third of the tongue. The lingual nerve can also be damaged during otologic surgery, causing a metallic taste. Taste loss can vary from true ageusia , a complete loss of taste, to hypogeusia , a partial loss of taste, to dysgeusia , a distortion or alteration of taste. The primary cause of ageusia involves damage to the lingual nerve , which receives the stimuli from taste buds for the front two-thirds of the tongue , or the glossopharyngeal nerve , which acts similarly for the back third. Damage may be due to neurological disorders, such as Bell's palsy or multiple sclerosis , as well as infectious diseases such as meningoencephalopathy. Other causes include a vitamin B deficiency, as well as taste bud death due to acidic/spicy foods, radiation , and/or tobacco use.
[ 6 ] Dual sensory loss is the simultaneous loss of two senses. Research has shown that 6% of non- institutionalized older adults had a dual sensory impairment, and 70% of severely visually impaired older adults additionally suffered from significant hearing loss. [ 7 ] Vision and hearing loss both interfere with the interpretation and comprehension of speech. People with sensory loss often have problems communicating. Personal, situational and environmental factors can also become prohibitive barriers to communication. Poor communication frequently results in poor psychosocial functioning. Older adults with sensory loss often find it difficult to adapt to their sensory loss, becoming depressed , anxious, lethargic, and dissatisfied. Thus, sensory loss, the inability to communicate, and poor psychosocial functioning reduces quality of life and well-being. [ 7 ]
https://en.wikipedia.org/wiki/Sensory_loss
Sensory stimulation therapy ( SST ) is an experimental therapy that aims to use neural plasticity mechanisms to aid in the recovery of somatosensory function after stroke or cognitive ageing . Stroke and cognitive ageing are well-known sources of cognitive loss, the former by neuronal death , the latter by weakening of neural connections . SST stimulates a specific sense at a specific frequency. Research suggests that this technique may reverse cognitive ageing by up to 30 years, and may selectively improve or impair two point discrimination thresholds . By 2025, it is estimated that 34 million people in the United States will have dementia. It is therefore extremely important to establish an effective treatment for people with such symptoms, to reduce or even eliminate them. Among modern-day treatments not involving pharmacology, psychosocial therapies are an important intervention. With psychosocial therapies such as massage, aromatherapy, multi-sensory stimulation, music therapy, and reality orientation, treatment of dementia and dementia-related diseases has become possible in a less traditional, non-pharmacological form. [ 1 ] It was once believed that the brain was largely unchanging and that its function was decided at a young age. [ 2 ] Along this train of thought, cognitive loss from strokes and ageing was viewed as unrecoverable. Functional localization is a theory which suggests that each section of the brain has a specific function, and that loss of a section equates to permanent loss of function. Traditional models even specialize between hemispheres of the brain and describe "artistic and logical sections of the brain." This fatalistic outlook has been dramatically challenged by the recent paradigm of brain plasticity . Brain plasticity refers to the ability of the brain to restructure itself, form new connections, or adjust the strength of existing connections.
[ 3 ] The current paradigm allows for the conceptualization of a brain that is capable of change. Various researchers are using this concept to develop new therapies for conditions that were previously viewed as permanent; for example, Paul Bach-y-Rita has worked on devices to give sight to blind individuals, and to alleviate a feeling of falling in a patient who has lost function of the vestibular apparatus . It has been found that many senses have some plastic nature about them. Even auditory cognition has been shown to have some potential for recovery after stroke. A recent study by Sarkamo et al. has shown that listening to music and audio books during early recovery from stroke can result in improved cognition. [ 4 ] This paradigm has opened doors into what was previously believed to be impossible: recovery from strokes and reduced cognitive ageing. A stroke can be caused by a few different situations, but the basic result is the same. Blood flow to a section of the brain is stopped, which results in rapid depletion of oxygen and other nutrients in the starved section. The starved section of brain tissue quickly begins to die, resulting in a lesion in the brain. The resulting lesion can be traced to loss of various cognitive functions depending on the location and area of damage. [ 5 ] It is common for stroke patients to suffer from muscle weakness and loss of muscle function . Some natural recovery has been observed; however, training based on advances in neuroscience has shown the most dramatic improvements. These investigational therapies focus on repetition of basic tasks with limited appendages. It is generally found that more intensive remedial function therapies result in greater restoration of function. [ 6 ] fMRI and PET scan studies have shown that after as little as 3 weeks of the intensive training programs there are statistical differences between the experimental group and controls, with observable improvements in muscle control.
[ 7 ] Although these methods have opened doors for improved quality of life for stroke patients, the training methods are very time- and attention-intensive. A system without such massive attention requirements would be a powerful tool. During ageing , many brain functions decline and some are lost; this is referred to as cognitive ageing. In the most extreme cases one might think about the catastrophic results of Alzheimer's disease . Age is the largest risk factor for Alzheimer's disease. However, due to lack of knowledge and successful research in this field, little is known about the rates of clinical decline and brain atrophy. [ 1 ] This disease is associated with neuronal death. However, more general ageing involves loss of synaptic strength rather than neuronal death. [ 8 ] In this situation, the machinery for proper functioning of the brain is still present, but is in disarray. It has been shown that as much as a 46% decrease in dendrite spine number and density can occur in humans over 50 years old when compared to younger participants. [ 9 ] The cortical homunculus, or the visual representation of how your brain sees your body, was discovered by Wilder Penfield , a world-famous brain surgeon. Upon ending his career as part of the McGill medical faculty, he served as the director of the Neurological Institute. The somatosensory system is the part of our sensory system that deals with touch. We would not be able to feel things like temperature, pain, pressure, vibration, and skin rash without our somatosensory system. [ 10 ] The peripheral nervous system has the ability to sense touch, pressure , vibration , limb position , heat , coldness , and pain . This information is sent through the peripheral nervous system to the spinal cord , where it is finally processed by the brain . 
One of the key structures in processing this information is the primary somatosensory cortex , which is located in the parietal lobe . The primary somatosensory cortex is known to have subsections that process information from different regions of the body, and the area of the cortex for each region is related to its acuity. This observation is often shown symbolically through a homunculus . [ 11 ] Sensory stimulation uses rapid stimulation of nerves in a section of skin to drive neuronal changes in the participant. The nerves are electrically stimulated in a fashion referred to as coactivation. [ 12 ] [ 13 ] In both cases the participant's limb, often the hand, is constrained in a device that has a section that applies the stimulation. The participant is allowed to go about their daily activities, and many do not mind the presence of the device. [ 12 ] The degree of reorganization is often measured through two point discrimination thresholds , which measure the smallest distance between two points that can be felt by the subject. It has been shown that the use of this technique can reverse as much as 30 years of sensory loss. [ 12 ] In the study by Dinse et al. , 28 patients between the ages of 66 and 86 tested similarly to participants 30 years younger than themselves after treatment. These participants had the device attached for 3 hours while undergoing stimulation. Other studies have used shorter periods of stimulation and achieved similar results. [ 14 ] A recent study published in the Archives of Physical Medicine and Rehabilitation followed four patients recovering from strokes while undergoing electrical sensory stimulation therapy. Their progress was followed through several different tests: touch threshold, tactile acuity, haptic object recognition , pegs placed in a peg board, and motor tapping tests. It was found that all patients increased their performance during the study. 
Although this study used a small sample group and had no control group, it was a first step that suggested future studies. [ 13 ] Future studies were developed around this study, in which participants' skin was electrically stimulated to induce signals sent to the brain. In January 2008, Ragert et al. explored the impact of frequency of stimulation on sensory stimulation techniques to induce plastic changes. The study investigated whether varying the frequency could be used to induce either Long-Term Potentiation (LTP) or Long-Term Depression (LTD). LTP refers to the processes by which neuronal connections are formed and strengthened through stimulation and activity. Conversely, LTD is a process by which a neuronal pathway is weakened by low levels of stimulation or by disuse. [ 15 ] In the study, Ragert et al. divided their participants into two groups, both of which underwent the SS therapy, but the frequency of stimulation was varied between the two groups. Their analysis showed a statistical improvement in two point discrimination tests for the high frequency group, and a statistical impairment on the same test for the low frequency group. [ 15 ] This result brings an interesting possibility to light for the future of this technique: SS could be used not only to recover lost sensory function, but also to dull chronic pain . Activity-dependent plasticity refers to the phenomenon by which neuronal connections change via repetitive use. This form of plasticity has been used by neuro-rehabilitation clinics to help those recovering from strokes; for example The Taub Therapy Clinic uses a constraint-induced therapy. [ 2 ] This therapy focuses on stroke patients with limited function in a limb. The patient's good limb is constrained and the patient is directed through physical tasks of increasing difficulty to induce recovery of neural networks. The American Stroke Association published an article in 2005 by Sawaki et al. on the possible use of SS to supplement use-dependent plasticity (UDP) therapies. 
They suspected that, because of the importance of somatosensory information in movement, enhancing sensory processing through SS could also improve UDP. Their experiment had two experimental groups; both groups were directed to complete voluntary movements with their thumb, and one group underwent 20 minutes of SS before the directed thumb movements. It was found that the paired participants had greater recovery of function. [ 14 ] Cortical maps are the maps by which parts of our brain, such as the somatosensory system, are described. The cortical maps in our brains relate not so much to our senses in general as to our sense of physical touch. It has been found that intensive training methods can be used to enlarge cortical maps for patients recovering from a stroke. Studies that use fMRI and PET scans have shown that the degree of activation increases in the motor cortex of patients undergoing intensive therapies. [ 16 ] This provides strong support to the idea that plastic changes in the brain are a mechanism by which recovery can occur. Patients suffering from hemiparetic stroke [ 17 ] often lose their ability to stand upright and hold their posture on their own. Without the ability to control our posture, we lose the ability to move freely and voluntarily, which is necessary for activities of daily living (ADL) . Studies have been conducted to see if sensory stimulation could improve functionality after a stroke occurs. The study compared two groups: a group undergoing standard physical therapy (group 1), and a group that was given sensory stimulation with acupuncture , physiotherapy , and ADL training (group 2). Both groups began the study within ten days of the initial stroke. Group 2 achieved stimulation via traditional Chinese acupuncture (10 needles), placed according to traditional Chinese acupuncture points and kept in place for 30 minutes. 
Alongside the manual stimulation, electric stimulation (2 to 5 Hz) was also given to four of the ten needles. The treatment continued for four to ten days, with an average of six and a half days. The twenty-one patients in group 2 had a mean age of 74.2 and the mean age of group 1 was 74.8. Of the patients in group 2 for whom postural recordings could be made, 7 suffered from a hemiparetic lesion on the left side and 10 had lesions on the right. Of the patients in group 1, 4 had a lesion on the left side and 3 on the right. Upon testing, the subjects stood on a platform with their heels together and their arms crossed over their chests. The subjects were exposed to perturbations via vibratory stimulus on their calf muscles, which caused anteroposterior movement, or galvanic stimulation of the vestibular nerves, causing lateral movement. Three different tests were done, with patients' eyes both open and closed. [ 18 ] Results of the study found that there were major differences between group 1, the control group, and group 2, the treatment group. More patients of the treatment group than the control group were able to maintain a healthy stance during perturbations. As both groups were being treated for post-stroke symptoms, it was thought that these perturbations would enhance their posture and motor movements naturally. Among the subjects who survived 2 or more years after hemiparetic stroke, the treatment group (group 2) maintained better postural control. Furthermore, patients who received any additional sensory stimulation acquired values approaching the normal for age-matched healthy subjects when postural control was measured. The sensory stimulation enhanced at least partial recovery of postural function for up to 2 years after the stroke and treatment. After testing, it was deduced that improved recovery after sensory stimulation can be accomplished by patients regaining near-normal dynamics of human postural control. 
Postural control is one of the most important issues in rehabilitation after stroke; the study thus concluded that sensory stimulation may enhance the functional plasticity of the brain. [ 18 ] Sensory stimulation therapy is a developing technique aimed at recovering sensory loss after strokes and restoring losses from ageing. It has not been proven that sensory stimulation therapy can actually improve brain plasticity or cognitive function. The paradigm of brain plasticity marked a fundamental change in the way that the brain is understood and considered for future therapies. [ 2 ] SS takes advantage of this paradigm: the senses are presented with simple stimulation to cause changes inside the brain. In this particular situation a section of skin is stimulated either through electrical or physical means. Signals are sent through the peripheral nervous system to the somatosensory cortex. [ 12 ] [ 13 ] These signals are then the impetus for changes inside the brain. It has been shown that the adjustment of frequency in this technique can be used to induce either Long-Term Potentiation or Long-Term Depression. [ 15 ] In the case of LTP, as much as 30 years of sensory loss has been shown to be recoverable in relatively short time periods. [ 12 ] SS has been paired with use-dependent plasticity training systems, and it has been shown that the combination produces enhanced recovery. [ 14 ] One of the striking advantages of this technique is that it is not necessary for the participant to pay attention to the stimulus in order to gain benefit from the therapy. [ 12 ] This technique opens many interesting doors for future therapies. A potential challenge for this technique is that there is little transfer of gains from one section of skin to another. Many studies have been conducted, most with some kind of positive conclusion; however, further studies of sensory stimulation in dementia need to be conducted in order to prove or disprove any theories. 
[ 19 ]
https://en.wikipedia.org/wiki/Sensory_stimulation_therapy
The sensory trap hypothesis describes an evolutionary idea that revolves around mating behavior and female mate choice . It is a model of female preference and male sexual trait evolution through what is known as sensory exploitation. [ 1 ] [ 2 ] Sensory exploitation, or a sensory trap is an event that occurs in nature where male members of a species perform behaviors or display visual traits that resemble a non-sexual stimulus which females are responsive to. This tricks females into engaging with the males, thus creating more mating opportunities for males. [ 2 ] [ 3 ] What makes it a sensory trap is that these female responses evolved in a non-sexual context, and the male produced stimulus exploits the female response which would not otherwise occur without the mimicked stimulus. [ 2 ] [ 4 ] The term "trap" indicates that these sensory trap events may be detrimental to female mating success, but they may not always be costly. In fact, there are circumstances where not responding to the stimulus itself can be costly, as females may ignore the actual stimulus in the correct context, and lose the fitness benefits that come with it. [ 2 ] There are also circumstances where these traps can actually be beneficial in the context of mate choice, where the females who are responding to the trap end up gaining high-quality males to mate with. [ 2 ] [ 3 ] While these sensory traps can be quite successful when they appropriately mimic the non-sexual stimulus, they often become exaggerated as a result of excessive selection to the point where they are no longer useful. [ 2 ] This is due to the trait or behavior becoming imperceptible or no longer resembling the original stimulus. [ 2 ]
https://en.wikipedia.org/wiki/Sensory_trap_hypothesis
Sensu is a Latin word meaning "in the sense of". It is used in a number of fields including biology , geology , linguistics , semiotics , and law . Commonly it refers to how strictly or loosely an expression is used in describing any particular concept, but it also appears in expressions that indicate the convention or context of the usage. Sensu is the ablative case of the noun sensus , here meaning "sense". It is often accompanied by an adjective (in the same case). Three such phrases are: Søren Kierkegaard uses the phrase sensu eminenti to mean "in the pre-eminent [or most important or significant] sense". [ 3 ] When appropriate, comparative and superlative adjectives may also be used to convey the meaning of "more" or "most". Thus sensu stricto becomes sensu strictiore ("in the stricter sense" or "more strictly speaking") and sensu strictissimo ("in the strictest possible sense" or "most strictly speaking"). Current definitions of the plant kingdom ( Plantae ) offer a biological example of when such phrases might be used. One definition of Plantae is that it consists of all green plants (comprising green algae and land plants ), all red algae and all glaucophyte algae ; the group defined in this way could be called Plantae in sensu lato (or simply Plantae sensu lato ). A stricter definition excludes the red and glaucophyte algae; the group defined in this way could be called Plantae in sensu stricto . An even stricter definition excludes green algae, leaving only land plants; the group defined in this way could be called Plantae in sensu strictiore , or Plantae in sensu strictissimo . [ 4 ] Conversely, where convenient, some authors derive expressions such as " sensu non strictissimo ", meaning "not in the narrowest possible sense". [ 5 ] A similar form is in use to indicate the sense of a particular context, such as "Nonmonophyletic groups are ... nonnatural (sensu cladistics) in that ..." [ 6 ] or "... computation of a cladogram (sensu phenetics) ..." 
[ 7 ] Also the expression sensu auctorum (abbreviation: sensu auct. ) is used to mean "in the sense of certain authors", who can be designated or described. It normally refers to a sense which is considered invalid and may be used in place of the author designation of a taxon in such a case (for instance, "Tricholoma amethystinum sensu auct." is an erroneous name for a mushroom which should really be " Lepista personata (Fr.) Cooke"). [ 8 ] A related usage is in a concept-author citation (" sec . Smith", or " sensu Smith"), indicating that the intended meaning is the one defined by that author. [ 7 ] [ 9 ] (Here " sec ." is an abbreviation of " secundum ", meaning "following" or "in accordance with".) Such an author citation is different from the citation of the nomenclatural "author citation" or "authority citation" . In biological taxonomy the author citation following the name of a taxon simply identifies the author who originally published the name and applied it to the type , the specimen or specimens that one refers to in case of doubt about the definition of a species. Given that an author (such as Linnaeus, for example) was the first to supply a definite type specimen and to describe it, it is to be hoped that his description would stand the tests of time and criticism, but even if it does not, then as far as practical the name that he had assigned will apply. It still will apply in preference to any subsequent names or descriptions that anyone proposes, whether his description was correct or not, and whether he had correctly identified its biological affinities or not. This does not always happen of course; all sorts of errors occur in practice. For example, a collector might scoop a netful of small fish and describe them as a new species; it then might turn out that he had failed to notice that there were several (possibly unrelated) species in the net. It then is not clear what he had named, so his name can hardly be taken seriously, either s.s. or s.l . 
[ citation needed ] After a species has been established in this manner, specialist taxonomists may work on the subject and make certain types of changes in the light of new information. In modern practice it is greatly preferred that the collector of the specimens immediately passes them to specialists for naming; it is rarely possible for non-specialists to tell whether their specimens are of new species or not, and in modern times not many publications or their referees would accept an amateur description. [ citation needed ] In any event, the person who finally classifies and describes a species has the task of taxonomic circumscription . Circumscription means in essence that anyone competent in the matter can tell which creatures are included in the species described, and which are excluded . It is in this process of species description that the question of the sense arises, because that is where the worker produces and argues their view of the proper circumscription. Equally, or perhaps even more strongly, the arguments for deciding questions concerning higher taxa such as families or orders , require very difficult circumscription, where changing the sense applied could totally upset an entire scheme of classification, either constructively or disastrously. [ citation needed ] Note that the principles of circumscription apply in various ways in non-biological senses. In biological taxonomy the usual assumption is that circumscription reflects the shared ancestry perceived as most likely in the light of the currently available information; in geology or legal contexts far wider and more arbitrary ranges of logical circumscription commonly apply, not necessarily formally uniformly. However, the usage of expressions incorporating sensu remains functionally similarly intelligible among the fields. 
In geology for example, in which the concept of ancestry is looser and less pervasive than in biology, one finds usages such as: Sensu is used in the taxonomy of living creatures to specify which circumscription of a given taxon is meant, where more than one circumscription can be defined. "The family Malvaceae s.s. is cladistically monophyletic ." This means that the members of the entire family of plants under the name Malvaceae ( strictly speaking ), over 1000 species, including the closest relatives of cotton and hibiscus , all descend from a shared ancestor, specifically, that they, and no other extant plant taxa , share a notional most recent common ancestor (MRCA). [ 12 ] If this is correct, that ancestor might have been a single species of plant. Conversely the assertion also means that the family includes all surviving species descended from that ancestor. Other species of plants that some people might ( broadly speaking or s.l. ) have included in the family would not have shared that MRCA (or ipso facto they too would have been members of the family Malvaceae s.s. ). In short, this circumscription s.s. includes all and only plants that have descended from that particular ancestral stock. "In the broader APG circumscription the family Malvaceae s.l. includes Malvaceae s.s. and also the families Bombacaceae , Sterculiaceae and Tiliaceae ." Here the circumscription is broader, stripped of some of its constraints by saying sensu lato ; that is what speaking more broadly amounts to. Discarding such constraints might be for historical reasons, for example when people usually speak of the polyphyletic taxon because the members were long believed to form a "true" taxon and the standard literature still refers to them together. Alternatively a taxon might include members simply because they form a group that is convenient to work with in practice. 
In this example, we can know from additional sources that we are dealing with the latter case: by adding other groups of plants to the family Malvaceae s.l. , including those related to cacao , cola , durian , and jute , the APG circumscription omits some of the criteria by which the new members previously had been excluded. The s.l. group remains monophyletic. [ 12 ] [ 13 ] "The 'clearly non-monophyletic' series Cyrtostylis sensu A.S. George has been virtually dismantled..." [ 14 ] This remark specifies Alex George 's particular description of that series. It is a different kind of circumscription, alluding to the fact that A.S. George called them a series. "Sensu A.S. George" means that A.S. George discussed the Cyrtostylis in that series, and that members of that series are the ones under discussion in the same sense— how A. S. George saw them ; the current author might or might not approve George's circumscription, but George's is the circumscription currently under consideration.
https://en.wikipedia.org/wiki/Sensu
Sentient is an artificial intelligence (AI) space-based satellite data analysis system operated by the intelligence community of the United States of America . Sentient is described as a heavily classified " artificial brain " and an automated system that allows intelligence agencies and armed forces to use satellite imagery and other data to autonomously find and track in real time targets on or above the Earth from outer space . Satellites are automatically repurposed with AI and machine learning while tracking targets, and can decide, independently of human operators, which targets are worth tracking. Sentient is developed and operated by the National Reconnaissance Office (NRO), with the United States Air Force (USAF) Research Laboratory (AFRL) and the Department of Energy's (DOE) National Laboratories . Sentient, sometimes reported on and referred to as the Future Ground Architecture (FGA) program, is a jointly developed program led by the NRO's Advanced Systems and Technology Directorate (AS&T) in collaboration with the AFRL at Wright-Patterson Air Force Base and multiple DOE laboratories. [ 2 ] [ 3 ] As a heavily classified program, public details on Sentient’s architecture and operations remain limited. [ 1 ] Public records indicate that Sentient’s development program began in 2009, as highlighted by the Federation of American Scientists (FAS). [ 4 ] In 2010, following the declassification of its FY 2010 Congressional Budget Justification (Volume IV), the NRO issued a request for information (RFI) soliciting white papers on user interaction , self‑awareness , cognitive processing and process automation . [ 4 ] The FAS noted that satellite reconnaissance underpins U.S. situational awareness by enabling rapid, risk‑free collection anywhere in the world. [ 4 ] As reported by Sarah Scoles in The Verge , research and development on Sentient began as early as October 2010, managed out of the NRO's AS&T. 
[ 1 ] NRO reporting indicates Sentient’s core development phase ran from its first milestone in 2013 through 2016. [ 1 ] Then-NRO Director (DNRO) Betty J. Sapp said at the 2013 GEOINT Symposium that Sentient "will allow NRO to be not only responsive, but also predictive, with where it aims spaceborne assets." [ 5 ] Sentient was discussed in a 2014 edition of NRL Review , published by the United States Naval Research Laboratory (NRL). [ 6 ] By 2015, Sentient had become the lynchpin of the FGA approach; it transitioned to horizontally networked ground stations that enable rapid software‑defined updates to “dumb” satellites. [ 5 ] [ 7 ] In 2016, the NRO's Principal Deputy Director (PDDNRO) Frank Calvelli briefed the House Armed Services Committee (HASC) on Sentient. [ 8 ] The American Nuclear Society published that the annual budget of the Sentient program was approximately $238 million per year in the 2015–2017 period. [ 9 ] In March 2017, the NRO completed a briefing for the Senate Armed Services Committee (SASC) related to Sentient. [ 10 ] NROL-76 , also known as USA-276, was a May 2017 Falcon 9 Full Thrust launch deployed from Cape Canaveral Space Force Station conducted by SpaceX , and is the only orbital launch and satellite deployment mission reported in the media as related to the NRO and the Sentient program. [ 11 ] At the 39th Space Symposium in April 2024, PDDNRO Troy Meink announced plans to field a mix of large and small satellites to increase satellite revisit times , thereby improving global coverage and enhancing resilience against emerging threats. [ 12 ] Joshua Perrius of Booz Allen Hamilton , a contractor supporting the NRO, said that automating routine exploitation workflows will allow personnel to focus on higher‑level analysis. [ 12 ] DNRO Sapp stated that the NRO has been asked to give more demonstrations of Sentient and its capabilities than "any other capability since the beginning of the organization's history," in 1959. 
[ 2 ] Sentient employs " tipping and cueing "—part of an AI‑driven orchestration layer —to dynamically retask reconnaissance satellites to observe specific targets. [ 1 ] [ 13 ] Sentient hands off tracking duties across satellite constellations and associated Earth-based stations . [ 1 ] By 2024, the NRO had announced plans to field a mix of small and large reconnaissance satellites across orbital regimes —from low , medium and geosynchronous orbits —to increase revisit rates and improve space‑based coverage of high‑value targets . [ 12 ] Sentient fuses the diverse information and data sourced from its constellation—spanning orbital imagery , signal intercepts , and other feeds—to build a unified, actionable common operational picture . [ 14 ] In that fused big picture, Sentient applies algorithms to spot unexpected or non-traditional observables that human analysts may miss. [ 1 ] [ 15 ] Using forecasting models to predict adversary courses of action —from force movements to emerging threats —Sentient then adjusts satellite retasking in near real‑time. [ 1 ] [ 16 ] The cycle requires minimal human intervention, and intelligence analysts are freed to focus on interpretation and decision‑making rather than data wrangling and sifting . [ 15 ] [ 1 ] A declassified 2019 NRO document shows Sentient collects complex information buried in noisy data and extracts the relevant pieces , freeing analysts to refocus on situational understanding via predictive analytics and automated tasking . [ 14 ] The NRO fielded cubesats —small, cube‑form satellites—to validate resilient, distributed remote sensing . [ 2 ] It also prioritized on-demand wide-area monitoring via new phenomenological models to detect and geolocate targets, enhanced collection against weak signals and low- reflectance objects in dense clutter and co-channel interference environments, and advanced phased array technologies to improve overall performance. 
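The fuse → flag → retask cycle described above can be illustrated with a deliberately simplified toy sketch. All names, scores, and the threshold below are hypothetical illustrations, not Sentient's actual (classified) design; it only shows the generic tip-and-cue pattern of fusing multi-source scores and cueing idle sensors to the highest-priority anomalies.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Satellite:
    name: str
    tasked_to: Optional[str] = None  # target this satellite is currently cued to

def fuse(feeds: List[Dict[str, float]]) -> Dict[str, float]:
    """Fuse per-feed anomaly scores (imagery, signals, ...) into one
    common picture; here, scores are simply summed per target."""
    fused: Dict[str, float] = {}
    for feed in feeds:
        for target, score in feed.items():
            fused[target] = fused.get(target, 0.0) + score
    return fused

def tip_and_cue(satellites: List[Satellite],
                feeds: List[Dict[str, float]],
                threshold: float = 1.0) -> List[str]:
    """One cycle: fuse the feeds, 'tip' targets whose fused score exceeds
    the threshold, then 'cue' idle satellites to the highest-scoring tips.
    Returns the tipped targets in priority order."""
    fused = fuse(feeds)
    tips = sorted((t for t, s in fused.items() if s > threshold),
                  key=lambda t: -fused[t])
    idle = [s for s in satellites if s.tasked_to is None]
    for sat, target in zip(idle, tips):
        sat.tasked_to = target
    return tips
```

For example, with two idle satellites and two feeds `[{"ship": 0.7, "airfield": 0.2}, {"ship": 0.6, "convoy": 1.5}]`, the fused scores are ship 1.3, convoy 1.5, airfield 0.2; the first satellite is cued to "convoy" and the second to "ship". A real orchestration layer would add sensor geometry, revisit constraints, and prediction, which this sketch omits.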
[ 4 ] The NRO’s Aerospace Data Facilities (ADF)— Colorado , East , and Southwest —provide ground support for intelligence collection. [ 17 ] Several independent analyses, think‑tank reports, and media outlets have examined the Sentient program's data‑fusion capabilities, integration with intelligence architectures, effects on situational awareness and decision‑making processes, scientific and engineering pedigree, and strategic value. The Verge described Sentient as “an omnivorous analysis tool, capable of devouring data of all sorts, making sense of the past and present, anticipating the future, and pointing satellites toward what it determines will be the most interesting parts of that future.” [ 1 ] Andrew Krepinevich warns of the “avalanche” of data available from intelligence, military, and commercial sources that would overwhelm human analysts. [ 16 ] Steven Aftergood of the FAS adds that Sentient’s inputs “could include international communications, older intelligence collateral, and human sources .” [ 1 ] The National Geospatial-Intelligence Agency 's (NGA) former Director Robert Cardillo characterized Sentient as "automation that ingests data, makes sense of it in context, and infers likely future intelligence and collection needs." [ 18 ] United States Army Captain Anjanay Kumar warned in 2021 that although the system itself is secure, its distributed ground infrastructure could be vulnerable to adversary attack. 
[ 19 ] The Rand Corporation notes a key advantage of Sentient: by automating routine collection tasks, Sentient frees analysts to concentrate on the “so what?” of intelligence, rather than the “what.” [ 15 ] Alec Smith, writing for Grey Dynamics , concurs that Sentient enhances “situational awareness based on observed activity and historical intelligence to model and anticipate potential courses of action by adversaries.” [ 20 ] Wege and Mobley further suggest that Sentient‑style tools can boost “intelligence equities” in areas like oceanic shipping and " sanctions busting " by authoritarian states . [ 21 ] Henning Lahmann of Leiden University argues that Sentient’s anomaly‑detection and modeling can predict adversary behavior as part of real‑time automated analytics of the battlespace . [ 22 ] Sarah Shoker adds that comparable systems—such as automatic target recognition (ATR)—can remove human bottlenecks in time‑sensitive analysis by forecasting future actions from past patterns. [ 23 ] Lahmann likewise emphasizes the move toward fully automated, real‑time fusion of diverse sensor data streams for intelligence support. [ 22 ] Andrew Krepinevich details the commercial providers contracted to fuel Sentient’s analytics—namely Maxar Technologies , Planet, and BlackSky. [ 16 ] Maxar reports it supplies “90 percent of the foundational geospatial intelligence used by the US government.” [ 24 ] In The Fragile Dictator: Counterintelligence Pathologies in Authoritarian States , Wege and Mobley compare Sentient to Spaceflight Industries’ commercial Blacksky Global service. [ 21 ] According to Krepinevich, BlackSky “hoovers up” vast volumes of raw collateral—dozens of satellites, over a hundred million mobile devices, plus ships, planes, social networks, and environmental sensors—to feed Sentient’s big‑data pipelines. 
[ 16 ] Retired Central Intelligence Agency (CIA) analyst Allen Thomson observes that the system aspires to ingest “everything,” from imagery to financial records to weather data and more. [ 1 ] This article incorporates public domain material from websites or documents of the United States government .
https://en.wikipedia.org/wiki/Sentient_(intelligence_analysis_system)
The Sentinel Space Telescope was [ 1 ] a space observatory to be developed by Ball Aerospace & Technologies for the B612 Foundation . [ 2 ] [ 3 ] The B612 Foundation is dedicated to protecting the Earth from dangerous asteroid strikes, and Sentinel was to be the Foundation's first spacecraft to tangibly address that mission. The space telescope was intended to locate and catalog 90% of the asteroids greater than 140 metres (460 ft) in diameter that exist in near-Earth orbits . The telescope would have orbited the Sun in a Venus -like orbit (i.e. between Earth and the Sun ). This orbit would allow it to clearly view the night half of the sky every 20 days, and to pick out objects that are often difficult, if not impossible, to see in advance from Earth. [ 4 ] Sentinel would have had an operational mission life of six and a half to ten years. [ 5 ] After NASA terminated their funding agreement with the B612 Foundation in October 2015 [ 6 ] and the private fundraising goals could not be met, the Foundation eventually opted for an alternative approach using a constellation of much smaller spacecraft under study as of June 2017 [update] . [ 1 ] NASA/ JPL 's NEOCam has been proposed instead. The B612 project grew out of a one-day workshop on asteroid deflection organized by Piet Hut and Ed Lu at NASA Johnson Space Center , Houston , Texas on October 20, 2001. Participants Rusty Schweickart , Clark Chapman , Piet Hut , and Ed Lu established the B612 Foundation on October 7, 2002. [ 7 ] The Foundation originally planned to launch Sentinel by December 2016 and to begin data retrieval no later than 6 months after successful positioning. [ 8 ] In April 2013, the plan had moved out to launching on a SpaceX Falcon 9 in 2018, following preliminary design review in 2014, and critical design review in 2015. 
[ 4 ] As of April 2013 [update] , B612 was attempting to raise approximately $450 million to fund the total development and launch cost of Sentinel, at a rate of some $30 to $40 million per year. [ 4 ] That funding profile precludes the advertised 2018 launch date. After NASA terminated their $30 million funding agreement with the B612 Foundation in October 2015 [ 6 ] and the private fundraising did not achieve its goals, the Foundation eventually opted for an alternative approach using a constellation of much smaller spacecraft, which is under study as of June 2017 [update] . [ 1 ] NASA/ JPL 's NEOCam has been proposed instead. Unlike similar projects to search for near-Earth asteroids or near-Earth objects (NEOs), such as NASA 's Near-Earth Object Program , Sentinel would have orbited between Earth and the Sun. Since the Sun would always have been behind the lens of the telescope, it would never have inhibited the telescope's ability to detect NEOs, and Sentinel would have been able to perform continuous observation and analysis. Sentinel was anticipated to be capable of detecting 90% of the asteroids greater than 140 meters in diameter that exist in Earth's orbit, which pose existential risk to humanity. The B612 Foundation estimates that approximately half a million asteroids in Earth's neighbourhood equal or exceed in size the one that struck Tunguska in 1908 . [ 5 ] It was planned to be launched atop the Falcon 9 rocket, designed and manufactured by the private aerospace company SpaceX , in 2019, [ 9 ] and to be maneuvered into position with the help of the gravity of Venus . Data gathered by the Sentinel Project would have been provided through an existing network of scientific data-sharing that includes NASA and academic institutions such as the Minor Planet Center in Cambridge, Massachusetts . Given the satellite's telescopic accuracy, Sentinel's data was speculated to prove valuable for future missions in such fields as asteroid mining .
[ 5 ] [ 10 ] The telescope was intended to measure 7.7 metres (25 ft) by 3.2 metres (10 ft), with a mass of 1,500 kilograms (3,300 lb), and would have orbited the Sun at a distance of 0.6 to 0.8 astronomical units (90,000,000 to 120,000,000 km; 56,000,000 to 74,000,000 mi), approximately the same orbital distance as Venus . It would have employed infrared astronomy methods to identify asteroids against the cold of outer space . The B612 Foundation worked in partnership with Ball Aerospace to construct Sentinel's 0.51 m (20 inches) aluminum mirror, which would have captured a large field of view. [ 3 ] "Sentinel will scan in the 7- to 15-micron wavelength using a 0.5-meter infrared telescope across a 5.5 by 2-deg. field of view. The [infrared] IR array would have consisted of 16 detectors, and coverage would have scanned a 200-degree, full-angle field of regard ." [ 4 ] Key features included: [ citation needed ] REP. STEWART: ... are we technologically capable of launching something that could intercept [an asteroid]? ... DR. A'HEARN: No. If we had spacecraft plans on the books already, that would take a year ... I mean a typical small mission ... takes four years from approval to start to launch ...
https://en.wikipedia.org/wiki/Sentinel_Space_Telescope
Sentinel cells refer to cells in the body's first line of defense , which embed themselves in tissues such as skin . [ 1 ] Sentinel cells represent a diverse array of cell types with the capability to monitor the presence of exogenous or potentially harmful particles, and they play a crucial role in recognizing and sampling signs of infection, abnormal cellular activity and/or death. Encountering such stimuli initiates the innate immune response. [ 2 ] Their ability to recognize injurious or dangerous material is mediated by specialized pattern recognition receptors (PRRs), and they possess the specialized function of priming naive T cells upon pathogen recognition. [ 3 ] Sentinel cells can refer to specific antigen-presenting cells , such as: Sentinel cells can also refer to cells that are normally not specialized antigen-presenting cells, such as: [ 1 ] Sometimes tissue cells that are not part of the immune system are also referred to as sentinel cells : [ 1 ] Typically, dendritic cells (DCs) and macrophages serve this function, being strategically distributed throughout diverse tissues within the host, particularly in regions exposed to contact with the external environment such as mucosal tissues and skin. [ 4 ] A recent study offers an in-depth exploration of the intricate network of phenotypic markers characterizing sentinel cells residing in the skin, describing stimulus-specific gene expression and functions. [ 5 ] Interestingly, a novel function has been demonstrated by engineering sentinel bacteria ( Bacillus subtilis ), combining a living organism with the evolutionary function of sentinels: these engineered cells take up and record target DNA sequences, using CRISPR interference for SNP differentiation and expression of a target gene, enabling surveillance of specific DNA sequences and reporting of single nucleotide polymorphisms associated with facial features.
This technology demonstrates potential applications in areas like forensics, ecology, and epidemiology, by enabling the detection of specific DNA sequences in various environments. Such DNA-specific surveillance could also be designed to detect an anomaly and to target localized treatment to the identified tissue, such as a tumor. [ 6 ]
https://en.wikipedia.org/wiki/Sentinel_cell
Sentinus is an educational charity based in Lisburn , Northern Ireland that provides educational programs for young people interested in science, technology, engineering and mathematics (STEM). Northern Ireland produces around 2,000 qualified IT workers each year; there are around 16,000 IT jobs in the Northern Ireland economy. [ citation needed ] [ relevant? ] It works with EngineeringUK and the Council for the Curriculum, Examinations & Assessment (CCEA), and with primary and secondary schools in Northern Ireland. It runs summer placements and IT workshops for those of sixth-form age (16-18). [ 2 ] It offers Robotics Roadshows for primary school children. [ 3 ] Sentinus hosts the annual Big Bang Northern Ireland Fair, which incorporates Sentinus Young Innovators. This is a one-day science and engineering project exhibition for post-primary students, and one of the largest such events in the United Kingdom . In 2019 over 3,000 students participated from 130 schools across both Northern Ireland and the Republic of Ireland . The competition is affiliated with the International Science and Engineering Fair (ISEF) and the Broadcom MASTERS program. The overall winner represents Northern Ireland at the following year's ISEF.
https://en.wikipedia.org/wiki/Sentinus
A sentroller , used in the Internet of things , is a sensor , controller or actuator , or a combination of these three. The current Internet is an Internet of people, based on people communicating with people using smartphones , tablets , laptops and computers, but this is changing: [ citation needed ] equipment and devices connected to the Internet (e.g. set-top boxes , cameras , cars ) are starting to shift the balance away from people towards things. The future is clearly moving in a direction where the number of things connected to the Internet will overwhelm the number of connected people. Predictions range up to a factor of 100 to 1 or more, and thus the Internet of people will transform into the Internet of things. The Internet of things will thereby connect to and communicate directly with other "things", rather than directly with people. Many of the new things connected to the Internet will be sentrollers, meaning they are either actuators, sensors, controllers, or combinations of these three. One simple example is a home thermostat . A thermostat is a sentroller: it senses the temperature in the home, checks whether the home or room it is monitoring is at the desired temperature, and if not, turns on the heater or the air conditioner . A thermostat is therefore both a sensor and a controller, i.e. a sentroller. No human interaction with the thermostat or the cloud is necessary for the thermostat to keep the heat in the home at the right level. In practice, sentrollers absorb and/or produce very limited amounts of information, but connectivity to the Internet is essential for their operation. The Internet of things will host the applications that know how to interpret the information provided by sentrollers and decide what action to take. The smarts of the smart home , smart energy , smart buildings , etc. actually reside in the cloud.
[ dubious – discuss ] [ citation needed ] The sentrollers are the end nodes that will become the majority population of this type of Internet of things.
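The thermostat example above, a sensor and controller in one device, can be sketched as a minimal sense-check-actuate loop. The class, parameter names, and threshold logic below are illustrative assumptions for this sketch, not part of any real product API:

```python
# Illustrative sketch of a thermostat as a "sentroller": it combines a
# sensor (a temperature reading passed in) with a controller (deciding
# whether to switch the heater or air conditioner). All names hypothetical.

class Thermostat:
    def __init__(self, setpoint, tolerance=0.5):
        self.setpoint = setpoint      # desired temperature
        self.tolerance = tolerance    # dead band to avoid rapid switching
        self.heater_on = False
        self.cooler_on = False

    def step(self, measured_temp):
        """One sense-check-actuate cycle: sense, compare, act."""
        if measured_temp < self.setpoint - self.tolerance:
            self.heater_on, self.cooler_on = True, False
        elif measured_temp > self.setpoint + self.tolerance:
            self.heater_on, self.cooler_on = False, True
        else:
            self.heater_on = self.cooler_on = False
        return self.heater_on, self.cooler_on

t = Thermostat(setpoint=21.0)
print(t.step(19.0))  # too cold: (True, False) -> heater on
print(t.step(23.0))  # too warm: (False, True) -> cooler on
print(t.step(21.2))  # within tolerance: (False, False) -> both off
```

The dead band mirrors the point made in the text: the device needs no human or cloud interaction for its local control decision; only the interpretation of aggregated data would live in the cloud.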
https://en.wikipedia.org/wiki/Sentroller
A separable partial differential equation can be broken into a set of equations of lower dimensionality (fewer independent variables) by a method of separation of variables . It generally relies upon the problem having some special form or symmetry . In this way, the partial differential equation (PDE) can be solved by solving a set of simpler PDEs, or even ordinary differential equations (ODEs) if the problem can be broken down into one-dimensional equations. The most common form of separation of variables is simple separation of variables. A solution is obtained by assuming a solution of the form given by a product of functions of each individual coordinate. There is a special form of separation of variables called R {\displaystyle R} -separation of variables which is accomplished by writing the solution as a particular fixed function of the coordinates multiplied by a product of functions of each individual coordinate. Laplace's equation on R n {\displaystyle {\mathbb {R} }^{n}} is an example of a partial differential equation that admits solutions through R {\displaystyle R} -separation of variables; in the three-dimensional case this uses 6-sphere coordinates . (This should not be confused with the case of a separable ODE, which refers to a somewhat different class of problems that can be broken into a pair of integrals ; see separation of variables .) For example, consider the time-independent Schrödinger equation for the function ψ ( x ) {\displaystyle \psi (\mathbf {x} )} (in dimensionless units, for simplicity). (Equivalently, consider the inhomogeneous Helmholtz equation .) 
If the function V ( x ) {\displaystyle V(\mathbf {x} )} in three dimensions is of the form then it turns out that the problem can be separated into three one-dimensional ODEs for functions ψ 1 ( x 1 ) {\displaystyle \psi _{1}(x_{1})} , ψ 2 ( x 2 ) {\displaystyle \psi _{2}(x_{2})} , and ψ 3 ( x 3 ) {\displaystyle \psi _{3}(x_{3})} , and the final solution can be written as ψ ( x ) = ψ 1 ( x 1 ) ⋅ ψ 2 ( x 2 ) ⋅ ψ 3 ( x 3 ) {\displaystyle \psi (\mathbf {x} )=\psi _{1}(x_{1})\cdot \psi _{2}(x_{2})\cdot \psi _{3}(x_{3})} . (More generally, the separable cases of the Schrödinger equation were enumerated by Eisenhart in 1948. [ 1 ] )
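The separation step described above can be sketched explicitly (this is the standard textbook derivation, not taken from the cited source): substituting the product ansatz into the dimensionless Schrödinger equation and dividing by ψ splits it into three one-dimensional problems.

```latex
% Time-independent Schrödinger equation (dimensionless):
%   -\nabla^2 \psi + V(\mathbf{x})\,\psi = E\,\psi ,
% with an additively separable potential
%   V(\mathbf{x}) = V_1(x_1) + V_2(x_2) + V_3(x_3).
% Substituting \psi = \psi_1(x_1)\,\psi_2(x_2)\,\psi_3(x_3) and dividing by \psi:
\sum_{i=1}^{3}\left[-\frac{\psi_i''(x_i)}{\psi_i(x_i)} + V_i(x_i)\right] = E .
% Each bracketed term depends on one variable only, so each must equal a
% constant E_i with E_1 + E_2 + E_3 = E, giving three one-dimensional ODEs:
-\psi_i''(x_i) + V_i(x_i)\,\psi_i(x_i) = E_i\,\psi_i(x_i), \qquad i = 1,2,3 .
```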
https://en.wikipedia.org/wiki/Separable_partial_differential_equation
In mathematics , a polynomial P ( X ) over a given field K is separable if its roots are distinct in an algebraic closure of K , that is, the number of distinct roots is equal to the degree of the polynomial. [ 1 ] This concept is closely related to that of a square-free polynomial . If K is a perfect field then the two concepts coincide. In general, P ( X ) is separable if and only if it is square-free over any field that contains K , which holds if and only if P ( X ) is coprime to its formal derivative D P ( X ). In an older definition, P ( X ) was considered separable if each of its irreducible factors in K [ X ] is separable in the modern definition. [ 2 ] In this definition, separability depended on the field K ; for example, any polynomial over a perfect field would have been considered separable. This definition, although it can be convenient for Galois theory , is no longer in use. [ 3 ] Separable polynomials are used to define separable extensions : A field extension K ⊂ L is a separable extension if and only if for every α in L which is algebraic over K , the minimal polynomial of α over K is a separable polynomial. Inseparable extensions (that is, extensions which are not separable) may occur only in positive characteristic . The criterion above leads to the quick conclusion that if P is irreducible and not separable, then D P ( X ) = 0. Thus we must have P ( X ) = Q ( X p ) for some polynomial Q over K , where the prime number p is the characteristic. With this clue we can construct an example: P ( X ) = X p − T , with K the field of rational functions in the indeterminate T over the finite field with p elements. Here one can prove directly that P ( X ) is irreducible and not separable. This is actually a typical example of why inseparability matters; in geometric terms P represents the mapping on the projective line over the finite field, taking co-ordinates to their p th power. Such mappings are fundamental to the algebraic geometry of finite fields.
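The derivative criterion stated above, that P is separable if and only if it is coprime to its formal derivative, can be checked mechanically. A small sketch using SymPy (the choice of library is an assumption of this example, not something the article mentions):

```python
# Check separability via the criterion gcd(P, P') = 1, using SymPy.
from sympy import symbols, gcd, diff

x = symbols('x')

# x^3 - 1 has three distinct roots (the cube roots of unity): separable.
P = x**3 - 1
print(gcd(P, diff(P, x)))   # 1 -> coprime to its derivative, so separable

# (x - 1)^2 = x^2 - 2x + 1 has a repeated root: not separable.
Q = x**2 - 2*x + 1
print(gcd(Q, diff(Q, x)))   # x - 1 -> nontrivial common factor
```

Over a field of characteristic p the same computation exhibits the failure described in the text: the derivative of X^p − T is identically zero, so the gcd is the whole polynomial.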
Put another way, there are coverings in that setting that cannot be 'seen' by Galois theory. (See Radical morphism for a higher-level discussion.) If L is the field extension K ( T 1/ p ) , in other words the splitting field of P , then L / K is an example of a purely inseparable field extension . It is of degree p , but has no automorphism fixing K , other than the identity, because T 1/ p is the unique root of P . This shows directly that Galois theory must here break down. A field such that there are no such extensions is called perfect . That finite fields are perfect follows a posteriori from their known structure. One can show that the tensor product of fields of L with itself over K for this example has nilpotent elements that are non-zero. This is another manifestation of inseparability: that is, the tensor product operation on fields need not produce a ring that is a product of fields (so, not a commutative semisimple ring ). If P ( x ) is separable, and its roots form a group (a subgroup of the field K ), then P ( x ) is an additive polynomial . Separable polynomials occur frequently in Galois theory . For example, let P be an irreducible polynomial with integer coefficients and p be a prime number which does not divide the leading coefficient of P . Let Q be the polynomial over the finite field with p elements, which is obtained by reducing modulo p the coefficients of P . Then, if Q is separable (which is the case for all but finitely many p ), the degrees of the irreducible factors of Q are the lengths of the cycles of some permutation of the Galois group of P . Another example: P being as above, a resolvent R for a group G is a polynomial whose coefficients are polynomials in the coefficients of P , which provides some information on the Galois group of P . More precisely, if R is separable and has a rational root then the Galois group of P is contained in G .
For example, if D is the discriminant of P then X 2 − D {\displaystyle X^{2}-D} is a resolvent for the alternating group . This resolvent is always separable (assuming the characteristic is not 2) if P is irreducible; in general, however, resolvents are not always separable.
https://en.wikipedia.org/wiki/Separable_polynomial
Separase , also known as separin , is a cysteine protease responsible for triggering anaphase by hydrolysing cohesin , which is the protein responsible for binding sister chromatids during the early stage of anaphase . [ 5 ] In humans, separin is encoded by the ESPL1 gene . [ 6 ] In S. cerevisiae , separase is encoded by the esp1 gene. Esp1 was discovered by Kim Nasmyth and coworkers in 1998. [ 7 ] [ 8 ] In 2021, structures of human separase were determined in complex with either securin or CDK1-cyclin B1-CKS1 using cryo-EM by scientists of the University of Geneva. [ 9 ] Stable cohesion between sister chromatids before anaphase and their timely separation during anaphase are critical for cell division and chromosome inheritance. In vertebrates, sister chromatid cohesion is released in 2 steps via distinct mechanisms. The first step involves phosphorylation of STAG1 or STAG2 in the cohesin complex. The second step involves cleavage of the cohesin subunit SCC1 ( RAD21 ) by separase, which initiates the final separation of sister chromatids. [ 11 ] In S. cerevisiae , Esp1 is coded by ESP1 and is regulated by the securin Pds1 . The two sister chromatids are initially bound together by the cohesin complex until the beginning of anaphase, at which point the mitotic spindle pulls the two sister chromatids apart, leaving each of the two daughter cells with an equivalent number of sister chromatids. The proteins that bind the two sister chromatids, disallowing any premature sister chromatid separation, are a part of the cohesin protein family. One of these cohesin proteins crucial for sister chromatid cohesion is Scc1. Esp1 is a separase protein that cleaves the cohesin subunit Scc1 (RAD21), allowing sister chromatids to separate at the onset of anaphase during mitosis . 
[ 8 ] When the cell is not dividing, separase is prevented from cleaving cohesin either through its association with securin or upon phosphorylation of a specific serine residue in separase by the cyclin-CDK complex. Separase phosphorylation leads to a stable association with CDK1-cyclin B1. Securin or CDK1-cyclin B binding is mutually exclusive. In both complexes, separase is inhibited by pseudosubstrate motifs that block substrate binding at the catalytic site and at nearby docking sites. However, while securin contains its own pseudosubstrate motifs to occlude substrate binding, the CDK1–cyclin B complex inhibits separase by rigidifying pseudosubstrate motifs from flexible loops in separase itself, leading to an auto-inhibition of the proteolytic activity of separase. [ 9 ] Regulation through these distinct binding partners provides two layers of negative regulation to prevent inappropriate cohesin cleavage. Note that separase cannot function without initially forming the securin-separase complex in most organisms. This is because securin helps properly fold separase into the functional conformation. However, yeast does not appear to require securin to form functional separase, because anaphase occurs in yeast even with a securin deletion. [ 10 ] On the signal for anaphase, securin is ubiquitinated by the APC -Cdc20 complex and hydrolysed, releasing separase for dephosphorylation. Active separase can then cleave Scc1 to release the sister chromatids. Separase initiates the activation of Cdc14 in early anaphase [ 13 ] and Cdc14 has been found to dephosphorylate securin, thereby increasing its efficiency as a substrate for degradation. The presence of this positive feedback loop offers a potential mechanism for giving anaphase a more switch-like behavior. [ 12 ] This article incorporates text from the United States National Library of Medicine , which is in the public domain .
https://en.wikipedia.org/wiki/Separase
In topology and related branches of mathematics , separated sets are pairs of subsets of a given topological space that are related to each other in a certain way: roughly speaking, neither overlapping nor touching. The notion of when two sets are separated or not is important both to the notion of connected spaces (and their connected components) as well as to the separation axioms for topological spaces. Separated sets should not be confused with separated spaces (defined below), which are somewhat related but different. Separable spaces are again a completely different topological concept. There are various ways in which two subsets A {\displaystyle A} and B {\displaystyle B} of a topological space X {\displaystyle X} can be considered to be separated. A most basic way in which two sets can be separated is if they are disjoint , that is, if their intersection is the empty set . This property has nothing to do with topology as such, but only set theory . Each of the following properties is stricter than disjointness, incorporating some topological information. The properties below are presented in increasing order of specificity, each being a stronger notion than the preceding one. The sets A {\displaystyle A} and B {\displaystyle B} are separated in X {\displaystyle X} if each is disjoint from the other's closure : A ∩ B ¯ = ∅ = A ¯ ∩ B . {\displaystyle A\cap {\bar {B}}=\varnothing ={\bar {A}}\cap B.} This property is known as the Hausdorff−Lennes Separation Condition . [ 1 ] Since every set is contained in its closure, two separated sets automatically must be disjoint. The closures themselves do not have to be disjoint from each other; for example, the intervals [ 0 , 1 ) {\displaystyle [0,1)} and ( 1 , 2 ] {\displaystyle (1,2]} are separated in the real line R , {\displaystyle \mathbb {R} ,} even though the point 1 belongs to both of their closures. 
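The interval example above can be verified mechanically. A minimal sketch (the tuple representation and helper names are ad hoc assumptions for this illustration) that checks the Hausdorff−Lennes condition for real intervals:

```python
# Check whether two real intervals are separated: each must be disjoint
# from the closure of the other. An interval is (lo, hi, lo_closed, hi_closed).

def meets_closure(I, J):
    """Does interval I intersect the closure [J_lo, J_hi] of interval J?"""
    lo1, hi1, lc1, hc1 = I
    lo2, hi2 = J[0], J[1]      # closure of J is closed at both endpoints
    if hi1 < lo2 or hi2 < lo1:
        return False           # the intervals do not even touch
    if hi1 == lo2:
        return hc1             # touch at hi1: nonempty iff I contains hi1
    if hi2 == lo1:
        return lc1             # touch at lo1: nonempty iff I contains lo1
    return True

def separated(I, J):
    return not meets_closure(I, J) and not meets_closure(J, I)

A = (0, 1, True, False)        # [0, 1)
B = (1, 2, False, True)        # (1, 2]
print(separated(A, B))         # True: separated although both closures contain 1

C = (0, 1, True, True)         # [0, 1] contains 1, which lies in cl(B) = [1, 2]
print(separated(C, B))         # False
```

The first check reproduces the text's example exactly: [0, 1) and (1, 2] are separated even though the point 1 belongs to both closures.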
A more general example is that in any metric space , two open balls B r ( p ) = { x ∈ X : d ( p , x ) < r } {\displaystyle B_{r}(p)=\{x\in X:d(p,x)<r\}} and B s ( q ) = { x ∈ X : d ( q , x ) < s } {\displaystyle B_{s}(q)=\{x\in X:d(q,x)<s\}} are separated whenever d ( p , q ) ≥ r + s . {\displaystyle d(p,q)\geq r+s.} The property of being separated can also be expressed in terms of derived set (indicated by the prime symbol): A {\displaystyle A} and B {\displaystyle B} are separated when they are disjoint and each is disjoint from the other's derived set, that is, A ′ ∩ B = ∅ = B ′ ∩ A . {\textstyle A'\cap B=\varnothing =B'\cap A.} (As in the case of the first version of the definition, the derived sets A ′ {\displaystyle A'} and B ′ {\displaystyle B'} are not required to be disjoint from each other.) The sets A {\displaystyle A} and B {\displaystyle B} are separated by neighbourhoods if there are neighbourhoods U {\displaystyle U} of A {\displaystyle A} and V {\displaystyle V} of B {\displaystyle B} such that U {\displaystyle U} and V {\displaystyle V} are disjoint. (Sometimes you will see the requirement that U {\displaystyle U} and V {\displaystyle V} be open neighbourhoods, but this makes no difference in the end.) For the example of A = [ 0 , 1 ) {\displaystyle A=[0,1)} and B = ( 1 , 2 ] , {\displaystyle B=(1,2],} you could take U = ( − 1 , 1 ) {\displaystyle U=(-1,1)} and V = ( 1 , 3 ) . {\displaystyle V=(1,3).} Note that if any two sets are separated by neighbourhoods, then certainly they are separated. If A {\displaystyle A} and B {\displaystyle B} are open and disjoint, then they must be separated by neighbourhoods; just take U = A {\displaystyle U=A} and V = B . {\displaystyle V=B.} For this reason, separatedness is often used with closed sets (as in the normal separation axiom ). 
The sets A {\displaystyle A} and B {\displaystyle B} are separated by closed neighbourhoods if there is a closed neighbourhood U {\displaystyle U} of A {\displaystyle A} and a closed neighbourhood V {\displaystyle V} of B {\displaystyle B} such that U {\displaystyle U} and V {\displaystyle V} are disjoint. Our examples, [ 0 , 1 ) {\displaystyle [0,1)} and ( 1 , 2 ] , {\displaystyle (1,2],} are not separated by closed neighbourhoods. You could make either U {\displaystyle U} or V {\displaystyle V} closed by including the point 1 in it, but you cannot make them both closed while keeping them disjoint. Note that if any two sets are separated by closed neighbourhoods, then certainly they are separated by neighbourhoods . The sets A {\displaystyle A} and B {\displaystyle B} are separated by a continuous function if there exists a continuous function f : X → R {\displaystyle f:X\to \mathbb {R} } from the space X {\displaystyle X} to the real line R {\displaystyle \mathbb {R} } such that A ⊆ f − 1 ( 0 ) {\displaystyle A\subseteq f^{-1}(0)} and B ⊆ f − 1 ( 1 ) {\displaystyle B\subseteq f^{-1}(1)} , that is, members of A {\displaystyle A} map to 0 and members of B {\displaystyle B} map to 1. (Sometimes the unit interval [ 0 , 1 ] {\displaystyle [0,1]} is used in place of R {\displaystyle \mathbb {R} } in this definition, but this makes no difference.) In our example, [ 0 , 1 ) {\displaystyle [0,1)} and ( 1 , 2 ] {\displaystyle (1,2]} are not separated by a function, because there is no way to continuously define f {\displaystyle f} at the point 1. [ 2 ] If two sets are separated by a continuous function, then they are also separated by closed neighbourhoods ; the neighbourhoods can be given in terms of the preimage of f {\displaystyle f} as U = f − 1 [ − c , c ] {\displaystyle U=f^{-1}[-c,c]} and V = f − 1 [ 1 − c , 1 + c ] , {\displaystyle V=f^{-1}[1-c,1+c],} where c {\displaystyle c} is any positive real number less than 1 / 2. 
{\displaystyle 1/2.} The sets A {\displaystyle A} and B {\displaystyle B} are precisely separated by a continuous function if there exists a continuous function f : X → R {\displaystyle f:X\to \mathbb {R} } such that A = f − 1 ( 0 ) {\displaystyle A=f^{-1}(0)} and B = f − 1 ( 1 ) . {\displaystyle B=f^{-1}(1).} (Again, you may also see the unit interval in place of R , {\displaystyle \mathbb {R} ,} and again it makes no difference.) Note that if any two sets are precisely separated by a function, then they are separated by a function . Since { 0 } {\displaystyle \{0\}} and { 1 } {\displaystyle \{1\}} are closed in R , {\displaystyle \mathbb {R} ,} only closed sets are capable of being precisely separated by a function, but just because two sets are closed and separated by a function does not mean that they are automatically precisely separated by a function (even a different function). The separation axioms are various conditions that are sometimes imposed upon topological spaces, many of which can be described in terms of the various types of separated sets. As an example we will define the T 2 axiom, which is the condition imposed on separated spaces. Specifically, a topological space is separated if, given any two distinct points x and y , the singleton sets { x } and { y } are separated by neighbourhoods. Separated spaces are usually called Hausdorff spaces or T 2 spaces . Given a topological space X , it is sometimes useful to consider whether it is possible for a subset A to be separated from its complement . This is certainly true if A is either the empty set or the entire space X , but there may be other possibilities. A topological space X is connected if these are the only two possibilities. Conversely, if a nonempty subset A is separated from its own complement, and if the only subset of A to share this property is the empty set, then A is an open-connected component of X . 
(In the degenerate case where X is itself the empty set ∅ {\displaystyle \emptyset } , authorities differ on whether ∅ {\displaystyle \emptyset } is connected and whether ∅ {\displaystyle \emptyset } is an open-connected component of itself.) Given a topological space X , two points x and y are topologically distinguishable if there exists an open set that one point belongs to but the other point does not. If x and y are topologically distinguishable, then the singleton sets { x } and { y } must be disjoint. On the other hand, if the singletons { x } and { y } are separated, then the points x and y must be topologically distinguishable. Thus for singletons, topological distinguishability is a condition in between disjointness and separatedness.
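For metric spaces, separation by a continuous function, as discussed above, can always be achieved for two disjoint closed sets via the standard distance-quotient construction f(x) = d(x, A) / (d(x, A) + d(x, B)), which is continuous, 0 on A, and 1 on B. A small numeric sketch with two closed intervals on the real line (the particular endpoints are arbitrary choices for illustration):

```python
# Urysohn-style separating function for two disjoint closed intervals on R:
# f(x) = d(x, A) / (d(x, A) + d(x, B)) is continuous, 0 on A and 1 on B.

def dist_to_interval(x, lo, hi):
    """Distance from the point x to the closed interval [lo, hi]."""
    return max(lo - x, x - hi, 0.0)

def separating_function(x, A=(0.0, 1.0), B=(2.0, 3.0)):
    dA = dist_to_interval(x, *A)
    dB = dist_to_interval(x, *B)
    return dA / (dA + dB)      # denominator never 0: A and B are disjoint and closed

print(separating_function(0.5))   # 0.0 (inside A)
print(separating_function(2.5))   # 1.0 (inside B)
print(separating_function(1.5))   # 0.5 (halfway between the two sets)
```

Since the preimages f⁻¹(0) and f⁻¹(1) contain A and B respectively, this also yields the closed separating neighbourhoods described in the text, via f⁻¹[−c, c] and f⁻¹[1 − c, 1 + c].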
https://en.wikipedia.org/wiki/Separated_sets
A separating arch is an arch which, as part of an arcade , separates the nave of a church from the side aisle, [ 1 ] or an arch between two adjacent side aisles. [ 2 ] It is found mainly in hall churches . [ 3 ] A separating arch can be replaced structurally, or emphasised decoratively, by a vault rib [ de ] . [ 4 ] In this case one speaks of a Scheidrippe rather than a Scheidbogen (the German term for a separating arch). [ 5 ] Separating arches delimit a bay in the longitudinal direction. [ 6 ] A pair of transverse arches and a pair of separating arches result in a vault . [ 7 ] Together with the transverse arches and the pillars or columns at the four corners, the separating arches form a vault field as the basic element of a vault. [ 8 ] A wall supported by a separating arch is called a separating wall. [ 9 ]
https://en.wikipedia.org/wiki/Separating_arch
A separation barrier or separation wall is a barrier , wall or fence , constructed to limit the movement of people across a certain line or border , or to separate peoples or cultures. [ 1 ] A separation barrier that runs along an internationally recognized border is known as a border barrier . [ citation needed ] David Henley opines in The Guardian that separation barriers are being built at a record rate around the world along borders and do not only surround dictatorships or pariah states. In 2014, The Washington Post listed 14 notable separation walls as of 2011, indicating that the total concurrent number of walls and barriers which separate countries and territories is 45. [ 2 ] The term "separation barrier" has been applied to structures erected in Belfast , Homs , the West Bank , São Paulo , Cyprus , and along the Greece-Turkey border and the Mexico-United States border . In 2016, Julia Sonnevend listed in her book Stories Without Borders: The Berlin Wall and the Making of a Global Iconic Event the concurrent separation barriers of Sharm el-Sheikh (Egypt), the Limbang border ( Brunei ), the Kazakh-Uzbekistan barrier, the Indian border fence with Bangladesh , the United States separation barrier with Mexico, the Saudi Arabian border fence with Iraq and Hungary's fence with Serbia. [ 3 ] Several erected separation barriers are no longer active or in place, including the Berlin Wall , the Maginot Line and some barrier sections in Jerusalem. [ 4 ] One of the longest separation barriers in the world, [ 5 ] the dingo fence is a pest-exclusion fence to keep dingoes out of the relatively fertile south-east part of the continent. In addition, it was initially established so that landowners could lawfully keep Australian Aboriginal people off the land . [ 6 ] Although the fence has helped reduce losses of sheep to predators , this has been countered by holes in the fence, found in the 1990s, through which dingo offspring have passed.
Laws were enacted to protect the fence, with jail terms of three months for leaving a crossing gate open and six months for damage or removal of part of the fence. Introduced in 1946, these penalties still apply today. [ 7 ] In 2009, as part of the Q150 celebrations, the dingo fence was announced as one of the Q150 Icons of Queensland for its role as an iconic "innovation and invention". [ 8 ] The fence staff consists of 23 employees, including two-person teams that patrol a 300 km (186 mi) section of the fence twice every week. [ 9 ] Communities in the Czech Republic , Romania and Slovakia have long built Roma walls in urban environments when a Roma group is in close proximity to the rest of the population. [ 10 ] Since the Turkish invasion of Cyprus in 1974 , Turkey has constructed and maintained what economics professor Rongxing Guo has called a "separation barrier" of 300 kilometres (190 mi) along the 1974 Green Line (or ceasefire line) dividing the island of Cyprus into two parts, with a United Nations buffer zone between them. [ 11 ] The Egypt–Gaza barrier is often referred to as a "separation barrier" in the media [ 12 ] or as a "separating wall". [ 13 ] [ 14 ] [ 15 ] In December 2009, Egypt started the construction of the Egypt–Gaza barrier along the border with Gaza , consisting of a steel wall. Egypt's foreign minister said that the wall, being built along the country's border with the Gaza Strip , will defend it "against threats to national security". [ 16 ] Though the construction paused a number of times, the wall is nearly complete. According to Julia Sonnevend, the anti-terrorist barrier around the Sharm el-Sheikh resort in Egypt is in fact a separation barrier. [ 3 ] The Line of Control (LoC) refers to the military control line between the Indian and Pakistani controlled parts of the former princely state of Kashmir and Jammu , a line which, to this day, does not constitute a legally recognized international boundary but is the de facto border.
Originally known as the Cease-fire Line , it was redesignated as the "Line of Control" following the Simla Agreement , which was signed on 3 July 1972. The part of the former princely state that is under Indian control was known as the state of Jammu and Kashmir , which was split into two separate Union Territories . The two parts of the former princely state that are under Pakistani control are known as Gilgit–Baltistan and Azad Kashmir (AJK). Its northernmost point is known as NJ9842 . This territorial division, which still exists to this day, severed many villages and separated family members from each other. [ 17 ] [ 18 ] In 2003, India initiated the construction of a separation fence between the Indian- and Pakistani-controlled areas, based on the 1972 cease-fire line. [ 19 ] In December 2013, it was revealed that India planned to construct a separation wall in the Himalayan area in Kashmir. [ 20 ] The wall is intended to cover 179 km. Other sections of India's borders also have fences or walls. Israel began building the Israeli West Bank barrier in 2002, in order to protect civilians from Palestinian terrorism such as the suicide bombing attacks which increased significantly during the Second Intifada . Barrier opponents claim it seeks to annex Palestinian land under the guise of security and undermines peace negotiations by unilaterally establishing new borders. When completed, it will be a 700-kilometre-long network of high walls, electronic fences, gates and trenches. It is a controversial barrier because much of it is built outside the 1949 Armistice Line ( Green Line ), de facto annexing potentially 10 percent of Palestinian land, according to the United Nations Office for the Coordination of Humanitarian Affairs . It cuts far into the West Bank and encompasses Israel's largest settlement blocs, containing hundreds of thousands of settlers. 
In June 2004, the Israeli Supreme Court held that building the wall on West Bank Palestinian land is in itself legal, but it ordered some changes to the original route, which had separated 35,000 Palestinian farmers from their lands and crops. The Israeli finance minister ( Benjamin Netanyahu ) replied that it was disputed land, not Palestinian, and that its final status would be resolved in political negotiation. [ 21 ] In July 2004, the International Court of Justice at The Hague , in an advisory opinion, declared the barriers illegal under international law and called on Israel to dismantle the walls, return confiscated land and make reparations for damages. [ 22 ] [ 23 ] In spite of all this, the number of Arab terrorist suicide bombings continued to decrease with the gradual completion of segments of the Security Barrier, as the Israeli authorities had initially stated it would. [ 24 ] [ 25 ] Israel refers to the land between the 1949 lines and the separation barrier as the Seam Zone , including all of East Jerusalem . In 2003, the military declared that only Israeli citizens and Palestinians with permits are allowed to be inside it; Palestinians have found it increasingly difficult to get permits unless they own land in the zone. [ 26 ] [ 27 ] The separation barrier cuts off East Jerusalem and some settlement blocs from the West Bank, even as Israelis and Arabs build structures and communities in eastern Jerusalem. [ 28 ] Palestinians in the West Bank, including East Jerusalem, have continued to protest the separation barrier. [ 29 ] The existing barrier cuts off access to the Jordan River for Palestinian farmers in the West Bank. [ 30 ] Due to international condemnation after the International Court ruling, Israel did not build an even stronger barrier, instead instituting permit-based access control. [ 31 ] It has been opined that this change was made to allow land to be annexed. 
[ 32 ] [ better source needed ] Israeli settlement councils already have de facto control of 86 percent of the Jordan Valley and the Dead Sea [ 33 ] as the settler population steadily grows there. [ 34 ] Writer Damon DiMarco has described the Kuwait–Iraq barricade, constructed by the United Nations in 1991 after the Iraqi invasion of Kuwait was repelled, as a "separation barrier". With electrified fencing and concertina wire , it includes a 5-meter-wide trench and a high berm . It runs 180 kilometers along the border between the two nations. [ 35 ] A 2016 separation wall around the Ain al-Hilweh camp in Lebanon is intended to separate the local Palestinian-Lebanese population and Syrian refugee Palestinians from the surrounding society. [ 36 ] Renee Pirrong of The Heritage Foundation described the Malaysia–Thailand border barrier as a "separation barrier". Its purpose is to cut down on smuggling, drug trafficking, illegal immigration, crime and insurgency. [ 37 ] In 2004, Saudi Arabia began construction of a Saudi-Yemen barrier between its territory and Yemen to prevent the unauthorized movement of people and goods into and out of the Kingdom. Some have labeled it a "separation barrier". [ 38 ] In February 2004, The Guardian reported that Yemeni opposition newspapers likened the barrier to the Israeli West Bank barrier, [ 39 ] while The Independent wrote "Saudi Arabia, one of the most vocal critics in the Arab world of Israel's 'security fence' in the West Bank, is quietly emulating the Israeli example by erecting a barrier along its porous border with Yemen". [ 40 ] Saudi officials rejected the comparison, saying the barrier was built to prevent infiltration and smuggling. [ 39 ] Saudi Arabia has also built a wall on the Saudi–Iraqi border. The Syria–Turkey barrier is a wall and fence under construction along the Syria–Turkey border , aimed at preventing illegal crossings and smuggling from Syria into Turkey . 
[ 42 ] In 2017, the Syrian government accused Turkey of building a separation wall , referring to the barrier. [ 43 ] From 2017, Turkey began construction of a barrier along the Iran–Turkey border aimed at preventing illegal immigration and smuggling. [ 44 ] Over 21 miles of high walling or fencing separate Catholic and Protestant communities in Northern Ireland , with most concentrated in Belfast and Derry . The wall [ clarification needed ] was built in 1969 in order to separate the Catholic and Protestant areas in Belfast. [ 45 ] An Army Major overseeing the construction of the wall at the time said: 'This is a temporary measure ... we do not want to see another Berlin wall situation in Western Europe ... it will be gone by Christmas'. As of 2013, that wall still remained, and almost 100 additional walls and barriers complemented the original. Technically known as ' peace walls ', there are moves to remove all of them by 2023 by mutual consent. [ 46 ] The United States constructed a barrier of 3,169 kilometres (1,969 mi) on the border with Mexico to prevent unauthorized immigration into the United States and to deter smuggling of contraband. US President Trump stated that he would replace the wall with an updated Mexico–United States border wall ; some parts of the old wall have since been replaced. [ 47 ] The Detroit Wall was erected to enforce redlining as part of the policies of racial segregation in the United States . Morocco has constructed a 2,700 km (1,700 mi) long sand wall cutting through the length of Western Sahara . [ 48 ] [ 2 ] Minefields and watchtowers serve to separate the Moroccan-controlled zone from the sparsely populated Free Zone . The Berlin Wall was a barrier that divided Berlin from 1961 to 1989, [ 49 ] constructed by the German Democratic Republic (GDR, East Germany ) starting on 13 August 1961. It completely cut off (by land) West Berlin from surrounding East Germany and from East Berlin until it was opened in November 1989. 
[ 50 ] Its demolition officially began on 13 June 1990 and was completed in 1992. [ 51 ] The barrier included guard towers placed along large concrete walls, [ 52 ] which circumscribed a wide area (later known as the "death strip") that contained anti-vehicle trenches, " fakir beds " and other defenses. The Eastern Bloc claimed that the wall was erected to protect its population from fascist elements conspiring to prevent the "will of the people" in building a socialist state in East Germany. In practice, the Wall served to prevent the massive emigration and defection that marked East Germany and the communist Eastern Bloc during the post-World War II period. The Berlin Wall was officially referred to as the " Anti-Fascist Protection Rampart" ( German : Antifaschistischer Schutzwall ) by GDR authorities, implying that the NATO countries and West Germany in particular were " fascists ". [ 53 ] The West Berlin city government sometimes referred to it as the " Wall of Shame "—a term coined by mayor Willy Brandt —while condemning the Wall's restriction on freedom of movement . Along with the separate and much longer Inner German border (IGB), which demarcated the border between East and West Germany , it came to symbolize the " Iron Curtain " that separated Western Europe and the Eastern Bloc during the Cold War . Before the Wall's erection, 3.5 million East Germans circumvented Eastern Bloc emigration restrictions and defected from the GDR, many by crossing over the border from East Berlin into West Berlin, from where they could then travel to West Germany and other Western European countries. Between 1961 and 1989, the wall prevented almost all such emigration. [ 54 ] During this period, around 5,000 people attempted to escape over the wall, with an estimated death toll ranging from 136 [ 55 ] to more than 200 [ 56 ] in and around Berlin. 
In 1989, a series of radical political changes occurred in the Eastern Bloc , associated with the liberalization of the Eastern Bloc's authoritarian systems and the erosion of political power in the pro- Soviet governments in nearby Poland and Hungary . After several weeks of civil unrest, the East German government announced on 9 November 1989 that all GDR citizens could visit West Germany and West Berlin. Crowds of East Germans crossed and climbed onto the wall, joined by West Germans on the other side in a celebratory atmosphere. Over the next few weeks, euphoric people and souvenir hunters chipped away parts of the wall; the governments later used industrial equipment to remove most of what was left. Contrary to popular belief, the wall's actual demolition did not begin until the summer of 1990 and was not completed until 1992. [ 49 ] The fall of the Berlin Wall paved the way for German reunification , which was formally concluded on 3 October 1990.
https://en.wikipedia.org/wiki/Separation_barrier
In nuclear physics , separation energy is the energy needed to remove one nucleon (or other specified particle or particles) from an atomic nucleus . The separation energy is different for each nuclide and particle to be removed. Values are stated as "neutron separation energy", "two-neutron separation energy", "proton separation energy", "deuteron separation energy", "alpha separation energy", and so on. The lowest separation energy among stable nuclides is 1.67 MeV, to remove a neutron from beryllium-9. The energy can be added to the nucleus by an incident high-energy gamma ray . If the energy of the incident photon exceeds the separation energy, a photodisintegration might occur. Energy in excess of the threshold value becomes kinetic energy of the ejected particle. By contrast, nuclear binding energy is the energy needed to completely disassemble a nucleus, or the energy released when a nucleus is assembled from nucleons. It is the sum of multiple separation energies, which should add to the same total regardless of the order of assembly or disassembly. The electron separation energy (electron binding energy), i.e. the energy required to remove one electron from a neutral atom, molecule, or cation, is called the ionization energy . Such electron removal underlies photoionization , photodissociation , the photoelectric effect , photovoltaics , etc. Bond-dissociation energy is the energy required to break one bond of a molecule or ion, usually separating an atom or atoms.
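The separation energy can be computed from tabulated atomic masses via the mass difference. The following is a minimal sketch in Python; the mass values are illustrative figures taken from standard atomic mass tables, not from this article:

```python
# Neutron separation energy: S_n(A, Z) = [m(A-1, Z) + m_n - m(A, Z)] * c^2
# Atomic masses in unified atomic mass units (u); 1 u = 931.494 MeV/c^2.
# Mass values below are assumed from standard mass tables (illustrative).
U_TO_MEV = 931.494

m_be9 = 9.0121831   # beryllium-9
m_be8 = 8.0053051   # beryllium-8
m_n   = 1.0086649   # free neutron

s_n = (m_be8 + m_n - m_be9) * U_TO_MEV
print(f"S_n(Be-9) = {s_n:.2f} MeV")   # roughly 1.67 MeV, the lowest among stable nuclides

# Photodisintegration: a photon above the threshold can eject the neutron,
# and the excess energy appears as kinetic energy of the ejected particle.
e_gamma = 2.0                          # MeV, hypothetical incident photon energy
kinetic_excess = e_gamma - s_n
print(f"Excess kinetic energy = {kinetic_excess:.2f} MeV")
```

The same mass-difference bookkeeping, summed over successive removals, reproduces the total nuclear binding energy regardless of the order in which nucleons are removed.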
https://en.wikipedia.org/wiki/Separation_energy
Separation of content and presentation (or separation of content and style ) is the separation of concerns design principle as applied to the authoring and presentation of content. Under this principle, visual and design aspects (presentation and style) are separated from the core material and structure (content) of a document. [ 1 ] [ 2 ] [ 3 ] A typical analogy used to explain this principle is the distinction between the human skeleton (as the structural component) and human flesh (as the visual component), which together make up the body's appearance. Common applications of this principle are seen in Web design ( HTML vs. CSS ) [ 4 ] [ 5 ] and document typesetting ( LaTeX body vs. its preamble). This principle is not a rigid guideline, but serves more as a best practice for keeping appearance and structure separate. In many cases, the design and development aspects of a project are performed by different people, so keeping both aspects separated ensures both initial production accountability and later maintenance simplification, as in the don't repeat yourself (DRY) principle. LaTeX is a document markup language that focuses primarily on the content and structure of a document. When a document is prepared using the LaTeX system, the source code of the document can be divided into two parts: the document body and the preamble (and the style sheets). The document body can be likened to the body of an HTML document, where one specifies the content and the structure of the document, whereas the preamble (and the style sheets) can be likened to the CSS portion of an HTML document, where the formatting, document specifications and other visual attributes are specified. Under this methodology, academic writings and publications can be structured, styled and typeset with minimal effort by their creators. It also spares end users, who are usually not trained as designers, from alternating between tweaking the formatting and working on the document itself. 
Similar to the case with HTML and CSS, the separation between content and style also allows a document to be quickly reformatted for different purposes, or a style to be re-purposed across multiple documents as well. [ 6 ]
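The preamble/body split described above can be illustrated with a minimal LaTeX document. This is a generic sketch; the package and macro choices are illustrative, not prescribed by any particular style:

```latex
% --- Preamble: presentation and style decisions live here ---
\documentclass[12pt]{article}
\usepackage{xcolor}
% How a "key term" is rendered is a style decision, kept out of the body:
\newcommand{\keyterm}[1]{\textbf{\textcolor{blue}{#1}}}

% --- Body: content and structure only ---
\begin{document}
\section{Introduction}
A \keyterm{separation of concerns} keeps this section's wording
independent of how key terms happen to be rendered.
\end{document}
```

Changing the \keyterm definition in the preamble restyles every occurrence without touching the body, just as editing one CSS rule restyles an entire HTML page.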
https://en.wikipedia.org/wiki/Separation_of_content_and_presentation
A separation process is a method that converts a mixture or a solution of chemical substances into two or more distinct product mixtures, [ 1 ] that is, a scientific process of separating two or more substances in order to obtain purity. At least one product mixture from the separation is enriched in one or more of the source mixture's constituents. In some cases, a separation may fully divide the mixture into pure constituents. Separations exploit differences in chemical properties or physical properties (such as size, shape, charge, mass, density, or chemical affinity) between the constituents of a mixture. Processes are often classified according to the particular properties they exploit to achieve separation. If no single difference can be used to accomplish the desired separation, multiple operations can often be combined to achieve the desired end. Different processes are also sometimes categorized by their separating agent, i.e. mass separating agents or energy separating agents . [ 2 ] Mass separating agents operate by the addition of material to induce separation, such as the addition of an anti-solvent to induce precipitation. In contrast, energy-based separations cause separation by heating or cooling, as in distillation. With a few exceptions, elements or compounds exist in nature in an impure state. Often these raw materials must go through a separation before they can be put to productive use, making separation techniques essential for the modern industrial economy. The purpose of separation varies with the application. Separations may be performed on a small scale, as in a laboratory for analytical purposes, or on a large scale, as in a chemical plant . Some types of separation require complete purification of a certain component. An example is the production of aluminum metal from bauxite ore through electrolytic refining . In contrast, an incomplete separation process may specify an output to consist of a mixture instead of a single pure component. 
A good example of an incomplete separation technique is oil refining. Crude oil occurs naturally as a mixture of various hydrocarbons and impurities. The refining process splits this mixture into other, more valuable mixtures such as natural gas , gasoline and chemical feedstocks , none of which are pure substances, but each of which must be separated from the raw crude. [ citation needed ] In both complete separation and incomplete separation, a series or cascade of separations may be necessary to obtain the desired end products. In the case of oil refining, crude is subjected to a long series of individual distillation steps, each of which produces a different product or intermediate . [ citation needed ]
https://en.wikipedia.org/wiki/Separation_process
A separator is a permeable membrane placed between a battery's anode and cathode . The main function of a separator is to keep the two electrodes apart to prevent electrical short circuits while also allowing the transport of ionic charge carriers that are needed to close the circuit during the passage of current in an electrochemical cell . [ 1 ] Separators are critical components in liquid electrolyte batteries. A separator generally consists of a polymeric membrane forming a microporous layer. It must be chemically and electrochemically stable with regard to the electrolyte and electrode materials, and mechanically strong enough to withstand the high tension during battery construction. Separators are important to batteries because their structure and properties considerably affect battery performance, including the battery's energy and power densities, cycle life, and safety. [ 2 ] Unlike many forms of technology, polymer separators were not developed specifically for batteries. They were instead spin-offs of existing technologies, which is why most are not optimized for the systems they are used in. Even though this may seem unfavorable, most polymer separators can be mass-produced at a low cost, because they are based on existing forms of technology. [ 3 ] Yoshino and co-workers at Asahi Kasei first developed them for a prototype of secondary lithium-ion batteries (LIBs) in 1983. Initially, lithium cobalt oxide was used as the cathode and polyacetylene as the anode. Later, in 1985, it was found that using lithium cobalt oxide as the cathode and graphite as the anode produced an excellent secondary battery with enhanced stability, employing the frontier electron theory of Kenichi Fukui. [ 4 ] This enabled the development of portable devices, such as cell phones and laptops. However, before lithium-ion batteries could be mass-produced, safety concerns such as overheating and overpotential needed to be addressed. 
One key to ensuring safety was the separator between the cathode and anode. Yoshino developed a microporous polyethylene membrane separator with a “fuse” function. [ 5 ] In the case of abnormal heat generation within the battery cell, the separator provides a shutdown mechanism: the micropores close by melting and the ionic flow terminates. In 2004, a novel electroactive polymer separator with the function of overcharge protection was first proposed by Denton and coauthors. [ 6 ] This kind of separator reversibly switches between insulating and conducting states; changes in charge potential drive the switch. More recently, separators primarily provide charge transport and electrode separation. Materials include nonwoven fibers ( cotton , nylon , polyesters , glass ), polymer films ( polyethylene , polypropylene , poly ( tetrafluoroethylene ), polyvinyl chloride ), ceramic [ 7 ] and naturally occurring substances ( rubber , asbestos , wood ). Some separators employ polymeric materials with pores of less than 20 Å, generally too small for batteries. Both dry and wet processes are used for fabrication. [ 8 ] [ 9 ] Nonwovens consist of a manufactured sheet, web or mat of directionally or randomly oriented fibers. Supported liquid membranes consist of a solid and liquid phase contained within a microporous separator. Some polymer electrolytes form complexes with alkali metal salts, which produce ionic conductors that serve as solid electrolytes. Solid ion conductors can serve as both separator and electrolyte. [ 10 ] Separators can use a single layer or multiple layers/sheets of material. Polymer separators generally are made from microporous polymer membranes. Such membranes are typically fabricated from a variety of inorganic, organic and naturally occurring materials. Pore sizes are typically larger than 50-100 Å. Dry and wet processes are the most common production methods for polymeric membranes. 
The extrusion and stretching portions of these processes induce porosity and can serve as a means of mechanical strengthening. [ 11 ] Membranes synthesized by dry processes are more suitable for higher power density, given their open and uniform pore structure, while those made by wet processes offer more charge/discharge cycles because of their tortuous and interconnected pore structure. This helps to suppress the deposition of charge carriers as metallic crystals on the anode during fast or low-temperature charging. [ 12 ] The dry process involves extruding, annealing and stretching steps. The final porosity depends on the morphology of the precursor film and the specifics of each step. The extruding step is generally carried out at a temperature higher than the melting point of the polymer resin , because the resins are melted to shape them into a uniaxially-oriented tubular film, called a precursor film. The structure and orientation of the precursor film depend on the processing conditions and the resin's characteristics. In the annealing process, the precursor is annealed at a temperature slightly lower than the polymer's melting point. The purpose of this step is to improve the crystalline structure. During stretching, the annealed film is deformed along the machine direction by a cold stretch, followed by a hot stretch and then relaxation. The cold stretch creates the pore structure by stretching the film at a lower temperature with a faster strain rate. The hot stretch enlarges the pores using a higher temperature and a slower strain rate. The relaxation step reduces internal stress within the film. [ 13 ] [ 14 ] The dry process is only suitable for polymers with high crystallinity . These include, but are not limited to, semi-crystalline polyolefins , polyoxymethylene , and isotactic poly (4-methyl-1-pentene). 
One can also use blends of immiscible polymers, in which at least one polymer has a crystalline structure, such as polyethylene- polypropylene , polystyrene-polypropylene, and poly ( ethylene terephthalate ) - polypropylene blends. [ 9 ] [ 15 ] After processing, separators formed from the dry process possess a porous microstructure. While specific processing parameters (such as temperature and rolling speed) influence the final microstructure, generally these separators have elongated, slit-like pores and thin fibrils that run parallel to the machine direction. These fibrils connect larger regions of semi-crystalline polymer, which run perpendicular to the machine direction. [ 11 ] The wet process consists of mixing, heating, extruding, stretching and additive-removal steps. The polymer resins are first mixed with paraffin oil , antioxidant and other additives. The mixture is heated to produce a homogenous solution. The heated solution is pushed through a sheet die to make a gel-like film. The additives are then removed with a volatile solvent to form the microporous result. [ 16 ] This microporous film can then be stretched uniaxially (along the machine direction) or biaxially (along both the machine and transverse directions), providing further pore definition. [ 11 ] The wet process is suitable for both crystalline and amorphous polymers. Wet-process separators often use ultrahigh-molecular-weight polyethylene. The use of these polymers gives the batteries favorable mechanical properties while allowing shutdown when the cell becomes too hot. [ 17 ] When subjected to biaxial stretching, separators formed from the wet process have rounded pores. These pores are dispersed throughout an interconnected polymer matrix. [ 11 ] Specific types of polymers are ideal for the different types of synthesis. Most polymers currently used in battery separators are polyolefin-based materials with a semi-crystalline structure. 
Among them, polyethylene , polypropylene , PVC, and their blends such as polyethylene-polypropylene are widely used. Recently, graft polymers have been studied in an attempt to improve battery performance, including micro-porous poly( methyl methacrylate )-grafted [ 16 ] and siloxane-grafted polyethylene separators, which show favorable surface morphology and electrochemical properties compared to conventional polyethylene separators. In addition, polyvinylidene fluoride (PVDF) nanofiber webs can be synthesized as a separator to improve both ion conductivity and dimensional stability. [ 3 ] Another type of polymer separator, the polytriphenylamine (PTPAn)-modified separator, is an electroactive separator with reversible overcharge protection. [ 6 ] The separator is always placed between the anode and the cathode. The pores of the separator are filled with the electrolyte and packaged for use. [ 18 ] Many structural defects can form in polymer separators due to temperature changes. These structural defects can result in thicker separators. Furthermore, there can be intrinsic defects in the polymers themselves; for example, polyethylene often begins to deteriorate during polymerization, transportation, and storage. [ 30 ] Additionally, defects such as tears or holes can form during the synthesis of polymer separators. Other defects can arise from doping the polymer separator. [ 2 ] Polymer separators, like battery separators in general, act as a separator of the anode and cathode in the Li-ion battery while also enabling the movement of ions through the cell. Additionally, many polymer separators, typically multilayer polymer separators, can act as “shutdown separators”, which are able to shut down the battery if it becomes too hot during the cycling process. 
These multilayered polymer separators are generally composed of one or more polyethylene layers, which serve to shut down the battery, and at least one polypropylene layer, which acts as a form of mechanical support for the separator. [ 6 ] [ 31 ] Separators are also subjected to numerous stresses during battery assembly and battery usage. Common stresses include tensile stresses from dry/wet processes and compressive stresses from the volumetric expansion of electrodes and the forces required to ensure sufficient contact between components. Dendritic lithium growths are another common source of stress. These stresses are often applied concurrently, creating a complex stress field that separators must withstand. Additionally, standard battery operation leads to the cyclic application of these stresses. These cyclic conditions can mechanically fatigue separators, which reduces strength, leading to eventual device failure. [ 32 ] In addition to polymer separators, there are several other types of separators. Nonwovens consist of a manufactured sheet, web, or mat of directionally or randomly oriented fibers. Supported liquid membranes consist of a solid and liquid phase contained within a microporous separator. There are also polymer electrolytes, which can form complexes with different types of alkali metal salts to produce ionic conductors that serve as solid electrolytes. Another type of separator, the solid ion conductor, can serve as both separator and electrolyte in a battery. [ 10 ] Plasma technology has been used to modify a polyethylene membrane for enhanced adhesion, wettability and printability. Such treatments are usually performed by modifying only the membrane's outermost several molecular layers. This allows the surface to behave differently without modifying the properties of the remainder. In one study, the surface was modified with acrylonitrile via a plasma coating technique. 
The resulting acrylonitrile-coated membrane was named PiAn-PE. The surface characterization demonstrated that PiAn-PE's enhanced adhesion resulted from the increased polar component of surface energy. [ 33 ] The sealed rechargeable nickel-metal hydride battery offers significantly better performance and environmental friendliness than alkaline rechargeable batteries. Ni/MH, like the lithium-ion battery, provides high energy and power density with long cycle life. This technology's greatest problem is its inherently high corrosion rate in aqueous solutions. The most commonly used separators are porous insulator films of polyolefin , nylon or cellophane. Acrylic compounds can be radiation-grafted onto these separators to make them more wettable and permeable. Zhijiang Cai and co-workers developed a solid polymer membrane gel separator. This was a polymerization product of one or more monomers selected from the group of water-soluble ethylenically unsaturated amides and acids. The polymer-based gel also includes a water-swellable polymer, which acts as a reinforcing element. Ionic species are added to the solution and remain embedded in the gel after polymerization. Ni/MH batteries of bipolar design (bipolar batteries) are being developed because they offer some advantages as storage systems for electric vehicles. This solid polymer membrane gel separator could be useful for such bipolar-design applications; in particular, it can help to avoid the short circuits that occur in liquid-electrolyte systems. [ 34 ] Inorganic polymer separators have also been of interest for use in lithium-ion batteries. Inorganic particulate film/ poly(methyl methacrylate) (PMMA) /inorganic particulate film trilayer separators are prepared by dip-coating inorganic particle layers on both sides of PMMA thin films. 
This inorganic trilayer membrane is believed to be an inexpensive, novel separator for application in lithium-ion batteries, owing to its increased dimensional and thermal stability. [ 35 ]
https://en.wikipedia.org/wiki/Separator_(electricity)
The term separator in oilfield terminology designates a pressure vessel used for separating well fluids produced from oil and gas wells into gaseous and liquid components. A separator for petroleum production is a large vessel designed to separate production fluids into their constituent components of oil , gas and water . A separating vessel may be referred to in the following ways: oil and gas separator , separator , stage separator , trap , knockout vessel (knockout drum, knockout trap, water knockout, or liquid knockout), flash chamber (flash vessel or flash trap), expansion separator or expansion vessel , scrubber (gas scrubber), or filter (gas filter). These separating vessels are normally used on a producing lease or platform near the wellhead, manifold, or tank battery to separate fluids produced from oil and gas wells into oil and gas or liquid and gas. An oil and gas separator generally includes a number of essential components and features. Separators work on the principle that the three components have different densities , which allows them to stratify when moving slowly, with gas on top, water on the bottom and oil in the middle. Any solids such as sand will also settle at the bottom of the separator. The functions of oil and gas separators can be divided into primary and secondary functions, which will be discussed later on. Oil and gas separators can have three general configurations: vertical , horizontal , and spherical . Vertical separators can vary in size from 10 or 12 inches in diameter and 4 to 5 feet seam to seam (S to S) up to 10 or 12 feet in diameter and 15 to 25 feet S to S. Horizontal separators may vary in size from 10 or 12 inches in diameter and 4 to 5 feet S to S up to 15 to 16 feet in diameter and 60 to 70 feet S to S. Spherical separators are usually available in sizes from 24 or 30 inches up to 66 to 72 inches in diameter. Horizontal oil and gas separators are manufactured with monotube and dual-tube shells. 
Monotube units have one cylindrical shell, and dual-tube units have two parallel cylindrical shells, one above the other. Both types of units can be used for two-phase and three-phase service. A monotube horizontal oil and gas separator is usually preferred over a dual-tube unit. The monotube unit has greater area for gas flow as well as a greater oil/gas interface area than is usually available in a dual-tube separator of comparable price. The monotube separator will usually afford a longer retention time because the larger single-tube vessel retains a larger volume of oil than the dual-tube separator. It is also easier to clean than the dual-tube unit. In cold climates, freezing will likely cause less trouble in the monotube unit because the liquid is usually in close contact with the warm stream of gas flowing through the separator. The monotube design normally has a lower silhouette than the dual-tube unit, and it is easier to stack units for multiple-stage separation on offshore platforms where space is limited. Powers et al (1990) [ 1 ] illustrated that vertical separators should be constructed such that the flow stream enters near the top and passes through a gas/liquid separating chamber, even though they are not directly competitive alternatives to horizontal separators. The three configurations of separators are available for two-phase operation and three-phase operation. In the two-phase units, gas is separated from the liquid, with the gas and liquid being discharged separately. Oil and gas separators are mechanically designed such that the liquid and gas components are separated from the hydrocarbon stream at specific temperatures and pressures, according to Arnold et al (2008). [ 2 ] In three-phase separators, well fluid is separated into gas, oil, and water, with the three fluids being discharged separately.
The gas–liquid separation section of the separator is determined by the maximum removal droplet size using the Souders–Brown equation with an appropriate K factor. The oil–water separation section is sized for a retention time provided by laboratory test data, pilot-plant operating procedure, or operating experience. Where the retention time is not available, the retention time recommended for three-phase separators in API 12J is used. The sizing methods by K factor and retention time give proper separator sizes. According to Song et al (2010), [ 3 ] engineers sometimes need further information for the design conditions of downstream equipment, i.e., liquid loading for the mist extractor, water content for the crude dehydrator/desalter, or oil content for the water treatment. Oil and gas separators can operate at pressures ranging from a high vacuum to 4,000 to 5,000 psi. Most oil and gas separators operate in the pressure range of 20 to 1,500 psi. Separators may be referred to as low pressure, medium pressure, or high pressure. Low-pressure separators usually operate at pressures ranging from 10 to 20 up to 180 to 225 psi. Medium-pressure separators usually operate at pressures ranging from 230 to 250 up to 600 to 700 psi. High-pressure separators generally operate in the wide pressure range from 750 to 1,500 psi. Oil and gas separators may be classified according to application as test separator, production separator, low-temperature separator, metering separator, elevated separator, and stage separators (first stage, second stage, etc.). Separation of oil from gas may begin as the fluid flows through the producing formation into the well bore and may progressively increase through the tubing, flow lines, and surface handling equipment. Under certain conditions, the fluid may be completely separated into liquid and gas before it reaches the oil and gas separator.
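The Souders–Brown sizing mentioned above can be sketched numerically. This is an illustrative sketch only, not a design tool: the K factor, densities, and flowrate below are assumed example values, and real sizing follows API 12J and vendor data.

```python
import math

def souders_brown_velocity(k, rho_liquid, rho_gas):
    """Maximum allowable gas velocity from the Souders-Brown equation:
    v_max = K * sqrt((rho_L - rho_G) / rho_G).  Units follow K."""
    return k * math.sqrt((rho_liquid - rho_gas) / rho_gas)

def min_vessel_diameter(gas_flow, v_max):
    """Minimum internal diameter of a vertical separator so the superficial
    gas velocity stays below v_max.  gas_flow in ft^3/s, v_max in ft/s."""
    area = gas_flow / v_max                 # required cross-sectional area
    return math.sqrt(4.0 * area / math.pi)  # D = sqrt(4A / pi)

# Assumed example values: K = 0.35 ft/s (a typical vertical-separator figure),
# oil at 55 lb/ft^3, gas at 2.0 lb/ft^3, 10 ft^3/s of gas at conditions.
v = souders_brown_velocity(0.35, 55.0, 2.0)
d = min_vessel_diameter(10.0, v)
```

A higher operating pressure raises the gas density, which lowers v_max and so requires a larger vessel for the same gas rate.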
In such cases, the separator vessel affords only an "enlargement" to permit gas to ascend to one outlet and liquid to descend to another. The difference in density of the liquid and gaseous hydrocarbons may accomplish acceptable separation in an oil and gas separator. However, in some instances, it is necessary to use mechanical devices commonly referred to as "mist extractors" to remove liquid mist from the gas before it is discharged from the separator. Also, it may be desirable or necessary to use some means to remove nonsolution gas from the oil before the oil is discharged from the separator. The physical and chemical characteristics of the oil and its conditions of pressure and temperature determine the amount of gas it will contain in solution. The rate at which the gas is liberated from a given oil is a function of change in pressure and temperature. The volume of gas that an oil and gas separator will remove from crude oil is dependent on (1) physical and chemical characteristics of the crude, (2) operating pressure, (3) operating temperature, (4) rate of throughput, (5) size and configuration of the separator, and (6) other factors. Agitation, heat, special baffling, coalescing packs, and filtering materials can assist in the removal of nonsolution gas that otherwise may be retained in the oil because of the viscosity and surface tension of the oil. Gas, being the lightest phase, is removed from the top of the drum. Oil and water are separated by a baffle at the end of the separator, which is set at a height close to the oil–water contact, allowing oil to spill over onto the other side while trapping water on the near side. The two fluids can then be piped out of the separator from their respective sides of the baffle. The produced water is then either injected back into the oil reservoir, disposed of, or treated. The bulk level (gas–liquid interface) and the oil–water interface are determined using instrumentation fixed to the vessel.
Valves on the oil and water outlets are controlled to ensure the interfaces are kept at their optimum levels for separation to occur. The separator will only achieve bulk separation. The smaller droplets of water will not settle by gravity and will remain in the oil stream. Normally the oil from the separator is routed to a coalescer to further reduce the water content. The production of water with oil continues to be a problem for engineers and the oil producers. Since 1865, when water was first coproduced with hydrocarbons, separation of valuable hydrocarbons from disposable water has challenged and frustrated the oil industry. According to Rehm et al (1983), [ 4 ] innovation over the years has led from the skim pit to installation of the stock tank, to the gunbarrel, to the freewater knockout, to the hay-packed coalescer and most recently to the Performax Matrix Plate Coalescer, an enhanced gravity settling separator. The history of water treating has for the most part been sketchy and spartan. There is little economic value to the produced water, and it represents an extra cost for the producer to arrange for its disposal. Today, oil fields produce greater quantities of water than they produce oil. [ citation needed ] Along with greater water production come emulsions and dispersions which are more difficult to treat. The separation process becomes interlocked with a myriad of contaminants as the last drop of oil is being recovered from the reservoir. In some instances it is preferable to separate and remove water from the well fluid before it flows through pressure reductions, such as those caused by chokes and valves. Such water removal may prevent difficulties that could be caused downstream by the water, such as corrosion, a chemical reaction that occurs whenever a gas or liquid chemically attacks an exposed metallic surface.
[ 5 ] Corrosion is usually accelerated by warm temperatures and likewise by the presence of acids and salts. Other factors that affect the removal of water from oil include hydrate formation and the formation of tight emulsions that may be difficult to resolve into oil and water. The water can be separated from the oil in a three-phase separator by use of chemicals and gravity separation. If the three-phase separator is not large enough to separate the water adequately, it can be separated in a free-water knockout vessel installed upstream or downstream of the separators. For an oil and gas separator to accomplish its primary functions, pressure must be maintained in the separator so that the liquid and gas can be discharged into their respective processing or gathering systems. Pressure is maintained on the separator by use of a gas backpressure valve on each separator or with one master backpressure valve that controls the pressure on a battery of two or more separators. The optimum pressure to maintain on a separator is the pressure that will result in the highest economic yield from the sale of the liquid and gaseous hydrocarbons. To maintain pressure on a separator, a liquid seal must be effected in the lower portion of the vessel. This liquid seal prevents loss of gas with the oil and requires the use of a liquid-level controller and a valve. Effective oil–gas separation is important not only to ensure that the required export quality is achieved but also to prevent problems in downstream process equipment and compressors. Once the bulk liquid has been knocked out, which can be achieved in many ways, the remaining liquid droplets are separated from the gas by a demisting device. Until recently the main technologies used for this application were reverse-flow cyclones, mesh pads and vane packs. More recently, new devices with higher gas-handling capacity have been developed, enabling potential reductions in scrubber vessel size.
There are several new concepts currently under development in which the fluids are degassed upstream of the primary separator. These systems are based on centrifugal and turbine technology and have additional advantages in that they are compact and motion insensitive, hence ideal for floating production facilities . [ 6 ] Below are some of the ways in which oil is separated from gas in separators. Natural gas is lighter than liquid hydrocarbon . Minute particles of liquid hydrocarbon that are temporarily suspended in a stream of natural gas will, by density difference or force of gravity, settle out of the stream of gas if the velocity of the gas is sufficiently slow. The larger droplets of hydrocarbon will quickly settle out of the gas, but the smaller ones will take longer. At standard conditions of pressure and temperature , the droplets of liquid hydrocarbon may have a density 400 to 1,600 times that of natural gas. However, as the operating pressure and temperature increase, the difference in density decreases. At an operating pressure of 800 psig, the liquid hydrocarbon may be only 6 to 10 times as dense as the gas. Thus, operating pressure materially affects the size of the separator and the size and type of mist extractor required to separate adequately the liquid and gas. The fact that the liquid droplets may have a density 6 to 10 times that of the gas may indicate that droplets of liquid would quickly settle out of and separate from the gas. However, this may not occur because the particles of liquid may be so small that they tend to "float" in the gas and may not settle out of the gas stream in the short period of time the gas is in the oil and gas separator. As the operating pressure on a separator increases, the density difference between the liquid and gas decreases. For this reason, it is desirable to operate oil and gas separators at as low a pressure as is consistent with other process variables, conditions, and requirements. 
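The effect of the liquid–gas density difference on gravity settling can be illustrated with Stokes' law, a standard low-Reynolds-number result that is not stated in the text above; all numbers below are assumed for illustration.

```python
def stokes_settling_velocity(d_m, rho_drop, rho_gas, mu_gas, g=9.81):
    """Terminal settling velocity (m/s) of a small liquid droplet in gas,
    from Stokes' law: v = g * d^2 * (rho_drop - rho_gas) / (18 * mu).
    Valid only for small droplets at low Reynolds number."""
    return g * d_m**2 * (rho_drop - rho_gas) / (18.0 * mu_gas)

# Assumed values: a 100-micron oil droplet (800 kg/m^3) in a dense
# high-pressure gas (80 kg/m^3) versus near-atmospheric gas (1.2 kg/m^3),
# gas viscosity 1.5e-5 Pa.s in both cases.
v_high_p = stokes_settling_velocity(100e-6, 800.0, 80.0, 1.5e-5)
v_low_p = stokes_settling_velocity(100e-6, 800.0, 1.2, 1.5e-5)
```

The droplet settles faster in the low-pressure gas, which is the quantitative form of the article's point that a shrinking density difference at high operating pressure makes gravity separation harder.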
If a flowing stream of gas containing liquid mist is impinged against a surface, the liquid mist may adhere to and coalesce on the surface. After the mist coalesces into larger droplets, the droplets will gravitate to the liquid section of the vessel. If the liquid content of the gas is high, or if the mist particles are extremely fine, several successive impingement surfaces may be required to effect satisfactory removal of the mist. When the direction of flow of a gas stream containing liquid mist is changed abruptly, inertia causes the liquid to continue in the original direction of flow. Separation of liquid mist from the gas thus can be effected because the gas will more readily assume the change of flow direction and will flow away from the liquid mist particles. The liquid thus removed may coalesce on a surface or fall to the liquid section below. Separation of liquid and gas can be effected with either a sudden increase or decrease in gas velocity. Both conditions use the difference in inertia of gas and liquid. With a decrease in velocity, the higher inertia of the liquid mist carries it forward and away from the gas. [ 7 ] The liquid may then coalesce on some surface and gravitate to the liquid section of the separator. With an increase in gas velocity, the higher inertia of the liquid causes the gas to move away from the liquid, and the liquid may fall to the liquid section of the vessel. If a gas stream carrying liquid mist flows in a circular motion at sufficiently high velocity, centrifugal force throws the liquid mist outward against the walls of the container. Here the liquid coalesces into progressively larger droplets and finally gravitates to the liquid section below. Centrifugal force is one of the most effective methods of separating liquid mist from gas.
However, according to Keplinger (1931), [ 8 ] some separator designers have pointed out a disadvantage in that a liquid with a free surface rotating as a whole will have its surface curved around its lowest point lying on the axis of rotation. The resulting false level may cause difficulty in regulating the fluid-level control on the separator. This is largely overcome by placing vertical quieting baffles, which should extend from the bottom of the separator to above the outlet. Efficiency of this type of mist extractor increases as the velocity of the gas stream increases. Thus, for a given rate of throughput, a smaller centrifugal separator will suffice. Because of higher prices for natural gas, the widespread reliance on metering of liquid hydrocarbons, and other reasons, it is important to remove all nonsolution gas from crude oil during field processing. Methods used to remove gas from crude oil in oil and gas separators are discussed below. Moderate, controlled agitation, which can be defined as movement of the crude oil with sudden force, [ 9 ] is usually helpful in removing nonsolution gas that may be mechanically locked in the oil by surface tension and oil viscosity. Agitation usually will cause the gas bubbles to coalesce and to separate from the oil in less time than would be required if agitation were not used. Heat, a form of energy that is transferred from one body to another as a result of a difference in temperature, [ 10 ] reduces surface tension and viscosity of the oil and thus assists in releasing gas that is hydraulically retained in the oil. The most effective method of heating crude oil is to pass it through a heated-water bath. A spreader plate that disperses the oil into small streams or rivulets increases the effectiveness of the heated-water bath. Upward flow of the oil through the water bath affords slight agitation, which is helpful in coalescing and separating entrained gas from the oil.
A heated-water bath is probably the most effective method of removing foam bubbles from foaming crude oil. A heated-water bath is not practical in most oil and gas separators, but heat can be added to the oil by direct or indirect fired heaters and/or heat exchangers, or heated free-water knockouts or emulsion treaters can be used to obtain a heated-water bath. Centrifugal force, which can be defined as a fictitious force, peculiar to a particle moving on a circular path, that has the same magnitude and dimensions as the force that keeps the particle on its circular path (the centripetal force ) [ 11 ] but points in the opposite direction, is effective in separating gas from oil. The heavier oil is thrown outward against the wall of the vortex retainer while the gas occupies the inner portion of the vortex. A properly shaped and sized vortex will allow the gas to ascend while the liquid flows downward to the bottom of the unit. The direction of flow in and around a separator, along with other flow instruments, is usually illustrated on the piping and instrumentation diagram (P&ID). Some of these flow instruments include the flow indicator (FI), flow transmitter (FT) and the flow controller (FC). Flow is of paramount importance in the oil and gas industry: as a major process variable, its understanding helps engineers come up with better designs and enables them to confidently carry out additional research. Mohan et al (1999) [ 12 ] carried out research into the design and development of separators for a three-phase flow system. The purpose of the study was to investigate the complex multiphase hydrodynamic flow behaviour in a three-phase oil and gas separator. A mechanistic model was developed alongside a computational fluid dynamics (CFD) simulator. These were then used to carry out detailed experimentation on the three-phase separator.
The experimental and CFD simulation results were suitably integrated with the mechanistic model. The simulation time for the experiment was 20 seconds with an oil specific gravity of 0.885, and the separator lower-part length and diameter were 4 ft and 3 inches respectively. The first set of experiments became the basis for detailed investigations and for similar simulation studies at different flow velocities and other operating conditions. As earlier stated, flow instruments that function with the separator in an oil and gas environment include the flow indicator, flow transmitter and the flow controller. Due to maintenance (which will be discussed later) or due to high usage, these flowmeters do need to be calibrated from time to time. [ 13 ] Calibration can be defined as the process of referencing signals of known, predetermined quantity to suit the range of measurements required. Calibration can also be seen from a mathematical point of view, in which the flowmeters are standardized by determining the deviation from a predetermined standard so as to ascertain the proper correction factors. In determining the deviation from the predetermined standard, the actual flowrate is usually first determined with the use of a master meter, a type of flowmeter that has been calibrated with a high degree of accuracy, or by weighing the flow so as to obtain a gravimetric reading of the mass flow. Another type of meter used is the transfer meter. However, according to Ting et al (1989), [ 14 ] transfer meters have been proven to be less accurate if the operating conditions are different from their original calibrated points. According to Yoder (2000), [ 15 ] the types of flowmeters used as master meters include turbine meters, positive displacement meters, venturi meters, and Coriolis meters.
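The deviation-and-correction-factor idea described here reduces to simple meter-factor arithmetic: the ratio of the reference quantity to the quantity reported by the meter under test, averaged over several calibration points. The readings below are hypothetical.

```python
def meter_factor(master_readings, meter_readings):
    """Average correction (meter) factor: reference quantity divided by
    the quantity indicated by the meter under test, averaged over the
    calibration points."""
    ratios = [m / t for m, t in zip(master_readings, meter_readings)]
    return sum(ratios) / len(ratios)

# Hypothetical calibration run: master-meter totals vs. test-meter totals
# at three flowrates.  A factor near 1.02 means the meter under-reads ~2%.
master = [100.0, 250.0, 500.0]
tested = [98.0, 245.1, 490.2]
mf = meter_factor(master, tested)
corrected_reading = 480.0 * mf  # apply the factor to a later field reading
```

In practice the deviation is recorded and plotted across the flow range rather than collapsed into one number, since a meter's error usually varies with flowrate.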
In the U.S., master meters are often calibrated at a flow lab that has been certified by the National Institute of Standards and Technology (NIST). NIST certification of a flowmeter lab means that its methods have been approved by NIST. Normally, this includes NIST traceability, meaning that the standards used in the flowmeter calibration process have been certified by NIST or are causally linked back to standards that have been approved by NIST. However, there is a general belief in the industry that the second method, which involves the gravimetric weighing of the amount of fluid (liquid or gas) that actually flows through the meter into or out of a container during the calibration procedure, is the most ideal method for measuring the actual amount of flow. The weighing scale used for this method also has to be traceable to NIST. [ 16 ] In ascertaining a proper correction factor, there is often no simple hardware adjustment to make the flowmeter start reading correctly. Instead, the deviation from the correct reading is recorded at a variety of flowrates. The data points are plotted, comparing the flowmeter output to the actual flowrate as determined by the NIST-traceable master meter or weigh scale. The controls required for oil and gas separators are liquid-level controllers for oil and the oil/water interface (three-phase operation) and a gas back-pressure control valve with pressure controller. Although controls are expensive and add to the cost of operating fields with separators, their installation has resulted in substantial savings in the overall operating expense, as in the case of the 70 gas wells in the Big Piney, Wyoming field cited by Fair (1968). [ 17 ] The wells with separators were located above 7,200 ft elevation, ranging upward to 9,000 ft.
Control installations were sufficiently automated that field operations around the controllers could be managed from a remote-control station at the field office using the distributed control system. All in all, this improved the efficiency of personnel and the operation of the field, with a corresponding increase in production from the area. The valves required for oil and gas separators are the oil discharge control valve, water-discharge control valve (three-phase operation), drain valves, block valves, pressure relief valves, and emergency shutdown (ESD) valves. ESD valves typically stay in the open position for months or years awaiting a command signal to operate. Little attention is paid to these valves outside of scheduled turnarounds. The pressures of continuous production often stretch these intervals even longer. This leads to build-up or corrosion on these valves that prevents them from moving. For safety-critical applications, it must be ensured that the valves operate upon demand. [ 18 ] The accessories required for oil and gas separators are pressure gauges, thermometers, pressure-reducing regulators (for control gas), level sight glasses, a safety head with rupture disk, piping, and tubing. Oil and gas separators should be installed at a safe distance from other lease equipment. Where they are installed on offshore platforms or in close proximity to other equipment, precautions should be taken to prevent injury to personnel and damage to surrounding equipment in case the separator or its controls or accessories fail. Several safety features are recommended for most oil and gas separators. Over the life of a production system, the separator is expected to process a wide range of produced fluids. With breakthrough from water flood and expanded gas-lift circulation, the produced fluid water cut and gas-oil ratio are ever changing. In many instances, the separator fluid loading may exceed the original design capacity of the vessel.
As a result, many operators find their separator no longer able to meet the required oil and water effluent standards, or experience high liquid carry-over in the gas, according to Powers et al (1990). [ 20 ] Some operational maintenance considerations are discussed below. In refineries and processing plants, it is normal practice to inspect all pressure vessels and piping periodically for corrosion and erosion. In the oil fields, this practice is not generally followed (they are inspected at a predetermined frequency, normally decided by an RBI assessment) and equipment is replaced only after actual failure. This policy may create hazardous conditions for operating personnel and surrounding equipment. It is recommended that periodic inspection schedules for all pressure equipment be established and followed to protect against undue failures. All safety relief devices should be installed as close to the vessel as possible and in such manner that the reaction force from exhausting fluids will not break off, unscrew, or otherwise dislodge the safety device. The discharge from safety devices should not endanger personnel or other equipment. Separators should be operated above the hydrate-formation temperature. Otherwise hydrates may form in the vessel and partially or completely plug it, thereby reducing the capacity of the separator. In some instances when the liquid or gas outlet is plugged or restricted, this causes the safety valve to open or the safety head to rupture. Steam coils can be installed in the liquid section of oil and gas separators to melt hydrates that may form there. This is especially appropriate on low-temperature separators. A separator handling corrosive fluid should be checked periodically to determine whether remedial work is required. Extreme cases of corrosion may require a reduction in the rated working pressure of the vessel. Periodic hydrostatic testing is recommended, especially if the fluids being handled are corrosive.
Expendable anodes can be used in separators to protect them against electrolytic corrosion. Some operators determine separator shell and head thickness with ultrasonic thickness indicators and calculate the maximum allowable working pressure from the remaining metal thickness. This should be done yearly offshore and every two to four years onshore. Sand and other solids from upstream will tend to settle out in the bottom of the separators. If allowed to accumulate, the solids reduce the volume available for oil/gas/water separation, reducing efficiency. The vessel may be taken offline and drained down and the solids removed by digging out by hand. Alternatively, water sparge pipes in the base of the separator can be used to fluidize the sand, which can then be drained from the drain valves in the base.
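The maximum-allowable-working-pressure calculation from remaining wall thickness can be sketched with Barlow's formula, P = 2SEt/D, a common screening estimate. This is only an illustration: actual vessel ratings follow the ASME pressure vessel code, and the stress value and dimensions below are assumed.

```python
def mawp_barlow(allowable_stress_psi, wall_thickness_in, outside_diameter_in,
                weld_efficiency=1.0):
    """Rough maximum allowable working pressure (psi) from Barlow's formula,
    P = 2 * S * E * t / D.  A screening estimate only; ASME code rules add
    corrosion allowance and other terms."""
    return (2.0 * allowable_stress_psi * weld_efficiency
            * wall_thickness_in / outside_diameter_in)

# Hypothetical survey: ultrasonic gauging finds 0.40 in remaining wall on a
# 48 in OD shell; allowable stress 17,500 psi is an assumed material value.
p_initial = mawp_barlow(17500.0, 0.40, 48.0)
# After further corrosion thins the wall to 0.30 in, the derated pressure:
p_derated = mawp_barlow(17500.0, 0.30, 48.0)
```

The derated figure shows why extreme corrosion can force a reduction in the rated working pressure of the vessel, as noted above.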
https://en.wikipedia.org/wiki/Separator_(oil_production)
A separatory funnel, also known as a separation funnel, separating funnel, or colloquially sep funnel, is a piece of laboratory glassware used in liquid-liquid extractions to separate ( partition ) the components of a mixture into two immiscible solvent phases of different densities. [ 1 ] Typically, one of the phases will be aqueous, and the other a lipophilic organic solvent such as ether, MTBE, dichloromethane, chloroform, or ethyl acetate. All of these solvents form a clear delineation between the two liquids. [ 2 ] The denser liquid, typically the aqueous phase unless the organic phase is halogenated, sinks to the bottom of the funnel and can be drained out through a valve away from the less dense liquid, which remains in the separatory funnel. [ 3 ] A separating funnel takes the shape of a cone with a hemispherical end. It has a stopper at the top and a stopcock (tap) at the bottom. Separating funnels used in laboratories are typically made from borosilicate glass and their taps are made from glass or PTFE. Typical sizes are between 30 mL and 3 L. In industrial chemistry they can be much larger, and for much larger volumes centrifuges are used. The sloping sides are designed to facilitate the identification of the layers. The tap-controlled outlet is designed to drain the liquid out of the funnel. On top of the funnel there is a standard taper joint which fits a ground glass or Teflon stopper. [ 4 ] To use a separating funnel, the two phases and the mixture to be separated in solution are added through the top with the stopcock at the bottom closed. The funnel is then closed and shaken gently by inverting the funnel multiple times; if the two solutions are mixed together too vigorously, emulsions will form. The funnel is then inverted and the stopcock carefully opened to release excess vapor pressure. The separating funnel is set aside to allow for the complete separation of the phases.
The top and the bottom stopcock are then opened and the lower phase is released by gravitation. The top must be opened while releasing the lower phase to allow pressure equalization between the inside of the funnel and the atmosphere. When the bottom layer has been removed, the stopcock is closed and the upper layer is poured out through the top into another container. The separating funnel relies on the concept of "like dissolves like", which describes the ability of polar solvents to dissolve polar solutes and non-polar solvents to dissolve non-polar solutes. When the separating funnel is agitated, each solute migrates to the solvent (also referred to as "phase") in which it is more soluble. The solvents normally do not form a unified solution together because they are immiscible. When the funnel is kept stationary after agitation, the liquids form distinct physical layers, with lower-density liquids above higher-density liquids. A mixture of solutes is thus separated into two physically separate solutions, each enriched in different solutes. The stopcock may be opened after the two phases separate to allow the bottom layer to escape the separating funnel. The top layer may be retained in the separating funnel for further extractions with additional batches of solvent or drained out into a separate vessel for other uses. If it is desired to retain the bottom layer in the separating funnel for further extractions, both layers are taken out separately, and then the former bottom layer is returned to the separating funnel. Each independent solution can then be extracted again with additional batches of solvent, or used for other physical or chemical processes. If the goal is to separate a soluble material from a mixture, the solution containing that desired product can sometimes simply be evaporated to leave behind the purified solute. For this reason, it is a practical benefit to use volatile solvents for extracting the desired material from the mixture.
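The benefit of repeated extractions with fresh solvent can be quantified with standard partition arithmetic, which is not stated in the text above: if the distribution coefficient is K_d = C_org / C_aq, the fraction of solute left in the aqueous phase after each pass is V_aq / (V_aq + K_d · V_org).

```python
def fraction_remaining(k_d, v_aq_mL, v_org_mL, n_extractions):
    """Fraction of solute left in the aqueous phase after n extractions
    with fresh organic solvent, given distribution coefficient
    K_d = C_org / C_aq (standard partition arithmetic)."""
    per_pass = v_aq_mL / (v_aq_mL + k_d * v_org_mL)
    return per_pass ** n_extractions

# With K_d = 4, three 25 mL extractions of a 100 mL aqueous phase leave
# (100/200)**3 = 12.5% of the solute behind, versus 25% for a single
# 75 mL extraction -- the usual argument for multiple small batches.
three_small = fraction_remaining(4.0, 100.0, 25.0, 3)
one_large = fraction_remaining(4.0, 100.0, 75.0, 1)
```

This is why procedures typically call for several small extractions rather than one large one with the same total solvent volume.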
[ 5 ] One of the drawbacks of using a separating funnel is that emulsions can form easily and can take a long time to separate once formed. They are often formed while liquids are being mixed in the separating funnel. This can occur when small droplets are suspended in an aqueous solution. If an emulsion is formed, one technique used to separate the liquids is to slowly swirl the solution in the separating funnel. If the emulsion is not separated by this process, a small amount of saturated saline solution is added (" salting out "). [ 6 ] Research is being done on alternative, more efficient techniques, mostly utilizing stir bars (stirrer bars) to decrease or even eliminate the chance of emulsification, thus decreasing the amount of waiting time. [ 7 ] The largest risk when using a separating funnel is that of pressure build-up inside it. Pressure accumulates during mixing if a gas-evolving reaction or physical change occurs. This problem can be easily handled by simply opening the stopper at the top of the funnel routinely while mixing. A more standard procedure is to turn the separating funnel upside down and open the stopcock to release the pressure as often as needed, a step known as 'venting'. The tip of the funnel should be pointed away from the body during venting. [ 8 ]
https://en.wikipedia.org/wiki/Separatory_funnel
In mathematics, a separatrix is the boundary separating two modes of behaviour in a differential equation. [ 1 ] Consider the differential equation describing the motion of a simple pendulum: θ ¨ + g ℓ sin ⁡ θ = 0 , {\displaystyle {\ddot {\theta }}+{\frac {g}{\ell }}\sin \theta =0,} where ℓ {\displaystyle \ell } denotes the length of the pendulum, g {\displaystyle g} the gravitational acceleration and θ {\displaystyle \theta } the angle between the pendulum and vertically downwards. In this system there is a conserved quantity H (the Hamiltonian ), which is given by H = θ ˙ 2 2 − g ℓ cos ⁡ θ . {\displaystyle H={\frac {{\dot {\theta }}^{2}}{2}}-{\frac {g}{\ell }}\cos \theta .} With this defined, one can plot a curve of constant H in the phase space of the system. The phase space is a graph with θ {\displaystyle \theta } along the horizontal axis and θ ˙ {\displaystyle {\dot {\theta }}} on the vertical axis. The type of resulting curve depends upon the value of H . If H < − g ℓ {\displaystyle H<-{\frac {g}{\ell }}} then no curve exists (because θ ˙ {\displaystyle {\dot {\theta }}} must be imaginary ). If − g ℓ < H < g ℓ {\displaystyle -{\frac {g}{\ell }}<H<{\frac {g}{\ell }}} then the curve will be a simple closed curve which is nearly circular for small H and becomes "eye" shaped when H approaches the upper bound. These curves correspond to the pendulum swinging periodically from side to side. If g ℓ < H {\displaystyle {\frac {g}{\ell }}<H} then the curve is open, and this corresponds to the pendulum forever swinging through complete circles. In this system the separatrix is the curve that corresponds to H = g ℓ {\displaystyle H={\frac {g}{\ell }}} . It separates — hence the name — the phase space into two distinct areas, each with a distinct type of motion.
The region inside the separatrix contains the phase space curves corresponding to the pendulum oscillating back and forth, whereas the region outside contains the curves corresponding to the pendulum continuously turning through vertical circles. In the FitzHugh–Nagumo model , when the linear nullcline pierces the cubic nullcline once each at the left, middle, and right branch, the system has a separatrix . Trajectories to the left of the separatrix converge to the left stable equilibrium, and similarly for the right. The separatrix itself is the stable manifold of the saddle point in the middle. The separatrix is clearly visible when trajectories are solved numerically backwards in time : since trajectories diverge from the separatrix when solving forwards in time, they converge to it when solving backwards in time.
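The pendulum classification above can be sketched directly in code: compute the conserved quantity H and compare it with the separatrix value g/ℓ. This is a minimal illustrative sketch; the function names and the tolerance are choices made here, not part of the original discussion.

```python
import math

def pendulum_energy(theta, theta_dot, g=9.81, length=1.0):
    """Conserved quantity H = theta_dot**2 / 2 - (g / l) * cos(theta)."""
    return 0.5 * theta_dot ** 2 - (g / length) * math.cos(theta)

def classify_motion(theta, theta_dot, g=9.81, length=1.0, tol=1e-9):
    """Classify a pendulum state by comparing H with the separatrix value g / l."""
    H = pendulum_energy(theta, theta_dot, g, length)
    separatrix = g / length
    if abs(H - separatrix) < tol:
        return "separatrix"
    # H below the separatrix value: closed curve, side-to-side oscillation.
    # H above it: open curve, full rotations.
    return "oscillation" if H < separatrix else "rotation"
```

For instance, a small displacement released at rest oscillates, a fast spin at the bottom rotates, and the state balanced exactly upside down at rest (θ = π, θ̇ = 0) lies on the separatrix.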
https://en.wikipedia.org/wiki/Separatrix_(mathematics)
Sephadex is a cross-linked dextran gel used for gel filtration . It was launched by Pharmacia in 1959, after development work by Jerker Porath and Per Flodin . [ 1 ] [ 2 ] The name is derived from se paration Pha rmacia dex tran. It is normally manufactured in a bead form and most commonly used for gel filtration columns. By varying the degree of cross-linking, the fractionation properties of the gel can be altered. These highly specialized gel filtration and chromatographic media are composed of macroscopic beads synthetically derived from the polysaccharide dextran. The organic chains are cross-linked to give a three-dimensional network having functional ionic groups attached by ether linkages to glucose units of the polysaccharide chains. Available forms include anion and cation exchangers, as well as gel filtration resins, with varying degrees of porosity; bead sizes fall in discrete ranges between 20 and 300 μm . Sephadex is also used for ion-exchange chromatography. [ 3 ] Sephadex is crosslinked with epichlorohydrin . [ 4 ] Sephadex is used to separate molecules by molecular weight. Sephadex is a faster alternative to dialysis (de-salting), requiring a low dilution factor (as little as 1.4:1), with high activity recoveries. Sephadex is also used for buffer exchange and the removal of small molecules during the preparation of large biomolecules, such as ampholytes, detergents, radioactive or fluorescent labels, and phenol (during DNA purification). A special hydroxypropylated [ 5 ] form of Sephadex resin, named Sephadex LH-20, is used for the separation and purification of small organic molecules such as steroids, terpenoids, and lipids. An example of use is the purification of cholesterol. [ 6 ] [Table omitted: fractionation ranges (Da) of globular proteins and dextrans for the exclusion chromatography grades. [ 7 ]] Sephadex ion exchangers are produced by introducing functional groups onto the cross-linked dextran matrix.
These groups are attached to glucose units in the matrix by stable ether linkages. [ 8 ]
https://en.wikipedia.org/wiki/Sephadex
Sepiapterin reductase deficiency is an inherited pediatric disorder characterized by movement problems, most commonly displayed as a pattern of involuntary sustained muscle contractions known as dystonia . Symptoms are usually present within the first year of life, but diagnosis is delayed due to physicians' lack of awareness of the condition and the specialized diagnostic procedures it requires. [ 2 ] Individuals with this disorder also show delayed development of motor skills such as sitting and crawling, and need assistance when walking. Additional symptoms include intellectual disability, excessive sleeping, mood swings, and an abnormally small head size. SR deficiency is a very rare condition. The first case was diagnosed in 2001, and since then there have been approximately 30 reported cases. At this time, the condition seems to be treatable, but the lack of overall awareness and the need for a series of atypical diagnostic procedures pose a dilemma. [ 3 ] Patients also experience oculogyric crises , which usually occur in the later half of the day; during these episodes patients undergo extreme agitation and irritability along with uncontrolled head and neck movements. Apart from the aforementioned symptoms, patients can also display parkinsonism , sleep disturbances, small head size (microcephaly), behavioral abnormalities, weakness, drooling, and gastrointestinal symptoms. [ 4 ] This disorder occurs through a mutation in the SPR gene, which encodes the sepiapterin reductase enzyme. The enzyme catalyzes the last step in the production of tetrahydrobiopterin , better known as BH 4 . BH 4 is involved in the processing of amino acids and the production of neurotransmitters, specifically dopamine and serotonin, which are primarily used in the transmission of signals between nerve cells in the brain. The mutation in the SPR gene interferes with this pathway by producing enzymes with little or no function.
This interference results in a lack of BH 4 specifically in the brain; the deficit is confined to the brain because other parts of the body adapt and utilize alternate pathways for the production of BH 4 . The mutation in the SPR gene leads to nonfunctional sepiapterin reductase enzymes, which results in a lack of BH 4 and ultimately disrupts the production of dopamine and serotonin in the brain. [ 6 ] This disruption leads to the visible symptoms present in patients suffering from sepiapterin reductase deficiency. SR deficiency is considered an inherited autosomal recessive disorder: each parent carries one copy of the mutated gene but typically shows no signs or symptoms of the condition. [ 7 ] The diagnosis of SR deficiency is based on the analysis of the pterins and biogenic amines found in the cerebrospinal fluid (CSF). Pterin compounds function as cofactors in enzyme catalysis, and biogenic amines, which include adrenaline, dopamine, and serotonin, have functions that vary from the control of homeostasis to the management of cognitive tasks. [ 8 ] This analysis reveals decreased concentrations of homovanillic acid (HVA) and 5-hydroxyindolacetic acid (HIAA), and elevated levels of 7,8-dihydrobiopterin , a compound produced in the synthesis of neurotransmitters. Sepiapterin is not detected by the methods routinely used to investigate biogenic monoamine metabolites in cerebrospinal fluid; it must be determined by specialized methods that reveal a marked and abnormal increase of sepiapterin in cerebrospinal fluid. The diagnosis is confirmed by demonstrating high levels of CSF sepiapterin and a marked decrease of SR activity in fibroblasts, along with molecular analysis of the SPR gene. [ 2 ] [ 9 ] SR deficiency is currently being treated using a combination therapy of levodopa and carbidopa.
These treatments are also used for individuals suffering from Parkinson's disease. The treatment is noninvasive and only requires the patient to take oral tablets 3 or 4 times a day, with the dosage of levodopa and carbidopa determined by the severity of the symptoms. Levodopa is in a class of medications called central nervous system agents; its main function is to be converted into dopamine in the brain. Carbidopa is in a class of medications called decarboxylase inhibitors; it works by preventing levodopa from being broken down before it reaches the brain. This treatment is effective in mitigating motor symptoms, but it does not totally eradicate them, and it is less effective against cognitive problems. Patients who have been diagnosed with SR deficiency and have undergone this treatment have shown improvement in most motor impairments, including oculogyric crises, dystonia, balance, and coordination. [ 9 ] [ 10 ] In one case, the diagnosis of sepiapterin reductase deficiency in a patient at the age of 14 years was delayed by an earlier diagnosis, at the age of 2, of an initially unclassified form of methylmalonic aciduria . At that time the hypotonia and delayed development were not considered suggestive of a neurotransmitter defect. The clinically relevant diagnosis was only made following the onset of dystonia with diurnal variation, when the patient was a teenager. Variability in the occurrence and severity of other symptoms of the condition, such as hypotonia, ataxia , tremors, spasticity, bulbar involvement, oculogyric crises, and cognitive impairment, is comparable with autosomal dominant GTPCH and tyrosine hydroxylase deficiency, which are both classified as forms of DOPA-responsive dystonia. [ 11 ] In another case, hypotonia and parkinsonism were present in two Turkish siblings, brother and sister.
By using exome sequencing , which sequences the protein-coding regions of the genome , researchers found a homozygous five- nucleotide deletion in the SPR gene in both siblings. This mutation is predicted to lead to premature termination of translation , the biological process through which proteins are manufactured. The homozygous SPR mutation in these two siblings with early-onset parkinsonism shows that SPR gene mutations can produce varying combinations of clinical and movement symptoms. These differences widen the spectrum of the disease phenotype and increase its genetic heterogeneity, causing difficulties in diagnosing the disease. [ 3 ] Another study examined the clinical history, CSF, and urine of two Greek siblings who were both diagnosed with SR deficiency. Both siblings displayed delayed psychomotor development and a movement disorder. The diagnosis was confirmed by measuring SR enzyme activity and by mutation analysis, performed using genomic DNA isolated from blood samples. Both patients had low concentrations of HVA and HIAA and high concentrations of sepiapterin in the CSF, but neopterin and biopterin were abnormal in only one sibling. These results indicate that, when diagnosing SR deficiency, quantification of sepiapterin in the CSF is more indicative than neopterin and biopterin alone. The results also show that urine concentrations of neurotransmitter metabolites are abnormal in patients with this disorder; this finding may provide an initial and easier indication of the deficiency before CSF analysis is performed. [ 12 ]
https://en.wikipedia.org/wiki/Sepiapterin_reductase_deficiency
Josef "Sepp" Hochreiter (born 14 February 1967) is a German computer scientist . Since 2018 he has led the Institute for Machine Learning at the Johannes Kepler University of Linz after having led the Institute of Bioinformatics from 2006 to 2018. In 2017 he became the head of the Linz Institute of Technology (LIT) AI Lab. Hochreiter is also a founding director of the Institute of Advanced Research in Artificial Intelligence (IARAI). [ 1 ] Previously, he was at Technische Universität Berlin , at University of Colorado Boulder , and at the Technical University of Munich . He is a chair of the Critical Assessment of Massive Data Analysis (CAMDA) conference. [ 2 ] Hochreiter has made contributions in the fields of machine learning , deep learning and bioinformatics , most notably the development of the long short-term memory (LSTM) neural network architecture, [ 3 ] [ 4 ] but also in meta-learning , [ 5 ] reinforcement learning [ 6 ] [ 7 ] and biclustering with application to bioinformatics data. Hochreiter developed the long short-term memory (LSTM) neural network architecture in his diploma thesis in 1991 leading to the main publication in 1997. [ 3 ] [ 4 ] LSTM overcomes the problem of numerical instability in training recurrent neural networks (RNNs) that prevents them from learning from long sequences ( vanishing or exploding gradient ). [ 3 ] [ 8 ] [ 9 ] In 2007, Hochreiter and others successfully applied LSTM with an optimized architecture to very fast protein homology detection without requiring a sequence alignment . [ 10 ] LSTM networks have also been used in Google Voice for transcription [ 11 ] and search, [ 12 ] and in the Google Allo chat app for generating response suggestion with low latency. [ 13 ] Beyond LSTM, Hochreiter has developed "Flat Minimum Search" to increase the generalization of neural networks [ 14 ] and introduced rectified factor networks (RFNs) for sparse coding [ 15 ] [ 16 ] which have been applied in bioinformatics and genetics. 
[ 17 ] Hochreiter introduced modern Hopfield networks with continuous states [ 18 ] and applied them to the task of immune repertoire classification. [ 19 ] Hochreiter worked with Jürgen Schmidhuber in the field of reinforcement learning on actor-critic systems that learn by "backpropagation through a model". [ 6 ] [ 20 ] Hochreiter has been involved in the development of factor analysis methods with application to bioinformatics, including FABIA for biclustering , [ 21 ] HapFABIA for detecting short segments of identity by descent [ 22 ] and FARMS for preprocessing and summarizing high-density oligonucleotide DNA microarrays to analyze RNA gene expression . [ 23 ] In 2006, Hochreiter and others proposed an extension of the support vector machine (SVM), the "Potential Support Vector Machine" (PSVM), [ 24 ] which can be applied to non-square kernel matrices and can be used with kernels that are not positive definite. Hochreiter and his collaborators have applied PSVM to feature selection , including gene selection for microarray data. [ 25 ] [ 26 ] [ 27 ] Hochreiter was awarded the IEEE CIS Neural Networks Pioneer Prize in 2021 for his work on LSTM. [ 28 ]
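The key LSTM idea mentioned above, an additive cell-state update controlled by gates, can be sketched as a single-unit forward step. This is a minimal illustrative sketch, not Hochreiter's original formulation; the weight layout and function names are choices made here.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One forward step of a single-unit LSTM cell.

    w maps each gate name to (input weight, recurrent weight, bias):
    'i' input gate, 'f' forget gate, 'o' output gate, 'g' candidate.
    The additive update c = f * c_prev + i * g is what lets the cell
    preserve information (and gradients) over long sequences.
    """
    i = sigmoid(w['i'][0] * x + w['i'][1] * h_prev + w['i'][2])
    f = sigmoid(w['f'][0] * x + w['f'][1] * h_prev + w['f'][2])
    o = sigmoid(w['o'][0] * x + w['o'][1] * h_prev + w['o'][2])
    g = math.tanh(w['g'][0] * x + w['g'][1] * h_prev + w['g'][2])
    c = f * c_prev + i * g        # cell state: gated additive memory
    h = o * math.tanh(c)          # hidden state exposed to the next layer
    return h, c
```

With the forget gate saturated open and the input gate closed, the cell state passes through a step essentially unchanged, which is the mechanism that sidesteps the vanishing-gradient problem of plain RNNs.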
https://en.wikipedia.org/wiki/Sepp_Hochreiter
Sepro Mineral Systems Corp. is a Canadian company founded in 1987 and headquartered in British Columbia , Canada. Formed by the acquisition of Sepro Mineral Processing International by Falcon Concentrators in 2008, [ 1 ] the company's key focus is the production of mineral processing equipment for the mining and aggregate industries. [ 2 ] Sepro Mineral Systems Corp. also provides engineering and process design services. Products sold by Sepro include grinding mills, ore scrubbers, vibrating screens, centrifugal gravity concentrators, agglomeration drums, and dense media separators. The company is also a supplier of single-source modular pre-designed and custom-designed plants and circuits. [ 3 ] Today, Sepro Mineral Systems Corp. is represented by agents in over 15 countries and has equipment operating in over 31 countries around the world. [ 4 ] The Falcon Concentrator is a type of gravity separation device for the recovery of valuable metals and minerals. There are three types of Falcon Concentrators: Falcon Semi-Batch (SB), Falcon Continuous (C) and Falcon Ultra-Fine (UF). [ 5 ] [ 6 ] All models of Falcon Concentrator rely on the centrifugal forces created by a rapidly rotating vertical bowl to stratify and separate particles based on weight. The amount of force generated and the method of collecting the heavier particles differ for each model. The Falcon Semi-Batch centrifugal concentrator is primarily used for the recovery of free (liberated) precious metals such as gold, silver and platinum. The machine generates forces up to 200 times the force of gravity (200 G's) and makes use of a two-stage rotating bowl for mineral separation: the smooth-walled lower portion stratifies the particles, and a fluidized upper portion collects the heavier particles. The machine is stopped periodically to rinse and collect the valuable concentrate from the bowl.
[ 7 ] The Falcon SB concentrator is used for gold recovery at many mines around the world, including Quadra FNX Mining 's Robinson mine in the United States, Newcrest 's Telfer Gold Mine in Australia and the Sadiola Gold Mine (owned principally by AngloGold Ashanti and Iamgold ) in Mali. [ 8 ] The Falcon Continuous (C) centrifugal concentrator is primarily used for the separation of heavy minerals which occur in ore concentrations above 0.1% by weight, such as cassiterite , tantalum and scheelite . [ 9 ] [ 10 ] It is also used for coal cleaning [ 11 ] and pre-concentration of gold bearing ores. [ 12 ] [ 13 ] The machine generates forces up to 300 times the force of gravity (300 G's) and operates by using a smooth-walled, rotating bowl to stratify the material into heavier and lighter fractions then uses pneumatic valves to control the amount of heavy material that reports to the concentrate collection stream. It does not use any fluidization water and relies entirely on centrifugal force for separation. [ 14 ] The Falcon C concentrator is used in various process plants around the world, such as the Tanco mine in Canada, [ 15 ] the Sekisovskoye mine in Kazakhstan [ 16 ] and the Renison tin mine in Tasmania. [ 17 ] The Falcon Ultra-Fine (UF) centrifugal concentrator is primarily used for the separation of heavy minerals which occur in ore concentrations above 0.1% by weight, such as cassiterite , tantalum and scheelite when the majority of the particles are smaller than 75 μm . The machine generates forces up to 600 times the force of gravity (600 G's) and uses a smooth-walled bowl for particle stratification with a pneumatically controlled rubber lip for heavy material collection. The machine is stopped periodically to rinse and collect the valuable concentrate from the bowl. [ 18 ] Studies have found that the deposition of heavy material within the bowl can be predicted by a hindered settling model. 
[ 19 ] The Falcon UF concentrator is used in a number of process plants around the world such as the Tanco mine in Canada [ 20 ] and the Bluestone tin mine in Tasmania. [ 21 ] Scrubbers are material washers used to break down and disperse clays in order to prepare mineral ores or construction aggregates for further processing. Sepro Tyre Drive Scrubbers are manufactured up to 3.6 m in diameter [ 22 ] and are capable of processing up to 1500 tonnes per hour of material. Shell-supported scrubbers such as the Sepro PTD Scrubber minimize stress on the shell by spreading the power drive over the full length of the washing drum. These scrubbers operate in many applications on feeds with high clay content, and are commonly used for difficult ore and stone washing duties. Specific applications of Sepro Scrubbers include the removal of gold-“robbing” carbonaceous material and other contaminants from gold ores , the processing of bauxite ores for aluminum production, the washing of laterites (gold, nickel, cobalt) to liberate fine metals for gravity recovery, and the washing of crushed aggregate , gravel and sand to remove clay contamination. [ 23 ] [ 24 ] Sepro Agglomeration Drums are specifically designed to prepare feeds with high fines content for gold and base metal heap loading operations. [ 25 ] Processes where a Sepro Agglomeration Drum can be utilized include gold , copper , uranium and nickel laterite . The action in the agglomeration drum, combined with small additions of cement or lime, binds the fines into a "pelletised" product, which can be heaped and leached without the "pooling" and "channeling" caused by loss of heap permeability due to blinding by fines. [ 26 ] The machine uses flexible rubber liners to prevent build-up without the use of lifter bars and is adjustable on a pivotable base frame.
[ 27 ] Shell-supported agglomerators such as the Sepro PTD Agglomeration Drum minimize stress on the shell by spreading the power drive over the full length of the unit. Sepro Tyre Driven Grinding Mills are designed for small and medium capacity grinding applications, specifically small tonnage plants, regrinding mills, reagent preparation and lime slaking. Sepro Pneumatic Tyre Driven (PTD) mills provide an alternative to standard trunnion drive systems. The drive consists of multiple gearboxes and electric motors directly connected and controlled through an AC variable frequency drive. Shell-supported mills such as the Sepro PTD Grinding Mills minimize stress on the mill shell by spreading the power drive over the full length of the mill. Sepro Mills are suitable for ball, rod and pebble charges and are available with overflow or grate discharge [ 28 ] to suit the application. [ 29 ] The Condor Dense Medium Separator (DMS) is a multi-stage, high efficiency media separation machine for mineral processing operations at the rougher and scavenger stage. It is typically used in a pre-concentration duty prior to processing or milling to reject barren material. The unit is manufactured with either two or three stages of separation depending on the media, with one or two valuable densities resulting, and can produce up to four products from one dense medium vessel altogether. The Condor DMS can take a larger feed particle size than a DMS cyclone of the same diameter and capacity, and is capable of handling higher sinks or floats loading without affecting performance. The valuable dense material (or 'sinks') can be combined or separated at the final stage and is then pumped on to the next process in the circuit. Sepro Mineral Systems Corp. supplies customizable DMS plants for a wide variety of application requirements.
Sepro's standard two product (concentrate, tailings) DMS Plant utilizes a two-stage Condor Separator and single density medium circuit, while the three product (concentrate, middlings, tailings) DMS Plant utilizes a three-stage Condor Separator and two medium circuits at high and low density. [ 30 ] The Sepro Leach Reactor is a high concentration leach reactor developed to treat the gold concentrate produced by the Falcon Concentrator . The unit consists of a concentrate holding tank and an agitated leach tank which are linked by a peristaltic hose pump. The SLR uses either hydrogen peroxide, oxygen gas or a chemical accelerant like SeproLeach to achieve elevated levels of dissolved oxygen required to accelerate the leaching process. The pregnant leach solution produced can be directly electrowon . With the addition of an electrowinning unit the final product becomes a gold plated carbon that can be directly refined to produce gold bullion. Extensive test work of the SLR on site has shown over 99% of the target mineral is recovered through a simple, fully automated process that is easily incorporated into recovery operations. Sepro Mineral Systems Corp. supplies SLR units with capacities ranging from 1,000 to 50,000 kg (2,200 to 110,200 lb). [ 31 ] Sepro supplies horizontal slurry pumps , vertical sump pumps , vertical froth pumps , vertical tank pumps and horizontal fluid process pump models which are metal lined or rubber lined, one option being SH46® material for advanced wear resistance. [ 32 ] They are designed to operate in the mining, aggregate, chemical and industrial sectors. Applications suitable for Sepro Pumps include mill discharge, mineral concentrate, dense media, coarse / fine tailings, process water and aggregates. Sepro engineers mobile, modular and fixed mineral processing plant designs which incorporate the complete line of slurry, sump, froth, tank and fluid Sepro Pumps. 
[ 33 ] The Sepro Blackhawk 100 Cone Crusher is a modern, hydraulically operated cone crusher designed to be simple, rugged and effective for heavy duty mining and aggregate applications. The combination of the speed and eccentric throw of the crusher provides fine crushing capability and high capacity in a very compact design. The Blackhawk is capable of being applied as a secondary or tertiary crusher as well as a pebble crusher. The Blackhawk 100 is driven directly via a flexible coupling to the electric drive motor. This arrangement eliminates the need for sheaves and v-belts, allowing for simplified operation and maintenance. A variable speed drive package is included to optimize the speed of the machine to the given liner profile, feed and production conditions. [ 34 ] Sepro Screens are used for a variety of particle size separation and dewatering duties in mineral processing and aggregate applications. In mineral processing applications, particle size separation is of utmost importance in order to optimize crushing, grinding and gravity separation as well as many other processes. In aggregate applications, proper size separation and dewatering is essential to generate a saleable product. Capable of high throughput and featuring interchangeable screen decks, Sepro-Sizetec Screens are used for gold ore processing , fine aggregates , industrial minerals, soil remediation and coal processing applications. [ 35 ] Sepro designs and builds modular and mobile processing plants for a wide range of mineral applications. Complete plants can be assembled using Sepro manufactured equipment along with equipment from third-party vendors and sub-contractors. [ 36 ] Sepro Mobile Plants are designed to be easily re-locatable as they are mounted on road-transportable custom-built trailer assemblies.
These include the Sepro Mobile Mill Plant and Sepro Mobile Flotation Plant, both of which were installed by Banks Island Gold Ltd at the company's Yellow Giant Gold Property on the coast of British Columbia. [ 37 ] They can be designed to encompass a wide variety of process options from crushing through to final concentrate collection. Sepro Modular and Skid Mounted Plants are engineered around structural elements that are simple and easy to erect on site. These plants can be designed with larger equipment for higher tonnage applications than the Sepro Mobile Plants. One example is a 360 TPD gold processing plant Sepro supplied to ProEurasia LCC for the Vladimirskaya Project in Russia, which included milling, gravity and smelting circuits. [ 38 ] Sepro also offers standard process modules which are designed around a single recovery or processing option. Dense Media Separation and Gravity Concentration are two examples of standard Sepro process modules. [ 39 ]
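The "times gravity" ratings quoted above for the Falcon concentrators (200, 300 and 600 G's) are relative centrifugal force figures, which follow from the bowl's angular velocity and radius via ω²r/g. A quick sketch of that relationship; the 0.2 m radius used in the example is a made-up illustrative figure, not a published Sepro specification:

```python
import math

G = 9.80665  # standard gravity, m/s^2

def relative_centrifugal_force(rpm, radius_m):
    """G-force (multiples of gravity) at radius radius_m in a bowl spinning at rpm."""
    omega = 2.0 * math.pi * rpm / 60.0  # angular velocity, rad/s
    return omega ** 2 * radius_m / G

def rpm_for_g(target_g, radius_m):
    """Spin speed (rpm) needed to reach target_g at radius radius_m."""
    omega = math.sqrt(target_g * G / radius_m)
    return omega * 60.0 / (2.0 * math.pi)
```

For a hypothetical 0.2 m bowl radius, reaching 200 G's requires a spin speed on the order of 950 rpm.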
https://en.wikipedia.org/wiki/Sepro_Mineral_Systems
Septate junctions are intercellular junctions found in invertebrate epithelial cells, appearing as ladder-like structures under electron microscopy . They are thought to provide structural strength and a barrier to solute diffusion through the intercellular space. They are considered somewhat analogous to the (vertebrate) tight junctions ; however, tight and septate junctions differ in many ways. Known insect homologues of tight junction components are components of conserved signalling pathways that localize to either adherens junctions , the subapical complex, or the marginal zone . [ 1 ] Recent studies show that septate junctions have also been identified in the myelinated nerve fibers of vertebrates. [ 2 ] [ 3 ] The main structural trait of septate junctions is that the cross-bridges, or septa, have a ladder-like shape and span the 15–20 nm intermembrane space of cell–cell contacts; the septa lie in a tight arrangement, parallel to one another. [ 4 ] Several components are related to the function or morphology of septate junctions, such as Band 4.1-Coracle, Discs-large, Fasciclin III and Neurexin IV (NRX). [ 5 ] [ 6 ] Band 4.1-Coracle is necessary for cell interaction. [ 6 ] Discs-large, a key component of septate junctions, is needed for growth control. [ 7 ] [ 8 ] Fasciclin III acts as an adhesion protein. [ 6 ] Neurexin IV (NRX) is a transmembrane protein required for the formation of septate junctions; for example, glial–glial septate junctions that lack NRX cause the blood barriers to break down. [ 5 ] Gliotactin (Gli) is also a transmembrane protein necessary for the formation of pleated septate junctions. [ 4 ] [ 9 ] Tsp2A and Undicht are newly identified components that are needed for the formation of smooth septate junctions and septate junctions.
[ 10 ] [ 11 ] Three claudins are known to be contained in septate junctions: Megatrachea (Mega), Sinuous (Sinu) and Kune-kune (Kune). Among these three claudins , Kune-kune plays the most central role in septate junction organization and function. [ 12 ] Septate junctions have several functions; in vertebrates, they play some of the roles of tight junctions. [ 14 ] Na+/K+ ATPase is required for septate junction function. [ 16 ] In Drosophila melanogaster , there are two types of septate junctions, smooth SJs (sSJs) and pleated SJs (pSJs), which are distributed in different tissues: sSJs are found in the gut endoderm and Malpighian tubules , while pSJs are found in the ectodermally derived epithelia. [ 5 ] sSJs and pSJs vary in shape but have the same function. [ 4 ]
https://en.wikipedia.org/wiki/Septate_junction
Septentrional , meaning "of the north ", is a Latinate adjective sometimes used in English . It is a form of the Latin noun septentriones , which refers to the seven stars of the Plough (Big Dipper), occasionally called the Septentrion . In the 18th century, septentrional languages was a recognised term for the Germanic languages . [ 1 ] The Oxford English Dictionary gives the etymology of septentrional as: [ad. L. septentrio , sing. of septentriōnēs , orig. septem triōnēs , the seven stars of the constellation of the Great Bear, f. septem seven + triōnes , pl. of trio plough-ox. Cf. F. septentrion .] [ 2 ] "Septentrional" is more or less synonymous with the term "boreal", derived from Boreas , a Greek god of the North Wind . The constellation Ursa Major , containing the Big Dipper, or Plough, dominates the skies of the North. The usual antonym for septentrional is the term meridional , which refers to the noonday sun. The term septentrional is found on maps, mostly those made before 1700. Early maps of North America often refer to the northern- and northwesternmost unexplored areas of the continent as at the "Septentrional" and as "America Septentrionalis", sometimes with slightly varying spellings. [ note 1 ] Sometimes abbreviated to "Sep.", it was used in historical astronomy to indicate the northern direction on the celestial globe, together with Meridional ("Mer.") for southern, Oriental ("Ori.") for eastern and Occidental ("Occ.") for western. [ 3 ] The linguistic usage in the 17th and 18th centuries was as an umbrella term . It described "the Germanic languages, usually with particular emphasis on Anglo-Saxon, Old Norse and Gothic." [ 4 ] Writing of Johann Georg Keyßler in 1758, Thomas Gray distinguished between "Celtic" and "septentrional" antiquities. [ 5 ] Thomas Percy actively criticised the blurring of the Celtic and the Germanic in the name of the "septentrional", while at the same time Ossianism favoured it. 
[ 6 ] James Ingram in his inaugural lecture of 1807 called George Hickes "the first of septentrional scholars" for his pioneering lexicographical work on Anglo-Saxon. [ 7 ] In current usage, "septentrional fiction" may refer to a setting in the Canadian North. [ 8 ] In France , the term septentrional refers to the Northern stretch of the Côtes du Rhône AOC winemaking region. [ 9 ] The Northern Rhône, or septentrional, runs along the Rhône river from Vienne in the north, to Montélimar in the south. It includes the eight crus : Côte Rôtie , Condrieu , Château-Grillet , Hermitage , Saint-Joseph , Crozes-Hermitage , Cornas and Saint-Péray . [ 10 ] The Southern Rhône is referred to as the meridional ( Rhône méridionale ), and extends from Montélimar in the north, to Avignon in the south.
https://en.wikipedia.org/wiki/Septentrional
Septic drain fields , also called leach fields or leach drains , are subsurface wastewater disposal facilities used to remove contaminants and impurities from the liquid that emerges after anaerobic digestion in a septic tank . Organic materials in the liquid are catabolized by a microbial ecosystem . A septic drain field, a septic tank, and associated piping compose a septic system . The drain field typically consists of an arrangement of trenches containing perforated pipes and porous material (often gravel ) covered by a layer of soil to prevent animals (and surface runoff ) from reaching the wastewater distributed within those trenches. [ 1 ] Primary design considerations are both hydraulic for the volume of wastewater requiring disposal and catabolic for the long-term biochemical oxygen demand of that wastewater. The land area that is set aside for the septic drain field may be called a septic reserve area (SRA). [ 2 ] Sewage farms similarly dispose of wastewater through a series of ditches and lagoons (often with little or no pre-treatment). These are more often found in arid countries as the waterflow on the surface allows for irrigation (and fertilization) of agricultural land. Many health departments require a percolation test ("perc" test) to establish the suitability of drain field soil to receive septic tank effluent. An engineer , soil scientist , or licensed designer may be required to work with the local governing agency to design a system that conforms to these criteria. A more progressive way [ citation needed ] to determine leach field sizing is by direct observation of the soil profile. In this observation, the engineer evaluates many features of the soil such as texture, structure, consistency, pores/roots, etc. 
The goal of percolation testing is to ensure the soil is permeable enough for septic tank effluent to percolate away from the drain field but fine-grained enough to filter out pathogenic bacteria and viruses before they travel far enough to reach a water well or surface water supply. Coarse soils – sand and gravel – can transmit wastewater away from the drain field before pathogens are destroyed. Silt and clay effectively filter out pathogens but limit wastewater flow rates. [ 3 ] Percolation tests measure the rate at which clean water disperses through a disposal trench into the soil. Several factors may reduce observed percolation rates when the drain field receives anoxic septic tank effluent. [ 4 ] Just as a septic tank is sized to support a community of anaerobic organisms capable of liquefying anticipated amounts of putrescible materials in wastewater, a drain field should be sized to support a community of aerobic soil microorganisms capable of decomposing the anaerobic septic tank's effluent into aerobic water. Hydrogen sulfide odors or iron bacteria may be observed in nearby wells or surface waters when effluent has not been completely oxidized before reaching those areas. [ 7 ] The biofilm on the walls of the drain field trenches will use atmospheric oxygen in the trenches to catabolize organic compounds in septic tank effluent. Groundwater flow is laminar in the aquifer soils surrounding the drain field. [ 8 ] Septic tank effluent with soluble organic compounds passing through the biofilm forms a mounded lens atop the groundwater underlying the drain field. Molecular diffusion controls the mixing of soluble organic compounds into the groundwater and the transport of oxygen from underlying groundwater or the capillary fringe of the groundwater surface to micro-organisms capable of catabolizing dissolved organic compounds remaining in the effluent plume.
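The sizing logic implied by the percolation test can be sketched as follows. This is an illustrative sketch only: the rate bands, loading rates, and daily-flow figure below are hypothetical placeholders, since actual values are prescribed by local health-department regulations.

```python
# Hypothetical sketch of drain-field sizing from a percolation ("perc") test.
# All numeric thresholds here are illustrative assumptions, not regulatory values.

def perc_rate_min_per_inch(drop_inches: float, interval_minutes: float) -> float:
    """Percolation rate: minutes for the water level to fall one inch."""
    return interval_minutes / drop_inches

def application_rate_gpd_per_sqft(perc_rate: float) -> float:
    """Map a perc rate to an assumed long-term effluent loading rate (gal/day/sq ft)."""
    if perc_rate <= 5:
        return 1.2   # coarse, fast-draining soil (may pass pathogens too quickly)
    if perc_rate <= 30:
        return 0.8   # loamy soil
    if perc_rate <= 60:
        return 0.4   # silty soil
    raise ValueError("soil too slow for a conventional drain field")

def trench_bottom_area_sqft(daily_flow_gpd: float, perc_rate: float) -> float:
    """Required trench bottom area = daily wastewater flow / loading rate."""
    return daily_flow_gpd / application_rate_gpd_per_sqft(perc_rate)

# Example: water level falls 2 inches in 30 minutes -> 15 min/inch,
# so a household producing 450 gal/day needs 450 / 0.8 = 562.5 sq ft.
rate = perc_rate_min_per_inch(2, 30)
area = trench_bottom_area_sqft(450, rate)
```

Note that this captures only the hydraulic consideration; the catabolic (biochemical oxygen demand) sizing discussed above is a separate constraint.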
[ 9 ] When a septic tank is used in combination with a biofilter , the height and catabolic area of the drain field may be reduced. Biofilter technology may allow higher-density residential construction, minimal site disturbance, and more usable land for trees, swimming pools, or gardens. Adequate routine maintenance may reduce the chances of the drain field plugging up. The biofilter will not reduce the volume of liquid that must percolate into the soil, but it may reduce the oxygen demand of organic materials in that liquid. A drain field may be designed to offer several separate disposal areas for effluent from a single septic tank. One area may be "rested" while effluent is routed to a different area. The nematode community in the resting drain field continues feeding on the accumulated biofilm and fats when the anaerobic septic tank effluent is no longer available. This natural cleansing process may reduce bioclogging to improve the hydraulic capacity of the field by increasing the available interstitial area of the soil as the accumulated organic material is oxidized. The percolation rate after resting may approach, but is unlikely to match, the original clean water percolation rate of the site. Septic tank and drain field microorganisms have very limited capability for catabolizing petroleum products and chlorinated solvents , and cannot remove dissolved metals ; however, some may be absorbed into septic tank sludge or drain field soils, and concentrations may be diluted by other groundwater in the vicinity of the drain field. Cleaning formulations may reduce drain field efficiency. Laundry bleach may slow or stop microbial activity in the drain field, and sanitizing or deodorizing chemicals may have similar effects. Detergents, solvents, and drain cleaners may transport emulsified , saponified or dissolved fats into the drain field before they can be catabolized into short-chain organic acids in the septic tank scum layer. [ 7 ]
https://en.wikipedia.org/wiki/Septic_drain_field
In algebra , a septic equation is an equation of the form ax^7 + bx^6 + cx^5 + dx^4 + ex^3 + fx^2 + gx + h = 0, where a ≠ 0 . A septic function is a function of the form f(x) = ax^7 + bx^6 + cx^5 + dx^4 + ex^3 + fx^2 + gx + h, where a ≠ 0 . In other words, it is a polynomial of degree seven. If a = 0 , then f is a sextic function ( b ≠ 0 ), a quintic function ( b = 0, c ≠ 0 ), etc. The equation may be obtained from the function by setting f ( x ) = 0 . The coefficients a , b , c , d , e , f , g , h may be integers , rational numbers , real numbers , complex numbers or, more generally, members of any field . Because they have an odd degree, septic functions appear similar to quintic and cubic functions when graphed, except that they may possess additional local maxima and local minima (up to three maxima and three minima). The derivative of a septic function is a sextic function . Some seventh-degree equations can be solved by factorizing into radicals , but other septics cannot. Évariste Galois developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory . To give an example of an irreducible but solvable septic, one can generalize the solvable de Moivre quintic to get x^7 + 7αx^5 + 14α^2 x^3 + 7α^3 x + β = 0, where the auxiliary equation is y^2 + βy − α^7 = 0. This means that the septic is obtained by eliminating u and v between x = u + v , uv + α = 0 and u^7 + v^7 + β = 0 . It follows that the septic's seven roots are given by x_k = ω_k (−β/2 + √(β^2/4 + α^7))^(1/7) + ω_k^(−1) (−β/2 − √(β^2/4 + α^7))^(1/7), where ω_k is any of the 7 seventh roots of unity . The Galois group of this septic is the maximal solvable group of order 42. This construction is easily generalized to any other degree k , not necessarily prime. Another solvable family is one whose members appear in Klüners' database of number fields . The Galois group of these septics is the dihedral group of order 14. The general septic equation can be solved with the alternating or symmetric Galois groups A 7 or S 7 . [ 1 ] Such equations require hyperelliptic functions and associated theta functions of genus 3 for their solution.
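The elimination of u and v described above (x = u + v with uv + α = 0 and u^7 + v^7 + β = 0) yields the solvable septic x^7 + 7αx^5 + 14α^2 x^3 + 7α^3 x + β = 0, whose radical solution can be checked numerically. The sketch below picks arbitrary sample values α = 1, β = 2; u^7 and v^7 are the roots of the auxiliary quadratic y^2 + βy − α^7 = 0, and each seventh root of unity ω gives a root ωu + ω^(−1) v.

```python
import cmath
import math

alpha, beta = 1.0, 2.0  # arbitrary sample parameters for illustration

# Roots of the auxiliary quadratic y^2 + beta*y - alpha^7 = 0:
disc = math.sqrt(beta**2 + 4 * alpha**7)
y1 = (-beta + disc) / 2            # this is u^7

# Real 7th root of y1 (odd root, so the sign is preserved):
u = math.copysign(abs(y1) ** (1 / 7), y1)
v = -alpha / u                     # enforces uv + alpha = 0, hence v^7 = y2

def septic(x):
    """The solvable septic obtained from the de Moivre-style elimination."""
    return x**7 + 7*alpha*x**5 + 14*alpha**2*x**3 + 7*alpha**3*x + beta

# One root per seventh root of unity omega: x = omega*u + omega^(-1)*v
roots = [cmath.exp(2j * math.pi * k / 7) * u + cmath.exp(-2j * math.pi * k / 7) * v
         for k in range(7)]
residual = max(abs(septic(x)) for x in roots)   # should be ~0 up to rounding
```

Substituting x = ωu + ω^(−1)v works because (ωu)(ω^(−1)v) = uv and (ωu)^7 = u^7, so all seven choices satisfy the same elimination conditions.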
[ 1 ] However, these equations were not studied specifically by the nineteenth-century mathematicians studying the solutions of algebraic equations, because the solutions of sextic equations were already at the limits of their computational abilities without computers. [ 1 ] Septics are the lowest-degree equations for which it is not obvious that their solutions may be obtained by composing continuous functions of two variables. Hilbert's 13th problem was the conjecture that this is not possible in the general case for seventh-degree equations. Vladimir Arnold solved this in 1957, demonstrating that it is always possible. [ 2 ] However, Arnold himself considered the genuine Hilbert problem to be whether the solutions of septics may be obtained by superpositions of algebraic functions of two variables. [ 3 ] As of 2023, the problem is still open. There are seven Galois groups for septics: the cyclic group of order 7, the dihedral group of order 14, the Frobenius groups of order 21 and 42, PSL(2,7) of order 168, the alternating group A 7 , and the symmetric group S 7 . [ 4 ] The square of the area of a cyclic pentagon is a root of a septic equation whose coefficients are symmetric functions of the sides of the pentagon. [ 5 ] The same is true of the square of the area of a cyclic hexagon . [ 6 ]
https://en.wikipedia.org/wiki/Septic_equation