In numerical integration, Simpson's rules are several approximations for definite integrals, named after Thomas Simpson (1710–1761). The most basic of these rules, called Simpson's 1/3 rule, or just Simpson's rule, reads

$$\int_a^b f(x)\,dx \approx \frac{b-a}{6}\left[f(a) + 4f\!\left(\frac{a+b}{2}\right) + f(b)\right].$$

In German and some other languages, it is named after Johannes Kepler, who derived it in 1615 after seeing it used for wine barrels (barrel rule, Keplersche Fassregel). The approximate equality in the rule becomes exact if f is a polynomial up to and including third degree. If the 1/3 rule is applied to n equal subdivisions of the integration range [a, b], one obtains the composite Simpson's 1/3 rule. Points inside the integration range are given alternating weights 4/3 and 2/3. Simpson's 3/8 rule, also called Simpson's second rule, requires one more function evaluation inside the integration range and gives lower error bounds, but does not improve the order of the error. If the 3/8 rule is applied to n equal subdivisions of the integration range [a, b], one obtains the composite Simpson's 3/8 rule. Simpson's 1/3 and 3/8 rules are two special cases of closed Newton–Cotes formulas. In naval architecture and ship stability estimation, there also exists Simpson's third rule, which has no special importance in general numerical analysis; see Simpson's rules (ship stability).

Simpson's 1/3 rule, also simply called Simpson's rule, is a method for numerical integration proposed by Thomas Simpson. It is based upon a quadratic interpolation and is the composite Simpson's 1/3 rule evaluated for n = 2.
Simpson's 1/3 rule is as follows:

$$\int_a^b f(x)\,dx \approx \frac{b-a}{6}\left[f(a) + 4f\!\left(\frac{a+b}{2}\right) + f(b)\right] = \frac{h}{3}\left[f(a) + 4f(a+h) + f(b)\right],$$

where $h = (b-a)/n$ is the step size for $n = 2$. The error in approximating an integral by Simpson's rule for $n = 2$ is

$$-\frac{1}{90} h^5 f^{(4)}(\xi) = -\frac{(b-a)^5}{2880} f^{(4)}(\xi),$$

where $\xi$ (the Greek letter xi) is some number between $a$ and $b$. [1] [2] The error is asymptotically proportional to $(b-a)^5$. However, the derivations below suggest an error proportional to $(b-a)^4$. Simpson's rule gains an extra order because the points at which the integrand is evaluated are distributed symmetrically in the interval $[a, b]$. Since the error term is proportional to the fourth derivative of $f$ at $\xi$, Simpson's rule provides exact results for any polynomial $f$ of degree three or less, since the fourth derivative of such a polynomial is zero at all points. Another way to see this result is to note that any interpolating cubic polynomial can be expressed as the sum of the unique interpolating quadratic polynomial plus an arbitrarily scaled cubic polynomial that vanishes at all three points in the interval, and the integral of this second term vanishes because it is odd within the interval.
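As a quick check of the rule and of its exactness for cubics, here is a minimal Python sketch (the function name is illustrative, not from any particular library):

```python
def simpson(f, a, b):
    """Basic Simpson's 1/3 rule on [a, b] (the n = 2 case)."""
    h = (b - a) / 2  # step size: half the interval
    return (h / 3) * (f(a) + 4 * f(a + h) + f(b))

# The rule is exact for polynomials of degree <= 3:
# the integral of x^3 over [0, 2] is 2^4 / 4 = 4.
approx = simpson(lambda x: x**3, 0.0, 2.0)  # → 4.0
```

Despite using only three function values, the result for this cubic is exact, as the error term proportional to $f^{(4)}$ predicts.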
If the second derivative $f''$ exists and is convex in the interval $(a, b)$, then

$$(b-a)\,f\!\left(\frac{a+b}{2}\right) + \frac{1}{3}\left(\frac{b-a}{2}\right)^{3} f''\!\left(\frac{a+b}{2}\right) \le \int_a^b f(x)\,dx \le \frac{b-a}{6}\left[f(a) + 4f\!\left(\frac{a+b}{2}\right) + f(b)\right].$$

One derivation replaces the integrand $f(x)$ by the quadratic polynomial (i.e. parabola) $P(x)$ that takes the same values as $f(x)$ at the end points $a$ and $b$ and the midpoint $a + h$, where $h = (b-a)/2$. One can use Lagrange polynomial interpolation to find an expression for this polynomial:

$$P(x) = f(a)\,\frac{(x-a-h)(x-b)}{-h(a-b)} + f(a+h)\,\frac{(x-a)(x-b)}{h(a+h-b)} + f(b)\,\frac{(x-a)(x-a-h)}{(b-a)(b-a-h)}.$$

Define

$$A_1(x) = f(a)\,\frac{(x-a-h)(x-b)}{-h(a-b)}, \qquad A_2(x) = f(a+h)\,\frac{(x-a)(x-b)}{h(a+h-b)}, \qquad A_3(x) = f(b)\,\frac{(x-a)(x-a-h)}{(b-a)(b-a-h)}.$$

By making the substitution $u = x - a - h$, one can calculate

$$\int_a^b A_1(x)\,dx = \frac{f(a)}{2h^2}\int_{-h}^{h} (u^2 - hu)\,du = \frac{f(a)}{2h^2}\left[\frac{u^3}{3} - \frac{hu^2}{2}\right]_{-h}^{h} = \frac{f(a)}{2h^2}\cdot\frac{2h^3}{3} = \frac{h}{3}f(a),$$

$$\int_a^b A_2(x)\,dx = -\frac{f(a+h)}{h^2}\int_{-h}^{h} (u^2 - h^2)\,du = -\frac{f(a+h)}{h^2}\left[\frac{u^3}{3} - h^2 u\right]_{-h}^{h} = -\frac{f(a+h)}{h^2}\left(-\frac{4h^3}{3}\right) = \frac{4h}{3}f(a+h),$$

$$\int_a^b A_3(x)\,dx = \frac{f(b)}{2h^2}\int_{-h}^{h} (u^2 + hu)\,du = \frac{f(b)}{2h^2}\left[\frac{u^3}{3} + \frac{hu^2}{2}\right]_{-h}^{h} = \frac{f(b)}{2h^2}\cdot\frac{2h^3}{3} = \frac{h}{3}f(b).$$

Therefore,

$$\int_a^b f(x)\,dx \approx \int_a^b P(x)\,dx = \frac{h}{3}f(a) + \frac{4h}{3}f(a+h) + \frac{h}{3}f(b) = \frac{h}{3}\left[f(a) + 4f(a+h) + f(b)\right].$$

Because of the $1/3$ factor, Simpson's rule is also referred to as "Simpson's 1/3 rule" (see below for generalization).

Another derivation constructs Simpson's rule from two simpler approximations: the midpoint rule

$$M = (b-a)\,f\!\left(\frac{a+b}{2}\right)$$

and the trapezoidal rule

$$T = \frac{1}{2}(b-a)\bigl(f(a) + f(b)\bigr).$$

The errors in these approximations are

$$\frac{1}{24}(b-a)^3 f''(a) + O\bigl((b-a)^4\bigr) \quad\text{and}\quad -\frac{1}{12}(b-a)^3 f''(a) + O\bigl((b-a)^4\bigr)$$

respectively, where $O\bigl((b-a)^4\bigr)$ denotes a term asymptotically proportional to $(b-a)^4$. The two $O\bigl((b-a)^4\bigr)$ terms are not equal; see Big O notation for more details. It follows from the above formulas for the errors of the midpoint and trapezoidal rule that the leading error term vanishes if we take the weighted average

$$\frac{2M + T}{3}.$$

This weighted average is exactly Simpson's rule. Using another approximation (for example, the trapezoidal rule with twice as many points), it is possible to take a suitable weighted average and eliminate another error term. This is Romberg's method.

The third derivation starts from the ansatz

$$\frac{1}{b-a}\int_a^b f(x)\,dx \approx \alpha f(a) + \beta f\!\left(\frac{a+b}{2}\right) + \gamma f(b).$$
The coefficients α, β, and γ can be fixed by requiring that this approximation be exact for all quadratic polynomials. This yields Simpson's rule. (This derivation is essentially a less rigorous version of the quadratic interpolation derivation, where one saves significant calculation effort by guessing the correct functional form.)

If the interval of integration $[a, b]$ is in some sense "small", then Simpson's rule with $n = 2$ subintervals will provide an adequate approximation to the exact integral. By "small" we mean that the function being integrated is relatively smooth over the interval $[a, b]$. For such a function, a smooth quadratic interpolant like the one used in Simpson's rule will give good results. However, it is often the case that the function to be integrated is not smooth over the interval; typically, this means that the function is either highly oscillatory or lacks derivatives at certain points. In these cases, Simpson's rule may give very poor results. One common way of handling this problem is to break the interval $[a, b]$ into $n > 2$ small subintervals. Simpson's rule is then applied to each subinterval, and the results are summed to produce an approximation for the integral over the entire interval. This approach is termed the composite Simpson's 1/3 rule, or just composite Simpson's rule. Suppose that the interval $[a, b]$ is split up into $n$ subintervals, with $n$ an even number.
Dividing the interval $[a, b]$ into $n$ subintervals of length $h = (b-a)/n$ and introducing the points $x_i = a + ih$ for $0 \le i \le n$ (in particular, $x_0 = a$ and $x_n = b$), the composite Simpson's rule is given by

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\sum_{i=1}^{n/2}\bigl[f(x_{2i-2}) + 4f(x_{2i-1}) + f(x_{2i})\bigr] = \frac{h}{3}\bigl[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + 2f(x_4) + \cdots + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)\bigr] = \frac{h}{3}\left[f(x_0) + 4\sum_{i=1}^{n/2} f(x_{2i-1}) + 2\sum_{i=1}^{n/2-1} f(x_{2i}) + f(x_n)\right].$$

This composite rule with $n = 2$ corresponds to the regular Simpson's rule of the preceding section. The error committed by the composite Simpson's rule is

$$-\frac{1}{180} h^4 (b-a) f^{(4)}(\xi),$$

where $\xi$ is some number between $a$ and $b$, and $h = (b-a)/n$ is the "step length". [3] [4] The error is bounded (in absolute value) by

$$\frac{1}{180} h^4 (b-a) \max_{\xi\in[a,b]} \bigl|f^{(4)}(\xi)\bigr|.$$

This formulation splits the interval $[a, b]$ into subintervals of equal length.
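The composite rule translates directly into code. The following Python sketch (names are illustrative) sums the 1/3 rule over pairs of subintervals, which produces the 1, 4, 2, 4, …, 2, 4, 1 weight pattern:

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule with n (even) equal subintervals."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)  # endpoints carry weight 1
    for i in range(1, n):
        # interior points alternate between weight 4 (odd i) and 2 (even i)
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h * total / 3

# The integral of sin over [0, pi] is exactly 2; the error shrinks like h^4.
approx = composite_simpson(math.sin, 0.0, math.pi, 100)
```

With n = 100 the error bound $\frac{1}{180}h^4(b-a)\max|f^{(4)}|$ is on the order of $10^{-8}$ here, so the computed value agrees with 2 to about seven decimal places.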
In practice, it is often advantageous to use subintervals of different lengths and concentrate the effort on the places where the integrand is less well-behaved. This leads to the adaptive Simpson's method.

Simpson's 3/8 rule, also called Simpson's second rule, is another method for numerical integration proposed by Thomas Simpson. It is based upon a cubic interpolation rather than a quadratic interpolation. Simpson's 3/8 rule is as follows:

$$\int_a^b f(x)\,dx \approx \frac{b-a}{8}\left[f(a) + 3f\!\left(\frac{2a+b}{3}\right) + 3f\!\left(\frac{a+2b}{3}\right) + f(b)\right] = \frac{3h}{8}\left[f(a) + 3f(a+h) + 3f(a+2h) + f(b)\right],$$

where $h = (b-a)/3$ is the step size. The error of this method is

$$-\frac{3}{80} h^5 f^{(4)}(\xi) = -\frac{(b-a)^5}{6480} f^{(4)}(\xi),$$

where $\xi$ is some number between $a$ and $b$. Thus, the 3/8 rule is about twice as accurate as the standard method, but it uses one more function value. A composite 3/8 rule also exists, similar to the one above. [5] A further generalization of this concept for interpolation with arbitrary-degree polynomials are the Newton–Cotes formulas.
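The 3/8 rule can be sketched the same way (a minimal illustration with an assumed function name, not a library routine):

```python
def simpson_38(f, a, b):
    """Simpson's 3/8 rule on [a, b]."""
    h = (b - a) / 3  # step size: a third of the interval
    return (3 * h / 8) * (f(a) + 3 * f(a + h) + 3 * f(a + 2 * h) + f(b))

# Like the 1/3 rule, the 3/8 rule is exact for cubics:
# the integral of x^3 over [0, 2] is 4.
approx = simpson_38(lambda x: x**3, 0.0, 2.0)  # → 4.0
```

Both rules have error terms proportional to $f^{(4)}$; the 3/8 rule's smaller constant buys accuracy at the cost of one extra evaluation.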
Dividing the interval $[a, b]$ into $n$ subintervals of length $h = (b-a)/n$ and introducing the points $x_i = a + ih$ for $0 \le i \le n$ (in particular, $x_0 = a$ and $x_n = b$), we have

$$\int_a^b f(x)\,dx \approx \frac{3h}{8}\sum_{i=1}^{n/3}\bigl[f(x_{3i-3}) + 3f(x_{3i-2}) + 3f(x_{3i-1}) + f(x_{3i})\bigr] = \frac{3h}{8}\bigl[f(x_0) + 3f(x_1) + 3f(x_2) + 2f(x_3) + 3f(x_4) + 3f(x_5) + 2f(x_6) + \cdots + 2f(x_{n-3}) + 3f(x_{n-2}) + 3f(x_{n-1}) + f(x_n)\bigr] = \frac{3h}{8}\left[f(x_0) + 3\sum_{i=1,\ 3\nmid i}^{n-1} f(x_i) + 2\sum_{i=1}^{n/3-1} f(x_{3i}) + f(x_n)\right].$$

While the remainder for the rule is shown as [5]

$$-\frac{1}{80} h^4 (b-a) f^{(4)}(\xi),$$

this form can only be used if $n$ is a multiple of three. The 1/3 rule can be used for the remaining subintervals without changing the order of the error term (conversely, the 3/8 rule can be used with a composite 1/3 rule for odd-numbered subintervals).
This is another formulation of a composite Simpson's rule: instead of applying Simpson's rule to disjoint segments of the integral to be approximated, Simpson's rule is applied to overlapping segments, yielding [6]

$$\int_a^b f(x)\,dx \approx \frac{h}{48}\left[17f(x_0) + 59f(x_1) + 43f(x_2) + 49f(x_3) + 48\sum_{i=4}^{n-4} f(x_i) + 49f(x_{n-3}) + 43f(x_{n-2}) + 59f(x_{n-1}) + 17f(x_n)\right].$$

The formula above is obtained by combining the composite Simpson's 1/3 rule with the one that uses Simpson's 3/8 rule in the extreme subintervals and Simpson's 1/3 rule in the remaining subintervals; the result is then obtained by taking the mean of the two formulas.

In the task of estimating the full area of narrow peak-like functions, Simpson's rules are much less efficient than the trapezoidal rule: the composite Simpson's 1/3 rule requires 1.8 times more points to achieve the same accuracy as the trapezoidal rule, [7] and the composite Simpson's 3/8 rule is even less accurate. Integration by Simpson's 1/3 rule can be represented as a weighted average, with 2/3 of the value coming from integration by the trapezoidal rule with step h and 1/3 of the value coming from integration by the rectangle rule with step 2h. The accuracy is governed by the second (2h-step) term.
Averaging the composite sums of Simpson's 1/3 rule with properly shifted frames produces the following rules:

$$\int_a^b f(x)\,dx \approx \frac{h}{24}\left[-f(x_{-1}) + 12f(x_0) + 25f(x_1) + 24\sum_{i=2}^{n-2} f(x_i) + 25f(x_{n-1}) + 12f(x_n) - f(x_{n+1})\right],$$

where two points outside the integrated region are exploited, and

$$\int_a^b f(x)\,dx \approx \frac{h}{24}\left[9f(x_0) + 28f(x_1) + 23f(x_2) + 24\sum_{i=3}^{n-3} f(x_i) + 23f(x_{n-2}) + 28f(x_{n-1}) + 9f(x_n)\right],$$

where only points within the integration region are used. Applying the second rule to a region of three points generates the 1/3 Simpson's rule, and to four points, the 3/8 rule. These rules are very similar to the alternative extended Simpson's rule: the coefficients within the major part of the region being integrated are one, with non-unit coefficients only at the edges. These two rules can be associated with the Euler–Maclaurin formula with the first derivative term, and are named first-order Euler–Maclaurin integration rules. [7] The two rules presented above differ only in how the first derivative at the region end is calculated. The first derivative term in the Euler–Maclaurin integration rules accounts for the integral of the second derivative, which equals the difference of the first derivatives at the edges of the integration region. It is possible to generate higher-order Euler–Maclaurin rules by adding a difference of 3rd, 5th, and so on derivatives with coefficients, as defined by the Euler–Maclaurin formula.
For some applications, the integration interval $I = [a, b]$ needs to be divided into uneven intervals, perhaps due to uneven sampling of data, or missing or corrupted data points. Suppose we divide the interval $I$ into an even number $N$ of subintervals of widths $h_k$. Then the composite Simpson's rule is given by [8]

$$\int_a^b f(x)\,dx \approx \sum_{i=0}^{N/2-1} \frac{h_{2i} + h_{2i+1}}{6}\left[\left(2 - \frac{h_{2i+1}}{h_{2i}}\right) f_{2i} + \frac{(h_{2i} + h_{2i+1})^2}{h_{2i}\,h_{2i+1}}\, f_{2i+1} + \left(2 - \frac{h_{2i}}{h_{2i+1}}\right) f_{2i+2}\right],$$

where $f_k = f\left(a + \sum_{i=0}^{k-1} h_i\right)$ are the function values at the $k$th sampling point on the interval $I$.

In case of an odd number $N$ of subintervals, the above formula is used up to the second-to-last interval, and the last interval is handled separately by adding the following to the result: [9]

$$\alpha f_N + \beta f_{N-1} - \eta f_{N-2},$$

where

$$\alpha = \frac{2h_{N-1}^2 + 3h_{N-1}h_{N-2}}{6(h_{N-2} + h_{N-1})}, \qquad \beta = \frac{h_{N-1}^2 + 3h_{N-1}h_{N-2}}{6h_{N-2}}, \qquad \eta = \frac{h_{N-1}^3}{6h_{N-2}(h_{N-2} + h_{N-1})}.$$

This article incorporates material from Code for Simpson's rule on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
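The uneven-interval formula can be transcribed into Python as follows (a direct sketch with illustrative names; only the even-N case is handled):

```python
def simpson_nonuniform(x, y):
    """Composite Simpson's rule for samples y[k] = f(x[k]) taken at
    possibly uneven points x[0] < x[1] < ... < x[N]; N must be even."""
    n = len(x) - 1  # number of subintervals N
    if n % 2 != 0:
        raise ValueError("requires an even number of subintervals")
    total = 0.0
    for i in range(0, n, 2):
        h0 = x[i + 1] - x[i]      # h_{2i}
        h1 = x[i + 2] - x[i + 1]  # h_{2i+1}
        total += (h0 + h1) / 6 * (
            (2 - h1 / h0) * y[i]
            + (h0 + h1) ** 2 / (h0 * h1) * y[i + 1]
            + (2 - h0 / h1) * y[i + 2]
        )
    return total

# Each triple of points is fit by a quadratic, so quadratics are
# integrated exactly even on uneven grids:
# the integral of x^2 over [0, 4] is 64/3.
xs = [0.0, 0.5, 2.0, 3.0, 4.0]
approx = simpson_nonuniform(xs, [t**2 for t in xs])
```

When the widths are all equal ($h_k = h$), each bracket reduces to $f_{2i} + 4f_{2i+1} + f_{2i+2}$ with prefactor $h/3$, recovering the uniform composite rule.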
https://en.wikipedia.org/wiki/Simpson's_rule
The Simputer General Public License, or the SGPL, is a hardware distribution public copyright license drafted specifically for the purpose of distributing Simputers. As a license it has been loosely modeled on the GPL, but in substance it is very different. The Simputer specifications are released under the terms and conditions of the SGPL. This license permits the user to build a Simputer based upon the specifications and to use the Simputer for non-commercial purposes. Any modifications made to the Simputer specifications may be used exclusively by the person making those modifications, with no obligation to release them to the public domain. However, within 12 months from the date of the first public sale of a Simputer based on these modified specifications, the person who created the modified specifications is bound to disclose them to the Simputer Trust. Simputers manufactured under the SGPL are required to be certified by the Simputer Trust before they are allowed to be sold under the Simputer trademark. In order to be so certified, they must fulfill the Core Simputer Specifications as disclosed on the Simputer website. All Simputers developed under the specifications must be distributed under the same terms as the SGPL.
https://en.wikipedia.org/wiki/Simputer_General_Public_License
In mathematics, the Sims conjecture is a result in group theory, originally proposed by Charles Sims. [1] He conjectured that if $G$ is a primitive permutation group on a finite set $S$ and $G_\alpha$ denotes the stabilizer of the point $\alpha$ in $S$, then there exists an integer-valued function $f$ such that $f(d) \ge |G_\alpha|$ for $d$ the length of any orbit of $G_\alpha$ in the set $S \setminus \{\alpha\}$. The conjecture was proven by Peter Cameron, Cheryl Praeger, Jan Saxl, and Gary Seitz using the classification of finite simple groups, in particular the fact that only finitely many isomorphism types of sporadic groups exist. The theorem reads precisely as follows. [2]

Theorem — There exists a function $f:\mathbb{N}\to\mathbb{N}$ such that whenever $G$ is a primitive permutation group and $h > 1$ is the length of a non-trivial orbit of a point stabilizer $H$ in $G$, then the order of $H$ is at most $f(h)$.

Thus, in a primitive permutation group with "large" stabilizers, these stabilizers cannot have any small orbit. A consequence of the proof is that there exist only finitely many connected distance-transitive graphs having degree greater than 2. [3] [4] [5]
https://en.wikipedia.org/wiki/Sims_conjecture
The Simulated Fluorescence Process (SFP) is a computing algorithm used for scientific visualization of 3D data from, for example, fluorescence microscopes. By modeling a physical light/matter interaction process, an image can be computed which shows the data as it would have appeared in reality when viewed under these conditions. The algorithm considers a virtual light source producing excitation light that illuminates the object. This casts shadows either on parts of the object itself or on other objects below it. The interaction between the excitation light and the object provokes the emission light, which also interacts with the object before it finally reaches the eye of the viewer. [1] [2]
https://en.wikipedia.org/wiki/Simulated_fluorescence_process_algorithm
The simulated growth of plants is a significant task in systems biology and mathematical biology, which seeks to reproduce plant morphology with computer software. Electronic trees (e-trees) usually use L-systems to simulate growth. L-systems are very important in the field of complexity science and A-life. A universally accepted system for describing changes in plant morphology at the cellular or modular level has yet to be devised. [1] The most widely implemented tree-generating algorithms are described in the papers "Creation and Rendering of Realistic Trees" and "Real-Time Tree Rendering". The realistic modeling of plant growth is of high value not only to biology but also to computer games.

The biologist Aristid Lindenmayer (1925–1989) worked with yeast and filamentous fungi and studied the growth patterns of various types of algae, such as the cyanobacterium Anabaena catenula. Originally the L-systems were devised to provide a formal description of the development of such simple multicellular organisms and to illustrate the neighbourhood relationships between plant cells. Later on, the system was extended to describe higher plants and complex branching structures. Central to L-systems is the notion of rewriting, where the basic idea is to define complex objects by successively replacing parts of a simple object using a set of rewriting rules or productions. The rewriting can be carried out recursively. L-systems are also closely related to Koch curves.

A challenge for plant simulations is to consistently integrate environmental factors, such as surrounding plants, obstructions, water and mineral availability, and lighting conditions. Essentially, this means building virtual environments with as many parameters as computationally feasible, thereby simulating not only the growth of the plant but also the environment it is growing within and, in fact, whole ecosystems.
Changes in resource availability influence plant growth, which in turn results in a change of resource availability. Powerful models and powerful hardware will be necessary to effectively simulate these recursive interactions of recursive structures. See also Comparison of tree generators and "A Survey of Modeling and Rendering Trees".
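As a concrete illustration of the rewriting idea, the following Python sketch applies L-system productions in parallel; the axiom and rules are Lindenmayer's classic model of Anabaena-like cell division (the function name is illustrative):

```python
def lsystem(axiom, rules, steps):
    """Rewrite every symbol of the string simultaneously at each step,
    using the production rules; symbols without a rule are copied as-is."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Lindenmayer's algae model: A -> AB, B -> A.
# Successive strings: A, AB, ABA, ABAAB, ABAABABA, ...
# (string lengths follow the Fibonacci numbers)
result = lsystem("A", {"A": "AB", "B": "A"}, 4)  # → "ABAABABA"
```

The parallel replacement of all symbols at once, rather than one at a time, is what distinguishes L-systems from ordinary formal grammars and mirrors the simultaneous division of cells.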
https://en.wikipedia.org/wiki/Simulated_growth_of_plants
In manufacturing, the simulated moving bed (SMB) process is a highly engineered process for implementing chromatographic separation. It is used to separate one chemical compound or one class of chemical compounds from one or more other chemical compounds to provide significant quantities of the purified or enriched material at a lower cost than could be obtained using simple (batch) chromatography. It cannot provide any separation or purification that cannot be done by a simple column purification. The process is rather complicated. The single advantage which it brings to a chromatographic purification is that it allows the production of large quantities of highly purified material at a dramatically reduced cost. The cost reductions come about as a result of: the use of a smaller amount of chromatographic separation media stationary phase , a continuous and high rate of production, and decreased solvent and energy requirements. This improved economic performance is brought about by a valve-and-column arrangement that is used to lengthen the stationary phase indefinitely and allow very high solute loadings to the process. In the conventional moving bed technique of production chromatography the feed entry and the analyte recovery are simultaneous and continuous, but because of practical difficulties with a continuously moving bed, the simulated moving bed technique was proposed. In the simulated moving bed technique instead of moving the bed, the feed inlet, the solvent or eluent inlet and the desired product exit and undesired product exit positions are moved continuously, giving the impression of a moving bed, with continuous flow of solid particles and continuous flow of liquid in the opposite direction of the solid particles. [ 1 ] [ 2 ] Specifically, an SMB system has two or more identical columns, which are connected to the mobile phase pump, and each other, by a multi-port valve. 
The plumbing is configured so that the inlet and outlet positions are switched from column to column around the loop, simulating the countercurrent movement of the stationary phase described above. SMB provides lower production cost by requiring less column volume, less chromatographic separation media ("packing" or "stationary phase"), less solvent and less energy, and far less labor. At industrial scale an SMB chromatographic separator is operated continuously, requiring less resin and less solvent than batch chromatography. The continuous operation facilitates operation control and integration into production plants. The drawbacks of SMB are higher investment cost compared to single-column operations, higher complexity, and higher maintenance costs. But these drawbacks are effectively compensated by the better yield, a much lower solvent consumption, and a much higher productivity compared to simple batch separations. For purifications such as the isolation of a single intermediate component or a fraction out of a multicomponent mixture, the SMB is less ideally suited: normally, a single SMB will separate only two fractions from each other, although a series or "train" of SMBs can perform multiple cuts and purify one or more products from a multi-component mixture. SMB is also not readily suited for solvent gradients, which may be preferred for the purification of some biomolecules. A continuous chromatography technique that overcomes the two-fraction limit and supports gradients is multicolumn countercurrent solvent gradient purification (MCSGP). [3] In size exclusion chromatography, where the separation process is driven by entropy, it is not possible to increase the resolution attained by a column via temperature or solvent gradients. Consequently, these separations often require SMB to extend usable retention time differences between the molecules or particles being separated. SMB is also very useful in the pharmaceutical industry, where separation of molecules having different chirality must be done on a very large scale.
For the purification of fructose, e.g. in high fructose corn syrup, or of amino acids, biological acids, etc. on an industrial scale, simulated moving bed chromatography is used to improve the economics of production.
https://en.wikipedia.org/wiki/Simulated_moving_bed
Simulation governance is a managerial function concerned with assurance of the reliability of information generated by numerical simulation . The term was introduced in 2011 [ 1 ] and specific technical requirements were addressed from the perspective of mechanical design in 2012. [ 2 ] Its strategic importance was addressed in 2015. [ 3 ] [ 4 ] At the 2017 NAFEMS World Congress in Stockholm, simulation governance was identified as the first of eight "big issues" in numerical simulation . Simulation governance is concerned with (a) selection and adoption of the best available simulation technology, (b) formulation of mathematical models , (c) management of experimental data, (d) data and solution verification procedures, and (e) revision of mathematical models in the light of new information collected from physical experiments and field observations. [ 5 ] Plans for simulation governance have to be formulated to fit the mission of each organization or department within an organization. In the terminology of structural and mechanical engineering, typical missions involve either strength analysis, where the quantities of interest are related to the first derivatives of the displacement field, or structural analysis, where the quantities of interest are force-displacement relations or accelerations (as in crash dynamics). This distinction is important because in strength analysis, errors associated with the formulation of mathematical models and errors in their numerical solution, for example by the finite element method, must be treated separately, and verification, validation and uncertainty quantification must be applied. [ 6 ] [ 7 ] In structural analysis, on the other hand, numerical problems typically constructed by assembling elements from a finite element library, a method known as finite element modeling, can produce satisfactory results.
In this case the numerical solution stands on its own; typically it is not an approximation to a well-posed mathematical problem. Therefore, neither solution verification nor model validation can be performed. Satisfactory results can be produced by artful tuning of finite element models with reference to sets of experimental data, so that two large errors nearly cancel one another. One error is conceptual: inadmissible data violate basic assumptions in the formulation. The other error is numerical: one or more quantities of interest diverge, but the rate of divergence is slow and may not be visible at the mesh refinements used in practice. [ 6 ]
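A minimal sketch (assumed, not taken from the cited references) of one basic solution-verification step alluded to above: estimating the observed convergence rate of a quantity of interest from three successively refined meshes, which is one way a slowly diverging quantity can be caught before it is mistaken for a converged result. The function name and example values are illustrative only:

```python
import math

def observed_rate(q_coarse, q_medium, q_fine, refinement_ratio=2.0):
    """Estimate the observed convergence order p from a quantity of interest
    computed on three meshes refined by a constant ratio r:

        p = log(|q_m - q_c| / |q_f - q_m|) / log(r)

    A rate near the method's theoretical order suggests convergence toward a
    limit; a rate near zero or negative warns that the quantity may be
    diverging slowly, which mesh refinements used in practice can miss.
    """
    return math.log(abs(q_medium - q_coarse) / abs(q_fine - q_medium)) / math.log(refinement_ratio)

# Manufactured example: a quantity converging at second order toward 1.0.
print(observed_rate(1.16, 1.04, 1.01))  # approximately 2.0
```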
https://en.wikipedia.org/wiki/Simulation_governance
The simulation heuristic is a psychological heuristic , or simplified mental strategy , according to which people determine the likelihood of an event based on how easy it is to picture the event mentally. Partially as a result, people experience more regret over outcomes that are easier to imagine, such as "near misses". The simulation heuristic was first theorized by psychologists Daniel Kahneman and Amos Tversky as a specialized adaptation of the availability heuristic to explain counterfactual thinking and regret . [ 1 ] However, it is not the same as the availability heuristic. Specifically, the simulation heuristic is defined as "how perceivers tend to substitute normal antecedent events for exceptional ones in psychologically 'undoing' this specific outcome." Kahneman and Tversky also believed that people used this heuristic to understand and predict others' behavior in certain circumstances and to answer questions involving counterfactual propositions. People, they believed, do this by mentally undoing events that have occurred and then running mental simulations of the events with the corresponding input values of the altered model. For example, a study was proposed that provided a group of participants with a situation describing two men who were delayed by half an hour in a traffic jam on the way to the airport. Both men were delayed enough that they both missed flights on which they were booked, one of them by half an hour and the second by only five minutes (because his flight had been delayed for 25 minutes). The results showed that a greater number of participants thought that the second man would be more upset than the first man. Kahneman and Tversky argued that this difference could not be attributed to disappointment, because both had expected to miss their flights.
They believed instead that the true explanation was that the participants utilized the simulation heuristic, so it was easier for them to imagine minor alterations that would have enabled the second man to arrive in time for his flight than it was for them to devise the same alterations for the first man. This heuristic was introduced by the Israeli psychologists Daniel Kahneman (born 1934) and Amos Tversky (1937–96) in a lecture in 1979. It was published as a book chapter in 1982. [ 1 ] The subjective probability judgments of an event used in the simulation heuristic do not follow the availability heuristic, in that these judgments are not based on the recall of relevant examples from memory but instead on the ease with which situations that did not happen can be mentally simulated or imagined. The theory that underlies the simulation heuristic assumes that one's judgments are biased towards information that is easily imagined or simulated mentally. It is because of this that we see biases having to do with the overestimation of how causally plausible an event could be, or the enhanced regret experienced when it is easy to mentally undo an unfortunate event, such as an accident. Significant research on the simulation heuristic's application in counterfactual reasoning has been performed by Dale T. Miller and Bryan Taylor . For example, they found that if an affectively negative experience, such as a fatal car accident, was brought about by an extraordinary event, such as someone who usually goes by train to work but instead drove, the simulation heuristic will cause an emotional reaction of regret. This emotional reaction arises because the exceptional event is easy to mentally undo and replace with a more common one that would not have caused the accident. Kahneman and Tversky did a study in which two individuals were given lottery tickets and then were given the opportunity to sell those same tickets back either two weeks before the drawing or an hour before the drawing.
They proposed this question to some participants, whose responses showed that they believed that the man who had sold his ticket an hour before the drawing would experience the greatest anticipatory regret when that ticket won. Kahneman and Tversky explained these findings through norm theory, stating that "people's anticipatory regret, along with reluctance to sell the ticket, should increase with their ease of imagining themselves still owning the winning ticket". [ 2 ] Therefore, the man who recently sold his ticket will experience more regret because the " counterfactual world", in which he is the winner, is perceived as closer for him than for the man who sold his ticket two weeks ago. This example shows the bias in this type of thinking, because both men had the same probability of winning had they not sold their tickets, and the difference in when they sold them did not increase or decrease these chances. Similar results were found with plane crash survivors. These individuals experienced a greater amount of anticipatory regret when they engaged in the highly mutable action of switching flights at the last minute. It was reasoned that this was due to a person "anticipating counterfactual thoughts that a negative event was evoked, because it tends to make the event more vivid, and so tends to make it more subjectively likely". [ 3 ] This heuristic has been shown to be a salient feature of clinical anxiety and its disorders, which are marked by heightened expectations of future negative events. A study done by David Raune and Andrew Macleod tried to tie the cognitive mechanisms that underlie this type of judgment to the simulation heuristic. [ 4 ] Their findings showed that anxious patients' simulation heuristic scores were correlated with subjective probability.
That is, the more reasons anxious patients could think of for why negative events would happen, relative to the number of reasons why they would not, the higher their subjective probability judgment that the events would happen to them. Further, it was found that anxious patients displayed increased access to the simulation heuristic compared to control participants. They also found support for the hypothesis that the easier it was for anxious patients to form the visual image, the greater the subjective probability that the event would happen to them. Through this work they proposed that the main clinical implication of the simulation heuristic results is that, in order to lower elevated subjective probability in clinical anxiety, patients should be encouraged to think of more reasons why the negative events will not occur than why they will occur. A study by Philip Broemer tested the hypothesis that the subjective ease with which one can imagine a symptom affects the impact of differently framed messages on attitudes toward performing health behaviors. [ 5 ] Drawing on the simulation heuristic, he argued that the vividness of information is reflected in the subjective ease with which people can imagine having symptoms of an illness. His results showed that the impact of message framing upon attitudes was moderated by the ease of imagination and clearly supported the congruency hypothesis for different kinds of health behavior. He found that negatively framed messages led to more positive attitudes when the recipients of these messages could easily imagine the relevant symptoms. Ease of imagination thus facilitates persuasion when messages emphasize potential health risks. Positive framing, however, led to more positive attitudes when symptom imagination was rather difficult.
Therefore, a message with a reassuring theme is more congruent with a recipient's state of mind when he or she cannot easily imagine the symptoms whereas a message with an aversive theme is more congruent with a recipient's state of mind when he or she can easily imagine having the symptoms .
https://en.wikipedia.org/wiki/Simulation_heuristic
The simulation hypothesis proposes that what one experiences as the real world is actually a simulated reality , such as a computer simulation in which humans are constructs. [ 1 ] [ 2 ] There has been much debate over this topic in the philosophical discourse, and regarding practical applications in computing . In 2003, philosopher Nick Bostrom proposed the simulation argument , which suggested that if a civilization became capable of creating conscious simulations , it could generate so many simulated beings that a randomly chosen conscious entity would almost certainly be in a simulation. This argument presents a trilemma : either such simulations are not created because of technological limitations or self-destruction; or advanced civilizations choose not to create them; or if advanced civilizations do create them, the number of simulations would far exceed base reality and we would therefore almost certainly be living in one. This assumes that consciousness is not uniquely tied to biological brains but can arise from any system that implements the right computational structures and processes. [ 3 ] [ 4 ] The hypothesis is preceded by many earlier versions, and variations on the idea have also been featured in science fiction , appearing as a central plot device in many stories and films, such as Simulacron-3 (1964) and The Matrix (1999). [ 5 ] Human history is full of thinkers who observed the difference between how things seem and how they might actually be, with dreams , illusions , and hallucinations providing poetic and philosophical metaphors. For example, the " Butterfly Dream " of Zhuangzi from ancient China ; [ 6 ] or the Indian philosophy of Maya ; or in ancient Greek philosophy , where Anaxarchus and Monimus likened existing things to a scene-painting and supposed them to resemble the impressions experienced in sleep or madness. [ 7 ] Aztec philosophical texts theorized that the world was a painting or book written by the Teotl . 
[ 8 ] A common theme in the spiritual philosophy of the religious movements collectively referred to by scholars as Gnosticism was the belief that reality as we experience it is the creation of a lesser, possibly malevolent, deity , from which humanity should seek to escape. [ 9 ] In the Western philosophical tradition, Plato's allegory of the cave analogized human beings to chained prisoners unable to see reality. René Descartes ' evil demon philosophically formalized these epistemic doubts, [ 10 ] [ 11 ] to be followed by a large literature with subsequent variations like brain in a vat . [ 12 ] In 1969, Konrad Zuse published his book Calculating Space on automata theory , in which he proposed the idea that the universe was fundamentally computational, a concept which became known as digital physics . [ 13 ] Later, roboticist Hans Moravec explored related themes through the lens of artificial intelligence , discussing concepts like mind uploading and speculating that our current reality might itself be a computer simulation created by future intelligences. [ 14 ] [ 15 ] [ 16 ] Nick Bostrom 's premise: Many works of science fiction as well as some forecasts by serious technologists and futurologists predict that enormous amounts of computing power will be available in the future. Let us suppose for a moment that these predictions are correct. One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears. Because their computers would be so powerful, they could run a great many such simulations. Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct). 
Then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race. [ 17 ] Bostrom's conclusion: It is then possible to argue that, if this were the case, we would be rational to think that we are likely among the simulated minds rather than among the original biological ones. Therefore, if we don't think that we are currently living in a computer simulation, we are not entitled to believe that we will have descendants who will run lots of such simulations of their forebears. In 2003, Bostrom proposed a trilemma that he called "the simulation argument". Despite its name, the "simulation argument" does not directly argue that humans live in a simulation; instead, it argues that one of three unlikely-seeming propositions is almost certainly true: [ 3 ] The trilemma points out that a technologically mature "posthuman" civilization would have enormous computing power. If even a tiny percentage of "ancestor simulations" were run (that is, "high-fidelity" simulations of ancestral life that would be indistinguishable from reality to the simulated ancestor), the total number of simulated ancestors, or "Sims", in the universe (or multiverse , if it exists) would greatly exceed the total number of actual ancestors. [ 3 ] Bostrom uses a type of anthropic reasoning to claim that, if the third proposition is the one of those three that is true, and almost all people live in simulations, then humans are almost certainly living in a simulation. 
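The counting at the heart of the trilemma can be illustrated with a toy calculation (a simplified sketch of Bostrom's observer-counting, with hypothetical parameter names and made-up values, not his published formula): even if only a tiny fraction of civilizations reach a posthuman stage, a large number of ancestor simulations per civilization makes simulated minds dominate the total.

```python
# Toy version of the counting argument: each civilization contributes one
# "real" history, and a fraction frac_posthuman of civilizations each run
# sims_per_civilization simulated histories of comparable population.
def simulated_fraction(frac_posthuman, sims_per_civilization):
    """Fraction of all observer-histories that are simulated."""
    simulated = frac_posthuman * sims_per_civilization
    return simulated / (simulated + 1.0)

# Even a one-in-a-thousand posthuman fraction dominates once each such
# civilization runs a million simulations (illustrative numbers only):
print(simulated_fraction(0.001, 1_000_000))  # approximately 0.999
```

Anthropic reasoning then treats a randomly chosen observer as drawn from this total, which is why the fraction being close to one is taken to imply that we are "almost certainly" simulated if the third proposition holds.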
[ 3 ] Bostrom's argument rests on the premise that given sufficiently advanced technology, it would be possible to represent the populated surface of the Earth without recourse to digital physics ; that the qualia experienced by a simulated consciousness are comparable or equivalent to those of a naturally occurring human consciousness, and that one or more levels of simulation within simulations would be feasible given only a modest expenditure of computational resources in the real world. [ 18 ] [ 3 ] Bostrom argues that if one assumes that humans will not be destroyed nor destroy themselves before developing such a technology, and that human descendants will have no overriding legal restrictions or moral compunctions against simulating biospheres or their own historical biosphere, then it would be unreasonable to count ourselves among the small minority of genuine organisms who, sooner or later, will be vastly outnumbered by artificial simulations. [ 18 ] Epistemologically , it is not impossible for humans to tell whether they are living in a simulation. For example, Bostrom suggests that a window could pop up saying: "You are living in a simulation. Click here for more information". However, imperfections in a simulated environment might be difficult for the native inhabitants to identify and for purposes of authenticity, even the simulated memory of a blatant revelation might be purged by a programme. But if any evidence came to light, either for or against the skeptical hypothesis, it would radically alter the aforementioned probability. [ 18 ] [ 19 ] Bostrom claims that his argument goes beyond the classical ancient " skeptical hypothesis ", claiming that "... we have interesting empirical reasons to believe that a certain disjunctive claim about the world is true", the third of the three disjunctive propositions being that humans are almost certainly living in a simulation. 
Thus, Bostrom, and writers in agreement with Bostrom such as David Chalmers , [ 20 ] argue there might be empirical reasons for the "simulation hypothesis", and that therefore the simulation hypothesis is not a skeptical hypothesis but rather a " metaphysical hypothesis". Bostrom says he sees no strong argument for which of the three trilemma propositions is the true one: "If (1) is true, then we will almost certainly go extinct before reaching posthumanity. If (2) is true, then there must be a strong convergence among the courses of advanced civilizations so that virtually none contains any individuals who desire to run ancestor-simulations and are free to do so. If (3) is true, then we almost certainly live in a simulation. In the dark forest of our current ignorance, it seems sensible to apportion one's credence roughly evenly between (1), (2), and (3) ... I note that people who hear about the simulation argument often react by saying, 'Yes, I accept the argument, and it is obvious that it is possibility # n that obtains.' But different people pick a different n . Some think it obvious that (1) is true, others that (2) is true, yet others that (3) is true". As a corollary to the trilemma, Bostrom states that "Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation". [ 18 ] Bostrom argues that if "the fraction of all people with our kind of experiences that are living in a simulation is very close to one", then it follows that humans probably live in a simulation. Some philosophers disagree, proposing that perhaps "Sims" do not have conscious experiences the same way that unsimulated humans do, or that it can otherwise be self-evident to a human that they are a human rather than a Sim. 
[ 21 ] [ 22 ] Philosopher Barry Dainton modifies Bostrom's trilemma by substituting "neural ancestor simulations" (ranging from literal brains in a vat, to far-future humans with induced high-fidelity hallucinations that they are their own distant ancestors) for Bostrom's "ancestor simulations", on the grounds that every philosophical school of thought can agree that sufficiently high-tech neural ancestor simulation experiences would be indistinguishable from non-simulated experiences. Even if high-fidelity computer Sims are never conscious, Dainton's reasoning leads to the following conclusion: either the fraction of human-level civilizations that reach a posthuman stage and are able and willing to run large numbers of neural ancestor simulations is close to zero, or some kind of (possibly neural) ancestor simulation exists. [ 23 ] The hypothesis has received criticism from some physicists, such as Sabine Hossenfelder, who considers it physically impossible to simulate the universe without producing measurable inconsistencies and has called the hypothesis pseudoscience and religion, [ 24 ] and from cosmologist George F. R. Ellis, who stated that "[the hypothesis] is totally impracticable from a technical viewpoint" and that "late-night pub discussion is not a viable theory". [ 25 ] [ 26 ] Some scholars categorically reject, or are uninterested in, anthropic reasoning, dismissing it as "merely philosophical", unfalsifiable, or inherently unscientific. [ 21 ] Some critics propose that the simulation could be in the first generation, and all the simulated people that will one day be created do not yet exist, [ 21 ] in accordance with philosophical presentism. The cosmologist Sean M. Carroll argues that the simulation hypothesis leads to a contradiction: if humans are typical, as is assumed, and not capable of performing simulations, this contradicts the arguer's assumption that it is easy for us to foresee that other civilizations can most likely perform simulations.
[ 27 ] Physicist Frank Wilczek raises an empirical objection, saying that the laws of the universe have hidden complexity which is "not used for anything" and the laws are constrained by time and location – all of this being unnecessary and extraneous in a simulation. He further argues that the simulation argument amounts to " begging the question ," due to the "embarrassing question" of the nature of the underlying reality in which this universe is simulated. "Okay if this is a simulated world, what is the thing in which it is simulated made out of? What are the laws for that?" [ 28 ] Brian Eggleston has argued that the future humans of our universe cannot be the ones performing the simulation, since the simulation argument considers our universe to be the one being simulated. [ 29 ] In other words, it has been argued that the probability that humans live in a simulated universe is not independent of the prior probability that is assigned to the existence of other universes. Some scholars accept the trilemma, and argue that the first or second of the propositions are true, and that the third proposition (the proposition that humans live in a simulation) is false. Physicist Paul Davies uses Bostrom's trilemma as part of one possible argument against a near-infinite multiverse . This argument runs as follows: if there were a near-infinite multiverse, there would be posthuman civilizations running ancestor simulations, which would lead to the untenable and scientifically self-defeating conclusion that humans live in a simulation; therefore, by reductio ad absurdum , existing multiverse theories are likely false. (Unlike Bostrom and Chalmers, Davies (among others) considers the simulation hypothesis to be self-defeating.) [ 21 ] [ 30 ] Some point out that there is currently no proof of technology that would facilitate the existence of sufficiently high-fidelity ancestor simulation. 
Additionally, there is no proof that it is physically possible or feasible for a posthuman civilization to create such a simulation, and therefore, for the present, the first proposition must be taken to be true. [ 21 ] There are also limits of computation . [ 17 ] [ 31 ] Physicist Marcelo Gleiser objects to the notion that posthumans would have a reason to run simulated universes: "...being so advanced they would have collected enough knowledge about their past to have little interest in this kind of simulation. ...They may have virtual-reality museums, where they could go and experience the lives and tribulations of their ancestors. But a full-fledged, resource-consuming simulation of an entire universe? Sounds like a colossal waste of time". Gleiser also points out that there is no plausible reason to stop at one level of simulation, so that the simulated ancestors might also be simulating their ancestors, and so on, creating an infinite regress akin to the " problem of the First Cause ". [ 32 ] In 2019, philosopher Preston Greene suggested that it may be best not to find out if we are living in a simulation, since, if it were found to be true, such knowledge might end the simulation. [ 33 ] Economist Robin Hanson argues that a self-interested occupant of a high-fidelity simulation should strive to be entertaining and praiseworthy in order to avoid being turned off or being shunted into a non-conscious low-fidelity part of the simulation. Hanson additionally speculates that someone who is aware that he might be in a simulation might care less about others and live more for today: "your motivation to save for retirement, or to help the poor in Ethiopia , might be muted by realizing that in your simulation, you will never retire and there is no Ethiopia". [ 34 ] Besides attempting to assess whether the simulation hypothesis is true or false, philosophers have also used it to illustrate other philosophical problems, especially in metaphysics and epistemology .
David Chalmers has argued that simulated beings might wonder whether their mental lives are governed by the physics of their environment, when in fact these mental lives are simulated separately (and are thus, in fact, not governed by the simulated physics). [ 35 ] Chalmers claims that they might eventually find that their thoughts fail to be physically caused , and argues that this means that Cartesian dualism is not necessarily as problematic of a philosophical view as is commonly supposed, though he does not endorse it. [ 36 ] Similar arguments have been made for philosophical views about personal identity that say that an individual could have been another human being in the past, as well as views about qualia that say that colors could have appeared differently than they do (the inverted spectrum scenario). In both cases, the claim is that all this would require is hooking up the mental lives to the simulated physics in a different way. [ 37 ] Computationalism is a philosophy of mind theory stating that cognition is a form of computation . It is relevant to the simulation hypothesis in that it illustrates how a simulation could contain conscious subjects, as required by a "virtual people" simulation. For example, it is well known that physical systems can be simulated to some degree of accuracy. If computationalism is correct and if there is no problem in generating artificial consciousness or cognition, it would establish the theoretical possibility of a simulated reality. Nevertheless, the relationship between cognition and phenomenal qualia of consciousness is disputed . It is possible that consciousness requires a vital substrate that a computer cannot provide and that simulated people, while behaving appropriately, would be philosophical zombies . This would undermine Nick Bostrom 's simulation argument; humans cannot be a simulated consciousness, if consciousness, as humans understand it, cannot be simulated. 
The skeptical hypothesis remains intact, however, and humans could still be vatted brains , existing as conscious beings within a simulated environment, even if consciousness cannot be simulated. It has been suggested that whereas virtual reality would enable a participant to experience only three senses (sight, sound and optionally smell), simulated reality would enable all five (including taste and touch). [ citation needed ] Some theorists [ 38 ] [ 39 ] have argued that if the "consciousness-is-computation" version of computationalism and mathematical realism (or radical mathematical Platonism ) [ 40 ] are true, then consciousness is computation, which in principle is platform independent and thus admits of simulation. This argument states that a " Platonic realm " or ultimate ensemble would contain every algorithm, including those that implement consciousness. Hans Moravec has explored the simulation hypothesis and has argued for a kind of mathematical Platonism according to which every object (including, for example, a stone) can be regarded as implementing every possible computation. [ 14 ] In physics, the view of the universe and its workings as the ebb and flow of information was first observed by Wheeler. [ 41 ] Consequently, two views of the world emerged: the first one proposes that the universe is a quantum computer , [ 42 ] while the other one proposes that the system performing the simulation is distinct from its simulation (the universe). [ 43 ] Of the former view, quantum-computing specialist Dave Bacon wrote: In many respects this point of view may be nothing more than a result of the fact that the notion of computation is the disease of our age—everywhere we look today we see examples of computers, computation, and information theory and thus we extrapolate this to our laws of physics. 
Indeed, thinking about computing as arising from faulty components, it seems as if the abstraction that uses perfectly operating computers is unlikely to exist as anything but a platonic ideal. Another critique of such a point of view is that there is no evidence for the kind of digitization that characterizes computers, nor are there any predictions made by those who advocate such a view that have been experimentally confirmed. [ 44 ] A method to test one type of simulation hypothesis was proposed in 2012 in a joint paper by physicists Silas R. Beane from the University of Bonn (now at the University of Washington, Seattle ), and Zohreh Davoudi and Martin J. Savage from the University of Washington, Seattle. [ 45 ] Under the assumption of finite computational resources, the simulation of the universe would be performed by dividing the space-time continuum into a discrete set of points, which may result in observable effects. In analogy with the mini-simulations that lattice-gauge theorists run today to build up nuclei from the underlying theory of strong interactions (known as quantum chromodynamics ), several observational consequences of a grid-like space-time have been studied in their work. Among proposed signatures is an anisotropy in the distribution of ultra-high-energy cosmic rays that, if observed, would be consistent with the simulation hypothesis according to these physicists. [ 46 ] In 2017, Campbell et al. proposed several experiments aimed at testing the simulation hypothesis in their paper "On Testing the Simulation Theory". [ 47 ] Astrophysicist Neil deGrasse Tyson said in a 2018 NBC News interview that he estimated the likelihood of the simulation hypothesis being correct at "better than 50-50 odds", adding "I wish I could summon a strong argument against it, but I can find none". [ 48 ] However, in a subsequent interview with Chuck Nice on a YouTube episode of StarTalk , Tyson shared that his friend J.
Richard Gott , a professor of astrophysical sciences at Princeton University , made him aware of a strong objection to the simulation hypothesis. The objection claims that the common trait that all hypothetical high-fidelity simulated universes possess is the ability to produce high-fidelity simulated universes. And since our current world does not possess this ability, it would mean that either humans are in the real universe, and therefore simulated universes have not yet been created, or that humans are the last in a very long chain of simulated universes, an observation that makes the simulation hypothesis seem less probable. Regarding this objection, Tyson remarked "that changes my life". [ 49 ] Elon Musk , the CEO of Tesla and SpaceX , stated that the argument for the simulation hypothesis is "quite strong". [ 50 ] In a podcast with Joe Rogan , Musk said "If you assume any rate of improvement at all, games will eventually be indistinguishable from reality" before concluding "that it's most likely we're in a simulation". [ 51 ] At various other press conferences and events, Musk has also speculated that the likelihood of us living in a simulated reality or computer made by others is about 99.9%, and stated in a 2016 interview that he believed there was "a one in billion chance we're in base reality". [ 50 ] [ 52 ] Rizwan Virk, of the Massachusetts Institute of Technology, is a founder of PlayLabs and author of the book The Simulation Hypothesis . Virk has described trying on a virtual reality headset and forgetting he was in an empty room, an experience that made him wonder whether the real world might itself be created by beings more technologically sophisticated than us. [ 53 ] A dream could be considered a type of simulation capable of fooling someone who is asleep. As a result, Bertrand Russell has argued that the "dream hypothesis" is not a logical impossibility, but that common sense as well as considerations of simplicity and inference to the best explanation rule against it.
[ 54 ] One of the first philosophers to question the distinction between reality and dreams was Zhuangzi , a Chinese philosopher of the 4th century BC. He phrased the problem as the well-known " Butterfly Dream ", which went as follows: Once Zhuangzi dreamt he was a butterfly, a butterfly flitting and fluttering around, happy with himself and doing as he pleased. He didn't know he was Zhuangzi. Suddenly he woke up and there he was, solid and unmistakable Zhuangzi. But he didn't know if he was Zhuangzi who had dreamt he was a butterfly or a butterfly dreaming he was Zhuangzi. Between Zhuangzi and a butterfly there must be some distinction! This is called the Transformation of Things. (2, tr. Burton Watson 1968:49) The philosophical underpinnings of this argument are also brought up by Descartes , who was one of the first Western philosophers to do so. In Meditations on First Philosophy , he states "... there are no certain indications by which we may clearly distinguish wakefulness from sleep", [ 55 ] and goes on to conclude that "It is possible that I am dreaming right now and that all of my perceptions are false". [ 55 ] Chalmers (2003) discusses the dream hypothesis and notes that it comes in two distinct forms. Both the dream argument and the simulation hypothesis can be regarded as skeptical hypotheses . Another state of mind in which some argue an individual's perceptions have no physical basis in the real world is psychosis , though psychosis may have a physical basis in the real world and explanations vary. In On Certainty , the philosopher Ludwig Wittgenstein has argued that such skeptical hypotheses are unsinnig (i.e. non-sensical), as they doubt knowledge that is required in order to make sense of the hypotheses themselves. [ 57 ] The dream hypothesis is also used to develop other philosophical concepts, such as Valberg's personal horizon : what this world would be internal to if this were all a dream. 
[ 58 ] Lucid dreaming is a state in which elements of dreaming and waking are combined to the point that the dreamer knows they are dreaming. [ 59 ] Science fiction has highlighted themes such as virtual reality, artificial intelligence and computer gaming for more than fifty years. [ 60 ] Simulacron-3 (1964) by Daniel F. Galouye (alternative title: Counterfeit World ) tells the story of a virtual city developed as a computer simulation for market research purposes, in which the simulated inhabitants possess consciousness; all but one of the inhabitants are unaware of the true nature of their world. The book was made into a German made-for-TV film called World on a Wire (1973) directed by Rainer Werner Fassbinder and aired on ARD . The film The Thirteenth Floor (1999) was also loosely based on both this book and World on a Wire . " We Can Remember It for You Wholesale " is a short story by American writer Philip K. Dick , first published in The Magazine of Fantasy & Science Fiction in April 1966, and was the basis for the 1990 film Total Recall and its 2012 remake . In Overdrawn at the Memory Bank , a short story made into a 1983 television film, the main character pays to have his mind connected to a simulation. [ citation needed ] The same theme was repeated in the 1999 film The Matrix , which depicted a world in which artificially intelligent robots enslaved humanity within a simulation set in the contemporary world. The 2012 play World of Wires was partially inspired by the Bostrom essay on the simulation hypothesis. [ 61 ] The 2012 visual novel Danganronpa 2: Goodbye Despair is set in a simulated reality known as the Neo World Program, which in this instance simulates a class trip to Jabberwock Island which, while initially peaceful, turns into a "killing game" involving the students in the simulation killing each other and trying to not be found guilty. 
Similarly, 2022's Anonymous;Code explores the idea of the world being a simulation, with an infinite or near-infinite number of "world layers" of simulations running inside other simulations. The main problem with this system is that in some of these "world layers", both above and below the one the characters find themselves living in, the Year 2038 Problem has not been solved, dooming the world to end on January 19, 2038 at 3:14:07 am UTC. The characters have to hack all the way into the highest world layer, the real world that the player lives in, to synchronize all the world layers and solve the Year 2038 problem in all of them. The 2014 episode of the animated sitcom Rick and Morty , " M. Night Shaym-Aliens! ", features a low-quality simulation that attempts to trap the two titular protagonists, but because the simulation runs at lower fidelity than ordinary reality, its artificial nature becomes obvious. In 2015, Kent Forbes published a documentary named "The Simulation Hypothesis", notably featuring Max Tegmark , Neil deGrasse Tyson , Paul Davies and James Gates. [ 62 ] In the 2016 video game No Man's Sky , the universe is a simulated universe run by The Atlas. According to in-game lore, many vastly different iterations of the universe existed, with very different histories and races. As the Atlas AI became more and more corrupted, the universes became more and more similar to each other. A 2017 episode of the long-running British science fiction series Doctor Who titled " Extremis " features a simulated version of the Twelfth Doctor and his companions. A secret Vatican document describes the truth about the simulated reality by inviting its reader to choose any series of numbers at random. The document lists the same numbers on the next page since the simulated program cannot produce a truly random event. The simulation is finally revealed to be a practice world for aliens intent on real-world domination. 
The 2022 Netflix epic period mystery science-fiction series 1899 , created by Jantje Friese and Baran bo Odar , tells the unfinished story of a simulation scenario in which multiple characters find themselves caught in layered, simultaneous realities. The storyline involves amnesia, seemingly to protect the integrity of the simulation, as the philosopher Preston Green suggested would be necessary. [ 33 ]
https://en.wikipedia.org/wiki/Simulation_hypothesis
Simultaneous action selection , or SAS, is a game mechanic that occurs when players of a game take action (such as moving their pieces) at the same time. Examples of games that use this type of movement include rock–paper–scissors and Diplomacy . Typically, a "secret yet binding" method of committing to one's move is necessary, so that as players' moves are revealed and implemented, others do not change their moves in light of the new information. In Diplomacy , players write down their moves and then reveal them simultaneously. Because no player gets the first move, this potentially arbitrary source of advantage is not present. It is also possible for simultaneous movement games to proceed relatively quickly, because players are acting at the same time, rather than waiting for their turn. Simultaneous action selection is easily implemented in card games such as Apples to Apples in which players simply select cards and throw them face-down into the center. Some games do not lend themselves to simultaneous movement, because one player's move may be prevented by the other player's. For instance, in chess, a move of bishop takes queen would be incompatible with a simultaneous opposing move of queen takes bishop. By contrast, simultaneous movement is possible in Junta because each coup phase has a movement stage and a separate combat stage; no units are removed until all have had a chance to move. It has been noted that "a certain amount of reverse psychology and reverse-reverse psychology ensues" as players attempt to calculate the implications of others' potential actions. [ 1 ] Junta also has simultaneous action selection in that players secretly choose their locations at the same time. [ 2 ] This is important in that, for instance, a player plotting an assassination may choose the bank for his or her own location (hoping to quickly deposit the ill-gotten gains) before finding out whether the location of his or her assassination was on the mark. 
Simultaneous action selection is used in many real-world applications such as first-price sealed-bid auctions . The fact that no bidder knows what others are planning to bid may provide an incentive to bid high if there is a strong desire to win the auction, which can result in much higher winning bids than if better information were available. The prisoner's dilemma is another classic example of simultaneous action selection. SAS can also be used to introduce an element of chance, as when rock–paper–scissors is used to decide a matter.
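The "secret yet binding" commitment that simultaneous action selection requires can be sketched with a hash commitment: each player publishes a hash of their salted move first, and only reveals the move once all commitments are in. This is an illustrative sketch, not a mechanism prescribed by any particular game.

```python
import hashlib
import secrets

def commit(move: str) -> tuple[str, str]:
    """Commit to a move: publish the digest, keep the salted move secret."""
    salt = secrets.token_hex(16)  # random salt prevents guessing the move from its hash
    digest = hashlib.sha256(f"{salt}:{move}".encode()).hexdigest()
    return digest, salt  # digest is public; salt stays private until the reveal

def verify(digest: str, salt: str, move: str) -> bool:
    """After all commitments are published, each player reveals salt and move."""
    return hashlib.sha256(f"{salt}:{move}".encode()).hexdigest() == digest

# Two players commit simultaneously, then reveal.
d1, s1 = commit("rock")
d2, s2 = commit("scissors")
assert verify(d1, s1, "rock") and verify(d2, s2, "scissors")
assert not verify(d1, s1, "paper")  # a player who changes their move fails verification
```

Because the digest reveals nothing practical about the move but binds the player to it, no one can adapt their choice after seeing the others' commitments.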
https://en.wikipedia.org/wiki/Simultaneous_action_selection
In game theory , a simultaneous game or static game [ 1 ] is a game where each player chooses their action without knowledge of the actions chosen by other players. [ 2 ] Simultaneous games contrast with sequential games , which are played by the players taking turns (moves alternate between players). In other words, both players normally act at the same time in a simultaneous game. Even if the players do not act at the same time, both players are uninformed of each other's move while making their decisions. [ 3 ] Normal form representations are usually used for simultaneous games. [ 4 ] Given a continuous game , players will have different information sets if the game is simultaneous than if it is sequential because they have less information to act on at each step in the game. For example, in a two player continuous game that is sequential, the second player can act in response to the action taken by the first player. However, this is not possible in a simultaneous game where both players act at the same time. In sequential games, players observe what rivals have done in the past and there is a specific order of play. [ 5 ] However, in simultaneous games, all players select strategies without observing the choices of their rivals and players choose at exactly the same time. [ 5 ] A simple example is rock-paper-scissors in which all players make their choice at exactly the same time. However, moving at exactly the same time isn't always taken literally; instead, players may move without being able to see the choices of other players. [ 5 ] A simple example is an election in which not all voters will vote literally at the same time but each voter will vote not knowing what anyone else has chosen. Given that decision makers are rational, outcomes can be expected to be individually rational. An outcome is individually rational if it yields each player at least their security level. 
[ 6 ] The security level for player i is the maximin value max min H_i(s): the largest payoff that player i can guarantee themselves unilaterally, by maximizing over their own strategies the minimum payoff over all possible actions of the other players. In a simultaneous game, players will make their moves simultaneously, determine the outcome of the game and receive their payoffs. The most common representation of a simultaneous game is normal form (matrix form). For a 2 player game, one player selects a row and the other player selects a column at exactly the same time. Traditionally, within a cell, the first entry is the payoff of the row player, the second entry is the payoff of the column player. The "cell" that is chosen is the outcome of the game. [ 4 ] To determine which "cell" is chosen, the payoffs for both the row player and the column player must be compared respectively. Each player is best off where their payoff is higher. Rock–paper–scissors , a widely played hand game, is an example of a simultaneous game. Both players make a decision without knowledge of the opponent's decision, and reveal their hands at the same time. There are two players in this game and each of them has three different strategies to make their decision; the combination of strategy profiles (a complete set of each player's possible strategies) forms a 3×3 table. We will display Player 1's strategies as rows and Player 2's strategies as columns. In the table, the numbers in red represent the payoff to Player 1, the numbers in blue represent the payoff to Player 2. Hence, the payoff table for a 2 player game of rock-paper-scissors will look like this: [ 4 ] Another common representation of a simultaneous game is extensive form ( game tree ). Information sets are used to emphasize the imperfect information. Although it is not simple, it is easier to use game trees for games with more than 2 players. [ 4 ] Even though simultaneous games are typically represented in normal form, they can be represented using extensive form too. 
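The 3×3 normal form of rock–paper–scissors described above can be built programmatically. A minimal sketch, assuming the conventional payoff values win = 1, tie = 0, loss = −1 (the article's table uses colors rather than specific numbers):

```python
# Normal-form (matrix) representation of rock-paper-scissors.
# matrix[(row, col)] -> (row player's payoff, column player's payoff).
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(row: str, col: str) -> tuple[int, int]:
    if row == col:
        return (0, 0)    # tie
    if BEATS[row] == col:
        return (1, -1)   # row player wins
    return (-1, 1)       # column player wins

matrix = {(r, c): payoff(r, c) for r in MOVES for c in MOVES}
assert matrix[("rock", "scissors")] == (1, -1)
assert matrix[("paper", "paper")] == (0, 0)
```

Note that every cell sums to zero, which makes rock–paper–scissors a zero-sum game, as discussed later in the article.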
While in extensive form one player's decision must be drawn before that of the other, by definition such a representation does not correspond to the real-life timing of the players' decisions in a simultaneous game. The key to modeling simultaneous games in the extensive form is to get the information sets right. A dashed line between nodes in extensive form representation of a game represents information asymmetry and specifies that, during the game, a party cannot distinguish between the nodes, [ 7 ] due to the party being unaware of the other party's decision (by definition of "simultaneous game"). Some variants of chess that belong to this class of games include synchronous chess and parity chess. [ 8 ] In a simultaneous game, players only have one move and all players' moves are made simultaneously. The number of players in a game must be stipulated and all possible moves for each player must be listed. Each player may have different roles and options for moves. [ 9 ] However, each player has a finite number of options available to choose from. An example of a simultaneous 2-player game: A town has two companies, A and B, who currently make $8,000,000 each and need to determine whether they should advertise. The table below shows the payoff patterns; the rows are options of A and the columns are options of B. The entries are payoffs for A and B, respectively, separated by a comma. [ 9 ] A zero-sum game is one in which the sum of payoffs equals zero for any outcome, i.e. the losers pay for the winners' gains. For a zero-sum 2-player game the payoff of player A doesn't have to be displayed since it is the negative of the payoff of player B. [ 9 ] An example of a simultaneous zero-sum 2-player game: Rock–paper–scissors is being played by two friends, A and B, for $10. The first cell stands for a payoff of 0 for both players. The second cell is a payoff of 10 for A which has to be paid by B, therefore a payoff of -10 for B. 
An example of a simultaneous 3-player game: A classroom vote is held as to whether or not they should have an increased amount of free time. Player A selects the matrix, player B selects the row, and player C selects the column. [ 9 ] The payoffs are: All of the above examples have been symmetric. All players have the same options, so if players interchange their moves, they also interchange their payoffs. By design, symmetric games are fair: every player is given the same chances. [ 9 ] Game theory should provide players with advice on how to find which move is best. These are known as "best response" strategies. [ 10 ] Pure strategies are those in which players pick only one strategy from their best response set. A pure strategy determines all of a player's possible moves in a game; it is a complete plan for a player in a given game. Mixed strategies are those in which players randomize over the strategies in their best response set, with a probability associated with each strategy. [ 10 ] For simultaneous games, players will typically select mixed strategies, while only occasionally choosing pure strategies. The reason for this is that in a game where players don't know what the other one will choose, it is best to pick the option that is likely to give you the greatest benefit for the lowest risk, given that the other player could choose anything; [ 10 ] i.e. if you pick your best option but the other player also picks their best option, someone will suffer. A dominant strategy provides a player with the highest possible payoff for any strategy of the other players. In simultaneous games, the best move a player can make is to follow their dominant strategy, if one exists. [ 11 ] When analyzing a simultaneous game: Firstly, identify any dominant strategies for all players. If each player has a dominant strategy, then players will play that strategy; however, if a player has more than one dominant strategy, then any of them is possible. 
[ 11 ] Secondly, if there aren't any dominant strategies, identify all strategies dominated by other strategies. Then eliminate the dominated strategies; the remaining strategies are those players will play. [ 11 ] Some players always expect the worst and believe that the others want to bring them down, when in fact the others simply want to maximise their own payoffs. Such a player concentrates on the smallest possible payoff of each option, believing this is what they will get, and chooses the option whose smallest payoff is highest. This option is the maximin move (strategy), as it maximises the minimum possible payoff. Thus, the player can be assured a payoff of at least the maximin value, regardless of how the others are playing. The player doesn't have to know the payoffs of the other players in order to choose the maximin move, so players can choose the maximin strategy in a simultaneous game regardless of what the other players choose. [ 10 ] A pure Nash equilibrium is an outcome from which no one can gain a higher payoff by deviating from their move, provided others stick with their original choices. Nash equilibria are self-enforcing contracts: if negotiation happens prior to the game being played, each player is best off sticking with their negotiated move. In a Nash equilibrium, each player's choice is a best response to the choices of the other players. [ 11 ] The prisoner's dilemma originated with Merrill Flood and Melvin Dresher and is one of the most famous games in game theory. The game is usually presented as follows: Two members of a criminal gang have been apprehended by the police. Both individuals now sit in solitary confinement. The prosecutors have the evidence required to put both prisoners away on lesser charges. However, they do not possess the evidence required to convict the prisoners on their principal charges. 
The prosecution therefore simultaneously offers both prisoners a deal where they can choose to cooperate with one another by remaining silent, or they can choose betrayal, meaning they testify against their partner and receive a reduced sentence. It should be mentioned that the prisoners cannot communicate with one another. [ 12 ] This results in the following payoff matrix:

                      B: Cooperation              B: Betrayal
A: Cooperation        A: 1 year,   B: 1 year      A: 3 years,  B: 3 months
A: Betrayal           A: 3 months, B: 3 years     A: 2 years,  B: 2 years

This game results in a clear dominant strategy of betrayal, where the only strong Nash equilibrium is for both prisoners to confess. This is because we assume both prisoners to be rational and possessing no loyalty towards one another, and betrayal provides a greater reward whatever the other prisoner chooses. [ 12 ] If B cooperates, A should choose betrayal, as serving 3 months is better than serving 1 year. Moreover, if B chooses betrayal, then A should also choose betrayal, as serving 2 years is better than serving 3. The choice to cooperate clearly provides a better outcome for the two prisoners; however, from a perspective of self interest this option would be deemed irrational. The option in which both cooperate features the least total time spent in prison, 2 years in total. This total is significantly less than the total at the Nash equilibrium, where both betray, of 4 years. However, given that prisoners A and B are individually motivated, they will always choose betrayal. They do so by selecting the best option for themselves while considering each possible decision of the other prisoner. In the battle of the sexes game, a wife and husband decide independently whether to go to a football game or the ballet. Each person likes to do something together with the other, but the husband prefers football and the wife prefers ballet. 
The two Nash equilibria, and therefore the best responses for both husband and wife, are for them to both pick the same leisure activity, e.g. (ballet, ballet) or (football, football). [ 11 ] The table below shows the payoff for each option: Simultaneous games are designed to inform strategic choices in competitive and non-cooperative environments. However, it is important to note that Nash equilibria and many of the aforementioned strategies generally fail to result in socially desirable outcomes. Pareto efficiency is a notion rooted in the theoretical construct of perfect competition . Originating with the Italian economist Vilfredo Pareto , the concept refers to a state in which an economy has maximized efficiency in terms of resource allocation. Pareto efficiency is closely linked to Pareto optimality, which is an ideal of welfare economics and often implies a notion of ethical consideration. A simultaneous game, for example, is said to reach Pareto optimality if there is no alternative outcome that can make at least one player better off while leaving all other players at least as well off. Such outcomes are referred to as socially desirable outcomes. [ 13 ] The Stag Hunt by philosopher Jean-Jacques Rousseau is a simultaneous game in which there are two players. The decision to be made is whether or not each player wishes to hunt a Stag or a Hare. Naturally, hunting a Stag will provide greater utility in comparison to hunting a Hare. However, in order to hunt a Stag both players need to work together. On the other hand, each player is perfectly capable of hunting a Hare alone. The resulting dilemma is that neither player can be sure of what the other will choose to do. This creates the potential for a player to receive no payoff should they be the only party to choose to hunt a Stag. [ 14 ] This results in the following payoff matrix: The game is designed to illustrate a clear Pareto optimality where both players cooperate to hunt a Stag. 
However, due to the inherent risk of the game, such an outcome does not always come to fruition. It is imperative to note that Pareto optimality is not a strategic solution for simultaneous games. However, the ideal informs players about the potential for more efficient outcomes, and can potentially provide insight into how players should learn to play over time. [ 15 ]
https://en.wikipedia.org/wiki/Simultaneous_game
Simultaneous nitrification–denitrification (SNdN) is a wastewater treatment process. Microbial simultaneous nitrification–denitrification is the conversion of the ammonium ion to nitrogen gas in a single bioreactor . The process is dependent on floc characteristics, reaction kinetics, mass loading of readily biodegradable chemical oxygen demand (rbCOD), and the dissolved oxygen (DO) concentration. [ 1 ] The oxidation of the ammonium to nitrogen gas has been achieved with attached growth and suspended growth wastewater treatment processes. The most common bacteria responsible for the two-step conversion are the autotrophic organisms Nitrosomonas and Nitrobacter , and many different heterotrophs. The former obtain energy from the oxidation of ammonia , obtain carbon from CO 2 , and use oxygen as the electron acceptor . They are termed autotrophic because of their carbon source and termed aerobes because of their aerobic environment. The heterotrophic organisms are responsible for denitrification, or the reduction of nitrate , NO 3 − , to nitrogen gas, N 2 . They use carbon from complex organic compounds, prefer low to zero dissolved oxygen, and use nitrate as the electron acceptor. [ citation needed ] The most common design uses two different basins: one catering to the autotrophic bacteria and the second to the heterotrophic bacteria. However, SNdN accommodates both in one basin with strict control of DO. This has been done in two common approaches. One is to develop an oxygen gradient by adding oxygen at one location in the basin. Near the O 2 injection point, a high DO concentration is maintained, allowing for nitrification and oxidation of other organic compounds . Oxygen is the electron acceptor and is depleted. The DO level in localized environments decreases with increasing distance from the injection point. In these low-DO locations, the heterotrophic bacteria complete the nitrogen removal. 
The Orbal process is a technology in practice today using this method. The other method is to produce an oxygen gradient within the bio floc. The DO concentration remains high in the outside rings of the floc, where nitrification occurs, but low in the inner rings of the floc, where denitrification occurs. This method is dependent on the floc size and characteristics; however, controlling flocs is not well understood and is an active field of study. [ 2 ] Typically, SNdN has slower ammonia and nitrate utilization rates as compared to separate basin designs because only a fraction of the total biomass is participating in either the nitrification or the denitrification steps. The SNdN limitation due to partial active biomass has led to research in novel bacteria and system designs. [ 3 ] Huang achieved significant ammonia removal in an attached growth process with ciliated columns packed with granular sulfur, where the denitrifying bacteria used the sulfur as the electron donor and nitrate as the electron acceptor. Another well-established pathway is via autotrophic denitrifying bacteria in the process termed the Anammox process. [ 1 ] It is typically used for high ammonia strength wastewater. [ citation needed ]
https://en.wikipedia.org/wiki/Simultaneous_nitrification–denitrification
In mathematics , sine and cosine are trigonometric functions of an angle . The sine and cosine of an acute angle are defined in the context of a right triangle : for the specified angle, its sine is the ratio of the length of the side opposite that angle to the length of the longest side of the triangle (the hypotenuse ), and the cosine is the ratio of the length of the adjacent leg to that of the hypotenuse . For an angle θ {\displaystyle \theta } , the sine and cosine functions are denoted as sin ⁡ ( θ ) {\displaystyle \sin(\theta )} and cos ⁡ ( θ ) {\displaystyle \cos(\theta )} . The definitions of sine and cosine have been extended to any real value in terms of the lengths of certain line segments in a unit circle . More modern definitions express the sine and cosine as infinite series , or as the solutions of certain differential equations , allowing their extension to arbitrary positive and negative values and even to complex numbers . The sine and cosine functions are commonly used to model periodic phenomena such as sound and light waves , the position and velocity of harmonic oscillators, sunlight intensity and day length, and average temperature variations throughout the year. They can be traced to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period . To define the sine and cosine of an acute angle α {\displaystyle \alpha } , start with a right triangle that contains an angle of measure α {\displaystyle \alpha } ; in the accompanying figure, angle α {\displaystyle \alpha } in a right triangle A B C {\displaystyle ABC} is the angle of interest. The three sides of the triangle are named as follows: [ 1 ] Once such a triangle is chosen, the sine of the angle is equal to the length of the opposite side divided by the length of the hypotenuse, and the cosine of the angle is equal to the length of the adjacent side divided by the length of the hypotenuse: [ 1 ] sin ⁡ ( α ) = opposite hypotenuse , cos ⁡ ( α ) = adjacent hypotenuse . 
{\displaystyle \sin(\alpha )={\frac {\text{opposite}}{\text{hypotenuse}}},\qquad \cos(\alpha )={\frac {\text{adjacent}}{\text{hypotenuse}}}.} The other trigonometric functions of the angle can be defined similarly; for example, the tangent is the ratio between the opposite and adjacent sides or equivalently the ratio between the sine and cosine functions. The reciprocal of sine is cosecant, which gives the ratio of the hypotenuse length to the length of the opposite side. Similarly, the reciprocal of cosine is secant, which gives the ratio of the hypotenuse length to that of the adjacent side. The cotangent function is the ratio between the adjacent and opposite sides, a reciprocal of a tangent function. These functions can be formulated as: [ 1 ] tan ⁡ ( θ ) = sin ⁡ ( θ ) cos ⁡ ( θ ) = opposite adjacent , cot ⁡ ( θ ) = 1 tan ⁡ ( θ ) = adjacent opposite , csc ⁡ ( θ ) = 1 sin ⁡ ( θ ) = hypotenuse opposite , sec ⁡ ( θ ) = 1 cos ⁡ ( θ ) = hypotenuse adjacent . {\displaystyle {\begin{aligned}\tan(\theta )&={\frac {\sin(\theta )}{\cos(\theta )}}={\frac {\text{opposite}}{\text{adjacent}}},\\\cot(\theta )&={\frac {1}{\tan(\theta )}}={\frac {\text{adjacent}}{\text{opposite}}},\\\csc(\theta )&={\frac {1}{\sin(\theta )}}={\frac {\text{hypotenuse}}{\text{opposite}}},\\\sec(\theta )&={\frac {1}{\cos(\theta )}}={\frac {\textrm {hypotenuse}}{\textrm {adjacent}}}.\end{aligned}}} As stated, the values sin ⁡ ( α ) {\displaystyle \sin(\alpha )} and cos ⁡ ( α ) {\displaystyle \cos(\alpha )} appear to depend on the choice of a right triangle containing an angle of measure α {\displaystyle \alpha } . However, this is not the case as all such triangles are similar , and so the ratios are the same for each of them. For example, each leg of the 45-45-90 right triangle is 1 unit, and its hypotenuse is 2 {\displaystyle {\sqrt {2}}} ; therefore, sin ⁡ 45 ∘ = cos ⁡ 45 ∘ = 2 2 {\textstyle \sin 45^{\circ }=\cos 45^{\circ }={\frac {\sqrt {2}}{2}}} . 
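The ratio definitions above can be checked numerically. A minimal sketch using the 45-45-90 triangle with unit legs described in the text:

```python
import math

# Right-triangle ratios for the 45-45-90 triangle with legs of length 1.
opposite = adjacent = 1.0
hypotenuse = math.sqrt(opposite**2 + adjacent**2)  # = sqrt(2) by the Pythagorean theorem

# sin 45 deg = opposite/hypotenuse and cos 45 deg = adjacent/hypotenuse, both sqrt(2)/2.
assert math.isclose(opposite / hypotenuse, math.sin(math.radians(45)))
assert math.isclose(adjacent / hypotenuse, math.cos(math.radians(45)))
assert math.isclose(opposite / hypotenuse, math.sqrt(2) / 2)

# The derived functions follow from the same ratios: tan = sin/cos = opposite/adjacent.
assert math.isclose(opposite / adjacent, math.tan(math.radians(45)))
```

Because all right triangles containing the same angle are similar, the ratios are independent of the triangle's overall size, which is why fixing the legs to 1 loses no generality.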
[ 2 ] The following table shows special values of both sine and cosine for inputs in the domain 0 < α < π 2 {\textstyle 0<\alpha <{\frac {\pi }{2}}} , expressed in several unit systems such as degrees and radians. The sine and cosine of angles other than these five can be obtained by using a calculator. [ 3 ] [ 4 ] The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. [ 5 ] Given a triangle A B C {\displaystyle ABC} with sides a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} , and angles opposite those sides α {\displaystyle \alpha } , β {\displaystyle \beta } , and γ {\displaystyle \gamma } , the law states, sin ⁡ α a = sin ⁡ β b = sin ⁡ γ c . {\displaystyle {\frac {\sin \alpha }{a}}={\frac {\sin \beta }{b}}={\frac {\sin \gamma }{c}}.} This is equivalent to the equality of the first three expressions below: a sin ⁡ α = b sin ⁡ β = c sin ⁡ γ = 2 R , {\displaystyle {\frac {a}{\sin \alpha }}={\frac {b}{\sin \beta }}={\frac {c}{\sin \gamma }}=2R,} where R {\displaystyle R} is the triangle's circumradius . The law of cosines is useful for computing the length of an unknown side if two other sides and an angle are known. [ 5 ] The law states, a 2 + b 2 − 2 a b cos ⁡ ( γ ) = c 2 {\displaystyle a^{2}+b^{2}-2ab\cos(\gamma )=c^{2}} In the case where γ = π / 2 {\displaystyle \gamma =\pi /2} from which cos ⁡ ( γ ) = 0 {\displaystyle \cos(\gamma )=0} , the resulting equation becomes the Pythagorean theorem . [ 6 ] The cross product and dot product are operations on two vectors in Euclidean vector space . The sine and cosine functions can be defined in terms of the cross product and dot product. 
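Both laws can be verified numerically. The sketch below uses the 3-4-5 right triangle as an assumed example (γ = 90° between sides a and b):

```python
import math

# Law of cosines: c^2 = a^2 + b^2 - 2ab*cos(gamma); with gamma = 90 degrees
# the cosine term vanishes and this reduces to the Pythagorean theorem.
a, b, gamma = 3.0, 4.0, math.pi / 2
c = math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(gamma))
assert math.isclose(c, 5.0)

# Law of sines: sin(alpha)/a = sin(gamma)/c, and each ratio equals 1/(2R),
# where R is the circumradius (here the hypotenuse over 2).
alpha = math.asin(a * math.sin(gamma) / c)
assert math.isclose(math.sin(alpha) / a, math.sin(gamma) / c)
R = c / (2 * math.sin(gamma))
assert math.isclose(R, 2.5)
```

Solving for `alpha` with `asin` as done here is valid because the 3-4-5 triangle's non-right angles are acute; in general the law of sines can leave an ambiguity between an angle and its supplement.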
If a {\displaystyle \mathbb {a} } and b {\displaystyle \mathbb {b} } are vectors, and θ {\displaystyle \theta } is the angle between a {\displaystyle \mathbb {a} } and b {\displaystyle \mathbb {b} } , then sine and cosine can be defined as: sin ⁡ ( θ ) = | a × b | | a | | b | , cos ⁡ ( θ ) = a ⋅ b | a | | b | . {\displaystyle {\begin{aligned}\sin(\theta )&={\frac {|\mathbb {a} \times \mathbb {b} |}{|a||b|}},\\\cos(\theta )&={\frac {\mathbb {a} \cdot \mathbb {b} }{|a||b|}}.\end{aligned}}} The sine and cosine functions may also be defined in a more general way by using the unit circle , a circle of radius one centered at the origin ( 0 , 0 ) {\displaystyle (0,0)} , formulated as the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} in the Cartesian coordinate system . Let a line through the origin intersect the unit circle, making an angle of θ {\displaystyle \theta } with the positive half of the x {\displaystyle x} - axis. The x {\displaystyle x} - and y {\displaystyle y} - coordinates of this point of intersection are equal to cos ⁡ ( θ ) {\displaystyle \cos(\theta )} and sin ⁡ ( θ ) {\displaystyle \sin(\theta )} , respectively; that is, [ 7 ] sin ⁡ ( θ ) = y , cos ⁡ ( θ ) = x . {\displaystyle \sin(\theta )=y,\qquad \cos(\theta )=x.} This definition is consistent with the right-angled triangle definition of sine and cosine when 0 < θ < π 2 {\textstyle 0<\theta <{\frac {\pi }{2}}} because the length of the hypotenuse of the unit circle is always 1; mathematically speaking, the sine of an angle equals the opposite side of the triangle, which is simply the y {\displaystyle y} - coordinate. A similar argument can be made for the cosine function to show that the cosine of an angle equals the x {\displaystyle x} - coordinate of the intersection point when 0 < θ < π 2 {\textstyle 0<\theta <{\frac {\pi }{2}}} , even under the new definition using the unit circle. [ 8 ] [ 9 ] Using the unit circle definition has the advantage of drawing a graph of sine and cosine functions. 
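The cross- and dot-product formulas above can be sketched in two dimensions, where the cross product's magnitude reduces to the scalar a_x·b_y − a_y·b_x:

```python
import math

def angle_sin_cos(a, b):
    """Sine and cosine of the angle between 2D vectors a and b,
    via |a x b| / (|a||b|) and (a . b) / (|a||b|)."""
    dot = a[0] * b[0] + a[1] * b[1]
    cross = a[0] * b[1] - a[1] * b[0]   # z-component of the 3D cross product
    norm = math.hypot(*a) * math.hypot(*b)
    return abs(cross) / norm, dot / norm

# The angle between (1, 0) and (1, 1) is 45 degrees.
s, c = angle_sin_cos((1.0, 0.0), (1.0, 1.0))
assert math.isclose(s, math.sin(math.pi / 4))
assert math.isclose(c, math.cos(math.pi / 4))
```

Note that taking the absolute value of the cross product means this recovers the unsigned angle between the vectors, which lies in [0, π].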
This can be done by rotating a point counterclockwise along the circumference of a circle, depending on the input θ > 0 {\displaystyle \theta >0} . In a sine function, if the input is θ = π 2 {\textstyle \theta ={\frac {\pi }{2}}} , the point is rotated counterclockwise and stops exactly on the y {\displaystyle y} - axis. If θ = π {\displaystyle \theta =\pi } , the point is halfway around the circle. If θ = 2 π {\displaystyle \theta =2\pi } , the point returns to its starting position. As a result, both sine and cosine functions have range − 1 ≤ y ≤ 1 {\displaystyle -1\leq y\leq 1} . [ 10 ] Extending the angle to the whole real domain, the point rotates counterclockwise continuously. The graph of the cosine function can be drawn similarly, although the rotation starts from the point on the y {\displaystyle y} - axis. In other words, both sine and cosine functions are periodic , meaning that adding the circle's circumference to any angle leaves the function values unchanged. Mathematically, [ 11 ] sin ⁡ ( θ + 2 π ) = sin ⁡ ( θ ) , cos ⁡ ( θ + 2 π ) = cos ⁡ ( θ ) . {\displaystyle \sin(\theta +2\pi )=\sin(\theta ),\qquad \cos(\theta +2\pi )=\cos(\theta ).} A function f {\displaystyle f} is said to be odd if f ( − x ) = − f ( x ) {\displaystyle f(-x)=-f(x)} , and is said to be even if f ( − x ) = f ( x ) {\displaystyle f(-x)=f(x)} . The sine function is odd, whereas the cosine function is even. [ 12 ] The sine and cosine functions have the same shape, differing only by a shift of π 2 {\textstyle {\frac {\pi }{2}}} . This means, [ 13 ] sin ⁡ ( θ ) = cos ⁡ ( π 2 − θ ) , cos ⁡ ( θ ) = sin ⁡ ( π 2 − θ ) . {\displaystyle {\begin{aligned}\sin(\theta )&=\cos \left({\frac {\pi }{2}}-\theta \right),\\\cos(\theta )&=\sin \left({\frac {\pi }{2}}-\theta \right).\end{aligned}}} Zero is the only real fixed point of the sine function; in other words, the only intersection of the sine function and the identity function is sin ⁡ ( 0 ) = 0 {\displaystyle \sin(0)=0} .
The only real fixed point of the cosine function is called the Dottie number . The Dottie number is the unique real root of the equation cos ⁡ ( x ) = x {\displaystyle \cos(x)=x} . The decimal expansion of the Dottie number is approximately 0.739085. [ 14 ] The sine and cosine functions are infinitely differentiable. [ 15 ] The derivative of sine is cosine, and the derivative of cosine is negative sine: [ 16 ] d d x sin ⁡ ( x ) = cos ⁡ ( x ) , d d x cos ⁡ ( x ) = − sin ⁡ ( x ) . {\displaystyle {\frac {d}{dx}}\sin(x)=\cos(x),\qquad {\frac {d}{dx}}\cos(x)=-\sin(x).} Higher-order derivatives cycle through the same four functions; for example, the fourth derivative of sine is sine itself. [ 15 ] These derivatives can be applied in the first derivative test , according to which the monotonicity of a function is determined by the sign of its first derivative, [ 17 ] and in the second derivative test , according to which the concavity of a function is determined by the sign of its second derivative. [ 18 ] The following table shows the concavity and monotonicity of both sine and cosine functions on certain intervals: the positive sign ( + {\displaystyle +} ) denotes that the graph is increasing (going upward) and the negative sign ( − {\displaystyle -} ) that it is decreasing (going downward). [ 19 ] This information can be represented as a Cartesian coordinate system divided into four quadrants. Both sine and cosine functions can be defined by using differential equations.
The pair ( cos ⁡ θ , sin ⁡ θ ) {\displaystyle (\cos \theta ,\sin \theta )} is the solution ( x ( θ ) , y ( θ ) ) {\displaystyle (x(\theta ),y(\theta ))} to the two-dimensional system of differential equations y ′ ( θ ) = x ( θ ) {\displaystyle y'(\theta )=x(\theta )} and x ′ ( θ ) = − y ( θ ) {\displaystyle x'(\theta )=-y(\theta )} with the initial conditions y ( 0 ) = 0 {\displaystyle y(0)=0} and x ( 0 ) = 1 {\displaystyle x(0)=1} . One could interpret the unit circle in the above definitions as the phase space trajectory of this system of differential equations with the given initial conditions. [ citation needed ] The area under their curves over a bounded interval can be obtained using the definite integral. Their antiderivatives are: ∫ sin ⁡ ( x ) d x = − cos ⁡ ( x ) + C ∫ cos ⁡ ( x ) d x = sin ⁡ ( x ) + C , {\displaystyle \int \sin(x)\,dx=-\cos(x)+C\qquad \int \cos(x)\,dx=\sin(x)+C,} where C {\displaystyle C} denotes the constant of integration . [ 20 ] These antiderivatives may be applied to compute mensuration properties of the sine and cosine curves over a given interval. For example, the arc length of the sine curve between 0 {\displaystyle 0} and t {\displaystyle t} is ∫ 0 t 1 + cos 2 ⁡ ( x ) d x = 2 E ⁡ ( t , 1 2 ) , {\displaystyle \int _{0}^{t}\!{\sqrt {1+\cos ^{2}(x)}}\,dx={\sqrt {2}}\operatorname {E} \left(t,{\frac {1}{\sqrt {2}}}\right),} where E ⁡ ( φ , k ) {\displaystyle \operatorname {E} (\varphi ,k)} is the incomplete elliptic integral of the second kind with modulus k {\displaystyle k} . It cannot be expressed using elementary functions .
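Although the arc-length integral has no elementary closed form, it is easy to approximate numerically. The following Python sketch uses a simple midpoint rule; the step count is an arbitrary choice, and any standard quadrature routine would serve equally well.

```python
import math

# Midpoint-rule approximation of the arc length of sin between 0 and t,
# i.e. the integral of sqrt(1 + cos(x)^2) over [0, t].
def sine_arc_length(t, n=100_000):
    h = t / n
    return h * sum(math.sqrt(1 + math.cos((k + 0.5) * h) ** 2)
                   for k in range(n))

# Over one full period the arc length is approximately 7.6404:
print(round(sine_arc_length(2 * math.pi), 4))  # 7.6404
```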
[ 21 ] In the case of a full period, its arc length is L = 4 2 π 3 Γ ( 1 / 4 ) 2 + Γ ( 1 / 4 ) 2 2 π = 2 π ϖ + 2 ϖ ≈ 7.6404 … {\displaystyle L={\frac {4{\sqrt {2\pi ^{3}}}}{\Gamma (1/4)^{2}}}+{\frac {\Gamma (1/4)^{2}}{\sqrt {2\pi }}}={\frac {2\pi }{\varpi }}+2\varpi \approx 7.6404\ldots } where Γ {\displaystyle \Gamma } is the gamma function and ϖ {\displaystyle \varpi } is the lemniscate constant . [ 22 ] The inverse function of sine is arcsine or inverse sine, denoted as "arcsin", "asin", or sin − 1 {\displaystyle \sin ^{-1}} . [ 23 ] The inverse function of cosine is arccosine, denoted as "arccos", "acos", or cos − 1 {\displaystyle \cos ^{-1}} . [ a ] As sine and cosine are not injective , their inverses are not exact inverse functions, but partial inverse functions. For example, sin ⁡ ( 0 ) = 0 {\displaystyle \sin(0)=0} , but also sin ⁡ ( π ) = 0 {\displaystyle \sin(\pi )=0} , sin ⁡ ( 2 π ) = 0 {\displaystyle \sin(2\pi )=0} , and so on. It follows that the arcsine function is multivalued: arcsin ⁡ ( 0 ) = 0 {\displaystyle \arcsin(0)=0} , but also arcsin ⁡ ( 0 ) = π {\displaystyle \arcsin(0)=\pi } , arcsin ⁡ ( 0 ) = 2 π {\displaystyle \arcsin(0)=2\pi } , and so on. When only one value is desired, the function may be restricted to its principal branch . With this restriction, for each x {\displaystyle x} in the domain, the expression arcsin ⁡ ( x ) {\displaystyle \arcsin(x)} will evaluate only to a single value, called its principal value . The standard range of principal values for arcsin is from − π 2 {\textstyle -{\frac {\pi }{2}}} to π 2 {\textstyle {\frac {\pi }{2}}} , and the standard range for arccos is from 0 {\displaystyle 0} to π {\displaystyle \pi } . 
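These principal ranges are what standard library implementations return. A short Python sketch (the sample value x = 0.5 is arbitrary) shows that math.asin yields only the principal value, while the remaining solutions of sin(y) = x follow from symmetry and periodicity:

```python
import math

x = 0.5
principal = math.asin(x)                  # principal value, here pi/6

# Other solutions of sin(y) = x, by symmetry and periodicity:
for y in (principal,
          math.pi - principal,            # the other branch
          principal + 2 * math.pi):       # same branch, one turn later
    assert math.isclose(math.sin(y), x)
```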
[ 24 ] The inverse functions of sine and cosine are defined as: [ citation needed ] θ = arcsin ⁡ ( opposite hypotenuse ) = arccos ⁡ ( adjacent hypotenuse ) , {\displaystyle \theta =\arcsin \left({\frac {\text{opposite}}{\text{hypotenuse}}}\right)=\arccos \left({\frac {\text{adjacent}}{\text{hypotenuse}}}\right),} where for some integer k {\displaystyle k} , sin ⁡ ( y ) = x ⟺ y = arcsin ⁡ ( x ) + 2 π k , or y = π − arcsin ⁡ ( x ) + 2 π k cos ⁡ ( y ) = x ⟺ y = arccos ⁡ ( x ) + 2 π k , or y = − arccos ⁡ ( x ) + 2 π k {\displaystyle {\begin{aligned}\sin(y)=x\iff &y=\arcsin(x)+2\pi k,{\text{ or }}\\&y=\pi -\arcsin(x)+2\pi k\\\cos(y)=x\iff &y=\arccos(x)+2\pi k,{\text{ or }}\\&y=-\arccos(x)+2\pi k\end{aligned}}} By definition, both functions satisfy the equations: [ citation needed ] sin ⁡ ( arcsin ⁡ ( x ) ) = x cos ⁡ ( arccos ⁡ ( x ) ) = x {\displaystyle \sin(\arcsin(x))=x\qquad \cos(\arccos(x))=x} and arcsin ⁡ ( sin ⁡ ( θ ) ) = θ for − π 2 ≤ θ ≤ π 2 arccos ⁡ ( cos ⁡ ( θ ) ) = θ for 0 ≤ θ ≤ π {\displaystyle {\begin{aligned}\arcsin(\sin(\theta ))=\theta \quad &{\text{for}}\quad -{\frac {\pi }{2}}\leq \theta \leq {\frac {\pi }{2}}\\\arccos(\cos(\theta ))=\theta \quad &{\text{for}}\quad 0\leq \theta \leq \pi \end{aligned}}} According to the Pythagorean theorem , the squared hypotenuse is the sum of the two squared legs of a right triangle. Dividing both sides of the formula by the squared hypotenuse yields the Pythagorean trigonometric identity : the sum of the squared sine and the squared cosine equals 1: [ 25 ] [ b ] sin 2 ⁡ ( θ ) + cos 2 ⁡ ( θ ) = 1.
{\displaystyle \sin ^{2}(\theta )+\cos ^{2}(\theta )=1.} Sine and cosine satisfy the following double-angle formulas: [ 26 ] sin ⁡ ( 2 θ ) = 2 sin ⁡ ( θ ) cos ⁡ ( θ ) , cos ⁡ ( 2 θ ) = cos 2 ⁡ ( θ ) − sin 2 ⁡ ( θ ) = 2 cos 2 ⁡ ( θ ) − 1 = 1 − 2 sin 2 ⁡ ( θ ) {\displaystyle {\begin{aligned}\sin(2\theta )&=2\sin(\theta )\cos(\theta ),\\\cos(2\theta )&=\cos ^{2}(\theta )-\sin ^{2}(\theta )\\&=2\cos ^{2}(\theta )-1\\&=1-2\sin ^{2}(\theta )\end{aligned}}} The cosine double-angle formula implies that sin 2 {\displaystyle \sin ^{2}} and cos 2 {\displaystyle \cos ^{2}} are, themselves, shifted and scaled sine waves. Specifically, [ 27 ] sin 2 ⁡ ( θ ) = 1 − cos ⁡ ( 2 θ ) 2 cos 2 ⁡ ( θ ) = 1 + cos ⁡ ( 2 θ ) 2 {\displaystyle \sin ^{2}(\theta )={\frac {1-\cos(2\theta )}{2}}\qquad \cos ^{2}(\theta )={\frac {1+\cos(2\theta )}{2}}} The graph shows both the sine and the sine squared functions, with the sine in blue and the sine squared in red. Both graphs have the same shape but different ranges of values and different periods. Sine squared has only positive values, but twice the number of periods. [ citation needed ] Both sine and cosine functions can be defined by using a Taylor series , a power series involving higher-order derivatives. As mentioned in § Continuity and differentiation , the derivative of sine is cosine and the derivative of cosine is the negative of sine. This means the successive derivatives of sin ⁡ ( x ) {\displaystyle \sin(x)} are cos ⁡ ( x ) {\displaystyle \cos(x)} , − sin ⁡ ( x ) {\displaystyle -\sin(x)} , − cos ⁡ ( x ) {\displaystyle -\cos(x)} , sin ⁡ ( x ) {\displaystyle \sin(x)} , continuing to repeat those four functions. The ( 4 n + k ) {\displaystyle (4n+k)} - th derivative, evaluated at the point 0, is: sin ( 4 n + k ) ⁡ ( 0 ) = { 0 when k = 0 1 when k = 1 0 when k = 2 − 1 when k = 3 {\displaystyle \sin ^{(4n+k)}(0)={\begin{cases}0&{\text{when }}k=0\\1&{\text{when }}k=1\\0&{\text{when }}k=2\\-1&{\text{when }}k=3\end{cases}}} where the superscript represents repeated differentiation.
This implies the following Taylor series expansion at x = 0 {\displaystyle x=0} . One can then use the theory of Taylor series to show that the following identities hold for all real numbers x {\displaystyle x} , where x {\displaystyle x} is the angle in radians, [ 28 ] and more generally for all complex numbers : [ 29 ] sin ⁡ ( x ) = x − x 3 3 ! + x 5 5 ! − x 7 7 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) ! x 2 n + 1 {\displaystyle {\begin{aligned}\sin(x)&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}\end{aligned}}} Taking the derivative of each term gives the Taylor series for cosine: [ 28 ] [ 29 ] cos ⁡ ( x ) = 1 − x 2 2 ! + x 4 4 ! − x 6 6 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! x 2 n {\displaystyle {\begin{aligned}\cos(x)&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}\end{aligned}}} Sines and cosines of multiple angles may appear in a linear combination, resulting in a polynomial known as a trigonometric polynomial . Trigonometric polynomials find wide application in interpolation and, through their extension to periodic functions, in the theory of Fourier series . Let a n {\displaystyle a_{n}} and b n {\displaystyle b_{n}} be any coefficients; then the trigonometric polynomial of degree N {\displaystyle N} , denoted T ( x ) {\displaystyle T(x)} , is defined as: [ 30 ] [ 31 ] T ( x ) = a 0 + ∑ n = 1 N a n cos ⁡ ( n x ) + ∑ n = 1 N b n sin ⁡ ( n x ) . {\displaystyle T(x)=a_{0}+\sum _{n=1}^{N}a_{n}\cos(nx)+\sum _{n=1}^{N}b_{n}\sin(nx).} The trigonometric series is defined analogously to the trigonometric polynomial, as its infinite extension. Let A n {\displaystyle A_{n}} and B n {\displaystyle B_{n}} be any coefficients; then the trigonometric series can be defined as: [ 32 ] 1 2 A 0 + ∑ n = 1 ∞ A n cos ⁡ ( n x ) + B n sin ⁡ ( n x ) .
{\displaystyle {\frac {1}{2}}A_{0}+\sum _{n=1}^{\infty }A_{n}\cos(nx)+B_{n}\sin(nx).} In the case of a Fourier series with a given integrable function f {\displaystyle f} , the coefficients of the trigonometric series are: [ 33 ] A n = 1 π ∫ 0 2 π f ( x ) cos ⁡ ( n x ) d x , B n = 1 π ∫ 0 2 π f ( x ) sin ⁡ ( n x ) d x . {\displaystyle {\begin{aligned}A_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\cos(nx)\,dx,\\B_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\sin(nx)\,dx.\end{aligned}}} Both sine and cosine can be extended further via the complex numbers , numbers composed of a real and an imaginary part. For a real number θ {\displaystyle \theta } , the definition of both sine and cosine functions can be extended to the complex plane in terms of an exponential function as follows: [ 34 ] sin ⁡ ( θ ) = e i θ − e − i θ 2 i , cos ⁡ ( θ ) = e i θ + e − i θ 2 , {\displaystyle {\begin{aligned}\sin(\theta )&={\frac {e^{i\theta }-e^{-i\theta }}{2i}},\\\cos(\theta )&={\frac {e^{i\theta }+e^{-i\theta }}{2}},\end{aligned}}} Alternatively, both functions can be defined in terms of Euler's formula : [ 34 ] e i θ = cos ⁡ ( θ ) + i sin ⁡ ( θ ) , e − i θ = cos ⁡ ( θ ) − i sin ⁡ ( θ ) . {\displaystyle {\begin{aligned}e^{i\theta }&=\cos(\theta )+i\sin(\theta ),\\e^{-i\theta }&=\cos(\theta )-i\sin(\theta ).\end{aligned}}} When plotted on the complex plane , the function e i x {\displaystyle e^{ix}} for real values of x {\displaystyle x} traces out the unit circle . Both sine and cosine functions may be obtained as the imaginary and real parts of e i θ {\displaystyle e^{i\theta }} : [ 35 ] sin ⁡ θ = Im ⁡ ( e i θ ) , cos ⁡ θ = Re ⁡ ( e i θ ) .
{\displaystyle {\begin{aligned}\sin \theta &=\operatorname {Im} (e^{i\theta }),\\\cos \theta &=\operatorname {Re} (e^{i\theta }).\end{aligned}}} When z = x + i y {\displaystyle z=x+iy} for real values x {\displaystyle x} and y {\displaystyle y} , where i = − 1 {\displaystyle i={\sqrt {-1}}} , both sine and cosine functions can be expressed in terms of real sines, cosines, and hyperbolic functions as: [ citation needed ] sin ⁡ z = sin ⁡ x cosh ⁡ y + i cos ⁡ x sinh ⁡ y , cos ⁡ z = cos ⁡ x cosh ⁡ y − i sin ⁡ x sinh ⁡ y . {\displaystyle {\begin{aligned}\sin z&=\sin x\cosh y+i\cos x\sinh y,\\\cos z&=\cos x\cosh y-i\sin x\sinh y.\end{aligned}}} Sine and cosine are used to connect the real and imaginary parts of a complex number with its polar coordinates ( r , θ ) {\displaystyle (r,\theta )} : z = r ( cos ⁡ ( θ ) + i sin ⁡ ( θ ) ) , {\displaystyle z=r(\cos(\theta )+i\sin(\theta )),} and the real and imaginary parts are Re ⁡ ( z ) = r cos ⁡ ( θ ) , Im ⁡ ( z ) = r sin ⁡ ( θ ) , {\displaystyle {\begin{aligned}\operatorname {Re} (z)&=r\cos(\theta ),\\\operatorname {Im} (z)&=r\sin(\theta ),\end{aligned}}} where r {\displaystyle r} and θ {\displaystyle \theta } represent the magnitude and angle of the complex number z {\displaystyle z} . For any real number θ {\displaystyle \theta } , Euler's formula in terms of polar coordinates is stated as z = r e i θ {\textstyle z=re^{i\theta }} . Applying the series definitions of sine and cosine to a complex argument z {\displaystyle z} reproduces the formulas above, in which sinh and cosh are the hyperbolic sine and cosine ; both sine and cosine are entire functions .
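The exponential form and the sinh/cosh decomposition can be checked numerically with Python's cmath module; the sample point z = 1 + 2i below is arbitrary.

```python
import cmath
import math

# Verify sin(z) = (e^{iz} - e^{-iz}) / 2i and
# sin(x + iy) = sin(x)cosh(y) + i cos(x)sinh(y) at a sample point.
z = 1.0 + 2.0j
exp_form = (cmath.exp(1j * z) - cmath.exp(-1j * z)) / 2j
split_form = (math.sin(1.0) * math.cosh(2.0)
              + 1j * math.cos(1.0) * math.sinh(2.0))

assert cmath.isclose(cmath.sin(z), exp_form)
assert cmath.isclose(cmath.sin(z), split_form)
```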
Using the partial fraction expansion technique in complex analysis , one can find that the infinite series ∑ n = − ∞ ∞ ( − 1 ) n z − n = 1 z − 2 z ∑ n = 1 ∞ ( − 1 ) n n 2 − z 2 {\displaystyle \sum _{n=-\infty }^{\infty }{\frac {(-1)^{n}}{z-n}}={\frac {1}{z}}-2z\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{n^{2}-z^{2}}}} both converge and are equal to π sin ⁡ ( π z ) {\textstyle {\frac {\pi }{\sin(\pi z)}}} . Similarly, one can show that π 2 sin 2 ⁡ ( π z ) = ∑ n = − ∞ ∞ 1 ( z − n ) 2 . {\displaystyle {\frac {\pi ^{2}}{\sin ^{2}(\pi z)}}=\sum _{n=-\infty }^{\infty }{\frac {1}{(z-n)^{2}}}.} Using the product expansion technique, one can derive sin ⁡ ( π z ) = π z ∏ n = 1 ∞ ( 1 − z 2 n 2 ) . {\displaystyle \sin(\pi z)=\pi z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}}}\right).} sin( z ) appears in the functional equation for the Gamma function , which in turn appears in the functional equation for the Riemann zeta-function . As a holomorphic function , sin z is a 2D solution of Laplace's equation . The complex sine function is also related to the level curves of pendulums . [ how? ] [ 36 ] [ better source needed ] The word sine is derived, indirectly, from the Sanskrit word jyā 'bow-string' or more specifically its synonym jīvá (both adopted from Ancient Greek χορδή 'string; chord'), due to visual similarity between the arc of a circle with its corresponding chord and a bow with its string (see jyā, koti-jyā and utkrama-jyā ; sine and chord are closely related in a circle of unit diameter, see Ptolemy's Theorem ). This was transliterated in Arabic as jība , which is meaningless in that language and written as jb ( جب ). Since Arabic is written without short vowels, jb was interpreted as the homograph jayb ( جيب ), which means 'bosom', 'pocket', or 'fold'.
[ 37 ] [ 38 ] When the Arabic texts of Al-Battani and al-Khwārizmī were translated into Medieval Latin in the 12th century by Gerard of Cremona , he used the Latin equivalent sinus (which also means 'bay' or 'fold', and more specifically 'the hanging fold of a toga over the breast'). [ 39 ] [ 40 ] [ 41 ] Gerard was probably not the first scholar to use this translation; Robert of Chester appears to have preceded him and there is evidence of even earlier usage. [ 42 ] [ 43 ] The English form sine was introduced in Thomas Fale 's 1593 Horologiographia . [ 44 ] The word cosine derives from an abbreviation of the Latin complementi sinus 'sine of the complementary angle ' as cosinus in Edmund Gunter 's Canon triangulorum (1620), which also includes a similar definition of cotangens . [ 45 ] While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was discovered by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). [ 46 ] The sine and cosine functions are closely related to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period ( Aryabhatiya and Surya Siddhanta ), via translation from Sanskrit to Arabic and then from Arabic to Latin. [ 39 ] [ 47 ] All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines , used in solving triangles . [ 48 ] Al-Khwārizmī (c. 780–850) produced tables of sines, cosines and tangents. [ 49 ] [ 50 ] Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. [ 50 ] In the early 17th-century, the French mathematician Albert Girard published the first use of the abbreviations sin , cos , and tan ; these were further promulgated by Euler (see below). 
The Opus palatinum de triangulis of Georg Joachim Rheticus , a student of Copernicus , was probably the first in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596. In a paper published in 1682, Leibniz proved that sin x is not an algebraic function of x . [ 51 ] Roger Cotes computed the derivative of sine in his Harmonia Mensurarum (1722). [ 52 ] Leonhard Euler 's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, also defining them as infinite series and presenting " Euler's formula ", as well as the near-modern abbreviations sin. , cos. , tang. , cot. , sec. , and cosec. [ 39 ] There is no standard algorithm for calculating sine and cosine. IEEE 754 , the most widely used standard for the specification of reliable floating-point computation, does not address calculating trigonometric functions such as sine. The reason is that no efficient algorithm is known for computing sine and cosine with a specified accuracy, especially for large inputs. [ 53 ] Algorithms for calculating sine may be balanced for such constraints as speed, accuracy, portability, or range of input values accepted. This can lead to different results for different algorithms, especially for special circumstances such as very large inputs, e.g. sin(10^22) . A common programming optimization, used especially in 3D graphics, is to pre-calculate a table of sine values, for example one value per degree, then for values in between pick the closest pre-calculated value, or linearly interpolate between the two closest values to approximate it. This allows results to be looked up from a table rather than being calculated in real time. With modern CPU architectures this method may offer no advantage.
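The lookup-table approach just described can be sketched as follows; the one-entry-per-degree table size is an illustrative choice, and the stated error bound follows from standard linear-interpolation analysis.

```python
import math

# Precomputed table: one sine value per degree, 0..360 inclusive so that
# TABLE[i + 1] is always valid during interpolation.
TABLE = [math.sin(math.radians(d)) for d in range(361)]

def sin_degrees(angle):
    angle %= 360.0
    i = int(angle)                      # lower table index
    frac = angle - i                    # position between the two entries
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

# Maximum interpolation error is about (pi/180)^2 / 8 ≈ 3.8e-5.
print(abs(sin_degrees(30.5) - math.sin(math.radians(30.5))) < 1e-4)  # True
```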
[ citation needed ] The CORDIC algorithm is commonly used in scientific calculators. The sine and cosine functions, along with other trigonometric functions, are widely available across programming languages and platforms. In computing, they are typically abbreviated to sin and cos . Some CPU architectures have a built-in instruction for sine, including the Intel x87 FPUs since the 80387. In programming languages, sin and cos are typically either a built-in function or found within the language's standard math library. For example, the C standard library defines sine functions within math.h : sin(double) , sinf(float) , and sinl(long double) . The parameter of each is a floating-point value, specifying the angle in radians. Each function returns the same data type as it accepts. Many other trigonometric functions are also defined in math.h , such as for cosine, arc sine, and hyperbolic sine (sinh). Similarly, Python defines math.sin(x) and math.cos(x) within the built-in math module. Complex sine and cosine functions are also available within the cmath module, e.g. cmath.sin(z) . CPython 's math functions call the C math library, and use a double-precision floating-point format . Some software libraries provide implementations of sine and cosine using the input angle in half- turns , a half-turn being an angle of 180 degrees or π {\displaystyle \pi } radians. Representing angles in turns or half-turns has accuracy advantages and efficiency advantages in some cases. [ 54 ] [ 55 ] These functions are called sinpi and cospi in MATLAB, [ 54 ] OpenCL, [ 56 ] R, [ 55 ] Julia, [ 57 ] CUDA, [ 58 ] and ARM. [ 59 ] For example, sinpi(x) evaluates to sin ⁡ ( π x ) , {\displaystyle \sin(\pi x),} where x is expressed in half-turns; consequently the final input to sin , π x , can be interpreted in radians.
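A minimal sinpi-style function can be sketched as follows. This is an illustrative simplification, not the implementation used by the libraries cited above: production versions also avoid the rounding error introduced by the multiplication by pi, which this sketch does not.

```python
import math

# The argument is in half-turns, so range reduction modulo 2 is exact
# in floating point (math.fmod is exact for these operands).
def sinpi(x):
    x = math.fmod(x, 2.0)        # lossless reduction to one period
    return math.sin(math.pi * x)

# An exact whole number of half-turns gives exactly zero, with no drift
# even for large arguments:
assert sinpi(1e6) == 0.0
print(sinpi(0.5))  # 1.0
```

By contrast, math.sin(1e6 * math.pi) is not exactly zero, because 1e6 * math.pi already carries the rounding error of the radian representation.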
The accuracy advantage stems from the ability to represent key angles like full-turn, half-turn, and quarter-turn losslessly in binary floating-point or fixed-point. In contrast, representing 2 π {\displaystyle 2\pi } , π {\displaystyle \pi } , and π 2 {\textstyle {\frac {\pi }{2}}} in binary floating-point or binary scaled fixed-point always involves a loss of accuracy since irrational numbers cannot be represented with finitely many binary digits. Turns also have an accuracy and efficiency advantage for computing modulo to one period. Computing modulo 1 turn or modulo 2 half-turns can be done losslessly and efficiently in both floating-point and fixed-point. For example, computing modulo 1 or modulo 2 for a binary point scaled fixed-point value requires only a bit shift or bitwise AND operation. In contrast, computing modulo π 2 {\textstyle {\frac {\pi }{2}}} involves inaccuracies in representing π 2 {\textstyle {\frac {\pi }{2}}} . For applications involving angle sensors, the sensor typically provides angle measurements in a form directly compatible with turns or half-turns. For example, an angle sensor may count from 0 to 4096 over one complete revolution. [ 60 ] If half-turns are used as the unit for angle, then the value provided by the sensor directly and losslessly maps to a fixed-point data type with 11 bits to the right of the binary point. In contrast, if radians are used as the unit for storing the angle, then the inaccuracies and cost of multiplying the raw sensor integer by an approximation to π 2048 {\textstyle {\frac {\pi }{2048}}} would be incurred.
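The sensor example above can be sketched in code. The block assumes a hypothetical 12-bit sensor counting 0–4095 per revolution (two half-turns), so wrapping to one period is a single bitwise AND.

```python
import math

COUNTS_PER_REV = 4096          # hypothetical 12-bit sensor resolution

def wrap(counts):
    # Modulo one revolution is a single bitwise AND in this representation.
    return counts & (COUNTS_PER_REV - 1)

def sensor_sin(counts):
    # 2048 counts per half-turn; this division is exact in binary.
    half_turns = wrap(counts) / (COUNTS_PER_REV // 2)
    return math.sin(math.pi * half_turns)

assert wrap(4096 + 1024) == 1024     # one full revolution wraps losslessly
print(sensor_sin(1024))              # quarter turn
```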
https://en.wikipedia.org/wiki/Sin_(trigonometry)
Sinch AB , formerly CLX Communications , is a communications platform as a service ( CPaaS ) company which powers messaging, voice, and email communications between businesses and their customers. Headquartered in Stockholm , Sweden , the company employs over 4000 people in more than 60 countries. CLX Communications , better known as CLX , was founded in 2008 by Johan Hedberg, Robert Gerstmann, Kristian Männik, Henrik Sandell, Björn Zethraeus and Kjell Arvidsson, [ 1 ] [ 2 ] [ 3 ] as a telecommunications and cloud communications platform as a service ( PaaS ) company. [ 4 ] CLX acquired numerous companies in the industry between 2009 and 2018, [ 5 ] and continued to make acquisitions under the brand name Sinch from 2019. In 2014, CLX acquired Voltari 's mobile messaging business in the US and Canada. [ 6 ] After announcing its intention to proceed with an initial public offering (IPO) in September 2015, [ 7 ] [ 8 ] [ 9 ] CLX completed the IPO of its shares and began trading on Nasdaq Stockholm in October 2015, [ 10 ] with the introduction price set at SEK 59. [ 11 ] The company is listed on Nasdaq Stockholm, on the Mid Cap list under the Technology sector. [ 12 ] In 2016 CLX acquired Mblox for US$117 million. [ 4 ] [ 13 ] [ 14 ] [ 15 ] At the time, Mblox was one of the largest messaging service providers in the world, delivering 7 billion messages in 2015. [ 16 ] CLX acquired German provider Xura Secure Communications GmbH in February 2017, [ citation needed ] and UK-based Dialogue in May 2017. [ 17 ] [ 18 ] Also in 2017, CLX entered into a strategic partnership with Google "to provide the next generation of messaging services to brand marketers using Rich Communications Services (RCS) standard embedded directly into consumers' native messaging apps". [ 19 ] Part of Google's Early Access Program (EAP), CLX enabled enterprises to build with RCS [ 20 ] and was one of the first companies to offer an upgraded messaging experience.
In March 2018, CLX entered into a definitive agreement to acquire Danish company Unwire Communication ApS [ 21 ] for DKK 148 million. [ 22 ] Upon completion, CLX became the largest CPaaS provider in the Nordic region. [ 23 ] The following month, the company announced that it had acquired Seattle-based company Vehicle for US$8 million. [ 24 ] [ 25 ] Symsoft was founded in Stockholm in 1989 to supply charging and messaging services to mobile operators . [ citation needed ] The company was initially founded as a consultancy company but shifted focus in the late 1990s to sell products. [ citation needed ] Formerly listed on Nasdaq Stockholm, it provided mobile communications in the areas of real-time BSS, SMS and other messaging services, with SS7 security for mobile network operators (MNOs), mobile virtual network operators (MVNOs) and mobile virtual network enablers (MVNEs). [ citation needed ] The first deployment of the Symsoft SMS real-time charging was with TeliaMobile, now Telia Company , in Sweden in 1999. [ citation needed ] Building on the Telia case and other early projects relating to mobile messaging and real-time charging, Symsoft launched the Symsoft Service Delivery Platform (formerly known as the Nobill platform). [ citation needed ] In 2009, CLX acquired Symsoft, which then formed the Operator Division of CLX Communications. The division offered software and services to customers in the areas of IoT platforms, real-time BSS, VAS, fraud prevention solutions [ buzzword ] for revenue retention and full-service solutions [ buzzword ] for virtual operators. As of 2016, Symsoft had experience in upgrading and replacing legacy systems in the areas of BSS, value-added services and security. The company serves mobile operators, MVNOs and MVNEs such as America Móvil, Virgin Mobile, [ 26 ] Polkomtel, Saudi Telecom, Simfonics, [ 27 ] Telefónica , Telia Company , Unify Mobile [ 28 ] and 3 in more than 30 countries. Mblox Inc.
was a global company that provided a carrier-grade SaaS-based mobile messaging platform [ buzzword ] to enterprises, including global one-way and two-way SMS , MMS , push notifications , ring-tones, shortcodes and virtual mobile numbers . Mblox Ltd. was founded in 1999 by Andrew Bud in London, England. In 2003, Mblox Ltd. was acquired by a newly restructured and capitalized Mobilesys, Inc. [ 29 ] Founded by David Coelho, the company assumed the name Mblox, Inc. at Andrew Bud's insistence and operated dual headquarters in Sunnyvale, California, and London, England. [ 30 ] This transaction was spearheaded by Q-Advisors, a boutique investment bank founded by Michael Quinn in 2001 and located in Denver, Colorado. The CEO of Mblox, Inc. was William "Chip" Hoffman, who architected and executed the transaction with the help of Gary Cuccio (Chairman) and Jay Emmet (COO). The new company, Mblox Inc., became the world leader in SMS technology use and carrier clearing and billing, growing from $1.75 million in 2002 to over $2 billion in pro forma revenue by April 2004 (19 months). This growth was primarily organic, with geographic expansion into over 30 countries and offices in London, San Francisco, Atlanta, Stockholm, Paris, Munich, Madrid, Singapore, Manila, and Sydney. In 2003, the company acquired carrier-grade message delivery capabilities through the purchase of an SMSC from Comverse. [ 31 ] In mid-2004, William "Chip" Hoffman stepped down and handed the reins to then-Chairman Gary Cuccio, a former senior executive with Pacific Telesis and COO of the wildly successful [ citation needed ] Omnipoint Wireless. Cuccio added the CEO role to his chairmanship. From 2005 to 2006, the company opened offices in Singapore and Australia. [ 32 ] In 2009, the company opened an office in Milan, Italy. [ 33 ] In December 2017, the parent company CLX Communications retired the Mblox and CardBoardFish brands.
[ 39 ] Starting in 2006, with the release of the globally successful "Crazy Frog" ring-tone, Mblox became a target for regulators. Since phone bills from some carriers attribute charges to the company doing the billing, rather than to the business that actually sold and provided the service, Mblox had been accused in Internet forums of enabling a process called cramming , [ 40 ] [ 41 ] or automatically signing mobile customers up for unsolicited services and billing them accordingly. [ 42 ] As of 2008, the company had been fined 22 times for cramming-related offences, totaling hundreds of thousands of dollars. [ 43 ] Since Mblox had never had any involvement, direct or indirect, in creating or promoting such services, its responsibility was to try to prevent its customers from abusing the SMS-based billing services it provided them. Following the sudden spate of problems in 2008–09, in 2010 the UK regulator Phonepay Plus commended Mblox for "a significant investment in new technology, personnel and resources to aid compliance and prevent further harm occurring to consumers from services operating over its platform." [ 44 ] In the UK, Mblox had not been cited in any case since December 2011, at which time the regulator described its actions as "exemplary". [ 45 ] On March 19, 2013, Mblox sold its PSMS business to OpenMarket as it chose to focus exclusively on enterprise-to-consumer mobile messaging. This brought an end to its involvement in mobile payments. [ 46 ] Sinch was initially founded in May 2014 [ 47 ] by Andreas Bernström in Stockholm and San Francisco. Originally the technology behind Rebtel , Sinch was spun out with $12 million in funding, focusing on the mobile-first, app developer market. Sinch launched its Voice and Instant Messaging [ 48 ] products in May 2014, and quickly launched its SMS API [ 49 ] product at the end of 2014. In 2016, CLX also acquired Sinch for a total consideration of SEK 138.9 million on a debt-free basis.
[ 50 ] In February 2019, it was announced that CLX had "launched a new corporate brand and visual identity" [ 51 ] to unify all its business units under the same name. After acquiring Sinch in 2016, CLX leveraged the brand name to encompass the entire company to "more accurately and immediately [depict] its current offerings and mission". [ 52 ] Sinch has effectively inherited the history of CLX Communications and is considered a distinct brand from the one launched in 2014. CLX completed an initial public offering (IPO) and was listed on Nasdaq Stockholm in October 2015. [ 53 ] Since February 2019, the company can be found on the Mid Cap list under the Technology sector, [ 54 ] trading under the ticker 'SINCH'. [ 55 ] [ 56 ] In September 2019, Sinch acquired myElefant , [ 57 ] a Software-as-a-Service platform for rich interactive messaging, for an upfront cash consideration of EUR 18.5 million, with an additional cash earnout of up to EUR 3 million within two years if certain gross profit targets were met. In October 2019, Sinch acquired TWW do Brasil S.A., [ 58 ] one of the largest SMS connectivity providers in Brazil, for an enterprise value of BRL 180.75 million. In March 2020, Sinch entered into a definitive agreement to acquire Chatlayer BV, [ 59 ] a cloud-based chatbot and voicebot platform, for an enterprise value of EUR 6.9 million. Also in March 2020, Sinch acquired Brazilian messaging provider Wavy [ 60 ] for a total cash consideration of BRL 355 million and 1,534,582 new shares in Sinch. This corresponded to an enterprise value of SEK 1,187 million. In May 2020, Sinch acquired SAP Digital Interconnect (SDI) [ 61 ] — a unit within SAP offering services in programmable communications, carrier messaging, and enterprise solutions — for a total cash consideration of EUR 225 million. 
In June 2020, Sinch acquired ACL Mobile Ltd (ACL), a vendor of communications services in India and Southeast Asia, for a total consideration of INR 5,350 million (approximately SEK 655 million). In February 2021, Sinch acquired Inteliquent , an interconnection provider for voice communications, for $1.14 billion. [ 62 ] In June 2021, Sinch acquired MessageMedia [ 63 ] for a total enterprise value of USD 1.3 billion, with a total cash consideration of USD 1.1 billion and 1,128,487 new shares in Sinch. MessageMedia provides mobile messaging solutions for small and medium-sized businesses in the United States, Australia, New Zealand, and Europe. In September 2021, Sinch entered into a definitive agreement to acquire MessengerPeople [ 64 ] (now Sinch Engage), a SaaS-based conversational messaging platform, for a total enterprise value of EUR 48 million, with a total cash consideration of EUR 33.6 million and EUR 14.4 million paid in the form of new shares in Sinch. In September 2021, Sinch acquired cloud-based email delivery platform Pathwire [ 65 ] (now Sinch Mailgun, Sinch Mailjet, and Sinch Email on Acid) for a cash consideration of USD 925 million and 51 million new shares in Sinch, which corresponds to an enterprise value of approximately USD 1.9 billion. Sinch's main competitors are Twilio , Infobip , and Vonage . In May 2022, the Federal Trade Commission (FTC) of the United States sent a cease-and-desist letter to the Inteliquent division of Sinch for hosting illegal robocall campaigns. [ 66 ] In the letter, the FTC cited robocalls from Social Security Administration imposters, AT&T/DirecTV imposters, utility disconnection/rebate/rate-reduction scams, auto-warranty robocalls, and credit-card interest-rate-reduction robocalls. [ 67 ] CEO and founder Andreas Bernström was named one of "The 9 Most Innovative People in VoIP" in 2014. [ 68 ]
https://en.wikipedia.org/wiki/Sinch_AB
Sindell v. Abbott Laboratories , 26 Cal. 3d 588 (1980), was a landmark products liability decision of the Supreme Court of California which pioneered the doctrine of market share liability . The plaintiff in Sindell was a young woman who developed cancer as a result of her mother's use of the drug diethylstilbestrol (DES) during pregnancy . A large number of companies had manufactured DES around the time the plaintiff's mother used the drug. Since the drug was a fungible product and many years had passed, it was impossible for the plaintiff to identify the manufacturer(s) of the particular DES pills her mother had actually consumed. In a 4-3 majority decision by Associate Justice Stanley Mosk , the court decided to impose a new kind of liability , known as market share liability. The doctrine evolved from a line of negligence and strict products liability opinions (most of which had been decided by the Supreme Court of California) that were being adopted as the majority rule in many U.S. states. The essential components of the theory are as follows: If these requirements are met, a rebuttable presumption arises in favor of the plaintiff; if she can prove actual damages, then a court may order each defendant to pay a percentage of such damages equal to its share of the market for the product at the time the product was used. A manufacturer may rebut the presumption and reduce its market share damages to zero by showing that its product could not have possibly injured the plaintiff (for example, by demonstrating that it did not manufacture the product during the time period relevant for that particular plaintiff). Mosk later explained in an oral history interview that the court got the idea for market share liability from the Fordham Law Review comment cited extensively in the Sindell opinion. [ 1 ] Associate Justice Frank K. 
Richardson wrote a dissent in which he accused the majority of judicial activism and argued that the judiciary should defer to the legislature, whose role it was to craft an appropriate solution to the problems presented by the unique nature of DES. Courts after Sindell have refused to apply the market share doctrine to products other than drugs such as DES. The argument centers on the fact that a product must be fungible in order to hold all producers equally liable for any harm. If the product was not fungible, then different production methods or gross negligence in manufacturing might imply that some manufacturers were actually more culpable than others, yet each would only be required to pay up to its share of the market. The time period over which the harm occurred is also an issue: in Skipworth v. Lead Industries Association , 690 A.2d 169 (Pa. 1997), the plaintiffs complained of the use of lead-based paint in their house and brought suit against the Lead Industries Association . The court refused to apply the market share theory because the house had stood for over a century and many manufacturers of lead-based paint had since gone out of business, while others named in the suit had not existed at the time the house was painted. The court also noted that lead-based paint was not a fungible product and that some of the manufacturers may therefore not have been responsible for Skipworth's injuries.
https://en.wikipedia.org/wiki/Sindell_v._Abbott_Laboratories
The sine-Gordon equation is a second-order nonlinear partial differential equation for a function φ {\displaystyle \varphi } dependent on two variables typically denoted x {\displaystyle x} and t {\displaystyle t} , involving the wave operator and the sine of φ {\displaystyle \varphi } . It was originally introduced by Edmond Bour ( 1862 ) in the course of study of surfaces of constant negative curvature as the Gauss–Codazzi equation for surfaces of constant Gaussian curvature −1 in 3-dimensional space . [ 1 ] The equation was rediscovered by Frenkel and Kontorova ( 1939 ) in their study of crystal dislocations known as the Frenkel–Kontorova model . [ 2 ] This equation attracted a lot of attention in the 1970s due to the presence of soliton solutions, [ 3 ] and is an example of an integrable PDE . Among well-known integrable PDEs, the sine-Gordon equation is the only relativistic system due to its Lorentz invariance . There are two equivalent forms of the sine-Gordon equation. In the ( real ) space-time coordinates , denoted ( x , t ) {\displaystyle (x,t)} , the equation reads: [ 4 ] φ t t − φ x x + sin ⁡ φ = 0 , {\displaystyle \varphi _{tt}-\varphi _{xx}+\sin \varphi =0,} where partial derivatives are denoted by subscripts. Passing to the light-cone coordinates ( u , v ), akin to asymptotic coordinates , the equation takes the form [ 5 ] φ u v = sin ⁡ φ . {\displaystyle \varphi _{uv}=\sin \varphi .} This is the original form of the sine-Gordon equation, as it was considered in the 19th century in the course of investigation of surfaces of constant Gaussian curvature K = −1, also called pseudospherical surfaces . Consider an arbitrary pseudospherical surface. Through every point on the surface there are two asymptotic curves . This allows us to construct a distinguished coordinate system for such a surface, in which u = constant, v = constant are the asymptotic lines, and the coordinates are incremented by the arc length on the surface. At every point on the surface, let φ {\displaystyle \varphi } be the angle between the asymptotic lines. 
The first fundamental form of the surface is d s 2 = d u 2 + 2 cos ⁡ φ d u d v + d v 2 , {\displaystyle ds^{2}=du^{2}+2\cos \varphi \,du\,dv+dv^{2},} the second fundamental form is L = N = 0 , M = sin ⁡ φ {\displaystyle L=N=0,M=\sin \varphi } , and the Gauss–Codazzi equation is φ u v = sin ⁡ φ . {\displaystyle \varphi _{uv}=\sin \varphi .} Thus, any pseudospherical surface gives rise to a solution of the sine-Gordon equation, although with some caveats: if the surface is complete, it is necessarily singular due to the Hilbert embedding theorem . In the simplest case, the pseudosphere , also known as the tractroid, corresponds to a static one-soliton, but the tractroid has a singular cusp at its equator. Conversely, one can start with a solution to the sine-Gordon equation to obtain a pseudosphere uniquely up to rigid transformations . There is a theorem, sometimes called the fundamental theorem of surfaces , that if a pair of matrix-valued bilinear forms satisfy the Gauss–Codazzi equations, then they are the first and second fundamental forms of an embedded surface in 3-dimensional space. Solutions to the sine-Gordon equation can be used to construct such matrices by using the forms obtained above. The study of this equation and of the associated transformations of pseudospherical surfaces in the 19th century by Bianchi and Bäcklund led to the discovery of Bäcklund transformations . Another transformation of pseudospherical surfaces is the Lie transform introduced by Sophus Lie in 1879, which corresponds to Lorentz boosts for solutions of the sine-Gordon equation. [ 6 ] There are also some more straightforward ways to construct new solutions, but these do not give new surfaces. Since the sine-Gordon equation is odd, the negative of any solution is another solution. However, this does not give a new surface, as the sign-change comes down to a choice of direction for the normal to the surface. 
New solutions can be found by translating the solution: if φ {\displaystyle \varphi } is a solution, then so is φ + 2 n π {\displaystyle \varphi +2n\pi } for n {\displaystyle n} an integer. Consider a line of pendula, hanging on a straight line, in constant gravity. Connect the bobs of the pendula together by a string in constant tension. Let the angle of the pendulum at location x {\displaystyle x} be φ {\displaystyle \varphi } ; then, schematically, the dynamics of the line of pendula follows Newton's second law: m φ t t ⏟ mass times acceleration = T φ x x ⏟ tension − m g sin ⁡ φ ⏟ gravity {\displaystyle \underbrace {m\varphi _{tt}} _{\text{mass times acceleration}}=\underbrace {T\varphi _{xx}} _{\text{tension}}-\underbrace {mg\sin \varphi } _{\text{gravity}}} and this is the sine-Gordon equation, after scaling time and distance appropriately. Note that this is not exactly correct, since the net force on a pendulum due to the tension is not precisely T φ x x {\displaystyle T\varphi _{xx}} , but more accurately T φ x x ( 1 + φ x 2 ) − 3 / 2 {\displaystyle T\varphi _{xx}(1+\varphi _{x}^{2})^{-3/2}} . However, this does give an intuitive picture for the sine-Gordon equation. One can produce exact mechanical realizations of the sine-Gordon equation by more complex methods. [ 7 ] The name "sine-Gordon equation" is a pun on the well-known Klein–Gordon equation in physics: [ 4 ] φ t t − φ x x + φ = 0. {\displaystyle \varphi _{tt}-\varphi _{xx}+\varphi =0.} The sine-Gordon equation is the Euler–Lagrange equation of the field whose Lagrangian density is given by L SG ( φ ) = 1 2 ( φ t 2 − φ x 2 ) − 1 + cos ⁡ φ . {\displaystyle {\mathcal {L}}_{\text{SG}}(\varphi )={\frac {1}{2}}\left(\varphi _{t}^{2}-\varphi _{x}^{2}\right)-1+\cos \varphi .} Using the Taylor series expansion of the cosine in the Lagrangian, it can be rewritten as the Klein–Gordon Lagrangian plus higher-order terms: L SG ( φ ) = 1 2 ( φ t 2 − φ x 2 ) − φ 2 2 + φ 4 24 − ⋯ . {\displaystyle {\mathcal {L}}_{\text{SG}}(\varphi )={\frac {1}{2}}\left(\varphi _{t}^{2}-\varphi _{x}^{2}\right)-{\frac {\varphi ^{2}}{2}}+{\frac {\varphi ^{4}}{24}}-\cdots .} An interesting feature of the sine-Gordon equation is the existence of soliton and multisoliton solutions. 
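The coupled-pendulum picture above can be simulated directly. The sketch below integrates the scaled equation φ_tt = φ_xx − sin φ on a finite chain with a simple second-order central-difference (leapfrog) step; the grid sizes and the kink-shaped initial twist are illustrative assumptions, not part of the original derivation.

```python
import numpy as np

# Leapfrog integration of phi_tt = phi_xx - sin(phi) on a finite chain
# (units with T/m = g = 1); grid sizes are illustrative.
N, dx, dt = 200, 0.1, 0.05
x = np.arange(N) * dx
phi = 4.0 * np.arctan(np.exp(x - x.mean()))   # kink-shaped initial twist
phi_old = phi.copy()                          # zero initial velocity

for _ in range(100):
    lap = np.zeros_like(phi)                  # fixed (non-updating) endpoints
    lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
    phi, phi_old = 2.0 * phi - phi_old + dt**2 * (lap - np.sin(phi)), phi

# the chain still interpolates between the vacua 0 and 2*pi
print(round(phi[0], 2), round(phi[-1] / np.pi, 2))
```

With these parameters (dt/dx = 0.5, comfortably inside the stability limit of the explicit scheme) the profile stays close to a static kink: one end of the chain hangs near angle 0 and the other near a full turn of 2π.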
The sine-Gordon equation has the following 1- soliton solutions: φ soliton ( x , t ) = 4 arctan ⁡ ( e m γ ( x − v t ) + δ ) , {\displaystyle \varphi _{\text{soliton}}(x,t)=4\arctan \left(e^{m\gamma (x-vt)+\delta }\right),} where γ 2 = 1 1 − v 2 , {\displaystyle \gamma ^{2}={\frac {1}{1-v^{2}}},} and the slightly more general form of the equation is assumed: φ t t − φ x x + m 2 sin ⁡ φ = 0. {\displaystyle \varphi _{tt}-\varphi _{xx}+m^{2}\sin \varphi =0.} The 1-soliton solution for which we have chosen the positive root for γ {\displaystyle \gamma } is called a kink and represents a twist in the variable φ {\displaystyle \varphi } which takes the system from one constant solution φ = 0 {\displaystyle \varphi =0} to an adjacent constant solution φ = 2 π {\displaystyle \varphi =2\pi } . The states φ ≅ 2 π n {\displaystyle \varphi \cong 2\pi n} are known as vacuum states, as they are constant solutions of zero energy. The 1-soliton solution in which we take the negative root for γ {\displaystyle \gamma } is called an antikink . The form of the 1-soliton solutions can be obtained through application of a Bäcklund transform to the trivial (vacuum) solution and integration of the resulting first-order differential equations. The 1-soliton solutions can be visualized with the use of the elastic ribbon sine-Gordon model introduced by Julio Rubinstein in 1970. [ 8 ] Here we take a clockwise ( left-handed ) twist of the elastic ribbon to be a kink with topological charge θ K = − 1 {\displaystyle \theta _{\text{K}}=-1} . The alternative counterclockwise ( right-handed ) twist with topological charge θ AK = + 1 {\displaystyle \theta _{\text{AK}}=+1} will be an antikink. Multi- soliton solutions can be obtained through continued application of the Bäcklund transform to the 1-soliton solution, as prescribed by a Bianchi lattice relating the transformed results. [ 10 ] The 2-soliton solutions of the sine-Gordon equation show some of the characteristic features of the solitons. The traveling sine-Gordon kinks and/or antikinks pass through each other as if perfectly permeable, and the only observed effect is a phase shift . Since the colliding solitons recover their velocity and shape , such an interaction is called an elastic collision . 
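The standard kink solution φ = 4 arctan(exp(mγ(x − vt))) can be checked against the equation numerically. The sketch below substitutes it into φ_tt − φ_xx + sin φ (with m = 1) using second-order finite differences; the velocity, step size, grid, and tolerance are illustrative choices.

```python
import numpy as np

def kink(x, t, v=0.4, m=1.0):
    # kink: phi = 4*arctan(exp(m*gamma*(x - v*t))), gamma = 1/sqrt(1 - v^2)
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    return 4.0 * np.arctan(np.exp(m * gamma * (x - v * t)))

# finite-difference residual of phi_tt - phi_xx + sin(phi)
h = 1e-3
xs = np.linspace(-3.0, 3.0, 41)
t0 = 0.7
phi_tt = (kink(xs, t0 + h) - 2 * kink(xs, t0) + kink(xs, t0 - h)) / h**2
phi_xx = (kink(xs + h, t0) - 2 * kink(xs, t0) + kink(xs - h, t0)) / h**2
residual = phi_tt - phi_xx + np.sin(kink(xs, t0))
print(np.max(np.abs(residual)))   # small, limited by the step size h
```

Because the moving kink is just a Lorentz boost of the static kink, the residual stays at the level of the finite-difference truncation error for any |v| < 1.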
The kink-kink solution is given by φ K / K ( x , t ) = 4 arctan ⁡ ( v sinh ⁡ x 1 − v 2 cosh ⁡ v t 1 − v 2 ) {\displaystyle \varphi _{K/K}(x,t)=4\arctan \left({\frac {v\sinh {\frac {x}{\sqrt {1-v^{2}}}}}{\cosh {\frac {vt}{\sqrt {1-v^{2}}}}}}\right)} while the kink-antikink solution is given by φ K / A K ( x , t ) = 4 arctan ⁡ ( v cosh ⁡ x 1 − v 2 sinh ⁡ v t 1 − v 2 ) {\displaystyle \varphi _{K/AK}(x,t)=4\arctan \left({\frac {v\cosh {\frac {x}{\sqrt {1-v^{2}}}}}{\sinh {\frac {vt}{\sqrt {1-v^{2}}}}}}\right)} Other interesting 2-soliton solutions arise from the possibility of coupled kink-antikink behaviour known as a breather . Three types of breathers are known: the standing breather , the traveling large-amplitude breather , and the traveling small-amplitude breather . [ 11 ] The standing breather solution is given by φ ( x , t ) = 4 arctan ⁡ ( 1 − ω 2 cos ⁡ ( ω t ) ω cosh ⁡ ( 1 − ω 2 x ) ) . {\displaystyle \varphi (x,t)=4\arctan \left({\frac {{\sqrt {1-\omega ^{2}}}\;\cos(\omega t)}{\omega \;\cosh({\sqrt {1-\omega ^{2}}}\;x)}}\right).} 3-soliton collisions between a traveling kink and a standing breather or a traveling antikink and a standing breather result in a phase shift of the standing breather. In the process of collision between a moving kink and a standing breather, the shift of the breather Δ B {\displaystyle \Delta _{\text{B}}} is given by Δ B = 2 artanh ⁡ ( 1 − ω 2 ) ( 1 − v K 2 ) 1 − ω 2 , {\displaystyle \Delta _{\text{B}}={\frac {2\operatorname {artanh} {\sqrt {(1-\omega ^{2})(1-v_{\text{K}}^{2})}}}{\sqrt {1-\omega ^{2}}}},} where v K {\displaystyle v_{\text{K}}} is the velocity of the kink, and ω {\displaystyle \omega } is the breather's frequency. [ 11 ] If the old position of the standing breather is x 0 {\displaystyle x_{0}} , after the collision the new position will be x 0 + Δ B {\displaystyle x_{0}+\Delta _{\text{B}}} . Suppose that φ {\displaystyle \varphi } is a solution of the sine-Gordon equation φ u v = sin ⁡ φ . {\displaystyle \varphi _{uv}=\sin \varphi .} Then the system ψ u = φ u + 2 a sin ⁡ ( ψ + φ 2 ) , ψ v = − φ v + 2 a sin ⁡ ( ψ − φ 2 ) , {\displaystyle \psi _{u}=\varphi _{u}+2a\sin \left({\frac {\psi +\varphi }{2}}\right),\qquad \psi _{v}=-\varphi _{v}+{\frac {2}{a}}\sin \left({\frac {\psi -\varphi }{2}}\right),} where a {\displaystyle a} is an arbitrary parameter, is solvable for a function ψ {\displaystyle \psi } which will also satisfy the sine-Gordon equation. 
This is an example of an auto-Bäcklund transform, as both φ {\displaystyle \varphi } and ψ {\displaystyle \psi } are solutions to the same equation, that is, the sine-Gordon equation. By using a matrix system, it is also possible to find a linear Bäcklund transform for solutions of the sine-Gordon equation. For example, if φ {\displaystyle \varphi } is the trivial solution φ ≡ 0 {\displaystyle \varphi \equiv 0} , then ψ {\displaystyle \psi } is the one-soliton solution with a {\displaystyle a} related to the boost applied to the soliton. The topological charge or winding number of a solution φ {\displaystyle \varphi } is N = 1 2 π ∫ R d φ = 1 2 π [ φ ( x = ∞ , t ) − φ ( x = − ∞ , t ) ] . {\displaystyle N={\frac {1}{2\pi }}\int _{\mathbb {R} }d\varphi ={\frac {1}{2\pi }}\left[\varphi (x=\infty ,t)-\varphi (x=-\infty ,t)\right].} The energy of a solution φ {\displaystyle \varphi } is E = ∫ R d x ( 1 2 ( φ t 2 + φ x 2 ) + m 2 ( 1 − cos ⁡ φ ) ) {\displaystyle E=\int _{\mathbb {R} }dx\left({\frac {1}{2}}(\varphi _{t}^{2}+\varphi _{x}^{2})+m^{2}(1-\cos \varphi )\right)} where a constant energy density has been added so that the potential is non-negative. With it the first two terms in the Taylor expansion of the potential coincide with the potential of a massive scalar field, as mentioned in the naming section; the higher-order terms can be thought of as interactions. The topological charge is conserved if the energy is finite. The topological charge does not determine the solution, even up to Lorentz boosts: both the trivial solution and the soliton-antisoliton pair solution have N = 0 {\displaystyle N=0} . The sine-Gordon equation is equivalent to the curvature of a particular s u ( 2 ) {\displaystyle {\mathfrak {su}}(2)} - connection on R 2 {\displaystyle \mathbb {R} ^{2}} being equal to zero. 
[ 12 ] Explicitly, with coordinates ( u , v ) {\displaystyle (u,v)} on R 2 {\displaystyle \mathbb {R} ^{2}} , the connection components A μ {\displaystyle A_{\mu }} are given by A u = ( i λ i 2 φ u i 2 φ u − i λ ) = 1 2 φ u i σ 1 + λ i σ 3 , {\displaystyle A_{u}={\begin{pmatrix}i\lambda &{\frac {i}{2}}\varphi _{u}\\{\frac {i}{2}}\varphi _{u}&-i\lambda \end{pmatrix}}={\frac {1}{2}}\varphi _{u}i\sigma _{1}+\lambda i\sigma _{3},} A v = ( − i 4 λ cos ⁡ φ − 1 4 λ sin ⁡ φ 1 4 λ sin ⁡ φ i 4 λ cos ⁡ φ ) = − 1 4 λ i sin ⁡ φ σ 2 − 1 4 λ i cos ⁡ φ σ 3 , {\displaystyle A_{v}={\begin{pmatrix}-{\frac {i}{4\lambda }}\cos \varphi &-{\frac {1}{4\lambda }}\sin \varphi \\{\frac {1}{4\lambda }}\sin \varphi &{\frac {i}{4\lambda }}\cos \varphi \end{pmatrix}}=-{\frac {1}{4\lambda }}i\sin \varphi \sigma _{2}-{\frac {1}{4\lambda }}i\cos \varphi \sigma _{3},} where the σ i {\displaystyle \sigma _{i}} are the Pauli matrices . Then the zero-curvature equation ∂ v A u − ∂ u A v + [ A u , A v ] = 0 {\displaystyle \partial _{v}A_{u}-\partial _{u}A_{v}+[A_{u},A_{v}]=0} is equivalent to the sine-Gordon equation φ u v = sin ⁡ φ {\displaystyle \varphi _{uv}=\sin \varphi } . The zero-curvature equation is so named because it corresponds to the curvature being equal to zero when the curvature is defined as F μ ν = [ ∂ μ − A μ , ∂ ν − A ν ] {\displaystyle F_{\mu \nu }=[\partial _{\mu }-A_{\mu },\partial _{\nu }-A_{\nu }]} . The pair of matrices A u {\displaystyle A_{u}} and A v {\displaystyle A_{v}} are also known as a Lax pair for the sine-Gordon equation, in the sense that the zero-curvature equation recovers the PDE (rather than the matrices satisfying Lax's equation). The sinh-Gordon equation is given by [ 13 ] φ t t − φ x x + sinh ⁡ φ = 0. {\displaystyle \varphi _{tt}-\varphi _{xx}+\sinh \varphi =0.} This is the Euler–Lagrange equation of the Lagrangian L = 1 2 ( φ t 2 − φ x 2 ) − cosh ⁡ φ . {\displaystyle {\mathcal {L}}={\frac {1}{2}}\left(\varphi _{t}^{2}-\varphi _{x}^{2}\right)-\cosh \varphi .} Another closely related equation is the elliptic sine-Gordon equation or Euclidean sine-Gordon equation , given by φ x x + φ y y = sin ⁡ φ , {\displaystyle \varphi _{xx}+\varphi _{yy}=\sin \varphi ,} where φ {\displaystyle \varphi } is now a function of the variables x and y . 
This is no longer a soliton equation, but it has many similar properties, as it is related to the sine-Gordon equation by the analytic continuation (or Wick rotation ) y = i t . The elliptic sinh-Gordon equation may be defined in a similar way. Another similar equation comes from the Euler–Lagrange equation for Liouville field theory φ x x − φ t t = 2 e 2 φ . {\displaystyle \varphi _{xx}-\varphi _{tt}=2e^{2\varphi }.} A generalization is given by Toda field theory . [ 14 ] More precisely, Liouville field theory is the Toda field theory for the finite Kac–Moody algebra s l 2 {\displaystyle {\mathfrak {sl}}_{2}} , while sin(h)-Gordon is the Toda field theory for the affine Kac–Moody algebra s l ^ 2 {\displaystyle {\hat {\mathfrak {sl}}}_{2}} . One can also consider the sine-Gordon model on a circle, [ 15 ] on a line segment, or on a half line. [ 16 ] It is possible to find boundary conditions which preserve the integrability of the model. [ 16 ] On a half line the spectrum contains boundary bound states in addition to the solitons and breathers. [ 16 ] In quantum field theory the sine-Gordon model contains a parameter that can be identified with the Planck constant . The particle spectrum consists of a soliton, an anti-soliton and a finite (possibly zero) number of breathers . [ 17 ] [ 18 ] [ 19 ] The number of breathers depends on the value of the parameter. Multiparticle production cancels on mass shell. Semi-classical quantization of the sine-Gordon model was done by Ludwig Faddeev and Vladimir Korepin . [ 20 ] The exact quantum scattering matrix was discovered by Alexander Zamolodchikov . [ 21 ] This model is S-dual to the Thirring model , as discovered by Coleman . [ 22 ] This is sometimes known as the Coleman correspondence and serves as an example of boson-fermion correspondence in the interacting case. 
This article also showed that the constants appearing in the model behave nicely under renormalization : there are three parameters α 0 , β {\displaystyle \alpha _{0},\beta } and γ 0 {\displaystyle \gamma _{0}} . Coleman showed α 0 {\displaystyle \alpha _{0}} receives only a multiplicative correction, γ 0 {\displaystyle \gamma _{0}} receives only an additive correction, and β {\displaystyle \beta } is not renormalized. Further, for a critical, non-zero value β = 4 π {\displaystyle \beta ={\sqrt {4\pi }}} , the theory is in fact dual to a free massive Dirac field theory . The quantum sine-Gordon equation should be modified so the exponentials become vertex operators with V β =: e i β φ : {\displaystyle V_{\beta }=:e^{i\beta \varphi }:} , where the semi-colons denote normal ordering . A possible mass term is included. For different values of the parameter β 2 {\displaystyle \beta ^{2}} , the renormalizability properties of the sine-Gordon theory change. [ 23 ] The identification of these regimes is attributed to Jürg Fröhlich . The finite regime is β 2 < 4 π {\displaystyle \beta ^{2}<4\pi } , where no counterterms are needed to render the theory well-posed. The super-renormalizable regime is 4 π < β 2 < 8 π {\displaystyle 4\pi <\beta ^{2}<8\pi } , where a finite number of counterterms are needed to render the theory well-posed. More counterterms are needed for each threshold n n + 1 8 π {\displaystyle {\frac {n}{n+1}}8\pi } passed. [ 24 ] For β 2 > 8 π {\displaystyle \beta ^{2}>8\pi } , the theory becomes ill-defined (Coleman 1975 ). The boundary values are β 2 = 4 π {\displaystyle \beta ^{2}=4\pi } and β 2 = 8 π {\displaystyle \beta ^{2}=8\pi } , which are respectively the free fermion point, as the theory is dual to a free fermion via the Coleman correspondence, and the self-dual point, where the vertex operators form an affine sl 2 subalgebra , and the theory becomes strictly renormalizable (renormalizable, but not super-renormalizable). 
The stochastic or dynamical sine-Gordon model has been studied by Martin Hairer and Hao Shen [ 25 ] allowing heuristic results from the quantum sine-Gordon theory to be proven in a statistical setting. The equation is ∂ t u = 1 2 Δ u + c sin ⁡ ( β u + θ ) + ξ , {\displaystyle \partial _{t}u={\frac {1}{2}}\Delta u+c\sin(\beta u+\theta )+\xi ,} where c , β , θ {\displaystyle c,\beta ,\theta } are real-valued constants, and ξ {\displaystyle \xi } is space-time white noise . The space dimension is fixed to 2. In the proof of existence of solutions, the thresholds β 2 = n n + 1 8 π {\displaystyle \beta ^{2}={\frac {n}{n+1}}8\pi } again play a role in determining convergence of certain terms. A supersymmetric extension of the sine-Gordon model also exists. [ 26 ] Integrability preserving boundary conditions for this extension can be found as well. [ 26 ] The sine-Gordon model arises as the continuum limit of the Frenkel–Kontorova model which models crystal dislocations. Dynamics in long Josephson junctions are well-described by the sine-Gordon equations, and conversely provide a useful experimental system for studying the sine-Gordon model. [ 27 ] The sine-Gordon model is in the same universality class as the effective action for a Coulomb gas of vortices and anti-vortices in the continuous classical XY model , which is a model of magnetism. [ 28 ] [ 29 ] The Kosterlitz–Thouless transition for vortices can therefore be derived from a renormalization group analysis of the sine-Gordon field theory. [ 30 ] [ 31 ] The sine-Gordon equation also arises as the formal continuum limit of a different model of magnetism, the quantum Heisenberg model , in particular the XXZ model. [ 32 ]
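Returning to the conserved quantities defined for solutions earlier, the topological charge and energy of a static kink (in units with m = 1) can be evaluated numerically. The sketch below uses a simple Riemann sum on an illustrative grid; the exact classical values are N = 1 and E = 8m.

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
phi = 4.0 * np.arctan(np.exp(x))      # static kink, v = 0, m = 1
phi_x = np.gradient(phi, x)

# winding number N = [phi(+inf) - phi(-inf)] / (2*pi)
N = (phi[-1] - phi[0]) / (2.0 * np.pi)
# energy with phi_t = 0 (Riemann sum; the integrand decays exponentially)
E = np.sum(0.5 * phi_x**2 + (1.0 - np.cos(phi))) * dx
print(round(N, 3), round(E, 3))       # 1.0 8.0  (the classical kink mass 8m)
```

The gradient and potential terms each contribute 4 to the energy, a reflection of the Bogomolny-type equality φ_x² = 2(1 − cos φ) satisfied by the static kink.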
https://en.wikipedia.org/wiki/Sine-Gordon_equation
In mathematics , sine and cosine are trigonometric functions of an angle . The sine and cosine of an acute angle are defined in the context of a right triangle : for the specified angle, its sine is the ratio of the length of the side opposite that angle to the length of the longest side of the triangle (the hypotenuse ), and the cosine is the ratio of the length of the adjacent leg to that of the hypotenuse . For an angle θ {\displaystyle \theta } , the sine and cosine functions are denoted as sin ⁡ ( θ ) {\displaystyle \sin(\theta )} and cos ⁡ ( θ ) {\displaystyle \cos(\theta )} . The definitions of sine and cosine have been extended to any real value in terms of the lengths of certain line segments in a unit circle . More modern definitions express the sine and cosine as infinite series , or as the solutions of certain differential equations , allowing their extension to arbitrary positive and negative values and even to complex numbers . The sine and cosine functions are commonly used to model periodic phenomena such as sound and light waves , the position and velocity of harmonic oscillators, sunlight intensity and day length, and average temperature variations throughout the year. They can be traced to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period . To define the sine and cosine of an acute angle α {\displaystyle \alpha } , start with a right triangle that contains an angle of measure α {\displaystyle \alpha } ; in the accompanying figure, angle α {\displaystyle \alpha } in a right triangle A B C {\displaystyle ABC} is the angle of interest. The three sides of the triangle are named as follows: [ 1 ] Once such a triangle is chosen, the sine of the angle is equal to the length of the opposite side divided by the length of the hypotenuse, and the cosine of the angle is equal to the length of the adjacent side divided by the length of the hypotenuse: [ 1 ] sin ⁡ ( α ) = opposite hypotenuse , cos ⁡ ( α ) = adjacent hypotenuse . 
{\displaystyle \sin(\alpha )={\frac {\text{opposite}}{\text{hypotenuse}}},\qquad \cos(\alpha )={\frac {\text{adjacent}}{\text{hypotenuse}}}.} The other trigonometric functions of the angle can be defined similarly; for example, the tangent is the ratio between the opposite and adjacent sides or equivalently the ratio between the sine and cosine functions. The reciprocal of sine is cosecant, which gives the ratio of the hypotenuse length to the length of the opposite side. Similarly, the reciprocal of cosine is secant, which gives the ratio of the hypotenuse length to that of the adjacent side. The cotangent function is the ratio between the adjacent and opposite sides, the reciprocal of the tangent function. These functions can be formulated as: [ 1 ] tan ⁡ ( θ ) = sin ⁡ ( θ ) cos ⁡ ( θ ) = opposite adjacent , cot ⁡ ( θ ) = 1 tan ⁡ ( θ ) = adjacent opposite , csc ⁡ ( θ ) = 1 sin ⁡ ( θ ) = hypotenuse opposite , sec ⁡ ( θ ) = 1 cos ⁡ ( θ ) = hypotenuse adjacent . {\displaystyle {\begin{aligned}\tan(\theta )&={\frac {\sin(\theta )}{\cos(\theta )}}={\frac {\text{opposite}}{\text{adjacent}}},\\\cot(\theta )&={\frac {1}{\tan(\theta )}}={\frac {\text{adjacent}}{\text{opposite}}},\\\csc(\theta )&={\frac {1}{\sin(\theta )}}={\frac {\text{hypotenuse}}{\text{opposite}}},\\\sec(\theta )&={\frac {1}{\cos(\theta )}}={\frac {\textrm {hypotenuse}}{\textrm {adjacent}}}.\end{aligned}}} As stated, the values sin ⁡ ( α ) {\displaystyle \sin(\alpha )} and cos ⁡ ( α ) {\displaystyle \cos(\alpha )} appear to depend on the choice of a right triangle containing an angle of measure α {\displaystyle \alpha } . However, this is not the case, as all such triangles are similar , and so the ratios are the same for each of them. For example, each leg of the 45-45-90 right triangle is 1 unit, and its hypotenuse is 2 {\displaystyle {\sqrt {2}}} ; therefore, sin ⁡ 45 ∘ = cos ⁡ 45 ∘ = 2 2 {\textstyle \sin 45^{\circ }=\cos 45^{\circ }={\frac {\sqrt {2}}{2}}} . 
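These ratio and reciprocal relations can be verified directly; the 3-4-5 right triangle below is an illustrative choice.

```python
import math

# 3-4-5 right triangle: alpha is the angle opposite the side of length 3
opposite, adjacent, hypotenuse = 3.0, 4.0, 5.0
alpha = math.atan2(opposite, adjacent)

assert math.isclose(math.sin(alpha), opposite / hypotenuse)   # sin = 0.6
assert math.isclose(math.cos(alpha), adjacent / hypotenuse)   # cos = 0.8
assert math.isclose(math.tan(alpha), math.sin(alpha) / math.cos(alpha))

# reciprocal functions: cosecant, secant, cotangent
csc = 1 / math.sin(alpha)
sec = 1 / math.cos(alpha)
cot = 1 / math.tan(alpha)
assert math.isclose(csc, hypotenuse / opposite)
assert math.isclose(sec, hypotenuse / adjacent)
assert math.isclose(cot, adjacent / opposite)
print("ratio identities hold for the 3-4-5 triangle")
```

Any similar triangle gives the same ratios, which is exactly why the definitions are independent of the chosen triangle.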
[ 2 ] The following table shows the special values of sine and cosine for inputs in the domain 0 < α < π 2 {\textstyle 0<\alpha <{\frac {\pi }{2}}} , with the angle given in several units such as degrees and radians. Values at angles other than these five can be obtained by using a calculator. [ 3 ] [ 4 ] The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. [ 5 ] Given a triangle A B C {\displaystyle ABC} with sides a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} , and angles opposite those sides α {\displaystyle \alpha } , β {\displaystyle \beta } , and γ {\displaystyle \gamma } , the law states sin ⁡ α a = sin ⁡ β b = sin ⁡ γ c . {\displaystyle {\frac {\sin \alpha }{a}}={\frac {\sin \beta }{b}}={\frac {\sin \gamma }{c}}.} This is equivalent to the equality of the first three expressions below: a sin ⁡ α = b sin ⁡ β = c sin ⁡ γ = 2 R , {\displaystyle {\frac {a}{\sin \alpha }}={\frac {b}{\sin \beta }}={\frac {c}{\sin \gamma }}=2R,} where R {\displaystyle R} is the triangle's circumradius . The law of cosines is useful for computing the length of an unknown side if two other sides and an angle are known. [ 5 ] The law states a 2 + b 2 − 2 a b cos ⁡ ( γ ) = c 2 . {\displaystyle a^{2}+b^{2}-2ab\cos(\gamma )=c^{2}.} In the case where γ = π / 2 {\displaystyle \gamma =\pi /2} , from which cos ⁡ ( γ ) = 0 {\displaystyle \cos(\gamma )=0} , the resulting equation becomes the Pythagorean theorem . [ 6 ] The cross product and dot product are operations on two vectors in Euclidean vector space . The sine and cosine functions can be defined in terms of the cross product and dot product. 
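The two laws can be exercised together: solve a triangle from two angles and one side with the law of sines, then cross-check the result with the law of cosines. The specific angle and side values below are illustrative.

```python
import math

# Solve a triangle given two angles and one side (AAS); numbers are
# illustrative.
alpha, beta = math.radians(50), math.radians(60)
gamma = math.pi - alpha - beta        # the three angles sum to pi
a = 7.0                               # side opposite alpha

# law of sines: a/sin(alpha) = b/sin(beta) = c/sin(gamma) = 2R
two_R = a / math.sin(alpha)
b = two_R * math.sin(beta)
c = two_R * math.sin(gamma)

# cross-check the third side with the law of cosines
c_check = math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))
print(abs(c - c_check) < 1e-9)        # True
```

The agreement is exact up to floating-point rounding, since both laws hold simultaneously in any valid triangle.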
If a {\displaystyle \mathbb {a} } and b {\displaystyle \mathbb {b} } are vectors, and θ {\displaystyle \theta } is the angle between a {\displaystyle \mathbb {a} } and b {\displaystyle \mathbb {b} } , then sine and cosine can be defined as: sin ⁡ ( θ ) = | a × b | | a | | b | , cos ⁡ ( θ ) = a ⋅ b | a | | b | . {\displaystyle {\begin{aligned}\sin(\theta )&={\frac {|\mathbb {a} \times \mathbb {b} |}{|a||b|}},\\\cos(\theta )&={\frac {\mathbb {a} \cdot \mathbb {b} }{|a||b|}}.\end{aligned}}} The sine and cosine functions may also be defined in a more general way by using the unit circle , a circle of radius one centered at the origin ( 0 , 0 ) {\displaystyle (0,0)} , formulated as the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} in the Cartesian coordinate system . Let a line through the origin intersect the unit circle, making an angle of θ {\displaystyle \theta } with the positive half of the x {\displaystyle x} - axis. The x {\displaystyle x} - and y {\displaystyle y} - coordinates of this point of intersection are equal to cos ⁡ ( θ ) {\displaystyle \cos(\theta )} and sin ⁡ ( θ ) {\displaystyle \sin(\theta )} , respectively; that is, [ 7 ] sin ⁡ ( θ ) = y , cos ⁡ ( θ ) = x . {\displaystyle \sin(\theta )=y,\qquad \cos(\theta )=x.} This definition is consistent with the right-angled triangle definition of sine and cosine when 0 < θ < π 2 {\textstyle 0<\theta <{\frac {\pi }{2}}} because the length of the hypotenuse of the unit circle is always 1; mathematically speaking, the sine of an angle equals the opposite side of the triangle, which is simply the y {\displaystyle y} - coordinate. A similar argument can be made for the cosine function to show that the cosine of an angle is the x {\displaystyle x} - coordinate when 0 < θ < π 2 {\textstyle 0<\theta <{\frac {\pi }{2}}} , even under the new definition using the unit circle. [ 8 ] [ 9 ] Using the unit circle definition has the advantage of drawing a graph of sine and cosine functions. 
This can be done by rotating counterclockwise a point along the circumference of a circle, depending on the input θ > 0 {\displaystyle \theta >0} . In a sine function, if the input is θ = π 2 {\textstyle \theta ={\frac {\pi }{2}}} , the point is rotated counterclockwise and stops exactly on the y {\displaystyle y} - axis. If θ = π {\displaystyle \theta =\pi } , the point is halfway around the circle. If θ = 2 π {\displaystyle \theta =2\pi } , the point has returned to its starting position. It follows that both the sine and cosine functions have range − 1 ≤ y ≤ 1 {\displaystyle -1\leq y\leq 1} . [ 10 ] Extending the angle to the whole real domain corresponds to rotating the point counterclockwise continuously. The graph of the cosine function can be obtained similarly, by reading off the x {\displaystyle x} - coordinate of the rotating point instead. In other words, both sine and cosine functions are periodic , meaning that adding a full circumference 2 π {\displaystyle 2\pi } to any angle leaves their values unchanged. Mathematically, [ 11 ] sin ⁡ ( θ + 2 π ) = sin ⁡ ( θ ) , cos ⁡ ( θ + 2 π ) = cos ⁡ ( θ ) . {\displaystyle \sin(\theta +2\pi )=\sin(\theta ),\qquad \cos(\theta +2\pi )=\cos(\theta ).} A function f {\displaystyle f} is said to be odd if f ( − x ) = − f ( x ) {\displaystyle f(-x)=-f(x)} , and is said to be even if f ( − x ) = f ( x ) {\displaystyle f(-x)=f(x)} . The sine function is odd, whereas the cosine function is even. [ 12 ] The sine and cosine functions have the same shape, shifted relative to each other by π 2 {\textstyle {\frac {\pi }{2}}} . This means, [ 13 ] sin ⁡ ( θ ) = cos ⁡ ( π 2 − θ ) , cos ⁡ ( θ ) = sin ⁡ ( π 2 − θ ) . {\displaystyle {\begin{aligned}\sin(\theta )&=\cos \left({\frac {\pi }{2}}-\theta \right),\\\cos(\theta )&=\sin \left({\frac {\pi }{2}}-\theta \right).\end{aligned}}} Zero is the only real fixed point of the sine function; in other words, the only intersection of the sine function and the identity function is sin ⁡ ( 0 ) = 0 {\displaystyle \sin(0)=0} . 
The only real fixed point of the cosine function is called the Dottie number . The Dottie number is the unique real root of the equation cos ⁡ ( x ) = x {\displaystyle \cos(x)=x} . The decimal expansion of the Dottie number is approximately 0.739085. [ 14 ] The sine and cosine functions are infinitely differentiable. [ 15 ] The derivative of sine is cosine, and the derivative of cosine is negative sine: [ 16 ] d d x sin ⁡ ( x ) = cos ⁡ ( x ) , d d x cos ⁡ ( x ) = − sin ⁡ ( x ) . {\displaystyle {\frac {d}{dx}}\sin(x)=\cos(x),\qquad {\frac {d}{dx}}\cos(x)=-\sin(x).} Continuing to higher-order derivatives cycles through the same functions; for example, the fourth derivative of a sine is the sine itself. [ 15 ] These derivatives can be applied in the first derivative test , according to which the monotonicity of a function on an interval is determined by whether its first derivative is nonnegative or nonpositive there. [ 17 ] They can also be applied in the second derivative test , according to which the concavity of a function is determined by the sign of its second derivative. [ 18 ] The following table shows the monotonicity and concavity of both the sine and cosine functions on certain intervals—the positive sign ( + {\displaystyle +} ) denotes that a graph is increasing (going upward) and the negative sign ( − {\displaystyle -} ) that it is decreasing (going downward). [ 19 ] This information can be represented on a Cartesian coordinate system divided into four quadrants. Both sine and cosine functions can be defined by using differential equations. 
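The Dottie number mentioned above can be computed by simple fixed-point iteration, since the derivative of cosine has magnitude |sin(x)| < 1 near the root, making x ↦ cos(x) a contraction there (a minimal sketch):

```python
import math

# Iterating x -> cos(x) converges to the Dottie number, the unique real
# solution of cos(x) = x; |(-sin)(x)| < 1 near the root guarantees
# convergence from any real starting value.
x = 1.0
for _ in range(100):
    x = math.cos(x)
# x is now approximately 0.739085
```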
The pair of ( cos ⁡ θ , sin ⁡ θ ) {\displaystyle (\cos \theta ,\sin \theta )} is the solution ( x ( θ ) , y ( θ ) ) {\displaystyle (x(\theta ),y(\theta ))} to the two-dimensional system of differential equations y ′ ( θ ) = x ( θ ) {\displaystyle y'(\theta )=x(\theta )} and x ′ ( θ ) = − y ( θ ) {\displaystyle x'(\theta )=-y(\theta )} with the initial conditions y ( 0 ) = 0 {\displaystyle y(0)=0} and x ( 0 ) = 1 {\displaystyle x(0)=1} . One could interpret the unit circle in the above definitions as the phase space trajectory of this system of differential equations with the given initial conditions. [ citation needed ] The area under their curves over a bounded interval can be obtained using the integral. Their antiderivatives are: ∫ sin ⁡ ( x ) d x = − cos ⁡ ( x ) + C ∫ cos ⁡ ( x ) d x = sin ⁡ ( x ) + C , {\displaystyle \int \sin(x)\,dx=-\cos(x)+C\qquad \int \cos(x)\,dx=\sin(x)+C,} where C {\displaystyle C} denotes the constant of integration . [ 20 ] These antiderivatives may be used to compute geometric properties of the sine and cosine curves over a given interval. For example, the arc length of the sine curve between 0 {\displaystyle 0} and t {\displaystyle t} is ∫ 0 t 1 + cos 2 ⁡ ( x ) d x = 2 E ⁡ ( t , 1 2 ) , {\displaystyle \int _{0}^{t}\!{\sqrt {1+\cos ^{2}(x)}}\,dx={\sqrt {2}}\operatorname {E} \left(t,{\frac {1}{\sqrt {2}}}\right),} where E ⁡ ( φ , k ) {\displaystyle \operatorname {E} (\varphi ,k)} is the incomplete elliptic integral of the second kind with modulus k {\displaystyle k} . It cannot be expressed using elementary functions . 
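The differential-equation characterization y′ = x, x′ = −y with y(0) = 0, x(0) = 1 can be integrated numerically to recover (cos θ, sin θ); a sketch using the explicit midpoint (RK2) method, with an arbitrary step count:

```python
import math

def integrate(t_end, steps=10000):
    """Integrate y' = x, x' = -y with x(0) = 1, y(0) = 0 by the explicit
    midpoint (RK2) method; the exact solution is (x, y) = (cos t, sin t)."""
    h = t_end / steps
    x, y = 1.0, 0.0
    for _ in range(steps):
        xm = x - 0.5 * h * y   # state at the midpoint of the step
        ym = y + 0.5 * h * x
        x, y = x - h * ym, y + h * xm
    return x, y

x1, y1 = integrate(1.0)  # close to (cos 1, sin 1)
```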
[ 21 ] In the case of a full period, its arc length is L = 4 2 π 3 Γ ( 1 / 4 ) 2 + Γ ( 1 / 4 ) 2 2 π = 2 π ϖ + 2 ϖ ≈ 7.6404 … {\displaystyle L={\frac {4{\sqrt {2\pi ^{3}}}}{\Gamma (1/4)^{2}}}+{\frac {\Gamma (1/4)^{2}}{\sqrt {2\pi }}}={\frac {2\pi }{\varpi }}+2\varpi \approx 7.6404\ldots } where Γ {\displaystyle \Gamma } is the gamma function and ϖ {\displaystyle \varpi } is the lemniscate constant . [ 22 ] The inverse function of sine is arcsine or inverse sine, denoted as "arcsin", "asin", or sin − 1 {\displaystyle \sin ^{-1}} . [ 23 ] The inverse function of cosine is arccosine, denoted as "arccos", "acos", or cos − 1 {\displaystyle \cos ^{-1}} . [ a ] As sine and cosine are not injective , their inverses are not exact inverse functions, but partial inverse functions. For example, sin ⁡ ( 0 ) = 0 {\displaystyle \sin(0)=0} , but also sin ⁡ ( π ) = 0 {\displaystyle \sin(\pi )=0} , sin ⁡ ( 2 π ) = 0 {\displaystyle \sin(2\pi )=0} , and so on. It follows that the arcsine function is multivalued: arcsin ⁡ ( 0 ) = 0 {\displaystyle \arcsin(0)=0} , but also arcsin ⁡ ( 0 ) = π {\displaystyle \arcsin(0)=\pi } , arcsin ⁡ ( 0 ) = 2 π {\displaystyle \arcsin(0)=2\pi } , and so on. When only one value is desired, the function may be restricted to its principal branch . With this restriction, for each x {\displaystyle x} in the domain, the expression arcsin ⁡ ( x ) {\displaystyle \arcsin(x)} will evaluate only to a single value, called its principal value . The standard range of principal values for arcsin is from − π 2 {\textstyle -{\frac {\pi }{2}}} to π 2 {\textstyle {\frac {\pi }{2}}} , and the standard range for arccos is from 0 {\displaystyle 0} to π {\displaystyle \pi } . 
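Although the arc-length integral above has no elementary closed form, the full-period value ≈ 7.6404 can be checked with elementary numerical integration (composite midpoint rule; the step count is arbitrary):

```python
import math

def sine_arc_length(t, n=10000):
    """Arc length of y = sin(x) from 0 to t: the integral of
    sqrt(1 + cos(x)**2), approximated by the composite midpoint rule."""
    h = t / n
    return h * sum(math.sqrt(1.0 + math.cos((k + 0.5) * h) ** 2)
                   for k in range(n))

L = sine_arc_length(2 * math.pi)  # close to 7.6404
```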
[ 24 ] The inverse functions of sine and cosine are defined as: [ citation needed ] θ = arcsin ⁡ ( opposite hypotenuse ) = arccos ⁡ ( adjacent hypotenuse ) , {\displaystyle \theta =\arcsin \left({\frac {\text{opposite}}{\text{hypotenuse}}}\right)=\arccos \left({\frac {\text{adjacent}}{\text{hypotenuse}}}\right),} where for some integer k {\displaystyle k} , sin ⁡ ( y ) = x ⟺ y = arcsin ⁡ ( x ) + 2 π k , or y = π − arcsin ⁡ ( x ) + 2 π k cos ⁡ ( y ) = x ⟺ y = arccos ⁡ ( x ) + 2 π k , or y = − arccos ⁡ ( x ) + 2 π k {\displaystyle {\begin{aligned}\sin(y)=x\iff &y=\arcsin(x)+2\pi k,{\text{ or }}\\&y=\pi -\arcsin(x)+2\pi k\\\cos(y)=x\iff &y=\arccos(x)+2\pi k,{\text{ or }}\\&y=-\arccos(x)+2\pi k\end{aligned}}} By definition, both functions satisfy the equations: [ citation needed ] sin ⁡ ( arcsin ⁡ ( x ) ) = x cos ⁡ ( arccos ⁡ ( x ) ) = x {\displaystyle \sin(\arcsin(x))=x\qquad \cos(\arccos(x))=x} and arcsin ⁡ ( sin ⁡ ( θ ) ) = θ for − π 2 ≤ θ ≤ π 2 arccos ⁡ ( cos ⁡ ( θ ) ) = θ for 0 ≤ θ ≤ π {\displaystyle {\begin{aligned}\arcsin(\sin(\theta ))=\theta \quad &{\text{for}}\quad -{\frac {\pi }{2}}\leq \theta \leq {\frac {\pi }{2}}\\\arccos(\cos(\theta ))=\theta \quad &{\text{for}}\quad 0\leq \theta \leq \pi \end{aligned}}} According to the Pythagorean theorem , the squared hypotenuse is the sum of the two squared legs of a right triangle. Dividing both sides by the squared hypotenuse yields the Pythagorean trigonometric identity : the sum of the squared sine and the squared cosine equals 1: [ 25 ] [ b ] sin 2 ⁡ ( θ ) + cos 2 ⁡ ( θ ) = 1. 
{\displaystyle \sin ^{2}(\theta )+\cos ^{2}(\theta )=1.} Sine and cosine satisfy the following double-angle formulas: [ 26 ] sin ⁡ ( 2 θ ) = 2 sin ⁡ ( θ ) cos ⁡ ( θ ) , cos ⁡ ( 2 θ ) = cos 2 ⁡ ( θ ) − sin 2 ⁡ ( θ ) = 2 cos 2 ⁡ ( θ ) − 1 = 1 − 2 sin 2 ⁡ ( θ ) {\displaystyle {\begin{aligned}\sin(2\theta )&=2\sin(\theta )\cos(\theta ),\\\cos(2\theta )&=\cos ^{2}(\theta )-\sin ^{2}(\theta )\\&=2\cos ^{2}(\theta )-1\\&=1-2\sin ^{2}(\theta )\end{aligned}}} The cosine double angle formula implies that sin 2 and cos 2 are, themselves, shifted and scaled sine waves. Specifically, [ 27 ] sin 2 ⁡ ( θ ) = 1 − cos ⁡ ( 2 θ ) 2 cos 2 ⁡ ( θ ) = 1 + cos ⁡ ( 2 θ ) 2 {\displaystyle \sin ^{2}(\theta )={\frac {1-\cos(2\theta )}{2}}\qquad \cos ^{2}(\theta )={\frac {1+\cos(2\theta )}{2}}} The graph shows both the sine and sine squared functions, with the sine in blue and the sine squared in red. Both graphs have the same shape but with different ranges of values and different periods. Sine squared takes only nonnegative values, but has twice the number of periods. [ citation needed ] Both sine and cosine functions can be defined by using a Taylor series , a power series involving the higher-order derivatives. As mentioned in § Continuity and differentiation , the derivative of sine is cosine and the derivative of cosine is the negative of sine. This means that the successive derivatives of sin ⁡ ( x ) {\displaystyle \sin(x)} are cos ⁡ ( x ) {\displaystyle \cos(x)} , − sin ⁡ ( x ) {\displaystyle -\sin(x)} , − cos ⁡ ( x ) {\displaystyle -\cos(x)} , sin ⁡ ( x ) {\displaystyle \sin(x)} , continuing to repeat those four functions. The ( 4 n + k ) {\displaystyle (4n+k)} - th derivative, evaluated at the point 0, is: sin ( 4 n + k ) ⁡ ( 0 ) = { 0 when k = 0 1 when k = 1 0 when k = 2 − 1 when k = 3 {\displaystyle \sin ^{(4n+k)}(0)={\begin{cases}0&{\text{when }}k=0\\1&{\text{when }}k=1\\0&{\text{when }}k=2\\-1&{\text{when }}k=3\end{cases}}} where the superscript represents repeated differentiation. 
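The derivative pattern above means only the odd-order coefficients survive, alternating in sign, which gives the Taylor terms (−1)^n x^(2n+1)/(2n+1)!. A small sketch (the function name `sin_taylor` is illustrative) sums the partial series directly:

```python
import math

def sin_taylor(x, terms=10):
    """Partial sum of sin's Taylor series at 0: the (4n+1)-th derivatives
    are +1 and the (4n+3)-th are -1, all even-order ones are 0, giving
    terms (-1)**n * x**(2n+1) / (2n+1)!."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

approx = sin_taylor(1.0)  # agrees with math.sin(1.0) to many digits
```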
This implies the following Taylor series expansion at x = 0 {\displaystyle x=0} . One can then use the theory of Taylor series to show that the following identities hold for all real numbers x {\displaystyle x} —where x {\displaystyle x} is the angle in radians. [ 28 ] More generally, for all complex numbers : [ 29 ] sin ⁡ ( x ) = x − x 3 3 ! + x 5 5 ! − x 7 7 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) ! x 2 n + 1 {\displaystyle {\begin{aligned}\sin(x)&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}\end{aligned}}} Taking the derivative of each term gives the Taylor series for cosine: [ 28 ] [ 29 ] cos ⁡ ( x ) = 1 − x 2 2 ! + x 4 4 ! − x 6 6 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! x 2 n {\displaystyle {\begin{aligned}\cos(x)&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}\end{aligned}}} Sines and cosines of multiple angles may appear in a linear combination, resulting in a polynomial. Such a polynomial is known as a trigonometric polynomial . Trigonometric polynomials have wide applications, for instance in interpolation and in the extension to periodic functions known as the Fourier series . Let a n {\displaystyle a_{n}} and b n {\displaystyle b_{n}} be any coefficients, then the trigonometric polynomial of degree N {\displaystyle N} —denoted as T ( x ) {\displaystyle T(x)} —is defined as: [ 30 ] [ 31 ] T ( x ) = a 0 + ∑ n = 1 N a n cos ⁡ ( n x ) + ∑ n = 1 N b n sin ⁡ ( n x ) . {\displaystyle T(x)=a_{0}+\sum _{n=1}^{N}a_{n}\cos(nx)+\sum _{n=1}^{N}b_{n}\sin(nx).} The trigonometric series can be defined analogously to the trigonometric polynomial as its infinite version. Let A n {\displaystyle A_{n}} and B n {\displaystyle B_{n}} be any coefficients, then the trigonometric series can be defined as: [ 32 ] 1 2 A 0 + ∑ n = 1 ∞ A n cos ⁡ ( n x ) + B n sin ⁡ ( n x ) . 
{\displaystyle {\frac {1}{2}}A_{0}+\sum _{n=1}^{\infty }A_{n}\cos(nx)+B_{n}\sin(nx).} In the case of a Fourier series with a given integrable function f {\displaystyle f} , the coefficients of a trigonometric series are: [ 33 ] A n = 1 π ∫ 0 2 π f ( x ) cos ⁡ ( n x ) d x , B n = 1 π ∫ 0 2 π f ( x ) sin ⁡ ( n x ) d x . {\displaystyle {\begin{aligned}A_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\cos(nx)\,dx,\\B_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\sin(nx)\,dx.\end{aligned}}} Both sine and cosine can be extended further via complex number , a set of numbers composed of both real and imaginary numbers . For real number θ {\displaystyle \theta } , the definition of both sine and cosine functions can be extended in a complex plane in terms of an exponential function as follows: [ 34 ] sin ⁡ ( θ ) = e i θ − e − i θ 2 i , cos ⁡ ( θ ) = e i θ + e − i θ 2 , {\displaystyle {\begin{aligned}\sin(\theta )&={\frac {e^{i\theta }-e^{-i\theta }}{2i}},\\\cos(\theta )&={\frac {e^{i\theta }+e^{-i\theta }}{2}},\end{aligned}}} Alternatively, both functions can be defined in terms of Euler's formula : [ 34 ] e i θ = cos ⁡ ( θ ) + i sin ⁡ ( θ ) , e − i θ = cos ⁡ ( θ ) − i sin ⁡ ( θ ) . {\displaystyle {\begin{aligned}e^{i\theta }&=\cos(\theta )+i\sin(\theta ),\\e^{-i\theta }&=\cos(\theta )-i\sin(\theta ).\end{aligned}}} When plotted on the complex plane , the function e i x {\displaystyle e^{ix}} for real values of x {\displaystyle x} traces out the unit circle in the complex plane. Both sine and cosine functions may be simplified to the imaginary and real parts of e i θ {\displaystyle e^{i\theta }} as: [ 35 ] sin ⁡ θ = Im ⁡ ( e i θ ) , cos ⁡ θ = Re ⁡ ( e i θ ) . 
{\displaystyle {\begin{aligned}\sin \theta &=\operatorname {Im} (e^{i\theta }),\\\cos \theta &=\operatorname {Re} (e^{i\theta }).\end{aligned}}} When z = x + i y {\displaystyle z=x+iy} for real values x {\displaystyle x} and y {\displaystyle y} , where i = − 1 {\displaystyle i={\sqrt {-1}}} , both sine and cosine functions can be expressed in terms of real sines, cosines, and hyperbolic functions as: [ citation needed ] sin ⁡ z = sin ⁡ x cosh ⁡ y + i cos ⁡ x sinh ⁡ y , cos ⁡ z = cos ⁡ x cosh ⁡ y − i sin ⁡ x sinh ⁡ y . {\displaystyle {\begin{aligned}\sin z&=\sin x\cosh y+i\cos x\sinh y,\\\cos z&=\cos x\cosh y-i\sin x\sinh y.\end{aligned}}} Sine and cosine are used to connect the real and imaginary parts of a complex number with its polar coordinates ( r , θ ) {\displaystyle (r,\theta )} : z = r ( cos ⁡ ( θ ) + i sin ⁡ ( θ ) ) , {\displaystyle z=r(\cos(\theta )+i\sin(\theta )),} and the real and imaginary parts are Re ⁡ ( z ) = r cos ⁡ ( θ ) , Im ⁡ ( z ) = r sin ⁡ ( θ ) , {\displaystyle {\begin{aligned}\operatorname {Re} (z)&=r\cos(\theta ),\\\operatorname {Im} (z)&=r\sin(\theta ),\end{aligned}}} where r {\displaystyle r} and θ {\displaystyle \theta } represent the magnitude and angle of the complex number z {\displaystyle z} . For any real number θ {\displaystyle \theta } , Euler's formula in terms of polar coordinates is stated as z = r e i θ {\textstyle z=re^{i\theta }} . Applying the series definition of the sine and cosine to a purely imaginary argument gives sin ⁡ ( i y ) = i sinh ⁡ ( y ) {\displaystyle \sin(iy)=i\sinh(y)} and cos ⁡ ( i y ) = cosh ⁡ ( y ) {\displaystyle \cos(iy)=\cosh(y)} , where sinh and cosh are the hyperbolic sine and cosine . The complex sine and cosine are entire functions . 
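As a quick numerical check of the exponential definitions and of the identity for sin z in terms of hyperbolic functions above, using Python's `cmath` (tolerances are illustrative):

```python
import cmath
import math

theta = 0.3
# sin and cos recovered from the complex exponential definitions
s = (cmath.exp(1j * theta) - cmath.exp(-1j * theta)) / 2j
c = (cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2

# sin z expressed via real sines, cosines, and hyperbolic functions
z = 1.0 + 2.0j
expected = complex(math.sin(z.real) * math.cosh(z.imag),
                   math.cos(z.real) * math.sinh(z.imag))
# cmath.sin(z) agrees with `expected` up to rounding
```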
It is also sometimes useful to express the complex sine and cosine functions in terms of the real and imaginary parts of their argument z = x + i y {\displaystyle z=x+iy} , as in the identities sin ⁡ z = sin ⁡ x cosh ⁡ y + i cos ⁡ x sinh ⁡ y {\displaystyle \sin z=\sin x\cosh y+i\cos x\sinh y} and cos ⁡ z = cos ⁡ x cosh ⁡ y − i sin ⁡ x sinh ⁡ y {\displaystyle \cos z=\cos x\cosh y-i\sin x\sinh y} given above. Using the partial fraction expansion technique in complex analysis , one can find that the infinite series ∑ n = − ∞ ∞ ( − 1 ) n z − n = 1 z − 2 z ∑ n = 1 ∞ ( − 1 ) n n 2 − z 2 {\displaystyle \sum _{n=-\infty }^{\infty }{\frac {(-1)^{n}}{z-n}}={\frac {1}{z}}-2z\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{n^{2}-z^{2}}}} both converge and are equal to π sin ⁡ ( π z ) {\textstyle {\frac {\pi }{\sin(\pi z)}}} . Similarly, one can show that π 2 sin 2 ⁡ ( π z ) = ∑ n = − ∞ ∞ 1 ( z − n ) 2 . {\displaystyle {\frac {\pi ^{2}}{\sin ^{2}(\pi z)}}=\sum _{n=-\infty }^{\infty }{\frac {1}{(z-n)^{2}}}.} Using the product expansion technique, one can derive sin ⁡ ( π z ) = π z ∏ n = 1 ∞ ( 1 − z 2 n 2 ) . {\displaystyle \sin(\pi z)=\pi z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}}}\right).} sin( z ) is found in the functional equation for the Gamma function , Γ ( z ) Γ ( 1 − z ) = π sin ⁡ ( π z ) {\textstyle \Gamma (z)\Gamma (1-z)={\frac {\pi }{\sin(\pi z)}}} , which in turn is found in the functional equation for the Riemann zeta-function . As a holomorphic function , sin z is a 2D solution of Laplace's equation Δ u ( x , y ) = 0 {\displaystyle \Delta u(x,y)=0} . The complex sine function is also related to the level curves of pendulums . [ how? ] [ 36 ] [ better source needed ] The word sine is derived, indirectly, from the Sanskrit word jyā 'bow-string' or more specifically its synonym jīvá (both adopted from Ancient Greek χορδή 'string; chord'), due to visual similarity between the arc of a circle with its corresponding chord and a bow with its string (see jyā, koti-jyā and utkrama-jyā ; sine and chord are closely related in a circle of unit diameter, see Ptolemy’s Theorem ). This was transliterated in Arabic as jība , which is meaningless in that language and written as jb ( جب ). Since Arabic is written without short vowels, jb was interpreted as the homograph jayb ( جيب ), which means 'bosom', 'pocket', or 'fold'. 
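The product expansion of sin(πz) above converges slowly but can be checked numerically; a sketch (the truncation length is arbitrary):

```python
import math

def sin_product(z, n_terms=100_000):
    """Truncation of sin(pi*z) = pi*z * prod_{n>=1} (1 - z**2 / n**2).
    The omitted tail contributes a relative error of roughly
    z**2 / n_terms, so convergence is slow."""
    p = math.pi * z
    for n in range(1, n_terms + 1):
        p *= 1.0 - (z * z) / (n * n)
    return p
```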
[ 37 ] [ 38 ] When the Arabic texts of Al-Battani and al-Khwārizmī were translated into Medieval Latin in the 12th century by Gerard of Cremona , he used the Latin equivalent sinus (which also means 'bay' or 'fold', and more specifically 'the hanging fold of a toga over the breast'). [ 39 ] [ 40 ] [ 41 ] Gerard was probably not the first scholar to use this translation; Robert of Chester appears to have preceded him and there is evidence of even earlier usage. [ 42 ] [ 43 ] The English form sine was introduced in Thomas Fale 's 1593 Horologiographia . [ 44 ] The word cosine derives from an abbreviation of the Latin complementi sinus 'sine of the complementary angle ' as cosinus in Edmund Gunter 's Canon triangulorum (1620), which also includes a similar definition of cotangens . [ 45 ] While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was discovered by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). [ 46 ] The sine and cosine functions are closely related to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period ( Aryabhatiya and Surya Siddhanta ), via translation from Sanskrit to Arabic and then from Arabic to Latin. [ 39 ] [ 47 ] All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines , used in solving triangles . [ 48 ] Al-Khwārizmī (c. 780–850) produced tables of sines, cosines and tangents. [ 49 ] [ 50 ] Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. [ 50 ] In the early 17th century, the French mathematician Albert Girard published the first use of the abbreviations sin , cos , and tan ; these were further promulgated by Euler (see below). 
The Opus palatinum de triangulis of Georg Joachim Rheticus , a student of Copernicus , was probably the first in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596. In a paper published in 1682, Leibniz proved that sin x is not an algebraic function of x . [ 51 ] Roger Cotes computed the derivative of sine in his Harmonia Mensurarum (1722). [ 52 ] Leonhard Euler 's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, also defining them as infinite series and presenting " Euler's formula ", as well as the near-modern abbreviations sin. , cos. , tang. , cot. , sec. , and cosec. [ 39 ] There is no standard algorithm for calculating sine and cosine. IEEE 754 , the most widely used standard for the specification of reliable floating-point computation, does not address calculating trigonometric functions such as sine. The reason is that no efficient algorithm is known for computing sine and cosine with a specified accuracy, especially for large inputs. [ 53 ] Algorithms for calculating sine may be balanced for such constraints as speed, accuracy, portability, or range of input values accepted. This can lead to different results for different algorithms, especially for special circumstances such as very large inputs, e.g. sin(10 22 ) . A common programming optimization, used especially in 3D graphics, is to pre-calculate a table of sine values, for example one value per degree, then for values in-between pick the closest pre-calculated value, or linearly interpolate between the 2 closest values to approximate it. This allows results to be looked up from a table rather than being calculated in real time. With modern CPU architectures this method may offer no advantage. 
[ citation needed ] The CORDIC algorithm is commonly used in scientific calculators. The sine and cosine functions, along with other trigonometric functions, are widely available across programming languages and platforms. In computing, they are typically abbreviated to sin and cos . Some CPU architectures have a built-in instruction for sine, including the Intel x87 FPUs since the 80387. In programming languages, sin and cos are typically either a built-in function or found within the language's standard math library. For example, the C standard library defines sine functions within math.h : sin( double ) , sinf( float ) , and sinl( long double ) . The parameter of each is a floating point value, specifying the angle in radians. Each function returns the same data type as it accepts. Many other trigonometric functions are also defined in math.h , such as for cosine, arc sine, and hyperbolic sine (sinh). Similarly, Python defines math.sin(x) and math.cos(x) within the built-in math module. Complex sine and cosine functions are also available within the cmath module, e.g. cmath.sin(z) . CPython 's math functions call the C math library, and use a double-precision floating-point format . Some software libraries provide implementations of sine and cosine using the input angle in half- turns , a half-turn being an angle of 180 degrees or π {\displaystyle \pi } radians. Representing angles in turns or half-turns has accuracy advantages and efficiency advantages in some cases. [ 54 ] [ 55 ] These functions are called sinpi and cospi in MATLAB, [ 54 ] OpenCL, [ 56 ] R, [ 55 ] Julia, [ 57 ] CUDA, [ 58 ] and ARM. [ 59 ] For example, sinpi(x) evaluates to sin ⁡ ( π x ) , {\displaystyle \sin(\pi x),} where x is expressed in half-turns; consequently, the final input to sin , π x , can be interpreted in radians. 
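A minimal sketch of why a dedicated sinpi can be more accurate than calling sin(π·x) directly: reduction modulo 2 half-turns is exact in binary floating point. This is an illustration only, not how production libraries implement sinpi:

```python
import math

def sinpi(x):
    """sin(pi * x) with the argument x given in half-turns.
    math.fmod by 2.0 (a power of two) is exact in binary floating
    point, so integer inputs reduce exactly and yield exactly 0.0."""
    r = math.fmod(x, 2.0)
    if r in (0.0, 1.0, -1.0):
        return 0.0  # sin of an integer multiple of pi is exactly zero
    return math.sin(math.pi * r)
```

For a large even integer input such as 1e16, sinpi returns exactly 0.0, whereas math.sin(math.pi * 1e16) generally does not, because math.pi is already a rounded approximation.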
The accuracy advantage stems from the ability to perfectly represent key angles like full-turn, half-turn, and quarter-turn losslessly in binary floating-point or fixed-point. In contrast, representing 2 π {\displaystyle 2\pi } , π {\displaystyle \pi } , and π 2 {\textstyle {\frac {\pi }{2}}} in binary floating-point or binary scaled fixed-point always involves a loss of accuracy since irrational numbers cannot be represented with finitely many binary digits. Turns also have an accuracy advantage and efficiency advantage for computing modulo to one period. Computing modulo 1 turn or modulo 2 half-turns can be losslessly and efficiently computed in both floating-point and fixed-point. For example, computing modulo 1 or modulo 2 for a binary point scaled fixed-point value requires only a bit shift or bitwise AND operation. In contrast, computing modulo π 2 {\textstyle {\frac {\pi }{2}}} involves inaccuracies in representing π 2 {\textstyle {\frac {\pi }{2}}} . For applications involving angle sensors, the sensor typically provides angle measurements in a form directly compatible with turns or half-turns. For example, an angle sensor may count from 0 to 4096 over one complete revolution. [ 60 ] If half-turns are used as the unit for angle, then the value provided by the sensor directly and losslessly maps to a fixed-point data type with 11 bits to the right of the binary point. In contrast, if radians are used as the unit for storing the angle, then the inaccuracies and cost of multiplying the raw sensor integer by an approximation to π 2048 {\textstyle {\frac {\pi }{2048}}} would be incurred.
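The sensor scenario above can be sketched in code; the constants model the hypothetical 12-bit sensor described in the text (2048 counts per half-turn), not any particular device:

```python
import math

FRAC_BITS = 11                      # 2**11 = 2048 counts per half-turn
MASK = (1 << (FRAC_BITS + 1)) - 1   # reduce modulo one turn (2 half-turns)

def sensor_to_sin(count):
    """Map a raw sensor count (fixed point in half-turns with 11
    fractional bits) to the sine of the angle; the periodic wrap is a
    single bitwise AND, and the scaling to float is exact."""
    half_turns = (count & MASK) / (1 << FRAC_BITS)
    return math.sin(math.pi * half_turns)
```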
https://en.wikipedia.org/wiki/Sine_and_cosine
In mathematics , the trigonometric functions (also called circular functions , angle functions or goniometric functions ) [ 1 ] are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in all sciences that are related to geometry , such as navigation , solid mechanics , celestial mechanics , geodesy , and many others. They are among the simplest periodic functions , and as such are also widely used for studying periodic phenomena through Fourier analysis . The trigonometric functions most widely used in modern mathematics are the sine , the cosine , and the tangent functions. Their reciprocals are respectively the cosecant , the secant , and the cotangent functions, which are less used. Each of these six trigonometric functions has a corresponding inverse function , and an analog among the hyperbolic functions . The oldest definitions of trigonometric functions, related to right-angle triangles, define them only for acute angles . To extend the sine and cosine functions to functions whose domain is the whole real line , geometrical definitions using the standard unit circle (i.e., a circle with radius 1 unit) are often used; then the domain of the other functions is the real line with some isolated points removed. Modern definitions express trigonometric functions as infinite series or as solutions of differential equations . This allows extending the domain of sine and cosine functions to the whole complex plane , and the domain of the other trigonometric functions to the complex plane with some isolated points removed. Conventionally, an abbreviation of each trigonometric function's name is used as its symbol in formulas. Today, the most common versions of these abbreviations are " sin " for sine, " cos " for cosine, " tan " or " tg " for tangent, " sec " for secant, " csc " or " cosec " for cosecant, and " cot " or " ctg " for cotangent. 
Historically, these abbreviations were first used in prose sentences to indicate particular line segments or their lengths related to an arc of an arbitrary circle, and later to indicate ratios of lengths, but as the function concept developed in the 17th–18th century, they began to be considered as functions of real-number-valued angle measures, and written with functional notation , for example sin( x ) . Parentheses are still often omitted to reduce clutter, but are sometimes necessary; for example the expression sin ⁡ x + y {\displaystyle \sin x+y} would typically be interpreted to mean ( sin ⁡ x ) + y , {\displaystyle (\sin x)+y,} so parentheses are required to express sin ⁡ ( x + y ) . {\displaystyle \sin(x+y).} A positive integer appearing as a superscript after the symbol of the function denotes exponentiation , not function composition . For example sin 2 ⁡ x {\displaystyle \sin ^{2}x} and sin 2 ⁡ ( x ) {\displaystyle \sin ^{2}(x)} denote ( sin ⁡ x ) 2 , {\displaystyle (\sin x)^{2},} not sin ⁡ ( sin ⁡ x ) . {\displaystyle \sin(\sin x).} This differs from the (historically later) general functional notation in which f 2 ( x ) = ( f ∘ f ) ( x ) = f ( f ( x ) ) . {\displaystyle f^{2}(x)=(f\circ f)(x)=f(f(x)).} In contrast, the superscript − 1 {\displaystyle -1} is commonly used to denote the inverse function , not the reciprocal . For example sin − 1 ⁡ x {\displaystyle \sin ^{-1}x} and sin − 1 ⁡ ( x ) {\displaystyle \sin ^{-1}(x)} denote the inverse trigonometric function alternatively written arcsin ⁡ x . {\displaystyle \arcsin x\,.} The equation θ = sin − 1 ⁡ x {\displaystyle \theta =\sin ^{-1}x} implies sin ⁡ θ = x , {\displaystyle \sin \theta =x,} not θ ⋅ sin ⁡ x = 1. {\displaystyle \theta \cdot \sin x=1.} In this case, the superscript could be considered as denoting a composed or iterated function , but negative superscripts other than − 1 {\displaystyle {-1}} are not in common use. 
If the acute angle θ is given, then any right triangles that have an angle of θ are similar to each other. This means that the ratio of any two side lengths depends only on θ . Thus these six ratios define six functions of θ , which are the trigonometric functions. In the following definitions, the hypotenuse is the length of the side opposite the right angle, opposite represents the side opposite the given angle θ , and adjacent represents the side between the angle θ and the right angle. [ 2 ] [ 3 ] Various mnemonics can be used to remember these definitions. In a right-angled triangle, the sum of the two acute angles is a right angle, that is, 90° or ⁠ π / 2 ⁠ radians . Therefore sin ⁡ ( θ ) {\displaystyle \sin(\theta )} and cos ⁡ ( 90 ∘ − θ ) {\displaystyle \cos(90^{\circ }-\theta )} represent the same ratio, and thus are equal. This identity and analogous relationships between the other trigonometric functions are summarized in the following table. In geometric applications, the argument of a trigonometric function is generally the measure of an angle . For this purpose, any angular unit is convenient. One common unit is degrees , in which a right angle is 90° and a complete turn is 360° (particularly in elementary mathematics ). However, in calculus and mathematical analysis , the trigonometric functions are generally regarded more abstractly as functions of real or complex numbers , rather than angles. In fact, the functions sin and cos can be defined for all complex numbers in terms of the exponential function , via power series, [ 5 ] or as solutions to differential equations given particular initial values [ 6 ] ( see below ), without reference to any geometric notions. The other four trigonometric functions ( tan , cot , sec , csc ) can be defined as quotients and reciprocals of sin and cos , except where zero occurs in the denominator. 
It can be proved, for real arguments, that these definitions coincide with elementary geometric definitions if the argument is regarded as an angle in radians. [ 5 ] Moreover, these definitions result in simple expressions for the derivatives and indefinite integrals for the trigonometric functions. [ 7 ] Thus, in settings beyond elementary geometry, radians are regarded as the mathematically natural unit for describing angle measures. When radians (rad) are employed, the angle is given as the length of the arc of the unit circle subtended by it: the angle that subtends an arc of length 1 on the unit circle is 1 rad (≈ 57.3°), [ 8 ] and a complete turn (360°) is an angle of 2 π (≈ 6.28) rad. [ 9 ] For real number x , the notation sin x , cos x , etc. refers to the value of the trigonometric functions evaluated at an angle of x rad. If units of degrees are intended, the degree sign must be explicitly shown ( sin x° , cos x° , etc.). Using this standard notation, the argument x for the trigonometric functions satisfies the relationship x = (180 x / π )°, so that, for example, sin π = sin 180° when we take x = π . In this way, the degree symbol can be regarded as a mathematical constant such that 1° = π /180 ≈ 0.0175. [ 10 ] The six trigonometric functions can be defined as coordinate values of points on the Euclidean plane that are related to the unit circle , which is the circle of radius one centered at the origin O of this coordinate system. While right-angled triangle definitions allow for the definition of the trigonometric functions for angles between 0 and π 2 {\textstyle {\frac {\pi }{2}}} radians (90°), the unit circle definitions allow the domain of trigonometric functions to be extended to all positive and negative real numbers. 
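The convention above, where sin x° means sin of x·π/180 radians, is just a scale factor; a one-line sketch (the helper name `sin_degrees` is illustrative):

```python
import math

def sin_degrees(x):
    """sin of an angle given in degrees: 1 degree = pi/180 radians."""
    return math.sin(math.radians(x))
```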
Let L {\displaystyle {\mathcal {L}}} be the ray obtained by rotating by an angle θ the positive half of the x -axis ( counterclockwise rotation for θ > 0 , {\displaystyle \theta >0,} and clockwise rotation for θ < 0 {\displaystyle \theta <0} ). This ray intersects the unit circle at the point A = ( x A , y A ) . {\displaystyle \mathrm {A} =(x_{\mathrm {A} },y_{\mathrm {A} }).} The ray L , {\displaystyle {\mathcal {L}},} extended to a line if necessary, intersects the line of equation x = 1 {\displaystyle x=1} at point B = ( 1 , y B ) , {\displaystyle \mathrm {B} =(1,y_{\mathrm {B} }),} and the line of equation y = 1 {\displaystyle y=1} at point C = ( x C , 1 ) . {\displaystyle \mathrm {C} =(x_{\mathrm {C} },1).} The tangent line to the unit circle at the point A , is perpendicular to L , {\displaystyle {\mathcal {L}},} and intersects the y - and x -axes at points D = ( 0 , y D ) {\displaystyle \mathrm {D} =(0,y_{\mathrm {D} })} and E = ( x E , 0 ) . {\displaystyle \mathrm {E} =(x_{\mathrm {E} },0).} The coordinates of these points give the values of all trigonometric functions for any arbitrary real value of θ in the following manner. The trigonometric functions cos and sin are defined, respectively, as the x - and y -coordinate values of point A . That is, In the range 0 ≤ θ ≤ π / 2 {\displaystyle 0\leq \theta \leq \pi /2} , this definition coincides with the right-angled triangle definition, by taking the right-angled triangle to have the unit radius OA as hypotenuse . And since the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} holds for all points P = ( x , y ) {\displaystyle \mathrm {P} =(x,y)} on the unit circle, this definition of cosine and sine also satisfies the Pythagorean identity . 
The other trigonometric functions can be found along the unit circle as By applying the Pythagorean identity and geometric proof methods, these definitions can readily be shown to coincide with the definitions of tangent, cotangent, secant and cosecant in terms of sine and cosine, that is Since a rotation of an angle of ± 2 π {\displaystyle \pm 2\pi } does not change the position or size of a shape, the points A , B , C , D , and E are the same for two angles whose difference is an integer multiple of 2 π {\displaystyle 2\pi } . Thus trigonometric functions are periodic functions with period 2 π {\displaystyle 2\pi } . That is, the equalities hold for any angle θ and any integer k . The same is true for the four other trigonometric functions. By observing the sign and the monotonicity of the functions sine, cosine, cosecant, and secant in the four quadrants, one can show that 2 π {\displaystyle 2\pi } is the smallest value for which they are periodic (i.e., 2 π {\displaystyle 2\pi } is the fundamental period of these functions). However, after a rotation by an angle π {\displaystyle \pi } , the points B and C already return to their original position, so that the tangent function and the cotangent function have a fundamental period of π {\displaystyle \pi } . That is, the equalities hold for any angle θ and any integer k . The algebraic expressions for the most important angles are as follows: Writing the numerators as square roots of consecutive non-negative integers, with a denominator of 2, provides an easy way to remember the values. [ 13 ] Such simple expressions generally do not exist for other angles which are rational multiples of a right angle. The following table lists the sines, cosines, and tangents of multiples of 15 degrees from 0 to 90 degrees. G. H. 
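The square-root mnemonic mentioned above, writing the sines of 0°, 30°, 45°, 60°, and 90° as √n/2 for n = 0…4, can be verified numerically with a short loop:

```python
import math

# Mnemonic from the text: sin 0° = sqrt(0)/2, sin 30° = sqrt(1)/2,
# sin 45° = sqrt(2)/2, sin 60° = sqrt(3)/2, sin 90° = sqrt(4)/2.
for n, deg in enumerate([0, 30, 45, 60, 90]):
    assert math.isclose(math.sin(math.radians(deg)), math.sqrt(n) / 2, abs_tol=1e-12)
```

The cosines of the same angles follow the pattern in reverse order (n = 4…0).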
Hardy noted in his 1908 work A Course of Pure Mathematics that the definition of the trigonometric functions in terms of the unit circle is not satisfactory, because it depends implicitly on a notion of angle that can be measured by a real number. [ 14 ] Thus in modern analysis, trigonometric functions are usually constructed without reference to geometry. Various ways exist in the literature for defining the trigonometric functions in a manner suitable for analysis; they include: Sine and cosine can be defined as the unique solution to the initial value problem : [ 17 ] Differentiating again, d 2 d x 2 sin ⁡ x = d d x cos ⁡ x = − sin ⁡ x {\textstyle {\frac {d^{2}}{dx^{2}}}\sin x={\frac {d}{dx}}\cos x=-\sin x} and d 2 d x 2 cos ⁡ x = − d d x sin ⁡ x = − cos ⁡ x {\textstyle {\frac {d^{2}}{dx^{2}}}\cos x=-{\frac {d}{dx}}\sin x=-\cos x} , so both sine and cosine are solutions of the same ordinary differential equation y ″ + y = 0 {\displaystyle y''+y=0} . Sine is the unique solution with y (0) = 0 and y ′(0) = 1 ; cosine is the unique solution with y (0) = 1 and y ′(0) = 0 . One can then prove, as a theorem, that solutions cos , sin {\displaystyle \cos ,\sin } are periodic, having the same period. Writing this period as 2 π {\displaystyle 2\pi } is then a definition of the real number π {\displaystyle \pi } which is independent of geometry. Applying the quotient rule to the tangent tan ⁡ x = sin ⁡ x / cos ⁡ x {\displaystyle \tan x=\sin x/\cos x} gives ( tan ⁡ x ) ′ = 1 + tan 2 ⁡ x {\displaystyle (\tan x)'=1+\tan ^{2}x} , so the tangent function satisfies the ordinary differential equation y ′ = 1 + y 2 {\displaystyle y'=1+y^{2}} . It is the unique solution with y (0) = 0 . The basic trigonometric functions can be defined by the following power series expansions. [ 18 ] These series are also known as the Taylor series or Maclaurin series of these trigonometric functions: The radius of convergence of these series is infinite. 
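The power-series definition can be illustrated by summing the Maclaurin series numerically. The helper name `sin_series` below is ours, not standard; for moderate arguments the partial sum converges rapidly to the library sine:

```python
import math

def sin_series(x: float, terms: int = 12) -> float:
    """Partial sum of the Maclaurin series x - x^3/3! + x^5/5! - ..."""
    total, term = 0.0, x
    for k in range(terms):
        total += term
        # Next odd-power term: multiply by -x^2 / ((2k+2)(2k+3)).
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

assert math.isclose(sin_series(1.0), math.sin(1.0), abs_tol=1e-12)
assert math.isclose(sin_series(math.pi / 2), 1.0, abs_tol=1e-12)
```

Because the radius of convergence is infinite, the same loop works for any real argument, though large |x| needs more terms for a given accuracy.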
Therefore, the sine and the cosine can be extended to entire functions (also called "sine" and "cosine"), which are (by definition) complex-valued functions that are defined and holomorphic on the whole complex plane . Term-by-term differentiation shows that the sine and cosine defined by the series obey the differential equation discussed previously, and conversely one can obtain these series from elementary recursion relations derived from the differential equation. Being defined as fractions of entire functions, the other trigonometric functions may be extended to meromorphic functions , that is, functions that are holomorphic in the whole complex plane, except some isolated points called poles . Here, the poles are the numbers of the form ( 2 k + 1 ) π 2 {\textstyle (2k+1){\frac {\pi }{2}}} for the tangent and the secant, or k π {\displaystyle k\pi } for the cotangent and the cosecant, where k is an arbitrary integer. Recurrence relations may also be computed for the coefficients of the Taylor series of the other trigonometric functions. These series have a finite radius of convergence . Their coefficients have a combinatorial interpretation: they enumerate alternating permutations of finite sets. [ 19 ] More precisely, defining one has the following series expansions: [ 20 ] The following continued fractions are valid in the whole complex plane: The last one was used in the historically first proof that π is irrational . [ 21 ] There is a series representation as partial fraction expansion where just translated reciprocal functions are summed up, such that the poles of the cotangent function and the reciprocal functions match: [ 22 ] This identity can be proved with the Herglotz trick. 
[ 23 ] Combining the (– n ) th with the n th term leads to absolutely convergent series: Similarly, one can find a partial fraction expansion for the secant, cosecant and tangent functions: The following infinite product for the sine is due to Leonhard Euler , and is of great importance in complex analysis: [ 24 ] This may be obtained from the partial fraction decomposition of cot ⁡ z {\displaystyle \cot z} given above, which is the logarithmic derivative of sin ⁡ z {\displaystyle \sin z} . [ 25 ] From this, it can be deduced also that Euler's formula relates sine and cosine to the exponential function : This formula is commonly considered for real values of x , but it remains true for all complex values. Proof : Let f 1 ( x ) = cos ⁡ x + i sin ⁡ x , {\displaystyle f_{1}(x)=\cos x+i\sin x,} and f 2 ( x ) = e i x . {\displaystyle f_{2}(x)=e^{ix}.} One has d f j ( x ) / d x = i f j ( x ) {\displaystyle df_{j}(x)/dx=if_{j}(x)} for j = 1, 2 . The quotient rule thus implies that d / d x ( f 1 ( x ) / f 2 ( x ) ) = 0 {\displaystyle d/dx\,(f_{1}(x)/f_{2}(x))=0} . Therefore, f 1 ( x ) / f 2 ( x ) {\displaystyle f_{1}(x)/f_{2}(x)} is a constant function, which equals 1 , as f 1 ( 0 ) = f 2 ( 0 ) = 1. {\displaystyle f_{1}(0)=f_{2}(0)=1.} This proves the formula. One has Solving this linear system in sine and cosine, one can express them in terms of the exponential function: When x is real, this may be rewritten as Most trigonometric identities can be proved by expressing trigonometric functions in terms of the complex exponential function by using the above formulas, and then using the identity e a + b = e a e b {\displaystyle e^{a+b}=e^{a}e^{b}} for simplifying the result. Euler's formula can also be used to define the basic trigonometric functions directly, as follows, using the language of topological groups . 
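Euler's formula, and the exponential expressions for sine and cosine that follow from it, can be spot-checked numerically with the standard `cmath` module (the test value x = 0.7 is arbitrary):

```python
import cmath
import math

x = 0.7  # arbitrary real test value

# Euler's formula: e^{ix} = cos x + i sin x.
assert cmath.isclose(cmath.exp(1j * x), complex(math.cos(x), math.sin(x)))

# Solving the linear system gives the exponential expressions:
# sin x = (e^{ix} - e^{-ix}) / 2i,  cos x = (e^{ix} + e^{-ix}) / 2.
assert cmath.isclose((cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j, math.sin(x))
assert cmath.isclose((cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2, math.cos(x))
```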
[ 26 ] The set U {\displaystyle U} of complex numbers of unit modulus is a compact and connected topological group, which has a neighborhood of the identity that is homeomorphic to the real line. Therefore, it is isomorphic as a topological group to the one-dimensional torus group R / Z {\displaystyle \mathbb {R} /\mathbb {Z} } , via an isomorphism e : R / Z → U . {\displaystyle e:\mathbb {R} /\mathbb {Z} \to U.} In pedestrian terms e ( t ) = exp ⁡ ( 2 π i t ) {\displaystyle e(t)=\exp(2\pi it)} , and this isomorphism is unique up to taking complex conjugates. For a nonzero real number a {\displaystyle a} (the base ), the function t ↦ e ( t / a ) {\displaystyle t\mapsto e(t/a)} defines an isomorphism of the group R / a Z → U {\displaystyle \mathbb {R} /a\mathbb {Z} \to U} . The real and imaginary parts of e ( t / a ) {\displaystyle e(t/a)} are the cosine and sine, where a {\displaystyle a} is used as the base for measuring angles. For example, when a = 2 π {\displaystyle a=2\pi } , we get the measure in radians, and the usual trigonometric functions. When a = 360 {\displaystyle a=360} , we get the sine and cosine of angles measured in degrees. Note that a = 2 π {\displaystyle a=2\pi } is the unique value at which the derivative d d t e ( t / a ) {\displaystyle {\frac {d}{dt}}e(t/a)} becomes a unit vector with positive imaginary part at t = 0 {\displaystyle t=0} . This fact can, in turn, be used to define the constant 2 π {\displaystyle 2\pi } . Another way to define the trigonometric functions in analysis is using integration. [ 14 ] [ 27 ] For a real number t {\displaystyle t} , put θ ( t ) = ∫ 0 t d τ 1 + τ 2 = arctan ⁡ t {\displaystyle \theta (t)=\int _{0}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\arctan t} where this defines this inverse tangent function. Also, π {\displaystyle \pi } is defined by 1 2 π = ∫ 0 ∞ d τ 1 + τ 2 {\displaystyle {\frac {1}{2}}\pi =\int _{0}^{\infty }{\frac {d\tau }{1+\tau ^{2}}}} a definition that goes back to Karl Weierstrass . 
[ 28 ] On the interval − π / 2 < θ < π / 2 {\displaystyle -\pi /2<\theta <\pi /2} , the trigonometric functions are defined by inverting the relation θ = arctan ⁡ t {\displaystyle \theta =\arctan t} . Thus we define the trigonometric functions by tan ⁡ θ = t , cos ⁡ θ = ( 1 + t 2 ) − 1 / 2 , sin ⁡ θ = t ( 1 + t 2 ) − 1 / 2 {\displaystyle \tan \theta =t,\quad \cos \theta =(1+t^{2})^{-1/2},\quad \sin \theta =t(1+t^{2})^{-1/2}} where the point ( t , θ ) {\displaystyle (t,\theta )} is on the graph of θ = arctan ⁡ t {\displaystyle \theta =\arctan t} and the positive square root is taken. This defines the trigonometric functions on ( − π / 2 , π / 2 ) {\displaystyle (-\pi /2,\pi /2)} . The definition can be extended to all real numbers by first observing that, as θ → π / 2 {\displaystyle \theta \to \pi /2} , t → ∞ {\displaystyle t\to \infty } , and so cos ⁡ θ = ( 1 + t 2 ) − 1 / 2 → 0 {\displaystyle \cos \theta =(1+t^{2})^{-1/2}\to 0} and sin ⁡ θ = t ( 1 + t 2 ) − 1 / 2 → 1 {\displaystyle \sin \theta =t(1+t^{2})^{-1/2}\to 1} . Thus cos ⁡ θ {\displaystyle \cos \theta } and sin ⁡ θ {\displaystyle \sin \theta } are extended continuously so that cos ⁡ ( π / 2 ) = 0 , sin ⁡ ( π / 2 ) = 1 {\displaystyle \cos(\pi /2)=0,\sin(\pi /2)=1} . Now the conditions cos ⁡ ( θ + π ) = − cos ⁡ ( θ ) {\displaystyle \cos(\theta +\pi )=-\cos(\theta )} and sin ⁡ ( θ + π ) = − sin ⁡ ( θ ) {\displaystyle \sin(\theta +\pi )=-\sin(\theta )} define the sine and cosine as periodic functions with period 2 π {\displaystyle 2\pi } , for all real numbers. Proving the basic properties of sine and cosine, including the fact that sine and cosine are analytic, one may first establish the addition formulae. 
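The integral definition of the arctangent can be illustrated by crude numerical quadrature. The midpoint rule below is our choice of method for the sketch, not part of the definition itself:

```python
import math

def arctan_by_integration(t: float, n: int = 100_000) -> float:
    """Midpoint-rule approximation of the integral of 1/(1 + tau^2) from 0 to t."""
    h = t / n
    return h * sum(1.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(n))

# The quadrature agrees with the closed-form arctangent.
assert math.isclose(arctan_by_integration(1.0), math.atan(1.0), abs_tol=1e-9)

# Inverting theta = arctan t recovers the stated expressions for cos and sin.
t = 1.0
theta = math.atan(t)
assert math.isclose(math.cos(theta), (1 + t * t) ** -0.5)
assert math.isclose(math.sin(theta), t * (1 + t * t) ** -0.5)
```

At t = 1 this reproduces θ = π/4 with cos θ = sin θ = 1/√2, matching the unit-circle values.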
First, arctan ⁡ s + arctan ⁡ t = arctan ⁡ s + t 1 − s t {\displaystyle \arctan s+\arctan t=\arctan {\frac {s+t}{1-st}}} holds, provided arctan ⁡ s + arctan ⁡ t ∈ ( − π / 2 , π / 2 ) {\displaystyle \arctan s+\arctan t\in (-\pi /2,\pi /2)} , since arctan ⁡ s + arctan ⁡ t = ∫ − s t d τ 1 + τ 2 = ∫ 0 s + t 1 − s t d τ 1 + τ 2 {\displaystyle \arctan s+\arctan t=\int _{-s}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\int _{0}^{\frac {s+t}{1-st}}{\frac {d\tau }{1+\tau ^{2}}}} after the substitution τ → s + τ 1 − s τ {\displaystyle \tau \to {\frac {s+\tau }{1-s\tau }}} . In particular, the limiting case as s → ∞ {\displaystyle s\to \infty } gives arctan ⁡ t + π 2 = arctan ⁡ ( − 1 / t ) , t ∈ ( − ∞ , 0 ) . {\displaystyle \arctan t+{\frac {\pi }{2}}=\arctan(-1/t),\quad t\in (-\infty ,0).} Thus we have sin ⁡ ( θ + π 2 ) = − 1 t 1 + ( − 1 / t ) 2 = − 1 1 + t 2 = − cos ⁡ ( θ ) {\displaystyle \sin \left(\theta +{\frac {\pi }{2}}\right)={\frac {-1}{t{\sqrt {1+(-1/t)^{2}}}}}={\frac {-1}{\sqrt {1+t^{2}}}}=-\cos(\theta )} and cos ⁡ ( θ + π 2 ) = 1 1 + ( − 1 / t ) 2 = t 1 + t 2 = sin ⁡ ( θ ) . {\displaystyle \cos \left(\theta +{\frac {\pi }{2}}\right)={\frac {1}{\sqrt {1+(-1/t)^{2}}}}={\frac {t}{\sqrt {1+t^{2}}}}=\sin(\theta ).} So the sine and cosine functions are related by translation over a quarter period π / 2 {\displaystyle \pi /2} . One can also define the trigonometric functions using various functional equations . For example, [ 29 ] the sine and the cosine form the unique pair of continuous functions that satisfy the difference formula and the added condition The sine and cosine of a complex number z = x + i y {\displaystyle z=x+iy} can be expressed in terms of real sines, cosines, and hyperbolic functions as follows: By taking advantage of domain coloring , it is possible to graph the trigonometric functions as complex-valued functions. 
Various features unique to the complex functions can be seen from the graph; for example, the sine and cosine functions can be seen to be unbounded as the imaginary part of z {\displaystyle z} becomes larger (since the color white represents infinity), and the fact that the functions contain simple zeros or poles is apparent from the fact that the hue cycles around each zero or pole exactly once. Comparing these graphs with those of the corresponding hyperbolic functions highlights the relationships between the two. (Domain colorings are shown for sin ⁡ z {\displaystyle \sin z\,} , cos ⁡ z {\displaystyle \cos z\,} , tan ⁡ z {\displaystyle \tan z\,} , cot ⁡ z {\displaystyle \cot z\,} , sec ⁡ z {\displaystyle \sec z\,} , and csc ⁡ z {\displaystyle \csc z\,} .) The sine and cosine functions are periodic , with period 2 π {\displaystyle 2\pi } , which is the smallest positive period: sin ⁡ ( z + 2 π ) = sin ⁡ ( z ) , cos ⁡ ( z + 2 π ) = cos ⁡ ( z ) . {\displaystyle \sin(z+2\pi )=\sin(z),\quad \cos(z+2\pi )=\cos(z).} Consequently, the cosecant and secant also have 2 π {\displaystyle 2\pi } as their period. The functions sine and cosine also have semiperiods π {\displaystyle \pi } , and sin ⁡ ( z + π ) = − sin ⁡ ( z ) , cos ⁡ ( z + π ) = − cos ⁡ ( z ) {\displaystyle \sin(z+\pi )=-\sin(z),\quad \cos(z+\pi )=-\cos(z)} and consequently tan ⁡ ( z + π ) = tan ⁡ ( z ) , cot ⁡ ( z + π ) = cot ⁡ ( z ) . {\displaystyle \tan(z+\pi )=\tan(z),\quad \cot(z+\pi )=\cot(z).} Also, sin ⁡ ( x + π / 2 ) = cos ⁡ ( x ) , cos ⁡ ( x + π / 2 ) = − sin ⁡ ( x ) {\displaystyle \sin(x+\pi /2)=\cos(x),\quad \cos(x+\pi /2)=-\sin(x)} (see Complementary angles ). The function sin ⁡ ( z ) {\displaystyle \sin(z)} has a unique zero (at z = 0 {\displaystyle z=0} ) in the strip − π < ℜ ( z ) < π {\displaystyle -\pi <\Re (z)<\pi } . The function cos ⁡ ( z ) {\displaystyle \cos(z)} has the pair of zeros z = ± π / 2 {\displaystyle z=\pm \pi /2} in the same strip. 
Because of the periodicity, the zeros of sine are π Z = { … , − 2 π , − π , 0 , π , 2 π , … } ⊂ C . {\displaystyle \pi \mathbb {Z} =\left\{\dots ,-2\pi ,-\pi ,0,\pi ,2\pi ,\dots \right\}\subset \mathbb {C} .} The zeros of cosine are π 2 + π Z = { … , − 3 π 2 , − π 2 , π 2 , 3 π 2 , … } ⊂ C . {\displaystyle {\frac {\pi }{2}}+\pi \mathbb {Z} =\left\{\dots ,-{\frac {3\pi }{2}},-{\frac {\pi }{2}},{\frac {\pi }{2}},{\frac {3\pi }{2}},\dots \right\}\subset \mathbb {C} .} All of the zeros are simple zeros, and both functions have derivative ± 1 {\displaystyle \pm 1} at each of the zeros. The tangent function tan ⁡ ( z ) = sin ⁡ ( z ) / cos ⁡ ( z ) {\displaystyle \tan(z)=\sin(z)/\cos(z)} has a simple zero at z = 0 {\displaystyle z=0} and vertical asymptotes at z = ± π / 2 {\displaystyle z=\pm \pi /2} , where it has a simple pole of residue − 1 {\displaystyle -1} . Again, owing to the periodicity, the zeros are all the integer multiples of π {\displaystyle \pi } and the poles are odd multiples of π / 2 {\displaystyle \pi /2} , all having the same residue. The poles correspond to vertical asymptotes lim x → ( π / 2 ) − tan ⁡ ( x ) = + ∞ , lim x → ( π / 2 ) + tan ⁡ ( x ) = − ∞ . {\displaystyle \lim _{x\to (\pi /2)^{-}}\tan(x)=+\infty ,\quad \lim _{x\to (\pi /2)^{+}}\tan(x)=-\infty .} The cotangent function cot ⁡ ( z ) = cos ⁡ ( z ) / sin ⁡ ( z ) {\displaystyle \cot(z)=\cos(z)/\sin(z)} has a simple pole of residue 1 at the integer multiples of π {\displaystyle \pi } and simple zeros at odd multiples of π / 2 {\displaystyle \pi /2} . The poles correspond to vertical asymptotes lim x → 0 − cot ⁡ ( x ) = − ∞ , lim x → 0 + cot ⁡ ( x ) = + ∞ . {\displaystyle \lim _{x\to 0^{-}}\cot(x)=-\infty ,\quad \lim _{x\to 0^{+}}\cot(x)=+\infty .} Many identities interrelate the trigonometric functions. This section contains the most basic ones; for more identities, see List of trigonometric identities . 
These identities may be proved geometrically from the unit-circle definitions or the right-angled-triangle definitions (although, for the latter definitions, care must be taken for angles that are not in the interval [0, π /2] , see Proofs of trigonometric identities ). For non-geometrical proofs using only tools of calculus , one may use directly the differential equations, in a way that is similar to that of the above proof of Euler's identity. One can also use Euler's identity for expressing all trigonometric functions in terms of complex exponentials and using properties of the exponential function. The cosine and the secant are even functions ; the other trigonometric functions are odd functions . That is: All trigonometric functions are periodic functions of period 2 π . This is the smallest period, except for the tangent and the cotangent, which have π as smallest period. This means that, for every integer k , one has See Periodicity and asymptotes . The Pythagorean identity is the expression of the Pythagorean theorem in terms of trigonometric functions. It is sin 2 ⁡ x + cos 2 ⁡ x = 1 {\displaystyle \sin ^{2}x+\cos ^{2}x=1} . Dividing through by cos 2 ⁡ x {\displaystyle \cos ^{2}x} gives tan 2 ⁡ x + 1 = sec 2 ⁡ x {\displaystyle \tan ^{2}x+1=\sec ^{2}x} , and dividing through by sin 2 ⁡ x {\displaystyle \sin ^{2}x} gives 1 + cot 2 ⁡ x = csc 2 ⁡ x {\displaystyle 1+\cot ^{2}x=\csc ^{2}x} . The sum and difference formulas allow expanding the sine, the cosine, and the tangent of a sum or a difference of two angles in terms of sines and cosines and tangents of the angles themselves. These can be derived geometrically, using arguments that date to Ptolemy (see Angle sum and difference identities ). One can also produce them algebraically using Euler's formula . When the two angles are equal, the sum formulas reduce to simpler equations known as the double-angle formulae . These identities can be used to derive the product-to-sum identities . 
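The Pythagorean identity, its divided forms, and the angle-sum formula can be spot-checked at arbitrary sample angles (the values 0.9, 0.5, and 1.1 below are illustrative, not special):

```python
import math

x = 0.9  # any angle where cos and sin are nonzero

# Pythagorean identity: sin^2 x + cos^2 x = 1.
assert math.isclose(math.sin(x) ** 2 + math.cos(x) ** 2, 1.0)

# Dividing by cos^2 x: tan^2 x + 1 = sec^2 x.
assert math.isclose(math.tan(x) ** 2 + 1, 1 / math.cos(x) ** 2)

# Dividing by sin^2 x: 1 + cot^2 x = csc^2 x.
assert math.isclose(1 + 1 / math.tan(x) ** 2, 1 / math.sin(x) ** 2)

# Angle-sum formula for sine.
a, b = 0.5, 1.1
assert math.isclose(math.sin(a + b),
                    math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b))
```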
By setting t = tan ⁡ 1 2 θ , {\displaystyle t=\tan {\tfrac {1}{2}}\theta ,} all trigonometric functions of θ {\displaystyle \theta } can be expressed as rational fractions of t {\displaystyle t} : Together with this is the tangent half-angle substitution , which reduces the computation of integrals and antiderivatives of trigonometric functions to that of rational fractions. The derivatives of trigonometric functions result from those of sine and cosine by applying the quotient rule . The values given for the antiderivatives in the following table can be verified by differentiating them. The number C is a constant of integration . Note: For 0 < x < π {\displaystyle 0<x<\pi } the integral of csc ⁡ x {\displaystyle \csc x} can also be written as − arsinh ⁡ ( cot ⁡ x ) , {\displaystyle -\operatorname {arsinh} (\cot x),} and for the integral of sec ⁡ x {\displaystyle \sec x} for − π / 2 < x < π / 2 {\displaystyle -\pi /2<x<\pi /2} as arsinh ⁡ ( tan ⁡ x ) , {\displaystyle \operatorname {arsinh} (\tan x),} where arsinh {\displaystyle \operatorname {arsinh} } is the inverse hyperbolic sine . Alternatively, the derivatives of the 'co-functions' can be obtained using trigonometric identities and the chain rule: The trigonometric functions are periodic, and hence not injective , so strictly speaking, they do not have an inverse function . However, on each interval on which a trigonometric function is monotonic , one can define an inverse function, and this defines inverse trigonometric functions as multivalued functions . To define a true inverse function, one must restrict the domain to an interval where the function is monotonic, and is thus bijective from this interval to its image by the function. The common choice for this interval, called the set of principal values , is given in the following table. As usual, the inverse trigonometric functions are denoted with the prefix "arc" before the name or its abbreviation of the function. The notations sin −1 , cos −1 , etc. 
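The tangent half-angle parametrization can be checked numerically; the standard rational expressions in t = tan(θ/2) used below follow from the double-angle formulas, and the sample angle is arbitrary:

```python
import math

theta = 1.2
t = math.tan(theta / 2)

# Rational parametrization in t = tan(theta/2).
assert math.isclose(math.sin(theta), 2 * t / (1 + t * t))
assert math.isclose(math.cos(theta), (1 - t * t) / (1 + t * t))
assert math.isclose(math.tan(theta), 2 * t / (1 - t * t))
```

This is exactly why the substitution turns trigonometric integrands into rational functions of t.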
are often used for arcsin and arccos , etc. When this notation is used, inverse functions could be confused with multiplicative inverses. The notation with the "arc" prefix avoids such a confusion, though "arcsec" for arcsecant can be confused with " arcsecond ". Just like the sine and cosine, the inverse trigonometric functions can also be expressed in terms of infinite series. They can also be expressed in terms of complex logarithms . In this section A , B , C denote the three (interior) angles of a triangle, and a , b , c denote the lengths of the respective opposite edges. They are related by various formulas, which are named by the trigonometric functions they involve. The law of sines states that for an arbitrary triangle with sides a , b , and c and angles opposite those sides A , B and C : sin ⁡ A a = sin ⁡ B b = sin ⁡ C c = 2 Δ a b c , {\displaystyle {\frac {\sin A}{a}}={\frac {\sin B}{b}}={\frac {\sin C}{c}}={\frac {2\Delta }{abc}},} where Δ is the area of the triangle, or, equivalently, a sin ⁡ A = b sin ⁡ B = c sin ⁡ C = 2 R , {\displaystyle {\frac {a}{\sin A}}={\frac {b}{\sin B}}={\frac {c}{\sin C}}=2R,} where R is the triangle's circumradius . It can be proved by dividing the triangle into two right ones and using the above definition of sine. The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. This is a common situation occurring in triangulation , a technique to determine unknown distances by measuring two angles and an accessible enclosed distance. The law of cosines (also known as the cosine formula or cosine rule) is an extension of the Pythagorean theorem : c 2 = a 2 + b 2 − 2 a b cos ⁡ C , {\displaystyle c^{2}=a^{2}+b^{2}-2ab\cos C,} or equivalently, cos ⁡ C = a 2 + b 2 − c 2 2 a b . {\displaystyle \cos C={\frac {a^{2}+b^{2}-c^{2}}{2ab}}.} In this formula the angle at C is opposite to the side c . 
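A small worked example combining the two laws; the triangle with sides 7 and 5 and included angle 60° is hypothetical, chosen only to make the computation concrete:

```python
import math

# Hypothetical triangle: sides a = 7, b = 5, included angle C = 60 degrees.
a, b = 7.0, 5.0
C = math.radians(60)

# Law of cosines gives the third side.
c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(C))

# Recovering the angle from the three sides inverts the formula.
assert math.isclose(math.acos((a * a + b * b - c * c) / (2 * a * b)), C)

# Law of sines: a/sin A = b/sin B = c/sin C (here A is acute, so asin suffices).
A = math.asin(a * math.sin(C) / c)
B = math.pi - A - C
assert math.isclose(a / math.sin(A), c / math.sin(C))
assert math.isclose(b / math.sin(B), c / math.sin(C))
```

Note the usual caveat: `asin` returns an acute angle, so in general one must check which solution of the law of sines is consistent with the angle sum.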
This theorem can be proved by dividing the triangle into two right ones and using the Pythagorean theorem . The law of cosines can be used to determine a side of a triangle if two sides and the angle between them are known. It can also be used to find the cosines of an angle (and consequently the angles themselves) if the lengths of all the sides are known. The law of tangents says that: If s is the triangle's semiperimeter, ( a + b + c )/2, and r is the radius of the triangle's incircle , then rs is the triangle's area. Therefore Heron's formula implies that: The law of cotangents says that: [ 30 ] It follows that The trigonometric functions are also important in physics. The sine and the cosine functions, for example, are used to describe simple harmonic motion , which models many natural phenomena, such as the movement of a mass attached to a spring and, for small angles, the pendular motion of a mass hanging by a string. The sine and cosine functions are one-dimensional projections of uniform circular motion . Trigonometric functions also prove to be useful in the study of general periodic functions . The characteristic wave patterns of periodic functions are useful for modeling recurring phenomena such as sound or light waves . [ 31 ] Under rather general conditions, a periodic function f ( x ) can be expressed as a sum of sine waves or cosine waves in a Fourier series . [ 32 ] Denoting the sine or cosine basis functions by φ k , the expansion of the periodic function f ( t ) takes the form: f ( t ) = ∑ k = 1 ∞ c k φ k ( t ) . {\displaystyle f(t)=\sum _{k=1}^{\infty }c_{k}\varphi _{k}(t).} For example, the square wave can be written as the Fourier series f square ( t ) = 4 π ∑ k = 1 ∞ sin ⁡ ( ( 2 k − 1 ) t ) 2 k − 1 . {\displaystyle f_{\text{square}}(t)={\frac {4}{\pi }}\sum _{k=1}^{\infty }{\sin {\big (}(2k-1)t{\big )} \over 2k-1}.} In the animation of a square wave at top right it can be seen that just a few terms already produce a fairly good approximation. 
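The square-wave Fourier series quoted above can be truncated and evaluated numerically; as the text notes, a modest number of terms already gives a fair approximation to the wave's value of 1 on (0, π):

```python
import math

def square_partial_sum(t: float, terms: int) -> float:
    """Partial Fourier sum (4/pi) * sum of sin((2k-1)t)/(2k-1) for k = 1..terms."""
    return (4 / math.pi) * sum(
        math.sin((2 * k - 1) * t) / (2 * k - 1) for k in range(1, terms + 1)
    )

# At t = pi/2 the square wave equals 1; 50 terms get within a couple percent.
approx = square_partial_sum(math.pi / 2, 50)
assert abs(approx - 1.0) < 0.02
```

Near the jump discontinuities the convergence is slower (the Gibbs phenomenon), so the error bound above holds at the midpoint of the plateau, not uniformly.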
The superposition of several terms in the expansion of a sawtooth wave is shown underneath. While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was defined by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). The functions of sine and versine (1 – cosine) are closely related to the jyā and koti-jyā functions used in Gupta period Indian astronomy ( Aryabhatiya , Surya Siddhanta ), via translation from Sanskrit to Arabic and then from Arabic to Latin. [ 33 ] (See Aryabhata's sine table .) All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines , used in solving triangles . [ 34 ] Al-Khwārizmī (c. 780–850) produced tables of sines and cosines. Circa 860, Habash al-Hasib al-Marwazi defined the tangent and the cotangent, and produced their tables. [ 35 ] [ 36 ] Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) defined the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. [ 36 ] The trigonometric functions were later studied by mathematicians including Omar Khayyám , Bhāskara II , Nasir al-Din al-Tusi , Jamshīd al-Kāshī (14th century), Ulugh Beg (14th century), Regiomontanus (1464), Rheticus , and Rheticus' student Valentinus Otho . Madhava of Sangamagrama (c. 1400) made early strides in the analysis of trigonometric functions in terms of infinite series . [ 37 ] (See Madhava series and Madhava's sine table .) The tangent function was brought to Europe by Giovanni Bianchini in 1467 in trigonometry tables he created to support the calculation of stellar coordinates. [ 38 ] The terms tangent and secant were first introduced by the Danish mathematician Thomas Fincke in his book Geometria rotundi (1583). 
[ 39 ] The 17th-century French mathematician Albert Girard made the first published use of the abbreviations sin , cos , and tan in his book Trigonométrie . [ 40 ] In a paper published in 1682, Gottfried Leibniz proved that sin x is not an algebraic function of x . [ 41 ] Though defined as ratios of sides of a right triangle , and thus appearing to be rational functions , Leibniz's result established that they are actually transcendental functions of their argument. The task of assimilating circular functions into algebraic expressions was accomplished by Euler in his Introduction to the Analysis of the Infinite (1748). His method was to show that the sine and cosine functions are alternating series formed from the even and odd terms respectively of the exponential series . He presented " Euler's formula ", as well as near-modern abbreviations ( sin. , cos. , tang. , cot. , sec. , and cosec. ). [ 33 ] A few functions were common historically, but are now seldom used, such as the chord , versine (which appeared in the earliest tables [ 33 ] ), haversine , coversine , [ 42 ] half-tangent (tangent of half an angle), and exsecant . List of trigonometric identities shows more relations between these functions. Historically, trigonometric functions were often combined with logarithms in compound functions like the logarithmic sine, logarithmic cosine, logarithmic secant, logarithmic cosecant, logarithmic tangent and logarithmic cotangent. [ 43 ] [ 44 ] [ 45 ] [ 46 ] The word sine derives [ 47 ] from Latin sinus , meaning "bend; bay", and more specifically "the hanging fold of the upper part of a toga ", "the bosom of a garment", which was chosen as the translation of what was interpreted as the Arabic word jaib , meaning "pocket" or "fold" in the twelfth-century translations of works by Al-Battani and al-Khwārizmī into Medieval Latin . 
[ 48 ] The choice was based on a misreading of the Arabic written form j-y-b ( جيب ), which itself originated as a transliteration from Sanskrit jīvā , which along with its synonym jyā (the standard Sanskrit term for the sine) translates to "bowstring", being in turn adopted from Ancient Greek χορδή "string". [ 49 ] The word tangent comes from Latin tangens meaning "touching", since the line touches the circle of unit radius, whereas secant stems from Latin secans —"cutting"—since the line cuts the circle. [ 50 ] The prefix " co- " (in "cosine", "cotangent", "cosecant") is found in Edmund Gunter 's Canon triangulorum (1620), which defines the cosinus as an abbreviation of the sinus complementi (sine of the complementary angle ) and proceeds to define the cotangens similarly. [ 51 ] [ 52 ]
https://en.wikipedia.org/wiki/Sine_complement
Sinec H1 is an Industrial Ethernet communications protocol that provides the transport layer function widely used in automation and process control applications. [ 1 ] [ 2 ] The protocol was developed by Siemens and is used mainly for control applications. It has high bandwidth and is well suited to the transmission of large volumes of data. It is used as part of the control infrastructure of CERN [ 3 ] and LHC . [ 4 ]
https://en.wikipedia.org/wiki/Sinec_H1
The Singapore Workplace Safety and Health Conference (SWSHC) is a biennial conference [ 1 ] for the promotion of workplace safety and health (WSH, OH&S) thought and practice in Singapore and the region. Organised by the Workplace Safety & Health Council (WSH Council) in partnership with Singapore’s Ministry of Manpower , the conference aims to bring together regulators, industry leaders and safety professionals to identify problems, formulate recommendations and develop and implement best practices to ensure the improvement and advancement of safety standards in the workplace. The inaugural conference will be held on 15 to 16 September 2010 at Suntec Singapore International Convention and Exhibition Centre . The theme for the two-day conference, "Embracing Challenges, Pushing WSH Frontiers", is aimed at creating awareness for the legal, moral and economic reasons behind WSH legislation , changing opinions of WSH practices, and creating a framework for the sustained promotion of WSH thought. Speakers at the 2010 conference include local and international WSH thought leaders and Captains of Industry, including Professor Harri Vainio from the Finnish Institute of Occupational Health, Mr. John Spanswick from the United Kingdom’s Health and Safety Executive Board and Mr. Choo Chiau Beng, CEO of Keppel Corporation .
https://en.wikipedia.org/wiki/Singapore_Workplace_Safety_and_Health_Conference
The Singing Sand Dunes ( Chinese : 鳴沙山 Ming Sha Shan) in Dunhuang , China , are sand dunes that, when the wind blows, give out a singing or drumming sound. [ 1 ] [ 2 ] They are part of the Kumtag Desert . The Singing Sand Dunes were originally known as the "Gods' Sand Dunes" ( Chinese : 神沙山 ). In the Records of the Grand Historian , Sima Qian described the sound "as if listening to music when the weather is fine." During the Ming Dynasty , they came to be called by the current name. There are four better-known singing sand dunes in China, including those of Hami , which are usually said to give the best sound among the four, and those of Dunhuang. The tourist area at the site offers various facilities and activities such as ATV riding, paragliding, camel rides and sandboarding. [ 3 ]
https://en.wikipedia.org/wiki/Singing_Sand_Dunes_(Dunhuang)
Singing sand , also called whistling sand , barking sand , booming sand or singing dune , is sand that produces sound. The sound emission may be caused by wind passing over dunes or by walking on the sand. Certain conditions have to come together to create singing sand. The most common frequency emitted seems to be close to 450 Hz . There are various theories about the singing sand mechanism. It has been proposed that the sound frequency is controlled by the shear rate. Others have suggested that the frequency of vibration is related to the thickness of the dry surface layer of sand. The sound waves bounce back and forth between the surface of the dune and the surface of the moist layer, creating a resonance that increases the sound's volume. The noise may be generated by friction between the grains or by the compression of air between them. [ 1 ] Other sounds that can be emitted by sand have been described as "roaring" or "booming". Singing sand dunes, an example of the phenomenon of singing sand, produce a sound described as roaring, booming, squeaking, or the "Song of Dunes". This is a natural sound phenomenon of up to 105 decibels , lasting as long as several minutes, that occurs in about 35 desert locations around the world. The sound is similar to a loud low-pitch rumble. It emanates from crescent-shaped dunes, or barchans . The sound emission accompanies a slumping or avalanching movement of sand, usually triggered by wind passing over the dune or by someone walking near the crest.
Examples of singing sand dunes include California's Kelso Dunes and Eureka Dunes ; AuTrain Beach in Northern Michigan; sugar sand beaches and Warren Dunes in southwestern Michigan ; Sand Mountain in Nevada ; the Booming Dunes in the Namib Desert , Africa ; Porth Oer (also known as Whistling Sands) near Aberdaron in Wales ; Indiana Dunes in Indiana ; Barking Sands Beach in Hawaiʻi ; Ming Sha Shan in Dunhuang , China; Kotogahama Beach in Odashi , Japan; Singing Beach in Manchester-by-the-Sea, Massachusetts ; near Mesaieed in Qatar ; and Gebel Naqous, near el-Tor , South Sinai, Egypt. The song "The Singing Sands of Alamosa" on Bing Crosby 's 1947 album Drifting and Dreaming [ 2 ] [ 3 ] was inspired by the sand dunes near Alamosa, Colorado, now Great Sand Dunes National Park . On some beaches around the world, dry sand makes a singing, squeaking, whistling, or screaming sound if a person scuffs or shuffles their feet with sufficient force. [ 4 ] [ 5 ] The phenomenon is not completely understood scientifically, but it has been found that quartz sand does this if the grains are highly spherical. [ 6 ] It is believed by some that the sand grains must be of similar size, so the sand must be well sorted by the actions of wind and waves, and that the grains should be close to spherical and have surfaces free of dust, pollution and organic matter. The "singing" sound is then believed to be produced by shear , as each layer of sand grains slides over the layer beneath it. The similarity in size, the uniformity, and the cleanness means that grains move up and down in unison over the layer of grains below them. Even small amounts of pollution on the sand grains reduce the friction enough to silence the sand. [ 5 ] Others believe that the sound is produced by the friction of grain against grain that have been coated with dried salt, in a way that is analogous to the way that the rosin on the bow produces sounds from a violin string. 
It has also been speculated that thin layers of gas trapped and released between the grains act as "percussive cushions" capable of vibration, and so produce the tones heard. [ 7 ] Not all sands sing, whistle or bark alike. The sounds heard have a wide frequency range that can be different for each patch of sand. Fine sands, where individual grains are barely visible to the naked eye, produce only a poor, weak sounding bark. Medium-sized grains can emit a range of sounds, from a faint squeak or a high-pitched sound, to the best and loudest barks when scuffed enthusiastically. [ 5 ] Water also influences the effect. Wet sands are usually silent because the grains stick together instead of sliding past each other, but small amounts of water can actually raise the pitch of the sounds produced. The most common part of the beach on which to hear singing sand is the dry upper beach above the normal high tide line, but singing has been reported on the lower beach near the low tide line as well. [ 5 ] Singing sand has been reported on 33 beaches in the British Isles, [ 8 ] including in the north of Wales and on the little island of Eigg in the Scottish Hebrides . It has also been reported at a number of beaches along North America's Atlantic coast. Singing sands can be found at Souris , on the eastern tip of Prince Edward Island , at the Singing Sands beach in Basin Head Provincial Park ; [ 6 ] on Singing Beach in Manchester-by-the-Sea, Massachusetts , [ 4 ] as well as in the fresh waters of Lake Superior [ 9 ] and Lake Michigan [ 10 ] and in other places.
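The shear-rate idea above can be made concrete with a back-of-the-envelope sketch. Under one proposed scaling (an assumption adopted here for illustration, not a settled result), the emission frequency is of the order of the rate at which avalanching grains pass over one another, roughly sqrt(g/d) for grain diameter d:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def shear_rate_frequency(grain_diameter_m):
    """Order-of-magnitude emission frequency under a shear-rate model:
    grains in an avalanching layer pass one another at a rate set by
    gravity and grain size, f ~ sqrt(g / d).  Illustrative scaling only;
    real measurements also depend on humidity, sorting and layer depth."""
    return math.sqrt(G / grain_diameter_m)

# Finer grains give higher pitch under this scaling, consistent with the
# observation that grain size changes the character of the sound.
for d_um in (50, 200, 500):
    f = shear_rate_frequency(d_um * 1e-6)
    print(f"d = {d_um} um  ->  f ~ {f:.0f} Hz")
```

Note how a grain diameter of a few tens of micrometres lands in the few-hundred-hertz range quoted above, while coarser dune sand gives a much lower rumble; the prefactor is deliberately omitted since it varies between studies.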
https://en.wikipedia.org/wiki/Singing_sand
In mechanical engineering , the cylinders of reciprocating engines are often classified by whether they are single- or double-acting, depending on how the working fluid acts on the piston . A single-acting cylinder in a reciprocating engine is a cylinder in which the working fluid acts on one side of the piston only. A single-acting cylinder relies on the load, springs, other cylinders, or the momentum of a flywheel , to push the piston back in the other direction. Single-acting cylinders are found in most kinds of reciprocating engine. They are almost universal in internal combustion engines (e.g. petrol and diesel engines ) and are also used in many external combustion engines such as Stirling engines and some steam engines . They are also found in pumps and hydraulic rams . A double-acting cylinder is a cylinder in which the working fluid acts alternately on both sides of the piston. In order to connect the piston in a double-acting cylinder to an external mechanism, such as a crank shaft , a hole must be provided in one end of the cylinder for the piston rod, and this is fitted with a gland or " stuffing box " to prevent escape of the working fluid. Double-acting cylinders are common in steam engines but unusual in other engine types. Many hydraulic and pneumatic cylinders use them where it is needed to produce a force in both directions. A double-acting hydraulic cylinder has a port at each end, supplied with hydraulic fluid for both the retraction and extension of the piston. A double-acting cylinder is used where an external force is not available to retract the piston or it can be used where high force is required in both directions of travel. Steam engines normally use double-acting cylinders. However, early steam engines, such as atmospheric engines and some beam engines , were single-acting. These often transmitted their force through the beam by means of chains and an "arch head", as only a tension in one direction was needed. 
Where these were used for pumping mine shafts and only had to act against a load in one direction, single-acting designs remained in use for many years. The main impetus towards double-acting cylinders came when James Watt was trying to develop a rotative beam engine , that could be used to drive machinery via an output shaft. [ 1 ] Compared to a single-cylinder engine, a double-acting cylinder gave a smoother power output. The high-pressure engine, [ i ] as developed by Richard Trevithick , used double-acting pistons and became the model for most steam engines afterwards. Some of the later steam engines, the high-speed steam engines , used single-acting pistons of a new design. The crosshead became part of the piston, [ ii ] and there was no longer any piston rod. This was for similar reasons to the internal combustion engine, as avoiding the piston rod and its seals allowed a more effective crankcase lubrication system. Small models and toys often use single-acting cylinders for the above reason but also to reduce manufacturing costs. In contrast to steam engines, nearly all internal combustion engines have used single-acting cylinders. Their pistons are usually trunk pistons , where the gudgeon pin joint of the connecting rod is within the piston itself. This avoids the crosshead, piston rod and its sealing gland, but it also makes a single-acting piston almost essential. This, in turn, has the advantage of allowing easy access to the bottom of the piston for lubricating oil, which also has an important cooling function. This avoids local overheating of the piston and rings. Small petrol two-stroke engines , such as for motorcycles, use crankcase compression rather than a separate supercharger or scavenge blower . This uses both sides of the piston as working faces, the lower side of the piston acting as a piston compressor to compress the inlet charge ready for the next stroke. 
The piston is still considered as single-acting, as only one of these faces produces power. Some early gas engines , such as Lenoir 's original engines, from around 1860, were double-acting and followed steam engines in their design. Internal combustion engines soon switched to single-acting cylinders. This was for two reasons: as for the high-speed steam engine, the high force on each piston and its connecting rod was so great that it placed large demands upon the bearings. A single-acting piston, where the direction of the forces was consistently compressive along the connecting rod, allowed for tighter bearing clearances. [ 2 ] Secondly the need for large valve areas to provide good gas flow, whilst requiring a small volume for the combustion chamber so as to provide good compression , monopolised the space available in the cylinder head . Lenoir's steam engine-derived cylinder was inadequate for the petrol engine and so a new design, based around poppet valves and a single-acting trunk piston appeared instead. Extremely large gas engines were also built as blowing engines for blast furnaces , with one or two extremely large cylinders and powered by the burning of furnace gas . These, particularly those built by Körting , used double-acting cylinders. Gas engines require little or no compression of their charge, in comparison to petrol or compression-ignition engines , and so the double-acting cylinder designs were still adequate, despite their narrow, convoluted passageways. Double-acting cylinders have been infrequently used for internal combustion engines since, although Burmeister & Wain made 2-stroke cycle double-acting (2-SCDA) diesels for marine propulsion before 1930. The first, of 7,000 hp, was fitted in the British MV Amerika (United Baltic Co.) in 1929. [ 3 ] [ 4 ] The two B&W SCDA engines fitted to the MV Stirling Castle in 1937 produced 24,000 hp each. 
In 1935 the US submarine USS Pompano was ordered as part of the Perch class . [ iii ] Six boats were built, with three different diesel engine designs from different makers. Pompano was fitted with H.O.R. ( Hooven-Owens-Rentschler ) 8-cylinder double-acting engines that were a licence-built version of the MAN auxiliary engines of the cruiser Leipzig . [ 5 ] Owing to the limited space available within the submarines, either opposed-piston , or, in this case, double-acting engines were favoured for being more compact. Pompano ' s engines were a complete failure and were wrecked during trials before even leaving the Mare Island Navy Yard . Pompano was laid up for eight months until 1938 while the engines were replaced. [ 5 ] Even then the engines were regarded as unsatisfactory and were replaced by Fairbanks-Morse engines in 1942. [ 5 ] While Pompano was still being built, the Salmon -class submarines were ordered. Three of these were built by Electric Boat , with a 9-cylinder development of the H.O.R. engine. [ 6 ] Although not as great a failure as Pompano ' s engines, this version was still troublesome and the boats were later re-engined with the same single-acting General Motors 16-248 V16 engines as their sister boats. [ 6 ] Other Electric Boat-constructed submarines of the Sargo and Seadragon classes, as well as the first few of the Gato class, were also built with these 9-cylinder H.O.R. engines, but later re-engined. [ 7 ] A hydraulic cylinder is a mechanical actuator that is powered by a pressurised liquid, typically oil. It has many applications, notably in construction equipment ( engineering vehicles ), manufacturing machinery , and civil engineering.
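The asymmetry between the two working faces of a double-acting hydraulic cylinder can be quantified: fluid on the rod side acts only on the annulus left after subtracting the rod's cross-section, so the retraction force is always smaller than the extension force at the same pressure. A minimal sketch (the bore, rod diameter and pressure are illustrative values, not from the article):

```python
import math

def cylinder_forces(pressure_pa, bore_m, rod_m):
    """Force a double-acting hydraulic cylinder can exert in each direction.
    Extension pressure acts on the full piston face; retraction pressure acts
    on the annulus left once the rod area is subtracted, so pull < push."""
    piston_area = math.pi * (bore_m / 2) ** 2
    rod_area = math.pi * (rod_m / 2) ** 2
    extend = pressure_pa * piston_area
    retract = pressure_pa * (piston_area - rod_area)
    return extend, retract

# e.g. a 100 mm bore, 50 mm rod cylinder at 20 MPa
push, pull = cylinder_forces(20e6, 0.100, 0.050)
print(f"extend: {push/1000:.0f} kN, retract: {pull/1000:.0f} kN")
# -> extend: 157 kN, retract: 118 kN
```

This is also why, in a single-acting circuit, an external load or spring is needed for the return stroke: there is no fluid acting on the opposite face at all.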
https://en.wikipedia.org/wiki/Single-_and_double-acting_cylinders
In phylogenetics , a single-access key (also called dichotomous key , sequential key , analytical key , [ 1 ] or pathway key ) is an identification key where the sequence and structure of identification steps is fixed by the author of the key. At each point in the decision process, multiple alternatives are offered, each leading to a result or a further choice. The alternatives are commonly called "leads", and the set of leads at a given point a "couplet". Single access keys are closely related to decision trees and binary search trees . However, to improve the usability and reliability of keys, many single-access keys incorporate reticulation , changing the tree structure into a directed acyclic graph . Single-access keys have been in use for several hundred years. [ 2 ] They may be printed in various styles (e. g., linked, nested, indented, graphically branching ) or used as interactive, computer-aided keys. In the latter case, either a longer part of the key may be displayed (optionally hyperlinked), or only a single question may be displayed at a time. If the key has several choices it is described as polychotomous or polytomous . If the entire key consists of exactly two choices at each branching point, the key is called dichotomous . The majority of single-access keys are dichotomous. Any single-access key organizes a large set of items into a structure that breaks them down into smaller, more accessible subsets, with many keys leading to the smallest available classification unit (a species or infraspecific taxon typically in the form of binomial nomenclature ). However, a trade-off exists between keys that concentrate on making identification most convenient and reliable ( diagnostic keys ), and keys which aim to reflect the scientific classification of organisms ( synoptic keys ). The first type of keys limits the choice of characteristics to those most reliable, convenient, and available under certain conditions. 
Multiple diagnostic keys may be offered for the same group of organisms: Diagnostic keys may be designed for field ( field guides ) or laboratory use, for summer or winter use, and they may use geographic distribution or habitat preference of organisms as accessory characteristics. They do so at the expense of creating artificial groups in the key. An example of a diagnostic key is shown below. It is not based on the taxonomic classification of the included species — compare with the botanical classification of oaks . In contrast, synoptic keys follow the taxonomic classification as closely as possible. Where the classification is already based on phylogenetic studies, the key represents the evolutionary relationships within the group. To achieve this, these keys often have to use more difficult characteristics, which may not always be available in the field, and which may require instruments like a hand lens or microscope. Because of convergent evolution , superficially similar species may be separated early in the key, with superficially different, but genetically closely related species being separated much later in the key. Synoptic keys are typically found in scientific treatments of a taxonomic group ("monographs"). An example of a synoptic key (corresponding to the diagnostic key shown below) is shown further below. In plants , flower and fruit characteristics often are important for primary taxonomic classification: Example of a diagnostic dichotomous key for some eastern United States oaks based on leaf characteristics. This key first differentiates between oaks with entire leaves with normally smooth margins ( live oaks , Willow oak , Shingle oak ), and other oaks with lobed or toothed leaves. The following steps create smaller and smaller groups (e. g., red oak , white oak ), until the species has been keyed out.
Example of a synoptic (taxonomic) dichotomous key for some eastern United States oaks , reflecting taxonomic classification. The distinction between dichotomous (bifurcating) and polytomous (multifurcating) keys is a structural one, and identification key software may or may not support polytomous keys. This distinction is less arbitrary than it may appear. Allowing a variable number of choices is disadvantageous in the nested display style, where for each couplet in a polytomous key the entire key must be scanned to the end to determine whether more than a second lead may exist or not. Furthermore, if the alternative lead statements are complex (involving more than one characteristic and possibly "and", "or", or "not"), two alternative statements are significantly easier to understand than couplets with more alternatives. However, the latter consideration can easily be accommodated in a polytomous key where couplets based on a single characteristic may have more than two choices, and complex statements may be limited to two alternative leads. Another structural distinction is whether only lead statements or question-answer pairs are supported. Most traditional single-access keys use the "lead-style", where each option consists of a statement, only one of which is correct. Especially computer-aided keys occasionally use the "question-answer-style" instead, where a question is presented with a choice of answers. The second style is well known from multiple choice testing and therefore more intuitive for beginners. However, it creates problems when multiple characteristics need to be combined in a single step (as in "Flower red and spines present" versus "Flowers yellow to reddish-orange, spines absent" ). Lead style:

1. Flowers red ... 2
   Flowers white ... 3
   Flowers blue ... 4

Question-answer style:

1. What is the flower color?
   - red ... 2
   - white ... 3
   - blue ... 4

Single-access keys may be presented in different styles.
The two most frequently encountered styles are the linked and the nested style. The nested style gives an excellent overview of the structure of the key. With a short key and moderate indentation it can be easy to follow and even backtrace an erroneous identification path. The nested style is problematic with polytomous keys, where each key must be scanned to the end to verify that no further leads exist within a couplet. It also does not easily support reticulation (which requires a link method similar to the one used in the linked style). A large amount of knowledge about reliable and efficient identification procedures may be incorporated in good single-access keys. Characteristics that are reliable and convenient to observe most of the time and for most species (or taxa), and which further provide a well-balanced key (the leads splitting the number of species evenly), will be preferred at the start of the key. However, in practice it is difficult to achieve this goal for all taxa in all conditions. If the information for a given identification step is not available, several potential leads must be followed and identification becomes increasingly difficult. Although software exists that helps in skipping questions in a single-access key, [ 3 ] the more general solution to this problem is the construction and use of multi-access keys , which allow a free choice of identification steps and are easily adaptable to different taxa (e. g., very small or very large) as well as different circumstances of identification (e. g., in the field or laboratory).
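As a data structure, a single-access key is simply a tree (or, with reticulation, a directed acyclic graph) walked one couplet at a time. The sketch below encodes a tiny two-couplet question-answer-style key; the oak characters are simplified from the diagnostic example above and are illustrative only:

```python
# Each couplet is a question with leads; each lead points either to another
# couplet (a dict) or to a final determination (a string).
KEY = {
    "question": "Leaf margin entire and smooth?",
    "leads": {
        "yes": {
            "question": "Leaf narrow and willow-like?",
            "leads": {"yes": "Willow oak", "no": "Shingle oak"},
        },
        "no": {
            "question": "Lobes bristle-tipped?",
            "leads": {"yes": "Red oak group", "no": "White oak group"},
        },
    },
}

def identify(key, answers):
    """Walk the key, consuming one answer per couplet, until a leaf is reached."""
    node = key
    for answer in answers:
        node = node["leads"][answer]
        if isinstance(node, str):      # reached a determination
            return node
    raise ValueError("ran out of answers before reaching a determination")

print(identify(KEY, ["no", "yes"]))   # -> Red oak group
```

A dichotomous key restricts every `leads` mapping to exactly two entries; a polytomous key relaxes that constraint without changing the traversal logic at all, which is why software support for polytomy is mainly a display issue rather than an algorithmic one.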
https://en.wikipedia.org/wiki/Single-access_key
A single-atom transistor is a device that can open and close an electrical circuit by the controlled and reversible repositioning of one single atom . The single-atom transistor was invented and first demonstrated in 2002 by Dr. Fangqing Xie in Prof. Thomas Schimmel's Group at the Karlsruhe Institute of Technology (former University of Karlsruhe). [ 1 ] By means of a small electrical voltage applied to a control electrode , the so-called gate electrode , a single silver atom is reversibly moved in and out of a tiny junction, in this way closing and opening an electrical contact. Therefore, the single-atom transistor works as an atomic switch or atomic relay , where the switchable atom opens and closes the gap between two tiny electrodes called source and drain . [ 2 ] [ 3 ] [ 4 ] The single-atom transistor opens perspectives for the development of future atomic-scale logics and quantum electronics. At the same time, the device of the Karlsruhe team of researchers marks the lower limit of miniaturization , as feature sizes smaller than one atom cannot be produced lithographically . The device represents a quantum transistor, the conductance of the source-drain channel being defined by the rules of quantum mechanics . It can be operated at room temperature and at ambient conditions, i.e. neither cooling nor vacuum are required. [ 5 ] Few-atom transistors have been developed at Waseda University and at the Italian CNR by Takahiro Shinada and Enrico Prati, who observed the Anderson–Mott transition in miniature by employing arrays of only two, four and six individually implanted As or P atoms. [ 6 ]
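The quantum-mechanical rules mentioned above fix a natural unit for the conductance of such a point contact: each fully open transport channel contributes one conductance quantum, G0 = 2e²/h. The sketch below just evaluates this constant from the defining SI values; the remark that a single-atom metallic contact typically conducts near 1 G0 is the commonly reported ballpark, not a figure from this article:

```python
# Conductance of a quantum point contact is quantized in units of G0 = 2e^2/h.
E = 1.602176634e-19   # elementary charge, C (exact SI value)
H = 6.62607015e-34    # Planck constant, J*s (exact SI value)

G0 = 2 * E**2 / H     # conductance quantum, siemens
R0 = 1 / G0           # corresponding resistance, ohms

print(f"G0 = {G0*1e6:.2f} uS  (R0 = {R0/1000:.3f} kOhm)")
# -> G0 = 77.48 uS  (R0 = 12.906 kOhm)
```

The factor 2 accounts for spin degeneracy; a contact whose narrowest point is a single atom supports only a handful of such channels, which is why its conductance sits at or below a few G0 rather than scaling with the electrode geometry.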
https://en.wikipedia.org/wiki/Single-atom_transistor
Single-base extension (SBE) is a method for determining the identity of a nucleotide base at a specific position along a nucleic acid . The method is used to identify a single-nucleotide polymorphism (SNP). In the method, an oligonucleotide primer hybridizes to a complementary region along the nucleic acid to form a duplex, with the primer’s terminal 3’-end directly adjacent to the nucleotide base to be identified. Using a DNA polymerase, the oligonucleotide primer is enzymatically extended by a single base in the presence of all four nucleotide terminators; the nucleotide terminator complementary to the base in the template being interrogated is incorporated and identified. The presence of all four terminators suppresses misincorporation of non-complementary nucleotides. Many approaches can be taken for determining the identity of an incorporated terminator, including fluorescence labeling, mass labeling for mass spectrometry, isotope labeling, and tagging the base with a hapten and detecting chromogenically with an anti-hapten antibody-enzyme conjugate (e.g., via an ELISA format). The method was invented by Philip Goelet, Michael Knapp, Richard Douglas and Stephen Anderson while working at the company Molecular Tool. This approach was designed for high-throughput SNP genotyping and was originally called "Genetic Bit Analysis" (GBA). Illumina, Inc. utilizes this method in their Infinium technology ( http://www.illumina.com/technology/beadarray-technology/infinium-hd-assay.html ) to measure DNA methylation levels in the human genome.
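The core readout logic of single-base extension is plain Watson–Crick complementarity: the terminator incorporated is the complement of the template base immediately 5' of the region where the primer anneals. A toy sketch of that logic (the sequences, and the simplifying assumption that the primer anneals at exactly one site, are illustrative):

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def sbe_call(template, primer):
    """Return the dideoxy terminator incorporated when `primer` anneals to
    `template` (both given 5'->3') with its 3' end abutting the queried base."""
    # The primer binds the reverse complement of a template region.
    rc = "".join(COMPLEMENT[b] for b in reversed(primer))
    start = template.find(rc)
    if start <= 0:
        # -1: primer does not anneal; 0: no base 5' of the annealed region.
        raise ValueError("primer does not anneal, or no base to interrogate")
    queried = template[start - 1]   # base just 5' of the annealed region
    return COMPLEMENT[queried]      # the terminator that gets incorporated

# Interrogate the 'T' at position 3 of this made-up template:
print(sbe_call("GATTACAGGT", "ACCTGTA"))   # -> A
```

For SNP genotyping, running the same call against each allele of the template tells you which labelled terminator(s) to expect; a heterozygote yields signal from two terminators, a homozygote from one.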
https://en.wikipedia.org/wiki/Single-base_extension
Single-cell DNA template strand sequencing , or Strand-seq , is a technique for the selective sequencing of a daughter cell's parental template strands. [ 1 ] This technique offers a wide variety of applications, including the identification of sister chromatid exchanges in the parental cell prior to segregation, the assessment of non-random segregation of sister chromatids, the identification of misoriented contigs in genome assemblies, de novo genome assembly of both haplotypes in diploid organisms including humans, whole-chromosome haplotyping , and the identification of germline and somatic genomic structural variation , the latter of which can be detected robustly even in single cells. Strand-seq (single-cell and single-strand sequencing) was one of the first single-cell sequencing protocols described in 2012. [ 1 ] This genomic technique selectively sequences the parental template strands in the DNA libraries of single daughter cells. [ 1 ] As a proof of concept study, the authors demonstrated the ability to acquire sequence information from the Watson and/or Crick chromosomal strands in an individual DNA library, depending on the mode of chromatid segregation; a typical DNA library will always contain DNA from both strands. The authors were specifically interested in showing the utility of Strand-seq in detecting sister chromatid exchanges (SCEs) at high resolution. They successfully identified eight putative SCEs in a murine (mouse) embryonic stem (mES) cell line, with resolution up to 23 bp . This methodology has also been shown to hold great utility in discerning patterns of non-random chromatid segregation, especially in stem cell lineages. Furthermore, SCEs have been implicated as diagnostic indicators of genome stress, information that has utility in cancer biology.
Most research on this topic involves observing the assortment of chromosomal template strands through many cell development cycles and correlating non-random assortment with particular cell fates. Single-cell sequencing protocols were foundational in the development of this technique, but they differ in several aspects. Past methods have been used to track the inheritance patterns of chromatids on a per-strand basis and elucidate the process of non-random segregation: Pulse-chase experiments have been used for determining the segregation patterns of chromosomes in addition to studying other time-dependent cellular processes. [ 2 ] Briefly, pulse-chase assays allow researchers to track radioactively labelled molecules in the cell. In experiments used to study non-random chromosome assortment, stem cells are labeled or "pulsed" with a nucleotide analog that is incorporated in the replicated DNA strands. [ 3 ] This allows the nascent strands to be tracked through many rounds of replication . Unfortunately, this method is found to have poor resolution as it can only be observed at the chromatid level. CO-FISH, or strand-specific fluorescence in situ hybridization, facilitates strand-specific targeting of DNA with fluorescently-tagged probes. [ 4 ] It exploits the uniform orientation of major satellites relative to the direction of telomeres , thus allowing strands to be unambiguously designated as "Watson" or "Crick" strands. Using unidirectional probes that recognize major satellite regions, coupled to fluorescently labelled dyes, individual strands can be bound. [ 4 ] To ensure that only the template strand is labelled, the newly formed strands must be degraded by BrdU incorporation and photolysis . This protocol offers improved cytogenetic resolution, allowing researchers to observe single strands as opposed to whole chromatids with pulse-chase experiments. Moreover, non-random segregation of chromatids can be directly assayed by targeting major satellite markers.
Cells of interest are cultured either in vivo or in vitro. During S-phase, cells are treated with bromodeoxyuridine (BrdU) , which is then incorporated into their nascent DNA, acting as a substitute for thymidine. After at least one replication event has occurred, the daughter cells are synchronized at the G2 phase and individually separated by fluorescence-activated cell sorting (FACS) . The cells are directly sorted into lysis buffer and their DNA is extracted. Having been arrested at a specified number of generations (usually one), the inheritance patterns of sister chromatids can be assessed. The following methods concentrate on the DNA sequencing of a single daughter cell's DNA. At this point the chromosomes are composed of nascent strands with BrdU in place of thymidine, and the original template strands are primed for DNA sequencing library preparation. Since this protocol was published in 2012, [ 1 ] the canonical methodology is only well described for Illumina sequencing platforms; the protocol could readily be adapted for other sequencing platforms, depending on the application. Next, the DNA is incubated with a special dye such that when the BrdU-dye complex is excited by UV light, nascent strands are nicked by photolysis . This process inhibits polymerase chain reaction (PCR) amplification of the nascent strand, allowing only the parental template strands to be amplified. Library construction proceeds as normal for Illumina paired-end sequencing. Multiplexing PCR primers are then ligated to the PCR amplicons, with hexamer barcodes identifying the cell from which each fragment is derived. Unlike other single-cell sequencing protocols, Strand-seq does not utilize multiple displacement amplification or MALBAC for DNA amplification. Rather, it is solely dependent on PCR. The majority of current applications for Strand-seq start by aligning sequenced reads to a reference genome.
Alignment can be performed using a variety of short-read aligners such as BWA and Bowtie. [ 5 ] [ 6 ] By aligning Strand-seq reads from a single cell to the reference genome, the inherited template strands can be determined. If the cell was sequenced after more than one generation, a pattern of chromatid assortment can be ascertained for the particular cell lineage at hand. The Bioinformatic Analysis of Inherited Templates (BAIT) was the first bioinformatic software to exclusively analyze reads generated from the Strand-seq methodology. [ 7 ] It begins by aligning the reads to a reference sequence, binning the genome into sections, and finally counting the number of Watson and Crick reads falling within each bin. From here, BAIT enables the identification of SCE events, misoriented contigs in the reference genome, aneuploid chromosomes and modes of sister chromatid segregation. It can also aid in assembling early-build genomes and assigning orphan scaffolds to locations within late-build genomes. Following BAIT, numerous bioinformatics tools have recently been introduced that use Strand-seq data for a variety of applications (see, for example, the following sections on haplotyping, de novo genome assembly, and discovery of structural variations in single cells, with reference to the respective linked articles). Strand-seq requires cells undergoing cell division for BrdU labeling, and thus is not applicable to formalin-fixed specimens or non-dividing cells. But it may be applied to normal mitotic cells and tissues, organoids, as well as leukemia and tumor samples, using fresh or frozen primary specimens. Strand-seq uses Illumina sequencing; applications that require sequence information from different sequencing technologies require new protocols or, alternatively, integration of data generated using distinct sequencing platforms, as recently showcased.
[ 8 ] [ 9 ] Authors of the initial papers describing Strand-seq showed that they were able to attain a 23 bp resolution for mapping SCEs, and other large chromosomal abnormalities are likely to share this mapping resolution (if breakpoint fine-mapping is performed). Resolution, however, is dependent on a combination of the sequencing platform used, library preparation protocols, and the number of cells analysed as well as the depth of sequencing per cell. Precision can be expected to increase further with sequencing technologies that do not incur errors in homopolymeric repeats. Strand-seq was initially proposed as a tool to identify sister chromatid exchanges. [ 1 ] Because SCEs are localized to individual cells, sequencing the DNA of more than one cell would naturally scatter these effects and suggest an absence of SCE events. Moreover, classic single cell sequencing techniques are unable to show these events due to heterogeneous amplification biases and dual-strand sequence information, thereby necessitating Strand-seq. Using the reference alignment information, researchers can identify an SCE if the directionality of an inherited template strand changes. Misoriented contigs are present in reference genomes at significant rates (e.g., 1% in the mouse reference genome). [ 1 ] [ 10 ] Strand-seq, in contrast to conventional sequencing methods, can detect these misorientations. [ 1 ] Misoriented contigs are present where strand inheritance changes from one homozygous state to the other (e.g., WW to CC, or CC to WW). Moreover, this state change is visible in every Strand-seq library, reinforcing the presence of a misoriented contig. [ 7 ] Prior to the 1960s, it was assumed that sister chromatids were segregated randomly into daughter cells. However, non-random segregation of sister chromatids has been observed in mammalian cells ever since.
[ 11 ] [ 12 ] There have been a few hypotheses proposed to explain the non-random segregation, including the Immortal Strand Hypothesis and the Silent Sister Hypothesis, one of which may eventually be verified by methods involving Strand-seq. "Immortal Strand Hypothesis": Mutations occur every time a cell divides. Certain long-lived cells (e.g., stem cells) may be particularly affected by these mutations. The Immortal Strand Hypothesis proposes that these cells avoid mutation accumulation by consistently retaining parental template strands. [ 9 ] For this hypothesis to be true, sister chromatids from each and every chromosome must segregate in a non-random fashion. Additionally, one cell will retain the exact same set of template strands after each division, giving the rest to the other cell products of the division. [ 13 ] "Silent Sister Hypothesis": This hypothesis states that sister chromatids have differing epigenetic signatures, and thereby differing regulation of expression. When replication occurs, non-random segregation of sister chromatids determines the fates of the daughter cells. [ 14 ] Assessing the validity of this hypothesis would require a joint analysis of Strand-seq and gene expression profiles for both daughter cells. [ 13 ] The output of BAIT shows the inheritance of parental template strands along the genome. [ 7 ] Normally, two template strands are inherited for each autosome, and any deviation from this number indicates an instance of aneuploidy , which can be visualised in single cells. Inversions are a class of copy-number balanced structural variation , which lead to a change in strand directionality readily visualised by Strand-seq. Strand-seq can hence be used to readily detect polymorphic inversions in humans and primates, including megabase-sized events embedded in large segmental duplications known to be inaccessible to Illumina sequencing .
[ 15 ] [ 16 ] A study published online in 2019 further demonstrated that, using Strand-seq, all classes of structural variation ≥200 kb, including deletions, duplications, inversions, inverted duplications, balanced translocations, unbalanced translocations, breakage-fusion-bridge cycle mediated complex DNA rearrangements, and chromothripsis events, are sensitively detected in single cells or subclones, using single-cell tri-channel processing (scTRIP). scTRIP works via joint modelling of read orientation, read depth, and haplotype phase to discover SVs in single cells. [ 17 ] Using scTRIP, structural variants are resolved by chromosome-length haplotype, which confers higher sensitivity and specificity for single-cell structural variant calling than other current technologies. [ 17 ] Since scTRIP does not require reads (or read pairs) traversing the boundaries (or breakpoints) of structural variants in single cells for variant calling, it does not suffer from known artefacts of single-cell methods based on whole genome amplification (i.e., so-called read chimeras) which tend to confound structural variation analysis in single cells. [ 17 ] Early-build genomes are quite fragmented, with unordered and unoriented contigs. Strand-seq provides directionality information to accompany the sequence, which ultimately helps resolve the placement of contigs. Contigs present in the same chromosome will exhibit the same directionality, provided SCE events have not occurred. Conversely, contigs present in different chromosomes will exhibit the same directionality in only 50% of the Strand-seq libraries. [ 7 ] Scaffolds, successive contigs separated by a gap, can be localized in the same manner. [ 7 ] The same principle of using strand direction to distinguish large DNA molecules enables the use of Strand-seq as a tool to construct whole-chromosome haplotypes of genetic variation, from telomere to telomere.
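The contig-placement logic described above can be sketched as a concordance test across libraries: two contigs on the same chromosome should show the same strand state in (almost) every library, contigs on different chromosomes should agree only about half the time, and a misoriented contig agrees in every library once its state is flipped. This is an illustrative sketch, not a published tool's method; the state encoding and interpretation thresholds are assumptions.

```python
def contig_concordance(states_a, states_b):
    """Fraction of Strand-seq libraries in which two contigs show the same
    template-strand state.  `states_a`/`states_b` are per-library state
    calls, e.g. ["WW", "WC", "CC", ...], aligned by library.

    direct  ~ 1.0 -> same chromosome, same orientation
    direct  ~ chance level -> different chromosomes
    flipped ~ 1.0 with low direct -> same chromosome, contig misoriented"""
    FLIP = {"WW": "CC", "CC": "WW", "WC": "WC"}
    pairs = [(a, b) for a, b in zip(states_a, states_b)
             if "NA" not in (a, b)]
    if not pairs:
        return float("nan"), float("nan")
    direct = sum(a == b for a, b in pairs) / len(pairs)
    flipped = sum(FLIP[a] == b for a, b in pairs) / len(pairs)
    return direct, flipped
```

The flipped score works because inverting a contig swaps WW and CC calls while leaving WC calls unchanged, which is the misorientation signature the text describes.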
[ 18 ] Recent reports have shown that Strand-seq can be computationally integrated with long-read sequencing technology, with the unique advantages of both technologies enabling the generation of highly contiguous haplotype-resolved de novo human genome assemblies. [ 8 ] These genomic assemblies integrate all forms of genetic variation, including single nucleotide variants, indels, and structural variation, even across complex genomic loci, and have recently been applied to generate comprehensive haplotype-aware maps of structural variation in a diversity panel of humans from distinct ancestries. [ 9 ] The possibility that BrdU substituted for thymine in the genomic DNA could induce double-stranded chromosomal breaks, and specifically result in SCEs, has been discussed previously in the literature. [ 19 ] [ 20 ] Additionally, BrdU incorporation has been suggested to interfere with strand segregation patterns. [ 13 ] If this is the case, the number of false-positive SCE annotations would be inflated. Therefore, many cells should be analyzed using the Strand-seq protocol to ensure that SCEs are in fact present in the population. For structural variants detected in single cells, detection of the same variant (on the same haplotype) in more than one cell can exclude BrdU incorporation as a possible cause. [ 17 ] The number of single-cell strands that need to be sequenced in order for an annotation to be accepted has yet to be proposed and is highly dependent on the questions being asked. As Strand-seq is founded on single-cell sequencing techniques, one must consider the problems faced with single-cell sequencing as well. These include the lack of standards for cell isolation and amplification. Although previous Strand-seq studies isolated cells using FACS, microfluidics serves as an attractive alternative.
PCR has been shown to produce more erroneous amplification products than strand-displacement-based methods such as MDA and MALBAC, whereas the latter two techniques generate chimeric reads as a byproduct that can result in erroneous structural variation calls. [ 21 ] [ 17 ] MDA and MALBAC also generate more dropouts than Strand-seq during SV detection because they require reads that cross the breakpoint of an SV to enable its detection (this is not required for any of the SV classes that Strand-seq can detect). [ 17 ] Strand displacement amplification also tends to generate more sequence and longer products, which could be beneficial for long-read sequencing technologies.
https://en.wikipedia.org/wiki/Single-cell_DNA_template_strand_sequencing
In cell biology , single-cell analysis and subcellular analysis [ 1 ] refer to the study of genomics , transcriptomics , proteomics , metabolomics , and cell–cell interactions at the level of an individual cell, as opposed to more conventional methods which study bulk populations of many cells. [ 2 ] [ 3 ] [ 4 ] The concept of single-cell analysis originated in the 1970s. Before the discovery of heterogeneity, single-cell analysis mainly referred to the analysis or manipulation of an individual cell within a bulk population of cells under the influence of a particular condition using optical or electron microscopy. [ 5 ] Due to the heterogeneity seen in both eukaryotic and prokaryotic cell populations, analyzing the biochemical processes and features of a single cell makes it possible to discover mechanisms which are too subtle or infrequent to be detectable when studying a bulk population of cells; in conventional multi-cell analysis, this variability is usually masked by the average behavior of the larger population. [ 6 ] Technologies such as fluorescence-activated cell sorting (FACS) allow the precise isolation of selected single cells from complex samples, while high-throughput single-cell partitioning technologies [ 7 ] [ 8 ] [ 9 ] enable the simultaneous molecular analysis of hundreds or thousands of individual unsorted cells; this is particularly useful for the analysis of variations in gene expression between genotypically identical cells, allowing the definition of otherwise undetectable cell subtypes. The development of new technologies is increasing scientists' ability to analyze the genome and transcriptome of single cells, [ 10 ] and to quantify their proteome and metabolome . [ 11 ] [ 12 ] [ 13 ] Mass spectrometry techniques have become important analytical tools for proteomic and metabolomic analysis of single cells. 
[ 14 ] [ 15 ] Recent advances have enabled the quantification of thousands of proteins across hundreds of single cells, [ 16 ] making possible new types of analysis. [ 17 ] [ 18 ] In situ sequencing and fluorescence in situ hybridization (FISH) do not require that cells be isolated and are increasingly being used for analysis of tissues. [ 19 ] Many single-cell analysis techniques require the isolation of individual cells. Methods currently used for single-cell isolation include: dielectrophoretic digital sorting, enzymatic digestion, FACS , hydrodynamic traps, laser capture microdissection , manual picking, microfluidics , Inkjet Printing (IJP), micromanipulation , serial dilution , and Raman tweezers. Manual single-cell picking is a method where cells in suspension are viewed under a microscope and individually picked using a micropipette . [ 20 ] [ 21 ] The Raman tweezers technique combines Raman spectroscopy with optical tweezers , using a laser beam to trap and manipulate cells. [ 22 ] The dielectrophoretic digital sorting method utilizes a semiconductor-controlled array of electrodes in a microfluidic chip to trap single cells in dielectrophoretic (DEP) cages. Cell identification is ensured by the combination of fluorescent markers with image observation. Precision delivery is ensured by the semiconductor-controlled motion of DEP cages in the flow cell. Inkjet printing [ 23 ] combines microfluidics with MEMS on a CMOS chip to provide individual control over a large number of print nozzles, using the same technology as home Inkjet printing. IJP allows for the adjustment of shear force to the sample ejection, greatly improving cell survivability. This approach, when combined with optical inspection and AI-driven image recognition, not only guarantees single-cell dispensing into the well plate or other medium but also can qualify the cell sample for quality of sample, rejecting defective cells, debris, and fragments. 
The development of hydrodynamic-based microfluidic biochips has been increasing over the years. In this technique, the cells or particles are trapped in a particular region for single-cell analysis, usually without applying any external force fields such as optical, electrical, magnetic, or acoustic fields. Such techniques are essential for studying single cells in their natural state, and researchers have highlighted the potential of further biochip development to meet research demands. Hydrodynamic microfluidics facilitates the development of passive lab-on-chip applications. [ 24 ] Hydrodynamic traps allow for the isolation of an individual cell in a "trap" at a given time by passive microfluidic transport. The number of isolated cells can be manipulated based on the number of traps in the system. The laser capture microdissection technique utilizes a laser to dissect and separate individual cells, or sections, from tissue samples of interest. The method involves observing cells under a microscope, identifying and labeling a section for analysis, and cutting it out with the laser; the cell can then be extracted for analysis. Microfluidics allows for the isolation of individual cells for further analyses. The following principles outline the various microfluidic processes for single-cell separation: droplet-in-oil-based isolation, pneumatic membrane valving, and hydrodynamic cell traps. Droplet-in-oil-based microfluidics uses oil-filled channels to hold separated aqueous droplets. This allows a single cell to be contained and isolated within the oil-filled channels. Pneumatic membrane valves manipulate air pressure to isolate individual cells by membrane deflection. The manipulation of the pressure source allows the opening or closing of channels in a microfluidic network.
Typically, the system requires an operator and is limited in throughput. Single-cell genomics is heavily dependent on increasing the copies of DNA found in the cell so that there is enough statistical power for accurate sequencing. This has led to the development of strategies for whole genome amplification (WGA). Currently, WGA strategies can be grouped into three categories. Adapter-linker PCR WGA is reported in many comparative studies to be the best-performing technique for diploid single-cell mutation analysis, thanks to its very low allelic dropout rate, [ 25 ] [ 26 ] [ 27 ] and for copy number variation profiling due to its low noise, both with aCGH and with NGS low-pass sequencing. [ 28 ] [ 29 ] This method is only applicable to human cells, both fixed and unfixed. One widely adopted WGA technique is called degenerate oligonucleotide–primed polymerase chain reaction (DOP-PCR). This method uses the well-established DNA amplification method PCR to amplify the entire genome using a large set of primers . Although simple, this method has been shown to have very low genome coverage. An improvement on DOP-PCR is multiple displacement amplification (MDA), which uses random primers and a high-fidelity enzyme , usually Φ29 DNA polymerase , to accomplish the amplification of larger fragments and greater genome coverage than DOP-PCR. Despite these improvements, MDA still has a sequence-dependent bias (certain parts of the genome are amplified more than others because of their sequence, causing some parts to be overrepresented in the resulting genomic dataset). The method shown to largely avoid the biases seen in DOP-PCR and MDA is Multiple Annealing and Looping–Based Amplification Cycles (MALBAC). Bias in this system is reduced by copying only from the original DNA strand instead of making copies of copies. The main drawback of MALBAC is that it has reduced accuracy compared to DOP-PCR and MDA due to the enzyme used to copy the DNA.
[ 11 ] Once amplified using any of the above techniques, the DNA can be sequenced using Sanger sequencing or next-generation sequencing (NGS). There are two major applications to studying the genome at the single-cell level. One application is to track the changes that occur in bacterial populations, where phenotypic differences are often seen. These differences are easily missed by bulk sequencing of a population, but can be observed in single-cell sequencing. [ 30 ] The second major application is to study the genetic evolution of cancer. Since cancer cells are constantly mutating it is of great interest to researchers to see how cancers evolve at the level of individual cells. These patterns of somatic mutations and copy number aberration can be observed using single-cell sequencing. [ 2 ] Single-cell transcriptomics uses sequencing techniques similar to single-cell genomics or direct detection using fluorescence in situ hybridization . The first step in quantifying the transcriptome is to convert RNA to cDNA using reverse transcriptase so that the contents of the cell can be sequenced using NGS methods as was done in genomics. Once converted, there is not enough cDNA to be sequenced so the same DNA amplification techniques discussed in single-cell genomics are applied to the cDNA to make sequencing possible. [ 2 ] Alternatively, fluorescent compounds attached to RNA hybridization probes are used to identify specific sequences and sequential application of different RNA probes will build up a comprehensive transcriptome. [ 31 ] [ 32 ] The purpose of single-cell transcriptomics is to determine what genes are being expressed in each individual cell. The transcriptome is often used to quantify gene expression instead of the proteome because of the difficulty currently associated with amplifying protein levels sufficiently to make them convenient to study. 
[ 2 ] There are three major reasons gene expression has been studied using this technique: to study gene dynamics, RNA splicing , and for cell typing. Gene dynamics are usually studied to determine what changes in gene expression affect different cell characteristics. For example, this type of transcriptomic analysis has often been used to study embryonic development . RNA splicing studies are focused on understanding the regulation of different transcript isoforms . Single-cell transcriptomics has also been used for cell typing, where the genes expressed in a cell are used to identify and classify different types of cells. The main goal in cell typing is to find a way to determine the identity of cells that do not express known genetic markers . [ 2 ] RNA expression can serve as a proxy for protein abundance. However, protein abundance is governed by the complex interplay between RNA expression and post-transcriptional processes. While more challenging technically, translation can be monitored by ribosome profiling in single cells. [ 33 ] There are three major approaches to single-cell proteomics: antibody-based methods, fluorescent protein-based methods, and mass spectroscopy-based methods. [ 34 ] [ 18 ] The antibody based methods use designed antibodies to bind to proteins of interest, allowing the relative abundance of multiple individual targets to be identified by one of several different techniques. Imaging: Antibodies can be bound to fluorescent molecules such as quantum dots or tagged with organic fluorophores for detection by fluorescence microscopy . Since different colored quantum dots or unique fluorophores are attached to each antibody it is possible to identify multiple different proteins in a single cell. Quantum dots can be washed off of the antibodies without damaging the sample, making it possible to do multiple rounds of protein quantification using this method on the same sample. 
[ 35 ] For the methods based on organic fluorophores, the fluorescent tags are attached by a reversible linkage such as a DNA-hybrid (that can be melted/dissociated under low-salt conditions) [ 36 ] or chemically inactivated, [ 37 ] allowing multiple cycles of analysis, with 3-5 targets quantified per cycle. These approaches have been used for quantifying protein abundance in patient biopsy samples (e.g. cancer) to map variable protein expression in tissues and/or tumors, [ 37 ] and to measure changes in protein expression and cell signaling in response to cancer treatment. [ 36 ] Mass Cytometry : rare metal isotopes, not normally found in cells or tissues, can be attached to the individual antibodies and detected by mass spectrometry for simultaneous and sensitive identification of proteins. [ 38 ] These techniques can be highly multiplexed for simultaneous quantification of many targets (panels of up to 38 markers) in single cells. [ 39 ] Antibody-DNA quantification: another antibody-based method converts protein levels to DNA levels. [ 34 ] The conversion to DNA makes it possible to amplify protein levels and use NGS to quantify proteins. In one such approach, two antibodies are selected for each protein needed to be quantified. The two antibodies are then modified to have single stranded DNA connected to them that are complementary. When the two antibodies bind to a protein the complementary strands will anneal and produce a double stranded segment of DNA that can then be amplified using PCR. Each pair of antibodies designed for one protein is tagged with a different DNA sequence. The DNA amplified from PCR can then be sequenced, and the protein levels quantified. [ 40 ] In mass spectroscopy-based proteomics there are three major steps needed for peptide identification: sample preparation, separation of peptides, and identification of peptides. 
Several groups have focused on oocytes or very early cleavage-stage cells, since these cells are unusually large and provide enough material for analysis. [ 41 ] [ 42 ] [ 43 ] [ 44 ] Another approach, single-cell proteomics by mass spectrometry (SCoPE-MS), has quantified thousands of proteins in mammalian cells of typical size (diameter of 10-15 μm) by combining carrier cells and single-cell barcoding. [ 45 ] [ 46 ] The second generation, SCoPE2, [ 47 ] [ 48 ] increased the throughput by automated and miniaturized sample preparation. [ 49 ] It also improved quantitative reliability and proteome coverage by data-driven optimization of LC-MS/MS [ 50 ] and peptide identification. [ 51 ] The sensitivity and consistency of these methods have been further improved by prioritization [ 52 ] and massively parallel sample preparation in nanoliter-size droplets. [ 53 ] Another direction for single-cell protein analysis is based on a scalable framework of multiplexed data-independent acquisition (plexDIA), which enables time savings through parallel analysis of both peptide ions and protein samples, thereby realizing multiplicative gains in throughput. [ 54 ] [ 55 ] [ 56 ] The separation of differently sized proteins can be accomplished by using capillary electrophoresis (CE) or liquid chromatography (LC) (liquid chromatography combined with mass spectrometry is also known as LC-MS). [ 42 ] [ 43 ] [ 44 ] [ 45 ] This step gives order to the peptides before quantification using tandem mass spectrometry (MS/MS). The major difference between quantification methods is that some use labels on the peptides, such as tandem mass tags (TMT) or dimethyl labels, to identify which cell a certain protein came from (proteins coming from each cell have a different label), while others use no labels and quantify cells individually. The mass spectrometry data is then analyzed by running it through databases that count the identified peptides to quantify protein levels.
[ 42 ] [ 43 ] [ 44 ] [ 45 ] [ 57 ] These methods are very similar to those used to quantify the proteome of bulk cells , with modifications to accommodate the very small sample volume. [ 58 ] A huge variety of ionization techniques can be used to analyze single cells. The choice of ionization method is crucial for analyte detection: it determines which types of compounds can be ionized and in what state the resulting ions appear, e.g., their charge and possible fragmentation. [ 59 ] A few examples of ionization are mentioned in the paragraphs below. One of the possible ways to measure the content of single cells is nano-DESI (nanospray desorption electrospray ionization). Unlike desorption electrospray ionization , which is a desorption technique, nano-DESI is a liquid extraction technique that enables the sampling of small surfaces and is therefore suitable for single-cell analysis. In nano-DESI, two fused silica capillaries are set up in a V shape, closing an angle of approximately 85 degrees. The two capillaries touch, so that a liquid bridge can form between them, enabling the sampling of surfaces as small as a single cell. The primary capillary delivers the solvent to the sample surface where the extraction happens, and the secondary capillary directs the solvent with the extracted molecules to the MS inlet. Nano-DESI mass spectrometry (MS) enables sensitive molecular profiling and quantification of endogenous species, down to a few hundred femtomoles in single cells, in a higher-throughput manner. Lanekoff et al. identified 14 amino acids, 6 metabolites, and several lipid molecules from single cheek cells using nano-DESI MS. [ 60 ] In laser ablation electrospray ionization (LAESI), a laser is used to ablate the surface of the sample and the emitted molecules are ionized in the gas phase by charged droplets from an electrospray. Similar to DESI, the ionization happens under ambient conditions. Anderton et al.
used this ionization technique coupled to a Fourier transform mass spectrometer to analyze 200 single cells of Allium cepa (red onion) with high spatial resolution. [ 61 ] Secondary-ion mass spectrometry (SIMS) is a technique similar to DESI, but while DESI is an ambient ionization technique, SIMS happens in vacuum . The solid sample surface is bombarded by a highly focused beam of primary ions. As they hit the surface, molecules are emitted from the surface and ionized. The choice of primary ions determines the size of the beam and also the extent of ionization and fragmentation. [ 62 ] Pareek et al. performed metabolomics to trace how purines are synthesized within purinosomes and used isotope labeling and SIMS imaging to directly observe hotspots of metabolic activity within frozen HeLa cells. [ 63 ] In matrix-assisted laser desorption and ionization (MALDI), the sample is incorporated in a chemical matrix that is capable of absorbing energy from a laser. Similar to SIMS, ionization happens in vacuum. Laser irradiation ablates the matrix material from the surface and results in charged gas phase matrix particles, with the analyte molecules ionized from this charged chemical matrix. Liu et al. used MALDI-MS to detect eight phospholipids from single A549 cells . [ 64 ] MALDI MS imaging can be used for spatial metabolomics and single-cell analysis. [ 65 ] [ 66 ] The purpose of studying the proteome is to better understand the activity of proteins at the single-cell level. Since proteins are responsible for determining how the cell acts, understanding the proteome of single cells gives the best understanding of how a cell operates, and how gene expression changes in a cell due to different environmental stimuli. 
Although transcriptomics has the same purpose as proteomics, it is not as accurate at determining gene expression in cells, as it does not take into account post-transcriptional regulation (not all messenger RNA transcripts are actually translated into proteins). [ 12 ] Transcriptomics is still important, of course, as studying the difference between RNA levels and protein levels can give insight regarding which genes are post-transcriptionally regulated. There are four major methods used to quantify the metabolome of single cells: fluorescence-based detection, fluorescence biosensors, FRET biosensors, and mass spectrometry. The first three methods listed use fluorescence microscopy to detect molecules in a cell. Usually these assays use small fluorescent tags attached to molecules of interest; however, this has been shown to be too invasive for single-cell metabolomics, as it alters the activity of the metabolites. The current solution to this problem is to use fluorescent proteins which act as metabolite detectors, fluorescing whenever they bind to a metabolite of interest. [ 67 ] Mass spectrometry is becoming the most frequently used method for single-cell metabolomics. Its advantages are that there is no need to develop fluorescent proteins for all molecules of interest, and that it is capable of detecting metabolites in the femtomole range. [ 15 ] Similar to the methods discussed in proteomics, there has also been success in combining mass spectrometry with separation techniques such as capillary electrophoresis to quantify metabolites. This method is also capable of detecting metabolites present in femtomole concentrations. [ 67 ] Another method, utilizing capillary microsampling combined with mass spectrometry with ion mobility separation, has been demonstrated to enhance the molecular coverage and ion separation for single-cell metabolomics.
[ 21 ] [ 68 ] Researchers are trying to develop techniques that provide what current techniques lack: high throughput, higher sensitivity for metabolites of low abundance or low ionization efficiency, good replicability, and quantification of metabolites. [ 69 ] The purpose of single-cell metabolomics is to gain a better understanding, at the molecular level, of major biological topics such as cancer, stem cells, aging, and the development of drug resistance. In general, the focus of metabolomics is mostly on understanding how cells deal with environmental stresses at the molecular level and on giving a more dynamic understanding of cellular functions. [ 67 ] Single-cell transcriptomic assays have allowed the reconstruction of developmental trajectories. Branching of these trajectories describes cell differentiation. Various methods have been developed for reconstructing branching developmental trajectories from single-cell transcriptomic data. [ 70 ] [ 71 ] [ 72 ] [ 73 ] [ 74 ] They use various advanced mathematical concepts, from optimal transport [ 72 ] to principal graphs. [ 73 ] Some software libraries for reconstruction and visualization of lineage differentiation trajectories are freely available online. [ 75 ] Cell–cell interactions are characterized by stable and transient interactions.
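As a minimal illustration of trajectory reconstruction (far simpler than the optimal-transport or principal-graph methods cited above), a developmental ordering can be sketched by building a minimum spanning tree over cells in a reduced expression space and measuring tree distance from a chosen root cell. The root choice, the Euclidean metric, and the use of an MST are assumptions of this sketch.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path
from scipy.spatial.distance import pdist, squareform

def mst_pseudotime(X, root=0):
    """Order cells along a trajectory.

    X: (n_cells, n_features) matrix, e.g. a PCA of log-normalized
    expression.  Returns a pseudotime value in [0, 1] for every cell:
    its tree distance from the root cell along the MST."""
    d = squareform(pdist(X))        # pairwise Euclidean distances
    mst = minimum_spanning_tree(d)  # sparse MST over the cells
    pt = shortest_path(mst, directed=False, indices=root)
    return pt / pt.max()            # normalize to [0, 1]
```

Branch points of the tree then correspond to putative differentiation decisions, which is the structure the dedicated methods model far more carefully.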
https://en.wikipedia.org/wiki/Single-cell_analysis
Single-cell proteins ( SCP ) or microbial proteins [ 1 ] refer to edible unicellular microorganisms . [ a ] The biomass or protein extract from pure or mixed cultures of algae , yeasts , fungi or bacteria may be used as an ingredient or a substitute for protein-rich foods, and is suitable for human consumption or as animal feeds. Industrial agriculture is marked by a high water footprint , [ 2 ] high land use, [ 3 ] biodiversity destruction, [ 3 ] general environmental degradation [ 3 ] and contributes to climate change by emission of a third of all greenhouse gases ; [ 4 ] production of SCP does not necessarily exhibit any of these serious drawbacks. As of today, SCP is commonly grown on agricultural waste products, and as such inherits the ecological footprint and water footprint of industrial agriculture. However, SCP may also be produced entirely independent of agricultural waste products through autotrophic growth. [ 5 ] Thanks to the high diversity of microbial metabolism, autotrophic SCP provides several different modes of growth, versatile options of nutrients recycling, and a substantially increased efficiency compared to crops. [ 5 ] A 2021 publication showed that photovoltaic -driven microbial protein production could use 10 times less land for an equivalent amount of protein compared to soybean cultivation. [ 1 ] With the world population reaching 9 billion by 2050, there is strong evidence that agriculture will not be able to meet demand [ 6 ] and that there is serious risk of food shortage. [ 7 ] [ 8 ] Autotrophic SCP represents options of fail-safe mass food-production which can produce food reliably even under harsh climate conditions. [ 5 ] In 1781, processes for preparing highly concentrated forms of yeast were established. Research on Single Cell Protein Technology started a century ago when Max Delbrück and his colleagues found out the high value of surplus brewer’s yeast as a feeding supplement for animals. 
[ 9 ] During World War I and World War II , yeast-SCP was employed on a large scale in Germany to counteract food shortages during the war. Inventions for SCP production often represented milestones for biotechnology in general: for example, in 1919, Sak in Denmark and Hayduck in Germany invented a method named "Zulaufverfahren" ( fed-batch ), in which sugar solution was fed continuously to an aerated suspension of yeast instead of adding yeast to diluted sugar solution once ( batch ). [ 9 ] In the post-war period, the Food and Agriculture Organization of the United Nations (FAO) emphasized the hunger and malnutrition problems of the world in 1960 and introduced the concept of the protein gap, showing that 25% of the world population had a deficiency of protein intake in their diet. [ 9 ] It was also feared that agricultural production would fail to meet the increasing demands of food by humanity. By the mid-1960s, almost a quarter of a million tons of food yeast were being produced in different parts of the world, and the Soviet Union alone produced some 900,000 tons of food and fodder yeast by 1970. [ 9 ] In the 1960s, researchers at BP developed what they called the "proteins-from-oil" process: a technology for producing single-cell protein by yeast fed on waxy n-paraffins, a byproduct of oil refineries. Initial research work was done by Alfred Champagnat at BP's Lavera Refinery in France; a small pilot plant there started operations in March 1963, and construction of a second pilot plant, at the Grangemouth Oil Refinery in Britain, was authorized. [ 10 ] The term SCP was coined in 1966 by Carroll L. Wilson of MIT . [ 11 ] The "food from oil" idea became quite popular by the 1970s, with Champagnat being awarded the UNESCO Science Prize in 1976, [ 12 ] and paraffin-fed yeast facilities being built in a number of countries. The primary use of the product was as poultry and cattle feed.
[ 13 ] The Soviets were particularly enthusiastic, opening large "BVK" ( belkovo-vitaminny kontsentrat , i.e., "protein-vitamin concentrate") plants next to their oil refineries in Kstovo (1973) [ 14 ] [ 15 ] [ 16 ] and Kirishi (1974). [ 17 ] The Soviet Ministry of Microbiological Industry had eight plants of this kind by 1989. However, due to concerns about the toxicity of alkanes in SCP and pressure from environmentalist movements, the government decided to close them down or convert them to other microbiological processes. [ 17 ] Quorn is a range of vegetarian and vegan meat substitutes made from Fusarium venenatum mycoprotein , sold in Europe and North America. Another type of single-cell-protein-based meat analogue (which uses bacteria rather than fungi [ 18 ] ) is produced by Calysta . Other producers are Unibio (Denmark), Circe Biotechnologie (Austria) and String Bio (India). SCP has been argued to be a source of alternative or resilient food. [ 19 ] [ 20 ] Single-cell proteins develop when microbes ferment waste materials (including wood, straw, cannery and food-processing wastes, residues from alcohol production, hydrocarbons, or human and animal excreta). [ 21 ] With 'electric food' processes the inputs are electricity, CO 2 and trace minerals and chemicals such as fertiliser. [ 22 ] It is also possible to derive SCP from natural gas to use as a resilient food . [ 23 ] Similarly, SCP can be derived from waste plastic by upcycling . [ 24 ] The problem with extracting single-cell proteins from waste products is dilution and cost: they are found in very low concentrations, usually less than 5%. Engineers have developed ways to increase the concentration, including centrifugation, flotation, precipitation, coagulation and filtration, or the use of semi-permeable membranes. The single-cell protein must be dehydrated to approximately 10% moisture content and/or acidified to aid storage and prevent spoilage.
The methods for increasing the concentration to adequate levels, and the de-watering process, require equipment that is expensive and not always suitable for small-scale operations. It is economically prudent to feed the product locally and soon after it is produced. [ citation needed ] Microbes employed include (brand names in parentheses for commercialized examples): Large-scale production of microbial biomass has many advantages over traditional methods of producing proteins for food or feed. Although SCP shows very attractive features as a nutrient for humans, some problems deter its adoption on a global basis. These problems can be remediated, however. One common method consists of a heat treatment that kills the cells, inactivates proteases and allows endogenous RNases to hydrolyse RNA, releasing nucleotides from the cells into the culture broth. [ 38 ]
https://en.wikipedia.org/wiki/Single-cell_protein
Single-cell sequencing examines the nucleic acid sequence information from individual cells with optimized next-generation sequencing technologies, providing a higher resolution of cellular differences and a better understanding of the function of an individual cell in the context of its microenvironment. [ 1 ] For example, in cancer, sequencing the DNA of individual cells can give information about mutations carried by small populations of cells. In development, sequencing the RNAs expressed by individual cells can give insight into the existence and behavior of different cell types. [ 2 ] In microbial systems, a population of the same species can appear genetically clonal. Still, single-cell sequencing of RNA or epigenetic modifications can reveal cell-to-cell variability that may help populations rapidly adapt to survive in changing environments. [ 3 ] A typical human cell consists of about 2 x 3.3 billion base pairs of DNA and 600 million mRNA bases. Usually, a mix of millions of cells is used in sequencing the DNA or RNA using traditional methods like Sanger sequencing or next generation sequencing . By deep sequencing of DNA and RNA from a single cell, cellular functions can be investigated extensively. [ 1 ] Like typical next-generation sequencing experiments, single-cell sequencing protocols generally contain the following steps: isolation of a single cell, nucleic acid extraction and amplification , sequencing library preparation, sequencing, and bioinformatic data analysis. It is more challenging to perform single-cell sequencing than sequencing from cells in bulk. The minimal amount of starting materials from a single cell makes degradation, sample loss, and contamination exert pronounced effects on the quality of sequencing data. 
In addition, due to the picogram quantities of nucleic acids available, [ 4 ] heavy amplification is often needed during sample preparation for single-cell sequencing, resulting in uneven coverage, noise, and inaccurate quantification of sequencing data. Recent technical improvements make single-cell sequencing a promising tool for approaching a set of seemingly inaccessible problems. For example, heterogeneous samples, rare cell types, cell lineage relationships, mosaicism of somatic tissues, analyses of microbes that cannot be cultured, and disease evolution can all be elucidated through single-cell sequencing. [ 5 ] Single-cell sequencing was selected as the method of the year 2013 by Nature Publishing Group. [ 6 ] Single-cell DNA genome sequencing involves isolating a single cell, amplifying the whole genome or a region of interest, constructing sequencing libraries, and then applying next-generation DNA sequencing (for example Illumina , Ion Torrent ). Single-cell DNA sequencing has been widely applied in mammalian systems to study normal physiology and disease. Single-cell resolution can uncover the roles of genetic mosaicism or intra-tumor genetic heterogeneity in cancer development or treatment response. [ 7 ] In the context of microbiomes, a genome from a single unicellular organism is referred to as a single amplified genome (SAG). Advancements in single-cell DNA sequencing have enabled the collection of genomic data from uncultivated prokaryotic species present in complex microbiomes. [ 8 ] Although SAGs are characterized by low completeness and significant bias, recent computational advances have achieved the assembly of near-complete genomes from composite SAGs. [ 9 ] Data obtained from microorganisms might establish processes for culturing them in the future. [ 10 ] Some of the genome assembly tools used in single-cell sequencing include SPAdes , IDBA-UD, Cortex, and HyDA.
[ 11 ] A list of more than 100 different single-cell omics methods has been published. [ 12 ] Multiple displacement amplification (MDA) is a widely used technique, enabling the amplification of femtograms of DNA from a bacterium to the micrograms required for sequencing. Reagents required for MDA reactions include random primers and the DNA polymerase from bacteriophage phi29. In a 30 °C isothermal reaction, DNA is amplified with these reagents. As the polymerases manufacture new strands, a strand displacement reaction takes place, synthesizing multiple copies from each template DNA; at the same time, previously extended strands are displaced. MDA products average about 12 kb in length and can reach around 100 kb, enabling their use in DNA sequencing. [ 10 ] In 2017, a major improvement to this technique, called WGA-X, was introduced by taking advantage of a thermostable mutant of the phi29 polymerase, leading to better genome recovery from individual cells, in particular those with high G+C content. [ 13 ] MDA has also been implemented in a microfluidic droplet-based system to achieve highly parallelized single-cell whole genome amplification. By encapsulating single cells in droplets for DNA capture and amplification, this method offers reduced bias and enhanced throughput compared to conventional MDA. [ 14 ] Another common method is MALBAC . [ 15 ] As in MDA, this method begins with isothermal amplification, but the primers are flanked with a "common" sequence for downstream PCR amplification. As the preliminary amplicons are generated, the common sequence promotes self-ligation and the formation of "loops" that prevent further amplification. In contrast with MDA, the highly branched DNA network is not formed; instead, the loops are denatured in another temperature cycle, allowing the fragments to be amplified with PCR.
MALBAC has also been implemented in a microfluidic device, but the amplification performance was not significantly improved by encapsulation in nanoliter droplets. [ 16 ] Comparing MDA and MALBAC, MDA results in better genome coverage, but MALBAC provides more even coverage across the genome. MDA could be more effective for identifying SNPs , whereas MALBAC is preferred for detecting copy number variants. While performing MDA with a microfluidic device markedly reduces bias and contamination, the chemistry involved in MALBAC does not demonstrate the same potential for improved efficiency. A method particularly suitable for the discovery of genomic structural variation is single-cell DNA template strand sequencing (a.k.a. Strand-seq). [ 17 ] Using the principle of single-cell tri-channel processing (joint modelling of read orientation, read depth, and haplotype phase), Strand-seq enables discovery of the full spectrum of somatic structural variation classes ≥200 kb in size. Strand-seq overcomes the limitations of whole-genome-amplification-based methods for identifying somatic genetic variation classes in single cells, [ 18 ] because it is not susceptible to read chimeras, which lead to calling artefacts (discussed in detail in the section below), and is less affected by dropouts. The choice of method depends on the goal of the sequencing, because each method presents different advantages. [ 7 ] MDA of individual cell genomes results in highly uneven genome coverage, i.e. relative overrepresentation and underrepresentation of various regions of the template, leading to loss of some sequences. There are two components to this process: a) stochastic over- and under-amplification of random regions; and b) systematic bias against high-%GC regions. The stochastic component may be addressed by pooling single-cell MDA reactions from the same cell type, by employing fluorescent in situ hybridization (FISH), and/or by post-sequencing confirmation.
[ 10 ] The bias of MDA against high-%GC regions can be addressed by using thermostable polymerases, such as in the process called WGA-X. [ 13 ] Single-nucleotide polymorphisms (SNPs), which account for a large part of genetic variation in the human genome , and copy number variation (CNV) pose problems in single-cell sequencing, as does the limited amount of DNA extracted from a single cell. Because of the scant amount of DNA, accurate analysis poses problems even after amplification, since coverage is low and susceptible to errors. With MDA, average genome coverage is less than 80%, and SNPs that are not covered by sequencing reads will be missed. In addition, MDA shows a high rate of allele dropout, failing to detect alleles from heterozygous samples. Various SNP algorithms are currently in use, but none are specific to single-cell sequencing. MDA with CNV also poses the problem of false CNVs that conceal the real CNVs. To solve this, when patterns can be generated from false CNVs, algorithms can detect and remove this noise to produce true variants. [ 19 ] Strand-seq overcomes the limitations of methods based on whole genome amplification for genetic variant calling: since Strand-seq does not require reads (or read pairs) traversing the boundaries (or breakpoints) of CNVs or copy-balanced structural variant classes, it is less susceptible to common artefacts of single-cell methods based on whole genome amplification, which include variant-calling dropouts due to missing reads at the variant breakpoint, and read chimeras. [ 7 ] [ 18 ] Strand-seq discovers the full spectrum of structural variation classes of at least 200 kb in size, including breakage-fusion-bridge cycles and chromothripsis events, as well as balanced inversions and copy-number-balanced or -imbalanced translocations. [ 18 ] Structural variant calls made by Strand-seq are resolved by chromosome-length haplotype , which provides additional variant-calling specificity.
[ 18 ] As a current limitation, Strand-seq requires dividing cells for strand-specific labelling using bromodeoxyuridine (BrdU), and the method does not detect variants smaller than 200 kb in size, such as mobile element insertions . Microbiomes are among the main targets of single-cell genomics due to the difficulty of culturing the majority of microorganisms in most environments. Single-cell genomics is a powerful way to obtain microbial genome sequences without cultivation. This approach has been widely applied to marine, soil, subsurface, organismal, and other types of microbiomes in order to address a wide array of questions related to microbial ecology, evolution, public health and biotechnology potential. [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ] Cancer sequencing is also an emerging application of scDNAseq. Fresh or frozen tumors may be analyzed and categorized quite well with respect to SCNAs, SNVs, and rearrangements using whole-genome sequencing approaches. [ 29 ] Cancer scDNAseq is particularly useful for examining the depth of complexity and compound mutations present in amplified therapeutic targets such as receptor tyrosine kinase genes (EGFR, PDGFRA, etc.), where conventional population-level approaches applied to the bulk tumor are not able to resolve the co-occurrence patterns of these mutations within single cells of the tumor. Such overlap may provide redundancy of pathway activation and tumor cell resistance. Single-cell DNA methylome sequencing quantifies DNA methylation . There are several known types of methylation that occur in nature, including 5-methylcytosine (5mC), 5-hydroxymethylcytosine (5hmC), 6-methyladenosine (6mA), and 4-methylcytosine (4mC). In eukaryotes, especially animals, 5mC is widespread along the genome and plays an important role in regulating gene expression by repressing transposable elements .
[ 31 ] Sequencing 5mC in individual cells can reveal how epigenetic changes across genetically identical cells from a single tissue or population give rise to cells with different phenotypes. Bisulfite sequencing has become the gold standard for detecting and sequencing 5mC in single cells. [ 32 ] Treatment of DNA with bisulfite converts cytosine residues to uracil, but leaves 5-methylcytosine residues unaffected. Therefore, DNA that has been treated with bisulfite retains only methylated cytosines. To obtain the methylome readout, the bisulfite-treated sequence is aligned to an unmodified genome. Whole-genome bisulfite sequencing was achieved in single cells in 2014. [ 33 ] The method overcomes the loss of DNA associated with the typical procedure, in which sequencing adapters are added prior to bisulfite fragmentation. Instead, the adapters are added after the DNA is treated and fragmented with bisulfite, allowing all fragments to be amplified by PCR. [ 34 ] Using deep sequencing, this method captures ~40% of the total CpGs in each cell. With existing technology, DNA cannot be amplified prior to bisulfite treatment, as the 5mC marks will not be copied by the polymerase. Single-cell reduced representation bisulfite sequencing (scRRBS) is another method. [ 35 ] This method leverages the tendency of methylated cytosines to cluster at CpG islands (CGIs) to enrich for areas of the genome with a high CpG content. This reduces the cost of sequencing compared to whole-genome bisulfite sequencing, but limits the coverage of the method. When RRBS is applied to bulk samples, the majority of the CpG sites in gene promoters are detected, but sites in gene promoters account for only 10% of the CpG sites in the entire genome. [ 36 ] In single cells, 40% of the CpG sites from the bulk sample are detected. To increase coverage, this method can also be applied to a small pool of single cells. In a sample of 20 pooled single cells, 63% of the CpG sites from the bulk sample were detected.
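The bisulfite readout principle described above (unmethylated cytosine converts to uracil and is read as thymine, while 5mC stays cytosine) can be sketched in a few lines of code. This is a toy illustration, not a real analysis pipeline: the sequences and methylated positions are invented, and real callers must also handle alignment, strand, and incomplete conversion.

```python
# Toy illustration of bisulfite-based methylation calling.
# Unmethylated C reads as T after treatment; 5mC remains C, so comparing
# a treated read against the untreated reference recovers the call.

def bisulfite_convert(seq, methylated_positions):
    """Simulate bisulfite treatment of one DNA strand."""
    return "".join(
        base if base != "C" or i in methylated_positions else "T"
        for i, base in enumerate(seq)
    )

def call_methylation(reference, treated_read):
    """A reference cytosine that still reads as C was methylated."""
    return {
        i: (read_base == "C")  # C retained -> methylated (5mC)
        for i, (ref_base, read_base) in enumerate(zip(reference, treated_read))
        if ref_base == "C"
    }

reference = "ACGTCCGA"                 # cytosines at positions 1, 4, 5
treated = bisulfite_convert(reference, methylated_positions={4})
calls = call_methylation(reference, treated)
# Position 4 retained its C (methylated); positions 1 and 5 read as T.
```

The key point the sketch makes concrete is why amplification cannot precede treatment: a polymerase copies 5mC as plain C, so the methylation signal would be erased before conversion.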
Pooling single cells is one strategy to increase methylome coverage, but at the cost of obscuring heterogeneity in the population of cells. While bisulfite sequencing remains the most widely used approach for 5mC detection, the chemical treatment is harsh and fragments and degrades the DNA. This effect is exacerbated when moving from bulk samples to single cells. Other methods to detect DNA methylation include methylation-sensitive restriction enzymes. Restriction enzymes also enable the detection of other types of methylation, such as 6mA with DpnI . [ 37 ] Nanopore-based sequencing also offers a route for direct methylation sequencing without fragmentation or modification of the original DNA. Nanopore sequencing has been used to sequence the methylomes of bacteria, which are dominated by 6mA and 4mC (as opposed to 5mC in eukaryotes), but this technique has not yet been scaled down to single cells. [ 38 ] Single-cell DNA methylation sequencing has been widely used to explore epigenetic differences in genetically similar cells. To validate these methods during their development, single-cell methylome data from a mixed population were successfully classified by hierarchical clustering to identify distinct cell types. [ 35 ] Another application is studying single cells during the first few cell divisions in early development to understand how different cell types emerge from a single embryo. [ 39 ] Single-cell whole-genome bisulfite sequencing has also been used to study rare but highly active cell types in cancer, such as circulating tumor cells (CTCs). [ 40 ] Single-cell transposase-accessible chromatin sequencing (scATAC-seq) maps chromatin accessibility across the genome. A transposase inserts sequencing adapters directly into open regions of chromatin, allowing those regions to be amplified and sequenced. [ 41 ] The two methods for library preparation in scATAC-seq are based on split-pool cellular indexing and microfluidics.
Standard methods such as microarrays and bulk RNA-seq analyze the RNA expression from large populations of cells. These measurements may obscure critical differences between individual cells in mixed-cell populations. [ 42 ] [ 43 ] Single-cell RNA sequencing (scRNA-seq) provides the expression profiles of individual cells and is considered the gold standard for defining cell states and phenotypes as of 2020. [ 44 ] Although it is impossible to obtain complete information on every RNA expressed by each cell, due to the small amount of material available, gene expression patterns can be identified through gene clustering analyses . [ 45 ] This can uncover rare cell types within a cell population that may never have been seen before. For example, one group of scientists performing scRNA-seq on neuroblastoma tumor tissue identified a rare pan-neuroblastoma cancer cell, which may be attractive for novel therapy approaches. [ 46 ] Current scRNA-seq protocols involve isolating single cells and their RNA, and then following the same steps as bulk RNA-seq: reverse transcription (RT), amplification, library generation and sequencing. Early methods separated individual cells into separate wells; more recent methods encapsulate individual cells in droplets in a microfluidic device, where the reverse transcription reaction takes place, converting RNAs to cDNAs. Each droplet carries a DNA "barcode" that uniquely labels the cDNAs derived from a single cell. Once reverse transcription is complete, the cDNAs from many cells can be mixed together for sequencing, because transcripts from a particular cell are identified by the unique barcode. [ 47 ] [ 48 ] Challenges for scRNA-Seq include preserving the initial relative abundance of mRNA in a cell and identifying rare transcripts. [ 49 ] The reverse transcription step is critical as the efficiency of the RT reaction determines how much of the cell's RNA population will be eventually analyzed by the sequencer. 
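The barcode-based pooling described above can be sketched as a small demultiplexing routine: because each cDNA carries the barcode of its droplet of origin, reads from many cells can be sequenced together and regrouped per cell in software. The read layout below (a fixed-length barcode prefix, shortened to 4 bp for readability) is an illustrative assumption, not any specific vendor's format.

```python
# Toy sketch of droplet-style demultiplexing: pooled reads are regrouped
# by the cell barcode that prefixes each read. Real barcodes are ~16 bp
# and require error correction against a whitelist; this skips both.
from collections import defaultdict

BARCODE_LEN = 4  # shortened for illustration

def demultiplex(reads):
    """Group cDNA sequences by the cell barcode prefixing each read."""
    cells = defaultdict(list)
    for read in reads:
        barcode, cdna = read[:BARCODE_LEN], read[BARCODE_LEN:]
        cells[barcode].append(cdna)
    return dict(cells)

pooled = ["AAAATTTGC", "CCCCGGGAT", "AAAAGCGCA"]
by_cell = demultiplex(pooled)
# Reads with barcode "AAAA" and "CCCC" are assigned to two different cells.
```

This is why mixing transcripts from many cells before sequencing loses no per-cell information: the assignment is recovered computationally from the barcode.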
The processivity of reverse transcriptases and the priming strategies used may affect full-length cDNA production and the generation of libraries biased toward the 3' or 5' end of genes. In the amplification step, either PCR or in vitro transcription (IVT) is currently used to amplify cDNA. One of the advantages of PCR-based methods is the ability to generate full-length cDNA. However, sequences that amplify with different PCR efficiencies (owing, for instance, to GC content or snapback structure) may be exponentially over- or under-represented, producing libraries with uneven coverage. On the other hand, while libraries generated by IVT can avoid PCR-induced sequence bias, specific sequences may be transcribed inefficiently, causing sequence drop-out or generating incomplete sequences. [ 1 ] [ 42 ] Several scRNA-seq protocols have been published: Tang et al., [ 50 ] STRT, [ 51 ] SMART-seq, [ 52 ] SORT-seq, [ 53 ] CEL-seq, [ 54 ] RAGE-seq, [ 55 ] Quartz-seq, [ 56 ] and C1-CAGE. [ 57 ] These protocols differ in their strategies for reverse transcription, cDNA synthesis and amplification, the possibility of accommodating sequence-specific barcodes (i.e., UMIs ), and the ability to process pooled samples. [ 58 ] In 2017, two approaches were introduced to simultaneously measure single-cell mRNA and protein expression through oligonucleotide-labeled antibodies: REAP-seq [ 59 ] and CITE-seq. [ 60 ] Collecting cellular contents following electrophysiological recording using patch-clamp has also allowed the development of the Patch-seq method, which is steadily gaining ground in neuroscience. [ 61 ] This platform for single-cell RNA sequencing allows transcriptomes to be analyzed on a cell-by-cell basis through microfluidic partitioning to capture single cells and prepare next-generation sequencing (NGS) cDNA libraries . [ 62 ] The droplet-based platform enables massively parallel sequencing of mRNA in large numbers of individual cells by capturing single cells in oil droplets.
[ 63 ] Overall, in a first stage individual cells are captured separately and lysed, then reverse transcription (RT) of mRNA is performed and a cDNA library is obtained. To select mRNA, the RT is performed with a single-stranded deoxythymine (oligo-dT) primer, which binds specifically to the poly(A) tail of mRNA molecules. Subsequently, the amplified cDNA library is used for sequencing. [ 64 ] The first step of the method is thus single-cell encapsulation and library preparation. Cells are encapsulated into Gel Beads-in-emulsion (GEMs) by an automated instrument. To form these vesicles, the instrument uses a microfluidic chip and combines all components with oil. Each functional GEM contains a single cell, a single Gel Bead, and RT reagents. Bound to the Gel Bead are oligonucleotides composed of four distinct parts: a PCR primer (essential for the sequencing); a 10X barcode; a Unique Molecular Identifier (UMI) sequence; and a poly(dT) sequence (which enables capture of poly-adenylated mRNA molecules). [ 65 ] Within each GEM reaction vesicle, a single cell is lysed and undergoes reverse transcription. cDNAs from the same cell are identified by their common 10X barcode. In addition, the number of UMIs reflects the gene expression level, and its analysis allows highly variable genes to be detected. These data are often used for cellular phenotype classification or the identification of new subpopulations. [ 66 ] The final step of the platform is the sequencing. The libraries generated can be used directly for single-cell whole-transcriptome sequencing or targeted sequencing workflows. The sequencing is performed using the Illumina dye sequencing method, which is based on the sequencing-by-synthesis (SBS) principle and the use of reversible dye terminators that enable the identification of each single nucleotide. In order to read the transcript sequence on one end, and the barcode and UMI on the other end, paired-end sequencing reads are required.
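The role of the UMI described above can be made concrete with a short sketch: because every PCR copy of a molecule inherits the same UMI, expression per gene per cell is the number of distinct UMIs observed, not the number of reads. The tuple layout and gene/barcode names below are illustrative assumptions about post-alignment data, not a specific tool's format.

```python
# Illustrative sketch of UMI-based molecule counting: PCR duplicates
# share a UMI, so counting distinct UMIs per (cell, gene) pair removes
# amplification bias from the expression estimate.
from collections import defaultdict

def umi_counts(records):
    """records: iterable of (cell_barcode, gene, umi) tuples."""
    seen = defaultdict(set)
    for cell, gene, umi in records:
        seen[(cell, gene)].add(umi)  # duplicate UMIs collapse here
    return {key: len(umis) for key, umis in seen.items()}

records = [
    ("cell1", "Actb", "AACG"),
    ("cell1", "Actb", "AACG"),   # PCR duplicate: same UMI, counted once
    ("cell1", "Actb", "GTTA"),
    ("cell2", "Actb", "AACG"),   # same UMI in another cell is independent
]
counts = umi_counts(records)
# cell1 carries 2 Actb molecules; cell2 carries 1, despite 3 vs 1 reads.
```

Keying the UMI set on the (cell, gene) pair is what lets the same short UMI sequence recur across cells without causing collisions.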
[ 67 ] The droplet-based platform allows the detection of rare cell types thanks to its high throughput: typically, 500 to 20,000 cells are captured per sample from a single-cell suspension. The protocol is easy to perform and allows a high cell recovery rate of up to 65%. The global workflow of the droplet-based platform takes 8 hours and is therefore faster than the microwell-based method (BD Rhapsody), which takes 10 hours. However, it has some limitations, such as the need for fresh samples and the detection of only about 10% of mRNA molecules. The major difference between the droplet-based method and the microwell-based method is the technique used for partitioning cells. [ 64 ] Most RNA-seq methods depend on poly(A) tail capture to enrich mRNA and deplete abundant and uninformative rRNA. Thus, they are often restricted to sequencing polyadenylated mRNA molecules. However, recent studies are now starting to appreciate the importance of non-poly(A) RNAs, such as long non-coding RNAs and microRNAs, in gene expression regulation. Small-seq is a single-cell method that captures small RNAs (<300 nucleotides) such as microRNAs, fragments of tRNAs and small nucleolar RNAs in mammalian cells. [ 68 ] This method uses a combination of "oligonucleotide masks" (which inhibit the capture of highly abundant 5.8S rRNA molecules) and size selection to exclude large RNA species such as other highly abundant rRNA molecules. For larger non-poly(A) RNAs, such as long non-coding RNA, histone mRNA, circular RNA, and enhancer RNA, size selection is not applicable for depleting the highly abundant ribosomal RNA molecules (18S and 28S rRNA). [ 69 ] Single-cell RamDA-seq is a method that achieves this by performing reverse transcription with random priming (random displacement amplification) in the presence of "not so random" (NSR) primers specifically designed to avoid priming on rRNA molecules.
[ 70 ] While this method successfully captures full-length total RNA transcripts for sequencing and detects a variety of non-poly(A) RNAs with high sensitivity, it has some limitations. The NSR primers were carefully designed according to rRNA sequences in the specific organism (mouse), and designing new primer sets for other species would take considerable effort. Recently, a CRISPR-based method named scDASH (single-cell depletion of abundant sequences by hybridization) demonstrated another approach to depleting rRNA sequences from single-cell total RNA-seq libraries. [ 71 ] Bacteria and other prokaryotes are currently not amenable to single-cell RNA-seq due to the lack of polyadenylated mRNA. Thus, the development of single-cell RNA-seq methods that do not depend on poly(A) tail capture will also be instrumental in enabling single-cell resolution microbiome studies. Bulk bacterial studies typically apply general rRNA depletion to overcome the lack of polyadenylated mRNA in bacteria, but at the single-cell level the amount of total RNA found in one cell is too small. [ 69 ] The lack of polyadenylated mRNA and the scarcity of total RNA in single bacterial cells are two important barriers limiting the deployment of scRNA-seq in bacteria. Limitations due to partial and biased scRNA-seq sampling also arise in large, ramified cell types like neurons . In these cells, single-cell isolation only captures RNA from the central cell bodies, separated from their processes during trituration. In the brain, it is estimated that over 40% of total RNA is in cellular processes such as axons , dendrites , and astrocyte end-feet , and thus not visible to scRNA-seq methods. [ 72 ] scRNA-seq is becoming widely used across biological disciplines including Developmental biology , [ 73 ] Neurology , [ 74 ] Oncology , [ 75 ] [ 76 ] [ 77 ] Immunology , [ 78 ] [ 79 ] Cardiovascular research [ 80 ] [ 81 ] and Infectious disease .
[ 82 ] [ 83 ] Using machine learning methods, data from bulk RNA-seq has been used to increase the signal-to-noise ratio in scRNA-seq. Specifically, scientists have used gene expression profiles from pan-cancer datasets to build coexpression networks , and have then applied these to single-cell gene expression profiles, obtaining a more robust method of detecting the presence of mutations in individual cells from transcript levels. [ 84 ] Some scRNA-seq methods have also been applied to single-cell microorganisms. SMART-seq2 has been used to analyze single-cell eukaryotic microbes, but since it relies on poly(A) tail capture, it has not been applied to prokaryotic cells. [ 85 ] Microfluidic approaches such as Drop-seq and the Fluidigm IFC-C1 devices have been used to sequence single malaria parasites or single yeast cells. [ 86 ] [ 87 ] The single-cell yeast study sought to characterize the heterogeneous stress tolerance of isogenic yeast cells before and after the yeast were exposed to salt stress. Single-cell analysis of several transcription factors by scRNA-seq revealed heterogeneity across the population. These results suggest that regulation varies among members of a population to increase the chances of survival for a fraction of the population. The first single-cell transcriptome analysis in a prokaryotic species was accomplished using the terminator exonuclease enzyme to selectively degrade rRNA, and rolling circle amplification (RCA) of mRNA. [ 88 ] In this method, the ends of single-stranded DNA were ligated together to form a circle, and the resulting loop was then used as a template for linear RNA amplification. The final product library was then analyzed by microarray, with low bias and good coverage. However, RCA has not been tested with RNA-seq, which typically employs next-generation sequencing. Single-cell RNA-seq for bacteria would be highly useful for studying microbiomes.
It would address issues encountered in conventional bulk metatranscriptomics approaches, such as failing to capture species present in low abundance and failing to resolve heterogeneity among cell populations. scRNA-seq has provided considerable insight into the development of embryos and organisms, including the worm Caenorhabditis elegans , [ 89 ] the regenerative planarian Schmidtea mediterranea , [ 90 ] [ 91 ] and the axolotl Ambystoma mexicanum . [ 92 ] [ 93 ] The first vertebrate animals to be mapped in this way were the zebrafish [ 94 ] [ 95 ] [ 96 ] and Xenopus laevis . [ 97 ] In each case multiple stages of the embryo were studied, allowing the entire process of development to be mapped on a cell-by-cell basis. Science recognized these advances as the 2018 Breakthrough of the Year . [ 98 ] A molecular cell atlas of mouse testes was established to define BDE47-induced prepubertal testicular toxicity using the scRNA-seq approach, providing novel insight into the underlying mechanisms and pathways involved in BDE47-associated testicular injury at single-cell resolution. [ 99 ] There are several ways to isolate individual cells prior to whole genome amplification and sequencing. Fluorescence-activated cell sorting (FACS) is a widely used approach. Individual cells can also be collected by micromanipulation, for example by serial dilution or by using a patch pipette or nanotube to harvest a single cell. [ 15 ] [ 100 ] The advantages of micromanipulation are simplicity and low cost, but it is laborious and susceptible to misidentification of cell types under the microscope. Laser-capture microdissection (LCM) can also be used to collect single cells. Although LCM preserves knowledge of the spatial location of a sampled cell within a tissue, it is hard to capture a whole single cell without also collecting material from neighboring cells. [ 42 ] [ 101 ] [ 102 ] High-throughput methods for single-cell isolation also include microfluidics .
Both FACS and microfluidics are accurate, automatic and capable of isolating unbiased samples. However, both methods require detaching cells from their microenvironments first, thereby perturbing the transcriptional profiles measured in RNA expression analysis. [ 103 ] [ 104 ] Single-cell RNA-seq protocols vary in their efficiency of RNA capture, which results in differences in the number of transcripts generated from each single cell. Single-cell libraries are usually sequenced to a depth of 1,000,000 reads, because the large majority of genes are already detected with 500,000 reads. [ 105 ] Increasing the number of cells while decreasing the read depth increases the power to identify major cell populations. However, low read depths may not provide the necessary information about the genes, and whether expression differences between cell populations can be detected depends on the stability and detection of the mRNA molecules. Quality-control covariates serve as a strategy for filtering cells: these covariates mainly include count depth, the number of detected genes, and the fraction of counts from mitochondrial genes, and filtering on them helps ensure that the cellular signals interpreted come from intact cells.
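The quality-control filtering described above can be sketched as a simple per-cell predicate over the three covariates (count depth, number of detected genes, mitochondrial fraction). The thresholds and cell names below are illustrative assumptions; in practice cutoffs are chosen per dataset, often by inspecting the covariate distributions.

```python
# Minimal sketch of QC-covariate filtering for single-cell libraries.
# Thresholds are illustrative; real analyses tune them per dataset.

def passes_qc(total_counts, n_genes, mito_fraction,
              min_counts=500, min_genes=200, max_mito=0.2):
    """Return True if a cell's QC covariates are within acceptable bounds."""
    return (total_counts >= min_counts
            and n_genes >= min_genes
            and mito_fraction <= max_mito)

# (total UMI counts, detected genes, mitochondrial count fraction) per cell
cells = {
    "cell_A": (12_000, 2_500, 0.05),  # plausible healthy cell
    "cell_B": (300, 150, 0.04),       # too shallow: likely empty droplet
    "cell_C": (8_000, 1_800, 0.45),   # high mito fraction: likely damaged
}
kept = [name for name, covs in cells.items() if passes_qc(*covs)]
# only cell_A survives filtering
```

A low count depth typically flags empty droplets or failed capture, while a high mitochondrial fraction flags stressed or lysing cells whose cytoplasmic mRNA has leaked out, which is why these covariates are filtered jointly rather than singly.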
https://en.wikipedia.org/wiki/Single-cell_sequencing
Single-cell transcriptomics examines the gene expression level of individual cells in a given population by simultaneously measuring the RNA concentration (conventionally only messenger RNA (mRNA)) of hundreds to thousands of genes. [ 1 ] Single-cell transcriptomics makes it possible to unravel heterogeneous cell populations, reconstruct cellular developmental pathways, and model transcriptional dynamics, all previously masked in bulk RNA sequencing. [ 2 ] The development of high-throughput RNA sequencing (RNA-seq) and microarrays has made gene expression analysis routine. RNA analysis was previously limited to tracing individual transcripts by Northern blots or quantitative PCR . Higher throughput and speed allow researchers to routinely characterize the expression profiles of populations of thousands of cells. The data from bulk assays has led to identifying genes differentially expressed in distinct cell populations, and biomarker discovery. [ 3 ] These studies are limited as they provide measurements for whole tissues and, as a result, show an average expression profile for all the constituent cells. This has several drawbacks. Firstly, different cell types within the same tissue can have distinct roles in multicellular organisms. They often form subpopulations with unique transcriptional profiles. Correlations in the gene expression of the subpopulations can often be missed due to the lack of subpopulation identification. [ 1 ] Secondly, bulk assays fail to recognize whether a change in the expression profile is due to a change in regulation or composition, for example if one cell type arises to dominate the population. Lastly, when the goal is to study cellular progression through differentiation , average expression profiles can only order cells by time rather than by developmental stage. Consequently, they cannot show trends in gene expression levels specific to certain stages. 
[ 4 ] Recent advances in biotechnology allow the measurement of gene expression in hundreds to thousands of individual cells simultaneously. While these breakthroughs in transcriptomics technologies have enabled the generation of single-cell transcriptomic data, they also presented new computational and analytical challenges. Bioinformaticians can use techniques from bulk RNA-seq for single-cell data. Still, many new computational approaches have had to be designed for this data type to facilitate a complete and detailed study of single-cell expression profiles. [ 5 ] There is so far no standardized technique to generate single-cell data: all methods must include cell isolation from the population, lysate formation, amplification through reverse transcription , and quantification of expression levels. Common techniques for measuring expression are quantitative PCR or RNA-seq. [ 6 ] Several methods are available to isolate and amplify cells for single-cell analysis, differing primarily in throughput and potential for cell selection. Low-throughput techniques, such as micropipetting , cytoplasmic aspiration, [ 7 ] and laser capture microdissection , typically isolate hundreds of cells but enable deliberate cell selection. High-throughput methods allow for the rapid isolation of hundreds to tens of thousands of cells. [ 8 ] Common high-throughput approaches include Fluorescence Activated Cell Sorting (FACS) and the use of microfluidic devices . Microfluidic platforms often isolate single cells either by mechanical separation into microwells (e.g., BD Rhapsody, Takara ICELL8, Vycap Puncher Platform, CellMicrosystems CellRaft) or by encapsulation within droplets (e.g., 10x Genomics Chromium, Illumina Bio-Rad ddSEQ, 1CellBio InDrop, Dolomite Bio Nadia). [ 9 ] Furthermore, optimized protocols have been developed by integrating these isolation techniques directly with scRNA-seq workflows. 
For instance, combining FACS with scRNA-seq led to protocols like SORT-seq, [ 10 ] and a list of studies utilizing SORT-seq has been compiled. [ 11 ] Similarly, the integration of microfluidic devices with scRNA-seq has been highly optimized in protocols such as those developed by 10x Genomics . [ 12 ] Single cells are labeled by adding beads with barcoded oligonucleotides; both cells and beads are supplied in limited amounts such that co-occupancy with multiple cells and beads is a very rare event. To measure the level of expression of each transcript, qPCR can be applied. Gene-specific primers are used to amplify the corresponding gene as in regular PCR, and as a result data is usually only obtained for sample sizes of less than 100 genes. The inclusion of housekeeping genes , whose expression should be constant under the conditions, is used for normalization. The most commonly used housekeeping genes include GAPDH and α- actin , although the reliability of normalization through this process is questionable as there is evidence that the level of expression can vary significantly. [ 13 ] Fluorescent dyes are used as reporter molecules to detect the PCR product and monitor the progress of the amplification: the increase in fluorescence intensity is proportional to the amplicon concentration. A plot of fluorescence vs. cycle number is made, and a threshold fluorescence level is used to find the cycle number at which the plot reaches this value. The cycle number at this point is known as the threshold cycle (C t ) and is measured for each gene. [ 14 ] The single-cell RNA-seq technique converts a population of RNAs to a library of cDNA fragments. Once reverse transcription is complete, the cDNAs from many cells can be mixed together for sequencing. 
These fragments are sequenced by high-throughput next generation sequencing techniques and the reads are mapped back to the reference genome, providing a count of the number of reads associated with each gene. [ 15 ] Transcripts from a particular cell are identified by each cell's unique barcode. [ 16 ] [ 17 ] Normalization of RNA-Seq data accounts for cell-to-cell variation in the efficiencies of the cDNA library formation and sequencing. One method relies on the use of extrinsic RNA spike-ins that are added in equal quantities to each cell lysate and used to normalize read count by the number of reads mapped to spike-in mRNA . [ 18 ] Another control uses unique molecular identifiers (UMIs), short DNA sequences (6–10 nt) that are added to each cDNA before amplification and act as a barcode for each cDNA molecule. Normalization is achieved by using the count number of unique UMIs associated with each gene to account for differences in amplification efficiency. [ 19 ] Spike-ins, UMIs, and other approaches have been combined to help identify artifacts during library preparation [ 20 ] and for more accurate normalization. scRNA-Seq is becoming widely used across biological disciplines including development, neurology , [ 21 ] oncology , [ 22 ] [ 23 ] [ 24 ] autoimmune disease , [ 25 ] and infectious disease . [ 26 ] Several scRNA-Seq protocols have been published: Tang et al., [ 27 ] STRT, [ 28 ] SMART-seq, [ 29 ] CEL-seq, [ 30 ] RAGE-seq, [ 31 ] Quartz-seq [ 32 ] and C1-CAGE. [ 33 ] These protocols differ in terms of strategies for reverse transcription, cDNA synthesis and amplification, and the possibility to accommodate sequence-specific barcodes (i.e. UMIs ) or the ability to process pooled samples. [ 34 ] In 2017, two approaches were introduced to simultaneously measure single-cell mRNA and protein expression through oligonucleotide-labeled antibodies known as REAP-seq, [ 35 ] and CITE-seq. 
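The UMI-based normalization described above reduces to counting distinct molecular barcodes per cell and gene. A minimal sketch, with hypothetical barcodes and read tuples:

```python
from collections import defaultdict

def umi_counts(reads):
    """Collapse sequencing reads to unique UMI counts per (cell, gene).

    reads: iterable of (cell_barcode, gene, umi) tuples.
    Counting distinct UMIs instead of raw reads corrects for uneven
    PCR amplification of individual cDNA molecules.
    """
    umis = defaultdict(set)
    for cell, gene, umi in reads:
        umis[(cell, gene)].add(umi)
    return {key: len(s) for key, s in umis.items()}

# Three reads of GeneA in cell1 carry only two distinct UMIs: one
# molecule was amplified twice, so the molecule count is 2, not 3.
reads = [("cell1", "GeneA", "AACGTT"),
         ("cell1", "GeneA", "AACGTT"),   # PCR duplicate of the first read
         ("cell1", "GeneA", "GGTCAA"),
         ("cell2", "GeneA", "TTACGG")]
counts = umi_counts(reads)
```

Real pipelines additionally collapse UMIs that differ by sequencing errors and, as noted below, assume the library is sequenced deeply enough that every UMI is observed.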
[ 36 ] scRNA-Seq has provided considerable insight into the development of embryos and organisms, including the worm Caenorhabditis elegans , [ 37 ] and the regenerative planarian Schmidtea mediterranea . [ 38 ] [ 39 ] The first vertebrate animals to be mapped in this way were Zebrafish [ 40 ] [ 41 ] and Xenopus laevis . [ 42 ] In each case multiple stages of the embryo were studied, allowing the entire process of development to be mapped on a cell-by-cell basis. [ 43 ] Science recognized these advances as the 2018 Breakthrough of the Year . [ 44 ] A problem associated with single-cell data occurs in the form of zero-inflated gene expression distributions, known as technical dropouts, that are common due to low mRNA concentrations of less-expressed genes that are not captured in the reverse transcription process. The percentage of mRNA molecules in the cell lysate that are detected is often only 10–20%. [ 45 ] When using RNA spike-ins for normalization the assumption is made that the amplification and sequencing efficiencies for the endogenous and spike-in RNA are the same. Evidence suggests that this is not the case given fundamental differences in size and features, such as the lack of a polyadenylated tail in spike-ins and therefore shorter length. [ 46 ] Additionally, normalization using UMIs assumes the cDNA library is sequenced to saturation, which is not always the case. [ 19 ] In the amplification step, either PCR or in vitro transcription (IVT) is currently used to amplify cDNA. One of the advantages of PCR-based methods is the ability to generate full-length cDNA. However, differences in PCR efficiency for particular sequences (arising, for instance, from GC content and snapback structure) are amplified exponentially, producing libraries with uneven coverage. On the other hand, while libraries generated by IVT can avoid PCR-induced sequence bias, specific sequences may be transcribed inefficiently, thus causing sequence drop-out or generating incomplete sequences. 
[ 47 ] [ 48 ] Challenges for scRNA-Seq include preserving the initial relative abundance of mRNA in a cell and identifying rare transcripts. [ 49 ] The reverse transcription step is critical as the efficiency of the RT reaction determines how much of the cell's RNA population will be eventually analyzed by the sequencer. The processivity of reverse transcriptases and the priming strategies used may affect full-length cDNA production and the generation of libraries biased toward the 3' or 5' end of genes. A further consideration when sequencing large, branched cell types, such as neurons , comes from the removal of distal processes containing local pools of RNA during the single-cell isolation process. In these cells, scRNA-seq datasets only capture transcripts in the central cell body, omitting transcripts from RNA pools localized to cellular processes that can be involved in local translation or other RNA-mediated subcellular mechanisms. In the brain it has been estimated that over 40% of total RNA is not sequenced by scRNA-seq due to the prevalence of local transcriptomes in cellular processes such as axons , dendrites , myelin , and endfeet . [ 50 ] Analysis of single-cell data assumes that the input is a matrix of normalized gene expression counts, generated by the approaches outlined above, and can provide opportunities that are not obtainable with bulk methods, yielding three main types of insight. [ 51 ] The techniques outlined have been designed to help visualise and explore patterns in the data in order to facilitate the revelation of these three features. Clustering allows for the formation of subgroups in the cell population. Cells can be clustered by their transcriptomic profile in order to analyse the sub-population structure and identify rare cell types or cell subtypes. Alternatively, genes can be clustered by their expression states in order to identify covarying genes. 
A combination of both clustering approaches, known as biclustering , has been used to simultaneously cluster by genes and cells to find genes that behave similarly within cell clusters. [ 52 ] Clustering methods applied can be K-means clustering , forming disjoint groups, or hierarchical clustering , forming nested partitions. Biclustering provides several advantages by improving the resolution of clustering. Genes that are only informative to a subset of cells and are hence only expressed there can be identified through biclustering. Moreover, similarly behaving genes that differentiate one cell cluster from another can be identified using this method. [ 53 ] Dimensionality reduction algorithms such as Principal component analysis (PCA) and t-SNE can be used to simplify data for visualisation and pattern detection by transforming cells from a high to a lower dimensional space . This method produces graphs with each cell as a point in a 2-D or 3-D space. Dimensionality reduction is frequently used before clustering as cells in high dimensions can wrongly appear to be close due to distance metrics behaving non-intuitively. [ 54 ] The most frequently used technique is PCA, which identifies the directions of largest variance (the principal components) and transforms the data so that the first principal component has the largest possible variance, and successive principal components in turn each have the highest variance possible while remaining orthogonal to the preceding components. The contribution each gene makes to each component is used to infer which genes are contributing the most to variance in the population and are involved in differentiating different subpopulations. [ 55 ] Detecting differences in gene expression level between two populations is applied to both single-cell and bulk transcriptomic data. 
Specialised methods have been designed for single-cell data that consider single-cell features such as technical dropouts and the shape of the distribution (e.g., bimodal vs. unimodal). [ 56 ] Gene Ontology terms describe gene functions and the relationships between those functions in three classes. Gene Ontology (GO) term enrichment is a technique used to identify which GO terms are over-represented or under-represented in a given set of genes. In single-cell analysis, the input list of genes of interest can be selected based on differentially expressed genes or groups of genes generated from biclustering. The number of genes annotated to a GO term in the input list is compared against the number of genes annotated to that GO term in the background set of all genes in the genome to determine statistical significance. [ 57 ] Pseudo-temporal ordering (or trajectory inference) is a technique that aims to infer gene expression dynamics from snapshot single-cell data. The method tries to order the cells in such a way that similar cells are closely positioned to each other. This trajectory of cells can be linear, but can also bifurcate or follow more complex graph structures. The trajectory, therefore, enables the inference of gene expression dynamics and the ordering of cells by their progression through differentiation or response to external stimuli. The method relies on the assumptions that the cells follow the same path through the process of interest and that their transcriptional state correlates to their progression. The algorithm can be applied to both mixed populations and temporal samples. More than 50 methods for pseudo-temporal ordering have been developed, and each has its own requirements for prior information (such as starting cells or time course data), detectable topologies, and methodology. 
[ 58 ] An example algorithm is the Monocle algorithm [ 59 ] that carries out dimensionality reduction of the data, builds a minimum spanning tree using the transformed data, orders cells in pseudo-time by following the longest connected path of the tree and consequently labels cells by type. Another example is the diffusion pseudotime (DPT) algorithm, [ 57 ] which uses a diffusion map and diffusion process. Another class of methods, such as MARGARET, [ 60 ] employs graph partitioning for capturing complex trajectory topologies such as disconnected and multifurcating trajectories. Gene regulatory network inference is a technique that aims to construct a network, shown as a graph, in which the nodes represent the genes and edges indicate co-regulatory interactions. The method relies on the assumption that a strong statistical relationship between the expression of genes is an indication of a potential functional relationship. [ 61 ] The most commonly used method to measure the strength of a statistical relationship is correlation . However, correlation fails to identify non-linear relationships, so mutual information is used as an alternative. Gene clusters linked in a network signify genes that undergo coordinated changes in expression. [ 62 ] The presence or strength of technical effects and the types of cells observed often differ in single-cell transcriptomics datasets generated using different experimental protocols and under different conditions. This difference results in strong batch effects that may bias the findings of statistical methods applied across batches, particularly in the presence of confounding . [ 63 ] As a result of the aforementioned properties of single-cell transcriptomic data, batch correction methods developed for bulk sequencing data were observed to perform poorly. 
Consequently, researchers developed statistical methods to correct for batch effects that are robust to the properties of single-cell transcriptomic data to integrate data from different sources or experimental batches. Laleh Haghverdi performed foundational work in formulating the use of mutual nearest neighbors between each batch to define batch correction vectors. [ 64 ] With these vectors, datasets that each include at least one shared cell type can be merged. An orthogonal approach involves the projection of each dataset onto a shared low-dimensional space using canonical correlation analysis . [ 65 ] Mutual nearest neighbors and canonical correlation analysis have also been combined to define integration "anchors" comprising reference cells in one dataset, to which query cells in another dataset are normalized. [ 66 ] Another class of methods (e.g., scDREAMER [ 67 ] ) uses deep generative models such as variational autoencoders for learning batch-invariant latent cellular representations which can be used for downstream tasks such as cell type clustering, denoising of single-cell gene expression vectors and trajectory inference. [ 60 ]
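The mutual-nearest-neighbors idea behind this correction can be sketched minimally: a pair of cells, one per batch, is kept only when each is among the other's nearest neighbors. The toy two-dimensional batches and Euclidean distance below are illustrative assumptions; the published method operates on high-dimensional expression profiles with additional processing.

```python
import numpy as np

def mutual_nearest_neighbors(A, B, k=1):
    """Find mutual nearest-neighbor pairs between two batches A and B
    (cells x features). Such pairs anchor the batch-correction vectors
    in the MNN approach; this is a simplified illustration, not the
    full published algorithm."""
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    nn_ab = np.argsort(dists, axis=1)[:, :k]    # for each A cell, k nearest B cells
    nn_ba = np.argsort(dists, axis=0)[:k, :].T  # for each B cell, k nearest A cells
    return [(i, j) for i in range(len(A)) for j in nn_ab[i] if i in nn_ba[j]]

# Two batches containing the same two cell types, offset by a constant
# shift along the first axis (a simple stand-in for a batch effect)
A = np.array([[0., 0.], [5., 5.]])
B = A + np.array([1., 0.])
pairs = mutual_nearest_neighbors(A, B)
# each cell pairs with its shifted counterpart: pairs (0, 0) and (1, 1)
```

Averaging the differences B[j] - A[i] over such pairs yields the correction vectors used to merge the batches.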
https://en.wikipedia.org/wiki/Single-cell_transcriptomics
In cell biology , single-cell variability occurs when individual cells in an otherwise similar population differ in shape, size, position in the cell cycle , or molecular-level characteristics. Such differences can be detected using modern single-cell analysis techniques. [ 1 ] Investigation of variability within a population of cells contributes to understanding of developmental and pathological processes. A sample of cells may appear similar, but the cells can vary in their individual characteristics, such as shape and size, mRNA expression levels, genome , or individual counts of metabolites . In the past, the only methods available for investigating such properties required a population of cells and provided an estimate of the characteristic of interest, averaged over the population, which could obscure important differences among the cells. Single-cell analysis allows scientists to study the properties of a single cell of interest with high accuracy, revealing individual differences among populations and offering new insights in molecular biology. These individual differences are important in fields such as developmental biology, where individual cells can take on different " fates ", becoming specialized cells such as neurons or organ tissue, during the growth of an embryo; in cancer research , where individual malignant cells can vary in their response to therapy; or in infectious disease , where only a subset of cells in a population become infected by a pathogen . Population-level views of cells can offer a distorted view of the data by averaging out the properties of distinct subsets of cells. [ 3 ] For example, if half the cells of a particular group are expressing high levels of a given gene, and the rest are expressing low levels, results from a population-wide analysis may appear as if all cells are expressing a medium level of the given gene. 
Thus, single-cell analysis allows researchers to study biological processes in finer detail and answer questions that could not have been addressed otherwise. Cells with identical genomes may vary in the expression of their genes due to differences in their specialized function in the body, their timepoint in the cell cycle , their environment, and also noise and stochastic factors. Thus, accurate measurement of gene expression in individual cells allows researchers to better understand these critical aspects of cellular biology. For example, early study of gene expression in individual cells in fruit fly embryos allowed scientists to discover regularized patterns or gradients of specific gene transcription during different stages of growth, allowing for a more detailed understanding of development at the level of location and time. Another phenomenon in gene expression which could only be identified at the single cell level is oscillatory gene expression, in which a gene is expressed on and off periodically. Single-cell gene expression is typically assayed using RNA-seq . After the cell has been isolated, the RNA-seq protocol typically consists of three steps: [ 4 ] the RNA is reverse transcribed into cDNA , the cDNA is amplified to make more material available for the sequencer, and the cDNA is sequenced . A population of single celled organisms like bacteria typically vary slightly in their DNA sequence due to mutations acquired during reproduction. Within a single human, individual cells typically have identical genomes, though there are interesting exceptions, such as B-cells , which have variation in their DNA enabling them to generate different antibodies to bind to the variety of pathogens that can attack the body. 
Measuring the differences and the rate of change in DNA content at the single-cell level can help scientists better understand how pathogens develop antibiotic resistance, why the immune system often cannot produce antibodies for rapidly mutating viruses like HIV, and other important phenomena. Many technologies exist for sequencing genomes, but they are designed to use DNA from a population of cells rather than a single cell. The primary challenge for single-cell genome sequencing is to make multiple copies of (amplify) the DNA so that there is enough material available for the sequencer, a process called whole genome amplification (WGA). Typical methods for WGA consist of: [ 5 ] (1) multiple displacement amplification (MDA), in which many random primers anneal to the DNA and strand-displacing polymerases copy it, displacing previously synthesized strands and freeing them for processing by the sequencer, (2) PCR -based methods, or (3) some combination of both. Cells vary in the metabolites they contain, which are the intermediary compounds and end products of complex biochemical reactions that sustain the cell. Genetically identical cells in different conditions and environments can use different metabolic pathways to sustain themselves. By measuring the metabolites present, scientists can infer the metabolic pathways used, and infer useful information about the state of the cell. An example of this is found in the immune system, where CD4+ cells can differentiate into Th17 or TReg cells (among other possibilities), both of which direct the immune system's response in different ways. Th17 cells stimulate a strong inflammatory response, whereas TReg cells stimulate the opposite effect. The former tend to rely much more on glycolysis , [ 6 ] due to their increased energy demands. 
In order to profile the metabolic content of a cell, researchers must identify the cell of interest in the larger population, isolate it for analysis, quickly inhibit enzymes and halt the metabolic processes in the cell, and then use techniques such as NMR , mass spectrometry , microfluidics , and other methods to analyze the contents of the cell. [ 7 ] Similar to variation in the metabolome, the proteins present in a cell and their abundances can vary from cell to cell in an otherwise similar population. While transcription and translation determine the amount and variety of proteins produced, these processes are imprecise, and cells have a number of mechanisms which can change or degrade proteins, allowing for variance in the proteome that may not be accounted for by variance in gene expression. Also, proteins have many other important features besides simply being present or absent, such as whether they have undergone posttranslational modifications such as phosphorylation , or are bound to molecules of interest. The variation in abundance and characteristics of proteins has implications for fields such as cancer research and cancer therapy, where a drug targeting a particular protein may vary in its impact due to variability in the proteome, [ 8 ] or vary in efficacy due to the broader biological phenomenon of tumor heterogeneity . Cytometry, surface methods, and microfluidics technologies are the three classes of tools commonly used to profile the proteomes of individual cells. [ 9 ] Cytometry allows researchers to isolate cells of interest, and stain 15–30 proteins to measure their location and/or relative abundance. [ 9 ] Image cycling techniques have been developed to measure multi-target abundance and distribution in biopsy samples and tissues. 
In these methods, 3–4 targets are stained with fluorescently labeled antibodies, imaged , and then stripped of their fluorophores by a variety of means, including oxidation-based chemistries [ 10 ] or more recently antibody-DNA conjugation methods, [ 11 ] allowing additional targets to be stained in follow-on cycles; in some methods up to 60 individual targets have been visualized. [ 12 ] For surface methods, researchers place a single cell on a surface coated with antibodies, which then bind to proteins secreted by the cell and allow them to be measured. [ 9 ] Microfluidics methods for proteome analysis immobilize single cells on a microchip and use staining to measure the proteins of interest, or antibodies to bind to the proteins of interest. Cells in an otherwise similar population can vary in their size and morphology due to differences in function, changes in metabolism, or simply being in different phases of the cell cycle or some other factor. For example, stem cells can divide asymmetrically, [ 13 ] which means the two resultant daughter cells may have different fates (specialized functions), and can differ from each other in size or shape. Researchers who study development may be interested in tracking the physical characteristics of the individual progeny in a growing population in order to understand how stem cells differentiate into a complex tissue or organism over time. Microscopy can be used to analyze cell size and morphology by obtaining high-quality images over time. These pictures will typically contain a population of cells, but algorithms can be applied to identify and track individual cells across multiple images. [ 14 ] [ 15 ] [ 16 ] The algorithms must be able to process gigabytes of data to remove noise and summarize the relevant characteristics for the given research question. [ 17 ] Individual cells in a population will often be at different points in the cell cycle. 
Scientists who wish to understand characteristics of the cell at a particular point in the cycle would have difficulty using population-level estimates, since they would average measurements from cells at different stages. Understanding the cell cycle in individual diseased cells, like those in a tumor, is also important, since they will often have a very different cycle than healthy cells. Single-cell analysis of characteristics of the cell cycle allows scientists to understand these properties in greater detail. Variability in cell cycle can be studied using several of the methods previously described. For example, cells in G2 will be quite large in size (as they are just at the point where they are about to divide in two), and can be identified using protocols for cell size and shape. Cells in S phase copy their genomes, and could be identified using protocols for staining DNA and measuring its content by flow cytometry or quantitative fluorescence microscopy, or by using probes for genes expressed highly at specific phases of the cell cycle.
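The DNA-content reasoning above can be sketched as a simple phase classifier: G1 cells carry one genome copy (2N), G2/M cells two (4N), and S-phase cells fall in between while replicating. The normalized levels and tolerance below are illustrative assumptions; real cytometry gating works on calibrated fluorescence-intensity distributions.

```python
def cell_cycle_phase(dna_content, g1_level=1.0, tolerance=0.15):
    """Assign a cell-cycle phase from relative DNA content measured by
    DNA staining (e.g., flow cytometry). G1 is normalized to 1.0 and
    G2/M to 2.0; thresholds are toy values, not instrument settings."""
    if abs(dna_content - g1_level) <= tolerance * g1_level:
        return "G1"
    if abs(dna_content - 2 * g1_level) <= tolerance * g1_level:
        return "G2/M"
    if g1_level < dna_content < 2 * g1_level:
        return "S"
    return "unknown"

# A cell mid-replication has intermediate DNA content and lands in S phase
phases = [cell_cycle_phase(x) for x in (1.0, 1.5, 2.0)]
# -> ["G1", "S", "G2/M"]
```

In practice such gating is combined with the size and probe-based approaches the text mentions, since DNA content alone cannot separate G2 from M.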
https://en.wikipedia.org/wiki/Single-cell_variability
Single-drop microextraction ( SDME ) is a sample preparation technique used in chemical analysis and analytical chemistry. SDME uses only a single drop of solvent to isolate and preconcentrate analytes from a sample matrix. The extremely low solvent use of SDME makes it cost-effective and less harmful to the environment, subscribing to the principles of green analytical chemistry. [ 1 ] In many chemical test procedures, sample preparation, often the time- and cost-determining step, is designed to isolate analytes from interferences and to provide (typically through enrichment) an analyte concentration suitable for detection. Liquid−liquid extraction (LLE) has long been a widely used technique for the preparation of aqueous samples. Numerous efforts have been made to improve upon the LLE technique for decades. SDME using only one microdrop of organic solvent to perform LLE was first described in 1996 in Analytical Chemistry . Liu and Dasgupta described a microdrop LLE system with a drop (~1.3 microliter) of chloroform at the tip of a tube suspended in an aqueous drop to perform automatic drop-in-drop extraction and in situ optical detection. [ 2 ] Jeannot and Cantwell introduced a method with a single drop (8 microliter) of n -octane at the end of a Teflon rod in a stirred aqueous sample solution to extract the analyte into the organic drop for GC analysis. [ 3 ] Since its introduction, SDME has become a popular LLE technique because it is inexpensive, easy to operate, and uses only a minuscule amount of solvent. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ]
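The preconcentration at the heart of SDME follows from simple equilibrium partitioning: for an analyte with distribution coefficient K between the organic drop (volume Vd) and the aqueous sample (volume Vs), a mass balance gives the equilibrium enrichment factor. The numbers below are hypothetical values chosen for illustration.

```python
def sdme_enrichment(K, V_drop, V_sample):
    """Equilibrium enrichment factor C_drop / C_initial for extraction
    into a microdrop, derived from the mass balance
        C0*Vs = Caq*Vs + Cdrop*Vd   with   Cdrop = K*Caq.
    With a tiny drop (Vd << Vs) the enrichment approaches K itself,
    which is why microliter drops preconcentrate so effectively.
    """
    return K / (1 + K * V_drop / V_sample)

# Hypothetical analyte: K = 100, 1 uL drop in a 1 mL stirred sample
E = sdme_enrichment(K=100, V_drop=1e-6, V_sample=1e-3)
# E = 100 / 1.1, i.e. roughly 91-fold concentration in the drop
```

This also shows why shrinking the drop (relative to the sample) raises enrichment while consuming less solvent, consistent with the green-chemistry motivation above.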
https://en.wikipedia.org/wiki/Single-drop_microextraction
A single-ended recuperative (SER) burner is a type of gas burner used in high-temperature industrial kilns and furnaces . These burners are used where indirect heating is required, e.g. where the products of combustion are not allowed to combine with the atmosphere of the furnace. The typical design is a tubular (pipe) shape with convoluted pathways on the interior, closed on the end pointed into the furnace. A gas burner fires a flame down the center of these pathways, and the hot combustion gases are then forced to change direction and travel along the shell of the tube, heating it to incandescent temperatures and allowing efficient transfer of thermal energy to the furnace interior. Exhaust gas is collected back at the burner end where it is eventually discharged to the atmosphere. The hot exhaust can be used to pre-heat the incoming combustion air and fuel gas (recuperation) to boost efficiency. Such burners compete with electrical heating and can be more economical to operate than electric heat, particularly in larger furnaces and in most areas where the price of natural gas per unit of available energy is lower than that of electricity. These particular furnaces are claimed to have a significant increase in efficiency over other types of furnaces, [ 1 ] and are said to achieve efficiency up to 80 percent and higher.
https://en.wikipedia.org/wiki/Single-ended_recuperative_burner
Single-Entity Electrochemistry (SEE) refers to the electroanalysis of an individual unit of interest. A unique feature of SEE is that it unifies multiple different branches of electrochemistry. Single-Entity Electrochemistry pushes the bounds of the field as it can measure entities on a scale of 100 microns to angstroms. [ 1 ] Single-Entity Electrochemistry is important because it gives the ability to see how a single molecule, cell, or other entity contributes to the bulk response, revealing chemistry that might otherwise go unnoticed. The ability to monitor the movement of one electron or ion from one unit to another is valuable, as many vital reactions and mechanisms undergo this process. Electrochemistry is well suited for this measurement due to its incredible sensitivity. [ 1 ] Single-Entity Electrochemistry can be used to investigate nanoparticles, wires, vesicles, nanobubbles, nanotubes, cells, and viruses, as well as small molecules and ions. [ 2 ] Single-entity electrochemistry has been successfully used to determine the size distribution of particles as well as the number of particles present inside a vesicle or other similar structures. [ 3 ] The Coulter counter was created by Wallace H. Coulter in 1949. The Coulter counter consists of two electrolyte reservoirs that are connected by a small channel, through which a current of ions flows. Each particle drawn through the channel causes a brief change to the electrical resistance of the liquid. The change in the electrical resistance causes a disturbance in the electric field. The counter detects these changes in electrical resistance; the size of the particles in the field is proportional to the magnitude of the disturbance in the electric field. [ 4 ] Patch-clamp electrophysiology was developed by Neher and Sakmann in 1976. This technique allowed measurement of the currents through individual ion-channel proteins. A glass pipette was fixed to the cell membrane, and the ion currents through the ion channels were measured. 
[ 1 ] The Patch-Clamp method increased the sensitivity of detection by three orders of magnitude over previous methods, and the time resolution of the measurements was improved to nearly 10 microseconds. [ 5 ] The success of this method was a result of the ability to create a high-resistance seal between the glass micropipette and the cell membrane, isolating the system chemically and electrically. While it is useful to study bulk cell ensembles, there is an underlying need to study an individual cell, as this provides a better understanding of how it contributes to the ensemble as a whole. It was found that electrochemical techniques could analyze cells without interrupting cellular activity while providing highly resolved measurements. [ 6 ] This analysis was first performed by Wightman in 1982. In this method of analysis, a carbon microfiber electrode is placed near the studied cell; this electrode can monitor the cell via voltammetry or amperometry . Before the measurement can be taken, the cell must be stimulated by an ejection pipette to trigger a cellular release, which can then be measured via the aforementioned methods. From this method, it was seen that instrumental advances were needed in order to perform quality SEE measurements. [ 1 ] Single-Molecule electrochemistry is an electrochemical technique used to study the faradaic response of redox molecules in electrochemical environments. The ability to study single molecules gives rise to the potential of developing the ultra-sensitive sensors that SEE requires. Since the work of Bard and Fan, this technique has advanced considerably through the use of redox cycling. [ 7 ] Redox cycling amplifies a charge transfer by reducing and oxidizing a molecule many times as it diffuses between two electrodes.
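The amplification that redox cycling provides can be estimated with a simple diffusion argument. This is an order-of-magnitude sketch under assumed, illustrative parameters (a typical small-molecule diffusion coefficient and a 10 nm electrode gap), not values from the source: a molecule shuttling across a gap of width d makes roughly D/d² crossings per second, each transferring n electrons.

```python
# Order-of-magnitude sketch of redox-cycling amplification.
# Parameters are illustrative assumptions, not from the source.
# A molecule diffusing across a gap of width d makes ~D/d^2 crossings
# per second; each crossing transfers n electrons, so the per-molecule
# current is roughly i ~ n * e * D / d^2.

E_CHARGE = 1.602e-19   # elementary charge, C
D = 1e-9               # diffusion coefficient, m^2/s (typical small molecule)
d = 10e-9              # electrode gap, m (assumed)
n = 1                  # electrons transferred per redox event

crossings_per_s = D / d**2                  # ~1e7 crossings per second
current = n * E_CHARGE * crossings_per_s    # ~1.6 pA per molecule

print(f"~{crossings_per_s:.1e} crossings/s -> ~{current * 1e12:.1f} pA per molecule")
```

Without cycling, a single electron transfer deposits only ~1.6e-19 C, far below any measurable signal; cycling the same molecule ~10⁷ times per second lifts the signal into the picoampere range, which is what makes single-molecule detection feasible.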
[ 7 ] Specifically, in this technique an insulated nano-electrode tip is placed near a substrate electrode to form an ultra-small electrochemical chamber. Molecules become trapped in this chamber, where redox cycling and charge amplification occur, allowing detection of single molecules. The charge amplification provided by redox cycling helped improve SEE measurements, lowering detection limits, which must be extremely low for SEE. [ 1 ] With the advance of nanoscale electrodes , the resolution of SEE has improved from detecting single cells to detecting single molecules within cells. [ 8 ] Nanoscale electrodes are small enough that they can be inserted into the synapses between neurons, where they can detect neurotransmitter concentrations. [ 9 ] If the electrode is thin enough, it can be inserted directly into a cell and used to detect concentrations of intracellular molecules, such as metabolites or even DNA. [ 10 ] Plasmonic nanoparticles can be individually analyzed through optoelectrochemical imaging (in which electrochemical processes are measured by optical means). When electrochemistry is performed on a nanoparticle, the refractive index of its environment changes, resulting in a shift of the localized surface plasmon resonance. The spectral difference can be measured through characterization techniques such as darkfield microscopy to monitor electrochemical reactions at the surface of plasmonic nanoparticles. [ 11 ] Plasmonics-based electrochemical current microscopy (PECM) measures the contrast arising from the interference of localized-surface-plasmon-scattered light and reflected light, which, as above, is sensitive to changes in the refractive index. This can be used to quantify the electrocatalytic reactions occurring at Pt nanoparticles.
Since nanoparticles are inherently heterogeneous (which affects catalytic activity), SEE methods can provide more information than traditional methods that measure the average of an ensemble of nanoparticles. [ 12 ] At present, single-entity electrochemistry is not sensitive enough to quantify the turnover of a single enzyme.
https://en.wikipedia.org/wiki/Single-entity_electrochemistry
In control engineering , a single-input and single-output ( SISO ) system is a simple single- variable control system with one input and one output. In radio, it is the use of only one antenna in both the transmitter and the receiver . SISO systems are typically less complex than multiple-input multiple-output (MIMO) systems, and it is usually easier to make order-of-magnitude or trend predictions for them "on the fly" or "on the back of the envelope": MIMO systems have too many interactions to be traced through quickly, thoroughly, and effectively by inspection. Frequency-domain techniques for analysis and controller design dominate SISO control system theory. The Bode plot , Nyquist stability criterion , Nichols plot , and root locus are the usual tools for SISO system analysis. Controllers can be designed using polynomial design or root-locus methods, to name just two of the more popular approaches. SISO controllers are often of PI, PID, or lead-lag type.
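The frequency-domain analysis mentioned above reduces, for a SISO system, to evaluating a scalar transfer function at s = jω. A minimal sketch, using an assumed first-order lag G(s) = K/(τs + 1) as the example plant (not a system from the source), computes the Bode magnitude and phase at a given frequency:

```python
# Minimal SISO frequency-domain sketch: Bode magnitude/phase of an
# assumed first-order lag G(s) = K / (tau*s + 1), evaluated at s = j*w.
import cmath
import math

def bode_point(K, tau, w):
    """Return (magnitude in dB, phase in degrees) of G(jw)."""
    G = K / (1j * tau * w + 1)
    return 20 * math.log10(abs(G)), math.degrees(cmath.phase(G))

# At the corner frequency w = 1/tau the classic -3 dB / -45 deg point appears.
for w in (0.1, 1.0, 10.0):
    mag_db, phase_deg = bode_point(K=1.0, tau=1.0, w=w)
    print(f"w = {w:5.1f} rad/s: {mag_db:7.2f} dB, {phase_deg:7.2f} deg")
```

Because the loop is scalar, stability margins can be read directly off such magnitude/phase data, which is exactly why Bode and Nyquist methods dominate SISO design.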
https://en.wikipedia.org/wiki/Single-input_single-output_system
In materials science , the term single-layer materials or 2D materials refers to crystalline solids consisting of a single layer of atoms. These materials are promising for some applications but remain the focus of research. Single-layer materials derived from single elements generally carry the -ene suffix in their names, e.g. graphene . Single-layer materials that are compounds of two or more elements have -ane or -ide suffixes. 2D materials can generally be categorized as either 2D allotropes of various elements or as compounds (consisting of two or more covalently bonding elements). It is predicted that there are hundreds of stable single-layer materials. [ 1 ] [ 2 ] The atomic structure and calculated basic properties of these and many other potentially synthesisable single-layer materials can be found in computational databases. [ 3 ] 2D materials can be produced using two main approaches: top-down exfoliation and bottom-up synthesis. The exfoliation methods include sonication , mechanical, hydrothermal, electrochemical, laser-assisted, and microwave-assisted exfoliation. [ 4 ] Graphene is a crystalline allotrope of carbon in the form of a nearly transparent (to visible light) one-atom-thick sheet. It is hundreds of times stronger than most steels by weight. [ 5 ] It has the highest known thermal and electrical conductivity, displaying current densities 1,000,000 times that of copper . [ 6 ] It was first produced in 2004. [ 7 ] Andre Geim and Konstantin Novoselov won the 2010 Nobel Prize in Physics "for groundbreaking experiments regarding the two-dimensional material graphene". They first produced it by lifting graphene flakes from bulk graphite with adhesive tape and then transferring them onto a silicon wafer. [ 8 ] Graphyne is another 2-dimensional carbon allotrope whose structure is similar to graphene's. It can be seen as a lattice of benzene rings connected by acetylene bonds.
Depending on the content of the acetylene groups, graphyne can be considered a mixed hybridization , sp n , where 1 < n < 2, [ 9 ] [ 10 ] compared to graphene (pure sp 2 ) and diamond (pure sp 3 ). First-principle calculations using phonon dispersion curves and ab-initio finite-temperature, quantum mechanical molecular dynamics simulations showed graphyne and its boron nitride analogues to be stable. [ 11 ] The existence of graphyne was conjectured before 1960. [ 12 ] In 2010, graphdiyne (graphyne with diacetylene groups) was synthesized on copper substrates. [ 13 ] In 2022, a team claimed to have successfully used alkyne metathesis to synthesise graphyne, though this claim is disputed. [ 14 ] [ 15 ] After an investigation, the team's paper was retracted by the publication, citing fabricated data. [ 16 ] [ verification needed ] Later in 2022, synthesis of multi-layered γ‑graphyne was successfully performed through the polymerization of 1,3,5-tribromo-2,4,6-triethynylbenzene under Sonogashira coupling conditions. [ 17 ] [ 18 ] Graphyne has recently been claimed to be a competitor for graphene due to the potential of direction-dependent Dirac cones . [ 19 ] [ 20 ] Borophene is a crystalline atomic monolayer of boron and is also known as boron sheet . First predicted by theory in the mid-1990s in a freestanding state, [ 21 ] and then demonstrated as distinct monoatomic layers on substrates by Zhang et al., [ 22 ] different borophene structures were experimentally confirmed in 2015. [ 23 ] [ 24 ] Germanene is a two-dimensional allotrope of germanium with a buckled honeycomb structure. [ 25 ] Experimentally synthesized germanene exhibits a honeycomb structure. [ 26 ] [ 27 ] This honeycomb structure consists of two hexagonal sub-lattices that are vertically displaced by 0.2 Å from each other. [ 28 ] Silicene is a two-dimensional allotrope of silicon , with a hexagonal honeycomb structure similar to that of graphene.
[ 29 ] [ 30 ] [ 31 ] Its growth is scaffolded by a pervasive Si/Ag(111) surface alloy beneath the two-dimensional layer. [ 32 ] Stanene is a predicted topological insulator that may display dissipationless currents at its edges near room temperature . It is composed of tin atoms arranged in a single layer, in a manner similar to graphene. [ 33 ] Its buckled structure leads to high reactivity against common air pollutants such as NO x and CO x , and it is able to trap and dissociate them at low temperature. [ 34 ] A structure determination of stanene using low-energy electron diffraction has shown ultra-flat stanene on a Cu(111) surface. [ 35 ] Plumbene is a two-dimensional allotrope of lead , with a hexagonal honeycomb structure similar to that of graphene. [ 36 ] Phosphorene is a 2-dimensional, crystalline allotrope of phosphorus . Its mono-atomic hexagonal structure makes it conceptually similar to graphene. However, phosphorene has substantially different electronic properties; in particular, it possesses a nonzero band gap while displaying high electron mobility. [ 37 ] This property potentially makes it a better semiconductor than graphene. [ 38 ] The synthesis of phosphorene mainly relies on micromechanical cleavage or liquid-phase exfoliation. The former has a low yield, while the latter produces free-standing nanosheets in solvent rather than on a solid support. Bottom-up approaches such as chemical vapor deposition (CVD) remain unexplored because of phosphorene's high reactivity. Therefore, at present, the most effective method for large-area fabrication of thin films of phosphorene consists of wet assembly techniques such as Langmuir-Blodgett, involving assembly followed by deposition of nanosheets on solid supports. [ 39 ] Antimonene is a two-dimensional allotrope of antimony , with its atoms arranged in a buckled honeycomb lattice.
Theoretical calculations [ 40 ] predicted that antimonene would be a stable semiconductor in ambient conditions with suitable performance for (opto)electronics. Antimonene was first isolated in 2016 by micromechanical exfoliation [ 41 ] and was found to be very stable under ambient conditions. Its properties also make it a good candidate for biomedical and energy applications. [ 42 ] In a 2018 study, [ 43 ] antimonene-modified screen-printed electrodes (SPEs) were subjected to a galvanostatic charge/discharge test using a two-electrode approach to characterize their supercapacitive properties. The best configuration observed, which contained 36 nanograms of antimonene in the SPE, showed a specific capacitance of 1578 F g −1 at a current of 14 A g −1 . Over 10,000 galvanostatic cycles, the capacitance retention drops to 65% after the first 800 cycles but then remains between 63% and 65% for the remaining 9,200 cycles. The 36 ng antimonene/SPE system also showed an energy density of 20 mW h kg −1 and a power density of 4.8 kW kg −1 . These supercapacitive properties indicate that antimonene is a promising electrode material for supercapacitor systems. A more recent study [ 44 ] concerning antimonene-modified SPEs shows the inherent ability of antimonene layers to form electrochemically passivated layers that facilitate electroanalytical measurements in oxygenated environments, in which the presence of dissolved oxygen normally hinders the analytical procedure. The same study also describes the in-situ production of antimonene oxide/PEDOT:PSS nanocomposites as electrocatalytic platforms for the determination of nitroaromatic compounds. Bismuthene, the two-dimensional (2D) allotrope of bismuth , was predicted to be a topological insulator. In 2015, it was predicted that bismuthene would retain its topological phase when grown on silicon carbide. [ 45 ] The prediction was successfully realized and synthesized in 2016.
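The galvanostatic figures reported above for the antimonene-modified SPEs follow from the standard charge/discharge relation C_sp = I·Δt/(m·ΔV). A hedged sketch: the mass (36 ng) and mass-normalized current (14 A/g) are taken from the study, but the discharge time and voltage window below are illustrative assumptions, not reported values.

```python
# Generic galvanostatic charge/discharge (GCD) relation used to report
# supercapacitor specific capacitance: C_sp = I * dt / (m * dV).
# Mass and current density follow the study; dt and dV are assumed.

def specific_capacitance(current_A, discharge_time_s, mass_kg, voltage_window_V):
    """Specific capacitance in F/kg (divide by 1e3 for F/g)."""
    return current_A * discharge_time_s / (mass_kg * voltage_window_V)

m = 36e-12            # 36 ng of antimonene, in kg (from the study)
i = 14 * (m * 1e3)    # 14 A/g at this mass -> ~5.0e-7 A (rate from the study)
dt = 57.0             # assumed discharge time, s (illustrative)
dV = 0.5              # assumed voltage window, V (illustrative)

c_per_g = specific_capacitance(i, dt, m, dV) / 1e3
print(f"C_sp ~ {c_per_g:.0f} F/g")
```

With these assumed inputs the formula lands in the ~1600 F/g range, the same order as the reported 1578 F g⁻¹; the point is the relation itself, not a reconstruction of the experiment.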
[ 46 ] At first glance the system is similar to graphene, as the Bi atoms arrange in a honeycomb lattice. However, the bandgap is as large as 800 meV due to the large spin–orbit interaction (coupling) of the Bi atoms and their interaction with the substrate. Thus, room-temperature applications of the quantum spin Hall effect come within reach. Bismuthene has been reported to be the largest nontrivial-bandgap 2D topological insulator in its natural state. [ 47 ] [ 48 ] Top-down exfoliation of bismuthene has been reported in various instances, [ 49 ] [ 50 ] with recent works promoting the implementation of bismuthene in the field of electrochemical sensing. [ 51 ] [ 52 ] Emdadul et al. [ 53 ] predicted the mechanical strength and phonon thermal conductivity of monolayer β-bismuthene through atomic-scale analysis. The obtained room-temperature (300 K) fracture strength is ~4.21 N/m along the armchair direction and ~4.22 N/m along the zigzag direction. At 300 K, its Young's moduli are reported to be ~26.1 N/m and ~25.5 N/m along the armchair and zigzag directions, respectively. In addition, the predicted phonon thermal conductivity of ~1.3 W/m∙K at 300 K is considerably lower than that of other analogous 2D honeycombs, making it a promising material for thermoelectric operations. On 16 April 2024, scientists from Linköping University in Sweden reported that they had produced goldene , a single layer of gold atoms 100 nm wide. Lars Hultman , a materials scientist on the team behind the new research, is quoted as saying "we submit that goldene is the first free-standing 2D metal, to the best of our knowledge", meaning that it is not attached to any other material, unlike plumbene and stanene . Researchers from New York University Abu Dhabi (NYUAD) had previously reported synthesising goldene in 2022; however, various other scientists contended that the NYUAD team failed to prove they made a single-layer sheet of gold, as opposed to a multi-layer sheet.
Goldene is expected to be used primarily for its optical properties, with applications such as sensing or as a catalyst . [ 54 ] Single and double atom layers of platinum in a two-dimensional film geometry have been demonstrated. [ 56 ] [ 57 ] These atomically thin platinum films are epitaxially grown on graphene, [ 56 ] which imposes a compressive strain that modifies the surface chemistry of the platinum, while also allowing charge transfer through the graphene. [ 57 ] Single atom layers of palladium with a thickness down to 2.6 Å [ 55 ] and rhodium with a thickness of less than 4 Å [ 58 ] have been synthesized and characterized with atomic force microscopy and transmission electron microscopy. A 2D titanium formed by additive manufacturing ( laser powder bed fusion ) achieved greater strength than any known material (50% greater than magnesium alloy WE54). The material was arranged in a tubular lattice with a thin band running inside, merging two complementary lattice structures. This halved the stress at the weakest points in the structure. [ 59 ] Supracrystals of 2D materials have been proposed and theoretically simulated. [ 60 ] [ 61 ] These monolayer crystals are built of supra-atomic periodic structures in which the atoms at the nodes of the lattice are replaced by symmetric complexes. For example, in the hexagonal structure of graphene, patterns of 4 or 6 carbon atoms, rather than single atoms, would be arranged hexagonally as the repeating node in the unit cell . Two-dimensional alloys (or surface alloys) are a single atomic layer of alloy that is incommensurate with the underlying substrate. One example is the 2D ordered alloys of Pb with Sn and with Bi. [ 62 ] [ 63 ] Surface alloys have been found to scaffold two-dimensional layers, as in the case of silicene . [ 32 ] The most commonly studied two-dimensional transition metal dichalcogenide (TMD) is monolayer molybdenum disulfide (MoS 2 ). Several phases are known, notably the 1T and 2H phases.
The naming convention reflects the structure: the 1T phase has one "sheet" (consisting of a layer of S-Mo-S; see figure) per unit cell in a trigonal crystal system, while the 2H phase has two sheets per unit cell in a hexagonal crystal system. The 2H phase is more common, as the 1T phase is metastable and spontaneously reverts to 2H without stabilization by additional electron donors (typically surface S vacancies). [ 67 ] The 2H phase of MoS 2 ( Pearson symbol hP6; Strukturbericht designation C7) has space group P6 3 /mmc. Each layer contains Mo surrounded by S in trigonal prismatic coordination. [ 68 ] Conversely, the 1T phase (Pearson symbol hP3) has space group P-3m1 and octahedrally coordinated Mo; with the 1T unit cell containing only one layer, the unit cell has a c parameter slightly less than half the length of that of the 2H unit cell (5.95 Å and 12.30 Å, respectively). [ 69 ] The different crystal structures of the two phases result in differences in their electronic band structure as well. The d-orbitals of 2H-MoS 2 are split into three bands: d z² , d x²−y²,xy , and d xz,yz . Of these, only the d z² band is filled; this, combined with the splitting, results in a semiconducting material with a bandgap of 1.9 eV. [ 70 ] 1T-MoS 2 , on the other hand, has partially filled d-orbitals which give it a metallic character. Because the structure consists of in-plane covalent bonds and inter-layer van der Waals interactions , the electronic properties of monolayer TMDs are highly anisotropic. For example, the conductivity of MoS 2 in the direction parallel to the planar layer (0.1–1 ohm −1 cm −1 ) is ~2200 times larger than the conductivity perpendicular to the layers. [ 71 ] There are also differences between the properties of a monolayer compared to the bulk material: the Hall mobility at room temperature is drastically lower for monolayer 2H MoS 2 (0.1–10 cm 2 V −1 s −1 ) than for bulk MoS 2 (100–500 cm 2 V −1 s −1 ).
This difference arises primarily due to charge traps between the monolayer and the substrate it is deposited on. [ 72 ] MoS 2 has important applications in (electro)catalysis. As with other two-dimensional materials, properties can be highly geometry-dependent; the surface of MoS 2 is catalytically inactive, but the edges can act as active sites for catalyzing reactions. [ 73 ] For this reason, device engineering and fabrication may involve considerations for maximizing catalytic surface area, for example by using small nanoparticles rather than large sheets [ 73 ] or depositing the sheets vertically rather than horizontally. [ 74 ] Catalytic efficiency also depends strongly on the phase: the aforementioned electronic properties of 2H MoS 2 make it a poor candidate for catalysis applications, but these issues can be circumvented through a transition to the metallic (1T) phase. The 1T phase has more suitable properties, with a current density of 10 mA/cm 2 , an overpotential of −187 mV relative to RHE, and a Tafel slope of 43 mV/decade (compared to 94 mV/decade for the 2H phase). [ 75 ] [ 76 ] While graphene has a hexagonal honeycomb lattice structure with alternating double bonds emerging from its sp 2 -bonded carbons, graphane, while still maintaining the hexagonal structure, is the fully hydrogenated version of graphene, with every sp 3 -hybridized carbon bonded to a hydrogen (chemical formula (CH) n ). Furthermore, while graphene is planar due to its double-bonded nature, graphane is puckered, with the hexagons adopting different out-of-plane structural conformers such as the chair or boat to allow for the ideal 109.5° angles which reduce ring strain, in a direct analogy to the conformers of cyclohexane. [ 77 ] Graphane was first theorized in 2003, [ 78 ] was shown to be stable using first-principles energy calculations in 2007, [ 79 ] and was first experimentally synthesized in 2009.
[ 80 ] There are various experimental routes available for making graphane, including the top-down approaches of reduction of graphite in solution or hydrogenation of graphite using plasma/hydrogen gas, as well as the bottom-up approach of chemical vapor deposition. [ 77 ] Graphane is an insulator, with a predicted band gap of 3.5 eV; [ 81 ] however, partially hydrogenated graphene is a semiconductor, with the band gap being controlled by the degree of hydrogenation. [ 77 ] Germanane is a single-layer crystal composed of germanium with one hydrogen bonded in the z-direction for each atom. [ 82 ] [ 83 ] Germanane's structure is similar to that of graphane ; bulk germanium does not adopt this structure. Germanane is produced in a two-step route starting with calcium germanide . From this material, the calcium (Ca) is removed by de- intercalation with HCl to give a layered solid with the empirical formula GeH. [ 84 ] The Ca sites in Zintl-phase CaGe 2 interchange with the hydrogen atoms in the HCl solution, producing GeH and CaCl 2 . SLSiN (acronym for S ingle- L ayer Si licon N itride), a novel 2D material introduced as the first post-graphene member of Si 3 N 4 , was first discovered computationally in 2020 via density-functional-theory-based simulations. [ 85 ] This new material is inherently 2D, an insulator with a band gap of about 4 eV, and stable both thermodynamically and in terms of lattice dynamics. Often single-layer materials, specifically elemental allotropes, are connected to the supporting substrate via surface alloys. [ 32 ] [ 33 ] By now, this phenomenon has been proven for silicene via a combination of different measurement techniques, [ 32 ] as the alloy is difficult to prove by a single technique and hence was not expected for a long time. Such scaffolding surface alloys can therefore also be expected beneath other two-dimensional materials, significantly influencing the properties of the two-dimensional layer.
During growth, the alloy acts as both foundation and scaffold for the two-dimensional layer, for which it paves the way. [ 32 ] Ni 3 (HITP) 2 is an organic, crystalline, structurally tunable electrical conductor with a high surface area. HITP is an organic chemical (2,3,6,7,10,11-hexaamino triphenylene ). It shares graphene's hexagonal honeycomb structure. Multiple layers naturally form perfectly aligned stacks, with identical 2-nm openings at the centers of the hexagons. Room-temperature electrical conductivity is ~40 S cm −1 , comparable to that of bulk graphite and among the highest for any conducting metal-organic frameworks (MOFs). The temperature dependence of its conductivity is linear at temperatures between 100 K and 500 K, suggesting an unusual charge transport mechanism that has not been previously observed in organic semiconductors . [ 86 ] The material was claimed to be the first of a group formed by switching metals and/or organic compounds. The material can be isolated as a powder or a film with conductivity values of 2 and 40 S cm −1 , respectively. [ 87 ] Using melamine (a carbon and nitrogen ring structure) as a monomer , researchers created 2DPA-1, a 2-dimensional polymer sheet held together by hydrogen bonds . The sheet forms spontaneously in solution, allowing thin films to be spin-coated. The polymer has a yield strength twice that of steel, and it resists six times more deformation force than bulletproof glass . It is impermeable to gases and liquids. [ 88 ] [ 89 ] Single layers of 2D materials can be combined into layered assemblies. For example, bilayer graphene is a material consisting of two layers of graphene . One of the first reports of bilayer graphene was in the seminal 2004 Science paper by Geim and colleagues, in which they described devices "which contained just one, two, or three atomic layers". Layered combinations of different 2D materials are generally called van der Waals heterostructures .
Twistronics is the study of how the angle (the twist) between layers of two-dimensional materials can change their electrical properties. Microscopy techniques such as transmission electron microscopy , [ 90 ] [ 91 ] [ 92 ] 3D electron diffraction , [ 93 ] scanning probe microscopy , [ 94 ] scanning tunneling microscopy , [ 90 ] and atomic-force microscopy [ 90 ] [ 92 ] [ 94 ] are used to characterize the thickness and size of the 2D materials. Electrical properties and structural properties such as composition and defects are characterized by Raman spectroscopy , [ 90 ] [ 92 ] [ 94 ] X-ray diffraction , [ 90 ] [ 92 ] and X-ray photoelectron spectroscopy . [ 95 ] The mechanical characterization of 2D materials is difficult due to the ambient reactivity and substrate constraints present in many 2D materials. To this end, many mechanical properties are calculated using molecular dynamics simulations or molecular mechanics simulations. Experimental mechanical characterization is possible for 2D materials which can survive the conditions of the experimental setup and which can be deposited on suitable substrates or exist in free-standing form. Many 2D materials also possess out-of-plane deformation, which further complicates measurements. [ 96 ] Nanoindentation testing is commonly used to experimentally measure the elastic modulus , hardness , and fracture strength of 2D materials. From these directly measured values, models exist which allow the estimation of fracture toughness , work hardening exponent , residual stress, and yield strength . These experiments are run using dedicated nanoindentation equipment or an atomic force microscope (AFM). Nanoindentation experiments are generally run with the 2D material either as a linear strip clamped on both ends indented by a wedge, or as a circular membrane clamped around its circumference indented by a curved tip in the center.
The strip geometry is difficult to prepare but allows for easier analysis due to the linear resulting stress fields. The circular drum-like geometry is more commonly used and can be easily prepared by exfoliating samples onto a patterned substrate. The stress applied to the film in the clamping process is referred to as the residual stress. In the case of very thin layers of 2D materials, bending stress is generally ignored in indentation measurements, with bending stress becoming relevant in multilayer samples. Elastic modulus and residual stress values can be extracted by determining the linear and cubic portions of the experimental force-displacement curve. The fracture stress of the 2D sheet is extracted from the applied stress at failure of the sample. AFM tip size was found to have little effect on elastic property measurement, but the breaking force was found to have a strong tip-size dependence due to stress concentration at the apex of the tip. [ 97 ] Using these techniques, the elastic modulus and yield strength of graphene were found to be 342 N/m and 55 N/m, respectively. [ 97 ] Poisson's ratio measurements in 2D materials are generally straightforward. To get a value, a 2D sheet is placed under stress and its displacement responses are measured, or an MD calculation is run. The unique structures found in 2D materials have been found to result in auxetic behavior in phosphorene [ 98 ] and graphene [ 99 ] and a Poisson's ratio of zero in triangular lattice borophene. [ 100 ] The shear modulus of graphene has been extracted by measuring the resonance frequency shift in a double-paddle oscillator experiment as well as with MD simulations. [ 101 ] [ 102 ] The fracture toughness of 2D materials in Mode I (K IC ) has been measured directly by stretching pre-cracked layers and monitoring crack propagation in real time. [ 103 ] MD simulations as well as molecular mechanics simulations have also been used to calculate fracture toughness in Mode I.
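The extraction of elastic modulus and residual stress from the linear and cubic portions of the force-displacement curve, as described above, can be sketched numerically. The model below, F(δ) = A·δ + B·δ³ (A tied to the residual pre-tension, B to the 2D elastic modulus), is a commonly used form for clamped circular membranes and is stated here as an assumption; the data are synthetic, not experimental.

```python
# Sketch: separate the linear (residual-stress) and cubic (elastic-modulus)
# contributions of a membrane nanoindentation force-displacement curve.
# Assumed model: F(d) = A*d + B*d^3. Synthetic, noise-free data.

def fit_linear_plus_cubic(depths, forces):
    """Least-squares fit of F = A*d + B*d^3 via the 2x2 normal equations."""
    s11 = sum(d * d for d in depths)
    s13 = sum(d**4 for d in depths)
    s33 = sum(d**6 for d in depths)
    b1 = sum(f * d for f, d in zip(forces, depths))
    b3 = sum(f * d**3 for f, d in zip(forces, depths))
    det = s11 * s33 - s13 * s13
    A = (b1 * s33 - b3 * s13) / det
    B = (s11 * b3 - s13 * b1) / det
    return A, B

# Synthetic curve in nN and nm: A = 2.0 nN/nm (pretension term),
# B = 1e-4 nN/nm^3 (elastic term). Values are illustrative only.
depths = [i * 2.0 for i in range(1, 51)]           # 2..100 nm
forces = [2.0 * d + 1e-4 * d**3 for d in depths]   # nN

A, B = fit_linear_plus_cubic(depths, forces)
print(f"A = {A:.3f} nN/nm, B = {B:.2e} nN/nm^3")
```

On real AFM data the same decomposition is applied to the measured curve; the fitted A then gives the residual tension and B, via the membrane geometry, the 2D elastic modulus.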
In anisotropic materials, such as phosphorene, crack propagation was found to happen preferentially along certain directions. [ 104 ] Most 2D materials were found to undergo brittle fracture. The major expectation held amongst researchers is that given their exceptional properties, 2D materials will replace conventional semiconductors to deliver a new generation of electronics. Research on 2D nanomaterials is still in its infancy, with the majority of research focusing on elucidating the unique material characteristics and few reports focusing on biomedical applications of 2D nanomaterials . [ 105 ] Nevertheless, recent rapid advances in 2D nanomaterials have raised important yet exciting questions about their interactions with biological moieties. 2D nanoparticles such as carbon-based 2D materials, silicate clays, transition metal dichalcogenides (TMDs), and transition metal oxides (TMOs) provide enhanced physical, chemical, and biological functionality owing to their uniform shapes, high surface-to-volume ratios, and surface charge. Two-dimensional (2D) nanomaterials are ultrathin nanomaterials with a high degree of anisotropy and chemical functionality. [ 106 ] 2D nanomaterials are highly diverse in terms of their mechanical , chemical , and optical properties, as well as in size, shape, biocompatibility, and degradability. [ 107 ] [ 108 ] These diverse properties make 2D nanomaterials suitable for a wide range of applications, including drug delivery , imaging , tissue engineering , biosensors , and gas sensors among others. [ 109 ] [ 110 ] However, their low-dimension nanostructure gives them some common characteristics. For example, 2D nanomaterials are the thinnest materials known, which means that they also possess the highest specific surface areas of all known materials. This characteristic makes these materials invaluable for applications requiring high levels of surface interactions on a small scale. 
As a result, 2D nanomaterials are being explored for use in drug delivery systems, where they can adsorb large numbers of drug molecules and enable superior control over release kinetics. [ 111 ] Additionally, their exceptional surface-area-to-volume ratios and typically high modulus values make them useful for improving the mechanical properties of biomedical nanocomposites and nanocomposite hydrogels , even at low concentrations. Their extreme thinness has been instrumental for breakthroughs in biosensing and gene sequencing . Moreover, the thinness of these materials allows them to respond rapidly to external signals such as light, which has led to utility in optical therapies of all kinds, including imaging applications, photothermal therapy (PTT), and photodynamic therapy (PDT). Despite the rapid pace of development in the field of 2D nanomaterials, these materials must be carefully evaluated for biocompatibility in order to be relevant for biomedical applications. [ 112 ] The newness of this class of materials means that even relatively well-established 2D materials like graphene are poorly understood in terms of their physiological interactions with living tissues . Additionally, the complexities of variable particle size and shape, impurities from manufacturing, and protein and immune interactions have resulted in a patchwork of knowledge on the biocompatibility of these materials.
https://en.wikipedia.org/wiki/Single-layer_materials
Single-minute exchange of die ( SMED ) is one of the many lean production methods for reducing inefficiencies in a manufacturing process. It provides a rapid and efficient way of converting a manufacturing process from running the current product to running the next product. This is key to reducing production lot sizes, and reducing uneven flow ( Mura ), production loss, and output variability. The phrase "single minute" does not mean that all changeovers and startups should take only one minute; rather, they should take less than ten minutes (a "single-digit" number of minutes). [ 1 ] A closely associated yet more difficult concept is one-touch exchange of die ( OTED ), which says changeovers can and should take less than 100 seconds. A die is a tool used in manufacturing. However, SMED's utility is not limited to manufacturing (see value stream mapping ). Frederick Winslow Taylor analyzed non-value-adding parts of setups in his 1911 book, Shop Management (page 171). [ 2 ] However, he did not create any method or structured approach around it. Frank Bunker Gilbreth studied and improved working processes in many different industries, from bricklaying to surgery. As part of his work, he also looked into changeovers. His book Motion Study (also from 1911) described approaches to reduce setup time. Even Henry Ford 's factories were using some setup reduction techniques. In the 1915 publication Ford Methods and Ford Shops , [ 3 ] setup reduction approaches were clearly described. However, these approaches never became mainstream. For most of the 20th century, the economic order quantity was the gold standard for lot sizing. The JIT workflow of Toyota had a problem of tool changeovers taking between two and eight hours. [ citation needed ] Setup time and lot reduction had been ongoing in Toyota's production system since 1945, when Taiichi Ohno became manager of the machine shops at Toyota.
On a trip to the US in 1955, Ohno observed Danly stamping presses with rapid die change capability. Subsequently, Toyota bought multiple Danly presses for the Motomachi plant and started improving the changeover time of their presses. This was known as Quick Die Change , or QDC for short. They developed a structured approach based on a framework from the US World War II Training within Industry (TWI) program, called ECRS – Eliminate, Combine, Rearrange, and Simplify. Over time, Toyota decreased changeover times from hours to fifteen minutes by the 1960s, three minutes by the 1970s, and ultimately just one minute by the 1990s. During the late 1970s, when Toyota's method was already well refined, Shigeo Shingo participated in one QDC workshop. After he started to publicize details of the Toyota Production System without permission, the business connection was terminated abruptly by Toyota. Shingo moved to the US and started to consult on lean manufacturing. Besides claiming to have invented this quick changeover method (among many other things), he renamed it Single Minute Exchange of Die or, in short, SMED. The "single minute" stands for a single-digit number of minutes (i.e., less than ten minutes). He promoted TPS and SMED in the US. [ 4 ] [ 5 ] Toyota found that the most difficult tools to change were the dies on the large transfer-stamping machines that produce car body parts. The dies – which must be changed for each model – weigh many tons, and must be assembled in the stamping machines with tolerances of less than a millimeter, otherwise the stamped metal will wrinkle, if not melt, under the intense heat and pressure. When Toyota engineers examined the change-over, they discovered that the established procedure was to stop the line, lower the dies by an overhead crane, position the dies in the machine by human eyesight, and then adjust their position with crowbars while making individual test stampings.
The existing process took from twelve hours to almost three days to complete. Toyota's first improvement was to place precision measurement devices on the transfer stamping machines, and record the necessary measurements for each model's die. Installing the die against these measurements, rather than by human eyesight, immediately cut the change-over to a mere hour and a half. Further observations led to further improvements – scheduling the die changes in a standard sequence (as part of FRS ) as a new model moved through the factory, dedicating tools to the die-change process so that all needed tools were nearby, and scheduling use of the overhead cranes so that the new die would be waiting as the old die was removed. Using these processes, Toyota engineers cut the change-over time to less than ten minutes per die, and thereby reduced the economic lot size below one vehicle. The success of this program contributed directly to just-in-time manufacturing, which is part of the Toyota Production System . SMED makes load balancing much more achievable by reducing economic lot size and thus stock levels. Shigeo Shingo, who created the SMED approach, claims [ 6 ] that in his data from 1975 to 1985, the average setup times he dealt with were reduced to 2.5% of the time originally required – a 40-fold improvement. However, the power of SMED is that it has many other effects which come from systematically looking at operations; these include: Shigeo Shingo recognizes eight fundamental techniques [ 7 ] that should be considered in implementing SMED. NB: External setup can be done without the line being stopped, whereas internal setup requires that the line be stopped.
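The link between setup reduction and lot size can be sketched with the economic order quantity mentioned earlier: EOQ = √(2DS/H), where D is annual demand, S the setup cost, and H the holding cost per unit per year. Since EOQ scales with the square root of setup cost, a 40-fold setup reduction implies roughly a √40 ≈ 6.3-fold smaller economic lot. The numbers below are hypothetical, chosen only to illustrate the scaling:

```python
from math import sqrt

def eoq(annual_demand: float, setup_cost: float, holding_cost: float) -> float:
    """Classic economic order quantity: sqrt(2*D*S / H)."""
    return sqrt(2 * annual_demand * setup_cost / holding_cost)

# Hypothetical numbers for illustration only.
D, H = 10_000, 5.0                                 # units/year, $/unit/year
before = eoq(D, setup_cost=400.0, holding_cost=H)  # expensive changeover
after = eoq(D, setup_cost=10.0, holding_cost=H)    # 40x cheaper changeover
print(round(before), round(after))                 # 1265 200
```

The lot size shrinks by √40 ≈ 6.3×, which is why SMED directly enables smaller lots and lower stock levels rather than merely saving changeover labor.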
He suggests [ 8 ] that SMED improvement should pass through four conceptual stages: A) ensure that external setup actions are performed while the machine is still running, B) separate external and internal setup actions, ensure that the parts all function, and implement efficient ways of transporting the die and other parts, C) convert internal setup actions to external ones, D) improve all setup actions. There are seven basic steps [ 9 ] to reducing changeover using the SMED system: This diagram shows four successive runs, with learning from each run and improvements applied before the next. The SMED concept is credited to Shigeo Shingo, one of the main contributors to the consolidation of the Toyota Production System, along with Taiichi Ohno . Look for: Record all necessary data. Parallel operations using multiple operators: by taking the 'actual' operations and making them into a network which contains the dependencies, it is possible to optimize task attribution and further optimize setup time. Issues of effective communication between the operators must be managed to ensure safety where potentially noisy or visually obstructive conditions occur.
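The dependency-network idea above can be sketched as a longest-path (critical-path) computation: external tasks that do not depend on the stopped line run in parallel and drop out of the makespan. Task names and durations here are hypothetical examples, not Toyota's actual operations:

```python
# Longest path (critical path) through a changeover-task dependency network.
from functools import lru_cache

durations = {"stop_line": 2, "unbolt_die": 6, "stage_new_die": 5,
             "swap_die": 8, "align": 4, "test_stamp": 3}
deps = {"stop_line": [], "unbolt_die": ["stop_line"],
        "stage_new_die": [],             # external: can run before stopping
        "swap_die": ["unbolt_die", "stage_new_die"],
        "align": ["swap_die"], "test_stamp": ["align"]}

@lru_cache(maxsize=None)
def earliest_finish(task: str) -> int:
    """Earliest finish time assuming every prerequisite runs as early as possible."""
    start = max((earliest_finish(d) for d in deps[task]), default=0)
    return start + durations[task]

makespan = max(earliest_finish(t) for t in durations)
print(makespan)  # 23: staging the new die in parallel adds no time at all
```

Converting a task from internal to external setup amounts to removing its dependency on `stop_line`, which is exactly stage C of Shingo's scheme expressed on the network.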
https://en.wikipedia.org/wiki/Single-minute_exchange_of_die
The single-molecule electric motor is an electrically operated synthetic molecular motor made from a single butyl methyl sulphide molecule. [ 1 ] The molecule is adsorbed onto a copper (111) single-crystal piece by chemisorption . [ 1 ] The motor, the world's smallest electric motor, [ 2 ] is just a nanometer (billionth of a meter) across [ 3 ] (60,000 times smaller than the thickness of a human hair). It was developed by the Sykes group and scientists at the Tufts University School of Arts and Sciences and published online September 4, 2011. [ 4 ] Single-molecule motors have been demonstrated before. These motors were either powered by chemical reactions [ 5 ] or by light. [ 6 ] This is the first experimental demonstration of electrical energy successfully coupling to directed molecular rotation. [ 3 ] [ 7 ] Butyl methyl sulfide is an asymmetrical thioether which is achiral in the gas phase. The molecule can be adsorbed to the surface through either of the sulfur's two lone pairs . This gives rise to the surface-bound chirality of the molecule. [ 8 ] The asymmetry of the molecule–surface interface gives rise to an asymmetrical barrier to rotation. [ 9 ] The molecule rotates around this sulfur–copper bond. Electrons quantum tunneling from the STM tip electrically excite molecular vibrations, which couple to rotational modes. [ 7 ] The rotation of the motor can be controlled by adjusting the electron flux from the scanning tunneling microscope and the background temperature. [ 1 ] The tip of the scanning tunneling microscope acts as an electrode. The chiralities of the tip of the STM and the molecule determine the rate and direction of rotation. [ 1 ] Images taken of the molecule at 5 K and under non-perturbative scanning conditions show a crescent-shaped protrusion of the molecule. [ 10 ] When the temperature is raised to 8 K, the molecule starts rotating among six orientations determined by the hexagonal structure of the copper it is adsorbed on.
In this case, an STM image taken of the molecule appears as a hexagon, as the timescale of the imaging is much slower than the rotation rate of the molecule. [ 10 ] The six rotational states of the molecule can be distinguished by aligning the tip of the scanning tunneling microscope asymmetrically over the side of one of the lobes of the molecule during spectroscopy measurements. When the butyl tail is nearest to the tip of the microscope, the tunneling current is at a maximum, and vice versa. The position of the molecule on the surface can thus be determined from the tunneling current, and by plotting the position versus time, the rate and direction of rotation can be determined. [ 10 ] At higher temperatures, the single-molecule motor rotates too fast (up to one million rotations per second at 100 K) to monitor. [ 2 ] The single-molecule electric motor could be used in engineering, [ 2 ] nanotechnological applications, and medicinal applications, [ 3 ] where drugs could be delivered to specified locations more accurately. [ 3 ] By altering the chemical structure of the molecule, it could become a component of a nanoelectromechanical system (NEMS). It also has the potential to be utilized to generate microwave radiation. [ 3 ]
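The rate-and-direction analysis described above (position versus time across the six lattice orientations) can be sketched as counting net hops around the six states. The trajectory below is synthetic, and the labeling of +1 hops as "clockwise" is an arbitrary convention, not the experimental one:

```python
# Estimating rotation rate and direction from a time series of the six
# discrete orientations (0..5), as would be inferred from tunneling current.

def net_rotation(states, dt_s):
    """Return (rate_hz, direction) from successive orientation samples."""
    steps = 0
    for a, b in zip(states, states[1:]):
        d = (b - a) % 6
        if d == 1:
            steps += 1    # one hop in the +1 ("cw") direction
        elif d == 5:
            steps -= 1    # one hop in the -1 ("ccw") direction
    full_turns = steps / 6
    elapsed = dt_s * (len(states) - 1)
    rate = full_turns / elapsed
    return abs(rate), "cw" if rate >= 0 else "ccw"

traj = [0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0]  # two full turns, synthetic
print(net_rotation(traj, dt_s=0.001))  # (~166.7 Hz, 'cw')
```

Real analyses must also handle missed samples and thermally driven back-hops, which is why the published work plots the full position-versus-time trace rather than counting transitions alone.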
https://en.wikipedia.org/wiki/Single-molecule_electric_motor
A single-molecule experiment is an experiment that investigates the properties of individual molecules . Single-molecule studies may be contrasted with measurements on an ensemble or bulk collection of molecules, where the individual behavior of molecules cannot be distinguished and only average characteristics can be measured. Since many measurement techniques in biology, chemistry, and physics are not sensitive enough to observe single molecules, the single-molecule fluorescence techniques that have emerged since the 1990s for probing various processes on the level of individual molecules generated considerable excitement, since they supplied many new details on the measured processes that were not accessible in the past. Indeed, since the 1990s, many techniques for probing individual molecules have been developed. [ 2 ] The first single-molecule experiments were patch clamp experiments performed in the 1970s, but these were limited to studying ion channels . Today, systems investigated using single-molecule techniques include the movement of myosin on actin filaments in muscle tissue and the spectroscopic details of individual local environments in solids. The conformations of biological polymers have been measured using atomic force microscopy (AFM). Using force spectroscopy , single molecules (or pairs of interacting molecules), usually polymers , can be mechanically stretched and their elastic response recorded in real time. In the gas phase at ultralow pressures, single-molecule experiments have been around for decades, but in the condensed phase only since 1989, with the work of W. E. Moerner and Lothar Kador. [ 3 ] One year later, Michel Orrit and Jacky Bernard were able to show also the detection of the absorption of single molecules by their fluorescence. [ 4 ] Many techniques have the ability to observe one molecule at a time, most notably mass spectrometry , where single ions are detected.
In addition, one of the earliest means of detecting single molecules came about in the field of ion channels with the development of the patch clamp technique by Erwin Neher and Bert Sakmann (who later went on to win the Nobel Prize for their seminal contributions). However, the idea of measuring conductance to look at single molecules placed a serious limitation on the kinds of systems that could be observed. Fluorescence is a convenient means of observing one molecule at a time, mostly due to the sensitivity of commercial optical detectors, capable of counting single photons. However, spectroscopically, the observation of one molecule requires that the molecule be in an isolated environment and that it emit photons upon excitation; the technology to detect single photons by use of photomultiplier tubes (PMTs) or avalanche photodiodes (APDs) then enables one to record photon emission events with great sensitivity and time resolution. More recently, single-molecule fluorescence has been the subject of intense interest for biological imaging, through the labeling of biomolecules such as proteins and nucleotides in order to study enzymatic function, which cannot easily be studied on the bulk scale due to subtle time-dependent movements in catalysis and structural reorganization. The most studied protein has been the class of myosin/actin enzymes found in muscle tissues. Through single-molecule techniques, the step mechanism has been observed and characterized in many of these proteins. In 1997, single-molecule detection was demonstrated with surface-enhanced Raman spectroscopy (SERS) by Katrin Kneipp , H. Kneipp, Y. Wang, L. T. Perelman and others [ 5 ] at MIT and independently by S. Nie and S. R. Emory at Indiana University .
[ 6 ] The MIT team used non-resonance Raman excitation and surface enhancement with silver nanoclusters to detect single cresyl violet molecules, while the team at Indiana University used resonance Raman excitation and surface enhancement with silver nanoparticles to detect single rhodamine 6G molecules. Nanomanipulators such as the atomic force microscope are also suited to single-molecule experiments of biological significance, since they work on the same length scale as most biological polymers. In addition, atomic force microscopy (AFM) is appropriate for studies of synthetic polymer molecules. AFM provides a unique possibility of 3D visualization of polymer chains. For instance, AFM tapping mode is gentle enough for the recording of adsorbed polyelectrolyte molecules (for example, 0.4 nm thick chains of poly(2-vinylpyridine)) under a liquid medium. Locations where two chains superpose correspond in these experiments to twice the thickness of a single chain (0.8 nm in the case of the mentioned example). With proper scanning parameters, the conformation of such molecules remains unchanged for hours, which allows experiments to be performed under liquid media of various properties. [ 1 ] Furthermore, by controlling the force between the tip and the sample, high-resolution images can be obtained. [ 7 ] [ 8 ] Optical tweezers have also been used to study and quantify DNA–protein interactions. [ 7 ] [ 8 ] Single-molecule fluorescence spectroscopy uses the fluorescence of a molecule for obtaining information on its environment, structure, and position. The technique affords the ability to obtain information otherwise not available due to ensemble averaging (that is, a signal obtained when recording many molecules at the same time represents an average property of the molecules' dynamics). The results in many experiments on individual molecules are two-state trajectories .
As in the case of single molecule fluorescence spectroscopy, the technique known as single channel recording can be used to obtain specific kinetic information—in this case about ion channel function—that is not available when ensemble recording, such as whole-cell recording, is performed. [ 9 ] Specifically, ion channels alternate between conducting and non-conducting classes, which differ in conformation. Therefore, the functional state of ion channels can be directly measured with sufficiently sensitive electronics, provided that proper precautions are taken to minimize noise. In turn, each of these classes may be divided into one or more kinetic states with direct bearing on the underlying function of the ion channel. Performing these types of single molecule studies under systematically varying conditions (e.g. agonist concentration and structure, permeant ion and/or channel blocker, mutations in the ion channel amino acids), can provide information regarding the interconversion of various kinetic states of the ion channel. [ 10 ] In a minimal model for an ion channel, there are two states : open and closed. However, other states are often needed in order to accurately represent the data, including multiple closed states as well as inactive and/or desensitized states, which are non-conducting states that can occur even in the presence of stimulus. [ 9 ] Single fluorophores can be chemically attached to biomolecules, such as proteins or DNA, and the dynamics of individual molecules can be tracked by monitoring the fluorescent probe. Spatial movements within the Rayleigh limit can be tracked, along with changes in emission intensity and/or radiative lifetime, which often indicate changes in local environment. For instance, single-molecule labeling has yielded a vast quantity of information on how kinesin motor proteins move along microtubule strands in muscle cells. 
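A standard single-channel analysis is extracting a kinetic rate from dwell times: in a two-state open/closed model, open dwell times are exponentially distributed with mean 1/k_close. A sketch with synthetic dwell times; the rate constant is an arbitrary illustrative value, not data from any recording:

```python
# Estimating an ion channel's closing rate from open dwell times.
import random
random.seed(1)

k_close = 50.0  # 1/s, "true" rate used only to generate synthetic dwells
dwells = [random.expovariate(k_close) for _ in range(100_000)]

# The maximum-likelihood estimate for an exponential rate is 1 / mean dwell.
k_est = 1.0 / (sum(dwells) / len(dwells))
print(round(k_est, 1))  # recovers ~50 1/s
```

Fitting dwell-time histograms with sums of exponentials is how the multiple closed, inactive, and desensitized states mentioned above are resolved: each additional kinetic state contributes another exponential component.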
Single-molecule imaging in live cells reveals interesting information about protein dynamics in their physiological environment. Several biophysical parameters of protein dynamics can be quantified, such as the diffusion coefficient, mean squared displacements, residence time, the fraction of bound and unbound molecules, and the target-search mechanism of a protein binding to its target site in the live cell. [ 11 ] Main article: smFRET . In single-molecule fluorescence resonance energy transfer , the molecule is labeled in (at least) two places. A laser beam is focused on the molecule, exciting the first probe. When this probe relaxes and emits a photon, it has a chance of exciting the other probe. The efficiency of the absorption of the photon emitted from the first probe in the second probe depends on the distance between these probes. Since the distance changes with time, this experiment probes the internal dynamics of the molecule. When looking at data related to individual molecules, one can usually construct propagators and jumping-time probability density functions of the first order, the second order, and so on, whereas from bulk experiments one usually obtains the decay of a correlation function. [ 12 ] From the information contained in these unique functions (obtained from individual molecules), one can extract a relatively clear picture of the way the system behaves; e.g., its kinetic scheme , its potential of activity, or its reduced dimensions form . [ 13 ] [ 14 ] In particular, one can construct (many properties of) the reaction pathway of an enzyme by monitoring the activity of an individual enzyme. [ 15 ] Additionally, significant aspects regarding the analysis of single-molecule data, such as fitting methods and tests for homogeneous populations, have been described by several authors.
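The distance dependence that makes smFRET a probe of internal dynamics follows the standard Förster relation E = 1/(1 + (r/R0)^6). A sketch, with an assumed Förster radius of 5 nm as a typical value for common dye pairs (not a value from this article):

```python
# FRET efficiency vs donor-acceptor distance: E = 1 / (1 + (r/R0)^6),
# where R0 is the Forster radius, the distance at which E = 0.5.

def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

for r in (2.5, 5.0, 7.5):
    print(r, round(fret_efficiency(r), 3))
# E falls from ~0.985 at 2.5 nm, through 0.5 at R0, to ~0.081 at 7.5 nm
```

The steep sixth-power falloff around R0 is what makes smFRET a "spectroscopic ruler": small nanometer-scale conformational changes translate into large, easily measured efficiency changes.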
[ 9 ] On the other hand, there are several issues with the analysis of single-molecule data, including the construction of a low-noise environment and insulated pipette tips, the filtering of remaining unwanted components (noise) found in recordings, and the length of time required for data analysis (pre-processing, unambiguous event detection, plotting data, fitting kinetic schemes, etc.). Single-molecule techniques have impacted optics, electronics, biology, and chemistry. In the biological sciences, the study of proteins and other complex biological machinery was limited to ensemble experiments that made direct observation of their kinetics nearly impossible. For example, it was only after single-molecule fluorescence microscopy was used to study kinesin–myosin pairs in muscle tissue that the walking mechanisms could be directly observed. These experiments, however, have for the most part been limited to in vitro studies, as useful techniques for live-cell imaging have yet to be fully realized. The promise of single-molecule in vivo imaging, [ 16 ] however, brings with it an enormous potential to directly observe biomolecules in native processes. These techniques are often targeted at studies involving low-copy proteins, many of which are still being discovered. These techniques have also been extended to study areas of chemistry, including the mapping of heterogeneous surfaces. [ 17 ]
https://en.wikipedia.org/wiki/Single-molecule_experiment
A single-molecule magnet ( SMM ) is a metal-organic compound that has superparamagnetic behavior below a certain blocking temperature at the molecular scale. In this temperature range, an SMM exhibits magnetic hysteresis of purely molecular origin. [ 1 ] [ 2 ] In contrast to conventional bulk magnets and molecule-based magnets , collective long-range magnetic ordering of magnetic moments is not necessary. [ 2 ] Although the term "single-molecule magnet" was first employed in 1996, [ 3 ] the first single-molecule magnet, [Mn 12 O 12 (OAc) 16 (H 2 O) 4 ] (nicknamed "Mn 12 "), was reported in 1991. [ 4 ] [ 5 ] [ 6 ] This manganese oxide compound features a central Mn(IV) 4 O 4 cube surrounded by a ring of 8 Mn(III) units connected through bridging oxo ligands , and displays slow magnetic relaxation behavior up to temperatures of ca. 4 K. [ 7 ] [ 8 ] Efforts in this field primarily focus on raising the operating temperatures of single-molecule magnets to liquid nitrogen temperature or room temperature in order to enable applications in magnetic memory. Along with raising the blocking temperature, efforts are being made to develop SMMs with high energy barriers to prevent fast spin reorientation. [ 9 ] Recent acceleration in this field of research has resulted in significant enhancements of single-molecule magnet operating temperatures to above 70 K. [ 10 ] [ 11 ] [ 12 ] [ 13 ] Because of single-molecule magnets' magnetic anisotropy , the magnetic moment usually has only two stable orientations antiparallel to each other, separated by an energy barrier . The stable orientations define the molecule's so-called “easy axis”. At finite temperature, there is a finite probability for the magnetization to flip and reverse its direction.
As with a superparamagnet , the mean time between two flips is called the Néel relaxation time and is given by the following Néel–Arrhenius equation: [ 14 ] τ − 1 = τ 0 − 1 exp ⁡ ( − U e f f k B T ) {\displaystyle \tau ^{-1}=\tau _{0}^{-1}\exp \left({\frac {-U_{eff}}{k_{B}T}}\right)} where τ {\displaystyle \tau } is the relaxation time, τ 0 {\displaystyle \tau _{0}} is an attempt time characteristic of the material, U e f f {\displaystyle U_{eff}} is the effective energy barrier, k B {\displaystyle k_{B}} is the Boltzmann constant, and T {\displaystyle T} is the temperature. This magnetic relaxation time, τ , can be anywhere from a few nanoseconds to years or much longer. The so-called magnetic blocking temperature , T B , is defined as the temperature below which the relaxation of the magnetization becomes slow compared to the time scale of a particular investigation technique. [ 15 ] Historically, the blocking temperature for single-molecule magnets has been defined as the temperature at which the molecule's magnetic relaxation time, τ , is 100 seconds. This definition is the current standard for comparison of single-molecule magnet properties, but otherwise is not technologically significant. There is typically a correlation between increasing an SMM's blocking temperature and its energy barrier. The average blocking temperature for SMMs is 4 K. [ 16 ] Dy-metallocenium salts are the most recent SMMs to achieve the highest temperature of magnetic hysteresis, greater than that of liquid nitrogen. [ 9 ] The magnetic coupling between the spins of the metal ions is mediated by superexchange interactions and can be described by the following isotropic Heisenberg Hamiltonian : H ^ = − ∑ i , j J i , j S i ⋅ S j {\displaystyle {\hat {H}}=-\sum _{i,j}J_{i,j}\mathbf {S} _{i}\cdot \mathbf {S} _{j}} where J i , j {\displaystyle J_{i,j}} is the coupling constant between spin i (operator S i {\displaystyle \mathbf {S} _{i}} ) and spin j (operator S j {\displaystyle \mathbf {S} _{j}} ). For positive J the coupling is called ferromagnetic (parallel alignment of spins) and for negative J the coupling is called antiferromagnetic (antiparallel alignment of spins). A single-molecule magnet is characterized by: a high spin ground state , a high zero-field splitting (due to high magnetic anisotropy ), and negligible magnetic interaction between molecules.
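The Néel–Arrhenius relation above can be inverted to obtain the 100-second blocking temperature: T_B = U_eff / (k_B ln(100 s / τ0)). The attempt time and barrier below are order-of-magnitude assumptions for illustration, not values for any specific SMM:

```python
# Neel-Arrhenius relaxation time and the 100-second blocking temperature.
# tau(T) = tau0 * exp(U_eff / (kB * T)); T_B solves tau(T_B) = 100 s.
from math import exp, log

kB = 1.380649e-23     # J/K, Boltzmann constant
tau0 = 1e-9           # s, attempt time (assumed order of magnitude)
U_eff = 1500 * kB     # barrier of 1500 K expressed in joules (assumed)

def tau(T):
    """Relaxation time in seconds at temperature T (kelvin)."""
    return tau0 * exp(U_eff / (kB * T))

# Invert for the 100-s blocking temperature.
T_B = U_eff / (kB * log(100.0 / tau0))
print(round(T_B, 1), round(tau(T_B)))  # 59.2 100
```

The logarithm in the denominator explains the weak payoff noted in the text: because T_B grows only logarithmically slowly with the τ0 prefactor, raising the barrier U_eff (or flattening the relaxation pathways) is the practical route to higher blocking temperatures.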
The combination of these properties can lead to an energy barrier , so that at low temperatures the system can be trapped in one of the high-spin energy wells. [ 2 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] A single-molecule magnet can have a positive or negative magnetic moment, and the energy barrier between these two states greatly determines the molecule's relaxation time. This barrier depends on the total spin of the molecule's ground state and on its magnetic anisotropy . The latter quantity can be studied with EPR spectroscopy . [ 21 ] The performance of single-molecule magnets is typically defined by two parameters: the effective barrier to slow magnetic relaxation, U eff , and the magnetic blocking temperature, T B . While these two variables are linked, only the latter variable, T B , directly reflects the performance of the single-molecule magnet in practical use. In contrast, U eff , the thermal barrier to slow magnetic relaxation, only correlates to T B when the molecule's magnetic relaxation behavior is perfectly Arrhenius in nature. The table below lists representative and record 100-s magnetic blocking temperatures and U eff values that have been reported for single-molecule magnets. Abbreviations: OAc= acetate , Cp ttt =1,2,4‐tri( tert ‐butyl)cyclopentadienide, Cp Me5 = 1,2,3,4,5-penta(methyl)cyclopentadienide , Cp iPr4H = 1,2,3,4-tetra(isopropyl)cyclopentadienide, Cp iPr4Me = 1,2,3,4-tetra(isopropyl)-5-(methyl)cyclopentadienide, Cp iPr4Et = 1-(ethyl)-2,3,4,5-tetra(isopropyl)cyclopentadienide, Cp iPr5 = 1,2,3,4,5-penta(isopropyl)cyclopentadienide *indicates parameters from magnetically dilute samples [ 26 ] Metal clusters formed the basis of the first decade-plus of single-molecule magnet research, beginning with the archetype of single-molecule magnets, "Mn 12 ". [ 4 ] [ 5 ] [ 6 ] This complex is a polymetallic manganese (Mn) complex having the formula [Mn 12 O 12 (OAc) 16 (H 2 O) 4 ], where OAc stands for acetate . 
It has the remarkable property of showing extremely slow relaxation of its magnetization below a blocking temperature. [Mn 12 O 12 (OAc) 16 (H 2 O) 4 ]·4H 2 O·2AcOH, which is called "Mn 12 -acetate", is a common form of this used in research. [ 27 ] Single-molecule magnets are also based on iron clusters [ 15 ] because they potentially have large spin states. In addition, the biomolecule ferritin is also considered a nanomagnet . In the cluster Fe 8 Br the cation Fe 8 stands for [Fe 8 O 2 (OH) 12 (tacn) 6 ] 8+ , with tacn representing 1,4,7-triazacyclononane . The ferrous cube complex [Fe 4 (sae) 4 (MeOH) 4 ] was the first example of a single-molecule magnet involving an Fe(II) cluster, and the core of this complex is a slightly distorted cube with Fe and O atoms on alternating corners. [ 28 ] Remarkably, this single-molecule magnet exhibits non-collinear magnetism, in which the atomic spin moments of the four Fe atoms point in opposite directions along two nearly perpendicular axes. [ 29 ] Theoretical computations showed that approximately two magnetic electrons are localized on each Fe atom, with the other atoms being nearly nonmagnetic, and that the spin–orbit-coupling potential energy surface has three local energy minima with a magnetic anisotropy barrier just below 3 meV. [ 30 ] There are many discovered types and potential uses. [ 31 ] [ 32 ] Single-molecule magnets represent a molecular approach to nanomagnets (nanoscale magnetic particles). Due to their typically large, bi-stable spin anisotropy , single-molecule magnets promise the realization of perhaps the smallest practical unit for magnetic memory , and thus are possible building blocks for a quantum computer . [ 1 ] Consequently, many groups have devoted great efforts to the synthesis of additional single-molecule magnets. Single-molecule magnets have been considered as potential building blocks for quantum computers .
[ 33 ] A single-molecule magnet is a system of many interacting spins with clearly defined low-lying energy levels. The high symmetry of the single-molecule magnet allows for a simplification of the spin system, which can be controlled in external magnetic fields. Single-molecule magnets display strong anisotropy , a property which causes a material's properties to vary with orientation. Anisotropy ensures that a collection of independent spins would be advantageous for quantum computing applications. A large number of independent spins, compared to a single spin, permits the creation of a larger qubit and therefore a larger memory capacity. Superposition and interference of the independent spins also allow for further simplification of classical computation algorithms and queries. Theoretically, quantum computers can overcome the physical limitations presented by classical computers by encoding and decoding quantum states. Single-molecule magnets have been utilized for the Grover algorithm , a quantum search theory. [ 34 ] The quantum search problem typically asks for a specific element to be retrieved from an unordered database. Classically, the element would be retrieved after N/2 attempts on average; however, a quantum search utilizes superpositions of data in order to retrieve the element, theoretically reducing the search to a single query. Single-molecule magnets are considered ideal for this function due to their cluster of independent spins. A study conducted by Leuenberger and Loss specifically utilized crystals to amplify the moment of the single-molecule magnets Mn 12 and Fe 8 . Mn 12 and Fe 8 were both found to be ideal for memory storage, with a retrieval time of approximately 10 −10 seconds. [ 34 ] Another approach to information storage with the SMM Fe 4 involves the application of a gate voltage for a state transition from neutral to anionic.
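For context on the query counts mentioned above, the textbook comparison for searching an unordered database of size N is an expected N/2 classical queries against roughly (π/4)√N Grover oracle iterations; the Leuenberger–Loss proposal is a more specialized parallel encoding, so the standard scaling is shown here only for reference:

```python
# Classical vs Grover query counts for unordered search (textbook scaling).
from math import pi, sqrt

for N in (16, 1024, 1_048_576):
    classical = N // 2                  # expected classical queries
    grover = round(pi / 4 * sqrt(N))    # optimal Grover iteration count
    print(N, classical, grover)
# 16 8 3
# 1024 512 25
# 1048576 524288 804
```

The quadratic gap (524,288 vs 804 queries at N ≈ 10^6) is the speedup that makes dense, addressable spin systems such as SMM crystals attractive as search-register candidates.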
Using electrically gated molecular magnets offers the advantage of control over the cluster of spins during a shortened time scale. [ 33 ] The electric field can be applied to the SMM using a tunneling microscope tip or a strip-line . The corresponding changes in conductance are unaffected by the magnetic states, showing that information storage could be performed at much higher temperatures than the blocking temperature. [ 16 ] One demonstrated mode of information transfer is from a DVD to another readable medium, as shown with Mn 12 molecules patterned on polymers. [ 35 ] Another application for SMMs is in magnetocaloric refrigerants . A machine learning approach using experimental data has been able to predict novel SMMs that would have large entropy changes and would therefore be more suitable for magnetic refrigeration. Three hypothetical SMMs are proposed for experimental synthesis: Cr 2 Gd 2 ( OAc ) 5 + {\displaystyle {\ce {Cr2Gd2(OAc)5+}}} , Mn 2 Gd 2 ( OAc ) 5 + {\displaystyle {\ce {Mn2Gd2(OAc)5+}}} , [ Fe 4 Gd 6 ( O 3 PCH 2 Ph ) 6 ( O 2 CtBu ) 14 ( MeCN ) 2 ] {\displaystyle {\ce {[Fe4Gd6(O3PCH2Ph)6(O2CtBu)14(MeCN)2]}}} . [ 36 ] The main SMM characteristics that contribute to the entropy properties include dimensionality and the coordinating ligands. In addition, single-molecule magnets have provided physicists with useful test-beds for the study of quantum mechanics . Macroscopic quantum tunneling of the magnetization was first observed in Mn 12 O 12 , characterized by evenly spaced steps in the hysteresis curve. [ 37 ] The periodic quenching of this tunneling rate in the compound Fe 8 has been observed and explained with geometric phases . [ 38 ]
https://en.wikipedia.org/wiki/Single-molecule_magnet
Magnetic sequencing is a single-molecule sequencing method in development. A DNA hairpin , containing the sequence of interest, is bound between a magnetic bead and a glass surface. A magnetic field is applied to stretch the hairpin open into single strands, and the hairpin refolds when the magnetic field is decreased. The hairpin length can be determined by direct imaging of the diffraction rings of the magnetic beads using a simple microscope. The DNA sequences are determined by measuring the changes in the hairpin length following successful hybridization of complementary nucleotides. [ 1 ] With the development of various next-generation sequencing platforms, there has been a substantial reduction in cost and an increase in the throughput of DNA sequencing. However, the majority of sequencing technologies rely on PCR -based clonal amplification of the DNA molecule in order to bring the signal to a detectable range. [ 2 ] Sequencing of amplified clusters, or bulk sequencing, poses a read-length-dependent phasing problem: during each cycle, not all of the molecules within the bulk successfully incorporate an additional nucleotide. As sequencing cycles accumulate, the signal of the lagging molecules eventually overwhelms the true signal. The phasing problem is a major limitation on the read lengths of next-generation sequencing technologies. Therefore, there is increased interest in developing single-molecule sequencing technologies, where no amplification is required. This not only shortens the preparation time for the sequencing libraries, it also has the potential to achieve much longer read lengths, as the lagging molecules with failed extensions can be ignored or considered separately. Previously known single-molecule sequencing technologies include Nanopore sequencing (Oxford Nanopore), [ 3 ] SMRT sequencing ( Pacific Biosciences ), [ 4 ] and Heliscope single molecule sequencing ( Helicos Biosciences ).
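The phasing problem described above can be illustrated with a toy model: if every molecule in a cluster incorporates a nucleotide with per-cycle efficiency p, the perfectly in-phase fraction decays as p^n. The 99.5% efficiency value below is an illustrative assumption, not a figure from the text:

```python
def in_phase_fraction(p, n):
    """Fraction of molecules in a cluster still perfectly in phase
    after n cycles, given per-cycle incorporation efficiency p."""
    return p ** n

# Even at 99.5% per-cycle efficiency, the in-phase signal decays
# substantially over a few hundred cycles:
for n in (50, 100, 300):
    print(n, round(in_phase_fraction(0.995, n), 3))
```

This decay is why cluster-based platforms cap read length, whereas a single-molecule readout can simply discard or separately handle a molecule whose extension fails.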
[ 5 ] The DNA molecule of interest must be incorporated into a hairpin, attached to a magnetic bead on one end and to an immobile glass surface on the other end. The hairpin is attached to the glass surface via a digoxigenin -antidigoxigenin bond, [ 1 ] and the magnetic bead is attached to the opposite end via a biotin - streptavidin interaction. [ 1 ] Such a DNA hairpin setup can be made in two ways. Electromagnets are placed above the sample slide, [ 6 ] and an inverted microscope is placed below. [ 7 ] The image is captured via a CCD camera and transferred to a computer, where the three-dimensional positions of the magnetic beads are determined. The position of the bead within the horizontal plane of the glass slide, x and y, is determined by real-time correlation of the bead images. [ 8 ] The vertical length of the hairpin, given by the vertical position of the attached magnetic bead, is measured via the bead's diffraction-ring diameter, which increases with distance. [ 7 ] A constant magnetic force is applied to unzip the DNA hairpin, and reducing the force allows the hairpin to rezip. [ 9 ] Prior to performing the downstream applications, several unzipping and rezipping cycles are performed. While the magnetic force required to unzip and rezip may vary depending on the DNA sequence and hairpin length, the absolute values are not critical as long as they are consistent within a sequencing run. [ 1 ] When the DNA hairpin is unzipped into single strands, oligonucleotides complementary to the hairpin sequence are allowed to hybridize. During the time course of the rezipping process, the bound oligonucleotides cause transient blockages. [ 1 ] The time-course measurement of hairpin length allows for the determination of the exact position of the hybridization, as well as the presence of mismatches between the oligonucleotide and the hairpin.
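The blockage-position readout described above amounts to finding plateaus in the length-versus-time trace while the hairpin rezips. A minimal sketch on synthetic data; the dwell and tolerance parameters are hypothetical, chosen only for illustration:

```python
def find_pauses(lengths, min_dwell=5, tol=1.0):
    """Find plateaus (transient blockages) in a rezipping trace.
    lengths: hairpin extension samples (nm) at uniform time steps.
    Returns (start_index, mean_length) for runs of >= min_dwell
    samples staying within +/- tol nm of the run's first sample."""
    pauses, i = [], 0
    while i < len(lengths):
        j = i
        while j < len(lengths) and abs(lengths[j] - lengths[i]) <= tol:
            j += 1
        if j - i >= min_dwell:
            run = lengths[i:j]
            pauses.append((i, sum(run) / len(run)))
        i = j
    return pauses

# Synthetic trace: rezipping from 60 nm with one blockage near 25 nm.
trace = [60, 55, 50, 45, 40, 35, 30, 25, 25, 25, 25, 25, 25, 20, 15, 10, 5, 0]
print(find_pauses(trace))  # [(7, 25.0)] -> a pause at ~25 nm of extension
```

The pause length (here ~25 nm) locates the bound oligonucleotide along the hairpin; a mismatched probe would show a shorter or absent dwell.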
[ 1 ] Hybridization is one way to determine the sequence of a DNA strand by detecting the changes in the length of a hairpin. When a probe hybridizes to an open hairpin, complete refolding of the hairpin is stalled, and the position of the hybridized probe can be inferred. Thus the sequence of a DNA fragment of interest can be inferred from the overlapping positions of probe sets, which are allowed to hybridize one by one. [ 10 ] First, a DNA fragment can be converted into a new sequence in which each original nucleotide is encoded by a specific 8-nt sequence (A8, T8, G8 and C8) [ 11 ] and then ligated to a hairpin. After applying a magnetic force, in the unzipped state of the hairpin, a small number of discriminating nucleotides can hybridize to the new individual complementary sequences on the hairpin, which can transiently block the refolding of the hairpin. The blockage positions produced by the hybridization of the discriminating nucleotides can be identified as pauses in the time course of the hairpin distance measurement. The complete sequence can be reconstructed from the overlapping fragments. Another application of magnetic sequencing uses the hairpin end-to-end distance to detect the successive ligation of oligonucleotides. [ 12 ] The first step of sequencing by ligation is to extend a DNA fragment using a primer. Extension is first attempted with a fragment starting with adenine, which can only be ligated if the next nucleotide on the opposite strand is a thymine. Then fragments starting with cytosine, guanine and thymine are attempted in turn, and the cycle is repeated. The magnetic field is released after each ligation, and the length of the extended primer is then measured. Upon ligation the primer is extended by seven bases, resulting in a detectable increase in the hairpin's end-to-end distance.
RNase cleavage at position 2 follows the ligation, in preparation for the next ligation cycle, so that the next ligation is positioned just ahead of the previous one. [ 10 ] A 7-nt primer library, 5′-NNNNNNrX-3′, is used in the ligation of a short degenerate oligonucleotide fragment, in which N represents any of the four deoxyribonucleotides, Nr represents any of the four ribonucleotides , and X is the tested base (A, G, C, or T). Ligation of each of the four tested bases to the primer strand is tested over hairpin opening and closing cycles. [ 1 ] The 7-nt primer ligates in the open state of the hairpin, which blocks rezipping of the last seven nucleotides and increases the distance between the surface and the magnetic bead by ~5 nm. [ 1 ] If the ligation is not successful, no change in the hairpin length is observed. RNase cleavage of the last six nucleotides is the next step following the ligation, ultimately extending the primer strand by a single base. Such cleavage allows rezipping of 6 nucleotides of the hairpin, signaled by a decrease in hairpin length of ~4 nm. [ 1 ] Therefore, incorporation of a complementary nucleotide is indicated by an increase of 7 nucleotides (+5 nm) followed by a decrease of 6 nucleotides (−4 nm). [ 1 ] After the RNase cleavage of the last six nucleotides, the next step is phosphorylation of the 5'-end by a kinase ; then the next cycle of ligation can be repeated. Many of the competitive single-molecule sequencing methods rely on the incorporation of fluorescently labeled nucleotides. [ 4 ] [ 5 ] In next-generation sequencing, the fluorescence signal of clusters can be easily detected. However, when the same concept is applied to single-molecule sequencing, the largest complication results from the high error rates. Because it is difficult to detect single labeled molecules, these platforms suffer from low signal-to-noise ratios, often resulting in misdetection or non-detection of fluorescent signals.
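The length bookkeeping of the ligation cycle described above (+7 nt ligation ≈ +5 nm, then −6 nt cleavage ≈ −4 nm, netting one base per successful cycle) can be sketched as:

```python
def ligation_cycle_signal(ligated):
    """Expected hairpin-length changes (nm) for one sequencing-by-ligation
    cycle: a successful 7-nt ligation adds ~5 nm, and the subsequent
    RNase cleavage of 6 nt removes ~4 nm, for a net gain of one base."""
    if not ligated:
        return (0.0, 0.0)  # failed ligation: no change at either step
    return (+5.0, -4.0)

# Three successful cycles extend the primer by 3 bases (~3 nm net):
net = sum(up + down for up, down in (ligation_cycle_signal(True),) * 3)
print(net)  # 3.0
```

The paired up-then-down signature is what distinguishes a true incorporation from noise: a single spurious length jump lacks the matching ~4 nm decrease.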
[ 13 ] In the case of magnetic sequencing, the signal measured is the change in distance between the two ends of a hairpin. Such a signal can be readily detected with standard cameras; thus, the signals are easy to detect even without the use of expensive imaging devices. In addition to the nature of the detected signal, other features of this platform allow for an even higher signal-to-noise ratio. In the case of magnetic sequencing by hybridization, a set of overlapping tiles is used such that the sequence of each nucleotide is determined by the hybridization of an 8-mer. [ 1 ] Therefore, the instrument only requires the sensitivity to detect a change of ~6 nm (the length of 8 nucleotides). Similarly, for sequencing-by-ligation cycles, successful incorporation is characterized by a ~5 nm increase (ligation of a 7-mer) followed by a ~4 nm decrease (RNase cleavage of a 6-mer) in hairpin length. [ 1 ] In this case, the decrease in length in the second step provides additional confirmation of the obtained signal. With the current methods, the instrumental error in the measured hairpin length is 1-1.5 nm. [ 1 ] The length of a basepair, or 2 extended single-stranded nucleotides, is approximately 0.85 nm. [ 1 ] Therefore, the resolution of the system is a few nucleotides. The sources of noise arise from length-dependent Brownian motion of the bead anchored by the extended hairpin, statistical error in bead-position determination, and slow mechanical drifts. [ 1 ] However, as mentioned earlier, such resolution is sufficient for the current sequencing method because changes of >4 nm are being measured. Through the use of magnetic traps, [ 6 ] constant magnetic force can be applied to millions of DNA hairpin-tethered magnetic beads in parallel. The magnetic force can be easily adjusted by changing the distance between the trap and the magnetic beads.
The number of beads that can be simultaneously monitored, which determines the read throughput of this platform, is limited by the bead size, the length of the tethered DNA hairpin, and the optical resolution limit. [ 1 ] Currently, a density of 750 K/mm 2 (comparable to an Illumina HiSeq 2000) [ 14 ] can be achieved. [ 1 ] As mentioned above, the noise due to the Brownian fluctuations of the bead increases with length. Robust sequencing tests have yet to be performed to determine the maximum read length of this system. However, the ligation of a 7-mer in the middle of a 1241 nucleotide-long hairpin was successfully detected, suggesting that the current system is sufficient to sequence up to ~500 bp. [ 1 ] The rate of sequencing or imaging depends on the mechanical movement speed of the magnetic beads, which is limited by drag force. [ 1 ] Currently, it is possible to measure 10 hairpin open-close cycles per second. [ 1 ] Additional complications include the existence of secondary hairpin structure in the DNA of interest. In such a case, the DNA loop to be ligated must be designed such that its closing is favored over the closing of the endogenous loop in the DNA of interest. [ 1 ]
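As a rough check on the resolution figures quoted earlier (~0.85 nm per base pair against 1-1.5 nm of instrumental noise), one can test whether a blockage of a given size clears a simple signal-to-noise criterion. The 3× threshold here is an assumed detection criterion, not a value from the text:

```python
NM_PER_BP = 0.85   # ~0.85 nm per base pair (2 extended ssDNA nucleotides)
NOISE_NM = 1.5     # upper end of the quoted 1-1.5 nm instrumental error

def detectable(n_bases, snr_threshold=3.0):
    """Is a length change of n_bases resolvable above the noise floor?"""
    signal_nm = n_bases * NM_PER_BP
    return signal_nm / NOISE_NM >= snr_threshold

print(detectable(8))  # 8-mer hybridization, ~6.8 nm change: True
print(detectable(2))  # ~1.7 nm change: False
```

This is consistent with the text's point that single-base events are at the edge of resolution, while the >4 nm multi-base signatures used by the method are comfortably detectable.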
https://en.wikipedia.org/wiki/Single-molecule_magnetic_sequencing
Single-molecule real-time ( SMRT ) sequencing is a parallelized single molecule DNA sequencing method. Single-molecule real-time sequencing utilizes a zero-mode waveguide (ZMW). [ 1 ] A single DNA polymerase enzyme is affixed at the bottom of a ZMW with a single molecule of DNA as a template. The ZMW is a structure that creates an illuminated observation volume small enough to observe only a single nucleotide of DNA being incorporated by DNA polymerase . Each of the four DNA bases is attached to one of four different fluorescent dyes. When a nucleotide is incorporated by the DNA polymerase, the fluorescent tag is cleaved off and diffuses out of the observation area of the ZMW, where its fluorescence is no longer observable. A detector detects the fluorescent signal of the nucleotide incorporation, and the base call is made according to the corresponding fluorescence of the dye. [ 2 ] The DNA sequencing is done on a chip that contains many ZMWs. Inside each ZMW, a single active DNA polymerase with a single molecule of single-stranded DNA template is immobilized to the bottom, through which light can penetrate to create a visualization chamber that allows monitoring of the activity of the DNA polymerase at the single-molecule level. The signal from a phospho-linked nucleotide incorporated by the DNA polymerase is detected as DNA synthesis proceeds, resulting in DNA sequencing in real time. To prepare the library, DNA fragments are put into a circular form using hairpin adapter ligations. [ 3 ] For each of the nucleotide bases, there is a corresponding fluorescent dye molecule that enables the detector to identify the base being incorporated by the DNA polymerase as it performs the DNA synthesis . The fluorescent dye molecule is attached to the phosphate chain of the nucleotide.
When the nucleotide is incorporated by the DNA polymerase, the fluorescent dye is cleaved off with the phosphate chain as part of the natural DNA synthesis process, during which a phosphodiester bond is created to elongate the DNA chain. The cleaved fluorescent dye molecule then diffuses out of the detection volume so that the fluorescent signal is no longer detected. [ 4 ] The zero-mode waveguide (ZMW) is a nanophotonic confinement structure that consists of a circular hole in an aluminum cladding film deposited on a clear silica substrate. [ 5 ] The ZMW holes are ~70 nm in diameter and ~100 nm in depth. Due to the behavior of light when it travels through a small aperture, the optical field decays exponentially inside the chamber. [ 6 ] [ 7 ] The observation volume within an illuminated ZMW is ~20 zeptoliters (20 × 10 −21 liters). Such a small observation volume eliminates background fluorescence from the free, unincorporated fluorescent nucleotides present in the solution. Within this volume, the activity of DNA polymerase incorporating a single nucleotide can be readily detected, with each nucleotide appearing as a separate color. [ 4 ] [ 8 ] Sequencing performance can be measured in read length, accuracy, and total throughput per experiment. PacBio sequencing systems using ZMWs have the advantage of long read lengths, although error rates are on the order of 5-15% and sample throughput is lower than on Illumina sequencing platforms. [ 9 ] On 19 Sep 2018, Pacific Biosciences (PacBio) released the Sequel 6.0 chemistry, synchronizing the chemistry version with the software version. Performance is contrasted for large-insert libraries with high-molecular-weight DNA versus shorter-insert libraries below ~15,000 bases in length. For larger templates, average read lengths are up to 30,000 bases. For shorter-insert libraries, average read lengths are up to 100,000 bases while reading the same molecule in a circle several times.
The latter shorter-insert libraries then yield up to 50 billion bases from a single SMRT Cell. [ 10 ] Pacific Biosciences (PacBio) commercialized SMRT sequencing in 2011, [ 11 ] after releasing a beta version of its RS instrument in late 2010. [ 12 ] At commercialization, read length had a normal distribution with a mean of about 1100 bases. A new chemistry kit released in early 2012 increased the sequencer's read length; an early customer of the chemistry cited mean read lengths of 2500 to 2900 bases. [ 13 ] The XL chemistry kit released in late 2012 increased average read length to more than 4300 bases. [ 14 ] [ 15 ] On August 21, 2013, PacBio released a new DNA polymerase binding kit, P4. This P4 enzyme has average read lengths of more than 4,300 bases when paired with the C2 sequencing chemistry and more than 5,000 bases when paired with the XL chemistry. [ 16 ] The enzyme's accuracy is similar to C2, reaching QV50 between 30X and 40X coverage. The resulting P4 attributes provided higher-quality assemblies using fewer SMRT Cells and with improved variant calling. [ 16 ] When coupled with input DNA size selection (using an electrophoresis instrument such as BluePippin), the P4 chemistry yields average read lengths over 7 kilobases. [ 17 ] On October 3, 2013, PacBio released a new reagent combination for the PacBio RS II, the P5 DNA polymerase with C3 chemistry (P5-C3). Together, they extend sequencing read lengths to an average of approximately 8,500 bases, with the longest reads exceeding 30,000 bases. [ 18 ] Throughput per SMRT Cell is around 500 million bases, demonstrated by sequencing results from the CHM1 cell line. [ 19 ] On October 15, 2014, PacBio announced the release of the new P6-C4 chemistry for the RS II system, representing the company's 6th generation of polymerase and 4th generation of chemistry, further extending the average read length to 10,000-15,000 bases, with the longest reads exceeding 40,000 bases.
The throughput with the new chemistry was estimated at between 500 million and 1 billion bases per SMRT Cell, depending on the sample being sequenced. [ 20 ] [ 21 ] This was the final version of chemistry released for the RS instrument. Throughput per experiment is influenced both by the read length of the DNA molecules sequenced and by the total multiplex of a SMRT Cell. The prototype of the SMRT Cell contained about 3000 ZMW holes that allowed parallelized DNA sequencing. At commercialization, the SMRT Cells were each patterned with 150,000 ZMW holes that were read in two sets of 75,000. [ 22 ] In April 2013, the company released a new version of the sequencer called the "PacBio RS II" that uses all 150,000 ZMW holes concurrently, doubling the throughput per experiment. [ 23 ] [ 24 ] The highest-throughput mode in November 2013 (P5 binding, C3 chemistry, BluePippin size selection, on a PacBio RS II) officially yielded 350 million bases per SMRT Cell, though a human de novo data set released with the chemistry averaged 500 million bases per SMRT Cell. Throughput varies based on the type of sample being sequenced. [ 25 ] With the introduction of P6-C4 chemistry, typical throughput per SMRT Cell increased to 500 million to 1 billion bases. In September 2015, the company announced the launch of a new sequencing instrument, the Sequel System, which increased capacity to 1 million ZMW holes. [ 26 ] [ 27 ] With the Sequel instrument, initial read lengths were comparable to the RS; later chemistry releases increased read length. On January 23, 2017, the V2 chemistry was released. It increased average read lengths to between 10,000 and 18,000 bases. [ 28 ] On March 8, 2018, the 2.1 chemistry was released. It increased average read length to 20,000 bases, with half of all reads above 30,000 bases in length. Yield per SMRT Cell increased to 10 or 20 billion bases, for either large-insert libraries or shorter-insert (e.g.
amplicon ) libraries, respectively. [ 29 ] On 19 September 2018, the company announced the Sequel 6.0 chemistry, with average read lengths increased to 100,000 bases for shorter-insert libraries and 30,000 for longer-insert libraries. SMRT Cell yield increased up to 50 billion bases for shorter-insert libraries. [ 10 ] In April 2019 the company released a new SMRT Cell with eight million ZMWs, [ 30 ] increasing the expected throughput per SMRT Cell by a factor of eight. [ 31 ] Early-access customers in March 2019 reported, over 58 customer-run cells, 250 GB of raw yield per cell with templates about 15 kb in length, and 67.4 GB of yield per cell with higher-molecular-weight templates. [ 32 ] System performance is now reported either in high-molecular-weight continuous long reads or in pre-corrected HiFi (also known as Circular Consensus Sequence (CCS)) reads. For high-molecular-weight reads, roughly half of all reads are longer than 50 kb. The HiFi performance includes corrected bases with quality above Phred score Q20, using repeated passes over the same molecule for correction; this approach accommodates amplicons up to 20 kb in length. Single-molecule real-time sequencing may be applicable to a broad range of genomics research. For de novo genome sequencing, read lengths from single-molecule real-time sequencing are comparable to or greater than those from the Sanger sequencing method based on dideoxynucleotide chain termination. The longer read length allows de novo genome sequencing and easier genome assemblies. [ 2 ] [ 33 ] [ 34 ] Scientists are also using single-molecule real-time sequencing in hybrid assemblies for de novo genomes, combining short-read sequence data with long-read sequence data. [ 35 ] [ 36 ] In 2012, several peer-reviewed publications were released demonstrating the automated finishing of bacterial genomes, [ 37 ] [ 38 ] including one paper that updated the Celera Assembler with a pipeline for genome finishing using long SMRT sequencing reads.
[ 39 ] In 2013, scientists estimated that long-read sequencing could be used to fully assemble and finish the majority of bacterial and archaeal genomes. [ 40 ] The same DNA molecule can be resequenced independently by creating the circular DNA template and utilizing a strand displacing enzyme that separates the newly synthesized DNA strand from the template. [ 41 ] In August 2012, scientists from the Broad Institute published an evaluation of SMRT sequencing for SNP calling. [ 42 ] The dynamics of polymerase can indicate whether a base is methylated . [ 43 ] Scientists demonstrated the use of single-molecule real-time sequencing for detecting methylation and other base modifications. [ 44 ] [ 45 ] [ 46 ] In 2012 a team of scientists used SMRT sequencing to generate the full methylomes of six bacteria. [ 47 ] In November 2012, scientists published a report on genome-wide methylation of an outbreak strain of E. coli. [ 48 ] Long reads make it possible to sequence full gene isoforms, including the 5' and 3' ends. This type of sequencing is useful to capture isoforms and splice variants. [ 49 ] [ 50 ] SMRT sequencing has several applications in reproductive medical genetics research when investigating families with suspected parental gonadal mosaicism. Long reads enable haplotype phasing in patients to investigate parent-of-origin of mutations. Deep sequencing enables determination of allele frequencies in sperm cells, of relevance for estimation of recurrence risk for future affected offspring. [ 51 ] [ 52 ]
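The ~20-zeptoliter observation volume quoted earlier for the ZMW can be reproduced with a rough geometric estimate. The 5 nm evanescent decay length used here is an assumption for illustration; the text gives only the exponential-decay behavior and the ~70 nm diameter, ~100 nm depth hole geometry:

```python
import math

# ZMW geometry from the text: ~70 nm diameter, ~100 nm deep.
RADIUS_M = 35e-9
DEPTH_M = 100e-9

def observation_volume_liters(decay_length_m=5e-9):
    """Approximate the illuminated volume as a cylinder truncated at the
    evanescent decay length: the field decays exponentially, so most of
    the excitation lies within roughly one decay length of the floor."""
    effective_depth = min(decay_length_m, DEPTH_M)
    volume_m3 = math.pi * RADIUS_M ** 2 * effective_depth
    return volume_m3 * 1e3  # 1 m^3 = 1000 L

print(observation_volume_liters())  # ~2e-20 L, i.e. ~20 zeptoliters
```

The full cylinder would be ~400 zeptoliters; the exponential confinement is what shrinks the effective volume by an order of magnitude and suppresses background from unincorporated dye-labeled nucleotides.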
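The HiFi quality threshold mentioned earlier (Phred Q20) maps to an error probability via Q = −10·log10(p). The majority-vote model below is a deliberate oversimplification of circular-consensus correction, included only to show why repeated passes over the same molecule drive the error down:

```python
import math

def phred_to_error(q):
    """Phred quality Q -> per-base error probability, p = 10^(-Q/10)."""
    return 10 ** (-q / 10)

def majority_consensus_error(e, n):
    """Consensus error over n independent passes with per-pass error e,
    under a naive majority-vote model (a simplification of CCS)."""
    return sum(math.comb(n, k) * e ** k * (1 - e) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

print(phred_to_error(20))                       # 0.01 -> at most 1% error at Q20
print(majority_consensus_error(0.1, 9) < 0.01)  # True: passes sharply cut error
```

Even with a 10% per-pass error, nine independent passes under this toy model already push the consensus error below the Q20 threshold, which is the intuition behind reading shorter inserts "in a circle several times".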
https://en.wikipedia.org/wiki/Single-molecule_real-time_sequencing
In genetics and bioinformatics , a single-nucleotide polymorphism ( SNP / s n ɪ p / ; plural SNPs / s n ɪ p s / ) is a germline substitution of a single nucleotide at a specific position in the genome . Although certain definitions require the substitution to be present in a sufficiently large fraction of the population (e.g. 1% or more), [ 1 ] many publications [ 2 ] [ 3 ] [ 4 ] do not apply such a frequency threshold. For example, a G nucleotide present at a specific location in a reference genome may be replaced by an A in a minority of individuals. The two possible nucleotide variations of this SNP – G or A – are called alleles . [ 5 ] SNPs can help explain differences in susceptibility to a wide range of diseases across a population. For example, a common SNP in the CFH gene is associated with increased risk of age-related macular degeneration . [ 6 ] Differences in the severity of an illness or response to treatments may also be manifestations of genetic variations caused by SNPs. For example, two common SNPs in the APOE gene, rs429358 and rs7412, lead to three major APO-E alleles with different associated risks for development of Alzheimer's disease and age at onset of the disease. [ 7 ] Single nucleotide substitutions with an allele frequency of less than 1% are sometimes called single-nucleotide variants ( SNVs ). [ 8 ] "Variant" may also be used as a general term for any single nucleotide change in a DNA sequence, [ 9 ] encompassing both common SNPs and rare mutations , whether germline or somatic . [ 10 ] [ 11 ] The term SNV has therefore been used to refer to point mutations found in cancer cells. [ 12 ] DNA variants must also commonly be taken into consideration in molecular diagnostics applications such as designing PCR primers to detect viruses, in which the viral RNA or DNA sample may contain SNVs. 
[ citation needed ] However, this nomenclature uses arbitrary distinctions (such as an allele frequency of 1%) and is not used consistently across all fields; the resulting disagreement has prompted calls for a more consistent framework for naming differences in DNA sequences between two samples. [ 13 ] [ 14 ] Single-nucleotide polymorphisms may fall within coding sequences of genes , non-coding regions of genes , or in the intergenic regions (regions between genes). SNPs within a coding sequence do not necessarily change the amino acid sequence of the protein that is produced, due to degeneracy of the genetic code . [ 15 ] SNPs in the coding region are of two types: synonymous SNPs and nonsynonymous SNPs. Synonymous SNPs do not affect the protein sequence, while nonsynonymous SNPs change the amino acid sequence of the protein. [ 16 ] SNPs that are not in protein-coding regions may still affect gene splicing , transcription factor binding, messenger RNA degradation, or the sequence of noncoding RNA. A SNP affecting gene expression in this way is referred to as an eSNP (expression SNP) and may be upstream or downstream from the gene. More than 600 million SNPs have been identified across the human genome in the world's population. [ 22 ] A typical genome differs from the reference human genome at 4–5 million sites, most of which (more than 99.9%) consist of SNPs and short indels . [ 23 ] The genomic distribution of SNPs is not homogeneous; SNPs occur more frequently in non-coding regions than in coding regions or, in general, wherever natural selection is acting and "fixing" the allele (eliminating other variants) of the SNP that constitutes the most favorable genetic adaptation. [ 24 ] Other factors, like genetic recombination and mutation rate, can also determine SNP density.
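The synonymous/nonsynonymous distinction above can be illustrated with a small codon lookup. The table below is a hand-picked subset of the standard genetic code, sufficient only for these examples:

```python
# Minimal codon lookup (subset of the standard genetic code), enough to
# illustrate synonymous vs. nonsynonymous coding SNPs.
CODON_TABLE = {
    "CTT": "Leu", "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",
    "CCT": "Pro", "GTT": "Val", "GAT": "Asp", "GAA": "Glu",
}

def classify_coding_snp(codon, pos, new_base):
    """Classify a SNP at position pos (0-2) inside one codon."""
    mutated = codon[:pos] + new_base + codon[pos + 1:]
    before, after = CODON_TABLE[codon], CODON_TABLE[mutated]
    if before == after:
        return "synonymous"
    return f"nonsynonymous ({before}->{after})"

print(classify_coding_snp("CTT", 2, "C"))  # CTT->CTC, both Leu: synonymous
print(classify_coding_snp("CTT", 1, "C"))  # CTT->CCT, Leu->Pro: nonsynonymous
```

The same third-position change that is silent here would alter the protein at other codons, which is why position within the codon matters as much as the base itself.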
[ 25 ] SNP density can be predicted by the presence of microsatellites : AT microsatellites in particular are potent predictors of SNP density, with long (AT)(n) repeat tracts tending to be found in regions of significantly reduced SNP density and low GC content . [ 26 ] Since there are variations between human populations, a SNP allele that is common in one geographical or ethnic group may be rarer in another. However, this pattern of variation is relatively rare; in a global sample of 67.3 million SNPs, the Human Genome Diversity Project "found no such private variants that are fixed in a given continent or major region. The highest frequencies are reached by a few tens of variants present at >70% (and a few thousands at >50%) in Africa, the Americas, and Oceania. By contrast, the highest frequency variants private to Europe, East Asia, the Middle East, or Central and South Asia reach just 10 to 30%." [ 27 ] Within a population, SNPs can be assigned a minor allele frequency (MAF), the lowest allele frequency at a locus that is observed in a particular population. [ 28 ] This is simply the lesser of the two allele frequencies for single-nucleotide polymorphisms. With this knowledge, scientists have developed new methods for analyzing population structure in less-studied species. [ 29 ] [ 30 ] [ 31 ] By using pooling techniques, the cost of the analysis is significantly lowered. [ citation needed ] These techniques are based on sequencing a population in a pooled sample instead of sequencing every individual within the population separately. With new bioinformatics tools, it is possible to investigate population structure, gene flow, and gene migration by observing the allele frequencies within the entire population. With these protocols it is possible to combine the advantages of SNPs with microsatellite markers. [ 32 ] [ 33 ] However, information is lost in the process, such as linkage disequilibrium and zygosity information.
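The minor allele frequency defined above, the lesser of the two allele frequencies at a locus, can be computed directly from genotype data. A minimal sketch assuming a strictly biallelic site:

```python
from collections import Counter

def minor_allele_frequency(genotypes):
    """MAF from a list of diploid genotypes like ('A', 'G'):
    the lesser of the two allele frequencies at the locus.
    Assumes exactly two distinct alleles are present."""
    counts = Counter(allele for genotype in genotypes for allele in genotype)
    total = sum(counts.values())
    return min(counts.values()) / total

# 5 individuals, alleles G and A at one site:
sample = [("G", "G"), ("G", "A"), ("G", "G"), ("A", "G"), ("G", "G")]
print(minor_allele_frequency(sample))  # 2 A out of 10 alleles -> 0.2
```

A monomorphic site (one allele only) would need a special case returning 0; the sketch omits it to stay close to the two-allele definition in the text.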
Variations in the DNA sequences of humans can affect how humans develop diseases and respond to pathogens , chemicals , drugs , vaccines , and other agents. SNPs are also critical for personalized medicine . [ 37 ] Examples include biomedical research, forensics, pharmacogenetics, and disease causation, as outlined below. One of the main contributions of SNPs to clinical research is the genome-wide association study (GWAS). [ 38 ] Genome-wide genetic data can be generated by multiple technologies, including SNP arrays and whole-genome sequencing. GWAS has been commonly used to identify SNPs associated with diseases or clinical phenotypes or traits. Since a GWAS is a genome-wide assessment, a large sample size is required to obtain sufficient statistical power to detect all possible associations. Some SNPs have a relatively small effect on diseases or clinical phenotypes or traits. To estimate study power, the genetic model for the disease needs to be considered, such as dominant, recessive, or additive effects. Due to genetic heterogeneity, GWAS analysis must be adjusted for race. Candidate gene association studies were commonly used in genetic research before the advent of high-throughput genotyping and sequencing technologies. [ 39 ] A candidate gene association study investigates a limited number of pre-specified SNPs for association with diseases or clinical phenotypes or traits, and is thus a hypothesis-driven approach. Since only a limited number of SNPs are tested, a relatively small sample size is sufficient to detect an association. The candidate gene association approach is also commonly used to confirm findings from GWAS in independent samples. Genome-wide SNP data can be used for homozygosity mapping. [ 40 ] Homozygosity mapping is a method used to identify homozygous autosomal recessive loci, which can be a powerful tool to map genomic regions or genes that are involved in disease pathogenesis.
Recently, preliminary results reported SNPs as important components of the epigenetic program in organisms. [ 41 ] [ 42 ] Moreover, cosmopolitan studies in European and South Asian populations have revealed the influence of SNPs on the methylation of specific CpG sites. [ 43 ] In addition, meQTL enrichment analysis using GWAS databases demonstrated that those associations are important for the prediction of biological traits. [ 43 ] [ 44 ] [ 45 ] SNPs have historically been used to match a forensic DNA sample to a suspect, but this use has largely been displaced by advancing STR -based DNA fingerprinting techniques. However, the development of next-generation sequencing (NGS) technology may allow more opportunities for the use of SNPs to provide phenotypic clues such as ethnicity , hair color , and eye color with a good probability of a match. This can additionally be applied to increase the accuracy of facial reconstructions by providing information that may otherwise be unknown, and this information can be used to help identify suspects even without an STR DNA profile match. Some cons of using SNPs versus STRs are that SNPs yield less information than STRs, so more SNPs are needed for analysis before a profile of a suspect can be created. Additionally, SNPs rely heavily on the presence of a database for comparative analysis of samples. However, in instances with degraded or small-volume samples, SNP techniques are an excellent alternative to STR methods. SNPs (as opposed to STRs) have an abundance of potential markers, can be fully automated, and allow a possible reduction of the required fragment length to less than 100 bp. [ 26 ] Pharmacogenetics focuses on identifying genetic variations, including SNPs, associated with differential responses to treatment. [ 46 ] Many drug-metabolizing enzymes, drug targets, or target pathways can be influenced by SNPs.
SNPs involved in drug-metabolizing enzyme activities can change drug pharmacokinetics, while SNPs involved in a drug target or its pathway can change drug pharmacodynamics . Therefore, SNPs are potential genetic markers that can be used to predict drug exposure or the effectiveness of treatment. Genome-wide pharmacogenetic study is called pharmacogenomics . Pharmacogenetics and pharmacogenomics are important in the development of precision medicine, especially for life-threatening diseases such as cancers. Only a small fraction of the SNPs in the human genome may have an impact on human disease. Large-scale GWAS have been done for the most important human diseases, including heart diseases , metabolic diseases , autoimmune diseases , and neurodegenerative and psychiatric disorders . [ 38 ] Most of the SNPs with relatively large effects on these diseases have been identified. These findings have significantly improved understanding of disease pathogenesis and molecular pathways, and facilitated development of better treatments. Further GWAS with larger sample sizes will reveal the SNPs with relatively small effects on diseases. For common and complex diseases, such as type-2 diabetes , rheumatoid arthritis , and Alzheimer's disease , multiple genetic factors are involved in disease etiology. In addition, gene-gene interactions and gene-environment interactions also play an important role in disease initiation and progression. [ 47 ] As for genes, bioinformatics databases exist for SNPs. The International SNP Map working group mapped the sequence flanking each SNP by alignment to the genomic sequence of large-insert clones in GenBank. These alignments were converted to chromosomal coordinates, as shown in Table 1. [ 60 ] This list has greatly increased since, with, for instance, the Kaviar database now listing 162 million single-nucleotide variants (SNVs).
The nomenclature for SNPs includes several variant forms for an individual SNP, with no common consensus. The rs### standard is the one adopted by dbSNP: it uses the prefix "rs", for "reference SNP", followed by a unique and arbitrary number. [ 61 ] SNPs are frequently referred to by their dbSNP rs number, as in the examples above. The Human Genome Variation Society (HGVS) uses a standard that conveys more information about the SNP. SNPs can be easily assayed because they have only two possible alleles and three possible genotypes involving those alleles (homozygous A, homozygous B and heterozygous AB), which permits many possible techniques of analysis. Some include: DNA sequencing ; capillary electrophoresis ; mass spectrometry ; single-strand conformation polymorphism (SSCP); single-base extension ; electrochemical analysis; denaturing HPLC and gel electrophoresis ; restriction fragment length polymorphism ; and hybridization analysis. An important group of SNPs are those that correspond to missense mutations causing an amino acid change at the protein level. A point mutation at a particular residue can have effects on protein function ranging from none to complete disruption. Usually, a change to an amino acid of similar size and physico-chemical properties (e.g. a substitution from leucine to valine) has a mild effect, and vice versa. Similarly, if a SNP disrupts secondary-structure elements (e.g. a substitution to proline in an alpha-helix region), the mutation may affect the structure and function of the whole protein. Using these simple rules, and many others derived by machine learning, a group of programs for the prediction of SNP effects has been developed: [ 66 ]
https://en.wikipedia.org/wiki/Single-nucleotide_polymorphism
The single-particle spectrum is a distribution of a physical quantity, such as energy or momentum , measured for individual particles. The study of particle spectra allows us to see the global picture of particle production.
https://en.wikipedia.org/wiki/Single-particle_spectrum
Single-particle tracking ( SPT ) is the observation of the motion of individual particles within a medium. The coordinate time series, which can be either in two dimensions ( x , y ) or in three dimensions ( x , y , z ), is referred to as a trajectory . The trajectory is typically analyzed using statistical methods to extract information about the underlying dynamics of the particle. [ 1 ] [ 2 ] [ 3 ] These dynamics can reveal information about the type of transport being observed (e.g., thermal or active), the medium where the particle is moving, and interactions with other particles. In the case of random motion, trajectory analysis can be used to measure the diffusion coefficient . In the life sciences, single-particle tracking is broadly used to quantify the dynamics of molecules/proteins in live cells (of bacteria, yeast, mammalian cells and live Drosophila embryos). [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] It has been extensively used to study transcription factor dynamics in live cells. [ 9 ] [ 10 ] [ 11 ] This method has been extensively used in the last decade to understand the target-search mechanism of proteins in live cells. It addresses fundamental biological questions such as how a protein of interest finds its target in the complex cellular environment, how long it takes to find its target binding site, and what the residence time of DNA-bound proteins is. [ 5 ] Recently, SPT has been used to study the kinetics of protein translation and processing in vivo. For molecules which bind large structures such as ribosomes, SPT can be used to extract information about the binding kinetics. As ribosome binding increases the effective size of the smaller molecule, the diffusion rate decreases upon binding. By monitoring these changes in diffusion behavior, direct measurements of binding events are obtained.
[ 12 ] [ 13 ] Furthermore, exogenous particles are employed as probes to assess the mechanical properties of the medium, a technique known as passive microrheology . [ 14 ] This technique has been applied to investigate the motion of lipids and proteins within membranes, [ 15 ] [ 16 ] molecules in the nucleus [ 8 ] and cytoplasm, [ 17 ] organelles and molecules therein, [ 18 ] lipid granules, [ 19 ] [ 20 ] [ 21 ] vesicles, and particles introduced into the cytoplasm or the nucleus. Additionally, single-particle tracking has been extensively used in the study of reconstituted lipid bilayers, [ 22 ] intermittent diffusion between 3D and either 2D (e.g., a membrane) [ 23 ] or 1D (e.g., a DNA polymer) phases, and synthetic entangled actin networks. [ 24 ] [ 25 ] The most common types of particles used in single-particle tracking are based either on scatterers , such as polystyrene beads or gold nanoparticles that can be tracked using bright-field illumination, or on fluorescent particles. For fluorescent tags , there are many different options with their own advantages and disadvantages, including quantum dots , fluorescent proteins , organic fluorophores , and cyanine dyes. On a fundamental level, once the images are obtained, single-particle tracking is a two-step process: first the particles are detected, and then the localized particles are linked across frames to obtain individual trajectories. Besides performing particle tracking in 2D, there are several imaging modalities for 3D particle tracking, including multifocal plane microscopy , [ 26 ] double-helix point spread function microscopy, [ 27 ] and introducing astigmatism via a cylindrical lens or adaptive optics.
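The two-step detect-then-link procedure can be sketched with a minimal greedy nearest-neighbour linker. This is an illustrative toy, not any specific published tracking algorithm; the function name, the per-frame detection format and the `max_dist` gating threshold are all assumptions for the example:

```python
import math

def link_trajectories(frames, max_dist=1.0):
    """Greedy nearest-neighbour frame-to-frame linking.

    `frames` is a list of per-frame detection lists [(x, y), ...].
    Returns a list of trajectories, each a list of (frame, x, y).
    """
    finished = []
    active = []  # trajectories still being extended
    for t, detections in enumerate(frames):
        unmatched = list(detections)
        next_active = []
        for traj in active:
            _, px, py = traj[-1]
            # find the closest unmatched detection within the gating distance
            best, best_d = None, max_dist
            for d in unmatched:
                dist = math.hypot(d[0] - px, d[1] - py)
                if dist <= best_d:
                    best, best_d = d, dist
            if best is not None:
                unmatched.remove(best)
                traj.append((t, best[0], best[1]))
                next_active.append(traj)
            else:
                finished.append(traj)  # trajectory ends here
        for d in unmatched:  # unmatched detections start new trajectories
            next_active.append([(t, d[0], d[1])])
        active = next_active
    return finished + active

# Two well-separated particles drifting slowly yield two trajectories:
frames = [[(0.0, 0.0), (5.0, 5.0)],
          [(0.1, 0.1), (5.1, 5.1)],
          [(0.2, 0.2), (5.2, 5.2)]]
trajs = link_trajectories(frames)
```

Real trackers replace the greedy assignment with global optimization and motion models, but the detect-then-link structure is the same.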
https://en.wikipedia.org/wiki/Single-particle_tracking
Single-particle trajectories ( SPTs ) consist of a collection of successive discrete points causally ordered in time. These trajectories are acquired from images in experimental data. In the context of cell biology, the trajectories are obtained by the transient activation by a laser of small dyes attached to a moving molecule. Molecules can now be visualized using recent super-resolution microscopy , which allows routine collection of thousands of short and long trajectories. [ 1 ] These trajectories explore part of a cell, either on the membrane or in three dimensions, and their paths are critically influenced by the locally crowded organization and molecular interactions inside the cell, [ 2 ] as emphasized in various cell types such as neuronal cells, [ 3 ] astrocytes , immune cells and many others. SPT allows the observation of moving particles. These trajectories are used to investigate cytoplasm or membrane organization, [ 4 ] but also cell nucleus dynamics, remodeler dynamics and mRNA production. Due to the constant improvement of the instrumentation, the spatial resolution is continuously decreasing, now reaching values of approximately 20 nm, while the acquisition time step is usually in the range of 10 to 50 ms, to capture short events occurring in live tissues. A variant of super-resolution microscopy called sptPALM is used to detect the local and dynamically changing organization of molecules in cells, or events of DNA binding by transcription factors in the mammalian nucleus. Super-resolution image acquisition and particle tracking are crucial to guarantee high-quality data. [ 5 ] [ 6 ] [ 7 ] Once points are acquired, the next step is to reconstruct a trajectory. This step is done using known tracking algorithms to connect the acquired points. [ 8 ] Tracking algorithms are based on a physical model of trajectories perturbed by an additive random noise.
The redundancy of many short trajectories (SPTs) is a key feature for extracting biophysical parameters from empirical data at a molecular level. [ 9 ] In contrast, long isolated trajectories have been used to extract information along trajectories, destroying the natural spatial heterogeneity associated with the various positions. The main statistical tool is the mean-square displacement (MSD), or second-order statistical moment : for a Brownian motion, ⟨ | X ( t + Δ t ) − X ( t ) | 2 ⟩ = 2 n D Δ t {\displaystyle \langle |X(t+\Delta t)-X(t)|^{2}\rangle =2nD\,\Delta t} , where D is the diffusion coefficient and n is the dimension of the space. Some other properties can also be recovered from long trajectories, such as the radius of confinement for a confined motion. [ 12 ] The MSD has been widely used in early applications of long, but not necessarily redundant, single-particle trajectories in a biological context. However, the MSD applied to long trajectories suffers from several issues. First, it is not precise, in part because the measured points can be correlated. Second, it cannot be used to compute any physical diffusion coefficient when trajectories consist of switching episodes, for example alternating between free and confined diffusion. At low spatiotemporal resolution of the observed trajectories, the MSD behaves sublinearly with time, a process known as anomalous diffusion, which is due in part to the averaging over the different phases of the particle motion. In the context of (amoeboid) cellular transport, high-resolution motion analysis of long SPTs [ 13 ] in micro-fluidic chambers containing obstacles revealed different types of cell motion depending on the obstacle density: crawling was found at low obstacle density, and directed motion and random phases can even be differentiated.
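The time-averaged MSD and the relation MSD = 2nDΔt can be sketched directly. The simulation parameters (D = 0.5, unit time step, 20000 steps) are illustrative assumptions, not values from the article:

```python
import random

def msd(traj, max_lag):
    """Time-averaged mean-square displacement of one trajectory.

    `traj` is a list of position tuples; returns the MSD for lags 1..max_lag.
    """
    out = []
    for lag in range(1, max_lag + 1):
        disp2 = [
            sum((a - b) ** 2 for a, b in zip(traj[i + lag], traj[i]))
            for i in range(len(traj) - lag)
        ]
        out.append(sum(disp2) / len(disp2))
    return out

# Simulate 2D Brownian motion with D = 0.5 and dt = 1: each coordinate
# step is Gaussian with variance 2*D*dt.
random.seed(0)
D, dt, n = 0.5, 1.0, 20000
x = y = 0.0
traj = [(0.0, 0.0)]
for _ in range(n):
    x += random.gauss(0.0, (2 * D * dt) ** 0.5)
    y += random.gauss(0.0, (2 * D * dt) ** 0.5)
    traj.append((x, y))

curve = msd(traj, 4)
# Invert MSD = 2 * n_dim * D * Δt at lag 1 (n_dim = 2 here):
D_est = curve[0] / (2 * 2 * dt)
```

For this trajectory `D_est` comes back close to the true value 0.5, illustrating how the diffusion coefficient is read off the MSD for free diffusion.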
Statistical methods to extract information from SPTs are based on stochastic models, such as the Langevin equation or its Smoluchowski limit, and on associated models that account for additional localization noise or a memory kernel. [ 14 ] The Langevin equation describes a stochastic particle driven by a Brownian force Ξ {\displaystyle \Xi } and a field of force (e.g., electrostatic, mechanical, etc.) with an expression F ( x , t ) {\displaystyle F(x,t)} : where m is the mass of the particle, Γ = 6 π a ρ {\displaystyle \Gamma =6\pi a\rho } is the friction coefficient of a diffusing particle and ρ {\displaystyle \rho } the viscosity . Here Ξ {\displaystyle \Xi } is the δ {\displaystyle \delta } -correlated Gaussian white noise . The force can be derived from a potential well U , so that F ( x , t ) = − U ′ ( x ) {\displaystyle F(x,t)=-U'(x)} , and in that case the equation takes the form where ε = k B T {\displaystyle \varepsilon =k_{\text{B}}T} is the thermal energy, k B {\displaystyle k_{\text{B}}} the Boltzmann constant and T the temperature. Langevin's equation is used to describe trajectories where inertia or acceleration matters: for example, at very short timescales, when a molecule unbinds from a binding site or escapes from a potential well, [ 15 ] the inertia term allows the particle to move away from the attractor and thus prevents immediate rebinding that could plague numerical simulations. In the large-friction limit γ → ∞ {\displaystyle \gamma \to \infty } , the trajectories x ( t ) {\displaystyle x(t)} of the Langevin equation converge in probability to those of the Smoluchowski equation, where w ˙ ( t ) {\displaystyle {\dot {w}}(t)} is δ {\displaystyle \delta } -correlated. This equation is obtained when the diffusion coefficient is constant in space. When this is not the case, coarse-grained equations (at a coarse spatial resolution) should be derived from molecular considerations.
The interpretation of the physical forces is not resolved by the Itô vs. Stratonovich integral representations or any other. For a timescale much longer than that of an elementary molecular collision, the position of a tracked particle is described by the more general overdamped limit of the Langevin stochastic model. Indeed, if the acquisition timescale of empirically recorded trajectories is much longer than that of the thermal fluctuations, rapid events are not resolved in the data. Thus, at this coarser spatiotemporal scale, the motion description is replaced by an effective stochastic equation, where b ( X ) {\displaystyle {b}(X)} is the drift field and B e {\displaystyle {B}_{e}} the diffusion matrix. The effective diffusion tensor can vary in space: D ( X ) = 1 2 B ( X ) B T ( X ) {\displaystyle D(X)={\frac {1}{2}}B(X)B^{T}(X)} , where B T {\textstyle B^{T}} denotes the transpose of B {\textstyle B} . This equation is not derived but assumed. However, the diffusion coefficient should be smooth enough, as any discontinuity in D should be resolved at a finer spatial scale to analyse its source (usually inert obstacles or transitions between two media). The observed effective diffusion tensor is not necessarily isotropic and can be state-dependent, whereas the friction coefficient γ {\displaystyle \gamma } remains constant as long as the medium stays the same, and the microscopic diffusion coefficient (or tensor) can remain isotropic. The development of statistical methods is based on stochastic models, possibly combined with a deconvolution procedure applied to the trajectories. Numerical simulations can also be used to identify specific features that can be extracted from single-particle trajectory data. [ 16 ] The goal of building a statistical ensemble from SPT data is to observe local physical properties of the particles, such as velocity, diffusion, confinement or attracting forces reflecting the interactions of the particles with their local nanometer-scale environments.
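The overdamped (Smoluchowski) dynamics can be simulated with a simple Euler-Maruyama scheme. The harmonic potential U(x) = k x²/2 and the parameter values below are illustrative assumptions chosen so that equipartition gives a known stationary variance:

```python
import random

random.seed(1)
# Overdamped Langevin dynamics in a harmonic well U(x) = 0.5*k*x**2,
# integrated with the Euler-Maruyama scheme:
#   x_{n+1} = x_n - (k/gamma)*x_n*dt + sqrt(2*D*dt)*N(0,1)
k, gamma, kBT = 1.0, 1.0, 1.0
D = kBT / gamma          # Einstein relation D = kBT/gamma
dt, n_steps = 0.01, 200000
x, samples = 0.0, []
for step in range(n_steps):
    x += -(k / gamma) * x * dt + random.gauss(0.0, (2 * D * dt) ** 0.5)
    if step > 1000:      # discard the initial transient before sampling
        samples.append(x)
var = sum(s * s for s in samples) / len(samples)
# Equipartition predicts the stationary variance Var(x) = kBT / k = 1 here.
```

The measured `var` settles near kBT/k, which is one standard consistency check when fitting overdamped models to trajectory data.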
It is possible to use stochastic modeling to construct, from the diffusion coefficient (or tensor), the confinement or the local density of obstacles reflecting the presence of biological objects of different sizes. Several empirical estimators have been proposed to recover the local diffusion coefficient, the vector field, and even organized patterns in the drift, such as potential wells. [ 17 ] Empirical estimators that serve to recover physical properties are constructed from parametric and non-parametric statistics. Retrieving the statistical parameters of a diffusion process from one-dimensional time series uses the first-moment estimator or Bayesian inference. The models and the analysis assume that the processes are stationary, so that the statistical properties of trajectories do not change over time. In practice, this assumption is satisfied when trajectories are acquired for less than a minute, during which only a few slow changes may occur, on the surface of a neuron for example. Non-stationary behavior is observed using a time-lapse analysis, with a delay of tens of minutes between successive acquisitions. The coarse-grained model Eq. 1 is recovered from the conditional moments of the trajectory by computing the increments Δ X = X ( t + Δ t ) − X ( t ) {\displaystyle \Delta X=X(t+\Delta t)-X(t)} : here the notation E [ ⋅ | X ( t ) = x ] {\displaystyle E[\cdot \,|\,X(t)=x]} means averaging over all trajectories that are at point x at time t . The coefficients of the Smoluchowski equation can be statistically estimated at each point x from an infinitely large sample of trajectories in the neighborhood of the point x at time t . In practice, the expectations for a and D are estimated by finite sample averages, and Δ t {\displaystyle \Delta t} is the time resolution of the recorded trajectories. The formulas for a and D are approximated at the time step Δ t {\displaystyle \Delta t} , with tens to hundreds of points falling in any bin.
This is usually enough for the estimation. To estimate the local drift and diffusion coefficients, trajectories are first grouped within small neighbourhoods. The field of observation is partitioned into square bins S ( x k , r ) {\displaystyle S(x_{k},r)} of side r and centre x k {\displaystyle x_{k}} , and the local drift and diffusion are estimated for each square. Considering a sample with N t {\displaystyle N_{t}} trajectories { x i ( t 1 ) , … , x i ( t N s ) } , {\displaystyle \{x^{i}(t_{1}),\dots ,x^{i}(t_{N_{s}})\},} where t j {\displaystyle t_{j}} are the sampling times, the discretization of the equation for the drift a ( x k ) = ( a x ( x k ) , a y ( x k ) ) {\displaystyle a(x_{k})=(a_{x}(x_{k}),a_{y}(x_{k}))} at position x k {\displaystyle x_{k}} is given, for each spatial projection on the x and y axes, by the empirical average of the increments of the N k {\displaystyle N_{k}} trajectory points that fall in the square S ( x k , r ) {\displaystyle S(x_{k},r)} , divided by the time step. Similarly, the components of the effective diffusion tensor D ( x k ) {\displaystyle D(x_{k})} are approximated by the empirical sums of products of increments, divided by twice the time step. The moment estimation requires a large number of trajectories passing through each point, which agrees precisely with the massive data generated by certain types of super-resolution experiments, such as those acquired by the sptPALM technique on biological samples. The exact inversion of Langevin's equation demands in theory an infinite number of trajectories passing through any point x of interest. In practice, the recovery of the drift and diffusion tensor is obtained after a region is subdivided by a square grid of radius r or by moving sliding windows (of the order of 50 to 100 nm). Algorithms based on mapping the density of points extracted from trajectories reveal local binding and trafficking interactions and the organization of dynamic subcellular sites. These algorithms can be applied to study regions of high density revealed by SPTs.
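The binned moment estimators can be sketched in one dimension as follows. The bin size, the synthetic pure-diffusion data and all parameter values are illustrative assumptions for the example:

```python
import random
from collections import defaultdict

def estimate_drift_diffusion(trajs, dt, bin_size):
    """Binned moment estimators for 1D drift a(x) and diffusion D(x):
       a(x) ~ mean(dX | X in bin) / dt,  D(x) ~ mean(dX**2 | X in bin) / (2*dt).
    Returns {bin_left_edge: (a, D)}."""
    incs = defaultdict(list)
    for traj in trajs:
        for x0, x1 in zip(traj, traj[1:]):
            incs[int(x0 // bin_size)].append(x1 - x0)
    result = {}
    for b, dx in incs.items():
        a = sum(dx) / (len(dx) * dt)
        D = sum(d * d for d in dx) / (2 * len(dx) * dt)
        result[b * bin_size] = (a, D)
    return result

# Simulate many short pure-diffusion trajectories with D = 0.25 and no drift,
# mimicking the redundancy of sptPALM-like data, then check recovery.
random.seed(2)
dt, D_true = 0.05, 0.25
trajs = []
for _ in range(500):
    x, traj = 0.0, [0.0]
    for _ in range(100):
        x += random.gauss(0.0, (2 * D_true * dt) ** 0.5)
        traj.append(x)
    trajs.append(traj)
est = estimate_drift_diffusion(trajs, dt, bin_size=0.5)
a0, D0 = est[0.0]   # estimates in the bin containing the origin
```

In the bin at the origin the drift estimate is close to zero and the diffusion estimate close to 0.25, as expected for pure diffusion; with real data the same per-bin averages map spatial variations of a(x) and D(x).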
Examples are organelles such as endoplasmic reticulum or cell membranes. The method is based on spatiotemporal segmentation to detect local architecture and boundaries of high-density regions for domains measuring hundreds of nanometers. [ 18 ]
https://en.wikipedia.org/wiki/Single-particle_trajectory
Single-photon emission computed tomography ( SPECT , or less commonly, SPET ) is a nuclear medicine tomographic imaging technique using gamma rays . [ 1 ] It is very similar to conventional nuclear medicine planar imaging using a gamma camera (that is, scintigraphy ), [ 2 ] but is able to provide true 3D information. This information is typically presented as cross-sectional slices through the patient, but can be freely reformatted or manipulated as required. The technique needs delivery of a gamma-emitting radioisotope (a radionuclide ) into the patient, normally through injection into the bloodstream. On occasion, the radioisotope is a simple soluble dissolved ion, such as an isotope of gallium (III). Usually, however, a marker radioisotope is attached to a specific ligand to create a radioligand , whose properties bind it to certain types of tissues. This marriage allows the combination of ligand and radiopharmaceutical to be carried and bound to a place of interest in the body, where the ligand concentration is seen by a gamma camera. Instead of just "taking a picture of anatomical structures", a SPECT scan monitors level of biological activity at each place in the 3-D region analyzed. Emissions from the radionuclide indicate amounts of blood flow in the capillaries of the imaged regions. In the same way that a plain X-ray is a 2-dimensional (2-D) view of a 3-dimensional structure, the image obtained by a gamma camera is a 2-D view of 3-D distribution of a radionuclide . SPECT imaging is performed by using a gamma camera to acquire multiple 2-D images (also called projections ), from multiple angles. A computer is then used to apply a tomographic reconstruction algorithm to the multiple projections, yielding a 3-D data set. 
This data set may then be manipulated to show thin slices along any chosen axis of the body, similar to those obtained from other tomographic techniques, such as magnetic resonance imaging (MRI), X-ray computed tomography (X-ray CT), and positron emission tomography (PET). SPECT is similar to PET in its use of radioactive tracer material and detection of gamma rays. In contrast with PET, the tracers used in SPECT emit gamma radiation that is measured directly, whereas PET tracers emit positrons that annihilate with electrons up to a few millimeters away, causing two gamma photons to be emitted in opposite directions. A PET scanner detects these emissions "coincident" in time, which provides more radiation event localization information and, thus, higher spatial resolution images than SPECT (which has about 1 cm resolution). SPECT scans are significantly less expensive than PET scans, in part because they are able to use longer-lived and more easily obtained radioisotopes than PET. Because SPECT acquisition is very similar to planar gamma camera imaging, the same radiopharmaceuticals may be used. If a patient is examined in another type of nuclear medicine scan, but the images are non-diagnostic, it may be possible to proceed straight to SPECT by moving the patient to a SPECT instrument, or even by simply reconfiguring the camera for SPECT image acquisition while the patient remains on the table. To acquire SPECT images, the gamma camera is rotated around the patient. Projections are acquired at defined points during the rotation, typically every 3–6 degrees. In most cases, a full 360-degree rotation is used to obtain an optimal reconstruction. The time taken to obtain each projection is also variable, but 15–20 seconds is typical. This gives a total scan time of 15–20 minutes. Multi-headed gamma cameras can accelerate acquisition. 
For example, a dual-headed camera can be used with heads spaced 180 degrees apart, allowing two projections to be acquired simultaneously, with each head requiring 180 degrees of rotation. Triple-head cameras with 120-degree spacing are also used. Cardiac gated acquisitions are possible with SPECT, just as with planar imaging techniques such as multi gated acquisition scan (MUGA). Triggered by electrocardiogram (EKG) to obtain differential information about the heart in various parts of its cycle, gated myocardial SPECT can be used to obtain quantitative information about myocardial perfusion, thickness, and contractility of the myocardium during various parts of the cardiac cycle, and also to allow calculation of left ventricular ejection fraction , stroke volume, and cardiac output. SPECT can be used to complement any gamma imaging study, where a true 3D representation can be helpful, such as tumor imaging, infection ( leukocyte ) imaging, thyroid imaging or bone scintigraphy . Because SPECT permits accurate localisation in 3D space, it can be used to provide information about localised function in internal organs, such as functional cardiac or brain imaging. Myocardial perfusion imaging (MPI) is a form of functional cardiac imaging, used for the diagnosis of ischemic heart disease . The underlying principle is that under conditions of stress, diseased myocardium receives less blood flow than normal myocardium. MPI is one of several types of cardiac stress test . A cardiac specific radiopharmaceutical is administered, e.g., 99m Tc- tetrofosmin (Myoview, GE healthcare), 99m Tc-sestamibi (Cardiolite, Bristol-Myers Squibb) or Thallium-201 chloride. Following this, the heart rate is raised to induce myocardial stress, either by exercise on a treadmill or pharmacologically with adenosine , dobutamine , or dipyridamole ( aminophylline can be used to reverse the effects of dipyridamole). 
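The acquisition arithmetic above (projection count multiplied by time per projection, divided among the camera heads) can be sketched with a small helper. The function name and the assumption that heads split the arc evenly are illustrative, not a clinical protocol:

```python
def scan_time_minutes(step_deg, seconds_per_projection, heads=1, total_arc=360):
    """Total acquisition time when each head covers total_arc/heads degrees
    of rotation, stopping every step_deg degrees for one projection."""
    projections_per_head = (total_arc / heads) / step_deg
    return projections_per_head * seconds_per_projection / 60.0

# Single head, 3-degree steps, 15 s per projection: 120 stops, 30 minutes.
single = scan_time_minutes(3, 15)
# Dual head (opposed at 180 degrees): each head rotates only 180 degrees,
# halving the scan time to 15 minutes.
dual = scan_time_minutes(3, 15, heads=2)
```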
SPECT imaging performed after stress reveals the distribution of the radiopharmaceutical, and therefore the relative blood flow to the different regions of the myocardium. Diagnosis is made by comparing stress images to a further set of images obtained at rest which are normally acquired prior to the stress images. MPI has been demonstrated to have an overall accuracy of about 83% ( sensitivity : 85%; specificity : 72%) (in a review, not exclusively of SPECT MPI), [ 3 ] and is comparable with (or better than) other non-invasive tests for ischemic heart disease. Usually, the gamma-emitting tracer used in functional brain imaging is Technetium (99mTc) exametazime . 99m Tc is a metastable nuclear isomer that emits gamma rays detectable by a gamma camera. Attaching it to exametazime allows it to be taken up by brain tissue in a manner proportional to brain blood flow, in turn allowing cerebral blood flow to be assessed with the nuclear gamma camera. Because blood flow in the brain is tightly coupled to local brain metabolism and energy use, the 99m Tc-exametazime tracer (as well as the similar 99m Tc-EC tracer) is used to assess brain metabolism regionally, in an attempt to diagnose and differentiate the different causal pathologies of dementia . Meta-analysis of many reported studies suggests that SPECT with this tracer is about 74% sensitive at diagnosing Alzheimer's disease vs. 81% sensitivity for clinical exam ( cognitive testing , etc.). More recent studies have shown the accuracy of SPECT in Alzheimer's diagnosis may be as high as 88%. [ 4 ] In meta analysis, SPECT was superior to clinical exam and clinical criteria (91% vs. 70%) in being able to differentiate Alzheimer's disease from vascular dementias. 
[ 5 ] This latter ability relates to SPECT's imaging of local metabolism of the brain, in which the patchy loss of cortical metabolism seen in multiple strokes differs clearly from the more even or "smooth" loss of non-occipital cortical brain function typical of Alzheimer's disease. Another recent review article showed that multi-headed SPECT cameras with quantitative analysis result in an overall sensitivity of 84-89% and an overall specificity of 83-89% in cross-sectional studies, and a sensitivity of 82-96% and specificity of 83-89% for longitudinal studies of dementia. [ 6 ] 99m Tc-exametazime SPECT scanning competes with fludeoxyglucose (FDG) PET scanning of the brain, which assesses regional brain glucose metabolism, to provide very similar information about local brain damage from many processes. SPECT is more widely available, because its radioisotope is longer-lived and far less expensive, and the gamma scanning equipment is less expensive as well. While 99m Tc is extracted from relatively simple technetium-99m generators , which are delivered to hospitals and scanning centers weekly to supply fresh radioisotope, FDG PET relies on FDG, which is made in an expensive medical cyclotron and "hot-lab" (automated chemistry lab for radiopharmaceutical manufacture), and then delivered immediately to scanning sites because of the naturally short 110-minute half-life of Fluorine-18 . In the nuclear power sector, the SPECT technique can be applied to image radioisotope distributions in irradiated nuclear fuels. [ 7 ] Due to the irradiation of nuclear fuel (e.g. uranium) with neutrons in a nuclear reactor, a wide array of gamma-emitting radionuclides are naturally produced in the fuel, such as fission products ( cesium-137 , barium-140 and europium-154 ) and activation products ( chromium-51 and cobalt-58 ).
These may be imaged using SPECT in order to verify the presence of fuel rods in a stored fuel assembly for IAEA safeguards purposes, [ 8 ] to validate predictions of core simulation codes, [ 9 ] or to study the behavior of nuclear fuel in normal operation [ 10 ] or in accident scenarios. [ 11 ] Reconstructed images typically have resolutions of 64×64 or 128×128 pixels, with pixel sizes ranging from 3–6 mm. The number of projections acquired is chosen to be approximately equal to the width of the resulting images. In general, the resulting reconstructed images will be of lower resolution, have more noise than planar images, and be susceptible to artifacts . Scanning is time-consuming, and it is essential that there is no patient movement during the scan time. Movement can cause significant degradation of the reconstructed images, although movement-compensation reconstruction techniques can help with this. A highly uneven distribution of radiopharmaceutical also has the potential to cause artifacts. A very intense area of activity (e.g., the bladder) can cause extensive streaking of the images and obscure neighboring areas of activity. This is a limitation of the filtered back projection reconstruction algorithm. Iterative reconstruction is an alternative algorithm that is growing in importance, as it is less sensitive to artifacts and can also correct for attenuation and depth-dependent blurring. Furthermore, iterative algorithms can be made more efficacious using the superiorization methodology. [ 12 ] Attenuation of the gamma rays within the patient can lead to significant underestimation of activity in deep tissues, compared to superficial tissues. Approximate correction is possible, based on the relative position of the activity, and optimal correction is obtained with measured attenuation values. Modern SPECT equipment is available with an integrated X-ray CT scanner.
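The exponential attenuation that motivates these corrections follows the Beer-Lambert law, I = I0·exp(-mu·d). A first-order correction factor can be sketched as follows; the value mu = 0.15 cm⁻¹ (roughly the linear attenuation coefficient of soft tissue for 140 keV photons) is an illustrative assumption, not a figure from the article:

```python
import math

def attenuation_correction(depth_cm, mu_per_cm=0.15):
    """First-order attenuation correction factor for activity at a given
    depth: measured counts are multiplied by exp(mu * depth) to undo the
    exponential attenuation I = I0 * exp(-mu * d) along the path to the
    camera. mu is assumed constant (uniform medium)."""
    return math.exp(mu_per_cm * depth_cm)

# A source 10 cm deep is attenuated to roughly 22% of its true intensity,
# so its measured counts must be scaled up by a factor of about 4.5.
factor = attenuation_correction(10.0)
```

Measured attenuation maps (e.g. from the integrated CT) replace the single assumed mu with voxel-wise values, which is what "optimal correction" above refers to.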
As X-ray CT images are an attenuation map of the tissues, this data can be incorporated into the SPECT reconstruction to correct for attenuation. It also provides a precisely registered CT image, which can provide additional anatomical information. Scatter of the gamma rays, as well as the random nature of gamma-ray emission, can also degrade the quality of SPECT images and cause loss of resolution. Scatter correction and resolution recovery are also applied to improve the resolution of SPECT images. [ 13 ] In some cases a SPECT gamma scanner may be built to operate with a conventional CT scanner , with coregistration of images. As in PET/CT , this allows location of tumors or tissues which may be seen on SPECT scintigraphy, but are difficult to locate precisely with regard to other anatomical structures. Such scans are most useful for tissues outside the brain, where the location of tissues may be far more variable. For example, SPECT/CT may be used in sestamibi parathyroid scan applications, where the technique is useful in locating ectopic parathyroid adenomas which may not be in their usual locations in the thyroid gland. [ 14 ] The overall performance of SPECT systems can be evaluated with quality-control tools such as the Jaszczak phantom . [ 15 ]
https://en.wikipedia.org/wiki/Single-photon_emission_computed_tomography
Single-scattering albedo is the ratio of scattering efficiency to total extinction efficiency (which is also termed "attenuance", the sum of scattering and absorption). It is most often defined for small-particle scattering of electromagnetic waves . Single-scattering albedo is unitless; a value of unity implies that all particle extinction is due to scattering, while a single-scattering albedo of zero implies that all extinction is due to absorption. For spherical particles, one can calculate the single-scattering albedo from Mie theory and knowledge of bulk properties of the material such as the refractive index . For non-spherical particles one can use the discrete dipole approximation or other methods of computational electromagnetics . The albedo of particles with shapes that are easily parameterized in non-standard coordinate systems may be determined through solutions of Maxwell's-equation analogs in such coordinate systems. Scattering-albedo equations have yet to be determined for elliptical, toroidal, conical, and many other geometries; the derivation and solution of such equations is a field of ongoing research.
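The defining ratio can be expressed directly; the efficiency values in the examples are illustrative, not taken from any measured particle:

```python
def single_scattering_albedo(q_scattering, q_absorption):
    """Single-scattering albedo: scattering efficiency divided by the total
    extinction efficiency, where extinction = scattering + absorption."""
    q_extinction = q_scattering + q_absorption
    return q_scattering / q_extinction

# A purely scattering (non-absorbing) particle has albedo 1:
albedo_bright = single_scattering_albedo(2.0, 0.0)
# Equal scattering and absorption efficiencies give albedo 0.5:
albedo_gray = single_scattering_albedo(1.2, 1.2)
```

In practice the two efficiencies would come from Mie theory (spheres) or the discrete dipole approximation (arbitrary shapes), as described above.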
https://en.wikipedia.org/wiki/Single-scattering_albedo
Single-strand conformation polymorphism ( SSCP ), or single-strand chain polymorphism, is defined as a conformational difference of single-stranded nucleotide sequences of identical length induced by differences in the sequences under certain experimental conditions. This property allows sequences to be distinguished by means of gel electrophoresis , which separates fragments according to their different conformations. [ 1 ] A single nucleotide change in a particular sequence, as seen in double-stranded DNA, cannot be distinguished by gel electrophoresis techniques, because the physical properties of the double strands are almost identical for both alleles. After denaturation, single-stranded DNA undergoes a characteristic 3-dimensional folding and may assume a unique conformational state based on its DNA sequence. The difference in shape between two single-stranded DNA strands with different sequences can cause them to migrate differently through an electrophoresis gel, even though the number of nucleotides is the same; this is the property that SSCP exploits. SSCP used to be a way to discover new DNA polymorphisms apart from DNA sequencing, but it is now being supplanted by sequencing techniques on account of their efficiency and accuracy. [ 2 ] These days, SSCP is most applicable as a diagnostic tool in molecular biology. It can be used in genotyping to detect homozygous individuals of different allelic states, as well as heterozygous individuals, each of which shows a distinct pattern in an electrophoresis experiment. [ 3 ] SSCP is also widely used in virology to detect variations in different strains of a virus, the idea being that a particular virus particle present in both strains will have undergone changes due to mutation, and that these changes will cause the two particles to assume different conformations and, thus, be differentiable on an SSCP gel. [ 4 ]
https://en.wikipedia.org/wiki/Single-strand_conformation_polymorphism
A single-use bioreactor or disposable bioreactor is a bioreactor with a disposable bag instead of a culture vessel. Typically, this refers to a bioreactor in which the lining in contact with the cell culture is plastic, and this lining is encased within a more permanent structure (typically either a rocker or a cuboid or cylindrical steel support). Commercial single-use bioreactors have been available since the end of the 1990s [citation needed] and are now made by several well-known producers (see below). Single-use bioreactors are widely used in the field of mammalian cell culture and are now rapidly replacing conventional bioreactors. Instead of a culture vessel made from stainless steel or glass, a single-use bioreactor is equipped with a disposable bag, usually made of a three-layer plastic foil. One layer is made from polyethylene terephthalate or LDPE to provide mechanical stability; a second layer made using PVA or PVC acts as a gas barrier; finally, a contact layer is made from PVA or PP. [1] For medical applications the single-use materials that contact the product must be certified by the European Medicines Agency or the authorities responsible for other regions. In general there are two different approaches to constructing single-use bioreactors, differing in the means used to agitate the culture medium. Some single-use bioreactors use stirrers like conventional bioreactors, but with stirrers that are integrated into the plastic bag. The closed bag and the stirrer are pre-sterilized; in use, the bag is mounted in the bioreactor and the stirrer is connected to a drive mechanically or magnetically. Other single-use bioreactors are agitated by a rocking motion. This type of bioreactor does not need any mechanical agitators inside the single-use bag. [2] [3] Both the stirred and the rocking-motion single-use bioreactors are used up to a scale of 1000 litres.
Several variations on these two methods exist. The Kuhner shaker [4] was originally designed for media preparation, but is also useful for cell cultivation. The PBS Biotech Air Wheel technology uses buoyancy from the air feed to provide rotational power to a stirrer. Measurement and control of a cell culture process using a single-use bioreactor is challenging, as the bag in which the cultivation will be performed is a closed and pre-sterilized system. Sensors for measuring temperature, conductivity, glucose, oxygen, or pressure must be built into the bag during manufacturing, prior to sterilization; they cannot be installed just before use of the bioreactor, as in the conventional case. Consequently, some challenges must be taken into consideration. The bag is assembled, delivered and stored dry, with the consequence that the usual pH electrodes cannot be used, and calibration or additional assembly is not possible. These constraints have led to the development of preconfigured bags with new types of analytical probes. The pH value can be measured using a patch that is just a few millimeters in size. This patch consists of a protecting membrane with a pH-sensitive dye behind it: a change in the pH of the culture medium changes the color of the dye, and the color change can be detected with a laser external to the bag. This and other methods of non-invasive measurement have been developed for single-use bioreactors. Decreasing product contact with parts/systems decreases qualification and validation times when changing from one drug process to another. [5] Since the biopharmaceutical manufacturing process includes many steps other than just the use of bioreactors, single-use technologies are utilized throughout the manufacturing process because of their advantages. Single-use bioprocessing (SUS) [note 1] steps available are: media and buffer preparation, cell harvesting, filtration, purification and virus inactivation.
The major innovation of single-use technologies in this area of processing has been in the construction of 2D/3D bags and tubing welding, reducing the contact of product with non-single-use parts/systems. [6] Compared with conventional bioreactor systems, the single-use solution has some advantages. Application of single-use technologies reduces cleaning and sterilization demands. Some estimates show cost savings of more than 60% with single-use systems compared to fixed-asset stainless steel bioreactors. In pharmaceutical production, complex qualification and validation procedures can be made easier, finally leading to significant cost reductions. [5] The application of single-use bioreactors reduces the risk of cross contamination and enhances biological and process safety. Single-use applications are especially suitable for any kind of biopharmaceutical product. A major reason single-use bioprocessing (SUS) [note 1] is popular with pharmaceutical companies and contract manufacturing organizations (CMOs) is that a process area/facility can quickly change from one process (drug product) to another. This is due to, as stated previously, reduced qualification and validation procedures. It increases productivity and reduces costs, since fewer resources and less time are required for changing from one process to another. Since drugs in the clinical and R&D stage (pre-commercialized drugs) are not needed on the same scale as most commercial drugs, they are often produced in single-use suites so the same area/facility can quickly switch from one drug to another. Often, when a drug becomes commercialized, the advantages of SUSs decrease, since one area/facility can be dedicated to one product, essentially eliminating the need for the flexibility that is the major advantage of SUSs. [6] It is estimated that ≥85% of pre-commercial drug product production utilizes single-use systems-based manufacturing.
[7] Stainless steel reusable systems become more advantageous as the demand for the drug product and the batch size increase, often a result of the commercialization of a drug. This is not always the case, as commercialized drugs can be found being produced in single-use suites/facilities. SUSs contain fewer parts compared with conventional biopharmaceutical manufacturing systems, so the initial and maintenance costs are reduced. [6] A limiting factor for the use of some single-use bioreactors is the achievable oxygen transfer, represented by the specific mass transfer coefficient (k L ) for the specific phase area (a), resulting in the volumetric oxygen mass transfer coefficient (k L a). Theoretically this can be influenced by a higher energy input (increasing the stirrer speed or the rocking frequency). However, since single-use bioreactors are mainly used for cell culturing, the energy input is limited by the delicate nature of cells: higher energy input leads to higher shear forces, causing a risk of cell damage. [8] Single-use bioreactors are currently available with volumes up to about 1000 L, so scale-up is limited compared to conventional bioreactors. However, a handful of suppliers are now delivering units at the 2,000 litre scale, and some suppliers (Sartorius, Xcellerex, Thermo Scientific HyClone and PBS Biotech) are providing a family of single-use bioreactors from bench-top to full-scale production. Three challenges exist for faster and greater single-use bioreactor adoption: 1) higher quality and lower cost disposable bags and containers, 2) more reusable and disposable sensors and probes that can provide high-quality analytics, including real-time cell culture level data points, and 3) a family of bioreactors from lab to production that supports full scale-up of the bioprocess.
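The oxygen-transfer limitation above is conventionally expressed through the relation OTR = kLa · (C* − C); a minimal sketch with illustrative values (the numbers are assumptions for the example, not data from the article):

```python
def oxygen_transfer_rate(k_l_a: float, c_star: float, c_liquid: float) -> float:
    """OTR = kLa * (C* - C): the volumetric oxygen mass transfer
    coefficient kLa (1/h) times the driving force between the oxygen
    saturation concentration C* and the dissolved concentration C
    (both mmol/L). Returns mmol O2 per litre per hour."""
    return k_l_a * (c_star - c_liquid)

# Illustrative values only: kLa = 10 1/h, saturation 0.2 mmol/L,
# dissolved oxygen 0.05 mmol/L.
otr = oxygen_transfer_rate(10.0, 0.2, 0.05)
print(otr)  # 1.5 mmol O2 / (L * h)
```

Raising kLa by stirring or rocking harder raises the transfer rate linearly in this model, which is why shear-sensitive cell cultures cap the achievable oxygen supply.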
Suppliers are working to improve plastic bag materials and performance, and also to develop a broader range of sensors and probes that give scientists greater insight into cell density, quality and other metrics needed to improve yields and product efficacy. New perfusion devices are also becoming popular for certain cell culture applications. Environmental aspects of single-use bioreactors are important to consider because of the amount of disposable material used compared with conventional bioreactors. A complete life cycle assessment comparing single-use bioreactors and conventional bioreactors does not exist, but many ecological arguments support the concept of single-use bioreactors. For a complete life cycle assessment, not only the manufacturing but also the repeated use needs to be considered. Indeed, the main part of a single-use bioreactor is not disposable, but is continuously reused. The plastic bag that is used instead of a culture vessel is a disposable, as are all the integrated sub-assemblies such as sensors, tubing, and stirrers. The bag and all its parts are mainly made from plastics derived from petroleum. Current recycling concepts are mainly focused on incineration, recovering the energy originating from the petroleum as heat and electricity. Most of that petroleum would be burned anyway in power plants or automobiles (citation required); burning the single-use components of bioreactors merely adds a detour through biochemical engineering to its life cycle, which has little overall influence. Making conventional culture vessels from stainless steel or glass requires more energy than making plastic bags. With conventional bioreactors, the culture vessel needs to be cleaned and sterilized after each fermentation. Cleaning requires large amounts of water, in addition to acids, alkalis and detergents.
Sterilization with steam at 121 °C and 1 bar pressure requires large quantities of energy and large amounts of distilled water. This distilled water (often called "water for injection" in pharmaceutical nomenclature) must itself be prepared at a large energy cost. A comparison of the life cycle assessments of conventional and single-use bioreactors thus looks even more favorable for single-use bioreactors than expected. According to a report by A. Sinclair et al., [9] single-use bioreactors save 30% of the electrical energy for operation, 62% of the energy input for the production of the system, 87% of the water, and 95% of the detergents, all compared to conventional bioreactors.
https://en.wikipedia.org/wiki/Single-use_bioreactor
Single-wavelength anomalous diffraction (SAD) is a technique used in X-ray crystallography that facilitates the determination of the structure of proteins or other biological macromolecules by allowing the solution of the phase problem . In contrast to multi-wavelength anomalous diffraction (MAD), SAD uses a single dataset at a single appropriate wavelength. Compared to MAD, SAD has weaker phasing power and requires density modification to resolve phase ambiguity. [ 1 ] This downside is not as important as SAD's main advantage: the minimization of time spent in the beam by the crystal, thus reducing potential radiation damage to the molecule while collecting data. SAD also allows a wider choice of heavy atoms and can be conducted without a synchrotron beamline. [ 1 ] Today, selenium-SAD is commonly used for experimental phasing due to the development of methods for selenomethionine incorporation into recombinant proteins. SAD is sometimes called "single-wavelength anomalous dispersion" , but no dispersive differences are used in this technique since the data are collected at a single wavelength.
https://en.wikipedia.org/wiki/Single-wavelength_anomalous_diffraction
In chemistry, a single bond is a chemical bond between two atoms involving two valence electrons. That is, the atoms share one pair of electrons where the bond forms. [1] A single bond is therefore a type of covalent bond. When shared, each of the two electrons involved is no longer in the sole possession of the orbital in which it originated; rather, both electrons spend time in either of the orbitals which overlap in the bonding process. As a Lewis structure, a single bond is denoted as AːA or A-A, where A represents an element. [2] In the first rendition, each dot represents a shared electron, and in the second, the bar represents both of the electrons shared in the single bond. A covalent bond can also be a double bond or a triple bond. A single bond is weaker than either a double bond or a triple bond. This difference in strength can be explained by examining the component bonds of which each of these types of covalent bonds consists (Moore, Stanitski, and Jurs 393). Usually, a single bond is a sigma bond. An exception is the bond in diboron, which is a pi bond. In contrast, a double bond consists of one sigma bond and one pi bond, and a triple bond consists of one sigma bond and two pi bonds (Moore, Stanitski, and Jurs 396). The number of component bonds is what determines the strength disparity: the single bond is the weakest of the three because it consists of only a sigma bond, while double and triple bonds consist not only of this type of component bond but also of at least one additional bond. The single bond has the capacity for rotation, a property not possessed by the double bond or the triple bond. The structure of pi bonds does not allow for rotation (at least not at 298 K), so double and triple bonds, which contain pi bonds, are held rigid by this property.
The sigma bond is not so restrictive, and the single bond is able to rotate using the sigma bond as the axis of rotation (Moore, Stanitski, and Jurs 396-397). Another property comparison can be made in bond length. Single bonds are the longest of the three types of covalent bonds as interatomic attraction is greater in the two other types, double and triple. The increase in component bonds is the reason for this attraction increase as more electrons are shared between the bonded atoms (Moore, Stanitski, and Jurs 343). Single bonds are often seen in diatomic molecules . Examples of this use of single bonds include H 2 , F 2 , and HCl . Single bonds are also seen in molecules made up of more than two atoms. Examples of this use of single bonds include: Single bonding even appears in molecules as complex as hydrocarbons larger than methane. The type of covalent bonding in hydrocarbons is extremely important in the nomenclature of these molecules. Hydrocarbons containing only single bonds are referred to as alkanes (Moore, Stanitski, and Jurs 334). The names of specific molecules which belong to this group end with the suffix -ane . Examples include ethane , 2-methylbutane , and cyclopentane (Moore, Stanitski, and Jurs 335).
https://en.wikipedia.org/wiki/Single_bond
A single buoy mooring ( SBM ) (also known as single-point mooring or SPM ) is a loading buoy anchored offshore that serves as a mooring point and interconnect for tankers loading or offloading gas or liquid products. SPMs are the link between geostatic subsea manifold connections and weathervaning tankers. They are capable of handling any tonnage of ship, even very large crude carriers (VLCCs) where no alternative facility is available. In shallow water, SPMs are used to load and unload crude oil and refined products from inshore and offshore oilfields or refineries, usually through some form of storage system. These buoys are usually suitable for use by all types of oil tanker. In deep-water oil fields, SPMs are usually used to load crude oil directly from the production platforms, where there are economic reasons not to run a pipeline to the shore. These moorings usually supply dedicated tankers which can moor without assistance. [1] Several types of single point mooring are in use. [1] There are four groups of parts in the total mooring system: the body of the buoy, mooring and anchoring elements, the product transfer system, and other components. The buoy body may be supported on static legs attached to the seabed, with a rotating part above water level connected to the (off)loading tanker. The two sections are linked by a roller bearing, referred to as the "main bearing". Alternatively, the buoy body may be held in place by multiple radiating anchor chains. The moored tanker can freely weathervane around the buoy and find a stable position thanks to this arrangement. Moorings fix the buoy to the sea bed. Buoy design must account for the behaviour of the buoy under applicable wind, wave and current conditions and tanker tonnages; this determines the optimum mooring arrangement and the size of the various mooring-leg components. Anchoring points are greatly dependent on local soil conditions. [2] A tanker is moored to a buoy by means of a hawser arrangement.
Oil Companies International Marine Forum (OCIMF) standards are available for mooring systems. The hawser arrangement usually consists of nylon rope, which is shackled to an integrated mooring uni-joint on the buoy deck. At the tanker end of the hawser, a chafe chain is connected to prevent damage from the tanker fairlead. A load pin can be applied to the mooring uni-joint on the buoy deck to measure hawser loads. Hawser systems use either one or two ropes depending on the largest tonnage of vessel which would be moored to the buoy. The ropes are either single-leg or grommet-leg type ropes. These are usually connected to an OCIMF chafe chain on the export tanker side (either type A or B, depending on the maximum tonnage of the tanker and the mooring loads). This chafe chain is then held in the chain stopper on board the export tanker. A basic hawser system consists of the following (working from the buoy outwards): a buoy-side shackle and bridle assembly for connection to the padeye on the buoy; a mooring hawser shackle; a mooring hawser; a chafe chain assembly; a support buoy; pick-up/messenger lines; and a marker buoy for retrieval from the water. Under OCIMF recommendations, the hawser arrangement would normally be purchased as a full assembly from a manufacturer. The heart of each buoy is the product transfer system. From a geostatic location, e.g. a pipeline end manifold (PLEM) located on the seabed, this system transfers products to the offtake tanker. The basic product transfer system components are the risers and the floating hose string(s). The risers are flexible hoses that connect the subsea piping to the buoy; the configuration of these risers can vary depending on water depth, sea conditions, buoy motions, etc. Floating hose string(s) connect the buoy to the offloading tanker. The hose string can be equipped with a breakaway coupling to prevent rupture of hoses/hawser and subsequent oil spills. The product swivel is the connection between the geostatic and the rotating parts of the buoy.
The swivel enables an offloading tanker to rotate with respect to the mooring buoy. Product swivels range in size depending on the capacity of attached piping and risers. Product swivels can provide one or several independent paths for fluids, gases, electrical signals or power. Swivels are equipped with a multiple seal arrangement to minimise the possibility of leakage of product into the environment. Other possible components of SPMs are: A commonly used configuration is the catenary anchor leg mooring (CALM), which can be capable of handling very large crude carriers. This configuration uses six or eight heavy anchor chains placed radially around the buoy, of a tonnage to suit the designed load, each about 350 metres (1,150 ft) long, and attached to an anchor or pile to provide the required holding power. The anchor chains are pre-tensioned to ensure that the buoy is held in position above the PLEM. As the load from the tanker is applied, the heavy chains on the far side straighten and lift off the seabed to apply the balancing load. Under full design load there is still some 27 metres (89 ft) of chain lying on the bottom. The flexible hose riser may be in one of three basic configurations, all designed to accommodate tidal depth variation and lateral displacement due to mooring loads. In all cases the hose curvature changes to accommodate lateral and vertical movement of the buoy, and the hoses are supported at near neutral buoyancy by floats along the length. These are: [ 1 ] Less commonly used configurations include:
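The CALM behaviour described above, where chain lifts off the seabed as tanker load is applied, follows the classic catenary relation. A minimal sketch, assuming a fully flexible, inelastic chain with a horizontal pull at the buoy (depth, tension and chain weight are illustrative numbers, not values from the article):

```python
import math

def suspended_chain_length(depth: float, horizontal_tension: float,
                           weight_per_metre: float) -> float:
    """Length of anchor chain lifted off the seabed for a catenary leg.

    Classic catenary result: s = sqrt(h^2 + 2*h*a), with a = H/w, where
    h is water depth, H the horizontal tension at the buoy and w the
    submerged chain weight per metre. The rest of the leg lies on the
    bottom and provides the holding reserve.
    """
    a = horizontal_tension / weight_per_metre
    return math.sqrt(depth ** 2 + 2.0 * depth * a)

# Illustrative values only: 100 m depth, 500 kN horizontal pull, 2 kN/m chain.
lifted = suspended_chain_length(100.0, 500.0, 2.0)
print(f"{lifted:.1f} m of a 350 m leg is suspended; "
      f"{350.0 - lifted:.1f} m stays on the seabed")
```

As the tanker load (horizontal tension) grows, the suspended length grows and the grounded reserve shrinks, matching the description of chain straightening and lifting off the seabed under load.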
https://en.wikipedia.org/wiki/Single_buoy_mooring
Single cell epigenomics is the study of epigenomics (the complete set of epigenetic modifications on the genetic material of a cell) in individual cells by single cell sequencing. [2] [1] [3] Since 2013, methods have been created including whole-genome single-cell bisulfite sequencing to measure DNA methylation, whole-genome ChIP-sequencing to measure histone modifications, whole-genome ATAC-seq to measure chromatin accessibility, and chromosome conformation capture. Single-cell bisulfite sequencing quantifies DNA methylation. It is similar to single-cell genome sequencing, but with the addition of a bisulfite treatment before sequencing. Forms include whole-genome bisulfite sequencing [4] [5] and reduced representation bisulfite sequencing. [6] [7] ATAC-seq stands for Assay for Transposase-Accessible Chromatin with high-throughput sequencing. [9] It is a technique used in molecular biology to identify accessible DNA regions, equivalent to DNase I hypersensitive sites. [9] Single-cell ATAC-seq has been performed since 2015, using methods ranging from FACS sorting and microfluidic isolation of single cells to combinatorial indexing. [8] In initial studies, the method was able to reliably separate cells based on their cell types, uncover sources of cell-to-cell variability, and show a link between chromatin organization and cell-to-cell variation. [8] ChIP-sequencing, also known as ChIP-seq, is a method used to analyze protein interactions with DNA. [9] ChIP-seq combines chromatin immunoprecipitation (ChIP) with massively parallel DNA sequencing to identify the binding sites of DNA-associated proteins. [9] In epigenomics, this is often used to assess histone modifications (such as methylation). [9] ChIP-seq is also often used to determine transcription factor binding sites.
[9] Single-cell ChIP-seq is extremely challenging due to background noise caused by nonspecific antibody pull-down, [1] and only one study so far has performed it successfully. This study used a droplet-based microfluidics approach, and the low coverage required thousands of cells to be sequenced in order to assess cellular heterogeneity. [10] [1] Chromosome conformation capture techniques (often abbreviated to 3C technologies or 3C-based methods [11]) are a set of molecular biology methods used to analyze the spatial organization of chromatin in a cell. These methods quantify the number of interactions between genomic loci that are nearby in three-dimensional space, even if the loci are separated by many kilobases [12] in the linear genome. Current 3C methods start with a similar set of steps, performed on a sample of cells. [11] First, the cells are cross-linked, which introduces bonds between proteins, and between proteins and nucleic acids, [12] that effectively "freeze" interactions between genomic loci. [11] The genome is then digested into fragments through the use of restriction enzymes. Next, proximity-based ligation is performed, creating long regions of hybrid DNA. [11] Lastly, the hybrid DNA is sequenced to determine genomic loci that are in close proximity to each other. [11] Single-cell Hi-C is a modification of the original Hi-C protocol (itself an adaptation of the 3C method) that makes it possible to determine the proximity of different regions of the genome in a single cell. [13] This method was made possible by performing the digestion and ligation steps in individual nuclei, [13] as opposed to the original Hi-C protocol, where ligation was performed after cell lysis in a pool containing crosslinked chromatin complexes.
[14] In single-cell Hi-C, after ligation, single cells are isolated and the remaining steps are performed in separate compartments, [13] [15] and the hybrid DNA is tagged with a compartment-specific barcode. High-throughput sequencing is then performed on the pooled hybrid DNA from the single cells. Although the recovery rate of sequenced interactions (hybrid DNA) can be as low as 2.5% of potential interactions, [16] it has been possible to generate three-dimensional maps of entire genomes using this method. [17] [18] Additionally, advances have been made in the analysis of Hi-C data, allowing for the enhancement of Hi-C datasets to generate even more accurate and detailed contact maps and 3D models. [15]
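The final analysis step described above, turning sequenced hybrid (ligated) fragments into a contact map, can be illustrated with a toy binning sketch (coordinates, bin size and function names are hypothetical, chosen only for the example):

```python
from collections import Counter

def contact_map(pairs, n_bins, bin_size):
    """Build a symmetric Hi-C-style contact matrix from ligation read pairs.

    `pairs` is an iterable of (pos_a, pos_b) genomic coordinates recovered
    from hybrid DNA; each pair increments the contact count between the
    two genomic bins the coordinates fall into.
    """
    counts = Counter()
    for a, b in pairs:
        i, j = a // bin_size, b // bin_size
        counts[(i, j)] += 1
        if i != j:
            counts[(j, i)] += 1  # keep the matrix symmetric
    return [[counts[(i, j)] for j in range(n_bins)] for i in range(n_bins)]

# Three toy ligation pairs on a 4-bin locus with 1 kb bins.
m = contact_map([(100, 2500), (150, 2600), (3100, 3900)], n_bins=4, bin_size=1000)
print(m)  # [[0, 0, 2, 0], [0, 0, 0, 0], [2, 0, 0, 0], [0, 0, 0, 1]]
```

Real single-cell Hi-C pipelines additionally demultiplex by the compartment-specific barcode before binning, so that each cell yields its own sparse contact matrix.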
https://en.wikipedia.org/wiki/Single_cell_epigenomics
Single colour reflectometry ( SCORE ), formerly known as imaging reflectometric interferometry ( iRIf ) and 1-lambda reflectometry, is a physical method based on the interference of monochromatic light at thin films, which is used to investigate (bio-)molecular interactions. The binding curves obtained using SCORE provide detailed information on the kinetics and thermodynamics of the observed interaction(s) as well as on the concentrations of the analytes used. These data can be relevant for pharmaceutical screening and drug design, biosensors and other biomedical applications, diagnostics, and cell-based assays. The underlying principle corresponds to that of the Fabry-Pérot interferometer, which is also the underlying principle of white-light interferometry. Monochromatic light illuminates the rear side of a transparent multi-layer substrate at normal incidence. The partial beams of the monochromatic light are transmitted and reflected at each interface of the multi-layer system. Superimposition of the reflected beams results in destructive or constructive interference (depending on the wavelength of the light and the materials of the substrate/multi-layer system), which can be detected as an intensity change of the reflected light using a photodiode, CCD, or CMOS element. The sensitive layer on top of the multi-layer system can be (bio-)chemically modified with receptor molecules, e.g. antibodies. Binding of specific ligands to the immobilised receptor molecules results in a change in the refractive index n and the physical thickness d of the sensitive layer. The product of n and d is the optical thickness (n*d) of the sensitive layer.
Monitoring the change of the reflected intensity of the light over time yields binding curves that provide information on: Compared to bio-layer interferometry, which monitors the change of the interference pattern of reflected white light, SCORE only monitors the intensity change of the reflected light using a photodiode, CCD, or CMOS element. Thus, it is possible to analyse not only a single interaction but high-density arrays with up to 10,000 interactions per cm 2 . [1] Compared to surface plasmon resonance (SPR), whose penetration depth is limited by the evanescent field, SCORE is limited by the coherence length of the light source, which is typically a few micrometers. This is especially relevant when investigating whole-cell assays. Also, SCORE (like BLI) is not influenced by temperature fluctuations during the measurement, while SPR needs thermostabilisation. SCORE is especially used as a detection method in bio- and chemosensors. It is a label-free technique like reflectometric interference spectroscopy (RIfS), bio-layer interferometry (BLI) and surface plasmon resonance (SPR), which allows time-resolved observation of binding events on the sensor surface without the use of fluorescent or radioactive labels. The SCORE technology was commercialised by Biametrics GmbH, a service provider and instrument manufacturer with headquarters in Tübingen, Germany. In January 2020, Biametrics GmbH and its technology were acquired by BioCopy Holding AG, headquartered in Aadorf, Switzerland.
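The dependence of the reflected intensity on the optical thickness n*d can be sketched with an idealized two-beam interference model. This is a qualitative illustration only: it ignores reflection phase shifts and multiple reflections, and the layer thickness, refractive index and wavelength are illustrative values:

```python
import math

def reflected_intensity(n: float, d_nm: float, wavelength_nm: float,
                        i1: float = 1.0, i2: float = 1.0) -> float:
    """Idealized two-beam interference at a thin sensitive layer.

    The beam reflected at the top of the layer and the beam reflected at
    its bottom differ in phase by 2*pi * (2*n*d) / lambda at normal
    incidence; SCORE tracks the resulting intensity change as binding
    events alter the optical thickness n*d.
    """
    phase = 2.0 * math.pi * (2.0 * n * d_nm) / wavelength_nm
    return i1 + i2 + 2.0 * math.sqrt(i1 * i2) * math.cos(phase)

# Binding of ligands increases the optical thickness n*d, shifting intensity.
before = reflected_intensity(n=1.45, d_nm=100.0, wavelength_nm=633.0)
after = reflected_intensity(n=1.45, d_nm=102.0, wavelength_nm=633.0)  # +2 nm growth
print(after - before)
```

Tracking this intensity difference over time for each spot of an array is what turns the raw camera signal into per-spot binding curves.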
https://en.wikipedia.org/wiki/Single_colour_reflectometry
A single-displacement reaction , also known as single replacement reaction or exchange reaction , is an archaic concept in chemistry. It describes the stoichiometry of some chemical reactions in which one element or ligand is replaced by an atom or group. [ 1 ] [ 2 ] [ 3 ] It can be represented generically as: where either This will most often occur if A {\displaystyle {\ce {A}}} is more reactive than B {\displaystyle {\ce {B}}} , thus giving a more stable product. The reaction in that case is exergonic and spontaneous. In the first case, when A {\displaystyle {\ce {A}}} and B {\displaystyle {\ce {B}}} are metals, BC {\displaystyle {\ce {BC}}} and AC {\displaystyle {\ce {AC}}} are usually aqueous compounds (or very rarely in a molten state) and C {\displaystyle {\ce {C}}} is a spectator ion (i.e. remains unchanged). [ 1 ] In the reactivity series , the metals with the highest propensity to donate their electrons to react are listed first, followed by less reactive ones. Therefore, a metal higher on the list can displace anything below it. Here is a condensed version of the same: [ 1 ] Similarly, the halogens with the highest propensity to acquire electrons are the most reactive. The activity series for halogens is: [ 1 ] [ 2 ] [ 3 ] Due to the free state nature of A {\displaystyle {\ce {A}}} and B {\displaystyle {\ce {B}}} , single displacement reactions are also redox reactions, involving the transfer of electrons from one reactant to another. [ 4 ] When A {\displaystyle {\ce {A}}} and B {\displaystyle {\ce {B}}} are metals, A {\displaystyle {\ce {A}}} is always oxidized and B {\displaystyle {\ce {B}}} is always reduced. Since halogens prefer to gain electrons, A {\displaystyle {\ce {A}}} is reduced (from 0 {\displaystyle {\ce {0}}} to − 1 {\displaystyle {\ce {-1}}} ) and B {\displaystyle {\ce {B}}} is oxidized (from − 1 {\displaystyle {\ce {-1}}} to 0 {\displaystyle {\ce {0}}} ). 
Here one cation replaces another: (element A has replaced B in compound BC to become a new compound AC and the free element B). Some examples are: These reactions are exothermic, and the rise in temperature is usually in the order of the reactivity of the different metals. [5] If the reactant in elemental form is not the more reactive metal, then no reaction will occur; the reverse of such examples would not proceed. Here one anion replaces another: (element A has replaced B in the compound CB to form a new compound CA and the free element B). Some examples are: Cl 2 + 2 NaBr ⟶ 2 NaCl + Br 2 {\displaystyle {\ce {Cl2 + 2NaBr -> 2NaCl + Br2}}} Br 2 + 2 KI ⟶ 2 KBr + I 2 ↓ {\displaystyle {\ce {Br2 + 2KI -> 2KBr + I2(v)}}} Cl 2 + H 2 S ⟶ 2 HCl + S ↓ {\displaystyle {\ce {Cl2 + H2S -> 2HCl + S(v)}}} Again, the less reactive halogen cannot replace the more reactive halogen: Metals react with acids to form salts and hydrogen gas. However, less reactive metals cannot displace the hydrogen from acids. [3] (They may react with oxidizing acids though.) Metals react with water to form metal oxides and hydrogen gas. The metal oxides further dissolve in water to form alkalis. The reaction can be extremely violent with alkali metals, as the hydrogen gas catches fire. [2] Metals like gold and silver, which are below hydrogen in the reactivity series, do not react with water. Coke or more reactive metals are used to reduce metals from their oxides, [6] such as in the carbothermic reaction of zinc oxide ( zincite ) to produce zinc metal: and the use of aluminium to produce manganese from manganese dioxide: Such reactions are also used in the extraction of boron, silicon, titanium and tungsten. Using highly reactive metals as reducing agents leads to exothermic reactions that melt the metal produced; this is used for welding railway tracks. [6] Silver tarnishes due to the presence of hydrogen sulfide, leading to formation of silver sulfide.
[7] [2] Chlorine is manufactured industrially by the Deacon process. The reaction takes place at about 400 to 450 °C in the presence of a variety of catalysts such as CuCl 2 {\displaystyle {\ce {CuCl2}}} . Bromine and iodine are extracted from brine by displacement with chlorine.
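The rule that a metal can displace only metals below it in the activity series lends itself to a simple lookup. A sketch using a condensed textbook series (the ordering is abridged and the function name is illustrative):

```python
# A condensed metal activity series, most reactive first (abridged from
# standard textbook tables; illustrative, not exhaustive).
ACTIVITY_SERIES = ["K", "Na", "Ca", "Mg", "Al", "Zn", "Fe", "Pb", "H", "Cu", "Ag", "Au"]

def can_displace(free_metal: str, bound_metal: str) -> bool:
    """A free metal displaces a bound one from its compound only if it
    sits higher (is more reactive) in the activity series."""
    return ACTIVITY_SERIES.index(free_metal) < ACTIVITY_SERIES.index(bound_metal)

print(can_displace("Zn", "Cu"))  # True:  Zn + CuSO4 -> ZnSO4 + Cu
print(can_displace("Cu", "Zn"))  # False: the reverse reaction does not occur
print(can_displace("Fe", "H"))   # True:  iron displaces hydrogen from acids
```

The same lookup logic applies to the halogen activity series, with "higher in the list" meaning a greater propensity to gain electrons rather than to lose them.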
https://en.wikipedia.org/wiki/Single_displacement_reaction
Single molecule fluorescent sequencing is one method of DNA sequencing. The core principle is the imaging of individual fluorophore molecules, each corresponding to one base. [ 1 ] By working at the single-molecule level, amplification of DNA is not required, avoiding amplification bias. The method lends itself to parallelization by probing many sequences simultaneously and imaging all of them at the same time. The principle can be applied stepwise (e.g. the Helicos implementation) or in real time (as in the Pacific Biosciences implementation). [ 2 ]
https://en.wikipedia.org/wiki/Single_molecule_fluorescent_sequencing
Single Particle Extinction and Scattering (SPES) is a technique in physics used to characterise micro- and nanoparticles suspended in a fluid through two independent parameters, the diameter and the effective refractive index . [ 1 ] [ 2 ] A laser generates a Gaussian beam that is focused inside a flow cell. A particle passing through the focal region produces an interference pattern in the beam, which is collected with a sensor. From this signal it is possible to derive the real part and the imaginary part of the forward scattering field of each particle. The technique was developed to measure aerosols in air [ 3 ] and particles in liquid. [ 4 ]
https://en.wikipedia.org/wiki/Single_particle_extinction_and_scattering
A single point of failure ( SPOF ) is a part of a system that would stop the entire system from working if it were to fail . [ 1 ] The term single point of failure implies that there is no backup or redundant option that would enable the system to continue to function without it. SPOFs are undesirable in any system with a goal of high availability or reliability , be it a business practice, software application, or other industrial system. A SPOF in a system produces a potential interruption that is substantially more disruptive than an error elsewhere in the system would be. Systems can be made robust by adding redundancy in all potential SPOFs. Redundancy can be achieved at various levels. The assessment of a potential SPOF involves identifying the critical components of a complex system that would provoke a total systems failure in case of malfunction . [ 2 ] Highly reliable systems should not rely on any such individual component. For instance, the owner of a small tree care company may only own one woodchipper . If the chipper breaks, they may be unable to complete their current job and may have to cancel future jobs until they can obtain a replacement. The owner could prepare for this in multiple ways. The owner of the tree care company may have spare parts ready for the repair of the wood chipper, in case it fails. At a higher level, they may have a second wood chipper that they can bring to the job site. Finally, at the highest level, they may have enough equipment available to completely replace everything at the work site in the case of multiple failures. A fault-tolerant computer system can be achieved at the internal component level, at the system level (multiple machines), or at the site level (replication). One would normally deploy a load balancer to ensure high availability for a server cluster at the system level.
[ 3 ] In a high-availability server cluster, each individual server may attain internal component redundancy by having multiple power supplies, hard drives, and other components. System-level redundancy could be obtained by having spare servers waiting to take on the work of another server if it fails. Since a data center is often a support center for other operations such as business logic, it represents a potential SPOF in itself. Thus, at the site level, the entire cluster may be replicated at another location, where it can be accessed in case the primary location becomes unavailable. This is typically addressed as part of an IT disaster recovery program. While previously the solution to this SPOF was physical duplication of clusters, the high demand for this duplication led multiple businesses to outsource duplication to third parties using cloud computing . It has been argued by scholars, however, that doing so simply moves the SPOF and may even increase the likelihood of a failure or cyberattack . [ 4 ] Paul Baran and Donald Davies developed packet switching , a key part of "survivable communications networks". Such networks – including ARPANET and the Internet – are designed to have no single point of failure. Multiple paths between any two points on the network allow those points to continue communicating with each other, the packets "routing around" damage, even after any single failure of any one particular path or any one intermediate node. In software engineering , a bottleneck occurs when the capacity of an application or a computer system is limited by a single component. The bottleneck has the lowest throughput of all parts of the transaction path. A common example is when a programming language is capable of parallel processing , but a given snippet of code runs several independent processes sequentially rather than simultaneously.
Tracking down bottlenecks (sometimes known as hot spots – sections of the code that execute most frequently, i.e., have the highest execution count) is called performance analysis . Reduction is usually achieved with the help of specialized tools, known as performance analyzers or profilers. The objective is to make those particular sections of code perform as fast as possible to improve overall algorithmic efficiency . A vulnerability or security exploit in just one component can compromise an entire system. One of the largest concerns in computer security is attempting to eliminate SPOFs without sacrificing too much convenience for the user. With the invention and popularization of the Internet , many systems became connected to the broader world through numerous difficult-to-secure connections. [ 4 ] While companies have developed a number of solutions to this, the most consistent form of SPOF in complex systems tends to remain user error , whether accidental mishandling by an operator or outside interference through phishing attacks. [ 5 ] The concept of a single point of failure has also been applied to fields outside of engineering, computers, and networking, such as corporate supply chain management [ 6 ] and transportation management. [ 7 ] Design structures that create single points of failure include bottlenecks and series circuits (in contrast to parallel circuits ). In transportation, noted recent examples of the concept's application include the Nipigon River Bridge in Canada, where a partial bridge failure in January 2016 entirely severed road traffic between Eastern Canada and Western Canada for several days, because it is located along a portion of the Trans-Canada Highway where there is no alternate detour route for vehicles to take; [ 8 ] and the Norwalk River Railroad Bridge in Norwalk , Connecticut , an aging swing bridge that sometimes gets stuck when opening or closing, disrupting rail traffic on the Northeast Corridor line.
[ 7 ] The concept of a single point of failure has also been applied to the field of intelligence. Edward Snowden spoke of the dangers of being what he described as "the single point of failure" – the sole repository of information. [ 9 ] A component of a life-support system that would constitute a single point of failure would be required to be extremely reliable.
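The code-level bottleneck described earlier, independent work run sequentially when it could run concurrently, can be sketched in Python. This is a hypothetical illustration: `slow_square` stands in for any independent, time-consuming task.

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(n):
    # Stand-in for an independent, time-consuming task
    # (e.g. an I/O-bound request or computation).
    return n * n

data = list(range(8))

# Sequential: each task waits for the previous one to finish (the bottleneck).
sequential = [slow_square(n) for n in data]

# Parallel: the independent tasks are submitted to a thread pool
# and run simultaneously.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(slow_square, data))

assert parallel == sequential  # same results, but without the serial bottleneck
```

For CPU-bound work in Python one would typically use `ProcessPoolExecutor` instead of threads; the structure of the fix is the same.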
https://en.wikipedia.org/wiki/Single_point_of_failure
Singlet fission is a spin-allowed process, unique to molecular photophysics, whereby one singlet excited state is converted into two triplet states . The phenomenon has been observed in molecular crystals, aggregates, disordered thin films, and covalently-linked dimers, where the chromophores are oriented such that the electronic coupling between the singlet and the double triplet states is large. Being spin-allowed, the process can occur very rapidly (on a picosecond or femtosecond timescale) and out-compete radiative decay (which generally occurs on a nanosecond timescale), thereby producing two triplets with very high efficiency. The process is distinct from intersystem crossing in that singlet fission does not involve a spin flip but is mediated by two triplets coupled into an overall singlet. [ 1 ] It has been proposed that singlet fission in organic photovoltaic devices could improve photoconversion efficiencies . [ 2 ] The process of singlet fission was first introduced to describe the photophysics of anthracene in 1965. [ 3 ] Early studies on the effect of the magnetic field on the fluorescence of crystalline tetracene solidified understanding of singlet fission in polyacenes . Acenes, pentacene and tetracene in particular, are prominent candidates for singlet fission: the energy of their triplet state is smaller than or equal to half of the singlet (S 1 ) state energy, thus satisfying the requirement S 1 ≥ 2T 1 . Singlet fission in functionalized pentacene compounds has been observed experimentally. [ 4 ] Intramolecular singlet fission in covalently linked pentacene and tetracene dimers has also been reported. [ 5 ] The detailed mechanism of the process is unknown; in particular, the role of charge-transfer states in the singlet fission process is still debated. Typically, the mechanisms for singlet fission are classified into (a) direct coupling between the molecules and (b) step-wise one-electron processes involving the charge-transfer states.
Intermolecular interactions and the relative orientation of the molecules within the aggregates are known to critically affect the singlet fission efficiencies. [ 6 ] The limited number and structural similarity of chromophores is believed to be the major obstacle to advancing the field for practical applications. [ 7 ] [ 8 ] [ 9 ] It has been proposed that computational modeling of the diradical character of molecules may serve as a guiding principle for the discovery of new classes of singlet fission chromophores. [ 10 ] Computations have identified carbenes as building blocks for engineering singlet fission molecules. [ 11 ] [ 12 ] Singlet fission (SF) involves the conversion of a singlet excited state (S 1 ) into two triplet states (T 1 ). The process can be described by a two-step kinetic model (see Figure 1): 1. Formation of a correlated triplet pair state 1 (T 1 T 1 ) from the singlet excited state: 2. Separation of the triplet pair into two individual triplet states: The rate of singlet fission, denoted k SF , can be expressed using Fermi's golden rule, where H el is the electronic coupling Hamiltonian and d represents the density of states. This expression shows that electronic coupling and state density determine the efficiency of singlet fission. [ 13 ] [ 14 ] [ 15 ] Efficient singlet fission requires materials where the energy of the singlet state E (S 1 ) is at least twice the energy of the triplet state E (T 1 ): The energetic requirements for singlet fission can be met by acenes (e.g., tetracene , pentacene ), perylene derivatives, and diketopyrrolopyrroles (DPPs). [ 14 ] Crystal morphology, molecular packing, and minimizing defects influence performance. For instance, single-crystal tetracene displays coherent quantum beats from spin-state interactions, whereas polycrystalline films exhibit less coherence due to defects. Single-crystal tetracene has slower singlet decay times (200–300 ps) compared to polycrystalline films (70–90 ps).
In polycrystalline films, excitons can diffuse to defect-rich regions, creating "hotspots" that enhance singlet fission, with excimer-like emissions reflecting the influence of structural defects on SF rates. [ 16 ] When materials do not meet the energetic requirements for singlet fission, other relaxation pathways, such as fluorescence , non-radiative decay, or intersystem crossing to a single triplet state, dominate, leading to lower efficiency in photovoltaic applications. Ultrafast and time-resolved spectroscopic techniques , including transient absorption and time-resolved fluorescence spectroscopy, allow determination of the rates of singlet exciton decay and the formation of triplet states. Transient absorption techniques capture the rapid conversion of singlet excitons into triplet pairs, highlighting the efficiency of singlet fission in various material morphologies. Using time-resolved fluorescence spectroscopy, one can observe coherent quantum beats resulting from spin-state interactions in triplet pairs. [ 16 ] Singlet fission has the potential to enhance solar cell efficiency beyond the Shockley–Queisser limit , especially for organic photovoltaics . [ 13 ] Applications extend to other fields, including light-emitting devices .
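The Fermi golden rule rate expression for singlet fission invoked earlier can be written out in its standard general form (written here from the textbook statement of the rule; the prefactor convention in the cited sources may differ, and the density of final states ρ corresponds to the quantity called d in the text):

```latex
k_{\mathrm{SF}} \;=\; \frac{2\pi}{\hbar}\,
  \left|\left\langle \Psi_{{}^{1}(\mathrm{T_1T_1})} \,\middle|\,
  \hat{H}_{\mathrm{el}} \,\middle|\, \Psi_{\mathrm{S_1}} \right\rangle\right|^{2}
  \rho
```

The squared matrix element captures the electronic coupling between the singlet exciton and the correlated triplet-pair state, and ρ the density of final states, which is why both quantities govern the singlet fission efficiency.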
https://en.wikipedia.org/wiki/Singlet_fission
Singlet oxygen , systematically named dioxygen(singlet) and dioxidene , is a gaseous inorganic chemical with the formula O=O (also written as 1 [O 2 ] or 1 O 2 ), which is in a quantum state where all electrons are spin paired . It is kinetically unstable at ambient temperature, but the rate of decay is slow. The lowest excited state of the diatomic oxygen molecule is a singlet state . It is a gas with physical properties differing only subtly from those of the more prevalent triplet ground state of O 2 . In terms of its chemical reactivity, however, singlet oxygen is far more reactive toward organic compounds. It is responsible for the photodegradation of many materials but can be put to constructive use in preparative organic chemistry and photodynamic therapy . Trace amounts of singlet oxygen are found in the upper atmosphere and in polluted urban atmospheres where it contributes to the formation of lung-damaging nitrogen dioxide . [ 1 ] : 355–68 It often appears alongside ozone, and is easily confounded with it, in environments that generate both, such as pine forests undergoing photodegradation of turpentine . [ citation needed ] The terms 'singlet oxygen' and ' triplet oxygen ' derive from each form's number of electron spins . The singlet has only one possible arrangement of electron spins with a total quantum spin of 0, while the triplet has three possible arrangements of electron spins with a total quantum spin of 1, corresponding to three degenerate states. In spectroscopic notation , the lowest singlet and triplet forms of O 2 are labeled 1 Δ g and 3 Σ − g , respectively. [ 2 ] [ 3 ] [ 4 ] Singlet oxygen refers to one of two singlet electronic excited states. The two singlet states are denoted 1 Σ + g and 1 Δ g (the preceding superscript "1" indicates a singlet state). The singlet states of oxygen are 158 and 95 kilojoules per mole higher in energy than the triplet ground state of oxygen.
Under most common laboratory conditions, the higher energy 1 Σ + g singlet state rapidly converts to the more stable, lower energy 1 Δ g singlet state. [ 2 ] This more stable of the two excited states has its two valence electrons spin-paired in one π* orbital while the second π* orbital is empty. This state is referred to by the title term, singlet oxygen , commonly abbreviated 1 O 2 , to distinguish it from the triplet ground state molecule, 3 O 2 . [ 2 ] [ 3 ] Molecular orbital theory predicts the electronic ground state denoted by the molecular term symbol 3 Σ – g , and two low-lying excited singlet states with term symbols 1 Δ g and 1 Σ + g . These three electronic states differ only in the spin and the occupancy of oxygen's two antibonding π g -orbitals, which are degenerate (equal in energy). These two orbitals are classified as antibonding and are of higher energy. Following Hund's first rule , in the ground state, these electrons are unpaired and have like (same) spin. This open-shell triplet ground state of molecular oxygen differs from most stable diatomic molecules, which have singlet ( 1 Σ + g ) ground states. [ 5 ] Two less stable, higher energy excited states are readily accessible from this ground state, again in accordance with Hund's first rule ; [ 6 ] the first moves one of the high energy unpaired ground state electrons from one degenerate orbital to the other, where it "flips" and pairs the other, and creates a new state, a singlet state referred to as the 1 Δ g state (a term symbol , where the preceding superscripted "1" indicates it as a singlet state). [ 2 ] [ 3 ] Alternatively, both electrons can remain in their degenerate ground state orbitals, but the spin of one can "flip" so that it is now opposite to the second (i.e., it is still in a separate degenerate orbital, but no longer of like spin); this also creates a new state, a singlet state referred to as the 1 Σ + g state. 
[ 2 ] [ 3 ] The ground and first two singlet excited states of oxygen can be described by the simple scheme in the figure below. [ 7 ] [ 8 ] The 1 Δ g singlet state is 7882.4 cm −1 above the triplet 3 Σ − g ground state, [ 3 ] [ 9 ] which in other units corresponds to 94.29 kJ/mol or 0.9773 eV. The 1 Σ + g singlet is 13 120.9 cm −1 [ 3 ] [ 9 ] (157.0 kJ/mol or 1.6268 eV) above the ground state. Radiative transitions between the three low-lying electronic states of oxygen are formally forbidden as electric dipole processes. [ 10 ] The two singlet-triplet transitions are forbidden both because of the spin selection rule ΔS = 0 and because of the parity rule that g-g transitions are forbidden. [ 11 ] The singlet-singlet transition between the two excited states is spin-allowed but parity-forbidden. The lower, O 2 ( 1 Δ g ) state is commonly referred to as singlet oxygen . The energy difference of 94.3 kJ/mol between ground state and singlet oxygen corresponds to a forbidden singlet-triplet transition in the near- infrared at ~1270 nm. [ 12 ] As a consequence, singlet oxygen in the gas phase is relatively long lived (54-86 milliseconds), [ 13 ] although interaction with solvents reduces the lifetime to microseconds or even nanoseconds. [ 14 ] In 2021, the lifetime of airborne singlet oxygen at air/solid interfaces was measured to be 550 microseconds. [ 15 ] The higher 1 Σ + g state is moderately short lived. In the gas phase, it relaxes primarily to the ground state triplet with a mean lifetime of 11.8 seconds. [ 10 ] However, in solvents such as CS 2 and CCl 4 , it relaxes to the lower singlet 1 Δ g in milliseconds due to radiationless decay channels. [ 10 ] Both singlet oxygen states have no unpaired electrons and therefore no net electron spin. The 1 Δ g state is, however, paramagnetic, as shown by the observation of an electron paramagnetic resonance (EPR) spectrum.
[ 16 ] [ 17 ] [ 18 ] The paramagnetism of the 1 Δ g state is due to a net orbital (and not spin) electronic angular momentum. In a magnetic field the degeneracy of the M L {\displaystyle M_{L}} levels is split into two levels with z projections of angular momenta +1 ħ and −1 ħ around the molecular axis. The magnetic transition between these levels gives rise to the g = 1 {\displaystyle g=1} EPR transition. Various methods for the production of singlet oxygen exist. Irradiation of oxygen gas in the presence of an organic dye as a sensitizer, such as rose bengal , methylene blue , or porphyrins —a photochemical method —results in its production. [ 19 ] [ 9 ] Large steady state concentrations of singlet oxygen are reported from the reaction of triplet excited state pyruvic acid with dissolved oxygen in water. [ 20 ] Singlet oxygen can also be produced by chemical procedures without irradiation. One chemical method involves the decomposition of triethylsilyl hydrotrioxide generated in situ from triethylsilane and ozone. [ 21 ] Another method uses a reaction of hydrogen peroxide with sodium hypochlorite in aqueous solution: [ 19 ] A retro-Diels–Alder reaction of diphenylanthracene peroxide can also yield singlet oxygen, along with diphenylanthracene: [ 22 ] A third method liberates singlet oxygen via phosphite ozonides, which are, in turn, generated in situ, such as triphenyl phosphite ozonide . [ 23 ] [ 24 ] Phosphite ozonides will decompose to give singlet oxygen: [ 25 ] An advantage of this method is that it is amenable to non-aqueous conditions. [ 25 ] Because of differences in their electron shells, singlet and triplet oxygen differ in their chemical properties; singlet oxygen is highly reactive. [ 26 ] The lifetime of singlet oxygen depends on the medium and pressure. In normal organic solvents, the lifetime is only a few microseconds whereas in solvents lacking C-H bonds, the lifetime can be as long as seconds.
[ 25 ] [ 27 ] Unlike ground state oxygen, singlet oxygen participates in Diels–Alder [4+2]- and [2+2]- cycloaddition reactions and formal concerted ene reactions ( Schenck ene reaction ), causing photooxygenation . [ 25 ] It oxidizes thioethers to sulfoxides. Organometallic complexes are often degraded by singlet oxygen. [ 28 ] [ 29 ] With some substrates 1,2-dioxetanes are formed; cyclic dienes such as 1,3-cyclohexadiene form [4+2] cycloaddition adducts. [ 30 ] The [4+2]-cycloaddition between singlet oxygen and furans is widely used in organic synthesis . [ 31 ] [ 32 ] Singlet oxygen reacts with alkenes bearing allylic hydrogens, e.g. citronellol, by abstraction of the allylic proton in an ene-like reaction , yielding the allyl hydroperoxide , R–O–OH (R = alkyl ), which can then be reduced to the corresponding allylic alcohol . [ 25 ] [ 33 ] [ 34 ] [ 35 ] In reactions with water, trioxidane , an unusual molecule with three consecutive linked oxygen atoms, is formed. [ citation needed ] In photosynthesis , singlet oxygen can be produced from the light-harvesting chlorophyll molecules. One of the roles of carotenoids in photosynthetic systems is to prevent damage caused by the singlet oxygen produced, by either removing excess light energy from chlorophyll molecules or quenching the singlet oxygen molecules directly. In mammalian biology , singlet oxygen is one of the reactive oxygen species , which is linked to oxidation of LDL cholesterol and resultant cardiovascular effects. Polyphenol antioxidants can scavenge and reduce concentrations of reactive oxygen species and may prevent such deleterious oxidative effects. [ 36 ] Ingestion of pigments capable of producing singlet oxygen with activation by light can produce severe photosensitivity of skin (see phototoxicity , photosensitivity in humans , photodermatitis , phytophotodermatitis ). This is especially a concern in herbivorous animals (see Photosensitivity in animals ).
Singlet oxygen is the active species in photodynamic therapy . Singlet oxygen luminesces concomitant with its decay to the triplet ground state. This phenomenon was first observed in the thermal degradation of the endoperoxide of rubrene . [ 38 ]
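The unit conversions quoted earlier for the 1 Δ g excitation energy (7882.4 cm −1 , 94.29 kJ/mol, 0.9773 eV) can be checked numerically with standard CODATA physical constants (the constants below are assumptions supplied here, not values from the article):

```python
# Convert the 1Delta_g excitation energy from wavenumbers to kJ/mol and eV.
h = 6.62607015e-34       # Planck constant, J s (CODATA)
c_cm = 2.99792458e10     # speed of light, cm/s
N_A = 6.02214076e23      # Avogadro constant, 1/mol
e = 1.602176634e-19      # elementary charge, J per eV

nu = 7882.4                      # wavenumber of the 1Delta_g state, cm^-1
E = h * c_cm * nu                # energy per molecule, J
kJ_per_mol = E * N_A / 1000      # approximately 94.29 kJ/mol
eV = E / e                       # approximately 0.9773 eV
print(f"{kJ_per_mol:.2f} kJ/mol, {eV:.4f} eV")
```

The results agree with the values quoted in the text to the stated precision.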
https://en.wikipedia.org/wiki/Singlet_oxygen
In object-oriented programming , the singleton pattern is a software design pattern that restricts the instantiation of a class to a singular instance. It is one of the well-known "Gang of Four" design patterns , which describe how to solve recurring problems in object-oriented software. [ 1 ] The pattern is useful when exactly one object is needed to coordinate actions across a system. More specifically, the singleton pattern allows classes to: [ 2 ] The term comes from the mathematical concept of a singleton . Singletons are often preferred to global variables because they do not pollute the global namespace (or their containing namespace). Additionally, they permit lazy allocation and initialization, whereas global variables in many languages will always consume resources. [ 1 ] [ 3 ] The singleton pattern can also be used as a basis for other design patterns, such as the abstract factory , factory method , builder and prototype patterns. Facade objects are also often singletons because only one facade object is required. Logging is a common real-world use case for singletons, because all objects that wish to log messages require a uniform point of access and conceptually write to a single source. [ 4 ] Implementations of the singleton pattern ensure that only one instance of the singleton class ever exists and typically provide global access to that instance. The instance is usually stored as a private static variable ; the instance is created when the variable is initialized, at some point before the static method is first called. One C++23 implementation is based on the pre-C++98 implementation in the book. [ citation needed ] An alternative is the Meyers singleton [ 5 ] in C++11, which has no destruct method.
A singleton implementation may use lazy initialization in which the instance is created when the static method is first invoked. In multithreaded programs, this can cause race conditions that result in the creation of multiple instances. The following Java 5+ example [ 6 ] is a thread-safe implementation, using lazy initialization with double-checked locking . Some consider the singleton to be an anti-pattern that introduces global state into an application, often unnecessarily. This introduces a potential dependency on the singleton by other objects, requiring analysis of implementation details to determine whether a dependency actually exists. [ 7 ] This increased coupling can introduce difficulties with unit testing . [ 8 ] In turn, this places restrictions on any abstraction that uses the singleton, such as preventing concurrent use of multiple instances. [ 8 ] [ 9 ] [ 10 ] Singletons also violate the single-responsibility principle because they are responsible for enforcing their own uniqueness along with performing their normal functions. [ 8 ]
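The lazy initialization with double-checked locking described above can be sketched in Python (the article's own examples are in C++ and Java; this is an illustrative analogue, not one of them):

```python
import threading

class Singleton:
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def instance(cls):
        # Fast path: no locking once the instance already exists.
        if cls._instance is None:
            with cls._lock:
                # Second check inside the lock: another thread may have
                # created the instance while this one was waiting,
                # which is exactly the race the double check prevents.
                if cls._instance is None:
                    cls._instance = cls()
        return cls._instance

a = Singleton.instance()
b = Singleton.instance()
assert a is b  # only one instance ever exists
```

In Java, the field holding the instance must additionally be declared `volatile` for this idiom to be safe; the Python sketch sidesteps that memory-visibility concern.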
https://en.wikipedia.org/wiki/Singleton_pattern
Singmaster's conjecture is a conjecture in combinatorial number theory , named after the British mathematician David Singmaster , who proposed it in 1971. It says that there is a finite upper bound on the multiplicities of entries in Pascal's triangle (other than the number 1, which appears infinitely many times). It is clear that the only number that appears infinitely many times in Pascal's triangle is 1, because any other number x can appear only within the first x + 1 rows of the triangle. Let N ( a ) be the number of times the number a > 1 appears in Pascal's triangle. In big O notation , the conjecture is: N ( a ) = O ( 1 ) {\displaystyle N(a)=O(1)} . Singmaster (1971) showed that N ( a ) = O ( log ⁡ a ) {\displaystyle N(a)=O(\log a)} . Abbott, Erdős , and Hanson (1974) (see References ) refined the estimate to: The best currently known (unconditional) bound is and is due to Kane (2007). Abbott, Erdős, and Hanson note that, conditional on Cramér's conjecture on gaps between consecutive primes, holds for every ε > 0 {\displaystyle \varepsilon >0} . Singmaster (1975) showed that the Diophantine equation has infinitely many solutions for the two variables n , k . It follows that there are infinitely many triangle entries of multiplicity at least 6: For any non-negative i , a number a with six appearances in Pascal's triangle is given by either of the above two expressions with where F j is the j th Fibonacci number (indexed according to the convention that F 0 = 0 and F 1 = 1). The above two expressions locate two of the appearances; two others appear symmetrically in the triangle with respect to those two; and the other two appearances are at ( a 1 ) {\displaystyle {a \choose 1}} and ( a a − 1 ) . {\displaystyle {a \choose a-1}.} The number of times n appears in Pascal's triangle is By Abbott, Erdős, and Hanson (1974), the number of integers no larger than x that appear more than twice in Pascal's triangle is O ( x 1 / 2 ) {\displaystyle O(x^{1/2})} .
The smallest natural number greater than 1 that appears (at least) n times in Pascal's triangle is The numbers which appear at least five times in Pascal's triangle are Of these, the ones in Singmaster's infinite family are It is not known whether any number appears more than eight times, nor whether any number besides 3003 appears that many times. The conjectured finite upper bound could be as small as 8, but Singmaster thought it might be 10 or 12. It is also unknown whether any numbers appear exactly five or seven times.
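The multiplicity function N ( a ) discussed above can be computed directly for small a , using the fact stated earlier that a > 1 can appear only in rows 0 through a , together with the symmetry of each row (a sketch; the function name `multiplicity` is chosen here for illustration):

```python
from math import comb

def multiplicity(a):
    """Count N(a): how many times a (> 1) appears in Pascal's triangle.

    a can appear only in rows 0..a, and each row is symmetric, so it
    suffices to scan the left half of each row; entries increase toward
    the middle of a row, so the scan stops once an entry exceeds a.
    """
    count = 0
    for n in range(1, a + 1):
        k = 0
        while 2 * k <= n:
            c = comb(n, k)
            if c > a:
                break
            if c == a:
                # A central entry counts once; any other entry also
                # appears mirrored at C(n, n-k), so it counts twice.
                count += 1 if 2 * k == n else 2
            k += 1
    return count

print(multiplicity(6))     # 3: C(4,2), C(6,1), C(6,5)
print(multiplicity(3003))  # 8, the largest multiplicity known
```

Running `multiplicity(3003)` confirms the eight appearances of 3003 mentioned above: C(3003,1), C(78,2), C(15,5), C(14,6), and their four mirror images.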
https://en.wikipedia.org/wiki/Singmaster's_conjecture
A singular matrix is a square matrix that is not invertible , unlike a non-singular matrix , which is invertible. Equivalently, an n × n {\displaystyle n\times n} matrix A {\displaystyle A} is singular if and only if its determinant is zero, det ( A ) = 0 {\displaystyle \det(A)=0} . [ 1 ] In classical linear algebra, a matrix is called non-singular (or invertible) when it has an inverse ; by definition, a matrix that fails this criterion is singular. In more algebraic terms, an n × n {\displaystyle n\times n} matrix A is singular exactly when its columns (and rows) are linearly dependent, so that the linear map x ↦ A x {\displaystyle x\mapsto Ax} is not one-to-one. In this case the kernel ( null space ) of A is non-trivial (has dimension ≥ 1), and the homogeneous system A x = 0 {\displaystyle Ax=0} admits non-zero solutions. These characterizations follow from standard rank-nullity and invertibility theorems: for a square matrix A , det ( A ) ≠ 0 {\displaystyle \det(A)\neq 0} if and only if rank ( A ) = n {\displaystyle \operatorname {rank} (A)=n} , and det ( A ) = 0 {\displaystyle \det(A)=0} if and only if rank ( A ) < n {\displaystyle \operatorname {rank} (A)<n} . In summary, any condition that forces the determinant to zero or the rank to drop below full automatically yields singularity. [ 2 ] [ 3 ] The study of singular matrices is rooted in the early history of linear algebra . Determinants were first developed (in Japan by Seki in 1683 and in Europe by Leibniz and Cramer in the 1690s) [ 10 ] as tools for solving systems of equations. Leibniz explicitly recognized that a system has a solution precisely when a certain determinant expression equals zero. In that sense, singularity (determinant zero) was understood as the critical condition for solvability. Over the 18th and 19th centuries, mathematicians ( Laplace , Cauchy , etc.) established many properties of determinants and invertible matrices, formalizing the notion that det ( A ) = 0 {\displaystyle \det(A)=0} characterizes non-invertibility.
The term "singular matrix" itself emerged later, but the conceptual importance remained. In the 20th century, generalizations like the Moore–Penrose pseudoinverse were introduced to systematically handle singular or non-square cases. As recent scholarship notes, the idea of a pseudoinverse was proposed by E. H. Moore in 1920 and rediscovered by R. Penrose in 1955, [ 11 ] reflecting its longstanding utility. The pseudoinverse and singular value decomposition became fundamental in both theory and applications (e.g. in quantum mechanics, signal processing, and more) for dealing with singularity. Today, singular matrices are a canonical subject in linear algebra: they delineate the boundary between invertible (well-behaved) cases and degenerate (ill-posed) cases. In abstract terms, singular matrices correspond to non- isomorphisms in linear mappings and are thus central to the theory of vector spaces and linear transformations. Example 1 (2×2 matrix) : A = [ 1 2 2 4 ] {\displaystyle A={\begin{bmatrix}1&2\\2&4\end{bmatrix}}} Compute its determinant: det ( A ) = 1 ⋅ 4 − 2 ⋅ 2 = 4 − 4 = 0 {\displaystyle \det(A)=1\cdot 4-2\cdot 2=4-4=0} . Thus A is singular . One sees directly that the second row is twice the first, so the rows are linearly dependent. To illustrate the failure of invertibility, attempt Gaussian elimination: R 2 ← R 2 − 2 R 1 {\displaystyle R_{2}\leftarrow R_{2}-2R_{1}} gives [ 1 2 2 − 2 ( 1 ) 4 − 2 ( 2 ) ] {\displaystyle {\begin{bmatrix}1&2\\2-2(1)&4-2(2)\end{bmatrix}}} = [ 1 2 0 0 ] {\displaystyle {\begin{bmatrix}1&2\\0&0\end{bmatrix}}} Now the second pivot would be the (2,2) entry, but it is zero. Since no nonzero pivot exists in column 2, elimination stops. This confirms rank ( A ) = 1 < 2 {\displaystyle \operatorname {rank} (A)=1<2} and that A has no inverse. [ 12 ] Solving A x = b {\displaystyle Ax=b} yields either infinitely many solutions or none. For example, A x = 0 {\displaystyle Ax=0} gives: x + 2 y = 0 , 2 x + 4 y = 0 {\displaystyle x+2y=0,\ 2x+4y=0} , which are the same equation.
Thus the nullspace is one-dimensional, and A x = b {\displaystyle Ax=b} has either no solution or infinitely many, depending on whether b lies in the column space of A.
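The elimination argument above lends itself to a short computational check. The following is a minimal sketch (not part of the original article) that recomputes the determinant and rank of the 2×2 example in plain Python; the function names are illustrative only:

```python
# Sketch: verifying singularity of the 2x2 example A = [[1, 2], [2, 4]]
# using plain Python -- determinant, rank, and a kernel vector.

def det2(a):
    """Determinant of a 2x2 matrix given as nested lists."""
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

def rank2(a, eps=1e-12):
    """Rank of a 2x2 matrix via one step of Gaussian elimination."""
    rows = [row[:] for row in a]
    if abs(rows[0][0]) < eps:          # try to pivot on a nonzero entry
        rows[0], rows[1] = rows[1], rows[0]
    if abs(rows[0][0]) < eps:          # whole first column is zero
        return 1 if any(abs(x) > eps for x in rows[0] + rows[1]) else 0
    m = rows[1][0] / rows[0][0]        # eliminate below the first pivot
    rows[1] = [rows[1][j] - m * rows[0][j] for j in range(2)]
    return 1 + (1 if abs(rows[1][1]) > eps else 0)

A = [[1, 2], [2, 4]]
print(det2(A))   # 0  -> A is singular
print(rank2(A))  # 1  -> rank deficient, so Ax = 0 has nonzero solutions
# A nonzero kernel vector: x = (2, -1), since 1*2 + 2*(-1) = 0.
print(A[0][0]*2 + A[0][1]*(-1), A[1][0]*2 + A[1][1]*(-1))  # 0 0
```

Running the elimination by hand and by code gives the same conclusion: the second pivot vanishes, so the rank is 1 and the matrix has no inverse.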
https://en.wikipedia.org/wiki/Singular_matrix
In mathematics , two positive (or signed or complex ) measures μ {\displaystyle \mu } and ν {\displaystyle \nu } defined on a measurable space ( Ω , Σ ) {\displaystyle (\Omega ,\Sigma )} are called singular if there exist two disjoint measurable sets A , B ∈ Σ {\displaystyle A,B\in \Sigma } whose union is Ω {\displaystyle \Omega } such that μ {\displaystyle \mu } is zero on all measurable subsets of B {\displaystyle B} while ν {\displaystyle \nu } is zero on all measurable subsets of A . {\displaystyle A.} This is denoted by μ ⊥ ν . {\displaystyle \mu \perp \nu .} A refined form of Lebesgue's decomposition theorem decomposes a singular measure into a singular continuous measure and a discrete measure . See below for examples. As a particular case, a measure defined on the Euclidean space R n {\displaystyle \mathbb {R} ^{n}} is called singular , if it is singular with respect to the Lebesgue measure on this space. For example, the Dirac delta function is a singular measure. Example. A discrete measure . The Heaviside step function on the real line , H ( x ) = d e f { 0 , x < 0 ; 1 , x ≥ 0 ; {\displaystyle H(x)\ {\stackrel {\mathrm {def} }{=}}{\begin{cases}0,&x<0;\\1,&x\geq 0;\end{cases}}} has the Dirac delta distribution δ 0 {\displaystyle \delta _{0}} as its distributional derivative . This is a measure on the real line, a " point mass " at 0. {\displaystyle 0.} However, the Dirac measure δ 0 {\displaystyle \delta _{0}} is not absolutely continuous with respect to Lebesgue measure λ , {\displaystyle \lambda ,} nor is λ {\displaystyle \lambda } absolutely continuous with respect to δ 0 : {\displaystyle \delta _{0}:} λ ( { 0 } ) = 0 {\displaystyle \lambda (\{0\})=0} but δ 0 ( { 0 } ) = 1 ; {\displaystyle \delta _{0}(\{0\})=1;} if U {\displaystyle U} is any non-empty open set not containing 0, then λ ( U ) > 0 {\displaystyle \lambda (U)>0} but δ 0 ( U ) = 0. {\displaystyle \delta _{0}(U)=0.} Example. A singular continuous measure. 
The Cantor distribution has a cumulative distribution function that is continuous but not absolutely continuous , and indeed its absolutely continuous part is zero: it is singular continuous. Example. A singular continuous measure on R 2 . {\displaystyle \mathbb {R} ^{2}.} The upper and lower Fréchet–Hoeffding bounds are singular distributions in two dimensions. This article incorporates material from singular measure on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License .
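The Cantor distribution mentioned above has a computable CDF (the Cantor function). The sketch below approximates it by following the middle-thirds construction; it is an illustration added here, not part of the source article:

```python
def cantor_cdf(x, depth=40):
    """Approximate the Cantor function (CDF of the Cantor distribution)
    on [0, 1] by following the ternary construction to `depth` levels."""
    if x <= 0: return 0.0
    if x >= 1: return 1.0
    value, scale = 0.0, 0.5
    for _ in range(depth):
        if x < 1/3:            # left third: rescale and recurse
            x = 3 * x
        elif x > 2/3:          # right third: left half of mass lies below x
            value += scale
            x = 3 * x - 2
        else:                  # middle third: the CDF is flat here
            return value + scale
        scale /= 2
    return value

# Continuous and increasing from 0 to 1, yet flat on every removed third:
print(cantor_cdf(0.0), cantor_cdf(0.5), cantor_cdf(1.0))  # 0.0 0.5 1.0
```

The function is constant on each removed middle third, which is why the associated measure assigns all its mass to the Lebesgue-null Cantor set: a singular continuous measure.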
https://en.wikipedia.org/wiki/Singular_measure
A singular solution y s ( x ) of an ordinary differential equation is a solution that is singular or one for which the initial value problem (also called the Cauchy problem by some authors) fails to have a unique solution at some point on the solution. The set on which a solution is singular may be as small as a single point or as large as the full real line. Solutions which are singular in the sense that the initial value problem fails to have a unique solution need not be singular functions . In some cases, the term singular solution is used to mean a solution at which there is a failure of uniqueness to the initial value problem at every point on the curve. A singular solution in this stronger sense is often given as tangent to every solution from a family of solutions. By tangent we mean that there is a point x where y s ( x ) = y c ( x ) and y' s ( x ) = y' c ( x ) where y c is a solution in a family of solutions parameterized by c . This means that the singular solution is the envelope of the family of solutions. Usually, singular solutions appear in differential equations when there is a need to divide by a term that might be equal to zero . Therefore, when one is solving a differential equation and using division, one must check what happens if the term is equal to zero, and whether it leads to a singular solution. The Picard–Lindelöf theorem , which gives sufficient conditions for unique solutions to exist, can be used to rule out the existence of singular solutions. Other theorems, such as the Peano existence theorem , give sufficient conditions for solutions to exist without necessarily being unique, which can allow for the existence of singular solutions. Consider the homogeneous linear ordinary differential equation x y ′ ( x ) + 2 y ( x ) = 0 , {\displaystyle xy'(x)+2y(x)=0,} where primes denote derivatives with respect to x . The general solution to this equation is y ( x ) = C x − 2 . {\displaystyle y(x)=Cx^{-2}.} For a given C {\displaystyle C} , this solution is smooth except at x = 0 {\displaystyle x=0} where the solution is divergent. 
Furthermore, for a given x ≠ 0 {\displaystyle x\not =0} , this is the unique solution going through ( x , y ( x ) ) {\displaystyle (x,y(x))} . Consider the differential equation ( y ′ ( x ) ) 2 = 4 y ( x ) . {\displaystyle (y'(x))^{2}=4y(x).} A one-parameter family of solutions to this equation is given by y c ( x ) = ( x − c ) 2 . {\displaystyle y_{c}(x)=(x-c)^{2}.} Another solution is given by y s ( x ) = 0. {\displaystyle y_{s}(x)=0.} Since the equation being studied is a first-order equation, the initial conditions are the initial x and y values. By considering the two sets of solutions above, one can see that the solution fails to be unique when y = 0 {\displaystyle y=0} . (It can be shown that for y > 0 {\displaystyle y>0} if a single branch of the square root is chosen, then there is a local solution which is unique using the Picard–Lindelöf theorem .) Thus, the solutions above are all singular solutions, in the sense that solution fails to be unique in a neighbourhood of one or more points. (Commonly, we say "uniqueness fails" at these points.) For the first set of solutions, uniqueness fails at one point, x = c {\displaystyle x=c} , and for the second solution, uniqueness fails at every value of x {\displaystyle x} . Thus, the solution y s {\displaystyle y_{s}} is a singular solution in the stronger sense that uniqueness fails at every value of x . However, it is not a singular function since it and all its derivatives are continuous. In this example, the solution y s ( x ) = 0 {\displaystyle y_{s}(x)=0} is the envelope of the family of solutions y c ( x ) = ( x − c ) 2 {\displaystyle y_{c}(x)=(x-c)^{2}} . The solution y s {\displaystyle y_{s}} is tangent to every curve y c ( x ) {\displaystyle y_{c}(x)} at the point ( c , 0 ) {\displaystyle (c,0)} . The failure of uniqueness can be used to construct more solutions. 
These can be found by taking two constants c 1 < c 2 {\displaystyle c_{1}<c_{2}} and defining a solution y ( x ) {\displaystyle y(x)} to be ( x − c 1 ) 2 {\displaystyle (x-c_{1})^{2}} when x < c 1 {\displaystyle x<c_{1}} , to be 0 {\displaystyle 0} when c 1 ≤ x ≤ c 2 {\displaystyle c_{1}\leq x\leq c_{2}} , and to be ( x − c 2 ) 2 {\displaystyle (x-c_{2})^{2}} when x > c 2 {\displaystyle x>c_{2}} . Direct calculation shows that this is a solution of the differential equation at every point, including x = c 1 {\displaystyle x=c_{1}} and x = c 2 {\displaystyle x=c_{2}} . Uniqueness fails for these solutions on the interval c 1 ≤ x ≤ c 2 {\displaystyle c_{1}\leq x\leq c_{2}} , and the solutions are singular, in the sense that the second derivative fails to exist, at x = c 1 {\displaystyle x=c_{1}} and x = c 2 {\displaystyle x=c_{2}} . The previous example might give the erroneous impression that failure of uniqueness is directly related to y ( x ) = 0 {\displaystyle y(x)=0} . Failure of uniqueness can also be seen in the following example of a Clairaut's equation : y ( x ) = x ⋅ y ′ + ( y ′ ) 2 . {\displaystyle y(x)=x\cdot y'+(y')^{2}.} We write y ′ = p and then y ( x ) = x ⋅ p + p 2 . {\displaystyle y(x)=x\cdot p+p^{2}.} Now, we shall take the differential according to x : p = y ′ = p + ( x + 2 p ) p ′ , {\displaystyle p=y'=p+(x+2p)p',} which by simple algebra yields 0 = ( 2 p + x ) p ′ . {\displaystyle 0=(2p+x)p'.} This condition is solved if 2 p + x = 0 or if p ′ = 0. If p ′ = 0 it means that y ′ = p = c = constant, and the general solution of this new equation is y c ( x ) = c x + c 2 , {\displaystyle y_{c}(x)=cx+c^{2},} where c is determined by the initial value. If x + 2 p = 0 then we get that p = −½ x and substituting in the ODE gives y s ( x ) = − 1 4 x 2 . {\displaystyle y_{s}(x)=-{\tfrac {1}{4}}x^{2}.} Now we shall check when these solutions are singular solutions. If two solutions intersect each other, that is, they both go through the same point ( x , y ), then there is a failure of uniqueness for a first-order ordinary differential equation. Thus, there will be a failure of uniqueness if a solution of the first form intersects the second solution. The condition of intersection is y s ( x ) = y c ( x ). We solve c x + c 2 = − 1 4 x 2 {\displaystyle cx+c^{2}=-{\tfrac {1}{4}}x^{2}} to find the intersection point, which is ( − 2 c , − c 2 ) {\displaystyle (-2c,-c^{2})} . 
We can verify that the curves are tangent at this point, y' s ( x ) = y' c ( x ). We calculate the derivatives: y c ′ ( − 2 c ) = c , y s ′ ( − 2 c ) = − 1 2 ( − 2 c ) = c . {\displaystyle y_{c}'(-2c)=c,\quad y_{s}'(-2c)=-{\tfrac {1}{2}}(-2c)=c.} Hence, y s ( x ) = − 1 4 x 2 {\displaystyle y_{s}(x)=-{\tfrac {1}{4}}x^{2}} is tangent to every member of the one-parameter family of solutions y c ( x ) = c x + c 2 {\displaystyle y_{c}(x)=cx+c^{2}} of this Clairaut equation y ( x ) = x ⋅ y ′ + ( y ′ ) 2 . {\displaystyle y(x)=x\cdot y'+(y')^{2}.}
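The envelope relationship in the earlier example can also be checked numerically. The sketch below assumes the stated family y_c(x) = (x − c)² and singular solution y_s = 0 satisfy an ODE of the form (y′)² = 4y (this form is inferred from the stated solutions, not quoted from the article), and verifies both the ODE residual and the tangency at (c, 0):

```python
# Sketch: numerically checking the singular-solution example. Both the
# family y_c(x) = (x - c)^2 and y_s(x) = 0 satisfy (y')^2 = 4*y, and
# they are tangent (equal value and slope) at the point (c, 0).

def residual(y, dy):
    """Residual of (y')^2 = 4*y; zero when the ODE is satisfied."""
    return dy * dy - 4 * y

def y_c(x, c): return (x - c) ** 2     # family member
def dy_c(x, c): return 2 * (x - c)     # its derivative

# Both solutions satisfy the ODE at sample points:
for x in [-1.0, 0.0, 2.5]:
    assert residual(y_c(x, 1.0), dy_c(x, 1.0)) == 0   # family, c = 1
    assert residual(0.0, 0.0) == 0                     # y_s = 0

# Tangency at (c, 0): value and first derivative both match y_s there.
c = 1.0
print(y_c(c, c), dy_c(c, c))  # 0.0 0.0  -> same as y_s and y_s'
```

Because (2(x−c))² equals 4(x−c)² exactly in floating point, the residual vanishes identically here, not just approximately.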
https://en.wikipedia.org/wiki/Singular_solution
In mathematics , a singularity is a point at which a given mathematical object is not defined, or a point where the mathematical object ceases to be well-behaved in some particular way, such as by lacking differentiability or analyticity . [ 1 ] [ 2 ] [ 3 ] For example, the reciprocal function f ( x ) = 1 / x {\displaystyle f(x)=1/x} has a singularity at x = 0 {\displaystyle x=0} , where the value of the function is not defined, since it involves a division by zero . The absolute value function g ( x ) = | x | {\displaystyle g(x)=|x|} also has a singularity at x = 0 {\displaystyle x=0} , since it is not differentiable there. [ 4 ] The algebraic curve defined by { ( x , y ) : y 3 − x 2 = 0 } {\displaystyle \left\{(x,y):y^{3}-x^{2}=0\right\}} in the ( x , y ) {\displaystyle (x,y)} coordinate system has a singularity (called a cusp ) at ( 0 , 0 ) {\displaystyle (0,0)} . For singularities in algebraic geometry , see singular point of an algebraic variety . For singularities in differential geometry , see singularity theory . In real analysis , singularities are either discontinuities , or discontinuities of the derivative (sometimes also discontinuities of higher order derivatives). There are four kinds of discontinuities: type I , which has two subtypes, and type II , which can also be divided into two subtypes (though usually it is not). 
To describe the way these two types of limits are being used, suppose that f ( x ) {\displaystyle f(x)} is a function of a real argument x {\displaystyle x} , and for any value of its argument, say c {\displaystyle c} , the left-handed limit , f ( c − ) {\displaystyle f(c^{-})} , and the right-handed limit , f ( c + ) {\displaystyle f(c^{+})} , are defined by: f ( c − ) = lim x → c − f ( x ) , f ( c + ) = lim x → c + f ( x ) . {\displaystyle f(c^{-})=\lim _{x\to c^{-}}f(x),\quad f(c^{+})=\lim _{x\to c^{+}}f(x).} The value f ( c − ) {\displaystyle f(c^{-})} is the value that the function f ( x ) {\displaystyle f(x)} tends towards as x {\displaystyle x} approaches c {\displaystyle c} from below , and the value f ( c + ) {\displaystyle f(c^{+})} is the value that f ( x ) {\displaystyle f(x)} tends towards as x {\displaystyle x} approaches c {\displaystyle c} from above , regardless of the actual value the function has at the point where x = c {\displaystyle x=c} . There are some functions for which these limits do not exist at all. For example, the function g ( x ) = sin ⁡ ( 1 / x ) {\displaystyle g(x)=\sin(1/x)} does not tend towards anything as x {\displaystyle x} approaches c = 0 {\displaystyle c=0} . The limits in this case are not infinite, but rather undefined : there is no value that g ( x ) {\displaystyle g(x)} settles in on. Borrowing from complex analysis, this is sometimes called an essential singularity . The possible cases at a given value c {\displaystyle c} for the argument are as follows. In real analysis, a singularity or discontinuity is a property of a function alone. Any singularities that may exist in the derivative of a function are considered as belonging to the derivative, not to the original function. A coordinate singularity occurs when an apparent singularity or discontinuity occurs in one coordinate frame, which can be removed by choosing a different frame. An example of this is the apparent singularity at the 90 degree latitude in spherical coordinates . 
An object moving due north (for example, along the line 0 degrees longitude) on the surface of a sphere will suddenly experience an instantaneous change in longitude at the pole (in the case of the example, jumping from longitude 0 to longitude 180 degrees). This discontinuity, however, is only apparent; it is an artifact of the coordinate system chosen, which is singular at the poles. A different coordinate system would eliminate the apparent discontinuity (e.g., by replacing the latitude/longitude representation with an n -vector representation). In complex analysis , there are several classes of singularities. These include the isolated singularities, the nonisolated singularities, and the branch points. Suppose that f {\displaystyle f} is a function that is complex differentiable in the complement of a point a {\displaystyle a} in an open subset U {\displaystyle U} of the complex numbers C . {\displaystyle \mathbb {C} .} Then: the point a {\displaystyle a} is a removable singularity of f {\displaystyle f} if f {\displaystyle f} extends to a function holomorphic on all of U {\displaystyle U} ; it is a pole if | f ( z ) | → ∞ {\displaystyle |f(z)|\to \infty } as z → a {\displaystyle z\to a} ; and it is an essential singularity otherwise. Other than isolated singularities, complex functions of one variable may exhibit other singular behaviour. These are termed nonisolated singularities, of which there are two types: cluster points , which are limit points of isolated singularities, and natural boundaries , which are nonisolated sets (for example, a curve) across which the function cannot be analytically continued . Branch points are generally the result of a multi-valued function , such as z {\displaystyle {\sqrt {z}}} or log ⁡ ( z ) , {\displaystyle \log(z),} which are defined within a certain limited domain so that the function can be made single-valued within the domain. The cut is a line or curve excluded from the domain to introduce a technical separation between discontinuous values of the function. When the cut is genuinely required, the function will have distinctly different values on each side of the branch cut. The shape of the branch cut is a matter of choice, even though it must connect two different branch points (such as z = 0 {\displaystyle z=0} and z = ∞ {\displaystyle z=\infty } for log ⁡ ( z ) {\displaystyle \log(z)} ) which are fixed in place. 
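The one-sided limits f(c−) and f(c+) discussed earlier can be probed numerically. A minimal sketch (illustrative only, not from the article): sample a function ever closer to c from one side and watch whether the values settle.

```python
import math

def one_sided(f, c, side, steps=8):
    """Crude numeric probe of a one-sided limit: evaluate f at points
    approaching c from the given side and return the samples."""
    sign = -1 if side == "left" else 1
    return [f(c + sign * 10.0 ** (-k)) for k in range(1, steps + 1)]

# A jump (type I) discontinuity: each side settles, to different values.
step = lambda x: 0.0 if x < 0 else 1.0
print(one_sided(step, 0.0, "left")[-1])   # 0.0  (f(0-) = 0)
print(one_sided(step, 0.0, "right")[-1])  # 1.0  (f(0+) = 1)

# An essential-type oscillation: samples of sin(1/x) never settle at all,
# so neither one-sided limit exists (finite or infinite).
osc = lambda x: math.sin(1.0 / x)
print(one_sided(osc, 0.0, "right")[:3])
```

For the oscillating example the printed samples jump around inside [−1, 1] with no trend, which is exactly the "no value it settles in on" behavior described above.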
A finite-time singularity occurs when one input variable is time, and an output variable increases towards infinity at a finite time. These are important in kinematics and partial differential equations – infinities do not occur physically, but the behavior near the singularity is often of interest. Mathematically, the simplest finite-time singularities are power laws for various exponents of the form x − α , {\displaystyle x^{-\alpha },} of which the simplest is hyperbolic growth , where the exponent is (negative) 1: x − 1 . {\displaystyle x^{-1}.} More precisely, in order to get a singularity at positive time as time advances (so the output grows to infinity), one instead uses ( t 0 − t ) − α {\displaystyle (t_{0}-t)^{-\alpha }} (using t for time, reversing direction to − t {\displaystyle -t} so that time increases to infinity, and shifting the singularity forward from 0 to a fixed time t 0 {\displaystyle t_{0}} ). An example would be the bouncing motion of an inelastic ball on a plane. If idealized motion is considered, in which the same fraction of kinetic energy is lost on each bounce, the frequency of bounces becomes infinite, as the ball comes to rest in a finite time. Other examples of finite-time singularities include the various forms of the Painlevé paradox (for example, the tendency of a piece of chalk to skip when dragged across a blackboard), and how the precession rate of a coin spun on a flat surface accelerates towards infinity before abruptly stopping (as studied using the Euler's Disk toy). Hypothetical examples include Heinz von Foerster 's facetious " Doomsday's equation " (simplistic models yield infinite human population in finite time). In algebraic geometry , a singularity of an algebraic variety is a point of the variety where the tangent space may not be regularly defined. The simplest example of singularities are curves that cross themselves. But there are other types of singularities, like cusps . 
For example, the equation y 2 − x 3 = 0 defines a curve that has a cusp at the origin x = y = 0 . One could define the x -axis as a tangent at this point, but this definition cannot be the same as the definition at other points. In fact, in this case, the x -axis is a "double tangent." For affine and projective varieties , the singularities are the points where the Jacobian matrix has a rank which is lower than at other points of the variety. An equivalent definition in terms of commutative algebra may be given, which extends to abstract varieties and schemes : a point is singular if the local ring at this point is not a regular local ring .
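The rank-drop criterion can be illustrated on the cusp above. A small sketch (not from the article) evaluates the gradient of F(x, y) = y² − x³, which is the one-row Jacobian of the curve's defining equation, along the curve:

```python
# Sketch: detecting the cusp of y^2 - x^3 = 0 by checking where the
# gradient (1-row Jacobian) of F(x, y) = y^2 - x^3 vanishes on the curve.

def F(x, y): return y * y - x ** 3
def grad_F(x, y): return (-3 * x * x, 2 * y)

# Points on the curve: (t^2, t^3) parameterizes y^2 = x^3.
for t in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    x, y = t * t, t ** 3
    gx, gy = grad_F(x, y)
    singular = (gx == 0 and gy == 0)   # Jacobian rank drops from 1 to 0
    print(f"t={t:+.1f}  F={F(x, y):.1f}  grad=({gx:.2f}, {gy:.2f})  "
          f"{'SINGULAR' if singular else 'smooth'}")
# Only t = 0 (the origin) is flagged: that is the cusp.
```

At every other point on the curve the gradient is nonzero, so the Jacobian has full rank 1 and the curve is smooth there.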
https://en.wikipedia.org/wiki/Singularity_(mathematics)
In the study of unstable systems, James Clerk Maxwell in 1873 was the first to use the term singularity in its most general sense: that in which it refers to contexts in which arbitrarily small changes, commonly unpredictably, may lead to arbitrarily large effects. In this sense, Maxwell did not differentiate between dynamical systems and social systems . He used the concept of singularities primarily as an argument against determinism or absolute causality . He did not in his day deny that the same initial conditions would always achieve the same results, but pointed out that such a statement is of little value in a world in which the same initial conditions are never repeated. In the late pre-quantum-theoretic philosophy of science, this was a significant recognition of the principle of underdetermination. [ 1 ] The attributes of singularities include the following in various degrees, according to context: Henri Poincaré developed Maxwell's ideas on singularities in dynamic systems . Poincaré distinguished four simple types of singular points of differential equations : saddle points , nodes , foci , and centers . In recent times, chaos theory has attracted a great deal of work, but deterministic chaos is just a special case of a singularity in which a small cause produces a large observable effect as a result of nonlinear dynamic behavior . In contrast, the singularities raised by Maxwell, such as a loose rock at a singular point on a slope, show a linear dynamic behavior, as Poincaré demonstrated. Singularities are a common staple of chaos theory, catastrophe theory , and bifurcation theory . [ 4 ] In social systems, deterministic chaos is infrequent, because the elements of the system include individuals whose values, awareness, will, foresight, and fallibility affect the dynamic behavior of the system. [ 5 ] However, this does not completely exclude any notional possibility of deterministic chaos in social systems. 
In fact, some authorities argue that nonlinear dynamics and instabilities are increasingly evident in the development of social systems. [ 6 ] In the colloquial sense of disorder or confusion, however, chaos certainly occurs in social systems. It often is the basis for singularities, where cause-and-effect relationships are ill-defined at best. Many examples of singularities in social systems arise from the work of Maxwell and Poincaré. Maxwell remarked that a word can start a war and that all the great discoveries of humanity emerged from singular states. Poincaré gives the example of a roofer who drops a brick and randomly kills a passing man, but there is no clear limit to how small an event could cause an indefinitely large divergence in history; a single decay event of an unstable isotope could change the history of the world within a generation. The currently dominant theory of the origin of our universe postulates a physical singularity (specifically the Big Bang ). It is suggested to have dispersed plasma uniformly throughout space, and to have cooled by increasing expansion, till atoms formed; subsequently, very small (singular) fluctuations in the uniform density created self-reinforcing inhomogeneities . These subsequently grew into stars, galaxies, and other systems, from which life forms eventually emerged, a process that still is under way. Even if the singularity of the Big Bang can be omitted from the mathematical models, many other singularities remain ubiquitous in the history of the universe. Biological evolutionary history shows that not only mutations that give rise to microevolution can amount to singularities, but that macroevolutionary events that affect the entire course of the history of the biosphere also amount to singularities. [ 7 ] [ 8 ] Recently, Ward and Kirschvink have argued that the history of life has been more influenced by disasters that generated singularities, than by continuous evolution. 
[ 9 ] Disastrous singularities create niches for biological innovations, which in turn give rise to productive singularities. [ 10 ] Concepts of singularity and of complexity are closely related. Maxwell pointed out that the more singular points a system has, the more complex it is likely to be. Complexity in turn is the basis of perceived chaos and singularities. This commonly makes it difficult, or even meaningless, to identify a seemingly insignificant event that produces a great effect, even in a simple context; in a complex situation with many elements and relationships it commonly is impossible. Complexity may amount to a breeding ground for singularities, and this has emerged in the downfall of many, perhaps all, ancient cultures and modern countries. Individual causes such as intruders, internal conflicts or natural disasters commonly do not suffice to destroy a culture. More often an increasing complexity of interdependent factors has rendered a community vulnerable to the loss of a few infrastructural necessities, leading to successive collapse in a domino effect . [ 11 ] The 2007–2008 financial crisis illustrated such effects. Accordingly, the complexity of financial systems is a major challenge for financial markets and institutions to deal with. [ 12 ] Notionally one solution would be to reduce complexity and increase the potential for adaptation and robustness. In a complex world with increasing singularities, some people assert that it is therefore necessary to abandon optimization potential to gain adaptability to external shocks and disasters. However, no one has yet demonstrated how to implement such a solution. [ 13 ]
https://en.wikipedia.org/wiki/Singularity_(systems_theory)
Singularity Hypotheses: A Scientific and Philosophical Assessment is a 2013 book written by Amnon H. Eden, James H. Moor , Johnny H. Soraker, and Eric Steinhart. It focuses on conjectures about the intelligence explosion , transhumanism , and whole brain emulation . The book features essays and commentary on the technological singularity . One version of the hypothesis explored in the book states that humans will one day make an intelligent agent that will enter runaway self-improvement cycles, with each new version appearing more rapidly, causing an intelligence explosion and resulting in an intelligence that will far surpass us. [ 1 ] The book's contributing authors include computational biologist Dennis Bray , artificial intelligence researcher Ben Goertzel , neuroscientist Randal A. Koene , philosophers Diane Proudfoot and David Pearce , among others. [ 1 ]
https://en.wikipedia.org/wiki/Singularity_Hypotheses:_A_Scientific_and_Philosophical_Assessment
Singulation is a method by which an RFID reader identifies a tag with a specific serial number from a number of tags in its field. This is necessary because if multiple tags respond simultaneously to a query, they will jam each other. In a typical commercial application, such as scanning a bag of groceries, potentially hundreds of tags might be within range of the reader. When all the tags cooperate with the tag reader and follow the same anti-collision protocol , also called a singulation protocol , [ 1 ] [ 2 ] [ 3 ] the tag reader can read data from each and every tag without interference from the other tags. Generally, a collision occurs when two entities require the same resource; for example, two ships with crossing courses in a narrows. In wireless technology, a collision occurs when two transmitters transmit at the same time with the same modulation scheme on the same frequency. In RFID technology, various strategies have been developed to overcome this situation. There are different methods of singulation, but the most common is tree walking , which involves asking all tags with a serial number that starts with, for instance, a 0 to respond. If more than one responds, the reader might ask for all tags with a serial number that starts with 01 to respond, and then 010. It keeps refining the qualifier until only one tag responds. Note that if the reader has some idea of which tags it wishes to interrogate, it can considerably optimise the search order. For example, with some tag designs, if a reader already suspects certain tags to be present, those tags can be instructed to remain silent, and tree walking can then proceed without interference from them. [ citation needed ] This simple protocol leaks considerable information, because anyone able to eavesdrop on the tag reader alone may be able to determine all but the last bit of a tag's serial number. 
Thus a tag can be (largely) identified so long as the reader's signal is receivable, which is usually possible at much greater distance than simply reading a tag directly. Because of privacy and security concerns related to this, the Auto-ID Labs have developed two more advanced singulation protocols, called Class 0 UHF and Class 1 UHF, which are intended to be resistant to these sorts of attacks. [ citation needed ] These protocols, which are based on tree-walking but include other elements, have a performance of up to 1000 tags per second. The tree walking protocol may be blocked or partially blocked by RSA Security 's blocker tags . The first offered singulation protocol is the ALOHA protocol , originally used decades ago in ALOHAnet and very similar to CSMA/CD used by Ethernet . These protocols are mainly used in HF tags. In ALOHA , tags detect when a collision has occurred, and attempt to resend after waiting a random interval. The performance of such collide-and-resend protocols is approximately doubled if transmissions are synchronised to particular time-slots, and in this application time-slots for the tags are readily provided for by the reader. ALOHA does not leak information like the tree-walking protocol, and is much less vulnerable to blocker tags, which would need to be active devices with much higher power handling capabilities in order to work. However when the reader field is densely populated, ALOHA may make much less efficient use of available bandwidth than optimised versions of tree-walking. In the worst case, an ALOHA protocol network can reach a state of congestion collapse . The Auto-ID consortium is attempting to standardise a version of an ALOHA protocol which it calls Class 0 HF. This has a performance of up to 200 tags per second. Slotted Aloha is another variety offering better properties than the initial concept. It is implemented in most of the modern bulk detection systems, especially in the clothing industry. 
This concept is familiar from polite conversation: listen before speaking. In wireless communication it is also named listen before send . With RFID it is applied to contention among readers ( CSMA ) as well as contention among tags.
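The tree-walking protocol described above is easy to simulate. The following toy sketch uses made-up 4-bit tag IDs (assumed distinct, with radio timing and collision detection abstracted away): the reader refines a binary prefix until exactly one tag matches, then records that tag as singulated.

```python
# Toy sketch of tree-walking singulation: the "reader" broadcasts a bit
# prefix; a collision (more than one matching tag) makes it refine the
# prefix with 0, then 1, depth-first, until single tags respond.

def tree_walk(tags):
    """Return tag IDs in the order a prefix-refining reader finds them."""
    found = []

    def walk(prefix):
        matching = [t for t in tags if t.startswith(prefix)]
        if not matching:
            return                    # no responder: prune this branch
        if len(matching) == 1:
            found.append(matching[0])  # exactly one responder: singulated
            return
        walk(prefix + "0")            # collision: refine the prefix
        walk(prefix + "1")

    walk("")
    return found

tags = ["0110", "0111", "1010"]
print(tree_walk(tags))   # ['0110', '0111', '1010']
```

Note how the reader's queries reveal the explored prefixes: an eavesdropper who hears only the reader side learns almost the entire ID of each tag, which is the information leak discussed above.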
https://en.wikipedia.org/wiki/Singulation
Sinistral and dextral , in some scientific fields, are the two types of chirality (" handedness ") or relative direction . The terms are derived from the Latin words for "left" ( sinister ) and "right" ( dexter ). Other disciplines use different terms (such as dextro- and laevo-rotary in chemistry , or clockwise and anticlockwise in physics ) or simply use left and right (as in anatomy ). Relative direction and chirality are distinct concepts. Relative direction is from the point of view of the observer; a completely symmetric object has a left side and a right side, from the observer's point of view, if the top and bottom and direction of observation are defined. Chirality, however, is observer-independent: no matter how one looks at a right-hand screw thread , it remains different from a left-hand screw thread. Therefore, a symmetric object has sinistral and dextral directions arbitrarily defined by the position of the observer, while an asymmetric object that shows chirality may have sinistral and dextral directions defined by characteristics of the object, regardless of the position of the observer. Because the coiled shells of gastropods are asymmetric, they possess a quality called chirality –the "handedness" of an asymmetric structure. Over 90% [ 1 ] of gastropod species have shells in which the direction of the coil is dextral (right-handed). A small minority of species and genera have shells in which the coils are almost always sinistral (left-handed). Very few species show an even mixture of dextral and sinistral individuals (for example, Amphidromus perversus ). [ 2 ] The most obvious characteristic of flatfish , other than their flatness, is their asymmetric morphology: both eyes are on the same side of the head in the adult fish. In some families of flatfish, the eyes are always on the right side of the body ( dextral or right-eyed flatfish), and in others, they are always on the left ( sinistral or left-eyed flatfish). 
Primitive spiny turbots include equal numbers of right- and left-sided individuals, and are generally more symmetric than other families. [ 3 ] In geology , the terms sinistral and dextral refer to the horizontal component of the movement of blocks on either side of a fault or the sense of movement within a shear zone . These are terms of relative direction, as the movement of the blocks is described relative to each other when viewed from above. Movement is sinistral (left-handed) if the block on the other side of the fault moves to the left, or if straddling the fault the left side moves toward the observer. Movement is dextral (right-handed) if the block on the other side of the fault moves to the right, or if straddling the fault the right side moves toward the observer. [ 4 ]
https://en.wikipedia.org/wiki/Sinistral_and_dextral
In computing , a sink , or data sink , generally refers to the destination of a data flow. The word sink has multiple uses in computing. In software engineering , an event sink is a class or function that receives events from another object or function, while a sink can also refer to a node of a directed acyclic graph with no additional nodes leading out from it, among other uses. An event sink is a class or function designed to receive incoming events from another object or function. This is commonly implemented in C++ as callbacks . Other object-oriented languages , such as Java and C# , have built-in support for sinks by allowing events to be fired to delegate functions. Due to the lack of a formal definition, a sink is often confused with a gateway, which is a similar construct, but the latter is usually either an end-point or allows bi-directional communication between dissimilar systems, as opposed to just an event input point [ citation needed ] . This is often seen in C++ and hardware-related programming [ citation needed ] , thus the choice of nomenclature by a developer usually depends on whether the agent acting on a sink is a producer or consumer of the sink content. In a directed acyclic graph , a source node is a node (also known as a vertex ) with no incoming connections from other nodes, while a sink node is a node without outgoing connections. [ 1 ] Directed acyclic graphs are used in instruction scheduling , neural networks and data compression . In several computer programs employing streams, such as GStreamer , PulseAudio , or PipeWire , a sink is the ending point of a pipeline which consumes a stream of data, while a source is the starting point which emits a stream of data (often after performing some processing function on the data). [ 2 ] An example is an audio pipeline in the PulseAudio sound system. An input device such as a microphone is an audio device which will send data to a sink for consumption. 
The audio signal will then be made available as an audio source, which may have undergone audio processing, such as volume adjustment. Typically, it will also pass through other stages, such as audio mixing. In this way the volume adjustment processing receives audio samples via its sink and emits them from its source, which is then connected to a mixer sink, which mixes audio, ultimately emitting the processed audio from its source (referred to as an output source in PulseAudio). The configuration and connection of these pipelines can be complex and dynamic. [ 3 ] The terms sink and source may be confusing, but they specifically refer to the point of entry (source) and exit (sink) in systems. The terminology is exactly analogous to that used in other domains, such as electrical engineering. [ 4 ] The word sink has been used for both input and output in the industry. [ citation needed ] Mobile sinks have been proposed to save sensor energy in multihop communication when transferring data to a base station (sink) in wireless sensor networks .
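The graph terminology above (source = no incoming edges, sink = no outgoing edges) can be made concrete with a short sketch; the node names below are hypothetical, loosely echoing the audio-pipeline example:

```python
# Sketch: identifying source nodes (no incoming edges) and sink nodes
# (no outgoing edges) of a directed acyclic graph given as an edge list.

def sources_and_sinks(nodes, edges):
    has_in = {v for (_, v) in edges}    # nodes with at least one in-edge
    has_out = {u for (u, _) in edges}   # nodes with at least one out-edge
    sources = [n for n in nodes if n not in has_in]
    sinks = [n for n in nodes if n not in has_out]
    return sources, sinks

# A tiny pipeline-shaped DAG: mic -> volume -> mixer -> speaker
nodes = ["mic", "volume", "mixer", "speaker"]
edges = [("mic", "volume"), ("volume", "mixer"), ("mixer", "speaker")]
print(sources_and_sinks(nodes, edges))  # (['mic'], ['speaker'])
```

Every finite DAG has at least one source and at least one sink; an isolated node counts as both, since it has neither incoming nor outgoing edges.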
https://en.wikipedia.org/wiki/Sink_(computing)
In pharmaceutics, sink condition is a term mostly related to the dissolution testing procedure. It means using a volume of solvent usually about 5 to 10 times greater than the volume required to form a saturated solution of the targeted chemical (often the API, and sometimes the excipients) contained in the dosage form being tested. [1] During dissolution testing, the sink condition is a mandatory requirement: otherwise, once the concentration gets too close to the saturation point, the dissolution rate gradually decreases, even though the total soluble amount remains constant, by amounts significant enough to corrupt the test results. [2][3][4]
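The 5-to-10-fold criterion above can be expressed as a simple check. The sketch below is a minimal illustration, assuming a hypothetical helper name; it computes the medium volume at which the full dose would saturate and compares it to the vessel volume:

```python
def is_sink_condition(dose_mg, solubility_mg_per_ml, medium_ml, factor=5.0):
    """Check the sink condition for a dissolution test.

    The medium volume must be at least `factor` times the volume the full
    dose would need to reach saturation (factor of 5-10 per the text; the
    function name and exact form are illustrative assumptions).
    """
    saturation_volume_ml = dose_mg / solubility_mg_per_ml
    return medium_ml >= factor * saturation_volume_ml
```

For example, a 25 mg dose of a drug with solubility 1 mg/mL saturates 25 mL of medium, so a standard 900 mL vessel easily satisfies the condition; a poorly soluble 200 mg dose at 0.05 mg/mL would saturate 4,000 mL and fails it.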
https://en.wikipedia.org/wiki/Sink_condition
In plumbing, a sink strainer is a type of perforated metal sieve or mesh strainer used to strain or filter out solid debris in the water system. Different varieties are used in residential premises and for industrial or commercial applications. Such strainer elements are generally made from stainless steel for corrosion resistance. In houses, sink strainers are often used as drain covers in sinks, showers and bathtubs. Water lines or kitchen systems can pick up gravel, deposits that break free, and other stray items in the line. Driven by the velocity of the water, these items can severely damage or clog devices installed in the flow stream of the water line, for example p-traps or pipes. A strainer is essentially a screen installed to allow water to pass through, but not larger items. The larger items fall to the bottom or are held in a basket for later clean-out. Strainers normally have an access point that allows them to be cleaned or have the strainer plate or basket replaced. Strainers come in several different styles depending on need. A plate strainer is the simplest, in which water flows through a perforated plate; often the plate is corrugated in shape to increase surface area. A basket strainer is a design in which the strainer is shaped like a basket and usually installed in a vertical system. The basket strainer is easier to clean, since debris is captured in the basket. It can also sometimes offer more straining surface area than a plate strainer, improving flow rates or decreasing pressure loss through the strainer. They can be made of stainless steel grades such as AISI 304 or 202.
https://en.wikipedia.org/wiki/Sink_strainer
Sink test is a form of medical laboratory diagnostics healthcare fraud whereby clinical specimens are discarded, via a sink drain, and fabricated results are reported without the clinical specimen actually being tested. [1] In the United States, the prevalence of sink test laboratories in the 1980s led in part to regulation following the passage of the Clinical Laboratory Improvement Amendments in 1988. [2] While this illegal practice still occurs, it is rare within the highly regulated US lab market.
https://en.wikipedia.org/wiki/Sink_test
Sinkclose is a security vulnerability in certain AMD microprocessors dating back to 2006 that was made public by IOActive security researchers on August 9, 2024. [ 1 ] IOActive researchers Enrique Nissim and Krzysztof Okupski presented their findings at the 2024 DEF CON security conference in Las Vegas [ 2 ] in a talk titled "AMD Sinkclose: Universal Ring-2 Privilege Escalation". AMD said it would patch all affected Zen-based Ryzen , Epyc and Threadripper processors but initially omitted Ryzen 3000 desktop processors. AMD followed up and said the patch would be available for them as well. [ 3 ] AMD said the patches would be released on August 20, 2024. Sinkclose affects the System Management Mode (SMM) of AMD processors. It can only be exploited by first compromising the operating system kernel . [ 1 ] [ 2 ] Once the exploit is effected, it is possible to avoid detection by antivirus software and even compromise a system after the operating system has been re-installed.
https://en.wikipedia.org/wiki/Sinkclose
A sinkhole is a depression or hole in the ground caused by some form of collapse of the surface layer. The term is sometimes used to refer to dolines, enclosed depressions that are also known as shakeholes, and to openings where surface water enters into underground passages, known as ponors, swallow holes or swallets. [1][2][3][4] A cenote is a type of sinkhole that exposes groundwater underneath. [4] Sink and stream sink are more general terms for sites that drain surface water, possibly by infiltration into sediment or crumbled rock. [2] Most sinkholes are caused by karst processes – the chemical dissolution of carbonate rocks – or by collapse or suffosion processes. [1][5] Sinkholes are usually circular and vary in size from tens to hundreds of meters in both diameter and depth, and vary in form from soil-lined bowls to bedrock-edged chasms. Sinkholes may form gradually or suddenly, and are found worldwide. [2][1] Sinkholes may capture surface drainage from running or standing water, but may also form in high and dry places in specific locations. Sinkholes that capture drainage can hold it in large limestone caves. These caves may drain into tributaries of larger rivers. [6][7] The formation of sinkholes involves natural processes of erosion [8] or gradual removal of slightly soluble bedrock (such as limestone) by percolating water, the collapse of a cave roof, or a lowering of the water table. [9] Sinkholes often form through the process of suffosion. [10] For example, groundwater may dissolve the carbonate cement holding the sandstone particles together and then carry away the loose particles, gradually forming a void. Occasionally a sinkhole may exhibit a visible opening into a cave below.
In the case of exceptionally large sinkholes, such as the Minyé sinkhole in Papua New Guinea or Cedar Sink at Mammoth Cave National Park in Kentucky , an underground stream or river may be visible across its bottom flowing from one side to the other. Sinkholes are common where the rock below the land surface is limestone or other carbonate rock , salt beds , or in other soluble rocks, such as gypsum , [ 11 ] that can be dissolved naturally by circulating ground water . Sinkholes also occur in sandstone and quartzite terrains. As the rock dissolves, spaces and caverns develop underground. These sinkholes can be dramatic, because the surface land usually stays intact until there is not enough support. Then, a sudden collapse of the land surface can occur. [ 12 ] On 2 July 2015, scientists reported that active pits, related to sinkhole collapses and possibly associated with outbursts, were found on the comet 67P/Churyumov-Gerasimenko by the Rosetta space probe . [ 13 ] [ 14 ] Collapses, commonly incorrectly labeled as sinkholes, also occur due to human activity, such as the collapse of abandoned mines and salt cavern storage in salt domes in places like Louisiana , Mississippi , and Texas , in the United States. More commonly, collapses occur in urban areas due to water main breaks or sewer collapses when old pipes give way. They can also occur from the overpumping and extraction of groundwater and subsurface fluids. Sinkholes can also form when natural water drainage patterns are changed and new water diversion systems are developed. Some sinkholes form when the land surface is changed, such as when industrial and runoff storage ponds are created; the substantial weight of the new material can trigger a collapse of the roof of an existing void or cavity in the subsurface, resulting in development of a sinkhole. Solution or dissolution sinkholes form where water dissolves limestone under a soil covering. 
Dissolution enlarges natural openings in the rock such as joints, fractures, and bedding planes. Soil settles down into the enlarged openings, forming a small depression at the ground surface. [15] Cover-subsidence sinkholes form where voids in the underlying limestone allow more settling of the soil to create larger surface depressions. [15] Cover-collapse sinkholes or "dropouts" form where so much soil settles down into voids in the limestone that the ground surface collapses. The surface collapses may occur abruptly and cause catastrophic damage. New sinkhole collapses can also form when human activity changes the natural water-drainage patterns in karst areas. [15] Pseudokarst sinkholes resemble karst sinkholes but are formed by processes other than the natural dissolution of rock. [16]: 4 The U.S. Geological Survey notes that "It is a frightening thought to imagine the ground below your feet or house suddenly collapsing and forming a big hole in the ground." [15] Human activities can accelerate collapses of karst sinkholes, causing collapse within a few years that would normally evolve over thousands of years under natural conditions. [17]: 2 [18] [16]: 1 and 92 Soil-collapse sinkholes, characterized by the collapse of cavities that develop where soil falls down into underlying rock cavities, pose the most serious hazards to life and property. Fluctuation of the water level accelerates this collapse process. When water rises up through fissures in the rock, it reduces soil cohesion. Later, as the water level moves downward, the softened soil seeps downwards into rock cavities. Flowing water in karst conduits carries the soil away, preventing soil from accumulating in rock cavities and allowing the collapse process to continue. [19]: 52–53 Induced sinkholes occur where human activity alters how surface water recharges groundwater.
Many human-induced sinkholes occur where natural diffused recharge is disturbed and surface water becomes concentrated. Activities that can accelerate sinkhole collapses include timber removal, ditching, laying pipelines, sewers, water lines and storm drains, and drilling. These activities can increase the downward movement of water beyond the natural rate of groundwater recharge. [17]: 26–29 The increased runoff from the impervious surfaces of roads, roofs, and parking lots also accelerates human-induced sinkhole collapses. [16]: 8 Some induced sinkholes are preceded by warning signs, such as cracks, sagging, jammed doors, or cracking noises, but others develop with little or no warning. [17]: 32–34 However, karst development is well understood, and proper site characterization can avoid karst disasters. Thus most sinkhole disasters are predictable and preventable rather than "acts of God". [20]: xii [16]: 17 and 104 The American Society of Civil Engineers has declared that the potential for sinkhole collapse must be a part of land-use planning in karst areas. Where sinkhole collapse of structures could cause loss of life, the public should be made aware of the risks. [19]: 88 The most likely locations for sinkhole collapse are areas where there is already a high density of existing sinkholes. Their presence shows that the subsurface contains a cave system or other unstable voids. [21] Where large cavities exist in the limestone, large surface collapses can occur, such as the Winter Park, Florida sinkhole collapse. [16]: 91–92 Recommendations for land uses in karst areas should avoid or minimize alterations of the land surface and natural drainage. [17]: 36 Since water level changes accelerate sinkhole collapse, measures must be taken to minimize water level changes. The areas most susceptible to sinkhole collapse can be identified and avoided.
[19]: 88 In karst areas the traditional foundation evaluations (bearing capacity and settlement) of the ability of soil to support a structure must be supplemented by geotechnical site investigation for cavities and defects in the underlying rock. [19]: 113 Since the soil/rock surface in karst areas is very irregular, the number of subsurface samples (borings and core samples) required per unit area is usually much greater than in non-karst areas. [19]: 98–99 In 2015, the U.S. Geological Survey estimated the cost of repairing damage from karst-related processes at no less than $300 million per year over the preceding 15 years, but noted that this may be a gross underestimate based on inadequate data. [22] The greatest amount of karst sinkhole damage in the United States occurs in Florida, Texas, Alabama, Missouri, Kentucky, Tennessee, and Pennsylvania. [23] The largest recent sinkhole in the USA is possibly one that formed in 1972 in Montevallo, Alabama, as a result of man-made lowering of the water level in a nearby rock quarry. This "December Giant" or "Golly Hole" sinkhole measures 130 m (425 ft) long, 105 m (350 ft) wide and 45 m (150 ft) deep. [17]: 1–2 [19]: 61–63 [24] Other areas of significant karst hazards include the Ebro Basin in northern Spain; the island of Sardinia; the Italian peninsula; the Chalk areas in southern England; Sichuan, China; Jamaica; France; [25] Croatia; [26] Bosnia and Herzegovina; Slovenia; and Russia, where one-third of the total land area is underlain by karst. [27] Sinkholes tend to occur in karst landscapes. [12] Karst landscapes can have up to thousands of sinkholes within a small area, giving the landscape a pock-marked appearance. These sinkholes drain all the water, so there are only subterranean rivers in these areas. Examples of karst landscapes with numerous massive sinkholes include the Khammouan Mountains (Laos) and the Mamo Plateau (Papua New Guinea).
[ 28 ] [ 29 ] The largest known sinkholes formed in sandstone are Sima Humboldt and Sima Martel in Venezuela . [ 29 ] Some sinkholes form in thick layers of homogeneous limestone. Their formation is facilitated by high groundwater flow, often caused by high rainfall; such rainfall causes formation of the giant sinkholes in the Nakanaï Mountains , on the New Britain island in Papua New Guinea. [ 30 ] Powerful underground rivers may form on the contact between limestone and underlying insoluble rock, creating large underground voids. In such conditions, the largest known sinkholes of the world have formed, like the 662-metre-deep (2,172 ft) Xiaozhai Tiankeng ( Chongqing , China), giant sótanos in Querétaro and San Luis Potosí states in Mexico and others. [ 29 ] [ 31 ] Unusual processes have formed the enormous sinkholes of Sistema Zacatón in Tamaulipas (Mexico), where more than 20 sinkholes and other karst formations have been shaped by volcanically heated, acidic groundwater. [ 32 ] [ 33 ] This has produced not only the formation of the deepest water-filled sinkhole in the world— Zacatón —but also unique processes of travertine sedimentation in upper parts of sinkholes, leading to sealing of these sinkholes with travertine lids. [ 33 ] The U.S. state of Florida in North America is known for having frequent sinkhole collapses, especially in the central part of the state. Underlying limestone there is from 15 to 25 million years old. On the fringes of the state, sinkholes are rare or non-existent; limestone there is around 120,000 years old. [ 34 ] The Murge area in southern Italy also has numerous sinkholes. Sinkholes can be formed in retention ponds from large amounts of rain. [ 35 ] On the Arctic seafloor, methane emissions have caused large sinkholes to form. [ 36 ] [ 37 ] Sinkholes have been used for centuries as disposal sites for various forms of waste . 
A consequence of this is the pollution of groundwater resources, with serious health implications in such areas. [38][39] The Maya civilization sometimes used sinkholes in the Yucatán Peninsula (known as cenotes) as places to deposit precious items and human sacrifices. [40] When sinkholes are very deep or connected to caves, they may offer challenges for experienced cavers or, when water-filled, divers. Some of the most spectacular are the Zacatón cenote in Mexico (the world's deepest water-filled sinkhole), the Boesmansgat sinkhole in South Africa, Sarisariñama tepuy in Venezuela, the Sótano del Barro in Mexico, and those in the town of Mount Gambier, South Australia. Sinkholes that form in coral reefs and islands that collapse to enormous depths are known as blue holes and often become popular diving spots. [41] Large and visually unusual sinkholes have been well known to local people since ancient times. Nowadays sinkholes are grouped and given site-specific or generic names. Some examples of such names are listed below. [42] The 2010 Guatemala City sinkhole formed suddenly in May of that year; torrential rains from Tropical Storm Agatha and a bad drainage system were blamed for its creation. It swallowed a three-story building and a house; it measured approximately 20 m (66 ft) wide and 30 m (98 ft) deep. [47] A similar hole had formed nearby in February 2007. [48][49][50] These large vertical holes are not true sinkholes, as they did not form via the dissolution of limestone, dolomite, marble, or any other water-soluble rock. [51][52] Instead, they are examples of "piping pseudokarst", created by the collapse of large cavities that had developed in the weak, crumbly Quaternary volcanic deposits underlying the city. Although weak and crumbly, these volcanic deposits have enough cohesion to allow them to stand in vertical faces and to develop large subterranean voids within them.
A process called "soil piping" first created large underground voids, as water from leaking water mains flowed through these volcanic deposits and mechanically washed fine volcanic materials out of them, then progressively eroded and removed coarser materials. Eventually, these underground voids became large enough that their roofs collapsed to create large holes. [51] A crown hole is subsidence due to subterranean human activity, such as mining and military trenches. [53][54] Examples include instances above World War I trenches in Ypres, Belgium; near mines in Nitra, Slovakia; [55] at a limestone quarry in Dudley, England; [55][56] and above an old gypsum mine in Magheracloone, Ireland. [54] Some of the largest sinkholes in the world are: [29] This article incorporates public domain material from websites or documents of the United States Geological Survey.
https://en.wikipedia.org/wiki/Sinkhole
Sinogene Biotechnology is a Chinese biotechnology company focusing on animal cloning technology. [1][2] In 2022, Sinogene was the first to clone a wild Arctic wolf. [3] Sinogene Biotechnology began offering dog cloning services in 2017 [4][5][6] and later introduced cat cloning in 2019. [7][8][9] In 2022, Sinogene became the first company to successfully clone an Arctic wolf, [10] and started horse cloning in 2023. [11][12] For the wolf, the donor cell came from a wild female Arctic wolf, the oocyte was from a female dog, and the surrogate was a beagle. [3] The company transferred 85 embryos into seven beagles and one Arctic wolf was born. [13] In June of the same year, Sinogene cloned a male horse using skin cells from a horse born in 1995. [14] Sinogene partnered with Beijing Wildlife Park in 2022 to work together on improving breeding for endangered animals as well as improving ways to protect endangered animals. [13][15] Sinogene clients can harvest cells from their living pets, to one day use in the cloning process after their pet dies. [12] Customers receive their cloned animals three months after they are born. [12]
https://en.wikipedia.org/wiki/Sinogene_Biotechnology
Sinter plants agglomerate iron ore fines (dust) with other fine materials at high temperature, to create a product that can be used in a blast furnace. The final product, a sinter, is a small, irregular nodule of iron mixed with small amounts of other minerals. The process, called sintering, causes the constituent materials to fuse to make a single porous mass with little change in the chemical properties of the ingredients. The purpose of sinter is to be used in converting iron into steel. Sinter plants, in combination with blast furnaces, are also used in non-ferrous smelting. About 70% of the world's primary lead production is still produced this way. [1] The combination was once used in copper smelting, as at the Electrolytic Refining and Smelting smelter in Wollongong, New South Wales. [2] Many countries, including India, France and Germany, have underground deposits of iron ore in dust form (blue dust). Such iron ore cannot be directly charged in a blast furnace. In the early 20th century, sinter technology was developed for converting ore fines into lumpy material chargeable in blast furnaces. Sinter technology took 30 years to gain acceptance in the iron-making domain, but now plays an important role. Initially developed to generate steel, it is now a means of using metallurgical waste generated in steel plants to enhance blast furnace operation and reduce waste. The largest sinter plant is located in Chennai, India, and employs 10,000 people. [3] The main feed into a sinter plant is base mix, which consists of iron ore fines, coke fines and flux (limestone) fines. In addition to base mix, coke fines, flux fines, sinter fines, iron dust (collected from the plant de-dusting system and electrostatic precipitator) and plant waste are mixed in proportion (by weight) in a rotary drum, often called a mixing and nodulizing drum.
Calcined lime is used as a binder of the mixed material, along with water (all in particular proportions by weight), to form feed-sinter globules of about 5 to 7 mm in size. These sinter globules are fed to the sintering machine and burnt therein to produce blast furnace feed sinter. Material is put on the sinter machine in two layers. The bottom layer may vary in thickness from 30 to 75 millimetres (1.2 to 3.0 in). A 12 to 20 mm sinter fraction is used, also referred to as the hearth layer. The second, covering layer consists of mixed materials, making for a total bed height of 350 to 660 millimetres (14 to 26 in). The mixed materials are applied with drum feeders and roll feeders, which distribute the nodules to a certain depth throughout the sintering machine. The upper layer is smoothed using a leveler. The material, also known as the charge, then passes under rows of multi-slit burners in the ignition furnace. In the case of one plant, the first (ignition) zone has eleven burners; the next (soaking/annealing) zone typically has 12 burners. Air is sucked from the bottom of the bed of mixed material throughout the sintering machine. Fire penetrates the mixed material gradually, until it reaches the hearth layer. This end point of burning is called the burn-through point (BTP). The hearth layer, which is nothing but sinter of smaller size, restricts sticking of hot sinter to the pallets. BTP is achieved in a certain zone of the sinter machine, to optimize the process, by means of several temperature measuring instruments placed throughout the sinter machine. After completion of burning, the mix converts into sinter, which is then broken into smaller sizes by a sinter breaker. After breaking into small sizes, it cools down in a cooler (linear or circular) by means of forced air. At discharge from the sinter cooler, the temperature of the sinter is kept low, so that the hot sinter can be transported by a conveyor belt made of rubber.
Necessary precautions are taken to detect any fire on the belt, which is extinguished by spraying water. The product is then passed through a jaw crusher, where the sinter is further reduced to a smaller size (~50 mm). The complete mixture is then passed through two screens. The smallest sinter fines (< 5 mm) are stored in proportioning bins and reused for preparing sinter again through the mixing and nodulizing drum, then fed to the sinter machine for burning. A part of the intermediate fraction (5–20 mm) is used for the hearth layer in the sinter machine and the rest is taken to the blast furnace along with the largest-sized sinter. The temperature is typically maintained between 1,150 and 1,250 °C (2,100 and 2,280 °F) in the ignition zone and between 900 and 1,000 °C in the soaking zone to prevent sudden quenching of the sintered layer. The material retained on the 5 mm screen goes to the conveyor carrying the sinter for the blast furnace and, along with blast furnace grade sinter, either goes to sinter storage bunkers or to blast furnace bunkers. Blast furnace-grade sinter consists of particles sized 5 to 12 mm as well as 20 mm and above. There are certain advantages to using sinter as opposed to other materials, including recycling of fines and other waste products such as flue dust, mill scale, lime dust and sludge. Processing sinter helps eliminate raw flux, a binding material used to agglomerate materials, which saves the heating material, coke, and improves furnace productivity. Improvements and efficiency can be gained from a higher softening temperature and a narrower softening range in the melting zone, which increases the volume of the granular zone and shrinks the width of the cohesive zone. A lower silica content and higher hot metal temperature contribute to more sulphur removal.
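The screening scheme described above amounts to routing each size fraction to a destination. The sketch below is a simplified illustration of that routing, assuming a hypothetical function name; the thresholds are the ones quoted in the text, and the real plant splits the 5–20 mm fraction between hearth layer and blast furnace by proportion rather than per particle:

```python
def route_sinter(size_mm):
    """Route a sinter particle by screen size, per the scheme in the text:
    fines < 5 mm are recycled, 5-20 mm serves the hearth layer or blast
    furnace, and larger sinter goes to the blast furnace."""
    if size_mm < 5:
        return "proportioning bins (recycled via nodulizing drum)"
    elif size_mm <= 20:
        return "hearth layer or blast furnace"
    else:
        return "blast furnace"
```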
https://en.wikipedia.org/wiki/Sinter_plant
Sintered polyethylene is a polyethylene powder that is formed into a solid without melting it. It can be produced using heat, pressure, or selective laser sintering. [1] It has applications as a coating on pipes and skis, and as a filter medium. Sintered polyethylene is applied to steel pipes to resist corrosion. Pipes are heated and then powder coated with polyethylene, either electrostatically or through immersion. [2][3] Sintered polyethylene was introduced as a coating on skis in 1962. It is more durable and less dense than solid, extruded polyethylene, and its porous structure allows the ski to absorb wax. [4][5] Sinter plate filter elements were developed in Germany [6] during 1981 and 1982. These rigid filter elements are used in dust collectors and will produce clean gas values of 0.1 to 1.0 mg/m3 with emissions of 0.001 pounds per hour with a maximum of 0.0004 gr/ft3, even with dusts of D50 < 1 μm. The EPA defines high efficiency as removal of 0 to 5.0 μm particulates. The filter element matrix consists of molded sintered polyethylene (PE) with a reinforced mounting header. An additional PTFE coating penetrates the pores of the PE base body and forms a micro-porous, non-sticking surface, resulting in surface-loaded filtration. [7] This combined material is chemically resistant and unaffected by slight moisture. The rigid sinter plate filters have an extremely stable structure and can be recycled. The average useful life of a sinter plate filter element can exceed 10 years. Sinter plate PE filter elements are environmentally compatible: they produce no toxic waste in themselves and can be washed off and reused. There is no contamination of material from filter fibers, as occurs with fabric and cartridge media elements. The rigidity of the matrix results in no wear from abrasive material. Cleaning is accomplished by on-line, low-pressure reverse air jet-pulse.
[7] Standard elements will handle temperatures up to 158 °F, with improved elements designed for up to 230 °F. The separation efficiency exceeds that required by the BIA certificate for dust classification M.
https://en.wikipedia.org/wiki/Sintered_polyethylene
Sintering or frittage is the process of compacting and forming a solid mass of material by pressure [ 1 ] or heat [ 2 ] without melting it to the point of liquefaction . Sintering happens as part of a manufacturing process used with metals , ceramics , plastics , and other materials. The atoms/molecules in the sintered material diffuse across the boundaries of the particles, fusing the particles together and creating a solid piece. Since the sintering temperature does not have to reach the melting point of the material, sintering is often chosen as the shaping process for materials with extremely high melting points, such as tungsten and molybdenum . The study of sintering in metallurgical powder-related processes is known as powder metallurgy . An example of sintering can be observed when ice cubes in a glass of water adhere to each other, which is driven by the temperature difference between the water and the ice. Examples of pressure-driven sintering are the compacting of snowfall to a glacier, or the formation of a hard snowball by pressing loose snow together. The material produced by sintering is called sinter . The word sinter comes from the Middle High German sinter , a cognate of English cinder . Sintering is generally considered successful when the process reduces porosity and enhances properties such as strength, electrical conductivity , translucency and thermal conductivity . In some special cases, sintering is carefully applied to enhance the strength of a material while preserving porosity (e.g. in filters or catalysts, where gas adsorption is a priority). During the sintering process, atomic diffusion drives powder surface elimination in different stages, starting at the formation of necks between powders to final elimination of small pores at the end of the process. The driving force for densification is the change in free energy from the decrease in surface area and lowering of the surface free energy by the replacement of solid-vapor interfaces. 
It forms new but lower-energy solid-solid interfaces with a net decrease in total free energy. On a microscopic scale, material transfer is affected by the change in pressure and differences in free energy across the curved surface. If the size of the particle is small (and its curvature is high), these effects become very large in magnitude. The change in energy is much higher when the radius of curvature is less than a few micrometers, which is one of the main reasons why much ceramic technology is based on the use of fine-particle materials. [3] The ratio of bond area to particle size is a determining factor for properties such as strength and electrical conductivity. To yield the desired bond area, temperature and initial grain size are precisely controlled over the sintering process. At steady state, the particle radius and the vapor pressure are proportional to (p0)^(2/3) and to (p0)^(1/3), respectively. [3] The source of power for solid-state processes is the change in free or chemical potential energy between the neck and the surface of the particle. This energy creates a transfer of material through the fastest means possible; if transfer were to take place from the particle volume or the grain boundary between particles, particle count would decrease and pores would be destroyed. Pore elimination is fastest in samples with many pores of uniform size because the boundary diffusion distance is smallest. During the latter portions of the process, boundary and lattice diffusion from the boundary become important. [3] Control of temperature is very important to the sintering process, since grain-boundary diffusion and volume diffusion rely heavily upon temperature, particle size, particle distribution, material composition, and often other properties of the sintering environment itself. [3] Sintering is part of the firing process used in the manufacture of pottery and other ceramic objects.
Sintering and vitrification (which requires higher temperatures) are the two main mechanisms behind the strength and stability of ceramics. Sintered ceramic objects are made from substances such as glass , alumina , zirconia , silica , magnesia , lime , beryllium oxide , and ferric oxide . Some ceramic raw materials have a lower affinity for water and a lower plasticity index than clay , requiring organic additives in the stages before sintering. Sintering begins when sufficient temperatures have been reached to mobilize the active elements in the ceramic material, which can start below their melting point (typically at 50–80% of their melting point [ 4 ] ), e.g. as premelting . When sufficient sintering has taken place, the ceramic body will no longer break down in water; additional sintering can reduce the porosity of the ceramic, increase the bond area between ceramic particles, and increase the material strength. [ 5 ] Industrial procedures to create ceramic objects via sintering of powders generally include: [ 6 ] All the characteristic temperatures associated with phase transformation, glass transitions, and melting points, occurring during a sinterisation cycle of a particular ceramic's formulation (i.e., tails and frits) can be easily obtained by observing the expansion-temperature curves during optical dilatometer thermal analysis. In fact, sinterisation is associated with a remarkable shrinkage of the material because glass phases flow once their transition temperature is reached, and start consolidating the powdery structure and considerably reducing the porosity of the material. Sintering is performed at high temperature. Additionally, a second and/or third external force (such as pressure, electric current) could be used. A commonly used second external force is pressure. 
Sintering performed by heating alone is generally termed "pressureless sintering", which is possible with graded metal-ceramic composites, utilising a nanoparticle sintering aid and bulk molding technology. A variant used for 3D shapes is called hot isostatic pressing . To allow efficient stacking of product in the furnace during sintering and to prevent parts sticking together, many manufacturers separate ware using ceramic powder separator sheets. These sheets are available in various materials such as alumina, zirconia and magnesia. They are additionally categorized by fine, medium and coarse particle sizes. By matching the material and particle size to the ware being sintered, surface damage and contamination can be reduced while maximizing furnace loading. Most metals can be sintered, although some with greater difficulty (e.g., aluminium and titanium alloys [ 7 ] ). This applies especially to pure metals produced in vacuum, which suffer no surface contamination. Sintering under atmospheric pressure requires the use of a protective gas, quite often endothermic gas . Sintering, with subsequent reworking, can produce a great range of material properties. Changes in density, alloying , and heat treatments can alter the physical characteristics of various products. For instance, the Young's modulus E n of sintered iron powders remains somewhat insensitive to sintering time, alloying, or particle size in the original powder at lower sintering temperatures, but depends upon the density of the final product: E_n/E = (D/d)^3.4 {\displaystyle E_{n}/E=(D/d)^{3.4}} where D is the density of the sintered product, E is the Young's modulus of fully dense iron and d is the maximum (theoretical) density of iron. Sintering is static when a metal powder exhibits coalescence under certain external conditions, yet reverts to its normal behavior when those conditions are removed. In most cases, the density of a collection of grains increases as material flows into voids, causing a decrease in overall volume. 
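The empirical density dependence quoted above can be sketched numerically. This is a minimal illustration assuming the relation E_n/E = (D/d)^3.4 from the text; the bulk modulus value and the sample densities below are rough illustrative figures, not measured data.

```python
def sintered_modulus(E_bulk, density, full_density, exponent=3.4):
    """Young's modulus of a sintered compact of relative density density/full_density."""
    return E_bulk * (density / full_density) ** exponent

E_iron = 211e9   # Pa, illustrative Young's modulus of fully dense iron (assumption)
d_full = 7.87    # g/cm^3, theoretical density of iron

for D in (6.0, 7.0, 7.5):   # illustrative sintered densities, g/cm^3
    E_n = sintered_modulus(E_iron, D, d_full)
    print(f"D = {D} g/cm^3 -> E_n = {E_n / 1e9:.0f} GPa")
```

Because of the 3.4 exponent, even a modest residual porosity reduces the stiffness substantially: a 5% density deficit cuts the modulus by roughly 16%.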
Mass movements that occur during sintering consist of the reduction of total porosity by repacking, followed by material transport due to evaporation and condensation from diffusion . In the final stages, metal atoms move along crystal boundaries to the walls of internal pores, redistributing mass from the internal bulk of the object and smoothing pore walls. Surface tension is the driving force for this movement. A special form of sintering (which is still considered part of powder metallurgy) is liquid-state sintering in which at least one but not all elements are in a liquid state. Liquid-state sintering is required for making cemented carbide and tungsten carbide . Sintered bronze in particular is frequently used as a material for bearings , since its porosity allows lubricants to flow through it or remain captured within it. Sintered copper may be used as a wicking structure in certain types of heat pipe construction, where the porosity allows a liquid agent to move through the porous material via capillary action . For materials that have high melting points such as molybdenum , tungsten , rhenium , tantalum , osmium and carbon , sintering is one of the few viable manufacturing processes. In these cases, very low porosity is desirable and can often be achieved. Sintered metal powder is used to make frangible shotgun shells called breaching rounds , as used by military and SWAT teams to quickly force entry into a locked room. These shotgun shells are designed to destroy door deadbolts, locks and hinges without risking lives by ricocheting or by flying on at lethal speed through the door. They work by destroying the object they hit and then dispersing into a relatively harmless powder. Sintered bronze and stainless steel are used as filter materials in applications requiring high temperature resistance while retaining the ability to regenerate the filter element. 
For example, sintered stainless steel elements are employed for filtering steam in food and pharmaceutical applications, and sintered bronze in aircraft hydraulic systems. Sintering of powders containing precious metals such as silver and gold is used to make small jewelry items. Evaporative self-assembly of colloidal silver nanocubes into supercrystals has been shown to allow the sintering of electrical joints at temperatures lower than 200 °C. [ 8 ] Particular advantages of the powder technology include: The literature contains many references on sintering dissimilar materials to produce solid/solid-phase compounds or solid/melt mixtures at the processing stage. Almost any substance can be obtained in powder form, through either chemical, mechanical or physical processes, so basically any material can be obtained through sintering. When pure elements are sintered, the leftover powder is still pure, so it can be recycled. Particular disadvantages of the powder technology include: [ original research? ] Plastic materials are formed by sintering for applications that require materials of specific porosity. Sintered plastic porous components are used in filtration and to control fluid and gas flows. Sintered plastics are used in applications requiring caustic fluid separation processes such as the nibs in whiteboard markers, inhaler filters, and vents for caps and liners on packaging materials. [ 9 ] Sintered ultra high molecular weight polyethylene materials are used as ski and snowboard base materials. The porous texture allows wax to be retained within the structure of the base material, thus providing a more durable wax coating. For materials that are difficult to sinter, a process called liquid phase sintering is commonly used. Materials for which liquid phase sintering is common are Si 3 N 4 , WC , SiC , and more. Liquid phase sintering is the process of adding an additive to the powder which will melt before the matrix phase. 
The process of liquid phase sintering has three stages: For liquid phase sintering to be practical the major phase should be at least slightly soluble in the liquid phase and the additive should melt before any major sintering of the solid particulate network occurs, otherwise rearrangement of grains will not occur. Liquid phase sintering was successfully applied to improve grain growth of thin semiconductor layers from nanoparticle precursor films. [ 10 ] These techniques employ electric currents to drive or enhance sintering. [ 11 ] [ 12 ] English engineer A. G. Bloxam registered in 1906 the first patent on sintering powders using direct current in vacuum . The primary purpose of his inventions was the industrial scale production of filaments for incandescent lamps by compacting tungsten or molybdenum particles. The applied current was particularly effective in reducing surface oxides that increased the emissivity of the filaments. [ 13 ] In 1913, Weintraub and Rush patented a modified sintering method which combined electric current with pressure . The benefits of this method were proved for the sintering of refractory metals as well as conductive carbide or nitride powders. The starting boron – carbon or silicon –carbon powders were placed in an electrically insulating tube and compressed by two rods which also served as electrodes for the current. The estimated sintering temperature was 2000 °C. [ 13 ] In the United States, sintering was first patented by Duval d'Adrian in 1922. His three-step process aimed at producing heat-resistant blocks from such oxide materials as zirconia , thoria or tantalia . The steps were: (i) molding the powder; (ii) annealing it at about 2500 °C to make it conducting; (iii) applying current-pressure sintering as in the method by Weintraub and Rush. [ 13 ] Sintering that uses an arc produced via a capacitance discharge to eliminate oxides before direct current heating, was patented by G. F. Taylor in 1932. 
This originated sintering methods employing pulsed or alternating current , eventually superimposed on a direct current. Those techniques have been developed over many decades and are summarized in more than 640 patents. [ 13 ] Of these technologies the most well known are resistance sintering (also called hot pressing ) and spark plasma sintering , while electro sinter forging is the latest advancement in this field. In spark plasma sintering (SPS), external pressure and an electric field are applied simultaneously to enhance the densification of metallic/ceramic powder compacts. However, after commercialization it was determined there is no plasma, so the proper name is spark sintering, as coined by Lenel. The electric-field-driven densification supplements sintering with a form of hot pressing, enabling lower temperatures and shorter times than typical sintering. [ 14 ] For a number of years, it was speculated that the existence of sparks or plasma between particles could aid sintering; however, Hulbert and coworkers systematically proved that the electric parameters used during spark plasma sintering make it (highly) unlikely. [ 15 ] In light of this, the name "spark plasma sintering" has been rendered obsolete. Terms such as field assisted sintering technique (FAST), electric field assisted sintering (EFAS), and direct current sintering (DCS) have been adopted by the sintering community. [ 16 ] Using a direct current (DC) pulse as the electric current, spark plasma, spark impact pressure, joule heating, and an electrical field diffusion effect would be created. [ 17 ] By modifying the graphite die design and its assembly, it is possible to perform pressureless sintering in a spark plasma sintering facility. This modified die design setup is reported to synergize the advantages of both conventional pressureless sintering and spark plasma sintering techniques. 
[ 18 ] Electro sinter forging is an electric current assisted sintering (ECAS) technology originating from capacitor discharge sintering . It is used for the production of diamond metal matrix composites and is under evaluation for the production of hard metals, [ 19 ] nitinol [ 20 ] and other metals and intermetallics. It is characterized by a very short sintering time, allowing machines to sinter at the same speed as a compaction press. Pressureless sintering is the sintering of a powder compact (sometimes at very high temperatures, depending on the powder) without applied pressure. This avoids density variations in the final component, which occur with more traditional hot pressing methods. [ 21 ] The powder compact (if a ceramic) can be created by slip casting , injection moulding , or cold isostatic pressing . After presintering, the green compact can be machined to its final shape before being sintered. Three different heating schedules can be performed with pressureless sintering: constant-rate of heating (CRH), rate-controlled sintering (RCS), and two-step sintering (TSS). The microstructure and grain size of the ceramics may vary depending on the material and method used. [ 21 ] Constant-rate of heating (CRH), also known as temperature-controlled sintering, consists of heating the green compact at a constant rate up to the sintering temperature. [ 22 ] Experiments with zirconia have been performed to optimize the sintering temperature and sintering rate for the CRH method. Results showed that the grain sizes were identical when the samples were sintered to the same density, proving that grain size is a function of specimen density rather than of CRH temperature mode. In rate-controlled sintering (RCS), the densification rate in the open-porosity phase is lower than in the CRH method. [ 22 ] By definition, the relative density, ρ rel , in the open-porosity phase is lower than 90%. 
Although this should prevent separation of pores from grain boundaries, it has been proven statistically that RCS did not produce smaller grain sizes than CRH for alumina, zirconia, and ceria samples. [ 21 ] Two-step sintering (TSS) uses two different sintering temperatures. The first sintering temperature should guarantee a relative density higher than 75% of theoretical sample density. This will remove supercritical pores from the body. The sample will then be cooled down and held at the second sintering temperature until densification is completed. Grains of cubic zirconia and cubic strontium titanate were significantly refined by TSS compared to CRH. However, the grain size changes in other ceramic materials, like tetragonal zirconia and hexagonal alumina, were not statistically significant. [ 21 ] In microwave sintering, heat is sometimes generated internally within the material, rather than via surface radiative heat transfer from an external heat source. Some materials fail to couple and others exhibit run-away behavior, so it is restricted in usefulness. A benefit of microwave sintering is faster heating for small loads, meaning less time is needed to reach the sintering temperature, less heating energy is required and there are improvements in the product properties. [ 23 ] A failing of microwave sintering is that it generally sinters only one compact at a time, so overall productivity turns out to be poor except for situations involving one of a kind sintering, such as for artists. As microwaves can only penetrate a short distance in materials with a high conductivity and a high permeability , microwave sintering requires the sample to be delivered in powders with a particle size around the penetration depth of microwaves in the particular material. The sintering process and side-reactions run several times faster during microwave sintering at the same temperature, which results in different properties for the sintered product. 
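The two-step sintering (TSS) schedule described above can be sketched as a simple piecewise-linear furnace profile: heat to a first temperature T1, cool rapidly to a lower temperature T2, then hold. All temperatures, rates, and times in this sketch are hypothetical illustrations, not recommendations for any particular ceramic.

```python
def tss_temperature(t_min, T1=1300.0, T2=1150.0, heat_rate=10.0, cool_rate=50.0):
    """Furnace set point (deg C) at time t_min (minutes) for a TSS cycle."""
    t_ramp = T1 / heat_rate                  # constant-rate heating from 0 deg C to T1
    t_cool = t_ramp + (T1 - T2) / cool_rate  # rapid cool-down from T1 to T2
    if t_min <= t_ramp:
        return heat_rate * t_min
    if t_min <= t_cool:
        return T1 - cool_rate * (t_min - t_ramp)
    return T2                                # long isothermal hold at T2

for t in (0, 65, 130, 133, 300):
    print(f"t = {t:3d} min -> {tss_temperature(t):6.0f} deg C")
```

The first peak drives the density past the threshold at which supercritical pores are removed; the long low-temperature hold then completes densification while suppressing grain growth.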
[ 23 ] This technique is acknowledged to be quite effective in maintaining fine grains/nano-sized grains in sintered bioceramics . Magnesium phosphates and calcium phosphates are examples that have been processed through the microwave sintering technique. [ 24 ] Sintering in practice is the control of both densification and grain growth . Densification is the act of reducing porosity in a sample, thereby making it denser. Grain growth is the process of grain boundary motion and Ostwald ripening to increase the average grain size. Many properties ( mechanical strength , electrical breakdown strength, etc.) benefit from both a high relative density and a small grain size. Therefore, being able to control these properties during processing is of high technical importance. Since densification of powders requires high temperatures, grain growth naturally occurs during sintering. Reduction of this process is key for many engineering ceramics. Under certain conditions of chemistry and orientation, some grains may grow rapidly at the expense of their neighbours during sintering. This phenomenon, known as abnormal grain growth (AGG), results in a bimodal grain size distribution that has consequences for the mechanical, dielectric and thermal performance of the sintered material. For densification to occur at a quick pace it is essential to have (1) a substantial amount of liquid phase, (2) near-complete solubility of the solid in the liquid, and (3) wetting of the solid by the liquid. The power behind the densification is derived from the capillary pressure of the liquid phase located between the fine solid particles. When the liquid phase wets the solid particles, each space between the particles becomes a capillary in which a substantial capillary pressure is developed. 
For submicrometre particle sizes, capillaries with diameters in the range of 0.1 to 1 micrometres develop pressures in the range of 175 pounds per square inch (1,210 kPa) to 1,750 pounds per square inch (12,100 kPa) for silicate liquids, and in the range of 975 pounds per square inch (6,720 kPa) to 9,750 pounds per square inch (67,200 kPa) for a metal such as liquid cobalt. [ 3 ] Densification requires a sustained capillary pressure; solution-precipitation material transfer alone would not produce densification. Further densification involves additional particle movement while the particles undergo grain growth and grain-shape changes. Shrinkage results when the liquid slips between particles and increases pressure at points of contact, causing the material to move away from the contact areas and forcing particle centers to draw nearer to each other. [ 3 ] The sintering of liquid-phase materials involves a fine-grained solid phase to create the needed capillary pressures (which scale inversely with particle diameter), and the liquid concentration must also create the required capillary pressure within range, else the process ceases. The vitrification rate is dependent upon the pore size, the viscosity and amount of liquid phase present (which set the viscosity of the overall composition), and the surface tension. Temperature dependence of densification controls the process, because at higher temperatures the viscosity decreases and the liquid content increases. Therefore, changes to the composition and processing will affect the vitrification process. [ 3 ] Sintering occurs by diffusion of atoms through the microstructure. This diffusion is caused by a gradient of chemical potential – atoms move from an area of higher chemical potential to an area of lower chemical potential. The different paths the atoms take to get from one spot to another are the "sintering mechanisms" or "matter transport mechanisms". 
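The capillary pressures quoted above are consistent with a simple Young–Laplace estimate, ΔP = 2γ/r. The surface-tension values in the sketch below (about 0.3 N/m for a silicate liquid and about 1.7 N/m for liquid cobalt) are assumptions chosen to reproduce the quoted ranges, not authoritative property data.

```python
def capillary_pressure_kpa(gamma, diameter_um):
    """Young-Laplace pressure 2*gamma/r (in kPa) for a capillary of given diameter (um)."""
    r = diameter_um * 1e-6 / 2.0   # capillary radius in metres
    return 2.0 * gamma / r / 1e3   # Pa -> kPa

# gamma values (N/m) are assumptions chosen to match the ranges quoted in the text
for name, gamma in (("silicate liquid", 0.30), ("liquid cobalt", 1.68)):
    lo = capillary_pressure_kpa(gamma, 1.0)   # 1 um diameter capillary
    hi = capillary_pressure_kpa(gamma, 0.1)   # 0.1 um diameter capillary
    print(f"{name}: about {lo:.0f} kPa to {hi:.0f} kPa")
```

The tenfold pressure increase from 1 μm down to 0.1 μm capillaries reflects the inverse scaling with diameter, which is why fine powders densify so much more readily.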
In solid state sintering, the six common mechanisms are: [ 3 ] (1) surface diffusion, (2) vapor transport (evaporation–condensation), (3) lattice diffusion from the particle surface, (4) lattice diffusion from the grain boundary, (5) grain boundary diffusion, and (6) plastic deformation. Mechanisms 1–3 above are non-densifying (i.e. they do not cause the pores and the overall ceramic body to shrink) but can still increase the area of the bond or "neck" between grains; they take atoms from the surface and rearrange them onto another surface or part of the same surface. Mechanisms 4–6 are densifying – atoms are moved from the bulk material or the grain boundaries to the surface of pores, thereby eliminating porosity and increasing the density of the sample. A grain boundary (GB) is the transition area or interface between adjacent crystallites (or grains) of the same chemical and lattice composition, not to be confused with a phase boundary . The adjacent grains do not have the same orientation of the lattice, which gives the atoms in the GB shifted positions relative to the lattice in the crystals . Due to their shifted positions, the atoms in the GB have a higher energy state than the atoms in the crystal lattice of the grains. It is this imperfection that makes it possible to selectively etch the GBs when one wants the microstructure to be visible. [ 25 ] The specimen's striving to minimize this energy leads to coarsening of the microstructure toward a metastable state. This involves minimizing the GB area and changing the topological structure to minimize the energy. Grain growth can be either normal or abnormal ; normal grain growth is characterized by uniform growth and size of all the grains in the specimen, while abnormal grain growth is when a few grains grow much larger than the remaining majority. [ 26 ] The atoms in the GB are normally in a higher energy state than their equivalent in the bulk material. This is due to their more stretched bonds, which gives rise to a GB tension σ G B {\displaystyle \sigma _{GB}} . This extra energy that the atoms possess is called the grain boundary energy, γ G B {\displaystyle \gamma _{GB}} . 
The grain will want to minimize this extra energy, thus striving to make the grain boundary area smaller and this change requires energy. [ 26 ] "Or, in other words, a force has to be applied, in the plane of the grain boundary and acting along a line in the grain-boundary area, in order to extend the grain-boundary area in the direction of the force. The force per unit length, i.e. tension/stress, along the line mentioned is σGB. On the basis of this reasoning it would follow that: σ G B d A (work done) = γ G B d A (energy change) {\displaystyle \sigma _{GB}dA{\text{ (work done)}}=\gamma _{GB}dA{\text{ (energy change)}}\,\!} with dA as the increase of grain-boundary area per unit length along the line in the grain-boundary area considered." [ 26 ] [pg 478] The GB tension can also be thought of as the attractive forces between the atoms at the surface and the tension between these atoms is due to the fact that there is a larger interatomic distance between them at the surface compared to the bulk (i.e. surface tension ). When the surface area becomes bigger the bonds stretch more and the GB tension increases. To counteract this increase in tension there must be a transport of atoms to the surface keeping the GB tension constant. This diffusion of atoms accounts for the constant surface tension in liquids. Then the argument, σ G B d A (work done) = γ G B d A (energy change) {\displaystyle \sigma _{GB}dA{\text{ (work done)}}=\gamma _{GB}dA{\text{ (energy change)}}\,\!} holds true. For solids, on the other hand, diffusion of atoms to the surface might not be sufficient and the surface tension can vary with an increase in surface area. [ 27 ] For a solid, one can derive an expression for the change in Gibbs free energy, dG, upon the change of GB area, dA. 
dG is given by σ G B d A (work done) = d G (energy change) = γ G B d A + A d γ G B {\displaystyle \sigma _{GB}dA{\text{ (work done)}}=dG{\text{ (energy change)}}=\gamma _{GB}dA+Ad\gamma _{GB}\,\!} which gives σ G B = γ G B + A d γ G B d A {\displaystyle \sigma _{GB}=\gamma _{GB}+{\frac {Ad\gamma _{GB}}{dA}}\,\!} σ G B {\displaystyle \sigma _{GB}} is normally expressed in units of N m {\displaystyle {\frac {N}{m}}} while γ G B {\displaystyle \gamma _{GB}} is normally expressed in units of J m 2 {\displaystyle {\frac {J}{m^{2}}}} ( J = N m ) {\displaystyle (J=Nm)} since they are different physical properties. [ 26 ] In a two-dimensional isotropic material the grain boundary tension would be the same for all grains. This gives an angle of 120° at a GB junction where three grains meet, and hence a hexagonal pattern, which is the metastable state (or mechanical equilibrium ) of the 2D specimen. A consequence of this is that, to stay as close to equilibrium as possible, grains with fewer than six sides will bend the GB to try to keep the 120° angle between each other. This results in a curved boundary with its curvature towards itself. A grain with six sides will, as mentioned, have straight boundaries, while a grain with more than six sides will have curved boundaries with its curvature away from itself. A grain with six boundaries (i.e. hexagonal structure) is in a metastable state (i.e. local equilibrium) within the 2D structure. [ 26 ] In three dimensions structural details are similar but much more complex, and the metastable structure for a grain is a non-regular 14-sided polyhedron with doubly curved faces. In practice all arrays of grains are always unstable and thus always grow until prevented by a counterforce. [ 28 ] Grains strive to minimize their energy, and a curved boundary has a higher energy than a straight boundary. This means that the grain boundary will migrate towards its center of curvature. 
The consequence of this is that grains with fewer than six sides will decrease in size while grains with more than six sides will increase in size. [ 29 ] Grain growth occurs due to motion of atoms across a grain boundary. Convex surfaces have a higher chemical potential than concave surfaces, therefore grain boundaries will move toward their center of curvature. Smaller particles have a smaller radius of curvature (higher curvature) and thus a higher chemical potential; this results in smaller grains losing atoms to larger grains and shrinking, a process called Ostwald ripening: large grains grow at the expense of small grains. Grain growth in a simple model is found to follow: G m = G 0 m + K t {\displaystyle G^{m}=G_{0}^{m}+Kt} Here G is the final average grain size, G 0 is the initial average grain size, t is time, m is a factor between 2 and 4, and K is a factor given by: K = K 0 e − Q R T {\displaystyle K=K_{0}e^{\frac {-Q}{RT}}} Here Q is the molar activation energy, R is the ideal gas constant, T is absolute temperature, and K 0 is a material-dependent factor. In most materials the sintered grain size is proportional to the inverse square root of the fractional porosity, implying that pores are the most effective retardant for grain growth during sintering. If a dopant is added to the material (example: Nd in BaTiO 3 ) the impurity will tend to stick to the grain boundaries. As the grain boundary tries to move (as atoms jump from the convex to the concave surface), the change in concentration of the dopant at the grain boundary will impose a drag on the boundary. The original concentration of solute around the grain boundary will be asymmetrical in most cases. As the grain boundary tries to move, the side opposite the direction of motion will have a higher concentration and therefore a higher chemical potential. This increased chemical potential will act as a backforce to the original chemical potential gradient that is the reason for grain boundary movement. 
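The grain-growth law above can be evaluated directly. The sketch below assumes m = 3 and arbitrary illustrative values for K0 and Q; it is meant only to show the strong Arrhenius temperature dependence, not to model any real material.

```python
import math

R = 8.314  # J/(mol K), ideal gas constant

def grain_size(G0, t, T, m=3, K0=1e-10, Q=3.0e5):
    """Average grain size (m) after time t (s) at absolute temperature T (K),
    following G^m = G0^m + K*t with K = K0*exp(-Q/(R*T)).
    K0 and Q here are arbitrary illustrative values."""
    K = K0 * math.exp(-Q / (R * T))   # Arrhenius temperature dependence
    return (G0 ** m + K * t) ** (1.0 / m)

# Same starting size and hold time, increasing temperature:
for T in (1400.0, 1600.0, 1800.0):
    G = grain_size(G0=1e-6, t=3600.0, T=T)
    print(f"T = {T:.0f} K -> G = {G * 1e6:.2f} um")
```

Because K enters through an exponential in 1/T, a few hundred kelvin can change the amount of coarsening by orders of magnitude, which is why temperature control dominates grain-size control in practice.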
This decrease in net chemical potential will decrease the grain boundary velocity and therefore grain growth. If particles of a second phase which are insoluble in the matrix phase are added to the powder in the form of a much finer powder, this will decrease grain boundary movement. When the grain boundary tries to move past the inclusion, the diffusion of atoms from one grain to the other will be hindered by the insoluble particle. This is because it is beneficial for particles to reside in the grain boundaries, so they exert a force in the direction opposite to grain boundary migration. This effect is called the Zener effect after the man who estimated this drag force to be F = π r λ sin ⁡ ( 2 θ ) {\displaystyle F=\pi r\lambda \sin(2\theta )\,\!} where r is the radius of the particle and λ the interfacial energy of the boundary. If there are N particles per unit volume, their volume fraction f is f = 4 3 π r 3 N {\displaystyle f={\frac {4}{3}}\pi r^{3}N\,\!} assuming they are randomly distributed. A boundary of unit area will intersect all particles within a slab of thickness 2r, which is 2Nr particles. So the number of particles n intersecting a unit area of grain boundary is: n = 3 f 2 π r 2 {\displaystyle n={\frac {3f}{2\pi r^{2}}}\,\!} Now, assuming that the grains grow only due to the influence of curvature, the driving force of growth is 2 λ R {\displaystyle {\frac {2\lambda }{R}}} where (for a homogeneous grain structure) R approximates the mean diameter of the grains. With this, the critical diameter that has to be reached before the grains cease to grow satisfies: n F m a x = 2 λ D c r i t {\displaystyle nF_{max}={\frac {2\lambda }{D_{crit}}}\,\!} This can be reduced to D c r i t = 4 r 3 f {\displaystyle D_{crit}={\frac {4r}{3f}}\,\!} so the critical diameter of the grains depends on the size and volume fraction of the particles at the grain boundaries. 
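The Zener-drag derivation above reduces to a one-line estimate of the limiting grain size, D_crit = 4r/(3f). The particle radii and volume fraction in the sketch below are illustrative assumptions.

```python
import math

def zener_limit(r, f):
    """Limiting grain diameter D_crit = 4r/(3f) for pinning particles
    of radius r (m) and volume fraction f."""
    return 4.0 * r / (3.0 * f)

def particles_per_area(r, f):
    """Number of pinning particles intersecting a unit area of boundary,
    n = 3f/(2*pi*r^2), as derived in the text."""
    return 3.0 * f / (2.0 * math.pi * r ** 2)

# Finer particles at the same volume fraction pin the microstructure at a finer grain size:
for r_um in (1.0, 0.1):
    D = zener_limit(r_um * 1e-6, f=0.01)
    print(f"r = {r_um} um, f = 1% -> D_crit = {D * 1e6:.0f} um")
```

Halving the pinning-particle radius at fixed volume fraction halves the limiting grain diameter, which is why fine insoluble second-phase additions are an effective grain-growth inhibitor.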
[ 30 ] It has also been shown that small bubbles or cavities can act as inclusions. More complicated interactions which slow grain boundary motion include interactions of the surface energies of the two grains and the inclusion, and are discussed in detail by C. S. Smith. [ 31 ] Sintering is an important cause of loss of catalytic activity , especially on supported metal catalysts. It decreases the surface area of the catalyst and changes the surface structure. [ 32 ] For a porous catalytic surface, the pores may collapse due to sintering, resulting in loss of surface area. Sintering is in general an irreversible process. [ 33 ] Small catalyst particles have the highest possible relative surface area, and catalytic reactions are run at high temperature; both factors generally increase the reactivity of a catalyst. However, these are also the circumstances under which sintering occurs. [ 34 ] Specific materials may also increase the rate of sintering. On the other hand, by alloying catalysts with other materials, sintering can be reduced. Rare-earth metals in particular have been shown to reduce sintering of metal catalysts when alloyed. [ 35 ] For many supported metal catalysts , sintering starts to become a significant effect at temperatures over 500 °C (932 °F). [ 32 ] Catalysts that operate at higher temperatures, such as a car catalyst , use structural improvements to reduce or prevent sintering. These improvements are in general in the form of a support made from an inert and thermally stable material such as silica , carbon or alumina . [ 36 ]
https://en.wikipedia.org/wiki/Sintering
Sinuosity , sinuosity index , or sinuosity coefficient of a continuously differentiable curve having at least one inflection point is the ratio of the curvilinear length (along the curve) to the Euclidean distance ( straight line ) between the end points of the curve. This dimensionless quantity can also be rephrased as the "actual path length" divided by the "shortest path length" of a curve. The value ranges from 1 (case of a straight line) to infinity (case of a closed loop, where the shortest path length is zero for an infinitely-long actual path [ 1 ] ). The curve must be continuous (no jump) between the two ends. The sinuosity value is most meaningful when the line is continuously differentiable (no angular points). The distance between the two ends can also be evaluated by a plurality of segments according to a broken line passing through the successive inflection points (sinuosity of order 2). The calculation of the sinuosity is valid in a 3-dimensional space (e.g. for the central axis of the small intestine ), although it is often performed in a plane (with then a possible orthogonal projection of the curve onto the selected plane; "classic" sinuosity on the horizontal plane, longitudinal profile sinuosity on the vertical plane). The classification of a sinuosity (e.g. strong / weak) often depends on the cartographic scale of the curve (see the coastline paradox for further details) and on the velocity of the object flowing along it (river, avalanche, car, bicycle, bobsleigh, skier, high speed train, etc.): the sinuosity of the same curved line could be considered very strong for a high speed train but low for a river. Nevertheless, it is possible to see a very strong sinuosity in the succession of a few river bends, or of the hairpins on some mountain roads. 
The sinuosity S of a curve composed of two similar arcs bending in opposite directions, joined in the same plane so that the curve is continuously differentiable at the junction, with each arc of central angle α, is S = α / (2 sin(α/2)); for two inverted semicircles (α = π) this gives S = π/2 ≈ 1.5708. In studies of rivers, the sinuosity index is similar but not identical to the general form given above. The difference from the general form happens because the downvalley path is not perfectly straight. The sinuosity index can be explained, then, as the deviations from a path defined by the direction of maximum downslope. For this reason, bedrock streams that flow directly downslope have a sinuosity index of 1, and meandering streams have a sinuosity index greater than 1. [ 2 ] It is also possible to distinguish the case where the stream flowing along the line could not physically travel the distance between the ends: in some hydraulic studies, this leads to assigning a sinuosity value of 1 to a torrent flowing over rocky bedrock along a horizontally rectilinear projection, even if the slope angle varies. For rivers, the conventional classes of sinuosity, SI, are: It has been claimed that river shapes are governed by a self-organizing system that causes their average sinuosity (measured in terms of the source-to-mouth distance, not channel length) to be π , [ 3 ] but this has not been borne out by later studies, which found an average value less than 2. [ 4 ]
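For a sampled curve, the definition above (curvilinear length divided by end-to-end distance) translates directly into code. A minimal sketch in Python, using a finely sampled half circle as a check, since its sinuosity should approach π/2:

```python
import math

def sinuosity(points):
    """Curvilinear length along the sampled points divided by the
    straight-line distance between the end points."""
    path = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    chord = math.dist(points[0], points[-1])
    return path / chord

# Half circle of radius 1 sampled at 1001 points: sinuosity -> pi/2
half_circle = [(math.cos(i * math.pi / 1000), math.sin(i * math.pi / 1000))
               for i in range(1001)]
print(sinuosity(half_circle))
```

The same function applies unchanged to 3D curves if the points are (x, y, z) triples, since `math.dist` accepts coordinates of any dimension.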
https://en.wikipedia.org/wiki/Sinuosity
A sinus is a sac or cavity in any organ or tissue , or an abnormal cavity or passage. In common usage, "sinus" usually refers to the paranasal sinuses , which are air cavities in the cranial bones, especially those near the nose and connecting to it. Most individuals have four paired cavities located in the cranial bone or skull. Sinus is Latin for "bay", "pocket", "curve", or "bosom". In anatomy , the term is used in various contexts. The word "sinusitis" is used to indicate that one or more of the membrane linings found in the sinus cavities has become inflamed or infected. It is however distinct from a fistula , which is a tract connecting two epithelial surfaces. If left untreated, infections occurring in the sinus cavities can affect the chest and lungs. The four paired sinuses or air cavities can be referred to as: The function of the sinus cavities within the cranial bone (skull) is not entirely clear. Beliefs about their possible function include: [ 1 ] If one or more of the paired paranasal sinuses or air cavities becomes inflamed, it leads to an infection called sinusitis . The term "sinusitis" means an inflammation of one or more of the sinus cavities. This inflammation causes an increase in internal pressure within these areas. The pressure is often experienced in the cheek area, eyes, nose, on one side of the head (temple areas), and can result in a severe headache. [ 2 ] When diagnosing a sinus infection, one can identify which sinus cavity the infection is located in by the term given to the cavity. Ethmoiditis refers to an infection in the ethmoid sinus cavity/ies, frontal sinusitis refers to an infection occurring in the frontal sinus cavity/ies, antritis is used to refer to an infection in the maxillary sinus cavity/ies whilst sphenoiditis refers to an infection in the sphenoid sinus cavity/ies. Sinusitis can be acute, chronic or recurrent. A sinus infection can have a number of causes. 
Untreated allergies are one of the main contributing factors to the development of sinus infections. A person with a sinus infection often has nasal congestion with thick nasal secretions, fever, and cough (WebMD). Patients can be treated by "reducing the swelling or inflammation in the nasal passages and sinuses, eliminating the infection, promoting drainage from the sinuses, and maintaining open sinuses" (WebMD). Sinusitis can be treated with medications and can also be eliminated by surgery. [ 3 ] Another cause of sinus infections is bacterial invasion of one or more of the sinus cavities. Any bacteria that enter the nasal passages and sinus cavities through the air that is inhaled are trapped by the mucus secreted by the mucous membranes surrounding these areas. These trapped particles can irritate the linings, resulting in swelling and inflammation. "Bacteria that normally cause acute sinusitis are Streptococcus pneumoniae, Haemophilus influenzae, and Moraxella catarrhalis" (WebMD). "These microorganisms, along with Staphylococcus aureus and some anaerobes (bacteria that live without oxygen), are involved in chronic sinusitis" (WebMD). Fungi can also cause chronic sinusitis. Certain abnormalities or trauma-related injuries to the nasal cavity can make effective drainage of mucus from the sinus cavities difficult. This mucus then accumulates in these areas, making the cavity an ideal area in which bacteria can both attach and thrive. Sinusitis or sinus infections usually clear up if treated early and appropriately. [ 4 ] Apart from complications, the outlook for acute bacterial sinusitis is good. People may develop chronic sinusitis or have recurrent attacks of acute sinusitis if they suffer from allergies or if they have any "structural or anatomical causes" which predispose them to developing sinus infections. Viral sinus infections do not, however, respond to conventional treatments such as antibiotics, which act only against bacteria. 
When treating fungal sinusitis, an appropriate antifungal agent is usually administered.
https://en.wikipedia.org/wiki/Sinus_(anatomy)
In mathematics , sine and cosine are trigonometric functions of an angle . The sine and cosine of an acute angle are defined in the context of a right triangle : for the specified angle, its sine is the ratio of the length of the side opposite that angle to the length of the longest side of the triangle (the hypotenuse ), and the cosine is the ratio of the length of the adjacent leg to that of the hypotenuse . For an angle θ {\displaystyle \theta } , the sine and cosine functions are denoted as sin ⁡ ( θ ) {\displaystyle \sin(\theta )} and cos ⁡ ( θ ) {\displaystyle \cos(\theta )} . The definitions of sine and cosine have been extended to any real value in terms of the lengths of certain line segments in a unit circle . More modern definitions express the sine and cosine as infinite series , or as the solutions of certain differential equations , allowing their extension to arbitrary positive and negative values and even to complex numbers . The sine and cosine functions are commonly used to model periodic phenomena such as sound and light waves , the position and velocity of harmonic oscillators, sunlight intensity and day length, and average temperature variations throughout the year. They can be traced to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period . To define the sine and cosine of an acute angle α {\displaystyle \alpha } , start with a right triangle that contains an angle of measure α {\displaystyle \alpha } ; in the accompanying figure, angle α {\displaystyle \alpha } in a right triangle A B C {\displaystyle ABC} is the angle of interest. The three sides of the triangle are named as follows: [ 1 ] Once such a triangle is chosen, the sine of the angle is equal to the length of the opposite side divided by the length of the hypotenuse, and the cosine of the angle is equal to the length of the adjacent side divided by the length of the hypotenuse: [ 1 ] sin ⁡ ( α ) = opposite hypotenuse , cos ⁡ ( α ) = adjacent hypotenuse . 
{\displaystyle \sin(\alpha )={\frac {\text{opposite}}{\text{hypotenuse}}},\qquad \cos(\alpha )={\frac {\text{adjacent}}{\text{hypotenuse}}}.} The other trigonometric functions of the angle can be defined similarly; for example, the tangent is the ratio between the opposite and adjacent sides or equivalently the ratio between the sine and cosine functions. The reciprocal of sine is cosecant, which gives the ratio of the hypotenuse length to the length of the opposite side. Similarly, the reciprocal of cosine is secant, which gives the ratio of the hypotenuse length to that of the adjacent side. The cotangent function is the ratio between the adjacent and opposite sides, a reciprocal of a tangent function. These functions can be formulated as: [ 1 ] tan ⁡ ( θ ) = sin ⁡ ( θ ) cos ⁡ ( θ ) = opposite adjacent , cot ⁡ ( θ ) = 1 tan ⁡ ( θ ) = adjacent opposite , csc ⁡ ( θ ) = 1 sin ⁡ ( θ ) = hypotenuse opposite , sec ⁡ ( θ ) = 1 cos ⁡ ( θ ) = hypotenuse adjacent . {\displaystyle {\begin{aligned}\tan(\theta )&={\frac {\sin(\theta )}{\cos(\theta )}}={\frac {\text{opposite}}{\text{adjacent}}},\\\cot(\theta )&={\frac {1}{\tan(\theta )}}={\frac {\text{adjacent}}{\text{opposite}}},\\\csc(\theta )&={\frac {1}{\sin(\theta )}}={\frac {\text{hypotenuse}}{\text{opposite}}},\\\sec(\theta )&={\frac {1}{\cos(\theta )}}={\frac {\textrm {hypotenuse}}{\textrm {adjacent}}}.\end{aligned}}} As stated, the values sin ⁡ ( α ) {\displaystyle \sin(\alpha )} and cos ⁡ ( α ) {\displaystyle \cos(\alpha )} appear to depend on the choice of a right triangle containing an angle of measure α {\displaystyle \alpha } . However, this is not the case as all such triangles are similar , and so the ratios are the same for each of them. For example, each leg of the 45-45-90 right triangle is 1 unit, and its hypotenuse is 2 {\displaystyle {\sqrt {2}}} ; therefore, sin ⁡ 45 ∘ = cos ⁡ 45 ∘ = 2 2 {\textstyle \sin 45^{\circ }=\cos 45^{\circ }={\frac {\sqrt {2}}{2}}} . 
[ 2 ] The following table shows the special value of each input for both sine and cosine with the domain between 0 < α < π 2 {\textstyle 0<\alpha <{\frac {\pi }{2}}} . The input in this table provides various unit systems such as degree, radian, and so on. The angles other than those five can be obtained by using a calculator. [ 3 ] [ 4 ] The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. [ 5 ] Given a triangle A B C {\displaystyle ABC} with sides a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} , and angles opposite those sides α {\displaystyle \alpha } , β {\displaystyle \beta } , and γ {\displaystyle \gamma } , the law states, sin ⁡ α a = sin ⁡ β b = sin ⁡ γ c . {\displaystyle {\frac {\sin \alpha }{a}}={\frac {\sin \beta }{b}}={\frac {\sin \gamma }{c}}.} This is equivalent to the equality of the first three expressions below: a sin ⁡ α = b sin ⁡ β = c sin ⁡ γ = 2 R , {\displaystyle {\frac {a}{\sin \alpha }}={\frac {b}{\sin \beta }}={\frac {c}{\sin \gamma }}=2R,} where R {\displaystyle R} is the triangle's circumradius . The law of cosines is useful for computing the length of an unknown side if two other sides and an angle are known. [ 5 ] The law states, a 2 + b 2 − 2 a b cos ⁡ ( γ ) = c 2 {\displaystyle a^{2}+b^{2}-2ab\cos(\gamma )=c^{2}} In the case where γ = π / 2 {\displaystyle \gamma =\pi /2} from which cos ⁡ ( γ ) = 0 {\displaystyle \cos(\gamma )=0} , the resulting equation becomes the Pythagorean theorem . [ 6 ] The cross product and dot product are operations on two vectors in Euclidean vector space . The sine and cosine functions can be defined in terms of the cross product and dot product. 
If a {\displaystyle \mathbb {a} } and b {\displaystyle \mathbb {b} } are vectors, and θ {\displaystyle \theta } is the angle between a {\displaystyle \mathbb {a} } and b {\displaystyle \mathbb {b} } , then sine and cosine can be defined as: sin ⁡ ( θ ) = | a × b | | a | | b | , cos ⁡ ( θ ) = a ⋅ b | a | | b | . {\displaystyle {\begin{aligned}\sin(\theta )&={\frac {|\mathbb {a} \times \mathbb {b} |}{|a||b|}},\\\cos(\theta )&={\frac {\mathbb {a} \cdot \mathbb {b} }{|a||b|}}.\end{aligned}}} The sine and cosine functions may also be defined in a more general way by using unit circle , a circle of radius one centered at the origin ( 0 , 0 ) {\displaystyle (0,0)} , formulated as the equation of x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} in the Cartesian coordinate system . Let a line through the origin intersect the unit circle, making an angle of θ {\displaystyle \theta } with the positive half of the x {\displaystyle x} - axis. The x {\displaystyle x} - and y {\displaystyle y} - coordinates of this point of intersection are equal to cos ⁡ ( θ ) {\displaystyle \cos(\theta )} and sin ⁡ ( θ ) {\displaystyle \sin(\theta )} , respectively; that is, [ 7 ] sin ⁡ ( θ ) = y , cos ⁡ ( θ ) = x . {\displaystyle \sin(\theta )=y,\qquad \cos(\theta )=x.} This definition is consistent with the right-angled triangle definition of sine and cosine when 0 < θ < π 2 {\textstyle 0<\theta <{\frac {\pi }{2}}} because the length of the hypotenuse of the unit circle is always 1; mathematically speaking, the sine of an angle equals the opposite side of the triangle, which is simply the y {\displaystyle y} - coordinate. A similar argument can be made for the cosine function to show that the cosine of an angle when 0 < θ < π 2 {\textstyle 0<\theta <{\frac {\pi }{2}}} , even under the new definition using the unit circle. [ 8 ] [ 9 ] Using the unit circle definition has the advantage of drawing a graph of sine and cosine functions. 
This can be done by rotating a point counterclockwise along the circumference of a circle, depending on the input θ > 0 {\displaystyle \theta >0} . For the sine function, if the input is θ = π 2 {\textstyle \theta ={\frac {\pi }{2}}} , the point has rotated counterclockwise to lie exactly on the y {\displaystyle y} - axis. If θ = π {\displaystyle \theta =\pi } , the point is halfway around the circle. If θ = 2 π {\displaystyle \theta =2\pi } , the point has returned to its starting position. It follows that both the sine and cosine functions have range − 1 ≤ y ≤ 1 {\displaystyle -1\leq y\leq 1} . [ 10 ] Extending the angle to the whole real domain, the point rotates counterclockwise continuously. The same construction works for the cosine function, although its point starts from the y {\displaystyle y} - coordinate. In other words, both the sine and cosine functions are periodic : adding the angle of a full revolution, 2 π {\displaystyle 2\pi } , to the argument leaves their values unchanged. Mathematically, [ 11 ] sin ⁡ ( θ + 2 π ) = sin ⁡ ( θ ) , cos ⁡ ( θ + 2 π ) = cos ⁡ ( θ ) . {\displaystyle \sin(\theta +2\pi )=\sin(\theta ),\qquad \cos(\theta +2\pi )=\cos(\theta ).} A function f {\displaystyle f} is said to be odd if f ( − x ) = − f ( x ) {\displaystyle f(-x)=-f(x)} , and is said to be even if f ( − x ) = f ( x ) {\displaystyle f(-x)=f(x)} . The sine function is odd, whereas the cosine function is even. [ 12 ] The sine and cosine functions are similar, differing only by a shift of π 2 {\textstyle {\frac {\pi }{2}}} . This means, [ 13 ] sin ⁡ ( θ ) = cos ⁡ ( π 2 − θ ) , cos ⁡ ( θ ) = sin ⁡ ( π 2 − θ ) . {\displaystyle {\begin{aligned}\sin(\theta )&=\cos \left({\frac {\pi }{2}}-\theta \right),\\\cos(\theta )&=\sin \left({\frac {\pi }{2}}-\theta \right).\end{aligned}}} Zero is the only real fixed point of the sine function; in other words, the only intersection of the sine function and the identity function is sin ⁡ ( 0 ) = 0 {\displaystyle \sin(0)=0} . 
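The periodicity and cofunction identities above are easy to spot-check numerically; a minimal Python sketch (the sample angles are arbitrary):

```python
import math

# Spot-check sin(θ + 2π) = sin(θ), cos(θ + 2π) = cos(θ),
# and the shifts sin(θ) = cos(π/2 − θ), cos(θ) = sin(π/2 − θ).
for theta in (0.0, 0.3, 1.0, 2.5, -4.0):
    assert math.isclose(math.sin(theta + 2 * math.pi), math.sin(theta), abs_tol=1e-12)
    assert math.isclose(math.cos(theta + 2 * math.pi), math.cos(theta), abs_tol=1e-12)
    assert math.isclose(math.sin(theta), math.cos(math.pi / 2 - theta), abs_tol=1e-12)
    assert math.isclose(math.cos(theta), math.sin(math.pi / 2 - theta), abs_tol=1e-12)
print("identities hold at the sampled angles")
```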
The only real fixed point of the cosine function is called the Dottie number . The Dottie number is the unique real root of the equation cos ⁡ ( x ) = x {\displaystyle \cos(x)=x} . The decimal expansion of the Dottie number is approximately 0.739085. [ 14 ] The sine and cosine functions are infinitely differentiable. [ 15 ] The derivative of sine is cosine, and the derivative of cosine is negative sine: [ 16 ] d d x sin ⁡ ( x ) = cos ⁡ ( x ) , d d x cos ⁡ ( x ) = − sin ⁡ ( x ) . {\displaystyle {\frac {d}{dx}}\sin(x)=\cos(x),\qquad {\frac {d}{dx}}\cos(x)=-\sin(x).} Continuing the process in higher-order derivative results in the repeated same functions; the fourth derivative of a sine is the sine itself. [ 15 ] These derivatives can be applied to the first derivative test , according to which the monotonicity of a function can be defined as the inequality of function's first derivative greater or less than equal to zero. [ 17 ] It can also be applied to second derivative test , according to which the concavity of a function can be defined by applying the inequality of the function's second derivative greater or less than equal to zero. [ 18 ] The following table shows that both sine and cosine functions have concavity and monotonicity—the positive sign ( + {\displaystyle +} ) denotes a graph is increasing (going upward) and the negative sign ( − {\displaystyle -} ) is decreasing (going downward)—in certain intervals. [ 19 ] This information can be represented as a Cartesian coordinates system divided into four quadrants. Both sine and cosine functions can be defined by using differential equations. 
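The Dottie number described above can be approximated by simple fixed-point iteration, since repeatedly applying cosine converges to the unique real root of cos(x) = x; a minimal Python sketch:

```python
import math

# Fixed-point iteration x_{n+1} = cos(x_n) converges to the Dottie number,
# the unique real solution of cos(x) = x (convergence holds because
# |d/dx cos(x)| = |sin(x)| < 1 near the fixed point).
x = 1.0
for _ in range(100):
    x = math.cos(x)
print(x)  # ≈ 0.7390851332151607
```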
The pair ( cos ⁡ θ , sin ⁡ θ ) {\displaystyle (\cos \theta ,\sin \theta )} is the solution ( x ( θ ) , y ( θ ) ) {\displaystyle (x(\theta ),y(\theta ))} to the two-dimensional system of differential equations y ′ ( θ ) = x ( θ ) {\displaystyle y'(\theta )=x(\theta )} and x ′ ( θ ) = − y ( θ ) {\displaystyle x'(\theta )=-y(\theta )} with the initial conditions y ( 0 ) = 0 {\displaystyle y(0)=0} and x ( 0 ) = 1 {\displaystyle x(0)=1} . One could interpret the unit circle in the above definitions as the phase space trajectory of this system of differential equations with the given initial conditions. The area under their curves can be obtained by integration over a bounded interval. Their antiderivatives are: ∫ sin ⁡ ( x ) d x = − cos ⁡ ( x ) + C ∫ cos ⁡ ( x ) d x = sin ⁡ ( x ) + C , {\displaystyle \int \sin(x)\,dx=-\cos(x)+C\qquad \int \cos(x)\,dx=\sin(x)+C,} where C {\displaystyle C} denotes the constant of integration . [ 20 ] These antiderivatives may be applied to compute the mensuration properties of both sine and cosine functions' curves over a given interval. For example, the arc length of the sine curve between 0 {\displaystyle 0} and t {\displaystyle t} is ∫ 0 t 1 + cos 2 ⁡ ( x ) d x = 2 E ⁡ ( t , 1 2 ) , {\displaystyle \int _{0}^{t}\!{\sqrt {1+\cos ^{2}(x)}}\,dx={\sqrt {2}}\operatorname {E} \left(t,{\frac {1}{\sqrt {2}}}\right),} where E ⁡ ( φ , k ) {\displaystyle \operatorname {E} (\varphi ,k)} is the incomplete elliptic integral of the second kind with modulus k {\displaystyle k} . It cannot be expressed using elementary functions . 
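Although the arc-length integral has no elementary closed form, it is straightforward to evaluate numerically; a rough sketch using the composite midpoint rule over one full period, where the result should match the closed-form value ≈ 7.6404 quoted in the article:

```python
import math

# Numerically integrate sqrt(1 + cos^2 x) over one full period [0, 2π]
# with the composite midpoint rule.
n = 200_000
a, b = 0.0, 2 * math.pi
h = (b - a) / n
total = h * sum(math.sqrt(1 + math.cos(a + (k + 0.5) * h) ** 2) for k in range(n))
print(total)  # ≈ 7.6404
```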
[ 21 ] In the case of a full period, its arc length is L = 4 2 π 3 Γ ( 1 / 4 ) 2 + Γ ( 1 / 4 ) 2 2 π = 2 π ϖ + 2 ϖ ≈ 7.6404 … {\displaystyle L={\frac {4{\sqrt {2\pi ^{3}}}}{\Gamma (1/4)^{2}}}+{\frac {\Gamma (1/4)^{2}}{\sqrt {2\pi }}}={\frac {2\pi }{\varpi }}+2\varpi \approx 7.6404\ldots } where Γ {\displaystyle \Gamma } is the gamma function and ϖ {\displaystyle \varpi } is the lemniscate constant . [ 22 ] The inverse function of sine is arcsine or inverse sine, denoted as "arcsin", "asin", or sin − 1 {\displaystyle \sin ^{-1}} . [ 23 ] The inverse function of cosine is arccosine, denoted as "arccos", "acos", or cos − 1 {\displaystyle \cos ^{-1}} . [ a ] As sine and cosine are not injective , their inverses are not exact inverse functions, but partial inverse functions. For example, sin ⁡ ( 0 ) = 0 {\displaystyle \sin(0)=0} , but also sin ⁡ ( π ) = 0 {\displaystyle \sin(\pi )=0} , sin ⁡ ( 2 π ) = 0 {\displaystyle \sin(2\pi )=0} , and so on. It follows that the arcsine function is multivalued: arcsin ⁡ ( 0 ) = 0 {\displaystyle \arcsin(0)=0} , but also arcsin ⁡ ( 0 ) = π {\displaystyle \arcsin(0)=\pi } , arcsin ⁡ ( 0 ) = 2 π {\displaystyle \arcsin(0)=2\pi } , and so on. When only one value is desired, the function may be restricted to its principal branch . With this restriction, for each x {\displaystyle x} in the domain, the expression arcsin ⁡ ( x ) {\displaystyle \arcsin(x)} will evaluate only to a single value, called its principal value . The standard range of principal values for arcsin is from − π 2 {\textstyle -{\frac {\pi }{2}}} to π 2 {\textstyle {\frac {\pi }{2}}} , and the standard range for arccos is from 0 {\displaystyle 0} to π {\displaystyle \pi } . 
[ 24 ] The inverse function of both sine and cosine are defined as: [ citation needed ] θ = arcsin ⁡ ( opposite hypotenuse ) = arccos ⁡ ( adjacent hypotenuse ) , {\displaystyle \theta =\arcsin \left({\frac {\text{opposite}}{\text{hypotenuse}}}\right)=\arccos \left({\frac {\text{adjacent}}{\text{hypotenuse}}}\right),} where for some integer k {\displaystyle k} , sin ⁡ ( y ) = x ⟺ y = arcsin ⁡ ( x ) + 2 π k , or y = π − arcsin ⁡ ( x ) + 2 π k cos ⁡ ( y ) = x ⟺ y = arccos ⁡ ( x ) + 2 π k , or y = − arccos ⁡ ( x ) + 2 π k {\displaystyle {\begin{aligned}\sin(y)=x\iff &y=\arcsin(x)+2\pi k,{\text{ or }}\\&y=\pi -\arcsin(x)+2\pi k\\\cos(y)=x\iff &y=\arccos(x)+2\pi k,{\text{ or }}\\&y=-\arccos(x)+2\pi k\end{aligned}}} By definition, both functions satisfy the equations: [ citation needed ] sin ⁡ ( arcsin ⁡ ( x ) ) = x cos ⁡ ( arccos ⁡ ( x ) ) = x {\displaystyle \sin(\arcsin(x))=x\qquad \cos(\arccos(x))=x} and arcsin ⁡ ( sin ⁡ ( θ ) ) = θ for − π 2 ≤ θ ≤ π 2 arccos ⁡ ( cos ⁡ ( θ ) ) = θ for 0 ≤ θ ≤ π {\displaystyle {\begin{aligned}\arcsin(\sin(\theta ))=\theta \quad &{\text{for}}\quad -{\frac {\pi }{2}}\leq \theta \leq {\frac {\pi }{2}}\\\arccos(\cos(\theta ))=\theta \quad &{\text{for}}\quad 0\leq \theta \leq \pi \end{aligned}}} According to Pythagorean theorem , the squared hypotenuse is the sum of two squared legs of a right triangle. Dividing the formula on both sides with squared hypotenuse resulting in the Pythagorean trigonometric identity , the sum of a squared sine and a squared cosine equals 1: [ 25 ] [ b ] sin 2 ⁡ ( θ ) + cos 2 ⁡ ( θ ) = 1. 
{\displaystyle \sin ^{2}(\theta )+\cos ^{2}(\theta )=1.} Sine and cosine satisfy the following double-angle formulas: [ 26 ] sin ⁡ ( 2 θ ) = 2 sin ⁡ ( θ ) cos ⁡ ( θ ) , cos ⁡ ( 2 θ ) = cos 2 ⁡ ( θ ) − sin 2 ⁡ ( θ ) = 2 cos 2 ⁡ ( θ ) − 1 = 1 − 2 sin 2 ⁡ ( θ ) {\displaystyle {\begin{aligned}\sin(2\theta )&=2\sin(\theta )\cos(\theta ),\\\cos(2\theta )&=\cos ^{2}(\theta )-\sin ^{2}(\theta )\\&=2\cos ^{2}(\theta )-1\\&=1-2\sin ^{2}(\theta )\end{aligned}}} The cosine double angle formula implies that sin 2 and cos 2 are, themselves, shifted and scaled sine waves. Specifically, [ 27 ] sin 2 ⁡ ( θ ) = 1 − cos ⁡ ( 2 θ ) 2 cos 2 ⁡ ( θ ) = 1 + cos ⁡ ( 2 θ ) 2 {\displaystyle \sin ^{2}(\theta )={\frac {1-\cos(2\theta )}{2}}\qquad \cos ^{2}(\theta )={\frac {1+\cos(2\theta )}{2}}} The graph shows both sine and sine squared functions, with the sine in blue and the sine squared in red. Both graphs have the same shape but with different ranges of values and different periods. Sine squared has only positive values, but twice the number of periods. [ citation needed ] Both sine and cosine functions can be defined by using a Taylor series , a power series involving the higher-order derivatives. As mentioned in § Continuity and differentiation , the derivative of sine is cosine and that the derivative of cosine is the negative of sine. This means the successive derivatives of sin ⁡ ( x ) {\displaystyle \sin(x)} are cos ⁡ ( x ) {\displaystyle \cos(x)} , − sin ⁡ ( x ) {\displaystyle -\sin(x)} , − cos ⁡ ( x ) {\displaystyle -\cos(x)} , sin ⁡ ( x ) {\displaystyle \sin(x)} , continuing to repeat those four functions. The ( 4 n + k ) {\displaystyle (4n+k)} - th derivative, evaluated at the point 0: sin ( 4 n + k ) ⁡ ( 0 ) = { 0 when k = 0 1 when k = 1 0 when k = 2 − 1 when k = 3 {\displaystyle \sin ^{(4n+k)}(0)={\begin{cases}0&{\text{when }}k=0\\1&{\text{when }}k=1\\0&{\text{when }}k=2\\-1&{\text{when }}k=3\end{cases}}} where the superscript represents repeated differentiation. 
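The repeating pattern of derivatives at 0 generates the familiar Taylor coefficients of sine; a short Python sketch that sums the resulting series and compares the partial sum with the library value:

```python
import math

def sin_taylor(x, terms=20):
    """Partial sum of the Taylor series x - x^3/3! + x^5/5! - ... at 0."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

# The series converges rapidly for moderate |x|.
for x in (0.1, 1.0, 3.0):
    assert math.isclose(sin_taylor(x), math.sin(x), abs_tol=1e-12)
print("Taylor partial sums agree with math.sin at the sampled points")
```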
This derivative pattern implies the following Taylor series expansion at x = 0 {\displaystyle x=0} . One can then use the theory of Taylor series to show that the following identities hold for all real numbers x {\displaystyle x} —where x {\displaystyle x} is the angle in radians. [ 28 ] More generally, for all complex numbers : [ 29 ] sin ⁡ ( x ) = x − x 3 3 ! + x 5 5 ! − x 7 7 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) ! x 2 n + 1 {\displaystyle {\begin{aligned}\sin(x)&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}\end{aligned}}} Taking the derivative of each term gives the Taylor series for cosine: [ 28 ] [ 29 ] cos ⁡ ( x ) = 1 − x 2 2 ! + x 4 4 ! − x 6 6 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! x 2 n {\displaystyle {\begin{aligned}\cos(x)&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}\end{aligned}}} Sine and cosine functions with multiple angles may appear as a linear combination, resulting in a polynomial known as a trigonometric polynomial . Trigonometric polynomials find ample application in interpolation and in the extension to periodic functions known as the Fourier series . Let a n {\displaystyle a_{n}} and b n {\displaystyle b_{n}} be any coefficients; then the trigonometric polynomial of degree N {\displaystyle N} —denoted T ( x ) {\displaystyle T(x)} —is defined as: [ 30 ] [ 31 ] T ( x ) = a 0 + ∑ n = 1 N a n cos ⁡ ( n x ) + ∑ n = 1 N b n sin ⁡ ( n x ) . {\displaystyle T(x)=a_{0}+\sum _{n=1}^{N}a_{n}\cos(nx)+\sum _{n=1}^{N}b_{n}\sin(nx).} The trigonometric series can be defined analogously to the trigonometric polynomial, as its infinite version. Let A n {\displaystyle A_{n}} and B n {\displaystyle B_{n}} be any coefficients; then the trigonometric series can be defined as: [ 32 ] 1 2 A 0 + ∑ n = 1 ∞ A n cos ⁡ ( n x ) + B n sin ⁡ ( n x ) . 
{\displaystyle {\frac {1}{2}}A_{0}+\sum _{n=1}^{\infty }A_{n}\cos(nx)+B_{n}\sin(nx).} In the case of a Fourier series with a given integrable function f {\displaystyle f} , the coefficients of a trigonometric series are: [ 33 ] A n = 1 π ∫ 0 2 π f ( x ) cos ⁡ ( n x ) d x , B n = 1 π ∫ 0 2 π f ( x ) sin ⁡ ( n x ) d x . {\displaystyle {\begin{aligned}A_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\cos(nx)\,dx,\\B_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\sin(nx)\,dx.\end{aligned}}} Both sine and cosine can be extended further via complex number , a set of numbers composed of both real and imaginary numbers . For real number θ {\displaystyle \theta } , the definition of both sine and cosine functions can be extended in a complex plane in terms of an exponential function as follows: [ 34 ] sin ⁡ ( θ ) = e i θ − e − i θ 2 i , cos ⁡ ( θ ) = e i θ + e − i θ 2 , {\displaystyle {\begin{aligned}\sin(\theta )&={\frac {e^{i\theta }-e^{-i\theta }}{2i}},\\\cos(\theta )&={\frac {e^{i\theta }+e^{-i\theta }}{2}},\end{aligned}}} Alternatively, both functions can be defined in terms of Euler's formula : [ 34 ] e i θ = cos ⁡ ( θ ) + i sin ⁡ ( θ ) , e − i θ = cos ⁡ ( θ ) − i sin ⁡ ( θ ) . {\displaystyle {\begin{aligned}e^{i\theta }&=\cos(\theta )+i\sin(\theta ),\\e^{-i\theta }&=\cos(\theta )-i\sin(\theta ).\end{aligned}}} When plotted on the complex plane , the function e i x {\displaystyle e^{ix}} for real values of x {\displaystyle x} traces out the unit circle in the complex plane. Both sine and cosine functions may be simplified to the imaginary and real parts of e i θ {\displaystyle e^{i\theta }} as: [ 35 ] sin ⁡ θ = Im ⁡ ( e i θ ) , cos ⁡ θ = Re ⁡ ( e i θ ) . 
{\displaystyle {\begin{aligned}\sin \theta &=\operatorname {Im} (e^{i\theta }),\\\cos \theta &=\operatorname {Re} (e^{i\theta }).\end{aligned}}} When z = x + i y {\displaystyle z=x+iy} for real values x {\displaystyle x} and y {\displaystyle y} , where i = − 1 {\displaystyle i={\sqrt {-1}}} , both sine and cosine functions can be expressed in terms of real sines, cosines, and hyperbolic functions as: [ citation needed ] sin ⁡ z = sin ⁡ x cosh ⁡ y + i cos ⁡ x sinh ⁡ y , cos ⁡ z = cos ⁡ x cosh ⁡ y − i sin ⁡ x sinh ⁡ y . {\displaystyle {\begin{aligned}\sin z&=\sin x\cosh y+i\cos x\sinh y,\\\cos z&=\cos x\cosh y-i\sin x\sinh y.\end{aligned}}} Sine and cosine are used to connect the real and imaginary parts of a complex number with its polar coordinates ( r , θ ) {\displaystyle (r,\theta )} : z = r ( cos ⁡ ( θ ) + i sin ⁡ ( θ ) ) , {\displaystyle z=r(\cos(\theta )+i\sin(\theta )),} and the real and imaginary parts are Re ⁡ ( z ) = r cos ⁡ ( θ ) , Im ⁡ ( z ) = r sin ⁡ ( θ ) , {\displaystyle {\begin{aligned}\operatorname {Re} (z)&=r\cos(\theta ),\\\operatorname {Im} (z)&=r\sin(\theta ),\end{aligned}}} where r {\displaystyle r} and θ {\displaystyle \theta } represent the magnitude and angle of the complex number z {\displaystyle z} . For any real number θ {\displaystyle \theta } , Euler's formula in terms of polar coordinates is stated as z = r e i θ {\textstyle z=re^{i\theta }} . Applying the series definition of the sine and cosine to a complex argument, z , gives: where sinh and cosh are the hyperbolic sine and cosine . These are entire functions . 
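The decomposition of the complex sine and cosine into real trigonometric and hyperbolic parts can be checked numerically with Python's cmath module (the sample points are arbitrary):

```python
import cmath
import math

# Check sin(x + iy) = sin x cosh y + i cos x sinh y and
#       cos(x + iy) = cos x cosh y - i sin x sinh y.
for x, y in ((0.5, 0.25), (1.0, -2.0), (-3.0, 1.5)):
    z = complex(x, y)
    s = complex(math.sin(x) * math.cosh(y), math.cos(x) * math.sinh(y))
    c = complex(math.cos(x) * math.cosh(y), -math.sin(x) * math.sinh(y))
    assert cmath.isclose(cmath.sin(z), s, rel_tol=1e-12)
    assert cmath.isclose(cmath.cos(z), c, rel_tol=1e-12)
print("identities verified at the sampled points")
```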
It is also sometimes useful to express the complex sine and cosine functions in terms of the real and imaginary parts of its argument: Using the partial fraction expansion technique in complex analysis , one can find that the infinite series ∑ n = − ∞ ∞ ( − 1 ) n z − n = 1 z − 2 z ∑ n = 1 ∞ ( − 1 ) n n 2 − z 2 {\displaystyle \sum _{n=-\infty }^{\infty }{\frac {(-1)^{n}}{z-n}}={\frac {1}{z}}-2z\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{n^{2}-z^{2}}}} both converge and are equal to π sin ⁡ ( π z ) {\textstyle {\frac {\pi }{\sin(\pi z)}}} . Similarly, one can show that π 2 sin 2 ⁡ ( π z ) = ∑ n = − ∞ ∞ 1 ( z − n ) 2 . {\displaystyle {\frac {\pi ^{2}}{\sin ^{2}(\pi z)}}=\sum _{n=-\infty }^{\infty }{\frac {1}{(z-n)^{2}}}.} Using product expansion technique, one can derive sin ⁡ ( π z ) = π z ∏ n = 1 ∞ ( 1 − z 2 n 2 ) . {\displaystyle \sin(\pi z)=\pi z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}}}\right).} sin( z ) is found in the functional equation for the Gamma function , which in turn is found in the functional equation for the Riemann zeta-function , As a holomorphic function , sin z is a 2D solution of Laplace's equation : The complex sine function is also related to the level curves of pendulums . [ how? ] [ 36 ] [ better source needed ] The word sine is derived, indirectly, from the Sanskrit word jyā 'bow-string' or more specifically its synonym jīvá (both adopted from Ancient Greek χορδή 'string; chord'), due to visual similarity between the arc of a circle with its corresponding chord and a bow with its string (see jyā, koti-jyā and utkrama-jyā ; sine and chord are closely related in a circle of unit diameter, see Ptolemy’s Theorem ). This was transliterated in Arabic as jība , which is meaningless in that language and written as jb ( جب ). Since Arabic is written without short vowels, jb was interpreted as the homograph jayb ( جيب ), which means 'bosom', 'pocket', or 'fold'. 
[ 37 ] [ 38 ] When the Arabic texts of Al-Battani and al-Khwārizmī were translated into Medieval Latin in the 12th century by Gerard of Cremona , he used the Latin equivalent sinus (which also means 'bay' or 'fold', and more specifically 'the hanging fold of a toga over the breast'). [ 39 ] [ 40 ] [ 41 ] Gerard was probably not the first scholar to use this translation; Robert of Chester appears to have preceded him and there is evidence of even earlier usage. [ 42 ] [ 43 ] The English form sine was introduced in Thomas Fale 's 1593 Horologiographia . [ 44 ] The word cosine derives from an abbreviation of the Latin complementi sinus 'sine of the complementary angle ' as cosinus in Edmund Gunter 's Canon triangulorum (1620), which also includes a similar definition of cotangens . [ 45 ] While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was discovered by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). [ 46 ] The sine and cosine functions are closely related to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period ( Aryabhatiya and Surya Siddhanta ), via translation from Sanskrit to Arabic and then from Arabic to Latin. [ 39 ] [ 47 ] All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines , used in solving triangles . [ 48 ] Al-Khwārizmī (c. 780–850) produced tables of sines, cosines and tangents. [ 49 ] [ 50 ] Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. [ 50 ] In the early 17th-century, the French mathematician Albert Girard published the first use of the abbreviations sin , cos , and tan ; these were further promulgated by Euler (see below). 
The Opus palatinum de triangulis of Georg Joachim Rheticus , a student of Copernicus , was probably the first in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596. In a paper published in 1682, Leibniz proved that sin x is not an algebraic function of x . [ 51 ] Roger Cotes computed the derivative of sine in his Harmonia Mensurarum (1722). [ 52 ] Leonhard Euler 's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, also defining them as infinite series and presenting " Euler's formula ", as well as the near-modern abbreviations sin. , cos. , tang. , cot. , sec. , and cosec. [ 39 ] There is no standard algorithm for calculating sine and cosine. IEEE 754 , the most widely used standard for the specification of reliable floating-point computation, does not address calculating trigonometric functions such as sine. The reason is that no efficient algorithm is known for computing sine and cosine with a specified accuracy, especially for large inputs. [ 53 ] Algorithms for calculating sine may be balanced for such constraints as speed, accuracy, portability, or range of input values accepted. This can lead to different results for different algorithms, especially for special circumstances such as very large inputs, e.g. sin(10 22 ) . A common programming optimization, used especially in 3D graphics, is to pre-calculate a table of sine values, for example one value per degree, then for values in-between pick the closest pre-calculated value, or linearly interpolate between the 2 closest values to approximate it. This allows results to be looked up from a table rather than being calculated in real time. With modern CPU architectures this method may offer no advantage. 
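The table-plus-interpolation optimization described above can be sketched as follows; the one-value-per-degree granularity is an arbitrary choice for illustration:

```python
import math

# Precompute sine at one-degree steps, then approximate intermediate
# angles by linear interpolation between the two nearest table entries.
TABLE = [math.sin(math.radians(d)) for d in range(361)]

def sin_lut(degrees):
    d = degrees % 360.0
    i = int(d)          # lower table index
    frac = d - i        # fractional position between entries
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

# Linear interpolation on a 1° grid is accurate to roughly (π/180)²/8 ≈ 4e-5.
err = max(abs(sin_lut(t / 10) - math.sin(math.radians(t / 10)))
          for t in range(7200))
print(f"max error ≈ {err:.1e}")
```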
[ citation needed ] The CORDIC algorithm is commonly used in scientific calculators. The sine and cosine functions, along with other trigonometric functions, are widely available across programming languages and platforms. In computing, they are typically abbreviated to sin and cos . Some CPU architectures have a built-in instruction for sine, including the Intel x87 FPUs since the 80387. In programming languages, sin and cos are typically either a built-in function or found within the language's standard math library. For example, the C standard library defines sine functions within math.h : sin( double ) , sinf( float ) , and sinl( long double ) . The parameter of each is a floating point value, specifying the angle in radians. Each function returns the same data type as it accepts. Many other trigonometric functions are also defined in math.h , such as for cosine, arc sine, and hyperbolic sine (sinh). Similarly, Python defines math.sin(x) and math.cos(x) within the built-in math module. Complex sine and cosine functions are also available within the cmath module, e.g. cmath.sin(z) . CPython 's math functions call the C math library, and use a double-precision floating-point format . Some software libraries provide implementations of sine and cosine using the input angle in half- turns , a half-turn being an angle of 180 degrees or π {\displaystyle \pi } radians. Representing angles in turns or half-turns has accuracy advantages and efficiency advantages in some cases. [ 54 ] [ 55 ] These functions are called sinpi and cospi in MATLAB, [ 54 ] OpenCL, [ 56 ] R, [ 55 ] Julia, [ 57 ] CUDA, [ 58 ] and ARM. [ 59 ] For example, sinpi(x) would evaluate to sin ⁡ ( π x ) , {\displaystyle \sin(\pi x),} where x is expressed in half-turns, and consequently the final input to the function, πx can be interpreted in radians by sin . 
The accuracy advantage stems from the ability to represent key angles like full-turn, half-turn, and quarter-turn losslessly in binary floating-point or fixed-point. In contrast, representing 2 π {\displaystyle 2\pi } , π {\displaystyle \pi } , and π 2 {\textstyle {\frac {\pi }{2}}} in binary floating-point or binary scaled fixed-point always involves a loss of accuracy, since irrational numbers cannot be represented with finitely many binary digits. Turns also have an accuracy advantage and an efficiency advantage for reduction modulo one period: modulo 1 turn or modulo 2 half-turns can be computed losslessly and efficiently in both floating-point and fixed-point. For example, computing modulo 1 or modulo 2 for a binary-point-scaled fixed-point value requires only a bit shift or bitwise AND operation. In contrast, computing modulo π 2 {\textstyle {\frac {\pi }{2}}} involves inaccuracies in representing π 2 {\textstyle {\frac {\pi }{2}}} . For applications involving angle sensors, the sensor typically provides angle measurements in a form directly compatible with turns or half-turns. For example, an angle sensor may count from 0 to 4096 over one complete revolution. [ 60 ] If half-turns are used as the unit for angle, then the value provided by the sensor directly and losslessly maps to a fixed-point data type with 11 bits to the right of the binary point. In contrast, if radians are used as the unit for storing the angle, then the inaccuracies and cost of multiplying the raw sensor integer by an approximation to π 2048 {\textstyle {\frac {\pi }{2048}}} would be incurred.
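The encoder example above can be sketched concretely; the 12-bit sensor and the 2048-counts-per-half-turn scaling are the illustrative assumptions from the text:

```python
import math

COUNTS_PER_REV = 4096        # hypothetical 12-bit encoder: 2**12 counts/revolution

def counts_to_half_turns(raw):
    """Map a raw encoder count to half-turns (11 fractional bits)."""
    reduced = raw & (COUNTS_PER_REV - 1)   # lossless modulo 4096: one bit mask
    return reduced / 2048                  # 2048 counts per half-turn

def sensor_sin(raw):
    # a sinpi-style evaluation: sin(pi * angle_in_half_turns)
    return math.sin(math.pi * counts_to_half_turns(raw))
```

Note that the period reduction is a single bitwise AND on the integer count; no approximation of π is needed until the final sine evaluation.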
https://en.wikipedia.org/wiki/Sinus_(trigonometry)
In mathematics , the trigonometric functions (also called circular functions , angle functions or goniometric functions ) [ 1 ] are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in all sciences that are related to geometry , such as navigation , solid mechanics , celestial mechanics , geodesy , and many others. They are among the simplest periodic functions , and as such are also widely used for studying periodic phenomena through Fourier analysis . The trigonometric functions most widely used in modern mathematics are the sine , the cosine , and the tangent functions. Their reciprocals are respectively the cosecant , the secant , and the cotangent functions, which are less used. Each of these six trigonometric functions has a corresponding inverse function , and an analog among the hyperbolic functions . The oldest definitions of trigonometric functions, related to right-angle triangles, define them only for acute angles . To extend the sine and cosine functions to functions whose domain is the whole real line , geometrical definitions using the standard unit circle (i.e., a circle with radius 1 unit) are often used; then the domain of the other functions is the real line with some isolated points removed. Modern definitions express trigonometric functions as infinite series or as solutions of differential equations . This allows extending the domain of sine and cosine functions to the whole complex plane , and the domain of the other trigonometric functions to the complex plane with some isolated points removed. Conventionally, an abbreviation of each trigonometric function's name is used as its symbol in formulas. Today, the most common versions of these abbreviations are " sin " for sine, " cos " for cosine, " tan " or " tg " for tangent, " sec " for secant, " csc " or " cosec " for cosecant, and " cot " or " ctg " for cotangent. 
Historically, these abbreviations were first used in prose sentences to indicate particular line segments or their lengths related to an arc of an arbitrary circle, and later to indicate ratios of lengths, but as the function concept developed in the 17th–18th century, they began to be considered as functions of real-number-valued angle measures, and written with functional notation , for example sin( x ) . Parentheses are still often omitted to reduce clutter, but are sometimes necessary; for example the expression sin ⁡ x + y {\displaystyle \sin x+y} would typically be interpreted to mean ( sin ⁡ x ) + y , {\displaystyle (\sin x)+y,} so parentheses are required to express sin ⁡ ( x + y ) . {\displaystyle \sin(x+y).} A positive integer appearing as a superscript after the symbol of the function denotes exponentiation , not function composition . For example sin 2 ⁡ x {\displaystyle \sin ^{2}x} and sin 2 ⁡ ( x ) {\displaystyle \sin ^{2}(x)} denote ( sin ⁡ x ) 2 , {\displaystyle (\sin x)^{2},} not sin ⁡ ( sin ⁡ x ) . {\displaystyle \sin(\sin x).} This differs from the (historically later) general functional notation in which f 2 ( x ) = ( f ∘ f ) ( x ) = f ( f ( x ) ) . {\displaystyle f^{2}(x)=(f\circ f)(x)=f(f(x)).} In contrast, the superscript − 1 {\displaystyle -1} is commonly used to denote the inverse function , not the reciprocal . For example sin − 1 ⁡ x {\displaystyle \sin ^{-1}x} and sin − 1 ⁡ ( x ) {\displaystyle \sin ^{-1}(x)} denote the inverse trigonometric function alternatively written arcsin ⁡ x . {\displaystyle \arcsin x\,.} The equation θ = sin − 1 ⁡ x {\displaystyle \theta =\sin ^{-1}x} implies sin ⁡ θ = x , {\displaystyle \sin \theta =x,} not θ ⋅ sin ⁡ x = 1. {\displaystyle \theta \cdot \sin x=1.} In this case, the superscript could be considered as denoting a composed or iterated function , but negative superscripts other than − 1 {\displaystyle {-1}} are not in common use. 
If the acute angle θ is given, then any right triangles that have an angle of θ are similar to each other. This means that the ratio of any two side lengths depends only on θ . Thus these six ratios define six functions of θ , which are the trigonometric functions. In the following definitions, the hypotenuse is the length of the side opposite the right angle, opposite represents the side opposite the given angle θ , and adjacent represents the side between the angle θ and the right angle. [ 2 ] [ 3 ] Various mnemonics can be used to remember these definitions. In a right-angled triangle, the sum of the two acute angles is a right angle, that is, 90° or ⁠ π / 2 ⁠ radians . Therefore sin ⁡ ( θ ) {\displaystyle \sin(\theta )} and cos ⁡ ( 90 ∘ − θ ) {\displaystyle \cos(90^{\circ }-\theta )} represent the same ratio, and thus are equal. This identity and analogous relationships between the other trigonometric functions are summarized in the following table. In geometric applications, the argument of a trigonometric function is generally the measure of an angle . For this purpose, any angular unit is convenient. One common unit is degrees , in which a right angle is 90° and a complete turn is 360° (particularly in elementary mathematics ). However, in calculus and mathematical analysis , the trigonometric functions are generally regarded more abstractly as functions of real or complex numbers , rather than angles. In fact, the functions sin and cos can be defined for all complex numbers in terms of the exponential function , via power series, [ 5 ] or as solutions to differential equations given particular initial values [ 6 ] ( see below ), without reference to any geometric notions. The other four trigonometric functions ( tan , cot , sec , csc ) can be defined as quotients and reciprocals of sin and cos , except where zero occurs in the denominator. 
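The ratio definitions and the cofunction identity above can be checked numerically on a concrete right triangle; the 3-4-5 triangle below is just a convenient example:

```python
import math

# theta is the angle opposite the side of length 3 in a 3-4-5 right triangle.
opposite, adjacent, hypotenuse = 3.0, 4.0, 5.0
theta = math.atan2(opposite, adjacent)

assert math.isclose(math.sin(theta), opposite / hypotenuse)  # sin = opposite/hypotenuse
assert math.isclose(math.cos(theta), adjacent / hypotenuse)  # cos = adjacent/hypotenuse
assert math.isclose(math.tan(theta), opposite / adjacent)    # tan = opposite/adjacent
# Cofunction identity: sin(theta) equals the cosine of the complementary angle.
assert math.isclose(math.sin(theta), math.cos(math.pi / 2 - theta))
```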
It can be proved, for real arguments, that these definitions coincide with elementary geometric definitions if the argument is regarded as an angle in radians. [ 5 ] Moreover, these definitions result in simple expressions for the derivatives and indefinite integrals for the trigonometric functions. [ 7 ] Thus, in settings beyond elementary geometry, radians are regarded as the mathematically natural unit for describing angle measures. When radians (rad) are employed, the angle is given as the length of the arc of the unit circle subtended by it: the angle that subtends an arc of length 1 on the unit circle is 1 rad (≈ 57.3°), [ 8 ] and a complete turn (360°) is an angle of 2 π (≈ 6.28) rad. [ 9 ] For real number x , the notation sin x , cos x , etc. refers to the value of the trigonometric functions evaluated at an angle of x rad. If units of degrees are intended, the degree sign must be explicitly shown ( sin x° , cos x° , etc.). Using this standard notation, the argument x for the trigonometric functions satisfies the relationship x = (180 x / π )°, so that, for example, sin π = sin 180° when we take x = π . In this way, the degree symbol can be regarded as a mathematical constant such that 1° = π /180 ≈ 0.0175. [ 10 ] The six trigonometric functions can be defined as coordinate values of points on the Euclidean plane that are related to the unit circle , which is the circle of radius one centered at the origin O of this coordinate system. While right-angled triangle definitions allow for the definition of the trigonometric functions for angles between 0 and π 2 {\textstyle {\frac {\pi }{2}}} radians (90°), the unit circle definitions allow the domain of trigonometric functions to be extended to all positive and negative real numbers. 
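The radian/degree convention above translates directly into code: sin x takes x in radians, and sin x° means sin of (π/180)·x. A small sketch using Python's `math` module:

```python
import math

DEG = math.pi / 180            # the "degree constant": 1 deg ~ 0.0175 rad

def sin_degrees(x):
    """sin of an angle given in degrees, per the convention sin x deg."""
    return math.sin(math.radians(x))   # math.radians multiplies by pi/180
```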
Let L {\displaystyle {\mathcal {L}}} be the ray obtained by rotating by an angle θ the positive half of the x -axis ( counterclockwise rotation for θ > 0 , {\displaystyle \theta >0,} and clockwise rotation for θ < 0 {\displaystyle \theta <0} ). This ray intersects the unit circle at the point A = ( x A , y A ) . {\displaystyle \mathrm {A} =(x_{\mathrm {A} },y_{\mathrm {A} }).} The ray L , {\displaystyle {\mathcal {L}},} extended to a line if necessary, intersects the line of equation x = 1 {\displaystyle x=1} at point B = ( 1 , y B ) , {\displaystyle \mathrm {B} =(1,y_{\mathrm {B} }),} and the line of equation y = 1 {\displaystyle y=1} at point C = ( x C , 1 ) . {\displaystyle \mathrm {C} =(x_{\mathrm {C} },1).} The tangent line to the unit circle at the point A , is perpendicular to L , {\displaystyle {\mathcal {L}},} and intersects the y - and x -axes at points D = ( 0 , y D ) {\displaystyle \mathrm {D} =(0,y_{\mathrm {D} })} and E = ( x E , 0 ) . {\displaystyle \mathrm {E} =(x_{\mathrm {E} },0).} The coordinates of these points give the values of all trigonometric functions for any arbitrary real value of θ in the following manner. The trigonometric functions cos and sin are defined, respectively, as the x - and y -coordinate values of point A . That is, In the range 0 ≤ θ ≤ π / 2 {\displaystyle 0\leq \theta \leq \pi /2} , this definition coincides with the right-angled triangle definition, by taking the right-angled triangle to have the unit radius OA as hypotenuse . And since the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} holds for all points P = ( x , y ) {\displaystyle \mathrm {P} =(x,y)} on the unit circle, this definition of cosine and sine also satisfies the Pythagorean identity . 
The other trigonometric functions can be found along the unit circle as tan θ = y B {\displaystyle \tan \theta =y_{\mathrm {B} }} , cot θ = x C {\displaystyle \cot \theta =x_{\mathrm {C} }} , sec θ = x E {\displaystyle \sec \theta =x_{\mathrm {E} }} , and csc θ = y D . {\displaystyle \csc \theta =y_{\mathrm {D} }.} By applying the Pythagorean identity and geometric proof methods, these definitions can readily be shown to coincide with the definitions of tangent, cotangent, secant and cosecant in terms of sine and cosine, that is tan θ = sin θ cos θ , cot θ = cos θ sin θ , sec θ = 1 cos θ , csc θ = 1 sin θ . {\displaystyle \tan \theta ={\frac {\sin \theta }{\cos \theta }},\quad \cot \theta ={\frac {\cos \theta }{\sin \theta }},\quad \sec \theta ={\frac {1}{\cos \theta }},\quad \csc \theta ={\frac {1}{\sin \theta }}.} Since a rotation of an angle of ± 2 π {\displaystyle \pm 2\pi } does not change the position or size of a shape, the points A , B , C , D , and E are the same for two angles whose difference is an integer multiple of 2 π {\displaystyle 2\pi } . Thus trigonometric functions are periodic functions with period 2 π {\displaystyle 2\pi } . That is, the equalities sin θ = sin ( θ + 2 k π ) {\displaystyle \sin \theta =\sin \left(\theta +2k\pi \right)} and cos θ = cos ( θ + 2 k π ) {\displaystyle \cos \theta =\cos \left(\theta +2k\pi \right)} hold for any angle θ and any integer k . The same is true for the four other trigonometric functions. By observing the sign and the monotonicity of the functions sine, cosine, cosecant, and secant in the four quadrants, one can show that 2 π {\displaystyle 2\pi } is the smallest value for which they are periodic (i.e., 2 π {\displaystyle 2\pi } is the fundamental period of these functions). However, after a rotation by an angle π {\displaystyle \pi } , the points B and C already return to their original position, so that the tangent function and the cotangent function have a fundamental period of π {\displaystyle \pi } . That is, the equalities tan θ = tan ( θ + k π ) {\displaystyle \tan \theta =\tan \left(\theta +k\pi \right)} and cot θ = cot ( θ + k π ) {\displaystyle \cot \theta =\cot \left(\theta +k\pi \right)} hold for any angle θ and any integer k . The algebraic expressions for the most important angles are as follows: sin 0 = 0 2 = 0 , sin π 6 = 1 2 , sin π 4 = 2 2 , sin π 3 = 3 2 , sin π 2 = 4 2 = 1. {\displaystyle \sin 0={\frac {\sqrt {0}}{2}}=0,\quad \sin {\frac {\pi }{6}}={\frac {\sqrt {1}}{2}},\quad \sin {\frac {\pi }{4}}={\frac {\sqrt {2}}{2}},\quad \sin {\frac {\pi }{3}}={\frac {\sqrt {3}}{2}},\quad \sin {\frac {\pi }{2}}={\frac {\sqrt {4}}{2}}=1.} Writing the numerators as square roots of consecutive non-negative integers, with a denominator of 2, provides an easy way to remember the values. [ 13 ] Such simple expressions generally do not exist for other angles which are rational multiples of a right angle. The following table lists the sines, cosines, and tangents of multiples of 15 degrees from 0 to 90 degrees. G. H. 
Hardy noted in his 1908 work A Course of Pure Mathematics that the definition of the trigonometric functions in terms of the unit circle is not satisfactory, because it depends implicitly on a notion of angle that can be measured by a real number. [ 14 ] Thus in modern analysis, trigonometric functions are usually constructed without reference to geometry. Various ways exist in the literature for defining the trigonometric functions in a manner suitable for analysis; they include: Sine and cosine can be defined as the unique solution to the initial value problem : [ 17 ] d d x sin x = cos x , d d x cos x = − sin x , sin 0 = 0 , cos 0 = 1. {\displaystyle {\frac {d}{dx}}\sin x=\cos x,\quad {\frac {d}{dx}}\cos x=-\sin x,\quad \sin 0=0,\quad \cos 0=1.} Differentiating again, d 2 d x 2 sin x = d d x cos x = − sin x {\textstyle {\frac {d^{2}}{dx^{2}}}\sin x={\frac {d}{dx}}\cos x=-\sin x} and d 2 d x 2 cos x = − d d x sin x = − cos x {\textstyle {\frac {d^{2}}{dx^{2}}}\cos x=-{\frac {d}{dx}}\sin x=-\cos x} , so both sine and cosine are solutions of the same ordinary differential equation y ″ + y = 0. {\displaystyle y''+y=0.} Sine is the unique solution with y (0) = 0 and y ′(0) = 1 ; cosine is the unique solution with y (0) = 1 and y ′(0) = 0 . One can then prove, as a theorem, that solutions cos , sin {\displaystyle \cos ,\sin } are periodic, having the same period. Writing this period as 2 π {\displaystyle 2\pi } is then a definition of the real number π {\displaystyle \pi } which is independent of geometry. Applying the quotient rule to the tangent tan x = sin x / cos x {\displaystyle \tan x=\sin x/\cos x} gives d d x tan x = 1 + tan 2 x , {\textstyle {\frac {d}{dx}}\tan x=1+\tan ^{2}x,} so the tangent function satisfies the ordinary differential equation y ′ = 1 + y 2 . {\displaystyle y'=1+y^{2}.} It is the unique solution with y (0) = 0 . The basic trigonometric functions can be defined by the following power series expansions. [ 18 ] These series are also known as the Taylor series or Maclaurin series of these trigonometric functions: sin x = x − x 3 3 ! + x 5 5 ! − ⋯ = ∑ n = 0 ∞ ( − 1 ) n x 2 n + 1 ( 2 n + 1 ) ! , cos x = 1 − x 2 2 ! + x 4 4 ! − ⋯ = ∑ n = 0 ∞ ( − 1 ) n x 2 n ( 2 n ) ! . {\displaystyle \sin x=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}x^{2n+1}}{(2n+1)!}},\quad \cos x=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}x^{2n}}{(2n)!}}.} The radius of convergence of these series is infinite. 
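The sine power series can be evaluated by building each term from the previous one, term_{n} = term_{n−1} · (−x²)/((2n)(2n+1)), rather than recomputing factorials; a minimal sketch:

```python
import math

def sin_series(x, terms=20):
    """Partial sum of the Maclaurin series x - x^3/3! + x^5/5! - ..."""
    term = x           # the n = 0 term
    total = x
    for n in range(1, terms):
        term *= -x * x / ((2 * n) * (2 * n + 1))   # next odd-power term
        total += term
    return total
```

For moderate arguments the series converges extremely fast, matching `math.sin` to near machine precision within a couple of dozen terms.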
Therefore, the sine and the cosine can be extended to entire functions (also called "sine" and "cosine"), which are (by definition) complex-valued functions that are defined and holomorphic on the whole complex plane . Term-by-term differentiation shows that the sine and cosine defined by the series obey the differential equation discussed previously, and conversely one can obtain these series from elementary recursion relations derived from the differential equation. Being defined as fractions of entire functions, the other trigonometric functions may be extended to meromorphic functions , that is functions that are holomorphic in the whole complex plane, except some isolated points called poles . Here, the poles are the numbers of the form ( 2 k + 1 ) π 2 {\textstyle (2k+1){\frac {\pi }{2}}} for the tangent and the secant, or k π {\displaystyle k\pi } for the cotangent and the cosecant, where k is an arbitrary integer. Recurrence relations may also be computed for the coefficients of the Taylor series of the other trigonometric functions. These series have a finite radius of convergence . Their coefficients have a combinatorial interpretation: they enumerate alternating permutations of finite sets. [ 19 ] More precisely, with suitable combinatorial coefficients one obtains explicit series expansions for these functions. [ 20 ] Continued fraction expansions are also valid in the whole complex plane; for example, Lambert's continued fraction tan x = x 1 − x 2 3 − x 2 5 − ⋱ {\displaystyle \tan x={\cfrac {x}{1-{\cfrac {x^{2}}{3-{\cfrac {x^{2}}{5-\ddots }}}}}}} was used in the historically first proof that π is irrational . [ 21 ] There is a series representation as partial fraction expansion where just translated reciprocal functions are summed up, such that the poles of the cotangent function and the reciprocal functions match: [ 22 ] π cot π z = lim N → ∞ ∑ n = − N N 1 z + n . {\displaystyle \pi \cot \pi z=\lim _{N\to \infty }\sum _{n=-N}^{N}{\frac {1}{z+n}}.} This identity can be proved with the Herglotz trick. 
[ 23 ] Combining the (− n ) th with the n th term leads to an absolutely convergent series: π cot π z = 1 z + ∑ n = 1 ∞ 2 z z 2 − n 2 . {\displaystyle \pi \cot \pi z={\frac {1}{z}}+\sum _{n=1}^{\infty }{\frac {2z}{z^{2}-n^{2}}}.} Similarly, one can find partial fraction expansions for the secant, cosecant and tangent functions. The following infinite product for the sine is due to Leonhard Euler , and is of great importance in complex analysis: [ 24 ] sin z = z ∏ n = 1 ∞ ( 1 − z 2 n 2 π 2 ) . {\displaystyle \sin z=z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}\pi ^{2}}}\right).} This may be obtained from the partial fraction decomposition of cot z {\displaystyle \cot z} given above, which is the logarithmic derivative of sin z {\displaystyle \sin z} . [ 25 ] From this, it can be deduced also that cos z = ∏ n = 1 ∞ ( 1 − 4 z 2 ( 2 n − 1 ) 2 π 2 ) . {\displaystyle \cos z=\prod _{n=1}^{\infty }\left(1-{\frac {4z^{2}}{(2n-1)^{2}\pi ^{2}}}\right).} Euler's formula relates sine and cosine to the exponential function : e i x = cos x + i sin x . {\displaystyle e^{ix}=\cos x+i\sin x.} This formula is commonly considered for real values of x , but it remains true for all complex values. Proof : Let f 1 ( x ) = cos x + i sin x , {\displaystyle f_{1}(x)=\cos x+i\sin x,} and f 2 ( x ) = e i x . {\displaystyle f_{2}(x)=e^{ix}.} One has d f j ( x ) / d x = i f j ( x ) {\displaystyle df_{j}(x)/dx=if_{j}(x)} for j = 1, 2 . The quotient rule implies thus that d / d x ( f 1 ( x ) / f 2 ( x ) ) = 0 {\displaystyle d/dx\,(f_{1}(x)/f_{2}(x))=0} . Therefore, f 1 ( x ) / f 2 ( x ) {\displaystyle f_{1}(x)/f_{2}(x)} is a constant function, which equals 1 , as f 1 ( 0 ) = f 2 ( 0 ) = 1. {\displaystyle f_{1}(0)=f_{2}(0)=1.} This proves the formula. One has e i x = cos x + i sin x {\displaystyle e^{ix}=\cos x+i\sin x} and e − i x = cos x − i sin x . {\displaystyle e^{-ix}=\cos x-i\sin x.} Solving this linear system in sine and cosine, one can express them in terms of the exponential function: cos x = e i x + e − i x 2 , sin x = e i x − e − i x 2 i . {\displaystyle \cos x={\frac {e^{ix}+e^{-ix}}{2}},\quad \sin x={\frac {e^{ix}-e^{-ix}}{2i}}.} When x is real, this may be rewritten as cos x = Re ( e i x ) , sin x = Im ( e i x ) . {\displaystyle \cos x=\operatorname {Re} \left(e^{ix}\right),\quad \sin x=\operatorname {Im} \left(e^{ix}\right).} Most trigonometric identities can be proved by expressing trigonometric functions in terms of the complex exponential function by using above formulas, and then using the identity e a + b = e a e b {\displaystyle e^{a+b}=e^{a}e^{b}} for simplifying the result. Euler's formula can also be used to define the basic trigonometric functions directly, as follows, using the language of topological groups . 
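Euler's formula and the resulting exponential expressions for sine and cosine are easy to check numerically with Python's `cmath` module (one sample point; any real x works):

```python
import cmath
import math

x = 0.7
# e^{ix} = cos x + i sin x
assert abs(cmath.exp(1j * x) - complex(math.cos(x), math.sin(x))) < 1e-15

# The linear-system solutions: sin and cos in terms of exponentials.
sin_x = (cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j
cos_x = (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2
assert abs(sin_x - math.sin(x)) < 1e-15
assert abs(cos_x - math.cos(x)) < 1e-15
```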
[ 26 ] The set U {\displaystyle U} of complex numbers of unit modulus is a compact and connected topological group, which has a neighborhood of the identity that is homeomorphic to the real line. Therefore, it is isomorphic as a topological group to the one-dimensional torus group R / Z {\displaystyle \mathbb {R} /\mathbb {Z} } , via an isomorphism e : R / Z → U . {\displaystyle e:\mathbb {R} /\mathbb {Z} \to U.} In pedestrian terms e ( t ) = exp ⁡ ( 2 π i t ) {\displaystyle e(t)=\exp(2\pi it)} , and this isomorphism is unique up to taking complex conjugates. For a nonzero real number a {\displaystyle a} (the base ), the function t ↦ e ( t / a ) {\displaystyle t\mapsto e(t/a)} defines an isomorphism of the group R / a Z → U {\displaystyle \mathbb {R} /a\mathbb {Z} \to U} . The real and imaginary parts of e ( t / a ) {\displaystyle e(t/a)} are the cosine and sine, where a {\displaystyle a} is used as the base for measuring angles. For example, when a = 2 π {\displaystyle a=2\pi } , we get the measure in radians, and the usual trigonometric functions. When a = 360 {\displaystyle a=360} , we get the sine and cosine of angles measured in degrees. Note that a = 2 π {\displaystyle a=2\pi } is the unique value at which the derivative d d t e ( t / a ) {\displaystyle {\frac {d}{dt}}e(t/a)} becomes a unit vector with positive imaginary part at t = 0 {\displaystyle t=0} . This fact can, in turn, be used to define the constant 2 π {\displaystyle 2\pi } . Another way to define the trigonometric functions in analysis is using integration. [ 14 ] [ 27 ] For a real number t {\displaystyle t} , put θ ( t ) = ∫ 0 t d τ 1 + τ 2 = arctan ⁡ t {\displaystyle \theta (t)=\int _{0}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\arctan t} where this defines this inverse tangent function. Also, π {\displaystyle \pi } is defined by 1 2 π = ∫ 0 ∞ d τ 1 + τ 2 {\displaystyle {\frac {1}{2}}\pi =\int _{0}^{\infty }{\frac {d\tau }{1+\tau ^{2}}}} a definition that goes back to Karl Weierstrass . 
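The integral definition of the arctangent can be checked numerically; the composite midpoint rule below is an illustrative stand-in for the exact integral, and the step count is an arbitrary accuracy knob:

```python
import math

def arctan_by_quadrature(t, steps=100_000):
    """theta(t) = integral from 0 to t of dtau / (1 + tau^2), midpoint rule."""
    h = t / steps
    return sum(h / (1 + (h * (k + 0.5)) ** 2) for k in range(steps))
```

Evaluating at t = 1 recovers arctan 1 = π/4, consistent with the Weierstrass-style definition of π via this integral.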
[ 28 ] On the interval − π / 2 < θ < π / 2 {\displaystyle -\pi /2<\theta <\pi /2} , the trigonometric functions are defined by inverting the relation θ = arctan ⁡ t {\displaystyle \theta =\arctan t} . Thus we define the trigonometric functions by tan ⁡ θ = t , cos ⁡ θ = ( 1 + t 2 ) − 1 / 2 , sin ⁡ θ = t ( 1 + t 2 ) − 1 / 2 {\displaystyle \tan \theta =t,\quad \cos \theta =(1+t^{2})^{-1/2},\quad \sin \theta =t(1+t^{2})^{-1/2}} where the point ( t , θ ) {\displaystyle (t,\theta )} is on the graph of θ = arctan ⁡ t {\displaystyle \theta =\arctan t} and the positive square root is taken. This defines the trigonometric functions on ( − π / 2 , π / 2 ) {\displaystyle (-\pi /2,\pi /2)} . The definition can be extended to all real numbers by first observing that, as θ → π / 2 {\displaystyle \theta \to \pi /2} , t → ∞ {\displaystyle t\to \infty } , and so cos ⁡ θ = ( 1 + t 2 ) − 1 / 2 → 0 {\displaystyle \cos \theta =(1+t^{2})^{-1/2}\to 0} and sin ⁡ θ = t ( 1 + t 2 ) − 1 / 2 → 1 {\displaystyle \sin \theta =t(1+t^{2})^{-1/2}\to 1} . Thus cos ⁡ θ {\displaystyle \cos \theta } and sin ⁡ θ {\displaystyle \sin \theta } are extended continuously so that cos ⁡ ( π / 2 ) = 0 , sin ⁡ ( π / 2 ) = 1 {\displaystyle \cos(\pi /2)=0,\sin(\pi /2)=1} . Now the conditions cos ⁡ ( θ + π ) = − cos ⁡ ( θ ) {\displaystyle \cos(\theta +\pi )=-\cos(\theta )} and sin ⁡ ( θ + π ) = − sin ⁡ ( θ ) {\displaystyle \sin(\theta +\pi )=-\sin(\theta )} define the sine and cosine as periodic functions with period 2 π {\displaystyle 2\pi } , for all real numbers. Proving the basic properties of sine and cosine, including the fact that sine and cosine are analytic, one may first establish the addition formulae. 
First, arctan ⁡ s + arctan ⁡ t = arctan ⁡ s + t 1 − s t {\displaystyle \arctan s+\arctan t=\arctan {\frac {s+t}{1-st}}} holds, provided arctan ⁡ s + arctan ⁡ t ∈ ( − π / 2 , π / 2 ) {\displaystyle \arctan s+\arctan t\in (-\pi /2,\pi /2)} , since arctan ⁡ s + arctan ⁡ t = ∫ − s t d τ 1 + τ 2 = ∫ 0 s + t 1 − s t d τ 1 + τ 2 {\displaystyle \arctan s+\arctan t=\int _{-s}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\int _{0}^{\frac {s+t}{1-st}}{\frac {d\tau }{1+\tau ^{2}}}} after the substitution τ → s + τ 1 − s τ {\displaystyle \tau \to {\frac {s+\tau }{1-s\tau }}} . In particular, the limiting case as s → ∞ {\displaystyle s\to \infty } gives arctan ⁡ t + π 2 = arctan ⁡ ( − 1 / t ) , t ∈ ( − ∞ , 0 ) . {\displaystyle \arctan t+{\frac {\pi }{2}}=\arctan(-1/t),\quad t\in (-\infty ,0).} Thus we have sin ⁡ ( θ + π 2 ) = − 1 t 1 + ( − 1 / t ) 2 = − 1 1 + t 2 = − cos ⁡ ( θ ) {\displaystyle \sin \left(\theta +{\frac {\pi }{2}}\right)={\frac {-1}{t{\sqrt {1+(-1/t)^{2}}}}}={\frac {-1}{\sqrt {1+t^{2}}}}=-\cos(\theta )} and cos ⁡ ( θ + π 2 ) = 1 1 + ( − 1 / t ) 2 = t 1 + t 2 = sin ⁡ ( θ ) . {\displaystyle \cos \left(\theta +{\frac {\pi }{2}}\right)={\frac {1}{\sqrt {1+(-1/t)^{2}}}}={\frac {t}{\sqrt {1+t^{2}}}}=\sin(\theta ).} So the sine and cosine functions are related by translation over a quarter period π / 2 {\displaystyle \pi /2} . One can also define the trigonometric functions using various functional equations . For example, [ 29 ] the sine and the cosine form the unique pair of continuous functions that satisfy the difference formula and the added condition The sine and cosine of a complex number z = x + i y {\displaystyle z=x+iy} can be expressed in terms of real sines, cosines, and hyperbolic functions as follows: By taking advantage of domain coloring , it is possible to graph the trigonometric functions as complex-valued functions. 
Various features unique to the complex functions can be seen from the graph; for example, the sine and cosine functions can be seen to be unbounded as the imaginary part of z {\displaystyle z} becomes larger (since the color white represents infinity), and the fact that the functions contain simple zeros or poles is apparent from the fact that the hue cycles around each zero or pole exactly once. Comparing these graphs with those of the corresponding Hyperbolic functions highlights the relationships between the two. sin ⁡ z {\displaystyle \sin z\,} cos ⁡ z {\displaystyle \cos z\,} tan ⁡ z {\displaystyle \tan z\,} cot ⁡ z {\displaystyle \cot z\,} sec ⁡ z {\displaystyle \sec z\,} csc ⁡ z {\displaystyle \csc z\,} The sine and cosine functions are periodic , with period 2 π {\displaystyle 2\pi } , which is the smallest positive period: sin ⁡ ( z + 2 π ) = sin ⁡ ( z ) , cos ⁡ ( z + 2 π ) = cos ⁡ ( z ) . {\displaystyle \sin(z+2\pi )=\sin(z),\quad \cos(z+2\pi )=\cos(z).} Consequently, the cosecant and secant also have 2 π {\displaystyle 2\pi } as their period. The functions sine and cosine also have semiperiods π {\displaystyle \pi } , and sin ⁡ ( z + π ) = − sin ⁡ ( z ) , cos ⁡ ( z + π ) = − cos ⁡ ( z ) {\displaystyle \sin(z+\pi )=-\sin(z),\quad \cos(z+\pi )=-\cos(z)} and consequently tan ⁡ ( z + π ) = tan ⁡ ( z ) , cot ⁡ ( z + π ) = cot ⁡ ( z ) . {\displaystyle \tan(z+\pi )=\tan(z),\quad \cot(z+\pi )=\cot(z).} Also, sin ⁡ ( x + π / 2 ) = cos ⁡ ( x ) , cos ⁡ ( x + π / 2 ) = − sin ⁡ ( x ) {\displaystyle \sin(x+\pi /2)=\cos(x),\quad \cos(x+\pi /2)=-\sin(x)} (see Complementary angles ). The function sin ⁡ ( z ) {\displaystyle \sin(z)} has a unique zero (at z = 0 {\displaystyle z=0} ) in the strip − π < ℜ ( z ) < π {\displaystyle -\pi <\Re (z)<\pi } . The function cos ⁡ ( z ) {\displaystyle \cos(z)} has the pair of zeros z = ± π / 2 {\displaystyle z=\pm \pi /2} in the same strip. 
Because of the periodicity, the zeros of sine are π Z = { … , − 2 π , − π , 0 , π , 2 π , … } ⊂ C . {\displaystyle \pi \mathbb {Z} =\left\{\dots ,-2\pi ,-\pi ,0,\pi ,2\pi ,\dots \right\}\subset \mathbb {C} .} The zeros of cosine are π 2 + π Z = { … , − 3 π 2 , − π 2 , π 2 , 3 π 2 , … } ⊂ C . {\displaystyle {\frac {\pi }{2}}+\pi \mathbb {Z} =\left\{\dots ,-{\frac {3\pi }{2}},-{\frac {\pi }{2}},{\frac {\pi }{2}},{\frac {3\pi }{2}},\dots \right\}\subset \mathbb {C} .} All of the zeros are simple zeros, and both functions have derivative ± 1 {\displaystyle \pm 1} at each of the zeros. The tangent function tan ( z ) = sin ( z ) / cos ( z ) {\displaystyle \tan(z)=\sin(z)/\cos(z)} has a simple zero at z = 0 {\displaystyle z=0} and vertical asymptotes at z = ± π / 2 {\displaystyle z=\pm \pi /2} , where it has a simple pole of residue − 1 {\displaystyle -1} . Again, owing to the periodicity, the zeros are all the integer multiples of π {\displaystyle \pi } and the poles are odd multiples of π / 2 {\displaystyle \pi /2} , all having the same residue. The poles correspond to vertical asymptotes lim x → ( π / 2 ) − tan ( x ) = + ∞ , lim x → ( π / 2 ) + tan ( x ) = − ∞ . {\displaystyle \lim _{x\to (\pi /2)^{-}}\tan(x)=+\infty ,\quad \lim _{x\to (\pi /2)^{+}}\tan(x)=-\infty .} The cotangent function cot ( z ) = cos ( z ) / sin ( z ) {\displaystyle \cot(z)=\cos(z)/\sin(z)} has a simple pole of residue 1 at the integer multiples of π {\displaystyle \pi } and simple zeros at odd multiples of π / 2 {\displaystyle \pi /2} . The poles correspond to vertical asymptotes lim x → 0 − cot ( x ) = − ∞ , lim x → 0 + cot ( x ) = + ∞ . {\displaystyle \lim _{x\to 0^{-}}\cot(x)=-\infty ,\quad \lim _{x\to 0^{+}}\cot(x)=+\infty .} Many identities interrelate the trigonometric functions. This section contains the most basic ones; for more identities, see List of trigonometric identities . 
These identities may be proved geometrically from the unit-circle definitions or the right-angled-triangle definitions (although, for the latter definitions, care must be taken for angles that are not in the interval [0, π /2] , see Proofs of trigonometric identities ). For non-geometrical proofs using only tools of calculus , one may use directly the differential equations, in a way that is similar to that of the above proof of Euler's identity. One can also use Euler's identity for expressing all trigonometric functions in terms of complex exponentials and using properties of the exponential function. The cosine and the secant are even functions ; the other trigonometric functions are odd functions . That is: sin ( − x ) = − sin x , cos ( − x ) = cos x , tan ( − x ) = − tan x , cot ( − x ) = − cot x , sec ( − x ) = sec x , csc ( − x ) = − csc x . {\displaystyle \sin(-x)=-\sin x,\quad \cos(-x)=\cos x,\quad \tan(-x)=-\tan x,\quad \cot(-x)=-\cot x,\quad \sec(-x)=\sec x,\quad \csc(-x)=-\csc x.} All trigonometric functions are periodic functions of period 2 π . This is the smallest period, except for the tangent and the cotangent, which have π as smallest period. This means that, for every integer k , one has sin ( x + 2 k π ) = sin x , cos ( x + 2 k π ) = cos x , tan ( x + k π ) = tan x , cot ( x + k π ) = cot x . {\displaystyle \sin(x+2k\pi )=\sin x,\quad \cos(x+2k\pi )=\cos x,\quad \tan(x+k\pi )=\tan x,\quad \cot(x+k\pi )=\cot x.} See Periodicity and asymptotes . The Pythagorean identity is the expression of the Pythagorean theorem in terms of trigonometric functions. It is sin 2 x + cos 2 x = 1. {\displaystyle \sin ^{2}x+\cos ^{2}x=1.} Dividing through by either cos 2 x {\displaystyle \cos ^{2}x} or sin 2 x {\displaystyle \sin ^{2}x} gives tan 2 x + 1 = sec 2 x {\displaystyle \tan ^{2}x+1=\sec ^{2}x} and 1 + cot 2 x = csc 2 x . {\displaystyle 1+\cot ^{2}x=\csc ^{2}x.} The sum and difference formulas allow expanding the sine, the cosine, and the tangent of a sum or a difference of two angles in terms of sines and cosines and tangents of the angles themselves. These can be derived geometrically, using arguments that date to Ptolemy (see Angle sum and difference identities ). One can also produce them algebraically using Euler's formula . When the two angles are equal, the sum formulas reduce to simpler equations known as the double-angle formulae . These identities can be used to derive the product-to-sum identities . 
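The Pythagorean identity and the two forms obtained by dividing it through can be verified numerically at any sample point (one point shown; the choice x = 1.2 is arbitrary):

```python
import math

x = 1.2
assert math.isclose(math.sin(x) ** 2 + math.cos(x) ** 2, 1.0)
# divide by cos^2 x:  tan^2 x + 1 = sec^2 x
assert math.isclose(math.tan(x) ** 2 + 1, 1 / math.cos(x) ** 2)
# divide by sin^2 x:  1 + cot^2 x = csc^2 x
assert math.isclose(1 + 1 / math.tan(x) ** 2, 1 / math.sin(x) ** 2)
```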
By setting t = tan 1 2 θ , {\displaystyle t=\tan {\tfrac {1}{2}}\theta ,} all trigonometric functions of θ {\displaystyle \theta } can be expressed as rational fractions of t {\displaystyle t} : sin θ = 2 t 1 + t 2 , cos θ = 1 − t 2 1 + t 2 , tan θ = 2 t 1 − t 2 . {\displaystyle \sin \theta ={\frac {2t}{1+t^{2}}},\quad \cos \theta ={\frac {1-t^{2}}{1+t^{2}}},\quad \tan \theta ={\frac {2t}{1-t^{2}}}.} Together with d θ = 2 1 + t 2 d t , {\displaystyle d\theta ={\frac {2}{1+t^{2}}}\,dt,} this is the tangent half-angle substitution , which reduces the computation of integrals and antiderivatives of trigonometric functions to that of rational fractions. The derivatives of trigonometric functions result from those of sine and cosine by applying the quotient rule . The values given for the antiderivatives in the following table can be verified by differentiating them. The number C is a constant of integration . Note: For 0 < x < π {\displaystyle 0<x<\pi } the integral of csc x {\displaystyle \csc x} can also be written as − arsinh ( cot x ) , {\displaystyle -\operatorname {arsinh} (\cot x),} and for the integral of sec x {\displaystyle \sec x} for − π / 2 < x < π / 2 {\displaystyle -\pi /2<x<\pi /2} as arsinh ( tan x ) , {\displaystyle \operatorname {arsinh} (\tan x),} where arsinh {\displaystyle \operatorname {arsinh} } is the inverse hyperbolic sine . Alternatively, the derivatives of the 'co-functions' can be obtained using trigonometric identities and the chain rule. The trigonometric functions are periodic, and hence not injective , so strictly speaking, they do not have an inverse function . However, on each interval on which a trigonometric function is monotonic , one can define an inverse function, and this defines inverse trigonometric functions as multivalued functions . To define a true inverse function, one must restrict the domain to an interval where the function is monotonic, and is thus bijective from this interval to its image by the function. The common choice for this interval, called the set of principal values , is given in the following table. As usual, the inverse trigonometric functions are denoted with the prefix "arc" before the name or its abbreviation of the function. The notations sin −1 , cos −1 , etc. 
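The derivative formulas can be sanity-checked with a central difference quotient; this is a numerical spot check at one arbitrary point, not a proof:

```python
import math

def central_diff(f, x, h=1e-6):
    """Symmetric difference quotient approximating f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.9
# (sin)' = cos
assert abs(central_diff(math.sin, x) - math.cos(x)) < 1e-9
# (tan)' = sec^2 = 1/cos^2, as given by the quotient rule
assert abs(central_diff(math.tan, x) - 1 / math.cos(x) ** 2) < 1e-8
```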
are often used for arcsin and arccos , etc. When this notation is used, inverse functions could be confused with multiplicative inverses. The notation with the "arc" prefix avoids such a confusion, though "arcsec" for arcsecant can be confused with " arcsecond ". Just like the sine and cosine, the inverse trigonometric functions can also be expressed in terms of infinite series. They can also be expressed in terms of complex logarithms . In this section A , B , C denote the three (interior) angles of a triangle, and a , b , c denote the lengths of the respective opposite edges. They are related by various formulas, which are named by the trigonometric functions they involve. The law of sines states that for an arbitrary triangle with sides a , b , and c and angles opposite those sides A , B and C : sin ⁡ A a = sin ⁡ B b = sin ⁡ C c = 2 Δ a b c , {\displaystyle {\frac {\sin A}{a}}={\frac {\sin B}{b}}={\frac {\sin C}{c}}={\frac {2\Delta }{abc}},} where Δ is the area of the triangle, or, equivalently, a sin ⁡ A = b sin ⁡ B = c sin ⁡ C = 2 R , {\displaystyle {\frac {a}{\sin A}}={\frac {b}{\sin B}}={\frac {c}{\sin C}}=2R,} where R is the triangle's circumradius . It can be proved by dividing the triangle into two right ones and using the above definition of sine. The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. This is a common situation occurring in triangulation , a technique to determine unknown distances by measuring two angles and an accessible enclosed distance. The law of cosines (also known as the cosine formula or cosine rule) is an extension of the Pythagorean theorem : c 2 = a 2 + b 2 − 2 a b cos ⁡ C , {\displaystyle c^{2}=a^{2}+b^{2}-2ab\cos C,} or equivalently, cos ⁡ C = a 2 + b 2 − c 2 2 a b . {\displaystyle \cos C={\frac {a^{2}+b^{2}-c^{2}}{2ab}}.} In this formula the angle at C is opposite to the side c . 
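The triangulation use of the law of sines can be sketched as follows: given two angles and the side opposite the first, recover the remaining sides and the circumradius. The specific angle and side values are illustrative only.

```python
import math

A, B = math.radians(40), math.radians(60)
C = math.pi - A - B                # interior angles sum to pi
a = 10.0                           # side opposite angle A

R = a / (2 * math.sin(A))          # law of sines: a / sin A = 2R
b = 2 * R * math.sin(B)            # hence b = 2R sin B
c = 2 * R * math.sin(C)

# All three sides share the common ratio 2R...
assert math.isclose(b / math.sin(B), 2 * R)
assert math.isclose(c / math.sin(C), 2 * R)
# ...and the recovered sides also satisfy the law of cosines.
assert math.isclose(c ** 2, a ** 2 + b ** 2 - 2 * a * b * math.cos(C))
```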
This theorem can be proved by dividing the triangle into two right ones and using the Pythagorean theorem . The law of cosines can be used to determine a side of a triangle if two sides and the angle between them are known. It can also be used to find the cosines of an angle (and consequently the angles themselves) if the lengths of all the sides are known. The law of tangents says that: If s is the triangle's semiperimeter, ( a + b + c )/2, and r is the radius of the triangle's incircle , then rs is the triangle's area. Therefore Heron's formula implies that: The law of cotangents says that: [ 30 ] It follows that The trigonometric functions are also important in physics. The sine and the cosine functions, for example, are used to describe simple harmonic motion , which models many natural phenomena, such as the movement of a mass attached to a spring and, for small angles, the pendular motion of a mass hanging by a string. The sine and cosine functions are one-dimensional projections of uniform circular motion . Trigonometric functions also prove to be useful in the study of general periodic functions . The characteristic wave patterns of periodic functions are useful for modeling recurring phenomena such as sound or light waves . [ 31 ] Under rather general conditions, a periodic function f ( x ) can be expressed as a sum of sine waves or cosine waves in a Fourier series . [ 32 ] Denoting the sine or cosine basis functions by φ k , the expansion of the periodic function f ( t ) takes the form: f ( t ) = ∑ k = 1 ∞ c k φ k ( t ) . {\displaystyle f(t)=\sum _{k=1}^{\infty }c_{k}\varphi _{k}(t).} For example, the square wave can be written as the Fourier series f square ( t ) = 4 π ∑ k = 1 ∞ sin ⁡ ( ( 2 k − 1 ) t ) 2 k − 1 . {\displaystyle f_{\text{square}}(t)={\frac {4}{\pi }}\sum _{k=1}^{\infty }{\sin {\big (}(2k-1)t{\big )} \over 2k-1}.} In the animation of a square wave at top right it can be seen that just a few terms already produce a fairly good approximation. 
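The square-wave Fourier series above can be evaluated numerically to see how quickly the partial sums converge; this is a minimal sketch, and `square_wave_partial` is a made-up helper name:

```python
import math

def square_wave_partial(t: float, n_terms: int) -> float:
    """Partial sum of the Fourier series
    f(t) = (4/pi) * sum_{k>=1} sin((2k-1) t) / (2k-1)."""
    return (4 / math.pi) * sum(
        math.sin((2 * k - 1) * t) / (2 * k - 1) for k in range(1, n_terms + 1)
    )

# On (0, pi) the square wave equals 1; a few terms already come close:
for n in (1, 5, 50):
    print(n, round(square_wave_partial(math.pi / 2, n), 4))
```
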
The superposition of several terms in the expansion of a sawtooth wave is shown underneath. While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was defined by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). The functions of sine and versine (1 – cosine) are closely related to the jyā and koti-jyā functions used in Gupta period Indian astronomy ( Aryabhatiya , Surya Siddhanta ), via translation from Sanskrit to Arabic and then from Arabic to Latin. [ 33 ] (See Aryabhata's sine table .) All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines , used in solving triangles . [ 34 ] Al-Khwārizmī (c. 780–850) produced tables of sines and cosines. Circa 860, Habash al-Hasib al-Marwazi defined the tangent and the cotangent, and produced their tables. [ 35 ] [ 36 ] Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) defined the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. [ 36 ] The trigonometric functions were later studied by mathematicians including Omar Khayyám , Bhāskara II , Nasir al-Din al-Tusi , Jamshīd al-Kāshī (14th century), Ulugh Beg (14th century), Regiomontanus (1464), Rheticus , and Rheticus' student Valentinus Otho . Madhava of Sangamagrama (c. 1400) made early strides in the analysis of trigonometric functions in terms of infinite series . [ 37 ] (See Madhava series and Madhava's sine table .) The tangent function was brought to Europe by Giovanni Bianchini in 1467 in trigonometry tables he created to support the calculation of stellar coordinates. [ 38 ] The terms tangent and secant were first introduced by the Danish mathematician Thomas Fincke in his book Geometria rotundi (1583).
[ 39 ] The 17th-century French mathematician Albert Girard made the first published use of the abbreviations sin , cos , and tan in his book Trigonométrie . [ 40 ] In a paper published in 1682, Gottfried Leibniz proved that sin x is not an algebraic function of x . [ 41 ] Though defined as ratios of sides of a right triangle , and thus appearing to be rational functions , Leibniz's result established that they are actually transcendental functions of their argument. The task of assimilating circular functions into algebraic expressions was accomplished by Euler in his Introduction to the Analysis of the Infinite (1748). His method was to show that the sine and cosine functions are alternating series formed from the even and odd terms respectively of the exponential series . He presented " Euler's formula ", as well as near-modern abbreviations ( sin. , cos. , tang. , cot. , sec. , and cosec. ). [ 33 ] A few functions were common historically, but are now seldom used, such as the chord , versine (which appeared in the earliest tables [ 33 ] ), haversine , coversine , [ 42 ] half-tangent (tangent of half an angle), and exsecant . List of trigonometric identities shows more relations between these functions. Historically, trigonometric functions were often combined with logarithms in compound functions like the logarithmic sine, logarithmic cosine, logarithmic secant, logarithmic cosecant, logarithmic tangent and logarithmic cotangent. [ 43 ] [ 44 ] [ 45 ] [ 46 ] The word sine derives [ 47 ] from Latin sinus , meaning "bend; bay", and more specifically "the hanging fold of the upper part of a toga ", "the bosom of a garment", which was chosen as the translation of what was interpreted as the Arabic word jaib , meaning "pocket" or "fold" in the twelfth-century translations of works by Al-Battani and al-Khwārizmī into Medieval Latin .
[ 48 ] The choice was based on a misreading of the Arabic written form j-y-b ( جيب ), which itself originated as a transliteration from Sanskrit jīvā , which along with its synonym jyā (the standard Sanskrit term for the sine) translates to "bowstring", being in turn adopted from Ancient Greek χορδή "string". [ 49 ] The word tangent comes from Latin tangens meaning "touching", since the line touches the circle of unit radius, whereas secant stems from Latin secans —"cutting"—since the line cuts the circle. [ 50 ] The prefix " co- " (in "cosine", "cotangent", "cosecant") is found in Edmund Gunter 's Canon triangulorum (1620), which defines the cosinus as an abbreviation of the sinus complementi (sine of the complementary angle ) and proceeds to define the cotangens similarly. [ 51 ] [ 52 ]
In geometry and trigonometry , a right angle is an angle of exactly 90 degrees or ⁠ π {\displaystyle \pi } / 2 ⁠ radians [ 1 ] corresponding to a quarter turn . [ 2 ] If a ray is placed so that its endpoint is on a line and the adjacent angles are equal, then they are right angles. [ 3 ] The term is a calque of Latin angulus rectus ; here rectus means "upright", referring to the vertical perpendicular to a horizontal base line. Closely related and important geometrical concepts are perpendicular lines, meaning lines that form right angles at their point of intersection, and orthogonality , which is the property of forming right angles, usually applied to vectors . The presence of a right angle in a triangle is the defining factor for right triangles , [ 4 ] making the right angle basic to trigonometry. The meaning of right in right angle possibly refers to the Latin adjective rectus 'erect, straight, upright, perpendicular'. A Greek equivalent is orthos 'straight; perpendicular' (see orthogonality ). A rectangle is a quadrilateral with four right angles. A square has four right angles, in addition to equal-length sides. The Pythagorean theorem states how to determine when a triangle is a right triangle . In Unicode , the symbol for a right angle is U+221F ∟ RIGHT ANGLE ( &angrt; ). It should not be confused with the similarly shaped symbol U+231E ⌞ BOTTOM LEFT CORNER ( &dlcorn;, &llcorner; ). Related symbols are U+22BE ⊾ RIGHT ANGLE WITH ARC ( &angrtvb; ), U+299C ⦜ RIGHT ANGLE VARIANT WITH SQUARE ( &vangrt; ), and U+299D ⦝ MEASURED RIGHT ANGLE WITH DOT ( &angrtvbd; ). [ 5 ] In diagrams, the fact that an angle is a right angle is usually expressed by adding a small right angle that forms a square with the angle in the diagram, as seen in the diagram of a right triangle (in British English, a right-angled triangle) to the right.
The symbol for a measured angle, an arc, with a dot, is used in some European countries, including German-speaking countries and Poland, as an alternative symbol for a right angle. [ 6 ] Right angles are fundamental in Euclid's Elements . They are defined in Book 1, definition 10, which also defines perpendicular lines. Definition 10 does not use numerical degree measurements but rather touches at the very heart of what a right angle is, namely two straight lines intersecting to form two equal and adjacent angles. [ 7 ] The straight lines which form right angles are called perpendicular. [ 8 ] Euclid uses right angles in definitions 11 and 12 to define acute angles (those smaller than a right angle) and obtuse angles (those greater than a right angle). [ 9 ] Two angles are called complementary if their sum is a right angle. [ 10 ] Book 1 Postulate 4 states that all right angles are equal, which allows Euclid to use a right angle as a unit to measure other angles with. Euclid's commentator Proclus gave a proof of this postulate using the previous postulates, but it may be argued that this proof makes use of some hidden assumptions. Saccheri gave a proof as well but using a more explicit assumption. In Hilbert 's axiomatization of geometry this statement is given as a theorem, but only after much groundwork. One may argue that, even if postulate 4 can be proven from the preceding ones, in the order that Euclid presents his material it is necessary to include it since without it postulate 5, which uses the right angle as a unit of measure, makes no sense. [ 11 ] A right angle may be expressed in different units: 90 degrees, π /2 radians, 100 gradians, or 1/4 turn. Throughout history, carpenters and masons have known a quick way to confirm if an angle is a true right angle. It is based on the Pythagorean triple (3, 4, 5) and the rule of 3-4-5.
From the angle in question, running a straight line along one side exactly three units in length, and along the second side exactly four units in length, will create a hypotenuse (the longer line opposite the right angle that connects the two measured endpoints) of exactly five units in length. Thales' theorem states that an angle inscribed in a semicircle (with a vertex on the semicircle and its defining rays going through the endpoints of the semicircle) is a right angle. The solid angle subtended by an octant of a sphere (the spherical triangle with three right angles) equals π /2 sr . [ 12 ]
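The carpenter's 3-4-5 rule above is just the converse of the Pythagorean theorem, which is easy to check numerically; `is_right_angle` is an invented helper for this sketch (Python 3.8+ for `math.dist`):

```python
import math

def is_right_angle(p, q, r, tol=1e-9):
    """Check whether the angle at vertex p, formed by segments p-q and p-r,
    is a right angle, using the converse of the Pythagorean theorem."""
    pq = math.dist(p, q)
    pr = math.dist(p, r)
    qr = math.dist(q, r)
    return abs(pq**2 + pr**2 - qr**2) < tol

# Legs of 3 and 4 units give a hypotenuse of 5 units: a true right angle.
print(is_right_angle((0, 0), (3, 0), (0, 4)))  # True
```
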
In mathematics , sine and cosine are trigonometric functions of an angle . The sine and cosine of an acute angle are defined in the context of a right triangle : for the specified angle, its sine is the ratio of the length of the side opposite that angle to the length of the longest side of the triangle (the hypotenuse ), and the cosine is the ratio of the length of the adjacent leg to that of the hypotenuse . For an angle θ {\displaystyle \theta } , the sine and cosine functions are denoted as sin ⁡ ( θ ) {\displaystyle \sin(\theta )} and cos ⁡ ( θ ) {\displaystyle \cos(\theta )} . The definitions of sine and cosine have been extended to any real value in terms of the lengths of certain line segments in a unit circle . More modern definitions express the sine and cosine as infinite series , or as the solutions of certain differential equations , allowing their extension to arbitrary positive and negative values and even to complex numbers . The sine and cosine functions are commonly used to model periodic phenomena such as sound and light waves , the position and velocity of harmonic oscillators, sunlight intensity and day length, and average temperature variations throughout the year. They can be traced to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period . To define the sine and cosine of an acute angle α {\displaystyle \alpha } , start with a right triangle that contains an angle of measure α {\displaystyle \alpha } ; in the accompanying figure, angle α {\displaystyle \alpha } in a right triangle A B C {\displaystyle ABC} is the angle of interest. The three sides of the triangle are named as follows: [ 1 ] Once such a triangle is chosen, the sine of the angle is equal to the length of the opposite side divided by the length of the hypotenuse, and the cosine of the angle is equal to the length of the adjacent side divided by the length of the hypotenuse: [ 1 ] sin ⁡ ( α ) = opposite hypotenuse , cos ⁡ ( α ) = adjacent hypotenuse . 
{\displaystyle \sin(\alpha )={\frac {\text{opposite}}{\text{hypotenuse}}},\qquad \cos(\alpha )={\frac {\text{adjacent}}{\text{hypotenuse}}}.} The other trigonometric functions of the angle can be defined similarly; for example, the tangent is the ratio between the opposite and adjacent sides or equivalently the ratio between the sine and cosine functions. The reciprocal of sine is cosecant, which gives the ratio of the hypotenuse length to the length of the opposite side. Similarly, the reciprocal of cosine is secant, which gives the ratio of the hypotenuse length to that of the adjacent side. The cotangent function is the ratio between the adjacent and opposite sides, a reciprocal of a tangent function. These functions can be formulated as: [ 1 ] tan ⁡ ( θ ) = sin ⁡ ( θ ) cos ⁡ ( θ ) = opposite adjacent , cot ⁡ ( θ ) = 1 tan ⁡ ( θ ) = adjacent opposite , csc ⁡ ( θ ) = 1 sin ⁡ ( θ ) = hypotenuse opposite , sec ⁡ ( θ ) = 1 cos ⁡ ( θ ) = hypotenuse adjacent . {\displaystyle {\begin{aligned}\tan(\theta )&={\frac {\sin(\theta )}{\cos(\theta )}}={\frac {\text{opposite}}{\text{adjacent}}},\\\cot(\theta )&={\frac {1}{\tan(\theta )}}={\frac {\text{adjacent}}{\text{opposite}}},\\\csc(\theta )&={\frac {1}{\sin(\theta )}}={\frac {\text{hypotenuse}}{\text{opposite}}},\\\sec(\theta )&={\frac {1}{\cos(\theta )}}={\frac {\textrm {hypotenuse}}{\textrm {adjacent}}}.\end{aligned}}} As stated, the values sin ⁡ ( α ) {\displaystyle \sin(\alpha )} and cos ⁡ ( α ) {\displaystyle \cos(\alpha )} appear to depend on the choice of a right triangle containing an angle of measure α {\displaystyle \alpha } . However, this is not the case as all such triangles are similar , and so the ratios are the same for each of them. For example, each leg of the 45-45-90 right triangle is 1 unit, and its hypotenuse is 2 {\displaystyle {\sqrt {2}}} ; therefore, sin ⁡ 45 ∘ = cos ⁡ 45 ∘ = 2 2 {\textstyle \sin 45^{\circ }=\cos 45^{\circ }={\frac {\sqrt {2}}{2}}} . 
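The ratio definitions above can be sketched directly in code; `trig_ratios` is a made-up helper name, and the 45-45-90 triangle with unit legs serves as the worked example:

```python
import math

def trig_ratios(opposite: float, adjacent: float):
    """Sine, cosine, and tangent of the acute angle whose opposite and
    adjacent legs (in a right triangle) have the given lengths."""
    hypotenuse = math.hypot(opposite, adjacent)
    return opposite / hypotenuse, adjacent / hypotenuse, opposite / adjacent

# For the 45-45-90 triangle with unit legs, sin = cos = sqrt(2)/2 and tan = 1:
s, c, t = trig_ratios(1.0, 1.0)
print(round(s, 6), round(c, 6), round(t, 6))  # 0.707107 0.707107 1.0
```

Because all right triangles containing the same acute angle are similar, scaling both legs by any positive factor leaves the returned ratios unchanged.
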
[ 2 ] The following table shows the special value of each input for both sine and cosine with the domain between 0 < α < π 2 {\textstyle 0<\alpha <{\frac {\pi }{2}}} . The input in this table provides various unit systems such as degree, radian, and so on. The angles other than those five can be obtained by using a calculator. [ 3 ] [ 4 ] The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. [ 5 ] Given a triangle A B C {\displaystyle ABC} with sides a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} , and angles opposite those sides α {\displaystyle \alpha } , β {\displaystyle \beta } , and γ {\displaystyle \gamma } , the law states, sin ⁡ α a = sin ⁡ β b = sin ⁡ γ c . {\displaystyle {\frac {\sin \alpha }{a}}={\frac {\sin \beta }{b}}={\frac {\sin \gamma }{c}}.} This is equivalent to the equality of the first three expressions below: a sin ⁡ α = b sin ⁡ β = c sin ⁡ γ = 2 R , {\displaystyle {\frac {a}{\sin \alpha }}={\frac {b}{\sin \beta }}={\frac {c}{\sin \gamma }}=2R,} where R {\displaystyle R} is the triangle's circumradius . The law of cosines is useful for computing the length of an unknown side if two other sides and an angle are known. [ 5 ] The law states, a 2 + b 2 − 2 a b cos ⁡ ( γ ) = c 2 {\displaystyle a^{2}+b^{2}-2ab\cos(\gamma )=c^{2}} In the case where γ = π / 2 {\displaystyle \gamma =\pi /2} from which cos ⁡ ( γ ) = 0 {\displaystyle \cos(\gamma )=0} , the resulting equation becomes the Pythagorean theorem . [ 6 ] The cross product and dot product are operations on two vectors in Euclidean vector space . The sine and cosine functions can be defined in terms of the cross product and dot product. 
If a {\displaystyle \mathbb {a} } and b {\displaystyle \mathbb {b} } are vectors, and θ {\displaystyle \theta } is the angle between a {\displaystyle \mathbb {a} } and b {\displaystyle \mathbb {b} } , then sine and cosine can be defined as: sin ⁡ ( θ ) = | a × b | | a | | b | , cos ⁡ ( θ ) = a ⋅ b | a | | b | . {\displaystyle {\begin{aligned}\sin(\theta )&={\frac {|\mathbb {a} \times \mathbb {b} |}{|a||b|}},\\\cos(\theta )&={\frac {\mathbb {a} \cdot \mathbb {b} }{|a||b|}}.\end{aligned}}} The sine and cosine functions may also be defined in a more general way by using the unit circle , a circle of radius one centered at the origin ( 0 , 0 ) {\displaystyle (0,0)} , formulated as the equation of x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} in the Cartesian coordinate system . Let a line through the origin intersect the unit circle, making an angle of θ {\displaystyle \theta } with the positive half of the x {\displaystyle x} - axis. The x {\displaystyle x} - and y {\displaystyle y} - coordinates of this point of intersection are equal to cos ⁡ ( θ ) {\displaystyle \cos(\theta )} and sin ⁡ ( θ ) {\displaystyle \sin(\theta )} , respectively; that is, [ 7 ] sin ⁡ ( θ ) = y , cos ⁡ ( θ ) = x . {\displaystyle \sin(\theta )=y,\qquad \cos(\theta )=x.} This definition is consistent with the right-angled triangle definition of sine and cosine when 0 < θ < π 2 {\textstyle 0<\theta <{\frac {\pi }{2}}} because the length of the hypotenuse of the unit circle is always 1; mathematically speaking, the sine of an angle equals the opposite side of the triangle, which is simply the y {\displaystyle y} - coordinate. A similar argument can be made for the cosine function to show that the cosine of an angle is the x {\displaystyle x} - coordinate when 0 < θ < π 2 {\textstyle 0<\theta <{\frac {\pi }{2}}} , even under the new definition using the unit circle. [ 8 ] [ 9 ] Using the unit circle definition has the advantage of drawing a graph of sine and cosine functions.
This can be done by rotating a point counterclockwise along the circumference of the circle, depending on the input θ > 0 {\displaystyle \theta >0} . In a sine function, if the input is θ = π 2 {\textstyle \theta ={\frac {\pi }{2}}} , the point is rotated counterclockwise and stopped exactly on the y {\displaystyle y} - axis. If θ = π {\displaystyle \theta =\pi } , the point is halfway around the circle. If θ = 2 π {\displaystyle \theta =2\pi } , the point returns to its starting position. It follows that both the sine and cosine functions have range − 1 ≤ y ≤ 1 {\displaystyle -1\leq y\leq 1} . [ 10 ] Extending the angle to any real value, the point rotates counterclockwise continuously. This can be done similarly for the cosine function as well, except that the x {\displaystyle x} - coordinate of the rotating point is traced instead. In other words, both sine and cosine functions are periodic , meaning that increasing the angle by a full turn of 2 π {\displaystyle 2\pi } leaves their values unchanged. Mathematically, [ 11 ] sin ⁡ ( θ + 2 π ) = sin ⁡ ( θ ) , cos ⁡ ( θ + 2 π ) = cos ⁡ ( θ ) . {\displaystyle \sin(\theta +2\pi )=\sin(\theta ),\qquad \cos(\theta +2\pi )=\cos(\theta ).} A function f {\displaystyle f} is said to be odd if f ( − x ) = − f ( x ) {\displaystyle f(-x)=-f(x)} , and is said to be even if f ( − x ) = f ( x ) {\displaystyle f(-x)=f(x)} . The sine function is odd, whereas the cosine function is even. [ 12 ] The sine and cosine functions have the same shape, differing only by a shift of π 2 {\textstyle {\frac {\pi }{2}}} . This means, [ 13 ] sin ⁡ ( θ ) = cos ⁡ ( π 2 − θ ) , cos ⁡ ( θ ) = sin ⁡ ( π 2 − θ ) . {\displaystyle {\begin{aligned}\sin(\theta )&=\cos \left({\frac {\pi }{2}}-\theta \right),\\\cos(\theta )&=\sin \left({\frac {\pi }{2}}-\theta \right).\end{aligned}}} Zero is the only real fixed point of the sine function; in other words, the only intersection of the sine function and the identity function is sin ⁡ ( 0 ) = 0 {\displaystyle \sin(0)=0} .
The only real fixed point of the cosine function is called the Dottie number . The Dottie number is the unique real root of the equation cos ⁡ ( x ) = x {\displaystyle \cos(x)=x} . The decimal expansion of the Dottie number is approximately 0.739085. [ 14 ] The sine and cosine functions are infinitely differentiable. [ 15 ] The derivative of sine is cosine, and the derivative of cosine is negative sine: [ 16 ] d d x sin ⁡ ( x ) = cos ⁡ ( x ) , d d x cos ⁡ ( x ) = − sin ⁡ ( x ) . {\displaystyle {\frac {d}{dx}}\sin(x)=\cos(x),\qquad {\frac {d}{dx}}\cos(x)=-\sin(x).} Continuing this process, the higher-order derivatives cycle through the same four functions; for example, the fourth derivative of sine is the sine itself. [ 15 ] These derivatives can be applied in the first derivative test , according to which a function is monotonic on an interval where its first derivative does not change sign. [ 17 ] They can also be applied in the second derivative test , according to which the concavity of a function is determined by the sign of its second derivative. [ 18 ] The following table shows that both sine and cosine functions have concavity and monotonicity—the positive sign ( + {\displaystyle +} ) denotes a graph is increasing (going upward) and the negative sign ( − {\displaystyle -} ) is decreasing (going downward)—in certain intervals. [ 19 ] This information can be represented as a Cartesian coordinate system divided into four quadrants. Both sine and cosine functions can be defined by using differential equations.
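Returning to the Dottie number described above: it is an attracting fixed point of cosine, so it can be approximated by simply iterating `math.cos`. This is an illustrative sketch, not a method from the article:

```python
import math

# The Dottie number is the unique real solution of cos(x) = x.
# Because the fixed point is attracting (|sin(x)| < 1 there),
# repeatedly applying cos converges from any real starting value.
x = 1.0
for _ in range(100):
    x = math.cos(x)
print(round(x, 6))  # 0.739085
```
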
The pair of ( cos ⁡ θ , sin ⁡ θ ) {\displaystyle (\cos \theta ,\sin \theta )} is the solution ( x ( θ ) , y ( θ ) ) {\displaystyle (x(\theta ),y(\theta ))} to the two-dimensional system of differential equations y ′ ( θ ) = x ( θ ) {\displaystyle y'(\theta )=x(\theta )} and x ′ ( θ ) = − y ( θ ) {\displaystyle x'(\theta )=-y(\theta )} with the initial conditions y ( 0 ) = 0 {\displaystyle y(0)=0} and x ( 0 ) = 1 {\displaystyle x(0)=1} . One could interpret the unit circle in the above definitions as defining the phase space trajectory of this system of differential equations with the given initial conditions. The area under these curves over a bounded interval can be obtained by integration. Their antiderivatives are: ∫ sin ⁡ ( x ) d x = − cos ⁡ ( x ) + C ∫ cos ⁡ ( x ) d x = sin ⁡ ( x ) + C , {\displaystyle \int \sin(x)\,dx=-\cos(x)+C\qquad \int \cos(x)\,dx=\sin(x)+C,} where C {\displaystyle C} denotes the constant of integration . [ 20 ] These antiderivatives may be applied to compute geometric properties of the sine and cosine curves over a given interval. For example, the arc length of the sine curve between 0 {\displaystyle 0} and t {\displaystyle t} is ∫ 0 t 1 + cos 2 ⁡ ( x ) d x = 2 E ⁡ ( t , 1 2 ) , {\displaystyle \int _{0}^{t}\!{\sqrt {1+\cos ^{2}(x)}}\,dx={\sqrt {2}}\operatorname {E} \left(t,{\frac {1}{\sqrt {2}}}\right),} where E ⁡ ( φ , k ) {\displaystyle \operatorname {E} (\varphi ,k)} is the incomplete elliptic integral of the second kind with modulus k {\displaystyle k} . It cannot be expressed using elementary functions .
[ 21 ] In the case of a full period, its arc length is L = 4 2 π 3 Γ ( 1 / 4 ) 2 + Γ ( 1 / 4 ) 2 2 π = 2 π ϖ + 2 ϖ ≈ 7.6404 … {\displaystyle L={\frac {4{\sqrt {2\pi ^{3}}}}{\Gamma (1/4)^{2}}}+{\frac {\Gamma (1/4)^{2}}{\sqrt {2\pi }}}={\frac {2\pi }{\varpi }}+2\varpi \approx 7.6404\ldots } where Γ {\displaystyle \Gamma } is the gamma function and ϖ {\displaystyle \varpi } is the lemniscate constant . [ 22 ] The inverse function of sine is arcsine or inverse sine, denoted as "arcsin", "asin", or sin − 1 {\displaystyle \sin ^{-1}} . [ 23 ] The inverse function of cosine is arccosine, denoted as "arccos", "acos", or cos − 1 {\displaystyle \cos ^{-1}} . [ a ] As sine and cosine are not injective , their inverses are not exact inverse functions, but partial inverse functions. For example, sin ⁡ ( 0 ) = 0 {\displaystyle \sin(0)=0} , but also sin ⁡ ( π ) = 0 {\displaystyle \sin(\pi )=0} , sin ⁡ ( 2 π ) = 0 {\displaystyle \sin(2\pi )=0} , and so on. It follows that the arcsine function is multivalued: arcsin ⁡ ( 0 ) = 0 {\displaystyle \arcsin(0)=0} , but also arcsin ⁡ ( 0 ) = π {\displaystyle \arcsin(0)=\pi } , arcsin ⁡ ( 0 ) = 2 π {\displaystyle \arcsin(0)=2\pi } , and so on. When only one value is desired, the function may be restricted to its principal branch . With this restriction, for each x {\displaystyle x} in the domain, the expression arcsin ⁡ ( x ) {\displaystyle \arcsin(x)} will evaluate only to a single value, called its principal value . The standard range of principal values for arcsin is from − π 2 {\textstyle -{\frac {\pi }{2}}} to π 2 {\textstyle {\frac {\pi }{2}}} , and the standard range for arccos is from 0 {\displaystyle 0} to π {\displaystyle \pi } . 
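The principal-value convention described above is exactly what standard math libraries implement; a small Python sketch makes the restriction visible:

```python
import math

# Many angles share the same sine: sin(0) = sin(pi) = sin(2*pi) = 0.
# asin and acos therefore return only the principal value.
print(math.sin(math.pi))  # not exactly 0 in floating point (~1.2e-16)
print(math.asin(0.0))     # 0.0 -- the principal value, not pi or 2*pi
print(math.isclose(math.asin(1.0), math.pi / 2))  # True: asin range is [-pi/2, pi/2]
print(math.isclose(math.acos(-1.0), math.pi))     # True: acos range is [0, pi]
```
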
[ 24 ] The inverse functions of sine and cosine are defined as: [ citation needed ] θ = arcsin ⁡ ( opposite hypotenuse ) = arccos ⁡ ( adjacent hypotenuse ) , {\displaystyle \theta =\arcsin \left({\frac {\text{opposite}}{\text{hypotenuse}}}\right)=\arccos \left({\frac {\text{adjacent}}{\text{hypotenuse}}}\right),} where for some integer k {\displaystyle k} , sin ⁡ ( y ) = x ⟺ y = arcsin ⁡ ( x ) + 2 π k , or y = π − arcsin ⁡ ( x ) + 2 π k cos ⁡ ( y ) = x ⟺ y = arccos ⁡ ( x ) + 2 π k , or y = − arccos ⁡ ( x ) + 2 π k {\displaystyle {\begin{aligned}\sin(y)=x\iff &y=\arcsin(x)+2\pi k,{\text{ or }}\\&y=\pi -\arcsin(x)+2\pi k\\\cos(y)=x\iff &y=\arccos(x)+2\pi k,{\text{ or }}\\&y=-\arccos(x)+2\pi k\end{aligned}}} By definition, both functions satisfy the equations: [ citation needed ] sin ⁡ ( arcsin ⁡ ( x ) ) = x cos ⁡ ( arccos ⁡ ( x ) ) = x {\displaystyle \sin(\arcsin(x))=x\qquad \cos(\arccos(x))=x} and arcsin ⁡ ( sin ⁡ ( θ ) ) = θ for − π 2 ≤ θ ≤ π 2 arccos ⁡ ( cos ⁡ ( θ ) ) = θ for 0 ≤ θ ≤ π {\displaystyle {\begin{aligned}\arcsin(\sin(\theta ))=\theta \quad &{\text{for}}\quad -{\frac {\pi }{2}}\leq \theta \leq {\frac {\pi }{2}}\\\arccos(\cos(\theta ))=\theta \quad &{\text{for}}\quad 0\leq \theta \leq \pi \end{aligned}}} According to the Pythagorean theorem , the squared hypotenuse is the sum of the two squared legs of a right triangle. Dividing both sides of this formula by the squared hypotenuse yields the Pythagorean trigonometric identity : the sum of a squared sine and a squared cosine equals 1: [ 25 ] [ b ] sin 2 ⁡ ( θ ) + cos 2 ⁡ ( θ ) = 1.
{\displaystyle \sin ^{2}(\theta )+\cos ^{2}(\theta )=1.} Sine and cosine satisfy the following double-angle formulas: [ 26 ] sin ⁡ ( 2 θ ) = 2 sin ⁡ ( θ ) cos ⁡ ( θ ) , cos ⁡ ( 2 θ ) = cos 2 ⁡ ( θ ) − sin 2 ⁡ ( θ ) = 2 cos 2 ⁡ ( θ ) − 1 = 1 − 2 sin 2 ⁡ ( θ ) {\displaystyle {\begin{aligned}\sin(2\theta )&=2\sin(\theta )\cos(\theta ),\\\cos(2\theta )&=\cos ^{2}(\theta )-\sin ^{2}(\theta )\\&=2\cos ^{2}(\theta )-1\\&=1-2\sin ^{2}(\theta )\end{aligned}}} The cosine double angle formula implies that sin 2 and cos 2 are, themselves, shifted and scaled sine waves. Specifically, [ 27 ] sin 2 ⁡ ( θ ) = 1 − cos ⁡ ( 2 θ ) 2 cos 2 ⁡ ( θ ) = 1 + cos ⁡ ( 2 θ ) 2 {\displaystyle \sin ^{2}(\theta )={\frac {1-\cos(2\theta )}{2}}\qquad \cos ^{2}(\theta )={\frac {1+\cos(2\theta )}{2}}} The graph shows both sine and sine squared functions, with the sine in blue and the sine squared in red. Both graphs have the same shape but with different ranges of values and different periods. Sine squared takes only nonnegative values, but has twice the number of periods. [ citation needed ] Both sine and cosine functions can be defined by using a Taylor series , a power series involving the higher-order derivatives. As mentioned in § Continuity and differentiation , the derivative of sine is cosine and the derivative of cosine is the negative of sine. This means the successive derivatives of sin ⁡ ( x ) {\displaystyle \sin(x)} are cos ⁡ ( x ) {\displaystyle \cos(x)} , − sin ⁡ ( x ) {\displaystyle -\sin(x)} , − cos ⁡ ( x ) {\displaystyle -\cos(x)} , sin ⁡ ( x ) {\displaystyle \sin(x)} , continuing to repeat those four functions. The ( 4 n + k ) {\displaystyle (4n+k)} - th derivative, evaluated at the point 0, is: sin ( 4 n + k ) ⁡ ( 0 ) = { 0 when k = 0 1 when k = 1 0 when k = 2 − 1 when k = 3 {\displaystyle \sin ^{(4n+k)}(0)={\begin{cases}0&{\text{when }}k=0\\1&{\text{when }}k=1\\0&{\text{when }}k=2\\-1&{\text{when }}k=3\end{cases}}} where the superscript represents repeated differentiation.
This implies the following Taylor series expansion at x = 0 {\displaystyle x=0} . One can then use the theory of Taylor series to show that the following identities hold for all real numbers x {\displaystyle x} —where x {\displaystyle x} is the angle in radians. [ 28 ] More generally, for all complex numbers : [ 29 ] sin ⁡ ( x ) = x − x 3 3 ! + x 5 5 ! − x 7 7 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) ! x 2 n + 1 {\displaystyle {\begin{aligned}\sin(x)&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}\end{aligned}}} Taking the derivative of each term gives the Taylor series for cosine: [ 28 ] [ 29 ] cos ⁡ ( x ) = 1 − x 2 2 ! + x 4 4 ! − x 6 6 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! x 2 n {\displaystyle {\begin{aligned}\cos(x)&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}\end{aligned}}} Sines and cosines of multiple angles may appear in a linear combination , resulting in a polynomial. Such a polynomial is known as a trigonometric polynomial . Trigonometric polynomials have ample applications, for example in trigonometric interpolation and in the expansion of a periodic function known as the Fourier series . Let a n {\displaystyle a_{n}} and b n {\displaystyle b_{n}} be any coefficients, then the trigonometric polynomial of degree N {\displaystyle N} —denoted as T ( x ) {\displaystyle T(x)} —is defined as: [ 30 ] [ 31 ] T ( x ) = a 0 + ∑ n = 1 N a n cos ⁡ ( n x ) + ∑ n = 1 N b n sin ⁡ ( n x ) . {\displaystyle T(x)=a_{0}+\sum _{n=1}^{N}a_{n}\cos(nx)+\sum _{n=1}^{N}b_{n}\sin(nx).} A trigonometric series can be defined analogously to a trigonometric polynomial, as its infinite extension. Let A n {\displaystyle A_{n}} and B n {\displaystyle B_{n}} be any coefficients, then the trigonometric series can be defined as: [ 32 ] 1 2 A 0 + ∑ n = 1 ∞ A n cos ⁡ ( n x ) + B n sin ⁡ ( n x ) .
{\displaystyle {\frac {1}{2}}A_{0}+\sum _{n=1}^{\infty }A_{n}\cos(nx)+B_{n}\sin(nx).} In the case of a Fourier series with a given integrable function f {\displaystyle f} , the coefficients of a trigonometric series are: [ 33 ] A n = 1 π ∫ 0 2 π f ( x ) cos ⁡ ( n x ) d x , B n = 1 π ∫ 0 2 π f ( x ) sin ⁡ ( n x ) d x . {\displaystyle {\begin{aligned}A_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\cos(nx)\,dx,\\B_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\sin(nx)\,dx.\end{aligned}}} Both sine and cosine can be extended further via complex numbers , which are composed of both real and imaginary parts. For a real number θ {\displaystyle \theta } , the definition of both sine and cosine functions can be extended to the complex plane in terms of an exponential function as follows: [ 34 ] sin ⁡ ( θ ) = e i θ − e − i θ 2 i , cos ⁡ ( θ ) = e i θ + e − i θ 2 , {\displaystyle {\begin{aligned}\sin(\theta )&={\frac {e^{i\theta }-e^{-i\theta }}{2i}},\\\cos(\theta )&={\frac {e^{i\theta }+e^{-i\theta }}{2}},\end{aligned}}} Alternatively, both functions can be defined in terms of Euler's formula : [ 34 ] e i θ = cos ⁡ ( θ ) + i sin ⁡ ( θ ) , e − i θ = cos ⁡ ( θ ) − i sin ⁡ ( θ ) . {\displaystyle {\begin{aligned}e^{i\theta }&=\cos(\theta )+i\sin(\theta ),\\e^{-i\theta }&=\cos(\theta )-i\sin(\theta ).\end{aligned}}} When plotted on the complex plane , the function e i x {\displaystyle e^{ix}} for real values of x {\displaystyle x} traces out the unit circle. Both sine and cosine functions may be identified with the imaginary and real parts of e i θ {\displaystyle e^{i\theta }} as: [ 35 ] sin ⁡ θ = Im ⁡ ( e i θ ) , cos ⁡ θ = Re ⁡ ( e i θ ) .
{\displaystyle {\begin{aligned}\sin \theta &=\operatorname {Im} (e^{i\theta }),\\\cos \theta &=\operatorname {Re} (e^{i\theta }).\end{aligned}}} When z = x + i y {\displaystyle z=x+iy} for real values x {\displaystyle x} and y {\displaystyle y} , where i = − 1 {\displaystyle i={\sqrt {-1}}} , both sine and cosine functions can be expressed in terms of real sines, cosines, and hyperbolic functions as: sin ⁡ z = sin ⁡ x cosh ⁡ y + i cos ⁡ x sinh ⁡ y , cos ⁡ z = cos ⁡ x cosh ⁡ y − i sin ⁡ x sinh ⁡ y . {\displaystyle {\begin{aligned}\sin z&=\sin x\cosh y+i\cos x\sinh y,\\\cos z&=\cos x\cosh y-i\sin x\sinh y.\end{aligned}}} Sine and cosine are used to connect the real and imaginary parts of a complex number with its polar coordinates ( r , θ ) {\displaystyle (r,\theta )} : z = r ( cos ⁡ ( θ ) + i sin ⁡ ( θ ) ) , {\displaystyle z=r(\cos(\theta )+i\sin(\theta )),} and the real and imaginary parts are Re ⁡ ( z ) = r cos ⁡ ( θ ) , Im ⁡ ( z ) = r sin ⁡ ( θ ) , {\displaystyle {\begin{aligned}\operatorname {Re} (z)&=r\cos(\theta ),\\\operatorname {Im} (z)&=r\sin(\theta ),\end{aligned}}} where r {\displaystyle r} and θ {\displaystyle \theta } represent the magnitude and angle of the complex number z {\displaystyle z} . For any real number θ {\displaystyle \theta } , Euler's formula in terms of polar coordinates is stated as z = r e i θ {\textstyle z=re^{i\theta }} . Applying the series definition of the sine and cosine to a complex argument, z , gives sin ⁡ ( z ) = − i sinh ⁡ ( i z ) , cos ⁡ ( z ) = cosh ⁡ ( i z ) , {\displaystyle \sin(z)=-i\sinh(iz),\qquad \cos(z)=\cosh(iz),} where sinh and cosh are the hyperbolic sine and cosine . These are entire functions . 
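The decomposition of the complex sine and cosine into real and hyperbolic parts can be spot-checked with Python's standard math and cmath modules (the variable names below are illustrative):

```python
import cmath
import math

# A sample complex argument z = x + iy
z = 1.0 + 2.0j
x, y = z.real, z.imag

# sin z = sin x cosh y + i cos x sinh y
sin_z = complex(math.sin(x) * math.cosh(y), math.cos(x) * math.sinh(y))

# cos z = cos x cosh y - i sin x sinh y
cos_z = complex(math.cos(x) * math.cosh(y), -math.sin(x) * math.sinh(y))
```

Both values agree with `cmath.sin(z)` and `cmath.cos(z)` to machine precision.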
It is also sometimes useful to express the complex sine and cosine functions in terms of the real and imaginary parts of their argument: sin ⁡ ( x + i y ) = sin ⁡ x cosh ⁡ y + i cos ⁡ x sinh ⁡ y , cos ⁡ ( x + i y ) = cos ⁡ x cosh ⁡ y − i sin ⁡ x sinh ⁡ y . {\displaystyle {\begin{aligned}\sin(x+iy)&=\sin x\cosh y+i\cos x\sinh y,\\\cos(x+iy)&=\cos x\cosh y-i\sin x\sinh y.\end{aligned}}} Using the partial fraction expansion technique in complex analysis , one can find that the infinite series ∑ n = − ∞ ∞ ( − 1 ) n z − n = 1 z − 2 z ∑ n = 1 ∞ ( − 1 ) n n 2 − z 2 {\displaystyle \sum _{n=-\infty }^{\infty }{\frac {(-1)^{n}}{z-n}}={\frac {1}{z}}-2z\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{n^{2}-z^{2}}}} both converge and are equal to π sin ⁡ ( π z ) {\textstyle {\frac {\pi }{\sin(\pi z)}}} . Similarly, one can show that π 2 sin 2 ⁡ ( π z ) = ∑ n = − ∞ ∞ 1 ( z − n ) 2 . {\displaystyle {\frac {\pi ^{2}}{\sin ^{2}(\pi z)}}=\sum _{n=-\infty }^{\infty }{\frac {1}{(z-n)^{2}}}.} Using product expansion technique, one can derive sin ⁡ ( π z ) = π z ∏ n = 1 ∞ ( 1 − z 2 n 2 ) . {\displaystyle \sin(\pi z)=\pi z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}}}\right).} sin( z ) is found in the functional equation for the Gamma function , Γ ( s ) Γ ( 1 − s ) = π sin ⁡ ( π s ) , {\displaystyle \Gamma (s)\Gamma (1-s)={\frac {\pi }{\sin(\pi s)}},} which in turn is found in the functional equation for the Riemann zeta-function , ζ ( s ) = 2 s π s − 1 sin ⁡ ( π s 2 ) Γ ( 1 − s ) ζ ( 1 − s ) . {\displaystyle \zeta (s)=2^{s}\pi ^{s-1}\sin \left({\frac {\pi s}{2}}\right)\Gamma (1-s)\zeta (1-s).} As a holomorphic function , sin z is a 2D solution of Laplace's equation : Δ u ( x 1 , x 2 ) = 0. {\displaystyle \Delta u(x_{1},x_{2})=0.} The complex sine function is also related to the level curves of pendulums . [ 36 ] The word sine is derived, indirectly, from the Sanskrit word jyā 'bow-string' or more specifically its synonym jīvá (both adopted from Ancient Greek χορδή 'string; chord'), due to visual similarity between the arc of a circle with its corresponding chord and a bow with its string (see jyā, koti-jyā and utkrama-jyā ; sine and chord are closely related in a circle of unit diameter, see Ptolemy's Theorem ). This was transliterated in Arabic as jība , which is meaningless in that language and written as jb ( جب ). Since Arabic is written without short vowels, jb was interpreted as the homograph jayb ( جيب ), which means 'bosom', 'pocket', or 'fold'. 
[ 37 ] [ 38 ] When the Arabic texts of Al-Battani and al-Khwārizmī were translated into Medieval Latin in the 12th century by Gerard of Cremona , he used the Latin equivalent sinus (which also means 'bay' or 'fold', and more specifically 'the hanging fold of a toga over the breast'). [ 39 ] [ 40 ] [ 41 ] Gerard was probably not the first scholar to use this translation; Robert of Chester appears to have preceded him and there is evidence of even earlier usage. [ 42 ] [ 43 ] The English form sine was introduced in Thomas Fale 's 1593 Horologiographia . [ 44 ] The word cosine derives from an abbreviation of the Latin complementi sinus 'sine of the complementary angle ' as cosinus in Edmund Gunter 's Canon triangulorum (1620), which also includes a similar definition of cotangens . [ 45 ] While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was discovered by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). [ 46 ] The sine and cosine functions are closely related to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period ( Aryabhatiya and Surya Siddhanta ), via translation from Sanskrit to Arabic and then from Arabic to Latin. [ 39 ] [ 47 ] All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines , used in solving triangles . [ 48 ] Al-Khwārizmī (c. 780–850) produced tables of sines, cosines and tangents. [ 49 ] [ 50 ] Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. [ 50 ] In the early 17th century, the French mathematician Albert Girard published the first use of the abbreviations sin , cos , and tan ; these were further promulgated by Euler (see below). 
The Opus palatinum de triangulis of Georg Joachim Rheticus , a student of Copernicus , was probably the first in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596. In a paper published in 1682, Leibniz proved that sin x is not an algebraic function of x . [ 51 ] Roger Cotes computed the derivative of sine in his Harmonia Mensurarum (1722). [ 52 ] Leonhard Euler 's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, also defining them as infinite series and presenting " Euler's formula ", as well as the near-modern abbreviations sin. , cos. , tang. , cot. , sec. , and cosec. [ 39 ] There is no standard algorithm for calculating sine and cosine. IEEE 754 , the most widely used standard for the specification of reliable floating-point computation, does not address calculating trigonometric functions such as sine. The reason is that no efficient algorithm is known for computing sine and cosine with a specified accuracy, especially for large inputs. [ 53 ] Algorithms for calculating sine may be balanced for such constraints as speed, accuracy, portability, or range of input values accepted. This can lead to different results for different algorithms, especially for special circumstances such as very large inputs, e.g. sin ⁡ ( 10 22 ) {\textstyle \sin(10^{22})} . A common programming optimization, used especially in 3D graphics, is to pre-calculate a table of sine values, for example one value per degree, then for values in between pick the closest pre-calculated value, or linearly interpolate between the two closest values to approximate it. This allows results to be looked up from a table rather than being calculated in real time. With modern CPU architectures this method may offer no advantage. 
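The table-plus-interpolation optimization described above can be sketched as follows (the table resolution of one entry per degree is just the example given in the text, and the function name is illustrative):

```python
import math

# Pre-calculated table: one sine value per degree, 0..360 inclusive
SINE_TABLE = [math.sin(math.radians(d)) for d in range(361)]

def table_sin(degrees):
    # Reduce the angle to [0, 360) and linearly interpolate between
    # the two closest pre-calculated values
    d = degrees % 360.0
    i = int(d)
    frac = d - i
    return SINE_TABLE[i] + frac * (SINE_TABLE[i + 1] - SINE_TABLE[i])
```

With a one-degree grid, linear interpolation of sine is accurate to roughly 4e-5, which is often acceptable for graphics but far from full double precision.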
The CORDIC algorithm is commonly used in scientific calculators. The sine and cosine functions, along with other trigonometric functions, are widely available across programming languages and platforms. In computing, they are typically abbreviated to sin and cos . Some CPU architectures have a built-in instruction for sine, including the Intel x87 FPUs since the 80387. In programming languages, sin and cos are typically either a built-in function or found within the language's standard math library. For example, the C standard library defines sine functions within math.h : sin(double) , sinf(float) , and sinl(long double) . The parameter of each is a floating-point value, specifying the angle in radians. Each function returns the same data type as it accepts. Many other trigonometric functions are also defined in math.h , such as for cosine, arc sine, and hyperbolic sine (sinh). Similarly, Python defines math.sin(x) and math.cos(x) within the built-in math module. Complex sine and cosine functions are also available within the cmath module, e.g. cmath.sin(z) . CPython 's math functions call the C math library, and use a double-precision floating-point format . Some software libraries provide implementations of sine and cosine using the input angle in half- turns , a half-turn being an angle of 180 degrees or π {\displaystyle \pi } radians. Representing angles in turns or half-turns has accuracy advantages and efficiency advantages in some cases. [ 54 ] [ 55 ] These functions are called sinpi and cospi in MATLAB, [ 54 ] OpenCL, [ 56 ] R, [ 55 ] Julia, [ 57 ] CUDA, [ 58 ] and ARM. [ 59 ] For example, sinpi(x) would evaluate to sin ⁡ ( π x ) , {\displaystyle \sin(\pi x),} where x is expressed in half-turns; consequently, the final input to the function, πx , can be interpreted in radians by sin . 
The accuracy advantage stems from the ability to perfectly represent key angles like full-turn, half-turn, and quarter-turn losslessly in binary floating-point or fixed-point. In contrast, representing 2 π {\displaystyle 2\pi } , π {\displaystyle \pi } , and π 2 {\textstyle {\frac {\pi }{2}}} in binary floating-point or binary scaled fixed-point always involves a loss of accuracy since irrational numbers cannot be represented with finitely many binary digits. Turns also have an accuracy advantage and efficiency advantage for computing modulo to one period. Computing modulo 1 turn or modulo 2 half-turns can be losslessly and efficiently computed in both floating-point and fixed-point. For example, computing modulo 1 or modulo 2 for a binary point scaled fixed-point value requires only a bit shift or bitwise AND operation. In contrast, computing modulo π 2 {\textstyle {\frac {\pi }{2}}} involves inaccuracies in representing π 2 {\textstyle {\frac {\pi }{2}}} . For applications involving angle sensors, the sensor typically provides angle measurements in a form directly compatible with turns or half-turns. For example, an angle sensor may count from 0 to 4096 over one complete revolution. [ 60 ] If half-turns are used as the unit for angle, then the value provided by the sensor directly and losslessly maps to a fixed-point data type with 11 bits to the right of the binary point. In contrast, if radians are used as the unit for storing the angle, then the inaccuracies and cost of multiplying the raw sensor integer by an approximation to π 2048 {\textstyle {\frac {\pi }{2048}}} would be incurred.
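The fixed-point scheme described above can be sketched in a few lines; the sensor scale of 2048 counts per half-turn matches the example in the text, while the function name and use of the library sine for the final evaluation are illustrative assumptions:

```python
import math

COUNTS_PER_HALF_TURN = 2048  # 11 bits to the right of the binary point

def sin_from_counts(count):
    # Angle reduction modulo 2 half-turns (one full turn) is a lossless
    # bitwise AND on the raw integer count
    wrapped = count & (2 * COUNTS_PER_HALF_TURN - 1)
    half_turns = wrapped / COUNTS_PER_HALF_TURN
    # The conversion to radians happens only once, at the final call
    return math.sin(math.pi * half_turns)
```

Because the wrap is exact on integers, adding any whole number of turns to the raw count gives bit-identical results, which is the accuracy advantage the text describes.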
In mathematics , sine and cosine are trigonometric functions of an angle . The sine and cosine of an acute angle are defined in the context of a right triangle : for the specified angle, its sine is the ratio of the length of the side opposite that angle to the length of the longest side of the triangle (the hypotenuse ), and the cosine is the ratio of the length of the adjacent leg to that of the hypotenuse . For an angle θ {\displaystyle \theta } , the sine and cosine functions are denoted as sin ⁡ ( θ ) {\displaystyle \sin(\theta )} and cos ⁡ ( θ ) {\displaystyle \cos(\theta )} . The definitions of sine and cosine have been extended to any real value in terms of the lengths of certain line segments in a unit circle . More modern definitions express the sine and cosine as infinite series , or as the solutions of certain differential equations , allowing their extension to arbitrary positive and negative values and even to complex numbers . The sine and cosine functions are commonly used to model periodic phenomena such as sound and light waves , the position and velocity of harmonic oscillators, sunlight intensity and day length, and average temperature variations throughout the year. They can be traced to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period . To define the sine and cosine of an acute angle α {\displaystyle \alpha } , start with a right triangle that contains an angle of measure α {\displaystyle \alpha } ; in the accompanying figure, angle α {\displaystyle \alpha } in a right triangle A B C {\displaystyle ABC} is the angle of interest. The three sides of the triangle are named as follows: the opposite side is the side opposite the angle of interest; the hypotenuse is the side opposite the right angle, and is the longest side of the triangle; and the adjacent side is the remaining side, between the angle of interest and the right angle. [ 1 ] Once such a triangle is chosen, the sine of the angle is equal to the length of the opposite side divided by the length of the hypotenuse, and the cosine of the angle is equal to the length of the adjacent side divided by the length of the hypotenuse: [ 1 ] sin ⁡ ( α ) = opposite hypotenuse , cos ⁡ ( α ) = adjacent hypotenuse . 
{\displaystyle \sin(\alpha )={\frac {\text{opposite}}{\text{hypotenuse}}},\qquad \cos(\alpha )={\frac {\text{adjacent}}{\text{hypotenuse}}}.} The other trigonometric functions of the angle can be defined similarly; for example, the tangent is the ratio between the opposite and adjacent sides or equivalently the ratio between the sine and cosine functions. The reciprocal of sine is cosecant, which gives the ratio of the hypotenuse length to the length of the opposite side. Similarly, the reciprocal of cosine is secant, which gives the ratio of the hypotenuse length to that of the adjacent side. The cotangent function is the ratio between the adjacent and opposite sides, a reciprocal of a tangent function. These functions can be formulated as: [ 1 ] tan ⁡ ( θ ) = sin ⁡ ( θ ) cos ⁡ ( θ ) = opposite adjacent , cot ⁡ ( θ ) = 1 tan ⁡ ( θ ) = adjacent opposite , csc ⁡ ( θ ) = 1 sin ⁡ ( θ ) = hypotenuse opposite , sec ⁡ ( θ ) = 1 cos ⁡ ( θ ) = hypotenuse adjacent . {\displaystyle {\begin{aligned}\tan(\theta )&={\frac {\sin(\theta )}{\cos(\theta )}}={\frac {\text{opposite}}{\text{adjacent}}},\\\cot(\theta )&={\frac {1}{\tan(\theta )}}={\frac {\text{adjacent}}{\text{opposite}}},\\\csc(\theta )&={\frac {1}{\sin(\theta )}}={\frac {\text{hypotenuse}}{\text{opposite}}},\\\sec(\theta )&={\frac {1}{\cos(\theta )}}={\frac {\textrm {hypotenuse}}{\textrm {adjacent}}}.\end{aligned}}} As stated, the values sin ⁡ ( α ) {\displaystyle \sin(\alpha )} and cos ⁡ ( α ) {\displaystyle \cos(\alpha )} appear to depend on the choice of a right triangle containing an angle of measure α {\displaystyle \alpha } . However, this is not the case as all such triangles are similar , and so the ratios are the same for each of them. For example, each leg of the 45-45-90 right triangle is 1 unit, and its hypotenuse is 2 {\displaystyle {\sqrt {2}}} ; therefore, sin ⁡ 45 ∘ = cos ⁡ 45 ∘ = 2 2 {\textstyle \sin 45^{\circ }=\cos 45^{\circ }={\frac {\sqrt {2}}{2}}} . 
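As a worked example of these ratio definitions, consider the 3-4-5 right triangle (a sketch; the variable names are illustrative):

```python
import math

# Sides of a 3-4-5 right triangle, relative to the angle alpha
opposite, adjacent, hypotenuse = 3.0, 4.0, 5.0

sin_alpha = opposite / hypotenuse   # 0.6
cos_alpha = adjacent / hypotenuse   # 0.8
tan_alpha = sin_alpha / cos_alpha   # 0.75, the ratio of sine to cosine

# Recovering the angle from the legs and evaluating the library
# functions reproduces the same ratios
alpha = math.atan2(opposite, adjacent)
```

Scaling all three sides by the same factor leaves every ratio unchanged, which is the similarity argument made in the text.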
[ 2 ] The following table shows the special values of sine and cosine for inputs in the domain 0 < α < π 2 {\textstyle 0<\alpha <{\frac {\pi }{2}}} , with the angles expressed in several unit systems such as degrees and radians. Values at angles other than those five can be obtained by using a calculator. [ 3 ] [ 4 ] The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. [ 5 ] Given a triangle A B C {\displaystyle ABC} with sides a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} , and angles opposite those sides α {\displaystyle \alpha } , β {\displaystyle \beta } , and γ {\displaystyle \gamma } , the law states, sin ⁡ α a = sin ⁡ β b = sin ⁡ γ c . {\displaystyle {\frac {\sin \alpha }{a}}={\frac {\sin \beta }{b}}={\frac {\sin \gamma }{c}}.} This is equivalent to the equality of the first three expressions below: a sin ⁡ α = b sin ⁡ β = c sin ⁡ γ = 2 R , {\displaystyle {\frac {a}{\sin \alpha }}={\frac {b}{\sin \beta }}={\frac {c}{\sin \gamma }}=2R,} where R {\displaystyle R} is the triangle's circumradius . The law of cosines is useful for computing the length of an unknown side if two other sides and an angle are known. [ 5 ] The law states, a 2 + b 2 − 2 a b cos ⁡ ( γ ) = c 2 {\displaystyle a^{2}+b^{2}-2ab\cos(\gamma )=c^{2}} In the case where γ = π / 2 {\displaystyle \gamma =\pi /2} , from which cos ⁡ ( γ ) = 0 {\displaystyle \cos(\gamma )=0} , the resulting equation becomes the Pythagorean theorem . [ 6 ] The cross product and dot product are operations on two vectors in Euclidean vector space . The sine and cosine functions can be defined in terms of the cross product and dot product. 
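As a numerical illustration of the laws of cosines and sines stated above, one can solve a triangle from two sides and the included angle (the side lengths here are chosen arbitrarily):

```python
import math

# Two sides and the included angle gamma (gamma is opposite side c)
a, b = 7.0, 5.0
gamma = math.radians(60)

# Law of cosines gives the third side:
# c^2 = a^2 + b^2 - 2ab cos(gamma)
c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))

# Law of sines then gives the angle opposite side a
# (alpha is acute for these particular side lengths, so asin suffices)
alpha = math.asin(a * math.sin(gamma) / c)
```

The recovered quantities satisfy the common ratio of the law of sines, sin(α)/a = sin(γ)/c.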
If a {\displaystyle \mathbb {a} } and b {\displaystyle \mathbb {b} } are vectors, and θ {\displaystyle \theta } is the angle between a {\displaystyle \mathbb {a} } and b {\displaystyle \mathbb {b} } , then sine and cosine can be defined as: sin ⁡ ( θ ) = | a × b | | a | | b | , cos ⁡ ( θ ) = a ⋅ b | a | | b | . {\displaystyle {\begin{aligned}\sin(\theta )&={\frac {|\mathbb {a} \times \mathbb {b} |}{|a||b|}},\\\cos(\theta )&={\frac {\mathbb {a} \cdot \mathbb {b} }{|a||b|}}.\end{aligned}}} The sine and cosine functions may also be defined in a more general way by using the unit circle , a circle of radius one centered at the origin ( 0 , 0 ) {\displaystyle (0,0)} , formulated as the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} in the Cartesian coordinate system . Let a line through the origin intersect the unit circle, making an angle of θ {\displaystyle \theta } with the positive half of the x {\displaystyle x} - axis. The x {\displaystyle x} - and y {\displaystyle y} - coordinates of this point of intersection are equal to cos ⁡ ( θ ) {\displaystyle \cos(\theta )} and sin ⁡ ( θ ) {\displaystyle \sin(\theta )} , respectively; that is, [ 7 ] sin ⁡ ( θ ) = y , cos ⁡ ( θ ) = x . {\displaystyle \sin(\theta )=y,\qquad \cos(\theta )=x.} This definition is consistent with the right-angled triangle definition of sine and cosine when 0 < θ < π 2 {\textstyle 0<\theta <{\frac {\pi }{2}}} because the length of the hypotenuse of the unit circle is always 1; mathematically speaking, the sine of an angle equals the opposite side of the triangle, which is simply the y {\displaystyle y} - coordinate. A similar argument can be made for the cosine function to show that the cosine of an angle equals the x {\displaystyle x} - coordinate when 0 < θ < π 2 {\textstyle 0<\theta <{\frac {\pi }{2}}} , even under the new definition using the unit circle. [ 8 ] [ 9 ] Using the unit circle definition has the advantage that it allows drawing the graphs of the sine and cosine functions. 
This can be done by rotating a point counterclockwise along the circumference of the circle through an angle θ > 0 {\displaystyle \theta >0} . In a sine function, if the input is θ = π 2 {\textstyle \theta ={\frac {\pi }{2}}} , the point is rotated counterclockwise and stopped exactly on the y {\displaystyle y} - axis. If θ = π {\displaystyle \theta =\pi } , the point is halfway around the circle. If θ = 2 π {\displaystyle \theta =2\pi } , the point returns to its starting position. As a result, both sine and cosine functions have range − 1 ≤ y ≤ 1 {\displaystyle -1\leq y\leq 1} . [ 10 ] Extending the angle to the whole real line corresponds to rotating the point counterclockwise continuously. This can be done similarly for the cosine function as well, although its value is read from the x {\displaystyle x} - coordinate of the rotating point. In other words, both sine and cosine functions are periodic : adding a full revolution to an angle leaves their values unchanged. Mathematically, [ 11 ] sin ⁡ ( θ + 2 π ) = sin ⁡ ( θ ) , cos ⁡ ( θ + 2 π ) = cos ⁡ ( θ ) . {\displaystyle \sin(\theta +2\pi )=\sin(\theta ),\qquad \cos(\theta +2\pi )=\cos(\theta ).} A function f {\displaystyle f} is said to be odd if f ( − x ) = − f ( x ) {\displaystyle f(-x)=-f(x)} , and is said to be even if f ( − x ) = f ( x ) {\displaystyle f(-x)=f(x)} . The sine function is odd, whereas the cosine function is even. [ 12 ] The sine and cosine functions have the same shape, with the two graphs shifted relative to each other by π 2 {\textstyle {\frac {\pi }{2}}} . This means, [ 13 ] sin ⁡ ( θ ) = cos ⁡ ( π 2 − θ ) , cos ⁡ ( θ ) = sin ⁡ ( π 2 − θ ) . {\displaystyle {\begin{aligned}\sin(\theta )&=\cos \left({\frac {\pi }{2}}-\theta \right),\\\cos(\theta )&=\sin \left({\frac {\pi }{2}}-\theta \right).\end{aligned}}} Zero is the only real fixed point of the sine function; in other words, the only intersection of the sine function and the identity function is sin ⁡ ( 0 ) = 0 {\displaystyle \sin(0)=0} . 
The only real fixed point of the cosine function is called the Dottie number . The Dottie number is the unique real root of the equation cos ⁡ ( x ) = x {\displaystyle \cos(x)=x} . The decimal expansion of the Dottie number is approximately 0.739085. [ 14 ] The sine and cosine functions are infinitely differentiable. [ 15 ] The derivative of sine is cosine, and the derivative of cosine is negative sine: [ 16 ] d d x sin ⁡ ( x ) = cos ⁡ ( x ) , d d x cos ⁡ ( x ) = − sin ⁡ ( x ) . {\displaystyle {\frac {d}{dx}}\sin(x)=\cos(x),\qquad {\frac {d}{dx}}\cos(x)=-\sin(x).} Higher-order derivatives repeat the same four functions cyclically; for example, the fourth derivative of sine is sine itself. [ 15 ] These derivatives can be used in the first derivative test , which determines the intervals on which a function is monotonic from the sign of its first derivative, [ 17 ] and in the second derivative test , which determines the concavity of a function from the sign of its second derivative. [ 18 ] The following table shows the monotonicity and concavity of both sine and cosine functions in certain intervals; the positive sign ( + {\displaystyle +} ) denotes that a graph is increasing (going upward) and the negative sign ( − {\displaystyle -} ) that it is decreasing (going downward). [ 19 ] This information can be represented as a Cartesian coordinate system divided into four quadrants. Both sine and cosine functions can be defined by using differential equations. 
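The Dottie number mentioned above can be computed by simple fixed-point iteration: because the magnitude of the derivative of cosine is less than 1 near the fixed point, repeatedly applying the cosine contracts toward it. A minimal sketch:

```python
import math

# Iterate x -> cos(x); the iteration converges to the unique real
# solution of cos(x) = x (the Dottie number, approximately 0.739085)
x = 1.0
for _ in range(100):
    x = math.cos(x)
```

After a hundred iterations the value agrees with the fixed point to machine precision, regardless of the (real) starting value chosen.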
The pair ( cos ⁡ θ , sin ⁡ θ ) {\displaystyle (\cos \theta ,\sin \theta )} is the solution ( x ( θ ) , y ( θ ) ) {\displaystyle (x(\theta ),y(\theta ))} to the two-dimensional system of differential equations y ′ ( θ ) = x ( θ ) {\displaystyle y'(\theta )=x(\theta )} and x ′ ( θ ) = − y ( θ ) {\displaystyle x'(\theta )=-y(\theta )} with the initial conditions y ( 0 ) = 0 {\displaystyle y(0)=0} and x ( 0 ) = 1 {\displaystyle x(0)=1} . One could interpret the unit circle in the above definitions as defining the phase space trajectory of this system of differential equations with the given initial conditions. The area under these curves over a bounded interval can be obtained by integration. Their antiderivatives are: ∫ sin ⁡ ( x ) d x = − cos ⁡ ( x ) + C ∫ cos ⁡ ( x ) d x = sin ⁡ ( x ) + C , {\displaystyle \int \sin(x)\,dx=-\cos(x)+C\qquad \int \cos(x)\,dx=\sin(x)+C,} where C {\displaystyle C} denotes the constant of integration . [ 20 ] These antiderivatives may be applied to compute geometric properties of the sine and cosine curves over a given interval. For example, the arc length of the sine curve between 0 {\displaystyle 0} and t {\displaystyle t} is ∫ 0 t 1 + cos 2 ⁡ ( x ) d x = 2 E ⁡ ( t , 1 2 ) , {\displaystyle \int _{0}^{t}\!{\sqrt {1+\cos ^{2}(x)}}\,dx={\sqrt {2}}\operatorname {E} \left(t,{\frac {1}{\sqrt {2}}}\right),} where E ⁡ ( φ , k ) {\displaystyle \operatorname {E} (\varphi ,k)} is the incomplete elliptic integral of the second kind with modulus k {\displaystyle k} . It cannot be expressed using elementary functions . 
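The defining system of differential equations above can be integrated numerically to recover sine and cosine. The following sketch uses a classical fourth-order Runge-Kutta step (any standard ODE method would serve equally well):

```python
import math

def rhs(x, y):
    # The system from the definition: x'(t) = -y(t), y'(t) = x(t)
    return -y, x

def rk4_step(x, y, h):
    # One classical fourth-order Runge-Kutta step of size h
    k1x, k1y = rhs(x, y)
    k2x, k2y = rhs(x + h / 2 * k1x, y + h / 2 * k1y)
    k3x, k3y = rhs(x + h / 2 * k2x, y + h / 2 * k2y)
    k4x, k4y = rhs(x + h * k3x, y + h * k3y)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            y + h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y))

# Integrate from (x, y) = (1, 0) up to t = 1; the trajectory
# approximates (cos t, sin t)
x, y, h = 1.0, 0.0, 1e-3
for _ in range(1000):
    x, y = rk4_step(x, y, h)
```

With this step size the result matches `math.cos(1.0)` and `math.sin(1.0)` to better than nine decimal places, and the trajectory stays on the unit circle, consistent with the phase-space interpretation.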
[ 21 ] In the case of a full period, its arc length is L = 4 2 π 3 Γ ( 1 / 4 ) 2 + Γ ( 1 / 4 ) 2 2 π = 2 π ϖ + 2 ϖ ≈ 7.6404 … {\displaystyle L={\frac {4{\sqrt {2\pi ^{3}}}}{\Gamma (1/4)^{2}}}+{\frac {\Gamma (1/4)^{2}}{\sqrt {2\pi }}}={\frac {2\pi }{\varpi }}+2\varpi \approx 7.6404\ldots } where Γ {\displaystyle \Gamma } is the gamma function and ϖ {\displaystyle \varpi } is the lemniscate constant . [ 22 ] The inverse function of sine is arcsine or inverse sine, denoted as "arcsin", "asin", or sin − 1 {\displaystyle \sin ^{-1}} . [ 23 ] The inverse function of cosine is arccosine, denoted as "arccos", "acos", or cos − 1 {\displaystyle \cos ^{-1}} . [ a ] As sine and cosine are not injective , their inverses are not exact inverse functions, but partial inverse functions. For example, sin ⁡ ( 0 ) = 0 {\displaystyle \sin(0)=0} , but also sin ⁡ ( π ) = 0 {\displaystyle \sin(\pi )=0} , sin ⁡ ( 2 π ) = 0 {\displaystyle \sin(2\pi )=0} , and so on. It follows that the arcsine function is multivalued: arcsin ⁡ ( 0 ) = 0 {\displaystyle \arcsin(0)=0} , but also arcsin ⁡ ( 0 ) = π {\displaystyle \arcsin(0)=\pi } , arcsin ⁡ ( 0 ) = 2 π {\displaystyle \arcsin(0)=2\pi } , and so on. When only one value is desired, the function may be restricted to its principal branch . With this restriction, for each x {\displaystyle x} in the domain, the expression arcsin ⁡ ( x ) {\displaystyle \arcsin(x)} will evaluate only to a single value, called its principal value . The standard range of principal values for arcsin is from − π 2 {\textstyle -{\frac {\pi }{2}}} to π 2 {\textstyle {\frac {\pi }{2}}} , and the standard range for arccos is from 0 {\displaystyle 0} to π {\displaystyle \pi } . 
[ 24 ] The inverse functions of sine and cosine are defined as: θ = arcsin ⁡ ( opposite hypotenuse ) = arccos ⁡ ( adjacent hypotenuse ) , {\displaystyle \theta =\arcsin \left({\frac {\text{opposite}}{\text{hypotenuse}}}\right)=\arccos \left({\frac {\text{adjacent}}{\text{hypotenuse}}}\right),} where for some integer k {\displaystyle k} , sin ⁡ ( y ) = x ⟺ y = arcsin ⁡ ( x ) + 2 π k , or y = π − arcsin ⁡ ( x ) + 2 π k cos ⁡ ( y ) = x ⟺ y = arccos ⁡ ( x ) + 2 π k , or y = − arccos ⁡ ( x ) + 2 π k {\displaystyle {\begin{aligned}\sin(y)=x\iff &y=\arcsin(x)+2\pi k,{\text{ or }}\\&y=\pi -\arcsin(x)+2\pi k\\\cos(y)=x\iff &y=\arccos(x)+2\pi k,{\text{ or }}\\&y=-\arccos(x)+2\pi k\end{aligned}}} By definition, both functions satisfy the equations: sin ⁡ ( arcsin ⁡ ( x ) ) = x cos ⁡ ( arccos ⁡ ( x ) ) = x {\displaystyle \sin(\arcsin(x))=x\qquad \cos(\arccos(x))=x} and arcsin ⁡ ( sin ⁡ ( θ ) ) = θ for − π 2 ≤ θ ≤ π 2 arccos ⁡ ( cos ⁡ ( θ ) ) = θ for 0 ≤ θ ≤ π {\displaystyle {\begin{aligned}\arcsin(\sin(\theta ))=\theta \quad &{\text{for}}\quad -{\frac {\pi }{2}}\leq \theta \leq {\frac {\pi }{2}}\\\arccos(\cos(\theta ))=\theta \quad &{\text{for}}\quad 0\leq \theta \leq \pi \end{aligned}}} According to the Pythagorean theorem , the squared hypotenuse is the sum of the two squared legs of a right triangle. Dividing both sides of the formula by the squared hypotenuse yields the Pythagorean trigonometric identity : the sum of the squared sine and the squared cosine equals 1: [ 25 ] [ b ] sin 2 ⁡ ( θ ) + cos 2 ⁡ ( θ ) = 1. 
{\displaystyle \sin ^{2}(\theta )+\cos ^{2}(\theta )=1.} Sine and cosine satisfy the following double-angle formulas: [ 26 ] sin ⁡ ( 2 θ ) = 2 sin ⁡ ( θ ) cos ⁡ ( θ ) , cos ⁡ ( 2 θ ) = cos 2 ⁡ ( θ ) − sin 2 ⁡ ( θ ) = 2 cos 2 ⁡ ( θ ) − 1 = 1 − 2 sin 2 ⁡ ( θ ) {\displaystyle {\begin{aligned}\sin(2\theta )&=2\sin(\theta )\cos(\theta ),\\\cos(2\theta )&=\cos ^{2}(\theta )-\sin ^{2}(\theta )\\&=2\cos ^{2}(\theta )-1\\&=1-2\sin ^{2}(\theta )\end{aligned}}} The cosine double angle formula implies that sin 2 and cos 2 are, themselves, shifted and scaled sine waves. Specifically, [ 27 ] sin 2 ⁡ ( θ ) = 1 − cos ⁡ ( 2 θ ) 2 cos 2 ⁡ ( θ ) = 1 + cos ⁡ ( 2 θ ) 2 {\displaystyle \sin ^{2}(\theta )={\frac {1-\cos(2\theta )}{2}}\qquad \cos ^{2}(\theta )={\frac {1+\cos(2\theta )}{2}}} The graph shows both sine and sine squared functions, with the sine in blue and the sine squared in red. Both graphs have the same shape but with different ranges of values and different periods. Sine squared has only positive values, but twice the number of periods. Both sine and cosine functions can be defined by using a Taylor series , a power series involving the higher-order derivatives. As mentioned in § Continuity and differentiation , the derivative of sine is cosine and the derivative of cosine is the negative of sine. This means the successive derivatives of sin ⁡ ( x ) {\displaystyle \sin(x)} are cos ⁡ ( x ) {\displaystyle \cos(x)} , − sin ⁡ ( x ) {\displaystyle -\sin(x)} , − cos ⁡ ( x ) {\displaystyle -\cos(x)} , sin ⁡ ( x ) {\displaystyle \sin(x)} , continuing to repeat those four functions. The ( 4 n + k ) {\displaystyle (4n+k)} - th derivative, evaluated at the point 0, is: sin ( 4 n + k ) ⁡ ( 0 ) = { 0 when k = 0 1 when k = 1 0 when k = 2 − 1 when k = 3 {\displaystyle \sin ^{(4n+k)}(0)={\begin{cases}0&{\text{when }}k=0\\1&{\text{when }}k=1\\0&{\text{when }}k=2\\-1&{\text{when }}k=3\end{cases}}} where the superscript represents repeated differentiation. 
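The cyclic pattern of derivatives above produces the Maclaurin coefficients of sine: every even-order coefficient vanishes, and the odd-order coefficients alternate in sign. A partial-sum sketch, compared against the library function:

```python
import math

def sin_series(x, terms=12):
    # Partial sum of sum_{n>=0} (-1)^n x^(2n+1) / (2n+1)!, which follows
    # from the repeating derivative values 0, 1, 0, -1, ... at x = 0
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))
```

For moderate arguments only a handful of terms are needed; with 12 terms the partial sum matches `math.sin` to machine precision on the interval shown in the tests.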
This implies the following Taylor series expansion at x = 0 {\displaystyle x=0} . One can then use the theory of Taylor series to show that the following identities hold for all real numbers x {\displaystyle x} —where x {\displaystyle x} is the angle in radians. [ 28 ] More generally, for all complex numbers : [ 29 ] sin ⁡ ( x ) = x − x 3 3 ! + x 5 5 ! − x 7 7 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) ! x 2 n + 1 {\displaystyle {\begin{aligned}\sin(x)&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}\end{aligned}}} Taking the derivative of each term gives the Taylor series for cosine: [ 28 ] [ 29 ] cos ⁡ ( x ) = 1 − x 2 2 ! + x 4 4 ! − x 6 6 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! x 2 n {\displaystyle {\begin{aligned}\cos(x)&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}\end{aligned}}} Both sine and cosine functions with multiple angles may appear as their linear combination , resulting in a polynomial. Such a polynomial is known as the trigonometric polynomial . The trigonometric polynomial's ample applications may be acquired in its interpolation , and its extension of a periodic function known as the Fourier series . Let a n {\displaystyle a_{n}} and b n {\displaystyle b_{n}} be any coefficients, then the trigonometric polynomial of a degree N {\displaystyle N} —denoted as T ( x ) {\displaystyle T(x)} —is defined as: [ 30 ] [ 31 ] T ( x ) = a 0 + ∑ n = 1 N a n cos ⁡ ( n x ) + ∑ n = 1 N b n sin ⁡ ( n x ) . {\displaystyle T(x)=a_{0}+\sum _{n=1}^{N}a_{n}\cos(nx)+\sum _{n=1}^{N}b_{n}\sin(nx).} The trigonometric series can be defined similarly analogous to the trigonometric polynomial, its infinite inversion. Let A n {\displaystyle A_{n}} and B n {\displaystyle B_{n}} be any coefficients, then the trigonometric series can be defined as: [ 32 ] 1 2 A 0 + ∑ n = 1 ∞ A n cos ⁡ ( n x ) + B n sin ⁡ ( n x ) . 
{\displaystyle {\frac {1}{2}}A_{0}+\sum _{n=1}^{\infty }A_{n}\cos(nx)+B_{n}\sin(nx).} In the case of a Fourier series with a given integrable function f {\displaystyle f} , the coefficients of a trigonometric series are: [ 33 ] A n = 1 π ∫ 0 2 π f ( x ) cos ⁡ ( n x ) d x , B n = 1 π ∫ 0 2 π f ( x ) sin ⁡ ( n x ) d x . {\displaystyle {\begin{aligned}A_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\cos(nx)\,dx,\\B_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\sin(nx)\,dx.\end{aligned}}} Both sine and cosine can be extended further via complex number , a set of numbers composed of both real and imaginary numbers . For real number θ {\displaystyle \theta } , the definition of both sine and cosine functions can be extended in a complex plane in terms of an exponential function as follows: [ 34 ] sin ⁡ ( θ ) = e i θ − e − i θ 2 i , cos ⁡ ( θ ) = e i θ + e − i θ 2 , {\displaystyle {\begin{aligned}\sin(\theta )&={\frac {e^{i\theta }-e^{-i\theta }}{2i}},\\\cos(\theta )&={\frac {e^{i\theta }+e^{-i\theta }}{2}},\end{aligned}}} Alternatively, both functions can be defined in terms of Euler's formula : [ 34 ] e i θ = cos ⁡ ( θ ) + i sin ⁡ ( θ ) , e − i θ = cos ⁡ ( θ ) − i sin ⁡ ( θ ) . {\displaystyle {\begin{aligned}e^{i\theta }&=\cos(\theta )+i\sin(\theta ),\\e^{-i\theta }&=\cos(\theta )-i\sin(\theta ).\end{aligned}}} When plotted on the complex plane , the function e i x {\displaystyle e^{ix}} for real values of x {\displaystyle x} traces out the unit circle in the complex plane. Both sine and cosine functions may be simplified to the imaginary and real parts of e i θ {\displaystyle e^{i\theta }} as: [ 35 ] sin ⁡ θ = Im ⁡ ( e i θ ) , cos ⁡ θ = Re ⁡ ( e i θ ) . 
{\displaystyle {\begin{aligned}\sin \theta &=\operatorname {Im} (e^{i\theta }),\\\cos \theta &=\operatorname {Re} (e^{i\theta }).\end{aligned}}} When z = x + i y {\displaystyle z=x+iy} for real values x {\displaystyle x} and y {\displaystyle y} , where i = − 1 {\displaystyle i={\sqrt {-1}}} , both sine and cosine functions can be expressed in terms of real sines, cosines, and hyperbolic functions as: [ citation needed ] sin ⁡ z = sin ⁡ x cosh ⁡ y + i cos ⁡ x sinh ⁡ y , cos ⁡ z = cos ⁡ x cosh ⁡ y − i sin ⁡ x sinh ⁡ y . {\displaystyle {\begin{aligned}\sin z&=\sin x\cosh y+i\cos x\sinh y,\\\cos z&=\cos x\cosh y-i\sin x\sinh y.\end{aligned}}} Sine and cosine are used to connect the real and imaginary parts of a complex number with its polar coordinates ( r , θ ) {\displaystyle (r,\theta )} : z = r ( cos ⁡ ( θ ) + i sin ⁡ ( θ ) ) , {\displaystyle z=r(\cos(\theta )+i\sin(\theta )),} and the real and imaginary parts are Re ⁡ ( z ) = r cos ⁡ ( θ ) , Im ⁡ ( z ) = r sin ⁡ ( θ ) , {\displaystyle {\begin{aligned}\operatorname {Re} (z)&=r\cos(\theta ),\\\operatorname {Im} (z)&=r\sin(\theta ),\end{aligned}}} where r {\displaystyle r} and θ {\displaystyle \theta } represent the magnitude and angle of the complex number z {\displaystyle z} . For any real number θ {\displaystyle \theta } , Euler's formula in terms of polar coordinates is stated as z = r e i θ {\textstyle z=re^{i\theta }} . Applying the series definition of the sine and cosine to a complex argument, z , gives sin ⁡ z = − i sinh ⁡ ( i z ) {\displaystyle \sin z=-i\sinh(iz)} and cos ⁡ z = cosh ⁡ ( i z ) {\displaystyle \cos z=\cosh(iz)} , where sinh and cosh are the hyperbolic sine and cosine . These are entire functions . 
It is also sometimes useful to express the complex sine and cosine functions in terms of the real and imaginary parts of their argument, as in the expressions above. Using the partial fraction expansion technique in complex analysis , one can find that the infinite series ∑ n = − ∞ ∞ ( − 1 ) n z − n = 1 z − 2 z ∑ n = 1 ∞ ( − 1 ) n n 2 − z 2 {\displaystyle \sum _{n=-\infty }^{\infty }{\frac {(-1)^{n}}{z-n}}={\frac {1}{z}}-2z\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{n^{2}-z^{2}}}} both converge and are equal to π sin ⁡ ( π z ) {\textstyle {\frac {\pi }{\sin(\pi z)}}} . Similarly, one can show that π 2 sin 2 ⁡ ( π z ) = ∑ n = − ∞ ∞ 1 ( z − n ) 2 . {\displaystyle {\frac {\pi ^{2}}{\sin ^{2}(\pi z)}}=\sum _{n=-\infty }^{\infty }{\frac {1}{(z-n)^{2}}}.} Using the product expansion technique, one can derive sin ⁡ ( π z ) = π z ∏ n = 1 ∞ ( 1 − z 2 n 2 ) . {\displaystyle \sin(\pi z)=\pi z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}}}\right).} sin( z ) is found in the functional equation for the Gamma function , Γ ( s ) Γ ( 1 − s ) = π sin ⁡ ( π s ) , {\displaystyle \Gamma (s)\Gamma (1-s)={\frac {\pi }{\sin(\pi s)}},} which in turn is found in the functional equation for the Riemann zeta-function . As a holomorphic function , sin z is a 2D solution of Laplace's equation , Δ u ( x , y ) = 0 {\displaystyle \Delta u(x,y)=0} . The complex sine function is also related to the level curves of pendulums . [ how? ] [ 36 ] [ better source needed ] The word sine is derived, indirectly, from the Sanskrit word jyā 'bow-string' or more specifically its synonym jīvá (both adopted from Ancient Greek χορδή 'string; chord'), due to visual similarity between the arc of a circle with its corresponding chord and a bow with its string (see jyā, koti-jyā and utkrama-jyā ; sine and chord are closely related in a circle of unit diameter, see Ptolemy’s Theorem ). This was transliterated in Arabic as jība , which is meaningless in that language and written as jb ( جب ). Since Arabic is written without short vowels, jb was interpreted as the homograph jayb ( جيب ), which means 'bosom', 'pocket', or 'fold'. 
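The Euler product expansion of the sine converges slowly but can be verified numerically. Below is a minimal Python sketch (the function name `sin_pi_z_product` is illustrative) that evaluates a truncated product and compares it to `math.sin`:

```python
import math

def sin_pi_z_product(z: float, terms: int = 10000) -> float:
    # Partial product of sin(pi z) = pi z * prod_{n=1}^inf (1 - z^2 / n^2).
    p = math.pi * z
    for n in range(1, terms + 1):
        p *= 1.0 - (z * z) / (n * n)
    return p

z = 0.3
approx = sin_pi_z_product(z)
exact = math.sin(math.pi * z)
# The truncation error after N factors is roughly of order z^2 / N.
assert abs(approx - exact) < 1e-4
```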
[ 37 ] [ 38 ] When the Arabic texts of Al-Battani and al-Khwārizmī were translated into Medieval Latin in the 12th century by Gerard of Cremona , he used the Latin equivalent sinus (which also means 'bay' or 'fold', and more specifically 'the hanging fold of a toga over the breast'). [ 39 ] [ 40 ] [ 41 ] Gerard was probably not the first scholar to use this translation; Robert of Chester appears to have preceded him and there is evidence of even earlier usage. [ 42 ] [ 43 ] The English form sine was introduced in Thomas Fale 's 1593 Horologiographia . [ 44 ] The word cosine derives from an abbreviation of the Latin complementi sinus 'sine of the complementary angle ' as cosinus in Edmund Gunter 's Canon triangulorum (1620), which also includes a similar definition of cotangens . [ 45 ] While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was discovered by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). [ 46 ] The sine and cosine functions are closely related to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period ( Aryabhatiya and Surya Siddhanta ), via translation from Sanskrit to Arabic and then from Arabic to Latin. [ 39 ] [ 47 ] All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines , used in solving triangles . [ 48 ] Al-Khwārizmī (c. 780–850) produced tables of sines, cosines and tangents. [ 49 ] [ 50 ] Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. [ 50 ] In the early 17th century, the French mathematician Albert Girard published the first use of the abbreviations sin , cos , and tan ; these were further promulgated by Euler (see below). 
The Opus palatinum de triangulis of Georg Joachim Rheticus , a student of Copernicus , was probably the first in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596. In a paper published in 1682, Leibniz proved that sin x is not an algebraic function of x . [ 51 ] Roger Cotes computed the derivative of sine in his Harmonia Mensurarum (1722). [ 52 ] Leonhard Euler 's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, also defining them as infinite series and presenting " Euler's formula ", as well as the near-modern abbreviations sin. , cos. , tang. , cot. , sec. , and cosec. [ 39 ] There is no standard algorithm for calculating sine and cosine. IEEE 754 , the most widely used standard for the specification of reliable floating-point computation, does not address calculating trigonometric functions such as sine. The reason is that no efficient algorithm is known for computing sine and cosine with a specified accuracy, especially for large inputs. [ 53 ] Algorithms for calculating sine may be balanced for such constraints as speed, accuracy, portability, or range of input values accepted. This can lead to different results for different algorithms, especially for special circumstances such as very large inputs, e.g. sin(10²²) . A common programming optimization, used especially in 3D graphics, is to pre-calculate a table of sine values, for example one value per degree, then for values in-between pick the closest pre-calculated value, or linearly interpolate between the two closest values to approximate it. This allows results to be looked up from a table rather than being calculated in real time. With modern CPU architectures this method may offer no advantage. 
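The lookup-table optimization described above can be sketched as follows. This is a minimal Python illustration (the names `SINE_TABLE` and `table_sin` are invented for the sketch), precomputing one value per degree and linearly interpolating between the two nearest entries:

```python
import math

# Precompute one sine value per degree; the extra entry at 360 degrees
# makes interpolation at the upper edge safe.
SINE_TABLE = [math.sin(math.radians(d)) for d in range(361)]

def table_sin(degrees: float) -> float:
    """Approximate sine by linear interpolation in a per-degree lookup table."""
    d = degrees % 360.0
    i = int(d)          # lower table index
    frac = d - i        # fractional position between table entries, in [0, 1)
    return SINE_TABLE[i] + frac * (SINE_TABLE[i + 1] - SINE_TABLE[i])

# Linear interpolation at 1-degree spacing is accurate to roughly 4e-5.
assert abs(table_sin(123.4) - math.sin(math.radians(123.4))) < 1e-4
```

The interpolation error is bounded by h²/8 times the maximum of |sin″|, where h is the table spacing in radians; at one-degree spacing that is about 4 × 10⁻⁵, which explains the remark that the method trades accuracy for speed.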
[ citation needed ] The CORDIC algorithm is commonly used in scientific calculators. The sine and cosine functions, along with other trigonometric functions, are widely available across programming languages and platforms. In computing, they are typically abbreviated to sin and cos . Some CPU architectures have a built-in instruction for sine, including the Intel x87 FPUs since the 80387. In programming languages, sin and cos are typically either a built-in function or found within the language's standard math library. For example, the C standard library defines sine functions within math.h : sin( double ) , sinf( float ) , and sinl( long double ) . The parameter of each is a floating point value, specifying the angle in radians. Each function returns the same data type as it accepts. Many other trigonometric functions are also defined in math.h , such as for cosine, arc sine, and hyperbolic sine (sinh). Similarly, Python defines math.sin(x) and math.cos(x) within the built-in math module. Complex sine and cosine functions are also available within the cmath module, e.g. cmath.sin(z) . CPython 's math functions call the C math library, and use a double-precision floating-point format . Some software libraries provide implementations of sine and cosine using the input angle in half- turns , a half-turn being an angle of 180 degrees or π {\displaystyle \pi } radians. Representing angles in turns or half-turns has accuracy advantages and efficiency advantages in some cases. [ 54 ] [ 55 ] These functions are called sinpi and cospi in MATLAB, [ 54 ] OpenCL, [ 56 ] R, [ 55 ] Julia, [ 57 ] CUDA, [ 58 ] and ARM. [ 59 ] For example, sinpi(x) would evaluate to sin ⁡ ( π x ) , {\displaystyle \sin(\pi x),} where x is expressed in half-turns, and consequently the final input to the function, πx can be interpreted in radians by sin . 
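The half-turn convention can be sketched in a few lines of Python. This is not the library `sinpi` itself, only a minimal model of its accuracy argument: the reduction modulo 2 half-turns is exact in binary floating point, so whole numbers of half-turns map to an exact zero, whereas `math.sin(math.pi)` cannot be exactly zero because `math.pi` is only an approximation of π.

```python
import math

def sinpi(x: float) -> float:
    """sin(pi * x), with x measured in half-turns (a sketch, not the library version)."""
    r = math.fmod(x, 2.0)      # exact: fmod of floats introduces no rounding error
    if r == math.floor(r):     # whole half-turns lie exactly on a zero of sine
        return 0.0
    return math.sin(math.pi * r)

assert sinpi(1.0) == 0.0       # one half-turn: exactly zero
assert sinpi(1e9) == 0.0       # huge arguments reduce losslessly
assert math.sin(math.pi) != 0.0  # radians version misses the zero by ~1.2e-16
```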
The accuracy advantage stems from the ability to perfectly represent key angles like full-turn, half-turn, and quarter-turn losslessly in binary floating-point or fixed-point. In contrast, representing 2 π {\displaystyle 2\pi } , π {\displaystyle \pi } , and π 2 {\textstyle {\frac {\pi }{2}}} in binary floating-point or binary scaled fixed-point always involves a loss of accuracy since irrational numbers cannot be represented with finitely many binary digits. Turns also have an accuracy advantage and efficiency advantage for computing modulo to one period. Reduction modulo 1 turn or modulo 2 half-turns can be computed losslessly and efficiently in both floating-point and fixed-point. For example, computing modulo 1 or modulo 2 for a binary point scaled fixed-point value requires only a bit shift or bitwise AND operation. In contrast, computing modulo π 2 {\textstyle {\frac {\pi }{2}}} involves inaccuracies in representing π 2 {\textstyle {\frac {\pi }{2}}} . For applications involving angle sensors, the sensor typically provides angle measurements in a form directly compatible with turns or half-turns. For example, an angle sensor may count from 0 to 4096 over one complete revolution. [ 60 ] If half-turns are used as the unit for angle, then the value provided by the sensor directly and losslessly maps to a fixed-point data type with 11 bits to the right of the binary point. In contrast, if radians are used as the unit for storing the angle, then the inaccuracies and cost of multiplying the raw sensor integer by an approximation to π 2048 {\textstyle {\frac {\pi }{2048}}} would be incurred.
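The fixed-point reduction described above can be sketched concretely. The following Python fragment assumes a hypothetical 12-bit sensor with 2048 counts per half-turn (4096 per revolution, matching the example in the text); the names `wrap_revolution` and `sensor_sin` are invented for the sketch. Because 4096 is a power of two, reduction modulo one revolution is a single bitwise AND:

```python
import math

SCALE = 2048  # counts per half-turn (hypothetical 12-bit sensor)

def wrap_revolution(counts: int) -> int:
    """Reduce an accumulated count modulo one full revolution (2 half-turns).

    Lossless and cheap: 2 * SCALE is a power of two, so this is a bitwise AND.
    """
    return counts & (2 * SCALE - 1)

def sensor_sin(counts: int) -> float:
    """Sine of a raw sensor count, converting to radians only at the final step."""
    return math.sin(math.pi * wrap_revolution(counts) / SCALE)

# 10000 counts wraps to 10000 - 2 * 4096 = 1808 counts.
assert wrap_revolution(10000) == 1808
```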
https://en.wikipedia.org/wiki/Sinus_rectus_arcus
In mathematics , sine and cosine are trigonometric functions of an angle . The sine and cosine of an acute angle are defined in the context of a right triangle : for the specified angle, its sine is the ratio of the length of the side opposite that angle to the length of the longest side of the triangle (the hypotenuse ), and the cosine is the ratio of the length of the adjacent leg to that of the hypotenuse . For an angle θ {\displaystyle \theta } , the sine and cosine functions are denoted as sin ⁡ ( θ ) {\displaystyle \sin(\theta )} and cos ⁡ ( θ ) {\displaystyle \cos(\theta )} . The definitions of sine and cosine have been extended to any real value in terms of the lengths of certain line segments in a unit circle . More modern definitions express the sine and cosine as infinite series , or as the solutions of certain differential equations , allowing their extension to arbitrary positive and negative values and even to complex numbers . The sine and cosine functions are commonly used to model periodic phenomena such as sound and light waves , the position and velocity of harmonic oscillators, sunlight intensity and day length, and average temperature variations throughout the year. They can be traced to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period . To define the sine and cosine of an acute angle α {\displaystyle \alpha } , start with a right triangle that contains an angle of measure α {\displaystyle \alpha } ; in the accompanying figure, angle α {\displaystyle \alpha } in a right triangle A B C {\displaystyle ABC} is the angle of interest. The three sides of the triangle are named as follows: the opposite side is the side opposite the angle of interest; the hypotenuse is the side opposite the right angle, and it is the longest side of the triangle; the adjacent side is the remaining side, adjacent to both the angle of interest and the right angle. [ 1 ] Once such a triangle is chosen, the sine of the angle is equal to the length of the opposite side divided by the length of the hypotenuse, and the cosine of the angle is equal to the length of the adjacent side divided by the length of the hypotenuse: [ 1 ] sin ⁡ ( α ) = opposite hypotenuse , cos ⁡ ( α ) = adjacent hypotenuse . 
{\displaystyle \sin(\alpha )={\frac {\text{opposite}}{\text{hypotenuse}}},\qquad \cos(\alpha )={\frac {\text{adjacent}}{\text{hypotenuse}}}.} The other trigonometric functions of the angle can be defined similarly; for example, the tangent is the ratio between the opposite and adjacent sides or equivalently the ratio between the sine and cosine functions. The reciprocal of sine is cosecant, which gives the ratio of the hypotenuse length to the length of the opposite side. Similarly, the reciprocal of cosine is secant, which gives the ratio of the hypotenuse length to that of the adjacent side. The cotangent function is the ratio between the adjacent and opposite sides, the reciprocal of the tangent function. These functions can be formulated as: [ 1 ] tan ⁡ ( θ ) = sin ⁡ ( θ ) cos ⁡ ( θ ) = opposite adjacent , cot ⁡ ( θ ) = 1 tan ⁡ ( θ ) = adjacent opposite , csc ⁡ ( θ ) = 1 sin ⁡ ( θ ) = hypotenuse opposite , sec ⁡ ( θ ) = 1 cos ⁡ ( θ ) = hypotenuse adjacent . {\displaystyle {\begin{aligned}\tan(\theta )&={\frac {\sin(\theta )}{\cos(\theta )}}={\frac {\text{opposite}}{\text{adjacent}}},\\\cot(\theta )&={\frac {1}{\tan(\theta )}}={\frac {\text{adjacent}}{\text{opposite}}},\\\csc(\theta )&={\frac {1}{\sin(\theta )}}={\frac {\text{hypotenuse}}{\text{opposite}}},\\\sec(\theta )&={\frac {1}{\cos(\theta )}}={\frac {\textrm {hypotenuse}}{\textrm {adjacent}}}.\end{aligned}}} As stated, the values sin ⁡ ( α ) {\displaystyle \sin(\alpha )} and cos ⁡ ( α ) {\displaystyle \cos(\alpha )} appear to depend on the choice of a right triangle containing an angle of measure α {\displaystyle \alpha } . However, this is not the case as all such triangles are similar , and so the ratios are the same for each of them. For example, each leg of the 45-45-90 right triangle is 1 unit, and its hypotenuse is 2 {\displaystyle {\sqrt {2}}} ; therefore, sin ⁡ 45 ∘ = cos ⁡ 45 ∘ = 2 2 {\textstyle \sin 45^{\circ }=\cos 45^{\circ }={\frac {\sqrt {2}}{2}}} . 
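The ratio definitions above can be checked on a concrete triangle. The following Python sketch uses a 3-4-5 right triangle (chosen for illustration) and confirms that the side ratios agree with `math.sin` and `math.cos` of the angle itself:

```python
import math

# A 3-4-5 right triangle: for the angle opposite the side of length 3,
# opposite = 3, adjacent = 4, hypotenuse = 5.
opposite, adjacent, hypotenuse = 3.0, 4.0, 5.0

sin_a = opposite / hypotenuse   # 0.6
cos_a = adjacent / hypotenuse   # 0.8
tan_a = sin_a / cos_a           # 0.75

# Recover the angle and compare the ratios with the trigonometric functions.
alpha = math.atan2(opposite, adjacent)
assert abs(math.sin(alpha) - sin_a) < 1e-12
assert abs(math.cos(alpha) - cos_a) < 1e-12
assert abs(math.tan(alpha) - tan_a) < 1e-12
```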
[ 2 ] The following table shows the special values of both sine and cosine for angles in the domain 0 < α < π 2 {\textstyle 0<\alpha <{\frac {\pi }{2}}} . The inputs in this table are given in various angular units, such as degrees and radians. The angles other than those five can be obtained by using a calculator. [ 3 ] [ 4 ] The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. [ 5 ] Given a triangle A B C {\displaystyle ABC} with sides a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} , and angles opposite those sides α {\displaystyle \alpha } , β {\displaystyle \beta } , and γ {\displaystyle \gamma } , the law states, sin ⁡ α a = sin ⁡ β b = sin ⁡ γ c . {\displaystyle {\frac {\sin \alpha }{a}}={\frac {\sin \beta }{b}}={\frac {\sin \gamma }{c}}.} This is equivalent to the equality of the first three expressions below: a sin ⁡ α = b sin ⁡ β = c sin ⁡ γ = 2 R , {\displaystyle {\frac {a}{\sin \alpha }}={\frac {b}{\sin \beta }}={\frac {c}{\sin \gamma }}=2R,} where R {\displaystyle R} is the triangle's circumradius . The law of cosines is useful for computing the length of an unknown side if two other sides and an angle are known. [ 5 ] The law states, a 2 + b 2 − 2 a b cos ⁡ ( γ ) = c 2 {\displaystyle a^{2}+b^{2}-2ab\cos(\gamma )=c^{2}} In the case where γ = π / 2 {\displaystyle \gamma =\pi /2} , for which cos ⁡ ( γ ) = 0 {\displaystyle \cos(\gamma )=0} , the resulting equation becomes the Pythagorean theorem . [ 6 ] The cross product and dot product are operations on two vectors in Euclidean vector space . The sine and cosine functions can be defined in terms of the cross product and dot product. 
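The law of cosines lends itself to a short worked example. The Python sketch below (the helper name `third_side` is invented for illustration) solves for the unknown side and checks the special case noted in the text, where a right angle reduces the law to the Pythagorean theorem:

```python
import math

def third_side(a: float, b: float, gamma: float) -> float:
    """Law of cosines: length of the side opposite angle gamma (in radians)."""
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))

# With gamma = pi/2 the cosine term vanishes and only a^2 + b^2 = c^2 remains.
assert abs(third_side(3.0, 4.0, math.pi / 2) - 5.0) < 1e-12

# An equilateral configuration: sides 1 and 1 enclosing 60 degrees give a third side of 1.
assert abs(third_side(1.0, 1.0, math.pi / 3) - 1.0) < 1e-12
```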
If a {\displaystyle \mathbb {a} } and b {\displaystyle \mathbb {b} } are vectors, and θ {\displaystyle \theta } is the angle between a {\displaystyle \mathbb {a} } and b {\displaystyle \mathbb {b} } , then sine and cosine can be defined as: sin ⁡ ( θ ) = | a × b | | a | | b | , cos ⁡ ( θ ) = a ⋅ b | a | | b | . {\displaystyle {\begin{aligned}\sin(\theta )&={\frac {|\mathbb {a} \times \mathbb {b} |}{|a||b|}},\\\cos(\theta )&={\frac {\mathbb {a} \cdot \mathbb {b} }{|a||b|}}.\end{aligned}}} The sine and cosine functions may also be defined in a more general way by using the unit circle , a circle of radius one centered at the origin ( 0 , 0 ) {\displaystyle (0,0)} , formulated as the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} in the Cartesian coordinate system . Let a line through the origin intersect the unit circle, making an angle of θ {\displaystyle \theta } with the positive half of the x {\displaystyle x} - axis. The x {\displaystyle x} - and y {\displaystyle y} - coordinates of this point of intersection are equal to cos ⁡ ( θ ) {\displaystyle \cos(\theta )} and sin ⁡ ( θ ) {\displaystyle \sin(\theta )} , respectively; that is, [ 7 ] sin ⁡ ( θ ) = y , cos ⁡ ( θ ) = x . {\displaystyle \sin(\theta )=y,\qquad \cos(\theta )=x.} This definition is consistent with the right-angled triangle definition of sine and cosine when 0 < θ < π 2 {\textstyle 0<\theta <{\frac {\pi }{2}}} because the length of the hypotenuse of the unit circle is always 1; mathematically speaking, the sine of an angle equals the length of the opposite side of the triangle, which is simply the y {\displaystyle y} - coordinate. A similar argument can be made for the cosine function to show that the cosine of an angle equals the x {\displaystyle x} - coordinate when 0 < θ < π 2 {\textstyle 0<\theta <{\frac {\pi }{2}}} , even under the new definition using the unit circle. [ 8 ] [ 9 ] Using the unit circle definition has the advantage that it allows drawing the graphs of the sine and cosine functions. 
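The cross- and dot-product definitions can be illustrated in two dimensions, where the cross product reduces to a scalar. This minimal Python sketch (the helper name `angle_sin_cos` is invented) recovers the sine and cosine of the angle between two vectors:

```python
import math

def angle_sin_cos(a, b):
    """Sine and cosine of the angle between 2D vectors a and b,
    via the (scalar) cross product and the dot product."""
    cross = a[0] * b[1] - a[1] * b[0]            # z-component of a x b
    dot = a[0] * b[0] + a[1] * b[1]
    norm = math.hypot(*a) * math.hypot(*b)       # |a| |b|
    return abs(cross) / norm, dot / norm

# The vectors (1, 0) and (1, 1) are 45 degrees apart.
s, c = angle_sin_cos((1.0, 0.0), (1.0, 1.0))
assert abs(s - math.sin(math.pi / 4)) < 1e-12
assert abs(c - math.cos(math.pi / 4)) < 1e-12
```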
This can be done by rotating a point counterclockwise along the circumference of the circle, through an angle given by the input θ > 0 {\displaystyle \theta >0} . For the sine function, if the input is θ = π 2 {\textstyle \theta ={\frac {\pi }{2}}} , the point has rotated counterclockwise to lie exactly on the y {\displaystyle y} - axis. If θ = π {\displaystyle \theta =\pi } , the point is halfway around the circle. If θ = 2 π {\displaystyle \theta =2\pi } , the point has returned to its starting position. It follows that both sine and cosine functions have the range − 1 ≤ y ≤ 1 {\displaystyle -1\leq y\leq 1} . [ 10 ] Extending the angle to the whole real domain, the point rotates counterclockwise continuously. The same construction works for the cosine function, although its value is read from the x {\displaystyle x} - coordinate instead. In other words, both sine and cosine functions are periodic : adding a full revolution to any angle leaves their values unchanged. Mathematically, [ 11 ] sin ⁡ ( θ + 2 π ) = sin ⁡ ( θ ) , cos ⁡ ( θ + 2 π ) = cos ⁡ ( θ ) . {\displaystyle \sin(\theta +2\pi )=\sin(\theta ),\qquad \cos(\theta +2\pi )=\cos(\theta ).} A function f {\displaystyle f} is said to be odd if f ( − x ) = − f ( x ) {\displaystyle f(-x)=-f(x)} , and is said to be even if f ( − x ) = f ( x ) {\displaystyle f(-x)=f(x)} . The sine function is odd, whereas the cosine function is even. [ 12 ] The sine and cosine functions are shifted copies of each other, offset by π 2 {\textstyle {\frac {\pi }{2}}} . This means, [ 13 ] sin ⁡ ( θ ) = cos ⁡ ( π 2 − θ ) , cos ⁡ ( θ ) = sin ⁡ ( π 2 − θ ) . {\displaystyle {\begin{aligned}\sin(\theta )&=\cos \left({\frac {\pi }{2}}-\theta \right),\\\cos(\theta )&=\sin \left({\frac {\pi }{2}}-\theta \right).\end{aligned}}} Zero is the only real fixed point of the sine function; in other words, the only intersection of the sine function and the identity function is sin ⁡ ( 0 ) = 0 {\displaystyle \sin(0)=0} . 
The only real fixed point of the cosine function is called the Dottie number . The Dottie number is the unique real root of the equation cos ⁡ ( x ) = x {\displaystyle \cos(x)=x} . The decimal expansion of the Dottie number is approximately 0.739085. [ 14 ] The sine and cosine functions are infinitely differentiable. [ 15 ] The derivative of sine is cosine, and the derivative of cosine is negative sine: [ 16 ] d d x sin ⁡ ( x ) = cos ⁡ ( x ) , d d x cos ⁡ ( x ) = − sin ⁡ ( x ) . {\displaystyle {\frac {d}{dx}}\sin(x)=\cos(x),\qquad {\frac {d}{dx}}\cos(x)=-\sin(x).} Continuing the process to higher-order derivatives cycles through the same four functions; in particular, the fourth derivative of sine is sine itself. [ 15 ] These derivatives can be applied in the first derivative test , according to which a function is monotonically increasing or decreasing on an interval depending on the sign of its first derivative there. [ 17 ] They can also be applied in the second derivative test , according to which the concavity of a function is determined by the sign of its second derivative. [ 18 ] The following table shows the monotonicity and concavity of both sine and cosine functions in certain intervals—the positive sign ( + {\displaystyle +} ) denotes that a graph is increasing (going upward) and the negative sign ( − {\displaystyle -} ) that it is decreasing (going downward). [ 19 ] This information can be represented as a Cartesian coordinate system divided into four quadrants. Both sine and cosine functions can be defined by using differential equations. 
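The Dottie number can be computed by the simplest possible means: repeatedly applying the cosine. Since |sin(x)| < 1 near the fixed point, the iteration x → cos(x) is a contraction there and converges from any real starting value. A minimal Python sketch:

```python
import math

# Fixed-point iteration x -> cos(x) converges to the Dottie number,
# the unique real solution of cos(x) = x.
x = 1.0
for _ in range(100):
    x = math.cos(x)

assert abs(math.cos(x) - x) < 1e-12   # x is (numerically) a fixed point
assert abs(x - 0.739085) < 1e-6       # matches the decimal expansion above
```

Each step shrinks the error by roughly the factor |sin(0.739…)| ≈ 0.674, so 100 iterations are far more than enough for double precision.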
The pair of ( cos ⁡ θ , sin ⁡ θ ) {\displaystyle (\cos \theta ,\sin \theta )} is the solution ( x ( θ ) , y ( θ ) ) {\displaystyle (x(\theta ),y(\theta ))} to the two-dimensional system of differential equations y ′ ( θ ) = x ( θ ) {\displaystyle y'(\theta )=x(\theta )} and x ′ ( θ ) = − y ( θ ) {\displaystyle x'(\theta )=-y(\theta )} with the initial conditions y ( 0 ) = 0 {\displaystyle y(0)=0} and x ( 0 ) = 1 {\displaystyle x(0)=1} . One could interpret the unit circle in the above definitions as the phase space trajectory of this system of differential equations with the given initial conditions. [ citation needed ] The area under their curves over a bounded interval can be obtained using the definite integral. Their antiderivatives are: ∫ sin ⁡ ( x ) d x = − cos ⁡ ( x ) + C ∫ cos ⁡ ( x ) d x = sin ⁡ ( x ) + C , {\displaystyle \int \sin(x)\,dx=-\cos(x)+C\qquad \int \cos(x)\,dx=\sin(x)+C,} where C {\displaystyle C} denotes the constant of integration . [ 20 ] These antiderivatives may be applied to compute mensuration properties of the sine and cosine curves over a given interval. For example, the arc length of the sine curve between 0 {\displaystyle 0} and t {\displaystyle t} is ∫ 0 t 1 + cos 2 ⁡ ( x ) d x = 2 E ⁡ ( t , 1 2 ) , {\displaystyle \int _{0}^{t}\!{\sqrt {1+\cos ^{2}(x)}}\,dx={\sqrt {2}}\operatorname {E} \left(t,{\frac {1}{\sqrt {2}}}\right),} where E ⁡ ( φ , k ) {\displaystyle \operatorname {E} (\varphi ,k)} is the incomplete elliptic integral of the second kind with modulus k {\displaystyle k} . It cannot be expressed using elementary functions . 
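The differential-equation characterization can be demonstrated by numerical integration. The Python sketch below (a sketch only; the name `rk4_circle` is invented) integrates y′ = x, x′ = −y from (x, y) = (1, 0) with the classical fourth-order Runge–Kutta method and checks that the trajectory tracks (cos θ, sin θ):

```python
import math

def rk4_circle(theta_end: float, steps: int = 1000):
    """Integrate x' = -y, y' = x from (1, 0); the solution is (cos t, sin t)."""
    h = theta_end / steps
    x, y = 1.0, 0.0
    f = lambda x, y: (-y, x)          # right-hand side (x', y')
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
        k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
        k4 = f(x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

x, y = rk4_circle(2.0)
assert abs(x - math.cos(2.0)) < 1e-9
assert abs(y - math.sin(2.0)) < 1e-9
```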
[ 21 ] In the case of a full period, its arc length is L = 4 2 π 3 Γ ( 1 / 4 ) 2 + Γ ( 1 / 4 ) 2 2 π = 2 π ϖ + 2 ϖ ≈ 7.6404 … {\displaystyle L={\frac {4{\sqrt {2\pi ^{3}}}}{\Gamma (1/4)^{2}}}+{\frac {\Gamma (1/4)^{2}}{\sqrt {2\pi }}}={\frac {2\pi }{\varpi }}+2\varpi \approx 7.6404\ldots } where Γ {\displaystyle \Gamma } is the gamma function and ϖ {\displaystyle \varpi } is the lemniscate constant . [ 22 ] The inverse function of sine is arcsine or inverse sine, denoted as "arcsin", "asin", or sin − 1 {\displaystyle \sin ^{-1}} . [ 23 ] The inverse function of cosine is arccosine, denoted as "arccos", "acos", or cos − 1 {\displaystyle \cos ^{-1}} . [ a ] As sine and cosine are not injective , their inverses are not exact inverse functions, but partial inverse functions. For example, sin ⁡ ( 0 ) = 0 {\displaystyle \sin(0)=0} , but also sin ⁡ ( π ) = 0 {\displaystyle \sin(\pi )=0} , sin ⁡ ( 2 π ) = 0 {\displaystyle \sin(2\pi )=0} , and so on. It follows that the arcsine function is multivalued: arcsin ⁡ ( 0 ) = 0 {\displaystyle \arcsin(0)=0} , but also arcsin ⁡ ( 0 ) = π {\displaystyle \arcsin(0)=\pi } , arcsin ⁡ ( 0 ) = 2 π {\displaystyle \arcsin(0)=2\pi } , and so on. When only one value is desired, the function may be restricted to its principal branch . With this restriction, for each x {\displaystyle x} in the domain, the expression arcsin ⁡ ( x ) {\displaystyle \arcsin(x)} will evaluate only to a single value, called its principal value . The standard range of principal values for arcsin is from − π 2 {\textstyle -{\frac {\pi }{2}}} to π 2 {\textstyle {\frac {\pi }{2}}} , and the standard range for arccos is from 0 {\displaystyle 0} to π {\displaystyle \pi } . 
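The effect of restricting to a principal branch is easy to observe in practice: `math.asin` returns values in [−π/2, π/2], so it only undoes the sine on that interval. A short Python illustration:

```python
import math

# Inside the principal branch, asin inverts sin exactly (up to rounding).
theta = 0.4
assert abs(math.asin(math.sin(theta)) - theta) < 1e-12

# Outside the branch, asin returns a different angle with the same sine:
# for theta in (pi/2, pi), sin(theta) = sin(pi - theta), and pi - theta
# lies in the principal range, so that is what asin reports.
theta = 3.0
recovered = math.asin(math.sin(theta))
assert abs(recovered - (math.pi - theta)) < 1e-12
```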
[ 24 ] The inverse function of both sine and cosine are defined as: [ citation needed ] θ = arcsin ⁡ ( opposite hypotenuse ) = arccos ⁡ ( adjacent hypotenuse ) , {\displaystyle \theta =\arcsin \left({\frac {\text{opposite}}{\text{hypotenuse}}}\right)=\arccos \left({\frac {\text{adjacent}}{\text{hypotenuse}}}\right),} where for some integer k {\displaystyle k} , sin ⁡ ( y ) = x ⟺ y = arcsin ⁡ ( x ) + 2 π k , or y = π − arcsin ⁡ ( x ) + 2 π k cos ⁡ ( y ) = x ⟺ y = arccos ⁡ ( x ) + 2 π k , or y = − arccos ⁡ ( x ) + 2 π k {\displaystyle {\begin{aligned}\sin(y)=x\iff &y=\arcsin(x)+2\pi k,{\text{ or }}\\&y=\pi -\arcsin(x)+2\pi k\\\cos(y)=x\iff &y=\arccos(x)+2\pi k,{\text{ or }}\\&y=-\arccos(x)+2\pi k\end{aligned}}} By definition, both functions satisfy the equations: [ citation needed ] sin ⁡ ( arcsin ⁡ ( x ) ) = x cos ⁡ ( arccos ⁡ ( x ) ) = x {\displaystyle \sin(\arcsin(x))=x\qquad \cos(\arccos(x))=x} and arcsin ⁡ ( sin ⁡ ( θ ) ) = θ for − π 2 ≤ θ ≤ π 2 arccos ⁡ ( cos ⁡ ( θ ) ) = θ for 0 ≤ θ ≤ π {\displaystyle {\begin{aligned}\arcsin(\sin(\theta ))=\theta \quad &{\text{for}}\quad -{\frac {\pi }{2}}\leq \theta \leq {\frac {\pi }{2}}\\\arccos(\cos(\theta ))=\theta \quad &{\text{for}}\quad 0\leq \theta \leq \pi \end{aligned}}} According to the Pythagorean theorem , the squared hypotenuse is the sum of the two squared legs of a right triangle. Dividing both sides of the formula by the squared hypotenuse results in the Pythagorean trigonometric identity : the sum of the squared sine and the squared cosine equals 1: [ 25 ] [ b ] sin 2 ⁡ ( θ ) + cos 2 ⁡ ( θ ) = 1. 
{\displaystyle \sin ^{2}(\theta )+\cos ^{2}(\theta )=1.} Sine and cosine satisfy the following double-angle formulas: [ 26 ] sin ⁡ ( 2 θ ) = 2 sin ⁡ ( θ ) cos ⁡ ( θ ) , cos ⁡ ( 2 θ ) = cos 2 ⁡ ( θ ) − sin 2 ⁡ ( θ ) = 2 cos 2 ⁡ ( θ ) − 1 = 1 − 2 sin 2 ⁡ ( θ ) {\displaystyle {\begin{aligned}\sin(2\theta )&=2\sin(\theta )\cos(\theta ),\\\cos(2\theta )&=\cos ^{2}(\theta )-\sin ^{2}(\theta )\\&=2\cos ^{2}(\theta )-1\\&=1-2\sin ^{2}(\theta )\end{aligned}}} The cosine double angle formula implies that sin 2 and cos 2 are, themselves, shifted and scaled sine waves. Specifically, [ 27 ] sin 2 ⁡ ( θ ) = 1 − cos ⁡ ( 2 θ ) 2 cos 2 ⁡ ( θ ) = 1 + cos ⁡ ( 2 θ ) 2 {\displaystyle \sin ^{2}(\theta )={\frac {1-\cos(2\theta )}{2}}\qquad \cos ^{2}(\theta )={\frac {1+\cos(2\theta )}{2}}} The graph shows both sine and sine squared functions, with the sine in blue and the sine squared in red. Both graphs have the same shape but with different ranges of values and different periods. Sine squared has only nonnegative values, but twice the number of periods. [ citation needed ] Both sine and cosine functions can be defined by using a Taylor series , a power series involving the higher-order derivatives. As mentioned in § Continuity and differentiation , the derivative of sine is cosine and the derivative of cosine is the negative of sine. This means the successive derivatives of sin ⁡ ( x ) {\displaystyle \sin(x)} are cos ⁡ ( x ) {\displaystyle \cos(x)} , − sin ⁡ ( x ) {\displaystyle -\sin(x)} , − cos ⁡ ( x ) {\displaystyle -\cos(x)} , sin ⁡ ( x ) {\displaystyle \sin(x)} , continuing to repeat those four functions. The ( 4 n + k ) {\displaystyle (4n+k)} - th derivative, evaluated at the point 0, is: sin ( 4 n + k ) ⁡ ( 0 ) = { 0 when k = 0 1 when k = 1 0 when k = 2 − 1 when k = 3 {\displaystyle \sin ^{(4n+k)}(0)={\begin{cases}0&{\text{when }}k=0\\1&{\text{when }}k=1\\0&{\text{when }}k=2\\-1&{\text{when }}k=3\end{cases}}} where the superscript represents repeated differentiation. 
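The derivative pattern above yields the Maclaurin series for sine, which can be summed term by term. A minimal Python sketch (the function name `sin_taylor` is invented) accumulates terms with the recurrence termₙ₊₁ = termₙ · (−x²) / ((2n+2)(2n+3)) and compares partial sums to `math.sin`:

```python
import math

def sin_taylor(x: float, terms: int = 20) -> float:
    """Partial sum of the Maclaurin series sum_n (-1)^n x^(2n+1) / (2n+1)!."""
    total, term = 0.0, x          # term starts as x^1 / 1!
    for n in range(terms):
        total += term
        # Next term: multiply by -x^2 / ((2n+2)(2n+3)).
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

assert abs(sin_taylor(1.0) - math.sin(1.0)) < 1e-12
assert abs(sin_taylor(3.0) - math.sin(3.0)) < 1e-12
```

Twenty terms cover powers up to x³⁹/39!, far below double-precision rounding for moderate arguments; for very large |x| the alternating terms first grow enormously, which is one reason practical libraries reduce the argument before summing any series.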
This implies the following Taylor series expansion at x = 0 {\displaystyle x=0} . One can then use the theory of Taylor series to show that the following identities hold for all real numbers x {\displaystyle x} —where x {\displaystyle x} is the angle in radians. [ 28 ] More generally, for all complex numbers : [ 29 ] sin ⁡ ( x ) = x − x 3 3 ! + x 5 5 ! − x 7 7 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) ! x 2 n + 1 {\displaystyle {\begin{aligned}\sin(x)&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}\end{aligned}}} Taking the derivative of each term gives the Taylor series for cosine: [ 28 ] [ 29 ] cos ⁡ ( x ) = 1 − x 2 2 ! + x 4 4 ! − x 6 6 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! x 2 n {\displaystyle {\begin{aligned}\cos(x)&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}\end{aligned}}} Both sine and cosine functions with multiple angles may appear as their linear combination , resulting in a polynomial. Such a polynomial is known as the trigonometric polynomial . The trigonometric polynomial's ample applications may be acquired in its interpolation , and its extension of a periodic function known as the Fourier series . Let a n {\displaystyle a_{n}} and b n {\displaystyle b_{n}} be any coefficients, then the trigonometric polynomial of a degree N {\displaystyle N} —denoted as T ( x ) {\displaystyle T(x)} —is defined as: [ 30 ] [ 31 ] T ( x ) = a 0 + ∑ n = 1 N a n cos ⁡ ( n x ) + ∑ n = 1 N b n sin ⁡ ( n x ) . {\displaystyle T(x)=a_{0}+\sum _{n=1}^{N}a_{n}\cos(nx)+\sum _{n=1}^{N}b_{n}\sin(nx).} The trigonometric series can be defined similarly analogous to the trigonometric polynomial, its infinite inversion. Let A n {\displaystyle A_{n}} and B n {\displaystyle B_{n}} be any coefficients, then the trigonometric series can be defined as: [ 32 ] 1 2 A 0 + ∑ n = 1 ∞ A n cos ⁡ ( n x ) + B n sin ⁡ ( n x ) . 
{\displaystyle {\frac {1}{2}}A_{0}+\sum _{n=1}^{\infty }A_{n}\cos(nx)+B_{n}\sin(nx).} In the case of a Fourier series with a given integrable function f {\displaystyle f} , the coefficients of a trigonometric series are: [ 33 ] A n = 1 π ∫ 0 2 π f ( x ) cos ⁡ ( n x ) d x , B n = 1 π ∫ 0 2 π f ( x ) sin ⁡ ( n x ) d x . {\displaystyle {\begin{aligned}A_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\cos(nx)\,dx,\\B_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\sin(nx)\,dx.\end{aligned}}} Both sine and cosine can be extended further via complex number , a set of numbers composed of both real and imaginary numbers . For real number θ {\displaystyle \theta } , the definition of both sine and cosine functions can be extended in a complex plane in terms of an exponential function as follows: [ 34 ] sin ⁡ ( θ ) = e i θ − e − i θ 2 i , cos ⁡ ( θ ) = e i θ + e − i θ 2 , {\displaystyle {\begin{aligned}\sin(\theta )&={\frac {e^{i\theta }-e^{-i\theta }}{2i}},\\\cos(\theta )&={\frac {e^{i\theta }+e^{-i\theta }}{2}},\end{aligned}}} Alternatively, both functions can be defined in terms of Euler's formula : [ 34 ] e i θ = cos ⁡ ( θ ) + i sin ⁡ ( θ ) , e − i θ = cos ⁡ ( θ ) − i sin ⁡ ( θ ) . {\displaystyle {\begin{aligned}e^{i\theta }&=\cos(\theta )+i\sin(\theta ),\\e^{-i\theta }&=\cos(\theta )-i\sin(\theta ).\end{aligned}}} When plotted on the complex plane , the function e i x {\displaystyle e^{ix}} for real values of x {\displaystyle x} traces out the unit circle in the complex plane. Both sine and cosine functions may be simplified to the imaginary and real parts of e i θ {\displaystyle e^{i\theta }} as: [ 35 ] sin ⁡ θ = Im ⁡ ( e i θ ) , cos ⁡ θ = Re ⁡ ( e i θ ) . 
{\displaystyle {\begin{aligned}\sin \theta &=\operatorname {Im} (e^{i\theta }),\\\cos \theta &=\operatorname {Re} (e^{i\theta }).\end{aligned}}} When z = x + i y {\displaystyle z=x+iy} for real values x {\displaystyle x} and y {\displaystyle y} , where i = − 1 {\displaystyle i={\sqrt {-1}}} , both sine and cosine functions can be expressed in terms of real sines, cosines, and hyperbolic functions as: [ citation needed ] sin ⁡ z = sin ⁡ x cosh ⁡ y + i cos ⁡ x sinh ⁡ y , cos ⁡ z = cos ⁡ x cosh ⁡ y − i sin ⁡ x sinh ⁡ y . {\displaystyle {\begin{aligned}\sin z&=\sin x\cosh y+i\cos x\sinh y,\\\cos z&=\cos x\cosh y-i\sin x\sinh y.\end{aligned}}} Sine and cosine are used to connect the real and imaginary parts of a complex number with its polar coordinates ( r , θ ) {\displaystyle (r,\theta )} : z = r ( cos ⁡ ( θ ) + i sin ⁡ ( θ ) ) , {\displaystyle z=r(\cos(\theta )+i\sin(\theta )),} and the real and imaginary parts are Re ⁡ ( z ) = r cos ⁡ ( θ ) , Im ⁡ ( z ) = r sin ⁡ ( θ ) , {\displaystyle {\begin{aligned}\operatorname {Re} (z)&=r\cos(\theta ),\\\operatorname {Im} (z)&=r\sin(\theta ),\end{aligned}}} where r {\displaystyle r} and θ {\displaystyle \theta } represent the magnitude and angle of the complex number z {\displaystyle z} . For any real number θ {\displaystyle \theta } , Euler's formula in terms of polar coordinates is stated as z = r e i θ {\textstyle z=re^{i\theta }} . Applying the series definition of the sine and cosine to a complex argument, z , gives: where sinh and cosh are the hyperbolic sine and cosine . These are entire functions . 
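The decomposition of the complex sine and cosine into real trigonometric and hyperbolic parts can be spot-checked numerically. A minimal sketch using Python's standard math and cmath modules (the helper names sin_split and cos_split are illustrative, not library functions):

```python
import cmath
import math

def sin_split(z: complex) -> complex:
    """sin(x + iy) = sin x cosh y + i cos x sinh y, as in the identity above."""
    x, y = z.real, z.imag
    return complex(math.sin(x) * math.cosh(y), math.cos(x) * math.sinh(y))

def cos_split(z: complex) -> complex:
    """cos(x + iy) = cos x cosh y - i sin x sinh y."""
    x, y = z.real, z.imag
    return complex(math.cos(x) * math.cosh(y), -math.sin(x) * math.sinh(y))

z = 0.7 + 1.3j
assert cmath.isclose(sin_split(z), cmath.sin(z))
assert cmath.isclose(cos_split(z), cmath.cos(z))
# Euler's formula holds for complex arguments as well:
assert cmath.isclose(cmath.exp(1j * z), cos_split(z) + 1j * sin_split(z))
```

Note that the imaginary parts grow like sinh y, illustrating that the complex sine and cosine are unbounded off the real axis.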
It is also sometimes useful to express the complex sine and cosine functions in terms of the real and imaginary parts of its argument: Using the partial fraction expansion technique in complex analysis , one can find that the infinite series ∑ n = − ∞ ∞ ( − 1 ) n z − n = 1 z − 2 z ∑ n = 1 ∞ ( − 1 ) n n 2 − z 2 {\displaystyle \sum _{n=-\infty }^{\infty }{\frac {(-1)^{n}}{z-n}}={\frac {1}{z}}-2z\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{n^{2}-z^{2}}}} both converge and are equal to π sin ⁡ ( π z ) {\textstyle {\frac {\pi }{\sin(\pi z)}}} . Similarly, one can show that π 2 sin 2 ⁡ ( π z ) = ∑ n = − ∞ ∞ 1 ( z − n ) 2 . {\displaystyle {\frac {\pi ^{2}}{\sin ^{2}(\pi z)}}=\sum _{n=-\infty }^{\infty }{\frac {1}{(z-n)^{2}}}.} Using product expansion technique, one can derive sin ⁡ ( π z ) = π z ∏ n = 1 ∞ ( 1 − z 2 n 2 ) . {\displaystyle \sin(\pi z)=\pi z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}}}\right).} sin( z ) is found in the functional equation for the Gamma function , which in turn is found in the functional equation for the Riemann zeta-function , As a holomorphic function , sin z is a 2D solution of Laplace's equation : The complex sine function is also related to the level curves of pendulums . [ how? ] [ 36 ] [ better source needed ] The word sine is derived, indirectly, from the Sanskrit word jyā 'bow-string' or more specifically its synonym jīvá (both adopted from Ancient Greek χορδή 'string; chord'), due to visual similarity between the arc of a circle with its corresponding chord and a bow with its string (see jyā, koti-jyā and utkrama-jyā ; sine and chord are closely related in a circle of unit diameter, see Ptolemy’s Theorem ). This was transliterated in Arabic as jība , which is meaningless in that language and written as jb ( جب ). Since Arabic is written without short vowels, jb was interpreted as the homograph jayb ( جيب ), which means 'bosom', 'pocket', or 'fold'. 
[ 37 ] [ 38 ] When the Arabic texts of Al-Battani and al-Khwārizmī were translated into Medieval Latin in the 12th century by Gerard of Cremona , he used the Latin equivalent sinus (which also means 'bay' or 'fold', and more specifically 'the hanging fold of a toga over the breast'). [ 39 ] [ 40 ] [ 41 ] Gerard was probably not the first scholar to use this translation; Robert of Chester appears to have preceded him and there is evidence of even earlier usage. [ 42 ] [ 43 ] The English form sine was introduced in Thomas Fale 's 1593 Horologiographia . [ 44 ] The word cosine derives from an abbreviation of the Latin complementi sinus 'sine of the complementary angle ' as cosinus in Edmund Gunter 's Canon triangulorum (1620), which also includes a similar definition of cotangens . [ 45 ] While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was discovered by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). [ 46 ] The sine and cosine functions are closely related to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period ( Aryabhatiya and Surya Siddhanta ), via translation from Sanskrit to Arabic and then from Arabic to Latin. [ 39 ] [ 47 ] All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines , used in solving triangles . [ 48 ] Al-Khwārizmī (c. 780–850) produced tables of sines, cosines and tangents. [ 49 ] [ 50 ] Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. [ 50 ] In the early 17th-century, the French mathematician Albert Girard published the first use of the abbreviations sin , cos , and tan ; these were further promulgated by Euler (see below). 
The Opus palatinum de triangulis of Georg Joachim Rheticus , a student of Copernicus , was probably the first in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596. In a paper published in 1682, Leibniz proved that sin x is not an algebraic function of x . [ 51 ] Roger Cotes computed the derivative of sine in his Harmonia Mensurarum (1722). [ 52 ] Leonhard Euler 's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, also defining them as infinite series and presenting " Euler's formula ", as well as the near-modern abbreviations sin. , cos. , tang. , cot. , sec. , and cosec. [ 39 ] There is no standard algorithm for calculating sine and cosine. IEEE 754 , the most widely used standard for the specification of reliable floating-point computation, does not address calculating trigonometric functions such as sine. The reason is that no efficient algorithm is known for computing sine and cosine with a specified accuracy, especially for large inputs. [ 53 ] Algorithms for calculating sine may be balanced for such constraints as speed, accuracy, portability, or range of input values accepted. This can lead to different results for different algorithms, especially for special circumstances such as very large inputs, e.g. sin(10²²) . A common programming optimization, used especially in 3D graphics, is to pre-calculate a table of sine values, for example one value per degree, then for values in-between pick the closest pre-calculated value, or linearly interpolate between the two closest values to approximate it. This allows results to be looked up from a table rather than being calculated in real time. With modern CPU architectures this method may offer no advantage. 
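The table-lookup optimization described above can be sketched in a few lines. This is an illustrative toy (the one-entry-per-degree table size and the wrapping strategy are arbitrary choices), not a production implementation:

```python
import math

# One pre-calculated value per degree (361 entries so index i + 1 stays in range).
TABLE = [math.sin(math.radians(d)) for d in range(361)]

def table_sin(degrees: float) -> float:
    """Approximate the sine of an angle in degrees by linear interpolation."""
    d = degrees % 360.0        # wrap into one period
    i = int(d)                 # nearest lower table entry
    frac = d - i
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

# For a 1-degree grid, the linear-interpolation error is bounded by about
# h*h/8 with h = pi/180 radians, i.e. on the order of 4e-5.
assert abs(table_sin(30.0) - 0.5) < 1e-12   # exact table hit
assert abs(table_sin(123.4) - math.sin(math.radians(123.4))) < 5e-5
```

The trade-off is visible in the error bound: halving the table spacing quarters the interpolation error, at the cost of memory and cache pressure, which is why the method may lose to a hardware or library sin on modern CPUs.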
[ citation needed ] The CORDIC algorithm is commonly used in scientific calculators. The sine and cosine functions, along with other trigonometric functions, are widely available across programming languages and platforms. In computing, they are typically abbreviated to sin and cos . Some CPU architectures have a built-in instruction for sine, including the Intel x87 FPUs since the 80387. In programming languages, sin and cos are typically either a built-in function or found within the language's standard math library. For example, the C standard library defines sine functions within math.h : sin( double ) , sinf( float ) , and sinl( long double ) . The parameter of each is a floating point value, specifying the angle in radians. Each function returns the same data type as it accepts. Many other trigonometric functions are also defined in math.h , such as for cosine, arc sine, and hyperbolic sine (sinh). Similarly, Python defines math.sin(x) and math.cos(x) within the built-in math module. Complex sine and cosine functions are also available within the cmath module, e.g. cmath.sin(z) . CPython 's math functions call the C math library, and use a double-precision floating-point format . Some software libraries provide implementations of sine and cosine using the input angle in half- turns , a half-turn being an angle of 180 degrees or π {\displaystyle \pi } radians. Representing angles in turns or half-turns has accuracy advantages and efficiency advantages in some cases. [ 54 ] [ 55 ] These functions are called sinpi and cospi in MATLAB, [ 54 ] OpenCL, [ 56 ] R, [ 55 ] Julia, [ 57 ] CUDA, [ 58 ] and ARM. [ 59 ] For example, sinpi(x) would evaluate to sin ⁡ ( π x ) , {\displaystyle \sin(\pi x),} where x is expressed in half-turns, and consequently the final input to the function, πx can be interpreted in radians by sin . 
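A minimal user-level sketch of a sinpi-style function (an illustration of the half-turn convention, not the implementation used by any of the libraries named above): the argument, measured in half-turns, is reduced modulo 2 losslessly before the single multiplication by π.

```python
import math

def sinpi(x: float) -> float:
    """sin(pi * x), with the argument x measured in half-turns.

    math.fmod(x, 2.0) is exact in binary floating point, so e.g.
    sinpi(1e15) is exactly 0, whereas math.sin(1e15 * math.pi)
    inherits the rounding of the product 1e15 * math.pi.
    """
    x = math.fmod(x, 2.0)       # exact reduction: one period is 2 half-turns
    # Fold into (-1, 1] so the final multiplication by pi stays small.
    if x <= -1.0:
        x += 2.0
    elif x > 1.0:
        x -= 2.0
    if x == 0.0 or x == 1.0:
        return 0.0              # whole half-turns are exactly zero
    return math.sin(math.pi * x)

assert sinpi(0.5) == 1.0
assert sinpi(1.0) == 0.0
assert sinpi(1e15) == 0.0       # exact even for a huge argument
```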
The accuracy advantage stems from the ability to perfectly represent key angles like full-turn, half-turn, and quarter-turn losslessly in binary floating-point or fixed-point. In contrast, representing 2 π {\displaystyle 2\pi } , π {\displaystyle \pi } , and π 2 {\textstyle {\frac {\pi }{2}}} in binary floating-point or binary scaled fixed-point always involves a loss of accuracy since irrational numbers cannot be represented with finitely many binary digits. Turns also have an accuracy advantage and efficiency advantage for computing modulo to one period. Computing modulo 1 turn or modulo 2 half-turns can be losslessly and efficiently computed in both floating-point and fixed-point. For example, computing modulo 1 or modulo 2 for a binary point scaled fixed-point value requires only a bit shift or bitwise AND operation. In contrast, computing modulo π 2 {\textstyle {\frac {\pi }{2}}} involves inaccuracies in representing π 2 {\textstyle {\frac {\pi }{2}}} . For applications involving angle sensors, the sensor typically provides angle measurements in a form directly compatible with turns or half-turns. For example, an angle sensor may count from 0 to 4096 over one complete revolution. [ 60 ] If half-turns are used as the unit for angle, then the value provided by the sensor directly and losslessly maps to a fixed-point data type with 11 bits to the right of the binary point. In contrast, if radians are used as the unit for storing the angle, then the inaccuracies and cost of multiplying the raw sensor integer by an approximation to π 2048 {\textstyle {\frac {\pi }{2048}}} would be incurred.
In mathematics , the trigonometric functions (also called circular functions , angle functions or goniometric functions ) [ 1 ] are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in all sciences that are related to geometry , such as navigation , solid mechanics , celestial mechanics , geodesy , and many others. They are among the simplest periodic functions , and as such are also widely used for studying periodic phenomena through Fourier analysis . The trigonometric functions most widely used in modern mathematics are the sine , the cosine , and the tangent functions. Their reciprocals are respectively the cosecant , the secant , and the cotangent functions, which are less used. Each of these six trigonometric functions has a corresponding inverse function , and an analog among the hyperbolic functions . The oldest definitions of trigonometric functions, related to right-angle triangles, define them only for acute angles . To extend the sine and cosine functions to functions whose domain is the whole real line , geometrical definitions using the standard unit circle (i.e., a circle with radius 1 unit) are often used; then the domain of the other functions is the real line with some isolated points removed. Modern definitions express trigonometric functions as infinite series or as solutions of differential equations . This allows extending the domain of sine and cosine functions to the whole complex plane , and the domain of the other trigonometric functions to the complex plane with some isolated points removed. Conventionally, an abbreviation of each trigonometric function's name is used as its symbol in formulas. Today, the most common versions of these abbreviations are " sin " for sine, " cos " for cosine, " tan " or " tg " for tangent, " sec " for secant, " csc " or " cosec " for cosecant, and " cot " or " ctg " for cotangent. 
Historically, these abbreviations were first used in prose sentences to indicate particular line segments or their lengths related to an arc of an arbitrary circle, and later to indicate ratios of lengths, but as the function concept developed in the 17th–18th century, they began to be considered as functions of real-number-valued angle measures, and written with functional notation , for example sin( x ) . Parentheses are still often omitted to reduce clutter, but are sometimes necessary; for example the expression sin ⁡ x + y {\displaystyle \sin x+y} would typically be interpreted to mean ( sin ⁡ x ) + y , {\displaystyle (\sin x)+y,} so parentheses are required to express sin ⁡ ( x + y ) . {\displaystyle \sin(x+y).} A positive integer appearing as a superscript after the symbol of the function denotes exponentiation , not function composition . For example sin 2 ⁡ x {\displaystyle \sin ^{2}x} and sin 2 ⁡ ( x ) {\displaystyle \sin ^{2}(x)} denote ( sin ⁡ x ) 2 , {\displaystyle (\sin x)^{2},} not sin ⁡ ( sin ⁡ x ) . {\displaystyle \sin(\sin x).} This differs from the (historically later) general functional notation in which f 2 ( x ) = ( f ∘ f ) ( x ) = f ( f ( x ) ) . {\displaystyle f^{2}(x)=(f\circ f)(x)=f(f(x)).} In contrast, the superscript − 1 {\displaystyle -1} is commonly used to denote the inverse function , not the reciprocal . For example sin − 1 ⁡ x {\displaystyle \sin ^{-1}x} and sin − 1 ⁡ ( x ) {\displaystyle \sin ^{-1}(x)} denote the inverse trigonometric function alternatively written arcsin ⁡ x . {\displaystyle \arcsin x\,.} The equation θ = sin − 1 ⁡ x {\displaystyle \theta =\sin ^{-1}x} implies sin ⁡ θ = x , {\displaystyle \sin \theta =x,} not θ ⋅ sin ⁡ x = 1. {\displaystyle \theta \cdot \sin x=1.} In this case, the superscript could be considered as denoting a composed or iterated function , but negative superscripts other than − 1 {\displaystyle {-1}} are not in common use. 
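In code the notation is unambiguous, which makes the conventions above easy to illustrate; a short sketch with Python's math module:

```python
import math

x = 0.5
# sin^2 x denotes (sin x)^2, i.e. exponentiation, not the composition sin(sin x):
assert math.sin(x) ** 2 != math.sin(math.sin(x))
# The superscript -1 denotes the inverse function (arcsin), not the reciprocal 1/sin:
theta = math.asin(x)              # theta = sin^-1(x)
assert math.isclose(math.sin(theta), x)
assert not math.isclose(theta, 1 / math.sin(x))
```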
If the acute angle θ is given, then any right triangles that have an angle of θ are similar to each other. This means that the ratio of any two side lengths depends only on θ . Thus these six ratios define six functions of θ , which are the trigonometric functions. In the following definitions, the hypotenuse is the length of the side opposite the right angle, opposite represents the side opposite the given angle θ , and adjacent represents the side between the angle θ and the right angle. [ 2 ] [ 3 ] Various mnemonics can be used to remember these definitions. In a right-angled triangle, the sum of the two acute angles is a right angle, that is, 90° or ⁠ π / 2 ⁠ radians . Therefore sin ⁡ ( θ ) {\displaystyle \sin(\theta )} and cos ⁡ ( 90 ∘ − θ ) {\displaystyle \cos(90^{\circ }-\theta )} represent the same ratio, and thus are equal. This identity and analogous relationships between the other trigonometric functions are summarized in the following table. In geometric applications, the argument of a trigonometric function is generally the measure of an angle . For this purpose, any angular unit is convenient. One common unit is degrees , in which a right angle is 90° and a complete turn is 360° (particularly in elementary mathematics ). However, in calculus and mathematical analysis , the trigonometric functions are generally regarded more abstractly as functions of real or complex numbers , rather than angles. In fact, the functions sin and cos can be defined for all complex numbers in terms of the exponential function , via power series, [ 5 ] or as solutions to differential equations given particular initial values [ 6 ] ( see below ), without reference to any geometric notions. The other four trigonometric functions ( tan , cot , sec , csc ) can be defined as quotients and reciprocals of sin and cos , except where zero occurs in the denominator. 
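The quotient-and-reciprocal definitions translate directly into code; a minimal sketch, valid only where the denominators are nonzero:

```python
import math

def tan(x): return math.sin(x) / math.cos(x)
def cot(x): return math.cos(x) / math.sin(x)
def sec(x): return 1.0 / math.cos(x)
def csc(x): return 1.0 / math.sin(x)

x = 1.1  # any angle where both sin x and cos x are nonzero
assert math.isclose(tan(x), math.tan(x))
assert math.isclose(cot(x), 1.0 / math.tan(x))
assert math.isclose(sec(x) ** 2 - tan(x) ** 2, 1.0)  # 1 + tan^2 = sec^2
assert math.isclose(csc(x) ** 2 - cot(x) ** 2, 1.0)  # 1 + cot^2 = csc^2
```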
It can be proved, for real arguments, that these definitions coincide with elementary geometric definitions if the argument is regarded as an angle in radians. [ 5 ] Moreover, these definitions result in simple expressions for the derivatives and indefinite integrals for the trigonometric functions. [ 7 ] Thus, in settings beyond elementary geometry, radians are regarded as the mathematically natural unit for describing angle measures. When radians (rad) are employed, the angle is given as the length of the arc of the unit circle subtended by it: the angle that subtends an arc of length 1 on the unit circle is 1 rad (≈ 57.3°), [ 8 ] and a complete turn (360°) is an angle of 2 π (≈ 6.28) rad. [ 9 ] For real number x , the notation sin x , cos x , etc. refers to the value of the trigonometric functions evaluated at an angle of x rad. If units of degrees are intended, the degree sign must be explicitly shown ( sin x° , cos x° , etc.). Using this standard notation, the argument x for the trigonometric functions satisfies the relationship x = (180 x / π )°, so that, for example, sin π = sin 180° when we take x = π . In this way, the degree symbol can be regarded as a mathematical constant such that 1° = π /180 ≈ 0.0175. [ 10 ] The six trigonometric functions can be defined as coordinate values of points on the Euclidean plane that are related to the unit circle , which is the circle of radius one centered at the origin O of this coordinate system. While right-angled triangle definitions allow for the definition of the trigonometric functions for angles between 0 and π 2 {\textstyle {\frac {\pi }{2}}} radians (90°), the unit circle definitions allow the domain of trigonometric functions to be extended to all positive and negative real numbers. 
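Picking up the earlier remark that the degree symbol behaves like the constant π/180 ≈ 0.0175, a small numeric sketch:

```python
import math

deg = math.pi / 180  # the degree symbol treated as a constant: 1 degree in radians

assert math.isclose(deg, 0.0175, rel_tol=1e-2)
assert math.isclose(math.sin(30 * deg), 0.5)     # sin 30 degrees = 1/2
assert math.isclose(math.sin(90 * deg), 1.0)     # sin 90 degrees = 1
assert math.isclose(45 * deg, math.radians(45))  # same conversion as math.radians
```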
Let L {\displaystyle {\mathcal {L}}} be the ray obtained by rotating by an angle θ the positive half of the x -axis ( counterclockwise rotation for θ > 0 , {\displaystyle \theta >0,} and clockwise rotation for θ < 0 {\displaystyle \theta <0} ). This ray intersects the unit circle at the point A = ( x A , y A ) . {\displaystyle \mathrm {A} =(x_{\mathrm {A} },y_{\mathrm {A} }).} The ray L , {\displaystyle {\mathcal {L}},} extended to a line if necessary, intersects the line of equation x = 1 {\displaystyle x=1} at point B = ( 1 , y B ) , {\displaystyle \mathrm {B} =(1,y_{\mathrm {B} }),} and the line of equation y = 1 {\displaystyle y=1} at point C = ( x C , 1 ) . {\displaystyle \mathrm {C} =(x_{\mathrm {C} },1).} The tangent line to the unit circle at the point A , is perpendicular to L , {\displaystyle {\mathcal {L}},} and intersects the y - and x -axes at points D = ( 0 , y D ) {\displaystyle \mathrm {D} =(0,y_{\mathrm {D} })} and E = ( x E , 0 ) . {\displaystyle \mathrm {E} =(x_{\mathrm {E} },0).} The coordinates of these points give the values of all trigonometric functions for any arbitrary real value of θ in the following manner. The trigonometric functions cos and sin are defined, respectively, as the x - and y -coordinate values of point A . That is, In the range 0 ≤ θ ≤ π / 2 {\displaystyle 0\leq \theta \leq \pi /2} , this definition coincides with the right-angled triangle definition, by taking the right-angled triangle to have the unit radius OA as hypotenuse . And since the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} holds for all points P = ( x , y ) {\displaystyle \mathrm {P} =(x,y)} on the unit circle, this definition of cosine and sine also satisfies the Pythagorean identity . 
The other trigonometric functions can be found along the unit circle as By applying the Pythagorean identity and geometric proof methods, these definitions can readily be shown to coincide with the definitions of tangent, cotangent, secant and cosecant in terms of sine and cosine, that is Since a rotation of an angle of ± 2 π {\displaystyle \pm 2\pi } does not change the position or size of a shape, the points A , B , C , D , and E are the same for two angles whose difference is an integer multiple of 2 π {\displaystyle 2\pi } . Thus trigonometric functions are periodic functions with period 2 π {\displaystyle 2\pi } . That is, the equalities hold for any angle θ and any integer k . The same is true for the four other trigonometric functions. By observing the sign and the monotonicity of the functions sine, cosine, cosecant, and secant in the four quadrants, one can show that 2 π {\displaystyle 2\pi } is the smallest value for which they are periodic (i.e., 2 π {\displaystyle 2\pi } is the fundamental period of these functions). However, after a rotation by an angle π {\displaystyle \pi } , the points B and C already return to their original position, so that the tangent function and the cotangent function have a fundamental period of π {\displaystyle \pi } . That is, the equalities hold for any angle θ and any integer k . The algebraic expressions for the most important angles are as follows: Writing the numerators as square roots of consecutive non-negative integers, with a denominator of 2, provides an easy way to remember the values. [ 13 ] Such simple expressions generally do not exist for other angles which are rational multiples of a right angle. The following table lists the sines, cosines, and tangents of multiples of 15 degrees from 0 to 90 degrees. G. H. 
Hardy noted in his 1908 work A Course of Pure Mathematics that the definition of the trigonometric functions in terms of the unit circle is not satisfactory, because it depends implicitly on a notion of angle that can be measured by a real number. [ 14 ] Thus in modern analysis, trigonometric functions are usually constructed without reference to geometry. Various ways exist in the literature for defining the trigonometric functions in a manner suitable for analysis; they include: Sine and cosine can be defined as the unique solution to the initial value problem : [ 17 ] Differentiating again, d 2 d x 2 sin ⁡ x = d d x cos ⁡ x = − sin ⁡ x {\textstyle {\frac {d^{2}}{dx^{2}}}\sin x={\frac {d}{dx}}\cos x=-\sin x} and d 2 d x 2 cos ⁡ x = − d d x sin ⁡ x = − cos ⁡ x {\textstyle {\frac {d^{2}}{dx^{2}}}\cos x=-{\frac {d}{dx}}\sin x=-\cos x} , so both sine and cosine are solutions of the same ordinary differential equation Sine is the unique solution with y (0) = 0 and y ′(0) = 1 ; cosine is the unique solution with y (0) = 1 and y ′(0) = 0 . One can then prove, as a theorem, that solutions cos , sin {\displaystyle \cos ,\sin } are periodic, having the same period. Writing this period as 2 π {\displaystyle 2\pi } is then a definition of the real number π {\displaystyle \pi } which is independent of geometry. Applying the quotient rule to the tangent tan ⁡ x = sin ⁡ x / cos ⁡ x {\displaystyle \tan x=\sin x/\cos x} , so the tangent function satisfies the ordinary differential equation It is the unique solution with y (0) = 0 . The basic trigonometric functions can be defined by the following power series expansions. [ 18 ] These series are also known as the Taylor series or Maclaurin series of these trigonometric functions: The radius of convergence of these series is infinite. 
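The power series can be summed directly; a partial-sum sketch, where the fixed count of 15 terms is an illustrative choice that is ample for small arguments:

```python
import math

def sin_series(x: float, terms: int = 15) -> float:
    """Partial sum of sin x = sum_n (-1)^n x^(2n+1) / (2n+1)!."""
    total, term = 0.0, x
    for n in range(terms):
        total += term
        # Next term: multiply by -x^2 / ((2n+2)(2n+3)), avoiding factorials.
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

def cos_series(x: float, terms: int = 15) -> float:
    """Partial sum of cos x = sum_n (-1)^n x^(2n) / (2n)!."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 1) * (2 * n + 2))
    return total

for x in (0.0, 1.0, -2.5, math.pi):
    assert math.isclose(sin_series(x), math.sin(x), abs_tol=1e-12)
    assert math.isclose(cos_series(x), math.cos(x), abs_tol=1e-12)
```

Because the radius of convergence is infinite, any argument works in principle, but for large |x| the intermediate terms grow huge before cancelling, so practical implementations reduce the argument first.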
Therefore, the sine and the cosine can be extended to entire functions (also called "sine" and "cosine"), which are (by definition) complex-valued functions that are defined and holomorphic on the whole complex plane . Term-by-term differentiation shows that the sine and cosine defined by the series obey the differential equation discussed previously, and conversely one can obtain these series from elementary recursion relations derived from the differential equation. Being defined as fractions of entire functions, the other trigonometric functions may be extended to meromorphic functions , that is functions that are holomorphic in the whole complex plane, except some isolated points called poles . Here, the poles are the numbers of the form ( 2 k + 1 ) π 2 {\textstyle (2k+1){\frac {\pi }{2}}} for the tangent and the secant, or k π {\displaystyle k\pi } for the cotangent and the cosecant, where k is an arbitrary integer. Recurrences relations may also be computed for the coefficients of the Taylor series of the other trigonometric functions. These series have a finite radius of convergence . Their coefficients have a combinatorial interpretation: they enumerate alternating permutations of finite sets. [ 19 ] More precisely, defining one has the following series expansions: [ 20 ] The following continued fractions are valid in the whole complex plane: The last one was used in the historically first proof that π is irrational . [ 21 ] There is a series representation as partial fraction expansion where just translated reciprocal functions are summed up, such that the poles of the cotangent function and the reciprocal functions match: [ 22 ] This identity can be proved with the Herglotz trick. 
[ 23 ] Combining the (– n ) th with the n th term lead to absolutely convergent series: Similarly, one can find a partial fraction expansion for the secant, cosecant and tangent functions: The following infinite product for the sine is due to Leonhard Euler , and is of great importance in complex analysis: [ 24 ] This may be obtained from the partial fraction decomposition of cot ⁡ z {\displaystyle \cot z} given above, which is the logarithmic derivative of sin ⁡ z {\displaystyle \sin z} . [ 25 ] From this, it can be deduced also that Euler's formula relates sine and cosine to the exponential function : This formula is commonly considered for real values of x , but it remains true for all complex values. Proof : Let f 1 ( x ) = cos ⁡ x + i sin ⁡ x , {\displaystyle f_{1}(x)=\cos x+i\sin x,} and f 2 ( x ) = e i x . {\displaystyle f_{2}(x)=e^{ix}.} One has d f j ( x ) / d x = i f j ( x ) {\displaystyle df_{j}(x)/dx=if_{j}(x)} for j = 1, 2 . The quotient rule implies thus that d / d x ( f 1 ( x ) / f 2 ( x ) ) = 0 {\displaystyle d/dx\,(f_{1}(x)/f_{2}(x))=0} . Therefore, f 1 ( x ) / f 2 ( x ) {\displaystyle f_{1}(x)/f_{2}(x)} is a constant function, which equals 1 , as f 1 ( 0 ) = f 2 ( 0 ) = 1. {\displaystyle f_{1}(0)=f_{2}(0)=1.} This proves the formula. One has Solving this linear system in sine and cosine, one can express them in terms of the exponential function: When x is real, this may be rewritten as Most trigonometric identities can be proved by expressing trigonometric functions in terms of the complex exponential function by using above formulas, and then using the identity e a + b = e a e b {\displaystyle e^{a+b}=e^{a}e^{b}} for simplifying the result. Euler's formula can also be used to define the basic trigonometric function directly, as follows, using the language of topological groups . 
[ 26 ] The set U {\displaystyle U} of complex numbers of unit modulus is a compact and connected topological group, which has a neighborhood of the identity that is homeomorphic to the real line. Therefore, it is isomorphic as a topological group to the one-dimensional torus group R / Z {\displaystyle \mathbb {R} /\mathbb {Z} } , via an isomorphism e : R / Z → U . {\displaystyle e:\mathbb {R} /\mathbb {Z} \to U.} In pedestrian terms e ( t ) = exp ⁡ ( 2 π i t ) {\displaystyle e(t)=\exp(2\pi it)} , and this isomorphism is unique up to taking complex conjugates. For a nonzero real number a {\displaystyle a} (the base ), the function t ↦ e ( t / a ) {\displaystyle t\mapsto e(t/a)} defines an isomorphism of the group R / a Z → U {\displaystyle \mathbb {R} /a\mathbb {Z} \to U} . The real and imaginary parts of e ( t / a ) {\displaystyle e(t/a)} are the cosine and sine, where a {\displaystyle a} is used as the base for measuring angles. For example, when a = 2 π {\displaystyle a=2\pi } , we get the measure in radians, and the usual trigonometric functions. When a = 360 {\displaystyle a=360} , we get the sine and cosine of angles measured in degrees. Note that a = 2 π {\displaystyle a=2\pi } is the unique value at which the derivative d d t e ( t / a ) {\displaystyle {\frac {d}{dt}}e(t/a)} becomes a unit vector with positive imaginary part at t = 0 {\displaystyle t=0} . This fact can, in turn, be used to define the constant 2 π {\displaystyle 2\pi } . Another way to define the trigonometric functions in analysis is using integration. [ 14 ] [ 27 ] For a real number t {\displaystyle t} , put θ ( t ) = ∫ 0 t d τ 1 + τ 2 = arctan ⁡ t {\displaystyle \theta (t)=\int _{0}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\arctan t} where this defines this inverse tangent function. Also, π {\displaystyle \pi } is defined by 1 2 π = ∫ 0 ∞ d τ 1 + τ 2 {\displaystyle {\frac {1}{2}}\pi =\int _{0}^{\infty }{\frac {d\tau }{1+\tau ^{2}}}} a definition that goes back to Karl Weierstrass . 
[ 28 ] On the interval − π / 2 < θ < π / 2 {\displaystyle -\pi /2<\theta <\pi /2} , the trigonometric functions are defined by inverting the relation θ = arctan ⁡ t {\displaystyle \theta =\arctan t} . Thus we define the trigonometric functions by tan ⁡ θ = t , cos ⁡ θ = ( 1 + t 2 ) − 1 / 2 , sin ⁡ θ = t ( 1 + t 2 ) − 1 / 2 {\displaystyle \tan \theta =t,\quad \cos \theta =(1+t^{2})^{-1/2},\quad \sin \theta =t(1+t^{2})^{-1/2}} where the point ( t , θ ) {\displaystyle (t,\theta )} is on the graph of θ = arctan ⁡ t {\displaystyle \theta =\arctan t} and the positive square root is taken. This defines the trigonometric functions on ( − π / 2 , π / 2 ) {\displaystyle (-\pi /2,\pi /2)} . The definition can be extended to all real numbers by first observing that, as θ → π / 2 {\displaystyle \theta \to \pi /2} , t → ∞ {\displaystyle t\to \infty } , and so cos ⁡ θ = ( 1 + t 2 ) − 1 / 2 → 0 {\displaystyle \cos \theta =(1+t^{2})^{-1/2}\to 0} and sin ⁡ θ = t ( 1 + t 2 ) − 1 / 2 → 1 {\displaystyle \sin \theta =t(1+t^{2})^{-1/2}\to 1} . Thus cos ⁡ θ {\displaystyle \cos \theta } and sin ⁡ θ {\displaystyle \sin \theta } are extended continuously so that cos ⁡ ( π / 2 ) = 0 , sin ⁡ ( π / 2 ) = 1 {\displaystyle \cos(\pi /2)=0,\sin(\pi /2)=1} . Now the conditions cos ⁡ ( θ + π ) = − cos ⁡ ( θ ) {\displaystyle \cos(\theta +\pi )=-\cos(\theta )} and sin ⁡ ( θ + π ) = − sin ⁡ ( θ ) {\displaystyle \sin(\theta +\pi )=-\sin(\theta )} define the sine and cosine as periodic functions with period 2 π {\displaystyle 2\pi } , for all real numbers. Proving the basic properties of sine and cosine, including the fact that sine and cosine are analytic, one may first establish the addition formulae. 
First, arctan ⁡ s + arctan ⁡ t = arctan ⁡ s + t 1 − s t {\displaystyle \arctan s+\arctan t=\arctan {\frac {s+t}{1-st}}} holds, provided arctan ⁡ s + arctan ⁡ t ∈ ( − π / 2 , π / 2 ) {\displaystyle \arctan s+\arctan t\in (-\pi /2,\pi /2)} , since arctan ⁡ s + arctan ⁡ t = ∫ − s t d τ 1 + τ 2 = ∫ 0 s + t 1 − s t d τ 1 + τ 2 {\displaystyle \arctan s+\arctan t=\int _{-s}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\int _{0}^{\frac {s+t}{1-st}}{\frac {d\tau }{1+\tau ^{2}}}} after the substitution τ → s + τ 1 − s τ {\displaystyle \tau \to {\frac {s+\tau }{1-s\tau }}} . In particular, the limiting case as s → ∞ {\displaystyle s\to \infty } gives arctan ⁡ t + π 2 = arctan ⁡ ( − 1 / t ) , t ∈ ( − ∞ , 0 ) . {\displaystyle \arctan t+{\frac {\pi }{2}}=\arctan(-1/t),\quad t\in (-\infty ,0).} Thus we have sin ⁡ ( θ + π 2 ) = − 1 t 1 + ( − 1 / t ) 2 = − 1 1 + t 2 = − cos ⁡ ( θ ) {\displaystyle \sin \left(\theta +{\frac {\pi }{2}}\right)={\frac {-1}{t{\sqrt {1+(-1/t)^{2}}}}}={\frac {-1}{\sqrt {1+t^{2}}}}=-\cos(\theta )} and cos ⁡ ( θ + π 2 ) = 1 1 + ( − 1 / t ) 2 = t 1 + t 2 = sin ⁡ ( θ ) . {\displaystyle \cos \left(\theta +{\frac {\pi }{2}}\right)={\frac {1}{\sqrt {1+(-1/t)^{2}}}}={\frac {t}{\sqrt {1+t^{2}}}}=\sin(\theta ).} So the sine and cosine functions are related by translation over a quarter period π / 2 {\displaystyle \pi /2} . One can also define the trigonometric functions using various functional equations . For example, [ 29 ] the sine and the cosine form the unique pair of continuous functions that satisfy the difference formula and the added condition The sine and cosine of a complex number z = x + i y {\displaystyle z=x+iy} can be expressed in terms of real sines, cosines, and hyperbolic functions as follows: By taking advantage of domain coloring , it is possible to graph the trigonometric functions as complex-valued functions. 
Various features unique to the complex functions can be seen from the graph; for example, the sine and cosine functions can be seen to be unbounded as the imaginary part of z {\displaystyle z} becomes larger (since the color white represents infinity), and the fact that the functions contain simple zeros or poles is apparent from the fact that the hue cycles around each zero or pole exactly once. Comparing these graphs with those of the corresponding hyperbolic functions highlights the relationships between the two. (Domain colorings of sin ⁡ z {\displaystyle \sin z} , cos ⁡ z {\displaystyle \cos z} , tan ⁡ z {\displaystyle \tan z} , cot ⁡ z {\displaystyle \cot z} , sec ⁡ z {\displaystyle \sec z} , and csc ⁡ z {\displaystyle \csc z} .) The sine and cosine functions are periodic , with period 2 π {\displaystyle 2\pi } , which is the smallest positive period: sin ⁡ ( z + 2 π ) = sin ⁡ ( z ) , cos ⁡ ( z + 2 π ) = cos ⁡ ( z ) . {\displaystyle \sin(z+2\pi )=\sin(z),\quad \cos(z+2\pi )=\cos(z).} Consequently, the cosecant and secant also have 2 π {\displaystyle 2\pi } as their period. The functions sine and cosine also have semiperiods π {\displaystyle \pi } : sin ⁡ ( z + π ) = − sin ⁡ ( z ) , cos ⁡ ( z + π ) = − cos ⁡ ( z ) {\displaystyle \sin(z+\pi )=-\sin(z),\quad \cos(z+\pi )=-\cos(z)} and consequently tan ⁡ ( z + π ) = tan ⁡ ( z ) , cot ⁡ ( z + π ) = cot ⁡ ( z ) . {\displaystyle \tan(z+\pi )=\tan(z),\quad \cot(z+\pi )=\cot(z).} Also, sin ⁡ ( x + π / 2 ) = cos ⁡ ( x ) , cos ⁡ ( x + π / 2 ) = − sin ⁡ ( x ) {\displaystyle \sin(x+\pi /2)=\cos(x),\quad \cos(x+\pi /2)=-\sin(x)} (see Complementary angles ). The function sin ⁡ ( z ) {\displaystyle \sin(z)} has a unique zero (at z = 0 {\displaystyle z=0} ) in the strip − π < ℜ ( z ) < π {\displaystyle -\pi <\Re (z)<\pi } . The function cos ⁡ ( z ) {\displaystyle \cos(z)} has the pair of zeros z = ± π / 2 {\displaystyle z=\pm \pi /2} in the same strip.
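The unboundedness and the periodicity just described can be checked with Python's cmath module (a small illustrative sketch; the sample points are arbitrary). On the imaginary axis, sin(iy) = i sinh y, so |sin(iy)| grows without bound:

```python
import cmath
import math

# |sin(iy)| = sinh(y): sine is unbounded as the imaginary part grows.
for y in [1.0, 5.0, 10.0]:
    assert math.isclose(abs(cmath.sin(1j * y)), math.sinh(y), rel_tol=1e-12)

# Period 2*pi and semiperiod pi, checked at a complex sample point.
z = 0.3 + 0.4j
assert cmath.isclose(cmath.sin(z + 2 * math.pi), cmath.sin(z))
assert cmath.isclose(cmath.cos(z + math.pi), -cmath.cos(z))
```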
Because of the periodicity, the zeros of sine are π Z = { … , − 2 π , − π , 0 , π , 2 π , … } ⊂ C . {\displaystyle \pi \mathbb {Z} =\left\{\dots ,-2\pi ,-\pi ,0,\pi ,2\pi ,\dots \right\}\subset \mathbb {C} .} The zeros of cosine are π 2 + π Z = { … , − 3 π 2 , − π 2 , π 2 , 3 π 2 , … } ⊂ C . {\displaystyle {\frac {\pi }{2}}+\pi \mathbb {Z} =\left\{\dots ,-{\frac {3\pi }{2}},-{\frac {\pi }{2}},{\frac {\pi }{2}},{\frac {3\pi }{2}},\dots \right\}\subset \mathbb {C} .} All of the zeros are simple zeros, and both functions have derivative ± 1 {\displaystyle \pm 1} at each of the zeros. The tangent function tan ⁡ ( z ) = sin ⁡ ( z ) / cos ⁡ ( z ) {\displaystyle \tan(z)=\sin(z)/\cos(z)} has a simple zero at z = 0 {\displaystyle z=0} and vertical asymptotes at z = ± π / 2 {\displaystyle z=\pm \pi /2} , where it has a simple pole of residue − 1 {\displaystyle -1} . Again, owing to the periodicity, the zeros are all the integer multiples of π {\displaystyle \pi } and the poles are odd multiples of π / 2 {\displaystyle \pi /2} , all having the same residue. The poles correspond to vertical asymptotes lim x → ( π / 2 ) − tan ⁡ ( x ) = + ∞ , lim x → ( π / 2 ) + tan ⁡ ( x ) = − ∞ . {\displaystyle \lim _{x\to (\pi /2)^{-}}\tan(x)=+\infty ,\quad \lim _{x\to (\pi /2)^{+}}\tan(x)=-\infty .} The cotangent function cot ⁡ ( z ) = cos ⁡ ( z ) / sin ⁡ ( z ) {\displaystyle \cot(z)=\cos(z)/\sin(z)} has a simple pole of residue 1 at the integer multiples of π {\displaystyle \pi } and simple zeros at odd multiples of π / 2 {\displaystyle \pi /2} . The poles correspond to vertical asymptotes lim x → 0 − cot ⁡ ( x ) = − ∞ , lim x → 0 + cot ⁡ ( x ) = + ∞ . {\displaystyle \lim _{x\to 0^{-}}\cot(x)=-\infty ,\quad \lim _{x\to 0^{+}}\cot(x)=+\infty .} Many identities interrelate the trigonometric functions. This section contains the most basic ones; for more identities, see List of trigonometric identities .
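The residues can be verified numerically: near a simple pole z₀, the product (z − z₀)f(z) approaches the residue. A Python sketch, with an arbitrarily chosen small offset eps:

```python
import math

# (z - z0) * f(z) near the pole approximates the residue.
eps = 1e-6
# tan has residue -1 at pi/2:
assert math.isclose(eps * math.tan(math.pi / 2 + eps), -1.0, rel_tol=1e-5)
# cot has residue +1 at 0:
assert math.isclose(eps * (math.cos(eps) / math.sin(eps)), 1.0, rel_tol=1e-5)
```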
These identities may be proved geometrically from the unit-circle definitions or the right-angled-triangle definitions (although, for the latter definitions, care must be taken for angles that are not in the interval [0, π /2] , see Proofs of trigonometric identities ). For non-geometrical proofs using only tools of calculus , one may use directly the differential equations, in a way that is similar to that of the above proof of Euler's identity. One can also use Euler's identity for expressing all trigonometric functions in terms of complex exponentials and using properties of the exponential function. The cosine and the secant are even functions ; the other trigonometric functions are odd functions . That is: cos ⁡ ( − x ) = cos ⁡ x , sec ⁡ ( − x ) = sec ⁡ x , {\displaystyle \cos(-x)=\cos x,\quad \sec(-x)=\sec x,} while sin ⁡ ( − x ) = − sin ⁡ x , tan ⁡ ( − x ) = − tan ⁡ x , cot ⁡ ( − x ) = − cot ⁡ x , csc ⁡ ( − x ) = − csc ⁡ x . {\displaystyle \sin(-x)=-\sin x,\quad \tan(-x)=-\tan x,\quad \cot(-x)=-\cot x,\quad \csc(-x)=-\csc x.} All trigonometric functions are periodic functions of period 2 π . This is the smallest period, except for the tangent and the cotangent, which have π as smallest period. This means that, for every integer k , one has sin ⁡ ( x + 2 k π ) = sin ⁡ x , cos ⁡ ( x + 2 k π ) = cos ⁡ x , tan ⁡ ( x + k π ) = tan ⁡ x , {\displaystyle \sin(x+2k\pi )=\sin x,\quad \cos(x+2k\pi )=\cos x,\quad \tan(x+k\pi )=\tan x,} and similarly for the other functions. See Periodicity and asymptotes . The Pythagorean identity is the expression of the Pythagorean theorem in terms of trigonometric functions. It is sin 2 ⁡ x + cos 2 ⁡ x = 1. {\displaystyle \sin ^{2}x+\cos ^{2}x=1.} Dividing through by either cos 2 ⁡ x {\displaystyle \cos ^{2}x} or sin 2 ⁡ x {\displaystyle \sin ^{2}x} gives tan 2 ⁡ x + 1 = sec 2 ⁡ x {\displaystyle \tan ^{2}x+1=\sec ^{2}x} and 1 + cot 2 ⁡ x = csc 2 ⁡ x . {\displaystyle 1+\cot ^{2}x=\csc ^{2}x.} The sum and difference formulas allow expanding the sine, the cosine, and the tangent of a sum or a difference of two angles in terms of sines and cosines and tangents of the angles themselves: sin ⁡ ( x ± y ) = sin ⁡ x cos ⁡ y ± cos ⁡ x sin ⁡ y , cos ⁡ ( x ± y ) = cos ⁡ x cos ⁡ y ∓ sin ⁡ x sin ⁡ y , tan ⁡ ( x ± y ) = tan ⁡ x ± tan ⁡ y 1 ∓ tan ⁡ x tan ⁡ y . {\displaystyle \sin(x\pm y)=\sin x\cos y\pm \cos x\sin y,\quad \cos(x\pm y)=\cos x\cos y\mp \sin x\sin y,\quad \tan(x\pm y)={\frac {\tan x\pm \tan y}{1\mp \tan x\tan y}}.} These can be derived geometrically, using arguments that date to Ptolemy (see Angle sum and difference identities ). One can also produce them algebraically using Euler's formula . When the two angles are equal, the sum formulas reduce to simpler equations known as the double-angle formulae , such as sin ⁡ 2 x = 2 sin ⁡ x cos ⁡ x {\displaystyle \sin 2x=2\sin x\cos x} and cos ⁡ 2 x = cos 2 ⁡ x − sin 2 ⁡ x . {\displaystyle \cos 2x=\cos ^{2}x-\sin ^{2}x.} These identities can be used to derive the product-to-sum identities .
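These identities lend themselves to a quick numerical spot-check; the following Python sketch (sample angles chosen arbitrarily) verifies parity, the Pythagorean identity, and the angle-sum formulas:

```python
import math

for x, y in [(0.3, 1.1), (-0.7, 2.5), (1.9, -0.4)]:
    # Parity: cosine is even, sine is odd.
    assert math.isclose(math.cos(-x), math.cos(x), abs_tol=1e-12)
    assert math.isclose(math.sin(-x), -math.sin(x), abs_tol=1e-12)
    # Pythagorean identity.
    assert math.isclose(math.sin(x)**2 + math.cos(x)**2, 1.0)
    # Angle-sum formulas.
    assert math.isclose(math.sin(x + y),
                        math.sin(x)*math.cos(y) + math.cos(x)*math.sin(y),
                        abs_tol=1e-12)
    assert math.isclose(math.cos(x + y),
                        math.cos(x)*math.cos(y) - math.sin(x)*math.sin(y),
                        abs_tol=1e-12)
```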
By setting t = tan ⁡ 1 2 θ , {\displaystyle t=\tan {\tfrac {1}{2}}\theta ,} all trigonometric functions of θ {\displaystyle \theta } can be expressed as rational fractions of t {\displaystyle t} : sin ⁡ θ = 2 t 1 + t 2 , cos ⁡ θ = 1 − t 2 1 + t 2 , tan ⁡ θ = 2 t 1 − t 2 . {\displaystyle \sin \theta ={\frac {2t}{1+t^{2}}},\quad \cos \theta ={\frac {1-t^{2}}{1+t^{2}}},\quad \tan \theta ={\frac {2t}{1-t^{2}}}.} Together with d θ = 2 1 + t 2 d t , {\displaystyle d\theta ={\frac {2}{1+t^{2}}}\,dt,} this is the tangent half-angle substitution , which reduces the computation of integrals and antiderivatives of trigonometric functions to that of rational fractions. The derivatives of trigonometric functions result from those of sine and cosine by applying the quotient rule . The values given for the antiderivatives in the following table can be verified by differentiating them. The number C is a constant of integration . Note: For 0 < x < π {\displaystyle 0<x<\pi } the integral of csc ⁡ x {\displaystyle \csc x} can also be written as − arsinh ⁡ ( cot ⁡ x ) , {\displaystyle -\operatorname {arsinh} (\cot x),} and for the integral of sec ⁡ x {\displaystyle \sec x} for − π / 2 < x < π / 2 {\displaystyle -\pi /2<x<\pi /2} as arsinh ⁡ ( tan ⁡ x ) , {\displaystyle \operatorname {arsinh} (\tan x),} where arsinh {\displaystyle \operatorname {arsinh} } is the inverse hyperbolic sine . Alternatively, the derivatives of the 'co-functions' can be obtained using trigonometric identities and the chain rule. The trigonometric functions are periodic, and hence not injective , so strictly speaking, they do not have an inverse function . However, on each interval on which a trigonometric function is monotonic , one can define an inverse function, and this defines inverse trigonometric functions as multivalued functions . To define a true inverse function, one must restrict the domain to an interval where the function is monotonic, and is thus bijective from this interval to its image by the function. The common choice for this interval, called the set of principal values , is given in the following table. As usual, the inverse trigonometric functions are denoted with the prefix "arc" before the name or the abbreviation of the function. The notations sin −1 , cos −1 , etc.
are often used for arcsin and arccos , etc. When this notation is used, inverse functions could be confused with multiplicative inverses. The notation with the "arc" prefix avoids such a confusion, though "arcsec" for arcsecant can be confused with " arcsecond ". Just like the sine and cosine, the inverse trigonometric functions can also be expressed in terms of infinite series. They can also be expressed in terms of complex logarithms . In this section A , B , C denote the three (interior) angles of a triangle, and a , b , c denote the lengths of the respective opposite edges. They are related by various formulas, which are named by the trigonometric functions they involve. The law of sines states that for an arbitrary triangle with sides a , b , and c and angles opposite those sides A , B and C : sin ⁡ A a = sin ⁡ B b = sin ⁡ C c = 2 Δ a b c , {\displaystyle {\frac {\sin A}{a}}={\frac {\sin B}{b}}={\frac {\sin C}{c}}={\frac {2\Delta }{abc}},} where Δ is the area of the triangle, or, equivalently, a sin ⁡ A = b sin ⁡ B = c sin ⁡ C = 2 R , {\displaystyle {\frac {a}{\sin A}}={\frac {b}{\sin B}}={\frac {c}{\sin C}}=2R,} where R is the triangle's circumradius . It can be proved by dividing the triangle into two right ones and using the above definition of sine. The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. This is a common situation occurring in triangulation , a technique to determine unknown distances by measuring two angles and an accessible enclosed distance. The law of cosines (also known as the cosine formula or cosine rule) is an extension of the Pythagorean theorem : c 2 = a 2 + b 2 − 2 a b cos ⁡ C , {\displaystyle c^{2}=a^{2}+b^{2}-2ab\cos C,} or equivalently, cos ⁡ C = a 2 + b 2 − c 2 2 a b . {\displaystyle \cos C={\frac {a^{2}+b^{2}-c^{2}}{2ab}}.} In this formula the angle at C is opposite to the side c . 
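The law of sines and the law of cosines can be checked together on a concrete triangle. In the following Python sketch the vertices are chosen purely for illustration, and the circumradius is computed as R = abc / (4 · area):

```python
import math

# A triangle given by explicit vertices (illustrative values).
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

a, b, c = dist(B, C), dist(C, A), dist(A, B)   # sides opposite A, B, C
area = 0.5 * abs((B[0]-A[0]) * (C[1]-A[1]) - (C[0]-A[0]) * (B[1]-A[1]))
R = a * b * c / (4 * area)

# Angle at A from the law of cosines; then a / sin(A) should equal 2R,
# and sin(A) / a should equal 2 * area / (abc), as stated above.
angle_A = math.acos((b*b + c*c - a*a) / (2*b*c))
assert math.isclose(a / math.sin(angle_A), 2 * R)
assert math.isclose(math.sin(angle_A) / a, 2 * area / (a * b * c))
```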
This theorem can be proved by dividing the triangle into two right ones and using the Pythagorean theorem . The law of cosines can be used to determine a side of a triangle if two sides and the angle between them are known. It can also be used to find the cosine of an angle (and consequently the angles themselves) if the lengths of all the sides are known. The law of tangents says that: a − b a + b = tan ⁡ 1 2 ( A − B ) tan ⁡ 1 2 ( A + B ) . {\displaystyle {\frac {a-b}{a+b}}={\frac {\tan {\tfrac {1}{2}}(A-B)}{\tan {\tfrac {1}{2}}(A+B)}}.} If s is the triangle's semiperimeter, ( a + b + c )/2, and r is the radius of the triangle's incircle , then rs is the triangle's area. Therefore Heron's formula implies that: r s = s ( s − a ) ( s − b ) ( s − c ) . {\displaystyle rs={\sqrt {s(s-a)(s-b)(s-c)}}.} The law of cotangents says that: [ 30 ] cot ⁡ 1 2 A = s − a r , {\displaystyle \cot {\tfrac {1}{2}}A={\frac {s-a}{r}},} and similarly for the other two angles. It follows that cot ⁡ 1 2 A + cot ⁡ 1 2 B + cot ⁡ 1 2 C = s r . {\displaystyle \cot {\tfrac {1}{2}}A+\cot {\tfrac {1}{2}}B+\cot {\tfrac {1}{2}}C={\frac {s}{r}}.} The trigonometric functions are also important in physics. The sine and the cosine functions, for example, are used to describe simple harmonic motion , which models many natural phenomena, such as the movement of a mass attached to a spring and, for small angles, the pendular motion of a mass hanging by a string. The sine and cosine functions are one-dimensional projections of uniform circular motion . Trigonometric functions also prove to be useful in the study of general periodic functions . The characteristic wave patterns of periodic functions are useful for modeling recurring phenomena such as sound or light waves . [ 31 ] Under rather general conditions, a periodic function f ( x ) can be expressed as a sum of sine waves or cosine waves in a Fourier series . [ 32 ] Denoting the sine or cosine basis functions by φ k , the expansion of the periodic function f ( t ) takes the form: f ( t ) = ∑ k = 1 ∞ c k φ k ( t ) . {\displaystyle f(t)=\sum _{k=1}^{\infty }c_{k}\varphi _{k}(t).} For example, the square wave can be written as the Fourier series f square ( t ) = 4 π ∑ k = 1 ∞ sin ⁡ ( ( 2 k − 1 ) t ) 2 k − 1 . {\displaystyle f_{\text{square}}(t)={\frac {4}{\pi }}\sum _{k=1}^{\infty }{\sin {\big (}(2k-1)t{\big )} \over 2k-1}.} In the animation of a square wave at top right it can be seen that just a few terms already produce a fairly good approximation.
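The convergence of the square-wave Fourier series can be observed numerically; the following Python sketch evaluates partial sums at t = π/2, where the wave's value is 1:

```python
import math

# Partial sums of the square-wave Fourier series quoted above.
def square_partial(t, n_terms):
    return (4 / math.pi) * sum(
        math.sin((2*k - 1) * t) / (2*k - 1) for k in range(1, n_terms + 1))

# More terms bring the partial sum closer to the wave's value 1.
assert abs(square_partial(math.pi / 2, 100) - 1.0) < 1e-2
assert abs(square_partial(math.pi / 2, 10000) - 1.0) < 1e-3
```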
The superposition of several terms in the expansion of a sawtooth wave is shown underneath. While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was defined by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). The functions of sine and versine (1 – cosine) are closely related to the jyā and koti-jyā functions used in Gupta period Indian astronomy ( Aryabhatiya , Surya Siddhanta ), via translation from Sanskrit to Arabic and then from Arabic to Latin. [ 33 ] (See Aryabhata's sine table .) All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines , used in solving triangles . [ 34 ] Al-Khwārizmī (c. 780–850) produced tables of sines and cosines. Circa 860, Habash al-Hasib al-Marwazi defined the tangent and the cotangent, and produced their tables. [ 35 ] [ 36 ] Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) defined the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. [ 36 ] The trigonometric functions were later studied by mathematicians including Omar Khayyám , Bhāskara II , Nasir al-Din al-Tusi , Jamshīd al-Kāshī (14th century), Ulugh Beg (14th century), Regiomontanus (1464), Rheticus , and Rheticus' student Valentinus Otho . Madhava of Sangamagrama (c. 1400) made early strides in the analysis of trigonometric functions in terms of infinite series . [ 37 ] (See Madhava series and Madhava's sine table .) The tangent function was brought to Europe by Giovanni Bianchini in 1467 in trigonometry tables he created to support the calculation of stellar coordinates. [ 38 ] The terms tangent and secant were first introduced by the Danish mathematician Thomas Fincke in his book Geometria rotundi (1583).
[ 39 ] The 17th-century French mathematician Albert Girard made the first published use of the abbreviations sin , cos , and tan in his book Trigonométrie . [ 40 ] In a paper published in 1682, Gottfried Leibniz proved that sin x is not an algebraic function of x . [ 41 ] Though defined as ratios of sides of a right triangle , and thus appearing to be rational functions , Leibniz's result established that they are actually transcendental functions of their argument. The task of assimilating circular functions into algebraic expressions was accomplished by Euler in his Introduction to the Analysis of the Infinite (1748). His method was to show that the sine and cosine functions are alternating series formed from the even and odd terms respectively of the exponential series . He presented " Euler's formula ", as well as near-modern abbreviations ( sin. , cos. , tang. , cot. , sec. , and cosec. ). [ 33 ] A few functions were common historically, but are now seldom used, such as the chord , versine (which appeared in the earliest tables [ 33 ] ), haversine , coversine , [ 42 ] half-tangent (tangent of half an angle), and exsecant . List of trigonometric identities shows more relations between these functions. Historically, trigonometric functions were often combined with logarithms in compound functions like the logarithmic sine, logarithmic cosine, logarithmic secant, logarithmic cosecant, logarithmic tangent and logarithmic cotangent. [ 43 ] [ 44 ] [ 45 ] [ 46 ] The word sine derives [ 47 ] from Latin sinus , meaning "bend; bay", and more specifically "the hanging fold of the upper part of a toga ", "the bosom of a garment", which was chosen as the translation of what was interpreted as the Arabic word jaib , meaning "pocket" or "fold" in the twelfth-century translations of works by Al-Battani and al-Khwārizmī into Medieval Latin .
[ 48 ] The choice was based on a misreading of the Arabic written form j-y-b ( جيب ), which itself originated as a transliteration from Sanskrit jīvā , which along with its synonym jyā (the standard Sanskrit term for the sine) translates to "bowstring", being in turn adopted from Ancient Greek χορδή "string". [ 49 ] The word tangent comes from Latin tangens meaning "touching", since the line touches the circle of unit radius, whereas secant stems from Latin secans —"cutting"—since the line cuts the circle. [ 50 ] The prefix " co- " (in "cosine", "cotangent", "cosecant") is found in Edmund Gunter 's Canon triangulorum (1620), which defines the cosinus as an abbreviation of the sinus complementi (sine of the complementary angle ) and proceeds to define the cotangens similarly. [ 51 ] [ 52 ]
In mathematics , the trigonometric functions (also called circular functions , angle functions or goniometric functions ) [ 1 ] are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in all sciences that are related to geometry , such as navigation , solid mechanics , celestial mechanics , geodesy , and many others. They are among the simplest periodic functions , and as such are also widely used for studying periodic phenomena through Fourier analysis . The trigonometric functions most widely used in modern mathematics are the sine , the cosine , and the tangent functions. Their reciprocals are respectively the cosecant , the secant , and the cotangent functions, which are less used. Each of these six trigonometric functions has a corresponding inverse function , and an analog among the hyperbolic functions . The oldest definitions of trigonometric functions, related to right-angle triangles, define them only for acute angles . To extend the sine and cosine functions to functions whose domain is the whole real line , geometrical definitions using the standard unit circle (i.e., a circle with radius 1 unit) are often used; then the domain of the other functions is the real line with some isolated points removed. Modern definitions express trigonometric functions as infinite series or as solutions of differential equations . This allows extending the domain of sine and cosine functions to the whole complex plane , and the domain of the other trigonometric functions to the complex plane with some isolated points removed. Conventionally, an abbreviation of each trigonometric function's name is used as its symbol in formulas. Today, the most common versions of these abbreviations are " sin " for sine, " cos " for cosine, " tan " or " tg " for tangent, " sec " for secant, " csc " or " cosec " for cosecant, and " cot " or " ctg " for cotangent. 
Historically, these abbreviations were first used in prose sentences to indicate particular line segments or their lengths related to an arc of an arbitrary circle, and later to indicate ratios of lengths, but as the function concept developed in the 17th–18th century, they began to be considered as functions of real-number-valued angle measures, and written with functional notation , for example sin( x ) . Parentheses are still often omitted to reduce clutter, but are sometimes necessary; for example the expression sin ⁡ x + y {\displaystyle \sin x+y} would typically be interpreted to mean ( sin ⁡ x ) + y , {\displaystyle (\sin x)+y,} so parentheses are required to express sin ⁡ ( x + y ) . {\displaystyle \sin(x+y).} A positive integer appearing as a superscript after the symbol of the function denotes exponentiation , not function composition . For example sin 2 ⁡ x {\displaystyle \sin ^{2}x} and sin 2 ⁡ ( x ) {\displaystyle \sin ^{2}(x)} denote ( sin ⁡ x ) 2 , {\displaystyle (\sin x)^{2},} not sin ⁡ ( sin ⁡ x ) . {\displaystyle \sin(\sin x).} This differs from the (historically later) general functional notation in which f 2 ( x ) = ( f ∘ f ) ( x ) = f ( f ( x ) ) . {\displaystyle f^{2}(x)=(f\circ f)(x)=f(f(x)).} In contrast, the superscript − 1 {\displaystyle -1} is commonly used to denote the inverse function , not the reciprocal . For example sin − 1 ⁡ x {\displaystyle \sin ^{-1}x} and sin − 1 ⁡ ( x ) {\displaystyle \sin ^{-1}(x)} denote the inverse trigonometric function alternatively written arcsin ⁡ x . {\displaystyle \arcsin x\,.} The equation θ = sin − 1 ⁡ x {\displaystyle \theta =\sin ^{-1}x} implies sin ⁡ θ = x , {\displaystyle \sin \theta =x,} not θ ⋅ sin ⁡ x = 1. {\displaystyle \theta \cdot \sin x=1.} In this case, the superscript could be considered as denoting a composed or iterated function , but negative superscripts other than − 1 {\displaystyle {-1}} are not in common use. 
If the acute angle θ is given, then any right triangles that have an angle of θ are similar to each other. This means that the ratio of any two side lengths depends only on θ . Thus these six ratios define six functions of θ , which are the trigonometric functions. In the following definitions, the hypotenuse is the length of the side opposite the right angle, opposite represents the side opposite the given angle θ , and adjacent represents the side between the angle θ and the right angle. [ 2 ] [ 3 ] Various mnemonics can be used to remember these definitions. In a right-angled triangle, the sum of the two acute angles is a right angle, that is, 90° or ⁠ π / 2 ⁠ radians . Therefore sin ⁡ ( θ ) {\displaystyle \sin(\theta )} and cos ⁡ ( 90 ∘ − θ ) {\displaystyle \cos(90^{\circ }-\theta )} represent the same ratio, and thus are equal. This identity and analogous relationships between the other trigonometric functions are summarized in the following table. In geometric applications, the argument of a trigonometric function is generally the measure of an angle . For this purpose, any angular unit is convenient. One common unit is degrees , in which a right angle is 90° and a complete turn is 360° (particularly in elementary mathematics ). However, in calculus and mathematical analysis , the trigonometric functions are generally regarded more abstractly as functions of real or complex numbers , rather than angles. In fact, the functions sin and cos can be defined for all complex numbers in terms of the exponential function , via power series, [ 5 ] or as solutions to differential equations given particular initial values [ 6 ] ( see below ), without reference to any geometric notions. The other four trigonometric functions ( tan , cot , sec , csc ) can be defined as quotients and reciprocals of sin and cos , except where zero occurs in the denominator. 
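The complementary-angle relationship described above can be spot-checked numerically; in this small Python sketch (sample angles chosen arbitrarily) degrees are converted to radians before calling the library functions:

```python
import math

# sin(theta) = cos(90 deg - theta), and the tangents of complementary
# angles are reciprocals of each other.
for deg in [10, 30, 45, 72]:
    theta = math.radians(deg)
    co = math.radians(90 - deg)
    assert math.isclose(math.sin(theta), math.cos(co))
    assert math.isclose(math.tan(theta) * math.tan(co), 1.0)
```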
It can be proved, for real arguments, that these definitions coincide with elementary geometric definitions if the argument is regarded as an angle in radians. [ 5 ] Moreover, these definitions result in simple expressions for the derivatives and indefinite integrals for the trigonometric functions. [ 7 ] Thus, in settings beyond elementary geometry, radians are regarded as the mathematically natural unit for describing angle measures. When radians (rad) are employed, the angle is given as the length of the arc of the unit circle subtended by it: the angle that subtends an arc of length 1 on the unit circle is 1 rad (≈ 57.3°), [ 8 ] and a complete turn (360°) is an angle of 2 π (≈ 6.28) rad. [ 9 ] For real number x , the notation sin x , cos x , etc. refers to the value of the trigonometric functions evaluated at an angle of x rad. If units of degrees are intended, the degree sign must be explicitly shown ( sin x° , cos x° , etc.). Using this standard notation, the argument x for the trigonometric functions satisfies the relationship x = (180 x / π )°, so that, for example, sin π = sin 180° when we take x = π . In this way, the degree symbol can be regarded as a mathematical constant such that 1° = π /180 ≈ 0.0175. [ 10 ] The six trigonometric functions can be defined as coordinate values of points on the Euclidean plane that are related to the unit circle , which is the circle of radius one centered at the origin O of this coordinate system. While right-angled triangle definitions allow for the definition of the trigonometric functions for angles between 0 and π 2 {\textstyle {\frac {\pi }{2}}} radians (90°), the unit circle definitions allow the domain of trigonometric functions to be extended to all positive and negative real numbers. 
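The degree–radian correspondence described above (the degree symbol as the constant π/180 ≈ 0.0175) can be illustrated in Python; `math.radians` computes exactly this conversion:

```python
import math

# One degree as the constant pi/180, so that sin(x deg) = sin(x * DEG).
DEG = math.pi / 180
assert math.isclose(DEG, 0.0175, abs_tol=5e-4)
assert math.isclose(math.sin(30 * DEG), 0.5)
assert math.isclose(math.radians(30), 30 * DEG)
# sin(pi) = sin(180 deg), as in the text's example x = pi.
assert math.isclose(math.sin(math.pi), math.sin(math.radians(180)), abs_tol=1e-12)
```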
Let L {\displaystyle {\mathcal {L}}} be the ray obtained by rotating by an angle θ the positive half of the x -axis ( counterclockwise rotation for θ > 0 , {\displaystyle \theta >0,} and clockwise rotation for θ < 0 {\displaystyle \theta <0} ). This ray intersects the unit circle at the point A = ( x A , y A ) . {\displaystyle \mathrm {A} =(x_{\mathrm {A} },y_{\mathrm {A} }).} The ray L , {\displaystyle {\mathcal {L}},} extended to a line if necessary, intersects the line of equation x = 1 {\displaystyle x=1} at point B = ( 1 , y B ) , {\displaystyle \mathrm {B} =(1,y_{\mathrm {B} }),} and the line of equation y = 1 {\displaystyle y=1} at point C = ( x C , 1 ) . {\displaystyle \mathrm {C} =(x_{\mathrm {C} },1).} The tangent line to the unit circle at the point A , is perpendicular to L , {\displaystyle {\mathcal {L}},} and intersects the y - and x -axes at points D = ( 0 , y D ) {\displaystyle \mathrm {D} =(0,y_{\mathrm {D} })} and E = ( x E , 0 ) . {\displaystyle \mathrm {E} =(x_{\mathrm {E} },0).} The coordinates of these points give the values of all trigonometric functions for any arbitrary real value of θ in the following manner. The trigonometric functions cos and sin are defined, respectively, as the x - and y -coordinate values of point A . That is, In the range 0 ≤ θ ≤ π / 2 {\displaystyle 0\leq \theta \leq \pi /2} , this definition coincides with the right-angled triangle definition, by taking the right-angled triangle to have the unit radius OA as hypotenuse . And since the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} holds for all points P = ( x , y ) {\displaystyle \mathrm {P} =(x,y)} on the unit circle, this definition of cosine and sine also satisfies the Pythagorean identity . 
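The coordinates produced by this construction can be computed directly. The following Python sketch (variable names chosen to follow the labels A, D, E above; the sample angle is arbitrary) checks the intersection heights and the axis intercepts of the tangent line:

```python
import math

t = 0.6
c, s = math.cos(t), math.sin(t)   # A = (c, s) on the unit circle

# The ray (u*c, u*s) meets the line x = 1 at u = 1/c, i.e. at height
# s/c = tan(t), which is the y-coordinate of B.
assert math.isclose(s / c, math.tan(t))

# The tangent line at A is perpendicular to OA; its x-intercept
# E = (E_x, 0) satisfies (E - A) . A = 0, giving E_x = 1/c = sec(t).
E_x = (c*c + s*s) / c
assert math.isclose(E_x, 1 / math.cos(t))
# Likewise the y-intercept D = (0, D_y) gives D_y = 1/s = csc(t).
D_y = (c*c + s*s) / s
assert math.isclose(D_y, 1 / math.sin(t))
```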
The other trigonometric functions can be found along the unit circle as tan ⁡ θ = y B , cot ⁡ θ = x C , sec ⁡ θ = x E , csc ⁡ θ = y D . {\displaystyle \tan \theta =y_{\mathrm {B} },\quad \cot \theta =x_{\mathrm {C} },\quad \sec \theta =x_{\mathrm {E} },\quad \csc \theta =y_{\mathrm {D} }.} By applying the Pythagorean identity and geometric proof methods, these definitions can readily be shown to coincide with the definitions of tangent, cotangent, secant and cosecant in terms of sine and cosine, that is tan ⁡ θ = sin ⁡ θ cos ⁡ θ , cot ⁡ θ = cos ⁡ θ sin ⁡ θ , sec ⁡ θ = 1 cos ⁡ θ , csc ⁡ θ = 1 sin ⁡ θ . {\displaystyle \tan \theta ={\frac {\sin \theta }{\cos \theta }},\quad \cot \theta ={\frac {\cos \theta }{\sin \theta }},\quad \sec \theta ={\frac {1}{\cos \theta }},\quad \csc \theta ={\frac {1}{\sin \theta }}.} Since a rotation of an angle of ± 2 π {\displaystyle \pm 2\pi } does not change the position or size of a shape, the points A , B , C , D , and E are the same for two angles whose difference is an integer multiple of 2 π {\displaystyle 2\pi } . Thus trigonometric functions are periodic functions with period 2 π {\displaystyle 2\pi } . That is, the equalities sin ⁡ ( θ + 2 k π ) = sin ⁡ θ , cos ⁡ ( θ + 2 k π ) = cos ⁡ θ {\displaystyle \sin(\theta +2k\pi )=\sin \theta ,\quad \cos(\theta +2k\pi )=\cos \theta } hold for any angle θ and any integer k . The same is true for the four other trigonometric functions. By observing the sign and the monotonicity of the functions sine, cosine, cosecant, and secant in the four quadrants, one can show that 2 π {\displaystyle 2\pi } is the smallest value for which they are periodic (i.e., 2 π {\displaystyle 2\pi } is the fundamental period of these functions). However, after a rotation by an angle π {\displaystyle \pi } , the points B and C already return to their original position, so that the tangent function and the cotangent function have a fundamental period of π {\displaystyle \pi } . That is, the equalities tan ⁡ ( θ + k π ) = tan ⁡ θ , cot ⁡ ( θ + k π ) = cot ⁡ θ {\displaystyle \tan(\theta +k\pi )=\tan \theta ,\quad \cot(\theta +k\pi )=\cot \theta } hold for any angle θ and any integer k . The algebraic expressions for the most important angles are as follows: sin ⁡ 0 = 0 2 = 0 , sin ⁡ π 6 = 1 2 , sin ⁡ π 4 = 2 2 , sin ⁡ π 3 = 3 2 , sin ⁡ π 2 = 4 2 = 1. {\displaystyle \sin 0={\frac {\sqrt {0}}{2}}=0,\quad \sin {\frac {\pi }{6}}={\frac {\sqrt {1}}{2}},\quad \sin {\frac {\pi }{4}}={\frac {\sqrt {2}}{2}},\quad \sin {\frac {\pi }{3}}={\frac {\sqrt {3}}{2}},\quad \sin {\frac {\pi }{2}}={\frac {\sqrt {4}}{2}}=1.} Writing the numerators as square roots of consecutive non-negative integers, with a denominator of 2, provides an easy way to remember the values. [ 13 ] Such simple expressions generally do not exist for other angles which are rational multiples of a right angle. The following table lists the sines, cosines, and tangents of multiples of 15 degrees from 0 to 90 degrees. G. H.
Hardy noted in his 1908 work A Course of Pure Mathematics that the definition of the trigonometric functions in terms of the unit circle is not satisfactory, because it depends implicitly on a notion of angle that can be measured by a real number. [ 14 ] Thus in modern analysis, trigonometric functions are usually constructed without reference to geometry. Various ways exist in the literature for defining the trigonometric functions in a manner suitable for analysis; they include the following. Sine and cosine can be defined as the unique solution to the initial value problem: [ 17 ] d d x sin ⁡ x = cos ⁡ x , d d x cos ⁡ x = − sin ⁡ x , sin ⁡ 0 = 0 , cos ⁡ 0 = 1. {\displaystyle {\frac {d}{dx}}\sin x=\cos x,\quad {\frac {d}{dx}}\cos x=-\sin x,\quad \sin 0=0,\quad \cos 0=1.} Differentiating again, d 2 d x 2 sin ⁡ x = d d x cos ⁡ x = − sin ⁡ x {\textstyle {\frac {d^{2}}{dx^{2}}}\sin x={\frac {d}{dx}}\cos x=-\sin x} and d 2 d x 2 cos ⁡ x = − d d x sin ⁡ x = − cos ⁡ x {\textstyle {\frac {d^{2}}{dx^{2}}}\cos x=-{\frac {d}{dx}}\sin x=-\cos x} , so both sine and cosine are solutions of the same ordinary differential equation y ″ + y = 0. {\displaystyle y''+y=0.} Sine is the unique solution with y (0) = 0 and y ′(0) = 1 ; cosine is the unique solution with y (0) = 1 and y ′(0) = 0 . One can then prove, as a theorem, that solutions cos , sin {\displaystyle \cos ,\sin } are periodic, having the same period. Writing this period as 2 π {\displaystyle 2\pi } is then a definition of the real number π {\displaystyle \pi } which is independent of geometry. Applying the quotient rule to the tangent tan ⁡ x = sin ⁡ x / cos ⁡ x {\displaystyle \tan x=\sin x/\cos x} gives d d x tan ⁡ x = 1 + tan 2 ⁡ x , {\displaystyle {\frac {d}{dx}}\tan x=1+\tan ^{2}x,} so the tangent function satisfies the ordinary differential equation y ′ = 1 + y 2 . {\displaystyle y'=1+y^{2}.} It is the unique solution with y (0) = 0 . The basic trigonometric functions can be defined by the following power series expansions. [ 18 ] These series are also known as the Taylor series or Maclaurin series of these trigonometric functions: sin ⁡ x = x − x 3 3 ! + x 5 5 ! − ⋯ , cos ⁡ x = 1 − x 2 2 ! + x 4 4 ! − ⋯ . {\displaystyle \sin x=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-\cdots ,\quad \cos x=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-\cdots .} The radius of convergence of these series is infinite.
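The Maclaurin series converge quickly for moderate arguments; the following Python sketch compares truncated series for sine and cosine with the library functions:

```python
import math

# Partial sums of the Maclaurin series for sine and cosine.
def sin_series(x, n_terms=20):
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n_terms))

def cos_series(x, n_terms=20):
    return sum((-1)**k * x**(2*k) / math.factorial(2*k)
               for k in range(n_terms))

for x in [0.1, 1.0, 3.0, -2.5]:
    assert math.isclose(sin_series(x), math.sin(x), abs_tol=1e-12)
    assert math.isclose(cos_series(x), math.cos(x), abs_tol=1e-12)
```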
Therefore, the sine and the cosine can be extended to entire functions (also called "sine" and "cosine"), which are (by definition) complex-valued functions that are defined and holomorphic on the whole complex plane . Term-by-term differentiation shows that the sine and cosine defined by the series obey the differential equation discussed previously, and conversely one can obtain these series from elementary recursion relations derived from the differential equation. Being defined as fractions of entire functions, the other trigonometric functions may be extended to meromorphic functions , that is, functions that are holomorphic in the whole complex plane, except some isolated points called poles . Here, the poles are the numbers of the form ( 2 k + 1 ) π 2 {\textstyle (2k+1){\frac {\pi }{2}}} for the tangent and the secant, or k π {\displaystyle k\pi } for the cotangent and the cosecant, where k is an arbitrary integer. Recurrence relations may also be computed for the coefficients of the Taylor series of the other trigonometric functions. These series have a finite radius of convergence . Their coefficients have a combinatorial interpretation: they enumerate alternating permutations of finite sets. [ 19 ] More precisely, defining the coefficients as counts of alternating permutations, one has the following series expansions: [ 20 ] The following continued fractions are valid in the whole complex plane: The last one was used in the historically first proof that π is irrational . [ 21 ] There is a series representation as partial fraction expansion where just translated reciprocal functions are summed up, such that the poles of the cotangent function and the reciprocal functions match: [ 22 ] π cot ⁡ π x = ∑ n = − ∞ ∞ 1 x + n , {\displaystyle \pi \cot \pi x=\sum _{n=-\infty }^{\infty }{\frac {1}{x+n}},} the sum being taken symmetrically. This identity can be proved with the Herglotz trick.
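The combinatorial interpretation of the coefficients can be illustrated by brute force. The following Python sketch counts alternating ("up-down") permutations of {1, …, n} and compares the counts with n! times the n-th Maclaurin coefficient of sec x + tan x; the coefficient values are standard but hard-coded here:

```python
import math
from itertools import permutations

# Count permutations p with p[0] < p[1] > p[2] < p[3] > ...
def zigzag_count(n):
    def alternating(p):
        return all((p[i] < p[i+1]) == (i % 2 == 0) for i in range(len(p) - 1))
    return sum(alternating(p) for p in permutations(range(n)))

# sec x + tan x = 1 + x + x^2/2 + x^3/3 + 5x^4/24 + 2x^5/15 + ...
coeffs = [1, 1, 1/2, 1/3, 5/24, 2/15]
for n in range(6):
    assert zigzag_count(n) == round(coeffs[n] * math.factorial(n))
```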
[ 23 ] Combining the (– n ) th with the n th term leads to absolutely convergent series: π cot ⁡ π x = 1 x + ∑ n = 1 ∞ 2 x x 2 − n 2 . {\displaystyle \pi \cot \pi x={\frac {1}{x}}+\sum _{n=1}^{\infty }{\frac {2x}{x^{2}-n^{2}}}.} Similarly, one can find a partial fraction expansion for the secant, cosecant and tangent functions: for example, π csc ⁡ π x = 1 x + ∑ n = 1 ∞ ( − 1 ) n 2 x x 2 − n 2 . {\displaystyle \pi \csc \pi x={\frac {1}{x}}+\sum _{n=1}^{\infty }(-1)^{n}{\frac {2x}{x^{2}-n^{2}}}.} The following infinite product for the sine is due to Leonhard Euler , and is of great importance in complex analysis: [ 24 ] sin ⁡ z = z ∏ n = 1 ∞ ( 1 − z 2 n 2 π 2 ) . {\displaystyle \sin z=z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}\pi ^{2}}}\right).} This may be obtained from the partial fraction decomposition of cot ⁡ z {\displaystyle \cot z} given above, which is the logarithmic derivative of sin ⁡ z {\displaystyle \sin z} . [ 25 ] From this, it can also be deduced that cos ⁡ z = ∏ n = 1 ∞ ( 1 − z 2 ( n − 1 2 ) 2 π 2 ) . {\displaystyle \cos z=\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{(n-{\tfrac {1}{2}})^{2}\pi ^{2}}}\right).} Euler's formula relates sine and cosine to the exponential function : e i x = cos ⁡ x + i sin ⁡ x . {\displaystyle e^{ix}=\cos x+i\sin x.} This formula is commonly considered for real values of x , but it remains true for all complex values. Proof : Let f 1 ( x ) = cos ⁡ x + i sin ⁡ x , {\displaystyle f_{1}(x)=\cos x+i\sin x,} and f 2 ( x ) = e i x . {\displaystyle f_{2}(x)=e^{ix}.} One has d f j ( x ) / d x = i f j ( x ) {\displaystyle df_{j}(x)/dx=if_{j}(x)} for j = 1, 2 . The quotient rule implies thus that d / d x ( f 1 ( x ) / f 2 ( x ) ) = 0 {\displaystyle d/dx\,(f_{1}(x)/f_{2}(x))=0} . Therefore, f 1 ( x ) / f 2 ( x ) {\displaystyle f_{1}(x)/f_{2}(x)} is a constant function, which equals 1 , as f 1 ( 0 ) = f 2 ( 0 ) = 1. {\displaystyle f_{1}(0)=f_{2}(0)=1.} This proves the formula. One has e i x = cos ⁡ x + i sin ⁡ x , e − i x = cos ⁡ x − i sin ⁡ x . {\displaystyle e^{ix}=\cos x+i\sin x,\quad e^{-ix}=\cos x-i\sin x.} Solving this linear system in sine and cosine, one can express them in terms of the exponential function: sin ⁡ x = e i x − e − i x 2 i , cos ⁡ x = e i x + e − i x 2 . {\displaystyle \sin x={\frac {e^{ix}-e^{-ix}}{2i}},\quad \cos x={\frac {e^{ix}+e^{-ix}}{2}}.} When x is real, this may be rewritten as sin ⁡ x = Im ⁡ ( e i x ) , cos ⁡ x = Re ⁡ ( e i x ) . {\displaystyle \sin x=\operatorname {Im} (e^{ix}),\quad \cos x=\operatorname {Re} (e^{ix}).} Most trigonometric identities can be proved by expressing trigonometric functions in terms of the complex exponential function by using the above formulas, and then using the identity e a + b = e a e b {\displaystyle e^{a+b}=e^{a}e^{b}} for simplifying the result. Euler's formula can also be used to define the basic trigonometric functions directly, as follows, using the language of topological groups .
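Euler's formula and the resulting exponential expressions for sine and cosine can be checked with Python's cmath module at a complex sample point (chosen arbitrarily):

```python
import cmath

# e^(iz) = cos z + i sin z, and the exponential expressions it yields.
z = 0.7 - 0.2j
assert cmath.isclose(cmath.exp(1j * z), cmath.cos(z) + 1j * cmath.sin(z))
assert cmath.isclose(cmath.sin(z), (cmath.exp(1j*z) - cmath.exp(-1j*z)) / 2j)
assert cmath.isclose(cmath.cos(z), (cmath.exp(1j*z) + cmath.exp(-1j*z)) / 2)
```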
[ 26 ] The set U {\displaystyle U} of complex numbers of unit modulus is a compact and connected topological group, which has a neighborhood of the identity that is homeomorphic to the real line. Therefore, it is isomorphic as a topological group to the one-dimensional torus group R / Z {\displaystyle \mathbb {R} /\mathbb {Z} } , via an isomorphism e : R / Z → U . {\displaystyle e:\mathbb {R} /\mathbb {Z} \to U.} In pedestrian terms e ( t ) = exp ⁡ ( 2 π i t ) {\displaystyle e(t)=\exp(2\pi it)} , and this isomorphism is unique up to taking complex conjugates. For a nonzero real number a {\displaystyle a} (the base ), the function t ↦ e ( t / a ) {\displaystyle t\mapsto e(t/a)} defines an isomorphism of the group R / a Z → U {\displaystyle \mathbb {R} /a\mathbb {Z} \to U} . The real and imaginary parts of e ( t / a ) {\displaystyle e(t/a)} are the cosine and sine, where a {\displaystyle a} is used as the base for measuring angles. For example, when a = 2 π {\displaystyle a=2\pi } , we get the measure in radians, and the usual trigonometric functions. When a = 360 {\displaystyle a=360} , we get the sine and cosine of angles measured in degrees. Note that a = 2 π {\displaystyle a=2\pi } is the unique value at which the derivative d d t e ( t / a ) {\displaystyle {\frac {d}{dt}}e(t/a)} becomes a unit vector with positive imaginary part at t = 0 {\displaystyle t=0} . This fact can, in turn, be used to define the constant 2 π {\displaystyle 2\pi } . Another way to define the trigonometric functions in analysis is using integration. [ 14 ] [ 27 ] For a real number t {\displaystyle t} , put θ ( t ) = ∫ 0 t d τ 1 + τ 2 = arctan ⁡ t {\displaystyle \theta (t)=\int _{0}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\arctan t} where the integral defines the inverse tangent function. Also, π {\displaystyle \pi } is defined by 1 2 π = ∫ 0 ∞ d τ 1 + τ 2 {\displaystyle {\frac {1}{2}}\pi =\int _{0}^{\infty }{\frac {d\tau }{1+\tau ^{2}}}} a definition that goes back to Karl Weierstrass . 
[ 28 ] On the interval − π / 2 < θ < π / 2 {\displaystyle -\pi /2<\theta <\pi /2} , the trigonometric functions are defined by inverting the relation θ = arctan ⁡ t {\displaystyle \theta =\arctan t} . Thus we define the trigonometric functions by tan ⁡ θ = t , cos ⁡ θ = ( 1 + t 2 ) − 1 / 2 , sin ⁡ θ = t ( 1 + t 2 ) − 1 / 2 {\displaystyle \tan \theta =t,\quad \cos \theta =(1+t^{2})^{-1/2},\quad \sin \theta =t(1+t^{2})^{-1/2}} where the point ( t , θ ) {\displaystyle (t,\theta )} is on the graph of θ = arctan ⁡ t {\displaystyle \theta =\arctan t} and the positive square root is taken. This defines the trigonometric functions on ( − π / 2 , π / 2 ) {\displaystyle (-\pi /2,\pi /2)} . The definition can be extended to all real numbers by first observing that, as θ → π / 2 {\displaystyle \theta \to \pi /2} , t → ∞ {\displaystyle t\to \infty } , and so cos ⁡ θ = ( 1 + t 2 ) − 1 / 2 → 0 {\displaystyle \cos \theta =(1+t^{2})^{-1/2}\to 0} and sin ⁡ θ = t ( 1 + t 2 ) − 1 / 2 → 1 {\displaystyle \sin \theta =t(1+t^{2})^{-1/2}\to 1} . Thus cos ⁡ θ {\displaystyle \cos \theta } and sin ⁡ θ {\displaystyle \sin \theta } are extended continuously so that cos ⁡ ( π / 2 ) = 0 , sin ⁡ ( π / 2 ) = 1 {\displaystyle \cos(\pi /2)=0,\sin(\pi /2)=1} . Now the conditions cos ⁡ ( θ + π ) = − cos ⁡ ( θ ) {\displaystyle \cos(\theta +\pi )=-\cos(\theta )} and sin ⁡ ( θ + π ) = − sin ⁡ ( θ ) {\displaystyle \sin(\theta +\pi )=-\sin(\theta )} define the sine and cosine as periodic functions with period 2 π {\displaystyle 2\pi } , for all real numbers. To prove the basic properties of sine and cosine, including the fact that sine and cosine are analytic, one may first establish the addition formulae. 
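This integral definition can be imitated numerically: approximate θ( t ) = ∫₀ᵗ dτ/(1+τ²) by quadrature and compare sin θ = t (1+ t ²)^(−1/2), cos θ = (1+ t ²)^(−1/2) with the library functions. A sketch; the step count and tolerances are ad hoc choices:

```python
import math

def theta_of_t(t, steps=200000):
    """Midpoint-rule approximation of theta(t) = integral of 1/(1+tau^2)."""
    h = t / steps
    return sum(h / (1.0 + ((k + 0.5) * h) ** 2) for k in range(steps))

t = 1.7
theta = theta_of_t(t)                     # should approximate arctan(1.7)
sin_theta = t / math.sqrt(1.0 + t * t)    # sin(theta) per the definition
cos_theta = 1.0 / math.sqrt(1.0 + t * t)  # cos(theta) per the definition

assert abs(theta - math.atan(t)) < 1e-9
assert abs(sin_theta - math.sin(theta)) < 1e-9
assert abs(cos_theta - math.cos(theta)) < 1e-9
```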
First, arctan ⁡ s + arctan ⁡ t = arctan ⁡ s + t 1 − s t {\displaystyle \arctan s+\arctan t=\arctan {\frac {s+t}{1-st}}} holds, provided arctan ⁡ s + arctan ⁡ t ∈ ( − π / 2 , π / 2 ) {\displaystyle \arctan s+\arctan t\in (-\pi /2,\pi /2)} , since arctan ⁡ s + arctan ⁡ t = ∫ − s t d τ 1 + τ 2 = ∫ 0 s + t 1 − s t d τ 1 + τ 2 {\displaystyle \arctan s+\arctan t=\int _{-s}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\int _{0}^{\frac {s+t}{1-st}}{\frac {d\tau }{1+\tau ^{2}}}} after the substitution τ → s + τ 1 − s τ {\displaystyle \tau \to {\frac {s+\tau }{1-s\tau }}} . In particular, the limiting case as s → ∞ {\displaystyle s\to \infty } gives arctan ⁡ t + π 2 = arctan ⁡ ( − 1 / t ) , t ∈ ( − ∞ , 0 ) . {\displaystyle \arctan t+{\frac {\pi }{2}}=\arctan(-1/t),\quad t\in (-\infty ,0).} Thus, noting that t 1 + ( − 1 / t ) 2 = − 1 + t 2 {\displaystyle t{\sqrt {1+(-1/t)^{2}}}=-{\sqrt {1+t^{2}}}} for t < 0 {\displaystyle t<0} , we have sin ⁡ ( θ + π 2 ) = − 1 t 1 + ( − 1 / t ) 2 = 1 1 + t 2 = cos ⁡ ( θ ) {\displaystyle \sin \left(\theta +{\frac {\pi }{2}}\right)={\frac {-1}{t{\sqrt {1+(-1/t)^{2}}}}}={\frac {1}{\sqrt {1+t^{2}}}}=\cos(\theta )} and cos ⁡ ( θ + π 2 ) = 1 1 + ( − 1 / t ) 2 = − t 1 + t 2 = − sin ⁡ ( θ ) . {\displaystyle \cos \left(\theta +{\frac {\pi }{2}}\right)={\frac {1}{\sqrt {1+(-1/t)^{2}}}}={\frac {-t}{\sqrt {1+t^{2}}}}=-\sin(\theta ).} So the sine and cosine functions are related by translation over a quarter period π / 2 {\displaystyle \pi /2} . One can also define the trigonometric functions using various functional equations . For example, [ 29 ] the sine and the cosine form the unique pair of continuous functions that satisfy the difference formula and the added condition The sine and cosine of a complex number z = x + i y {\displaystyle z=x+iy} can be expressed in terms of real sines, cosines, and hyperbolic functions as follows: sin ⁡ z = sin ⁡ x cosh ⁡ y + i cos ⁡ x sinh ⁡ y , cos ⁡ z = cos ⁡ x cosh ⁡ y − i sin ⁡ x sinh ⁡ y . {\displaystyle \sin z=\sin x\cosh y+i\cos x\sinh y,\quad \cos z=\cos x\cosh y-i\sin x\sinh y.} By taking advantage of domain coloring , it is possible to graph the trigonometric functions as complex-valued functions. 
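The arctangent addition formula used in this derivation is easy to spot-check numerically (a sketch; the sample values are chosen so the sum stays in (−π/2, π/2), i.e. st < 1):

```python
import math

def arctan_sum(s, t):
    """arctan s + arctan t via the addition formula, valid when the
    sum lies in (-pi/2, pi/2), i.e. when s*t < 1."""
    return math.atan((s + t) / (1.0 - s * t))

for s, t in [(0.2, 0.3), (-0.5, 0.9), (0.1, -0.7)]:
    assert s * t < 1
    assert abs(arctan_sum(s, t) - (math.atan(s) + math.atan(t))) < 1e-12
```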
Various features unique to the complex functions can be seen from the graph; for example, the sine and cosine functions can be seen to be unbounded as the imaginary part of z {\displaystyle z} becomes larger (since the color white represents infinity), and the fact that the functions contain simple zeros or poles is apparent from the fact that the hue cycles around each zero or pole exactly once. Comparing these graphs with those of the corresponding hyperbolic functions highlights the relationships between the two. (Domain colorings of sin ⁡ z {\displaystyle \sin z} , cos ⁡ z {\displaystyle \cos z} , tan ⁡ z {\displaystyle \tan z} , cot ⁡ z {\displaystyle \cot z} , sec ⁡ z {\displaystyle \sec z} , and csc ⁡ z {\displaystyle \csc z} .) The sine and cosine functions are periodic , with period 2 π {\displaystyle 2\pi } , which is the smallest positive period: sin ⁡ ( z + 2 π ) = sin ⁡ ( z ) , cos ⁡ ( z + 2 π ) = cos ⁡ ( z ) . {\displaystyle \sin(z+2\pi )=\sin(z),\quad \cos(z+2\pi )=\cos(z).} Consequently, the cosecant and secant also have 2 π {\displaystyle 2\pi } as their period. The functions sine and cosine also have semiperiods π {\displaystyle \pi } , and sin ⁡ ( z + π ) = − sin ⁡ ( z ) , cos ⁡ ( z + π ) = − cos ⁡ ( z ) {\displaystyle \sin(z+\pi )=-\sin(z),\quad \cos(z+\pi )=-\cos(z)} and consequently tan ⁡ ( z + π ) = tan ⁡ ( z ) , cot ⁡ ( z + π ) = cot ⁡ ( z ) . {\displaystyle \tan(z+\pi )=\tan(z),\quad \cot(z+\pi )=\cot(z).} Also, sin ⁡ ( x + π / 2 ) = cos ⁡ ( x ) , cos ⁡ ( x + π / 2 ) = − sin ⁡ ( x ) {\displaystyle \sin(x+\pi /2)=\cos(x),\quad \cos(x+\pi /2)=-\sin(x)} (see Complementary angles ). The function sin ⁡ ( z ) {\displaystyle \sin(z)} has a unique zero (at z = 0 {\displaystyle z=0} ) in the strip − π < ℜ ( z ) < π {\displaystyle -\pi <\Re (z)<\pi } . The function cos ⁡ ( z ) {\displaystyle \cos(z)} has the pair of zeros z = ± π / 2 {\displaystyle z=\pm \pi /2} in the same strip. 
Because of the periodicity, the zeros of sine are π Z = { … , − 2 π , − π , 0 , π , 2 π , … } ⊂ C . {\displaystyle \pi \mathbb {Z} =\left\{\dots ,-2\pi ,-\pi ,0,\pi ,2\pi ,\dots \right\}\subset \mathbb {C} .} The zeros of cosine are π 2 + π Z = { … , − 3 π 2 , − π 2 , π 2 , 3 π 2 , … } ⊂ C . {\displaystyle {\frac {\pi }{2}}+\pi \mathbb {Z} =\left\{\dots ,-{\frac {3\pi }{2}},-{\frac {\pi }{2}},{\frac {\pi }{2}},{\frac {3\pi }{2}},\dots \right\}\subset \mathbb {C} .} All of the zeros are simple zeros, and both functions have derivative ± 1 {\displaystyle \pm 1} at each of the zeros. The tangent function tan ⁡ ( z ) = sin ⁡ ( z ) / cos ⁡ ( z ) {\displaystyle \tan(z)=\sin(z)/\cos(z)} has a simple zero at z = 0 {\displaystyle z=0} and vertical asymptotes at z = ± π / 2 {\displaystyle z=\pm \pi /2} , where it has a simple pole of residue − 1 {\displaystyle -1} . Again, owing to the periodicity, the zeros are all the integer multiples of π {\displaystyle \pi } and the poles are odd multiples of π / 2 {\displaystyle \pi /2} , all having the same residue. The poles correspond to vertical asymptotes lim x → ( π / 2 ) − tan ⁡ ( x ) = + ∞ , lim x → ( π / 2 ) + tan ⁡ ( x ) = − ∞ . {\displaystyle \lim _{x\to (\pi /2)^{-}}\tan(x)=+\infty ,\quad \lim _{x\to (\pi /2)^{+}}\tan(x)=-\infty .} The cotangent function cot ⁡ ( z ) = cos ⁡ ( z ) / sin ⁡ ( z ) {\displaystyle \cot(z)=\cos(z)/\sin(z)} has a simple pole of residue 1 at the integer multiples of π {\displaystyle \pi } and simple zeros at odd multiples of π / 2 {\displaystyle \pi /2} . The poles correspond to vertical asymptotes lim x → 0 − cot ⁡ ( x ) = − ∞ , lim x → 0 + cot ⁡ ( x ) = + ∞ . {\displaystyle \lim _{x\to 0^{-}}\cot(x)=-\infty ,\quad \lim _{x\to 0^{+}}\cot(x)=+\infty .} Many identities interrelate the trigonometric functions. This section contains the most basic ones; for more identities, see List of trigonometric identities . 
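The stated residues can be probed numerically: near a simple pole z₀ with residue r, the product (z − z₀)·f(z) tends to r. A sketch along the real axis (the offset 1e-6 is an arbitrary choice):

```python
import math

eps = 1e-6

# tan has a simple pole of residue -1 at pi/2:
val = eps * math.tan(math.pi / 2 + eps)
assert abs(val - (-1.0)) < 1e-5

# cot has a simple pole of residue 1 at every integer multiple of pi:
for k in range(-2, 3):
    z0 = k * math.pi
    val = eps * (math.cos(z0 + eps) / math.sin(z0 + eps))
    assert abs(val - 1.0) < 1e-5
```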
These identities may be proved geometrically from the unit-circle definitions or the right-angled-triangle definitions (although, for the latter definitions, care must be taken for angles that are not in the interval [0, π /2] , see Proofs of trigonometric identities ). For non-geometrical proofs using only tools of calculus , one may use directly the differential equations, in a way that is similar to that of the above proof of Euler's identity. One can also use Euler's identity for expressing all trigonometric functions in terms of complex exponentials and using properties of the exponential function. The cosine and the secant are even functions ; the other trigonometric functions are odd functions . That is: All trigonometric functions are periodic functions of period 2 π . This is the smallest period, except for the tangent and the cotangent, which have π as smallest period. This means that, for every integer k , one has See Periodicity and asymptotes . The Pythagorean identity is the expression of the Pythagorean theorem in terms of trigonometric functions. It is sin 2 ⁡ x + cos 2 ⁡ x = 1. {\displaystyle \sin ^{2}x+\cos ^{2}x=1.} Dividing through by either cos 2 ⁡ x {\displaystyle \cos ^{2}x} or sin 2 ⁡ x {\displaystyle \sin ^{2}x} gives tan 2 ⁡ x + 1 = sec 2 ⁡ x {\displaystyle \tan ^{2}x+1=\sec ^{2}x} and 1 + cot 2 ⁡ x = csc 2 ⁡ x . {\displaystyle 1+\cot ^{2}x=\csc ^{2}x.} The sum and difference formulas allow expanding the sine, the cosine, and the tangent of a sum or a difference of two angles in terms of sines and cosines and tangents of the angles themselves. These can be derived geometrically, using arguments that date to Ptolemy (see Angle sum and difference identities ). One can also produce them algebraically using Euler's formula . When the two angles are equal, the sum formulas reduce to simpler equations known as the double-angle formulae . These identities can be used to derive the product-to-sum identities . 
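The Pythagorean identity and its two divided forms can be spot-checked over a grid of angles (a quick sketch; the grid and tolerances are arbitrary):

```python
import math

for k in range(1, 60):
    x = 0.05 * k
    # Pythagorean identity: sin^2 x + cos^2 x = 1
    assert abs(math.sin(x) ** 2 + math.cos(x) ** 2 - 1.0) < 1e-12
    if abs(math.cos(x)) > 1e-6:
        # divided by cos^2 x: tan^2 x + 1 = sec^2 x
        assert abs(math.tan(x) ** 2 + 1.0 - 1.0 / math.cos(x) ** 2) < 1e-6
    if abs(math.sin(x)) > 1e-6:
        # divided by sin^2 x: 1 + cot^2 x = csc^2 x
        assert abs(1.0 + 1.0 / math.tan(x) ** 2 - 1.0 / math.sin(x) ** 2) < 1e-6
```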
By setting t = tan ⁡ 1 2 θ , {\displaystyle t=\tan {\tfrac {1}{2}}\theta ,} all trigonometric functions of θ {\displaystyle \theta } can be expressed as rational fractions of t {\displaystyle t} : sin ⁡ θ = 2 t 1 + t 2 , cos ⁡ θ = 1 − t 2 1 + t 2 , tan ⁡ θ = 2 t 1 − t 2 . {\displaystyle \sin \theta ={\frac {2t}{1+t^{2}}},\quad \cos \theta ={\frac {1-t^{2}}{1+t^{2}}},\quad \tan \theta ={\frac {2t}{1-t^{2}}}.} Together with d θ = 2 1 + t 2 d t , {\displaystyle d\theta ={\frac {2}{1+t^{2}}}\,dt,} this is the tangent half-angle substitution , which reduces the computation of integrals and antiderivatives of trigonometric functions to that of rational fractions. The derivatives of trigonometric functions result from those of sine and cosine by applying the quotient rule . The values given for the antiderivatives in the following table can be verified by differentiating them. The number C is a constant of integration . Note: For 0 < x < π {\displaystyle 0<x<\pi } the integral of csc ⁡ x {\displaystyle \csc x} can also be written as − arsinh ⁡ ( cot ⁡ x ) , {\displaystyle -\operatorname {arsinh} (\cot x),} and for the integral of sec ⁡ x {\displaystyle \sec x} for − π / 2 < x < π / 2 {\displaystyle -\pi /2<x<\pi /2} as arsinh ⁡ ( tan ⁡ x ) , {\displaystyle \operatorname {arsinh} (\tan x),} where arsinh {\displaystyle \operatorname {arsinh} } is the inverse hyperbolic sine . Alternatively, the derivatives of the 'co-functions' can be obtained using trigonometric identities and the chain rule: The trigonometric functions are periodic, and hence not injective , so strictly speaking, they do not have an inverse function . However, on each interval on which a trigonometric function is monotonic , one can define an inverse function, and this defines inverse trigonometric functions as multivalued functions . To define a true inverse function, one must restrict the domain to an interval where the function is monotonic, and is thus bijective from this interval to its image by the function. The common choice for this interval, called the set of principal values , is given in the following table. As usual, the inverse trigonometric functions are denoted with the prefix "arc" before the name or its abbreviation of the function. The notations sin −1 , cos −1 , etc. 
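The half-angle parametrization, and a sample use of the substitution, can be demonstrated numerically. In the sketch below (test interval and step count are ad hoc), the substitution turns ∫ dθ/(1 + cos θ) into ∫ dt, so the antiderivative is tan(θ/2):

```python
import math

theta = 1.1
t = math.tan(theta / 2)
# the rational parametrization by t = tan(theta/2):
assert abs(math.sin(theta) - 2 * t / (1 + t * t)) < 1e-12
assert abs(math.cos(theta) - (1 - t * t) / (1 + t * t)) < 1e-12

# Substitution example: since d(theta) = 2 dt/(1+t^2) and
# 1 + cos(theta) = 2/(1+t^2), the integrand 1/(1+cos(theta)) d(theta)
# becomes simply dt, so the antiderivative is tan(theta/2).
a, b = 0.2, 1.3
exact = math.tan(b / 2) - math.tan(a / 2)
n = 200000                    # crude midpoint-rule check of the integral
h = (b - a) / n
riemann = sum(h / (1 + math.cos(a + (i + 0.5) * h)) for i in range(n))
assert abs(riemann - exact) < 1e-8
```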
are often used for arcsin and arccos , etc. When this notation is used, inverse functions could be confused with multiplicative inverses. The notation with the "arc" prefix avoids such a confusion, though "arcsec" for arcsecant can be confused with " arcsecond ". Just like the sine and cosine, the inverse trigonometric functions can also be expressed in terms of infinite series. They can also be expressed in terms of complex logarithms . In this section A , B , C denote the three (interior) angles of a triangle, and a , b , c denote the lengths of the respective opposite edges. They are related by various formulas, which are named by the trigonometric functions they involve. The law of sines states that for an arbitrary triangle with sides a , b , and c and angles opposite those sides A , B and C : sin ⁡ A a = sin ⁡ B b = sin ⁡ C c = 2 Δ a b c , {\displaystyle {\frac {\sin A}{a}}={\frac {\sin B}{b}}={\frac {\sin C}{c}}={\frac {2\Delta }{abc}},} where Δ is the area of the triangle, or, equivalently, a sin ⁡ A = b sin ⁡ B = c sin ⁡ C = 2 R , {\displaystyle {\frac {a}{\sin A}}={\frac {b}{\sin B}}={\frac {c}{\sin C}}=2R,} where R is the triangle's circumradius . It can be proved by dividing the triangle into two right ones and using the above definition of sine. The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. This is a common situation occurring in triangulation , a technique to determine unknown distances by measuring two angles and an accessible enclosed distance. The law of cosines (also known as the cosine formula or cosine rule) is an extension of the Pythagorean theorem : c 2 = a 2 + b 2 − 2 a b cos ⁡ C , {\displaystyle c^{2}=a^{2}+b^{2}-2ab\cos C,} or equivalently, cos ⁡ C = a 2 + b 2 − c 2 2 a b . {\displaystyle \cos C={\frac {a^{2}+b^{2}-c^{2}}{2ab}}.} In this formula the angle at C is opposite to the side c . 
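The triangulation use-case reads directly as code: given two angles and the side between determinations of them, the law of sines yields the remaining sides. A sketch (the function name is an illustrative choice; angle and side names follow the article's convention):

```python
import math

def solve_triangle_asa(A, B, a):
    """Given angles A, B (radians) and side a opposite A, return
    (C, b, c) using C = pi - A - B and the law of sines a/sin A = 2R."""
    C = math.pi - A - B
    k = a / math.sin(A)          # the common ratio a/sin A
    return C, k * math.sin(B), k * math.sin(C)

# 30-60-90 triangle with the side opposite 30 degrees equal to 1:
C, b, c = solve_triangle_asa(math.radians(30), math.radians(60), 1.0)
assert abs(C - math.radians(90)) < 1e-12
assert abs(b - math.sqrt(3)) < 1e-12   # side opposite 60 degrees
assert abs(c - 2.0) < 1e-12            # hypotenuse
```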
This theorem can be proved by dividing the triangle into two right ones and using the Pythagorean theorem . The law of cosines can be used to determine a side of a triangle if two sides and the angle between them are known. It can also be used to find the cosines of an angle (and consequently the angles themselves) if the lengths of all the sides are known. The law of tangents says that: a − b a + b = tan ⁡ 1 2 ( A − B ) tan ⁡ 1 2 ( A + B ) . {\displaystyle {\frac {a-b}{a+b}}={\frac {\tan {\tfrac {1}{2}}(A-B)}{\tan {\tfrac {1}{2}}(A+B)}}.} If s is the triangle's semiperimeter, ( a + b + c )/2, and r is the radius of the triangle's incircle , then rs is the triangle's area. Therefore, Heron's formula implies that: r s = s ( s − a ) ( s − b ) ( s − c ) . {\displaystyle rs={\sqrt {s(s-a)(s-b)(s-c)}}.} The law of cotangents says that: [ 30 ] cot ⁡ A 2 = s − a r . {\displaystyle \cot {\frac {A}{2}}={\frac {s-a}{r}}.} It follows that cot ⁡ A 2 + cot ⁡ B 2 + cot ⁡ C 2 = s r . {\displaystyle \cot {\frac {A}{2}}+\cot {\frac {B}{2}}+\cot {\frac {C}{2}}={\frac {s}{r}}.} The trigonometric functions are also important in physics. The sine and the cosine functions, for example, are used to describe simple harmonic motion , which models many natural phenomena, such as the movement of a mass attached to a spring and, for small angles, the pendular motion of a mass hanging by a string. The sine and cosine functions are one-dimensional projections of uniform circular motion . Trigonometric functions also prove to be useful in the study of general periodic functions . The characteristic wave patterns of periodic functions are useful for modeling recurring phenomena such as sound or light waves . [ 31 ] Under rather general conditions, a periodic function f ( x ) can be expressed as a sum of sine waves or cosine waves in a Fourier series . [ 32 ] Denoting the sine or cosine basis functions by φ k , the expansion of the periodic function f ( t ) takes the form: f ( t ) = ∑ k = 1 ∞ c k φ k ( t ) . {\displaystyle f(t)=\sum _{k=1}^{\infty }c_{k}\varphi _{k}(t).} For example, the square wave can be written as the Fourier series f square ( t ) = 4 π ∑ k = 1 ∞ sin ⁡ ( ( 2 k − 1 ) t ) 2 k − 1 . {\displaystyle f_{\text{square}}(t)={\frac {4}{\pi }}\sum _{k=1}^{\infty }{\sin {\big (}(2k-1)t{\big )} \over 2k-1}.} In an animation of the partial sums of a square wave, it can be seen that just a few terms already produce a fairly good approximation. 
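The square-wave Fourier series is easy to experiment with. At t = π/2 every term of the series is a term of the Leibniz series, and the partial sums converge to the square wave's value 1; the alternating-series error bound makes this testable (a sketch, the function name is ad hoc):

```python
import math

def square_wave_partial(t, n_terms):
    """Partial sum of f_square(t) = (4/pi) * sum sin((2k-1)t)/(2k-1)."""
    return (4 / math.pi) * sum(
        math.sin((2 * k - 1) * t) / (2 * k - 1) for k in range(1, n_terms + 1)
    )

t = math.pi / 2   # a point where the square wave equals +1
for n in (10, 100, 1000):
    # at t = pi/2 the series alternates, so the truncation error is
    # at most the first omitted term, (4/pi)/(2n+1) < 1/n:
    assert abs(square_wave_partial(t, n) - 1.0) < 1.0 / n
```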
The superposition of several terms in the expansion of a sawtooth wave is shown underneath. While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was defined by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). The functions of sine and versine (1 – cosine) are closely related to the jyā and koti-jyā functions used in Gupta period Indian astronomy ( Aryabhatiya , Surya Siddhanta ), via translation from Sanskrit to Arabic and then from Arabic to Latin. [ 33 ] (See Aryabhata's sine table .) All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines , used in solving triangles . [ 34 ] Al-Khwārizmī (c. 780–850) produced tables of sines and cosines. Circa 860, Habash al-Hasib al-Marwazi defined the tangent and the cotangent, and produced their tables. [ 35 ] [ 36 ] Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) defined the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. [ 36 ] The trigonometric functions were later studied by mathematicians including Omar Khayyám , Bhāskara II , Nasir al-Din al-Tusi , Jamshīd al-Kāshī (14th century), Ulugh Beg (14th century), Regiomontanus (1464), Rheticus , and Rheticus' student Valentinus Otho . Madhava of Sangamagrama (c. 1400) made early strides in the analysis of trigonometric functions in terms of infinite series . [ 37 ] (See Madhava series and Madhava's sine table .) The tangent function was brought to Europe by Giovanni Bianchini in 1467 in trigonometry tables he created to support the calculation of stellar coordinates. [ 38 ] The terms tangent and secant were first introduced by the Danish mathematician Thomas Fincke in his book Geometria rotundi (1583). 
[ 39 ] The 17th-century French mathematician Albert Girard made the first published use of the abbreviations sin , cos , and tan in his book Trigonométrie . [ 40 ] In a paper published in 1682, Gottfried Leibniz proved that sin x is not an algebraic function of x . [ 41 ] Though defined as ratios of sides of a right triangle , and thus appearing to be rational functions , Leibniz's result established that they are actually transcendental functions of their argument. The task of assimilating circular functions into algebraic expressions was accomplished by Euler in his Introduction to the Analysis of the Infinite (1748). His method was to show that the sine and cosine functions are alternating series formed from the even and odd terms respectively of the exponential series . He presented " Euler's formula ", as well as near-modern abbreviations ( sin. , cos. , tang. , cot. , sec. , and cosec. ). [ 33 ] A few functions were common historically, but are now seldom used, such as the chord , versine (which appeared in the earliest tables [ 33 ] ), haversine , coversine , [ 42 ] half-tangent (tangent of half an angle), and exsecant . List of trigonometric identities shows more relations between these functions. Historically, trigonometric functions were often combined with logarithms in compound functions like the logarithmic sine, logarithmic cosine, logarithmic secant, logarithmic cosecant, logarithmic tangent and logarithmic cotangent. [ 43 ] [ 44 ] [ 45 ] [ 46 ] The word sine derives [ 47 ] from Latin sinus , meaning "bend; bay", and more specifically "the hanging fold of the upper part of a toga ", "the bosom of a garment", which was chosen as the translation of what was interpreted as the Arabic word jaib , meaning "pocket" or "fold" in the twelfth-century translations of works by Al-Battani and al-Khwārizmī into Medieval Latin . 
[ 48 ] The choice was based on a misreading of the Arabic written form j-y-b ( جيب ), which itself originated as a transliteration from Sanskrit jīvā , which along with its synonym jyā (the standard Sanskrit term for the sine) translates to "bowstring", being in turn adopted from Ancient Greek χορδή "string". [ 49 ] The word tangent comes from Latin tangens meaning "touching", since the line touches the circle of unit radius, whereas secant stems from Latin secans —"cutting"—since the line cuts the circle. [ 50 ] The prefix " co- " (in "cosine", "cotangent", "cosecant") is found in Edmund Gunter 's Canon triangulorum (1620), which defines the cosinus as an abbreviation of the sinus complementi (sine of the complementary angle ) and proceeds to define the cotangens similarly. [ 51 ] [ 52 ]
In mathematics , the trigonometric functions (also called circular functions , angle functions or goniometric functions ) [ 1 ] are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in all sciences that are related to geometry , such as navigation , solid mechanics , celestial mechanics , geodesy , and many others. They are among the simplest periodic functions , and as such are also widely used for studying periodic phenomena through Fourier analysis . The trigonometric functions most widely used in modern mathematics are the sine , the cosine , and the tangent functions. Their reciprocals are respectively the cosecant , the secant , and the cotangent functions, which are less used. Each of these six trigonometric functions has a corresponding inverse function , and an analog among the hyperbolic functions . The oldest definitions of trigonometric functions, related to right-angle triangles, define them only for acute angles . To extend the sine and cosine functions to functions whose domain is the whole real line , geometrical definitions using the standard unit circle (i.e., a circle with radius 1 unit) are often used; then the domain of the other functions is the real line with some isolated points removed. Modern definitions express trigonometric functions as infinite series or as solutions of differential equations . This allows extending the domain of sine and cosine functions to the whole complex plane , and the domain of the other trigonometric functions to the complex plane with some isolated points removed. Conventionally, an abbreviation of each trigonometric function's name is used as its symbol in formulas. Today, the most common versions of these abbreviations are " sin " for sine, " cos " for cosine, " tan " or " tg " for tangent, " sec " for secant, " csc " or " cosec " for cosecant, and " cot " or " ctg " for cotangent. 
Historically, these abbreviations were first used in prose sentences to indicate particular line segments or their lengths related to an arc of an arbitrary circle, and later to indicate ratios of lengths, but as the function concept developed in the 17th–18th century, they began to be considered as functions of real-number-valued angle measures, and written with functional notation , for example sin( x ) . Parentheses are still often omitted to reduce clutter, but are sometimes necessary; for example the expression sin ⁡ x + y {\displaystyle \sin x+y} would typically be interpreted to mean ( sin ⁡ x ) + y , {\displaystyle (\sin x)+y,} so parentheses are required to express sin ⁡ ( x + y ) . {\displaystyle \sin(x+y).} A positive integer appearing as a superscript after the symbol of the function denotes exponentiation , not function composition . For example sin 2 ⁡ x {\displaystyle \sin ^{2}x} and sin 2 ⁡ ( x ) {\displaystyle \sin ^{2}(x)} denote ( sin ⁡ x ) 2 , {\displaystyle (\sin x)^{2},} not sin ⁡ ( sin ⁡ x ) . {\displaystyle \sin(\sin x).} This differs from the (historically later) general functional notation in which f 2 ( x ) = ( f ∘ f ) ( x ) = f ( f ( x ) ) . {\displaystyle f^{2}(x)=(f\circ f)(x)=f(f(x)).} In contrast, the superscript − 1 {\displaystyle -1} is commonly used to denote the inverse function , not the reciprocal . For example sin − 1 ⁡ x {\displaystyle \sin ^{-1}x} and sin − 1 ⁡ ( x ) {\displaystyle \sin ^{-1}(x)} denote the inverse trigonometric function alternatively written arcsin ⁡ x . {\displaystyle \arcsin x\,.} The equation θ = sin − 1 ⁡ x {\displaystyle \theta =\sin ^{-1}x} implies sin ⁡ θ = x , {\displaystyle \sin \theta =x,} not θ ⋅ sin ⁡ x = 1. {\displaystyle \theta \cdot \sin x=1.} In this case, the superscript could be considered as denoting a composed or iterated function , but negative superscripts other than − 1 {\displaystyle {-1}} are not in common use. 
If the acute angle θ is given, then any right triangles that have an angle of θ are similar to each other. This means that the ratio of any two side lengths depends only on θ . Thus these six ratios define six functions of θ , which are the trigonometric functions. In the following definitions, the hypotenuse is the length of the side opposite the right angle, opposite represents the side opposite the given angle θ , and adjacent represents the side between the angle θ and the right angle. [ 2 ] [ 3 ] Various mnemonics can be used to remember these definitions. In a right-angled triangle, the sum of the two acute angles is a right angle, that is, 90° or ⁠ π / 2 ⁠ radians . Therefore sin ⁡ ( θ ) {\displaystyle \sin(\theta )} and cos ⁡ ( 90 ∘ − θ ) {\displaystyle \cos(90^{\circ }-\theta )} represent the same ratio, and thus are equal. This identity and analogous relationships between the other trigonometric functions are summarized in the following table. In geometric applications, the argument of a trigonometric function is generally the measure of an angle . For this purpose, any angular unit is convenient. One common unit is degrees , in which a right angle is 90° and a complete turn is 360° (particularly in elementary mathematics ). However, in calculus and mathematical analysis , the trigonometric functions are generally regarded more abstractly as functions of real or complex numbers , rather than angles. In fact, the functions sin and cos can be defined for all complex numbers in terms of the exponential function , via power series, [ 5 ] or as solutions to differential equations given particular initial values [ 6 ] ( see below ), without reference to any geometric notions. The other four trigonometric functions ( tan , cot , sec , csc ) can be defined as quotients and reciprocals of sin and cos , except where zero occurs in the denominator. 
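For a concrete 3-4-5 right triangle, these ratio definitions and the complementary-angle identity look like this (a quick numerical sketch):

```python
import math

opposite, adjacent, hypotenuse = 3.0, 4.0, 5.0
theta = math.atan2(opposite, adjacent)  # the acute angle opposite side 3

assert abs(math.sin(theta) - opposite / hypotenuse) < 1e-12   # 3/5
assert abs(math.cos(theta) - adjacent / hypotenuse) < 1e-12   # 4/5
assert abs(math.tan(theta) - opposite / adjacent) < 1e-12     # 3/4

# Complementary-angle identity: sin(theta) = cos(pi/2 - theta)
complement = math.pi / 2 - theta
assert abs(math.sin(theta) - math.cos(complement)) < 1e-12
```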
It can be proved, for real arguments, that these definitions coincide with elementary geometric definitions if the argument is regarded as an angle in radians. [ 5 ] Moreover, these definitions result in simple expressions for the derivatives and indefinite integrals for the trigonometric functions. [ 7 ] Thus, in settings beyond elementary geometry, radians are regarded as the mathematically natural unit for describing angle measures. When radians (rad) are employed, the angle is given as the length of the arc of the unit circle subtended by it: the angle that subtends an arc of length 1 on the unit circle is 1 rad (≈ 57.3°), [ 8 ] and a complete turn (360°) is an angle of 2 π (≈ 6.28) rad. [ 9 ] For real number x , the notation sin x , cos x , etc. refers to the value of the trigonometric functions evaluated at an angle of x rad. If units of degrees are intended, the degree sign must be explicitly shown ( sin x° , cos x° , etc.). Using this standard notation, the argument x for the trigonometric functions satisfies the relationship x = (180 x / π )°, so that, for example, sin π = sin 180° when we take x = π . In this way, the degree symbol can be regarded as a mathematical constant such that 1° = π /180 ≈ 0.0175. [ 10 ] The six trigonometric functions can be defined as coordinate values of points on the Euclidean plane that are related to the unit circle , which is the circle of radius one centered at the origin O of this coordinate system. While right-angled triangle definitions allow for the definition of the trigonometric functions for angles between 0 and π 2 {\textstyle {\frac {\pi }{2}}} radians (90°), the unit circle definitions allow the domain of trigonometric functions to be extended to all positive and negative real numbers. 
Let L {\displaystyle {\mathcal {L}}} be the ray obtained by rotating by an angle θ the positive half of the x -axis ( counterclockwise rotation for θ > 0 , {\displaystyle \theta >0,} and clockwise rotation for θ < 0 {\displaystyle \theta <0} ). This ray intersects the unit circle at the point A = ( x A , y A ) . {\displaystyle \mathrm {A} =(x_{\mathrm {A} },y_{\mathrm {A} }).} The ray L , {\displaystyle {\mathcal {L}},} extended to a line if necessary, intersects the line of equation x = 1 {\displaystyle x=1} at point B = ( 1 , y B ) , {\displaystyle \mathrm {B} =(1,y_{\mathrm {B} }),} and the line of equation y = 1 {\displaystyle y=1} at point C = ( x C , 1 ) . {\displaystyle \mathrm {C} =(x_{\mathrm {C} },1).} The tangent line to the unit circle at the point A , is perpendicular to L , {\displaystyle {\mathcal {L}},} and intersects the y - and x -axes at points D = ( 0 , y D ) {\displaystyle \mathrm {D} =(0,y_{\mathrm {D} })} and E = ( x E , 0 ) . {\displaystyle \mathrm {E} =(x_{\mathrm {E} },0).} The coordinates of these points give the values of all trigonometric functions for any arbitrary real value of θ in the following manner. The trigonometric functions cos and sin are defined, respectively, as the x - and y -coordinate values of point A . That is, In the range 0 ≤ θ ≤ π / 2 {\displaystyle 0\leq \theta \leq \pi /2} , this definition coincides with the right-angled triangle definition, by taking the right-angled triangle to have the unit radius OA as hypotenuse . And since the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} holds for all points P = ( x , y ) {\displaystyle \mathrm {P} =(x,y)} on the unit circle, this definition of cosine and sine also satisfies the Pythagorean identity . 
The other trigonometric functions can be found along the unit circle as By applying the Pythagorean identity and geometric proof methods, these definitions can readily be shown to coincide with the definitions of tangent, cotangent, secant and cosecant in terms of sine and cosine, that is Since a rotation of an angle of ± 2 π {\displaystyle \pm 2\pi } does not change the position or size of a shape, the points A , B , C , D , and E are the same for two angles whose difference is an integer multiple of 2 π {\displaystyle 2\pi } . Thus trigonometric functions are periodic functions with period 2 π {\displaystyle 2\pi } . That is, the equalities hold for any angle θ and any integer k . The same is true for the four other trigonometric functions. By observing the sign and the monotonicity of the functions sine, cosine, cosecant, and secant in the four quadrants, one can show that 2 π {\displaystyle 2\pi } is the smallest value for which they are periodic (i.e., 2 π {\displaystyle 2\pi } is the fundamental period of these functions). However, after a rotation by an angle π {\displaystyle \pi } , the points B and C already return to their original position, so that the tangent function and the cotangent function have a fundamental period of π {\displaystyle \pi } . That is, the equalities hold for any angle θ and any integer k . The algebraic expressions for the most important angles are as follows: sin ⁡ 0 = 0 2 = 0 , sin ⁡ π 6 = 1 2 = 1 2 , sin ⁡ π 4 = 2 2 , sin ⁡ π 3 = 3 2 , sin ⁡ π 2 = 4 2 = 1. {\displaystyle \sin 0={\frac {\sqrt {0}}{2}}=0,\quad \sin {\frac {\pi }{6}}={\frac {\sqrt {1}}{2}}={\frac {1}{2}},\quad \sin {\frac {\pi }{4}}={\frac {\sqrt {2}}{2}},\quad \sin {\frac {\pi }{3}}={\frac {\sqrt {3}}{2}},\quad \sin {\frac {\pi }{2}}={\frac {\sqrt {4}}{2}}=1.} Writing the numerators as square roots of consecutive non-negative integers, with a denominator of 2, provides an easy way to remember the values. [ 13 ] Such simple expressions generally do not exist for other angles which are rational multiples of a right angle. The following table lists the sines, cosines, and tangents of multiples of 15 degrees from 0 to 90 degrees. G. H. 
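The √n/2 mnemonic for the special angles checks out numerically (a quick sketch; note that cosine runs through the same values in reverse order):

```python
import math

for n, degrees in enumerate([0, 30, 45, 60, 90]):
    # sine of 0, 30, 45, 60, 90 degrees is sqrt(0)/2, sqrt(1)/2, ..., sqrt(4)/2
    assert abs(math.sin(math.radians(degrees)) - math.sqrt(n) / 2) < 1e-12
    # cosine of the same angles runs through sqrt(4)/2 down to sqrt(0)/2
    assert abs(math.cos(math.radians(degrees)) - math.sqrt(4 - n) / 2) < 1e-12
```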
Hardy noted in his 1908 work A Course of Pure Mathematics that the definition of the trigonometric functions in terms of the unit circle is not satisfactory, because it depends implicitly on a notion of angle that can be measured by a real number. [ 14 ] Thus in modern analysis, trigonometric functions are usually constructed without reference to geometry. Various ways exist in the literature for defining the trigonometric functions in a manner suitable for analysis; they include: Sine and cosine can be defined as the unique solution to the initial value problem : [ 17 ] Differentiating again, d 2 d x 2 sin ⁡ x = d d x cos ⁡ x = − sin ⁡ x {\textstyle {\frac {d^{2}}{dx^{2}}}\sin x={\frac {d}{dx}}\cos x=-\sin x} and d 2 d x 2 cos ⁡ x = − d d x sin ⁡ x = − cos ⁡ x {\textstyle {\frac {d^{2}}{dx^{2}}}\cos x=-{\frac {d}{dx}}\sin x=-\cos x} , so both sine and cosine are solutions of the same ordinary differential equation Sine is the unique solution with y (0) = 0 and y ′(0) = 1 ; cosine is the unique solution with y (0) = 1 and y ′(0) = 0 . One can then prove, as a theorem, that solutions cos , sin {\displaystyle \cos ,\sin } are periodic, having the same period. Writing this period as 2 π {\displaystyle 2\pi } is then a definition of the real number π {\displaystyle \pi } which is independent of geometry. Applying the quotient rule to the tangent tan ⁡ x = sin ⁡ x / cos ⁡ x {\displaystyle \tan x=\sin x/\cos x} , so the tangent function satisfies the ordinary differential equation It is the unique solution with y (0) = 0 . The basic trigonometric functions can be defined by the following power series expansions. [ 18 ] These series are also known as the Taylor series or Maclaurin series of these trigonometric functions: The radius of convergence of these series is infinite. 
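The Maclaurin series just described are easy to evaluate numerically; the following sketch (the truncation length `terms` is an arbitrary choice, not from the source) compares partial sums against the library sine and cosine:

```python
import math

def sin_series(x, terms=20):
    """Partial sum of the Maclaurin series sin x = sum (-1)^n x^(2n+1)/(2n+1)!."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

def cos_series(x, terms=20):
    """Partial sum of the Maclaurin series cos x = sum (-1)^n x^(2n)/(2n)!."""
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

x = 1.2
assert abs(sin_series(x) - math.sin(x)) < 1e-12
assert abs(cos_series(x) - math.cos(x)) < 1e-12
```

Because the radius of convergence is infinite, the same partial sums converge for any fixed argument, though more terms are needed as |x| grows.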
Therefore, the sine and the cosine can be extended to entire functions (also called "sine" and "cosine"), which are (by definition) complex-valued functions that are defined and holomorphic on the whole complex plane . Term-by-term differentiation shows that the sine and cosine defined by the series obey the differential equation discussed previously, and conversely one can obtain these series from elementary recursion relations derived from the differential equation. Being defined as fractions of entire functions, the other trigonometric functions may be extended to meromorphic functions , that is, functions that are holomorphic in the whole complex plane except at some isolated points called poles . Here, the poles are the numbers of the form ( 2 k + 1 ) π 2 {\textstyle (2k+1){\frac {\pi }{2}}} for the tangent and the secant, or k π {\displaystyle k\pi } for the cotangent and the cosecant, where k is an arbitrary integer. Recurrence relations may also be computed for the coefficients of the Taylor series of the other trigonometric functions. These series have a finite radius of convergence . Their coefficients have a combinatorial interpretation: they enumerate alternating permutations of finite sets. [ 19 ] More precisely, defining one has the following series expansions: [ 20 ] The following continued fractions are valid in the whole complex plane: The last one was used in the historically first proof that π is irrational . [ 21 ] There is a series representation as partial fraction expansion where just translated reciprocal functions are summed up, such that the poles of the cotangent function and the reciprocal functions match: [ 22 ] This identity can be proved with the Herglotz trick. 
[ 23 ] Combining the (– n ) th with the n th term leads to absolutely convergent series: Similarly, one can find a partial fraction expansion for the secant, cosecant and tangent functions: The following infinite product for the sine is due to Leonhard Euler , and is of great importance in complex analysis: [ 24 ] This may be obtained from the partial fraction decomposition of cot ⁡ z {\displaystyle \cot z} given above, which is the logarithmic derivative of sin ⁡ z {\displaystyle \sin z} . [ 25 ] From this, it can also be deduced that Euler's formula relates sine and cosine to the exponential function : This formula is commonly considered for real values of x , but it remains true for all complex values. Proof : Let f 1 ( x ) = cos ⁡ x + i sin ⁡ x , {\displaystyle f_{1}(x)=\cos x+i\sin x,} and f 2 ( x ) = e i x . {\displaystyle f_{2}(x)=e^{ix}.} One has d f j ( x ) / d x = i f j ( x ) {\displaystyle df_{j}(x)/dx=if_{j}(x)} for j = 1, 2 . The quotient rule thus implies that d / d x ( f 1 ( x ) / f 2 ( x ) ) = 0 {\displaystyle d/dx\,(f_{1}(x)/f_{2}(x))=0} . Therefore, f 1 ( x ) / f 2 ( x ) {\displaystyle f_{1}(x)/f_{2}(x)} is a constant function, which equals 1 , as f 1 ( 0 ) = f 2 ( 0 ) = 1. {\displaystyle f_{1}(0)=f_{2}(0)=1.} This proves the formula. One has Solving this linear system in sine and cosine, one can express them in terms of the exponential function: When x is real, this may be rewritten as Most trigonometric identities can be proved by expressing trigonometric functions in terms of the complex exponential function by using the above formulas, and then using the identity e a + b = e a e b {\displaystyle e^{a+b}=e^{a}e^{b}} to simplify the result. Euler's formula can also be used to define the basic trigonometric functions directly, as follows, using the language of topological groups . 
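Euler's formula and the resulting exponential expressions for sine and cosine can be spot-checked with Python's complex arithmetic; a minimal illustration:

```python
import cmath
import math

# Euler's formula: exp(i x) = cos x + i sin x, and the inversions
# cos x = (e^{ix} + e^{-ix})/2,  sin x = (e^{ix} - e^{-ix})/(2i).
x = 0.7
assert cmath.isclose(cmath.exp(1j * x), complex(math.cos(x), math.sin(x)))
assert math.isclose(((cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2).real, math.cos(x))
assert math.isclose(((cmath.exp(1j * x) - cmath.exp(-1j * x)) / (2j)).real, math.sin(x))
```

The same relations hold for complex arguments, which is how `cmath.sin` and `cmath.cos` extend the real functions.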
[ 26 ] The set U {\displaystyle U} of complex numbers of unit modulus is a compact and connected topological group, which has a neighborhood of the identity that is homeomorphic to the real line. Therefore, it is isomorphic as a topological group to the one-dimensional torus group R / Z {\displaystyle \mathbb {R} /\mathbb {Z} } , via an isomorphism e : R / Z → U . {\displaystyle e:\mathbb {R} /\mathbb {Z} \to U.} In pedestrian terms e ( t ) = exp ⁡ ( 2 π i t ) {\displaystyle e(t)=\exp(2\pi it)} , and this isomorphism is unique up to taking complex conjugates. For a nonzero real number a {\displaystyle a} (the base ), the function t ↦ e ( t / a ) {\displaystyle t\mapsto e(t/a)} defines an isomorphism of the group R / a Z → U {\displaystyle \mathbb {R} /a\mathbb {Z} \to U} . The real and imaginary parts of e ( t / a ) {\displaystyle e(t/a)} are the cosine and sine, where a {\displaystyle a} is used as the base for measuring angles. For example, when a = 2 π {\displaystyle a=2\pi } , we get the measure in radians, and the usual trigonometric functions. When a = 360 {\displaystyle a=360} , we get the sine and cosine of angles measured in degrees. Note that a = 2 π {\displaystyle a=2\pi } is the unique value at which the derivative d d t e ( t / a ) {\displaystyle {\frac {d}{dt}}e(t/a)} becomes a unit vector with positive imaginary part at t = 0 {\displaystyle t=0} . This fact can, in turn, be used to define the constant 2 π {\displaystyle 2\pi } . Another way to define the trigonometric functions in analysis is using integration. [ 14 ] [ 27 ] For a real number t {\displaystyle t} , put θ ( t ) = ∫ 0 t d τ 1 + τ 2 = arctan ⁡ t {\displaystyle \theta (t)=\int _{0}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\arctan t} where this defines this inverse tangent function. Also, π {\displaystyle \pi } is defined by 1 2 π = ∫ 0 ∞ d τ 1 + τ 2 {\displaystyle {\frac {1}{2}}\pi =\int _{0}^{\infty }{\frac {d\tau }{1+\tau ^{2}}}} a definition that goes back to Karl Weierstrass . 
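The integral definition θ(t) = ∫₀ᵗ dτ/(1 + τ²) can be checked against the library arctangent with elementary numerical quadrature; a sketch using the midpoint rule (the step count is an arbitrary choice):

```python
import math

def arctan_by_quadrature(t, n=100_000):
    """Approximate arctan t = integral from 0 to t of d(tau)/(1 + tau^2),
    using the midpoint rule with n equal subintervals."""
    h = t / n
    return sum(h / (1 + ((k + 0.5) * h) ** 2) for k in range(n))

for t in [0.5, 1.0, 3.0]:
    assert abs(arctan_by_quadrature(t) - math.atan(t)) < 1e-8
```

Taking t → ∞ in the integral recovers the Weierstrass definition π/2 = ∫₀^∞ dτ/(1 + τ²), consistent with arctan t → π/2.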
[ 28 ] On the interval − π / 2 < θ < π / 2 {\displaystyle -\pi /2<\theta <\pi /2} , the trigonometric functions are defined by inverting the relation θ = arctan ⁡ t {\displaystyle \theta =\arctan t} . Thus we define the trigonometric functions by tan ⁡ θ = t , cos ⁡ θ = ( 1 + t 2 ) − 1 / 2 , sin ⁡ θ = t ( 1 + t 2 ) − 1 / 2 {\displaystyle \tan \theta =t,\quad \cos \theta =(1+t^{2})^{-1/2},\quad \sin \theta =t(1+t^{2})^{-1/2}} where the point ( t , θ ) {\displaystyle (t,\theta )} is on the graph of θ = arctan ⁡ t {\displaystyle \theta =\arctan t} and the positive square root is taken. This defines the trigonometric functions on ( − π / 2 , π / 2 ) {\displaystyle (-\pi /2,\pi /2)} . The definition can be extended to all real numbers by first observing that, as θ → π / 2 {\displaystyle \theta \to \pi /2} , t → ∞ {\displaystyle t\to \infty } , and so cos ⁡ θ = ( 1 + t 2 ) − 1 / 2 → 0 {\displaystyle \cos \theta =(1+t^{2})^{-1/2}\to 0} and sin ⁡ θ = t ( 1 + t 2 ) − 1 / 2 → 1 {\displaystyle \sin \theta =t(1+t^{2})^{-1/2}\to 1} . Thus cos ⁡ θ {\displaystyle \cos \theta } and sin ⁡ θ {\displaystyle \sin \theta } are extended continuously so that cos ⁡ ( π / 2 ) = 0 , sin ⁡ ( π / 2 ) = 1 {\displaystyle \cos(\pi /2)=0,\sin(\pi /2)=1} . Now the conditions cos ⁡ ( θ + π ) = − cos ⁡ ( θ ) {\displaystyle \cos(\theta +\pi )=-\cos(\theta )} and sin ⁡ ( θ + π ) = − sin ⁡ ( θ ) {\displaystyle \sin(\theta +\pi )=-\sin(\theta )} define the sine and cosine as periodic functions with period 2 π {\displaystyle 2\pi } , for all real numbers. Proving the basic properties of sine and cosine, including the fact that sine and cosine are analytic, one may first establish the addition formulae. 
First, arctan ⁡ s + arctan ⁡ t = arctan ⁡ s + t 1 − s t {\displaystyle \arctan s+\arctan t=\arctan {\frac {s+t}{1-st}}} holds, provided arctan ⁡ s + arctan ⁡ t ∈ ( − π / 2 , π / 2 ) {\displaystyle \arctan s+\arctan t\in (-\pi /2,\pi /2)} , since arctan ⁡ s + arctan ⁡ t = ∫ − s t d τ 1 + τ 2 = ∫ 0 s + t 1 − s t d τ 1 + τ 2 {\displaystyle \arctan s+\arctan t=\int _{-s}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\int _{0}^{\frac {s+t}{1-st}}{\frac {d\tau }{1+\tau ^{2}}}} after the substitution τ → s + τ 1 − s τ {\displaystyle \tau \to {\frac {s+\tau }{1-s\tau }}} . In particular, the limiting case as s → ∞ {\displaystyle s\to \infty } gives arctan ⁡ t + π 2 = arctan ⁡ ( − 1 / t ) , t ∈ ( − ∞ , 0 ) . {\displaystyle \arctan t+{\frac {\pi }{2}}=\arctan(-1/t),\quad t\in (-\infty ,0).} Thus we have sin ⁡ ( θ + π 2 ) = − 1 t 1 + ( − 1 / t ) 2 = − 1 1 + t 2 = − cos ⁡ ( θ ) {\displaystyle \sin \left(\theta +{\frac {\pi }{2}}\right)={\frac {-1}{t{\sqrt {1+(-1/t)^{2}}}}}={\frac {-1}{\sqrt {1+t^{2}}}}=-\cos(\theta )} and cos ⁡ ( θ + π 2 ) = 1 1 + ( − 1 / t ) 2 = t 1 + t 2 = sin ⁡ ( θ ) . {\displaystyle \cos \left(\theta +{\frac {\pi }{2}}\right)={\frac {1}{\sqrt {1+(-1/t)^{2}}}}={\frac {t}{\sqrt {1+t^{2}}}}=\sin(\theta ).} So the sine and cosine functions are related by translation over a quarter period π / 2 {\displaystyle \pi /2} . One can also define the trigonometric functions using various functional equations . For example, [ 29 ] the sine and the cosine form the unique pair of continuous functions that satisfy the difference formula and the added condition The sine and cosine of a complex number z = x + i y {\displaystyle z=x+iy} can be expressed in terms of real sines, cosines, and hyperbolic functions as follows: By taking advantage of domain coloring , it is possible to graph the trigonometric functions as complex-valued functions. 
Various features unique to the complex functions can be seen from the graph; for example, the sine and cosine functions can be seen to be unbounded as the imaginary part of z {\displaystyle z} becomes larger (since the color white represents infinity), and the fact that the functions contain simple zeros or poles is apparent from the fact that the hue cycles around each zero or pole exactly once. Comparing these graphs with those of the corresponding Hyperbolic functions highlights the relationships between the two. sin ⁡ z {\displaystyle \sin z\,} cos ⁡ z {\displaystyle \cos z\,} tan ⁡ z {\displaystyle \tan z\,} cot ⁡ z {\displaystyle \cot z\,} sec ⁡ z {\displaystyle \sec z\,} csc ⁡ z {\displaystyle \csc z\,} The sine and cosine functions are periodic , with period 2 π {\displaystyle 2\pi } , which is the smallest positive period: sin ⁡ ( z + 2 π ) = sin ⁡ ( z ) , cos ⁡ ( z + 2 π ) = cos ⁡ ( z ) . {\displaystyle \sin(z+2\pi )=\sin(z),\quad \cos(z+2\pi )=\cos(z).} Consequently, the cosecant and secant also have 2 π {\displaystyle 2\pi } as their period. The functions sine and cosine also have semiperiods π {\displaystyle \pi } , and sin ⁡ ( z + π ) = − sin ⁡ ( z ) , cos ⁡ ( z + π ) = − cos ⁡ ( z ) {\displaystyle \sin(z+\pi )=-\sin(z),\quad \cos(z+\pi )=-\cos(z)} and consequently tan ⁡ ( z + π ) = tan ⁡ ( z ) , cot ⁡ ( z + π ) = cot ⁡ ( z ) . {\displaystyle \tan(z+\pi )=\tan(z),\quad \cot(z+\pi )=\cot(z).} Also, sin ⁡ ( x + π / 2 ) = cos ⁡ ( x ) , cos ⁡ ( x + π / 2 ) = − sin ⁡ ( x ) {\displaystyle \sin(x+\pi /2)=\cos(x),\quad \cos(x+\pi /2)=-\sin(x)} (see Complementary angles ). The function sin ⁡ ( z ) {\displaystyle \sin(z)} has a unique zero (at z = 0 {\displaystyle z=0} ) in the strip − π < ℜ ( z ) < π {\displaystyle -\pi <\Re (z)<\pi } . The function cos ⁡ ( z ) {\displaystyle \cos(z)} has the pair of zeros z = ± π / 2 {\displaystyle z=\pm \pi /2} in the same strip. 
Because of the periodicity, the zeros of sine are π Z = { … , − 2 π , − π , 0 , π , 2 π , … } ⊂ C . {\displaystyle \pi \mathbb {Z} =\left\{\dots ,-2\pi ,-\pi ,0,\pi ,2\pi ,\dots \right\}\subset \mathbb {C} .} The zeros of cosine are π 2 + π Z = { … , − 3 π 2 , − π 2 , π 2 , 3 π 2 , … } ⊂ C . {\displaystyle {\frac {\pi }{2}}+\pi \mathbb {Z} =\left\{\dots ,-{\frac {3\pi }{2}},-{\frac {\pi }{2}},{\frac {\pi }{2}},{\frac {3\pi }{2}},\dots \right\}\subset \mathbb {C} .} All of the zeros are simple zeros, and both functions have derivative ± 1 {\displaystyle \pm 1} at each of the zeros. The tangent function tan ⁡ ( z ) = sin ⁡ ( z ) / cos ⁡ ( z ) {\displaystyle \tan(z)=\sin(z)/\cos(z)} has a simple zero at z = 0 {\displaystyle z=0} and vertical asymptotes at z = ± π / 2 {\displaystyle z=\pm \pi /2} , where it has a simple pole of residue − 1 {\displaystyle -1} . Again, owing to the periodicity, the zeros are all the integer multiples of π {\displaystyle \pi } and the poles are odd multiples of π / 2 {\displaystyle \pi /2} , all having the same residue. The poles correspond to vertical asymptotes lim x → π / 2 − tan ⁡ ( x ) = + ∞ , lim x → π / 2 + tan ⁡ ( x ) = − ∞ . {\displaystyle \lim _{x\to \pi /2^{-}}\tan(x)=+\infty ,\quad \lim _{x\to \pi /2^{+}}\tan(x)=-\infty .} The cotangent function cot ⁡ ( z ) = cos ⁡ ( z ) / sin ⁡ ( z ) {\displaystyle \cot(z)=\cos(z)/\sin(z)} has a simple pole of residue 1 at the integer multiples of π {\displaystyle \pi } and simple zeros at odd multiples of π / 2 {\displaystyle \pi /2} . The poles correspond to vertical asymptotes lim x → 0 − cot ⁡ ( x ) = − ∞ , lim x → 0 + cot ⁡ ( x ) = + ∞ . {\displaystyle \lim _{x\to 0^{-}}\cot(x)=-\infty ,\quad \lim _{x\to 0^{+}}\cot(x)=+\infty .} Many identities interrelate the trigonometric functions. This section contains the most basic ones; for more identities, see List of trigonometric identities . 
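The zeros and periods described above are easy to verify numerically (for real arguments, up to floating-point rounding); a short illustrative check:

```python
import math

# Zeros of sine at k*pi and of cosine at pi/2 + k*pi; tangent has period pi.
for k in range(-3, 4):
    assert abs(math.sin(k * math.pi)) < 1e-12
    assert abs(math.cos(math.pi / 2 + k * math.pi)) < 1e-12

x = 0.4
assert math.isclose(math.tan(x + math.pi), math.tan(x), abs_tol=1e-12)
assert math.isclose(math.sin(x + math.pi), -math.sin(x), abs_tol=1e-12)
assert math.isclose(math.cos(x + math.pi), -math.cos(x), abs_tol=1e-12)
```

The residues at the complex poles cannot be seen this way; they would require contour integration or a series expansion around each pole.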
These identities may be proved geometrically from the unit-circle definitions or the right-angled-triangle definitions (although, for the latter definitions, care must be taken for angles that are not in the interval [0, π /2] , see Proofs of trigonometric identities ). For non-geometrical proofs using only tools of calculus , one may directly use the differential equations, in a way that is similar to that of the above proof of Euler's identity. One can also use Euler's identity to express all trigonometric functions in terms of complex exponentials and using properties of the exponential function. The cosine and the secant are even functions ; the other trigonometric functions are odd functions . That is: All trigonometric functions are periodic functions of period 2 π . This is the smallest period, except for the tangent and the cotangent, which have π as their smallest period. This means that, for every integer k , one has See Periodicity and asymptotes . The Pythagorean identity is the expression of the Pythagorean theorem in terms of trigonometric functions. It is sin 2 ⁡ x + cos 2 ⁡ x = 1. {\displaystyle \sin ^{2}x+\cos ^{2}x=1.} Dividing through by either cos 2 ⁡ x {\displaystyle \cos ^{2}x} or sin 2 ⁡ x {\displaystyle \sin ^{2}x} gives tan 2 ⁡ x + 1 = sec 2 ⁡ x {\displaystyle \tan ^{2}x+1=\sec ^{2}x} and 1 + cot 2 ⁡ x = csc 2 ⁡ x {\displaystyle 1+\cot ^{2}x=\csc ^{2}x} , respectively. The sum and difference formulas allow expanding the sine, the cosine, and the tangent of a sum or a difference of two angles in terms of sines and cosines and tangents of the angles themselves. These can be derived geometrically, using arguments that date to Ptolemy (see Angle sum and difference identities ). One can also produce them algebraically using Euler's formula . When the two angles are equal, the sum formulas reduce to simpler equations known as the double-angle formulae . These identities can be used to derive the product-to-sum identities . 
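The Pythagorean identity and the angle-sum formulas can be spot-checked over many random angles; a minimal sketch (the sample count and seed are arbitrary choices):

```python
import math
import random

# Spot-check the Pythagorean identity and the angle-sum formulas.
random.seed(0)
for _ in range(1000):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    assert math.isclose(math.sin(a) ** 2 + math.cos(a) ** 2, 1.0)
    assert math.isclose(math.sin(a + b),
                        math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b),
                        abs_tol=1e-12)
    assert math.isclose(math.cos(a + b),
                        math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b),
                        abs_tol=1e-12)
```

Setting b = a in the sum formulas reproduces the double-angle formulae sin 2a = 2 sin a cos a and cos 2a = cos²a − sin²a.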
By setting t = tan ⁡ 1 2 θ , {\displaystyle t=\tan {\tfrac {1}{2}}\theta ,} all trigonometric functions of θ {\displaystyle \theta } can be expressed as rational fractions of t {\displaystyle t} : Together with this is the tangent half-angle substitution , which reduces the computation of integrals and antiderivatives of trigonometric functions to that of rational fractions. The derivatives of trigonometric functions result from those of sine and cosine by applying the quotient rule . The values given for the antiderivatives in the following table can be verified by differentiating them. The number C is a constant of integration . Note: For 0 < x < π {\displaystyle 0<x<\pi } the integral of csc ⁡ x {\displaystyle \csc x} can also be written as − arsinh ⁡ ( cot ⁡ x ) , {\displaystyle -\operatorname {arsinh} (\cot x),} and for the integral of sec ⁡ x {\displaystyle \sec x} for − π / 2 < x < π / 2 {\displaystyle -\pi /2<x<\pi /2} as arsinh ⁡ ( tan ⁡ x ) , {\displaystyle \operatorname {arsinh} (\tan x),} where arsinh {\displaystyle \operatorname {arsinh} } is the inverse hyperbolic sine . Alternatively, the derivatives of the 'co-functions' can be obtained using trigonometric identities and the chain rule: The trigonometric functions are periodic, and hence not injective , so strictly speaking, they do not have an inverse function . However, on each interval on which a trigonometric function is monotonic , one can define an inverse function, and this defines inverse trigonometric functions as multivalued functions . To define a true inverse function, one must restrict the domain to an interval where the function is monotonic, and is thus bijective from this interval to its image by the function. The common choice for this interval, called the set of principal values , is given in the following table. As usual, the inverse trigonometric functions are denoted with the prefix "arc" before the name or its abbreviation of the function. The notations sin −1 , cos −1 , etc. 
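The rational parametrization by t = tan(θ/2) (the tangent half-angle, or Weierstrass, substitution) can be verified numerically; an illustrative check for one angle:

```python
import math

# Weierstrass substitution: with t = tan(theta/2),
#   sin(theta) = 2t/(1+t^2),  cos(theta) = (1-t^2)/(1+t^2),  tan(theta) = 2t/(1-t^2).
theta = 1.1
t = math.tan(theta / 2)
assert math.isclose(math.sin(theta), 2 * t / (1 + t * t))
assert math.isclose(math.cos(theta), (1 - t * t) / (1 + t * t))
assert math.isclose(math.tan(theta), 2 * t / (1 - t * t))
```

In integration, the same substitution with dθ = 2 dt/(1 + t²) turns any rational expression in sin θ and cos θ into a rational function of t.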
are often used for arcsin and arccos , etc. When this notation is used, inverse functions could be confused with multiplicative inverses. The notation with the "arc" prefix avoids such a confusion, though "arcsec" for arcsecant can be confused with " arcsecond ". Just like the sine and cosine, the inverse trigonometric functions can also be expressed in terms of infinite series. They can also be expressed in terms of complex logarithms . In this section A , B , C denote the three (interior) angles of a triangle, and a , b , c denote the lengths of the respective opposite edges. They are related by various formulas, which are named by the trigonometric functions they involve. The law of sines states that for an arbitrary triangle with sides a , b , and c and angles opposite those sides A , B and C : sin ⁡ A a = sin ⁡ B b = sin ⁡ C c = 2 Δ a b c , {\displaystyle {\frac {\sin A}{a}}={\frac {\sin B}{b}}={\frac {\sin C}{c}}={\frac {2\Delta }{abc}},} where Δ is the area of the triangle, or, equivalently, a sin ⁡ A = b sin ⁡ B = c sin ⁡ C = 2 R , {\displaystyle {\frac {a}{\sin A}}={\frac {b}{\sin B}}={\frac {c}{\sin C}}=2R,} where R is the triangle's circumradius . It can be proved by dividing the triangle into two right ones and using the above definition of sine. The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. This is a common situation occurring in triangulation , a technique to determine unknown distances by measuring two angles and an accessible enclosed distance. The law of cosines (also known as the cosine formula or cosine rule) is an extension of the Pythagorean theorem : c 2 = a 2 + b 2 − 2 a b cos ⁡ C , {\displaystyle c^{2}=a^{2}+b^{2}-2ab\cos C,} or equivalently, cos ⁡ C = a 2 + b 2 − c 2 2 a b . {\displaystyle \cos C={\frac {a^{2}+b^{2}-c^{2}}{2ab}}.} In this formula the angle at C is opposite to the side c . 
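The laws of sines and cosines can be exercised together on a concrete triangle; a sketch using the 3-4-5 right triangle (chosen here purely as an example):

```python
import math

# For a 3-4-5 right triangle, recover the angles with the law of cosines
# and confirm the common ratio a/sin A = 2R from the law of sines.
a, b, c = 3.0, 4.0, 5.0
A = math.acos((b * b + c * c - a * a) / (2 * b * c))
B = math.acos((a * a + c * c - b * b) / (2 * a * c))
C = math.acos((a * a + b * b - c * c) / (2 * a * b))
assert math.isclose(A + B + C, math.pi)
assert math.isclose(C, math.pi / 2)          # right angle opposite the hypotenuse
two_R = a / math.sin(A)
assert math.isclose(b / math.sin(B), two_R)
assert math.isclose(c / math.sin(C), two_R)
assert math.isclose(two_R, 5.0)              # the circumdiameter is the hypotenuse
```

The last assertion reflects Thales' theorem: for a right triangle, the hypotenuse is a diameter of the circumscribed circle, so 2R = c.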
This theorem can be proved by dividing the triangle into two right ones and using the Pythagorean theorem . The law of cosines can be used to determine a side of a triangle if two sides and the angle between them are known. It can also be used to find the cosines of an angle (and consequently the angles themselves) if the lengths of all the sides are known. The law of tangents says that: If s is the triangle's semiperimeter, ( a + b + c )/2, and r is the radius of the triangle's incircle , then rs is the triangle's area. Therefore Heron's formula implies that: The law of cotangents says that: [ 30 ] It follows that The trigonometric functions are also important in physics. The sine and the cosine functions, for example, are used to describe simple harmonic motion , which models many natural phenomena, such as the movement of a mass attached to a spring and, for small angles, the pendular motion of a mass hanging by a string. The sine and cosine functions are one-dimensional projections of uniform circular motion . Trigonometric functions also prove to be useful in the study of general periodic functions . The characteristic wave patterns of periodic functions are useful for modeling recurring phenomena such as sound or light waves . [ 31 ] Under rather general conditions, a periodic function f ( x ) can be expressed as a sum of sine waves or cosine waves in a Fourier series . [ 32 ] Denoting the sine or cosine basis functions by φ k , the expansion of the periodic function f ( t ) takes the form: f ( t ) = ∑ k = 1 ∞ c k φ k ( t ) . {\displaystyle f(t)=\sum _{k=1}^{\infty }c_{k}\varphi _{k}(t).} For example, the square wave can be written as the Fourier series f square ( t ) = 4 π ∑ k = 1 ∞ sin ⁡ ( ( 2 k − 1 ) t ) 2 k − 1 . {\displaystyle f_{\text{square}}(t)={\frac {4}{\pi }}\sum _{k=1}^{\infty }{\sin {\big (}(2k-1)t{\big )} \over 2k-1}.} In the animation of a square wave at top right it can be seen that just a few terms already produce a fairly good approximation. 
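The square-wave expansion quoted above can be evaluated directly; a sketch of its partial sums at a point away from the jump discontinuities:

```python
import math

def square_wave_partial_sum(t, terms):
    """Partial Fourier sum (4/pi) * sum of sin((2k-1)t)/(2k-1) for k = 1..terms."""
    return (4 / math.pi) * sum(math.sin((2 * k - 1) * t) / (2 * k - 1)
                               for k in range(1, terms + 1))

# Away from the jumps, the partial sums approach the square wave's value +/-1.
t = math.pi / 2
assert abs(square_wave_partial_sum(t, 1000) - 1.0) < 1e-3
assert abs(square_wave_partial_sum(-t, 1000) + 1.0) < 1e-3
```

Near the jump points the partial sums overshoot by a fixed fraction regardless of the number of terms (the Gibbs phenomenon), which is why convergence is only pointwise, not uniform.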
The superposition of several terms in the expansion of a sawtooth wave are shown underneath. While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was defined by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). The functions of sine and versine (1 – cosine) are closely related to the jyā and koti-jyā functions used in Gupta period Indian astronomy ( Aryabhatiya , Surya Siddhanta ), via translation from Sanskrit to Arabic and then from Arabic to Latin. [ 33 ] (See Aryabhata's sine table .) All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines , used in solving triangles . [ 34 ] Al-Khwārizmī (c. 780–850) produced tables of sines and cosines. Circa 860, Habash al-Hasib al-Marwazi defined the tangent and the cotangent, and produced their tables. [ 35 ] [ 36 ] Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) defined the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. [ 36 ] The trigonometric functions were later studied by mathematicians including Omar Khayyám , Bhāskara II , Nasir al-Din al-Tusi , Jamshīd al-Kāshī (14th century), Ulugh Beg (14th century), Regiomontanus (1464), Rheticus , and Rheticus' student Valentinus Otho . Madhava of Sangamagrama (c. 1400) made early strides in the analysis of trigonometric functions in terms of infinite series . [ 37 ] (See Madhava series and Madhava's sine table .) The tangent function was brought to Europe by Giovanni Bianchini in 1467 in trigonometry tables he created to support the calculation of stellar coordinates. [ 38 ] The terms tangent and secant were first introduced by the Danish mathematician Thomas Fincke in his book Geometria rotundi (1583). 
[ 39 ] The 17th-century French mathematician Albert Girard made the first published use of the abbreviations sin , cos , and tan in his book Trigonométrie . [ 40 ] In a paper published in 1682, Gottfried Leibniz proved that sin x is not an algebraic function of x . [ 41 ] Though they are defined as ratios of sides of a right triangle , and thus appear to be rational functions , Leibniz's result established that they are actually transcendental functions of their argument. The task of assimilating circular functions into algebraic expressions was accomplished by Euler in his Introduction to the Analysis of the Infinite (1748). His method was to show that the sine and cosine functions are alternating series formed from the even and odd terms respectively of the exponential series . He presented " Euler's formula ", as well as near-modern abbreviations ( sin. , cos. , tang. , cot. , sec. , and cosec. ). [ 33 ] A few functions were common historically, but are now seldom used, such as the chord , versine (which appeared in the earliest tables [ 33 ] ), haversine , coversine , [ 42 ] half-tangent (tangent of half an angle), and exsecant . List of trigonometric identities shows more relations between these functions. Historically, trigonometric functions were often combined with logarithms in compound functions like the logarithmic sine, logarithmic cosine, logarithmic secant, logarithmic cosecant, logarithmic tangent and logarithmic cotangent. [ 43 ] [ 44 ] [ 45 ] [ 46 ] The word sine derives [ 47 ] from Latin sinus , meaning "bend; bay", and more specifically "the hanging fold of the upper part of a toga ", "the bosom of a garment", which was chosen as the translation of what was interpreted as the Arabic word jaib , meaning "pocket" or "fold" in the twelfth-century translations of works by Al-Battani and al-Khwārizmī into Medieval Latin . 
[ 48 ] The choice was based on a misreading of the Arabic written form j-y-b ( جيب ), which itself originated as a transliteration from Sanskrit jīvā , which along with its synonym jyā (the standard Sanskrit term for the sine) translates to "bowstring", being in turn adopted from Ancient Greek χορδή "string". [ 49 ] The word tangent comes from Latin tangens meaning "touching", since the line touches the circle of unit radius, whereas secant stems from Latin secans —"cutting"—since the line cuts the circle. [ 50 ] The prefix " co- " (in "cosine", "cotangent", "cosecant") is found in Edmund Gunter 's Canon triangulorum (1620), which defines the cosinus as an abbreviation of the sinus complementi (sine of the complementary angle ) and proceeds to define the cotangens similarly. [ 51 ] [ 52 ]
https://en.wikipedia.org/wiki/Sinus_secundus_arcus
In trigonometry , the sinus totus (Latin for "total sine") was historically the radius of the base circle used to construct a sine table; that is, the maximum possible value of the sine. Letting the notation Sin ⁡ θ {\displaystyle \operatorname {Sin} \theta } stand for the historical sine, and sin ⁡ θ {\displaystyle \sin \theta } stand for the modern sine function, Sin ⁡ θ = R sin ⁡ θ , {\displaystyle \operatorname {Sin} \theta =R\sin \theta ,} where R {\displaystyle R} is the sinus totus ; in particular, R = Sin ⁡ 90 ∘ . {\displaystyle R=\operatorname {Sin} 90^{\circ }.}
https://en.wikipedia.org/wiki/Sinus_totus
The versine or versed sine is a trigonometric function found in some of the earliest ( Sanskrit Aryabhatia , [ 1 ] Section I) trigonometric tables . The versine of an angle is 1 minus its cosine . There are several related functions, most notably the coversine and haversine . The latter, half a versine, is of particular importance in the haversine formula of navigation. The versine [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] or versed sine [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] is a trigonometric function already appearing in some of the earliest trigonometric tables. It is symbolized in formulas using the abbreviations versin , sinver , [ 13 ] [ 14 ] vers , or siv . [ 15 ] [ 16 ] In Latin , it is known as the sinus versus (flipped sine), versinus , versus , or sagitta (arrow). [ 17 ] Expressed in terms of common trigonometric functions sine, cosine, and tangent, the versine is equal to versin ⁡ θ = 1 − cos ⁡ θ = 2 sin 2 ⁡ θ 2 = sin ⁡ θ tan ⁡ θ 2 {\displaystyle \operatorname {versin} \theta =1-\cos \theta =2\sin ^{2}{\frac {\theta }{2}}=\sin \theta \,\tan {\frac {\theta }{2}}} There are several related functions corresponding to the versine: Special tables were also made of half of the versed sine, because of its particular use in the haversine formula used historically in navigation . hav θ = sin 2 ⁡ ( θ 2 ) = 1 − cos ⁡ θ 2 {\displaystyle {\text{hav}}\ \theta =\sin ^{2}\left({\frac {\theta }{2}}\right)={\frac {1-\cos \theta }{2}}} The ordinary sine function ( see note on etymology ) was sometimes historically called the sinus rectus ("straight sine"), to contrast it with the versed sine ( sinus versus ). [ 31 ] The meaning of these terms is apparent if one looks at the functions in the original context for their definition, a unit circle : For a vertical chord AB of the unit circle, the sine of the angle θ (representing half of the subtended angle Δ ) is the distance AC (half of the chord). 
On the other hand, the versed sine of θ is the distance CD from the center of the chord to the center of the arc. Thus, the sum of cos( θ ) (equal to the length of line OC ) and versin( θ ) (equal to the length of line CD ) is the radius OD (with length 1). Illustrated this way, the sine is vertical ( rectus , literally "straight") while the versine is horizontal ( versus , literally "turned against, out-of-place"); both are distances from C to the circle. This figure also illustrates the reason why the versine was sometimes called the sagitta , Latin for arrow . [ 17 ] [ 30 ] If the arc ADB of the double-angle Δ = 2 θ is viewed as a " bow " and the chord AB as its "string", then the versine CD is clearly the "arrow shaft". In further keeping with the interpretation of the sine as "vertical" and the versed sine as "horizontal", sagitta is also an obsolete synonym for the abscissa (the horizontal axis of a graph). [ 30 ] In 1821, Cauchy used the terms sinus versus ( siv ) for the versine and cosinus versus ( cosiv ) for the coversine. [ 15 ] [ 16 ] [ nb 1 ] As θ goes to zero, versin( θ ) is the difference between two nearly equal quantities, so a user of a trigonometric table for the cosine alone would need a very high accuracy to obtain the versine in order to avoid catastrophic cancellation , making separate tables for the latter convenient. [ 12 ] Even with a calculator or computer, round-off errors make it advisable to use the sin 2 formula for small θ . Another historical advantage of the versine is that it is always non-negative, so its logarithm is defined everywhere except for the single angle ( θ = 0, 2 π , …) where it is zero—thus, one could use logarithmic tables for multiplications in formulas involving versines. 
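The cancellation problem for small angles is easy to demonstrate; a sketch contrasting the naive form 1 − cos θ with the algebraically identical but numerically safe form 2 sin²(θ/2):

```python
import math

def versin_naive(theta):
    return 1 - math.cos(theta)           # cancels badly for small theta

def versin_stable(theta):
    return 2 * math.sin(theta / 2) ** 2  # algebraically identical, numerically safe

# For moderate angles the two forms agree...
assert math.isclose(versin_naive(1.0), versin_stable(1.0))
# ...but near zero the subtraction loses all significant digits:
theta = 1e-8
assert versin_naive(theta) == 0.0                 # catastrophic cancellation
assert math.isclose(versin_stable(theta), theta * theta / 2)  # correct to leading order
```

This is the modern, floating-point analogue of the historical reason for tabulating the versine separately instead of deriving it from a cosine table.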
In fact, the earliest surviving table of sine (half- chord ) values (as opposed to the chords tabulated by Ptolemy and other Greek authors), calculated from the Surya Siddhantha of India dating back to the 3rd century BC, was a table of values for the sine and versed sine (in 3.75° increments from 0 to 90°). [ 31 ] The versine appears as an intermediate step in the application of the half-angle formula sin 2 ( ⁠ θ / 2 ⁠ ) = ⁠ 1 / 2 ⁠ versin( θ ), derived by Ptolemy , that was used to construct such tables. The haversine, in particular, was important in navigation because it appears in the haversine formula , which is used to reasonably accurately compute distances on an astronomic spheroid (see issues with the Earth's radius vs. sphere ) given angular positions (e.g., longitude and latitude ). One could also use sin 2 ( ⁠ θ / 2 ⁠ ) directly, but having a table of the haversine removed the need to compute squares and square roots. [ 12 ] An early utilization by José de Mendoza y Ríos of what later would be called haversines is documented in 1801. [ 14 ] [ 32 ] The first known English equivalent to a table of haversines was published by James Andrew in 1805, under the name "Squares of Natural Semi-Chords". [ 33 ] [ 34 ] [ 17 ] In 1835, the term haversine (notated naturally as hav. or base-10 logarithmically as log. haversine or log. havers. ) was coined [ 35 ] by James Inman [ 14 ] [ 36 ] [ 37 ] in the third edition of his work Navigation and Nautical Astronomy: For the Use of British Seamen to simplify the calculation of distances between two points on the surface of the Earth using spherical trigonometry for applications in navigation. [ 3 ] [ 35 ] Inman also used the terms nat. versine and nat. vers. for versines. [ 3 ] Other highly regarded tables of haversines were those of Richard Farley in 1856 [ 33 ] [ 38 ] and John Caulfield Hannyngton in 1876. 
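The haversine formula for great-circle distance can be sketched in a few lines; note that this treats the Earth as a sphere, and the 6371 km radius is a conventional mean value (an assumption here, not from the source):

```python
import math

def haversine_distance(lat1, lon1, lat2, lon2, radius=6371.0):
    """Great-circle distance (in the units of radius, here km) on a sphere,
    via the haversine formula: hav(c) = hav(dphi) + cos(phi1) cos(phi2) hav(dlmb)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    h = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius * math.asin(math.sqrt(h))

# One degree of latitude along a meridian of a 6371 km sphere:
d = haversine_distance(0.0, 0.0, 1.0, 0.0)
assert math.isclose(d, 6371.0 * math.radians(1.0))
```

Because the formula is built on hav(θ) = sin²(θ/2), it avoids the cancellation that afflicts the spherical law of cosines for nearby points, which is why it remains the standard choice in software.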
[ 33 ] [ 39 ] The haversine continues to be used in navigation and has found new applications in recent decades, as in Bruce D. Stark's method for clearing lunar distances utilizing Gaussian logarithms since 1995 [ 40 ] [ 41 ] or in a more compact method for sight reduction since 2014. [ 29 ] While the usage of the versine, coversine and haversine, as well as their inverse functions, can be traced back centuries, the names for the other five cofunctions appear to be of much younger origin. One period (0 < θ < 2 π ) of a versine or, more commonly, a haversine waveform is also commonly used in signal processing and control theory as the shape of a pulse or a window function (including the Hann , Hann–Poisson and Tukey windows ), because it smoothly ( continuous in value and slope ) "turns on" from zero to one (for haversine) and back to zero. [ nb 2 ] In these applications, it is called the Hann function or raised-cosine filter . The functions are circular rotations of each other. Inverse functions such as the arcversine (arcversin, arcvers, [ 8 ] avers, [ 43 ] [ 44 ] aver), arcvercosine (arcvercosin, arcvercos, avercos, avcs), arccoversine (arccoversin, arccovers, [ 8 ] acovers, [ 43 ] [ 44 ] acvs), arccovercosine (arccovercosin, arccovercos, acovercos, acvc), archaversine (archaversin, archav, haversin −1 , [ 45 ] invhav, [ 46 ] [ 47 ] [ 48 ] ahav, [ 43 ] [ 44 ] ahvs, ahv, hav −1 [ 49 ] [ 50 ] ), archavercosine (archavercosin, archavercos, ahvc), archacoversine (archacoversin, ahcv) and archacovercosine (archacovercosin, archacovercos, ahcc) exist as well. These functions can be extended into the complex plane , [ 42 ] [ 19 ] [ 24 ] and each has a Maclaurin series expansion. [ 24 ] When the versine v is small in comparison to the radius r , it may be approximated from the half-chord length L (the distance AC shown above) by the formula [ 51 ] v ≈ L²/(2 r ).
{\displaystyle v\approx {\frac {L^{2}}{2r}}.} Alternatively, if the versine is small and the versine, radius, and half-chord length are known, they may be used to estimate the arc length s ( AD in the figure above) by the formula s ≈ L + v ²/ r {\displaystyle s\approx L+{\frac {v^{2}}{r}}} This formula was known to the Chinese mathematician Shen Kuo , and a more accurate formula also involving the sagitta was developed two centuries later by Guo Shoujing . [ 52 ] A more accurate approximation used in engineering [ 53 ] is v ≈ s 3/2 L 1/2 /(8 r ) {\displaystyle v\approx {\frac {s^{\frac {3}{2}}L^{\frac {1}{2}}}{8r}}} The term versine is also sometimes used to describe deviations from straightness in an arbitrary planar curve, of which the above circle is a special case. Given a chord between two points on a curve, the perpendicular distance v from the chord to the curve (usually at the chord midpoint) is called a versine measurement. For a straight line, the versine of any chord is zero, so this measurement characterizes the straightness of the curve. In the limit as the chord length L goes to zero, the ratio 8 v / L ² goes to the instantaneous curvature . This usage is especially common in rail transport , where it describes measurements of the straightness of the rail tracks, [ 54 ] and it is the basis of the Hallade method for rail surveying . The term sagitta (often abbreviated sag ) is used similarly in optics , for describing the surfaces of lenses and mirrors .
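The small-versine approximations above are straightforward to check numerically. In this sketch the radius and half-chord are arbitrary illustrative numbers, not values from the text:

```python
import math

def versine_from_chord(half_chord, radius):
    # Shallow-arc approximation from the text: v ≈ L²/(2r).
    return half_chord ** 2 / (2 * radius)

def exact_versine(half_chord, radius):
    # Exact sagitta of a circular arc: v = r − sqrt(r² − L²).
    return radius - math.sqrt(radius ** 2 - half_chord ** 2)

def arc_length_estimate(half_chord, versine, radius):
    # Shen Kuo's approximation from the text: s ≈ L + v²/r.
    return half_chord + versine ** 2 / radius

r, L = 100.0, 10.0                    # radius and half-chord, same units
v = versine_from_chord(L, r)
print(v)                              # 0.5
print(exact_versine(L, r))            # ≈ 0.5013, so the error is small
print(arc_length_estimate(L, v, r))   # ≈ 10.0025

# Versine measurement of curvature: for a full chord of length 2L,
# the ratio 8v/(2L)² approaches 1/r as the chord shrinks.
print(8 * v / (2 * L) ** 2)           # 0.01 = 1/r
```

The last line is the quantity used in the Hallade method: a constant measured versine along equally spaced chords indicates a circular curve of radius 1/(8v/c²) for chord length c.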
https://en.wikipedia.org/wiki/Sinus_versus
The versine or versed sine is a trigonometric function found in some of the earliest ( Sanskrit Aryabhatiya , [ 1 ] Section I) trigonometric tables . The versine of an angle is 1 minus its cosine . There are several related functions, most notably the coversine and haversine . The latter, half a versine, is of particular importance in the haversine formula of navigation. The versine [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] or versed sine [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] is a trigonometric function already appearing in some of the earliest trigonometric tables. It is symbolized in formulas using the abbreviations versin , sinver , [ 13 ] [ 14 ] vers , or siv . [ 15 ] [ 16 ] In Latin , it is known as the sinus versus (flipped sine), versinus , versus , or sagitta (arrow). [ 17 ] Expressed in terms of the common trigonometric functions sine, cosine, and tangent, the versine is equal to versin θ = 1 − cos θ = 2 sin²( θ /2) = sin θ tan( θ /2) {\displaystyle \operatorname {versin} \theta =1-\cos \theta =2\sin ^{2}{\frac {\theta }{2}}=\sin \theta \,\tan {\frac {\theta }{2}}} There are several related functions corresponding to the versine. Special tables were also made of half of the versed sine, because of its particular use in the haversine formula used historically in navigation : hav θ = sin²( θ /2) = (1 − cos θ )/2 {\displaystyle {\text{hav}}\ \theta =\sin ^{2}\left({\frac {\theta }{2}}\right)={\frac {1-\cos \theta }{2}}} The ordinary sine function ( see note on etymology ) was sometimes historically called the sinus rectus ("straight sine"), to contrast it with the versed sine ( sinus versus ). [ 31 ] The meaning of these terms is apparent if one looks at the functions in the original context for their definition, a unit circle : For a vertical chord AB of the unit circle, the sine of the angle θ (representing half of the subtended angle Δ ) is the distance AC (half of the chord).
https://en.wikipedia.org/wiki/Sinus_versus_arcus
SipXecs is a free software enterprise communications system. [ 1 ] It was initially developed by Pingtel Corporation in 2003 as a voice over IP telephony server located in Boston, MA. [ 2 ] The server was later extended with additional collaboration capabilities as part of the SIPfoundry project. Since its extension, sipXecs acts as a software implementation of the Session Initiation Protocol (SIP), making it a full IP-based communications system . SipXecs competitors include other open-source telephony and SoftSwitch solutions such as Asterisk , [ 3 ] FreeSWITCH , [ 4 ] and the SIP Express Router . Development of sipXecs began in 2003 at Pingtel Corporation. In 2004, Pingtel adopted an open-source business model and contributed the codebase to the not-for-profit organization SIPfoundry . [ 5 ] It has been an open source project since then. [ 6 ] Pingtel's assets were acquired by Bluesocket in July 2007. [ 7 ] In August 2008 the Pingtel assets were acquired from Bluesocket by Nortel . [ 8 ] Subsequent to the acquisition, Nortel released the SCS500 [ 9 ] product based on sipXecs. SCS500 was positioned as an open and software-only telephony server for the SMB market of up to 500 users and received some recognition. [ 10 ] It was later renamed SCS and positioned as an enterprise communications system. [ 11 ] Subsequent to the Nortel bankruptcy [ 12 ] and the acquisition of the Nortel assets by Avaya , [ 13 ] sipXecs continued to be used as the basis for the Avaya Live cloud-based communications service. In April 2010 the founders of SIPfoundry founded eZuce, which offers a commercial version of the software. [ 14 ] SipXecs is designed as a software-only, distributed cloud application . It runs on the Linux operating system CentOS or RHEL on either virtualized or physical servers. A minimum configuration allows running all of the sipXecs components on a single server, including the database, all available services, and the sipXecs management.
Global clusters can be built using built-in auto-configuration capabilities from the centralized management system. SipXecs uses MongoDB as a distributed and partition-tolerant database for global transactions, and includes CFEngine for orchestration of clusters and JasperReports for reporting. The management and configuration system is based on the Spring Framework . sipXecs includes FreeSWITCH as its media server and Openfire for presence and instant messaging services. SipXecs follows standards such as the Session Initiation Protocol (SIP), SRTP , the Extensible Messaging and Presence Protocol (XMPP) , SIP and XMPP over TLS , and several Web standards including WebRTC , WebSocket and Representational State Transfer (REST). Amazon.com was an early adopter of sipXecs. [ 15 ] This initial 5,000-user deployment expanded considerably in the following years. OnRelay, a company in the UK, selected sipXecs for its fixed-mobile convergence solution sold to carriers. [ 16 ] Colorado State University and Cedarville University of Ohio committed to sipXecs in 2010. [ 17 ] Red Hat deployed a commercial version of sipXecs from eZuce globally in 2012. [ 18 ] Under the SIPfoundry Higher Education Program (HEP), and as of 2014, [ 19 ] Lafayette College, St. Mary's University, Messiah College, the Colorado School of Mines, [ 20 ] and Carthage College deployed sipXecs to replace their respective PBX systems. SipXecs is used by small and large enterprises ranging up to about 20,000 [ 21 ] users per cluster. SIPfoundry lists the following users on its Web site: [ 19 ] Brevard County FL, the Dutch Police, Easter Seals, Siemens Transportation, and British Airways. SipXecs is available for Red Hat Linux and CentOS. It runs virtualized in different cloud environments such as the Amazon Elastic Compute Cloud , the Google Compute Engine , the HP Cloud , IBM SoftLayer , VMware vCloud and VMware ESX , OpenStack environments, and clouds from other vendors using these technologies.
SIPfoundry distributes the sipXecs source code under the AGPL-3.0-or-later license. [ 22 ] Many different corporate and individual contributors have contributed to sipXecs, [ 23 ] including Pingtel, Bluesocket, Nortel, Avaya, and eZuce as some of the larger corporate contributors, together representing 864,791 lines of code. In addition, the sipXecs solution includes many other open-source components. SIPfoundry holds copyright on all derivative work . Contributions to sipXecs are made under a Contributor Agreement, which grants SIPfoundry shared copyright with the original author on all contributed code. SipXecs supports a wide range of SIP-compatible hardware, such as PSTN gateways, desk phones, softphones and mobile phone applications. A plug-and-play auto-configuration capability is currently (as of software release 14.04) available for phones from 18 different vendors. The SipXecs system represents a reference implementation of the SIP standard. It was used at SIPit interoperability events organized by the SIP Forum to test the interoperability of SIP solutions from many different vendors. [ 24 ]
https://en.wikipedia.org/wiki/SipXecs
A siphon (from Ancient Greek σίφων ( síphōn ) ' pipe, tube ' ; also spelled syphon ) is any of a wide variety of devices that involve the flow of liquids through tubes. In a narrower sense, the word refers particularly to a tube in an inverted "U" shape, which causes a liquid to flow upward, above the surface of a reservoir , with no pump , but powered by the fall of the liquid as it flows down the tube under the pull of gravity , then discharging at a level lower than the surface of the reservoir from which it came. There are two leading theories about how siphons cause liquid to flow uphill without being pumped, powered only by gravity. The traditional theory for centuries was that gravity pulling the liquid down on the exit side of the siphon results in reduced pressure at the top of the siphon. Atmospheric pressure is then able to push the liquid from the upper reservoir up into the reduced pressure at the top of the siphon, as in a barometer or drinking straw , and then over. [ 1 ] [ 2 ] [ 3 ] [ 4 ] However, it has been demonstrated that siphons can operate in a vacuum [ 4 ] [ 5 ] [ 6 ] [ 7 ] and to heights exceeding the barometric height of the liquid. [ 4 ] [ 5 ] [ 8 ] Consequently, the cohesion tension theory of siphon operation has been advocated, in which the liquid is pulled over the siphon in a way similar to the chain fountain . [ 9 ] It need not be one theory or the other that is correct; rather, both theories may be correct in different circumstances of ambient pressure. The atmospheric pressure with gravity theory cannot explain siphons in vacuum, where there is no significant atmospheric pressure. But the cohesion tension with gravity theory cannot explain CO 2 gas siphons, [ 10 ] siphons working despite bubbles, and the flying droplet siphon, where gases do not exert significant pulling forces, and liquids not in contact cannot exert a cohesive tension force.
All known published theories in modern times recognize Bernoulli's equation as a decent approximation to idealized, friction-free siphon operation. Egyptian reliefs from 1500 BC depict siphons used to extract liquids from large storage jars. [ 11 ] [ 12 ] Physical evidence for the use of siphons by the Greeks is the Justice cup of Pythagoras in Samos in the 6th century BC and usage by Greek engineers in the 3rd century BC at Pergamon . [ 12 ] [ 13 ] Hero of Alexandria wrote extensively about siphons in the treatise Pneumatica . [ 14 ] The Banu Musa brothers of 9th-century Baghdad invented a double-concentric siphon, which they described in their Book of Ingenious Devices . [ 15 ] [ 16 ] The edition edited by Hill includes an analysis of the double-concentric siphon. Siphons were studied further in the 17th century, in the context of suction pumps (and the recently developed vacuum pumps ), particularly with an eye to understanding the maximum height of pumps (and siphons) and the apparent vacuum at the top of early barometers . This was initially explained by Galileo Galilei via the theory of horror vacui ("nature abhors a vacuum"), which dates to Aristotle , and which Galileo restated as resistenza del vacuo , but this was subsequently disproved by later workers, notably Evangelista Torricelli and Blaise Pascal [ 17 ] – see barometer: history . A practical siphon, operating at typical atmospheric pressures and tube heights, works because gravity pulling down on the taller column of liquid leaves reduced pressure at the top of the siphon (formally, hydrostatic pressure when the liquid is not moving). This reduced pressure at the top means gravity pulling down on the shorter column of liquid is not sufficient to keep the liquid stationary against the atmospheric pressure pushing it up into the reduced-pressure zone at the top of the siphon.
So the liquid flows from the higher-pressure area of the upper reservoir up to the lower-pressure zone at the top of the siphon, over the top, and then, with the help of gravity and a taller column of liquid, down to the higher-pressure zone at the exit. [ 2 ] [ 18 ] The chain model is a useful but not completely accurate conceptual model of a siphon. The chain model helps to explain how a siphon can cause liquid to flow uphill, powered only by the downward force of gravity. A siphon can sometimes be thought of as a chain hanging over a pulley, with one end of the chain piled on a higher surface than the other. Since the length of chain on the shorter side is lighter than the length of chain on the taller side, the heavier chain on the taller side will move down and pull up the chain on the lighter side. Like a siphon, the chain model is simply powered by gravity acting on the heavier side, and there is clearly no violation of conservation of energy, because the chain is ultimately just moving from a higher to a lower location, as the liquid does in a siphon. There are a number of problems with the chain model of a siphon, and understanding these differences helps to explain the actual workings of siphons. First, unlike in the chain model of the siphon, it is not actually the weight on the taller side compared to the shorter side that matters. Rather, it is the difference in height from the reservoir surfaces to the top of the siphon that determines the balance of pressure . For example, if the tube from the upper reservoir to the top of the siphon has a much larger diameter than the taller section of tube from the lower reservoir to the top of the siphon, the shorter upper section of the siphon may have a much larger weight of liquid in it, and yet the lighter volume of liquid in the down tube can pull liquid up the fatter up tube, and the siphon can function normally.
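The pressure argument above can be made concrete with a few lines of arithmetic. The liquid (water), the densities and the heights used here are illustrative assumptions, not values from the text; the hydrostatic relation p = p_atm − ρgh is the standard one:

```python
# Hydrostatic pressure at the crest of a siphon, evaluated from each
# leg: p_crest = p_atm − ρ·g·h, with h the height of the crest above
# the liquid surface on that side.
RHO = 1000.0      # density of water, kg/m³ (assumed)
G = 9.81          # gravitational acceleration, m/s²
P_ATM = 101325.0  # standard atmospheric pressure, Pa

h_up = 1.5    # crest height above the upper reservoir surface, m
h_down = 3.5  # crest height above the outlet, m (the taller column)

p_from_up_leg = P_ATM - RHO * G * h_up
p_from_down_leg = P_ATM - RHO * G * h_down

# The taller column cancels more of the atmospheric push, so the
# crest pressure "seen" from the down leg is lower, and liquid flows
# from the upper reservoir toward the outlet. The driving imbalance:
print(p_from_up_leg - p_from_down_leg)   # ρ·g·Δh ≈ 19620 Pa
```

Note that the tube diameters never enter this calculation, which is exactly why the fat-tube example above works: only the heights matter, not the weights of liquid in each leg.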
[ 19 ] Another difference is that under most practical circumstances, dissolved gases, vapor pressure, and (sometimes) lack of adhesion with tube walls, conspire to render the tensile strength within the liquid ineffective for siphoning. Thus, unlike a chain, which has significant tensile strength, liquids usually have little tensile strength under typical siphon conditions, and therefore the liquid on the rising side cannot be pulled up in the way the chain is pulled up on the rising side. [ 7 ] [ 18 ] An occasional misunderstanding of siphons is that they rely on the tensile strength of the liquid to pull the liquid up and over the rise. [ 2 ] [ 18 ] While water has been found to have a significant tensile strength in some experiments (such as with the z-tube [ 20 ] ), and siphons in vacuum rely on such cohesion, common siphons can easily be demonstrated to need no liquid tensile strength at all to function. [ 7 ] [ 2 ] [ 18 ] Furthermore, since common siphons operate at positive pressures throughout the siphon, there is no contribution from liquid tensile strength, because the molecules are actually repelling each other in order to resist the pressure, rather than pulling on each other. [ 7 ] To demonstrate, the longer lower leg of a common siphon can be plugged at the bottom and filled almost to the crest with liquid as in the figure, leaving the top and the shorter upper leg completely dry and containing only air. When the plug is removed and the liquid in the longer lower leg is allowed to fall, the liquid in the upper reservoir will then typically sweep the air bubble down and out of the tube. The apparatus will then continue to operate as a normal siphon. As there is no contact between the liquid on either side of the siphon at the beginning of this experiment, there can be no cohesion between the liquid molecules to pull the liquid over the rise. 
It has been suggested by advocates of the liquid tensile strength theory, that the air start siphon only demonstrates the effect as the siphon starts, but that the situation changes after the bubble is swept out and the siphon achieves steady flow. But a similar effect can be seen in the flying-droplet siphon (see above). The flying-droplet siphon works continuously without liquid tensile strength pulling the liquid up. The siphon in the video demonstration operated steadily for more than 28 minutes until the upper reservoir was empty. Another simple demonstration that liquid tensile strength is not needed in the siphon is to simply introduce a bubble into the siphon during operation. The bubble can be large enough to entirely disconnect the liquids in the tube before and after the bubble, defeating any liquid tensile strength, and yet if the bubble is not too big, the siphon will continue to operate with little change as it sweeps the bubble out. Another common misconception about siphons is that because the atmospheric pressure is virtually identical at the entrance and exit, the atmospheric pressure cancels, and therefore atmospheric pressure cannot be pushing the liquid up the siphon. But equal and opposite forces may not completely cancel if there is an intervening force that counters some or all of one of the forces. In the siphon, the atmospheric pressure at the entrance and exit are both lessened by the force of gravity pulling down the liquid in each tube, but the pressure on the down side is lessened more by the taller column of liquid on the down side. In effect, the atmospheric pressure coming up the down side does not entirely "make it" to the top to cancel all of the atmospheric pressure pushing up the up side. This effect can be seen more easily in the example of two carts being pushed up opposite sides of a hill. 
As shown in the diagram, even though the person on the left seems to have his push canceled entirely by the equal and opposite push from the person on the right, the person on the left's seemingly canceled push is still the source of the force to push the left cart up. In some situations siphons do function in the absence of atmospheric pressure and due to tensile strength – see vacuum siphons – and in these situations the chain model can be instructive. Further, in other settings water transport does occur due to tension, most significantly in transpirational pull in the xylem of vascular plants . [ 2 ] [ 21 ] Water and other liquids may seem to have no tensile strength because when a handful is scooped up and pulled on, the liquids narrow and pull apart effortlessly. But liquid tensile strength in a siphon is possible when the liquid adheres to the tube walls and thereby resists narrowing. Any contamination on the tube walls, such as grease or air bubbles, or other minor influences such as turbulence or vibration, can cause the liquid to detach from the walls and lose all tensile strength. In more detail, one can look at how the hydrostatic pressure varies through a static siphon, considering in turn the vertical tube from the top reservoir, the vertical tube from the bottom reservoir, and the horizontal tube connecting them (assuming a U-shape). At liquid level in the top reservoir, the liquid is under atmospheric pressure, and as one goes up the siphon, the hydrostatic pressure decreases (under vertical pressure variation ), since the weight of atmospheric pressure pushing the water up is counterbalanced by the column of water in the siphon pushing down (until one reaches the maximal height of a barometer/siphon, at which point the liquid cannot be pushed higher) – the hydrostatic pressure at the top of the tube is then lower than atmospheric pressure by an amount proportional to the height of the tube. 
Doing the same analysis on the tube rising from the lower reservoir yields the pressure at the top of that (vertical) tube; this pressure is lower because the tube is longer (there is more water pushing down), and requires that the lower reservoir is lower than the upper reservoir, or more generally that the discharge outlet simply be lower than the surface of the upper reservoir. Considering now the horizontal tube connecting them, one sees that the pressure at the top of the tube from the top reservoir is higher (since less water is being lifted), while the pressure at the top of the tube from the bottom reservoir is lower (since more water is being lifted), and since liquids move from high pressure to low pressure, the liquid flows across the horizontal tube from the top basin to the bottom basin. The liquid is under positive pressure (compression) throughout the tube, not tension. Bernoulli's equation is considered in the scientific literature to be a fair approximation to the operation of the siphon. In non-ideal fluids, compressibility, tensile strength and other characteristics of the working fluid (or multiple fluids) complicate Bernoulli's equation. Once started, a siphon requires no additional energy to keep the liquid flowing up and out of the reservoir. The siphon will draw liquid out of the reservoir until the level falls below the intake, allowing air or other surrounding gas to break the siphon, or until the outlet of the siphon equals the level of the reservoir, whichever comes first. In addition to atmospheric pressure , the density of the liquid, and gravity , the maximal height of the crest in practical siphons is limited by the vapour pressure of the liquid. When the pressure within the liquid drops to below the liquid's vapor pressure, tiny vapor bubbles can begin to form at the high point, and the siphon effect will end. 
This effect depends on how efficiently the liquid can nucleate bubbles; in the absence of impurities or rough surfaces to act as easy nucleation sites for bubbles, siphons can temporarily exceed their standard maximal height during the extended time it takes bubbles to nucleate. One siphon of degassed water was demonstrated to 24 m (79 feet ) for an extended period of time [ 8 ] and other controlled experiments to 10 m (33 feet ). [ 22 ] For water at standard atmospheric pressure , the maximal siphon height is approximately 10 m (33 feet); for mercury it is 76 cm (30 inches ), which is the definition of standard pressure. This equals the maximal height of a suction pump , which operates by the same principle. [ 17 ] [ 23 ] The ratio of heights (about 13.6) equals the ratio of densities of water and mercury (at a given temperature), since the column of water (resp. mercury) is balancing with the column of air yielding atmospheric pressure, and indeed maximal height is (neglecting vapor pressure and velocity of liquid) inversely proportional to density of liquid. In 1948, Malcolm Nokes investigated siphons working in both air pressure and in a partial vacuum ; for siphons in vacuum he concluded: "The gravitational force on the column of liquid in the downtake tube less the gravitational force in the uptake tube causes the liquid to move. The liquid is therefore in tension and sustains a longitudinal strain which, in the absence of disturbing factors, is insufficient to break the column of liquid". But for siphons of small uptake height working at atmospheric pressure, he wrote: "... the tension of the liquid column is neutralized and reversed by the compressive effect of the atmosphere on the opposite ends of the liquid column." [ 7 ] Potter and Barnes at the University of Edinburgh revisited siphons in 1971. They re-examined the theories of the siphon and ran experiments on siphons in air pressure. 
They concluded: "By now it should be clear that, despite a wealth of tradition, the basic mechanism of a siphon does not depend upon atmospheric pressure." [ 24 ] Gravity , pressure and molecular cohesion were the focus of work in 2010 by Hughes at the Queensland University of Technology . He used siphons at air pressure and his conclusion was: "The flow of water out of the bottom of a siphon depends on the difference in height between the inflow and outflow, and therefore cannot be dependent on atmospheric pressure…" [ 21 ] Hughes did further work on siphons at air pressure in 2011 and concluded: "The experiments described above demonstrate that ordinary siphons at atmospheric pressure operate through gravity and not atmospheric pressure". [ 25 ] The father and son researchers Ramette and Ramette successfully siphoned carbon dioxide under air pressure in 2011 and concluded that molecular cohesion is not required for the operation of a siphon, but: "The basic explanation of siphon action is that, once the tube is filled, the flow is initiated by the greater pull of gravity on the fluid on the longer side compared with that on the short side. This creates a pressure drop throughout the siphon tube, in the same sense that 'sucking' on a straw reduces the pressure along its length all the way to the intake point. The ambient atmospheric pressure at the intake point responds to the reduced pressure by forcing the fluid upwards, sustaining the flow, just as in a steadily sucked straw in a milkshake." [ 1 ] Again in 2011, Richert and Binder (at the University of Hawaii ) examined the siphon and concluded that molecular cohesion is not required for the operation of a siphon but relies upon gravity and a pressure differential, writing: "As the fluid initially primed on the long leg of the siphon rushes down due to gravity, it leaves behind a partial vacuum that allows pressure on the entrance point of the higher container to push fluid up the leg on that side". 
[ 2 ] The research team of Boatwright, Puttick, and Licence, all at the University of Nottingham , succeeded in running a siphon in high vacuum , also in 2011. They wrote: "It is widely believed that the siphon is principally driven by the force of atmospheric pressure. An experiment is described that shows that a siphon can function even under high-vacuum conditions. Molecular cohesion and gravity are shown to be contributing factors in the operation of a siphon; the presence of a positive atmospheric pressure is not required". [ 26 ] Writing in Physics Today in 2011, J. Dooley from Millersville University stated that both a pressure differential within the siphon tube and the tensile strength of the liquid are required for a siphon to operate. [ 27 ] A researcher at Humboldt State University , A. McGuire, examined flow in siphons in 2012. Using the advanced general-purpose multiphysics simulation software package LS-DYNA he examined pressure initialisation, flow, and pressure propagation within a siphon. He concluded: "Pressure, gravity and molecular cohesion can all be driving forces in the operation of siphons". [ 3 ] In 2014, Hughes and Gurung (at the Queensland University of Technology) ran a water siphon under varying air pressures ranging from sea level to 11.9 km ( 39 000 ft ) altitude. They noted: "Flow remained more or less constant during ascension indicating that siphon flow is independent of ambient barometric pressure ". They used Bernoulli's equation and the Poiseuille equation to examine pressure differentials and fluid flow within a siphon. Their conclusion was: "It follows from the above analysis that there must be a direct cohesive connection between water molecules flowing in and out of a siphon. This is true at all atmospheric pressures in which the pressure in the apex of the siphon is above the vapour pressure of water, an exception being ionic liquids". [ 28 ] A plain tube can be used as a siphon. 
An external pump has to be applied to start the liquid flowing and prime the siphon (in home use this is often done by a person inhaling through the tube until enough of it has filled with liquid; this may pose danger to the user, depending on the liquid that is being siphoned). This is sometimes done with any leak-free hose to siphon gasoline from a motor vehicle's gasoline tank to an external tank. (Siphoning gasoline by mouth often results in the accidental swallowing of gasoline, or aspirating it into the lungs, which can cause death or lung damage. [ 29 ] ) If the tube is flooded with liquid before part of the tube is raised over the intermediate high point and care is taken to keep the tube flooded while it is being raised, no pump is required. Devices sold as siphons often come with a siphon pump to start the siphon process. In some applications it can be helpful to use siphon tubing that is not much larger than necessary. Using piping of too great a diameter and then throttling the flow using valves or constrictive piping appears to increase the effect of previously cited concerns over gases or vapor collecting in the crest which serve to break the vacuum. If the vacuum is reduced too much, the siphon effect can be lost. Reducing the size of pipe used closer to requirements appears to reduce this effect and creates a more functional siphon that does not require constant re-priming and restarting. In this respect, where the requirement is to match a flow into a container with a flow out of said container (to maintain a constant level in a pond fed by a stream, for example) it would be preferable to utilize two or three smaller separate parallel pipes that can be started as required rather than attempting to use a single large pipe and attempting to throttle it. Siphons are sometimes employed as automatic machines, in situations where it is desirable to turn a continuous trickling flow or an irregular small surge flow into a large surge volume. 
A common example of this is a public restroom with urinals regularly flushed by an automatic siphon in a small water tank overhead. When the container is filled, all the stored liquid is released, emerging as a large surge volume that then resets and fills again. One way to achieve this intermittent action involves complex machinery such as floats, chains, levers, and valves, but these can corrode, wear out, or jam over time. An alternate method uses rigid pipes and chambers, with only the water itself in a siphon as the operating mechanism. A siphon used in an automatic unattended device needs to be able to function reliably without failure. This differs from the common demonstration self-starting siphons in that there are ways the siphon can fail to function which require manual intervention to return to normal surge flow operation. The most common failure is for the liquid to dribble out slowly, matching the rate at which the container is filling, so that the siphon enters an undesired steady-state condition. Preventing dribbling typically involves pneumatic principles to trap one or more large air bubbles in various pipes, which are sealed by water traps. This method can fail if the mechanism cannot begin intermittent operation without water already present in parts of it, water which will not be there if the mechanism starts from a dry state. A second problem is that the trapped air pockets will shrink over time if the siphon is not operating because there is no inflow. The air in the pockets is absorbed by the liquid, which pulls liquid up into the piping until the air pocket disappears; this can trigger water flow outside the normal operating range when the storage tank is not full, leading to loss of the liquid seal in lower parts of the mechanism. A third problem is where the lower end of the liquid seal is simply a U-trap bend in an outflow pipe.
During vigorous emptying, the kinetic motion of the liquid out of the outflow can propel too much liquid out, causing a loss of the sealing volume in the outflow trap and loss of the trapped air bubble needed to maintain intermittent operation. A fourth problem involves seep holes in the mechanism, intended to slowly refill these various sealing chambers when the siphon is dry. The seep holes can be plugged by debris and corrosion, requiring manual cleaning and intervention. To prevent this, the siphon may be restricted to pure liquid sources, free of solids or precipitate. Many automatic siphon mechanisms have been invented, going back to at least the 1850s, that attempt to overcome these problems using various pneumatic and hydrodynamic principles. When certain liquids need to be purified, siphoning can help prevent either the bottom ( dregs ) or the top ( foam and floaties) from being transferred out of one container into a new container. Siphoning is thus useful in the fermentation of wine and beer, since it can keep unwanted impurities out of the new container. Self-constructed siphons, made of pipes or tubes, can be used to evacuate water from cellars after flooding. A connection is built between the flooded cellar and a lower point outside, using a tube or some pipes. They are filled with water through an intake valve (at the highest end of the construction). When the ends are opened, the water flows through the pipe into the sewer or the river. Siphoning is common in irrigated fields to transfer a controlled amount of water from a ditch, over the ditch wall, into furrows. Large siphons may be used in municipal waterworks and industry. Their size requires control via valves at the intake, outlet and crest of the siphon. The siphon may be primed by closing the intake and outlets and filling the siphon at the crest. If intakes and outlets are submerged, a vacuum pump may be applied at the crest to prime the siphon.
Alternatively the siphon may be primed by a pump at either the intake or outlet. Gas in the liquid is a concern in large siphons. [ 30 ] The gas tends to accumulate at the crest, and if enough accumulates to break the flow of liquid, the siphon stops working. The siphon itself exacerbates the problem because as the liquid is raised through the siphon, the pressure drops, causing dissolved gases within the liquid to come out of solution. Higher temperature accelerates the release of gas from liquids, so maintaining a constant, low temperature helps. The longer the liquid is in the siphon, the more gas is released, so a shorter siphon overall helps. Local high points will trap gas, so the intake and outlet legs should have continuous slopes without intermediate high points. The flow of the liquid moves bubbles, so the intake leg can have a shallow slope, as the flow will push the gas bubbles to the crest. Conversely, the outlet leg needs to have a steep slope to allow the bubbles to move against the liquid flow; though other designs call for a shallow slope in the outlet leg as well to allow the bubbles to be carried out of the siphon. At the crest the gas can be trapped in a chamber above the crest. The chamber needs to be occasionally primed again with liquid to remove the gas. A siphon rain gauge is a rain gauge that can record rainfall over an extended period. A siphon is used to automatically empty the gauge. It is often simply called a "siphon gauge" and is not to be confused with a siphon pressure gauge. A siphon drainage method is being implemented in several expressways as of 2022. Recent studies found that it can reduce groundwater level behind expressway retaining walls, and there was no indication of clogging. This new drainage system is being pioneered as a long-term method to limit leakage hazard in the retaining wall.
[ 31 ] Siphon drainage is also used in draining unstable slopes, and siphon roof-water drainage systems have been in use since the 1960s. [ 32 ] [ 33 ] A siphon spillway in a dam is usually not technically a siphon, as it is generally used to drain elevated water levels. [ 34 ] However, a siphon spillway operates as an actual siphon if it raises the flow higher than the surface of the source reservoir, as sometimes is the case when used in irrigation. [ 35 ] [ 21 ] In operation, a siphon spillway is considered to be "pipe flow" or "closed-duct flow". [ 36 ] A normal spillway flow is pressurized by the height of the reservoir above the spillway, whereas a siphon flow rate is governed by the difference in height of the inlet and outlet. [ citation needed ] Some designs make use of an automatic system that uses the flow of water in a spiral vortex to remove the air above to prime the siphon. Such a design includes the volute siphon. [ 37 ] Flush toilets often have some siphon effect as the bowl empties. Some toilets also use the siphon principle to obtain the actual flush from the cistern . The flush is triggered by a lever or handle that operates a simple diaphragm-like piston pump that lifts enough water to the crest of the siphon to start the flow of water which then completely empties the contents of the cistern into the toilet bowl. The advantage of this system was that no water would leak from the cistern excepting when flushed. These were mandatory in the UK until 2011. [ 38 ] [ failed verification ] Early urinals incorporated a siphon in the cistern which would flush automatically on a regular cycle because there was a constant trickle of clean water being fed to the cistern by a slightly open valve. While if both ends of a siphon are at atmospheric pressure, liquid flows from high to low, if the bottom end of a siphon is pressurized, liquid can flow from low to high. 
If pressure is removed from the bottom end, the liquid flow will reverse, illustrating that it is pressure driving the siphon. An everyday illustration of this is the siphon coffee brewer, which works as follows (designs vary; this is a standard design, omitting coffee grounds): In practice, the top vessel is filled with coffee grounds, and the heat is removed from the bottom vessel when the coffee has finished brewing. What this means concretely is that boiling converts high-density water (a liquid) into low-density steam (a gas), which expands to take up more volume and so raises the pressure in the closed lower vessel. This pressure from the expanding steam then forces the liquid up the siphon; when the steam condenses back to water, the pressure decreases and the liquid flows back down. While a simple siphon cannot output liquid at a level higher than the source reservoir, a more complicated device utilizing an airtight metering chamber at the crest and a system of automatic valves may discharge liquid on an ongoing basis, at a level higher than the source reservoir, without outside pumping energy being added. It can accomplish this despite what initially appears to be a violation of conservation of energy because it can take advantage of the energy of a large volume of liquid dropping some distance to raise and discharge a small volume of liquid above the source reservoir. Thus it might be said to "require" a large quantity of falling liquid to power the dispensing of a small quantity. Such a system typically operates in a cyclical or start/stop but ongoing and self-powered manner. [ 39 ] [ 40 ] Ram pumps do not work in this way. These metering pumps are true siphon pumping devices which use siphons as their power source. An inverted siphon is not a siphon but a term applied to pipes that must dip below an obstruction to form a U-shaped flow path.
Large inverted siphons are used to convey water being carried in canals or flumes across valleys, for irrigation or gold mining. The Romans used inverted siphons of lead pipes to cross valleys that were too big to construct an aqueduct . [ 41 ] [ 42 ] [ 43 ] Inverted siphons are commonly called traps for their function in preventing sewer gases from coming back out of sewers [ 44 ] and sometimes making dense objects like rings and electronic components retrievable after falling into a drain. [ 45 ] [ 46 ] Liquid flowing in one end simply forces liquid up and out the other end, but solids like sand will accumulate. This is especially important in sewerage systems or culverts which must be routed under rivers or other deep obstructions where the better term is "depressed sewer". [ 47 ] [ 48 ] [ failed verification ] Back siphonage is a plumbing term applied to the reversal of normal water flow in a plumbing system due to sharply reduced or negative pressure on the water supply side, such as high demand on water supply by fire-fighting ; [ 49 ] it is not an actual siphon as it is suction . [ 50 ] Back siphonage is rare as it depends on submerged inlets at the outlet (home) end and these are uncommon. [ 51 ] Back siphonage is not to be confused with backflow ; which is the reversed flow of water from the outlet end to the supply end caused by pressure occurring at the outlet end. [ 51 ] Also, building codes usually demand a check valve where the water supply enters a building to prevent backflow into the drinking water system. Building codes often contain specific sections on back siphonage and especially for external faucets (See the sample building code quote, below). Backflow prevention devices such as anti-siphon valves [ 52 ] are required in such designs. The reason is that external faucets may be attached to hoses which may be immersed in an external body of water, such as a garden pond , swimming pool , aquarium or washing machine . 
In these situations the unwanted flow is not actually the result of a siphon but suction due to reduced pressure on the water supply side. Should the pressure within the water supply system fall, the external water may be returned by back pressure into the drinking water system through the faucet. Another possible contamination point is the water intake in the toilet tank. An anti-siphon valve is also required here to prevent pressure drops in the water supply line from suctioning water out of the toilet tank (which may contain additives such as "toilet blue" [ 53 ] ) and contaminating the water system. Anti-siphon valves function as a one-direction check valve . Anti-siphon valves are also used medically. Hydrocephalus , or excess fluid in the brain, may be treated with a shunt which drains cerebrospinal fluid from the brain. All shunts have a valve to relieve excess pressure in the brain. The shunt may lead into the abdominal cavity such that the shunt outlet is significantly lower than the shunt intake when the patient is standing. Thus a siphon effect may take place and instead of simply relieving excess pressure, the shunt may act as a siphon, completely draining cerebrospinal fluid from the brain. The valve in the shunt may be designed to prevent this siphon action so that negative pressure on the drain of the shunt does not result in excess drainage. Only excess positive pressure from within the brain should result in drainage. [ 54 ] [ 55 ] [ 56 ] The anti-siphon valve in medical shunts prevents excess forward flow of liquid; in plumbing systems, the anti-siphon valve prevents backflow. Sample building code regulations regarding "back siphonage" from the Canadian province of Ontario : [ 57 ] Along with anti-siphon valves, anti-siphoning devices also exist. The two are unrelated in application. Siphoning can be used to remove fuel from tanks. With the cost of fuel increasing, it has been linked in several countries to the rise in fuel theft .
Trucks, with their large fuel tanks, are most vulnerable. The anti-siphon device prevents thieves from inserting a tube into the fuel tank. A siphon barometer is the term sometimes applied to the simplest of mercury barometers . A continuous U-shaped tube of the same diameter throughout is sealed on one end and filled with mercury. When placed in the upright "U" position, mercury will flow away from the sealed end, forming a partial vacuum, until balanced by atmospheric pressure on the other end. The term "siphon" derives from the belief that air pressure is involved in the operation of a siphon. The difference in height of the fluid between the two arms of the U-shaped tube is the same as the maximum intermediate height of a siphon. When used to measure pressures other than atmospheric pressure, a siphon barometer is sometimes called a siphon gauge ; these are not siphons but follow a standard U-shaped design, [ 58 ] leading to the term. Siphon barometers are still produced as precision instruments. [ 59 ] Siphon barometers should not be confused with a siphon rain gauge. [ 60 ] A siphon bottle (also called a soda syphon or, archaically, a siphoid [ 61 ] ) is a pressurized bottle with a vent and a valve. It is not a siphon, as pressure within the bottle drives the liquid up and out a tube. A special form was the gasogene . A siphon cup is the (hanging) reservoir of paint attached to a spray gun; it is not a siphon, as a vacuum pump extracts the paint. [ 62 ] This name distinguishes it from gravity-fed reservoirs. An archaic use of the term is a cup of oil in which the oil is transported out of the cup via a cotton wick or tube to a surface to be lubricated; this is not a siphon but an example of capillary action . Heron's siphon is not a siphon, as it works as a gravity-driven pressure pump; [ 63 ] [ 64 ] at first glance it appears to be a perpetual motion machine but it will stop when the air in the priming pump is depleted.
In a slightly different configuration, it is also known as Heron's fountain . [ 65 ] A venturi siphon, also known as an eductor , is not a siphon but a form of vacuum pump using the Venturi effect of fast flowing fluids (e.g. air) to produce low pressures to suction other fluids; a common example is the carburetor . See pressure head . The low pressure at the throat of the venturi is called a siphon when a second fluid is introduced, or an aspirator when the fluid is air; this is an example of the misconception that air pressure is the operating force for siphons. Despite the name, siphonic roof drainage does not work as a siphon; the technology makes use of gravity-induced vacuum pumping [ 66 ] to carry water horizontally from multiple roof drains to a single downpipe and to increase flow velocity. [ 67 ] Metal baffles at the roof drain inlets reduce the injection of air, which increases the efficiency of the system. [ 68 ] One benefit of this drainage technique is reduced capital costs in construction compared to traditional roof drainage. [ 66 ] Another benefit is the elimination of the pipe pitch or gradient required for conventional roof drainage piping. However, this system of gravity pumping is mainly suitable for large buildings and is not usually suitable for residential properties. [ 68 ] The term self-siphon is used in a number of ways. Liquids that are composed of long polymers can "self-siphon", [ 69 ] [ 70 ] and these liquids do not depend on atmospheric pressure. Self-siphoning polymer liquids work the same as the siphon-chain model, where the lower part of the chain pulls the rest of the chain up and over the crest. This phenomenon is also called a tubeless siphon . [ 71 ] "Self-siphon" is also often used in sales literature by siphon manufacturers to describe portable siphons that contain a pump. With the pump, no external suction (e.g. from a person's mouth/lungs) is required to start the siphon and thus the product is described as a "self-siphon".
If the upper reservoir is such that the liquid there can rise above the height of the siphon crest, the rising liquid in the reservoir can "self-prime" the siphon, and the whole apparatus may then be described as a "self-siphon". [ 72 ] Once primed, such a siphon will continue to operate until the level of the upper reservoir falls below the intake of the siphon. Such self-priming siphons are useful in some rain gauges and dams. The term "siphon" is used for a number of structures in human and animal anatomy, either because flowing liquids are involved or because the structure is shaped like a siphon, but in which no actual siphon effect is occurring: see Siphon (disambiguation) . There has been debate about whether the siphon mechanism plays a role in blood circulation . In the 'closed loop' of circulation this was discounted: "In contrast, in 'closed' systems, like the circulation, gravity does not hinder uphill flow nor does it cause downhill flow, because gravity acts equally on the ascending and descending limbs of the circuit", but for "historical reasons" the term is used. [ 73 ] [ 74 ] One hypothesis (in 1989) was that a siphon existed in the circulation of the giraffe . [ 75 ] But further research in 2004 found that, "There is no hydrostatic gradient and since the 'fall' of fluid does not assist the ascending arm, there is no siphon. The giraffe's high arterial pressure, which is sufficient to raise the blood 2 m from heart to head with sufficient remaining pressure to perfuse the brain, supports this concept." [ 74 ] [ 76 ] However, a paper written in 2005 urged more research on the hypothesis: The principle of the siphon is not species specific and should be a fundamental principle of closed circulatory systems. Therefore, the controversy surrounding the role of the siphon principle may best be resolved by a comparative approach.
Analyses of blood pressure on a variety of long-necked and long-bodied animals, which take into account phylogenetic relatedness, will be important. In addition experimental studies that combined measurements of arterial and venous blood pressures, with cerebral blood flow, under a variety of gravitational stresses (different head positions), will ultimately resolve this controversy. [ 77 ] Some species are named after siphons because they resemble siphons in whole or in part. Geosiphons are fungi . There are species of alga belonging to the family Siphonocladaceae in the phylum Chlorophyta [ 78 ] which have tube-like structures. Ruellia villosa is a tropical plant in the family Acanthaceae that is also known by the botanical synonym Siphonacanthus villosus Nees. [ 79 ] In speleology, a siphon or a sump is that part of a cave passage that lies under water and through which cavers have to dive to progress further into the cave system , but it is not an actual siphon. A river siphon occurs when part of the water flow passes under a submerged object like a rock or tree trunk. The water flowing under the obstruction can be very powerful, and as such can be very dangerous for kayaking, canyoning, and other river-based watersports. Bernoulli's equation may be applied to a siphon to derive its ideal flow rate and theoretical maximum height. Bernoulli's equation: Apply Bernoulli's equation to the surface of the upper reservoir. The surface is technically falling as the upper reservoir is being drained. However, for this example we will assume the reservoir to be infinite and the velocity of the surface may be set to zero. Furthermore, the pressure at both the surface and the exit point C is atmospheric pressure.
Thus: Apply Bernoulli's equation to point A at the start of the siphon tube in the upper reservoir where P = P A , v = v A and y = − d Apply Bernoulli's equation to point B at the intermediate high point of the siphon tube where P = P B , v = v B and y = h B Apply Bernoulli's equation to point C where the siphon empties. Where v = v C and y = − h C . Furthermore, the pressure at the exit point is atmospheric pressure. Thus: As the siphon is a single system, the constant in all four equations is the same. Setting equations 1 and 4 equal to each other gives: Solving for v C : The velocity of the siphon is thus driven solely by the height difference between the surface of the upper reservoir and the drain point. The height of the intermediate high point, h B , does not affect the velocity of the siphon. However, as the siphon is a single system, v B = v C and the intermediate high point does limit the maximum velocity. The drain point cannot be lowered indefinitely to increase the velocity. Equation 3 will limit the velocity to retain a positive pressure at the intermediate high point to prevent cavitation . The maximum velocity may be calculated by combining equations 1 and 3: Setting P B = 0 and solving for v max : The depth, − d , of the initial entry point of the siphon in the upper reservoir, does not affect the velocity of the siphon. No limit to the depth of the siphon start point is implied by Equation 2 as pressure P A increases with depth d . Both these facts imply the operator of the siphon may bottom skim or top skim the upper reservoir without impacting the siphon's performance. This equation for the velocity is the same as that of any object falling height h C . This equation assumes P C is atmospheric pressure. If the end of the siphon is below the surface, the height to the end of the siphon cannot be used; rather the height difference between the reservoirs should be used. 
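The two results derived above, an exit velocity set only by the surface-to-outlet drop and a cavitation-limited maximum set by the crest height, can be sketched numerically. The heights and function names below are illustrative assumptions, not from the source:

```python
import math

# Sketch of the ideal-flow results derived above (illustrative values):
#   v_C   = sqrt(2 * g * h_C)                exit velocity for drop h_C
#   v_max = sqrt(2 * (P_atm / rho - g * h_B))  limit before P_B reaches zero
P_ATM = 101325.0  # Pa
G = 9.81          # m/s^2

def exit_velocity(h_c):
    """Ideal exit velocity for an outlet a height h_c below the source surface."""
    return math.sqrt(2 * G * h_c)

def max_velocity(h_b, rho=1000.0):
    """Velocity at which pressure at a crest of height h_b drops to zero."""
    return math.sqrt(2 * (P_ATM / rho - G * h_b))

v_c = exit_velocity(1.2)    # ~4.85 m/s for a 1.2 m drop
v_lim = max_velocity(5.0)   # ~10.2 m/s for water with a 5 m crest

# The crest height does not set the siphon's speed, but it does cap it:
assert v_c < v_lim
```

Note that `exit_velocity` matches the free-fall speed from height h_C, as the text observes, and that neither the crest height h_B nor the intake depth d appears in it.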
Although siphons can exceed the barometric height of the liquid in special circumstances, e.g. when the liquid is degassed and the tube is clean and smooth, [ 80 ] in general the practical maximum height can be found as follows. Setting equations 1 and 3 equal to each other gives: Maximum height of the intermediate high point occurs when it is so high that the pressure at the intermediate high point is zero; in typical scenarios this will cause the liquid to form bubbles and if the bubbles enlarge to fill the pipe then the siphon will "break". Setting P B = 0: Solving for h B : This means that the height of the intermediate high point is limited by pressure along the streamline being always greater than zero. This is the maximum height that a siphon will work. Substituting values will give approximately 10 m (33 feet) for water and, by definition of standard pressure , 0.76 m (760 mm; 30 in) for mercury. The ratio of heights (about 13.6) equals the ratio of densities of water and mercury (at a given temperature). As long as this condition is satisfied (pressure greater than zero), the flow at the output of the siphon is still only governed by the height difference between the source surface and the outlet. Volume of fluid in the apparatus is not relevant as long as the pressure head remains above zero in every section. Because pressure drops when velocity is increased, a static siphon (or manometer) can have a slightly higher height than a flowing siphon. Experiments have shown that siphons can operate in a vacuum, via cohesion and tensile strength between molecules, provided that the liquids are pure and degassed and surfaces are very clean. [ 4 ] [ 81 ] [ 6 ] [ 7 ] [ 82 ] [ 83 ] [ 84 ] [ 26 ] The Oxford English Dictionary (OED) entry on siphon , published in 1911, states that a siphon works by atmospheric pressure . Stephen Hughes of Queensland University of Technology criticized this in a 2010 article [ 21 ] which was widely reported in the media. 
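The maximum-crest-height result above can be sketched as code; the constants are nominal illustrative values. Setting the crest pressure P_B to zero gives h_B = P_atm / (ρ g) − v_B² / (2g), so a static siphon reaches the barometric height and a flowing one slightly less:

```python
# Sketch of the maximum crest height derived above (nominal constants):
#   h_B_max = P_atm / (rho * g) - v**2 / (2 * g)
# A static siphon (v = 0) reaches the barometric height; flow lowers it.
P_ATM = 101325.0   # standard atmospheric pressure, Pa
G = 9.80665        # standard gravity, m/s^2

def max_crest_height(rho, v=0.0):
    """Crest height at which pressure reaches zero, for liquid density rho
    (kg/m^3) flowing at speed v (m/s); vapour pressure is neglected."""
    return P_ATM / (rho * G) - v ** 2 / (2 * G)

h_water_static = max_crest_height(1000.0)          # ~10.3 m
h_mercury_static = max_crest_height(13595.1)       # ~0.76 m
h_water_flowing = max_crest_height(1000.0, v=3.0)  # lower than the static case
assert h_water_flowing < h_water_static
```

The velocity term is why, as the text says, a static siphon (or manometer) can stand slightly taller than a flowing one.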
[ 85 ] [ 86 ] [ 87 ] [ 88 ] The OED editors stated, "there is continuing debate among scientists as to which view is correct. ... We would expect to reflect this debate in the fully updated entry for siphon, due to be published later this year." [ 89 ] Hughes continued to defend his view of the siphon in a late September post at the Oxford blog. [ 90 ] The 2015 definition by the OED is: A tube used to convey liquid upwards from a reservoir and then down to a lower level of its own accord. Once the liquid has been forced into the tube, typically by suction or immersion, flow continues unaided. The Encyclopædia Britannica currently describes a siphon as: Siphon, also spelled syphon, instrument, usually in the form of a tube bent to form two legs of unequal length, for conveying liquid over the edge of a vessel and delivering it at a lower level. Siphons may be of any size. The action depends upon the influence of gravity (not, as sometimes thought, on the difference in atmospheric pressure; a siphon will work in a vacuum) and upon the cohesive forces that prevent the columns of liquid in the legs of the siphon from breaking under their own weight. At sea level, water can be lifted a little more than 10 metres (33 feet) by a siphon. In civil engineering, pipelines called inverted siphons are used to carry sewage or stormwater under streams, highway cuts, or other depressions in the ground. In an inverted siphon the liquid completely fills the pipe and flows under pressure, as opposed to the open-channel gravity flow that occurs in most sanitary or storm sewers. [ 91 ] The American Society of Mechanical Engineers (ASME) publishes the following Tri-Harmonized Standard:
https://en.wikipedia.org/wiki/Siphon
In computational complexity theory , the Sipser–Lautemann theorem or Sipser–Gács–Lautemann theorem states that bounded-error probabilistic polynomial (BPP) time is contained in the polynomial time hierarchy , and more specifically in Σ 2 ∩ Π 2 . In 1983, Michael Sipser showed that BPP is contained in the polynomial time hierarchy . [ 1 ] Péter Gács showed that BPP is in fact contained in Σ 2 ∩ Π 2 . Clemens Lautemann contributed a simple proof of BPP's membership in Σ 2 ∩ Π 2 , also in 1983. [ 2 ] It is conjectured that in fact BPP = P , which is a much stronger statement than the Sipser–Lautemann theorem. Here we present Lautemann's proof. [ 2 ] Without loss of generality, a machine M ∈ BPP with error ≤ 2^−|x| can be chosen. (All BPP problems can be amplified to reduce the error probability exponentially.) The basic idea of the proof is to define a Σ 2 sentence that is equivalent to stating that x is in the language, L , decided by M , by using a set of transforms of the random input strings. Since the output of M depends on the random input as well as on the input x , it is useful to define the set of random strings that produce the correct output: A ( x ) = { r | M ( x , r ) accepts}. The key to the proof is that when x ∈ L , A ( x ) is very large, and when x ∉ L , A ( x ) is very small. Using bitwise parity , ⊕, a set of transforms can be defined as A ( x ) ⊕ t = { r ⊕ t | r ∈ A ( x )}. The first main lemma of the proof shows that the union of a small finite number of these transforms covers the entire space of random input strings. From this fact, a Σ 2 sentence and a Π 2 sentence can be generated, each true if and only if x ∈ L (see conclusion). The general idea of lemma one is to prove that if A ( x ) covers a large part of the random space R = {0,1}^|r| , then there exists a small set of translations that covers the entire random space. In more mathematical language: if |A(x)| / |R| ≥ 1 − 2^−|x|, then there exist t 1 , t 2 , ..., t |r| ∈ {0,1}^|r| such that ⋃ i ( A ( x ) ⊕ t i ) = R . Proof.
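The translation A(x) ⊕ t is just a bitwise XOR applied to every string in the set. A small Python sketch (with bit strings modelled as integers, and a toy accepting set chosen only for illustration) shows how a handful of translations of a large set can cover the whole space:

```python
# Sketch of the translation A ⊕ t from Lautemann's proof: bit strings of
# length n are modelled as integers 0..2^n - 1, and A ⊕ t = { r ^ t : r in A }.
n = 4
R = set(range(2 ** n))  # the whole random space {0,1}^n

def translate(A: set, t: int) -> set:
    """Return the set A shifted by bitwise XOR with t."""
    return {r ^ t for r in A}

# A "large" accepting set missing only one point, as when x is in L:
A = R - {0b0101}
# With a suitable choice of translations, the union already covers all of R:
ts = [0b0000, 0b0001]
S = set().union(*(translate(A, t) for t in ts))
print(S == R)  # True: the second translation supplies the missing point
```

Here the first translation leaves A unchanged and the second moves some element onto the single uncovered string, so two translations suffice; the lemma shows that |r| random translations suffice with positive probability in general.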
Randomly pick t 1 , t 2 , ..., t |r| ∈ {0,1}^|r| . Let S = ⋃ i ( A ( x ) ⊕ t i ) (the union of all transforms of A ( x )). Then, for each fixed r in R , Pr[ r ∉ S ] = Pr[ r ∉ A(x) ⊕ t 1 ] ⋯ Pr[ r ∉ A(x) ⊕ t |r| ] ≤ 2^(−|x|·|r|), since r ∉ A(x) ⊕ t i exactly when r ⊕ t i ∉ A(x), and the t i are uniform and independent. By the union bound, the probability that there exists at least one element of R not in S is Pr[∃ r ∈ R : r ∉ S ] ≤ 2^|r| · 2^(−|x|·|r|) < 1. Therefore, with nonzero probability the random choice covers all of R , and thus there is a selection of t 1 , t 2 , ..., t |r| such that ⋃ i ( A ( x ) ⊕ t i ) = R . The previous lemma shows that A ( x ) can cover every possible point in the space using a small set of translations. Complementary to this, for x ∉ L only a small fraction of the space is covered by S = ⋃ i ( A ( x ) ⊕ t i ) . We have: |S| / |R| ≤ |r| · |A(x)| / |R| ≤ |r| · 2^−|x| < 1, because |r| is polynomial in |x| and hence much smaller than 2^|x| for long enough inputs. The lemmas show that membership of a language in BPP can be expressed as a Σ 2 expression, as follows: x ∈ L ⟺ ∃ t 1 , ..., t |r| ∀ r ∈ {0,1}^|r| : ⋁ i M ( x , r ⊕ t i ) accepts. That is, x is in the language L if and only if there exist |r| binary vectors t 1 , ..., t |r| such that for every random bit vector r , the TM M accepts r ⊕ t i for at least one i . The above expression is in Σ 2 in that it is first existentially and then universally quantified. Therefore BPP ⊆ Σ 2 . Because BPP is closed under complement , this proves BPP ⊆ Σ 2 ∩ Π 2 . The theorem can be strengthened to B P P ⊆ M A ⊆ S 2 P ⊆ Σ 2 ∩ Π 2 {\displaystyle {\mathsf {BPP}}\subseteq {\mathsf {MA}}\subseteq {\mathsf {S}}_{2}^{P}\subseteq \Sigma _{2}\cap \Pi _{2}} (see MA , S P 2 ). [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Sipser–Lautemann_theorem
Sipuleucel-T , sold under the brand name Provenge , developed by Dendreon Pharmaceuticals, LLC, is a cell-based cancer immunotherapy for prostate cancer (CaP). [ 1 ] [ 3 ] [ 4 ] It is an autologous cellular immunotherapy. [ 1 ] Sipuleucel-T is indicated for the treatment of asymptomatic or minimally symptomatic, metastatic castrate-resistant (hormone-refractory) prostate cancer (HRPC). Other names for this stage are metastatic castration-resistant prostate cancer (mCRPC) and androgen-independent prostate cancer (AI or AIPC). This stage leads to mCRPC with lymph node involvement and distal (distant) tumors; it is the lethal stage of CaP. The prostate cancer staging designation is T4,N1,M1c. [ 5 ] [ 6 ] [ 7 ] A course of treatment consists of three basic steps. Premedication with acetaminophen and an antihistamine is recommended to minimize side effects. [ 8 ] Common side effects include: bladder pain; bloating or swelling of the face, arms, hands, lower legs, or feet; bloody or cloudy urine; body aches or pain; chest pain; chills; confusion; cough; diarrhea; difficult, burning, or painful urination; difficulty breathing; difficulty speaking, up to inability to speak; double vision; sleeplessness; and inability to move the arms, legs, or facial muscles. [ 9 ] [ 10 ] Sipuleucel-T was approved by the U.S. Food and Drug Administration (FDA) on April 29, 2010, to treat asymptomatic or minimally symptomatic metastatic HRPC. [ 11 ] [ 12 ] [ 2 ] [ 13 ] [ 1 ] Shortly afterward, sipuleucel-T was added to the compendium of cancer treatments published by the National Comprehensive Cancer Network (NCCN) as a "category 1" (highest recommendation) treatment for HRPC. The NCCN Compendium is used by Medicare and major health care insurance providers to decide whether a treatment should be reimbursed. [ 14 ] [ 15 ] Sipuleucel-T showed an overall survival (OS) benefit in three double-blind randomized phase III clinical trials : D9901, [ 6 ] D9902a, [ 16 ] [ 17 ] and IMPACT.
[ 5 ] The IMPACT trial [ 5 ] served as the basis for FDA licensing. This trial enrolled 512 patients with asymptomatic or minimally symptomatic metastatic HRPC, randomized in a 2:1 ratio. The median survival time for sipuleucel-T patients was 25.8 months compared with 21.7 months for placebo-treated patients, an increase of 4.1 months. [ 18 ] 31.7% of treated patients survived for 36 months vs. 23.0% in the control arm. [ 19 ] The overall survival benefit was statistically significant (P=0.032). The longer survival without tumor shrinkage or change in progression is surprising, and may suggest the effect of an unmeasured variable. [ 7 ] The trial was conducted pursuant to an FDA Special Protocol Assessment (SPA), a set of guidelines binding trial investigators to specific agreed-upon parameters with respect to trial design, procedures, and endpoints; compliance ensured overall scientific integrity and accelerated FDA approval. [ citation needed ] The D9901 trial [ 6 ] enrolled 127 patients with asymptomatic metastatic HRPC, randomized in a 2:1 ratio. The median survival time for patients treated with sipuleucel-T was 25.9 months compared with 21.4 months for placebo-treated patients. The overall survival benefit was statistically significant ( P=0.01 ). [ citation needed ] The D9902a trial [ 16 ] was designed like the D9901 trial but enrolled 98 patients. The median survival time for patients treated with sipuleucel-T was 19.0 months compared with 15.3 months for placebo-treated patients, but the difference did not reach statistical significance. [ citation needed ] As of August 2014, the PRO Treatment and Early Cancer Treatment (PROTECT) trial, a phase IIIB clinical trial started in 2001, was tracking subjects but no longer enrolling new subjects. [ 20 ] Its purpose is to test efficacy for patients whose CaP is still controlled either by suppression of testosterone through hormone treatment or by surgical castration .
Such patients have usually failed primary treatment with curative intent, whether surgical removal of the prostate, external beam radiation therapy ( EBRT ), internal radiation, BNCT, or high-intensity focused ultrasound ( HIFU ). Such failure is called biochemical failure and is defined as a PSA reading of 2.0 ng/mL above nadir (the lowest reading taken after primary treatment). [ 21 ] As of August 2014, a clinical trial administering sipuleucel-T in conjunction with ipilimumab (Yervoy) was tracking subjects but no longer enrolling new subjects; the trial evaluates the clinical safety and anti-cancer effects (quantified by PSA, radiographic, and T cell response) of the combination therapy in patients with advanced prostate cancer. [ 22 ] This article incorporates public domain material from Dictionary of Cancer Terms . U.S. National Cancer Institute .
https://en.wikipedia.org/wiki/Sipuleucel-T