In calculus and related areas of mathematics, a linear function from the real numbers to the real numbers is a function whose graph (in Cartesian coordinates ) is a non-vertical line in the plane. [ 1 ] The characteristic property of linear functions is that when the input variable is changed, the change in the output is proportional to the change in the input. Linear functions are related to linear equations . A linear function is a polynomial function in which the variable x has degree at most one: [ 2 ] Such a function is called linear because its graph , the set of all points ( x , f ( x ) ) {\displaystyle (x,f(x))} in the Cartesian plane , is a line . The coefficient a is called the slope of the function and of the line (see below). If the slope is a = 0 {\displaystyle a=0} , this is a constant function f ( x ) = b {\displaystyle f(x)=b} defining a horizontal line, which some authors exclude from the class of linear functions. [ 3 ] With this definition, the degree of a linear polynomial would be exactly one, and its graph would be a line that is neither vertical nor horizontal. However, in this article, a ≠ 0 {\displaystyle a\neq 0} is not required, so constant functions will be considered linear. If b = 0 {\displaystyle b=0} then the linear function is said to be homogeneous . Such function defines a line that passes through the origin of the coordinate system, that is, the point ( x , y ) = ( 0 , 0 ) {\displaystyle (x,y)=(0,0)} . In advanced mathematics texts, the term linear function often denotes specifically homogeneous linear functions, while the term affine function is used for the general case, which includes b ≠ 0 {\displaystyle b\neq 0} . The natural domain of a linear function f ( x ) {\displaystyle f(x)} , the set of allowed input values for x , is the entire set of real numbers , x ∈ R . {\displaystyle x\in \mathbb {R} .} One can also consider such functions with x in an arbitrary field , taking the coefficients a, b in that field. The graph y = f ( x ) = a x + b {\displaystyle y=f(x)=ax+b} is a non-vertical line having exactly one intersection with the y -axis, its y -intercept point ( x , y ) = ( 0 , b ) . {\displaystyle (x,y)=(0,b).} The y -intercept value y = f ( 0 ) = b {\displaystyle y=f(0)=b} is also called the initial value of f ( x ) . {\displaystyle f(x).} If a ≠ 0 , {\displaystyle a\neq 0,} the graph is a non-horizontal line having exactly one intersection with the x -axis, the x -intercept point ( x , y ) = ( − b a , 0 ) . {\displaystyle (x,y)=(-{\tfrac {b}{a}},0).} The x -intercept value x = − b a , {\displaystyle x=-{\tfrac {b}{a}},} the solution of the equation f ( x ) = 0 , {\displaystyle f(x)=0,} is also called the root or zero of f ( x ) . {\displaystyle f(x).} The slope of a nonvertical line is a number that measures how steeply the line is slanted (rise-over-run). If the line is the graph of the linear function f ( x ) = a x + b {\displaystyle f(x)=ax+b} , this slope is given by the constant a . The slope measures the constant rate of change of f ( x ) {\displaystyle f(x)} per unit change in x : whenever the input x is increased by one unit, the output changes by a units: f ( x + 1 ) = f ( x ) + a {\displaystyle f(x{+}1)=f(x)+a} , and more generally f ( x + Δ x ) = f ( x ) + a Δ x {\displaystyle f(x{+}\Delta x)=f(x)+a\Delta x} for any number Δ x {\displaystyle \Delta x} . 
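The constant-rate-of-change property can be checked directly. The following sketch is a minimal illustration; the slope a, the intercept b, and the test values are arbitrary choices, not taken from any particular source.

```python
# A linear function f(x) = a*x + b with illustrative coefficients.
a, b = 3.0, -2.0

def f(x):
    return a * x + b

# The change in output is proportional to the change in input,
# with proportionality constant a, regardless of the starting x.
for x in (0.0, 1.5, 10.0):
    for dx in (1.0, 0.25):
        assert abs((f(x + dx) - f(x)) - a * dx) < 1e-12

print(f(0))         # -2.0 : the initial value / y-intercept b
print(f(1) - f(0))  #  3.0 : the slope a
```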
If the slope is positive, a > 0 {\displaystyle a>0} , then the function f ( x ) {\displaystyle f(x)} is increasing; if a < 0 {\displaystyle a<0} , then f ( x ) {\displaystyle f(x)} is decreasing In calculus , the derivative of a general function measures its rate of change. A linear function f ( x ) = a x + b {\displaystyle f(x)=ax+b} has a constant rate of change equal to its slope a , so its derivative is the constant function f ′ ( x ) = a {\displaystyle f\,'(x)=a} . The fundamental idea of differential calculus is that any smooth function f ( x ) {\displaystyle f(x)} (not necessarily linear) can be closely approximated near a given point x = c {\displaystyle x=c} by a unique linear function. The derivative f ′ ( c ) {\displaystyle f\,'(c)} is the slope of this linear function, and the approximation is: f ( x ) ≈ f ′ ( c ) ( x − c ) + f ( c ) {\displaystyle f(x)\approx f\,'(c)(x{-}c)+f(c)} for x ≈ c {\displaystyle x\approx c} . The graph of the linear approximation is the tangent line of the graph y = f ( x ) {\displaystyle y=f(x)} at the point ( c , f ( c ) ) {\displaystyle (c,f(c))} . The derivative slope f ′ ( c ) {\displaystyle f\,'(c)} generally varies with the point c . Linear functions can be characterized as the only real functions whose derivative is constant: if f ′ ( x ) = a {\displaystyle f\,'(x)=a} for all x , then f ( x ) = a x + b {\displaystyle f(x)=ax+b} for b = f ( 0 ) {\displaystyle b=f(0)} . A given linear function f ( x ) {\displaystyle f(x)} can be written in several standard formulas displaying its various properties. The simplest is the slope-intercept form : from which one can immediately see the slope a and the initial value f ( 0 ) = b {\displaystyle f(0)=b} , which is the y -intercept of the graph y = f ( x ) {\displaystyle y=f(x)} . Given a slope a and one known value f ( x 0 ) = y 0 {\displaystyle f(x_{0})=y_{0}} , we write the point-slope form : In graphical terms, this gives the line y = f ( x ) {\displaystyle y=f(x)} with slope a passing through the point ( x 0 , y 0 ) {\displaystyle (x_{0},y_{0})} . The two-point form starts with two known values f ( x 0 ) = y 0 {\displaystyle f(x_{0})=y_{0}} and f ( x 1 ) = y 1 {\displaystyle f(x_{1})=y_{1}} . One computes the slope a = y 1 − y 0 x 1 − x 0 {\displaystyle a={\tfrac {y_{1}-y_{0}}{x_{1}-x_{0}}}} and inserts this into the point-slope form: Its graph y = f ( x ) {\displaystyle y=f(x)} is the unique line passing through the points ( x 0 , y 0 ) , ( x 1 , y 1 ) {\displaystyle (x_{0},y_{0}\!),(x_{1},y_{1}\!)} . The equation y = f ( x ) {\displaystyle y=f(x)} may also be written to emphasize the constant slope: Linear functions commonly arise from practical problems involving variables x , y {\displaystyle x,y} with a linear relationship, that is, obeying a linear equation A x + B y = C {\displaystyle Ax+By=C} . If B ≠ 0 {\displaystyle B\neq 0} , one can solve this equation for y , obtaining where we denote a = − A B {\displaystyle a=-{\tfrac {A}{B}}} and b = C B {\displaystyle b={\tfrac {C}{B}}} . That is, one may consider y as a dependent variable (output) obtained from the independent variable (input) x via a linear function: y = f ( x ) = a x + b {\displaystyle y=f(x)=ax+b} . In the xy -coordinate plane, the possible values of ( x , y ) {\displaystyle (x,y)} form a line, the graph of the function f ( x ) {\displaystyle f(x)} . 
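The standard forms and the tangent-line approximation described above can be illustrated with a short sketch. The chosen points, the use of sin as the smooth example, and the variable names are illustrative assumptions.

```python
import math

def line_through(x0, y0, x1, y1):
    """Two-point form: the linear function through (x0, y0) and (x1, y1)."""
    a = (y1 - y0) / (x1 - x0)            # slope
    return lambda x: y0 + a * (x - x0)   # point-slope form

f = line_through(1.0, 2.0, 3.0, 8.0)     # slope 3
assert abs(f(1.0) - 2.0) < 1e-12 and abs(f(3.0) - 8.0) < 1e-12

# Linear approximation of a smooth function near c:
#   f(x) ~ f'(c)(x - c) + f(c), here with f = sin, f' = cos.
c = 0.5
tangent = lambda x: math.cos(c) * (x - c) + math.sin(c)
print(abs(math.sin(0.51) - tangent(0.51)))   # the error is tiny near x = c
```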
If B = 0 {\displaystyle B=0} in the original equation, the resulting line x = C A {\displaystyle x={\tfrac {C}{A}}} is vertical, and cannot be written as y = f ( x ) {\displaystyle y=f(x)} . The features of the graph y = f ( x ) = a x + b {\displaystyle y=f(x)=ax+b} can be interpreted in terms of the variables x and y . The y -intercept is the initial value y = f ( 0 ) = b {\displaystyle y=f(0)=b} at x = 0 {\displaystyle x=0} . The slope a measures the rate of change of the output y per unit change in the input x . In the graph, moving one unit to the right (increasing x by 1) moves the y -value up by a : that is, f ( x + 1 ) = f ( x ) + a {\displaystyle f(x{+}1)=f(x)+a} . Negative slope a indicates a decrease in y for each increase in x . For example, the linear function y = − 2 x + 4 {\displaystyle y=-2x+4} has slope a = − 2 {\displaystyle a=-2} , y -intercept point ( 0 , b ) = ( 0 , 4 ) {\displaystyle (0,b)=(0,4)} , and x -intercept point ( 2 , 0 ) {\displaystyle (2,0)} . Suppose salami and sausage cost €6 and €3 per kilogram, and we wish to buy €12 worth. How much of each can we purchase? If x kilograms of salami and y kilograms of sausage costs a total of €12 then, €6× x + €3× y = €12. Solving for y gives the point-slope form y = − 2 x + 4 {\displaystyle y=-2x+4} , as above. That is, if we first choose the amount of salami x , the amount of sausage can be computed as a function y = f ( x ) = − 2 x + 4 {\displaystyle y=f(x)=-2x+4} . Since salami costs twice as much as sausage, adding one kilo of salami decreases the sausage by 2 kilos: f ( x + 1 ) = f ( x ) − 2 {\displaystyle f(x{+}1)=f(x)-2} , and the slope is −2. The y -intercept point ( x , y ) = ( 0 , 4 ) {\displaystyle (x,y)=(0,4)} corresponds to buying only 4 kg of sausage; while the x -intercept point ( x , y ) = ( 2 , 0 ) {\displaystyle (x,y)=(2,0)} corresponds to buying only 2 kg of salami. Note that the graph includes points with negative values of x or y , which have no meaning in terms of the original variables (unless we imagine selling meat to the butcher). Thus we should restrict our function f ( x ) {\displaystyle f(x)} to the domain 0 ≤ x ≤ 2 {\displaystyle 0\leq x\leq 2} . Also, we could choose y as the independent variable, and compute x by the inverse linear function: x = g ( y ) = − 1 2 y + 2 {\displaystyle x=g(y)=-{\tfrac {1}{2}}y+2} over the domain 0 ≤ y ≤ 4 {\displaystyle 0\leq y\leq 4} . If the coefficient of the variable is not zero ( a ≠ 0 ), then a linear function is represented by a degree 1 polynomial (also called a linear polynomial ), otherwise it is a constant function – also a polynomial function, but of zero degree. A straight line, when drawn in a different kind of coordinate system may represent other functions. For example, it may represent an exponential function when its values are expressed in the logarithmic scale . It means that when log ( g ( x )) is a linear function of x , the function g is exponential. With linear functions, increasing the input by one unit causes the output to increase by a fixed amount, which is the slope of the graph of the function. With exponential functions, increasing the input by one unit causes the output to increase by a fixed multiple, which is known as the base of the exponential function. 
If both the argument and the value of a function are expressed on a logarithmic scale (i.e., when log(y) is a linear function of log(x)), then the straight line represents a power law: log(y) = a log(x) + b is equivalent to y = e^b · x^a. On the other hand, the graph of a linear function in terms of polar coordinates, r = f(θ) = aθ + b, is an Archimedean spiral if a ≠ 0 and a circle otherwise.
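The contrast between the additive behaviour of linear functions, the multiplicative behaviour of exponentials, and the power-law case on log-log scales can be verified numerically. The constants a and b below are illustrative choices.

```python
import math

a, b = 0.7, 1.2

# Exponential: log(g(x)) = a*x + b  =>  g(x) = e^b * (e^a)^x.
g = lambda x: math.exp(a * x + b)
base = math.exp(a)
assert abs(g(4.0) / g(3.0) - base) < 1e-12   # a fixed multiple per unit step

# Power law: log(y) = a*log(x) + b  =>  y = e^b * x^a.
h = lambda x: math.exp(a * math.log(x) + b)
k = math.exp(b)
assert abs(h(5.0) - k * 5.0 ** a) < 1e-12
```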
https://en.wikipedia.org/wiki/Linear_function_(calculus)
In mathematics a linear inequality is an inequality which involves a linear function . A linear inequality contains one of the symbols of inequality: [ 1 ] A linear inequality looks exactly like a linear equation , with the inequality sign replacing the equality sign . Two-dimensional linear inequalities, are expressions in two variables of the form: where the inequalities may either be strict or not. The solution set of such an inequality can be graphically represented by a half-plane (all the points on one "side" of a fixed line) in the Euclidean plane. [ 2 ] The line that determines the half-planes ( ax + by = c ) is not included in the solution set when the inequality is strict. A simple procedure to determine which half-plane is in the solution set is to calculate the value of ax + by at a point ( x 0 , y 0 ) which is not on the line and observe whether or not the inequality is satisfied. For example, [ 3 ] to draw the solution set of x + 3 y < 9, one first draws the line with equation x + 3 y = 9 as a dotted line, to indicate that the line is not included in the solution set since the inequality is strict. Then, pick a convenient point not on the line, such as (0,0). Since 0 + 3(0) = 0 < 9, this point is in the solution set, so the half-plane containing this point (the half-plane "below" the line) is the solution set of this linear inequality. In R n linear inequalities are the expressions that may be written in the form where f is a linear form (also called a linear functional ), x ¯ = ( x 1 , x 2 , … , x n ) {\displaystyle {\bar {x}}=(x_{1},x_{2},\ldots ,x_{n})} and b a constant real number. More concretely, this may be written out as or Here x 1 , x 2 , . . . , x n {\displaystyle x_{1},x_{2},...,x_{n}} are called the unknowns, and a 1 , a 2 , . . . , a n {\displaystyle a_{1},a_{2},...,a_{n}} are called the coefficients. Alternatively, these may be written as where g is an affine function . [ 4 ] That is or Note that any inequality containing a "greater than" or a "greater than or equal" sign can be rewritten with a "less than" or "less than or equal" sign, so there is no need to define linear inequalities using those signs. A system of linear inequalities is a set of linear inequalities in the same variables: Here x 1 , x 2 , . . . , x n {\displaystyle x_{1},\ x_{2},...,x_{n}} are the unknowns, a 11 , a 12 , . . . , a m n {\displaystyle a_{11},\ a_{12},...,\ a_{mn}} are the coefficients of the system, and b 1 , b 2 , . . . , b m {\displaystyle b_{1},\ b_{2},...,b_{m}} are the constant terms. This can be concisely written as the matrix inequality where A is an m × n matrix, x is an n ×1 column vector of variables, and b is an m ×1 column vector of constants. [ citation needed ] In the above systems both strict and non-strict inequalities may be used. Variables can be eliminated from systems of linear inequalities using Fourier–Motzkin elimination . [ 5 ] The set of solutions of a real linear inequality constitutes a half-space of the 'n'-dimensional real space, one of the two defined by the corresponding linear equation. The set of solutions of a system of linear inequalities corresponds to the intersection of the half-spaces defined by individual inequalities. It is a convex set , since the half-spaces are convex sets, and the intersection of a set of convex sets is also convex. In the non- degenerate cases this convex set is a convex polyhedron (possibly unbounded, e.g., a half-space, a slab between two parallel half-spaces or a polyhedral cone ). 
It may also be empty or a convex polyhedron of lower dimension confined to an affine subspace of the n -dimensional space R n . A linear programming problem seeks to optimize (find a maximum or minimum value) a function (called the objective function ) subject to a number of constraints on the variables which, in general, are linear inequalities. [ 6 ] The list of constraints is a system of linear inequalities. The above definition requires well-defined operations of addition , multiplication and comparison ; therefore, the notion of a linear inequality may be extended to ordered rings , and in particular to ordered fields .
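A brief sketch, assuming NumPy and SciPy are available, of a system of linear inequalities written as the matrix inequality A x ≤ b and of a small linear program (mentioned above) constrained by it. The particular coefficients are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# A system of linear inequalities as A x <= b:
#   x + 3y <= 9,   2x + y <= 8,   with x >= 0, y >= 0 handled via bounds.
A = np.array([[1.0, 3.0],
              [2.0, 1.0]])
b = np.array([9.0, 8.0])

x = np.array([2.0, 1.0])
print(bool(np.all(A @ x <= b)))     # True: this point satisfies every inequality

# A linear program over the same constraints. linprog minimizes c @ x,
# so maximizing x + y is done by negating the objective.
res = linprog(c=[-1.0, -1.0], A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)              # optimum at (3, 2) with value 5
```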
https://en.wikipedia.org/wiki/Linear_inequality
The linear ion trap ( LIT ) is a type of ion trap mass spectrometer . In a LIT, ions are confined radially by a two-dimensional radio frequency (RF) field, and axially by stopping potentials applied to end electrodes . LITs have high injection efficiencies and high ion storage capacities. [ 1 ] One of the first LITs was constructed in 1969, by Dierdre A. Church, [ 2 ] who bent linear quadrupoles into closed circle and racetrack geometries and demonstrated storage of 3 He + and H + ions for several minutes. Earlier, Drees and Paul described a circular quadrupole. [ citation needed ] However, it was used to produce and confine a plasma , not to store ions. In 1989, Prestage, Dick, and Malecki described that ions could be trapped in the linear quadrupole trap system to enhance ion-molecule reactions, thus it can be used to study spectroscopy of stored ions. [ 1 ] The LIT uses a set of quadrupole rods to confine ions radially and a static electrical potential on the end electrodes to confine the ions axially. [ 3 ] The LIT can be used as a mass filter or as a trap by creating a potential well for the ions along the axis of the trap. [ 4 ] The mass of trapped ions may be determined if the m/z lies between defined parameters . [ 5 ] Advantages of the LIT design are high ion storage capacity, high scan rate, and simplicity of construction. Although quadrupole rod alignment is critical, adding a quality control constraint to their production, this constraint is additionally present in the machining requirements of the 3D trap. [ 6 ] Ions are either injected into or created within the interior of the LIT. They are confined by application of appropriate RF and DC voltages with their final position maintained within the center section of the LIT. The RF voltage is adjusted and multi-frequency resonance ejection waveforms are applied to the trap to eliminate all but the desired ions in preparation for subsequent fragmentation and mass analysis. The voltages applied to the ion trap are adjusted to stabilize the selected ions and to allow for collisional cooling in preparation for excitation. The energy of the selected ions is increased by application of a supplemental resonance excitation voltage applied to all segments of two rods located on the X-axis. This increase of energy causes dissociation of the selected ions due to collisions with damping gas. The product ions formed are retained in the trapping field. Scanning the contents of the trap to produce a mass spectrum is accomplished by linearly increasing the RF voltage applied to all sections of the trap and utilizing a supplemental resonance ejection voltage. These changes sequentially move ions from within the stability diagram to a position where they become unstable in the x-direction and leave the trapping field for detection. Ions are accelerated into two high voltage dynodes where ions produce secondary electrons . This signal is subsequently amplified by two electron multipliers and the analog signals are then integrated together and digitized. LITs can be used as stand alone mass analyzers , and they can be combined with other mass analyzers, such as 3D Paul ion traps, TOF mass spectrometers, FTMS , and other kind of mass analyzers. 3D ion trap (or Paul trap ) mass spectrometers are widely used but have limitations. With a continuous source, such as one utilizing electrospray ionization (ESI), ions generated while the 3D trap is processing other ions are not used, thereby limiting the duty cycle . 
Furthermore, the total number of ions that can be stored in a 3D ion trap is limited by space charge effects. Combining a linear trap with a 3D trap can help overcome these limitations. [ 1 ] Recently, Hardman and Makarov have described the use of a linear quadrupole trap to store ions formed by ESI for injection into an orbitrap mass analyzer. Ions passed through an orifice and skimmer, a quadrupole ion guide for ion cooling and then entered the quadrupole storage trap. The quadrupole trap has two rod sets; short rods near the exit were biased so that most ions accumulated in this region. Because the orbitrap requires that ions be injected in very short pulses, kilovolt ion extraction potentials were applied to the exit aperture. Flight times of ions to the orbitrap were mass dependent, but for a given mass, ions were injected in bunches less than 100 nanoseconds wide (fwhm). A TOF mass spectrometer can also have a low-duty cycle when coupled with a continuous ion source. Combining an ion trap with a TOF mass analyzer can improve the duty cycle. Both 3D and linear traps have been combined with TOF mass analyzers. A trap can also add MSn capabilities to the system. [ 1 ] Linear traps can be used to improve the performance of FT-ICR (or FTMS) systems. As with 3D ion traps, the duty cycle can be increased to nearly 100% if ions are accumulated in a linear trap, while the FTMS performs other functions. Unwanted ions that can cause space charge problems in the FTMS can be ejected in the linear trap to improve the resolution, sensitivity, and dynamic range of the system, although the system parameters used to optimize such signal characteristics co-vary with one another. [ 1 ] [ 7 ] The combination of triple quadrupole MS with LIT technology in the form of an instrument of configuration QqLIT, using axial ejection, is particularly interesting, because this instrument retains the classical triple quadrupole scan functions such as selected reaction monitoring (SRM), product ion (PI), neutral loss (NL) and precursor ion (PC) while also providing access to sensitive ion trap experiments. For small molecules, quantitative and qualitative analysis can be performed using the same instrument. In addition, for peptide analysis, the enhanced multiply charged (EMC) scan allows an increase in selectivity, while the time-delayed fragmentation (TDF) scan provides additional structural information. In the case of the QqLIT, the uniqueness of the instrument is that the same mass analyzer Q3 can be run in two different modes. This allows very powerful scan combinations when performing information-dependent data acquisition.
https://en.wikipedia.org/wiki/Linear_ion_trap
In mathematics , and more specifically in linear algebra , a linear map (also called a linear mapping , linear transformation , vector space homomorphism , or in some contexts linear function ) is a mapping V → W {\displaystyle V\to W} between two vector spaces that preserves the operations of vector addition and scalar multiplication . The same names and the same definition are also used for the more general case of modules over a ring ; see Module homomorphism . If a linear map is a bijection then it is called a linear isomorphism . In the case where V = W {\displaystyle V=W} , a linear map is called a linear endomorphism . Sometimes the term linear operator refers to this case, [ 1 ] but the term "linear operator" can have different meanings for different conventions: for example, it can be used to emphasize that V {\displaystyle V} and W {\displaystyle W} are real vector spaces (not necessarily with V = W {\displaystyle V=W} ), [ citation needed ] or it can be used to emphasize that V {\displaystyle V} is a function space , which is a common convention in functional analysis . [ 2 ] Sometimes the term linear function has the same meaning as linear map , while in analysis it does not. A linear map from V {\displaystyle V} to W {\displaystyle W} always maps the origin of V {\displaystyle V} to the origin of W {\displaystyle W} . Moreover, it maps linear subspaces in V {\displaystyle V} onto linear subspaces in W {\displaystyle W} (possibly of a lower dimension ); [ 3 ] for example, it maps a plane through the origin in V {\displaystyle V} to either a plane through the origin in W {\displaystyle W} , a line through the origin in W {\displaystyle W} , or just the origin in W {\displaystyle W} . Linear maps can often be represented as matrices , and simple examples include rotation and reflection linear transformations . In the language of category theory , linear maps are the morphisms of vector spaces, and they form a category equivalent to the one of matrices . Let V {\displaystyle V} and W {\displaystyle W} be vector spaces over the same field K {\displaystyle K} . A function f : V → W {\displaystyle f:V\to W} is said to be a linear map if for any two vectors u , v ∈ V {\textstyle \mathbf {u} ,\mathbf {v} \in V} and any scalar c ∈ K {\displaystyle c\in K} the following two conditions are satisfied: Thus, a linear map is said to be operation preserving . In other words, it does not matter whether the linear map is applied before (the right hand sides of the above examples) or after (the left hand sides of the examples) the operations of addition and scalar multiplication. By the associativity of the addition operation denoted as +, for any vectors u 1 , … , u n ∈ V {\textstyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{n}\in V} and scalars c 1 , … , c n ∈ K , {\textstyle c_{1},\ldots ,c_{n}\in K,} the following equality holds: [ 4 ] [ 5 ] f ( c 1 u 1 + ⋯ + c n u n ) = c 1 f ( u 1 ) + ⋯ + c n f ( u n ) . {\displaystyle f(c_{1}\mathbf {u} _{1}+\cdots +c_{n}\mathbf {u} _{n})=c_{1}f(\mathbf {u} _{1})+\cdots +c_{n}f(\mathbf {u} _{n}).} Thus a linear map is one which preserves linear combinations . Denoting the zero elements of the vector spaces V {\displaystyle V} and W {\displaystyle W} by 0 V {\textstyle \mathbf {0} _{V}} and 0 W {\textstyle \mathbf {0} _{W}} respectively, it follows that f ( 0 V ) = 0 W . 
{\textstyle f(\mathbf {0} _{V})=\mathbf {0} _{W}.} Let c = 0 {\displaystyle c=0} and v ∈ V {\textstyle \mathbf {v} \in V} in the equation for homogeneity of degree 1: f ( 0 V ) = f ( 0 v ) = 0 f ( v ) = 0 W . {\displaystyle f(\mathbf {0} _{V})=f(0\mathbf {v} )=0f(\mathbf {v} )=\mathbf {0} _{W}.} A linear map V → K {\displaystyle V\to K} with K {\displaystyle K} viewed as a one-dimensional vector space over itself is called a linear functional . [ 6 ] These statements generalize to any left-module R M {\textstyle {}_{R}M} over a ring R {\displaystyle R} without modification, and to any right-module upon reversing of the scalar multiplication. Often, a linear map is constructed by defining it on a subset of a vector space and then extending by linearity to the linear span of the domain. Suppose X {\displaystyle X} and Y {\displaystyle Y} are vector spaces and f : S → Y {\displaystyle f:S\to Y} is a function defined on some subset S ⊆ X . {\displaystyle S\subseteq X.} Then a linear extension of f {\displaystyle f} to X , {\displaystyle X,} if it exists, is a linear map F : X → Y {\displaystyle F:X\to Y} defined on X {\displaystyle X} that extends f {\displaystyle f} [ note 1 ] (meaning that F ( s ) = f ( s ) {\displaystyle F(s)=f(s)} for all s ∈ S {\displaystyle s\in S} ) and takes its values from the codomain of f . {\displaystyle f.} [ 9 ] When the subset S {\displaystyle S} is a vector subspace of X {\displaystyle X} then a ( Y {\displaystyle Y} -valued) linear extension of f {\displaystyle f} to all of X {\displaystyle X} is guaranteed to exist if (and only if) f : S → Y {\displaystyle f:S\to Y} is a linear map. [ 9 ] In particular, if f {\displaystyle f} has a linear extension to span ⁡ S , {\displaystyle \operatorname {span} S,} then it has a linear extension to all of X . {\displaystyle X.} The map f : S → Y {\displaystyle f:S\to Y} can be extended to a linear map F : span ⁡ S → Y {\displaystyle F:\operatorname {span} S\to Y} if and only if whenever n > 0 {\displaystyle n>0} is an integer, c 1 , … , c n {\displaystyle c_{1},\ldots ,c_{n}} are scalars, and s 1 , … , s n ∈ S {\displaystyle s_{1},\ldots ,s_{n}\in S} are vectors such that 0 = c 1 s 1 + ⋯ + c n s n , {\displaystyle 0=c_{1}s_{1}+\cdots +c_{n}s_{n},} then necessarily 0 = c 1 f ( s 1 ) + ⋯ + c n f ( s n ) . {\displaystyle 0=c_{1}f\left(s_{1}\right)+\cdots +c_{n}f\left(s_{n}\right).} [ 10 ] If a linear extension of f : S → Y {\displaystyle f:S\to Y} exists then the linear extension F : span ⁡ S → Y {\displaystyle F:\operatorname {span} S\to Y} is unique and F ( c 1 s 1 + ⋯ c n s n ) = c 1 f ( s 1 ) + ⋯ + c n f ( s n ) {\displaystyle F\left(c_{1}s_{1}+\cdots c_{n}s_{n}\right)=c_{1}f\left(s_{1}\right)+\cdots +c_{n}f\left(s_{n}\right)} holds for all n , c 1 , … , c n , {\displaystyle n,c_{1},\ldots ,c_{n},} and s 1 , … , s n {\displaystyle s_{1},\ldots ,s_{n}} as above. [ 10 ] If S {\displaystyle S} is linearly independent then every function f : S → Y {\displaystyle f:S\to Y} into any vector space has a linear extension to a (linear) map span ⁡ S → Y {\displaystyle \;\operatorname {span} S\to Y} (the converse is also true). For example, if X = R 2 {\displaystyle X=\mathbb {R} ^{2}} and Y = R {\displaystyle Y=\mathbb {R} } then the assignment ( 1 , 0 ) → − 1 {\displaystyle (1,0)\to -1} and ( 0 , 1 ) → 2 {\displaystyle (0,1)\to 2} can be linearly extended from the linearly independent set of vectors S := { ( 1 , 0 ) , ( 0 , 1 ) } {\displaystyle S:=\{(1,0),(0,1)\}} to a linear map on span ⁡ { ( 1 , 0 ) , ( 0 , 1 ) } = R 2 . 
{\displaystyle \operatorname {span} \{(1,0),(0,1)\}=\mathbb {R} ^{2}.} The unique linear extension F : R 2 → R {\displaystyle F:\mathbb {R} ^{2}\to \mathbb {R} } is the map that sends ( x , y ) = x ( 1 , 0 ) + y ( 0 , 1 ) ∈ R 2 {\displaystyle (x,y)=x(1,0)+y(0,1)\in \mathbb {R} ^{2}} to F ( x , y ) = x ( − 1 ) + y ( 2 ) = − x + 2 y . {\displaystyle F(x,y)=x(-1)+y(2)=-x+2y.} Every (scalar-valued) linear functional f {\displaystyle f} defined on a vector subspace of a real or complex vector space X {\displaystyle X} has a linear extension to all of X . {\displaystyle X.} Indeed, the Hahn–Banach dominated extension theorem even guarantees that when this linear functional f {\displaystyle f} is dominated by some given seminorm p : X → R {\displaystyle p:X\to \mathbb {R} } (meaning that | f ( m ) | ≤ p ( m ) {\displaystyle |f(m)|\leq p(m)} holds for all m {\displaystyle m} in the domain of f {\displaystyle f} ) then there exists a linear extension to X {\displaystyle X} that is also dominated by p . {\displaystyle p.} If V {\displaystyle V} and W {\displaystyle W} are finite-dimensional vector spaces and a basis is defined for each vector space, then every linear map from V {\displaystyle V} to W {\displaystyle W} can be represented by a matrix . [ 11 ] This is useful because it allows concrete calculations. Matrices yield examples of linear maps: if A {\displaystyle A} is a real m × n {\displaystyle m\times n} matrix, then f ( x ) = A x {\displaystyle f(\mathbf {x} )=A\mathbf {x} } describes a linear map R n → R m {\displaystyle \mathbb {R} ^{n}\to \mathbb {R} ^{m}} (see Euclidean space ). Let { v 1 , … , v n } {\displaystyle \{\mathbf {v} _{1},\ldots ,\mathbf {v} _{n}\}} be a basis for V {\displaystyle V} . Then every vector v ∈ V {\displaystyle \mathbf {v} \in V} is uniquely determined by the coefficients c 1 , … , c n {\displaystyle c_{1},\ldots ,c_{n}} in the field R {\displaystyle \mathbb {R} } : v = c 1 v 1 + ⋯ + c n v n . {\displaystyle \mathbf {v} =c_{1}\mathbf {v} _{1}+\cdots +c_{n}\mathbf {v} _{n}.} If f : V → W {\textstyle f:V\to W} is a linear map, f ( v ) = f ( c 1 v 1 + ⋯ + c n v n ) = c 1 f ( v 1 ) + ⋯ + c n f ( v n ) , {\displaystyle f(\mathbf {v} )=f(c_{1}\mathbf {v} _{1}+\cdots +c_{n}\mathbf {v} _{n})=c_{1}f(\mathbf {v} _{1})+\cdots +c_{n}f\left(\mathbf {v} _{n}\right),} which implies that the function f is entirely determined by the vectors f ( v 1 ) , … , f ( v n ) {\displaystyle f(\mathbf {v} _{1}),\ldots ,f(\mathbf {v} _{n})} . Now let { w 1 , … , w m } {\displaystyle \{\mathbf {w} _{1},\ldots ,\mathbf {w} _{m}\}} be a basis for W {\displaystyle W} . Then we can represent each vector f ( v j ) {\displaystyle f(\mathbf {v} _{j})} as f ( v j ) = a 1 j w 1 + ⋯ + a m j w m . {\displaystyle f\left(\mathbf {v} _{j}\right)=a_{1j}\mathbf {w} _{1}+\cdots +a_{mj}\mathbf {w} _{m}.} Thus, the function f {\displaystyle f} is entirely determined by the values of a i j {\displaystyle a_{ij}} . If we put these values into an m × n {\displaystyle m\times n} matrix M {\displaystyle M} , then we can conveniently use it to compute the vector output of f {\displaystyle f} for any vector in V {\displaystyle V} . To get M {\displaystyle M} , every column j {\displaystyle j} of M {\displaystyle M} is a vector ( a 1 j ⋮ a m j ) {\displaystyle {\begin{pmatrix}a_{1j}\\\vdots \\a_{mj}\end{pmatrix}}} corresponding to f ( v j ) {\displaystyle f(\mathbf {v} _{j})} as defined above. 
To define it more clearly, for some column j {\displaystyle j} that corresponds to the mapping f ( v j ) {\displaystyle f(\mathbf {v} _{j})} , M = ( ⋯ a 1 j ⋯ ⋮ a m j ) {\displaystyle \mathbf {M} ={\begin{pmatrix}\ \cdots &a_{1j}&\cdots \ \\&\vdots &\\&a_{mj}&\end{pmatrix}}} where M {\displaystyle M} is the matrix of f {\displaystyle f} . In other words, every column j = 1 , … , n {\displaystyle j=1,\ldots ,n} has a corresponding vector f ( v j ) {\displaystyle f(\mathbf {v} _{j})} whose coordinates a 1 j , ⋯ , a m j {\displaystyle a_{1j},\cdots ,a_{mj}} are the elements of column j {\displaystyle j} . A single linear map may be represented by many matrices. This is because the values of the elements of a matrix depend on the bases chosen. The matrices of a linear transformation can be represented visually: Such that starting in the bottom left corner [ v ] B ′ {\textstyle \left[\mathbf {v} \right]_{B'}} and looking for the bottom right corner [ T ( v ) ] B ′ {\textstyle \left[T\left(\mathbf {v} \right)\right]_{B'}} , one would left-multiply—that is, A ′ [ v ] B ′ = [ T ( v ) ] B ′ {\textstyle A'\left[\mathbf {v} \right]_{B'}=\left[T\left(\mathbf {v} \right)\right]_{B'}} . The equivalent method would be the "longer" method going clockwise from the same point such that [ v ] B ′ {\textstyle \left[\mathbf {v} \right]_{B'}} is left-multiplied with P − 1 A P {\textstyle P^{-1}AP} , or P − 1 A P [ v ] B ′ = [ T ( v ) ] B ′ {\textstyle P^{-1}AP\left[\mathbf {v} \right]_{B'}=\left[T\left(\mathbf {v} \right)\right]_{B'}} . In two- dimensional space R 2 linear maps are described by 2 × 2 matrices . These are some examples: If a linear map is only composed of rotation, reflection, and/or uniform scaling, then the linear map is a conformal linear transformation . The composition of linear maps is linear: if f : V → W {\displaystyle f:V\to W} and g : W → Z {\textstyle g:W\to Z} are linear, then so is their composition g ∘ f : V → Z {\textstyle g\circ f:V\to Z} . It follows from this that the class of all vector spaces over a given field K , together with K -linear maps as morphisms , forms a category . The inverse of a linear map, when defined, is again a linear map. If f 1 : V → W {\textstyle f_{1}:V\to W} and f 2 : V → W {\textstyle f_{2}:V\to W} are linear, then so is their pointwise sum f 1 + f 2 {\displaystyle f_{1}+f_{2}} , which is defined by ( f 1 + f 2 ) ( x ) = f 1 ( x ) + f 2 ( x ) {\displaystyle (f_{1}+f_{2})(\mathbf {x} )=f_{1}(\mathbf {x} )+f_{2}(\mathbf {x} )} . If f : V → W {\textstyle f:V\to W} is linear and α {\textstyle \alpha } is an element of the ground field K {\textstyle K} , then the map α f {\textstyle \alpha f} , defined by ( α f ) ( x ) = α ( f ( x ) ) {\textstyle (\alpha f)(\mathbf {x} )=\alpha (f(\mathbf {x} ))} , is also linear. Thus the set L ( V , W ) {\textstyle {\mathcal {L}}(V,W)} of linear maps from V {\textstyle V} to W {\textstyle W} itself forms a vector space over K {\textstyle K} , [ 12 ] sometimes denoted Hom ⁡ ( V , W ) {\textstyle \operatorname {Hom} (V,W)} . [ 13 ] Furthermore, in the case that V = W {\textstyle V=W} , this vector space, denoted End ⁡ ( V ) {\textstyle \operatorname {End} (V)} , is an associative algebra under composition of maps , since the composition of two linear maps is again a linear map, and the composition of maps is always associative. This case is discussed in more detail below. 
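The matrix construction described above can be illustrated with a short sketch: the matrix of a linear map is assembled column by column from the images of the basis vectors, and the two defining conditions of linearity can be checked numerically. The map f from R² to R³ (in the standard bases) and the test vectors are illustrative choices.

```python
import numpy as np

def f(v):
    """An illustrative linear map R^2 -> R^3: f(x, y) = (x + 2y, 3y, x - y)."""
    x, y = v
    return np.array([x + 2 * y, 3 * y, x - y])

# Defining conditions of a linear map.
u, v, c = np.array([1.0, 4.0]), np.array([-2.0, 0.5]), 2.5
assert np.allclose(f(u + v), f(u) + f(v))   # additivity
assert np.allclose(f(c * u), c * f(u))      # homogeneity of degree 1

# Column j of the matrix M holds the coordinates of f(v_j) for basis vector v_j.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
M = np.column_stack([f(e1), f(e2)])

w = np.array([4.0, -1.0])
assert np.allclose(M @ w, f(w))             # the matrix reproduces the map
print(M)
```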
Given again the finite-dimensional case, if bases have been chosen, then the composition of linear maps corresponds to the matrix multiplication , the addition of linear maps corresponds to the matrix addition , and the multiplication of linear maps with scalars corresponds to the multiplication of matrices with scalars. A linear transformation f : V → V {\textstyle f:V\to V} is an endomorphism of V {\textstyle V} ; the set of all such endomorphisms End ⁡ ( V ) {\textstyle \operatorname {End} (V)} together with addition, composition and scalar multiplication as defined above forms an associative algebra with identity element over the field K {\textstyle K} (and in particular a ring ). The multiplicative identity element of this algebra is the identity map id : V → V {\textstyle \operatorname {id} :V\to V} . An endomorphism of V {\textstyle V} that is also an isomorphism is called an automorphism of V {\textstyle V} . The composition of two automorphisms is again an automorphism, and the set of all automorphisms of V {\textstyle V} forms a group , the automorphism group of V {\textstyle V} which is denoted by Aut ⁡ ( V ) {\textstyle \operatorname {Aut} (V)} or GL ⁡ ( V ) {\textstyle \operatorname {GL} (V)} . Since the automorphisms are precisely those endomorphisms which possess inverses under composition, Aut ⁡ ( V ) {\textstyle \operatorname {Aut} (V)} is the group of units in the ring End ⁡ ( V ) {\textstyle \operatorname {End} (V)} . If V {\textstyle V} has finite dimension n {\textstyle n} , then End ⁡ ( V ) {\textstyle \operatorname {End} (V)} is isomorphic to the associative algebra of all n × n {\textstyle n\times n} matrices with entries in K {\textstyle K} . The automorphism group of V {\textstyle V} is isomorphic to the general linear group GL ⁡ ( n , K ) {\textstyle \operatorname {GL} (n,K)} of all n × n {\textstyle n\times n} invertible matrices with entries in K {\textstyle K} . If f : V → W {\textstyle f:V\to W} is linear, we define the kernel and the image or range of f {\textstyle f} by ker ⁡ ( f ) = { x ∈ V : f ( x ) = 0 } im ⁡ ( f ) = { w ∈ W : w = f ( x ) , x ∈ V } {\displaystyle {\begin{aligned}\ker(f)&=\{\,\mathbf {x} \in V:f(\mathbf {x} )=\mathbf {0} \,\}\\\operatorname {im} (f)&=\{\,\mathbf {w} \in W:\mathbf {w} =f(\mathbf {x} ),\mathbf {x} \in V\,\}\end{aligned}}} ker ⁡ ( f ) {\textstyle \ker(f)} is a subspace of V {\textstyle V} and im ⁡ ( f ) {\textstyle \operatorname {im} (f)} is a subspace of W {\textstyle W} . The following dimension formula is known as the rank–nullity theorem : [ 14 ] dim ⁡ ( ker ⁡ ( f ) ) + dim ⁡ ( im ⁡ ( f ) ) = dim ⁡ ( V ) . {\displaystyle \dim(\ker(f))+\dim(\operatorname {im} (f))=\dim(V).} The number dim ⁡ ( im ⁡ ( f ) ) {\textstyle \dim(\operatorname {im} (f))} is also called the rank of f {\textstyle f} and written as rank ⁡ ( f ) {\textstyle \operatorname {rank} (f)} , or sometimes, ρ ( f ) {\textstyle \rho (f)} ; [ 15 ] [ 16 ] the number dim ⁡ ( ker ⁡ ( f ) ) {\textstyle \dim(\ker(f))} is called the nullity of f {\textstyle f} and written as null ⁡ ( f ) {\textstyle \operatorname {null} (f)} or ν ( f ) {\textstyle \nu (f)} . [ 15 ] [ 16 ] If V {\textstyle V} and W {\textstyle W} are finite-dimensional, bases have been chosen and f {\textstyle f} is represented by the matrix A {\textstyle A} , then the rank and nullity of f {\textstyle f} are equal to the rank and nullity of the matrix A {\textstyle A} , respectively. 
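The rank–nullity theorem can be checked for a matrix-represented map. This sketch assumes NumPy and SciPy (for the null-space basis) are available; the rank-deficient matrix A is an illustrative choice.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],     # twice the first row, so rank(A) = 2
              [0.0, 1.0, 1.0]])

rank = np.linalg.matrix_rank(A)      # dim(im f)
nullity = null_space(A).shape[1]     # dim(ker f)
print(rank, nullity)                 # 2 1
assert rank + nullity == A.shape[1]  # rank-nullity: 2 + 1 = 3 = dim(V)
```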
A subtler invariant of a linear transformation f : V → W {\textstyle f:V\to W} is the co kernel , which is defined as coker ⁡ ( f ) := W / f ( V ) = W / im ⁡ ( f ) . {\displaystyle \operatorname {coker} (f):=W/f(V)=W/\operatorname {im} (f).} This is the dual notion to the kernel: just as the kernel is a sub space of the domain, the co-kernel is a quotient space of the target. Formally, one has the exact sequence 0 → ker ⁡ ( f ) → V → W → coker ⁡ ( f ) → 0. {\displaystyle 0\to \ker(f)\to V\to W\to \operatorname {coker} (f)\to 0.} These can be interpreted thus: given a linear equation f ( v ) = w to solve, The dimension of the co-kernel and the dimension of the image (the rank) add up to the dimension of the target space. For finite dimensions, this means that the dimension of the quotient space W / f ( V ) is the dimension of the target space minus the dimension of the image. As a simple example, consider the map f : R 2 → R 2 , given by f ( x , y ) = (0, y ). Then for an equation f ( x , y ) = ( a , b ) to have a solution, we must have a = 0 (one constraint), and in that case the solution space is ( x , b ) or equivalently stated, (0, b ) + ( x , 0), (one degree of freedom). The kernel may be expressed as the subspace ( x , 0) < V : the value of x is the freedom in a solution – while the cokernel may be expressed via the map W → R , ( a , b ) ↦ ( a ) {\textstyle (a,b)\mapsto (a)} : given a vector ( a , b ), the value of a is the obstruction to there being a solution. An example illustrating the infinite-dimensional case is afforded by the map f : R ∞ → R ∞ , { a n } ↦ { b n } {\textstyle \left\{a_{n}\right\}\mapsto \left\{b_{n}\right\}} with b 1 = 0 and b n + 1 = a n for n > 0. Its image consists of all sequences with first element 0, and thus its cokernel consists of the classes of sequences with identical first element. Thus, whereas its kernel has dimension 0 (it maps only the zero sequence to the zero sequence), its co-kernel has dimension 1. Since the domain and the target space are the same, the rank and the dimension of the kernel add up to the same sum as the rank and the dimension of the co-kernel ( ℵ 0 + 0 = ℵ 0 + 1 {\textstyle \aleph _{0}+0=\aleph _{0}+1} ), but in the infinite-dimensional case it cannot be inferred that the kernel and the co-kernel of an endomorphism have the same dimension (0 ≠ 1). The reverse situation obtains for the map h : R ∞ → R ∞ , { a n } ↦ { c n } {\textstyle \left\{a_{n}\right\}\mapsto \left\{c_{n}\right\}} with c n = a n + 1 . Its image is the entire target space, and hence its co-kernel has dimension 0, but since it maps all sequences in which only the first element is non-zero to the zero sequence, its kernel has dimension 1. For a linear operator with finite-dimensional kernel and co-kernel, one may define index as: ind ⁡ ( f ) := dim ⁡ ( ker ⁡ ( f ) ) − dim ⁡ ( coker ⁡ ( f ) ) , {\displaystyle \operatorname {ind} (f):=\dim(\ker(f))-\dim(\operatorname {coker} (f)),} namely the degrees of freedom minus the number of constraints. For a transformation between finite-dimensional vector spaces, this is just the difference dim( V ) − dim( W ), by rank–nullity. This gives an indication of how many solutions or how many constraints one has: if mapping from a larger space to a smaller one, the map may be onto, and thus will have degrees of freedom even without constraints. Conversely, if mapping from a smaller space to a larger one, the map cannot be onto, and thus one will have constraints even without degrees of freedom. 
The index of an operator is precisely the Euler characteristic of the 2-term complex 0 → V → W → 0. In operator theory , the index of Fredholm operators is an object of study, with a major result being the Atiyah–Singer index theorem . [ 17 ] No classification of linear maps could be exhaustive. The following incomplete list enumerates some important classifications that do not require any additional structure on the vector space. Let V and W denote vector spaces over a field F and let T : V → W be a linear map. T is said to be injective or a monomorphism if any of the following equivalent conditions are true: T is said to be surjective or an epimorphism if any of the following equivalent conditions are true: T is said to be an isomorphism if it is both left- and right-invertible. This is equivalent to T being both one-to-one and onto (a bijection of sets) or also to T being both epic and monic, and so being a bimorphism . If T : V → V is an endomorphism, then: Given a linear map which is an endomorphism whose matrix is A , in the basis B of the space it transforms vector coordinates [u] as [v] = A [u]. As vectors change with the inverse of B (vectors coordinates are contravariant ) its inverse transformation is [v] = B [v']. Substituting this in the first expression B [ v ′ ] = A B [ u ′ ] {\displaystyle B\left[v'\right]=AB\left[u'\right]} hence [ v ′ ] = B − 1 A B [ u ′ ] = A ′ [ u ′ ] . {\displaystyle \left[v'\right]=B^{-1}AB\left[u'\right]=A'\left[u'\right].} Therefore, the matrix in the new basis is A′ = B −1 AB , being B the matrix of the given basis. Therefore, linear maps are said to be 1-co- 1-contra- variant objects, or type (1, 1) tensors . A linear transformation between topological vector spaces , for example normed spaces , may be continuous . If its domain and codomain are the same, it will then be a continuous linear operator . A linear operator on a normed linear space is continuous if and only if it is bounded , for example, when the domain is finite-dimensional. [ 18 ] An infinite-dimensional domain may have discontinuous linear operators . An example of an unbounded, hence discontinuous, linear transformation is differentiation on the space of smooth functions equipped with the supremum norm (a function with small values can have a derivative with large values, while the derivative of 0 is 0). For a specific example, sin( nx )/ n converges to 0, but its derivative cos( nx ) does not, so differentiation is not continuous at 0 (and by a variation of this argument, it is not continuous anywhere). A specific application of linear maps is for geometric transformations , such as those performed in computer graphics , where the translation, rotation and scaling of 2D or 3D objects is performed by the use of a transformation matrix . Linear mappings also are used as a mechanism for describing change: for example in calculus correspond to derivatives; or in relativity, used as a device to keep track of the local transformations of reference frames. Another application of these transformations is in compiler optimizations of nested-loop code, and in parallelizing compiler techniques.
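Two facts stated above can be verified numerically with illustrative matrices: composing linear maps corresponds to multiplying their matrices, and the matrix of an endomorphism transforms under a change of basis as A′ = B⁻¹AB. This is a sketch, assuming the random matrix B is invertible (true for a generic random matrix).

```python
import numpy as np

rng = np.random.default_rng(0)

# Composition corresponds to matrix multiplication: (g o f)(x) = (G F) x.
F = rng.standard_normal((3, 2))      # matrix of f : R^2 -> R^3
G = rng.standard_normal((4, 3))      # matrix of g : R^3 -> R^4
x = rng.standard_normal(2)
assert np.allclose(G @ (F @ x), (G @ F) @ x)

# Change of basis for an endomorphism: A' = B^{-1} A B,
# where the columns of B are the new basis vectors in old coordinates.
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))      # assumed invertible
A_prime = np.linalg.inv(B) @ A @ B

v_new = rng.standard_normal(3)                 # coordinates in the new basis
lhs = A_prime @ v_new                          # transform within the new basis
rhs = np.linalg.inv(B) @ (A @ (B @ v_new))     # convert, transform, convert back
assert np.allclose(lhs, rhs)
```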
https://en.wikipedia.org/wiki/Linear_map
The Linear Model of Innovation was an early model designed to understand the relationship between science and technology, in which basic research flows into applied research, development and diffusion. [ 1 ] It posits scientific research as the basis of innovation, which eventually leads to economic growth. [ 2 ] The model has been criticized by many scholars over the decades. The majority of the criticisms pointed out its crudeness and its limitations in capturing the sources, process, and effects of innovation. [ 2 ] However, it has also been argued that the linear model was simply a creation of academics, debated heavily in academia but never believed in practice. [ 2 ] The model is more fittingly used as a basis for understanding more nuanced alternative models. Two versions of the linear model of innovation are often presented: From the 1950s to the mid-1960s, the industrial innovation process was generally perceived as a linear progression from scientific discovery, through technological development in firms, to the marketplace. [ 3 ] The stages of the "Technology Push" model are: From the mid-1960s to the early 1970s, the second-generation innovation model emerged, referred to as the "market pull" model of innovation. [ 3 ] According to this simple sequential model, the market was the source of new ideas for directing R&D, which had a reactive role in the process. The stages of the "market pull" model are: The linear models of innovation have attracted numerous criticisms concerning their linearity. These models ignore the many feedbacks and loops that occur between the different "stages" of the process. Shortcomings and failures that occur at various stages may lead to a reconsideration of earlier steps, and this may result in an innovation. A history of the linear model of innovation may be found in Benoît Godin's The Linear Model of Innovation: The Historical Construction of an Analytical Framework. [ 4 ] A critical look at the origin of the terminology, and at how it may have a dubious history, can be found in David Edgerton's 'The linear model' did not exist: Reflections on the history and historiography of science and research in industry in the twentieth century. [ 2 ] Current models of innovation, which derive from approaches such as Actor-Network Theory, Social shaping of technology and social learning, [ 5 ] provide a much richer picture of the way innovation works. Current ideas in Open Innovation and User innovation derive from these later ideas. In the 'Phase Gate Model', the product or service concept is frozen at an early stage to minimize risk. Within an enterprise, the innovation process involves a series of sequential phases arranged so that the preceding phase must be cleared before moving to the next. A project must therefore pass through a gate, with the permission of the gatekeeper, before moving to the succeeding phase. Criteria for passing through each gate are defined beforehand. The gatekeeper examines whether the stated objectives of the preceding phase have been met and whether the desired development has taken place during that phase.
https://en.wikipedia.org/wiki/Linear_model_of_innovation
The linear molecular geometry describes the geometry around a central atom bonded to two other atoms (or ligands ) placed at a bond angle of 180°. Linear organic molecules , such as acetylene ( HC≡CH ), are often described by invoking sp orbital hybridization for their carbon centers. According to the VSEPR model (Valence Shell Electron Pair Repulsion model), linear geometry occurs at central atoms with two bonded atoms and zero or three lone pairs ( AX 2 or AX 2 E 3 ) in the AXE notation . Neutral AX 2 molecules with linear geometry include beryllium fluoride ( F−Be−F ) with two single bonds , [ 1 ] carbon dioxide ( O=C=O ) with two double bonds , hydrogen cyanide ( H−C≡N ) with one single and one triple bond. The most important linear molecule with more than three atoms is acetylene ( H−C≡C−H ), in which each of its carbon atoms is considered to be a central atom with a single bond to one hydrogen and a triple bond to the other carbon atom. Linear anions include azide ( N − =N + =N − ) and thiocyanate ( S=C=N − ), and a linear cation is the nitronium ion ( O=N + =O ). [ 2 ] Linear geometry also occurs in AX 2 E 3 molecules, such as xenon difluoride ( XeF 2 ) [ 3 ] and the triiodide ion ( I − 3 ) with one iodide bonded to the two others. As described by the VSEPR model, the five valence electron pairs on the central atom form a trigonal bipyramid in which the three lone pairs occupy the less crowded equatorial positions and the two bonded atoms occupy the two axial positions at the opposite ends of an axis, forming a linear molecule.
https://en.wikipedia.org/wiki/Linear_molecular_geometry
Linear motion , also called rectilinear motion , [ 1 ] is one-dimensional motion along a straight line , and can therefore be described mathematically using only one spatial dimension . The linear motion can be of two types: uniform linear motion , with constant velocity (zero acceleration ); and non-uniform linear motion , with variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along a line can be described by its position x {\displaystyle x} , which varies with t {\displaystyle t} (time). An example of linear motion is an athlete running a 100-meter dash along a straight track. [ 2 ] Linear motion is the most basic of all motion. According to Newton's first law of motion , objects that do not experience any net force will continue to move in a straight line with a constant velocity until they are subjected to a net force. Under everyday circumstances, external forces such as gravity and friction can cause an object to change the direction of its motion, so that its motion cannot be described as linear. [ 3 ] One may compare linear motion to general motion. In general motion, a particle's position and velocity are described by vectors , which have a magnitude and direction. In linear motion, the directions of all the vectors describing the system are equal and constant which means the objects move along the same axis and do not change direction. The analysis of such systems may therefore be simplified by neglecting the direction components of the vectors involved and dealing only with the magnitude . [ 2 ] The motion in which all the particles of a body move through the same distance in the same time is called translatory motion. There are two types of translatory motions: rectilinear motion; curvilinear motion . Since linear motion is a motion in a single dimension, the distance traveled by an object in particular direction is the same as displacement . [ 4 ] The SI unit of displacement is the metre . [ 5 ] [ 6 ] If x 1 {\displaystyle x_{1}} is the initial position of an object and x 2 {\displaystyle x_{2}} is the final position, then mathematically the displacement is given by: Δ x = x 2 − x 1 {\displaystyle \Delta x=x_{2}-x_{1}} The equivalent of displacement in rotational motion is the angular displacement θ {\displaystyle \theta } measured in radians . The displacement of an object cannot be greater than the distance because it is also a distance but the shortest one. Consider a person travelling to work daily. Overall displacement when he returns home is zero, since the person ends up back where he started, but the distance travelled is clearly not zero. Velocity refers to a displacement in one direction with respect to an interval of time. It is defined as the rate of change of displacement over change in time. [ 7 ] Velocity is a vector quantity, representing a direction and a magnitude of movement. The magnitude of a velocity is called speed. The SI unit of speed is m ⋅ s − 1 , {\displaystyle {\text{m}}\cdot {\text{s}}^{-1},} that is metre per second . [ 6 ] The average velocity of a moving body is its total displacement divided by the total time needed to travel from the initial point to the final point. It is an estimated velocity for a distance to travel. 
Mathematically, it is given by: [ 8 ] [ 9 ] v avg = Δ x Δ t = x 2 − x 1 t 2 − t 1 {\displaystyle \mathbf {v} _{\text{avg}}={\frac {\Delta \mathbf {x} }{\Delta t}}={\frac {\mathbf {x} _{2}-\mathbf {x} _{1}}{t_{2}-t_{1}}}} where: The magnitude of the average velocity | v avg | {\displaystyle \left|\mathbf {v} _{\text{avg}}\right|} is called an average speed. In contrast to an average velocity, referring to the overall motion in a finite time interval, the instantaneous velocity of an object describes the state of motion at a specific point in time. It is defined by letting the length of the time interval Δ t {\displaystyle \Delta t} tend to zero, that is, the velocity is the time derivative of the displacement as a function of time. v = lim Δ t → 0 Δ x Δ t = d x d t . {\displaystyle \mathbf {v} =\lim _{\Delta t\to 0}{\frac {\Delta \mathbf {x} }{\Delta t}}={\frac {d\mathbf {x} }{dt}}.} The magnitude of the instantaneous velocity | v | {\displaystyle |\mathbf {v} |} is called the instantaneous speed. The instantaneous velocity equation comes from finding the limit as t approaches 0 of the average velocity. The instantaneous velocity shows the position function with respect to time. From the instantaneous velocity the instantaneous speed can be derived by getting the magnitude of the instantaneous velocity. Acceleration is defined as the rate of change of velocity with respect to time. Acceleration is the second derivative of displacement i.e. acceleration can be found by differentiating position with respect to time twice or differentiating velocity with respect to time once. [ 10 ] The SI unit of acceleration is m ⋅ s − 2 {\displaystyle \mathrm {m\cdot s^{-2}} } or metre per second squared . [ 6 ] If a avg {\displaystyle \mathbf {a} _{\text{avg}}} is the average acceleration and Δ v = v 2 − v 1 {\displaystyle \Delta \mathbf {v} =\mathbf {v} _{2}-\mathbf {v} _{1}} is the change in velocity over the time interval Δ t {\displaystyle \Delta t} then mathematically, a avg = Δ v Δ t = v 2 − v 1 t 2 − t 1 {\displaystyle \mathbf {a} _{\text{avg}}={\frac {\Delta \mathbf {v} }{\Delta t}}={\frac {\mathbf {v} _{2}-\mathbf {v} _{1}}{t_{2}-t_{1}}}} The instantaneous acceleration is the limit, as Δ t {\displaystyle \Delta t} approaches zero, of the ratio Δ v {\displaystyle \Delta \mathbf {v} } and Δ t {\displaystyle \Delta t} , i.e., a = lim Δ t → 0 Δ v Δ t = d v d t = d 2 x d t 2 {\displaystyle \mathbf {a} =\lim _{\Delta t\to 0}{\frac {\Delta \mathbf {v} }{\Delta t}}={\frac {d\mathbf {v} }{dt}}={\frac {d^{2}\mathbf {x} }{dt^{2}}}} The rate of change of acceleration, the third derivative of displacement is known as jerk. [ 11 ] The SI unit of jerk is m ⋅ s − 3 {\displaystyle \mathrm {m\cdot s^{-3}} } . In the UK jerk is also referred to as jolt. The rate of change of jerk, the fourth derivative of displacement is known as jounce. [ 11 ] The SI unit of jounce is m ⋅ s − 4 {\displaystyle \mathrm {m\cdot s^{-4}} } which can be pronounced as metres per quartic second . In case of constant acceleration, the four physical quantities acceleration, velocity, time and displacement can be related by using the equations of motion . [ 12 ] [ 13 ] [ 14 ] Here, These relationships can be demonstrated graphically. The gradient of a line on a displacement time graph represents the velocity. The gradient of the velocity time graph gives the acceleration while the area under the velocity time graph gives the displacement. The area under a graph of acceleration versus time is equal to the change in velocity. 
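A minimal sketch of the constant-acceleration case, with illustrative initial values; it checks the standard relations between displacement, velocity, acceleration and time.

```python
u = 2.0     # initial velocity (m/s)
a = 3.0     # constant acceleration (m/s^2)
t = 4.0     # elapsed time (s)

v = u + a * t                   # final velocity:  v = u + a t
s = u * t + 0.5 * a * t ** 2    # displacement:    s = u t + a t^2 / 2

assert abs(v ** 2 - (u ** 2 + 2 * a * s)) < 1e-9   # v^2 = u^2 + 2 a s
assert abs(s - 0.5 * (u + v) * t) < 1e-9           # s = (u + v) t / 2
print(v, s)   # 14.0 m/s and 32.0 m
```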
For the rotation of a rigid body about a fixed axis, the analogous angular quantities are related to the linear ones as follows: s is the arc length, r is the distance from the axis to any point, and a_t is the tangential acceleration, which is the component of the acceleration parallel to the motion. In contrast, the centripetal acceleration, a_c = v²/r = ω²r, is perpendicular to the motion. The component of the force parallel to the motion, or equivalently perpendicular to the line connecting the point of application to the axis, is F_⊥. In the corresponding sums, the index j runs from 1 to N over the particles and/or points of application.
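A small numerical sketch of the rotational analogues mentioned above, using the centripetal relation a_c = v²/r = ω²r together with the standard relations v = ωr and a_t = αr; the values are illustrative.

```python
import math

r = 0.5        # distance from the axis (m)
omega = 4.0    # angular velocity (rad/s)
alpha = 2.0    # angular acceleration (rad/s^2)

v = omega * r          # tangential speed
a_t = alpha * r        # tangential acceleration (parallel to the motion)
a_c = v ** 2 / r       # centripetal acceleration (perpendicular to the motion)

assert math.isclose(a_c, omega ** 2 * r)   # v^2/r equals omega^2 r
print(v, a_t, a_c)     # 2.0 m/s, 1.0 m/s^2, 8.0 m/s^2
```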
https://en.wikipedia.org/wiki/Linear_motion
In computer networking , linear network coding is a technique in which intermediate nodes transmit data from source nodes to sink nodes by means of linear combinations . Linear network coding may be used to improve a network's throughput, efficiency, and scalability , as well as its resilience to attacks and eavesdropping. The nodes of a network take several packets and combine them for transmission. This process may be used to attain the maximum possible information flow in a network . It has been proven that, theoretically, linear coding is enough to achieve the upper bound in multicast problems with one source. [ 1 ] However, linear coding is not sufficient in general, even for more general versions of linearity such as convolutional coding and filter-bank coding . [ 2 ] Finding optimal coding solutions for general network problems with arbitrary demands is a hard problem, which can be NP-hard [ 3 ] [ 4 ] and even undecidable . [ 5 ] [ 6 ] In a linear network coding problem, a group of nodes P {\displaystyle P} are involved in moving the data from S {\displaystyle S} source nodes to K {\displaystyle K} sink nodes. Each node generates new packets which are linear combinations of past received packets by multiplying them by coefficients chosen from a finite field , typically of size G F ( 2 s ) {\displaystyle GF(2^{s})} . More formally, each node p k {\displaystyle p_{k}} with indegree I n D e g ( p k ) = S {\displaystyle InDeg(p_{k})=S} generates a message X k {\displaystyle X_{k}} from the linear combination of received messages { M i } i = 1 S {\displaystyle \{M_{i}\}_{i=1}^{S}} by the formula X k = ∑ i = 1 S g k i ⋅ M i , {\displaystyle X_{k}=\sum _{i=1}^{S}g_{k}^{i}\cdot M_{i},} where the values g k i {\displaystyle g_{k}^{i}} are coefficients selected from G F ( 2 s ) {\displaystyle GF(2^{s})} . Since operations are computed in a finite field, the generated message is of the same length as the original messages. Each node forwards the computed value X k {\displaystyle X_{k}} along with the coefficients g k i {\displaystyle g_{k}^{i}} used in the k th {\displaystyle k^{\text{th}}} level. Sink nodes receive these network coded messages, and collect them in a matrix. The original messages can be recovered by performing Gaussian elimination on the matrix. [ 7 ] In reduced row echelon form, decoded packets correspond to the rows of the form e i = [ 0...010...0 ] {\displaystyle e_{i}=[0...010...0]} . A network is represented by a directed graph G = ( V , E , C ) {\displaystyle {\mathcal {G}}=(V,E,C)} . V {\displaystyle V} is the set of nodes or vertices, E {\displaystyle E} is the set of directed links (or edges), and C {\displaystyle C} gives the capacity of each link of E {\displaystyle E} . Let T ( s , t ) {\displaystyle T(s,t)} be the maximum possible throughput from node s {\displaystyle s} to node t {\displaystyle t} . By the max-flow min-cut theorem , T ( s , t ) {\displaystyle T(s,t)} is upper bounded by the minimum capacity of all cuts , which is the sum of the capacities of the edges on a cut, between these two nodes. Karl Menger proved that there is always a set of edge-disjoint paths achieving the upper bound in a unicast scenario (Menger's theorem, closely related to the max-flow min-cut theorem ). Later, the Ford–Fulkerson algorithm was proposed to find such paths in polynomial time. Then, Edmonds proved in the paper "Edge-Disjoint Branchings" that the upper bound in the broadcast scenario is also achievable, and proposed a polynomial time algorithm. 
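Before turning to the multicast case, here is a minimal sketch of the encoding and Gaussian-elimination decoding just described, restricted to the simplest field GF(2), where a coefficient is a single bit and "addition" is XOR. The packet contents, generation size and function names are illustrative assumptions, not part of the article.

```python
# Minimal sketch (assumption: field GF(2), so coefficients are bits and addition
# is XOR; all packet values and names are illustrative).
import random

def combine(coeffs, packets):
    """Linearly combine packets (lists of bits) using GF(2) coefficients."""
    out = [0] * len(packets[0])
    for c, p in zip(coeffs, packets):
        if c:
            out = [a ^ b for a, b in zip(out, p)]
    return out

def decode(coded, m):
    """Gaussian elimination over GF(2); coded = list of (coeff_vector, payload)."""
    rows = [list(c) + list(p) for c, p in coded]
    rank = 0
    for col in range(m):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return [row[m:] for row in rows[:rank]] if rank == m else None

originals = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]   # M = 3 source packets
coded = []
while decode(coded, 3) is None:                          # collect innovative packets
    g = [random.randint(0, 1) for _ in range(3)]         # random GF(2) coefficients
    coded.append((g, combine(g, originals)))
print(decode(coded, 3))   # recovers the three original packets
```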
However, the situation in the multicast scenario is more complicated, and in fact, such an upper bound can't be reached using traditional routing ideas. Ahlswede et al. proved that it can be achieved if additional computing tasks (incoming packets are combined into one or several outgoing packets) can be done in the intermediate nodes. [ 8 ] The butterfly network [ 8 ] is often used to illustrate how linear network coding can outperform routing . Two source nodes (at the top of the picture) have information A and B that must be transmitted to the two destination nodes (at the bottom). Each destination node wants to know both A and B. Each edge can carry only a single value (we can think of an edge transmitting a bit in each time slot). If only routing were allowed, then the central link would be only able to carry A or B, but not both. Supposing we send A through the center; then the left destination would receive A twice and not know B at all. Sending B poses a similar problem for the right destination. We say that routing is insufficient because no routing scheme can transmit both A and B to both destinations simultaneously. Meanwhile, it takes four time slots in total for both destination nodes to know A and B. Using a simple code, as shown, A and B can be transmitted to both destinations simultaneously by sending the sum of the symbols through the two relay nodes – encoding A and B using the formula "A+B". The left destination receives A and A + B, and can calculate B by subtracting the two values. Similarly, the right destination will receive B and A + B, and will also be able to determine both A and B. Therefore, with network coding, it takes only three time slots and improves the throughput. Random linear network coding [ 9 ] (RLNC) is a simple yet powerful encoding scheme, which in broadcast transmission schemes allows close to optimal throughput using a decentralized algorithm. Nodes transmit random linear combinations of the packets they receive, with coefficients chosen randomly, with a uniform distribution from a Galois field. If the field size is sufficiently large, the probability that the receiver(s) will obtain linearly independent combinations (and therefore obtain innovative information) approaches 1. It should however be noted that, although random linear network coding has excellent throughput performance, if a receiver obtains an insufficient number of packets, it is extremely unlikely that they can recover any of the original packets. This can be addressed by sending additional random linear combinations until the receiver obtains the appropriate number of packets. There are three key parameters in RLNC. The first one is the generation size. In RLNC, the original data transmitted over the network is divided into packets. The source and intermediate nodes in the network can combine and recombine the set of original and coded packets. The original M {\displaystyle M} packets form a block, usually called a generation. The number of original packets combined and recombined together is the generation size. The second parameter is the packet size. Usually, the size of the original packets is fixed. In the case of unequally-sized packets, these can be zero-padded if they are shorter or split into multiple packets if they are longer. In practice, the packet size can be the size of the maximum transmission unit (MTU) of the underlying network protocol. For example, it can be around 1500 bytes in an Ethernet frame . The third key parameter is the Galois field used. 
In practice, the most commonly used Galois fields are binary extension fields. And the most commonly used sizes for the Galois fields are the binary field G F ( 2 ) {\displaystyle GF(2)} and the so-called binary-8 ( G F ( 2 8 ) {\displaystyle GF(2^{8})} ). In the binary field, each element is one bit long, while in the binary-8, it is one byte long. Since the packet size is usually larger than the field size, each packet is seen as a set of elements from the Galois field (usually referred to as symbols) appended together. The packets have a fixed amount of symbols (Galois field elements), and since all the operations are performed over Galois fields, then the size of the packets does not change with subsequent linear combinations. The sources and the intermediate nodes can combine any subset of the original and previously coded packets performing linear operations. To form a coded packet in RLNC, the original and previously coded packets are multiplied by randomly chosen coefficients and added together. Since each packet is just an appended set of Galois field elements, the operations of multiplication and addition are performed symbol-wise over each of the individual symbols of the packets, as shown in the picture from the example. To preserve the statelessness of the code, the coding coefficients used to generate the coded packets are appended to the packets transmitted over the network. Therefore, each node in the network can see what coefficients were used to generate each coded packet. One novelty of linear network coding over traditional block codes is that it allows the recombination of previously coded packets into new and valid coded packets. This process is usually called recoding. After a recoding operation, the size of the appended coding coefficients does not change. Since all the operations are linear, the state of the recoded packet can be preserved by applying the same operations of addition and multiplication to the payload and the appended coding coefficients. In the following example, we will illustrate this process. Any destination node must collect enough linearly independent coded packets to be able to reconstruct the original data. Each coded packet can be understood as a linear equation where the coefficients are known since they are appended to the packet. In these equations, each of the original M {\displaystyle M} packets is the unknown. To solve the linear system of equations, the destination needs at least M {\displaystyle M} linearly independent equations (packets). In the figure, we can see an example of two packets linearly combined into a new coded packet. In the example, we have two packets, namely packet f {\displaystyle f} and packet e {\displaystyle e} . The generation size of our example is two. We know this because each packet has two coding coefficients ( C i j {\displaystyle C_{ij}} ) appended. The appended coefficients can take any value from the Galois field. However, an original, uncoded data packet would have appended the coding coefficients [ 0 , 1 ] {\displaystyle [0,1]} or [ 1 , 0 ] {\displaystyle [1,0]} , which means that they are constructed by a linear combination of zero times one of the packets plus one time the other packet. Any coded packet would have appended other coefficients. In our example, packet f {\displaystyle f} for instance has appended the coefficients [ C 11 , C 12 ] {\displaystyle [C_{11},C_{12}]} . 
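As a sketch of the symbol-wise combination just described (the recoding step in the next paragraph works the same way, applied to both payload and appended coefficients), the code below forms one coded packet over GF(2^8). The article does not fix a particular irreducible polynomial, so the AES polynomial 0x11B is an assumed choice, and the packet contents are illustrative.

```python
# Minimal sketch of forming one RLNC coded packet over GF(2^8) ("binary-8").
# Assumption: arithmetic modulo x^8 + x^4 + x^3 + x + 1 (0x11B, the AES
# polynomial); packet values and names are illustrative.
import random

def gf256_mul(a, b):
    """Multiply two GF(2^8) elements (each in 0..255)."""
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B            # reduce modulo the chosen polynomial
        b >>= 1
    return product

def encode(packets, coefficients):
    """Symbol-wise linear combination: each packet is a list of byte symbols."""
    coded = [0] * len(packets[0])
    for coeff, packet in zip(coefficients, packets):
        for i, symbol in enumerate(packet):
            coded[i] ^= gf256_mul(coeff, symbol)   # addition in GF(2^8) is XOR
    return coded

f = [0x12, 0xA4, 0x7F, 0x00]              # packet f: four one-byte symbols
e = [0xFF, 0x03, 0x9C, 0x51]              # packet e
coeffs = [random.randrange(256), random.randrange(256)]
coded_packet = encode([f, e], coeffs)     # same length as f and e
payload_on_wire = coeffs + coded_packet   # coefficients are appended for decoding
```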
Since network coding can be applied at any layer of the communication protocol, these packets can have a header from the other layers, which is ignored in the network coding operations. Now, let's assume that the network node wants to produce a new coded packet combining packet f {\displaystyle f} and packet e {\displaystyle e} . In RLNC, it will randomly choose two coding coefficients, d 1 {\displaystyle d_{1}} and d 2 {\displaystyle d_{2}} in the example. The node will multiply each symbol of packet f {\displaystyle f} by d 1 {\displaystyle d_{1}} , and each symbol of packet e {\displaystyle e} by d 2 {\displaystyle d_{2}} . Then, it will add the results symbol-wise to produce the new coded data. It will perform the same operations of multiplication and addition on the coding coefficients of the coded packets. Linear network coding is still a relatively new subject, although the topic has been extensively researched over the last twenty years. Nevertheless, some misconceptions persist that are no longer justified: Decoding computational complexity: Network coding decoders have been improved over the years. Nowadays, the algorithms are highly efficient and parallelizable. In 2016, with Intel Core i5 processors with SIMD instructions enabled, the decoding goodput of network coding was 750 MB/s for a generation size of 16 packets and 250 MB/s for a generation size of 64 packets. [ 10 ] Furthermore, today's algorithms can be highly parallelized, increasing the encoding and decoding goodput even further. [ 11 ] Transmission overhead: It is usually thought that the transmission overhead of network coding is high due to the need to append the coding coefficients to each coded packet. In reality, this overhead is negligible in most applications. The overhead due to coding coefficients can be computed as follows. Each packet has M {\displaystyle M} coding coefficients appended. The size of each coefficient is the number of bits needed to represent one element of the Galois field. In practice, most network coding applications use a generation size of no more than 32 packets per generation and Galois fields of 256 elements (binary-8). With these numbers, each packet needs M ⋅ log 2 ⁡ ( q ) = 32 ⋅ 8 = 256 {\displaystyle M\cdot \log _{2}(q)=32\cdot 8=256} bits, i.e. 32 bytes, of appended overhead. If each packet is 1500 bytes long (i.e. the Ethernet MTU), then 32 bytes represent an overhead of only about 2%. Overhead due to linear dependencies: Since the coding coefficients are chosen randomly in RLNC, there is a chance that some transmitted coded packets are not beneficial to the destination because they are formed using a linearly dependent combination of packets. However, this overhead is negligible in most applications. The linear dependencies depend on the Galois field's size and are practically independent of the generation size used. We can illustrate this with the following example. Let us assume we are using a Galois field of q {\displaystyle q} elements and a generation size of M {\displaystyle M} packets. If the destination has not received any coded packet, we say it has M {\displaystyle M} degrees of freedom, and then almost any coded packet will be useful and innovative. In fact, only the zero packet (only zeroes in the coding coefficients) will be non-innovative. The probability of generating the zero packet is equal to the probability of each of the M {\displaystyle M} coding coefficients being equal to the zero element of the Galois field. 
That is, the probability of a non-innovative packet is 1 q M {\displaystyle {\frac {1}{q^{M}}}} . With each successive innovative transmission, it can be shown that the exponent of the probability of a non-innovative packet is reduced by one. When the destination has received M − 1 {\displaystyle M-1} innovative packets (i.e., it needs only one more packet to fully decode the data), the probability of a non-innovative packet is 1 q {\displaystyle {\frac {1}{q}}} . We can use this knowledge to calculate the expected number of linearly dependent packets per generation. In the worst-case scenario, when the Galois field used contains only two elements ( q = 2 {\displaystyle q=2} ), the expected number of linearly dependent packets per generation is about 1.6 extra packets. If our generation size is 32 or 64 packets, this represents an overhead of roughly 5% or 2.5%, respectively. If we use the binary-8 field ( q = 256 {\displaystyle q=256} ), then the expected number of linearly dependent packets per generation is practically zero. Since the last packets of a generation are the major contributors to the overhead due to linear dependencies, there are RLNC-based protocols such as tunable sparse network coding [ 12 ] that exploit this knowledge. These protocols introduce sparsity (zero elements) in the coding coefficients at the beginning of the transmission to reduce the decoding complexity, and reduce the sparsity at the end of the transmission to reduce the overhead due to linear dependencies. Over the years, multiple researchers and companies have integrated network coding solutions into their applications in several different areas. [ 13 ]
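The roughly 1.6-packet figure quoted above for GF(2) can be reproduced with a short calculation, under the assumed model that coefficients are drawn uniformly, so a new packet is dependent with probability q^(i−M) when i of the M degrees of freedom are already covered; this is a sketch, not the article's derivation.

```python
# Minimal sketch (assumed model: with i degrees of freedom already received, a
# random coded packet is non-innovative with probability q**(i - M); the expected
# number of wasted packets at that stage is p / (1 - p)).
def expected_dependent_packets(q, M):
    total = 0.0
    for i in range(M):
        p = q ** (i - M)               # probability the next packet is dependent
        total += p / (1.0 - p)         # expected extra transmissions at this stage
    return total

print(expected_dependent_packets(2, 32))    # ~1.6 extra packets for GF(2)
print(expected_dependent_packets(256, 32))  # ~0.004, practically zero for GF(2^8)
```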
https://en.wikipedia.org/wiki/Linear_network_coding
The linear no-threshold model ( LNT ) is a dose-response model used in radiation protection to estimate stochastic health effects such as radiation-induced cancer , genetic mutations and teratogenic effects on the human body due to exposure to ionizing radiation . The model assumes a linear relationship between dose and health effects, even for very low doses where biological effects are more difficult to observe. The LNT model implies that all exposure to ionizing radiation is harmful, regardless of how low the dose is, and that the effect is cumulative over lifetime. The LNT model is commonly used by regulatory bodies as a basis for formulating public health policies that set regulatory dose limits to protect against the effects of radiation. The validity of the LNT model, however, is disputed, and other models exist: the threshold model , which assumes that very small exposures are harmless, the radiation hormesis model, which says that radiation at very small doses can be beneficial, and the supra-linear model. It has been argued that the LNT model may have created an irrational fear of radiation. [ 1 ] [ 2 ] Scientific organizations and government regulatory bodies generally support use of the LNT model, particularly for optimization. However, some caution against estimating health effects from doses below a certain level (see § Controversy ). Stochastic health effects are those that occur by chance, and whose probability is proportional to the dose , but whose severity is independent of the dose. [ 3 ] The LNT model assumes there is no lower threshold at which stochastic effects start, and assumes a linear relationship between dose and the stochastic health risk. In other words, LNT assumes that radiation has the potential to cause harm at any dose level, however small, and the sum of several very small exposures is just as likely to cause a stochastic health effect as a single larger exposure of equal dose value. [ 1 ] In contrast, deterministic health effects are radiation-induced effects such as acute radiation syndrome , which are caused by tissue damage. Deterministic effects reliably occur above a threshold dose and their severity increases with dose. [ 4 ] Because of the inherent differences, LNT is not a model for deterministic effects, which are instead characterized by other types of dose-response relationships. LNT is a common model to calculate the probability of radiation-induced cancer both at high doses where epidemiology studies support its application, but controversially, also at low doses, which is a dose region that has a lower predictive statistical confidence . [ 1 ] Nonetheless, regulatory bodies, such as the Nuclear Regulatory Commission (NRC), commonly use LNT as a basis for regulatory dose limits to protect against stochastic health effects, as found in many public health policies. Whether the LNT model describes the reality for small-dose exposures is disputed, and challenges to the LNT model used by NRC for setting radiation protection regulations were submitted. [ 2 ] NRC rejected the petitions in 2021 because "they fail to present an adequate basis supporting the request to discontinue use of the LNT model". [ 5 ] Other dose models include: the threshold model , which assumes that very small exposures are harmless, and the radiation hormesis model, which claims that radiation at very small doses can be beneficial. 
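The proportionality and additivity assumptions of LNT can be stated in a few lines of code. The risk coefficient below is purely illustrative (of the order of a few percent per sievert, the magnitude commonly cited in radiation protection) and is not taken from the text above; this is a sketch of the model's arithmetic, not a statement about actual health effects.

```python
# Minimal illustration of the LNT assumptions: risk is proportional to dose and
# additive across exposures. RISK_PER_SV is an illustrative placeholder value.
RISK_PER_SV = 0.05

def lnt_risk(dose_sv):
    """Excess stochastic risk assumed by LNT for a given effective dose."""
    return RISK_PER_SV * dose_sv

# Under LNT, many tiny exposures carry the same total risk as one larger dose
# of the same total magnitude.
small_exposures = [0.001] * 10                     # ten exposures of 1 mSv each
assert abs(sum(lnt_risk(d) for d in small_exposures) - lnt_risk(0.01)) < 1e-12
print(lnt_risk(0.01))                              # 0.0005 for a 10 mSv total dose
```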
Because the current data is inconclusive, scientists disagree on which model should be used, though most national and international cancer research organizations explicitly endorse LNT for regulating exposures to low dose radiation. The model is sometimes used to quantify the cancerous effect of collective doses of low-level radioactive contaminations, which is controversial. Such practice has been criticized by the International Commission on Radiological Protection since 2007. [ 6 ] [ 1 ] The association of exposure to radiation with cancer had been observed as early as 1902, six years after the discovery of X-rays by Wilhelm Röntgen and radioactivity by Henri Becquerel . [ 8 ] In 1927, Hermann Muller demonstrated that radiation may cause genetic mutation. [ 9 ] He also suggested mutation as a cause of cancer. [ 10 ] Gilbert N. Lewis and Alex Olson, based on Muller's discovery of the effect of radiation on mutation, proposed a mechanism for biological evolution in 1928, suggesting that genomic mutation was induced by cosmic and terrestrial radiation and first introduced the idea that such mutation may occur proportionally to the dose of radiation. [ 11 ] Various laboratories, including Muller's, then demonstrated the apparent linear dose response of mutation frequency. [ 12 ] Muller, who received a Nobel Prize for his work on the mutagenic effect of radiation in 1946, asserted in his Nobel lecture, The Production of Mutation , that mutation frequency is "directly and simply proportional to the dose of irradiation applied" and that there is "no threshold dose". [ 13 ] The early studies were based on higher levels of radiation that made it hard to establish the safety of low level of radiation. Indeed, many early scientists believed that there may be a tolerance level, and that low doses of radiation may not be harmful. [ 8 ] A later study in 1955 on mice exposed to low dose of radiation suggests that they may outlive control animals. [ 14 ] The interest in the effects of radiation intensified after the dropping of atomic bombs on Hiroshima and Nagasaki , and studies were conducted on the survivors. Although compelling evidence on the effect of low dosage of radiation was hard to come by, by the late 1940s, the idea of LNT became more popular due to its mathematical simplicity. In 1954, the National Council on Radiation Protection and Measurements (NCRP) introduced the concept of maximum permissible dose . In 1958, the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) assessed the LNT model and a threshold model, but noted the difficulty in acquiring "reliable information about the correlation between small doses and their effects either in individuals or in large populations". The United States Congress Joint Committee on Atomic Energy (JCAE) similarly could not establish if there is a threshold or "safe" level for exposure; nevertheless, it introduced the concept of " As Low As Reasonably Achievable " (ALARA). ALARA would become a fundamental principle in radiation protection policy that implicitly accepts the validity of LNT. In 1959, the United States Federal Radiation Council (FRC) supported the concept of the LNT extrapolation down to the low dose region in its first report. [ 8 ] By the 1970s, the LNT model had become accepted as the standard in radiation protection practice by a number of bodies. 
[ 8 ] In 1972, the first report of the National Academy of Sciences (NAS) Biological Effects of Ionizing Radiation (BEIR) committee, an expert panel that reviewed the available peer-reviewed literature, supported the LNT model on pragmatic grounds, noting that while the "dose-effect relationship for x rays and gamma rays may not be a linear function", the "use of linear extrapolation ... may be justified on pragmatic grounds as a basis for risk estimation." In its seventh report of 2006, NAS BEIR VII writes, "the committee concludes that the preponderance of information indicates that there will be some risk, even at low doses". [ 15 ] The Health Physics Society (in the United States) has published a documentary series on the origins of the LNT model. [ 16 ] Radiation precautions have led to sunlight being listed as a carcinogen at all sun exposure rates, due to the ultraviolet component of sunlight, with no safe level of sunlight exposure being suggested, following the precautionary LNT model. According to a 2007 study submitted by the University of Ottawa to the Department of Health and Human Services in Washington, D.C., there is not enough information to determine a safe level of sun exposure. [ 17 ] The linear no-threshold model is used to extrapolate the expected number of extra deaths caused by exposure to environmental radiation , and it therefore has a great impact on public policy . The model is used to translate any radiation release into a number of lives lost, while any reduction in radiation exposure , for example as a consequence of radon detection, is translated into a number of lives saved. When the doses are very low, the model predicts new cancers only in a very small fraction of the population, but for a large population the number of lives is extrapolated into hundreds or thousands. A linear model has long been used in health physics to set maximum acceptable radiation exposures. The LNT model has been contested by a number of scientists. [ 1 ] It has been claimed that the early proponent of the model, Hermann Joseph Muller, intentionally ignored an early study that did not support the LNT model when he gave his 1946 Nobel Prize address advocating the model. [ 18 ] In very high dose radiation therapy , it was known at the time that radiation can cause a physiological increase in the rate of pregnancy anomalies; however, human exposure data and animal testing suggest that the "malformation of organs appears to be a deterministic effect with a threshold dose ", below which no rate increase is observed. [ 19 ] A review in 1999 on the link between the Chernobyl accident and teratology (birth defects) concluded that "there is no substantive proof regarding radiation‐induced teratogenic effects from the Chernobyl accident". [ 19 ] It is argued that the human body has defense mechanisms, such as DNA repair and programmed cell death , that would protect it against carcinogenesis due to low-dose exposures of carcinogens. [ 20 ] However, these repair mechanisms are known to be error-prone. [ 5 ] A 2011 study of cellular repair mechanisms supports the evidence against the linear no-threshold model. [ 21 ] According to its authors, this study published in the Proceedings of the National Academy of Sciences of the United States of America "casts considerable doubt on the general assumption that risk to ionizing radiation is proportional to dose". 
A 2011 review of studies addressing childhood leukaemia following exposure to ionizing radiation, including both diagnostic exposure and natural background exposure from radon , concluded that existing risk factors, excess relative risk per sievert (ERR/Sv), is "broadly applicable" to low dose or low dose-rate exposure, "although the uncertainties associated with this estimate are considerable". The study also notes that "epidemiological studies have been unable, in general, to detect the influence of natural background radiation upon the risk of childhood leukaemia" [ 22 ] Many expert scientific panels have been convened on the risks of ionizing radiation. Most explicitly support the LNT model and none have concluded that evidence exists for a threshold, with the exception of the French Academy of Sciences in a 2005 report. [ 23 ] [ 24 ] Considering the uncertainty of health effects at low doses, several organizations caution against estimating health effects below certain doses, generally below natural background, as noted below: Based upon the current state of science, the NRC concludes that the actual level of risk associated with low doses of radiation remains uncertain and some studies, such as the INWORKS study, show there is at least some risk from low doses of radiation. Moreover, the current state of science does not provide compelling evidence of a threshold, as highlighted by the fact that no national or international authoritative scientific advisory bodies have concluded that such evidence exists. Therefore, based upon the stated positions of the aforementioned advisory bodies; the comments and recommendations of NCI, NIOSH, and the EPA; the October 28, 2015, recommendation of the ACMUI; and its own professional and technical judgment, the NRC has determined that the LNT model continues to provide a sound regulatory basis for minimizing the risk of unnecessary radiation exposure to both members of the public and occupational workers. Consequently, the NRC will retain the dose limits for occupational workers and members of the public in 10 CFR part 20 radiation protection regulations. The assumption that any stimulatory hormetic effects from low doses of ionizing radiation will have a significant health benefit to humans that exceeds potential detrimental effects from the radiation exposure is unwarranted at this time. The scientific research base shows that there is no threshold of exposure below which low levels of ionizing radiation can be demonstrated to be harmless or beneficial. Underlying the risk models is a large body of epidemiological and radiobiological data. In general, results from both lines of research are consistent with a linear, no-threshold dose (LNT) response model in which the risk of inducing a cancer in an irradiated tissue by low doses of radiation is proportional to the dose to that tissue The Committee concluded that there remains good justification for the use of a non-threshold model for risk inference given the robust knowledge on the role of mutation and chromosomal aberrations in carcinogenesis. That said, there are ways that radiation could act that might lead to a re-evaluation of the use of a linear dose-response model to infer radiation cancer risks. 
A number of organisations caution against using the Linear no-threshold model to estimate risk from radiation exposure below a certain level: In conclusion, this report raises doubts on the validity of using LNT for evaluating the carcinogenic risk of low doses (< 100 mSv) and even more for very low doses (< 10 mSv). The LNT concept can be a useful pragmatic tool for assessing rules in radioprotection for doses above 10 mSv; however since it is not based on biological concepts of our current knowledge, it should not be used without precaution for assessing by extrapolation the risks associated with low and even more so, with very low doses (< 10 mSv), especially for benefit-risk assessments imposed on radiologists by the European directive 97-43. The Health Physics Society advises against estimating health risks to people from exposures to ionizing radiation that are near or less than natural background levels because statistical uncertainties at these low levels are great. The Scientific Committee does not recommend multiplying very low doses by large numbers of individuals to estimate numbers of radiation-induced health effects within a population exposed to incremental doses at levels equivalent to or lower than natural background levels. It has been argued that the LNT model had caused an irrational fear of radiation , whose observable effects are much more significant than non-observable effects postulated by LNT. [ 1 ] In the wake of the 1986 Chernobyl accident in Ukraine , Europe-wide anxieties were fomented in pregnant mothers over the perception enforced by the LNT model that their children would be born with a higher rate of mutations. [ 37 ] As far afield as the country of Switzerland , hundreds of excess induced abortions were performed on the healthy unborn, out of this no-threshold fear. [ 38 ] Following the accident however, studies of data sets approaching a million births in the EUROCAT database, divided into "exposed" and control groups were assessed in 1999. As no Chernobyl impacts were detected, the researchers conclude "in retrospect the widespread fear in the population about the possible effects of exposure on the unborn was not justified". [ 39 ] Despite studies from Germany and Turkey, the only robust evidence of negative pregnancy outcomes that transpired after the accident were these elective abortion indirect effects, in Greece, Denmark, Italy etc., due to the anxieties created. [ 40 ] The consequences of low-level radiation are often more psychological than radiological. Because damage from very-low-level radiation cannot be detected, people exposed to it are left in anguished uncertainty about what will happen to them. Many believe they have been fundamentally contaminated for life and may refuse to have children for fear of birth defects . They may be shunned by others in their community who fear a sort of mysterious contagion. [ 41 ] Forced evacuation from a radiation or nuclear accident may lead to social isolation, anxiety, depression, psychosomatic medical problems, reckless behavior, or suicide. Such was the outcome of the 1986 Chernobyl nuclear disaster in Ukraine. A comprehensive 2005 study concluded that "the mental health impact of Chernobyl is the largest public health problem unleashed by the accident to date". [ 41 ] Frank N. von Hippel , a U.S. 
scientist, commented on the 2011 Fukushima nuclear disaster , saying that "fear of ionizing radiation could have long-term psychological effects on a large portion of the population in the contaminated areas". [ 42 ] Such great psychological danger does not accompany other materials that put people at risk of cancer and other deadly illness. Visceral fear is not widely aroused by, for example, the daily emissions from coal burning, although as a National Academy of Sciences study found, this causes 10,000 premature deaths a year in the US. It is "only nuclear radiation that bears a huge psychological burden – for it carries a unique historical legacy". [ 41 ]
https://en.wikipedia.org/wiki/Linear_no-threshold_model
Linear predictive coding ( LPC ) is a method used mostly in audio signal processing and speech processing for representing the spectral envelope of a digital signal of speech in compressed form, using the information of a linear predictive model . [ 1 ] [ 2 ] LPC is the most widely used method in speech coding and speech synthesis . It is a powerful speech analysis technique, and a useful method for encoding good quality speech at a low bit rate . LPC starts with the assumption that a speech signal is produced by a buzzer at the end of a tube (for voiced sounds), with occasional added hissing and popping sounds (for voiceless sounds such as sibilants and plosives ). Although apparently crude, this Source–filter model is actually a close approximation of the reality of speech production. The glottis (the space between the vocal folds) produces the buzz, which is characterized by its intensity ( loudness ) and frequency (pitch). The vocal tract (the throat and mouth) forms the tube, which is characterized by its resonances; these resonances give rise to formants , or enhanced frequency bands in the sound produced. Hisses and pops are generated by the action of the tongue, lips and throat during sibilants and plosives. LPC analyzes the speech signal by estimating the formants, removing their effects from the speech signal, and estimating the intensity and frequency of the remaining buzz. The process of removing the formants is called inverse filtering, and the remaining signal after the subtraction of the filtered modeled signal is called the residue. The numbers which describe the intensity and frequency of the buzz, the formants, and the residue signal, can be stored or transmitted somewhere else. LPC synthesizes the speech signal by reversing the process: use the buzz parameters and the residue to create a source signal, use the formants to create a filter (which represents the tube), and run the source through the filter, resulting in speech. Because speech signals vary with time, this process is done on short chunks of the speech signal, which are called frames; generally, 30 to 50 frames per second give an intelligible speech with good compression. Linear prediction (signal estimation) goes back to at least the 1940s when Norbert Wiener developed a mathematical theory for calculating the best filters and predictors for detecting signals hidden in noise. [ 3 ] [ 4 ] Soon after Claude Shannon established a general theory of coding , work on predictive coding was done by C. Chapin Cutler , [ 5 ] Bernard M. Oliver [ 6 ] and Henry C. Harrison. [ 7 ] Peter Elias in 1955 published two papers on predictive coding of signals. [ 8 ] [ 9 ] Linear predictors were applied to speech analysis independently by Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone in 1966 and in 1967 by Bishnu S. Atal , Manfred R. Schroeder and John Burg. Itakura and Saito described a statistical approach based on maximum likelihood estimation ; Atal and Schroeder described an adaptive linear predictor approach; Burg outlined an approach based on principle of maximum entropy . [ 4 ] [ 10 ] [ 11 ] [ 12 ] In 1969, Itakura and Saito introduced method based on partial correlation (PARCOR), Glen Culler proposed real-time speech encoding, and Bishnu S. Atal presented an LPC speech coder at the Annual Meeting of the Acoustical Society of America . In 1971, realtime LPC using 16-bit LPC hardware was demonstrated by Philco-Ford ; four units were sold. 
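To make the analysis step concrete, here is a minimal sketch of the autocorrelation method with the Levinson-Durbin recursion for estimating predictor coefficients of one frame. The synthetic two-pole "vocal tract", the frame length and all names are illustrative assumptions; real LPC coders apply windowing, higher orders and quantization not shown here.

```python
# Minimal sketch: autocorrelation-method LPC via the Levinson-Durbin recursion.
import numpy as np

def lpc(frame, order):
    """Return coefficients c[1..order] with s[n] ≈ c[1]*s[n-1] + ... + c[order]*s[n-order]."""
    n = len(frame)
    r = np.array([np.dot(frame[:n - k], frame[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= 1.0 - k * k                  # prediction error energy shrinks
    return -a[1:]

# Toy usage: white-noise "buzz" driven through an assumed two-pole resonator.
rng = np.random.default_rng(0)
excitation = rng.standard_normal(400)
signal = np.zeros(400)
for t in range(2, 400):
    signal[t] = 1.3 * signal[t - 1] - 0.6 * signal[t - 2] + excitation[t]

coeffs = lpc(signal, order=2)
print(coeffs)                               # approximately [1.3, -0.6]
residual = signal[2:] - (coeffs[0] * signal[1:-1] + coeffs[1] * signal[:-2])
# The residual is (approximately) the excitation: the "inverse filtering" step.
```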
[ 13 ] LPC technology was advanced by Bishnu Atal and Manfred Schroeder during the 1970s–1980s. [ 13 ] In 1978, Atal and Vishwanath et al. of BBN developed the first variable-rate LPC algorithm. [ 13 ] The same year, Atal and Manfred R. Schroeder at Bell Labs proposed an LPC speech codec called adaptive predictive coding , which used a psychoacoustic coding algorithm exploiting the masking properties of the human ear. [ 14 ] [ 15 ] This later became the basis for the perceptual coding technique used by the MP3 audio compression format, introduced in 1993. [ 14 ] Code-excited linear prediction (CELP) was developed by Schroeder and Atal in 1985. [ 16 ] LPC is the basis for voice-over-IP (VoIP) technology. [ 13 ] In 1972, Bob Kahn of ARPA with Jim Forgie of Lincoln Laboratory (LL) and Dave Walden of BBN Technologies started the first developments in packetized speech, which would eventually lead to voice-over-IP technology. In 1973, according to Lincoln Laboratory informal history, the first real-time 2400 bit / s LPC was implemented by Ed Hofstetter. In 1974, the first real-time two-way LPC packet speech communication was accomplished over the ARPANET at 3500 bit/s between Culler-Harrison and Lincoln Laboratory. LPC is frequently used for transmitting spectral envelope information, and as such it has to be tolerant of transmission errors. Transmission of the filter coefficients directly (see linear prediction for a definition of coefficients) is undesirable, since they are very sensitive to errors. In other words, a very small error can distort the whole spectrum, or worse, a small error might make the prediction filter unstable. There are more advanced representations such as log area ratios (LAR), line spectral pairs (LSP) decomposition and reflection coefficients . Of these, especially LSP decomposition has gained popularity since it ensures the stability of the predictor, and spectral errors are local for small coefficient deviations. LPC is the most widely used method in speech coding and speech synthesis . [ 17 ] It is generally used for speech analysis and resynthesis. It is used as a form of voice compression by phone companies, such as in the GSM standard, for example. It is also used for secure wireless, where voice must be digitized , encrypted and sent over a narrow voice channel; an early example of this is the US government's Navajo I . LPC synthesis can be used to construct vocoders where musical instruments are used as an excitation signal to the time-varying filter estimated from a singer's speech. This is somewhat popular in electronic music . Paul Lansky made the well-known computer music piece notjustmoreidlechatter using linear predictive coding. [ 18 ] A 10th-order LPC was used in the popular 1980s Speak & Spell educational toy. LPC predictors are used in Shorten , MPEG-4 ALS , FLAC , SILK audio codec , and other lossless audio codecs. LPC has received some attention as a tool for use in the tonal analysis of violins and other stringed musical instruments. [ 19 ]
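As a small illustration of one of the more robust representations mentioned above, the snippet below converts reflection coefficients to log area ratios using one common definition (used, for example, in GSM full-rate coding); the sample coefficient values are assumptions for illustration only.

```python
# Minimal sketch (one common definition): log area ratios map reflection
# coefficients k in (-1, 1) to an unbounded, quantization-friendly scale.
import math

def log_area_ratio(k):
    return math.log((1.0 + k) / (1.0 - k))

reflection_coefficients = [0.95, -0.4, 0.1]          # assumed example values
print([round(log_area_ratio(k), 3) for k in reflection_coefficients])
```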
https://en.wikipedia.org/wiki/Linear_predictive_coding
In mathematics (including combinatorics , linear algebra , and dynamical systems ), a linear recurrence with constant coefficients [ 1 ] : ch. 17 [ 2 ] : ch. 10 (also known as a linear recurrence relation or linear difference equation ) sets equal to 0 a polynomial that is linear in the various iterates of a variable —that is, in the values of the elements of a sequence . The polynomial's linearity means that each of its terms has degree 0 or 1. A linear recurrence denotes the evolution of some variable over time, with the current time period or discrete moment in time denoted as t , one period earlier denoted as t − 1 , one period later as t + 1 , etc. The solution of such an equation is a function of t , and not of any iterate values, giving the value of the iterate at any time. To find the solution it is necessary to know the specific values (known as initial conditions ) of n of the iterates, and normally these are the n iterates that are oldest. The equation or its variable is said to be stable if from any set of initial conditions the variable's limit as time goes to infinity exists; this limit is called the steady state . Difference equations are used in a variety of contexts, such as in economics to model the evolution through time of variables such as gross domestic product , the inflation rate , the exchange rate , etc. They are used in modeling such time series because values of these variables are only measured at discrete intervals. In econometric applications, linear difference equations are modeled with stochastic terms in the form of autoregressive (AR) models and in models such as vector autoregression (VAR) and autoregressive moving average (ARMA) models that combine AR with other features. A linear recurrence with constant coefficients is an equation of the following form, written in terms of parameters a 1 , ..., a n and b : y t = a 1 y t − 1 + ⋯ + a n y t − n + b , {\displaystyle y_{t}=a_{1}y_{t-1}+\cdots +a_{n}y_{t-n}+b,} or equivalently as y t + n = a 1 y t + n − 1 + ⋯ + a n y t + b . {\displaystyle y_{t+n}=a_{1}y_{t+n-1}+\cdots +a_{n}y_{t}+b.} The positive integer n {\displaystyle n} is called the order of the recurrence and denotes the longest time lag between iterates. The equation is called homogeneous if b = 0 and nonhomogeneous if b ≠ 0 . If the equation is homogeneous, the coefficients determine the characteristic polynomial (also "auxiliary polynomial" or "companion polynomial") p ( λ ) = λ n − a 1 λ n − 1 − a 2 λ n − 2 − ⋯ − a n {\displaystyle p(\lambda )=\lambda ^{n}-a_{1}\lambda ^{n-1}-a_{2}\lambda ^{n-2}-\cdots -a_{n}} whose roots play a crucial role in finding and understanding the sequences satisfying the recurrence. If b ≠ 0 , the equation y t = a 1 y t − 1 + ⋯ + a n y t − n + b {\displaystyle y_{t}=a_{1}y_{t-1}+\cdots +a_{n}y_{t-n}+b} is said to be nonhomogeneous . To solve this equation it is convenient to convert it to homogeneous form, with no constant term. This is done by first finding the equation's steady state value —a value y * such that, if n successive iterates all had this value, so would all future values. This value is found by setting all values of y equal to y * in the difference equation, and solving, thus obtaining y ∗ = b 1 − a 1 − ⋯ − a n {\displaystyle y^{*}={\frac {b}{1-a_{1}-\cdots -a_{n}}}} assuming the denominator is not 0. If it is zero, the steady state does not exist. 
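A short numerical check of the steady-state formula, under assumed coefficient values (the stable second-order case is chosen so the iterates actually converge):

```python
# Minimal sketch: the steady state y* = b / (1 - a_1 - ... - a_n) of a linear
# recurrence, checked by iterating the recurrence itself (values are illustrative).
a = [0.5, 0.3]        # coefficients a_1, a_2 (both characteristic roots lie inside the unit circle)
b = 4.0               # constant term

y_star = b / (1 - sum(a))          # = 4 / 0.2 = 20
y = [0.0, 0.0]                     # arbitrary initial conditions y_0, y_1
for _ in range(200):
    y.append(a[0] * y[-1] + a[1] * y[-2] + b)
print(y_star, y[-1])               # both close to 20: the iterates converge to y*
```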
Given the steady state, the difference equation can be rewritten in terms of deviations of the iterates from the steady state, as ( y t − y ∗ ) = a 1 ( y t − 1 − y ∗ ) + ⋯ + a n ( y t − n − y ∗ ) {\displaystyle \left(y_{t}-y^{*}\right)=a_{1}\left(y_{t-1}-y^{*}\right)+\cdots +a_{n}\left(y_{t-n}-y^{*}\right)} which has no constant term, and which can be written more succinctly as x t = a 1 x t − 1 + ⋯ + a n x t − n {\displaystyle x_{t}=a_{1}x_{t-1}+\cdots +a_{n}x_{t-n}} where x equals y − y * . This is the homogeneous form. If there is no steady state, the difference equation y t = a 1 y t − 1 + ⋯ + a n y t − n + b {\displaystyle y_{t}=a_{1}y_{t-1}+\cdots +a_{n}y_{t-n}+b} can be combined with its equivalent form y t − 1 = a 1 y t − 2 + ⋯ + a n y t − ( n + 1 ) + b {\displaystyle y_{t-1}=a_{1}y_{t-2}+\cdots +a_{n}y_{t-(n+1)}+b} to obtain (by solving both for b ) y t − a 1 y t − 1 − ⋯ − a n y t − n = y t − 1 − a 1 y t − 2 − ⋯ − a n y t − ( n + 1 ) {\displaystyle y_{t}-a_{1}y_{t-1}-\cdots -a_{n}y_{t-n}=y_{t-1}-a_{1}y_{t-2}-\cdots -a_{n}y_{t-(n+1)}} in which like terms can be combined to give a homogeneous equation of one order higher than the original. The roots of the characteristic polynomial play a crucial role in finding and understanding the sequences satisfying the recurrence. If there are d {\displaystyle d} distinct roots r 1 , r 2 , … , r d , {\displaystyle r_{1},r_{2},\ldots ,r_{d},} then each solution to the recurrence takes the form a n = k 1 r 1 n + k 2 r 2 n + ⋯ + k d r d n , {\displaystyle a_{n}=k_{1}r_{1}^{n}+k_{2}r_{2}^{n}+\cdots +k_{d}r_{d}^{n},} where the coefficients k i {\displaystyle k_{i}} are determined in order to fit the initial conditions of the recurrence. When the same roots occur multiple times, the terms in this formula corresponding to the second and later occurrences of the same root are multiplied by increasing powers of n {\displaystyle n} . For instance, if the characteristic polynomial can be factored as ( x − r ) 3 {\displaystyle (x-r)^{3}} , with the same root r {\displaystyle r} occurring three times, then the solution would take the form a n = k 1 r n + k 2 n r n + k 3 n 2 r n . {\displaystyle a_{n}=k_{1}r^{n}+k_{2}nr^{n}+k_{3}n^{2}r^{n}.} [ 3 ] For order 1, the recurrence a n = r a n − 1 {\displaystyle a_{n}=ra_{n-1}} has the solution a n = r n {\displaystyle a_{n}=r^{n}} with a 0 = 1 {\displaystyle a_{0}=1} and the most general solution is a n = k r n {\displaystyle a_{n}=kr^{n}} with a 0 = k {\displaystyle a_{0}=k} . The characteristic polynomial equated to zero (the characteristic equation ) is simply t − r = 0 {\displaystyle t-r=0} . Solutions to such recurrence relations of higher order are found by systematic means, often using the fact that a n = r n {\displaystyle a_{n}=r^{n}} is a solution for the recurrence exactly when t = r {\displaystyle t=r} is a root of the characteristic polynomial. This can be approached directly or using generating functions ( formal power series ) or matrices. Consider, for example, a recurrence relation of the form a n = A a n − 1 + B a n − 2 . {\displaystyle a_{n}=Aa_{n-1}+Ba_{n-2}.} When does it have a solution of the same general form as a n = r n {\displaystyle a_{n}=r^{n}} ? Substituting this guess ( ansatz ) in the recurrence relation, we find that r n = A r n − 1 + B r n − 2 {\displaystyle r^{n}=Ar^{n-1}+Br^{n-2}} must be true for all n > 1 {\displaystyle n>1} . 
Dividing through by r n − 2 {\displaystyle r^{n-2}} , we get that all these equations reduce to the same thing: r 2 = A r + B , r 2 − A r − B = 0 , {\displaystyle {\begin{aligned}r^{2}&=Ar+B,\\r^{2}-Ar-B&=0,\end{aligned}}} which is the characteristic equation of the recurrence relation. Solve for r {\displaystyle r} to obtain the two roots λ 1 {\displaystyle \lambda _{1}} , λ 2 {\displaystyle \lambda _{2}} : these roots are known as the characteristic roots or eigenvalues of the characteristic equation. Different solutions are obtained depending on the nature of the roots: If these roots are distinct, we have the general solution a n = C λ 1 n + D λ 2 n {\displaystyle a_{n}=C\lambda _{1}^{n}+D\lambda _{2}^{n}} while if they are identical (when A 2 + 4 B = 0 {\displaystyle A^{2}+4B=0} ), we have a n = C λ n + D n λ n {\displaystyle a_{n}=C\lambda ^{n}+Dn\lambda ^{n}} This is the most general solution; the two constants C {\displaystyle C} and D {\displaystyle D} can be chosen based on two given initial conditions a 0 {\displaystyle a_{0}} and a 1 {\displaystyle a_{1}} to produce a specific solution. In the case of complex eigenvalues (which also gives rise to complex values for the solution parameters C {\displaystyle C} and D {\displaystyle D} ), the use of complex numbers can be eliminated by rewriting the solution in trigonometric form. In this case we can write the eigenvalues as λ 1 , λ 2 = α ± β i . {\displaystyle \lambda _{1},\lambda _{2}=\alpha \pm \beta i.} Then it can be shown that a n = C λ 1 n + D λ 2 n {\displaystyle a_{n}=C\lambda _{1}^{n}+D\lambda _{2}^{n}} can be rewritten as [ 4 ] : 576–585 a n = 2 M n ( E cos ⁡ ( θ n ) + F sin ⁡ ( θ n ) ) = 2 G M n cos ⁡ ( θ n − δ ) , {\displaystyle a_{n}=2M^{n}\left(E\cos(\theta n)+F\sin(\theta n)\right)=2GM^{n}\cos(\theta n-\delta ),} where M = α 2 + β 2 cos ⁡ ( θ ) = α M sin ⁡ ( θ ) = β M C , D = E ∓ F i G = E 2 + F 2 cos ⁡ ( δ ) = E G sin ⁡ ( δ ) = F G {\displaystyle {\begin{array}{lcl}M={\sqrt {\alpha ^{2}+\beta ^{2}}}&\cos(\theta )={\tfrac {\alpha }{M}}&\sin(\theta )={\tfrac {\beta }{M}}\\C,D=E\mp Fi&&\\G={\sqrt {E^{2}+F^{2}}}&\cos(\delta )={\tfrac {E}{G}}&\sin(\delta )={\tfrac {F}{G}}\end{array}}} Here E {\displaystyle E} and F {\displaystyle F} (or equivalently, G {\displaystyle G} and δ {\displaystyle \delta } ) are real constants which depend on the initial conditions. Using λ 1 + λ 2 = 2 α = A , {\displaystyle \lambda _{1}+\lambda _{2}=2\alpha =A,} λ 1 ⋅ λ 2 = α 2 + β 2 = − B , {\displaystyle \lambda _{1}\cdot \lambda _{2}=\alpha ^{2}+\beta ^{2}=-B,} one may simplify the solution given above as a n = ( − B ) n 2 ( E cos ⁡ ( θ n ) + F sin ⁡ ( θ n ) ) , {\displaystyle a_{n}=(-B)^{\frac {n}{2}}\left(E\cos(\theta n)+F\sin(\theta n)\right),} where a 1 {\displaystyle a_{1}} and a 2 {\displaystyle a_{2}} are the initial conditions and E = − A a 1 + a 2 B F = − i A 2 a 1 − A a 2 + 2 a 1 B B A 2 + 4 B θ = arccos ⁡ ( A 2 − B ) {\displaystyle {\begin{aligned}E&={\frac {-Aa_{1}+a_{2}}{B}}\\F&=-i{\frac {A^{2}a_{1}-Aa_{2}+2a_{1}B}{B{\sqrt {A^{2}+4B}}}}\\\theta &=\arccos \left({\frac {A}{2{\sqrt {-B}}}}\right)\end{aligned}}} In this way there is no need to solve for λ 1 {\displaystyle \lambda _{1}} and λ 2 {\displaystyle \lambda _{2}} . In all cases—real distinct eigenvalues, real duplicated eigenvalues, and complex conjugate eigenvalues—the equation is stable (that is, the variable a {\displaystyle a} converges to a fixed value [specifically, zero]) if and only if both eigenvalues are smaller than one in absolute value . 
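The second-order construction just described can be sketched directly in code: find the characteristic roots, fit the two constants to the initial conditions, and compare with direct iteration. Complex conjugate roots are handled by complex arithmetic (the imaginary parts cancel), and the example values of A, B and the initial conditions are assumptions; the sketch assumes distinct roots, so the repeated-root case with its extra n·λ^n term is not covered.

```python
# Minimal sketch: closed-form solution of a_n = A a_{n-1} + B a_{n-2} from its
# characteristic roots and two initial conditions (distinct roots assumed).
import cmath

def solve_second_order(A, B, a0, a1, n_terms):
    disc = cmath.sqrt(A * A + 4 * B)
    l1, l2 = (A + disc) / 2, (A - disc) / 2        # roots of r^2 - A r - B = 0
    D = (a1 - l1 * a0) / (l2 - l1)                 # fit C + D = a0 and C l1 + D l2 = a1
    C = a0 - D
    return [(C * l1**n + D * l2**n).real for n in range(n_terms)]

A, B, a0, a1 = 1.0, -0.5, 1.0, 0.0                 # A^2 + 4B < 0: complex conjugate roots
closed_form = solve_second_order(A, B, a0, a1, 10)
direct = [a0, a1]
for _ in range(8):
    direct.append(A * direct[-1] + B * direct[-2])
print(max(abs(x - y) for x, y in zip(closed_form, direct)))   # ~0 (rounding error only)
```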
In this second-order case, this condition on the eigenvalues can be shown [ 5 ] to be equivalent to | A | < 1 − B < 2 {\displaystyle |A|<1-B<2} , which is equivalent to | B | < 1 {\displaystyle |B|<1} and | A | < 1 − B {\displaystyle |A|<1-B} . Solving the homogeneous equation x t = a 1 x t − 1 + ⋯ + a n x t − n {\displaystyle x_{t}=a_{1}x_{t-1}+\cdots +a_{n}x_{t-n}} involves first solving its characteristic polynomial λ n = a 1 λ n − 1 + ⋯ + a n − 2 λ 2 + a n − 1 λ + a n {\displaystyle \lambda ^{n}=a_{1}\lambda ^{n-1}+\cdots +a_{n-2}\lambda ^{2}+a_{n-1}\lambda +a_{n}} for its characteristic roots λ 1 , ..., λ n . These roots can be solved for algebraically if n ≤ 4 , but not necessarily otherwise . If the solution is to be used numerically, all the roots of this characteristic equation can be found by numerical methods . However, for use in a theoretical context it may be that the only information required about the roots is whether any of them are greater than or equal to 1 in absolute value . It may be that all the roots are real or instead there may be some that are complex numbers . In the latter case, all the complex roots come in complex conjugate pairs. If all the characteristic roots are distinct, the solution of the homogeneous linear recurrence x t = a 1 x t − 1 + ⋯ + a n x t − n {\displaystyle x_{t}=a_{1}x_{t-1}+\cdots +a_{n}x_{t-n}} can be written in terms of the characteristic roots as x t = c 1 λ 1 t + ⋯ + c n λ n t {\displaystyle x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{n}\lambda _{n}^{t}} where the coefficients c i can be found by invoking the initial conditions. Specifically, for each time period for which an iterate value is known, this value and its corresponding value of t can be substituted into the solution equation to obtain a linear equation in the n as-yet-unknown parameters; n such equations, one for each initial condition, can be solved simultaneously for the n parameter values. If all characteristic roots are real, then all the coefficient values c i will also be real; but with non-real complex roots, in general some of these coefficients will also be non-real. If there are complex roots, they come in conjugate pairs and so do the complex terms in the solution equation. If two of these complex terms are c j λ t j and c j +1 λ t j +1 , the roots λ j can be written as λ j , λ j + 1 = α ± β i = M ( α M ± β M i ) {\displaystyle \lambda _{j},\lambda _{j+1}=\alpha \pm \beta i=M\left({\frac {\alpha }{M}}\pm {\frac {\beta }{M}}i\right)} where i is the imaginary unit and M is the modulus of the roots: M = α 2 + β 2 . 
{\displaystyle M={\sqrt {\alpha ^{2}+\beta ^{2}}}.} Then the two complex terms in the solution equation can be written as c j λ j t + c j + 1 λ j + 1 t = M t ( c j ( α M + β M i ) t + c j + 1 ( α M − β M i ) t ) = M t ( c j ( cos ⁡ θ + i sin ⁡ θ ) t + c j + 1 ( cos ⁡ θ − i sin ⁡ θ ) t ) = M t ( c j ( cos ⁡ θ t + i sin ⁡ θ t ) + c j + 1 ( cos ⁡ θ t − i sin ⁡ θ t ) ) {\displaystyle {\begin{aligned}c_{j}\lambda _{j}^{t}+c_{j+1}\lambda _{j+1}^{t}&=M^{t}\left(c_{j}\left({\frac {\alpha }{M}}+{\frac {\beta }{M}}i\right)^{t}+c_{j+1}\left({\frac {\alpha }{M}}-{\frac {\beta }{M}}i\right)^{t}\right)\\[6pt]&=M^{t}\left(c_{j}\left(\cos \theta +i\sin \theta \right)^{t}+c_{j+1}\left(\cos \theta -i\sin \theta \right)^{t}\right)\\[6pt]&=M^{t}{\bigl (}c_{j}\left(\cos \theta t+i\sin \theta t\right)+c_{j+1}\left(\cos \theta t-i\sin \theta t\right){\bigr )}\end{aligned}}} where θ is the angle whose cosine is ⁠ α / M ⁠ and whose sine is ⁠ β / M ⁠ ; the last equality here made use of de Moivre's formula . Now the process of finding the coefficients c j and c j +1 guarantees that they are also complex conjugates, which can be written as γ ± δi . Using this in the last equation gives this expression for the two complex terms in the solution equation: 2 M t ( γ cos ⁡ θ t − δ sin ⁡ θ t ) {\displaystyle 2M^{t}\left(\gamma \cos \theta t-\delta \sin \theta t\right)} which can also be written as 2 γ 2 + δ 2 M t cos ⁡ ( θ t + ψ ) {\displaystyle 2{\sqrt {\gamma ^{2}+\delta ^{2}}}M^{t}\cos(\theta t+\psi )} where ψ is the angle whose cosine is ⁠ γ / √ γ 2 + δ 2 ⁠ and whose sine is ⁠ δ / √ γ 2 + δ 2 ⁠ . Depending on the initial conditions, even with all roots real the iterates can experience a transitory tendency to go above and below the steady state value. But true cyclicity involves a permanent tendency to fluctuate, and this occurs if there is at least one pair of complex conjugate characteristic roots. This can be seen in the trigonometric form of their contribution to the solution equation, involving cos θt and sin θt . In the second-order case, if the two roots are identical ( λ 1 = λ 2 ), they can both be denoted as λ and a solution may be of the form x t = c 1 λ t + c 2 t λ t . {\displaystyle x_{t}=c_{1}\lambda ^{t}+c_{2}t\lambda ^{t}.} An alternative solution method involves converting the n th order difference equation to a first-order matrix difference equation . This is accomplished by writing w 1, t = y t , w 2, t = y t −1 = w 1, t −1 , w 3, t = y t −2 = w 2, t −1 , and so on. Then the original single n th-order equation y t = a 1 y t − 1 + a 2 y t − 2 + ⋯ + a n y t − n + b {\displaystyle y_{t}=a_{1}y_{t-1}+a_{2}y_{t-2}+\cdots +a_{n}y_{t-n}+b} can be replaced by the following n first-order equations: w 1 , t = a 1 w 1 , t − 1 + a 2 w 2 , t − 1 + ⋯ + a n w n , t − 1 + b w 2 , t = w 1 , t − 1 ⋮ w n , t = w n − 1 , t − 1 . {\displaystyle {\begin{aligned}w_{1,t}&=a_{1}w_{1,t-1}+a_{2}w_{2,t-1}+\cdots +a_{n}w_{n,t-1}+b\\w_{2,t}&=w_{1,t-1}\\&\,\,\,\vdots \\w_{n,t}&=w_{n-1,t-1}.\end{aligned}}} Defining the vector w i as w i = [ w 1 , i w 2 , i ⋮ w n , i ] {\displaystyle \mathbf {w} _{i}={\begin{bmatrix}w_{1,i}\\w_{2,i}\\\vdots \\w_{n,i}\end{bmatrix}}} this can be put in matrix form as w t = A w t − 1 + b {\displaystyle \mathbf {w} _{t}=\mathbf {A} \mathbf {w} _{t-1}+\mathbf {b} } Here A is an n × n matrix in which the first row contains a 1 , ..., a n and all other rows have a single 1 with all other elements being 0, and b is a column vector with first element b and with the rest of its elements being 0. 
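The matrix formulation just described can be sketched as follows; the coefficients, constant term and initial values are illustrative assumptions, and the state vector stores the current and previous iterates exactly as in the definitions above.

```python
# Minimal sketch: rewriting y_t = a_1 y_{t-1} + ... + a_n y_{t-n} + b as the
# first-order matrix recurrence w_t = A w_{t-1} + b_vec and iterating it.
import numpy as np

a = [0.5, 0.3]                     # a_1, a_2  (so n = 2)
b = 4.0
n = len(a)

A = np.zeros((n, n))
A[0, :] = a                        # first row holds a_1, ..., a_n
A[1:, :-1] = np.eye(n - 1)         # ones below the diagonal shift the state down
b_vec = np.zeros(n)
b_vec[0] = b

w = np.array([1.0, 0.0])           # w_1,t = y_t, w_2,t = y_{t-1}; here y_1 = 1, y_0 = 0
history = [w[1], w[0]]             # y_0, y_1
for _ in range(5):
    w = A @ w + b_vec
    history.append(w[0])
print(history)                     # matches iterating the scalar recurrence directly
```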
This matrix equation can be solved using the methods in the article Matrix difference equation . In the homogeneous case y i is a para-permanent of a lower triangular matrix [ 6 ] The recurrence y t = a 1 y t − 1 + ⋯ + a n y t − n + b , {\displaystyle y_{t}=a_{1}y_{t-1}+\cdots +a_{n}y_{t-n}+b,} can be solved using the theory of generating functions . First, we write Y ( x ) = ∑ t ≥ 0 y t x t {\textstyle Y(x)=\sum _{t\geq 0}y_{t}x^{t}} . The recurrence is then equivalent to the following generating function equation: Y ( x ) = a 1 x Y ( x ) + a 2 x 2 Y ( x ) + ⋯ + a n x n Y ( x ) + b 1 − x + p ( x ) {\displaystyle Y(x)=a_{1}xY(x)+a_{2}x^{2}Y(x)+\cdots +a_{n}x^{n}Y(x)+{\frac {b}{1-x}}+p(x)} where p ( x ) {\displaystyle p(x)} is a polynomial of degree at most n − 1 {\displaystyle n-1} correcting the initial terms. From this equation we can solve to get Y ( x ) = ( b 1 − x + p ( x ) ) ⋅ 1 1 − a 1 x − a 2 x 2 − ⋯ − a n x n . {\displaystyle Y(x)=\left({\frac {b}{1-x}}+p(x)\right)\cdot {\frac {1}{1-a_{1}x-a_{2}x^{2}-\cdots -a_{n}x^{n}}}.} In other words, not worrying about the exact coefficients, Y ( x ) {\displaystyle Y(x)} can be expressed as a rational function Y ( x ) = f ( x ) g ( x ) . {\displaystyle Y(x)={\frac {f(x)}{g(x)}}.} The closed form can then be derived via partial fraction decomposition . Specifically, if the generating function is written as f ( x ) g ( x ) = ∑ i f i ( x ) ( x − r i ) m i {\displaystyle {\frac {f(x)}{g(x)}}=\sum _{i}{\frac {f_{i}(x)}{(x-r_{i})^{m_{i}}}}} then the polynomial p ( x ) {\displaystyle p(x)} determines the initial set of corrections z ( n ) {\displaystyle z(n)} , the denominator ( x − r i ) m {\displaystyle (x-r_{i})^{m}} determines the exponential term r i n {\displaystyle r_{i}^{n}} , and the degree m {\displaystyle m} together with the numerator f i ( x ) {\displaystyle f_{i}(x)} determine the polynomial coefficient k i ( n ) {\displaystyle k_{i}(n)} . The method for solving linear differential equations is similar to the method above—the "intelligent guess" ( ansatz ) for linear differential equations with constant coefficients is e λ x {\displaystyle e^{\lambda x}} where λ {\displaystyle \lambda } is a complex number that is determined by substituting the guess into the differential equation. This is not a coincidence. Considering the Taylor series of the solution to a linear differential equation: ∑ n = 0 ∞ f ( n ) ( a ) n ! ( x − a ) n {\displaystyle \sum _{n=0}^{\infty }{\frac {f^{(n)}(a)}{n!}}(x-a)^{n}} it can be seen that the coefficients of the series are given by the n {\displaystyle n} -th derivative of f ( x ) {\displaystyle f(x)} evaluated at the point a {\displaystyle a} . The differential equation provides a linear difference equation relating these coefficients. This equivalence can be used to quickly solve for the recurrence relationship for the coefficients in the power series solution of a linear differential equation. The rule of thumb (for equations in which the polynomial multiplying the first term is non-zero at zero) is that: y [ k ] → f [ n + k ] {\displaystyle y^{[k]}\to f[n+k]} and more generally x m ∗ y [ k ] → n ( n − 1 ) . . . 
( n − m + 1 ) f [ n + k − m ] {\displaystyle x^{m}*y^{[k]}\to n(n-1)...(n-m+1)f[n+k-m]} Example: The recurrence relationship for the Taylor series coefficients of the equation ( x 2 + 3 x − 4 ) y [ 3 ] − ( 3 x + 1 ) y [ 2 ] + 2 y = 0 {\displaystyle (x^{2}+3x-4)y^{[3]}-(3x+1)y^{[2]}+2y=0} is given by n ( n − 1 ) f [ n + 1 ] + 3 n f [ n + 2 ] − 4 f [ n + 3 ] − 3 n f [ n + 1 ] − f [ n + 2 ] + 2 f [ n ] = 0 {\displaystyle n(n-1)f[n+1]+3nf[n+2]-4f[n+3]-3nf[n+1]-f[n+2]+2f[n]=0} or, collecting terms, − 4 f [ n + 3 ] + ( 3 n − 1 ) f [ n + 2 ] + n ( n − 4 ) f [ n + 1 ] + 2 f [ n ] = 0. {\displaystyle -4f[n+3]+(3n-1)f[n+2]+n(n-4)f[n+1]+2f[n]=0.} This example shows how problems that are ordinarily solved by the power series solution method taught in introductory differential equations classes can be solved in a much easier way. Example: The differential equation a y ″ + b y ′ + c y = 0 {\displaystyle ay''+by'+cy=0} has solutions of the form y = e λ x {\displaystyle y=e^{\lambda x}} , where λ is a root of the characteristic equation a λ 2 + b λ + c = 0 {\displaystyle a\lambda ^{2}+b\lambda +c=0} . The conversion of the differential equation to a difference equation for the Taylor coefficients is a f [ n + 2 ] + b f [ n + 1 ] + c f [ n ] = 0. {\displaystyle af[n+2]+bf[n+1]+cf[n]=0.} It is easy to see that the n {\displaystyle n} -th derivative of e λ x {\displaystyle e^{\lambda x}} evaluated at 0 {\displaystyle 0} is λ n {\displaystyle \lambda ^{n}} , which indeed satisfies this difference equation. Certain difference equations, in particular linear constant coefficient difference equations, can be solved using z-transforms . The z -transforms are a class of integral transforms that lead to more convenient algebraic manipulations and more straightforward solutions. There are cases in which obtaining a direct solution would be all but impossible, yet solving the problem via a thoughtfully chosen integral transform is straightforward. In the solution equation x t = c 1 λ 1 t + ⋯ + c n λ n t , {\displaystyle x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{n}\lambda _{n}^{t},} a term with a real characteristic root converges to 0 as t grows indefinitely large if the absolute value of the characteristic root is less than 1. If the absolute value equals 1, the term will stay constant as t grows if the root is +1 but will fluctuate between two values if the root is −1. If the absolute value of the root is greater than 1 the term will become larger and larger over time. A pair of terms with complex conjugate characteristic roots will converge to 0 with damped fluctuations if the modulus M of the roots is less than 1; if the modulus equals 1 then constant amplitude fluctuations in the combined terms will persist; and if the modulus is greater than 1, the combined terms will show fluctuations of ever-increasing magnitude. Thus the evolving variable x will converge to 0 if all of the characteristic roots have magnitude less than 1. If the largest root has absolute value 1, neither convergence to 0 nor divergence to infinity will occur. If all roots with magnitude 1 are real and positive, x will converge to the sum of their constant terms c i ; unlike in the stable case, this converged value depends on the initial conditions, and different starting points lead to different limits in the long run. If any root is −1, its term will contribute permanent fluctuations between two values. If any of the unit-magnitude roots are complex then constant-amplitude fluctuations of x will persist. Finally, if any characteristic root has magnitude greater than 1, then x will diverge to infinity as time goes to infinity, or will fluctuate between increasingly large positive and negative values.
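These magnitude conditions can be checked numerically by computing the characteristic roots directly; a minimal sketch with a hypothetical second-order recurrence (coefficients chosen only for illustration):

```python
import numpy as np

# Hypothetical recurrence y_t = 0.9*y_{t-1} - 0.2*y_{t-2}:
# its characteristic polynomial is lambda**2 - 0.9*lambda + 0.2.
a = [0.9, -0.2]
roots = np.roots([1.0] + [-c for c in a])     # lambda**n - a1*lambda**(n-1) - ... - an
print(roots, np.abs(roots))                   # here the roots are 0.5 and 0.4
print("stable (all |root| < 1):", np.all(np.abs(roots) < 1))
```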
A theorem of Issai Schur states that all roots have magnitude less than 1 (the stable case) if and only if each of a particular sequence of determinants is positive. [ 2 ] : 247 If a non-homogeneous linear difference equation has been converted to homogeneous form and analyzed as above, then the stability and cyclicality properties of the original non-homogeneous equation will be the same as those of the derived homogeneous form, with convergence in the stable case being to the steady-state value y * instead of to 0.
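Returning to the generating-function method described earlier, here is a minimal sympy sketch for a hypothetical second-order recurrence y_t = 3y_{t-1} − 2y_{t-2} with y_0 = 0 and y_1 = 1 (so b = 0 and the correction polynomial is p(x) = x); its partial fraction decomposition exhibits the closed form y_t = 2^t − 1:

```python
import sympy as sp

x = sp.symbols('x')
a1, a2, b = 3, -2, 0            # hypothetical recurrence y_t = 3*y_{t-1} - 2*y_{t-2} + b
p = x                           # correction polynomial matching y_0 = 0, y_1 = 1
Y = (sp.Integer(b) / (1 - x) + p) / (1 - a1*x - a2*x**2)

print(sp.apart(Y, x))           # partial fractions correspond to the terms 2**t and -1
print(sp.series(Y, x, 0, 7))    # coefficients 0, 1, 3, 7, 15, 31, 63 agree with y_t = 2**t - 1
```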
https://en.wikipedia.org/wiki/Linear_recurrence_with_constant_coefficients
Linear referencing , also called linear reference system or linear referencing system ( LRS ), is a method of spatial referencing over linear or curvilinear elements, such as roads or rivers. In LRS, the locations of physical features are described parametrically in terms of a single curvilinear coordinate , typically the distance traveled from a fixed point, such as a milestone . It is an alternative to referencing by geographic coordinates , which would involve two coordinates, latitude and longitude. Point features (e.g. a signpost) are located by a single distance value while linear features (e.g. a no-passing zone) are delimited by two distance values, corresponding to beginning and end. If the subjacent linear referencing element or route is changed, only the linear coordinates of those locations on the changed segment need to be updated. Linear referencing is suitable for management of data related to linear features like roads , railways , oil and gas transmission pipelines , power and data transmission lines, and rivers . It is used in engineering , construction , and utilities management. A system for identifying the location of pipeline features and characteristics is by measuring distance from the start of the pipeline. An example linear reference address is: Engineering Station 1145 + 86 on pipeline Alpha = 114,586 feet from the start of the pipeline. With a reroute, cumulative stationing might not be the same as engineering stationing, because of the addition of the extra pipeline. Linear referencing systems compute the differences to resolve this dilemma. Linear referencing is one of a family of methods of expressing location. Coordinates such as latitude and longitude are another member of the family, as are landmark references such as "5 km south of Ayers Rock." Linear referencing has traditionally been the expression of choice in engineering applications such as road and pipeline maintenance. One can more realistically dispatch a worker to a bridge 12.7 km along a road from a reference point, rather than to a pair of coordinates or a landmark. The road serves as the reference frame, just as the earth serves as the reference frame for latitude and longitude. Linear referencing can be used to define points along a linear feature with just a small amount of information such as the name of a road and the distance and bearing from a landmark along the road. This information can be communicated concisely via plaintext. For example: "State route 4, 20 feet east of mile marker 187." Giving a latitude and longitude coordinate to a work crew is not meaningful unless the coordinate is plotted on a map. Often work crews work in remote areas without wireless connectivity which makes on-line digital maps not practical, and the relatively higher effort of providing offline maps or printed maps is not as economical as simply stating locations as offsets, or ranges of offsets, along a linear feature. Linear referencing systems can also be made to be both very precise and very accurate at a much lower cost than is needed to collect latitude and longitude coordinates with high accuracy, especially when the goal is sub-meter accuracy. This is highly dependent upon the width of the linear feature, its centerline, and the visibility of the landmarks and markers that are used to define linear reference offsets. Often, roads are created by engineers using CAD tools that have no geospatial reference at all, and LRS is the preferred method of defining data for linear features. 
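The stationing arithmetic in the pipeline example above amounts to treating the number before the "+" as whole hundreds of feet; a small illustrative sketch (the helper name is ours):

```python
def station_to_feet(station: str) -> int:
    """Convert engineering stationing such as '1145+86' to feet from the start of the line:
    the figure before '+' counts 100-foot stations, the figure after is the remainder."""
    stations, remainder = station.split("+")
    return int(stations) * 100 + int(remainder)

print(station_to_feet("1145+86"))   # 114586, matching the pipeline Alpha example above
```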
However, a major limitation of linear referencing is that specifying points that are not on a linear feature is troublesome and error-prone, though not entirely impossible. Consider for example a ski lodge located 100 meters to the right of the road, traveling north. The linear referencing system can be extended by specifying a lateral offset, but the absolute location (i.e. coordinates) of the lodge cannot be determined unless coordinates are specified for the road; that process is prone to error, particularly on curved roads. Another major drawback of linear referencing is that a modification in the alignment of a road (e.g. constructing a bypass around a town) changes the measurements that reference all downstream points. The system requires an extensive network of reference stations and constant maintenance. In an era of mobile maps and GPS, this maintenance overhead challenges the long-term viability of linear referencing systems. (But see below for the US Federal Highway Administration requirement that all state DOTs use LRS.) Nonetheless, travel along a road is a linear experience, and at the very least, linear referencing will continue to have a conversational role. Linear referencing systems are recognized by the US Federal government as a valuable tool for specifying right-of-way data, and are now required for the states. LRS usage is therefore not likely to decline any time soon. The US Federal Highway Administration is pushing states toward standardization of LRS data with the ARNOLD requirement: "On August 7, 2012, FHWA announced that the HPMS is expanding the requirement for State Departments of Transportation (DOTs) to submit their LRS to include all public roads. This requirement will be referred to as the All Road Network of Linear Referenced Data (ARNOLD)". [ 1 ] The ARNOLD requirement sets the stage for systems that utilize both LRS and coordinates. Both systems are useful in different contexts, and while latitude and longitude are becoming very popular due to the availability of practical and affordable devices for capturing and displaying global coordinate data, LRS has been widely adopted for planning, engineering, and maintenance. Linear referencing is supported by several geographic information system (GIS) software packages.
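For illustration only, here is a simplified sketch of how a linear reference (a measure along a route, plus an optional lateral offset such as the ski-lodge example above) might be converted to planar coordinates when the route geometry is available as a polyline; production LRS implementations additionally handle calibration points, measure gaps, and reroutes:

```python
import math

def locate(polyline, measure, offset=0.0):
    """Interpolate the point `measure` units along `polyline` (a list of (x, y) vertices),
    then shift it `offset` units to the right of the direction of travel."""
    travelled = 0.0
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if travelled + seg >= measure:
            t = (measure - travelled) / seg
            x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            ux, uy = (x1 - x0) / seg, (y1 - y0) / seg   # unit direction of travel
            return x + offset * uy, y - offset * ux     # right-hand perpendicular offset
        travelled += seg
    raise ValueError("measure exceeds the length of the route")

# e.g. a point 100 m to the right of a road, 250 m from its start (coordinates are made up):
road = [(0, 0), (200, 0), (200, 300)]
print(locate(road, 250, offset=100))
```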
https://en.wikipedia.org/wiki/Linear_referencing
In linear algebra , a linear relation , or simply relation , between elements of a vector space or a module is a linear equation that has these elements as a solution. More precisely, if e_1, …, e_n are elements of a (left) module M over a ring R (the case of a vector space over a field is a special case), a relation between e_1, …, e_n is a sequence (f_1, …, f_n) of elements of R such that f 1 e 1 + ⋯ + f n e n = 0. {\displaystyle f_{1}e_{1}+\cdots +f_{n}e_{n}=0.} The relations between e_1, …, e_n form a module. One is generally interested in the case where e_1, …, e_n is a generating set of a finitely generated module M , in which case the module of the relations is often called a syzygy module of M . The syzygy module depends on the choice of a generating set, but it is unique up to the direct sum with a free module. That is, if S_1 and S_2 are syzygy modules corresponding to two generating sets of the same module, then they are stably isomorphic , which means that there exist two free modules L_1 and L_2 such that S_1 ⊕ L_1 and S_2 ⊕ L_2 are isomorphic . Higher order syzygy modules are defined recursively: a first syzygy module of a module M is simply its syzygy module. For k > 1 , a k th syzygy module of M is a syzygy module of a ( k − 1) th syzygy module. Hilbert's syzygy theorem states that, if R = K[x_1, …, x_n] is a polynomial ring in n indeterminates over a field, then every n th syzygy module is free. The case n = 0 is the fact that every finite dimensional vector space has a basis, and the case n = 1 is the fact that K[x] is a principal ideal domain and that every submodule of a finitely generated free K[x] module is also free. The construction of higher order syzygy modules is generalized as the definition of free resolutions , which allows restating Hilbert's syzygy theorem as: a polynomial ring in n indeterminates over a field has global homological dimension n . If a and b are two elements of the commutative ring R , then ( b , − a ) is a relation that is said to be trivial . The module of trivial relations of an ideal is the submodule of the first syzygy module of the ideal that is generated by the trivial relations between the elements of a generating set of the ideal. The concept of trivial relations can be generalized to higher order syzygy modules, and this leads to the concept of the Koszul complex of an ideal, which provides information on the non-trivial relations between the generators of an ideal. Let R be a ring , and M be a left R - module . A linear relation , or simply a relation , between k elements x_1, …, x_k of M is a sequence (a_1, …, a_k) of elements of R such that a 1 x 1 + ⋯ + a k x k = 0. {\displaystyle a_{1}x_{1}+\cdots +a_{k}x_{k}=0.} If x_1, …, x_k is a generating set of M , the relation is often called a syzygy of M . It makes sense to call it a syzygy of M without regard to x_1, …, x_k because, although the syzygy module depends on the chosen generating set, most of its properties are independent; see § Stable properties , below.
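To make the definition and the notion of a trivial relation concrete, here are two small worked examples (the specific choices are ours, added for illustration):

```latex
% Example 1: a relation in the vector space R^2 over R.
% Take e_1 = (1,0), e_2 = (0,1), e_3 = (1,1).  Then (1, 1, -1) is a relation, since
\[
  1\cdot(1,0) \;+\; 1\cdot(0,1) \;-\; 1\cdot(1,1) \;=\; (0,0),
\]
% and every relation between these three vectors is a scalar multiple of (1, 1, -1).

% Example 2: a trivial (Koszul) relation.  In R = K[x,y], the generators g_1 = x, g_2 = y
% of the ideal I = (x, y) satisfy
\[
  y\cdot g_1 \;-\; x\cdot g_2 \;=\; yx - xy \;=\; 0,
\]
% so (y, -x) is a syzygy of I.  In this particular case every syzygy of I is an
% R-multiple of (y, -x), so the first syzygy module is already free.
```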
If the ring R is Noetherian , or, at least coherent , and if M is finitely generated , then the syzygy module is also finitely generated. A syzygy module of this syzygy module is a second syzygy module of M . Continuing this way one can define a k th syzygy module for every positive integer k . Hilbert's syzygy theorem asserts that, if M is a finitely generated module over a polynomial ring K [ x 1 , … , x n ] {\displaystyle K[x_{1},\dots ,x_{n}]} over a field , then any n th syzygy module is a free module . Generally speaking, in the language of K-theory , a property is stable if it becomes true by making a direct sum with a sufficiently large free module . A fundamental property of syzygies modules is that there are "stably independent" of choices of generating sets for involved modules. The following result is the basis of these stable properties. Proposition — Let { x 1 , … , x m } {\displaystyle \{x_{1},\dots ,x_{m}\}} be a generating set of an R -module M , and y 1 , … , y n {\displaystyle y_{1},\dots ,y_{n}} be other elements of M . The module of the relations between x 1 , … , x m , y 1 , … , y n {\displaystyle x_{1},\dots ,x_{m},y_{1},\dots ,y_{n}} is the direct sum of the module of the relations between x 1 , … , x m , {\displaystyle x_{1},\dots ,x_{m},} and a free module of rank n . Proof. As { x 1 , … , x m } {\displaystyle \{x_{1},\dots ,x_{m}\}} is a generating set, each y i {\displaystyle y_{i}} can be written y i = ∑ α i , j x j . {\displaystyle \textstyle y_{i}=\sum \alpha _{i,j}x_{j}.} This provides a relation r i {\displaystyle r_{i}} between x 1 , … , x m , y 1 , … , y n . {\displaystyle x_{1},\dots ,x_{m},y_{1},\dots ,y_{n}.} Now, if r = ( a 1 , … , a m , b 1 , … , b n ) {\displaystyle r=(a_{1},\dots ,a_{m},b_{1},\dots ,b_{n})} is any relation, then r − ∑ b i r i {\displaystyle \textstyle r-\sum b_{i}r_{i}} is a relation between the x 1 , … , x m {\displaystyle x_{1},\dots ,x_{m}} only. In other words, every relation between x 1 , … , x m , y 1 , … , y n {\displaystyle x_{1},\dots ,x_{m},y_{1},\dots ,y_{n}} is a sum of a relation between x 1 , … , x m , {\displaystyle x_{1},\dots ,x_{m},} and a linear combination of the r i {\displaystyle r_{i}} s. It is straightforward to prove that this decomposition is unique, and this proves the result. ◼ {\displaystyle \blacksquare } This proves that the first syzygy module is "stably unique". More precisely, given two generating sets S 1 {\displaystyle S_{1}} and S 2 {\displaystyle S_{2}} of a module M , if S 1 {\displaystyle S_{1}} and S 2 {\displaystyle S_{2}} are the corresponding modules of relations, then there exist two free modules L 1 {\displaystyle L_{1}} and L 2 {\displaystyle L_{2}} such that S 1 ⊕ L 1 {\displaystyle S_{1}\oplus L_{1}} and S 2 ⊕ L 2 {\displaystyle S_{2}\oplus L_{2}} are isomorphic. For proving this, it suffices to apply twice the preceding proposition for getting two decompositions of the module of the relations between the union of the two generating sets. For obtaining a similar result for higher syzygy modules, it remains to prove that, if M is any module, and L is a free module, then M and M ⊕ L have isomorphic syzygy modules. It suffices to consider a generating set of M ⊕ L that consists of a generating set of M and a basis of L . For every relation between the elements of this generating set, the coefficients of the basis elements of L are all zero, and the syzygies of M ⊕ L are exactly the syzygies of M extended with zero coefficients. This completes the proof to the following theorem. 
Theorem — For every positive integer k , the k th syzygy module of a given module depends on choices of generating sets, but is unique up to the direct sum with a free module. More precisely, if S 1 {\displaystyle S_{1}} and S 2 {\displaystyle S_{2}} are k th syzygy modules that are obtained by different choices of generating sets, then there are free modules L 1 {\displaystyle L_{1}} and L 2 {\displaystyle L_{2}} such that S 1 ⊕ L 1 {\displaystyle S_{1}\oplus L_{1}} and S 2 ⊕ L 2 {\displaystyle S_{2}\oplus L_{2}} are isomorphic. Given a generating set g 1 , … , g n {\displaystyle g_{1},\dots ,g_{n}} of an R -module, one can consider a free module of L of basis G 1 , … , G n , {\displaystyle G_{1},\dots ,G_{n},} where G 1 , … , G n {\displaystyle G_{1},\dots ,G_{n}} are new indeterminates. This defines an exact sequence where the left arrow is the linear map that maps each G i {\displaystyle G_{i}} to the corresponding g i . {\displaystyle g_{i}.} The kernel of this left arrow is a first syzygy module of M . One can repeat this construction with this kernel in place of M . Repeating again and again this construction, one gets a long exact sequence where all L i {\displaystyle L_{i}} are free modules. By definition, such a long exact sequence is a free resolution of M . For every k ≥ 1 , the kernel S k {\displaystyle S_{k}} of the arrow starting from L k − 1 {\displaystyle L_{k-1}} is a k th syzygy module of M . It follows that the study of free resolutions is the same as the study of syzygy modules. A free resolution is finite of length ≤ n if S n {\displaystyle S_{n}} is free. In this case, one can take L n = S n , {\displaystyle L_{n}=S_{n},} and L k = 0 {\displaystyle L_{k}=0} (the zero module ) for every k > n . This allows restating Hilbert's syzygy theorem : If R = K [ x 1 , … , x n ] {\displaystyle R=K[x_{1},\dots ,x_{n}]} is a polynomial ring in n indeterminates over a field K , then every free resolution is finite of length at most n . The global dimension of a commutative Noetherian ring is either infinite, or the minimal n such that every free resolution is finite of length at most n . A commutative Noetherian ring is regular if its global dimension is finite. In this case, the global dimension equals its Krull dimension . So, Hilbert's syzygy theorem may be restated in a very short sentence that hides much mathematics: A polynomial ring over a field is a regular ring. In a commutative ring R , one has always ab – ba = 0 . This implies trivially that ( b , – a ) is a linear relation between a and b . Therefore, given a generating set g 1 , … , g k {\displaystyle g_{1},\dots ,g_{k}} of an ideal I , one calls trivial relation or trivial syzygy every element of the submodule the syzygy module that is generated by these trivial relations between two generating elements. More precisely, the module of trivial syzygies is generated by the relations such that x i = g j , {\displaystyle x_{i}=g_{j},} x j = − g i , {\displaystyle x_{j}=-g_{i},} and x h = 0 {\displaystyle x_{h}=0} otherwise. The word syzygy came into mathematics with the work of Arthur Cayley . [ 1 ] In that paper, Cayley used it in the theory of resultants and discriminants . 
[ 2 ] As the word syzygy was used in astronomy to denote a linear relation between planets, Cayley used it to denote linear relations between minors of a matrix; for example, the three 2×2 minors of the 2×3 matrix with rows ( a , b , c ) and ( d , e , f ) satisfy the relation a ( b f − c e ) − b ( a f − c d ) + c ( a e − b d ) = 0. {\displaystyle a(bf-ce)-b(af-cd)+c(ae-bd)=0.} Then, the word syzygy was popularized (among mathematicians) by David Hilbert in his 1890 article, which contains three fundamental theorems on polynomials, Hilbert's syzygy theorem , Hilbert's basis theorem and Hilbert's Nullstellensatz . In his paper on resultants, Cayley made use, in a special case, of what was later [ 3 ] called the Koszul complex , after a similar construction in differential geometry by the mathematician Jean-Louis Koszul .
https://en.wikipedia.org/wiki/Linear_relation
A linear response function describes the input-output relationship of a signal transducer , such as a radio turning electromagnetic waves into music or a neuron turning synaptic input into a response. Because of its many applications in information theory , physics and engineering there exist alternative names for specific linear response functions such as susceptibility , impulse response or impedance ; see also transfer function . The concept of a Green's function or fundamental solution of an ordinary differential equation is closely related. Denote the input of a system by h ( t ) {\displaystyle h(t)} (e.g. a force ), and the response of the system by x ( t ) {\displaystyle x(t)} (e.g. a position). Generally, the value of x ( t ) {\displaystyle x(t)} will depend not only on the present value of h ( t ) {\displaystyle h(t)} , but also on past values. Approximately x ( t ) {\displaystyle x(t)} is a weighted sum of the previous values of h ( t ′ ) {\displaystyle h(t')} , with the weights given by the linear response function χ ( t − t ′ ) {\displaystyle \chi (t-t')} : x ( t ) = ∫ − ∞ t d t ′ χ ( t − t ′ ) h ( t ′ ) + ⋯ . {\displaystyle x(t)=\int _{-\infty }^{t}dt'\,\chi (t-t')h(t')+\cdots \,.} The explicit term on the right-hand side is the leading order term of a Volterra expansion for the full nonlinear response. If the system in question is highly non-linear, higher order terms in the expansion, denoted by the dots, become important and the signal transducer cannot adequately be described just by its linear response function. The complex-valued Fourier transform χ ~ ( ω ) {\displaystyle {\tilde {\chi }}(\omega )} of the linear response function is very useful as it describes the output of the system if the input is a sine wave h ( t ) = h 0 sin ⁡ ( ω t ) {\displaystyle h(t)=h_{0}\sin(\omega t)} with frequency ω {\displaystyle \omega } . The output reads x ( t ) = | χ ~ ( ω ) | h 0 sin ⁡ ( ω t + arg ⁡ χ ~ ( ω ) ) , {\displaystyle x(t)=\left|{\tilde {\chi }}(\omega )\right|h_{0}\sin(\omega t+\arg {\tilde {\chi }}(\omega ))\,,} with amplitude gain | χ ~ ( ω ) | {\displaystyle |{\tilde {\chi }}(\omega )|} and phase shift arg ⁡ χ ~ ( ω ) {\displaystyle \arg {\tilde {\chi }}(\omega )} . Consider a damped harmonic oscillator with input given by an external driving force h ( t ) {\displaystyle h(t)} , x ¨ ( t ) + γ x ˙ ( t ) + ω 0 2 x ( t ) = h ( t ) . {\displaystyle {\ddot {x}}(t)+\gamma {\dot {x}}(t)+\omega _{0}^{2}x(t)=h(t).} The complex-valued Fourier transform of the linear response function is given by χ ~ ( ω ) = x ~ ( ω ) h ~ ( ω ) = 1 ω 0 2 − ω 2 + i γ ω . {\displaystyle {\tilde {\chi }}(\omega )={\frac {{\tilde {x}}(\omega )}{{\tilde {h}}(\omega )}}={\frac {1}{\omega _{0}^{2}-\omega ^{2}+i\gamma \omega }}.} The amplitude gain is given by the magnitude of the complex number χ ~ ( ω ) , {\displaystyle {\tilde {\chi }}(\omega ),} and the phase shift by the arctan of the imaginary part of the function divided by the real one. From this representation, we see that for small γ {\displaystyle \gamma } the Fourier transform χ ~ ( ω ) {\displaystyle {\tilde {\chi }}(\omega )} of the linear response function yields a pronounced maximum (" Resonance ") at the frequency ω ≈ ω 0 {\displaystyle \omega \approx \omega _{0}} . The linear response function for a harmonic oscillator is mathematically identical to that of an RLC circuit . 
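A minimal numerical sketch of this response function, with hypothetical parameter values, evaluating the amplitude gain and phase shift and locating the resonance peak near ω0:

```python
import numpy as np

omega0, gamma = 2.0, 0.1                        # hypothetical oscillator parameters
omega = np.linspace(0.1, 4.0, 2000)
chi = 1.0 / (omega0**2 - omega**2 + 1j * gamma * omega)

gain = np.abs(chi)                              # amplitude gain |chi(omega)|
phase = np.angle(chi)                           # phase shift arg chi(omega)
print("peak gain at omega =", omega[np.argmax(gain)])   # close to omega0 = 2.0 for small gamma
```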
The width of the maximum, Δ ω , {\displaystyle \Delta \omega ,} typically is much smaller than ω 0 , {\displaystyle \omega _{0},} so that the Quality factor Q := ω 0 / Δ ω {\displaystyle Q:=\omega _{0}/\Delta \omega } can be extremely large. The exposition of linear response theory, in the context of quantum statistics , can be found in a paper by Ryogo Kubo . [ 1 ] This defines particularly the Kubo formula , which considers the general case that the "force" h ( t ) is a perturbation of the basic operator of the system, the Hamiltonian , H ^ 0 → H ^ 0 − h ( t ′ ) B ^ ( t ′ ) {\displaystyle {\hat {H}}_{0}\to {\hat {H}}_{0}-h(t'){\hat {B}}(t')} where B ^ {\displaystyle {\hat {B}}} corresponds to a measurable quantity as input, while the output x ( t ) is the perturbation of the thermal expectation of another measurable quantity A ^ ( t ) {\displaystyle {\hat {A}}(t)} . The Kubo formula then defines the quantum-statistical calculation of the susceptibility χ ( t − t ′ ) {\displaystyle \chi (t-t')} by a general formula involving only the mentioned operators. As a consequence of the principle of causality the complex-valued function χ ~ ( ω ) {\displaystyle {\tilde {\chi }}(\omega )} has poles only in the lower half-plane. This leads to the Kramers–Kronig relations , which relates the real and the imaginary parts of χ ~ ( ω ) {\displaystyle {\tilde {\chi }}(\omega )} by integration. The simplest example is once more the damped harmonic oscillator . [ 2 ]
https://en.wikipedia.org/wiki/Linear_response_function
In mathematics , the linear span (also called the linear hull [ 1 ] or just span ) of a set S {\displaystyle S} of elements of a vector space V {\displaystyle V} is the smallest linear subspace of V {\displaystyle V} that contains S . {\displaystyle S.} It is the set of all finite linear combinations of the elements of S , [ 2 ] and the intersection of all linear subspaces that contain S . {\displaystyle S.} It is often denoted span( S ) [ 3 ] or ⟨ S ⟩ . {\displaystyle \langle S\rangle .} For example, in geometry , two linearly independent vectors span a plane . To express that a vector space V is a linear span of a subset S , one commonly uses one of the following phrases: S spans V ; S is a spanning set of V ; V is spanned or generated by S ; S is a generator set or a generating set of V . Spans can be generalized to many mathematical structures , in which case, the smallest substructure containing S {\displaystyle S} is generally called the substructure generated by S . {\displaystyle S.} Given a vector space V over a field K , the span of a set S of vectors (not necessarily finite) is defined to be the intersection W of all subspaces of V that contain S . It is thus the smallest (for set inclusion ) subspace containing S . It is referred to as the subspace spanned by S , or by the vectors in S . Conversely, S is called a spanning set of W , and we say that S spans W . It follows from this definition that the span of S is the set of all finite linear combinations of elements (vectors) of S , and can be defined as such. [ 4 ] [ 5 ] [ 6 ] That is, span ⁡ ( S ) = { λ 1 v 1 + λ 2 v 2 + ⋯ + λ n v n ∣ n ∈ N , v 1 , . . . v n ∈ S , λ 1 , . . . λ n ∈ K } {\displaystyle \operatorname {span} (S)={\biggl \{}\lambda _{1}\mathbf {v} _{1}+\lambda _{2}\mathbf {v} _{2}+\cdots +\lambda _{n}\mathbf {v} _{n}\mid n\in \mathbb {N} ,\;\mathbf {v} _{1},...\mathbf {v} _{n}\in S,\;\lambda _{1},...\lambda _{n}\in K{\biggr \}}} When S is empty , the only possibility is n = 0 , and the previous expression for span ⁡ ( S ) {\displaystyle \operatorname {span} (S)} reduces to the empty sum . [ a ] The standard convention for the empty sum implies thus span ( ∅ ) = { 0 } , {\displaystyle {\text{span}}(\emptyset )=\{\mathbf {0} \},} a property that is immediate with the other definitions. However, many introductory textbooks simply include this fact as part of the definition. When S = { v 1 , … , v n } {\displaystyle S=\{\mathbf {v} _{1},\ldots ,\mathbf {v} _{n}\}} is finite , one has span ⁡ ( S ) = { λ 1 v 1 + λ 2 v 2 + ⋯ + λ n v n ∣ λ 1 , . . . λ n ∈ K } {\displaystyle \operatorname {span} (S)=\{\lambda _{1}\mathbf {v} _{1}+\lambda _{2}\mathbf {v} _{2}+\cdots +\lambda _{n}\mathbf {v} _{n}\mid \lambda _{1},...\lambda _{n}\in K\}} The real vector space R 3 {\displaystyle \mathbb {R} ^{3}} has {(−1, 0, 0), (0, 1, 0), (0, 0, 1)} as a spanning set. This particular spanning set is also a basis . If (−1, 0, 0) were replaced by (1, 0, 0), it would also form the canonical basis of R 3 {\displaystyle \mathbb {R} ^{3}} . Another spanning set for the same space is given by {(1, 2, 3), (0, 1, 2), (−1, 1 ⁄ 2 , 3), (1, 1, 1)}, but this set is not a basis, because it is linearly dependent . The set {(1, 0, 0), (0, 1, 0), (1, 1, 0) } is not a spanning set of R 3 {\displaystyle \mathbb {R} ^{3}} , since its span is the space of all vectors in R 3 {\displaystyle \mathbb {R} ^{3}} whose last component is zero. That space is also spanned by the set {(1, 0, 0), (0, 1, 0)}, as (1, 1, 0) is a linear combination of (1, 0, 0) and (0, 1, 0). 
Thus, the spanned space is not R 3 . {\displaystyle \mathbb {R} ^{3}.} It can be identified with R 2 {\displaystyle \mathbb {R} ^{2}} by removing the third components equal to zero. The empty set is a spanning set of {(0, 0, 0)}, since the empty set is a subset of all possible vector spaces in R 3 {\displaystyle \mathbb {R} ^{3}} , and {(0, 0, 0)} is the intersection of all of these vector spaces. The set of monomials x n , where n is a non-negative integer, spans the space of polynomials . The set of all linear combinations of a subset S of V , a vector space over K , is the smallest linear subspace of V containing S . Every spanning set S of a vector space V must contain at least as many elements as any linearly independent set of vectors from V . Let V be a finite-dimensional vector space. Any set of vectors that spans V can be reduced to a basis for V , by discarding vectors if necessary (i.e. if there are linearly dependent vectors in the set). If the axiom of choice holds, this is true without the assumption that V has finite dimension. This also indicates that a basis is a minimal spanning set when V is finite-dimensional. Generalizing the definition of the span of points in space, a subset X of the ground set of a matroid is called a spanning set if the rank of X equals the rank of the entire ground set [ 7 ] The vector space definition can also be generalized to modules. [ 8 ] [ 9 ] Given an R -module A and a collection of elements a 1 , ..., a n of A , the submodule of A spanned by a 1 , ..., a n is the sum of cyclic modules R a 1 + ⋯ + R a n = { ∑ k = 1 n r k a k | r k ∈ R } {\displaystyle Ra_{1}+\cdots +Ra_{n}=\left\{\sum _{k=1}^{n}r_{k}a_{k}{\bigg |}r_{k}\in R\right\}} consisting of all R -linear combinations of the elements a i . As with the case of vector spaces, the submodule of A spanned by any subset of A is the intersection of all submodules containing that subset. In functional analysis , a closed linear span of a set of vectors is the minimal closed set which contains the linear span of that set. Suppose that X is a normed vector space and let E be any non-empty subset of X . The closed linear span of E , denoted by Sp ¯ ( E ) {\displaystyle {\overline {\operatorname {Sp} }}(E)} or Span ¯ ( E ) {\displaystyle {\overline {\operatorname {Span} }}(E)} , is the intersection of all the closed linear subspaces of X which contain E . One mathematical formulation of this is The closed linear span of the set of functions x n on the interval [0, 1], where n is a non-negative integer, depends on the norm used. If the L 2 norm is used, then the closed linear span is the Hilbert space of square-integrable functions on the interval. But if the maximum norm is used, the closed linear span will be the space of continuous functions on the interval. In either case, the closed linear span contains functions that are not polynomials, and so are not in the linear span itself. However, the cardinality of the set of functions in the closed linear span is the cardinality of the continuum , which is the same cardinality as for the set of polynomials. The linear span of a set is dense in the closed linear span. Moreover, as stated in the lemma below, the closed linear span is indeed the closure of the linear span. Closed linear spans are important when dealing with closed linear subspaces (which are themselves highly important, see Riesz's lemma ). Let X be a normed space and let E be any non-empty subset of X . 
Then Sp ¯ ( E ) = Sp ⁡ ( E ) ¯ {\displaystyle {\overline {\operatorname {Sp} }}(E)={\overline {\operatorname {Sp} (E)}}} ; that is, the closed linear span of E is the closure of the linear span of E . (So the usual way to find the closed linear span is to find the linear span first, and then the closure of that linear span.)
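Returning to the finite-dimensional examples earlier in this article, membership in a span can be tested mechanically: a vector v lies in span(S) exactly when appending v to the rows of S does not increase the matrix rank. A minimal numpy sketch using the spanning-set example from above:

```python
import numpy as np

S = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0]])    # the set {(1,0,0), (0,1,0), (1,1,0)} discussed above

def in_span(vectors, target):
    return np.linalg.matrix_rank(vectors) == np.linalg.matrix_rank(np.vstack([vectors, target]))

print(in_span(S, np.array([2.0, 3.0, 0.0])))   # True: its last component is zero
print(in_span(S, np.array([0.0, 0.0, 1.0])))   # False: the set does not span all of R^3
```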
https://en.wikipedia.org/wiki/Linear_span
A linear stage or translation stage is a component of a precise motion system used to restrict an object to a single axis of motion. The term linear slide is often used interchangeably with "linear stage", though technically "linear slide" refers to a linear motion bearing , which is only a component of a linear stage. All linear stages consist of a platform and a base, joined by some form of guide or linear bearing in such a way that the platform is restricted to linear motion with respect to the base. In common usage, the term linear stage may or may not also include the mechanism by which the position of the platform is controlled relative to the base. In three-dimensional space, an object may either rotate about, or translate along any of three axes. Thus the object is said to have six degrees of freedom (3 rotational and 3 translational). A linear stage exhibits only one degree of freedom (translation along one axis). In other words, linear stages operate by physically restricting 3 axes of rotation and 2 axes of translation thus allowing for motion on only one translational axis. Linear stages consist of a platform that moves relative to a base. The platform and base are joined by some form of guide which restricts motion of the platform to only one dimension. A variety of different styles of guides are used, each with benefits and drawbacks making each guide type more appropriate for some applications than for others. The position of the moving platform relative to the fixed base is typically controlled by a linear actuator of some form, whether manual, motorized, or hydraulic/pneumatic. The most common method is to incorporate a lead screw passing through a lead nut in the platform. The rotation of such a lead screw may be controlled either manually or by a motor. In manual linear stages, a control knob attached to a lead screw is typically used. The knob may be indexed to indicate its angular position. The linear displacement of the stage is related to the angular displacement of the knob by the lead screw pitch. For example if the lead screw pitch is 0.5 mm then one full revolution of the knob will move the stage platform 0.5 mm relative to the stage base. If the knob has 50 index marks around its circumference, then each index division is equivalent to 0.01 mm of linear motion of the stage platform. Precision stages such as those used for optics do not use a lead screw, but instead use a fine-pitch screw or a micrometer which presses on a hardened metal pad on the stage platform. Rotating the screw or micrometer pushes the platform forward. A spring provides restoring force to keep the platform in contact with the actuator. This provides more precise motion of the stage. Stages designed to be mounted vertically use a slightly different arrangement, where the actuator is attached to the movable platform and its tip rests on a metal pad on the fixed base. This allows the weight of the platform and its load to be supported by the actuator rather than the spring. In some automated stages a stepper motor may be used in place of, or in addition to a manual knob. A stepper motor moves in fixed increments called steps. In this sense it behaves very much like an indexed knob. If the lead screw pitch is 0.5 mm and the stepper motor has 200 steps per revolution (as is common), then each revolution of the motor will result in 0.5 mm of linear motion of the stage platform, and each step will result in 0.0025 mm of linear motion. 
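The pitch arithmetic above reduces to a single multiplication; a small illustrative sketch (the helper name is ours):

```python
def platform_travel_mm(pitch_mm, revolutions):
    """Linear travel of the stage platform for a given number of lead-screw revolutions."""
    return pitch_mm * revolutions

# The figures used in the text: a 0.5 mm pitch lead screw.
print(platform_travel_mm(0.5, 1))          # 0.5 mm per full knob revolution
print(platform_travel_mm(0.5, 1 / 50))     # 0.01 mm per index division (50 marks per turn)
print(platform_travel_mm(0.5, 1 / 200))    # 0.0025 mm per step of a 200-step stepper motor
```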
In other automated stages a DC motor may be used in place of a manual control knob. A DC motor does not move in fixed increments. Therefore an alternate means is required to determine stage position. A scale may be attached to the internals of the stage and an encoder used to measure the position of the stage relative to the scale and report this to the motor controller, allowing a motion controller to reliably and repeatably move the stage to set positions. For position control in more than one direction, multiple linear stages may be used together. A "two-axis" or "X-Y" stage can be assembled from two linear stages, one mounted to the platform of the other such that the axis of motion of the second stage is perpendicular to that of the first. A two-axis stage with which many people are familiar is a microscope stage, used to position a slide under a lens. A "three-axis" or "X-Y-Z" stage is composed of three linear stages mounted to each other (often with the use of an additional angle bracket) such that the axes of motion of all stages are orthogonal. Some two-axis and three-axis stages are integrated designs rather than being assembled from separate single-axis stages. Some multiple-axis stages also include rotary or tilt elements such as rotary stages or positioning goniometers . By combining linear and rotary elements in various ways, four-axis, five-axis, and six-axis stages are also possible. Linear stages take an advanced form of high performance positioning systems in applications which require a combination of high speed, high precision and high force. Linear stages are used in semiconductor devices fabrication process for precise linear positioning of wafers of the purposes of wafer mapping dielectric, characterization, and epitaxial layer monitoring where positioning speed and precision are critical. [ 1 ]
https://en.wikipedia.org/wiki/Linear_stage
In mathematics , and more specifically in linear algebra , a linear subspace or vector subspace [ 1 ] [ note 1 ] is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a subspace when the context serves to distinguish it from other types of subspaces . If V is a vector space over a field K , a subset W of V is a linear subspace of V if it is a vector space over K for the operations of V . Equivalently, a linear subspace of V is a nonempty subset W such that, whenever w 1 , w 2 are elements of W and α , β are elements of K , it follows that αw 1 + βw 2 is in W . [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] The singleton set consisting of the zero vector alone and the entire vector space itself are linear subspaces that are called the trivial subspaces of the vector space. [ 7 ] In the vector space V = R 3 (the real coordinate space over the field R of real numbers ), take W to be the set of all vectors in V whose last component is 0. Then W is a subspace of V . Proof: Let the field be R again, but now let the vector space V be the Cartesian plane R 2 . Take W to be the set of points ( x , y ) of R 2 such that x = y . Then W is a subspace of R 2 . Proof: In general, any subset of the real coordinate space R n that is defined by a homogeneous system of linear equations will yield a subspace. (The equation in example I was z = 0, and the equation in example II was x = y .) Again take the field to be R , but now let the vector space V be the set R R of all functions from R to R . Let C( R ) be the subset consisting of continuous functions . Then C( R ) is a subspace of R R . Proof: Keep the same field and vector space as before, but now consider the set Diff( R ) of all differentiable functions . The same sort of argument as before shows that this is a subspace too. Examples that extend these themes are common in functional analysis . From the definition of vector spaces, it follows that subspaces are nonempty, and are closed under sums and under scalar multiples. [ 8 ] Equivalently, subspaces can be characterized by the property of being closed under linear combinations. That is, a nonempty set W is a subspace if and only if every linear combination of finitely many elements of W also belongs to W . The equivalent definition states that it is also equivalent to consider linear combinations of two elements at a time. In a topological vector space X , a subspace W need not be topologically closed , but a finite-dimensional subspace is always closed. [ 9 ] The same is true for subspaces of finite codimension (i.e., subspaces determined by a finite number of continuous linear functionals ). Descriptions of subspaces include the solution set to a homogeneous system of linear equations , the subset of Euclidean space described by a system of homogeneous linear parametric equations , the span of a collection of vectors, and the null space , column space , and row space of a matrix . Geometrically (especially over the field of real numbers and its subfields), a subspace is a flat in an n -space that passes through the origin. A natural description of a 1-subspace is the scalar multiplication of one non- zero vector v to all possible scalar values. 1-subspaces specified by two vectors are equal if and only if one vector can be obtained from another with scalar multiplication: This idea is generalized for higher dimensions with linear span , but criteria for equality of k -spaces specified by sets of k vectors are not so simple. 
A dual description is provided with linear functionals (usually implemented as linear equations). One non- zero linear functional F specifies its kernel subspace F = 0 of codimension 1. Subspaces of codimension 1 specified by two linear functionals are equal, if and only if one functional can be obtained from another with scalar multiplication (in the dual space ): It is generalized for higher codimensions with a system of equations . The following two subsections will present this latter description in details, and the remaining four subsections further describe the idea of linear span. The solution set to any homogeneous system of linear equations with n variables is a subspace in the coordinate space K n : { [ x 1 x 2 ⋮ x n ] ∈ K n : a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = 0 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = 0 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = 0 } . {\displaystyle \left\{\left[\!\!{\begin{array}{c}x_{1}\\x_{2}\\\vdots \\x_{n}\end{array}}\!\!\right]\in K^{n}:{\begin{alignedat}{6}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\cdots +\;&&a_{1n}x_{n}&&\;=0&\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\cdots +\;&&a_{2n}x_{n}&&\;=0&\\&&&&&&&&&&\vdots \quad &\\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\cdots +\;&&a_{mn}x_{n}&&\;=0&\end{alignedat}}\right\}.} For example, the set of all vectors ( x , y , z ) (over real or rational numbers ) satisfying the equations x + 3 y + 2 z = 0 and 2 x − 4 y + 5 z = 0 {\displaystyle x+3y+2z=0\quad {\text{and}}\quad 2x-4y+5z=0} is a one-dimensional subspace. More generally, that is to say that given a set of n independent functions, the dimension of the subspace in K k will be the dimension of the null set of A , the composite matrix of the n functions. In a finite-dimensional space, a homogeneous system of linear equations can be written as a single matrix equation: The set of solutions to this equation is known as the null space of the matrix. For example, the subspace described above is the null space of the matrix Every subspace of K n can be described as the null space of some matrix (see § Algorithms below for more). The subset of K n described by a system of homogeneous linear parametric equations is a subspace: For example, the set of all vectors ( x , y , z ) parameterized by the equations is a two-dimensional subspace of K 3 , if K is a number field (such as real or rational numbers). [ note 2 ] In linear algebra, the system of parametric equations can be written as a single vector equation: The expression on the right is called a linear combination of the vectors (2, 5, −1) and (3, −4, 2). These two vectors are said to span the resulting subspace. In general, a linear combination of vectors v 1 , v 2 , ... , v k is any vector of the form The set of all possible linear combinations is called the span : If the vectors v 1 , ... , v k have n components, then their span is a subspace of K n . Geometrically, the span is the flat through the origin in n -dimensional space determined by the points v 1 , ... , v k . A system of linear parametric equations in a finite-dimensional space can also be written as a single matrix equation: In this case, the subspace consists of all possible values of the vector x . In linear algebra, this subspace is known as the column space (or image ) of the matrix A . It is precisely the subspace of K n spanned by the column vectors of A . The row space of a matrix is the subspace spanned by its row vectors. The row space is interesting because it is the orthogonal complement of the null space (see below). 
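The one-dimensional solution set claimed for the two-equation example above can be checked directly; a minimal sympy sketch using the coefficient matrix of that system:

```python
import sympy as sp

# Coefficient matrix of x + 3y + 2z = 0 and 2x - 4y + 5z = 0.
A = sp.Matrix([[1, 3, 2],
               [2, -4, 5]])
print(A.nullspace())   # a single basis vector, so the solution set is a one-dimensional subspace
```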
In general, a subspace of K n determined by k parameters (or spanned by k vectors) has dimension k . However, there are exceptions to this rule. For example, the subspace of K 3 spanned by the three vectors (1, 0, 0), (0, 0, 1), and (2, 0, 3) is just the xz -plane, with each point on the plane described by infinitely many different values of t 1 , t 2 , t 3 . In general, vectors v 1 , ... , v k are called linearly independent if for ( t 1 , t 2 , ... , t k ) ≠ ( u 1 , u 2 , ... , u k ). [ note 3 ] If v 1 , ..., v k are linearly independent, then the coordinates t 1 , ..., t k for a vector in the span are uniquely determined. A basis for a subspace S is a set of linearly independent vectors whose span is S . The number of elements in a basis is always equal to the geometric dimension of the subspace. Any spanning set for a subspace can be changed into a basis by removing redundant vectors (see § Algorithms below for more). The set-theoretical inclusion binary relation specifies a partial order on the set of all subspaces (of any dimension). A subspace cannot lie in any subspace of lesser dimension. If dim U = k , a finite number, and U ⊂ W , then dim W = k if and only if U = W . Given subspaces U and W of a vector space V , then their intersection U ∩ W := { v ∈ V : v is an element of both U and W } is also a subspace of V . [ 10 ] Proof: For every vector space V , the set { 0 } and V itself are subspaces of V . [ 11 ] [ 12 ] If U and W are subspaces, their sum is the subspace [ 13 ] [ 14 ] U + W = { u + w : u ∈ U , w ∈ W } . {\displaystyle U+W=\left\{\mathbf {u} +\mathbf {w} \colon \mathbf {u} \in U,\mathbf {w} \in W\right\}.} For example, the sum of two lines is the plane that contains them both. The dimension of the sum satisfies the inequality max ( dim ⁡ U , dim ⁡ W ) ≤ dim ⁡ ( U + W ) ≤ dim ⁡ ( U ) + dim ⁡ ( W ) . {\displaystyle \max(\dim U,\dim W)\leq \dim(U+W)\leq \dim(U)+\dim(W).} Here, the minimum only occurs if one subspace is contained in the other, while the maximum is the most general case. The dimension of the intersection and the sum are related by the following equation: [ 15 ] dim ⁡ ( U + W ) = dim ⁡ ( U ) + dim ⁡ ( W ) − dim ⁡ ( U ∩ W ) . {\displaystyle \dim(U+W)=\dim(U)+\dim(W)-\dim(U\cap W).} A set of subspaces is independent when the only intersection between any pair of subspaces is the trivial subspace. The direct sum is the sum of independent subspaces, written as U ⊕ W {\displaystyle U\oplus W} . An equivalent restatement is that a direct sum is a subspace sum under the condition that every subspace contributes to the span of the sum. [ 16 ] [ 17 ] [ 18 ] [ 19 ] The dimension of a direct sum U ⊕ W {\displaystyle U\oplus W} is the same as the sum of subspaces, but may be shortened because the dimension of the trivial subspace is zero. [ 20 ] dim ⁡ ( U ⊕ W ) = dim ⁡ ( U ) + dim ⁡ ( W ) {\displaystyle \dim(U\oplus W)=\dim(U)+\dim(W)} The operations intersection and sum make the set of all subspaces a bounded modular lattice , where the {0} subspace , the least element , is an identity element of the sum operation, and the identical subspace V , the greatest element, is an identity element of the intersection operation. If V {\displaystyle V} is an inner product space and N {\displaystyle N} is a subset of V {\displaystyle V} , then the orthogonal complement of N {\displaystyle N} , denoted N ⊥ {\displaystyle N^{\perp }} , is again a subspace. 
[ 21 ] If V {\displaystyle V} is finite-dimensional and N {\displaystyle N} is a subspace, then the dimensions of N {\displaystyle N} and N ⊥ {\displaystyle N^{\perp }} satisfy the complementary relationship dim ⁡ ( N ) + dim ⁡ ( N ⊥ ) = dim ⁡ ( V ) {\displaystyle \dim(N)+\dim(N^{\perp })=\dim(V)} . [ 22 ] Moreover, no vector is orthogonal to itself, so N ∩ N ⊥ = { 0 } {\displaystyle N\cap N^{\perp }=\{0\}} and V {\displaystyle V} is the direct sum of N {\displaystyle N} and N ⊥ {\displaystyle N^{\perp }} . [ 23 ] Applying orthogonal complements twice returns the original subspace: ( N ⊥ ) ⊥ = N {\displaystyle (N^{\perp })^{\perp }=N} for every subspace N {\displaystyle N} . [ 24 ] This operation, understood as negation ( ¬ {\displaystyle \neg } ), makes the lattice of subspaces a (possibly infinite ) orthocomplemented lattice (although not a distributive lattice). [ citation needed ] In spaces with other bilinear forms , some but not all of these results still hold. In pseudo-Euclidean spaces and symplectic vector spaces , for example, orthogonal complements exist. However, these spaces may have null vectors that are orthogonal to themselves, and consequently there exist subspaces N {\displaystyle N} such that N ∩ N ⊥ ≠ { 0 } {\displaystyle N\cap N^{\perp }\neq \{0\}} . As a result, this operation does not turn the lattice of subspaces into a Boolean algebra (nor a Heyting algebra ). [ citation needed ] Most algorithms for dealing with subspaces involve row reduction . This is the process of applying elementary row operations to a matrix, until it reaches either row echelon form or reduced row echelon form . Row reduction has the following important properties: See the article on row space for an example . If we instead put the matrix A into reduced row echelon form, then the resulting basis for the row space is uniquely determined. This provides an algorithm for checking whether two row spaces are equal and, by extension, whether two subspaces of K n are equal. See the article on column space for an example . This produces a basis for the column space that is a subset of the original column vectors. It works because the columns with pivots are a basis for the column space of the echelon form, and row reduction does not change the linear dependence relationships between the columns. If the final column of the reduced row echelon form contains a pivot, then the input vector v does not lie in S . See the article on null space for an example . Given two subspaces U and W of V , a basis of the sum U + W {\displaystyle U+W} and the intersection U ∩ W {\displaystyle U\cap W} can be calculated using the Zassenhaus algorithm .
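As a small illustration of the row-reduction algorithms referred to above, the following sympy sketch decides whether two spanning sets define the same subspace of K^n by comparing the nonzero rows of their reduced row echelon forms (the example matrices are hypothetical):

```python
import sympy as sp

def same_row_space(A, B):
    """Two matrices have equal row spaces iff their reduced row echelon forms have the
    same nonzero rows (the nonzero rows of the RREF are a canonical basis)."""
    nonzero_rows = lambda M: [row for row in sp.Matrix(M).rref()[0].tolist()
                              if any(e != 0 for e in row)]
    return nonzero_rows(A) == nonzero_rows(B)

# Both of these spanning sets describe the xy-plane inside R^3 ...
print(same_row_space([[1, 0, 0], [0, 1, 0]], [[1, 1, 0], [1, -1, 0]]))   # True
# ... while this pair describes two different planes.
print(same_row_space([[1, 0, 0], [0, 1, 0]], [[1, 0, 0], [0, 0, 1]]))    # False
```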
https://en.wikipedia.org/wiki/Linear_subspace
In systems theory , a linear system is a mathematical model of a system based on the use of a linear operator . Linear systems typically exhibit features and properties that are much simpler than the nonlinear case. As a mathematical abstraction or idealization, linear systems find important applications in automatic control theory, signal processing , and telecommunications . For example, the propagation medium for wireless communication systems can often be modeled by linear systems. A general deterministic system can be described by an operator, H , that maps an input, x ( t ) , as a function of t to an output, y ( t ) , a type of black box description. A system is linear if and only if it satisfies the superposition principle , or equivalently both the additivity and homogeneity properties, without restrictions (that is, for all inputs, all scaling constants and all time.) [ 1 ] [ 2 ] [ 3 ] [ 4 ] The superposition principle means that a linear combination of inputs to the system produces a linear combination of the individual zero-state outputs (that is, outputs setting the initial conditions to zero) corresponding to the individual inputs. [ 5 ] [ 6 ] In a system that satisfies the homogeneity property, scaling the input always results in scaling the zero-state response by the same factor. [ 6 ] In a system that satisfies the additivity property, adding two inputs always results in adding the corresponding two zero-state responses due to the individual inputs. [ 6 ] Mathematically, for a continuous-time system, given two arbitrary inputs x 1 ( t ) x 2 ( t ) {\displaystyle {\begin{aligned}x_{1}(t)\\x_{2}(t)\end{aligned}}} as well as their respective zero-state outputs y 1 ( t ) = H { x 1 ( t ) } y 2 ( t ) = H { x 2 ( t ) } {\displaystyle {\begin{aligned}y_{1}(t)&=H\left\{x_{1}(t)\right\}\\y_{2}(t)&=H\left\{x_{2}(t)\right\}\end{aligned}}} then a linear system must satisfy α y 1 ( t ) + β y 2 ( t ) = H { α x 1 ( t ) + β x 2 ( t ) } {\displaystyle \alpha y_{1}(t)+\beta y_{2}(t)=H\left\{\alpha x_{1}(t)+\beta x_{2}(t)\right\}} for any scalar values α and β , for any input signals x 1 ( t ) and x 2 ( t ) , and for all time t . The system is then defined by the equation H ( x ( t )) = y ( t ) , where y ( t ) is some arbitrary function of time, and x ( t ) is the system state. Given y ( t ) and H , the system can be solved for x ( t ) . The behavior of the resulting system subjected to a complex input can be described as a sum of responses to simpler inputs. In nonlinear systems, there is no such relation. This mathematical property makes the solution of modelling equations simpler than many nonlinear systems. For time-invariant systems this is the basis of the impulse response or the frequency response methods (see LTI system theory ), which describe a general input function x ( t ) in terms of unit impulses or frequency components . Typical differential equations of linear time-invariant systems are well adapted to analysis using the Laplace transform in the continuous case, and the Z-transform in the discrete case (especially in computer implementations). Another perspective is that solutions to linear systems comprise a system of functions which act like vectors in the geometric sense. A common use of linear models is to describe a nonlinear system by linearization . This is usually done for mathematical convenience. The previous definition of a linear system is applicable to SISO (single-input single-output) systems. 
For MIMO (multiple-input multiple-output) systems, input and output signal vectors ( x 1 ( t ) {\displaystyle {\mathbf {x} }_{1}(t)} , x 2 ( t ) {\displaystyle {\mathbf {x} }_{2}(t)} , y 1 ( t ) {\displaystyle {\mathbf {y} }_{1}(t)} , y 2 ( t ) {\displaystyle {\mathbf {y} }_{2}(t)} ) are considered instead of input and output signals ( x 1 ( t ) {\displaystyle x_{1}(t)} , x 2 ( t ) {\displaystyle x_{2}(t)} , y 1 ( t ) {\displaystyle y_{1}(t)} , y 2 ( t ) {\displaystyle y_{2}(t)} .) [ 2 ] [ 4 ] This definition of a linear system is analogous to the definition of a linear differential equation in calculus , and a linear transformation in linear algebra . A simple harmonic oscillator obeys the differential equation: m d 2 ( x ) d t 2 = − k x . {\displaystyle m{\frac {d^{2}(x)}{dt^{2}}}=-kx.} If H ( x ( t ) ) = m d 2 ( x ( t ) ) d t 2 + k x ( t ) , {\displaystyle H(x(t))=m{\frac {d^{2}(x(t))}{dt^{2}}}+kx(t),} then H is a linear operator. Letting y ( t ) = 0 , we can rewrite the differential equation as H ( x ( t )) = y ( t ) , which shows that a simple harmonic oscillator is a linear system. Other examples of linear systems include those described by y ( t ) = k x ( t ) {\displaystyle y(t)=k\,x(t)} , y ( t ) = k d x ( t ) d t {\displaystyle y(t)=k\,{\frac {\mathrm {d} x(t)}{\mathrm {d} t}}} , y ( t ) = k ∫ − ∞ t x ( τ ) d τ {\displaystyle y(t)=k\,\int _{-\infty }^{t}x(\tau )\mathrm {d} \tau } , and any system described by ordinary linear differential equations. [ 4 ] Systems described by y ( t ) = k {\displaystyle y(t)=k} , y ( t ) = k x ( t ) + k 0 {\displaystyle y(t)=k\,x(t)+k_{0}} , y ( t ) = sin ⁡ [ x ( t ) ] {\displaystyle y(t)=\sin {[x(t)]}} , y ( t ) = cos ⁡ [ x ( t ) ] {\displaystyle y(t)=\cos {[x(t)]}} , y ( t ) = x 2 ( t ) {\displaystyle y(t)=x^{2}(t)} , y ( t ) = x ( t ) {\textstyle y(t)={\sqrt {x(t)}}} , y ( t ) = | x ( t ) | {\displaystyle y(t)=|x(t)|} , and a system with odd-symmetry output consisting of a linear region and a saturation (constant) region, are non-linear because they don't always satisfy the superposition principle. [ 7 ] [ 8 ] [ 9 ] [ 10 ] The output versus input graph of a linear system need not be a straight line through the origin. For example, consider a system described by y ( t ) = k d x ( t ) d t {\displaystyle y(t)=k\,{\frac {\mathrm {d} x(t)}{\mathrm {d} t}}} (such as a constant-capacitance capacitor or a constant-inductance inductor ). It is linear because it satisfies the superposition principle. However, when the input is a sinusoid, the output is also a sinusoid, and so its output-input plot is an ellipse centered at the origin rather than a straight line passing through the origin. Also, the output of a linear system can contain harmonics (and have a smaller fundamental frequency than the input) even when the input is a sinusoid. For example, consider a system described by y ( t ) = ( 1.5 + cos ⁡ ( t ) ) x ( t ) {\displaystyle y(t)=(1.5+\cos {(t)})\,x(t)} . It is linear because it satisfies the superposition principle. 
However, when the input is a sinusoid of the form x ( t ) = cos ⁡ ( 3 t ) {\displaystyle x(t)=\cos {(3t)}} , using product-to-sum trigonometric identities it can be easily shown that the output is y ( t ) = 1.5 cos ⁡ ( 3 t ) + 0.5 cos ⁡ ( 2 t ) + 0.5 cos ⁡ ( 4 t ) {\displaystyle y(t)=1.5\cos {(3t)}+0.5\cos {(2t)}+0.5\cos {(4t)}} , that is, the output doesn't consist only of sinusoids of same frequency as the input ( 3 rad/s ), but instead also of sinusoids of frequencies 2 rad/s and 4 rad/s ; furthermore, taking the least common multiple of the fundamental period of the sinusoids of the output, it can be shown the fundamental angular frequency of the output is 1 rad/s , which is different than that of the input. The time-varying impulse response h ( t 2 , t 1 ) of a linear system is defined as the response of the system at time t = t 2 to a single impulse applied at time t = t 1 . In other words, if the input x ( t ) to a linear system is x ( t ) = δ ( t − t 1 ) {\displaystyle x(t)=\delta (t-t_{1})} where δ( t ) represents the Dirac delta function , and the corresponding response y ( t ) of the system is y ( t = t 2 ) = h ( t 2 , t 1 ) {\displaystyle y(t=t_{2})=h(t_{2},t_{1})} then the function h ( t 2 , t 1 ) is the time-varying impulse response of the system. Since the system cannot respond before the input is applied the following causality condition must be satisfied: h ( t 2 , t 1 ) = 0 , t 2 < t 1 {\displaystyle h(t_{2},t_{1})=0,t_{2}<t_{1}} The output of any general continuous-time linear system is related to the input by an integral which may be written over a doubly infinite range because of the causality condition: y ( t ) = ∫ − ∞ t h ( t , t ′ ) x ( t ′ ) d t ′ = ∫ − ∞ ∞ h ( t , t ′ ) x ( t ′ ) d t ′ {\displaystyle y(t)=\int _{-\infty }^{t}h(t,t')x(t')dt'=\int _{-\infty }^{\infty }h(t,t')x(t')dt'} If the properties of the system do not depend on the time at which it is operated then it is said to be time-invariant and h is a function only of the time difference τ = t − t' which is zero for τ < 0 (namely t < t' ). By redefinition of h it is then possible to write the input-output relation equivalently in any of the ways, y ( t ) = ∫ − ∞ t h ( t − t ′ ) x ( t ′ ) d t ′ = ∫ − ∞ ∞ h ( t − t ′ ) x ( t ′ ) d t ′ = ∫ − ∞ ∞ h ( τ ) x ( t − τ ) d τ = ∫ 0 ∞ h ( τ ) x ( t − τ ) d τ {\displaystyle y(t)=\int _{-\infty }^{t}h(t-t')x(t')dt'=\int _{-\infty }^{\infty }h(t-t')x(t')dt'=\int _{-\infty }^{\infty }h(\tau )x(t-\tau )d\tau =\int _{0}^{\infty }h(\tau )x(t-\tau )d\tau } Linear time-invariant systems are most commonly characterized by the Laplace transform of the impulse response function called the transfer function which is: H ( s ) = ∫ 0 ∞ h ( t ) e − s t d t . {\displaystyle H(s)=\int _{0}^{\infty }h(t)e^{-st}\,dt.} In applications this is usually a rational algebraic function of s . 
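The worked example above is easy to check numerically. The sketch below assumes the same system y(t) = (1.5 + cos t)·x(t) and input x(t) = cos 3t, and compares the computed output against the product-to-sum prediction.

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 2000)
x = np.cos(3 * t)
y = (1.5 + np.cos(t)) * x          # output of the linear, time-varying system
y_predicted = 1.5 * np.cos(3 * t) + 0.5 * np.cos(2 * t) + 0.5 * np.cos(4 * t)
print(np.allclose(y, y_predicted)) # True
```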
Because h ( t ) is zero for negative t , the integral may equally be written over the doubly infinite range and putting s = iω follows the formula for the frequency response function : H ( i ω ) = ∫ − ∞ ∞ h ( t ) e − i ω t d t {\displaystyle H(i\omega )=\int _{-\infty }^{\infty }h(t)e^{-i\omega t}dt} The output of any discrete time linear system is related to the input by the time-varying convolution sum: y [ n ] = ∑ m = − ∞ n h [ n , m ] x [ m ] = ∑ m = − ∞ ∞ h [ n , m ] x [ m ] {\displaystyle y[n]=\sum _{m=-\infty }^{n}{h[n,m]x[m]}=\sum _{m=-\infty }^{\infty }{h[n,m]x[m]}} or equivalently for a time-invariant system on redefining h , y [ n ] = ∑ k = 0 ∞ h [ k ] x [ n − k ] = ∑ k = − ∞ ∞ h [ k ] x [ n − k ] {\displaystyle y[n]=\sum _{k=0}^{\infty }{h[k]x[n-k]}=\sum _{k=-\infty }^{\infty }{h[k]x[n-k]}} where k = n − m {\displaystyle k=n-m} represents the lag time between the stimulus at time m and the response at time n .
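As a hedged illustration of passing from the impulse response to the frequency response, the sketch below assumes h(t) = e^{-t} for t ≥ 0 (an arbitrary stable example, not taken from the text), evaluates H(iω) = ∫ h(t) e^{-iωt} dt numerically, and compares it with the known closed form 1/(1 + iω).

```python
import numpy as np

t = np.linspace(0, 50, 200_001)      # long enough that e^{-t} has decayed to nothing
h = np.exp(-t)

for w in (0.0, 1.0, 5.0):
    H_numeric = np.trapz(h * np.exp(-1j * w * t), t)
    H_exact = 1.0 / (1.0 + 1j * w)
    print(w, H_numeric, H_exact)     # the two values agree to several decimal places
```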
https://en.wikipedia.org/wiki/Linear_system
In system analysis , among other fields of study, a linear time-invariant ( LTI ) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance ; these terms are briefly defined in the overview below. These properties apply (exactly or approximately) to many important physical systems, in which case the response y ( t ) of the system to an arbitrary input x ( t ) can be found directly using convolution : y ( t ) = ( x ∗ h )( t ) where h ( t ) is called the system's impulse response and ∗ represents convolution (not to be confused with multiplication). What's more, there are systematic methods for solving any such system (determining h ( t ) ), whereas systems not meeting both properties are generally more difficult (or impossible) to solve analytically. A good example of an LTI system is any electrical circuit consisting of resistors , capacitors , inductors and linear amplifiers . [ 2 ] Linear time-invariant system theory is also used in image processing , where the systems have spatial dimensions instead of, or in addition to, a temporal dimension. These systems may be referred to as linear translation-invariant to give the terminology the most general reach. In the case of generic discrete-time (i.e., sampled ) systems, linear shift-invariant is the corresponding term. LTI system theory is an area of applied mathematics which has direct applications in electrical circuit analysis and design , signal processing and filter design , control theory , mechanical engineering , image processing , the design of measuring instruments of many sorts, NMR spectroscopy [ citation needed ] , and many other technical areas where systems of ordinary differential equations present themselves. The defining properties of any LTI system are linearity and time invariance . The fundamental result in LTI system theory is that any LTI system can be characterized entirely by a single function called the system's impulse response . The output of the system y ( t ) {\displaystyle y(t)} is simply the convolution of the input to the system x ( t ) {\displaystyle x(t)} with the system's impulse response h ( t ) {\displaystyle h(t)} . This is called a continuous time system. Similarly, a discrete-time linear time-invariant (or, more generally, "shift-invariant") system is defined as one operating in discrete time : y i = x i ∗ h i {\displaystyle y_{i}=x_{i}*h_{i}} where y , x , and h are sequences and the convolution, in discrete time, uses a discrete summation rather than an integral. LTI systems can also be characterized in the frequency domain by the system's transfer function , which is the Laplace transform of the system's impulse response (or Z transform in the case of discrete-time systems). As a result of the properties of these transforms, the output of the system in the frequency domain is the product of the transfer function and the transform of the input. In other words, convolution in the time domain is equivalent to multiplication in the frequency domain. For all LTI systems, the eigenfunctions , and the basis functions of the transforms, are complex exponentials . This is, if the input to a system is the complex waveform A s e s t {\displaystyle A_{s}e^{st}} for some complex amplitude A s {\displaystyle A_{s}} and complex frequency s {\displaystyle s} , the output will be some complex constant times the input, say B s e s t {\displaystyle B_{s}e^{st}} for some new complex amplitude B s {\displaystyle B_{s}} . 
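A hedged numerical check of the last statement: with an assumed impulse response h(t) = e^{-2t} for t ≥ 0 and input A·e^{st}, the convolution output is B·e^{st} with B = A·H(s), where H(s) = 1/(s + 2) for this particular h. The values of A and s are arbitrary example choices.

```python
import numpy as np

A, s = 2.0, -0.5 + 1.0j
tau = np.linspace(0.0, 40.0, 400_001)
h = np.exp(-2.0 * tau)                                 # assumed impulse response

H_s = np.trapz(h * np.exp(-s * tau), tau)              # numerically close to 1/(s + 2)

for t in (0.0, 1.0, 2.5):
    y_t = np.trapz(h * A * np.exp(s * (t - tau)), tau) # (x * h)(t) at one instant
    print(np.isclose(y_t, A * H_s * np.exp(s * t)))    # True: output = A * H(s) * e^{st}
```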
The ratio B s / A s {\displaystyle B_{s}/A_{s}} is the transfer function at frequency s {\displaystyle s} . Since sinusoids are a sum of complex exponentials with complex-conjugate frequencies, if the input to the system is a sinusoid, then the output of the system will also be a sinusoid, perhaps with a different amplitude and a different phase , but always with the same frequency upon reaching steady-state. LTI systems cannot produce frequency components that are not in the input. LTI system theory is good at describing many important systems. Most LTI systems are considered "easy" to analyze, at least compared to the time-varying and/or nonlinear case. Any system that can be modeled as a linear differential equation with constant coefficients is an LTI system. Examples of such systems are electrical circuits made up of resistors , inductors , and capacitors (RLC circuits). Ideal spring–mass–damper systems are also LTI systems, and are mathematically equivalent to RLC circuits. Most LTI system concepts are similar between the continuous-time and discrete-time (linear shift-invariant) cases. In image processing, the time variable is replaced with two space variables, and the notion of time invariance is replaced by two-dimensional shift invariance. When analyzing filter banks and MIMO systems, it is often useful to consider vectors of signals. A linear system that is not time-invariant can be solved using other approaches such as the Green function method. The behavior of a linear, continuous-time, time-invariant system with input signal x ( t ) and output signal y ( t ) is described by the convolution integral: [ 3 ] where h ( t ) {\textstyle h(t)} is the system's response to an impulse : x ( τ ) = δ ( τ ) {\textstyle x(\tau )=\delta (\tau )} . y ( t ) {\textstyle y(t)} is therefore proportional to a weighted average of the input function x ( τ ) {\textstyle x(\tau )} . The weighting function is h ( − τ ) {\textstyle h(-\tau )} , simply shifted by amount t {\textstyle t} . As t {\textstyle t} changes, the weighting function emphasizes different parts of the input function. When h ( τ ) {\textstyle h(\tau )} is zero for all negative τ {\textstyle \tau } , y ( t ) {\textstyle y(t)} depends only on values of x {\textstyle x} prior to time t {\textstyle t} , and the system is said to be causal . To understand why the convolution produces the output of an LTI system, let the notation { x ( u − τ ) ; u } {\textstyle \{x(u-\tau );\ u\}} represent the function x ( u − τ ) {\textstyle x(u-\tau )} with variable u {\textstyle u} and constant τ {\textstyle \tau } . And let the shorter notation { x } {\textstyle \{x\}} represent { x ( u ) ; u } {\textstyle \{x(u);\ u\}} . Then a continuous-time system transforms an input function, { x } , {\textstyle \{x\},} into an output function, { y } {\textstyle \{y\}} . And in general, every value of the output can depend on every value of the input. This concept is represented by: y ( t ) = def O t { x } , {\displaystyle y(t)\mathrel {\stackrel {\text{def}}{=}} O_{t}\{x\},} where O t {\textstyle O_{t}} is the transformation operator for time t {\textstyle t} . In a typical system, y ( t ) {\textstyle y(t)} depends most heavily on the values of x {\textstyle x} that occurred near time t {\textstyle t} . Unless the transform itself changes with t {\textstyle t} , the output function is just constant, and the system is uninteresting. 
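The "same frequency, different amplitude and phase" behaviour can be spot-checked in discrete time. The FIR impulse response below is an assumed example; after the short transient, the output of the convolution should be the input sinusoid scaled by |H(e^{jω})| and shifted by arg H(e^{jω}).

```python
import numpy as np

h = np.array([0.25, 0.5, 0.25])                      # assumed FIR impulse response
w = 0.3 * np.pi                                      # input frequency in rad/sample
n = np.arange(400)
x = np.cos(w * n)

y = np.convolve(x, h, mode="full")[: len(n)]         # causal filter output

H = np.sum(h * np.exp(-1j * w * np.arange(len(h))))  # DTFT of h at frequency w
y_pred = np.abs(H) * np.cos(w * n + np.angle(H))     # predicted steady-state sinusoid

print(np.allclose(y[10:], y_pred[10:]))              # True once the transient has passed
```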
For a linear system, O {\textstyle O} must satisfy Eq.1 : And the time-invariance requirement is: In this notation, we can write the impulse response as h ( t ) = def O t { δ ( u ) ; u } . {\textstyle h(t)\mathrel {\stackrel {\text{def}}{=}} O_{t}\{\delta (u);\ u\}.} Similarly: Substituting this result into the convolution integral: ( x ∗ h ) ( t ) = ∫ − ∞ ∞ x ( τ ) ⋅ h ( t − τ ) d τ = ∫ − ∞ ∞ x ( τ ) ⋅ O t { δ ( u − τ ) ; u } d τ , {\displaystyle {\begin{aligned}(x*h)(t)&=\int _{-\infty }^{\infty }x(\tau )\cdot h(t-\tau )\,\mathrm {d} \tau \\[4pt]&=\int _{-\infty }^{\infty }x(\tau )\cdot O_{t}\{\delta (u-\tau );\ u\}\,\mathrm {d} \tau ,\,\end{aligned}}} which has the form of the right side of Eq.2 for the case c τ = x ( τ ) {\textstyle c_{\tau }=x(\tau )} and x τ ( u ) = δ ( u − τ ) . {\textstyle x_{\tau }(u)=\delta (u-\tau ).} Eq.2 then allows this continuation: ( x ∗ h ) ( t ) = O t { ∫ − ∞ ∞ x ( τ ) ⋅ δ ( u − τ ) d τ ; u } = O t { x ( u ) ; u } = def y ( t ) . {\displaystyle {\begin{aligned}(x*h)(t)&=O_{t}\left\{\int _{-\infty }^{\infty }x(\tau )\cdot \delta (u-\tau )\,\mathrm {d} \tau ;\ u\right\}\\[4pt]&=O_{t}\left\{x(u);\ u\right\}\\&\mathrel {\stackrel {\text{def}}{=}} y(t).\,\end{aligned}}} In summary, the input function, { x } {\textstyle \{x\}} , can be represented by a continuum of time-shifted impulse functions, combined "linearly", as shown at Eq.1 . The system's linearity property allows the system's response to be represented by the corresponding continuum of impulse responses , combined in the same way. And the time-invariance property allows that combination to be represented by the convolution integral. The mathematical operations above have a simple graphical simulation. [ 4 ] An eigenfunction is a function for which the output of the operator is a scaled version of the same function. That is, H f = λ f , {\displaystyle {\mathcal {H}}f=\lambda f,} where f is the eigenfunction and λ {\displaystyle \lambda } is the eigenvalue , a constant. The exponential functions A e s t {\displaystyle Ae^{st}} , where A , s ∈ C {\displaystyle A,s\in \mathbb {C} } , are eigenfunctions of a linear , time-invariant operator. A simple proof illustrates this concept. Suppose the input is x ( t ) = A e s t {\displaystyle x(t)=Ae^{st}} . The output of the system with impulse response h ( t ) {\displaystyle h(t)} is then ∫ − ∞ ∞ h ( t − τ ) A e s τ d τ {\displaystyle \int _{-\infty }^{\infty }h(t-\tau )Ae^{s\tau }\,\mathrm {d} \tau } which, by the commutative property of convolution , is equivalent to ∫ − ∞ ∞ h ( τ ) A e s ( t − τ ) d τ ⏞ H f = ∫ − ∞ ∞ h ( τ ) A e s t e − s τ d τ = A e s t ∫ − ∞ ∞ h ( τ ) e − s τ d τ = A e s t ⏟ Input ⏞ f H ( s ) ⏟ Scalar ⏞ λ , {\displaystyle {\begin{aligned}\overbrace {\int _{-\infty }^{\infty }h(\tau )\,Ae^{s(t-\tau )}\,\mathrm {d} \tau } ^{{\mathcal {H}}f}&=\int _{-\infty }^{\infty }h(\tau )\,Ae^{st}e^{-s\tau }\,\mathrm {d} \tau \\[4pt]&=Ae^{st}\int _{-\infty }^{\infty }h(\tau )\,e^{-s\tau }\,\mathrm {d} \tau \\[4pt]&=\overbrace {\underbrace {Ae^{st}} _{\text{Input}}} ^{f}\overbrace {\underbrace {H(s)} _{\text{Scalar}}} ^{\lambda },\\\end{aligned}}} where the scalar H ( s ) = def ∫ − ∞ ∞ h ( t ) e − s t d t {\displaystyle H(s)\mathrel {\stackrel {\text{def}}{=}} \int _{-\infty }^{\infty }h(t)e^{-st}\,\mathrm {d} t} is dependent only on the parameter s . So the system's response is a scaled version of the input. 
In particular, for any A , s ∈ C {\displaystyle A,s\in \mathbb {C} } , the system output is the product of the input A e s t {\displaystyle Ae^{st}} and the constant H ( s ) {\displaystyle H(s)} . Hence, A e s t {\displaystyle Ae^{st}} is an eigenfunction of an LTI system, and the corresponding eigenvalue is H ( s ) {\displaystyle H(s)} . It is also possible to directly derive complex exponentials as eigenfunctions of LTI systems. Let's set v ( t ) = e i ω t {\displaystyle v(t)=e^{i\omega t}} some complex exponential and v a ( t ) = e i ω ( t + a ) {\displaystyle v_{a}(t)=e^{i\omega (t+a)}} a time-shifted version of it. H [ v a ] ( t ) = e i ω a H [ v ] ( t ) {\displaystyle H[v_{a}](t)=e^{i\omega a}H[v](t)} by linearity with respect to the constant e i ω a {\displaystyle e^{i\omega a}} . H [ v a ] ( t ) = H [ v ] ( t + a ) {\displaystyle H[v_{a}](t)=H[v](t+a)} by time invariance of H {\displaystyle H} . So H [ v ] ( t + a ) = e i ω a H [ v ] ( t ) {\displaystyle H[v](t+a)=e^{i\omega a}H[v](t)} . Setting t = 0 {\displaystyle t=0} and renaming we get: H [ v ] ( τ ) = e i ω τ H [ v ] ( 0 ) {\displaystyle H[v](\tau )=e^{i\omega \tau }H[v](0)} i.e. that a complex exponential e i ω τ {\displaystyle e^{i\omega \tau }} as input will give a complex exponential of same frequency as output. The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. The one-sided Laplace transform H ( s ) = def L { h ( t ) } = def ∫ 0 ∞ h ( t ) e − s t d t {\displaystyle H(s)\mathrel {\stackrel {\text{def}}{=}} {\mathcal {L}}\{h(t)\}\mathrel {\stackrel {\text{def}}{=}} \int _{0}^{\infty }h(t)e^{-st}\,\mathrm {d} t} is exactly the way to get the eigenvalues from the impulse response. Of particular interest are pure sinusoids (i.e., exponential functions of the form e j ω t {\displaystyle e^{j\omega t}} where ω ∈ R {\displaystyle \omega \in \mathbb {R} } and j = def − 1 {\displaystyle j\mathrel {\stackrel {\text{def}}{=}} {\sqrt {-1}}} ). The Fourier transform H ( j ω ) = F { h ( t ) } {\displaystyle H(j\omega )={\mathcal {F}}\{h(t)\}} gives the eigenvalues for pure complex sinusoids. Both of H ( s ) {\displaystyle H(s)} and H ( j ω ) {\displaystyle H(j\omega )} are called the system function , system response , or transfer function . The Laplace transform is usually used in the context of one-sided signals, i.e. signals that are zero for all values of t less than some value. Usually, this "start time" is set to zero, for convenience and without loss of generality, with the transform integral being taken from zero to infinity (the transform shown above with lower limit of integration of negative infinity is formally known as the bilateral Laplace transform ). The Fourier transform is used for analyzing systems that process signals that are infinite in extent, such as modulated sinusoids, even though it cannot be directly applied to input and output signals that are not square integrable . The Laplace transform actually works directly for these signals if they are zero before a start time, even if they are not square integrable, for stable systems. The Fourier transform is often applied to spectra of infinite signals via the Wiener–Khinchin theorem even when Fourier transforms of the signals do not exist. 
Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain, given signals for which the transforms exist y ( t ) = ( h ∗ x ) ( t ) = def ∫ − ∞ ∞ h ( t − τ ) x ( τ ) d τ = def L − 1 { H ( s ) X ( s ) } . {\displaystyle y(t)=(h*x)(t)\mathrel {\stackrel {\text{def}}{=}} \int _{-\infty }^{\infty }h(t-\tau )x(\tau )\,\mathrm {d} \tau \mathrel {\stackrel {\text{def}}{=}} {\mathcal {L}}^{-1}\{H(s)X(s)\}.} One can use the system response directly to determine how any particular frequency component is handled by a system with that Laplace transform. If we evaluate the system response (Laplace transform of the impulse response) at complex frequency s = jω , where ω = 2 πf , we obtain | H ( s )| which is the system gain for frequency f . The relative phase shift between the output and input for that frequency component is likewise given by arg( H ( s )). When the Laplace transform of the derivative is taken, it transforms to a simple multiplication by the Laplace variable s . L { d d t x ( t ) } = s X ( s ) {\displaystyle {\mathcal {L}}\left\{{\frac {\mathrm {d} }{\mathrm {d} t}}x(t)\right\}=sX(s)} Some of the most important properties of a system are causality and stability. Causality is a necessity for a physical system whose independent variable is time, however this restriction is not present in other cases such as image processing. A system is causal if the output depends only on present and past, but not future inputs. A necessary and sufficient condition for causality is h ( t ) = 0 ∀ t < 0 , {\displaystyle h(t)=0\quad \forall t<0,} where h ( t ) {\displaystyle h(t)} is the impulse response. It is not possible in general to determine causality from the two-sided Laplace transform . However, when working in the time domain, one normally uses the one-sided Laplace transform which requires causality. A system is bounded-input, bounded-output stable (BIBO stable) if, for every bounded input, the output is finite. Mathematically, if every input satisfying ‖ x ( t ) ‖ ∞ < ∞ {\displaystyle \ \|x(t)\|_{\infty }<\infty } leads to an output satisfying ‖ y ( t ) ‖ ∞ < ∞ {\displaystyle \ \|y(t)\|_{\infty }<\infty } (that is, a finite maximum absolute value of x ( t ) {\displaystyle x(t)} implies a finite maximum absolute value of y ( t ) {\displaystyle y(t)} ), then the system is stable. A necessary and sufficient condition is that h ( t ) {\displaystyle h(t)} , the impulse response, is in L 1 (has a finite L 1 norm): ‖ h ( t ) ‖ 1 = ∫ − ∞ ∞ | h ( t ) | d t < ∞ . {\displaystyle \|h(t)\|_{1}=\int _{-\infty }^{\infty }|h(t)|\,\mathrm {d} t<\infty .} In the frequency domain, the region of convergence must contain the imaginary axis s = j ω {\displaystyle s=j\omega } . As an example, the ideal low-pass filter with impulse response equal to a sinc function is not BIBO stable, because the sinc function does not have a finite L 1 norm. Thus, for some bounded input, the output of the ideal low-pass filter is unbounded. In particular, if the input is zero for t < 0 {\displaystyle t<0} and equal to a sinusoid at the cut-off frequency for t > 0 {\displaystyle t>0} , then the output will be unbounded for all times other than the zero crossings. [ dubious – discuss ] Almost everything in continuous-time systems has a counterpart in discrete-time systems. In many contexts, a discrete time (DT) system is really part of a larger continuous time (CT) system. 
For example, a digital recording system takes an analog sound, digitizes it, possibly processes the digital signals, and plays back an analog sound for people to listen to. In practical systems, DT signals obtained are usually uniformly sampled versions of CT signals. If x ( t ) {\displaystyle x(t)} is a CT signal, then the sampling circuit used before an analog-to-digital converter will transform it to a DT signal: x n = def x ( n T ) ∀ n ∈ Z , {\displaystyle x_{n}\mathrel {\stackrel {\text{def}}{=}} x(nT)\qquad \forall \,n\in \mathbb {Z} ,} where T is the sampling period . Before sampling, the input signal is normally run through a so-called Nyquist filter which removes frequencies above the "folding frequency" 1/(2T); this guarantees that no information in the filtered signal will be lost. Without filtering, any frequency component above the folding frequency (or Nyquist frequency ) is aliased to a different frequency (thus distorting the original signal), since a DT signal can only support frequency components lower than the folding frequency. Let { x [ m − k ] ; m } {\displaystyle \{x[m-k];\ m\}} represent the sequence { x [ m − k ] ; for all integer values of m } . {\displaystyle \{x[m-k];{\text{ for all integer values of }}m\}.} And let the shorter notation { x } {\displaystyle \{x\}} represent { x [ m ] ; m } . {\displaystyle \{x[m];\ m\}.} A discrete system transforms an input sequence, { x } {\displaystyle \{x\}} into an output sequence, { y } . {\displaystyle \{y\}.} In general, every element of the output can depend on every element of the input. Representing the transformation operator by O {\displaystyle O} , we can write: y [ n ] = def O n { x } . {\displaystyle y[n]\mathrel {\stackrel {\text{def}}{=}} O_{n}\{x\}.} Note that unless the transform itself changes with n , the output sequence is just constant, and the system is uninteresting. (Thus the subscript, n .) In a typical system, y [ n ] depends most heavily on the elements of x whose indices are near n . For the special case of the Kronecker delta function , x [ m ] = δ [ m ] , {\displaystyle x[m]=\delta [m],} the output sequence is the impulse response : h [ n ] = def O n { δ [ m ] ; m } . {\displaystyle h[n]\mathrel {\stackrel {\text{def}}{=}} O_{n}\{\delta [m];\ m\}.} For a linear system, O {\displaystyle O} must satisfy: And the time-invariance requirement is: In such a system, the impulse response, { h } {\displaystyle \{h\}} , characterizes the system completely. That is, for any input sequence, the output sequence can be calculated in terms of the input and the impulse response. To see how that is done, consider the identity: x [ m ] ≡ ∑ k = − ∞ ∞ x [ k ] ⋅ δ [ m − k ] , {\displaystyle x[m]\equiv \sum _{k=-\infty }^{\infty }x[k]\cdot \delta [m-k],} which expresses { x } {\displaystyle \{x\}} in terms of a sum of weighted delta functions. Therefore: y [ n ] = O n { x } = O n { ∑ k = − ∞ ∞ x [ k ] ⋅ δ [ m − k ] ; m } = ∑ k = − ∞ ∞ x [ k ] ⋅ O n { δ [ m − k ] ; m } , {\displaystyle {\begin{aligned}y[n]=O_{n}\{x\}&=O_{n}\left\{\sum _{k=-\infty }^{\infty }x[k]\cdot \delta [m-k];\ m\right\}\\&=\sum _{k=-\infty }^{\infty }x[k]\cdot O_{n}\{\delta [m-k];\ m\},\,\end{aligned}}} where we have invoked Eq.4 for the case c k = x [ k ] {\displaystyle c_{k}=x[k]} and x k [ m ] = δ [ m − k ] {\displaystyle x_{k}[m]=\delta [m-k]} . And because of Eq.5 , we may write: O n { δ [ m − k ] ; m } = O n − k { δ [ m ] ; m } = def h [ n − k ] . 
{\displaystyle {\begin{aligned}O_{n}\{\delta [m-k];\ m\}&\mathrel {\stackrel {\quad }{=}} O_{n-k}\{\delta [m];\ m\}\\&\mathrel {\stackrel {\text{def}}{=}} h[n-k].\end{aligned}}} Therefore: which is the familiar discrete convolution formula. The operator O n {\displaystyle O_{n}} can therefore be interpreted as proportional to a weighted average of the function x [ k ]. The weighting function is h [− k ], simply shifted by amount n . As n changes, the weighting function emphasizes different parts of the input function. Equivalently, the system's response to an impulse at n =0 is a "time" reversed copy of the unshifted weighting function. When h [ k ] is zero for all negative k , the system is said to be causal . An eigenfunction is a function for which the output of the operator is the same function, scaled by some constant. In symbols, H f = λ f , {\displaystyle {\mathcal {H}}f=\lambda f,} where f is the eigenfunction and λ {\displaystyle \lambda } is the eigenvalue , a constant. The exponential functions z n = e s T n {\displaystyle z^{n}=e^{sTn}} , where n ∈ Z {\displaystyle n\in \mathbb {Z} } , are eigenfunctions of a linear , time-invariant operator. T ∈ R {\displaystyle T\in \mathbb {R} } is the sampling interval, and z = e s T , z , s ∈ C {\displaystyle z=e^{sT},\ z,s\in \mathbb {C} } . A simple proof illustrates this concept. Suppose the input is x [ n ] = z n {\displaystyle x[n]=z^{n}} . The output of the system with impulse response h [ n ] {\displaystyle h[n]} is then ∑ m = − ∞ ∞ h [ n − m ] z m {\displaystyle \sum _{m=-\infty }^{\infty }h[n-m]\,z^{m}} which is equivalent to the following by the commutative property of convolution ∑ m = − ∞ ∞ h [ m ] z ( n − m ) = z n ∑ m = − ∞ ∞ h [ m ] z − m = z n H ( z ) {\displaystyle \sum _{m=-\infty }^{\infty }h[m]\,z^{(n-m)}=z^{n}\sum _{m=-\infty }^{\infty }h[m]\,z^{-m}=z^{n}H(z)} where H ( z ) = def ∑ m = − ∞ ∞ h [ m ] z − m {\displaystyle H(z)\mathrel {\stackrel {\text{def}}{=}} \sum _{m=-\infty }^{\infty }h[m]z^{-m}} is dependent only on the parameter z . So z n {\displaystyle z^{n}} is an eigenfunction of an LTI system because the system response is the same as the input times the constant H ( z ) {\displaystyle H(z)} . The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. The Z transform H ( z ) = Z { h [ n ] } = ∑ n = − ∞ ∞ h [ n ] z − n {\displaystyle H(z)={\mathcal {Z}}\{h[n]\}=\sum _{n=-\infty }^{\infty }h[n]z^{-n}} is exactly the way to get the eigenvalues from the impulse response. [ clarification needed ] Of particular interest are pure sinusoids; i.e. exponentials of the form e j ω n {\displaystyle e^{j\omega n}} , where ω ∈ R {\displaystyle \omega \in \mathbb {R} } . These can also be written as z n {\displaystyle z^{n}} with z = e j ω {\displaystyle z=e^{j\omega }} [ clarification needed ] . The discrete-time Fourier transform (DTFT) H ( e j ω ) = F { h [ n ] } {\displaystyle H(e^{j\omega })={\mathcal {F}}\{h[n]\}} gives the eigenvalues of pure sinusoids [ clarification needed ] . Both of H ( z ) {\displaystyle H(z)} and H ( e j ω ) {\displaystyle H(e^{j\omega })} are called the system function , system response , or transfer function . Like the one-sided Laplace transform, the Z transform is usually used in the context of one-sided signals, i.e. signals that are zero for t<0. The discrete-time Fourier transform Fourier series may be used for analyzing periodic signals. 
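The discrete-time eigenfunction property can be verified directly. In the sketch below the FIR impulse response and the value of z are assumed examples; feeding x[n] = z^n through the convolution sum should return H(z)·z^n.

```python
import numpy as np

h = np.array([1.0, -0.5, 0.25])                  # assumed FIR impulse response
z = 0.9 * np.exp(1j * 0.4 * np.pi)               # assumed complex "frequency"

n = np.arange(3, 40)                             # skip the first samples so every x[k - m] index exists
x = z ** np.arange(40)
y = np.array([np.sum(h * x[k - np.arange(len(h))]) for k in n])  # y[k] = sum_m h[m] x[k - m]

H_z = np.sum(h * z ** (-np.arange(len(h))))      # H(z) = sum_m h[m] z^{-m}
print(np.allclose(y, H_z * z ** n))              # True: the output is H(z) times the input
```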
Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain. That is, y [ n ] = ( h ∗ x ) [ n ] = ∑ m = − ∞ ∞ h [ n − m ] x [ m ] = Z − 1 { H ( z ) X ( z ) } . {\displaystyle y[n]=(h*x)[n]=\sum _{m=-\infty }^{\infty }h[n-m]x[m]={\mathcal {Z}}^{-1}\{H(z)X(z)\}.} Just as with the Laplace transform transfer function in continuous-time system analysis, the Z transform makes it easier to analyze systems and gain insight into their behavior. The Z transform of the delay operator is a simple multiplication by z −1 . That is, The input-output characteristics of discrete-time LTI system are completely described by its impulse response h [ n ] {\displaystyle h[n]} . Two of the most important properties of a system are causality and stability. Non-causal (in time) systems can be defined and analyzed as above, but cannot be realized in real-time. Unstable systems can also be analyzed and built, but are only useful as part of a larger system whose overall transfer function is stable. A discrete-time LTI system is causal if the current value of the output depends on only the current value and past values of the input. [ 5 ] A necessary and sufficient condition for causality is h [ n ] = 0 ∀ n < 0 , {\displaystyle h[n]=0\ \forall n<0,} where h [ n ] {\displaystyle h[n]} is the impulse response. It is not possible in general to determine causality from the Z transform, because the inverse transform is not unique [ dubious – discuss ] . When a region of convergence is specified, then causality can be determined. A system is bounded input, bounded output stable (BIBO stable) if, for every bounded input, the output is finite. Mathematically, if ‖ x [ n ] ‖ ∞ < ∞ {\displaystyle \|x[n]\|_{\infty }<\infty } implies that ‖ y [ n ] ‖ ∞ < ∞ {\displaystyle \|y[n]\|_{\infty }<\infty } (that is, if bounded input implies bounded output, in the sense that the maximum absolute values of x [ n ] {\displaystyle x[n]} and y [ n ] {\displaystyle y[n]} are finite), then the system is stable. A necessary and sufficient condition is that h [ n ] {\displaystyle h[n]} , the impulse response, satisfies ‖ h [ n ] ‖ 1 = def ∑ n = − ∞ ∞ | h [ n ] | < ∞ . {\displaystyle \|h[n]\|_{1}\mathrel {\stackrel {\text{def}}{=}} \sum _{n=-\infty }^{\infty }|h[n]|<\infty .} In the frequency domain, the region of convergence must contain the unit circle (i.e., the locus satisfying | z | = 1 {\displaystyle |z|=1} for complex z ).
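Two points from this section can be spot-checked on made-up sequences: first, that convolution in time corresponds to multiplication of transforms (here the DFT, zero-padded to avoid circular wrap-around, stands in for the Z transform evaluated on the unit circle); second, that BIBO stability corresponds to an absolutely summable impulse response, which a decaying geometric sequence satisfies and the ideal low-pass impulse response does not.

```python
import numpy as np

# convolution in time  <->  multiplication of (zero-padded) DFTs
h = np.array([1.0, 0.5, -0.25])
x = np.array([2.0, 0.0, 1.0, -1.0])
N = len(h) + len(x) - 1
y_time = np.convolve(h, x)
y_freq = np.fft.ifft(np.fft.fft(h, N) * np.fft.fft(x, N)).real
print(np.allclose(y_time, y_freq))                       # True

# BIBO stability via the l1 norm of the impulse response
geometric = 0.9 ** np.arange(200)                        # sum of |h[n]| converges (to about 10)
print(np.sum(np.abs(geometric)))

n = np.arange(1, 2_000_001)
ideal_lpf = np.sin(0.4 * np.pi * n) / (np.pi * n)        # ideal low-pass filter, n > 0 part
for N_trunc in (10**3, 10**5, 2 * 10**6):
    print(N_trunc, np.sum(np.abs(ideal_lpf[:N_trunc])))  # partial sums keep growing with N_trunc
```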
https://en.wikipedia.org/wiki/Linear_time-invariant_system
Linear (or Longitudinal) Timecode ( LTC ) is an encoding of SMPTE timecode data in an audio signal , as defined in the SMPTE 12M specification. The audio signal is commonly recorded on a VTR track or other storage media. The bits are encoded using the biphase mark code (also known as FM ): a 0 bit has a single transition at the start of the bit period, while a 1 bit has two transitions, at the beginning and middle of the period. This encoding is self-clocking . Each frame is terminated by a ' sync word ' which has a special predefined sync relationship with any video or film content. A special bit in the linear timecode frame, the biphase mark correction bit, ensures that there is an even number of AC transitions in each timecode frame. The sound of linear timecode is a jarring and distinctive noise and has been used as a sound-effects shorthand to imply telemetry or computers . In broadcast video situations, the LTC generator should be tied into house black burst, as should all devices using timecode, to ensure correct color framing and correct synchronization of all digital clocks. When synchronizing multiple clock-dependent digital devices together with video, such as digital audio recorders, the devices must be connected to a common word clock signal that is derived from the house black burst signal. This can be accomplished by using a generator that produces both black burst and video-resolved word clock, or by synchronizing the master digital device to video and synchronizing all subsequent devices to the word clock output of the master digital device (and to LTC). Made up of 80 bits per frame, where there may be 24, 25 or 30 frames per second, LTC timecode varies from 960 Hz (binary zeros at 24 frames/s) to 2400 Hz (binary ones at 30 frames/s), and thus sits comfortably in the audio frequency range. LTC can exist as either a balanced or unbalanced signal, and can be treated as an audio signal with regard to distribution. Like audio, LTC can be distributed by standard audio wiring, connectors, distribution amplifiers and patchbays, and can be ground-isolated with audio transformers. It can also be distributed via 75 ohm video cable and video distribution amplifiers, although the voltage attenuation caused by using a 75 ohm system may cause the signal to drop to a level that cannot be read by some equipment. Care has to be taken with analog audio to avoid audible 'breakthrough' (also known as crosstalk) from the LTC track to the audio tracks. Longitudinal SMPTE timecode recorded on an audio track should be kept at a moderate level, as both low and high levels will introduce distortion. The basic format is an 80-bit code that gives the time of day to the second, and the frame number within the second. Values are stored in binary-coded decimal , least significant bit first. There are thirty-two bits of user data, usually used for a reel number and date.
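As a rough illustration of biphase mark coding (my own sketch, not the SMPTE reference implementation), the function below emits two half-bit levels per bit: the level toggles at every bit boundary, and toggles again in the middle of the bit cell only when the bit is a 1.

```python
def biphase_mark_encode(bits, level=1):
    halves = []                    # two half-bit symbols (+1 or -1) per input bit
    for b in bits:
        level = -level             # transition at the start of every bit cell
        halves.append(level)
        if b:
            level = -level         # extra mid-cell transition encodes a 1
        halves.append(level)
    return halves

print(biphase_mark_encode([0, 1, 1, 0]))
# [-1, -1, 1, -1, 1, -1, 1, 1]
```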
https://en.wikipedia.org/wiki/Linear_timecode
In algebra , a linear topology on a left A {\displaystyle A} -module M {\displaystyle M} is a topology on M {\displaystyle M} that is invariant under translations and admits a fundamental system of neighborhoods of 0 {\displaystyle 0} that consists of submodules of M . {\displaystyle M.} [ 1 ] If there is such a topology, M {\displaystyle M} is said to be linearly topologized . If A {\displaystyle A} is given the discrete topology, then M {\displaystyle M} becomes a topological A {\displaystyle A} -module with respect to a linear topology. The notion is used more commonly in algebra than in analysis. Indeed, "[t]opological vector spaces with linear topology form a natural class of topological vector spaces over discrete fields , analogous to the class of locally convex topological vector spaces over the normed fields of real or complex numbers in functional analysis ." [ 2 ] The term "linear topology" goes back to Lefschetz's work. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Linear_topology
Transforming three-phase electrical quantities into two-phase quantities is a common practice for simplifying the analysis of three-phase electrical circuits. Polyphase a.c. machines can be represented by an equivalent two-phase model provided the rotating polyphase windings on the rotor and the stationary polyphase windings on the stator can be expressed as fictitious two-axis coils. The process of replacing one set of variables by another, related set of variables is called winding transformation, or simply transformation or linear transformation. The term linear transformation means that the transformation from the old set of variables to the new set, and vice versa, is governed by linear equations. [ 1 ] The equations relating the old and new variables are called transformation equations and have the following general form: The transformation matrix is the matrix containing the coefficients that relate the new and old variables. Note that the second transformation matrix in the above-mentioned general form is the inverse of the first. The transformation matrix should account for power invariance between the two frames of reference; if power invariance is not maintained, torque must be calculated from the original machine variables only. Linear transformation in rotating machines is generally carried out to obtain a new set of equations governing the machine model that are fewer in number and less complex than the original ones. When referred to the new frame of reference, performance analysis of the machine becomes much simpler and faster: all machine quantities such as voltage, current, power, torque and speed can be solved for in the transformed model with less labour, without losing the original machine properties. The most striking feature of the transformation, which accounts for its popularity, is that the time-varying inductances in the voltage and current equations of the machine are eliminated. The two most widely used transformation methods are the dqo (or qdo, odq, or simply d-q) transformation and the αβγ (or α-β) transformation. In the d-q transformation, the three-phase quantities of the machine in the abc reference frame are referred to the d-q reference frame. The transformation equation has the general form [F dqo ] = [K][F abc ], where K is the transformation matrix; for detail refer to Dqo transformation . The d-q reference frame may be stationary or rotating at a certain angular speed. Based on the speed of the reference frame there are four major types of reference frame. [ 2 ] For detail on the abc to αβ transformation refer to αβγ transform . The choice of reference frame is not restricted, but it is strongly influenced by the type of analysis to be performed, so as to expedite the solution of the system equations or to satisfy system constraints. The best-suited choices of reference frame for simulating an induction machine in various cases of analysis are listed hereunder. [ 3 ] It is worth noting that all three types of reference frame can be obtained from the arbitrary reference frame by simply changing ω; modelling in the arbitrary reference frame is therefore beneficial when a wide range of analyses is to be done. There are some restrictions in representing a rotating electrical machine by its d-q axis equivalent, as listed below.
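As a hedged example of a power-invariant transformation matrix, the sketch below uses one common form of the Clarke (abc to αβ0) matrix; scaling conventions differ between texts, so the particular matrix is an assumption. Because this matrix is orthogonal, the instantaneous power v·i is the same in the original and transformed frames.

```python
import numpy as np

K = np.sqrt(2 / 3) * np.array([
    [1.0, -0.5, -0.5],
    [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2],
    [1 / np.sqrt(2), 1 / np.sqrt(2), 1 / np.sqrt(2)],
])                                                 # power-invariant Clarke matrix (assumed form)

print(np.allclose(K @ K.T, np.eye(3)))             # orthogonal, so the inverse is the transpose

rng = np.random.default_rng(0)
v_abc, i_abc = rng.normal(size=3), rng.normal(size=3)
v_new, i_new = K @ v_abc, K @ i_abc
print(np.isclose(v_abc @ i_abc, v_new @ i_new))    # instantaneous power is unchanged
```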
https://en.wikipedia.org/wiki/Linear_transformation_in_rotating_electrical_machines
In mathematics , a linearised polynomial (or q -polynomial) is a polynomial for which the exponents of all the constituent monomials are powers of q and the coefficients come from some extension field of the finite field of order q . We write a typical example as L ( x ) = ∑ i = 0 n a i x q i , {\displaystyle L(x)=\sum _{i=0}^{n}a_{i}x^{q^{i}},} where each a i {\displaystyle a_{i}} is in F q m ( = GF ⁡ ( q m ) ) {\displaystyle F_{q^{m}}(=\operatorname {GF} (q^{m}))} for some fixed positive integer m {\displaystyle m} . This special class of polynomials is important from both a theoretical and an applications viewpoint. [ 1 ] The highly structured nature of their roots makes these roots easy to determine. In general, the product of two linearised polynomials will not be a linearized polynomial, but since the composition of two linearised polynomials results in a linearised polynomial, composition may be used as a replacement for multiplication and, for this reason, composition is often called symbolic multiplication in this setting. Notationally, if L 1 ( x ) and L 2 ( x ) are linearised polynomials we define L 1 ( x ) ⊗ L 2 ( x ) = L 1 ( L 2 ( x ) ) {\displaystyle L_{1}(x)\otimes L_{2}(x)=L_{1}(L_{2}(x))} when this point of view is being taken. The polynomials L ( x ) and l ( x ) = ∑ i = 0 n a i x i {\displaystyle l(x)=\sum _{i=0}^{n}a_{i}x^{i}} are q-associates (note: the exponents " q i " of L ( x ) have been replaced by " i " in l ( x )). More specifically, l ( x ) is called the conventional q-associate of L ( x ), and L ( x ) is the linearised q-associate of l ( x ). Linearised polynomials with coefficients in F q have additional properties which make it possible to define symbolic division, symbolic reducibility and symbolic factorization. Two important examples of this type of linearised polynomial are the Frobenius automorphism x ↦ x q {\displaystyle x\mapsto x^{q}} and the trace function Tr ⁡ ( x ) = ∑ i = 0 n − 1 x q i . {\textstyle \operatorname {Tr} (x)=\sum _{i=0}^{n-1}x^{q^{i}}.} In this special case it can be shown that, as an operation , symbolic multiplication is commutative , associative and distributes over ordinary addition. [ 3 ] Also, in this special case, we can define the operation of symbolic division . If L ( x ) and L 1 ( x ) are linearised polynomials over F q , we say that L 1 ( x ) symbolically divides L ( x ) if there exists a linearised polynomial L 2 ( x ) over F q for which: L ( x ) = L 1 ( x ) ⊗ L 2 ( x ) . {\displaystyle L(x)=L_{1}(x)\otimes L_{2}(x).} If L 1 ( x ) and L 2 ( x ) are linearised polynomials over F q with conventional q -associates l 1 ( x ) and l 2 ( x ) respectively, then L 1 ( x ) symbolically divides L 2 ( x ) if and only if l 1 ( x ) divides l 2 ( x ). [ 4 ] Furthermore, L 1 ( x ) divides L 2 ( x ) in the ordinary sense in this case. [ 5 ] A linearised polynomial L ( x ) over F q of degree > 1 is symbolically irreducible over F q if the only symbolic decompositions L ( x ) = L 1 ( x ) ⊗ L 2 ( x ) , {\displaystyle L(x)=L_{1}(x)\otimes L_{2}(x),} with L i over F q are those for which one of the factors has degree 1. Note that a symbolically irreducible polynomial is always reducible in the ordinary sense since any linearised polynomial of degree > 1 has the nontrivial factor x . A linearised polynomial L ( x ) over F q is symbolically irreducible if and only if its conventional q -associate l ( x ) is irreducible over F q . 
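A small sketch of these ideas for q = 2, with all coefficients in F_2 (my own illustration): a polynomial with 0/1 coefficients is stored as the set of its exponents, the symbolic product L1 ⊗ L2 = L1(L2(x)) is computed using the Frobenius map (raising to a power of 2 just scales every exponent), and the result is checked to have the ordinary product l1·l2 as its conventional 2-associate.

```python
def gf2_mul(f, g):                 # ordinary product in F_2[x]; polynomials are sets of exponents
    out = set()
    for a in f:
        out ^= {a + b for b in g}  # symmetric difference = adding coefficients mod 2
    return out

def sym_mul(L1, L2):               # symbolic product L1 ⊗ L2 = L1(L2(x)) for 2-polynomials
    out = set()
    for e1 in L1:                  # e1 is a power of 2, so raising L2 to it is Frobenius,
        out ^= {e1 * e2 for e2 in L2}   # which simply multiplies every exponent by e1
    return out

def linearised(assoc):             # 2-associate {i : a_i = 1}  ->  2-polynomial {2^i}
    return {2 ** i for i in assoc}

l1, l2 = {0, 1}, {1, 2}                    # 2-associates x + 1 and x^2 + x
L1, L2 = linearised(l1), linearised(l2)    # the 2-polynomials x^2 + x and x^4 + x^2
print(sym_mul(L1, L2) == linearised(gf2_mul(l1, l2)))   # True: composition matches l1 * l2
```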
Every q -polynomial L ( x ) over F q of degree > 1 has a symbolic factorization into symbolically irreducible polynomials over F q and this factorization is essentially unique (up to rearranging factors and multiplying by nonzero elements of F q .) For example, [ 6 ] consider the 2-polynomial L ( x ) = x 16 + x 8 + x 2 + x over F 2 and its conventional 2-associate l ( x ) = x 4 + x 3 + x + 1. The factorization into irreducibles of l ( x ) = ( x 2 + x + 1)( x + 1) 2 in F 2 [ x ], gives the symbolic factorization L ( x ) = ( x 4 + x 2 + x ) ⊗ ( x 2 + x ) ⊗ ( x 2 + x ) . {\displaystyle L(x)=(x^{4}+x^{2}+x)\otimes (x^{2}+x)\otimes (x^{2}+x).} Let L be a linearised polynomial over F q n {\displaystyle F_{q^{n}}} . A polynomial of the form A ( x ) = L ( x ) − α for α ∈ F q n , {\displaystyle A(x)=L(x)-\alpha {\text{ for }}\alpha \in F_{q^{n}},} is an affine polynomial over F q n {\displaystyle F_{q^{n}}} . Theorem: If A is a nonzero affine polynomial over F q n {\displaystyle F_{q^{n}}} with all of its roots lying in the field F q s {\displaystyle F_{q^{s}}} an extension field of F q n {\displaystyle F_{q^{n}}} , then each root of A has the same multiplicity, which is either 1, or a positive power of q . [ 7 ]
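The quoted factorization can be verified in a few lines. As in the sketch above, a 2-polynomial with 0/1 coefficients is stored as the set of its exponents, and composition scales the exponents of the right-hand factor by each power of 2 appearing in the left-hand factor.

```python
def sym_mul(L1, L2):                   # L1 ⊗ L2 = L1(L2(x)) over F_2, exponent-set representation
    out = set()
    for e1 in L1:
        out ^= {e1 * e2 for e2 in L2}  # XOR of sets = adding coefficients mod 2
    return out

f1 = {4, 2, 1}                         # x^4 + x^2 + x
f2 = {2, 1}                            # x^2 + x
L = sym_mul(sym_mul(f1, f2), f2)
print(L == {16, 8, 2, 1})              # True: L(x) = x^16 + x^8 + x^2 + x
```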
https://en.wikipedia.org/wiki/Linearised_polynomial
In mathematics, the term linear is used in two distinct senses for two different properties: An example of a linear function is the function defined by f ( x ) = ( a x , b x ) {\displaystyle f(x)=(ax,bx)} that maps the real line to a line in the Euclidean plane R 2 that passes through the origin. An example of a linear polynomial in the variables X , {\displaystyle X,} Y {\displaystyle Y} and Z {\displaystyle Z} is a X + b Y + c Z + d . {\displaystyle aX+bY+cZ+d.} Linearity of a mapping is closely related to proportionality . Examples in physics include the linear relationship of voltage and current in an electrical conductor ( Ohm's law ), and the relationship of mass and weight . By contrast, more complicated relationships, such as between velocity and kinetic energy , are nonlinear . Generalized for functions in more than one dimension , linearity means the property of a function of being compatible with addition and scaling , also known as the superposition principle . Linearity of a polynomial means that its degree is less than two. The use of the term for polynomials stems from the fact that the graph of a polynomial in one variable is a straight line . In the term " linear equation ", the word refers to the linearity of the polynomials involved. Because a function such as f ( x ) = a x + b {\displaystyle f(x)=ax+b} is defined by a linear polynomial in its argument, it is sometimes also referred to as being a "linear function", and the relationship between the argument and the function value may be referred to as a "linear relationship". This is potentially confusing, but usually the intended meaning will be clear from the context. The word linear comes from Latin linearis , "pertaining to or resembling a line". In mathematics, a linear map or linear function f ( x ) is a function that satisfies the two properties: [ 1 ] These properties are known as the superposition principle . In this definition, x is not necessarily a real number , but can in general be an element of any vector space . A more special definition of linear function , not coinciding with the definition of linear map, is used in elementary mathematics (see below). Additivity alone implies homogeneity for rational α, since f ( x + x ) = f ( x ) + f ( x ) {\displaystyle f(x+x)=f(x)+f(x)} implies f ( n x ) = n f ( x ) {\displaystyle f(nx)=nf(x)} for any natural number n by mathematical induction , and then n f ( x ) = f ( n x ) = f ( m n m x ) = m f ( n m x ) {\displaystyle nf(x)=f(nx)=f(m{\tfrac {n}{m}}x)=mf({\tfrac {n}{m}}x)} implies f ( n m x ) = n m f ( x ) {\displaystyle f({\tfrac {n}{m}}x)={\tfrac {n}{m}}f(x)} . The density of the rational numbers in the reals implies that any additive continuous function is homogeneous for any real number α, and is therefore linear. The concept of linearity can be extended to linear operators . Important examples of linear operators include the derivative considered as a differential operator , and other operators constructed from it, such as del and the Laplacian . When a differential equation can be expressed in linear form, it can generally be solved by breaking the equation up into smaller pieces, solving each of those pieces, and summing the solutions. In a different usage to the above definition, a polynomial of degree 1 is said to be linear, because the graph of a function of that form is a straight line. 
[ 2 ] Over the reals, a simple example of a linear equation is given by: where m is often called the slope or gradient , and b the y-intercept , which gives the point of intersection between the graph of the function and the y -axis. Note that this usage of the term linear is not the same as in the section above, because linear polynomials over the real numbers do not in general satisfy either additivity or homogeneity. In fact, they do so if and only if the constant term – b in the example – equals 0. If b ≠ 0 , the function is called an affine function (see in greater generality affine transformation ). Linear algebra is the branch of mathematics concerned with systems of linear equations. In Boolean algebra , a linear function is a function f {\displaystyle f} for which there exist a 0 , a 1 , … , a n ∈ { 0 , 1 } {\displaystyle a_{0},a_{1},\ldots ,a_{n}\in \{0,1\}} such that Note that if a 0 = 1 {\displaystyle a_{0}=1} , the above function is considered affine in linear algebra (i.e. not linear). A Boolean function is linear if one of the following holds for the function's truth table : Another way to express this is that each variable always makes a difference in the truth value of the operation or it never makes a difference. Negation , Logical biconditional , exclusive or , tautology , and contradiction are linear functions. In physics , linearity is a property of the differential equations governing many systems; for instance, the Maxwell equations or the diffusion equation . [ 3 ] Linearity of a homogenous differential equation means that if two functions f and g are solutions of the equation, then any linear combination af + bg is, too. In instrumentation, linearity means that a given change in an input variable gives the same change in the output of the measurement apparatus: this is highly desirable in scientific work. In general, instruments are close to linear over a certain range, and most useful within that range. In contrast, human senses are highly nonlinear: for instance, the brain completely ignores incoming light unless it exceeds a certain absolute threshold number of photons. Linear motion traces a straight line trajectory. In electronics , the linear operating region of a device, for example a transistor , is where an output dependent variable (such as the transistor collector current ) is directly proportional to an input dependent variable (such as the base current). This ensures that an analog output is an accurate representation of an input, typically with higher amplitude (amplified). A typical example of linear equipment is a high fidelity audio amplifier , which must amplify a signal without changing its waveform. Others are linear filters , and linear amplifiers in general. In most scientific and technological , as distinct from mathematical, applications, something may be described as linear if the characteristic is approximately but not exactly a straight line; and linearity may be valid only within a certain operating region—for example, a high-fidelity amplifier may distort a small signal, but sufficiently little to be acceptable (acceptable but imperfect linearity); and may distort very badly if the input exceeds a certain value. [ 4 ] For an electronic device (or other physical device) that converts a quantity to another quantity, Bertram S. Kolts writes: [ 5 ] [ 6 ] There are three basic definitions for integral linearity in common use: independent linearity, zero-based linearity, and terminal, or end-point, linearity. 
In each case, linearity defines how well the device's actual performance across a specified operating range approximates a straight line. Linearity is usually measured as a deviation, or non-linearity, from an ideal straight line, and it is typically expressed in percent of full scale or in ppm (parts per million) of full scale. Typically, the straight line is obtained by performing a least-squares fit of the data. The three definitions vary in the manner in which the straight line is positioned relative to the actual device's performance. Also, all three of these definitions ignore any gain or offset errors that may be present in the actual device's performance characteristics.
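A minimal sketch, on made-up data, of quantifying non-linearity as the worst-case deviation from a least-squares straight-line fit, expressed as a percentage of full scale (in the spirit of the independent-linearity definition above).

```python
import numpy as np

x = np.linspace(0, 10, 11)              # input quantity (made-up calibration points)
y = 2.0 * x + 0.05 * x**2               # slightly nonlinear device response (made up)

slope, intercept = np.polyfit(x, y, 1)  # best-fit straight line (least squares)
deviation = y - (slope * x + intercept)
full_scale = y.max() - y.min()
print(f"non-linearity ~ {100 * np.abs(deviation).max() / full_scale:.2f}% of full scale")
```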
https://en.wikipedia.org/wiki/Linearity
In calculus , the derivative of any linear combination of functions equals the same linear combination of the derivatives of the functions; [ 1 ] this property is known as linearity of differentiation , the rule of linearity , [ 2 ] or the superposition rule for differentiation. [ 3 ] It is a fundamental property of the derivative that encapsulates in a single rule two simpler rules of differentiation, the sum rule (the derivative of the sum of two functions is the sum of the derivatives) and the constant factor rule (the derivative of a constant multiple of a function is the same constant multiple of the derivative). [ 4 ] [ 5 ] Thus it can be said that differentiation is linear , or the differential operator is a linear operator. [ 6 ] Let f and g be functions, with α and β constants. Now consider By the sum rule in differentiation , this is and by the constant factor rule in differentiation , this reduces to Therefore, Omitting the brackets , this is often written as: We can prove the entire linearity principle at once, or, we can prove the individual steps (of constant factor and adding) individually. Here, both will be shown. Proving linearity directly also proves the constant factor rule, the sum rule, and the difference rule as special cases. The sum rule is obtained by setting both constant coefficients to 1 {\displaystyle 1} . The difference rule is obtained by setting the first constant coefficient to 1 {\displaystyle 1} and the second constant coefficient to − 1 {\displaystyle -1} . The constant factor rule is obtained by setting either the second constant coefficient or the second function to 0 {\displaystyle 0} . (From a technical standpoint, the domain of the second function must also be considered - one way to avoid issues is setting the second function equal to the first function and the second constant coefficient equal to 0 {\displaystyle 0} . One could also define both the second constant coefficient and the second function to be 0, where the domain of the second function is a superset of the first function, among other possibilities.) On the contrary, if we first prove the constant factor rule and the sum rule, we can prove linearity and the difference rule. Proving linearity is done by defining the first and second functions as being two other functions being multiplied by constant coefficients. Then, as shown in the derivation from the previous section, we can first use the sum law while differentiation, and then use the constant factor rule, which will reach our conclusion for linearity. In order to prove the difference rule, the second function can be redefined as another function multiplied by the constant coefficient of − 1 {\displaystyle -1} . This would, when simplified, give us the difference rule for differentiation. In the proofs/derivations below, [ 7 ] [ 8 ] the coefficients a , b {\displaystyle a,b} are used; they correspond to the coefficients α , β {\displaystyle \alpha ,\beta } above. Let a , b ∈ R {\displaystyle a,b\in \mathbb {R} } . Let f , g {\displaystyle f,g} be functions. Let j {\displaystyle j} be a function, where j {\displaystyle j} is defined only where f {\displaystyle f} and g {\displaystyle g} are both defined. (In other words, the domain of j {\displaystyle j} is the intersection of the domains of f {\displaystyle f} and g {\displaystyle g} .) Let x {\displaystyle x} be in the domain of j {\displaystyle j} . Let j ( x ) = a f ( x ) + b g ( x ) {\displaystyle j(x)=af(x)+bg(x)} . 
We want to prove that j ′ ( x ) = a f ′ ( x ) + b g ′ ( x ) {\displaystyle j^{\prime }(x)=af^{\prime }(x)+bg^{\prime }(x)} . By definition, we can see that j ′ ( x ) = lim h → 0 j ( x + h ) − j ( x ) h = lim h → 0 ( a f ( x + h ) + b g ( x + h ) ) − ( a f ( x ) + b g ( x ) ) h = lim h → 0 ( a f ( x + h ) − f ( x ) h + b g ( x + h ) − g ( x ) h ) {\displaystyle {\begin{aligned}j^{\prime }(x)&=\lim _{h\rightarrow 0}{\frac {j(x+h)-j(x)}{h}}\\&=\lim _{h\rightarrow 0}{\frac {\left(af(x+h)+bg(x+h)\right)-\left(af(x)+bg(x)\right)}{h}}\\&=\lim _{h\rightarrow 0}\left(a{\frac {f(x+h)-f(x)}{h}}+b{\frac {g(x+h)-g(x)}{h}}\right)\\\end{aligned}}} In order to use the limits law for the sum of limits, we need to know that lim h → 0 a f ( x + h ) − f ( x ) h {\textstyle \lim _{h\to 0}a{\frac {f(x+h)-f(x)}{h}}} and lim h → 0 b g ( x + h ) − g ( x ) h {\textstyle \lim _{h\to 0}b{\frac {g(x+h)-g(x)}{h}}} both individually exist. For these smaller limits, we need to know that lim h → 0 f ( x + h ) − f ( x ) h {\textstyle \lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}} and lim h → 0 g ( x + h ) − g ( x ) h {\textstyle \lim _{h\to 0}{\frac {g(x+h)-g(x)}{h}}} both individually exist to use the coefficient law for limits. By definition, f ′ ( x ) = lim h → 0 f ( x + h ) − f ( x ) h {\textstyle f^{\prime }(x)=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}} and g ′ ( x ) = lim h → 0 g ( x + h ) − g ( x ) h {\textstyle g^{\prime }(x)=\lim _{h\to 0}{\frac {g(x+h)-g(x)}{h}}} . So, if we know that f ′ ( x ) {\displaystyle f^{\prime }(x)} and g ′ ( x ) {\displaystyle g^{\prime }(x)} both exist, we will know that lim h → 0 f ( x + h ) − f ( x ) h {\textstyle \lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}} and lim h → 0 g ( x + h ) − g ( x ) h {\textstyle \lim _{h\to 0}{\frac {g(x+h)-g(x)}{h}}} both individually exist. This allows us to use the coefficient law for limits to write lim h → 0 a f ( x + h ) − f ( x ) h = a lim h → 0 f ( x + h ) − f ( x ) h {\displaystyle \lim _{h\to 0}a{\frac {f(x+h)-f(x)}{h}}=a\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}} and lim h → 0 b g ( x + h ) − g ( x ) h = b lim h → 0 g ( x + h ) − g ( x ) h . {\displaystyle \lim _{h\to 0}b{\frac {g(x+h)-g(x)}{h}}=b\lim _{h\to 0}{\frac {g(x+h)-g(x)}{h}}.} With this, we can go back to apply the limit law for the sum of limits, since we know that lim h → 0 a f ( x + h ) − f ( x ) h {\textstyle \lim _{h\rightarrow 0}a{\frac {f(x+h)-f(x)}{h}}} and lim h → 0 b g ( x + h ) − g ( x ) h {\textstyle \lim _{h\rightarrow 0}b{\frac {g(x+h)-g(x)}{h}}} both individually exist. From here, we can directly go back to the derivative we were working on. j ′ ( x ) = lim h → 0 ( a f ( x + h ) − f ( x ) h + b g ( x + h ) − g ( x ) h ) = lim h → 0 ( a f ( x + h ) − f ( x ) h ) + lim h → 0 ( b g ( x + h ) − g ( x ) h ) = a lim h → 0 ( f ( x + h ) − f ( x ) h ) + b lim h → 0 ( g ( x + h ) − g ( x ) h ) = a f ′ ( x ) + b g ′ ( x ) {\displaystyle {\begin{aligned}j^{\prime }(x)&=\lim _{h\rightarrow 0}\left(a{\frac {f(x+h)-f(x)}{h}}+b{\frac {g(x+h)-g(x)}{h}}\right)\\&=\lim _{h\rightarrow 0}\left(a{\frac {f(x+h)-f(x)}{h}}\right)+\lim _{h\rightarrow 0}\left(b{\frac {g(x+h)-g(x)}{h}}\right)\\&=a\lim _{h\rightarrow 0}\left({\frac {f(x+h)-f(x)}{h}}\right)+b\lim _{h\rightarrow 0}\left({\frac {g(x+h)-g(x)}{h}}\right)\\&=af^{\prime }(x)+bg^{\prime }(x)\end{aligned}}} Finally, we have shown what we claimed in the beginning: j ′ ( x ) = a f ′ ( x ) + b g ′ ( x ) {\displaystyle j^{\prime }(x)=af^{\prime }(x)+bg^{\prime }(x)} . Let f , g {\displaystyle f,g} be functions. 
Let j {\displaystyle j} be a function, where j {\displaystyle j} is defined only where f {\displaystyle f} and g {\displaystyle g} are both defined. (In other words, the domain of j {\displaystyle j} is the intersection of the domains of f {\displaystyle f} and g {\displaystyle g} .) Let x {\displaystyle x} be in the domain of j {\displaystyle j} . Let j ( x ) = f ( x ) + g ( x ) {\displaystyle j(x)=f(x)+g(x)} . We want to prove that j ′ ( x ) = f ′ ( x ) + g ′ ( x ) {\displaystyle j^{\prime }(x)=f^{\prime }(x)+g^{\prime }(x)} . By definition, we can see that j ′ ( x ) = lim h → 0 j ( x + h ) − j ( x ) h = lim h → 0 ( f ( x + h ) + g ( x + h ) ) − ( f ( x ) + g ( x ) ) h = lim h → 0 ( f ( x + h ) − f ( x ) h + g ( x + h ) − g ( x ) h ) {\displaystyle {\begin{aligned}j^{\prime }(x)&=\lim _{h\rightarrow 0}{\frac {j(x+h)-j(x)}{h}}\\&=\lim _{h\rightarrow 0}{\frac {\left(f(x+h)+g(x+h)\right)-\left(f(x)+g(x)\right)}{h}}\\&=\lim _{h\rightarrow 0}\left({\frac {f(x+h)-f(x)}{h}}+{\frac {g(x+h)-g(x)}{h}}\right)\\\end{aligned}}} In order to use the law for the sum of limits here, we need to show that the individual limits, lim h → 0 f ( x + h ) − f ( x ) h {\textstyle \lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}} and lim h → 0 g ( x + h ) − g ( x ) h {\textstyle \lim _{h\rightarrow 0}{\frac {g(x+h)-g(x)}{h}}} both exist. By definition, f ′ ( x ) = lim h → 0 f ( x + h ) − f ( x ) h {\textstyle f^{\prime }(x)=\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}} and g ′ ( x ) = lim h → 0 g ( x + h ) − g ( x ) h {\textstyle g^{\prime }(x)=\lim _{h\rightarrow 0}{\frac {g(x+h)-g(x)}{h}}} , so the limits exist whenever the derivatives f ′ ( x ) {\displaystyle f^{\prime }(x)} and g ′ ( x ) {\displaystyle g^{\prime }(x)} exist. So, assuming that the derivatives exist, we can continue the above derivation j ′ ( x ) = lim h → 0 ( f ( x + h ) − f ( x ) h + g ( x + h ) − g ( x ) h ) = lim h → 0 f ( x + h ) − f ( x ) h + lim h → 0 g ( x + h ) − g ( x ) h = f ′ ( x ) + g ′ ( x ) {\displaystyle {\begin{aligned}j^{\prime }(x)&=\lim _{h\rightarrow 0}\left({\frac {f(x+h)-f(x)}{h}}+{\frac {g(x+h)-g(x)}{h}}\right)\\&=\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}+\lim _{h\rightarrow 0}{\frac {g(x+h)-g(x)}{h}}\\&=f^{\prime }(x)+g^{\prime }(x)\end{aligned}}} Thus, we have shown what we wanted to show, that: j ′ ( x ) = f ′ ( x ) + g ′ ( x ) {\displaystyle j^{\prime }(x)=f^{\prime }(x)+g^{\prime }(x)} . Let f , g {\displaystyle f,g} be functions. Let j {\displaystyle j} be a function, where j {\displaystyle j} is defined only where f {\displaystyle f} and g {\displaystyle g} are both defined. (In other words, the domain of j {\displaystyle j} is the intersection of the domains of f {\displaystyle f} and g {\displaystyle g} .) Let x {\displaystyle x} be in the domain of j {\displaystyle j} . Let j ( x ) = f ( x ) − g ( x ) {\displaystyle j(x)=f(x)-g(x)} . We want to prove that j ′ ( x ) = f ′ ( x ) − g ′ ( x ) {\displaystyle j^{\prime }(x)=f^{\prime }(x)-g^{\prime }(x)} . 
By definition, we can see that: j ′ ( x ) = lim h → 0 j ( x + h ) − j ( x ) h = lim h → 0 ( f ( x + h ) − ( g ( x + h ) ) − ( f ( x ) − g ( x ) ) h = lim h → 0 ( f ( x + h ) − f ( x ) h − g ( x + h ) − g ( x ) h ) {\displaystyle {\begin{aligned}j^{\prime }(x)&=\lim _{h\rightarrow 0}{\frac {j(x+h)-j(x)}{h}}\\&=\lim _{h\rightarrow 0}{\frac {\left(f(x+h)-(g(x+h)\right)-\left(f(x)-g(x)\right)}{h}}\\&=\lim _{h\rightarrow 0}\left({\frac {f(x+h)-f(x)}{h}}-{\frac {g(x+h)-g(x)}{h}}\right)\\\end{aligned}}} In order to use the law for the difference of limits here, we need to show that the individual limits, lim h → 0 f ( x + h ) − f ( x ) h {\textstyle \lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}} and lim h → 0 g ( x + h ) − g ( x ) h {\textstyle \lim _{h\rightarrow 0}{\frac {g(x+h)-g(x)}{h}}} both exist. By definition, f ′ ( x ) = lim h → 0 f ( x + h ) − f ( x ) h {\textstyle f^{\prime }(x)=\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}} and that g ′ ( x ) = lim h → 0 g ( x + h ) − g ( x ) h {\textstyle g^{\prime }(x)=\lim _{h\rightarrow 0}{\frac {g(x+h)-g(x)}{h}}} , so these limits exist whenever the derivatives f ′ ( x ) {\displaystyle f^{\prime }(x)} and g ′ ( x ) {\displaystyle g^{\prime }(x)} exist. So, assuming that the derivatives exist, we can continue the above derivation j ′ ( x ) = lim h → 0 ( f ( x + h ) − f ( x ) h − g ( x + h ) − g ( x ) h ) = lim h → 0 f ( x + h ) − f ( x ) h − lim h → 0 g ( x + h ) − g ( x ) h = f ′ ( x ) − g ′ ( x ) {\displaystyle {\begin{aligned}j^{\prime }(x)&=\lim _{h\rightarrow 0}\left({\frac {f(x+h)-f(x)}{h}}-{\frac {g(x+h)-g(x)}{h}}\right)\\&=\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}-\lim _{h\rightarrow 0}{\frac {g(x+h)-g(x)}{h}}\\&=f^{\prime }(x)-g^{\prime }(x)\end{aligned}}} Thus, we have shown what we wanted to show, that: j ′ ( x ) = f ′ ( x ) − g ′ ( x ) {\displaystyle j^{\prime }(x)=f^{\prime }(x)-g^{\prime }(x)} . Let f {\displaystyle f} be a function. Let a ∈ R {\displaystyle a\in \mathbb {R} } ; a {\displaystyle a} will be the constant coefficient. Let j {\displaystyle j} be a function, where j is defined only where f {\displaystyle f} is defined. (In other words, the domain of j {\displaystyle j} is equal to the domain of f {\displaystyle f} .) Let x {\displaystyle x} be in the domain of j {\displaystyle j} . Let j ( x ) = a f ( x ) {\displaystyle j(x)=af(x)} . We want to prove that j ′ ( x ) = a f ′ ( x ) {\displaystyle j^{\prime }(x)=af^{\prime }(x)} . By definition, we can see that: j ′ ( x ) = lim h → 0 j ( x + h ) − j ( x ) h = lim h → 0 a f ( x + h ) − a f ( x ) h = lim h → 0 a f ( x + h ) − f ( x ) h {\displaystyle {\begin{aligned}j^{\prime }(x)&=\lim _{h\rightarrow 0}{\frac {j(x+h)-j(x)}{h}}\\&=\lim _{h\rightarrow 0}{\frac {af(x+h)-af(x)}{h}}\\&=\lim _{h\rightarrow 0}a{\frac {f(x+h)-f(x)}{h}}\\\end{aligned}}} Now, in order to use a limit law for constant coefficients to show that lim h → 0 a f ( x + h ) − f ( x ) h = a lim h → 0 f ( x + h ) − f ( x ) h {\displaystyle \lim _{h\rightarrow 0}a{\frac {f(x+h)-f(x)}{h}}=a\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}} we need to show that lim h → 0 f ( x + h ) − f ( x ) h {\textstyle \lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}} exists. However, f ′ ( x ) = lim h → 0 f ( x + h ) − f ( x ) h {\textstyle f^{\prime }(x)=\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}} , by the definition of the derivative. So, if f ′ ( x ) {\displaystyle f^{\prime }(x)} exists, then lim h → 0 f ( x + h ) − f ( x ) h {\textstyle \lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}} exists. 
Thus, if we assume that f ′ ( x ) {\displaystyle f^{\prime }(x)} exists, we can use the limit law and continue our proof. j ′ ( x ) = lim h → 0 a f ( x + h ) − f ( x ) h = a lim h → 0 f ( x + h ) − f ( x ) h = a f ′ ( x ) {\displaystyle {\begin{aligned}j^{\prime }(x)&=\lim _{h\rightarrow 0}a{\frac {f(x+h)-f(x)}{h}}\\&=a\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}\\&=af^{\prime }(x)\\\end{aligned}}} Thus, we have proven that when j ( x ) = a f ( x ) {\displaystyle j(x)=af(x)} , we have j ′ ( x ) = a f ′ ( x ) {\displaystyle j^{\prime }(x)=af^{\prime }(x)} .
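The rule can also be checked numerically. The following sketch, assuming NumPy is available, compares a central finite-difference estimate of ( a f + b g ) ′ with a f ′ + b g ′ ; the sample functions, constants, and evaluation point are purely illustrative.

```python
import numpy as np

def numerical_derivative(func, x, h=1e-6):
    """Central finite-difference estimate of func'(x)."""
    return (func(x + h) - func(x - h)) / (2 * h)

# Illustrative choices only; any differentiable f, g and constants a, b would do.
f, g = np.sin, np.exp
a, b = 2.0, -3.0
j = lambda x: a * f(x) + b * g(x)

x0 = 0.7
lhs = numerical_derivative(j, x0)                                   # (a f + b g)'(x0)
rhs = a * numerical_derivative(f, x0) + b * numerical_derivative(g, x0)
print(lhs, rhs)   # the two values agree up to the finite-difference error
```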
https://en.wikipedia.org/wiki/Linearity_of_differentiation
The linearized augmented-plane-wave method ( LAPW ) is an implementation of Kohn-Sham density functional theory (DFT) adapted to periodic materials. [ 1 ] [ 2 ] [ 3 ] It typically goes along with the treatment of both valence and core electrons on the same footing in the context of DFT and the treatment of the full potential and charge density without any shape approximation. This is often referred to as the all-electron full-potential linearized augmented-plane-wave method ( FLAPW ). [ 4 ] It does not rely on the pseudopotential approximation and employs a systematically extendable basis set . These features make it one of the most precise implementations of DFT, applicable to all crystalline materials, regardless of their chemical composition. It can be used as a reference for evaluating other approaches. [ 5 ] [ 6 ] At the core of density functional theory the Hohenberg-Kohn theorems state that every observable of an interacting many-electron system is a functional of its ground-state charge density and that this density minimizes the total energy of the system. [ 7 ] The theorems do not answer the question how to obtain such a ground-state density. A recipe for this is given by Walter Kohn and Lu Jeu Sham who introduce an auxiliary system of noninteracting particles constructed such that it shares the same ground-state density with the interacting particle system. [ 8 ] The Schrödinger-like equations describing this system are the Kohn-Sham equations . With these equations one can calculate the eigenstates of the system and with these the density. One contribution to the Kohn-Sham equations is the effective potential which itself depends on the density. As the ground-state density is not known before a Kohn-Sham DFT calculation and it is an input as well as an output of such a calculation, the Kohn-Sham equations are solved in an iterative procedure together with a recalculation of the density and the potential in every iteration. It starts with an initial guess for the density and after every iteration a new density is constructed as a mixture from the output density and previous densities. The calculation finishes as soon as a fixpoint of a self-consistent density is found, i.e., input and output density are identical. This is the ground-state density. A method implementing Kohn-Sham DFT has to realize these different steps of the sketched iterative algorithm. The LAPW method is based on a partitioning of the material's unit cell into non-overlapping but nearly touching so-called muffin-tin (MT) spheres, centered at the atomic nuclei, and an interstitial region (IR) in between the spheres. The physical description and the representation of the Kohn-Sham orbitals, the charge density, and the potential is adapted to this partitioning. In the following this method design and the extraction of quantities from it are sketched in more detail. Variations and extensions are indicated. The central aspect of practical DFT implementations is the question how to solve the Kohn-Sham equations with the single-electron kinetic energy operator T ^ s {\displaystyle {\hat {T}}_{\text{s}}} , the effective potential V eff ( r ) {\displaystyle V_{\text{eff}}(\mathbf {r} )} , Kohn-Sham states Ψ j k ( r ) {\displaystyle \Psi _{j}^{\mathbf {k} }(\mathbf {r} )} , energy eigenvalues ϵ j k {\displaystyle \epsilon _{j}^{\mathbf {k} }} , and position and Bloch vectors r {\displaystyle \mathbf {r} } and k {\displaystyle \mathbf {k} } . 
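The iterative self-consistency procedure described above can be summarized schematically. The sketch below is not taken from any particular DFT code: the functions solve_kohn_sham and compute_density are placeholders for the method-specific steps, and the simple linear mixing is only one of many possible mixing schemes.

```python
import numpy as np

def scf_cycle(initial_density, solve_kohn_sham, compute_density,
              mixing=0.3, tol=1e-6, max_iterations=100):
    """Schematic Kohn-Sham self-consistency loop with simple linear mixing.

    solve_kohn_sham(density) is expected to build the effective potential from the
    density and return (eigenvalues, orbitals); compute_density(eigenvalues, orbitals)
    is expected to occupy the lowest states and return the new density.
    """
    density = initial_density
    for _ in range(max_iterations):
        eigenvalues, orbitals = solve_kohn_sham(density)
        new_density = compute_density(eigenvalues, orbitals)
        if np.linalg.norm(new_density - density) < tol:   # fixed point: self-consistency reached
            return new_density, eigenvalues
        density = (1 - mixing) * density + mixing * new_density  # mix output with previous densities
    raise RuntimeError("self-consistency not reached within max_iterations")
```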
While in abstract evaluations of Kohn-Sham DFT the model for the exchange-correlation contribution to the effective potential is the only fundamental approximation, in practice solving the Kohn-Sham equations is accompanied by the introduction of many additional approximations. These include the incompleteness of the basis set used to represent the Kohn-Sham orbitals, the choice of whether to use the pseudopotential approximation or to consider all electrons in the DFT scheme, the treatment of relativistic effects, and possible shape approximations to the potential. Beyond the partitioning of the unit cell, for the LAPW method the central design aspect is the use of the LAPW basis set { ϕ k , G ( r ) } {\displaystyle \left\lbrace \phi _{\mathbf {k} ,\mathbf {G} }(\mathbf {r} )\right\rbrace } to represent the valence electron orbitals as where c j k , G {\displaystyle c_{j}^{\mathbf {k} ,\mathbf {G} }} are the expansion coefficients. The LAPW basis is designed to enable a precise representation of the orbitals and an accurate modelling of the physics in each region of the unit cell. Considering a unit cell of volume Ω {\displaystyle \Omega } covering atoms α {\displaystyle \alpha } at positions τ α {\displaystyle \mathbf {\tau } _{\alpha }} , an LAPW basis function is characterized by a reciprocal lattice vector G {\displaystyle \mathbf {G} } and the considered Bloch vector k {\displaystyle \mathbf {k} } . It is given as where r α = r − τ α {\displaystyle \mathbf {r} _{\alpha }=\mathbf {r} -\mathbf {\tau } _{\alpha }} is the position vector relative to the position of atom nucleus α {\displaystyle \alpha } . An LAPW basis function is thus a plane wave in the IR and a linear combination of the radial functions u l , α ( r α , E l , α ) {\displaystyle u_{l,\alpha }(r_{\alpha },E_{l,\alpha })} and u ˙ l , α ( r α , E l , α ) {\displaystyle {\dot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha })} multiplied by spherical harmonics Y l , m {\displaystyle Y_{l,m}} in each MT sphere. The radial function u l , α ( r α , E l , α ) {\displaystyle u_{l,\alpha }(r_{\alpha },E_{l,\alpha })} is hereby the solution of the Kohn-Sham Hamiltonian for the spherically averaged potential with regular behavior at the nucleus for the given energy parameter E l , α {\displaystyle E_{l,\alpha }} . Together with its energy derivative u ˙ l , α ( r α , E l , α ) {\displaystyle {\dot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha })} these augmentations of the plane wave in each MT sphere enable a representation of the Kohn-Sham orbitals at arbitrary eigenenergies linearized around the energy parameters. The coefficients a l , m k , G , α {\displaystyle a_{l,m}^{\mathbf {k} ,\mathbf {G} ,\alpha }} and b l , m k , G , α {\displaystyle b_{l,m}^{\mathbf {k} ,\mathbf {G} ,\alpha }} are automatically determined by enforcing the basis function to be continuously differentiable for the respective ( l , m ) {\displaystyle (l,m)} channel. The set of LAPW basis functions is defined by specifying a cutoff parameter K max = | k + G | max {\displaystyle K_{\text{max}}=|\mathbf {k} +\mathbf {G} |_{\text{max}}} . In each MT sphere, the expansion into spherical harmonics is limited to a maximum number of angular momenta l max , α ≈ K max R MT α {\displaystyle l_{{\text{max}},\alpha }\approx K_{\text{max}}R_{{\text{MT}}_{\alpha }}} , where R MT α {\displaystyle R_{{\text{MT}}_{\alpha }}} is the muffin-tin radius of atom α {\displaystyle \alpha } . 
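As a rough illustration of how the cutoff parameter controls the basis-set size, the following sketch, assuming NumPy and a simple cubic lattice, counts the reciprocal lattice vectors with | k + G | ≤ K max at the Γ point and evaluates the rule of thumb l max ≈ K max R MT . All numerical values are invented for the example and do not refer to any specific material or code.

```python
import numpy as np
from itertools import product

# Invented example parameters (atomic units): lattice constant, cutoff, muffin-tin radius.
a_lat, k_max, r_mt = 5.0, 4.0, 2.2
k_point = np.zeros(3)               # Bloch vector, here the Gamma point

b = 2 * np.pi / a_lat               # reciprocal lattice constant of the simple cubic cell
n = int(np.ceil(k_max / b)) + 1

# One LAPW basis function per reciprocal lattice vector G with |k + G| <= K_max.
n_basis = sum(
    1
    for i, j, k in product(range(-n, n + 1), repeat=3)
    if np.linalg.norm(k_point + b * np.array([i, j, k])) <= k_max
)
l_max = int(round(k_max * r_mt))    # rule of thumb l_max ~ K_max * R_MT
print(n_basis, "basis functions, l_max ~", l_max)
```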
The choice of this cutoff is connected to the decay of expansion coefficients for growing l {\displaystyle l} in the Rayleigh expansion of plane waves into spherical harmonics. While the LAPW basis functions are used to represent the valence states , core electron states, which are completely confined within a MT sphere, are calculated for the spherically averaged potential on radial grids, for each atom separately applying atomic boundary conditions. Semicore states, which are still localized but slightly extended beyond the MT sphere boundary, may either be treated as core electron states or as valence electron states. For the latter choice the linearized representation is not sufficient because the related eigenenergy is typically far away from the energy parameters. To resolve this problem the LAPW basis can be extended by additional basis functions in the respective MT sphere, so called local orbitals (LOs). [ 9 ] These are tailored to provide a precise representation of the semicore states. The plane-wave form of the basis functions in the interstitial region makes setting up the Hamiltonian matrix for that region simple. In the MT spheres this setup is also simple and computationally inexpensive for the kinetic energy and the spherically averaged potential, e.g., in the muffin-tin approximation . The simplicity hereby stems from the connection of the radial functions to the spherical Hamiltonian in the spheres H ^ sphr α {\displaystyle {\hat {H}}_{\text{sphr}}^{\alpha }} , i.e., H ^ sphr α | u l , α ( r α , E l , α ) ⟩ = E l , α | u l , α ( r α , E l , α ) ⟩ {\displaystyle {\hat {H}}_{\text{sphr}}^{\alpha }\left|u_{l,\alpha }(r_{\alpha },E_{l,\alpha })\right\rangle =E_{l,\alpha }\left|u_{l,\alpha }(r_{\alpha },E_{l,\alpha })\right\rangle } and H ^ sphr α | u ˙ l , α ( r α , E l , α ) ⟩ = E l , α | u ˙ l , α ( r α , E l , α ) ⟩ + | u l , α ( r α , E l , α ) ⟩ {\displaystyle {\hat {H}}_{\text{sphr}}^{\alpha }\left|{\dot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha })\right\rangle =E_{l,\alpha }\left|{\dot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha })\right\rangle +\left|u_{l,\alpha }(r_{\alpha },E_{l,\alpha })\right\rangle } . In comparison to the MT approximation, for the full-potential description (FLAPW) contributions from the non-spherical part of the potential are added to the Hamiltonian matrix in the MT spheres and in the IR contributions related to deviations from the constant potential. After the Hamiltonian matrix H G ′ , G k {\displaystyle H_{\mathbf {G'} ,\mathbf {G} }^{\mathbf {k} }} together with the overlap matrix S G ′ , G k = ⟨ ϕ k , G ′ | ϕ k , G ⟩ {\displaystyle S_{\mathbf {G'} ,\mathbf {G} }^{\mathbf {k} }=\left\langle \phi _{\mathbf {k} ,\mathbf {G'} }{\Big |}\phi _{\mathbf {k} ,\mathbf {G} }\right\rangle } is set up, the Kohn-Sham orbitals are obtained as eigenfunctions from the algebraic generalized dense Hermitian eigenvalue problem where ϵ j k {\displaystyle \epsilon _{j}^{\mathbf {k} }} is the energy eigenvalue of the j-th Kohn-Sham state at Bloch vector k {\displaystyle {\mathbf {k} }} and the state is given as indicated above by the expansion coefficients c j k , G {\displaystyle c_{j}^{\mathbf {k} ,\mathbf {G} }} . The considered degree of relativistic physics differs for core and valence electrons. The strong localization of core electrons due to the singularity of the effective potential at the atomic nucleus is connected to large kinetic energy contributions and thus a fully relativistic treatment is desirable and common. 
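Once the Hamiltonian and overlap matrices are set up, the generalized dense Hermitian eigenvalue problem mentioned above can be solved with standard linear-algebra routines. In the sketch below the matrices are random Hermitian stand-ins of toy size, not actual LAPW matrices; SciPy is assumed to be available.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 6                                # toy basis size; real LAPW matrices are much larger

# Random Hermitian stand-in for H and a Hermitian positive-definite stand-in for S.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = B @ B.conj().T + n * np.eye(n)

# Generalized eigenvalue problem H c = eps S c, as described in the text.
eps, c = eigh(H, S)
print(eps)                           # eigenvalues in ascending order
```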
For the determination of the radial functions u l , α ( r α , E l , α ) {\displaystyle u_{l,\alpha }(r_{\alpha },E_{l,\alpha })} and u ˙ l , α ( r α , E l , α ) {\displaystyle {\dot {u}}_{l,\alpha }(r_{\alpha },E_{l,\alpha })} the common approach is to make an approximation to the fully relativistic description. This may be the scalar-relativistic approximation [ 10 ] [ 11 ] (SRA) or similar approaches. [ 12 ] [ 13 ] The dominant effect neglected by these approximations is the spin-orbit coupling . As indicated above the construction of the Hamiltonian matrix within such an approximation is trivial. Spin-orbit coupling can additionally be included, though this leads to a more complex Hamiltonian matrix setup or a second variation scheme, [ 14 ] [ 15 ] connected to increased computational demands. In the interstitial region it is reasonable and common to describe the valence electrons without considering relativistic effects. After calculating the Kohn-Sham eigenfunctions, the next step is to construct the electron charge density by occupying the lowest energy eigenstates up to the Fermi level with electrons. The Fermi level itself is determined in this process by keeping charge neutrality in the unit cell. The resulting charge density ρ ( r ) {\displaystyle \rho (\mathbf {r} )} then has a region-specific form i.e., it is given as a plane-wave expansion in the interstitial region and as an expansion into radial functions times spherical harmonics in each MT sphere. The radial functions hereby are numerically given on a mesh. The representation of the effective potential follows the same scheme. In its construction a common approach is to employ Weinert's method for solving the Poisson equation . [ 16 ] It efficiently and accurately provides a solution of the Poisson equation without shape approximation for an arbitrary periodic charge density based on the concept of multipole potentials and the boundary value problem for a sphere. Because they are based on the same theoretical framework, different DFT implementations offer access to very similar sets of material properties. However, the variations in the implementations result in differences in the ease of extracting certain quantities and also in differences in their interpretation. In the following, these circumstances are sketched for some examples. The most basic quantity provided by DFT is the ground-state total energy of an investigated system. To avoid the calculation of derivatives of the eigenfunctions in its evaluation, the common implementation [ 17 ] replaces the expectation value of the kinetic energy operator by the sum of the band energies of occupied Kohn-Sham states minus the energy due to the effective potential. The force exerted on an atom, which is given by the change of the total energy due to an infinitesimal displacement, has two major contributions. The first contribution is due to the displacement of the potential. It is known as Hellmann-Feynman force . The other, computationally more elaborate contribution, is due to the related change in the atom-position-dependent basis functions. It is often called Pulay force [ 18 ] and requires a method-specific implementation. [ 19 ] [ 20 ] Beyond forces, similar method-specific implementations are also needed for further quantities derived from the total energy functional. For the LAPW method, formulations for the stress tensor [ 21 ] and for phonons [ 22 ] have been realized. 
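The determination of the Fermi level by charge neutrality, mentioned above in the construction of the charge density, can be sketched as a simple bisection. The eigenvalues, weights, smearing, and electron count below are invented toy data; real implementations sum over k points and use more elaborate occupation schemes.

```python
import numpy as np

def fermi_level(eigenvalues, weights, n_electrons, kT=0.01):
    """Bisection for the Fermi level so that the occupied charge equals n_electrons.

    Uses Fermi-Dirac occupations with 2 electrons per state (spin degeneracy);
    eigenvalues and weights are flat arrays over (k-point, band) pairs.
    """
    def total_charge(mu):
        occupations = 2.0 / (np.exp((eigenvalues - mu) / kT) + 1.0)
        return np.sum(weights * occupations)

    low, high = eigenvalues.min() - 1.0, eigenvalues.max() + 1.0
    for _ in range(80):
        mu = 0.5 * (low + high)
        if total_charge(mu) < n_electrons:
            low = mu
        else:
            high = mu
    return mu

# Toy data: four bands at a single k-point with weight 1, four valence electrons.
print(fermi_level(np.array([-0.5, -0.1, 0.3, 0.8]), np.ones(4), n_electrons=4.0))
```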
Independent of the actual size of an atom, evaluating atom-dependent quantities in LAPW is often interpreted as calculating the quantity in the respective MT sphere. This applies to quantities like charges at atoms, magnetic moments, or projections of the density of states or the band structure onto a certain orbital character at a given atom. Interpretations of such quantities that deviate from those used in experiments or in other DFT implementations may lead to differences when comparing results. As a side note, some atom-specific LAPW inputs also relate directly to the respective MT region. For example, in the DFT+U approach the Hubbard U only affects the MT sphere. [ 23 ] A strength of the LAPW approach is the inclusion of all electrons in the DFT calculation, which is crucial for the evaluation of certain quantities. Among these are hyperfine interaction parameters such as electric field gradients, whose calculation involves the evaluation of the curvature of the all-electron Coulomb potential near the nuclei. The prediction of such quantities with LAPW is very accurate. [ 24 ] Kohn-Sham DFT does not give direct access to all quantities one may be interested in. For example, most energy eigenvalues of the Kohn-Sham states are not directly related to the real interacting many-electron system. For the prediction of optical properties one therefore often uses DFT codes in combination with software implementing the GW approximation (GWA) to many-body perturbation theory and optionally the Bethe-Salpeter equation (BSE) to describe excitons . Such software has to be adapted to the representation used in the DFT implementation. Both the GWA and the BSE have been formulated in the LAPW context and several implementations of such tools are in use. [ 25 ] [ 26 ] [ 27 ] In other postprocessing situations it may be useful to project Kohn-Sham states onto Wannier functions . For the LAPW method such projections have also been implemented [ 28 ] [ 29 ] and are in common use. There are various software projects implementing the LAPW method and/or its variants. Examples of such codes include
https://en.wikipedia.org/wiki/Linearized_augmented-plane-wave_method
In mathematics , algebras A , B over a field k inside some field extension Ω {\displaystyle \Omega } of k are said to be linearly disjoint over k if the following equivalent conditions are met: (i) the map A ⊗ k B → Ω {\displaystyle A\otimes _{k}B\to \Omega } induced by ( x , y ) ↦ x y {\displaystyle (x,y)\mapsto xy} is injective; (ii) any k -basis of A remains linearly independent over B . Note that, since every subalgebra of Ω {\displaystyle \Omega } is a domain , (i) implies A ⊗ k B {\displaystyle A\otimes _{k}B} is a domain (in particular reduced ). Conversely if A and B are fields and either A or B is an algebraic extension of k and A ⊗ k B {\displaystyle A\otimes _{k}B} is a domain then it is a field and A and B are linearly disjoint. However, there are examples where A ⊗ k B {\displaystyle A\otimes _{k}B} is a domain but A and B are not linearly disjoint: for example, A = B = k ( t ), the field of rational functions over k . One also has: A , B are linearly disjoint over k if and only if the subfields of Ω {\displaystyle \Omega } generated by A , B {\displaystyle A,B} , respectively, are linearly disjoint over k . (cf. Tensor product of fields ) Suppose A , B are linearly disjoint over k . If A ′ ⊂ A {\displaystyle A'\subset A} , B ′ ⊂ B {\displaystyle B'\subset B} are subalgebras, then A ′ {\displaystyle A'} and B ′ {\displaystyle B'} are linearly disjoint over k . Conversely, if any finitely generated subalgebras of algebras A , B are linearly disjoint, then A , B are linearly disjoint (since the condition involves only finite sets of elements).
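A concrete example, added here only for illustration and not taken from the article: over k = Q the quadratic fields Q(√2) and Q(√3) are linearly disjoint, while a nontrivial extension is never linearly disjoint from itself.

```latex
% Illustrative example (not from the source article), over k = \mathbb{Q}:
\mathbb{Q}(\sqrt{2}) \otimes_{\mathbb{Q}} \mathbb{Q}(\sqrt{3})
   \;\cong\; \mathbb{Q}(\sqrt{3})[x]/(x^{2}-2)
   \;\cong\; \mathbb{Q}(\sqrt{2},\sqrt{3}),
\qquad\text{since } x^{2}-2 \text{ is irreducible over } \mathbb{Q}(\sqrt{3}),
% so the multiplication map into \Omega is injective and the two fields are linearly disjoint.
% By contrast, for A = B = \mathbb{Q}(\sqrt{2}) the tensor product is not even a domain:
\mathbb{Q}(\sqrt{2}) \otimes_{\mathbb{Q}} \mathbb{Q}(\sqrt{2})
   \;\cong\; \mathbb{Q}(\sqrt{2})[x]/\bigl((x-\sqrt{2})(x+\sqrt{2})\bigr)
   \;\cong\; \mathbb{Q}(\sqrt{2}) \times \mathbb{Q}(\sqrt{2}).
```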
https://en.wikipedia.org/wiki/Linearly_disjoint
Lineatin is a pheromone produced by the female striped ambrosia beetle , Trypodendron lineatum Olivier. These beetles are responsible for extensive damage to coniferous forests in Europe and North America. Since lineatin can act as a lure for mass-trapping of T. lineatum , it is being studied for use as a pest control agent. Lineatin was first isolated in 1977 by MacConnell. [ 1 ] The absolute configuration of the biologically active form was later determined as (+)-(1R,4S,5R,7R)-3,3,7-trimethyl-2,9- dioxatricyclo[3.3.1.0 4,7 ]nonane, whereas other enantiomers possess no biological attractant activity. [ 2 ] After the absolute structure was determined, lineatin quickly attracted considerable synthetic interest due to its natural occurrence, biological activity, and unique structural features. A few routes for the total synthesis of lineatin were proposed, with yields of 0.5–2%. [ 3 ] [ 4 ] [ 5 ] [ 6 ] Recently, a new total synthesis route was proposed that adopted a photochemical [2 + 2] cycloaddition approach to construct the cyclobutene diastereoselectively and a regiocontrolled oxymercuration reaction. This route succeeded in synthesizing highly pure (+)-lineatin (> 99.5% ee) in 14 steps with a 14% overall yield from a homochiral 2(5H)-furanone. (Figure 1 shows the basic outline of this approach.) [ 7 ] Lineatin is a monoterpene with a unique tricyclic acetal structure. Most studies of lineatin have focused on its total synthesis; little attention has been paid to its biosynthesis . It is suggested that lineatin is derived through oxidation and cyclization of a monoterpenoid precursor, but no experimental work has been done to prove this route. [ 8 ] Based on its partial structural similarity to the iridoid class of terpenoids, a possible biosynthesis pathway has been proposed and is outlined in Figure 2.
https://en.wikipedia.org/wiki/Lineatin
The Lines of Stollhofen ( German : Bühl-Stollhofener Linie ) was a line of defensive earthworks built for the Reichsarmee at the start of the War of the Spanish Succession (1701–1714) running for about 15 kilometres (9.3 mi) from Stollhofen on the Rhine to the impenetrable woods on the hills east of Bühl. [ 1 ] The lines were constructed by order of Margrave Louis William I of Baden-Baden in order to protect northern Baden from the newly erected French fortress of Fort Louis on the River Rhine. [ citation needed ] The roughly 15 kilometres (9.3 mi) long and only partly fortified line started in the east near Obertal (today part of Bühlertal ), ran westwards over the heights to Bühl and then northwest in the Rhine valley via Vimbuch (today a village in the municipality of Bühl), Leiberstung (today part of Sinzheim ) and Stollhofen to the River Rhine . It comprised linear schanzen in the terrain, as well as individual star schanzen , hornworks , small forts and fortified villages, and used weirs on the watercourses of the Rhine Plain to flood the fields of fire and approaches. [ citation needed ] At the same time, by including the villages of Bühl and Stollhofen, it enabled control of the old trade routes from Basle to Frankfurt (today the Bundesstraße B3) at Bühl, and from Strasbourg to Frankfurt (old Roman road, today the B 36). Until 1707, the line bounded the operational area of the French troops and barred the easiest route to Bavaria via Pforzheim . [ citation needed ] Following his Rhine crossing in mid-February 1703, Marshal Villars found the passes through the Black Forest to be still impassable because of snow. Therefore, he initially occupied Kehl Fortress on 12 March as his base east of the Rhine, united with the army of Marshal Tallard , [ 2 ] and on 19 April 1703 began an attack on the Bühl-Stollhofen Line. He bombarded the line south of Kappelwindeck and tried to bypass the line to the east with 25 battalions under Blainville. Both attempts, on 19 and 24 April, failed because the French could not capture the fortifications at Obertal. On 25 April, Villars pulled back. [ citation needed ] In summer 1703, however, Margrave Louis William could not stop Villars marching up the Kinzig valley and on into Bavaria. There, Villars was victorious in the First Battle of Höchstädt . Likewise in 1704, Tallard passed through the Black Forest unhindered along the Dreisam Valley . [ citation needed ] After the death of Margrave Louis William (9 January 1707), Villars captured the Bühl-Stollhofen Line in May without a fight and had it destroyed. [ citation needed ] Several months after the loss of the Bühl-Stollhofen Line, work began on the Ettlingen Line under the Rhine Army commander, George Louis of Brunswick-Lüneburg . The line was reinforced during the War of the Polish Succession (1733–1738), was destroyed by the French in 1734 and was rebuilt in 1735. [ citation needed ] As a result of the canalization of the Rhine by Tulla in the 19th century and the construction of roads and settlements in the last century, the remains of the line are now visible in places only in the wooded areas east of Bühl. [ 3 ] In the Bühl Municipal Museum [ 4 ] is the 1703 map of the Bühl-Stollhofen Line drawn by Major Elster.
https://en.wikipedia.org/wiki/Lines_of_Stollhofen
The Lines of Torres Vedras were lines of forts and other military defences built in secrecy to defend Lisbon during the Peninsular War . Named after the nearby town of Torres Vedras , they were ordered by Arthur Wellesley, Viscount Wellington , constructed by Colonel Richard Fletcher and his Portuguese workers between November 1809 and September 1810, and used to stop Marshal Masséna's 1810 offensive. The Lines were declared a National Heritage by the Portuguese Government in March 2019. [ 1 ] At the beginning of the Peninsular War (1807–14) France and Spain signed the Treaty of Fontainebleau in October 1807. This provided for the invasion and subsequent division of Portuguese territory into three kingdoms. Subsequently, French troops under the command of General Junot entered Portugal, which requested support from the British. In July 1808 troops commanded by Sir Arthur Wellesley , the later Duke of Wellington, landed in Portugal and defeated French troops at the Battles of Roliça and Vimeiro . This forced Junot to negotiate the Convention of Cintra , which led to the evacuation of the French army from Portugal. In March 1809, Marshal Soult led a new French expedition that advanced south to the city of Porto before being repulsed by Portuguese-British troops and forced to withdraw. [ 2 ] [ 3 ] After this retreat, Wellesley's forces advanced into Spain to join 33,000 Spanish troops under General Cuesta . At Talavera, some 120 kilometres (75 mi) southwest of Madrid, they encountered and defeated 46,000 French soldiers under Marshal Claude Victor . [ 4 ] After the Battle of Talavera , Wellington realised that he was seriously outnumbered by the French army, giving rise to the possibility that he could be forced to retreat to Portugal and possibly evacuate. He decided to strengthen the proposed evacuation area around the Fort of São Julião da Barra on the estuary of the River Tagus , near Lisbon. In October 1809, Wellington, drawing on topographical maps prepared by José Maria das Neves Costa, and making use of a report that was prepared for General Junot in 1807, surveyed the area north of Lisbon with Lieutenant-Colonel Richard Fletcher . Eventually they chose the terrain from Torres Vedras to Lisbon because of its mountainous characteristics. From north to south, great undulations created peaks that straddled deep valleys, great gullies and wide ravines . The rugged and inhospitable area offered numerous possibilities for a stubborn rearguard fight from forts on many of the peaks. [ 5 ] Following the decision on the location, Lieutenant-Colonel Richard Fletcher ordered the work to begin on a network of interlocking fortifications, redoubts , escarpments , dams that flooded large areas, and other defences. Roads were also built to enable troops to move rapidly between forts. The work was supervised by Fletcher, assisted by Major John Thomas Jones , and 11 other British Officers, four Portuguese Army Engineers, and two KGL officers. The cost was less than £200,000 according to the Royal Engineers, [ 6 ] one of the least expensive but most productive military investments in history. When the results of the surveys by the Royal Engineers were completed, it was possible, in February 1810, to begin work on 150 smaller interlinking defensive positions, using, wherever possible, the natural features of the landscape. [ 7 ] The work received a boost after the loss to the French of the fortress at the Siege of Almeida in August 1810 led to the public conscription of Portuguese labourers. 
The works were sufficiently complete to halt the advance of the French troops, who arrived in October of the same year. Even after the French had retreated from Portugal, construction of the lines continued in expectation of their return, and in 1812 34,000 men were still working on them. On completion there were 152 fortifications with a total of 648 cannon. [ 4 ] [ 8 ] The work began on the main defensive works on 3 November 1809, initially at the Fort of São Julião da Barra and almost immediately afterwards at the Fort of São Vicente (St. Vincent) overlooking the town of Torres Vedras and at the Fort of Alqueidão on top of Monte Agraço. [ 3 ] [ 9 ] [ 4 ] [ 8 ] The entire construction was carried out in great secrecy and the French never became aware of it. Only one report appeared in the London newspapers, a major source of information for Napoleon. [ 10 ] It is said that the British government did not know about the forts and was stunned when Wellington first said in dispatches that he had retreated to them. Even the British Ambassador in Lisbon appears to have been unaware of what was happening. [ 3 ] These defences were accompanied by a scorched earth policy to their north in which the inhabitants were told to leave their farms, destroying all food they could not take and anything else that may be useful to the French. Although ultimately contributing to the success of the defence, this policy led to high rates of mortality among the Portuguese who had retreated south of the lines. By some estimates 40,000 died. [ 3 ] [ 9 ] Labour for construction of the forts was supplied by Portuguese regiments from Lisbon, by hired Portuguese and, ultimately, through conscription of the whole district. The 152 works were supervised by just 18 engineers. The Lines were not continuous, as in the case of a defensive wall, but consisted of a series of mutually supporting forts and other defences that both guarded roads that the French could take and also covered each other’s flanks. The majority of the defences were redoubts holding 200 to 300 troops and three to six cannon, normally 12-pounders, which could fire canister shot or cannonballs . Each redoubt was protected by a ditch or dry moat, with parapets, and was palisaded. By the time the French reached the First Line in October 1810, 126 works had been completed and were manned by 29,750 men with 247 heavy guns. Wellington did not use his front-line troops to man the forts: instead, manpower was mainly provided by the Portuguese. Construction continued after the withdrawal of the French and was not fully completed until 1812. [ 3 ] [ 9 ] Originally the Second Line was intended to be the main line of defence, 30 km (19 mi) north of Lisbon. The First Line, or Outer Line, was approximately 10 km (6.2 mi) to north of the Second Line. The original purpose of the First Line was to only delay the French. In fact, the First Line was not the original plan, the work was only carried out because the defenders were given extra time due to the slow advance of the French Army. [ 4 ] In the end, the First Line succeeded in holding the French and the Second Line was never required. A Third Line, surrounding the Fort of São Julião da Barra near Lisbon, was built to protect Wellington’s evacuation by sea from the fort. [ 3 ] [ 9 ] A fourth line, of which little remains, was built south of the Tagus opposite Lisbon to prevent a French invasion of the city by boat. 
Wellington's first idea had been to construct the first line from Alhandra on the banks of the Tagus to Rio São Lourenço on the Atlantic coast, with advanced works at Torres Vedras, Sobral de Monte Agraço , and other commanding points. The delays to the French arrival, however, enabled him to strengthen the first line sufficiently to warrant aiming to hold it permanently rather than just using it for delaying purposes. Surveying this line from east to west, the first section from Alhandra to Arruda was about 5 miles (8.0 km) long, of which 1 mile (1.6 km) towards the Tagus had been inundated; another 1 mile (1.6 km) or more had been scarped into a precipice , and the most vulnerable point had been obstructed by a huge abatis . The additional defences included 23 redoubts mounting 96 guns, besides a flotilla of gunboats to guard the right flank on the Tagus. This area was under the command of Hill's division . [ 5 ] Defences still visible in this section include the Fort of Subserra . [ 8 ] [ 11 ] The second section extended from Arruda to the west of Monte Agraço, which was crowned by the very large fort now known as the Fort of Alqueidão, mounting twenty-five guns, with three smaller forts to support it. Monte Agraço itself was held by Pack's brigade with Anglo-Portuguese 5th Division ( Leith's ) in reserve behind it, while the less completely fortified country to the east was entrusted to the British Light Division . [ 12 ] The third section stretched from the west of Monte Agraço for nearly 8 miles (13 km) to the gorge of the river Sizandro, a little to south of Torres Vedras. This was strengthened by two redoubts which commanded the road from Sobral to Montachique . Here, therefore, were concentrated the 1st , 4th , and 6th divisions, under the eye of Wellington himself, who established his headquarters at Pero Negro , where he remained from approximately 16 October 1810 to 15 November 1810. [ 13 ] [ 14 ] The last and most westerly section of the first line ran from the gorge of the Sizandro to the sea, a distance of nearly 12 miles (19 km), more than half of which, however, on the western side had been rendered impassable by the damming of the Sizandro and by the conversion of its lower reaches into one huge inundation. The chief defence consisted of the entrenched camp of the Fort of São Vicente, a little to the north of Torres Vedras, which dominated the paved road leading from Leiria to Lisbon. The force assigned to this part of the Line was Picton's division. [ 13 ] The second line of defence was still more formidable. It can broadly be divided into three sections, from the Fort of Casa on the Tagus to Bucelas , from Bucelas to Mafra , and from Mafra to the sea, a total distance of 22 miles (35 km). [ 13 ] The main forts along this line that remain identifiable are three forts on the Serra da Aguieira that served to support the Fort of Casa in its defence of the River Tagus as well as covering the Bucelas Gorge. They also exchanged crossfire with the Fort of Arpim to their north, which was a link between the first and second lines as it was close to three other forts designed to protect the road from Bucelas to Alverca do Ribatejo . To the west of Bucelas was a line of hill-top forts dominated by the Montachique mountain. The mountain, at an altitude of 408 metres, was not fortified but was defended by what are today known as the Fort of Mosqueiro , the Fort of Ribas and others. 
Closer to Mafra, overlooking the town of Malveira, was the Fort of Feira , which was at the centre of a complex of 19 strongholds in the second Line. [ 15 ] Mafra was one of the principal positions on the second line, with its defences being centred around the Tapada or royal park. [ 3 ] In the event of failure even in the face of all these precautions, a very powerful line, 2 miles (3.2 km) long, was thrown up around the Fort of São Julião da Barra on the Tagus estuary to cover a retreat and any embarkation if it became necessary. [ 13 ] This was considered to be the third line. British ships dominated the Portuguese coast and the Tagus estuary, so a waterborne invasion by the French was unlikely. However, to guard against the possibility that the French would try to bypass the lines to the north of Lisbon by heading south along the left bank of the Tagus and then approaching Lisbon by boat, a fourth line was built south of the Tagus in the Almada area. The line was 7.3 kilometres (4.5 mi) long. It had 17 redoubts and covered trenches, 86 pieces of artillery, and was defended by marines and orderlies from Lisbon, with a total of 7,500 men. [ 3 ] The Anglo-Portuguese Army was forced to retreat to the first line after winning the Battle of Buçaco on 27 September 1810. The French army under Marshal Masséna discovered a barren land (under the scorched earth policy) and an enemy behind an almost impenetrable defensive position. Masséna's forces arrived at the lines on 11 October and took Sobral de Monte Agraço the following day. On 14 October the VIII Corps tried to push forward but at the Battle of Sobral they were repelled in an attempt to assault a strong British outpost. Masséna attempted to wait out the enemy, but the lack of food and fodder in the area north of the lines meant that he was forced to order a French retreat northwards, starting on the night of 14/15 November 1810, to find an area that had not been subjected to the scorched earth policy. In December 1810, fearing a French attempt on the left of the Tagus , a chain of 17 redoubts was constructed from Almada to Trafaria . [ 16 ] However, the French made no movement, and after holding out through February, when starvation really set in, Marshal Masséna ordered a retreat at the beginning of March 1811, taking a month to get to Spain. [ 16 ] Marshal Masséna had begun his campaign with his 65,000-strong army (l'Armée de Portugal). After losing 4,000 at the Battle of Buçaco , he arrived at Torres Vedras with 61,000 men in October 1810 facing attrition warfare . When he eventually returned to Spain in April 1811, he had lost a further 21,000 men, mostly from starvation, severe illness and disease. Casualties had not been helped by the fact that the Iberian peninsula had suffered one of the coldest winters it had ever known. When the Allies renewed their offensive in 1811, they were reinforced with fresh British troops. The advance started from the Lines of Torres Vedras shortly after the French retreat. Although work continued on certain sections of the lines, they saw no further action during the rest of the Peninsular War. [ 9 ] The lines were divided up into districts by Wellington in a letter dated 6 October 1810.
Each district was allocated one Captain and one Lieutenant of Engineers: [ 17 ] The total number of troops available to Wellington amounted, exclusive of two battalions of marines around the Fort of São Julião, to 42,000 British, of whom 35,000 were combat ready together with over 27,000 Portuguese regulars, of whom 24,000 were combat ready; about 12,000 Portuguese militia; and 20–30,000 ordenanças , a Portuguese militia force used mainly for guerrilla warfare . Lastly, the Marquis of la Romana contributed 8,000 Spanish troops to the lines around Mafra. Altogether, therefore, Wellington had some 60,000 regular frontline troops whom he could depend upon, and 20,000 more who could be trusted to man the lines. [ 18 ] The redoubts of the First Line did not require more than 20,000 men to defend them, which left the whole of the true field-army free not only to reinforce any threatened point but also to make counter-attacks. To facilitate such movements a chain of five signal stations was established from one end of the First Line to the other, which allowed a message to be sent along the lines in 7 minutes, or from the HQ to any point in 4 minutes. The signal stations on the First Line were: while on the Second Line, five stations have been identified at: A monument commemorating the victory of the Anglo-Portuguese troops over the French armies and the construction of the Torres Vedras Lines was approved in 1874 and finished in 1883. Somewhat reminiscent of Nelson’s Column in London , the column is topped by a statue of the Classical Greek figure of Hercules . This was executed by the sculptor Simões de Almeida who was also responsible for the Monument to the Restorers in Lisbon. The column used marble from the parish of Pêro Pinheiro in Sintra municipality. [ 21 ] The monument was constructed near the village of Alhandra in the municipality of Vila Franca de Xira , on the site of the Boavista redoubt (originally numbered as work Number 3). It is close to work Number 114, the Fort of Subserra (also known as the Fort of Alhandra), which can be visited. In 1911, two plaques were added to acknowledge the contributions of Richard Fletcher and of José Maria das Neves Costa, on whose original topographic maps Wellington based his plans for the Lines. [ 21 ] Substantial portions of the Lines survive today, albeit in most cases in a heavily decayed condition due to past removal of stones. Apart from some limited restoration of Fort St. Vincent in the 1960s the Lines had effectively lain abandoned from the end of the Peninsular War to the beginning of this millennium. In 2001 the six municipalities covered by the Lines (Torres Vedras, Mafra, Sobral de Monte Agraço, Arruda dos Vinhos, Loures and Villa Franca de Xira), together with agencies of what is now the Direção-Geral do Património Cultural (Directorate-General for Cultural Heritage - DGPC), and the Direção dos Serviços de Engenharia (Directorate of Military Engineering) signed a protocol to protect, restore and sustain the Lines. However, initial work was limited due to lack of resources. With the bicentennial of the Lines fast approaching the six municipalities set up an inter-municipal platform to move things forward and decided to apply for funding through the EEA and Norway Grants programme. Funding was granted in 2007. [ 22 ] EEA grants met the costs of 110 projects, while the municipalities funded the work at another 140 sites. 
Work involved included removal of excess vegetation, creation or restoration of access, archaeological studies, setting up of information boards, establishment of walking routes, and a Visitors' Centre in each municipality. [ 22 ] This conservation work was awarded the European Union Prize for Cultural Heritage / Europa Nostra Awards in 2014. [ 23 ] [ 24 ] The Leonel Trindade Municipal Museum in the centre of Torres Vedras has a room dedicated to "The Lines" with a display of information boards and artefacts. [ 25 ] A short distance from the museum, just outside the town, the Fort of São Vicente and the Fort of Olheiros have been well conserved, with the former having a visitors' centre open Tuesday to Sunday, 10am-1pm and 2-6pm. The visitors' centre has well-produced historic wall displays and a 20-minute video. [ 26 ] Other information centres are located along the lines.
https://en.wikipedia.org/wiki/Lines_of_Torres_Vedras
In the field of biomechanics , the lines of non-extension are notional lines running across the human body along which body movement causes neither stretching nor contraction. Discovered by Arthur Iberall in work beginning in the 1940s, as part of research into space suit design, [ 1 ] [ 2 ] [ 3 ] [ 4 ] they have been further developed by Dava Newman in the development of the Space Activity Suit . [ 5 ] They were originally mapped by Iberall by drawing a series of circles over a portion of the body and then watching their deformations as the wearer walked around or performed various tasks. The circles deform into ellipses as the skin stretches over the moving musculature, and these deformations were recorded. After a huge number of such measurements, the data is examined to find all of the possible deformations of the circles, and more importantly, the non-moving points on them where the original circle and the deformed ellipse intersect (at four points per circle). By mapping these points over the entire body, a series of lines is produced. These lines may then be used to direct the placement of tension elements in a spacesuit to enable constant suit pressure regardless of the motion of the body. [ 2 ]
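The intersection construction can be illustrated with a toy computation, added here purely as an illustration: if a unit circle is deformed into a concentric, axis-aligned ellipse that is stretched along one axis and compressed along the other, the four unmoved points are where the circle and the ellipse cross. NumPy and the specific stretch factor are assumptions of the example.

```python
import numpy as np

# Toy illustration: the unit circle x^2 + y^2 = 1 deformed into the concentric,
# axis-aligned ellipse (x/a)^2 + (y/b)^2 = 1 with one axis stretched and the other
# compressed. Points lying on both curves are unmoved by the deformation, analogous
# to the non-moving intersection points described above.
a = 1.3
b = 1.0 / a

# Substitute y^2 = 1 - x^2 (circle) into the ellipse equation and solve for x^2.
x2 = (1.0 - 1.0 / b**2) / (1.0 / a**2 - 1.0 / b**2)
x, y = np.sqrt(x2), np.sqrt(1.0 - x2)
fixed_points = [(sx * x, sy * y) for sx in (1, -1) for sy in (1, -1)]
print(fixed_points)   # the four circle-ellipse intersection points
```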
https://en.wikipedia.org/wiki/Lines_of_non-extension
A lineshaft roller conveyor or line-shaft conveyor is, as its name suggests, powered by a shaft beneath rollers. These conveyors are suitable for light applications of up to 50 kg, such as cardboard boxes and tote boxes. A single shaft runs below the rollers, along the length of the conveyor. On the shaft are a series of spools, one spool for each roller. An elastic polyurethane o-ring belt runs from a spool on the powered shaft to each roller. When the shaft is powered, the o-ring belt acts as a chain between the spool and the roller, making the roller rotate. The rotation of the rollers pushes the product along the conveyor. The shaft is usually driven by an electrical motor that is generally controlled by an electronic PLC (programmable logic controller). The PLC electronically controls how specific sections of the conveyor system interact with the products being conveyed. Advantages of this conveyor are quiet operation, easy installation, moderate maintenance and low expense. Line-shaft conveyors are also extremely safe for people to work around because the elastic belts can stretch and not injure fingers should any get caught underneath them. Moreover, the spools will slip and allow the rollers to stop moving if clothing, hands or hair gets caught in them. In addition, since the spools are slightly loose on the shaft, they act like clutches that slip when products are required to accumulate (stop moving and bump up against each other, i.e. queue up). With the exception of soft-bottomed containers like cement bags, these conveyors can be utilized for almost all applications. A disadvantage of the roller lineshaft conveyor is that it can only be used to convey products that span at least three rollers, but rollers can be as small as 17 mm in diameter and as close together as 18.5 mm. For items shorter than 74 mm, a conveyor belt system is generally used as an alternative option.
https://en.wikipedia.org/wiki/Lineshaft_roller_conveyor
In Euclidean geometry , the intersection of a line and a line can be the empty set , a point , or another line . Distinguishing these cases and finding the intersection have uses, for example, in computer graphics , motion planning , and collision detection . In three-dimensional Euclidean geometry, if two lines are not in the same plane , they have no point of intersection [ 1 ] and are called skew lines . If they are in the same plane, however, there are three possibilities: if they coincide (are not distinct lines), they have an infinitude of points in common (namely all of the points on either of them); if they are distinct but have the same slope , they are said to be parallel and have no points in common; otherwise, they have a single point of intersection. The distinguishing features of non-Euclidean geometry are the number and locations of possible intersections between two lines and the number of possible lines with no intersections (parallel lines) with a given line. [ further explanation needed ] A necessary condition for two lines to intersect is that they are in the same plane—that is, are not skew lines. Satisfaction of this condition is equivalent to the tetrahedron with vertices at two of the points on one line and two of the points on the other line being degenerate in the sense of having zero volume . For the algebraic form of this condition, see Skew lines § Testing for skewness . First we consider the intersection of two lines L 1 and L 2 in two-dimensional space, with line L 1 being defined by two distinct points ( x 1 , y 1 ) and ( x 2 , y 2 ) , and line L 2 being defined by two distinct points ( x 3 , y 3 ) and ( x 4 , y 4 ) . [ 2 ] The intersection P of line L 1 and L 2 can be defined using determinants . The determinants can be written out as: When the two lines are parallel or coincident, the denominator is zero. The intersection point above is for the infinitely long lines defined by the points, rather than the line segments between the points, and can produce an intersection point not contained in either of the two line segments. In order to find the position of the intersection in respect to the line segments, we can define lines L 1 and L 2 in terms of first degree Bézier parameters: (where t and u are real numbers). The intersection point of the lines is found with one of the following values of t or u , where and with There will be an intersection if 0 ≤ t ≤ 1 and 0 ≤ u ≤ 1 . The intersection point falls within the first line segment if 0 ≤ t ≤ 1 , and it falls within the second line segment if 0 ≤ u ≤ 1 . These inequalities can be tested without the need for division, allowing rapid determination of the existence of any line segment intersection before calculating its exact point. [ 3 ] In the case where the two line segments share an x axis and x 2 = x 1 + 1 {\displaystyle x_{2}=x_{1}+1} , t {\displaystyle t} and u {\displaystyle u} simplify to t = u = y 1 − y 3 y 1 − y 2 − y 3 + y 4 , {\displaystyle t=u={\frac {y_{1}-y_{3}}{y_{1}-y_{2}-y_{3}+y_{4}}},} with ( P x , P y ) = ( x 1 + t , y 1 + t ( y 2 − y 1 ) ) or ( P x , P y ) = ( x 1 + t , y 3 + t ( y 4 − y 3 ) ) . {\displaystyle (P_{x},P_{y})={\bigl (}x_{1}+t,\;y_{1}+t(y_{2}-y_{1}){\bigr )}\quad {\text{or}}\quad (P_{x},P_{y})={\bigl (}x_{1}+t,\;y_{3}+t(y_{4}-y_{3}){\bigr )}.} The x and y coordinates of the point of intersection of two non-vertical lines can easily be found using the following substitutions and rearrangements. 
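Before turning to those rearrangements, the segment formulation above can be summarized in code. The sketch below, in plain Python, implements the first-degree Bézier parameters t and u described earlier; the function name and the test points are illustrative.

```python
def segment_intersection(p1, p2, p3, p4):
    """Intersection of segments p1-p2 and p3-p4 via the Bezier parameters t and u.

    Returns the intersection point as an (x, y) tuple, or None when the lines are
    parallel/coincident or the intersection falls outside either segment.
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:                       # parallel or coincident lines
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:      # intersection lies within both segments
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

print(segment_intersection((0, 0), (2, 2), (0, 2), (2, 0)))   # (1.0, 1.0)
```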
Suppose that two lines have the equations y = ax + c and y = bx + d where a and b are the slopes (gradients) of the lines and where c and d are the y -intercepts of the lines. At the point where the two lines intersect (if they do), both y coordinates will be the same, hence the following equality: We can rearrange this expression in order to extract the value of x , and so, To find the y coordinate, all we need to do is substitute the value of x into either one of the two line equations, for example, into the first: Hence, the point of intersection is Note that if a = b then the two lines are parallel and they do not intersect, unless c = d as well, in which case the lines are coincident and they intersect at every point. By using homogeneous coordinates , the intersection point of two implicitly defined lines can be determined quite easily. In 2D, every point can be defined as a projection of a 3D point, given as the ordered triple ( x , y , w ) . The mapping from 3D to 2D coordinates is ( x ′, y ′) = ( ⁠ x / w ⁠ , ⁠ y / w ⁠ ) . We can convert 2D points to homogeneous coordinates by defining them as ( x , y , 1) . Assume that we want to find intersection of two infinite lines in 2-dimensional space, defined as a 1 x + b 1 y + c 1 = 0 and a 2 x + b 2 y + c 2 = 0 . We can represent these two lines in line coordinates as U 1 = ( a 1 , b 1 , c 1 ) and U 2 = ( a 2 , b 2 , c 2 ) . The intersection P ′ of two lines is then simply given by [ 4 ] If c p = 0 , the lines do not intersect. The intersection of two lines can be generalized to involve additional lines. The existence of and expression for the n -line intersection problem are as follows. In two dimensions, more than two lines almost certainly do not intersect at a single point. To determine if they do and, if so, to find the intersection point, write the i th equation ( i = 1, …, n ) as and stack these equations into matrix form as where the i th row of the n × 2 matrix A is [ a i 1 , a i 2 ] , w is the 2 × 1 vector [ x y ] , and the i th element of the column vector b is b i . If A has independent columns, its rank is 2. Then if and only if the rank of the augmented matrix [ A | b ] is also 2, there exists a solution of the matrix equation and thus an intersection point of the n lines. The intersection point, if it exists, is given by where A g is the Moore–Penrose generalized inverse of A (which has the form shown because A has full column rank). Alternatively, the solution can be found by jointly solving any two independent equations. But if the rank of A is only 1, then if the rank of the augmented matrix is 2 there is no solution but if its rank is 1 then all of the lines coincide with each other. The above approach can be readily extended to three dimensions. In three or more dimensions, even two lines almost certainly do not intersect; pairs of non-parallel lines that do not intersect are called skew lines . But if an intersection does exist it can be found, as follows. In three dimensions a line is represented by the intersection of two planes, each of which has an equation of the form Thus a set of n lines can be represented by 2 n equations in the 3-dimensional coordinate vector w : where now A is 2 n × 3 and b is 2 n × 1 . As before there is a unique intersection point if and only if A has full column rank and the augmented matrix [ A | b ] does not, and the unique intersection if it exists is given by In two or more dimensions, we can usually find a point that is mutually closest to two or more lines in a least-squares sense. 
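Before developing that least-squares construction, the homogeneous-coordinate approach above can be shown in a few lines: the intersection of two lines given in line coordinates is just their cross product, de-homogenized by the third component. NumPy and the example lines are assumptions of this sketch.

```python
import numpy as np

def intersection_homogeneous(u1, u2):
    """Intersection of lines a*x + b*y + c = 0 given as line coordinates (a, b, c)."""
    p = np.cross(u1, u2)        # homogeneous coordinates of the intersection point
    if p[2] == 0:               # parallel or coincident lines: no finite intersection
        return None
    return p[:2] / p[2]

# Example: x - y = 0 and x + y - 2 = 0 meet at (1, 1).
print(intersection_homogeneous((1, -1, 0), (1, 1, -2)))
```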
In the two-dimensional case, first, represent line i as a point p i on the line and a unit normal vector n̂ i , perpendicular to that line. That is, if x 1 and x 2 are points on line 1, then let p 1 = x 1 and let which is the unit vector along the line, rotated by a right angle. The distance from a point x to the line ( p , n̂ ) is given by And so the squared distance from a point x to a line is The sum of squared distances to many lines is the cost function : This can be rearranged: To find the minimum, we differentiate with respect to x and set the result equal to the zero vector: so and so While n̂ i is not well-defined in more than two dimensions, this can be generalized to any number of dimensions by noting that n̂ i n̂ i T is simply the symmetric matrix with all eigenvalues unity except for a zero eigenvalue in the direction along the line providing a seminorm on the distance between p i and another point giving the distance to the line. In any number of dimensions, if v̂ i is a unit vector along the i th line, then where I is the identity matrix , and so [ 5 ] In order to find the intersection point of a set of lines, we calculate the point with minimum distance to them. Each line is defined by an origin a i and a unit direction vector n̂ i . The square of the distance from a point p to one of the lines is given from Pythagoras: where ( p − a i ) T n̂ i is the projection of p − a i on line i . The sum of distances to the square to all lines is To minimize this expression, we differentiate it with respect to p . which results in where I is the identity matrix . This is a matrix Sp = C , with solution p = S + C , where S + is the pseudo-inverse of S . In spherical geometry , any two great circles intersect. [ 6 ] In hyperbolic geometry , given any line and any point, there are infinitely many lines through that point that do not intersect the given line. [ 6 ]
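Both derivations above reduce to the same linear system. A minimal NumPy sketch follows (illustrative names, not a library API): for each line with origin a_i and unit direction vector, the projector I − v̂_i v̂_iᵀ (equal to n̂_i n̂_iᵀ in two dimensions) measures the component of p − a_i perpendicular to the line; summing these projectors and their products with the origins gives S and C, and the least-squares point is p = S⁺C as stated in the text.

import numpy as np

def nearest_point_to_lines(origins, directions):
    """Least-squares point closest to a set of lines in any dimension.

    origins:    (n, d) array-like, a point a_i on each line
    directions: (n, d) array-like, direction vector of each line
    """
    origins = np.asarray(origins, dtype=float)
    directions = np.asarray(directions, dtype=float)
    d = origins.shape[1]
    S = np.zeros((d, d))
    C = np.zeros(d)
    for a, v in zip(origins, directions):
        v = v / np.linalg.norm(v)          # unit direction along the line
        P = np.eye(d) - np.outer(v, v)     # projector orthogonal to the line
        S += P
        C += P @ a
    return np.linalg.pinv(S) @ C           # p = S^+ C

# Two lines in the plane: the x-axis and the vertical line x = 1 meet at (1, 0).
print(nearest_point_to_lines([[0, 0], [1, 5]], [[1, 0], [0, 1]]))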
https://en.wikipedia.org/wiki/Line–line_intersection
In analytic geometry , the intersection of a line and a plane in three-dimensional space can be the empty set , a point , or a line. It is the entire line if that line is embedded in the plane, and is the empty set if the line is parallel to the plane but outside it. Otherwise, the line cuts through the plane at a single point. Distinguishing these cases, and determining equations for the point and line in the latter cases, have use in computer graphics , motion planning , and collision detection . In vector notation , a plane can be expressed as the set of points p {\displaystyle \mathbf {p} } for which where n {\displaystyle \mathbf {n} } is a normal vector to the plane and p 0 {\displaystyle \mathbf {p_{0}} } is a point on the plane. (The notation a ⋅ b {\displaystyle \mathbf {a} \cdot \mathbf {b} } denotes the dot product of the vectors a {\displaystyle \mathbf {a} } and b {\displaystyle \mathbf {b} } .) The vector equation for a line is where l {\displaystyle \mathbf {l} } is a unit vector in the direction of the line, l 0 {\displaystyle \mathbf {l_{0}} } is a point on the line, and d {\displaystyle d} is a scalar in the real number domain. Substituting the equation for the line into the equation for the plane gives Expanding gives And solving for d {\displaystyle d} gives If l ⋅ n = 0 {\displaystyle \mathbf {l} \cdot \mathbf {n} =0} then the line and plane are parallel. There will be two cases: if ( p 0 − l 0 ) ⋅ n = 0 {\displaystyle (\mathbf {p_{0}} -\mathbf {l_{0}} )\cdot \mathbf {n} =0} then the line is contained in the plane, that is, the line intersects the plane at each point of the line. Otherwise, the line and plane have no intersection. If l ⋅ n ≠ 0 {\displaystyle \mathbf {l} \cdot \mathbf {n} \neq 0} there is a single point of intersection. The value of d {\displaystyle d} can be calculated and the point of intersection, p {\displaystyle \mathbf {p} } , is given by A line is described by all points that are a given direction from a point. A general point on a line passing through points l a = ( x a , y a , z a ) {\displaystyle \mathbf {l} _{a}=(x_{a},y_{a},z_{a})} and l b = ( x b , y b , z b ) {\displaystyle \mathbf {l} _{b}=(x_{b},y_{b},z_{b})} can be represented as where l a b = l b − l a {\displaystyle \mathbf {l} _{ab}=\mathbf {l} _{b}-\mathbf {l} _{a}} is the vector pointing from l a {\displaystyle \mathbf {l} _{a}} to l b {\displaystyle \mathbf {l} _{b}} . Similarly a general point on a plane determined by the triangle defined by the points p 0 = ( x 0 , y 0 , z 0 ) {\displaystyle \mathbf {p} _{0}=(x_{0},y_{0},z_{0})} , p 1 = ( x 1 , y 1 , z 1 ) {\displaystyle \mathbf {p} _{1}=(x_{1},y_{1},z_{1})} and p 2 = ( x 2 , y 2 , z 2 ) {\displaystyle \mathbf {p} _{2}=(x_{2},y_{2},z_{2})} can be represented as where p 01 = p 1 − p 0 {\displaystyle \mathbf {p} _{01}=\mathbf {p} _{1}-\mathbf {p} _{0}} is the vector pointing from p 0 {\displaystyle \mathbf {p} _{0}} to p 1 {\displaystyle \mathbf {p} _{1}} , and p 02 = p 2 − p 0 {\displaystyle \mathbf {p} _{02}=\mathbf {p} _{2}-\mathbf {p} _{0}} is the vector pointing from p 0 {\displaystyle \mathbf {p} _{0}} to p 2 {\displaystyle \mathbf {p} _{2}} . The point at which the line intersects the plane is therefore described by setting the point on the line equal to the point on the plane, giving the parametric equation: This can be rewritten as which can be expressed in matrix form as where the vectors are written as column vectors. 
This produces a system of linear equations which can be solved for t {\displaystyle t} , u {\displaystyle u} and v {\displaystyle v} . If the solution satisfies the condition t ∈ [ 0 , 1 ] , {\displaystyle t\in [0,1],} , then the intersection point is on the line segment between l a {\displaystyle \mathbf {l} _{a}} and l b {\displaystyle \mathbf {l} _{b}} , otherwise it is elsewhere on the line. Likewise, if the solution satisfies u , v ∈ [ 0 , 1 ] , {\displaystyle u,v\in [0,1],} , then the intersection point is in the parallelogram formed by the point p 0 {\displaystyle \mathbf {p} _{0}} and vectors p 01 {\displaystyle \mathbf {p} _{01}} and p 02 {\displaystyle \mathbf {p} _{02}} . If the solution additionally satisfies ( u + v ) ≤ 1 {\displaystyle (u+v)\leq 1} , then the intersection point lies in the triangle formed by the three points p 0 {\displaystyle \mathbf {p} _{0}} , p 1 {\displaystyle \mathbf {p} _{1}} and p 2 {\displaystyle \mathbf {p} _{2}} . The determinant of the matrix can be calculated as If the determinant is zero, then there is no unique solution; the line is either in the plane or parallel to it. If a unique solution exists (determinant is not 0), then it can be found by inverting the matrix and rearranging: which expands to and then to thus giving the solutions: The point of intersection is then equal to In the ray tracing method of computer graphics a surface can be represented as a set of pieces of planes. The intersection of a ray of light with each plane is used to produce an image of the surface. In vision-based 3D reconstruction , a subfield of computer vision, depth values are commonly measured by so-called triangulation method, which finds the intersection between light plane and ray reflected toward camera. The algorithm can be generalised to cover intersection with other planar figures, in particular, the intersection of a polyhedron with a line .
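A combined sketch of the two formulations above in Python with NumPy (function names are illustrative): the first function implements the algebraic form d = ((p0 − l0)·n)/(l·n) with the two parallel cases handled separately, and the second solves the 3×3 parametric system for (t, u, v), after which the segment and triangle conditions described above can be tested.

import numpy as np

def line_plane_intersection(l0, l, p0, n, eps=1e-9):
    # Algebraic form: intersect the line l0 + d*l with the plane (p - p0) . n = 0.
    # Returns the point, the string 'contained' if the line lies in the plane,
    # or None if line and plane are parallel and disjoint.
    l0, l, p0, n = (np.asarray(v, dtype=float) for v in (l0, l, p0, n))
    denom = np.dot(l, n)
    if abs(denom) < eps:                           # direction perpendicular to the normal
        return 'contained' if abs(np.dot(p0 - l0, n)) < eps else None
    d = np.dot(p0 - l0, n) / denom
    return l0 + d * l

def line_triangle_intersection(la, lb, p0, p1, p2, eps=1e-9):
    # Parametric form: solve la + t*(lb - la) = p0 + u*(p1 - p0) + v*(p2 - p0).
    # Returns (point, t, u, v), or None when the determinant vanishes
    # (line in the plane or parallel to it).
    la, lb, p0, p1, p2 = (np.asarray(v, dtype=float) for v in (la, lb, p0, p1, p2))
    lab, p01, p02 = lb - la, p1 - p0, p2 - p0
    A = np.column_stack((-lab, p01, p02))
    if abs(np.linalg.det(A)) < eps:
        return None
    t, u, v = np.linalg.solve(A, la - p0)
    return la + t * lab, t, u, v

# The segment from (0, 0, -1) to (0, 0, 1) crosses the plane z = 0 at the origin.
print(line_plane_intersection([0, 0, -1], [0, 0, 2], [5, 5, 0], [0, 0, 1]))
point, t, u, v = line_triangle_intersection([0, 0, -1], [0, 0, 1],
                                            [-1, -1, 0], [2, -1, 0], [-1, 2, 0])
# t in [0, 1]: the hit lies on the segment; u, v >= 0 and u + v <= 1: inside the triangle.
print(point, 0 <= t <= 1, u >= 0 and v >= 0 and u + v <= 1)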
https://en.wikipedia.org/wiki/Line–plane_intersection
In natural language processing , linguistics , and neighboring fields, Linguistic Linked Open Data (LLOD) describes a method and an interdisciplinary community concerned with creating, sharing, and (re-)using language resources in accordance with Linked Data principles. The Linguistic Linked Open Data Cloud was conceived and is maintained by the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation , but has since been a focal point of activity for several W3C community groups, research projects, and infrastructure efforts. Linguistic Linked Open Data describes the publication of data for linguistics and natural language processing using the following principles: [ 1 ] The primary benefits of LLOD have been identified as: [ 2 ] The LLOD cloud diagram is hosted at linguistic-lod.org. [ 3 ] Aside from gathering metadata and generating the LLOD cloud diagram, the LLOD community is driving the development of community standards with respect to vocabularies, metadata and best-practice recommendations. According to the state-of-the-art overview by Cimiano et al. (2020), [ 4 ] these include: As of mid-2020, most of these community standards were being actively worked on. Particularly problematic is the existence of multiple incompatible standards for linguistic annotations, and in early 2020 the W3C Community Group Linked Data for Language Technology began to work towards a consolidation of these (and other) vocabularies for linguistic annotations on the web. [ 15 ] The LLOD cloud diagram has been developed and is maintained by the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation (since 2014 Open Knowledge), an open and interdisciplinary group of experts in language resources. The OWLG organizes community events, coordinates LLOD developments, and facilitates interdisciplinary communication between and among LLOD contributors and users. Several W3C Business and Community Groups focus on specialized aspects of LLOD. LLOD development is driven forward by and documented in a series of international workshops, datathons, and associated publications. Linguistic Linked Open Data is applied to address a number of scientific research problems and is closely related to several neighboring development efforts; uses and development of LLOD have been the subject of several large-scale research projects. As of October 2018, the 10 most frequently linked resources in the LLOD diagram are (in order of the number of linked datasets): There are a number of recurring discussions regarding different aspects of the term and its applicability to particular types of resources. [ 32 ] Aside from resources used in and created for linguistic research, the LLOD cloud diagram also includes ontologies, terminologies and general knowledge bases whose development was not originally driven by interest in language sciences or language technology, e.g., DBpedia . As a criterion for inclusion into the LLOD diagram, the OWLG requires "linguistic relevance": "[A] dataset is linguistically relevant if it provides or describes language data that can be used for the purpose of linguistic research or natural language processing."
[ 33 ] This does include linguistic resources in a strict sense ("condition 1": an annotated or otherwise structured resource created for application in language sciences or language technology, as demonstrated, for example, by a scientific publication in a linguistics-related journal or conference), but also resources "that can be used for annotating, enriching, retrieving or classifying language resources ... [if their relevance] can be verified by the existence of links between a resource (whose linguistic relevance is to be confirmed) and resources fulfilling condition (1)" ("condition 2"). [ 34 ] A related issue is the classification of linguistically relevant datasets (or language resources in general). The OWLG developed the following classification for the LLOD cloud diagram: [ 35 ] Note that in this classification, term bases might be slightly different in that they do not provide grammatical information; however, since they formalize semantic knowledge, they are of immanent relevance for natural language processing tasks such as named entity recognition or anaphora resolution. LLOD is defined in relation to Linked Open Data, and LLOD resources ( data ) should thus conform to licenses in accordance with the Open Definition . [ 36 ] For generating the LLOD cloud diagram (and the LOD diagram), this does not, however, seem to be enforced yet, so that the technical criterion is availability over the web and a metadata entry. In the OWLG, it has been repeatedly discussed whether non-commercial (academic) resources could be included, with a general consensus to admit them for the moment (2015) but to enforce stricter requirements as the LLOD cloud grows. As of January 2018, it had not yet been agreed when this change would happen. [ 37 ] As of January 2020, machine-readable license metadata was available for 86 LLOD resources; of these, 82 adopted open licenses and 4 adopted non-commercial licenses. [ 38 ] In a broader sense, the term LLOD technology (infrastructures, tools, vocabularies) can also be used to refer to the technology independently of whether actually open resources are involved, e.g., in the name of the EU project Pret-a-LLOD, which features several commercial business cases. [ 39 ] This is justified for applications that consume (rather than provide) open data, but also when linked data technology and the adoption of other LLOD conventions (esp. the use of RDF vocabularies developed in the context of LLOD) are applied in order to facilitate the seamless integration of LLOD resources (open resources). The abbreviation "LLOD" can be used to refer to either LLOD technology (use of Linked Data and LLOD vocabularies, independent of the legal status of the data being processed) or LLOD resources (open data). For disambiguation, the terms "LLOD resources" and "LLOD technology" can be used. To emphasize application or applicability to non-open resources, the term "LLD" (Linguistic Linked Data) has also been used. [ 40 ] A possible compromise is the acronym "LL(O)D" for the technology. A "Licensed Linguistic Linked Data" cloud that contains non-open resources does not currently (June 2020) exist. [ 38 ] The definition of Linked Data requires the application of RDF or related standards. This includes the W3C recommendations SPARQL, Turtle, JSON-LD, RDF/XML, RDFa, etc.
In language technology and the language sciences, however, other formalisms are currently more popular, and the inclusion of such data into the LLOD cloud diagram has occasionally been requested. [ 32 ] For several such formalisms, W3C-standardized wrapping mechanisms exist (e.g., for XML , CSV or relational databases, see Knowledge extraction#Extraction from structured sources to RDF ), and such data can be integrated on the condition that the corresponding mapping is provided along with the source data. The literature on LLOD includes a 2022 review paper, an exhaustive description of the state of the art, the publications that originally introduced the concept of a Linguistic Linked Open Data cloud, the first book on the topic, further seminal publications singled out by Cimiano et al. (2020), [ 41 ] and a collected volume summarizing developments from 2015 to 2019.
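As a concrete illustration of the wrapping of non-RDF source data described above, the following sketch (Python with the rdflib library; the example.org data namespace is made up, and the choice of the OntoLex-Lemon vocabulary is an illustrative convention rather than a requirement of LLOD) converts one row of a hypothetical tabular lexicon into RDF triples and serializes them as Turtle.

from rdflib import Graph, Literal, Namespace, RDF

# Illustrative namespaces: EX is an invented data namespace,
# ONTOLEX is the OntoLex-Lemon vocabulary commonly used in LLOD resources.
EX = Namespace("http://example.org/lexicon/")
ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")

# One row of a hypothetical CSV lexicon: identifier, lemma, language tag.
row = {"id": "cat-n", "lemma": "cat", "lang": "en"}

g = Graph()
g.bind("ontolex", ONTOLEX)
g.bind("ex", EX)

entry = EX[row["id"]]
form = EX[row["id"] + "-form"]
g.add((entry, RDF.type, ONTOLEX.LexicalEntry))
g.add((entry, ONTOLEX.canonicalForm, form))
g.add((form, RDF.type, ONTOLEX.Form))
g.add((form, ONTOLEX.writtenRep, Literal(row["lemma"], lang=row["lang"])))

print(g.serialize(format="turtle"))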
https://en.wikipedia.org/wiki/Linguistic_Linked_Open_Data
Linguistic sequence complexity (LC) is a measure of the 'vocabulary richness' of a genetic text in gene sequences . [ 1 ] When a nucleotide sequence is written as text using a four-letter alphabet, the repetitiveness of the text, that is, the repetition of its N-grams (words), can be calculated and serves as a measure of sequence complexity. Thus, the more complex a DNA sequence , the richer its oligonucleotide vocabulary, whereas repetitious sequences have relatively lower complexities. Subsequent work improved the original algorithm described in Trifonov (1990), [ 1 ] without changing the essence of the linguistic complexity approach. [ 2 ] [ 3 ] [ 4 ] The meaning of LC may be better understood by regarding the presentation of a sequence as a tree of all subsequences of the given sequence. The most complex sequences have maximally balanced trees, while the measure of imbalance or tree asymmetry serves as a complexity measure . The number of nodes at the tree level i is equal to the actual vocabulary size of words with the length i in a given sequence; the number of nodes in the most balanced tree, which corresponds to the most complex sequence of length N, at the tree level i is either 4 i or N−i+1, whichever is smaller. Complexity ( C ) of a sequence fragment (with a length RW) can be directly calculated as the product of vocabulary-usage measures (U i ): [ 2 ] C = U 1 U 2 ⋯ U W {\displaystyle C=U_{1}U_{2}\cdots U_{W}} Vocabulary usage for oligomers of a given size i can be defined as the ratio of the actual vocabulary size of a given sequence to the maximal possible vocabulary size for a sequence of that length. For example, U 2 for the sequence ACGGGAAGCTGATTCCA = 14/16, as it contains 14 of the 16 possible different dinucleotides; U 3 for the same sequence = 15/15, and U 4 = 14/14. For the sequence ACACACACACACACACA, U 1 = 1/2; U 2 = 2/16 = 0.125, as it has a simple vocabulary of only two dinucleotides; U 3 for this sequence = 2/15. k-tuples with k from two to W are considered, while W depends on RW. For RW values less than 18, W is equal to 3; for RW less than 67, W is equal to 4; for RW<260, W=5; for RW<1029, W=6, and so on. The value of C provides a measure of sequence complexity in the range 0<C<1 for various DNA sequence fragments of a given length. [ 2 ] This formula is different from the original LC measure [ 1 ] in two respects: in the way vocabulary usage U i is calculated, and because i is not in the range of 2 to N−1 but only up to W. This limitation on the range of U i makes the algorithm substantially more efficient without loss of power. [ 2 ] Another modified version was used in [ 5 ] , wherein linguistic complexity (LC) is defined as the ratio of the number of substrings of any length present in the string to the maximum possible number of substrings. The maximum vocabulary over word sizes 1 to m can be calculated as the sum, over word sizes k = 1, …, m, of the per-size maxima min(4 k , N−k+1). [ 5 ] This sequence analysis complexity calculation can be used to search for conserved regions between compared sequences for the detection of low-complexity regions including simple sequence repeats, imperfect direct or inverted repeats , polypurine and polypyrimidine triple-stranded DNA structures , and four-stranded structures (such as G-quadruplexes ). [ 6 ]
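The definitions above translate directly into code. The short Python sketch below uses illustrative function names; the inclusion of U 1 in the product follows the displayed formula, and the value of W for fragments longer than 1028 nucleotides is an assumption where the text stops enumerating thresholds.

def max_word_size(rw):
    """Upper word length W for a fragment of length rw, per the thresholds above."""
    if rw < 18:
        return 3
    if rw < 67:
        return 4
    if rw < 260:
        return 5
    if rw < 1029:
        return 6
    return 7  # assumption: the pattern continues for longer fragments

def vocabulary_usage(seq, i):
    """U_i: distinct i-mers observed, divided by the maximum min(4^i, N-i+1)."""
    n = len(seq)
    observed = {seq[j:j + i] for j in range(n - i + 1)}
    return len(observed) / min(4 ** i, n - i + 1)

def linguistic_complexity(seq):
    """C = U_1 * U_2 * ... * U_W for a DNA fragment."""
    c = 1.0
    for i in range(1, max_word_size(len(seq)) + 1):
        c *= vocabulary_usage(seq, i)
    return c

# Examples from the text: U_2(ACGGGAAGCTGATTCCA) = 14/16, U_2(ACACACACACACACACA) = 2/16.
print(vocabulary_usage("ACGGGAAGCTGATTCCA", 2))    # 0.875
print(vocabulary_usage("ACACACACACACACACA", 2))    # 0.125
print(linguistic_complexity("ACGGGAAGCTGATTCCA"))  # close to 1: complex fragment
print(linguistic_complexity("ACACACACACACACACA"))  # close to 0: repetitive fragment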
https://en.wikipedia.org/wiki/Linguistic_sequence_complexity
Lingyin Li (born 1981) is a Chinese American chemical biologist who is a professor of biochemistry at Stanford University and core investigator at Arc Institute . Her research studies the chemical biology of innate immunity to design better therapeutics. She was named one of Chemical & Engineering News Talented 12 in 2020. Li was born in Xi'an . [ 1 ] She was awarded a position on the competitive University of Science and Technology of China undergraduate program. [ 2 ] She was a doctoral researcher at the University of Wisconsin–Madison , where she worked with Laura L. Kiessling . She moved to Harvard Medical School as a postdoctoral researcher in the laboratory of Tim Mitchison . Li uses chemical biology to understand the mechanisms that underpin immunity, which she will use to develop new therapeutic pathways and targets. The activation of immunity can provide new therapeutic strategies for vaccines, cancer and viral infection. At Harvard, she studied the drug Vadimezan (DMXAA), an activator of the stimulator of interferon genes (STING) pathway, and uncovered that DMXAA binds mouse but not human STING. [ 2 ] STING responds to inflammation and activates inflammatory proteins that trigger the adaptive immune system. [ 2 ] The combination of the innate and adaptive immune system eliminates pathogens and is predicted to fight cancer. Li also discovered ENPP1 as the first known hydrolase of cGAMP, the natural ligand and activator of STING. ENPP1 is an extracellular enzyme, which led her to propose that cGAMP is exported for degradation and thus must play an extracellular role in cancer. [ 3 ] In 2015, Li set up her own lab at Stanford University [ 4 ] where she pioneered the study of the paracrine role of extracellular cGAMP in innate immunity and coined the phrase immunotransmitter. Her lab identified several transporters of the immunotransmitter cGAMP including SLC19A1, [ 5 ] SLC46A2, [ 6 ] LRRC8A:C, [ 7 ] and SLC7A1. [ 8 ] While many in the field have pursued STING agonists as a strategy for cancer immunotherapy, Li proposed an alternative strategy to sustain extracellular cancer signaling through the inhibition of the cGAMP hydrolases ENPP1 and ENPP3. [ 9 ] [ 10 ] She founded Angarus Therapeutics to develop ENPP1 inhibitors, which are now being tested in clinical trials. In 2022, Li became one of the first core investigators at the Arc Institute , a nonprofit research organization that operates in partnership with Stanford University, UCSF, and UC Berkeley.
https://en.wikipedia.org/wiki/Lingyin_Li
The link in a simplicial complex is a generalization of the neighborhood of a vertex in a graph. The link of a vertex encodes information about the local structure of the complex at the vertex. Given an abstract simplicial complex X and v {\textstyle v} a vertex in V ( X ) {\textstyle V(X)} , its link Lk ⁡ ( v , X ) {\textstyle \operatorname {Lk} (v,X)} is a set containing every face τ ∈ X {\textstyle \tau \in X} such that v ∉ τ {\textstyle v\not \in \tau } and τ ∪ { v } {\textstyle \tau \cup \{v\}} is a face of X . Given a geometric simplicial complex X and v ∈ V ( X ) {\textstyle v\in V(X)} , its link Lk ⁡ ( v , X ) {\textstyle \operatorname {Lk} (v,X)} is a set containing every face τ ∈ X {\textstyle \tau \in X} such that v ∉ τ {\textstyle v\not \in \tau } and there is a simplex in X {\textstyle X} that has v {\textstyle v} as a vertex and τ {\textstyle \tau } as a face. [ 1 ] : 3 Equivalently, the join v ⋆ τ {\textstyle v\star \tau } is a face in X {\textstyle X} . [ 2 ] : 20 An alternative definition is: the link of a vertex v ∈ V ( X ) {\textstyle v\in V(X)} is the graph Lk( v , X ) constructed as follows. The vertices of Lk( v , X ) are the edges of X incident to v . Two such edges are adjacent in Lk( v , X ) iff they are incident to a common 2-cell at v . The definition of a link can be extended from a single vertex to any face. Given an abstract simplicial complex X and any face σ {\textstyle \sigma } of X , its link Lk ⁡ ( σ , X ) {\textstyle \operatorname {Lk} (\sigma ,X)} is a set containing every face τ ∈ X {\textstyle \tau \in X} such that σ , τ {\textstyle \sigma ,\tau } are disjoint and τ ∪ σ {\textstyle \tau \cup \sigma } is a face of X : Lk ⁡ ( σ , X ) := { τ ∈ X : τ ∩ σ = ∅ , τ ∪ σ ∈ X } {\textstyle \operatorname {Lk} (\sigma ,X):=\{\tau \in X:~\tau \cap \sigma =\emptyset ,~\tau \cup \sigma \in X\}} . Given a geometric simplicial complex X and any face σ ∈ X {\textstyle \sigma \in X} , its link Lk ⁡ ( σ , X ) {\textstyle \operatorname {Lk} (\sigma ,X)} is a set containing every face τ ∈ X {\textstyle \tau \in X} such that σ , τ {\textstyle \sigma ,\tau } are disjoint and there is a simplex in X {\textstyle X} that has both σ {\textstyle \sigma } and τ {\textstyle \tau } as faces. [ 1 ] : 3 The link of a vertex of a tetrahedron is a triangle – the three vertices of the link corresponds to the three edges incident to the vertex, and the three edges of the link correspond to the faces incident to the vertex. In this example, the link can be visualized by cutting off the vertex with a plane; formally, intersecting the tetrahedron with a plane near the vertex – the resulting cross-section is the link. Another example is illustrated below. There is a two-dimensional simplicial complex. At the left, a vertex is marked in yellow. At the right, the link of that vertex is marked in green. A concept closely related to the link is the star . Given an abstract simplicial complex X and any face σ ∈ X {\textstyle \sigma \in X} , V ( X ) {\textstyle V(X)} , its star St ⁡ ( σ , X ) {\textstyle \operatorname {St} (\sigma ,X)} is a set containing every face τ ∈ X {\textstyle \tau \in X} such that τ ∪ σ {\textstyle \tau \cup \sigma } is a face of X . In the special case in which X is a 1-dimensional complex (that is: a graph ), St ⁡ ( v , X ) {\textstyle \operatorname {St} (v,X)} contains all edges { u , v } {\textstyle \{u,v\}} for all vertices u {\textstyle u} that are neighbors of v {\textstyle v} . That is, it is a graph-theoretic star centered at u {\textstyle u} . 
Given a geometric simplicial complex X and any face σ ∈ X {\textstyle \sigma \in X} , its star St ⁡ ( σ , X ) {\textstyle \operatorname {St} (\sigma ,X)} is a set containing every face τ ∈ X {\textstyle \tau \in X} such that there is a simplex in X {\textstyle X} having both σ {\textstyle \sigma } and τ {\textstyle \tau } as faces: St ⁡ ( σ , X ) := { τ ∈ X : ∃ ρ ∈ X : τ , σ are faces of ρ } {\textstyle \operatorname {St} (\sigma ,X):=\{\tau \in X:\exists \rho \in X:\tau ,\sigma {\text{ are faces of }}\rho \}} . In other words, it is the closure of the set { ρ ∈ X : σ is a face of ρ } {\textstyle \{\rho \in X:\sigma {\text{ is a face of }}\rho \}} -- the set of simplices having σ {\textstyle \sigma } as a face. So the link is a subset of the star. The star and link are related as follows: An example is illustrated below. There is a two-dimensional simplicial complex. At the left, a vertex is marked in yellow. At the right, the star of that vertex is marked in green.
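For an abstract simplicial complex stored as a set of faces, each face a frozenset of vertices and the set closed under taking subsets, the definitions of link and star above translate almost literally into code. The following Python sketch uses illustrative helper names and includes the empty face in the complex, a convention that not all authors adopt.

from itertools import combinations

def closure(faces):
    """All subfaces of the given faces, so the complex is closed under subsets."""
    out = set()
    for f in faces:
        for k in range(len(f) + 1):
            out.update(frozenset(c) for c in combinations(f, k))
    return out

def link(sigma, complex_faces):
    """Lk(sigma, X) = { tau in X : tau and sigma disjoint, tau union sigma in X }."""
    sigma = frozenset(sigma)
    return {tau for tau in complex_faces
            if not (tau & sigma) and (tau | sigma) in complex_faces}

def star(sigma, complex_faces):
    """St(sigma, X) = { tau in X : tau union sigma in X }."""
    sigma = frozenset(sigma)
    return {tau for tau in complex_faces if (tau | sigma) in complex_faces}

# A solid tetrahedron on the vertices 1..4.
X = closure([frozenset({1, 2, 3, 4})])

# The link of vertex 1 is the triangle on {2, 3, 4}: its three vertices,
# three edges and the 2-face itself (plus the empty face under this convention).
print(sorted(tuple(sorted(f)) for f in link({1}, X)))
print(len(star({1}, X)))   # every face of the solid tetrahedron is in the star of 1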
https://en.wikipedia.org/wiki/Link_(simplicial_complex)
Link Labs is an American company based in Annapolis, Maryland , that develops computer network technology for business and industrial customers. Link Labs technologies are marketed for Internet of things (IoT) applications and devices. [ 1 ] [ 2 ] The company was founded in 2014 [ 3 ] by Brian Ray and 3 engineers from the Johns Hopkins Applied Physics Laboratory . [ 4 ] In August 2015, it announced a venture capital investment of $5.7 million. The investment round was led by TCP [ which? ] and joined by the Maryland Venture Fund, Blu Venture, Inflection Point Partners, and others. [ 4 ] Symphony Link is a low power, wide-area wireless network (LPWAN) that allows for monitoring and two-way communication with sensor devices. [ 4 ] According to Link Labs, Symphony Link can support up to 250,000 endpoints on each gateway and ranges up to 7 miles. [ 5 ] Additionally, Symphony Link supports upgrading firmware over the air and allows for sending and receiving compressed bidirectional message acknowledgements. [ 2 ] AirFinder is a product division of Link Labs and a real-time location system (RTLS). It utilizes open-source iBeacon and Bluetooth Low Energy (BLE) technology to track assets and individuals. According to Link Labs, AirFinder is used to improve efficiencies through location tracking in healthcare organizations, manufacturing plants, and transport hubs. [ 6 ]
https://en.wikipedia.org/wiki/Link_Labs
Link Motion Inc , formerly NetQin and NQ Mobile , is a multinational technology company that develops, licenses, supports and sells software and services that focus on the smart ride business. Link Motion sells carputers for car businesses, consumer ride sharing services, as well as legacy mobile security , productivity and other related applications. [ 2 ] Link Motion maintains dual headquarters in Dallas , Texas , United States and Beijing , China. A Court Receiver, lawyer Robert Seiden , was appointed over Link Motion in February 2019 in the United States in the federal district court in the Southern District of New York by Judge Victor Marrero. The Receiver removed Wenyong “Vincent” Shi as chairman and chief executive officer, and replaced him by appointing Mr. Lilin “Francis” Guo. [ 3 ] Link Motion was founded as NQ Mobile in 2005 by Dr. Henry Lin , formerly the youngest associate professor at the Beijing University of Posts and Telecommunications , and Dr. Vincent Shi. [ 4 ] The company began its business by offering mobile security services and later started offering productivity products to families and enterprise customers. Their services were compatible with a wide range of handset models and almost all currently available operating systems for smartphones, including Java , Symbian , iOS , Android , Windows Phone and BlackBerry OS . NQ Mobile also collaborated closely with other mobile ecosystem participants, including chipmakers, handset manufacturers, wireless carriers, third party payment channels, retailers and other distribution channels in order to broaden the reach of their services. NQ Mobile's initial focus was the China marketplace. The company cooperated with China Mobile , China Unicom and China Telecom , the three largest mobile companies in China. NQ Mobile also cooperated with Nokia and Sony to pre-installed NQ products on their companywide mobile phones. NQ Mobile has also worked closely with Symbian , Windows Mobile and Android , developing mobile security applications based on those operating systems. In addition, Samsung , Motorola , Dopod , Lenovo , Tencent , and Baidu have all been the company's partners. In August 2011, Chris Stier was appointed managing director for the Americas and became responsible for NQ Mobile's business development throughout the Americas , overseeing sales and marketing operations as well as establishing strategic partnerships with key industry players in the region. [ 5 ] In October 2011, Geoff Casely was appointed managing director for the Europe, Middle East, and Africa (EMEA) region based in London and became responsible for NQ Mobile's business development in EMEA and building strategic partner relationships. [ 6 ] Omar Khan joined the company in January 2012 as co-CEO to direct the company alongside the current chairman and chief executive officer Dr. Henry Lin and the company changed its corporate name from NetQin Mobile Inc. to NQ Mobile Inc. [ 7 ] Mr. Khan focused on the global expansion of NQ Mobile into markets such as North America , Latin America , Europe , Japan, Korea and India. Dr. Lin continued to focus on the core markets such as China and Taiwan among other developing countries. During the first half of 2012, NQ Mobile expanded its international management with the additions of Gavin Kim as chief product officer, Kim Titus, senior director of Communication, Conrad Edwards as chief experience officer, and Victoria Repice as senior director of product management. 
[ 8 ] NQ Mobile expanded its mobile internet services in November 2012 with the acquisition of Feiliu. [ 9 ] [ 10 ] Feiliu was founded in 2009 and was subsequently rebranded to FL Mobile. It is a leading mobile interest-based community platform with coverage in China that engages users in real-time mobile online activities. FL Mobile provides application recommendation services, interest-based exchanges, and mobile games to its user communities. According to data published by third party marketing research company Sino MR, FL Mobile was the top iOS mobile game publisher and operator in the Chinese market in December 2012. [ 11 ] FL Mobile had 87.3 million registered users and 16.1 million monthly active users by the end of June 2013. [ 12 ] EnfoDesk Analysys International (EnfoDesk), a major market tracking company, reported that FL Mobile became the number one publisher on the iOS platform and increased its market share to 36.6 percent in the first half of 2013. [ 13 ] The first-place ranking included the top spot for both revenues and number of mobile users. The report also claims FL Mobile ranks third place across all platforms for both revenues and mobile users and maintains 18.8 percent share of total revenues in the first half of the 2013. NQ Mobile also expanded into enterprise security products and services starting in May 2012 when it acquired 55% of NationSky and the remaining 45% in July 2013. [ 14 ] [ 15 ] Founded in 2005, NationSky is a leader in providing mobile services to more than 1,250 enterprises in China. By working with carriers and smart phone platform providers, NationSky delivers device agnostic managed mobile services, self developed mobile device management (MDM) software NQSky [ 16 ] and other mobile SaaS services. Headquartered in Beijing , NationSky also has offices in Shanghai and Shenzhen . In June 2013, NQ Mobile hired Matt Mathison to the senior management position of vice president, Capital Markets . In August 2013, NQ Mobile opened a second global headquarters in Dallas , Texas . [ 17 ] [ 18 ] The company also further expanded its products and service offerings with the acquisitions of Shanghai Yinlong Information and Technology Co., Ltd. ("Yinlong") to develop content-based music information retrieval (MIR) technology based on multi platforms, NQ Mobile (Shenzhen) Co., Ltd. ("NQ Shenzhen") to offer online security education and value added services, Best Partners Ltd. ("Best Partner") for mobile advertising , Beijing Tianya Co., Ltd. ("Tianya") for mobile healthcare applications development and search engine marketing in the healthcare industry in China, Chengdu Ruifeng Technology Co., Ltd. ("Ruifeng") to provide enterprise mobility system development and iOS training programs, Tianjin Huayong Wireless Technology Co., Ltd. ("Huayong ") for research and development and marketing of live wallpapers for smart phones using the Android operating system , and expanded its market with NQ Mobile KK ("NQ Japan") in Japan. [ 19 ] [ 20 ] In 2014 NQ Mobile continued expanding through acquisitions with Beijing Trustek Technology Co., Ltd. ("Trustek") to provide enterprise mobility services, including system management , application development , business intelligence and maintenance services, Yipai Tianxia Network Technology Co., Ltd. ("Yipai") to provide mobile intelligent interactive advertising services, through integration of media channels of outdoor, newspapers, magazines etc., Beijing Showself Technology Co., Ltd. 
(" Showself ") to provide entertainment and dating platforms on mobile internet, and established Beijing NQ Mobile Co., Ltd. ("NQ Yizhuang ") to engage in software design and development for computer and mobile devices and other technology consulting services. The company also took a controlling stake in Link Motion . [ 20 ] In May 2015, Mr. Zemin Xu [ 21 ] took over as CEO and the company held a press conference in Beijing to announce their new business strategy and reorganized along two lines, a technical division representing mobile security, mobile enterprise and mobile health care, and an entertainment division covering mobile advertising, mobile entertainment and mobile games. [ 22 ] During the conference NQ Mobile also announced its new Showself Entertainment brand which includes Showself, Showself Live Wallpaper, Showself Music Radar and Showself Launcher. [ 23 ] In June 2015, Mr. Roland Wu was appointed as chief financial officer. [ 24 ] In August 2015 the company along with the other existing shareholders of FL Mobile Inc. agreed to sell to Beijing Jinxing Rongda Investment Management Co. Ltd., a subsidiary of Tsinghua Holdings Co., Ltd , the entire stake in FL Mobile Inc. that they currently hold for no less than RMB 4 billion (or approximately no less than US$626 million) and also the sale of all of NQ Mobile's interest in Beijing NationSky Network Technology Co., Ltd., to Mr. Hou Shuli, a founder and senior management member of Beijing NationSky, for an aggregate consideration of US$80 million. [ 25 ] The company completed the divestment of NationSky for $80 million at the end of 2015. [ 26 ] Throughout 2016 NQ Mobile continued to consolidate and began shifting its core business to smart cars while working on the divestments of FL Mobile and other businesses. On March 30, 2017, the company announced a new agreement to sell FL Mobile for RMB 4 billion along with Beijing Showself for RMB 1.23 million to Tongfang Investment Fund Series SPC, an affiliate of Tsinghua Tongfang . [ 27 ] [ 28 ] The divestment of FL Mobile and Beijing Showself was completed in December 2017. [ 29 ] In January 2018, NQ Mobile announced that its board of directors approved a rebranding effort around its new focus as a vehicular automation and mobility as a service company by change its name from “NQ Mobile Inc.” to “Link Motion Inc.” and its ticker from “NQ” to “LKM.” [ 30 ] In February 2018, the company hired MZ Group for investor relations and financial communications across all key markets [ 31 ] and changed its name to Link Motion Inc. and their ticker to LKM. [ 32 ] In March 2018, Link Motion Inc. appointed Mr. Duo Tang to executive vice president and the head of the company's smart ride business. [ 33 ] In February 2019, the federal court in New York appointed Robert W. Seiden , a lawyer and former prosecutor, as Receiver over Link Motion to preserve the assets of the company. Seiden was also appointed receiver over LKM and its subsidiaries in Hong Kong by the High Court of the Hong Kong Special Administrative Region Court of the First Instance, along with Lauren Lau of KLC. The Receiver removed Wenyong “Vincent” Shi as chairman and chief executive officer of Link Motion and replaced him by appointing Mr. Lilin “Francis” Guo. 
[ 3 ] Revenue sources include third-party application referrals from mobile applications, banner ads and intelligent interactive advertising services through user modeling and image recognition technology to search for advertisers’ products and services that are of potential interest. Trustek offers mobility strategy consulting, architecture design, hardware and software procurement and deployment, mobile device and application management, training, maintenance and other ongoing support services to enterprise customers. [ 34 ] In October 2005, the company launched its first mobile security product NetQin 1.0. [ 35 ] In November 2009, The 2009 China Frost & Sullivan Award for Mobile Security Market Leadership of the year was presented to NetQin Tech. Co., Ltd. (NetQin) for its leading market share in China mobile security market, continued commitment and excellence in R&D, and outstanding contribution to the industry. [ 36 ] In May 2011, The company announced that its initial public offering of 7,750,000 American depositary shares ("ADSs"), each representing five Class A common shares of the company, was priced at $11.50 per ADS, with a total offering size of US$89.125 million, assuming no exercise of the over-allotment option. [ 37 ] On May 5, 2011, NQ Mobile started trading on the New York Stock Exchange (NYSE) under the symbol “NQ”. [ 38 ] In July 2011, NQ Mobile reached 100 Million registered users nearly 100% growth since June, 2010 [ 39 ] and signed a framework agreement with Telefónica, S.A. (Telefónica) to provide mobile Internet services to the subscribers of Telefónica . Under the agreement, NQ Mobile's mobile internet services will be integrated in Telefónica's and its subsidiary's App Store and in mobile devices distributed by Telefónica and subsidiaries. [ 40 ] In September 2011, NQ Mobile and Brightstar Corp. signed a global go-to-market agreement to promote adoption of NQ Mobile security products. [ 41 ] The company also opened the NQ Mobile Security Research Center based in Raleigh, N.C. led by Dr. Xuxian Jiang, who was appointed chief scientist. [ 42 ] In January 2012, NetQin launched its new "NQ Mobile" brand, under which it now conducts all of its international business, and announced plans to change the company's corporate name from NetQin Mobile Inc. to NQ Mobile Inc. [ 43 ] [ 44 ] The company also signed an agreement to pre-install NQ Mobile Security on Motorola Android smartphones in China [ 45 ] and released a new version of its antivirus software, Mobile Security V6.0 for Android. [ 46 ] In February 2012, NQ Mobile integrated the BlueVia payment API from Telefónica , providing a mobile payment option to Telefónica's subscribers. [ 47 ] [ 48 ] In April 2012, NQ Mobile announced that The Cellular Connection (TCC) will offer NQ Mobile Security at more than 800 Verizon Wireless Premium Retail locations across the U.S. Rollout of this program will begin with availability at TCC's nearly 300 corporate stores. In May 2012, NQ Mobile visited the NYSE to celebrate the company's 1-year anniversary of listing on the NYSE. In honor of the occasion, Omar Khan and Yu Lin , CO-CEOs of NQ Mobile, rang The Closing Bell. [ 49 ] The company also acquired 55% of Beijing NationSky Network Technology, Inc. ("NationSky"), a provider of mobile services to enterprises in China and signed a collaboration agreement with A Wireless to offer NQ Mobile Guard in more than 125 Verizon Wireless Premium Retail locations in the US. [ 20 ] In August 2012, NQ Mobile and MediaTek Inc. 
reached an agreement regarding NQ Mobile's acquisition of approximately one-third interest in Hesine Technologies International Worldwide Inc. ("Hesine"), a wholly owned subsidiary of MediaTek and a premier mobile messaging provider. NQ Mobile's co-founder, chairman and co-CEO, Henry Lin joined the board of directors of Hesine. [ 50 ] The company also announced the launch of NQ Mobile Vault for iPhone . [ 51 ] In September 2012, NQ Mobile announced the launch of NQ Family Guardian. [ 52 ] In November 2012, acquired Beijing Feiliu Jiutian Technology Co. ("Feiliu") and later rebranded it to FL Mobile. [ 53 ] The company also announced that epay Australia, a Division of Euronet Worldwide, Inc. (NASDAQ: EEFT), will offer NQ Mobile Guard in major retail locations across Australia, including Harvey Norman and Allphones , [ 54 ] UK retailer Phones 4u will offer NQ Mobile Security at over 600 retail locations across the UK . In December 2012, NQ Mobile announced the launch of a proprietary security check service for HTC's App Store in mainland China. [ 55 ] In July 2013, NQ Mobile agrees to purchase the remaining 45 percent stake in its subsidiary, NationSky. [ 56 ] [ 57 ] In September 2013, NQ Mobile announced the release of "Music Radar," a content-based music information retrieval (MIR) application from one of its subsidiaries, Yinlong making the app available in China for both Android and iOS platforms. [ 58 ] The app was later renamed Doreso . [ 59 ] In October 2013, NQ's stock “fell a shattering 47 percent”, followed by lawsuits. The short-seller research firm Muddy Waters LLC alleged that "at least 72 percent of the company’s revenue in China is fictitious and that its actual market share in China is 1.5 percent instead of 55 percent that it had claimed". [ 60 ] An independent investigation conducted by an independent special committee of its board of directors and carried out by its independent counsel Shearman & Sterling LLP and Deloitte & Touche Financial Advisory Services Limited acting as forensic accountants found the companies disclosures were verifiable. [ citation needed ] However, in April 2015, the co-CEO of NQ Mobile, Omar Khan, stepped down after the stock had fallen nearly 84 percent. [ 61 ] NQ Mobile Security and NQ Family Guardian were both selected as top 25 apps at the Mobile Apps Showdown for CES 2013 in December, 2012. [ 62 ] NQ Mobile was granted the 2011 Technology Pioneer Award by the World Economic Forum for its technology leadership and innovation in mobile security. [ 63 ] “The company’s heavy investment in R&D has resulted in 23 patented and patent-pending technologies, giving the company a leading edge in the burgeoning mobile security market.” [ 64 ] Time Magazine named the company one of the “10 Start-Ups That Will Change Your Life” in September, 2010. [ 65 ] [ 66 ] NQ Mobile Security was selected as a top 20 app at the Global Mobile Internet Conference Silicon Valley (GMIC SV) in October, 2012. In addition, NQ Mobile Vault for Android was selected as a top 100 app. [ 67 ] Deloitte Technology Fast 50 (2010) [ 68 ] NQ Mobile Security received 4 out of 5 stars when reviewed by PC Advisor. [ 69 ] NQ Mobile Vault received 4 out of 5 star, both from CNet [ 70 ] and from PC Magazines. [ 69 ] Muddy Waters Research accused NQ Mobile of fraud in a 2013 report, alleging inflated revenues and misrepresented operations . [ 71 ] In April 2015 an analysis of the NQ Vault product indicated that it only encrypted the first 128 bytes of the data, leaving the rest unencrypted. 
NQ Mobile responded by saying that the encryption level was "appropriate". [ 72 ] In August 2011, NQ Mobile and MediaTek reached an agreement on mobile security cooperation whereby MediaTek will make NQ Mobile's mobile security service available to the MediaTek's smartphone chipset. [ 73 ] The company also signed an agreement with Taiwan Mobile to provide mobile anti-virus services to Taiwan Mobile subscribers in Taiwan. [ 74 ] In June 2012, NQ Mobile announced an alliance with TDMobility, the joint U.S. venture between Brightstar Corp and Tech Data Corporation . The collaboration will enable TDMobility to bring NQ Enterprise Shield to Tech Data's network of 65,000 Value Added Resellers across the US, serving small, medium, and large businesses. [ 75 ] The company also announced the official global launch of NQ Enterprise Shield [ 76 ] and scientists from NQ Mobile's Mobile Security Research Center, in collaboration with North Carolina State University disclosed a new way to detect mobile threats without relying on known malware samples and their signatures. [ 77 ] In October 2012, NQ Mobile announced that its applications, including NQ Mobile Guard, NQ Mobile Vault for Android and NQ Family Guardian, will be offered by GoWireless at more than 350 and Wireless at more than 80 Verizon Wireless Premium Retail locations across the United States. [ 78 ]
https://en.wikipedia.org/wiki/Link_Motion_Inc
In computer networking , link aggregation is the combining ( aggregating ) of multiple network connections in parallel by any of several methods. Link aggregation increases total throughput beyond what a single connection could sustain, and provides redundancy where all but one of the physical links may fail without losing connectivity. A link aggregation group ( LAG ) is the combined collection of physical ports. Other umbrella terms used to describe the concept include trunking , [ 1 ] bundling , [ 2 ] bonding , [ 1 ] channeling [ 3 ] or teaming . Implementation may follow vendor-independent standards such as Link Aggregation Control Protocol (LACP) for Ethernet , defined in IEEE 802.1AX or the previous IEEE 802.3ad , but also proprietary protocols . Link aggregation increases the bandwidth and resilience of Ethernet connections. Bandwidth requirements do not scale linearly. Ethernet bandwidths historically have increased tenfold each generation: 10 Mbit/s , 100 Mbit/s , 1000 Mbit/s , 10 000 Mbit/s . If one started to bump into bandwidth ceilings, then the only option was to move to the next generation, which could be cost prohibitive. An alternative solution, introduced by many of the network manufacturers in the early 1990s, is to use link aggregation to combine two physical Ethernet links into one logical link. Most of these early solutions required manual configuration and identical equipment on both sides of the connection. [ 4 ] There are three single points of failure inherent to a typical port-cable-port connection, in either a computer-to-switch or a switch-to-switch configuration: the cable itself or either of the ports the cable is plugged into can fail. Multiple logical connections can be made, but many of the higher level protocols were not designed to fail over completely seamlessly. Combining multiple physical connections into one logical connection using link aggregation provides more resilient communications. Network architects can implement aggregation at any of the lowest three layers of the OSI model . Examples of aggregation at layer 1 ( physical layer ) include power line (e.g. IEEE 1901 ) and wireless (e.g. IEEE 802.11) network devices that combine multiple frequency bands. OSI layer 2 ( data link layer , e.g. Ethernet frame in LANs or multi-link PPP in WANs, Ethernet MAC address ) aggregation typically occurs across switch ports, which can be either physical ports or virtual ones managed by an operating system. Aggregation at layer 3 ( network layer ) in the OSI model can use round-robin scheduling , hash values computed from fields in the packet header, or a combination of these two methods. Regardless of the layer on which aggregation occurs, it is possible to balance the network load across all links. However, in order to avoid out-of-order delivery , not all implementations take advantage of this. Most methods provide failover as well. Combining can either occur such that multiple interfaces share one logical address (i.e. IP) or one physical address (i.e. MAC address), or it allows each interface to have its own address. The former requires that both ends of a link use the same aggregation method, but has performance advantages over the latter. Channel bonding is differentiated from load balancing in that load balancing divides traffic between network interfaces on per network socket (layer 4) basis, while channel bonding implies a division of traffic between physical interfaces at a lower level, either per packet (layer 3) or a data link (layer 2) basis. 
[ citation needed ] By the mid-1990s, most network switch manufacturers had included aggregation capability as a proprietary extension to increase bandwidth between their switches. Each manufacturer developed its own method, which led to compatibility problems. The IEEE 802.3 working group took up a study group to create an interoperable link layer standard (i.e. encompassing the physical and data-link layers both) in a November 1997 meeting. [ 4 ] The group quickly agreed to include an automatic configuration feature which would add in redundancy as well. This became known as Link Aggregation Control Protocol (LACP). As of 2000 [update] , most gigabit channel-bonding schemes used the IEEE standard of link aggregation which was formerly clause 43 of the IEEE 802.3 standard added in March 2000 by the IEEE 802.3ad task force. [ 5 ] Nearly every network equipment manufacturer quickly adopted this joint standard over their proprietary standards. The 802.3 maintenance task force report for the 9th revision project in November 2006 noted that certain 802.1 layers (such as 802.1X security) were positioned in the protocol stack below link aggregation which was defined as an 802.3 sublayer. [ 6 ] To resolve this discrepancy, the 802.3ax (802.1AX) task force was formed, [ 7 ] resulting in the formal transfer of the protocol to the 802.1 group with the publication of IEEE 802.1AX-2008 on 3 November 2008. [ 8 ] As of February 2025 the current revision of the standard is 802.1AX-2020 . [ 9 ] [ 10 ] For an overview of the history of the 802.1AX standard, see this part of the table on the IEEE 802.1 family of standards. Within the IEEE Ethernet standards, the Link Aggregation Control Protocol (LACP) provides a method to control the bundling of several physical links together to form a single logical link. LACP allows a network device to negotiate an automatic bundling of links by sending LACP packets to their peer, a directly connected device that also implements LACP. LACP Features and practical examples LACP works by sending frames (LACPDUs) down all links that have the protocol enabled. If it finds a device on the other end of a link that also has LACP enabled, that device will independently send frames along the same links in the opposite direction enabling the two units to detect multiple links between themselves and then combine them into a single logical link. LACP can be configured in one of two modes: active or passive. In active mode, LACPDUs are sent 1 per second along the configured links. In passive mode, LACPDUs are not sent until one is received from the other side, a speak-when-spoken-to protocol. In addition to the IEEE link aggregation substandards, there are a number of proprietary aggregation schemes including Cisco's EtherChannel and Port Aggregation Protocol , Juniper's Aggregated Ethernet, AVAYA's Multi-Link Trunking , Split Multi-Link Trunking , Routed Split Multi-Link Trunking and Distributed Split Multi-Link Trunking , ZTE's Smartgroup, Huawei's Eth-Trunk, and Connectify 's Speedify . [ 13 ] Most high-end network devices support some form of link aggregation. Software-based implementations – such as the *BSD lagg package, Linux bonding driver, Solaris dladm aggr , etc. – exist for many operating systems. The Linux bonding driver [ 14 ] provides a method for aggregating multiple network interface controllers (NICs) into a single logical bonded interface of two or more so-called (NIC) slaves . 
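The consequence of the two modes can be shown with a toy sketch (this is not an LACP implementation, only the speak-first behaviour described above): LACPDUs are exchanged, and a bundle can be negotiated, only if at least one side of the link is active.

def lacp_negotiates(mode_a, mode_b):
    """Will two directly connected LACP ports end up exchanging LACPDUs?

    'active'  - transmits LACPDUs unprompted (one per second)
    'passive' - transmits only after receiving an LACPDU from the peer
    """
    a_speaks_first = (mode_a == "active")
    b_speaks_first = (mode_b == "active")
    # A passive port starts responding once the other side talks,
    # so the exchange happens unless both sides stay silent.
    return a_speaks_first or b_speaks_first

for a in ("active", "passive"):
    for b in ("active", "passive"):
        outcome = "bundle can form" if lacp_negotiates(a, b) else "no LACPDUs exchanged"
        print(a, "+", b, "->", outcome)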
The majority of modern Linux distributions come with a Linux kernel which has the Linux bonding driver integrated as a loadable kernel module and the ifenslave (if = [network] interface) user-level control program pre-installed. Donald Becker programmed the original Linux bonding driver. It came into use with the Beowulf cluster patches for the Linux kernel 2.0. Modes for the Linux bonding driver [ 14 ] (network interface aggregation modes) are supplied as parameters to the kernel bonding module at load time. They may be given as command-line arguments to the insmod or modprobe commands, but are usually specified in a Linux distribution-specific configuration file. The behavior of the single logical bonded interface depends upon its specified bonding driver mode. The default parameter is balance-rr. The Linux Team driver [ 17 ] provides an alternative to bonding driver. The main difference is that Team driver kernel part contains only essential code and the rest of the code (link validation, LACP implementation, decision making, etc.) is run in userspace as a part of teamd daemon. Link aggregation offers an inexpensive way to set up a high-capacity backbone network that transfers multiple times more data than any single port or device can deliver. Link aggregation also allows the network's backbone speed to grow incrementally as demand on the network increases, without having to replace everything and deploy new hardware. Most backbone installations install more cabling or fiber optic pairs than is initially necessary. This is done because labor costs are higher than the cost of the cable, and running extra cable reduces future labor costs if networking needs change. Link aggregation can allow the use of these extra cables to increase backbone speeds for little or no extra cost if ports are available. When balancing traffic, network administrators often wish to avoid reordering Ethernet frames. For example, TCP suffers additional overhead when dealing with out-of-order packets. This goal is approximated by sending all frames associated with a particular session across the same link. Common implementations use L2 or L3 hashes (i.e. based on the MAC or the IP addresses), ensuring that the same flow is always sent via the same physical link. [ 18 ] [ 19 ] [ 20 ] However, this may not provide even distribution across the links in the trunk when only a single or very few pairs of hosts communicate with each other, i.e. when the hashes provide too little variation. It effectively limits the client bandwidth in aggregate. [ 19 ] In the extreme, one link is fully loaded while the others are completely idle and aggregate bandwidth is limited to this single member's maximum bandwidth. For this reason, an even load balancing and full utilization of all trunked links is almost never reached in real-life implementations. NICs trunked together can also provide network links beyond the throughput of any one single NIC. For example, this allows a central file server to establish an aggregate 2-gigabit connection using two 1-gigabit NICs teamed together. Note the data signaling rate will still be 1 Gbit/s , which can be misleading depending on methodologies used to test throughput after link aggregation is employed. Microsoft Windows Server 2012 supports link aggregation natively. Previous Windows Server versions relied on manufacturer support of the feature within their device driver software. Intel , for example, released Advanced Networking Services (ANS) to bond Intel Fast Ethernet and Gigabit cards. 
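The flow-affinity behaviour described above can be sketched as a hash over header fields taken modulo the number of member links, a simplified stand-in for vendor- and driver-specific hash policies (for example, the layer2 or layer3+4 transmit hash policies of the Linux bonding driver): every frame of a given flow maps to the same member link, which preserves ordering but can leave the links unevenly loaded when only a few distinct flows exist.

import hashlib

def pick_member_link(src, dst, n_links):
    """Map a flow key (e.g. MAC pair or IP:port pair) to one member link index."""
    digest = hashlib.sha256(f"{src}-{dst}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_links

# All frames of a flow take the same link, so they stay in order...
print(pick_member_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", 4))
print(pick_member_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", 4))

# ...but with only a handful of flows the member links may be loaded very unevenly.
flows = [("10.0.0.1:5000", "10.0.0.2:80"),
         ("10.0.0.1:5001", "10.0.0.2:80"),
         ("10.0.0.3:6000", "10.0.0.4:443")]
counts = [0] * 4
for src, dst in flows:
    counts[pick_member_link(src, dst, 4)] += 1
print(counts)   # frames per member link for these three flows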
[ 21 ] Nvidia supports teaming with their Nvidia Network Access Manager/Firewall Tool. HP has a teaming tool for HP-branded NICs which supports several modes of link aggregation including 802.3ad with LACP. In addition, there is a basic layer-3 aggregation [ 22 ] that allows servers with multiple IP interfaces on the same network to perform load balancing, and for home users with more than one internet connection, to increase connection speed by sharing the load on all interfaces. [ 23 ] Broadcom offers advanced functions via Broadcom Advanced Control Suite (BACS), via which the teaming functionality of BASP (Broadcom Advanced Server Program) is available, offering 802.3ad static LAGs, LACP, and "smart teaming" which doesn't require any configuration on the switches to work. It is possible to configure teaming with BACS with a mix of NICs from different vendors as long as at least one of them is from Broadcom and the other NICs have the required capabilities to support teaming. [ 24 ] Linux , FreeBSD , NetBSD , OpenBSD , macOS , OpenSolaris and commercial Unix distributions such as AIX implement Ethernet bonding at a higher level and, as long as the NIC is supported by the kernel, can deal with NICs from different manufacturers or using different drivers. [ 14 ] Citrix XenServer and VMware ESX have native support for link aggregation. XenServer offers both static LAGs as well as LACP. vSphere 5.1 (ESXi) supports both static LAGs and LACP natively with their virtual distributed switch. [ 25 ] Microsoft's Hyper-V does not offer link aggregation support from the hypervisor level, but the above-mentioned methods for teaming under Windows apply to Hyper-V. With the modes balance-rr , balance-xor , broadcast and 802.3ad , all physical ports in the link aggregation group must reside on the same logical switch, which, in most common scenarios, will leave a single point of failure when the physical switch to which all links are connected goes offline. The modes active-backup , balance-tlb , and balance-alb can also be set up with two or more switches. But after failover (like all other modes), in some cases, active sessions may fail (due to ARP problems) and have to be restarted. However, almost all vendors have proprietary extensions that resolve some of this issue: they aggregate multiple physical switches into one logical switch. Nortel's split multi-link trunking (SMLT) protocol allows multiple Ethernet links to be split across multiple switches in a stack, preventing any single point of failure and additionally allowing all switches to be load balanced across multiple aggregation switches from the single access stack. These devices synchronize state across an Inter-Switch Trunk (IST) such that they appear to the connecting (access) device to be a single device (switch block) and prevent any packet duplication. SMLT provides enhanced resiliency with sub-second failover and sub-second recovery for all speed trunks while operating transparently to end-devices. Multi-chassis link aggregation group provides similar features in a vendor-nonspecific manner. To the connected device, the connection appears as a normal link aggregated trunk. The coordination between the multiple sources involved is handled in a vendor-specific manner. In most implementations, all the ports used in an aggregation consist of the same physical type, such as all copper ports (10/100/1000BASE‑T), all multi-mode fiber ports, or all single-mode fiber ports. 
However, all the IEEE standard requires is that each link be full duplex and all of them have an identical speed (10, 100, 1,000 or 10,000 Mbit/s ). Many switches are PHY independent, meaning that a switch could have a mixture of copper, SX, LX, LX10 or other GBIC / SFP modular transceivers. While maintaining the same PHY is the usual approach, it is possible to aggregate a 1000BASE-SX fiber for one link and a 1000BASE-LX (longer, diverse path) for the second link. One path may have a longer propagation time but since most implementations keep a single traffic flow on the same physical link (using a hash of either MAC addresses, IP addresses, or IP/ transport-layer port combinations as index) this doesn't cause problematic out-of-order delivery . Aggregation mismatch refers to not matching the aggregation type on both ends of the link. Some switches do not implement the 802.1AX standard but support static configuration of link aggregation. Therefore, link aggregation between similarly statically configured switches may work but will fail between a statically configured switch and a device that is configured for LACP. On Ethernet interfaces, channel bonding requires assistance from both the Ethernet switch and the host computer's operating system , which must stripe the delivery of frames across the network interfaces in the same manner that I/O is striped across disks in a RAID 0 array. [ citation needed ] For this reason, some discussions of channel bonding also refer to Redundant Array of Inexpensive Nodes (RAIN) or to redundant array of independent network interfaces . [ 26 ] In analog modems, multiple dial-up links over POTS may be bonded. Throughput over such bonded connections can come closer to the aggregate bandwidth of the bonded links than can throughput under routing schemes which simply load-balance outgoing network connections over the links. Similarly, multiple DSL lines can be bonded to give higher bandwidth; in the United Kingdom , ADSL is sometimes bonded to give for example 512 kbit/s upload bandwidth and 4 Mbit/s download bandwidth, in areas that only have access to 2 Mbit/s bandwidth. [ citation needed ] Under the DOCSIS 3.0 and 3.1 specifications for data over cable TV systems, multiple channels may be bonded. Under DOCSIS 3.0, up to 32 downstream and 8 upstream channels may be bonded. [ 27 ] These are typically 6 or 8 MHz wide. DOCSIS 3.1 defines more complicated arrangements involving aggregation at the level of subcarriers and larger notional channels. [ 28 ] Broadband bonding is a type of channel bonding that refers to aggregation of multiple channels at OSI layers at level four or above. Channels bonded can be wired links such as a T-1 or DSL line . Additionally, it is possible to bond multiple cellular links for an aggregated wireless bonded link. Other bonding methodologies reside at lower OSI layers, requiring coordination with telecommunications companies for implementation. Broadband bonding, because it is implemented at higher layers, can be done without this coordination. [ 29 ] Commercial implementations of broadband channel bonding include: On 802.11 (Wi-Fi), channel bonding is used in Super G technology, referred to as 108 Mbit/s . It bonds two channels of standard 802.11g , which has 54 Mbit/s data signaling rate per channel. On IEEE 802.11n , a mode with a channel width of 40 MHz is specified. This is not channel bonding, but a single channel with double the older 20 MHz channel width, thus using two adjacent 20 MHz bands. 
This allows direct doubling of the PHY data rate from a single 20 MHz channel.
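The hash-based flow distribution described above, in which every frame of a session is sent over the same member link by hashing address fields, can be sketched in a few lines of Python. The field choice and the XOR-and-modulo policy below are illustrative assumptions, not the exact hash used by any particular switch or bonding mode.

```python
import zlib

def select_link(src_mac: str, dst_mac: str, src_ip: str, dst_ip: str,
                num_links: int) -> int:
    """Map a flow (identified by its addresses) to one member link of the trunk."""
    flow_key = 0
    for field in (src_mac, dst_mac, src_ip, dst_ip):
        flow_key ^= zlib.crc32(field.encode())   # cheap, deterministic per-field hash
    return flow_key % num_links                  # same flow -> same link, no reordering

# Example: a single host pair always lands on the same member link.
link = select_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02",
                   "10.0.0.1", "10.0.0.2", num_links=4)
print(f"flow pinned to member link {link}")
```

Because the mapping depends only on the addresses, a small number of host pairs can all hash onto the same member link, which is the uneven-distribution limitation noted above.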
https://en.wikipedia.org/wiki/Link_aggregation
A link budget is an accounting of all of the power gains and losses that a communication signal experiences in a telecommunication system; from a transmitter, through a communication medium such as radio waves , cables , waveguides , or optical fibers , to the receiver. It is an equation giving the received power from the transmitter power, after the attenuation of the transmitted signal due to propagation, as well as the antenna gains and feedline and other losses, and amplification of the signal in the receiver or any repeaters it passes through. A link budget is a design aid, calculated during the design of a communication system to determine the received power, to ensure that the information is received intelligibly with an adequate signal-to-noise ratio . In most real-world systems the losses must be estimated to some degree, and may vary. A link margin is therefore specified as a safety margin between the received power and the minimum power required by the receiver to accurately detect the signal. The link margin is chosen based on the anticipated severity of a communications drop-out and can be reduced by the use of mitigating techniques such as antenna diversity or multiple-input and multiple-output (MIMO). A simple link budget equation looks like this: received power (dBm) = transmitted power (dBm) + gains (dB) − losses (dB). Power levels are expressed in dBm , and power gains and losses are expressed in decibels (dB), which is a logarithmic measurement, so adding decibels is equivalent to multiplying the actual power ratios. A link budget equation including the key effects for a wireless radio transmission system, expressed logarithmically, might look like: [ 1 ] P_RX = P_TX + G_TX − L_TX − L_FS − L_M + G_RX − L_RX, where P_RX is the received power (dBm), P_TX is the transmitter output power (dBm), G_TX is the transmitter antenna gain (dBi), L_TX is the transmitter feedline and other losses (dB), L_FS is the path loss, usually free-space loss (dB), L_M is the miscellaneous losses such as fading margin, body loss and polarization mismatch (dB), G_RX is the receiver antenna gain (dBi), and L_RX is the receiver feedline and other losses (dB). The path loss is the loss due to propagation between the transmitting and receiving antennas and is usually the most significant contributor to the losses, and also the largest unknown. When transmitting through free space , it can be expressed in a dimensionless form by normalizing the distance to the wavelength: L_FS (dB) = 20 log10(4πd/λ), with the distance d and the wavelength λ in the same units. When substituted into the link budget equation above, the result is the logarithmic form of the Friis transmission equation . In some cases, it is convenient to consider the loss due to distance and wavelength separately, but in that case, it is important to keep track of which units are being used, as each choice involves a differing constant offset. Some examples are provided below. These alternative forms can be derived by substituting wavelength with the ratio of propagation velocity ( c , approximately 3 × 10^8 m/s ) divided by frequency, and by inserting the proper conversion factors between km or miles and meters, and between MHz and Hz. The gain of both the transmitting and receiving antennas is affected by the antenna's directivity . For example, antennas can be isotropic, omnidirectional, directional, or sectorial, depending on the way in which the antenna power is oriented. For a line-of-sight (LOS) radio system, the path loss can be closely modeled by a single path through free space using the Friis transmission equation . This models the decrease in signal power as it spreads over an increasing area as it propagates, proportional to the square of the distance (geometric spreading) and the square of the frequency. This is a best-case scenario, and additional losses are incurred in most radio links. In non-line-of-sight (NLOS) links, diffraction and reflection losses are the most important since the direct path is not available. Building obstructions such as walls and ceilings cause propagation losses indoors to be significantly higher.
This occurs because of a combination of attenuation by walls and ceilings, and blockage due to equipment, furniture, and even people. Experience has shown that in dense office environments, line-of-sight propagation holds only for about the first 3 meters. Beyond 3 meters propagation losses indoors can increase at up to 30 dB per 30 meters. This is a good rule-of-thumb, in that it is conservative (it overstates path loss in most cases). [ citation needed ] Actual propagation losses may vary significantly depending on building construction and layout. The attenuation of the signal is highly dependent on the frequency of the signal. In practical situations (deep space telecommunications, weak signal DXing etc.) other sources of signal loss must also be accounted for, including: Link budgets are important in Earth–Moon–Earth communications . As the albedo of the Moon is very low (maximally 12% but usually closer to 7%), and the path loss over the 770,000 kilometre return distance is extreme (around 250 to 310 dB depending on VHF-UHF band used, modulation format and Doppler shift effects), high power (more than 100 watts) and high-gain antennas (more than 20 dB) must be used. The Voyager program spacecraft have the highest known path loss (308 dB as of 2002 [ 4 ] : 26 ) and lowest link budgets of any telecommunications circuit. The Deep Space Network has been able to maintain the link at a higher than expected bitrate through a series of improvements, such as increasing the antenna size from 64 m to 70 m for a 1.2 dB gain, and upgrading to low noise electronics for a 0.5 dB gain in 2000–2001. During the Neptune flyby, in addition to the 70-m antenna, two 34-m antennas and twenty-seven 25-m antennas were used to increase the gain by 5.6 dB, providing additional link margin to be used for a 4× increase in bitrate. [ 4 ] : 35 Guided media such as coaxial and twisted pair electrical cable and radio frequency waveguides have losses that are exponential with distance. The path loss will be in terms of dB per unit distance. This means that there is always a crossover distance beyond which the loss in a guided medium will exceed that of a line-of-sight path of the same length. The optical power budget (also fiber-optic link budget and loss budget ) in a fiber-optic communication link is the allocation of available optical power (launched into a given fiber by a given source) among various loss-producing mechanisms such as launch coupling loss , fiber attenuation , splice losses, and connector losses, in order to ensure that adequate signal strength (optical power) is available at the receiver. In optical power budget attenuation is specified in decibel (dB) and optical power in dBm . The amount of optical power launched into a given fiber by a given transmitter depends on the nature of its active optical source ( LED or laser diode ) and the type of fiber , including such parameters as core diameter and numerical aperture . Manufacturers sometimes specify an optical power budget only for a fiber that is optimum for their equipment—or specify only that their equipment will operate over a given distance, without mentioning the fiber characteristics. The user must first ascertain, from the manufacturer or by testing, the transmission losses for the type of fiber to be used, and the required signal strength for a given level of performance. 
In addition to transmission losses, including those of any splices and connectors, allowance should be made for at least several dB of optical power margin losses, to compensate for component aging and to allow for future splices in the event of a severed cable . Passive optical networks use optical splitters to divide the downstream signal into up to 32 streams, most often a power of two. Each division in two halves the transmitted power and therefore causes a minimum attenuation of 3 dB (1/2 ≈ 10^−0.3 ). Long distance fiber-optic communication became practical only with the development of ultra-transparent glass fibers. A typical path loss for single-mode fiber is 0.2 dB/km, [ 5 ] far lower than any other guided medium. This article incorporates public domain material from Federal Standard 1037C . General Services Administration . Archived from the original on 2022-01-22.
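To make the decibel bookkeeping above concrete, here is a minimal sketch that evaluates a logarithmic link budget with free-space path loss; the frequency, distance, gains and losses are hypothetical values chosen only for illustration.

```python
import math

def free_space_path_loss_db(distance_m: float, frequency_hz: float) -> float:
    """Free-space path loss 20*log10(4*pi*d/lambda), in dB."""
    wavelength = 3.0e8 / frequency_hz            # propagation velocity / frequency
    return 20 * math.log10(4 * math.pi * distance_m / wavelength)

def received_power_dbm(tx_power_dbm: float, tx_gain_dbi: float, rx_gain_dbi: float,
                       path_loss_db: float, misc_losses_db: float = 0.0) -> float:
    """Logarithmic link budget: gains add, losses subtract."""
    return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - path_loss_db - misc_losses_db

# Hypothetical 2.4 GHz line-of-sight link over 5 km:
loss = free_space_path_loss_db(5_000, 2.4e9)     # roughly 114 dB
p_rx = received_power_dbm(20, 12, 12, loss, misc_losses_db=3)
print(f"path loss {loss:.1f} dB, received power {p_rx:.1f} dBm")
```

Because every term is in decibels, the budget reduces to addition and subtraction; comparing the resulting received power with the power the receiver requires then gives the link margin.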
https://en.wikipedia.org/wiki/Link_budget
In computational geometry , the link distance between two points in a polygon is the minimum number of line segments of any polygonal chain within the polygon that has the two points as its endpoints. The link diameter of the polygon is the maximum link distance of any two of its points. A polygon is a convex polygon if and only if its link diameter is one. Every star-shaped polygon has link diameter at most two: every two points may be connected by a polygonal chain that bends once, inside the kernel of the polygon. However, this property does not characterize star-shaped polygons, as there also exist polygons with holes in which the link diameter is two. This geometry-related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Link_distance
In a wireless communication system , the link margin (LKM) is a critical parameter that measures the reliability and robustness of the communication link. It is expressed in decibels (dB) and represents the difference between the minimum expected power received at the receiver's end and the receiver's sensitivity. The receiver's sensitivity is the minimum received power level at which the receiver can correctly decode the signal and function properly. [ 1 ] Source: [ 2 ] Link margin (LKM) = P_received − P_sensitivity, where P_received is the expected power at the receiver and P_sensitivity is the receiver's sensitivity. It is typical to design a system with at least a few dB of link margin, to allow for attenuation that is not modeled elsewhere. [ 4 ] For example, a satellite communications system operating in the tens of gigahertz might require additional link margin (vs. the link budget assuming lossless propagation), in order to ensure that it still works with the extra losses due to rain fade or other external factors. [ 5 ] A system with a negative link margin cannot transfer data, so one or more of the following are needed: more transmitter power; more antenna gain at the receiver or transmitter; less propagation loss (e.g., better antenna locations and/or shorter paths); lower receiver noise figure; improved error correction coding (FEC); reduced interference; or a lower data rate. [ 6 ] https://ictactjournals.in/paper/IJCT_Vol_8_Iss_3_Paper_5_1574_1581.pdf This article related to telecommunications is a stub . You can help Wikipedia by expanding it .
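As a worked example with hypothetical numbers (an illustrative assumption, not values from the cited sources): a link whose expected received power is −70 dBm, at a receiver with a sensitivity of −85 dBm, has

\[ \text{LKM} = P_{\text{received}} - P_{\text{sensitivity}} = (-70\ \text{dBm}) - (-85\ \text{dBm}) = 15\ \text{dB}, \]

which is comfortably above the few-dB minimum margin mentioned above.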
https://en.wikipedia.org/wiki/Link_margin
In chemistry , linkage isomerism or ambidentate isomerism is a form of structural isomerism in which certain coordination compounds have the same composition but differ in which atom of the ligand is bonded to the metal. Typical ligands that give rise to linkage isomers are: An example of chemicals that are linkage isomers is violet-colored [(NH 3 ) 5 Co-SCN] 2+ and orange-colored [(NH 3 ) 5 Co-NCS] 2+ . The isomerization of the S -bonded (isothiocyanate) isomer to the N -bonded (thiocyanate) isomer occurs by an intramolecular rearrangement . [ 1 ] The complex cis - dichlorotetrakis(dimethylsulfoxide)ruthenium(II) ( RuCl 2 (dmso) 4 ) exhibits linkage isomerism of dimethyl sulfoxide ligands due to S - vs. O -bonding. Trans -dichlorotetrakis(dimethylsulfoxide)ruthenium(II) only exists as a single linkage isomer. [ citation needed ] Linkage isomerism was first noted for nitropentaamminecobalt(III) chloride , [Co(NH 3 ) 5 (NO 2 )] 2+ . This cationic cobalt complex can be isolated as either of two linkage isomers. In the yellow-coloured isomer, the nitro ligand is bound through nitrogen. In the red linkage isomer, the nitrito is bound through one oxygen atom. The O-bonded isomer is often written as [Co(NH 3 ) 5 (ONO)] 2+ . Although the existence of the isomers had been known since the late 1800s, only in 1907 was the difference explained. [ 2 ] It was later shown that the red isomer converted to the yellow isomer upon UV-irradiation. In this particular example, the formation of the nitro isomer ( Co-NO 2 ) from the nitrito isomer ( Co-ONO ) occurs by an intramolecular rearrangement. [ 3 ]
https://en.wikipedia.org/wiki/Linkage_isomerism
Linked-read sequencing , a type of DNA sequencing technology, uses a specialized technique that tags DNA molecules with unique barcodes before fragmenting them. Unlike traditional sequencing technology, where DNA is broken into small fragments and then sequenced individually, resulting in short read lengths that make it difficult to accurately reconstruct the original DNA sequence, the unique barcodes of linked-read sequencing allow scientists to link together DNA fragments that come from the same DNA molecule. A pivotal benefit of this technology lies in the small quantities of DNA required for large genome information output, effectively combining the advantages of long-read and short-read technologies. [ 1 ] This sequencing method was originally developed by 10x Genomics in 2015, and was launched under the name 'GemCode' or 'Chromium'. GemCode employed a method of gel bead-based barcoding to amalgamate short DNA fragments. [ 2 ] The longer fragments produced by this could then be sequenced using validated technology such as Illumina next-generation sequencing . [ 2 ] [ 3 ] An updated version of linked-read sequencing was introduced by the same company in 2018, termed 'Linked-Reads V2'. While GemCode uses a single barcode for tagging of both the gel bead and the DNA fragment, Linked-Reads V2 uses separate barcodes for improved detection of genetic variants. The group that developed the linked-read sequencing technology published its first paper regarding this technology in 2016. The authors of this paper developed the linked-read sequencing technology initially to sequence the genomes of both healthy individuals and cancer patients to determine somatic mutations , copy number variations , and structural variations in cancer genomes. [ 2 ] Later that year, another research group combined linked-read sequencing technology with long-read sequencing technology to assemble the human genome. [ 3 ] Both studies demonstrated the utility of linked-read sequencing in comprehensive genome analysis and in understanding genetic diseases. However, in 2019, a lawsuit relating to patent infringement resulted in 10x Genomics discontinuing their line of linked-read products. Linked-read sequencing is microfluidic -based and needs only nanograms of input DNA. [ 2 ] One nanogram of DNA can be distributed across more than 100,000 droplet partitions, where DNA fragments are barcoded and subjected to polymerase chain reactions (PCR) . [ 2 ] As a result, DNA fragments (or reads ) that share the same barcode can be grouped as coming from one single long input DNA sequence. [ 2 ] In this way, long-range information can be assembled from short reads. Steps of linked-read sequencing: [ 2 ] During barcode sequencing, high molecular weight DNA samples that contain the targeted DNA sequence, ranging from fifty to several hundred kilobases in size, are combined with gel beads containing unique barcodes, enzymes, and sequencing reagents. [ 2 ] A microfluidic device can partition input DNA molecules into individual nanoliter-sized droplets of water-in-oil emulsion, called GEMs. [ 2 ] Each GEM contains gel beads coated with the same barcode and primers, and a small amount of DNA. [ 2 ] The primers are complementary to specific regions of the DNA molecule, allowing for amplification of the DNA in the droplets through PCR. [ 2 ] The barcodes enable the identification and grouping of sequencing reads that originate from the same long fragment, which is crucial for downstream analysis.
[ 2 ] The barcoded DNA fragments are amplified using PCR to create a library of DNA fragments with identical barcodes. All the fragments derived from a given DNA molecule are tagged with the same barcode. [ 4 ] This step increases the quantity of DNA for sequencing and reduces the chances of losing unique DNA fragments during sequencing. Droplets (or GEMs) are later collected in a tube, and the emulsion is broken, releasing the amplified, barcoded DNA sequences. Standard Illumina next-generation sequencing technology can be used to sequence the libraries. [ 5 ] During sequencing, the barcodes are read along with the DNA sequences, allowing researchers and scientists to group together DNA fragments that originate from the same DNA molecule. [ 5 ] Even though each DNA fragment is typically not fully sequenced, the information from many overlapping fragments in the same genomic region can be combined to reconstruct long stretches of the genome. [ 5 ] Therefore, a genome can be assembled from scratch without any prior reference. The raw sequencing data is then processed through bioinformatics (e.g., the GemCode analysis software developed by 10x Genomics) to remove low-quality reads and to assign reads to their respective barcodes. [ 2 ] Reads can be aligned to a reference genome or assembled de novo to generate long-range contigs . The read alignment step is important for determining the order and orientation of the long DNA fragments, and for identifying genomic variations, such as insertions or deletions . [ citation needed ] Linked-read sequencing can facilitate de novo genome assembly , which involves reconstructing a genome from scratch without any prior reference. Linked-read sequencing enables assembly of large genomic regions, and helps improve the completeness and contiguity of the resulting genome. This can be particularly useful for studying organisms that lack a high-quality reference genome, such as non-model organisms or organisms with complex genomes. [ 6 ] Many scientists have recently been using linked-read sequencing technology for de novo genome assembly in a variety of organisms, including humans, plants, and animals. [ 7 ] [ 6 ] [ 8 ] For example, Dr. Evan Eichler and his research group used linked-read sequencing to assemble the genome of the orangutan , which had previously been difficult to study due to its complex genome. [ 8 ] The resulting genome assembly gave scientists new insights into the evolutionary history of primates and the genetic basis of human diseases. [ 8 ] Also, the aligned or assembled reads can be used for other genetic investigations or downstream analysis, such as haplotype phasing. A haplotype is a group of genetic variants inherited together on a chromosome from one parent due to their genetic linkage . Haplotype phasing (also called haplotype estimation ) refers to the process of reconstructing individual haplotypes, important for determining the genetic basis of diseases. [ 9 ] Linked-read sequencing allows consistent coverage of genes related to different diseases, helping scientists to obtain all the regions carrying mutations from targeted genes. [ 10 ] For example, in 2018, a group of researchers used linked-read sequencing technology to sequence genetic information from a pregnant woman who was a carrier of a Duchenne muscular dystrophy (DMD) mutation. [ 10 ] Linked-read sequencing allowed them to identify the maternal haplotypes and determine the presence of the mutant alleles in the foetal DNA.
[ 10 ] This non-invasive prenatal diagnosis of DMD demonstrates the clinical applicability of linked-read sequencing. Structural variations , such as deletions, duplications , inversions, translocations , and other rearrangements, are common in human genomes. [ 4 ] These variations can have significant impacts on genome functions, and have been implicated in many diseases. Linked-read sequencing technology labels all reads that originate from the same long DNA fragment with the same barcode, so it enables the detection of a large number of structural variants. [ 4 ] The complexity of structural variants can be resolved with linked-read sequencing, providing a more complete picture of the genomic landscape. Many scientists have already been using linked-read sequencing to identify and characterise structural variants in diverse populations, including people with genetic disorders or cancers. [ 11 ] Transcriptome analysis is the study of all the RNA transcripts that are produced by the genome of an organism. Linked-read sequencing has been used by researchers to assemble transcript isoforms and alternative splicing events. [ 12 ] Information regarding alternative splicing events can provide insights into the regulation of gene expression in the human transcriptome. [ 12 ] Epigenetics refers to the study of heritable changes in genetic activities that are distinct from changes in DNA sequences. Epigenetic analysis involves studying DNA-protein interactions, histone modifications, and DNA methylation . Linked-read sequencing has been used to study DNA methylation patterns in many studies. [ 13 ] [ 14 ] For example, in 2021, a study investigated the DNA methylation differences in peripheral blood cells between twins, in which one twin had Alzheimer’s Disease and the other was cognitively normal. [ 13 ] Linked-read sequencing technology allowed researchers to identify more than 3000 differentially methylated regions between these twins discordant for Alzheimer’s Disease , and investigation of these differentially methylated regions eventually led to the identification of genes enriched in neurodevelopmental processes, neuronal signalling , and immune system functions. [ 13 ] In 2018, Bio-Rad Laboratories filed a lawsuit against 10x Genomics stating that their linked-read technology infringed on three patents that had been licensed to Bio-Rad by the University of Chicago . [ 15 ] Bio-Rad was awarded a sum of $23,930,716 by a jury. 10x Genomics filed a motion for judgment as a matter of law (JMOL) but was denied in 2019, and the court proceedings concluded in 2020. Following this lawsuit, 10x Genomics discontinued their linked-read assay. [ 15 ] An exception was made for linked-read products which had already been sold by the company prior to the lawsuit, allowing 10x Genomics to continue to provide those researchers with services such as support and warranty maintenance for this technology. [ citation needed ]
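The core computational step described above, grouping reads that share a barcode so that they can be treated as coming from one long input molecule, can be sketched as follows. The read tuples and field layout are simplified assumptions for illustration, not the actual GemCode/Chromium data format.

```python
from collections import defaultdict

# Simplified stand-ins for barcoded reads: (barcode, aligned position, sequence).
reads = [
    ("BC01", 1200, "ACGT..."),
    ("BC01", 4800, "TTGA..."),
    ("BC02",  300, "GGCA..."),
    ("BC01", 9100, "CATG..."),
]

def group_by_barcode(reads):
    """Collect reads sharing a barcode; each group approximates one long molecule."""
    groups = defaultdict(list)
    for barcode, position, sequence in reads:
        groups[barcode].append((position, sequence))
    return groups

def molecule_span(group):
    """Rough span of the original long fragment implied by one barcode group."""
    positions = [pos for pos, _ in group]
    return min(positions), max(positions)

for barcode, group in group_by_barcode(reads).items():
    start, end = molecule_span(group)
    print(f"{barcode}: {len(group)} reads, inferred molecule span {start}-{end}")
```

Downstream tools use this kind of grouping, together with alignment positions, to phase haplotypes and to detect structural variants that span more than one short read.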
https://en.wikipedia.org/wiki/Linked-read_sequencing
In mathematics , an upwards linked set A is a subset of a partially ordered set P in which any two elements of A have a common upper bound in P . Similarly, every pair of elements of a downwards linked set has a common lower bound in P . Every centered set is linked; in particular, every directed set is linked. This mathematical logic -related article is a stub . You can help Wikipedia by expanding it .
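Restating the definition above in symbols (a formalisation only, not additional material from the source): a subset A of a partially ordered set P is upwards linked when

\[ \forall a, b \in A \;\; \exists c \in P : a \le c \ \text{and} \ b \le c, \]

and downwards linked when the dual condition with lower bounds holds.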
https://en.wikipedia.org/wiki/Linked_set
In molecular biology , linker DNA is double-stranded DNA (38–53 base pairs long) in between two nucleosome cores that, in association with histone H1 , holds the cores together. Linker DNA is seen as the string in the "beads on a string" model, which is made by using an ionic solution on the chromatin . Linker DNA connects to histone H1 , and histone H1 sits on the nucleosome core. Technically, a nucleosome is the combination of a nucleosome core and one adjacent stretch of linker DNA; however, the term nucleosome is often used loosely for the core alone. Linker DNA may be degraded by endonucleases . [ 1 ] The linkers are short double-stranded DNA segments which are formed of oligonucleotides . These contain target sites for the action of one or more restriction enzymes . The linkers can be synthesized chemically and can be ligated to the blunt end of foreign DNA or vector DNA . These are then treated with a restriction endonuclease to produce cohesive ends of DNA fragments. The commonly used linkers are EcoRI linkers and SalI linkers. This molecular biology -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Linker_DNA
A linklog is a type of blog which is meant to act as a linked list. Common practice is for the post titles to link directly to external URLs , and for the content of the post to include information complementing the associated URL. [ 1 ] Linklogs existed as a feature of computing systems before the internet as well. In distributed file systems , a link log was a method of recording data in which a record is created and added to the proper log when updating a transaction. The format of a log record closely matches the specification of the transaction type it corresponds to. Link log records consisted of two parts in such a system: a set of type-independent fields, and a set of type-specific fields. The former set consists of pointers to the preceding and succeeding records of the log. [ 2 ] In PBX systems such as AUDIX , link logs were a collection of data collected to assist operators in maintaining the system. [ 3 ] This World Wide Web –related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Linklog
Linksys Holdings, Inc. , is an American brand of data networking hardware products mainly sold to home users and small businesses. It was founded in 1988 by the couple Victor and Janie Tsao , both Taiwanese immigrants to the United States. [ 1 ] Linksys products include Wi-Fi routers , mesh Wi-Fi systems, Wi-Fi extenders, access points, network switches, and Wi-Fi networking. It is headquartered in Irvine, California. [ 2 ] Linksys products are sold direct-to-consumer from its website, through online retailers and marketplaces, as well as off-the-shelf in consumer electronics and big-box retail stores. As of 2020, Linksys products were sold in retail locations and through value-added resellers in 64 countries, and the company was the first router company to ship 100 million products. [ 3 ] In 1988, spouses Janie and Victor Tsao founded DEW International, later renamed Linksys, in the garage of their Irvine, California home. The Tsaos were immigrants from Taiwan who held second jobs as consultants specializing in pairing American technology vendors with manufacturers in Taiwan. The founders used Taiwanese manufacturing to achieve the company's early success. [ 4 ] The company's first products were printer sharers that connected multiple PCs to printers. The company expanded into Ethernet hubs, network cards, and cords. [ 5 ] In 1992, the Tsaos began running Linksys full time and moved the company and its growing staff to a formal office. By 1994, it had grown to 55 employees with annual revenues of $6.5 million. [ 4 ] Linksys received a major boost in 1995, when Microsoft released Windows 95 with built-in networking functions that expanded the market for its products. Linksys established its first U.S. retail channels with Fry's Electronics (1995) and Best Buy (1996). In the late 1990s, Linksys released the first affordable multiport router, popularizing Linksys as a home networking brand. [ 5 ] By 2003, when the company was acquired by Cisco, it had 305 employees and revenues of more than $500 million. [ 4 ] [ 6 ] [ 7 ] Cisco expanded the company's product line, acquiring VoIP maker Sipura Technology in 2005 [ 8 ] and selling its products under Linksys Voice System or later Linksys Business Series brands. [ 9 ] In July 2008, Cisco acquired Seattle-based Pure Networks, a vendor of home networking-management software. [ 10 ] Cisco announced in January 2013 that it would sell its home networking division and Linksys to Belkin , giving Belkin 30% of the home router market. [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] In 2018, Belkin and its subsidiaries, including Linksys, were acquired by Foxconn , a Taiwanese multinational electronics firm and the largest provider of electronics manufacturing services, for $866 million. [ 16 ] [ 17 ] On June 4, 2021, Harry Dewhirst was appointed as CEO. [ 18 ] [ 19 ] In September, cybersecurity firm Fortinet made a $75 million investment in Linksys. [ 18 ] [ 20 ] Their focus is on the security of home networks for remote workplaces. [ 21 ] On September 24, 2021, Fortinet invested an additional $85 million in cash for shares of Series A Preferred Stock of Linksys. [ 22 ] Mark Sanders became CFO in October. [ 23 ] In 2023, the company started its global expansion with a development centre in Taipei (Taiwan) and a new sales and marketing centre in Amstelveen (The Netherlands). The new facilities are close to leading technology centres and universities.
[ 24 ] Linksys initially sold connectors for PCs and printers before moving to newer forms of connecting home and business networks through wired Ethernet and wireless technologies. [ 25 ] [ 16 ] Its networking products include Gigabit switches, Wi-Fi routers, Intelligent Mesh Wi-Fi systems, Wi-Fi extenders, Wi-Fi access points, and networking components. Linksys Aware was introduced in 2019 as a first-to-market home monitoring system that alerts users to movement in their home through the Velop Triband system. [ 26 ] In 2020, Linksys released Linksys Shield, a parental control subscription service for the Velop AC2200 Triband that allows users to manage or block online content. The company also announced its Linksys Cloud Manager 2.0, which included a configurable captive portal. Linksys released its first Wi-Fi router in 2001 and has continued to release routers for successive generations of Wi-Fi. [ 25 ] The WRT54G was notable for having firmware based on the Linux operating system . Since version 5, flash memory was reduced from 4 MB to 2 MB, and VxWorks was used instead of Linux. The original Linux model with 4 MB was later available as the WRT54GL. In 2017, Linksys launched the Velop line, a multi-unit tri-band mesh router system that uses three Wi-Fi radios. [ 27 ] [ 28 ] In 2020, Linksys announced and began marketing home-based Linksys smart routers and Velop Mesh Wi-Fi. [ 29 ] In April 2021, Linksys launched its first Wi-Fi 6E-certified systems, including the Hydra Pro 6E router and the Atlas Max 6E mesh system. [ 30 ] In January 2022, Linksys launched Hydra Pro 6, a scaled-back version of the 6E model. [ 31 ] The Linksys Intelligent Mesh line, Velop, combines Linksys software and hardware to provide higher connection speeds throughout a location by using nodes with dynamic networking capabilities. In 2019, with the Linksys Aware line, Linksys was the first to release mesh nodes that act as motion sensors, utilizing Wi-Fi signals without having to rely on other sensor devices. [ 32 ] Linksys markets Wi-Fi extenders that work with most Wi-Fi and ISP routers, including dual or tri-band units, and plug-in devices that eliminate Wi-Fi dead zones by wirelessly communicating with a router. [ 33 ] In 2018, Linksys released its cloud-based Wi-Fi management for business-class access points, the Linksys Cloud Manager. [ 34 ] Linksys markets mesh Wi-Fi routers built for Wi-Fi 6 capacity, offering four times the speed and capacity of Wi-Fi 5; the mesh Velop Wi-Fi 6 system was announced in October 2019. [ 35 ] In 2020, Linksys debuted 5G mobile hotspots, modems, mesh gateways, and outdoor routers. [ 36 ] [ 37 ] At CES 2021, Linksys announced a line of Velop mesh systems and routers that support Wi-Fi 6E. [ 38 ] HomeWRK is a two-node, mesh-enabled Wi-Fi 6 router. The appliance provides separate wireless networks for personal and business traffic. Fortinet’s security stack runs on the device, blocking malware and ransomware and filtering content. [ 39 ]
https://en.wikipedia.org/wiki/Linksys
A Linkwitz–Riley ( L-R ) filter is an infinite impulse response filter used in Linkwitz–Riley audio crossovers . It is named after its inventors Siegfried Linkwitz and Russ Riley and was originally described in Active Crossover Networks for Noncoincident Drivers . [ 1 ] [ 2 ] It is also known as a Butterworth squared filter. A Linkwitz–Riley crossover consists of a parallel combination of a low-pass and a high-pass L-R filter. These filters are typically designed by cascading two Butterworth filters , each providing a −3 dB gain at the cut-off frequency. The resulting Linkwitz–Riley filter has a −6 dB gain at the cut-off frequency. This means that when summing the low-pass and high-pass outputs, the gain at the crossover frequency is 0 dB . As a result, the crossover network behaves like an all-pass filter, exhibiting a flat amplitude response with a smoothly changing phase response . This is a primary advantage of L-R crossovers compared to even-order Butterworth crossovers, whose summed output has a +3 dB peak around the crossover frequency. Since cascading two n th -order Butterworth filters creates a (2 n ) th -order Linkwitz–Riley filter, theoretically any (2 n ) th -order Linkwitz–Riley crossover can be designed. However, crossovers of order higher than 4 may be less practical due to their complexity and an increasing peak in group delay around the crossover frequency. Sources: [ 1 ] [ 3 ] Second-order Linkwitz–Riley crossovers (LR2) have a 12 dB/octave ( 40 dB/decade ) slope. They can be realized by cascading two one-pole filters or by using a Sallen–Key filter topology with a Q 0 value of 0.5. There is a 180° phase difference between the low-pass and high-pass outputs, which can be corrected by inverting one signal. In loudspeakers , this is usually done by reversing the polarity of one driver if the crossover is passive . For active crossovers, inversion is typically achieved using a unity gain inverting op-amp . Sources: [ 1 ] [ 3 ] Fourth-order Linkwitz–Riley crossovers (LR4) are currently the most commonly used type of audio crossover. They are constructed by cascading two 2nd-order Butterworth filters . Their slope is 24 dB/octave ( 80 dB/decade ). The phase difference is 360°, meaning the two drivers appear in phase, although the low-pass section has a full period time delay. Source: [ 3 ] Eighth-order Linkwitz–Riley crossovers (LR8) have a very steep, 48 dB/octave ( 160 dB/decade ) slope. They can be constructed by cascading two 4th-order Butterworth filters .
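As a minimal numerical sketch of the construction described above (an LR4 branch is two cascaded identical 2nd-order Butterworth sections, and the summed low-pass and high-pass outputs are flat in magnitude), the following uses SciPy; the sample rate and crossover frequency are arbitrary illustrative choices.

```python
import numpy as np
from scipy import signal

fs = 48_000   # sample rate in Hz (arbitrary choice)
fc = 1_000    # crossover frequency in Hz (arbitrary choice)

def lr4(btype):
    """4th-order Linkwitz-Riley branch: two cascaded 2nd-order Butterworth sections."""
    butter2 = signal.butter(2, fc, btype=btype, fs=fs, output="sos")
    return np.vstack([butter2, butter2])

lowpass, highpass = lr4("lowpass"), lr4("highpass")

freqs, h_lp = signal.sosfreqz(lowpass, worN=2048, fs=fs)
_, h_hp = signal.sosfreqz(highpass, worN=2048, fs=fs)

# Each branch sits at about -6 dB at fc, and the summed magnitude stays flat (~0 dB).
idx = np.argmin(np.abs(freqs - fc))
print(f"low-pass branch at fc: {20 * np.log10(abs(h_lp[idx])):.2f} dB")
print(f"summed-response ripple: {np.ptp(20 * np.log10(np.abs(h_lp + h_hp))):.4f} dB")
```

The branch level at the crossover comes out near −6 dB and the summed magnitude stays within numerical noise of 0 dB, which is the flat all-pass sum that distinguishes Linkwitz–Riley crossovers from even-order Butterworth crossovers.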
https://en.wikipedia.org/wiki/Linkwitz–Riley_filter
The Linnaean enterprise is the task of identifying and describing all living species . It is named after Carl Linnaeus , a Swedish botanist , ecologist and physician who laid the foundations for the modern scheme of taxonomy . [ 1 ] As of 2006, the Linnaean enterprise was considered to have barely begun. There are estimated to be 10 million living species, but only about 1.5–1.8 million have even been named, and fewer than 1% of these have been studied enough to understand the basics of their ecological roles. [ 1 ] The Linnaean enterprise plays a role in both applied science and basic science. In applied science, it can assist in finding new natural products and species ( bioprospecting ) and in developing effective conservation practices. [ 1 ] In basic science, it supports the understanding of evolutionary biology and of how ecosystems function. [ 1 ] The cost of completing the Linnaean Enterprise has been estimated at US $5 billion. [ 1 ] Carl Linnaeus (1707–1778) was one of the best-known natural scientists of his time. Dissatisfied with the contemporary way of naming living things, he was responsible for creating the binomial nomenclature system still used in science to name species of organisms. Linnaeus's work laid the basis of modern taxonomy. [ 2 ] As part of his work, Linnaeus formally described and classified numerous species of plants and animals, and created binomial (scientific) names that are still used today for many of the most common species in Europe. Notably, Linnaeus's taxonomic system was the first in which humans were taxonomically grouped with apes , classifying both the genus Homo and Simia (now defunct and replaced by several other genera) as members of the order Primates . [ 3 ] This ecology -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Linnaean_enterprise
Linnaean taxonomy can mean either of two related concepts: Linnaean name also has two meanings, depending on the context: it may either refer to a formal name given by Linnaeus (personally), such as Giraffa camelopardalis Linnaeus, 1758 ; or a formal name in the accepted nomenclature (as opposed to a modernistic clade name). In his Imperium Naturae , Linnaeus established three kingdoms, namely Regnum Animale , Regnum Vegetabile and Regnum Lapideum . This approach, the Animal, Vegetable and Mineral Kingdoms, survives today in the popular mind, notably in the form of the parlour game question: "Is it animal, vegetable or mineral ?". The work of Linnaeus had a huge impact on science; it was indispensable as a foundation for biological nomenclature , now regulated by the nomenclature codes . Two of his works, the first edition of the Species Plantarum (1753) for plants and the tenth edition of the Systema Naturae (1758), are accepted as part of the starting points of nomenclature; his binomials (names for species) and generic names take priority over those of others. [ 1 ] However, the impact he had on science was not because of the value of his taxonomy. Linnaeus' kingdoms were in turn divided into classes , and they, in turn, into orders , genera (singular: genus ), and species (singular: species ), with an additional rank lower than species, though these do not precisely correspond to the use of these terms in modern taxonomy. [ 2 ] In Systema Naturae (1735), his classes and orders of plants, according to his Systema Sexuale , were not intended to represent natural groups (as opposed to his ordines naturales in his Philosophia Botanica ) but only for use in identification. However, in 1737 he published Genera Plantarum in which he claimed that his classification of genera was a natural system. [ 3 ] His botanical classification and sexual system were used well in the nineteenth century. [ 4 ] Within each class were several orders. This system is based on the number and arrangement of male ( stamens ) and female ( pistils ) organs. [ 5 ] The Linnaean classes for plants, in the Sexual System, were (page numbers refer to Species plantarum ): The classes based on the number of stamens were then subdivided by the number of pistils, e.g. Hexandria monogynia with six stamens and one pistil. [ 29 ] Index to genera p. 1201 [ 30 ] By contrast his ordines naturales numbered 69, from Piperitae to Vagae. Only in the Animal Kingdom is the higher taxonomy of Linnaeus still more or less recognizable and some of these names are still in use, but usually not quite for the same groups. He divided the Animal Kingdom into six classes. In the tenth edition, of 1758, these were: His taxonomy of minerals has long since been dropped from use. In the tenth edition, 1758, of the Systema Naturae , the Linnaean classes were: This rank-based method of classifying living organisms was originally popularized by (and much later named for) Linnaeus, although it has changed considerably since his time. The greatest innovation of Linnaeus, and still the most important aspect of this system, is the general use of binomial nomenclature , the combination of a genus name and a second term, which together uniquely identify each species of organism within a kingdom. For example, the human species is uniquely identified within the animal kingdom by the name Homo sapiens . No other species of animal can have this same binomen (the technical term for a binomial in the case of animals). 
Prior to Linnaean taxonomy, animals were classified according to their mode of movement. Linnaeus's use of binomial nomenclature was anticipated by the theory of definition used in Scholasticism . Scholastic logicians and philosophers of nature defined the species human, for example, as Animal rationalis , where animal was considered a genus and rationalis (Latin for "rational") the characteristic distinguishing humans from all other animals. Treating animal as the immediate genus of the species human, horse, etc. is of little practical use to the biological taxonomist, however. Accordingly, Linnaeus's classification treats animal as a class including many genera (subordinated to the animal "kingdom" via intermediary classes such as "orders"), and treats homo as the genus of a species Homo sapiens , with sapiens (Latin for "knowing" or "understanding") playing a differentiating role analogous to that played, in the Scholastic system, by rationalis (the word homo , Latin for "human being", was used by the Scholastics to denote a species, not a genus). A strength of Linnaean taxonomy is that it can be used to organize the different kinds of living organisms , simply and practically. Every species can be given a unique (and, one hopes, stable) name, as compared with common names that are often neither unique nor consistent from place to place and language to language. This uniqueness and stability are, of course, a result of the acceptance by working systematists (biologists specializing in taxonomy), not merely of the binomial names themselves, but of the rules governing the use of these names, which are laid down in formal nomenclature codes . Species can be placed in a ranked hierarchy , starting with either domains or kingdoms . Domains are divided into kingdoms . Kingdoms are divided into phyla (singular: phylum ) — for animals ; the term division , used for plants and fungi , is equivalent to the rank of phylum (and the current International Code of Botanical Nomenclature allows the use of either term). Phyla (or divisions) are divided into classes , and they, in turn, into orders , families , genera (singular: genus ), and species (singular: species ). There are ranks below species: in zoology, subspecies (but see form or morph ); in botany, variety (varietas) and form (forma), etc. Groups of organisms at any of these ranks are called taxa (singular: taxon ) or taxonomic groups . The Linnaean system has proven robust and it remains the only extant working classification system at present that enjoys universal scientific acceptance. However, although the number of ranks is unlimited, in practice any classification becomes more cumbersome the more ranks are added. Among the later subdivisions that have arisen are such entities as phyla, families, and tribes, as well as any number of ranks with prefixes (superfamilies, subfamilies, etc.). The use of newer taxonomic tools such as cladistics and phylogenetic nomenclature has led to a different way of looking at evolution (expressed in many nested clades ) and this sometimes leads to a desire for more ranks. An example of such complexity is the scheme for mammals proposed by McKenna and Bell. Over time, understanding of the relationships between living things has changed. Linnaeus could only base his scheme on the structural similarities of the different organisms. 
The greatest change was the widespread acceptance of evolution as the mechanism of biological diversity and species formation, following the 1859 publication of Charles Darwin's On the Origin of Species . It then became generally understood that classifications ought to reflect the phylogeny of organisms, their descent by evolution. This led to evolutionary taxonomy , where the various extant and extinct taxa are linked together to construct a phylogeny. This is largely what is meant by the term 'Linnaean taxonomy' when used in a modern context. In cladistics , originating in the work of Willi Hennig , 1950 onwards, each taxon is grouped so as to include the common ancestor of the group's members (and thus to avoid polyphyly ). Such taxa may be either monophyletic (including all descendants) such as genus Homo , or paraphyletic (excluding some descendants), such as genus Australopithecus . Originally, Linnaeus established three kingdoms in his scheme, namely for Plants , Animals and an additional group for minerals , which has long since been abandoned. Since then, various life forms have been moved into three new kingdoms: Monera , for prokaryotes (i.e., bacteria); Protista , for protozoans and most algae; and Fungi . This five-kingdom scheme is still far from the phylogenetic ideal and has largely been supplanted in modern taxonomic work by a division into three domains: Bacteria and Archaea , which contain the prokaryotes, and Eukaryota , comprising the remaining forms. These arrangements should not be seen as definitive. They are based on the genomes of the organisms; as knowledge on this increases, classifications will change. [ 31 ] Representing presumptive evolutionary relationships within the framework of Linnaean taxonomy is sometimes seen as problematic, especially given the wide acceptance of cladistic methodology and numerous molecular phylogenies that have challenged long-accepted classifications. Therefore, some systematists have proposed a PhyloCode to replace it.
https://en.wikipedia.org/wiki/Linnaean_taxonomy
Linnaeus's flower clock was a garden plan hypothesized by Carl Linnaeus that would take advantage of several plants that open or close their flowers at particular times of the day to accurately indicate the time. [ 1 ] [ 2 ] According to Linnaeus's autobiographical notes, he discovered and developed the floral clock in 1748. [ 3 ] It builds on the fact that there are species of plants that open or close their flowers at set times of day. He proposed the concept in his 1751 publication Philosophia Botanica , calling it the horologium florae ( lit. ' flower clock ' ). [ 4 ] His observations of how plants changed over time are summarised in several publications. Calendarium florae (the Flower Almanack) describes the seasonal changes in nature and the botanic garden during the year 1755. In Somnus plantarum (the Sleep of Plants), he describes how different plants prepare for sleep during the night, and in Vernatio arborum he gives an account of the timing of leaf-bud burst in different trees and bushes. [ 5 ] [ 6 ] He may never have planted such a garden, but the idea was attempted by several botanical gardens in the early 19th century, with mixed success. Many plants exhibit a strong circadian rhythm (see also Chronobiology ), and a few have been observed to open at quite a regular time, but the accuracy of such a clock is diminished because flowering time is affected by weather and seasonal effects. The flowering times recorded by Linnaeus are also subject to differences in daylight due to latitude : his measurements are based on flowering times in Uppsala , where he taught and had received his university education. The plants suggested for use by Linnaeus are given in the table below, ordered by recorded opening time; "-" signifies that data are missing. [ 7 ] Some 30 years before Linnaeus's birth, such a floral clock may have been described by Andrew Marvell , in his poem " The Garden " (1678): How well the skilful gardener drew Of flow'rs and herbs this dial new; Where from above the milder sun Does through a fragrant zodiac run; And, as it works, th' industrious bee Computes its time as well as we. How could such sweet and wholesome hours Be reckoned but with herbs and flow'rs! In Terry Pratchett 's novel Thief of Time , a floral clock with the same premise is described. It features fictional flowers that open at night "for the moths", so the clock runs all day. [ 8 ] Horologium Florae , released in 2023, is the name of an album by the Japanese singer and virtual YouTuber Kyo Hanabasami.
https://en.wikipedia.org/wiki/Linnaeus's_flower_clock
Linnar Viik (born 26 February 1965 in Tallinn ) is an Estonian information technology scientist, entrepreneur and IT visionary. Currently he is a visiting lecturer at the University of Tartu , the Estonian Academy of Arts and Tallinn University , a partner and member of the board of Mobi Solutions, and chairman of the supervisory board of EIT Digital. As founder and Programme Director at the Estonian e-Governance Academy , he has been advising more than 40 governments on their digital strategy, digital capacities and digital transformation roadmaps. He was a member of the Research and Development Council of Estonia from 2001 to 2017 and a member of the e-Estonia Council from 1996 to 2021. He is also a member of the Supervisory Board of SEI Tallinn and a member of the Advisory Board of the Lisbon Council. Viik has been a member of the board of, and a lecturer at, the Estonian IT College since 2000, where he was appointed Acting Rector in 2010. Linnar Viik was a founding member of the European Institute of Innovation and Technology Governing Board, a member of the Advisory Board of the Nordic Investment Bank , and Chairman of the Board of the Open Estonia Foundation. He is a founder and board member of several mobile communications, broadband and software companies, and a former advisor to the Prime Minister of Estonia on ICT , innovation , R&D and civic society issues. [ 1 ] [ 2 ] Earlier positions include advisor at the United Nations Development Programme and councilor at the Stockholm Environment Institute . Linnar Viik has written over 120 articles and 10 reports, mostly on the topics of the knowledge-based economy and the implications of the information society, [ 2 ] and has been instrumental in the rapid development of Estonian computer and network infrastructure, as well as in the Estonian Internet voting and eSignature projects. This Estonian academic-related biographical article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Linnar_Viik
Linnett double-quartet theory (LDQ) is a method of describing the bonding in molecules which involves separating the electrons depending on their spin , placing them into separate 'spin tetrahedra' to minimise the Pauli repulsions between electrons of the same spin. Introduced by J. W. Linnett in his 1961 monograph [ 1 ] and 1964 book, [ 2 ] this method expands on the electron dot structures pioneered by G. N. Lewis . While the theory retains the requirement for fulfilling the octet rule , it dispenses with the need to force electrons into coincident pairs . Instead, the theory stipulates that the four electrons of a given spin should maximise the distances between each other , resulting in a net tetrahedral electronic arrangement that is the fundamental molecular building block of the theory. By taking cognisance of both the charge and the spin of the electrons, the theory can describe bonding situations beyond those invoking electron pairs, for example two-centre one-electron bonds. This approach thus facilitates the generation of molecular structures which accurately reflect the physical properties of the corresponding molecules, for example molecular oxygen , benzene , nitric oxide or diborane . Additionally, the method has enjoyed some success for generating the molecular structures of excited states , radicals , and reaction intermediates . The theory has also facilitated a more complete understanding of chemical reactivity , hypervalent bonding and three-centre bonding . The cornerstone of classical bonding theories is the Lewis structure , published by G. N. Lewis in 1916 and continuing to be widely taught and disseminated to this day. [ 3 ] In this theory, the electrons in bonds are believed to pair up, forming electron pairs which result in the binding of nuclei . While Lewis’ model could explain the structures of many molecules, Lewis himself could not rationalise why electrons, negatively-charged particles which should repel, were able to form electron pairs in molecules or even why electrons can form a bond between atoms. [ 4 ] Lewis’ theory has been seminal in the understanding of the chemical bond. Yet despite this, it was formulated before the discovery of electron spin , a key intrinsic property of electrons which manifests itself through inter-electronic interactions. While spin was known about ever since the publication of Stern and Gerlach's results in 1922, with the Pauli exclusion principle being formulated in 1925, the importance of 'spin correlation' for understanding when and why electrons form pairs in molecules was not understood until the work of Lennard-Jones in the 1950s. [ 5 ] During the latter decade, J. W. Linnett and his students began to explicitly study the role of spin in determining the electronic structures of various molecules. [ 6 ] [ 7 ] This resulted in Linnett's landmark 1961 publication, [ 1 ] and subsequent 1964 book, [ 2 ] in which he outlined what became known as “Linnett double-quartet” theory. Linnett continued to expand on his theory through a number of publications [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] until his death in 1975. In these writings, Linnett recognised the continued importance of the Lewis model of bonding and the importance of satisfying the octet rule . However, he also argued that this view overemphasises the importance of electron pairing in the formation of chemical bonds. Hence, his theory sought to introduce spin into the conventional model of bonding and hence rectify some of the problems associated with Lewis’ theory. 
While LDQ theory is a relatively simple extension of Lewis’ bonding theory, the additional freedom of the electrons to separate into two sets, differentiated by their spins, has bestowed upon the theory exquisite agreement with the results of many experiments. [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] In its nascent years, LDQ theory attracted the interest of many researchers, furnishing greater insights into the structures of many molecules. However, LDQ theory began to fade from the spotlight in the 1970s and was mostly abandoned by researchers in the United States, Great Britain and Europe by the mid-1980s. [ 18 ] A key trait of LDQ theory that is shared with Lewis theory is the importance of using formal charges to determine the most important electronic structure. [ 19 ] LDQ theory produces the spatial distributions of the electrons by considering the two fundamental physical properties of said electrons: In Linnett's interpretation, correlation is “the mutual effect the electrons have on one another’s spatial positions”. [ 7 ] In the absence of charge correlation, the situation would be as follows: When one adds the effects of charge correlation, the situation is modified somewhat: Given these rules, it is found that: An octet is any arrangement which results in a given nucleus having a total of eight valence electrons around it. In Lewis' bonding model, the electrons tend to pair up in bonds such that an atom has a total of four chemical bonds and lone pairs associated with it: thus, the atom can satisfy its octet. LDQ theory also acknowledges that the elements in the ‘first short period’ of the periodic table tend to attain an octet of electrons surrounding them. However, in contrast with Lewis' view, Linnett argued that due to the combined effects of charge correlation and spin correlation, it is physically more meaningful to consider the octet as the sum of two tetrahedral quartets of electrons. Each quartet consists of electrons of one spin only, and these electrons can act and orient themselves independently. One can then obtain molecular structures by arranging the electrons in such a way as to maximise the separations between the electrons, hence minimising the mutual inter-electronic repulsions, while simultaneously ensuring that the basic geometry of the spin sets is not altered. Additionally, Linnett stressed that due to the Pauli exclusion principle, one should prioritise separating electrons of the same spin when considering the overall electronic structure. In chemical bonding, the presence of additional nuclei causes the electrons to seek to maximise their attractive electrostatic interactions with all nearby nuclei. This can result in the formation of coincident or ‘close-paired’ electron pairs, in accordance with Lewis’ bonding model. Thus, it has previously [ 20 ] been argued that the following should also be included in the basic postulates of LDQ theory: The electron pairing can result in a greater net binding between the nuclei, but this is not necessarily the case in all molecules. In his discussions, [ 1 ] [ 2 ] Linnett notes that due to the opposing effects of charge and spin, the correlation between the two spin quartets should be small and so the individual spin tetrahedra can be treated as being partly independent from each other. This then facilitates electron pairing since nearby nuclei can easily force the two electrons together. 
Linnett also argues that a relatively small deviation from the strictly regular tetrahedra of the rigorous LDQ theory approach could be energetically favourable in some cases. The structure obtained from applying LDQ theory balances the three principal interactions in the molecule: electron-electron, electron-nuclear and nuclear-nuclear. Much like Lewis’ bonding model, LDQ theory assumes that the dominant contributions result from electron-electron and electron-nuclear interactions. [ 20 ] However, it has previously been shown that the introduction of nuclear-nuclear interactions into LDQ theory can explain some trends in bond angles and bond lengths . [ 20 ] In particular, Firestone produced an extensive discussion [ 21 ] [ 22 ] of the effects of moving bonding electron density out of the internuclear region and highlighted that sometimes such a distortion is necessary to produce a more satisfactory arrangement of the spin sets. Due to the decreased shielding of the nuclear-nuclear interactions and the decreased electron-nuclear interactions associated with this change, the net energy of the molecule tends to increase: this is known as “L-strain” ( see section on reactivity later ). As an example of the application of LDQ theory to molecular bonding, take the case of the fluoride ion . By using LDQ theory, the electronic structure shown below is obtained. The two spin sets are under the action of only one nucleus and so there is no net interaction which will cause the electrons to pair up. Hence, unlike the Lewis model which predicts four lone pairs, all electrons in the fluoride ion are spatially separated. Therefore, the following statement by Luder is found to be true for all mononuclear species: [ 23 ] “In an isolated atom, no valence electron is close-paired with another”. If a proton then approaches the fluoride ion, the proton's attractive potential can distort the electronic geometry. Two electrons of opposite spin (necessary to complete the duplet of the hydrogen atom) are attracted to the proton and this attractive potential pulls them together to yield an electron pair localised to the internuclear region. This is illustrated in the LDQ structure of hydrogen fluoride shown below. Again, while the Lewis picture would predict four coincident electron pairs, the LDQ theory treatment yields only one close pair and two staggered spin tetrahedra that share a vertex . This makes sense as the other six electrons, unlike the two bonding electrons, do not significantly experience the attractive influence of the proton and hence their inter-electronic repulsions keep them separated. One of the major triumphs of LDQ theory over the traditional Lewis view is the ability of the former to generate an electronic structure which explains the paramagnetism of the ground state ( 3 Σ g − state) of molecular oxygen (O 2 ) . The LDQ structure of the ground state of O 2 does not involve any electron pairs, in contrast with the Lewis structure of the molecule. Instead, the electrons are arranged as shown below. There are seven valence electrons of one spin which occupy two tetrahedra that share a common vertex (purple spheres), and the remaining five valence electrons of the other spin occupy two tetrahedra which share a common face (green spheres). Linnett postulated that this electronic arrangement reduces the magnitude of the inter-electronic repulsions in comparison with the case where the two spin sets have six electrons each. 
This arrangement results in a bond order of 2 and a net excess of two electrons of one spin, giving rise to the molecule's paramagnetism: both observations are in agreement with molecular orbital theory treatments of the molecule. In effect, the LDQ structure is equivalent to the combination of a two-centre one-electron bond (purple spin set) and a two-centre three-electron bond (green spin set). Not all LDQ structures differ from those produced using Lewis’ bonding model. For example, an alkane such as methane has both spin tetrahedra totally coincident, resulting in four close-pairs of electrons as in the Lewis picture. The above three-dimensional LDQ structures are useful for visualising the molecular structures, but they can be laborious to construct. Hence, Linnett introduced two-dimensional structures, analogous to Lewis structures, that use dots and crosses to represent the relative spin states of electrons. An example is shown on the right for molecular oxygen. Further, Linnett also modified the lines used in Lewis structures to account for electron coincidence and/or non-coincidence: a thin line represents an electron pair that is not close-paired, while a thick line represents a close-pair of electrons. This is exemplified best in the case of the hydrogen fluoride molecule, the dot-and-cross diagram of which is shown on the right. Here, the Lewis structure drawn on the left of the image is compared with the LDQ line structure on the right of the image. The LDQ structure thus expands on the Lewis structure by denoting whether the electrons are coincident (thick line) or spatially separated (thin lines). Additionally, by adding a dot or cross above/below the bond line, one can denote an odd number of electrons which are involved in the bond. This is illustrated well in the structure of nitric oxide (NO) shown below. More details about the LDQ structures of radicals such as NO are given in the section ‘Theoretical Description of Radicals’. LDQ theory has been lauded for its ability to produce an accurate electronic structure of benzene . [ 18 ] [ 19 ] The LDQ structure for benzene is shown below. [ 16 ] [ 24 ] In this model, each carbon atom is bonded to its neighbouring carbon atoms by three non-coincident electrons, two of one spin (e.g. green spheres) and one of the other spin (e.g. purple spheres). Thus, LDQ theory is able to predict the 1.5 bond order of the carbon-carbon bonds in benzene , the equivalence of all six carbon-carbon bonds and the stability of benzene due to the fact that none of the electrons in the carbon-carbon bonds are close-paired. This is in contrast with the valence bond picture which must invoke resonance between the two Kekulé forms of benzene in order to predict the non-integral bond order. Hence, the LDQ structure is lower in energy than either of the Kekulé forms due to a reduction in the magnitude of the inter-electronic repulsions in the LDQ structure. The 2D LDQ structures of benzene using both the full dot-and-cross diagram and the simplified diagram are shown on the right. Again, the bonding situation determined using LDQ theory is in good agreement with molecular orbital theory results. [ 2 ] [ 18 ] This also highlights that the additional degree of freedom afforded by having two distinct spin sets in the LDQ approach allows a single electron in a bond to be shared equally between two atoms, which produces the above structure for benzene.
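The bond orders quoted here and below follow the usual convention of halving the number of electrons located in the internuclear region; this worked example assumes that convention rather than a formula stated explicitly in the article. Writing N for the number of shared electrons, the bond order is N 2 {\displaystyle {\tfrac {N}{2}}} : for O 2 the one-electron and three-electron bonds give 1 2 ( 1 + 3 ) = 2 {\displaystyle {\tfrac {1}{2}}(1{+}3)=2} , for a benzene carbon-carbon bond 1 2 ( 2 + 1 ) = 1.5 {\displaystyle {\tfrac {1}{2}}(2{+}1)=1.5} , and for the two-centre five-electron bond of nitric oxide discussed below 1 2 ( 3 + 2 ) = 2.5 {\displaystyle {\tfrac {1}{2}}(3{+}2)=2.5} .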
The ability of LDQ theory to describe electronic distributions in terms of independent spin sets has facilitated studies of the excited states of various molecules, producing excited state electronic structures that are in agreement with experiments. [ 2 ] This sets LDQ theory apart from both valence bond theory and Lewis bonding theory as these have not been previously utilised to study excited state electronic structures. [ 18 ] Further, the LDQ theory approach to studying excited states produces three-dimensional redistributions of the electron density, in contrast with the single-electron vertical transitions produced using molecular orbital methods. As outlined previously, Linnett found that disposing the electrons into two spin sets, one with seven electrons and the other with only five electrons, produced the electronic structure of the ground state of O 2 ( see above ). [ 2 ] In contrast, one can look at the case where the two spin sets both contain six electrons to generate the excited states of O 2 . When the spin sets are non-coincident, the electronic structure shown below is produced. In this case, each spin set is the same but there is no correlation between them, giving rise to a cubic arrangement of the electrons. As the average distance between the electrons is shorter than in the ground state case, this disposition of the electrons results in a greater net magnitude of the inter-electronic repulsion energy as compared to the ground state. Hence, the above structure corresponds to the first excited state ( 1 Δ g state) of O 2 . If one further increases the degree of inter-electronic repulsions by forcing the electrons into coincident pairs, the electronic structure shown below is generated. [ 2 ] This corresponds to the electronic structure of the second excited state of O 2 ( 1 Σ g + state), and also corresponds to the (incorrect) Lewis structure of the ground state of O 2 . Thus, a comparison of the magnitude of the inter-electronic repulsions in a series of possible molecular structures can be used to assess their relative energies and hence determine the ground and excited states. Additionally, it is found that in all three electronic structures, the net bond order is 2 as they all have four electrons in the spatial region between the oxygen nuclei. This example thus clearly demonstrates that “not all double bonds are created equal”. [ 18 ] Linnett also used the example of acetylene to illustrate the power of the LDQ approach for understanding the structures of the excited states of molecules. [ 2 ] The dot-and-cross diagrams for both the ground state and the first excited state of acetylene are shown below. Upon excitation of the acetylene molecule, there is a net depletion of electron density from the bond region. This is captured in the above figure on the right as three electrons are withdrawn from the internuclear region and localised to the individual carbon atoms: resonance needs to be invoked in this case to explain how the three electrons can be distributed between the two carbon centres. Linnett rationalises this three-electron redistribution by arguing that it is required by the need to both form the two carbon-hydrogen bonds and retain the tetrahedral disposition of the electrons of a given spin. [ 2 ] [ 25 ] Interestingly, the excited state does not obey the octet rule as the carbon atoms have an average of 6.5 valence electrons surrounding them.
Further, the internuclear region contains only three electrons, the same as in the benzene molecule ( see above ), and this explains why the carbon-carbon bond length in the excited state of acetylene is the same as that in benzene. [ 1 ] [ 25 ] Most strikingly, the molecule changes its geometry upon excitation, going from a simple linear symmetry to a trans -bent structure. This is in excellent agreement with both the landmark results of Ingold and King, [ 26 ] which were the first demonstration of an excited state having a qualitatively different geometry than the ground state, and the results from molecular orbital theory methods. [ 25 ] Thus, this example illustrates that LDQ theory can be a powerful tool for understanding the geometric rearrangements that occur when excited states are formed. A major drawback of Lewis’ bonding theory is its inability to predict and understand the structures of radicals due to the presence of unpaired single electrons . LDQ theory has seen great success in explaining the structures of open shell systems such as nitric oxide or ozone due to the additional degree of freedom associated with having two independent spin sets. In the cases of nitric oxide and ozone, the maxima of the electron density of the localised orbitals result in distributions which closely mirror the dot-and-cross diagrams produced using LDQ theory. [ 27 ] The typical example of a radical that cannot be treated satisfactorily using Lewis structures is nitric oxide (NO). By allowing the electrons in the two spin sets to separate from each other, the LDQ structure for NO can be generated as shown below. Hence, the NO molecule is held together by a perfectly symmetric two-centre five-electron bond, made up of three electrons of one spin (green spheres) and two electrons of the other spin (purple spheres). This bonding arrangement satisfies the octet for both the nitrogen and oxygen atoms and results in a bond order of 2.5, in excellent agreement with the molecular orbital theory treatment of NO. [ 2 ] It has previously been highlighted that, from applications of LDQ theory, there exist two distinct classes of radicals: (a) radicals which do not have enough electrons to satisfy the octets of their constituent atoms and (b) radicals which obey the octet rule. Radicals of type (a) are thus highly reactive fragments which want to gain electrons to satisfy the octet rule, while radicals of type (b) are stable species by virtue of satisfying the octets of their constituent atoms. [ 18 ] As an example, the cyanide (CN) radical shown below is a type (a) radical that has ten bonding electrons , while the cyanogen molecule (a dimeric combination of two CN radicals) has 14 bonding electrons. Hence, the dimerisation of CN to cyanogen is favourable as it increases the degree of bonding in the overall system and reduces the total energy. In contrast, the NO molecule is a type (b) radical, also with ten electrons. However, the dimeric N 2 O 2 molecule likewise has ten bonding electrons, and hence there is no significant energetic benefit from the formation of the dimer. In fact, the formation of the nitrogen-nitrogen bond leads to an increase in the number of close-paired electrons and hence an increase in the total system energy, and so isolated NO molecules are stable against dimerisation in the gas phase. 
[ 2 ] LDQ theory has enjoyed some success in studies of chemical reactivity , in particular organic reactions , as it can furnish one with the ability to predict chemical reactivity from analyses of the relevant reactant and transition state structures. Firestone's extensive work constitutes the most significant application of LDQ theory to chemical reactivity thus far. [ 21 ] [ 22 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ] [ 33 ] Firestone has previously used the concept of L-strain ( see above ) to analyse the activation energies in S N 2 , S H 2 and E2 reactions, since the movement of electron density out of the internuclear region is commonly associated with the formation of transition states. LDQ structures, in particular the coincidence of electron pairs, can be used to rationalise and explain the stability and reactivity of certain families of molecules such as hydrocarbons . As shown for ethane , the electrons reside in two coincident tetrahedra which share a common vertex, and hence all the electrons are in close-pairs as expected from Lewis’ bonding model. However, compare this with the situation in ethylene : again, all the electrons are in close-pairs but now there is no electron density along the internuclear axis. The result is that the energy required to overcome charge correlation and pair the electrons up is compensated to a lesser extent by the bonding in ethylene as compared with ethane. Thus, in agreement with experiments, the ethylene molecule should be highly reactive with respect to addition reactions . Finally, the above can be compared with the situation in acetylene . Here, the six electrons involved in bonding are all anti-coincident and so the energy cost associated with charge correlation is minimised. Indeed, in agreement with experiment, carbon-carbon triple bonds are far less reactive with respect to addition reactions than carbon-carbon double bonds as transforming carbon-carbon triple bonds into double bonds also involves the formation of close-pairs of electrons, an energetically costly process. [ 17 ] [ 34 ] The strengths of LDQ theory have been applied to understand the structures and bonding modes of various molecules which, in the valence bond method, are described using the terms ‘ hypervalent ’ and ‘ three-centre bonding ’. In the case of phosphorus pentachloride (PCl 5 ) , the example shown on the right, the central phosphorus atom is bonded to five chlorine atoms. In the traditional Lewis view, this violates the octet rule as the five phosphorus-chlorine bonds would result in a net ten electrons around the phosphorus atom. Thus, the molecule is assumed to expand its bonding beyond the octet, a situation known as hypervalent bonding. LDQ theory, however, presents a different view of the bonding in this molecule. The three equatorial chlorine atoms each form two-electron bonds with the central phosphorus atom. The remaining two axial chlorine atoms each contribute only one electron to a bond with the phosphorus atom, leaving a single electron to reside exclusively on the chlorine atom. Thus, the LDQ structure for PCl 5 consists of three two-centre two-electron bonds and two two-centre one-electron bonds, thus satisfying the octet rule and dispensing with the need to invoke hypervalent bonding. This LDQ structure is also in good agreement with quantum chemical calculations. [ 18 ] LDQ theory has facilitated a more rigorous analysis of bonding in compounds which have conventionally been described in terms of three-centre two-electron bonding . 
For example, compare the various ways shown below of representing the bonding in the Lewis acid-base adduct of the hydride anion (H − ) and borane (BH 3 ) . The LDQ approach thus enables each electron to localise in one of the boron-hydrogen internuclear bond regions, rather than being delocalised over the entire three-centre boron-hydrogen-boron moiety. This arrangement of the bonding electrons into two two-centre one-electron bonds benefits from a lowering of the net magnitude of the inter-electronic repulsions in the system. In comparison, as described by Linnett: [ 2 ] “By allowing the two electrons independent ‘movement’ in a three-centre system, the three-centre bond allows the electrons a fairly considerable chance of being near one another”. Similarly, the resonance forms shown above also increase the degree of inter-electronic repulsions as the electrons are paired up in the boron-hydrogen bonds. Thus, a more complete description of the bonding in B 2 H 7 − is obtained using LDQ theory as it can utilise two two-centre one-electron bonds, in comparison with the awkward three-centre two-electron bond or the resonance structures derived from the valence bond method. The situation is similar for diborane (B 2 H 6 ) , the archetypal example used to explain three-centre two-electron bonding. [ 35 ] The above demonstrates that the structure produced using LDQ theory again yields the lowest degree of inter-electronic repulsions. Indeed, the separation of the electrons into two distinct spin sets has enabled the theory to expand the set of possible bonding arrangements, with two-centre one-electron, two-centre three-electron and two-centre five-electron bonding patterns all possible in the theory. [ 18 ] Along with the qualitative picture outlined above, LDQ theory has also been applied to computational studies. This quantitative extension is known as the non-pairing spatial orbital (NPSO) theory. In the NPSO method, the constituent wave functions are based on the corresponding qualitative LDQ structures. This approach has previously been shown to produce lower energies as compared to valence bond or molecular orbital wave functions derived from Lewis structures for molecules such as benzene, diborane or ozone. [ 36 ] [ 37 ] [ 16 ] [ 15 ] [ 13 ] [ 38 ] [ 39 ] Hence, by the variational principle , the wave functions produced by NPSO methods are often a better approximation than those generated using molecular orbital theory methods. The physical reality of treating the two spin sets separately can also be visualised directly. Recent investigations have shown that the electron localisation function (ELF) can be successfully applied to understand the disposition of the electrons in a number of molecules. The ELF of acetylene has been studied by a number of authors. [ 4 ] [ 40 ] [ 41 ] The results of this analysis are indicated in the figure below. The ELF of acetylene thus contains a toroidal basin surrounding the carbon-carbon bond axis, rather than three discrete concentrations of electron density as would be expected from the Lewis structure for a triple bond . This is directly comparable to the bonding picture produced using LDQ theory ( see above ), highlighting that the theory can accurately reflect the bonding situation in multiply-bonded species. [ 4 ] A recent report [ 42 ] on the disilyne and digermyne molecules has shown that their ELFs also result in a toroidal basin surrounding the internuclear axis.
The toroidal basin represents the six electrons which are involved in the bonding between the two germanium centres in this molecule. The LDQ structure is in excellent agreement with these computational results: the toroid is angled in comparison with the case in acetylene due to the perturbation caused by the off-axis hydrogen atoms. In the VSEPR description of chlorine trifluoride (ClF 3 ) , the electron pairs around the central chlorine adopt a trigonal bipyramidal arrangement, with the central chlorine atom violating the octet rule. This is typically rationalised by invoking d orbital participation in the bonding of the sp 3 d hybridised chlorine centre. [ 43 ] The ELF of ClF 3 is presented below. The ELF analysis of ClF 3 indicates that there is a single toroidal-shaped basin at the 'back' of each fluorine atom, corresponding analogously to the three lone pairs arranged in a ring as generated for the HF molecule ( see above ). This is in contrast with the Lewis structure which would place the fluorine lone pair electrons into discrete coincident pairs. Further, the lone pairs of electrons associated with the central chlorine atom reside in two kidney-shaped lobes which lie in the equatorial plane along with one of the fluorine atoms. This structure, consistent with the LDQ structure of the molecule, is also consistent with the VSEPR structure as the more diffuse chlorine lone pairs distort the molecular geometry and result in the bent planar geometry seen. [ 43 ] [ 44 ] In contrast, the bonding situation described by LDQ theory differs greatly from that produced using valence bond theory. Rather than having three two-centre two-electron bonds and two lone pairs, necessitating the invocation of hypervalent bonding for the chlorine atom, the LDQ structure instead allows the axial fluorine atoms to form two-centre one-electron bonds. This, when combined with a two-centre two-electron bond to the equatorial fluorine atom and the two chlorine lone pairs, restores the octet of the chlorine atom. As exemplified by the increased bond length of the axial fluorine-chlorine bonds as compared to the equatorial fluorine-chlorine bond, [ 45 ] LDQ theory is able to more accurately describe the electronic structure of ClF 3 as compared to valence bond theory. One of the main benefits of LDQ theory is that many molecular structures, such as molecular oxygen and ozone , can be represented using a single LDQ structure without invoking any resonance structures . This lesser reliance on resonance structures is favourable as, according to Linnett, resonance structures are not satisfactory descriptions of bonding as the ‘resonance stabilisation energy’ is not easily attributable to any particular molecular feature. [ 2 ] The approach has several other strengths as well. The success of LDQ theory in elucidating structures akin to those generated using quantum chemical calculations has also afforded a better understanding of the meaning of the dots and crosses used in the theory. Accordingly, the dots and crosses have been associated with the centroids of charge of the localised orbitals , while also making the distinction between the two sets of spins in the charge analyses. [ 19 ] [ 47 ] LDQ theory greatly diminishes, but does not completely remove, the need to invoke resonance structures to explain the bonding in certain molecules: resonance is still required for species such as semiquinones , nitryl chloride and nitrogen dioxide .
Additionally, like its Lewis theory progenitor, the theory ignores the energy differences between s and p orbitals. This has garnered criticism from authors who have dismissed LDQ theory as it was seen to invoke "the inert gas magic". [ 48 ] Other authors have also claimed that LDQ theory cannot be easily extended "to larger systems for which its use generally becomes very intuitive" and that its results are "as ambiguous as those of resonance theory". [ 49 ] Linnett's vision of double-quartet theory was limited to elements which did not expand their valence beyond the octet: this produced the familiar spin tetrahedra. However, later work by W. F. Luder [ 23 ] [ 50 ] extended the principles of LDQ theory to produce electronic structures with more than four electrons in each spin set. This extension, called “electron repulsion theory” by Luder, could be applied to elements of the d and f blocks in the periodic table. For example, the structure of the zinc atom produced using electron-repulsion theory is shown above. The author asserts that the s electrons occupy the axial positions, leaving the d electrons to occupy the positions at the vertices of two pentagonal bases of the two constituent pyramids. The electronic structure of the ytterbium atom can be constructed similarly. The s electrons are again assumed to occupy the axial positions while the f electrons occupy the positions at the vertices of two heptagonal bases of the two constituent pyramids. While these results are interesting, they have been contested in the scientific literature due to Luder's abandonment of the octet rule and the author's controversial views on spin correlation. [ 18 ] Indeed, one author notes that Luder's works “[do] a great disservice to Linnett and his method”. [ 51 ] Recently, there has been a modest resurgence of LDQ theory in the scientific literature, especially among theoretical chemists. For example, a recent study found that there is a qualitative correspondence between the molecular structures produced using LDQ theory and those suggested by dynamic Voronoi metropolis sampling . [ 52 ] Another recent example is the correspondence of the results obtained using LDQ theory to those produced using the Fermi-Löwdin orbital self-interaction correction [ 53 ] (FLO-SIC) method. It was shown that this method generates structures which can successfully house two electrons of one spin in a given ‘spin channel’, and the remaining single electron can be housed in the other spin channel: [ 54 ] this can be directly related to the LDQ structures of many radicals ( see for instance NO above ). Further, the electronic geometries for many ground state molecules, such as carbon dioxide , produced via FLO-SIC methods were found to generally agree with those derived from LDQ theory. In a subsequent publication, the authors posited that the Fermi orbital descriptors [ 55 ] utilised in their work can be correlated to the electron spins generated in LDQ analyses. [ 56 ] The authors also noted that the use of LDQ theory to produce model electronic structures of molecules for quantum calculations results in calculated dipole moments that agree more closely with experiments.
https://en.wikipedia.org/wiki/Linnett_double-quartet_theory
Linnik's theorem in analytic number theory answers a natural question after Dirichlet's theorem on arithmetic progressions . It asserts that there exist positive c and L such that, if we denote p( a , d ) the least prime in the arithmetic progression a + n d {\displaystyle a+nd} , where n runs through the positive integers and a and d are any given positive coprime integers with 1 ≤ a ≤ d − 1, then: p ( a , d ) < c d L {\displaystyle p(a,d)<cd^{L}} . The theorem is named after Yuri Vladimirovich Linnik , who proved it in 1944. [ 1 ] [ 2 ] Although Linnik's proof showed c and L to be effectively computable , he provided no numerical values for them. It follows from Zsigmondy's theorem that p(1, d ) ≤ 2 d − 1, for all d ≥ 3. It is known that p(1, p ) ≤ L p , for all primes p ≥ 5, as L p is congruent to 1 modulo p for all prime numbers p , where L p denotes the p -th Lucas number . Just like Mersenne numbers , Lucas numbers with prime indices have divisors of the form 2 kp +1. It is known that L ≤ 2 for almost all integers d . [ 3 ] On the generalized Riemann hypothesis it can be shown that p ( a , d ) ≤ ( 1 + o ( 1 ) ) φ ( d ) 2 ( ln ⁡ d ) 2 {\displaystyle p(a,d)\leq (1+o(1))\,\varphi (d)^{2}(\ln d)^{2}} , where φ {\displaystyle \varphi } is the totient function , [ 4 ] and the stronger bound p ( a , d ) ≤ φ ( d ) 2 ( ln ⁡ d ) 2 {\displaystyle p(a,d)\leq \varphi (d)^{2}(\ln d)^{2}} has also been proved. [ 5 ] It is also conjectured that p ( a , d ) < d 2 {\displaystyle p(a,d)<d^{2}} . The constant L is called Linnik's constant [ 6 ] and the following table shows the progress that has been made on determining its size. Moreover, in Heath-Brown's result the constant c is effectively computable.
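To make the quantity p( a , d ) concrete, the following short Python sketch (an illustrative script, not part of the article; the function name least_prime_in_progression is invented for the example) computes the least prime congruent to a modulo d by walking along the progression, and compares it with the conjectured bound d 2 {\displaystyle d^{2}} quoted above:

def is_prime(n: int) -> bool:
    """Simple trial-division primality test (adequate for small n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def least_prime_in_progression(a: int, d: int, limit: int = 10**7) -> int | None:
    """Return p(a, d): the least prime congruent to a modulo d, for coprime a and d."""
    n = a
    while n <= limit:
        if is_prime(n):
            return n
        n += d
    return None  # nothing found below the search limit

# Compare p(a, d) with d**2, the conjectured bound quoted above.
for a, d in [(1, 10), (3, 10), (7, 24), (100, 101)]:
    p = least_prime_in_progression(a, d)
    print(f"p({a}, {d}) = {p}   d**2 = {d**2}")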
https://en.wikipedia.org/wiki/Linnik's_constant
Linolelaidic acid is an omega-6 trans fatty acid (TFA) and is a cis–trans isomer of linoleic acid . It is found in partially hydrogenated vegetable oils. It is a white (or colourless) viscous liquid. TFAs are classified as conjugated and nonconjugated, corresponding usually to the structural elements −CH=CH−CH=CH− and −CH=CH−CH 2 −CH=CH− , respectively. Nonconjugated TFAs are represented by elaidic acid and linolelaidic acid. Their presence is linked to heart disease. The TFA vaccenic acid , which is of animal origin, poses less of a health risk. [ 4 ]
https://en.wikipedia.org/wiki/Linolelaidic_acid
Linseed oil , also known as flaxseed oil or flax oil (in its edible form), is a colorless to yellowish oil obtained from the dried, ripened seeds of the flax plant ( Linum usitatissimum ). The oil is obtained by pressing , sometimes followed by solvent extraction . Owing to its polymer-forming properties, linseed oil is often blended with combinations of other oils, resins or solvents as an impregnator, drying oil finish or varnish in wood finishing , as a pigment binder in oil paints , as a plasticizer and hardener in putty , and in the manufacture of linoleum . Linseed oil use has declined over the past several decades with increased availability of synthetic alkyd resins—which function similarly but resist yellowing. [ 1 ] Linseed oil is a triglyceride , like other fats. Linseed oil is distinctive for its unusually large amount of α-linolenic acid , which oxidises in air. The fatty acids in a typical linseed oil are chiefly the triply unsaturated α-linolenic acid, together with the doubly unsaturated linoleic acid, the monounsaturated oleic acid, and smaller amounts of the saturated palmitic and stearic acids. [ 2 ] Having a high content of di- and tri-unsaturated esters , linseed oil is susceptible to polymerization reactions upon exposure to oxygen in air. This polymerization, which is called autoxidation , results in the rigidification of the material. [ 3 ] To prevent premature drying, linseed oil-based products (oil paints, putty) are stored in airtight containers. Rags soaked with linseed oil pose a fire hazard because they provide a large surface area for rapid oxidation . The oxidation of linseed oil is exothermic , which may lead to spontaneous combustion . [ 4 ] In 1991, One Meridian Plaza in Philadelphia was severely damaged in a fire, thought to have been caused by rags soaked with linseed oil, in which three firefighters perished. [ 5 ] Most applications of linseed oil exploit its drying properties, i.e., the initial material is liquid or at least pliable and the aged material is rigid but not brittle. The water-repelling (hydrophobic) nature of the resulting hydrocarbon -based material is advantageous. [ 3 ] Linseed oil is the carrier used in oil paint . It can also be used as a painting medium, making oil paints more fluid, transparent and glossy. It is available in varieties such as cold-pressed, alkali-refined, sun-bleached, sun-thickened, and polymerised (stand oil). The introduction of linseed oil was a significant advance in the technology of oil painting. [ citation needed ] Traditional glazing putty , consisting of a paste of chalk powder and linseed oil, is a sealant for glass windows that hardens within a few weeks of application and can then be painted over. The durability of putty is owed to the drying properties of linseed oil. [ citation needed ] When used as a wood finish , linseed oil dries slowly and shrinks little upon hardening. A linseed oil finish is easily scratched; liquid water penetrates it in mere minutes, and water vapour bypasses it almost completely. [ 6 ] Garden furniture treated with linseed oil may develop mildew . Oiled wood may be yellowish and is likely to darken with age. Even though the oil feels dry to the touch, studies show linseed oil does not fully cure. [ 7 ] Linseed oil is a common finish for wooden items, though a very fine finish may require months to obtain. Studies show the fatty-acid structure of linseed oil has problems cross-linking and oxidizing, and the finish frequently turns black. [ 8 ] Boiled linseed oil is used as sizing in traditional oil gilding to adhere sheets of gold leaf to a substrate (parchment, canvas, Armenian bole , etc.).
It has a much longer working time than water-based size and gives a firm smooth surface that is adhesive enough in the first 12–24 hours after application to cause the gold to attach firmly to the intended surface. [ citation needed ] Linseed oil is used to bind wood dust, cork particles, and related materials in the manufacture of the floor covering linoleum . After its invention in 1860 by Frederick Walton , linoleum, or "lino" for short, was a common form of domestic and industrial floor covering from the 1870s until the 1970s, when it was largely replaced by PVC ("vinyl") floor coverings. [ 9 ] However, since the 1990s, linoleum is returning to favor, being considered more environmentally sound than PVC. [ 10 ] Linoleum has given its name to the printmaking technique linocut , in which a relief design is cut into the smooth surface and then inked and used to print an image. The results are similar to those obtained by woodcut printing. [ citation needed ] Raw cold-pressed linseed oil – commonly known as flax seed oil in nutritional contexts – is easily oxidized, and rapidly becomes rancid, with an unpleasant odour , unless refrigerated . Linseed oil is not generally recommended for use in cooking. In one study, the content of alpha -linolenic acid (ALA) in whole flaxseeds did not decrease after heating the seeds to temperatures of up to 178 °C (352.4 °F) for one and a half hours. [ 11 ] Linseed oil is an edible oil in demand as a dietary supplement , as a source of α-linolenic acid , an omega-3 fatty acid . In parts of Europe, it is traditionally eaten with potatoes and quark . [ citation needed ] Food-grade flaxseed oil is cold-pressed, obtained without solvent extraction, in the absence of oxygen, and marketed as edible flaxseed oil. Fresh, refrigerated and unprocessed linseed oil is used as a nutritional supplement and is a traditional European ethnic food, highly regarded for its nutty flavor. Regular flaxseed oil contains between 57% and 71% polyunsaturated fats ( alpha-linolenic acid , linoleic acid ). [ 12 ] Plant breeders have developed flaxseed with both higher ALA (70%) [ 12 ] and very low ALA content (< 3%). [ 13 ] The USFDA granted generally recognized as safe (GRAS) status for high alpha linolenic flaxseed oil. [ 14 ] According to nutrition information from the Flax Council of Canada (per 1 tbsp, 14 g), flax seed oil contains no significant amounts of protein, carbohydrates or fibre. [ 17 ] Stand oil is generated by heating linseed oil near 300 °C for a few days in the complete absence of air. Under these conditions, the polyunsaturated fatty esters convert to conjugated dienes , which then undergo Diels-Alder reactions , leading to crosslinking. The product, which is highly viscous, gives highly uniform coatings that "dry" to more elastic coatings than linseed oil itself. Soybean oil can be treated similarly, but converts more slowly. On the other hand, tung oil converts very quickly, being complete in minutes at 260 °C. Coatings prepared from stand oils are less prone to yellowing than are coatings derived from the parent oils. [ 49 ] Boiled linseed oil is a combination of raw linseed oil, stand oil (see above), and metallic oil drying agents (catalysts to accelerate drying). [ 49 ] In the Medieval era , linseed oil was boiled with lead oxide (litharge) to give a product called boiled linseed oil. [ 50 ] [ page needed ] The lead oxide forms lead "soaps" (lead oxide is alkaline ) that promote hardening (polymerisation) of linseed oil by reaction with atmospheric oxygen.
Heating shortens its drying time. [ citation needed ] Raw linseed oil is the base oil, unprocessed and without driers or thinners. It is mostly used as a feedstock for making a boiled oil. It does not cure sufficiently well or quickly to be regarded as a drying oil . [ 51 ] Raw linseed is sometimes used for oiling cricket bats to increase surface friction for better ball control. [ 52 ] It was also used to treat leather flat belt drives to reduce slipping. [ citation needed ]
https://en.wikipedia.org/wiki/Linseed_oil
Linuron (3-(3,4-dichlorophenyl)-1-methoxy-1-methylurea) is a phenylurea herbicide [ 1 ] that is used to control the growth of grass and weeds for the purpose of supporting the growth of crops like soybeans . [ 2 ] [ 3 ] Linuron acts via inhibition of photosystem II , which is necessary for photosynthetic electron transport in plants . [ 2 ] [ 3 ] Linuron has been found to produce reproductive toxicity in animals by acting as an androgen receptor (AR) antagonist , and for this reason, is considered to be an endocrine disruptor . [ 2 ] [ 4 ] Consequently, in January 2017, the Standing Committee on Plants, Animals, Food and Feed (SCoPAFF) of the European Commission DG "Health and food safety" decided to not renew its regulatory approval. [ 5 ] Sales are expected to cease by June 2017. [ 5 ]
https://en.wikipedia.org/wiki/Linuron
Linux.com is a website owned by the Linux Foundation ; its goal is to provide information about open source technology, careers, best practices, and industry trends. It also acts as a hub for the Linux community. [ 1 ] Linux.com offers free Linux tutorials, certifications, news and blogs, discussion forums and groups, a Linux software and hardware directory, and a job board. [ 2 ] The website caters to four different types of Linux users: Developers , [ 3 ] DevOps , [ 4 ] Enterprise (business and academic), [ 5 ] and Enthusiasts . [ 6 ] Additionally, the topics covered include: AI/ML, [ 7 ] Cloud, [ 8 ] Desktop, [ 9 ] Embedded/IoT, [ 10 ] Governance, [ 11 ] Hardware, [ 12 ] Linux, [ 13 ] Networking, [ 14 ] Open Source, [ 15 ] Security, [ 16 ] and System Administration. [ 17 ] Originally, the site was owned by Andover.net, which was taken over by VA Linux Systems (which later became VA Software, then SourceForge , and is now Geeknet ). It was dedicated to providing news and services to the free and open source software community. The site reported 25 million hits in the first month of operation. [ 18 ] Linux.com suspended the publication of new articles in December 2008, but implied in an announcement on New Year's Day 2009 that publication would shortly resume after unspecified changes to the site; legal considerations were given as the reason why the anticipated changes were not clearly described. [ 19 ] On March 3, 2009, the Linux Foundation announced that they would be taking over the management of Linux.com. [ 20 ]
https://en.wikipedia.org/wiki/Linux.com
LinuxMCE (Linux Media Center Edition) is a free and open source software platform with a 10-foot user interface designed to allow a computer to act as a home theater PC (HTPC) for the living-room TV, personal video recorder , and home automation system. It allows control of everything in the home, from lighting and climate to surveillance cameras and home security. [ 1 ] It also includes a full-featured VoIP -compatible phone system with support for video conferencing . LinuxMCE may be used as a standalone home theater PC (without any other home network connectivity), but it may also serve as a complete home LAN system in a server/ thin client configuration. In such a configuration, a central core server (a standard PC running Kubuntu ) does most of the storage and processing functions, while peripheral PCs (and other devices) provide input and output services. Thin client PCs can netboot over the LAN to serve as "Media Directors", which stream media content from the core to audiovisual devices which are connected to these thin clients. This home automation /multimedia LAN can be expanded to include home automation systems, surveillance cameras, high-tech remote controllers (called "Orbiters"), and telephone PBX systems. The core server co-ordinates the functions of all the devices on the home LAN. The advanced networking capabilities of the Linux OS allow this high level of network co-ordination. LinuxMCE was begun by Paul Webber as a fork of the PlutoHome home automation software project. It was adapted to run on top of a standard Linux distribution, Kubuntu , as its base OS, rather than to exist as a custom Linux distribution . Most of the core components, including the Orbiter (remote control) user interface, have undergone significant improvements, and are licensed under the GPL . A LinuxMCE setup consists of two parts – one Core and one or more Media Directors. The Core is the central server and provides services throughout the home. It acts as the central media storage and catalog, it routes home automation messages and commands, and it provides net boot images for the Media Directors. Each Media Director is connected to a screen (TV, computer screen or projector) and optionally to other A/V equipment. All media are presented through a Media Director. If the Core is also a Media Director (connected to a TV), it is called a hybrid system. Media Directors can be booted over the network from the Core. That way, only the Core needs to be updated and backed up to keep the whole system up-to-date. Most of the CPU-intensive processing is done on the Core. Thus, the system requirements for a Media Director are relatively small. This makes it easier to build a Media Director that is small and silent, and that fits in a living room. The Core, on the other hand, can be placed anywhere in a house. Accordingly, it may be built with a focus on price and performance instead of silence and appearance. This modular architecture allows LinuxMCE to use and control any hardware connected to the Core and Media Directors and to control it in a coordinated way. For example, if a movie is started in the living room, LinuxMCE can dim the light in that room but also switch off radio playback on the Media Director in the office. If an IP phone rings, LinuxMCE can show the number on the screen and pause media playback while the call is answered. The LinuxMCE package is installed on the Kubuntu OS, and utilizes open source applications such as Asterisk , Xine , MythTV , VDR , Firefox , VideoLAN and SlimServer . 
64-bit versions of the LinuxMCE package are no longer under active development after 7.10. These programs have been given wrappers which allow them to communicate with each other, and with the Ruby scripts that control the home automation components. This communication is co-ordinated using a DCE (Data, Commands, Events) protocol through a program called the DCE Router. This added communications layer allows trigger-command features such as pausing media playback when an important phone call arrives, dimming the lights while playing a movie, and allowing media playback to follow from computer to computer whenever a Bluetooth enabled remote is carried between rooms. The DCE communications protocol allows a single program to present a standardized user interface , the Orbiter UI, to the various devices and applications used within the LinuxMCE system. LinuxMCE allows the user interface to be displayed in several different resolutions, to accommodate the graphics capabilities of the different devices (PCs, mobile phones, webpads, PDAs) that can be used to display it. Context-sensitive menus allow a single remote control to control not only LinuxMCE menus, but also audiovisual device functions.
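As a hypothetical illustration of the trigger-command idea described above, the sketch below shows an event router that maps incoming events to commands for other devices. It does not use the real LinuxMCE DCE Router interfaces; all names (EventRouter, "phone.ring", and so on) are invented for the example.

from collections import defaultdict
from typing import Callable

class EventRouter:
    """Toy event-to-command router, loosely modelled on the trigger-command idea."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def on(self, event: str, handler: Callable[[dict], None]) -> None:
        """Register a command to run whenever the named event arrives."""
        self._handlers[event].append(handler)

    def dispatch(self, event: str, data: dict) -> None:
        """Deliver an event to every registered handler."""
        for handler in self._handlers[event]:
            handler(data)

router = EventRouter()
# When an IP phone rings, pause playback and show the caller on screen.
router.on("phone.ring", lambda d: print("pause playback in", d["room"]))
router.on("phone.ring", lambda d: print("show caller id:", d["caller"]))
# When a movie starts, dim the lights in that room.
router.on("movie.start", lambda d: print("dim lights in", d["room"]))

router.dispatch("phone.ring", {"room": "living room", "caller": "555-0100"})
router.dispatch("movie.start", {"room": "living room"})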
https://en.wikipedia.org/wiki/LinuxMCE
In computing , an oops is a serious but non-fatal error in the Linux kernel . An oops may precede a kernel panic , but it may also allow continued operation with compromised reliability . The term does not stand for anything, other than that it is a simple mistake. When the kernel detects a problem, it kills any offending processes and prints an oops message , which Linux kernel engineers can use in debugging the condition that created the oops and fixing the underlying programming error. After a system has experienced an oops, some internal resources may no longer be operational. Thus, even if the system appears to work correctly, undesirable side effects may have resulted from the active task being killed. A kernel oops often leads to a kernel panic when the system attempts to use resources that have been lost. Some kernels are configured to panic when many oopses (10,000 by default) have occurred. [ 1 ] [ 2 ] This oops limit is due to the potential, for example, for attackers to repeatedly trigger an oops and an associated resource leak , which eventually overflows an integer and allows further exploitation. [ 3 ] [ 4 ] The official Linux kernel documentation regarding oops messages resides in the file Documentation/admin-guide/bug-hunting.rst [ 5 ] of the kernel sources. Some logger configurations may affect the ability to collect oops messages. [ 6 ] The kerneloops software can collect and submit kernel oopses to a repository such as the www.kerneloops.org website, [ 7 ] which provides statistics and public access to reported oopses. A simplified crash screen was introduced in Linux 6.10, similar to the Blue Screen of Death on Windows. [ 8 ]
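The oops-related behaviour described above is exposed through kernel sysctls; the short Python sketch below (an illustration, not taken from the kernel documentation) simply reads them from /proc/sys on a Linux host. kernel.panic_on_oops is long-standing, while kernel.oops_limit only exists on sufficiently recent kernels, so its absence is handled gracefully.

from pathlib import Path

def read_sysctl(name: str) -> str | None:
    """Read a sysctl value from /proc/sys, or return None if it is unavailable."""
    path = Path("/proc/sys") / name.replace(".", "/")
    try:
        return path.read_text().strip()
    except (FileNotFoundError, PermissionError):
        return None  # knob not present on this kernel, or not readable

for knob in ("kernel.panic_on_oops", "kernel.oops_limit"):
    value = read_sysctl(knob)
    print(f"{knob} = {value if value is not None else '<not available>'}")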
https://en.wikipedia.org/wiki/Linux_kernel_oops
Besides the Linux distributions designed for general-purpose use on desktops and servers, distributions may be specialized for different purposes including computer architecture support, embedded systems , stability, security, localization to a specific region or language, targeting of specific user groups, support for real-time applications, or commitment to a given desktop environment. Furthermore, some distributions deliberately include only free software . As of 2015, over four hundred Linux distributions are actively developed, with about a dozen distributions being most popular for general-purpose use. [ 1 ] The popularity of Linux on standard desktop computers and laptops has been increasing over the years. [ 2 ] Most modern distributions include a graphical user environment, with, as of February 2015, the three most popular environments being the KDE Plasma Desktop , Xfce and GNOME . [ 3 ] [ 4 ] [ 5 ] No single official Linux desktop exists: rather, desktop environments and Linux distributions select components from a pool of free and open-source software with which they construct a GUI implementing some more or less strict design guide. GNOME, for example, has its human interface guidelines as a design guide, which gives the human–machine interface an important role, not just when doing the graphical design, but also when considering people with disabilities , and even when focusing on security. [ 6 ] The collaborative nature of free software development allows distributed teams to perform language localization of some Linux distributions for use in locales where localizing proprietary systems would not be cost-effective. For example, the Sinhalese language version of the Knoppix distribution became available significantly before Microsoft translated Windows XP into Sinhalese. [ 7 ] In this case the Lanka Linux User Group played a major part in developing the localized system by combining the knowledge of university professors, linguists , and local developers. The performance of Linux on the desktop has been a controversial topic; [ 8 ] for example in 2007 Con Kolivas accused the Linux community of favoring performance on servers. He quit Linux kernel development out of frustration with this lack of focus on the desktop, and then gave a "tell all" interview on the topic. [ 9 ] Since then a significant amount of development has focused on improving the desktop experience. Projects such as systemd and Upstart (deprecated in 2014) aim for a faster boot time; the Wayland and Mir projects aim at replacing X11 while enhancing desktop performance, security and appearance. [ 10 ] Userspace scheduler extensions make it possible to use a scheduler specialized for a specific purpose, such as gaming or desktop use. [ 11 ] [ 12 ] Many popular applications are available for a wide variety of operating systems. For example, Mozilla Firefox , LibreOffice and Blender have downloadable versions for all major operating systems. Furthermore, some applications initially developed for Linux, such as Pidgin and GIMP , were ported to other operating systems (including Windows and macOS ) due to their popularity. In addition, a growing number of proprietary desktop applications are also supported on Linux, [ 13 ] such as Autodesk Maya and The Foundry's Nuke in the high-end field of animation and visual effects; see the list of proprietary software for Linux for more details.
There are also several companies that have ported their own or other companies' games to Linux, with Linux also being a supported platform on both the Steam and Desura digital-distribution services. [ 14 ] Many other types of applications available for Microsoft Windows and macOS also run on Linux. Commonly, either a free software application will exist which does the functions of an application found on another operating system, or that application will have a version that works on Linux, such as with Skype and some video games like Dota 2 and Team Fortress 2 . Furthermore, the Wine project provides a Windows compatibility layer to run unmodified Windows applications on Linux. It is sponsored by commercial interests including CodeWeavers , which produces a commercial version of the software. Since 2009, Google has also provided funding to the Wine project. [ 15 ] [ 16 ] CrossOver , a proprietary solution based on the open-source Wine project, supports running Windows versions of Microsoft Office , Intuit applications such as Quicken and QuickBooks , Adobe Photoshop versions through CS2, and many games such as World of Warcraft . In other cases, where there is no Linux port of some software in areas such as desktop publishing [ 17 ] and professional audio , [ 18 ] [ 19 ] [ 20 ] there is equivalent software available on Linux. It is also possible to run applications written for Android on other versions of Linux using Anbox (deprecated) or with Waydroid . Besides externally visible components, such as X window managers , a non-obvious but quite central role is played by the programs hosted by freedesktop.org , such as D-Bus or PulseAudio ; both major desktop environments (GNOME and KDE) include them, each offering graphical front-ends written using the corresponding toolkit ( GTK or Qt ). A display server is another component, which for the longest time has been communicating in the X11 display server protocol with its clients; prominent software talking X11 includes the X.Org Server and Xlib . Frustration over the cumbersome X11 core protocol, and especially over its numerous extensions, has led to the creation of a new display server protocol, Wayland . Installing, updating and removing software in Linux is typically done through the use of package managers such as the Synaptic Package Manager , PackageKit , and Yum Extender . While most major Linux distributions have extensive repositories, often containing tens of thousands of packages, not all the software that can run on Linux is available from the official repositories. Alternatively, users can install packages from unofficial repositories, download pre-compiled packages directly from websites, or compile the source code by themselves. All these methods come with different degrees of difficulty; compiling the source code is in general considered a challenging process for new Linux users, but it is hardly needed in modern distributions and is not a method specific to Linux. Linux distributions have also become popular in the netbook market, with many devices such as the Asus Eee PC and Acer Aspire One shipping with customized Linux distributions installed. [ 21 ] In 2009, Google announced its ChromeOS as a minimal Linux-based operating system, using the Chrome browser as the main user interface. ChromeOS initially did not run any non-web applications, except for the bundled file manager and media player. Netbooks that shipped with the operating system, termed Chromebooks , started appearing on the market in June 2011. 
[ 22 ] By 2015, Chromebooks with large screens were available, as well as devices in other form factors such as laptop, desktop, tablet and all-in-one. Support for Android applications was added. [ 23 ] In 2018, Google added the ability to install any Linux software in a container, [ 24 ] enabling ChromeOS to be used like any other Linux distribution. Linux distributions have long been used as server operating systems, and have risen to prominence in that area; Netcraft reported in September 2006 that eight of the ten most reliable internet hosting companies (the other two ran an "unknown" OS) ran Linux distributions on their web servers , [ 25 ] with Linux in the top position. In June 2008, Linux distributions represented five of the top ten, FreeBSD three of ten, and Microsoft two of ten; [ 26 ] since February 2010, Linux distributions represented six of the top ten, FreeBSD three of ten, and Microsoft one of ten, [ 27 ] with Linux in the top position. Linux distributions are the cornerstone of the LAMP server-software combination (Linux, Apache , MariaDB / MySQL , Perl / PHP / Python ) which is one of the more common platforms for website hosting. [ 28 ] Linux distributions have become increasingly common on mainframes , partly due to pricing and the open-source model. [ 29 ] In December 2009, computer giant IBM reported that it would predominantly market and sell mainframe-based Enterprise Linux Server. [ 30 ] At LinuxCon North America 2015 , IBM announced LinuxONE , a series of mainframes specifically designed to run Linux and open-source software. [ 31 ] [ 32 ] Linux distributions are also dominant as operating systems for supercomputers . [ 33 ] As of November 2017, all supercomputers on the TOP500 list run some variant of Linux. [ 34 ] Several operating systems for smart devices , such as smartphones , tablet computers , home automation , smart TVs ( Samsung and LG Smart TVs use Tizen and WebOS , respectively), [ 37 ] and in-vehicle infotainment (IVI) systems [ 38 ] (for example Automotive Grade Linux ), are based on Linux. Major platforms for such systems include Android , Firefox OS , Mer and Tizen . Based on web use, Android's usage share of operating systems dominates globally, with almost double the market share of Microsoft Windows. As of September 2024 it has 45.4% of the global market, followed by Windows with less than 25.6%. [ 39 ] Although Android is based on a modified version of the Linux kernel, commentators disagree on whether the term "Linux distribution" applies to it, and whether it is "Linux" according to the common usage of the term. Android is a Linux distribution according to the Linux Foundation , [ 40 ] Google's open-source chief Chris DiBona , [ 41 ] and several journalists. [ 42 ] [ 43 ] Others, such as Google engineer Patrick Brady, say that Android is not Linux in the traditional Unix-like Linux distribution sense; Android does not include the GNU C Library (it uses Bionic as an alternative C library) and some other components typically found in Linux distributions. [ 44 ] Ars Technica wrote that "Although Android is built on top of the Linux kernel, the platform has very little in common with the conventional desktop Linux stack". [ 44 ] Cellphones and PDAs running Linux on open-source platforms became more common from 2007; examples include the Nokia N810 , Openmoko 's Neo1973 , and the Motorola ROKR E8 . Continuing the trend, Palm (later acquired by HP ) produced a new Linux-derived operating system, webOS , which is built into its line of Palm Pre smartphones.
Nokia 's Maemo , one of the earliest mobile operating systems, was based on Debian . [ 45 ] It was later merged with Intel 's Moblin , another Linux-based operating system, to form MeeGo . [ 46 ] The project was later terminated in favor of Tizen, an operating system targeted at mobile devices as well as IVI. Tizen is a project within The Linux Foundation . Several Samsung products are already running Tizen, Samsung Gear 2 being the most significant example. [ 47 ] Samsung Z smartphones will use Tizen instead of Android. [ 48 ] As a result of MeeGo's termination, the Mer project forked the MeeGo codebase to create a basis for mobile-oriented operating systems. [ 49 ] In July 2012, Jolla announced Sailfish OS , their own mobile operating system built upon Mer technology. Mozilla's Firefox OS consists of the Linux kernel, a hardware abstraction layer , a web-standards -based runtime environment and user interface, and an integrated web browser . [ 50 ] Canonical has released Ubuntu Touch , aiming to bring convergence to the user experience on this mobile operating system and its desktop counterpart, Ubuntu . The operating system also provides a full Ubuntu desktop when connected to an external monitor. [ 51 ] The Librem 5 is a smartphone developed by Purism . By default, it runs the company-made Linux-based PureOS , but it can also run other Linux distributions. [ 52 ] Like Ubuntu Touch, PureOS is designed with convergence in mind, allowing desktop programs to run on the smartphone. An example of this is the desktop version of Mozilla Firefox . [ 53 ] Another smartphone is the PinePhone , made by the computer manufacturer Pine64 . The PinePhone can run a variety of Linux-based operating systems such as Ubuntu Touch and postmarketOS . [ 54 ] Due to its low cost and ease of customization, Linux is often used in embedded systems . In the non-mobile telecommunications equipment sector, the majority of customer-premises equipment (CPE) hardware runs some Linux-based operating system. OpenWrt is a community-driven example upon which many of the OEM firmware releases are based. For example, the TiVo digital video recorder also uses a customized Linux, [ 55 ] as do several network firewalls and routers from such makers as Cisco / Linksys . The Korg OASYS , the Korg KRONOS , the Yamaha Motif XS /Motif XF music workstations , [ 56 ] Yamaha S90XS/S70XS, Yamaha MOX6/MOX8 synthesizers, Yamaha Motif-Rack XS tone generator module , and Roland RD-700GX digital piano also run Linux. Linux is also used in stage lighting control systems, such as the WholeHogIII console. [ 57 ] In the past, there were few games available for Linux. In recent years, more games have been released with support for Linux (especially Indie games ), with the exception of a few AAA title games. Android , a mobile platform which uses the Linux kernel , has gained much developer interest and is one of the main platforms for mobile game development along with iOS operating system by Apple for iPhone and iPad devices. On February 14, 2013, Valve released a Linux version of Steam , a gaming distribution platform on PC. [ 58 ] Many Steam games were ported to Linux. [ 59 ] On December 13, 2013, Valve released SteamOS , a gaming-oriented OS based on Debian, for beta testing , and had plans to ship Steam Machines as a gaming and entertainment platform. [ 60 ] Valve has also developed VOGL , an OpenGL debugger intended to aid video game development, [ 61 ] as well as porting its Source game engine to desktop Linux. 
[ 62 ] As a result of Valve's effort, several prominent games such as DotA 2 , Team Fortress 2 , Portal , Portal 2 and Left 4 Dead 2 are now natively available on desktop Linux. On July 31, 2013, Nvidia released Shield as an attempt to use Android as a specialized gaming platform. [ 63 ] Some Linux users play Windows-based games using Wine or CrossOver Linux . On August 22, 2018, Valve released their own fork of Wine called Proton , aimed at gaming. It features some improvements over the vanilla Wine such as Vulkan-based DirectX 11 and 12 implementations, Steam integration, better full screen and game controller support and improved performance for multi-threaded games. [ 64 ] In 2021, ProtonDB, an online aggregator of games supporting Linux, stated that 78% of the top thousand games on Steam were able to run on Linux using either Proton or a native port. [ 65 ] On February 25, 2022, Valve released Steam Deck , a handheld gaming console running Arch Linux -based operating system SteamOS 3.0. [ 66 ] [ 67 ] Due to the flexibility, customizability and free and open-source nature of Linux, it becomes possible to highly tailor Linux towards a specific purpose. There are two main methods to assemble a specialized Linux distribution: building from scratch or from a general-purpose distribution as a base. The distributions often used for this purpose include Debian , Fedora , Ubuntu (which is itself based on Debian), Arch Linux , Gentoo , and Slackware . In contrast, Linux distributions built from scratch do not have general-purpose bases; instead, they focus on the JeOS philosophy by including only necessary components and avoiding resource overhead caused by components considered redundant in the distribution's use cases. A home theater PC (HTPC) is a PC that is mainly used as an entertainment system, especially a home theater system . It is normally connected to a television, and often an additional audio system. OpenELEC , a Linux distribution that incorporates the media center software Kodi , is an OS tuned specifically for an HTPC. Having been built from the ground up adhering to the JeOS principle, the OS is very lightweight and very suitable for the confined usage range of an HTPC. There are also special editions of Linux distributions that include the MythTV media center software, such as Mythbuntu , a special edition of Ubuntu. Kali Linux is a Debian-based Linux distribution designed for digital forensics and penetration testing . It comes preinstalled with several software applications for penetration testing and identifying security exploits . [ 68 ] The Ubuntu derivative BackBox provides pre-installed security and network analysis tools for ethical hacking. The Arch-based BlackArch includes over 2100 tools for pentesting and security researching. [ 69 ] There are many Linux distributions created with privacy, secrecy, network anonymity and information security in mind, including Tails , Tin Hat Linux and Tinfoil Hat Linux . Lightweight Portable Security is a distribution based on Arch Linux and developed by the United States Department of Defense . Tor-ramdisk is a minimal distribution created solely to host the network anonymity software Tor . Linux Live CD sessions have long been used as a tool for recovering data from a broken computer system and for repairing the system. 
Building upon that idea, several Linux distributions tailored for this purpose have emerged, most of which use GParted as a partition editor, with additional data recovery and system repair software: SpaceX uses multiple redundant flight computers in a fault-tolerant design in its Falcon 9 rocket. Each Merlin engine is controlled by three voting computers, with two physical processors per computer that constantly check each other's operation. Linux is not inherently fault-tolerant (no operating system is, as it is a function of the whole system including the hardware), but the flight computer software makes it so for its purpose. [ 70 ] For flexibility, commercial off-the-shelf parts and system-wide "radiation-tolerant" design are used instead of radiation hardened parts. [ 70 ] As of July 2019 [update] , SpaceX has conducted over 76 launches of the Falcon 9 since 2010, out of which all but one have successfully delivered their primary payloads to the intended orbit , and has used it to transport astronauts to the International Space Station . The Dragon 2 crew capsule also uses Linux. [ 71 ] Windows was deployed as the operating system on non-mission critical laptops used on the space station, but it was later replaced with Linux. Robonaut 2 , the first humanoid robot in space, is also Linux-based. [ 72 ] The Jet Propulsion Laboratory has used Linux for a number of years "to help with projects relating to the construction of unmanned space flight and deep space exploration"; NASA uses Linux in robotics in the Mars rover, and Ubuntu Linux to "save data from satellites". [ 73 ] Linux distributions have been created to provide hands-on experience with coding and source code to students, on devices such as the Raspberry Pi . In addition to producing a practical device, the intention is to show students "how things work under the hood". [ 74 ] The Ubuntu derivatives Edubuntu and The Linux Schools Project , as well as the Debian derivative Skolelinux, provide education-oriented software packages. They also include tools for administering and building school computer labs and computer-based classrooms, such as the Linux Terminal Server Project (LTSP). Instant WebKiosk and Webconverger are browser-based Linux distributions often used in web kiosks and digital signage . Thinstation is a minimalist distribution designed for thin clients . Rocks Cluster Distribution is tailored for high-performance computing clusters . There are general-purpose Linux distributions that target a specific audience, such as users of a specific language or geographical area. Such examples include Ubuntu Kylin for Chinese language users and BlankOn targeted at Indonesians. Profession-specific distributions include Ubuntu Studio for media creation and DNALinux for bioinformatics . There is also a Muslim-oriented distribution of the name Sabily that consequently also provides some Islamic tools. Certain organizations use slightly specialized Linux distributions internally, including GendBuntu used by the French National Gendarmerie , Goobuntu used internally by Google, and Astra Linux developed specifically for the Russian army.
https://en.wikipedia.org/wiki/Linux_range_of_use
In combinatorial optimization , Lin–Kernighan is one of the best heuristics for solving the symmetric travelling salesman problem . [ citation needed ] It belongs to the class of local search algorithms, which take a tour ( Hamiltonian cycle ) as part of the input and attempt to improve it by searching in the neighbourhood of the given tour for one that is shorter, and upon finding one repeats the process from that new one, until encountering a local minimum. As in the case of the related 2-opt and 3-opt algorithms, the relevant measure of "distance" between two tours is the number of edges which are in one but not the other; new tours are built by reassembling pieces of the old tour in a different order, sometimes changing the direction in which a sub-tour is traversed. Lin–Kernighan is adaptive and has no fixed number of edges to replace at a step, but favours small numbers such as 2 or 3. For a given instance ( G , c ) {\displaystyle (G,c)} of the travelling salesman problem, tours are uniquely determined by their sets of edges, so we may as well encode them as such. In the main loop of the local search, we have a current tour T ⊂ E ( G ) {\displaystyle T\subset \mathrm {E} (G)} and are looking for new tour T ′ ⊂ E ( G ) {\displaystyle T'\subset \mathrm {E} (G)} such that the symmetric difference F = T △ T ′ {\displaystyle F=T\mathbin {\triangle } T'} is not too large and the length ∑ e ∈ T ′ c ( e ) {\displaystyle \sum _{e\in T'}c(e)} of the new tour is less than the length ∑ e ∈ T c ( e ) {\displaystyle \sum _{e\in T}c(e)} of the current tour. Since F {\displaystyle F} is typically much smaller than T {\displaystyle T} and T ′ {\displaystyle T'} , it is convenient to consider the quantity since g ( T △ T ′ ) = ∑ e ∈ T c ( e ) − ∑ e ∈ T ′ c ( e ) {\displaystyle g(T\mathbin {\triangle } T')=\sum _{e\in T}c(e)-\sum _{e\in T'}c(e)} : how much longer the current tour T {\displaystyle T} is than the new tour T ′ {\displaystyle T'} . Naively k {\displaystyle k} -opt can be regarded as examining all F ⊆ E ( G ) {\displaystyle F\subseteq \mathrm {E} (G)} with exactly 2 k {\displaystyle 2k} elements ( k {\displaystyle k} in T {\displaystyle T} but not in T ′ {\displaystyle T'} , and another k {\displaystyle k} in T ′ {\displaystyle T'} but not in T {\displaystyle T} ) such that T △ F {\displaystyle T\mathbin {\triangle } F} is again a tour, looking for such a set which has g ( F ) > 0 {\displaystyle g(F)>0} . It is however easier to do those tests in the opposite order: first search for plausible F {\displaystyle F} with positive gain, and only second check if T △ F {\displaystyle T\mathbin {\triangle } F} is in fact a tour. Define a trail in G {\displaystyle G} to be alternating (with respect to T {\displaystyle T} ) if its edges are alternatingly in T {\displaystyle T} and not in T {\displaystyle T} , respectively. Because the subgraphs ( V ( G ) , T ) {\displaystyle {\bigl (}\mathrm {V} (G),T{\bigr )}} and ( V ( G ) , T ′ ) {\displaystyle {\bigl (}\mathrm {V} (G),T'{\bigr )}} are 2 {\displaystyle 2} - regular , the subgraph G [ T △ T ′ ] = ( V ( G ) , T △ T ′ ) {\displaystyle G[T\mathbin {\triangle } T']={\bigl (}\mathrm {V} (G),T\mathbin {\triangle } T'{\bigr )}} will have vertices of degree 0 {\displaystyle 0} , 2 {\displaystyle 2} , and 4 {\displaystyle 4} only, and at each vertex there are as many incident edges from T {\displaystyle T} as there are from T ′ {\displaystyle T'} . 
Hence (essentially by Hierholzer's algorithm for finding Eulerian circuits ) the graph G [ T △ T ′ ] {\displaystyle G[T\mathbin {\triangle } T']} decomposes into closed alternating trails. Sets F ⊆ E ( G ) {\displaystyle F\subseteq \mathrm {E} (G)} that may satisfy F = T △ T ′ {\displaystyle F=T\mathbin {\triangle } T'} for some tour T ′ {\displaystyle T'} may thus be found by enumerating closed alternating trails in G {\displaystyle G} , even if not every closed alternating trail F {\displaystyle F} makes T △ F {\displaystyle T\mathbin {\triangle } F} into a tour; it could alternatively turn out to be a disconnected 2 {\displaystyle 2} -regular subgraph. Alternating trails (closed or open) are built by extending a shorter alternating trail, so when exploring the neighbourhood of the current tour T {\displaystyle T} , one is exploring a search tree of alternating trails. The key idea of the Lin–Kernighan algorithm is to remove from this tree all alternating trails which have gain ≤ 0 {\displaystyle \leq 0} . This does not prevent finding every closed trail with positive gain, thanks to the following lemma. Lemma. If a 0 , … , a n − 1 {\displaystyle a_{0},\dotsc ,a_{n-1}} are numbers such that ∑ i = 0 n − 1 a i > 0 {\displaystyle \sum _{i=0}^{n-1}a_{i}>0} , then there is a cyclic permutation of these numbers such that all partial sums are positive as well, i.e., there is some k {\displaystyle k} such that For a closed alternating trail F = e 0 e 1 … e n − 1 {\displaystyle F=e_{0}\,e_{1}\,\dots \,e_{n-1}} , one may define a i = c ( e i ) {\displaystyle a_{i}=c(e_{i})} if e i ∈ T {\displaystyle e_{i}\in T} and a i = − c ( e i ) {\displaystyle a_{i}=-c(e_{i})} if e i ∉ T {\displaystyle e_{i}\notin T} ; the sum ∑ i = 0 n − 1 a i {\displaystyle \sum \nolimits _{i=0}^{n-1}a_{i}} is then the gain g ( F ) {\displaystyle g(F)} . Here the lemma implies that there for every closed alternating trail with positive gain exists at least one starting vertex v 0 {\displaystyle v_{0}} for which all the gains of the partial trails are positive as well, so F {\displaystyle F} will be found when the search explores the branch of alternating trails starting at v 0 {\displaystyle v_{0}} . (Prior to that the search may have considered other subtrails of F {\displaystyle F} starting at other vertices but backed out because some subtrail failed the positive gain constraint.) Reducing the number of branches to explore translates directly to a reduction in runtime, and the sooner a branch can be pruned, the better. This yields the following algorithm for finding all closed, positive gain alternating trails in the graph. As an enumeration algorithm this is slightly flawed, because it may report the same trail multiple times, with different starting points, but Lin–Kernighan does not care because it mostly aborts the enumeration after finding the first hit. It should however be remarked that: The basic form of the Lin–Kernighan algorithm not only does a local search counterpart of the above enumeration, but it also introduces two parameters that narrow the search. Because there are O ( n ⌊ p 1 / 2 ⌋ ) {\displaystyle O(n^{\lfloor p_{1}/2\rfloor })} alternating trails of length p 1 {\displaystyle p_{1}} , and the final round of the algorithm may have to check all of them before concluding that the current tour is locally optimal, we get ⌊ p 1 / 2 ⌋ {\displaystyle \lfloor p_{1}/2\rfloor } (standard value 2 {\displaystyle 2} ) as a lower bound on the exponent of the algorithm complexity. 
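The lemma also indicates how such a starting point can be located in linear time: begin the cycle immediately after the position where the running prefix sum of the gains is smallest. The following Python sketch (an added illustration; the sample gains are arbitrary) demonstrates this choice and checks that every partial sum of the rotated sequence is positive.

def positive_rotation(a):
    """Return k such that every partial sum of a[k:] + a[:k] is positive."""
    assert sum(a) > 0, "the lemma requires a positive total gain"
    prefix, best, k = 0, 0, 0          # the empty prefix counts as a candidate minimum
    for i, x in enumerate(a):
        prefix += x
        if prefix <= best:             # remember the *last* minimal prefix sum
            best, k = prefix, i + 1
    return k % len(a)

gains = [3, -4, 2, -2, 5, -3]          # total gain +1
k = positive_rotation(gains)
rotated = gains[k:] + gains[:k]
partial = 0
for x in rotated:
    partial += x
    assert partial > 0                 # every partial sum of the rotated sequence is positive
print(k, rotated)                      # 4 [5, -3, 3, -4, 2, -2]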
Lin & Kernighan report 2.2 {\displaystyle 2.2} as an empirical exponent of n {\displaystyle n} in the average overall running time for their algorithm, but other implementors have had trouble reproducing that result. [ 1 ] It appears unlikely that the worst-case running time is polynomial. [ 2 ] In terms of a stack as above, the algorithm is: The length of the alternating trails considered are thus not explicitly bounded, but beyond the backtracking depth p 1 {\displaystyle p_{1}} no more than one way of extending the current trail is considered, which in principle stops those explorations from raising the exponent in the runtime complexity. The closed alternating trails found by the above method are all connected, but the symmetric difference T △ T ′ {\displaystyle T\mathbin {\triangle } T'} of two tours need not be, so in general this method of alternating trails cannot explore the full neighbourhood of a trail T {\displaystyle T} . The literature on the Lin–Kernighan heuristic uses the term sequential exchanges for those that are described by a single alternating trail. The smallest non-sequential exchange would however replace 4 edges and consist of two cycles of 4 edges each (2 edges added, 2 removed), so it is long compared to the typical Lin–Kernighan exchange, and there are few of these compared to the full set of 4-edge exchanges. In at least one implementation by Lin & Kernighan there was an extra final step considering such non-sequential exchanges of 4 edges before declaring a tour locally optimal, which would mean the tours produced are 4-opt unless one introduces further constraints on the search (which Lin and Kernighan in fact did). The literature is vague on exactly what is included in the Lin–Kernighan heuristic proper, and what constitutes further refinements. For the asymmetric TSP, the idea of using positive gain alternating trails to find favourable exchanges is less useful, because there are fewer ways in which pieces of a tour can be rearranged to yield new tours when one may not reverse the orientation of a piece. Two pieces can only be patched together to reproduce the original tour. Three pieces can be patched together to form a different tour in one way only, and the corresponding alternating trail does not extend to a closed trail for rearranging four pieces into a new tour. To rearrange four pieces, one needs a non-sequential exchange. The Lin–Kernighan heuristic checks the validity of tour candidates T △ F {\displaystyle T\mathbin {\triangle } F} at two points: obviously when deciding whether a better tour has been found, but also as a constraint to descending in the search tree, as controlled via the infeasibility depth p 2 {\displaystyle p_{2}} . Concretely, at larger depths in the search a vertex v 2 k + 1 {\displaystyle v_{2k+1}} is only appended to the alternating trail if T △ { v 0 v 1 , v 1 v 2 , … , v 2 k v 2 k + 1 , v 2 k + 1 v 0 } {\displaystyle T\mathbin {\triangle } \{v_{0}v_{1},v_{1}v_{2},\dotsc ,v_{2k}v_{2k+1},v_{2k+1}v_{0}\}} is a tour. By design that set of edges constitutes a 2-factor in G {\displaystyle G} , so what needs to be determined is whether that 2-factor consists of a single Hamiltonian cycle, or instead is made up of several cycles. 
If naively posing this subproblem as giving a subroutine the set of n {\displaystyle n} edges as input, one ends up with O ( n ) {\displaystyle O(n)} as the time complexity for this check, since it is necessary to walk around the full tour before being able to determine that it is in fact a Hamiltonian cycle. That is too slow for the second usage of this test, which gets carried out for every alternating trail with more than 2 {\displaystyle 2} edges from T {\displaystyle T} . If keeping track of more information, the test can instead be carried out in constant time. A useful degree of freedom here is that one may choose the order in which step 2.3.2 iterates over all vertices; in particular, one may follow the known tour T {\displaystyle T} . After picking k {\displaystyle k} edges from T {\displaystyle T} , the remaining subgraph ( V ( G ) , T ∖ { v 0 v 1 , … , v 2 k − 2 v 2 k − 1 } ) {\displaystyle {\bigl (}\mathrm {V} (G),T\setminus \{v_{0}v_{1},\dotsc ,v_{2k-2}v_{2k-1}\}{\bigr )}} consists of k {\displaystyle k} paths. The outcome of the Hamiltonicity test done when considering the ( k + 1 ) {\displaystyle (k+1)} th edge v 2 k v 2 k + 1 {\displaystyle v_{2k}v_{2k+1}} depends only on in which of these paths that v 2 k {\displaystyle v_{2k}} resides and whether v 2 k + 1 {\displaystyle v_{2k+1}} is before or after v 2 k {\displaystyle v_{2k}} . Hence it would be sufficient to examine 2 k {\displaystyle 2k} different cases as part of performing step 2.3.2 for v 2 k − 1 {\displaystyle v_{2k-1}} ; as far as v 2 k + 1 {\displaystyle v_{2k+1}} is concerned, the outcome of this test can be inherited information rather than something that has to be computed fresh.
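As an added illustration of the positive-gain pruning described above, the following Python sketch implements a deliberately simplified relative of the heuristic: a plain 2-opt local search that discards a candidate exchange unless at least one of its two partial gains is positive, which, because the total gain is the sum of the two partial gains, never discards an improving 2-opt move. It omits the longer alternating trails and the parameters p 1 and p 2, so it illustrates the pruning principle rather than the Lin–Kernighan algorithm itself; the random Euclidean instance is an assumption made for the example.

import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def gain_pruned_2opt(tour, dist):
    """Improve a tour by 2-opt exchanges, pruned by the positive-gain criterion."""
    n, improved = len(tour), True
    while improved:
        improved = False
        for i in range(n - 1):
            a, b = tour[i], tour[i + 1]                    # removed edge (a, b)
            for j in range(i + 2, n - 1 if i == 0 else n):
                c, d = tour[j], tour[(j + 1) % n]          # removed edge (c, d)
                g1 = dist[a][b] - dist[b][d]               # partial gain, trail started at (a, b)
                g2 = dist[c][d] - dist[a][c]               # partial gain, trail started at (c, d)
                if g1 <= 0 and g2 <= 0:
                    continue           # prune: no orientation of this exchange has positive partial gain
                if g1 + g2 > 1e-12:                        # total gain of the exchange
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])   # apply the 2-opt move
                    improved = True
                    break
            if improved:
                break
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(30)]   # toy Euclidean instance
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for (bx, by) in pts] for (ax, ay) in pts]
tour = list(range(len(pts)))
print(tour_length(tour, dist), "->", tour_length(gain_pruned_2opt(tour, dist), dist))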
https://en.wikipedia.org/wiki/Lin–Kernighan_heuristic
Lionel Salem (5 March 1937 – 29 June 2024) [ 1 ] was a French theoretical chemist and former research director at the French National Centre for Scientific Research (CNRS), retired since 1999. [ 2 ] He was a member of the International Academy of Quantum Molecular Science , [ 3 ] which named him its annual award winner in 1975 for his work on photochemical processes and on chemical reaction mechanisms. [ 4 ] He contributed to the theories of forces between molecules , of conjugated molecules , of organic reaction mechanisms and of heterogeneous catalysis . He developed the electronic theory of diradicals , as well as the concepts of diradical and zwitterionic states. [ 5 ] In 1968, he described the energy change for the approach of two molecules as a function of their orbitals ' properties; this approach, pursued independently by Gilles Klopman , led to the Klopman–Salem equation and the theory of frontier orbitals . [ 6 ] [ 7 ] He was the author of several books on chemical subjects, including The Molecular Orbital Theory of Conjugated Systems (1966), The Organic Chemist's Book of Orbitals (with William L. Jorgensen , 1973), The Marvelous Molecule (1979), and Electrons in Chemical Reactions (1982). [ 5 ] [ 8 ] Salem died in Paris on 29 June 2024, at the age of 87. [ 9 ]
https://en.wikipedia.org/wiki/Lionel_Salem
In differential geometry , Liouville's equation , named after Joseph Liouville , [ 1 ] [ 2 ] is the nonlinear partial differential equation satisfied by the conformal factor f of a metric f 2 (d x 2 + d y 2 ) on a surface of constant Gaussian curvature K : ∆ 0 log f = − K f 2 , where ∆ 0 = ∂ 2 /∂ x 2 + ∂ 2 /∂ y 2 is the flat Laplace operator . Liouville's equation appears in the study of isothermal coordinates in differential geometry: the independent variables x,y are the coordinates, while f can be described as the conformal factor with respect to the flat metric. Occasionally it is the square f 2 that is referred to as the conformal factor, instead of f itself. Liouville's equation was also taken as an example by David Hilbert in the formulation of his nineteenth problem . [ 3 ] By using the change of variables log f ↦ u , another commonly found form of Liouville's equation is obtained: ∆ 0 u = − K e 2 u . Two other forms of the equation, commonly found in the literature, [ 4 ] are obtained by using the slight variant 2 log f ↦ u of the previous change of variables and Wirtinger calculus : [ 5 ] Δ 0 u = − 2 K e u ⟺ ∂ 2 u ∂ z ∂ z ¯ = − K 2 e u . {\displaystyle \Delta _{0}u=-2Ke^{u}\quad \Longleftrightarrow \quad {\frac {\partial ^{2}u}{{\partial z}{\partial {\bar {z}}}}}=-{\frac {K}{2}}e^{u}.} Note that it is exactly in the first one of the preceding two forms that Liouville's equation was cited by David Hilbert in the formulation of his nineteenth problem . [ 3 ] [ a ] In a more invariant fashion, the equation can be written in terms of the intrinsic Laplace–Beltrami operator as follows: Liouville's equation is equivalent to the Gauss–Codazzi equations for minimal immersions into the 3-space, when the metric is written in isothermal coordinates z {\displaystyle z} such that the Hopf differential is d z 2 {\displaystyle \mathrm {d} z^{2}} . In a simply connected domain Ω , the general solution of Liouville's equation can be found by using Wirtinger calculus. [ 6 ] Its form is given in terms of an arbitrary meromorphic function f ( z ) satisfying suitable non-degeneracy conditions. Liouville's equation can be used to prove the following classification results for surfaces: Theorem . [ 7 ] A surface in the Euclidean 3-space with metric d l 2 = g ( z , z ¯ ) d z d z ¯ , and with constant scalar curvature K is locally isometric to:
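As an added check (not part of the original text), the equation in the form ∆ 0 log f = −K f 2 can be verified symbolically for the standard hyperbolic example, the upper half-plane with conformal factor f = 1/ y and curvature K = −1; the same computation confirms the form ∆ 0 u = −2 K e u under the substitution 2 log f ↦ u .

import sympy as sp

x, y = sp.symbols('x y', positive=True)
K = -1                                 # constant Gaussian curvature of the hyperbolic plane
f = 1 / y                              # conformal factor of the upper half-plane metric f^2 (dx^2 + dy^2)

lhs = sp.diff(sp.log(f), x, 2) + sp.diff(sp.log(f), y, 2)    # flat Laplacian of log f
print(sp.simplify(lhs - (-K * f**2)))                        # 0: Delta_0 log f = -K f^2 holds

u = 2 * sp.log(f)                      # the substitution 2 log f -> u
print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2) + 2 * K * sp.exp(u)))   # 0: Delta_0 u = -2K e^u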
https://en.wikipedia.org/wiki/Liouville's_equation
In mathematics , Liouville's formula , also known as the Abel–Jacobi–Liouville identity , is an equation that expresses the determinant of a square-matrix solution of a first-order system of homogeneous linear differential equations in terms of the sum of the diagonal coefficients of the system. The formula is named after the French mathematician Joseph Liouville . Jacobi's formula provides another representation of the same mathematical relationship. Liouville's formula is a generalization of Abel's identity and can be used to prove it. Since Liouville's formula relates the different linearly independent solutions of the system of differential equations, it can help to find one solution from the other(s), see the example application below. Consider the n -dimensional first-order homogeneous linear differential equation on an interval I of the real line , where A ( t ) for t ∈ I denotes a square matrix of dimension n with real or complex entries. Let Φ denote a matrix-valued solution on I , meaning that Φ( t ) is the so-called fundamental matrix , a square matrix of dimension n with real or complex entries and the derivative satisfies Let denote the trace of A ( s ) = ( a i , j ( s )) i , j ∈ {1,..., n } , the sum of its diagonal entries. If the trace of A is a continuous function , then the determinant of Φ satisfies for all t and t 0 in I . This example illustrates how Liouville's formula can help to find the general solution of a first-order system of homogeneous linear differential equations. Consider on the open interval I = (0, ∞) . Assume that the easy solution is already found. Let denote another solution, then is a square-matrix-valued solution of the above differential equation. Since the trace of A ( x ) is zero for all x ∈ I , Liouville's formula implies that the determinant is actually a constant independent of x . Writing down the first component of the differential equation for y , we obtain using ( 1 ) that Therefore, by integration, we see that involving the natural logarithm and the constant of integration c 2 . Solving equation ( 1 ) for y 2 ( x ) and substituting for y 1 ( x ) gives which is the general solution for y . With the special choice c 1 = 0 and c 2 = 1 we recover the easy solution we started with, the choice c 1 = 1 and c 2 = 0 yields a linearly independent solution. Therefore, is a so-called fundamental solution of the system. We omit the argument x for brevity. By the Leibniz formula for determinants , the derivative of the determinant of Φ = (Φ i , j ) i , j ∈ {0,..., n } can be calculated by differentiating one row at a time and taking the sum, i.e. Since the matrix-valued solution Φ satisfies the equation Φ' = A Φ , we have for every entry of the matrix Φ' or for the entire row When we subtract from the i -th row the linear combination of all the other rows, then the value of the determinant remains unchanged, hence for every i ∈ {1, . . . , n } by the linearity of the determinant with respect to every row. Hence by ( 2 ) and the definition of the trace. It remains to show that this representation of the derivative implies Liouville's formula. Fix x 0 ∈ I . Since the trace of A is assumed to be continuous function on I , it is bounded on every closed and bounded subinterval of I and therefore integrable, hence is a well defined function. Differentiating both sides, using the product rule, the chain rule , the derivative of the exponential function and the fundamental theorem of calculus , we obtain due to the derivative in ( 3 ). 
Therefore, g has to be constant on I , because otherwise we would obtain a contradiction to the mean value theorem (applied separately to the real and imaginary part in the complex-valued case). Since g ( x 0 ) = det Φ( x 0 ) , Liouville's formula follows by solving the definition of g for det Φ( x ) .
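Liouville's formula is also easy to confirm numerically: integrate Φ′ = A ( t )Φ for some coefficient matrix and compare det Φ( t ) with det Φ( t 0 ) exp(∫ tr A ( s ) d s ). The sketch below, added as an illustration, uses an arbitrary 2 × 2 matrix A ( t ) rather than the example system above.

import numpy as np
from scipy.integrate import solve_ivp

def A(t):                                   # arbitrary illustrative coefficient matrix
    return np.array([[np.sin(t), 1.0],
                     [t, -0.5]])

def rhs(t, phi_flat):                       # matrix ODE  Phi' = A(t) Phi, flattened for solve_ivp
    return (A(t) @ phi_flat.reshape(2, 2)).ravel()

t0, t1 = 0.0, 2.0
Phi0 = np.eye(2)
sol = solve_ivp(rhs, (t0, t1), Phi0.ravel(), rtol=1e-10, atol=1e-12)

s = np.linspace(t0, t1, 2001)               # integral of tr A(s) by the trapezoidal rule
trace_integral = np.trapz([np.trace(A(si)) for si in s], s)

det_numeric = np.linalg.det(sol.y[:, -1].reshape(2, 2))
det_liouville = np.linalg.det(Phi0) * np.exp(trace_integral)
print(det_numeric, det_liouville)           # the two values agree closely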
https://en.wikipedia.org/wiki/Liouville's_formula
In physics , Liouville's theorem , named after the French mathematician Joseph Liouville , is a key theorem in classical statistical and Hamiltonian mechanics . It asserts that the phase-space distribution function is constant along the trajectories of the system —that is that the density of system points in the vicinity of a given system point traveling through phase-space is constant with time. This time-independent density is in statistical mechanics known as the classical a priori probability . [ 1 ] Liouville's theorem applies to conservative systems , that is, systems in which the effects of friction are absent or can be ignored. The general mathematical formulation for such systems is the measure-preserving dynamical system . Liouville's theorem applies when there are degrees of freedom that can be interpreted as positions and momenta; not all measure-preserving dynamical systems have these, but Hamiltonian systems do. The general setting for conjugate position and momentum coordinates is available in the mathematical setting of symplectic geometry . Liouville's theorem ignores the possibility of chemical reactions , where the total number of particles may change over time, or where energy may be transferred to internal degrees of freedom . The non-squeezing theorem , which applies to all symplectic maps (the Hamiltonian is a symplectic map) implies further restrictions on phase-space flows beyond volume/density/measure conservation. There are extensions of Liouville's theorem to cover these various generalized settings, including stochastic systems. [ 2 ] The Liouville equation describes the time evolution of the phase space distribution function . Although the equation is usually referred to as the "Liouville equation", Josiah Willard Gibbs was the first to recognize the importance of this equation as the fundamental equation of statistical mechanics. [ 3 ] [ 4 ] It is referred to as the Liouville equation because its derivation for non-canonical systems utilises an identity first derived by Liouville in 1838. [ 5 ] [ 6 ] Consider a Hamiltonian dynamical system with canonical coordinates q i {\displaystyle q_{i}} and conjugate momenta p i {\displaystyle p_{i}} , where i = 1 , … , n {\displaystyle i=1,\dots ,n} . Then the phase space distribution ρ ( p , q , t ) {\displaystyle \rho (p,q,t)} determines the probability ρ ( p , q , t ) d n q d n p {\displaystyle \rho (p,q,t)\;\mathrm {d} ^{n}q\,\mathrm {d} ^{n}p} that the system will be found in the infinitesimal phase space volume d n q d n p {\displaystyle \mathrm {d} ^{n}q\,\mathrm {d} ^{n}p} at time t {\displaystyle t} . The Liouville equation is ∂ ρ ∂ t + ∑ i = 1 n ( ∂ ρ ∂ q i q ˙ i + ∂ ρ ∂ p i p ˙ i ) = 0. {\displaystyle {\frac {\partial \rho }{\partial t}}+\sum _{i=1}^{n}\left({\frac {\partial \rho }{\partial q_{i}}}{\dot {q}}_{i}+{\frac {\partial \rho }{\partial p_{i}}}{\dot {p}}_{i}\right)=0.} Time derivatives are denoted by dots, and are evaluated according to Hamilton's equations for the system. This equation demonstrates the conservation of density in phase space (which was Gibbs 's name for the theorem). Liouville's theorem states that: A proof of Liouville's theorem uses the n -dimensional divergence theorem . 
The proof is based on the fact that the evolution of ρ {\displaystyle \rho } obeys an 2n -dimensional version of the continuity equation : ∂ ρ ∂ t + ∇ → ⋅ ( ρ u → ) = 0 {\displaystyle {\frac {\partial \rho }{\partial t}}+{\vec {\nabla }}\cdot (\rho {\vec {u}})=0} with u → = ( q ˙ 1 , q ˙ 2 , … , q ˙ n , p ˙ 1 , p ˙ 2 , . . . , p ˙ n ) {\displaystyle {\vec {u}}=({\dot {q}}_{1},{\dot {q}}_{2},\dots ,{\dot {q}}_{n},{\dot {p}}_{1},{\dot {p}}_{2},...,{\dot {p}}_{n})} being the "velocity" vector of p i {\displaystyle p_{i}} and q i {\displaystyle q_{i}} . The above equation means that change of the total probability within a small volume in phase space is equal to the net flux of probability density into or out of the volume. After inserting u → {\displaystyle {\vec {u}}} in the above equation, we reach ∂ ρ ∂ t + ∑ i = 1 n ( ∂ ( ρ q ˙ i ) ∂ q i + ∂ ( ρ p ˙ i ) ∂ p i ) = 0. {\displaystyle {\frac {\partial \rho }{\partial t}}+\sum _{i=1}^{n}\left({\frac {\partial (\rho {\dot {q}}_{i})}{\partial q_{i}}}+{\frac {\partial (\rho {\dot {p}}_{i})}{\partial p_{i}}}\right)=0.} That is, the 3-tuple ( ρ , ρ q ˙ i , ρ p ˙ i ) {\displaystyle (\rho ,\rho {\dot {q}}_{i},\rho {\dot {p}}_{i})} is a conserved current . The above equation can be reduced to the Liouville equation based on the following identity ρ ∑ i = 1 n ( ∂ q ˙ i ∂ q i + ∂ p ˙ i ∂ p i ) = ρ ∑ i = 1 n ( ∂ 2 H ∂ q i ∂ p i − ∂ 2 H ∂ p i ∂ q i ) = 0 , {\displaystyle \rho \sum _{i=1}^{n}\left({\frac {\partial {\dot {q}}_{i}}{\partial q_{i}}}+{\frac {\partial {\dot {p}}_{i}}{\partial p_{i}}}\right)=\rho \sum _{i=1}^{n}\left({\frac {\partial ^{2}H}{\partial q_{i}\,\partial p_{i}}}-{\frac {\partial ^{2}H}{\partial p_{i}\partial q_{i}}}\right)=0,} where H {\displaystyle H} is the Hamiltonian, and we have used the relationships q ˙ i = ∂ H / ∂ p i {\displaystyle {\dot {q}}_{i}=\partial H/\partial p_{i}} and p ˙ i = − ∂ H / ∂ q i {\displaystyle {\dot {p}}_{i}=-\partial H/\partial {q_{i}}} . The derivation of the Liouville equation can be viewed as the motion through phase space as a 'fluid flow' of system points. The theorem that the convective derivative of the density, d ρ / d t {\displaystyle d\rho /dt} , is zero follows from the equation of continuity by noting that the 'velocity field' ( p ˙ , q ˙ ) {\displaystyle ({\dot {p}},{\dot {q}})} in phase space has zero divergence (which follows from Hamilton's relations). [ 7 ] The theorem above is often restated in terms of the Poisson bracket as ∂ ρ ∂ t = { H , ρ } {\displaystyle {\frac {\partial \rho }{\partial t}}=\{H,\rho \}} or, in terms of the linear Liouville operator or Liouvillian , i L ^ = ∑ i = 1 n [ ∂ H ∂ p i ∂ ∂ q i − ∂ H ∂ q i ∂ ∂ p i ] = − { H , ∙ } {\displaystyle \mathrm {i} {\widehat {\mathbf {L} }}=\sum _{i=1}^{n}\left[{\frac {\partial H}{\partial p_{i}}}{\frac {\partial }{\partial q^{i}}}-{\frac {\partial H}{\partial q^{i}}}{\frac {\partial }{\partial p_{i}}}\right]=-\{H,\bullet \}} as ∂ ρ ∂ t + i L ^ ρ = 0. {\displaystyle {\frac {\partial \rho }{\partial t}}+{\mathrm {i} {\widehat {\mathbf {L} }}}\rho =0.} In ergodic theory and dynamical systems , motivated by the physical considerations given so far, there is a corresponding result also referred to as Liouville's theorem. In Hamiltonian mechanics , the phase space is a smooth manifold that comes naturally equipped with a smooth measure (locally, this measure is the 6 n -dimensional Lebesgue measure ). The theorem says this smooth measure is invariant under the Hamiltonian flow . 
More generally, one can describe the necessary and sufficient condition under which a smooth measure is invariant under a flow. [ 8 ] The Hamiltonian case then becomes a corollary. We can also formulate Liouville's Theorem in terms of symplectic geometry . For a given system, we can consider the phase space ( q μ , p μ ) {\displaystyle (q^{\mu },p_{\mu })} of a particular Hamiltonian H {\displaystyle H} as a manifold ( M , ω ) {\displaystyle (M,\omega )} endowed with a symplectic 2-form ω = d p μ ∧ d q μ . {\displaystyle \omega =dp_{\mu }\wedge dq^{\mu }.} The volume form of our manifold is the top exterior power of the symplectic 2-form, and is just another representation of the measure on the phase space described above. On our phase space symplectic manifold we can define a Hamiltonian vector field generated by a function f ( q , p ) {\displaystyle f(q,p)} as X f = ∂ f ∂ p μ ∂ ∂ q μ − ∂ f ∂ q μ ∂ ∂ p μ . {\displaystyle X_{f}={\frac {\partial f}{\partial p_{\mu }}}{\frac {\partial }{\partial q^{\mu }}}-{\frac {\partial f}{\partial q^{\mu }}}{\frac {\partial }{\partial p_{\mu }}}.} Specifically, when the generating function is the Hamiltonian itself, f ( q , p ) = H {\displaystyle f(q,p)=H} , we get X H = ∂ H ∂ p μ ∂ ∂ q μ − ∂ H ∂ q μ ∂ ∂ p μ = d q μ d t ∂ ∂ q μ + d p μ d t ∂ ∂ p μ = d d t {\displaystyle X_{H}={\frac {\partial H}{\partial p_{\mu }}}{\frac {\partial }{\partial q^{\mu }}}-{\frac {\partial H}{\partial q^{\mu }}}{\frac {\partial }{\partial p_{\mu }}}={\frac {dq^{\mu }}{dt}}{\frac {\partial }{\partial q^{\mu }}}+{\frac {dp^{\mu }}{dt}}{\frac {\partial }{\partial p_{\mu }}}={\frac {d}{dt}}} where we utilized Hamilton's equations of motion and the definition of the chain rule. [ 9 ] In this formalism, Liouville's Theorem states that the Lie derivative of the volume form is zero along the flow generated by X H {\displaystyle X_{H}} . That is, for ( M , ω ) {\displaystyle (M,\omega )} a 2n-dimensional symplectic manifold, L X H ( ω n ) = 0. {\displaystyle {\mathcal {L}}_{X_{H}}(\omega ^{n})=0.} In fact, the symplectic structure ω {\displaystyle \omega } itself is preserved, not only its top exterior power. That is, Liouville's Theorem also gives [ 10 ] L X H ( ω ) = 0. {\displaystyle {\mathcal {L}}_{X_{H}}(\omega )=0.} The analog of Liouville equation in quantum mechanics describes the time evolution of a mixed state . Canonical quantization yields a quantum-mechanical version of this theorem, the von Neumann equation . This procedure, often used to devise quantum analogues of classical systems, involves describing a classical system using Hamiltonian mechanics. Classical variables are then re-interpreted as quantum operators, while Poisson brackets are replaced by commutators . In this case, the resulting equation is [ 11 ] [ 12 ] ∂ ρ ∂ t = 1 i ℏ [ H , ρ ] , {\displaystyle {\frac {\partial \rho }{\partial t}}={\frac {1}{i\hbar }}[H,\rho ],} where ρ is the density matrix . When applied to the expectation value of an observable , the corresponding equation is given by Ehrenfest's theorem , and takes the form d d t ⟨ A ⟩ = − 1 i ℏ ⟨ [ H , A ] ⟩ , {\displaystyle {\frac {d}{dt}}\langle A\rangle =-{\frac {1}{i\hbar }}\langle [H,A]\rangle ,} where A {\displaystyle A} is an observable. Note the sign difference, which follows from the assumption that the operator is stationary and the state is time-dependent. 
In the phase-space formulation of quantum mechanics, substituting the Moyal brackets for Poisson brackets in the phase-space analog of the von Neumann equation results in compressibility of the probability fluid , and thus violations of Liouville's theorem incompressibility. This, then, leads to concomitant difficulties in defining meaningful quantum trajectories. [ 13 ] Consider an N {\displaystyle N} -particle system in three dimensions, and focus on only the evolution of d N {\displaystyle \mathrm {d} {\mathcal {N}}} particles. Within phase space, these d N {\displaystyle \mathrm {d} {\mathcal {N}}} particles occupy an infinitesimal volume given by d Γ = ∏ i = 1 N d 3 p i d 3 q i . {\displaystyle \mathrm {d} \Gamma =\displaystyle \prod _{i=1}^{N}d^{3}p_{i}d^{3}q_{i}.} We want d N d Γ {\displaystyle {\frac {\mathrm {d} {\mathcal {N}}}{\mathrm {d} \Gamma }}} to remain the same throughout time, so that ρ ( Γ , t ) {\displaystyle \rho (\Gamma ,t)} is constant along the trajectories of the system. If we allow our particles to evolve by an infinitesimal time step δ t {\displaystyle \delta t} , we see that each particle phase space location changes as { q i ′ = q i + q i ˙ δ t , p i ′ = p i + p i ˙ δ t , {\displaystyle {\begin{cases}q_{i}'=q_{i}+{\dot {q_{i}}}\delta t,\\p_{i}'=p_{i}+{\dot {p_{i}}}\delta t,\end{cases}}} where q i ˙ {\displaystyle {\dot {q_{i}}}} and p i ˙ {\displaystyle {\dot {p_{i}}}} denote d q i d t {\displaystyle {\frac {dq_{i}}{dt}}} and d p i d t {\displaystyle {\frac {dp_{i}}{dt}}} respectively, and we have only kept terms linear in δ t {\displaystyle \delta t} . Extending this to our infinitesimal hypercube d Γ {\displaystyle \mathrm {d} \Gamma } , the side lengths change as d q i ′ = d q i + ∂ q i ˙ ∂ q i d q i δ t , d p i ′ = d p i + ∂ p i ˙ ∂ p i d p i δ t . {\displaystyle {\begin{aligned}dq_{i}'=dq_{i}+{\tfrac {\partial {\dot {q_{i}}}}{\partial q_{i}}}dq_{i}\delta t,\\[2pt]dp_{i}'=dp_{i}+{\tfrac {\partial {\dot {p_{i}}}}{\partial p_{i}}}dp_{i}\delta t.\end{aligned}}} To find the new infinitesimal phase-space volume d Γ ′ {\displaystyle \mathrm {d} \Gamma '} , we need the product of the above quantities. To first order in δ t {\displaystyle \delta t} , we get the following: d q i ′ d p i ′ = d q i d p i [ 1 + ( ∂ q i ˙ ∂ q i + ∂ p i ˙ ∂ p i ) δ t ] . {\displaystyle dq_{i}'dp_{i}'=dq_{i}dp_{i}\left[1+\left({\frac {\partial {\dot {q_{i}}}}{\partial q_{i}}}+{\frac {\partial {\dot {p_{i}}}}{\partial p_{i}}}\right)\delta t\right].} So far, we have yet to make any specifications about our system. Let us now specialize to the case of N {\displaystyle N} 3 {\displaystyle 3} -dimensional isotropic harmonic oscillators. That is, each particle in our ensemble can be treated as a simple harmonic oscillator . The Hamiltonian for this system is given by H = ∑ i = 1 3 N ( 1 2 m p i 2 + m ω 2 2 q i 2 ) . {\displaystyle H=\sum _{i=1}^{3N}\left({\frac {1}{2m}}p_{i}^{2}+{\frac {m\omega ^{2}}{2}}q_{i}^{2}\right).} By using Hamilton's equations with the above Hamiltonian we find that the term in parentheses above is identically zero, thus yielding d q i ′ d p i ′ = d q i d p i . {\displaystyle dq_{i}'dp_{i}'=dq_{i}dp_{i}.} From this we can find the infinitesimal volume of phase space: d Γ ′ = ∏ i = 1 N d 3 q i ′ d 3 p i ′ = ∏ i = 1 N d 3 q i d 3 p i = d Γ . 
{\displaystyle \mathrm {d} \Gamma '=\prod _{i=1}^{N}d^{3}q_{i}'d^{3}p_{i}'=\prod _{i=1}^{N}d^{3}q_{i}d^{3}p_{i}=\mathrm {d} \Gamma .} Thus we have ultimately found that the infinitesimal phase-space volume is unchanged, yielding ρ ( Γ ′ , t + δ t ) = d N d Γ ′ = d N d Γ = ρ ( Γ , t ) , {\displaystyle \rho (\Gamma ',t+\delta t)={\frac {\mathrm {d} {\mathcal {N}}}{\mathrm {d} \Gamma '}}={\frac {\mathrm {d} {\mathcal {N}}}{\mathrm {d} \Gamma }}=\rho (\Gamma ,t),} demonstrating that Liouville's theorem holds for this system. [ 14 ] The question remains of how the phase-space volume actually evolves in time. Above we have shown that the total volume is conserved, but said nothing about what it looks like. For a single particle we can see that its trajectory in phase space is given by the ellipse of constant H {\displaystyle H} . Explicitly, one can solve Hamilton's equations for the system and find q i ( t ) = Q i cos ⁡ ω t + P i m ω sin ⁡ ω t , p i ( t ) = P i cos ⁡ ω t − m ω Q i sin ⁡ ω t , {\displaystyle {\begin{aligned}q_{i}(t)&=Q_{i}\cos {\omega t}+{\frac {P_{i}}{m\omega }}\sin {\omega t},\\p_{i}(t)&=P_{i}\cos {\omega t}-m\omega Q_{i}\sin {\omega t},\end{aligned}}} where Q i {\displaystyle Q_{i}} and P i {\displaystyle P_{i}} denote the initial position and momentum of the i {\displaystyle i} -th particle. For a system of multiple particles, each one will have a phase-space trajectory that traces out an ellipse corresponding to the particle's energy. The frequency at which the ellipse is traced is given by the ω {\displaystyle \omega } in the Hamiltonian, independent of any differences in energy. As a result, a region of phase space will simply rotate about the point ( q , p ) = ( 0 , 0 ) {\displaystyle (\mathbf {q} ,\mathbf {p} )=(0,0)} with frequency dependent on ω {\displaystyle \omega } . [ 15 ] This can be seen in the animation above. To see an example where Liouville's theorem does not apply, we can modify the equations of motion for the simple harmonic oscillator to account for the effects of friction or damping. Consider again the system of N {\displaystyle N} particles each in a 3 {\displaystyle 3} -dimensional isotropic harmonic potential, the Hamiltonian for which is given in the previous example. This time, we add the condition that each particle experiences a frictional force − γ p i {\displaystyle -\gamma p_{i}} , where γ {\displaystyle \gamma } is a positive constant dictating the amount of friction. As this is a non-conservative force , we need to extend Hamilton's equations as q i ˙ = − ∂ H ∂ p i , p i ˙ = − ∂ H ∂ q i − γ p i . {\displaystyle {\begin{aligned}{\dot {q_{i}}}&={\hphantom {-}}{\frac {\partial H}{\partial p_{i}}},\\[4pt]{\dot {p_{i}}}&=-{\frac {\partial H}{\partial q_{i}}}-\gamma p_{i}.\end{aligned}}} Unlike the equations of motion for the simple harmonic oscillator, these modified equations do not take the form of Hamilton's equations, and therefore we do not expect Liouville's theorem to hold. Instead, as depicted in the animation in this section, a generic phase space volume will shrink as it evolves under these equations of motion. To see this violation of Liouville's theorem explicitly, we can follow a very similar procedure to the undamped harmonic oscillator case, and we arrive again at d q i ′ d p i ′ = d q i d p i [ 1 + ( ∂ q i ˙ ∂ q i + ∂ p i ˙ ∂ p i ) δ t ] . 
{\displaystyle dq_{i}'dp_{i}'=dq_{i}dp_{i}\left[1+\left({\frac {\partial {\dot {q_{i}}}}{\partial q_{i}}}+{\frac {\partial {\dot {p_{i}}}}{\partial p_{i}}}\right)\delta t\right].} Plugging in our modified Hamilton's equations, we find d q i ′ d p i ′ = d q i d p i [ 1 + ( ∂ 2 H ∂ q i ∂ p i − ∂ 2 H ∂ p i ∂ q i − γ ) δ t ] , = d q i d p i [ 1 − γ δ t ] . {\displaystyle {\begin{aligned}dq_{i}'dp_{i}'&=dq_{i}dp_{i}\left[1+\left({\frac {\partial ^{2}H}{\partial q_{i}\partial p_{i}}}-{\frac {\partial ^{2}H}{\partial p_{i}\partial q_{i}}}-\gamma \right)\delta t\right],\\[1ex]&=dq_{i}dp_{i}\left[1-\gamma \delta t\right].\end{aligned}}} Calculating our new infinitesimal phase space volume, and keeping only first order in δ t {\displaystyle \delta t} we find the following result: d Γ ′ = ∏ i = 1 N d 3 q i ′ d 3 p i ′ = [ 1 − γ δ t ] 3 N ∏ i = 1 N d 3 q i d 3 p i = d Γ [ 1 − 3 N γ δ t ] . {\displaystyle \mathrm {d} \Gamma '=\prod _{i=1}^{N}d^{3}q_{i}'d^{3}p_{i}'=\left[1-\gamma \delta t\right]^{3N}\prod _{i=1}^{N}d^{3}q_{i}d^{3}p_{i}=\mathrm {d} \Gamma \left[1-3N\gamma \delta t\right].} We have found that the infinitesimal phase-space volume is no longer constant, and thus the phase-space density is not conserved. As can be seen from the equation as time increases, we expect our phase-space volume to decrease to zero as friction affects the system. As for how the phase-space volume evolves in time, we will still have the constant rotation as in the undamped case. However, the damping will introduce a steady decrease in the radii of each ellipse. Again we can solve for the trajectories explicitly using Hamilton's equations, taking care to use the modified ones above. Letting α ≡ γ / 2 {\displaystyle \alpha \equiv {\gamma }/{2}} for convenience, we find q i ( t ) = e − α t [ Q i cos ⁡ ω 1 t + B i sin ⁡ ω 1 t ] ω 1 ≡ ω 2 − α 2 , p i ( t ) = e − α t [ P i cos ⁡ ω 1 t − m ( ω 1 Q i + 2 α B i ) sin ⁡ ω 1 t ] B i ≡ 1 ω 1 ( P i m + 2 α Q i ) , {\displaystyle {\begin{aligned}q_{i}(t)&=e^{-\alpha t}\left[Q_{i}\cos {\omega _{1}t}+B_{i}\sin {\omega _{1}t}\right]&&\omega _{1}\equiv {\sqrt {\omega ^{2}-\alpha ^{2}}},\\[1ex]p_{i}(t)&=e^{-\alpha t}\left[P_{i}\cos {\omega _{1}t}-m(\omega _{1}Q_{i}+2\alpha B_{i})\sin {\omega _{1}t}\right]&&B_{i}\equiv {\frac {1}{\omega _{1}}}\left({\frac {P_{i}}{m}}+2\alpha Q_{i}\right),\end{aligned}}} where the values Q i {\displaystyle Q_{i}} and P i {\displaystyle P_{i}} denote the initial position and momentum of the i {\displaystyle i} -th particle. As the system evolves the total phase-space volume will spiral in to the origin. This can be seen in the figure above.
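The two examples can be condensed into a short numerical check, added here as an illustration: for a single one-dimensional oscillator with m = ω = 1, the linear equations of motion have flow Φ( t ) = exp( M t ), and any phase-space area carried by the flow is multiplied by det Φ( t ). Without friction the determinant stays equal to 1, in accordance with Liouville's theorem; with friction γ > 0 it decays like e −γ t , as found above.

import numpy as np
from scipy.linalg import expm

def flow_determinant(gamma, t):
    # d/dt (q, p) = M (q, p) for a unit-mass, unit-frequency oscillator with friction gamma
    M = np.array([[0.0, 1.0],
                  [-1.0, -gamma]])
    return np.linalg.det(expm(M * t))      # factor by which a phase-space area is scaled after time t

for gamma in (0.0, 0.3):
    print(gamma, [round(flow_determinant(gamma, t), 6) for t in (0.0, 1.0, 2.0)])
# gamma = 0.0 -> 1.0, 1.0, 1.0             (area preserved, Liouville's theorem)
# gamma = 0.3 -> 1.0, 0.740818, 0.548812   (area shrinks like exp(-gamma t))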
https://en.wikipedia.org/wiki/Liouville's_theorem_(Hamiltonian)
In complex analysis , Liouville's theorem , named after Joseph Liouville (although the theorem was first proven by Cauchy in 1844 [ 1 ] ), states that every bounded entire function must be constant . That is, every holomorphic function f {\displaystyle f} for which there exists a positive number M {\displaystyle M} such that | f ( z ) | ≤ M {\displaystyle |f(z)|\leq M} for all z ∈ C {\displaystyle z\in \mathbb {C} } is constant. Equivalently, non-constant holomorphic functions on C {\displaystyle \mathbb {C} } have unbounded images. The theorem is considerably improved by Picard's little theorem , which says that every entire function whose image omits two or more complex numbers must be constant. Liouville's theorem: Every holomorphic function f : C → C {\displaystyle f:\mathbb {C} \to \mathbb {C} } for which there exists a positive number M {\displaystyle M} such that | f ( z ) | ≤ M {\displaystyle |f(z)|\leq M} for all z ∈ C {\displaystyle z\in \mathbb {C} } is constant . More succinctly, Liouville's theorem states that every bounded entire function must be constant. This important theorem has several proofs. A standard analytical proof uses the fact that holomorphic functions are analytic . If f {\displaystyle f} is an entire function, it can be represented by its Taylor series about 0: where (by Cauchy's integral formula ) and C r {\displaystyle C_{r}} is the circle about 0 of radius r > 0 {\displaystyle r>0} . Suppose f {\displaystyle f} is bounded: i.e. there exists a constant M {\displaystyle M} such that | f ( z ) | ≤ M {\displaystyle |f(z)|\leq M} for all z {\displaystyle z} . We can estimate directly where in the second inequality we have used the fact that | z | = r {\displaystyle |z|=r} on the circle C r {\displaystyle C_{r}} . (This estimate is known as Cauchy's estimate .) But the choice of r {\displaystyle r} in the above is an arbitrary positive number. Therefore, letting r {\displaystyle r} tend to infinity (we let r {\displaystyle r} tend to infinity since f {\displaystyle f} is analytic on the entire plane) gives a k = 0 {\displaystyle a_{k}=0} for all k ≥ 1 {\displaystyle k\geq 1} . Thus f ( z ) = a 0 {\displaystyle f(z)=a_{0}} and this proves the theorem. Another proof uses the mean value property of harmonic functions. Given two points, choose two balls with the given points as centers and of equal radius. If the radius is large enough, the two balls will coincide except for an arbitrarily small proportion of their volume. Since f {\displaystyle f} is bounded, the averages of it over the two balls are arbitrarily close, and so f {\displaystyle f} assumes the same value at any two points. The proof can be adapted to the case where the harmonic function f {\displaystyle f} is merely bounded above or below. See Harmonic function#Liouville's theorem . Another approach to proving the theorem uses Cauchy's estimate for the derivative. Suppose | f ( z ) | ≤ M {\displaystyle |f(z)|\leq M} for all z {\displaystyle z} in the complex plane. We can apply the Cauchy estimate to a disk centered at any z 0 {\displaystyle z_{0}} of any radius ρ {\displaystyle \rho } to obtain | f ′ ( z 0 ) | ≤ M ρ {\displaystyle |f'(z_{0})|\leq {\frac {M}{\rho }}} . Letting ρ {\displaystyle \rho } tend to + ∞ {\displaystyle +\infty } , we obtain f ′ ( z 0 ) = 0 {\displaystyle f'(z_{0})=0} . Since this holds for every z 0 {\displaystyle z_{0}} , the derivative f ′ {\displaystyle f'} vanishes identically, so f {\displaystyle f} is constant. There is a short proof of the fundamental theorem of algebra using Liouville's theorem.
[ 4 ] Suppose for the sake of contradiction that there is a nonconstant polynomial p {\displaystyle p} with no complex root. Note that | p ( z ) | → ∞ {\displaystyle |p(z)|\to \infty } as z → ∞ {\displaystyle z\to \infty } . Take a sufficiently large ball B ( 0 , R ) {\displaystyle B(0,R)} ; for some constant M {\displaystyle M} there exists a sufficiently large R {\displaystyle R} such that 1 / | p ( z ) | < 1 {\displaystyle 1/|p(z)|<1} for all z ∉ B ( 0 , R ) {\displaystyle z\not \in B(0,R)} . Because p {\displaystyle p} has no roots, the function q ( z ) = 1 / p ( z ) {\displaystyle q(z)=1/p(z)} is entire and holomorphic inside B ( 0 , R ) {\displaystyle B(0,R)} , and thus it is also continuous on its closure B ¯ ( 0 , R ) {\displaystyle {\overline {B}}(0,R)} . By the extreme value theorem , a continuous function on a closed and bounded set obtains its extreme values, implying that 1 / | p ( z ) | ≤ C {\displaystyle 1/|p(z)|\leq C} for some constant C {\displaystyle C} and z ∈ B ¯ ( 0 , R ) {\displaystyle z\in {\overline {B}}(0,R)} . Thus, the function q ( z ) {\displaystyle q(z)} is bounded in C {\displaystyle \mathbb {C} } , and by Liouville's theorem, is constant , which contradicts our assumption that p {\displaystyle p} is nonconstant. A consequence of the theorem is that "genuinely different" entire functions cannot dominate each other, i.e. if f {\displaystyle f} and g {\displaystyle g} are entire, and | f | ≤ | g | {\displaystyle |f|\leq |g|} everywhere, then f = α g {\displaystyle f=\alpha g} for some complex number α {\displaystyle \alpha } . Consider that for g = 0 {\displaystyle g=0} the theorem is trivial so we assume g ≠ 0 {\displaystyle g\neq 0} . Consider the function h = f / g {\displaystyle h=f/g} . It is enough to prove that h {\displaystyle h} can be extended to an entire function, in which case the result follows by Liouville's theorem. The holomorphy of h {\displaystyle h} is clear except at points in g − 1 ( 0 ) {\displaystyle g^{-1}(0)} . But since h {\displaystyle h} is bounded and all the zeroes of g {\displaystyle g} are isolated, any singularities must be removable. Thus h {\displaystyle h} can be extended to an entire bounded function which by Liouville's theorem implies it is constant. Suppose that f {\displaystyle f} is entire and | f ( z ) | ≤ M | z | {\displaystyle |f(z)|\leq M|z|} , for M > 0 {\displaystyle M>0} . We can apply Cauchy's integral formula; we have that where I {\displaystyle I} is the value of the remaining integral. This shows that f ′ {\displaystyle f'} is bounded and entire, so it must be constant, by Liouville's theorem. Integrating then shows that f {\displaystyle f} is affine and then, by referring back to the original inequality, we have that the constant term is zero. The theorem can also be used to deduce that the domain of a non-constant elliptic function f {\displaystyle f} cannot be C {\displaystyle \mathbb {C} } . Suppose it was. Then, if a {\displaystyle a} and b {\displaystyle b} are two periods of f {\displaystyle f} such that a b {\displaystyle {\tfrac {a}{b}}} is not real, consider the parallelogram P {\displaystyle P} whose vertices are 0, a {\displaystyle a} , b {\displaystyle b} , and a + b {\displaystyle a+b} . Then the image of f {\displaystyle f} is equal to f ( P ) {\displaystyle f(P)} . Since f {\displaystyle f} is continuous and P {\displaystyle P} is compact , f ( P ) {\displaystyle f(P)} is also compact and, therefore, it is bounded. So, f {\displaystyle f} is constant. 
The fact that the domain of a non-constant elliptic function f {\displaystyle f} cannot be C {\displaystyle \mathbb {C} } is what Liouville actually proved, in 1847, using the theory of elliptic functions. [ 5 ] In fact, it was Cauchy who proved Liouville's theorem. [ 6 ] [ 7 ] If f {\displaystyle f} is a non-constant entire function, then its image is dense in C {\displaystyle \mathbb {C} } . This might seem to be a much stronger result than Liouville's theorem, but it is actually an easy corollary. If the image of f {\displaystyle f} is not dense, then there is a complex number w {\displaystyle w} and a real number r > 0 {\displaystyle r>0} such that the open disk centered at w {\displaystyle w} with radius r {\displaystyle r} has no element of the image of f {\displaystyle f} . Define Then g {\displaystyle g} is a bounded entire function, since for all z {\displaystyle z} , So, g {\displaystyle g} is constant, and therefore f {\displaystyle f} is constant. Any holomorphic function on a compact Riemann surface is necessarily constant. [ 8 ] Let f ( z ) {\displaystyle f(z)} be holomorphic on a compact Riemann surface M {\displaystyle M} . By compactness, there is a point p 0 ∈ M {\displaystyle p_{0}\in M} where | f ( p ) | {\displaystyle |f(p)|} attains its maximum. Then we can find a chart from a neighborhood of p 0 {\displaystyle p_{0}} to the unit disk D {\displaystyle \mathbb {D} } such that f ( φ − 1 ( z ) ) {\displaystyle f(\varphi ^{-1}(z))} is holomorphic on the unit disk and has a maximum at φ ( p 0 ) ∈ D {\displaystyle \varphi (p_{0})\in \mathbb {D} } , so it is constant, by the maximum modulus principle . Let C ∪ { ∞ } {\displaystyle \mathbb {C} \cup \{\infty \}} be the one-point compactification of the complex plane C {\displaystyle \mathbb {C} } . In place of holomorphic functions defined on regions in C {\displaystyle \mathbb {C} } , one can consider regions in C ∪ { ∞ } {\displaystyle \mathbb {C} \cup \{\infty \}} . Viewed this way, the only possible singularity for entire functions, defined on C ⊂ C ∪ { ∞ } {\displaystyle \mathbb {C} \subset \mathbb {C} \cup \{\infty \}} , is the point ∞ {\displaystyle \infty } . If an entire function f {\displaystyle f} is bounded in a neighborhood of ∞ {\displaystyle \infty } , then ∞ {\displaystyle \infty } is a removable singularity of f {\displaystyle f} , i.e. f {\displaystyle f} cannot blow up or behave erratically at ∞ {\displaystyle \infty } . In light of the power series expansion, it is not surprising that Liouville's theorem holds. Similarly, if an entire function has a pole of order n {\displaystyle n} at ∞ {\displaystyle \infty } —that is, it grows in magnitude comparably to z n {\displaystyle z^{n}} in some neighborhood of ∞ {\displaystyle \infty } —then f {\displaystyle f} is a polynomial. This extended version of Liouville's theorem can be more precisely stated: if | f ( z ) | ≤ M | z | n {\displaystyle |f(z)|\leq M|z|^{n}} for | z | {\displaystyle |z|} sufficiently large, then f {\displaystyle f} is a polynomial of degree at most n {\displaystyle n} . This can be proved as follows. Again take the Taylor series representation of f {\displaystyle f} , The argument used during the proof using Cauchy estimates shows that for all k ≥ 0 {\displaystyle k\geq 0} , So, if k > n {\displaystyle k>n} , then Therefore, a k = 0 {\displaystyle a_{k}=0} . Liouville's theorem does not extend to the generalizations of complex numbers known as double numbers and dual numbers . [ 9 ]
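The Cauchy estimates used in these proofs can also be observed numerically. The coefficients a k = (1/2π i )∮ f ( z )/ z k+1 d z can be approximated by averaging over a circle of radius r , and the bound | a k | ≤ max| f |/ r k then forces the coefficients above the growth order to vanish. The snippet below, an added illustration with the arbitrary choice f ( z ) = z 3 + 2 z + 1, recovers the coefficients and shows that those beyond degree 3 are numerically zero, in line with the growth version of the theorem.

import numpy as np

f = lambda z: z**3 + 2*z + 1           # an arbitrary entire function of polynomial growth

def taylor_coeff(k, r, npts=4096):
    theta = np.linspace(0.0, 2 * np.pi, npts, endpoint=False)
    z = r * np.exp(1j * theta)
    # a_k = (1 / (2*pi*i)) * contour integral of f(z) / z^(k+1) dz, evaluated on |z| = r
    return np.mean(f(z) * np.exp(-1j * k * theta)) / r**k

for r in (1.0, 10.0):
    print(r, [round(abs(taylor_coeff(k, r)), 6) for k in range(6)])
# a_0..a_3 come out as 1, 2, 0, 1 for either radius; a_4 and a_5 are ~0,
# as the bound |a_k| <= max|f| / r^k forces for k above the degree.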
https://en.wikipedia.org/wiki/Liouville's_theorem_(complex_analysis)
In mathematics , Liouville's theorem , proved by Joseph Liouville in 1850, [ 1 ] is a rigidity theorem about conformal mappings in Euclidean space . It states that every smooth conformal mapping on a domain of R n , where n > 2, can be expressed as a composition of translations , similarities , orthogonal transformations and inversions : they are Möbius transformations (in n dimensions). [ 2 ] [ 3 ] This theorem severely limits the variety of possible conformal mappings in R 3 and higher-dimensional spaces. By contrast, conformal mappings in R 2 can be much more complicated – for example, all simply connected planar domains are conformally equivalent , by the Riemann mapping theorem . Generalizations of the theorem hold for transformations that are only weakly differentiable ( Iwaniec & Martin 2001 , Chapter 5). The focus of such a study is the non-linear Cauchy–Riemann system that is a necessary and sufficient condition for a smooth mapping f : Ω → R n to be conformal: where Df is the Jacobian derivative , T is the matrix transpose , and I is the identity matrix. A weak solution of this system is defined to be an element f of the Sobolev space W 1, n loc (Ω, R n ) with non-negative Jacobian determinant almost everywhere , such that the Cauchy–Riemann system holds at almost every point of Ω. Liouville's theorem is then that every weak solution (in this sense) is a Möbius transformation, meaning that it has the form where a , b are vectors in R n , α is a scalar, A is a rotation matrix, ε = 0 or 2, and the matrix in parentheses is I or a Householder matrix (so, orthogonal). Equivalently stated, any quasiconformal map of a domain in Euclidean space that is also conformal is a Möbius transformation. This equivalent statement justifies using the Sobolev space W 1, n , since f ∈ W 1, n loc ( Ω , R n ) then follows from the geometrical condition of conformality and the ACL characterization of Sobolev space. The result is not optimal however: in even dimensions n = 2 k , the theorem also holds for solutions that are only assumed to be in the space W 1, k loc , and this result is sharp in the sense that there are weak solutions of the Cauchy–Riemann system in W 1, p for any p < k that are not Möbius transformations. In odd dimensions, it is known that W 1, n is not optimal, but a sharp result is not known. Similar rigidity results (in the smooth case) hold on any conformal manifold . The group of conformal isometries of an n -dimensional conformal Riemannian manifold always has dimension that cannot exceed that of the full conformal group SO( n + 1, 1). Equality of the two dimensions holds exactly when the conformal manifold is isometric with the n -sphere or projective space . Local versions of the result also hold: The Lie algebra of conformal Killing fields in an open set has dimension less than or equal to that of the conformal group, with equality holding if and only if the open set is locally conformally flat.
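Written out, the non-linear Cauchy–Riemann system referred to above takes the form ( D f ) T ( D f ) = |det D f | 2/n I , and it can be checked numerically that the inversion x ↦ x /| x | 2 , one of the building blocks in the theorem, satisfies it pointwise. The NumPy check below is an added illustration, with the Jacobian of the inversion entered by hand.

import numpy as np

def jacobian_inversion(x):
    # Jacobian of f(x) = x / |x|^2, computed by hand:  I/|x|^2 - 2 x x^T / |x|^4
    r2 = x @ x
    return np.eye(len(x)) / r2 - 2.0 * np.outer(x, x) / r2**2

n = 3
np.random.seed(1)
x = np.random.randn(n)                     # a generic point away from the origin
J = jacobian_inversion(x)
lhs = J.T @ J
rhs = abs(np.linalg.det(J)) ** (2.0 / n) * np.eye(n)
print(np.max(np.abs(lhs - rhs)))           # ~1e-17: (Df)^T (Df) = |det Df|^(2/n) I holds at x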
https://en.wikipedia.org/wiki/Liouville's_theorem_(conformal_mappings)
In mathematics , Liouville's theorem , originally formulated by French mathematician Joseph Liouville in 1833 to 1841, [ 1 ] [ 2 ] [ 3 ] places an important restriction on antiderivatives that can be expressed as elementary functions . The antiderivatives of certain elementary functions cannot themselves be expressed as elementary functions. These are called nonelementary antiderivatives . A standard example of such a function is e − x 2 , {\displaystyle e^{-x^{2}},} whose antiderivative is (with a multiplier of a constant) the error function , familiar from statistics . Other examples include the functions sin ⁡ ( x ) x {\displaystyle {\frac {\sin(x)}{x}}} and x x . {\displaystyle x^{x}.} Liouville's theorem states that elementary antiderivatives, if they exist, are in the same differential field as the function, plus possibly a finite number of applications of the logarithm function. For any differential field F , {\displaystyle F,} the constants of F {\displaystyle F} is the subfield Con ⁡ ( F ) = { f ∈ F : D f = 0 } . {\displaystyle \operatorname {Con} (F)=\{f\in F:Df=0\}.} Given two differential fields F {\displaystyle F} and G , {\displaystyle G,} G {\displaystyle G} is called a logarithmic extension of F {\displaystyle F} if G {\displaystyle G} is a simple transcendental extension of F {\displaystyle F} (that is, G = F ( t ) {\displaystyle G=F(t)} for some transcendental t {\displaystyle t} ) such that D t = D s s for some s ∈ F . {\displaystyle Dt={\frac {Ds}{s}}\quad {\text{ for some }}s\in F.} This has the form of a logarithmic derivative . Intuitively, one may think of t {\displaystyle t} as the logarithm of some element s {\displaystyle s} of F , {\displaystyle F,} in which case, this condition is analogous to the ordinary chain rule . However, F {\displaystyle F} is not necessarily equipped with a unique logarithm; one might adjoin many "logarithm-like" extensions to F . {\displaystyle F.} Similarly, an exponential extension is a simple transcendental extension that satisfies D t t = D s for some s ∈ F . {\displaystyle {\frac {Dt}{t}}=Ds\quad {\text{ for some }}s\in F.} With the above caveat in mind, this element may be thought of as an exponential of an element s {\displaystyle s} of F . {\displaystyle F.} Finally, G {\displaystyle G} is called an elementary differential extension of F {\displaystyle F} if there is a finite chain of subfields from F {\displaystyle F} to G {\displaystyle G} where each extension in the chain is either algebraic, logarithmic, or exponential. Suppose F {\displaystyle F} and G {\displaystyle G} are differential fields with Con ⁡ ( F ) = Con ⁡ ( G ) , {\displaystyle \operatorname {Con} (F)=\operatorname {Con} (G),} and that G {\displaystyle G} is an elementary differential extension of F . {\displaystyle F.} Suppose f ∈ F {\displaystyle f\in F} and g ∈ G {\displaystyle g\in G} satisfy D g = f {\displaystyle Dg=f} (in words, suppose that G {\displaystyle G} contains an antiderivative of f {\displaystyle f} ). Then there exist c 1 , … , c n ∈ Con ⁡ ( F ) {\displaystyle c_{1},\ldots ,c_{n}\in \operatorname {Con} (F)} and f 1 , … , f n , s ∈ F {\displaystyle f_{1},\ldots ,f_{n},s\in F} such that f = c 1 D f 1 f 1 + ⋯ + c n D f n f n + D s . {\displaystyle f=c_{1}{\frac {Df_{1}}{f_{1}}}+\dotsb +c_{n}{\frac {Df_{n}}{f_{n}}}+Ds.} In other words, the only functions that have "elementary antiderivatives" (that is, antiderivatives living in, at worst, an elementary differential extension of F {\displaystyle F} ) are those with this form. 
Thus, on an intuitive level, the theorem states that the only elementary antiderivatives are the "simple" functions plus a finite number of logarithms of "simple" functions. A proof of Liouville's theorem can be found in section 12.4 of Geddes, et al. [ 4 ] See Lützen's scientific bibliography for a sketch of Liouville's original proof [ 5 ] (Chapter IX. Integration in Finite Terms), its modern exposition and algebraic treatment (ibid. §61). As an example, the field F := C ( x ) {\displaystyle F:=\mathbb {C} (x)} of rational functions in a single variable has a derivation given by the standard derivative with respect to that variable. The constants of this field are just the complex numbers C ; {\displaystyle \mathbb {C} ;} that is, Con ⁡ ( C ( x ) ) = C , {\displaystyle \operatorname {Con} (\mathbb {C} (x))=\mathbb {C} ,} The function f := 1 x , {\displaystyle f:={\tfrac {1}{x}},} which exists in C ( x ) , {\displaystyle \mathbb {C} (x),} does not have an antiderivative in C ( x ) . {\displaystyle \mathbb {C} (x).} Its antiderivatives ln ⁡ x + C {\displaystyle \ln x+C} do, however, exist in the logarithmic extension C ( x , ln ⁡ x ) . {\displaystyle \mathbb {C} (x,\ln x).} Likewise, the function 1 x 2 + 1 {\displaystyle {\tfrac {1}{x^{2}+1}}} does not have an antiderivative in C ( x ) . {\displaystyle \mathbb {C} (x).} Its antiderivatives tan − 1 ⁡ ( x ) + C {\displaystyle \tan ^{-1}(x)+C} do not seem to satisfy the requirements of the theorem, since they are not (apparently) sums of rational functions and logarithms of rational functions. However, a calculation with Euler's formula e i θ = cos ⁡ θ + i sin ⁡ θ {\displaystyle e^{i\theta }=\cos \theta +i\sin \theta } shows that in fact the antiderivatives can be written in the required manner (as logarithms of rational functions). e 2 i θ = e i θ e − i θ = cos ⁡ θ + i sin ⁡ θ cos ⁡ θ − i sin ⁡ θ = 1 + i tan ⁡ θ 1 − i tan ⁡ θ θ = 1 2 i ln ⁡ ( 1 + i tan ⁡ θ 1 − i tan ⁡ θ ) tan − 1 ⁡ x = 1 2 i ln ⁡ ( 1 + i x 1 − i x ) {\displaystyle {\begin{aligned}e^{2i\theta }&={\frac {e^{i\theta }}{e^{-i\theta }}}={\frac {\cos \theta +i\sin \theta }{\cos \theta -i\sin \theta }}={\frac {1+i\tan \theta }{1-i\tan \theta }}\\\theta &={\frac {1}{2i}}\ln \left({\frac {1+i\tan \theta }{1-i\tan \theta }}\right)\\\tan ^{-1}x&={\frac {1}{2i}}\ln \left({\frac {1+ix}{1-ix}}\right)\end{aligned}}} Liouville's theorem is sometimes presented as a theorem in differential Galois theory , but this is not strictly true. The theorem can be proved without any use of Galois theory . Furthermore, the Galois group of a simple antiderivative is either trivial (if no field extension is required to express it), or is simply the additive group of the constants (corresponding to the constant of integration). Thus, an antiderivative's differential Galois group does not encode enough information to determine if it can be expressed using elementary functions, the major condition of Liouville's theorem.
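As an illustration of the two examples above (not part of the original text; it assumes the SymPy library, whose symbolic integrator recognizes many nonelementary antiderivatives), one can check in a computer algebra system which antiderivatives stay within an elementary extension of C(x):

    import sympy as sp

    x = sp.symbols('x')

    print(sp.integrate(1/x, x))            # log(x): lives in the logarithmic extension C(x, log x)
    print(sp.integrate(1/(x**2 + 1), x))   # atan(x), expressible as a logarithm of a rational function
    print(sp.integrate(sp.exp(-x**2), x))  # sqrt(pi)*erf(x)/2: no elementary antiderivative exists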
https://en.wikipedia.org/wiki/Liouville's_theorem_(differential_algebra)
In classical mechanics , a Liouville dynamical system is an exactly solvable dynamical system in which the kinetic energy T and potential energy V can be expressed in terms of the s generalized coordinates q as follows: [ 1 ] The solution of this system consists of a set of separably integrable equations where E = T + V is the conserved energy and the γ s {\displaystyle \gamma _{s}} are constants. As described below, the variables have been changed from q s to φ s , and the functions u s and w s substituted by their counterparts χ s and ω s . This solution has numerous applications, such as the orbit of a small planet about two fixed stars under the influence of Newtonian gravity . The Liouville dynamical system is one of several things named after Joseph Liouville , an eminent French mathematician. In classical mechanics , Euler's three-body problem describes the motion of a particle in a plane under the influence of two fixed centers, each of which attract the particle with an inverse-square force such as Newtonian gravity or Coulomb's law . Examples of the bicenter problem include a planet moving around two slowly moving stars , or an electron moving in the electric field of two positively charged nuclei , such as the first ion of the hydrogen molecule H 2 , namely the hydrogen molecular ion or H 2 + . The strength of the two attractions need not be equal; thus, the two stars may have different masses or the nuclei two different charges. Let the fixed centers of attraction be located along the x -axis at ± a . The potential energy of the moving particle is given by The two centers of attraction can be considered as the foci of a set of ellipses. If either center were absent, the particle would move on one of these ellipses, as a solution of the Kepler problem . Therefore, according to Bonnet's theorem , the same ellipses are the solutions for the bicenter problem. Introducing elliptic coordinates , the potential energy can be written as and the kinetic energy as This is a Liouville dynamical system if ξ and η are taken as φ 1 and φ 2 , respectively; thus, the function Y equals and the function W equals Using the general solution for a Liouville dynamical system below, one obtains Introducing a parameter u by the formula gives the parametric solution Since these are elliptic integrals , the coordinates ξ and η can be expressed as elliptic functions of u . The bicentric problem has a constant of motion, namely, from which the problem can be solved using the method of the last multiplier. To eliminate the v functions, the variables are changed to an equivalent set giving the relation which defines a new variable F . Using the new variables, the u and w functions can be expressed by equivalent functions χ and ω. Denoting the sum of the χ functions by Y , the kinetic energy can be written as Similarly, denoting the sum of the ω functions by W the potential energy V can be written as The Lagrange equation for the r th variable φ r {\displaystyle \varphi _{r}} is Multiplying both sides by 2 Y φ ˙ r {\displaystyle 2Y{\dot {\varphi }}_{r}} , re-arranging, and exploiting the relation 2 T = YF yields the equation which may be written as where E = T + V is the (conserved) total energy. It follows that which may be integrated once to yield where the γ r {\displaystyle \gamma _{r}} are constants of integration subject to the energy conservation Inverting, taking the square root and separating the variables yields a set of separably integrable equations:
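The defining forms of the kinetic and potential energies referred to above are, in the notation usual for Liouville systems (a reconstruction in that standard notation, not a quotation),

\[ T = \tfrac{1}{2}\Bigl(\sum_{r=1}^{s} u_r(q_r)\Bigr)\Bigl(\sum_{r=1}^{s} v_r(q_r)\,\dot{q}_r^{\,2}\Bigr), \qquad V = \frac{\sum_{r=1}^{s} w_r(q_r)}{\sum_{r=1}^{s} u_r(q_r)}, \]

where each u r, v r, w r depends only on its own coordinate q r; it is this separated structure that allows the equations of motion to be reduced to quadratures.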
https://en.wikipedia.org/wiki/Liouville_dynamical_system
In physics , Liouville field theory (or simply Liouville theory ) is a two-dimensional conformal field theory whose classical equation of motion is a generalization of Liouville's equation . Liouville theory is defined for all complex values of the central charge c {\displaystyle c} of its Virasoro symmetry algebra , but it is unitary only if and its classical limit is Although it is an interacting theory with a continuous spectrum , Liouville theory has been solved. In particular, its three-point function on the sphere has been determined analytically. Liouville theory describes the dynamics of a field φ {\displaystyle \varphi } called the Liouville field, which is defined on a two-dimensional space. This field is not a free field due to the presence of an exponential potential where the parameter b {\displaystyle b} is called the coupling constant . In a free field theory, the energy eigenvectors e 2 α φ {\displaystyle e^{2\alpha \varphi }} are linearly independent, and the momentum α {\displaystyle \alpha } is conserved in interactions. In Liouville theory, momentum is not conserved. Moreover, the potential reflects the energy eigenvectors before they reach φ = + ∞ {\displaystyle \varphi =+\infty } , and two eigenvectors are linearly dependent if their momenta are related by the reflection where the background charge is While the exponential potential breaks momentum conservation, it does not break conformal symmetry, and Liouville theory is a conformal field theory with the central charge Under conformal transformations, an energy eigenvector with momentum α {\displaystyle \alpha } transforms as a primary field with the conformal dimension Δ {\displaystyle \Delta } by The central charge and conformal dimensions are invariant under the duality The correlation functions of Liouville theory are covariant under this duality, and under reflections of the momenta. These quantum symmetries of Liouville theory are however not manifest in the Lagrangian formulation, in particular the exponential potential is not invariant under the duality. The spectrum S {\displaystyle {\mathcal {S}}} of Liouville theory is a diagonal combination of Verma modules of the Virasoro algebra , where V Δ {\displaystyle {\mathcal {V}}_{\Delta }} and V ¯ Δ {\displaystyle {\bar {\mathcal {V}}}_{\Delta }} denote the same Verma module, viewed as a representation of the left- and right-moving Virasoro algebra respectively. In terms of momenta , corresponds to The reflection relation is responsible for the momentum taking values on a half-line, instead of a full line for a free theory. Liouville theory is unitary if and only if c ∈ ( 1 , + ∞ ) {\displaystyle c\in (1,+\infty )} . The spectrum of Liouville theory does not include a vacuum state . A vacuum state can be defined, but it does not contribute to operator product expansions . In Liouville theory, primary fields are usually parametrized by their momentum rather than their conformal dimension , and denoted V α ( z ) {\displaystyle V_{\alpha }(z)} . 
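The standard relations among the coupling constant b, the background charge Q, the central charge c and the conformal dimensions, which the paragraph above refers to, are (as usually written)

\[ Q = b + b^{-1}, \qquad c = 1 + 6Q^{2}, \qquad \Delta_{\alpha} = \alpha\,(Q-\alpha), \]

so that both the reflection α → Q − α and the duality b → b^{-1} leave c and Δ α unchanged.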
Both fields V α ( z ) {\displaystyle V_{\alpha }(z)} and V Q − α ( z ) {\displaystyle V_{Q-\alpha }(z)} correspond to the primary state of the representation V Δ ⊗ V ¯ Δ {\displaystyle {\mathcal {V}}_{\Delta }\otimes {\bar {\mathcal {V}}}_{\Delta }} , and are related by the reflection relation where the reflection coefficient is [ 1 ] (The sign is + 1 {\displaystyle +1} if c ∈ ( − ∞ , 1 ) {\displaystyle c\in (-\infty ,1)} and − 1 {\displaystyle -1} otherwise, and the normalization parameter λ {\displaystyle \lambda } is arbitrary.) For c ∉ ( − ∞ , 1 ) {\displaystyle c\notin (-\infty ,1)} , the three-point structure constant is given by the DOZZ formula (for Dorn–Otto [ 2 ] and Zamolodchikov–Zamolodchikov [ 3 ] ), where the special function Υ b {\displaystyle \Upsilon _{b}} is a kind of multiple gamma function . For c ∈ ( − ∞ , 1 ) {\displaystyle c\in (-\infty ,1)} , the three-point structure constant is [ 1 ] where N {\displaystyle N} -point functions on the sphere can be expressed in terms of three-point structure constants, and conformal blocks . An N {\displaystyle N} -point function may have several different expressions: that they agree is equivalent to crossing symmetry of the four-point function, which has been checked numerically [ 3 ] [ 4 ] and proved analytically. [ 5 ] [ 6 ] Liouville theory exists not only on the sphere, but also on any Riemann surface of genus g ≥ 1 {\displaystyle g\geq 1} . Technically, this is equivalent to the modular invariance of the torus one-point function. Due to remarkable identities of conformal blocks and structure constants, this modular invariance property can be deduced from crossing symmetry of the sphere four-point function. [ 7 ] [ 4 ] Using the conformal bootstrap approach, Liouville theory can be shown to be the unique conformal field theory such that [ 1 ] Liouville theory is defined by the local action where g μ ν {\displaystyle g_{\mu \nu }} is the metric of the two-dimensional space on which the theory is formulated, R {\displaystyle R} is the Ricci scalar of that space, and φ {\displaystyle \varphi } is the Liouville field. The parameter λ ′ {\displaystyle \lambda '} , which is sometimes called the cosmological constant, is related to the parameter λ {\displaystyle \lambda } that appears in correlation functions by The equation of motion associated to this action is where Δ = | g | − 1 / 2 ∂ μ ( | g | 1 / 2 g μ ν ∂ ν ) {\displaystyle \Delta =|g|^{-1/2}\partial _{\mu }(|g|^{1/2}g^{\mu \nu }\partial _{\nu })} is the Laplace–Beltrami operator . If g μ ν {\displaystyle g_{\mu \nu }} is the Euclidean metric , this equation reduces to which is equivalent to Liouville's equation . Once compactified on a cylinder, Liouville field theory can be equivalently formulated as a worldline theory. [ 8 ] Using a complex coordinate system z {\displaystyle z} and a Euclidean metric the energy–momentum tensor 's components obey The non-vanishing components are Each one of these two components generates a Virasoro algebra with the central charge For both of these Virasoro algebras, a field e 2 α φ {\displaystyle e^{2\alpha \varphi }} is a primary field with the conformal dimension For the theory to have conformal invariance , the field e 2 b φ {\displaystyle e^{2b\varphi }} that appears in the action must be marginal , i.e. have the conformal dimension This leads to the relation between the background charge and the coupling constant. 
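A common form of the local action and of the equation of motion discussed above is the following (conventions and normalizations differ between references; this is a reconstruction under the usual choice):

\[ S[\varphi] = \frac{1}{4\pi}\int d^{2}x\,\sqrt{g}\,\Bigl(g^{\mu\nu}\partial_{\mu}\varphi\,\partial_{\nu}\varphi + Q\,R\,\varphi + 4\pi\lambda'\, e^{2b\varphi}\Bigr), \qquad \Delta\varphi = \tfrac{1}{2}\,Q\,R + 4\pi\lambda'\, b\, e^{2b\varphi}, \]

which on flat space (R = 0) reduces to an equation of Liouville type for the exponential of the field.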
If this relation is obeyed, then e 2 b φ {\displaystyle e^{2b\varphi }} is actually exactly marginal, and the theory is conformally invariant. The path integral representation of an N {\displaystyle N} -point correlation function of primary fields is It has been difficult to define and to compute this path integral. In the path integral representation, it is not obvious that Liouville theory has exact conformal invariance , and it is not manifest that correlation functions are invariant under b → b − 1 {\displaystyle b\to b^{-1}} and obey the reflection relation. Nevertheless, the path integral representation can be used for computing the residues of correlation functions at some of their poles as Dotsenko–Fateev integrals in the Coulomb gas formalism , and this is how the DOZZ formula was first guessed in the 1990s. It is only in the 2010s that a rigorous probabilistic construction of the path integral was found, which led to a proof of the DOZZ formula [ 9 ] and the conformal bootstrap. [ 6 ] [ 10 ] When the central charge and conformal dimensions are sent to the relevant discrete values, correlation functions of Liouville theory reduce to correlation functions of diagonal (A-series) Virasoro minimal models . [ 1 ] On the other hand, when the central charge is sent to one while conformal dimensions stay continuous, Liouville theory tends to Runkel–Watts theory, a nontrivial conformal field theory (CFT) with a continuous spectrum whose three-point function is not analytic as a function of the momenta. [ 11 ] Generalizations of Runkel-Watts theory are obtained from Liouville theory by taking limits of the type b 2 ∉ R , b 2 → Q < 0 {\displaystyle b^{2}\notin \mathbb {R} ,b^{2}\to \mathbb {Q} _{<0}} . [ 4 ] So, for b 2 ∈ Q < 0 {\displaystyle b^{2}\in \mathbb {Q} _{<0}} , two distinct CFTs with the same spectrum are known: Liouville theory, whose three-point function is analytic, and another CFT with a non-analytic three-point function. Liouville theory can be obtained from the S L 2 ( R ) {\displaystyle SL_{2}(\mathbb {R} )} Wess–Zumino–Witten model by a quantum Drinfeld–Sokolov reduction . Moreover, correlation functions of the H 3 + {\displaystyle H_{3}^{+}} model (the Euclidean version of the S L 2 ( R ) {\displaystyle SL_{2}(\mathbb {R} )} WZW model) can be expressed in terms of correlation functions of Liouville theory. [ 12 ] [ 13 ] This is also true of correlation functions of the 2d black hole S L 2 / U 1 {\displaystyle SL_{2}/U_{1}} coset model. [ 12 ] Moreover, there exist theories that continuously interpolate between Liouville theory and the H 3 + {\displaystyle H_{3}^{+}} model. [ 14 ] Liouville theory is the simplest example of a Toda field theory , associated to the A 1 {\displaystyle A_{1}} Cartan matrix . More general conformal Toda theories can be viewed as generalizations of Liouville theory, whose Lagrangians involve several bosons rather than one boson φ {\displaystyle \varphi } , and whose symmetry algebras are W-algebras rather than the Virasoro algebra. Liouville theory admits two different supersymmetric extensions called N = 1 {\displaystyle {\mathcal {N}}=1} supersymmetric Liouville theory and N = 2 {\displaystyle {\mathcal {N}}=2} supersymmetric Liouville theory. [ 15 ] In flat space, the sinh-Gordon model is defined by the local action: The corresponding classical equation of motion is the sinh-Gordon equation . The model can be viewed as a perturbation of Liouville theory. 
The model's exact S-matrix is known in the weak coupling regime 0 < b < 1 {\displaystyle 0<b<1} , and it is formally invariant under b → b − 1 {\displaystyle b\to b^{-1}} . However, it has been argued that the model itself is not invariant. [ 16 ] In two dimensions, Liouville theory can be used to build a quantum theory of gravity called Liouville gravity. It should not be confused [ 17 ] [ 18 ] with the CGHS model or Jackiw–Teitelboim gravity . In two dimensions, the Einstein-Hilbert action is topological, i.e. it is proportional to the Euler characteristic . Nevertheless, after quantization, general relativity is no longer topological, because of the Weyl anomaly : under a rescaling of the metric g ↦ e ϕ g {\displaystyle g\mapsto e^{\phi }g} , while the action is invariant, the functional integration measure is not, and gives rise to a term proportional to the Liouville action for ϕ {\displaystyle \phi } . This leads to the construction of Liouville gravity as a product of three CFTs: Liouville theory for the gravitational sector, Faddeev-Popov ghosts for Weyl invariance (viewed as a gauge symmetry), and an arbitrary CFT that describes matter. The central charges of these CFTs must sum to zero in order to cancel the Weyl anomaly, and ensure that the quantum theory is topological. [ 15 ] The observables of Liouville gravity are correlation numbers: correlation functions of the product CFT, integrated over the moduli. Correlation numbers can be computed explicitly in some examples, such as the Virasoro minimal string. [ 19 ] Correlation numbers at fixed Euler characteristic are the coefficients of quantum gravity correlators, when expanded in powers of the gravitational constant . Liouville theory appears in the context of string theory when trying to formulate a non-critical version of the theory in the path integral formulation . [ 20 ] The theory also appears as the description of bosonic string theory in two spacetime dimensions with a linear dilaton and a tachyon background. The tachyon field equation of motion in the linear dilaton background requires it to take an exponential solution. The Polyakov action in this background is then identical to Liouville field theory, with the linear dilaton being responsible for the background charge term while the tachyon contributing the exponential potential. [ 21 ] There is an exact mapping between Liouville theory with c ≥ 25 {\displaystyle c\geq 25} , and certain log-correlated random energy models . [ 22 ] These models describe a thermal particle in a random potential that is logarithmically correlated. In two dimensions, such potential coincides with the Gaussian free field . In that case, certain correlation functions between primary fields in the Liouville theory are mapped to correlation functions of the Gibbs measure of the particle. This has applications to extreme value statistics of the two-dimensional Gaussian free field, and allows to predict certain universal properties of the log-correlated random energy models (in two dimensions and beyond). Liouville theory is related to other subjects in physics and mathematics, such as three-dimensional general relativity in negatively curved spaces , the uniformization problem of Riemann surfaces , and other problems in conformal mapping . It is also related to instanton partition functions in a certain four-dimensional superconformal gauge theories by the AGT correspondence . 
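The cancellation condition on the central charges mentioned above can be made explicit: the Faddeev–Popov ghost system contributes c ghosts = −26 (a standard fact stated here as background, not a quotation), so the requirement that the total central charge vanish reads

\[ c_{\text{Liouville}} + c_{\text{matter}} - 26 = 0, \qquad \text{i.e.} \qquad c_{\text{Liouville}} = 26 - c_{\text{matter}} . \]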
Liouville theory with c ≤ 1 {\displaystyle c\leq 1} first appeared as a model of time-dependent string theory under the name timelike Liouville theory . [ 23 ] It has also been called a generalized minimal model . [ 24 ] It was first called Liouville theory when it was found to actually exist, and to be spacelike rather than timelike. [ 4 ] As of 2022, not one of these three names is universally accepted.
https://en.wikipedia.org/wiki/Liouville_gravity
In number theory , a Liouville number is a real number x {\displaystyle x} with the property that, for every positive integer n {\displaystyle n} , there exists a pair of integers ( p , q ) {\displaystyle (p,q)} with q > 1 {\displaystyle q>1} such that The inequality implies that Liouville numbers possess an excellent sequence of rational number approximations. In 1844, Joseph Liouville proved a bound showing that there is a limit to how well algebraic numbers can be approximated by rational numbers, and he defined Liouville numbers specifically so that they would have rational approximations better than the ones allowed by this bound. Liouville also exhibited examples of Liouville numbers [ 1 ] thereby establishing the existence of transcendental numbers for the first time. [ 2 ] One of these examples is Liouville's constant in which the n th digit after the decimal point is 1 if n {\displaystyle n} is the factorial of a positive integer and 0 otherwise. It is known that π and e , although transcendental, are not Liouville numbers. [ 3 ] Liouville numbers can be shown to exist by an explicit construction. For any integer b ≥ 2 {\displaystyle b\geq 2} and any sequence of integers ( a 1 , a 2 , … ) {\displaystyle (a_{1},a_{2},\ldots )} such that a k ∈ { 0 , 1 , 2 , … , b − 1 } {\displaystyle a_{k}\in \{0,1,2,\ldots ,b-1\}} for all k {\displaystyle k} and a k ≠ 0 {\displaystyle a_{k}\neq 0} for infinitely many k {\displaystyle k} , define the number In the special case when b = 10 {\displaystyle b=10} , and a k = 1 {\displaystyle a_{k}=1} for all k {\displaystyle k} , the resulting number x {\displaystyle x} is called Liouville's constant: It follows from the definition of x {\displaystyle x} that its base - b {\displaystyle b} representation is where the n {\displaystyle n} th term is in the n ! {\displaystyle n!} th place. Since this base- b {\displaystyle b} representation is non-repeating it follows that x {\displaystyle x} is not a rational number. Therefore, for any rational number p / q {\displaystyle p/q} , | x − p / q | > 0 {\displaystyle |x-p/q|>0} . Now, for any integer n ≥ 1 {\displaystyle n\geq 1} , p n {\displaystyle p_{n}} and q n {\displaystyle q_{n}} can be defined as follows: Then, Therefore, any such x {\displaystyle x} is a Liouville number. Here the proof will show that the number x = c / d , {\displaystyle ~x=c/d~,} where c and d are integers and d > 0 , {\displaystyle ~d>0~,} cannot satisfy the inequalities that define a Liouville number. Since every rational number can be represented as such c / d , {\displaystyle ~c/d~,} the proof will show that no Liouville number can be rational . More specifically, this proof shows that for any positive integer n large enough that 2 n − 1 > d > 0 {\displaystyle ~2^{n-1}>d>0~} [equivalently, for any positive integer n > 1 + log 2 ⁡ ( d ) {\displaystyle ~n>1+\log _{2}(d)~} )], no pair of integers ( p , q ) {\displaystyle ~(\,p,\,q\,)~} exists that simultaneously satisfies the pair of bracketing inequalities If the claim is true, then the desired conclusion follows. Let p and q be any integers with q > 1 . {\displaystyle ~q>1~.} Then, If | c q − d p | = 0 , {\displaystyle \left|c\,q-d\,p\right|=0~,} then meaning that such pair of integers ( p , q ) {\displaystyle ~(\,p,\,q\,)~} would violate the first inequality in the definition of a Liouville number, irrespective of any choice of n . 
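The construction above can be checked directly with exact rational arithmetic, in the special case of Liouville's constant (b = 10 and all a k = 1). The sketch below is illustrative only: the helper name truncation is hypothetical, and a deep truncation of the series is used as a stand-in for the constant itself, its tail being far below the tolerances involved.

    from fractions import Fraction
    from math import factorial

    def truncation(n, b=10):
        # p_n / q_n : the first n terms of sum_{k>=1} b**(-k!), as an exact fraction
        q = b ** factorial(n)
        p = sum(b ** (factorial(n) - factorial(k)) for k in range(1, n + 1))
        return Fraction(p, q)

    x = truncation(7)   # stand-in for Liouville's constant; the remaining tail is below 10**-40000

    for n in range(1, 5):
        pq = truncation(n)
        q = pq.denominator
        assert 0 < abs(x - pq) < Fraction(1, q ** n)
        print(f"n = {n}: 0 < |x - p/q| < q**(-{n}) holds with q = 10**{factorial(n)}")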
If, on the other hand, since | c q − d p | > 0 , {\displaystyle ~\left|c\,q-d\,p\right|>0~,} then, since c q − d p {\displaystyle c\,q-d\,p} is an integer, we can assert the sharper inequality | c q − d p | ≥ 1 . {\displaystyle \left|c\,q-d\,p\right|\geq 1~.} From this it follows that Now for any integer n > 1 + log 2 ⁡ ( d ) , {\displaystyle ~n>1+\log _{2}(d)~,} the last inequality above implies Therefore, in the case | c q − d p | > 0 {\displaystyle ~\left|c\,q-d\,p\right|>0~} such pair of integers ( p , q ) {\displaystyle ~(\,p,\,q\,)~} would violate the second inequality in the definition of a Liouville number, for some positive integer n . Therefore, to conclude, there is no pair of integers ( p , q ) , {\displaystyle ~(\,p,\,q\,)~,} with q > 1 , {\displaystyle ~q>1~,} that would qualify such an x = c / d , {\displaystyle ~x=c/d~,} as a Liouville number. Hence a Liouville number cannot be rational. No Liouville number is algebraic. The proof of this assertion proceeds by first establishing a property of irrational algebraic numbers . This property essentially says that irrational algebraic numbers cannot be well approximated by rational numbers, where the condition for "well approximated" becomes more stringent for larger denominators. A Liouville number is irrational but does not have this property, so it cannot be algebraic and must be transcendental. The following lemma is usually known as Liouville's theorem (on diophantine approximation) , there being several results known as Liouville's theorem . Lemma: If α {\displaystyle \alpha } is an irrational root of an irreducible polynomial of degree n > 1 {\displaystyle n>1} with integer coefficients, then there exists a real number A > 0 {\displaystyle A>0} such that for all integers p , q {\displaystyle p,q} with q > 0 {\displaystyle q>0} , Proof of Lemma: Let f ( x ) = ∑ k = 0 n a k x k {\displaystyle f(x)=\sum _{k\,=\,0}^{n}a_{k}x^{k}} be a minimal polynomial with integer coefficients, such that f ( α ) = 0 {\displaystyle f(\alpha )=0} . By the fundamental theorem of algebra , f {\displaystyle f} has at most n {\displaystyle n} distinct roots. Therefore, there exists δ 1 > 0 {\displaystyle \delta _{1}>0} such that for all 0 < | x − α | < δ 1 {\displaystyle 0<|x-\alpha |<\delta _{1}} we get f ( x ) ≠ 0 {\displaystyle f(x)\neq 0} . Since f {\displaystyle f} is a minimal polynomial of α {\displaystyle \alpha } we get f ′ ( α ) ≠ 0 {\displaystyle f'\!(\alpha )\neq 0} , and also f ′ {\displaystyle f'} is continuous . Therefore, by the extreme value theorem there exists δ 2 > 0 {\displaystyle \delta _{2}>0} and M > 0 {\displaystyle M>0} such that for all | x − α | < δ 2 {\displaystyle |x-\alpha |<\delta _{2}} we get 0 < | f ′ ( x ) | ≤ M {\displaystyle 0<|f'\!(x)|\leq M} . Both conditions are satisfied for δ = min { δ 1 , δ 2 } {\displaystyle \delta =\min\{\delta _{1},\delta _{2}\}} . Now let p q ∈ ( α − δ , α + δ ) {\displaystyle {\tfrac {p}{q}}\in (\alpha -\delta ,\alpha +\delta )} be a rational number. Without loss of generality we may assume that p q < α {\displaystyle {\tfrac {p}{q}}<\alpha } . By the mean value theorem , there exists x 0 ∈ ( p q , α ) {\displaystyle x_{0}\in \left({\tfrac {p}{q}},\alpha \right)} such that Since f ( α ) = 0 {\displaystyle f(\alpha )=0} and f ( p q ) ≠ 0 {\displaystyle f{\bigl (}{\tfrac {p}{q}}{\bigr )}\neq 0} , both sides of that equality are nonzero. 
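The chain of estimates used in this step can be written out in one line (a reconstruction under the stated assumptions q ≥ 2, |cq − dp| ≥ 1 and 2^(n−1) > d):

\[ \Bigl|\,x-\frac{p}{q}\,\Bigr| \;=\; \frac{\lvert c\,q-d\,p\rvert}{d\,q} \;\ge\; \frac{1}{d\,q} \;>\; \frac{1}{2^{\,n-1}\,q} \;\ge\; \frac{1}{q^{\,n-1}\,q} \;=\; \frac{1}{q^{\,n}} , \]

which is exactly the violation of the second defining inequality asserted in the text.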
In particular | f ′ ( x 0 ) | > 0 {\displaystyle |f'\!(x_{0})|>0} and we can rearrange: Proof of assertion: As a consequence of this lemma, let x be a Liouville number; as noted in the article text, x is then irrational. If x is algebraic, then by the lemma, there exists some integer n and some positive real A such that for all p , q Let r be a positive integer such that 1/(2 r ) ≤ A and define m = r + n . Since x is a Liouville number, there exist integers a , b with b > 1 such that which contradicts the lemma. Hence a Liouville number cannot be algebraic, and therefore must be transcendental. Establishing that a given number is a Liouville number proves that it is transcendental. However, not every transcendental number is a Liouville number. The terms in the continued fraction expansion of every Liouville number are unbounded; using a counting argument, one can then show that there must be uncountably many transcendental numbers which are not Liouville. Using the explicit continued fraction expansion of e , one can show that e is an example of a transcendental number that is not Liouville. Mahler proved in 1953 that π is another such example. [ 4 ] Consider the number 3.14(3 zeros)1(17 zeros)5(95 zeros)9(599 zeros)2(4319 zeros)6... where the digits are zero except in positions n ! where the digit equals the n th digit following the decimal point in the decimal expansion of π . As shown in the section on the existence of Liouville numbers , this number, as well as any other non-terminating decimal with its non-zero digits similarly situated, satisfies the definition of a Liouville number. Since the set of all sequences of non-null digits has the cardinality of the continuum , the same is true of the set of all Liouville numbers. Moreover, the Liouville numbers form a dense subset of the set of real numbers. [ citation needed ] From the point of view of measure theory , the set of all Liouville numbers L {\displaystyle L} is small. More precisely, its Lebesgue measure , λ ( L ) {\displaystyle \lambda (L)} , is zero. The proof given follows some ideas by John C. Oxtoby . [ 5 ] : 8 For positive integers n > 2 {\displaystyle n>2} and q ≥ 2 {\displaystyle q\geq 2} set: then Observe that for each positive integer n ≥ 2 {\displaystyle n\geq 2} and m ≥ 1 {\displaystyle m\geq 1} , then Since and n > 2 {\displaystyle n>2} then Now and it follows that for each positive integer m {\displaystyle m} , L ∩ ( − m , m ) {\displaystyle L\cap (-m,m)} has Lebesgue measure zero. Consequently, so has L {\displaystyle L} . In contrast, the Lebesgue measure of the set of all real transcendental numbers is infinite (since the set of algebraic numbers is a null set ). One could show even more - the set of Liouville numbers has Hausdorff dimension 0 (a property strictly stronger than having Lebesgue measure 0). For each positive integer n , set The set of all Liouville numbers can thus be written as Each U n {\displaystyle ~U_{n}~} is an open set ; as its closure contains all rationals (the p / q {\displaystyle ~p/q~} from each punctured interval), it is also a dense subset of real line. Since it is the intersection of countably many such open dense sets, L is comeagre , that is to say, it is a dense G δ set. The Liouville–Roth irrationality measure ( irrationality exponent, approximation exponent, or Liouville–Roth constant ) of a real number x {\displaystyle x} is a measure of how "closely" it can be approximated by rationals. 
It is defined by adapting the definition of Liouville numbers: instead of requiring the existence of a sequence of pairs ( p , q ) {\displaystyle (p,q)} that make the inequality hold for each n {\displaystyle n} —a sequence which necessarily contains infinitely many distinct pairs—the irrationality exponent μ ( x ) {\displaystyle \mu (x)} is defined to be the supremum of the set of n {\displaystyle n} for which such an infinite sequence exists, that is, the set of n {\displaystyle n} such that 0 < | x − p q | < 1 q n {\displaystyle 0<\left|x-{\frac {p}{q}}\right|<{\frac {1}{q^{n}}}} is satisfied by an infinite number of integer pairs ( p , q ) {\displaystyle (p,q)} with q > 0 {\displaystyle q>0} . [ 6 ] : 246 For any value n ≤ μ ( x ) {\displaystyle n\leq \mu (x)} , the infinite set of all rationals p / q {\displaystyle p/q} satisfying the above inequality yields good approximations of x {\displaystyle x} . Conversely, if n > μ ( x ) {\displaystyle n>\mu (x)} , then there are at most finitely many ( p , q ) {\displaystyle (p,q)} with q > 0 {\displaystyle q>0} that satisfy the inequality. If x {\displaystyle x} is a Liouville number then μ ( x ) = ∞ {\displaystyle \mu (x)=\infty } .
https://en.wikipedia.org/wiki/Liouville_number
In the mathematical physics of quantum mechanics , Liouville space , also known as line space , is the space of operators on Hilbert space . Liouville space is itself a Hilbert space under the Hilbert-Schmidt inner product . [ 1 ] [ 2 ] Abstractly, Liouville space is equivalent ( isometrically isomorphic ) to the tensor product of a Hilbert space with its dual . [ 1 ] [ 3 ] A common computational technique to organize computations in Liouville space is vectorization . [ 2 ] Liouville space underlies the density operator formalism and is a common computation technique in the study of open quantum systems . [ 2 ] [ 3 ]
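As a small illustration of the vectorization technique mentioned above (not from the original article; it uses NumPy and the column-stacking convention), left- and right-multiplication of a density matrix become an ordinary matrix–vector product in Liouville space:

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, rho = (rng.standard_normal((2, 2)) for _ in range(3))

    def vec(M):
        # column-stacking vectorization: maps the operator M to a vector in Liouville space
        return M.reshape(-1, order="F")

    # identity: vec(A @ rho @ B) = (B^T kron A) @ vec(rho)
    lhs = vec(A @ rho @ B)
    rhs = np.kron(B.T, A) @ vec(rho)
    print(np.allclose(lhs, rhs))   # True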
https://en.wikipedia.org/wiki/Liouville_space
In dynamical systems theory, the Liouville–Arnold theorem states that if, in a Hamiltonian dynamical system with n degrees of freedom , there are also n independent, Poisson commuting first integrals of motion , and the level sets of all first integrals are compact, then there exists a canonical transformation to action-angle coordinates in which the transformed Hamiltonian is dependent only upon the action coordinates and the angle coordinates evolve linearly in time. Thus the equations of motion for the system can be solved in quadratures if the level simultaneous set conditions can be separated. The theorem is named after Joseph Liouville and Vladimir Arnold . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] : 270–272 The theorem was proven in its original form by Liouville in 1853 for functions on R 2 n {\displaystyle \mathbb {R} ^{2n}} with canonical symplectic structure . It was generalized to the setting of symplectic manifolds by Arnold, who gave a proof in his textbook Mathematical Methods of Classical Mechanics published 1974. Let ( M 2 n , ω ) {\displaystyle (M^{2n},\omega )} be a 2 n {\displaystyle 2n} -dimensional symplectic manifold with symplectic structure ω {\displaystyle \omega } . An integrable system on M 2 n {\displaystyle M^{2n}} is a set of n {\displaystyle n} functions on M 2 n {\displaystyle M^{2n}} , labelled F = ( F 1 , ⋯ , F n ) {\displaystyle F=(F_{1},\cdots ,F_{n})} , satisfying The Poisson bracket is the Lie bracket of vector fields of the Hamiltonian vector field corresponding to each F i {\displaystyle F_{i}} . In full, if X H {\displaystyle X_{H}} is the Hamiltonian vector field corresponding to a smooth function H : M 2 n → R {\displaystyle H:M^{2n}\rightarrow \mathbb {R} } , then for two smooth functions F , G {\displaystyle F,G} , the Poisson bracket is ( F , G ) = [ X F , X G ] {\displaystyle (F,G)=[X_{F},X_{G}]} . A point p {\displaystyle p} is a regular point if d f 1 ∧ ⋯ ∧ d f n ( p ) ≠ 0 {\displaystyle df_{1}\wedge \cdots \wedge df_{n}(p)\neq 0} . The integrable system defines a function F : M 2 n → R n {\displaystyle F:M^{2n}\rightarrow \mathbb {R} ^{n}} . Denote by L c {\displaystyle L_{\mathbf {c} }} the level set of the functions F i {\displaystyle F_{i}} , L c = { x : F i ( x ) = c i } , {\displaystyle L_{\mathbf {c} }=\{x:F_{i}(x)=c_{i}\},} or alternatively, L c = F − 1 ( c ) {\displaystyle L_{\mathbf {c} }=F^{-1}(\mathbf {c} )} . Now if M 2 n {\displaystyle M^{2n}} is given the additional structure of a distinguished function H {\displaystyle H} , the Hamiltonian system ( M 2 n , ω , H ) {\displaystyle (M^{2n},\omega ,H)} is integrable if H {\displaystyle H} can be completed to an integrable system, that is, there exists an integrable system F = ( F 1 = H , F 2 , ⋯ , F n ) {\displaystyle F=(F_{1}=H,F_{2},\cdots ,F_{n})} . If ( M 2 n , ω , F ) {\displaystyle (M^{2n},\omega ,F)} is an integrable Hamiltonian system, and p {\displaystyle p} is a regular point, the theorem characterizes the level set L c {\displaystyle L_{c}} of the image of the regular point c = F ( p ) {\displaystyle c=F(p)} : A Hamiltonian system which is integrable is referred to as 'integrable in the Liouville sense' or 'Liouville-integrable'. Famous examples are given in this section. Some notation is standard in the literature. 
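The conditions that the functions F 1, …, F n are required to satisfy are usually stated as follows (a reconstruction in the standard form, not a quotation): they pairwise Poisson-commute,

\[ (F_i, F_j) = 0 \qquad \text{for all } i, j , \]

and they are functionally independent, i.e. dF 1 ∧ ⋯ ∧ dF n ≠ 0 at the regular points considered (equivalently, on a dense open subset of M 2n).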
When the symplectic manifold under consideration is R 2 n {\displaystyle \mathbb {R} ^{2n}} , its coordinates are often written ( q 1 , ⋯ , q n , p 1 , ⋯ , p n ) {\displaystyle (q_{1},\cdots ,q_{n},p_{1},\cdots ,p_{n})} and the canonical symplectic form is ω = ∑ i d q i ∧ d p i {\displaystyle \omega =\sum _{i}dq_{i}\wedge dp_{i}} . Unless otherwise stated, these are assumed for this section.
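A minimal symbolic check of the Poisson-commutation condition for a concrete system on R 4 with canonical coordinates (an illustration using SymPy; the planar Kepler-type Hamiltonian and the angular momentum serve as the two first integrals):

    import sympy as sp

    q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')

    def poisson(f, g, qs, ps):
        # canonical Poisson bracket on R^{2n}
        return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
                   for q, p in zip(qs, ps))

    H = (p1**2 + p2**2) / 2 - 1 / sp.sqrt(q1**2 + q2**2)   # planar Kepler-type Hamiltonian
    L = q1 * p2 - q2 * p1                                   # angular momentum
    print(sp.simplify(poisson(H, L, (q1, q2), (p1, p2))))   # 0: H and L are in involution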
https://en.wikipedia.org/wiki/Liouville–Arnold_theorem
In mathematics , Liouville–Bratu–Gelfand equation or Liouville's equation is a non-linear Poisson equation , named after the mathematicians Joseph Liouville , [ 1 ] Gheorghe Bratu [ 2 ] and Israel Gelfand . [ 3 ] The equation reads The equation appears in thermal runaway as Frank-Kamenetskii theory , astrophysics for example, Emden–Chandrasekhar equation . This equation also describes space charge of electricity around a glowing wire [ 4 ] and describes planetary nebula . Source: [ 5 ] In two dimension with Cartesian Coordinates ( x , y ) {\displaystyle (x,y)} , Joseph Liouville proposed a solution in 1853 as where f ( z ) = u + i v {\displaystyle f(z)=u+iv} is an arbitrary analytic function with z = x + i y {\displaystyle z=x+iy} . In 1915, G.W. Walker [ 6 ] found a solution by assuming a form for f ( z ) {\displaystyle f(z)} . If r 2 = x 2 + y 2 {\displaystyle r^{2}=x^{2}+y^{2}} , then Walker's solution is where a {\displaystyle a} is some finite radius. This solution decays at infinity for any n {\displaystyle n} , but becomes infinite at the origin for n < 1 {\displaystyle n<1} , becomes finite at the origin for n = 1 {\displaystyle n=1} and becomes zero at the origin for n > 1 {\displaystyle n>1} . Walker also proposed two more solutions in his 1915 paper. If the system to be studied is radially symmetric, then the equation in n {\displaystyle n} dimension becomes where r {\displaystyle r} is the distance from the origin. With the boundary conditions and for λ ≥ 0 {\displaystyle \lambda \geq 0} , a real solution exists only for λ ∈ [ 0 , λ c ] {\displaystyle \lambda \in [0,\lambda _{c}]} , where λ c {\displaystyle \lambda _{c}} is the critical parameter called as Frank-Kamenetskii parameter . The critical parameter is λ c = 0.8785 {\displaystyle \lambda _{c}=0.8785} for n = 1 {\displaystyle n=1} , λ c = 2 {\displaystyle \lambda _{c}=2} for n = 2 {\displaystyle n=2} and λ c = 3.32 {\displaystyle \lambda _{c}=3.32} for n = 3 {\displaystyle n=3} . For n = 1 , 2 {\displaystyle n=1,\ 2} , two solution exists and for 3 ≤ n ≤ 9 {\displaystyle 3\leq n\leq 9} infinitely many solution exists with solutions oscillating about the point λ = 2 ( n − 2 ) {\displaystyle \lambda =2(n-2)} . For n ≥ 10 {\displaystyle n\geq 10} , the solution is unique and in these cases the critical parameter is given by λ c = 2 ( n − 2 ) {\displaystyle \lambda _{c}=2(n-2)} . Multiplicity of solution for n = 3 {\displaystyle n=3} was discovered by Israel Gelfand in 1963 and in later 1973 generalized for all n {\displaystyle n} by Daniel D. Joseph and Thomas S. Lundgren . [ 7 ] The solution for n = 1 {\displaystyle n=1} that is valid in the range λ ∈ [ 0 , 0.8785 ] {\displaystyle \lambda \in [0,0.8785]} is given by where ψ m = ψ ( 0 ) {\displaystyle \psi _{m}=\psi (0)} is related to λ {\displaystyle \lambda } as The solution for n = 2 {\displaystyle n=2} that is valid in the range λ ∈ [ 0 , 2 ] {\displaystyle \lambda \in [0,2]} is given by where ψ m = ψ ( 0 ) {\displaystyle \psi _{m}=\psi (0)} is related to λ {\displaystyle \lambda } as
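The equation itself and its radially symmetric form, referred to above, read (a reconstruction in the standard notation, with the usual boundary conditions ψ(1) = 0 and ψ′(0) = 0 on the unit ball)

\[ \nabla^{2}\psi + \lambda\, e^{\psi} = 0 , \qquad \frac{1}{r^{\,n-1}}\frac{d}{dr}\Bigl(r^{\,n-1}\frac{d\psi}{dr}\Bigr) + \lambda\, e^{\psi} = 0 . \]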
https://en.wikipedia.org/wiki/Liouville–Bratu–Gelfand_equation
In mathematics , the Liouville–Neumann series is a function series that results from applying the resolvent formalism to solve Fredholm integral equations in Fredholm theory . The Liouville–Neumann series is defined as which, provided that λ {\displaystyle \lambda } is small enough so that the series converges, is the unique continuous solution of the Fredholm integral equation of the second kind, f ( x ) = ϕ ( x ) − λ ∫ a b K ( x , s ) ϕ ( s ) d s . {\displaystyle f(x)=\phi (x)-\lambda \int _{a}^{b}K(x,s)\phi (s)\,ds.} If the n th iterated kernel is defined as n −1 nested integrals of n operator kernels K , then with so K 0 may be taken to be δ ( x−z ) , the kernel of the identity operator . The resolvent , also called the "solution kernel" for the integral operator, is then given by a generalization of the geometric series , where K 0 is again δ ( x−z ) . The solution of the integral equation thus becomes simply Similar methods may be used to solve the Volterra integral equations . This mathematical analysis –related article is a stub . You can help Wikipedia by expanding it . This mathematical physics -related article is a stub . You can help Wikipedia by expanding it .
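In the notation of the integral equation quoted above, the series packages the successive approximations (a reconstruction of the standard definition)

\[ \phi(x) = \sum_{n=0}^{\infty} \lambda^{n}\,\phi_{n}(x), \qquad \phi_{0}(x) = f(x), \qquad \phi_{n}(x) = \int_{a}^{b} K(x,s)\,\phi_{n-1}(s)\,ds . \]

A small numerical illustration (with a hypothetical separable kernel K(x, s) = x s on [0, 1] and f(x) = x, for which the exact solution of the second-kind equation is φ(x) = x/(1 − λ/3)):

    import numpy as np

    def trapezoid(y, x):
        # simple trapezoidal quadrature
        return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2.0))

    lam = 0.5
    s = np.linspace(0.0, 1.0, 2001)     # quadrature grid in the s variable
    x = s.copy()                        # evaluation grid in the x variable
    f = x.copy()

    phi = f.copy()
    for _ in range(50):                            # builds up the partial sums of the series
        K_phi = x * trapezoid(s * phi, s)          # integral_0^1 K(x,t) phi(t) dt with K = x*t
        phi = f + lam * K_phi

    print(np.max(np.abs(phi - x / (1 - lam / 3))))  # small: matches the exact solution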
https://en.wikipedia.org/wiki/Liouville–Neumann_series
In mathematics , the Liouvillian functions comprise a set of functions including the elementary functions and their repeated integrals . Liouvillian functions can be recursively defined as integrals of other Liouvillian functions. More explicitly, a Liouvillian function is a function of one variable which is the composition of a finite number of arithmetic operations (+, −, ×, ÷) , exponentials , constants , solutions of algebraic equations (a generalization of n th roots ), and antiderivatives . The logarithm function does not need to be explicitly included since it is the integral of 1 / x {\displaystyle 1/x} . It follows directly from the definition that the set of Liouvillian functions is closed under arithmetic operations, composition, and integration. It is also closed under differentiation . It is not closed under limits and infinite sums . [ example needed ] Liouvillian functions were introduced by Joseph Liouville in a series of papers from 1833 to 1841. All elementary functions are Liouvillian. Examples of well-known functions which are Liouvillian but not elementary are the nonelementary antiderivatives , for example: All Liouvillian functions are solutions of algebraic differential equations , but not conversely. Examples of functions which are solutions of algebraic differential equations but not Liouvillian include: [ 1 ] Examples of functions which are not solutions of algebraic differential equations and thus not Liouvillian include all transcendentally transcendental functions , such as:
https://en.wikipedia.org/wiki/Liouvillian_function
3988 16889 ENSG00000107798 ENSMUSG00000024781 P38571 Q9Z0M5 NM_000235 NM_001127605 NM_001288979 NM_001111100 NM_021460 NP_000226 NP_001121077 NP_001275908 NP_001104570 NP_067435 Lipase A, lysosomal acid type is a protein that in humans is encoded by the LIPA gene . [ 5 ] This gene encodes lipase A, the lysosomal acid lipase (also known as cholesterol ester hydrolase or LAL). This enzyme functions in the lysosome to catalyze the hydrolysis of cholesteryl esters and triglycerides , leading to the production of free cholesterol and fatty acids . [ 6 ] Notably, LAL is the only known acid lipase that hydrolyzes cholesteryl esters and triglycerides within the lysosomal environment. [ 6 ] LAL is essential to intracellular lipid metabolism in macrophages and hepatocytes . Upon uptake of LDL by endocytosis , cholesteryl esters and triglycerides are transported to lysosomes where they are hydrolyzed by LAL. [ 7 ] The resulting free cholesterol either exits the lysosome for future use in membrane synthesis or is re-esterified in the endoplasmic reticulum by ACAT to form lipid droplets . [ 7 ] This process is important for foam cell formation during atherogenesis . [ 8 ] The importance of LAL in cardiovascular disease has been highlighted by GWAS studies, which have identified variants in the LIPA locus that are associated with coronary artery disease . [ 9 ] LAL was found to have high expression in macrophages located in atherosclerotic plaques, where its activity contributes to the accumulation of lipid droplets and the progression of plaque development. [ 6 ] [ 8 ] Mutations in the LIPA gene that cause loss-of-function can result in infant-onset Wolman disease , caused by a complete lack of LAL production, or a later-onset Cholesterol ester storage disease (CESD), caused by a 5-10% reduction in LAL production. [ 6 ] Alternatively spliced transcript variants have been found for this gene. [provided by RefSeq, Jan 2014]. This article incorporates text from the United States National Library of Medicine , which is in the public domain . This biochemistry article is a stub . You can help Wikipedia by expanding it . This article on a gene on human chromosome 10 is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Lipase_A,_lysosomal_acid_type
In membrane biology , fusion is the process by which two initially distinct lipid bilayers merge their hydrophobic cores, resulting in one interconnected structure. If this fusion proceeds completely through both leaflets of both bilayers, an aqueous bridge is formed and the internal contents of the two structures can mix. Alternatively, if only one leaflet from each bilayer is involved in the fusion process, the bilayers are said to be hemifused. In hemifusion, the lipid constituents of the outer leaflet of the two bilayers can mix, but the inner leaflets remain distinct. The aqueous contents enclosed by each bilayer also remain separated. Fusion is involved in many cellular processes, particularly in eukaryotes since the eukaryotic cell is extensively sub-divided by lipid bilayer membranes. Exocytosis , fertilization of an egg by sperm and transport of waste products to the lysosome are a few of the many eukaryotic processes that rely on some form of fusion. Fusion is also an important mechanism for transport of lipids from their site of synthesis to the membrane where they are needed. Even the entry of pathogens can be governed by fusion, as many bilayer-coated viruses have dedicated fusion proteins to gain entry into the host cell. There are four fundamental steps in the fusion process, although each of these steps actually represents a complex sequence of events. [ 1 ] First, the involved membranes must aggregate, approaching each other to within several nanometers. Second, the two bilayers must come into very close contact (within a few angstroms). To achieve this close contact, the two surfaces must become at least partially dehydrated, as the bound surface water normally present causes bilayers to strongly repel at this distance. Third, a destabilization must develop at one point between the two bilayers, inducing a highly localized rearrangement of the two bilayers. Finally, as this point defect grows, the components of the two bilayers mix and diffuse away from the site of contact. Depending on whether hemifusion or full fusion occurs, the internal contents of the membranes may mix at this point as well. [ 2 ] The exact mechanisms behind this complex sequence of events are still a matter of debate. To simplify the system and allow more definitive study, many experiments have been performed in vitro with synthetic lipid vesicles. These studies have shown that divalent cations play a critical role in the fusion process by binding to negatively charged lipids such as phosphatidylserine , phosphatidylglycerol and cardiolipin . [ 3 ] One role on these ions in the fusion process is to shield the negative charge on the surface of the bilayer, diminishing electrostatic repulsion and allowing the membranes to approach each other. This is clearly not the only role, however, since there is an extensively documented difference in the ability of Mg 2+ versus Ca 2+ to induce fusion. Although Mg 2+ will induce extensive aggregation it will not induce fusion, while Ca 2+ induces both. [ 4 ] It has been proposed that this discrepancy is due to a difference in extent of dehydration. Under this theory, calcium ions bind more strongly to charged lipids, but less strongly to water. The resulting displacement of calcium for water destabilizes the lipid-water interface and promotes intimate interbilayer contact. [ 5 ] A recently proposed alternative hypothesis is that the binding of calcium induces a destabilizing lateral tension . 
[ 6 ] Whatever the mechanism of calcium-induced fusion, the initial interaction is clearly electrostatic, since zwitterionic lipids are not susceptible to this effect. [ 7 ] [ 8 ] In the fusion process, the lipid head group is not only involved in charge density, but can affect dehydration and defect nucleation. These effects are independent of the effects of ions. The presence of the uncharged headgroup phosphatidylethanolamine (PE) increases fusion when incorporated into a phosphatidylcholine bilayer. This phenomenon has been explained by some as a dehydration effect similar to the influence of calcium. [ 9 ] The PE headgroup binds water less tightly than PC and therefore may allow close apposition more easily. An alternate explanation is that the physical rather than chemical nature of PE may help induce fusion. According to the stalk hypothesis of fusion, a highly curved bridge must form between the two bilayers for fusion to occur. [ 10 ] Since PE has a small headgroup and readily forms inverted micelle phases it should, according to the stalk model, promote the formation of these stalks. [ 11 ] Further evidence cited in favor of this theory is the fact that certain lipid mixtures have been shown to only support fusion when raised above the transition temperature of these inverted phases. [ 12 ] [ 13 ] This topic also remains controversial, and even if there is a curved structure present in the fusion process, there is debate in the literature over whether it is a cubic, hexagonal or more exotic extended phase. [ 14 ] The situation is further complicated when considering fusion in vivo since biological fusion is almost always regulated by the action of membrane-associated proteins . The first of these proteins to be studied were the viral fusion proteins, which allow an enveloped virus to insert its genetic material into the host cell (enveloped viruses are those surrounded by a lipid bilayer; some others have only a protein coat). Broadly, there are two classes of viral fusion proteins: acidic and pH-independent. [ 1 ] pH independent fusion proteins can function under neutral conditions and fuse with the plasma membrane , allowing viral entry into the cell. Viruses utilizing this scheme included HIV , measles and herpes . Acidic fusion proteins such as those found on influenza are only activated when in the low pH of acidic endosomes and must first be endocytosed to gain entry into the cell. Eukaryotic cells use entirely different classes of fusion proteins, the best studied of which are the SNAREs . SNARE proteins are used to direct all vesicular intracellular trafficking. Despite years of study, much is still unknown about the function of this protein class. In fact, there is still an active debate regarding whether SNAREs are linked to early docking or participate later in the fusion process by facilitating hemifusion. [ 16 ] Even once the role of SNAREs or other specific proteins is illuminated, a unified understanding of fusion proteins is unlikely as there is an enormous diversity of structure and function within these classes, and very few themes are conserved. [ 17 ] In studies of molecular and cellular biology it is often desirable to artificially induce fusion. Although this can be accomplished with the addition of calcium as discussed earlier, this procedure is often not feasible because calcium regulates many other biochemical processes and its addition would be a strong confound. Also, as mentioned, calcium induces massive aggregation as well as fusion. 
The addition of polyethylene glycol (PEG) causes fusion without significant aggregation or biochemical disruption. This procedure is now used extensively, for example by fusing B-cells with myeloma cells. [ 18 ] The resulting “ hybridoma ” from this combination expresses a desired antibody as determined by the B-cell involved, but is immortalized due to the myeloma component. The mechanism of PEG fusion has not been definitively identified, but some researchers believe that the PEG, by binding a large number of water molecules, effectively decreases the chemical activity of the water and thus dehydrates the lipid headgroups. [ 19 ] Fusion can also be artificially induced through electroporation in a process known as electrofusion. It is believed that this phenomenon results from the energetically active edges formed during electroporation, which can act as the local defect point to nucleate stalk growth between two bilayers. [ 20 ] Alternatively, SNARE-inspired model systems can be used to induce membrane fusion of lipid vesicles. In those systems membrane anchored complementary DNA, [ 21 ] [ 22 ] [ 23 ] PNA, [ 24 ] peptides, [ 25 ] or other molecules [ 26 ] "zip" together and pull the membranes into proximity. Such systems could have practical applications in the future, for example in drug delivery. [ 27 ] The probably best investigated system [ 28 ] consists of coiled-coil forming peptides of complementary charge (one is typically carrying an excess of positively charged lysins and is thus termed peptide K, and one negatively charged glutamic acids called peptide E). [ 29 ] Interestingly, it was discovered that not only the coiled-coil formation between the two peptides is necessary for membrane fusion to occur, but also that the peptide K interacts with the membrane surface and cause local defects. [ 30 ] There are two levels of fusion: mixing of membrane lipids and mixing of contents. Assays of membrane fusion report either the mixing of membrane lipids or the mixing of the aqueous contents of the fused entities. Assays evaluating lipid mixing make use of concentration dependent effects such as nonradiative energy transfer, fluorescence quenching and pyrene excimer formation. Mixing of aqueous contents from vesicles as a result of lysis, fusion or physiological permeability can be detected fluorometrically using low molecular weight soluble tracers.
https://en.wikipedia.org/wiki/Lipid_bilayer_fusion
Lipid bilayer mechanics is the study of the physical material properties of lipid bilayers , classifying bilayer behavior with stress and strain rather than biochemical interactions. Local point deformations such as membrane protein interactions are typically modelled with the complex theory of biological liquid crystals but the mechanical properties of a homogeneous bilayer are often characterized in terms of only three mechanical elastic moduli : the area expansion modulus K a , a bending modulus K b and an edge energy Λ {\displaystyle \Lambda } . For fluid bilayers the shear modulus is by definition zero, as the free rearrangement of molecules within plane means that the structure will not support shear stresses . These mechanical properties affect several membrane-mediated biological processes. In particular, the values of K a and K b affect the ability of proteins and small molecules to insert into the bilayer. [ 1 ] [ 2 ] Bilayer mechanical properties have also been shown to alter the function of mechanically activated ion channels . [ 3 ] Since lipid bilayers are essentially a two dimensional structure, K a is typically defined only within the plane. Intuitively, one might expect that this modulus would vary linearly with bilayer thickness as it would for a thin plate of isotropic material. In fact this is not the case and K a is only weakly dependent on bilayer thickness. The reason for this is that the lipids in a fluid bilayer rearrange easily so, unlike a bulk material where the resistance to expansion comes from intermolecular bonds , the resistance to expansion in a bilayer is a result of the extra hydrophobic area exposed to water upon pulling the lipids apart. [ 4 ] Based on this understanding, a good first approximation of K a for a monolayer is 2γ, where gamma is the surface tension of the water-lipid interface. Typically gamma is in the range of 20-50mJ/m 2 . [ 5 ] To calculate K a for a bilayer it is necessary to multiply the monolayer value by two, since a bilayer is composed of two monolayer leaflets. Based on this calculation, the estimate of K a for a lipid bilayer should be 80-200 mN/m (note: N/m is equivalent to J/m 2 ). It is not surprising given this understanding of the forces involved that studies have shown that K a varies strongly with solution conditions [ 6 ] but only weakly with tail length and unsaturation. [ 7 ] The compression modulus is difficult to measure experimentally because of the thin, fragile nature of bilayers and the consequently low forces involved. One method utilized has been to study how vesicles swell in response to osmotic stress . This method is, however, indirect and measurements can be perturbed by polydispersity in vesicle size. [ 6 ] A more direct method of measuring K a is the pipette aspiration method, in which a single giant unilamellar vesicle (GUV) is held and stretched with a micropipette . [ 8 ] More recently, atomic force microscopy (AFM) has been used to probe the mechanical properties of suspended bilayer membranes, [ 9 ] but this method is still under development. One concern with all of these methods is that, since the bilayer is such a flexible structure, there exist considerable thermal fluctuations in the membrane at many length scales down to sub-microscopic. Thus, forces initially applied to an unstressed membrane are not actually changing the lipid packing but are rather “smoothing out” these undulations, resulting in erroneous values for mechanical properties. [ 7 ] This can be a significant source of error. 
Without the thermal correction typical values for Ka are 100-150 mN/m and with the thermal correction this would change to 220-270 mN/m. Bending modulus is defined as the energy required to deform a membrane from its natural curvature to some other curvature. For an ideal bilayer the intrinsic curvature is zero, so this expression is somewhat simplified. The bending modulus, compression modulus and bilayer thickness are related by K b = K a t 2 {\displaystyle K_{b}=K_{a}t^{2}} such that if two of these parameters are known the other can be calculated. This relationship derives from the fact that to bend the inner face must be compressed and the outer face must be stretched. [ 4 ] The thicker the membrane, the more each face must deform to accommodate a given curvature (see bending moment ). Many of the values for K a in literature have actually been calculated from experimentally measured values of K b and t. This relation holds only for small deformations, but this is generally a good approximation as most lipid bilayers can support only a few percent strain before rupturing. [ 10 ] Only certain classes of lipids can form bilayers. Two factors primarily govern whether a lipid will form a bilayer or not: solubility and shape. For a self assembled structure such as a bilayer to form, the lipid should have a low solubility in water, which can also be described as a low critical micelle concentration (CMC). [ 5 ] Above the CMC, molecules will aggregate and form larger structures such as bilayers, micelles or inverted micelles. The primary factor governing which structure a given lipid forms is its shape (i.e.- its intrinsic curvature). [ 4 ] Intrinsic curvature is defined by the ratio of the diameter of the head group to that of the tail group. For two-tailed PC lipids, this ratio is nearly one so the intrinsic curvature is nearly zero. Other headgroups such as PS and PE are smaller and the resulting diacyl (two-tailed) lipids thus have a negative intrinsic curvature. Lysolipids tend to have positive spontaneous curvature because they have one rather than two alkyl chains in the tail region. If a particular lipid has too large a deviation from zero intrinsic curvature it will not form a bilayer. [ 11 ] Edge energy is the energy per unit length of a free edge contacting water. This can be thought of as the work needed to create a hole in the bilayer of unit length L. The origin of this energy is the fact that creating such an interface exposes some of the lipid tails to water, which is unfavorable. Λ {\displaystyle \Lambda } is also an important parameter in biological phenomena as it regulates the self-healing properties of the bilayer following electroporation or mechanical perforation of the cell membrane. [ 8 ] Unfortunately, this property is both difficult to measure experimentally and to calculate. One of the major difficulties in calculation is that the structural properties of this edge are not known. The simplest model would be no change in bilayer orientation, such that the full length of the tail is exposed. This is a high energy conformation and, to stabilize this edge, it is likely that some of the lipids rearrange their head groups to point out in a curved boundary. [ citation needed ] The extent to which this occurs is currently unknown and there is some evidence that both hydrophobic (tails straight) and hydrophilic (heads curved around) pores can coexist. 
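As a numerical illustration of the estimates above (K a of a bilayer taken as twice the monolayer value of 2γ, and the quoted relation K b = K a t 2 ), the following minimal sketch reproduces the 80–200 mN/m range from the 20–50 mJ/m 2 surface tensions quoted in the text; the function names, and any thickness value passed to the second function, are illustrative assumptions.

```python
def bilayer_area_modulus(gamma_j_per_m2: float) -> float:
    """K_a of a bilayer in N/m, taken as 4*gamma (2*gamma per leaflet, two leaflets)."""
    return 4.0 * gamma_j_per_m2

def area_modulus_from_bending(k_b_joules: float, thickness_m: float) -> float:
    """Invert the quoted relation K_b = K_a * t**2 to recover K_a from K_b and t."""
    return k_b_joules / thickness_m ** 2

for gamma in (0.020, 0.050):  # 20 and 50 mJ/m^2, the range quoted in the text
    k_a = bilayer_area_modulus(gamma)
    print(f"gamma = {gamma * 1e3:.0f} mJ/m^2  ->  K_a ~ {k_a * 1e3:.0f} mN/m")
# gamma = 20 mJ/m^2  ->  K_a ~ 80 mN/m
# gamma = 50 mJ/m^2  ->  K_a ~ 200 mN/m
```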
[ 12 ] FE modeling is a powerful tool for testing the mechanical deformation and equilibrium configuration of lipid membranes. [ 13 ] In this context, membranes are treated under thin-shell theory, where the bending behavior of the membrane is described by the Helfrich bending model, which considers the bilayer as a very thin object and interprets it as a two-dimensional surface. This consideration implies that Kirchhoff-Love plate theory can be applied to lipid bilayers to determine their stress-deformation behavior. Furthermore, in the FE approach a bilayer surface is subdivided into discrete elements, each described by the above 2D mechanics. [ 14 ] [ 15 ] Under these considerations the weak-form virtual work for the entire system is described as the sum of the contributions of all the discrete elements' work components. [ 15 ] G = ∑ e = 1 n e l ( G i n t e − G e x t e ) {\displaystyle G=\sum _{e=1}^{n_{\mathrm {el} }}\left(G_{\mathrm {int} }^{e}-G_{\mathrm {ext} }^{e}\right)} For each discrete element the virtual work is determined by the force vector f {\displaystyle \mathbf {f} } and the displacement vector x {\displaystyle \mathbf {x} } , each for an applied stress σ {\displaystyle \sigma } and a bending moment M {\displaystyle M} : G i n t e = δ x e T ( f i n t σ e + f i n t M e ) {\displaystyle G_{\mathrm {int} }^{e}=\delta \mathbf {x} _{e}^{\mathrm {T} }\left(\mathbf {f} _{\mathrm {int} \sigma }^{e}+\mathbf {f} _{\mathrm {int} M}^{e}\right)} The FE force vectors due to the applied bilayer stress σ α β {\displaystyle \sigma ^{\alpha \beta }} are given as f int σ e = ∫ σ α β N , α T a β d a {\displaystyle \mathbf {f} _{{\text{int }}\sigma }^{e}=\int \sigma ^{\alpha \beta }\mathbf {N} _{,\alpha }^{\mathrm {T} }{\boldsymbol {a}}_{\beta }\mathrm {d} a} Here N α T {\displaystyle \mathbf {N} _{\alpha }^{\mathrm {T} }} is the displacement state function at point α {\displaystyle \alpha } , and a β {\displaystyle {\boldsymbol {a}}_{\beta }} the tangent vector to the bilayer surface at point β {\displaystyle \beta } . The above individual element vectors for the internal force f σ e {\displaystyle \mathbf {f} _{\sigma }^{e}} and the internal work G e {\displaystyle \mathbf {G} ^{e}} can be expressed in a global assembly to obtain a discretized weak form as follows: δ x T f ( x , q ) + δ q T g ( x ) = 0 {\displaystyle \delta \mathbf {x} ^{\mathrm {T} }\mathbf {f} (\mathbf {x} ,\mathbf {q} )+\delta \mathbf {q} ^{\mathrm {T} }\mathbf {g} (\mathbf {x} )=0} In the above equation x {\displaystyle \mathbf {x} } is the deformation in each discrete element while q {\displaystyle \mathbf {q} } is the Lagrange multiplier associated with area-incompressibility. The discretized weak form is satisfied when f = 0 {\displaystyle \mathbf {f} =0} and g = 0 {\displaystyle \mathbf {g} =0} . The resulting nonlinear equations are solved using Newton's method. This allows prediction of the equilibrium shapes that lipid membranes adopt under different stimuli. [ 14 ] [ 16 ] [ 17 ] Most analyses are done for lipid membranes with uniform ( isotropic ) properties. While this is approximately true for simple membranes containing a single or a few lipid species, this description can only approximate the mechanical response of more complex lipid bilayers, which can contain several domains of segregated lipids having distinct material properties or intermembrane proteins, as in the case of cellular membranes.
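The final step noted above, solving the discretized equations f = 0 and g = 0 with Newton's method, can be sketched generically. The residual below is a toy stand-in for the assembled element contributions, and the finite-difference Jacobian is used only to keep the sketch short; a real FE code would assemble the tangent analytically.

```python
import numpy as np

def newton_solve(residual, z0, tol=1e-10, max_iter=50, eps=1e-7):
    """Newton iteration for a stacked residual R(z) = [f(x, q); g(x)] = 0,
    where z stacks the nodal deformations x and the area-incompressibility
    Lagrange multipliers q."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        r = residual(z)
        if np.linalg.norm(r) < tol:
            break
        jac = np.empty((r.size, z.size))          # finite-difference Jacobian
        for j in range(z.size):
            dz = np.zeros_like(z)
            dz[j] = eps
            jac[:, j] = (residual(z + dz) - r) / eps
        z = z + np.linalg.solve(jac, -r)
    return z

def toy_residual(z):
    """Two 'deformation' unknowns plus one multiplier enforcing a quadratic constraint."""
    x1, x2, q = z
    return np.array([x1 - 1.0 + q * x1,        # f_1(x, q)
                     x2 - 2.0 + q * x2,        # f_2(x, q)
                     x1**2 + x2**2 - 4.0])     # g(x): area-like constraint

print(newton_solve(toy_residual, [1.0, 1.0, 0.0]))
```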
Other complex cases involving surface flow, pH, and temperature dependence would require a more discrete (particle-based) model of the bilayer, such as MD simulations. [ 16 ] FE methods can predict the equilibrium conformations of a lipid bilayer in response to external forces, as shown in the following cases. In the first scenario, a point on the bilayer surface is pulled with a force normal to the surface plane; this leads to the elongation of a thin tether of bilayer material. The rest of the bilayer surface is subject to a tension S reflecting the pulling force of the continuous bilayer. In this case a finer mesh is applied near the area of the pulling force to give a more accurate prediction of the bilayer deformation. [ 16 ] Tethering is an important mechanical event for cellular lipid bilayers; by this action, membranes are able to mediate docking to substrates or to components of the cytoskeleton. Lipid bilayer budding is a commonplace phenomenon in living cells and relates to the transport of metabolites in the form of vesicles. During this process, a lipid bilayer is subject to internal hydrostatic stresses in combination with strain restrictions along the bilayer surface; this can lead to elongation of areas of the lipid bilayer by elastic or viscous shear. This eventually leads to a deformation of a typical spherical bilayer into different budding shapes. Such shapes are not restricted to being symmetrical about their axes; they can have different degrees of asymmetry. In FE analysis this results in budding equilibrium shapes such as elongated plates, tubular buds and symmetric buds. [ 16 ]
https://en.wikipedia.org/wiki/Lipid_bilayer_mechanics
Lipid droplets, also referred to as lipid bodies, oil bodies or adiposomes, [ 1 ] are lipid-rich cellular organelles that regulate the storage and hydrolysis of neutral lipids and are found largely in the adipose tissue. [ 2 ] They also serve as a reservoir for cholesterol and acyl-glycerols for membrane formation and maintenance. Lipid droplets are found in all eukaryotic organisms and store a large portion of lipids in mammalian adipocytes. Initially, these lipid droplets were considered to serve merely as fat depots, but since the discovery in the 1990s of proteins in the lipid droplet coat that regulate lipid droplet dynamics and lipid metabolism, lipid droplets are seen as highly dynamic organelles that play a very important role in the regulation of intracellular lipid storage and lipid metabolism. The role of lipid droplets outside of lipid and cholesterol storage has recently begun to be elucidated and includes a close association with inflammatory responses, through the synthesis and metabolism of eicosanoids, and with metabolic disorders such as obesity, cancer, [ 3 ] [ 4 ] and atherosclerosis. [ 5 ] In non-adipocytes, lipid droplets are known to play a role in protection from lipotoxicity by storage of fatty acids in the form of neutral triacylglycerol, which consists of three fatty acids bound to glycerol. Alternatively, fatty acids can be converted to lipid intermediates like diacylglycerol (DAG), ceramides and fatty acyl-CoAs. These lipid intermediates can impair insulin signaling, which is referred to as lipid-induced insulin resistance and lipotoxicity. [ 6 ] Lipid droplets also serve as platforms for protein binding and degradation. Finally, lipid droplets are known to be exploited by pathogens such as the hepatitis C virus, the dengue virus and Chlamydia trachomatis, among others. [ 7 ] [ 8 ] Cells need to adjust the size and structure of their organelles to keep up with growth and changing environmental conditions. To do this, they either make new phospholipids (the main components of organelle membranes) or modify their fatty acid (FA) content. Fatty acids are also used to produce triacylglycerols (TGs), which store energy in structures called lipid droplets. [ 9 ] The synthesis of triacylglycerols (TG) can occur through two different enzymatic pathways. Diacylglycerol acyltransferases (DGATs) like Dga1 are enzymes found in most eukaryotes. They add an acyl group from a fatty acid that has been activated with coenzyme A (FA-CoA) to diacylglycerol (DG), forming TG. Phospholipid-diacylglycerol acyltransferases (PDATs) are enzymes primarily found in fungi, microalgae, and plants. PDATs like Lro1 in yeast transfer a fatty acid directly from a phospholipid to DG to form TG. Moreover, Lro1 couples TG synthesis with the deacylation of membrane phospholipids (PL), resulting in the formation of TG and lysophospholipids (LPL). [ 10 ] When nutrients become available, the yeast cells enter the exponential growth phase (EXP) to grow quickly. It has been shown that during the EXP phase, Lro1-GFP is localized in the endoplasmic reticulum (ER) to synthesize triacylglycerols (TG), which are essential for phospholipid synthesis. [ 9 ] However, when nutrients become scarce, the cells enter the post-diauxic shift (PDS) phase and Lro1-GFP is no longer in the ER. Instead, it moves to a specific area of the nuclear envelope. This relocation suggests a shift in Lro1's role, possibly in response to the stress of nutrient depletion.
Furthermore, this movement is influenced by signals from the cell cycle and nutrient availability, and it stops when the nucleus grows larger. [ 9 ] Two approaches were used to investigate whether Lro1 can access the inner nuclear membrane (INM). [ 9 ] In yeast, the ability of integral membrane proteins to move from the endoplasmic reticulum (ER) to the inner nuclear membrane (INM) is restricted by the size of their cytosolic domains. Proteins with cytosolic domains larger than 90 kDa cannot pass through the nuclear pore complex into the INM. It has been shown that when Lro1's N-terminal domain was enlarged with one, two, or three copies of the maltose-binding protein (MBP), its ability to target the nucleolus was significantly reduced. This suggests that Lro1 normally resides at the INM, but when its N-domain becomes too large, it can no longer pass through the nuclear pore complex to reach the INM, and such enlarged proteins are likely degraded. [ 9 ] The anchor-away technique was used as the second approach. The researchers fused the INM protein Heh1 with FK506 binding protein (FKBP12) to serve as an anchor at the INM and interact with other proteins. Lro1 was fused to GFP for visualization and to the FKBP12-rapamycin-binding (FRB) domain. This fusion enables Lro1 to interact with Heh1 at the INM in the presence of rapamycin. The FRB-GFP construct, containing GFP fused to the FRB domain but lacking Lro1, was used as a control to show the specific effects of Lro1's presence. [ 9 ] Upon the addition of rapamycin, FRB-GFP quickly (within 30 minutes) changed from a diffuse distribution to a ring-like pattern, indicating that it had been recruited to the INM by Heh1-FKBP12. This ring-like localization confirmed that the INM anchor (Heh1) is accessible to FRB-GFP. In the strain expressing FRB-Lro1-GFP, rapamycin treatment caused a loss of Lro1's cortical ER localization and its accumulation at a perinuclear ring, which is characteristic of INM proteins. This suggests that Lro1, via its N-domain, can indeed associate with the INM. In contrast, when Lro1's N-terminal domain was enlarged by adding 3xMBP, the fusion protein retained its localization at the cortical ER even after rapamycin treatment. [ 9 ] The nuclear membrane near the nucleolus tends to expand when there is excess phospholipid synthesis. It has been investigated whether the protein Lro1 is catalytically active in producing triacylglycerol (TG) in this specific membrane area. [ 9 ] To test Lro1's activity, the researchers expressed it in a yeast strain that could not produce any neutral lipids on its own. They did this by deleting four key enzymes involved in lipid production: the DG acyltransferases (LRO1 and DGA1) and the steryl acyltransferases (ARE1 and ARE2). This mutant strain, called "4D," lacks neutral lipids and lipid droplets (LDs), making it ideal for studying Lro1's function. [ 9 ] The mutant "4D" yeast cells cannot survive under nutrient-poor conditions because they cannot make triacylglycerol (TG) or lipid droplets (LDs), which are essential for survival during this phase. When Lro1 is reintroduced as the only enzyme capable of producing TG, it rescues the cells, allowing them to survive better during the stationary phase by forming lipid droplets. [ 9 ] Moreover, Lro1 with a mutation in the conserved lipase motif cannot perform its catalytic function, meaning it cannot produce TG.
As a result, these cells also fail to survive in the PDS phase, similar to cells without any functional Lro1, demonstrating that the catalytic activity of Lro1 is crucial for survival and LD formation. [ 9 ] These findings [ 9 ] suggest that Lro1's activity in the nucleus creates a local site for TG synthesis, which helps reshape the nuclear membrane as needed. [ 9 ] Lipid droplets are composed of a neutral lipid core consisting mainly of triacylglycerols (TAGs) and cholesteryl esters surrounded by a phospholipid monolayer. [ 2 ] The surface of lipid droplets is decorated by a number of proteins that are involved in the regulation of lipid metabolism. [ 2 ] The first and best-characterized family of lipid droplet coat proteins is the perilipin protein family, consisting of five proteins. These include perilipin 1 (PLIN1), perilipin 2 (PLIN2/ ADRP), [ 11 ] perilipin 3 (PLIN3/ TIP47), perilipin 4 (PLIN4/ S3-12) and perilipin 5 (PLIN5/ OXPAT/ LSDP5/ MLDP). [ 12 ] [ 13 ] [ 14 ] Proteomics studies have elucidated the association of many other families of proteins with the lipid surface, including proteins involved in membrane trafficking, vesicle docking, endocytosis and exocytosis. [ 15 ] Analysis of the lipid composition of lipid droplets has revealed the presence of a diverse set of phospholipid species; [ 16 ] phosphatidylcholine and phosphatidylethanolamine are the most abundant, followed by phosphatidylinositol. Lipid droplets vary greatly in size, ranging from 20–40 nm to 100 μm. [ 17 ] In adipocytes, lipid bodies tend to be larger and they may compose the majority of the cell, while in other cells they may only be induced under certain conditions and are considerably smaller in size. Lro1 is a type II integral membrane protein. Its N-terminal domain faces the cytoplasm or nucleoplasm and contains a short basic region (RKRR). The larger luminal domain contains the catalytic PDAT domain, which is located within the lumen of the endoplasmic reticulum (ER). The N-terminal domain of Lro1, along with the transmembrane segment, showed intranuclear localization with clear enrichment at the nucleolus. [ 9 ] When the K/R residues in the N-terminal domain are mutated to alanines, the nucleolar enrichment is partially compromised but not completely lost, indicating that other regions of Lro1 also contribute to its proper localization. Furthermore, the N-terminal domain of Lro1 was replaced with four IgG-binding domains of Protein A (4xIgGb). This replacement led to a loss of localization at both the nucleolus and the ER during the PDS phase. [ 9 ] Lipid droplets bud off the membrane of the endoplasmic reticulum. Initially, a lens is formed by accumulation of TAGs between the two layers of its phospholipid membrane. Nascent lipid droplets may grow by diffusion of fatty acids, endocytosis of sterols, or fusion of smaller lipid droplets through the aid of SNARE proteins. [ 17 ] The budding of lipid droplets is promoted by an asymmetric accumulation of phospholipids that decreases surface tension towards the cytosol. [ 18 ] Lipid droplets have also been observed to be created by the fission of existing lipid droplets, though this is thought to be less common than de novo formation. [ 19 ] The formation of lipid droplets from the endoplasmic reticulum begins with the synthesis of the neutral lipids to be transported.
The manufacture of TAGs from diacylglycerol (by the addition of a fatty acyl chain) is catalyzed by the DGAT proteins, though the extent to which these and other proteins are required depends on cell type. [ 20 ] Neither DGAT1 nor DGAT2 is singularly essential for TAG synthesis or droplet formation, though mammalian cells lacking both cannot form lipid droplets and have severely stunted TAG synthesis. DGAT1, which seems to prefer exogenous fatty acid substrates, is not essential for life; DGAT2, which seems to prefer endogenously synthesized fatty acids, is. [ 19 ] In non-adipocytes, lipid storage, lipid droplet synthesis and lipid droplet growth can be induced by various stimuli including growth factors, long-chain unsaturated fatty acids (including oleic acid and arachidonic acid), oxidative stress and inflammatory stimuli such as bacterial lipopolysaccharides, various microbial pathogens, platelet-activating factor, eicosanoids, and cytokines. [ 21 ] An example is the endocannabinoids, unsaturated fatty acid derivatives that are mainly considered to be synthesised “on demand” from phospholipid precursors residing in the cell membrane, but that may also be synthesised and stored in intracellular lipid droplets and released from those stores under appropriate conditions. [ 22 ] It is possible to observe the formation of lipid droplets in living cells, without labels, using label-free live-cell imaging.
https://en.wikipedia.org/wiki/Lipid_droplet
Lipid microdomains are formed when lipids undergo lateral phase separations yielding stable coexisting lamellar domains. These phase separations can be induced by changes in temperature, pressure, ionic strength or by the addition of divalent cations or proteins. The question of whether such lipid microdomains observed in model lipid systems also exist in biomembranes has motivated considerable research effort. Lipid domains are not readily isolated and examined as unique species, in contrast to the examples of lateral heterogeneity. One can disrupt the membrane and demonstrate a heterogeneous range of composition in the population of the resulting vesicles or fragments. Electron microscopy can also be used to demonstrate lateral inhomogeneities in biomembranes. Often, lateral heterogeneity has been inferred from biophysical techniques where the observed signal indicates multiple populations rather than the expected homogeneous population. An example of this is the measurement of the diffusion coefficient of a fluorescent lipid analog in soybean protoplasts. Membrane microheterogeneity is sometimes inferred from the behavior of enzymes, where the enzymatic activity does not appear to be correlated with the average lipid physical state exhibited by the bulk of the membrane. Often, the methods suggest regions with different lipid fluidity, as would be expected of coexisting gel and liquid crystalline phases within the biomembrane. This is also the conclusion of a series of studies in which the differential effects of perturbation caused by cis and trans fatty acids are interpreted in terms of preferential partitioning of the two fatty acids between liquid crystalline and gel-like domains.
https://en.wikipedia.org/wiki/Lipid_microdomain
Lipid peroxidation , or lipid oxidation , is a complex chemical process that leads to oxidative degradation of lipids , [ 1 ] resulting in the formation of peroxide and hydroperoxide derivatives. [ 2 ] It occurs when free radicals , specifically reactive oxygen species (ROS), interact with lipids within cell membranes , typically polyunsaturated fatty acids (PUFAs) as they have carbon–carbon double bonds . This reaction leads to the formation of lipid radicals , collectively referred to as lipid peroxides or lipid oxidation products ( LOPs ), which in turn react with other oxidizing agents , leading to a chain reaction that results in oxidative stress and cell damage . In pathology and medicine , lipid peroxidation plays a role in cell damage which has broadly been implicated in the pathogenesis of various diseases and disease states, including ageing , [ 3 ] [ 4 ] whereas in food science lipid peroxidation is one of many pathways to rancidity . [ 5 ] The chemical reaction of lipid peroxidation consists of three phases: initiation , propagation , and termination . [ 4 ] In the initiation phase, a pro-oxidant hydroxyl radical ( OH• ) abstracts the hydrogen at the allylic position (–CH 2 –CH=CH 2 ) or methine bridge (=CH−) [ clarification needed ] on the stable lipid substrate, typically a polyunsaturated fatty acid (PUFA), to form the lipid radical ( L• ) and water (H 2 O). In the propagation phase, the lipid radical ( L• ) reacts with molecular oxygen ( O 2 ) to form a lipid hydroperoxyl radical ( LOO• ). The lipid hydroperoxyl radical ( LOO• ) can further abstract hydrogen from a new PUFA substrate, forming another lipid radical ( L• ) and now finally a lipid hydroperoxide (LOOH). [ 6 ] The lipid hydroperoxyl radical ( LOO• ) can also undergo a variety of reactions to produce new radicals. [ citation needed ] The additional lipid radical ( L• ) continues the chain reaction , whilst the lipid hydroperoxide (LOOH) is the primary end product. [ 6 ] The formation of lipid radicals is sensitive to the kinetic isotope effect . Reinforced lipids in the membrane can suppress the chain reaction of lipid peroxidation. [ 7 ] The termination step can vary, in both its actual chemical reaction and when it will occur. [ 6 ] Lipid peroxidation is a self-propagating chain reaction and will proceed until the lipid substrate is consumed and the last two remaining radicals combine, or a reaction which terminates it occurs. [ 3 ] Termination can occur when two lipid hydroperoxyl radicals ( LOO• ) react to form peroxide and oxygen (O 2 ). [ 3 ] [ clarification needed ] Termination can also occur when the concentration of radical species is high. [ citation needed ] The primary products of lipid peroxidation are lipid hydroperoxides (LOOH). [ 3 ] When arachidonic acid is a substrate, isomers of hydroperoxyeicosatetraenoic acid (HPETEs) and hydroxyeicosatetraenoic acids (HETEs) are formed. [ citation needed ] Antioxidants play a crucial role in mitigating lipid peroxidation by neutralizing free radicals, thereby halting radical chain reactions. Key antioxidants include vitamin C and vitamin E . [ 8 ] Additionally, enzymes including superoxide dismutase , catalase , and peroxidase contribute to the oxidation response by reducing the presence of hydrogen peroxide , which is a prevalent precursor of the hydroxyl radical ( OH• ). 
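The initiation, propagation, and termination steps described above can be summarized in a compact scheme (LH denotes an intact lipid substrate and L'H a second substrate molecule; this is a schematic restatement of the steps given here, not an exhaustive mechanism):

$$
\begin{aligned}
\textbf{Initiation:}\quad & \mathrm{LH} + {}^{\bullet}\mathrm{OH} \;\longrightarrow\; \mathrm{L}^{\bullet} + \mathrm{H_{2}O} \\
\textbf{Propagation:}\quad & \mathrm{L}^{\bullet} + \mathrm{O_{2}} \;\longrightarrow\; \mathrm{LOO}^{\bullet} \\
 & \mathrm{LOO}^{\bullet} + \mathrm{L'H} \;\longrightarrow\; \mathrm{LOOH} + \mathrm{L'}^{\bullet} \\
\textbf{Termination:}\quad & \mathrm{LOO}^{\bullet} + \mathrm{LOO}^{\bullet} \;\longrightarrow\; \text{non-radical products} + \mathrm{O_{2}}
\end{aligned}
$$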
As an example, vitamin E can donate a hydrogen atom to the lipid hydroperoxyl radical ( LOO• ) to form a vitamin E radical, which further reacts with another lipid hydroperoxyl radical ( LOO• ), forming non-radical products. [ 2 ] Phototherapy may cause lipid peroxidation, leading to the rupture of red blood cell membranes. [ 9 ] End-products of lipid peroxidation may be mutagenic and carcinogenic. [ 10 ] For instance, the end-product MDA reacts with deoxyadenosine and deoxyguanosine in DNA, forming DNA adducts with them, primarily M 1 G. [ 10 ] Reactive aldehydes can also form Michael adducts or Schiff bases with thiol or amine groups in amino acid side chains. Thus, they are able to inactivate sensitive proteins through electrophilic stress. [ 11 ] The toxicity of lipid hydroperoxides to animals is best illustrated by the lethal phenotype of glutathione peroxidase 4 ( GPX4 ) knockout mice. These animals do not survive past embryonic day 8, indicating that the removal of lipid hydroperoxides is essential for mammalian life. [ 12 ] It is unclear whether dietary lipid peroxides are bioavailable and play a role in disease, as a healthy human body has protective mechanisms in place against such hazards. [ 13 ] Certain diagnostic tests are available for the quantification of the end-products of lipid peroxidation, specifically malondialdehyde (MDA). [ 10 ] The most commonly used test is the TBARS assay ( thiobarbituric acid reactive substances assay). Thiobarbituric acid reacts with malondialdehyde to yield a fluorescent product. However, there are other sources of malondialdehyde, so this test is not completely specific for lipid peroxidation. [ 14 ]
https://en.wikipedia.org/wiki/Lipid_peroxidation
In biophysics and colloidal chemistry , polymorphism is the ability of lipids to aggregate in a variety of ways, giving rise to structures of different shapes, known as " phases ". This can be in the form of spheres of lipid molecules ( micelles ), pairs of layers that face one another ( lamellar phase , observed in biological systems as a lipid bilayer ), a tubular arrangement ( hexagonal ), or various cubic phases (Fd 3 m, Im 3 m, Ia 3 m, Pn 3 m, and Pm 3 m being those discovered so far). More complicated aggregations have also been observed, such as rhombohedral , tetragonal and orthorhombic phases. It forms an important part of current academic research in the fields of membrane biophysics (polymorphism), biochemistry (biological impact) and organic chemistry (synthesis). Determination of the topology of a lipid system is possible by a number of methods, the most reliable of which is x-ray diffraction . This uses a beam of x-rays that are scattered by the sample, giving a diffraction pattern as a set of rings. The ratio of the distances of these rings from the central point indicates which phase(s) are present. The structural phase of the aggregation is influenced by the ratio of lipids present, temperature, hydration, pressure and ionic strength (and type). In lipid polymorphism, if the packing ratio [ clarification needed ] of lipids is greater or less than one, lipid membranes can form two separate hexagonal phases, or nonlamellar phases, in which long, tubular aggregates form according to the environment in which the lipid is introduced. This phase is favored in detergent-in-water solutions and has a packing ratio of less than one. The micellar population in a detergent/water mixture cannot increase without limit as the detergent to water ratio increases. In the presence of low amounts of water, lipids that would normally form micelles will form larger aggregates in the form of micellar tubules in order to satisfy the requirements of the hydrophobic effect. These aggregates can be thought of as micelles that are fused together. These tubes have the polar head groups facing out, and the hydrophobic hydrocarbon chains facing the interior. This phase is only seen under unique, specialized conditions, and most likely is not relevant for biological membranes. Lipid molecules in the HII phase pack inversely to the packing observed in the hexagonal I phase described above. This phase has the polar head groups on the inside and the hydrophobic, hydrocarbon tails on the outside in solution. The packing ratio for this phase is larger than one, [ 1 ] which is synonymous with an inverse cone packing. Extended arrays of long tubes will form (as in the hexagonal I phase), but because of the way the polar head groups pack, the tubes take the shape of aqueous channels. These arrays can stack together like pipes. This way of packing may leave a finite hydrophobic surface in contact with water on the outside of the array. However, the otherwise energetically favorable packing apparently stabilizes this phase as a whole. It is also possible that an outer monolayer of lipid coats the surface of the collection of tubes to protect the hydrophobic surface from interaction with the aqueous phase. It is suggested that this phase is formed by lipids in solution in order to compensate for the hydrophobic effect. The tight packing of the lipid head groups reduces their contact with the aqueous phase. This, in turn, reduces the amount of ordered, but unbound water molecules. 
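The "packing ratio" invoked above is usually identified with the critical packing parameter introduced by Israelachvili; assuming that identification (the quantity is not defined explicitly in this text), it reads

$$
P = \frac{v}{a_{0}\,l_{c}},
$$

where v is the volume of the hydrophobic chains, a 0 the equilibrium area per head group, and l c the maximum effective chain length. On this reading, P < 1 favors normal (type I) curvature such as micelles and the hexagonal I phase, P ≈ 1 favors lamellar bilayers, and P > 1 favors inverted (type II) structures such as the hexagonal II phase.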
The most common lipids that form this phase include phosphatidylethanolamine (PE), when it has unsaturated hydrocarbon chains. Diphosphatidylglycerol (DPG, otherwise known as cardiolipin) in the presence of calcium is also capable of forming this phase. There are several techniques used to map out which phase is present during perturbations applied to the lipid. These perturbations include pH changes, temperature changes, pressure changes, volume changes, etc. The most common technique used to study phospholipid phase presence is phosphorus nuclear magnetic resonance (31P NMR). In this technique, distinct and characteristic powder patterns (lineshapes) are observed for the lamellar, hexagonal, and isotropic phases. Other techniques that offer definitive evidence of the existence of lamellar and hexagonal phases include freeze-fracture electron microscopy, X-ray diffraction, differential scanning calorimetry (DSC), and deuterium nuclear magnetic resonance (2H NMR). Additionally, negative-staining transmission electron microscopy has been shown to be a useful tool for studying lipid bilayer phase behavior and polymorphism into lamellar, micellar, unilamellar liposome, and hexagonal aqueous-lipid structures in aqueous dispersions of membrane lipids. [ 2 ] As water-soluble negative stain is excluded from the hydrophobic part (fatty acyl chains) of lipid aggregates, the hydrophilic headgroup portions of the lipid aggregates stain dark and clearly mark the outlines of the lipid aggregates.
https://en.wikipedia.org/wiki/Lipid_polymorphism
The lipid pump sequesters carbon from the ocean's surface to deeper waters via lipids associated with overwintering vertically migratory zooplankton . Lipids are a class of hydrocarbon rich, nitrogen and phosphorus deficient compounds essential for cellular structures. This lipid carbon enters the deep ocean as carbon dioxide produced by respiration of lipid reserves and as organic matter from the mortality of zooplankton. Compared to the more general biological pump , the lipid pump also results in a "lipid shunt", where other nutrients like nitrogen and phosphorus that are consumed in excess must be excreted back to the surface environment, and thus are not removed from the surface mixed layer of the ocean. This means that the carbon transported by the lipid pump does not limit the availability of essential nutrients in the ocean surface. [ 1 ] Carbon sequestration via the lipid pump is therefore decoupled from nutrient removal, allowing carbon uptake by oceanic primary production to continue. In the Biological Pump, nutrient removal is always coupled to carbon sequestration; primary production is limited as carbon and nutrients are transported to depth together in the form of organic matter. [ 1 ] The contribution of the lipid pump to the sequestering of carbon in the deeper waters of the ocean can be substantial: the carbon transported below 1,000 metres (3,300 ft) by copepods of the genus Calanus in the Arctic Ocean almost equals that transported below the same depth annually by particulate organic carbon (POC) in this region. [ 2 ] A significant fraction of this transported carbon would not return to the surface due to respiration and mortality. Research is ongoing to more precisely estimate the amount that remains at depth. [ 1 ] [ 2 ] [ 3 ] The export rate of the lipid pump may vary from 1–9.3 g C m −2 y −1 across temperate and subpolar regions containing seasonally-migrating zooplankton. [ 3 ] The role of zooplankton, and particularly copepods, in the food web is crucial to the survival of higher trophic level organisms whose primary source of nutrition is copepods. With warming oceans and increasing melting of ice caps due to climate change , the organisms associated with the lipid pump may be affected, thus influencing the survival of many commercially important fish and endangered marine mammals . [ 4 ] [ 5 ] [ 6 ] As a new and previously unquantified component of oceanic carbon sequestration, further research on the lipid pump can improve the accuracy and overall understanding of carbon fluxes in global oceanic systems . [ 1 ] [ 2 ] [ 3 ] Through the seasonal vertical migration of zooplankton, the lipid pump creates a net difference between lipids transported to the deep during the fall (when zooplankton enter diapause ) and what returns to the surface during the spring, resulting in the sequestration of lipid carbon at depth. [ 1 ] The biological pump encompasses many processes that sequester the CO 2 taken up in the surface ocean by phytoplankton through the export of POC to the deep ocean. [ 1 ] Although zooplankton are known to play important roles in the biological pump through grazing and the repackaging of particulate matter, the active transport of seasonally-migrating zooplankton through the lipid pump has not been incorporated into global estimates of the biological pump. [ 1 ] [ 2 ] The biological pump transports 1–4 g C m −2 y −1 of POC below the thermocline annually. 
[ 1 ] The export flux of POC in the temperate North Atlantic out of the surface waters was found to be 29 ± 10 g C m −2 y −1 . [ 7 ] However, studies have shown that processes such as consumption and remineralization cause a significant amount of this POC to be attenuated as it sinks below the thermocline (near overwintering depths of ~1000 m). [ 1 ] Furthermore, the remaining quantity of carbon in the North Atlantic from the export of POC below the thermocline has been calculated to be 2–8 g C m −2 y −1 , comparable to the carbon transported by the seasonal migration of C. finmarchicus in the North Atlantic (1–4 g C m −2 y −1 ) through the lipid pump. [ 1 ] Therefore, the lipid pump may contribute 50–100% of C sequestration to the biological pump as net transport that has not been included in its current estimates. [ 1 ] Although the sequestration of marine carbon is a primary outcome of the biological pump, the recycling of nutrients such as N and P in organic matter plays a comparatively important role in maintaining the processes that facilitate this carbon export without removing nutrients for primary production. [ 8 ] [ 9 ] One key difference between the lipid pump and biological pump is that the ratios of nutrients such as nitrogen and phosphorus relative to carbon are minimal or zero in lipids, whereas the exported POC in the biological pump retains the standard Redfield ratios found throughout the world's oceans. [ 1 ] This is primarily because zooplankton in their copepodite stages release excess nitrogen and phosphorus back into the surface waters through excretion. [ 1 ] Thus, the production, transport, and metabolism of lipid carbon during overwintering do not contribute to a net consumption or removal of essential nutrients in the surface ocean, which is unlike many components of the biological pump. [ 1 ] This process creates what is known as a "lipid shunt" in the biological pump, as the carbon sequestration of the lipid pump is decoupled from nutrient removal. [ 1 ] Diel Vertical Migration (DVM) is a well-studied phenomenon, widespread in the temperate and tropical oceans, and previously understood to be the most significant contributor to the active export of carbon as a result of zooplankton migration. [ 10 ] The most common form is the nocturnal DVM, a night-time ascent to the upper pelagic and a daytime descent to deeper waters. A distinctive variation of this form is the twilight DVM, where the ascent happens during dusk and the descent around midnight (i.e., midnight sinking). [ 11 ] While DVM occurs on a daily basis, overwintering diapause (hibernation) occurs on an annual time-scale and enables zooplankton species, particularly Calanus spp., to adapt to seasonal variation in primary productivity in specific ocean basins. Individuals enter diapause and migrate deeper in the water column to overwinter below the thermocline. [ 2 ] During diapause they survive on stored lipid reserves that are generated at the end of their time at the surface when nutrients are widely available. [ 2 ] [ 12 ] The seasonal end of diapause must be closely timed with the beginning of the spring phytoplankton bloom to enable acquisition of food to permit proper egg development and hatching. If the timing is disrupted, eggs that are hatched during diapause will have limited growth time and a lower likelihood of surviving overwintering; this is an example of the match-mismatch hypothesis. [ 13 ] Calanus spp.
in ocean basins with shorter growth seasons, such as polar regions, will be increasingly sensitive to the timing of the spring bloom. [ 13 ] In the Arctic and Antarctic environments, the productive season is typically short and certain copepod species vertically migrate during overwintering diapause. [ 13 ] [ 2 ] During the productive seasons of spring and summer, younger developmental stages of these copepods usually thrive in food-rich, warmer, near-surface waters, and they rapidly develop and grow. [ 1 ] During late summer and fall, grazing pressure, nutrient limitation, and annual variations of irradiance combine to limit the pelagic primary production. Consequently, the food supply fades toward fall, and overwintering diapause initiates. [ 1 ] [ 2 ] These copepods migrate to deeper waters with accumulated lipid reserves for overwintering. The overwintering diapause stages remain in deeper waters with limited physical and physiological activity, then ascend back to the near-surface waters and complete the life cycle at the onset of the following productive season. [ 13 ] [ 2 ] Ecology: Calanus spp. are abundantly distributed copepods, particularly in the polar and temperate North Atlantic. [ 1 ] Studies attempting to quantify the lipid pump have primarily focused on the cousin species of C. finmarchicus: Calanus glacialis, Calanus helgolandicus, and C. hyperboreus. [ 2 ] C. hyperboreus, the largest of these species, uses an overwintering diapause (hibernation) strategy, and its life-history will be described in more detail as a representative of Calanus spp. With a life cycle of two to six years on average, each C. hyperboreus individual can go through multiple overwintering periods. Positively buoyant eggs are spawned by females at depth and rise to the surface. Larvae (nauplii) first develop from these eggs, and complete their maturation into an early juvenile (copepodite) within one season, after which they undergo their first overwintering. Copepodites have three stages before maturing to the adult stage. While female Calanus spp. are generally expected to experience mortality after spawning, some may return to the surface to build up lipid stores before entering another overwintering and reproductive cycle. [ 2 ] Lipid accumulation and metabolism: Lipids are stored by all copepodite and adult Calanus spp. in an oil sac, which can account for up to 60% of an individual's dry weight. [ 2 ] Calanus spp. accumulate these lipids while feeding closer to the ocean surface during the spring and summer months, aligning with phytoplankton blooms. Early in the growing season, Calanus spp. bioenergetics are allocated to reproduction, feeding and growth, but eventually shift to the production of lipids to provide energy during diapause. These lipids take the form of wax esters, energy-rich compounds like omega-3 fatty acids, and long-chain carbon molecules. [ 1 ] At the end of the feeding/growing season, Calanus spp. migrate downward to depths varying from 600 to 3000 m, with the requirement that they settle below the thermocline to prevent premature return to the surface waters. [ 1 ] [ 2 ] Stored lipids are metabolized at these depths, accounting for approximately 25% of the basal metabolic rate. [ 14 ] A 6–8 month-long overwintering period can drain a substantial fraction (44–93%) of the stored lipids despite the decreased metabolism. [ 1 ] Physical characteristics: The physical characteristics of Calanus spp.
(i.e., dry weight, prosome length, lipid content, and carbon content) are always changing, varying between different regions, temporally, and across life stages. Based on isomorphism, or the similarity in form or structure of organisms, Calanus spp. may deviate in size but their basic physical structure remains constant across different overwintering stages and between different copepod species. [ 1 ] [ 2 ] The only significant taxonomic difference is the number of segments on the tail across developmental stage CIII and older (CIV, CV). As a consequence of isomorphism, dry weight (d [mg]) and prosome length (p [mm]) can be related as d = cp 3 , where c is a coefficient. [ 2 ] Observations identify the relationship between dry weight and prosome length with a coefficient between 3.3 and 3.5 for C. hyperboreus. [ 2 ] Although this relationship is not supported extensively by empirical evidence, it has been used in model frameworks to estimate Calanus spp. carbon content. [ 2 ] Relationships between the NAO and Calanus spp. populations: In the North Atlantic and Nordic Seas, a primary long-term forcing that affects Calanus spp. and their habitat is the North Atlantic Oscillation (NAO) index, defined as the normalized difference in sea surface pressure between the Azores High and the Icelandic Low. [ 14 ] While high NAO index values indicate a net flow of Atlantic water to the northeast and into the Norwegian Sea, low NAO index values indicate a reduced Atlantic water inflow into the Nordic Seas. [ 14 ] In the Northwestern Atlantic, positive trends in the abundances of Calanus spp. correspond with higher sea surface temperatures and positive NAO forcing with a lag of one or two years. [ 14 ] However, the influence of the NAO in explaining Calanus spp. abundance was substantially diminished when temporal autocorrelation and detrending analyses were involved. [ 14 ] Certain aspects of the lipid pump, such as the diapause depth and duration of zooplankton, can vary among regions that have different overwintering temperatures and resident community characteristics. [ 1 ] [ 3 ] There are other subarctic regions that have shown carbon export rates via seasonally-migrating zooplankton similar to those found in the temperate North Atlantic (1–4 g C m −2 y −1 ). [ 3 ] For instance, C. glacialis and C. hyperboreus are the most dominant zooplankton species found in the Arctic Ocean at similar latitudes, and they contribute to a 3.1 g C m −2 y −1 flux of lipid carbon below 100 m during overwintering. [ 11 ] A slightly higher maximum flux in lipid carbon (2–4.3 g C m −2 y −1 ) below 150 m was observed in the subarctic North Pacific and was primarily attributed to the Neocalanus genus of copepods. [ 15 ] [ 16 ] In these areas, N. flemingeri, N. cristatus, and N. plumchrus are the primary contributors to the lipid pump, whereas the subantarctic Southern Ocean consists primarily of N. tonsus, contributing to a lipid carbon flux of 1.7–9.3 g C m −2 y −1 out of the euphotic zone. [ 17 ] The rates or magnitude of these processes may vary slightly due to characteristic differences between these subpolar regions, which have largely been under-studied relative to their contributions to the lipid pump. [ 1 ] The zooplanktonic Calanus spp. are not only important for moving carbon out of the photic zone and into the deep ocean, but these lipid-rich organisms play a critical role in the success of many marine species that depend on them as food.
They comprise the majority of diets for fishes, seabirds and even large mammals such as whales. [ 4 ] [ 5 ] Copepods can account for about 70–90% of total zooplankton biomass, depending on region. [ 4 ] [ 6 ] Additionally, their eggs are a main source of food for commercially important fish stocks. The copepod eggs are buoyant and will rise to the sea surface, but are susceptible to predation by fish and other organisms. [ 2 ] [ 6 ] Copepods also provide the benthic community with food via sinking fecal pellets, meaning that as fish and smaller invertebrates excrete waste, that waste falls to the sea floor and organisms on the sea floor compete for the pellets as food. [ 6 ] The role of copepods in the food web is crucially intertwined with that of other organisms. Copepod abundance, specifically that of C. finmarchicus, has a direct impact on the endangered right whales of the North Atlantic. [ 18 ] North Atlantic right whales rely on copepods as their primary prey in order to meet their nutritional needs. To meet its energetic requirements, a right whale needs about 500 kg of C. finmarchicus a day. [ 12 ] Each copepod measures about 2–4 millimetres long, about the size of a grain of rice, and their density is, on average, between 1.0274 and 1.0452 g cm −3 . [ 19 ] [ 18 ] A loss in C. finmarchicus has the potential to affect the right whale's migration, reproduction, and/or the ability of lactating females to successfully nurse their young. [ 18 ] Many commercial and subsistence fisheries in arctic and subarctic regions fish for cod, salmon, crab, groundfish, and pollock, all of which depend on this energy-rich zooplankton as food. [ 20 ] [ 5 ] In 2017, the highest-value commercial fish species for the US were salmon ($688 million), crabs ($610 million), shrimp ($531 million), scallops ($512 million), and pollock ($413 million). [ 21 ] Pollock alone is the largest fishery in the US by volume and is also the second largest fishery in the world, supporting 2–5% of global fishery production. [ 21 ] [ 22 ] Not only do millions of people rely on fish for subsistence, but recreational fishing is one of the most popular activities in the US. Recreational fishing contributes about $202 million to the US economy. [ 21 ] Changes in the abundance and distribution of copepods could drastically affect the economic livelihoods of millions of people connected to the fishing industry or who rely on fishing as a primary source of protein. Anthropogenic climate change is estimated to impact the marine environment in a variety of ways. In the arctic and subarctic environments where a vast majority of Calanus spp. reside, melting ice caps and the timing of the spring phytoplankton bloom could have implications for copepod density, distribution and timing of return from overwintering. A phytoplankton bloom occurs in the spring in arctic and subarctic environments when sea ice melts, allowing an increase in light to penetrate deeper into the water column, thus supporting photosynthesis. [ 20 ] An input of freshwater from the sea ice melting increases the stratification of the ocean in the summertime. Stratification leaves nutrient-rich water on the bottom and nutrient-poor water on the top due to an increase in freshwater from the ice. However, in the wintertime, this region of the world experiences an increase in storms that bring nutrient-rich waters into the more nutrient-poor surface waters. Climate change alters the timing of the spring bloom by promoting an earlier or later ice melt.
Warmer waters could lead to stronger stratification, meaning that the density difference between the first and second layers of the ocean increases due to an increased flux of freshwater from ice melt. [ 23 ] [ 22 ] Typically, the amount of total annual primary productivity in the Bering Sea associated with a spring bloom is approximately 10–65%; however, warmer waters could impact the amount of primary production occurring. [ 22 ] For the C. finmarchicus species specifically, the start of reproduction is linked to the start of the spring bloom. [ 13 ] Thus, changes in the timing of the spring bloom would directly influence the reproductive capabilities of C. finmarchicus and alter the food chain from the bottom-up. However, the food chain could also be altered from the top-down through habitat disturbance and the removal of marine mammals and fish. [ 24 ] Large-scale commercial fisheries exert top-down effects by lowering the abundance of larger species and increasing the abundance of lipid-rich copepods, even paving the way for other species to consume them. [ 24 ] Under warming ocean conditions, prey switching is to be expected. [ 22 ] Egg production and hatching success may also be affected by increasing sea surface temperatures and ocean acidification. [ 12 ] Other climate change factors to consider that might influence these lipid-rich copepods are shifts of current systems, storm activity and sea-ice cover. [ 24 ] In some regions of the arctic, specifically the Bering Sea, studies have forecast a decrease in storms due to warming. This impacts the mixing of the water column that brings nutrient-rich water upwards. Copepods consume primary producers that require nutrients to survive. Limiting the amount of nutrients in the water column could decrease the abundance of these primary producers and subsequently reduce Calanus spp. abundance as well. [ 22 ] Changes in the water masses and temperature could have a direct effect on the zooplankton's vertical migration. [ 5 ] The distribution of the zooplankton in the water column is controlled by the currents. The Calanus spp. use the water column for their vertical migration. Changes to the currents while Calanus spp. are in diapause could result in a reduction in the abundance of the copepods in the Norwegian Sea. [ 5 ] Since the lipid pump is controlled through the movement of copepods, particularly Calanus spp., impacts of climate change that affect copepod abundance or seasonal migration will directly impact the lipid pump and carbon export to the deep ocean. A study that utilized climate modeling to simulate the effects of predicted increases in water temperature and salinity as a result of climate change on C. finmarchicus of the eastern shelf of North America forecasts a lower abundance of copepods. The decrease in favorable environmental conditions is expected to decrease the size and density of C. finmarchicus, and will likely have negative effects on whales and other components of the food web that are inextricably tied to copepods. [ 12 ] The impact of diapause and of variation in seasonal productivity was not explicitly included, as this would increase model complexity and require more accurate accounting of Calanus spp. metabolic processes during diapause. [ 1 ] [ 12 ] The importance of timing diapause with spring plankton blooms is well established, [ 12 ] suggesting that there is potential for additional population impacts as a result of climate change, which would further reverberate throughout the ecosystem.
The 2015 paper by Jónasdóttir et al. marked the first comprehensive accounting of the amount of carbon sequestration resulting from the movement of lipids by vertically migrating zooplankton during their overwintering diapause. Although only elucidating the impact of one particular species, in this case C. finmarchicus, both the magnitude of carbon flux and the widespread global distribution of Calanus spp. suggest the possible importance of the lipid pump in global carbon cycling by contributing an estimated 50–100% of carbon sequestration to the biological pump. [ 1 ] Subsequent research has underscored this significance, as estimates that attempt to more accurately account for the mortality and respiration rates of other overwintering Calanus spp. have suggested similar, although regionally variable, magnitudes of carbon export from the lipid pump. [ 2 ] [ 12 ] [ 25 ] [ 15 ] Overwintering diapause is an ecological strategy that enables Calanus spp. to adapt to the seasonal variability in food availability in ocean basins. [ 1 ] Changes in the timing or length of high food periods are likely to negatively impact the distribution and abundance of Calanus spp. [ 2 ] [ 3 ] Changes in ocean temperature and salinity due to anthropogenic climate change are also predicted to decrease concentrations of Calanus spp. in some ocean basins. [ 22 ] In addition to potential ecosystem impacts due to the large number of species that rely on copepods as a major constituent of their diets, [ 7 ] [ 6 ] [ 24 ] there may be implications for oceanic carbon sequestration from consequent changes in the magnitude of the lipid pump due to overwintering zooplankton. [ 1 ] [ 2 ] The global estimates of the biological pump have yet to include the elements of the lipid pump, which could represent 50–100% of C export that is not accounted for. [ 1 ] This is likely due to many observational challenges pertaining to the analysis of these seasonal migrations. [ 2 ] As described above, recent work has pursued more accurate ways to measure the mortality and respiration rates of overwintering zooplankton, the two factors that primarily control the amount of lipid carbon sequestered at depth. [ 1 ] [ 12 ] For the zooplankton that survive overwintering, their upward migration during the spring returns a fraction of the lipid reserves to the surface as nonrespired carbon, with losses attributed to predation by deep-dwelling predators, disease, starvation, and other sources of mortality generally not accounted for. [ 1 ] [ 14 ] Similar to the lysis shunt, the dynamics of the lipid shunt cause uncertainty in observational methods of the lipid pump when comparing its efficiency to that of the biological pump. [ 26 ] [ 25 ] [ 27 ] Additionally, large zooplankton usually avoid mooring instruments such as sediment traps during seasonal migrations, which further explains why the lipid pump has yet to be incorporated into estimates of the global carbon export flux. [ 16 ] [ 17 ] These observations can be challenging to make given the remote locations in which they are conducted and the harsh, deep sampling conditions, but such adaptations in data collection are needed to better integrate the carbon export flux provided by the lipid pump into global estimates. [ 2 ]
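As a rough check on the magnitudes discussed in this article, the sketch below pairs the published per-area ranges for the POC export remaining below the thermocline (2–8 g C m −2 y −1 ) and the C. finmarchicus lipid pump (1–4 g C m −2 y −1 ). Pairing range endpoints is only a crude bound, but it shows that the two transports are of comparable magnitude, consistent with the 50–100% contribution cited above; the variable names and pairing scheme are illustrative, not part of the cited analyses.

```python
# Per-area carbon fluxes quoted above for the temperate North Atlantic.
poc_below_thermocline = (2.0, 8.0)   # g C m^-2 y^-1, POC remaining below the thermocline
lipid_pump = (1.0, 4.0)              # g C m^-2 y^-1, seasonally migrating C. finmarchicus

# Ratio of lipid-pump export to POC export for every pairing of range endpoints.
ratios = [lp / poc for lp in lipid_pump for poc in poc_below_thermocline]
print(f"lipid pump / POC export: {min(ratios):.0%} to {max(ratios):.0%}")
# -> lipid pump / POC export: 12% to 200%
```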
https://en.wikipedia.org/wiki/Lipid_pump
Lipid signaling, broadly defined, refers to any biological cell signaling event involving a lipid messenger that binds a protein target, such as a receptor , kinase or phosphatase , which in turn mediate the effects of these lipids on specific cellular responses. Lipid signaling is thought to be qualitatively different from other classical signaling paradigms (such as monoamine neurotransmission ) because lipids can freely diffuse through membranes ( see osmosis ). One consequence of this is that lipid messengers cannot be stored in vesicles prior to release and so are often biosynthesized "on demand" at their intended site of action. As such, many lipid signaling molecules cannot circulate freely in solution but, rather, exist bound to special carrier proteins in serum . Ceramide (Cer) can be generated by the breakdown of sphingomyelin (SM) by sphingomyelinases (SMases), which are enzymes that hydrolyze the phosphocholine group from the sphingosine backbone. Alternatively, this sphingosine -derived lipid ( sphingolipid ) can be synthesized from scratch ( de novo ) by the enzymes serine palmitoyl transferase (SPT) and ceramide synthase in organelles such as the endoplasmic reticulum (ER) and possibly, in the mitochondria -associated membranes (MAMs) and the perinuclear membranes . Being located in the metabolic hub, ceramide leads to the formation of other sphingolipids , with the C1 hydroxyl (-OH) group as the major site of modification. A sugar can be attached to ceramide (glycosylation) through the action of the enzymes, glucosyl or galactosyl ceramide synthases . [ 1 ] Ceramide can also be broken down by enzymes called ceramidases , leading to the formation of sphingosine , [ 2 ] [ 3 ] Moreover, a phosphate group can be attached to ceramide (phosphorylation) by the enzyme, ceramide kinase . [ 4 ] It is also possible to regenerate sphingomyelin from ceramide by accepting a phosphocholine headgroup from phosphatidylcholine (PC) by the action of an enzyme called sphingomyelin synthase . [ 5 ] The latter process results in the formation of diacylglycerol (DAG) from PC. [ citation needed ] Ceramide contains two hydrophobic ("water-fearing") chains and a neutral headgroup. Consequently, it has limited solubility in water and is restricted within the organelle where it was formed. Also, because of its hydrophobic nature, ceramide readily flip-flops across membranes as supported by studies in membrane models and membranes from red blood cells ( erythrocytes ). [ 6 ] However, ceramide can possibly interact with other lipids to form bigger regions called microdomains which restrict its flip-flopping abilities. This could have immense effects on the signaling functions of ceramide because it is known that ceramide generated by acidic SMase enzymes in the outer leaflet of an organelle membrane may have different roles compared to ceramide that is formed in the inner leaflet by the action of neutral SMase enzymes. [ 7 ] Ceramide mediates many cell-stress responses, including the regulation of programmed cell death ( apoptosis ) [ 8 ] and cell aging ( senescence ). [ 9 ] Numerous research works have focused interest on defining the direct protein targets of action of ceramide. These include enzymes called ceramide -activated Ser-Thr phosphatases (CAPPs), such as protein phosphatase 1 and 2A (PP1 and PP2A), which were found to interact with ceramide in studies done in a controlled environment outside of a living organism ( in vitro ). 
[ 10 ] On the other hand, studies in cells have shown that ceramide-inducing agents such as tumor necrosis factor-α (TNFα) and palmitate induce the ceramide-dependent removal of a phosphate group (dephosphorylation) of the retinoblastoma gene product RB [ 11 ] and the enzymes protein kinase B ( AKT protein family) and protein kinase Cα (PKB and PKCα). [ 12 ] Moreover, there is also sufficient evidence which implicates ceramide in the activation of the kinase suppressor of Ras (KSR), [ 13 ] PKCζ, [ 14 ] [ 15 ] and cathepsin D . [ 16 ] Cathepsin D has been proposed as the main target for ceramide formed in organelles called lysosomes , making lysosomal acidic SMase enzymes one of the key players in the mitochondrial pathway of apoptosis . Ceramide was also shown to activate PKCζ , implicating it in the inhibition of AKT , regulation of the voltage difference between the interior and exterior of the cell (membrane potential) and signaling functions that favor apoptosis. [ 17 ] Chemotherapeutic agents such as daunorubicin and etoposide [ 18 ] [ 19 ] enhance the de novo synthesis of ceramide in studies done on mammalian cells. The same results were found for certain inducers of apoptosis, particularly stimulators of receptors in a class of lymphocytes (a type of white blood cell) called B-cells . [ 20 ] Regulation of the de novo synthesis of ceramide by palmitate may have a key role in diabetes and the metabolic syndrome . Experimental evidence shows that there is a substantial increase in ceramide levels upon adding palmitate . Ceramide accumulation activates PP2A, leading to the subsequent dephosphorylation and inactivation of AKT , [ 21 ] a crucial mediator in metabolic control and insulin signaling . This results in a substantial decrease in insulin responsiveness (i.e. to glucose) and in the death of insulin-producing cells in the pancreas called islets of Langerhans . [ 22 ] Inhibition of ceramide synthesis in mice via drug treatments or gene-knockout techniques prevented insulin resistance induced by fatty acids , glucocorticoids or obesity . [ 23 ] An increase in in vitro activity of acid SMase has been observed after applying multiple stress stimuli such as ultraviolet (UV) and ionizing radiation, binding of death receptors and chemotherapeutic agents such as platinum , histone deacetylase inhibitors and paclitaxel . [ 24 ] In some studies, SMase activation results in its transport to the plasma membrane and the simultaneous formation of ceramide. [ 24 ] Ceramide transfer protein (CERT) transports ceramide from the ER to the Golgi for the synthesis of SM. [ 25 ] CERT is known to bind phosphatidylinositol phosphates, hinting at its potential regulation via phosphorylation , a step of ceramide metabolism that can be enzymatically regulated by protein kinases and phosphatases , and by inositol lipid metabolic pathways. [ 26 ] To date, there are at least 26 distinct enzymes, with varied subcellular localizations, that act on ceramide as either a substrate or product. Regulation of ceramide levels can therefore be performed by one of these enzymes in distinct organelles by particular mechanisms at various times. [ 27 ] Sphingosine (Sph) is formed by the action of ceramidase (CDase) enzymes on ceramide in the lysosome . Sph can also be formed in the extracellular (outer leaflet) side of the plasma membrane by the action of the neutral CDase enzyme. Sph is then either recycled back to ceramide or phosphorylated by one of the sphingosine kinase enzymes, SK1 and SK2. 
[ 28 ] The product sphingosine-1-phosphate (S1P) can be dephosphorylated in the ER to regenerate sphingosine by certain S1P phosphatase enzymes within cells, where the salvaged Sph is recycled to ceramide . [ 29 ] Sphingosine is a single-chain lipid (usually 18 carbons in length), rendering it to have sufficient solubility in water. This explains its ability to move between membranes and to flip-flop across a membrane. Estimates conducted at physiological pH show that approximately 70% of sphingosine remains in membranes while the remaining 30% is water-soluble. [ 30 ] Sph that is formed has sufficient solubility in the liquid found inside cells ( cytosol ). Thus, Sph may come out of the lysosome and move to the ER without the need for transport via proteins or membrane-enclosed sacs called vesicles . However, its positive charge favors partitioning in lysosomes . It is proposed that the role of SK1 located near or in the lysosome is to ‘trap’ Sph via phosphorylation . [ 31 ] Since sphingosine exerts surfactant activity, it is one of the sphingolipids found at lowest cellular levels. [ 31 ] The low levels of Sph and their increase in response to stimulation of cells, primarily by activation of ceramidase by growth-inducing proteins such as platelet-derived growth factor and insulin-like growth factor , is consistent with its function as a second messenger . It was found that immediate hydrolysis of only 3 to 10% of newly generated ceramide may double the levels of Sph. [ 31 ] Treatment of HL60 cells (a type of leukemia cell line) by a plant-derived organic compound called phorbol ester increased Sph levels threefold, whereby the cells differentiated into white blood cells called macrophages . Treatment of the same cells by exogenous Sph caused apoptosis . A specific protein kinase phosphorylates 14-3-3, otherwise known as sphingosine-dependent protein kinase 1 (SDK1), only in the presence of Sph. [ 32 ] Sph is also known to interact with protein targets such as the protein kinase H homologue (PKH) and the yeast protein kinase (YPK). These targets in turn mediate the effects of Sph and its related sphingoid bases, with known roles in regulating the actin cytoskeleton , endocytosis , the cell cycle and apoptosis . [ 33 ] It is important to note however that the second messenger function of Sph is not yet established unambiguously. [ 34 ] Sphingosine-1-phosphate (S1P), like Sph, is composed of a single hydrophobic chain and has sufficient solubility to move between membranes. S1P is formed by phosphorylation of sphingosine by sphingosine kinase (SK). The phosphate group of the product can be detached (dephosphorylated) to regenerate sphingosine via S1P phosphatase enzymes or S1P can be broken down by S1P lyase enzymes to ethanolamine phosphate and hexadecenal. [ 35 ] Similar to Sph, its second messenger function is not yet clear. [ 34 ] However, there is substantial evidence that implicates S1P to cell survival, cell migration , and inflammation . Certain growth-inducing proteins such as platelet-derived growth factor (PDGF), insulin-like growth factor (IGF) and vascular endothelial growth factor (VEGF) promote the formation of SK enzymes, leading to increased levels of S1P. Other factors that induce SK include cellular communication molecules called cytokines , such as tumor necrosis factor α (TNFα) and interleukin-1 (IL-1), hypoxia or lack of oxygen supply in cells, oxidized low-density lipoproteins (oxLDL) and several immune complexes . 
[ 31 ] S1P is probably formed at the inner leaflet of the plasma membrane in response to TNFα and other receptor activity-altering compounds called agonists . [ 36 ] [ 37 ] S1P, being present in low nanomolar concentrations in the cell, has to interact with high-affinity receptors that are capable of sensing its low levels. So far, the only identified receptors for S1P are the high-affinity G protein-coupled receptors (GPCRs), also known as S1P receptors (S1PRs). S1P must reach the extracellular side (outer leaflet) of the plasma membrane to interact with S1PRs and launch typical GPCR signaling pathways. [ 38 ] [ 39 ] However, the zwitterionic headgroup of S1P makes it unlikely to flip-flop spontaneously. To overcome this difficulty, the ATP-binding cassette (ABC) transporter C1 (ABCC1) serves as the "exit door" for S1P. [ 40 ] On the other hand, the cystic fibrosis transmembrane regulator (CFTR) serves as the means of entry for S1P into the cell. [ 41 ] In contrast to its low intracellular concentration, S1P is found in high nanomolar concentrations in serum where it is bound to albumin and lipoproteins . [ 42 ] Inside the cell, S1P can induce calcium release independent of the S1PRs, by a mechanism that remains unknown. To date, the intracellular molecular targets for S1P are still unidentified. [ 31 ] The SK1-S1P pathway has been extensively studied in relation to cytokine action, with multiple functions connected to the effects of TNFα and IL-1 favoring inflammation . Studies show that knockdown of key enzymes such as S1P lyase and S1P phosphatase increased prostaglandin production, in parallel with increases in S1P levels. [ 37 ] This strongly suggests that S1P, rather than its subsequent downstream compounds, is the mediator of SK1 action. Research done on endothelial and smooth muscle cells is consistent with the hypothesis that S1P has a crucial role in regulating endothelial cell growth and movement. [ 43 ] Recent work on a sphingosine analogue, FTY720, demonstrates its ability to act as a potent compound that alters the activity of S1P receptors (an agonist ). FTY720 was further verified in clinical tests to have roles in immune modulation, such as in multiple sclerosis . [ 44 ] This highlights the importance of S1P in the regulation of lymphocyte function and immunity . Most studies on S1P aim to further the understanding of diseases such as cancer , arthritis and inflammation , diabetes , immune function and neurodegenerative disorders . [ 31 ] Glucosylceramides (GluCer) are the most widely distributed glycosphingolipids in cells, serving as precursors for the formation of over 200 known glycosphingolipids. GluCer is formed by the glycosylation of ceramide in an organelle called the Golgi via enzymes called glucosylceramide synthase (GCS) or by the breakdown of complex glycosphingolipids (GSLs) through the action of specific hydrolase enzymes. In turn, certain β-glucosidases hydrolyze these lipids to regenerate ceramide. [ 45 ] [ 46 ] GluCer appears to be synthesized in the inner leaflet of the Golgi. Studies show that GluCer has to flip to the inside of the Golgi or transfer to the site of GSL synthesis to initiate the synthesis of complex GSLs. Transferring to the GSL synthesis site is done with the help of a transport protein known as four phosphate adaptor protein 2 (FAPP2), while the flipping to the inside of the Golgi is made possible by the ABC transporter P- glycoprotein , also known as the multi-drug resistance 1 transporter ( MDR1 ). 
[ 47 ] GluCer is implicated in post-Golgi trafficking and drug resistance, particularly to chemotherapeutic agents . [ 48 ] [ 49 ] For instance, a study demonstrated a correlation between cellular drug resistance and modifications in GluCer metabolism . [ 50 ] In addition to their role as building blocks of biological membranes, glycosphingolipids have long attracted attention because of their supposed involvement in cell growth, differentiation , and formation of tumors. [ 31 ] The production of GluCer from Cer was found to be important in the growth of neurons or brain cells. [ 51 ] On the other hand, pharmacological inhibition of GluCer synthase is being considered as a technique to avoid insulin resistance . [ 52 ] Ceramide-1-phosphate (C1P) is formed by the action of ceramide kinase (CK) enzymes on Cer. C1P carries an ionic charge at neutral pH and contains two hydrophobic chains, making it relatively insoluble in aqueous environments. Thus, C1P resides in the organelle where it was formed and is unlikely to spontaneously flip-flop across membrane bilayers. [ 31 ] C1P activates phospholipase A2 and is found, along with CK, to be a mediator of arachidonic acid release in cells in response to a protein called interleukin -1β (IL-1β) and a lipid-soluble molecule that transports calcium ions (Ca 2+ ) across the bilayer, also known as a calcium ionophore . [ 53 ] C1P was also previously reported to encourage cell division ( mitogenic ) in fibroblasts , block apoptosis by inhibiting acid SMase in white blood cells within tissues ( macrophages ) [ 54 ] and increase intracellular free calcium concentrations in thyroid cells. [ 55 ] C1P also has known roles in vesicular trafficking, cell survival, phagocytosis ("cell eating") and macrophage degranulation . [ 56 ] [ 57 ] PIP 2 binds directly to ion channels and modulates their activity. PIP 2 was shown to directly agonize inward rectifying potassium channels ( K ir ). [ 58 ] In this regard, intact PIP 2 signals as a bona fide neurotransmitter-like ligand. [ 59 ] PIP 2 's interaction with many ion channels suggests that the intact form of PIP 2 has an important signaling role independent of second messenger signaling. [ citation needed ] A general second messenger system mechanism can be broken down into four steps. First, the agonist activates a membrane-bound receptor. Second, the activated G-protein produces a primary effector. Third, the primary effector stimulates second messenger synthesis. Fourth, the second messenger activates a certain cellular process. The G-protein coupled receptors for the PIP 2 messenger system produce two effectors, phospholipase C (PLC) and phosphoinositide 3-kinase (PI3K). PLC as an effector produces two different second messengers, inositol triphosphate (IP 3 ) and diacylglycerol (DAG). IP 3 is soluble and diffuses freely into the cytoplasm. As a second messenger, it is recognized by the inositol triphosphate receptor (IP3R), a Ca 2+ channel in the endoplasmic reticulum (ER) membrane, which stores intracellular Ca 2+ . The binding of IP 3 to IP3R releases Ca 2+ from the ER into the normally Ca 2+ -poor cytoplasm, which then triggers various events of Ca 2+ signaling. Specifically in blood vessels, the increase in Ca 2+ concentration from IP 3 releases nitric oxide, which then diffuses into the smooth muscle tissue and causes relaxation. [ 34 ] DAG remains bound to the membrane by its fatty acid "tails" where it recruits and activates both conventional and novel members of the protein kinase C family. 
Thus, both IP 3 and DAG contribute to activation of PKCs. [ 60 ] [ 61 ] Phosphoinositide 3-kinase (PI3K) as an effector phosphorylates phosphatidylinositol bisphosphate (PIP 2 ) to produce phosphatidylinositol (3,4,5)-trisphosphate (PIP 3 ). PIP 3 has been shown to activate protein kinase B , increase binding to extracellular proteins and ultimately enhance cell survival. [ 34 ] See main article on G-protein coupled receptors LPA is the result of phospholipase A2 action on phosphatidic acid . The SN-1 position can contain either an ester bond or an ether bond, with ether LPA being found at elevated levels in certain cancers. LPA binds the high-affinity G-protein coupled receptors LPA1 , LPA2 , and LPA3 (also known as EDG2 , EDG4 , and EDG7 , respectively). [ citation needed ] S1P is present at high concentrations in plasma and secreted locally at elevated concentrations at sites of inflammation. It is formed by the regulated phosphorylation of sphingosine . It acts through five dedicated high-affinity G-protein coupled receptors , S1P1 - S1P5 . Targeted deletion of S1P1 results in lethality in mice and deletion of S1P2 results in seizures and deafness. Additionally, a mere 3- to 5-fold elevation in serum S1P concentrations induces sudden cardiac death by an S1P3 -receptor specific mechanism. PAF is a potent activator of platelet aggregation, inflammation, and anaphylaxis. It is similar to the ubiquitous membrane phospholipid phosphatidylcholine except that it contains an acetyl -group in the SN-2 position and the SN-1 position contains an ether -linkage. PAF signals through a dedicated G-protein coupled receptor , PAFR and is inactivated by PAF acetylhydrolase. The endogenous cannabinoids , or endocannabinoids , are endogenous lipids that activate cannabinoid receptors . The first such lipid to be isolated was anandamide which is the arachidonoyl amide of ethanolamine . Anandamide is formed via enzymatic release from N-arachidonoyl phosphatidylethanolamine by the N-acyl phosphatidylethanolamine phospholipase D (NAPE-PLD). [ 62 ] Anandamide activates both the CB1 receptor, found primarily in the central nervous system , and the CB2 receptor which is found primarily in lymphocytes and the periphery. It is found at very low levels (nM) in most tissues and is inactivated by the fatty acid amide hydrolase . Subsequently, another endocannabinoid was isolated, 2-arachidonoylglycerol , which is produced when phospholipase C releases diacylglycerol which is then converted to 2-AG by diacylglycerol lipase . 2-AG can also activate both cannabinoid receptors and is inactivated by monoacylglycerol lipase . It is present at approximately 100-times the concentration of anandamide in most tissues. Elevations in either of these lipids causes analgesia and anti- inflammation and tissue protection during states of ischemia, but the precise roles played by these various endocannabinoids are still not totally known and intensive research into their function, metabolism, and regulation is ongoing. One saturated lipid from this class, often called an endocannabinoid, but with no relevant affinity for the CB1 and CB 2 receptor is palmitoylethanolamide . This signaling lipid has great affinity for the GRP55 receptor and the PPAR alpha receptor. It has been identified as an anti-inflammatory compound already in 1957, and as an analgesic compound in 1975. Rita Levi-Montalcini first identified one of its biological mechanisms of action, the inhibition of activated mast cells. 
Palmitoylethanolamide is the only endocannabinoid available on the market for treatment, as a food supplement. Prostaglandins are formed through oxidation of arachidonic acid by cyclooxygenases and other prostaglandin synthases . There are currently nine known G-protein coupled receptors ( eicosanoid receptors ) that largely mediate prostaglandin physiology (although some prostaglandins activate nuclear receptors , see below). FAHFAs (fatty acid esters of hydroxy fatty acids) are formed in adipose tissue, improve glucose tolerance and also reduce adipose tissue inflammation. Palmitic acid esters of hydroxy-stearic acids (PAHSAs) are among the most bioactive members, able to activate G-protein coupled receptor 120. [ 63 ] The docosahexaenoic acid ester of hydroxy-linoleic acid (DHAHLA) exerts anti-inflammatory and pro-resolving properties. [ 64 ] Retinaldehyde is a retinol ( vitamin A ) derivative responsible for vision. It binds rhodopsin , a well-characterized GPCR that binds 11-cis-retinal in its inactive state. Upon photoisomerization by a photon, the 11-cis-retinal is converted to all-trans-retinal, causing activation of rhodopsin, which ultimately leads to hyperpolarization of the photoreceptor and thereby enables visual perception . See the main article on nuclear receptors . This large and diverse class of steroids is biosynthesized from isoprenoids and structurally resembles cholesterol . Mammalian steroid hormones can be grouped into five groups by the receptors to which they bind: glucocorticoids , mineralocorticoids , androgens , estrogens , and progestogens . Retinol ( vitamin A ) can be metabolized to retinoic acid which activates nuclear receptors such as the RAR to control differentiation and proliferation of many types of cells during development. [ 65 ] The majority of prostaglandin signaling occurs via GPCRs (see above) although certain prostaglandins activate nuclear receptors in the PPAR family. (See article eicosanoid receptors for more information).
https://en.wikipedia.org/wiki/Lipid_signaling
Lipidology is the scientific study of lipids . Lipids are a group of biological molecules that have a multitude of functions in the body. [ 1 ] [ 2 ] [ 3 ] Clinical studies on lipid metabolism in the body have led to developments in therapeutic lipidology for disorders such as cardiovascular disease. [ 4 ] Compared to other biomedical fields, lipidology was long neglected, as the handling of oils, smears, and greases was unappealing to scientists and lipid separation was difficult. [ 5 ] It was not until 2002 that lipidomics , the study of lipid networks and their interaction with other molecules, appeared in the scientific literature. [ 6 ] Attention to the field was bolstered by the introduction of chromatography , spectrometry , and various forms of spectroscopy to the field, allowing lipids to be isolated and analyzed. [ 5 ] The field was further popularized following the cytologic application of the electron microscope , which led scientists to find that many metabolic pathways take place within, along, and through the cell membrane, the properties of which are strongly influenced by lipid composition . [ 5 ] The Framingham Heart Study and other epidemiological studies have found a correlation between lipoproteins and cardiovascular disease (CVD). [ 7 ] Lipoproteins are generally a major target of study in lipidology since lipids are transported throughout the body in the form of lipoproteins. [ 2 ] A class of lipids known as phospholipids helps make up lipoproteins; one type of lipoprotein is high-density lipoprotein (HDL). [ 8 ] A high concentration of high-density lipoprotein cholesterol (HDL-C) has what is known as a vasoprotective effect on the body, a finding that correlates with improved cardiovascular outcomes. [ 9 ] There is also a correlation between diseases such as chronic kidney disease, coronary artery disease, or diabetes mellitus and a reduced vasoprotective effect from HDL. [ 10 ] Another factor of CVD that is often overlooked involves the concentrations of low-density lipoproteins (LDL) and very low-density lipoproteins (VLDL). These are often seen at higher than necessary levels in the body due to dietary intake, family history, and a person's metabolic rate. There is a correlation between these increased levels and stroke, heart attack, and mortality. [ 11 ] Statins are a class of lipid-lowering medications used in the treatment and prevention of cardiovascular disease, specifically those associated with LDL-C. [ 12 ] Statins have been shown to reduce incident cardiovascular events by 30-40% when used as prescribed. [ 13 ] However, statins are associated with a range of adverse effects (e.g., statin myopathies and myalgias ) sometimes severe enough to warrant discontinuation and/or substitution. [ 14 ] [ 15 ] For individuals completely intolerant to statins who nevertheless have an indication for lipid-lowering therapy, lipoprotein apheresis , a non-surgical method of removing lipoprotein particles from the bloodstream, is an option. Pharmacologic inhibition of proprotein convertase subtilisin/kexin type 9 (PCSK9), an enzyme crucial for maintaining lipoprotein homeostasis, can be achieved through the use of monoclonal antibodies targeting PCSK9, such as evolocumab and alirocumab . 
This approach offers a potential solution for individuals with statin intolerance and insufficient response to statins alone, a common scenario among patients with familial hypercholesterolemia , whereby significant reductions in circulating lipoproteins can be achieved. [ 13 ] Lipidomics is the study of the complete profile of all lipids in a biological system at a given time. This profile is used to identify and quantify the lipids that can be detected. Since lipids have a variety of functions in the body, understanding which specific types are present and at what levels is crucial to understanding the diseases that result from disordered lipid metabolism. [ 16 ] Methods of lipidomic analysis include mass spectrometry and chromatography. [ 6 ] Monitoring lipid concentrations can reveal much about an organism's health.
https://en.wikipedia.org/wiki/Lipidology
Lipidomics is the large-scale study of pathways and networks of cellular lipids in biological systems. [ 1 ] [ 2 ] [ 3 ] The word " lipidome " is used to describe the complete lipid profile within a cell, tissue, organism, or ecosystem and is a subset of the " metabolome " which also includes other major classes of biological molecules (such as amino acids, sugars, glycolysis & TCA intermediates, and nucleic acids). Lipidomics is a relatively recent research field that has been driven by rapid advances in technologies such as mass spectrometry (MS), nuclear magnetic resonance (NMR) spectroscopy, fluorescence spectroscopy , dual polarisation interferometry and computational methods, coupled with the recognition of the role of lipids in many metabolic diseases such as obesity , atherosclerosis , stroke , hypertension and diabetes . This rapidly expanding field [ 4 ] complements the huge progress made in genomics and proteomics, all of which constitute the family of systems biology . Lipidomics research involves the identification and quantification of the thousands of cellular lipid molecular species and their interactions with other lipids, proteins, and other metabolites . Investigators in lipidomics examine the structures, functions, interactions, and dynamics of cellular lipids and the changes that occur during perturbation of the system. Han and Gross [ 5 ] first defined the field of lipidomics through integrating the specific chemical properties inherent in lipid molecular species with a comprehensive mass spectrometric approach. Although lipidomics is under the umbrella of the more general field of " metabolomics ", lipidomics is itself a distinct discipline due to the uniqueness and functional specificity of lipids relative to other metabolites. In lipidomic research, a vast amount of information quantitatively describing the spatial and temporal alterations in the content and composition of different lipid molecular species is accrued after perturbation of a cell through changes in its physiological or pathological state. Information obtained from these studies facilitates mechanistic insights into changes in cellular function. Therefore, lipidomic studies play an essential role in defining the biochemical mechanisms of lipid-related disease processes through identifying alterations in cellular lipid metabolism, trafficking and homeostasis. The growing attention on lipid research is also seen from the initiatives underway of the LIPID Metabolites And Pathways Strategy ( LIPID MAPS Consortium). [ 6 ] and The European Lipidomics Initiative (ELIfe). [ 7 ] Lipids are a diverse and ubiquitous group of compounds which have many key biological functions, such as acting as structural components of cell membranes , serving as energy storage sources and participating in signaling pathways. Lipids may be broadly defined as hydrophobic or amphipathic small molecules that originate entirely or in part from two distinct types of biochemical subunits or "building blocks": ketoacyl and isoprene groups. [ 8 ] The huge structural diversity found in lipids arises from the biosynthesis of various combinations of these building blocks. For example, glycerophospholipids are composed of a glycerol backbone linked to one of approximately 10 possible headgroups and also to 2 fatty acyl / alkyl chains, which in turn may have 30 or more different molecular structures. 
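As a rough worked illustration of the combinatorial diversity just described (a sketch that simply multiplies the figures quoted above and treats the two chain positions as independent, ignoring any biosynthetic constraints), the number of theoretically possible glycerophospholipid species would be on the order of

\[
10\ \text{headgroups} \times 30\ \text{chains at one position} \times 30\ \text{chains at the other} \approx 9{,}000\ \text{distinct molecular species.}
\]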
In practice, not all possible permutations are detected experimentally, due to chain preferences depending on the cell type and also to detection limits; nevertheless several hundred distinct glycerophospholipid molecular species have been detected in mammalian cells. Plant chloroplast thylakoid membranes, however, have a unique lipid composition, as they are deficient in phospholipids. Also, their largest constituent, monogalactosyl diglyceride or MGDG , does not form aqueous bilayers. Nevertheless, dynamic studies reveal a normal lipid bilayer organisation in thylakoid membranes. [ 9 ] Most methods of lipid extraction and isolation from biological samples exploit the high solubility of hydrocarbon chains in organic solvents . Given the diversity in lipid classes, it is not possible to accommodate all classes with a common extraction method. The traditional Bligh/Dyer procedure [ 10 ] uses chloroform / methanol -based protocols that include phase partitioning into the organic layer. However, several protocols now exist, with newer methods overcoming the shortcomings of older ones and solving problems associated with, for example, targeted lipid isolation or high throughput data collection [ 11 ] . Most protocols work relatively well for a variety of physiologically relevant lipids, but they have to be adapted for species with particular properties and for low-abundance and labile lipid metabolites [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] . The simplest method of lipid separation is the use of thin layer chromatography (TLC). Although not as sensitive as other methods of lipid detection, it offers a rapid and comprehensive screening tool prior to more sensitive and sophisticated techniques. Solid-phase extraction (SPE) chromatography is useful for rapid, preparative separation of crude lipid mixtures into different lipid classes. This involves the use of prepacked columns containing silica or other stationary phases to separate glycerophospholipids , fatty acids , cholesteryl esters , glycerolipids , and sterols from crude lipid mixtures. [ 18 ] High-performance liquid chromatography (HPLC or LC) is extensively used in lipidomic analysis to separate lipids prior to mass analysis. Separation can be achieved by either normal-phase (NP) HPLC or reverse-phase (RP) HPLC. For example, NP-HPLC effectively separates glycerophospholipids on the basis of headgroup polarity, [ 19 ] whereas RP-HPLC effectively separates fatty acids such as eicosanoids on the basis of chain length, degree of unsaturation and substitution. [ 20 ] For global, untargeted lipidomic studies it is common to use both RP and NP or hydrophilic interaction liquid chromatography (HILIC) columns for increased lipidome coverage. The application of nano-flow liquid chromatography (nLC) has thereby proved most efficient at enhancing both general measurement sensitivity and lipidome coverage for a global lipidomics approach. [ 21 ] Chromatographic (HPLC/UHPLC) separation of lipids may be performed either offline or online, where the eluate is integrated with the ionization source of a mass spectrometer. The progress of modern lipidomics has been greatly accelerated by the development of spectrometric methods in general and soft ionization techniques for mass spectrometry such as electrospray ionization (ESI), [ 5 ] desorption electrospray ionization (DESI), [ 22 ] and matrix-assisted laser desorption/ionization (MALDI) [ 23 ] in particular. 
"Soft" ionization does not cause extensive fragmentation, so that comprehensive detection of an entire range of lipids within a complex mixture can be correlated to experimental conditions or disease state. In addition, the technique of atmospheric pressure chemical ionization (APCI) has become increasingly popular for the analysis of nonpolar lipids. [ 24 ] ESI-MS was initially developed by Fenn and colleagues for analysis of biomolecules. [ 25 ] It depends on the formation of gaseous ions from polar, thermally labile and mostly non-volatile molecules and thus is completely suitable for a variety of lipids. It is a soft-ionization method that rarely disrupts the chemical nature of the analyte prior to mass analysis. Various ESI-MS methods have been developed for analysis of different classes, subclasses, and individual lipid species from biological extracts. Comprehensive reviews of the methods and their application have recently been published. [ 26 ] The major advantages of ESI-MS are high accuracy, sensitivity, reproducibility, and the applicability of the technique to complex solutions without prior derivatization. Han and coworkers have developed a method known as"shotgun lipidomics" which involves direct infusion of a crude lipid extract into an ESI source optimized for intrasource separation of lipids based on their intrinsic electrical properties. [ 27 ] DESI mass spectrometry is an ambient ionization technique developed by Professor Zoltan Takáts, et al., in Professor Graham Cooks' group from Purdue University . [ 22 ] It combines the ESI and desorption ionization techniques, by directing an electrically charged mist to the sample surface that is a few millimeters away. [ 28 ] The technique has been successfully applied to lipidomics as imaging tool to map the lipid distributions within tissue specimens. [ 29 ] One of the advantages of DESI MS is that no matrix is required for tissue preparation, allowing multiple consecutive measurements on the same tissue specimen. DESI MS can also be used for imaging of lipids from tissue sections. [ 30 ] MALDI mass spectrometry is a laser-based soft-ionization method often used for analysis of large proteins, but has been used successfully for lipids. The lipid is mixed with a matrix, such as 2,5-dihydroxybenzoic acid, and applied to a sample holder as a small spot. A laser is fired at the spot, and the matrix absorbs the energy, which is then transferred to the analyte, resulting in ionization of the molecule. MALDI-Time-of-flight (MALDI-TOF) MS has become a very promising approach for lipidomics studies, particularly for the imaging of lipids from tissue slides. [ 31 ] The source for APCI is similar to ESI except that ions are formed by the interaction of the heated analyte solvent with a corona discharge needle set at a high electrical potential. Primary ions are formed immediately surrounding the needle, and these interact with the solvent to form secondary ions that ultimately ionize the sample. APCI is particularly useful for the analysis of nonpolar lipids such as triacylglycerols, sterols, and fatty acid esters. [ 32 ] The high sensitivity of DESI in the lipid range makes it a powerful technique for the detection and mapping of lipids abundances within tissue specimens. [ 33 ] Recent developments in MALDI methods have enabled direct detection of lipids in-situ. 
Abundant lipid-related ions are produced from the direct analysis of thin tissue slices when sequential spectra are acquired across a tissue surface that has been coated with a MALDI matrix. Collisional activation of the molecular ions can be used to determine the lipid family and often structurally define the molecular species. These techniques enable detection of phospholipids, sphingolipids and glycerolipids in tissues such as heart, kidney and brain. Furthermore, distribution of many different lipid molecular species often define anatomical regions within these tissues. [ 34 ] [ 35 ] Lipid profiling is a targeted metabolomics platform that provides a comprehensive analysis of lipid species within a cell or tissue. Profiling based on electrospray ionization tandem mass spectrometry (ESI-MS/MS) is capable of providing quantitative data and is adaptable to high throughput analyses. [ 37 ] The powerful approach of transgenics, namely deletion and/or overexpression of a gene product coupled with lipidomics, can give valuable insights into the role of biochemical pathways. [ 38 ] Lipid profiling techniques have also been applied to plants [ 39 ] and microorganisms such as yeast. [ 36 ] [ 40 ] [ 21 ] A combination of quantitative lipidomic data in conjunction with the corresponding transcriptional data (using gene-array methods) and proteomic data (using tandem MS) enables a systems biology approach to a more in-depth understanding of the metabolic or signaling pathways of interest. A major challenge for lipidomics, in particular for MS-based approaches, lies in the computational and bioinformatic demands of handling the large amount of data that arise at various stages along the chain of information acquisition and processing. [ 41 ] [ 42 ] Chromatographic and MS data collection requires substantial efforts in spectral alignment and statistical evaluation of fluctuations in signal intensities. Such variations have a multitude of origins, including biological variations, sample handling and analytical accuracy. As a consequence several replicates are normally required for reliable determination of lipid levels in complex mixtures. Within the last few years, a number of software packages have been developed by various companies and research groups to analyze data generated by MS profiling of metabolites, including lipids. The data processing for differential profiling usually proceed through several stages, including input file manipulation, spectral filtering, peak detection, chromatographic alignment, normalization, visualization, and data export. An example of metabolic profiling software is the freely-available Java-based Mzmine application. [ 43 ] Another is Metabolon, Inc’s commercial applications for metabolomic analysis using proprietary software. [ 44 ] Recently MS-DIAL 4 software was integrated with a comprehensive lipidome atlas with retention time, collision cross-section and tandem mass spectrometry information for 117 lipid subclasses and 8,051 lipids. [ 45 ] Some software packages such as Markerview [ 46 ] include multivariate statistical analysis (for example, principal component analysis) and these will be helpful for the identification of correlations in lipid metabolites that are associated with a physiological phenotype, in particular for the development of lipid-based biomarkers. Another objective of the information technology side of lipidomics involves the construction of metabolic maps from data on lipid structures and lipid-related protein and genes. 
Some of these lipid pathways [ 47 ] are extremely complex, for example the mammalian glycosphingolipid pathway. [ 48 ] The establishment of searchable and interactive databases [ 49 ] [ 50 ] of lipids and lipid-related genes/proteins is also an extremely important resource as a reference for the lipidomics community. Integration of these databases with MS and other experimental data, as well as with metabolic networks [ 51 ] offers an opportunity to devise therapeutic strategies to prevent or reverse these pathological states involving dysfunction of lipid-related processes.
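To make the differential-profiling workflow described above more concrete, here is a minimal sketch of the normalization and fold-change stages applied to an already peak-picked and aligned feature table. The feature names, sample layout and intensity values are illustrative placeholders, and the snippet is not the interface of MZmine, MS-DIAL, MarkerView or any other package mentioned above:

# Minimal sketch of post-alignment lipidomics data processing:
# total-intensity normalization followed by log2 fold change between groups.
import math

# Aligned peak table: feature -> intensity per sample (arbitrary units, illustrative).
peak_table = {
    "PC 34:1": {"ctrl_1": 9.0e5, "ctrl_2": 8.5e5, "case_1": 4.1e5, "case_2": 4.4e5},
    "PE 36:2": {"ctrl_1": 2.1e5, "ctrl_2": 2.3e5, "case_1": 2.0e5, "case_2": 2.2e5},
    "TG 52:3": {"ctrl_1": 5.0e4, "ctrl_2": 6.1e4, "case_1": 1.6e5, "case_2": 1.5e5},
}
groups = {"ctrl": ["ctrl_1", "ctrl_2"], "case": ["case_1", "case_2"]}

# 1. Normalize each sample to its total ion intensity (one common strategy).
samples = [s for members in groups.values() for s in members]
totals = {s: sum(row[s] for row in peak_table.values()) for s in samples}
normalized = {feat: {s: row[s] / totals[s] for s in samples} for feat, row in peak_table.items()}

# 2. Report per-feature group means and log2 fold change (case vs. control).
for feat, row in normalized.items():
    means = {g: sum(row[s] for s in members) / len(members) for g, members in groups.items()}
    log2_fc = math.log2(means["case"] / means["ctrl"])
    print(f"{feat}: log2(case/ctrl) = {log2_fc:+.2f}")

In a real study these steps would sit between vendor file conversion and peak detection (handled by tools such as MZmine or MS-DIAL) and the multivariate statistics mentioned above, and the normalization strategy itself (total intensity, internal standards, or class-specific standards) is a methodological choice rather than a fixed rule.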
https://en.wikipedia.org/wiki/Lipidomics
Lipinski's rule of five , also known as Pfizer's rule of five or simply the rule of five ( RO5 ), is a rule of thumb to evaluate druglikeness or determine if a chemical compound with a certain pharmacological or biological activity has chemical properties and physical properties that would likely make it an orally active drug in humans. The rule was formulated by Christopher A. Lipinski in 1997, based on the observation that most orally administered drugs are relatively small and moderately lipophilic molecules . [ 1 ] [ 2 ] The rule describes molecular properties important for a drug's pharmacokinetics in the human body, including their absorption , distribution , metabolism , and excretion (" ADME "). However, the rule does not predict if a compound is pharmacologically active. The rule is important to keep in mind during drug discovery when a pharmacologically active lead structure is optimized step-wise to increase the activity and selectivity of the compound as well as to ensure drug-like physicochemical properties are maintained as described by Lipinski's rule. [ 3 ] Candidate drugs that conform to the RO5 tend to have lower attrition rates during clinical trials and hence have an increased chance of reaching the market. [ 2 ] [ 4 ] Some authors have criticized the rule of five for the implicit assumption that passive diffusion is the only important mechanism for the entry of drugs into cells, ignoring the role of transporters. For example, O'Hagan and co-authors wrote as follows: [ 5 ] This famous "rule of 5" has been highly influential in this regard, but only about 50 % of orally administered new chemical entities actually obey it. Studies have also demonstrated that some natural products, such as macrolides and peptides, break the chemical rules used in Lipinski filters. [ 6 ] [ 7 ] [ 8 ] Lipinski's rule states that, in general, an orally active drug has no more than one violation of the following criteria: no more than 5 hydrogen bond donors, no more than 10 hydrogen bond acceptors, a molecular mass of less than 500 daltons, and a calculated octanol-water partition coefficient (log P) that does not exceed 5. [ 9 ] Note that all numbers are multiples of five, which is the origin of the rule's name. As with many other rules of thumb , such as Baldwin's rules for ring closure, there are many exceptions . In an attempt to improve the predictions of druglikeness , the rules have spawned many extensions, for example the Ghose filter. [ 10 ] Veber's rule further questions the 500 molecular weight cutoff. The polar surface area and the number of rotatable bonds have been found to better discriminate between compounds that are orally active and those that are not for a large data set of compounds. [ 11 ] In particular, compounds which meet only the two criteria of 10 or fewer rotatable bonds and a polar surface area no greater than 140 Å 2 are predicted to have good oral bioavailability. [ 11 ] During drug discovery, lipophilicity and molecular weight are often increased in order to improve the affinity and selectivity of the drug candidate. Hence it is often difficult to maintain drug-likeness (i.e., RO5 compliance) during hit and lead optimization. It has therefore been proposed that members of screening libraries from which hits are discovered should be biased toward lower molecular weight and lipophilicity so that medicinal chemists will have an easier time in delivering optimized drug development candidates that are also drug-like. For this reason, the rule of five has been extended to the rule of three (RO3) for defining lead-like compounds. [ 12 ] A rule of three compliant compound is defined as one that has an octanol-water partition coefficient (log P) of no more than 3, a molecular mass of less than 300 daltons, no more than 3 hydrogen bond donors, no more than 3 hydrogen bond acceptors, and no more than 3 rotatable bonds.
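As a concrete illustration of how the rule is applied in practice, the following is a minimal sketch of an RO5 compliance check. It assumes the four classical criteria listed above and takes precomputed molecular descriptors as plain numbers (in real pipelines these would come from a cheminformatics toolkit); the function name and the example descriptor values are illustrative.

# Minimal sketch: count Lipinski rule-of-five violations from precomputed descriptors.
def ro5_violations(mol_weight: float, logp: float, h_donors: int, h_acceptors: int) -> int:
    """Return the number of violated rule-of-five criteria."""
    violations = 0
    if mol_weight > 500:   # molecular mass over 500 daltons
        violations += 1
    if logp > 5:           # calculated octanol-water partition coefficient over 5
        violations += 1
    if h_donors > 5:       # more than 5 hydrogen bond donors
        violations += 1
    if h_acceptors > 10:   # more than 10 hydrogen bond acceptors
        violations += 1
    return violations

# Example with descriptors roughly in the range of a small oral drug (illustrative numbers).
if __name__ == "__main__":
    n = ro5_violations(mol_weight=350.4, logp=2.7, h_donors=2, h_acceptors=5)
    print(f"{n} violation(s); rule-of-five compliant: {n <= 1}")

Following Lipinski's formulation quoted above, a candidate is usually flagged only when it violates more than one criterion, which is why the check reports a count rather than a single pass/fail threshold per property.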
https://en.wikipedia.org/wiki/Lipinski's_rule_of_five
A lipoblast is a precursor cell for an adipocyte . [ 1 ] Alternate terms include adipoblast [ 2 ] and preadipocyte . [ 3 ] Early stages are almost indistinguishable from fibroblasts . [ 4 ] Lipoblasts are seen in liposarcoma [ 7 ] and characteristically have abundant multivacuolated clear cytoplasm and a dark staining (hyperchromatic), indented nucleus .
https://en.wikipedia.org/wiki/Lipoblast
Lipotoxicity is a metabolic syndrome that results from the accumulation of lipid intermediates in non- adipose tissue , leading to cellular dysfunction and death . The tissues normally affected include the kidneys , liver , heart and skeletal muscle . Lipotoxicity is believed to have a role in heart failure , obesity , and diabetes , and is estimated to affect approximately 25% of the adult American population. [ 1 ] In normal cellular operations, there is a balance between the production of lipids, and their oxidation or transport. In lipotoxic cells, there is an imbalance between the amount of lipids produced and the amount used. Upon entrance of the cell, fatty acids can be converted to different types of lipids for storage. Triacylglycerol consists of three fatty acids bound to a glycerol molecule and is considered the most neutral and harmless type of intracellular lipid storage. Alternatively, fatty acids can be converted to lipid intermediates like diacylglycerol , ceramides and fatty acyl-CoAs. These lipid intermediates can impair cellular function, which is referred to as lipotoxicity. [ 2 ] Adipocytes , the cells that normally function as lipid store of the body, are well equipped to handle the excess lipids. Yet, too great of an excess will overburden these cells and cause a spillover into non-adipose cells, which do not have the necessary storage space. When the storage capacity of non-adipose cells is exceeded, cellular dysfunction and/or death result. The mechanism by which lipotoxicity causes death and dysfunction is not well understood. The cause of apoptosis and extent of cellular dysfunction is related to the type of cell affected, as well as the type and quantity of excess lipids. [ 3 ] A theory has been put forward by Cambridge researchers relating the development of lipotoxicity to the perturbation of membrane glycerophospholipid/sphingolipid homeostasis and their associated signalling events. [ 4 ] Currently, there is no universally accepted theory for why certain individuals are afflicted with lipotoxicity. Research is ongoing into a genetic cause, but no individual gene has been named as the causative agent. The causative role of obesity in lipotoxicity is controversial. Some researchers claim that obesity has protective effects against lipotoxicity as it results in extra adipose tissue in which excess lipids can be stored. Others claim obesity is a risk factor for lipotoxicity. Both sides accept that high fat diets put patients at increased risk for lipotoxic cells. Individuals with high numbers of lipotoxic cells usually experience both leptin and insulin resistance . However, no causative mechanism has been found for this correlation. [ 5 ] Renal lipotoxicity occurs when excess long-chain nonesterified fatty acids are stored in the kidney and proximal tubule cells. It is believed that these fatty acids are delivered to the kidneys via serum albumin . This condition leads to tubulointerstitial inflammation and fibrosis in mild cases, and to kidney failure and death in severe cases. The current accepted treatments for lipotoxicity in renal cells are fibrate therapy and intensive insulin therapy . [ 6 ] An excess of free fatty acids in liver cells plays a role in Nonalcoholic Fatty Liver Disease (NAFLD). In the liver, it is the type of fatty acid, not the quantity, that determines the extent of the lipotoxic effects. In hepatocytes , the ratio of monounsaturated fatty acids and saturated fatty acids leads to apoptosis and liver damage. 
There are several potential mechanisms by which the excess fatty acids can cause cell death and damage. They may activate death receptors , stimulate apoptotic pathways, or initiate the cellular stress response in the endoplasmic reticulum . These lipotoxic effects have been shown to be prevented by the presence of excess triglycerides within the hepatocytes. [ 7 ] Lipotoxicity in cardiac tissue is attributed to excess saturated fatty acids. The apoptosis that follows is believed to be caused by the unfolded protein response in the endoplasmic reticulum. Researchers are working on treatments that will increase the oxidation of these fatty acids within the heart in order to prevent the lipotoxic effects. [ 8 ] Lipotoxicity affects the pancreas when excess free fatty acids are found in beta cells , causing their dysfunction and death. The effects of this lipotoxicity are treated with leptin therapy and insulin sensitizers. [ 9 ] Skeletal muscle accounts for more than 80 percent of postprandial whole-body glucose uptake and therefore plays an important role in glucose homeostasis. Skeletal muscle lipid levels – intramyocellular lipids (IMCL) – correlate negatively with insulin sensitivity in a sedentary population and hence were considered predictive for insulin resistance and causative in obesity-associated insulin resistance. However, endurance athletes also have high IMCL levels despite being highly insulin sensitive, which indicates that it is not the level of IMCL accumulation per se, but rather the characteristics of this intramyocellular fat, that determine whether it negatively affects insulin signaling. [ 2 ] Intramyocellular lipids are mainly stored in lipid droplets , the organelles for fat storage. Recent research indicates that creating intramyocellular neutral lipid storage capacity, for example by increasing the abundance of lipid droplet coat proteins, [ 2 ] [ 10 ] protects against obesity-associated insulin resistance in skeletal muscle. The methods to prevent and treat lipotoxicity are divided into three main groups. The first strategy focuses on decreasing the lipid content of non-adipose tissues. This can be accomplished by either increasing the oxidation of the lipids or increasing their secretion and transport. Current treatments involve extreme weight loss and leptin treatment. [ 11 ] Another strategy focuses on diverting excess lipids away from non-adipose tissues and towards adipose tissues. This is accomplished with thiazolidinediones , a group of medications that activate nuclear receptor proteins responsible for lipid metabolism. [ 12 ] The final strategy focuses on inhibiting the apoptotic pathways and signaling cascades. This is accomplished by using drugs that inhibit production of specific chemicals required for the pathways to be functional. While this may prove to be the most effective protection against cell death, it will also require the most research and development due to the specificity required of the medications. [ 3 ] Lipoexpediency refers to the beneficial effects of lipids in a cell or a tissue, primarily lipid-mediated signal transmission events, that may occur even in the setting of excess fatty acids . The term was coined as an antonym to lipotoxicity. [ 13 ]
https://en.wikipedia.org/wiki/Lipoexpediency
Lipofectamine or Lipofectamine 2000 is a common transfection reagent , produced and sold by Invitrogen , used in molecular and cellular biology . [ 1 ] It is used to increase the transfection efficiency of RNA (including mRNA and siRNA ) or plasmid DNA into in vitro cell cultures by lipofection . [ 1 ] Lipofectamine contains lipid subunits that can form liposomes in an aqueous environment, which entrap the transfection payload, e.g. DNA plasmids. Lipofectamine consists of a 3:1 mixture of DOSPA (2,3‐dioleoyloxy‐N‐ [2(sperminecarboxamido)ethyl]‐N,N‐dimethyl‐1‐propaniminium trifluoroacetate) and DOPE , [ 2 ] which complexes with negatively charged nucleic acid molecules to allow them to overcome the electrostatic repulsion of the cell membrane. [ 3 ] Lipofectamine's cationic lipid molecules are formulated with a neutral co-lipid (helper lipid). [ 3 ] The DNA-containing liposomes (positively charged on their surface) can fuse with the negatively charged plasma membrane of living cells, due to the neutral co-lipid mediating fusion of the liposome with the cell membrane, allowing nucleic acid cargo molecules to cross into the cytoplasm for replication or expression. [ 3 ] In order for a cell to express a transgene , the nucleic acid must reach the nucleus of the cell to begin transcription . However, the transfected genetic material may never reach the nucleus in the first place, instead being disrupted somewhere along the delivery process. [ 3 ] In dividing cells, the material may reach the nucleus by being trapped in the reassembling nuclear envelope following mitosis. [ 3 ] But also in non-dividing cells, research has shown that Lipofectamine improves the efficiency of transfection, which suggests that it additionally helps the transfected genetic material penetrate the intact nuclear envelope. [ 3 ] This method of transfection was invented by Dr. Yongliang Chu. [ 4 ] US Active US7479573B2, Yongliang Chu; Malek Masoud & Gulliat Gebeyehu, "Transfection reagents", assigned to Life Technologies Corp and Invitrogen Group
https://en.wikipedia.org/wiki/Lipofectamine
Lipofuscin is the name given to fine yellow-brown pigment granules composed of lipid -containing residues of lysosomal digestion. [ 1 ] [ 2 ] It is considered to be one of the aging or "wear-and-tear" pigments, found in the liver , kidney , heart muscle, retina, adrenals , nerve cells, and ganglion cells. [ 3 ] Lipofuscin appears to be the product of the oxidation of unsaturated fatty acids and may be symptomatic of membrane damage, or damage to mitochondria and lysosomes . Aside from a large lipid content, lipofuscin is known to contain sugars and metals, including mercury , aluminium , iron , copper and zinc . [ 4 ] Lipofuscin is also accepted as consisting of oxidized proteins (30–70%) as well as lipids (20–50%). [ 5 ] It is a type of lipochrome [ 6 ] and is specifically arranged around the nucleus. The accumulation of lipofuscin-like material may be the result of an imbalance between formation and disposal mechanisms. Such accumulation can be induced in rats by administering a protease inhibitor ( leupeptin ); after a period of three months, the levels of the lipofuscin-like material return to normal, indicating the action of a significant disposal mechanism. [ 7 ] However, this result is controversial, as it is questionable if the leupeptin -induced material is true lipofuscin. [ 8 ] [ 9 ] There exists evidence that "true lipofuscin" is not degradable in vitro ; [ 10 ] [ 11 ] [ 12 ] whether this holds in vivo over longer time periods is not clear. The ABCR -/- knockout mouse has delayed dark adaptation but normal final rod threshold relative to controls. [ 13 ] Bleaching the retina with strong light leads to formation of toxic cationic bis -pyridinium salt, N -retinylidene- N -retinyl-ethanolamine ( A2E ), which causes dry and wet age-related macular degeneration . [ 14 ] From this experiment, it was concluded that ABCR has a significant role in preventing formation of A2E in extracellular photoreceptor surfaces during bleach recovery. [ citation needed ] Lipofuscin accumulation in the eye is a major risk factor implicated in macular degeneration , a degenerative disease, [ 15 ] and Stargardt disease , an inherited juvenile form of macular degeneration. In the peripheral nervous system , abnormal accumulation of lipofuscin known as lipofuscinosis [ 1 ] is associated with a family of neurodegenerative disorders – neuronal ceroid lipofuscinoses , the most common of these is Batten disease . Also, pathological accumulation of lipofuscin is implicated in Alzheimer's disease , Parkinson's disease , amyotrophic lateral sclerosis , certain lysosomal diseases , acromegaly , denervation atrophy , lipid myopathy , chronic obstructive pulmonary disease , [ 16 ] and centronuclear myopathy . Accumulation of lipofuscin in the colon is the cause of the condition melanosis coli . On the other hand, myocardial lipofuscin accumulation more directly reflects chronological ageing rather than human cardiac pathology. [ 17 ] Calorie restriction , [ 4 ] vitamin E , [ 4 ] and increased glutathione appear to reduce or halt the production of lipofuscin. The nootropic drug piracetam appears to significantly reduce accumulation of lipofuscin in the brain tissue of rats. [ 18 ] Other possible treatments: Wet macular degeneration can be treated using selective photothermolysis where a pulsed unfocused laser predominantly heats and kills lipofuscin-rich cells, leaving untouched healthy cells to multiply and fill in the gaps. 
The technique is also used as a skin treatment to remove tattoos and liver spots, and in general to make skin appear younger. This ability to selectively target lipofuscin has opened up research opportunities in the field of anti-aging medicine. [ citation needed ] Soraprazan (remofuscin) has been found to remove lipofuscin from retinal pigment epithelial cells in animals. [ 24 ] This opens up a new therapy option for the treatment of dry age-related macular degeneration and Stargardt disease, for which there is currently no treatment. The drug has been granted orphan drug designation for the treatment of Stargardt disease by the European Medicines Agency. [ 25 ] Lipofuscin quantification is used for age determination in various crustaceans such as lobsters and spiny lobsters. [ 26 ] [ 27 ] Since these animals lack bony parts, they cannot be aged in the same way as bony fish, in which annual increments in the ear-bones, or otoliths, are commonly used. Age determination of fish and shellfish is a fundamental step in generating basic biological data such as growth curves, and is needed for many stock assessment methods. Several studies have indicated that quantifying the amount of lipofuscin present in the eye-stalks of various crustaceans can give an index of their age. This method has not yet been widely applied in fisheries management, mainly because of problems in relating lipofuscin levels in wild-caught animals to accumulation curves derived from aquarium-reared animals. [ citation needed ]
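As a sketch of how the age-determination idea above is applied, the snippet below fits a simple accumulation curve to lipofuscin measurements from animals of known age (such as aquarium-reared individuals) and then inverts it to estimate the age of a wild-caught animal. The data values and the assumption of roughly linear accumulation are purely illustrative; published studies use species-specific calibration curves.

# Illustrative calibration of a lipofuscin-based age index (toy data, not from any study).
known_age_years  = [1.0, 2.0, 3.0, 4.0, 5.0]   # animals of known (aquarium-reared) age
lipofuscin_index = [0.8, 1.7, 2.3, 3.4, 4.1]   # e.g., fraction of eye-stalk tissue occupied

# Ordinary least-squares fit of index = a * age + b (assumes roughly linear accumulation).
n = len(known_age_years)
mean_x = sum(known_age_years) / n
mean_y = sum(lipofuscin_index) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(known_age_years, lipofuscin_index)) \
    / sum((x - mean_x) ** 2 for x in known_age_years)
b = mean_y - a * mean_x

def estimate_age(index):
    """Invert the calibration curve to estimate age from a measured lipofuscin index."""
    return (index - b) / a

print(f"slope={a:.2f}, intercept={b:.2f}")
print(f"estimated age for index 2.9: {estimate_age(2.9):.1f} years")

The difficulty noted above, that wild-caught animals may not follow the accumulation curve of aquarium-reared ones, corresponds here to the fitted slope and intercept not transferring between populations.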
https://en.wikipedia.org/wiki/Lipofuscin
Lipoic acid (LA), also known as α-lipoic acid, alpha-lipoic acid (ALA) and thioctic acid, is an organosulfur compound derived from caprylic acid (octanoic acid). [ 3 ] ALA, which is normally made in animals, is essential for aerobic metabolism. It is also available as a dietary supplement or pharmaceutical drug in some countries. Lipoate is the conjugate base of lipoic acid, and the most prevalent form of LA under physiological conditions. [ 3 ] Only the (R)-(+)-enantiomer (RLA) exists in nature. RLA is an essential cofactor of many processes. [ 3 ] Lipoic acid contains two sulfur atoms connected by a disulfide bond in a 1,2-dithiolane ring. It also carries a carboxylic acid group. It is considered to be oxidized relative to its acyclic relative dihydrolipoic acid, in which each sulfur exists as a thiol. [ 3 ] It is a yellow solid. (R)-(+)-lipoic acid (RLA) occurs naturally, but (S)-(-)-lipoic acid (SLA) has been synthesized. For use in dietary supplement materials and compounding pharmacies, the USP established an official monograph for R/S-LA. [ 4 ] [ 5 ] Lipoic acid is a cofactor for five enzymes or classes of enzymes: pyruvate dehydrogenase, α-ketoglutarate dehydrogenase, the glycine cleavage system (GCS), branched-chain alpha-keto acid dehydrogenase, and α-oxoadipate (α-ketoadipate) dehydrogenase. The first two are critical to the citric acid cycle. The GCS regulates glycine concentrations. [ 6 ] HDAC1, HDAC2, HDAC3, HDAC6, HDAC8, and HDAC10 are targets of the reduced form (open dithiol) of (R)-lipoic acid. [ 7 ] Most endogenously produced RLA is not "free", because octanoic acid, the precursor to RLA, is bound to the enzyme complexes prior to enzymatic insertion of the sulfur atoms. As a cofactor, RLA is covalently attached by an amide bond to a terminal lysine residue of the enzyme's lipoyl domains. The precursor to lipoic acid, octanoic acid, is made via mitochondrial fatty acid biosynthesis in the form of octanoyl-acyl carrier protein. [ 3 ] The octanoate is transferred as a thioester of acyl carrier protein from mitochondrial fatty acid biosynthesis to an amide of the lipoyl domain protein by an enzyme called an octanoyltransferase. [ 3 ] Two hydrogens of octanoate are replaced with sulfur groups via a radical SAM mechanism, by lipoyl synthase. [ 3 ] As a result, lipoic acid is synthesized attached to proteins, and no free lipoic acid is produced. Lipoic acid can be removed whenever proteins are degraded, by the action of the enzyme lipoamidase. [ 8 ] Free lipoate can be used by some organisms via an enzyme called lipoate protein ligase, which attaches it covalently to the correct protein. The ligase activity of this enzyme requires ATP. [ 9 ] Along with sodium and the vitamins biotin (B7) and pantothenic acid (B5), lipoic acid enters cells through the SMVT (sodium-dependent multivitamin transporter). Each of the compounds transported by the SMVT competes with the others. For example, research has shown that increasing intake of lipoic acid [ 10 ] or pantothenic acid [ 11 ] reduces the uptake of biotin and/or the activities of biotin-dependent enzymes. Lipoic acid is a cofactor for at least five enzyme systems. [ 3 ] Two of these are in the citric acid cycle, through which many organisms turn nutrients into energy. Lipoylated enzymes have lipoic acid attached to them covalently. The lipoyl group transfers acyl groups in 2-oxoacid dehydrogenase complexes, and a methylamine group in the glycine cleavage complex or glycine dehydrogenase. [ 3 ]
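Returning briefly to cellular uptake: the competition among SMVT substrates noted above can be illustrated with a textbook competitive-inhibition model, in which raising the concentration of one substrate (here lipoate) raises the apparent Km for another (here biotin) and so lowers its uptake at a fixed concentration. All rate constants and concentrations below are arbitrary illustrative values, not measured parameters of the human transporter.

# Competitive-inhibition sketch for shared transport through the SMVT (arbitrary parameters).
def biotin_uptake(biotin_um, lipoate_um, vmax=1.0, km_biotin=5.0, ki_lipoate=5.0):
    """Michaelis-Menten uptake rate of biotin, with lipoate acting as a competitive inhibitor."""
    km_apparent = km_biotin * (1.0 + lipoate_um / ki_lipoate)
    return vmax * biotin_um / (km_apparent + biotin_um)

for lipoate in (0.0, 5.0, 25.0):
    rate = biotin_uptake(biotin_um=2.0, lipoate_um=lipoate)
    print(f"lipoate {lipoate:5.1f} uM -> relative biotin uptake {rate:.3f}")

The uptake rate falls as the competing substrate concentration rises, which is the qualitative behavior reported for biotin when lipoic acid or pantothenic acid intake is increased.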
Lipoic acid is the cofactor of several enzyme complexes in humans. [ 12 ] [ 13 ] [ 14 ] The most-studied of these is the pyruvate dehydrogenase complex. [ 3 ] These complexes have three central subunits, E1–E3, which are the decarboxylase, lipoyl transferase, and dihydrolipoamide dehydrogenase, respectively. These complexes have a central E2 core, and the other subunits surround this core to form the complex. In the gap between these two subunits, the lipoyl domain ferries intermediates between the active sites. [ 3 ] The lipoyl domain itself is attached by a flexible linker to the E2 core, and the number of lipoyl domains varies from one to three for a given organism. The number of domains has been experimentally varied and seems to have little effect on growth until over nine are added, although more than three decreased activity of the complex. [ 15 ] Lipoic acid also serves as cofactor to the acetoin dehydrogenase complex, which catalyzes the conversion of acetoin (3-hydroxy-2-butanone) to acetaldehyde and acetyl coenzyme A. [ 3 ] The glycine cleavage system differs from the other complexes and has a different nomenclature. [ 3 ] In this system, the H protein is a free lipoyl domain with additional helices, the L protein is a dihydrolipoamide dehydrogenase, the P protein is the decarboxylase, and the T protein transfers the methylamine from lipoate to tetrahydrofolate (THF), yielding methylene-THF and ammonia. Methylene-THF is then used by serine hydroxymethyltransferase to synthesize serine from glycine. This system is part of plant photorespiration. [ 16 ] Lipoic acid is present in many foods, in which it is bound to lysine in proteins, [ 3 ] but slightly more so in kidney, heart, liver, spinach, broccoli, and yeast extract. [ 17 ] Naturally occurring lipoic acid is always covalently bound and not readily available from dietary sources. [ 3 ] In addition, the amount of lipoic acid present in dietary sources is low. For instance, the purification of lipoic acid to determine its structure used an estimated 10 tons of liver residue, which yielded 30 mg of lipoic acid. [ 18 ] As a result, all lipoic acid available as a supplement is chemically synthesized. [ citation needed ] Baseline levels (prior to supplementation) of RLA and R-DHLA have not been detected in human plasma. [ 19 ] RLA has been detected at 12.3–43.1 ng/mL following acid hydrolysis, which releases protein-bound lipoic acid. Enzymatic hydrolysis of protein-bound lipoic acid released 1.4–11.6 ng/mL and <1–38.2 ng/mL using subtilisin and alcalase, respectively. [ 20 ] [ 21 ] [ 22 ] Digestive proteolytic enzymes cleave the R-lipoyllysine residue from the mitochondrial enzyme complexes derived from food but are unable to cleave the lipoic acid–L-lysine amide bond. [ 23 ] Both synthetic lipoamide and (R)-lipoyl-L-lysine are rapidly cleaved by serum lipoamidases, which release free (R)-lipoic acid and either L-lysine or ammonia. [ 3 ] Little is known about the degradation and utilization of aliphatic sulfides such as lipoic acid, except for cysteine. [ 3 ] Lipoic acid is metabolized in a variety of ways when given as a dietary supplement in mammals. [ 3 ] [ 24 ] Degradation to tetranorlipoic acid, oxidation of one or both of the sulfur atoms to the sulfoxide, and S-methylation of the sulfide were observed. Conjugation of unmodified lipoic acid to glycine was detected, especially in mice. [ 24 ] Degradation of lipoic acid is similar in humans, although it is not clear whether the sulfur atoms become significantly oxidized. [ 3 ] [ 25 ]
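To put the purification figures quoted above in perspective, a one-line calculation of the mass yield (30 mg of lipoic acid from an estimated 10 tons of liver residue) shows why natural sources are impractical and why supplement material is chemically synthesized. The snippet assumes metric tons.

# Yield of the classical lipoic acid purification: 30 mg from ~10 tons of liver residue.
liver_mass_g = 10 * 1_000_000   # 10 metric tons expressed in grams (assumption: metric tons)
lipoic_acid_g = 0.030           # 30 mg
fraction = lipoic_acid_g / liver_mass_g
print(f"mass fraction ~ {fraction:.1e} (about {fraction * 1e9:.0f} parts per billion)")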
Apparently mammals are not capable of utilizing lipoic acid as a sulfur source. In the metabolic disease combined malonic and methylmalonic aciduria (CMAMMA) due to ACSF3 deficiency, mitochondrial fatty acid synthesis (mtFAS), which is the precursor reaction of lipoic acid biosynthesis, is impaired. [ 26 ] [ 27 ] The result is a reduced degree of lipoylation of important mitochondrial enzymes, such as the pyruvate dehydrogenase complex (PDC) and the α-ketoglutarate dehydrogenase complex (α-KGDHC). [ 27 ] Supplementation with lipoic acid does not restore mitochondrial function. [ 28 ] [ 27 ] SLA did not exist prior to chemical synthesis in 1952. [ 29 ] [ 30 ] SLA is produced in equal amounts with RLA during achiral manufacturing processes. The racemic form was more widely used clinically in Europe and Japan in the 1950s to 1960s despite the early recognition that the various forms of LA are not bioequivalent. [ 31 ] The first synthetic procedures for RLA and SLA appeared in the mid-1950s. [ 32 ] [ 33 ] [ 34 ] [ 35 ] Advances in chiral chemistry led to more efficient technologies for manufacturing the single enantiomers by both classical resolution and asymmetric synthesis, and demand for RLA also grew at this time. In the 21st century, R/S-LA, RLA and SLA with high chemical and/or optical purities are available in industrial quantities. At present, most of the world supply of R/S-LA and RLA is manufactured in China, with smaller amounts produced in Italy, Germany, and Japan. RLA is produced by modifications of a process first described by Georg Lang in a Ph.D. thesis and later patented by DeGussa. [ 36 ] [ 37 ] Although RLA is favored nutritionally due to its "vitamin-like" role in metabolism, both RLA and R/S-LA are widely available as dietary supplements. Both stereospecific and non-stereospecific reactions are known to occur in vivo and contribute to the mechanisms of action, but evidence to date indicates RLA may be the eutomer (the nutritionally and therapeutically preferred form). [ 38 ] [ 39 ] A 2007 human pharmacokinetic study of sodium RLA demonstrated that the maximum plasma concentration and bioavailability are significantly greater than those of the free acid form, and rival plasma levels achieved by intravenous administration of the free acid. [ 40 ] Additionally, high plasma levels comparable to those in animal models in which Nrf2 was activated were achieved. [ 40 ] The various forms of LA are not bioequivalent. [ 31 ] Very few studies compare individual enantiomers with racemic lipoic acid, and it is unclear whether twice as much racemic lipoic acid can replace RLA. [ 40 ] The toxic dose of LA in cats is much lower than that in humans or dogs and produces hepatocellular toxicity. [ 41 ] The mechanism and action of lipoic acid when supplied externally to an organism is controversial. Lipoic acid in a cell seems primarily to induce the oxidative stress response rather than directly scavenge free radicals. This effect is specific for RLA. [ 42 ] Despite the strongly reducing milieu, LA has been detected intracellularly in both oxidized and reduced forms. [ 43 ] LA is able to scavenge reactive oxygen and reactive nitrogen species in biochemical assays, owing to long incubation times, but there is little evidence that this occurs within a cell or that radical scavenging contributes to the primary mechanisms of action of LA. [ 42 ] [ 44 ]
The relatively good scavenging activity of LA toward hypochlorous acid (a bactericidal agent produced by neutrophils that may cause inflammation and tissue damage) is due to the strained conformation of the 5-membered dithiolane ring, which is lost upon reduction to DHLA. In cells, LA is reduced to dihydrolipoic acid, which is generally regarded as the more bioactive form of LA and the form responsible for most of the antioxidant effects and for lowering the redox activities of unbound iron and copper. [ 45 ] This theory has been challenged due to the high reactivity of the two free sulfhydryls, the low intracellular concentrations of DHLA, the rapid methylation of one or both sulfhydryls, rapid side-chain oxidation to shorter metabolites, and rapid efflux from the cell. Although both DHLA and LA have been found inside cells after administration, most intracellular DHLA probably exists as mixed disulfides with various cysteine residues from cytosolic and mitochondrial proteins. [ 38 ] Recent findings suggest therapeutic and anti-aging effects are due to modulation of signal transduction and gene transcription, which improve the antioxidant status of the cell. However, this likely occurs via pro-oxidant mechanisms, not by radical scavenging or reducing effects. [ 42 ] [ 44 ] [ 46 ] All the disulfide forms of LA (R/S-LA, RLA and SLA) can be reduced to DHLA, although both tissue-specific and stereoselective (preference for one enantiomer over the other) reductions have been reported in model systems. At least two cytosolic enzymes, glutathione reductase (GR) and thioredoxin reductase (Trx1), and two mitochondrial enzymes, lipoamide dehydrogenase and thioredoxin reductase (Trx2), reduce LA. SLA is stereoselectively reduced by cytosolic GR, whereas Trx1, Trx2 and lipoamide dehydrogenase stereoselectively reduce RLA. (R)-(+)-lipoic acid is enzymatically or chemically reduced to (R)-(-)-dihydrolipoic acid, whereas (S)-(-)-lipoic acid is reduced to (S)-(+)-dihydrolipoic acid. [ 47 ] [ 48 ] [ 49 ] [ 50 ] [ 51 ] [ 52 ] [ 53 ] Dihydrolipoic acid (DHLA) can also form intracellularly and extracellularly via non-enzymatic thiol-disulfide exchange reactions. [ 54 ] RLA may function in vivo like a B-vitamin and, at higher doses, like plant-derived nutrients such as curcumin, sulforaphane, resveratrol, and other nutritional substances that induce phase II detoxification enzymes, thus acting as cytoprotective agents. [ 46 ] [ 55 ] This stress response indirectly improves the antioxidant capacity of the cell. [ 42 ] The (S)-enantiomer of LA was shown to be toxic when administered to thiamine-deficient rats. [ 56 ] [ 57 ] Several studies have demonstrated that SLA either has lower activity than RLA or interferes with the specific effects of RLA by competitive inhibition. [ 58 ] [ 59 ] [ 60 ] [ 61 ] [ 62 ] R/S-LA and RLA are widely available as over-the-counter nutritional supplements in the United States in the form of capsules, tablets, and aqueous liquids, and have been marketed as antioxidants and as agents supporting cellular glucose utilization in metabolic disorders and type 2 diabetes. [ 3 ] Although the body can synthesize LA, it can also be absorbed from the diet. Dietary supplementation in doses of 200–600 mg is likely to provide up to 1000 times the amount available from a regular diet. Gastrointestinal absorption is variable and decreases when LA is taken with food.
It is therefore recommended that dietary LA be taken 30–60 minutes before or at least 120 minutes after a meal. Maximum blood levels of LA are achieved 30–60 minutes after dietary supplementation, and it is thought to be largely metabolized in the liver. [ 63 ] In Germany, LA has been approved as a drug for the treatment of diabetic neuropathy since 1966 and is available as a non-prescription pharmaceutical. [ 64 ] According to the American Cancer Society as of 2013, "there is no reliable scientific evidence at this time that lipoic acid prevents the development or spread of cancer". [ 65 ] As of 2015, intravenously administered ALA was unapproved anywhere in the world except Germany for diabetic neuropathy, but had been shown to be reasonably safe and effective. [ 66 ] As of 2012, there was no good evidence that alpha lipoic acid helps people with mitochondrial disorders. [ 67 ] A 2018 review recommended ALA as an anti-obesity supplement at a low dosage (<600 mg/day) for a short period (<10 weeks). [ 68 ]
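The dosing and absorption statements above (oral doses of 200–600 mg providing roughly 1000 times dietary intake, and peak blood levels 30–60 minutes after a dose) can be made concrete with a toy one-compartment model with first-order absorption. The rate constants, volume of distribution, and bioavailability used below are arbitrary illustrative values chosen only so that the peak falls in the stated window; they are not measured pharmacokinetic parameters of lipoic acid.

import math

# Implied dietary intake if a 200-600 mg supplement is ~1000x the dietary amount.
for dose_mg in (200, 600):
    print(f"supplement {dose_mg} mg -> implied dietary intake ~ {dose_mg / 1000:.1f} mg/day")

# Toy one-compartment oral model: C(t) = F*D*ka / (V*(ka-ke)) * (exp(-ke*t) - exp(-ka*t)).
F, V = 0.3, 10.0      # assumed bioavailability and volume of distribution (L), illustrative only
ka, ke = 3.0, 0.8     # assumed absorption / elimination rate constants (1/h), illustrative only
dose_mg = 600.0

def conc(t_h):
    """Plasma concentration (mg/L) at time t_h hours after an oral dose, toy model."""
    return F * dose_mg * ka / (V * (ka - ke)) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

t_max = math.log(ka / ke) / (ka - ke)   # time of peak concentration for this model
print(f"t_max ~ {t_max * 60:.0f} min, C_max ~ {conc(t_max):.1f} mg/L (illustrative numbers only)")

With these placeholder constants the peak falls at roughly 36 minutes, inside the 30–60 minute window quoted above; real parameters would have to be fitted to measured plasma data.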
https://en.wikipedia.org/wiki/Lipoic_acid
A lipokine is a lipid-controlling hormone. The term was coined by the Hotamisligil laboratory in 2008 to classify fatty acids that modulate lipid metabolism through what the group called a "chaperone effect". [ 1 ] The lipokine palmitoleic acid (C16:1n7-palmitoleate) travels to the muscles and liver, where it improves cell sensitivity to insulin and blocks fat accumulation in the liver. In addition, researchers observed that palmitoleate suppresses inflammation, which is considered by many to be a primary factor leading to metabolic disease. Palmitoleic acid also serves as a biomarker for metabolic status. More specifically, a low concentration in the free fatty acid fraction of the serum indicates a risk of metabolic disease and that de novo lipogenesis should be stimulated. Additionally, administering palmitoleic acid to a subject (via nutraceutical or other means) positively impacts lipid metabolism. [ 1 ] FAHFAs (fatty acid esters of hydroxy fatty acids) are lipokines formed in adipose tissue. FAHFAs improve glucose tolerance and also reduce adipose tissue inflammation. Palmitic acid esters of hydroxy-stearic acids (PAHSAs) are among the most bioactive members, able to activate G protein-coupled receptor 120. [ 2 ] The docosahexaenoic acid ester of hydroxy-linoleic acid (DHAHLA) exerts anti-inflammatory and pro-resolving properties. [ 3 ] Lipokines play roles in the regulation of the central nervous system and of adipose tissue. They bind receptors on adipose tissue to increase the thermogenic effect. Research has aimed to determine how lipokines can be used to increase energy expenditure and systemic metabolism in adipose tissue. [ 4 ] The lipokine 12,13-dihydroxy-9Z-octadecenoic acid (12,13-diHOME) is released by brown adipose tissue (BAT) after moderate exercise. When 12,13-diHOME was given to mice, it increased skeletal muscle uptake of fatty acids and increased their oxidation. [ 5 ]
https://en.wikipedia.org/wiki/Lipokine
Lipolysis /lɪˈpɒlɪsɪs/ is the metabolic pathway through which lipid triglycerides are hydrolyzed into glycerol and free fatty acids. It is used to mobilize stored energy during fasting or exercise, and usually occurs in adipocytes (fat cells). The most important regulatory hormone in lipolysis is insulin; lipolysis can only occur when insulin action falls to low levels, as occurs during fasting. Other hormones that affect lipolysis include leptin, [ 1 ] glucagon, [ 2 ] epinephrine, norepinephrine, growth hormone, atrial natriuretic peptide, brain natriuretic peptide, and cortisol. [ 3 ] In the body, stores of fat are referred to as adipose tissue. In these areas, intracellular triglycerides are stored in cytoplasmic lipid droplets. When lipase enzymes are phosphorylated, they can access lipid droplets and, through multiple steps of hydrolysis, break down triglycerides into fatty acids and glycerol. Each step of hydrolysis leads to the removal of one fatty acid. The first and rate-limiting step of lipolysis is carried out by adipose triglyceride lipase (ATGL). This enzyme catalyzes the hydrolysis of triacylglycerol to diacylglycerol. Subsequently, hormone-sensitive lipase (HSL) catalyzes the hydrolysis of diacylglycerol to monoacylglycerol, and monoacylglycerol lipase (MGL) catalyzes the hydrolysis of monoacylglycerol to glycerol. [ 4 ] Perilipin 1A is a key protein regulator of lipolysis in adipose tissue. This lipid droplet-associated protein, when deactivated, prevents the interaction of lipases with triglycerides in the lipid droplet and sequesters the ATGL co-activator, comparative gene identification 58 (CGI-58) (a.k.a. ABHD5). When perilipin 1A is phosphorylated by PKA, it releases CGI-58 and expedites the docking of phosphorylated lipases to the lipid droplet. [ 5 ] CGI-58 can be further phosphorylated by PKA to assist in its dispersal to the cytoplasm. In the cytoplasm, CGI-58 can co-activate ATGL. [ 6 ] ATGL activity is also impacted by the negative regulator of lipolysis, G0/G1 switch gene 2 (G0S2). When expressed, G0S2 acts as a competitive inhibitor of the binding of CGI-58. [ 7 ] Fat-specific protein 27 (FSP-27, a.k.a. CIDEC) is also a negative regulator of lipolysis; FSP-27 expression is negatively correlated with ATGL mRNA levels. [ 8 ] Lipolysis can be regulated through cAMP's binding and activation of protein kinase A (PKA). PKA can phosphorylate lipases, perilipin 1A, and CGI-58 to increase the rate of lipolysis. Catecholamines bind to 7TM receptors (G protein-coupled receptors) on the adipocyte cell membrane, which activate adenylate cyclase. This results in increased production of cAMP, which activates PKA and leads to an increased rate of lipolysis. Despite glucagon's lipolytic activity (it also stimulates PKA) in vitro, the role of glucagon in lipolysis in vivo is disputed. [ 9 ] Insulin counter-regulates this increase in lipolysis when it binds to insulin receptors on the adipocyte cell membrane. Insulin receptors activate insulin receptor substrates, which activate phosphoinositide 3-kinases (PI3K), which then phosphorylate protein kinase B (PKB, a.k.a. Akt). PKB subsequently phosphorylates phosphodiesterase 3B (PDE3B), which then converts the cAMP produced by adenylate cyclase into 5'AMP. The resulting insulin-induced reduction in cAMP levels decreases the lipolysis rate. [ 10 ] Insulin also acts in the brain at the mediobasal hypothalamus.
There, it suppresses lipolysis and decreases sympathetic nervous outflow to fat tissue. [ 11 ] The regulation of this process involves interactions between insulin receptors and gangliosides present in the neuronal cell membrane. [ 12 ] Triglycerides are transported through the blood to appropriate tissues (adipose, muscle, etc.) by lipoproteins such as very-low-density lipoproteins (VLDL). Triglycerides present on the VLDL undergo lipolysis by the cellular lipases of target tissues, which yields glycerol and free fatty acids. Free fatty acids released into the blood are then available for cellular uptake. [ 13 ] [ self-published source? ] Free fatty acids not immediately taken up by cells may bind to albumin for transport to surrounding tissues that require energy. Serum albumin is the major carrier of free fatty acids in the blood. [ 14 ] The glycerol also enters the bloodstream and is absorbed by the liver or kidney, where it is converted to glycerol 3-phosphate by the enzyme glycerol kinase. Hepatic glycerol 3-phosphate is converted mostly into dihydroxyacetone phosphate (DHAP) and then glyceraldehyde 3-phosphate (GA3P) to rejoin the glycolysis and gluconeogenesis pathways. [ 15 ] While lipolysis is triglyceride hydrolysis (the process by which triglycerides are broken down), esterification is the process by which triglycerides are formed. Esterification and lipolysis are, in essence, reversals of one another. [ 16 ] Physical lipolysis involves destruction of the fat cells containing the fat droplets and can be used as part of cosmetic body contouring procedures. Currently there are four main non-invasive body contouring techniques in aesthetic medicine for reducing localized subcutaneous adipose tissue, in addition to the standard minimally invasive liposuction: low-level laser therapy (LLLT), cryolipolysis, radio frequency (RF) and high-intensity focused ultrasound (HIFU). [ 17 ] [ 18 ] However, these techniques are less effective, have shorter-lasting benefits, and remove significantly smaller amounts of fat than traditional surgical liposuction or lipectomy. Future drug developments could potentially be combined with smaller procedures to augment the result. [ citation needed ]
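A compact way to see the three hydrolysis steps described above (ATGL, then HSL, then MGL, each removing one fatty acid) is as a simple stoichiometric walk from triacylglycerol to glycerol. The sketch below only tracks molecule counts; enzyme kinetics and the regulation by perilipin 1A, CGI-58, PKA, and insulin are deliberately omitted.

# Stoichiometric sketch of lipolysis: each step removes one fatty acid from the glyceride.
steps = [
    ("adipose triglyceride lipase (ATGL)", "triacylglycerol",  "diacylglycerol"),
    ("hormone-sensitive lipase (HSL)",     "diacylglycerol",   "monoacylglycerol"),
    ("monoacylglycerol lipase (MGL)",      "monoacylglycerol", "glycerol"),
]

def lipolyze(triacylglycerols):
    """Count products of complete hydrolysis of a given number of triacylglycerol molecules."""
    free_fatty_acids = 0
    species = "triacylglycerol"
    for enzyme, substrate, product in steps:
        assert species == substrate          # each lipase acts on the product of the previous step
        free_fatty_acids += triacylglycerols  # one fatty acid released per molecule per step
        species = product
    return {"glycerol": triacylglycerols, "free fatty acids": free_fatty_acids}

print(lipolyze(100))   # 100 triacylglycerols -> 100 glycerol + 300 free fatty acids

The three-to-one ratio of free fatty acids to glycerol follows directly from one fatty acid being released at each of the three steps.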
https://en.wikipedia.org/wiki/Lipolysis
Lipomannan is a mycobacterial immune agonist. [ 1 ] In addition, it is a major constituent of the mycobacterial cell wall. This glycoconjugate is a virulence factor that plays a key role in the human immune system via interaction with various immune cells. It is also considered to be a precursor of lipoarabinomannans, and it is a trigger for TLR2. It consists of an α-linked mannan of 50–70 residues, with some branch points, linked glycosidically to a diglyceride whose fatty acids are similar to those of the whole-cell lipid. In addition, succinic acid residues are present as O-acyl substituents on about one in four of the mannose residues, the terminal carboxyl group of the succinic acid providing the whole polymer with a considerable number of acidic functions. Lipomannan has functional components that resemble lipoteichoic acids: a lipophilic region and a hydrophilic portion with frequent acid groups. Lipomannan is a phosphorylated polysaccharide associated with the cell envelope and is considered to be the multimannosylated form of phosphatidylinositol mannoside (PIM), which is primarily located in the plasma membrane. Structurally, LM is composed of two segments: a phosphatidylinositol (PI) anchor to which an α-D-mannan domain is attached; both play key roles in inducing cytokine production by phagocytic cells. The mannose core consists of a linear α(1→6)-linked mannan backbone extending from the C-6 of the myo-inositol; the mannan chain is further substituted by α(1→2)-linked mannopyranose side branches.
https://en.wikipedia.org/wiki/Lipomannan