Lagrangian analysis is the use of Lagrangian coordinates to analyze various problems in continuum mechanics. Lagrangian analysis may be used to analyze currents and flows of various materials by analyzing data collected from gauges/sensors embedded in the material, which move freely with the motion of the material. [ 1 ] A common application is the study of ocean currents in oceanography, where the movable gauges in question are called Lagrangian drifters. Recently, with the development of high-speed cameras and particle-tracking algorithms, there have also been applications to measuring turbulence. [ 2 ]
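As a minimal illustration of such drifter-based analysis, the sketch below recovers Lagrangian velocities from drifter positions by finite differencing; the synthetic track and hourly sampling rate are assumptions standing in for real GPS fixes, not data from any particular deployment.

```python
import numpy as np

# Hypothetical drifter track: hourly position fixes (t in seconds, x/y in meters).
t = np.arange(0, 24 * 3600, 3600, dtype=float)
x = 1500.0 * np.sin(2 * np.pi * t / 86400.0)      # synthetic positions
y = 900.0 * (1 - np.cos(2 * np.pi * t / 86400.0))

# Centered finite differences give the Lagrangian velocity along the track.
u = np.gradient(x, t)   # m/s, eastward
v = np.gradient(y, t)   # m/s, northward
speed = np.hypot(u, v)
print(f"mean drift speed: {speed.mean():.3f} m/s")
```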
https://en.wikipedia.org/wiki/Lagrangian_analysis
In classical field theories, the Lagrangian specification of the flow field is a way of looking at fluid motion where the observer follows an individual fluid parcel as it moves through space and time. [ 1 ] [ 2 ] Plotting the position of an individual parcel through time gives the pathline of the parcel. This can be visualized as sitting in a boat and drifting down a river. The Eulerian specification of the flow field is a way of looking at fluid motion that focuses on specific locations in the space through which the fluid flows as time passes. [ 1 ] [ 2 ] This can be visualized by sitting on the bank of a river and watching the water pass the fixed location. The Lagrangian and Eulerian specifications of the flow field are sometimes loosely denoted as the Lagrangian and Eulerian frames of reference. However, in general both the Lagrangian and Eulerian specification of the flow field can be applied in any observer's frame of reference, and in any coordinate system used within the chosen frame of reference. The Lagrangian and Eulerian specifications are named after Joseph-Louis Lagrange and Leonhard Euler, respectively. These specifications are reflected in computational fluid dynamics, where "Eulerian" simulations employ a fixed mesh while "Lagrangian" ones (such as meshfree simulations) feature simulation nodes that may move following the velocity field. Leonhard Euler is credited with introducing both specifications in two publications written in 1755 [ 3 ] and 1759. [ 4 ] [ 5 ] Joseph-Louis Lagrange studied the equations of motion in connection with the principle of least action in 1760, later in a treatise of fluid mechanics in 1781, [ 6 ] and thirdly in his book Mécanique analytique. [ 5 ] In this book Lagrange starts with the Lagrangian specification but later converts it into the Eulerian specification. [ 5 ] In the Eulerian specification of a field, the field is represented as a function of position x and time t. For example, the flow velocity is represented by a function {\displaystyle \mathbf {u} \left(\mathbf {x} ,t\right).} On the other hand, in the Lagrangian specification, individual fluid parcels are followed through time. The fluid parcels are labelled by some (time-independent) vector field x0. (Often, x0 is chosen to be the position of the parcel's center of mass at some initial time t0. It is chosen in this particular manner to account for possible changes of the parcel's shape over time; the center of mass therefore gives a good parameterization of the parcel's flow velocity u.) [ 1 ] In the Lagrangian description, the flow is described by a function {\displaystyle \mathbf {X} \left(\mathbf {x} _{0},t\right),} giving the position of the parcel labeled x0 at time t. The two specifications are related as follows: [ 2 ] {\displaystyle \mathbf {u} \left(\mathbf {X} (\mathbf {x} _{0},t),t\right)={\frac {\partial \mathbf {X} }{\partial t}}\left(\mathbf {x} _{0},t\right),} because both sides describe the velocity of the parcel labeled x0 at time t. Within a chosen coordinate system, x0 and x are referred to as the Lagrangian coordinates and Eulerian coordinates of the flow, respectively. The Lagrangian and Eulerian specifications of the kinematics and dynamics of the flow field are related by the material derivative (also called the Lagrangian derivative, convective derivative, substantial derivative, or particle derivative). [ 1 ]
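A minimal sketch of this relation, assuming a simple illustrative steady velocity field: integrating dX/dt = u(X, t) from the label x0 recovers the Lagrangian description X(x0, t) from the Eulerian one.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative steady Eulerian velocity field u(x): a solid-body rotation.
def u(t, x):
    return np.array([-x[1], x[0]])

# Integrating dX/dt = u(X, t) from the label x0 yields the Lagrangian
# description X(x0, t): the pathline of the parcel initially at x0.
x0 = np.array([1.0, 0.0])
sol = solve_ivp(u, (0.0, np.pi), x0, rtol=1e-9)
print(sol.y[:, -1])  # ~(-1, 0): the parcel has rotated half a turn
```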
Suppose we have a flow field u, and we are also given a generic field with Eulerian specification F(x, t). Now one might ask about the total rate of change of F experienced by a specific flow parcel. This can be computed as {\displaystyle {\frac {\mathrm {D} \mathbf {F} }{\mathrm {D} t}}={\frac {\partial \mathbf {F} }{\partial t}}+\left(\mathbf {u} \cdot \nabla \right)\mathbf {F} ,} where ∇ denotes the nabla operator with respect to x, and the operator u⋅∇ is to be applied to each component of F. This tells us that the total rate of change of the function F as the fluid parcel moves through a flow field described by its Eulerian specification u is equal to the sum of the local rate of change and the convective rate of change of F. This is a consequence of the chain rule, since we are differentiating the function F(X(x0, t), t) with respect to t. Conservation laws for a unit mass have a Lagrangian form, which together with mass conservation produce Eulerian conservation laws; on the contrary, when fluid particles can exchange a quantity (like energy or momentum), only Eulerian conservation laws exist. [ 7 ] [1] Objectivity in classical continuum mechanics: Motions, Eulerian and Lagrangian functions; Deformation gradient; Lie derivatives; Velocity-addition formula, Coriolis; Objectivity.
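The material derivative lends itself to a direct grid-based evaluation. The following sketch, with illustrative fields and grid parameters, approximates DF/Dt as the sum of the local and convective rates of change.

```python
import numpy as np

# Minimal sketch: evaluate DF/Dt = dF/dt + (u . grad) F for a scalar field F
# on a 2D grid, using numpy finite differences (all fields are illustrative).
nx, ny = 64, 64
x = np.linspace(0, 2 * np.pi, nx)
y = np.linspace(0, 2 * np.pi, ny)
X, Y = np.meshgrid(x, y, indexing="ij")

t, dt = 0.0, 1e-3
F_now = np.sin(X) * np.cos(Y) * np.cos(t)
F_next = np.sin(X) * np.cos(Y) * np.cos(t + dt)
u = np.cos(Y)                                # Eulerian velocity components
v = np.zeros_like(u)

dFdt = (F_next - F_now) / dt                 # local rate of change
dFdx = np.gradient(F_now, x, axis=0)         # spatial gradients
dFdy = np.gradient(F_now, y, axis=1)
DF_Dt = dFdt + u * dFdx + v * dFdy           # material derivative
```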
https://en.wikipedia.org/wiki/Lagrangian_and_Eulerian_specification_of_the_flow_field
Lagrangian coherent structures (LCSs) are distinguished surfaces of trajectories in a dynamical system that exert a major influence on nearby trajectories over a time interval of interest. [ 1 ] [ 2 ] [ 3 ] [ 4 ] The type of this influence may vary, but it invariably creates a coherent trajectory pattern for which the underlying LCS serves as a theoretical centerpiece. In observations of tracer patterns in nature, one readily identifies coherent features, but it is often the underlying structure creating these features that is of interest. As illustrated on the right, individual tracer trajectories forming coherent patterns are generally sensitive with respect to changes in their initial conditions and the system parameters. In contrast, the LCSs creating these trajectory patterns turn out to be robust and provide a simplified skeleton of the overall dynamics of the system. [ 1 ] [ 4 ] [ 5 ] [ 6 ] The robustness of this skeleton makes LCSs ideal tools for model validation, model comparison and benchmarking. LCSs can also be used for now-casting and even short-term forecasting of pattern evolution in complex dynamical systems. Physical phenomena governed by LCSs include floating debris, oil spills, [ 7 ] surface drifters [ 8 ] [ 9 ] and chlorophyll patterns [ 10 ] in the ocean; clouds of volcanic ash [ 11 ] and spores in the atmosphere; [ 12 ] and coherent crowd patterns formed by humans [ 13 ] and animals. LCSs have been used by underwater gliders for efficient ocean navigation, [ 14 ] and are hypothesized to be used by albatrosses for foraging. [ 15 ] While LCSs generally exist in any dynamical system, their role in creating coherent patterns is perhaps most readily observable in fluid flows. On a phase space {\displaystyle {\mathcal {P}}} and over a time interval {\displaystyle {\mathcal {I}}=[t_{0},t_{1}]}, consider a non-autonomous dynamical system defined through the flow map {\displaystyle F_{t_{0}}^{t}\colon x_{0}\mapsto x(t,t_{0},x_{0})}, mapping initial conditions {\displaystyle x_{0}\in {\mathcal {P}}} into their position {\displaystyle x(t,t_{0},x_{0})\in {\mathcal {P}}} for any time {\displaystyle t\in {\mathcal {I}}}. If the flow map {\displaystyle F_{t_{0}}^{t}} is a diffeomorphism for any choice of {\displaystyle t\in {\mathcal {I}}}, then for any smooth set {\displaystyle {\mathcal {M}}(t_{0})} of initial conditions in {\displaystyle {\mathcal {P}}}, the set {\displaystyle {\mathcal {M}}=\{(x,t)\in {\mathcal {P}}\times {\mathcal {I}}\,\colon [F_{t_{0}}^{t}]^{-1}(x)\in {\mathcal {M}}(t_{0})\}} is an invariant manifold in the extended phase space {\displaystyle {\mathcal {P}}\times {\mathcal {I}}}. Borrowing terminology from fluid dynamics, we refer to the evolving time slice {\displaystyle {\mathcal {M}}(t)=F_{t_{0}}^{t}({\mathcal {M}}(t_{0}))} of the manifold {\displaystyle {\mathcal {M}}} as a material surface (see Fig. 1). Since any choice of the initial condition set {\displaystyle {\mathcal {M}}(t_{0})} yields an invariant manifold {\displaystyle {\mathcal {M}}\subset {\mathcal {P}}\times {\mathcal {I}}}, invariant manifolds and their associated material surfaces are abundant and generally undistinguished in the extended phase space. Only a few of them will act as cores of coherent trajectory patterns.
In order to create a coherent pattern, a material surface {\displaystyle {\mathcal {M}}(t)} should exert a sustained and consistent action on nearby trajectories throughout the time interval {\displaystyle {\mathcal {I}}}. Examples of such action are attraction, repulsion, or shear. In principle, any well-defined mathematical property qualifies that creates coherent patterns out of randomly selected nearby initial conditions. Most such properties can be expressed by strict inequalities. For instance, we call a material surface {\displaystyle {\mathcal {M}}(t)} attracting over the interval {\displaystyle {\mathcal {I}}} if all small enough initial perturbations to {\displaystyle {\mathcal {M}}(t_{0})} are carried by the flow into even smaller final perturbations to {\displaystyle {\mathcal {M}}(t_{1})}. In classical dynamical systems theory, invariant manifolds satisfying such an attraction property over infinite times are called attractors. They are not only special, but even locally unique in the phase space: no continuous family of attractors may exist. In contrast, in dynamical systems defined over a finite time interval {\displaystyle {\mathcal {I}}}, strict inequalities do not define exceptional (i.e., locally unique) material surfaces. This follows from the continuity of the flow map {\displaystyle F_{t_{0}}^{t}} over {\displaystyle {\mathcal {I}}}. For instance, if a material surface {\displaystyle {\mathcal {M}}(t)} attracts all nearby trajectories over the time interval {\displaystyle {\mathcal {I}}}, then so will any sufficiently close other material surface. Thus, attracting, repelling and shearing material surfaces are necessarily stacked on each other, i.e., occur in continuous families. This leads to the idea of seeking LCSs in finite-time dynamical systems as exceptional material surfaces that exhibit a coherence-inducing property more strongly than any of the neighboring material surfaces. Such LCSs, defined as extrema (or, more generally, stationary surfaces) for a finite-time coherence property, will indeed serve as observed centerpieces of trajectory patterns. Examples of attracting, repelling and shearing LCSs in a direct numerical simulation of 2D turbulence are shown in Fig. 2a. Classical invariant manifolds are invariant sets in the phase space {\displaystyle {\mathcal {P}}} of an autonomous dynamical system. In contrast, LCSs are only required to be invariant in the extended phase space. This means that even if the underlying dynamical system is autonomous, the LCSs of the system over the interval {\displaystyle {\mathcal {I}}} will generally be time-dependent, acting as the evolving skeletons of observed coherent trajectory patterns. Figure 2b shows the difference between an attracting LCS and a classic unstable manifold of a saddle point, for evolving times, in an autonomous dynamical system. [ 4 ] Assume that the phase space of the underlying dynamical system is the material configuration space of a continuum, such as a fluid or a deformable body. For instance, for a dynamical system generated by an unsteady velocity field {\displaystyle v=v(x,t),\qquad x\in U\subset \mathbb {R} ^{3},} the open set {\displaystyle U} of possible particle positions is a material configuration space. In this space, LCSs are material surfaces, formed by trajectories.
Whether or not a material trajectory is contained in an LCS is a property that is independent of the choice of coordinates, and hence cannot depend on the observer. As a consequence, LCSs are subject to the basic objectivity (material frame-indifference) requirement of continuum mechanics. [ 4 ] The objectivity of LCSs requires them to be invariant with respect to all possible observer changes, i.e., linear coordinate changes of the form {\displaystyle x=Q(t)y+b(t),} where {\displaystyle y\in \mathbb {R} ^{3}} is the vector of the transformed coordinates; {\displaystyle Q(t)} is an arbitrary {\displaystyle 3\times 3} proper orthogonal matrix representing time-dependent rotations; and {\displaystyle b(t)} is an arbitrary {\displaystyle 3}-dimensional vector representing time-dependent translations. As a consequence, any self-consistent LCS definition or criterion should be expressible in terms of quantities that are frame-invariant. For instance, the strain rate {\displaystyle S(x,t)} and the spin tensor {\displaystyle W(x,t)}, defined as {\displaystyle S(x,t)={\frac {1}{2}}\left(\nabla v(x,t)+(\nabla v(x,t))^{T}\right),\qquad W(x,t)={\frac {1}{2}}\left(\nabla v(x,t)-(\nabla v(x,t))^{T}\right),} transform under Euclidean changes of frame into the quantities {\displaystyle {\tilde {S}}(y,t)=Q(t)^{T}S(x,t)Q(t),\qquad {\tilde {W}}(y,t)=Q(t)^{T}W(x,t)Q(t)-Q(t)^{T}{\dot {Q}}(t).} A Euclidean frame change is, therefore, equivalent to a similarity transform for {\displaystyle S(x,t)}, and hence an LCS approach depending only on the eigenvalues and eigenvectors of {\displaystyle S(x,t)} [ 16 ] [ 17 ] is automatically frame-invariant. In contrast, an LCS approach depending on the eigenvalues of {\displaystyle W(x,t)} is generally not frame-invariant. A number of frame-dependent quantities, such as {\displaystyle \nabla v(x,t)}, {\displaystyle W(x,t)}, {\displaystyle \nabla F_{t_{0}}^{t}}, as well as the averages or eigenvalues of these quantities, are routinely used in heuristic LCS detection. While such quantities may effectively mark features of the instantaneous velocity field {\displaystyle v(x,t)}, the ability of these quantities to capture material mixing, transport, and coherence is limited and a priori unknown in any given frame. As an example, consider the linear unsteady fluid particle motion [ 4 ] {\displaystyle {\dot {x}}=v(x,t)={\begin{pmatrix}\sin {4t}&2+\cos {4t}\\-2+\cos {4t}&-\sin {4t}\end{pmatrix}}x,} which is an exact solution of the two-dimensional Navier–Stokes equations. The (frame-dependent) Okubo–Weiss criterion classifies the whole domain in this flow as elliptic (vortical) because {\displaystyle q={\frac {1}{2}}({\vert S\vert }^{2}-{\vert W\vert }^{2})<0} holds, with {\displaystyle \vert \,\cdot \,\vert } referring to the Euclidean matrix norm. As seen in Fig. 3, however, trajectories grow exponentially along a rotating line and shrink exponentially along another rotating line. [ 4 ]
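This example can be checked numerically. The sketch below evaluates the Okubo–Weiss parameter for the linear flow above and integrates two nearby trajectories; the integration horizon and perturbation size are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# The linear unsteady flow from the text. Its Okubo-Weiss parameter
# q = (|S|^2 - |W|^2)/2 is negative everywhere (flagging "elliptic" flow),
# yet trajectories separate exponentially, i.e. the flow is hyperbolic.
def A(t):
    return np.array([[np.sin(4 * t), 2 + np.cos(4 * t)],
                     [-2 + np.cos(4 * t), -np.sin(4 * t)]])

t = 0.7                      # any time gives the same q for this flow
S = 0.5 * (A(t) + A(t).T)    # rate-of-strain tensor
W = 0.5 * (A(t) - A(t).T)    # spin tensor
q = 0.5 * (np.sum(S**2) - np.sum(W**2))
print(f"Okubo-Weiss q = {q:.1f}")   # -3.0 < 0 for all t

# Two nearby trajectories nonetheless diverge roughly like e^t.
rhs = lambda t, x: A(t) @ x
xa = solve_ivp(rhs, (0, 10), [1.0, 0.0], rtol=1e-9).y[:, -1]
xb = solve_ivp(rhs, (0, 10), [1.0, 1e-6], rtol=1e-9).y[:, -1]
print(f"final separation: {np.linalg.norm(xa - xb):.2e}")  # >> 1e-6
```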
In material terms, therefore, the flow is hyperbolic (saddle-type) in any frame. Since Newton's equation for particle motion and the Navier–Stokes equations for fluid motion are well known to be frame-dependent, it might first seem counterintuitive to require frame-invariance for LCSs, which are composed of solutions of these frame-dependent equations. Recall, however, that the Newton and Navier–Stokes equations represent objective physical principles for material particle trajectories. As long as they are correctly transformed from one frame to the other, these equations generate physically the same material trajectories in the new frame. In fact, we decide how to transform the equations of motion from an x-frame to a y-frame through a coordinate change {\displaystyle x=Q(t)y+b(t)} precisely by upholding that trajectories are mapped into trajectories, i.e., by requiring {\displaystyle x(t)=Q(t)y(t)+b(t)} to hold for all times. Temporal differentiation of this identity and substitution into the original equation in the x-frame then yields the transformed equation in the y-frame. While this process adds new terms (inertial forces) to the equations of motion, these inertial terms arise precisely to ensure the invariance of material trajectories. Fully composed of material trajectories, LCSs remain invariant in the transformed equation of motion defined in the y-frame of reference. Consequently, any self-consistent LCS definition or detection method must also be frame-invariant. Motivated by the above discussion, the simplest way to define an attracting LCS is by requiring it to be a locally strongest attracting material surface in the extended phase space {\displaystyle {\mathcal {P}}\times {\mathcal {I}}} (see Fig. 4). Similarly, a repelling LCS can be defined as a locally strongest repelling material surface. Attracting and repelling LCSs together are usually referred to as hyperbolic LCSs, [ 2 ] [ 4 ] as they provide a finite-time generalization of the classic concept of normally hyperbolic invariant manifolds in dynamical systems. Heuristically, one may seek initial positions {\displaystyle {\mathcal {M}}(t_{0})} of repelling LCSs as sets of initial conditions at which infinitesimal perturbations to trajectories starting from {\displaystyle {\mathcal {M}}(t_{0})} grow locally at the highest rate relative to trajectories starting off of {\displaystyle {\mathcal {M}}(t_{0})}. [ 2 ] [ 18 ] The heuristic element here is that instead of constructing a highly repelling material surface, one simply seeks points of large particle separation. Such a separation may well be due to strong shear along the set of points so identified; this set is not at all guaranteed to exert any normal repulsion on nearby trajectories. The growth of an infinitesimal perturbation {\displaystyle {\xi }(t)} along a trajectory {\displaystyle x(t,t_{0},x_{0})} is governed by the flow map gradient {\displaystyle \nabla F_{t_{0}}^{t}}. Let {\displaystyle \epsilon {\xi }(t_{0})} be a small perturbation to the initial condition x0, with {\displaystyle 0<\epsilon \ll 1} and with {\displaystyle \xi (t_{0})} denoting an arbitrary unit vector in {\displaystyle \mathbb {R} ^{n}}.
This perturbation generally grows along the trajectory {\displaystyle x(t,t_{0},x_{0})} into the perturbation vector {\displaystyle {\xi }_{\epsilon }(t_{1};x_{0})=\nabla F_{t_{0}}^{t_{1}}(x_{0})\epsilon {\xi }(t_{0})}. Then the maximum relative stretching of infinitesimal perturbations at the point x0 can be computed as {\displaystyle {\begin{aligned}\delta _{t_{0}}^{t_{1}}(x_{0})&=\lim _{\epsilon \to 0}{\frac {1}{\epsilon }}\max _{\left|\xi (t_{0})\right|=1}\left|\xi _{\epsilon }(t_{1};x_{0})\right|\\&=\max _{\left|\xi (t_{0})\right|=1}{\sqrt {\left\langle \nabla F_{t_{0}}^{t_{1}}(x_{0})\xi (t_{0}),\nabla F_{t_{0}}^{t_{1}}(x_{0})\xi (t_{0})\right\rangle }}\\&=\max _{\left|\xi (t_{0})\right|=1}{\sqrt {\left\langle \xi (t_{0}),C_{t_{0}}^{t_{1}}(x_{0})\xi (t_{0})\right\rangle }}\\\end{aligned}}} where {\displaystyle C_{t_{0}}^{t_{1}}=\left[\nabla F_{t_{0}}^{t_{1}}\right]^{T}\nabla F_{t_{0}}^{t_{1}}} denotes the right Cauchy–Green strain tensor. One then concludes [ 2 ] that the maximum relative stretching experienced along a trajectory starting from x0 is just {\displaystyle \delta _{t_{0}}^{t_{1}}(x_{0})={\sqrt {\lambda _{n}(x_{0})}}}, where {\displaystyle \lambda _{n}(x_{0})} denotes the largest eigenvalue of {\displaystyle C_{t_{0}}^{t_{1}}(x_{0})}. As this relative stretching tends to grow rapidly, it is more convenient to work with its growth exponent {\displaystyle (\log {\delta _{t_{0}}^{t_{1}}})/(t_{1}-t_{0})}, which is then precisely the finite-time Lyapunov exponent (FTLE) {\displaystyle \mathrm {FTLE} _{t_{0}}^{t_{1}}(x_{0})={\frac {1}{2(t_{1}-t_{0})}}\log \lambda _{n}(x_{0}).} Therefore, one expects hyperbolic LCSs to appear as codimension-one local maximizing surfaces (or ridges) of the FTLE field. [ 2 ] [ 20 ] This expectation turns out to be justified in the majority of cases: time t0 positions of repelling LCSs are marked by ridges of {\displaystyle \mathrm {FTLE} _{t_{0}}^{t_{1}}(x_{0})}. By applying the same argument in backward time, we obtain that time t1 positions of attracting LCSs are marked by ridges of the backward FTLE field {\displaystyle \mathrm {FTLE} _{t_{1}}^{t_{0}}}. The classic way of computing Lyapunov exponents is solving a linear differential equation for the linearized flow map {\displaystyle \nabla F_{t_{0}}^{t}(x_{0})}. A more expedient approach is to compute the FTLE field from a simple finite-difference approximation to the deformation gradient. [ 2 ] For example, in a three-dimensional flow, we launch a trajectory {\displaystyle x(t;t_{0},x_{0})} from any element x0 of a grid of initial conditions.
Using the coordinate representation {\displaystyle x=(x^{1},x^{2},x^{3})} for the evolving trajectory {\displaystyle x(t;t_{0},x_{0})}, we approximate the gradient of the flow map as {\displaystyle \nabla F_{t_{0}}^{t}(x_{0})\approx {\begin{pmatrix}{\frac {x^{1}(t;t_{0},x_{0}+\delta _{1})-x^{1}(t;t_{0},x_{0}-\delta _{1})}{\left|2\delta _{1}\right|}}&{\frac {x^{1}(t;t_{0},x_{0}+\delta _{2})-x^{1}(t;t_{0},x_{0}-\delta _{2})}{\left|2\delta _{2}\right|}}&{\frac {x^{1}(t;t_{0},x_{0}+\delta _{3})-x^{1}(t;t_{0},x_{0}-\delta _{3})}{\left|2\delta _{3}\right|}}\\{\frac {x^{2}(t;t_{0},x_{0}+\delta _{1})-x^{2}(t;t_{0},x_{0}-\delta _{1})}{\left|2\delta _{1}\right|}}&{\frac {x^{2}(t;t_{0},x_{0}+\delta _{2})-x^{2}(t;t_{0},x_{0}-\delta _{2})}{\left|2\delta _{2}\right|}}&{\frac {x^{2}(t;t_{0},x_{0}+\delta _{3})-x^{2}(t;t_{0},x_{0}-\delta _{3})}{\left|2\delta _{3}\right|}}\\{\frac {x^{3}(t;t_{0},x_{0}+\delta _{1})-x^{3}(t;t_{0},x_{0}-\delta _{1})}{\left|2\delta _{1}\right|}}&{\frac {x^{3}(t;t_{0},x_{0}+\delta _{2})-x^{3}(t;t_{0},x_{0}-\delta _{2})}{\left|2\delta _{2}\right|}}&{\frac {x^{3}(t;t_{0},x_{0}+\delta _{3})-x^{3}(t;t_{0},x_{0}-\delta _{3})}{\left|2\delta _{3}\right|}}\end{pmatrix}},} with a small vector {\displaystyle \delta _{i}} pointing in the {\displaystyle x^{i}} coordinate direction. For two-dimensional flows, only the first {\displaystyle 2\times 2} minor matrix of the above matrix is relevant. FTLE ridges have proven to be a simple and efficient tool for visualizing hyperbolic LCSs in a number of physical problems, yielding intriguing images of initial positions of hyperbolic LCSs in different applications (see, e.g., Figs. 5a-b). However, FTLE ridges obtained over sliding time windows {\displaystyle [t_{0}+T,t_{1}+T]} do not form material surfaces. Thus, ridges of {\displaystyle \mathrm {FTLE} _{t_{0}+T}^{t_{1}+T}(x_{0})} under varying T cannot be used to define Lagrangian objects, such as hyperbolic LCSs. Indeed, a locally strongest repelling material surface over {\displaystyle [t_{0},t_{1}]} will generally not play the same role over {\displaystyle [t_{0}+T,t_{1}+T]}, and hence its evolving position at time t0+T will not be a ridge for {\displaystyle \mathrm {FTLE} _{t_{0}+T}^{t_{1}+T}}. Nonetheless, evolving second-derivative FTLE ridges [ 23 ] computed over sliding intervals of the form {\displaystyle [t_{0}+T,t_{1}+T]} have been identified by some authors broadly with LCSs. [ 23 ] In support of this identification, it is also often argued that the material flux over such sliding-window FTLE ridges should necessarily be small. [ 23 ] [ 24 ] [ 25 ] [ 26 ]
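A minimal sketch of this finite-difference recipe, using the two-dimensional "double gyre" (a standard LCS test flow that is not discussed in the text; its parameters here are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Double-gyre test flow on [0,2]x[0,1]; A, eps, om are illustrative values.
A, eps, om = 0.1, 0.25, 2 * np.pi / 10

def vel(t, p):
    x, y = p
    a = eps * np.sin(om * t)
    f = a * x**2 + (1 - 2 * a) * x
    dfdx = 2 * a * x + (1 - 2 * a)
    return [-np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y),
            np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx]

def flow_map(x0, t0=0.0, t1=15.0):
    return solve_ivp(vel, (t0, t1), x0, rtol=1e-8).y[:, -1]

def ftle(x0, d=1e-4, T=15.0):
    # Central differences over four neighboring launches give grad F; the
    # largest eigenvalue of the Cauchy-Green tensor C then yields the FTLE.
    dF = np.empty((2, 2))
    for j, e in enumerate(np.eye(2)):
        xp = flow_map(x0 + d * e)
        xm = flow_map(x0 - d * e)
        dF[:, j] = (xp - xm) / (2 * d)
    C = dF.T @ dF
    lam_max = np.linalg.eigvalsh(C)[-1]
    return np.log(lam_max) / (2 * T)

print(ftle(np.array([1.0, 0.5])))  # FTLE at one grid point
```

Evaluating `ftle` over a grid of initial conditions and plotting the result exposes the ridges discussed above.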
The "FTLE ridge=LCS" identification, [ 23 ] [ 24 ] however, suffers from a number of conceptual and mathematical problems. The local variational theory of hyperbolic LCSs builds on their original definition as the locally strongest repelling or attracting material surfaces in the flow over the time interval {\displaystyle [t_{0},t_{1}]}. [ 2 ] At an initial point x0, let n0 denote a unit normal to an initial material surface {\displaystyle {\mathcal {M}}(t_{0})} (cf. Fig. 6). By the invariance of material lines, the tangent space {\displaystyle T_{x_{0}}{\mathcal {M}}(t_{0})} is mapped into the tangent space {\displaystyle T_{x_{1}}{\mathcal {M}}(t_{1})} by the linearized flow map {\displaystyle \nabla F_{t_{0}}^{t_{1}}(x_{0})}. At the same time, the image of the normal n0 under {\displaystyle \nabla F_{t_{0}}^{t_{1}}(x_{0})} generally does not remain normal to {\displaystyle {\mathcal {M}}(t_{1})}. Therefore, in addition to a normal component of length {\displaystyle \rho _{t_{0}}^{t_{1}}(x_{0},n_{0})}, the advected normal also develops a tangential component of length {\displaystyle \sigma _{t_{0}}^{t_{1}}(x_{0},n_{0})} (cf. Fig. 7). If {\displaystyle \rho _{t_{0}}^{t_{1}}(x_{0},n_{0})>1}, then the evolving material surface {\displaystyle {\mathcal {M}}(t)} strictly repels nearby trajectories by the end of the time interval {\displaystyle [t_{0},t_{1}]}. Similarly, {\displaystyle \rho _{t_{0}}^{t_{1}}(x_{0},n_{0})<1} signals that {\displaystyle {\mathcal {M}}(t)} strictly attracts nearby trajectories along its normal directions. A repelling (attracting) LCS over the interval {\displaystyle [t_{0},t_{1}]} can be defined as a material surface {\displaystyle {\mathcal {M}}(t)} whose net repulsion {\displaystyle \rho _{t_{0}}^{t_{1}}(x_{0},n_{0})} is pointwise maximal (minimal) with respect to perturbations of the initial normal vector field n0. As earlier, we refer to repelling and attracting LCSs collectively as hyperbolic LCSs. [ 2 ] Solving these local extremum principles for hyperbolic LCSs in two and three dimensions yields unit normal vector fields to which hyperbolic LCSs should everywhere be tangent. [ 30 ] [ 31 ] [ 32 ] The existence of such normal surfaces also requires a Frobenius-type integrability condition in the three-dimensional case. All these results can be summarized as follows: [ 4 ] Repelling LCSs are obtained as the most repelling shrink lines, starting from local maxima of {\displaystyle \lambda _{2}(x_{0})}. Attracting LCSs are obtained as the most attracting stretch lines, starting from local minima of {\displaystyle \lambda _{1}(x_{0})}. These starting points serve as initial positions of exceptional saddle-type trajectories in the flow. An example of the local variational computation of a repelling LCS is shown in Fig. 8. The computational algorithm is available in LCS Tool. In 3D flows, instead of solving the Frobenius PDE (see table above) for hyperbolic LCSs, an easier approach is to construct intersections of hyperbolic LCSs with select 2D planes, and fit a surface numerically to a large number of such intersection curves.
Let us denote the unit normal of a 2D plane {\displaystyle \Pi } by {\displaystyle n_{\Pi }}. The intersection curve of a 2D repelling LCS surface with the plane {\displaystyle \Pi } is normal to both {\displaystyle n_{\Pi }} and to the unit normal {\displaystyle \xi _{3}(x_{0})} of the LCS. As a consequence, an intersection curve {\displaystyle x_{0}(s)} satisfies the ODE {\displaystyle x_{0}^{\prime }=\xi _{3}(x_{0})\times n_{\Pi },} whose trajectories we refer to as reduced shrink lines. [ 32 ] (Strictly speaking, this equation is not an ordinary differential equation, given that its right-hand side is not a vector field, but a direction field, which is generally not globally orientable.) Intersections of hyperbolic LCSs with {\displaystyle \Pi } are the fastest contracting reduced shrink lines. Determining such shrink lines in a smooth family of nearby {\displaystyle \Pi } planes, then fitting a surface to the curve family so obtained, yields a numerical approximation of a 2D repelling LCS. [ 32 ] A general material surface experiences shear and strain in its deformation, both of which depend continuously on initial conditions by the continuity of the map {\displaystyle F_{t_{0}}^{t}}. The averaged strain and shear within a strip of {\displaystyle {\mathcal {O}}(\epsilon )}-close material lines, therefore, typically show {\displaystyle {\mathcal {O}}(\epsilon )} variation within such a strip. The two-dimensional geodesic theory of LCSs seeks exceptionally coherent locations where this general trend fails, resulting in an order of magnitude smaller variability in shear or strain than what is normally expected across an {\displaystyle {\mathcal {O}}(\epsilon )} strip. Specifically, the geodesic theory searches for LCSs as special material lines around which {\displaystyle {\mathcal {O}}(\epsilon )} material strips show no {\displaystyle {\mathcal {O}}(\epsilon )} variability either in the material-line averaged shear (shearless LCSs) or in the material-line averaged strain (strainless or elliptic LCSs). Such LCSs turn out to be null-geodesics of appropriate metric tensors defined by the deformation field, hence the name of this theory. Shearless LCSs are found to be null-geodesics of a Lorentzian metric tensor {\displaystyle D_{t_{0}}^{t_{1}}} defined as [ 33 ] {\displaystyle D_{t_{0}}^{t_{1}}(x_{0})={\frac {1}{2}}\left[C_{t_{0}}^{t_{1}}(x_{0})\Omega -\Omega C_{t_{0}}^{t_{1}}(x_{0})\right],\qquad \Omega ={\begin{pmatrix}0&-1\\1&0\\\end{pmatrix}}.} Such null-geodesics can be proven to be tensorlines of the Cauchy–Green strain tensor, i.e., they are tangent to the direction field formed by the strain eigenvector fields {\displaystyle \xi _{i}(x_{0})}. [ 33 ] Specifically, repelling LCSs are trajectories of {\displaystyle x_{0}^{\prime }=\xi _{1}(x_{0})} starting from local maxima of the {\displaystyle \lambda _{2}(x_{0})} eigenvalue field. Similarly, attracting LCSs are trajectories of {\displaystyle x_{0}^{\prime }=\xi _{2}(x_{0})} starting from local minima of the {\displaystyle \lambda _{1}(x_{0})} eigenvalue field. This agrees with the conclusion of the local variational theory of LCSs.
The geodesic approach, however, also sheds more light on the robustness of hyperbolic LCSs: hyperbolic LCSs only prevail as stationary curves of the averaged shear functional under variations that leave their endpoints fixed. This is to be contrasted with parabolic LCSs (see below), which are also shearless LCSs but prevail as stationary curves of the shear functional even under arbitrary variations. As a consequence, individual trajectories are objective, and statements about the coherent structures they form should also be objective. A sample application is shown in Fig. 9, where the sudden appearance of a hyperbolic core (the strongest attracting part of a stretchline) within the oil spill caused the notable Tiger-Tail instability in the shape of the oil spill. Elliptic LCSs are closed and nested material surfaces that act as building blocks of the Lagrangian equivalents of vortices, i.e., rotation-dominated regions of trajectories that generally traverse the phase space without substantial stretching or folding. They mimic the behavior of Kolmogorov–Arnold–Moser (KAM) tori that form elliptic regions in Hamiltonian systems. Their coherence can be approached either through their homogeneous material rotation or through their homogeneous stretching properties. As the simplest approach to rotational coherence, one may define an elliptic LCS as a tubular material surface along which small material volumes complete the same net rotation over the time interval {\displaystyle [t_{0},t_{1}]} of interest. [ 34 ] A challenge is that in each material volume element, all individual material fibers (tangent vectors to trajectories) perform different rotations. To obtain a well-defined bulk rotation for each material element, one may employ the unique left and right polar decompositions of the flow gradient in the form {\displaystyle \nabla F_{t_{0}}^{t_{1}}=R_{t_{0}}^{t_{1}}U_{t_{0}}^{t_{1}}=V_{t_{0}}^{t_{1}}R_{t_{0}}^{t_{1}},} where the proper orthogonal tensor {\displaystyle R_{t_{0}}^{t_{1}}} is called the rotation tensor and the symmetric, positive definite tensors {\displaystyle U_{t_{0}}^{t_{1}},V_{t_{0}}^{t_{1}}} are called the right stretch tensor and left stretch tensor, respectively. Since the Cauchy–Green strain tensor can be written as {\displaystyle C_{t_{0}}^{t_{1}}=[\nabla F_{t_{0}}^{t_{1}}]^{T}\nabla F_{t_{0}}^{t_{1}}=U_{t_{0}}^{t_{1}}U_{t_{0}}^{t_{1}},} the local material straining described by the eigenvalues and eigenvectors of {\displaystyle C_{t_{0}}^{t_{1}}} is fully captured by the singular values and singular vectors of the stretch tensors. The remaining factor in the deformation gradient is represented by {\displaystyle R_{t_{0}}^{t_{1}}}, interpreted as the bulk solid-body rotation component of volume elements. In planar motions, this rotation is defined relative to the normal of the plane. In three dimensions, the rotation is defined relative to the axis defined by the eigenvector of {\displaystyle R_{t_{0}}^{t_{1}}} corresponding to its unit eigenvalue. In higher-dimensional flows, the rotation tensor cannot be viewed as a rotation about a single axis.
In two and three dimensions, therefore, there exists a polar rotation angle (PRA) {\displaystyle \theta _{t_{0}}^{t_{1}}(x_{0})} that characterises the material rotation generated by {\displaystyle R_{t_{0}}^{t_{1}}} for a volume element centered at the initial condition x0. This PRA is well-defined up to multiples of {\displaystyle 2\pi }. For two-dimensional flows, the PRA can be computed from the invariants of {\displaystyle C_{t_{0}}^{t_{1}}} using the formulas [ 34 ] {\displaystyle {\begin{aligned}\cos \theta _{t_{0}}^{t_{1}}&={\frac {\langle \xi _{i},\nabla F_{t_{0}}^{t_{1}}\xi _{i}\rangle }{\sqrt {\lambda _{i}}}},\quad i=1{\text{ or }}2,\\\sin \theta _{t_{0}}^{t_{1}}&=\left(-1\right)^{j}{\frac {\langle \xi _{i},\nabla F_{t_{0}}^{t_{1}}\xi _{j}\rangle }{\sqrt {\lambda _{j}}}},\qquad (i,j)=(1,2){\text{ or }}(2,1),\\\end{aligned}}} which yield a four-quadrant version of the PRA via the formula {\displaystyle \theta _{t_{0}}^{t}=\left[1-{\rm {sign\,}}\left(\sin \theta _{t_{0}}^{t}\right)\right]\pi +{\rm {sign\,}}\left(\sin \theta _{t_{0}}^{t}\right)\cos ^{-1}\left(\cos \theta _{t_{0}}^{t}\right).} For three-dimensional flows, the PRA can again be computed from the invariants of {\displaystyle C_{t_{0}}^{t_{1}}} from the formulas [ 34 ] {\displaystyle {\begin{aligned}\cos \theta _{t_{0}}^{t}&={\frac {1}{2}}\left(\sum _{i=1}^{3}{\frac {\left\langle \xi _{i},\nabla F_{t_{0}}^{t_{1}}\xi _{i}\right\rangle }{\sqrt {\lambda _{i}}}}-1\right),\\\sin \theta _{t_{0}}^{t}&={\frac {\left\langle \xi _{i},\nabla F_{t_{0}}^{t_{1}}\xi _{j}\right\rangle -\left\langle \xi _{j},\nabla F_{t_{0}}^{t_{1}}\xi _{i}\right\rangle }{2\epsilon _{ijk}e_{k}}},\qquad i\neq j,\end{aligned}}} where {\displaystyle \epsilon _{ijk}} is the Levi-Civita symbol and {\displaystyle \mathbf {e} =\left\{e_{k}\right\}} is the eigenvector corresponding to the unit eigenvalue of the matrix {\displaystyle \left[K_{t_{0}}^{t}\right]_{jk}=\left\langle \xi _{j},\nabla F_{t_{0}}^{t_{1}}\xi _{k}\right\rangle /{\sqrt {\lambda _{k}}}}. The time t0 positions of elliptic LCSs are visualized as tubular level sets of the PRA distribution {\displaystyle \theta _{t_{0}}^{t}}. In two dimensions, therefore, (polar) elliptic LCSs are simply closed level curves of the PRA, which turn out to be objective. [ 34 ] In three dimensions, (polar) elliptic LCSs are toroidal or cylindrical level surfaces of the PRA; these level sets, however, are not objective in three dimensions and hence will generally change in rotating frames. Coherent Lagrangian vortex boundaries can be visualized as outermost members of nested families of elliptic LCSs. Two- and three-dimensional examples of elliptic LCSs revealed by tubular level surfaces of the PRA are shown in Fig. 10a-b.
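For a single deformation gradient, the rotation tensor and the PRA can be extracted with an off-the-shelf polar decomposition; the matrix below is an illustrative example, not taken from any particular flow.

```python
import numpy as np
from scipy.linalg import polar

# Extract the polar rotation angle (PRA) from a sample 2D deformation
# gradient via the polar decomposition grad(F) = R U.
F = np.array([[1.3, 0.8],
              [-0.2, 0.9]])           # illustrative deformation gradient
R, U = polar(F, side="right")         # F = R @ U, with R proper orthogonal
theta = np.arctan2(R[1, 0], R[0, 0])  # rotation angle of R, in (-pi, pi]
print(f"PRA = {np.degrees(theta):.2f} degrees")
```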
An additional shortcoming of the polar rotation tensor is its dynamical inconsistency: polar rotations computed over adjacent sub-intervals of a total deformation do not sum up to the rotation computed for the full time interval of the same deformation. [ 35 ] Therefore, while {\displaystyle R_{t_{0}}^{t_{1}}} is the closest rotation tensor to {\displaystyle \nabla F_{t_{0}}^{t_{1}}} in the {\displaystyle L^{2}} norm over a fixed time interval {\displaystyle [t_{0},t_{1}]}, these piecewise best fits do not form a family of rigid-body rotations as t0 and t1 are varied. For this reason, rotations predicted by the polar rotation tensor over varying time intervals deviate from the experimentally observed mean material rotation of fluid elements. [ 35 ] [ 36 ] An alternative to the classic polar decomposition provides a resolution to both the non-objectivity and the dynamic inconsistency issues. Specifically, the Dynamic Polar Decomposition (DPD) [ 35 ] of the deformation gradient is also of the form {\displaystyle \nabla F_{t_{0}}^{t}=O_{t_{0}}^{t}M_{t_{0}}^{t}=N_{t_{0}}^{t}O_{t_{0}}^{t},} where the proper orthogonal tensor {\displaystyle O_{t_{0}}^{t}} is the dynamic rotation tensor and the non-singular tensors {\displaystyle M_{t_{0}}^{t},N_{t_{0}}^{t}} are the right dynamic stretch tensor and left dynamic stretch tensor, respectively. Just like the classic polar decomposition, the DPD is valid in any finite dimension. Unlike the classic polar decomposition, however, the dynamic rotation and stretch tensors are obtained from solving linear differential equations, rather than from matrix manipulations. In particular, {\displaystyle O_{t_{0}}^{t}=\nabla _{a_{0}}a(t)} is the deformation gradient of the purely rotational flow {\displaystyle {\dot {a}}=W\left(x(t;x_{0}),t\right)a,} and {\displaystyle M_{t_{0}}^{t}=\nabla _{b_{0}}b(t)} is the deformation gradient of the purely straining flow {\displaystyle {\dot {b}}=O_{t}^{t_{0}}S\left(x(t;x_{0}),t\right)O_{t_{0}}^{t}b.} The dynamic rotation tensor {\displaystyle O_{t_{0}}^{t}} can further be factorized into two deformation gradients: one for a spatially uniform (rigid-body) rotation, and one that deviates from this uniform rotation: {\displaystyle O_{t_{0}}^{t}=\Phi _{t_{0}}^{t}\Theta _{t_{0}}^{t}.} As a spatially independent rigid-body rotation, the proper orthogonal relative rotation tensor {\displaystyle \Phi _{t_{0}}^{t}=\partial _{\alpha _{0}}\alpha (t)} is dynamically consistent, serving as the deformation gradient of the relative rotation flow {\displaystyle {\dot {\alpha }}=\left[W\left(x(t;x_{0}),t\right)-{\bar {W}}\left(t\right)\right]\alpha .} In contrast, the proper orthogonal mean rotation tensor {\displaystyle \Theta _{t_{0}}^{t}=D_{\beta _{0}}\beta (t)} is the deformation gradient of the mean-rotation flow {\displaystyle {\dot {\beta }}=\Phi _{t}^{t_{0}}{\bar {W}}\left(t\right)\Phi _{t_{0}}^{t}\beta .}
The dynamic consistency of {\displaystyle \Phi _{t_{0}}^{t}} implies that the total angle swept by {\displaystyle \Phi _{t_{0}}^{t}} around its own axis of rotation is dynamically consistent. This intrinsic rotation angle {\displaystyle \psi _{t_{0}}^{t}(x_{0})} is also objective, and turns out to equal one half of the Lagrangian-averaged vorticity deviation (LAVD). [ 36 ] The LAVD is defined as the trajectory-averaged magnitude of the deviation of the vorticity from its spatial mean. With the vorticity {\displaystyle \omega (x,t)=\nabla \times v(x,t)} and its spatial mean {\displaystyle {\bar {\omega }}(t)={\frac {\int _{U(t)}\omega (x,t)\,dV}{\mathrm {vol} \,(U(t))}},} the LAVD over a time interval {\displaystyle [t_{0},t_{1}]} therefore takes the form [ 36 ] {\displaystyle \mathrm {LAVD} _{t_{0}}^{t_{1}}(x_{0}):=\int _{t_{0}}^{t_{1}}\left|\omega (x(s;x_{0}),s)-{\bar {\omega }}(s)\right|\,ds,} with {\displaystyle U(t)} denoting the (possibly time-varying) domain of definition of the velocity field {\displaystyle v(x,t)}. This result applies in both two and three dimensions, and enables the computation of a well-defined, objective and dynamically consistent material rotation angle along any trajectory. Outermost convex tubular level curves of the LAVD define initial positions of rotationally coherent material vortex boundaries in two-dimensional unsteady flows (see Fig. 11a). By construction, these boundaries may exhibit transverse filamentation, but any developing filament keeps rotating with the boundary, without global transverse departure from the material vortex. (Exceptions are inviscid flows where such a global departure of LAVD level surfaces from a vortex is possible, as fluid elements preserve their material rotation rate for all times. [ 36 ]) Remarkably, centers of rotationally coherent vortices (defined by local maxima of the LAVD field) can be proven to be the observed centers of attraction or repulsion for finite-size (inertial) particle motion in geophysical flows (see Fig. 11b). [ 36 ] In three-dimensional flows, tubular level surfaces of the LAVD define initial positions of two-dimensional eddy boundary surfaces (see Fig. 11c) that remain rotationally coherent over a time interval {\displaystyle [t_{0},t_{1}]} (see Fig. 11d). The local variational theory of elliptic LCSs targets material surfaces that locally maximize material shear over the finite time interval {\displaystyle [t_{0},t_{1}]} of interest. This means that at each initial point {\displaystyle x_{0}\in {\mathcal {M}}(t_{0})} of an elliptic LCS {\displaystyle {\mathcal {M}}(t)}, the tangent space {\displaystyle T_{x_{0}}{\mathcal {M}}(t_{0})} is the plane along which the local Lagrangian shear {\displaystyle \sigma _{t_{0}}^{t_{1}}(x_{0})} is maximal (cf. Fig. 7).
Introducing the two-dimensional shear vector field {\displaystyle \eta ^{\pm }(x_{0}):={\sqrt {\frac {\lambda _{2}(x_{0})-1}{\lambda _{2}(x_{0})-\lambda _{1}(x_{0})}}}\xi _{1}(x_{0})\pm {\sqrt {\frac {1-\lambda _{1}(x_{0})}{\lambda _{2}(x_{0})-\lambda _{1}(x_{0})}}}\xi _{2}(x_{0}),} and the three-dimensional shear normal vector field {\displaystyle n_{\pm }(x_{0})={\sqrt {\frac {\sqrt {\lambda _{1}(x_{0})}}{{\sqrt {\lambda _{1}(x_{0})}}+{\sqrt {\lambda _{3}(x_{0})}}}}}\xi _{1}(x_{0})\pm {\sqrt {\frac {\sqrt {\lambda _{3}(x_{0})}}{{\sqrt {\lambda _{1}(x_{0})}}+{\sqrt {\lambda _{3}(x_{0})}}}}}\xi _{3}(x_{0}),} the criteria for two- and three-dimensional elliptic LCSs can be expressed in terms of these fields. [ 32 ] [ 37 ] For 3D flows, as in the case of hyperbolic LCSs, solving the Frobenius PDE can be avoided. Instead, one can construct intersections of a tubular elliptic LCS with select 2D planes, and fit a surface numerically to a large number of these intersection curves. As for hyperbolic LCSs above, let us denote the unit normal of a 2D plane {\displaystyle \Pi } by {\displaystyle n_{\Pi }}. Again, the intersection curves of elliptic LCSs with the plane {\displaystyle \Pi } are normal to both {\displaystyle n_{\Pi }} and to the unit normal {\displaystyle n_{\pm }(x_{0})} of the LCS. As a consequence, an intersection curve {\displaystyle x_{0}(s)} satisfies the reduced shear ODE {\displaystyle x_{0}^{\prime }=n_{\pm }(x_{0})\times n_{\Pi },} whose trajectories we refer to as reduced shear lines. [ 32 ] (Strictly speaking, the reduced shear ODE is not an ordinary differential equation, given that its right-hand side is not a vector field, but a direction field, which is generally not globally orientable.) Intersections of tubular elliptic LCSs with {\displaystyle \Pi } are limit cycles of the reduced shear ODE. Determining such limit cycles in a smooth family of nearby {\displaystyle \Pi } planes, then fitting a surface to the limit cycle family, yields a numerical approximation for a 2D shear surface. A three-dimensional example of this local variational computation of an elliptic LCS is shown in Fig. 11. [ 32 ] As noted above for hyperbolic LCSs, a global variational approach has been developed in two dimensions to capture elliptic LCSs as closed stationary curves of the material-line-averaged Lagrangian strain functional. [ 4 ] [ 39 ] Such curves turn out to be closed null-geodesics of the generalized Green–Lagrange strain tensor family {\displaystyle {\frac {1}{2}}(C_{t_{0}}^{t_{1}}-\lambda I)}, where {\displaystyle \lambda >0} is a positive parameter (a Lagrange multiplier).
The closed null-geodesics can be shown to coincide with limit cycles of the family of direction fields {\displaystyle \eta _{\lambda }^{\pm }(x_{0}):={\sqrt {\frac {\lambda _{2}(x_{0})-\lambda ^{2}}{\lambda _{2}(x_{0})-\lambda _{1}(x_{0})}}}\xi _{1}(x_{0})\pm {\sqrt {\frac {\lambda ^{2}-\lambda _{1}(x_{0})}{\lambda _{2}(x_{0})-\lambda _{1}(x_{0})}}}\xi _{2}(x_{0}).} Note that for {\displaystyle \lambda =1}, the direction field {\displaystyle \eta _{\lambda }^{\pm }(x_{0})} coincides with the direction field {\displaystyle \eta ^{\pm }(x_{0})} for shearlines obtained above from the local variational theory of LCSs. Trajectories of {\displaystyle \eta _{\lambda }^{\pm }} are referred to as {\displaystyle \lambda }-lines. Remarkably, they are initial positions of material lines that are infinitesimally uniformly stretching under the flow map {\displaystyle F_{t_{0}}^{t_{1}}}. Specifically, any subset of a {\displaystyle \lambda }-line is stretched by a factor of {\displaystyle \lambda } between the times t0 and t1. As an example, Fig. 13 shows elliptic LCSs identified as closed {\displaystyle \lambda }-lines within the Great Red Spot of Jupiter. [ 38 ] Parabolic LCSs are shearless material surfaces that delineate cores of jet-type sets of trajectories. Such LCSs are characterized both by low stretching (because they are inside a non-stretching structure) and by low shearing (because material shearing is minimal in jet cores). Since both shearing and stretching are as low as possible along a parabolic LCS, one may seek initial positions of such material surfaces as trenches of the FTLE field {\displaystyle FTLE_{t_{0}}^{t_{1}}(x_{0})}. [ 40 ] [ 41 ] A geophysical example of a parabolic LCS (generalized jet core) revealed as a trench of the FTLE field is shown in Fig. 14a. In two dimensions, parabolic LCSs are also solutions of the global shearless variational principle described above for hyperbolic LCSs. [ 33 ] As such, parabolic LCSs are composed of shrink lines and stretch lines that represent geodesics of the Lorentzian metric tensor {\displaystyle D_{t_{0}}^{t_{1}}}. In contrast to hyperbolic LCSs, however, parabolic LCSs satisfy more robust boundary conditions: they remain stationary curves of the material-line-averaged shear functional even under variations to their endpoints. This explains the high degree of robustness and observability that jet cores exhibit in mixing. This is to be contrasted with the highly sensitive and fading footprint of hyperbolic LCSs away from strongly hyperbolic regions in diffusive tracer patterns. Under variable endpoint boundary conditions, initial positions of parabolic LCSs turn out to be alternating chains of shrink lines and stretch lines that connect singularities of these line fields. [ 4 ] [ 33 ] These singularities occur at points where {\displaystyle \lambda _{1}(x_{0})=\lambda _{2}(x_{0})}, and hence no infinitesimal deformation takes place between the two time instances t0 and t1. Fig. 14b shows an example of parabolic LCSs in Jupiter's atmosphere, located using this variational theory. [ 38 ]
The chevron-type shapes forming out of circular material blobs positioned along the jet core are characteristic of tracer deformation near parabolic LCSs.
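As a closing sketch for the LAVD diagnostic defined above: given vorticity samples along a set of trajectories (synthetic placeholders here, standing in for values interpolated from a velocity dataset), the LAVD reduces to a per-trajectory time integral.

```python
import numpy as np

# LAVD integral: given vorticity samples omega[k, i] at times t[k] along
# N trajectories, accumulate |omega - spatial mean| in time per trajectory.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 201)            # sample times
omega = rng.normal(size=(t.size, 500))     # placeholder vorticity samples

omega_bar = omega.mean(axis=1)             # spatial mean at each time
dev = np.abs(omega - omega_bar[:, None])   # |omega - omega_bar|
lavd = np.trapz(dev, t, axis=0)            # time integral per trajectory
print(lavd.shape)                          # one LAVD value per trajectory
```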
https://en.wikipedia.org/wiki/Lagrangian_coherent_structure
In experimental fluid mechanics, Lagrangian particle tracking refers to the process of determining trajectories of small neutrally buoyant particles (flow tracers) that are freely suspended within a turbulent flow field. These are usually obtained by 3-D Particle Tracking Velocimetry. A collection of such particle trajectories can be used for analyzing the Lagrangian dynamics of the fluid motion, for performing Lagrangian statistics of various flow quantities, etc. [ 1 ] [ 2 ] In computational fluid dynamics, Lagrangian particle tracking (or LPT for short) is a numerical technique for the simulated tracking of particle paths in a Lagrangian manner within an Eulerian carrier phase. It is also commonly referred to as Discrete Particle Simulation (DPS). Some simulation cases for which this method is applicable are sprays, small bubbles and dust particles; the method is especially well suited to dilute multiphase flows with large Stokes numbers. [ 3 ]
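A minimal LPT sketch under these assumptions: a single particle with Stokes drag response time tau_p is advanced through a prescribed Eulerian velocity field; both the toy field and tau_p are illustrative, not from any specific solver.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One particle with Stokes drag, dv/dt = (u(x,t) - v)/tau_p, advected
# through a prescribed (toy) Eulerian carrier flow.
tau_p = 0.05                                   # particle response time

def u_fluid(x, t):
    return np.array([np.sin(x[1]), np.cos(x[0])])  # toy carrier flow

def rhs(t, s):
    x, v = s[:2], s[2:]
    return np.concatenate([v, (u_fluid(x, t) - v) / tau_p])

s0 = np.array([0.0, 0.0, 0.0, 0.0])            # initial position, velocity
sol = solve_ivp(rhs, (0.0, 5.0), s0, max_step=0.01)
print(sol.y[:2, -1])                           # final particle position
```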
https://en.wikipedia.org/wiki/Lagrangian_particle_tracking
In mathematics, a Lagrangian system is a pair (Y, L), consisting of a smooth fiber bundle Y → X and a Lagrangian density L, which yields the Euler–Lagrange differential operator acting on sections of Y → X. In classical mechanics, many dynamical systems are Lagrangian systems. The configuration space of such a Lagrangian system is a fiber bundle {\displaystyle Q\rightarrow \mathbb {R} } over the time axis {\displaystyle \mathbb {R} }. In particular, {\displaystyle Q=\mathbb {R} \times M} if a reference frame is fixed. In classical field theory, all field systems are Lagrangian ones. A Lagrangian density L (or, simply, a Lagrangian) of order r is defined as an n-form, n = dim X, on the r-order jet manifold J^rY of Y. A Lagrangian L can be introduced as an element of the variational bicomplex of the differential graded algebra O*∞(Y) of exterior forms on jet manifolds of Y → X. The coboundary operator of this bicomplex contains the variational operator δ which, acting on L, defines the associated Euler–Lagrange operator δL. Given bundle coordinates x^λ, y^i on a fiber bundle Y and the adapted coordinates x^λ, y^i, y^i_Λ (Λ = (λ1, ..., λk), |Λ| = k ≤ r) on jet manifolds J^rY, a Lagrangian L and its Euler–Lagrange operator read {\displaystyle L={\mathcal {L}}(x^{\lambda },y^{i},y_{\Lambda }^{i})\,\omega ,\qquad \omega =dx^{1}\wedge \cdots \wedge dx^{n},} {\displaystyle \delta L=\delta _{i}{\mathcal {L}}\,dy^{i}\wedge \omega ,\qquad \delta _{i}{\mathcal {L}}=\partial _{i}{\mathcal {L}}+\sum _{|\Lambda |}(-1)^{|\Lambda |}d_{\Lambda }\partial _{i}^{\Lambda }{\mathcal {L}},} where {\displaystyle d_{\Lambda }=d_{\lambda _{1}}\cdots d_{\lambda _{k}},\qquad d_{\lambda }=\partial _{\lambda }+y_{\lambda }^{i}\partial _{i}+\cdots } denote the total derivatives. For instance, a first-order Lagrangian and its second-order Euler–Lagrange operator take the form {\displaystyle L={\mathcal {L}}(x^{\lambda },y^{i},y_{\lambda }^{i})\,\omega ,\qquad \delta _{i}L=\partial _{i}{\mathcal {L}}-d_{\lambda }\partial _{i}^{\lambda }{\mathcal {L}}.} The kernel of an Euler–Lagrange operator provides the Euler–Lagrange equations δL = 0. Cohomology of the variational bicomplex leads to the so-called variational formula {\displaystyle dL=\delta L-d_{H}\theta _{L},} where {\displaystyle d_{H}} is the total differential and θL is a Lepage equivalent of L. Noether's first theorem and Noether's second theorem are corollaries of this variational formula. Extended to graded manifolds, the variational bicomplex provides a description of graded Lagrangian systems of even and odd variables. [ 1 ] In a different way, Lagrangians, Euler–Lagrange operators and Euler–Lagrange equations are introduced in the framework of the calculus of variations. In classical mechanics, equations of motion are first- and second-order differential equations on a manifold M or various fiber bundles Q over {\displaystyle \mathbb {R} }. A solution of the equations of motion is called a motion. [ 2 ] [ 3 ]
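For a concrete first-order instance, the sketch below applies SymPy's euler_equations helper to the harmonic-oscillator Lagrangian; the example Lagrangian is an assumption chosen purely for illustration.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

# First-order Lagrangian L = (1/2) y'^2 - (1/2) y^2 on the time axis.
# Its second-order Euler-Lagrange operator yields y'' + y = 0.
t = sp.symbols("t")
y = sp.Function("y")
L = sp.Rational(1, 2) * y(t).diff(t) ** 2 - sp.Rational(1, 2) * y(t) ** 2
print(euler_equations(L, y(t), t))  # [Eq(-y(t) - Derivative(y(t), (t, 2)), 0)]
```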
https://en.wikipedia.org/wiki/Lagrangian_system
In mathematics, the Laguerre form is generally given as a third-degree tensor-valued form.
https://en.wikipedia.org/wiki/Laguerre_form
The Laguerre formula (named after Edmond Laguerre ) provides the acute angle ϕ {\displaystyle \phi } between two proper real lines, [ 1 ] [ 2 ] as follows: ϕ = | 1 2 i log ⁡ Cr ⁡ ( I 1 , I 2 , P 1 , P 2 ) | , {\displaystyle \phi =\left|{\frac {1}{2i}}\log \operatorname {Cr} (I_{1},I_{2},P_{1},P_{2})\right|,} where: P 1 {\displaystyle P_{1}} and P 2 {\displaystyle P_{2}} are the points at infinity of the two lines, I 1 {\displaystyle I_{1}} and I 2 {\displaystyle I_{2}} are the circular points at infinity, and Cr {\displaystyle \operatorname {Cr} } is the cross ratio of the four points. The expression between vertical bars is a real number. The Laguerre formula can be useful in computer vision , since the absolute conic has an image on the retinal plane which is invariant under camera displacements, and the cross ratio of four collinear points is the same for their images on the retinal plane. It may be assumed that the lines go through the origin. Any isometry leaves the absolute conic invariant; this allows us to take as the first line the x axis and the second line lying in the plane z =0. The homogeneous coordinates of the above four points are ( 1 : i : 0 : 0 ) , ( 1 : − i : 0 : 0 ) , ( 1 : 0 : 0 : 0 ) , ( cos ⁡ ϕ : ± sin ⁡ ϕ : 0 : 0 ) {\displaystyle (1:i:0:0),\ (1:-i:0:0),\ (1:0:0:0),\ (\cos \phi :\pm \sin \phi :0:0)} respectively. Their nonhomogeneous coordinates on the infinity line of the plane z =0 are i {\displaystyle i} , − i {\displaystyle -i} , 0, ± sin ⁡ ϕ / cos ⁡ ϕ {\displaystyle \pm \sin \phi /\cos \phi } . (Exchanging I 1 {\displaystyle I_{1}} and I 2 {\displaystyle I_{2}} changes the cross ratio into its inverse, so the formula for ϕ {\displaystyle \phi } gives the same result.) Now from the formula of the cross ratio we have Cr ⁡ ( I 1 , I 2 , P 1 , P 2 ) = − − i cos ⁡ ϕ ± sin ⁡ ϕ i cos ⁡ ϕ ± sin ⁡ ϕ = e ± 2 i ϕ . {\displaystyle \operatorname {Cr} (I_{1},I_{2},P_{1},P_{2})=-{\frac {-i\cos \phi \pm \sin \phi }{i\cos \phi \pm \sin \phi }}=e^{\pm 2i\phi }.}
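The computation above is easy to check numerically. The sketch below (our illustration) uses one common cross-ratio convention; other orderings only invert or conjugate the result, turning e^{2iφ} into e^{−2iφ}, so the modulus in Laguerre's formula is unaffected. The angle φ = 0.7 is an arbitrary test value:

```python
import cmath, math

def cross_ratio(z1, z2, z3, z4):
    # One common convention; other orderings only invert/conjugate the result.
    return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))

phi = 0.7                          # test angle between the two lines
I1, I2 = 1j, -1j                   # circular points (nonhomogeneous coords)
P1, P2 = 0.0, math.tan(phi)        # the two lines' points at infinity

cr = cross_ratio(I1, I2, P1, P2)   # equals e^{-2i*phi} in this convention
print(abs(cmath.log(cr) / 2j))     # 0.7 (up to rounding): Laguerre's formula
```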
https://en.wikipedia.org/wiki/Laguerre_formula
In mathematics , a Laguerre plane is one of the three types of Benz plane , which are the Möbius plane , Laguerre plane and Minkowski plane . Laguerre planes are named after the French mathematician Edmond Nicolas Laguerre . The classical Laguerre plane is an incidence structure that describes the incidence behaviour of the curves y = a x 2 + b x + c {\displaystyle y=ax^{2}+bx+c} , i.e. parabolas and lines, in the real affine plane . In order to simplify the structure, to any curve y = a x 2 + b x + c {\displaystyle y=ax^{2}+bx+c} the point ( ∞ , a ) {\displaystyle (\infty ,a)} is added. A further advantage of this completion is that the plane geometry of the completed parabolas/lines is isomorphic to the geometry of the plane sections of a cylinder (see below). Originally the classical Laguerre plane was defined as the geometry of the oriented lines and circles in the real Euclidean plane (see [ 1 ] ). Here we prefer the parabola model of the classical Laguerre plane. We define: P := R 2 ∪ ( { ∞ } × R ) , ∞ ∉ R , {\displaystyle {\mathcal {P}}:=\mathbb {R} ^{2}\cup (\{\infty \}\times \mathbb {R} ),\ \infty \notin \mathbb {R} ,} the set of points , Z := { { ( x , y ) ∈ R 2 ∣ y = a x 2 + b x + c } ∪ { ( ∞ , a ) } ∣ a , b , c ∈ R } {\displaystyle {\mathcal {Z}}:=\{\{(x,y)\in \mathbb {R} ^{2}\mid y=ax^{2}+bx+c\}\cup \{(\infty ,a)\}\mid a,b,c\in \mathbb {R} \}} the set of cycles . The incidence structure ( P , Z , ∈ ) {\displaystyle ({\mathcal {P}},{\mathcal {Z}},\in )} is called classical Laguerre plane . The point set is R 2 {\displaystyle \mathbb {R} ^{2}} plus a copy of R {\displaystyle \mathbb {R} } (see figure). Any parabola/line y = a x 2 + b x + c {\displaystyle y=ax^{2}+bx+c} gets the additional point ( ∞ , a ) {\displaystyle (\infty ,a)} . Points with the same x-coordinate cannot be connected by curves y = a x 2 + b x + c {\displaystyle y=ax^{2}+bx+c} . Hence we define: Two points A , B {\displaystyle A,B} are parallel ( A ∥ B {\displaystyle A\parallel B} ) if A = B {\displaystyle A=B} or there is no cycle containing A {\displaystyle A} and B {\displaystyle B} . For the description of the classical real Laguerre plane above two points ( a 1 , a 2 ) , ( b 1 , b 2 ) {\displaystyle (a_{1},a_{2}),(b_{1},b_{2})} are parallel if and only if a 1 = b 1 {\displaystyle a_{1}=b_{1}} . ∥ {\displaystyle \parallel } is an equivalence relation , similar to the parallelism of lines. The incidence structure ( P , Z , ∈ ) {\displaystyle ({\mathcal {P}},{\mathcal {Z}},\in )} has the following properties: Lemma: Similar to the sphere model of the classical Möbius plane there is a cylinder model for the classical Laguerre plane: ( P , Z , ∈ ) {\displaystyle ({\mathcal {P}},{\mathcal {Z}},\in )} is isomorphic to the geometry of plane sections of a circular cylinder in R 3 {\displaystyle \mathbb {R} ^{3}} . The following mapping Φ {\displaystyle \Phi } is a projection with center ( 0 , 1 , 0 ) {\displaystyle (0,1,0)} that maps the x-z-plane onto the cylinder with the equation u 2 + v 2 − v = 0 {\displaystyle u^{2}+v^{2}-v=0} , axis { ( 0 , 1 2 , z ) ∣ z ∈ R } {\displaystyle \{(0,{\tfrac {1}{2}},z)\mid z\in \mathbb {R} \}} and radius r = 1 2 : {\displaystyle r={\tfrac {1}{2}}\ :} Φ : ( x , 0 , z ) ↦ ( x 1 + x 2 , x 2 1 + x 2 , z 1 + x 2 ) . {\displaystyle \Phi :\ (x,0,z)\mapsto \left({\frac {x}{1+x^{2}}},{\frac {x^{2}}{1+x^{2}}},{\frac {z}{1+x^{2}}}\right).} The Lemma above gives rise to the following definition: Let L := ( P , Z , ∈ ) {\displaystyle {\mathcal {L}}:=({\mathcal {P}},{\mathcal {Z}},\in )} be an incidence structure with point set P {\displaystyle {\mathcal {P}}} and set of cycles Z {\displaystyle {\mathcal {Z}}} .
Two points A , B {\displaystyle A,B} are parallel ( A ∥ B {\displaystyle A\parallel B} ) if A = B {\displaystyle A=B} or there is no cycle containing A {\displaystyle A} and B {\displaystyle B} . L {\displaystyle {\mathcal {L}}} is called Laguerre plane if the following axioms hold: Four points A , B , C , D {\displaystyle A,B,C,D} are concyclic if there is a cycle z {\displaystyle z} with A , B , C , D ∈ z {\displaystyle A,B,C,D\in z} . From the definition of relation ∥ {\displaystyle \parallel } and axiom B2 we get Lemma: Relation ∥ {\displaystyle \parallel } is an equivalence relation . Following the cylinder model of the classical Laguerre plane we introduce the notation: a) For P ∈ P {\displaystyle P\in {\mathcal {P}}} we set P ¯ := { Q ∈ P | P ∥ Q } {\displaystyle {\overline {P}}:=\{Q\in {\mathcal {P}}\ |\ P\parallel Q\}} . b) An equivalence class P ¯ {\displaystyle {\overline {P}}} is called generator . For the classical Laguerre plane a generator is a line parallel to the y-axis (plane model) or a line on the cylinder (space model). The connection to linear geometry is given by the following definition: For a Laguerre plane L := ( P , Z , ∈ ) {\displaystyle {\mathcal {L}}:=({\mathcal {P}},{\mathcal {Z}},\in )} we define the local structure and call it the residue at point P. In the plane model of the classical Laguerre plane A ∞ {\displaystyle {\mathcal {A}}_{\infty }} is the real affine plane R 2 {\displaystyle \mathbb {R} ^{2}} . In general we get Theorem: Any residue of a Laguerre plane is an affine plane . And the equivalent definition of a Laguerre plane: Theorem: An incidence structure together with an equivalence relation ∥ {\displaystyle \parallel } on P {\displaystyle {\mathcal {P}}} is a Laguerre plane if and only if for any point P {\displaystyle P} the residue A P {\displaystyle {\mathcal {A}}_{P}} is an affine plane. The following incidence structure is a "minimal model" of a Laguerre plane: Hence | P | = 6 {\displaystyle |{\mathcal {P}}|=6} and | Z | = 8 . {\displaystyle |{\mathcal {Z}}|=8\ .} For finite Laguerre planes, i.e. | P | < ∞ {\displaystyle |{\mathcal {P}}|<\infty } , we get: Lemma: For any cycles z 1 , z 2 {\displaystyle z_{1},z_{2}} and any generator P ¯ {\displaystyle {\overline {P}}} of a finite Laguerre plane L := ( P , Z , ∈ ) {\displaystyle {\mathcal {L}}:=({\mathcal {P}},{\mathcal {Z}},\in )} we have: For a finite Laguerre plane L := ( P , Z , ∈ ) {\displaystyle {\mathcal {L}}:=({\mathcal {P}},{\mathcal {Z}},\in )} and a cycle z ∈ Z {\displaystyle z\in {\mathcal {Z}}} the integer n := | z | − 1 {\displaystyle n:=|z|-1} is called order of L {\displaystyle {\mathcal {L}}} . From combinatorics we get Lemma: Let L := ( P , Z , ∈ ) {\displaystyle {\mathcal {L}}:=({\mathcal {P}},{\mathcal {Z}},\in )} be a Laguerre plane of order n {\displaystyle n} . Then Unlike Möbius planes the formal generalization of the classical model of a Laguerre plane, i.e. replacing R {\displaystyle \mathbb {R} } by an arbitrary field K {\displaystyle K} , always leads to an example of a Laguerre plane. Theorem: For a field K {\displaystyle K} and Similarly to a Möbius plane the Laguerre version of the Theorem of Miquel holds: Theorem of Miquel: For the Laguerre plane L ( K ) {\displaystyle {\mathcal {L}}(K)} the following is true: (For a better overview in the figure there are circles drawn instead of parabolas) The importance of the Theorem of Miquel is shown by the following theorem, which is due to v. d.
Waerden, Smid and Chen: Theorem: Only a Laguerre plane L ( K ) {\displaystyle {\mathcal {L}}(K)} satisfies the theorem of Miquel. Because of the last theorem L ( K ) {\displaystyle {\mathcal {L}}(K)} is called a "Miquelian Laguerre plane". The minimal model of a Laguerre plane is miquelian. It is isomorphic to the Laguerre plane L ( K ) {\displaystyle {\mathcal {L}}(K)} with K = G F ( 2 ) {\displaystyle K=GF(2)} (field { 0 , 1 } {\displaystyle \{0,1\}} ). A suitable stereographic projection shows that L ( K ) {\displaystyle {\mathcal {L}}(K)} is isomorphic to the geometry of the plane sections of a quadric cylinder over field K {\displaystyle K} . There are many Laguerre planes that are not miquelian (see weblink below). The class that is most similar to miquelian Laguerre planes is the class of ovoidal Laguerre planes . An ovoidal Laguerre plane is the geometry of the plane sections of a cylinder that is constructed by using an oval instead of a non-degenerate conic. An oval is a quadratic set and bears the same geometric properties as a non-degenerate conic in a projective plane: 1) a line intersects an oval in zero, one, or two points and 2) at any point there is a unique tangent. A simple oval in the real plane can be constructed by gluing together two suitable halves of different ellipses, such that the result is not a conic. Even in the finite case there exist ovals (see quadratic set ).
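As a concrete illustration of the parabola model (our example, not from the source): three pairwise non-parallel points, i.e. points with pairwise distinct x-coordinates, determine exactly one cycle y = ax² + bx + c, which can be found by solving a 3×3 Vandermonde system (a = 0 yields the completed lines):

```python
import numpy as np

def cycle_through(p1, p2, p3):
    # Solve the 3x3 Vandermonde system for y = a x^2 + b x + c; requires
    # pairwise distinct x-coordinates, i.e. pairwise non-parallel points.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    V = np.array([[x1**2, x1, 1.0],
                  [x2**2, x2, 1.0],
                  [x3**2, x3, 1.0]])
    return np.linalg.solve(V, np.array([y1, y2, y3]))

a, b, c = cycle_through((0.0, 1.0), (1.0, 0.0), (2.0, 3.0))
print(a, b, c)   # 2.0 -3.0 1.0, i.e. the cycle y = 2x^2 - 3x + 1
```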
https://en.wikipedia.org/wiki/Laguerre_plane
In mathematics , the Laguerre polynomials , named after Edmond Laguerre (1834–1886), are nontrivial solutions of Laguerre's differential equation: x y ″ + ( 1 − x ) y ′ + n y = 0 , y = y ( x ) {\displaystyle xy''+(1-x)y'+ny=0,\ y=y(x)} which is a second-order linear differential equation . This equation has nonsingular solutions only if n is a non-negative integer. Sometimes the name Laguerre polynomials is used for solutions of x y ″ + ( α + 1 − x ) y ′ + n y = 0 , {\displaystyle xy''+(\alpha +1-x)y'+ny=0~,} where n is still a non-negative integer. Then they are also named generalized Laguerre polynomials , as will be done here (alternatively associated Laguerre polynomials or, rarely, Sonine polynomials , after their inventor [ 1 ] Nikolay Yakovlevich Sonin ). More generally, a Laguerre function is a solution when n is not necessarily a non-negative integer. The Laguerre polynomials are also used for Gauss–Laguerre quadrature to numerically compute integrals of the form ∫ 0 ∞ f ( x ) e − x d x . {\displaystyle \int _{0}^{\infty }f(x)e^{-x}\,dx.} These polynomials, usually denoted L 0 , L 1 , ..., are a polynomial sequence which may be defined by the Rodrigues formula , L n ( x ) = e x n ! d n d x n ( e − x x n ) = 1 n ! ( d d x − 1 ) n x n , {\displaystyle L_{n}(x)={\frac {e^{x}}{n!}}{\frac {d^{n}}{dx^{n}}}\left(e^{-x}x^{n}\right)={\frac {1}{n!}}\left({\frac {d}{dx}}-1\right)^{n}x^{n},} reducing to the closed form of a following section. They are orthogonal polynomials with respect to an inner product ⟨ f , g ⟩ = ∫ 0 ∞ f ( x ) g ( x ) e − x d x . {\displaystyle \langle f,g\rangle =\int _{0}^{\infty }f(x)g(x)e^{-x}\,dx.} The rook polynomials in combinatorics are more or less the same as Laguerre polynomials, up to elementary changes of variables. Further see the Tricomi–Carlitz polynomials . The Laguerre polynomials arise in quantum mechanics, in the radial part of the solution of the Schrödinger equation for a one-electron atom. They also describe the static Wigner functions of oscillator systems in quantum mechanics in phase space . They further enter in the quantum mechanics of the Morse potential and of the 3D isotropic harmonic oscillator . Physicists sometimes use a definition for the Laguerre polynomials that is larger by a factor of n ! than the definition used here. (Likewise, some physicists may use somewhat different definitions of the so-called associated Laguerre polynomials.) One can also define the Laguerre polynomials recursively, defining the first two polynomials as L 0 ( x ) = 1 {\displaystyle L_{0}(x)=1} L 1 ( x ) = 1 − x {\displaystyle L_{1}(x)=1-x} and then using the following recurrence relation for any k ≥ 1 : L k + 1 ( x ) = ( 2 k + 1 − x ) L k ( x ) − k L k − 1 ( x ) k + 1 . {\displaystyle L_{k+1}(x)={\frac {(2k+1-x)L_{k}(x)-kL_{k-1}(x)}{k+1}}.} Furthermore, x L n ′ ( x ) = n L n ( x ) − n L n − 1 ( x ) . {\displaystyle xL'_{n}(x)=nL_{n}(x)-nL_{n-1}(x).} In the solution of some boundary value problems, the following characteristic values can be useful: L k ( 0 ) = 1 , L k ′ ( 0 ) = − k . {\displaystyle L_{k}(0)=1,L_{k}'(0)=-k.} The closed form is L n ( x ) = ∑ k = 0 n ( n k ) ( − 1 ) k k ! x k . {\displaystyle L_{n}(x)=\sum _{k=0}^{n}{\binom {n}{k}}{\frac {(-1)^{k}}{k!}}x^{k}.} The generating function for them likewise follows, ∑ n = 0 ∞ t n L n ( x ) = 1 1 − t e − t x / ( 1 − t ) . {\displaystyle \sum _{n=0}^{\infty }t^{n}L_{n}(x)={\frac {1}{1-t}}e^{-tx/(1-t)}.} The operator form is L n ( x ) = 1 n !
e x d n d x n ( x n e − x ) {\displaystyle L_{n}(x)={\frac {1}{n!}}e^{x}{\frac {d^{n}}{dx^{n}}}(x^{n}e^{-x})} Polynomials of negative index can be expressed using the ones with positive index: L − n ( x ) = e x L n − 1 ( − x ) . {\displaystyle L_{-n}(x)=e^{x}L_{n-1}(-x).} For arbitrary real α the polynomial solutions of the differential equation [ 2 ] x y ″ + ( α + 1 − x ) y ′ + n y = 0 {\displaystyle x\,y''+\left(\alpha +1-x\right)y'+n\,y=0} are called generalized Laguerre polynomials , or associated Laguerre polynomials . One can also define the generalized Laguerre polynomials recursively, defining the first two polynomials as L 0 ( α ) ( x ) = 1 {\displaystyle L_{0}^{(\alpha )}(x)=1} L 1 ( α ) ( x ) = 1 + α − x {\displaystyle L_{1}^{(\alpha )}(x)=1+\alpha -x} and then using the following recurrence relation for any k ≥ 1 : L k + 1 ( α ) ( x ) = ( 2 k + 1 + α − x ) L k ( α ) ( x ) − ( k + α ) L k − 1 ( α ) ( x ) k + 1 . {\displaystyle L_{k+1}^{(\alpha )}(x)={\frac {(2k+1+\alpha -x)L_{k}^{(\alpha )}(x)-(k+\alpha )L_{k-1}^{(\alpha )}(x)}{k+1}}.} The simple Laguerre polynomials are the special case α = 0 of the generalized Laguerre polynomials: L n ( 0 ) ( x ) = L n ( x ) . {\displaystyle L_{n}^{(0)}(x)=L_{n}(x).} The Rodrigues formula for them is L n ( α ) ( x ) = x − α e x n ! d n d x n ( e − x x n + α ) = x − α n ! ( d d x − 1 ) n x n + α . {\displaystyle L_{n}^{(\alpha )}(x)={x^{-\alpha }e^{x} \over n!}{d^{n} \over dx^{n}}\left(e^{-x}x^{n+\alpha }\right)={\frac {x^{-\alpha }}{n!}}\left({\frac {d}{dx}}-1\right)^{n}x^{n+\alpha }.} The generating function for them is ∑ n = 0 ∞ t n L n ( α ) ( x ) = 1 ( 1 − t ) α + 1 e − t x / ( 1 − t ) . {\displaystyle \sum _{n=0}^{\infty }t^{n}L_{n}^{(\alpha )}(x)={\frac {1}{(1-t)^{\alpha +1}}}e^{-tx/(1-t)}.} Given the generating function specified above, the polynomials may be expressed in terms of a contour integral L n ( α ) ( x ) = 1 2 π i ∮ C e − x t / ( 1 − t ) ( 1 − t ) α + 1 t n + 1 d t , {\displaystyle L_{n}^{(\alpha )}(x)={\frac {1}{2\pi i}}\oint _{C}{\frac {e^{-xt/(1-t)}}{(1-t)^{\alpha +1}\,t^{n+1}}}\;dt,} where the contour circles the origin once in a counterclockwise direction without enclosing the essential singularity at 1. The addition formula for Laguerre polynomials: [ 8 ] L n ( α 1 + ⋯ + α r + r − 1 ) ( x 1 + ⋯ + x r ) = ∑ m 1 + ⋯ + m r = n L m 1 ( α 1 ) ( x 1 ) ⋯ L m r ( α r ) ( x r ) . {\displaystyle L_{n}^{(\alpha _{1}+\dots +\alpha _{r}+r-1)}\left(x_{1}+\dots +x_{r}\right)=\sum _{m_{1}+\dots +m_{r}=n}L_{m_{1}}^{(\alpha _{1})}\left(x_{1}\right)\cdots L_{m_{r}}^{(\alpha _{r})}\left(x_{r}\right).} Laguerre's polynomials satisfy the recurrence relations L n ( α ) ( x ) = ∑ i = 0 n L n − i ( α + i ) ( y ) ( y − x ) i i ! , {\displaystyle L_{n}^{(\alpha )}(x)=\sum _{i=0}^{n}L_{n-i}^{(\alpha +i)}(y){\frac {(y-x)^{i}}{i!}},} in particular L n ( α + 1 ) ( x ) = ∑ i = 0 n L i ( α ) ( x ) {\displaystyle L_{n}^{(\alpha +1)}(x)=\sum _{i=0}^{n}L_{i}^{(\alpha )}(x)} and L n ( α ) ( x ) = ∑ i = 0 n ( α − β + n − i − 1 n − i ) L i ( β ) ( x ) , {\displaystyle L_{n}^{(\alpha )}(x)=\sum _{i=0}^{n}{\alpha -\beta +n-i-1 \choose n-i}L_{i}^{(\beta )}(x),} or L n ( α ) ( x ) = ∑ i = 0 n ( α − β + n n − i ) L i ( β − i ) ( x ) ; {\displaystyle L_{n}^{(\alpha )}(x)=\sum _{i=0}^{n}{\alpha -\beta +n \choose n-i}L_{i}^{(\beta -i)}(x);} moreover L n ( α ) ( x ) − ∑ j = 0 Δ − 1 ( n + α n − j ) ( − 1 ) j x j j ! = ( − 1 ) Δ x Δ ( Δ − 1 ) ! ∑ i = 0 n − Δ ( n + α n − Δ − i ) ( n − i ) ( n i ) L i ( α + Δ ) ( x ) = ( − 1 ) Δ x Δ ( Δ − 1 ) !
∑ i = 0 n − Δ ( n + α − i − 1 n − Δ − i ) ( n − i ) ( n i ) L i ( n + α + Δ − i ) ( x ) {\displaystyle {\begin{aligned}L_{n}^{(\alpha )}(x)-\sum _{j=0}^{\Delta -1}{n+\alpha \choose n-j}(-1)^{j}{\frac {x^{j}}{j!}}&=(-1)^{\Delta }{\frac {x^{\Delta }}{(\Delta -1)!}}\sum _{i=0}^{n-\Delta }{\frac {n+\alpha \choose n-\Delta -i}{(n-i){n \choose i}}}L_{i}^{(\alpha +\Delta )}(x)\\[6pt]&=(-1)^{\Delta }{\frac {x^{\Delta }}{(\Delta -1)!}}\sum _{i=0}^{n-\Delta }{\frac {n+\alpha -i-1 \choose n-\Delta -i}{(n-i){n \choose i}}}L_{i}^{(n+\alpha +\Delta -i)}(x)\end{aligned}}} They can be used to derive the four 3-point-rules L n ( α ) ( x ) = L n ( α + 1 ) ( x ) − L n − 1 ( α + 1 ) ( x ) = ∑ j = 0 k ( k j ) ( − 1 ) j L n − j ( α + k ) ( x ) , n L n ( α ) ( x ) = ( n + α ) L n − 1 ( α ) ( x ) − x L n − 1 ( α + 1 ) ( x ) , or x k k ! L n ( α ) ( x ) = ∑ i = 0 k ( − 1 ) i ( n + i i ) ( n + α k − i ) L n + i ( α − k ) ( x ) , n L n ( α + 1 ) ( x ) = ( n − x ) L n − 1 ( α + 1 ) ( x ) + ( n + α ) L n − 1 ( α ) ( x ) x L n ( α + 1 ) ( x ) = ( n + α ) L n − 1 ( α ) ( x ) − ( n − x ) L n ( α ) ( x ) ; {\displaystyle {\begin{aligned}L_{n}^{(\alpha )}(x)&=L_{n}^{(\alpha +1)}(x)-L_{n-1}^{(\alpha +1)}(x)=\sum _{j=0}^{k}{k \choose j}(-1)^{j}L_{n-j}^{(\alpha +k)}(x),\\[10pt]nL_{n}^{(\alpha )}(x)&=(n+\alpha )L_{n-1}^{(\alpha )}(x)-xL_{n-1}^{(\alpha +1)}(x),\\[10pt]&{\text{or }}\\{\frac {x^{k}}{k!}}L_{n}^{(\alpha )}(x)&=\sum _{i=0}^{k}(-1)^{i}{n+i \choose i}{n+\alpha \choose k-i}L_{n+i}^{(\alpha -k)}(x),\\[10pt]nL_{n}^{(\alpha +1)}(x)&=(n-x)L_{n-1}^{(\alpha +1)}(x)+(n+\alpha )L_{n-1}^{(\alpha )}(x)\\[10pt]xL_{n}^{(\alpha +1)}(x)&=(n+\alpha )L_{n-1}^{(\alpha )}(x)-(n-x)L_{n}^{(\alpha )}(x);\end{aligned}}} combined, they give these additional, useful recurrence relations L n ( α ) ( x ) = ( 2 + α − 1 − x n ) L n − 1 ( α ) ( x ) − ( 1 + α − 1 n ) L n − 2 ( α ) ( x ) = α + 1 − x n L n − 1 ( α + 1 ) ( x ) − x n L n − 2 ( α + 2 ) ( x ) {\displaystyle {\begin{aligned}L_{n}^{(\alpha )}(x)&=\left(2+{\frac {\alpha -1-x}{n}}\right)L_{n-1}^{(\alpha )}(x)-\left(1+{\frac {\alpha -1}{n}}\right)L_{n-2}^{(\alpha )}(x)\\[10pt]&={\frac {\alpha +1-x}{n}}L_{n-1}^{(\alpha +1)}(x)-{\frac {x}{n}}L_{n-2}^{(\alpha +2)}(x)\end{aligned}}} Since n ! L n ( α ) ( x ) {\displaystyle n!\,L_{n}^{(\alpha )}(x)} is a monic polynomial of degree n {\displaystyle n} in α {\displaystyle \alpha } , there is the partial fraction decomposition n ! L n ( α ) ( x ) ( α + 1 ) n = 1 − ∑ j = 1 n ( − 1 ) j j α + j ( n j ) L n ( − j ) ( x ) = 1 − ∑ j = 1 n x j α + j L n − j ( j ) ( x ) ( j − 1 ) ! = 1 − x ∑ i = 1 n L n − i ( − α ) ( x ) L i − 1 ( α + 1 ) ( − x ) α + i . {\displaystyle {\begin{aligned}{\frac {n!\,L_{n}^{(\alpha )}(x)}{(\alpha +1)_{n}}}&=1-\sum _{j=1}^{n}(-1)^{j}{\frac {j}{\alpha +j}}{n \choose j}L_{n}^{(-j)}(x)\\&=1-\sum _{j=1}^{n}{\frac {x^{j}}{\alpha +j}}\,\,{\frac {L_{n-j}^{(j)}(x)}{(j-1)!}}\\&=1-x\sum _{i=1}^{n}{\frac {L_{n-i}^{(-\alpha )}(x)L_{i-1}^{(\alpha +1)}(-x)}{\alpha +i}}.\end{aligned}}} The second equality follows by the following identity, valid for integer i and n and immediate from the expression of L n ( α ) ( x ) {\displaystyle L_{n}^{(\alpha )}(x)} in terms of Charlier polynomials : ( − x ) i i ! L n ( i − n ) ( x ) = ( − x ) n n ! L i ( n − i ) ( x ) . {\displaystyle {\frac {(-x)^{i}}{i!}}L_{n}^{(i-n)}(x)={\frac {(-x)^{n}}{n!}}L_{i}^{(n-i)}(x).} For the third equality apply the fourth and fifth identities of this section.
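The three-term recurrence quoted earlier gives a simple way to evaluate L_n^{(α)}(x) numerically. A short sketch (ours), checked against SciPy's `eval_genlaguerre`, which uses the same normalization as this article (without the physicists' extra factor of n!):

```python
from scipy.special import eval_genlaguerre

def genlaguerre_rec(n, alpha, x):
    # Evaluate L_n^{(alpha)}(x) by the three-term recurrence
    # (k+1) L_{k+1} = (2k+1+alpha-x) L_k - (k+alpha) L_{k-1}.
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 + alpha - x          # L_0 and L_1
    for k in range(1, n):
        prev, cur = cur, ((2*k + 1 + alpha - x) * cur - (k + alpha) * prev) / (k + 1)
    return cur

n, alpha, x = 7, 1.5, 2.3
print(genlaguerre_rec(n, alpha, x))           # agrees with SciPy:
print(eval_genlaguerre(n, alpha, x))
```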
Differentiating the power series representation of a generalized Laguerre polynomial k times leads to d k d x k L n ( α ) ( x ) = { ( − 1 ) k L n − k ( α + k ) ( x ) if k ≤ n , 0 otherwise. {\displaystyle {\frac {d^{k}}{dx^{k}}}L_{n}^{(\alpha )}(x)={\begin{cases}(-1)^{k}L_{n-k}^{(\alpha +k)}(x)&{\text{if }}k\leq n,\\0&{\text{otherwise.}}\end{cases}}} This points to a special case ( α = 0 ) of the formula above: for integer α = k the generalized polynomial may be written L n ( k ) ( x ) = ( − 1 ) k d k L n + k ( x ) d x k , {\displaystyle L_{n}^{(k)}(x)=(-1)^{k}{\frac {d^{k}L_{n+k}(x)}{dx^{k}}},} the shift by k sometimes causing confusion with the usual parenthesis notation for a derivative. Moreover, the following equation holds: 1 k ! d k d x k x α L n ( α ) ( x ) = ( n + α k ) x α − k L n ( α − k ) ( x ) , {\displaystyle {\frac {1}{k!}}{\frac {d^{k}}{dx^{k}}}x^{\alpha }L_{n}^{(\alpha )}(x)={n+\alpha \choose k}x^{\alpha -k}L_{n}^{(\alpha -k)}(x),} which generalizes with Cauchy's formula to L n ( α ′ ) ( x ) = ( α ′ − α ) ( α ′ + n α ′ − α ) ∫ 0 x t α ( x − t ) α ′ − α − 1 x α ′ L n ( α ) ( t ) d t . {\displaystyle L_{n}^{(\alpha ')}(x)=(\alpha '-\alpha ){\alpha '+n \choose \alpha '-\alpha }\int _{0}^{x}{\frac {t^{\alpha }(x-t)^{\alpha '-\alpha -1}}{x^{\alpha '}}}L_{n}^{(\alpha )}(t)\,dt.} The derivative with respect to the second variable α has the form, [ 9 ] d d α L n ( α ) ( x ) = ∑ i = 0 n − 1 L i ( α ) ( x ) n − i . {\displaystyle {\frac {d}{d\alpha }}L_{n}^{(\alpha )}(x)=\sum _{i=0}^{n-1}{\frac {L_{i}^{(\alpha )}(x)}{n-i}}.} The generalized Laguerre polynomials obey the differential equation x L n ( α ) ′ ′ ( x ) + ( α + 1 − x ) L n ( α ) ′ ( x ) + n L n ( α ) ( x ) = 0 , {\displaystyle xL_{n}^{(\alpha )\prime \prime }(x)+(\alpha +1-x)L_{n}^{(\alpha )\prime }(x)+nL_{n}^{(\alpha )}(x)=0,} which may be compared with the equation obeyed by the k th derivative of the ordinary Laguerre polynomial, x L n [ k ] ′ ′ ( x ) + ( k + 1 − x ) L n [ k ] ′ ( x ) + ( n − k ) L n [ k ] ( x ) = 0 , {\displaystyle xL_{n}^{[k]\prime \prime }(x)+(k+1-x)L_{n}^{[k]\prime }(x)+(n-k)L_{n}^{[k]}(x)=0,} where L n [ k ] ( x ) ≡ d k L n ( x ) d x k {\displaystyle L_{n}^{[k]}(x)\equiv {\frac {d^{k}L_{n}(x)}{dx^{k}}}} for this equation only. In Sturm–Liouville form the differential equation is − ( x α + 1 e − x ⋅ L n ( α ) ( x ) ′ ) ′ = n ⋅ x α e − x ⋅ L n ( α ) ( x ) , {\displaystyle -\left(x^{\alpha +1}e^{-x}\cdot L_{n}^{(\alpha )}(x)^{\prime }\right)'=n\cdot x^{\alpha }e^{-x}\cdot L_{n}^{(\alpha )}(x),} which shows that L (α) n is an eigenvector for the eigenvalue n . The generalized Laguerre polynomials are orthogonal over [0, ∞) with respect to the measure with weighting function x α e − x : [ 10 ] ∫ 0 ∞ x α e − x L n ( α ) ( x ) L m ( α ) ( x ) d x = Γ ( n + α + 1 ) n ! δ n , m , {\displaystyle \int _{0}^{\infty }x^{\alpha }e^{-x}L_{n}^{(\alpha )}(x)L_{m}^{(\alpha )}(x)dx={\frac {\Gamma (n+\alpha +1)}{n!}}\delta _{n,m},} which follows from ∫ 0 ∞ x α ′ − 1 e − x L n ( α ) ( x ) d x = ( α − α ′ + n n ) Γ ( α ′ ) . {\displaystyle \int _{0}^{\infty }x^{\alpha '-1}e^{-x}L_{n}^{(\alpha )}(x)dx={\alpha -\alpha '+n \choose n}\Gamma (\alpha ').} If Γ ( x , α + 1 , 1 ) {\displaystyle \Gamma (x,\alpha +1,1)} denotes the gamma distribution then the orthogonality relation can be written as ∫ 0 ∞ L n ( α ) ( x ) L m ( α ) ( x ) Γ ( x , α + 1 , 1 ) d x = ( n + α n ) δ n , m . 
{\displaystyle \int _{0}^{\infty }L_{n}^{(\alpha )}(x)L_{m}^{(\alpha )}(x)\Gamma (x,\alpha +1,1)dx={n+\alpha \choose n}\delta _{n,m}.} The associated, symmetric kernel polynomial has the representations ( Christoffel–Darboux formula ) K n ( α ) ( x , y ) := 1 Γ ( α + 1 ) ∑ i = 0 n L i ( α ) ( x ) L i ( α ) ( y ) ( α + i i ) = 1 Γ ( α + 1 ) L n ( α ) ( x ) L n + 1 ( α ) ( y ) − L n + 1 ( α ) ( x ) L n ( α ) ( y ) x − y n + 1 ( n + α n ) = 1 Γ ( α + 1 ) ∑ i = 0 n x i i ! L n − i ( α + i ) ( x ) L n − i ( α + i + 1 ) ( y ) ( α + n n ) ( n i ) ; {\displaystyle {\begin{aligned}K_{n}^{(\alpha )}(x,y)&:={\frac {1}{\Gamma (\alpha +1)}}\sum _{i=0}^{n}{\frac {L_{i}^{(\alpha )}(x)L_{i}^{(\alpha )}(y)}{\alpha +i \choose i}}\\[4pt]&={\frac {1}{\Gamma (\alpha +1)}}{\frac {L_{n}^{(\alpha )}(x)L_{n+1}^{(\alpha )}(y)-L_{n+1}^{(\alpha )}(x)L_{n}^{(\alpha )}(y)}{{\frac {x-y}{n+1}}{n+\alpha \choose n}}}\\[4pt]&={\frac {1}{\Gamma (\alpha +1)}}\sum _{i=0}^{n}{\frac {x^{i}}{i!}}{\frac {L_{n-i}^{(\alpha +i)}(x)L_{n-i}^{(\alpha +i+1)}(y)}{{\alpha +n \choose n}{n \choose i}}};\end{aligned}}} recursively K n ( α ) ( x , y ) = y α + 1 K n − 1 ( α + 1 ) ( x , y ) + 1 Γ ( α + 1 ) L n ( α + 1 ) ( x ) L n ( α ) ( y ) ( α + n n ) . {\displaystyle K_{n}^{(\alpha )}(x,y)={\frac {y}{\alpha +1}}K_{n-1}^{(\alpha +1)}(x,y)+{\frac {1}{\Gamma (\alpha +1)}}{\frac {L_{n}^{(\alpha +1)}(x)L_{n}^{(\alpha )}(y)}{\alpha +n \choose n}}.} Moreover, as n → ∞ {\displaystyle n\to \infty } , y α e − y K n ( α ) ( ⋅ , y ) → δ ( y − ⋅ ) . {\displaystyle y^{\alpha }e^{-y}K_{n}^{(\alpha )}(\cdot ,y)\to \delta (y-\cdot ).} Turán's inequality can be derived here, which is L n ( α ) ( x ) 2 − L n − 1 ( α ) ( x ) L n + 1 ( α ) ( x ) = ∑ k = 0 n − 1 ( α + n − 1 n − k ) n ( n k ) L k ( α − 1 ) ( x ) 2 > 0. {\displaystyle L_{n}^{(\alpha )}(x)^{2}-L_{n-1}^{(\alpha )}(x)L_{n+1}^{(\alpha )}(x)=\sum _{k=0}^{n-1}{\frac {\alpha +n-1 \choose n-k}{n{n \choose k}}}L_{k}^{(\alpha -1)}(x)^{2}>0.} The following integral is needed in the quantum mechanical treatment of the hydrogen atom , ∫ 0 ∞ x α + 1 e − x [ L n ( α ) ( x ) ] 2 d x = ( n + α ) ! n ! ( 2 n + α + 1 ) . {\displaystyle \int _{0}^{\infty }x^{\alpha +1}e^{-x}\left[L_{n}^{(\alpha )}(x)\right]^{2}dx={\frac {(n+\alpha )!}{n!}}(2n+\alpha +1).} Let a function have the (formal) series expansion f ( x ) = ∑ i = 0 ∞ f i ( α ) L i ( α ) ( x ) . {\displaystyle f(x)=\sum _{i=0}^{\infty }f_{i}^{(\alpha )}L_{i}^{(\alpha )}(x).} Then f i ( α ) = ∫ 0 ∞ L i ( α ) ( x ) ( i + α i ) ⋅ x α e − x Γ ( α + 1 ) ⋅ f ( x ) d x . {\displaystyle f_{i}^{(\alpha )}=\int _{0}^{\infty }{\frac {L_{i}^{(\alpha )}(x)}{i+\alpha \choose i}}\cdot {\frac {x^{\alpha }e^{-x}}{\Gamma (\alpha +1)}}\cdot f(x)\,dx.} The series converges in the associated Hilbert space L 2 [0, ∞) if and only if ‖ f ‖ L 2 2 := ∫ 0 ∞ x α e − x Γ ( α + 1 ) | f ( x ) | 2 d x = ∑ i = 0 ∞ ( i + α i ) | f i ( α ) | 2 < ∞ . {\displaystyle \|f\|_{L^{2}}^{2}:=\int _{0}^{\infty }{\frac {x^{\alpha }e^{-x}}{\Gamma (\alpha +1)}}|f(x)|^{2}\,dx=\sum _{i=0}^{\infty }{i+\alpha \choose i}|f_{i}^{(\alpha )}|^{2}<\infty .} Monomials are represented as x n n ! = ∑ i = 0 n ( − 1 ) i ( n + α n − i ) L i ( α ) ( x ) , {\displaystyle {\frac {x^{n}}{n!}}=\sum _{i=0}^{n}(-1)^{i}{n+\alpha \choose n-i}L_{i}^{(\alpha )}(x),} while binomials have the parametrization ( n + x n ) = ∑ i = 0 n α i i ! L n − i ( x + i ) ( α ) .
{\displaystyle {n+x \choose n}=\sum _{i=0}^{n}{\frac {\alpha ^{i}}{i!}}L_{n-i}^{(x+i)}(\alpha ).} This leads directly to e − γ x = ∑ i = 0 ∞ γ i ( 1 + γ ) i + α + 1 L i ( α ) ( x ) convergent iff ℜ ( γ ) > − 1 2 {\displaystyle e^{-\gamma x}=\sum _{i=0}^{\infty }{\frac {\gamma ^{i}}{(1+\gamma )^{i+\alpha +1}}}L_{i}^{(\alpha )}(x)\qquad {\text{convergent iff }}\Re (\gamma )>-{\tfrac {1}{2}}} for the exponential function. The incomplete gamma function has the representation Γ ( α , x ) = x α e − x ∑ i = 0 ∞ L i ( α ) ( x ) 1 + i ( ℜ ( α ) > − 1 , x > 0 ) . {\displaystyle \Gamma (\alpha ,x)=x^{\alpha }e^{-x}\sum _{i=0}^{\infty }{\frac {L_{i}^{(\alpha )}(x)}{1+i}}\qquad \left(\Re (\alpha )>-1,x>0\right).} In quantum mechanics the Schrödinger equation for the hydrogen-like atom is exactly solvable by separation of variables in spherical coordinates. The radial part of the wave function is a (generalized) Laguerre polynomial. [ 11 ] Vibronic transitions in the Franck-Condon approximation can also be described using Laguerre polynomials. [ 12 ] Erdélyi gives the following two multiplication theorems [ 13 ] t n + 1 + α e ( 1 − t ) z L n ( α ) ( z t ) = ∑ k = n ∞ ( k n ) ( 1 − 1 t ) k − n L k ( α ) ( z ) , e ( 1 − t ) z L n ( α ) ( z t ) = ∑ k = 0 ∞ ( 1 − t ) k z k k ! L n ( α + k ) ( z ) . {\displaystyle {\begin{aligned}&t^{n+1+\alpha }e^{(1-t)z}L_{n}^{(\alpha )}(zt)=\sum _{k=n}^{\infty }{k \choose n}\left(1-{\frac {1}{t}}\right)^{k-n}L_{k}^{(\alpha )}(z),\\[6pt]&e^{(1-t)z}L_{n}^{(\alpha )}(zt)=\sum _{k=0}^{\infty }{\frac {(1-t)^{k}z^{k}}{k!}}L_{n}^{(\alpha +k)}(z).\end{aligned}}} The generalized Laguerre polynomials are related to the Hermite polynomials : H 2 n ( x ) = ( − 1 ) n 2 2 n n ! L n ( − 1 / 2 ) ( x 2 ) H 2 n + 1 ( x ) = ( − 1 ) n 2 2 n + 1 n ! x L n ( 1 / 2 ) ( x 2 ) {\displaystyle {\begin{aligned}H_{2n}(x)&=(-1)^{n}2^{2n}n!L_{n}^{(-1/2)}(x^{2})\\[4pt]H_{2n+1}(x)&=(-1)^{n}2^{2n+1}n!xL_{n}^{(1/2)}(x^{2})\end{aligned}}} where the H n ( x ) are the Hermite polynomials based on the weighting function exp(− x 2 ) , the so-called "physicist's version." Because of this, the generalized Laguerre polynomials arise in the treatment of the quantum harmonic oscillator . Applying the addition formula, ( − 1 ) n 2 2 n n ! L n ( r 2 − 1 ) ( z 1 2 + ⋯ + z r 2 ) = ∑ m 1 + ⋯ + m r = n ∏ i = 1 r H 2 m i ( z i ) . {\displaystyle (-1)^{n}2^{2n}n!\,L_{n}^{\left({\frac {r}{2}}-1\right)}{\Bigl (}z_{1}^{2}+\cdots +z_{r}^{2}{\Bigr )}=\sum _{m_{1}+\cdots +m_{r}=n}\prod _{i=1}^{r}H_{2m_{i}}(z_{i}).} The Laguerre polynomials may be defined in terms of hypergeometric functions , specifically the confluent hypergeometric functions , as L n ( α ) ( x ) = ( n + α n ) M ( − n , α + 1 , x ) = ( α + 1 ) n n ! 1 F 1 ( − n , α + 1 , x ) {\displaystyle L_{n}^{(\alpha )}(x)={n+\alpha \choose n}M(-n,\alpha +1,x)={\frac {(\alpha +1)_{n}}{n!}}\,_{1}F_{1}(-n,\alpha +1,x)} where ( a ) n {\displaystyle (a)_{n}} is the Pochhammer symbol (which in this case represents the rising factorial). The generalized Laguerre polynomials satisfy the Hardy–Hille formula [ 14 ] [ 15 ] ∑ n = 0 ∞ n ! 
Γ ( α + 1 ) Γ ( n + α + 1 ) L n ( α ) ( x ) L n ( α ) ( y ) t n = 1 ( 1 − t ) α + 1 e − ( x + y ) t / ( 1 − t ) 0 F 1 ( ; α + 1 ; x y t ( 1 − t ) 2 ) , {\displaystyle \sum _{n=0}^{\infty }{\frac {n!\,\Gamma \left(\alpha +1\right)}{\Gamma \left(n+\alpha +1\right)}}L_{n}^{(\alpha )}(x)L_{n}^{(\alpha )}(y)t^{n}={\frac {1}{(1-t)^{\alpha +1}}}e^{-(x+y)t/(1-t)}\,_{0}F_{1}\left(;\alpha +1;{\frac {xyt}{(1-t)^{2}}}\right),} where the series on the left converges for α > − 1 {\displaystyle \alpha >-1} and | t | < 1 {\displaystyle |t|<1} . Using the identity 0 F 1 ( ; α + 1 ; z ) = Γ ( α + 1 ) z − α / 2 I α ( 2 z ) , {\displaystyle \,_{0}F_{1}(;\alpha +1;z)=\,\Gamma (\alpha +1)z^{-\alpha /2}I_{\alpha }\left(2{\sqrt {z}}\right),} (see generalized hypergeometric function ), this can also be written as ∑ n = 0 ∞ n ! Γ ( 1 + α + n ) L n ( α ) ( x ) L n ( α ) ( y ) t n = 1 ( x y t ) α / 2 ( 1 − t ) e − ( x + y ) t / ( 1 − t ) I α ( 2 x y t 1 − t ) . {\displaystyle \sum _{n=0}^{\infty }{\frac {n!}{\Gamma (1+\alpha +n)}}L_{n}^{(\alpha )}(x)L_{n}^{(\alpha )}(y)t^{n}={\frac {1}{(xyt)^{\alpha /2}(1-t)}}e^{-(x+y)t/(1-t)}I_{\alpha }\left({\frac {2{\sqrt {xyt}}}{1-t}}\right).} where I α {\displaystyle I_{\alpha }} denotes the modified Bessel function of the first kind, defined as I α ( z ) = ∑ k = 0 ∞ 1 k ! Γ ( k + α + 1 ) ( z 2 ) 2 k + α . {\displaystyle I_{\alpha }(z)=\sum _{k=0}^{\infty }{\frac {1}{k!\,\Gamma (k+\alpha +1)}}\left({\frac {z}{2}}\right)^{2k+\alpha }} This formula is a generalization of the Mehler kernel for Hermite polynomials , which can be recovered from it by expressing the Hermite polynomials as special cases of the associated Laguerre polynomials. Substituting t ↦ − t / y {\displaystyle t\mapsto -t/y} and taking the y → ∞ {\displaystyle y\to \infty } limit, we obtain [ 16 ] ∑ n = 0 ∞ t n Γ ( n + 1 + α ) L n ( α ) ( x ) = e t ( − x t ) α / 2 I α ( 2 − x t ) . {\displaystyle \sum _{n=0}^{\infty }{\frac {t^{n}}{\Gamma (n+1+\alpha )}}L_{n}^{(\alpha )}(x)={\frac {e^{t}}{(-xt)^{\alpha /2}}}I_{\alpha }(2{\sqrt {-xt}}).} The generalized Laguerre polynomials are used to describe the quantum wavefunction for hydrogen atom orbitals. [ 17 ] [ 18 ] [ 19 ] The convention used throughout this article expresses the generalized Laguerre polynomials as [ 20 ] L n ( α ) ( x ) = Γ ( α + n + 1 ) Γ ( α + 1 ) n ! 1 F 1 ( − n ; α + 1 ; x ) , {\displaystyle L_{n}^{(\alpha )}(x)={\frac {\Gamma (\alpha +n+1)}{\Gamma (\alpha +1)n!}}\,_{1}F_{1}(-n;\alpha +1;x),} where 1 F 1 ( a ; b ; x ) {\displaystyle \,_{1}F_{1}(a;b;x)} is the confluent hypergeometric function . In the physics literature, [ 19 ] the generalized Laguerre polynomials are instead defined as L ¯ n ( α ) ( x ) = [ Γ ( α + n + 1 ) ] 2 Γ ( α + 1 ) n ! 1 F 1 ( − n ; α + 1 ; x ) . {\displaystyle {\bar {L}}_{n}^{(\alpha )}(x)={\frac {\left[\Gamma (\alpha +n+1)\right]^{2}}{\Gamma (\alpha +1)n!}}\,_{1}F_{1}(-n;\alpha +1;x).} The physics version is related to the standard version by L ¯ n ( α ) ( x ) = ( n + α ) ! L n ( α ) ( x ) . {\displaystyle {\bar {L}}_{n}^{(\alpha )}(x)=(n+\alpha )!L_{n}^{(\alpha )}(x).} There is yet another, albeit less frequently used, convention in the physics literature [ 21 ] [ 22 ] [ 23 ] L ~ n ( α ) ( x ) = ( − 1 ) α L ¯ n − α ( α ) . {\displaystyle {\tilde {L}}_{n}^{(\alpha )}(x)=(-1)^{\alpha }{\bar {L}}_{n-\alpha }^{(\alpha )}.} Generalized Laguerre polynomials are linked to Umbral calculus by being Sheffer sequences for D / ( D − I ) {\displaystyle D/(D-I)} when multiplied by n ! {\displaystyle n!} .
In the umbral calculus convention, [ 24 ] the default Laguerre polynomials are defined to be L n ( x ) = n ! L n ( − 1 ) ( x ) = ∑ k = 0 n L ( n , k ) ( − x ) k {\displaystyle {\mathcal {L}}_{n}(x)=n!L_{n}^{(-1)}(x)=\sum _{k=0}^{n}L(n,k)(-x)^{k}} where L ( n , k ) = ( n − 1 k − 1 ) n ! k ! {\textstyle L(n,k)={\binom {n-1}{k-1}}{\frac {n!}{k!}}} are the signless Lah numbers . ( L n ( x ) ) n ∈ N {\textstyle ({\mathcal {L}}_{n}(x))_{n\in \mathbb {N} }} is a sequence of polynomials of binomial type , i.e., they satisfy L n ( x + y ) = ∑ k = 0 n ( n k ) L k ( x ) L n − k ( y ) {\displaystyle {\mathcal {L}}_{n}(x+y)=\sum _{k=0}^{n}{\binom {n}{k}}{\mathcal {L}}_{k}(x){\mathcal {L}}_{n-k}(y)}
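Returning to the Gauss–Laguerre quadrature mentioned at the start of the article: NumPy's `laggauss` supplies nodes and weights for integrals of the form ∫₀^∞ f(x)e^{−x} dx, which also makes the orthonormality relation for the simple (α = 0) Laguerre polynomials easy to verify, since the 20-node rule is exact for polynomial integrands of degree up to 39:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss, Laguerre

x, w = laggauss(20)          # nodes and weights for integrals against e^{-x}

L5, L3 = Laguerre.basis(5)(x), Laguerre.basis(3)(x)
print(np.sum(w * L5 * L3))   # ~0.0  (orthogonality)
print(np.sum(w * L5 * L5))   # ~1.0  (norm: Gamma(5+1)/5! = 1)

print(np.sum(w * np.sin(x))) # ~0.5, the exact value of the integral of sin(x) e^{-x}
```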
https://en.wikipedia.org/wiki/Laguerre_polynomials
The Laguerre transformations or axial homographies are an analogue of Möbius transformations over the dual numbers . [ 1 ] [ 2 ] [ 3 ] [ 4 ] When studying these transformations, the dual numbers are often interpreted as representing oriented lines on the plane. [ 1 ] The Laguerre transformations map lines to lines, and include in particular all isometries of the plane . Strictly speaking, these transformations act on the dual number projective line , which adjoins to the dual numbers a set of points at infinity. Topologically, this projective line is equivalent to a cylinder. Points on this cylinder are in a natural one-to-one correspondence with oriented lines on the plane. A Laguerre transformation is a linear fractional transformation z ↦ a z + b c z + d {\displaystyle z\mapsto {\frac {az+b}{cz+d}}} where a , b , c , d {\displaystyle a,b,c,d} are all dual numbers, z {\displaystyle z} lies on the dual number projective line, and a d − b c {\displaystyle ad-bc} is not a zero divisor . A dual number is a hypercomplex number of the form x + y ε {\displaystyle x+y\varepsilon } where ε 2 = 0 {\displaystyle \varepsilon ^{2}=0} but ε ≠ 0 {\displaystyle \varepsilon \neq 0} . This can be compared to the complex numbers which are of the form x + y i {\displaystyle x+yi} where i 2 = − 1 {\displaystyle i^{2}=-1} . The points of the dual number projective line can be defined equivalently in two ways: A line which makes an angle θ {\displaystyle \theta } with the x-axis, and whose x-intercept is denoted s {\displaystyle s} , is represented by the dual number z = ( 1 + ε s ) tan ⁡ ( θ / 2 ) . {\displaystyle z=(1+\varepsilon s)\tan(\theta /2).} The above does not make sense when the line is parallel to the x-axis. In that case, if θ = π {\displaystyle \theta =\pi } then set z = − 2 ε R {\displaystyle z={\frac {-2}{\varepsilon R}}} where R {\displaystyle R} is the y-intercept of the line. This may not appear to be valid, as one is dividing by a zero divisor, but this is a valid point on the projective dual line. If θ = 2 π {\displaystyle \theta =2\pi } then set z = 1 2 ε R {\displaystyle z={\frac {1}{2}}\varepsilon R} . Finally, observe that these coordinates represent oriented lines. An oriented line is an ordinary line with one of two possible orientations attached to it. This can be seen from the fact that if θ {\displaystyle \theta } is increased by π {\displaystyle \pi } then the resulting dual number representative is not the same. It is possible to express the above line coordinates as homogeneous coordinates z = [ sin ⁡ ( θ + ε R 2 ) : cos ⁡ ( θ + ε R 2 ) ] {\displaystyle z=\left[\sin \left({\frac {\theta +\varepsilon R}{2}}\right):\cos \left({\frac {\theta +\varepsilon R}{2}}\right)\right]} where R {\displaystyle R} is the perpendicular distance of the line from the origin. This representation has numerous advantages: One advantage is that there is no need to break into different cases, such as parallel to the x {\displaystyle x} -axis and non-parallel. The other advantage is that these homogeneous coordinates can be interpreted as vectors , allowing us to multiply them by matrices. Every Laguerre transformation can be represented as a 2×2 matrix whose entries are dual numbers. The matrix representation of z ↦ p z + q r z + s {\displaystyle z\mapsto {\frac {pz+q}{rz+s}}} is ( p q r s ) {\displaystyle {\begin{pmatrix}p&q\\r&s\end{pmatrix}}} (but notice that any non-nilpotent scalar multiple of this matrix represents the same Laguerre transformation).
Additionally, as long as the determinant of a 2×2 matrix with dual-number entries is not nilpotent , then it represents a Laguerre transformation. (Note that in the above, we represent the homogeneous vector [ z : w ] {\displaystyle [z:w]} as a column vector in the obvious way, instead of as a row vector.) Laguerre transformations do not act on points. This is because if three oriented lines pass through the same point, their images under a Laguerre transformation do not have to meet at one point. Laguerre transformations can be seen as acting on oriented lines as well as on oriented circles. An oriented circle is an ordinary circle with an orientation represented by a binary value attached to it, which is either 1 {\displaystyle 1} or − 1 {\displaystyle -1} . The only exception is a circle of radius zero, which has orientation equal to 0 {\displaystyle 0} . A point is defined to be an oriented circle of radius zero. If an oriented circle has orientation equal to 1 {\displaystyle 1} , then the circle is said to be " anti-clockwise " oriented; if it has orientation equal to − 1 {\displaystyle -1} then it is " clockwise " oriented. The radius of an oriented circle is defined to be the radius r {\displaystyle r} of the underlying unoriented circle multiplied by the orientation. The image of an oriented circle under a Laguerre transformation is another oriented circle. If two oriented figures – either circles or lines – are tangent to each other then their images under a Laguerre transformation are also tangent. Two oriented circles are defined to be tangent if their underlying circles are tangent and their orientations are equal at the point of contact. Tangency between lines and circles is defined similarly. A Laguerre transformation might map a point to an oriented circle which is no longer a point. An oriented circle can never be mapped to an oriented line. Likewise, an oriented line can never be mapped to an oriented circle. This is opposite to Möbius geometry , where lines and circles can be mapped to each other, but neither can be mapped to points. Both Möbius geometry and Laguerre geometry are subgeometries of Lie sphere geometry , where points and oriented lines can be mapped to each other, but tangency remains preserved. The matrix representations of oriented circles (which include points but not lines) are precisely the invertible 2 × 2 {\displaystyle 2\times 2} skew-Hermitian dual number matrices. These are all of the form H = ( ε a b + c ε − b + c ε ε d ) {\displaystyle H={\begin{pmatrix}\varepsilon a&b+c\varepsilon \\-b+c\varepsilon &\varepsilon d\end{pmatrix}}} (where all the variables are real, and b ≠ 0 {\displaystyle b\neq 0} ). The set of oriented lines tangent to an oriented circle is given by { v ∈ D P 1 ∣ v ∗ H v = 0 } {\displaystyle \{v\in \mathbb {DP} ^{1}\mid v^{*}Hv=0\}} where D P 1 {\displaystyle \mathbb {DP} ^{1}} denotes the projective line over the dual numbers D {\displaystyle \mathbb {D} } . Applying a Laguerre transformation represented by M {\displaystyle M} to the oriented circle represented by H {\displaystyle H} gives the oriented circle represented by ( M − 1 ) ∗ H M − 1 {\displaystyle (M^{-1})^{*}HM^{-1}} . The radius of an oriented circle is equal to half the trace . The orientation is then the sign of the trace.
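A minimal numerical sketch of these definitions (all class and function names are ours, not a standard library): dual numbers with ε² = 0, the homogeneous line coordinates [sin((θ+εR)/2) : cos((θ+εR)/2)], and a 2×2 dual matrix acting on them. The example applies an axial dilation, whose geometric effect on lines is described in the next paragraphs: the line stays parallel to itself and moves t units perpendicular.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Dual:
    re: float   # real part
    ep: float   # coefficient of epsilon, with epsilon**2 = 0

    def __add__(self, o): return Dual(self.re + o.re, self.ep + o.ep)
    def __mul__(self, o): return Dual(self.re * o.re,
                                      self.re * o.ep + self.ep * o.re)

def line(theta, R):
    # Homogeneous coordinates [sin((theta + eps*R)/2) : cos((theta + eps*R)/2)],
    # expanded to first order in epsilon.
    h = theta / 2.0
    return (Dual(math.sin(h),  (R / 2.0) * math.cos(h)),
            Dual(math.cos(h), -(R / 2.0) * math.sin(h)))

def apply(M, z, w):
    # Act on the column vector [z : w] by the 2x2 dual matrix ((p, q), (r, s)).
    (p, q), (r, s) = M
    return p * z + q * w, r * z + s * w

one, eps = Dual(1.0, 0.0), (lambda a: Dual(0.0, a))
t = 2.0
dilation = ((one, eps(t / 2)), (eps(-t / 2), one))   # axial dilation by t

z, w = apply(dilation, *line(theta=0.9, R=1.3))
print(2 * math.atan2(z.re, w.re))   # 0.9: the direction theta is unchanged
print(2 * z.ep / w.re)              # 3.3 = R + t: the line moved t units
```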
Note that the animated figures below show some oriented lines, but without any visual indication of a line's orientation (so two lines that differ only in orientation are displayed in the same way); oriented circles are shown as a set of oriented tangent lines, which results in a certain visual effect. The following can be found in Isaak Yaglom 's Complex numbers in geometry and a paper by Gutin entitled Generalizations of singular value decomposition to dual-numbered matrices . [ 1 ] [ 5 ] Mappings of the form z ↦ p z − q q ¯ z + p ¯ {\displaystyle z\mapsto {\frac {pz-q}{{\bar {q}}z+{\bar {p}}}}} express rigid body motions (sometimes called direct Euclidean isometries ). The matrix representations of these transformations span a subalgebra isomorphic to the planar quaternions . The mapping z ↦ − z {\displaystyle z\mapsto -z} represents a reflection about the x-axis. The transformation z ↦ 1 / z {\displaystyle z\mapsto 1/z} expresses a reflection about the y-axis. Observe that if U {\displaystyle U} is the matrix representation of any combination of the above three transformations, but normalised so as to have determinant 1 {\displaystyle 1} , then U {\displaystyle U} satisfies U U ∗ = U ∗ U = I {\displaystyle UU^{*}=U^{*}U=I} where U ∗ {\displaystyle U^{*}} means U ¯ T {\displaystyle {\overline {U}}^{\mathrm {T} }} . We will call these unitary matrices. Notice though that these are unitary in the sense of the dual numbers and not the complex numbers. The unitary matrices express precisely the Euclidean isometries . An axial dilation by t {\displaystyle t} units is a transformation of the form z + ( ε t / 2 ) ( − ε t / 2 ) z + 1 {\displaystyle {\frac {z+(\varepsilon t/2)}{(-\varepsilon t/2)z+1}}} . An axial dilation by t {\displaystyle t} units increases the radius of all oriented circles by t {\displaystyle t} units while preserving their centres. If a circle has negative orientation, then its radius is considered negative, and therefore for some positive values of t {\displaystyle t} the circle actually shrinks. An axial dilation is depicted in Figure 1, in which two circles of opposite orientations undergo the same axial dilation. On lines, an axial dilation by t {\displaystyle t} units maps any line z {\displaystyle z} to a line z ′ {\displaystyle z'} such that z {\displaystyle z} and z ′ {\displaystyle z'} are parallel, and the perpendicular distance between z {\displaystyle z} and z ′ {\displaystyle z'} is t {\displaystyle t} . Lines that are parallel but have opposite orientations move in opposite directions. The transformation z ↦ k z {\displaystyle z\mapsto kz} for a real value of k {\displaystyle k} preserves the x-intercept of a line, while changing its angle to the x-axis. See Figure 2 to observe the effect on a grid of lines (including the x axis in the middle) and Figure 3 to observe the effect on two circles that differ initially only in orientation (to see that the outcome is sensitive to orientation). Putting it all together, a general Laguerre transformation in matrix form can be expressed as U S V ∗ {\displaystyle USV^{*}} where U {\displaystyle U} and V {\displaystyle V} are unitary, and S {\displaystyle S} is a matrix either of the form ( a 0 0 b ) {\displaystyle {\begin{pmatrix}a&0\\0&b\end{pmatrix}}} or ( a − b ε b ε a ) {\displaystyle {\begin{pmatrix}a&-b\varepsilon \\b\varepsilon &a\end{pmatrix}}} where a {\displaystyle a} and b {\displaystyle b} are real numbers. The matrices U {\displaystyle U} and V {\displaystyle V} express Euclidean isometries .
The matrix S {\displaystyle S} either represents a transformation of the form z ↦ k z {\displaystyle z\mapsto kz} or an axial dilation. The resemblance to Singular Value Decomposition should be clear. [ 5 ] Note: In the event that S {\displaystyle S} is an axial dilation, the factor V {\displaystyle V} can be set to the identity matrix. This follows from the fact that if V {\displaystyle V} is unitary and S {\displaystyle S} is an axial dilation, then it can be seen that S V = { V S , det ( V ) = + 1 V S T , det ( V ) = − 1 {\displaystyle SV={\begin{cases}VS,&\det(V)=+1\\VS^{\mathrm {T} },&\det(V)=-1\end{cases}}} , where S T {\displaystyle S^{\mathrm {T} }} denotes the transpose of S {\displaystyle S} . So U S V ∗ = { ( U V ∗ ) S , det ( V ) = + 1 ( U V ∗ ) S T , det ( V ) = − 1 {\displaystyle USV^{*}={\begin{cases}(UV^{*})S,&\det(V)=+1\\(UV^{*})S^{\mathrm {T} },&\det(V)=-1\end{cases}}} . A question arises: What happens if the role of the dual numbers above is changed to the complex numbers? In that case, the complex numbers represent oriented lines in the elliptic plane (the plane which elliptic geometry takes place over). This is in contrast to the dual numbers, which represent oriented lines in the Euclidean plane. The elliptic plane is essentially a sphere (but where antipodal points are identified), and the lines are thus great circles . We can choose an arbitrary great circle to be the equator . The oriented great circle which intersects the equator at longitude s {\displaystyle s} , and makes an angle θ {\displaystyle \theta } with the equator at the point of intersection, can be represented by the complex number tan ⁡ ( θ / 2 ) ( cos ⁡ ( s ) + i sin ⁡ ( s ) ) {\displaystyle \tan(\theta /2)(\cos(s)+i\sin(s))} . In the case where θ = π {\displaystyle \theta =\pi } (where the line is literally the same as the equator, but oriented in the opposite direction to when θ = 0 {\displaystyle \theta =0} ) the oriented line is represented as ∞ {\displaystyle \infty } . Similar to the case of the dual numbers , the unitary matrices act as isometries of the elliptic plane . The set of "elliptic Laguerre transformations" (which are the analogues of the Laguerre transformations in this setting) can be decomposed using Singular Value Decomposition of complex matrices, in a similar way to how we decomposed Euclidean Laguerre transformations using an analogue of Singular Value Decomposition for dual-number matrices . If the role of the dual numbers or complex numbers is changed to the split-complex numbers , then a similar formalism can be developed for representing oriented lines on the hyperbolic plane instead of the Euclidean or elliptic planes: A split-complex number can be written in the form ( a , − b − 1 ) {\displaystyle (a,-b^{-1})} because the algebra in question is isomorphic to R ⊕ R {\displaystyle \mathbb {R} \oplus \mathbb {R} } . (Notice though that as a *-algebra , as opposed to a mere algebra , the split-complex numbers are not decomposable in this way). The terms a {\displaystyle a} and b {\displaystyle b} in ( a , − b − 1 ) {\displaystyle (a,-b^{-1})} represent points on the boundary of the hyperbolic plane; they are respectively the starting and ending points of an oriented line.
Since the boundary of the hyperbolic plane is homeomorphic to the projective line R P 1 {\displaystyle \mathbb {RP} ^{1}} , we need a {\displaystyle a} and b {\displaystyle b} to belong to the projective line R P 1 {\displaystyle \mathbb {RP} ^{1}} instead of the affine line R 1 {\displaystyle \mathbb {R} ^{1}} . Indeed, this hints that ( R ⊕ R ) P 1 ≅ R P 1 ⊕ R P 1 {\displaystyle (\mathbb {R} \oplus \mathbb {R} )\mathbb {P} ^{1}\cong \mathbb {R} \mathbb {P} ^{1}\oplus \mathbb {R} \mathbb {P} ^{1}} . The analogue of unitary matrices over the split-complex numbers are the isometries of the hyperbolic plane . This is shown by Yaglom. [ 1 ] Furthermore, the set of linear fractional transformations can be decomposed in a way that resembles Singular Value Decomposition, but which also unifies it with the Jordan decomposition . [ 6 ] [ 1 ] We therefore have a correspondence between the three planar number systems (complex, dual and split-complex numbers) and the three non-Euclidean geometries . The number system that corresponds to Euclidean geometry is the dual numbers . n-dimensional Laguerre space is isomorphic to ( n + 1)-dimensional Minkowski space . To associate a point P = ( x 1 , x 2 , … , x n , r ) {\displaystyle P=(x_{1},x_{2},\dotsc ,x_{n},r)} in Minkowski space to an oriented hypersphere, intersect the light cone centred at P {\displaystyle P} with the t = 0 {\displaystyle t=0} hyperplane. The group of Laguerre transformations is then isomorphic to the Poincaré group R n , 1 ⋊ O ⁡ ( n , 1 ) {\displaystyle \mathbb {R} ^{n,1}\rtimes \operatorname {O} (n,1)} . These transformations are exactly those which preserve a kind of squared distance between oriented circles called their Darboux product . The direct Laguerre transformations are defined as the subgroup R n , 1 ⋊ O + ⁡ ( n , 1 ) {\displaystyle \mathbb {R} ^{n,1}\rtimes \operatorname {O} ^{+}(n,1)} . In 2 dimensions, the direct Laguerre transformations can be represented by 2×2 dual number matrices. If the 2×2 dual number matrices are understood as constituting the Clifford algebra Cl 2 , 0 , 1 ⁡ ( R ) {\displaystyle \operatorname {Cl} _{2,0,1}(\mathbb {R} )} , then analogous Clifford algebraic representations are possible in higher dimensions. If we embed Minkowski space R n , 1 {\displaystyle \mathbb {R} ^{n,1}} in the projective space R P n + 1 {\displaystyle \mathbb {RP} ^{n+1}} while keeping the transformation group the same, then the points at infinity are oriented flats. We call them "flats" because their shape is flat. In 2 dimensions, these are the oriented lines. As an aside, there are two non-equivalent definitions of a Laguerre transformation: Either as a Lie sphere transformation that preserves oriented flats, or as a Lie sphere transformation that preserves the Darboux product. We use the latter convention in this article. Note that even in 2 dimensions, the former transformation group is more general than the latter: A homothety for example maps oriented lines to oriented lines, but does not in general preserve the Darboux product. This can be demonstrated using the homothety centred at ( 0 , 0 ) {\displaystyle (0,0)} by t {\displaystyle t} units. Now consider the action of this transformation on two circles: One simply being the point ( 0 , 0 ) {\displaystyle (0,0)} , and the other being a circle of radius 1 {\displaystyle 1} centred at ( 0 , 0 ) {\displaystyle (0,0)} . These two circles have a Darboux product equal to − 1 {\displaystyle -1} .
Their images under the homothety have a Darboux product equal to − t 2 {\displaystyle -t^{2}} . This therefore only gives a Laguerre transformation when t 2 = 1 {\displaystyle t^{2}=1} . In this section, we interpret Laguerre transformations differently from how they are interpreted in the rest of the article. When acting on line coordinates, Laguerre transformations are not understood to be conformal in the sense described here. This is clearly demonstrated in Figure 2. The Laguerre transformations preserve angles when the proper angle for the dual number plane is identified. When a ray y = mx , x ≥ 0 , and the positive x-axis are taken for sides of an angle, the slope m is the magnitude of this angle. This number m corresponds to the signed area of the right triangle with base on the interval [(√2,0), (√2, m √2)] . The line {1 + aε : a ∈ ℝ}, with the dual number multiplication, forms a subgroup of the unit dual numbers, each element being a shear mapping when acting on the dual number plane. Other angles in the plane are generated by such action, and since shear mapping preserves area, the size of these angles is the same as the original. Note that the inversion z to 1/ z leaves angle size invariant. As the general Laguerre transformation is generated by translations, dilations, shears, and inversions, and all of these leave angle invariant, the general Laguerre transformation is conformal in the sense of these angles. [ 2 ] : 81
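The homothety computation above is easy to reproduce. The sketch below assumes the Darboux product of two oriented circles, given as (centre, signed radius) pairs, takes the form |c₁ − c₂|² − (r₁ − r₂)²; this form is our assumption, chosen because it reproduces the values −1 and −t² quoted in the text:

```python
def darboux(c1, r1, c2, r2):
    # Assumed Darboux product of oriented circles (centre, signed radius):
    # |c1 - c2|^2 - (r1 - r2)^2, consistent with the -1 and -t^2 values above.
    (x1, y1), (x2, y2) = c1, c2
    return (x1 - x2)**2 + (y1 - y2)**2 - (r1 - r2)**2

t = 3.0  # homothety factor
print(darboux((0, 0), 0.0, (0, 0), 1.0))   # -1.0 before the homothety
print(darboux((0, 0), 0.0, (0, 0), t))     # -9.0 = -t**2 after: changed unless t**2 == 1
```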
https://en.wikipedia.org/wiki/Laguerre_transformations
Laicism (also laicity , from the Ancient Greek λαϊκός ( laïkós ), meaning "layperson" or "non-cleric") refers to a legal and political model based on the strict separation of religion and state. The French term laïcité was coined in 1871 by French educator and future Nobel Peace Prize laureate Ferdinand Buisson , who advocated for secular education. In some countries, laicism is constitutionally enshrined, while others—primarily Western states—do not explicitly define themselves as laicist but implement varying degrees of separation between religion and government. [ 1 ] The term "laicism" arose in France in the 19th century for an anticlerical stance that opposed any ecclesiastical influence on matters of the French state, but not Christianity itself. In 1894, the Dreyfus affair began in France. Domestic political upheavals, latent antisemitism and attempts by clerical-restorationist circles to exert influence led to years of social polarization in the country. In foreign policy, diplomatic relations between France and the Vatican were broken off in 1904. They were not resumed until 1921. [ 2 ] Domestically, the 1905 law on the separation of church and state came into effect, which the then deputy and later prime minister Aristide Briand in particular had worked to have passed. This was the first concrete application of the principle created by Buisson. The term laïcité was first used in the 1946 constitution. Its Article 1 reads: La France est une République indivisible, laïque, démocratique et sociale ("France is an indivisible, secular, democratic and social Republic"). Unlike the French model, where the state protects itself from religious influence (primarily Catholicism ), the American model of separation aims to protect religious institutions from government interference, often coexisting with strong religious influence in society. [ 3 ] Despite constitutional commitments, the implementation of laicism varies significantly across these nations. Within the EU, the Czech Republic, France, and Portugal are the only states constitutionally defined as laicist. France’s 1905 law created a complete separation of religion and state, particularly targeting the Catholic Church, though other faiths were also affected in the interest of neutrality. However, in Portugal and certain French regions ( Alsace and Moselle ), concordats with the Catholic Church continue to provide exceptions to full laicist application. In Turkey, laicism is interpreted as the subordination of religious expression to the state. The government trains Islamic clergy and dictates religious instruction through the Presidency of Religious Affairs ( Diyanet ). The effects of France’s 1905 secular law remain visible today. Two interpretations of the law exist: a liberal one and a radical one. While the liberal view is widely accepted, the radical interpretation is often championed by political elites, especially on the Left. The Catholic Church has never fully accepted ideological laicism but has, since Vatican II, renounced state privileges and the notion of a state religion (abolished in Italy in 1984). [ 11 ] In modern France, laicism is a constitutional ideal. Religion is strictly a private matter; it cannot play a public or governmental role. Religious buildings constructed before 1905 remain state property, though religious communities may use them. Religious groups receive no public funding (with exceptions), though tax exemptions exist. The Alsace-Moselle region retains the Concordat of 1801 due to historical circumstances. In French Guiana , the state still funds Catholic clergy.
Chaplaincy services, including military chaplains, are also permitted and since 2005 include Islamic clerics. French laicism is rigorously enforced. Public schools may not inquire about students’ religions. Since 2004, conspicuous religious symbols—like headscarves, kippahs, crosses, turbans, or religious habits—are banned in public schools. Nevertheless, religious broadcasts are aired on national media. [ 12 ] Former President Nicolas Sarkozy proposed a “positive laicism” to integrate religion more openly into public life and combat extremism, drawing criticism from laicist groups. [ 13 ] Under Article 137 of the Weimar Constitution , which is integrated into the current Basic Law (Article 140 GG), Germany does not have a state church. While officially neutral, the state maintains close ties with religious institutions, particularly the Catholic and Protestant churches. These are recognized as public-law corporations and can collect church taxes. [ 14 ] Though Germany is secular, it is not laicist in the French sense. The German model—often described as a cooperation model—balances state neutrality with religious partnership. However, increasing secularization and religious diversity have challenged this system's inclusivity and raised concerns about fair treatment for both religious and non-religious populations. Inspired by the French model, Turkey, under Mustafa Kemal Atatürk , adopted laicism as a constitutional principle. Initially, the state adopted an aggressively anti-religious stance, banning pilgrimages and religious education (1933–1948). Over time, the state institutionalized control over Islam through the Diyanet, effectively nationalizing religion. [ 15 ] Laicism hardened over the years. Religious symbols, including headscarves, were discouraged in public institutions. In 2008, a constitutional amendment allowed female students to wear headscarves, but it was struck down by the Constitutional Court. In 2010, the ban was permanently lifted by the Higher Education Council. [ 16 ] Religious minorities in Turkey still face discrimination. In 2008, the Chief Public Prosecutor sought to ban the ruling Justice and Development Party , partly over its religious orientation. [ 17 ] The Catholic Church, since the Second Vatican Council ( Gaudium et Spes , 1965), has accepted a form of laicity in secular affairs but retains its claim to spiritual authority. Protestant and Orthodox state churches generally reject laicism but may accept it under the theological principle of obedience to secular authority ( Romans 13:1 ). Evangelical free churches, which have historically rejected state churches, support religious freedom and often embrace secular governance. [ 18 ]
https://en.wikipedia.org/wiki/Laicism
Lajos Peter Balogh (born January 15, 1950, Hungary), mainly referred to as Lou Balogh, is a Hungarian-American scientist known for his research on polymers, dendrimer nanocomposites, and nanomedicine. Balogh is the editor-in-chief of Precision Nanomedicine (PRNANO). Based on his career-long citation numbers, he belongs to the World's Top 2% Scientists. Balogh was born on January 15, 1950, in Komádi, Hajdú-Bihar County, Hungary. He studied chemistry at the Debreceni Vegyipari Technikum and then at the Kossuth Lajos University from 1969 to 1974, earning his Ph.D. in 1983. In 1991, Balogh received an invitation from the University of Massachusetts Lowell and moved to the United States. Balogh joined UMass Lowell as a visiting professor in 1991. In 1996, he left for the Michigan Molecular Institute to research dendrimers, where he was a senior associate scientist in Donald Tomalia's group. From 1998 to 2018, Balogh worked as a professor at the University of Michigan Ann Arbor's Center for Biologic Nanotechnology and at the University at Buffalo, and co-directed the Nano-Biotechnology Center at the Roswell Park Comprehensive Cancer Center. As a visiting professor, he also taught at the Chinese Academy of Sciences, Seoul National University, and Semmelweis University. [ 1 ] Balogh is one of the co-founders of the American Society for Nanomedicine (2008). He has been a board member of several expert organizations, e.g., the U.S. Technical Advisory Group to ISO TC229 on Nanotechnology (since 2005), the Scientific Committee of the CLINAM Summits [ 2 ] (since 2011), and numerous other national and international steering committees. Between 2008 and 2016, Balogh, as editor-in-chief, took an upstart scientific journal (Nanomedicine: Nanotechnology, Biology, and Medicine, Elsevier) from 5 editors and no journal impact factor to 20 editors and a 2014 journal impact factor of 6.9 (5-year JIF 7.5). He increased the journal's readership to over 480,000 downloads per year. In 2017, Balogh initiated Manuscript Clinic, a platform that helped scientists and students publish their research results in nanomedicine and nanotechnology and promoted both nanoscience and scientific writing. In 2018, he founded Andover House, Inc., [ 3 ] a not-for-profit online publishing company, and launched Precision Nanomedicine (PRNANO). He serves as the editor-in-chief of this scientist-owned, fully open-access, peer-reviewed international journal. PRNANO has been designated the official journal of the International Society for Nanomedicine and of CLINAM, the European Society for Clinical Nanomedicine (Basel, Switzerland). Balogh is married to Éva Kovács Balogh, a Hungarian American linguist. He has three children: Andrea, Péter, and Áki. Peter Balogh was the crew chief of the University of Michigan Solar Car Team's MomentUM, which won first place at the North American Solar Challenge in 2005; he now works in the maritime industry in Asia. Aki Balogh [ 4 ] is the co-founder and president of MarketMuse [ 5 ] and co-founder and CEO of DLC.link. [ 6 ] Balogh discovered and pioneered dendrimer nanocomposites [ 7 ] and drug delivery platforms, [ 8 ] and co-invented new cancer treatments. [ 9 ] He is considered an international expert in nanomedicine and scholarly publishing. He has published 228 scientific papers in chemistry, physics, nanotechnology, and nanomedicine, given over 230 invited lectures, and been awarded 12 patents.
His publications have been cited over 10,000 times (22 papers with more than 100 citations, 11 with more than 200 citations, and 2 cited over 1,000 times; [ 10 ] h-index = 42). [ 11 ] Balogh has been listed as belonging to the World's Top 2% of Scientists. [ 12 ] Balogh is one of the five founders of the American Society for Nanomedicine. He serves on the U.S. Technical Advisory Group to ISO TC 229 Nanotechnology and on the boards of several international and U.S. national organizations. Recent awards include a visiting professorship for Senior International Scientists at the Chinese Academy of Sciences, Beijing; the Korean Federation of Science and Technology Societies Brain Pool Program Award for Renowned Foreign Scientists, to teach at Seoul National University, Seoul, Korea; and a Fulbright Teaching/Research Scholarship at Semmelweis University. Balogh has been a member of the External Body of the Hungarian Academy of Sciences since 2011.
https://en.wikipedia.org/wiki/Lajos_Balogh_(scientist)
In petroleum engineering, the Lak wettability index is a quantitative indicator for measuring the wettability of rocks from relative permeability data. The index is based on a combination of Craig's first rule [ 1 ] [ 2 ] and a modified version of Craig's second rule, [ 3 ] [ 4 ] with two constant coefficients A and B. In this formulation, relative permeability is defined as the effective permeability divided by the oil permeability measured at irreducible water saturation. [ 1 ] Craig [ 1 ] proposed three rules of thumb for the interpretation of wettability from relative permeability curves. These rules are based on the value of interstitial water saturation, the water saturation at the crossover point of the relative permeability curves (i.e., where the relative permeabilities are equal to each other), and the normalized water permeability at residual oil saturation (i.e., normalized by the oil permeability at interstitial water saturation). According to Craig's first rule of thumb, in water-wet rocks the relative permeability to water at residual oil saturation is generally less than 30%, whereas in oil-wet systems it is greater than 50% and approaches 100%. The second rule of thumb considers a system water-wet if the water saturation at the crossover point of the relative permeability curves is greater than 50%, and oil-wet otherwise. The third rule of thumb states that in a water-wet rock the value of interstitial water saturation is usually greater than 20 to 25% pore volume, whereas it is generally less than 15% pore volume (frequently less than 10%) for an oil-wet porous medium. [ 3 ] In 2021, Abouzar Mirzaei-Paiaman [ 3 ] investigated the validity of Craig's rules of thumb and showed that while the third rule is generally unreliable, the first rule is suitable. Moreover, he showed that the second rule needed a modification, pointing out that using 50% water saturation as the reference value in Craig's second rule is unrealistic. That author instead defined a reference crossover saturation (RCS). According to the modified Craig's second rule, the crossover point of the relative permeability curves lies to the right of the RCS in water-wet rocks, whereas for oil-wet systems the crossover point is expected to lie to the left of the RCS. A modified Lak wettability index also exists, based on the areas below the water and oil relative permeability curves. [ 4 ]
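The two rules that feed into the index lend themselves to a simple screening calculation. The sketch below is a minimal illustration of Craig's first rule and the modified second rule as summarized above; it is not the Lak index formula itself (whose coefficients A and B are defined in the cited papers), and the function name, arguments, and example values are hypothetical.

```python
def classify_wettability(krw_at_sor, crossover_sw, rcs):
    """Screen rock wettability from relative permeability data.

    krw_at_sor   -- relative permeability to water at residual oil
                    saturation, normalized by the oil permeability at
                    irreducible water saturation (Craig's first rule).
    crossover_sw -- water saturation where krw equals kro.
    rcs          -- reference crossover saturation (modified second rule).
    """
    votes = []

    # Craig's first rule: krw(Sor) < 0.3 suggests water-wet;
    # > 0.5 (approaching 1.0) suggests oil-wet.
    if krw_at_sor < 0.3:
        votes.append("water-wet")
    elif krw_at_sor > 0.5:
        votes.append("oil-wet")

    # Modified Craig's second rule: a crossover to the right of the
    # RCS suggests water-wet, to the left suggests oil-wet.
    votes.append("water-wet" if crossover_sw > rcs else "oil-wet")

    return votes

# Example: a high end-point water permeability and a crossover left of
# the RCS both point toward an oil-wet rock.
print(classify_wettability(krw_at_sor=0.6, crossover_sw=0.45, rcs=0.55))
```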
https://en.wikipedia.org/wiki/Lak_wettability_index
The Lake Champlain Seaway was a canal project proposed in the late 19th century and considered as late as the 1960s to connect New York State's Hudson River and Quebec's St. Lawrence River with a deep-water canal. The objective was to allow easy ship traffic from New York City to Montreal through Lake Champlain, lowering transportation costs between the two cities. [ 1 ] Though supported by business groups in New York and Quebec, it proved economically unfeasible. Prohibitive costs (estimated at $100 million in 1900), [ 1 ] opposition from railroads, [ 2 ] and the diminishing utility of canal transportation prevented the project from advancing beyond the early planning stages. The Great Depression cut the project's planning budget, while World War II and the completion of the St. Lawrence Seaway further sidelined it. The growth of road and air transportation reduced the need for a canal, but the project was still under serious consideration as late as 1962. [ 3 ] As proposed, ships would have used a dredged channel in the Hudson River, transferred to an upgraded Champlain Canal, navigated Lake Champlain, traversed an upgraded Chambly Canal and St. Ours Canal, and traveled a dredged route up the Richelieu River to Montreal. [ 1 ] Today, the seaway's planned route is covered by the Lakes to Locks Passage.
https://en.wikipedia.org/wiki/Lake_Champlain_Seaway
A lake ball (also known as a surf ball, beach ball or spill ball) is a ball of debris found on ocean beaches and on lakes large enough to have wave action. The rolling motion of the waves gathers debris in the water and eventually forms the material into a ball. The materials vary from year to year and from location to location depending on the debris the motion gathers. [ 1 ] [ 2 ] [ 3 ] The earliest known reference to lake balls is in Henry David Thoreau's Walden: There also I have found, in considerable quantities, curious balls, composed apparently of fine grass or roots, of pipewort perhaps, from half an inch to four inches in diameter, and perfectly spherical. These wash back and forth in shallow water on a sandy bottom, and are sometimes cast on the shore. They are either solid grass, or have a little sand in the middle. At first you would say that they were formed by the action of the waves, like a pebble; yet the smallest are made of equally coarse materials, half an inch long, and they are produced only at one season of the year. Moreover, the waves, I suspect, do not so much construct as wear down a material which has already acquired consistency. They preserve their form when dry for an indefinite period. A specific type of lake ball, the larch ball, is a structure created when Western Larch needles floating in a lake become entangled into a spherical shape by the action of waves. They are most commonly known to form in Seeley Lake, Montana; however, they have also formed in similar places, including Clark Fork and lakes around Tracy, New Brunswick, such as Peltoma Lake, Big Kedron Lake, and Little Kedron Lake. Typical specimens are 3 to 4 inches (8 to 10 centimeters) in diameter; more rarely, larger ones are found. [ 4 ]
https://en.wikipedia.org/wiki/Lake_ball
A lake bifurcation occurs when a lake (a bifurcating lake) has outflows into two different drainage basins. In this case, the drainage divide cannot be defined exactly, as it is situated in the middle of the lake. Vesijako (a name that literally means "drainage divide") and Lummene are two nearby lakes in the Finnish Lakeland. Both drain in two directions: into the Kymijoki basin, which drains into the Gulf of Finland, and into the Kokemäenjoki basin, which drains into the Gulf of Bothnia. Similarly, the lakes Isojärvi and Inhottu in the Karvianjoki basin in the Satakunta region of western Finland both have two outlets: from Inhottu the waters flow into the Gulf of Bothnia through the Eteläjoki River in Pori and into Lake Isojärvi through the Pomarkunjoki River. From Lake Isojärvi the waters flow to the Gulf of Bothnia through the Pohjajoki River in Pori and through the Merikarvianjoki River in Merikarvia. The Karvianjoki basin formerly contained two other bifurcations, both eradicated by human actions. Another example is Bontecou Lake, a shallow, man-made bifurcation lake in Dutchess County, New York. [ 1 ] Lake Diefenbaker in Saskatchewan is a reservoir created by damming the South Saskatchewan River and the Qu'Appelle River. The lake continues to drain into the two rivers, but the Qu'Appelle receives a much enlarged flow (in essence, a diversion of flow from the South Saskatchewan) as a result of the damming. Both rivers eventually drain into Hudson Bay via Lake Winnipeg and the Nelson River. [ citation needed ] Also located in Saskatchewan is Wollaston Lake, which is the source of the Fond du Lac River, draining into the Arctic Ocean, and of the Cochrane River, draining into Hudson Bay and the Atlantic Ocean. [ citation needed ] Isa Lake in Yellowstone National Park is a natural bifurcated lake that drains into two oceans. Its eastern drainage is to the Gulf of Mexico (part of the Atlantic Ocean) via the Firehole River, while its western drainage is to the Pacific Ocean via the Lewis River. [ citation needed ] Peeler Lake in California's Hoover Wilderness is a natural bifurcated lake that lies along the Great Basin Divide. It has two outlets, one of which drains east into the Great Basin and one of which drains west to the Pacific Ocean. Lake Okeechobee in Florida is a particularly rare example of a trifurcation lake. Via the artificial Okeechobee Waterway, it flows east to the Atlantic Ocean through the St. Lucie River and west to the Gulf of Mexico through the Caloosahatchee River. Meanwhile, part of the lake's water naturally flows south through the Everglades into Florida Bay. As a result of this artificial trifurcation, the Eastern Continental Divide of North America terminates at the lake rather than further south near Miami. Heavenly Lake, on the border between North Korea and the People's Republic of China, is another example. Lake Pedder in Tasmania, as a result of damming, drains east as the Huon River and west as the Serpentine, a tributary of the Gordon.
https://en.wikipedia.org/wiki/Lake_bifurcation
A lake ecosystem or lacustrine ecosystem includes biotic (living) plants, animals and micro-organisms, as well as abiotic (non-living) physical and chemical interactions. [ 1 ] Lake ecosystems are a prime example of lentic ecosystems (lentic refers to stationary or relatively still freshwater, from the Latin lentus, which means "sluggish"), which include ponds, lakes and wetlands, and much of this article applies to lentic ecosystems in general. Lentic ecosystems can be compared with lotic ecosystems, which involve flowing terrestrial waters such as rivers and streams. Together, these two ecosystems are examples of freshwater ecosystems. Lentic systems are diverse, ranging from a small, temporary rainwater pool a few inches deep to Lake Baikal, which has a maximum depth of 1642 m. [ 2 ] The general distinction between pools/ponds and lakes is vague, but Brown [ 1 ] states that ponds and pools have their entire bottom surfaces exposed to light, while lakes do not. In addition, some lakes become seasonally stratified. Ponds and pools have two regions: the pelagic open water zone, and the benthic zone, which comprises the bottom and shore regions. Since lakes have deep bottom regions not exposed to light, these systems have an additional zone, the profundal. [ 3 ] These three areas can have very different abiotic conditions and, hence, host species that are specifically adapted to live there. [ 1 ] Two important subclasses of lakes are ponds, which typically are small lakes that intergrade with wetlands, and water reservoirs. Over long periods of time, lakes, or bays within them, may gradually become enriched by nutrients and slowly fill in with organic sediments, a process called succession. When humans use the drainage basin, the volume of sediment entering the lake can accelerate this process. The addition of sediments and nutrients to a lake is known as eutrophication. [ 4 ] Lake ecosystems can be divided into zones. One common system divides lakes into three zones. The first, the littoral zone, is the shallow zone near the shore. [ 5 ] This is where rooted wetland plants occur. The offshore is divided into two further zones, an open water zone and a deep water zone. In the open water zone (or photic zone) sunlight supports photosynthetic algae and the species that feed upon them. In the deep water zone, sunlight is not available and the food web is based on detritus entering from the littoral and photic zones. Some systems use other names: the offshore areas may be called the pelagic zone, the photic zone may be called the limnetic zone, and the aphotic zone may be called the profundal zone. Inland from the littoral zone, one can also frequently identify a riparian zone, which has plants still affected by the presence of the lake—this can include effects from windfalls, spring flooding, and winter ice damage. The production of the lake as a whole is the result of production from plants growing in the littoral zone, combined with production from plankton growing in the open water. Wetlands can be part of the lentic system, as they form naturally along most lake shores, the width of the wetland and littoral zone being dependent upon the slope of the shoreline and the amount of natural change in water levels within and among years. Often dead trees accumulate in this zone, either from windfalls on the shore or logs transported to the site during floods.
This woody debris provides important habitat for fish and nesting birds, as well as protecting shorelines from erosion. Light provides the solar energy required to drive the process of photosynthesis, the major energy source of lentic systems. [ 2 ] The amount of light received depends upon a combination of several factors. Small ponds may experience shading by surrounding trees, while cloud cover may affect light availability in all systems, regardless of size. Seasonal and diurnal considerations also play a role in light availability, because the shallower the angle at which light strikes water, the more light is lost by reflection. Once light has penetrated the surface, it may also be scattered by particles suspended in the water column. This scattering decreases the total amount of light as depth increases, an exponential decline described by Beer's law. [ 6 ] [ 3 ] [ 7 ] Lakes are divided into photic and aphotic regions, the former receiving sunlight and the latter lying below the depths of light penetration, leaving it devoid of photosynthetic capacity. [ 2 ] In relation to lake zonation, the pelagic and benthic zones are considered to lie within the photic region, while the profundal zone is in the aphotic region. [ 1 ] Temperature is an important abiotic factor in lentic ecosystems because most of the biota are poikilothermic, with internal body temperatures set by the surrounding system. Water can be heated or cooled through radiation at the surface and conduction to or from the air and surrounding substrate. [ 6 ] Shallow ponds often have a continuous temperature gradient from warmer waters at the surface to cooler waters at the bottom. In addition, temperature fluctuations can vary greatly in these systems, both diurnally and seasonally. [ 1 ] Temperature regimes are very different in large lakes. In temperate regions, for example, as air temperatures increase, the icy layer formed on the surface of the lake breaks up, leaving the water at approximately 4 °C. This is the temperature at which water has the highest density. As the season progresses, the warmer air temperatures heat the surface waters, making them less dense. The deeper waters remain cool and dense due to reduced light penetration. As the summer begins, two distinct layers become established, with such a large temperature difference between them that they remain stratified. The lowest zone in the lake is the coldest and is called the hypolimnion. The upper warm zone is called the epilimnion. Between these zones is a band of rapid temperature change called the thermocline. During the colder fall season, heat is lost at the surface and the epilimnion cools. When the temperatures of the two zones are close enough, the waters begin to mix again to create a uniform temperature, an event termed lake turnover. In the winter, inverse stratification occurs as water near the surface cools and freezes, while warmer but denser water remains near the bottom. A thermocline is established, and the cycle repeats. [ 1 ] [ 2 ] In exposed systems, wind can create turbulent, spiral-formed surface currents called Langmuir circulations. Exactly how these currents become established is still not well understood, but it is evident that the process involves some interaction between horizontal surface currents and surface gravity waves. The visible result of these rotations, which can be seen in any lake, is the surface foamlines that run parallel to the wind direction.
Positively buoyant particles and small organisms concentrate in the foamline at the surface, and negatively buoyant objects are found in the upwelling current between the two rotations. Objects with neutral buoyancy tend to be evenly distributed in the water column. [ 2 ] [ 3 ] This turbulence circulates nutrients in the water column, making it crucial for many pelagic species; its effect on benthic and profundal organisms, however, ranges from minimal to non-existent. [ 3 ] The degree of nutrient circulation is system specific, as it depends upon such factors as wind strength and duration, as well as lake or pool depth and productivity. Oxygen is essential for organismal respiration. The amount of oxygen present in standing waters depends upon: 1) the area of transparent water exposed to the air, 2) the circulation of water within the system, and 3) the amount of oxygen generated and used by organisms present. [ 1 ] In shallow, plant-rich pools there may be great fluctuations of oxygen, with extremely high concentrations occurring during the day due to photosynthesis and very low values at night, when respiration is the dominant process of primary producers. Thermal stratification in larger systems can also affect the amount of oxygen present in different zones. The epilimnion is oxygen rich because it circulates quickly, gaining oxygen via contact with the air. The hypolimnion, however, circulates very slowly and has no atmospheric contact. Additionally, fewer green plants exist in the hypolimnion, so there is less oxygen released from photosynthesis. In spring and fall, when the epilimnion and hypolimnion mix, oxygen becomes more evenly distributed in the system. Low oxygen levels are characteristic of the profundal zone, due to the accumulation of decaying vegetation and animal matter that "rains" down from the pelagic and benthic zones and to that zone's inability to support primary producers. [ 1 ] Phosphorus is important for all organisms because it is a component of DNA and RNA and is involved in cell metabolism as a component of ATP and ADP. Phosphorus is also not found in large quantities in freshwater systems; this limits photosynthesis in primary producers and makes phosphorus the main determinant of lentic system production. The phosphorus cycle is complex, but the model outlined below describes the basic pathways. Phosphorus mainly enters a pond or lake through runoff from the watershed or by atmospheric deposition. Upon entering the system, a reactive form of phosphorus is usually taken up by algae and macrophytes, which release a non-reactive phosphorus compound as a byproduct of photosynthesis. This phosphorus can drift downwards and become part of the benthic or profundal sediment, or it can be remineralized to the reactive form by microbes in the water column. Similarly, non-reactive phosphorus in the sediment can be remineralized into the reactive form. [ 2 ] Sediments are generally richer in phosphorus than lake water, however, indicating that this nutrient may have a long residence time there before it is remineralized and re-introduced to the system. [ 3 ] Bacteria are present in all regions of lentic waters. Free-living forms are associated with decomposing organic material and with biofilms on the surfaces of rocks and plants, are suspended in the water column, and occur in the sediments of the benthic and profundal zones. Other forms are associated with the guts of lentic animals as parasites or in commensal relationships.
[ 3 ] Bacteria play an important role in system metabolism through nutrient recycling, [ 2 ] which is discussed in the Trophic Relationships section. Algae, including both phytoplankton and periphyton, are the principal photosynthesizers in ponds and lakes. [ 8 ] Phytoplankton are found drifting in the water column of the pelagic zone. Many species have a higher density than water, which would cause them to sink into the benthos. To combat this, phytoplankton have developed density-changing mechanisms, forming vacuoles and gas vesicles or changing their shapes to induce drag and thus slow their descent. [ 9 ] A very sophisticated adaptation, utilized by a small number of species, is a tail-like flagellum that can adjust vertical position and allow movement in any direction. [ 2 ] Phytoplankton can also maintain their presence in the water column by being circulated in Langmuir rotations. [ 3 ] Periphytic algae, on the other hand, are attached to a substrate. In lakes and ponds, they can cover all benthic surfaces. Both groups of algae are important as food sources and as oxygen providers. [ 2 ] Aquatic plants live in both the benthic and pelagic zones and can be grouped according to their manner of growth: (1) emergent = rooted in the substrate but with leaves and flowers extending into the air; (2) floating-leaved = rooted in the substrate but with floating leaves; (3) submersed = growing beneath the surface; and (4) free-floating macrophytes = not rooted in the substrate and floating on the surface. [ 1 ] These various forms of macrophytes generally occur in different areas of the benthic zone, with emergent vegetation nearest the shoreline, then floating-leaved macrophytes, followed by submersed vegetation. Free-floating macrophytes can occur anywhere on the system's surface. [ 2 ] Aquatic plants are more buoyant than their terrestrial counterparts because freshwater is denser than air. This makes structural rigidity unimportant in lakes and ponds (except in the aerial stems and leaves). Thus, the leaves and stems of most aquatic plants use less energy to construct and maintain woody tissue, investing that energy into fast growth instead. [ 1 ] In order to contend with stresses induced by wind and waves, plants must be both flexible and tough. Light, water depth, and substrate type are the most important factors controlling the distribution of submerged aquatic plants. [ 10 ] Macrophytes are sources of food, oxygen, and habitat structure in the benthic zone, but cannot grow below the depth of the euphotic zone and hence are not found deeper. [ 1 ] [ 7 ] Zooplankton are tiny animals suspended in the water column. Like phytoplankton, these species have developed mechanisms that keep them from sinking to deeper waters, including drag-inducing body forms and the active flicking of appendages such as antennae or spines. [ 1 ] Remaining in the water column may have its advantages in terms of feeding, but this zone's lack of refugia leaves zooplankton vulnerable to predation. In response, some species, especially Daphnia sp., make daily vertical migrations in the water column by passively sinking to the darker lower depths during the day and actively moving towards the surface during the night. Also, because conditions in a lentic system can be quite variable across seasons, zooplankton have the ability to switch from producing regular eggs to resting eggs when there is a lack of food, when temperatures fall below 2 °C, or when predator abundance is high.
These resting eggs have a diapause, or dormancy period, that allows the zooplankton to hatch when conditions are more favorable to survival. [ 11 ] The invertebrates that inhabit the benthic zone are numerically dominated by small species and are species-rich compared to the zooplankton of the open water. They include crustaceans (e.g. crabs, crayfish, and shrimp), molluscs (e.g. clams and snails), and numerous types of insects. [ 2 ] These organisms are mostly found in the areas of macrophyte growth, where the richest resources, the most highly oxygenated water, and the warmest portion of the ecosystem are found. The structurally diverse macrophyte beds are important sites for the accumulation of organic matter and provide an ideal area for colonization. The sediments and plants also offer a great deal of protection from predatory fishes. [ 3 ] Very few invertebrates are able to inhabit the cold, dark, and oxygen-poor profundal zone. Those that can are often red in color, due to the presence of large amounts of hemoglobin, which greatly increases the amount of oxygen carried to cells. [ 1 ] Because the concentration of oxygen within this zone is low, most species construct tunnels or burrows in which they can hide and use the minimum movement necessary to circulate water through them, drawing oxygen past their bodies without expending too much energy. [ 1 ] Fish have a range of physiological tolerances that vary by species. They have different lethal temperatures, dissolved oxygen requirements, and spawning needs that are based on their activity levels and behaviors. Because fish are highly mobile, they are able to deal with unsuitable abiotic factors in one zone by simply moving to another. A detrital feeder in the profundal zone, for example, that finds the oxygen concentration has dropped too low may feed closer to the benthic zone. A fish might also alter its residence during different parts of its life history: hatching in a sediment nest, then moving to the weedy benthic zone to develop in a protected environment with food resources, and finally into the pelagic zone as an adult. Other vertebrate taxa inhabit lentic systems as well. These include amphibians (e.g. salamanders and frogs), reptiles (e.g. snakes, turtles, and alligators), and a large number of waterfowl species. [ 7 ] Most of these vertebrates spend part of their time in terrestrial habitats and thus are not directly affected by abiotic factors in the lake or pond. Many fish species are important both as consumers and as prey for the larger vertebrates mentioned above. Lentic systems gain most of their energy from photosynthesis performed by aquatic plants and algae. [ 12 ] This autochthonous process involves the combination of carbon dioxide, water, and solar energy to produce carbohydrates and dissolved oxygen. Within a lake or pond, the potential rate of photosynthesis generally decreases with depth due to light attenuation. [ 13 ] Photosynthesis, however, is often low in the top few millimeters of the surface, likely due to inhibition by ultraviolet light. The exact depth and photosynthetic rate measurements of this curve are system specific and depend upon: 1) the total biomass of photosynthesizing cells, 2) the amount of light-attenuating materials, and 3) the abundance and frequency range of light-absorbing pigments (i.e. chlorophylls) inside the photosynthesizing cells.
[ 7 ] The energy created by these primary producers is important for the community because it is transferred to higher trophic levels via consumption. [ 14 ] The vast majority of bacteria in lakes and ponds obtain their energy by decomposing vegetation and animal matter. In the pelagic zone, dead fish and the occasional allochthonous input of litterfall are examples of coarse particulate organic matter (CPOM > 1 mm). Bacteria degrade these into fine particulate organic matter (FPOM < 1 mm) and then further into usable nutrients. Small organisms such as plankton are also characterized as FPOM. Very low concentrations of nutrients are released during decomposition because the bacteria are utilizing them to build their own biomass. Bacteria, however, are consumed by protozoa, which are in turn consumed by zooplankton, and so on up the trophic levels. Elements other than carbon, particularly phosphorus and nitrogen, are regenerated when protozoa feed on bacterial prey, [ 15 ] and in this way nutrients become once more available for use in the water column. This regeneration cycle is known as the microbial loop [ 16 ] and is a key component of lentic food webs. [ 2 ] The decomposition of organic materials can continue in the benthic and profundal zones if the matter falls through the water column before being completely digested by the pelagic bacteria. Bacteria are found in the greatest abundance in sediments, where they are typically 2–1000 times more prevalent than in the water column. [ 11 ] Benthic invertebrates, due to their high level of species richness, have many methods of prey capture. Filter feeders create currents via siphons or beating cilia to pull water, and its nutritional contents, towards themselves for straining. Grazers use scraping, rasping, and shredding adaptations to feed on periphytic algae and macrophytes. Members of the collector guild browse the sediments, picking out specific particles with raptorial appendages. Deposit-feeding invertebrates indiscriminately consume sediment, digesting any organic material it contains. Finally, some invertebrates belong to the predator guild, capturing and consuming living animals. [ 2 ] [ 17 ] The profundal zone is home to a unique group of filter feeders that use small body movements to draw a current through burrows that they have created in the sediment. This mode of feeding requires the least amount of motion, allowing these species to conserve energy. [ 1 ] A small number of invertebrate taxa are predators in the profundal zone; these species likely come from other regions and visit these depths only to feed. The vast majority of invertebrates in this zone are deposit feeders, getting their energy from the surrounding sediments. [ 17 ] The size, mobility, and sensory capabilities of fish allow them to exploit a broad prey base covering multiple zonation regions. Like invertebrates, fish feeding habits can be categorized into guilds. In the pelagic zone, herbivores graze on periphyton and macrophytes or pick phytoplankton out of the water column. Carnivores include fishes that feed on zooplankton in the water column (zooplanktivores); on insects at the water's surface, on benthic structures, or in the sediment (insectivores); and on other fish (piscivores). Fish that consume detritus and gain energy by processing its organic material are called detritivores. Omnivores ingest a wide variety of prey, encompassing floral, faunal, and detrital material.
Finally, members of the parasitic guild acquire nutrition from a host species, usually another fish or large vertebrate. [ 2 ] Fish taxa are flexible in their feeding roles, varying their diets with environmental conditions and prey availability. Many species also undergo a diet shift as they develop. Therefore, it is likely that any single fish occupies multiple feeding guilds within its lifetime. [ 18 ] As noted in the previous sections, the lentic biota are linked in a complex web of trophic relationships. These organisms can be loosely associated with specific trophic groups (e.g. primary producers, herbivores, primary carnivores, secondary carnivores, etc.). Scientists have developed several theories in order to understand the mechanisms that control the abundance and diversity within these groups. Very generally, top-down processes dictate that the abundance of prey taxa is dependent upon the actions of consumers from higher trophic levels. Typically, these processes operate only between two trophic levels, with no effect on the others. In some cases, however, aquatic systems experience a trophic cascade; for example, this might occur if primary producers experience less grazing by herbivores because those herbivores are suppressed by carnivores. Bottom-up processes are functioning when the abundance or diversity of members of higher trophic levels is dependent upon the availability or quality of resources from lower levels. Finally, a combined regulating theory, bottom-up:top-down, combines the predicted influences of consumers and resource availability. It predicts that the lower trophic levels will be most influenced by bottom-up forces, while top-down effects should be strongest at top levels. [ 2 ] The biodiversity of a lentic system increases with the surface area of the lake or pond. This is attributable to the higher likelihood that partly terrestrial species will find a larger system. Also, because larger systems typically have larger populations, the chance of extinction is decreased. [ 19 ] Additional factors, including temperature regime, pH, nutrient availability, habitat complexity, speciation rates, competition, and predation, have been linked to the number of species present within systems. [ 2 ] [ 10 ] Phytoplankton and zooplankton communities in lake systems undergo seasonal succession in relation to nutrient availability, predation, and competition. Sommer et al. [ 20 ] described these patterns as part of the Plankton Ecology Group (PEG) model, with 24 statements constructed from the analysis of numerous systems. The following subset of these statements, as explained by Brönmark and Hansson, [ 2 ] illustrates succession through a single seasonal cycle:

Winter
1. Increased nutrient and light availability result in rapid phytoplankton growth towards the end of winter. The dominant species, such as diatoms, are small and have quick growth capabilities.
2. These plankton are consumed by zooplankton, which become the dominant plankton taxa.

Spring
3. A clear water phase occurs, as phytoplankton populations become depleted due to increased predation by growing numbers of zooplankton.

Summer
4. Zooplankton abundance declines as a result of decreased phytoplankton prey and increased predation by juvenile fishes.
5. With increased nutrient availability and decreased predation from zooplankton, a diverse phytoplankton community develops.
6. As the summer continues, nutrients become depleted in a predictable order: phosphorus, silica, and then nitrogen. The abundance of the various phytoplankton species varies in relation to their biological need for these nutrients.
7. Small-sized zooplankton become the dominant type of zooplankton because they are less vulnerable to fish predation.

Fall
8. Predation by fishes is reduced due to lower temperatures, and zooplankton of all sizes increase in number.

Winter
9. Cold temperatures and decreased light availability result in lower rates of primary production and decreased phytoplankton populations.
10. Reproduction in zooplankton decreases due to lower temperatures and less prey.

The PEG model presents an idealized version of this succession pattern, while natural systems are known for their variation. [ 2 ] There is a well-documented global pattern that correlates decreasing plant and animal diversity with increasing latitude; that is to say, there are fewer species as one moves towards the poles. The cause of this pattern is one of the greatest puzzles for ecologists today. Theories for its explanation include energy availability, climatic variability, disturbance, competition, etc. [ 2 ] Despite this global diversity gradient, the pattern can be weak for freshwater systems compared to global marine and terrestrial systems. [ 21 ] This may be related to size, as Hillebrand and Azovsky [ 22 ] found that smaller organisms (protozoa and plankton) did not follow the expected trend strongly, while larger species (vertebrates) did. They attributed this to the better dispersal ability of smaller organisms, which may result in wide distributions globally. [ 2 ] Lakes can be formed in a variety of ways, but the most common are discussed briefly below. The oldest and largest systems are the result of tectonic activities. The rift lakes in Africa, for example, are the result of seismic activity along the site of separation of two tectonic plates. Ice-formed lakes are created when glaciers recede, leaving behind irregularities in the landscape that are then filled with water. Finally, oxbow lakes are fluvial in origin, resulting when a meandering river bend is pinched off from the main channel. [ 2 ] All lakes and ponds receive sediment inputs. Since these systems are not really expanding, it is logical to assume that they will become increasingly shallow, eventually becoming wetlands or terrestrial vegetation. The length of this process should depend upon a combination of depth and sedimentation rate. Moss [ 7 ] gives the example of Lake Tanganyika, which reaches a depth of 1500 m and has a sedimentation rate of 0.5 mm/yr. Assuming that sedimentation is not influenced by anthropogenic factors, this system should go extinct in approximately 3 million years. Shallow lentic systems might also fill in as swamps encroach inward from the edges. These processes operate on a much shorter timescale, taking hundreds to thousands of years to complete the extinction process. [ 7 ] Sulfur dioxide and nitrogen oxides are naturally released from volcanoes, organic compounds in the soil, wetlands, and marine systems, but the majority of these compounds come from the combustion of coal, oil, and gasoline and the smelting of sulfur-containing ores. [ 3 ] These substances dissolve in atmospheric moisture and enter lentic systems as acid rain. [ 1 ] Lakes and ponds whose bedrock is rich in carbonates have a natural buffer, resulting in no alteration of pH.
Systems without this bedrock, however, are very sensitive to acid inputs because they have a low neutralizing capacity, resulting in pH declines even with only small inputs of acid. [ 3 ] At a pH of 5–6, algal species diversity and biomass decrease considerably, leading to an increase in water transparency – a characteristic feature of acidified lakes. As the pH falls further, all fauna becomes less diverse. The most significant effect is the disruption of fish reproduction: the population eventually comes to consist of a few old individuals that ultimately die, leaving the system without fishes. [ 2 ] [ 3 ] Acid rain has been especially harmful to lakes in Scandinavia, western Scotland, west Wales, and the northeastern United States. Eutrophic systems contain a high concentration of phosphorus (~30 μg/L), nitrogen (~1500 μg/L), or both. [ 2 ] Phosphorus enters lentic waters from sewage treatment effluents, discharge from raw sewage, or runoff from farmland. Nitrogen mostly comes from agricultural fertilizers via runoff or leaching and subsequent groundwater flow. This increase in the nutrients required by primary producers results in a massive increase in phytoplankton growth, termed a "plankton bloom." This bloom decreases water transparency, leading to the loss of submerged plants. [ 23 ] The resultant reduction in habitat structure has negative impacts on the species that utilize it for spawning, maturation, and general survival. Additionally, the large number of short-lived phytoplankton results in a massive amount of dead biomass settling into the sediment. [ 7 ] Bacteria need large amounts of oxygen to decompose this material, thus reducing the oxygen concentration of the water. This is especially pronounced in stratified lakes, where the thermocline prevents oxygen-rich water from the surface from mixing with lower levels. Low-oxygen or anoxic conditions preclude the existence of many taxa that are not physiologically tolerant of these conditions. [ 2 ] Invasive species have been introduced to lentic systems through both purposeful events (e.g. stocking game and food species) and unintentional events (e.g. in ballast water). These organisms can affect natives via competition for prey or habitat, predation, habitat alteration, hybridization, or the introduction of harmful diseases and parasites. [ 6 ] With regard to native species, invaders may cause changes in size and age structure, distribution, density, and population growth, and may even drive populations to extinction. [ 2 ] Examples of prominent invaders of lentic systems include the zebra mussel and sea lamprey in the Great Lakes.
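As a check on the infilling timescale given earlier for Lake Tanganyika, the arithmetic is a direct unit conversion:

$$ t \approx \frac{\text{maximum depth}}{\text{sedimentation rate}} = \frac{1500\ \text{m}}{0.5\ \text{mm/yr}} = \frac{1.5 \times 10^{6}\ \text{mm}}{0.5\ \text{mm/yr}} = 3 \times 10^{6}\ \text{yr}. $$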
https://en.wikipedia.org/wiki/Lake_ecosystem
Lake metabolism represents a lake's balance between carbon fixation (gross primary production) and biological carbon oxidation (ecosystem respiration). [ 1 ] Whole-lake metabolism includes the carbon fixation and oxidation from all organisms within the lake, from bacteria to fishes, and is typically estimated by measuring changes in dissolved oxygen or carbon dioxide throughout the day. [ 2 ] Ecosystem respiration in excess of gross primary production indicates that the lake receives organic material from the surrounding catchment, such as through stream or groundwater inflows or litterfall. Lake metabolism often controls the carbon dioxide emissions from or influx to lakes, but it does not account for all carbon dioxide dynamics, since inputs of inorganic carbon from the surrounding catchment also influence carbon dioxide within lakes. [ 3 ] [ 4 ] Estimates of lake metabolism typically rely on the measurement of dissolved oxygen or carbon dioxide, or measurements of a carbon or oxygen tracer, to estimate the production and consumption of organic carbon. Oxygen is produced and carbon dioxide consumed through photosynthesis, and oxygen is consumed and carbon dioxide produced through respiration. Here, organic matter is symbolized by glucose, though the chemical species produced and respired through these reactions vary widely. Photosynthesis: $6CO_2 + 6H_2O \xrightarrow{\text{light}} C_6H_{12}O_6 + 6O_2$ Respiration: $C_6H_{12}O_6 + 6O_2 \rightarrow 6CO_2 + 6H_2O$ Photosynthesis and oxygen production occur only in the presence of light, while the consumption of oxygen via respiration occurs in both the presence and absence of light. Lake metabolism terms include gross primary production (GPP), ecosystem respiration (ER), and net ecosystem production (NEP), defined below. Estimating lake metabolism requires approximating processes that influence the production and consumption of organic carbon by organisms within the lake. Cyclical changes on a daily scale occur in most lakes on Earth because sunlight is available for photosynthesis and the production of new carbon for only a portion of the day. Researchers can take advantage of this diel pattern to measure rates of change in carbon itself, or changes in dissolved gases such as carbon dioxide or oxygen, that occur on a daily scale. Although daily estimates of metabolism are most common, whole-lake metabolism can be integrated over longer time periods, such as seasonal or annual rates, by estimating a whole-lake carbon budget. The following sections highlight the most common ways to estimate lake metabolism across a variety of temporal and spatial scales and go over some of the assumptions of each of these methods. Measurement of diel changes in dissolved gases within the lake, also known as the "free-water" method, has quickly become the most common method of estimating lake metabolism since the wide adoption of autonomous sensors used to measure dissolved oxygen and carbon dioxide in water. [ 6 ] [ 7 ] [ 8 ] The free-water method is particularly popular since many daily estimates of lake metabolism can be collected relatively cheaply and can give insights into metabolic regimes during difficult-to-observe time periods, such as storm events. Measured changes in dissolved oxygen and carbon dioxide within a lake represent the sum of all organismal metabolism, from bacteria to fishes, after accounting for abiotic changes in dissolved gases.
Abiotic changes in dissolved gases include exchanges of dissolved gases between the atmosphere and lake surface, vertical or horizontal entrainment of water with differing concentrations (e.g. low-oxygen water below a lake's thermocline), or the import and export of dissolved gases from inflowing streams or a lake outlet. Abiotic changes can dominate the dissolved gas signal if the lake has a low metabolic rate (e.g. an oligotrophic lake, or a cloudy day), or if there is a large event that causes abiotic factors to exceed biotic ones (e.g. a wind event causing mixing and entrainment of low-oxygen water). Biotic signals in dissolved gases are most evident when the sun is shining and photosynthesis is occurring, resulting in the production of dissolved oxygen and consumption of carbon dioxide. The conversion of solar energy to chemical energy is termed gross primary production (GPP), and the dissipation of this energy through biological carbon oxidation is termed ecosystem respiration (ER). High-frequency (e.g. 10-minute interval) measurements of dissolved oxygen or carbon dioxide can be translated into estimates of GPP, ER, and the difference between the two, termed net ecosystem production (NEP), by fitting the high-frequency data to models of lake metabolism. The governing equation for estimating lake metabolism from a single sensor located in the upper mixed layer measuring dissolved oxygen (DO) is: $\frac{dDO}{dt} = GPP - ER + F$ where F is the flux of gases between the lake and the atmosphere. Additional abiotic gas flux terms can be added if those fluxes are deemed significant for a lake (e.g. mixing events, inflowing stream gases). Atmospheric gas exchange (F) is rarely measured directly and is typically modeled by estimating lake surface turbulence from wind-driven and convective mixing. Most often, F is estimated from measurements of wind speed and atmospheric pressure, and different models for estimating F can result in significantly different estimates of lake metabolic rates depending on the study lake. [ 9 ] Gross primary production is assumed to be zero during the night due to low or no light, and thus ER can be estimated from nighttime changes in dissolved oxygen (or carbon dioxide) after accounting for abiotic changes. Gross primary production can then be estimated by assuming that ER is equal during the day and night and accounting for dissolved oxygen changes during the day; however, this assumption may not be valid in every lake. [ 10 ] Extracting a high signal-to-noise ratio is key to obtaining good estimates of lake metabolism from the free-water technique, and there are choices that a researcher needs to make prior to collecting data and during data analysis to ensure accurate estimates. The location of dissolved gas collection (typically in the surface mixed layer), the number of sensors vertically and horizontally, [ 11 ] [ 12 ] [ 13 ] the frequency and duration of data collection, and the modeling methods need to be considered. [ 14 ] The free-water measurement techniques require mathematical models to estimate lake metabolism metrics from high-frequency dissolved gas measurements. These models range in complexity from simple algebraic models to depth-integrated modeling using more advanced statistical techniques. Several statistical techniques have been used to estimate GPP, R, and NEP, or parameters relating to these metabolism terms.
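The day/night bookkeeping behind the free-water method can be made concrete with a short sketch. The following is a minimal illustration under the assumptions stated above (GPP is zero at night, ER is equal day and night, and the atmospheric flux F is modeled externally); the function and variable names are illustrative, not from any published toolkit.

```python
import numpy as np

def free_water_metabolism(do, is_day, f_atm):
    """Toy daily metabolism estimate from hourly dissolved oxygen (DO).

    do     -- hourly DO concentrations (mg O2/L), length n
    is_day -- boolean array marking daylight for each hourly change, length n-1
    f_atm  -- modeled atmosphere-lake O2 flux per hour (mg O2/L), length n-1
    Returns (GPP, ER, NEP) in mg O2/L/day.
    """
    bio = np.diff(do) - f_atm              # biological part of dDO/dt
    er_hourly = -bio[~is_day].mean()       # ER from the night-time drawdown
    # Daytime change = GPP - ER, so add ER back to recover GPP.
    gpp_hourly = bio[is_day].mean() + er_hourly
    gpp = gpp_hourly * is_day.sum()        # GPP accrues only in daylight
    er = er_hourly * len(bio)              # ER runs around the clock
    return gpp, er, gpp - er               # NEP = GPP - ER
```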
The light and dark bottle method uses the same concept as the free-water method to estimate rates of metabolism: GPP occurs only during the day with solar energy, while ER occurs in both the presence and absence of light. [ 15 ] This method incubates lake water in two separate bottles, one that is clear and exposed to a natural or artificial light regime, and another that is sealed off from the light by wrapping the bottle in foil, painting it, or another method. Changes in carbon fixation or dissolved gases are then measured over a certain time period (e.g. several hours to a day) to estimate the rate of metabolism for specific lake depths or for an integrated lake water column. Carbon fixation is measured by injecting the radioactive carbon isotope 14 C into light and dark bottles and sampling the bottles over time; the samples are filtered onto filter paper, and the amount of 14 C incorporated into algal (and bacterial) cells is estimated by measuring the samples on a scintillation counter. The difference between the light and dark bottle 14 C can be considered the rate of primary productivity; however, due to non-photosynthetic uptake of CO 2 , there is debate as to whether dark bottles should be used with the 14 C method, or whether only a light bottle and a bottle treated with the algicide DCMU should be used. Rates of change in dissolved gases, either carbon dioxide or oxygen, need both the light and the dark bottles to estimate rates of productivity and respiration. Probably the most labor-intensive method of estimating a metric of lake metabolism is measuring all the inputs and outputs of either organic or inorganic carbon to a lake over a season or year, also known as a whole-lake carbon budget. Measuring all the inputs and outputs of carbon to and from a lake can be used to estimate net ecosystem production (NEP). [ 16 ] [ 17 ] Since NEP is the difference between gross primary production and respiration (NEP = GPP − R), it can be viewed as the net biological conversion of inorganic carbon to organic carbon (and vice versa), and can thus be determined through a whole-lake mass balance of either inorganic or organic carbon. [ 16 ] NEP assessed through inorganic carbon (IC) or organic carbon (OC) can be estimated as: $NEP_{OC} = E_{OC} + S_{OC} - I_{OC}$ and $NEP_{IC} = I_{IC} - S_{IC} - E_{IC}$ where E is the export of OC through fluvial transport, and of IC through fluvial transport and carbon gas (e.g. CO 2 , CH 4 ) exchange between the lake surface and the atmosphere; S is storage in the lake sediments and water column for OC, and in the water column for IC; and I is the input of OC and IC from fluvial, surrounding wetland, and airborne pathways (e.g. atmospheric deposition, litterfall). A lake that receives more OC from the watershed than it exports downstream or accumulates in the water column and sediments (I OC > E OC + S OC ) indicates that there was net conversion of OC to IC within the lake and is thus net heterotrophic (negative NEP). Likewise, a lake that accumulates and exports more IC than was received from the watershed (S IC + E IC > I IC ) also indicates net conversion of OC to IC within the lake and is thus net heterotrophic. Although the free-water method likely contains some benthic metabolic signal, isolating the benthic contribution to whole-lake metabolism requires benthic-specific methods.
Analogous to the light and dark bottle methods described above, lake sediment cores can be collected, and changes in dissolved oxygen or carbon fixation can be used to estimate rates of primary productivity and respiration. Relatively new methods isolate the sediment-water interface with transparent domes and measure changes in dissolved oxygen in situ, a hybrid between the free-water method and the light-dark bottle method. [ 18 ] These in-situ benthic chamber methods allow for relatively easy multi-day estimates of benthic metabolism, which helps the researcher determine how benthic metabolism changes with varying weather patterns and lake characteristics. Extrapolating site- or depth-specific measurements to the entire lake can be problematic, as there can be significant metabolic variability both vertically and horizontally within a lake [ 11 ] (see variability section). For example, many lake metabolism studies have only a single epilimnetic estimate of metabolism; however, this may overestimate metabolic characteristics of the lake, such as NEP, depending on the ratio of mixed layer depth to light extinction depth. [ 12 ] [ 19 ] Averaging daily metabolism estimates over longer time periods may help overcome some of these single-site extrapolation issues, [ 11 ] but one must carefully consider the implications of the metabolic estimates and not overextrapolate the measurements. Organismal metabolic rate, or the rate at which organisms assimilate, transform, and expend energy, is influenced by a few key constituents, namely light, nutrients, temperature, and organic matter. The influence of these constituents on organismal metabolism ultimately governs metabolism at the whole-lake scale and can dictate whether a lake is a net source or sink of carbon. In the following sections, we describe the relationship between these key constituents and organismal and ecosystem-level metabolism. Although the relationships between organisms and the constituents described here are well established, the interacting effects of constituents on metabolic rates, from organisms to lake ecosystems, make predicting changes in metabolism across lakes or within lakes through time difficult. Many of these complex interacting effects will be discussed in the spatial and temporal variability section. Temperature is a strong controlling factor on biochemical reaction rates and biological activity. Optimal temperature varies across aquatic organisms, as some organisms are more cold-adapted while others prefer warmer habitats. There are rare cases of extreme thermal tolerance in hypersaline Antarctic lakes (e.g. Don Juan Pond ) or hot springs (e.g. Fly Geyser ); however, most lake organisms on Earth reside at temperatures ranging from 0 to 40 °C. Metabolic rates typically scale exponentially with temperature; however, the activation energies for primary productivity and respiration often differ, with photosynthesis having a lower activation energy than aerobic respiration. These differences in activation energy could have implications for the net metabolic balance within lake ecosystems as the climate warms. For example, Scharfenberger et al. (2019) [ 20 ] show that increasing water temperature resulting from climate change could switch lakes from being net autotrophic to net heterotrophic due to differences in activation energy; however, the temperature at which they switch depends on the amount of nutrients available.
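A minimal sketch of this argument uses Boltzmann-Arrhenius scaling for both fluxes. The base rates and activation energies below are illustrative placeholders (respiration is simply given the steeper temperature dependence described above), not values from the cited study.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius(rate_20c, e_act, temp_c):
    """Scale a metabolic rate from 20 degrees C using activation energy e_act (eV)."""
    t, t0 = temp_c + 273.15, 293.15
    return rate_20c * np.exp(-(e_act / K_B) * (1.0 / t - 1.0 / t0))

# Respiration (ER) is given a higher activation energy than photosynthesis
# (GPP), so warming raises ER faster and NEP = GPP - ER eventually turns
# negative: the lake flips from net autotrophic to net heterotrophic.
for temp in (15, 20, 25, 30):
    gpp = arrhenius(rate_20c=10.0, e_act=0.32, temp_c=temp)
    er = arrhenius(rate_20c=9.0, e_act=0.65, temp_c=temp)
    print(f"{temp:2d} C  GPP={gpp:5.1f}  ER={er:5.1f}  NEP={gpp - er:+5.1f}")
```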
The amount of material available for assimilation into organismal cells controls the rate of metabolism from the cellular to the lake ecosystem level. In lakes, phosphorus and nitrogen are the most common limiting nutrients of primary production and ecosystem respiration. Foundational work on the positive relationship between phosphorus concentration and lake eutrophication resulted in legislation that limited the amount of phosphorus in laundry detergents , among other regulations. [ 21 ] [ 22 ] Although phosphorus is often used as a predictor of lake ecosystem productivity and excess phosphorus as an indicator of eutrophication, many studies show that metabolism is co-limited by phosphorus and nitrogen or limited by nitrogen alone. [ 23 ] The balance between phosphorus, nitrogen, and other nutrients, termed ecological stoichiometry , can dictate rates of organismal growth and whole-lake metabolism through cellular requirements of these essential nutrients mediated by life-history traits. For example, fast-growing cladocerans have a much lower nitrogen to phosphorus ratio (N:P) than copepods , mostly due to the high amount of phosphorus-rich RNA in their cells used for rapid growth. Cladocerans residing in lakes with high N:P ratios relative to cladoceran body stoichiometry will be limited in growth and metabolism, with effects on whole-lake metabolism. Furthermore, cascading effects from food web manipulations can cause changes in productivity through changes to nutrient stoichiometry. For example, piscivore addition can reduce predation pressure on fast-growing, low-N:P cladocerans, which increase in population rapidly, retain phosphorus in their cells, and can cause a lake to become phosphorus limited, consequently reducing whole-lake primary productivity. Solar energy is required for converting carbon dioxide and water into organic matter, otherwise known as photosynthesis. As with temperature and nutrients, different algae have different rates of metabolic response to increasing light and also different optimal light conditions for growth, as some algae are better adapted to darker environments while others outcompete in brighter conditions. Light can also interact with nutrients to affect species-specific algal productivity responses to increasing light. [ 24 ] These different responses at the organismal level propagate up to influence metabolism at the ecosystem level. [ 25 ] [ 26 ] Even in low-nutrient lakes where nutrients would be expected to be the limiting resource for primary productivity, light can still be the limiting resource, which has cascading negative effects on higher trophic levels such as fish productivity. [ 27 ] Variability in light across different lake zones and within a lake through time creates patchiness in productivity both spatially and temporally. In addition to controlling primary productivity, sunlight can also influence rates of respiration by partially oxidizing organic matter, which can make it easier for bacteria to break down and convert into carbon dioxide. This partial photooxidation essentially increases the amount of organic matter that is available for mineralization. [ 28 ] In some lakes, complete or partial photooxidation can account for a majority of the conversion from organic to inorganic matter; however, its proportion relative to bacterial respiration varies greatly among lakes.
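The species-specific light responses described above are often modeled with saturating photosynthesis–irradiance (P–I) curves. A minimal sketch using the Jassby–Platt formulation; the two taxa and all parameter values are hypothetical:

```python
import numpy as np

def photosynthesis_rate(irradiance, p_max, alpha):
    """Jassby-Platt P-I curve: P = Pmax * tanh(alpha * I / Pmax)."""
    return p_max * np.tanh(alpha * irradiance / p_max)

light = np.array([25.0, 100.0, 400.0, 800.0])  # umol photons m^-2 s^-1
# A low-light specialist (steep initial slope, low Pmax) versus a
# high-light specialist (shallow slope, high Pmax).
for name, p_max, alpha in [("low-light taxon ", 5.0, 0.08),
                           ("high-light taxon", 12.0, 0.02)]:
    print(name, np.round(photosynthesis_rate(light, p_max, alpha), 2))
```

At low irradiance the low-light specialist fixes carbon faster, while at high irradiance the high-light specialist dominates; this is the organismal difference that, scaled up, shapes ecosystem-level metabolism.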
Primary and secondary consumers in lakes require organic matter (either from plants or animals) to maintain organismal function. Organic matter such as tree leaves, dissolved organic matter, and algae provides essential resources to these consumers and in the process increases lake ecosystem respiration rates through the conversion of organic matter to cellular growth and organismal maintenance. Some sources of organic matter may affect the availability of other constituents. For example, dissolved organic matter often darkens lake water, which reduces the amount of light available in the lake, thus reducing primary production. However, increases in organic matter loading to a lake can also increase the nutrients that are associated with the organic matter, which can stimulate primary production and respiration. Increased dissolved organic matter loading can therefore create tradeoffs between increasing light limitation and release from nutrient limitation. This tradeoff can create non-linear relationships between lake primary production and dissolved organic matter loading, depending on how many nutrients are associated with the organic matter and how quickly the dissolved organic matter blocks out light in the water column. [ 29 ] [ 30 ] [ 31 ] [ 32 ] At low dissolved organic matter concentrations, increases in dissolved organic matter bring associated nutrients that enhance GPP. [ 29 ] [ 30 ] [ 31 ] But as dissolved organic matter continues to increase, the reduction in light from the darkening of the lake water suppresses GPP as light becomes the limiting resource for primary productivity. [ 29 ] [ 30 ] [ 31 ] Differences in the magnitude and location of maximum GPP in response to increased DOC load are hypothesized to arise from the ratio of DOC to nutrients entering the lake as well as the effect of DOC on the lake's light climate. [ 31 ] [ 33 ] The darkening of the lake water can also change thermal regimes within the lake, as darker waters typically mean that warmer waters remain at the top of the lake while cooler waters are at the bottom. This change in heat energy distribution can affect the rates of pelagic and benthic productivity (see Temperature above) and change water column stability, with impacts on the vertical distribution of nutrients and, therefore, on the vertical distribution of metabolic rates. Other lake constituents can influence lake metabolic rates, including CO2 concentration, pH, salinity, and silica, among others. CO2 can be a limiting (or co-limiting, along with other nutrients) resource for primary productivity [ 35 ] and can promote more intense phytoplankton blooms. [ 36 ] Some algal species, such as chrysophytes, may lack carbon-concentrating mechanisms or the ability to use bicarbonate as a source of inorganic carbon for photosynthesis; thus, elevated levels of CO2 may increase their rates of photosynthesis. During algal blooms , elevated dissolved CO2 ensures that CO2 is not a limiting resource for growth, since rapid increases in production deplete CO2 and raise pH. Changes in pH at short time scales (e.g. sub-daily) from spikes in primary productivity may cause short-term reductions in bacterial growth and respiration, but at longer timescales bacterial communities can adapt to elevated pH. [ 37 ] [ 38 ] Salinity can also cause changes in the metabolic rates of lakes through its impacts on individual metabolic rates and community composition. [ 39 ] [ 40 ] [ 41 ] Lake metabolic rates can be correlated either positively or negatively with salinity due to interactions of salinity with other drivers of ecosystem metabolism, such as flushing rates or droughts. [ 42 ] For example, Moreira-Turcq (2000) [ 43 ] found that excess precipitation over evaporation reduced salinity in a coastal lagoon, increased nutrient loading, and increased pelagic primary productivity. The positive relationship between primary productivity and salinity might thus be an indicator of changes in nutrient availability due to increased inflows. However, salinity increases from road salts [ 44 ] can cause toxicity in some lake organisms, [ 45 ] and extreme increases in salinity can restrict lake mixing, which could change the distribution of metabolic rates throughout the lake water column.
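A toy model of the hump-shaped GPP response to DOC loading described above; the functional forms (saturating nutrient gain times exponential light loss) and all coefficients are assumptions for illustration:

```python
import numpy as np

def gpp_response(doc, half_sat=5.0, attenuation=0.15):
    """Toy unimodal GPP response to DOC: Michaelis-Menten gain from
    DOC-associated nutrients times exponential loss from darkening."""
    nutrient_term = doc / (half_sat + doc)   # saturating nutrient subsidy
    light_term = np.exp(-attenuation * doc)  # light lost to darker water
    return nutrient_term * light_term

doc = np.linspace(0.1, 30.0, 300)  # DOC load, mg C/L (arbitrary scale)
gpp = gpp_response(doc)
print(f"GPP peaks at a DOC load of ~{doc[np.argmax(gpp)]:.1f} mg C/L")
```

Below the peak, added DOC subsidizes GPP via its associated nutrients; beyond it, darkening dominates and GPP declines, as in the cited studies.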
Metabolic rates in lakes and reservoirs are controlled by many environmental factors, such as light and nutrient availability, temperature, and water column mixing regimes. Thus, spatial and temporal changes in those factors cause spatial and temporal variability in metabolic rates, and each of those factors affects metabolism at different spatial and temporal scales. The variable contributions of different lake zones (i.e. littoral , limnetic , benthic ) to whole-lake metabolism depend mostly on patchiness in algal and bacterial biomass and on light and nutrient availability. In terms of the organisms contributing to metabolism in each of these zones, limnetic metabolism is dominated by phytoplankton, zooplankton, and bacterial metabolism, with low contributions from epiphytes and fish. Benthic metabolism can receive large contributions from macrophytes , macro- and microalgae, invertebrates, and bacteria. Benthic metabolism is usually highest in shallow littoral zones, or in clear-water shallow lakes, in which light reaches the bottom of the lake to stimulate primary production. In dark or turbid deep lakes, primary production may be restricted to shallower waters, and aerobic respiration may be reduced or non-existent in deeper waters due to the formation of anoxic deep zones. The degree of spatial heterogeneity in metabolic rates within a lake depends on lake morphometry, catchment characteristics (e.g. differences in land use throughout the catchment and inputs from streams), and hydrodynamic processes. For example, lakes with more intense hydrodynamic processes, such as strong vertical and lateral mixing, are more laterally and vertically homogeneous in their metabolic rates than highly stratified lakes. On the other hand, lakes with more developed littoral areas have greater lateral metabolic heterogeneity than lakes with a more circular shape and low proportions of shallow littoral areas. Light attenuation throughout the water column, in combination with thermal and chemical stratification and wind- or convection-driven turbulence, contributes to the vertical distribution of nutrients and organisms in the water column. In stratified lakes, organic matter and nutrients tend to be more concentrated in deeper layers, while light is more available in shallower layers. The vertical distribution of primary production responds to a balance between light and nutrient availability, while respiration occurs more independently of light and nutrients and more homogeneously with depth. [ 46 ] This often results in strong coupling of gross primary production (GPP) and ecosystem respiration (ER) in lake surface layers but weaker coupling at greater depths.
This means that ER rates are strongly dependent on primary production in shallower layers, while in deeper layers ER becomes more dependent on a mixture of organic matter from terrestrial sources and on the sedimentation of algal particles and organic matter produced in shallower layers. In lakes with a low concentration of nutrients in surface waters and with light penetration below the mixed layer , primary production is higher at intermediate depths, where there is sufficient light for photosynthesis and higher nutrient availability. [ 46 ] On the other hand, low-transparency polymictic lakes have higher primary production in near-surface layers, usually with a net autotrophic balance (GPP > ER) between primary production and respiration. [ 12 ] Laterally, heterogeneity within lakes is driven by differences in metabolic rates between the open-water limnetic zones and the more benthic-dominated littoral zones. Littoral areas are usually more complex and heterogeneous, in part because of their proximity to the terrestrial system, but also due to their low water volume and high sediment-to-water volume ratio. Thus, littoral zones are more susceptible to changes in temperature, inputs of nutrients and organic matter from the landscape and river inflows, wind shear mixing and wave action, shading from terrestrial vegetation, and resuspension of the sediments (Figure 1). Additionally, littoral zones usually have greater habitat complexity due to the presence of macrophytes, which serve as shelter, nursery, and feeding grounds for many organisms. Consequently, metabolic rates in littoral areas usually have high short-term variability and are typically greater than limnetic metabolic rates. [ 47 ] [ 11 ] In addition to spatial variability within lakes, whole-lake metabolic rates and their drivers also differ across lakes. Each lake has a unique set of characteristics depending on its morphometry, catchment properties, and hydrologic characteristics. These features affect lake conditions such as water colour, temperature, nutrients, organic matter, light attenuation, and vertical and horizontal mixing, with direct and indirect effects on lake metabolism. As lakes differ in the status of their constituents (e.g. light, nutrients, temperature, and organic matter), differences emerge in the magnitude and variability of metabolic rates among lakes. In the previous section ( Relation to Constituents ), we discussed the expected patterns of metabolic rates in response to variability in these influential constituents. Here, we discuss how whole-lake metabolism varies across lakes due to differences in these constituents as mediated by differences in lake morphometry, catchment properties, and water residence time . Lake morphometry (e.g. lake size and shape) and catchment properties (e.g. land use, drainage area, climate, and geological characteristics) determine the flux of external inputs of organic matter and nutrients per unit of lake water volume. As the ratio between catchment size and lake water volume (drainage ratio) increases, the flux of nutrients and organic matter from the surrounding terrestrial landscape generally increases. [ 48 ] That is, small lakes with relatively large catchments will receive more external inputs of nutrients and organic matter per unit of lake volume than large lakes with relatively small catchments, thus enhancing both primary production and respiration rates. In lakes with a small drainage ratio (i.e.
a relatively large lake surface area in relation to catchment area), metabolic processes are expected to be less dependent on external inputs coming from the surrounding catchment. Additionally, small lakes are less exposed to wind-driven mixing and typically have higher terrestrial organic matter inputs, which often results in shallower mixing depths and enhanced light attenuation, thus limiting primary production to the upper portions of small lakes. Considering lakes with similar catchment properties, small lakes are generally more net heterotrophic (GPP < ER) than large lakes, since their higher respiration rates are fueled by allochthonous organic matter (i.e. synthesized within the drainage area, but outside of the water body) entering the system and outpace primary production, which is limited to shallower lake layers. Catchment properties, namely land cover, land use, and geologic characteristics, influence lake metabolism through their impact on the quality of organic matter and nutrients entering the lake, as well as on wind exposure. The quality of the organic matter can affect light attenuation and, along with wind exposure, can influence heat and light distribution throughout the lake water column. Lakes in landscapes dominated by agriculture have higher nutrient inputs and lower organic matter inputs compared to lakes with a similar drainage ratio in landscapes dominated by forests. Thus, lakes in agriculture-dominated landscapes are expected to have higher primary production rates, more algal blooms , and excessive macrophyte biomass compared to lakes in forest-dominated landscapes ( Figure ). However, the effects of catchment size and catchment type are complex and interactive. Relatively small forested lakes are more shaded and protected from wind exposure and also receive high amounts of allochthonous organic matter. Thus, small forested lakes are generally more humic, with a shallow mixed layer and reduced light penetration. The high inputs of allochthonous organic matter (produced outside the lake) stimulate heterotrophic communities, such as bacteria, zooplankton, and fish, enhancing whole-lake respiration rates. Hence, small forested lakes are more likely to be net heterotrophic, with ER rates exceeding primary production rates in the lake. On the other hand, forested lakes with a low drainage ratio receive relatively few nutrients and little organic matter, typically resulting in clear-water lakes with low GPP and ER rates ( Table ). Another important difference among lakes that influences the variability of lake metabolism is the residence time of the water in the system, especially among lakes that are intensively managed by humans . Changes to lake level and flushing rates affect nutrient and organic matter concentrations, organism abundance, and rates of ecological processes such as the photodegradation of colored organic matter , thus affecting the magnitude and variability of metabolic rates. Endorheic lakes and lakes with intermediate hydraulic residence time (HRT) typically have high retention of nutrients and organic matter in the system, which favours the growth of primary producers and bacterial degradation of organic matter. [ 49 ] Thus, these types of lakes are expected to maintain relatively higher and less variable GPP and ER rates than lakes of the same trophic status with low residence time. On the other hand, lakes with long HRT are expected to have reduced metabolic rates due to lower inputs of nutrients and organic matter to the lake.
Finally, lentic systems that experience frequent and intense changes in water level and accelerated flushing rates have dynamics closer to those of lotic systems , with usually low GPP and ER rates, due to nutrients, organic matter, and algae being flushed out of the system during intense flushing events. On a daily scale, GPP rates are most affected by the diel cycle of photosynthetically active radiation , while ER is largely affected by changes in water temperature. [ 50 ] ER rates are also tied to the quantity and quality of the organic substrate and to the relative contributions of autotrophic and heterotrophic respiration, as indicated by studies of the patterns of night-time respiration (e.g. Sadro et al. 2014 [ 10 ] ). For example, bacterioplankton respiration can be higher during the day and in the first hours of the night, due to the higher availability of labile dissolved organic matter produced by phytoplankton. As the sun rises, there is a rapid increase in primary production in the lake, often making it autotrophic (NEP > 0) and drawing down the dissolved CO2 produced by carbon mineralization during the night. This continues until NEP reaches a peak, typically around the time of maximum light availability, after which NEP tends to fall steadily until the next day's sunrise. Day-to-day differences in incoming light and temperature, due to differences in the weather such as cloud cover and storms, affect rates of primary production and, to a lesser extent, respiration. [ 51 ] These weather variations also cause short-term variability in mixed layer depth, which in turn affects nutrient, organic matter, and light availability, as well as vertical and horizontal gas exchanges. Deep mixing reduces light availability but increases nutrient and organic matter availability in the upper layers. Thus, the effect of short-term variability in mixed layer depth on gross primary production (GPP) will depend on which factors are limiting in each lake at a given time: a deeper mixed layer could either increase or decrease GPP rates depending on the balance between nutrient and light limitation of photosynthesis ( Figure ). Responses in metabolic rates are as dynamic as the physical and chemical processes occurring in the lake, but changes in algal biomass are less variable, involving growth and loss over longer periods. High light and nutrient availability are associated with the formation of algal blooms in lakes; during these blooms GPP rates are very high, ER rates usually increase almost as much, and the ratio of GPP to ER is close to 1. Right after the bloom, GPP rates start to decrease but ER rates remain high due to the large pool of labile organic matter, which can lead to a rapid depletion of dissolved oxygen in the water column, resulting in fish kills. Seasonal variations in metabolism can be driven by seasonal variations in temperature, ice cover, rainfall, mixing and stratification dynamics, and community succession (e.g. phytoplankton control by zooplankton [ 52 ] ). Seasonal variations in lake metabolism will depend on how the seasons alter the inputs of nutrients and organic matter and the availability of light, and on which factors are limiting metabolic rates in each lake.
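A minimal sketch of the diel pattern described above: GPP tracking a half-sinusoid of daylight while ER stays roughly constant, so NEP turns positive after sunrise, peaks near maximum light, and falls negative at night. All magnitudes, and the fixed sunrise/sunset times, are illustrative assumptions:

```python
import numpy as np

hours = np.arange(0.0, 24.0)
sunrise, sunset = 6.0, 18.0

# Photosynthetically active radiation as a half-sine over the daylight hours.
par = np.where((hours >= sunrise) & (hours <= sunset),
               np.sin(np.pi * (hours - sunrise) / (sunset - sunrise)), 0.0)

gpp = 12.0 * par               # GPP tracks light (arbitrary units)
er = np.full_like(hours, 4.0)  # ER held roughly constant over the day
nep = gpp - er

print("hours with NEP > 0:", hours[nep > 0])  # autotrophic around midday
```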
Light is a primary driver of lake metabolism, so seasonality in light levels is an important driver of seasonal changes in lake metabolic rates. GPP rates are therefore expected to be higher during seasons such as spring and summer, when light levels are higher and days are longer. This is especially pronounced for lakes with light-limited GPP, for example more turbid or stained lakes. Seasonality in light levels also affects ER rates. Ecosystem respiration rates are usually coupled with GPP rates, so seasons with higher GPP will also show higher ER rates associated with the increased organic matter produced within the lake. Moreover, during seasons with higher light levels the photodegradation of organic matter is more pronounced, which stimulates microbial degradation, enhancing heterotrophic respiration rates. Most of the world's lakes freeze during the winter, [ 53 ] a low-irradiance period in which ice and snow cover limit light penetration into the water column. Light limitation is caused mainly by snow cover rather than by ice, which makes primary production strongly sensitive to snow cover in those lakes. [ 54 ] In addition to light limitation, low temperatures under ice also diminish metabolic rates, but not enough to halt metabolic processes. Therefore, the metabolic balance is usually negative during the majority of the ice season, leading to dissolved oxygen depletion. Shallow lakes in arid climates have little or no snow cover during the winter; thus, primary production sustained under ice can be enough to prevent dissolved oxygen depletion, as reported by Song and others [ 54 ] in a Mongolian lake. Despite the high proportion of the world's lakes that freeze during the winter, few studies have been conducted on lake metabolism under ice, mostly due to technical difficulties in sampling. [ 53 ] [ 55 ] [ 54 ] Lakes that are closer to the equator experience less seasonality in light intensity and daylight hours than lakes at higher latitudes (temperate and polar zones). Thus, lakes at higher latitudes are more likely to experience light limitation of primary production during low-light seasons (winter and autumn). Seasonal differences in temperature are also less important in the tropics than at higher latitudes. Thus, the direct effect of seasonal temperature variations on metabolic rates is more important in higher-latitude lakes than in tropical lakes ( Figure ). In turn, tropical and subtropical lakes are more likely to show seasonal variations following stratification and mixing dynamics and rainfall regimes (wet and dry seasons) than variations tied to the four astronomical or meteorological seasons (spring, summer, autumn, and winter), as in higher-latitude lakes. Seasonal changes in temperature and rainfall lead to seasonal changes in water column stability. During periods of low water column stability, a deeper mixed layer (total or partial mixing of the water column, depending on the lake) increases the inputs of nutrients and organic matter from deeper layers and through sediment resuspension, which reduces light availability. Conversely, during periods of strong water column stability, internal loading of nutrients, organic matter, and the associated bacteria into the water column is suppressed, while algal loss due to sinking is enhanced. Moreover, light availability during this period is higher, due to photobleaching, lower resuspension of sediments, and a shallower mixing depth, which exposes phytoplankton to a more light-rich environment.
Higher ER rates during periods of low water column stability, as a consequence of higher organic matter availability and the higher bacterial biomass associated with this organic matter, have been reported for many lakes around the world. [ 56 ] [ 57 ] [ 58 ] However, the responses of primary production rates to these seasonal changes have shown different behaviors in different lakes. As noted above, the responses of metabolic rates to those changes will depend on the limiting factors of primary production in each lake ( Figure ). During periods of low water column stability, upwelling of nutrient-rich waters can result in higher pelagic GPP rates, as has been observed in some tropical lakes. [ 59 ] [ 60 ] Conversely, during periods of low water column stability, GPP rates can be limited by low light availability, as has been observed in some temperate and subtropical lakes. [ 61 ] [ 62 ] The net metabolic balance is usually more negative during destratified periods, even in lakes in which the well-mixed season is the most productive period: despite the high GPP in these systems, ER rates are also enhanced by the increased availability of organic matter stocks from sediments and deeper waters. Seasonal differences in rainfall also affect metabolic rates. Increased precipitation promotes the entry of organic matter and nutrients into lakes, which can stimulate ER rates and either stimulate or inhibit GPP rates, depending on the balance between increased nutrients and lower light availability. On the other hand, lower precipitation also affects limnological conditions by reducing the water level and, thereby, increasing the concentrations of nutrients and chlorophyll, as well as changing the thermal stability of aquatic environments. These changes could also enhance ER and GPP rates. Thus, the degree to which metabolic rates respond to seasonal changes in rainfall will depend on lake morphometry, catchment properties, and the intensity and duration of the rainfall events. Lakes frequently exposed to strong storms, such as those in the typhoon areas of the Northwest Pacific Ocean, receive intense rainfall events that can last for several days. [ 63 ] During these storm seasons, a reduction in metabolic rates is expected due to reduced sunlight and the flushing of water and organisms. This reduction is expected to be more pronounced in GPP than in ER rates, resulting in a more heterotrophic NEP (GPP < ER). In a subtropical lake in Taiwan, for example, a decoupling of GPP and ER rates was observed during typhoon seasons, following a shift in the organic matter pool from autochthonous (organic matter produced within the lake) to allochthonous (organic matter produced outside the lake). [ 64 ] This suggests that ER rates were more resistant to the typhoon disturbance than GPP rates. Interannual variability in metabolic rates can be driven by extensive changes in the catchment or by directional and cyclical climate change and climate disturbances, such as the events associated with the El Niño Southern Oscillation (ENSO) . Those changes in the catchment, air temperature, and precipitation between years affect metabolic rates by altering nutrient and organic matter inputs to the lake, light attenuation, and mixing dynamics, and through the direct temperature dependence of metabolic processes. Increased precipitation increases the external loading of organic matter, nutrients, and sediments into lakes.
Moreover, increased discharge events promoted by increased rainfall can also alter mixing dynamics and cause the physical flushing of organisms. Lower precipitation combined with high evaporation rates also affects limnological conditions by reducing the water level, thereby increasing the concentrations of nutrients and chlorophyll as well as changing the thermal stability of aquatic environments. During warmer years, stronger water column stability limits the inputs of nutrients and organic matter to the photic zone. In contrast, during colder years, a less stable water column enhances resuspension of the sediments and the inputs of nutrients and organic matter from deeper waters. This lowers light availability while enhancing nutrient and organic matter availability. Thus, the effects of year-to-year differences in precipitation and temperature on metabolic rates will depend on the intensity and duration of these changes, and also on which factors are limiting GPP and ER in each water body. In lakes where GPP and ER are limited by nutrients and organic matter, wetter years can enhance GPP and ER rates due to higher inputs of nutrients and organic matter from the landscape. This will depend on whether the terrestrial inputs are promptly available to the primary producers and heterotrophic communities, or whether they enter the lake through deeper waters, where metabolic processes are very low or non-existent; in the latter case, the inputs only become available at the next water column mixing event. Thus, increases in metabolic rates due to rainfall also depend on the stratification and mixing dynamics, hydrology, and morphometry of the lake. On the other hand, drier years can also show enhanced GPP and ER rates if they are accompanied by lower water levels, which would lead to higher nutrient and organic matter concentrations. A lower water level is associated with a less stable water column and closer proximity to the sediments, and thus increased inputs of nutrients and organic matter from deeper waters. Also, a reduction in water level through evaporation leads to a concentration effect. In turn, during warmer years the water column is more stable and the mixed layer is shallower, reducing internal inputs of nutrients and organic matter to the mixed layer. Metabolic rates in this scenario will be lower in the upper mixed layer. In lakes with a photic zone extending deeper than the mixed layer, metabolic rates will be higher at intermediate depths, coinciding with the deep chlorophyll maximum. In lakes with primary production limited mostly by light availability, increases in rainfall could lead to lower light availability, associated with increased dissolved organic matter and total suspended matter. Consequently, increased rainfall would be associated with lower GPP, which would reduce the respiration rates associated with autochthonous production, leading to a decoupling of GPP and ER rates. [ 65 ] In addition, increased allochthonous organic matter availability during wet years can lead to higher ER, consequently driving the metabolic balance negative (NEP < 0). [ 49 ] Changes in annual precipitation can also affect the spatial variability of metabolic rates within lakes. Williamson and collaborators, [ 49 ] for example, found that in a hyper-eutrophic reservoir in North America, the relative spatial variability in GPP and ER rates was higher in a dry year than in a wet one.
These findings suggest that internal processes, such as internal loading, nutrient uptake, sedimentation, and resuspension, are more important to metabolic rates during dry years.
https://en.wikipedia.org/wiki/Lake_metabolism
Lakes380 – Our Lakes' Health: past, present, future is a New Zealand limnology research project focussed on determining the health, wellness and history of about 10 per cent of New Zealand's 3800 lakes , by collecting surface water , sediment samples and sediment cores and analysing them with many different techniques, including environmental DNA (eDNA) and other core scanning methods. By drawing on both traditional Māori knowledge and biophysical science, it was intended to provide a public resource to assist the development of restoration and management plans for these lakes. The project is jointly led by GNS Science and the Cawthron Institute and works with a wide range of New Zealand and international participants and partners. It was initially a five-year (2017–2022) programme, funded by an Endeavour Fund grant from the Ministry of Business, Innovation and Employment . In the "Public Statement", as part of the funding application, it was noted that there was a lack of scientific knowledge about the health of approximately 95% of New Zealand lakes and that an effective way to gain information would be to examine sediment cores to uncover their environmental history using the latest techniques, such as eDNA and high-resolution scanning. The data would characterise current lake health, explore rates and drivers of change over the last 1000 years and provide a deeper understanding to inform the restoration of the ecological vitality of the lakes. It was proposed that the data from this project would be used by "government agencies to undertake strategic assessments of water quality and health risk, prioritise mitigation strategies, characterise biodiversity , assess the distribution and impact of invasive species, and inform environmental policy. Regional councils, iwi/hapū and other communities will use this new knowledge to assist in setting informed and achievable restoration aspirations". [ 1 ] The project acknowledged the importance of building a close working relationship with iwi and hapū , drawing on the knowledge of Māori as a result of their historical relationships with the lakes. [ 1 ] The team is co-led by Susie Wood ( Cawthron Institute ) and Marcus Vandergoes ( GNS Science ) and has experts with a range of scientific expertise from universities and research organisations in New Zealand and overseas. It is supported by a Science, Advice and Implementation Group that provides guidance on strategy, quality and performance, and the prioritisation of research directions. The project also has national and international collaborators and is in partnership with over 20 New Zealand organisations. [ 2 ] Lake bottom sediment and water samples and sediment cores were gathered from New Zealand lakes, with the samples then analysed using a range of methods to determine what lived in the lake and its current health, and to explore how and why the health of the lake had changed. Susie Wood explained that the sediments build up over time in a lake and hold a record of the lake and its surrounding catchment, and that using a range of scientific techniques allowed the team to "recreate hundreds of years of history...like a storybook, going back in time". [ 3 ] Marcus Vandergoes from GNS Science said the cores were stored and scanned in a specially built facility, where the analysis of the samples showed past, current and future ecological and environmental change. He noted that some samples were sent to institutions around the world to be analysed and that half of each core was stored at −20 °C.
[ 4 ] In the introduction to a presentation at the European Geosciences Union 's General Assembly in 2021, members of the research team explained the difference between the traditional paleolimnological methods of gathering data on sediments and the application of environmental DNA techniques. While Lakes380 used both methods, a key part of the project involved using 16S rRNA gene metabarcoding to explore how the microbial community had changed, particularly with regard to human intervention. It was stated that combining these molecular methods with " hyperspectral scanning and pollen data" increased the understanding of when and why these changes occurred. [ 5 ] There was a significant social science aspect to the project, including a study of public access to lakes. In this subproject, the team aimed to "evaluate legal and practical access to lakes in the Lakes380 dataset for members of the public ... [with the lakes] ... selected based on geographic spread, altitude, species, catchment land cover, and cultural significance". It was shown that 33% of the lakes had good access, while 28% could not be legally accessed due to private property issues. [ 6 ] The importance of this work has been noted because "barriers to access lakes leads to the loss of lake knowledge, stories, cultural practices and enables lake degradation". [ 7 ] In 2019, Lakes380 scientists gathered data from four lowland shallow lakes in the Southland region with the goal of using the data to improve understanding of the current and historical conditions in the lakes. [ 8 ] In 2020, around 26 lakes in Waikato were sampled in a campaign regarded by Wood as the largest work of this sort ever undertaken in the region. Vandergoes said that each sample would provide important information about the nature and causes of change in the lakes. He explained that "Lake sediments are natural archives that continuously record environmental history, providing measures of current and historical aquatic ecosystems and water quality." [ 9 ] During the same 2020 field sampling campaign, Lakes380 joined with a team from the University of Waikato to gather samples from three lakes to determine the effect of prehistoric earthquake activity and provide information on the potential risk to the city of Hamilton . The study established that lake sediments in the area included layers of volcanic ash ( tephra ) and that five of these layers in the cores showed possible signs of liquefaction , indicating that there may have been previously unrecognised earthquakes in the Hamilton lowlands over the past 20,000 years. Professor Lowe from Waikato University said that although Hamilton was known for having a low to moderate earthquake risk, this might now need to be revised, and "by studying the nature of the tephra layers using CT scanning and geotechnical methods, we aim to calculate the intensity of shaking and develop a new understanding of seismic hazard in and around Hamilton". [ 10 ] On 1 July 2020, the Otago Daily Times reported that samples collected from Queenstown lakes had revealed history going back thousands of years, providing what University of Otago paleoclimatologist Chris Moy said was important information about how climate, environment and ecosystems in the area had changed over time. The sediment cores were analysed at the University of Otago, and other lakes in the region were also sampled.
[ 11 ] Sampling of lakes in the Rotorua area was undertaken in 2019 and 2021 to gather evidence on how the arrival of humans and the eruption of Mount Tarawera in 1886 had impacted the ecosystems. As of February 2021, the findings had shown that the bacterial communities within Lake Ōkataina had not returned to their pre-eruption state, and that in most of the lakes there had been an increase in algae – possibly due to changes in land use and the introduction of other animals and plants. Nicki Douglas, Te Arawa Trust Environmental Manager, said that although alarming, the results were expected, and that the information could be used to enforce better biosecurity to protect the lakes as taonga for future generations. [ 12 ] Lake Rotoiti , in the Tasman region, has been described by Wood as one of the most "pristine" lakes in the country, and she felt that working on lakes like this showed the importance of protecting them. [ 13 ] The Lakes380 project built in-depth relationships with iwi in the Wairarapa and Rangitikei districts as part of weaving together traditional knowledge, science and local history, and worked on a joint website to encourage conversations about the wellbeing of Wairarapa lakes. [ 14 ] Environment manager Rawiri Smith of Ngāti Kahungunu ki Wairarapa said the lake stories had provoked changes in thinking and behaviour based on "common humanity rather than through confirmation or enforcement", and Charlotte Šunde of the Cawthron Institute noted that " digital storytelling like this is a fairly new and very effective way of bringing lake stories to a wide audience, reaching far beyond the impact of scientific publications". [ 15 ] As the Lakes380 team travelled around Aotearoa , they worked closely with iwi in each region. For example, in 2019 Lakes380 scientists collaborated with Ngāti Kurī to study the health of the lakes in Northland , which has dune lakes that are highly threatened aquatic habitats. Harry Burkhardt, the chair of the Ngāti Kuri Trust, said the project would help to get systems in place that would "protect the area's unique biodiversity and enhance the iwi's relationship with the natural world". [ 16 ] In May 2021, Ngāti Apa ki te Rā Tō , as mana whenua and kaitiaki of the Nelson lakes, welcomed people to an information session to share knowledge of the work being done by Lakes380. Ngāti Apa ki te Rā Tō shared stories of their connections with the lakes, and the Lakes380 team showed how sediment cores are collected and how they allow the history of lakes to be explored. [ 17 ] When Ngāti Kahungunu ki Wairarapa launched a programme in August 2023 with the aim of understanding the water quality and diversity of Lake Wairarapa by combining "scientific techniques and mātauranga Māori", they involved the Cawthron Institute and GNS Science to continue work done previously with Lakes380 that had provided information on the drivers of degradation of other lakes in the area. Scanning of sediment cores in these lakes showed how the water quality had changed historically, [providing] "time-stamped data including what animals, plants, algae, and insects were present in and around the lake in each period and how changes in land use and management practices around the lake might have impacted the water quality."
[ 18 ] Examples from the period of post-European settlement included layers of charcoal in the cores caused by the burning of beech and podocarp forest, and the introduction of perch and trout, which were said by a leader of the team [to have] "caused a big shift in the communities of bacteria that lived in the lake, indicating these fish are a major driver of the severe cyanobacterial blooms the lake [experienced] most summers." Ra Smith, the Environmental Manager for the iwi, said the ultimate goal was for "Lake Wairarapa to obtain its own tino rangatiratanga – or flourishing growth within its own ecosystem." [ 18 ] In 2019, the Partnership Through Collaboration Trust collaborated with scientists from Lakes380 and created a series of workshops for students. For the workshops held in Whanganui , the key learning points were: how to collect and assess a sediment core; understanding what cyanobacteria are and how they can be managed; and the use of environmental DNA to measure the biodiversity of lakes. Learning the traditional knowledge of Māori of the lakes was facilitated by Mike Paki, who represented local iwi. [ 19 ] Another workshop involved lake sediment analysis in Rotorua, where students were taught how environmental DNA is utilised to gather data from sediment samples, and what the different layers in the core represent, for example the grey ash that is believed to have come from the eruption of Mount Tarawera in 1886. [ 20 ] Lakes380 supported a number of university students. As part of her studies toward a Master of Science , McKayla Holloway, from Victoria University and an employee of the Cawthron Institute, visited Lake Troup in Doubtful Sound to work as part of the Lakes380 team gathering samples. The work proved challenging because of boulders below the surface of the lake, but Holloway said it was a "mission to find out the secret lives of New Zealand's lakes ... [that are] ... used for recreation as well as a source of drinking water, irrigation, and electricity generation ... [and] .. provide essential habitat for our freshwater species and have high cultural significance". [ 21 ] Amy Bridges, a student at Victoria University of Wellington, worked with the Lakes380 team to follow up on oral histories which suggested that two lakes may have been affected by tsunamis in the past. Samples from Lake Whakaki in Hawke's Bay could not be dated because they were a mixture of reeds and shells. While those from Lake Moawhitu on D'Urville Island did not show direct proof of a tsunami, the work was acknowledged as being useful for future research that could examine the grain sizes to identify possible causes of this mixing. Bridges said, "it is still possible that a tsunami did occur, so further research focusing on sediment from other parts of the lake could provide more insight". [ 22 ] As a result of a partnership with GNS Science , BLAKE, a New Zealand organisation established in recognition of Sir Peter Blake , awarded an Ambassador Programme place to Soltice Morrison, noting that she would work with Lakes380 and "learn about environmental reconstruction and monitoring techniques and contribute to the effort to understand the health of New Zealand's lakes through time to the present day ... [assisting with] ... the field collection and laboratory analytics teams". [ 23 ] In the Queen's Birthday Honours 2021, Emeritus Professor Carolyn Burns CBE from Otago University was awarded Dame Companion of the New Zealand Order of Merit for services to ecological research.
[ 24 ] Burns was involved with the Lakes380 project through its Science, Advice and Implementation Group. She had also used the samples collected by the Lakes380 team to study the factors that allowed two invasive species of Daphnia to spread into some New Zealand lakes, concluding from the evidence that "it's looking as though dispersal might be related to distance to the nearest road – how accessible lakes are to humans". [ 25 ]
https://en.wikipedia.org/wiki/Lakes380
In mathematics , the lakes of Wada ( 和田の湖 , Wada no mizuumi ) are three disjoint connected open sets of the plane or open unit square with the counterintuitive property that they all have the same boundary . In other words, for any point selected on the boundary of one of the lakes, the other two lakes' boundaries also contain that point. More than two sets with the same boundary are said to have the Wada property ; examples include Wada basins in dynamical systems . This property is rare in real-world systems. The lakes of Wada were introduced by Kunizō Yoneyama ( 1917 , page 60), who credited the discovery to Takeo Wada . His construction is similar to the construction by Brouwer (1910) of an indecomposable continuum , and in fact it is possible for the common boundary of the three sets to be an indecomposable continuum. The lakes of Wada are formed by starting with a closed unit square of dry land and then digging 3 lakes according to the following rule: on day n = 1, 2, 3, ..., extend lake n mod 3 (with the lakes numbered 0, 1, 2) so that it remains open and connected and passes within a distance 1/n of every point of the remaining dry land. After an infinite number of days, the three lakes are still disjoint connected open sets, and the remaining dry land is the boundary of each of the 3 lakes. For example, over the first few days the lakes are dug as successively narrower channels, each passing ever closer to all of the remaining dry land. A variation of this construction can produce a countably infinite number of connected lakes with the same boundary: instead of extending the lakes in the order 1, 2, 0, 1, 2, 0, 1, 2, 0, ...., extend them in the order 0, 0, 1, 0, 1, 2, 0, 1, 2, 3, 0, 1, 2, 3, 4, ... and so on. Wada basins are certain special basins of attraction studied in the mathematics of non-linear systems . A basin having the property that every neighborhood of every point on the boundary of that basin intersects at least three basins is called a Wada basin , or said to have the Wada property . Unlike the lakes of Wada, Wada basins are often disconnected. An example of Wada basins is given by the Newton fractal describing the basins of attraction of the Newton–Raphson method for finding the roots of a cubic polynomial with distinct roots, such as z³ − 1. In chaos theory , Wada basins arise very frequently. Usually, the Wada property can be seen in the basins of attraction of dissipative dynamical systems, but the exit basins of Hamiltonian systems can also show the Wada property. In the context of the chaotic scattering of systems with multiple exits, basins of exits show the Wada property. M. A. F. Sanjuán et al. [ 1 ] have shown that in the Hénon–Heiles system the exit basins have this Wada property.
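The Newton fractal mentioned above is straightforward to reproduce numerically. A minimal sketch, assuming f(z) = z³ − 1: each starting point in the complex plane is iterated with Newton's method and labelled by the root it approaches, and the three resulting basins share the Wada property:

```python
import numpy as np

# The three cube roots of unity, i.e. the roots of z^3 - 1.
roots = np.array([1.0 + 0j, -0.5 + 0.5j * np.sqrt(3), -0.5 - 0.5j * np.sqrt(3)])

def newton_basins(z, iterations=40):
    """Iterate Newton's method for f(z) = z^3 - 1 on an array of starting
    points and return the index (0, 1, 2) of the nearest root."""
    for _ in range(iterations):
        z = z - (z**3 - 1.0) / (3.0 * z**2)
    return np.argmin(np.abs(z[..., None] - roots), axis=-1)

n = 400  # grid resolution; this grid happens to avoid the origin exactly
re, im = np.meshgrid(np.linspace(-2, 2, n), np.linspace(-2, 2, n))
basins = newton_basins(re + 1j * im)  # n x n array of basin labels
```

Plotting `basins` with three colors shows that wherever two basins appear to meet, the third is interleaved between them: every boundary point borders all three basins.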
https://en.wikipedia.org/wiki/Lakes_of_Wada
The Lalande Prize (French: Prix Lalande , also known as the Lalande Medal ) was an award for scientific advances in astronomy, given from 1802 until 1970 by the French Academy of Sciences . The prize was endowed by the astronomer Jérôme Lalande in 1801, a few years before his death in 1807, to enable the Academy of Sciences to make an annual award "to the person who makes the most unusual observation or writes the most useful paper to further the progress of Astronomy, in France or elsewhere." The awarded amount grew over time: in 1918 it was 1,000 francs , and by 1950 it was 10,000 francs. [ 1 ] It was combined with the Valz Prize (Prix Valz) in 1970 to create the Lalande-Valz Prize, and then with a further 122 foundation prizes in 1997, resulting in the establishment of the Grande Médaille . The Grande Médaille is not limited to the field of astronomy.
https://en.wikipedia.org/wiki/Lalande_Prize
A Lally column is a round or square thin-walled structural steel column filled with concrete, [ 1 ] oriented vertically to provide support to beams or timbers stretching over long spans . Historically, Lally columns were made of steel up to 1/4" in thickness; today, that has been reduced in some instances to 0.06". [ 2 ] As engineered structural load-bearing components, Lally columns must be installed according to their specific design specifications. A Lally column is formed of tubular steel. It is then filled with concrete , which carries a share of the compression load and helps prevent local buckling of the shell. [ 3 ] In addition to its low cost, an advantage of a generic Lally column over a custom structural steel column (or conventional I-beam ) is that it may be purchased in its tubular state and cut to length on a construction site with standard jobsite power tools, such as an angle grinder or a reciprocating saw fitted with appropriate metal-cutting blades. Lally columns are generally not as strong or durable as conventional structural steel columns. The term "Lally column" is sometimes confused with a screw jack , a temporary rather than permanent steel support. The Lally column is named after a U.S. inventor, John Lally, who owned a construction company that started production of these columns in the late 19th century. He resided in Waltham, Massachusetts and Boston during the period 1898–1907. He was issued four U.S. patents on composite columns: #614729, #869869, #901453, and #905888. Patent #869869 was assigned to the U.S. Column Company of Cambridge, Massachusetts . Early Lally columns were made from "standard" structural steel pipe, with wall thicknesses slightly less than 1/4". Modern Lally columns are typically made with 16 ga. (approx. 0.06") shells. [ 2 ] Modern Lally columns are therefore much lower in strength than the older ones (typically less than half the strength) and are also much more subject to damage by corrosion in moist environments. Modern Lally columns are primarily intended as somewhat stronger and more durable substitutes for wood posts in light-frame wood construction, although they are sometimes also used with steel beams.
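To see why the thinner modern shell matters, compare the second moment of area of the steel tube alone, which governs its buckling stiffness. A rough sketch; the 4-inch outer diameter is an assumed typical size, and the concrete core's contribution is deliberately ignored:

```python
import math

def tube_moment_of_inertia(outer_diameter_in, wall_in):
    """Second moment of area of a hollow circular tube, in in^4."""
    inner = outer_diameter_in - 2.0 * wall_in
    return math.pi / 64.0 * (outer_diameter_in**4 - inner**4)

i_old = tube_moment_of_inertia(4.0, 0.25)  # historical ~1/4" wall
i_new = tube_moment_of_inertia(4.0, 0.06)  # modern 16 ga. shell
print(f"old shell: {i_old:.2f} in^4, new shell: {i_new:.2f} in^4, "
      f"ratio {i_new / i_old:.2f}")  # shell-only stiffness ratio ~0.28
```

The steel shell alone loses well over half its stiffness; in an actual column the concrete core recovers part of this, consistent with the "less than half the strength" comparison above.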
https://en.wikipedia.org/wiki/Lally_column
LamaH (Large-Sample Data for Hydrology and Environmental Sciences) is a cross-state initiative for unified data preparation and collection in the field of catchment hydrology . Hydrological datasets of this kind are, for example, an integral component for creating flood forecasting models. LamaH datasets always consist of a combination of meteorological time series (e.g. precipitation, temperature) and hydrologically relevant catchment attributes (e.g. elevation, slope, forest area, soil, bedrock) aggregated over the respective catchment, together with the associated hydrological time series at the catchment outlet ( discharge ). By evaluating a large and heterogeneous sample ("large-sample") of catchments, it is possible to gain insights into the hydrological cycle that would probably not be achievable with local and small-scale studies. The structure of the dataset allows evaluation with machine learning methods ( deep learning ). The accompanying paper explains not only the data preparation but also the limitations, uncertainties and possible applications. [ 1 ] The LamaH datasets are quite similar to the CAMELS datasets, but additionally feature: [ 1 ] LamaH datasets are available for the following regions: CAMELS datasets are available for (ranked by publication date): Both the CAMELS and LamaH datasets are licensed under Creative Commons and are therefore available barrier-free to the public.
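A hedged sketch of how a large-sample dataset of this shape is typically assembled for machine learning: static catchment attributes are joined to the dynamic forcing and discharge series by a catchment identifier. The file and column names below are hypothetical placeholders, not the actual LamaH schema:

```python
import pandas as pd

# Hypothetical layout; consult the LamaH documentation for real paths/columns.
attributes = pd.read_csv("catchment_attributes.csv")  # one row per catchment
forcing = pd.read_csv("meteo_timeseries.csv", parse_dates=["date"])
discharge = pd.read_csv("discharge_timeseries.csv", parse_dates=["date"])

# Join dynamic forcing to the discharge target, then attach static attributes,
# yielding one (catchment, day) sample per row for model training.
samples = (forcing.merge(discharge, on=["catchment_id", "date"])
                  .merge(attributes, on="catchment_id"))
print(samples.head())
```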
https://en.wikipedia.org/wiki/LamaH
In physics , the Lamb shift , named after Willis Lamb , is an anomalous difference in energy between two electron orbitals in a hydrogen atom . The difference was not predicted by theory and cannot be derived from the Dirac equation , which predicts identical energies. Hence the Lamb shift is a deviation from theory, seen in the differing energies of the 2S1/2 and 2P1/2 orbitals of the hydrogen atom. The Lamb shift is caused by interactions between the virtual photons created through vacuum energy fluctuations and the electron as it moves around the hydrogen nucleus in each of these two orbitals. The Lamb shift has since played a significant role, through vacuum energy fluctuations, in the theoretical prediction of Hawking radiation from black holes . This effect was first measured in 1947 in the Lamb–Retherford experiment on the hydrogen microwave spectrum , [ 1 ] and this measurement provided the stimulus for renormalization theory to handle the divergences. The calculation of the Lamb shift by Hans Bethe in 1947 revolutionized quantum electrodynamics . [ 2 ] The effect was the harbinger of modern quantum electrodynamics, later developed by Julian Schwinger , Richard Feynman , Ernst Stueckelberg , Sin-Itiro Tomonaga and Freeman Dyson . Lamb won the Nobel Prize in Physics in 1955 for his discoveries related to the Lamb shift. Victor Weisskopf regretted that his insecurity about his mathematical abilities may have cost him a Nobel Prize when he did not publish results (which turned out to be correct) about what is now known as the Lamb shift. [ 3 ] In 1978, on Lamb's 65th birthday, Freeman Dyson addressed him as follows: "Those years, when the Lamb shift was the central theme of physics, were golden years for all the physicists of my generation. You were the first to see that this tiny shift, so elusive and hard to measure, would clarify our thinking about particles and fields." [ 4 ] The following heuristic derivation of the electrodynamic level shift follows Theodore A. Welton 's approach. [ 5 ] [ 6 ] The fluctuations in the electric and magnetic fields associated with the QED vacuum perturb the electric potential due to the atomic nucleus . This perturbation causes a fluctuation in the position of the electron , which explains the energy shift. The difference in potential energy is given by ΔV = V(r + δr) − V(r) = δr·∇V(r) + (1/2)(δr·∇)²V(r) + ⋯. Since the fluctuations are isotropic , ⟨δr⟩_vac = 0 and ⟨(δr·∇)²⟩_vac = (1/3)⟨(δr)²⟩_vac ∇², so one can obtain ⟨ΔV⟩ = (1/6)⟨(δr)²⟩_vac ⟨∇²V⟩_at. The classical equation of motion for the electron displacement (δr)_k induced by a single mode of the field of wave vector k and frequency ν is m d²(δr)_k/dt² = −e E_k, and this is valid only when the frequency ν is greater than ν₀ in the Bohr orbit, ν > πc/a₀: the electron is unable to respond to the fluctuating field if the fluctuations are slower than the natural orbital frequency in the atom. For the field mode E_k e^(−iνt) + h.c. oscillating at ν (where h.c. denotes the hermitian conjugate of the preceding term), the driven displacement is (δr)_k ≅ (e/(mν²)) E_k, and each mode carries a vacuum mean-square field ⟨E_k²⟩_vac = ħν/(2ε₀Ω), where Ω is some large normalization volume (the volume of the hypothetical "box" containing the hydrogen atom). Summing over all k and both polarizations gives ⟨(δr)²⟩_vac = (e²ħ)/(2π²ε₀m²c³) ∫ dν/ν. This result diverges if the integral is taken without limits at both large and small frequencies. As mentioned above, the method is expected to be valid only when ν > πc/a₀, or equivalently k > π/a₀. It is also valid only for wavelengths longer than the Compton wavelength , or equivalently k < mc/ħ.
Therefore, one can choose the upper and lower limits of the integral, and these limits make the result converge. For the atomic orbital and the Coulomb potential, the average ⟨∇²V⟩_at can be evaluated, since it is known that ∇²(−e²/(4πε₀r)) = (e²/ε₀) δ³(r), so that ⟨∇²V⟩_at = (e²/ε₀)|ψ(0)|². For p orbitals, the nonrelativistic wave function vanishes at the origin (at the nucleus), so there is no energy shift. But for s orbitals there is a finite value at the origin; for example, |ψ_2s(0)|² = 1/(8πa₀³), where the Bohr radius is a₀ = 4πε₀ħ²/(me²). Therefore, the difference of the potential energy becomes ΔE = (1/6)⟨(δr)²⟩_vac (e²/ε₀)|ψ_2s(0)|² = (α⁵/(6π)) mc² ln[1/(πα)], where α is the fine-structure constant . This shift is about 500 MHz, within an order of magnitude of the observed shift of 1057 MHz; the observed shift corresponds to an energy of only 7.00 × 10⁻²⁵ J, or 4.37 × 10⁻⁶ eV. Welton's heuristic derivation of the Lamb shift is similar to, but distinct from, the calculation of the Darwin term using Zitterbewegung , a contribution to the fine structure that is of lower order in α than the Lamb shift. [ 7 ] : 80–81 In 1947 Willis Lamb and Robert Retherford carried out an experiment using microwave techniques to stimulate radio-frequency transitions between the 2S1/2 and 2P1/2 levels of hydrogen. [ 8 ] By using frequencies lower than those of optical transitions, Doppler broadening could be neglected (Doppler broadening is proportional to frequency). The energy difference Lamb and Retherford found was a rise of about 1000 MHz (0.03 cm⁻¹) of the 2S1/2 level above the 2P1/2 level. This particular difference is a one-loop effect of quantum electrodynamics , and can be interpreted as the influence of virtual photons that have been emitted and re-absorbed by the atom. In quantum electrodynamics the electromagnetic field is quantized and, like the harmonic oscillator in quantum mechanics , its lowest state is not zero. Thus, there exist small zero-point oscillations that cause the electron to execute rapid oscillatory motions: the electron is "smeared out", and each radius value is changed from r to r + δr (a small but finite perturbation). The Coulomb potential is therefore perturbed by a small amount and the degeneracy of the two energy levels is removed. The new potential can be approximated (using atomic units ) as ⟨V(r + δr)⟩ ≈ V(r) + (1/6)⟨(δr)²⟩∇²V(r). The Lamb shift itself is given by ΔE_Lamb = α⁵ mc² (k(n,0)/(4n³)) for ℓ = 0, with k(n,0) around 13, varying slightly with n, and by ΔE_Lamb = α⁵ mc² (1/(4n³)) [k(n,ℓ) ± 1/(π(j + 1/2)(ℓ + 1/2))] for ℓ ≠ 0 and j = ℓ ± 1/2, with log(k(n,ℓ)) a small number (approx. −0.05) making k(n,ℓ) close to unity. For a derivation of ΔE_Lamb see, for example, [ 9 ] In 1947, Hans Bethe was the first to explain the Lamb shift in the hydrogen spectrum , and he thus laid the foundation for the modern development of quantum electrodynamics . Bethe was able to derive the Lamb shift by implementing the idea of mass renormalization, which allowed him to calculate the observed energy shift as the difference between the shift of a bound electron and the shift of a free electron. [ 10 ] The Lamb shift currently provides a measurement of the fine-structure constant α to better than one part in a million, allowing a precision test of quantum electrodynamics . Bethe's calculation of the Lamb shift has been said to have revolutionized quantum electrodynamics and to have "opened the way to the modern era of particle physics ". [ 2 ]
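The quoted energy is just E = hν for the observed 1057 MHz splitting; a quick numerical check using CODATA constants:

```python
h = 6.62607015e-34    # Planck constant, J s
ev = 1.602176634e-19  # joules per electronvolt

nu = 1057e6  # observed Lamb shift frequency, Hz
energy = h * nu
print(f"{energy:.3e} J = {energy / ev:.3e} eV")
# ~7.004e-25 J = ~4.371e-06 eV, matching the values quoted above
```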
https://en.wikipedia.org/wiki/Lamb_shift
In fluid dynamics, Lamb surfaces are smooth, connected, orientable two-dimensional surfaces which are simultaneously stream-surfaces and vortex surfaces, named after the physicist Horace Lamb. [1][2][3] Lamb surfaces are orthogonal to the Lamb vector ω×u {\displaystyle {\boldsymbol {\omega }}\times \mathbf {u} } everywhere, where ω {\displaystyle {\boldsymbol {\omega }}} and u {\displaystyle \mathbf {u} } are the vorticity and velocity field, respectively. The necessary and sufficient condition for a family of such surfaces to exist is the integrability condition {\displaystyle ({\boldsymbol {\omega }}\times \mathbf {u} )\cdot \left[\nabla \times ({\boldsymbol {\omega }}\times \mathbf {u} )\right]=0,\qquad {\boldsymbol {\omega }}\times \mathbf {u} \neq \mathbf {0} .} Flows with Lamb surfaces are neither irrotational nor Beltrami, but generalized Beltrami flows do have Lamb surfaces.
https://en.wikipedia.org/wiki/Lamb_surface
In fluid dynamics, the Lamb vector is the cross product of the vorticity vector and the velocity vector of the flow field, named after the physicist Horace Lamb. [1][2] The Lamb vector is defined as {\displaystyle \mathbf {l} ={\boldsymbol {\omega }}\times \mathbf {u} ,} where u {\displaystyle \mathbf {u} } is the velocity field and ω = ∇×u {\displaystyle {\boldsymbol {\omega }}=\nabla \times \mathbf {u} } is the vorticity field of the flow. It appears in the Navier–Stokes equations through the material derivative term, specifically via the convective acceleration term, {\displaystyle (\mathbf {u} \cdot \nabla )\mathbf {u} ={\boldsymbol {\omega }}\times \mathbf {u} +\nabla \left({\tfrac {1}{2}}|\mathbf {u} |^{2}\right).} In irrotational flows the Lamb vector is zero, as it is in Beltrami flows. The concept of the Lamb vector is widely used in turbulent flows. The Lamb vector is analogous to the electric field when the Navier–Stokes equation is compared with Maxwell's equations. The Euler equations written in terms of the Lamb vector are referred to as the Gromeka–Lamb equation, named after Ippolit S. Gromeka and Horace Lamb. [3] This is given by {\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+{\boldsymbol {\omega }}\times \mathbf {u} =-\nabla \left({\frac {p}{\rho }}+{\tfrac {1}{2}}|\mathbf {u} |^{2}\right).} The divergence of the Lamb vector can be derived from vector identities: {\displaystyle \nabla \cdot \mathbf {l} =\nabla \cdot ({\boldsymbol {\omega }}\times \mathbf {u} )=\mathbf {u} \cdot (\nabla \times {\boldsymbol {\omega }})-|{\boldsymbol {\omega }}|^{2}.} At the same time, the divergence can also be obtained from the Navier–Stokes equation by taking its divergence. In particular, for incompressible flow, where ∇⋅u = 0 {\displaystyle \nabla \cdot \mathbf {u} =0}, with body forces given by −∇U {\displaystyle -\nabla U}, the Lamb vector divergence reduces to {\displaystyle \nabla \cdot \mathbf {l} =-\nabla ^{2}\Phi ,} where {\displaystyle \Phi ={\frac {p}{\rho }}+{\tfrac {1}{2}}|\mathbf {u} |^{2}+U.} In regions where ∇⋅l ≥ 0 {\displaystyle \nabla \cdot \mathbf {l} \geq 0}, there is a tendency for Φ {\displaystyle \Phi } [clarification needed] to accumulate there and vice versa.
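The divergence identity above is easy to verify numerically. The following sketch is an illustration only (the shear flow u = (sin y, 0, 0) is an arbitrary choice, and numpy's gradient uses one-sided differences at the grid boundary, so the two sides match only to finite-difference accuracy):

    import numpy as np

    def curl(F, h):
        Fx, Fy, Fz = F
        return (np.gradient(Fz, h, axis=1) - np.gradient(Fy, h, axis=2),
                np.gradient(Fx, h, axis=2) - np.gradient(Fz, h, axis=0),
                np.gradient(Fy, h, axis=0) - np.gradient(Fx, h, axis=1))

    def div(F, h):
        Fx, Fy, Fz = F
        return (np.gradient(Fx, h, axis=0) + np.gradient(Fy, h, axis=1)
                + np.gradient(Fz, h, axis=2))

    n = 48
    x = np.linspace(0.0, 2.0 * np.pi, n)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")    # axes 0,1,2 = x,y,z
    h = x[1] - x[0]

    u = (np.sin(Y), np.zeros_like(Y), np.zeros_like(Y))   # a simple shear flow
    w = curl(u, h)                                        # vorticity
    lamb = (w[1]*u[2] - w[2]*u[1],                        # Lamb vector w x u
            w[2]*u[0] - w[0]*u[2],
            w[0]*u[1] - w[1]*u[0])

    lhs = div(lamb, h)                                    # div(w x u)
    rhs = sum(ui*ci for ui, ci in zip(u, curl(w, h))) - sum(wi*wi for wi in w)
    print(float(np.max(np.abs(lhs - rhs))))               # small discretization error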
https://en.wikipedia.org/wiki/Lamb_vector
The Lambda-CDM, Lambda cold dark matter, or ΛCDM model is a mathematical model of the Big Bang theory with three major components: a cosmological constant, denoted by lambda (Λ) and associated with dark energy; the postulated cold dark matter (CDM); and ordinary matter. It is the current standard model of Big Bang cosmology, [1] as it is the simplest model that provides a reasonably good account of the existence and structure of the cosmic microwave background; the large-scale structure in the distribution of galaxies; the observed abundances of hydrogen (including deuterium), helium, and lithium; and the accelerating expansion of the universe. The model assumes that general relativity is the correct theory of gravity on cosmological scales. It emerged in the late 1990s as a concordance cosmology, after a period when disparate observed properties of the universe appeared mutually inconsistent, and there was no consensus on the makeup of the energy density of the universe. The ΛCDM model has been successful in modeling a broad collection of astronomical observations over decades. Remaining issues challenge the assumptions of the ΛCDM model and have led to many alternative models. [2] The ΛCDM model is based on three postulates on the structure of spacetime. [3]: 227 This combination greatly simplifies the equations of general relativity into a form called the Friedmann equations. These equations specify the evolution of the scale factor of the universe in terms of the pressure and density of a perfect fluid. The evolving density is composed of different kinds of energy and matter, each with its own role in affecting the scale factor. [4]: 7 For example, a model might include baryons, photons, neutrinos, and dark matter. [5]: 25.1.1 These component densities become parameters extracted when the model is constrained to match astrophysical observations. The model aims to describe the observable universe from approximately 0.1 s to the present. [1]: 605 The most accurate observations which are sensitive to the component densities are consequences of statistical inhomogeneity, called "perturbations", in the early universe. Since the Friedmann equations assume homogeneity, additional theory must be added before comparison to experiments. Inflation is a simple model producing perturbations by postulating an extremely rapid expansion early in the universe that separates quantum fluctuations before they can equilibrate. The perturbations are characterized by additional parameters, also determined by matching observations. [5]: 25.1.2 Finally, the light which will become astronomical observations must pass through the universe. The latter part of that journey will pass through ionized space, where the electrons can scatter the light, altering the anisotropies. This effect is characterized by one additional parameter. [5]: 25.1.3 The ΛCDM model includes an expansion of metric space that is well documented, both as the redshift of prominent spectral absorption or emission lines in the light from distant galaxies, and as the time dilation in the light decay of supernova luminosity curves. Both effects are attributed to a Doppler shift in electromagnetic radiation as it travels across expanding space. Although this expansion increases the distance between objects that are not under shared gravitational influence, it does not increase the size of the objects (e.g. galaxies) in space. Also, since it originates from ordinary general relativity, it, like general relativity, allows for distant galaxies to recede from each other at speeds greater than the speed of light; local expansion is less than the speed of light, but expansion summed across great distances can collectively exceed the speed of light.
[6] The letter Λ (lambda) represents the cosmological constant, which is associated with a vacuum energy or dark energy in empty space that is used to explain the contemporary accelerating expansion of space against the attractive effects of gravity. A cosmological constant has negative pressure, p = −ρc² {\displaystyle p=-\rho c^{2}}, which contributes to the stress–energy tensor that, according to the general theory of relativity, causes accelerating expansion. The fraction of the total energy density of our (flat or almost flat) universe that is dark energy, ΩΛ {\displaystyle \Omega _{\Lambda }}, is estimated to be 0.669 ± 0.038 based on the 2018 Dark Energy Survey results using Type Ia supernovae, [7] or 0.6847 ± 0.0073 based on the 2018 release of Planck satellite data, or more than 68.3% (2018 estimate) of the mass–energy density of the universe. [8] Dark matter is postulated in order to account for gravitational effects observed in very large-scale structures (the "non-Keplerian" rotation curves of galaxies; [9] the gravitational lensing of light by galaxy clusters; and the enhanced clustering of galaxies) that cannot be accounted for by the quantity of observed matter. [10] The ΛCDM model proposes specifically cold dark matter, hypothesized as matter that is non-baryonic, cold (moving at speeds far below that of light), dissipationless (unable to cool by radiating photons), and collisionless (interacting only through gravity and possibly the weak force). Dark matter constitutes about 26.5% [11] of the mass–energy density of the universe. The remaining 4.9% [11] comprises all ordinary matter observed as atoms, chemical elements, gas and plasma, the stuff of which visible planets, stars and galaxies are made. The great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10% of the ordinary matter contribution to the mass–energy density of the universe. [12] The model includes a single originating event, the "Big Bang", which was not an explosion but the abrupt appearance of expanding spacetime containing radiation at temperatures of around 10¹⁵ K. This was immediately (within 10⁻²⁹ seconds) followed by an exponential expansion of space by a scale multiplier of 10²⁷ or more, known as cosmic inflation. The early universe remained hot (above 10,000 K) for several hundred thousand years, a state that is detectable as a residual cosmic microwave background, or CMB, a very low-energy radiation emanating from all parts of the sky. The "Big Bang" scenario, with cosmic inflation and standard particle physics, is the only cosmological model consistent with the observed continuing expansion of space, the observed distribution of lighter elements in the universe (hydrogen, helium, and lithium), and the spatial texture of minute irregularities (anisotropies) in the CMB radiation. Cosmic inflation also addresses the "horizon problem" in the CMB; indeed, it seems likely that the universe is larger than the observable particle horizon. [13] The expansion of the universe is parameterized by a dimensionless scale factor a = a(t) {\displaystyle a=a(t)} (with time t {\displaystyle t} counted from the birth of the universe), defined relative to the present time, so a0 = a(t0) = 1 {\displaystyle a_{0}=a(t_{0})=1}; the usual convention in cosmology is that subscript 0 denotes present-day values, so t0 {\displaystyle t_{0}} denotes the age of the universe. The scale factor is related to the observed redshift [14] z {\displaystyle z} of the light emitted at time tem {\displaystyle t_{\mathrm {em} }} by
{\displaystyle a(t_{\text{em}})={\frac {1}{1+z}}\,.} The expansion rate is described by the time-dependent Hubble parameter, H(t) {\displaystyle H(t)}, defined as {\displaystyle H(t)\equiv {\frac {\dot {a}}{a}},} where ȧ {\displaystyle {\dot {a}}} is the time-derivative of the scale factor. The first Friedmann equation gives the expansion rate in terms of the matter+radiation density ρ {\displaystyle \rho }, the curvature k {\displaystyle k}, and the cosmological constant Λ {\displaystyle \Lambda }, [14] {\displaystyle H^{2}=\left({\frac {\dot {a}}{a}}\right)^{2}={\frac {8\pi G}{3}}\rho -{\frac {kc^{2}}{a^{2}}}+{\frac {\Lambda c^{2}}{3}},} where, as usual, c {\displaystyle c} is the speed of light and G {\displaystyle G} is the gravitational constant. The critical density ρcrit {\displaystyle \rho _{\mathrm {crit} }} is the present-day density that gives zero curvature k {\displaystyle k}, computed assuming the cosmological constant Λ {\displaystyle \Lambda } is zero regardless of its actual value. Substituting these conditions into the Friedmann equation gives [15] {\displaystyle \rho _{\mathrm {crit} }={\frac {3H_{0}^{2}}{8\pi G}}=1.878\;47(23)\times 10^{-26}\;h^{2}\;\mathrm {kg{\cdot }m^{-3}} ,} where h ≡ H0/(100 km⋅s⁻¹⋅Mpc⁻¹) {\displaystyle h\equiv H_{0}/(100\;\mathrm {km{\cdot }s^{-1}{\cdot }Mpc^{-1}} )} is the reduced Hubble constant. If the cosmological constant were actually zero, the critical density would also mark the dividing line between eventual recollapse of the universe to a Big Crunch and unlimited expansion. For the Lambda-CDM model with a positive cosmological constant (as observed), the universe is predicted to expand forever regardless of whether the total density is slightly above or below the critical density, though other outcomes are possible in extended models where the dark energy is not constant but actually time-dependent. [citation needed] The present-day density parameter Ωx {\displaystyle \Omega _{x}} for various species is defined as the dimensionless ratio [16]: 74 {\displaystyle \Omega _{x}\equiv {\frac {\rho _{x}(t=t_{0})}{\rho _{\mathrm {crit} }}}={\frac {8\pi G\rho _{x}(t=t_{0})}{3H_{0}^{2}}}} where the subscript x {\displaystyle x} is one of b {\displaystyle \mathrm {b} } for baryons, c {\displaystyle \mathrm {c} } for cold dark matter, rad {\displaystyle \mathrm {rad} } for radiation (photons plus relativistic neutrinos), and Λ {\displaystyle \Lambda } for dark energy. Since the densities of various species scale as different powers of a {\displaystyle a}, e.g. a⁻³ {\displaystyle a^{-3}} for matter etc., the Friedmann equation can be conveniently rewritten in terms of the various density parameters as {\displaystyle H(a)\equiv {\frac {\dot {a}}{a}}=H_{0}{\sqrt {(\Omega _{\rm {c}}+\Omega _{\rm {b}})a^{-3}+\Omega _{\mathrm {rad} }a^{-4}+\Omega _{k}a^{-2}+\Omega _{\Lambda }a^{-3(1+w)}}},} where w {\displaystyle w} is the equation of state parameter of dark energy, and assuming negligible neutrino mass (significant neutrino mass requires a more complex equation). The various Ω {\displaystyle \Omega } parameters add up to 1 {\displaystyle 1} by construction.
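Both the critical-density coefficient and the dimensionless form of the Friedmann equation above can be coded directly. In the sketch below the density parameters passed to H() are illustrative, Planck-like numbers rather than an authoritative fit:

    import math

    G = 6.67430e-11                  # gravitational constant, m^3 kg^-1 s^-2
    Mpc = 3.0856775814913673e22      # metres per megaparsec

    def rho_crit(H0_km_s_Mpc):
        """Critical density for a given Hubble constant in km/s/Mpc."""
        H0 = H0_km_s_Mpc * 1.0e3 / Mpc            # convert to s^-1
        return 3.0 * H0**2 / (8.0 * math.pi * G)  # kg m^-3

    print(rho_crit(100.0))   # ~1.8785e-26 kg m^-3: the h^2 coefficient quoted above

    def H(a, H0=67.4, Ob=0.049, Oc=0.266, Orad=9.0e-5, Ok=0.0, OL=0.685, w=-1.0):
        """Expansion rate H(a) from the rewritten Friedmann equation."""
        return H0 * math.sqrt((Oc + Ob) * a**-3 + Orad * a**-4
                              + Ok * a**-2 + OL * a**(-3.0 * (1.0 + w)))

    print(H(1.0))            # ~H0 today, since the Omegas sum to ~1
    print(H(0.5) / H(1.0))   # the expansion rate was higher at a = 1/2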
In the general case this is integrated by computer to give the expansion history a(t) {\displaystyle a(t)} and also observable distance–redshift relations for any chosen values of the cosmological parameters, which can then be compared with observations such as supernovae and baryon acoustic oscillations. [citation needed] In the minimal 6-parameter Lambda-CDM model, it is assumed that curvature Ωk {\displaystyle \Omega _{k}} is zero and w = −1 {\displaystyle w=-1}, so this simplifies to {\displaystyle H(a)=H_{0}{\sqrt {\Omega _{\rm {m}}a^{-3}+\Omega _{\mathrm {rad} }a^{-4}+\Omega _{\Lambda }}}.} Observations show that the radiation density is very small today, Ωrad ∼ 10⁻⁴ {\displaystyle \Omega _{\text{rad}}\sim 10^{-4}}; if this term is neglected the above has an analytic solution [17] {\displaystyle a(t)=(\Omega _{\rm {m}}/\Omega _{\Lambda })^{1/3}\,\sinh ^{2/3}(t/t_{\Lambda })} where {\displaystyle t_{\Lambda }\equiv 2/(3H_{0}{\sqrt {\Omega _{\Lambda }}})\ ;} this is fairly accurate for a > 0.01 or t > 10 million years. Solving for a(t) = 1 {\displaystyle a(t)=1} gives the present age of the universe t0 {\displaystyle t_{0}} in terms of the other parameters. [citation needed] It follows that the transition from decelerating to accelerating expansion (the second derivative ä {\displaystyle {\ddot {a}}} crossing zero) occurred when {\displaystyle a=(\Omega _{\rm {m}}/2\Omega _{\Lambda })^{1/3},} which evaluates to a ∼ 0.6 {\displaystyle a\sim 0.6} or z ∼ 0.66 {\displaystyle z\sim 0.66} for the best-fit parameters estimated from the Planck spacecraft. [citation needed] Multiple variants of the ΛCDM model are used, with some differences in parameters. [5]: 25.1 The Planck collaboration version of the ΛCDM model is based on six parameters: the baryon density parameter; the dark matter density parameter; the scalar spectral index; two parameters related to the curvature fluctuation amplitude; and the probability that photons from the early universe will be scattered once en route (called the reionization optical depth). [18] Six is the smallest number of parameters needed to give an acceptable fit to the observations; other possible parameters are fixed at "natural" values, e.g. total density parameter = 1.00, dark energy equation of state = −1. The parameter values, and uncertainties, are estimated using computer searches to locate the region of parameter space providing an acceptable match to cosmological observations. From these six parameters, the other model values, such as the Hubble constant and the dark energy density, can be calculated.
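For example, the age of the universe and the redshift of the acceleration transition follow in a few lines from the closed-form solution quoted above (the parameter values are again illustrative, Planck-like numbers):

    import math

    H0 = 67.4                       # km/s/Mpc, illustrative
    Om, OL = 0.315, 0.685           # flat model, radiation neglected

    Gyr = 3.1557e16                 # seconds per gigayear (Julian)
    Mpc_km = 3.0856775814913673e19  # kilometres per megaparsec
    H0_Gyr = H0 * Gyr / Mpc_km      # H0 in 1/Gyr

    t_L = 2.0 / (3.0 * H0_Gyr * math.sqrt(OL))

    # a(t) = (Om/OL)^(1/3) sinh^(2/3)(t/t_L); setting a(t0) = 1 and solving:
    t0 = t_L * math.asinh(math.sqrt(OL / Om))
    print(t0, "Gyr")                # ~13.8 Gyr

    # Deceleration-to-acceleration transition at a = (Om/(2 OL))^(1/3):
    a_t = (Om / (2.0 * OL)) ** (1.0 / 3.0)
    print(a_t, 1.0 / a_t - 1.0)     # a ~ 0.61, z ~ 0.63 (cf. z ~ 0.66 above)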
The discovery of the cosmic microwave background (CMB) in 1964 confirmed a key prediction of Big Bang cosmology. From that point on, it was generally accepted that the universe started in a hot, dense state and has been expanding over time. The rate of expansion depends on the types of matter and energy present in the universe, and in particular, whether the total density is above or below the so-called critical density. [citation needed] During the 1970s, most attention focused on pure-baryonic models, but there were serious challenges explaining the formation of galaxies, given the small anisotropies in the CMB (upper limits at that time). In the early 1980s, it was realized that this could be resolved if cold dark matter dominated over the baryons, and the theory of cosmic inflation motivated models with critical density. [citation needed] During the 1980s, most research focused on cold dark matter with critical density in matter, around 95% CDM and 5% baryons: these showed success at forming galaxies and clusters of galaxies, but problems remained; notably, the model required a Hubble constant lower than preferred by observations, and observations around 1988–1990 showed more large-scale galaxy clustering than predicted. [citation needed] These difficulties sharpened with the discovery of CMB anisotropy by the Cosmic Background Explorer in 1992, and several modified CDM models, including ΛCDM and mixed cold and hot dark matter, came under active consideration through the mid-1990s. The ΛCDM model then became the leading model following the observations of accelerating expansion in 1998, and was quickly supported by other observations: in 2000, the BOOMERanG microwave background experiment measured the total (matter–energy) density to be close to 100% of critical, whereas in 2001 the 2dFGRS galaxy redshift survey measured the matter density to be near 25%; the large difference between these values supports a positive Λ or dark energy. Much more precise spacecraft measurements of the microwave background from WMAP in 2003–2010 and Planck in 2013–2015 have continued to support the model and pin down the parameter values, most of which are constrained to below 1 percent uncertainty. [citation needed] Among all cosmological models, the ΛCDM model has been the most successful; it describes a wide range of astronomical observations with remarkable accuracy. [2]: 58 In addition to explaining many pre-2000 observations, the model has made a number of successful predictions: notably the existence of the baryon acoustic oscillation feature, discovered in 2005 in the predicted location, and the statistics of weak gravitational lensing, first observed in 2000 by several teams. The polarization of the CMB, discovered in 2002 by DASI, [30] has been successfully predicted by the model: in the 2015 Planck data release, [31] there are seven observed peaks in the temperature (TT) power spectrum, six peaks in the temperature–polarization (TE) cross spectrum, and five peaks in the polarization (EE) spectrum. The six free parameters can be well constrained by the TT spectrum alone, and the TE and EE spectra can then be predicted theoretically to few-percent precision with no further adjustments allowed. [citation needed] Despite the widespread success of ΛCDM in matching observations of our universe, cosmologists believe that the model may be an approximation of a more fundamental model. [2][32][29] Extensive searches for dark matter particles have so far shown no well-agreed detection, while dark energy may be almost impossible to detect in a laboratory, and its value is extremely small compared to vacuum energy theoretical predictions. [citation needed] The ΛCDM model, like all models built on the Friedmann–Lemaître–Robertson–Walker metric, assumes that the universe looks the same in all directions (isotropy) and from every location (homogeneity) if you look at a large enough scale: "the universe looks the same whoever and wherever you are."
[33] This cosmological principle allows a metric, the Friedmann–Lemaître–Robertson–Walker metric, to be derived and developed into a theory to compare to experiments. Without the principle, a metric would need to be extracted from astronomical data, which may not be possible. [34]: 408 These assumptions were carried over into the ΛCDM model. [35] However, some findings have suggested violations of the cosmological principle. [2][36] Evidence from galaxy clusters, [37][38] quasars, [39] and type Ia supernovae [40] suggests that isotropy is violated on large scales. [citation needed] Data from the Planck Mission shows hemispheric bias in the cosmic microwave background in two respects: one with respect to average temperature (i.e. temperature fluctuations), the second with respect to larger variations in the degree of perturbations (i.e. densities). The European Space Agency (the governing body of the Planck Mission) has concluded that these anisotropies in the CMB are, in fact, statistically significant and can no longer be ignored. [41] Already in 1967, Dennis Sciama predicted that the cosmic microwave background has a significant dipole anisotropy. [42][43] In recent years, the CMB dipole has been tested, and the results suggest that our motion with respect to distant radio galaxies [44] and quasars [45] differs from our motion with respect to the cosmic microwave background. The same conclusion has been reached in recent studies of the Hubble diagram of Type Ia supernovae [46] and quasars. [47] This contradicts the cosmological principle. [citation needed] The CMB dipole is hinted at through a number of other observations. First, even within the cosmic microwave background, there are curious directional alignments [48] and an anomalous parity asymmetry [49] that may have an origin in the CMB dipole. [50] Separately, the CMB dipole direction has emerged as a preferred direction in studies of alignments in quasar polarizations, [51] scaling relations in galaxy clusters, [52][53] strong lensing time delay, [36] Type Ia supernovae, [54] and quasars and gamma-ray bursts as standard candles. [55] The fact that all these independent observables, based on different physics, are tracking the CMB dipole direction suggests that the Universe is anisotropic in the direction of the CMB dipole. [citation needed] Nevertheless, some authors have stated that the universe around Earth is isotropic at high significance, based on studies of the combined cosmic microwave background temperature and polarization maps. [56] The homogeneity of the universe needed for ΛCDM applies to very large volumes of space. N-body simulations in ΛCDM show that the spatial distribution of galaxies is statistically homogeneous if averaged over scales of 260/h Mpc or more. [57] Numerous claims of large-scale structures reported to be in conflict with the predicted scale of homogeneity for ΛCDM do not withstand statistical analysis. [58][2]: 7.8 El Gordo is a massive interacting galaxy cluster in the early Universe (z = 0.87 {\displaystyle z=0.87}). The extreme properties of El Gordo in terms of its redshift, mass, and collision velocity lead to strong (6.16σ {\displaystyle 6.16\sigma }) tension with the ΛCDM model. [59][60] The properties of El Gordo are, however, consistent with cosmological simulations in the framework of MOND, due to more rapid structure formation.
[61] The KBC void is an immense, comparatively empty region of space containing the Milky Way, approximately 2 billion light-years (600 megaparsecs, Mpc) in diameter. [62][63][2] Some authors have said the existence of the KBC void violates the assumption that the CMB reflects baryonic density fluctuations at z = 1100 {\displaystyle z=1100} or Einstein's theory of general relativity, either of which would violate the ΛCDM model, [64] while other authors have claimed that supervoids as large as the KBC void are consistent with the ΛCDM model. [65] Statistically significant differences remain between values of the Hubble constant derived by matching the ΛCDM model to data from the "early universe", like the cosmic background radiation, and values derived from stellar distance measurements, called the "late universe". While systematic error in the measurements remains a possibility, many different kinds of observations agree with one of these two values of the constant. This difference, called the Hubble tension, [66] is widely acknowledged to be a major problem for the ΛCDM model. [32][67][2][29] Dozens of proposals for modifications of ΛCDM, or completely new models, have been published to explain the Hubble tension. Among these models are many that modify the properties of dark energy or of dark matter over time, interactions between dark energy and dark matter, unified dark energy and matter, other forms of dark radiation like sterile neutrinos, modifications to the properties of gravity or to the effects of inflation, and changes to the properties of elementary particles in the early universe, among others. None of these models can simultaneously explain the breadth of other cosmological data as well as ΛCDM. [66] The "S8 {\displaystyle S_{8}} tension" is the name of another open problem for the ΛCDM model. [2] The S8 {\displaystyle S_{8}} parameter in the ΛCDM model quantifies the amplitude of matter fluctuations in the late universe and is defined as {\displaystyle S_{8}\equiv \sigma _{8}{\sqrt {\Omega _{\rm {m}}/0.3}}.} Early-time measurements (e.g. from CMB data collected using the Planck observatory) and late-time measurements (e.g. of weak gravitational lensing events) facilitate increasingly precise values of S8 {\displaystyle S_{8}}. However, these two categories of measurement differ by more than their stated uncertainties would allow. This discrepancy is called the S8 {\displaystyle S_{8}} tension. The name "tension" reflects that the disagreement is not merely between two data sets: the many sets of early- and late-time measurements agree well within their own categories, but there is an unexplained difference between values obtained from different points in the evolution of the universe. Such a tension indicates that the ΛCDM model may be incomplete or in need of correction. [2] Some values for S8 {\displaystyle S_{8}} are 0.832 ± 0.013 (2020 Planck), [68] 0.766 +0.020 −0.014 (2021 KIDS), [69][70] 0.776 ± 0.017 (2022 DES), [71] 0.790 +0.018 −0.014 (2023 DES+KIDS), [72] 0.769 +0.031 −0.034 – 0.776 +0.032 −0.033 [73][74][75][76] (2023 HSC-SSP), 0.86 ± 0.01 (2024 EROSITA). [77][78] Values have also been obtained using peculiar velocities, 0.637 ± 0.054 (2020) [79] and 0.776 ± 0.033 (2020), [80] among other methods.
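The S8 definition is simple enough to evaluate directly; the sketch below contrasts an early-time (CMB-like) and a late-time (lensing-like) pair of σ8 and Ωm values. The inputs are illustrative numbers chosen near the published figures quoted above, not the results of any single analysis:

    import math

    def S8(sigma8, Om):
        return sigma8 * math.sqrt(Om / 0.3)

    print(S8(0.811, 0.315))   # CMB-like inputs     -> ~0.83 (cf. Planck 0.832)
    print(S8(0.759, 0.290))   # lensing-like inputs -> ~0.75 (cf. KiDS 0.766)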
The "axis of evil" is a name given to a purported correlation between the plane of the Solar System and aspects of the cosmic microwave background (CMB). Such a correlation would give the plane of the Solar System, and hence the location of Earth, a greater significance than might be expected by chance, a result which has been claimed to be evidence of a departure from the Copernican principle. [81] However, a 2016 study compared isotropic and anisotropic cosmological models against WMAP and Planck data and found no evidence for anisotropy. [82] The observed amount of lithium in the universe is less than the amount calculated from the ΛCDM model by a factor of 3–4. [83][2]: 141 If every calculation is correct, then solutions beyond the existing ΛCDM model might be needed. [83] The ΛCDM model assumes that the shape of the universe is of zero curvature (is flat) and has an undetermined topology. In 2019, interpretation of Planck data suggested that the curvature of the universe might be positive (often called "closed"), which would contradict the ΛCDM model. [84][2] Some authors have suggested that the Planck data detecting a positive curvature could be evidence of a local inhomogeneity in the curvature of the universe, rather than the universe actually being globally a 3-manifold of positive curvature. [85][2] The ΛCDM model assumes that the strong equivalence principle is true. However, in 2020 a group of astronomers analyzed data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) sample, together with estimates of the large-scale external gravitational field from an all-sky galaxy catalog. They concluded that there was highly statistically significant evidence of violations of the strong equivalence principle in weak gravitational fields in the vicinity of rotationally supported galaxies. [86] They observed an effect inconsistent with tidal effects in the ΛCDM model. These results have been challenged as failing to consider inaccuracies in the rotation curves and correlations between galaxy properties and clustering strength, [87] and as inconsistent with similar analyses of other galaxies. [88] Several discrepancies between the predictions of cold dark matter in the ΛCDM model and observations of galaxies and their clustering have arisen. Solutions have been proposed for some of these problems, but it remains unclear whether they can be solved without abandoning the ΛCDM model. [89] Milgrom, McGaugh, and Kroupa have criticized the dark matter portions of the theory from the perspective of galaxy formation models, and support the alternative modified Newtonian dynamics (MOND) theory, which requires a modification of the Einstein field equations and the Friedmann equations, as seen in proposals such as modified gravity theory (MOG theory) or tensor–vector–scalar gravity theory (TeVeS theory). [citation needed] Other proposals by theoretical astrophysicists of cosmological alternatives to Einstein's general relativity that attempt to account for dark energy or dark matter include f(R) gravity, scalar–tensor theories such as galileon theories (see Galilean invariance), brane cosmologies, the DGP model, and massive gravity and its extensions such as bimetric gravity. [citation needed] The density distributions of dark matter halos in cold dark matter simulations (at least those that do not include the impact of baryonic feedback) are much more sharply peaked than what is observed in galaxies, as inferred from their rotation curves.
[90] Cold dark matter simulations predict large numbers of small dark matter halos, more numerous than the number of small dwarf galaxies that are observed around galaxies like the Milky Way. [91] Dwarf galaxies around the Milky Way and Andromeda galaxies are observed to be orbiting in thin, planar structures, whereas the simulations predict that they should be distributed randomly about their parent galaxies. [92] However, recent research suggests this seemingly bizarre alignment is just a quirk which will dissolve over time. [93] Galaxies in the NGC 3109 association are moving away too rapidly to be consistent with expectations in the ΛCDM model. [94] In this framework, NGC 3109 is too massive and distant from the Local Group for it to have been flung out in a three-body interaction involving the Milky Way or Andromeda Galaxy. [95] If galaxies grew hierarchically, then massive galaxies required many mergers. Major mergers inevitably create a classical bulge. By contrast, about 80% of observed galaxies show no evidence of such bulges, and giant pure-disc galaxies are commonplace. [96] The tension can be quantified by comparing the observed distribution of galaxy shapes today with predictions from high-resolution hydrodynamical cosmological simulations in the ΛCDM framework, revealing a highly significant problem that is unlikely to be solved by improving the resolution of the simulations. [97] The high bulgeless fraction was nearly constant for 8 billion years. [98] If galaxies were embedded within massive halos of cold dark matter, then the bars that often develop in their central regions would be slowed down by dynamical friction with the halo. This is in serious tension with the fact that observed galaxy bars are typically fast. [99] Comparison of the model with observations may have some problems on sub-galaxy scales, possibly predicting too many dwarf galaxies and too much dark matter in the innermost regions of galaxies. This problem is called the "small scale crisis". [100] These small scales are harder to resolve in computer simulations, so it is not yet clear whether the problem is the simulations, non-standard properties of dark matter, or a more radical error in the model. Observations from the James Webb Space Telescope have yielded various galaxies confirmed by spectroscopy at high redshift, such as JADES-GS-z13-0 at a cosmological redshift of 13.2. [101][102] Other candidate galaxies which have not been confirmed by spectroscopy include CEERS-93316 at a cosmological redshift of 16.4. The existence of surprisingly massive galaxies in the early universe challenges the preferred models describing how dark matter halos drive galaxy formation. It remains to be seen whether a revision of the Lambda-CDM model with parameters given by the Planck Collaboration is necessary to resolve this issue. The discrepancies could also be explained by particular properties (stellar masses or effective volume) of the candidate galaxies, a yet-unknown force or particle outside the Standard Model through which dark matter interacts, more efficient baryonic matter accumulation by the dark matter halos, early dark energy models, [103] or the hypothesized long-sought Population III stars. [104][105][106][107] Massimo Persic and Paolo Salucci [108] first estimated the baryonic density present today in ellipticals, spirals, groups and clusters of galaxies.
They performed an integration of the baryonic mass-to-light ratio over luminosity (in the following M b / L {\textstyle M_{\rm {b}}/L} ), weighted with the luminosity function ϕ ( L ) {\textstyle \phi (L)} over the previously mentioned classes of astrophysical objects: ρ b = ∑ ∫ L ϕ ( L ) M b L d L . {\displaystyle \rho _{\rm {b}}=\sum \int L\phi (L){\frac {M_{\rm {b}}}{L}}\,dL.} The result was: Ω b = Ω ∗ + Ω gas = 2.2 × 10 − 3 + 1.5 × 10 − 3 h − 1.3 ≃ 0.003 , {\displaystyle \Omega _{\rm {b}}=\Omega _{*}+\Omega _{\text{gas}}=2.2\times 10^{-3}+1.5\times 10^{-3}\;h^{-1.3}\simeq 0.003,} where h ≃ 0.72 {\displaystyle h\simeq 0.72} . Note that this value is much lower than the prediction of standard cosmic nucleosynthesis Ω b ≃ 0.0486 {\displaystyle \Omega _{\rm {b}}\simeq 0.0486} , so that stars and gas in galaxies and in galaxy groups and clusters account for less than 10% of the primordially synthesized baryons. This issue is known as the problem of the "missing baryons". The missing baryon problem is claimed to be resolved. Using observations of the kinematic Sunyaev–Zel'dovich effect spanning more than 90% of the lifetime of the Universe, in 2021 astrophysicists found that approximately 50% of all baryonic matter is outside dark matter haloes , filling the space between galaxies. [ 109 ] Together with the amount of baryons inside galaxies and surrounding them, the total amount of baryons in the late time Universe is compatible with early Universe measurements. It has been argued that the ΛCDM model has adopted conventionalist stratagems , rendering it unfalsifiable in the sense defined by Karl Popper . When faced with new data not in accord with a prevailing model, the conventionalist will find ways to adapt the theory rather than declare it false. Thus dark matter was added after the observations of anomalous galaxy rotation rates. Thomas Kuhn viewed the process differently, as "problem solving" within the existing paradigm. [ 110 ] Extended models allow one or more of the "fixed" parameters above to vary, in addition to the basic six; so these models join smoothly to the basic six-parameter model in the limit that the additional parameter(s) approach the default values. For example, possible extensions of the simplest ΛCDM model allow for spatial curvature ( Ω tot {\displaystyle \Omega _{\text{tot}}} may be different from 1); or quintessence rather than a cosmological constant where the equation of state of dark energy is allowed to differ from −1. Cosmic inflation predicts tensor fluctuations ( gravitational waves ). Their amplitude is parameterized by the tensor-to-scalar ratio (denoted r {\displaystyle r} ), which is determined by the unknown energy scale of inflation. Other modifications allow hot dark matter in the form of neutrinos more massive than the minimal value, or a running spectral index; the latter is generally not favoured by simple cosmic inflation models. Allowing additional variable parameter(s) will generally increase the uncertainties in the standard six parameters quoted above, and may also shift the central values slightly. The table below shows results for each of the possible "6+1" scenarios with one additional variable parameter; this indicates that, as of 2015, there is no convincing evidence that any additional parameter is different from its default value. Some researchers have suggested that there is a running spectral index, but no statistically significant study has revealed one. 
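The quoted luminous-baryon budget is a one-line arithmetic check (h = 0.72 as in the text; the exact total depends on rounding, but it is a few × 10⁻³ either way, well below the nucleosynthesis value):

    h = 0.72
    omega_star = 2.2e-3
    omega_gas = 1.5e-3 * h**-1.3
    omega_b_lum = omega_star + omega_gas
    print(omega_b_lum)             # ~4.5e-3 with these inputs; the text quotes ~0.003
    print(omega_b_lum / 0.0486)    # ~0.09 -- under 10% of the nucleosynthesis value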
Theoretical expectations suggest that the tensor-to-scalar ratio r {\displaystyle r} should be between 0 and 0.3, and the latest results are within those limits.
https://en.wikipedia.org/wiki/Lambda-CDM_model
In applied mathematics, lambda-connectedness (or λ-connectedness) deals with partial connectivity for a discrete space. Assume that a function on a discrete space (usually a graph) is given. A degree of connectivity (connectedness) will be defined to measure the connectedness of the space with respect to the function. It was invented to create a new method for image segmentation. The method has been extended to handle other problems involving uncertainty and incomplete information analysis. [1][2] For a digital image and a certain value of λ {\displaystyle \lambda }, two pixels are called λ {\displaystyle \lambda }-connected if there is a path linking those two pixels and the connectedness of this path is at least λ {\displaystyle \lambda }. λ {\displaystyle \lambda }-connectedness is an equivalence relation. [3] Connectedness is a basic measure in many areas of mathematical science and social sciences. In graph theory, two vertices are said to be connected if there is a path between them. In topology, two points are connected if there is a continuous path from one point to the other. In management science, for example, in an institution, two individuals are connected if one person is under the supervision of the other. Such connectedness relations describe only full connection or no connection. Lambda-connectedness is introduced to measure incomplete or fuzzy relations between two vertices, points, human beings, etc. In fact, partial relations have been studied in other contexts. Random graph theory allows one to assign a probability to each edge of a graph; this method assumes, in most cases, that each edge has the same probability. On the other hand, Bayesian networks are often used for inference and analysis when relationships between each pair of states/events, denoted by vertices, are known. These relationships are usually represented by conditional probabilities among these vertices and are usually obtained from outside of the system. λ {\displaystyle \lambda }-connectedness is based on graph theory; however, graph theory only deals with vertices and edges, with or without weights. In order to define a partial, incomplete, or fuzzy connectedness, one needs to assign a function on the vertices of the graph. Such a function is called a potential function. It can be used to represent the intensity of an image, the surface of an XY-domain, or the utility function of a management or economic network. A generalized definition of λ {\displaystyle \lambda }-connectedness can be described as follows: consider a simple system ⟨G, ρ⟩ {\displaystyle \langle G,\rho \rangle }, where ρ {\displaystyle \rho } is called a potential function of G {\displaystyle G}. If ⟨G, ρ⟩ {\displaystyle \langle G,\rho \rangle } is an image, then G {\displaystyle G} is a 2D or 3D grid space and ρ {\displaystyle \rho } is an intensity function. For a color image, one can use f = (red, green, blue) {\displaystyle f=({\text{red}},{\text{green}},{\text{blue}})} to represent ρ {\displaystyle \rho }. Neighbor connectivity will first be defined on a pair of adjacent points; then one can define the general connectedness for any two points. Assume α_ρ(x, y) {\displaystyle \alpha _{\rho }(x,y)} is used to measure the neighbor-connectivity of x and y, where x and y are adjacent. In a graph G = (V, E), a finite sequence x₁, x₂, …, xₙ {\displaystyle x_{1},x_{2},\ldots ,x_{n}} is called a path if (x_i, x_{i+1}) ∈ E {\displaystyle (x_{i},x_{i+1})\in E}.
The path-connectivity β {\displaystyle \beta } of a path π = π(x₁, xₙ) = {x₁, x₂, ..., xₙ} {\displaystyle \pi =\pi (x_{1},x_{n})=\{x_{1},x_{2},...,x_{n}\}} is defined as the weakest neighbor-connectivity along the path, {\displaystyle \beta _{\rho }(\pi (x_{1},x_{n}))=\min \,\{\alpha _{\rho }(x_{i},x_{i+1})\mid i=1,\dots ,n-1\}.} Finally, the degree of connectedness (connectivity) of two vertices x, y with respect to ρ {\displaystyle \rho } is defined as the best that any path between them achieves, {\displaystyle C_{\rho }(x,y)=\max \,\{\beta _{\rho }(\pi (x,y))\mid \pi {\text{ is a path from }}x{\text{ to }}y\}.} For a given λ ∈ [0, 1] {\displaystyle \lambda \in [0,1]}, points p = (x, ρ(x)) {\displaystyle p=(x,\rho (x))} and q = (y, ρ(y)) {\displaystyle q=(y,\rho (y))} are said to be λ {\displaystyle \lambda }-connected if C_ρ(x, y) ≥ λ {\displaystyle C_{\rho }(x,y)\geq \lambda }. λ {\displaystyle \lambda }-connectedness is an equivalence relation. It can be used in image segmentation. Lambda-connected segmentation is in general a region-growing segmentation method; it can also be adapted for split-and-merge segmentation. [4] Its time complexity also reaches the optimum, at O(n log n) {\displaystyle O(n\log n)} where n {\displaystyle n} is the number of pixels in the image. [5] Lambda-connectedness has close relationships to data science; these are discussed in the book Mathematical Problems in Data Science. [6] Researchers have recently applied related techniques to smooth 3D data processing and to transportation network management. [7][8]
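A minimal region-growing sketch shows how these definitions are used for segmentation. It is an illustration, not the authors' reference implementation: the neighbor connectivity α(x, y) = 1 − |ρ(x) − ρ(y)|/H (H being the full intensity range) is one common choice, and growing only across edges with α ≥ λ guarantees that the min-based path connectivity defined above stays at least λ inside each region.

    from collections import deque

    def lambda_segment(img, lam, H=255.0):
        """Label the lambda-connected regions of a 2D grayscale image."""
        rows, cols = len(img), len(img[0])
        label = [[None] * cols for _ in range(rows)]
        region = 0
        for sr in range(rows):
            for sc in range(cols):
                if label[sr][sc] is not None:
                    continue
                region += 1                      # grow a new region from (sr, sc)
                label[sr][sc] = region
                queue = deque([(sr, sc)])
                while queue:
                    r, c = queue.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols and label[nr][nc] is None:
                            alpha = 1.0 - abs(img[r][c] - img[nr][nc]) / H
                            if alpha >= lam:     # keep path connectivity >= lam
                                label[nr][nc] = region
                                queue.append((nr, nc))
        return label

    img = [[10, 12, 200],
           [11, 13, 205],
           [ 9, 14, 210]]
    print(lambda_segment(img, lam=0.9))   # dark left columns vs. bright right column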
https://en.wikipedia.org/wiki/Lambda-connectedness
In mathematical logic and computer science, the lambda-mu calculus is an extension of the lambda calculus introduced by Michel Parigot. [1] It introduces two new operators: the μ operator (which is completely different both from the μ operator found in computability theory and from the μ operator of modal μ-calculus) and the bracket operator. Proof-theoretically, it provides a well-behaved formulation of classical natural deduction. One of the main goals of this extended calculus is to be able to describe expressions corresponding to theorems in classical logic. According to the Curry–Howard isomorphism, lambda calculus on its own can express theorems in intuitionistic logic only, and several classical logical theorems can't be written at all. However, with these new operators one is able to write terms that have the type of, for example, Peirce's law. The μ operator corresponds to Felleisen's undelimited control operator C, and bracket corresponds to calling a captured continuation. [2] The three forms of expressions in lambda calculus are variables x, function abstractions λx.M, and applications (M N). In addition to the traditional λ-variables, the lambda-mu calculus includes a distinct set of μ-variables, which can be understood as continuation variables. The set of terms is divided into unnamed terms (all traditional lambda expressions are of this kind) and named terms. The terms that are added by the lambda-mu calculus are of the form [α]M (a named term, where α is a μ-variable and M is an unnamed term) and μα.M (a μ-abstraction, an unnamed term whose body M is a named term). The basic reduction rules used in the lambda-mu calculus are β-reduction, (λx.M)N → M[N/x]; structural reduction, (μα.M)N → μα.M[[α](M′N)/[α]M′], in which every named subterm [α]M′ of M is replaced by [α](M′N); and renaming, [β]μα.M → M[β/α]. These rules cause the calculus to be confluent. To obtain call-by-value semantics, one must refine the beta reduction rule and add another form of structural reduction. [3] This addition corresponds to the addition of an additional evaluation-context former when moving from call-by-name to call-by-value evaluation. For a closer correspondence with conventional formalizations of control operators, the distinction between named and unnamed terms can be abolished, meaning that [α]M is of the same sort as other lambda-expressions and the body of a μ-abstraction can also be any expression. [2] Another variant in this vein is the Λμ-calculus. [4] One can consider a structural reduction rule symmetric to the original one: [1] M(μα.t) → μα.t[[α](MN)/[α]N] {\displaystyle M(\mu \alpha .t)\to \mu \alpha .t\left[[\alpha ](MN)/[\alpha ]N\right]} This, however, breaks confluence and the correspondence to control operators.
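The structural and renaming rules can be made concrete with a small interpreter sketch. The constructor and helper names below are hypothetical (chosen for readability), and capture-avoiding renaming of bound variables is omitted, so this illustrates the shape of the two rules rather than giving a faithful implementation of the calculus.

    from dataclasses import dataclass

    @dataclass
    class Var:   name: str                    # x
    @dataclass
    class Lam:   var: str; body: object       # lambda x . M
    @dataclass
    class App:   fn: object; arg: object      # (M N)
    @dataclass
    class Mu:    cont: str; body: object      # mu alpha . M
    @dataclass
    class Named: cont: str; term: object      # [alpha] M

    def app_named(t, alpha, s):
        """Replace every named subterm [alpha]w inside t by [alpha](w s)."""
        if isinstance(t, Var):
            return t
        if isinstance(t, Lam):
            return Lam(t.var, app_named(t.body, alpha, s))
        if isinstance(t, App):
            return App(app_named(t.fn, alpha, s), app_named(t.arg, alpha, s))
        if isinstance(t, Mu):
            return Mu(t.cont, app_named(t.body, alpha, s))
        w = app_named(t.term, alpha, s)
        return Named(t.cont, App(w, s) if t.cont == alpha else w)

    def rename(t, old, new):
        """Substitute the mu-variable: t[new/old]."""
        if isinstance(t, Var):
            return t
        if isinstance(t, Lam):
            return Lam(t.var, rename(t.body, old, new))
        if isinstance(t, App):
            return App(rename(t.fn, old, new), rename(t.arg, old, new))
        if isinstance(t, Mu):
            return t if t.cont == old else Mu(t.cont, rename(t.body, old, new))
        return Named(new if t.cont == old else t.cont, rename(t.term, old, new))

    def step(t):
        """One root-level reduction, if a rule applies."""
        if isinstance(t, App) and isinstance(t.fn, Mu):       # structural reduction
            return Mu(t.fn.cont, app_named(t.fn.body, t.fn.cont, t.arg))
        if isinstance(t, Named) and isinstance(t.term, Mu):   # renaming
            return rename(t.term.body, t.term.cont, t.cont)
        return None

    # (mu a. [a] x) y  reduces to  mu a. [a] (x y)
    print(step(App(Mu("a", Named("a", Var("x"))), Var("y"))))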
https://en.wikipedia.org/wiki/Lambda-mu_calculus
Lambda architecture is a data-processing architecture designed to handle massive quantities of data by taking advantage of both batch and stream-processing methods. This approach to architecture attempts to balance latency, throughput, and fault-tolerance by using batch processing to provide comprehensive and accurate views of batch data, while simultaneously using real-time stream processing to provide views of online data. The two view outputs may be joined before presentation. The rise of lambda architecture is correlated with the growth of big data, real-time analytics, and the drive to mitigate the latencies of map-reduce. [1] Lambda architecture depends on a data model with an append-only, immutable data source that serves as a system of record. [2]: 32 It is intended for ingesting and processing timestamped events that are appended to existing events rather than overwriting them. State is determined from the natural time-based ordering of the data. Lambda architecture describes a system consisting of three layers: batch processing, speed (or real-time) processing, and a serving layer for responding to queries. [3]: 13 The processing layers ingest from an immutable master copy of the entire data set. This paradigm was first described by Nathan Marz in a blog post titled "How to beat the CAP theorem", in which he originally termed it the "batch/realtime architecture". [4] The batch layer precomputes results using a distributed processing system that can handle very large quantities of data. The batch layer aims at perfect accuracy by being able to process all available data when generating views. This means it can fix any errors by recomputing based on the complete data set, then updating existing views. Output is typically stored in a read-only database, with updates completely replacing existing precomputed views. [3]: 18 By 2014, Apache Hadoop was estimated to be a leading batch-processing system. [5] Later, relational databases such as Snowflake, Redshift, Synapse and BigQuery were also used in this role. The speed layer processes data streams in real time and without the requirements of fix-ups or completeness. This layer sacrifices throughput as it aims to minimize latency by providing real-time views into the most recent data. Essentially, the speed layer is responsible for filling the "gap" caused by the batch layer's lag in providing views based on the most recent data. This layer's views may not be as accurate or complete as the ones eventually produced by the batch layer, but they are available almost immediately after data is received, and can be replaced when the batch layer's views for the same data become available. [3]: 203 Stream-processing technologies typically used in this layer include Apache Kafka, Amazon Kinesis, Apache Storm, SQLstream, Apache Samza, Apache Spark, Azure Stream Analytics, and Apache Flink. Output is typically stored in fast NoSQL databases, [6][7] or as a commit log. [8] Output from the batch and speed layers is stored in the serving layer, which responds to ad-hoc queries by returning precomputed views or building views from the processed data. Examples of technologies used in the serving layer include Apache Druid, Apache Pinot, ClickHouse and Tinybird, which provide a single platform to handle output from both layers.
[ 9 ] Dedicated stores used in the serving layer include Apache Cassandra , Apache HBase , Azure Cosmos DB , MongoDB , VoltDB or Elasticsearch for speed-layer output, and Elephant DB , Apache Impala , SAP HANA or Apache Hive for batch-layer output. [ 2 ] : 45 [ 6 ] To optimize the data set and improve query efficiency, various rollup and aggregation techniques are executed on raw data, [ 9 ] : 23 while estimation techniques are employed to further reduce computation costs. [ 10 ] And while expensive full recomputation is required for fault tolerance, incremental computation algorithms may be selectively added to increase efficiency, and techniques such as partial computation and resource-usage optimizations can effectively help lower latency. [ 3 ] : 93, 287, 293 Metamarkets, which provides analytics for companies in the programmatic advertising space, employs a version of the lambda architecture that uses Druid for storing and serving both the streamed and batch-processed data. [ 9 ] : 42 For running analytics on its advertising data warehouse, Yahoo has taken a similar approach, also using Apache Storm , Apache Hadoop , and Druid . [ 11 ] : 9, 16 The Netflix Suro project has separate processing paths for data, but does not strictly follow lambda architecture since the paths may be intended to serve different purposes and not necessarily to provide the same type of views. [ 12 ] Nevertheless, the overall idea is to make selected real-time event data available to queries with very low latency, while the entire data set is also processed via a batch pipeline. The latter is intended for applications that are less sensitive to latency and require a map-reduce type of processing. Criticism of lambda architecture has focused on its inherent complexity and its limiting influence. The batch and streaming sides each require a different code base that must be maintained and kept in sync so that processed data produces the same result from both paths. Yet attempting to abstract the code bases into a single framework puts many of the specialized tools in the batch and real-time ecosystems out of reach. [ 13 ] Jay Kreps introduced the kappa architecture to use a pure streaming approach with a single code base. [ 13 ] In a technical discussion over the merits of employing a pure streaming approach, it was noted that using a flexible streaming framework such as Apache Samza could provide some of the same benefits as batch processing without the latency. [ 14 ] Such a streaming framework could allow for collecting and processing arbitrarily large windows of data, accommodate blocking, and handle state.
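The batch/speed/serving split described above can be illustrated with a toy event-counting pipeline. This is a sketch of the idea only; the function names are invented for the example, and a real deployment would run each layer as a separate distributed component:

    from collections import Counter

    master_dataset = []        # immutable, append-only system of record
    batch_view = Counter()     # precomputed by the batch layer
    speed_view = Counter()     # incremental real-time view

    def ingest(event):
        master_dataset.append(event)   # events are appended, never overwritten
        speed_view[event] += 1         # speed layer: low-latency increment

    def run_batch():
        """Batch layer: recompute the view from the entire master dataset.
        Slow but authoritative; errors are fixed simply by recomputing."""
        global batch_view
        batch_view = Counter(master_dataset)
        speed_view.clear()             # batch results supersede the speed view

    def query(event):
        """Serving layer: merge the batch view with the speed view."""
        return batch_view[event] + speed_view[event]

    for e in ["click", "view", "click"]:
        ingest(e)
    run_batch()                        # the batch layer catches up
    ingest("click")                    # arrives after the last batch run
    print(query("click"))              # 3 = 2 (batch view) + 1 (speed view)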
https://en.wikipedia.org/wiki/Lambda_architecture
Lambda phage (coliphage λ, scientific name Lambdavirus lambda) is a bacterial virus, or bacteriophage, that infects the bacterial species Escherichia coli (E. coli). It was discovered by Esther Lederberg in 1950. [2] The wild type of this virus has a temperate life cycle that allows it to either reside within the genome of its host through lysogeny or enter into a lytic phase, during which it kills and lyses the cell to produce offspring. Lambda strains, mutated at specific sites, are unable to lysogenize cells; instead, they grow and enter the lytic cycle after superinfecting an already lysogenized cell. [3] The phage particle consists of a head (also known as a capsid), [4] a tail, and tail fibers. The head contains the phage's double-stranded linear DNA genome. During infection, the phage particle recognizes and binds to its host, E. coli, causing DNA in the head of the phage to be ejected through the tail into the cytoplasm of the bacterial cell. Usually, a "lytic cycle" ensues, where the lambda DNA is replicated and new phage particles are produced within the cell. This is followed by cell lysis, releasing the cell contents, including virions that have been assembled, into the environment. However, under certain conditions, the phage DNA may integrate itself into the host cell chromosome in the lysogenic pathway. In this state, the λ DNA is called a prophage and stays resident within the host's genome without apparent harm to the host. The host is termed a lysogen when a prophage is present. This prophage may enter the lytic cycle when the lysogen enters a stressed condition. The virus particle consists of a head and a tail that can have tail fibers. The whole particle consists of 12–14 different proteins with more than 1000 protein molecules in total, and one DNA molecule located in the phage head. However, it is still not entirely clear whether the L and M proteins are part of the virion. [5] All characterized lambdoid phages possess an N protein-mediated transcription antitermination mechanism, with the exception of phage HK022. [6] The genome contains 48,502 [7] base pairs of double-stranded, linear DNA, with 12-base single-stranded segments at both 5' ends. [8] These two single-stranded segments are the "sticky ends" of what is called the cos site. The cos site circularizes the DNA in the host cytoplasm. In its circular form, the phage genome, therefore, is 48,502 base pairs in length. [8] The lambda genome can be inserted into the E. coli chromosome, and is then called a prophage; see the section below for details. The tail of lambda phage is made of at least 6 proteins (H, J, U, V, Stf, Tfa) and requires 7 more for assembly (I, K, L, M, Z, G/T). This assembly process begins with protein J, which then recruits proteins I, L, K, and G/T to add protein H. Once G and G/T leave the complex, protein V can assemble onto the J/H scaffold. Then, protein U is added to the head-proximal end of the tail. Protein Z is able to connect the tail to the head. Protein H is cleaved due to the actions of proteins U and Z. [5] Lambda phage is a non-contractile tailed phage, meaning that during an infection event it cannot 'force' its DNA through a bacterial cell membrane. It must instead use an existing pathway to invade the host cell, having evolved the tip of its tail to interact with a specific pore to allow entry of its DNA into the host.
On initial infection, the stability of cII determines the lifestyle of the phage; stable cII will lead to the lysogenic pathway, whereas if cII is degraded the phage will go into the lytic pathway. Low temperature, starvation of the cells and high multiplicity of infection (MOI) are known to favor lysogeny (see later discussion). [13] This occurs without the N protein interacting with the DNA; the protein instead binds to the freshly transcribed mRNA. Nut sites contain 3 conserved "boxes", of which only BoxB is essential. This is the lifecycle that the phage follows after most infections, where the cII protein does not reach a high enough concentration due to degradation, and so does not activate its promoters. [citation needed] Rightward transcription expresses the O, P and Q genes. O and P are responsible for initiating replication, and Q is another antiterminator that allows the expression of head, tail, and lysis genes from P R′. [6] P R is the promoter for rightward transcription, and cro is a regulator gene: it encodes the Cro protein, which represses the P RM promoter. Once P R transcription is underway, the Q gene, which lies at the far end of the rightward operon, is transcribed. Q is a regulator gene that controls the expression of the later genes of rightward transcription. Once its regulatory proteins allow its expression, the Q protein acts as an antiterminator, allowing the rest of the operon to be read through until the transcription terminator is reached; this expresses the later genes in the operon and leads to the lytic cycle. [15] The P R promoter has also been found to activate the origin of replication during rightward transcription, but the whole picture of this is still not fully understood, and there are caveats: for instance, the process is different in other phages, such as phage N15, which may encode a DNA polymerase, and phage P22, which may replace the P gene, encoding an essential replication protein, with one encoding a DnaB-like helicase. [6] Q is similar to N in its effect: Q binds to RNA polymerase at Qut sites, and the resulting complex can ignore terminators; however, the mechanism is very different, as the Q protein first associates with a DNA sequence rather than an mRNA sequence. [16] Leftward transcription expresses the gam, xis, bar and int genes. [6] Gam proteins are involved in recombination. Gam is also important in that it inhibits the host RecBCD nuclease from degrading the 3' ends in rolling circle replication. Int and xis are integration and excision proteins vital to lysogeny. [citation needed] Leftward transcription is believed to result in a deletion mutation of the rap gene, resulting in a lack of growth of lambda phage; this is due to RNA polymerase attaching to the P L promoter site instead of the P R promoter site. Leftward transcription results in barI and barII transcription on the left operon. A Bar-positive phenotype is present when the rap gene is absent. The lack of growth of lambda phage is believed to occur due to a temperature sensitivity that results in inhibition of growth. [18] The lysogenic lifecycle begins once the cI protein reaches a high enough concentration to activate its promoters, after a small number of infections. The prophage is duplicated with every subsequent cell division of the host.
The phage genes expressed in this dormant state code for proteins that repress expression of other phage genes (such as the structural and lysis genes) in order to prevent entry into the lytic cycle. These repressive proteins are broken down when the host cell is under stress, resulting in the expression of the repressed phage genes. Stress can come from starvation, poisons (like antibiotics), or other factors that can damage or destroy the host. In response to stress, the activated prophage is excised from the DNA of the host cell by one of the newly expressed gene products and enters its lytic pathway. The integration of phage λ takes place at a special attachment site in the bacterial and phage genomes, called att λ. The sequence of the bacterial att site, called attB, lies between the gal and bio operons and consists of the parts B-O-B', whereas the complementary sequence in the circular phage genome, called attP, consists of the parts P-O-P'. The integration itself is a sequential exchange (see genetic recombination) via a Holliday junction and requires both the phage protein Int and the bacterial protein IHF (integration host factor). Both Int and IHF bind to attP and form an intasome, a DNA–protein complex designed for site-specific recombination of the phage and host DNA. The original B-O-B' sequence is changed by the integration to B-O-P'-phage DNA-P-O-B'. The phage DNA is now part of the host's genome. [19] The classic induction of a lysogen involved irradiating the infected cells with UV light; any situation in which a lysogen undergoes DNA damage, or in which the SOS response of the host is otherwise stimulated, leads to induction. Multiplicity reactivation (MR) is the process by which multiple viral genomes, each containing inactivating genome damage, interact within an infected cell to form a viable viral genome. MR was originally discovered with phage T4, but was subsequently found in phage λ (as well as in numerous other bacterial and mammalian viruses [20]). MR of phage λ inactivated by UV light depends on the recombination function of either the host or the infecting phage. [21] Absence of both recombination systems leads to a loss of MR. Survival of UV-irradiated phage λ is increased when the E. coli host is lysogenic for a homologous prophage, a phenomenon termed prophage reactivation. [22] Prophage reactivation in phage λ appears to occur by a recombinational repair process similar to that of MR. The repressor found in phage lambda is a notable example of the level of control over gene expression achievable by a very simple system. As discovered by Barbara J. Meyer, [23] it forms a 'binary switch' with two genes under mutually exclusive expression. The lambda repressor gene system consists of (from left to right on the chromosome) the cI gene, the operators OR3, OR2 and OR1, and the cro gene. The lambda repressor is a self-assembling dimer, also known as the cI protein. [24] It binds DNA with a helix-turn-helix binding motif, and it regulates the transcription of the cI protein and the Cro protein. The life cycle of lambda phage is controlled by the cI and Cro proteins: the phage remains in the lysogenic state if cI proteins predominate, but is switched into the lytic cycle if Cro proteins predominate. The cI dimer may bind to any of three operators, OR1, OR2 and OR3, in the order OR1 > OR2 > OR3. Binding of a cI dimer to OR1 enhances binding of a second cI dimer to OR2, an effect called cooperativity. Thus, OR1 and OR2 are almost always simultaneously occupied by cI.
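The arm-swapping bookkeeping of the integration reaction (B-O-B' plus P-O-P' giving B-O-P' ... P-O-B') can be mimicked with a few lines of string manipulation. This is only a sketch of the sequence rearrangement described above, with Python used for illustration; it does not model the actual Int/IHF biochemistry.

```python
# Site-specific integration as string rewriting: crossover within the shared
# core O swaps the flanking arms of attB (B-O-B') and attP (P-O-P').

def integrate(attB, attP, phage_dna="lambda DNA"):
    B, O, B_prime = attB
    P, O2, P_prime = attP
    assert O == O2, "Int-mediated exchange requires the common core O"
    # The prophage ends up flanked by the hybrid junctions B-O-P' and P-O-B'.
    return f"{B}-{O}-{P_prime}-[{phage_dna}]-{P}-{O}-{B_prime}"

print(integrate(("B", "O", "B'"), ("P", "O", "P'")))
# -> B-O-P'-[lambda DNA]-P-O-B'
```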
Cooperative binding at OR1 and OR2 does not, however, increase the affinity between cI and OR3, which will be occupied only when the cI concentration is high. At high concentrations of cI, the dimers also bind to the operators OL1 and OL2 (which lie more than 2 kb away from the OR operators). When cI dimers are bound to OL1, OL2, OR1 and OR2, a loop is induced in the DNA, allowing these dimers to bind together to form an octamer, a phenomenon called long-range cooperativity. Upon formation of the octamer, cI dimers may cooperatively bind to OL3 and OR3, repressing transcription of cI. This negative autoregulation ensures a stable minimum concentration of the repressor molecule and, should SOS signals arise, allows for more efficient prophage induction. [25] An important distinction here is between two decisions: lysogeny versus lysis on infection, and continued lysogeny versus lysis from a prophage. The latter is determined solely by the activation of RecA in the SOS response of the cell, as detailed in the section on induction. The former will also be affected by this: a cell undergoing an SOS response will always be lysed, as no cI protein will be allowed to build up. However, the initial lytic/lysogenic decision on infection also depends on the cII and cIII proteins. In cells with sufficient nutrients, protease activity is high, which breaks down cII; this leads to the lytic lifestyle. In cells with limited nutrients, protease activity is low, making cII stable; this leads to the lysogenic lifestyle. cIII appears to stabilize cII, both directly and by acting as a competitive inhibitor of the relevant proteases. This means that a cell "in trouble", i.e. lacking in nutrients and in a more dormant state, is more likely to lysogenize. This would be selected for because the phage can then lie dormant in the bacterium until it falls on better times, and so can create more copies of itself with the additional resources available and with the more likely proximity of further infectable cells. A full biophysical model for lambda's lysis–lysogeny decision remains to be developed. Computer modeling and simulation suggest that random processes during infection drive the selection of lysis or lysogeny within individual cells. [26] However, recent experiments suggest that physical differences among cells, which exist prior to infection, predetermine whether a cell will lyse or become a lysogen. [27] Lambda phage has been used heavily as a model organism, and has been an excellent tool first in microbial genetics and later in molecular genetics. [28] Some of its uses include its application as a vector for the cloning of recombinant DNA; the use of its site-specific recombinase (Int) for the shuffling of cloned DNAs by the Gateway method; [29] and the application of its Red operon, including the proteins Red alpha (also called 'exo'), beta and gamma, in the DNA engineering method called recombineering. A large, dispensable fragment of the lambda genome is not essential for productive infection and can be replaced by foreign DNA, [30] which is then replicated by the phage. Lambda phage enters bacteria more easily than plasmids do, making it a useful vector that can either destroy or become part of the host's DNA. [31] Lambda phage can also be manipulated and used as an anti-cancer vaccine targeting human aspartyl (asparaginyl) β-hydroxylase (ASPH, HAAH), an approach that has shown benefit against hepatocellular carcinoma in mice. [32]
Lambda phage has also been of major importance in the study of specialized transduction. [33]
https://en.wikipedia.org/wiki/Lambda_phage
The λ (lambda) universality class is a group in condensed matter physics. It brings together several systems possessing strong analogies, namely superfluids, superconductors and smectics (liquid crystals). All these systems are expected to belong to the same universality class for the thermodynamic critical properties of the phase transition. While these systems appear quite different at first glance, they are all described by similar formalisms, and their typical phase diagrams are identical.
https://en.wikipedia.org/wiki/Lambda_transition
In general relativity, a lambdavacuum solution is an exact solution to the Einstein field equation in which the only term in the stress–energy tensor is a cosmological constant term. This can be interpreted physically as a kind of classical approximation to a nonzero vacuum energy. These are discussed here as distinct from the vacuum solutions, in which the cosmological constant vanishes. Terminological note: this article concerns a standard concept, but there is apparently no standard term to denote this concept, so we have attempted to supply one for the benefit of Wikipedia. The Einstein field equation is often written as $G_{ab}+\Lambda g_{ab}=\kappa T_{ab}$, with a so-called cosmological constant term $\Lambda g_{ab}$. However, it is possible to move this term to the right-hand side and absorb it into the stress–energy tensor $T_{ab}$, so that the cosmological constant term becomes just another contribution to the stress–energy tensor. When other contributions to that tensor vanish, the result $G_{ab}=-\Lambda g_{ab}$ is a lambdavacuum. An equivalent formulation in terms of the Ricci tensor is $R_{ab}=\Lambda g_{ab}$ in spacetime dimension 4, or $R_{ab}=\tfrac{2}{n-2}\Lambda g_{ab}$ in spacetime dimension $n$. A nonzero cosmological constant term can be interpreted in terms of a nonzero vacuum energy. There are two cases: if $\Lambda>0$, the vacuum energy density is positive and the associated pressure is negative, while if $\Lambda<0$, the vacuum energy density is negative and the pressure is positive. The idea of the vacuum having a nonvanishing energy density might seem counterintuitive, but this does make sense in quantum field theory; indeed, nonzero vacuum energies can even be experimentally verified in the Casimir effect. The components of a tensor computed with respect to a frame field rather than the coordinate basis are often called physical components, because these are the components which can (in principle) be measured by an observer. A frame consists of four unit vector fields $\vec{e}_0,\vec{e}_1,\vec{e}_2,\vec{e}_3$. Here, the first is a timelike unit vector field and the others are spacelike unit vector fields, and $\vec{e}_0$ is everywhere tangent to the world lines of a family of observers (not necessarily inertial observers). Remarkably, in the case of a lambdavacuum, all observers measure the same energy density and the same (isotropic) pressure. That is, the Einstein tensor takes the form $G^{\hat{a}\hat{b}}=-\Lambda{\begin{bmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}}$. Saying that this tensor takes the same form for all observers is the same as saying that the isotropy group of a lambdavacuum is SO(1,3), the full Lorentz group.
The characteristic polynomial of the Einstein tensor of a lambdavacuum must have the form $\chi(\zeta)=\left(\zeta+\Lambda\right)^4$. Using Newton's identities, this condition can be re-expressed in terms of the traces of the powers of the Einstein tensor as $t_2=\tfrac{1}{4}t_1^2,\; t_3=\tfrac{1}{16}t_1^3,\; t_4=\tfrac{1}{64}t_1^4$, where $t_1={G^a}_a$, $t_2={G^a}_b\,{G^b}_a$, $t_3={G^a}_b\,{G^b}_c\,{G^c}_a$ and $t_4={G^a}_b\,{G^b}_c\,{G^c}_d\,{G^d}_a$ are the traces of the powers of the linear operator corresponding to the Einstein tensor, which has second rank. The definition of a lambdavacuum solution makes sense mathematically irrespective of any physical interpretation, and lambdavacuums are a special case of a concept that is studied by pure mathematicians. Einstein manifolds are pseudo-Riemannian manifolds in which the Ricci tensor is proportional to the metric tensor. This mathematical terminology is particularly well-established in Riemannian geometry, which is to say in the context of positive-definite metrics. The Lorentzian manifolds that are also Einstein manifolds are precisely the lambdavacuum solutions. Noteworthy individual examples of lambdavacuum solutions include the de Sitter and anti-de Sitter spacetimes.
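The trace conditions above are easy to check numerically. The following NumPy sketch, offered purely as an illustration with an arbitrary value of Λ, builds the mixed-index Einstein tensor of a lambdavacuum, which in an orthonormal frame is just −Λ times the identity, and verifies the Newton-identity relations.

```python
import numpy as np

# Lambdavacuum check: with frame components G^{ab} = -Lambda*diag(-1,1,1,1)
# and metric eta = diag(-1,1,1,1), the mixed tensor G^a_b = G^{ac} eta_{cb}
# equals -Lambda times the identity, so its characteristic polynomial is
# (zeta + Lambda)^4 and the trace conditions follow.

Lam = 0.7                                  # arbitrary illustrative value
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
G_upper = -Lam * np.diag([-1.0, 1.0, 1.0, 1.0])
G = G_upper @ eta                          # mixed-index operator G^a_b

t = [np.trace(np.linalg.matrix_power(G, k)) for k in (1, 2, 3, 4)]
assert np.isclose(t[1], t[0] ** 2 / 4)     # t2 = t1^2 / 4
assert np.isclose(t[2], t[0] ** 3 / 16)    # t3 = t1^3 / 16
assert np.isclose(t[3], t[0] ** 4 / 64)    # t4 = t1^4 / 64
print("char. poly coefficients:", np.poly(G))  # those of (zeta + Lambda)^4
```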
https://en.wikipedia.org/wiki/Lambdavacuum_solution
The Lambek–Moser theorem is a mathematical description of partitions of the natural numbers into two complementary sets. For instance, it applies to the partition of numbers into even and odd, or into prime and non-prime (one and the composite numbers). There are two parts to the Lambek–Moser theorem. One part states that any two non-decreasing integer functions that are inverse, in a certain sense, can be used to split the natural numbers into two complementary subsets, and the other part states that every complementary partition can be constructed in this way. When a formula is known for the $n$th natural number in a set, the Lambek–Moser theorem can be used to obtain a formula for the $n$th number not in the set. The Lambek–Moser theorem belongs to combinatorial number theory. It is named for Joachim Lambek and Leo Moser, who published it in 1954, [1] and should be distinguished from an unrelated theorem of Lambek and Moser, later strengthened by Wild, on the number of primitive Pythagorean triples. [2] It extends Rayleigh's theorem, which describes complementary pairs of Beatty sequences, the sequences of rounded multiples of irrational numbers. Let $f$ be any function from positive integers to non-negative integers that is both non-decreasing (each value in the sequence $f(1),f(2),f(3),\dots$ is at least as large as any earlier value) and unbounded (it eventually increases past any fixed value). The sequence of its values may skip some numbers, so it might not have an inverse function with the same properties. Instead, define a non-decreasing and unbounded integer function $f^*$ that is as close as possible to the inverse in the sense that, for all positive integers $n$, $f{\bigl(}f^*(n){\bigr)}<n\le f{\bigl(}f^*(n)+1{\bigr)}$. Equivalently, $f^*(n)$ may be defined as the number of values $x$ for which $f(x)<n$. It follows from either of these definitions that $f^{**}=f$. [3] If the two functions $f$ and $f^*$ are plotted as histograms, they form mirror images of each other across the diagonal line $x=y$. [4] From these two functions $f$ and $f^*$, define two more functions $F$ and $F^*$, from positive integers to positive integers, by $F(n)=f(n)+n$ and $F^*(n)=f^*(n)+n$. Then the first part of the Lambek–Moser theorem states that each positive integer occurs exactly once among the values of either $F$ or $F^*$. That is, the values obtained from $F$ and the values obtained from $F^*$ form two complementary sets of positive integers. More strongly, each of these two functions maps its argument $n$ to the $n$th member of its set in the partition. [3] As an example of the construction of a partition from a function, let $f(n)=n^2$, the function that squares its argument.
Then its inverse is the square root function, whose closest integer approximation (in the sense used for the Lambek–Moser theorem) is $f^*(n)=\lfloor\sqrt{n-1}\rfloor$. These two functions give $F(n)=n^2+n$ and $F^*(n)=\lfloor\sqrt{n-1}\rfloor+n$. For $n=1,2,3,\dots$ the values of $F$ are the pronic numbers $2,6,12,20,30,\dots$ while the values of $F^*$ are $1,3,4,5,7,8,9,10,11,13,\dots$. These two sequences are complementary: each positive integer belongs to exactly one of them. [4] The Lambek–Moser theorem states that this phenomenon is not specific to the pronic numbers, but rather arises for any choice of $f$ with the appropriate properties. [3] The second part of the Lambek–Moser theorem states that this construction of partitions from inverse functions is universal, in the sense that it can explain any partition of the positive integers into two infinite parts. If $S=s_1,s_2,\dots$ and $S^*=s_1^*,s_2^*,\dots$ are any two complementary increasing sequences of integers, one may construct a pair of functions $f$ and $f^*$ from which this partition may be derived using the Lambek–Moser theorem. To do so, define $f(n)=s_n-n$ and $f^*(n)=s_n^*-n$. [3] One of the simplest examples to which this could be applied is the partition of positive integers into even and odd numbers. The functions $F(n)$ and $F^*(n)$ should give the $n$th even or odd number, respectively, so $F(n)=2n$ and $F^*(n)=2n-1$. From these are derived the two functions $f(n)=F(n)-n=n$ and $f^*(n)=F^*(n)-n=n-1$. They form an inverse pair, and the partition generated via the Lambek–Moser theorem from this pair is just the partition of the positive integers into even and odd numbers. Another integer partition, into evil numbers and odious numbers (by the parity of the binary representation), uses almost the same functions, adjusted by the values of the Thue–Morse sequence. [6] In the same work in which they proved the Lambek–Moser theorem, Lambek and Moser provided a method of going directly from $F$, the function giving the $n$th member of a set of positive integers, to $F^*$, the function giving the $n$th non-member, without going through $f$ and $f^*$. Let $F^{\#}(n)$ denote the number of values of $x$ for which $F(x)\le n$; this is an approximation to the inverse function of $F$, but (because it uses $\le$ in place of $<$) offset by one from the type of inverse used to define $f^*$ from $f$.
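A short computational check of the construction just described (Python used for illustration) builds $f^*$ by counting, forms $F$ and $F^*$, and confirms that the pronic numbers and the sequence 1, 3, 4, 5, 7, ... partition an initial segment of the positive integers.

```python
# Lambek-Moser construction for f(n) = n^2: F gives the pronic numbers,
# F* gives their complement.

def f(n):
    return n * n

def f_star(n):
    """Number of positive integers x with f(x) < n."""
    count, x = 0, 1
    while f(x) < n:
        count, x = count + 1, x + 1
    return count

N = 30
F_vals  = {f(n) + n for n in range(1, N)}        # 2, 6, 12, 20, ... (pronic)
Fs_vals = {f_star(n) + n for n in range(1, N)}   # 1, 3, 4, 5, 7, 8, ...

assert F_vals.isdisjoint(Fs_vals)                # the two sets never overlap
assert set(range(1, N)) <= (F_vals | Fs_vals)    # together they cover 1..N-1
print(sorted(F_vals)[:5], sorted(Fs_vals)[:8])
```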
Given the counting function $F^{\#}$ just defined, for any $n$, $F^*(n)$ is the limit of the sequence $n,\; n+F^{\#}(n),\; n+F^{\#}{\bigl(}n+F^{\#}(n){\bigr)},\dots$, meaning that this sequence eventually becomes constant, and the value it takes when it does is $F^*(n)$. [7] Lambek and Moser used the prime numbers as an example, following earlier work by Viggo Brun and D. H. Lehmer. [8] If $\pi(n)$ is the prime-counting function (the number of primes less than or equal to $n$), then the $n$th non-prime (1 or a composite number) is given by the limit of the sequence [7] $n,\; n+\pi(n),\; n+\pi{\bigl(}n+\pi(n){\bigr)},\dots$. For some other sequences of integers, the corresponding limit converges in a fixed number of steps, and a direct formula for the complementary sequence is possible. In particular, the $n$th positive integer that is not a $k$th power can be obtained from the limiting formula as [9] $n+\left\lfloor\sqrt[k]{n+\lfloor\sqrt[k]{n}\rfloor}\right\rfloor$. The theorem was discovered by Leo Moser and Joachim Lambek, who published it in 1954. Moser and Lambek cite the previous work of Samuel Beatty on Beatty sequences as their inspiration, and also cite the work of Viggo Brun and D. H. Lehmer from the early 1930s on methods related to their limiting formula for $F^*$. [1] Edsger W. Dijkstra has provided a visual proof of the result, [10] and later another proof based on algorithmic reasoning. [11] Yuval Ginosar has provided an intuitive proof based on an analogy of two athletes running in opposite directions around a circular racetrack. [12] A variation of the theorem applies to partitions of the non-negative integers, rather than to partitions of the positive integers. For this variation, every partition corresponds to a Galois connection of the ordered non-negative integers to themselves. This is a pair of non-decreasing functions $(f,f^*)$ with the property that, for all $x$ and $y$, $f(x)\le y$ if and only if $x\le f^*(y)$. The corresponding functions $F$ and $F^*$ are defined slightly less symmetrically by $F(n)=f(n)+n$ and $F^*(n)=f^*(n)+n+1$. For functions defined in this way, the values of $F$ and $F^*$ (for non-negative arguments, rather than positive arguments) form a partition of the non-negative integers, and every partition can be constructed in this way. [13] Rayleigh's theorem states that for two positive irrational numbers $r$ and $s$, both greater than one, with $\tfrac{1}{r}+\tfrac{1}{s}=1$, the two sequences $\lfloor i\cdot r\rfloor$ and $\lfloor i\cdot s\rfloor$ for $i=1,2,3,\dots$, obtained by rounding down to an integer the multiples of $r$ and $s$, are complementary.
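The limiting formula lends itself directly to computation. The sketch below (Python, with SymPy's primepi standing in for the prime-counting function) iterates the map until it stabilizes, producing the nth non-prime, and also evaluates the fixed-step closed form for the nth non-square.

```python
import math
from sympy import primepi

def nth_excluded(n, count):
    """Limit of n, n + F#(n), n + F#(n + F#(n)), ... for a counting function F#."""
    m = n
    while True:
        m_next = n + int(count(m))
        if m_next == m:
            return m
        m = m_next

# n-th non-prime (1 or composite): 1, 4, 6, 8, 9, 10, 12, ...
print([nth_excluded(n, primepi) for n in range(1, 8)])

# n-th non-square from the closed form with k = 2:
def nth_nonsquare(n):
    return n + math.isqrt(n + math.isqrt(n))

print([nth_nonsquare(n) for n in range(1, 8)])    # 2, 3, 5, 6, 7, 8, 10
```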
Rayleigh's theorem can be seen as an instance of the Lambek–Moser theorem with $f(n)=\lfloor rn\rfloor-n$ and $f^*(n)=\lfloor sn\rfloor-n$. The condition that $r$ and $s$ be greater than one implies that these two functions are non-decreasing; the derived functions are $F(n)=\lfloor rn\rfloor$ and $F^*(n)=\lfloor sn\rfloor$. The sequences of values of $F$ and $F^*$ forming the derived partition are known as Beatty sequences, after Samuel Beatty's 1926 rediscovery of Rayleigh's theorem. [14]
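Rayleigh's theorem is equally easy to check numerically. The sketch below uses the golden ratio for r (so that s = r + 1), and confirms that the two Beatty sequences are disjoint and jointly cover an initial segment of the positive integers.

```python
import math

r = (1 + math.sqrt(5)) / 2      # golden ratio; irrational and > 1
s = r / (r - 1)                 # chosen so that 1/r + 1/s = 1 (here s = r + 1)

A = {math.floor(i * r) for i in range(1, 300)}
B = {math.floor(i * s) for i in range(1, 300)}

N = 150
assert A.isdisjoint(B)                 # complementary: no overlaps ...
assert set(range(1, N)) <= (A | B)     # ... and no gaps up to N - 1
print(sorted(A)[:8])                   # 1, 3, 4, 6, 8, 9, 11, 12
print(sorted(B)[:8])                   # 2, 5, 7, 10, 13, 15, 18, 20
```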
https://en.wikipedia.org/wiki/Lambek–Moser_theorem
In optics, Lambert's cosine law says that the observed radiant intensity or luminous intensity from an ideal diffusely reflecting surface or ideal diffuse radiator is directly proportional to the cosine of the angle $\theta$ between the observer's line of sight and the surface normal: $I=I_0\cos\theta$. [1] [2] The law is also known as the cosine emission law [3] or Lambert's emission law. It is named after Johann Heinrich Lambert, from his Photometria, published in 1760. [4] A surface which obeys Lambert's law is said to be Lambertian, and exhibits Lambertian reflectance. Such a surface has a constant radiance/luminance regardless of the angle from which it is observed; a single human eye perceives such a surface as having a constant brightness, regardless of the angle from which the eye observes the surface. It has the same radiance because, although the emitted power from a given area element is reduced by the cosine of the emission angle, the solid angle subtended by the surface visible to the viewer is reduced by the very same amount. Because the ratio between power and solid angle is constant, radiance (power per unit solid angle per unit projected source area) stays the same. When an area element is radiating as a result of being illuminated by an external source, the irradiance (energy or photons/time/area) landing on that area element will be proportional to the cosine of the angle between the illuminating source and the normal. A Lambertian scatterer will then scatter this light according to the same cosine law as a Lambertian emitter. This means that although the radiance of the surface depends on the angle from the normal to the illuminating source, it will not depend on the angle from the normal to the observer. For example, if the moon were a Lambertian scatterer, one would expect to see its scattered brightness appreciably diminish towards the terminator due to the increased angle at which sunlight hits the surface. The fact that it does not diminish illustrates that the moon is not a Lambertian scatterer, and in fact tends to scatter more light into the oblique angles than a Lambertian scatterer would. The emission of a Lambertian radiator does not depend on the amount of incident radiation, but rather on the radiation originating in the emitting body itself. For example, if the sun were a Lambertian radiator, one would expect to see a constant brightness across the entire solar disc. The fact that the sun exhibits limb darkening in the visible region illustrates that it is not a Lambertian radiator. A black body is an example of a Lambertian radiator. The situation for a Lambertian surface (emitting or scattering) is illustrated in Figures 1 and 2. For conceptual clarity we will think in terms of photons rather than energy or luminous energy. The wedges in the circle each represent an equal angle $d\Omega$, of an arbitrarily chosen size, and for a Lambertian surface, the number of photons per second emitted into each wedge is proportional to the area of the wedge. The length of each wedge is the product of the diameter of the circle and $\cos\theta$. The maximum rate of photon emission per unit solid angle is along the normal, and it diminishes to zero for $\theta=90^\circ$. In mathematical terms, the radiance along the normal is $I$ photons/(s·m²·sr) and the number of photons per second emitted into the vertical wedge is $I\,d\Omega\,dA$. The number of photons per second emitted into the wedge at angle $\theta$ is $I\cos\theta\,d\Omega\,dA$. Figure 2 represents what an observer sees.
The observer directly above the area element will be seeing the scene through an aperture of area $dA_0$, and the area element $dA$ will subtend a (solid) angle of $d\Omega_0$, which is a portion of the observer's total angular field of view of the scene. Since the wedge size $d\Omega$ was chosen arbitrarily, for convenience we may assume without loss of generality that it coincides with the solid angle subtended by the aperture when "viewed" from the locus of the emitting area element $dA$. Thus the normal observer will be recording the same $I\,d\Omega\,dA$ photons per second emission derived above and will measure a radiance of $\frac{I\,d\Omega\,dA}{d\Omega_0\,dA_0}$. The observer at angle $\theta$ to the normal will be seeing the scene through the same aperture of area $dA_0$ (still corresponding to a $d\Omega$ wedge), and from this oblique vantage the area element $dA$ is foreshortened and will subtend a (solid) angle of $d\Omega_0\cos\theta$. This observer will be recording $I\cos\theta\,d\Omega\,dA$ photons per second, and so will be measuring a radiance of $\frac{I\cos\theta\,d\Omega\,dA}{d\Omega_0\cos\theta\,dA_0}=\frac{I\,d\Omega\,dA}{d\Omega_0\,dA_0}$, which is the same as for the normal observer. In general, the luminous intensity of a point on a surface varies by direction; for a Lambertian surface, that distribution is defined by the cosine law, with peak luminous intensity in the normal direction. Thus when the Lambertian assumption holds, we can calculate the total luminous flux, $F_{\text{tot}}$, from the peak luminous intensity, $I_{\max}$, by integrating the cosine law: $F_{\text{tot}}=\int_0^{2\pi}\int_0^{\pi/2}\cos(\theta)\,I_{\max}\,\sin(\theta)\,d\theta\,d\phi=2\pi\cdot I_{\max}\int_0^{\pi/2}\cos(\theta)\sin(\theta)\,d\theta=2\pi\cdot I_{\max}\int_0^{\pi/2}\frac{\sin(2\theta)}{2}\,d\theta$, and so $F_{\text{tot}}=\pi\,\mathrm{sr}\cdot I_{\max}$, where $\sin(\theta)$ is the determinant of the Jacobian matrix for the unit sphere, and realizing that $I_{\max}$ is luminous flux per steradian. [5] Similarly, the peak intensity will be $1/(\pi\,\mathrm{sr})$ of the total radiated luminous flux. For Lambertian surfaces, the same factor of $\pi\,\mathrm{sr}$ relates luminance to luminous emittance, radiant intensity to radiant flux, and radiance to radiant emittance. [citation needed] Radians and steradians are, of course, dimensionless, so "rad" and "sr" are included only for clarity. Example: A surface with a luminance of, say, 100 cd/m² (= 100 nits, typical PC monitor) will, if it is a perfect Lambert emitter, have a luminous emittance of 100π lm/m². If its area is 0.1 m² (~19" monitor), then the total light emitted, or luminous flux, would be 31.4 lm.
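Both the π sr factor and the worked monitor example can be reproduced in a few lines. The following sketch (Python with SciPy's quad routine, used purely as an illustration) integrates the cosine law numerically and then repeats the 100 cd/m², 0.1 m² calculation.

```python
import math
from scipy.integrate import quad

# Hemispherical integral of the cosine law: F_tot = pi * I_max.
I_max = 1.0
inner, _ = quad(lambda t: math.cos(t) * math.sin(t), 0.0, math.pi / 2)
F_tot = 2.0 * math.pi * I_max * inner
assert abs(F_tot - math.pi * I_max) < 1e-9

# Worked example from the text: a perfect Lambertian emitter at 100 cd/m^2
# (typical PC monitor) with an area of 0.1 m^2.
luminance = 100.0                   # cd/m^2
emittance = math.pi * luminance    # luminous emittance, lm/m^2
flux = emittance * 0.1             # total luminous flux, lm
print(f"luminous flux = {flux:.1f} lm")   # ~31.4 lm
```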
https://en.wikipedia.org/wiki/Lambert's_cosine_law
In mathematics, a Lambert series, named for Johann Heinrich Lambert, is a series taking the form $S(q)=\sum_{n=1}^\infty a_n\frac{q^n}{1-q^n}$. It can be resummed formally by expanding the denominator: $S(q)=\sum_{n=1}^\infty a_n\sum_{k=1}^\infty q^{nk}=\sum_{m=1}^\infty b_m q^m$, where the coefficients of the new series are given by the Dirichlet convolution of $a_n$ with the constant function $1(n)=1$: $b_m=(a*1)(m)=\sum_{d\mid m}a_d$ (a numerical check of this resummation appears at the end of this entry). This series may be inverted by means of the Möbius inversion formula, and is an example of a Möbius transform. Since this last sum is a typical number-theoretic sum, almost any natural multiplicative function will be exactly summable when used in a Lambert series. Thus, for example, one has $\sum_{n=1}^\infty\frac{q^n}{1-q^n}=\sum_{n=1}^\infty\sigma_0(n)q^n$, where $\sigma_0(n)=d(n)$ is the number of positive divisors of the number $n$. For the higher-order sum-of-divisor functions, one has $\sum_{n=1}^\infty\frac{n^\alpha q^n}{1-q^n}=\sum_{n=1}^\infty\operatorname{Li}_{-\alpha}(q^n)=\sum_{n=1}^\infty\sigma_\alpha(n)q^n$, where $\alpha$ is any complex number, $\operatorname{Li}$ is the polylogarithm, and $\sigma_\alpha(n)=\sum_{d\mid n}d^\alpha$ is the divisor function. In particular, for $\alpha=1$, the Lambert series one gets is $\sum_{n=1}^\infty\frac{nq^n}{1-q^n}=\sum_{n=1}^\infty\sigma_1(n)q^n$, which is (up to the factor of $q$) the logarithmic derivative of the usual generating function for partition numbers, $\prod_{k=1}^\infty\frac{1}{1-q^k}=\sum_{n=0}^\infty p(n)q^n$. For the Möbius function $\mu(n)$ one has $\sum_{n=1}^\infty\mu(n)\frac{q^n}{1-q^n}=q$; related Lambert series over the Möbius function include further identities for prime values of a parameter $\alpha\in\mathbb{Z}^+$. The proof of the first identity above follows from a multi-section (or bisection) identity of these Lambert series generating functions, where we denote $L_f(q):=\sum_{n=1}^\infty f(n)\frac{q^n}{1-q^n}$ to be the Lambert series generating function of the arithmetic function $f$. For Euler's totient function $\varphi(n)$: $\sum_{n=1}^\infty\varphi(n)\frac{q^n}{1-q^n}=\frac{q}{(1-q)^2}$. For the von Mangoldt function $\Lambda(n)$: $\sum_{n=1}^\infty\Lambda(n)\frac{q^n}{1-q^n}=\sum_{m=1}^\infty\log(m)\,q^m$. For Liouville's function $\lambda(n)$: $\sum_{n=1}^\infty\lambda(n)\frac{q^n}{1-q^n}=\sum_{n=1}^\infty q^{n^2}$, with the sum on the right similar to the Ramanujan theta function, or Jacobi theta function $\vartheta_3(q)$. Note that Lambert series in which the $a_n$ are trigonometric functions, for example $a_n=\sin(2nx)$, can be evaluated by various combinations of the logarithmic derivatives of Jacobi theta functions. Generally speaking, we can extend the previous generating function expansion by letting $\chi_m(n)$ denote the characteristic function of the $m$th powers, $n=k^m\in\mathbb{Z}^+$, for positive natural numbers $m>2$, and defining the generalized $m$-Liouville lambda function to be the arithmetic function satisfying $\chi_m(n):=(1*\lambda_m)(n)$.
This definition of $\lambda_m(n)$ clearly implies that $\lambda_m(n)=\sum_{d^m\mid n}\mu\left(\frac{n}{d^m}\right)$, which in turn shows that $\sum_{n=1}^\infty\lambda_m(n)\frac{q^n}{1-q^n}=\sum_{k=1}^\infty q^{k^m}$. We also have a slightly more generalized Lambert series expansion generating the sum of squares function $r_2(n)$. [3] In general, if we write the Lambert series over $f(n)$ which generates the arithmetic functions $g(m)=(f*1)(m)$, further pairs of functions correspond to other well-known convolutions expressed through their Lambert series generating functions, involving $\varepsilon(n)=\delta_{n,1}$, the multiplicative identity for Dirichlet convolutions; $\operatorname{Id}_k(n)=n^k$, the identity function for $k$th powers; $\chi_{\operatorname{sq}}$, the characteristic function for the squares; $\omega(n)$, which counts the number of distinct prime factors of $n$ (see prime omega function); $J_t$, Jordan's totient function; and $d(n)=\sigma_0(n)$, the divisor function (see Dirichlet convolutions). The conventional use of the letter $q$ in the summations is a historical usage, referring to its origins in the theory of elliptic curves and theta functions, as the nome. Substituting $q=e^{-z}$, one obtains another common form for the series, $\sum_{n=1}^\infty\frac{a_n}{e^{nz}-1}=\sum_{m=1}^\infty b_m e^{-mz}$, where $b_m=(a*1)(m)$ as before. Examples of Lambert series in this form, with $z=2\pi$, occur in expressions for the Riemann zeta function for odd integer values; see Zeta constants for details. In the literature we find Lambert series applied to a wide variety of sums. For example, since $q^n/(1-q^n)=\operatorname{Li}_0(q^n)$ is a polylogarithm function, we may refer to any sum of this shape as a Lambert series, assuming that the parameters are suitably restricted; an identity of this kind that holds for all complex $q$ not on the unit circle would be considered a Lambert series identity. Such identities follow in a straightforward fashion from identities published by the Indian mathematician S. Ramanujan, and a very thorough exploration of Ramanujan's works can be found in the works by Bruce Berndt. A somewhat newer construction recently published over 2017–2018 relates to so-termed Lambert series factorization theorems, [4] built from the coefficient identities $s_o(n,k)\pm s_e(n,k)=[q^n](\mp q;q)_\infty\frac{q^k}{1\pm q^k}$, the respective sums or differences of the restricted partition functions $s_{e/o}(n,k)$, which denote the number of $k$'s in all partitions of $n$ into an even (respectively, odd) number of distinct parts. Let $s_{n,k}:=s_e(n,k)-s_o(n,k)=[q^n](q;q)_\infty\frac{q^k}{1-q^k}$ denote the resulting invertible lower triangular sequence; it follows directly from this definition that $(q;q)_\infty\sum_{k\ge1}a_k\frac{q^k}{1-q^k}=\sum_{n\ge1}\Bigl(\sum_{k=1}^n s_{n,k}a_k\Bigr)q^n$. Another characteristic form of the Lambert series factorization theorem expansions is given in terms of $(q;q)_\infty$, the (infinite) q-Pochhammer symbol. [5]
The invertible matrix products on the right-hand side of the previous equation correspond to inverse matrix products whose lower triangular entries are given in terms of the partition function and the Möbius function by divisor sums. [6] We let $G_j:=\frac{1}{2}\left\lceil\frac{j}{2}\right\rceil\left\lceil\frac{3j+1}{2}\right\rceil$ denote the sequence of interleaved pentagonal numbers, in terms of which the pentagonal number theorem is expanded. Then, for any Lambert series $L_f(q)$ generating the sequence $g(n)=(f*1)(n)$, there is a corresponding inversion relation for the factorization theorem expanded above. [7] This work on Lambert series factorization theorems is extended in [8] to more general expansions in which the factor $(q;q)_\infty$ is replaced by any (partition-related) reciprocal generating function $C(q)$ and $\gamma(n)$ is any arithmetic function, with correspondingly modified coefficients. The corresponding inverse matrices in that expansion satisfy an analogous relation, so that, as in the first variant of the Lambert factorization theorem above, one obtains an inversion relation for the right-hand-side coefficients. Within this section several auxiliary functions are defined for natural numbers $n,x\ge1$, adopting the notation from the previous section in which $(q;q)_\infty$ is the infinite q-Pochhammer symbol; recurrence relations involving these functions and the pentagonal numbers are proved in [7]. Derivatives of a Lambert series can be obtained by differentiating the series termwise with respect to $q$; identities for the termwise $s$th derivatives for any $s\ge1$ are known, [9] [10] in which the bracketed triangular coefficients denote the Stirling numbers of the first and second kinds, together with an identity for extracting the individual coefficients of the terms implicit to those expansions. If we further define functions $A_t(n)$ for any $n,t\ge1$, where $[\cdot]_\delta$ denotes Iverson's convention, then the coefficients of the $t$th derivatives of a Lambert series can be given in terms of $A_t(n)$. Of course, by a typical argument purely by operations on formal power series, corresponding identities also hold at the level of formal power series.
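Returning to the basic resummation at the head of this entry, the coefficient identity $b_m=\sum_{d\mid m}a_d$ is straightforward to verify with truncated power series. The sketch below (Python used for illustration) computes the first coefficients of a Lambert series and confirms that choosing $a_n=\mu(n)$ collapses the series to the single term $q$.

```python
# Coefficients of sum_n a(n) q^n/(1 - q^n) as a power series: the term
# q^n/(1-q^n) contributes a(n) to every coefficient b_m with n | m.

def mobius(n):
    """Moebius function by trial division (adequate for small n)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor: mu = 0
            result = -result
        p += 1
    return -result if n > 1 else result

def lambert_coeffs(a, M):
    b = [0] * (M + 1)
    for n in range(1, M + 1):
        for m in range(n, M + 1, n):      # q^n/(1-q^n) = q^n + q^{2n} + ...
            b[m] += a(n)
    return b[1:]

print(lambert_coeffs(mobius, 20))         # [1, 0, 0, ..., 0]: the series is q
print(lambert_coeffs(lambda n: 1, 10))    # divisor counts d(1)..d(10)
```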
https://en.wikipedia.org/wiki/Lambert_series
The Lamb–Chaplygin dipole model is a mathematical description for a particular inviscid and steady dipolar vortex flow. It is a non-trivial solution to the two-dimensional Euler equations. The model is named after Horace Lamb and Sergey Alexeyevich Chaplygin, who independently discovered this flow structure. [1] This dipole is the two-dimensional analogue of Hill's spherical vortex. A two-dimensional (2D), solenoidal vector field $\mathbf{u}$ may be described by a scalar stream function $\psi$, via $\mathbf{u}=-\mathbf{e}_z\times\nabla\psi$, where $\mathbf{e}_z$ is the right-handed unit vector perpendicular to the 2D plane. By definition, the stream function is related to the vorticity $\omega$ via a Poisson equation: $-\nabla^2\psi=\omega$. The Lamb–Chaplygin model follows from demanding the following characteristics: [citation needed] The solution $\psi$ in cylindrical coordinates ($r,\theta$), in the co-moving frame of reference, reads: $\psi={\begin{cases}\dfrac{-2U J_1(kr)}{k J_0(kR)}\sin(\theta),&\text{for }r<R,\\U\left(\dfrac{R^2}{r}-r\right)\sin(\theta),&\text{for }r\ge R,\end{cases}}$ where $J_0$ and $J_1$ are the zeroth and first Bessel functions of the first kind, respectively. Further, the value of $k$ is such that $kR=3.8317\ldots$, the first non-trivial zero of the first Bessel function of the first kind. [citation needed] Since the seminal work of P. Orlandi, [2] the Lamb–Chaplygin vortex model has been a popular choice for numerical studies on vortex–environment interactions. The fact that it does not deform makes it a prime candidate for consistent flow initialization. A less favorable property is that the second derivative of the flow field at the dipole's edge is not continuous. [3] Further, it serves as a framework for stability analysis on dipolar-vortex structures. [4]
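The stream function above is simple to evaluate with standard Bessel routines. The following sketch (Python with SciPy, used for illustration) recovers kR = 3.8317... from the first zero of J1 and checks that ψ is continuous across the dipole edge r = R.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

U, R = 1.0, 1.0
k = jn_zeros(1, 1)[0] / R        # kR = 3.8317..., first zero of J1

def psi(r, theta):
    """Lamb-Chaplygin stream function in the co-moving frame."""
    if r < R:
        return -2.0 * U * j1(k * r) / (k * j0(k * R)) * np.sin(theta)
    return U * (R**2 / r - r) * np.sin(theta)

print(f"kR = {k * R:.4f}")
for theta in (0.5, 1.5, 2.5):
    inside = psi(R * (1 - 1e-9), theta)
    outside = psi(R * (1 + 1e-9), theta)
    assert abs(inside - outside) < 1e-6   # psi -> 0 on both sides of r = R
```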
https://en.wikipedia.org/wiki/Lamb–Chaplygin_dipole
In physics, the Lamb–Mössbauer factor (LMF, after Willis Lamb and Rudolf Mössbauer) or elastic incoherent structure factor (EISF) is the ratio of elastic to total incoherent neutron scattering, or the ratio of recoil-free to total nuclear resonant absorption in Mössbauer spectroscopy. The corresponding factor for coherent neutron or X-ray scattering is the Debye–Waller factor; often, that term is used in a more generic way to include the incoherent case as well. When first reporting on recoil-free resonance absorption, Mössbauer (1959) [1] cited relevant theoretical work by Lamb (1939). [2] The first use of the term "Mössbauer–Lamb factor" seems to be by Tzara (1961); [3] from 1962 on, the form "Lamb–Mössbauer factor" came into widespread use. Singwi and Sjölander (1960) [4] pointed out the close relation to incoherent neutron scattering. With the invention of backscattering spectrometers, [5] it became possible to measure the Lamb–Mössbauer factor as a function of the wavenumber (whereas Mössbauer spectroscopy operates at a fixed wavenumber). Subsequently, the term elastic incoherent structure factor became more frequent.
https://en.wikipedia.org/wiki/Lamb–Mössbauer_factor
A lamella (pl.: lamellae) in biology refers to a thin layer, membrane or plate of tissue. [1] This is a very broad definition, and can refer to many different structures. Any thin layer of organic tissue can be called a lamella, and there is a wide array of functions an individual layer can serve. For example, an intercellular lipid lamella is formed when lamellar disks fuse to form a lamellar sheet. It is believed that these disks are formed from vesicles, giving the lamellar sheet a lipid bilayer that plays a role in water diffusion. [2] Another instance of cellular lamellae can be seen in chloroplasts. Thylakoid membranes are actually a system of lamellar membranes working together, and are differentiated into different lamellar domains. This lamellar system allows plants to convert light energy into chemical energy. [3] Chloroplasts are characterized by a system of membranes embedded in a hydrophobic proteinaceous matrix, or stroma. The basic unit of the membrane system is a flattened single vesicle called the thylakoid; thylakoids stack into grana. All the thylakoids of a granum are connected with each other, and the grana are connected by intergranal lamellae. [4] The middle lamella is placed between the two primary cell walls of two plant cells and is made up of intercellular matrix. It comprises a mixture of polygalacturons (D-galacturonic acid) and neutral carbohydrates, and is soluble in the pectinase enzyme. Lamella, in cell biology, is also used to describe the leading edge of a motile cell, of which the lamellipodium is the most forward portion. [5] The lipid bilayer core of biological membranes is also called the lamellar phase. [6] Thus, each bilayer of a multilamellar liposome, and the wall of a unilamellar liposome, is also referred to as a lamella.
https://en.wikipedia.org/wiki/Lamella_(cell_biology)
A lamella (pl.: lamellae) is a small plate or flake, from the Latin, and may also refer to collections of fine sheets of material held adjacent to one another in a gill-shaped structure, often with fluid in between, though sometimes simply a set of "welded" plates. The term is used in biological contexts for thin membranes or plates of tissue. In the context of materials science, the microscopic structures in bone and nacre are called lamellae. Moreover, the term lamella is often used to describe the crystal structure of some materials. [1] In surface chemistry (especially mineralogy and materials science), lamellar structures are fine layers, alternating between different materials. They can be produced by chemical effects (as in eutectic solidification), biological means, or a deliberate process of lamination, such as pattern welding. Lamellae can also describe the layers of atoms in the crystal lattices of materials such as metals. In surface anatomy, a lamella is a thin plate-like structure, often one amongst many lamellae very close to one another, with open space between. In chemical engineering, the term is used for devices such as filters and heat exchangers. In mycology, a lamella (or gill) is a papery hymenophore rib under the cap of some mushroom species, most often agarics. The term has been used to describe the construction of lamellar armour, as well as the layered structures that can be described by a lamellar vector field. In medical professions, especially orthopedic surgery, the term is used to refer to 3D-printed titanium technology used to create implantable medical devices (in this case, orthopedic implants). [2] In the context of water treatment, lamellar filters may be referred to as plate filters or tube filters. The term is also used to describe a certain type of ichthyosis, a congenital skin condition: lamellar ichthyosis often presents with a "collodion" membrane at birth and is characterized by generalized dark scaling. The term lamella(e) is used in the flooring industry to describe the finished top layer of an engineered wooden floor; for example, an engineered walnut floor will have several layers of wood and a top walnut lamella. In archaeology, the term is used for a variety of small flat and thin objects, such as Amulet MS 5236, a very thin gold plate with a stamped text from Ancient Greece in the 6th century BC. In crystallography, the term was first used by Christopher Chantler and refers to a very thin layer of a perfect crystal, from which curved crystal physics may be derived. [3] In the textile industry, a lamella is a thin metallic strip used alone or wound around a core thread for goldwork embroidery and tapestry weaving. [4] In September 2010, the U.S. Food and Drug Administration (FDA) announced a recall of two medications which contained "extremely thin glass flakes (lamellae) that are barely visible in most cases. The lamellae result from the interaction of the formulation with glass vials over the shelf life of the product." [5]
https://en.wikipedia.org/wiki/Lamella_(materials)
The Lamella roof (also sometimes called the "Zollinger roof" after its inventor, Friedrich Zollinger, a municipal building surveyor from Merseburg in the German state of Saxony-Anhalt [1]) is a construction type in which the roof is supported by an arched network of overlapping lamellae in rhombic form. [1] As such it may be understood as a subset of gridshell-type roof constructions. This roof style was designed by Zollinger to satisfy urban expansion needs, where material costs made new construction cost-prohibitive, but existing buildings could not support additional stories built with further masonry walls and high-pitch trusses. [2] The vault system comprises short structural members interwoven across a curved surface in a diamond pattern. [3] [4] Lamella structures can be constructed of timber or lumber, concrete, or metal. [5] Modern versions of this type of structure include glazed metal-framed systems referred to as "transparent shells." [6]
https://en.wikipedia.org/wiki/Lamella_(structure)
A lamella clarifier or inclined plate settler (IPS) is a type of clarifier designed to remove particulates from liquids. Lamella clarifiers can be used in a range of industries, including mining and metal finishing, as well as to treat groundwater, industrial process water and backwash from sand filters. [1] Lamella clarifiers are ideal for applications where the solids loading is variable and the solids sizing is fine, [2] and they are more common than conventional clarifiers at many industrial sites due to their smaller footprint. [3] One specific application is pre-treatment of effluent entering membrane filters: lamella clarifiers are considered one of the best options for pre-treatment ahead of ultrafiltration. [4] Their all-steel design minimizes the chances that part of an inclined plate will chip off and be carried over into the membrane, especially compared to tube settlers, which are made of plastic. Further, lamella clarifiers may maintain the required water quality to the membrane with or without the use of chemicals. This is a cost-saving measure, both in purchasing chemicals and in limiting damage to the membrane, as membranes do not cope well with the large particles contained in flocculants and coagulants. Lamella clarifiers are also used in municipal wastewater treatment processes. [5] The most common wastewater application for lamella clarifiers is as part of the tertiary treatment stage. Lamella clarifiers can be integrated into the treatment process, or stand-alone units can be used to increase the flow through existing water treatment plants. [6] One option for integrating lamella clarifiers into existing plants is for conventional or sludge-blanket clarifiers to be upgraded by attaching a bundle of inclined plates or tubes before the overflow in the so-called "clear water zone". This can increase the settling area twofold, resulting in a decrease in the solids loading in the overflow. [7] The main advantage of lamella clarifiers over other clarifying systems is the large effective settling area created by the use of inclined plates, which improves the operating conditions of the clarifiers in a number of ways. The unit is more compact, usually requiring only 65–80% of the area of clarifiers operating without inclined plates. [3] Therefore, where site footprint constraints are of concern, a lamella clarifier system is preferred. The reduced area requirement makes it possible for the clarifiers to be located and operated indoors, reducing some of the common problems of algae growth, clogging due to accumulation of wind-blown debris, and odour control that occur when the machinery is outdoors. Operation within an enclosed space also allows for better control of operating temperature and pressure conditions. [8] The inclined plates mean the clarifier can operate with overflow rates 2 to 4 times those of traditional clarifiers, which allows a greater influent flow rate and thus a more time-efficient clarification process. [3] Lamella clarifiers also offer a simple design without requiring the use of chemicals, and are therefore able to act as pre-treatment for delicate membrane processes. Where necessary, flocculants may be added to promote efficiency. Lamella clarifier performance may be improved by the addition of flocculants and coagulants. [9] These chemicals optimize the settling process and produce a higher purity of overflow water by ensuring that all smaller solids are settled into the sludge underflow.
[10] A further advantage of the lamella clarifier is its distinct absence of mechanical, moving parts. The system therefore requires no energy input except for the influent pump, and has a much lower propensity for mechanical failure than other clarifiers. This advantage extends to safety considerations when operating the plant: the absence of mechanical parts results in a safer working environment, with less possibility for injury. [10] Whilst the lamella clarifier has overcome many difficulties encountered with more traditional clarifiers, there are still some disadvantages involved in the configuration and running of the equipment. Lamella clarifiers are unable to treat most raw feed mixtures, which require some pre-treatment to remove materials that could decrease separation efficiency. The feed requires initial processing in advanced fine screening and grit and grease removal to ensure the influent mixture is of an appropriate composition. [8] The layout of the clarifier creates extra turbulence as the water turns a corner from the feed to the inclined plates. This area of increased turbulence coincides with the sludge collection point, and the flowing water can cause some resuspension of solids, whilst simultaneously diluting the sludge. [11] This results in the need for further treatment to remove the excess moisture from the sludge. Clarifier inlets and discharge must be designed to distribute flow evenly. [3] Regular maintenance is required, as sludge flowing down the inclined plates leaves them dirty; regular cleaning helps prevent uneven flow distribution. [3] Additionally, poorly maintained plates can cause uneven flow distribution and sacrifice the efficiency of the process. [12] The closely packed plates make cleaning difficult; however, removable and independently supported lamellar plates can be installed. [8] Commercially available lamella clarifiers require different concrete basin geometry and structural support from the conventional clarification systems widely used in industry, [13] thus increasing the cost of installing a new (lamellar) clarification system. A typical lamella clarifier design consists of a series of inclined plates inside a vessel (see first figure). The untreated feed water stream enters from the top of the vessel and flows down a feed channel underneath the inclined plates. Water then flows up inside the clarifier between the inclined plates. During this time solids settle onto the plates and eventually fall to the bottom of the vessel. [3] The route a particle takes depends upon the flow rate of the suspension and the settling rate of the particle, and can be seen in the second figure. At the bottom of the vessel a hopper or funnel collects these particles as sludge, which may be continuously or intermittently discharged. Above the inclined plates all particles have settled and clarified water is produced, which is drawn off into an outlet channel. The clarified water exits the system in an outlet stream. There are a number of proprietary lamella clarifier designs; inclined plates may be based on circular, hexagonal or rectangular tubes. Lamella clarifiers can handle a maximum feed water concentration of 10,000 mg/L of grease and 3,000 mg/L of solids. The initial investment required for a typical lamella clarifier varies from US$750 to US$2,500 per cubic metre of water to be treated, depending on the design of the clarifier.
[10] The surface loading rate (also known as the surface overflow rate or surface settling rate) for a lamella clarifier falls between 10 and 25 m/h. For these settling rates the retention time in the clarifier is low, at around 20 minutes or less, [7] with operating capacities tending to range from 1–3 m³/hour/m² (of projected area). [15] Separation of solids is described by the sedimentation effectiveness, η, which is dependent on concentration, flow rate, particle size distribution, flow patterns and plate packing, and is defined by the following equation: [16] η = (c₁ − c₂)/c₂, where c₁ is the inlet concentration and c₂ the outlet concentration. The inclined angle of the plates allows for an increased loading rate/throughput and decreased retention time relative to conventional clarifiers, with an increase in the loading rate of 2–3 times that of a conventional clarifier of the same size. [14] The total surface area required for settling can be calculated for a lamella plate assembly with N plates, each plate of width W, with plate pitch θ and tube spacing p, as A = W·(Np + cos θ). Table 1 presents the characteristics and operating ranges of different clarification units, [14] where the overflow rate is a measure of the fluid loading capacity of the clarifier and is defined as the influent flow rate divided by the horizontal area of the clarifier. The retention time is the average time that a particulate remains in the clarifier. Turbidity is a measure of cloudiness; higher values for turbidity removal efficiency correspond to fewer particulates remaining in the clarified stream. The settling velocity of a particulate can also be determined by using Stokes' law. [17] Both the overflow stream and the underflow stream from a lamella clarifier will often require post-treatment. The underflow stream is often put through a dewatering process, such as a thickener or a belt press filter, to increase the density of the slurry. This is an important post-treatment step, as the underflow slurry often cannot be recycled back into the process; in such cases it needs to be transported to a disposal plant, and the cost of this transport depends on the volume and weight of the slurry. [3] Hence an efficient dewatering process can result in a substantial cost saving. Where the slurry can be recycled through the process it often needs to be dried, and dewatering again is an important step in this process. The post-treatment required for the overflow stream depends both on the nature of the inlet stream and on what the overflow will be used for. For example, if the fluid being put through the lamella clarifier comes from a heavy industrial plant, it may require post-treatment to remove oil and grease, especially if the effluent is going to be discharged to the environment; a separation process unit such as a coalescer is often used to physically trigger a separation of the water and the oils. [19] For the treatment of potable water, the overflow from the lamella clarifier will require further treatment to remove organic molecules, as well as disinfection to remove bacteria; it will also be passed through a series of polishing units to remove odour and improve the colour of the water. [3] There is a tendency with lamella clarifiers for algae to grow on the inclined plates, and this can be a problem especially if the overflow is being discharged to the environment or if the lamella clarifier is being used as pre-treatment for a membrane filtration unit.
In either of these cases the overflow requires post-treatment, such as an anthracite -sand filter, to prevent the algae from spreading downstream of the lamella clarifier. As the inclined plates in the lamella clarifier are made of steel, it is not recommended that chlorine be used to control the biological growth, as it could accelerate the corrosion of the plates. [ 7 ] One variation on the standard design of a lamella clarifier being developed concerns the way the effluent is collected at the top of the inclined plates. Rather than the effluent flowing over the top of the inclined plates to the outlet channel, it flows through orifices at the top of the plates. This design allows for more consistent back pressure in the channels between the plates, and hence a more consistent flow profile develops. This design only works for relatively clean effluent streams, as the orifices would otherwise quickly become blocked with deposits, which would severely reduce the efficiency of the unit. [ 6 ] Another new design includes an adjustable upper portion of the vessel so that the vessel height can be changed. This height adjustment is relative to a deflector, which directs the inlet stream. This design is intended to be used for decanting storm water. [ 20 ] Another design variation, which improves the efficiency of the separation unit, is the way the feed enters the lamella clarifier. Standard clarifier design has the feed entering at the bottom of the inclined plates, colliding with the sludge sliding down the plates. This mixing region renders the bottom 20% of the inclined plates unusable for settling. By designing the lamella clarifier so that the feed enters the inclined plates without interfering with the downward slurry flow, the capacity of the lamella clarifier can be improved by 25%. [ 1 ]
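To make the sizing relations above concrete, the sketch below combines the projected-area formula A = W·N·p·cos θ, the overflow-rate definition (influent flow divided by settling area) and Stokes' law in a short Python calculation. All numerical values are illustrative assumptions, not figures from this article; to a first approximation a particle is captured when its settling velocity exceeds the overflow rate.

```python
import math

def projected_area(W, N, p, theta_deg):
    """Projected settling area of a lamella pack: A = W * N * p * cos(theta)."""
    return W * N * p * math.cos(math.radians(theta_deg))

def stokes_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal settling velocity of a small sphere (Stokes' law, creeping flow)."""
    return g * d**2 * (rho_p - rho_f) / (18 * mu)

# Illustrative values (not from the article): 50 plates, 1.2 m wide,
# 50 mm spacing, inclined at 55 degrees from the horizontal.
A = projected_area(W=1.2, N=50, p=0.05, theta_deg=55)

Q = 30.0                 # influent flow, m^3/h (assumed)
overflow_rate = Q / A    # m/h; compare with the 10-25 m/h range quoted above

# 100 um sand-like particle in water at 20 C (assumed properties)
v_s = stokes_velocity(d=100e-6, rho_p=2650.0, rho_f=998.0, mu=1.0e-3)  # m/s
v_s_mh = v_s * 3600      # convert to m/h

print(f"A = {A:.2f} m^2, overflow rate = {overflow_rate:.1f} m/h, "
      f"v_s = {v_s_mh:.1f} m/h -> captured: {v_s_mh > overflow_rate}")
```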
https://en.wikipedia.org/wiki/Lamella_clarifier
Lamellar phase refers generally to the packing of polar -headed, long chain , nonpolar-tailed molecules ( amphiphiles ) in an environment of bulk polar liquid, as sheets of bilayers separated by bulk liquid. In biophysics , polar lipids (mostly phospholipids , and rarely glycolipids ) pack as a liquid crystalline bilayer, with hydrophobic fatty acyl long chains directed inwardly and polar headgroups of lipids aligned on the outside in contact with water, as a 2-dimensional flat sheet surface. Under transmission electron microscopy (TEM), after staining with the polar-headgroup-reactive chemical osmium tetroxide , the lamellar lipid phase appears as two thin parallel dark-staining lines/sheets, constituted by the aligned polar headgroups of the lipids. 'Sandwiched' between these two parallel lines there exists one thicker line/sheet of a non-staining, closely packed layer of long lipid fatty acyl chains. This TEM appearance became famous as Robertson's unit membrane - the basis of all biological membranes , and the structure of the lipid bilayer in unilamellar liposomes . In multilamellar liposomes , many such lipid bilayer sheets are layered concentrically with water layers in between. In lamellar lipid bilayers, the polar headgroups of lipids align together at the interface with water, and the hydrophobic fatty-acid acyl chains align parallel to one another, 'hiding away' from water. The lipid head groups are somewhat more 'tightly' packed than the relatively 'fluid' hydrocarbon fatty acyl long chains. The lamellar lipid bilayer organization thus reveals a 'flexibility gradient' of increasing freedom of motion from near the head groups towards the terminal fatty-acyl chain methyl groups. The existence of such a dynamic organization of the lamellar phase in liposomes as well as biological membranes can be confirmed by spin label electron paramagnetic resonance and high resolution nuclear magnetic resonance spectroscopy studies of biological membranes and liposomes. [ 1 ] In 'soft matter science', where physics and chemistry meet biological science, a bilayer lamellar phase has recently been created from fluorinated silica, and it has been projected for use as a shear-thinning lubricant . [ 2 ]
https://en.wikipedia.org/wiki/Lamellar_phase
In materials science , lamellar structures or microstructures are composed of fine, alternating layers of different materials in the form of lamellae . They are often observed in cases where a phase transition front moves quickly, leaving behind two solid products, as in rapid cooling of eutectic (such as solder ) or eutectoid (such as pearlite ) systems. Such conditions force phases of different composition to form but allow little time for diffusion to produce those phases' equilibrium compositions. Fine lamellae solve this problem by shortening the diffusion distance between phases, but their high surface energy makes them unstable and prone to break up when annealing allows diffusion to progress. A deeper eutectic or more rapid cooling will result in finer lamellae; as the size of an individual lamella approaches zero, the system will instead retain its high-temperature structure. Two common cases of this include cooling a liquid to form an amorphous solid , and cooling eutectoid austenite to form martensite . In biology , normal adult bones possess a lamellar structure, which may be disrupted by some diseases. [ 1 ]
https://en.wikipedia.org/wiki/Lamellar_structure
A lament or lamentation is a passionate expression of grief , often in music , poetry , or song form. The grief is most often born of regret or mourning . Laments can also be expressed in a verbal manner in which participants lament about something that they regret or someone that they have lost, and they are usually accompanied by wailing, moaning and/or crying . [ 1 ] Laments constitute some of the oldest forms of writing, and examples exist across human cultures. Many of the oldest and most lasting poems in human history have been laments. [ 2 ] The Lament for Sumer and Ur dates back at least 4000 years to ancient Sumer , the world's first urban civilization. Laments are present in both the Iliad and the Odyssey , and laments continued to be sung in elegiacs accompanied by the aulos in classical and Hellenistic Greece. [ 3 ] Elements of laments appear in Beowulf , in the Hindu Vedas , and in ancient Near Eastern religious texts. They are included in the Mesopotamian City Laments such as the Lament for Ur and in the Jewish Tanakh , or Christian Old Testament . In many oral traditions, both early and modern, the lament has been a genre usually performed by women: [ 4 ] Batya Weinbaum made a case for the spontaneous lament of women chanters in the creation of the oral tradition that resulted in the Iliad . [ 5 ] The material of lament, the "sound of trauma", is as much an element in the Book of Job as in the genre of pastoral elegy , such as Shelley 's "Adonais" or Matthew Arnold 's "Thyrsis". [ 6 ] The Book of Lamentations or Lamentations of Jeremiah figures in the Old Testament. The Lamentation of Christ (under many closely variant terms) is a common subject from the Life of Christ in art , showing Jesus' dead body being mourned after the Crucifixion . Jesus himself lamented over the prospective fall of Jerusalem as he and his disciples entered the city ahead of his passion . [ 7 ] A lament in the Book of Lamentations or in the Psalms , in particular in the Lament/Complaint Psalms of the Tanakh , may be looked at as "a cry of need in a context of crisis when Israel lacks the resources to fend for itself". [ 8 ] Another way of looking at it is all the more basic: laments simply being "appeals for divine help in distress". [ 9 ] These laments, too, often have a set format: an address to God, a description of the suffering/anguish from which one seeks relief, a petition for help and deliverance, a curse towards one's enemies, an expression of the belief of one's innocence or a confession of the lack thereof, a vow corresponding to an expected divine response, and lastly, a song of thanksgiving. [ 9 ] Examples of a general format of this, in the individual and communal laments respectively, can be seen in Psalm 3 and Psalm 44 . [ 9 ] The Lament of Edward II , if it was actually written by Edward II of England , is his sole surviving composition. A heroine's lament is a conventional fixture of baroque opera seria , accompanied usually by strings alone, in descending tetrachords . [ 10 ] Because of their plangent cantabile melodic lines, evocatively free, non- strophic construction and adagio pace, operatic laments have remained vividly memorable soprano or mezzo-soprano arias even when separated from the emotional pathos of their operatic contexts. An early example is Ariadne's "Lasciatemi morire", the only survivor of Claudio Monteverdi 's lost Arianna . 
Francesco Cavalli 's operas extended the lamento formula in numerous exemplars, of which Ciro's "Negatemi respiri" from Ciro is notable. [ 11 ] Other examples include Dido's Lament ("When I am laid in earth") ( Henry Purcell , Dido and Aeneas ), " Lascia ch'io pianga " ( George Frideric Handel , Rinaldo ), and "Caro mio ben" ( Tomaso or Giuseppe Giordani ). The lament continued to represent a musico-dramatic high point. In the context of opera buffa , the Countess's lament, " Dove sono ", comes as a surprise to the audience of Wolfgang Amadeus Mozart 's The Marriage of Figaro , and in Gioachino Rossini 's Barber of Seville , Rosina's plaintive words at her apparent abandonment are followed, not by the expected lament aria, but by a vivid orchestral interlude of storm music. The heroine's lament remained a fixture in romantic opera, and the Marschallin's monologue in act 1 of Der Rosenkavalier can be understood as a penetrating psychological lament. [ 12 ] In modernity, discourses about melancholia and trauma take the functional place that ritual laments hold in premodern societies. This entails a shift from a focus on community and convention to individuality and authenticity. [ 13 ] The purely instrumental lament is a common form in piobaireachd music for the Scottish bagpipes . "MacCrimmon's Lament" dates to the Jacobite uprising of 1745. The tune is held to have been written by Donald Ban MacCrimmon, piper to the MacLeods of Dunvegan, who supported the Hanoverians. It is said that Donald Ban, who was killed at Moy in 1746, had an intimation that he would not return. [ 14 ] A well-known Gaelic lullaby is " Griogal Cridhe " ("Beloved Gregor"). It was composed in 1570 after the execution of Gregor MacGregor by the Campbells. The grief-stricken widow, Marion Campbell, describes what happened as she sings to her child. [ 15 ] " Cumhadh na Cloinne " ("Lament for the Children") is a pìobaireachd composed by Padruig Mór MacCrimmon in the early 1650s. It is generally held to be based on the loss of seven of MacCrimmon's eight sons within a year to smallpox , [ 16 ] [ 17 ] possibly brought to Skye by a Spanish trading vessel. Poet and writer Angus Peter Campbell , quoting poet Sorley MacLean , has called it "one of the great artistic glories of all Europe". [ 18 ] Author Bridget MacKenzie, in Piping Traditions of Argyll , suggests that it refers to the slaughter of the MacLeods fighting Cromwell's forces at the Battle of Worcester. It may have been inspired by both. [ 19 ] Other Scottish laments from outside of the piobaireachd tradition include "Lowlands Away", "MacPherson's Rant", and "Hector the Hero". Ritual lament was intertwined with aspects of performance in Ancient Greece. Originally practiced as a part of funerary rites, lamentation was considered a musical and feminine form of expression that was used to appease the deceased. As lament was brought into popular culture, specifically Greek theater and literature, men participated in the tradition as well, but the act of lamentation itself was still closely associated with women. Performed primarily by women during the próthesis step of the burial, ritual lament in the Archaic and Homeric periods was a ritualized expression of emotion imbued with musical elements. The lament involved both verbal and physical actions, such as singing, wailing, tearing of the clothes, and beating the breast, all of which contributed to the sound of lamentation. 
[ 20 ] Depictions of lament can be found on vessels, funerary plaques, and other archeological remains, where the imagery of the women's expressive actions contrasts with the more static poses of the men. [ 21 ] The gendering of ritual lamentation reflects the gender roles of the time, wherein women were perceived to be more prone to emotion, in contrast to men, who were seen as creatures of logos. [ 22 ] In the Archaic and Homeric ages, lament was understood to be divided into two distinct parts: gôos and thrënos. Moving into the Classical period, however, gôos and thrënos were often used interchangeably, particularly in Athenian tragedy . [ 23 ] Lamenting women appeared in works by well-known tragedians, such as Cassandra's lament in Aeschylus ' Agamemnon , Electra's lament in Sophocles ' Electra , and Hecuba's lament in Euripides ' Trojan Women . Tragedians also developed another genre of lament, the kommos, which appeared exclusively in tragedies. [ 24 ] Ritual lament also inspired male poets, who adopted the practice into more literary forms. Written laments could be addressed to the divine or personalized for a poet's close friend. [ 25 ] [ 26 ]
Ritual lament in Athens during the age of Solon's laws
Athenian policymaker Solon placed restrictions on women's participation in funerary rites. Solon's laws set limitations on women's dress and behavior, controlling the way that women were allowed to appear in public for funerary occasions. His laws also had an impact on the burial proceedings in relation to women's roles, as he forbade "laceration of the flesh by mourners," "bewailing" and the use of set lamentations. [ 27 ] These policies could have been made to address the level of noise that accompanied the ritual lament step of funerals and to curb extravagance from the wealthy. However, Plutarch comments that Solon's laws concerning women seemed, in general, "very absurd." He expressed that Solon's laws were rather unfavorable towards women, using examples such as Solon's policies on sexual assault. [ 28 ] Modern interpretations of these changes comment on the disruptive potential of the lament on a political level. In Athens, where logic and rationality were valued, the emotional nature of the lament was not viewed favorably by men in power. [ 29 ]
Lament during the festival of Adonia
The connection between lamentation and femininity is made apparent in the Athenian festival of Adonia. An event held exclusively for women, by women, the main purpose of this festival was to mourn the death of Adonis, the lover of the goddess Aphrodite. During this festival, women participated in collective lamenting. Women took to the rooftops to perform their lament and held a procession in the streets. [ 30 ] In fragments of Sappho's work, a lament for Adonis appears. Sappho's work gives insight into some of the activities that may have occurred during this festival. In her poem, Sappho calls on women to engage in actions such as "beating your bosoms" and "rending your tunics." [ 31 ] These actions are the same activities that women would perform for burial rituals. The Greek poet Bion also wrote a Lament for Adonis. His poem records the ritual laments of Adonia in hexameter, unlike Sappho, who wrote in lyric meter. Throughout his lament, he makes frequent references to Aphrodite, who is also referred to by the name Cytherea. His words show the close association between Adonis and Aphrodite. [ 32 ] Sappho's and Bion's works are also demonstrative of how the tradition of lament expanded from oral to literary form. 
References to the Adonia are made in Aristophanes' Lysistrata . In the play, the male characters express a distaste for the Adonia, particularly due to the loud nature of the lamentation process. Indeed, there is a scene in the play where the lamentations of the women celebrating the Adonia drown out those of the male characters who are attempting to hold an Assembly. [ 33 ] Modern interpretations of this festival have drawn upon the disruptive characteristic of the Adonia to suggest that the festival was a form of subversion. Firstly, the Adonia was not only organized strictly by women, but was also a celebration that was not associated with the state. The exclusion of men from the entirety of the festival process demonstrates female agency. Furthermore, during the Adonia, Athenian women were allowed to be in public and to make their voices heard in a dramatic manner. The festival allowed women the opportunity to create a type of independent community as well as to present their voices and bodies in the public sphere. Athenian women were expected to remain in the household, whereas men were the ones who engaged in politics, business, and agriculture. It is argued that women embraced this festival because the Adonia permitted them to subvert gender roles in a socially acceptable way. [ 34 ]
Types of musical lament
There is a short, free musical form appearing in the Baroque and then again in the Romantic periods, called the lament. It is typically a set of harmonic variations in homophonic texture, wherein the bass ( Lament bass ) descends through a tetrachord, usually one suggesting a minor mode .
https://en.wikipedia.org/wiki/Lament
Lamin B2 is a protein that in humans is encoded by the LMNB2 gene . It is the second of two type B nuclear lamins , and it is associated with laminopathies .
https://en.wikipedia.org/wiki/Lamin_B2
Lamina is a general anatomical term meaning "plate" or "layer". [ 1 ] It is used in both gross anatomy and microscopic anatomy to describe structures. Some examples include:
https://en.wikipedia.org/wiki/Lamina_(anatomy)
Laminar flame speed is an intrinsic characteristic of premixed combustible mixtures. [ 1 ] It is the speed at which an un-stretched laminar flame will propagate through a quiescent mixture of unburned reactants. Laminar flame speed is given the symbol s L . According to the thermal flame theory of Ernest-François Mallard and Le Chatelier , the un-stretched laminar flame speed is dependent on only three properties of a chemical mixture: the thermal diffusivity of the mixture, the reaction rate of the mixture and the temperature through the flame zone: {\displaystyle s_{\mathrm {L} }^{\circ }={\sqrt {\alpha {\dot {\omega }}\left({\dfrac {T_{\mathrm {b} }-T_{\mathrm {i} }}{T_{\mathrm {i} }-T_{\mathrm {u} }}}\right)}}} where α is the thermal diffusivity , ω̇ is the reaction rate, and the temperature subscripts u, b and i stand for unburned, burned and ignition temperature, respectively. Laminar flame speed is a property of the mixture (fuel structure, stoichiometry) and of the thermodynamic conditions upon mixture ignition (pressure, temperature). Turbulent flame speed is a function of the aforementioned parameters, but also depends heavily on the flow field. As flow velocity increases and turbulence is introduced, a flame will begin to wrinkle, then corrugate, and eventually the flame front will be broken up and transport properties will be enhanced by turbulent eddies in the flame zone. As a result, the flame front of a turbulent flame will propagate at a speed that is not only a function of the mixture's chemical and transport properties but also of the properties of the flow and turbulence.
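As a quick illustration of the Mallard–Le Chatelier expression above, the following Python sketch evaluates s_L for assumed, order-of-magnitude inputs. The values below are not from this article; ω̇ is treated as a mean volumetric reaction rate in 1/s so that α·ω̇ has units of velocity squared.

```python
import math

def laminar_flame_speed(alpha, omega_dot, T_u, T_i, T_b):
    """Mallard-Le Chatelier estimate: s_L = sqrt(alpha * omega_dot * (T_b - T_i)/(T_i - T_u))."""
    return math.sqrt(alpha * omega_dot * (T_b - T_i) / (T_i - T_u))

# Illustrative order-of-magnitude values for a stoichiometric methane-air
# mixture at atmospheric conditions (assumed, not from the article):
alpha = 2.2e-5       # thermal diffusivity, m^2/s
omega_dot = 3.0e3    # mean volumetric reaction rate, 1/s
T_u, T_i, T_b = 300.0, 1300.0, 2200.0  # unburned, ignition, burned, K

s_L = laminar_flame_speed(alpha, omega_dot, T_u, T_i, T_b)
print(f"s_L ~ {s_L:.2f} m/s")  # ~0.24 m/s, the right order of magnitude
```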
https://en.wikipedia.org/wiki/Laminar_flame_speed
The laminar flamelet model is a mathematical method for modelling turbulent combustion . The laminar flamelet model is formulated specifically as a model for non-premixed combustion. [ 1 ] The concept of an ensemble of laminar flamelets was first introduced by Forman A. Williams in 1975, [ 2 ] while the theoretical foundation was developed by Norbert Peters in the early 1980s. [ 3 ] [ 4 ] [ 5 ] The flamelet concept considers the turbulent flame as an aggregate of thin, laminar (Re < 2000), locally one-dimensional flamelet structures present within the turbulent flow field. The counterflow diffusion flame is a common laminar flame used to represent a flamelet in a turbulent flow. Its geometry consists of opposed and axi-symmetric fuel and oxidizer jets. As the distance between the jets is decreased and/or the velocity of the jets increased, the flame is strained and departs from its chemical equilibrium until it eventually extinguishes. The mass fraction of species and temperature fields can be measured or calculated in laminar counterflow diffusion flame experiments. When calculated, a self-similar solution exists, and the governing equations can be simplified to only one dimension, i.e. along the axis of the fuel and oxidizer jets. It is along this direction that complex chemistry calculations can be performed affordably. [ 6 ] To model non-premixed combustion, governing equations for the fluid elements are required. The conservation equation for the species mass fraction, written in mixture-fraction space, takes the standard flamelet form {\displaystyle \rho {\frac {\partial Y_{k}}{\partial t}}={\frac {\rho \chi }{2\,\mathrm {Le} _{k}}}{\frac {\partial ^{2}Y_{k}}{\partial Z^{2}}}+{\dot {\omega }}_{k}} where Le k is the Lewis number of the k-th species; this formula was derived keeping the heat capacity constant. The energy equation with variable heat capacity takes an analogous form, with additional terms involving the gradients of the heat capacity and the species enthalpies. As can be seen from the above formulas, the mass fraction and temperature depend on 1. the mixture fraction Z, 2. the scalar dissipation χ, and 3. time. Often the unsteady terms in the above equations are neglected, and the local flame structure is assumed to maintain a balance between steady chemistry and steady diffusion, which results in the steady laminar flamelet model (SLFM). For this, an average value of χ is computed, known as the Favre value. [ 7 ] The basic assumption of an SLFM model is that a turbulent flame front behaves locally as a one-dimensional, steady, laminar flame, which proves very useful in reducing the situation to much simpler terms, but it does create problems, as a few effects are not accounted for. The advantages of using this combustion model are as follows: 1. It shows the strong coupling between chemical reactions and molecular transport. 2. The steady laminar flamelet model can also be used to predict chemical non-equilibrium due to aerodynamic straining of the flame by the turbulence. The disadvantages of the steady laminar flamelet model, for the reasons mentioned above, are: 1. It does not account for curvature effects, which can change the flame structure; this is more detrimental while the structure has not yet reached a quasi-steady state. 2. Transient effects also arise in turbulent flow when the scalar dissipation experiences a sudden change, as the flame structure takes time to stabilize. [ 8 ] To improve on the SLFM, further models have been proposed, such as the transient laminar flamelet model (TLFM) of Ferreira. 1. Versteeg H.K. and Malalasekera W., An Introduction to Computational Fluid Dynamics , ISBN 978-81-317-2048-6 . 2. 
Stefano Giuseppe Piffaretti, Flame Age Model: a transient laminar flamelet approach for turbulent diffusion flames , a dissertation submitted to the Swiss Federal Institute of Technology in Zurich . 3. N. Peters, Institut für Technische Mechanik, RWTH Aachen, Four Lectures on Turbulent Combustion .
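In practice, flamelet solutions are coupled to the turbulent flow by presumed-PDF averaging: a mean (Favre) value of temperature or species mass fraction is obtained by integrating the flamelet profile over an assumed beta distribution of the mixture fraction. The sketch below illustrates this step with a synthetic piecewise-linear temperature profile; the profile shape, the mean Z and the variance are all illustrative assumptions, not values from the works cited above.

```python
import numpy as np
from scipy.stats import beta

def favre_average(f_of_Z, Z_mean, Z_var, n=2001):
    """Integrate a flamelet profile f(Z) against a presumed beta PDF of Z."""
    # Beta-PDF parameters from the mean and variance:
    # gamma = Z_mean*(1-Z_mean)/Z_var - 1, a = Z_mean*gamma, b = (1-Z_mean)*gamma
    gamma = Z_mean * (1.0 - Z_mean) / Z_var - 1.0
    a, b = Z_mean * gamma, (1.0 - Z_mean) * gamma
    Z = np.linspace(1e-6, 1.0 - 1e-6, n)
    pdf = beta.pdf(Z, a, b)
    return np.trapz(f_of_Z(Z) * pdf, Z) / np.trapz(pdf, Z)

# Synthetic flamelet temperature profile peaking at stoichiometric Z_st
T_u, T_b, Z_st = 300.0, 2200.0, 0.055
def T(Z):
    rising = T_u + (T_b - T_u) * Z / Z_st
    falling = T_b + (T_u - T_b) * (Z - Z_st) / (1.0 - Z_st)
    return np.where(Z < Z_st, rising, falling)

# Assumed turbulence statistics: mean mixture fraction 0.1, variance 0.005
T_mean = favre_average(T, Z_mean=0.1, Z_var=0.005)
print(f"Favre-averaged temperature ~ {T_mean:.0f} K")
```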
https://en.wikipedia.org/wiki/Laminar_flamelet_model
Laminar flow ( / ˈ l æ m ɪ n ər / ) is the property of fluid particles in fluid dynamics to follow smooth paths in layers, with each layer moving smoothly past the adjacent layers with little or no mixing. [ 1 ] At low velocities, the fluid tends to flow without lateral mixing, and adjacent layers slide past one another smoothly. There are no cross-currents perpendicular to the direction of flow, nor eddies or swirls of fluids. [ 2 ] In laminar flow, the motion of the particles of the fluid is very orderly, with particles close to a solid surface moving in straight lines parallel to that surface. [ 3 ] Laminar flow is a flow regime characterized by high momentum diffusion and low momentum convection . When a fluid is flowing through a closed channel such as a pipe or between two flat plates, either of two types of flow may occur depending on the velocity and viscosity of the fluid: laminar flow or turbulent flow . Laminar flow occurs at lower velocities, below a threshold at which the flow becomes turbulent. The threshold velocity is determined by a dimensionless parameter characterizing the flow called the Reynolds number , which also depends on the viscosity and density of the fluid and the dimensions of the channel. Turbulent flow is a less orderly flow regime that is characterized by eddies or small packets of fluid particles, which result in lateral mixing. [ 2 ] In non-scientific terms, laminar flow is smooth , while turbulent flow is rough . The type of flow occurring in a fluid in a channel is important in fluid-dynamics problems and subsequently affects heat and mass transfer in fluid systems. The dimensionless Reynolds number is an important parameter in the equations that describe whether fully developed flow conditions lead to laminar or turbulent flow. The Reynolds number is the ratio of the inertial force to the shearing force of the fluid: how fast the fluid is moving relative to how viscous it is, irrespective of the scale of the fluid system. Laminar flow generally occurs when the fluid is moving slowly or the fluid is very viscous. As the Reynolds number increases, such as by increasing the flow rate of the fluid, the flow will transition from laminar to turbulent flow at a specific range of Reynolds numbers, the laminar–turbulent transition range, which depends on small disturbance levels in the fluid or imperfections in the flow system. If the Reynolds number is very small, much less than 1, then the fluid will exhibit Stokes , or creeping, flow, where the viscous forces of the fluid dominate the inertial forces. The specific calculation of the Reynolds number, and the values where laminar flow occurs, will depend on the geometry of the flow system and the flow pattern. The common example is flow through a pipe , where the Reynolds number is defined as {\displaystyle \mathrm {Re} ={\frac {\rho uD_{\mathrm {H} }}{\mu }}} where ρ is the density of the fluid, u is the mean flow speed, D H is the hydraulic diameter of the pipe and μ is the dynamic viscosity of the fluid. For such systems, laminar flow occurs when the Reynolds number is below a critical value of approximately 2,040, though the transition range is typically between 1,800 and 2,100. [ 4 ] For fluid systems occurring on external surfaces, such as flow past objects suspended in the fluid, other definitions for Reynolds numbers can be used to predict the type of flow around the object. The particle Reynolds number Re p would be used for particles suspended in flowing fluids, for example. As with flow in pipes, laminar flow typically occurs with lower Reynolds numbers, while turbulent flow and related phenomena, such as vortex shedding , occur with higher Reynolds numbers. 
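A minimal sketch of the pipe-flow classification just described, using the Re = ρuD/μ definition and the transition range quoted above; the fluid properties and pipe diameter are assumed for illustration.

```python
def reynolds_number(rho, u, D, mu):
    """Re = rho * u * D / mu for flow in a circular pipe."""
    return rho * u * D / mu

def regime(Re):
    """Rough pipe-flow classification using the thresholds quoted above."""
    if Re < 1800:
        return "laminar"
    if Re <= 2100:
        return "transitional"
    return "turbulent"

# Water at 20 C (rho ~ 998 kg/m^3, mu ~ 1.0e-3 Pa.s) in a 25 mm pipe:
for u in (0.02, 0.08, 0.5):  # mean velocities, m/s
    Re = reynolds_number(998.0, u, 0.025, 1.0e-3)
    print(f"u = {u} m/s -> Re = {Re:.0f} ({regime(Re)})")
```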
A common application of laminar flow is the smooth flow of a viscous liquid through a tube or pipe. In that case, the velocity of flow varies from zero at the walls to a maximum along the cross-sectional centre of the vessel. The flow profile of laminar flow in a tube can be calculated by dividing the flow into thin cylindrical elements and applying the viscous force to them. [ 5 ] Another example is the flow of air over an aircraft wing . The boundary layer is a very thin sheet of air lying over the surface of the wing (and all other surfaces of the aircraft). Because air has viscosity , this layer of air tends to adhere to the wing. As the wing moves forward through the air, the boundary layer at first flows smoothly over the streamlined shape of the airfoil . Here, the flow is laminar and the boundary layer is a laminar layer. Prandtl applied the concept of the laminar boundary layer to airfoils in 1904. [ 6 ] [ 7 ] An everyday example is the slow, smooth and optically transparent flow of shallow water over a smooth barrier. [ 8 ] When water leaves a tap without an aerator with little force, it first exhibits laminar flow, but as acceleration by the force of gravity immediately sets in, the Reynolds number of the flow increases with speed, and the laminar flow of the water downstream from the tap can transition to turbulent flow. Optical transparency is then reduced or lost entirely. Laminar airflow is used to separate volumes of air, or to prevent airborne contaminants from entering an area. Laminar flow hoods are used to exclude contaminants from sensitive processes in science, electronics and medicine. Air curtains are frequently used in commercial settings to keep heated or refrigerated air from passing through doorways. A laminar flow reactor (LFR) is a reactor that uses laminar flow to study chemical reactions and process mechanisms. A laminar flow design for animal husbandry of rats for disease management was developed by Beall et al. in 1971 and became a standard around the world, [ 9 ] including in the then- Eastern Bloc . [ 10 ]
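The cylindrical-element calculation mentioned above leads to the well-known Hagen-Poiseuille parabolic profile, u(r) = ΔP(R² − r²)/(4μL). The sketch below evaluates it for assumed values chosen small enough that the flow stays laminar.

```python
import numpy as np

def poiseuille_profile(r, R, dP, L, mu):
    """Laminar velocity profile in a circular pipe: u(r) = dP/(4*mu*L) * (R^2 - r^2)."""
    return dP / (4.0 * mu * L) * (R**2 - r**2)

# Assumed values: water (mu ~ 1e-3 Pa.s) in a 2 cm diameter, 2 m long pipe
# with an 8 Pa pressure drop; mean velocity ~0.05 m/s gives Re ~ 1000 (laminar).
R, L, mu, dP = 0.01, 2.0, 1.0e-3, 8.0

r = np.linspace(0.0, R, 6)
u = poiseuille_profile(r, R, dP, L, mu)
print(np.round(u, 4))  # maximum at the centreline, zero at the wall

u_max = poiseuille_profile(0.0, R, dP, L, mu)
u_mean = u_max / 2.0   # for a parabolic profile the mean is half the peak
print(f"u_max = {u_max:.3f} m/s, u_mean = {u_mean:.3f} m/s")
```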
https://en.wikipedia.org/wiki/Laminar_flow
A laminar flow cabinet or tissue culture hood is a partially enclosed bench work surface designed to prevent contamination of biological samples , semiconductor wafers , or any particle-sensitive materials. Air is drawn through a HEPA filter and blown in a very smooth laminar flow in a narrow vertical curtain, separating the interior of the cabinet from the environment around it. The cabinet is usually made of stainless steel with no gaps or joints where spores might collect. [ 1 ] Despite their similar appearance, a laminar flow cabinet should not be confused with a fume hood . A laminar flow cabinet blows unfiltered exhaust air towards the worker and is not safe for work with pathogenic agents, [ 2 ] : 13 [ 3 ] while a fume hood maintains negative pressure with constant exhaust to protect the user, but does not protect the work materials from contamination by the surrounding environment. A biosafety cabinet is also easily confused with a laminar flow cabinet, but like the fume hood it is primarily designed to protect the worker rather than the biological samples. This is achieved by drawing surrounding air in and exhausting it through a HEPA filter to remove potentially hazardous microorganisms. Laminar flow cabinets exist in both horizontal and vertical configurations, and there are many different types of cabinets with a variety of airflow patterns and acceptable uses. Cabinets may have a UV-C germicidal lamp to sterilize the interior and contents before use to prevent contamination of the experiment. Germicidal lamps are usually kept on for fifteen minutes to sterilize the interior before the cabinet is used. The light must be switched off when the cabinet is being used, to limit exposure to skin and eyes, as stray ultraviolet light emissions can cause cancer and cataracts . [ 4 ]
https://en.wikipedia.org/wiki/Laminar_flow_cabinet
A laminar flow reactor ( LFR ) is a type of chemical reactor that uses laminar flow to control reaction rate and/or reaction distribution. An LFR is generally a long tube of constant diameter that is kept at a constant temperature. Reactants are injected at one end, and products are collected and monitored at the other. [ 1 ] Laminar flow reactors are often used to study an isolated elementary reaction or a multi-step reaction mechanism . Laminar flow reactors employ the characteristics of laminar flow to achieve various research purposes. For instance, LFRs can be used to study fluid dynamics in chemical reactions , or they can be utilized to generate special chemical structures such as carbon nanotubes . One feature of the LFR is that the residence time (the time interval during which the chemicals stay in the reactor) can be varied either by changing the distance between the reactant input point and the point at which the product/sample is taken, or by adjusting the velocity of the gas/fluid. Therefore a benefit of a laminar flow reactor is that the different factors that may affect a reaction can be easily controlled and adjusted throughout an experiment. Means of analyzing the reaction include using a probe that enters into the reactor or, more accurately, non-intrusive optical methods (e.g. using a spectrometer to identify and analyze contents) to study reactions in the reactor. Moreover, taking the entire sample of the gas/fluid at the end of the reactor and collecting data may be useful as well. [ 1 ] Using the methods mentioned above, various data such as concentration , flow velocity etc. can be monitored and analyzed. Fluids or gases with controlled velocity pass through a laminar flow reactor in a fashion of laminar flow . That is, streams of fluids or gases slide over each other like cards. When analyzing fluids with the same viscosity ("thickness" or "stickiness") but different velocity, fluids are typically characterized into two types of flows: laminar flow and turbulent flow . Compared to turbulent flow, laminar flow tends to have a lower velocity and is generally at a lower Reynolds number . Turbulent flow, on the other hand, is irregular and travels at a higher speed, so the flow velocity of a turbulent flow on one cross section is often assumed to be constant, or "flat". The "non-flat" flow velocity of laminar flow helps explain the mechanism of an LFR. For the fluid/gas moving in an LFR, the velocity near the center of the pipe is higher than that of the fluid near the wall of the pipe. Thus, the velocity distribution of the reactants tends to decrease from the center to the wall. Consider fluid being pumped through an LFR at constant velocity from the inlet, with the concentration of the fluid monitored at the outlet. The graph of the residence time distribution looks like a decreasing curve with positive concavity, and is modeled by the function: E(t) = 0 if t is smaller than τ/2; E(t) = τ²/(2t³) if t is greater than or equal to τ/2. 
[ 2 ] Notice that E(t) initially has a value of zero; this is simply because it takes some time for the substance to travel through the reactor. When the material starts to reach the outlet, the measured concentration drastically increases, and it gradually decreases as time proceeds. The laminar flow inside an LFR has the unique characteristic of flowing in a parallel fashion, with the layers not disturbing one another. The velocity of the fluid or gas naturally decreases as it gets closer to the wall and farther from the center. Therefore the reactants have an increasing residence time in the LFR from the center to the side. A gradually increasing residence time gives researchers a clear layout of the reaction at different times. Moreover, when studying reactions in an LFR, radial gradients in velocity, composition and temperature are significant. [ 3 ] In other words, in reactors where laminar flow is not significant, for instance in a plug flow reactor , the velocity is assumed to be the same over one cross section, since the flows are mostly turbulent. In a laminar flow reactor, the velocity is significantly different at various points on the same cross section. Therefore the velocity differences throughout the reactor need to be taken into consideration when working with an LFR. Various research pertaining to the modeling of LFRs, and to the formation of substances within them, has been done over the past decades. For instance, the formation of single-walled carbon nanotubes was investigated in an LFR. [ 4 ] As another example, the conversion of methane to higher hydrocarbons has been studied in a laminar flow reactor. [ 5 ]
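The residence time distribution given above is easy to check numerically: it should integrate to one, and its first moment should recover the mean residence time τ. A minimal sketch, with τ assumed:

```python
import numpy as np

def lfr_rtd(t, tau):
    """Residence time distribution of a laminar flow reactor:
    E(t) = 0 for t < tau/2, E(t) = tau^2 / (2 t^3) for t >= tau/2."""
    t = np.asarray(t, dtype=float)
    return np.where(t < tau / 2.0, 0.0, tau**2 / (2.0 * t**3))

tau = 10.0  # mean residence time, s (assumed)
t = np.linspace(0.01, 200.0, 200001)
E = lfr_rtd(t, tau)

area = np.trapz(E, t)        # approaches 1 as the upper time limit grows
t_mean = np.trapz(t * E, t)  # approaches tau (the slow 1/t^3 tail is truncated)
print(f"integral of E = {area:.4f}, mean residence time = {t_mean:.2f} s")
```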
https://en.wikipedia.org/wiki/Laminar_flow_reactor
A cleanroom or clean room is an engineered space that maintains a very low concentration of airborne particulates . It is well-isolated, well-controlled against contamination , and actively cleansed. Such rooms are commonly needed for scientific research and in industrial production for all nanoscale processes, such as semiconductor device manufacturing . A cleanroom is designed to keep everything from dust to airborne organisms or vaporised particles away from it, and so from whatever material is being handled inside it. A cleanroom can also prevent the escape of materials; this is often the primary aim in hazardous biology , nuclear work , pharmaceutics , and virology . Cleanrooms typically come with a cleanliness level quantified by the number of particles per cubic meter at a predetermined particle size. The ambient outdoor air in a typical urban area contains 35,000,000 particles per cubic meter in the size range 0.5 μm and larger, equivalent to an ISO 9 certified cleanroom. By comparison, an ISO 14644 -1 level 1 certified cleanroom permits no particles in that size range, and just 12 particles per cubic meter of 0.3 μm and smaller. Semiconductor facilities often get by with level 7 or 5, while level 1 facilities are exceedingly rare. The modern cleanroom was invented by the American physicist Willis Whitfield . [ 1 ] As an employee of the Sandia National Laboratories , Whitfield created the initial plans for the cleanroom in 1960. [ 1 ] Prior to Whitfield's invention, earlier cleanrooms often had problems with particles and unpredictable airflows . Whitfield designed his cleanroom with a constant, highly filtered airflow to flush out impurities. [ 1 ] Within a few years of its invention in the 1960s, Whitfield's modern cleanroom had generated more than US$50 billion in sales worldwide (approximately $499 billion today). [ 2 ] [ 3 ] By mid-1963, more than 200 U.S. industrial plants had such specially constructed facilities—then using the terminology “White Rooms,” “Clean Rooms,” or “Dust-Free Rooms”—including the Radio Corporation of America, McDonnell Aircraft, Hughes Aircraft, Sperry Rand, Sylvania Electric, Western Electric, Boeing, and North American Aviation. [ 4 ] RCA began such a conversion of part of its Cambridge, Ohio facilities in February 1961. Totalling 70,000 square feet, it was used to prepare control equipment for the Minuteman ICBM missiles. [ 5 ] The majority of the integrated circuit manufacturing facilities in Silicon Valley were made by three companies: MicroAire, PureAire, and Key Plastics. These competitors made laminar flow units, glove boxes, cleanrooms and air showers , along with the chemical tanks and benches used in the "wet process" building of integrated circuits. These three companies were the pioneers of the use of Teflon for airguns, chemical pumps, scrubbers, water guns, and other devices needed for the production of integrated circuits . William (Bill) C. McElroy Jr. worked as an engineering manager, drafting room supervisor, QA/QC, and designer for all three companies, and his designs added 45 original patents to the technology of the time. McElroy also wrote a four-page article for MicroContamination Journal, wet processing training manuals, and equipment manuals for wet processing and cleanrooms. [ 6 ] A cleanroom is a necessity in the manufacturing of semiconductors , rechargeable batteries , pharmaceutical products , and any other field that is highly sensitive to environmental contamination. 
Cleanrooms can range from the very small to the very large. On the one hand, a single-user laboratory can be built to cleanroom standards within several square meters, and on the other, entire manufacturing facilities can be contained within a cleanroom with factory floors covering thousands of square meters. Between the large and the small, there are also modular cleanrooms. [ 7 ] They have been argued to lower the costs of scaling the technology, and to be less susceptible to catastrophic failure. With such a wide area of application, not every cleanroom is the same. For example, the rooms utilized in semiconductor manufacturing need not be sterile (i.e., free of uncontrolled microbes), [ 8 ] while the ones used in biotechnology usually must be. Conversely, operating rooms need not be absolutely pure of nanoscale inorganic salts, such as rust , while nanotechnology absolutely requires it. What is common to all cleanrooms, then, is strict control of airborne particulates , possibly with secondary decontamination of air, surfaces, workers entering the room, implements, chemicals, and machinery. Sometimes particulates exiting the compartment are also of concern, such as in research into dangerous viruses , or where radioactive materials are being handled. First, outside air entering a cleanroom is filtered and cooled by several outdoor air handlers using progressively finer filters to exclude dust. Within, air is constantly recirculated through fan units containing high-efficiency particulate absorbing ( HEPA ) filters and/or ultra-low particulate air ( ULPA ) filters to remove internally generated contaminants. Special lighting fixtures, walls, equipment and other materials are used to minimize the generation of airborne particles. Plastic sheets can be used to restrict air turbulence if the cleanroom design is of the laminar airflow type. [ 9 ] [ 10 ] [ 11 ] Air temperature and humidity levels inside a cleanroom are tightly controlled, because they affect the efficiency and means of air filtration. If a particular room requires low enough humidity to make static electricity a concern, it too will be controlled by, e.g., introducing controlled amounts of charged ions into the air using a corona discharge . Static discharge is of particular concern in the electronics industry, where it can instantly destroy components and circuitry. Equipment inside any cleanroom is designed to generate minimal air contamination. Materials selected for the construction of a cleanroom should not generate any particulates; hence monolithic epoxy or polyurethane floor coatings are preferred. Buffed stainless steel or powder-coated mild steel sandwich partition panels and ceiling panels are used instead of iron alloys prone to rusting and then flaking . Corners such as wall-to-wall, wall-to-floor and wall-to-ceiling junctions are avoided by providing coved surfaces , and all joints need to be sealed with epoxy sealant to avoid any deposition or generation of particles at the joints through vibration and friction . Many cleanrooms have a "tunnel" design in which there are spaces called "service chases" that serve as air plenums carrying the air from the bottom of the room to the top so that it can be recirculated and filtered at the top of the cleanroom. [ 12 ] Cleanrooms maintain particulate-free air through the use of either HEPA or ULPA filters employing laminar or turbulent airflow principles. 
Laminar, or unidirectional, airflow systems direct filtered air downward or in a horizontal direction in a constant stream towards filters located on walls near the cleanroom floor, or through raised perforated floor panels, to be recirculated. Laminar airflow systems are typically employed across 80% of a cleanroom ceiling to maintain constant air processing. Stainless steel or other non-shedding materials are used to construct laminar airflow filters and hoods to prevent excess particles from entering the air. Turbulent, or non-unidirectional, airflow uses both laminar airflow hoods and nonspecific velocity filters to keep the air in a cleanroom in constant motion, although not all in the same direction. The turbulent air seeks to trap particles that may be in the air and drive them towards the floor, where they enter filters and leave the cleanroom environment. The US FDA and EU have laid down stringent guidelines and limits to ensure freedom from microbial contamination in pharmaceutical products. [ 13 ] Plenums between air handlers and fan filter units , along with sticky mats , may also be used. In addition to air filters, cleanrooms can also use ultraviolet light to disinfect the air. [ 14 ] UV devices can be fitted into ceiling light fixtures and irradiate the air, killing potentially infectious particulates , including 99.99 percent of airborne microbial and fungal contaminants. [ 15 ] UV light has previously been used to clean surface contaminants in sterile environments such as hospital operating rooms. Its use in other cleanrooms may increase as the equipment becomes more affordable. Potential advantages of UV-based decontamination include a reduced reliance on chemical disinfectants and the extension of HVAC filter life. Some cleanrooms are kept at a positive pressure so that if any leaks occur, air leaks out of the chamber instead of unfiltered air coming in. This is most typically the case in semiconductor manufacturing, where even minute amounts of particulates leaking in could contaminate the whole process, while anything leaking out would not be harmful to the surrounding community. The opposite is done, e.g., in the case of high-level bio-laboratories that handle dangerous bacteria or viruses; those are always held at negative pressure , with the exhaust being passed through high-efficiency filters and further sterilizing procedures. Both are still cleanrooms because the particulate level inside is maintained within very low limits. Some cleanroom HVAC systems control the humidity to such low levels that extra equipment like air ionizers are required to prevent electrostatic discharge problems. This is a particular concern within the semiconductor business, because static discharge can easily damage modern circuit designs. On the other hand, active ions in the air can harm exposed components as well. Because of this, most workers in high electronics and semiconductor facilities have to wear conductive boots while working. Low-level cleanrooms may only require special shoes, with completely smooth soles that do not track in dust or dirt. However, for safety reasons, shoe soles must not create slipping hazards. Access to a cleanroom is usually restricted to those wearing a cleanroom suit , including the necessary machinery. In cleanrooms in which the standards of air contamination are less rigorous, the entrance to the cleanroom may not have an air shower. An anteroom (known as a "gray room") is used to put on cleanroom clothing. 
This practice is common in many nuclear power plants, which operate as low-grade inverse-pressure cleanrooms as a whole. Recirculating vs. one-pass cleanrooms: Recirculating cleanrooms return air to the negative-pressure plenum via low wall air returns. The air is then pulled by HEPA fan filter units back into the cleanroom. The air recirculates constantly, and by continuously passing through HEPA filtration it has particles removed each time. Another advantage of this design is that air conditioning can be incorporated. One-pass cleanrooms draw air from outside and pass it through HEPA fan filter units into the cleanroom. The air then leaves through exhaust grills. The advantage of this approach is the lower cost. The disadvantages are a comparatively shorter HEPA fan filter life, worse particle counts than a recirculating cleanroom, and the inability to accommodate air conditioning. Aseptic practices are critical in environments where contamination control is paramount, particularly in the pharmaceutical , biotechnology, and medical device industries. Aseptic processing involves maintaining a sterile environment to prevent the introduction of contaminants during the manufacturing of products such as sterile injectable medications and sterile medical equipment. This requires stringent control over personnel behavior, equipment sterilization, and the cleanroom environment. [ 16 ] There are different classifications for aseptic or sterile processing cleanrooms. The Pharmaceutical Inspection Co-operation Scheme (PIC/S) classifies cleanrooms into four grades (A, B, C, and D) based on their cleanliness level, particularly the concentration of airborne particles and viable microorganisms. In order to minimize the carrying of particulates by a person moving into the cleanroom, staff enter and leave through airlocks (sometimes including an air shower stage) and wear protective clothing such as hoods , face masks, gloves, boots, and coveralls . Common materials such as paper , pencils , and fabrics made from natural fibers are often excluded because they shed particulates in use. Particle levels are usually tested using a particle counter , and microorganisms are detected and counted through environmental monitoring methods. [ 17 ] [ 18 ] Polymer tools used in cleanrooms must be carefully determined to be chemically compatible with cleanroom processing fluids [ 19 ] as well as ensured to generate a low level of particles. [ 20 ] When cleaning, only special mops and buckets are used. Cleaning chemicals used tend to involve sticky elements to trap dust, and may need a second step with light-molecular-weight solvents to clear. Cleanroom furniture is designed to produce a minimum of particles and is easy to clean. A cleanroom is as much a process and a meticulous culture to maintain as it is a physical space. The greatest threat of cleanroom contamination comes from the users themselves. [ 21 ] In the healthcare and pharmaceutical sectors, control of microorganisms is important, especially microorganisms likely to be deposited into the air stream from skin shedding . Studying cleanroom microflora is of importance for microbiologists and quality control personnel to assess changes in trends. Shifts in the types of microflora may indicate deviations from the "norm", such as resistant strains or problems with cleaning practices. 
In assessing cleanroom microorganisms, the typical flora are primarily those associated with human skin ( Gram-positive cocci ), although microorganisms from other sources, such as the environment ( Gram-positive rods ) and water ( Gram-negative rods ), are also detected, although in lower numbers. Common bacterial genera include Micrococcus , Staphylococcus , Corynebacterium , and Bacillus , and fungal genera include Aspergillus and Penicillium . [ 18 ] Cleanrooms are classified according to the number and size of particles permitted per volume of air. Large numbers like "class 100" or "class 1000" refer to FED-STD-209E , and denote the number of particles of size 0.5 μm or larger permitted per cubic foot of air. The standard also allows interpolation; for example, SNOLAB is maintained as a class 2000 cleanroom. A discrete, light-scattering airborne particle counter is used to determine the concentration of airborne particles, equal to and larger than the specified sizes, at designated sampling locations. Small numbers refer to ISO 14644-1 standards, which specify the decimal logarithm of the number of particles 0.1 μm or larger permitted per m³ of air. So, for example, an ISO class 5 cleanroom has at most 10⁵ particles/m³. Both FS 209E and ISO 14644-1 assume log-log relationships between particle size and particle concentration. For that reason, zero particle concentration does not exist. Some classes do not require testing some particle sizes, because the concentration is too low or too high to be practical to test for, but such blanks should not be read as zero. Because 1 m³ is about 35 ft³, the two standards are mostly equivalent when measuring 0.5 μm particles, although the testing standards differ. Ordinary room air is around class 1,000,000 or ISO 9. [ 23 ] ISO 14644-1 and ISO 14698 are non-governmental standards developed by the International Organization for Standardization (ISO). [ 24 ] The former applies to cleanrooms in general (see table below), the latter to cleanrooms where biocontamination may be an issue. Since the strictest standards have been achieved only for space applications, it is sometimes difficult to know whether they were achieved in vacuum or standard conditions. ISO 14644-1 defines the maximum concentration of particles per class and per particle size with the following formula: [ 25 ] {\displaystyle {\text{C}}_{\text{N}}=10^{\text{N}}\left({\frac {0.1}{\text{D}}}\right)^{2.08}} where C N is the maximum concentration (in particles per m³ of air) of airborne particles that are equal to or larger than the considered particle size, rounded to the nearest whole number using no more than three significant figures; N is the ISO class number; D is the particle size in μm; and 0.1 is a constant expressed in μm. The result for standard particle sizes is expressed in the following table. b These concentrations will lead to large air sample volumes for classification. Sequential sampling procedure may be applied; see Annex D. c Concentration limits are not applicable in this region of the table due to very high particle concentration. d Sampling and statistical limitations for particles in low concentrations make classification inappropriate. 
e Sample collection limitations for both particles in low concentrations and sizes greater than 1 μm make classification at this particle size inappropriate, due to potential particle losses in the sampling system. US FED-STD-209E was a United States federal standard. It was officially cancelled by the General Services Administration on November 29, 2001, [ 26 ] [ 27 ] but is still widely used. [ 28 ] Current regulating bodies include ISO, USP 800, and US FED STD 209E (the previous standard, still used). EU GMP guidelines are more stringent than others, requiring cleanrooms to meet particle counts in operation (during the manufacturing process) and at rest (when the manufacturing process is not carried out, but the room AHU is on). BS 5295 is a British Standard . BS 5295 Class 1 also requires that the greatest particle present in any sample not exceed 5 μm. [ 31 ] BS 5295 has been superseded, withdrawn in 2007 and replaced with BS EN ISO 14644-6:2007. [ 32 ] USP 800 is a United States standard developed by the United States Pharmacopeial Convention (USP) with an effective date of December 1, 2019. [ 33 ] In hospitals , operating theatres are similar to cleanrooms for surgical operations involving incisions, to prevent infections in the patient. In another case, severely immunocompromised patients sometimes have to be held in prolonged isolation from their surroundings, for fear of infection. At the extreme, this necessitates a cleanroom environment. The same is the case for patients carrying airborne infectious diseases, only they are handled at negative, not positive, pressure. Since larger cleanrooms are very sensitive controlled environments upon which multibillion-dollar industries depend, they are sometimes even fitted with numerous seismic base isolation systems to prevent costly equipment malfunction. [ 34 ]
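The ISO 14644-1 class formula above can be evaluated directly; the short sketch below reproduces, for example, the familiar limit of roughly 3,520 particles of 0.5 μm or larger per m³ for an ISO class 5 room.

```python
def iso_particle_limit(iso_class, particle_size_um):
    """Maximum particles per m^3 at or above a given size, per ISO 14644-1:
    C_N = 10^N * (0.1 / D)^2.08 (rounding per the standard's convention
    is left out of this sketch)."""
    return 10**iso_class * (0.1 / particle_size_um) ** 2.08

# ISO 5 at 0.5 um gives ~3,520 /m^3, the familiar "class 100" level,
# since 3520 particles/m^3 / (~35 ft^3 per m^3) ~ 100 particles/ft^3.
for size in (0.1, 0.2, 0.3, 0.5, 1.0):
    print(f"ISO 5, >= {size} um: {iso_particle_limit(5, size):,.0f} /m^3")
```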
https://en.wikipedia.org/wiki/Laminar_room
In fluid dynamics , the process of a laminar flow becoming turbulent is known as laminar–turbulent transition . The main parameter characterizing transition is the Reynolds number . Transition is often described as a process proceeding through a series of stages. Transitional flow can refer to transition in either direction, that is, laminar–turbulent or turbulent–laminar transitional flow. The process applies to any fluid flow, and is most often used in the context of boundary layers . In 1883 Osborne Reynolds demonstrated the transition to turbulent flow in a classic experiment in which he examined the behaviour of water flow under different flow rates using a small jet of dyed water introduced into the centre of the flow in a larger pipe. The larger pipe was glass, so the behaviour of the layer of dyed flow could be observed, and at the end of this pipe was a flow-control valve used to vary the water velocity inside the tube. When the velocity was low, the dyed layer remained distinct throughout the entire length of the large tube. When the velocity was increased, the layer broke up at a given point and diffused throughout the fluid's cross-section. The point at which this happened was the transition point from laminar to turbulent flow. Reynolds identified the governing parameter for the onset of this effect, which was a dimensionless constant later called the Reynolds number . Reynolds found that the transition occurred between Re = 2000 and 13000, depending on the smoothness of the entry conditions. When extreme care is taken, the transition can even happen with Re as high as 40000. On the other hand, Re = 2000 appears to be about the lowest value obtained at a rough entrance. [ 1 ] Reynolds' publications in fluid dynamics began in the early 1870s. His final theoretical model, published in the mid-1890s, is still the standard mathematical framework used today. Examples of titles from his more groundbreaking reports are: A boundary layer can transition to turbulence through a number of paths. Which path is realized physically depends on the initial conditions, such as the initial disturbance amplitude and surface roughness. The level of understanding of each phase varies greatly, from near-complete understanding of primary mode growth to a near-complete lack of understanding of bypass mechanisms . The initial stage of the natural transition process is known as the receptivity phase and consists of the transformation of environmental disturbances – both acoustic (sound) and vortical (turbulence) – into small perturbations within the boundary layer. The mechanisms by which these disturbances arise are varied and include freestream sound and/or turbulence interacting with surface curvature, shape discontinuities and surface roughness. These initial conditions are small, often unmeasurable perturbations to the basic state flow. From here, the growth (or decay) of these disturbances depends on the nature of the disturbance and the nature of the basic state. Acoustic disturbances tend to excite two-dimensional instabilities such as Tollmien–Schlichting waves (T-S waves), while vortical disturbances tend to lead to the growth of three-dimensional phenomena such as the crossflow instability . [ 3 ] Numerous experiments in recent decades have revealed that the extent of the amplification region, and hence the location of the transition point on the body surface, is strongly dependent not only upon the amplitude and/or the spectrum of external disturbances but also on their physical nature. 
Some of the disturbances easily penetrate into the boundary layer whilst others do not. Consequently, the concept of boundary layer transition is a complex one and still lacks a complete theoretical exposition. If the initial, environmentally-generated disturbance is small enough, the next stage of the transition process is that of primary mode growth. In this stage, the initial disturbances grow (or decay) in a manner described by linear stability theory . [ 4 ] The specific instabilities that are exhibited in reality depend on the geometry of the problem and the nature and amplitude of initial disturbances. Across a range of Reynolds numbers in a given flow configuration, the most amplified modes can and often do vary. There are several major types of instability which commonly occur in boundary layers. In subsonic and early supersonic flows, the dominant two-dimensional instabilities are T-S waves. For flows in which a three-dimensional boundary layer develops, such as on a swept wing, the crossflow instability becomes important. For flows over concave surface curvature, Görtler vortices may become the dominant instability. Each instability has its own physical origins and its own set of control strategies – some of which are contraindicated by other instabilities – adding to the difficulty in controlling laminar-turbulent transition. The recognition of simple harmonic sound as a precipitating factor in the sudden transition from laminar to turbulent flow might be attributed to Elizabeth Barrett Browning . Her poem, Aurora Leigh (1856), revealed how musical notes (the pealing of a particular church bell) triggered wavering turbulence in the previously steady laminar-flow flames of street gaslights (“...gaslights tremble in the streets and squares” [ 5 ] ). Her instantly acclaimed poem might have alerted scientists (e.g., Leconte 1859) to the influence of simple harmonic (SH) sound as a cause of turbulence. A contemporary flurry of scientific interest in this effect culminated in Sir John Tyndall (1867) deducing that specific SH sounds, directed perpendicular to the flow, had waves that blended with similar SH waves created by friction along the boundaries of tubes, amplifying them and triggering the phenomenon of high-resistance turbulent flow. His interpretation re-surfaced over 100 years later (Hamilton 2015). Walter Tollmien (1931) and Hermann Schlichting (1929) proposed that friction (viscosity) along a smooth flat boundary created SH boundary layer (BL) oscillations that gradually increased in amplitude until turbulence erupted. [ 6 ] [ 7 ] Although contemporary wind tunnels failed to confirm the theory, Schubauer and Skramstad (1943) created a refined wind tunnel that deadened the vibrations and sounds that might impinge on the wind tunnel flat plate flow studies. They confirmed the development of SH long-crested BL oscillations, the dynamic shear waves of transition to turbulence. They showed that specific SH fluttering vibrations induced electromagnetically into a BL ferromagnetic ribbon could amplify similar flow-induced SH BL flutter (BLF) waves, precipitating turbulence at much lower flow rates. Furthermore, certain other specific frequencies interfered with the development of the SH BLF waves, preserving laminar flow to higher flow rates. An oscillation of a mass in a fluid is a vibration that creates a sound wave. SH BLF oscillations in boundary layer fluid along a flat plate must produce SH sound that reflects off the boundary perpendicular to the fluid laminae. 
In late transition, Schubauer and Skramstad found foci of amplification of BL oscillations, associated with bursts of noise (“turbulent spots”). Focal amplification of the transverse sound in late transition was associated with BL vortex formation. The focal amplified sound of turbulent spots along a flat plate, with high-energy oscillation of molecules perpendicularly through the laminae, might suddenly cause localized freezing of laminar slip. The sudden braking of “frozen” spots of fluid would transfer resistance to the high resistance at the boundary, and might explain the head-over-heels BL vortices of late transition. Osborne Reynolds described similar turbulent spots during transition in water flow in cylinders ("flashes of turbulence"). [ 8 ] When many random vortices erupt as turbulence onsets, the generalized freezing of laminar slip (laminar interlocking) is associated with noise and a dramatic increase in resistance to flow. This might also explain the parabolic isovelocity profile of laminar flow abruptly changing to the flattened profile of turbulent flow – as laminar slip is replaced by laminar interlocking as turbulence erupts (Hamilton 2015). [ 9 ] The primary modes themselves do not lead directly to breakdown, but instead lead to the formation of secondary instability mechanisms. As the primary modes grow and distort the mean flow, they begin to exhibit nonlinearities and linear theory no longer applies. Complicating the matter is the growing distortion of the mean flow, which can lead to inflection points in the velocity profile, a situation shown by Lord Rayleigh to indicate absolute instability in a boundary layer. These secondary instabilities, often much higher in frequency than their linear precursors, lead rapidly to breakdown.
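As a rough numerical companion to the Reynolds experiment described above, the following minimal Python sketch computes a pipe-flow Reynolds number and classifies the regime against the quoted transition band; the fluid values, function names and the sharp cutoffs are illustrative assumptions, not part of the cited sources.

```python
# Minimal sketch (illustrative values, not from the cited sources):
# computing a pipe-flow Reynolds number Re = rho*v*D/mu and classifying
# the regime against the transition band reported by Reynolds.

def reynolds_number(density, velocity, diameter, dynamic_viscosity):
    """Re for flow in a circular pipe of the given diameter."""
    return density * velocity * diameter / dynamic_viscosity

def classify(re, lower=2000.0, upper=13000.0):
    """Classify the regime; the band is indicative only, since the actual
    transition point depends on the smoothness of the entry conditions."""
    if re < lower:
        return "laminar"
    if re > upper:
        return "turbulent"
    return "transitional (entry-condition dependent)"

# Water at about 20 C flowing at 0.5 m/s in a 25 mm pipe.
re = reynolds_number(density=998.0, velocity=0.5, diameter=0.025,
                     dynamic_viscosity=1.0e-3)
print(f"Re = {re:.0f} -> {classify(re)}")   # Re = 12475 -> transitional
```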
https://en.wikipedia.org/wiki/Laminar–turbulent_transition
In topology , a branch of mathematics, a lamination of a surface is a partition of a closed subset of the surface into smooth curves. It may or may not be possible to fill the gaps in a lamination to make a foliation . [ 2 ]
https://en.wikipedia.org/wiki/Lamination_(topology)
Laminin–111 (also "laminin–1") is a protein of the type known as laminin isoforms . It was among the first of the laminin isoforms to be discovered. [ 2 ] The "111" identifies the isoform's chain composition of α1β1γ1. [ 2 ] This protein plays an important role in embryonic development . Injections of this substance are used in treatment for Duchenne muscular dystrophy , and its cellular action may potentially become a focus of study in cancer research . The distribution of the different laminin isoforms is tissue-specific. [ 3 ] Laminin–111 is predominantly expressed in the embryonic epithelium , but can also be found in some adult epithelium such as the kidney , liver , testis , ovaries , and brain blood vessels . [ 3 ] [ 4 ] Different levels of expression of α chains have a large influence on the differential expression of laminin, thereby determining the isoform produced. [ 3 ] From studying a mouse model, it was found that transcription factors present in the parietal endoderm regulate the expression of the α1 chain, and large amounts of laminin-111 are produced. [ 3 ] The synthesized laminin–111 formed in an embryo contributes to the formation of Reichert’s membrane , a thick extra-embryonic basement membrane. [ 5 ] When the laminin α1 chain is deficient in an organism, the embryo dies, likely as a result of a defective Reichert’s membrane caused by the lack of laminin–111. [ 4 ] Laminin-111 has been identified as a crucial molecule for development of the embryo, as shown by the consequences that occur when it is lacking. Laminin-111 is expressed very early on in development and is present in the blastocyst . [ 1 ] When various parts of the trimer chains are knocked out by mutations , devastating consequences occur in the embryo . If the β1 or γ1 chains of laminin-111 are absent, the basement membrane fails to form. [ 1 ] Without a basement membrane, cells have nowhere to attach and all dependent activities such as cell migration and epithelial formation can no longer occur. [ 5 ] [ 1 ] The self-assembly and tight network formation by laminin-111 are essential for holding the basement membrane together. Although it is expressed abundantly during the early embryonic stage, laminin-111 is mostly absent in adults. [ 5 ] The injection of laminin-111, however, helps with Duchenne muscular dystrophy , a neuromuscular disease in which the connection between the extracellular matrix and cell cytoskeleton is lost. [ 6 ] [ 7 ] Increased levels of laminin-111 triggered an increase in the expression of the α7-integrin receptor, and this prevented onset of the disease. [ 7 ] Additionally, the presence of laminin-111 increased muscle strength and protected the muscle from injury. [ 7 ] When injected with myoblast transplants, laminin–111 decreased degeneration and inflammatory reactions and increased the success of the transplantation . [ 6 ] The experiments utilizing laminin–111 as a source of therapy for Duchenne muscular dystrophy suggest that it has protective qualities in addition to its association with muscle tissue. In cell adhesion , laminin-111 and other isoforms are important proteins that anchor cells to the extracellular matrix (ECM). [ 8 ] The linkage between cells and the ECM is formed by binding cell surface receptors to one end of the laminin α chain and binding ECM components to another region of the laminin. [ 8 ] Globular domains (G-Domain) of the α chain are the regions on laminin-111 that allow the binding of integrins , glycoproteins , sulfated glycolipids and dystroglycan . 
[ 8 ] Besides anchoring cells to the ECM, laminins are also involved in the signalling of cells and other components of the ECM. [ 8 ] Even though there is not a general mechanism that applies to all laminins in signalling, there are some common pathways that can be seen in more than one isoform of laminin. [ 8 ] For example, the PI3K/AKT pathway is used by laminin-111 (promotes cell survival), 511 (prevents apoptosis with laminin 521), and 521 (stabilizes pluripotency of human embryonic stem cells ). [ 8 ] The pathway begins with the adhesion of the cell to the ECM for activation of the lipid-associated PI3K . [ 9 ] Once PI3K is activated, it localizes cytoplasmic AKT to the cell membrane, where AKT is then phosphorylated to promote cell survival. [ 8 ] When α chains [ 10 ] of laminin-111 bind to the cell surface integrin receptors α1β1, α3β1, α4β1 and α6β1, Cdc42 GTPase is activated. [ 11 ] Activated Cdc42 in turn activates c-Jun kinases and the phosphorylation of Jun. [ 11 ] Activation of c-Jun kinases leads to high levels of c-Jun expression, which results in neurite outgrowth. [ 12 ] The precise position of nitric oxide synthesis in the pathway has yet to be determined. [ 10 ] Weston et al. (2000) proposed that the synthesis of nitric oxide may be upstream of the activation of Cdc42. [ 11 ] Nonetheless, nitric oxide synthesis has been shown to be an important element in laminin-mediated neurite outgrowth. [ 10 ] The dynamic reciprocity theory states that a cell’s fate depends on the exchange of chemical signals between the extracellular matrix and the nucleus of the cell. [ 13 ] Focusing on connections between laminin-111 and other proteins involved in cell-to-cell communication could spark further research that may help to further our current understanding of cancer and how to slow down or stop its progression. Actin plays a role in nuclear activity, which is an important process with regard to cell signalling influencing cell differentiation and replication. It has been suggested that actin interactions directly influence gene transcription as it interacts with chromatin remodeling complexes as well as RNA polymerases I , II and III . [ 14 ] However, the exact role that actin plays in transcription has not yet been determined. A group of scientists [ 15 ] from the U.S. Department of Energy ’s (DOE) Lawrence Berkeley National Laboratory (Berkeley Lab) studied how laminin-111 interacts with the cytoplasmic protein actin. Their study gave the following conclusions: The biological process in which a cell ceases to grow and divide is called quiescence , in contrast to the uncontrolled proliferation characteristic of cancer. ECM laminin-111 sends chemical signals that promote adhesion between a cell and its ECM. Although the mechanism is unknown, these signals have also been linked to cell quiescence. Adding laminin-111 to breast epithelial cells leads to quiescence by altering nuclear actin. High levels of laminin-111 deplete nuclear actin, which induces quiescence of cells. However, when an isoform of actin that cannot exit a cell’s nucleus is active, cells continue to grow and divide even when laminin levels are high. ECM laminin-111 levels in a normal breast cell are significantly higher than laminin-111 levels in cancerous breast tissue. Simply increasing laminin levels in the ECM of cancerous breast cells is not enough to lead to quiescence. 
Therefore, it is implied that there are multiple factors working together to influence cell-to-cell communication, and the communication between laminin-111 and nuclear actin is one of these factors. Laminin-111 could be the physiological regulator of nuclear actin, which would suggest that depleting nuclear actin could be a key to achieving cell quiescence and returning to homeostatic operating conditions. The decreased expression of laminin-111, and of the growth-inhibitory signals it produces, in malignant myoepithelial cells warrants further investigation with regard to cancer research. Further exploration of the interaction between laminin-111 and nuclear actin could therefore be a target for future experimental therapeutic investigations. [ 15 ]
https://en.wikipedia.org/wiki/Laminin_111
The Lamm equation [ 1 ] describes the sedimentation and diffusion of a solute under ultracentrifugation in traditional sector -shaped cells. (Cells of other shapes require much more complex equations.) It was named after Ole Lamm , later professor of physical chemistry at the Royal Institute of Technology , who derived it during his PhD studies under Svedberg at Uppsala University . The Lamm equation can be written: [ 2 ] [ 3 ] {\displaystyle {\frac {\partial c}{\partial t}}={\frac {1}{r}}{\frac {\partial }{\partial r}}\left[rD{\frac {\partial c}{\partial r}}-s\omega ^{2}r^{2}c\right],} where c is the solute concentration, t and r are the time and radius, and the parameters D , s , and ω represent the solute diffusion constant, sedimentation coefficient and the rotor angular velocity , respectively. The first and second terms on the right-hand side of the Lamm equation are proportional to D and sω 2 , respectively, and describe the competing processes of diffusion and sedimentation . Whereas sedimentation seeks to concentrate the solute near the outer radius of the cell, diffusion seeks to equalize the solute concentration throughout the cell. The diffusion constant D can be estimated from the hydrodynamic radius and shape of the solute, whereas the buoyant mass m b can be determined from the ratio of s and D , {\displaystyle m_{b}={\frac {k_{B}Ts}{D}},} where k B T is the thermal energy, i.e., the Boltzmann constant k B multiplied by the absolute temperature T . Solute molecules cannot pass through the inner and outer walls of the cell, resulting in the boundary condition on the Lamm equation {\displaystyle D{\frac {\partial c}{\partial r}}-s\omega ^{2}rc=0} at the inner and outer radii, r a and r b , respectively. By spinning samples at constant angular velocity ω and observing the variation in the concentration c ( r , t ), one may estimate the parameters s and D and, thence, the (effective or equivalent) buoyant mass of the solute.
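The relation between the buoyant mass and the ratio s / D lends itself to a one-line computation. A minimal Python sketch follows; the solute values and function names are hypothetical, not drawn from the references.

```python
# Minimal sketch (hypothetical solute values): the buoyant mass follows
# from the ratio of s and D as m_b = k_B * T * s / D, as stated above.

K_B = 1.380649e-23   # Boltzmann constant, J/K

def buoyant_mass(s_seconds, d_m2_per_s, temperature_kelvin):
    """Buoyant mass in kg from s (seconds), D (m^2/s) and T (K)."""
    return K_B * temperature_kelvin * s_seconds / d_m2_per_s

# Assumed values: s = 4.4 svedbergs (1 S = 1e-13 s), D = 5.9e-11 m^2/s.
m_b = buoyant_mass(s_seconds=4.4e-13, d_m2_per_s=5.9e-11,
                   temperature_kelvin=293.0)
print(f"buoyant mass = {m_b:.2e} kg")   # multiply by Avogadro's number
                                        # for the molar buoyant mass
```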
https://en.wikipedia.org/wiki/Lamm_equation
In topology , a branch of mathematics, and specifically knot theory , the lamp cord trick is the observation that a certain pair of spaces are homeomorphic , even if one of the components is knotted. The spaces are M 3 ∖ T i , i = 1 , 2 {\displaystyle M^{3}\backslash T_{i},i=1,2} , where M 3 {\displaystyle M^{3}} is a hollow ball homeomorphic to S 2 × [ 0 , 1 ] {\displaystyle S^{2}\times [0,1]} and T i {\displaystyle T_{i}} a tube connecting the boundary components of M 3 {\displaystyle M^{3}} . The name comes from R. H. Bing's book "The Geometric Topology of 3-manifolds". [ 1 ]
https://en.wikipedia.org/wiki/Lamp_cord_trick
Lamya Essemlali (born 1979) is a French environmental activist of Moroccan origin. [ 1 ] She is the co-founder and president of Sea Shepherd France, [ 2 ] the French branch of the anti-poaching organisation Sea Shepherd Conservation Society . She is also co-director of Sea Shepherd Global and co-president of the non-governmental organisation Rewild. Essemlali's family is originally from Morocco , but she was born and grew up in Gennevilliers ( France ), near Paris . [ 3 ] [ 4 ] Essemlali has a master's degree in environmental sciences and an associate degree in business communications. Prior to her engagement with the Sea Shepherd Conservation Society , she was involved with Greenpeace and the World Wildlife Fund . [ 5 ] Essemlali met Sea Shepherd Conservation Society founder Paul Watson at a conference in Paris in 2005. Her first missions with Sea Shepherd were to Antarctica [ 5 ] and the Galápagos Islands . [ 4 ] In 2006 she co-founded Sea Shepherd France. She became the president of the organisation in 2008. [ 6 ] Essemlali has led several campaigns for Sea Shepherd Global in the Mediterranean Sea , the Faroe Islands ("GrindStop" campaign) and the Indian Ocean ( Réunion Island) to defend bluefin tuna , [ 7 ] dolphins , pilot whales , [ 8 ] sea cucumbers , and sharks . [ 9 ] She has campaigned against dolphin bycatch in the Bay of Biscay [ 4 ] and the poaching of Hawksbill sea turtles in Mayotte . [ 5 ] She published the book Captain Paul Watson, interview with a pirate in 2012. [ 10 ] [ 11 ] Essemlali is the co-president of the non-governmental organisation Rewild [ 12 ] and co-director of Sea Shepherd Global. [ 13 ]
https://en.wikipedia.org/wiki/Lamya_Essemlali
Lamé's stress ellipsoid is an alternative to Mohr's circle for the graphical representation of the stress state at a point . The surface of the ellipsoid represents the locus of the endpoints of all stress vectors acting on all planes passing through a given point in the continuum body. In other words, the endpoints of all stress vectors at a given point in the continuum body lie on the stress ellipsoid surface, i.e., the radius-vector from the center of the ellipsoid, located at the material point in consideration, to a point on the surface of the ellipsoid is equal to the stress vector on some plane passing through the point. In two dimensions, the surface is represented by an ellipse . Once the equation of the ellipsoid is known, the magnitude of the stress vector can then be obtained for any plane passing through that point. To determine the equation of the stress ellipsoid we consider the coordinate axes x 1 , x 2 , x 3 {\displaystyle x_{1},x_{2},x_{3}\,\!} taken in the directions of the principal axes, i.e., in a principal stress space. Thus, the coordinates of the stress vector T ( n ) {\displaystyle \mathbf {T} ^{(\mathbf {n} )}\,\!} on a plane with normal unit vector n {\displaystyle \mathbf {n} \,\!} passing through a given point P {\displaystyle P\,\!} are represented by {\displaystyle T_{1}^{(n)}=\sigma _{1}n_{1},\qquad T_{2}^{(n)}=\sigma _{2}n_{2},\qquad T_{3}^{(n)}=\sigma _{3}n_{3}.} And knowing that n {\displaystyle \mathbf {n} \,\!} is a unit vector, so that n 1 2 + n 2 2 + n 3 2 = 1 {\displaystyle n_{1}^{2}+n_{2}^{2}+n_{3}^{2}=1\,\!} , we have {\displaystyle {\frac {\left(T_{1}^{(n)}\right)^{2}}{\sigma _{1}^{2}}}+{\frac {\left(T_{2}^{(n)}\right)^{2}}{\sigma _{2}^{2}}}+{\frac {\left(T_{3}^{(n)}\right)^{2}}{\sigma _{3}^{2}}}=1,} which is the equation of an ellipsoid centered at the origin of the coordinate system, with the lengths of the semiaxes of the ellipsoid equal to the magnitudes of the principal stresses, i.e. the intercepts of the ellipsoid with the principal axes are ± σ 1 , ± σ 2 , ± σ 3 {\displaystyle \pm \sigma _{1},\pm \sigma _{2},\pm \sigma _{3}\,\!} . The stress ellipsoid by itself, however, does not indicate the plane on which the given traction vector acts. Only for the case where the stress vector lies along one of the principal directions is it possible to know the direction of the plane, as the principal stresses act perpendicular to their planes. To find the orientation of any other plane we use the stress-director surface [ 1 ] or stress director quadric [ 1 ] represented by the equation {\displaystyle {\frac {x_{1}^{2}}{\sigma _{1}}}+{\frac {x_{2}^{2}}{\sigma _{2}}}+{\frac {x_{3}^{2}}{\sigma _{3}}}=\pm 1.} The stress represented by a radius-vector of the stress ellipsoid acts on a plane oriented parallel to the tangent plane to the stress-director surface at the point of its intersection with the radius-vector. [ 1 ]
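Because the traction components in principal axes are simply T i = σ i n i , the ellipsoid equation can be verified numerically for any unit normal. A minimal Python sketch follows, with assumed principal stresses (the numbers are illustrative, not from the source).

```python
# Minimal numeric check (assumed principal stresses): in principal axes
# the traction on a plane with unit normal n is T_i = sigma_i * n_i, so
# its endpoint satisfies (T1/s1)^2 + (T2/s2)^2 + (T3/s3)^2 = 1.

import math

sigma = (50.0, 30.0, 10.0)        # principal stresses, e.g. in MPa
n = (1.0, 2.0, 2.0)               # an arbitrary direction, normalised below
norm = math.sqrt(sum(c * c for c in n))
n = tuple(c / norm for c in n)    # unit normal vector

T = tuple(s * c for s, c in zip(sigma, n))           # traction components
lhs = sum((t / s) ** 2 for t, s in zip(T, sigma))    # ellipsoid equation
print(T, lhs)    # lhs == 1.0 (up to rounding) for every unit normal n
```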
https://en.wikipedia.org/wiki/Lamé's_stress_ellipsoid
Lamé's Theorem is the result of Gabriel Lamé's analysis of the complexity of the Euclidean algorithm . Using Fibonacci numbers , he proved in 1844 [ 1 ] [ 2 ] that when looking for the greatest common divisor (GCD) of two integers a and b , the algorithm finishes in at most 5 k steps, where k is the number of digits (decimal) of b . [ 3 ] [ 4 ] The number of division steps in the Euclidean algorithm with entries u {\displaystyle u\,\!} and v {\displaystyle v\,\!} is less than 5 {\displaystyle 5} times the number of decimal digits of min ( u , v ) {\displaystyle \min(u,v)\,\!} . Let u > v {\displaystyle u>v} be two positive integers. Applying to them the Euclidean algorithm provides two sequences ( q 1 , … , q n ) {\displaystyle (q_{1},\ldots ,q_{n})} and ( v 2 , … , v n ) {\displaystyle (v_{2},\ldots ,v_{n})} of positive integers such that, setting v 0 = u , {\displaystyle v_{0}=u,} v 1 = v {\displaystyle v_{1}=v} and v n + 1 = 0 , {\displaystyle v_{n+1}=0,} one has v i − 1 = q i v i + v i + 1 {\displaystyle v_{i-1}=q_{i}v_{i}+v_{i+1}} for i = 1 , … , n , {\displaystyle i=1,\ldots ,n,} and 0 ≤ v i + 1 < v i . {\displaystyle 0\leq v_{i+1}<v_{i}.} The number n is called the number of steps of the Euclidean algorithm, since it is the number of Euclidean divisions that are performed. The Fibonacci numbers are defined by F 0 = 0 , {\displaystyle F_{0}=0,} F 1 = 1 , {\displaystyle F_{1}=1,} and F n + 1 = F n + F n − 1 {\displaystyle F_{n+1}=F_{n}+F_{n-1}} for n > 0. {\displaystyle n>0.} The above relations show that v n ≥ 1 = F 2 , {\displaystyle v_{n}\geq 1=F_{2},} and v n − 1 ≥ 2 = F 3 . {\displaystyle v_{n-1}\geq 2=F_{3}.} By induction , v n − i ≥ F i + 2 . {\displaystyle v_{n-i}\geq F_{i+2}.} So, if the Euclidean algorithm requires n steps, one has v ≥ F n + 1 . {\displaystyle v\geq F_{n+1}.} One has F k ≥ φ k − 2 {\displaystyle F_{k}\geq \varphi ^{k-2}} for every integer k > 2 {\displaystyle k>2} , where φ = 1 + 5 2 {\textstyle \varphi ={\frac {1+{\sqrt {5}}}{2}}} is the Golden ratio . This can be proved by induction, starting with F 2 = φ 0 = 1 , {\displaystyle F_{2}=\varphi ^{0}=1,} F 3 = 2 > φ , {\displaystyle F_{3}=2>\varphi ,} and continuing by using that φ 2 = φ + 1 : {\displaystyle \varphi ^{2}=\varphi +1:} F k + 1 = F k + F k − 1 ≥ φ k − 2 + φ k − 3 = φ k − 3 ( φ + 1 ) = φ k − 1 . {\displaystyle F_{k+1}=F_{k}+F_{k-1}\geq \varphi ^{k-2}+\varphi ^{k-3}=\varphi ^{k-3}(\varphi +1)=\varphi ^{k-1}.} So, if n is the number of steps of the Euclidean algorithm, one has v ≥ F n + 1 ≥ φ n − 1 , {\displaystyle v\geq F_{n+1}\geq \varphi ^{n-1},} and thus n − 1 ≤ log 10 ⁡ v log 10 ⁡ φ < 5 log 10 ⁡ v , {\displaystyle n-1\leq {\frac {\log _{10}v}{\log _{10}\varphi }}<5\log _{10}v,} using 1 log 10 ⁡ φ < 5. {\textstyle {\frac {1}{\log _{10}\varphi }}<5.} If k is the number of decimal digits of v {\displaystyle v} , one has v < 10 k {\displaystyle v<10^{k}} and log 10 ⁡ v < k . {\displaystyle \log _{10}v<k.} So, n − 1 < 5 k , {\displaystyle n-1<5k,} and, as both members of the inequality are integers, n ≤ 5 k , {\displaystyle n\leq 5k,} which is exactly what Lamé's theorem asserts. As a side result of this proof, one gets that the pairs of integers ( u , v ) {\displaystyle (u,v)} that give the maximum number of steps of the Euclidean algorithm (for a given size of v {\displaystyle v} ) are the pairs of consecutive Fibonacci numbers.
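The bound is easy to check computationally. A minimal Python sketch follows, counting division steps for pairs of consecutive Fibonacci numbers, the worst case identified above; the function names are illustrative.

```python
# Minimal sketch: counting Euclidean division steps and checking Lamé's
# bound n <= 5k, where k is the number of decimal digits of the smaller
# entry. Consecutive Fibonacci numbers are the worst case, as noted above.

def euclid_steps(u, v):
    """Number of Euclidean divisions performed on the pair (u, v)."""
    steps = 0
    while v:
        u, v = v, u % v
        steps += 1
    return steps

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for k in (10, 15, 20):                 # pairs of consecutive Fibonaccis
    u, v = fib(k + 1), fib(k)
    n, digits = euclid_steps(u, v), len(str(v))
    assert n <= 5 * digits             # Lamé's bound holds
    print(f"({u}, {v}): {n} steps, bound {5 * digits}")
```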
https://en.wikipedia.org/wiki/Lamé's_theorem
Lana Skirboll is the former director of the National Institutes of Health Office of Science Policy . Skirboll is an international leader in science policy. [ 1 ] She graduated from New York University in 1970 with a bachelor's degree in Biology and completed a master's degree in Physiology in 1972 at Miami University (Ohio) . She received her Ph.D. with honors from the Department of Pharmacology, Georgetown University School of Medicine in 1977, and conducted her postdoctoral training in the Departments of Psychiatry and Pharmacology at the Yale School of Medicine . Following her postdoctoral training, she was a Fogarty Fellow at the Karolinska Institute in Stockholm, Sweden in the laboratory of Tomas Hökfelt . Dr. Skirboll is the author of more than 75 scientific publications. After leaving Stockholm, Dr. Skirboll was chief of the Electrophysiology Unit in the Intramural Research Program of the U.S. National Institute of Mental Health ( NIMH ) prior to joining the U.S. Alcohol Drug Abuse and Mental Health Administration (ADAMHA) as the Deputy Science Advisor. She was subsequently appointed as the Chief of Staff to the Agency Administrator and Associate Administrator for Science, where she focused on animals in research and patent policy. In 1992, when ADAMHA was reorganized and its three research Institutes (NIMH, NIDA , and NIAAA ) returned to the NIH, Dr. Skirboll was appointed Director of the Office of Science Policy in the NIMH. In 1995, Harold Varmus , Director of the U.S. National Institutes of Health ( NIH ), appointed Dr. Skirboll as Director of the NIH Office of Science Policy. During her tenure, she managed a wide range of policy issues, including the ethical, legal, social, and economic implications of biomedical research; human subject protections; the privacy and confidentiality of research records; [ 2 ] conflicts of interest; genetics, health, and society; and dual use research, among others. Her office was responsible for NIH's oversight of gene therapy research, including the activities of the Recombinant DNA Advisory Committee (RAC) [ 3 ] [ 4 ] as well as for the activities of the HHS Secretary's Advisory Committee on Genetics, Health and Society; the Secretary's Advisory Committee on Xenotransplantation; the National Science Advisory Board for Biosecurity ( NSABB ); the Clinical Research Policy Analysis and Coordination Program (CRpac); and the NIH Office of Science Education . Dr. Skirboll was the NIH liaison to the U.S. Food and Drug Administration , the Foundation for the NIH , and the HHS Office for Human Research Protections . Her work involved collaboration within the U.S. Government, industry, and foreign governments and institutions. Under three Presidential Administrations, Dr. Skirboll was the agency’s lead on policy issues related to fetal tissue, cloning , and stem cell research. [ 5 ] She was responsible for drafting both the 2000 and 2009 NIH Guidelines for Research Using Human Embryonic Stem Cells . [ 6 ] [ 7 ] [ 8 ] Starting in 2003, Dr. Skirboll worked with NIH Director Elias Zerhouni in creating the NIH Roadmap for Medical Research , the Trans-NIH Nanotechnology Task Force and the NIH program on Public-Private Partnerships . In 2009, Zerhouni named Skirboll to serve as Acting NIH Deputy Director for, and Director of, the Division of Program Coordination, Planning, and Strategic Initiatives (DPCPSI) , the NIH entity responsible for the NIH Common Fund. In this capacity, Dr. 
Skirboll directed national efforts to identify and address emerging scientific opportunities and rising public health challenges through biomedical research. In addition, Dr. Skirboll directed efforts to develop NIH’s portfolio analysis capabilities and was chair of the NIH Council of Councils . She also oversaw NIH’s office of evaluation and the program offices responsible for coordination of research and activities related to research on AIDS, behavioral and social sciences, women's health, disease prevention, rare diseases, and dietary supplements—efforts that reside in DPCPSI as a result of implementing requirements of the NIH Reform Act of 2006 . Dr. Skirboll has received three DHHS Secretarial Awards for Distinguished Service and a Presidential Rank Award of Meritorious Executive. In May 2010, Skirboll joined former NIH Director Elias Zerhouni in a new global science and health consulting firm, the Zerhouni Group, LLC. She recently retired as Vice President and Head of Science Policy at the pharmaceutical company Sanofi after 10 years of service. Skirboll resides in Alexandria, VA and is married to architect and hospital administrator Leonard Taylor, Jr., who was Senior Vice President for Asset Management at the University of Maryland Medical Systems . She has two grown children, Patrick and Eleanor, and four grandchildren.
https://en.wikipedia.org/wiki/Lana_Skirboll
The Lancashire Loom was a semi-automatic power loom invented by James Bullough and William Kenworthy in 1842. Although it is self-acting, it has to be stopped to recharge empty shuttles. It was the mainstay of the Lancashire cotton industry for a century. James Bullough (1800–1868) is often described as a simple Westhoughton weaver. Originally a handloom weaver, unlike others of his trade Bullough embraced new developments such as Edmund Cartwright 's power loom (1785). While colleagues were rejecting new devices, as in the power-loom riots that broke out in Lancashire in 1826, Bullough improved his own loom by inventing various components, including the "self-acting temple" that kept the woven cloth at its correct width, and a loose reed that allowed the lathe to back away on encountering a shuttle trapped in the warp. Bullough also invented a simple but effective warning device which rang a bell every time a warp thread broke on his loom. Bullough moved to Blackburn and worked with William Kenworthy at Brookhouse Mills , with whom he applied his inventions to develop an improved power loom that later became known as the "Lancashire Loom". [ 1 ] He was forced to quit Blackburn for fear of angry handloom weavers . He later settled in Accrington to form Howard & Bullough in partnership with John Howard at the Globe Works, alongside the Leeds-Liverpool Canal in Accrington. Here he invented the slasher, which founded the company's success. He was one of the country's largest manufacturers. At the height of the business the Globe works employed almost 6000 workers and covered 52 acres (210,000 m 2 ). 75% of production was exported. [ 2 ] Howard and Bullough became part of the Textile Machinery Makers Limited group, which were bought out by Platt, and in 1991 the company name changed to Platt Saco Lowell. The Globe works closed in 1993. [ 3 ] From 1830 there had been a series of incremental improvements to the basic Roberts Loom . There then appeared a series of useful improvements, contained in patents for otherwise useless devices. [ 5 ] At this point the loom had become semi-automatic: this is the Kenworthy and Bullough Lancashire Loom. A weaver could work one Cartwright loom at 120–130 picks per minute; with Kenworthy and Bullough's Lancashire Loom, a weaver could run up to six looms working at 220–260 picks per minute, giving 12 times more throughput. The power loom was now referred to as "a perfect machine": it produced textile of better quality than the hand weaver, for less cost – an economic success. [ 6 ] The three primary movements of a loom are shedding, picking, and beating-up. The principal advantage of the Lancashire loom was that it was semi-automatic: when a warp thread broke, the weaver was notified. When the shuttle ran out of thread, the machine stopped. An operative thus could work 4 or more looms whereas previously they could only work a single loom. Indeed, the term A Four Loom Weaver was used to describe the operatives. Labour cost was quartered. In some mills an operative would operate 6 or even 8 looms, [ 9 ] although that was governed by the thread being used. By 1900, the loom was challenged by the Northrop Loom , which was fully automatic and could be worked in larger numbers. The Northrop was suitable for coarse thread, but for fine cotton the Lancashire loom was still preferred. 
By 1914, Northrop looms made up 40% of looms in American mills, but in the United Kingdom labour costs were not as significant and they only supplied 2% of the British market.
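The throughput comparison quoted earlier in the article is simple arithmetic. A minimal Python sketch follows, using mid-range values from the figures given; the exact numbers chosen are illustrative.

```python
# Minimal arithmetic sketch of the throughput comparison quoted above,
# using mid-range values from the figures given in the text.

cartwright = {"looms_per_weaver": 1, "picks_per_minute": 125}  # 120-130
lancashire = {"looms_per_weaver": 6, "picks_per_minute": 240}  # 220-260

def weaver_throughput(loom):
    """Total picks per minute attended by a single weaver."""
    return loom["looms_per_weaver"] * loom["picks_per_minute"]

ratio = weaver_throughput(lancashire) / weaver_throughput(cartwright)
print(f"throughput ratio = {ratio:.1f}x")   # 11.5x, i.e. roughly 12 times
```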
https://en.wikipedia.org/wiki/Lancashire_Loom
The Lancashire hearth was used to fine pig iron , removing carbon to produce wrought iron . Until the early 19th century, the usual method of producing wrought iron involved a charcoal -fired finery in a finery forge . In the beginning of the 19th century this became an obsolete process and was slowly replaced by the coal-fueled puddling process . However, charcoal continued to be used in some forges after most of the iron industry had abandoned it for coke . [ 1 ] When John Bradley & Co. (whose leading partner was James Foster ) took over the potting and stamping forge at Eardington in Shropshire in 1813, they reverted to using charcoal. In 1820, he bought Hampton Loade Forge, which then became a tinplate works and in 1826 another charcoal forge. This was followed by other charcoal forges at Horsehay in 1832 and at the Old Park ironworks of the Botfield family about 1826. Cookley Forge in the Stour valley also reverted to charcoal working in 1814, supplying wire and tinplate mills. [ 1 ] By the 1830s, these forges were sometimes producing over 2,000 tons of iron per year, compared with a few hundred from earlier finery forges. It is likely that these forges were using a more efficient variety of hearth, which from Swedish usage has come to be known as a Lancashire hearth. [ 1 ] Faced with competition from cheaper British iron production, the Swedish iron industry needed to find a new, cheaper method of making iron. In the 1810s, experiments were made with puddling , but this proved unsatisfactory, as it needed coal, of which Sweden had none. After Gustav Ekman visited Britain, he published a report of his observations. He had seen closed finery hearths in south Wales and near Ulverston , then in Lancashire (now Cumbria ). Those in south Wales were similar to puddling furnaces, but in Lancashire, he saw closed furnaces, where the metal was in contact with the fuel. On his return to Sweden, Ekman experimented and built furnaces similar to what he had seen near Ulverston, [ 2 ] most probably at Newland ironworks . In 1829–30, Waern installed a furnace of the south Wales type at Backefors ironworks, while independently Ekman built Lancashire hearths at Dormsjö and Söderfors . From there the process spread to other forges. Charcoal consumption by the metallurgical industry in Sweden peaked about 1885. [ 3 ] In 1887, 406 hearths made 210,500 tons of iron. The last Lancashire forge in Sweden was at Ramnäs , closed in 1964. [ 2 ] [ 4 ] In Shropshire, charcoal iron production continued on a significant scale, but declined after 1870, rods for wire-drawing being a significant product. However most charcoal forges were probably closed by 1890. [ 1 ] The Swedish Lancashire hearth consisted of a rectangular closed furnace with a chimney (8 metres high) at one end and a working arch in front of the hearth proper at the other. Pig iron was charged through a door at the foot of the chimney and stacked on an iron-clad bridge so that it could be heated by the waste gases from the hearth. The hearth was blown through a single water-cooled tuyere with pre-heated air . The hearth consisted of a rectangular box of iron plates, the bottom plate being water-cooled. Surplus slag was removed with a shovel between finings, but some was left to help the process. Pig iron stacked on the bridge at the back of the hearth was then pulled forward with a hook and charcoal added. The blast was then turned on and fining began. 
[ 2 ] When the pigs began to melt, rabbling began (as in the Walloon process ), using two bars of iron, one to stir the iron and the other to lift it back into the blast. Periodically the tuyere had to be cleaned of matter adhering to it with a third bar. Finally, the iron was gathered into a 'loop' which was lifted out of the hearth with a heavier bar and tongs, and taken to the shingling hammer . [ 2 ] The process was more fuel-efficient and more productive than its predecessors. [ 2 ]
https://en.wikipedia.org/wiki/Lancashire_hearth
Lancefield grouping is a system of classification that classifies catalase -negative Gram-positive cocci based on the carbohydrate composition of bacterial antigens found on their cell walls . [ 1 ] The system, created by Rebecca Lancefield , was historically used to organize the various members of the family Streptococcaceae , which includes the genera Lactococcus and Streptococcus , but now is largely superfluous due to explosive growth in the number of streptococcal species identified since the 1970s. [ 2 ] However, it has retained some clinical usefulness even after the taxonomic changes, [ 1 ] and as of 2018, Lancefield designations are still often used to communicate medical microbiological test results. The classification assigns a letter code to each serotype. There are 20 described serotypes, assigned the letters A to V (excluding I and J). [ 3 ] Bacteria of the genus Enterococcus , formerly known as group D streptococci, were classified as members of the genus Streptococcus until 1984 and are included in the original Lancefield grouping. [ 4 ] Many—but not all—species of streptococcus are beta-hemolytic . Notably, enterococci and Streptococcus bovis (Lancefield group D) are not beta-hemolytic. [ 5 ] Though there are many groups of streptococci, the principal organisms that are known to cause human disease belong to group A ( Streptococcus pyogenes ), group B ( Streptococcus agalactiae ), group C/G ( Streptococcus dysgalactiae ), group D ( Streptococcus gallolyticus and Streptococcus infantarius , both members of the Streptococcus bovis group), and two alpha-haemolytic groups that lack the Lancefield carbohydrate antigen: Streptococcus pneumoniae and viridans streptococci . [ 2 ] [ 3 ] Other Streptococcus species are classified as 'non-Lancefield streptococci'.
https://en.wikipedia.org/wiki/Lancefield_grouping
Lanchester's laws are mathematical formulas for calculating the relative strengths of military forces . The Lanchester equations are differential equations describing the time dependence of two armies' strengths A and B as a function of time, with the function depending only on A and B. [ 1 ] [ 2 ] In 1915 and 1916 during World War I , M. Osipov [ 3 ] : vii–viii and Frederick Lanchester independently devised a series of differential equations to demonstrate the power relationships between opposing forces. [ 4 ] Among these are what is known as Lanchester's linear law (for ancient combat ) and Lanchester's square law (for modern combat with long-range weapons such as firearms). As of 2017 modified variations of the Lanchester equations continue to form the basis of analysis in many of the US Army’s combat simulations, [ 5 ] and in 2016 a RAND Corporation report used these laws to examine the probable outcome in the event of a Russian invasion into the Baltic nations of Estonia, Latvia, and Lithuania. [ 6 ] For ancient combat, between phalanxes of soldiers with spears for example, one soldier could only ever fight exactly one other soldier at a time. If each soldier kills, and is killed by, exactly one other, then the number of soldiers remaining at the end of the battle is simply the difference between the larger army and the smaller, assuming identical weapons. The linear law also applies to unaimed fire into an enemy-occupied area. The rate of attrition depends on the density of the available targets in the target area as well as the number of weapons shooting. If two forces, occupying the same land area and using the same weapons, shoot randomly into the same target area, they will both suffer the same rate and number of casualties, until the smaller force is eventually eliminated: the greater probability of any one shot hitting the larger force is balanced by the greater number of shots directed at the smaller force. Lanchester's square law is also known as the N-square law . With firearms engaging each other directly with aimed shooting from a distance, they can attack multiple targets and can receive fire from multiple directions. The rate of attrition now depends only on the number of weapons shooting. Lanchester determined that the power of such a force is proportional not to the number of units it has, but to the square of the number of units. This is known as Lanchester's square law. More precisely, the law specifies the casualties a shooting force will inflict over a period of time, relative to those inflicted by the opposing force. In its basic form, the law is only useful to predict outcomes and casualties by attrition. It does not apply to whole armies, where tactical deployment means not all troops will be engaged all the time. It only works where each unit (soldier, ship, etc.) can kill only one equivalent unit at a time. For this reason, the law does not apply to machine guns, artillery with unguided munitions, or nuclear weapons. The law requires an assumption that casualties accumulate over time: it does not work in situations in which opposing troops kill each other instantly, either by shooting simultaneously or by one side getting off the first shot and inflicting multiple casualties. Note that Lanchester's square law does not apply to technological force, only numerical force; so it requires an N-squared-fold increase in quality to compensate for an N-fold decrease in quantity. Suppose that two armies, Red and Blue, are engaging each other in combat. 
Red is shooting a continuous stream of bullets at Blue. Meanwhile, Blue is shooting a continuous stream of bullets at Red. Let symbol A represent the number of soldiers in the Red force. Each one has offensive firepower α , which is the number of enemy soldiers it can incapacitate (e.g., kill or injure) per unit time. Likewise, Blue has B soldiers, each with offensive firepower β . Lanchester's square law calculates the number of soldiers lost on each side using the following pair of equations: [ 7 ] d A d t = − β B , d B d t = − α A . {\displaystyle {\frac {dA}{dt}}=-\beta B,\qquad {\frac {dB}{dt}}=-\alpha A.} Here, dA/dt represents the rate at which the number of Red soldiers is changing at a particular instant. A negative value indicates the loss of soldiers. Similarly, dB/dt represents the rate of change of the number of Blue soldiers. The solution to these equations shows that the quantity αA 2 − βB 2 remains constant over time, so the side with the larger value of this invariant wins; in particular, the fighting strength of a force is proportional to the square of its numbers. The latter conclusion is the origin of the name "square law". Lanchester's equations are related to the more recent salvo combat model equations, with two main differences. First, Lanchester's original equations form a continuous time model, whereas the basic salvo equations form a discrete time model. In a gun battle, bullets or shells are typically fired in large quantities. Each round has a relatively low chance of hitting its target, and does a relatively small amount of damage. Therefore, Lanchester's equations model gunfire as a stream of firepower that continuously weakens the enemy force over time. By comparison, cruise missiles typically are fired in relatively small quantities. Each one has a high probability of hitting its target, and carries a relatively powerful warhead. Therefore, it makes more sense to model them as a discrete pulse (or salvo) of firepower in a discrete time model. Second, Lanchester's equations include only offensive firepower, whereas the salvo equations also include defensive firepower. Given their small size and large number, it is not practical to intercept bullets and shells in a gun battle. By comparison, cruise missiles can be intercepted (shot down) by surface-to-air missiles and anti-aircraft guns. Therefore, missile combat models include those active defenses. Lanchester's laws have been used to model historical battles for research purposes. Examples include Pickett's Charge of Confederate infantry against Union infantry during the 1863 Battle of Gettysburg , [ 8 ] the 1940 Battle of Britain between the British and German air forces, [ 9 ] and the Battle of Kursk . [ 10 ] In modern warfare, to take into account that both the linear and the square law apply to some extent, an exponent of 1.5 is often used. [ 11 ] [ 12 ] [ 3 ] : 7-5–7-8 Lanchester's laws have also been used to model guerrilla warfare . [ 13 ] The laws have also been applied to repeat battles with a range of inter-battle reinforcement strategies. [ 14 ] Attempts have been made to apply Lanchester's laws to conflicts between animal groups. [ 15 ] Examples include tests with chimpanzees [ 16 ] and ants . The chimpanzee application was relatively successful. A study of Australian meat ants and Argentine ants confirmed the square law, [ 17 ] but a study of fire ants did not confirm the square law. [ 18 ] The Helmbold Parameters offer precise numerical indices, grounded in historical data, for quickly and accurately comparing battles in terms of bitterness and the degree of advantage held by each side. 
While their definition is modeled after a solution of the Lanchester Square Law's differential equations, their numerical values are based entirely on the initial and final strengths of the opponents and in no way depend upon the validity of Lanchester's Square Law as a model of attrition during the course of a battle. The solution of Lanchester's Square Law used here can be written as: a ( t ) = cosh ⁡ ( λ t ) − μ sinh ⁡ ( λ t ) d ( t ) = cosh ⁡ ( λ t ) − μ − 1 sinh ⁡ ( λ t ) ε = λ T {\displaystyle {\begin{aligned}a(t)&=\cosh(\lambda t)-\mu \sinh(\lambda t)\\d(t)&=\cosh(\lambda t)-\mu ^{-1}\sinh(\lambda t)\\\varepsilon &=\lambda T\end{aligned}}} where a ( t ) and d ( t ) are the attacker's and defender's strengths at time t , expressed as fractions of their initial strengths, μ is an advantage parameter, λ is an intensity (rate) parameter, T is the duration of the battle, and ε is the bitterness parameter. If the initial and final strengths of the two sides are known it is possible to solve for the parameters a ( T ) {\displaystyle a(T)} , d ( T ) {\displaystyle d(T)} , μ {\displaystyle \mu } , and ε {\displaystyle \varepsilon } . If the battle duration T {\displaystyle T} is also known, then it is possible to solve for λ {\displaystyle \lambda } . [ 19 ] [ 20 ] [ 21 ] If, as is normally the case, ε {\displaystyle \varepsilon } is small enough that the hyperbolic functions can, without any significant error, be replaced by their series expansion up to terms in the first power of ε {\displaystyle \varepsilon } , and if abbreviations adopted for the casualty fractions are F A = 1 − a ( T ) {\displaystyle F_{A}=1-a(T)} and F D = 1 − d ( T ) {\displaystyle F_{D}=1-d(T)} , then the approximate relations that hold include ε = F A F D {\displaystyle \varepsilon ={\sqrt {F_{A}F_{D}}}} and μ = F A / F D {\displaystyle \mu ={\sqrt {F_{A}/F_{D}}}} . [ 22 ] That ε {\displaystyle \varepsilon } is a kind of "average" (specifically, the geometric mean ) of the casualty fractions justifies using it as an index of the bitterness of the battle. Statistical work prefers natural logarithms of the Helmbold Parameters. They are noted log ⁡ μ {\displaystyle \log \mu } , log ⁡ ε {\displaystyle \log \varepsilon } , and log ⁡ λ {\displaystyle \log \lambda } . See Helmbold (2021). Some observers have noticed a similar post-WWII decline in casualties at the level of wars instead of battles. [ 28 ] [ 29 ] [ 30 ] [ 31 ]
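A minimal Python sketch follows, integrating the square-law equations with a crude Euler step and then forming the casualty fractions and the approximate Helmbold-style indices described above. The initial strengths, firepower coefficients and the identification of Red with the attacker are illustrative assumptions, not drawn from the sources.

```python
# Minimal sketch (assumed values throughout): integrating Lanchester's
# square-law equations dA/dt = -beta*B, dB/dt = -alpha*A with an Euler
# step, checking the invariant alpha*A^2 - beta*B^2, and then forming the
# approximate Helmbold indices epsilon = sqrt(F_A*F_D), mu = sqrt(F_A/F_D).

import math

def square_law(a0, b0, alpha, beta, dt=1e-4, t_end=1.0):
    """Crude forward-Euler integration of the square-law equations."""
    a, b = a0, b0
    for _ in range(int(t_end / dt)):
        a, b = a - beta * b * dt, b - alpha * a * dt
        if a <= 0 or b <= 0:
            break
    return max(a, 0.0), max(b, 0.0)

# Illustrative forces: Red (treated as the attacker) vs Blue (defender).
a0, b0, alpha, beta = 1000.0, 800.0, 0.8, 1.0
a_t, b_t = square_law(a0, b0, alpha, beta)

# The invariant alpha*A^2 - beta*B^2 is conserved up to integration error.
print(alpha * a0 ** 2 - beta * b0 ** 2, alpha * a_t ** 2 - beta * b_t ** 2)

f_a = 1.0 - a_t / a0              # attacker casualty fraction F_A
f_d = 1.0 - b_t / b0              # defender casualty fraction F_D
epsilon = math.sqrt(f_a * f_d)    # bitterness index
mu = math.sqrt(f_a / f_d)         # advantage parameter (approximate)
print(f"epsilon = {epsilon:.3f}, mu = {mu:.3f}")
```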
https://en.wikipedia.org/wiki/Lanchester's_laws
The Lanczos tensor or Lanczos potential is a rank 3 tensor in general relativity that generates the Weyl tensor . [ 1 ] It was first introduced by Cornelius Lanczos in 1949. [ 2 ] The theoretical importance of the Lanczos tensor is that it serves as the gauge field for the gravitational field in the same way that, by analogy, the electromagnetic four-potential generates the electromagnetic field . [ 3 ] [ 4 ] The Lanczos tensor can be defined in a few different ways. The most common modern definition is through the Weyl–Lanczos equations, which demonstrate the generation of the Weyl tensor from the Lanczos tensor. [ 4 ] These equations, presented below, were given by Takeno in 1964. [ 1 ] The way that Lanczos introduced the tensor originally was as a Lagrange multiplier [ 2 ] [ 5 ] on constraint terms studied in the variational approach to general relativity . [ 6 ] Under any definition, the Lanczos tensor H exhibits the following symmetries: H a b c = − H b a c , H a b c + H b c a + H c a b = 0. {\displaystyle H_{abc}=-H_{bac},\qquad H_{abc}+H_{bca}+H_{cab}=0.} The Lanczos tensor always exists in four dimensions [ 7 ] but does not generalize to higher dimensions. [ 8 ] This highlights the specialness of four dimensions . [ 3 ] Note further that the full Riemann tensor cannot in general be derived from derivatives of the Lanczos potential alone. [ 7 ] [ 9 ] The Einstein field equations must provide the Ricci tensor to complete the components of the Ricci decomposition . The Curtright field has a gauge-transformation dynamics similar to that of the Lanczos tensor. But the Curtright field exists in arbitrary dimensions > 4D. [ 10 ] The Weyl–Lanczos equations express the Weyl tensor entirely as derivatives of the Lanczos tensor, [ 11 ] where C a b c d {\displaystyle C_{abcd}} is the Weyl tensor, the semicolon denotes the covariant derivative , and the subscripted parentheses indicate symmetrization . Although the above equations can be used to define the Lanczos tensor, they also show that it is not unique but rather has gauge freedom under an affine group . [ 12 ] If Φ a {\displaystyle \Phi ^{a}} is an arbitrary vector field , then the Weyl–Lanczos equations are invariant under the gauge transformation H a b c ′ = H a b c + Φ [ a g b ] c , {\displaystyle H'_{abc}=H_{abc}+\Phi _{[a}g_{b]c},} where the subscripted brackets indicate antisymmetrization . An often convenient choice is the Lanczos algebraic gauge, Φ a = − 2 3 H a b b , {\displaystyle \Phi _{a}=-{\frac {2}{3}}H_{ab}{}^{b},} which sets H a b ′ b = 0. {\displaystyle H'_{ab}{}^{b}=0.} The gauge can be further restricted through the Lanczos differential gauge H a b c ; c = 0 {\displaystyle H_{ab}{}^{c}{}_{;c}=0} . These gauge choices reduce the Weyl–Lanczos equations to a simpler form. The Lanczos potential tensor satisfies a wave equation [ 13 ] of the form ◻ H a b c = J a b c + ⋯ , {\displaystyle \Box H_{abc}=J_{abc}+\cdots ,} where ◻ {\displaystyle \Box } is the d'Alembert operator , J a b c {\displaystyle J_{abc}} is known as the Cotton tensor , and the ellipsis denotes nonlinear self-coupling terms. Since the Cotton tensor depends only on covariant derivatives of the Ricci tensor , it can perhaps be interpreted as a kind of matter current. [ 14 ] The additional self-coupling terms have no direct electromagnetic equivalent. These self-coupling terms, however, do not affect the vacuum solutions , where the Ricci tensor vanishes and the curvature is described entirely by the Weyl tensor. Thus in vacuum, the Einstein field equations are equivalent to the homogeneous wave equation ◻ H a b c = 0 , {\displaystyle \Box H_{abc}=0,} in perfect analogy to the vacuum wave equation ◻ A a = 0 {\displaystyle \Box A_{a}=0} of the electromagnetic four-potential. This shows a formal similarity between gravitational waves and electromagnetic waves , with the Lanczos tensor well-suited for studying gravitational waves. 
[ 15 ] In the weak field approximation where g a b = η a b + h a b {\displaystyle g_{ab}=\eta _{ab}+h_{ab}} , the Lanczos tensor takes a particularly convenient form in the Lanczos gauge. [ 14 ] The most basic nontrivial case for expressing the Lanczos tensor is that of the Schwarzschild metric . [ 4 ] In this case the simplest explicit component representation in natural units has only a few nonvanishing components, with all other components vanishing up to symmetries; this form, however, is not in the Lanczos gauge, and the nonvanishing terms of the Lanczos tensor in the Lanczos gauge differ. It is further possible to show, even in this simple case, that the Lanczos tensor cannot in general be reduced to a linear combination of the spin coefficients of the Newman–Penrose formalism , which attests to the Lanczos tensor's fundamental nature. [ 11 ] Similar calculations have been used to construct arbitrary Petrov type D solutions. [ 16 ]
https://en.wikipedia.org/wiki/Lanczos_tensor
Land is a monthly peer-reviewed , open access , scientific journal that is published by MDPI . It was established in 2012. The journal explores land use /land change, land management , land system science, and landscape -related issues. [ 1 ] The editor-in-chief is Andrew Millington ( Flinders University ). The journal is abstracted and indexed in a number of bibliographic databases.
https://en.wikipedia.org/wiki/Land_(journal)
Land Art Generator Initiative ( LAGI ), founded by Elizabeth Monoian and Robert Ferry, [ 1 ] is an organization dedicated to devising alternative energy solutions through sustainable design and public art [ 2 ] by providing platforms for scientists and engineers to collaborate with artists, architects and other creatives on public art projects that generate sustainable energy infrastructures. [ 3 ] Since 2010, LAGI has hosted biennial international competitions stimulating artists to design public art that produces renewable energy. [ 1 ] Sites for these contests have included Abu Dhabi, United Arab Emirates, Copenhagen, Denmark, New York City and Santa Monica, California. [ 4 ] Land Art Generator Initiative also led efforts that have resulted in the world's first Solar Mural artworks. [ 5 ] Every two years since 2010, Land Art Generator Initiative has conducted international competitions leading design teams from over forty countries to create art-based solutions to renewable energy challenges. [ 6 ] 2010 - Abu Dhabi, United Arab Emirates [ 7 ] 2012 - Freshkills Park , New York [ 8 ] 2014 - Copenhagen , Denmark [ 9 ] 2016 - Santa Monica, California 2018 - Melbourne, Australia [ 10 ] 2019 - Abu Dhabi, United Arab Emirates [ 11 ] 2020 - Fly Ranch, Nevada [ 12 ] 2022 - Mannheim, Germany [ 13 ] The world's first Solar Mural artworks, developed through leadership from the Land Art Generator Initiative, are located in San Antonio, Texas. These artworks are the result of an advanced photovoltaic film technology that allows light to filter through an image-printed film adhered to solar panels. The first is a stand-alone work called La Monarca. [ 14 ] The world's first wall-mounted Solar Mural artwork is on the facade of Brackenridge Elementary School. [ 15 ]
https://en.wikipedia.org/wiki/Land_Art_Generator_Initiative
The Land Information and Management System ( LIMS ) is a system introduced in Pakistan to improve contemporary agricultural practices. [ 1 ] The former Prime Minister, Shehbaz Sharif , and Chief of Army Staff (COAS) General Asim Munir inaugurated the system. Headed by Maj Gen Muhammad Ayub Ahsan Bhatti, its goal is to boost the agriculture sector , which accounts for nearly 25% of the country's GDP . [ 2 ] [ 3 ] LIMS is based on a geographic information system (GIS), and aims to streamline the digitization of farming processes. [ 3 ] It gives farmers online access to data on climate shifts, satellite-based crop monitoring, water usage, fertilizer application, and targeted spray zones. Its developers believe that LIMS will create jobs for the youth and rejuvenate unused and underutilized land. [ 4 ] [ 5 ] [ 6 ] LIMS offers real-time updates to farmers regarding soil conditions, crops, weather, water availability, and pest monitoring using remote sensing and geospatial technologies. Additionally, the system is designed to reduce the reliance on intermediaries through an effective marketing framework. [ 7 ] [ 5 ] LIMS's main goals centre on bolstering food security, advancing agricultural exports, and alleviating the financial burden that imports place on the nation's treasury. It is designed to bring underutilized or low-yield land within the country into productive use, and to reduce food insecurity, malnutrition, and the escalating costs associated with agricultural imports. [ 7 ] [ 4 ] LIMS collaborates with several countries including Saudi Arabia , the United Arab Emirates , Qatar , Bahrain , and China for multiple agricultural ventures aimed at boosting Pakistan's exports. [ 7 ] In 2023, Saudi Arabia made a $500 million investment to establish the facility. [ 8 ]
https://en.wikipedia.org/wiki/Land_Information_and_Management_System
LADSS , or land allocation decision support system , is an agricultural land-use planning tool developed at The Macaulay Institute . More recently the term LADSS is used to refer to the research of the team behind the original planning tool. [ 1 ] The focus of the research of the LADSS team has evolved over time from land use decision support towards policy support, climate change and the concepts of resilience and adaptive capacity. The team has recently published a study which examines, from a Scottish perspective, a number of alternative scenarios for reform of CAP Pillar 1 Area Payments. It focuses on two alternative classifications, the Macaulay Land Capability for Agriculture classification and Less Favoured Area Designations, and includes analysis of the redistribution of payments from the current historical system. The study is entitled Modelling Scenarios for CAP Pillar 1 Area Payments using Macaulay Land Capability for Agriculture (& Less Favoured Area Designations) and was used to inform the Pack Inquiry . The EU FP7 SMILE (Synergies in Multi-scale Inter-Linkages of Eco-social Systems) project focuses on the concept of social metabolism, which draws attention to how energy, material, money and ideas are utilised by society. The Aquarius project aims to find and implement sustainable, integrated land-water management through engaging with land managers. The COP15 website provides a series of briefing and scoping papers produced by the United Nations Environment Programme (UNEP), with contributions from The Macaulay Institute, to raise the profile of the ecosystems approach at the UNFCCC 15th Conference of the Parties meeting in Copenhagen in tackling not just climate change mitigation and adaptation, but also poverty alleviation, disaster risk reduction , biodiversity loss and many other environmental issues. The LADSS planning tool is implemented using the programming language G2 from Gensym alongside a Smallworld GIS application using the Magik programming language and an Oracle database . LADSS models crops using the CropSyst simulation model. LADSS also contains a livestock model plus social, environmental and economic impact assessments. LADSS has been used to address climate change issues affecting agriculture in Scotland and Italy . Part of this work has involved the use of General Circulation Models (also known as Global climate models ) to predict future climate scenarios. Other work has included a study into how Common Agricultural Policy reform will affect the uplands of Scotland , an assessment of agricultural sustainability and rural development research within the AGRIGRID project. Peer reviewed papers produced by LADSS are available for download in PDF format.
https://en.wikipedia.org/wiki/Land_allocation_decision_support_system
In biogeography , a land bridge is an isthmus or wider land connection between otherwise separate areas, over which animals and plants are able to cross and colonize new lands. A land bridge can be created by marine regression , in which sea levels fall, exposing shallow, previously submerged sections of continental shelf ; or when new land is created by plate tectonics ; or occasionally when the sea floor rises due to post-glacial rebound after an ice age . In the late 19th and early 20th centuries, vanished land bridges were an explanation for observed affinities of plants and animals in distant locations. Scientists such as Joseph Dalton Hooker noted puzzling geological, botanical, and zoological similarities between widely separated areas, and proposed land bridges between appropriate land masses that allowed species to spread between them. [ 3 ] [ 4 ] In geology, the concept was first proposed by Jules Marcou in Lettres sur les roches du Jura et leur distribution géographique dans les deux hémisphères ("Letters on the rocks of the Jura [Mountains] and their geographic distribution in the two hemispheres"), 1857–1860. [ 4 ] A number of such land bridges were hypothesized. [ 4 ] The theory of continental drift provided an alternative explanation that did not require land bridges. [ 5 ] However, the continental drift theory was not widely accepted until the development of plate tectonics in the early 1960s, which more completely explained the motion of continents over geological time. [ 6 ] [ 7 ]
https://en.wikipedia.org/wiki/Land_bridge
Due to changes in sea level , Japan has at various times been connected to the continent by land bridges ( 陸橋 , rikukyō ) : with continental Russia to the north via the Sōya Strait , Sakhalin , and the Mamiya Strait , and with the Korean Peninsula to the southwest via the Tsushima Strait and Korea Strait . [ 1 ] : 962 Land bridges also connected the Japanese Islands with each other. These land bridges enabled the migration of terrestrial fauna from the continent and their dispersal within Japan. [ 1 ] : 961 Around 25 million years ago, the Sea of Japan began to open, separating Japan from the continent and giving rise to the Japanese island arc system of today. [ 2 ] : 1 The Sea of Japan as a back-arc basin was open both to the northeast and to the southwest by 14 Ma, [ 2 ] : 14 while marine transgression further contributed to the isolation and insulation of Japan. [ 1 ] : 961 Due to the level of tectonic activity in the area and significant subsidence of the Japanese Islands since the Miocene , exact quantification of historic sea level changes is problematic. [ 1 ] : 962 Based on current depths, a 55 m (180 ft) reduction in sea level would be sufficient to connect Hokkaidō with the mainland. [ 3 ] : 1135 The Sōya land bridge ( 宗谷陸橋 ) and Mamiya land bridge ( 間宮陸橋 ), sometimes referred to jointly as the Saghalien land bridge ( 樺太陸橋 ) [ 4 ] or Sakhalin land bridge, are thus thought to have been in place during most glacial periods . [ 1 ] : 962 [ 3 ] : 1135 With a minimum depth of 130 m (430 ft), and based in part on the appearance in Japan of Proboscidea , the Tsushima land bridge ( 対馬陸橋 ) and Korean land bridge ( 朝鮮陸橋 ), sometimes referred to jointly as the Korean land bridge, [ 4 ] are understood to have been in place at 1.2 Ma, 0.63 Ma, and 0.43 Ma. [ 1 ] : 962 [ 5 ] : 314 Sea-level falls are not thought to have been sufficient to create a Kuril land bridge ( 千島陸橋 ) connecting Hokkaidō with Kamchatka during the Quaternary . [ 4 ] The southern Kuril land bridge that connected Kunashiri and the Lesser Kurils to Hokkaidō during the Early Holocene was severed by rising sea levels at around 6,000 BP . [ 6 ] : 133 Honshū , Shikoku , and Kyūshū are separated by shallow straits that rarely exceed 50 m (160 ft) in depth. [ 3 ] : 1135 Consequently, they were frequently connected as a single land mass. [ 1 ] : 962 [ 3 ] : 1135 The Tsugaru Strait , with a depth in excess of 130 metres (430 ft), represents a more significant faunal boundary, known as Blakiston's Line . [ 3 ] : 1135 The most recent age of the Tsugaru land bridge ( 津軽陸橋 ) is uncertain. [ 7 ] The Ryūkyū Islands , separated by deeper straits still (the Tokara Gap ), have been isolated from the main islands throughout the Quaternary . [ 1 ] : 962 The Ryūkyū land bridge ( 琉球陸橋 ) temporarily connected Miyako-jima with Taiwan during the late Middle Pleistocene , allowing for the migration of the steppe mammoth ( Mammuthus trogontherii ). [ 4 ] [ 8 ] During this period, the Miyako Strait was deep enough to prevent the land bridge from reaching Okinawa Island . [ 8 ]
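The sill depths quoted above lend themselves to a toy reading aid. The following sketch (Python) uses only the nominal depths quoted in this article and ignores subsidence, tectonics and actual sill bathymetry, so it is illustrative only; it asks which connections a given sea-level fall would expose:

```python
# Toy illustration only: which of the connections discussed above would a
# given eustatic sea-level fall expose, using just the nominal present-day
# depths quoted in this article? Subsidence, tectonics and detailed sill
# bathymetry are ignored, so this is a reading aid, not a reconstruction.
SILL_DEPTH_M = {
    "Soya Strait (Hokkaido-Sakhalin)": 55,        # ~55 m fall suffices
    "Tsushima/Korea Strait (Kyushu-Korea)": 130,  # minimum depth ~130 m
    "Tsugaru Strait (Honshu-Hokkaido)": 130,      # "in excess of 130 m"
    "Inland straits (Honshu-Shikoku-Kyushu)": 50, # "rarely exceed 50 m"
}

def exposed(sea_level_fall_m):
    """Connections whose quoted depth would be reached by the fall."""
    return [name for name, depth in SILL_DEPTH_M.items()
            if sea_level_fall_m >= depth]

for fall in (55, 130):
    print(f"{fall} m fall exposes: {exposed(fall)}")
```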
https://en.wikipedia.org/wiki/Land_bridges_of_Japan
Land change models (LCMs) describe, project, and explain changes in and the dynamics of land use and land cover . LCMs are a means of understanding how humans have changed, and will change, the Earth's surface in the past, present, and future. Land change models are valuable in development policy, helping guide more appropriate decisions for resource management and the natural environment at a variety of scales, ranging from a small piece of land to the full spatial extent of interest. [ 1 ] [ 2 ] Moreover, developments within land-cover , environmental and socio-economic data (as well as within technological infrastructures) have increased opportunities for land change modeling to help support and influence decisions that affect human-environment systems , [ 1 ] as national and international attention increasingly focuses on issues of global climate change and sustainability . Changes in land systems have consequences for climate and environmental change on every scale. Therefore, decisions and policies in relation to land systems are very important for reacting to these changes and working towards a more sustainable society and planet. [ 3 ] Land change models are significant in their ability to help guide land systems to positive societal and environmental outcomes at a time when attention to changes across land systems is increasing. [ 3 ] [ 4 ] A plethora of science and practitioner communities have advanced the amount and quality of data in land change modeling in the past few decades, which has influenced the development of methods and technologies in land change modeling. The multitude of land change models that have been developed are significant in their ability to address land system change and useful in various science and practitioner communities. [ 3 ] For the science community, land change models are important in their ability to test theories and concepts of land change and its connections to human-environment relationships, as well as to explore how these dynamics will change future land systems without real-world observation. [ 3 ] Land change modeling is useful for exploring spatial land systems, uses, and covers. Land change modeling can account for complexity within the dynamics of land use and land cover by linking with climatic, ecological, biogeochemical, biogeophysical and socioeconomic models. Additionally, LCMs are able to produce spatially explicit outcomes according to the type and complexity of the land system dynamics across the spatial extent. Many biophysical and socioeconomic variables influence and produce a variety of outcomes in land change modeling. [ 3 ] A notable property of all land change models is that they have some irreducible level of uncertainty in the model structure, parameter values, and/or input data. For instance, one uncertainty within land change models results from the temporal non-stationarity that exists in land change processes, so the further into the future the model is applied, the more uncertain it becomes. [ 5 ] [ 6 ] Another source of uncertainty is data and parameter uncertainty within physical principles (i.e., surface typology), which leads to uncertainty in understanding and predicting physical processes. [ 5 ] Furthermore, land change model design is a product of both decision-making and physical processes. Human-induced impact on the socio-economic and ecological environment is important to take into account, as it constantly changes land cover and can add to model uncertainty.
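As an illustration of that last point, the following minimal sketch (a toy model, not any published LCM; all parameter values are made-up assumptions) lets an annual land-conversion rate drift randomly over time and shows the Monte Carlo spread of projected outcomes widening with the forecast horizon:

```python
import random

# Toy sketch, not a published LCM: project the developed fraction of a
# region forward under an annual conversion rate that drifts over time
# (temporal non-stationarity), and report how the Monte Carlo spread
# grows with the forecast horizon. All parameters are illustrative.
def project(initial_developed=0.20, base_rate=0.01, drift_sd=0.003,
            years=50, runs=1000, seed=42):
    rng = random.Random(seed)
    outcomes = [[] for _ in range(years)]
    for _ in range(runs):
        developed, rate = initial_developed, base_rate
        for t in range(years):
            rate = max(0.0, rate + rng.gauss(0.0, drift_sd))  # drifting rate
            developed = min(1.0, developed + rate * (1.0 - developed))
            outcomes[t].append(developed)
    for t in (9, 24, 49):  # report years 10, 25 and 50
        vals = sorted(outcomes[t])
        lo, hi = vals[int(0.05 * runs)], vals[int(0.95 * runs)]
        print(f"year {t + 1:2d}: 90% interval = [{lo:.3f}, {hi:.3f}], "
              f"width {hi - lo:.3f}")

project()
```

The interval widths grow with the horizon, which is exactly the non-stationarity effect described above.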
To address model uncertainty and to interpret model outputs more accurately, model diagnosis is used to understand more about the connections between land change models and the actual land system of the spatial extent. The overall importance of model diagnosis for model uncertainty issues is its ability to assess how interacting processes and the landscape are represented, as well as the uncertainty within the landscape and its processes. [ 5 ] A machine-learning approach uses land-cover data from the past to try to assess how land will change in the future, and works best with large datasets. There are multiple types of machine-learning and statistical models; a 2011 study in western Mexico found that results from two outwardly similar models were considerably different, as one used a neural network and the other a simple weights-of-evidence model. [ 7 ] A cellular land change model uses maps of suitability for various types of land use, and compares areas that are immediately adjacent to one another to project changes into the future. Variations in the scale of cells in a cellular model can have significant impacts on model outputs. [ 8 ] Economic models are built on principles of supply and demand . They use mathematical parameters in order to predict what land types will be desired and which will be discarded. These are frequently built for urban areas, such as a 2003 study of the highly dense Pearl River Delta in southern China . [ 9 ] Agent-based models try to simulate the behavior of many individuals making independent choices, and then see how those choices affect the landscape as a whole. Agent-based modeling can be complex; for instance, a 2005 study combined an agent-based model with computer-based genetic programming to explore land change in the Yucatan peninsula of Mexico. [ 10 ] Many models do not limit themselves to one of the approaches above; they may combine several in order to develop a fully comprehensive and accurate model. [ citation needed ] Land change models are evaluated to appraise and quantify the performance of a model's predictive power in terms of spatial allocation and quantity of change. Evaluating a model allows the modeler to assess its performance and to refine the "model's output, data measurement, and the mapping and modeling of data" for future applications. The purpose of model evaluation is not to develop a single metric or method that maximizes a "correct" outcome, but to develop tools for evaluating and learning from model outputs so as to produce better models for their specific applications. [ 11 ] There are two types of validation in land change modeling: process validation and pattern validation. Process validation compares the match between "the process in the model and the process operating in the real world". Process validation is most commonly used in agent-based modeling, whereby the modeler uses behaviors and decisions to inform the process determining land change in the model. Pattern validation compares model outputs (i.e., predicted change) and observed outputs (i.e., reference change). [ 2 ] Three-map analysis is a commonly used method for pattern validation in which three maps (a reference map at time 1, a reference map at time 2, and a simulated map of time 2) are compared. [ citation needed ]
This generates a cross-comparison of the three maps in which each pixel is classified into one of five categories: misses (observed change simulated as persistence), hits (observed change simulated correctly as change), wrong hits (observed change simulated as change to the wrong category), false alarms (observed persistence simulated as change), and correct rejections (observed persistence simulated as persistence). Because three-map comparisons include both errors and correctly simulated pixels, the result is a visual expression of both allocation and quantity errors. Single-summary metrics are also used to evaluate LCMs. There are many single-summary metrics that modelers have used to evaluate their models, and they are often utilized to compare models with each other. One such metric is the Figure of Merit (FoM), which uses the hit, miss, and false alarm values generated from a three-map comparison to produce a percentage value that expresses the intersection between reference and simulated change (a worked sketch is given at the end of this article). [ 11 ] Single-summary metrics can obfuscate important information, but the FoM can be useful, especially when the hit, miss and false alarm values are reported as well. The separation of calibration from validation has been identified as a challenge that the field should address. Blurring the two is commonly caused by modelers' use of information from after the first time period, and can make a map appear to have a level of accuracy that is much higher than the model's actual predictive power. [ 13 ] Additional improvements that have been discussed within the field include characterizing the difference between allocation errors and quantity errors, which can be done through three-map comparisons, as well as including both observed and predicted change in the analysis of land change models. [ 13 ] Single-summary metrics have been overly relied on in the past, and have varying levels of usefulness when evaluating LCMs. Even the best single-summary metrics often leave out important information, and reporting metrics like the FoM along with the maps and values used to generate them can communicate necessary information that would otherwise be obfuscated. [ 14 ] Scientists use LCMs to build and test theories in land change modeling for a variety of human and environmental dynamics. [ 15 ] Land change modeling has a variety of implementation opportunities in many science and practice disciplines, such as decision-making, policy, and real-world application in public and private domains. Land change modeling is a key component of land change science , which uses LCMs to assess long-term outcomes for land cover and climate. The science disciplines use LCMs to formalize and test land change theory, and to explore and experiment with different scenarios of land change modeling. The practical disciplines use LCMs to analyze current land change trends and explore future outcomes from policies or actions, in order to set appropriate guidelines, limits and principles for policy and action. Research and practitioner communities may study land change to address topics related to land-climate interactions, water quantity and quality, food and fiber production, and urbanization, infrastructure, and the built environment. [ 15 ] One improvement to land change modeling can be made through better data and better integration of available data and models. Improved observational data can influence modeling quality. Finer spatial and temporal resolution data that can integrate with socioeconomic and biogeophysical data can help land change modeling couple the socioeconomic and biogeophysical modeling types. Land change modelers should value data at finer scales.
Fine data can give a better conceptual understanding of the underlying constructs of the model and capture additional dimensions of land use. It is important to maintain the temporal and spatial continuity of data from airborne-based and survey-based observation, using constellations of smaller satellites, image processing algorithms, and other new data sources to link satellite-based land use information with land management information. It is also important to have better information on land change actors and their beliefs, preferences, and behaviors, to improve the predictive ability of models and to evaluate the consequences of alternative policies. [ 2 ] Another important improvement to land change modeling can be made through better aligning model choices with model goals. It is important to choose the appropriate modeling approach based on the scientific and application contexts of the specific study of interest. For example, when someone needs to design a model with policy and policy actors in mind, they may choose an agent-based model. Here, structural economic or agent-based approaches are useful, whereas approaches centered on specific patterns and trends in land change, as with many ecological systems, may be less so. When one needs to grasp the early stages of problem identification, and thus to understand the scientific patterns and trends of land change, machine learning and cellular approaches are useful. [ 2 ] Land change modeling should also better integrate positive and normative approaches to explanation and prediction based on evidence-based accounts of land systems. It should also integrate optimization approaches to explore which outcomes are the most beneficial and which processes might produce those outcomes. [ 2 ] It is important to integrate data across scales. A model's design is based on the dominant processes and data from a specific scale of application and spatial extent. Cross-scale dynamics and feedbacks between temporal and spatial scales influence the patterns and processes of the model. Processes like telecoupling , indirect land use change , and adaptation to climate change at multiple scales require better representation through cross-scale dynamics. Implementing these processes will require a better understanding of feedback mechanisms across scales. [ 16 ] As modeling environments, frameworks, and platforms are continuously reinvented, land change modeling can improve from better research infrastructure support. For example, model and software infrastructure development can help avoid duplication of initiatives by land change modeling community members, support co-learning about land change modeling, and integrate models to evaluate the impacts of land change. Better data infrastructure can provide more data resources to support the compilation, curation, and comparison of heterogeneous data sources. Better community modeling and governance can advance decision-making and modeling capabilities within a community with specific and achievable goals, and would be a step towards reaching community agreement on specific goals to move modeling and data capabilities forward. [ 17 ] A number of modern challenges in land change modeling can potentially be addressed through contemporary advances in cyberinfrastructure , such as crowdsourcing, "mining" of distributed data, and improved high-performance computing .
Because it is important for modelers to find more data to better construct, calibrate, and validate structural models, the ability to analyze large amounts of data on individual behaviors is helpful. For example, modelers can draw on point-of-sale data on individual purchases by consumers, and on internet activities that reveal social networks. However, some issues of privacy and propriety raised by crowdsourcing have not yet been resolved. [ citation needed ] The land change modeling community can also benefit from Global Positioning System and Internet-enabled mobile device data distribution. Combining various structured data-collecting methods can improve the availability of microdata and broaden the diversity of people who see the findings and outcomes of land change modeling projects. For example, citizen-contributed data supported the implementation of Ushahidi in Haiti after the 2010 earthquake , helping responders address at least 4,000 disaster events. Universities, non-profit agencies, and volunteers are needed to collect information on events like this to produce positive outcomes and improvements in land change modeling and its applications. Tools such as mobile devices make it easier for participants to collect micro-data on agents. Google Maps uses cloud-based mapping technologies with datasets that are co-produced by the public and scientists. Examples in agriculture, such as coffee farmers using Avaaj Otalo, have shown the use of mobile phones for collecting information and as an interactive voice forum. [ citation needed ] Cyberinfrastructure developments may also increase the ability of land change modeling to meet the computational demands of various modeling approaches, given increasing data volumes and expected model interactions. Examples include improved processors, data storage and network bandwidth, and the coupling of land change and environmental process models at high resolution. [ 18 ] An additional way to improve land change modeling is through improved model evaluation approaches. Improvements in sensitivity analysis are needed to gain a better understanding of the variation in model output in response to model elements like input data, model parameters, initial conditions, boundary conditions, and model structure. Improvements in pattern validation can help land change modelers compare model outputs parameterized for some historic case, like maps, with observations for that case. Better treatment of uncertainty sources is needed to improve forecasting of future states that are non-stationary in processes, input variables, and boundary conditions; one can explicitly recognize stationarity assumptions and explore data for evidence of non-stationarity in order to better acknowledge and understand model uncertainty. Improvements in structural validation can help improve acknowledgement and understanding of the processes in the model and the processes operating in the real world through a combination of qualitative and quantitative measures. [ 2 ]
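As the worked sketch referenced earlier, the following illustrates the three-map comparison and Figure of Merit on synthetic binary change/persistence maps; it implements the method as described in this article and is not code from any particular LCM package:

```python
import numpy as np

# Worked sketch of the three-map comparison and Figure of Merit (FoM) for
# the simplest binary change/persistence case, on synthetic maps. With only
# two categories, any simulated change that coincides with observed change
# necessarily lands on the right category, so "wrong hits" are always zero
# here; they matter for multi-category maps.
def three_map_comparison(ref_t1, ref_t2, sim_t2):
    obs_change = ref_t1 != ref_t2
    sim_change = ref_t1 != sim_t2
    hits = int(np.sum(obs_change & sim_change & (ref_t2 == sim_t2)))
    wrong_hits = int(np.sum(obs_change & sim_change & (ref_t2 != sim_t2)))
    misses = int(np.sum(obs_change & ~sim_change))
    false_alarms = int(np.sum(~obs_change & sim_change))
    correct_rejections = int(np.sum(~obs_change & ~sim_change))
    return hits, wrong_hits, misses, false_alarms, correct_rejections

def figure_of_merit(hits, wrong_hits, misses, false_alarms):
    # Intersection of observed and correctly simulated change divided by
    # their union, expressed as a percentage.
    return 100.0 * hits / (hits + wrong_hits + misses + false_alarms)

rng = np.random.default_rng(0)
ref_t1 = rng.integers(0, 2, (100, 100))                               # reference, time 1
ref_t2 = np.where(rng.random((100, 100)) < 0.10, 1 - ref_t1, ref_t1)  # reference, time 2
sim_t2 = np.where(rng.random((100, 100)) < 0.10, 1 - ref_t1, ref_t1)  # simulation of time 2

h, w, m, f, c = three_map_comparison(ref_t1, ref_t2, sim_t2)
print(f"hits={h} wrong_hits={w} misses={m} false_alarms={f} correct={c}")
print(f"Figure of Merit = {figure_of_merit(h, w, m, f):.1f}%")
```

Reporting the five counts alongside the FoM, as recommended above, avoids the information loss of a single-summary metric.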
https://en.wikipedia.org/wiki/Land_change_modeling
Land consumption, as part of human resource consumption , is the conversion of land with healthy soil and intact habitats into areas for industrial agriculture , traffic ( road building ) and especially urban human settlements . More formally, the EEA [ 1 ] has identified three land-consuming activities. In all of those respects, land consumption is equivalent to typical land use in industrialized regions and civilizations. Since the aforementioned conversion activities are often virtually irreversible , the term land loss is also used. From 1990 to 2000, 1.4 million hectares (3.5 million acres) of open space were consumed in the U.S. [ 2 ] In Germany, land is being consumed at a rate of more than 70 hectares (170 acres) every day, or roughly 250,000 hectares (620,000 acres) per decade. [ 3 ] In the European Union, land take is estimated at approximately 1.2 million hectares across 21 EU countries over the period 1990–2006. [ 4 ] Urban growth reduces open space in and around cities , impacting biodiversity and ecosystem services . Land loss can also happen due to natural factors, like erosion or desertification ; nevertheless, most of these can eventually be traced back to human activities . Another, slightly different interpretation of the term is the forced displacement or compulsory acquisition of a native people or settlers from their original land due to land grabbing and the like. Again, in most cases this will be for economic reasons, such as the search for profitable investment and the commodification of natural resources . Reducing global land loss, which progresses at an alarming rate, is vital, since the land footprint , the area required both domestically and abroad to produce the goods and services consumed by a country or region, can be much larger than the land actually used or even available in the country itself. [ 3 ] [ 5 ] While land prices have surged in the first few years of the 21st century, the economics of land consumption still lacks environmental full-cost accounting to add the long-term costs of environmental degradation . Land conversion for economic growth has several major effects.
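A quick sanity check (a Python sketch; the acre conversion factor is the only figure not taken from the paragraph above) confirms that the quoted daily and per-decade rates for Germany are consistent:

```python
# Sanity-check the German land consumption figures quoted above:
# 70 ha/day should correspond to roughly 250,000 ha (620,000 acres)
# per decade. 1 acre is about 0.4047 ha.
HA_PER_DAY = 70
HA_PER_ACRE = 0.4047

ha_per_decade = HA_PER_DAY * 365.25 * 10
print(f"{ha_per_decade:,.0f} ha per decade")                   # ~255,675
print(f"{ha_per_decade / HA_PER_ACRE:,.0f} acres per decade")  # ~631,764
```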
https://en.wikipedia.org/wiki/Land_consumption
Land cover is the physical material at the land surface of Earth . Land covers include flora , concrete , built structures, bare ground, and temporary water . Earth cover is the expression used by ecologist Frederick Edward Clements whose closest modern equivalent is vegetation . [ 1 ] : 52 The expression continues to be used by the United States Bureau of Land Management . [ 2 ] There are two primary methods for capturing information on land cover: field survey , and analysis of remotely sensed imagery . [ 3 ] Land change models can be built from these types of data to assess changes in land cover over time. One of the major land cover issues (as with all natural resource inventories) is that every survey defines similarly named categories in different ways. For instance, there are many definitions of " forest " (sometimes within the same organisation) that may or may not incorporate a number of different forest features (e.g., stand height, canopy cover, strip width, inclusion of grasses, and rates of growth for timber production ). [ 4 ] Areas without trees may be classified as forest cover "if the intention is to re-plant" ( UK and Ireland ), while areas with many trees may not be labelled as forest "if the trees are not growing fast enough" ( Norway and Finland ). "Land cover" is distinct from " land use ", despite the two terms often being used interchangeably. Land use is a description of how people utilize the land, and of socio-economic activity . Urban and agricultural land uses are two of the most common land use classes. At any one point or place there may be multiple and alternate land uses, the specification of which may have a political dimension. The origins of the "land cover/land use" couplet and the implications of their confusion are discussed in Fisher et al. (2005). [ 5 ] The Food and Agriculture Organization (FAO) publishes land cover statistics using 14 classes. Land cover change detection using remote sensing and geospatial data provides baseline information for assessing climate change impacts on habitats, biodiversity, and natural resources in the target areas. Land cover change detection and mapping is a key component of interdisciplinary land change science , which uses it to determine the consequences of land change on climate.
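As a minimal sketch of how land cover change can be detected by comparing classified maps from two dates, the following cross-tabulates two synthetic rasters into a from-to change matrix (the four class codes are hypothetical stand-ins, not the FAO's 14-class scheme):

```python
import numpy as np

# Minimal sketch of post-classification change detection: cross-tabulate
# two classified land cover rasters to get a from-to change matrix.
# The class codes and rasters are synthetic, illustrative stand-ins.
CLASSES = {0: "forest", 1: "cropland", 2: "urban", 3: "water"}

rng = np.random.default_rng(1)
cover_t1 = rng.integers(0, 4, (200, 200))   # classified map, date 1
cover_t2 = cover_t1.copy()
changed = rng.random((200, 200)) < 0.05     # ~5% of pixels change class
cover_t2[changed] = rng.integers(0, 4, int(changed.sum()))

n = len(CLASSES)
change_matrix = np.zeros((n, n), dtype=int)  # rows: from, cols: to
np.add.at(change_matrix, (cover_t1.ravel(), cover_t2.ravel()), 1)

for i, src in CLASSES.items():
    for j, dst in CLASSES.items():
        if i != j and change_matrix[i, j]:
            print(f"{src:>8} -> {dst:<8}: {change_matrix[i, j]:5d} pixels")
```

The diagonal of the matrix holds persistence; the off-diagonal cells are the detected transitions.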
https://en.wikipedia.org/wiki/Land_cover
Land development is the alteration of landscape in any number of ways. Land development has a history dating to Neolithic times, around 8,000 BC. From the dawn of civilization, the process of land development has encompassed the progress of improvements on a piece of land based on codes and regulations, particularly housing complexes. In an economic context, land development is also sometimes advertised as land improvement or land amelioration , and refers to investment that makes land more usable by humans. For accounting purposes, it refers to any of a variety of projects that increase the value of the land. Most are depreciable, but some land improvements cannot be depreciated because a useful life cannot be determined. Home building and containment [ clarification needed ] are two of the most common and the oldest types of development. In an urban context, land development includes a range of further activities. A landowner or developer of a project of any size will often want to maximise profits , minimise risk , and control cash flow . This "profitable energy" means identifying and developing the best scheme for the local marketplace, whilst satisfying the local planning process. Development analysis puts development prospects and the development process itself under the microscope, identifying where enhancements and improvements can be introduced. These improvements aim to align with best design practice, political sensitivities, and the inevitable social requirements of a project, with the overarching objective of increasing land values and profit margins on behalf of the landowner or developer. [ 1 ] Development analysis can add significantly to the value of land and development, and as such is a crucial tool for landowners and developers. It features in Kevin A. Lynch 's 1960 book The Image of the City , and is considered essential to realizing the value potential of land. [ 2 ] The landowner can share in additional planning gain (significant value uplift) via an awareness of the land's development potential . This is done via a residual development appraisal, or residual valuation. The residual appraisal calculates the sale value of the end product (the gross development value, or GDV) and hypothetically deducts costs, including planning and construction costs, finance costs and developer's profit. The "residue", or leftover proportion, represents the land value. Therefore, in maximizing the GDV (that which one could build on the land), land value is concurrently enhanced. Land value is highly sensitive to supply and demand (for the end product), build costs, planning and affordable housing contributions, and so on. Understanding the intricacies of the development system and the effect of "value drivers" can result in massive differences in the landowner's sale value. Land development puts more emphasis on the expected economic development resulting from the process; "land conversion" focuses on the general physical and biological aspects of the land use change . "Land improvement" in the economic sense can often lead to land degradation from the ecological perspective. Land development and the change in land value do not usually take into account changes in the ecology of the developed area.
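The residual appraisal described above can be made concrete with a minimal sketch (all figures are illustrative assumptions, not market data):

```python
# Minimal sketch of a residual development appraisal. The residue left
# after deducting costs and developer's profit from the gross development
# value (GDV) is the implied land value. All figures are made up.
def residual_land_value(gdv, build_costs, planning_fees,
                        finance_costs, profit_margin):
    """profit_margin is the developer's profit as a fraction of GDV."""
    developers_profit = profit_margin * gdv
    return gdv - build_costs - planning_fees - finance_costs - developers_profit

# Illustrative example: a scheme of 10 homes selling at 300,000 each.
gdv = 10 * 300_000
land_value = residual_land_value(
    gdv=gdv,
    build_costs=1_500_000,
    planning_fees=150_000,
    finance_costs=200_000,
    profit_margin=0.20,   # 20% of GDV
)
print(f"Implied land value: {land_value:,.0f}")   # 550,000
```

Because the developer's profit is taken as a share of GDV, anything that raises GDV or cuts costs flows directly into the residual land value, which is exactly the sensitivity to "value drivers" the paragraph above describes.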
While conversion of (rural) land with a vegetation carpet to building land may result in a rise in economic growth and rising land prices , the irreversible loss of flora and fauna through habitat destruction , the loss of ecosystem services and the resulting decline in environmental value are considered in advance only in environmental full-cost accounting . Conversion to building land is, as a rule, associated with road building , which in itself brings topsoil abrasion, [ 3 ] soil compaction [ 4 ] and modification of the soil's chemical composition through soil stabilization , creation of impervious surfaces and, subsequently, (polluted) surface runoff water. Construction activity often effectively seals off a larger part of the soil from rainfall and the nutrient cycle , so that the soil below buildings and roads is effectively "consumed" and made infertile . With the notable exception of attempts at rooftop gardening and hanging gardens in green buildings (possibly as constituents of green urbanism ), vegetative cover of higher plants is lost to concrete and asphalt surfaces, complementary interspersed garden and park areas notwithstanding. [ citation needed ] New creation of farmland (or "agricultural land conversion") relies on the conversion and development of previous forests , savannas or grassland . Recreation of farmland from wasteland , deserts or previous impervious surfaces is considerably less frequent because of the degraded or missing fertile soil in the latter. Starting from forests, land is made arable by assarting or slash-and-burn . Agricultural development furthermore includes a number of related practices. Because the newly created farmland is more prone to erosion than soil stabilized by tree roots , such a conversion may mean the irreversible crossing of an ecological threshold . The resulting deforestation is also not easily compensated for by reforestation or afforestation , because plantations of other trees used for water conservation and protection against wind erosion ( shelterbelts ), as a rule, lack the biodiversity of the lost forest, especially when realized as monocultures . [ 5 ] [ 6 ] [ 7 ] [ 8 ] These deforestation consequences may have lasting effects on the environment; soil stabilization and erosion control measures may not preserve topsoil as effectively as the previously intact vegetation . Massive land conversion without proper consideration of ecological and geological consequences may lead to disastrous results. While deleterious effects can be particularly visible when land is developed for industrial or mining usage, agro-industrial and settlement use can also have a massive and sometimes irreversible impact on the affected ecosystem. [ 9 ] Examples of land restoration / land rehabilitation counted as land development in the strict sense are still rare. However, renaturation , reforestation and stream restoration may all contribute to a healthier environment and quality of life, especially in densely populated regions. The same is true for planned vegetation like parks and gardens , but restoration plays a particular role, because it reverses previous conversions to built and agricultural areas. The environmental impact of land use and development is a substantial consideration for land development projects. On the local level, an environmental impact report (EIR) may be necessary. [ definition needed ] In the United States, federally funded projects typically require preparation of an environmental impact statement (EIS).
The concerns of private citizens or political action committees (PACs) [ further explanation needed ] can influence the scope of, or even cancel, a project based on concerns like the loss of an endangered species' habitat. [ citation needed ] In most cases, the land development project will be allowed to proceed if mitigation requirements are met. [ citation needed ] Mitigation banking is the most prevalent example, and requires that habitat be replaced at a greater rate than it is removed. This increase in total area helps to establish the new ecosystem, though it will require time to reach maturity. [ citation needed ] The extent and type of land use directly affect wildlife habitat and thereby impact local and global biodiversity . [ 10 ] Human alteration of landscapes from natural vegetation (e.g. wilderness ) to any other use can result in habitat loss , degradation , and fragmentation , all of which can have devastating effects on biodiversity. [ 11 ] Land conversion is the single greatest cause of extinction of terrestrial species . [ 12 ] One example of land conversion as a chief cause of a carnivore's critically endangered status is the reduction in habitat for the African wild dog , Lycaon pictus . [ 13 ] Deforestation is another cause of natural habitat loss, with large numbers of trees being cut down for residential and commercial use. Urban growth has become a problem for forests and agriculture, as the expansion of built structures prevents the land from producing natural resources. [ 14 ] To prevent the loss of wildlife, forests must maintain a stable climate and the land must remain unaffected by development. [ citation needed ] Furthermore, forests can be sustained by forest management techniques such as reforestation and preservation. Reforestation is a reactive approach that replants previously logged trees within the forest boundary in an attempt to re-stabilize the ecosystem. Preservation , on the other hand, is a proactive approach that promotes leaving the forest untouched rather than using the area for its ecosystem goods and services. [ 15 ] Both of these methods of mitigating deforestation are used throughout the world. [ citation needed ] The U.S. Forest Service predicts that urban and developed land in the U.S. will expand by 41 percent by 2060. [ 16 ] These conditions displace wildlife and limit the resources the environment needs to maintain a sustainable balance. [ 17 ]
https://en.wikipedia.org/wiki/Land_development
Drainage is the natural or artificial removal of a surface's water and sub-surface water from an area with excess water. The internal drainage of most agricultural soils can prevent severe waterlogging (anaerobic conditions that harm root growth), but many soils need artificial drainage to improve production or to manage water supplies. The Indus Valley Civilization had sewerage and drainage systems. All houses in the major cities of Harappa and Mohenjo-daro had access to water and drainage facilities. Waste water was directed to covered gravity sewers , which lined the major streets. [ 1 ] The invention of hollow-pipe drainage is credited to Sir Hugh Dalrymple, who died in 1753. [ 2 ] Simple infrastructure such as open drains, pipes, and berms are still common. In modern times, more complex structures involving substantial earthworks and new technologies have become common as well. New storm water drainage systems incorporate geotextile filters that retain fine grains of soil, preventing them from passing into and clogging the drain. Geotextiles are synthetic textile fabrics specially manufactured for civil and environmental engineering applications, designed to retain fine soil particles while allowing water to pass through. In a typical drainage system, they would be laid along a trench which would then be filled with coarse granular material : gravel , sea shells , stone or rock . The geotextile is then folded over the top of the stone and the trench is covered by soil. Groundwater seeps through the geotextile and flows through the stone to an outfall. In high groundwater conditions a perforated plastic ( PVC or PE ) pipe is laid along the base of the drain to increase the volume of water transported in the drain. Alternatively, a prefabricated plastic drainage system made of HDPE , often incorporating geotextile, coco fiber or rag filters, can be considered. The use of these materials has become increasingly common, since they eliminate the need for transporting and laying stone drainage aggregate and concrete liners, which are invariably more expensive than a synthetic drain. Over the past 30 years, geotextile, PVC filters, and HDPE filters have become the most commonly used soil filter media. They are cheap to produce and easy to lay, with factory-controlled properties that ensure long-term filtration performance even in fine silty soil conditions. Seattle's Public Utilities created a pilot program called the Street Edge Alternatives Project. The project focuses on designing a system "to provide drainage that more closely mimics the natural landscape prior to development than traditional piped systems". [ 3 ] The streets are characterized by ditches along the side of the roadway, with plantings designed throughout the area. An emphasis on non-curbed sidewalks allows water to flow more freely into the areas of permeable surface on the side of the streets. Because of the plantings, the runoff water from the urban area does not all go directly into the ground, but can also be absorbed into the surrounding environment. Monitoring conducted by Seattle Public Utilities reports a 99 percent reduction in storm water leaving the drainage project. [ 3 ] Drainage has undergone a large-scale environmental review in the recent past [ when? ] in the United Kingdom. Sustainable urban drainage systems (SUDS) are designed to encourage contractors to install drainage systems that more closely mimic the natural flow of water in nature.
Since 2010, local and neighbourhood planning in the UK has been required by law to factor SUDS into any development projects that they are responsible for. Slot drainage is a channel drainage system designed to eliminate the need for further pipework systems to be installed in parallel to the drainage, reducing the environmental impact of production as well as improving water collection. Stainless steel , concrete channel, PVC and HDPE are all materials available for slot drainage, and have become industry standards on construction projects. The civil engineer is responsible for drainage in construction projects. During the construction process, they set out all the necessary levels for roads , street gutters , drainage, culverts and sewers involved in construction operations. Civil engineers and construction managers work alongside architects and supervisors, planners, quantity surveyors , and the general workforce, as well as subcontractors. Typically, most jurisdictions have some body of drainage law to govern to what degree a landowner can alter the drainage from their parcel. The construction industry has several drainage options. The surface opening of channel drainage usually comes in the form of gratings (polymer, plastic, steel or iron) or a single slot (slot drain) that runs along the ground surface (typically manufactured from steel or iron). Earth retaining structures such as retaining walls also need to have groundwater drainage considered during their construction. Typical retaining walls are constructed of impermeable material, which can block the path of groundwater. When groundwater flow is obstructed, hydrostatic water pressure builds up against the wall and may cause significant damage. If the water pressure is not drained appropriately, retaining walls can bow, move, and fracture, causing seams to separate. The water pressure can also erode soil particles, leading to voids behind the wall and sinkholes in the above soil. Traditional retaining wall drainage systems can include French drains , drain pipes or weep holes . To prevent soil erosion, geotextile filter fabrics are installed with the drainage system. Drainage in planters refers to the implementation of effective drainage systems specifically designed for plant containers or pots. Proper drainage is crucial in planters to prevent waterlogging and promote healthy plant growth. Planter drainage involves the incorporation of drainage holes, drainage layers, or specialized drainage systems to ensure excess water can escape from the planter. This helps to prevent root rot , water accumulation, and other issues that can negatively impact plant health. Adequate drainage in planters supports optimal plant growth and contributes to the overall success of gardening or landscaping projects. [ 4 ] Several drainage options exist for planters. Wetland soils may need drainage to be used for agriculture . In the northern United States and Europe, glaciation created numerous small lakes , which gradually filled with humus to make marshes . Some of these were drained using open ditches and trenches to make mucklands , which are primarily used for high-value crops such as vegetables . The world's largest project of this type has been in process for centuries in the Netherlands . The area between Amsterdam , Haarlem and Leiden was, in prehistoric times, swampland and small lakes.
Turf cutting ( peat mining ), subsidence and shoreline erosion gradually caused the formation of one large lake, the Haarlemmermeer , or lake of Haarlem. The invention of wind-powered pumping engines in the 15th century permitted drainage of some of the marginal land, but the final drainage of the lake had to await the design of large steam-powered pumps and agreements between regional authorities. The lake was eliminated between 1849 and 1852, creating a large area of new land. Coastal plains and river deltas may have seasonally or permanently high water tables and must have drainage improvements if they are to be used for agriculture. An example is the flatwoods citrus -growing region of Florida , United States. After periods of high rainfall, drainage pumps are employed to prevent damage to the citrus groves from overly wet soils. Rice production requires complete water control, as fields must be flooded or drained at different stages of the crop cycle. The Netherlands has also led the way in this type of drainage, draining lowlands along the shore and pushing back the sea until the original nation had been greatly enlarged. In moist climates, soils may be adequate for cropping except that they become waterlogged for brief periods each year, from snow melt or from heavy rains . Soils that are predominantly clay pass water very slowly downward, and plant roots suffocate because the excessive water around the roots eliminates air movement through the soil. Other soils may have an impervious layer of mineralized soil, called a hardpan , or relatively impervious rock layers may underlie shallow soils. Drainage is especially important in tree fruit production. Soils that are otherwise excellent may be waterlogged for a week of the year, which is sufficient to kill fruit trees and cost the productivity of the land until replacements can be established. In each of these cases, appropriate drainage carries off temporary flushes of water to prevent damage to annual or perennial crops. Drier areas are often farmed by irrigation , and one would not consider drainage necessary. However, irrigation water always contains minerals and salts , which can be concentrated to toxic levels by evapotranspiration . Irrigated land may therefore need periodic flushes with excess irrigation water, and drainage to control soil salinity ; otherwise salts can accumulate in the soil to levels that damage crops.
https://en.wikipedia.org/wiki/Land_drainage
The purpose of a land drain is to allow water in wet or swampy ground to drain away rapidly [ 1 ] or to relieve hydrostatic pressure . Land drains are subterranean linear structures laid to a fall, which should be as steep as practicable. They are used in agriculture and on building construction sites. Modern land drains take the form of a perforated or discontinuous (i.e. open-jointed) pipe. Typically, the land drains conduct the surplus water to an open ditch or natural water source. Traditionally, land drains were formed in clay soils and peats by excavating a trench and forming a "tunnel" using flat stones. This was very labour-intensive but could often be done using free materials at hand. Typically they were two to three feet (600–900 mm) below the surface. Agricultural land drains have to be installed sufficiently deep to avoid plough damage. In 1843, short earthenware pipes laid edge to edge were first used in England. The earliest type consisted of a U-shaped trough onto which a flat lid was placed. Later, the extruded clay pipe was developed. These are still used; they can be laid in an excavated trench, or a horizontal hole is formed in the ground using a mole plough and the pipes are forced in by means of a hand or mechanical press. By this means, heavy wet soils, bogs and swamps could be rendered amenable to agriculture. Virtually all crops need a well-drained soil to grow well. Many modern land drains are created using rigid or flexible plastic pipes pierced with holes, laid in pea gravel (pea-sized pebbles without sharp points that could damage the pipe). Geotextile material can surround the gravel to keep out silt. This can be installed in an excavated trench. Specialised mole ploughs are available that can form the hole and insert the perforated pipe (and gravel if required) in one simultaneous and continuous process. An extremely powerful (usually tracked) tractor is necessary. The flexible pipe is carried as a roll on the back of the machine. There is sometimes a hopper for gravel, which is kept topped up by an adjacent machine. The pipe and gravel go down apertures in the plough blade as the tractor proceeds along the desired route. The purpose of land drains in building construction is somewhat different. If voids are created in the ground for any reason, they tend to fill with water. Also, the static loads on any subterranean structure and retaining walls can be massively increased by the presence of water in the surrounding ground. Land drains are introduced to relieve this pressure. Traditionally, the drains were created by backfilling behind retaining walls etc. with rubble and allowing the water to drain through the rubble to some suitable point. Instead of having open ditches at the side of highways, land drains can be installed. The excavated trenches are completely filled with gravel (i.e., no soil cover), which is far safer than open trenches if a vehicle should run off the highway. Holes or gaps have to be left in the pipes to allow water to transfer from the subsoil to the pipe, and these tend to block with soil or allow silt into the pipe, blocking it or reducing the flow of water. This can be partially overcome by surrounding the pipes with gravel. However, with time even the gravel becomes choked with soil and silt, so in the latest practice the gravel is surrounded with a geotextile material which filters out soil particles. Ideally, land drains are laid with access points so that high-pressure water jetting is possible to clear silt.
However, whatever the technology, all land drains have a finite life and eventually become ineffective due to the ingress of silt and/or the blocking of the surrounding filter media.
https://en.wikipedia.org/wiki/Land_drains
A land ethic is a philosophy or theoretical framework about how, ethically, humans should regard the land. The term was coined by Aldo Leopold (1887–1948) in his A Sand County Almanac (1949), a classic text of the environmental movement. There he argues that there is a critical need for a "new ethic", an "ethic dealing with man's relation to land and to the animals and plants which grow upon it". [ 1 ] Leopold offers an ecologically based land ethic that rejects strictly human-centered views of the environment and focuses on the preservation of healthy, self-renewing ecosystems. A Sand County Almanac was the first systematic presentation of a holistic or ecocentric approach to the environment. [ 2 ] Although Leopold is credited with coining the term "land ethic", there are many philosophical theories that speak to how humans should treat the land. Some of the most prominent land ethics include those rooted in economics, utilitarianism , libertarianism , egalitarianism , and ecology. An economics-based land ethic is grounded wholly in economic self-interest. [ 3 ] Leopold sees two flaws in this type of ethic. First, he argues that most members of an ecosystem have no economic worth. For this reason, such an ethic can ignore or even eliminate these members when they are actually necessary for the health of the biotic community of the land. Second, it tends to relegate the conservation necessary for healthy ecosystems to the government, and these tasks are too large and dispersed to be adequately addressed by such an institution. This ties directly into the context within which Leopold wrote A Sand County Almanac. For example, when the US Forest Service was founded by Gifford Pinchot , the prevailing ethos was economic and utilitarian . Leopold argued for an ecological approach, becoming one of the first to popularize this term coined by Henry Chandler Cowles of the University of Chicago during his early-1900s research at the Indiana Dunes . Conservation became the preferred term for the more anthropocentric model of resource management , while the writing of Leopold and his inspiration, John Muir , led to the development of environmentalism . [ 4 ] Utilitarianism was most prominently defended by the British philosophers Jeremy Bentham and John Stuart Mill . Though there are many varieties of utilitarianism, generally it is the view that a morally right action is one that produces the maximum good for people. [ 5 ] Utilitarianism has often been used when deciding how to use land, and it is closely connected with an economics-based ethic. For example, it forms the foundation for industrial farming: an increase in yield, which would increase the number of people able to receive goods from farmed land, is judged from this view to be a good action or approach. In fact, a common argument in favor of industrial agriculture is that it is a good practice because it increases the benefits for humans, benefits such as food abundance and a drop in food prices. However, a utilitarian-based land ethic is different from a purely economic one, as it could be used to justify limiting a person's rights to make a profit. For example, in the case of a farmer planting crops on a slope, if the runoff of soil into the community creek led to damage to several neighbors' properties, then the good of the individual farmer would be overridden by the damage caused to his neighbors. Thus, while a utilitarian-based land ethic can be used to support economic activity, it can also be used to challenge this activity.
Another philosophical approach often used to guide actions when making (or not making) changes to the land is libertarianism . Roughly, libertarianism is the ethical view that agents own themselves and have particular moral rights, including the right to acquire property. [ 6 ] In a looser sense, libertarianism is commonly identified with the belief that each individual person has a right to a maximum amount of freedom or liberty when this freedom does not interfere with other people's freedom. A well-known libertarian theorist is John Hospers . For right-libertarians, property rights are natural rights. Thus, it would be acceptable for the above farmer to plant on a slope as long as this action does not limit the freedom of his or her neighbors. This view is closely connected to utilitarianism, and libertarians often use utilitarian arguments to support their own. For example, in 1968, Garrett Hardin applied this philosophy to land issues when he argued that the only solution to the " Tragedy of the Commons " was to place soil and water resources into the hands of private citizens. [ 7 ] Hardin supplied utilitarian justifications to support his argument. However, it can be argued that this leaves libertarian-based land ethics open to the critique lodged above against economics-based approaches. Even setting this aside, the libertarian view has been challenged by the critique that numerous people making self-interested decisions often cause large ecological disasters, such as the Dust Bowl . [ 8 ] Even so, libertarianism is a philosophical view commonly held within the United States and, especially, held by U.S. ranchers and farmers. [ dubious – discuss ] Egalitarian -based land ethics are often developed as a response to libertarianism, because, while libertarianism ensures the maximum amount of human liberty, it does not require that people help others. It also leads to the uneven distribution of wealth. A well-known egalitarian philosopher is John Rawls . When focusing on land use, egalitarianism evaluates its uneven distribution and the uneven distribution of the fruits of that land. [ 8 ] While both a utilitarian- and a libertarian-based land ethic could conceivably rationalize this maldistribution, an egalitarian approach typically favors equality, whether that be an equal entitlement to land or access to food. [ 9 ] However, there is also the question of positive rights when holding to an egalitarian-based ethic. In other words, if it is recognized that a person has a right to something, then someone, whether an individual person or the government, has the responsibility to supply this opportunity or item. Thus, an egalitarian-based land ethic could provide a strong argument for the preservation of soil fertility and water, because it links land and water with the right to food, the growth of human populations, and the decline of soil and water resources. [ 8 ] Land ethics may also be based upon the principle that the land (and the organisms that live off the land) has intrinsic value. These ethics are, roughly, based on an ecological or systems view . This position was first put forth by Ayers Brinser in Our Use of the Land , published in 1939.
Brinser argued that white settlers brought with them "the seeds of a civilization which has grown by consuming the land, that is, a civilization which has used up the land in much the same way that a furnace burns coal." [ citation needed ] Later, Aldo Leopold 's posthumously published A Sand County Almanac (1949) popularized this idea. Another example is the deep ecology view, which argues that human communities are built upon a foundation of the surrounding ecosystems, or the biotic communities, and that all life is of inherent worth. [ 10 ] Similar to egalitarian-based land ethics, the above land ethics were also developed as alternatives to utilitarian- and libertarian-based approaches. Leopold's ethic is one of the most popular ecological approaches in the early 21st century. Other writers and theorists who hold this view include Wendell Berry (b. 1934), N. Scott Momaday , J. Baird Callicott , Paul B. Thompson , and Barbara Kingsolver . In his classic essay "The Land Ethic," published posthumously in A Sand County Almanac (1949), Leopold proposes that the next step in the evolution of ethics is the expansion of ethics to include nonhuman members of the biotic community , collectively referred to as "the land." [ 11 ] Leopold states the basic principle of his land ethic as: "A thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community. It is wrong when it tends otherwise." [ 12 ] He also describes it in this way: "The land ethic simply enlarges the boundaries of the community to include soils, waters, plants, and animals, or collectively: the land . . . [A] land ethic changes the role of Homo sapiens from conqueror of the land community to plain member and citizen of it. It implies respect for his fellow-members, and also respect for the community as such." [ 11 ] Leopold was a naturalist, not a philosopher, and there is much scholarly debate about what exactly his land ethic asserts and how he argues for it. At its core, the land ethic claims (1) that humans should view themselves as plain members and citizens of biotic communities, not as "conquerors" of the land; (2) that we should extend ethical consideration to ecological wholes ("soils, waters, plants, and animals"); (3) that our primary ethical concern should be not with individual plants or animals, but with the healthy functioning of whole biotic communities; and (4) that the "summary moral maxim" of ecological ethics is that we should seek to preserve the integrity, stability, and beauty of the biotic community. Beyond this, scholars disagree about the extent to which Leopold rejected traditional human-centered approaches to the environment and how literally he intended his basic moral maxim to be applied. They also debate whether Leopold based his land ethic primarily on human-centered interests, as many passages in A Sand County Almanac suggest, or whether he placed significant weight on the intrinsic value of nature. One prominent student of Leopold, J. Baird Callicott , has suggested that Leopold grounded his land ethic on various scientific claims, including a Darwinian view of ethics as rooted in special affections for kith and kin, a Copernican view of humans as plain members of nature and the cosmos, and the finding of modern ecology that ecosystems are complex, interrelated wholes. [ 13 ]
[ 13 ] However, this interpretation has recently been challenged by Roberta Millstein, who has offered evidence that Darwin's influence on Leopold was related not to Darwin's views about moral sentiments but to Darwin's views about interdependence in the struggle for existence. [ 14 ] Leopold's ecocentric land ethic is popular with mainstream environmentalists today for a number of reasons. Unlike more radical environmental approaches, such as deep ecology or biocentrism, it does not require huge sacrifices of human interests. Leopold does not, for example, believe that humans should stop eating, hunting, or experimenting on animals. Nor does he call for a massive reduction in the human population, or for permitting humans to interfere with nature only to satisfy vital human needs (regardless of economic or other human costs). As an environmental ethic, Leopold's land ethic is a comparatively moderate view that seeks to strike a balance between human interests and a healthy and biotically diverse natural environment. Many of the things mainstream environmentalists favor (a preference for native plants and animals over invasive species, hunting or selective culling to control overpopulated species that damage the environment, and a focus on preserving healthy, self-regenerating natural ecosystems both for human benefit and for their own intrinsic value) jibe with Leopold's ecocentric land ethic. A related understanding frames global land as a commons. In this view, biodiversity and terrestrial carbon storage (an element of climate change mitigation) are global public goods; hence, land should be governed on a global scale as a commons, requiring increased international cooperation on nature preservation. [ 15 ] Some critics fault Leopold for a lack of clarity in spelling out exactly what the land ethic is and what its specific implications are for how humans should think about the environment. [ 16 ] It is clear that Leopold did not intend his basic normative principle ("A thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community") to be regarded as an ethical absolute. Thus construed, it would prohibit clearing land to build homes, schools, or farms, and would generally require a "hands-off" approach to nature that Leopold plainly did not favor. Presumably, therefore, his maxim should be seen as a general guideline for valuing natural ecosystems and striving to achieve what he terms a sustainable state of "harmony between men and land." But this is vague and, according to some critics, not terribly helpful. A second common criticism of Leopold is that he fails to state clearly why we should adopt the land ethic. [ 17 ] He often cites examples of environmental damage (e.g., soil erosion, pollution, and deforestation) that result from traditional human-centered, "conqueror" attitudes towards nature. But it is unclear why such examples support the land ethic specifically, as opposed to biocentrism or some other nature-friendly environmental ethic. Leopold also frequently appeals to modern ecology, evolutionary theory, and other scientific discoveries to support his land ethic. Some critics have suggested that such appeals may involve an illicit move from facts to values. [ 17 ] At a minimum, such critics claim, more should be said about the normative basis of Leopold's land ethic. Other critics object to Leopold's ecological holism.
According to the animal rights advocate Tom Regan, Leopold's land ethic condones sacrificing the good of individual animals to the good of the whole, and is thus a form of "environmental fascism." [ 18 ] According to these critics, we rightly reject such holistic approaches in human affairs. Why, they ask, should we adopt them in our treatment of non-human animals? Finally, some critics have questioned whether Leopold's land ethic might require unacceptable interferences with nature in order to protect current, but transient, ecological balances. [ 19 ] If the fundamental environmental imperative is to preserve the integrity and stability of natural ecosystems, wouldn't this require frequent and costly human interventions to prevent naturally occurring changes to natural environments? In nature, the "stability and integrity" of ecosystems are disrupted or destroyed all the time by drought, fire, storms, pests, newly invasive predators, and the like. Must humans act to prevent such ecological changes, and if so, at what cost? Why should we place such high value on current ecological balances? Why think it is our role to be nature's steward or policeman? According to these critics, Leopold's stress on preserving existing ecological balances is overly human-centered and fails to treat nature with the respect it deserves.
https://en.wikipedia.org/wiki/Land_ethic
Land navigation is the discipline of following a route through unfamiliar terrain on foot or by vehicle, using maps with reference to terrain, a compass, and other navigational tools. [ 1 ] It is distinguished from the travel of traditional groups, such as the Tuareg [ 2 ] across the Sahara and the Inuit [ 3 ] across the Arctic, who use subtle cues to travel across familiar yet minimally differentiated terrain. Land navigation is a core military discipline, and land navigation courses or routes are an essential part of military training. Often, these courses are several miles long in rough terrain and are performed under adverse conditions, such as at night or in the rain. [ 4 ] In the late 19th century, land navigation developed into the sport of orienteering. [ 5 ] The earliest use of the term 'orienteering' appears to be in 1886. Nordic military garrisons began holding orienteering competitions in 1895. [ 6 ] In the United States military, land navigation courses are required for the Marine Corps [ 7 ] and the Army. [ 8 ] Air Force escape and evasion training includes aspects of land navigation. Army Training Circular 3-25.26 is devoted to land navigation. [ 8 ] [ 9 ]
https://en.wikipedia.org/wiki/Land_navigation
Land reclamation, often known simply as reclamation, and also known as land fill (not to be confused with a waste landfill), is the process of creating new land from oceans, seas, riverbeds or lake beds. The land reclaimed is known as reclamation ground, reclaimed land, or land fill. In ancient Egypt, the rulers of the Twelfth Dynasty (c. 2000–1800 BC) undertook a far-sighted land reclamation scheme to increase agricultural output. They constructed levees and canals to connect the Faiyum with the Bahr Yussef waterway, diverting water that would have flowed into Lake Moeris and causing gradual evaporation around the lake's edges, creating new farmland from the reclaimed land. A similar land reclamation system using dams and drainage canals was used in the Greek Copaic Basin during the Middle Helladic Period (c. 1900–1600 BC). [ 1 ] Another early large-scale project was the Beemster Polder in the Netherlands, adding 70 square kilometres (27 sq mi) of land in 1612. In Hong Kong, the Praya Reclamation Scheme added 20 to 24 hectares (50 to 60 acres) of land in 1890 during the second phase of construction. It was one of the most ambitious projects undertaken during the era of colonial Hong Kong. [ 2 ] Some 20% of land in the Tokyo Bay area has been reclaimed, [ 3 ] most notably Odaiba artificial island. The city of Rio de Janeiro was largely built on reclaimed land, as was Wellington, New Zealand. Land reclamation can be achieved by a number of different methods. The simplest method involves filling the area with large amounts of heavy rock and/or cement, then filling with clay and dirt until the desired height is reached. This process is called "infilling" [ 4 ] and the material used to fill the space is generally called "infill". [ 5 ] [ 6 ] Draining of submerged wetlands is often used to reclaim land for agricultural use. Deep cement mixing is typically used in situations in which the material displaced by either dredging or draining may be contaminated and hence needs to be contained. Dredging, the removal of sediments and debris from the bottom of a body of water, is another method of land reclamation; it is commonly used for maintaining reclaimed land masses, as sedimentation, a natural process, fills channels and harbors. [ 7 ] Agriculture was a driver of land reclamation before industrialisation. [ 27 ] In South China, farmers reclaimed paddy fields by enclosing an area with a stone wall on the sea shore near a river mouth or river delta. The species of rice grown on these grounds are more salt tolerant. Another use of such enclosed land is the creation of fish ponds, commonly seen on the Pearl River Delta and in Hong Kong. These reclaimed areas also attract species of migrating birds. A related practice is the draining of swampy or seasonally submerged wetlands to convert them to farmland. While this does not create new land exactly, it allows commercially productive use of land that would otherwise be restricted to wildlife habitat. It is also an important method of mosquito control.
Even in the post-industrial age, there have been land reclamation projects intended to increase available agricultural land. For example, the village of Ogata in Akita, Japan, was established on land reclaimed from Lake Hachirōgata (Japan's second largest lake at the time) starting in 1957. By 1977, the amount of land reclaimed totalled 172.03 square kilometres (66.42 sq mi). [ 28 ] Artificial islands are an example of land reclamation. Creating an artificial island is an expensive and risky undertaking. It is often considered in places with high population density and a scarcity of flat land. Kansai International Airport (in Osaka) and Hong Kong International Airport are examples where this process was deemed necessary. The Palm Islands, The World and the hotel Burj al-Arab off Dubai in the United Arab Emirates are other examples of artificial islands (although there is as yet no real "scarcity of land" in Dubai), as is the Flevopolder in the Netherlands, the largest artificial island in the world. Beach rebuilding is the process of repairing beaches using materials such as sand or mud from inland. It can be used to build up beaches suffering from beach starvation or erosion from longshore drift. It stops the movement of the original beach material through longshore drift and retains a natural look to the beach. Although it is not a long-lasting solution, it is cheap compared to other types of coastal defences. An example of this is the city of Mumbai. [ 10 ] As human overcrowding of developed areas intensified during the 20th century, it became important to develop land re-use strategies for completed landfills. Some of the most common uses are for parks, golf courses and other sports fields. Increasingly, however, office buildings and industrial facilities are built on completed landfills. In these latter uses, methane capture is customarily carried out to minimize the explosive hazard within the building. An example of a Class A office building constructed over a landfill is the Dakin Building at Sierra Point, Brisbane, California. The underlying fill was deposited from 1965 to 1985, mostly consisting of construction debris from San Francisco and some municipal wastes. Aerial photographs taken before 1965 show this area to be tidelands of the San Francisco Bay. A clay cap was constructed over the debris prior to building approval. [ 29 ] A notable example is Sydney Olympic Park, the primary venue for the 2000 Summer Olympic Games, which was built atop an industrial wasteland that included landfills. Another strategy for landfill is the incineration of landfill trash at high temperature via the plasma-arc gasification process, which is currently used at two facilities in Japan and was proposed for a facility in St. Lucie County, Florida. [ 30 ] The planned facility in Florida was later canceled. [ 31 ] Draining wetlands for ploughing, for example, is a form of habitat destruction. In some parts of the world, new reclamation projects are restricted or no longer allowed, due to environmental protection laws. Reclamation projects have strong negative impacts on coastal populations, although some species can take advantage of the newly created area.
[ 32 ] A 2022 global analysis estimated that 39% of losses (approximately 5,300 km² or 2,000 sq mi) and 14% of gains (approximately 1,300 km² or 500 sq mi) of tidal wetlands (mangroves, tidal flats, and tidal marshes) between 1999 and 2019 were due to direct human activities, including conversion to aquaculture, agriculture, plantations, coastal developments and other physical structures. [ 33 ] The State of California created a state commission, the San Francisco Bay Conservation and Development Commission, in 1965 to protect San Francisco Bay and regulate development near its shores. The commission was created in response to growing concern over the shrinking size of the bay. Hong Kong legislators passed the Protection of the Harbour Ordinance, proposed by the Society for Protection of the Harbour, in 1997 in an effort to safeguard the increasingly threatened Victoria Harbour against encroaching land development. [ 34 ] Several large reclamation schemes at Green Island, West Kowloon, and Kowloon Bay were subsequently shelved, and others reduced in size. Reclaimed land is highly susceptible to soil liquefaction during earthquakes, [ 35 ] which can amplify the amount of damage that occurs to buildings and infrastructure. Subsidence is another issue, both from soil compaction on filled land and where wetlands are enclosed by levees and drained to create polders; drained marshes will eventually sink below the surrounding water level, increasing the danger from flooding. In Hong Kong, 67 km² (26 sq mi) of land had been reclaimed up to 2013. The Praya Reclamation Scheme began in the late 1860s and consisted of two stages totaling 20 to 24 hectares (50 to 60 acres). [ 2 ] Hong Kong Disneyland, Hong Kong International Airport, and its predecessor, Kai Tak Airport, were all built on reclaimed land. In addition, much reclamation has taken place in prime locations on the waterfront on both sides of Victoria Harbour. This has raised environmental issues concerning the protection of the harbour, once the source of Hong Kong's prosperity, traffic congestion in the Central District, [ 38 ] as well as the collusion of the Hong Kong Government with the territory's real estate developers. [ 39 ] [ 40 ] In addition, as the city expanded, new towns in different decades were mostly built on reclaimed land, such as Kwun Tong, Sha Tin - Ma On Shan, Tai Po, Tseung Kwan O, Tuen Mun, and West Kowloon. Singapore has reclaimed land amounting to 20 percent of its original size, or 135 km² (52 sq mi). As of 2003, plans for 99 km² (38 sq mi) more are to go ahead, [ 47 ] even though disputes persist with Malaysia over Singapore's extensive land reclamation works. [ 48 ] Parts of Changi Airport are also on reclaimed land. Dubai has a total of four reclaimed islands (the Palm Jumeirah, Jebel Ali, the Burj al-Arab island, and The World Islands), with a fifth under construction (the Palm Deira). There are several human-made islands in Abu Dhabi, such as Yas Island and Al Lulu Island. In Monaco, 0.41 km² (0.16 sq mi) out of 2.05 km² (0.79 sq mi), or one fifth of the country, comes from land taken from the sea, mainly in the neighbourhoods of Fontvieille, La Condamine, and Larvotto/Bas Moulins. In the Netherlands, about one sixth (almost 17%) of the entire country, or about 7,000 km² (2,700 sq mi) in total, has been reclaimed from the sea, lakes, marshes and swamps. The province of Flevoland has almost completely been reclaimed from the Zuiderzee.
https://en.wikipedia.org/wiki/Land_reclamation
Land rehabilitation, as a part of environmental remediation, is the process of returning the land in a given area to some degree of its former state after some process (industry, natural disasters, etc.) has resulted in its damage. Many projects and developments, for example mining, farming and forestry, result in the land becoming degraded. It is crucial that governments and businesses act proactively by working on improvement, laying out rehabilitation standards, and ensuring that decisions on remediation are based on value judgments aimed at greater sustainability in the future. [ 1 ] In some jurisdictions, including parts of the United States, [ 2 ] the term "reclamation" can refer to land rehabilitation, in the sense of returning disturbed lands to an improved state, rather than to the filling-in of water bodies. In Alberta, Canada, for example, reclamation is defined by the provincial government as "The process of reconverting disturbed land to its former or other productive uses." [ 3 ] Modern mine rehabilitation aims to minimize and mitigate the environmental effects of modern mining, which in the case of open pit mining may involve the movement of significant volumes of rock. Rehabilitation management is an ongoing process, often resulting in open pit mines being backfilled. After mining finishes, the mine area must undergo rehabilitation. For underground mines, rehabilitation is not always a significant problem or cost, because of the higher grade of the ore and the lower volumes of waste rock and tailings. In some situations, stopes are backfilled with concrete slurry using waste, so that minimal waste is left at the surface. The removal of plant and infrastructure is not always part of a rehabilitation programme, as many old mine plants have cultural heritage and cultural value. Often in gold mines, rehabilitation is performed by scavenger operations which treat the soil within the plant area for spilled gold, using modified placer mining gravity collection plants. It is also possible to keep the below-ground section of the mine in use to provide heating, water and/or methane. Heat can be extracted using heat exchangers that convey it to a nearby city for district heating. [ 4 ] Water can be harvested from the mine as well (mines often fill with water once they have been shut down and the pumps no longer operate). Methane is also often present in mine shafts in small quantities (often around 0.1%), and can be recovered with specialised systems. [ 5 ] [ 6 ] [ 7 ] An added advantage of recovering the methane is that it does not enter the atmosphere and so does not contribute to global warming. As research methods continue to expand, future studies should focus on the correlation that can be observed between biodiversity, mine ecological restoration and carbon sequestration. [ 8 ] Depending on the country, mining companies are regulated by federal and state bodies, which require them to rehabilitate the affected land and to restore biodiversity offset areas around the mines. [ 9 ] [ 10 ] Mine rehabilitation, a legal obligation for mining companies in Australia for which they are required to pay bonds, could be a source of considerable employment generation and economic investment in regional areas if governments were willing to enforce the laws covering the process. [ 11 ] [ 12 ] [ 13 ] [ 14 ] Before mining activities begin, a rehabilitation security bond must be provided.
[ 15 ] Australian mine rehabilitation bonds total $9.49 billion, with the New South Wales bonds totalling $2.68 billion in 2019. The size of mining security bonds has been questioned by NSW's Auditor-General [ 16 ] as being insufficient to cover the complete costs associated with mine rehabilitation activities. In addition to operational mine rehabilitation activities, often termed 'progressive rehabilitation', abandoned mines are also restored. The financing for restoring abandoned mines is drawn from operating mines and public sources. The cost of reclaiming the abandoned mines in the US is estimated at $9.6 billion.
https://en.wikipedia.org/wiki/Land_rehabilitation
Land restoration, which may include renaturalisation or rewilding, is the process of restoring land to a different or previous state with an intended purpose. That purpose can be any of a variety of things, such as: making the land safe for humans, plants, and animals; stabilizing ecological communities; cleaning up pollution; creating novel ecosystems; [ 1 ] or restoring the land to a historical condition, for example how indigenous people managed the land. [ 2 ] Ecological destruction or degradation, to which land restoration serves as an antidote, is usually the consequence, intended or unintended, of human influence. This can include pollution, deforestation, salination, or species endangerment, among many more. Land restoration is not the same as land reclamation, where existing ecosystems are altered or destroyed to give way to cultivation or construction. Land restoration can enhance the supply of valuable ecosystem services that benefit people. In order to increase the chances of successful landscape restoration, several key parameters need to be determined. A shared understanding of the definition of restoration should be established for the project. As there can be many different motivations for landscape restoration – influenced by personal or environmental ethics, opinions, priorities, available data, economics, etc. – the definition of the term can mean different things to different people and has changed over time. [ 3 ] Additionally, in order to monitor the success of a restoration project, a reference model or reference ecosystem should be selected so that comparisons can be made, and proper surveys of existing conditions should take place. Furthermore, design considerations like restoration methods, contingency plans, monitoring, maintenance, permits, resources, budget, and timeline need to be known and will influence landscape restoration capabilities. [ 3 ] Adaptive management is "an approach for simultaneously managing and learning about natural resources." [ 4 ] It is the primary method used for managing land restoration projects, because natural resources can respond to management techniques, but the longevity and desirability of those responses are uncertain and depend on controllable and uncontrollable factors. [ 4 ] Therefore, adapting how a project is managed based on responses from the ecosystem is a more informed approach to landscape restoration. Traditional ecological knowledge has gained increased significance and usage in landscape restoration circles. [ 5 ] Using traditional ecological knowledge alongside Western ecological knowledge is becoming a mainstream approach to landscape restoration, as many landscapes have evolved alongside humans over thousands of years, and because the ideal landscape used as the reference ecosystem is often the pre-colonial ecological landscape. [ 6 ] Land reclamation in deserts involves stabilizing and fixing the soil, which is usually done in several phases. The first phase is fixing the soil to such an extent that dune movement ceases. This is done with grasses and with plants providing wind protection, such as shelterbelts, windbreaks and woodlots. Shelterbelts are wind protections composed of rows of trees arranged perpendicular to the prevailing wind, while woodlots are more extensive areas of woodland. [ 7 ] The second phase involves improving and enriching the soil by planting nitrogen-fixing plants and using the soil immediately to grow crops.
Nitrogen-fixing plants used include clover, yellow mustard, beans, etc., and food crops include wheat, barley, beans, peas, sweet potatoes, dates, olives, limes, figs, apricots, guavas, tomatoes, certain herbs, etc. Regardless of the cover crop used, the crops (not including any trees) are harvested and/or plowed into the soil (e.g. with clover) each year. In addition, each year the plots are used for another type of crop (known as crop rotation) to prevent depleting the soil of specific trace elements. A recent development is the Seawater Greenhouse and Seawater Forest, a proposal to construct these devices on coastal deserts in order to create fresh water and grow food. [ 8 ] A similar approach is the Desert Rose concept. [ 9 ] These approaches are of widespread applicability, since the relative costs of pumping large quantities of seawater inland are low. [ 10 ] Another related concept is ADRECS – a proposed system for rapidly delivering soil stabilisation and re-forestation techniques coupled with renewable energy generation. [ 11 ]
https://en.wikipedia.org/wiki/Land_restoration
Land stewardship has various connotations across the world, but the common underlying theme is caring for a piece of land regardless of its ownership, [ 1 ] taking into consideration ecological, economic, social, and cultural dimensions. [ 2 ] A closely connected term is land ethic, coined by the American environmentalist Aldo Leopold. While the land ethic is considered a theoretical and philosophical framework that has its roots in the environmentalism of the United States, land stewardship as a movement is slowly gaining traction in European countries, most notably in Spain, where it even has legal recognition. [ 3 ] [ 4 ] [ 5 ] According to Forest Europe, the concept of land stewardship was introduced in 2003 by the Xarxa de Custòdia del Territori (Catalan Land Stewardship Network), an NGO actively working to promote land stewardship as a conservation strategy in Catalonia. [ 6 ] The term has been defined by the Xarxa de Custòdia del Territori as "…a conservation strategy that involves a wide range of civil society stakeholders. Nature, biodiversity, ecological integrity, cultural heritage and landscape values are maintained and restored through voluntary agreements between landowners/users and land stewardship organisations, General Administration, funding institutions and research centres, usually act as enabling agents". [ 7 ] These voluntary agreements or contracts have been recognised by the Catalan Civil Code (2017). [ 4 ] The concept of land stewardship is closely connected to, but not exactly the same as, a land trust or environmental stewardship. While land trusts can also be an arrangement between two individuals, land stewardship is explicitly undertaken in the interest of ecological, social and cultural values, [ 2 ] and is therefore often a particular type of land trust. Furthermore, land stewardship is broader than environmental stewardship, as it connects land to community, stressing the importance not only of ecological sustainability but also of the sustainability of social and cultural practices, values and benefits. An interesting case of the application of land stewardship is the project carried out by the Spanish association Recartografías in Mas Blanco, located in the municipality of San Agustín (Teruel). Mas Blanco is one of the fifteen neighbourhoods belonging to the municipality of San Agustín, which has approximately 119 inhabitants and is located in the Teruel region of Gúdar-Javalambre. More precisely, Mas Blanco can be described as a mas or masada, terms that designate a type of rural construction and exploitation model. Mas Blanco is included within the so-called Celtiberian mountain range or "South Lapland", a territory that covers about 65,000 km² and includes municipalities of ten Spanish provinces (Soria, Teruel, Guadalajara, Cuenca, Valencia, Castellón, Zaragoza, Burgos, Segovia and La Rioja) with a population density of approximately 7.34 inhabitants/km². [ 8 ] [ 9 ] The origin of Mas Blanco can be dated back to the first half of the 19th century, when some families arrived from the Gúdar mountain range, probably looking for more suitable land and a less harsh climate. In the middle of the 20th century, shortly before the rural exodus began, almost a hundred people lived in Mas Blanco, making it one of the most important neighbourhoods in the area. [ 8 ] [ 10 ] Mas Blanco's buildings were constructed using local materials provided by the surrounding environment.
Mas Blanco also has a notable rainwater collection system: underground cisterns beneath the houses store the water that falls on the roofs of corrals and houses, conducted there by gutter-like channels on the roofs. The inhabitants devised this system because of the lack of springs or rivers in the vicinity. The main activities that made up the inhabitants' livelihood were, on the one hand, the cultivation of vines and various types of cereal, as well as sheep farming. On the other hand, most of the houses had pens with pigs, chickens and rabbits, which complemented the inhabitants' diet and allowed them to make sausages and hams. It was common to exchange those products, together with wine, with visitors. The inhabitants also often travelled to other population centres to exchange products. Those exchanges made it possible for Mas Blanco's neighbours to obtain rice, oil, fruits and vegetables, but also clothes, medicines and tools. [ 10 ] Life in Mas Blanco, as in many rural areas, was not idyllic: conditions were harsh, the inhabitants had to work intensely to support themselves, and basic services such as a doctor, a school or running water were lacking. For example, the inhabitants had to go to the neighbouring mas called "El Pozo de la Muela", three kilometres away, to fetch water from the fountain in jugs or to wash clothes in the pool there. Children had to travel several kilometres to school, and in an emergency someone had to ride on horseback to San Agustín (1 km away) to fetch the doctor. [ 10 ] These difficult living conditions do not mean, however, that such ways of life were less worthy. The lack of basic services led Mas Blanco's inhabitants, together with their neighbours from other settlements (Pozo de la Muela, Tarín Nuevo, Tarín Viejo, Casa Carrasco and Los Linares), to create a community-based society called "La Humanitaria" in 1919, whose basic goal was to guarantee mutual aid and relief in the event of illness and death. One of its main tasks was maintaining a common annual fund to pay the doctor, as well as organising the neighbourhoods to assist the sick or deceased. Communal traits of social organisation can also be seen in the communal buildings, built by and for the masoveros, the inhabitants of Mas Blanco. Rainwater storage was possible thanks to the cistern (aljibe), built with a seven-metre vault, which allowed the watering of livestock. The building for pressing grapes and storing wine was managed so that each family took an amount of wine equivalent to the quantity of grapes it had harvested. Another important example is the common oven: every 15 days a different family was in charge of firing it to reach the right temperature for baking food such as bread. Furthermore, because of the hard winters and the insecurity of the paths during the postwar period that followed the Spanish Civil War, Mas Blanco's inhabitants decided to build a school for their children.
After obtaining permission from the Civil Government, the inhabitants themselves built the school, which was finished in 1950, together with a house for the teacher. Another example of communal practice in the Mas Blanco and San Agustín area is the so-called "luck of firewood" (a firewood lottery), which has lasted until today: every year the town council raffles municipal and communal forest plots, assigning each neighbourhood a plot in which it can collect firewood to heat the houses. [ 8 ] [ 10 ] Money and monetary relations were not common in settlements like this one; as mentioned previously, the barter of products played a fundamental and significant role. Recartografías is a land stewardship association dedicated to the study of rural heritage and depopulation. It was founded as an action-research association in 2014, motivated by the founders' concern about the depopulation of the rural world, specifically in Spain, and its consequences in terms of both cultural loss and environmental degradation (the latter caused by the abandonment of the Mediterranean mountain landscape, which is closely bound up with human activity). [ 11 ] The original objectives of the project were to save several buildings from ruin and to recover their community use through land stewardship, restarting old activities as well as new ones. A land stewardship agreement was signed with the local administration, under which the latter transferred the management of the neighbourhood's communal buildings to the association. This voluntary, non-commercial agreement aims not only to reactivate the social fabric and community organisation in Mas Blanco but also to rediscover the history and the environmental and cultural heritage of the area. Similar agreements have been reached with some owners and former inhabitants. Another important principle of the association is its interest in the inhabitants themselves: letting them take part in the project and get involved in the local environment rather than remaining passive, so that locals perceive the project as important and meaningful not only for themselves but for their place. Despite the significant deterioration of the buildings, some of them have already been recovered by the members of the association themselves, with the help of volunteers, using mainly traditional, low-impact construction techniques. Recartografías has so far recovered the school, the common oven and the teacher's house, and has carried out maintenance and signposting work on other buildings and spaces. As a result, several buildings can be used again, and some of them form part of one of the association's most important projects: the rural museum (Museo de las Masías y de la Memoria Rural). The museum, inaugurated in February 2019, has the main objective of recovering and showing the culture of the masoveros: what life was like there and how the inhabitants related to their environment and surroundings. Important historical episodes and their consequences are reviewed as well, such as the Spanish Civil War and the postwar period, and the intense rural exodus and the resulting rural depopulation.
Eight visitable spaces are part of the museum: the teacher's house, the school, the common oven, the pressing building, a cellar, the cistern (aljibe), and a shelter and command post from the Civil War. Behind the final result lies not only the hard work of recovering the buildings but also substantial research. Outreach, research and knowledge sharing are significant tasks for Recartografías as well. The association promotes an educational space called "La Universidad de las Masías". Within this project, Recartografías organises an annual seminar at the University of Valencia addressing an issue related to the rural world. Moreover, in August 2019 the first summer course took place in Mas Blanco, entitled "Architecture, environment and socio-politics in the masovero world". This course included talks, excursions and practical workshops for learning more about masovero culture, the rural environment and the challenges the rural world faces. Recartografías also organises visits to the museum and recreational days in Mas Blanco, such as days devoted to the use of the communal oven or a scout camp, among others. The association has also participated in research on different aspects of the rural environment, in Mas Blanco and beyond. [ 12 ] [ 13 ] Among the outcomes of Recartografías' activity since 2014 are the direct enrichment of the participants; increased visibility for Mas Blanco and its surroundings, and therefore for the importance and challenges of the rural world; the appreciation of different ways of life and of rural knowledge; the return of some of Mas Blanco's inhabitants after many years, for example to attend the reopening of the school; an awakened consciousness among many of them of the value of the life they had led; and the meeting of people interested in and concerned about similar issues, with the possibility of creating networks. [ 10 ] Although there has been no real return of communal goods in this village, the renewal of Mas Blanco in the form of land stewardship can be seen as a form of commoning. The project tries to break with a capitalist logic that pursues profit, and instead to pursue, safeguard and revalorise other ways of organising social life. This can be seen in the fact that the houses were rebuilt in an ecologically sustainable way using traditional materials, reviving the traditional architectural style and saving it from ruin. People also built a room intended as a space for both cultural expression and workshops related to ecology and other fields. They also rehabilitated the communal oven, thereby not only revaluing traditional baking methods but also showing that the oven can be communally owned and need not be privatised. As has already been pointed out, one of the most important outcomes of the project for the association is the change in the way the people who had to leave Mas Blanco and the surrounding neighbourhoods perceive this space. Various neighbours returned to their place of origin to rebuild their properties; many of them had not been in the village for years, even decades. The recovery of common spaces and the organisation of activities have allowed social relations to take place again in Mas Blanco. Communal goods and common spaces have made possible the recovery of meeting spaces and, with them, the social practices that are likely to appear there.
[ 10 ] [ 14 ] Through this commoning practice, not only have the people who collaborated with the association had the opportunity to experience and reflect on the rural world, but, just as importantly, people who came from this rural place or who still live there have undergone a subjective process, a psychological shift, that has empowered them to dignify and appreciate their lifestyles, culture, knowledge and the rural space. Hence, the recovery of the materiality of some communal goods has led to a continuous transformation of the space and of the interactions between people, favouring subjectivation and learning processes. The Sonoma Land Trust is a land trust located in Sonoma County, California. Its mission is to care for the land of Sonoma County and protect its beauty and vitality. This means it conserves the natural area and protects its biodiversity and wildlife. However, as the trust itself says, it is not only about the natural world: land is the heart of the community, and a healthy environment promotes a healthy community. The trust also aims to make this land accessible to the community and to help local farmers. Finally, it reaches out to school children to increase their experience with sustainable land practices, and actively encourages volunteer participation. [ 15 ] Though the land trust is a private organisation, it enables the protection of nature from unsustainable economic exploitation, while creating benefits freely accessible to the whole community and offering people the experience of working with nature. The Land Stewardship Centre is a private, non-profit organisation dedicated to stewarding farmland, promoting sustainable agriculture and developing sustainable communities. Its core values are stewardship, justice and democracy. [ 2 ] It also aims to advance 'our own narrative of food and farming that rejects the dominant narrative that tells us corporate agriculture feeds the world, that only profits matter, and that all of us are on our own. Our narrative will lift up what we know is true — we can accomplish much more together than we can alone, that the land and the people who work with it are what sustain and feeds us, and that a better food and farming system is possible.' The Land Stewardship Centre recognises that land stewardship is not just about ecological conservation: it is related to a different organisation of land access and agriculture that benefits communities. It therefore aims to treat land as a commons, whereby people can take control of the land they live on and, through mutual aid and solidarity, provide for their own needs. It explicitly proposes to think about the social good rather than profits.
https://en.wikipedia.org/wiki/Land_stewardship
Land use, land-use change, and forestry (LULUCF), also referred to as Forestry and other land use (FOLU) or Agriculture, Forestry and Other Land Use (AFOLU), [ 3 ] [ 4 ] : 65 is defined as a "greenhouse gas inventory sector that covers emissions and removals of greenhouse gases resulting from direct human-induced land use such as settlements and commercial uses, land-use change, and forestry activities." [ 5 ] LULUCF has impacts on the global carbon cycle, and as such these activities can add or remove carbon dioxide (or, more generally, carbon) from the atmosphere, influencing climate. [ 6 ] LULUCF has been the subject of two major reports by the Intergovernmental Panel on Climate Change (IPCC), but is difficult to measure. [ 7 ] : 12 Additionally, land use is of critical importance for biodiversity. [ 8 ] Article 4(1)(a) of the United Nations Framework Convention on Climate Change (UNFCCC) requires all Parties to "develop, periodically update, publish and make available to the Conference of the Parties" national inventories of anthropogenic emissions by sources, and removals by sinks, of all greenhouse gases not controlled by the Montreal Protocol. Under the UNFCCC reporting guidelines, human-induced greenhouse emissions must be reported in six sectors: energy (including stationary energy and transport); industrial processes; solvent and other product use; agriculture; waste; and land use, land-use change and forestry (LULUCF). [ 9 ] The rules governing accounting and reporting of greenhouse gas emissions from LULUCF under the Kyoto Protocol are contained in several decisions of the Conference of the Parties under the UNFCCC. [ 10 ] Article 3.3 of the Kyoto Protocol requires mandatory LULUCF accounting for afforestation (no forest for the last 50 years), reforestation (no forest on 31 December 1989) and deforestation, as well as (in the first commitment period) voluntary accounting under Article 3.4 for cropland management, grazing land management, revegetation and forest management (if not already accounted for under Article 3.3). [ 11 ] These decisions set out the rules that govern how Kyoto Parties with emission reduction commitments (so-called Annex 1 Parties) account for changes in carbon stocks in land use, land-use change and forestry. [ 12 ] It is mandatory for Annex 1 Parties to account for changes in carbon stocks resulting from deforestation, reforestation and afforestation (Article 3.3) [ 13 ] and voluntary to account for emissions from forest management, cropland management, grazing land management and revegetation (Article 3.4). [ 12 ] The flexibility mechanisms under the Kyoto Protocol, including the Clean Development Mechanism (CDM) and Joint Implementation (JI), also include provisions for LULUCF projects, further integrating land use considerations into climate change mitigation strategies. Land-use change can be a factor in the atmospheric concentration of CO₂ (carbon dioxide), and is thus a contributor to global climate change. [ 14 ] The IPCC estimates that land-use change (e.g. conversion of forest into agricultural land) contributes a net 1.6 ± 0.8 Gt carbon per year to the atmosphere. For comparison, the major source of CO₂, namely emissions from fossil fuel combustion and cement production, amounts to 6.3 ± 0.6 Gt carbon per year.
[ 15 ] In 2021 the Global Carbon Project estimated annual land-use change emissions at 4.1 ± 2.6 Gt CO₂ (CO₂, not carbon: 1 Gt carbon = 3.67 Gt CO₂ [ 16 ]) for 2011–2020. [ 17 ] The land-use sector is critical to achieving the aim of the Paris Agreement to limit global warming to 2 °C (3.6 °F). [ 18 ] Land-use change alters not just the atmospheric CO₂ concentration but also land-surface biophysics such as albedo and evapotranspiration, both of which affect climate. [ 19 ] The impact of land-use change on the climate is also increasingly recognized by the climate modeling community. On regional or local scales, the impact of land-use change can be assessed with regional climate models (RCMs). This is, however, difficult, particularly for inherently noisy variables such as precipitation; for this reason, it has been suggested to conduct RCM ensemble simulations. [ 20 ] A 2021 study using higher-resolution data estimated that land-use change affected 17% of land between 1960 and 2019, or 32% when considering multiple change events, "around four times" previous estimates. The study also investigated its drivers, identifying global trade affecting agriculture as a main driver. [ 22 ] [ 21 ] Earth system modeling has traditionally been used to analyze forests for climate projections. In recent years, however, there has been a shift towards mitigation and adaptation projections. [ 23 ] These projections can give researchers a better understanding of which future forest management practices to employ. This newer modeling approach also allows land management practices, including forest harvest, tree species selection, grazing, and crop harvest, to be analyzed in the model. Land management practices produce biophysical and biogeochemical effects on the forest, and modeling them can increase the likelihood of successful management. Where data on these practices are lacking, further monitoring and data collection are needed to improve the models' accuracy. [ 24 ]
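The carbon-to-CO₂ conversion used above follows from molar masses (roughly 44 g/mol for CO₂ versus 12 g/mol for carbon). As a minimal illustration of the arithmetic, written for this article rather than taken from the cited sources, in Python:

```python
# 1 Gt of carbon corresponds to 44/12 ≈ 3.67 Gt of CO2, since each carbon atom
# (~12 g/mol) combines with two oxygen atoms to form CO2 (~44 g/mol).
CO2_PER_C = 44.0 / 12.0

def c_to_co2(gt_carbon: float) -> float:
    """Convert gigatonnes of carbon to gigatonnes of CO2."""
    return gt_carbon * CO2_PER_C

def co2_to_c(gt_co2: float) -> float:
    """Convert gigatonnes of CO2 to gigatonnes of carbon."""
    return gt_co2 / CO2_PER_C

print(co2_to_c(4.1))  # the 2011-2020 land-use change estimate, ~1.1 Gt C per year
print(c_to_co2(1.6))  # the earlier 1.6 Gt C per year estimate, ~5.9 Gt CO2 per year
```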
https://en.wikipedia.org/wiki/Land_use,_land-use_change,_and_forestry
In algebra, a nested radical is a radical expression (one containing a square root sign, cube root sign, etc.) that contains (nests) another radical expression. Examples include $\sqrt{5-2\sqrt{5}}$, which arises in discussing the regular pentagon, and more complicated ones such as $\sqrt[3]{2+\sqrt{3}+\sqrt[3]{4}}$. Some nested radicals can be rewritten in a form that is not nested. For example,
$$\sqrt{3+2\sqrt{2}} = 1+\sqrt{2}\,, \qquad \sqrt[3]{\sqrt[3]{2}-1} = \frac{1-\sqrt[3]{2}+\sqrt[3]{4}}{\sqrt[3]{9}}\,.$$
Another simple example is $\sqrt[3]{\sqrt{2}} = \sqrt[6]{2}$. Rewriting a nested radical in this way is called denesting. This is not always possible, and, even when possible, it is often difficult.

In the case of two nested square roots, the following theorem completely solves the problem of denesting. [ 2 ] If a and c are rational numbers and c is not the square of a rational number, there are two rational numbers x and y such that
$$\sqrt{a+\sqrt{c}} = \sqrt{x} \pm \sqrt{y}$$
if and only if $a^2-c$ is the square of a rational number d. If the nested radical is real, x and y are the two numbers $\frac{a+d}{2}$ and $\frac{a-d}{2}$, where $d = \sqrt{a^2-c}$ is a rational number. In particular, if a and c are integers, then 2x and 2y are integers. This result includes denestings of the form $\sqrt{a+\sqrt{c}} = z \pm \sqrt{y}$, as z may always be written $z = \pm\sqrt{z^2}$, and at least one of the terms must be positive (because the left-hand side of the equation is positive). A more general denesting formula could have the form
$$\sqrt{a+\sqrt{c}} = \alpha + \beta\sqrt{x} + \gamma\sqrt{y} + \delta\sqrt{x}\sqrt{y}\,.$$
However, Galois theory implies that either the left-hand side belongs to $\mathbb{Q}(\sqrt{c})$, or it must be obtained by changing the sign of either $\sqrt{x}$, $\sqrt{y}$, or both. In the first case, this means that one can take x = c and $\gamma = \delta = 0$. In the second case, $\alpha$ and another coefficient must be zero. If $\beta = 0$, one may rename xy as x to get $\delta = 0$. Proceeding similarly if $\alpha = 0$, one may suppose $\alpha = \delta = 0$. This shows that the apparently more general denesting can always be reduced to the one above.

Proof: By squaring, the equation $\sqrt{a+\sqrt{c}} = \sqrt{x} \pm \sqrt{y}$ is equivalent with
$$a+\sqrt{c} = x+y \pm 2\sqrt{xy},$$
and, in the case of a minus in the right-hand side, $x \ge y$ (square roots are nonnegative by definition of the notation). As the inequality may always be satisfied by possibly exchanging x and y, solving the first equation in x and y is equivalent with solving this latter equation. The equality implies that $\sqrt{xy}$ belongs to the quadratic field $\mathbb{Q}(\sqrt{c})$. In this field every element may be uniquely written $\alpha + \beta\sqrt{c}$, with $\alpha$ and $\beta$ being rational numbers. As x and y must be rational, the square of $\pm 2\sqrt{xy}$ must be rational. This implies that $\alpha = 0$ in the expression of $\pm 2\sqrt{xy}$ as $\alpha + \beta\sqrt{c}$. Thus
$$a+\sqrt{c} = x+y+\beta\sqrt{c}$$
for some rational number $\beta$. The uniqueness of the decomposition over 1 and $\sqrt{c}$ thus implies that the considered equation is equivalent with
$$a = x+y \quad\text{and}\quad \pm 2\sqrt{xy} = \sqrt{c}.$$
It follows by Vieta's formulas that x and y must be roots of the quadratic equation
$$z^2 - az + \frac{c}{4} = 0\,;$$
its discriminant is $\Delta = a^2-c = d^2 > 0$ (nonzero, since otherwise c would be the square of a), hence x and y must be
$$\frac{a+\sqrt{a^2-c}}{2} \quad\text{and}\quad \frac{a-\sqrt{a^2-c}}{2}\,.$$
Thus x and y are rational if and only if $d = \sqrt{a^2-c}$ is a rational number. For explicitly choosing the various signs, one must consider only positive real square roots, and thus assume c > 0. The equation $a^2 = c + d^2$ shows that $|a| > \sqrt{c}$. Thus, if the nested radical is real, and if denesting is possible, then a > 0. Then the solution is
$$\sqrt{a+\sqrt{c}} = \sqrt{\frac{a+d}{2}} + \sqrt{\frac{a-d}{2}}\,, \qquad \sqrt{a-\sqrt{c}} = \sqrt{\frac{a+d}{2}} - \sqrt{\frac{a-d}{2}}\,.$$

Srinivasa Ramanujan demonstrated a number of curious identities involving nested radicals. Among them are the following: [ 3 ]
$$\sqrt[4]{\frac{3+2\sqrt[4]{5}}{3-2\sqrt[4]{5}}} = \frac{\sqrt[4]{5}+1}{\sqrt[4]{5}-1} = \tfrac{1}{2}\left(3+\sqrt[4]{5}+\sqrt{5}+\sqrt[4]{125}\right),$$
$$\sqrt{\sqrt[3]{28}-\sqrt[3]{27}} = \tfrac{1}{3}\left(\sqrt[3]{98}-\sqrt[3]{28}-1\right),$$
$$\sqrt[3]{\sqrt[5]{\frac{32}{5}}-\sqrt[5]{\frac{27}{5}}} = \sqrt[5]{\frac{1}{25}}+\sqrt[5]{\frac{3}{25}}-\sqrt[5]{\frac{9}{25}}.$$
In 1989 Susan Landau introduced the first algorithm for deciding which nested radicals can be denested. [ 5 ] Earlier algorithms worked in some cases but not others. Landau's algorithm involves complex roots of unity and runs in exponential time with respect to the depth of the nested radical. [ 6 ]
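The two-nested-square-roots theorem is effectively an algorithm. As a quick illustration, the following Python sketch (written for this article, not taken from the cited sources) tests whether $\sqrt{a+\sqrt{c}}$ denests for rational a and c and, if so, returns the rational pair (x, y):

```python
from fractions import Fraction
from math import isqrt

def rational_sqrt(t: Fraction):
    """Return sqrt(t) as a Fraction if t is the square of a rational, else None.

    A nonnegative fraction p/q in lowest terms is a rational square
    exactly when p and q are both perfect squares.
    """
    if t < 0:
        return None
    p, q = t.numerator, t.denominator
    rp, rq = isqrt(p), isqrt(q)
    return Fraction(rp, rq) if rp * rp == p and rq * rq == q else None

def denest(a: Fraction, c: Fraction):
    """Denest sqrt(a + sqrt(c)) as sqrt(x) + sqrt(y), per the theorem above.

    Returns the rational pair (x, y) when a**2 - c is the square of a
    rational number d, and None when no such denesting exists.
    """
    d = rational_sqrt(a * a - c)
    if d is None:
        return None
    return (a + d) / 2, (a - d) / 2

# sqrt(3 + sqrt(8)) = sqrt(3 + 2*sqrt(2)) denests to sqrt(2) + sqrt(1) = 1 + sqrt(2),
# matching the first example of denesting quoted earlier.
print(denest(Fraction(3), Fraction(8)))  # (Fraction(2, 1), Fraction(1, 1))
print(denest(Fraction(1), Fraction(2)))  # None: 1 - 2 = -1 is not a rational square
```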
In trigonometry, the sines and cosines of many angles can be expressed in terms of nested radicals. For example,
$$\sin\frac{\pi}{60} = \sin 3^\circ = \frac{1}{16}\left[2(1-\sqrt{3})\sqrt{5+\sqrt{5}} + \sqrt{2}(\sqrt{5}-1)(\sqrt{3}+1)\right]$$
and
$$\sin\frac{\pi}{24} = \sin 7.5^\circ = \frac{1}{2}\sqrt{2-\sqrt{2+\sqrt{3}}} = \frac{1}{2}\sqrt{2-\frac{1+\sqrt{3}}{\sqrt{2}}}\,.$$
The last equality results directly from the denesting of two nested square roots described above.

Nested radicals appear in the algebraic solution of the cubic equation. Any cubic equation can be written in simplified form without a quadratic term, as
$$x^3 + px + q = 0,$$
whose general solution for one of the roots is
$$x = \sqrt[3]{-\frac{q}{2}+\sqrt{\frac{q^2}{4}+\frac{p^3}{27}}} + \sqrt[3]{-\frac{q}{2}-\sqrt{\frac{q^2}{4}+\frac{p^3}{27}}}\,.$$
In the case in which the cubic has only one real root, the real root is given by this expression with the radicands of the cube roots being real and with the cube roots being the real cube roots. In the case of three real roots, the square root expression is an imaginary number; here any real root is expressed by defining the first cube root to be any specific complex cube root of the complex radicand, and by defining the second cube root to be the complex conjugate of the first one. The nested radicals in this solution cannot in general be simplified unless the cubic equation has at least one rational solution. Indeed, if the cubic has three irrational but real solutions, we have the casus irreducibilis, in which all three real solutions are written in terms of cube roots of complex numbers. On the other hand, consider the equation
$$x^3 - 7x + 6 = 0,$$
which has the rational solutions 1, 2, and −3. The general solution formula given above gives the solutions
$$x = \sqrt[3]{-3+\frac{10\sqrt{3}}{9}i} + \sqrt[3]{-3-\frac{10\sqrt{3}}{9}i}\,.$$
For any given choice of cube root and its conjugate, this contains nested radicals involving complex numbers, yet it is reducible (even though not obviously so) to one of the solutions 1, 2, or −3.

Under certain conditions infinitely nested square roots such as
$$x = \sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+\cdots}}}}$$
represent rational numbers. This rational number can be found by realizing that x also appears under the radical sign, which gives the equation
$$x = \sqrt{2+x}\,.$$
If we solve this equation, we find that x = 2 (the second solution, x = −1, does not apply, under the convention that the positive square root is meant). This approach can also be used to show that, generally, if n > 0, then
$$\sqrt{n+\sqrt{n+\sqrt{n+\sqrt{n+\cdots}}}} = \tfrac{1}{2}\left(1+\sqrt{1+4n}\right),$$
which is the positive root of the equation $x^2 - x - n = 0$. For n = 1, this root is the golden ratio φ, approximately equal to 1.618. The same procedure also works to obtain, if n > 0,
$$\sqrt{n-\sqrt{n-\sqrt{n-\sqrt{n-\cdots}}}} = \tfrac{1}{2}\left(-1+\sqrt{1+4n}\right),$$
which is the positive root of the equation $x^2 + x - n = 0$.
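The closed form $\tfrac{1}{2}(1+\sqrt{1+4n})$ is easy to check numerically by evaluating truncations of the radical from the innermost term outwards; a small illustrative Python sketch:

```python
from math import sqrt

def nested_sqrt(n: float, depth: int = 50) -> float:
    """Evaluate sqrt(n + sqrt(n + ...)) truncated to `depth` radicals,
    working from the innermost radical outwards."""
    x = 0.0
    for _ in range(depth):
        x = sqrt(n + x)
    return x

for n in (1, 2, 6):
    closed_form = (1 + sqrt(1 + 4 * n)) / 2
    print(n, nested_sqrt(n), closed_form)
# n = 1 recovers the golden ratio 1.618..., and n = 2 gives exactly 2,
# in agreement with the fixed-point equation x = sqrt(n + x).
```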
The nested square roots of 2 are a special case of the wide class of infinitely nested radicals. There are many known results that bind them to sines and cosines . For example, it has been shown that nested square roots of 2 of the form [ 7 ] {\displaystyle R(b_{k},\ldots ,b_{1})={\frac {b_{k}}{2}}{\sqrt {2+b_{k-1}{\sqrt {2+b_{k-2}{\sqrt {2+\cdots +b_{2}{\sqrt {2+x}}}}}}}}} where {\displaystyle x=2\sin(\pi b_{1}/4)} with {\displaystyle b_{1}} in [−2,2] and {\displaystyle b_{i}\in \{-1,0,1\}} for {\displaystyle i\neq 1} , are such that {\displaystyle R(b_{k},\ldots ,b_{1})=\cos \theta } for {\displaystyle \theta =\left({\frac {1}{2}}-{\frac {b_{k}}{4}}-{\frac {b_{k}b_{k-1}}{8}}-{\frac {b_{k}b_{k-1}b_{k-2}}{16}}-\cdots -{\frac {b_{k}b_{k-1}\cdots b_{1}}{2^{k+1}}}\right)\pi .} This result allows one to deduce, for any {\displaystyle x\in [-2,2]} , the value of the following nested radicals consisting of k nested roots: {\displaystyle R_{k}(x)={\sqrt {2+{\sqrt {2+\cdots +{\sqrt {2+x}}}}}}.} If {\displaystyle x\geq 2} , then [ 8 ] {\displaystyle {\begin{aligned}R_{k}(x)&={\sqrt {2+{\sqrt {2+\cdots +{\sqrt {2+x}}}}}}\\&=\left({\frac {x+{\sqrt {x^{2}-4}}}{2}}\right)^{1/2^{k}}+\left({\frac {x+{\sqrt {x^{2}-4}}}{2}}\right)^{-1/2^{k}}\end{aligned}}} These results can be used to obtain some nested square roots representations of {\displaystyle \pi } . Let us consider the term {\displaystyle R\left(b_{k},\ldots ,b_{1}\right)} defined above. Then [ 7 ] {\displaystyle \pi =\lim _{k\rightarrow \infty }\left[{\frac {2^{k+1}}{2-b_{1}}}R(\underbrace {1,-1,1,1,\ldots ,1,1,b_{1}} _{k{\text{ terms }}})\right]} where {\displaystyle b_{1}\neq 2} . Ramanujan posed the following problem to the Journal of the Indian Mathematical Society : {\displaystyle ?={\sqrt {1+2{\sqrt {1+3{\sqrt {1+\cdots }}}}}}.} This can be solved by noting a more general formulation: {\displaystyle ?={\sqrt {ax+(n+a)^{2}+x{\sqrt {a(x+n)+(n+a)^{2}+(x+n){\sqrt {\mathrm {\cdots } }}}}}}.} Setting this to F ( x ) and squaring both sides gives us {\displaystyle F(x)^{2}=ax+(n+a)^{2}+x{\sqrt {a(x+n)+(n+a)^{2}+(x+n){\sqrt {\mathrm {\cdots } }}}},} which can be simplified to {\displaystyle F(x)^{2}=ax+(n+a)^{2}+xF(x+n).} It can be shown that {\displaystyle F(x)={x+n+a}} satisfies this functional equation; a complete proof would additionally require showing that it is indeed the value taken by the nested radical. So, setting a = 0 , n = 1 , and x = 2 , we have {\displaystyle 3={\sqrt {1+2{\sqrt {1+3{\sqrt {1+\cdots }}}}}}.} Ramanujan stated the following infinite radical denesting in his lost notebook :
{\displaystyle {\sqrt {5+{\sqrt {5+{\sqrt {5-{\sqrt {5+{\sqrt {5+{\sqrt {5+{\sqrt {5-\cdots }}}}}}}}}}}}}}={\frac {2+{\sqrt {5}}+{\sqrt {15-6{\sqrt {5}}}}}{2}}.} The repeating pattern of the signs is ( + , + , − , + ) . {\displaystyle (+,+,-,+).} Viète's formula for π , the ratio of a circle's circumference to its diameter, is 2 π = 2 2 ⋅ 2 + 2 2 ⋅ 2 + 2 + 2 2 ⋯ . {\displaystyle {\frac {2}{\pi }}={\frac {\sqrt {2}}{2}}\cdot {\frac {\sqrt {2+{\sqrt {2}}}}{2}}\cdot {\frac {\sqrt {2+{\sqrt {2+{\sqrt {2}}}}}}{2}}\cdots .} In certain cases, infinitely nested cube roots such as x = 6 + 6 + 6 + 6 + ⋯ 3 3 3 3 {\displaystyle x={\sqrt[{3}]{6+{\sqrt[{3}]{6+{\sqrt[{3}]{6+{\sqrt[{3}]{6+\cdots }}}}}}}}} can represent rational numbers as well. Again, by realizing that the whole expression appears inside itself, we are left with the equation x = 6 + x 3 . {\displaystyle x={\sqrt[{3}]{6+x}}.} If we solve this equation, we find that x = 2 . More generally, we find that n + n + n + n + ⋯ 3 3 3 3 {\displaystyle {\sqrt[{3}]{n+{\sqrt[{3}]{n+{\sqrt[{3}]{n+{\sqrt[{3}]{n+\cdots }}}}}}}}} is the positive real root of the equation x 3 − x − n = 0 for all n > 0 . For n = 1 , this root is the plastic ratio ρ , approximately equal to 1.3247. The same procedure also works to get n − n − n − n − ⋯ 3 3 3 3 {\displaystyle {\sqrt[{3}]{n-{\sqrt[{3}]{n-{\sqrt[{3}]{n-{\sqrt[{3}]{n-\cdots }}}}}}}}} as the real root of the equation x 3 + x − n = 0 for all n > 1 . An infinitely nested radical a 1 + a 2 + ⋯ {\displaystyle {\sqrt {a_{1}+{\sqrt {a_{2}+\dotsb }}}}} (where all a i {\displaystyle a_{i}} are nonnegative ) converges if and only if there is some M ∈ R {\displaystyle M\in \mathbb {R} } such that M ≥ a n 2 − n {\displaystyle M\geq a_{n}^{2^{-n}}} for all n {\displaystyle n} , [ 9 ] or in other words sup a n 2 − n < + ∞ . {\textstyle \sup a_{n}^{2^{-n}}<+\infty .} We observe that a 1 + a 2 + ⋯ ≤ M 2 1 + M 2 2 + ⋯ = M 1 + 1 + ⋯ < 2 M . {\displaystyle {\sqrt {a_{1}+{\sqrt {a_{2}+\dotsb }}}}\leq {\sqrt {M^{2^{1}}+{\sqrt {M^{2^{2}}+\cdots }}}}=M{\sqrt {1+{\sqrt {1+\dotsb }}}}<2M.} Moreover, the sequence ( a 1 + a 2 + … a n ) {\displaystyle \left({\sqrt {a_{1}+{\sqrt {a_{2}+\dotsc {\sqrt {a_{n}}}}}}}\right)} is monotonically increasing. Therefore it converges, by the monotone convergence theorem . If the sequence ( a 1 + a 2 + ⋯ a n ) {\displaystyle \left({\sqrt {a_{1}+{\sqrt {a_{2}+\cdots {\sqrt {a_{n}}}}}}}\right)} converges, then it is bounded. However, a n 2 − n ≤ a 1 + a 2 + ⋯ a n {\displaystyle a_{n}^{2^{-n}}\leq {\sqrt {a_{1}+{\sqrt {a_{2}+\cdots {\sqrt {a_{n}}}}}}}} , hence ( a n 2 − n ) {\displaystyle \left(a_{n}^{2^{-n}}\right)} is also bounded.
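As a numerical sanity check of the two Ramanujan results above, one can evaluate truncations of the radicals from the inside out. The sketch below is our own illustration; the truncation depths are chosen arbitrarily, and the sign indexing encodes the (+, +, −, +) pattern as we read it.

```python
from math import sqrt

def ramanujan_radical(depth):
    """Truncation of sqrt(1 + 2*sqrt(1 + 3*sqrt(1 + ...))), evaluated
    from the innermost factor outward; it tends to 3."""
    x = 1.0
    for k in range(depth, 1, -1):   # k = depth, ..., 2 is the factor at each level
        x = sqrt(1 + k * x)
    return x

print(ramanujan_radical(40))        # ~3.0 to many digits

def lost_notebook(depth):
    """Truncation of the lost-notebook radical with sign pattern (+, +, -, +)."""
    signs = [1, 1, -1, 1]
    x = sqrt(5)
    for i in range(depth, 0, -1):   # level i = 1 is the outermost radical
        x = sqrt(5 + signs[(i - 1) % 4] * x)
    return x

rhs = (2 + sqrt(5) + sqrt(15 - 6 * sqrt(5))) / 2
print(lost_notebook(400), rhs)      # both ~2.747...
```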
https://en.wikipedia.org/wiki/Landau's_algorithm
In algebra , a Landau-Mignotte bound (sometimes only referred to as Mignotte's bound [ 1 ] ) is one of a family of inequalities concerning a univariate integer polynomial f ( x ) and one of its factors h ( x ). A basic version states that the coefficients of h ( x ) are bounded independently of h ( x ) by an exponential expression involving only the degree and coefficients of f ( x ), i.e. only depending on f ( x ). It has applications in computer algebra where these bounds can give a priori estimates on the run time and complexity of algorithms . [ 2 ] For {\displaystyle f(x),h(x)\in \mathbb {Z} [x]} such that {\displaystyle h(x)} divides {\displaystyle f(x)} , denote by {\displaystyle \|h\|_{1}} resp. {\displaystyle \|f\|_{1}} the sum of the absolute values of the coefficients of {\displaystyle h(x)} resp. {\displaystyle f(x)} and let {\displaystyle n} be the degree of {\displaystyle f(x)} ; then such a basic version reads {\displaystyle \|h\|_{1}\leq 2^{n}\|f\|_{1}.} In the following, {\displaystyle f,g,h\in \mathbb {C} [x]} will be univariate complex polynomials which later will be restricted to be integer polynomials , i.e. in {\displaystyle \mathbb {Z} [x]} . Explicitly {\displaystyle n,m,k} are the degrees , and the leading coefficients are {\displaystyle f_{n},g_{m},h_{k}} . Define norms by considering the coefficients as vectors; explicitly {\displaystyle \|f\|_{1}=\sum _{i=0}^{n}|f_{i}|,\qquad \|f\|_{2}=\left(\sum _{i=0}^{n}|f_{i}|^{2}\right)^{1/2},\qquad \|f\|_{\infty }=\max _{0\leq i\leq n}|f_{i}|.} By the fundamental theorem of algebra {\displaystyle f} has {\displaystyle n} roots {\displaystyle z_{1},z_{2},\ldots ,z_{n}} (with multiplicity ). Set the Mahler measure of {\displaystyle f} to be {\displaystyle M(f)=|f_{n}|\prod _{i=1}^{n}\max(1,|z_{i}|).} Similarly define {\displaystyle \|g\|_{2}} , {\displaystyle M(h)} , etc. Landau proved [ 3 ] in 1905 a key inequality linking the Mahler measure of a polynomial to its Euclidean norm : {\displaystyle M(f)\leq \|f\|_{2}.} In general norms obey the inequalities {\displaystyle \|f\|_{\infty }\leq \|f\|_{2}\leq \|f\|_{1}\leq {\sqrt {n+1}}\,\|f\|_{2}\leq (n+1)\|f\|_{\infty }.} The Mahler measure satisfies {\displaystyle M(f)\geq |f_{n}|} which for non-trivial integer polynomials implies {\displaystyle M(f)\geq 1} . See also Lehmer's conjecture . The Mahler measure is multiplicative, i.e. if {\displaystyle f=gh} then {\displaystyle M(f)=M(g)M(h).} Mignotte used Landau's inequality in 1974 to prove a basic version [ 4 ] of the following bounds [ 2 ] : 164 ff in the notation introduced above. For complex polynomials in {\displaystyle \mathbb {C} [x]} , if {\displaystyle h} divides {\displaystyle f} then {\displaystyle \|h\|_{1}\leq 2^{k}{\frac {|h_{k}|}{|f_{n}|}}\|f\|_{2},} and individual coefficients obey the inequalities {\displaystyle |h_{i}|\leq {\binom {k}{i}}{\frac {|h_{k}|}{|f_{n}|}}\|f\|_{2}.} If additionally {\displaystyle f} and {\displaystyle h} are integer polynomials in {\displaystyle \mathbb {Z} [x]} then {\displaystyle 0<{\frac {|h_{k}|}{|f_{n}|}}\leq 1} and if {\displaystyle f} is additionally monic then even {\displaystyle {\frac {|h_{k}|}{|f_{n}|}}=1} . In these cases one can simplify by omitting the fraction. Including products in the analysis we have the following theorem. Let {\displaystyle f,g,h\in \mathbb {Z} [x]} such that {\displaystyle gh} divides {\displaystyle f} ; then {\displaystyle \|g\|_{1}\|h\|_{1}\leq 2^{m+k}\|f\|_{2}.} Using Stirling's formula applied to the binomial coefficients, one gets asymptotically slightly sharper versions of these bounds. From the bounds on the individual coefficients one can deduce the following related bound.
If {\displaystyle f\in \mathbb {Z} [x]} is reducible then it has a non-trivial factor {\displaystyle h} of degree {\displaystyle k\leq \lfloor n/2\rfloor } whose coefficients satisfy the bounds above. Combining this with Stirling's formula to replace the binomial coefficients leads to more explicit versions. While the upper bounds that are independent of {\displaystyle h} and only depend on {\displaystyle f} are of great theoretical interest and aesthetic appeal, in practical application one usually has information about the degree {\displaystyle k} of {\displaystyle h} . This is why the sharper bounds that additionally depend on {\displaystyle k} are often more relevant. For {\displaystyle f=x^{n}-1} the cyclotomic polynomial {\displaystyle h=\Phi _{n}(x)} is an irreducible divisor of degree {\displaystyle k=\varphi (n)} , where {\displaystyle \varphi } is Euler's totient function . In this case {\displaystyle \|f\|_{2}={\sqrt {2}}} and it is customary to denote {\displaystyle \|h\|_{\infty }=A(n)} . A result of Vaughan [ 5 ] gives, for infinitely many positive integers {\displaystyle n} , a lower bound on {\displaystyle A(n)} that is superpolynomial in the degree {\displaystyle n} . Comparing this with Mignotte's bound, using Stirling's formula as well as bounds for Euler's totient function, leaves a gap between Mignotte's upper bound and what is known to be attained through cyclotomic polynomials. Cyclotomic polynomials cannot close this gap, by a result of Bateman [ 6 ] that provides, for every {\displaystyle \varepsilon >0} and all sufficiently large positive integers {\displaystyle n} , a corresponding upper bound on {\displaystyle A(n)} . Also note that despite the superpolynomial growth of Vaughan's lower bound, in practice, looking at examples of cyclotomic polynomials, the coefficients of {\displaystyle h=\Phi _{n}(x)} are far smaller than Mignotte's bound. Abbott gives the following example [ 7 ] related to cyclotomic polynomials. Set {\displaystyle H(x)=(x+1)(x^{2}+x+1)} and consider, for positive integers {\displaystyle j} , factorizations involving the factors {\displaystyle h_{j}=H(x)^{j}=(x+1)^{j}(x^{2}+x+1)^{j}} of polynomials {\displaystyle f_{j}} of degree {\displaystyle n=6j} ; note that the degree of {\displaystyle h_{j}} is {\displaystyle k=3j} . Abbott shows how {\displaystyle \|h_{j}\|_{\infty }} grows asymptotically for large {\displaystyle j} . Using Mignotte's bound in the version {\displaystyle \|h\|_{\infty }\leq 2^{k}{\sqrt {n+1}}\|f\|_{\infty }} and ignoring the root terms, one can compare this growth with the bound. Abbott claims that [ 7 ] : 24 "An exhaustive search in low degrees suggests that this family of factorizations is close to extremal." While there is still an exponential gap between the example and Mignotte's bound, the example shows that exponential growth is the right order for such a general bound. Note that Abbott also compares Mignotte's bound with other types of bounds and gives examples where Mignotte's bound is best and examples where other bounds are better [ 7 ] : 7ff . Also note that, while the cyclotomic polynomials {\displaystyle h=\Phi _{n}(x)} from the previous section are irreducible factors, the factors {\displaystyle h=h_{j}=H(x)^{j}=(x+1)^{j}(x^{2}+x+1)^{j}} have many factors themselves. Abbott speculates [ 7 ] : 32 "The examples [...] compel any ideal “irreducible single factor bound” to grow with degree, though the rate of growth appears to be much slower than for single factor bounds valid for any (suitably scaled) factorization in {\displaystyle \mathbb {C} [x]} ."
This suggests that such an ideal single factor bound could be very much smaller than the currently known ones. Usually the Mignotte bounds are only stated for complex or integer polynomials. They are equally valid for any subring R ⊂ C {\displaystyle R\subset \mathbb {C} } , in particular when considering only monic polynomials for which | h k | | f n | = 1 {\displaystyle {\frac {|h_{k}|}{|f_{n}|}}=1} . Any abstract number field and its ring of integers can be considered a subring of C {\displaystyle \mathbb {C} } , however there can be multiple embeddings which are inequivalent with respect to absolute values. The Mignotte bounds are abstract and general enough that they hold independent of the chosen embedding. This may be taken as a hint that they are not as tight as possible in principle, as can indeed be seen from competing bounds that are sometimes better [ 7 ] : 7ff . In computer algebra when doing effective computations with integer polynomials often the following strategy is applied. One reduces a polynomial f {\displaystyle f} modulo a suitable prime number p {\displaystyle p} to get f p {\displaystyle f_{p}} , solves a related problem over Z / p Z {\displaystyle \mathbb {Z} /p\mathbb {Z} } instead of Z {\displaystyle \mathbb {Z} } which is often simpler, and finally uses Hensel lifting to transfer the result for f p {\displaystyle f_{p}} back to f {\displaystyle f} . Hensel lifting is an iterative process and it is in general not clear when to stop it. The Landau-Mignotte bounds can supply additional a priori information that makes it possible to give explicit bounds on how often Hensel lifting has to be iterated to recover the solution for f {\displaystyle f} from a solution for f p {\displaystyle f_{p}} . In particular this can be applied to factoring integer polynomials [ 1 ] or for computing the gcd of integer polynomials [ 2 ] : 166 . Although effective , this approach may not be the most efficient , as can be seen in the case of factoring .
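To make the use of these bounds concrete, here is a minimal Python sketch (our own; it assumes a monic f, so the factor |h_k|/|f_n| equals 1 and can be dropped). It evaluates the bound 2^k ‖f‖₂ on ‖h‖₁ and derives from it the number of quadratic Hensel-lifting steps needed before integer coefficients can be recovered.

```python
from math import sqrt

def mignotte_bound(f_coeffs, k):
    """Bound on ||h||_1 for any degree-k factor h of a monic f in Z[x]:
    ||h||_1 <= 2^k * ||f||_2 (the |h_k|/|f_n| factor is 1 for monic f)."""
    norm2 = sqrt(sum(c * c for c in f_coeffs))
    return 2 ** k * norm2

def hensel_steps(f_coeffs, k, p):
    """Quadratic Hensel lifting squares the modulus each step; lift until
    p^(2^s) exceeds twice the coefficient bound, so that coefficients in
    (-p^(2^s)/2, p^(2^s)/2] are recovered exactly."""
    target = 2 * mignotte_bound(f_coeffs, k)
    s, modulus = 0, p
    while modulus <= target:
        modulus *= modulus
        s += 1
    return s

f = [1, 0, 0, 0, 0, 0, -1]      # x^6 - 1, coefficients from highest degree down
print(mignotte_bound(f, 3))     # 8*sqrt(2) ~ 11.31
print(hensel_steps(f, 3, 5))    # 1 step: 5 -> 25 > 2*11.31
```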
https://en.wikipedia.org/wiki/Landau-Mignotte_bound
In gas dynamics , the Landau derivative or fundamental derivative of gas dynamics , named after Lev Landau who introduced it in 1942, [ 1 ] [ 2 ] refers to a dimensionless physical quantity characterizing the curvature of the isentrope drawn on the specific volume versus pressure plane. Specifically, the Landau derivative is a second derivative of specific volume with respect to pressure. The derivative is denoted commonly using the symbol {\displaystyle \Gamma } or {\displaystyle \alpha } and is defined by [ 3 ] [ 4 ] [ 5 ] {\displaystyle \Gamma ={\frac {c^{4}}{2\upsilon ^{3}}}\left({\frac {\partial ^{2}\upsilon }{\partial p^{2}}}\right)_{s}} where {\displaystyle c} is the speed of sound, {\displaystyle \upsilon } is the specific volume, {\displaystyle p} is the pressure and {\displaystyle s} is the specific entropy. Alternate representations of {\displaystyle \Gamma } include {\displaystyle {\begin{aligned}\Gamma &={\frac {\upsilon ^{3}}{2c^{2}}}\left({\frac {\partial ^{2}p}{\partial \upsilon ^{2}}}\right)_{s}={\frac {1}{c}}\left({\frac {\partial \rho c}{\partial \rho }}\right)_{s}=1+{\frac {c}{\upsilon }}\left({\frac {\partial c}{\partial p}}\right)_{s}\\[2ex]&=1+{\frac {c}{\upsilon }}\left({\frac {\partial c}{\partial p}}\right)_{T}+{\frac {cT}{\upsilon c_{p}}}\left({\frac {\partial \upsilon }{\partial T}}\right)_{p}\left({\frac {\partial c}{\partial T}}\right)_{p}.\end{aligned}}} For most common gases, {\displaystyle \Gamma >0} , whereas abnormal substances such as the BZT fluids exhibit {\displaystyle \Gamma <0} . In an isentropic process, the sound speed increases with pressure when {\displaystyle \Gamma >1} ; this is the case for ideal gases. Specifically for polytropic gases (ideal gases with constant specific heats), the Landau derivative is a constant and given by {\displaystyle \Gamma ={\tfrac {1}{2}}(\gamma +1),} where {\displaystyle \gamma >1} is the specific heat ratio . Some non-ideal gases fall in the range {\displaystyle 0<\Gamma <1} , for which the sound speed decreases with pressure during an isentropic transformation.
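Since Γ is constant for a polytropic gas, the formula Γ = (γ + 1)/2 can be tabulated directly; a minimal sketch (our own illustration):

```python
# Landau derivative of a polytropic gas: Gamma = (gamma + 1) / 2 > 1,
# so the sound speed increases with pressure along an isentrope.
for name, gamma in [("monatomic", 5 / 3), ("diatomic", 7 / 5), ("polyatomic", 4 / 3)]:
    print(f"{name:10s} gamma = {gamma:.3f}  Gamma = {(gamma + 1) / 2:.3f}")
```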
https://en.wikipedia.org/wiki/Landau_derivative
The Landau kernel is named after the German number theorist Edmund Landau . The kernel is a summability kernel defined as: [ 1 ] {\displaystyle L_{n}(t)={\begin{cases}{\frac {(1-t^{2})^{n}}{c_{n}}}&{\text{if }}{-1}\leq t\leq 1\\0&{\text{otherwise}}\end{cases}}} where the coefficients {\displaystyle c_{n}} are defined as follows: {\displaystyle c_{n}=\int _{-1}^{1}(1-t^{2})^{n}\,dt.} Using integration by parts, one can show that: [ 2 ] {\displaystyle c_{n}={\frac {(n!)^{2}\,2^{2n+1}}{(2n)!(2n+1)}}.} Hence, this implies that the Landau kernel can be defined as follows: {\displaystyle L_{n}(t)={\begin{cases}(1-t^{2})^{n}{\frac {(2n)!(2n+1)}{(n!)^{2}\,2^{2n+1}}}&{\text{for }}t\in [-1,1]\\0&{\text{elsewhere}}\end{cases}}} Plotting this function for different values of n reveals that as n goes to infinity, {\displaystyle L_{n}(t)} approaches the Dirac delta function . [ 1 ] Some general properties of the Landau kernel are that it is nonnegative and continuous on {\displaystyle \mathbb {R} } . These properties are made more concrete in the following section. Definition: Dirac sequence — A Dirac sequence is a sequence {\displaystyle \{K_{n}(t)\}} of functions {\displaystyle K_{n}(t)\colon \mathbb {R} \to \mathbb {R} } that satisfies the following properties: {\displaystyle K_{n}(t)\geq 0} for all {\displaystyle t} and {\displaystyle n} ; {\displaystyle \int _{\mathbb {R} }K_{n}(t)\,dt=1} for all {\displaystyle n} ; and, for every {\displaystyle \delta >0} , {\displaystyle \int _{\mathbb {R} \smallsetminus [-\delta ,\delta ]}K_{n}(t)\,dt\to 0} as {\displaystyle n\to \infty } . The third property means that the area under the graph of the function {\displaystyle y=K_{n}(t)} becomes increasingly concentrated close to the origin as n approaches infinity. This definition leads us to the following theorem. Theorem — The sequence of Landau kernels is a Dirac sequence. Proof : We prove the third property only. In order to do so, we introduce the following lemma: Lemma — The coefficients satisfy the relationship {\displaystyle c_{n}\geq {\frac {2}{n+1}}.} Proof of the Lemma: Using the definition of the coefficients above and the fact that the integrand is even, we may write {\displaystyle {\frac {c_{n}}{2}}=\int _{0}^{1}(1-t^{2})^{n}\,dt=\int _{0}^{1}(1-t)^{n}(1+t)^{n}\,dt\geq \int _{0}^{1}(1-t)^{n}\,dt={\frac {1}{1+n}},} completing the proof of the lemma. A corollary of this lemma is the following: Corollary — For all positive, real {\displaystyle \delta } : {\displaystyle \int _{\mathbb {R} \smallsetminus [-\delta ,\delta ]}K_{n}(t)\,dt\leq {\frac {2}{c_{n}}}\int _{\delta }^{1}(1-t^{2})^{n}\,dt\leq (n+1)(1-\delta ^{2})^{n}.} Since {\displaystyle (n+1)(1-\delta ^{2})^{n}\to 0} as {\displaystyle n\to \infty } for any fixed {\displaystyle 0<\delta <1} , the third property follows.
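The closed form for c_n and the concentration property from the corollary can be verified numerically; the following sketch is our own illustration using only the Python standard library:

```python
from math import factorial

def c_closed(n):
    """c_n = (n!)^2 * 2^(2n+1) / ((2n)! * (2n+1))."""
    return factorial(n) ** 2 * 2 ** (2 * n + 1) / (factorial(2 * n) * (2 * n + 1))

def midpoint_integral(f, a, b, steps=50_000):
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

n = 8
print(c_closed(n), midpoint_integral(lambda t: (1 - t * t) ** n, -1.0, 1.0))

# Concentration (third Dirac-sequence property): the mass of L_n outside
# [-delta, delta] is at most (n+1)*(1-delta^2)^n, which tends to zero.
delta = 0.3
for n in (5, 20, 80):
    tail = 2 * midpoint_integral(lambda t: (1 - t * t) ** n, delta, 1.0) / c_closed(n)
    print(n, tail, (n + 1) * (1 - delta * delta) ** n)
```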
https://en.wikipedia.org/wiki/Landau_kernel
In quantum mechanics , the energies of cyclotron orbits of charged particles in a uniform magnetic field are quantized to discrete values, thus known as Landau levels . These levels are degenerate , with the number of electrons per level directly proportional to the strength of the applied magnetic field. It is named after the Soviet physicist Lev Landau . [ 1 ] Landau quantization contributes towards magnetic susceptibility of metals, known as Landau diamagnetism . Under strong magnetic fields, Landau quantization leads to oscillations in electronic properties of materials as a function of the applied magnetic field known as the De Haas–Van Alphen and Shubnikov–de Haas effects . Landau quantization is a key ingredient in explanation of the integer quantum Hall effect . Consider a system of non-interacting particles with charge q and spin S confined to an area A = L x L y in the x-y plane. Apply a uniform magnetic field B = ( 0 0 B ) {\displaystyle \mathbf {B} ={\begin{pmatrix}0\\0\\B\end{pmatrix}}} along the z -axis. In SI units, the Hamiltonian of this system (here, the effects of spin are neglected) is H ^ = 1 2 m ( p ^ − q A ^ ) 2 . {\displaystyle {\hat {H}}={\frac {1}{2m}}\left({\hat {\mathbf {p} }}-q{\hat {\mathbf {A} }}\right)^{2}.} Here, p ^ {\textstyle {\hat {\mathbf {p} }}} is the canonical momentum operator and A ^ {\textstyle {\hat {\mathbf {A} }}} is the operator for the electromagnetic vector potential A {\textstyle \mathbf {A} } (in position space A ^ = A {\textstyle {\hat {\mathbf {A} }}=\mathbf {A} } ). The vector potential is related to the magnetic field by B = ∇ × A . {\displaystyle \mathbf {B} =\mathbf {\nabla } \times \mathbf {A} .} There is some gauge freedom in the choice of vector potential for a given magnetic field. The Hamiltonian is gauge invariant , which means that adding the gradient of a scalar field to A changes the overall phase of the wave function by an amount corresponding to the scalar field. But physical properties are not influenced by the specific choice of gauge. From the possible solutions for A , a gauge fixing introduced by Lev Landau is often used for charged particles in a constant magnetic field. [ 2 ] When B = ( 0 0 B ) {\displaystyle \mathbf {B} ={\begin{pmatrix}0\\0\\B\end{pmatrix}}} then A = ( 0 B ⋅ x 0 ) {\displaystyle \mathbf {A} ={\begin{pmatrix}0\\B\cdot x\\0\end{pmatrix}}} is a possible solution [ 3 ] in the Landau gauge (not to be mixed up with the Landau R ξ {\displaystyle R_{\xi }} gauge ). In this gauge, the Hamiltonian is H ^ = p ^ x 2 2 m + 1 2 m ( p ^ y − q B x ^ ) 2 + p ^ z 2 2 m . {\displaystyle {\hat {H}}={\frac {{\hat {p}}_{x}^{2}}{2m}}+{\frac {1}{2m}}\left({\hat {p}}_{y}-qB{\hat {x}}\right)^{2}+{\frac {{\hat {p}}_{z}^{2}}{2m}}.} The operator p ^ y {\displaystyle {\hat {p}}_{y}} commutes with this Hamiltonian, since the operator y ^ {\displaystyle {\hat {y}}} is absent for this choice of gauge. Thus the operator p ^ y {\displaystyle {\hat {p}}_{y}} can be replaced by its eigenvalue ℏ k y {\displaystyle \hbar k_{y}} . Since z ^ {\displaystyle {\hat {z}}} does not appear in the Hamiltonian and only the z-momentum appears in the kinetic energy, this motion along the z-direction is a free motion. The Hamiltonian can also be written more simply by noting that the cyclotron frequency is ω c = q B / m {\displaystyle \omega _{c}=qB/m} , giving H ^ = p ^ x 2 2 m + 1 2 m ω c 2 ( x ^ − ℏ k y m ω c ) 2 + p ^ z 2 2 m . 
{\displaystyle {\hat {H}}={\frac {{\hat {p}}_{x}^{2}}{2m}}+{\frac {1}{2}}m\omega _{\rm {c}}^{2}\left({\hat {x}}-{\frac {\hbar k_{y}}{m\omega _{\rm {c}}}}\right)^{2}+{\frac {{\hat {p}}_{z}^{2}}{2m}}.} This is exactly the Hamiltonian for the quantum harmonic oscillator , except with the minimum of the potential shifted in coordinate space by x 0 = ℏ k y / m ω c {\displaystyle x_{0}=\hbar k_{y}/m\omega _{c}} . To find the energies, note that translating the harmonic oscillator potential does not affect the energies. The energies of this system are thus identical to those of the standard quantum harmonic oscillator , [ 4 ] E n = ℏ ω c ( n + 1 2 ) + p z 2 2 m , n ≥ 0. {\displaystyle E_{n}=\hbar \omega _{\rm {c}}\left(n+{\frac {1}{2}}\right)+{\frac {p_{z}^{2}}{2m}},\quad n\geq 0.} The energy does not depend on the quantum number k y {\displaystyle k_{y}} , so there will be a finite number of degeneracies (If the particle is placed in an unconfined space, this degeneracy will correspond to a continuous sequence of p y {\displaystyle p_{y}} ). The value of p z {\displaystyle p_{z}} is continuous if the particle is unconfined in the z-direction and discrete if the particle is bounded in the z-direction also. Each set of wave functions with the same value of n {\displaystyle n} is called a Landau level . For the wave functions, recall that p ^ y {\displaystyle {\hat {p}}_{y}} commutes with the Hamiltonian. Then the wave function factors into a product of momentum eigenstates in the y {\displaystyle y} direction and harmonic oscillator eigenstates | ϕ n ⟩ {\displaystyle |\phi _{n}\rangle } shifted by an amount x 0 {\displaystyle x_{0}} in the x {\displaystyle x} direction: Ψ ( x , y , z ) = e i ( k y y + k z z ) ϕ n ( x − x 0 ) {\displaystyle \Psi (x,y,z)=e^{i(k_{y}y+k_{z}z)}\phi _{n}(x-x_{0})} where k z = p z / ℏ {\displaystyle k_{z}=p_{z}/\hbar } . In sum, the state of the electron is characterized by the quantum numbers, n {\displaystyle n} , k y {\displaystyle k_{y}} and k z {\displaystyle k_{z}} . The derivation treated x and y as asymmetric. However, by the symmetry of the system, there is no physical quantity which distinguishes these coordinates. The same result could have been obtained with an appropriate interchange of x and y . A more adequate choice of gauge, is the symmetric gauge, which refers to the choice A ^ = 1 2 B × r ^ = 1 2 ( − B y B x 0 ) . {\displaystyle {\hat {\mathbf {A} }}={\frac {1}{2}}\mathbf {B} \times {\hat {\mathbf {r} }}={\frac {1}{2}}{\begin{pmatrix}-By\\Bx\\0\end{pmatrix}}.} In terms of dimensionless lengths and energies, the Hamiltonian can be expressed as H ^ = 1 2 [ ( − i ∂ ∂ x + y 2 ) 2 + ( − i ∂ ∂ y − x 2 ) 2 ] {\displaystyle {\hat {H}}={\frac {1}{2}}\left[\left(-i{\frac {\partial }{\partial x}}+{\frac {y}{2}}\right)^{2}+\left(-i{\frac {\partial }{\partial y}}-{\frac {x}{2}}\right)^{2}\right]} The correct units can be restored by introducing factors of q , ℏ , B {\displaystyle q,\hbar ,\mathbf {B} } and m {\displaystyle m} . 
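To get a feeling for the scales involved, the following sketch (our own illustration, using rounded CODATA constants) evaluates the cyclotron frequency and the Landau-level spacing ħω_c for an electron in a 10 T field:

```python
# Cyclotron frequency and Landau-level spacing E_n = hbar*omega_c*(n + 1/2)
# for an electron in a uniform field B (SI units).
hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C
m_e = 9.1093837015e-31   # kg
k_B = 1.380649e-23       # J/K

B = 10.0                          # tesla
omega_c = e * B / m_e             # ~1.76e12 rad/s
spacing = hbar * omega_c          # separation between adjacent Landau levels
print(spacing / e * 1e3, "meV")   # ~1.16 meV
print(spacing / k_B, "K")         # ~13.4 K: T must be well below this to resolve levels
```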
Consider operators a ^ = 1 2 [ ( x 2 + ∂ ∂ x ) − i ( y 2 + ∂ ∂ y ) ] a ^ † = 1 2 [ ( x 2 − ∂ ∂ x ) + i ( y 2 − ∂ ∂ y ) ] b ^ = 1 2 [ ( x 2 + ∂ ∂ x ) + i ( y 2 + ∂ ∂ y ) ] b ^ † = 1 2 [ ( x 2 − ∂ ∂ x ) − i ( y 2 − ∂ ∂ y ) ] {\displaystyle {\begin{aligned}{\hat {a}}&={\frac {1}{\sqrt {2}}}\left[\left({\frac {x}{2}}+{\frac {\partial }{\partial x}}\right)-i\left({\frac {y}{2}}+{\frac {\partial }{\partial y}}\right)\right]\\{\hat {a}}^{\dagger }&={\frac {1}{\sqrt {2}}}\left[\left({\frac {x}{2}}-{\frac {\partial }{\partial x}}\right)+i\left({\frac {y}{2}}-{\frac {\partial }{\partial y}}\right)\right]\\{\hat {b}}&={\frac {1}{\sqrt {2}}}\left[\left({\frac {x}{2}}+{\frac {\partial }{\partial x}}\right)+i\left({\frac {y}{2}}+{\frac {\partial }{\partial y}}\right)\right]\\{\hat {b}}^{\dagger }&={\frac {1}{\sqrt {2}}}\left[\left({\frac {x}{2}}-{\frac {\partial }{\partial x}}\right)-i\left({\frac {y}{2}}-{\frac {\partial }{\partial y}}\right)\right]\end{aligned}}} These operators follow certain commutation relations [ a ^ , a ^ † ] = [ b ^ , b ^ † ] = 1. {\displaystyle [{\hat {a}},{\hat {a}}^{\dagger }]=[{\hat {b}},{\hat {b}}^{\dagger }]=1.} In terms of above operators the Hamiltonian can be written as H ^ = ℏ ω c ( a ^ † a ^ + 1 2 ) , {\displaystyle {\hat {H}}=\hbar \omega _{\rm {c}}\left({\hat {a}}^{\dagger }{\hat {a}}+{\frac {1}{2}}\right),} where we reintroduced the units back. The Landau level index n {\displaystyle n} is the eigenvalue of the operator N ^ = a ^ † a ^ {\displaystyle {\hat {N}}={\hat {a}}^{\dagger }{\hat {a}}} . The application of b ^ † {\displaystyle {\hat {b}}^{\dagger }} increases m z {\displaystyle m_{z}} by one unit while preserving n {\displaystyle n} , whereas a ^ † {\displaystyle {\hat {a}}^{\dagger }} application simultaneously increase n {\displaystyle n} and decreases m z {\displaystyle m_{z}} by one unit. The analogy to quantum harmonic oscillator provides solutions H ^ | n , m z ⟩ = E n | n , m z ⟩ , {\displaystyle {\hat {H}}|n,m_{z}\rangle =E_{n}|n,m_{z}\rangle ,} where E n = ℏ ω c ( n + 1 2 ) {\displaystyle E_{n}=\hbar \omega _{\rm {c}}\left(n+{\frac {1}{2}}\right)} and | n , m z ⟩ = ( b ^ † ) m z + n ( m z + n ) ! ( a ^ † ) n n ! | 0 , 0 ⟩ . {\displaystyle |n,m_{z}\rangle ={\frac {({\hat {b}}^{\dagger })^{m_{z}+n}}{\sqrt {(m_{z}+n)!}}}{\frac {({\hat {a}}^{\dagger })^{n}}{\sqrt {n!}}}|0,0\rangle .} One may verify that the above states correspond to choosing wavefunctions proportional to ψ n , m z ( x , y ) = ( ∂ ∂ w − w ¯ 4 ) n w n + m z e − | w | 2 / 4 {\displaystyle \psi _{n,m_{z}}(x,y)=\left({\frac {\partial }{\partial w}}-{\frac {\bar {w}}{4}}\right)^{n}w^{n+m_{z}}e^{-|w|^{2}/4}} where w = x − i y {\displaystyle w=x-iy} . In particular, the lowest Landau level n = 0 {\displaystyle n=0} consists of arbitrary analytic functions multiplying a Gaussian, ψ ( x , y ) = f ( w ) e − | w | 2 / 4 {\displaystyle \psi (x,y)=f(w)e^{-|w|^{2}/4}} . The effects of Landau levels may only be observed when the mean thermal energy kT is smaller than the energy level separation, k T ≪ ℏ ω c {\displaystyle kT\ll \hbar \omega _{c}} , meaning low temperatures and strong magnetic fields. Each Landau level is degenerate because of the second quantum number k y {\displaystyle k_{y}} , which can take the values k y = 2 π N L y , {\displaystyle k_{y}={\frac {2\pi N}{L_{y}}},} where N {\displaystyle N} is an integer. 
The allowed values of N {\displaystyle N} are further restricted by the condition that the center of force of the oscillator, x 0 {\displaystyle x_{0}} , must physically lie within the system, 0 ≤ x 0 < L x {\displaystyle 0\leq x_{0}<L_{x}} . This gives the following range for N {\displaystyle N} , 0 ≤ N < m ω c L x L y 2 π ℏ . {\displaystyle 0\leq N<{\frac {m\omega _{\rm {c}}L_{x}L_{y}}{2\pi \hbar }}.} For particles with charge q = Z e {\displaystyle q=Ze} , the upper bound on N {\displaystyle N} can be simply written as a ratio of fluxes , Z B L x L y ( h / e ) = Z Φ Φ 0 , {\displaystyle {\frac {ZBL_{x}L_{y}}{(h/e)}}=Z{\frac {\Phi }{\Phi _{0}}},} where Φ 0 = h / e {\displaystyle \Phi _{0}=h/e} is the fundamental magnetic flux quantum and Φ = B A {\displaystyle \Phi =BA} is the flux through the system (with area A = L x L y {\displaystyle A=L_{x}L_{y}} ). Thus, for particles with spin S {\displaystyle S} , the maximum number D {\displaystyle D} of particles per Landau level is D = Z ( 2 S + 1 ) Φ Φ 0 , {\displaystyle D=Z(2S+1){\frac {\Phi }{\Phi _{0}}},} which for electrons (where Z = 1 {\displaystyle Z=1} and S = 1 / 2 {\displaystyle S=1/2} ) gives D = 2 Φ / Φ 0 {\displaystyle D=2\Phi /\Phi _{0}} , two available states for each flux quantum that penetrates the system. The above gives only a rough idea of the effects of finite-size geometry. Strictly speaking, using the standard solution of the harmonic oscillator is only valid for systems unbounded in the x {\displaystyle x} -direction (infinite strips). If the size L x {\displaystyle L_{x}} is finite, boundary conditions in that direction give rise to non-standard quantization conditions on the magnetic field, involving (in principle) both solutions to the Hermite equation. The filling of these levels with many electrons is still [ 5 ] an active area of research. In general, Landau levels are observed in electronic systems. As the magnetic field is increased, more and more electrons can fit into a given Landau level. The occupation of the highest Landau level ranges from completely full to entirely empty, leading to oscillations in various electronic properties (see De Haas–Van Alphen effect and Shubnikov–de Haas effect ). If Zeeman splitting is included, each Landau level splits into a pair, one for spin up electrons and the other for spin down electrons. Then the occupation of each spin Landau level is just the ratio of fluxes D = Φ / Φ 0 {\displaystyle D=\Phi /\Phi _{0}} . Zeeman splitting has a significant effect on the Landau levels because their energy scales are the same, 2 μ B B = ℏ ω c {\displaystyle 2\mu _{B}B=\hbar \omega _{c}} . However, the Fermi energy and ground state energy stay roughly the same in a system with many filled levels, since pairs of split energy levels cancel each other out when summed. Moreover, the above derivation in the Landau gauge assumed an electron confined in the z {\displaystyle z} -direction, which is a relevant experimental situation — found in two-dimensional electron gases, for instance. Still, this assumption is not essential for the results. If electrons are free to move along the z {\displaystyle z} -direction, the wave function acquires an additional multiplicative term exp ⁡ ( i k z z ) {\displaystyle \exp(ik_{z}z)} ; the energy corresponding to this free motion, ( ℏ k z ) 2 / ( 2 m ) {\displaystyle (\hbar k_{z})^{2}/(2m)} , is added to the E {\displaystyle E} discussed. 
This term then fills in the separation in energy of the different Landau levels, blurring the effect of the quantization. Nevertheless, the motion in the x {\displaystyle x} - y {\displaystyle y} -plane, perpendicular to the magnetic field, is still quantized. Each Landau level has degenerate orbitals labeled by the quantum numbers m z {\displaystyle m_{z}} in symmetric gauge. The degeneracy per unit area is the same in each Landau level. The z component of angular momentum is L ^ z = − i ℏ ∂ ∂ θ = − ℏ ( b ^ † b ^ − a ^ † a ^ ) {\displaystyle {\hat {L}}_{z}=-i\hbar {\frac {\partial }{\partial \theta }}=-\hbar ({\hat {b}}^{\dagger }{\hat {b}}-{\hat {a}}^{\dagger }{\hat {a}})} Exploiting the property [ H ^ , L ^ z ] = 0 {\displaystyle [{\hat {H}},{\hat {L}}_{z}]=0} we chose eigenfunctions which diagonalize H ^ {\displaystyle {\hat {H}}} and L ^ z {\displaystyle {\hat {L}}_{z}} , The eigenvalue of L ^ z {\displaystyle {\hat {L}}_{z}} is denoted by − m z ℏ {\displaystyle -m_{z}\hbar } , where it is clear that m z ≥ − n {\displaystyle m_{z}\geq -n} in the n {\displaystyle n} th Landau level. However, it may be arbitrarily large, which is necessary to obtain the infinite degeneracy (or finite degeneracy per unit area) exhibited by the system. An electron following Dirac equation under a constant magnetic field, can be analytically solved. [ 6 ] [ 7 ] The energies are given by E r e l = ± ( m c 2 ) 2 + ( c ℏ k z ) 2 + 2 ν ℏ ω c m c 2 {\displaystyle E_{\rm {rel}}=\pm {\sqrt {(mc^{2})^{2}+(c\hbar k_{z})^{2}+2\nu \hbar \omega _{\rm {c}}mc^{2}}}} where c is the speed of light, the sign depends on the particle-antiparticle component and ν is a non-negative integer. Due to spin, all levels are degenerate except for the ground state at ν = 0 . The massless 2D case can be simulated in single-layer materials like graphene near the Dirac cones , where the eigenergies are given by [ 8 ] E g r a p h e n e = ± 2 ν ℏ e B v F 2 {\displaystyle E_{\rm {graphene}}=\pm {\sqrt {2\nu \hbar eBv_{\rm {F}}^{2}}}} where the speed of light has to be replaced with the Fermi speed v F of the material and the minus sign corresponds to electron holes . The Fermi gas (an ensemble of non-interacting fermions ) is part of the basis for understanding of the thermodynamic properties of metals. In 1930 Landau derived an estimate for the magnetic susceptibility of a Fermi gas, known as Landau susceptibility , which is constant for small magnetic fields. Landau also noticed that the susceptibility oscillates with high frequency for large magnetic fields, [ 9 ] this physical phenomenon is known as the De Haas–Van Alphen effect . The tight binding energy spectrum of charged particles in a two dimensional infinite lattice is known to be self-similar and fractal , as demonstrated in Hofstadter's butterfly . For an integer ratio of the magnetic flux quantum and the magnetic flux through a lattice cell, one recovers the Landau levels for large integers. [ 10 ] The energy spectrum of the semiconductor in a strong magnetic field forms Landau levels that can be labeled by integer indices. In addition, the Hall resistivity also exhibits discrete levels labeled by an integer ν . The fact that these two quantities are related can be shown in different ways, but most easily can be seen from Drude model : the Hall conductivity depends on the electron density n as ρ x y = B n e . 
{\displaystyle \rho _{xy}={\frac {B}{ne}}.} Since the resistivity plateau is given by ρ x y = 2 π ℏ e 2 1 ν , {\displaystyle \rho _{xy}={\frac {2\pi \hbar }{e^{2}}}{\frac {1}{\nu }},} the required density is n = B Φ 0 ν , {\displaystyle n={\frac {B}{\Phi _{0}}}\nu ,} which is exactly the density required to fill the Landau level. The gap between different Landau levels along with large degeneracy of each level renders the resistivity quantized.
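The flux-counting results above, the degeneracy D = 2Φ/Φ₀ and the density n = νB/Φ₀ that fills ν levels, reduce to a few lines of arithmetic; a minimal sketch (our own illustration):

```python
# Landau-level degeneracy D = 2*Phi/Phi_0 for electrons, and the sheet density
# n = nu*B/Phi_0 that exactly fills nu levels (integer quantum Hall effect).
h = 6.62607015e-34      # J s
e = 1.602176634e-19     # C
Phi_0 = h / e           # magnetic flux quantum, ~4.14e-15 Wb

B = 5.0                 # tesla
A = 1e-6                # 1 mm x 1 mm sample area, in m^2
D = 2 * B * A / Phi_0
print(D)                # ~2.4e9 electrons per Landau level (both spins)

nu = 2                  # filling factor
n = nu * B / Phi_0      # required sheet density, ~2.4e15 m^-2
rho_xy = h / (e ** 2 * nu)   # Hall plateau resistivity, ~12.9 kOhm
print(n, rho_xy)
```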
https://en.wikipedia.org/wiki/Landau_levels
Landau theory (also known as Ginzburg–Landau theory , despite the confusing name [ 1 ] ) in physics is a theory that Lev Landau introduced in an attempt to formulate a general theory of continuous (i.e., second-order) phase transitions . [ 2 ] It can also be adapted to systems under externally-applied fields, and used as a quantitative model for discontinuous (i.e., first-order) transitions. Although the theory has now been superseded by the renormalization group and scaling theory formulations, it remains an exceptionally broad and powerful framework for phase transitions, and the associated concept of the order parameter as a descriptor of the essential character of the transition has proven transformative. Landau was motivated to suggest that the free energy of any system should obey two conditions: that the free energy be analytic in the order parameter, and that it obey the symmetry of the Hamiltonian. Given these two conditions, one can write down (in the vicinity of the critical temperature, T c ) a phenomenological expression for the free energy as a Taylor expansion in the order parameter . Consider a system that breaks some symmetry below a phase transition, which is characterized by an order parameter {\displaystyle \eta } . This order parameter is a measure of the order before and after a phase transition; the order parameter is often zero above some critical temperature and non-zero below the critical temperature. In a simple ferromagnetic system like the Ising model , the order parameter is characterized by the net magnetization {\displaystyle m} , which becomes spontaneously non-zero below a critical temperature {\displaystyle T_{c}} . In Landau theory, one considers a free energy functional that is an analytic function of the order parameter. In many systems with certain symmetries, the free energy will only be a function of even powers of the order parameter, for which it can be expressed as the series expansion [ 3 ] {\displaystyle F(T,\eta )=F_{0}+a(T)\eta ^{2}+{\frac {b(T)}{2}}\eta ^{4}.} In general, there are higher order terms present in the free energy, but it is a reasonable approximation to consider the series to fourth order in the order parameter, as long as the order parameter is small. For the system to be thermodynamically stable (that is, the system does not seek an infinite order parameter to minimize the energy), the coefficient of the highest even power of the order parameter must be positive, so {\displaystyle b(T)>0} . For simplicity, one can assume that {\displaystyle b(T)=b_{0}} , a constant, near the critical temperature. Furthermore, since {\displaystyle a(T)} changes sign above and below the critical temperature, one can likewise expand {\displaystyle a(T)\approx a_{0}(T-T_{c})} , where it is assumed that {\displaystyle a>0} for the high-temperature phase while {\displaystyle a<0} for the low-temperature phase, for a transition to occur. With these assumptions, minimizing the free energy with respect to the order parameter requires {\displaystyle {\frac {\partial F}{\partial \eta }}=2a\eta +2b\eta ^{3}=0.} The solution to the order parameter that satisfies this condition is either {\displaystyle \eta =0} , or {\displaystyle \eta _{0}=\pm {\sqrt {-{\frac {a}{b}}}}.} It is clear that this solution only exists for {\displaystyle T<T_{c}} , otherwise {\displaystyle \eta =0} is the only solution. Indeed, {\displaystyle \eta =0} is the minimum solution for {\displaystyle T>T_{c}} , but the solution {\displaystyle \eta _{0}} minimizes the free energy for {\displaystyle T<T_{c}} , and thus is a stable phase.
Furthermore, the order parameter follows the relation {\displaystyle \eta _{0}(T)=\pm {\sqrt {\frac {a_{0}(T_{c}-T)}{b_{0}}}}\propto (T_{c}-T)^{1/2}} below the critical temperature, indicating a critical exponent {\displaystyle \beta =1/2} for this Landau mean-field theory model. The free energy varies as a function of temperature as {\displaystyle F-F_{0}={\begin{cases}-{\frac {a_{0}^{2}}{2b_{0}}}(T-T_{c})^{2}&T<T_{c}\\0&T>T_{c}.\end{cases}}} From the free energy, one can compute the specific heat, which has a finite jump at the critical temperature of size {\displaystyle \Delta c=a_{0}^{2}T_{c}/b_{0}} . This finite jump is therefore not associated with a discontinuity that would occur if the system absorbed latent heat , since {\displaystyle T_{c}\Delta S=0} . It is also noteworthy that the discontinuity in the specific heat is related to the discontinuity in the second derivative of the free energy, which is characteristic of a second -order phase transition. Furthermore, the fact that the specific heat has no divergence or cusp at the critical point indicates its critical exponent for {\displaystyle c\sim |T-T_{c}|^{-\alpha }} is {\displaystyle \alpha =0} . Landau expanded his theory to consider the restraints that it imposes on the symmetries before and after a transition of second order. They need to comply with a number of requirements; in particular, the distortion must transform according to a single irreducible representation (irrep) of the symmetry group of the high-symmetry phase, and one irrep may allow more than one daughter structure to be reachable through a continuous transition. A good example of this are the structure of MnP (space group Cmca) and the low temperature structure of NbS (space group P6 3 mc). They are both daughters of the NiAs-structure and their distortions transform according to the same irrep of that spacegroup. [ 4 ] In many systems, one can consider a perturbing field {\displaystyle h} that couples linearly to the order parameter. For example, in the case of a classical dipole moment {\displaystyle \mu } , the energy of the dipole-field system is {\displaystyle -\mu B} . In the general case, one can assume an energy shift of {\displaystyle -\eta h} due to the coupling of the order parameter to the applied field {\displaystyle h} , and the Landau free energy will change as a result: {\displaystyle F(T,\eta )=F_{0}+a(T)\eta ^{2}+{\frac {b_{0}}{2}}\eta ^{4}-\eta h.} In this case, the minimization condition is {\displaystyle {\frac {\partial F}{\partial \eta }}=2a\eta +2b_{0}\eta ^{3}-h=0.} One immediate consequence of this equation and its solution is that, if the applied field is non-zero, then the magnetization is non-zero at any temperature. This implies there is no longer a spontaneous symmetry breaking that occurs at any temperature. Furthermore, some interesting thermodynamic and universal quantities can be obtained from this above condition. For example, at the critical temperature where {\displaystyle a(T_{c})=0} , one can find the dependence of the order parameter on the external field: {\displaystyle \eta (T_{c})=\left({\frac {h}{2b_{0}}}\right)^{1/3}\propto h^{1/3},} indicating a critical exponent {\displaystyle \delta =3} . Furthermore, from the above condition, it is possible to find the zero-field susceptibility {\displaystyle \chi \equiv \partial \eta /\partial h|_{h=0}} , which must satisfy {\displaystyle \left(2a+6b_{0}\eta ^{2}\right)\chi =1.} In this case, recalling in the zero-field case that {\displaystyle \eta ^{2}=-a/b} at low temperatures, while {\displaystyle \eta ^{2}=0} for temperatures above the critical temperature, the zero-field susceptibility therefore has the following temperature dependence: {\displaystyle \chi (T)={\begin{cases}{\frac {1}{2a_{0}(T-T_{c})}}&T>T_{c}\\{\frac {1}{4a_{0}(T_{c}-T)}}&T<T_{c},\end{cases}}} which is reminiscent of the Curie-Weiss law for the temperature dependence of magnetic susceptibility in magnetic materials, and yields the mean-field critical exponent {\displaystyle \gamma =1} .
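The mean-field exponents β = 1/2 and δ = 3 derived above can be recovered by brute-force minimization of the quartic free energy. The sketch below is our own illustration, with the coefficients a₀ = b₀ = 1 and T_c = 1 chosen arbitrarily:

```python
from math import sqrt

a0, b0, Tc = 1.0, 1.0, 1.0

def eta_min(T, h=0.0):
    """Global minimizer of F = a0*(T - Tc)*eta^2 + (b0/2)*eta^4 - h*eta,
    found by a dense scan (crude, but free of analytic assumptions)."""
    grid = (i * 1e-4 for i in range(0, 30001))   # eta >= 0 suffices by symmetry
    return min(grid, key=lambda x: a0 * (T - Tc) * x * x + 0.5 * b0 * x ** 4 - h * x)

# beta = 1/2: eta ~ (Tc - T)^(1/2) just below Tc
for dT in (0.04, 0.01):
    print(eta_min(Tc - dT), sqrt(a0 * dT / b0))

# delta = 3: at T = Tc, eta = (h/(2*b0))^(1/3)
h = 1e-3
print(eta_min(Tc, h), (h / (2 * b0)) ** (1 / 3))
```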
It is noteworthy that although the critical exponents so obtained are incorrect for many models and systems, they correctly satisfy various exponent equalities such as the Rushbrooke equality : {\displaystyle \alpha +2\beta +\gamma =2} . Landau theory can also be used to study first-order transitions . There are two different formulations, depending on whether or not the system is symmetric under a change in sign of the order parameter. Here we consider the case where the system has a symmetry and the energy is invariant when the order parameter changes sign. A first-order transition will arise if the quartic term in {\displaystyle F} is negative. To ensure that the free energy remains positive at large {\displaystyle \eta } , one must carry the free-energy expansion to sixth order, [ 5 ] [ 6 ] {\displaystyle F(T,\eta )=A(T)\eta ^{2}-B_{0}\eta ^{4}+C_{0}\eta ^{6},} where {\displaystyle A(T)=A_{0}(T-T_{0})} , and {\displaystyle T_{0}} is some temperature at which {\displaystyle A(T)} changes sign. We denote this temperature by {\displaystyle T_{0}} and not {\displaystyle T_{c}} , since it will emerge below that it is not the temperature of the first-order transition, and since there is no critical point, the notion of a "critical temperature" is misleading to begin with. {\displaystyle A_{0},B_{0},} and {\displaystyle C_{0}} are positive coefficients. We analyze this free energy functional as follows: (i) For {\displaystyle T>T_{0}} , the {\displaystyle \eta ^{2}} and {\displaystyle \eta ^{6}} terms are concave upward for all {\displaystyle \eta } , while the {\displaystyle \eta ^{4}} term is concave downward. Thus for sufficiently high temperatures {\displaystyle F} is concave upward for all {\displaystyle \eta } , and the equilibrium solution is {\displaystyle \eta =0} . (ii) For {\displaystyle T<T_{0}} , both the {\displaystyle \eta ^{2}} and {\displaystyle \eta ^{4}} terms are negative, so {\displaystyle \eta =0} is a local maximum, and the minimum of {\displaystyle F} is at some non-zero value {\displaystyle \pm \eta _{0}(T)} , with {\displaystyle F(T_{0},\eta _{0}(T_{0}))<0} . (iii) For {\displaystyle T} just above {\displaystyle T_{0}} , {\displaystyle \eta =0} turns into a local minimum, but the minimum at {\displaystyle \eta _{0}(T)} continues to be the global minimum since it has a lower free energy. It follows that as the temperature is raised above {\displaystyle T_{0}} , the global minimum cannot continuously evolve from {\displaystyle \eta _{0}(T)} to 0. Rather, at some intermediate temperature {\displaystyle T_{*}} , the minima at {\displaystyle \eta _{0}(T_{*})} and {\displaystyle \eta =0} must become degenerate. For {\displaystyle T>T_{*}} , the global minimum will jump discontinuously from {\displaystyle \eta _{0}(T_{*})} to 0. To find {\displaystyle T_{*}} , we demand that the free energy be zero at {\displaystyle \eta =\eta _{0}(T_{*})} (just like the {\displaystyle \eta =0} solution), and furthermore that this point should be a local minimum. These two conditions yield two equations, which are satisfied when {\displaystyle \eta ^{2}(T_{*})={B_{0}}/{2C_{0}}} . The same equations also imply that {\displaystyle A(T_{*})=A_{0}(T_{*}-T_{0})=B_{0}^{2}/4C_{0}} .
That is, {\displaystyle T_{*}=T_{0}+{\frac {B_{0}^{2}}{4A_{0}C_{0}}}.} From this analysis both points made above can be seen explicitly. First, the order parameter suffers a discontinuous jump from {\displaystyle (B_{0}/2C_{0})^{1/2}} to 0. Second, the transition temperature {\displaystyle T_{*}} is not the same as the temperature {\displaystyle T_{0}} where {\displaystyle A(T)} vanishes. At temperatures below the transition temperature, {\displaystyle T<T_{*}} , the order parameter is given by {\displaystyle \eta _{0}^{2}(T)={\frac {B_{0}+{\sqrt {B_{0}^{2}-3A(T)C_{0}}}}{3C_{0}}},} which shows the clear discontinuity associated with the order parameter as a function of the temperature. To further demonstrate that the transition is first-order, one can show that the free energy for this order parameter is continuous at the transition temperature {\displaystyle T_{*}} , but its first derivative (the entropy) suffers from a discontinuity, reflecting the existence of a non-zero latent heat. Next we consider the case where the system does not have a symmetry. In this case there is no reason to keep only even powers of {\displaystyle \eta } in the expansion of {\displaystyle F} , and a cubic term must be allowed. (The linear term can always be eliminated by a shift {\displaystyle \eta \to \eta +{\text{constant}}} .) We thus consider a free energy functional {\displaystyle F(T,\eta )=A(T)\eta ^{2}-C_{0}\eta ^{3}+B_{0}\eta ^{4}.} Once again {\displaystyle A(T)=A_{0}(T-T_{0})} , and {\displaystyle A_{0},B_{0},C_{0}} are all positive. The sign of the cubic term can always be chosen to be negative, as we have done, by reversing the sign of {\displaystyle \eta } if necessary. We analyze this free energy functional as follows: (i) For {\displaystyle T<T_{0}} , we have a local maximum at {\displaystyle \eta =0} , and since the free energy is bounded below, there must be two local minima at nonzero values {\displaystyle \eta _{-}(T)<0} and {\displaystyle \eta _{+}(T)>0} . The cubic term ensures that {\displaystyle \eta _{+}} is the global minimum since it is deeper. (ii) For {\displaystyle T} just above {\displaystyle T_{0}} , the minimum at {\displaystyle \eta _{-}} disappears, the maximum at {\displaystyle \eta =0} turns into a local minimum, but the minimum at {\displaystyle \eta _{+}} persists and continues to be the global minimum. As the temperature is further raised, {\displaystyle F(T,\eta _{+}(T))} rises until it equals zero at some temperature {\displaystyle T_{*}} . At {\displaystyle T_{*}} we get a discontinuous jump in the global minimum from {\displaystyle \eta _{+}(T_{*})} to 0. (The minima cannot coalesce, for that would require the first three derivatives of {\displaystyle F} to vanish at {\displaystyle \eta =0} .) To find {\displaystyle T_{*}} , we demand that the free energy be zero at {\displaystyle \eta =\eta _{+}(T_{*})} (just like the {\displaystyle \eta =0} solution), and furthermore that this point should be a local minimum. These two conditions yield two equations, which are satisfied when {\displaystyle \eta (T_{*})={C_{0}}/{2B_{0}}} . The same equations also imply that {\displaystyle A(T_{*})=A_{0}(T_{*}-T_{0})=C_{0}^{2}/4B_{0}} . That is, {\displaystyle T_{*}=T_{0}+{\frac {C_{0}^{2}}{4A_{0}B_{0}}}.} As in the symmetric case the order parameter suffers a discontinuous jump from {\displaystyle (C_{0}/2B_{0})} to 0.
Second, the transition temperature {\displaystyle T_{*}} is not the same as the temperature {\displaystyle T_{0}} where {\displaystyle A(T)} vanishes. It was known experimentally that the liquid–gas coexistence curve and the ferromagnet magnetization curve both exhibited a scaling relation of the form {\displaystyle |T-T_{c}|^{\beta }} , where {\displaystyle \beta } was mysteriously the same for both systems. This is the phenomenon of universality . It was also known that simple liquid–gas models are exactly mappable to simple magnetic models, which implied that the two systems possess the same symmetries. It then followed from Landau theory why these two apparently disparate systems should have the same critical exponents, despite having different microscopic parameters. It is now known that the phenomenon of universality arises for other reasons (see Renormalization group ). In fact, Landau theory predicts the incorrect critical exponents for the Ising and liquid–gas systems. The great virtue of Landau theory is that it makes specific predictions for what kind of non-analytic behavior one should see when the underlying free energy is analytic. Then, all the non-analyticity at the critical point, the critical exponents, are because the equilibrium value of the order parameter changes non-analytically, as a square root, whenever the free energy loses its unique minimum. The extension of Landau theory to include fluctuations in the order parameter shows that Landau theory is only strictly valid near the critical points of ordinary systems with spatial dimensions higher than 4. This is the upper critical dimension , and it can be much higher than four in more finely tuned phase transitions. In Mukamel 's analysis of the isotropic Lifshitz point, the critical dimension is 8. This is because Landau theory is a mean field theory , and does not include long-range correlations. This theory does not explain non-analyticity at the critical point, but when applied to the superfluid and superconductor phase transitions, Landau's theory provided inspiration for another theory, the Ginzburg–Landau theory of superconductivity . Consider the Ising model free energy above. Assume that the order parameter {\displaystyle \Psi } and external magnetic field, {\displaystyle h} , may have spatial variations. Now, the free energy of the system can be assumed to take a modified form that includes a term in the gradient of the order parameter, integrated over space, where {\displaystyle D} is the total spatial dimensionality. Assume that, for a localized external magnetic perturbation {\displaystyle h(x)\rightarrow 0+h_{0}\delta (x)} , the order parameter takes the form {\displaystyle \psi (x)\rightarrow \psi _{0}+\phi (x)} . Then one finds that the fluctuation {\displaystyle \phi (x)} in the order parameter corresponds to the order–order correlation. Hence, neglecting this fluctuation (like in the earlier mean-field approach) corresponds to neglecting the order–order correlation, which diverges near the critical point. One can also solve [ 7 ] for {\displaystyle \phi (x)} , from which the scaling exponent {\displaystyle \nu } for the correlation length {\displaystyle \xi \sim (T-T_{c})^{-\nu }} can be deduced.
From these, the Ginzburg criterion for the upper critical dimension for the validity of the Ising mean-field Landau theory (the one without long-range correlation) can be calculated as {\displaystyle D\geq 2+2{\frac {\beta }{\nu }}.} In our current Ising model, mean-field Landau theory gives {\displaystyle \beta =1/2=\nu } , and so it (the Ising mean-field Landau theory) is valid only for spatial dimensionality greater than or equal to 4 (at the marginal value {\displaystyle D=4} , there are small corrections to the exponents). This modified version of mean-field Landau theory is sometimes also referred to as the Landau–Ginzburg theory of Ising phase transitions. As a clarification, there is also a Ginzburg–Landau theory specific to the superconductivity phase transition, which also includes fluctuations.
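The first-order jump discussed in the symmetric case above can likewise be made visible by a brute-force scan of the sixth-order free energy. The coefficients below (A₀ = B₀ = C₀ = 1, T₀ = 0) are our own arbitrary choice, for which T∗ = 1/4:

```python
A0, B0, C0, T0 = 1.0, 1.0, 1.0, 0.0
T_star = T0 + B0 ** 2 / (4 * A0 * C0)       # predicted transition temperature, 0.25

def eta_global(T):
    """Global minimizer of F = A0*(T - T0)*eta^2 - B0*eta^4 + C0*eta^6."""
    grid = (i * 1e-4 for i in range(0, 20001))   # eta >= 0 suffices by symmetry
    return min(grid, key=lambda x: A0 * (T - T0) * x * x - B0 * x ** 4 + C0 * x ** 6)

for T in (0.20, 0.2499, 0.2501, 0.30):
    print(T, eta_global(T))
# Just below T* = 0.25 the minimum sits near sqrt(B0/(2*C0)) ~ 0.7071;
# just above T* it jumps discontinuously to 0.
```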
https://en.wikipedia.org/wiki/Landau_theory
Landauer's principle is a physical principle pertaining to a lower theoretical limit of energy consumption of computation . It holds that an irreversible change in information stored in a computer, such as merging two computational paths, dissipates a minimum amount of heat to its surroundings. [ 1 ] It is hypothesized that energy consumption below this lower bound would require the development of reversible computing . The principle was first proposed by Rolf Landauer in 1961. Landauer's principle states that the minimum energy needed to erase one bit of information is proportional to the temperature at which the system is operating. Specifically, the energy needed for this computational task is given by {\displaystyle E\geq k_{\text{B}}T\ln 2,} where {\displaystyle k_{\text{B}}} is the Boltzmann constant and {\displaystyle T} is the temperature in Kelvin . [ 2 ] At room temperature , the Landauer limit represents an energy of approximately 0.018 eV (2.9 × 10 −21 J). As of 2012, modern computers use about a billion times as much energy per operation. [ 3 ] [ 4 ] Rolf Landauer first proposed the principle in 1961 while working at IBM . [ 5 ] He justified and stated important limits to an earlier conjecture by John von Neumann . This refinement is sometimes called the Landauer bound, or Landauer limit. In 2008 and 2009, researchers showed that Landauer's principle can be derived from the second law of thermodynamics and the entropy change associated with information gain, developing the thermodynamics of quantum and classical feedback-controlled systems. [ 6 ] [ 7 ] In 2011, the principle was generalized to show that while information erasure requires an increase in entropy, this increase could theoretically occur at no energy cost. [ 8 ] Instead, the cost can be taken in another conserved quantity , such as angular momentum . In a 2012 article published in Nature , a team of physicists from the École normale supérieure de Lyon , University of Augsburg and the University of Kaiserslautern reported the first measurement of the tiny amount of heat released when an individual bit of data is erased. [ 9 ] In 2014, physical experiments tested Landauer's principle and confirmed its predictions. [ 10 ] In 2016, researchers used a laser probe to measure the amount of energy dissipation that resulted when a nanomagnetic bit flipped from off to on. Flipping the bit required about 0.026 eV (4.2 × 10 −21 J) at 300 K, which is just 44% above the Landauer minimum. [ 11 ] A 2018 article published in Nature Physics features a Landauer erasure performed at cryogenic temperatures ( T = 1 K) on an array of high-spin ( S = 10) quantum molecular magnets . The array is made to act as a spin register where each nanomagnet encodes a single bit of information. [ 12 ] The experiment has laid the foundations for the extension of the validity of the Landauer principle to the quantum realm. Owing to the fast dynamics and low "inertia" of the single spins used in the experiment, the researchers also showed how an erasure operation can be carried out at the lowest possible thermodynamic cost—that imposed by the Landauer principle—and at a high speed. [ 12 ] [ 1 ] The principle is widely accepted as physical law , but it has been challenged for using circular reasoning and faulty assumptions.
[ 13 ] [ 14 ] [ 15 ] [ 16 ] Others [ 1 ] [ 17 ] [ 18 ] have defended the principle, and Sagawa and Ueda (2008) [ 6 ] and Cao and Feito (2009) [ 7 ] have shown that Landauer's principle is a consequence of the second law of thermodynamics and the entropy reduction associated with information gain. On the other hand, recent advances in non-equilibrium statistical physics have established that there is no a priori relationship between logical and thermodynamic reversibility. [ 19 ] It is possible for a physical process to be logically reversible but thermodynamically irreversible, and likewise for a process to be logically irreversible but thermodynamically reversible. At best, the benefits of implementing a computation with a logically reversible system are nuanced. [ 20 ] In 2016, researchers at the University of Perugia claimed to have demonstrated a violation of Landauer's principle, [ 21 ] though their conclusions were disputed. [ 22 ]
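As a quick numerical illustration (a sketch written for this summary, not taken from any source), the Landauer bound k_B T ln 2 can be evaluated at room temperature and at the 1 K cryogenic temperature of the 2018 experiment:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K (exact in the 2019 SI)
EV = 1.602176634e-19  # joules per electronvolt

def landauer_limit(temperature_kelvin: float) -> float:
    """Minimum heat (in joules) dissipated by erasing one bit at temperature T."""
    return K_B * temperature_kelvin * math.log(2)

for T in (300.0, 1.0):  # room temperature and the cryogenic experiment
    e_joules = landauer_limit(T)
    print(f"T = {T:5.1f} K: {e_joules:.3e} J = {e_joules / EV:.5f} eV")
# At 300 K this gives ~2.87e-21 J (~0.0179 eV), matching the ~0.018 eV
# room-temperature figure quoted above.
```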
https://en.wikipedia.org/wiki/Landauer's_principle
In mesoscopic physics , the Landauer formula , named after Rolf Landauer , who first suggested its prototype in 1957, [ 1 ] is a formula relating the electrical resistance of a quantum conductor to the scattering properties of the conductor. [ 2 ] It is the equivalent of Ohm's law for mesoscopic circuits with spatial dimensions on the order of, or smaller than, the phase coherence length of charge carriers ( electrons and holes ). In metals, the phase coherence length is of the order of a micrometre for temperatures below 1 K . [ 3 ] In the simplest case, where the system has only two terminals and the scattering matrix of the conductor does not depend on energy, the formula reads G = G 0 ∑ n T n , {\displaystyle G=G_{0}\sum _{n}T_{n},} where G {\displaystyle G} is the electrical conductance, G 0 = e 2 / ( π ℏ ) ≈ 7.75 × 10 − 5 Ω − 1 {\displaystyle G_{0}=e^{2}/(\pi \hbar )\approx 7.75\times 10^{-5}\Omega ^{-1}} is the conductance quantum , T n {\displaystyle T_{n}} are the transmission eigenvalues of the channels, and the sum runs over all transport channels in the conductor. This formula is very simple and physically sensible: the conductance of a nanoscale conductor is given by the sum of all the transmission possibilities that an electron has when propagating with an energy equal to the chemical potential , E = μ {\displaystyle E=\mu } . [ 4 ] A generalization of the Landauer formula for multiple terminals is the Landauer–Büttiker formula , [ 5 ] [ 4 ] proposed by Markus Büttiker . If terminal j {\displaystyle j} has voltage V j {\displaystyle V_{j}} (that is, its chemical potential is e V j {\displaystyle eV_{j}} and differs from the chemical potential of terminal i {\displaystyle i} ), and T i , j {\displaystyle T_{i,j}} is the sum of transmission probabilities from terminal i {\displaystyle i} to terminal j {\displaystyle j} (note that T i , j {\displaystyle T_{i,j}} may or may not equal T j , i {\displaystyle T_{j,i}} depending on the presence of a magnetic field), the net current leaving terminal i {\displaystyle i} is I i = e 2 π ℏ ∑ j ≠ i ( T i , j V i − T j , i V j ) . {\displaystyle I_{i}={\frac {e^{2}}{\pi \hbar }}\sum _{j\neq i}\left(T_{i,j}V_{i}-T_{j,i}V_{j}\right).} In the case of a system with two terminals, the contact resistivity symmetry yields T 1 , 2 = T 2 , 1 ≡ T , {\displaystyle T_{1,2}=T_{2,1}\equiv T,} and the generalized formula can be rewritten as I 1 = e 2 π ℏ T ( V 1 − V 2 ) = − I 2 , {\displaystyle I_{1}={\frac {e^{2}}{\pi \hbar }}T\left(V_{1}-V_{2}\right)=-I_{2},} which leads us to G = e 2 π ℏ T , {\displaystyle G={\frac {e^{2}}{\pi \hbar }}T,} which implies that the scattering matrix of a system with two terminals is always symmetrical, even in the presence of a magnetic field. The reversal of the magnetic field only changes the propagation direction of the edge states , without affecting the transmission probability. As an example, in a three-contact system, the net current leaving contact 1 can be written as I 1 = e 2 π ℏ [ ( T 1 , 2 + T 1 , 3 ) V 1 − T 2 , 1 V 2 − T 3 , 1 V 3 ] , {\displaystyle I_{1}={\frac {e^{2}}{\pi \hbar }}\left[(T_{1,2}+T_{1,3})V_{1}-T_{2,1}V_{2}-T_{3,1}V_{3}\right],} which is the carriers leaving contact 1 with a potential V 1 {\displaystyle V_{1}} , from which we subtract the carriers from contacts 2 and 3, with potentials V 2 {\displaystyle V_{2}} and V 3 {\displaystyle V_{3}} respectively, going into contact 1. In the absence of an applied magnetic field, the generalized equation would be the result of applying Kirchhoff's law to a system of conductances G i j = ( e 2 ) / ( 2 π ℏ ) T i j {\displaystyle G_{ij}=(e^{2})/(2\pi \hbar )T_{ij}} . However, in the presence of a magnetic field, time reversal symmetry is broken and therefore T i j ≠ T j i {\displaystyle T_{ij}\neq T_{ji}} . With more than two terminals in the system, the two-terminal symmetry is broken. In the example given earlier, T 21 ≠ T 32 + T 13 {\displaystyle T_{21}\neq T_{32}+T_{13}} . This is because the terminals "recycle" the incoming electrons: phase coherence is lost when an electron is absorbed at a terminal and another electron is emitted towards terminal 1.
However, since the carriers are moving through edge states, one can see that T 21 B = T 12 − B {\displaystyle T_{21}^{B}=T_{12}^{-B}} even in the presence of a third terminal. This is because, under magnetic field inversion, the edge states simply reverse their propagation direction. This holds in particular if terminal 3 is taken as a perfect potential probe.
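A minimal sketch (Python, written for this summary rather than taken from any source) of the two-terminal formula: given transmission eigenvalues evaluated at the chemical potential, it returns the conductance in siemens, using the spin-degenerate conductance quantum e²/(πℏ) = 2e²/h quoted above.

```python
# Two-terminal Landauer formula: G = G0 * sum_n T_n, with
# G0 = e^2/(pi*hbar) = 2e^2/h the (spin-degenerate) conductance quantum.

G0 = 7.748091729e-5  # conductance quantum in siemens

def landauer_conductance(transmissions):
    """Conductance of a phase-coherent two-terminal conductor.

    transmissions: iterable of transmission eigenvalues T_n, each in [0, 1].
    """
    ts = list(transmissions)
    if any(not 0.0 <= t <= 1.0 for t in ts):
        raise ValueError("transmission eigenvalues must lie in [0, 1]")
    return G0 * sum(ts)

print(landauer_conductance([1.0]))        # one ballistic channel: ~7.75e-5 S
print(landauer_conductance([0.9, 0.35]))  # two partially open channels
```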
https://en.wikipedia.org/wiki/Landauer_formula
In mathematics , the Landau–Kolmogorov inequality , named after Edmund Landau and Andrey Kolmogorov , is the following family of interpolation inequalities between different derivatives of a function f defined on a subset T of the real numbers: [ 1 ] ‖ f ( k ) ‖ L ∞ ( T ) ≤ C ( n , k , T ) ‖ f ‖ L ∞ ( T ) 1 − k / n ‖ f ( n ) ‖ L ∞ ( T ) k / n for 1 ≤ k < n . {\displaystyle \|f^{(k)}\|_{L_{\infty }(T)}\leq C(n,k,T)\,\|f\|_{L_{\infty }(T)}^{1-k/n}\,\|f^{(n)}\|_{L_{\infty }(T)}^{k/n}\quad {\text{for }}1\leq k<n.} For k = 1, n = 2 and T = [ c ,∞) or T = R , the inequality was first proved by Edmund Landau [ 2 ] with the sharp constants C (2, 1, [ c ,∞)) = 2 and C (2, 1, R ) = √2. Following contributions by Jacques Hadamard and Georgiy Shilov , Andrey Kolmogorov found the sharp constants for arbitrary n , k : [ 3 ] C ( n , k , R ) = a n − k a n − 1 + k / n , {\displaystyle C(n,k,\mathbf {R} )=a_{n-k}\,a_{n}^{-1+k/n},} where a n are the Favard constants . Following work by Matorin and others, the extremising functions were found by Isaac Jacob Schoenberg ; [ 4 ] explicit forms for the sharp constants are, however, still unknown. There are many generalisations, which are of the form ‖ f ( k ) ‖ L q ( T ) ≤ K ‖ f ‖ L p ( T ) α ‖ f ( n ) ‖ L r ( T ) 1 − α . {\displaystyle \|f^{(k)}\|_{L_{q}(T)}\leq K\,\|f\|_{L_{p}(T)}^{\alpha }\,\|f^{(n)}\|_{L_{r}(T)}^{1-\alpha }.} Here all three norms can be different from each other (from L 1 to L ∞ , with p = q = r = ∞ in the classical case) and T may be the real axis, semiaxis or a closed segment. The Kallman–Rota inequality generalizes the Landau–Kolmogorov inequalities from the derivative operator to more general contractions on Banach spaces . [ 5 ]
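As a sanity check (a sketch written for this summary, not from the source), the classical case k = 1, n = 2 on T = R says ‖f′‖∞ ≤ √2 ‖f‖∞^{1/2} ‖f″‖∞^{1/2}; for f(x) = sin(ax) all three sup-norms are known in closed form, so the bound can be verified directly:

```python
import math

# Classical Landau case on T = R:
#   ||f'||_inf <= sqrt(2) * ||f||_inf**(1/2) * ||f''||_inf**(1/2),
# checked for f(x) = sin(a*x): ||f|| = 1, ||f'|| = a, ||f''|| = a**2.

def check(a: float) -> None:
    lhs = a                                              # ||f'||
    rhs = math.sqrt(2.0) * math.sqrt(1.0) * math.sqrt(a * a)
    print(f"a = {a}: {lhs:.3f} <= {rhs:.3f} ? {lhs <= rhs}")

for a in (0.5, 1.0, 3.0):
    check(a)
# The ratio lhs/rhs equals 1/sqrt(2) for every a, so the sine family does
# not saturate the bound; Schoenberg's extremisers are different functions.
```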
https://en.wikipedia.org/wiki/Landau–Kolmogorov_inequality
In fluid dynamics , Landau–Levich flow , or the Landau–Levich problem , describes the flow created by a plate that is pulled out of a liquid surface. Landau–Levich flow finds many applications in thin-film coating. The solution to the problem was described by Lev Landau and Veniamin Levich in 1942. [ 1 ] [ 2 ] [ 3 ] The problem assumes that the plate is dragged out of the liquid slowly, so that the three major forces in balance are the viscous force, the force due to gravity, and the force due to surface tension. Landau and Levich split the entire flow into two regimes, a lower regime and an upper regime. In the lower regime, close to the liquid surface, the flow is assumed to be static, leading to the problem of the Young–Laplace equation (a static meniscus). In the upper regime, far away from the liquid surface, the liquid layer attached to the plate is very thin, and since the velocity of the plate is also small, this regime falls under the approximation of lubrication theory . The solutions of these two problems are then matched using the method of matched asymptotic expansions .
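The matching yields the classical Landau–Levich–Derjaguin film-thickness law, h ≈ 0.9458 ℓ_c Ca^{2/3}, with capillary number Ca = μU/σ and capillary length ℓ_c = √(σ/ρg); this law is standard in the coating literature but is not stated explicitly in the summary above, so treat the sketch below (Python, with illustrative water-like parameters) as an assumption-laden illustration rather than part of the source:

```python
import math

# Landau-Levich-Derjaguin law for slow plate withdrawal (valid for Ca << 1):
#   h = 0.9458 * l_c * Ca**(2/3),
# with Ca = mu*U/sigma (capillary number) and l_c = sqrt(sigma/(rho*g)).

def film_thickness(mu, U, sigma, rho, g=9.81):
    """Deposited film thickness in metres for a plate pulled at speed U."""
    ca = mu * U / sigma                  # capillary number (dimensionless)
    l_c = math.sqrt(sigma / (rho * g))   # capillary length, m
    if ca > 0.01:
        print(f"warning: Ca = {ca:.3g} may be too large for the theory")
    return 0.9458 * l_c * ca ** (2.0 / 3.0)

# Example: a plate pulled from a water-like liquid at 1 mm/s.
h = film_thickness(mu=1.0e-3, U=1.0e-3, sigma=0.072, rho=1000.0)
print(f"film thickness ~ {h * 1e6:.2f} micrometres")  # ~1.5 micrometres
```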
https://en.wikipedia.org/wiki/Landau–Levich_problem
In solid-state physics , the Landau–Lifshitz equation ( LLE ), named for Lev Landau and Evgeny Lifshitz , is a partial differential equation describing the time evolution of magnetism in solids, depending on 1 time variable and 1, 2, or 3 space variables. The LLE describes an anisotropic magnet. The equation is described in ( Faddeev & Takhtajan 2007 , chapter 8) as follows: it is an equation for a vector field S , in other words a function on R 1+ n taking values in R 3 . The equation depends on a fixed symmetric 3-by-3 matrix J , usually assumed to be diagonal ; that is, J = diag ⁡ ( J 1 , J 2 , J 3 ) {\displaystyle J=\operatorname {diag} (J_{1},J_{2},J_{3})} . The LLE is then given by Hamilton's equation of motion for the Hamiltonian H = 1 2 ∫ [ ∑ i ( ∂ S ∂ x i ) 2 − J ( S ) ] d x {\displaystyle H={\frac {1}{2}}\int \left[\sum _{i}\left({\frac {\partial \mathbf {S} }{\partial x_{i}}}\right)^{2}-J(\mathbf {S} )\right]dx} (where J ( S ) is the quadratic form of J applied to the vector S ), which is ∂ S ∂ t = S ∧ ∑ i ∂ 2 S ∂ x i 2 + S ∧ J S . {\displaystyle {\frac {\partial \mathbf {S} }{\partial t}}=\mathbf {S} \wedge \sum _{i}{\frac {\partial ^{2}\mathbf {S} }{\partial x_{i}^{2}}}+\mathbf {S} \wedge J\mathbf {S} .} In 1+1 dimensions, this equation is S t = S ∧ S x x + S ∧ J S . {\displaystyle \mathbf {S} _{t}=\mathbf {S} \wedge \mathbf {S} _{xx}+\mathbf {S} \wedge J\mathbf {S} .} In 2+1 dimensions, this equation takes the form S t = S ∧ ( S x x + S y y ) + S ∧ J S , {\displaystyle \mathbf {S} _{t}=\mathbf {S} \wedge (\mathbf {S} _{xx}+\mathbf {S} _{yy})+\mathbf {S} \wedge J\mathbf {S} ,} which is the (2+1)-dimensional LLE. For the (3+1)-dimensional case, the LLE looks like S t = S ∧ ( S x x + S y y + S z z ) + S ∧ J S . {\displaystyle \mathbf {S} _{t}=\mathbf {S} \wedge (\mathbf {S} _{xx}+\mathbf {S} _{yy}+\mathbf {S} _{zz})+\mathbf {S} \wedge J\mathbf {S} .} In the general case the LLE is nonintegrable, but it admits two integrable reductions.
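A minimal numerical sketch (Python; the explicit Euler time stepping, periodic boundary conditions, parameter values, and per-step renormalization of S are illustrative choices, not from the article) of the (1+1)-dimensional equation S_t = S ∧ S_xx + S ∧ JS:

```python
import numpy as np

# Explicit integration of the 1+1 LLE: S_t = S x S_xx + S x (J S),
# for S(x, t) in R^3 with |S| = 1, J = diag(J1, J2, J3), periodic in x.

def step(S, J, dx, dt):
    """Advance the spin field S (shape (N, 3)) by one Euler step."""
    S_xx = (np.roll(S, -1, axis=0) - 2 * S + np.roll(S, 1, axis=0)) / dx**2
    rhs = np.cross(S, S_xx) + np.cross(S, S * J)  # diagonal J: (J S)_i = J_i S_i
    S_new = S + dt * rhs
    return S_new / np.linalg.norm(S_new, axis=1, keepdims=True)  # keep |S| = 1

N, dx, dt = 128, 0.1, 1e-4
J = np.array([0.0, 0.0, 1.0])  # easy-axis anisotropy along z (illustrative)
x = np.arange(N) * dx
# Initial condition: small transverse ripple on a uniform state along z.
S = np.stack([0.1 * np.sin(2 * np.pi * x / (N * dx)),
              np.zeros(N),
              np.ones(N)], axis=1)
S /= np.linalg.norm(S, axis=1, keepdims=True)

for _ in range(1000):
    S = step(S, J, dx, dt)
print("max |S_z - 1| after evolution:", np.abs(S[:, 2] - 1).max())
```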
https://en.wikipedia.org/wiki/Landau–Lifshitz_model
In physics, the Landau–Lifshitz–Gilbert equation (usually abbreviated as LLG equation), named for Lev Landau , Evgeny Lifshitz , and T. L. Gilbert , is a name used for a differential equation describing the dynamics (typically the precessional motion ) of magnetization M in a solid . It is a modified version by Gilbert of the original equation of Landau and Lifshitz. [ 1 ] The LLG equation is similar to the Bloch equation , but they differ in the form of the damping term. The LLG equation describes a more general scenario of magnetization dynamics beyond the simple Larmor precession . In particular, the effective field driving the precessional motion of M is not restricted to real magnetic fields; it incorporates a wide range of mechanisms including magnetic anisotropy , exchange interaction , and so on. The various forms of the LLG equation are commonly used in micromagnetics to model the effects of a magnetic field and other magnetic interactions on ferromagnetic materials . It provides a practical way to model the time-domain behavior of magnetic elements. Recent developments generalize the LLG equation to include the influence of spin-polarized currents in the form of spin-transfer torque . [ 2 ] In a ferromagnet , the magnitude of the magnetization M at each spacetime point is approximated by the saturation magnetization M s (although it can be smaller when averaged over a chunk of volume). The LLG equation describes the rotation of the magnetization in response to the effective field H eff and accounts for not only a real magnetic field but also internal magnetic interactions such as exchange and anisotropy. An earlier, but equivalent, equation (the Landau–Lifshitz equation) was introduced by Landau & Lifshitz (1935) : [ 1 ] d M d t = − γ M × H e f f − λ M × ( M × H e f f ) , {\displaystyle {\frac {d\mathbf {M} }{dt}}=-\gamma \,\mathbf {M} \times \mathbf {H} _{\mathrm {eff} }-\lambda \,\mathbf {M} \times \left(\mathbf {M} \times \mathbf {H} _{\mathrm {eff} }\right),} where γ is the electron gyromagnetic ratio and λ is a phenomenological damping parameter, often replaced by λ = α γ M s , {\displaystyle \lambda ={\frac {\alpha \gamma }{M_{\mathrm {s} }}},} where α is a dimensionless constant called the damping factor. The effective field H eff is a combination of the external magnetic field, the demagnetizing field , and various internal magnetic interactions involving quantum mechanical effects, which is typically defined as the functional derivative of the magnetic free energy with respect to the local magnetization M . To solve this equation, additional conditions for the demagnetizing field must be included to accommodate the geometry of the material. In 1955 Gilbert replaced the damping term in the Landau–Lifshitz (LL) equation by one that depends on the time derivative of the magnetization: d M d t = − γ M × H e f f + η M × d M d t . {\displaystyle {\frac {d\mathbf {M} }{dt}}=-\gamma \,\mathbf {M} \times \mathbf {H} _{\mathrm {eff} }+\eta \,\mathbf {M} \times {\frac {d\mathbf {M} }{dt}}.} This is the Landau–Lifshitz–Gilbert (LLG) equation, where η is the damping parameter, which is characteristic of the material. It can be transformed into the Landau–Lifshitz equation: [ 3 ] d M d t = − γ ′ M × H e f f − λ M × ( M × H e f f ) , {\displaystyle {\frac {d\mathbf {M} }{dt}}=-\gamma '\,\mathbf {M} \times \mathbf {H} _{\mathrm {eff} }-\lambda \,\mathbf {M} \times \left(\mathbf {M} \times \mathbf {H} _{\mathrm {eff} }\right),} where γ ′ = γ 1 + η 2 M s 2 and λ = γ η 1 + η 2 M s 2 . {\displaystyle \gamma '={\frac {\gamma }{1+\eta ^{2}M_{\mathrm {s} }^{2}}}\quad {\text{and}}\quad \lambda ={\frac {\gamma \eta }{1+\eta ^{2}M_{\mathrm {s} }^{2}}}.} In this form of the LL equation, the precessional term γ' depends on the damping term. This better represents the behavior of real ferromagnets when the damping is large. [ 4 ] [ 5 ] In 1996 John Slonczewski expanded the model to account for the spin-transfer torque , i.e. the torque induced upon the magnetization by spin-polarized current flowing through the ferromagnet. This is commonly written in terms of the unit moment defined by m = M / M S : d m d t = − γ m × H e f f + α m × d m d t + τ ∥ m × ( x × m ) + τ ⊥ m × x , {\displaystyle {\frac {d\mathbf {m} }{dt}}=-\gamma \,\mathbf {m} \times \mathbf {H} _{\mathrm {eff} }+\alpha \,\mathbf {m} \times {\frac {d\mathbf {m} }{dt}}+\tau _{\parallel }\,\mathbf {m} \times (\mathbf {x} \times \mathbf {m} )+\tau _{\perp }\,\mathbf {m} \times \mathbf {x} ,} where α {\displaystyle \alpha } is the dimensionless damping parameter, τ ⊥ {\displaystyle \tau _{\perp }} and τ ∥ {\displaystyle \tau _{\parallel }} are driving torques, and x is the unit vector along the polarization of the current. [ 6 ] [ 7 ]
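A macrospin sketch (Python; the field strength, damping value, time step, and integrator are illustrative assumptions, not taken from the article) showing the damped precession of the unit moment m toward a static effective field along z, using the equivalent explicit Landau–Lifshitz form of the LLG equation:

```python
import numpy as np

# Macrospin LLG in explicit Landau-Lifshitz form (for unit moment |m| = 1):
#   dm/dt = -g' m x H - alpha*g' m x (m x H),  g' = gamma/(1 + alpha^2).

GAMMA = 1.760859e11   # electron gyromagnetic ratio, rad/(s*T)
ALPHA = 0.02          # dimensionless Gilbert damping (illustrative)
H_EFF = np.array([0.0, 0.0, 1.0])  # effective field in tesla (illustrative)

def dmdt(m):
    g = GAMMA / (1.0 + ALPHA**2)
    prec = -g * np.cross(m, H_EFF)                       # precession
    damp = -ALPHA * g * np.cross(m, np.cross(m, H_EFF))  # relaxation
    return prec + damp

# RK4 integration; renormalize to counter numerical drift of |m|.
m = np.array([1.0, 0.0, 0.0])  # start perpendicular to the field
dt = 1e-13                     # ~1/350 of a precession period at 1 T
for _ in range(200000):        # 20 ns total, well past the relaxation time
    k1 = dmdt(m); k2 = dmdt(m + 0.5*dt*k1)
    k3 = dmdt(m + 0.5*dt*k2); k4 = dmdt(m + dt*k3)
    m = m + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    m /= np.linalg.norm(m)
print("m after 20 ns:", m)  # m_z has relaxed towards +1 (aligned with H_eff)
```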
https://en.wikipedia.org/wiki/Landau–Lifshitz–Gilbert_equation
Landau–Peierls instability refers to the phenomenon in which the mean square displacements due to thermal fluctuations diverge in the thermodynamic limit; it is named after Lev Landau (1937) and Rudolf Peierls (1934). [ 1 ] [ 2 ] This instability prevails in one-dimensional ordering of atoms/molecules in 3D space, such as 1D crystals and smectics , and also in two-dimensional ordering in 2D space, such as monomolecular films adsorbed at the interface between two isotropic phases. The divergence is logarithmic, which is rather slow, and therefore it is possible in practice to realize substances (such as the smectics ) that are subject to the Landau–Peierls instability. Consider a one-dimensionally ordered crystal in 3D space. The density function is then given by ρ = ρ ( z ) {\displaystyle \rho =\rho (z)} . Since this is a 1D ordering, only the displacement u {\displaystyle u} along the z {\displaystyle z} -direction due to thermal fluctuations can smooth out the density function; displacements in the other two directions are irrelevant. The net change in the free energy due to the fluctuations is given by Δ F = F − F 0 , {\displaystyle \Delta {\mathcal {F}}={\mathcal {F}}-F_{0},} where F 0 {\displaystyle F_{0}} is the free energy without fluctuations. Note that F {\displaystyle {\mathcal {F}}} cannot depend on u {\displaystyle u} itself or be a linear function of ∇ u {\displaystyle \nabla u} , because the first case corresponds to a simple uniform translation and the second case is unstable. Thus, F {\displaystyle {\mathcal {F}}} must be quadratic in the derivatives of u {\displaystyle u} . These terms are given by [ 3 ] Δ F = 1 2 ∫ [ C ( ∂ u ∂ z ) 2 + λ 1 ∂ u ∂ z Δ ⊥ u + λ 2 ( Δ ⊥ u ) 2 ] d V , {\displaystyle \Delta {\mathcal {F}}={\frac {1}{2}}\int \left[C\left({\frac {\partial u}{\partial z}}\right)^{2}+\lambda _{1}{\frac {\partial u}{\partial z}}\Delta _{\perp }u+\lambda _{2}(\Delta _{\perp }u)^{2}\right]dV,} where C {\displaystyle C} , λ 1 {\displaystyle \lambda _{1}} and λ 2 {\displaystyle \lambda _{2}} are material constants; in smectics, where the symmetry z ↦ − z {\displaystyle z\mapsto -z} must be obeyed, the second term has to be set to zero, i.e., λ 1 = 0 {\displaystyle \lambda _{1}=0} . In the Fourier space (in a unit volume), the free energy is just Δ F = 1 2 ∑ k ( C k z 2 + λ 1 k z k ⊥ 2 + λ 2 k ⊥ 4 ) | u k | 2 . {\displaystyle \Delta {\mathcal {F}}={\frac {1}{2}}\sum _{\mathbf {k} }\left(Ck_{z}^{2}+\lambda _{1}k_{z}k_{\perp }^{2}+\lambda _{2}k_{\perp }^{4}\right)|u_{\mathbf {k} }|^{2}.} From the equipartition theorem (each Fourier mode, on average, is allotted an energy equal to k B T / 2 {\displaystyle k_{B}T/2} ), we can deduce that [ 4 ] ⟨ | u k | 2 ⟩ = k B T C k z 2 + λ 1 k z k ⊥ 2 + λ 2 k ⊥ 4 . {\displaystyle \langle |u_{\mathbf {k} }|^{2}\rangle ={\frac {k_{B}T}{Ck_{z}^{2}+\lambda _{1}k_{z}k_{\perp }^{2}+\lambda _{2}k_{\perp }^{4}}}.} The mean square displacement is then given by ⟨ u 2 ⟩ = ∫ d 3 k ( 2 π ) 3 ⟨ | u k | 2 ⟩ , {\displaystyle \langle u^{2}\rangle =\int {\frac {d^{3}k}{(2\pi )^{3}}}\langle |u_{\mathbf {k} }|^{2}\rangle ,} where the integral is cut off at a small wavenumber comparable to the inverse of the linear dimension L {\displaystyle L} of the element undergoing deformation. In the thermodynamic limit, L → ∞ {\displaystyle L\to \infty } , the integral diverges logarithmically. This means that an element at a particular point is displaced through very large distances; the fluctuations therefore smooth out the function ρ ( z ) {\displaystyle \rho (z)} , leaving ρ = {\displaystyle \rho =} constant as the only solution and destroying the 1D ordering.
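A small numerical sketch (Python; the unit material constants, the smectic choice λ1 = 0, and both cutoffs are illustrative assumptions) of the logarithmic growth of ⟨u²⟩ with the system size L, after performing the k_z integration of the expression above analytically:

```python
import math

# Mean-square displacement for a smectic (lambda1 = 0):
#   <u^2> = kT * Int d^3k/(2pi)^3  1/(C k_z^2 + lambda2 k_perp^4).
# The k_z integral gives pi/(sqrt(C*lambda2) k_perp^2), leaving a 1/k_perp
# radial integral cut off below at k_min ~ 1/L, hence ln(L) growth.

KT, C, LAM2 = 1.0, 1.0, 1.0  # illustrative units
K_MAX = 100.0                # microscopic (upper) wavenumber cutoff

def msd(L: float) -> float:
    """<u^2> with the k_z integral done analytically, k_min = 1/L."""
    prefactor = KT / (4.0 * math.pi * math.sqrt(C * LAM2))
    return prefactor * math.log(K_MAX * L)

for L in (1e2, 1e4, 1e6, 1e8):
    print(f"L = {L:.0e}: <u^2> = {msd(L):.4f}")
# Each factor of 100 in L adds the same constant: the divergence is
# logarithmic, i.e. very slow, as stated above.
```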
https://en.wikipedia.org/wiki/Landau–Peierls_instability
The Landau–Placzek ratio is the ratio of the integrated intensity of Rayleigh scattering to the combined integrated intensity of Brillouin scattering in the triplet frequency spectrum of light scattered by homogeneous liquids or gases. The triplet consists of two frequency-shifted Brillouin lines and a central, unshifted Rayleigh line. The triplet structure was explained by Lev Landau and George Placzek in 1934 in a short publication, [ 1 ] [ 2 ] summarizing the major results of their analysis. Landau and Placzek noted in their short paper that a more detailed discussion would be published later, although that paper does not seem to have been published. However, a detailed discussion is provided in Lev Landau and Evgeny Lifshitz 's book. [ 3 ] The Landau–Placzek ratio is defined as R L P = I c 2 I B , {\displaystyle R_{\mathrm {LP} }={\frac {I_{c}}{2I_{B}}},} where I c {\displaystyle I_{c}} is the integrated intensity of the central Rayleigh line and I B {\displaystyle I_{B}} is the integrated intensity of each of the two Brillouin lines. The Landau–Placzek formula provides an approximate theoretical prediction for the Landau–Placzek ratio, [ 4 ] [ 5 ] R L P = c p − c v c v , {\displaystyle R_{\mathrm {LP} }={\frac {c_{p}-c_{v}}{c_{v}}},} where c p {\displaystyle c_{p}} and c v {\displaystyle c_{v}} are the specific heat capacities at constant pressure and constant volume, respectively.
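As a quick worked example (a sketch; the ideal-gas heat-capacity ratios are textbook values, not taken from the article), the formula R = (c_p − c_v)/c_v = γ − 1 can be evaluated directly:

```python
# Landau-Placzek formula: R = (c_p - c_v)/c_v = gamma - 1,
# where gamma = c_p/c_v is the heat-capacity ratio.

def landau_placzek_ratio(gamma: float) -> float:
    """Predicted ratio of Rayleigh to total Brillouin integrated intensity."""
    return gamma - 1.0

print(landau_placzek_ratio(5.0 / 3.0))  # monatomic ideal gas -> ~0.667
print(landau_placzek_ratio(7.0 / 5.0))  # diatomic ideal gas  -> ~0.400
```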
https://en.wikipedia.org/wiki/Landau–Placzek_ratio