In fluid dynamics , Luke's variational principle is a Lagrangian variational description of the motion of surface waves on a fluid with a free surface , under the action of gravity . This principle is named after J.C. Luke, who published it in 1967. [ 1 ] This variational principle is for incompressible and inviscid potential flows , and is used to derive approximate wave models like the mild-slope equation , [ 2 ] or using the averaged Lagrangian approach for wave propagation in inhomogeneous media. [ 3 ]
Luke's Lagrangian formulation can also be recast into a Hamiltonian formulation in terms of the surface elevation and velocity potential at the free surface. [ 4 ] [ 5 ] [ 6 ] This is often used when modelling the spectral density evolution of the free-surface in a sea state , sometimes called wave turbulence .
Both the Lagrangian and Hamiltonian formulations can be extended to include surface tension effects, and by using Clebsch potentials to include vorticity . [ 1 ]
Luke's Lagrangian formulation is for non-linear surface gravity waves on an incompressible, irrotational and inviscid potential flow.
The relevant ingredients, needed in order to describe this flow, are: the velocity potential Φ(x, z, t); the fluid density ρ; the gravitational acceleration g; the horizontal coordinate vector x = (x, y) and the vertical coordinate z; the free-surface elevation η(x, t); and the bed level z = −h(x). Throughout, ∇ denotes the horizontal gradient operator, so the vertical derivative ∂/∂z appears separately.
The Lagrangian $\mathcal{L}$, as given by Luke, is:

$$\mathcal{L} = -\int_{t_0}^{t_1} \left\{ \iiint_{V(t)} \rho \left[ \frac{\partial \Phi}{\partial t} + \frac{1}{2} \left| \boldsymbol{\nabla} \Phi \right|^2 + \frac{1}{2} \left( \frac{\partial \Phi}{\partial z} \right)^2 + g\,z \right] \mathrm{d}x \, \mathrm{d}y \, \mathrm{d}z \right\} \mathrm{d}t.$$
From Bernoulli's principle , this Lagrangian can be seen to be the integral of the fluid pressure over the whole time-dependent fluid domain V ( t ) . This is in agreement with the variational principles for inviscid flow without a free surface, found by Harry Bateman . [ 7 ]
Variation with respect to the velocity potential Φ( x , z , t ) and free-moving surfaces like z = η ( x , t ) results in the Laplace equation for the potential in the fluid interior and all required boundary conditions : kinematic boundary conditions on all fluid boundaries and dynamic boundary conditions on free surfaces. [ 8 ] This may also include moving wavemaker walls and ship motion.
For the case of a horizontally unbounded domain with the free fluid surface at z = η(x, t) and a fixed bed at z = −h(x), Luke's variational principle results in the Lagrangian:

$$\mathcal{L} = -\int_{t_0}^{t_1} \iint \left\{ \int_{-h(\boldsymbol{x})}^{\eta(\boldsymbol{x},t)} \rho \left[ \frac{\partial \Phi}{\partial t} + \frac{1}{2} \left| \boldsymbol{\nabla} \Phi \right|^2 + \frac{1}{2} \left( \frac{\partial \Phi}{\partial z} \right)^2 \right] \mathrm{d}z + \frac{1}{2} \rho g \eta^2 \right\} \mathrm{d}\boldsymbol{x} \, \mathrm{d}t.$$
The bed-level term proportional to h² in the potential energy has been neglected, since it is a constant and does not contribute to the variations.
Below, Luke's variational principle is used to arrive at the flow equations for non-linear surface gravity waves on a potential flow.
Both the variation of the Lagrangian with respect to the velocity potential Φ(x, z, t) and the variation with respect to the surface elevation η(x, t) have to vanish, $\delta \mathcal{L} = 0$. We consider the two variations in turn.
Consider a small variation δΦ in the velocity potential Φ. [8] The resulting variation in the Lagrangian is:

$$\begin{aligned} \delta_\Phi \mathcal{L} &= \mathcal{L}(\Phi + \delta\Phi, \eta) - \mathcal{L}(\Phi, \eta) \\ &= -\int_{t_0}^{t_1} \iint \left\{ \int_{-h(\boldsymbol{x})}^{\eta(\boldsymbol{x},t)} \rho \left( \frac{\partial (\delta\Phi)}{\partial t} + \boldsymbol{\nabla}\Phi \cdot \boldsymbol{\nabla}(\delta\Phi) + \frac{\partial \Phi}{\partial z} \frac{\partial (\delta\Phi)}{\partial z} \right) \mathrm{d}z \right\} \mathrm{d}\boldsymbol{x} \, \mathrm{d}t. \end{aligned}$$
Using the Leibniz integral rule, this becomes, in the case of constant density ρ: [8]

$$\begin{aligned} \delta_\Phi \mathcal{L} =\, &-\rho \int_{t_0}^{t_1} \iint \left\{ \frac{\partial}{\partial t} \int_{-h(\boldsymbol{x})}^{\eta(\boldsymbol{x},t)} \delta\Phi \, \mathrm{d}z + \boldsymbol{\nabla} \cdot \int_{-h(\boldsymbol{x})}^{\eta(\boldsymbol{x},t)} \delta\Phi \, \boldsymbol{\nabla}\Phi \, \mathrm{d}z \right\} \mathrm{d}\boldsymbol{x} \, \mathrm{d}t \\ &+ \rho \int_{t_0}^{t_1} \iint \left\{ \int_{-h(\boldsymbol{x})}^{\eta(\boldsymbol{x},t)} \delta\Phi \left( \boldsymbol{\nabla} \cdot \boldsymbol{\nabla}\Phi + \frac{\partial^2 \Phi}{\partial z^2} \right) \mathrm{d}z \right\} \mathrm{d}\boldsymbol{x} \, \mathrm{d}t \\ &+ \rho \int_{t_0}^{t_1} \iint \left[ \left( \frac{\partial \eta}{\partial t} + \boldsymbol{\nabla}\Phi \cdot \boldsymbol{\nabla}\eta - \frac{\partial \Phi}{\partial z} \right) \delta\Phi \right]_{z=\eta(\boldsymbol{x},t)} \mathrm{d}\boldsymbol{x} \, \mathrm{d}t \\ &+ \rho \int_{t_0}^{t_1} \iint \left[ \left( \boldsymbol{\nabla}\Phi \cdot \boldsymbol{\nabla}h + \frac{\partial \Phi}{\partial z} \right) \delta\Phi \right]_{z=-h(\boldsymbol{x})} \mathrm{d}\boldsymbol{x} \, \mathrm{d}t \\ =\, &0. \end{aligned}$$
The first integral on the right-hand side integrates out to the boundaries, in x and t, of the integration domain and is zero since the variations δΦ are taken to be zero at these boundaries. For variations δΦ which are zero at the free surface and the bed, only the second integral remains; it vanishes for arbitrary δΦ in the fluid interior only if the Laplace equation holds there:

$$\Delta \Phi = 0 \qquad \text{for } -h(\boldsymbol{x}) < z < \eta(\boldsymbol{x},t),$$

with Δ = ∇·∇ + ∂²/∂z² the Laplace operator.
If variations δΦ are considered which are only non-zero at the free surface, only the third integral remains, giving rise to the kinematic free-surface boundary condition:

$$\frac{\partial \eta}{\partial t} + \boldsymbol{\nabla}\Phi \cdot \boldsymbol{\nabla}\eta - \frac{\partial \Phi}{\partial z} = 0 \qquad \text{at } z = \eta(\boldsymbol{x},t).$$
Similarly, variations δΦ only non-zero at the bottom z = −h result in the kinematic bed condition:

$$\boldsymbol{\nabla}\Phi \cdot \boldsymbol{\nabla}h + \frac{\partial \Phi}{\partial z} = 0 \qquad \text{at } z = -h(\boldsymbol{x}).$$
Considering the variation of the Lagrangian with respect to small changes δη gives:

$$\delta_\eta \mathcal{L} = \mathcal{L}(\Phi, \eta + \delta\eta) - \mathcal{L}(\Phi, \eta) = -\int_{t_0}^{t_1} \iint \left[ \rho \, \delta\eta \left( \frac{\partial \Phi}{\partial t} + \frac{1}{2} \left| \boldsymbol{\nabla}\Phi \right|^2 + \frac{1}{2} \left( \frac{\partial \Phi}{\partial z} \right)^2 + g\,\eta \right) \right]_{z=\eta(\boldsymbol{x},t)} \mathrm{d}\boldsymbol{x} \, \mathrm{d}t = 0.$$
This has to be zero for arbitrary δη, giving rise to the dynamic boundary condition at the free surface:

$$\frac{\partial \Phi}{\partial t} + \frac{1}{2} \left| \boldsymbol{\nabla}\Phi \right|^2 + \frac{1}{2} \left( \frac{\partial \Phi}{\partial z} \right)^2 + g\,\eta = 0 \qquad \text{at } z = \eta(\boldsymbol{x},t).$$
This is the Bernoulli equation for unsteady potential flow, applied at the free surface, with the pressure above the free surface being a constant; for simplicity, this constant pressure is taken equal to zero.
The Hamiltonian structure of surface gravity waves on a potential flow was discovered by Vladimir E. Zakharov in 1968, and rediscovered independently by Bert Broer and John Miles: [4] [5] [6]

$$\rho \frac{\partial \eta}{\partial t} = +\frac{\delta \mathcal{H}}{\delta \varphi}, \qquad \rho \frac{\partial \varphi}{\partial t} = -\frac{\delta \mathcal{H}}{\delta \eta},$$

where the surface elevation η and the surface potential φ (the potential Φ evaluated at the free surface z = η(x, t)) are the canonical variables. The Hamiltonian $\mathcal{H}(\varphi, \eta)$ is the sum of the kinetic and potential energy of the fluid:

$$\mathcal{H} = \iint \left\{ \int_{-h(\boldsymbol{x})}^{\eta(\boldsymbol{x},t)} \frac{1}{2} \rho \left[ \left| \boldsymbol{\nabla}\Phi \right|^2 + \left( \frac{\partial \Phi}{\partial z} \right)^2 \right] \mathrm{d}z + \frac{1}{2} \rho g \eta^2 \right\} \mathrm{d}\boldsymbol{x}.$$
The additional constraint is that the flow in the fluid domain has to satisfy Laplace's equation with the appropriate boundary condition at the bottom z = −h(x), and that the potential at the free surface z = η is equal to φ:

$$\delta \mathcal{H} / \delta \Phi = 0.$$
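As a heavily simplified numerical illustration of these canonical equations, the sketch below integrates their linearization about a flat surface over a flat bed, where the Dirichlet-to-Neumann operator discussed further on reduces to the Fourier symbol |k| tanh(|k| h), so that ∂η/∂t = D₀φ and ∂φ/∂t = −gη. The domain size, depth and initial condition are illustrative assumptions; the full non-linear system requires evaluating D(η) itself.

```python
# Minimal sketch (assumption-laden): linearized canonical equations
#   d(eta)/dt = D0 * phi,   d(phi)/dt = -g * eta,
# obtained from rho*d(eta)/dt = +dH/d(phi), rho*d(phi)/dt = -dH/d(eta)
# for small waves over a flat bed, where the Dirichlet-to-Neumann
# operator has the Fourier symbol D0(k) = |k| tanh(|k| h).
import numpy as np

g, h = 9.81, 10.0                 # gravity (m/s^2), water depth (m); assumed values
L, N = 100.0, 256                 # periodic domain length (m), grid points
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.rfftfreq(N, d=L / N)   # non-negative wavenumbers (rad/m)
D0 = k * np.tanh(k * h)           # linearized Dirichlet-to-Neumann symbol

eta = 0.1 * np.exp(-((x - L / 2.0) ** 2) / 4.0)  # small initial surface hump (m)
phi = np.zeros_like(x)                           # fluid initially at rest

dt, steps = 0.005, 4000
eta_hat, phi_hat = np.fft.rfft(eta), np.fft.rfft(phi)
for _ in range(steps):            # symplectic Euler: update eta first, then phi
    eta_hat = eta_hat + dt * D0 * phi_hat
    phi_hat = phi_hat - dt * g * eta_hat

eta = np.fft.irfft(eta_hat, n=N)  # back to physical space
print(f"max |eta| after t = {steps * dt:.1f} s: {np.abs(eta).max():.4f} m")
```

The scheme reproduces the linear dispersion relation ω² = g|k| tanh(|k|h); extending it to the non-linear regime requires expanding W(η) or D(η) to higher order, as in the Zakharov equation.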
The Hamiltonian formulation can be derived from Luke's Lagrangian description by using the Leibniz integral rule on the integral of ∂Φ/∂t: [6]

$$\mathcal{L}_H = \int_{t_0}^{t_1} \iint \left\{ \varphi(\boldsymbol{x},t) \frac{\partial \eta(\boldsymbol{x},t)}{\partial t} - H(\varphi, \eta; \boldsymbol{x}, t) \right\} \mathrm{d}\boldsymbol{x} \, \mathrm{d}t,$$

with $\varphi(\boldsymbol{x},t) = \Phi(\boldsymbol{x}, \eta(\boldsymbol{x},t), t)$ the value of the velocity potential at the free surface, and $H(\varphi, \eta; \boldsymbol{x}, t)$ the Hamiltonian density (the sum of the kinetic and potential energy densities), related to the Hamiltonian as:

$$\mathcal{H}(\varphi, \eta) = \iint H(\varphi, \eta; \boldsymbol{x}, t) \, \mathrm{d}\boldsymbol{x}.$$
The Hamiltonian density is written in terms of the surface potential using Green's third identity on the kinetic energy: [ 9 ]
$$H = \frac{1}{2} \rho \sqrt{1 + \left| \boldsymbol{\nabla}\eta \right|^2} \; \varphi \, \bigl( D(\eta) \, \varphi \bigr) + \frac{1}{2} \rho g \eta^2,$$
where D(η)φ is equal to the normal derivative ∂Φ/∂n at the free surface. Because of the linearity of the Laplace equation (valid in the fluid interior and subject to the boundary conditions at the bed z = −h and the free surface z = η), the normal derivative ∂Φ/∂n is a linear function of the surface potential φ, but it depends non-linearly on the surface elevation η. This is expressed by the Dirichlet-to-Neumann operator D(η), which acts linearly on φ.
The Hamiltonian density can also be written as: [6]

$$H = \frac{1}{2} \rho \varphi \left[ w \left( 1 + \left| \boldsymbol{\nabla}\eta \right|^2 \right) - \boldsymbol{\nabla}\eta \cdot \boldsymbol{\nabla}\varphi \right] + \frac{1}{2} \rho g \eta^2,$$

with w(x, t) = ∂Φ/∂z the vertical velocity at the free surface z = η. Through the Laplace equation, w too is a linear function of the surface potential φ, but it depends non-linearly on the surface elevation η: [9]

$$w = W(\eta) \, \varphi,$$

with W acting linearly on φ but being non-linear in η. As a result, the Hamiltonian is a quadratic functional of the surface potential φ. The potential-energy part of the Hamiltonian is also quadratic. The source of non-linearity in surface gravity waves is the non-linear dependence of the kinetic energy on the free-surface shape η. [9]
Further, ∇φ is not to be mistaken for the horizontal velocity ∇Φ at the free surface:

$$\boldsymbol{\nabla}\varphi = \boldsymbol{\nabla}\Phi\bigl(\boldsymbol{x}, \eta(\boldsymbol{x},t), t\bigr) = \left[ \boldsymbol{\nabla}\Phi + \frac{\partial \Phi}{\partial z} \boldsymbol{\nabla}\eta \right]_{z=\eta(\boldsymbol{x},t)} = \Bigl[ \boldsymbol{\nabla}\Phi \Bigr]_{z=\eta(\boldsymbol{x},t)} + w \, \boldsymbol{\nabla}\eta.$$
Taking the variations of the Lagrangian $\mathcal{L}_H$ with respect to the canonical variables $\varphi(\boldsymbol{x},t)$ and $\eta(\boldsymbol{x},t)$ gives:

$$\rho \frac{\partial \eta}{\partial t} = +\frac{\delta \mathcal{H}}{\delta \varphi}, \qquad \rho \frac{\partial \varphi}{\partial t} = -\frac{\delta \mathcal{H}}{\delta \eta},$$

provided that in the fluid interior Φ satisfies the Laplace equation, ΔΦ = 0, as well as the bottom boundary condition at z = −h and Φ = φ at the free surface. | https://en.wikipedia.org/wiki/Luke's_variational_principle
In signal processing , Lulu smoothing is a nonlinear mathematical technique for removing impulsive noise from a data sequence such as a time series . It is a nonlinear equivalent to taking a moving average (or other smoothing technique) of a time series, and is similar to other nonlinear smoothing techniques, such as Tukey or median smoothing . [ 1 ]
LULU smoothers are compared in detail to median smoothers by Jankowitz and found to be superior in some aspects, particularly in mathematical properties like idempotence . [ 2 ]
Lulu operators have a number of attractive mathematical properties, among them idempotence – meaning that repeated application of the operator yields the same result as a single application – and co-idempotence.
An interpretation of idempotence is that: 'Idempotence means that there is no “noise” left in the smoothed data and co-idempotence means that there is no “signal” left in the residual.' [ 3 ]
When studying smoothers there are four properties that are useful to optimize: [ 4 ]
The operators can also be used to decompose a signal into various subcomponents similar to wavelet or Fourier decomposition. [ 5 ]
Lulu smoothers were discovered by C. H. Rohwer and have been studied for the last 30 years. [ 6 ] [ 7 ] Their exact and asymptotic distributions have been derived. [ 3 ]
Applying a Lulu smoother consists of repeated applications of the min and max operators over a given subinterval of the data.
As with other smoothers, a width or interval must be specified. The Lulu smoothers are composed of repeated applications of the L (lower) and U (upper) operators, which are defined as follows:
For an L operator of width n over an infinite sequence of xs (…, x_j, x_{j+1}, …), the operation on x_j takes the maximum of the minima over the n + 1 windows of n + 1 consecutive elements that contain x_j (in Rohwer's notation):

$$(L_n x)_j = \max \left\{ \min\{x_{j-n}, \ldots, x_j\}, \; \ldots, \; \min\{x_j, \ldots, x_{j+n}\} \right\}.$$
Thus for width 2, the L operator is:

$$(L_2 x)_j = \max \left\{ \min\{x_{j-2}, x_{j-1}, x_j\}, \; \min\{x_{j-1}, x_j, x_{j+1}\}, \; \min\{x_j, x_{j+1}, x_{j+2}\} \right\}.$$
The U operator is identical to the L operator, except that the order of min and max is reversed, i.e. for width 2:

$$(U_2 x)_j = \min \left\{ \max\{x_{j-2}, x_{j-1}, x_j\}, \; \max\{x_{j-1}, x_j, x_{j+1}\}, \; \max\{x_j, x_{j+1}, x_{j+2}\} \right\}.$$
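A minimal implementation sketch of the two operators, assuming the window convention given above (windows of n + 1 consecutive samples for width n) and padding the sequence edges with the boundary values; the function names and the padding choice are illustrative assumptions, not standard:

```python
import numpy as np

def lulu_L(x, n):
    """Lower (L) operator of width n: maximum of running minima.
    Removes upward impulses no wider than n samples."""
    xp = np.pad(x, n, mode="edge")
    # minima over all windows of n+1 consecutive padded samples
    mins = np.min([xp[i:i + len(x) + n] for i in range(n + 1)], axis=0)
    # maximum over the n+1 windows that contain each position j
    return np.max([mins[i:i + len(x)] for i in range(n + 1)], axis=0)

def lulu_U(x, n):
    """Upper (U) operator: minimum of running maxima (removes downward impulses)."""
    return -lulu_L(-np.asarray(x, dtype=float), n)

t = np.linspace(0.0, 1.0, 200)
sig = np.sin(2.0 * np.pi * 3.0 * t)
sig[[40, 41, 120]] += 4.0              # upward impulses (width 2 and width 1)
sig[80] -= 4.0                         # downward impulse
smoothed = lulu_U(lulu_L(sig, 2), 2)   # the combined "UL" smoother
```

Composing the operators in the two possible orders gives the UL and LU smoothers compared below.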
Examples of the U and L operators, as well as combined UL and LU operators on a sample data set are shown in the following figures.
It can be seen that the results of the UL and LU operators can differ. The combined operators are very effective at removing impulsive noise; the only cases where the noise is not removed effectively are those where multiple noise impulses occur very close together, in which case the filter 'sees' the cluster of impulses as part of the signal. | https://en.wikipedia.org/wiki/Lulu_smoothing
Lume is a short term for the luminous phosphorescent solution applied to watch dials. Some people "relume" watches, that is, replace faded lume. Formerly, lume consisted mostly of radium; however, radium is radioactive and has been mostly replaced on new watches by less bright, but less toxic, compounds. After radium was effectively outlawed in 1968, tritium became the luminescent material of choice: while still radioactive, it is much less potent than radium, since the beta particles tritium emits are weak in both energy and quantity. [1]
Common pigments used in lume include the phosphorescent pigments zinc sulfide and strontium aluminate . Use of zinc sulfide for safety related products dates back to the 1930s. However, the development of strontium oxide aluminate, with a luminance approximately 10 times greater than zinc sulfide, has relegated most zinc sulfide based products to the novelty category. Strontium oxide aluminate based pigments are now used in exit signs, pathway marking, and other safety related signage.
Strontium aluminate based afterglow pigments are marketed under brand names like Super-LumiNova, [2] [3] Watchlume Co, [4] NoctiLumina, [5] and Glow in the Dark (Phosphorescent) Technologies. [6] | https://en.wikipedia.org/wiki/Lume
In biology , a lumen ( pl. : lumina ) is the inside space of a tubular structure, such as an artery or intestine. [ 1 ] It comes from Latin lumen ' an opening ' .
It can refer to:
In cell biology , lumen is a membrane-defined space that is found inside several organelles , cellular components , or structures, including thylakoid , endoplasmic reticulum , Golgi apparatus , lysosome , mitochondrion , and microtubule .
Transluminal procedures are procedures occurring through lumina, including: [ citation needed ]
| https://en.wikipedia.org/wiki/Lumen_(anatomy)
lumi is a free, open-source and open-development software project for the analysis and comprehension of Illumina expression and methylation microarray data. The project was started in the summer of 2006 and set out to provide algorithms and data-management tools for Illumina arrays within the framework of Bioconductor. It is based on the statistical programming language R.
The lumi package provides an analysis pipeline for probe-level Illumina expression and methylation microarray data, including probe-identifier management ( nuID ), updated probe-to-gene mapping and annotation using the latest release of RefSeq (nuIDblast), probe-intensity transformation (VST) and normalization (RSN), quality control (QA/QC) and preprocessing methods specific for Illumina methylation data. By extending the ExprSet object with Illumina-specific features, lumi is designed to work with other Bioconductor packages, such as Limma and GOstats to detect differential genes and conduct Gene Ontology analysis.
The lumi project was started in the summer of 2006 at the Bioinformatics Core Facility of the Robert H. Lurie Comprehensive Cancer Center, Northwestern University. Originally, lumi was designed for the analysis of Illumina expression BeadArray data; starting from 2010 (version > 2.0), functions for analyzing Illumina methylation microarray data were added. The project team consists of Drs. Pan Du, Simon M. Lin, and Warren A. Kibbe. The project was started upon a request for collaboration from Dr. Serdar E. Bulun to analyze a set of new Illumina microarray data acquired at his lab for a study of the effect of retinoic acids on cancers. Dr. Pan Du led the software development of the project. lumi was the first software package to utilize the unique redundancy design of BeadArrays for the data transformation and normalization processes. The first release of lumi was on January 3, 2007 through the Bioconductor website. Before its formal release, it was beta-tested at the Norwegian Radium Hospital, Leiden University Medical Center, Universiteit van Amsterdam, Università degli Studi di Brescia, UC Davis, Wayne State University, NIH, M.D. Anderson Cancer Center, Case Western Reserve University, Harvard University, Washington University, and the Walter and Eliza Hall Institute of Medical Research. | https://en.wikipedia.org/wiki/Lumi_(software)
Lumicera is a transparent ceramic developed by Murata Manufacturing Co., Ltd.
Murata Manufacturing first developed transparent polycrystalline ceramics in February 2001. This polycrystalline ceramic is a type of dielectric resonator material commonly used in microwave and millimeter-wave applications. Besides superior electrical properties, it offers high transmissivity and a high refractive index, and has good optical characteristics without birefringence.
Normally, ceramics are opaque because pores are formed at triple points where grains intersect, causing scattering of incident light. Murata has optimized the entire development process of making dense and homogenous ceramics to improve their performance.
Under recommendations from Casio , the material itself has been refined for use in digital camera optical lenses by endowing it with improved transmission of short wavelength light and by reducing pores inside ceramics that reduce transparency.
Lumicera has the same light-transmitting qualities as the optical glass commonly used in today's conventional camera lenses; however, it has a refractive index (nd = 2.08 at 587 nm [1]) much greater than that of optical glass (nd = 1.5–1.85 [2]) and offers superior strength. The Lumicera Z variant is described as a barium oxide based material, [3] not containing any environmentally hazardous materials (e.g. lead).
Lumicera is transparent at wavelengths up to about 10 micrometers, making it useful for instruments operating in the mid-infrared spectrum. [4]
Lumicera is a trademark of Murata Manufacturing Co., Ltd.
Lumicera is used in some Casio Exilim cameras, where it allowed 20% reduction of the lens profile. [ 5 ]
| https://en.wikipedia.org/wiki/Lumicera
Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction. [ 1 ] It describes the amount of light that passes through, is emitted from, or is reflected from a particular area, and falls within a given solid angle .
The procedure for conversion from spectral radiance to luminance is standardized by the CIE and ISO . [ 2 ]
Brightness is the term for the subjective impression of the objective luminance measurement standard (see Objectivity (science) § Objectivity in measurement for the importance of this contrast).
The SI unit for luminance is candela per square metre (cd/m 2 ). A non-SI term for the same unit is the nit . The unit in the Centimetre–gram–second system of units (CGS) (which predated the SI system) is the stilb , which is equal to one candela per square centimetre or 10 kcd/m 2 .
Luminance is often used to characterize emission or reflection from flat, diffuse surfaces. Luminance levels indicate how much luminous power could be detected by the human eye looking at a particular surface from a particular angle of view . Luminance is thus an indicator of how bright the surface will appear. In this case, the solid angle of interest is the solid angle subtended by the eye's pupil .
Luminance is used in the video industry to characterize the brightness of displays. A typical computer display emits between 50 and 300 cd/m². The sun has a luminance of about 1.6×10⁹ cd/m² at noon. [3]
Luminance is invariant in geometric optics . [ 4 ] This means that for an ideal optical system, the luminance at the output is the same as the input luminance.
For real, passive optical systems, the output luminance is at most equal to the input. As an example, if one uses a lens to form an image that is smaller than the source object, the luminous power is concentrated into a smaller area, meaning that the illuminance is higher at the image. The light at the image plane, however, fills a larger solid angle so the luminance comes out to be the same assuming there is no loss at the lens. The image can never be "brighter" than the source.
Retinal damage can occur when the eye is exposed to high luminance. Damage can occur because of local heating of the retina. Photochemical effects can also cause damage, especially at short wavelengths. [ 5 ]
The IEC 60825 series gives guidance on safety relating to exposure of the eye to lasers, which are high luminance sources. The IEC 62471 series gives guidance for evaluating the photobiological safety of lamps and lamp systems including luminaires. Specifically it specifies the exposure limits, reference measurement technique and classification scheme for the evaluation and control of photobiological hazards from all electrically powered incoherent broadband sources of optical radiation, including LEDs but excluding lasers, in the wavelength range from 200 nm through 3000 nm . This standard was prepared as Standard CIE S 009:2002 by the International Commission on Illumination.
A luminance meter is a device used in photometry that can measure the luminance in a particular direction and with a particular solid angle . The simplest devices measure the luminance in a single direction while imaging luminance meters measure luminance in a way similar to the way a digital camera records color images. [ 6 ]
The luminance of a specified point of a light source, in a specified direction, is defined by the mixed partial derivative

$$L_\mathrm{v} = \frac{\mathrm{d}^2 \Phi_\mathrm{v}}{\mathrm{d}\Sigma \, \mathrm{d}\Omega_\Sigma \cos\theta_\Sigma},$$

where Φ_v is the luminous flux, dΣ is an infinitesimal area of the source containing the specified point, dΩ_Σ is an infinitesimal solid angle containing the specified direction, and θ_Σ is the angle between the normal to the surface dΣ and the specified direction.
If light travels through a lossless medium, the luminance does not change along a given light ray. As the ray crosses an arbitrary surface S, the luminance is given by

$$L_\mathrm{v} = \frac{\mathrm{d}^2 \Phi_\mathrm{v}}{\mathrm{d}S \, \mathrm{d}\Omega_S \cos\theta_S},$$

where dS is an infinitesimal area of S crossed by the ray, dΩ_S is an infinitesimal solid angle containing the ray, and θ_S is the angle between the normal to dS and the direction of the ray.
More generally, the luminance along a light ray can be defined as

$$L_\mathrm{v} = n^2 \frac{\mathrm{d}\Phi_\mathrm{v}}{\mathrm{d}G},$$

where dG is the étendue of an infinitesimally narrow beam containing the ray, dΦ_v is the luminous flux carried by this beam, and n is the index of refraction of the medium.
The luminance of a reflecting surface is related to the illuminance it receives:

$$\int_{\Omega_\Sigma} L_\mathrm{v} \, \mathrm{d}\Omega_\Sigma \cos\theta_\Sigma = M_\mathrm{v} = E_\mathrm{v} R,$$

where the integral covers all directions of emission Ω_Σ, M_v is the surface's luminous exitance, E_v is the received illuminance, and R is the reflectance.
In the case of a perfectly diffuse reflector (also called a Lambertian reflector), the luminance is isotropic, per Lambert's cosine law. Then the relationship is simply

$$L_\mathrm{v} = \frac{E_\mathrm{v} R}{\pi}.$$
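As a small worked example of this relation (the numbers are illustrative, not from the source):

```python
import math

def lambertian_luminance(illuminance_lux, reflectance):
    """Luminance (cd/m^2) of an ideal diffuse reflector, L_v = E_v * R / pi."""
    return illuminance_lux * reflectance / math.pi

# An 18% grey card under 500 lx of illuminance:
print(lambertian_luminance(500.0, 0.18))   # ~28.6 cd/m^2
```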
A variety of units have been used for luminance, besides the candela per square metre. Luminance is essentially the same as surface brightness , the term used in astronomy. This is measured with a logarithmic scale, magnitudes per square arcsecond (MPSAS). | https://en.wikipedia.org/wiki/Luminance |
Luminescence is a spontaneous emission of radiation from an electronically or vibrationally excited species not in thermal equilibrium with its environment. [ 1 ] A luminescent object emits cold light in contrast to incandescence , where an object only emits light after heating. [ 2 ] Generally, the emission of light is due to the movement of electrons between different energy levels within an atom after excitation by external factors. However, the exact mechanism of light emission in vibrationally excited species is unknown.
The dials, hands, scales, and signs of aviation and navigational instruments and markings are often coated with luminescent materials in a process known as luminising . [ 3 ]
Luminescence occurs in some minerals when they are exposed to low-powered sources of ultraviolet or infrared electromagnetic radiation (for example, portable UV lamps ) at atmospheric pressure and atmospheric temperatures. This property of these minerals can be used during the process of mineral identification at rock outcrops in the field or in the laboratory.
The term luminescence was first introduced in 1888 by German physicist Eilhard Wiedemann . [ 9 ] | https://en.wikipedia.org/wiki/Luminescence |
Luminescent bacteria emit light as the result of a chemical reaction during which chemical energy is converted to light energy. Luminescent bacteria exist as symbiotic organisms carried within larger organisms, including many deep-sea animals such as the lanternfish, the anglerfish, certain jellyfish, certain clams and the gulper eel. The light is generated by an enzyme-catalysed chemiluminescence reaction, wherein the pigment luciferin is oxidised by the enzyme luciferase. The expression of genes related to bioluminescence is controlled by an operon called the lux operon.
Some species of luminescent bacteria possess quorum sensing, the ability to determine the local population density from the concentration of chemical messengers. Species which have quorum sensing can turn certain chemical pathways, commonly luminescence, on and off; in this way, once population levels reach a certain point, the bacteria switch on light production. [1]
Bioluminescence is a form of luminescence , or "cold light" emission ; less than 20% of the light generates thermal radiation . It should not be confused with fluorescence , phosphorescence or refraction of light. Most forms of bioluminescence are brighter (or only exist) at night, following a circadian rhythm .
| https://en.wikipedia.org/wiki/Luminescent_bacteria
Luminex Corporation is a biotechnology company which develops, manufactures and markets proprietary biological testing technologies with applications in life-sciences.
Luminex's Multi-Analyte Profiling (xMAP) technology allows simultaneous analysis of up to 500 bioassays from a small sample volume, typically a single drop of fluid, by reading biological tests on the surface of microscopic polystyrene beads called microspheres .
The xMAP technology combines this miniaturized liquid array bioassay capability with small lasers , light emitting diodes (LEDs), digital signal processors , photo detectors , charge-coupled device imaging and proprietary software to create a system offering advantages in speed, precision, flexibility and cost. The technology is currently being used within various segments of the life sciences industry, which includes the fields of drug discovery and development, and for clinical diagnostics, genetic analysis, bio-defense, food safety and biomedical research.
The Luminex MultiCode technology is used for real-time polymerase chain reaction (PCR) and multiplexed PCR assays. [ 2 ] Luminex Corporation owns 315 issued patents worldwide, including over 124 issued patents in the United States based on its multiplexing xMAP platform. | https://en.wikipedia.org/wiki/Luminex_Corporation |
Luminol (C 8 H 7 N 3 O 2 ) is a chemical that exhibits chemiluminescence , with a blue glow, when mixed with an appropriate oxidizing agent . Luminol is a white-to-pale-yellow crystalline solid that is soluble in most polar organic solvents but insoluble in water.
Forensic investigators use luminol to detect trace amounts of blood at crime scenes , as it reacts with the iron in hemoglobin . Biologists use it in cellular assays to detect copper , iron , and cyanides as well as specific proteins via western blotting . [ 2 ]
When luminol is sprayed evenly across an area, trace amounts of an activating oxidant make the luminol emit a blue glow that can be seen in a darkened room. The glow only lasts about 30 seconds but can be documented photographically. The glow is stronger in areas receiving more spray; the intensity of the glow does not indicate the amount of blood or other activator present.
Luminol is synthesized in a two-step process, beginning with 3-nitro phthalic acid . [ 3 ] [ 4 ] First, hydrazine (N 2 H 4 ) is heated with the 3-nitrophthalic acid in a high-boiling solvent such as triethylene glycol and glycerol . A condensation reaction occurs, with loss of water, forming 3-nitrophthalhydrazide. Reduction of the nitro group to an amino group with sodium dithionite (Na 2 S 2 O 4 ), via a transient hydroxylamine intermediate, produces luminol.
The compound was first synthesized in Germany in 1902 [ 5 ] but was not named luminol until 1934. [ 3 ] [ 6 ]
To exhibit its luminescence, the luminol must be activated with an oxidant. Usually, a solution containing hydrogen peroxide (H₂O₂) and hydroxide ions in water is the activator. In the presence of a catalyst such as an iron or periodate compound, the hydrogen peroxide decomposes to form oxygen and water:

2 H₂O₂ → O₂ + 2 H₂O
Laboratory settings often use potassium ferricyanide or potassium periodate for the catalyst. In the forensic detection of blood, the catalyst is the iron present in hemoglobin . [ 7 ] Enzymes in a variety of biological systems may also catalyse the decomposition of hydrogen peroxide.
The exact mechanism of luminol chemiluminescence is a complex multi-step reaction, especially in aqueous conditions. A recent theoretical investigation has been able to elucidate the reaction cascade as follows. [8] Luminol is first deprotonated in basic conditions, then oxidized to the anionic radical, which in turn has two paths available to give the key intermediate, the α-hydroxy-peroxide. After cyclization to the endoperoxide, the mono-anion will undergo decomposition without luminescence if the pH is too low (< 8.2) for a second deprotonation. The endoperoxide dianion, however, can give the retro-Diels–Alder product, the 1,2-dioxane-3,6-dione dianion, which, after chemiexcitation by two single-electron transfers (SET), gives the 3-aminophthalate dianion in its first singlet excited state (S1). This highly unstable molecule relaxes to the ground state, emitting light of around 425 nm wavelength (purple-blue) in the process termed chemiluminescence.
In 1928, German chemist H. O. Albrecht found that blood , among other substances, enhanced the luminescence of luminol in an alkaline solution of hydrogen peroxide. [ 9 ] [ 10 ] In 1936, Karl Gleu and Karl Pfannstiel confirmed this enhancement in the presence of haematin , a component of blood. [ 11 ] In 1937, German forensic scientist Walter Specht made extensive studies of luminol's application to the detection of blood at crime scenes. [ 12 ] In 1939, San Francisco pathologists Frederick Proescher and A. M. Moody made three important observations about luminol: [ 13 ] [ 14 ]
Crime scene investigators use luminol to find traces of blood, even if someone has cleaned or removed it. The investigator sprays a solution of luminol and the oxidant. The iron in blood catalyses the luminescence. The amount of catalyst necessary to cause the reaction is very small relative to the amount of luminol, allowing detection of even trace amounts of blood. The blue glow lasts for about 30 seconds per application. Detecting the glow requires a fairly dark room. Any glow detected may be documented by a long-exposure photograph .
Luminol's use in a crime scene investigation is somewhat hampered by the fact that it reacts to iron- and copper -containing compounds, [ 15 ] bleaches , horseradish , fecal matter , and cigarette smoke residue. [ 14 ] Application of luminol to a piece of evidence may prevent other tests from being performed on it; however, DNA has been successfully extracted from samples exposed to luminol. [ 16 ] | https://en.wikipedia.org/wiki/Luminol |
In chemistry , a luminophore (sometimes shortened to lumophore ) is an atom or functional group in a chemical compound that is responsible for its luminescent properties. [ 1 ] Luminophores can be either organic [ 2 ] or inorganic .
Luminophores can be further classified as fluorophores or phosphors , depending on the nature of the excited state responsible for the emission of photons . However, some luminophores cannot be classified as being exclusively fluorophores or phosphors. Examples include transition-metal complexes such as tris(bipyridine)ruthenium(II) chloride , whose luminescence comes from an excited (nominally triplet) metal-to-ligand charge-transfer (MLCT) state, which is not a true triplet state in the strict sense of the definition; and colloidal quantum dots , whose emissive state does not have either a purely singlet or triplet spin.
Most luminophores consist of conjugated π systems or transition-metal complexes. There are also purely inorganic luminophores, such as zinc sulfide doped with rare-earth metal ions, rare-earth metal oxysulfides doped with other rare-earth metal ions, yttrium oxide doped with rare-earth metal ions, zinc orthosilicate doped with manganese ions, etc. Luminophores can be observed in action in fluorescent lights, television screens, computer monitor screens, organic light-emitting diodes and bioluminescence .
The correct, textbook terminology is luminophore , not lumophore , although the latter term has been frequently used in the chemical literature. [ 3 ] | https://en.wikipedia.org/wiki/Luminophore |
Luminosity is an absolute measure of radiated electromagnetic energy per unit time, and is synonymous with the radiant power emitted by a light-emitting object. [ 1 ] [ 2 ] In astronomy , luminosity is the total amount of electromagnetic energy emitted per unit of time by a star , galaxy , or other astronomical objects . [ 3 ] [ 4 ]
In SI units, luminosity is measured in joules per second, or watts . In astronomy, values for luminosity are often given in the terms of the luminosity of the Sun , L ⊙ . Luminosity can also be given in terms of the astronomical magnitude system: the absolute bolometric magnitude ( M bol ) of an object is a logarithmic measure of its total energy emission rate, while absolute magnitude is a logarithmic measure of the luminosity within some specific wavelength range or filter band .
In contrast, the term brightness in astronomy is generally used to refer to an object's apparent brightness: that is, how bright an object appears to an observer. Apparent brightness depends on both the luminosity of the object and the distance between the object and observer, and also on any absorption of light along the path from object to observer. Apparent magnitude is a logarithmic measure of apparent brightness. The distance determined by luminosity measures can be somewhat ambiguous, and is thus sometimes called the luminosity distance .
When not qualified, the term "luminosity" means bolometric luminosity, which is measured either in the SI units, watts , or in terms of solar luminosities ( L ☉ ). A bolometer is the instrument used to measure radiant energy over a wide band by absorption and measurement of heating. A star also radiates neutrinos , which carry off some energy (about 2% in the case of the Sun), contributing to the star's total luminosity. [ 5 ] The IAU has defined a nominal solar luminosity of 3.828 × 10 26 W to promote publication of consistent and comparable values in units of the solar luminosity. [ 6 ]
While bolometers do exist, they cannot be used to measure even the apparent brightness of a star because they are insufficiently sensitive across the electromagnetic spectrum and because most wavelengths do not reach the surface of the Earth. In practice bolometric magnitudes are measured by taking measurements at certain wavelengths and constructing a model of the total spectrum that is most likely to match those measurements. In some cases, the process of estimation is extreme, with luminosities being calculated when less than 1% of the energy output is observed, for example with a hot Wolf-Rayet star observed only in the infrared. Bolometric luminosities can also be calculated using a bolometric correction to a luminosity in a particular passband. [ 7 ] [ 8 ]
The term luminosity is also used in relation to particular passbands, such as a visual luminosity or a K-band luminosity. [9] These are not generally luminosities in the strict sense of an absolute measure of radiated power, but absolute magnitudes defined for a given filter in a photometric system. Several different photometric systems exist. Some, such as the UBV or Johnson system, are defined against photometric standard stars, while others, such as the AB system, are defined in terms of a spectral flux density. [10]
A star's luminosity can be determined from two stellar characteristics: size and effective temperature . [ 11 ] The former is typically represented in terms of solar radii , R ⊙ , while the latter is represented in kelvins , but in most cases neither can be measured directly. To determine a star's radius, two other metrics are needed: the star's angular diameter and its distance from Earth. Both can be measured with great accuracy in certain cases, with cool supergiants often having large angular diameters, and some cool evolved stars having masers in their atmospheres that can be used to measure the parallax using VLBI . However, for most stars the angular diameter or parallax, or both, are far below our ability to measure with any certainty. Since the effective temperature is merely a number that represents the temperature of a black body that would reproduce the luminosity, it obviously cannot be measured directly, but it can be estimated from the spectrum.
An alternative way to measure stellar luminosity is to measure the star's apparent brightness and distance. A third component needed to derive the luminosity is the degree of interstellar extinction that is present, a condition that usually arises because of gas and dust present in the interstellar medium (ISM), the Earth's atmosphere , and circumstellar matter . Consequently, one of astronomy's central challenges in determining a star's luminosity is to derive accurate measurements for each of these components, without which an accurate luminosity figure remains elusive. [ 12 ] Extinction can only be measured directly if the actual and observed luminosities are both known, but it can be estimated from the observed colour of a star, using models of the expected level of reddening from the interstellar medium.
In the current system of stellar classification , stars are grouped according to temperature, with the massive, very young and energetic Class O stars boasting temperatures in excess of 30,000 K while the less massive, typically older Class M stars exhibit temperatures less than 3,500 K. Because luminosity is proportional to temperature to the fourth power, the large variation in stellar temperatures produces an even vaster variation in stellar luminosity. [ 13 ] Because the luminosity depends on a high power of the stellar mass, high mass luminous stars have much shorter lifetimes. The most luminous stars are always young stars, no more than a few million years for the most extreme. In the Hertzsprung–Russell diagram , the x-axis represents temperature or spectral type while the y-axis represents luminosity or magnitude. The vast majority of stars are found along the main sequence with blue Class O stars found at the top left of the chart while red Class M stars fall to the bottom right. Certain stars like Deneb and Betelgeuse are found above and to the right of the main sequence, more luminous or cooler than their equivalents on the main sequence. Increased luminosity at the same temperature, or alternatively cooler temperature at the same luminosity, indicates that these stars are larger than those on the main sequence and they are called giants or supergiants.
Blue and white supergiants are high luminosity stars somewhat cooler than the most luminous main sequence stars. A star like Deneb, for example, has a luminosity around 200,000 L⊙, a spectral type of A2, and an effective temperature around 8,500 K, meaning it has a radius around 203 R☉ (1.41×10¹¹ m). For comparison, the red supergiant Betelgeuse has a luminosity around 100,000 L⊙, a spectral type of M2, and a temperature around 3,500 K, meaning its radius is about 1,000 R☉ (7.0×10¹¹ m). Red supergiants are the largest type of star, but the most luminous are much smaller and hotter, with temperatures up to 50,000 K and more and luminosities of several million L⊙, meaning their radii are just a few tens of R⊙. For example, R136a1 has a temperature over 46,000 K and a luminosity of more than 6,100,000 L⊙ [14] (mostly in the UV); it is only 39 R☉ (2.7×10¹⁰ m).
The luminosity of a radio source is measured in W Hz⁻¹, to avoid having to specify a bandwidth over which it is measured. The observed strength, or flux density, of a radio source is measured in janskys, where 1 Jy = 10⁻²⁶ W m⁻² Hz⁻¹.
For example, consider a 10 W transmitter at a distance of 1 million metres, radiating over a bandwidth of 1 MHz. By the time that power has reached the observer, the power is spread over the surface of a sphere with area 4πr², or about 1.26×10¹³ m², so its flux density is 10 / 10⁶ / (1.26×10¹³) W m⁻² Hz⁻¹ = 8×10⁷ Jy.
More generally, for sources at cosmological distances, a k-correction must be made for the spectral index α of the source, and a relativistic correction must be made for the fact that the frequency scale in the emitted rest frame is different from that in the observer's rest frame. So the full expression for radio luminosity, assuming isotropic emission, is

$$L_\nu = \frac{S_\mathrm{obs} \, 4\pi D_L^2}{(1+z)^{1+\alpha}},$$

where L_ν is the luminosity in W Hz⁻¹, S_obs is the observed flux density in W m⁻² Hz⁻¹, D_L is the luminosity distance in metres, z is the redshift, and α is the spectral index (in the sense $I \propto \nu^\alpha$; in radio astronomy, assuming thermal emission, the spectral index is typically equal to 2). [15]
For example, consider a 1 Jy signal from a radio source at a redshift of 1, at a frequency of 1.4 GHz. Ned Wright's cosmology calculator gives a luminosity distance of 6701 Mpc = 2×10²⁶ m for a redshift of 1, so the radio luminosity is 10⁻²⁶ × 4π (2×10²⁶)² / (1 + 1)^(1 + 2) = 6×10²⁶ W Hz⁻¹.
To calculate the total radio power, this luminosity must be integrated over the bandwidth of the emission. A common assumption is to set the bandwidth to the observing frequency, which effectively assumes the power radiated has uniform intensity from zero frequency up to the observing frequency. In the case above, the total power is 6×10²⁶ × 1.4×10⁹ ≈ 8.4×10³⁵ W. This is sometimes expressed in terms of the total (i.e. integrated over all wavelengths) luminosity of the Sun, which is 3.86×10²⁶ W, giving a radio power of about 2.2×10⁹ L⊙.
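The formula above is straightforward to evaluate in code. The following sketch reproduces the worked example; the function name and the rounded luminosity-distance value are illustrative assumptions:

```python
import math

def radio_luminosity(S_obs_jy, D_L_m, z, alpha):
    """Monochromatic radio luminosity L_nu in W/Hz, assuming isotropic
    emission: L_nu = 4 pi D_L^2 S_obs / (1 + z)^(1 + alpha)."""
    S_obs = S_obs_jy * 1e-26               # convert Jy to W m^-2 Hz^-1
    return 4.0 * math.pi * D_L_m**2 * S_obs / (1.0 + z) ** (1.0 + alpha)

# The worked example: 1 Jy at z = 1, D_L ~ 2e26 m, thermal spectral index 2
print(radio_luminosity(1.0, 2e26, 1.0, 2.0))   # ~6e26 W/Hz
```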
The Stefan–Boltzmann equation applied to a black body, an idealized object which is perfectly opaque and non-reflecting, gives its luminosity: [11]

$$L = \sigma A T^4,$$

where A is the surface area, T is the temperature (in kelvins) and σ is the Stefan–Boltzmann constant, with a value of 5.670374419×10⁻⁸ W⋅m⁻²⋅K⁻⁴. [16]
Imagine a point source of light of luminosity L {\displaystyle L} that radiates equally in all directions. A hollow sphere centered on the point would have its entire interior surface illuminated. As the radius increases, the surface area will also increase, and the constant luminosity has more surface area to illuminate, leading to a decrease in observed brightness.
$$F = \frac{L}{A},$$

where F is the flux (the power per unit area) at the sphere's surface, L is the luminosity of the source, and A is the surface area of the sphere.
The surface area of a sphere with radius r is $A = 4\pi r^2$, so for stars and other point sources of light:

$$F = \frac{L}{4\pi r^2},$$

where r is the distance from the observer to the light source.
For stars on the main sequence, luminosity is also related approximately to mass:

$$\frac{L}{L_\odot} \approx \left( \frac{M}{M_\odot} \right)^{3.5}.$$
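A short numerical sketch tying together the Stefan–Boltzmann law, the inverse-square law and the approximate mass–luminosity relation; the constants are the nominal IAU/CODATA values, and the function names are illustrative:

```python
import math

SIGMA = 5.670374419e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN, R_SUN = 3.828e26, 6.957e8  # nominal solar luminosity (W) and radius (m)

def luminosity(radius_m, t_eff_k):
    """Bolometric luminosity of a spherical black body, L = 4 pi R^2 sigma T^4."""
    return 4.0 * math.pi * radius_m**2 * SIGMA * t_eff_k**4

def flux_at(lum_w, distance_m):
    """Flux (W/m^2) at a given distance, by the inverse-square law."""
    return lum_w / (4.0 * math.pi * distance_m**2)

L = luminosity(R_SUN, 5772.0)          # a Sun-like star
print(L / L_SUN)                       # ~1.0
print(flux_at(L, 1.496e11))            # ~1361 W/m^2, the solar constant at 1 au
print(2.0 ** 3.5)                      # a 2 M_sun main-sequence star: ~11 L_sun
```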
Luminosity is an intrinsic measurable property of a star independent of distance. The concept of magnitude, on the other hand, incorporates distance. The apparent magnitude is a measure of the diminishing flux of light as a result of distance according to the inverse-square law . [ 17 ] The Pogson logarithmic scale is used to measure both apparent and absolute magnitudes, the latter corresponding to the brightness of a star or other celestial body as seen if it would be located at an interstellar distance of 10 parsecs (3.1 × 10 17 metres ). In addition to this brightness decrease from increased distance, there is an extra decrease of brightness due to extinction from intervening interstellar dust. [ 18 ]
By measuring the width of certain absorption lines in the stellar spectrum, it is often possible to assign a certain luminosity class to a star without knowing its distance. Thus a fair measure of its absolute magnitude can be determined without knowing either its distance or the interstellar extinction.
In measuring star brightnesses, absolute magnitude, apparent magnitude, and distance are interrelated parameters—if two are known, the third can be determined. Since the Sun's luminosity is the standard, comparing these parameters with the Sun's apparent magnitude and distance is the easiest way to remember how to convert between them, although officially, zero point values are defined by the IAU.
The magnitude of a star, a unitless measure, is a logarithmic scale of observed visible brightness. The apparent magnitude is the observed visible brightness from Earth which depends on the distance of the object. The absolute magnitude is the apparent magnitude at a distance of 10 pc (3.1 × 10 17 m ), therefore the bolometric absolute magnitude is a logarithmic measure of the bolometric luminosity.
The difference in bolometric magnitude between two objects is related to their luminosity ratio according to: [19]

$$M_\text{bol1} - M_\text{bol2} = -2.5 \log_{10} \frac{L_1}{L_2},$$

where M_bol1 and M_bol2 are the bolometric magnitudes of the two objects, and L_1 and L_2 are their luminosities.
The zero point of the absolute magnitude scale is actually defined as a fixed luminosity of 3.0128×10²⁸ W. Therefore, the absolute magnitude can be calculated from a luminosity in watts:

$$M_\mathrm{bol} = -2.5 \log_{10} \frac{L_*}{L_0} \approx -2.5 \log_{10} L_* + 71.1974,$$

where L_0 is the zero-point luminosity 3.0128×10²⁸ W,
and the luminosity in watts can be calculated from an absolute magnitude (although absolute magnitudes are often not measured relative to an absolute flux):

$$L_* = L_0 \times 10^{-0.4 M_\mathrm{bol}}.$$

| https://en.wikipedia.org/wiki/Luminosity
Luminosity distance D_L is defined in terms of the relationship between the absolute magnitude M and apparent magnitude m of an astronomical object:

$$M = m - 5 \log_{10} \frac{D_L}{10\,\text{pc}},$$

which gives:

$$D_L = 10^{\frac{m - M}{5} + 1},$$
where D L is measured in parsecs . For nearby objects (say, in the Milky Way ) the luminosity distance gives a good approximation to the natural notion of distance in Euclidean space .
The relation is less clear for distant objects like quasars far beyond the Milky Way since the apparent magnitude is affected by spacetime curvature , redshift , and time dilation . Calculating the relation between the apparent and actual luminosity of an object requires taking all of these factors into account. The object's actual luminosity is determined using the inverse-square law and the proportions of the object's apparent distance and luminosity distance.
Another way to express the luminosity distance is through the flux–luminosity relationship,

$$F = \frac{L}{4\pi D_L^2},$$

where F is flux (W·m⁻²) and L is luminosity (W). From this the luminosity distance (in metres) can be expressed as:

$$D_L = \sqrt{\frac{L}{4\pi F}}.$$
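Both forms are easy to evaluate numerically; a minimal sketch (the function names and example values are illustrative):

```python
import math

def d_l_from_magnitudes(m, M):
    """Luminosity distance in parsecs from the distance modulus,
    m - M = 5 log10(D_L / 10 pc)."""
    return 10.0 ** ((m - M) / 5.0 + 1.0)

def d_l_from_flux(luminosity_w, flux_w_m2):
    """Luminosity distance in metres from F = L / (4 pi D_L^2)."""
    return math.sqrt(luminosity_w / (4.0 * math.pi * flux_w_m2))

print(d_l_from_magnitudes(20.0, -21.0))   # ~1.58e9 pc
```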
The luminosity distance is related to the "comoving transverse distance" $D_M$ by

$$D_L = (1+z) \, D_M,$$
and to the angular diameter distance $D_A$ by Etherington's reciprocity theorem:

$$D_L = (1+z)^2 D_A,$$
where z is the redshift. $D_M$ is a factor that allows calculation of the comoving distance between two objects with the same redshift but at different positions of the sky; if the two objects are separated by an angle $\delta\theta$, the comoving distance between them would be $D_M \, \delta\theta$. In a spatially flat universe, the comoving transverse distance $D_M$ is exactly equal to the radial comoving distance $D_C$, i.e. the comoving distance from ourselves to the object. [1] | https://en.wikipedia.org/wiki/Luminosity_distance
Luminous efficacy is a measure of how well a light source produces visible light. It is the ratio of luminous flux to power , measured in lumens per watt in the International System of Units (SI). Depending on context, the power can be either the radiant flux of the source's output, or it can be the total power (electric power, chemical energy, or others) consumed by the source. [ 1 ] [ 2 ] [ 3 ] Which sense of the term is intended must usually be inferred from the context, and is sometimes unclear. The former sense is sometimes called luminous efficacy of radiation , [ 4 ] and the latter luminous efficacy of a light source [ 5 ] or overall luminous efficacy . [ 6 ] [ 7 ]
Not all wavelengths of light are equally visible, or equally effective at stimulating human vision, due to the spectral sensitivity of the human eye ; radiation in the infrared and ultraviolet parts of the spectrum is useless for illumination. The luminous efficacy of a source is the product of how well it converts energy to electromagnetic radiation, and how well the emitted radiation is detected by the human eye.
Luminous efficacy can be normalized by the maximum possible luminous efficacy to a dimensionless quantity called luminous efficiency . The distinction between efficacy and efficiency is not always carefully maintained in published sources, so it is not uncommon to see "efficiencies" expressed in lumens per watt, or "efficacies" expressed as a percentage.
By definition, light outside the visible spectrum cannot be seen by the standard human vision system , and therefore does not contribute to, and indeed can subtract from, luminous efficacy.
Luminous efficacy of radiation measures the fraction of electromagnetic power which is useful for lighting. It is obtained by dividing the luminous flux by the radiant flux . [ 4 ] Light wavelengths outside the visible spectrum reduce luminous efficacy, because they contribute to the radiant flux, while the luminous flux of such light is zero. Wavelengths near the peak of the eye's response contribute more strongly than those near the edges.
Wavelengths of light outside of the visible spectrum are not useful for general illumination [ note 1 ] . Furthermore, human vision responds more to some wavelengths of light than others. This response of the eye is represented by the luminous efficiency function . This is a standardized function representing photopic vision , which models the response of the eye's cone cells , that are active under typical daylight conditions. A separate curve can be defined for dark/night conditions, modeling the response of rod cells without cones, known as scotopic vision . ( Mesopic vision describes the transition zone in dim conditions, between photopic and scotopic, where both cones and rods are active.)
Photopic luminous efficacy of radiation has a maximum possible value of 683.002 lm/W , for the case of monochromatic light at a wavelength of 555 nm . [ note 2 ] Scotopic luminous efficacy of radiation reaches a maximum of 1700 lm/W for monochromatic light at a wavelength of 507 nm . [ note 3 ]
Luminous efficacy (of radiation) , denoted K , is defined as [ 4 ] K = Φ v / Φ e {\displaystyle K={\frac {\Phi _{\mathrm {v} }}{\Phi _{\mathrm {e} }}}} where Φ v {\displaystyle \Phi _{\mathrm {v} }} is the luminous flux (lm), Φ e {\displaystyle \Phi _{\mathrm {e} }} is the radiant flux (W), and K is the luminous efficacy of radiation (lm/W). Dividing K by the maximum possible value, 683.002 lm/W, gives the dimensionless luminous efficiency [ note 4 ] .
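As an illustration of how K varies with wavelength for a monochromatic source, the following Python sketch uses a Gaussian stand-in for the CIE photopic curve; the real curve is tabulated, so values away from the 555 nm peak are only approximate:

```python
# Minimal sketch (illustrative, not from the article): luminous efficacy of
# radiation for a monochromatic source, K = 683.002 lm/W * V(lambda).
# V is approximated here by a Gaussian centered at 555 nm; the real CIE
# photopic curve is tabulated, so treat these numbers as rough.
import math

def v_photopic_approx(wavelength_nm: float) -> float:
    return math.exp(-0.5 * ((wavelength_nm - 555.0) / 42.0) ** 2)

def efficacy_monochromatic(wavelength_nm: float) -> float:
    return 683.002 * v_photopic_approx(wavelength_nm)

print(efficacy_monochromatic(555.0))  # ~683 lm/W at the photopic peak
print(efficacy_monochromatic(650.0))  # much lower for deep red light
```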
Artificial light sources are usually evaluated in terms of luminous efficacy of the source, also sometimes called wall-plug efficacy . This is the ratio between the total luminous flux emitted by a device and the total amount of input power (electrical, etc.) it consumes. The luminous efficacy of the source is a measure of the efficiency of the device with the output adjusted to account for the spectral response curve (the luminosity function). When expressed in dimensionless form (for example, as a fraction of the maximum possible luminous efficacy), this value may be called luminous efficiency of a source , overall luminous efficiency or lighting efficiency .
The main difference between the luminous efficacy of radiation and the luminous efficacy of a source is that the latter accounts for input energy that is lost as heat or otherwise exits the source as something other than electromagnetic radiation. Luminous efficacy of radiation is a property of the radiation emitted by a source. Luminous efficacy of a source is a property of the source as a whole.
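A minimal sketch of these source-level quantities in Python (the 800 lm, 10 W bulb is a hypothetical example, not a figure from this article):

```python
# Minimal sketch (assumed example): luminous efficacy and efficiency of a
# source from its rated light output and electrical input power.
MAX_EFFICACY_LM_PER_W = 683.002  # photopic maximum, monochromatic 555 nm

def source_efficacy(lumens: float, input_watts: float) -> float:
    return lumens / input_watts

def source_efficiency(lumens: float, input_watts: float) -> float:
    return source_efficacy(lumens, input_watts) / MAX_EFFICACY_LM_PER_W

# Example: a hypothetical 800 lm LED bulb drawing 10 W.
print(source_efficacy(800.0, 10.0))    # 80 lm/W
print(source_efficiency(800.0, 10.0))  # ~0.117, i.e. about 12%
```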
The following table lists luminous efficacy of a source and efficiency for various light sources. Note that, unless otherwise indicated, lamps requiring an electrical/electronic ballast are listed without ballast losses (see also voltage ), which would reduce the total efficiency.
Sources that depend on thermal emission from a solid filament, such as incandescent light bulbs , tend to have low overall efficacy because, as explained by Donald L. Klipstein, "An ideal thermal radiator produces visible light most efficiently at temperatures around 6300 °C (6600 K or 11,500 °F). Even at this high temperature, a lot of the radiation is either infrared or ultraviolet, and the theoretical luminous [efficacy] is 95 lumens per watt. No substance is solid and usable as a light bulb filament at temperatures anywhere close to this. The surface of the sun is not quite that hot." [ 22 ] At temperatures where the tungsten filament of an ordinary light bulb remains solid (below 3683 kelvin), most of its emission is in the infrared . [ 22 ] | https://en.wikipedia.org/wiki/Luminous_efficacy |
A luminous efficiency function or luminosity function represents the average spectral sensitivity of human visual perception of light . It is based on subjective judgements of which of a pair of different-colored lights is brighter, to describe relative sensitivity to light of different wavelengths . It is not an absolute reference to any particular individual, but is a standard observer representation of visual sensitivity of a theoretical human eye . It is valuable as a baseline for experimental purposes, and in colorimetry . Different luminous efficiency functions apply under different lighting conditions, varying from photopic in brightly lit conditions through mesopic to scotopic under low lighting conditions. When not specified, the luminous efficiency function generally refers to the photopic luminous efficiency function.
The CIE photopic luminous efficiency function y (λ) or V (λ) is a standard function established by the Commission Internationale de l'Éclairage (CIE) and standardized in collaboration with the ISO , [ 1 ] and may be used to convert radiant energy into luminous (i.e., visible) energy. It also forms the central color matching function in the CIE 1931 color space .
There are two luminous efficiency functions in common use. For everyday light levels, the photopic luminosity function best approximates the response of the human eye. For low light levels, the response of the human eye changes, and the scotopic curve applies. The photopic curve is the CIE standard curve used in the CIE 1931 color space.
The luminous flux (or visible power) in a light source is defined by the photopic luminosity function. The following equation calculates the total luminous flux in a source of light: Φ v = 683.002 l m / W ⋅ ∫ 0 ∞ y ¯ ( λ ) Φ e , λ ( λ ) d λ {\displaystyle \Phi _{\mathrm {v} }=683.002\ \mathrm {lm/W} \cdot \int _{0}^{\infty }{\overline {y}}(\lambda )\,\Phi _{\mathrm {e} ,\lambda }(\lambda )\,\mathrm {d} \lambda } where Φ v {\displaystyle \Phi _{\mathrm {v} }} is the luminous flux (lm), Φ e , λ {\displaystyle \Phi _{\mathrm {e} ,\lambda }} is the spectral radiant flux (W per unit wavelength), and y ¯ ( λ ) {\displaystyle {\overline {y}}(\lambda )} is the luminosity function (dimensionless).
Formally, the integral is the inner product of the luminosity function with the spectral power distribution . [ 2 ] In practice, the integral is replaced by a sum over discrete wavelengths for which tabulated values of the luminous efficiency function are available. The CIE distributes standard tables with luminosity function values at 5 nm intervals from 380 nm to 780 nm . [ cie 1 ]
The standard luminous efficiency function is normalized to a peak value of unity at 555 nm (see luminous coefficient ). The value of the constant in front of the integral is usually rounded off to 683 lm/W . The small excess fractional value comes from the slight mismatch between the definition of the lumen and the peak of the luminosity function. The lumen is defined to be unity for a radiant energy of 1/683 W at a frequency of 540 THz , which corresponds to a standard air wavelength of 555.016 nm rather than 555 nm , which is the peak of the luminosity curve. The value of y ( λ ) is 0.999 997 at 555.016 nm , so that a value of 683/ 0.999 997 = 683.002 is the multiplicative constant. [ 3 ]
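In code, the discrete-sum evaluation described above might look like the following sketch; the y̅ samples and the three-bin spectrum are made up for illustration and are not official CIE data:

```python
# Minimal sketch (assumed tabulation): approximating the luminous flux
# integral by a discrete sum over 5 nm bins, as done in practice with the
# CIE tables. The y_bar values below are illustrative, not official CIE data.
LM_PER_W = 683.002

def luminous_flux(spd_w_per_nm: dict[float, float],
                  y_bar: dict[float, float],
                  step_nm: float = 5.0) -> float:
    """Sum y_bar(lambda) * Phi_e(lambda) * d_lambda over tabulated bins."""
    return LM_PER_W * sum(
        y_bar.get(lam, 0.0) * p for lam, p in spd_w_per_nm.items()
    ) * step_nm

# Toy three-bin spectrum (W/nm) and made-up y_bar samples:
spd = {550.0: 0.002, 555.0: 0.002, 560.0: 0.002}
ybar = {550.0: 0.995, 555.0: 1.000, 560.0: 0.995}
print(luminous_flux(spd, ybar))  # ~20.4 lm for this toy source
```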
The number 683 is connected to the modern (1979) definition of the candela , the unit of luminous intensity . [ cie 2 ] This arbitrary number made the new definition give numbers equivalent to those from the old definition of the candela.
The CIE 1924 photopic V ( λ ) luminosity function, [ cie 3 ] which is included in the CIE 1931 color-matching functions as the y ( λ ) function, has long been acknowledged to underestimate the contribution of the blue end of the spectrum to perceived luminance. There have been numerous attempts to improve the standard function, to make it more representative of human vision. A correction by Judd in 1951, [ 4 ] improved by Vos in 1978, [ 5 ] resulted in a function known as CIE V M ( λ ). [ 6 ] More recently, Sharpe, Stockman, Jagla & Jägle (2005) developed a function consistent with the Stockman & Sharpe cone fundamentals . [ 7 ]
Stockman & Sharpe subsequently produced an improved function in 2011, taking into account the effects of chromatic adaptation under daylight . [ 8 ] Their work in 2008 [ 9 ] revealed that "luminous efficiency or V(λ) functions change dramatically with chromatic adaptation". [ 10 ]
The ISO standard is ISO/CIE FDIS 11664-1. The standard provides a table of the CIE 1924 function, in 1 nm increments, across the visible range. [ 11 ] [ 12 ]
For very low levels of intensity ( scotopic vision ), the sensitivity of the eye is mediated by rods, not cones, and shifts toward the violet , peaking around 507 nm for young eyes; the sensitivity is equivalent to 1699 lm/W [ 13 ] or 1700 lm/W [ 14 ] at this peak. The standard scotopic luminous efficiency function or V ′ ( λ ) was adopted by the CIE in 1951, based on measurements by Wald (1945) and by Crawford (1949). [ 15 ]
Luminosity for mesopic vision , a wide transition band between scotopic and photopic vision, is more poorly standardized. The consensus is that this luminous efficiency can be written as a weighted average of the photopic and scotopic luminosities, but different organizations provide different weighting factors. [ 16 ]
Color blindness changes the sensitivity of the eye as a function of wavelength. For people with protanopia , the peak of the eye's response is shifted toward the short-wave part of the spectrum (approximately 540 nm), while for people with deuteranopia , there is a slight shift in the peak of the spectrum, to about 560 nm. [ 17 ] People with protanopia have essentially no sensitivity to light of wavelengths more than 670 nm.
Most non- primate mammals have the same luminous efficiency function as people with protanopia. Their insensitivity to long-wavelength red light makes it possible to use such illumination while studying the nocturnal life of animals. [ 18 ]
For older people with normal color vision, the crystalline lens may become slightly yellow due to cataracts , which moves the maximum of sensitivity to the red part of the spectrum and narrows the range of perceived wavelengths. [ citation needed ] | https://en.wikipedia.org/wiki/Luminous_efficiency_function |
In photometry , luminous energy is the perceived energy of light . This is sometimes called the quantity of light . [ 1 ] Luminous energy is not the same as radiant energy , the corresponding objective physical quantity . This is because the human eye can only see light in the visible spectrum and has different sensitivities to light of different wavelengths within the spectrum. When adapted for bright conditions ( photopic vision ), the eye is most sensitive to light at a wavelength of 555 nm . Light with a given amount of radiant energy will have more luminous energy if the wavelength is 555 nm than if the wavelength is longer or shorter. Light whose wavelength is well outside the visible spectrum has a luminous energy of zero, regardless of the amount of radiant energy present.
The SI unit of luminous energy is the lumen second , which is unofficially known as the "talbot" in honor of William Henry Fox Talbot . In other systems of units , luminous energy may be expressed in basic units of energy.
Luminous energy Q v {\displaystyle Q_{\mathrm {v} }} is related to radiant energy Q e {\displaystyle Q_{\mathrm {e} }} by the expression Q v = 683.002 l m / W ⋅ ∫ 0 ∞ Q e ( λ ) y ¯ ( λ ) d λ . {\displaystyle Q_{\mathrm {v} }=683.002\ \mathrm {lm/W} \cdot \int _{0}^{\infty }Q_{\mathrm {e} }(\lambda ){\overline {y}}(\lambda )\,\mathrm {d} \lambda .} Here λ {\displaystyle \lambda } is the wavelength of light, and y ¯ ( λ ) {\displaystyle {\overline {y}}(\lambda )} is the luminous efficiency function , which represents the eye's sensitivity to different wavelengths of light.
Luminous energy is the integrated luminous flux in a given period of time: Q v = ∫ 0 T Φ v ( t ) d t {\displaystyle Q_{\mathrm {v} }=\int _{0}^{T}{\mathit {\Phi _{\mathrm {v} }}}(t)\,\mathrm {d} t}
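A minimal sketch of this time integral in Python, with an assumed, illustrative flux profile:

```python
# Minimal sketch (assumed data): luminous energy as the time integral of
# luminous flux, approximated with the trapezoidal rule.
def luminous_energy(times_s: list[float], flux_lm: list[float]) -> float:
    q = 0.0
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        q += 0.5 * (flux_lm[i] + flux_lm[i - 1]) * dt
    return q  # lumen seconds ("talbots")

# A lamp ramping from 0 to 800 lm over 2 s, then steady for 8 s:
t = [0.0, 2.0, 10.0]
phi = [0.0, 800.0, 800.0]
print(luminous_energy(t, phi))  # 800 + 6400 = 7200 lm*s
```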
| https://en.wikipedia.org/wiki/Luminous_energy
A luminous flame is a burning flame which is brightly visible. Much of its output is in the form of visible light , as well as heat or light in the non-visible wavelengths.
An early study of flame luminosity was conducted by Michael Faraday and became part of his series of Royal Institution Christmas Lectures , The Chemical History of a Candle . [ 1 ]
In the simplest case, the yellow flame is luminous due to small soot particles in the flame which are heated to incandescence . Producing a deliberately luminous flame requires either a shortage of combustion air (as in a Bunsen burner ) or a local excess of fuel (as for a kerosene torch). [ citation needed ] Because of this dependency upon relatively inefficient combustion, luminosity is associated with diffusion flames and is lessened with premixed flames .
The flame is yellow because of its temperature. To produce enough soot to be luminous, the flame is operated at a lower temperature than its efficient heating flame (see Bunsen burner ). The colour of simple incandescence is due to black-body radiation . By Planck's law , as the temperature decreases, the peak of the black-body radiation curve moves to longer wavelengths, i.e. from the blue to the yellow. However, the blue light from a gas burner's premixed flame is primarily a product of molecular emission ( Swan bands ) rather than black-body radiation.
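These Planck's-law statements can be illustrated numerically. The sketch below computes the Wien peak wavelength and a rough luminous efficacy for blackbodies at several temperatures; it uses a Gaussian stand-in for the CIE photopic curve, so the efficacy figures are order-of-magnitude estimates only:

```python
# Rough numerical sketch: Wien peak and approximate luminous efficacy of a
# blackbody radiator. v_approx is a Gaussian stand-in for the tabulated CIE
# photopic curve, so the efficacy outputs are order-of-magnitude estimates.
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann
WIEN_B = 2.898e-3                          # Wien displacement constant, m*K

def planck(lam: float, t: float) -> float:
    """Blackbody spectral radiance; the common prefactor cancels in the ratio."""
    return (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * KB * t)) - 1)

def v_approx(lam: float) -> float:
    return math.exp(-0.5 * ((lam * 1e9 - 555.0) / 42.0) ** 2)

def bb_efficacy(t: float, lo=100e-9, hi=20e-6, n=20000) -> float:
    d = (hi - lo) / n
    lum = rad = 0.0
    for i in range(n):
        lam = lo + (i + 0.5) * d
        b = planck(lam, t)
        rad += b * d
        lum += b * v_approx(lam) * d
    return 683.002 * lum / rad

for t in (6600.0, 2800.0, 2000.0):
    print(f"{t:.0f} K: peak {WIEN_B / t * 1e9:.0f} nm, ~{bb_efficacy(t):.0f} lm/W")
# 6600 K: peak ~439 nm (blue), efficacy on the order of 95 lm/W
# 2800 K: peak ~1035 nm (infrared), efficacy roughly 15 lm/W
# 2000 K: peak ~1449 nm, only a few lm/W; cooler, sootier flames glow dimly
```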
Other factors, particularly the fuel chemistry and its propensity for forming soot, have an influence on luminosity. [ citation needed ]
One of the most familiar instances of a luminous flame is produced by a Bunsen burner . This burner has a controllable air supply and a constant gas jet: when the air supply is reduced, a highly luminous, and thus visible, orange 'safety flame' is produced. For heating work, the air inlet is opened and the burner produces a much hotter blue flame. [ citation needed ]
Efficient combustion relies on the complete combustion of the fuel. Production of soot and/or carbon monoxide represents a waste of fuel (further burning was possible) and creates the potential problem of soot build-up in burners. Heating burners are thus usually designed to produce a non- luminous flame. [ citation needed ]
Lamps for illumination rather than heat may use a deliberately luminous flame. A more efficient method overall uses a mantle instead. [ 2 ] Like the incandescent soot in a luminous flame, the mantle is heated and then glows. The flame does not provide much light itself, and so a more heat-efficient non-luminous flame is preferred. Unlike simple soot, a mantle uses rare-earth elements to provide a bright white glow; the colour of the glow comes from the spectral lines of these elements, not from simple black-body radiation. [ citation needed ]
When performing a flame test , the colour of a flame is affected by external materials added to it. A non-luminous flame is used, to avoid masking the test colour by the flame's colour. [ citation needed ] | https://en.wikipedia.org/wiki/Luminous_flame |
In photometry , luminous flux [ 1 ] or luminous power [ 2 ] is the measure of the perceived power of light . It differs from radiant flux , the measure of the total power of electromagnetic radiation (including infrared , ultraviolet , and visible light), in that luminous flux is adjusted to reflect the varying sensitivity of the human eye to different wavelengths of light.
The SI unit of luminous flux is the lumen (lm). One lumen is defined as the luminous flux of light produced by a light source that emits one candela of luminous intensity over a solid angle of one steradian .
1 lm = 1 cd × 1 sr {\displaystyle 1\ {\text{lm}}=1\ {\text{cd}}\times 1\ {\text{sr}}}
In other systems of units, luminous flux may have units of power .
The luminous flux accounts for the sensitivity of the eye by weighting the power at each wavelength with the luminosity function , which represents the eye's response to different wavelengths. The luminous flux is a weighted sum of the power at all wavelengths in the visible band. Light outside the visible band does not contribute. The ratio of the total luminous flux to the radiant flux is called the luminous efficacy . This model of human visual brightness perception is standardized by the CIE and ISO . [ 7 ]
Luminous flux is often used as an objective measure of the useful light emitted by a light source , and is typically reported on the packaging for light bulbs , although it is not always prominent. Consumers commonly compare the luminous flux of different light bulbs since it provides an estimate of the apparent amount of light the bulb will produce, and a lightbulb with a higher ratio of luminous flux to consumed power is more efficient.
Luminous flux is not used to compare brightness , as this is a subjective perception which varies according to the distance from the light source and the angular spread of the light from the source.
Luminous flux of artificial light sources is typically measured using an integrating sphere , or a goniophotometer outfitted with a photometer or a spectroradiometer. [ 8 ]
Luminous flux (in lumens) is a measure of the total amount of light a lamp puts out. The luminous intensity (in candelas) is a measure of how bright the beam in a particular direction is. If a lamp has a 1 lumen bulb and the optics of the lamp are set up to focus the light evenly into a 1 steradian beam, then the beam would have a luminous intensity of 1 candela. If the optics were changed to concentrate the beam into 1/2 steradian then the source would have a luminous intensity of 2 candela. The resulting beam is narrower and brighter, however the luminous flux remains the same. | https://en.wikipedia.org/wiki/Luminous_flux |
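A minimal Python sketch of this flux-versus-intensity relationship; the cone geometry and numbers simply restate the beam example above:

```python
# Minimal sketch (assumed geometry): relating luminous flux to luminous
# intensity for a beam focused evenly into a cone of half-angle theta.
import math

def cone_solid_angle(half_angle_rad: float) -> float:
    """Solid angle of a cone: Omega = 2*pi*(1 - cos(theta))."""
    return 2.0 * math.pi * (1.0 - math.cos(half_angle_rad))

def luminous_intensity(flux_lm: float, solid_angle_sr: float) -> float:
    return flux_lm / solid_angle_sr

# 1 lm focused evenly into 1 sr gives 1 cd; into 0.5 sr gives 2 cd,
# matching the narrowing-beam example above.
print(luminous_intensity(1.0, 1.0))         # 1.0 cd
print(luminous_intensity(1.0, 0.5))         # 2.0 cd
print(cone_solid_angle(math.radians(10)))   # ~0.0955 sr for a 10-degree cone
```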
Luminous paint (or luminescent paint ) is paint that emits visible light through fluorescence , phosphorescence , or radioluminescence .
Fluorescent paints 'glow' when exposed to short-wave ultraviolet (UV) radiation. These UV wavelengths are found in sunlight and many artificial lights, but the paint requires a special black light to view so these glowing-paint applications are called 'black-light effects'. Fluorescent paint is available in a wide range of colors and is used in theatrical lighting and effects, posters, and as entertainment for children.
The fluorescent chemicals in fluorescent paint absorb the invisible UV radiation, then emit the energy as longer wavelength visible light of a particular color. Human eyes perceive this light as the unusual 'glow' of fluorescence. The painted surface also reflects any ordinary visible light striking it, which tends to wash out the dim fluorescent glow. So viewing fluorescent paint requires a longwave UV light which does not emit much visible light. This is called a black light . It has a dark blue filter material on the bulb which lets the invisible UV pass but blocks the visible light the bulb produces, allowing only a little purple light through. Fluorescent paints are best viewed in a darkened room.
Fluorescent paints are made in both 'visible' and 'invisible' types. Visible fluorescent paint also has ordinary visible light pigments, so under white light it appears a particular color, and the color just appears enhanced brilliantly under black lights. Invisible fluorescent paints appear transparent or pale under daytime lighting, but will glow under UV light. Since patterns painted with this type are invisible under ordinary visible light, they can be used to create a variety of clever effects.
Both types of fluorescent painting benefit when used within a contrasting ambiance of clean, matte-black backgrounds and borders. Such a "black out" effect minimizes awareness of the surroundings, cultivating the peculiar luminescence of UV fluorescence. Both types of paints have extensive application where artistic lighting effects are desired, particularly in "black box" entertainments and environments such as theaters, bars, shrines, etc. The effective wattage needed to light larger empty spaces increases rapidly, since narrow-band light such as UV wavelengths is quickly scattered in outdoor environments.
Phosphorescent paint is commonly called "glow-in-the-dark" paint. It is made from phosphors such as silver-activated zinc sulfide or doped strontium aluminate , and typically glows a pale green to greenish-blue color. The mechanism for producing light is similar to that of fluorescent paint, but the emission of visible light persists long after it has been exposed to light. Phosphorescent paints have a sustained glow which lasts for up to 12 hours after exposure to light, fading over time.
This type of paint has been used to mark escape paths in aircraft and for decorative use such as "stars" applied to walls and ceilings. It is an alternative to radioluminescent paint. Kenner 's Lightning Bug Glo-Juice was a popular non-toxic paint product in 1968, marketed at children, alongside other glow-in-the-dark toys and novelties. Phosphorescent paint is typically used as body paint, on children's walls and outdoors.
When applied as a paint or a more sophisticated coating (e.g. a thermal barrier coating ), phosphorescence can be used for temperature detection or degradation measurements known as phosphor thermometry .
Radioluminescent paint is a self-luminous paint that consists of a small amount of a radioactive isotope ( radionuclide ) mixed with a radioluminescent phosphor chemical. The radioisotope continually decays, emitting radiation particles which strike molecules of the phosphor, exciting them to emit visible light. The isotopes selected are typically strong emitters of beta radiation , preferred since this radiation will not penetrate an enclosure. Radioluminescent paints will glow without exposure to light until the radioactive isotope has decayed (or the phosphor degrades), which may be many years.
Because of safety concerns and tighter regulation, consumer products such as clocks and watches now increasingly use phosphorescent rather than radioluminescent substances. Previously, radioluminescent paints were used extensively on watch and clock dials and were known colloquially to watchmakers as "clunk". [ 1 ] Radioluminescent paint may still be preferred in specialist applications, such as diving watches . [ 2 ]
Radioluminescent paint was invented in 1908 by Sabin Arnold von Sochocky [ 3 ] [ failed verification – see discussion ] and originally incorporated radium -226. Radium paint was widely used for 40 years on the faces of watches, compasses, and aircraft instruments, so they could be read in the dark. Radium is a radiological hazard , emitting gamma rays that can penetrate a glass watch dial and into human tissue. During the 1920s and 1930s, the harmful effects of this paint became increasingly clear. A notorious case involved the " Radium Girls ", a group of women who painted watchfaces and later suffered adverse health effects from ingestion, in many cases resulting in death. In 1928, Dr von Sochocky himself died of aplastic anemia as a result of radiation exposure. [ 3 ] Thousands of legacy radium dials are still owned by the public and the paint can still be dangerous if ingested in sufficient quantities, which is why it has been banned in many countries.
Radium paint used zinc sulfide phosphor, usually doped with a trace-metal activator such as copper (for green light), silver (blue-green), or, more rarely, copper-magnesium (for yellow-orange light). The phosphor degrades relatively fast and the dials lose luminosity in several years to a few decades; clocks and other devices available from antique shops and other sources therefore are no longer luminous. However, due to the 1600-year half-life of the Ra-226 isotope, they are still radioactive and can be identified with a Geiger counter .
The dials can be renovated by application of a very thin layer of fresh phosphor, without the radium content (with the original material still acting as the energy source); the phosphor layer has to be thin due to the light self-absorption in the material.
In the second half of the 20th century, radium was progressively replaced with promethium -147. Promethium is only a relatively low-energy beta-emitter, which, unlike alpha emitters, does not degrade the phosphor lattice and the luminosity of the material does not degrade as fast. Promethium-based paints are significantly safer than radium, but the half-life of 147 Pm is only 2.62 years and therefore it is not suitable for long-life applications.
Promethium-based paint was used to illuminate Apollo Lunar Module electrical switch tips, the Apollo command and service module hatch and EVA handles, and control panels of the Lunar Roving Vehicle . [ 4 ] [ 5 ]
The latest generation of the radioluminescent materials is based on tritium , a radioactive isotope of hydrogen with half-life of 12.32 years that emits very low-energy beta radiation. The devices are similar to a fluorescent tube in construction, as they consist of a hermetically sealed (usually borosilicate-glass) tube, coated inside with a phosphor, and filled with tritium. They are known under many names – e.g. gaseous tritium light source (GTLS), traser, betalight.
Tritium light sources are most often seen as "permanent" illumination for the hands of wristwatches intended for diving, nighttime, or tactical use. They are additionally used in glowing novelty keychains , in self-illuminated exit signs , and formerly in fishing lures. They are favored by the military for applications where a power source may not be available, such as for instrument dials in aircraft, compasses , lights for map reading, and sights for weapons.
Tritium lights are also found in some old rotary dial telephones, though due to their age they no longer produce a useful amount of light. | https://en.wikipedia.org/wiki/Luminous_paint |
Lumirubin is a structural isomer of bilirubin , which is formed during phototherapy used to treat neonatal jaundice . This polar isomer, produced by the blue-green light of phototherapy, binds to albumin, and its effects are considered less toxic than those of bilirubin. [ 1 ] [ 2 ] [ 3 ] Lumirubin is excreted into bile or urine. ZZ, ZE, EE and EZ are the four structural isomers of bilirubin. ZZ is the stable, more insoluble form. The other forms are relatively soluble and are known as lumirubins. Phototherapy converts the ZZ form into lumirubins. Monoglucuronylated lumirubins are easily excreted. [ 4 ]
| https://en.wikipedia.org/wiki/Lumirubin
The Lumière–Barbier method is a method of acetylating aromatic amines in aqueous solutions. [ 1 ] Illustrative is the acetylation of aniline. First aniline is dissolved in water using one equivalent of hydrochloric acid . This solution is subsequently treated, sequentially, with acetic anhydride and aqueous sodium acetate. Aniline attacks acetic anhydride followed by deprotonation of the ammonium ion:
Acetate then acts as a leaving group:
The acetanilide product is insoluble in water and can therefore be filtered off as crystals. | https://en.wikipedia.org/wiki/Lumière–Barbier_method |
A lump sum contract in construction is one type of construction contract , sometimes referred to as a stipulated-sum contract, where a single price, based on the plans and specifications, is quoted for the entire project, so the owner knows in advance exactly how much the work will cost. [ 1 ] This type of contract requires a full and complete set of plans and specifications; the price includes all the indirect costs plus profit, and the contractor receives progress payments each month minus retention. The flexibility of this contract is very minimal: changes in design or deviation from the original plans require a change order paid by the owner. [ 2 ] In this contract the payment is made according to the percentage of work completed. [ 3 ] The lump sum contract differs from a guaranteed maximum price in the sense that the contractor is responsible for additional costs beyond the agreed price; however, if the final price is less than the agreed price, then the contractor gains and benefits from the savings. [ 4 ]
There are some factors that make for a successful execution of a lump sum contract on a project such as experience and confidence, management skills, communication skills, having a clear work plan, proper list of deliverables, contingency, and dividing the responsibility among the project team. [ 5 ]
According to Associated General Contractors of America (AGC),
In a lump sum contract, the owner has essentially assigned all the risk to the contractor, who in turn can be expected to ask for a higher markup in order to take care of unforeseen contingencies. A Contractor under a lump sum agreement will be responsible for the proper job execution and will provide its own means and methods to complete the work. [ 6 ]
With a lump sum contract or fixed-price contract , the contractor assesses the value of work as per the documents available, primarily the specifications and the drawings. At pre-tender stage the contractor evaluates the cost to execute the project (based on the above documents such as drawings, specifications, schedules, tender instruction and any clarification received in response to queries) and quotes a fixed inclusive price. [ 7 ]
Variations occur due to fluctuation in prices and inflation, provisional items, statutory fees, relevant events such as failure of the owner to deliver goods, etc. [ 8 ] [ 9 ] Where the cost of a specific activity is identified as a "provisional sum", a variation in actual cost may be accepted by the employer.
Variations are typically broken down into two categories, beneficial and detrimental: the former is for improvement of work quality, cost and schedule reduction, and the latter is a negative change in performance or quality of work due to the client's financial difficulties. There are many reasons for variations to occur, but the main causes are normally omission in design, inadequate design, changes in specifications and scope, and lack of coordination and communication among the stakeholders. [ 11 ] | https://en.wikipedia.org/wiki/Lump_sum_contract
The lumped-element model (also called lumped-parameter model , or lumped-component model ) is a simplified representation of a physical system or circuit that assumes all components are concentrated at a single point and their behavior can be described by idealized mathematical models. The lumped-element model simplifies the system or circuit behavior description into a topology . It is useful in electrical systems (including electronics ), mechanical multibody systems , heat transfer , acoustics , etc. This is in contrast to distributed parameter systems or models in which the behaviour is distributed spatially and cannot be considered as localized into discrete entities.
The simplification reduces the state space of the system to a finite dimension , and the partial differential equations (PDEs) of the continuous (infinite-dimensional) time and space model of the physical system into ordinary differential equations (ODEs) with a finite number of parameters.
The lumped-matter discipline is a set of imposed assumptions in electrical engineering that provides the foundation for lumped-circuit abstraction used in network analysis . [ 1 ] The self-imposed constraints are:
The first two assumptions result in Kirchhoff's circuit laws when applied to Maxwell's equations and are only applicable when the circuit is in steady state . The third assumption is the basis of the lumped-element model used in network analysis . Less severe assumptions result in the distributed-element model , while still not requiring the direct application of the full Maxwell equations.
The lumped-element model of electronic circuits makes the simplifying assumption that the attributes of the circuit ( resistance , capacitance , inductance , and gain ) are concentrated into idealized electrical components : resistors , capacitors , and inductors , etc., joined by a network of perfectly conducting wires.
The lumped-element model is valid whenever L c ≪ λ {\displaystyle L_{c}\ll \lambda } , where L c {\displaystyle L_{c}} denotes the circuit's characteristic length, and λ {\displaystyle \lambda } denotes the circuit's operating wavelength . Otherwise, when the circuit length is on the order of a wavelength, we must consider more general models, such as the distributed-element model (including transmission lines ), whose dynamic behaviour is described by Maxwell's equations . Another way of viewing the validity of the lumped-element model is to note that this model ignores the finite time it takes signals to propagate around a circuit. Whenever this propagation time is not significant to the application the lumped-element model can be used. This is the case when the propagation time is much less than the period of the signal involved. However, with increasing propagation time there will be an increasing error between the assumed and actual phase of the signal which in turn results in an error in the assumed amplitude of the signal. The exact point at which the lumped-element model can no longer be used depends to a certain extent on how accurately the signal needs to be known in a given application.
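A small sketch of this validity check in Python; the factor-of-100 margin is a common rule of thumb, not a figure from this article:

```python
# Minimal sketch (assumed threshold): checking whether the lumped-element
# approximation is reasonable for a circuit of characteristic length L_c
# operating at frequency f. The factor-of-100 margin is a common rule of
# thumb, not a hard standard.
C_LIGHT = 2.998e8  # m/s

def is_lumped_valid(length_m: float, freq_hz: float, margin: float = 100.0) -> bool:
    wavelength = C_LIGHT / freq_hz
    return length_m * margin < wavelength

print(is_lumped_valid(0.1, 1e6))   # True: 10 cm board at 1 MHz (lambda ~ 300 m)
print(is_lumped_valid(0.1, 1e9))   # False: same board at 1 GHz (lambda ~ 30 cm)
```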
Real-world components exhibit non-ideal characteristics which are, in reality, distributed elements but are often represented to a first-order approximation by lumped elements. To account for leakage in capacitors, for example, we can model the non-ideal capacitor as having a large lumped resistor connected in parallel, even though the leakage is, in reality, distributed throughout the dielectric. Similarly, a wire-wound resistor has significant inductance as well as resistance distributed along its length, but we can model this as a lumped inductor in series with the ideal resistor.
A lumped-capacitance model , also called lumped system analysis , [ 2 ] reduces a thermal system to a number of discrete “lumps” and assumes that the temperature difference inside each lump is negligible. This approximation is useful to simplify otherwise complex differential heat equations. It was developed as a mathematical analog of electrical capacitance , although it also includes thermal analogs of electrical resistance as well.
The lumped-capacitance model is a common approximation in transient conduction, which may be used whenever heat conduction within an object is much faster than heat transfer across the boundary of the object. The method of approximation then suitably reduces one aspect of the transient conduction system (spatial temperature variation within the object) to a more mathematically tractable form (that is, it is assumed that the temperature within the object is completely uniform in space, although this spatially uniform temperature value changes over time). The rising uniform temperature within the object or part of a system, can then be treated like a capacitative reservoir which absorbs heat until it reaches a steady thermal state in time (after which temperature does not change within it).
An early-discovered example of lumped-capacitance behavior, mathematically simple as a result of such physical simplifications, is given by systems which conform to Newton's law of cooling. This law simply states that the temperature of a hot (or cold) object progresses toward the temperature of its environment in a simple exponential fashion. Objects follow this law strictly only if the rate of heat conduction within them is much larger than the heat flow into or out of them. In such cases it makes sense to talk of a single "object temperature" at any given time (since there is no spatial temperature variation within the object), and the uniform temperature within the object allows its total thermal energy excess or deficit to vary proportionally to its surface temperature, thus setting up the Newton's law of cooling requirement that the rate of temperature decrease is proportional to the difference between the object and the environment. This in turn leads to simple exponential heating or cooling behavior (details below).
To determine the number of lumps, the Biot number (Bi), a dimensionless parameter of the system, is used. Bi is defined as the ratio of the conductive heat resistance within the object to the convective heat transfer resistance across the object's boundary with a uniform bath of different temperature. When the thermal resistance to heat transferred into the object is larger than the resistance to heat being diffused completely within the object, the Biot number is less than 1. In this case, particularly for Biot numbers which are even smaller, the approximation of spatially uniform temperature within the object can begin to be used, since it can be presumed that heat transferred into the object has time to uniformly distribute itself, due to the lower resistance to doing so, as compared with the resistance to heat entering the object.
If the Biot number is less than 0.1 for a solid object, then the entire material will be nearly the same temperature, with the dominant temperature difference being at the surface. It may be regarded as being "thermally thin". The Biot number must generally be less than 0.1 for usefully accurate approximation and heat transfer analysis. The mathematical solution to the lumped-system approximation gives Newton's law of cooling .
A Biot number greater than 0.1 (a "thermally thick" substance) indicates that one cannot make this assumption, and more complicated heat transfer equations for "transient heat conduction" will be required to describe the time-varying and non-spatially-uniform temperature field within the material body.
The single capacitance approach can be expanded to involve many resistive and capacitive elements, with Bi < 0.1 for each lump. As the Biot number is calculated based upon a characteristic length of the system, the system can often be broken into a sufficient number of sections, or lumps, so that the Biot number is acceptably small.
Some characteristic lengths of thermal systems are:
For arbitrary shapes, it may be useful to consider the characteristic length to be volume / surface area.
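A minimal sketch of the Biot-number test in Python, taking the characteristic length as volume divided by surface area; the copper-cube property values are illustrative assumptions:

```python
# Minimal sketch (assumed property values): computing the Biot number with
# the characteristic length taken as volume / surface area, and applying
# the Bi < 0.1 criterion for the lumped-capacitance approximation.
def biot_number(h_w_m2k: float, k_w_mk: float, volume_m3: float, area_m2: float) -> float:
    l_c = volume_m3 / area_m2
    return h_w_m2k * l_c / k_w_mk

# Example: a 1 cm copper cube (k ~ 400 W/m-K) in still air (h ~ 10 W/m^2-K).
side = 0.01
bi = biot_number(10.0, 400.0, side**3, 6 * side**2)
print(bi, bi < 0.1)  # ~4.2e-5, True: lumped analysis is well justified
```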
A useful concept used in heat transfer applications once the condition of steady state heat conduction has been reached, is the representation of thermal transfer by what is known as thermal circuits. A thermal circuit is the representation of the resistance to heat flow in each element of a circuit, as though it were an electrical resistor . The heat transferred is analogous to the electric current and the thermal resistance is analogous to the electrical resistor. The values of the thermal resistance for the different modes of heat transfer are then calculated as the denominators of the developed equations. The thermal resistances of the different modes of heat transfer are used in analyzing combined modes of heat transfer. The lack of "capacitative" elements in the following purely resistive example, means that no section of the circuit is absorbing energy or changing in distribution of temperature. This is equivalent to demanding that a state of steady state heat conduction (or transfer, as in radiation) has already been established.
The equations describing the three heat transfer modes and their thermal resistances in steady state conditions, as discussed previously, are summarized in the table below:
In cases where there is heat transfer through different media (for example, through a composite material ), the equivalent resistance is the sum of the resistances of the components that make up the composite. Likewise, in cases where there are different heat transfer modes, the total resistance is the sum of the resistances of the different modes. Using the thermal circuit concept, the amount of heat transferred through any medium is the quotient of the temperature change and the total thermal resistance of the medium.
As an example, consider a composite wall of cross-sectional area A {\displaystyle A} . The composite is made of an L 1 {\displaystyle L_{1}} long cement plaster with a thermal coefficient k 1 {\displaystyle k_{1}} and L 2 {\displaystyle L_{2}} long paper-faced fiberglass, with thermal coefficient k 2 {\displaystyle k_{2}} . The left surface of the wall is at T i {\displaystyle T_{i}} and exposed to air with a convective coefficient of h i {\displaystyle h_{i}} . The right surface of the wall is at T o {\displaystyle T_{o}} and exposed to air with convective coefficient h o {\displaystyle h_{o}} .
Using the thermal resistance concept, heat flow through the composite is as follows: Q ˙ = T i − T o R i + R 1 + R 2 + R o = T i − T 1 R i = T i − T 2 R i + R 1 = T i − T 3 R i + R 1 + R 2 = T 1 − T 2 R 1 = T 3 − T o R 0 {\displaystyle {\dot {Q}}={\frac {T_{i}-T_{o}}{R_{i}+R_{1}+R_{2}+R_{o}}}={\frac {T_{i}-T_{1}}{R_{i}}}={\frac {T_{i}-T_{2}}{R_{i}+R_{1}}}={\frac {T_{i}-T_{3}}{R_{i}+R_{1}+R_{2}}}={\frac {T_{1}-T_{2}}{R_{1}}}={\frac {T_{3}-T_{o}}{R_{0}}}} where R i = 1 h i A {\displaystyle R_{i}={\frac {1}{h_{i}A}}} , R o = 1 h o A {\displaystyle R_{o}={\frac {1}{h_{o}A}}} , R 1 = L 1 k 1 A {\displaystyle R_{1}={\frac {L_{1}}{k_{1}A}}} , and R 2 = L 2 k 2 A {\displaystyle R_{2}={\frac {L_{2}}{k_{2}A}}}
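A minimal Python sketch of this composite-wall calculation; all numeric values below (area, thicknesses, conductivities, film coefficients, temperatures) are assumed for illustration:

```python
# Minimal sketch (illustrative values, not from the article): series thermal
# resistances for the composite wall described above, and the resulting heat flow.
def conv_resistance(h: float, area: float) -> float:
    return 1.0 / (h * area)

def cond_resistance(length: float, k: float, area: float) -> float:
    return length / (k * area)

A = 10.0                              # wall area, m^2
r_i = conv_resistance(10.0, A)        # inside air film, h_i = 10 W/m^2-K
r_1 = cond_resistance(0.02, 0.7, A)   # 2 cm cement plaster, k1 = 0.7 W/m-K
r_2 = cond_resistance(0.10, 0.04, A)  # 10 cm fiberglass, k2 = 0.04 W/m-K
r_o = conv_resistance(25.0, A)        # outside air film, h_o = 25 W/m^2-K

q = (20.0 - (-5.0)) / (r_i + r_1 + r_2 + r_o)  # T_i = 20 C, T_o = -5 C
print(q)  # ~94 W through the wall
```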
Newton's law of cooling is an empirical relationship attributed to English physicist Sir Isaac Newton (1642–1727). This law stated in non-mathematical form is the following:
The rate of heat loss of a body is proportional to the temperature difference between the body and its surroundings.
Or, using symbols: Rate of cooling ∼ Δ T {\displaystyle {\text{Rate of cooling}}\sim \Delta T}
An object at a different temperature from its surroundings will ultimately come to a common temperature with its surroundings. A relatively hot object cools as it warms its surroundings; a cool object is warmed by its surroundings. When considering how quickly (or slowly) something cools, we speak of its rate of cooling – how many degrees' change in temperature per unit of time.
The rate of cooling of an object depends on how much hotter the object is than its surroundings. The temperature change per minute of a hot apple pie will be more if the pie is put in a cold freezer than if it is placed on the kitchen table. When the pie cools in the freezer, the temperature difference between it and its surroundings is greater. On a cold day, a warm home will leak heat to the outside at a greater rate when there is a large difference between the inside and outside temperatures. Keeping the inside of a home at high temperature on a cold day is thus more costly than keeping it at a lower temperature. If the temperature difference is kept small, the rate of cooling will be correspondingly low.
As Newton's law of cooling states, the rate of cooling of an object – whether by conduction , convection , or radiation – is approximately proportional to the temperature difference Δ T . Frozen food will warm up faster in a warm room than in a cold room. Note that the rate of cooling experienced on a cold day can be increased by the added convection effect of the wind . This is referred to as wind chill . For example, a wind chill of -20 °C means that heat is being lost at the same rate as if the temperature were -20 °C without wind.
This law describes many situations in which an object has a large thermal capacity and large conductivity, and is suddenly immersed in a uniform bath which conducts heat relatively poorly. It is an example of a thermal circuit with one resistive and one capacitative element. For the law to be correct, the temperatures at all points inside the body must be approximately the same at each time point, including the temperature at its surface. Thus, the temperature difference between the body and surroundings does not depend on which part of the body is chosen, since all parts of the body have effectively the same temperature. In these situations, the material of the body does not act to "insulate" other parts of the body from heat flow, and all of the significant insulation (or "thermal resistance") controlling the rate of heat flow in the situation resides in the area of contact between the body and its surroundings. Across this boundary, the temperature-value jumps in a discontinuous fashion.
In such situations, heat can be transferred from the exterior to the interior of a body, across the insulating boundary, by convection, conduction, or diffusion, so long as the boundary serves as a relatively poor conductor with regard to the object's interior. The presence of a physical insulator is not required, so long as the process which serves to pass heat across the boundary is "slow" in comparison to the conductive transfer of heat inside the body (or inside the region of interest—the "lump" described above).
In such a situation, the object acts as the "capacitative" circuit element, and the resistance of the thermal contact at the boundary acts as the (single) thermal resistor. In electrical circuits, such a combination would charge or discharge toward the input voltage, according to a simple exponential law in time. In the thermal circuit, this configuration results in the same behavior in temperature: an exponential approach of the object temperature to the bath temperature.
Newton's law is mathematically stated by the simple first-order differential equation: d Q d t = − h ⋅ A ( T ( t ) − T env ) = − h ⋅ A Δ T ( t ) {\displaystyle {\frac {dQ}{dt}}=-h\cdot A(T(t)-T_{\text{env}})=-h\cdot A\Delta T(t)} where Q is the thermal energy of the body, h is the heat transfer coefficient between the surface and the environment, A is the surface area being cooled, T ( t ) is the temperature of the body at time t , and T env is the temperature of the environment.
Putting heat transfers into this form is sometimes not a very good approximation, depending on ratios of heat conductances in the system. If the differences are not large, an accurate formulation of heat transfers in the system may require analysis of heat flow based on the (transient) heat transfer equation in nonhomogeneous or poorly conductive media.
If the entire body is treated as a lumped-capacitance heat reservoir, with total heat content proportional to its total heat capacity C {\displaystyle C} and its temperature T {\displaystyle T} , i.e. Q = C T {\displaystyle Q=CT} , then the system is expected to experience exponential decay of the body's temperature with time.
From the definition of heat capacity C {\displaystyle C} comes the relation C = d Q / d T {\displaystyle C=dQ/dT} . Differentiating this equation with regard to time gives the identity (valid so long as temperatures in the object are uniform at any given time): d Q / d t = C ( d T / d t ) {\displaystyle dQ/dt=C(dT/dt)} . This expression may be used to replace d Q / d t {\displaystyle dQ/dt} in the first equation which begins this section, above. Then, if T ( t ) {\displaystyle T(t)} is the temperature of such a body at time t {\displaystyle t} , and T env {\displaystyle T_{\text{env}}} is the temperature of the environment around the body: d T ( t ) d t = − r ( T ( t ) − T env ) = − r Δ T ( t ) {\displaystyle {\frac {dT(t)}{dt}}=-r(T(t)-T_{\text{env}})=-r\Delta T(t)} where r = h A / C {\displaystyle r=hA/C} is a positive constant characteristic of the system, which must be in units of s − 1 {\displaystyle s^{-1}} , and is therefore sometimes expressed in terms of a characteristic time constant t 0 {\displaystyle t_{0}} given by: t 0 = 1 / r = − Δ T ( t ) / ( d T ( t ) / d t ) {\displaystyle t_{0}=1/r=-\Delta T(t)/(dT(t)/dt)} . Thus, in thermal systems, t 0 = C / h A {\displaystyle t_{0}=C/hA} . (The total heat capacity C {\displaystyle C} of a system may be further represented by its mass- specific heat capacity c p {\displaystyle c_{p}} multiplied by its mass m {\displaystyle m} , so that the time constant t 0 {\displaystyle t_{0}} is also given by m c p / h A {\displaystyle mc_{p}/hA} ).
The solution of this differential equation, by standard methods of integration and substitution of boundary conditions, gives: T ( t ) = T e n v + ( T ( 0 ) − T e n v ) e − r t . {\displaystyle T(t)=T_{\mathrm {env} }+(T(0)-T_{\mathrm {env} })\ e^{-rt}.}
If:
then the Newtonian solution is written as: Δ T ( t ) = Δ T ( 0 ) e − r t = Δ T ( 0 ) e − t / t 0 . {\displaystyle \Delta T(t)=\Delta T(0)\ e^{-rt}=\Delta T(0)\ e^{-t/t_{0}}.}
This same solution is almost immediately apparent if the initial differential equation is written in terms of Δ T ( t ) {\displaystyle \Delta T(t)} , as the single function to be solved for. d T ( t ) d t = d Δ T ( t ) d t = − 1 t 0 Δ T ( t ) {\displaystyle {\frac {dT(t)}{dt}}={\frac {d\Delta T(t)}{dt}}=-{\frac {1}{t_{0}}}\Delta T(t)}
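A minimal Python sketch of this exponential solution, with assumed, illustrative parameters for the body and its environment:

```python
# Minimal sketch (illustrative parameters): the exponential solution of
# Newton's law of cooling, with the time constant t0 = m*c_p / (h*A).
import math

def temperature(t_s: float, t0_s: float, t_init: float, t_env: float) -> float:
    """T(t) = T_env + (T(0) - T_env) * exp(-t / t0)."""
    return t_env + (t_init - t_env) * math.exp(-t_s / t0_s)

# A 0.2 kg aluminum block (c_p ~ 900 J/kg-K, A ~ 0.02 m^2, h ~ 15 W/m^2-K):
t0 = 0.2 * 900.0 / (15.0 * 0.02)   # 600 s time constant
for t in (0, 600, 1800):
    print(t, round(temperature(t, t0, 90.0, 20.0), 1))
# 0 s: 90.0 C; one time constant (600 s): ~45.8 C; 1800 s: ~23.5 C
```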
This mode of analysis has been applied to forensic sciences to analyze the time of death of humans. Also, it can be applied to HVAC (heating, ventilating and air-conditioning, which can be referred to as "building climate control"), to ensure more nearly instantaneous effects of a change in comfort level setting. [ 3 ]
The simplifying assumptions in this domain are:
In this context, the lumped-component model extends the distributed concepts of acoustic theory subject to approximation. In the acoustical lumped-component model, certain physical components with acoustical properties may be approximated as behaving similarly to standard electronic components or simple combinations of components.
A simplifying assumption in this domain is that all heat transfer mechanisms are linear, implying that radiation and convection are linearised for each problem.
Several publications can be found that describe how to generate lumped-element models of buildings. In most cases, the building is considered a single thermal zone and in this case, turning multi-layered walls into lumped elements can be one of the most complicated tasks in the creation of the model. The dominant-layer method is one simple and reasonably accurate method. [ 4 ] In this method, one of the layers is selected as the dominant layer in the whole construction, this layer is chosen considering the most relevant frequencies of the problem. [ 5 ]
Lumped-element models of buildings have also been used to evaluate the efficiency of domestic energy systems, by running many simulations under different future weather scenarios. [ 6 ]
Fluid systems can be described by means of lumped-element cardiovascular models by using voltage to represent pressure and current to represent flow; identical equations from the electrical circuit representation are valid after substituting these two variables. Such applications can, for example, study the response of the human cardiovascular system to ventricular assist device implantation. [ 7 ] | https://en.wikipedia.org/wiki/Lumped-element_model |
Lumped damage mechanics or LDM is a branch of structural mechanics that is concerned with the analysis of frame structures. It is based on continuum damage mechanics and fracture mechanics , and combines the ideas of these theories with the concept of the plastic hinge . [ 1 ] LDM can be defined as the fracture mechanics of complex structural systems . In the models of LDM, cracking or local buckling as well as plasticity are lumped at the inelastic hinges. As in continuum damage mechanics, LDM uses state variables to represent the effects of damage on the remaining stiffness and strength of the frame structure. In reinforced concrete structures, the damage state variable quantifies the crack density in the plastic hinge zone; [ 1 ] in unreinforced concrete components and steel beams, it is a dimensionless measure of the crack surface; [ 2 ] in tubular steel elements, the damage variable measures the degree of local buckling. [ 3 ] The LDM evolution laws can be derived from continuum damage mechanics [ 3 ] [ 4 ] or fracture mechanics. [ 1 ] [ 2 ] In the latter case, concepts such as the energy release rate or the stress intensity factor of a plastic hinge are introduced. LDM allows for the numerical simulation of the collapse of complex structures with a fraction of the computational cost and human effort of its continuum mechanics counterparts. LDM is also a regularization procedure that eliminates the mesh-dependence phenomenon that is observed in structural analysis with local damage models. [ 5 ] In addition, the LDM method has been implemented in the finite element analysis of crack propagation of steel beam-to-column connections subjected to ultra-low cycle fatigue. [ 6 ] [ 7 ] | https://en.wikipedia.org/wiki/Lumped_damage_mechanics
The Lunar Crater Radio Telescope (LCRT) is a proposal by the NASA Institute for Advanced Concepts (NIAC) to create an ultra-long-wavelength (that is, wavelengths greater than 10 m, corresponding to frequencies below 30 MHz ) radio telescope inside a lunar crater on the far side of the Moon . [ 1 ] [ a ]
The reason for building the LCRT on the far side of the Moon would be to avoid interference faced by radio telescopes on the Earth's surface. [ 2 ] The Moon would block many sources of radio interference originating on Earth, and would avoid the problems that come from Earth's ionosphere at long radio wavelengths. [ 3 ]
If completed, the telescope would have a structural diameter of 1.3 km, and the reflector would be 350 m in diameter. [ 4 ] [ 5 ] [ 6 ] Robotic lift wires and an anchoring system would enable origami deployment of the parabolic reflector. [ 7 ]
A previous proposal put the reflector size at 1 km diameter. [ 8 ] In 2021, the LCRT project went into phase II of development in the NIAC program and was awarded $500,000 to continue work. As of 2023, work on the lunar crater radio telescope is ongoing at Caltech / NASA Jet Propulsion Laboratory . [ 2 ]
To be sensitive to long radio wavelengths, the LCRT would need to be huge. The idea is to create an antenna over half-a-mile (1 kilometer) wide in a crater over 3 kilometers (2 miles) wide. The biggest single-dish radio telescopes on Earth – like the Five-hundred-meter Aperture Spherical Telescope (FAST) in China and the now-inoperative 305-meter-wide Arecibo Observatory in Puerto Rico – were built inside natural bowl-like depressions in the landscape to provide a support structure. [ 2 ]
This class of radio telescope uses thousands of reflecting panels suspended inside the depression to make the entire dish’s surface reflective to radio waves. The receiver then hangs via a system of cables at a focal point over the dish, anchored by towers at the dish’s perimeter, to measure the radio waves bouncing off the curved surface below. But despite its size and complexity, even FAST is not sensitive to radio wavelengths longer than about 14 feet (4.3 meters). [ 2 ]
The LCRT concept eliminates the need to transport prohibitively heavy material to the Moon and utilizes robots to automate the construction process. Instead of using thousands of reflective panels to focus incoming radio waves, the LCRT would be made of thin wire mesh in the center of the crater. One spacecraft would deliver the mesh, and a separate lander would deposit DuAxel rovers to build the dish over several days or weeks. [ 2 ]
DuAxel, a robotic concept being developed at JPL, is composed of two single-axle rovers (called Axel) that can undock from each other but stay connected via a tether. One half would act as an anchor at the rim of the crater as the other rappels down to do the building. [ 2 ] [ 11 ]
Another concept, which reduces both cost and complexity by almost half, uses a lift-wire deployment and anchoring system for the LCRT. | https://en.wikipedia.org/wiki/Lunar_Crater_Radio_Telescope
The Lunar Infrastructure for Exploration (LIFE) was a proposed project to build a space telescope on the far side of the Moon , actively promoted by EADS Astrium Space Transportation of Germany and the Netherlands Foundation for Research in Astronomy ASTRON / LOFAR . The project was presented for the first time publicly at the 2005 IAF Congress in Fukuoka . [ citation needed ]
The 1.3 billion euro project would have involved a radio telescope to be located on the polar region of the far side of the Moon.
The radio telescope was intended to look for exoplanets and detect signals in the 1–10 MHz range. Such signals cannot be detected on Earth because of ionospheric interference.
The proposed telescope would have been constructed by a lander vehicle to deploy dipoles across a 300-400 m area. The dipoles, which receive the cosmic radio signals, would be deployed either by a dispenser or by a team of small mobile robots . The telescope would have been located near the South Pole to ensure permanent sunlight and direct communication with Earth. The proposed lander would also have had geophones , which could listen to meteorite impacts on the Moon's surface.
Another German aerospace consortium, OHB-System , also promoted a lunar lander concept called Mona Lisa. [ 1 ] Models of both concepts were displayed at ILA in 2006.
| https://en.wikipedia.org/wiki/Lunar_Infrastructure_for_Exploration
The Lunar Receiving Laboratory ( LRL ) was a facility at NASA 's Lyndon B. Johnson Space Center (Building 37) that was constructed to quarantine astronauts and material brought back from the Moon during the Apollo program to reduce the risk of back-contamination . After recovery at sea, crews from Apollo 11 , Apollo 12 , and Apollo 14 walked from their helicopter to the Mobile Quarantine Facility on the deck of an aircraft carrier and were brought to the LRL for quarantine. Samples of rock and regolith that the astronauts collected and brought back were flown directly to the LRL and initially analyzed in glovebox vacuum chambers .
The quarantine requirement was dropped for Apollo 15 and later missions. [ 1 ] The LRL was used for study, distribution, and safe storage of the lunar samples. Between 1969 and 1972, six Apollo space flight missions brought back 382 kilograms (842 pounds) of lunar rocks, core samples, pebbles, sand, and dust from the lunar surface—in all, 2,200 samples from six exploration sites. [ 2 ] Other lunar samples were returned to Earth by three automated Soviet spacecraft, Luna 16 in 1970, Luna 20 in 1972, and Luna 24 in 1976, which returned samples totaling 300 grams (about 3/4 pound). [ clarification needed ]
In 1976, some of the samples were moved to Brooks Air Force Base in San Antonio, Texas , for second-site storage. In 1979, a Lunar Sample Laboratory Facility was built to serve as the chief repository for the Apollo samples: permanent storage in a physically secure and non-contaminating environment. The facility includes vaults for the samples and records, and laboratories for sample preparation and study. [ 3 ] The Lunar Receiving Laboratory building was later occupied by NASA's Life Sciences division, contained biomedical and environment labs, and was used for experiments involving human adaptation to microgravity . [ 4 ]
In September 2019, NASA announced that the Lunar Receiving Laboratory had not been used for two years and would be demolished. [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Lunar_Receiving_Laboratory |
The Lunar Surface Electromagnetics Experiment (LuSEE-Night) is a planned robotic radio telescope observatory designed to land and function on the far side of Earth's Moon . [ 1 ] [ 2 ] The project is under development by the U.S. Department of Energy and the National Aeronautics and Space Administration . [ 3 ] If successfully deployed and activated, LuSEE-Night will attempt measurements of an early period of the history of the Universe that occurred relatively soon after the Big Bang , referred to as the Dark Ages of the Universe, which predates the formation of luminous stars and galaxies. [ 4 ] The instrument is planned to be landed on the lunar far side as soon as 2026 aboard the Blue Ghost lunar lander. [ 5 ] [ 6 ] LuSEE-Night (not to be confused with LuSEE-Lite, a companion lander planned for a lunar landing in 2024) is to be delivered to the lunar far side through Commercial Lunar Payload Services (CLPS) . [ 7 ]
| https://en.wikipedia.org/wiki/Lunar_Surface_Electromagnetics_Experiment |
The Lunar Surface Magnetometer ( LSM ) was a lunar science experiment with the aim of providing insights into the interior of the Moon and how its latent magnetic field interacts with the solar wind . It was deployed on the Moon as part of the Apollo 12 , Apollo 14 , and Apollo 16 missions.
Two lunar orbital satellites, Luna 10 and Explorer 35 , laid much of the groundwork for a working understanding of the Moon's magnetic field . They established that the Moon has at best little more than a remnant field, and at worst no intrinsic magnetic field at all. Those spacecraft's instruments were not sensitive enough, and were too far from the Moon's surface, to distinguish between these two possibilities. [ 1 ] [ 2 ] In addition to these two missions, analysis had been conducted on regolith samples brought back from the Moon by Apollo 11 . [ 1 ] This analysis established some of the surface materials' magnetic properties. [ 1 ]
A lunar magnetometer experiment had a number of requirements that shaped its capabilities. The instrument needed to be able to operate during the lunar night since it was believed the collection of data from a full lunar rotation would be required. The instrument also needed to be able to perform its own self-calibrations on a regular cadence to account for wide temperature ranges experienced over a long period of time. [ 1 ]
The instrument's main magnetic measurements were calculated from three Ames fluxgate magnetometer sensor heads, located at the end of three 100 cm (39 in) booms, positioned perpendicular to each other. [ 1 ] Each sensor consists of a flattened toroidal core made of permalloy tape placed inside a wound sensing element. [ 3 ] [ 4 ] The sensors could be gimballed by motors, controlled either from commands sent from Earth or from commands generated by the Apollo Lunar Surface Experiments Package (ALSEP). [ 1 ] These sensors were thermally controlled by thermistors driving the operation of resistance heaters . Alignment and positional measurement of the instrument were provided by two features: the onboard gravimeters that measured the tilt-angle of the magnetometer, and a sundial or " shadowgraph " that enabled astronauts to take an azimuthal reading. [ 1 ] The sensor booms were folded to facilitate easier stowage and reduced strain during transportation during flight. When deployed, the three sensors were situated 75 cm (30 in) above the ground at a 35-degree angle from the surface. [ 1 ] [ 3 ]
To be able to measure the magnetic properties of the Moon as a whole, instrument placement would need to avoid any localised nickel-iron or stony-iron material. If this material were magnetized or capable of producing localised induction fields, it would result in incorrect readings of the Moon-wide magnetic field. [ 1 ]
The magnetometer received its power from a 70-watt radioisotope thermoelectric generator that provided power to a number of ALSEP instruments, which enabled the LSM to operate both day and night. The instrument used an average of 3 watts of power. Power and a connection to the ALSEP radio transmitter were made available via a 15-meter (49 ft) ribbon cable. [ 4 ]
The first LSM was fully deployed and activated on the Moon at 14:40 UTC on November 19, 1969, by astronauts Charles Conrad and Alan Bean , within the Oceanus Procellarum . [ 1 ] [ 2 ] In the selenographic coordinate system , the instrument was located at 23.35° W and 2.97° S. The instrument returned the first measurements of a magnetic field intrinsic to the Moon, rather than induced by the solar wind. The instrument detected a field strength of 32–36 nanoteslas that was likely produced mainly by a nearby localised magnetised body, between 200 m (660 ft) and 200 km (120 mi) from the magnetometer. [ 2 ] [ 3 ] This conclusion rested on constraints on the lunar magnetic dipole strength imposed by simultaneous measurements from Explorer 35, [ 3 ] and on the ruling out of artificial sources because of their size. [ 1 ] The instrument likely detected a field effect caused by the hydromagnetic flow of the solar wind as it passed the Moon. [ 3 ] [ 2 ]
While the instrument carried on Apollo 16 was similar to that on Apollo 12, its sensors were upgraded with high stability cores developed by the Naval Ordnance Laboratory . [ 4 ] | https://en.wikipedia.org/wiki/Lunar_Surface_Magnetometer |
Lunar Ultraviolet Cosmic Imager ( LUCI ) is a small planned telescope that will be landed on the Moon to scan the sky in near UV wavelengths. It is a technology demonstrator developed by the Indian Institute of Astrophysics , [ 4 ] [ 1 ] [ 5 ] [ 6 ] and it was planned to be one of several small payloads to be deployed by the commercial Z-01 lander developed by TeamIndus in partnership with OrbitBeyond . The mission was planned to be launched in 2020 as part of NASA's Commercial Lunar Payload Services (CLPS). [ 7 ] On 29 July 2019 OrbitBeyond announced that it would drop out of the CLPS contract with NASA, meaning that the 2020 launch was canceled and it is unknown whether the mission will ever take place.
The science objectives of the LUCI telescope are primarily to search for transient astronomical events such as supernovae , novae , tidal disruption events by massive black holes , and more exotic energetic sources such as superluminous supernovae and flashes from cosmic collisions, which can be very energetic on all scales. [ 4 ] [ 8 ]
LUCI will also look for faint asteroids and comets in the Solar System , especially for near-Earth objects (NEO) and potentially hazardous objects . [ 4 ] The aims are focused on UV sources not accessible by the more sensitive large space missions. [ 4 ]
The Earth's atmosphere absorbs and scatters UV photons, preventing observations of the active Universe. Placing a telescope on the surface of the Moon is advantageous because the absence of an atmosphere and ionosphere offers an unobstructed view of space at all wavelengths. The Moon's surface provides not just a stable platform, but inexpensive and long-term access to observations in wavelengths not normally used by large orbital telescope missions. The only UV astronomical observations from the Moon to date were made by the Apollo 16 crew in 1972 [ 8 ] and by the Lunar-based Ultraviolet Telescope aboard the Chang'e 3 lunar lander in 2018. The LUCI project started in 2013 and is funded by India's Department of Science and Technology . [ 6 ] The telescope team is headed by Jayant Murthy. [ 4 ]
The telescope had been completed as of March 2019 and was awaiting integration with the Z-01 lander . [ 3 ] It was planned to be launched in Q3 2020 [ 9 ] on a Falcon 9 rocket [ 10 ] and land at Mare Imbrium (29.52º N 25.68º W). [ 9 ] On 29 July 2019, OrbitBeyond, the builder of the lander, announced that it would withdraw from the launch and the mission, leaving the mission effectively dead. OrbitBeyond and NASA agreed that OrbitBeyond would be released from the NASA CLPS contract in general. However, OrbitBeyond remains eligible to bid for future NASA CLPS contracts.
LUCI is a small technology demonstrator without 3-axis pointing freedom, so it will rely on the motion of the lunar sky. [ 8 ] The optical system is a two-spherical-mirror configuration with a double-pass corrector lens. Its all-spherical 80 mm primary transmits light through the system to a photon-counting charge-coupled device (CCD) detector sensitive to ultraviolet wavelengths. [ 11 ] [ 3 ] The detector is an 8 mm UV-sensitive CCD with a response between 200–900 nm, so the engineers placed a solar blind filter before the CCD to restrict the bandpass to 200–320 nm. [ 3 ]
LUCI is planned to be mostly contained within the lander, and it will be lowered back into its storage bay during the cold lunar nights. [ 8 ] The baseline for LUCI's operation is "a few months". [ 8 ] | https://en.wikipedia.org/wiki/Lunar_Ultraviolet_Cosmic_Imager |
The instantaneous Earth–Moon distance , or distance to the Moon , is the distance from the center of Earth to the center of the Moon . In contrast, the Lunar distance ( LD or Δ ⊕ L {\textstyle \Delta _{\oplus L}} ), or Earth–Moon characteristic distance , is a unit of measure in astronomy . More technically, it is the semi-major axis of the geocentric lunar orbit . The average lunar distance is approximately 385,000 km (239,000 mi), or 1.3 light-seconds . [ 1 ] It is roughly 30 times Earth's diameter [ 2 ] and a non-stop plane flight traveling that distance would take more than two weeks. [ 3 ] Around 389 lunar distances make up an astronomical unit (roughly the distance from Earth to the Sun).
Lunar distance is commonly used to express the distance to near-Earth object encounters. [ 4 ] Lunar semi-major axis is an important astronomical datum. It has implications for testing gravitational theories such as general relativity [ 5 ] and for refining other astronomical values, such as the mass , [ 6 ] radius , [ 7 ] and rotation of Earth. [ 8 ] The measurement is also useful in measuring the lunar radius , as well as the distance to the Sun.
Millimeter-precision measurements of the lunar distance are made by measuring the time taken for laser light to travel between stations on Earth and retroreflectors placed on the Moon . The precision of the range measurements determines the semi-major axis to a few decimeters. The Moon is spiraling away from Earth at an average rate of 3.8 cm (1.5 in) per year, as detected by the Lunar Laser Ranging experiment . [ 9 ] [ 10 ] [ 11 ]
Because of the influence of the Sun and other perturbations, the Moon's orbit around the Earth is not a precise ellipse. Nevertheless, different methods have been used to define a semi-major axis . Ernest William Brown provided a formula for the parallax of the Moon as viewed from opposite sides of the Earth, involving trigonometric terms. This is equivalent to a formula for the inverse of the distance, and the average value of this is the inverse of 384,399 km (238,854 mi). [ 12 ] [ 13 ] On the other hand, the time-averaged distance (rather than the inverse of the average inverse distance) between the centers of Earth and the Moon is 385,000.6 km (239,228.3 mi). One can also model the orbit as an ellipse that is constantly changing, and in this case one can find a formula for the semi-major axis, again involving trigonometric terms. The average value by this method is 383,397 km. [ 14 ]
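The distinction between these averaging conventions can be checked numerically for an idealized two-body ellipse. The sketch below assumes approximate lunar values for the semi-major axis and eccentricity; the real, perturbed orbit gives the slightly different figures quoted above. For a Kepler ellipse, the inverse of the time-averaged inverse distance recovers the semi-major axis exactly, while the time-averaged distance is larger by a factor of about 1 + e²/2.

```python
import math

# Numerical check of two "mean distance" conventions for an idealized
# two-body ellipse. a and e are approximate lunar values (an assumption
# of this sketch, not the fitted values from lunar theory).
a_km, e = 384_399.0, 0.0549

def radius_at_mean_anomaly(m: float) -> float:
    """Distance r at mean anomaly m, via Kepler's equation E - e*sin(E) = m."""
    E = m
    for _ in range(50):            # fixed-point iteration converges for e < 1
        E = m + e * math.sin(E)
    return a_km * (1.0 - e * math.cos(E))

n = 100_000                        # samples uniform in time (mean anomaly)
radii = [radius_at_mean_anomaly(2.0 * math.pi * i / n) for i in range(n)]

time_mean_r = sum(radii) / n                              # ~ a * (1 + e**2/2)
inverse_mean_inverse_r = n / sum(1.0 / r for r in radii)  # ~ a exactly

print(f"<r> over time = {time_mean_r:,.1f} km")             # ~384,978 km
print(f"1 / <1/r>     = {inverse_mean_inverse_r:,.1f} km")  # ~384,399 km
```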
The actual distance varies over the course of the orbit of the Moon . Values at closest approach ( perigee ) or at farthest ( apogee ) are rarer the more extreme they are; the distribution of perigee and apogee distances over a six-thousand-year span bears this out.
Jean Meeus gives extreme values of the lunar distance for the period 1500 BC to AD 8000. [ 15 ] [ 18 ] [ 19 ]
The instantaneous lunar distance is constantly changing. The actual distance between the Moon and Earth can change as quickly as 75 meters per second , [ 23 ] or more than 1,000 km (620 mi) in just 6 hours, due to its non-circular orbit. [ 24 ] There are other effects that also influence the lunar distance. Some factors are listed in the sections below.
The distance to the Moon can be measured to an accuracy of 2 mm over a 1-hour sampling period, [ 25 ] which results in an overall uncertainty of a decimeter for the semi-major axis. However, due to its elliptical orbit with varying eccentricity, the instantaneous distance varies with monthly periodicity. Furthermore, the distance is perturbed by the gravitational effects of various astronomical bodies – most significantly the Sun and less so Venus and Jupiter. Other forces responsible for minute perturbations are: gravitational attraction to other planets in the Solar System and to asteroids; tidal forces; and relativistic effects. [ 26 ] [ 27 ] The effect of radiation pressure from the Sun contributes an amount of ± 3.6 mm to the lunar distance. [ 25 ]
Although the instantaneous uncertainty is a few millimeters, the measured lunar distance can change by more than 30,000 km (19,000 mi) from the mean value throughout a typical month. These perturbations are well understood [ 28 ] and the lunar distance can be accurately modeled over thousands of years. [ 26 ]
Through the action of tidal forces , the angular momentum of Earth's rotation is slowly being transferred to the Moon's orbit. [ 29 ] The result is that Earth's rate of spin is gradually decreasing (at a rate of 2.4 milliseconds/century ), [ 30 ] [ 31 ] [ 32 ] [ 33 ] and the lunar orbit is gradually expanding. The rate of recession is 3.830 ± 0.008 cm per year . [ 28 ] [ 31 ] However, it is believed that this rate has recently increased, as a rate of 3.8 cm/year would imply that the Moon is only 1.5 billion years old, whereas scientific consensus supports an age of about 4.5 billion years. [ 34 ] It is also believed that this anomalously high rate of recession may continue to accelerate. [ 35 ]
Theoretically, the lunar distance will continue to increase until the Earth and Moon become tidally locked , as are Pluto and Charon . This would occur when the duration of the lunar orbital period equals the rotational period of Earth, which is estimated to be 47 Earth days. The two bodies would then be at equilibrium, and no further rotational energy would be exchanged. However, models predict that 50 billion years would be required to achieve this configuration, [ 36 ] which is significantly longer than the expected lifetime of the Solar System .
Laser measurements show that the average lunar distance is increasing, which implies that the Moon was closer in the past, and that Earth's days were shorter. Fossil studies of mollusk shells from the Campanian era (80 million years ago) show that there were 372 days (of 23 h 33 min) per year during that time, which implies that the lunar distance was about 60.05 R 🜨 (383,000 km or 238,000 mi). [ 29 ] There is geological evidence that the average lunar distance was about 52 R 🜨 (332,000 km or 205,000 mi) during the Precambrian Era ; 2500 million years BP . [ 34 ]
The widely accepted giant impact hypothesis states that the Moon was created as a result of a catastrophic impact between Earth and another planet, resulting in a re-accumulation of fragments at an initial distance of 3.8 R 🜨 (24,000 km or 15,000 mi). [ 37 ] This theory assumes the initial impact to have occurred 4.5 billion years ago. [ 38 ]
Until the late 1950s most measurements of lunar distance were based on optical angular measurements : the earliest accurate measurement was by Aristarchus of Samos , and later Hipparchus in the 2nd century BC. The space age marked a turning point when the precision of this value was much improved. During the 1950s and 1960s, there were experiments using radar, lasers, and spacecraft, conducted with the benefit of computer processing and modeling. [ 39 ]
Some historically significant or otherwise interesting methods of determining the lunar distance:
The earliest recorded attempt to measure the lunar distance using an eclipse was by the Greek astronomer and mathematician Aristarchus in 270 BC. [ 40 ] He exploited observations of a lunar eclipse combined with knowledge of Earth's radius and an understanding that the Sun is much further away than the Moon. By observing the duration of an eclipse, which is about 4 hours, and comparing that to the orbital period of the Moon (about 28 days), the circumference of the Moon's orbit could be determined. [ 41 ]
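A back-of-the-envelope version of this argument can be checked numerically. The sketch below uses the simplified assumption that Earth's shadow at the Moon's distance is about one Earth diameter wide; this is a deliberate idealization of the historical method, not Aristarchus's own figures.

```python
import math

# Back-of-the-envelope reconstruction of the eclipse-timing argument.
# Simplifying assumption: Earth's shadow at the Moon's distance is about
# one Earth diameter wide (an idealization of the historical method).
earth_diameter_km = 12_742
eclipse_duration_h = 4.0            # time for the Moon to cross the shadow
orbital_period_h = 28.0 * 24.0      # ~28-day orbit, as quoted above

# The shadow width is to the orbital circumference as the eclipse duration
# is to the orbital period.
circumference_km = earth_diameter_km * orbital_period_h / eclipse_duration_h
distance_km = circumference_km / (2.0 * math.pi)

print(f"{distance_km:,.0f} km")     # ~341,000 km vs. the modern ~385,000 km
```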
Later, in 129 BC, Hipparchus performed a calculation based on observing a solar eclipse from two separate locations. In one location, the eclipse was complete, but in another, the sun was partially visible. Using trigonometry , his calculations produced a result of 62–73 R 🜨 . [ 42 ] This method later found its way into the work of Ptolemy , [ 43 ] who produced a result of 64 + 1 ⁄ 6 R 🜨 ( 409 000 km or 253 000 mi ) at its farthest point. [ 44 ]
Early methods involved measuring the angle between the Moon and a chosen reference point from multiple locations, simultaneously. The synchronization can be coordinated by making measurements at a pre-determined time, or during an event which is observable to all parties. Before accurate mechanical chronometers, the synchronization event was typically a lunar eclipse, occultation, or the moment when the Moon crossed the meridian (if the observers shared the same longitude). This measurement technique is known as lunar parallax .
For increased accuracy, the measured angle can be adjusted to account for refraction and distortion of light passing through the atmosphere.
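A hedged numerical sketch of the parallax geometry follows; the baseline and angular shift below are invented but realistic values, not measurements from any particular campaign. For two observers whose separation b is perpendicular to the direction of the Moon, a measured angular shift θ against the background stars gives d ≈ b/θ for small angles.

```python
import math

# Illustrative lunar-parallax estimate. The baseline and angular shift are
# made-up but realistic values, not data from a specific observation.
baseline_km = 2_500.0       # station separation, perpendicular to the Moon
shift_deg = 0.372           # measured shift of the Moon against the stars

# Small-angle approximation: distance = baseline / parallax angle (radians).
distance_km = baseline_km / math.radians(shift_deg)
print(f"{distance_km:,.0f} km")    # ~385,000 km
```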
An expedition by British astronomer A.C.D. Crommelin observed lunar meridian transits on the same night from two different locations. Careful observations from 1905 to 1910 measured the angle of elevation at the moment when a specific lunar crater ( Mösting A ) crossed the local meridian, from stations at Greenwich and at the Cape of Good Hope . [ 45 ] A distance was calculated with an uncertainty of 30 km , and this remained the definitive lunar distance value for the next half century.
By recording the instant when the Moon occults a background star (or, similarly, measuring the angle between the Moon and a background star at a predetermined moment), the lunar distance can be determined, as long as the measurements are taken from multiple locations of known separation.
Astronomers O'Keefe and Anderson calculated the lunar distance by observing four occultations from nine locations in 1952. [ 46 ] They calculated a semi-major axis of 384 407 .6 ± 4.7 km (238,859.8 ± 2.9 mi). This value was refined in 1962 by Irene Fischer , who incorporated updated geodetic data to produce a value of 384 403 .7 ± 2 km (238,857.4 ± 1 mi). [ 7 ]
The distance to the Moon was first measured directly by means of radar in 1946 as part of Project Diana . [ 48 ]
Later, an experiment was conducted in 1957 at the U.S. Naval Research Laboratory that used the echo from radar signals to determine the Earth-Moon distance. Radar pulses lasting 2 μs were broadcast from a 50-foot (15 m) diameter radio dish. After the radio waves echoed off the surface of the Moon, the return signal was detected and the delay time measured. From that measurement, the distance could be calculated. In practice, however, the signal-to-noise ratio was so low that an accurate measurement could not be reliably produced. [ 49 ]
The experiment was repeated in 1958 at the Royal Radar Establishment , in England. Radar pulses lasting 5 μs were transmitted with a peak power of 2 megawatts, at a repetition rate of 260 pulses per second. After the radio waves echoed off the surface of the Moon, the return signal was detected and the delay time measured. Multiple signals were added together to obtain a reliable signal by superimposing oscilloscope traces onto photographic film. From the measurements, the distance was calculated with an uncertainty of 1.25 km (0.777 mi). [ 50 ]
These initial experiments were intended to be proof-of-concept experiments and only lasted one day. Follow-on experiments lasting one month produced a semi-major axis of 384 402 ± 1.2 km (238,856 ± 0.75 mi), [ 51 ] which was the most precise measurement of the lunar distance at the time.
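The underlying conversion is the same for all of these time-of-flight techniques: the distance follows from the echo delay as d = c·t/2. A minimal sketch, with an illustrative delay value rather than a recorded measurement:

```python
# Time-of-flight ranging, as in the radar experiments described above.
# The echo delay here is an illustrative value, not a recorded measurement.
C_KM_PER_S = 299_792.458            # speed of light

echo_delay_s = 2.5646               # round-trip time of a radar pulse
distance_km = C_KM_PER_S * echo_delay_s / 2.0
print(f"{distance_km:,.0f} km")     # ~384,400 km

# A range uncertainty of 1.25 km corresponds to timing the echo to within
# about 8.3 microseconds:
print(f"{2.0 * 1.25 / C_KM_PER_S * 1e6:.1f} us")
```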
An experiment which measured the round-trip time of flight of laser pulses reflected directly off the surface of the Moon was performed in 1962 by a team from the Massachusetts Institute of Technology and a Soviet team at the Crimean Astrophysical Observatory . [ 52 ]
During the Apollo missions in 1969, astronauts placed retroreflectors on the surface of the Moon for the purpose of refining the accuracy and precision of this technique. The measurements are ongoing and involve multiple laser facilities. The instantaneous precision of the Lunar Laser Ranging experiments can achieve millimeter-level resolution, and this is the most reliable method of determining the lunar distance. The semi-major axis is determined to be 384,399.0 km. [ 13 ]
Due to the modern accessibility of accurate timing devices, high resolution digital cameras, GPS receivers, powerful computers and near-instantaneous communication, it has become possible for amateur astronomers to make high accuracy measurements of the lunar distance.
On May 23, 2007, digital photographs of the Moon during a near-occultation of Regulus were taken from two locations, in Greece and England. By measuring the parallax between the Moon and the chosen background star, the lunar distance was calculated. [ 53 ]
A more ambitious project called the "Aristarchus Campaign" was conducted during the lunar eclipse of 15 April 2014. [ 24 ] During this event, participants were invited to record a series of five digital photographs from moonrise until culmination (the point of greatest altitude).
The method took advantage of the fact that the Moon is actually closest to an observer when it is at its highest point in the sky, compared to when it is on the horizon. Although it appears that the Moon is biggest when it is near the horizon, the opposite is true. This phenomenon is known as the Moon illusion . The reason for the difference in distance is that the distance from the center of the Moon to the center of the Earth is nearly constant throughout the night, but an observer on the surface of Earth is actually 1 Earth radius from the center of Earth. This offset brings them closest to the Moon when it is overhead.
Modern cameras have achieved a resolution capable of capturing the Moon with enough precision to detect and measure this tiny variation in apparent size. The results of this experiment were calculated as LD = 60.51 (+3.91/−4.19) R 🜨 . The accepted value for that night was 60.61 R 🜨 , which implied a 3% accuracy. The benefit of this method is that the only measuring equipment needed is a modern digital camera (equipped with an accurate clock and a GPS receiver).
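The size of the variation being measured follows from the geometry just described: an observer with the Moon at the zenith is about one Earth radius closer to it than when it is on the horizon, so its apparent diameter grows by roughly R⊕/d, a bit under two percent. A minimal sketch with round approximate values:

```python
# Expected variation in the Moon's apparent size between horizon and zenith,
# caused by the observer's one-Earth-radius offset (round approximate values).
EARTH_RADIUS_KM = 6_371.0
MEAN_DISTANCE_KM = 385_000.0

d_horizon = MEAN_DISTANCE_KM                     # Moon on the horizon
d_zenith = MEAN_DISTANCE_KM - EARTH_RADIUS_KM    # observer lifted toward Moon

growth = d_horizon / d_zenith - 1.0
print(f"apparent diameter grows by {growth * 100:.2f}% at the zenith")  # ~1.68%
```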
A number of other experimental methods of measuring the lunar distance can also be performed by amateur astronomers.
The collection of tables that describe the Moon's position is called a lunar ephemeris . Modern methods compute the ephemeris using equations which account for the known perturbation effects. These include the gravitational forces of the Earth, Sun, and other planets, and also minor variations due to tidal forces, relativistic effects, and changes within the Solar System. [ 54 ]
The formula for the ephemeris ELP2000 , by Chapront-Touzé and Chapront, for the distance in kilometres begins with the terms: [ 12 ]

distance = 385 000.56 − 20 905.36 cos( G M ) − 3699.11 cos( 2 D − G M ) − 2955.97 cos( 2 D ) − 569.93 cos( 2 G M ) + ⋯
where G M {\displaystyle G_{M}} is the mean anomaly (roughly, how far the Moon has moved from perigee) and D {\displaystyle D} is the mean elongation (roughly, how far it has moved from conjunction with the Sun at new moon). They can be calculated from
G M = 134.963 411 38° + 13.064 992 953 630°/d · t
D = 297.850 204 20° + 12.190 749 117 502°/d · t
where t is the time (in days) since January 1, 2000 (see Epoch (astronomy) ).
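These leading terms can be evaluated directly, as in the sketch below; treat it as an approximation, since the full ELP2000 series contains many more terms. Setting the angles to their extremes reproduces the perigee and apogee figures quoted in the following paragraph.

```python
import math

# Evaluate the leading terms of the truncated ELP2000 distance series above.
# This handful of coefficients reproduces the round figures quoted below;
# the full series contains many more terms, so results are approximate.
def lunar_distance_km(t_days: float) -> float:
    gm = math.radians(134.96341138 + 13.064992953630 * t_days)  # mean anomaly
    d = math.radians(297.85020420 + 12.190749117502 * t_days)   # mean elongation
    return (385_000.56
            - 20_905.36 * math.cos(gm)
            - 3_699.11 * math.cos(2.0 * d - gm)
            - 2_955.97 * math.cos(2.0 * d)
            - 569.93 * math.cos(2.0 * gm))

# Extremes, read off directly from the angles:
# perigee (G_M = 0) at new/full moon (D = 0 or 180 deg):
#   385000.56 - 20905.36 - 3699.11 - 2955.97 - 569.93 ~= 356,870 km
# apogee (G_M = 180 deg) at new/full moon:
#   385000.56 + 20905.36 + 3699.11 - 2955.97 - 569.93 ~= 406,079 km
print(f"{lunar_distance_km(0.0):,.0f} km")   # distance on 1 January 2000
```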
This shows that the smallest perigee occurs at either new moon or full moon (ca 356 870 km ), as does the greatest apogee (ca 406 079 km ), whereas the greatest perigee will be around half-moon (ca 370 180 km ), as will be the smallest apogee (ca 404 593 km ). The exact values will be slightly different due to other terms. Twice in every full moon cycle of about 411 days there will be a minimal perigee and a maximal apogee, separated by two weeks, and a maximal perigee and a minimal apogee, also separated by two weeks. | https://en.wikipedia.org/wiki/Lunar_distance |
In celestial navigation , lunar distance , also called a lunar , is the angular distance between the Moon and another celestial body . The lunar distances method uses this angle and a nautical almanac to calculate Greenwich time if so desired, or by extension any other time. That calculated time can be used in solving a spherical triangle . The theory was first published by Johannes Werner in 1524, before the necessary almanacs had been published. A fuller method was published in 1763 and used until about 1850 when it was superseded by the marine chronometer . A similar method uses the positions of the Galilean moons of Jupiter .
In celestial navigation , knowledge of the time at Greenwich (or another known place) and the measured positions of one or more celestial objects allows the navigator to calculate longitude . [ 1 ] Reliable marine chronometers were unavailable until the late 18th century and not affordable until the 19th century. [ 2 ] [ 3 ] [ 4 ] After the method was first published in 1763 by British Astronomer Royal Nevil Maskelyne , based on pioneering work by Tobias Mayer , for about a hundred years (until about 1850) [ 5 ] mariners lacking a chronometer used the method of lunar distances to determine Greenwich time as a key step in determining longitude. Conversely, a mariner with a chronometer could check its accuracy using a lunar determination of Greenwich time. [ 2 ] The method saw usage all the way up to the beginning of the 20th century on smaller vessels that could not afford a chronometer or had to rely on this technique for correction of the chronometer. [ 6 ]
The method relies on the relatively quick movement of the moon across the background sky, completing a circuit of 360 degrees in 27.3 days (the sidereal month), or 13.2 degrees per day. In one hour it will move approximately half a degree, [ 1 ] roughly its own angular diameter , with respect to the background stars and the Sun.
Using a sextant , the navigator precisely measures the angle between the moon and another body . [ 1 ] That could be the Sun or one of a selected group of bright stars lying close to the Moon's path, near the ecliptic . At that moment, anyone on the surface of the earth who can see the same two bodies will, after correcting for parallax , observe the same angle. The navigator then consults a prepared table of lunar distances and the times at which they will occur. [ 1 ] [ 7 ] By comparing the corrected lunar distance with the tabulated values, the navigator finds the Greenwich time for that observation.
Knowing Greenwich time and local time, the navigator can work out longitude. [ 1 ]
Local time can be determined from a sextant observation of the altitude of the Sun or a star. [ 8 ] [ 9 ] Then the longitude (relative to Greenwich) is readily calculated from the difference between local time and Greenwich Time, at 15 degrees per hour of difference.
Having measured the lunar distance and the heights of the two bodies, the navigator can find Greenwich time in three steps: correcting the raw sextant readings, clearing the lunar distance of the effects of refraction and parallax, and looking up the Greenwich time corresponding to the cleared distance in the almanac tables.
Having found the (absolute) Greenwich time, the navigator either compares it with the observed local apparent time (a separate observation) to find his longitude, or compares it with the Greenwich time on a chronometer (if available) if one wants to check the chronometer. [ 1 ]
By 1810, the errors in the almanac predictions had been reduced to about one-quarter of a minute of arc. By about 1860 (after lunar distance observations had mostly faded into history), the almanac errors were finally reduced to less than the error margin of a sextant in ideal conditions (one-tenth of a minute of arc).
Later sextants (after c. 1800 ) could indicate angle to 0.1 arc-minutes, after the use of the vernier was popularized by its description in English in the book Navigatio Britannica published in 1750 by John Barrow , the mathematician and historian. In practice at sea, actual errors were somewhat larger.
If the sky is cloudy or the Moon is new (hidden close to the glare of the Sun), lunar distance observations cannot be performed.
A lunar distance changes with time at a rate of roughly half a degree, or 30 arc-minutes, in an hour. [ 1 ] The two sources of error, combined, typically amount to about one-half arc-minute in lunar distance, equivalent to one minute in Greenwich time, which corresponds to an error of as much as one-quarter of a degree of longitude, or about 15 nautical miles (28 km) at the equator.
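This error budget follows from two conversion rates stated above (the Moon's motion of about 30 arc-minutes per hour against the stars, and Earth's rotation of 15 degrees of longitude per hour); a short check:

```python
# Error propagation for the lunar-distance method, using the rates in the
# text: the Moon moves ~30 arc-minutes per hour against the stars, and the
# Earth turns through 15 degrees of longitude per hour.
lunar_rate_arcmin_per_hr = 30.0
observation_error_arcmin = 0.5            # combined observation/almanac error

time_error_min = observation_error_arcmin / lunar_rate_arcmin_per_hr * 60.0
longitude_error_deg = time_error_min / 60.0 * 15.0
error_nmi_at_equator = longitude_error_deg * 60.0  # 1 deg of longitude = 60 nmi

print(time_error_min, longitude_error_deg, error_nmi_at_equator)  # 1.0 0.25 15.0
```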
Captain Joshua Slocum , in making the first solo circumnavigation of the Earth in 1895–1898, somewhat anachronistically used the lunar method along with dead reckoning in his navigation . He comments in Sailing Alone Around the World on a sight taken in the South Pacific . After correcting an error he found in his log tables , the result was surprisingly accurate: [ 17 ]
I found from the result of three observations, after long wrestling with lunar tables, that her longitude agreed within five miles of that by dead-reckoning.
This was wonderful; both, however, might be in error, but somehow I felt confident that both were nearly true, and that in a few hours more I should see land; and so it happened, for then I made out the island of Nukahiva , the southernmost of the Marquesas group, clear-cut and lofty. The verified longitude when abreast was somewhere between the two reckonings; this was extraordinary. All navigators will tell you that from one day to another a ship may lose or gain more than five miles in her sailing-account, and again, in the matter of lunars, even expert lunarians are considered as doing clever work when they average within eight miles of the truth...
The result of these observations naturally tickled my vanity, for I knew it was something to stand on a great ship’s deck and with two assistants take lunar observations approximately near the truth. As one of the poorest of American sailors, I was proud of the little achievement alone on the sloop, even by chance though it may have been...
The work of the lunarian, though seldom practised in these days of chronometers, is beautifully edifying, and there is nothing in the realm of navigation that lifts one’s heart up more in adoration.
In his 1777 book A Voyage Round the World , naturalist Georg Forster described his impressions of navigation with Captain James Cook on board the ship HMS Resolution in the South Pacific. Cook had two of the new chronometers on board, one made by Larcum Kendall and the other by John Arnold , following the lead of the famous John Harrison clocks. On March 12, 1774, approaching Easter Island , Forster found praiseworthy the method of lunar distances as the best and most precise method to determine longitude, as compared to clocks which may fail due to mechanical problems. | https://en.wikipedia.org/wiki/Lunar_distance_(navigation) |
The lunar effect is a purported correlation between specific stages of the roughly 29.5-day lunar cycle and behavior and physiological changes in living beings on Earth, including humans. A considerable number of studies have examined the effect on humans. By the late 1980s, there were at least 40 published studies on the purported lunar-lunacy connection, [ 1 ] and at least 20 published studies on the purported lunar-birthrate connection. [ 2 ] Literature reviews and meta-analyses have found no correlation between the lunar cycle and human biology or behavior. [ 1 ] [ 2 ] [ 3 ] [ 4 ] In cases such as the approximately monthly cycle of menstruation in humans (but not other mammals), the coincidence in timing reflects no known lunar influence. The widespread and persistent beliefs about the influence of the Moon may depend on illusory correlation – the perception of an association that does not in fact exist. [ 5 ]
In a number of marine animals, there is stronger evidence for the effects of lunar cycles. Observed effects relating to reproductive synchrony may depend on external cues relating to the presence or amount of moonlight . Corals contain light-sensitive cryptochromes , proteins that are sensitive to different levels of light. Coral species such as Dipsastraea speciosa tend to synchronize spawning in the evening or night, around the last quarter moon of the lunar cycle. In Dipsastraea speciosa , a period of darkness between sunset and moonrise appears to be a trigger for synchronized spawning. Another marine animal, the bristle worm Platynereis dumerilii , spawns a few days after a full moon. It contains a protein with light-absorbing flavin structures that differentially detect moonlight and sunlight. It is used as a model for studying the biological mechanisms of marine lunar cycles. [ 6 ] [ 7 ] [ 8 ]
Claims of a lunar connection have appeared in the following contexts:
It is widely believed that the Moon has a relationship with fertility due to the corresponding human menstrual cycle , which averages 28 days. [ 9 ] [ 10 ] [ 11 ] However, no connection between lunar rhythms and menstrual onset has been conclusively shown to exist, and the similarity in length between the two cycles is most likely coincidental. [ 12 ] [ 13 ]
Multiple studies have found no connection between birth rate and lunar phases. A 1957 analysis of 9,551 births in Danville, Pennsylvania , found no correlation between birth rate and the phase of the Moon. [ 14 ] Records of 11,961 live births and 8,142 natural births (not induced by drugs or cesarean section) over a 4-year period (1974–1978) at the UCLA hospital did not correlate in any way with the cycle of lunar phases. [ 15 ] Analysis of 3,706 spontaneous births (excluding births resulting from induced labor) in 1994 showed no correlation with lunar phase. [ 16 ] The distribution of 167,956 spontaneous vaginal deliveries, at 37 to 40 weeks gestation, in Phoenix, Arizona , between 1995 and 2000, showed no relationship with lunar phase. [ 17 ] Analysis of 564,039 births (1997 to 2001) in North Carolina showed no predictable influence of the lunar cycle on deliveries or complications. [ 18 ] Analysis of 6,725 deliveries (2000 to 2006) in Hannover revealed no significant correlation of birth rate to lunar phases. [ 19 ] A 2001 analysis of 70,000,000 birth records from the National Center for Health Statistics revealed no correlation between birth rate and lunar phase. [ 20 ] An extensive review of 21 studies from seven different countries showed that the majority of studies reported no relationship to lunar phase, and that the positive studies were inconsistent with each other. [ 2 ] A review of six additional studies from five different countries similarly showed no evidence of relationship between birth rate and lunar phase. [ 21 ] In 2021, an analysis of 38.7 million births in France over 50 years, with a detailed correction for birth variations linked to holidays, and robust statistical methods to avoid false detections linked to multiple tests, found a very small (+0.4%) but statistically significant surplus of births on the full moon day, and to a lesser extent the following day. The probability of this excess being due to chance is very low, of the order of one chance in 100,000 ( p = 1.5 × 10⁻⁵). The belief that there is a large surplus of births on full moon days is incorrect, and it is completely impossible for an observer to detect the small increase of +0.4% in a maternity hospital, even on a long time scale. [ 22 ]
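The scale of that French result can be illustrated with a rough Poisson back-of-envelope. This is only an order-of-magnitude sketch built from the figures quoted above, not the study's actual methodology, which used detailed corrections and multiple-test controls.

```python
import math

# Rough Poisson back-of-envelope for the 2021 French birth study figures:
# why a +0.4% surplus is detectable across 38.7 million births over 50 years
# yet invisible on any single day. Illustration only, not the study's method.
total_births = 38_700_000
days = 50 * 365.25
full_moon_days = days / 29.53                   # ~618 full moons in 50 years

births_per_day = total_births / days            # ~2,100 births/day
expected_full_moon_births = births_per_day * full_moon_days   # ~1.3 million
surplus = 0.004 * expected_full_moon_births     # ~5,200 extra births

sigma = math.sqrt(expected_full_moon_births)    # Poisson noise, ~1,100
print(f"50-year z-score: {surplus / sigma:.1f}")               # ~4.6

one_day_z = 0.004 * births_per_day / math.sqrt(births_per_day)
print(f"single-day z-score: {one_day_z:.2f}")                  # ~0.18
```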
It is sometimes claimed that surgeons used to refuse to operate during the full Moon because of the increased risk of death of the patient through blood loss. [ 23 ] One team, in Barcelona , Spain , reported a weak correlation between lunar phase and hospital admissions due to gastrointestinal bleeding , but only when comparing full Moon days to all non-full Moon days lumped together. [ 24 ] This methodology has been criticized, and the statistical significance of the results disappears if one compares day 29 of the lunar cycle (full Moon) to days 9, 12, 13, or 27 of the lunar cycle, which all have an almost equal number of hospital admissions. [ 25 ] The Spanish team acknowledged that the wide variation in the number of admissions throughout the lunar cycle limited the interpretation of the results. [ 24 ]
In October 2009, British politician David Tredinnick asserted that during a full Moon "[s]urgeons will not operate because blood clotting is not effective and the police have to put more people on the street." [ 26 ] A spokesman for the Royal College of Surgeons said they would "laugh their heads off" at the suggestion they could not operate on the full Moon. [ 27 ]
A study into epilepsy found a significant negative correlation between the mean number of epileptic seizures per day and the fraction of the Moon that is illuminated, but the effect resulted from the overall brightness of the night, rather than from the moon phase per se. [ 28 ]
Senior police officers in Brighton , UK, announced in June 2007 that they were planning to deploy more officers over the summer to counter trouble they believe is linked to the lunar cycle. [ 29 ] This followed research by the Sussex Police force that concluded there was a rise in violent crime when the Moon was full. A spokeswoman for the police force said "research carried out by us has shown a correlation between violent incidents and full moons". A police officer responsible for the research told the BBC that "From my experience of 19 years of being a police officer, undoubtedly on full moons we do seem to get people with sort of strange behavior – more fractious, argumentative." [ 30 ]
Police in Ohio and Kentucky have blamed temporary rises in crime on the full Moon. [ 31 ] [ 32 ] [ 33 ]
In January 2008, New Zealand 's Justice Minister Annette King suggested that a spate of stabbings in the country could have been caused by the lunar cycle. [ 34 ]
A reported correlation between Moon phase and the number of homicides in Miami-Dade County was found, through later analysis, not to be supported by the data and to have been the result of inappropriate and misleading statistical procedures. [ 3 ]
A study of 13,029 motorcyclists killed in nighttime crashes found that there were 5.3% more fatalities on nights with a full moon compared to other nights. [ 35 ] The authors speculate that the increase might be due to visual distractions created by the moon, especially when it is near the horizon and appears abruptly between trees, around turns, etc.
Several studies have argued that the stock market's average returns are much higher during the half of the month closest to the new moon than the half closest to the full moon. The reasons for this have not been studied, but the authors suggest this may be due to lunar influences on mood. [ 36 ] [ 37 ] [ 38 ] Another study has found contradictory results and questioned these claims. [ 39 ]
A meta-analysis of thirty-seven studies that examined relationships between the Moon's four phases and human behavior revealed no significant correlation. The authors found that, of twenty-three studies that had claimed to show correlation, nearly half contained at least one statistical error. [ 1 ] [ 3 ] Similarly, in a review of twenty studies examining correlations between Moon phase and suicides, most of the twenty studies found no correlation, and the ones that did report positive results were inconsistent with each other. [ 3 ] A 1978 review of the literature also found that lunar phases and human behavior are not related. [ 40 ]
A 2013 study by Christian Cajochen and collaborators at the University of Basel suggested a correlation between the full Moon and human sleep quality. [ 41 ] However, the validity of these results may be limited because of a relatively small (n=33) sample size and inappropriate controls for age and sex. [ 42 ] A 2014 study with larger sample sizes (n1=366, n2=29, n3=870) and better experimental controls found no effect of the lunar phase on sleep quality metrics. [ 42 ] A 2015 study of 795 children found a three-minute increase in sleep duration near the full moon, [ 43 ] but a 2016 study of 5,812 children found a five-minute decrease in sleep duration near the full moon. [ 44 ] No other modification in activity behaviors were reported, [ 44 ] and the lead scientist concluded: "Our study provides compelling evidence that the moon does not seem to influence people's behavior." [ 45 ] A study published in 2021 by researchers from the University of Washington , Yale University , and the National University of Quilmes showed a correlation between lunar cycles and sleep cycles. During the days preceding a full moon, people went to bed later and slept for shorter periods (in some cases with differences of up to 90 minutes), even in locations with full access to electric light. [ 46 ] Finally, a Swedish study including one-night at-home sleep recordings from 492 women and 360 men found that men whose sleep was recorded during nights in the waxing period of the lunar cycle exhibited lower sleep efficiency and increased time awake after sleep onset compared to men whose sleep was measured during nights in the waning period. In contrast, the sleep of women remained largely unaffected by the lunar cycle. These results were robust to adjustment for chronic sleep problems and obstructive sleep apnea severity. [ 47 ]
As for how the belief started in the first place, a 1999 study conjectures that the alleged connection of moon to lunacy might be a ‘cultural fossil’ from a time before the advent of outdoor lighting, when the bright light of the full moon might have induced sleep deprivation in people living outside, thereby triggering erratic behaviour in predisposed people with mental conditions such as bipolar disorder. [ 48 ]
Corals contain light-sensitive cryptochromes , proteins that are sensitive to different levels of light. [ 6 ] Spawning of coral Platygyra lamellina occurs at night during the summer on a date determined by the phase of the Moon ; in the Red Sea , this is the three- to five-day period around the new Moon in July and the similar period in August. [ 49 ] Acropora coral time their simultaneous release of sperm and eggs to just one or two days a year, after sundown with a full moon. [ 50 ] Dipsastraea speciosa tends to synchronize spawning in the evening or night, around the last quarter moon of the lunar cycle. [ 6 ] [ 7 ] [ 51 ]
Another marine animal, the bristle worm Platynereis dumerilii , also spawns a few days after a full moon. It is used as a model for studying cryptochromes and photoreduction in proteins. The L-Cry protein can distinguish between sunlight and moonlight through the differential activity of two protein strands that contain light-absorbing structures called flavins. Another molecule, called r-Opsin, may act as a moonrise sensor. Exactly how different biological signals are transmitted within the worm is not yet known. [ 6 ] [ 7 ] [ 8 ]
Correlation between hormonal changes in the testis and lunar periodicity was found in streamlined spinefoot (a type of fish), which spawns synchronously around the last Moon quarter. [ 52 ] In orange-spotted spinefoot , lunar phases affect the levels of melatonin in the blood. [ 52 ]
California grunion fish have an unusual mating and spawning ritual during the spring and summer months. The egg laying takes place on four consecutive nights, beginning on the nights of the full and new Moons, when tides are highest. This well understood reproductive strategy is related to tides, which are highest when the Sun, Earth, and Moon are aligned, i.e., at new Moon or full Moon. [ 53 ]
In insects, the lunar cycle may affect hormonal changes. [ 52 ] The body weight of honeybees peaks during new Moon . [ 52 ] The midge Clunio marinus has a biological clock synchronized with the Moon. [ 41 ] [ 54 ]
Evidence for a lunar effect in reptiles, birds, and mammals is scant, [ 52 ] but among reptiles, marine iguanas (which live in the Galápagos Islands ) time their trips to the sea in order to arrive at low tide. [ 55 ]
A relationship between the Moon and the birth rate of cows was reported in a 2016 study. [ 56 ]
In 2000, a retrospective study in the United Kingdom reported an association between the full moon and significant increases in animal bites to humans. The study reported that the number of patients presenting to A&E with injuries from animal bites rose significantly at the time of a full moon in the period 1997–1999. The study concluded that animals have an increased inclination to bite a human during a full moon period. It did not address the question of how humans came into contact with the animals, and whether this was more likely to happen during the full moon. [ 57 ]
Serious doubts have been raised [ 58 ] about the claim that a species of Ephedra synchronizes its pollination peak to the full moon in July. [ 59 ] Reviewers conclude that more research is needed to answer this question. [ 60 ] | https://en.wikipedia.org/wiki/Lunar_effect |
Lunar fluorescence is the process whereby minerals on the surface of the Moon , such as the silicate minerals plagioclase feldspar , pyroxene , and olivine , absorb solar radiation (ultraviolet or X-ray) and release visible light . These minerals release the light at specific wavelengths according to their chemical composition. [ 1 ] Plagioclase feldspar , for instance, generally has blue or green fluorescence , while olivine has reddish fluorescence. [ 2 ]
This process of lunar fluorescence arises from the interaction between the solar wind and the lunar regolith , and it aids the identification of minerals on the surface of the Moon. [ 1 ] From observed fluorescence spectra , the geochemical characteristics, distribution of minerals, and soil composition of the Moon can be determined. Lunar fluorescence also provides information about the geological history of the Moon, including volcanic activity and ancient impacts. Scientists have recently detected water molecules in the polar regions of the Moon using fluorescence data. Both remote sensing and in-situ approaches are employed to monitor lunar fluorescence: the Lyman-Alpha Mapping Project (LAMP) instrument on NASA's Lunar Reconnaissance Orbiter (LRO) receives UV fluorescence, and in 2020, water molecules and novel minerals on the lunar surface were detected through fluorescence data from samples returned by the Chang'e-5 mission. [ 3 ] The LAMP instrument also provided evidence of ice in shadowed craters at the Moon's south pole using fluorescence signals. Fluorescence signals tend to be weak, however, and require high-resolution instruments to detect; interference from the solar wind and cosmic rays can further complicate the signals. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Lunar_fluorescence |
The lunar occultation of Venus refers to a natural phenomenon in which the Moon passes in front of Venus , obstructing it from view on some regions of the Earth. Since the orbital planes of both the moon and Venus are tilted relative to the ecliptic , occultations only happen about twice a year rather than once a month. A computer search predicts that 101 lunar occultations occur in the date range of 1995–2045. [ 1 ]
Occultations can occur at any value of the moon's argument of latitude , not just near its nodes, because Venus goes further north and south of the ecliptic than the moon does. In 2054, for example, there are five occultations at intervals of just one month (January through May), and the first two occur when Venus is more than 5.1° north of the ecliptic. [ 2 ]
Whether there is an occultation depends on whether the distance of the centre of the moon is greater or less than 8093 kilometres (the sum of the earth's polar radius and the moon's radius) away from the line connecting the centre of the earth to Venus. The angle between the lines from the centre of the earth to the centres of the moon and Venus will then be the arc sine of 8093 km divided by the distance to the moon. Since this distance can vary in the range of 356,400 to 406,700 kilometres, there will always be an occultation (of the centre of Venus) if the said angle is less than 1.14° and there will never be one if the angle is more than 1.30°. Venus itself can have an angular radius up to nearly 0.01°, which needs to be taken into account when determining whether all of Venus will be hidden. This is similar to considerations of gamma for solar eclipses.
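The two limiting angles quoted above follow directly from the stated geometry; a short numerical check, using the figures given in the text:

```python
import math

# Limiting separations for a lunar occultation of Venus, from the geometry
# above: the centre of Venus is occulted when the Moon-Venus angular
# separation is below asin(8093 km / lunar distance).
OFFSET_KM = 8_093.0     # Earth's polar radius plus the Moon's radius

for distance_km in (406_700.0, 356_400.0):   # apogee and perigee extremes
    limit_deg = math.degrees(math.asin(OFFSET_KM / distance_km))
    print(f"{distance_km:,.0f} km -> {limit_deg:.2f} deg")
# 406,700 km -> 1.14 deg: an occultation is certain below this separation.
# 356,400 km -> 1.30 deg: no occultation is possible above this separation.
```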
For years up to 2025, a website giving easily observable occultations for the year is available. [ 3 ]
On 9 November 2023, a lunar occultation of Venus was observed from Europe. [ 21 ] [ 22 ] [ 23 ] | https://en.wikipedia.org/wiki/Lunar_occultation_of_Venus |
The lunar penetrometer was a spherical electronic tool that served to measure the load-bearing characteristics of the Moon in preparation for spacecraft landings . [ 1 ] [ 2 ] It was designed by NASA to be dropped onto the surface from a vehicle orbiting overhead and transmit information to the spacecraft. [ 3 ] [ 4 ] [ 5 ] However, despite it being proposed for several lunar and planetary missions, the device was never actually fielded by NASA. [ 6 ]
The lunar penetrometer was first developed in the early 1960s as part of NASA Langley Research Center ’s Lunar Penetrometer Program. [ 7 ] At the time, immense pressures from the ongoing Space Race caused NASA to shift its focus from conducting purely scientific lunar expeditions to landing a man on the Moon before the Russians. As a result, the Jet Propulsion Laboratory's lunar flight projects, Ranger and Surveyor , were reconfigured to provide direct support to Project Apollo . [ 6 ]
One of the major problems that NASA faced in preparation for the Apollo Moon landing was the inability to determine the surface characteristics of the Moon with regard to spacecraft landings and post-landing locomotion of exploratory vehicles and personnel. While radio and optical technology on Earth at the time could make out large-scale characteristics such as the size and distribution of mountains and craters, there was no Earth-based method of measuring small-scale features, such as the lunar surface texture and topographical details, with adequate resolution. [ 8 ] [ 9 ] In 1961, NASA's chief engineer Abe Silverstein proposed to the U.S. Congress that Project Ranger would help provide important data on the Moon's surface topography to facilitate the Apollo lunar landing. Once funding was provided to the Ranger program, Silverstein directed NASA laboratories to investigate potential instruments that could return information on the hardness of the lunar surface. [ 6 ]
Introduced shortly after Silverstein's directive, the Lunar Penetrometer Program called for the development of an impact-measuring instrumented projectile, or penetrometer , to provide preliminary information about the Moon's surface. The lunar penetrometer housed an impact accelerometer that recorded the deceleration time history of the projectile as it struck the lunar surface, from which the surface's hardness, bearing strength, and penetrability could be determined, as well as a radio telemeter that could transmit the impact information to a remote receiver. Knowledge of the complete impact acceleration time history would also have made it possible for NASA researchers to ascertain the physical composition of the soil and whether it was granular, powdery, or brittle. [ 8 ] If successful, the lunar penetrometer was planned for deployment on uncrewed landings in the Ranger and Surveyor programs as well as for the Apollo mission. [ 5 ] [ 9 ]
However, the Jet Propulsion Laboratory Space Sciences Division Manager Robert Meghreblian decided in August 1963 that the use of the lunar penetrometer to provide information on the lunar surface in situ was too risky. Instead, it was decided that the lunar surface composition would be determined by using gamma-ray spectrometry and surface topography via television photography and radar probing. [ 6 ] In 1966, the lunar penetrometer was investigated as a potential sounding device for the Apollo missions, but no information exists on whether it was used in that manner. [ 10 ]
In order to function properly, the lunar penetrometer was designed to sense the accelerations encountered by the projectile body during the impact process and telemeter the collected information to a nearby receiving station. Doing so required the penetrometer to package an acceleration sensing device as well as an independent telemetry system with a power supply , transmitter , and antenna system. The components also needed to be housed within a casing that could withstand a wide range of impact loads. [ 9 ]
The lunar penetrometer took the form of a spherical omnidirectional penetrometer, whose measurements did not depend on the orientation of the device at impact, a factor difficult to control in an environment with little to no atmosphere like the lunar surface. [ 9 ] [ 11 ] The omnidirectional design packaged the accelerometer, computer, power supply, and telemetry system within a 3-inch diameter sphere. [ 12 ] The spherical instrumentation compartment had an omnidirectional acceleration sensor located at the center, surrounded by concentrically placed batteries and electronic modules. The components were enclosed within an electromagnetic shield that provided a uniform metallic reference for the omnidirectional antenna encircling the instrumentation compartment. [ 12 ] Outside the compartment, an impact limiter made of balsa wood provided shock absorption, limiting the impact forces on the internal components to tolerable levels, and gave the penetrometer a low overall density to ensure sensitivity to soft, weak target surfaces. The balsa impact limiter was coated with a thin outer shell of fiberglass epoxy. [ 5 ] [ 12 ]
As part of the Lunar Penetrometer Program, the NASA Langley Research Center tasked the Harry Diamond Laboratories (later consolidated to form the U.S. Army Research Laboratory ) with the development of the omnidirectional accelerometer for the lunar penetrometer. [ 3 ] [ 4 ] [ 7 ] The omnidirectional accelerometer, or the omnidirectional acceleration sensor, was an accelerometer capable of measuring the acceleration time histories independent of its angular acceleration or orientation at impact. [ 11 ] The researchers at Harry Diamond Laboratories originally employed a hollow piezoelectric sphere but later transitioned to modifying a conventional triaxial accelerometer. The instantaneous magnitude of the acceleration was computed by obtaining the square root of the sum of the squares of the three orthogonal, acceleration-time signatures. [ 7 ] [ 13 ] The omnidirectional accelerometer withstood a maximum of 40,000 G during shock testing and operated using a 20V power supply drawing 10 mA. [ 12 ]
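The root-sum-square computation is simple to illustrate. A minimal Python sketch, using synthetic impact signals purely for illustration:

```python
import numpy as np

# Synthetic triaxial impact records over a 10 ms window (values in G).
t = np.linspace(0.0, 0.010, 1000)
pulse = np.exp(-((t - 0.004) / 0.001) ** 2)   # idealized impact pulse shape
ax, ay, az = 20000 * pulse, 5000 * pulse, 12000 * pulse

# Instantaneous magnitude: square root of the sum of the squares of the
# three orthogonal acceleration-time signatures. The result is independent
# of the sphere's orientation at impact.
a_mag = np.sqrt(ax**2 + ay**2 + az**2)
print(f"peak deceleration: {a_mag.max():.0f} G")  # ~23,850 G for these inputs
```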
The telemetry system for the lunar penetrometer was commissioned by NASA to the Canadian defence contractor Computing Devices of Canada (now known as General Dynamics Mission Systems ). [ 7 ] It consisted of a network that fed the output of the accelerometer to a radio frequency power amplifier that was also connected to a master oscillator and a buffer amplifier . The amplifiers and the oscillator functioned together to act as a transmitter, whose outputs were fed to a spherical antenna that was embedded in the outer skin of the penetrometer. [ 11 ]
Due to limitations in available power, antenna efficiency, and other factors, the impact acceleration information from the lunar penetrometers could not be transmitted over long distances. As a result, a relay craft needed to be placed within the transmission field of the lunar penetrometers to intercept their signals and transmit them to a distant receiving station. When located within moderate range of a receiving station such as a parent spacecraft, the relay craft served simply to amplify and redirect the lunar penetrometer signals. At greater distances, the relay craft would perform data signal processing, exchanging the peak power requirement of instantaneous data transmission for a longer transmission time in order to decrease the demands placed upon the power supply. The relay craft had to receive the lunar penetrometer signals and transmit them to the receiving station after the lunar penetrometers landed on the surface but before the relay craft itself crashed onto the ground. As a result, a strict time limit would be imposed on the relay craft to deliver the necessary data sent by the penetrometers. [ 9 ]
During lunar reconnaissance, a payload containing the lunar penetrometers and the relay station structure would be mounted on the spacecraft as it traveled to its destination. Above the lunar surface, the spacecraft would release the payload, which would spin for attitude stability and use the main retrorocket motor to reduce the descent velocity. At approximately 5,600 feet above the target area, the second retrorocket would fire once the main retrorocket was jettisoned from the payload. The centrifugal force resulting from the spin stabilization would cause a salvo of lunar penetrometers to disperse and free-fall toward the lunar surface. The payload carriage would hold 16 lunar penetrometers in total, released in salvos of four at about 2-second intervals. The impact of the lunar penetrometers would be categorized as elastic, plastic, or penetration depending on the target surface. After the secondary retrorocket burned out, the payload would free-fall to the lunar surface as well. Once the penetrometers made contact with the lunar surface, the impact information would be transmitted to the descending payload relay station and then relayed to a transmitting antenna system on Earth. In short, this chain of communication would take place within the time interval between the release of the lunar penetrometers and the moment the payload relay station landed on the lunar surface. [ 11 ]
Harry Diamond Laboratories was tasked with developing a high-energy shock testing method that monitored the omnidirectional accelerometer's behavior during acceleration peaking at 20,000 G. Components of the omnidirectional accelerometer, such as the resistors , capacitors , oscillators , and magnetic cores , were subjected to a modified air gun test. The component being tested was placed within a target body inside an extension tube in front of an air gun. The air gun would fire a projectile, impacting the target body and accelerating it to a peak of 20,000 G until it hit the lead target only a short distance away inside the extension tube. The results of the shock test showed that the resistors and capacitors changed very little during shock, while the commercial subcarrier oscillator and the tape-wound magnetic cores were affected considerably. [ 7 ]
More than 200 impact tests were conducted with the spherical lunar penetrometer to investigate its soil penetration characteristics. [ 1 ] Most consisted of impacting the penetrometers against a wide range of target materials at velocities ranging from 6 to 76 m/s and recording the measured impact characteristics. [ 5 ] Several experiments investigated the penetrometer's ability to predict the depth to which a lunar module would penetrate the surface of the landing zone. The results of these studies found that the lunar penetrometers were successful not only in identifying the nature of the impacted surface, i.e. whether the surface was rigid or collapsible, but also in distinguishing between particulate materials of different bearing strength from peak impact accelerations. The lunar penetrometers were able to accurately predict the conditions of the landing pad penetrations. [ 14 ]
The lunar penetrometer was studied as a potential sounding device for a crewed Apollo lunar module landing in 1966. The device was suggested to assist astronauts in on-the-spot decision making regarding whether a safe landing of the lunar module could be made. Once dropped individually or in salvo within the landing zone, the lunar penetrometers could autonomously transmit an acceleration-time profile upon impact and characterize the surface hardness of the landing zone. A short study on the feasibility of this application was conducted to determine the flight, trajectory, and impact parameters of the lunar penetrometers once launched from a lunar module. The study found that the lunar penetrometer's impact velocities were limited to a range from 120 ft/s to 200 ft/s, meaning that the impact angles would have to vary between 54 and 62 degrees from the vertical. The earliest that a lunar penetrometer could be launched was at a range of 3,400 feet and an altitude of 1,075 feet, which would grant the crew in the lunar module 16 seconds to analyze the penetrometer data. [ 10 ] | https://en.wikipedia.org/wiki/Lunar_penetrometer
A lunar regolith simulant is a terrestrial material synthesized in order to approximate the chemical, mechanical, engineering, mineralogical , or particle-size distribution properties of lunar regolith . [ 1 ] Lunar regolith simulants are used by researchers who wish to research the materials handling, excavation, transportation, and uses of lunar regolith. Samples of actual lunar regolith are too scarce, and too small, for such research, and have been contaminated by exposure to Earth's atmosphere .
In the run-up to the Apollo program , crushed terrestrial rocks were first used to simulate the anticipated soils that astronauts would encounter on the lunar surface. [ 2 ] In some cases the properties of these early simulants were substantially different from actual lunar soil, and the issues associated with the pervasive, fine-grained, sharp dust grains on the Moon came as a surprise. [ 3 ]
After Apollo, and particularly during the development of the Constellation program , there was a large proliferation of lunar simulants produced by different organizations and researchers. Many of these were given three-letter acronyms to distinguish them (e.g., MLS-1, JSC-1), with numbers designating subsequent versions. These simulants were broadly divided into highlands or mare soils, and were usually produced by crushing and sieving analogous terrestrial rocks (anorthosite for highlands, basalt for mare). Returned Apollo and Luna samples were used as reference materials in order to target specific properties such as elemental chemistry or particle size distribution. Many of these simulants were criticized by the prominent lunar scientist Larry Taylor for a lack of quality control and for money wasted on features, such as nanophase iron, that had no documented purpose. [ 4 ]
JSC-1 ( Johnson Space Center Number One ) was a lunar regolith simulant that was developed in 1994 by NASA and the Johnson Space Center . Its developers intended it to approximate the lunar soil of the maria . It was sourced from a basaltic ash with a high glass content. [ 1 ]
In 2005, NASA contracted with Orbital Technologies Corporation (ORBITEC) for a second batch of simulant in three grades: JSC-1A (similar to the original JSC-1), JSC-1AF (fine), and JSC-1AC (coarse). [ 5 ]
NASA received 14 metric tons of JSC-1A, and one ton each of AF and AC in 2006. Another 15 tons of JSC-1A and 100 kg of JSC-1F were produced by ORBITEC for commercial sale, but ORBITEC is no longer selling simulants and was acquired by the Sierra Nevada Corporation. An 8-ton sand box of commercial JSC-1A is available for daily rental from the NASA Solar System Exploration Research Virtual Institute (SSERVI). [ 6 ]
JSC-1A can geopolymerize in an alkaline solution, resulting in a hard, rock-like material. [ 7 ] [ 8 ] Tests show that the maximum compressive and flexural strength of the 'lunar' geopolymer is comparable to that of conventional cements. [ 8 ]
JSC-1 and JSC-1A are now no longer available outside of NASA centers. [ citation needed ]
Two lunar highlands simulants, the NU-LHT (lunar highlands type) series and OB-1 (olivine-bytownite), were developed and produced in anticipation of the Constellation activities. Both of these simulants are sourced mostly from rare anorthosite deposits on the Earth: for NU-LHT the anorthosite came from the Stillwater complex, and for OB-1 it came from the Shawmere Anorthosite in Ontario. Neither of these simulants was widely distributed.
Most of the previously developed lunar simulants are no longer being produced or distributed outside of NASA. Multiple companies have tried to sell regolith simulants for profit, including Zybek Advanced Products, ORBITEC, and Deep Space Industries . None of these efforts have seen much success. NASA is unable to sell simulants, or distribute unlimited amounts for free; however, NASA can award set amounts of simulant to grant winners.
Several lunar simulants have been developed recently and are either being sold commercially or are available for rent inside large regolith bins. These include the OPRL2N Standard Representative Lunar Mare Simulant [ 9 ] and Standard Representative Lunar Highland Simulant . [ 10 ] Off Planet Research also produces customized simulants for specific locations on the Moon including lunar polar icy regolith simulants that include the volatiles identified in the LCROSS mission.
Other simulants include Lunar Highlands Simulant (LHS-1) [ 11 ] and Lunar Mare Simulant (LMS-1) [ 12 ] produced and distributed by the not-for-profit Exolith Lab run out of the University of Central Florida . [ 13 ]
Indian Space Research Organisation has developed its own lunar highland soil simulant called LSS-ISAC1 for its Chandrayaan programme . [ 14 ] [ 15 ] The raw material for this simulant was sourced from Sithampoondi and Kunnamalai villages in Tamil Nadu. [ 16 ] [ 17 ]
In 2020, a team of independent researchers from Thailand developed the Thailand Lunar Simulant - Batch 1 (TLS-1) [ 18 ] using domestic sources, the first successful simulant production attempt in the country, based on the properties of the Apollo 11 sample. [ 19 ] [ 20 ] Further applications in the field of space and materials engineering were also made using the produced simulant. [ 21 ] | https://en.wikipedia.org/wiki/Lunar_regolith_simulant
Lunar swirls are enigmatic features found across the Moon 's surface, characterized by a high albedo , an optically immature appearance (i.e. the optical characteristics of a relatively young regolith ), and, often, a sinuous shape. Their curvilinear shape is often accentuated by low-albedo regions that wind between the bright swirls. They appear to overlie the lunar surface, superposed on craters and ejecta deposits, but impart no observable topography. Swirls have been identified on the lunar maria and on highlands; they are not associated with a specific lithologic composition. Swirls on the maria are characterized by strong albedo contrasts and complex, sinuous morphology, whereas those on highland terrain appear less prominent and exhibit simpler shapes, such as single loops or diffuse bright spots.
The lunar swirls coincide with regions of the magnetic field of the Moon with relatively high strength, on a planetary body that lacks, and may never have had, an active core dynamo with which to generate its own magnetic field. Every swirl has an associated magnetic anomaly, but not every magnetic anomaly has an identifiable swirl. Orbital magnetic field mapping by the Apollo 15 and 16 sub-satellites, Lunar Prospector , and Kaguya shows regions with a local magnetic field. Because the Moon has no currently active global magnetic field, these regional anomalies are regions of remanent magnetism; their origin remains controversial. [ citation needed ]
There are three leading models for swirl formation. Each model must address two characteristics of lunar swirl formation: that a swirl is optically immature, and that it is associated with a magnetic anomaly.
Models for creation of the magnetic anomalies associated with lunar swirls point to the observation that several of the magnetic anomalies are antipodal to the younger, large impact basins on the Moon. [ 1 ]
This model argues that the high albedo of the swirls is the result of an impact with a comet. The impact would cause scouring of the top-most surface regolith by the comet’s turbulent flow of gas and dust, which exposed fresh material and redeposited the fine, scoured material in discrete deposits. [ 2 ] According to this model, the associated strong magnetic anomalies are the result of magnetization of near-surface materials heated above the Curie temperature through hyper-velocity gas collisions and micro-impacts as the coma impacted the surface. Proponents of the cometary impact model consider the occurrence of many swirls antipodal to the major basins to be coincidental or the result of incomplete mapping of swirl locations. [ 3 ] [ 4 ]
This model argues that swirls are formed because lighter-colored regolith is protected from the solar wind due to a magnetic anomaly. [ 5 ] The swirls represent exposed silicate materials whose albedos have been selectively preserved over time from the effects of space weathering via deflection of solar wind ion bombardment. According to this model, optical maturation of exposed silicate surfaces is a result of solar wind ion bombardment. This model suggests that swirl formation is a continuing process, which began after creation of the magnetic anomaly.
Mathematical simulations conducted in 2018 showed that lava tubes could have become magnetic as they cooled, which would provide a magnetic field consistent with the observations near the lunar swirls. [ 6 ]
This model argues that weak electric fields created by interaction between the crustal magnetic anomalies and the solar wind plasma could attract or repel electrically charged fine dust. High-albedo feldspathic material is the dominant component of the finest particles of lunar soil. Electrostatic movement of dust lofted above the surface during terminator crossings could cause this material to preferentially accumulate and form the bright, looping swirl patterns. [ 7 ] [ 8 ]
Direct magnetic observations of the lunar swirls have been conducted by several lunar spacecraft, including Clementine and Lunar Prospector . The results of these observations are inconsistent with the Cometary impact model. [ 9 ] Further observations by the Lunar Reconnaissance Orbiter support the model that solar wind is being deflected by a magnetic field. [ citation needed ]
Spectral observations by the Moon Mineralogy Mapper instrument on Chandrayaan-1 confirmed that the lighter-colored regions are deficient in hydroxyl , which also supports the hypothesis that solar wind is being deflected in the pale areas. [ 10 ]
As of 2018 , a CubeSat mission concept is under study at NASA, with the goal of understanding the formation of the lunar swirls. The proposed Bi-sat Observations of the Lunar Atmosphere above Swirls , or BOLAS , mission would involve two small satellites connected with a 25 km (16 mi) space tether . The lower CubeSat would orbit at an altitude of six miles above the surface. [ 11 ] [ 12 ]
NASA intends to send a rover to Reiner Gamma to obtain in-situ observations of the surface materials there. Funding for the Lunar Vertex mission, run by the JHU Applied Physics Laboratory , was selected for flight through the PRISM call for proposals. [ 13 ] [ 14 ] Delivery of the rover for the mission was included in the CLPS CP-11 task order. [ 15 ] The rover, carrying a multispectral microscope, will determine coarseness and brightness of surface particles and transmit its data to the lander, which will communicate with Earth-based handlers. [ 16 ] [ 17 ] [ 18 ] | https://en.wikipedia.org/wiki/Lunar_swirls |
A lunar terrane is a major geologic province on the Moon . Three terranes have been identified on the Moon: the Procellarum KREEP Terrane, the Feldspathic Highlands Terrane, and the South Pole–Aitken Terrane. [ 1 ] Each terrane has a unique origin, composition, and thermal evolution. [ 2 ]
The Procellarum KREEP Terrane, or PKT, is a large province on the near side of the Moon that has high abundances of KREEP . KREEP is an acronym built from the letters K (the atomic symbol for potassium ), REE ( rare-earth elements ) and P (for phosphorus ), [ 3 ] and is a geochemical component of some lunar impact breccia and basaltic rocks. Notably, it is high in the KREEP element thorium , at a level of 4.8 ppm. [ 4 ] This is a major factor distinguishing it from the other terranes. The PKT is on the near side of the moon, and covers 10% of the lunar surface, [ 4 ] or 16% if one includes the maria lying within the FHT. [ 5 ] [ 6 ] [ 4 ] Despite this, it contains 60% of all basaltic flows. [ 6 ] KREEP has been shown to lower the melting point of rocks similar to those found on the Moon, and is expected to have contributed to volcanism in the region. [ 3 ]
The Oceanus Procellarum and Mare Imbrium regions lie within the PKT. [ 7 ] In general, many maria, such as (but not limited to) Mare Frigoris and Mare Cognitum , are members of the PKT. Not all maria are in the PKT, however: Mare Crisium and Mare Orientale are located within the outer Feldspathic Highlands. [ 4 ]
The PKT is the only terrane to lie exclusively on the near side of the Moon . Human and robotic missions have been sent to this terrane, and samples have been returned to Earth for further study. [ 8 ] [ 9 ]
The Felspathic Highlands Terrane (alternatively Feldspathic ), or FHT, is composed predominantly of ancient anorthositic materials. It has low iron oxide and thorium levels. The FHT can be split into an inner and outer felspathic highlands. The outer FHT has comparatively higher levels of iron oxide and thorium. It is thought to be part of the FHT, with the differences being due to modification by ejecta from impacts in other terranes. The FHT covers 65% of the lunar surface. [ 5 ]
Overall, 6% of the lunar surface (and hence 9% of the FHT) consists of maria within the FHT, such as Mare Moscoviense . FHT maria have on average only 2.2 ppm of thorium, which is twice as much as the lunar average but significantly less than the levels seen in the PKT maria. Outer FHT non-maria regions contain 1 ppm of thorium, and only 0.3 ppm of thorium in the inner FHT. [ 4 ]
The inner FHT lies exclusively on the far side of the Moon , whereas the outer FHT spans both sides and is one of the two terranes present on the near side of the Moon, along with the PKT. No spacecraft have landed on the inner FHT, as the only lander on the far side touched down in the South Pole-Aitken Terrane. [ 10 ] In contrast, the outer FHT has been the subject of human landings and sample return. [ 11 ]
The South Pole-Aitken Terrane, or SPAT, may simply represent deep crustal materials of the Feldspathic Highlands Terrane. It has thorium levels between those of the PKT and FHT. The SPAT can be divided into two terranes, an outer and an inner SPAT. [ 12 ] The outer SPAT has less iron oxide and thorium than its inner counterpart, although its thorium levels are still between those of the PKT and FHT. [ 5 ] The terrane has its origins in a large impact that occurred early in the Moon's history and strongly influenced the Moon's thermal evolution. [ 4 ]
The inner and outer SPAT cover 5.3% and 5.7% of the lunar surface, respectively. Despite collectively covering 11% of the surface, SPAT contains only 5.8% of the thorium in the lunar crust. [ 4 ]
The SPAT lies exclusively in the far side of the moon, and contains the South Pole-Aitken basin . Chang'e 4 , the first and only far-side lander, has landed in the region. [ 10 ] It is not a sample return mission, [ 13 ] and thus no samples have been directly taken from this terrane yet.
In the 1600s, the Moon was divided into two terranes, terra and maria . The terra terrane was thought to be landmass, and the maria terrane was thought to be the Moon's ocean, [ 14 ] although this is now known to be false. The maria terrane is lower in elevation and younger in age than the terra terrane, and was formed by lava. The terra terrane is higher and older, and hence more cratered. Visually, the maria correspond to the dark regions of the Moon, and the terra to the light. [ 5 ] | https://en.wikipedia.org/wiki/Lunar_terrane |
Lunar theory attempts to account for the motions of the Moon . There are many small variations (or perturbations ) in the Moon's motion, and many attempts have been made to account for them. After centuries of being problematic, lunar motion can now be modeled to a very high degree of accuracy (see section Modern developments ).
Lunar theory includes:
Lunar theory has a history of over 2000 years of investigation. Its more modern developments have been used over the last three centuries for fundamental scientific and technological purposes, and are still being used in that way.
Applications of lunar theory have included the following:
The Moon has been observed for millennia. Over these ages, various levels of care and precision have been possible, according to the techniques of observation available at any time. There is a correspondingly long history of lunar theories: it stretches from the times of the Babylonian and Greek astronomers, down to modern lunar laser ranging.
The history can be considered to fall into three parts: from ancient times to Newton; the period of classical (Newtonian) physics; and modern developments.
Of Babylonian astronomy , practically nothing was known to historians of science before the 1880s. [ 3 ] Surviving ancient writings of Pliny had made bare mention of three astronomical schools in Mesopotamia – at Babylon, Uruk, and 'Hipparenum' (possibly 'Sippar'). [ 4 ] But definite modern knowledge of any details only began when Joseph Epping deciphered cuneiform texts on clay tablets from a Babylonian archive: In these texts he identified an ephemeris of positions of the Moon. [ 5 ] Since then, knowledge of the subject, still fragmentary, has had to be built up by painstaking analysis of deciphered texts, mainly in numerical form, on tablets from Babylon and Uruk (no trace has yet been found of anything from the third school mentioned by Pliny).
To the Babylonian astronomer Kidinnu (in Greek or Latin, Kidenas or Cidenas) has been attributed the invention (5th or 4th century BC) of what is now called "System B" for predicting the position of the moon, taking account that the moon continually changes its speed along its path relative to the background of fixed stars. This system involved calculating daily stepwise changes of lunar speed, up or down, with a minimum and a maximum approximately each month. [ 6 ] The basis of these systems appears to have been arithmetical rather than geometrical, but they did approximately account for the main lunar inequality now known as the equation of the center .
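A minimal Python sketch of such a linear zigzag scheme follows; the numerical parameters are illustrative placeholders rather than Kidinnu's actual System B values:

```python
# Hedged sketch of a Babylonian-style linear "zigzag" function: the daily
# lunar speed changes by a fixed step, reflecting at a minimum and a maximum
# roughly once per (anomalistic) month. Parameters are illustrative only.
V_MIN = 11.0   # degrees/day, assumed minimum daily motion
V_MAX = 15.25  # degrees/day, assumed maximum daily motion
STEP = 0.3     # degrees/day change per day; gives a ~28-day up-down cycle

def zigzag_speeds(days: int, v0: float = V_MIN, rising: bool = True):
    v, speeds = v0, []
    for _ in range(days):
        speeds.append(v)
        v += STEP if rising else -STEP
        if v > V_MAX:            # reflect at the maximum
            v, rising = 2 * V_MAX - v, False
        elif v < V_MIN:          # reflect at the minimum
            v, rising = 2 * V_MIN - v, True
    return speeds

# Summing the daily speeds gives the longitude gained over the interval.
print(sum(zigzag_speeds(28)))    # roughly one revolution plus a little more
```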
The Babylonians kept very accurate records for hundreds of years of new moons and eclipses. [ 7 ] Some time between the years 500 BC and 400 BC they identified and began to use the 19 year cyclic relation between lunar months and solar years now known as the Metonic cycle . [ 8 ]
This helped them build a numerical theory of the main irregularities in the Moon's motion, reaching remarkably good estimates for the (different) periods of the three most prominent features of the Moon's motion:
The Babylonian estimate for the synodic month was adopted for the greater part of two millennia by Hipparchus, Ptolemy, and medieval writers (and it is still in use as part of the basis for the calculated Hebrew (Jewish) calendar ).
Thereafter, from Hipparchus and Ptolemy in the Bithynian and Ptolemaic epochs down to the time of Newton 's work in the seventeenth century, lunar theories were composed mainly with the help of geometrical ideas, inspired more or less directly by long series of positional observations of the moon. Prominent in these geometrical lunar theories were combinations of circular motions – applications of the theory of epicycles . [ 14 ]
Hipparchus, whose works are mostly lost and known mainly from quotations by other authors, assumed that the Moon moved in a circle inclined at 5° to the ecliptic , rotating in a retrograde direction (i.e. opposite to the direction of the annual and monthly apparent movements of the Sun and Moon relative to the fixed stars) once in 18 2 ⁄ 3 years. The circle acted as a deferent , carrying an epicycle along which the Moon was assumed to move in a retrograde direction. The center of the epicycle moved at a rate corresponding to the mean change in the Moon's longitude, while the period of the Moon around the epicycle was an anomalistic month. This epicycle approximately provided for what was later recognized as the elliptical inequality, the equation of the center , and its size approximated to an equation of the center of about 5° 1'. This figure is much smaller than the modern value, but it is close to the difference between the modern coefficients of the equation of the center (1st term) and that of the evection . The difference is accounted for by the fact that the ancient measurements were taken at times of eclipses, when the effect of the evection (which subtracts under those conditions from the equation of the center) was unknown and overlooked. For further information, see the separate article Evection .
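A short check, assuming the conventional modern coefficients (equation of the center about 22639.55″ and evection about 4586.45″, as in Brown's theory), bears this out: {\displaystyle 22639.55''-4586.45''=18053.10''\approx 5.015^{\circ }\approx 5^{\circ }1'.}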
Ptolemy's work the Almagest had wide and long-lasting acceptance and influence for over a millennium. He gave a geometrical lunar theory that improved on that of Hipparchus by providing for a second inequality of the Moon's motion, using a device that made the apparent apogee oscillate a little – prosneusis of the epicycle. This second inequality or second anomaly accounted rather approximately, not only for the equation of the center, but also for what became known (much later) as the evection . But this theory, applied to its logical conclusion, would make the distance (and apparent diameter) of the Moon appear to vary by a factor of about 2, which is clearly not seen in reality. [ 15 ] (The apparent angular diameter of the Moon does vary monthly, but only over a much narrower range of about 0.49°–0.55°. [ 16 ] ) This defect of the Ptolemaic theory led to proposed replacements by Ibn al-Shatir in the 14th century [ 17 ] and by Copernicus in the 16th century. [ 18 ]
Significant advances in lunar theory were made by the Arab astronomer , Ibn al-Shatir (1304–1375). Drawing on the observation that the distance to the Moon did not change as drastically as required by Ptolemy's lunar model, he produced a new lunar model that replaced Ptolemy's crank mechanism with a double epicycle model that reduced the computed range of distances of the Moon from the Earth. [ 17 ] [ 19 ] A similar lunar theory, developed some 150 years later by the Renaissance astronomer Nicolaus Copernicus , had the same advantage concerning the lunar distances. [ 20 ] [ 21 ]
Tycho Brahe and Johannes Kepler refined the Ptolemaic lunar theory, but did not overcome its central defect of giving a poor account of the (mainly monthly) variations in the Moon's distance, apparent diameter and parallax . Their work added to the lunar theory three substantial further discoveries.
The refinements of Brahe and Kepler were recognized by their immediate successors as improvements, but those seventeenth-century successors tried numerous alternative geometrical configurations for the lunar motions to improve matters further. A notable success was achieved by Jeremiah Horrocks , who proposed a scheme involving an approximately six-monthly libration in the position of the lunar apogee and also in the size of the elliptical eccentricity. This scheme had the great merit of giving a more realistic description of the changes in distance, diameter and parallax of the Moon.
The first period of gravitational lunar theory started with the work of Newton. He was the first to define the problem of the perturbed motion of the Moon in recognisably modern terms. His groundbreaking work is shown, for example, in the Principia [ 22 ] in all versions, including the first edition published in 1687.
Newton's biographer, David Brewster , reported that the complexity of Lunar Theory impacted Newton's health: "[H]e was deprived of his appetite and sleep" during his work on the problem in 1692–3, and told the astronomer John Machin that "his head never ached but when he was studying the subject". According to Brewster, Edmund Halley also told John Conduitt that when pressed to complete his analysis Newton "always replied that it made his head ache, and kept him awake so often, that he would think of it no more " [Emphasis in original]. [ 23 ]
Newton identified how to evaluate the perturbing effect on the relative motion of the Earth and Moon, arising from their gravity towards the Sun, in Book 1, Proposition 66, [ 24 ] and in Book 3, Proposition 25. [ 25 ] The starting-point for this approach is Corollary VI to the laws of motion. [ 26 ] This shows that if the external accelerative forces from some massive body happen to act equally and in parallel on all of the bodies considered, then those bodies would be affected equally, and in that case their motions relative to each other would continue as if there were no such external accelerative forces at all. It is only when the external forces (e.g. in Book 1, Prop. 66, and Book 3, Prop. 25, the gravitational attractions towards the Sun) differ in size or in direction in their accelerative effects on the different bodies considered (e.g. on the Earth and Moon) that consequent effects are appreciable on the relative motions of the latter bodies. (Newton referred to accelerative forces or accelerative gravity due to some external massive attractor such as the Sun. The measure he used was the acceleration that the force tends to produce (in modern terms, force per unit mass), rather than what we would now call the force itself.)
Thus Newton concluded that it is only the difference between the Sun's accelerative attraction on the Moon and the Sun's attraction on the Earth that perturbs the motion of the Moon relative to the Earth.
Newton then in effect used vector decomposition of forces, [ 27 ] to carry out this analysis. In Book 1, Proposition 66 and in Book 3, Proposition 25, [ 28 ] he showed by a geometrical construction, starting from the total gravitational attraction of the Sun on the Earth, and of the Sun on the Moon, the difference that represents the perturbing effect on the motion of the Moon relative to the Earth. In summary, line LS in Newton's diagram as shown below represents the size and direction of the perturbing acceleration acting on the Moon in the Moon's current position P (line LS does not pass through point P, but the text shows that this is not intended to be significant, it is a result of the scale factors and the way the diagram has been built up).
Shown here is Newton's diagram from the first (1687) Latin edition of the Principia (Book 3, Proposition 25, p. 434). Here he introduced his analysis of perturbing accelerations on the Moon in the Sun-Earth-Moon system. Q represents the Sun, S the Earth, and P the Moon.
Parts of this diagram represent distances, other parts gravitational accelerations (attractive forces per unit mass). In a dual significance, SQ represents the Earth-Sun distance, and then it also represents the size and direction of the Earth-Sun gravitational acceleration. Other distances in the diagram are then in proportion to distance SQ. Other attractions are in proportion to attraction SQ.
The Sun's attractions are SQ (on the Earth) and LQ (on the Moon). The size of LQ is drawn so that the ratio of attractions LQ:SQ is the inverse square of the ratio of distances PQ:SQ. (Newton constructs KQ = SQ, giving an easier view of the proportions.) The Earth's attraction on the Moon acts along direction PS. (But line PS signifies only distance and direction so far; nothing has yet been defined about the scale factor between solar and terrestrial attractions.)
After showing solar attractions LQ on the Moon and SQ on the Earth, on the same scale, Newton then makes a vector decomposition of LQ into components LM and MQ. Then he identifies the perturbing acceleration on the Moon as the difference of this from SQ. SQ and MQ are parallel to each other, so SQ can be directly subtracted from MQ, leaving MS. The resulting difference, after subtracting SQ from LQ, is therefore the vector sum of LM and MS: these add up to a perturbing acceleration LS.
Later Newton identified another resolution of the perturbing acceleration LM+MS = LS, into orthogonal components: a transverse component parallel to LE, and a radial component, effectively ES.
Newton's diagrammatic scheme, since his time, has been re-presented in other and perhaps visually clearer ways. Shown here is a vector presentation [ 29 ] indicating, for two different positions, P1 and P2, of the Moon in its orbit around the Earth, the respective vectors LS1 and LS2 for the perturbing acceleration due to the Sun. The Moon's position at P1 is fairly close to what it was at P in Newton's diagram; corresponding perturbation LS1 is like Newton's LS in size and direction. At another position P2, the Moon is farther away from the Sun than the Earth is, the Sun's attraction LQ2 on the Moon is weaker than the Sun's attraction SQ=SQ2 on the Earth, and then the resulting perturbation LS2 points obliquely away from the Sun.
Constructions like those in Newton's diagram can be repeated for many different positions of the Moon in its orbit. For each position, the result is a perturbation vector like LS1 or LS2 in the second diagram. Shown here is an often-presented form of the diagram that summarises sizes and directions of the perturbation vectors for many different positions of the Moon in its orbit. Each small arrow is a perturbation vector like LS, applicable to the Moon in the particular position around the orbit from which the arrow begins. The perturbations on the Moon when it is nearly in line along the Earth-Sun axis, i.e. near new or full moon, point outwards, away from the Earth. When the Moon-Earth line is 90° from the Earth-Sun axis they point inwards, towards the Earth, with a size that is only half the maximum size of the axial (outwards) perturbations. (Newton gave a rather good quantitative estimate for the size of the solar perturbing force: at quadrature where it adds to the Earth's attraction he put it at 1 ⁄ 178.725 of the mean terrestrial attraction, and twice as much as that at the new and full moons where it opposes and diminishes the Earth's attraction.) [ 28 ]
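Newton's estimate can be checked numerically from the difference of solar accelerations described above. A minimal Python sketch, assuming modern values for the masses and distances:

```python
import numpy as np

# Perturbing acceleration on the Moon = Sun's acceleration on the Moon
# minus Sun's acceleration on the Earth (the vector LS in Newton's diagram).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
R_ES = 1.496e11      # Earth-Sun distance, m
R_EM = 3.844e8       # mean Earth-Moon distance, m

sun = np.array([R_ES, 0.0])   # Earth at the origin

def solar_perturbation(moon: np.ndarray) -> np.ndarray:
    a_on_moon = G * M_SUN * (sun - moon) / np.linalg.norm(sun - moon) ** 3
    a_on_earth = G * M_SUN * sun / np.linalg.norm(sun) ** 3
    return a_on_moon - a_on_earth

a_earth_on_moon = G * M_EARTH / R_EM ** 2   # mean terrestrial attraction

for label, pos in [("syzygy (new/full moon)", np.array([R_EM, 0.0])),
                   ("quadrature",             np.array([0.0, R_EM]))]:
    frac = np.linalg.norm(solar_perturbation(pos)) / a_earth_on_moon
    print(f"{label}: about 1/{1 / frac:.0f} of the Earth's attraction")
# Prints roughly 1/88 at syzygy and 1/177 at quadrature, consistent with
# Newton's 1/178.725 at quadrature and about twice that at new and full moon.
```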
Newton also showed that the same pattern of perturbation applies, not only to the Moon, in its relation to the Earth as disturbed by the Sun, but also to other particles more generally in their relation to the solid Earth as disturbed by the Sun (or by the Moon); for example different portions of the tidal waters at the Earth's surface. [ a ] The study of the common pattern of these perturbing accelerations grew out of Newton's initial study of the perturbations of the Moon, which he also applied to the forces moving tidal waters. Nowadays this common pattern itself has become often known as a tidal force whether it is being applied to the disturbances of the motions of the Moon, or of the Earth's tidal waters – or of the motions of any other object that suffers perturbations of analogous pattern.
After introducing his diagram 'to find the force of the Sun to perturb the Moon' in Book 3, Proposition 25, Newton developed a first approximation to the solar perturbing force, showing in further detail how its components vary as the Moon follows its monthly path around the Earth. He also took the first steps in investigating how the perturbing force shows its effects by producing irregularities in the lunar motions. [ b ]
For a selected few of the lunar inequalities, Newton showed in some quantitative detail how they arise from the solar perturbing force.
Much of this lunar work of Newton's was done in the 1680s, and the extent and accuracy of his first steps in the gravitational analysis was limited by several factors, including his own choice to develop and present the work in what was, on the whole, a difficult geometrical way, and by the limited accuracy and uncertainty of many astronomical measurements in his time.
The main aim of Newton's successors, from Leonhard Euler , Alexis Clairaut and Jean d'Alembert in the mid-eighteenth century, down to Ernest William Brown in the late nineteenth and early twentieth century, was to account completely and much more precisely for the moon's motions on the basis of Newton's laws, i.e. the laws of motion and of universal gravitation by attractions inversely proportional to the squares of the distances between the attracting bodies. They also wished to put the inverse-square law of gravitation to the test, and for a time in the 1740s it was seriously doubted, on account of what was then thought to be a large discrepancy between the Newton-theoretical and the observed rates in the motion of the lunar apogee. However Clairaut showed shortly afterwards (1749–50) that at least the major cause of the discrepancy lay not in the lunar theory based on Newton's laws, but in excessive approximations that he and others had relied on to evaluate it.
Most of the improvements in theory after Newton were made in algebraic form: they involved voluminous and highly laborious amounts of infinitesimal calculus and trigonometry. It also remained necessary, for completing the theories of this period, to refer to observational measurements. [ 30 ] [ 31 ] [ 32 ] [ 33 ]
The lunar theorists used (and invented) many different mathematical approaches to analyse the gravitational problem. Not surprisingly, their results tended to converge. From the time of the earliest gravitational analysts among Newton's successors, Euler, Clairaut and d'Alembert, it was recognized that nearly all of the main lunar perturbations could be expressed in terms of just a few angular arguments and coefficients. These can be represented by: [ 33 ]
From these basic parameters, just four basic differential angular arguments are enough to express, in their different combinations, nearly all of the most significant perturbations of the lunar motions. They are given here with their conventional symbols due to Delaunay ; they are sometimes known as the Delaunay arguments: the Moon's mean elongation from the Sun (D), the Moon's mean anomaly (l), the Sun's mean anomaly (l'), and the Moon's mean argument of latitude (F), i.e. its mean angular distance from its ascending node.
This work culminated in Brown's lunar theory (1897–1908) [ 34 ] [ 35 ] [ 36 ] [ 37 ] [ 38 ] and Tables of the Motion of the Moon (1919). [ 32 ] These were used in the American Ephemeris and Nautical Almanac until 1968, and in a modified form until 1984.
Several of the largest lunar perturbations in longitude (contributions to the difference in its true ecliptic longitude relative to its mean longitude) have been named. In terms of the differential arguments, they can be expressed in the following way, with coefficients rounded to the nearest second of arc ("): [ 39 ]
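A minimal Python sketch evaluates the five best-known named terms as a trigonometric series in the Delaunay arguments; the coefficients are the conventional values associated with Brown's theory, rounded, and should be treated as illustrative:

```python
import math

# Named perturbations of the Moon's ecliptic longitude as a trigonometric
# series in the Delaunay arguments D, l, l' (F is not needed for these
# terms). Coefficients in arcseconds, rounded to the nearest arcsecond.
TERMS = [
    ("equation of the center", 22640.0, lambda D, l, lp: math.sin(l)),
    ("evection",                4586.0, lambda D, l, lp: math.sin(2*D - l)),
    ("variation",               2370.0, lambda D, l, lp: math.sin(2*D)),
    ("annual equation",         -669.0, lambda D, l, lp: math.sin(lp)),
    ("parallactic inequality",  -125.0, lambda D, l, lp: math.sin(D)),
]

def longitude_correction_arcsec(D: float, l: float, lp: float) -> float:
    """Sum of the named longitude terms (all arguments in radians)."""
    return sum(coeff * f(D, l, lp) for _, coeff, f in TERMS)

# Example: first-quarter Moon (D = 90 deg) with l = 45 deg, l' = 10 deg.
D, l, lp = map(math.radians, (90.0, 45.0, 10.0))
print(f"{longitude_correction_arcsec(D, l, lp):.0f} arcsec")  # ~19,000
```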
The analysts of the mid-18th century expressed the perturbations of the Moon's position in longitude using about 25-30 trigonometrical terms. However, work in the nineteenth and twentieth centuries led to very different formulations of the theory, so these terms are no longer current. The number of terms needed to express the Moon's position with the accuracy sought at the beginning of the twentieth century was over 1400; and the number of terms needed to emulate the accuracy of modern numerical integrations based on laser-ranging observations is in the tens of thousands: there is no limit to the increase in the number of terms needed as requirements of accuracy increase. [ 41 ]
Since the Second World War and especially since the 1960s, lunar theory has been further developed in a somewhat different way. This has been stimulated in two ways: on the one hand, by the use of automatic digital computation, and on the other hand, by modern observational data-types, with greatly increased accuracy and precision.
Wallace John Eckert , a student of Ernest William Brown and employee at IBM , used the experimental digital computers developed there after the Second World War for computation of astronomical ephemerides. One of the projects was to put Brown's lunar theory into the machine and evaluate the expressions directly. Another project was something entirely new: a numerical integration of the equations of motion for the Sun and the four major planets. This became feasible only after electronic digital computers became available. Eventually this led to the Jet Propulsion Laboratory Development Ephemeris series.
In the meantime, Brown's theory was improved with better constants, the introduction of Ephemeris Time , and the removal of some empirical corrections associated with it. This led to the Improved Lunar Ephemeris (ILE), [ 33 ] which, with some minor successive improvements, was used in the astronomical almanacs from 1960 through 1983 [ 42 ] [ c ] and enabled lunar landing missions .
The most significant improvement in position observations of the Moon has been the Lunar Laser Ranging measurements, obtained using Earth-bound lasers and special retroreflectors placed on the surface of the Moon. The time-of-flight of a pulse of laser light to one of the retroreflectors and back gives a measure of the Moon's distance at that time. The first of the five retroreflectors that are operational today was taken to the Moon in the Apollo 11 spacecraft in July 1969 and placed in a suitable position on the Moon's surface by Buzz Aldrin . [ 43 ] Range precision has been extended further by the Apache Point Observatory Lunar Laser-ranging Operation , established in 2005.
The lunar theory, as developed numerically to fine precision using these modern measures, is based on a larger range of considerations than the classical theories: It takes account not only of gravitational forces (with relativistic corrections) but also of many tidal and geophysical effects and a greatly extended theory of lunar libration . Like many other scientific fields this one has now developed so as to be based on the work of large teams and institutions. An institution notably taking one of the leading parts in these developments has been the Jet Propulsion Laboratory (JPL) at California Institute of Technology ; and names particularly associated with the transition, from the early 1970s onwards, from classical lunar theories and ephemerides towards the modern state of the science include those of J. Derral Mulholland and J.G. Williams, and for the linked development of solar system (planetary) ephemerides E. Myles Standish. [ 44 ]
Since the 1970s, JPL has produced a series of numerically integrated Development Ephemerides (numbered DExxx), incorporating Lunar Ephemerides (LExxx). Planetary and lunar ephemerides DE200/LE200 were used in the official Astronomical Almanac ephemerides for 1984–2002, and ephemerides DE405/LE405 , of further improved accuracy and precision, have been in use as from the issue for 2003. [ 45 ] The current ephemeris is DE440. [ 46 ]
In parallel with these developments, a new class of analytical lunar theory has also been developed in recent years, notably the Ephemeride Lunaire Parisienne [ 47 ] by Jean Chapront and Michelle Chapront-Touzé from the Bureau des Longitudes . Using computer-assisted algebra, the analytical developments have been taken further than previously could be done by the classical analysts working manually. Also, some of these new analytical theories (like ELP) have been fitted to the numerical ephemerides previously developed at JPL as mentioned above. The main aims of these recent analytical theories, in contrast to the aims of the classical theories of past centuries, have not been to generate improved positional data for current dates; rather, their aims have included the study of further aspects of the motion, such as long-term properties, which may not so easily be apparent from the modern numerical theories themselves. [ 48 ]
Among notable astronomers and mathematicians down the ages, whose names are associated with lunar theories, are:
Other notable mathematicians and mathematical astronomers also made significant contributions. | https://en.wikipedia.org/wiki/Lunar_theory |
Lundbeck Seattle Biopharmaceuticals is a pharmaceutical development company based in Bothell, Washington . Formerly known as Alder Biopharmaceuticals , it specializes in therapeutic monoclonal antibodies .
In May 2014, Alder went public. [ 1 ] In early 2018, the company made a public stock offering, aiming to raise US$250 million . [ 2 ] The company identifies, develops, and manufactures antibody therapeutics to alleviate human suffering in cancer, pain, cardiovascular, and autoimmune and inflammatory disease areas. [ 3 ]
As of September 2019, Alder Biopharmaceuticals shares had increased by 83% in price, following the company's acquisition by the Denmark-based H. Lundbeck , in a deal valued at $1.95 billion. [ 4 ] [ 5 ] The company changed its name to Lundbeck Seattle Biopharmaceuticals after the acquisition. [ 6 ]
| https://en.wikipedia.org/wiki/Lundbeck_Seattle_Biopharmaceuticals
Lung-on-a-chip (LoC), also known as a lung chip , is a micro- and millifluidic organ-on-a-chip device designed to replicate the structure and function of the human lung, mimicking the breathing motions and fluid dynamics that occur during inhalation and exhalation. [ 1 ] LoCs are considered among the most promising alternatives to animal testing .
Huh et al. developed the first polydimethylsiloxane (PDMS)-based microfluidic system for culturing primary diseased small airway epithelial cells at the air-liquid interface (ALI). Despite its simplicity, this system successfully replicated crackling sounds associated with mechanical injury in the airway lumen. [ 2 ]
The first LoC, published in the June 25, 2010, issue of Science , was developed by Dan Huh and Donald E. Ingber at the Wyss Institute using a microfabrication technique called soft lithography , which was pioneered by George M. Whitesides . A typical alveolus LoC comprises two microchannels, lined primarily with epithelial cells on the apical side and endothelial cells on the basal side. [ 3 ] Air is delivered to the lung lining cells, a culture medium flows in the capillary channel to mimic blood, and cyclic mechanical stretching is generated by a vacuum applied to chambers adjacent to the cell culture channels to mimic breathing. The device is made using human lung and blood vessel cells, and it can predict the absorption of airborne nanoparticles and mimic the inflammatory response triggered by microbial pathogens . It can be used to test the effects of environmental toxins, the absorption of aerosolized therapeutics, and the safety and efficacy of new drugs.
Since the introduction of LoCs in 2010, numerous advancements have been made to develop valid, functional, and clinically relevant models. [ 4 ]
The breathing movements in a typical LoC, such as the Wyss platform, occur in 2D rather than in the physiologically relevant three-dimensional (3D) format. Most organ-on-chip models, including LoCs, are made from PDMS, which has several limitations. [ 5 ] For example, the membrane of a two-compartment platform chip similar to the Wyss chip is at least 10-15 times thicker than its in vivo counterpart (the commercial Wyss chip has a membrane thickness of 50 μm according to its datasheet). [ 6 ] This increased thickness is significant because it impedes cross-talk between the two sides of the PDMS membrane.
The main issue with PDMS is its adsorption properties, which lead to unrealistic ADME and, consequently, inaccurate pharmacokinetics analysis. [ 5 ] [ 7 ] Other limitations of PDMS include biodegradation , leaching , cell delamination, and molecule absorption, all of which affect the accuracy and reliability of cell assays. [ 8 ] | https://en.wikipedia.org/wiki/Lung-on-a-chip |
A lung counter is a system consisting of a radiation detector , or detectors, and associated electronics that is used to measure radiation emitted from radioactive material that has been inhaled by a person and is sufficiently insoluble as to remain in the lung for weeks, months, or years. [ 1 ] They are frequently used in occupations where workers may be exposed to radiation. [ 2 ]
The lung counter may be placed on or near the body. [ 1 ] These systems are also often housed in a low background counting chamber . Such a chamber may have thick walls made of low-background steel (~20–25 cm thick), lined with lead , cadmium , tin , or polypropylene , and finished with a final layer of copper . [ 3 ] The purpose of the lead, cadmium (or tin), and copper is to reduce the background in the low energy region of a gamma spectrum (typically less than 200 keV). [ citation needed ]
As a lung counter primarily measures radioactive materials that emit low-energy gamma rays or x-rays , the phantom used to calibrate the system must be anthropomorphic . [ citation needed ] An example of such a phantom is the Lawrence Livermore National Laboratory Torso Phantom. [ 1 ] | https://en.wikipedia.org/wiki/Lung_counter
Lungscape is a translational research program designed, implemented and conducted by the European Thoracic Oncology Platform (ETOP) in collaboration with a series of leading hospitals and clinics across Europe and beyond. [ 1 ] [ 2 ]
The Lungscape program aims to address the challenges of studying the molecular epidemiology of lung cancer and to expedite the knowledge of current and evolving clinical and molecular biomarkers. [ 3 ] [ 4 ]
Lungscape coordinates and harmonizes procedures among a group of lung cancer specialists in order to allow the analysis of larger series of cases. The international collaborative effort provides a platform for molecular correlative studies and thus creates a basis for the development of clinical trials of novel therapeutics. [ 3 ] [ 5 ] [ 6 ]
The basis of Lungscape is a decentralized biobank with fully annotated tissue samples from resected stage I - III non-small-cell lung carcinoma (NSCLC). An electronic database (termed iBiobank) is used to store the anonymized comprehensive molecular and clinical data and to track biological material and derivatives thereof. Participating centers use a secure web-based application to enter data into this central database.
The virtual nature of iBiobank and the introduction of stringent standardized biomolecular assessments, a so-called external quality assurance (EQA) process to establish laboratory performance levels, [ 5 ] remove the need to transfer samples to a central location for evaluation.
The system captures detailed parameters like tumor stage, grade, histological subtype, precise surgical procedure as well as patient characteristics. [ 3 ]
The Lungscape master protocol defines the setting in which specific hypotheses will be investigated. It describes the mode of cooperation of the participating investigators, the selection of documentation of the NSCLC cohort, laboratory requirements as well as the regulatory framework. Specific protocol modules (subprojects) then formulate a hypothesis to be investigated in the framework of Lungscape. [ 3 ] [ 6 ] | https://en.wikipedia.org/wiki/Lungscape |
The luopan or geomantic compass is a Chinese magnetic compass, also known as a feng shui compass . It is used by a feng shui practitioner to determine the precise direction of a structure, place or item. A luopan is inscribed with extensive information and formulas relating to its various functions. The needle points towards the south magnetic pole .
Like a conventional compass, a luopan is a direction finder. However, a luopan differs from a compass in several important ways. The most obvious difference is the feng shui formulas embedded in up to 40 concentric rings on the surface. This is a metal or wooden plate known as the heaven dial . The circular metal or wooden plate typically sits on a wooden base known as the earth plate . The heaven dial rotates freely on the earth plate.
A red wire or thread that crosses the earth plate and heaven dial at 90-degree angles is the Heaven Center Cross Line , or Red Cross Grid Line . [ 1 ] This line is used to find the direction and note position on the rings.
A conventional compass has markings for four or eight directions, while a luopan typically contains markings for 24 directions . This translates to 15 degrees per direction. The Sun takes approximately 15.2 days to traverse each solar term , one of a series of 24 points on the ecliptic . Since there are 360 degrees on the luopan and approximately 365.25 days in a mean solar year, each degree on a luopan approximates a terrestrial day.
Unlike a typical compass, a luopan does not point to the north magnetic pole of Earth. The needle of a luopan points to the south magnetic pole (it does not point to the geographic South Pole ). The Chinese word for compass , 指南針 ( zhǐnánzhēn in Mandarin ), translates to “south-pointing needle.”
Since the Ming and Qing dynasties, three types of luopan have been popular. They have some formula rings in common, such as the 24 directions and the early and later heaven arrangements.
This luopan was said to have been used in the Tang dynasty . [ 2 ] The San He contains three basic 24-direction rings. Each ring relates to a different method and formula. (The techniques grouped under the name "Three Harmonies" are the San He methods.)
This luopan, also known as the jiang pan (after Jiang Da Hong) or the Yi Pan (because of the presence of Yijing hexagrams) [ 2 ] incorporates many formulas used in San Yuan (Three Cycles). It contains one 24-direction ring, known as the Earth Plate Correct Needle, the ring for the 64 hexagrams , and others. (The techniques grouped under the name "Flying Stars" are an example of San Yuan methods.)
This luopan combines rings from the San He and San Yuan. It contains three 24-direction-rings and the 64 trigrams ring.
Each feng shui master may design a luopan to suit preference and to offer students. Some designs incorporate the bagua (trigram) numbers, directions from the Eight Mansions ( 八宅 ; bāzhái ) methods, and English equivalents.
The luopan is an image of the cosmos (a world model) based on tortoise plastrons used in divination. [ 3 ] At its most basic level it serves as a means to assign proper positions in time and space, like the Ming Tang (Hall of Light). [ 4 ] The markings are similar to those on a liubo board.
The oldest precursors of the luopan are the 式 ; shì or 式盤 ; shìpán , meaning astrolabe or diviner's board —also sometimes called liuren astrolabes [ 5 ] —unearthed from tombs that date between 278 BCE and 209 BCE. These astrolabes consist of a lacquered, two-sided board with astronomical sightlines. Along with divination for Da Liu Ren , the boards were commonly used to chart the motion of Taiyi through the nine palaces. [ 6 ] [ 7 ] The markings are virtually unchanged from the shi to the first magnetic compasses. [ 5 ] The schematic of earth plate, heaven plate, and grid lines is part of the "two cords and four hooks" ( 二繩四鉤 ; èrshéngsìgōu ) geometrical diagram in use since at least the Warring States period. [ 5 ] The zhinan zhen , or south-pointing needle, is the original magnetic compass , and was developed for feng shui. [ 8 ] It featured the two cords and four hooks diagram, direction markers, and a magnetized spoon in the center. | https://en.wikipedia.org/wiki/Luopan |
Lupeol is a pharmacologically active pentacyclic triterpenoid . It has several potential medicinal properties, like anticancer and anti-inflammatory activity. [ 1 ]
Lupeol is found in a variety of plants, including mango , Acacia visco and Abronia villosa . [ 2 ] It is also found in dandelion coffee . Lupeol is present as a major component in Camellia japonica leaf. [ 1 ]
The first total synthesis of lupeol was reported by Gilbert Stork et al . [ 3 ]
In 2009, Surendra and Corey reported a more efficient and enantioselective total synthesis of lupeol, starting from ( 1E,5E )-8-[( 2S )-3,3-dimethyloxiran-2-yl]-2,6-dimethylocta-1,5-dienyl acetate by use of a polycyclization. [ 4 ]
Lupeol is produced by several organisms from squalene epoxide . Dammarane and baccharane skeletons are formed as intermediates. The reactions are catalyzed by the enzyme lupeol synthase . [ 5 ] A recent study on the metabolomics of Camellia japonica leaf revealed that lupeol is produced from squalene epoxide, with squalene playing the role of precursor. [ 1 ]
Lupeol has a complex pharmacology, displaying antiprotozoal , antimicrobial, antiinflammatory, antitumor and chemopreventive properties. [ 6 ]
Animal models suggest lupeol may act as an anti-inflammatory agent. A 1998 study found lupeol to decrease paw swelling in rats by 39%, compared to 35% for the standardized control compound indomethacin . [ 7 ]
One study has also found some activity as a dipeptidyl peptidase-4 inhibitor and prolyl oligopeptidase inhibitor at high concentrations (in the millimolar range). [ 8 ]
It is an effective inhibitor in laboratory models of prostate and skin cancers . [ 9 ] [ 10 ] [ 11 ]
As an anti-inflammatory agent, lupeol functions primarily on the interleukin system. Lupeol decreases interleukin 4 (IL-4) production by T-helper type 2 cells . [ 6 ] [ 12 ]
Lupeol has been found to have a contraceptive effect due to its inhibiting effect on the calcium channel of sperm ( CatSper ). [ 13 ]
Lupeol has also been shown to exert anti-angiogenic and anti-cancer effects via the downregulation of TNF-alpha and VEGFR-2 . [ 14 ]
The leaves of Camellia japonica contain lupeol. [ 1 ] | https://en.wikipedia.org/wiki/Lupeol |
The Lurgi–Ruhrgas process is an above-ground coal liquefaction and shale oil extraction technology. It is classified as a hot recycled solids technology. [ 1 ]
The Lurgi–Ruhrgas process was originally invented in the 1940s and further developed in the 1950s for low-temperature liquefaction of lignite (brown coal). [ 2 ] [ 3 ] The technology is named after its developers Lurgi Gesellschaft für Wärmetechnik G.m.b.H. and Ruhrgas AG . Over time, the process was used for coal processing in Japan, Germany, the United Kingdom, Argentina, and the former Yugoslavia. The plant in Japan was also used to crack petroleum oils to olefins . [ 2 ]
In 1947–1949, the Lurgi–Ruhrgas process was used in Germany for shale oil production. In Lukavac , Bosnia and Herzegovina , two retorts for liquefaction of lignite were in operation from 1963 to 1968. The capacity of the plant was 850 tons of lignite per day. Initially, two Lurgi-Ruhrgas plants were built and operated in the U.K.: the first to be opened (by Queen Elizabeth II in 1963) was the Westfield plant in Fife, Scotland, which was operated by the Scottish Gas Board. The second plant, also opened in 1963, was sited near Coleshill in the West Midlands and was operated by the West Midlands Gas Board. [ 4 ] A third, smaller plant was sited in Lincolnshire , United Kingdom, but only operated during 1978–1979 with a maximum capacity of 900 tons of coal per day.
In the late 1960s and early 1970s, oil shales from different European countries and from the Green River Formation of Colorado in the United States were tested at Lurgi's pilot plant in Frankfurt . [ 2 ] [ 5 ] [ 6 ] In the United States, the technology was promoted in cooperation with Dravo Corporation . In the 1970s, the technology was licensed to the Rio Blanco Shale Oil Project for construction of a modular retort in combination with the modified in situ process. [ 2 ] However, this plan was terminated.
In 1974, the Westfield plant in Scotland was converted to a new Lurgi 'slagging gasifier' system developed jointly by British Gas and the Lurgi company. Whilst this British Gas-Lurgi process was never used commercially in the U.K., similar designs are now being built and operated in China. [ 4 ] This converted plant finally ceased gas production in early 1998, with the site lying dormant until a demolition order was given in 2014.
In 1980, the Natural Resources Authority of Jordan commissioned from the Klöckner - Lurgi consortium a pre-feasibility study of construction of an oil shale retorting complex in Jordan using the Lurgi–Ruhrgas process. However, although the study found the technology feasible, it was never implemented. [ 7 ]
The Lurgi–Ruhrgas process is a hot recycled solids technology, which processes fine particles of coal or oil shale sized 0.25 to 0.5 inches (6.4 to 12.7 mm). As a heat carrier, it uses spent char or spent oil shale (oil shale ash), mixed with sand or other more durable materials. [ 3 ] [ 8 ] In this process, crushed coal or oil shale is fed into the top of the retort. [ 9 ] In the retort, the coal or oil shale is mixed with char or spent oil shale particles heated to 550 °C (1,020 °F) in a mechanical mixer ( screw conveyor ). [ 8 ] [ 10 ] The heat is transferred from the heated char or spent oil shale to the coal or raw oil shale, causing pyrolysis. As a result, oil shale decomposes into shale oil vapors, oil shale gas and spent oil shale. [ 2 ] The oil vapor and product gases pass through a hot cyclone for cleaning before being sent to a condenser . In the condenser, shale oil is separated from the product gases. [ 3 ] [ 8 ]
The spent oil shale, still including residual carbon ( char ), is burnt at a lift pipe combustor to heat the process. [ 6 ] [ 8 ] If necessary, additional fuel oil is used for combustion. [ 8 ] During the combustion process, heated solid particles in the pipe are moved to the surge bin by pre-heated air that is introduced from the bottom of the pipe. At the surge bin, solids and gases are separated, and solid particles are transferred to the mixer unit to conduct the pyrolysis of the raw oil shale. [ 11 ]
One of the disadvantages of this technology is that the produced shale oil vapors are mixed with shale ash, introducing impurities into the shale oil. Ensuring the quality of the produced shale oil is complicated because, compared with other mineral dusts, the shale ash is more difficult to collect. [ 2 ] | https://en.wikipedia.org/wiki/Lurgi–Ruhrgas_process |
The Luria–Delbrück experiment (1943) (also called the Fluctuation Test ) demonstrated that in bacteria , genetic mutations arise in the absence of selective pressure rather than being a response to it. Thus, it concluded that Darwin 's theory of natural selection acting on random mutations applies to bacteria as well as to more complex organisms. Max Delbrück and Salvador Luria won the 1969 Nobel Prize in Physiology or Medicine in part for this work.
Suppose a single bacterium is introduced into a growth medium with rich nutrients and allowed to grow for N {\displaystyle N} of its doubling times; we would then obtain 2 N {\displaystyle 2^{N}} offspring. Next, we introduce a challenge by bacteriophages. This kills off most bacteria, but leaves some alive. We can then smear the surviving culture over a new growth medium, and count the number of colonies as the number of survivors.
In the Lamarckian scenario, each bacterium faces the challenge alone. Most would perish, but a few would survive the ordeal and found a new colony. In the Darwinian scenario, resistance to the phage would randomly occur during the replication. Those that inherited the resistance would survive, while those that did not would die.
In the Lamarckian scenario, assuming each bacterium has an equally small probability of survival, the number of new colonies is Poisson distributed, a distribution whose tail decays exponentially at large numbers of survivors.
In the Darwinian scenario, assuming that the probability of mutation is small enough that we expect only a single mutation during the entire replication phase, and that, for simplicity, we really do get just a single mutation, then with probability 1 / 2 {\displaystyle 1/2} there is a single survivor, with probability 1 / 4 {\displaystyle 1/4} there are 2 survivors, etc. That is, the probability scales as 1 number of survivors {\displaystyle {\frac {1}{\text{number of survivors}}}} .
In particular, if the distribution of the survivor number turns out to decay more like a power law than like an exponential, then we can conclude with high statistical likelihood that the Darwinian scenario is true. This is a rough overview of the Luria–Delbrück experiment. (Section 4.4 [ 1 ] )
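The distinction between the two scenarios can be illustrated with a small simulation. The sketch below (in Python; the mutation and survival probabilities, the number of cultures and the number of generations are hypothetical values chosen for illustration) grows parallel cultures under each scenario and compares the spread of survivor counts:

```python
import numpy as np

rng = np.random.default_rng(0)

def darwinian_culture(generations=20, mu=1e-7):
    """Resistance mutations arise at random during growth, so a mutant
    arising early founds a large resistant clone (a "jackpot")."""
    mutants, normals = 0, 1
    for _ in range(generations):
        # every cell divides; each sensitive daughter may mutate
        new_mutations = rng.binomial(2 * normals, mu)
        mutants = 2 * mutants + new_mutations
        normals = 2 * normals - new_mutations
    return mutants

def lamarckian_culture(generations=20, p=2e-6):
    """Resistance is acquired only at the moment of challenge, independently
    by each cell, so survivor counts are Poisson distributed."""
    return rng.binomial(2 ** generations, p)

darwin = [darwinian_culture() for _ in range(1000)]
lamarck = [lamarckian_culture() for _ in range(1000)]
# Lamarckian counts cluster near their mean (variance close to the mean);
# Darwinian counts show a heavy tail, with variance far exceeding the mean.
print("Darwinian : mean %.2f, var %.1f" % (np.mean(darwin), np.var(darwin)))
print("Lamarckian: mean %.2f, var %.1f" % (np.mean(lamarck), np.var(lamarck)))
```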
By the 1940s the ideas of inheritance and mutation were generally accepted, though the role of DNA as the hereditary material had not yet been established. It was thought that bacteria were somehow different and could develop heritable genetic mutations depending on the circumstances in which they found themselves: in short, was the mutation in bacteria pre-adaptive (pre-existent) or post-adaptive (directed adaptation)? [ 2 ]
In their experiment, Luria and Delbrück inoculated a small number of bacteria ( Escherichia coli ) into separate culture tubes. After a period of growth, they plated equal volumes of these separate cultures onto agar containing the T1 phage (virus). If resistance to the virus in bacteria were caused by an induced activation in bacteria i.e. if resistance were not due to heritable genetic components, then each plate should contain roughly the same number of resistant colonies.
Assuming a constant rate of mutation, Luria hypothesized that if mutations occurred after and in response to exposure to the selective agent, the number of survivors would be distributed according to a Poisson distribution with the mean equal to the variance . This was not what Delbrück and Luria found: Instead the number of resistant colonies on each plate varied drastically: the variance was considerably greater than the mean.
Luria and Delbrück proposed that these results could be explained by the occurrence of a constant rate of random mutations in each generation of bacteria growing in the initial culture tubes. Based on these assumptions Delbrück derived a probability distribution (now called the Luria–Delbrück distribution [ 3 ] [ 4 ] ) that gives a relationship between moments consistent with the experimentally obtained values. Therefore, the conclusion was that mutations in bacteria, as in other organisms, are random rather than directed. [ 5 ]
The results of Luria and Delbrück were confirmed in a more graphical, but less quantitative, way by Newcombe. Newcombe incubated bacteria in a Petri dish for a few hours, then replica plated the culture onto two new Petri dishes treated with phage. The first plate was left unspread, and the second plate was then respread, that is, bacterial cells were moved around, allowing single cells in some colony to form their own new colonies. If colonies contained resistant bacterial cells before coming into contact with the phage virus, one would expect some of these cells to form new resistant colonies on the respread dish, and so to find a higher number of surviving bacteria there. When both plates were incubated for growth, there were in fact as many as 50 times more bacterial colonies on the respread dish. This showed that bacterial mutations to virus resistance had occurred randomly during the first incubation. Once again, the mutations occurred before selection was applied. [ 6 ]
More recently, the results of Luria and Delbrück were questioned by Cairns and others, who studied mutations in sugar metabolism as a form of environmental stress. [ 7 ] Some scientists suggest that this result may have been caused by selection for gene amplification and/or a higher mutation rate in cells unable to divide. [ 8 ] Others have defended the research and propose mechanisms which account for the observed phenomena consistent with adaptive mutagenesis . [ 9 ]
This distribution appears to have been first determined by Haldane . [ 10 ] An unpublished manuscript describing this distribution was discovered in 1991 at University College London . Haldane's derivation differs from that of Luria and Delbrück, but in both cases the results are difficult to compute without the use of a computer.
A small number of cells are used to inoculate parallel cultures in a non-selective medium. [ 11 ] The cultures are grown to saturation to obtain equal cell densities. The cells are plated onto selective media to obtain the number of mutants ( r ). Dilutions are plated onto rich medium to calculate the total number of viable cells ( N t ). The number of mutants that appear in the saturated culture is a measure of both the mutation rate and when the mutants arise during the growth of the culture: mutants appearing early in the growth of the culture will propagate many more mutants than those that arise later during growth. These factors cause the frequency ( r / N t ) to vary greatly, even if the number of mutational events ( m ) is the same. Frequency is not a sufficiently accurate measure of mutation and the mutation rate ( m / N t ) should always be calculated.
The estimation of the mutation rate ( μ ) is complex. Luria and Delbrück estimated this parameter from the mean of the distribution, but this estimator was subsequently shown to be biased.
The Lea–Coulson method of the median was introduced in 1949. [ 12 ] This method is based on the equation r ~ m − ln ⁡ m = 1.24 , {\displaystyle {\frac {\tilde {r}}{m}}-\ln m=1.24,} where r ~ {\displaystyle {\tilde {r}}} is the median of the observed numbers of mutants; the equation is solved numerically for m .
This method has since been improved on but these more accurate methods are complex. The Ma–Sandri–Sarkar maximum likelihood estimator is currently the best known estimator . [ 14 ] A number of additional methods and estimates from experimental data have been described. [ 15 ]
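For the Lea–Coulson formulation, the probabilities of observing r mutants can be generated with the recursion of Ma, Sandri and Sarkar, p 0 = e − m and p r = ( m / r ) ∑ i = 0 r − 1 p i / ( r − i + 1 ) , which makes a simple grid-search maximum-likelihood estimate possible. A minimal sketch in Python (the mutant counts are hypothetical data, not from the original experiment):

```python
import math

def ld_probabilities(m, r_max):
    """Luria-Delbruck probabilities p_0 .. p_{r_max} for an expected
    number of mutations m (Ma-Sandri-Sarkar recursion)."""
    p = [math.exp(-m)]
    for r in range(1, r_max + 1):
        p.append((m / r) * sum(p[i] / (r - i + 1) for i in range(r)))
    return p

def log_likelihood(m, counts):
    p = ld_probabilities(m, max(counts))
    return sum(math.log(p[r]) for r in counts)

counts = [0, 0, 1, 0, 3, 0, 0, 27, 1, 0, 2, 0]   # hypothetical data
m_grid = [i / 100 for i in range(1, 301)]
m_hat = max(m_grid, key=lambda m: log_likelihood(m, counts))
print("estimated m =", m_hat)   # the mutation rate is then m_hat / N_t
```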
Two web-applications for the calculation of the mutation rate are freely available: Falcor [ 11 ] and bz-rates . Bz-rates implements a generalized version of the Ma–Sandri–Sarkar maximum likelihood estimator that can take into account the relative differential growth rate between mutant and wild-type cells as well as a generating function estimator that can estimate both the mutation rate and the differential growth rate. A worked example is shown in this paper by Jones et al . [ 16 ]
In all these models the mutation rate ( μ ) and growth rate ( β ) were assumed to be constant. The model can be easily generalized to relax these and other constraints. [ 17 ] These rates are likely to differ in non-experimental settings. The models also require that N t μ ≫ 1 where N t is the total number of organisms. This assumption is likely to hold in most realistic or experimental settings.
Luria and Delbrück [ 5 ] estimated the mutation rate (mutations per bacterium per unit time) from the equation − ln ⁡ P 0 = μ β n 0 ( e β t − 1 ) , {\displaystyle -\ln P_{0}={\frac {\mu }{\beta }}\,n_{0}\left(e^{\beta t}-1\right),}
where β is the cellular growth rate, n 0 is the initial number of bacteria in each culture, t is the time, and P 0 = N s N , {\displaystyle P_{0}={\frac {N_{s}}{N}},}
where N s is the number of cultures without resistant bacteria and N is the total number of cultures.
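A worked example of this zero-class ( P 0 ) method, with hypothetical numbers, in Python; the growth rate, inoculum size and elapsed time are assumed values:

```python
import math

# Hypothetical fluctuation-test data: 11 of 20 cultures had no mutants.
N_s, N = 11, 20
P0 = N_s / N
m = -math.log(P0)             # expected number of mutations per culture

beta, n0, t = 1.0, 1.0, 20.0  # assumed growth rate, inoculum and time
mu = m * beta / (n0 * (math.exp(beta * t) - 1))
print("m  =", round(m, 3))    # about 0.598
print("mu =", mu)             # mutations per bacterium per unit time
```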
Lea and Coulson's model [ 12 ] differed from the original in that they considered a collection of independent Yule processes (a filtered Poisson process ). Numerical comparisons of these two models with realistic values of the parameters has shown that they differ only slightly. [ 18 ] The generating function for this model was found by Bartlett in 1978 [ 19 ] and is
where μ is the mutation rate (assumed to be constant), φ = 1 − e − βt with β as the cellular growth rate (also assumed to be constant) and t as the time.
The determination of μ from this equation has proved difficult but a solution was discovered in 2005 [ citation needed ] . Differentiation of the generating function with respect to μ allows the application of the Newton–Raphson method which together with the use of a score function allows one to obtain confidence intervals for μ .
The mechanism of resistance to the phage T1 appears to have been due to mutations in the fhuA gene, which encodes a membrane protein that acts as the T1 receptor. [ 20 ] The tonB gene product is also required for infection by T1. The FhuA protein is actively involved in the transport of ferrichrome , albomycin and rifamycin . [ 21 ] It also confers sensitivity to microcin J25 and colicin M and acts as a receptor for the phages T5 and phi80 as well as T1.
The FhuA protein has a beta-barrel domain (residues 161 to 714) that is closed by a globular cork domain (residues 1 to 160). [ 22 ] Within the cork domain is the TonB binding region (residues 7 to 11). The large membrane spanning monomeric β-barrel domains have 22 β-strands of variable length, several of which extend significantly beyond the membrane hydrophobic core into the extracellular space. There are 11 extracellular loops numbered L1 to L11. The L4 loop is where the T1 phage binds. | https://en.wikipedia.org/wiki/Luria–Delbrück_experiment |
In descriptive set theory and mathematical logic , Lusin's separation theorem states that if A and B are disjoint analytic subsets of a Polish space , then there is a Borel set C in the space such that A ⊆ C and B ∩ C = ∅. [ 1 ] It is named after Nikolai Luzin , who proved it in 1927. [ 2 ]
The theorem can be generalized to show that for each sequence ( A n ) of disjoint analytic sets there is a sequence ( B n ) of disjoint Borel sets such that A n ⊆ B n for each n . [ 1 ]
An immediate consequence is Suslin's theorem , which states that if a set and its complement are both analytic, then the set is Borel.
| https://en.wikipedia.org/wiki/Lusin's_separation_theorem |
In mathematical analysis , Lusin's theorem (or Luzin's theorem , named for Nikolai Luzin ) or Lusin's criterion states that an almost-everywhere finite function is measurable if and only if it is a continuous function on nearly all its domain. In the informal formulation of J. E. Littlewood , "every measurable function is nearly continuous".
For an interval [ a , b ], let f : [ a , b ] → C {\displaystyle f:[a,b]\rightarrow \mathbb {C} }
be a measurable function. Then, for every ε > 0, there exists a compact E ⊆ [ a , b ] such that f restricted to E is continuous and m ( E ) > b − a − ε . {\displaystyle m(E)>b-a-\varepsilon .}
Note that E inherits the subspace topology from [ a , b ]; continuity of f restricted to E is defined using this topology.
Also for any function f , defined on the interval [ a, b ] and almost-everywhere finite, if for any ε > 0 there is a function ϕ , continuous on [ a, b ], such that the measure of the set { x ∈ [ a , b ] : f ( x ) ≠ ϕ ( x ) } {\displaystyle \{x\in [a,b]:f(x)\neq \phi (x)\}}
is less than ε , then f is measurable. [ 1 ]
Let ( X , Σ , μ ) {\displaystyle (X,\Sigma ,\mu )} be a Radon measure space and Y be a second-countable topological space equipped with a Borel algebra , and let f : X → Y {\displaystyle f:X\rightarrow Y} be a measurable function. Given ε > 0 {\displaystyle \varepsilon >0} , for every A ∈ Σ {\displaystyle A\in \Sigma } of finite measure there is a closed set E {\displaystyle E} with μ ( A ∖ E ) < ε {\displaystyle \mu (A\setminus E)<\varepsilon } such that f {\displaystyle f} restricted to E {\displaystyle E} is continuous. If A {\displaystyle A} is locally compact and Y = R d {\displaystyle Y=\mathbb {R} ^{d}} , we can choose E {\displaystyle E} to be compact and even find a continuous function f ε : X → R d {\displaystyle f_{\varepsilon }:X\rightarrow \mathbb {R} ^{d}} with compact support that coincides with f {\displaystyle f} on E {\displaystyle E} and such that sup x ∈ X | f ε ( x ) | ≤ sup x ∈ X | f ( x ) | . {\displaystyle \sup _{x\in X}|f_{\varepsilon }(x)|\leq \sup _{x\in X}|f(x)|.}
Informally, measurable functions into spaces with countable base can be approximated by continuous functions on an arbitrarily large portion of their domain.
The proof of Lusin's theorem can be found in many classical books. Intuitively, one expects it as a consequence of Egorov's theorem and density of smooth functions. Egorov's theorem states that pointwise convergence is nearly uniform, and uniform convergence preserves continuity.
The strength of Lusin's theorem might not be readily apparent, as can be demonstrated by example. Consider the Dirichlet function , that is, the indicator function 1 Q : [ 0 , 1 ] → { 0 , 1 } {\displaystyle 1_{\mathbb {Q} }:[0,1]\to \{0,1\}} on the unit interval [ 0 , 1 ] {\displaystyle [0,1]} taking the value of one on the rationals, and zero otherwise. Since the rationals form a set of measure zero, the function is equal to zero almost everywhere; but how can one find regions on which it is continuous, given that the rationals are dense in the reals? The requirements for Lusin's theorem can be satisfied with the following construction of a set E . {\displaystyle E.}
Let { x n ; n = 1 , 2 , … } {\displaystyle \{x_{n};n=1,2,\dots \}} be any enumeration of Q {\displaystyle \mathbb {Q} } . Set G n = ( x n − ε / 2 n , x n + ε / 2 n ) {\displaystyle G_{n}=\left(x_{n}-\varepsilon /2^{n},\;x_{n}+\varepsilon /2^{n}\right)}
and E = [ 0 , 1 ] ∖ ⋃ n = 1 ∞ G n . {\displaystyle E=[0,1]\setminus \bigcup _{n=1}^{\infty }G_{n}.}
Then the sequence of open sets G n {\displaystyle G_{n}} "knocks out" all of the rationals, leaving behind a compact, closed set E {\displaystyle E} which contains no rationals, and has a measure of more than 1 − 2 ε {\displaystyle 1-2\varepsilon } .
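The measure estimate behind this claim follows from countable subadditivity: m ( ⋃ n = 1 ∞ G n ) ≤ ∑ n = 1 ∞ m ( G n ) = ∑ n = 1 ∞ 2 ε 2 n = 2 ε , {\displaystyle m\left(\bigcup _{n=1}^{\infty }G_{n}\right)\leq \sum _{n=1}^{\infty }m(G_{n})=\sum _{n=1}^{\infty }{\frac {2\varepsilon }{2^{n}}}=2\varepsilon ,} so that m ( E ) ≥ 1 − 2 ε {\displaystyle m(E)\geq 1-2\varepsilon } . Moreover, 1 Q {\displaystyle 1_{\mathbb {Q} }} vanishes identically on E {\displaystyle E} , so its restriction to E {\displaystyle E} is continuous.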
| https://en.wikipedia.org/wiki/Lusin's_theorem |
Lusser's law in systems engineering is a prediction of reliability . Named after engineer Robert Lusser , [ 1 ] and also known as Lusser's product law or the probability product law of series components , it states that the reliability of a series of components is equal to the product of the individual reliabilities of the components, if their failure modes are known to be statistically independent . For a series of N components, this is expressed as: [ 2 ] [ 3 ] R S = r 1 × r 2 × ⋯ × r N = ∏ n = 1 N r n , {\displaystyle R_{S}=r_{1}\times r_{2}\times \cdots \times r_{N}=\prod _{n=1}^{N}r_{n},}
where R s is the overall reliability of the system, and r n is the reliability of the n th component.
If the failure probabilities of all components are equal, then as Lusser's colleague Erich Pieruschka observed, this can be expressed simply as: [ 2 ] R S = r N , {\displaystyle R_{S}=r^{N},} where r is the common reliability of each component.
Lusser's law has been described as the idea that a series system is "weaker than its weakest link", as the product reliability of a series of components can be less than the lowest-value component. [ 4 ]
For example, given a series system of two components with different reliabilities — one of 0.95 and the other of 0.8 — Lusser's law will predict a reliability of R S = 0.95 × 0.8 = 0.76 , {\displaystyle R_{S}=0.95\times 0.8=0.76,}
which is lower than either of the individual components.
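A minimal sketch of the product law in Python; the two-component values are those of the example above, while the ten-component system is a hypothetical illustration:

```python
from math import prod

def series_reliability(reliabilities):
    """Lusser's product law for statistically independent components in series."""
    return prod(reliabilities)

print(series_reliability([0.95, 0.8]))            # 0.76, below both components
print(round(series_reliability([0.99] * 10), 4))  # ten 99%-reliable parts: 0.9044
```

| https://en.wikipedia.org/wiki/Lusser's_law |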
Lute (from Latin Lutum , meaning mud, clay etc.) [ 1 ] was a substance used to seal and affix apparatus employed in chemistry and alchemy , and to protect component vessels against heat damage by fire; it was also used to line furnaces . Lutation was thus the act of "cementing vessels with lute".
In pottery , luting is a technique for joining pieces of unfired leather-hard clay together, using a wet clay slip or slurry as adhesive. The complete object is then fired. Large objects are often built up in this way, for example the figures of the Terracotta Army in ancient China. The edges being joined might be scored or cross-hatched to promote adhesion, but clay and water are the only materials used.
Lute was commonly used in distillation , which required airtight vessels and connectors to ensure that no vapours were lost; thus it was employed by chemists and alchemists , the latter being known to refer to it as " lutum sapientiae " or the " lute of Wisdom ". [ 2 ]
The earthen and glass vessels commonly employed in these processes were very vulnerable to cracking, both on heating and on cooling; one way of protecting them was by coating the vessels with lute and allowing it to set. One mixture for this purpose included "fat earth" (terra pinguis), Windsor loam , sand, iron filings or powdered glass , and cow's hair. [ 3 ]
Another use for lute was to act as a safety valve , preventing the buildup of vapour pressure from shattering a vessel and possibly causing an explosion. For this purpose, a hole was bored in the flask and covered with luting material of a particular composition, which was kept soft so that excessive buildup of vapour would cause it to come away from the vessel, thus releasing the pressure safely. This process could also be performed manually by the operator removing and reaffixing the lute as required. Lute was also used to effect repairs to cracked glass vessels. [ 3 ] In The Alchemist’s Experiment Takes Fire , 1687, one alembic is exploding; the luting used to seal a receiving bottle to another alembic can be seen behind the alchemist's upraised arm.
Lute was frequently applied to the joints between vessels (such as retorts and receivers), making them airtight and preventing vapour from escaping; this was especially important for more penetrating "spiritous" vapours and required a mixture that would set hard, such as a mix of quicklime and either egg white or size. However, a stronger lute had to be used to confine acid vapours, and for this purpose fat earth [ 4 ] and linseed oil were mixed to form " fat lute ", which could be rolled into cylinders of convenient size, ready for use. [ 5 ] Where the vapour was more "aqueous", and less penetrating, strips of paper affixed with sizing, or "bladder long steeped in water", would suffice. [ 3 ]
Another related use for lute was for lining furnaces, and was described as far back as the 16th century by Georg Agricola in his " De re metallica ". [ 6 ]
Fat Lute was made of clay mixed with oil and beaten until it had the consistency of putty . It could be stored in a sealed earthenware vessel, which retained moisture and kept the material pliable. [ 7 ] An alchemical writer of the 16th century recommended a lute made up of "loam mixed to a compost with horse dung " [ 8 ] while the French chemist Chaptal used a similar mixture of "fat earth" and horse dung, mixed in water and formed into a soft paste. [ 9 ]
Linseed meal or almond meal could be made into a lute by mixing with water or dissolved starch or weak glue, and used in combination with strips of rag or moistened bladder ; however, it was combustible, which limited its range of applications.
Lime could be made into an effective lute by mixing it with egg white or glue; for sealing joints it was used in conjunction with strips of rag.
Linen rags mixed with paste, or strips of Bladder soaked in warm water, then coated with paste or egg white, also served as a lute. [ 7 ]
Fire Lute was used to protect vessels from heat damage. It consisted of clay mixed with sand and either horse-hair or straw or tow (coarse, broken fibre of crops such as flax , hemp , or jute ). It had to be allowed to dry thoroughly before use to be effective. [ 7 ]
Fusible lute was used to coat earthenware vessels to ensure impermeability. A mixture of Borax and slaked lime , mixed with water into a fine paste, served this purpose. [ 7 ]
Parker's Cement , Plaster of Paris and Fusible fluxes (a clay and Borax mixture in 10:1 proportion, mixed to a paste in water) could all be used as lutes, rendering heat protection and air-tightness. Stourbridge clay mixed with water could withstand the highest heat of any lute. [ 7 ]
Hard cement was also commonly used to join glass vessels and fix cracks; it was composed of resin , beeswax and either brick dust or "bole earth", or red ochre or venetian red . Soft cement , made of yellow wax , turpentine and venetian red, was also used for repair. [ 7 ] | https://en.wikipedia.org/wiki/Lute_(material) |
The lute of Pythagoras is a self-similar geometric figure made from a sequence of pentagrams .
The lute may be drawn from a sequence of pentagrams .
The centers of the pentagrams lie on a line and (except for the first and largest of them) each shares two vertices with the next larger one in the sequence. [ 1 ] [ 2 ]
An alternative construction is based on the golden triangle , an isosceles triangle with base angles of 72° and apex angle 36°. Two smaller copies of the same triangle may be drawn inside the given triangle, having the base of the triangle as one of their sides. The two new edges of these two smaller triangles, together with the base of the original golden triangle, form three of the five edges of the polygon. Adding a segment between the endpoints of these two new edges cuts off a smaller golden triangle, within which the construction can be repeated. [ 3 ] [ 4 ]
Some sources add another pentagram, inscribed within the inner pentagon of the largest pentagram of the figure. The other pentagons of the figure do not have inscribed pentagrams. [ 3 ] [ 4 ] [ 5 ]
The convex hull of the lute is a kite shape with three 108° angles and one 36° angle. [ 2 ] The sizes of any two consecutive pentagrams in the sequence are in the golden ratio to each other, and many other instances of the golden ratio appear within the lute. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ]
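The golden-ratio relations underlying the figure can be checked numerically. The following sketch (in Python; the number of pentagrams listed is an arbitrary choice) verifies that the leg-to-base ratio of the golden triangle equals the golden ratio, and lists the relative sizes of consecutive pentagrams:

```python
import math

PHI = (1 + math.sqrt(5)) / 2       # golden ratio, approximately 1.6180

# In a golden triangle (apex angle 36°, base angles 72°) the base satisfies
# base = 2 * leg * sin(18°), so the leg-to-base ratio is 1 / (2 sin 18°):
leg_over_base = 1 / (2 * math.sin(math.radians(18)))
print(leg_over_base, PHI)          # both print 1.618033988749895

# Relative sizes of consecutive pentagrams, each smaller by a factor 1/PHI:
print([round(PHI ** -k, 4) for k in range(6)])
```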
The lute is named after the ancient Greek mathematician Pythagoras , but its origins are unclear. [ 3 ] An early reference to it is in a 1990 book on the golden ratio by Boles and Newman. [ 6 ] | https://en.wikipedia.org/wiki/Lute_of_Pythagoras |
Identifiers: PDB 7FIG , 7FIH , 7FII , 7FIJ ; Entrez 3973 (human), 16867 (mouse); Ensembl ENSG00000138039 (human), ENSMUSG00000024107 (mouse); UniProt P22888 (human), P30730 (mouse); RefSeq (mRNA) NM_000233 (human), NM_013582 and NM_001364898 (mouse); RefSeq (protein) NP_000224 (human), NP_038610 and NP_001351827 (mouse).
The luteinizing hormone/choriogonadotropin receptor ( LHCGR ), also lutropin/choriogonadotropin receptor ( LCGR ) or luteinizing hormone receptor ( LHR ), is a transmembrane receptor found predominantly in the ovary and testis , but also many extragonadal organs such as the uterus and breasts . The receptor interacts with both luteinizing hormone (LH) and chorionic gonadotropins (such as hCG in humans) and represents a G protein-coupled receptor (GPCR). Its activation is necessary for the hormonal functioning during reproduction.
The gene for the LHCGR is found on chromosome 2 p21 in humans, close to the FSH receptor gene. It consists of 70 kbp (versus 54 kbp for the FSHR). [ 5 ] The gene is similar to the gene for the FSH receptor and the TSH receptor.
The LHCGR consists of 674 amino acids and has a molecular mass of about 85–95 kDa based on the extent of glycosylation. [ 6 ]
Like other GPCRs, the LHCG receptor possesses seven membrane-spanning domains or transmembrane helices . [ 7 ] The extracellular domain of the receptor is heavily glycosylated . These transmembrane domains contain two highly conserved cysteine residues, which form disulfide bonds to stabilize the receptor structure. The transmembrane part is highly homologous with other members of the rhodopsin family of GPCRs. [ 8 ] The C-terminal domain is intracellular and brief, rich in serine and threonine residues for possible phosphorylation .
Upon binding of LH to the external part of the membrane spanning receptor, a transduction of the signal takes place. This process results in the activation of a heterotrimeric G protein . Binding of LH to the receptor shifts its conformation . The activated receptor promotes the binding of GTP to the G protein and its subsequent activation. After binding GTP, the G protein heterotrimer detaches from the receptor and disassembles. The alpha-subunit Gs binds adenylate cyclase and activates the cAMP system. [ 9 ]
It is believed that a receptor molecule exists in a conformational equilibrium between active and inactive states. The binding of LH (or CG) to the receptor shifts the equilibrium towards the active form of the receptor. For a cell to respond to LH only a small percentage (≈1%) of receptor sites need to be activated.
Cyclic AMP-dependent protein kinases ( protein kinase A ) are activated by the signal cascade originated by the activation of the G protein Gs by the LHCG-receptor. Activated Gs binds the enzyme adenylate cyclase and this leads to the production of cyclic AMP (cAMP). Cyclic AMP-dependent protein kinases are present as tetramers with two regulatory subunits and two catalytic subunits. Upon binding of cAMP to the regulatory subunits, the catalytic units are released and initiate the phosphorylation of proteins leading to the physiologic action. Cyclic AMP is degraded by phosphodiesterase , releasing 5′-AMP. One of the targets of protein kinase A is the Cyclic AMP Response Element Binding Protein, CREB , which binds DNA in the cell nucleus via direct interactions with specific DNA sequences called cyclic AMP response elements (CRE); this process results in the activation or inactivation of gene transcription . [ 5 ]
The signal is amplified by the involvement of cAMP and the resulting phosphorylation. The process is modified by prostaglandins . Other cellular regulators that participate are the intracellular calcium concentration regulated by phospholipase C activation, nitric oxide , and other growth factors.
Other pathways of signaling exist for the LHCGR. [ 6 ]
The LHCG receptor's main function is the regulation of steroidogenesis . This is accomplished by increasing the intracellular levels of the enzyme cholesterol side chain cleaving enzyme , a member of the cytochrome P450 family. This leads to increased conversion of cholesterol into androgen precursors required to make many steroid hormones, including testosterone and estrogens. [ 10 ]
In the ovary, the LHCG receptor is necessary for follicular maturation and ovulation, as well as luteal function. Its expression requires appropriate hormonal stimulation by FSH and estradiol . The LHCGR is present on granulosa cells , theca cells, luteal cells, and interstitial cells. [ 6 ] The LHCGR is restimulated by increasing levels of chorionic gonadotropins if a pregnancy is developing. In turn, luteal function is prolonged and the endocrine milieu is supportive of the nascent pregnancy.
In the male, the LHCGR has been identified on the Leydig cells , which are critical for testosterone production and the support of spermatogenesis .
Normal LHCGR functioning is critical for male fetal development, as the fetal Leydig cells produce androstenedione which is converted to testosterone in fetal Sertoli cells to induce masculinization.
LHCGR have been found in many types of extragonadal tissues, and the physiologic role of some has remained largely unexplored. Thus receptors have been found in the uterus , sperm , seminal vesicles , prostate , skin , breast , adrenals , thyroid , neural retina , neuroendocrine cells, and (rat) brain . [ 6 ]
Upregulation refers to the increase in the number of receptor sites on the membrane. Estrogen and FSH upregulate LHCGR sites in preparation for ovulation . After ovulation, the luteinized ovary maintains LHCGRs that allow activation in case there is an implantation. Upregulation in males requires gene transcription to synthesize LH receptors within the cell cytoplasm. Reasons why downregulated LH receptors may fail to be upregulated include a lack of gene transcription, a lack of translation of receptor RNA into protein, and a lack of membrane-targeted trafficking of receptors from the Golgi.
The LHCGRs become desensitized when exposed to LH for some time. A key reaction of this downregulation is the phosphorylation of the intracellular (or cytoplasmic ) receptor domain by protein kinases . This process uncouples Gs protein from the LHCGR.
Downregulation refers to the decrease in the number of receptor molecules. This is usually the result of receptor endocytosis . In this process, the bound LCGR-hormone complex binds arrestin and concentrates in clathrin coated pits . Clathrin coated pits recruit dynamin and pinch off from the cell surface, becoming clathrin-coated vesicles . Clathrin-coated vesicles are processed into endosomes , some of which are recycled to the cell surface while others are targeted to lysosomes . Receptors targeted to lysosomes are degraded. Use of long-acting agonists will downregulate the receptor population by promoting their endocytosis.
Antibodies to LHCGR can interfere with LHCGR activity.
In 2019, the discovery of potent and selective antagonists of the luteinizing hormone receptor (BAY-298 and BAY-899) was reported; these were able to reduce sex hormone levels in vivo . [ 11 ] The latter fulfils the quality criteria for a 'Donated Chemical Probe' as defined by the Structural Genomics Consortium . [ 12 ]
A series of thienopyr(im)idine-based compounds [ 13 ] leading to optimized Org 43553 were described as the first Luteinizing Hormone Receptor agonists. [ 14 ] [ 15 ]
Loss-of-function mutations in females can lead to infertility . In 46,XY individuals severe inactivation can cause male pseudohermaphroditism , as fetal Leydig cells may not respond to LH, thus interfering with masculinization. [ 16 ] Less severe inactivation can result in hypospadias or a micropenis . [ 6 ]
Alfred G. Gilman and Martin Rodbell received the 1994 Nobel Prize in Medicine and Physiology for the discovery of the G Protein System.
Luteinizing hormone/choriogonadotropin receptor has been shown to interact with GIPC1 . [ 17 ] | https://en.wikipedia.org/wiki/Luteinizing_hormone/choriogonadotropin_receptor |
This page provides supplementary chemical data on Lutetium(III) oxide
The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet ( MSDS ) for this chemical from a reliable source such as SIRI , and follow its directions. | https://en.wikipedia.org/wiki/Lutetium(III)_oxide_(data_page) |
Lutetium phthalocyanine ( LuPc 2 ) is a coordination compound derived from lutetium and two phthalocyanines . It was the first known example of a molecule that is an intrinsic semiconductor . [ 1 ] [ 2 ] It exhibits electrochromism , changing color when subject to a voltage.
LuPc 2 is a sandwich compound consisting of a Lu 3+ ion coordinated to the conjugate base of two phthalocyanines. The rings are arranged in a staggered conformation . The extremities of the two ligands are slightly distorted outwards. [ 3 ] The complex features a non-innocent ligand, in the sense that the macrocycles carry an extra electron. [ 4 ] It is a free radical [ 1 ] with the unpaired electron sitting in a half-filled molecular orbital between the highest occupied and lowest unoccupied orbitals, allowing its electronic properties to be finely tuned. [ 3 ]
LuPc 2 , along with many substituted derivatives like the alkoxy -methyl derivative Lu[(C 8 H 17 OCH 2 ) 8 Pc] 2 , can be deposited as a thin film with intrinsic semiconductor properties; [ 4 ] said properties arise due to its radical nature [ 1 ] and its low reduction potential compared to other metal phthalocyanines. [ 2 ] This initially green film exhibits electrochromism; the oxidized form LuPc 2 + is red, whereas the reduced form LuPc 2 − is blue, and the next two reduced forms are dark blue and violet, respectively. [ 4 ] The green/red oxidation cycle can be repeated over 10,000 times in aqueous solution with dissolved alkali metal halides , before it is degraded by hydroxide ions; the green/blue redox cycle degrades faster in water. [ 4 ]
LuPc 2 and other lanthanide phthalocyanines are of interest in the development of organic thin-film field-effect transistors . [ 3 ] [ 5 ]
LuPc 2 derivatives can be selected to change color in the presence of certain molecules, such as in gas detectors ; [ 2 ] for example, the thioether derivative Lu[(C 6 H 13 S) 8 Pc] 2 changes from green to brownish-purple in the presence of NADH . [ 6 ] | https://en.wikipedia.org/wiki/Lutetium_phthalocyanine |
In condensed matter physics , Luttinger's theorem [ 1 ] [ 2 ] is a result derived by J. M. Luttinger and J. C. Ward in 1960 that has broad implications in the field of electron transport. It arises frequently in theoretical models of correlated electrons, such as the high-temperature superconductors , and in photoemission , where a metal's Fermi surface can be directly observed.
Luttinger's theorem states that the volume enclosed by a material's Fermi surface is directly proportional to the particle density .
While the theorem is an immediate result of the Pauli exclusion principle in the case of noninteracting particles, it remains true even as interactions between particles are taken into consideration provided that the appropriate definitions of Fermi surface and particle density are adopted. Specifically, in the interacting case the Fermi surface must be defined according to the criteria that G ( ω = 0 , k ) → 0 or ∞ {\displaystyle G(\omega =0,\,k)\rightarrow 0{\text{ or }}\infty } as k {\displaystyle k} crosses the Fermi surface,
where G {\displaystyle G} is the single-particle Green function in terms of frequency and momentum . Then Luttinger's theorem can be recast into the form [ 3 ] n = 2 ∫ R e G ( ω = 0 , k ) > 0 d D k ( 2 π ) D , {\displaystyle n=2\int _{{\mathcal {Re\,}}G(\omega =0,k)>0}{\frac {d^{D}k}{(2\pi )^{D}}},} with n {\displaystyle n} the particle density,
where R e G {\displaystyle {\mathcal {Re\,}}G} is the real part of the above Green function and d D k {\displaystyle d^{D}k} is the differential volume of k {\displaystyle k} -space in D {\displaystyle D} dimensions.
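For noninteracting fermions the statement reduces to elementary state counting, which can be checked numerically. The sketch below (Python; the value of k F and the grid size are arbitrary choices) compares the density formula for a two-dimensional Fermi gas with a brute-force count of occupied states; it only illustrates the free case, the content of the theorem being that the equality survives interactions:

```python
import numpy as np

# Free fermions in D = 2: density per spin n = (area of the Fermi disk)/(2π)²,
# i.e. n = π k_F² / (2π)².  Compare with brute-force counting of plane-wave
# states on a momentum grid covering the box [-π, π]².
kF = 1.3
n_formula = np.pi * kF**2 / (2 * np.pi) ** 2

k = np.linspace(-np.pi, np.pi, 2001)
KX, KY = np.meshgrid(k, k)
occupied = (KX**2 + KY**2) < kF**2   # states inside the Fermi circle
n_grid = occupied.mean()             # occupied fraction of the box equals n,
                                     # since the box area is (2π)²
print(n_formula, n_grid)             # both approximately 0.1345
```

| https://en.wikipedia.org/wiki/Luttinger's_theorem |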
A Luttinger liquid , or Tomonaga–Luttinger liquid , is a theoretical model describing interacting electrons (or other fermions ) in a one-dimensional conductor (e.g. quantum wires such as carbon nanotubes ). [ 1 ] Such a model is necessary as the commonly used Fermi liquid model breaks down for one dimension.
The Tomonaga–Luttinger liquid was first proposed by Sin-Itiro Tomonaga in 1950. The model showed that under certain constraints, second-order interactions between electrons could be modelled as bosonic interactions. In 1963, J.M. Luttinger reformulated the theory in terms of Bloch sound waves and showed that the constraints proposed by Tomonaga were unnecessary in order to treat the second-order perturbations as bosons. But his solution of the model was incorrect; the correct solution was given by Daniel C. Mattis and Elliott H. Lieb in 1965. [ 2 ]
Luttinger liquid theory describes low energy excitations in a 1D electron gas as bosons. Starting with the free electron Hamiltonian:
H = ∑ k ϵ k c k † c k {\displaystyle H=\sum _{k}\epsilon _{k}c_{k}^{\dagger }c_{k}}
is separated into left and right moving electrons and undergoes linearization with the approximation ϵ k ≈ ± v F ( k − k F ) {\displaystyle \epsilon _{k}\approx \pm v_{\rm {F}}(k-k_{\rm {F}})} over the range Λ {\displaystyle \Lambda } :
H = ∑ k = k F − Λ k F + Λ v F k ( c k R † c k R − c k L † c k L ) {\displaystyle H=\sum _{k=k_{\rm {F}}-\Lambda }^{k_{\rm {F}}+\Lambda }v_{\rm {F}}k\left(c_{k}^{\mathrm {R} \dagger }c_{k}^{\mathrm {R} }-c_{k}^{\mathrm {L} \dagger }c_{k}^{\mathrm {L} }\right)}
Expressions for bosons in terms of fermions are used to represent the Hamiltonian as a product of two boson operators in a Bogoliubov transformation .
The completed bosonization can then be used to predict spin-charge separation. Electron-electron interactions can be treated to calculate correlation functions.
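For the translation-invariant case, bosonization yields the renormalized mode velocity u and the Luttinger parameter K in closed form from the forward-scattering couplings, conventionally called g 2 (scattering between the two branches) and g 4 (scattering within a branch). A minimal sketch in Python, using the standard g-ology expressions (coupling conventions vary between references, and the numerical values below are illustrative):

```python
import math

def luttinger_parameters(vF, g2, g4):
    """Renormalized velocity u and Luttinger parameter K of the
    Tomonaga-Luttinger model in the standard g-ology convention."""
    a = vF + g4 / (2 * math.pi)
    b = g2 / (2 * math.pi)
    u = math.sqrt(a * a - b * b)        # velocity of the bosonic mode
    K = math.sqrt((a - b) / (a + b))    # K = 1 when free; K < 1 for repulsion
    return u, K

print(luttinger_parameters(1.0, 0.0, 0.0))   # free fermions: (1.0, 1.0)
print(luttinger_parameters(1.0, 0.5, 0.5))   # repulsive case: u > vF, K < 1
```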
Among the hallmark features of a Luttinger liquid are the following: charge and spin excitations propagate as collective waves with, in general, different velocities, leading to spin–charge separation ; correlation functions decay as power laws, with exponents controlled by the interaction strength; the momentum distribution has no discontinuity at the Fermi momentum, so there are no Landau quasiparticles; and the density of states for tunneling into the liquid varies as a power law at low energies.
The Luttinger model is thought to describe the universal low-frequency/long-wavelength behaviour of any one-dimensional system of interacting fermions (that has not undergone a phase transition into some other state).
Attempts to demonstrate Luttinger-liquid-like behaviour in those systems are the subject of ongoing experimental research in condensed matter physics .
Among the physical systems believed to be described by the Luttinger model are: electrons in carbon nanotubes ; electrons in semiconductor quantum wires; edge states in the fractional quantum Hall effect ; and one-dimensional spin chains. | https://en.wikipedia.org/wiki/Luttinger_liquid |
The Luttinger–Kohn model is a flavor of the k·p perturbation theory used for calculating the structure of multiple, degenerate electronic bands in bulk and quantum well semiconductors . The method is a generalization of the single band k · p theory.
In this model, the influence of all other bands is taken into account by using Löwdin 's perturbation method. [ 1 ]
All bands can be subdivided into two classes: Class A , the small set of bands of direct interest, and Class B , comprising all the remaining bands.
The method concentrates on the bands in Class A , and takes into account Class B bands perturbatively.
We can write the perturbed solution, ϕ {\displaystyle \phi _{}^{}} , as a linear combination of the unperturbed eigenstates ϕ i ( 0 ) {\displaystyle \phi _{i}^{(0)}} : ϕ = ∑ i a i ϕ i ( 0 ) , {\displaystyle \phi =\sum _{i}a_{i}\phi _{i}^{(0)},} where the sum runs over the bands of both classes.
Assuming the unperturbed eigenstates are orthonormalized, the eigenequations are: ∑ m ( H n m − E δ n m ) a m = 0 , {\displaystyle \sum _{m}\left(H_{nm}-E\,\delta _{nm}\right)a_{m}=0,}
where H n m = ⟨ ϕ n ( 0 ) | H | ϕ m ( 0 ) ⟩ . {\displaystyle H_{nm}=\left\langle \phi _{n}^{(0)}\right|H\left|\phi _{m}^{(0)}\right\rangle .}
From this expression, we can write:
where the first sum on the right-hand side is over the states in class A only, while the second sum is over the states on class B. Since we are interested in the coefficients a m {\displaystyle a_{m}} for m in class A, we may eliminate those in class B by an iteration procedure to obtain:
Equivalently, for a n {\displaystyle a_{n}} ( n ∈ A {\displaystyle n\in A} ):
and
When the coefficients a n {\displaystyle a_{n}} belonging to Class A are determined, so are a γ {\displaystyle a_{\gamma }} .
The Hamiltonian including the spin-orbit interaction can be written as: H = p 2 2 m 0 + V ( r ) + ℏ 4 m 0 2 c 2 ( ∇ V × p ) ⋅ σ ¯ , {\displaystyle H={\frac {p^{2}}{2m_{0}}}+V(\mathbf {r} )+{\frac {\hbar }{4m_{0}^{2}c^{2}}}\left({\boldsymbol {\nabla }}V\times \mathbf {p} \right)\cdot {\bar {\sigma }},}
where σ ¯ {\displaystyle {\bar {\sigma }}} is the Pauli spin matrix vector. Substituting into the Schrödinger equation in Bloch approximation we obtain
where
and the perturbation Hamiltonian can be defined as
The unperturbed Hamiltonian refers to the band-edge spin-orbit system (for k =0). At the band edge, the conduction band Bloch waves exhibit s-like symmetry, while the valence band states are p-like (3-fold degenerate without spin). Let us denote these states as | S ⟩ {\displaystyle |S\rangle } , and | X ⟩ {\displaystyle |X\rangle } , | Y ⟩ {\displaystyle |Y\rangle } and | Z ⟩ {\displaystyle |Z\rangle } respectively. These Bloch functions can be pictured as periodic repetitions of atomic orbitals, repeated at intervals corresponding to the lattice spacing. The Bloch function can be expanded in the following manner:
where j' is in Class A and γ {\displaystyle \gamma } is in Class B. The basis functions can be chosen to be
Using Löwdin's method, only the following eigenvalue problem needs to be solved
where
The second term of Π {\displaystyle \Pi } can be neglected compared to the similar term with p instead of k . Similarly to the single band case, we can write for U j j ′ A {\displaystyle U_{jj'}^{A}}
We now define the following parameters
and the band structure parameters (or the Luttinger parameters ) can be defined to be
These parameters are very closely related to the effective masses of the holes in various valence bands. γ 1 {\displaystyle \gamma _{1}} and γ 2 {\displaystyle \gamma _{2}} describe the coupling of the | X ⟩ {\displaystyle |X\rangle } , | Y ⟩ {\displaystyle |Y\rangle } and | Z ⟩ {\displaystyle |Z\rangle } states to the other states. The third parameter γ 3 {\displaystyle \gamma _{3}} relates to the anisotropy of the energy band structure around the Γ {\displaystyle \Gamma } point when γ 2 ≠ γ 3 {\displaystyle \gamma _{2}\neq \gamma _{3}} .
The Luttinger-Kohn Hamiltonian D j j ′ {\displaystyle \mathbf {D_{jj'}} } can be written explicitly as an 8×8 matrix (taking into account 8 bands: 2 conduction, 2 heavy-hole, 2 light-hole and 2 split-off)
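As an illustration of the structure, the reduced 4×4 heavy-/light-hole block of the Hamiltonian can be written down and diagonalized numerically. The sketch below (Python) uses one common basis convention (signs and phases of the off-diagonal elements differ between references), with GaAs-like Luttinger parameters; the prefactor and the k values are illustrative choices:

```python
import numpy as np

HB2_2M0 = 3.81  # ħ²/(2m₀) in eV·Å², approximate

def luttinger_4x4(kx, ky, kz, g1, g2, g3):
    """4x4 valence-band (heavy/light hole) Luttinger-Kohn block in one
    common convention; hole energies are measured from the band edge."""
    P = HB2_2M0 * g1 * (kx**2 + ky**2 + kz**2)
    Q = HB2_2M0 * g2 * (kx**2 + ky**2 - 2 * kz**2)
    R = HB2_2M0 * np.sqrt(3) * (-g2 * (kx**2 - ky**2) + 2j * g3 * kx * ky)
    S = HB2_2M0 * 2 * np.sqrt(3) * g3 * (kx - 1j * ky) * kz
    return np.array([[P + Q, -S, R, 0],
                     [-np.conj(S), P - Q, 0, R],
                     [np.conj(R), 0, P - Q, S],
                     [0, np.conj(R), np.conj(S), P + Q]])

# GaAs-like Luttinger parameters (γ1, γ2, γ3) ≈ (6.98, 2.06, 2.93):
E = np.linalg.eigvalsh(luttinger_4x4(0.02, 0.01, 0.0, 6.98, 2.06, 2.93))
print(E)  # two doubly degenerate branches: heavy and light holes
```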
2. Luttinger, J. M.; Kohn, W., "Motion of Electrons and Holes in Perturbed Periodic Fields", Phys. Rev. 97 (4), pp. 869–883 (1955). https://journals.aps.org/pr/abstract/10.1103/PhysRev.97.869 | https://en.wikipedia.org/wiki/Luttinger–Kohn_model |
In solid state physics , the Luttinger–Ward functional , [ 1 ] proposed by Joaquin Mazdak Luttinger and John Clive Ward in 1960, [ 2 ] is a scalar functional of the bare electron-electron interaction and the renormalized one-particle propagator . In terms of Feynman diagrams , the Luttinger–Ward functional is the sum of all closed, bold, two-particle irreducible diagrams, i.e., all diagrams without particles going in or out that do not fall apart if one removes two propagator lines. It is usually written as Φ [ G ] {\displaystyle \Phi [G]} or Φ [ G , U ] {\displaystyle \Phi [G,U]} , where G {\displaystyle G} is the one-particle Green's function and U {\displaystyle U} is the bare interaction.
The Luttinger–Ward functional has no direct physical meaning, but it is useful in proving conservation laws .
The functional is closely related to the Baym–Kadanoff functional constructed independently by Gordon Baym and Leo Kadanoff in 1961. [ 3 ] Some authors use the terms interchangeably; [ 4 ] if a distinction is made, then the Baym–Kadanoff functional is identical to the two-particle irreducible effective action Γ [ G ] {\displaystyle \Gamma [G]} , which differs from the Luttinger–Ward functional by a trivial term.
Given a system characterized by the action S [ c , c ¯ ] {\displaystyle S[c,{\bar {c}}]} in terms of Grassmann fields c i , c ¯ i {\displaystyle c_{i},{\bar {c}}_{i}} , the partition function can be expressed as the path integral :
where J {\displaystyle J} is a binary source field. By expansion in the Dyson series , one finds that Z = Z [ J = 0 ] {\displaystyle Z=Z[J=0]} is the sum of all (possibly disconnected), closed Feynman diagrams. Z [ J ] {\displaystyle Z[J]} in turn is the generating functional of the N-particle Green's function:
The linked-cluster theorem asserts that the effective action W = − log Z {\displaystyle W=-\log Z} is the sum of all closed, connected, bare diagrams. W [ J ] = − log Z [ J ] {\displaystyle W[J]=-\log Z[J]} in turn is the generating functional for the connected Green's function. As an example, the two particle connected Green's function reads:
To pass to the two-particle irreducible (2PI) effective action, one performs a Legendre transform of W [ J ] {\displaystyle W[J]} to a new binary source field. One chooses an, at this point arbitrary, convex G i j {\displaystyle G_{ij}} as the source and obtains the 2PI functional, also known as Baym–Kadanoff functional:
Unlike the connected case, one more step is required to obtain a generating functional from the two-particle irreducible effective action Γ {\displaystyle \Gamma } because of the presence of a non-interacting part. By subtracting it, one obtains the Luttinger–Ward functional: [ 5 ]
where Σ {\displaystyle \Sigma } is the self-energy . Along the lines of the proof of the linked-cluster theorem, one can show that this is the generating functional for the two-particle irreducible propagators.
Diagrammatically, the Luttinger–Ward functional is the sum of all closed, bold, two-particle irreducible Feynman diagrams, also known as “skeleton” diagrams.
The diagrams are closed as they do not have any external legs, i.e., no particles going in or out of the diagram. They are “bold” because they are formulated in terms of the interacting or bold propagator rather than the non-interacting one. They are two-particle irreducible since they do not become disconnected if we sever up to two fermionic lines.
The Luttinger–Ward functional is related to the grand potential Ω {\displaystyle \Omega } of a system:
Φ {\displaystyle \Phi } is a generating functional for irreducible vertex quantities: the first functional derivative with respect to G {\displaystyle G} gives the self-energy , Σ = δ Φ / δ G {\displaystyle \Sigma =\delta \Phi /\delta G} , while the second derivative gives the partially two-particle irreducible four-point vertex, Γ ( 4 ) = δ 2 Φ / δ G 2 {\displaystyle \Gamma ^{(4)}=\delta ^{2}\Phi /\delta G^{2}} .
While the Luttinger–Ward functional exists, it can be shown to be not unique for Hubbard-like models . [ 6 ] In particular, the irreducible vertex functions show a set of divergencies, which causes the self-energy to bifurcate into a physical and an unphysical solution. [ 7 ]
Baym and Kadanoff showed that the conservation laws can be satisfied for any functional Φ [ G ] {\displaystyle \Phi \left[G\right]} , thanks to Noether's theorem. This follows from the fact that the equation of motion of G {\displaystyle G} responding to one-body external fields satisfies the space- and time-translational symmetries as well as the abelian gauge symmetry (phase symmetry), as long as the equation of motion is given by the derivative of Φ [ G ] {\displaystyle \Phi \left[G\right]} . [ 3 ] Note that the reverse is also true. Based on the diagrammatic analysis, what Baym found is that δ Σ ( 1 , [ G ] ) δ G ( 2 ) = δ Σ ( 2 , [ G ] ) δ G ( 1 ) {\displaystyle {\frac {\delta \Sigma (1,\left[G\right])}{\delta G(2)}}={\frac {\delta \Sigma (2,\left[G\right])}{\delta G(1)}}} is needed to satisfy the conservation law. This is nothing but the complete-integrability condition, implying the existence
of Φ [ G ] {\displaystyle \Phi \left[G\right]} such that Σ [ G ] = δ Φ [ G ] δ G {\displaystyle \Sigma \left[G\right]={\frac {\delta \Phi \left[G\right]}{\delta G}}} (recall the completely-integrable condition for d f = A ( x , y ) d x + B ( x , y ) d y {\displaystyle df=A(x,y)dx+B(x,y)dy} ).
Thus the remaining problem is how to determine Φ [ G ] {\displaystyle \Phi \left[G\right]} approximately.
Such approximations are called conserving approximations . Some examples: | https://en.wikipedia.org/wiki/Luttinger–Ward_functional
In computational engineering , Luus–Jaakola (LJ) denotes a heuristic for global optimization of a real-valued function. [ 1 ] In engineering use, LJ is not an algorithm that terminates with an optimal solution, nor is it an iterative method that generates a sequence of points converging to an optimal solution (when one exists). However, when applied to a twice continuously differentiable function, the LJ heuristic is a proper iterative method that generates a sequence with a convergent subsequence; for this class of problems, Newton's method is recommended and enjoys a quadratic rate of convergence, while no convergence-rate analysis has been given for the LJ heuristic. [ 1 ] In practice, the LJ heuristic has been recommended for functions that need be neither convex nor differentiable nor locally Lipschitz : the LJ heuristic does not use a gradient or subgradient even when one is available, which allows its application to non-differentiable and non-convex problems.
Proposed by Luus and Jaakola, [ 2 ] LJ generates a sequence of iterates. The next iterate is selected from a sample from a neighborhood of the current position using a uniform distribution . With each iteration, the neighborhood decreases, which forces a subsequence of iterates to converge to a cluster point. [ 1 ]
Luus has applied LJ in optimal control , [ 3 ] [ 4 ] transformer design , [ 5 ] metallurgical processes , [ 6 ] and chemical engineering . [ 7 ]
At each step, the LJ heuristic maintains a box from which it samples points randomly, using a uniform distribution on the box. For a unimodal function , the probability of reducing the objective function decreases as the box approaches a minimum.
Let f : R n → R {\displaystyle f:\mathbb {R} ^{n}\rightarrow \mathbb {R} } be the fitness or cost function which must be minimized. Let x ∈ R n {\displaystyle {\textbf {x}}\in \mathbb {R} ^{n}} designate a position or candidate solution in the search-space. The LJ heuristic iterates the following steps:
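The steps are easy to reconstruct from the description above: sample uniformly in a box around the incumbent point, keep improvements, and contract the box. The following is a minimal sketch, not Luus and Jaakola's original pseudocode; the contraction factor gamma and the iteration budget are assumed tuning parameters.

```python
import numpy as np

def luus_jaakola(f, x0, d0, n_iter=1000, gamma=0.95, rng=None):
    """Minimize f by uniform sampling in a shrinking box around the incumbent.

    x0 : initial candidate, d0 : initial box half-widths,
    gamma : box contraction factor (a hypothetical default; tune per problem).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    d = np.asarray(d0, dtype=float)
    fx = f(x)
    for _ in range(n_iter):
        y = x + rng.uniform(-d, d)   # uniform sample in the current box
        fy = f(y)
        if fy < fx:                  # accept only improving points
            x, fx = y, fy
        d = gamma * d                # shrink the neighborhood each iteration
    return x, fx

# Usage on a non-convex test function (Rosenbrock)
rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
x_best, f_best = luus_jaakola(rosen, x0=[-1.5, 2.0], d0=[2.0, 2.0], n_iter=5000)
print(x_best, f_best)
```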
Luus notes that ARS (Adaptive Random Search) algorithms proposed to date differ in regard to many aspects. [ 8 ]
Nair gave a convergence analysis: for twice continuously differentiable functions, the LJ heuristic generates a sequence of iterates having a convergent subsequence. [ 1 ] For this class of problems, Newton's method is the usual optimization method, and it has quadratic convergence (regardless of the dimension of the space, which can be a Banach space , according to Kantorovich 's analysis).
However, according to the analysis of Yudin and Nemirovsky, the worst-case complexity of minimization on the class of unimodal functions grows exponentially in the dimension of the problem. The Yudin–Nemirovsky analysis implies that no method can be fast on high-dimensional problems that lack convexity:
"The catastrophic growth [in the number of iterations needed to reach an approximate solution of a given accuracy] as [the number of dimensions increases to infinity] shows that it is meaningless to pose the question of constructing universal methods of solving ... problems of any appreciable dimensionality 'generally'. It is interesting to note that the same [conclusion] holds for ... problems generated by uni-extremal [that is, unimodal] (but not convex) functions." [ 9 ]
When applied to twice continuously differentiable problems, the LJ heuristic's rate of convergence decreases as the number of dimensions increases. [ 10 ] | https://en.wikipedia.org/wiki/Luus–Jaakola |
Ly49 is a family of membrane C-type lectin -like receptors expressed mainly on NK cells but also on other immune cells (some CD8+ and CD3+ T lymphocytes, intestinal epithelial lymphocytes (IELs), NKT cells , uterine NK (uNK) cells, macrophages and dendritic cells ). [ 1 ] Their primary role is to bind MHC-I molecules and thereby distinguish healthy self cells from infected or altered cells. The Ly49 family is encoded by the Klra gene cluster and includes genes for both inhibitory and activating paired receptors , though most are inhibitory. [ 2 ] Inhibitory Ly49 receptors mediate the recognition of self cells and thus maintain self-tolerance and prevent autoimmunity by suppressing NK cell activation. [ 1 ] Activating receptors, on the other hand, recognise ligands on cancerous or virally infected cells (induced-self hypothesis) and respond when cells lack or abnormally express MHC-I molecules (missing-self hypothesis); their engagement triggers cytokine production and the cytotoxic activity of NK and other immune cells. [ 3 ]
Ly49 receptors are expressed in some mammals, including rodents, cattle and some primates, but not in humans. [ 4 ] Only one human gene homologous to the rodent Ly49 receptors is found in the human genome , KLRA1P (LY49L); however, it is a non-functional pseudogene . [ 5 ] Instead, the killer cell immunoglobulin-like receptors (KIR) fulfil the same function in humans: they have a different molecular structure but recognise HLA class I molecules as ligands and likewise comprise both inhibitory (mainly) and activating receptors. [ 3 ]
The function of NK cells is the killing of virally infected or cancerous cells. They must therefore have a precisely regulated system of self-cell recognition to prevent the destruction of healthy cells. They express several types of inhibitory and activating receptors on their surface, including the Ly49 receptor family, which has roles in NK cell licensing and in antiviral and antitumor immunity. [ 1 ]
NK cells are activated when signals from activating receptors outweigh inhibitory signals. This can happen when activating receptors recognise viral proteins presented on the surface of an infected cell (induced-self theory). [ 3 ] Some Ly49 receptors have evolved to recognise specific viral proteins; for example, Ly49H binds the murine cytomegalovirus (MCMV) glycoprotein m157. [ 1 ] Mouse strains lacking Ly49H are more susceptible to MCMV infection. In addition, these Ly49H-positive NK cells have properties of MCMV-specific memory NK cells and respond more strongly during secondary MCMV infections. [ 6 ]
Another example of NK cell activation is the recognition of tumor cells that stop expressing MHC-I molecules in order to avoid killing by cytotoxic T lymphocytes . The inhibitory receptors of NK cells then receive no signal, and the cell is activated through its activating receptors. This mechanism is described by the missing-self hypothesis. [ 3 ]
In order to be fully functional and cytotoxic, NK cells need to receive signals from self-MHC-I molecules through inhibitory Ly49 receptors in rodents (KIR in humans), especially during their development. [ 1 ] [ 7 ] This educational process prevents the generation of autoreactive NK cells and was termed "NK cell licensing" by Yokoyama and colleagues. If inhibitory Ly49 receptors miss the signal from MHC-I during development, the NK cells remain unlicensed (uneducated) and do not react to stimulation of their activating receptors. This hyporesponsive state is not permanent, however, and the cells can be re-educated under certain conditions. [ 6 ] Moreover, it has been shown that uneducated cells can be activated by certain acute viral infections or by some tumors, and can kill these cells more efficiently than educated cells. [ 6 ]
Inhibitory receptors play a role in the NK cell licensing and are important for recognition and tolerance of self cells.
Stimulation of inhibitory receptors leads to phosphorylation of the immunoreceptor tyrosine-based inhibitory motif (ITIM) located in the cytoplasmic part of these receptors. [ 1 ] [ 3 ] The phosphorylated Ly49 molecule recruits the Src homology 2 (SH2) domain-containing protein phosphatase SHP-1 , whose dephosphorylating activity prevents cell activation.
Inhibitory receptors include Ly49A, B, C, E, G, Q. [ 2 ]
Activating receptors are involved in antiviral and antitumor immunity.
They signal through the immunoreceptor tyrosine-based activation motif (ITAM), which is part of the associated molecule DAP-12 , bound to an arginine in the transmembrane segment of Ly49. [ 1 ] [ 3 ] After stimulation of the receptor and phosphorylation of ITAM, an SH2 domain-containing protein kinase is recruited, starting a kinase signaling cascade that activates the cell's effector functions.
Activating receptors include Ly49D, H, L. [ 2 ] | https://en.wikipedia.org/wiki/Ly49 |
In probability theory , the central limit theorem ( CLT ) states that, under appropriate conditions, the distribution of a normalized version of the sample mean converges to a standard normal distribution . This holds even if the original variables themselves are not normally distributed . There are several versions of the CLT, each applying in the context of different conditions.
The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions.
This theorem has seen many changes during the formal development of probability theory. Previous versions of the theorem date back to 1811, but in its modern form it was only precisely stated as late as 1920. [ 1 ]
In statistics , the CLT can be stated as: let X 1 , X 2 , … , X n {\displaystyle X_{1},X_{2},\dots ,X_{n}} denote a statistical sample of size n {\displaystyle n} from a population with expected value (average) μ {\displaystyle \mu } and finite positive variance σ 2 {\displaystyle \sigma ^{2}} , and let X ¯ n {\displaystyle {\bar {X}}_{n}} denote the sample mean (which is itself a random variable ). Then the limit as n → ∞ {\displaystyle n\to \infty } of the distribution of ( X ¯ n − μ ) n {\displaystyle ({\bar {X}}_{n}-\mu ){\sqrt {n}}} is a normal distribution with mean 0 {\displaystyle 0} and variance σ 2 {\displaystyle \sigma ^{2}} . [ 2 ]
In other words, suppose that a large sample of observations is obtained, each observation being randomly produced in a way that does not depend on the values of the other observations, and the average ( arithmetic mean ) of the observed values is computed. If this procedure is performed many times, resulting in a collection of observed averages, the central limit theorem says that if the sample size is large enough, the probability distribution of these averages will closely approximate a normal distribution.
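As a concrete illustration, the following sketch (an illustrative simulation; the sample size, repetition count, and Exponential(1) population are arbitrary choices) draws many samples from a decidedly non-normal distribution and standardizes their means; the standardized means behave like standard normal draws.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 10_000        # sample size and number of repeated samples
mu, sigma = 1.0, 1.0         # mean and std of the Exponential(1) population

# reps sample means of n i.i.d. Exponential(1) draws, then standardized
means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (means - mu) / sigma

print(z.mean(), z.std())     # close to 0 and 1
print((z <= 1.96).mean())    # close to Phi(1.96) ~ 0.975
```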
The central limit theorem has several variants. In its common form, the random variables must be independent and identically distributed (i.i.d.). This requirement can be weakened; convergence of the mean to the normal distribution also occurs for non-identical distributions or for non-independent observations if they comply with certain conditions.
The earliest version of this theorem, that the normal distribution may be used as an approximation to the binomial distribution , is the de Moivre–Laplace theorem .
Let { X 1 , … , X n } {\displaystyle \{X_{1},\ldots ,X_{n}\}} be a sequence of i.i.d. random variables having a distribution with expected value given by μ {\displaystyle \mu } and finite variance given by σ 2 . {\displaystyle \sigma ^{2}.} Suppose we are interested in the sample average
X ¯ n ≡ X 1 + ⋯ + X n n . {\displaystyle {\bar {X}}_{n}\equiv {\frac {X_{1}+\cdots +X_{n}}{n}}.}
By the law of large numbers , the sample average converges almost surely (and therefore also converges in probability ) to the expected value μ {\displaystyle \mu } as n → ∞ . {\displaystyle n\to \infty .}
The classical central limit theorem describes the size and the distributional form of the stochastic fluctuations around the deterministic number μ {\displaystyle \mu } during this convergence. More precisely, it states that as n {\displaystyle n} gets larger, the distribution of the normalized mean n ( X ¯ n − μ ) {\displaystyle {\sqrt {n}}({\bar {X}}_{n}-\mu )} , i.e. the difference between the sample average X ¯ n {\displaystyle {\bar {X}}_{n}} and its limit μ , {\displaystyle \mu ,} scaled by the factor n {\displaystyle {\sqrt {n}}} , approaches the normal distribution with mean 0 {\displaystyle 0} and variance σ 2 . {\displaystyle \sigma ^{2}.} For large enough n , {\displaystyle n,} the distribution of X ¯ n {\displaystyle {\bar {X}}_{n}} gets arbitrarily close to the normal distribution with mean μ {\displaystyle \mu } and variance σ 2 / n . {\displaystyle \sigma ^{2}/n.}
The usefulness of the theorem is that the distribution of n ( X ¯ n − μ ) {\displaystyle {\sqrt {n}}({\bar {X}}_{n}-\mu )} approaches normality regardless of the shape of the distribution of the individual X i . {\displaystyle X_{i}.} Formally, the theorem can be stated as follows:
Lindeberg–Lévy CLT — Suppose X 1 , X 2 , X 3 … {\displaystyle X_{1},X_{2},X_{3}\ldots } is a sequence of i.i.d. random variables with E [ X i ] = μ {\displaystyle \operatorname {E} [X_{i}]=\mu } and Var [ X i ] = σ 2 < ∞ . {\displaystyle \operatorname {Var} [X_{i}]=\sigma ^{2}<\infty .} Then, as n {\displaystyle n} approaches infinity, the random variables n ( X ¯ n − μ ) {\displaystyle {\sqrt {n}}({\bar {X}}_{n}-\mu )} converge in distribution to a normal N ( 0 , σ 2 ) {\displaystyle {\mathcal {N}}(0,\sigma ^{2})} : [ 4 ]
n ( X ¯ n − μ ) ⟶ d N ( 0 , σ 2 ) . {\displaystyle {\sqrt {n}}\left({\bar {X}}_{n}-\mu \right)\mathrel {\overset {d}{\longrightarrow }} {\mathcal {N}}\left(0,\sigma ^{2}\right).}
In the case σ > 0 , {\displaystyle \sigma >0,} convergence in distribution means that the cumulative distribution functions of n ( X ¯ n − μ ) {\displaystyle {\sqrt {n}}({\bar {X}}_{n}-\mu )} converge pointwise to the cdf of the N ( 0 , σ 2 ) {\displaystyle {\mathcal {N}}(0,\sigma ^{2})} distribution: for every real number z , {\displaystyle z,}
lim n → ∞ P [ n ( X ¯ n − μ ) ≤ z ] = lim n → ∞ P [ n ( X ¯ n − μ ) σ ≤ z σ ] = Φ ( z σ ) , {\displaystyle \lim _{n\to \infty }\mathbb {P} \left[{\sqrt {n}}({\bar {X}}_{n}-\mu )\leq z\right]=\lim _{n\to \infty }\mathbb {P} \left[{\frac {{\sqrt {n}}({\bar {X}}_{n}-\mu )}{\sigma }}\leq {\frac {z}{\sigma }}\right]=\Phi \left({\frac {z}{\sigma }}\right),}
where Φ ( z ) {\displaystyle \Phi (z)} is the standard normal cdf evaluated at z . {\displaystyle z.} The convergence is uniform in z {\displaystyle z} in the sense that
lim n → ∞ sup z ∈ R | P [ n ( X ¯ n − μ ) ≤ z ] − Φ ( z σ ) | = 0 , {\displaystyle \lim _{n\to \infty }\;\sup _{z\in \mathbb {R} }\;\left|\mathbb {P} \left[{\sqrt {n}}({\bar {X}}_{n}-\mu )\leq z\right]-\Phi \left({\frac {z}{\sigma }}\right)\right|=0~,}
where sup {\displaystyle \sup } denotes the least upper bound (or supremum ) of the set. [ 5 ]
In this variant of the central limit theorem the random variables X i {\textstyle X_{i}} have to be independent, but not necessarily identically distributed. The theorem also requires that random variables | X i | {\textstyle \left|X_{i}\right|} have moments of some order ( 2 + δ ) {\textstyle (2+\delta )} , and that the rate of growth of these moments is limited by the Lyapunov condition given below.
Lyapunov CLT [ 6 ] — Suppose { X 1 , … , X n , … } {\textstyle \{X_{1},\ldots ,X_{n},\ldots \}} is a sequence of independent random variables, each with finite expected value μ i {\textstyle \mu _{i}} and variance σ i 2 {\textstyle \sigma _{i}^{2}} . Define
s n 2 = ∑ i = 1 n σ i 2 . {\displaystyle s_{n}^{2}=\sum _{i=1}^{n}\sigma _{i}^{2}.}
If for some δ > 0 {\textstyle \delta >0} , Lyapunov’s condition
lim n → ∞ 1 s n 2 + δ ∑ i = 1 n E [ | X i − μ i | 2 + δ ] = 0 {\displaystyle \lim _{n\to \infty }\;{\frac {1}{s_{n}^{2+\delta }}}\,\sum _{i=1}^{n}\operatorname {E} \left[\left|X_{i}-\mu _{i}\right|^{2+\delta }\right]=0}
is satisfied, then a sum of X i − μ i s n {\textstyle {\frac {X_{i}-\mu _{i}}{s_{n}}}} converges in distribution to a standard normal random variable, as n {\textstyle n} goes to infinity:
1 s n ∑ i = 1 n ( X i − μ i ) ⟶ d N ( 0 , 1 ) . {\displaystyle {\frac {1}{s_{n}}}\,\sum _{i=1}^{n}\left(X_{i}-\mu _{i}\right)\mathrel {\overset {d}{\longrightarrow }} {\mathcal {N}}(0,1).}
In practice it is usually easiest to check Lyapunov's condition for δ = 1 {\textstyle \delta =1} .
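As a standard sanity check (a routine calculation, not part of the cited statement): for i.i.d. variables with common variance σ 2 and finite third absolute central moment ρ = E| X i − μ | 3 , the condition with δ = 1 holds automatically, since

```latex
\frac{1}{s_n^{3}} \sum_{i=1}^{n} \operatorname{E}\left[ |X_i - \mu|^{3} \right]
  = \frac{n\rho}{\left(n\sigma^{2}\right)^{3/2}}
  = \frac{\rho}{\sigma^{3}\sqrt{n}} \;\longrightarrow\; 0 .
```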
If a sequence of random variables satisfies Lyapunov's condition, then it also satisfies Lindeberg's condition. The converse implication, however, does not hold.
In the same setting and with the same notation as above, the Lyapunov condition can be replaced with the following weaker one (from Lindeberg in 1920).
Suppose that for every ε > 0 {\textstyle \varepsilon >0} ,
lim n → ∞ 1 s n 2 ∑ i = 1 n E [ ( X i − μ i ) 2 ⋅ 1 { | X i − μ i | > ε s n } ] = 0 {\displaystyle \lim _{n\to \infty }{\frac {1}{s_{n}^{2}}}\sum _{i=1}^{n}\operatorname {E} \left[(X_{i}-\mu _{i})^{2}\cdot \mathbf {1} _{\left\{\left|X_{i}-\mu _{i}\right|>\varepsilon s_{n}\right\}}\right]=0}
where 1 { … } {\textstyle \mathbf {1} _{\{\ldots \}}} is the indicator function . Then the distribution of the standardized sums
1 s n ∑ i = 1 n ( X i − μ i ) {\displaystyle {\frac {1}{s_{n}}}\sum _{i=1}^{n}\left(X_{i}-\mu _{i}\right)}
converges towards the standard normal distribution N ( 0 , 1 ) {\textstyle {\mathcal {N}}(0,1)} .
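For orientation (again a routine calculation rather than part of the cited statement), the i.i.d. case with finite variance satisfies Lindeberg's condition: with μ i = μ , σ i 2 = σ 2 and s n = σ √ n ,

```latex
\frac{1}{s_n^{2}} \sum_{i=1}^{n}
  \operatorname{E}\!\left[ (X_i-\mu)^2 \,\mathbf{1}_{\{ |X_i-\mu| > \varepsilon s_n \}} \right]
  = \frac{1}{\sigma^{2}}\,
    \operatorname{E}\!\left[ (X_1-\mu)^2 \,\mathbf{1}_{\{ |X_1-\mu| > \varepsilon\sigma\sqrt{n} \}} \right]
  \;\longrightarrow\; 0,
```

by dominated convergence, since ( X 1 − μ ) 2 is integrable and the indicator vanishes pointwise as n → ∞.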
Rather than summing an integer number n {\displaystyle n} of random variables and taking n → ∞ {\displaystyle n\to \infty } , the sum can be of a random number N {\displaystyle N} of random variables, with conditions on N {\displaystyle N} .
Robbins CLT [ 7 ] [ 8 ] — Let { X i , i ≥ 1 } {\displaystyle \{X_{i},i\geq 1\}} be independent, identically distributed random variables with E ( X i ) = μ {\displaystyle E(X_{i})=\mu } and Var ( X i ) = σ 2 {\displaystyle {\text{Var}}(X_{i})=\sigma ^{2}} , and let { N n , n ≥ 1 } {\displaystyle \{N_{n},n\geq 1\}} be a sequence of non-negative integer-valued random variables that are independent of { X i , i ≥ 1 } {\displaystyle \{X_{i},i\geq 1\}} . Assume for each n = 1 , 2 , … {\displaystyle n=1,2,\dots } that E ( N n 2 ) < ∞ {\displaystyle E(N_{n}^{2})<\infty } and
N n − E ( N n ) Var ( N n ) → d N ( 0 , 1 ) {\displaystyle {\frac {N_{n}-E(N_{n})}{\sqrt {{\text{Var}}(N_{n})}}}\xrightarrow {\quad d\quad } {\mathcal {N}}(0,1)}
where → d {\displaystyle \xrightarrow {\,d\,} } denotes convergence in distribution and N ( 0 , 1 ) {\displaystyle {\mathcal {N}}(0,1)} is the normal distribution with mean 0, variance 1.
Then
∑ i = 1 N n X i − μ E ( N n ) σ 2 E ( N n ) + μ 2 Var ( N n ) → d N ( 0 , 1 ) {\displaystyle {\frac {\sum _{i=1}^{N_{n}}X_{i}-\mu E(N_{n})}{\sqrt {\sigma ^{2}E(N_{n})+\mu ^{2}{\text{Var}}(N_{n})}}}\xrightarrow {\quad d\quad } {\mathcal {N}}(0,1)}
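One admissible choice of the random index (an illustrative example, not from the cited sources) is N n ~ Poisson( n ): it is independent of the summands, has finite second moment, and is asymptotically normal after standardization. A quick simulation with Exponential(1) summands:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, reps = 1000, 5000        # E(N_n) = Var(N_n) = lam for a Poisson count
mu, sigma2 = 1.0, 1.0         # Exponential(1) summands: mean 1, variance 1

N = rng.poisson(lam, size=reps)               # random number of summands
S = np.array([rng.exponential(size=k).sum() for k in N])
T = (S - mu * lam) / np.sqrt(sigma2 * lam + mu**2 * lam)
print(T.mean(), T.std())                      # close to 0 and 1
```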
Proofs that use characteristic functions can be extended to cases where each individual X i {\textstyle \mathbf {X} _{i}} is a random vector in R k {\textstyle \mathbb {R} ^{k}} , with mean vector μ = E [ X i ] {\textstyle {\boldsymbol {\mu }}=\operatorname {E} [\mathbf {X} _{i}]} and covariance matrix Σ {\textstyle \mathbf {\Sigma } } (among the components of the vector), and these random vectors are independent and identically distributed. The multidimensional central limit theorem states that when scaled, sums converge to a multivariate normal distribution . [ 9 ] Summation of these vectors is done component-wise.
For i = 1 , 2 , 3 , … , {\displaystyle i=1,2,3,\ldots ,} let
X i = [ X i ( 1 ) ⋮ X i ( k ) ] {\displaystyle \mathbf {X} _{i}={\begin{bmatrix}X_{i}^{(1)}\\\vdots \\X_{i}^{(k)}\end{bmatrix}}}
be independent random vectors. The sum of the random vectors X 1 , … , X n {\displaystyle \mathbf {X} _{1},\ldots ,\mathbf {X} _{n}} is
∑ i = 1 n X i = [ X 1 ( 1 ) ⋮ X 1 ( k ) ] + [ X 2 ( 1 ) ⋮ X 2 ( k ) ] + ⋯ + [ X n ( 1 ) ⋮ X n ( k ) ] = [ ∑ i = 1 n X i ( 1 ) ⋮ ∑ i = 1 n X i ( k ) ] {\displaystyle \sum _{i=1}^{n}\mathbf {X} _{i}={\begin{bmatrix}X_{1}^{(1)}\\\vdots \\X_{1}^{(k)}\end{bmatrix}}+{\begin{bmatrix}X_{2}^{(1)}\\\vdots \\X_{2}^{(k)}\end{bmatrix}}+\cdots +{\begin{bmatrix}X_{n}^{(1)}\\\vdots \\X_{n}^{(k)}\end{bmatrix}}={\begin{bmatrix}\sum _{i=1}^{n}X_{i}^{(1)}\\\vdots \\\sum _{i=1}^{n}X_{i}^{(k)}\end{bmatrix}}}
and their average is
X ¯ n = [ X ¯ n ( 1 ) ⋮ X ¯ n ( k ) ] = 1 n ∑ i = 1 n X i . {\displaystyle \mathbf {{\bar {X}}_{n}} ={\begin{bmatrix}{\bar {X}}_{n}^{(1)}\\\vdots \\{\bar {X}}_{n}^{(k)}\end{bmatrix}}={\frac {1}{n}}\sum _{i=1}^{n}\mathbf {X} _{i}.}
Therefore,
1 n ∑ i = 1 n [ X i − E ( X i ) ] = 1 n ∑ i = 1 n ( X i − μ ) = n ( X ¯ n − μ ) . {\displaystyle {\frac {1}{\sqrt {n}}}\sum _{i=1}^{n}\left[\mathbf {X} _{i}-\operatorname {E} \left(\mathbf {X} _{i}\right)\right]={\frac {1}{\sqrt {n}}}\sum _{i=1}^{n}(\mathbf {X} _{i}-{\boldsymbol {\mu }})={\sqrt {n}}\left({\overline {\mathbf {X} }}_{n}-{\boldsymbol {\mu }}\right).}
The multivariate central limit theorem states that
n ( X ¯ n − μ ) ⟶ d N k ( 0 , Σ ) , {\displaystyle {\sqrt {n}}\left({\overline {\mathbf {X} }}_{n}-{\boldsymbol {\mu }}\right)\mathrel {\overset {d}{\longrightarrow }} {\mathcal {N}}_{k}(0,{\boldsymbol {\Sigma }}),} where the covariance matrix Σ {\displaystyle {\boldsymbol {\Sigma }}} is equal to Σ = [ Var ( X 1 ( 1 ) ) Cov ( X 1 ( 1 ) , X 1 ( 2 ) ) Cov ( X 1 ( 1 ) , X 1 ( 3 ) ) ⋯ Cov ( X 1 ( 1 ) , X 1 ( k ) ) Cov ( X 1 ( 2 ) , X 1 ( 1 ) ) Var ( X 1 ( 2 ) ) Cov ( X 1 ( 2 ) , X 1 ( 3 ) ) ⋯ Cov ( X 1 ( 2 ) , X 1 ( k ) ) Cov ( X 1 ( 3 ) , X 1 ( 1 ) ) Cov ( X 1 ( 3 ) , X 1 ( 2 ) ) Var ( X 1 ( 3 ) ) ⋯ Cov ( X 1 ( 3 ) , X 1 ( k ) ) ⋮ ⋮ ⋮ ⋱ ⋮ Cov ( X 1 ( k ) , X 1 ( 1 ) ) Cov ( X 1 ( k ) , X 1 ( 2 ) ) Cov ( X 1 ( k ) , X 1 ( 3 ) ) ⋯ Var ( X 1 ( k ) ) ] . {\displaystyle {\boldsymbol {\Sigma }}={\begin{bmatrix}{\operatorname {Var} \left(X_{1}^{(1)}\right)}&\operatorname {Cov} \left(X_{1}^{(1)},X_{1}^{(2)}\right)&\operatorname {Cov} \left(X_{1}^{(1)},X_{1}^{(3)}\right)&\cdots &\operatorname {Cov} \left(X_{1}^{(1)},X_{1}^{(k)}\right)\\\operatorname {Cov} \left(X_{1}^{(2)},X_{1}^{(1)}\right)&\operatorname {Var} \left(X_{1}^{(2)}\right)&\operatorname {Cov} \left(X_{1}^{(2)},X_{1}^{(3)}\right)&\cdots &\operatorname {Cov} \left(X_{1}^{(2)},X_{1}^{(k)}\right)\\\operatorname {Cov} \left(X_{1}^{(3)},X_{1}^{(1)}\right)&\operatorname {Cov} \left(X_{1}^{(3)},X_{1}^{(2)}\right)&\operatorname {Var} \left(X_{1}^{(3)}\right)&\cdots &\operatorname {Cov} \left(X_{1}^{(3)},X_{1}^{(k)}\right)\\\vdots &\vdots &\vdots &\ddots &\vdots \\\operatorname {Cov} \left(X_{1}^{(k)},X_{1}^{(1)}\right)&\operatorname {Cov} \left(X_{1}^{(k)},X_{1}^{(2)}\right)&\operatorname {Cov} \left(X_{1}^{(k)},X_{1}^{(3)}\right)&\cdots &\operatorname {Var} \left(X_{1}^{(k)}\right)\\\end{bmatrix}}~.}
The multivariate central limit theorem can be proved using the Cramér–Wold theorem . [ 9 ]
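A small simulation makes the statement concrete (an illustrative sketch; the dimension, sample sizes, and the uniform-noise construction are arbitrary choices, not part of the theorem): the empirical covariance of the scaled, centered means approaches Σ even though the underlying vectors are not Gaussian.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 500, 10_000
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])       # covariance of a single random vector
L = np.linalg.cholesky(Sigma)
mu = np.array([0.5, -1.0])

# Correlated non-Gaussian vectors: uniform noise with unit variance, mixed by L
U = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(reps, n, 2))
X = mu + U @ L.T
Z = np.sqrt(n) * (X.mean(axis=1) - mu)   # reps draws of sqrt(n)(mean - mu)
print(np.cov(Z.T))                       # approaches Sigma
```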
The rate of convergence is given by the following Berry–Esseen type result:
Theorem [ 10 ] — Let X 1 , … , X n , … {\displaystyle X_{1},\dots ,X_{n},\dots } be independent R d {\displaystyle \mathbb {R} ^{d}} -valued random vectors, each having mean zero. Write S = ∑ i = 1 n X i {\displaystyle S=\sum _{i=1}^{n}X_{i}} and assume Σ = Cov [ S ] {\displaystyle \Sigma =\operatorname {Cov} [S]} is invertible. Let Z ∼ N ( 0 , Σ ) {\displaystyle Z\sim {\mathcal {N}}(0,\Sigma )} be a d {\displaystyle d} -dimensional Gaussian with the same mean and same covariance matrix as S {\displaystyle S} . Then for all convex sets U ⊆ R d {\displaystyle U\subseteq \mathbb {R} ^{d}} ,
| P [ S ∈ U ] − P [ Z ∈ U ] | ≤ C d 1 / 4 γ , {\displaystyle \left|\mathbb {P} [S\in U]-\mathbb {P} [Z\in U]\right|\leq C\,d^{1/4}\gamma ~,} where C {\displaystyle C} is a universal constant, γ = ∑ i = 1 n E [ ‖ Σ − 1 / 2 X i ‖ 2 3 ] {\displaystyle \gamma =\sum _{i=1}^{n}\operatorname {E} \left[\left\|\Sigma ^{-1/2}X_{i}\right\|_{2}^{3}\right]} , and ‖ ⋅ ‖ 2 {\displaystyle \|\cdot \|_{2}} denotes the Euclidean norm on R d {\displaystyle \mathbb {R} ^{d}} .
It is unknown whether the factor d 1 / 4 {\textstyle d^{1/4}} is necessary. [ 11 ]
The generalized central limit theorem (GCLT) was an effort of multiple mathematicians ( Bernstein , Lindeberg , Lévy , Feller , Kolmogorov , and others) over the period from 1920 to 1937. [ 12 ] The first published complete proof of the GCLT was in 1937 by Paul Lévy in French. [ 13 ] An English language version of the complete proof of the GCLT is available in the translation of Gnedenko and Kolmogorov 's 1954 book. [ 14 ]
The statement of the GCLT is as follows: [ 15 ]
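A standard formulation, paraphrased here from Gnedenko and Kolmogorov's treatment rather than quoted from the cited source, is: a non-degenerate random variable Z is α -stable for some 0 < α ≤ 2 if and only if there exist i.i.d. random variables X 1 , X 2 , … and constants a n > 0, b n ∈ R with

```latex
a_n \left( X_1 + \cdots + X_n \right) - b_n \;\xrightarrow{\;d\;}\; Z .
```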
In other words, if sums of independent, identically distributed random variables converge in distribution to some Z , then Z must be a stable distribution .
A useful generalization of a sequence of independent, identically distributed random variables is a mixing random process in discrete time; "mixing" means, roughly, that random variables temporally far apart from one another are nearly independent. Several kinds of mixing are used in ergodic theory and probability theory. See especially strong mixing (also called α-mixing), defined by α ( n ) → 0 {\textstyle \alpha (n)\to 0} where α ( n ) {\textstyle \alpha (n)} is the so-called strong mixing coefficient .
A simplified formulation of the central limit theorem under strong mixing is: [ 16 ]
Theorem — Suppose that { X 1 , … , X n , … } {\textstyle \{X_{1},\ldots ,X_{n},\ldots \}} is stationary and α {\displaystyle \alpha } -mixing with α n = O ( n − 5 ) {\textstyle \alpha _{n}=O\left(n^{-5}\right)} and that E [ X n ] = 0 {\textstyle \operatorname {E} [X_{n}]=0} and E [ X n 12 ] < ∞ {\textstyle \operatorname {E} [X_{n}^{12}]<\infty } . Denote S n = X 1 + ⋯ + X n {\textstyle S_{n}=X_{1}+\cdots +X_{n}} , then the limit
σ 2 = lim n → ∞ E ( S n 2 ) n {\displaystyle \sigma ^{2}=\lim _{n\rightarrow \infty }{\frac {\operatorname {E} \left(S_{n}^{2}\right)}{n}}}
exists, and if σ ≠ 0 {\textstyle \sigma \neq 0} then S n σ n {\textstyle {\frac {S_{n}}{\sigma {\sqrt {n}}}}} converges in distribution to N ( 0 , 1 ) {\textstyle {\mathcal {N}}(0,1)} .
In fact,
σ 2 = E ( X 1 2 ) + 2 ∑ k = 1 ∞ E ( X 1 X 1 + k ) , {\displaystyle \sigma ^{2}=\operatorname {E} \left(X_{1}^{2}\right)+2\sum _{k=1}^{\infty }\operatorname {E} \left(X_{1}X_{1+k}\right),}
where the series converges absolutely.
The assumption σ ≠ 0 {\textstyle \sigma \neq 0} cannot be omitted, since asymptotic normality fails for X n = Y n − Y n − 1 {\textstyle X_{n}=Y_{n}-Y_{n-1}} where Y n {\textstyle Y_{n}} is another stationary sequence .
There is a stronger version of the theorem: [ 17 ] the assumption E [ X n 12 ] < ∞ {\textstyle \operatorname {E} \left[X_{n}^{12}\right]<\infty } is replaced with E [ | X n | 2 + δ ] < ∞ {\textstyle \operatorname {E} \left[{\left|X_{n}\right|}^{2+\delta }\right]<\infty } , and the assumption α n = O ( n − 5 ) {\textstyle \alpha _{n}=O\left(n^{-5}\right)} is replaced with
∑ n α n δ 2 ( 2 + δ ) < ∞ . {\displaystyle \sum _{n}\alpha _{n}^{\frac {\delta }{2(2+\delta )}}<\infty .}
Existence of such δ > 0 {\textstyle \delta >0} ensures the conclusion. For encyclopedic treatment of limit theorems under mixing conditions see ( Bradley 2007 ).
Theorem — Let a martingale M n {\textstyle M_{n}} satisfy
then M n n {\textstyle {\frac {M_{n}}{\sqrt {n}}}} converges in distribution to N ( 0 , 1 ) {\textstyle {\mathcal {N}}(0,1)} as n → ∞ {\textstyle n\to \infty } . [ 18 ] [ 19 ]
The central limit theorem has a proof using characteristic functions . [ 20 ] It is similar to the proof of the (weak) law of large numbers .
Assume { X 1 , … , X n , … } {\textstyle \{X_{1},\ldots ,X_{n},\ldots \}} are independent and identically distributed random variables, each with mean μ {\textstyle \mu } and finite variance σ 2 {\textstyle \sigma ^{2}} . The sum X 1 + ⋯ + X n {\textstyle X_{1}+\cdots +X_{n}} has mean n μ {\textstyle n\mu } and variance n σ 2 {\textstyle n\sigma ^{2}} . Consider the random variable
Z n = X 1 + ⋯ + X n − n μ n σ 2 = ∑ i = 1 n X i − μ n σ 2 = ∑ i = 1 n 1 n Y i , {\displaystyle Z_{n}={\frac {X_{1}+\cdots +X_{n}-n\mu }{\sqrt {n\sigma ^{2}}}}=\sum _{i=1}^{n}{\frac {X_{i}-\mu }{\sqrt {n\sigma ^{2}}}}=\sum _{i=1}^{n}{\frac {1}{\sqrt {n}}}Y_{i},}
where in the last step we defined the new random variables Y i = X i − μ σ {\textstyle Y_{i}={\frac {X_{i}-\mu }{\sigma }}} , each with zero mean and unit variance ( var ( Y ) = 1 {\textstyle \operatorname {var} (Y)=1} ). The characteristic function of Z n {\textstyle Z_{n}} is given by
φ Z n ( t ) = φ ∑ i = 1 n 1 n Y i ( t ) = φ Y 1 ( t n ) φ Y 2 ( t n ) ⋯ φ Y n ( t n ) = [ φ Y 1 ( t n ) ] n , {\displaystyle \varphi _{Z_{n}}\!(t)=\varphi _{\sum _{i=1}^{n}{{\frac {1}{\sqrt {n}}}Y_{i}}}\!(t)\ =\ \varphi _{Y_{1}}\!\!\left({\frac {t}{\sqrt {n}}}\right)\varphi _{Y_{2}}\!\!\left({\frac {t}{\sqrt {n}}}\right)\cdots \varphi _{Y_{n}}\!\!\left({\frac {t}{\sqrt {n}}}\right)\ =\ \left[\varphi _{Y_{1}}\!\!\left({\frac {t}{\sqrt {n}}}\right)\right]^{n},}
where in the last step we used the fact that all of the Y i {\textstyle Y_{i}} are identically distributed. The characteristic function of Y 1 {\textstyle Y_{1}} is, by Taylor's theorem , φ Y 1 ( t n ) = 1 − t 2 2 n + o ( t 2 n ) , ( t n ) → 0 {\displaystyle \varphi _{Y_{1}}\!\left({\frac {t}{\sqrt {n}}}\right)=1-{\frac {t^{2}}{2n}}+o\!\left({\frac {t^{2}}{n}}\right),\quad \left({\frac {t}{\sqrt {n}}}\right)\to 0}
where o ( t 2 / n ) {\textstyle o(t^{2}/n)} is " little o notation " for some function of t {\textstyle t} that goes to zero more rapidly than t 2 / n {\textstyle t^{2}/n} . By the limit of the exponential function ( e x = lim n → ∞ ( 1 + x n ) n {\textstyle e^{x}=\lim _{n\to \infty }\left(1+{\frac {x}{n}}\right)^{n}} ), the characteristic function of Z n {\displaystyle Z_{n}} equals
φ Z n ( t ) = ( 1 − t 2 2 n + o ( t 2 n ) ) n → e − 1 2 t 2 , n → ∞ . {\displaystyle \varphi _{Z_{n}}(t)=\left(1-{\frac {t^{2}}{2n}}+o\left({\frac {t^{2}}{n}}\right)\right)^{n}\rightarrow e^{-{\frac {1}{2}}t^{2}},\quad n\to \infty .}
All of the higher order terms vanish in the limit n → ∞ {\textstyle n\to \infty } . The right hand side equals the characteristic function of a standard normal distribution N ( 0 , 1 ) {\textstyle {\mathcal {N}}(0,1)} , which implies through Lévy's continuity theorem that the distribution of Z n {\textstyle Z_{n}} will approach N ( 0 , 1 ) {\textstyle {\mathcal {N}}(0,1)} as n → ∞ {\textstyle n\to \infty } . Therefore, the sample average
X ¯ n = X 1 + ⋯ + X n n {\displaystyle {\bar {X}}_{n}={\frac {X_{1}+\cdots +X_{n}}{n}}}
is such that
n σ ( X ¯ n − μ ) = Z n {\displaystyle {\frac {\sqrt {n}}{\sigma }}({\bar {X}}_{n}-\mu )=Z_{n}}
converges to the normal distribution N ( 0 , 1 ) {\textstyle {\mathcal {N}}(0,1)} , from which the central limit theorem follows.
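The convergence of characteristic functions can be observed numerically (an illustrative check with arbitrary choices of t , sample counts, and a Uniform(−√3, √3) population of unit variance):

```python
import numpy as np

rng = np.random.default_rng(1)
t = 1.3                                   # fixed argument of the characteristic function
for n in (4, 64, 1024):
    # Y_i uniform on (-sqrt(3), sqrt(3)): mean 0, variance 1
    y = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(10_000, n))
    z_n = y.sum(axis=1) / np.sqrt(n)
    ecf = np.exp(1j * t * z_n).mean()     # empirical characteristic function of Z_n
    print(n, round(ecf.real, 4), round(np.exp(-t**2 / 2), 4))
```

The imaginary part of the empirical characteristic function tends to zero, and the real part approaches e − t 2 /2 as n grows.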
The central limit theorem gives only an asymptotic distribution . As an approximation for a finite number of observations, it is reasonable only close to the peak of the normal distribution; a very large number of observations is required for the approximation to stretch into the tails.
The convergence in the central limit theorem is uniform because the limiting cumulative distribution function is continuous. If the third central moment E [ ( X 1 − μ ) 3 ] {\textstyle \operatorname {E} \left[(X_{1}-\mu )^{3}\right]} exists and is finite, then the speed of convergence is at least on the order of 1 / n {\textstyle 1/{\sqrt {n}}} (see Berry–Esseen theorem ). Stein's method [ 21 ] can be used not only to prove the central limit theorem, but also to provide bounds on the rates of convergence for selected metrics. [ 22 ]
The convergence to the normal distribution is monotonic, in the sense that the entropy of Z n {\textstyle Z_{n}} increases monotonically to that of the normal distribution. [ 23 ]
The central limit theorem applies in particular to sums of independent and identically distributed discrete random variables . A sum of discrete random variables is still a discrete random variable , so that we are confronted with a sequence of discrete random variables whose cumulative probability distribution function converges towards a cumulative probability distribution function corresponding to a continuous variable (namely that of the normal distribution ). This means that if we build a histogram of the realizations of the sum of n independent identical discrete variables, the piecewise-linear curve that joins the centers of the upper faces of the rectangles forming the histogram converges toward a Gaussian curve as n approaches infinity; this relation is known as de Moivre–Laplace theorem . The binomial distribution article details such an application of the central limit theorem in the simple case of a discrete variable taking only two possible values.
Studies have shown that the central limit theorem is subject to several common but serious misconceptions, some of which appear in widely used textbooks. [ 24 ] [ 25 ] [ 26 ] These include:
The law of large numbers as well as the central limit theorem are partial solutions to a general problem: "What is the limiting behavior of S n as n approaches infinity?" In mathematical analysis, asymptotic series are one of the most popular tools employed to approach such questions.
Suppose we have an asymptotic expansion of f ( n ) {\textstyle f(n)} :
f ( n ) = a 1 φ 1 ( n ) + a 2 φ 2 ( n ) + O ( φ 3 ( n ) ) ( n → ∞ ) . {\displaystyle f(n)=a_{1}\varphi _{1}(n)+a_{2}\varphi _{2}(n)+O{\big (}\varphi _{3}(n){\big )}\qquad (n\to \infty ).}
Dividing both parts by φ 1 ( n ) and taking the limit will produce a 1 , the coefficient of the highest-order term in the expansion, which represents the rate at which f ( n ) changes in its leading term.
lim n → ∞ f ( n ) φ 1 ( n ) = a 1 . {\displaystyle \lim _{n\to \infty }{\frac {f(n)}{\varphi _{1}(n)}}=a_{1}.}
Informally, one can say: " f ( n ) grows approximately as a 1 φ 1 ( n ) ". Taking the difference between f ( n ) and its approximation and then dividing by the next term in the expansion, we arrive at a more refined statement about f ( n ) :
lim n → ∞ f ( n ) − a 1 φ 1 ( n ) φ 2 ( n ) = a 2 . {\displaystyle \lim _{n\to \infty }{\frac {f(n)-a_{1}\varphi _{1}(n)}{\varphi _{2}(n)}}=a_{2}.}
Here one can say that the difference between the function and its approximation grows approximately as a 2 φ 2 ( n ) . The idea is that dividing the function by appropriate normalizing functions, and looking at the limiting behavior of the result, can tell us much about the limiting behavior of the original function itself.
Informally, something along these lines happens when the sum, S n , of independent identically distributed random variables, X 1 , ..., X n , is studied in classical probability theory. If each X i has finite mean μ , then by the law of large numbers, S n / n → μ . [ 28 ] If in addition each X i has finite variance σ 2 , then by the central limit theorem,
S n − n μ n → ξ , {\displaystyle {\frac {S_{n}-n\mu }{\sqrt {n}}}\to \xi ,}
where ξ is distributed as N (0, σ 2 ) . This provides values of the first two constants in the informal expansion
S n ≈ μ n + ξ n . {\displaystyle S_{n}\approx \mu n+\xi {\sqrt {n}}.}
In the case where the X i do not have finite mean or variance, convergence of the shifted and rescaled sum can also occur with different centering and scaling factors:
S n − a n b n → Ξ , {\displaystyle {\frac {S_{n}-a_{n}}{b_{n}}}\rightarrow \Xi ,}
or informally
S n ≈ a n + Ξ b n . {\displaystyle S_{n}\approx a_{n}+\Xi b_{n}.}
Distributions Ξ which can arise in this way are called stable . [ 29 ] Clearly, the normal distribution is stable, but there are also other stable distributions, such as the Cauchy distribution , for which the mean or variance are not defined. The scaling factor b n may be proportional to n c , for any c ≥ 1 / 2 ; it may also be multiplied by a slowly varying function of n . [ 30 ] [ 31 ]
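The Cauchy case can be seen directly in simulation (an illustrative sketch; the sample sizes are arbitrary): the sample mean of n standard Cauchy variables is again standard Cauchy, so averaging produces no concentration at all.

```python
import numpy as np

rng = np.random.default_rng(4)
for n in (10, 100, 1000):
    x = rng.standard_cauchy(size=(5000, n))
    m = x.mean(axis=1)             # S_n / n is again standard Cauchy
    q25, q75 = np.percentile(m, [25, 75])
    print(n, q75 - q25)            # interquartile range stays near 2
```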
The law of the iterated logarithm specifies what is happening "in between" the law of large numbers and the central limit theorem. Specifically it says that the normalizing function √ n log log n , intermediate in size between n of the law of large numbers and √ n of the central limit theorem, provides a non-trivial limiting behavior.
The density of the sum of two or more independent variables is the convolution of their densities (if these densities exist). Thus the central limit theorem can be interpreted as a statement about the properties of density functions under convolution: the convolution of a number of density functions tends to the normal density as the number of density functions increases without bound. These theorems require stronger hypotheses than the forms of the central limit theorem given above. Theorems of this type are often called local limit theorems. See Petrov [ 32 ] for a particular local limit theorem for sums of independent and identically distributed random variables .
Since the characteristic function of a convolution is the product of the characteristic functions of the densities involved, the central limit theorem has yet another restatement: the product of the characteristic functions of a number of density functions becomes close to the characteristic function of the normal density as the number of density functions increases without bound, under the conditions stated above. Specifically, an appropriate scaling factor needs to be applied to the argument of the characteristic function.
An equivalent statement can be made about Fourier transforms , since the characteristic function is essentially a Fourier transform.
Let S n be the sum of n random variables. Many central limit theorems provide conditions such that S n / √ Var( S n ) converges in distribution to N (0,1) (the normal distribution with mean 0, variance 1) as n → ∞ . In some cases, it is possible to find a constant σ 2 and a function f ( n ) such that S n /(σ √ n⋅f ( n ) ) converges in distribution to N (0,1) as n → ∞ .
Lemma [ 33 ] — Suppose X 1 , X 2 , … {\displaystyle X_{1},X_{2},\dots } is a sequence of real-valued and strictly stationary random variables with E ( X i ) = 0 {\displaystyle \operatorname {E} (X_{i})=0} for all i {\displaystyle i} , g : [ 0 , 1 ] → R {\displaystyle g:[0,1]\to \mathbb {R} } , and S n = ∑ i = 1 n g ( i n ) X i {\displaystyle S_{n}=\sum _{i=1}^{n}g\left({\tfrac {i}{n}}\right)X_{i}} . Construct
σ 2 = E ( X 1 2 ) + 2 ∑ i = 1 ∞ E ( X 1 X 1 + i ) {\displaystyle \sigma ^{2}=\operatorname {E} (X_{1}^{2})+2\sum _{i=1}^{\infty }\operatorname {E} (X_{1}X_{1+i})}
The logarithm of a product is simply the sum of the logarithms of the factors. Therefore, when the logarithm of a product of random variables that take only positive values approaches a normal distribution, the product itself approaches a log-normal distribution . Many physical quantities (especially mass or length, which are a matter of scale and cannot be negative) are the products of different random factors, so they follow a log-normal distribution. This multiplicative version of the central limit theorem is sometimes called Gibrat's law .
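A quick simulation of Gibrat's law (illustrative only; the growth-factor distribution and horizon are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 200, 100_000
f = rng.uniform(0.8, 1.25, size=(reps, n))   # positive random growth factors
product = f.prod(axis=1)
logs = np.log(product)                       # a sum of i.i.d. logs: the CLT applies

def skewness(a):
    s = (a - a.mean()) / a.std()
    return (s ** 3).mean()

# log(product) is nearly symmetric (approximately normal),
# while the product itself is strongly right-skewed (log-normal-like)
print(skewness(logs), skewness(product))
```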
Whereas the central limit theorem for sums of random variables requires the condition of finite variance, the corresponding theorem for products requires the corresponding condition that the density function be square-integrable. [ 34 ]
Asymptotic normality, that is, convergence to the normal distribution after appropriate shift and rescaling, is a phenomenon much more general than the classical framework treated above, namely, sums of independent random variables (or vectors). New frameworks are revealed from time to time; no single unifying framework is available for now.
Theorem — There exists a sequence ε n ↓ 0 for which the following holds. Let n ≥ 1 , and let random variables X 1 , ..., X n have a log-concave joint density f such that f ( x 1 , ..., x n ) = f (| x 1 |, ..., | x n |) for all x 1 , ..., x n , and E( X k 2 ) = 1 for all k = 1, ..., n . Then the distribution of
X 1 + ⋯ + X n n {\displaystyle {\frac {X_{1}+\cdots +X_{n}}{\sqrt {n}}}}
is ε n -close to N ( 0 , 1 ) {\textstyle {\mathcal {N}}(0,1)} in the total variation distance . [ 35 ]
These two ε n -close distributions have densities (in fact, log-concave densities); thus, the total variation distance between them is the integral of the absolute value of the difference between the densities. Convergence in total variation is stronger than weak convergence.
An important example of a log-concave density is a function constant inside a given convex body and vanishing outside; it corresponds to the uniform distribution on the convex body, which explains the term "central limit theorem for convex bodies".
Another example: f ( x 1 , ..., x n ) = const · exp(−(| x 1 | α + ⋯ + | x n | α ) β ) where α > 1 and αβ > 1 . If β = 1 then f ( x 1 , ..., x n ) factorizes into const · exp (−| x 1 | α ) … exp(−| x n | α ), which means X 1 , ..., X n are independent. In general, however, they are dependent.
The condition f ( x 1 , ..., x n ) = f (| x 1 |, ..., | x n |) ensures that X 1 , ..., X n are of zero mean and uncorrelated ; still, they need not be independent, nor even pairwise independent . Incidentally, pairwise independence cannot replace independence in the classical central limit theorem. [ 36 ]
Here is a Berry–Esseen type result.
Theorem — Let X 1 , ..., X n satisfy the assumptions of the previous theorem, then [ 37 ]
| P ( a ≤ X 1 + ⋯ + X n n ≤ b ) − 1 2 π ∫ a b e − 1 2 t 2 d t | ≤ C n {\displaystyle \left|\mathbb {P} \left(a\leq {\frac {X_{1}+\cdots +X_{n}}{\sqrt {n}}}\leq b\right)-{\frac {1}{\sqrt {2\pi }}}\int _{a}^{b}e^{-{\frac {1}{2}}t^{2}}\,dt\right|\leq {\frac {C}{n}}}
for all a < b ; here C is a universal (absolute) constant . Moreover, for every c 1 , ..., c n ∈ R such that c 1 2 + ⋯ + c n 2 = 1 ,
| P ( a ≤ c 1 X 1 + ⋯ + c n X n ≤ b ) − 1 2 π ∫ a b e − 1 2 t 2 d t | ≤ C ( c 1 4 + ⋯ + c n 4 ) . {\displaystyle \left|\mathbb {P} \left(a\leq c_{1}X_{1}+\cdots +c_{n}X_{n}\leq b\right)-{\frac {1}{\sqrt {2\pi }}}\int _{a}^{b}e^{-{\frac {1}{2}}t^{2}}\,dt\right|\leq C\left(c_{1}^{4}+\dots +c_{n}^{4}\right).}
The distribution of ( X 1 + ⋯ + X n )/ √ n need not be approximately normal (in fact, it can be uniform). [ 38 ] However, the distribution of c 1 X 1 + ⋯ + c n X n is close to N ( 0 , 1 ) {\textstyle {\mathcal {N}}(0,1)} (in the total variation distance) for most vectors ( c 1 , ..., c n ) according to the uniform distribution on the sphere c 1 2 + ⋯ + c n 2 = 1 .
Theorem ( Salem – Zygmund ) — Let U be a random variable distributed uniformly on (0,2π) , and X k = r k cos( n k U + a k ) , where
Then [ 39 ] [ 40 ]
X 1 + ⋯ + X k r 1 2 + ⋯ + r k 2 {\displaystyle {\frac {X_{1}+\cdots +X_{k}}{\sqrt {r_{1}^{2}+\cdots +r_{k}^{2}}}}}
converges in distribution to N ( 0 , 1 2 ) {\textstyle {\mathcal {N}}{\big (}0,{\frac {1}{2}}{\big )}} .
Theorem — Let A 1 , ..., A n be independent random points on the plane R 2 , each having the two-dimensional standard normal distribution. Let K n be the convex hull of these points, and X n the area of K n . Then [ 41 ]
X n − E ( X n ) Var ( X n ) {\displaystyle {\frac {X_{n}-\operatorname {E} (X_{n})}{\sqrt {\operatorname {Var} (X_{n})}}}} converges in distribution to N ( 0 , 1 ) {\textstyle {\mathcal {N}}(0,1)} as n tends to infinity.
The same also holds in all dimensions greater than 2.
The polytope K n is called a Gaussian random polytope.
A similar result holds for the number of vertices (of the Gaussian polytope), the number of edges, and in fact, faces of all dimensions. [ 42 ]
A linear function of a matrix M is a linear combination of its elements (with given coefficients), M ↦ tr( AM ) where A is the matrix of the coefficients; see Trace (linear algebra)#Inner product .
A random orthogonal matrix is said to be distributed uniformly, if its distribution is the normalized Haar measure on the orthogonal group O( n , R ) ; see Rotation matrix#Uniform random rotation matrices .
Theorem — Let M be a random orthogonal n × n matrix distributed uniformly, and A a fixed n × n matrix such that tr( AA *) = n , and let X = tr( AM ) . Then [ 43 ] the distribution of X is close to N ( 0 , 1 ) {\textstyle {\mathcal {N}}(0,1)} in the total variation metric up to 2 √ 3 / ( n − 1 ) .
Theorem — Let random variables X 1 , X 2 , ... ∈ L 2 (Ω) be such that X n → 0 weakly in L 2 (Ω) and X n → 1 weakly in L 1 (Ω) . Then there exist integers n 1 < n 2 < ⋯ such that
X n 1 + ⋯ + X n k k {\displaystyle {\frac {X_{n_{1}}+\cdots +X_{n_{k}}}{\sqrt {k}}}}
converges in distribution to N ( 0 , 1 ) {\textstyle {\mathcal {N}}(0,1)} as k tends to infinity. [ 44 ]
The central limit theorem may be established for the simple random walk on a crystal lattice (an infinite-fold abelian covering graph over a finite graph), and is used for design of crystal structures. [ 45 ] [ 46 ]
A simple example of the central limit theorem is rolling many identical, unbiased dice. The distribution of the sum (or average) of the rolled numbers will be well approximated by a normal distribution. Since real-world quantities are often the balanced sum of many unobserved random events, the central limit theorem also provides a partial explanation for the prevalence of the normal probability distribution. It also justifies the approximation of large-sample statistics to the normal distribution in controlled experiments.
Regression analysis , and in particular ordinary least squares , specifies that a dependent variable depends according to some function upon one or more independent variables , with an additive error term . Various types of statistical inference on the regression assume that the error term is normally distributed. This assumption can be justified by assuming that the error term is actually the sum of many independent error terms; even if the individual error terms are not normally distributed, by the central limit theorem their sum can be well approximated by a normal distribution.
Given its importance to statistics, a number of papers and computer packages are available that demonstrate the convergence involved in the central limit theorem. [ 47 ]
Dutch mathematician Henk Tijms writes: [ 48 ]
The central limit theorem has an interesting history. The first version of this theorem was postulated by the French-born mathematician Abraham de Moivre who, in a remarkable article published in 1733, used the normal distribution to approximate the distribution of the number of heads resulting from many tosses of a fair coin. This finding was far ahead of its time, and was nearly forgotten until the famous French mathematician Pierre-Simon Laplace rescued it from obscurity in his monumental work Théorie analytique des probabilités , which was published in 1812. Laplace expanded De Moivre's finding by approximating the binomial distribution with the normal distribution. But as with De Moivre, Laplace's finding received little attention in his own time. It was not until the nineteenth century was at an end that the importance of the central limit theorem was discerned, when, in 1901, Russian mathematician Aleksandr Lyapunov defined it in general terms and proved precisely how it worked mathematically. Nowadays, the central limit theorem is considered to be the unofficial sovereign of probability theory.
Sir Francis Galton described the Central Limit Theorem in this way: [ 49 ]
I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the "Law of Frequency of Error". The law would have been personified by the Greeks and deified, if they had known of it. It reigns with serenity and in complete self-effacement, amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway. It is the supreme law of Unreason. Whenever a large sample of chaotic elements are taken in hand and marshalled in the order of their magnitude, an unsuspected and most beautiful form of regularity proves to have been latent all along.
The actual term "central limit theorem" (in German: "zentraler Grenzwertsatz") was first used by George Pólya in 1920 in the title of a paper. [ 50 ] [ 51 ] Pólya referred to the theorem as "central" due to its importance in probability theory. According to Le Cam, the French school of probability interprets the word central in the sense that "it describes the behaviour of the centre of the distribution as opposed to its tails". [ 51 ] The abstract of the paper On the central limit theorem of calculus of probability and the problem of moments by Pólya [ 50 ] in 1920 translates as follows.
The occurrence of the Gaussian probability density e − x 2 {\textstyle e^{-x^{2}}} in repeated experiments, in errors of measurements, which result in the combination of very many and very small elementary errors, in diffusion processes etc., can be explained, as is well-known, by the very same limit theorem, which plays a central role in the calculus of probability. The actual discoverer of this limit theorem is to be named Laplace; it is likely that its rigorous proof was first given by Tschebyscheff and its sharpest formulation can be found, as far as I am aware of, in an article by Liapounoff . ...
A thorough account of the theorem's history, detailing Laplace's foundational work, as well as Cauchy 's, Bessel 's and Poisson 's contributions, is provided by Hald. [ 52 ] Two historical accounts, one covering the development from Laplace to Cauchy, the second the contributions by von Mises , Pólya , Lindeberg , Lévy , and Cramér during the 1920s, are given by Hans Fischer. [ 53 ] Le Cam describes a period around 1935. [ 51 ] Bernstein [ 54 ] presents a historical discussion focusing on the work of Pafnuty Chebyshev and his students Andrey Markov and Aleksandr Lyapunov that led to the first proofs of the CLT in a general setting.
A curious footnote to the history of the Central Limit Theorem is that a proof of a result similar to the 1922 Lindeberg CLT was the subject of Alan Turing 's 1934 Fellowship Dissertation for King's College at the University of Cambridge . Only after submitting the work did Turing learn it had already been proved. Consequently, Turing's dissertation was not published. [ 55 ] | https://en.wikipedia.org/wiki/Lyapunov's_central_limit_theorem |
In the mathematics of dynamical systems , the concept of Lyapunov dimension was suggested by Kaplan and Yorke [ 1 ] for estimating the Hausdorff dimension of attractors .
The concept has since been developed and rigorously justified in a number of papers, and nowadays various approaches to the definition of the Lyapunov dimension are in use. Note that attractors with non-integer Hausdorff dimension are called strange attractors . [ 2 ] Since the direct numerical computation of the Hausdorff dimension of attractors is often a problem of high numerical complexity, estimates via the Lyapunov dimension became widely spread.
The Lyapunov dimension was named [ 3 ] after the Russian mathematician Aleksandr Lyapunov because of the close connection with the Lyapunov exponents .
Consider a dynamical system ( { φ t } t ≥ 0 , ( U ⊆ R n , ‖ ⋅ ‖ ) ) {\displaystyle {\big (}\{\varphi ^{t}\}_{t\geq 0},(U\subseteq \mathbb {R} ^{n},\|\cdot \|){\big )}} , where φ t {\displaystyle \varphi ^{t}} is the shift operator along the solutions, φ t ( u 0 ) = u ( t , u 0 ) {\displaystyle \varphi ^{t}(u_{0})=u(t,u_{0})} , of the ODE u ˙ = f ( u ) {\displaystyle {\dot {u}}=f({u})} , t ≥ 0 {\displaystyle t\geq 0} , or of the difference equation u ( t + 1 ) = f ( u ( t ) ) {\displaystyle {u}(t+1)=f({u}(t))} , t = 0 , 1 , . . . {\displaystyle t=0,1,...} , with continuously differentiable vector-function f {\displaystyle f} .
Then D φ t ( u ) {\displaystyle D\varphi ^{t}(u)} is the fundamental matrix of solutions of the linearized system, and σ i ( t , u ) = σ i ( D φ t ( u ) ) , i = 1... n {\displaystyle \sigma _{i}(t,u)=\sigma _{i}(D\varphi ^{t}(u)),\ i=1...n} denote its singular values (counted with respect to their algebraic multiplicity ), ordered decreasingly, for any u {\displaystyle u} and t {\displaystyle t} .
The concept of finite-time Lyapunov dimension and related definition of the Lyapunov dimension, developed in the works by N. Kuznetsov , [ 4 ] [ 5 ] is convenient for the numerical experiments where only finite time can be observed.
Consider an analog of the Kaplan–Yorke formula for the finite-time Lyapunov exponents:
with respect to the ordered set of finite-time Lyapunov exponents { L E i ( t , u ) } i = 1 n = { 1 t ln σ i ( t , u ) } i = 1 n {\displaystyle \{{\rm {LE}}_{i}(t,u)\}_{i=1}^{n}=\{{\frac {1}{t}}\ln \sigma _{i}(t,u)\}_{i=1}^{n}} at the point u {\displaystyle u} .
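Written in the same form as the ergodic-case formula quoted below, the finite-time analog reads:

```latex
d_{\mathrm{KY}}\big( \{ \mathrm{LE}_i(t,u) \}_{i=1}^{n} \big)
  = j(t,u)
  + \frac{ \mathrm{LE}_1(t,u) + \cdots + \mathrm{LE}_{j(t,u)}(t,u) }
         { \left| \mathrm{LE}_{j(t,u)+1}(t,u) \right| },
```

where j ( t , u ) is the largest index such that the sum of the first j finite-time exponents is still non-negative.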
The finite-time Lyapunov dimension of dynamical system with respect
to invariant set K {\displaystyle K} is defined as follows
In this approach the use of the analog of Kaplan–Yorke formula
is rigorously justified by the Douady–Oesterlé theorem, [ 6 ] which proves that for any fixed t > 0 {\displaystyle t>0} the finite-time Lyapunov dimension for a closed bounded invariant set K {\displaystyle K} is an upper estimate of the Hausdorff dimension:
Looking for the best such estimate inf t > 0 dim L ( t , K ) = lim inf t → + ∞ sup u ∈ K dim L ( t , u ) {\displaystyle \inf _{t>0}\dim _{\rm {L}}(t,K)=\liminf _{t\to +\infty }\sup \limits _{u\in K}\dim _{\rm {L}}(t,u)} , the Lyapunov dimension is defined as follows: [ 4 ] [ 5 ]
The possibility of changing the order of the time limit and the supremum over the set is discussed, e.g., in [ 7 ] [ 8 ] .
Note that the above defined Lyapunov dimension is invariant under Lipschitz diffeomorphisms . [ 4 ] [ 9 ]
Let the Jacobian matrix D f ( u eq ) {\displaystyle Df(u_{\text{eq}})} at one of the equilibria have simple real eigenvalues: { λ i ( u eq ) } i = 1 n , λ i ( u eq ) ≥ λ i + 1 ( u eq ) {\displaystyle \{\lambda _{i}(u_{\text{eq}})\}_{i=1}^{n},\lambda _{i}(u_{\text{eq}})\geq \lambda _{i+1}(u_{\text{eq}})} ,
then
If the supremum of the local Lyapunov dimensions over the global attractor, which involves all equilibria, is achieved at an equilibrium point, then this allows one to obtain an analytical formula for the exact Lyapunov dimension of the global attractor (see the corresponding Eden's conjecture ).
Following the statistical physics approach and assuming ergodicity, the Lyapunov dimension of the attractor is estimated [ 1 ] by the limit value of the local Lyapunov dimension lim t → + ∞ dim L ( t , u 0 ) {\displaystyle \lim _{t\to +\infty }\dim _{\rm {L}}(t,u_{0})} of a typical trajectory belonging to the attractor.
In this case { lim t → + ∞ L E i ( t , u 0 ) } i n = { L E i ( u 0 ) } 1 n {\displaystyle \{\lim \limits _{t\to +\infty }{\rm {LE}}_{i}(t,u_{0})\}_{i}^{n}=\{{\rm {LE}}_{i}(u_{0})\}_{1}^{n}} and dim L u 0 = d K Y ( { L E i ( u 0 ) } i = 1 n ) = j ( u 0 ) + L E 1 ( u 0 ) + ⋯ + L E j ( u 0 ) ( u 0 ) | L E j ( u 0 ) + 1 ( u 0 ) | {\displaystyle \dim _{\rm {L}}u_{0}=d_{\rm {KY}}(\{{\rm {LE}}_{i}(u_{0})\}_{i=1}^{n})=j(u_{0})+{\frac {{\rm {LE}}_{1}(u_{0})+\cdots +{\rm {LE}}_{j(u_{0})}(u_{0})}{|{\rm {LE}}_{j(u_{0})+1}(u_{0})|}}} .
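The bookkeeping in the Kaplan–Yorke formula is easy to mechanize. The following helper (a minimal sketch; the Lorenz exponents in the usage line are approximate literature values, not computed here) returns the dimension from a list of Lyapunov exponents:

```python
def kaplan_yorke_dimension(exponents):
    """Kaplan-Yorke dimension from Lyapunov exponents, largest first."""
    le = sorted(exponents, reverse=True)
    total, j = 0.0, 0
    for lam in le:            # find the largest j with a non-negative partial sum
        if total + lam < 0:
            break
        total += lam
        j += 1
    if j == 0:
        return 0.0
    if j == len(le):          # partial sums never become negative
        return float(len(le))
    return j + total / abs(le[j])

# Approximate exponents for the classic Lorenz attractor (sigma=10, rho=28, beta=8/3)
print(kaplan_yorke_dimension([0.906, 0.0, -14.572]))   # ~2.06
```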
From a practical point of view, the rigorous use of the ergodic Oseledec theorem , the verification that the considered trajectory u ( t , u 0 ) {\displaystyle u(t,u_{0})} is a typical trajectory, and the use of the corresponding Kaplan–Yorke formula is a challenging task (see, e.g., the discussions in [ 10 ] ).
The exact limit values of the finite-time Lyapunov exponents, if they exist and are the same for all u 0 ∈ U {\displaystyle u_{0}\in U} , are called the absolute ones [ 3 ] { lim t → + ∞ L E i ( t , u 0 ) } i n = { L E i ( u 0 ) } 1 n ≡ { L E i } 1 n {\displaystyle \{\lim \limits _{t\to +\infty }{\rm {LE}}_{i}(t,u_{0})\}_{i}^{n}=\{{\rm {LE}}_{i}(u_{0})\}_{1}^{n}\equiv \{{\rm {LE}}_{i}\}_{1}^{n}} and are used in the Kaplan–Yorke formula .
Examples of the rigorous use of ergodic theory for the computation of the Lyapunov exponents and dimension can be found in [ 11 ] [ 12 ] [ 13 ] . | https://en.wikipedia.org/wiki/Lyapunov_dimension
In mathematics , the Lyapunov exponent or Lyapunov characteristic exponent of a dynamical system is a quantity that characterizes the rate of separation of infinitesimally close trajectories . Quantitatively, two trajectories in phase space with initial separation vector δ 0 {\displaystyle {\boldsymbol {\delta }}_{0}} diverge (provided that the divergence can be treated within the linearized approximation) at a rate given by
| δ ( t ) | ≈ e λ t | δ 0 | {\displaystyle |{\boldsymbol {\delta }}(t)|\approx e^{\lambda t}|{\boldsymbol {\delta }}_{0}|}
where λ {\displaystyle \lambda } is the Lyapunov exponent.
The rate of separation can be different for different orientations of initial separation vector. Thus, there is a spectrum of Lyapunov exponents —equal in number to the dimensionality of the phase space. It is common to refer to the largest one as the maximal Lyapunov exponent (MLE), because it determines a notion of predictability for a dynamical system. A positive MLE is usually taken as an indication that the system is chaotic (provided some other conditions are met, e.g., phase space compactness). Note that an arbitrary initial separation vector will typically contain some component in the direction associated with the MLE, and because of the exponential growth rate, the effect of the other exponents will be obliterated over time.
The exponent is named after Aleksandr Lyapunov .
The maximal Lyapunov exponent can be defined as follows: λ = lim t → ∞ lim | δ 0 | → 0 1 t ln | δ ( t ) | | δ 0 | {\displaystyle \lambda =\lim _{t\to \infty }\lim _{|{\boldsymbol {\delta }}_{0}|\to 0}{\frac {1}{t}}\ln {\frac {|{\boldsymbol {\delta }}(t)|}{|{\boldsymbol {\delta }}_{0}|}}}
The limit | δ 0 | → 0 {\displaystyle |{\boldsymbol {\delta }}_{0}|\to 0} ensures the validity of the linear approximation
at any time. [ 1 ]
For a discrete-time system (a map or fixed-point iteration) x n + 1 = f ( x n ) {\displaystyle x_{n+1}=f(x_{n})} , for an orbit starting with x 0 {\displaystyle x_{0}} this translates into: λ ( x 0 ) = lim n → ∞ 1 n ∑ i = 0 n − 1 ln | f ′ ( x i ) | {\displaystyle \lambda (x_{0})=\lim _{n\to \infty }{\frac {1}{n}}\sum _{i=0}^{n-1}\ln |f'(x_{i})|}
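The time average above is straightforward to evaluate numerically for a one-dimensional map. The following is a minimal sketch (illustrative code, not from the cited sources; the function name and parameter values are chosen for the example) that estimates the exponent of the logistic map f(x) = rx(1 − x), for which the exact value at r = 4 is ln 2 ≈ 0.693:

```python
import math

def lyapunov_logistic(r, x0, n_transient=1_000, n_iter=100_000):
    """Estimate the Lyapunov exponent of the logistic map f(x) = r*x*(1-x)
    as the orbit average of ln|f'(x_i)|, with f'(x) = r*(1 - 2x)."""
    x = x0
    for _ in range(n_transient):        # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n_iter

print(lyapunov_logistic(4.0, 0.2))      # ~0.693 = ln 2 for r = 4
```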
For a dynamical system with evolution equation x ˙ i = f i ( x ) {\displaystyle {\dot {x}}_{i}=f_{i}(x)} in an n –dimensional phase space, the spectrum of Lyapunov exponents { λ 1 , λ 2 , … , λ n } , {\displaystyle \{\lambda _{1},\lambda _{2},\ldots ,\lambda _{n}\}\,,} in general, depends on the starting point x 0 {\displaystyle x_{0}} . However, we will usually be interested in the attractor (or attractors) of a dynamical system, and there will normally be one set of exponents associated with each attractor. The choice of starting point may determine which attractor the system ends up on, if there is more than one. (For Hamiltonian systems, which do not have attractors, this is not a concern.) The Lyapunov exponents describe the behavior of vectors in the tangent space of the phase space and are defined from the Jacobian matrix J i j ( t ) = d f i ( x ) d x j | x ( t ) {\displaystyle J_{ij}(t)=\left.{\frac {df_{i}(x)}{dx_{j}}}\right|_{x(t)}} . This Jacobian defines the evolution of the tangent vectors, given by the matrix Y {\displaystyle Y} , via the equation Y ˙ = J Y {\displaystyle {\dot {Y}}=JY} with the initial condition Y i j ( 0 ) = δ i j {\displaystyle Y_{ij}(0)=\delta _{ij}} . The matrix Y {\displaystyle Y} describes how a small change at the point x ( 0 ) {\displaystyle x(0)} propagates to the final point x ( t ) {\displaystyle x(t)} . The limit Λ = lim t → ∞ 1 2 t log ( Y ( t ) Y T ( t ) ) {\displaystyle \Lambda =\lim _{t\rightarrow \infty }{\frac {1}{2t}}\log(Y(t)Y^{T}(t))} defines a matrix Λ {\displaystyle \Lambda } (the conditions for the existence of the limit are given by the Oseledets theorem ). The Lyapunov exponents λ i {\displaystyle \lambda _{i}} are defined by the eigenvalues of Λ {\displaystyle \Lambda } .
The set of Lyapunov exponents will be the same for almost all starting points of an ergodic component of the dynamical system.
To introduce the Lyapunov exponent, consider a fundamental matrix X ( t ) {\displaystyle X(t)} consisting of the linearly independent solutions of the first-order approximation of the system; e.g., for the linearization along a stationary solution x 0 {\displaystyle x_{0}} of a continuous system, the fundamental matrix is exp ( d f t ( x ) d x | x 0 t ) {\displaystyle \exp \left(\left.{\frac {df^{t}(x)}{dx}}\right|_{x_{0}}t\right)} . The singular values { α j ( X ( t ) ) } 1 n {\displaystyle \{\alpha _{j}{\big (}X(t){\big )}\}_{1}^{n}} of the matrix X ( t ) {\displaystyle X(t)} are the square roots of the eigenvalues of the matrix X ( t ) ∗ X ( t ) {\displaystyle X(t)^{*}X(t)} .
The largest Lyapunov exponent λ m a x {\displaystyle \lambda _{\mathrm {max} }} is as follows [ 2 ] λ m a x = max j lim sup t → ∞ 1 t ln α j ( X ( t ) ) . {\displaystyle \lambda _{\mathrm {max} }=\max \limits _{j}\limsup _{t\rightarrow \infty }{\frac {1}{t}}\ln \alpha _{j}{\big (}X(t){\big )}.} Lyapunov proved that if the system of the first approximation is regular (e.g., all systems with constant and periodic coefficients are regular) and its largest Lyapunov exponent is negative, then the solution of the original system is asymptotically Lyapunov stable .
Later, it was stated by O. Perron that the requirement of regularity of the first approximation is substantial.
In 1930 O. Perron constructed an example of a second-order system in which the first approximation has negative Lyapunov exponents along a zero solution of the original system but, at the same time, this zero solution of the original nonlinear system is Lyapunov unstable. Furthermore, in a certain neighborhood of this zero solution almost all solutions of the original system have positive Lyapunov exponents. It is also possible to construct a reverse example in which the first approximation has positive Lyapunov exponents along a zero solution of the original system while this zero solution of the original nonlinear system is Lyapunov stable. [ 3 ] [ 4 ] The effect of sign inversion of the Lyapunov exponents of solutions of the original system and of the system of first approximation with the same initial data was subsequently called the Perron effect. [ 3 ] [ 4 ]
Perron's counterexample shows that a negative largest Lyapunov exponent does not, in general, indicate stability, and that
a positive largest Lyapunov exponent does not, in general, indicate chaos.
Therefore, time-varying linearization requires additional justification. [ 4 ]
If the system is conservative (i.e., there is no dissipation ), a volume element of the phase space will stay the same along a trajectory. Thus the sum of all Lyapunov exponents must be zero. If the system is dissipative, the sum of Lyapunov exponents is negative.
If the system is a flow and the trajectory does not converge to a single point, one exponent is always zero—the Lyapunov exponent corresponding to the eigenvalue of Λ {\displaystyle \Lambda } with an eigenvector in the direction of the flow.
The Lyapunov spectrum can be used to give an estimate of the rate of entropy production,
of the fractal dimension , and of the Hausdorff dimension of the considered dynamical system . [ 5 ] In particular, from the knowledge of the Lyapunov spectrum it is possible to obtain the so-called Lyapunov dimension (or Kaplan–Yorke dimension ) D K Y {\displaystyle D_{KY}} , which is defined as follows: D K Y = k + ∑ i = 1 k λ i | λ k + 1 | {\displaystyle D_{KY}=k+\sum _{i=1}^{k}{\frac {\lambda _{i}}{|\lambda _{k+1}|}}} where k {\displaystyle k} is the maximum integer such that the sum of the k {\displaystyle k} largest exponents is still non-negative. D K Y {\displaystyle D_{KY}} represents an upper bound for the information dimension of the system. [ 6 ] Moreover, the sum of all the positive Lyapunov exponents gives an estimate of the Kolmogorov–Sinai entropy according to Pesin's theorem. [ 7 ] Along with widely used numerical methods for estimating and computing the Lyapunov dimension there is an effective analytical approach, which is based on the direct Lyapunov method with special Lyapunov-like functions. [ 8 ] The Lyapunov exponents of a bounded trajectory and the Lyapunov dimension of an attractor are invariant under diffeomorphisms of the phase space. [ 9 ]
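The Kaplan–Yorke formula is easy to apply once a spectrum is available. The following sketch (illustrative code; the Lorenz spectrum values are approximate figures commonly quoted in the literature) computes D_KY from a list of exponents:

```python
def kaplan_yorke_dimension(spectrum):
    """Kaplan-Yorke dimension: D = k + (sum of k largest exponents)/|lambda_{k+1}|,
    with k the largest index for which the partial sum is non-negative."""
    exps = sorted(spectrum, reverse=True)
    partial, k = 0.0, 0
    for lam in exps:
        if partial + lam >= 0.0:
            partial += lam
            k += 1
        else:
            break
    if k == len(exps):      # all partial sums non-negative
        return float(k)
    if k == 0:              # even the largest exponent is negative
        return 0.0
    return k + partial / abs(exps[k])

# Approximate Lorenz spectrum at the classic parameters:
print(kaplan_yorke_dimension([0.906, 0.0, -14.572]))   # ~2.06
```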
The multiplicative inverse of the largest Lyapunov exponent is sometimes referred to in the literature as the Lyapunov time , and defines the characteristic e -folding time. For chaotic orbits, the Lyapunov time will be finite, whereas for regular orbits it will be infinite.
Generally the calculation of Lyapunov exponents, as defined above, cannot be carried out analytically, and in most cases one must resort to numerical techniques. An early example, which also constituted the first demonstration of the exponential divergence of chaotic trajectories, was carried out by R. H. Miller in 1964. [ 10 ] Currently, the most commonly used numerical procedure estimates the Λ {\displaystyle \Lambda } matrix based on averaging several finite-time approximations of the limit defining Λ {\displaystyle \Lambda } .
One of the most used and effective numerical techniques to calculate the Lyapunov spectrum for a smooth dynamical system relies on periodic Gram–Schmidt orthonormalization of the Lyapunov vectors to avoid a misalignment of all the vectors along the direction of maximal expansion. [ 11 ] [ 12 ] [ 13 ] [ 14 ] The Lyapunov spectra of various models are described in [ 15 ] , and source codes for nonlinear systems such as the Hénon map, the Lorenz equations, and a delay differential equation are given in [ 16 ] [ 17 ] [ 18 ] .
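A minimal sketch of this re-orthonormalization scheme for the Hénon map is given below (illustrative code, not taken from the cited sources; the QR decomposition plays the role of the Gram–Schmidt step, and the parameter values a = 1.4, b = 0.3 are the classical ones). The two exponents should come out near 0.42 and −1.62, whose sum equals ln b:

```python
import numpy as np

def henon_lyapunov_spectrum(a=1.4, b=0.3, n=100_000):
    """Lyapunov spectrum of the Henon map (x, y) -> (1 - a x^2 + y, b x)
    via periodic QR re-orthonormalization of the tangent dynamics."""
    x, y = 0.1, 0.1
    Q = np.eye(2)
    sums = np.zeros(2)
    for _ in range(n):
        J = np.array([[-2.0 * a * x, 1.0],   # Jacobian at the current point
                      [b, 0.0]])
        x, y = 1.0 - a * x * x + y, b * x    # advance the orbit
        Q, R = np.linalg.qr(J @ Q)           # re-orthonormalize tangent vectors
        sums += np.log(np.abs(np.diag(R)))   # accumulate local growth factors
    return sums / n

print(henon_lyapunov_spectrum())             # ~[0.42, -1.62]
```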
For the calculation of Lyapunov exponents from limited experimental data, various methods have been proposed. However, there are many difficulties with applying these methods and such problems should be approached with care. The main difficulty is that the data does not fully explore the phase space, rather it is confined to the attractor which has very limited (if any) extension along certain directions. These thinner or more singular directions within the data set are the ones associated with the more negative exponents. The use of nonlinear mappings to model the evolution of small displacements from the attractor has been shown to dramatically improve the ability to recover the Lyapunov spectrum, [ 19 ] [ 20 ] provided the data has a very low level of noise. The singular nature of the data and its connection to the more negative exponents has also been explored. [ 21 ]
Whereas the (global) Lyapunov exponent gives a measure for the total predictability of a system, it is sometimes of interest to estimate the local predictability around a point x 0 in phase space. This may be done through the eigenvalues of the Jacobian matrix J 0 ( x 0 ) . These eigenvalues are also called local Lyapunov exponents. [ 22 ] Local exponents are not invariant under a nonlinear change of coordinates.
This term is normally used regarding synchronization of chaos , in which there are two systems that are coupled, usually in a unidirectional manner so that there is a drive (or master) system and a response (or slave) system. The conditional exponents are those of the response system with the drive system treated as simply the source of a (chaotic) drive signal. Synchronization occurs when all of the conditional exponents are negative. [ 23 ] | https://en.wikipedia.org/wiki/Lyapunov_exponent |
Various types of stability may be discussed for the solutions of differential equations or difference equations describing dynamical systems . The most important type is that concerning the stability of solutions near to a point of equilibrium. This may be discussed by the theory of Aleksandr Lyapunov . In simple terms, if the solutions that start out near an equilibrium point x e {\displaystyle x_{e}} stay near x e {\displaystyle x_{e}} forever, then x e {\displaystyle x_{e}} is Lyapunov stable . More strongly, if x e {\displaystyle x_{e}} is Lyapunov stable and all solutions that start out near x e {\displaystyle x_{e}} converge to x e {\displaystyle x_{e}} , then x e {\displaystyle x_{e}} is said to be asymptotically stable (see asymptotic analysis ). The notion of exponential stability guarantees a minimal rate of decay, i.e., an estimate of how quickly the solutions converge. The idea of Lyapunov stability can be extended to infinite-dimensional manifolds, where it is known as structural stability , which concerns the behavior of different but "nearby" solutions to differential equations. Input-to-state stability (ISS) applies Lyapunov notions to systems with inputs.
Lyapunov stability is named after Aleksandr Mikhailovich Lyapunov , a Russian mathematician who defended the thesis The General Problem of Stability of Motion at Kharkov University in 1892. [ 1 ] A. M. Lyapunov was a pioneer in successful endeavors to develop a global approach to the analysis of the stability of nonlinear dynamical systems, by comparison with the widely spread local method of linearizing them about points of equilibrium. His work, initially published in Russian and then translated to French, received little attention for many years. The mathematical theory of stability of motion, founded by A. M. Lyapunov, considerably anticipated the time for its implementation in science and technology. Moreover, Lyapunov himself did not make applications in this field, his own interest being in the stability of rotating fluid masses with astronomical application. He did not have doctoral students who followed his research in the field of stability, and his own destiny was terribly tragic because of his suicide in 1918 [ citation needed ] . For several decades the theory of stability sank into complete oblivion. The Russian-Soviet mathematician and mechanician Nikolay Gur'yevich Chetaev, working at the Kazan Aviation Institute in the 1930s, was the first to realize the incredible magnitude of the discovery made by A. M. Lyapunov. The contribution to the theory made by N. G. Chetaev [ 2 ] was so significant that many mathematicians, physicists and engineers consider him Lyapunov's direct successor and the next-in-line scientific descendant in the creation and development of the mathematical theory of stability.
The interest in it suddenly skyrocketed during the Cold War period when the so-called "Second Method of Lyapunov" (see below) was found to be applicable to the stability of aerospace guidance systems which typically contain strong nonlinearities not treatable by other methods. A large number of publications appeared then and since in the control and systems literature. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] More recently the concept of the Lyapunov exponent (related to Lyapunov's First Method of discussing stability) has received wide interest in connection with chaos theory . Lyapunov stability methods have also been applied to finding equilibrium solutions in traffic assignment problems. [ 8 ]
Consider an autonomous nonlinear dynamical system x ˙ = f ( x ( t ) ) , x ( 0 ) = x 0 , {\displaystyle {\dot {x}}=f(x(t)),\quad x(0)=x_{0},}
where x ( t ) ∈ D ⊆ R n {\displaystyle x(t)\in {\mathcal {D}}\subseteq \mathbb {R} ^{n}} denotes the system state vector , D {\displaystyle {\mathcal {D}}} an open set containing the origin, and f : D → R n {\displaystyle f:{\mathcal {D}}\rightarrow \mathbb {R} ^{n}} is a continuous vector field on D {\displaystyle {\mathcal {D}}} . Suppose f {\displaystyle f} has an equilibrium at x e {\displaystyle x_{e}} , so that f ( x e ) = 0 {\displaystyle f(x_{e})=0} . Then: the equilibrium is Lyapunov stable if for every ε > 0 there exists δ > 0 such that ‖ x ( 0 ) − x e ‖ < δ implies ‖ x ( t ) − x e ‖ < ε for all t ≥ 0; it is asymptotically stable if it is Lyapunov stable and there exists δ > 0 such that ‖ x ( 0 ) − x e ‖ < δ implies lim t → ∞ ‖ x ( t ) − x e ‖ = 0 {\displaystyle \lim _{t\to \infty }\|x(t)-x_{e}\|=0} ; and it is exponentially stable if it is asymptotically stable and there exist α , β , δ > 0 such that ‖ x ( 0 ) − x e ‖ < δ implies ‖ x ( t ) − x e ‖ ≤ α ‖ x ( 0 ) − x e ‖ e − β t {\displaystyle \|x(t)-x_{e}\|\leq \alpha \|x(0)-x_{e}\|e^{-\beta t}} for all t ≥ 0.
Conceptually, the meanings of the above terms are the following: Lyapunov stability of an equilibrium means that solutions starting "close enough" to the equilibrium (within a distance δ from it) remain "close enough" forever (within a distance ε from it); asymptotic stability means that solutions that start close enough not only remain close enough but also eventually converge to the equilibrium; and exponential stability means that solutions not only converge, but converge at least as fast as the known rate α ‖ x ( 0 ) − x e ‖ e − β t {\displaystyle \alpha \|x(0)-x_{e}\|e^{-\beta t}} .
The trajectory x ( t ) = ϕ ( t ) {\displaystyle x(t)=\phi (t)} is (locally) attractive if ‖ x ( t ) − ϕ ( t ) ‖ → 0 as t → ∞ {\displaystyle \|x(t)-\phi (t)\|\to 0{\text{ as }}t\to \infty }
for all trajectories x ( t ) {\displaystyle x(t)} that start close enough to ϕ ( t ) {\displaystyle \phi (t)} , and globally attractive if this property holds for all trajectories.
That is, the trajectory is attractive if x belongs to the interior of its stable manifold ; it is asymptotically stable if it is both attractive and stable. (There are examples showing that attractivity does not imply asymptotic stability. [ 9 ] [ 10 ] [ 11 ] Such examples are easy to create using homoclinic connections .)
If the Jacobian of the dynamical system at an equilibrium happens to be a stability matrix (i.e., if the real part of each eigenvalue is strictly negative), then the equilibrium is asymptotically stable.
Instead of considering stability only near an equilibrium point (a constant solution x ( t ) = x e {\displaystyle x(t)=x_{e}} ), one can formulate similar definitions of stability near an arbitrary solution x ( t ) = ϕ ( t ) {\displaystyle x(t)=\phi (t)} . However, one can reduce the more general case to that of an equilibrium by a change of variables called a "system of deviations". Define y = x − ϕ ( t ) {\displaystyle y=x-\phi (t)} , obeying the differential equation: y ˙ = f ( y + ϕ ( t ) ) − f ( ϕ ( t ) ) . {\displaystyle {\dot {y}}=f(y+\phi (t))-f(\phi (t)).}
This is no longer an autonomous system, but it has a guaranteed equilibrium point at y = 0 {\displaystyle y=0} whose stability is equivalent to the stability of the original solution x ( t ) = ϕ ( t ) {\displaystyle x(t)=\phi (t)} .
Lyapunov, in his original 1892 work, proposed two methods for demonstrating stability . [ 1 ] The first method developed the solution in a series which was then proved convergent within limits. The second method, which is now referred to as the Lyapunov stability criterion or the Direct Method, makes use of a Lyapunov function V(x) which has an analogy to the potential function of classical dynamics. It is introduced as follows for a system x ˙ = f ( x ) {\displaystyle {\dot {x}}=f(x)} having a point of equilibrium at x = 0 {\displaystyle x=0} . Consider a function V : R n → R {\displaystyle V:\mathbb {R} ^{n}\rightarrow \mathbb {R} } such that V ( x ) = 0 if and only if x = 0 ; V ( x ) > 0 if and only if x ≠ 0 ; and V ˙ ( x ) = d d t V ( x ( t ) ) = ∇ V ⋅ f ( x ) ≤ 0 {\displaystyle {\dot {V}}(x)={\frac {d}{dt}}V(x(t))=\nabla V\cdot f(x)\leq 0} for all x ≠ 0 .
Then V(x) is called a Lyapunov function and the system is stable in the sense of Lyapunov. (Note that V ( 0 ) = 0 {\displaystyle V(0)=0} is required; otherwise for example V ( x ) = 1 / ( 1 + | x | ) {\displaystyle V(x)=1/(1+|x|)} would "prove" that x ˙ ( t ) = x {\displaystyle {\dot {x}}(t)=x} is locally stable.) An additional condition called "properness" or "radial unboundedness" is required in order to conclude global stability. Global asymptotic stability (GAS) follows similarly.
It is easier to visualize this method of analysis by thinking of a physical system (e.g. vibrating spring and mass) and considering the energy of such a system. If the system loses energy over time and the energy is never restored then eventually the system must grind to a stop and reach some final resting state. This final state is called the attractor . However, finding a function that gives the precise energy of a physical system can be difficult, and for abstract mathematical systems, economic systems or biological systems, the concept of energy may not be applicable.
Lyapunov's realization was that stability can be proven without requiring knowledge of the true physical energy, provided a Lyapunov function can be found to satisfy the above constraints.
The definition for discrete-time systems is almost identical to that for continuous-time systems. The definition below provides this, using an alternate language commonly used in more mathematical texts.
Let ( X , d ) be a metric space and f : X → X a continuous function . A point x in X is said to be Lyapunov stable if for each ε > 0 there is δ > 0 such that for all y ∈ X with d ( x , y ) < δ , one has d ( f n ( x ) , f n ( y ) ) < ε {\displaystyle d(f^{n}(x),f^{n}(y))<\varepsilon } for all n ∈ N {\displaystyle n\in \mathbb {N} } .
We say that x is asymptotically stable if it belongs to the interior of its stable set , i.e. if there is δ > 0 such that lim n → ∞ d ( f n ( x ) , f n ( y ) ) = 0 {\displaystyle \lim _{n\to \infty }d(f^{n}(x),f^{n}(y))=0} whenever d ( x , y ) < δ .
A linear state space model x ˙ = A x , {\displaystyle {\dot {x}}=Ax,}
where A {\displaystyle A} is a finite n × n {\displaystyle n\times n} matrix, is asymptotically stable (in fact, exponentially stable ) if all real parts of the eigenvalues of A {\displaystyle A} are negative. This condition is equivalent to the following one: [ 12 ]
A T M + M A {\displaystyle A^{\textsf {T}}M+MA} is negative definite for some positive definite matrix M = M T {\displaystyle M=M^{\textsf {T}}} . (The relevant Lyapunov function is V ( x ) = x T M x {\displaystyle V(x)=x^{\textsf {T}}Mx} .)
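As a quick numerical illustration (a sketch assuming SciPy's solve_continuous_lyapunov is available; the matrix values are arbitrary examples), one can solve A^T M + M A = −I for M and check that M is positive definite:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# dx/dt = A x with eigenvalues -1 and -2, hence asymptotically stable.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q; with a = A^T
# and q = -I this yields M satisfying A^T M + M A = -I.
M = solve_continuous_lyapunov(A.T, -np.eye(2))
print(np.linalg.eigvalsh(M))   # all positive => V(x) = x^T M x is a Lyapunov function
```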
Correspondingly, a time-discrete linear state space model x k + 1 = A x k {\displaystyle x_{k+1}=Ax_{k}}
is asymptotically stable (in fact, exponentially stable) if all the eigenvalues of A {\displaystyle A} have a modulus smaller than one.
This latter condition has been generalized to switched systems: a linear switched discrete time system (ruled by a set of matrices { A 1 , … , A m } {\displaystyle \{A_{1},\dots ,A_{m}\}} ) x k + 1 = A σ ( k ) x k , σ : N → { 1 , … , m } {\displaystyle x_{k+1}=A_{\sigma (k)}x_{k},\quad \sigma :\mathbb {N} \to \{1,\dots ,m\}}
is asymptotically stable (in fact, exponentially stable) if the joint spectral radius of the set { A 1 , … , A m } {\displaystyle \{A_{1},\dots ,A_{m}\}} is smaller than one.
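The joint spectral radius is hard to compute exactly, but crude bounds over products of a fixed length follow directly from its definition. The sketch below (illustrative code; the two matrices are arbitrary examples) computes a lower bound from spectral radii of products and an upper bound from their norms; if the upper bound is below one, the switched system is certified exponentially stable:

```python
import itertools
import numpy as np

def jsr_bounds(mats, k):
    """Lower/upper bounds on the joint spectral radius from all products
    of length k: max rho(P)^(1/k) <= JSR <= max ||P||_2^(1/k)."""
    lo, up = 0.0, 0.0
    for word in itertools.product(mats, repeat=k):
        P = np.eye(mats[0].shape[0])
        for M in word:
            P = P @ M
        lo = max(lo, max(abs(np.linalg.eigvals(P))) ** (1.0 / k))
        up = max(up, np.linalg.norm(P, 2) ** (1.0 / k))
    return lo, up

A1 = np.array([[0.5, 0.4], [0.0, 0.5]])
A2 = np.array([[0.5, 0.0], [0.4, 0.5]])
print(jsr_bounds([A1, A2], 6))   # upper bound < 1 certifies stability
```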
A system with inputs (or controls) has the form x ˙ = f ( x ( t ) , u ( t ) ) {\displaystyle {\dot {x}}=f(x(t),u(t))}
where the (generally time-dependent) input u ( t ) may be viewed as a control , external input , stimulus , disturbance , or forcing function . It has been shown [ 13 ] that near a point of equilibrium that is Lyapunov stable, the system remains stable under small disturbances. For larger input disturbances the study of such systems is the subject of control theory and applied in control engineering . For systems with inputs, one must quantify the effect of inputs on the stability of the system. The two main approaches to this analysis are BIBO stability (for linear systems ) and input-to-state stability (ISS) (for nonlinear systems ).
This example shows a system where a Lyapunov function can be used to prove Lyapunov stability but cannot show asymptotic stability.
Consider the following equation, based on the Van der Pol oscillator equation with the friction term changed: y ¨ − ε ( y ˙ 3 3 − y ˙ ) + y = 0. {\displaystyle {\ddot {y}}-\varepsilon \left({\frac {{\dot {y}}^{3}}{3}}-{\dot {y}}\right)+y=0.}
Let x 1 = y , x 2 = y ˙ {\displaystyle x_{1}=y,\quad x_{2}={\dot {y}}}
so that the corresponding system is x ˙ 1 = x 2 , x ˙ 2 = − x 1 + ε ( x 2 3 3 − x 2 ) . {\displaystyle {\dot {x}}_{1}=x_{2},\quad {\dot {x}}_{2}=-x_{1}+\varepsilon \left({\frac {x_{2}^{3}}{3}}-x_{2}\right).}
The origin x 1 = 0 , x 2 = 0 {\displaystyle x_{1}=0,\ x_{2}=0} is the only equilibrium point.
Let us choose as a Lyapunov function V = 1 2 ( x 1 2 + x 2 2 ) {\displaystyle V={\frac {1}{2}}\left(x_{1}^{2}+x_{2}^{2}\right)}
which is clearly positive definite . Its derivative is V ˙ = x 1 x ˙ 1 + x 2 x ˙ 2 = ε 3 x 2 2 ( x 2 2 − 3 ) . {\displaystyle {\dot {V}}=x_{1}{\dot {x}}_{1}+x_{2}{\dot {x}}_{2}={\frac {\varepsilon }{3}}x_{2}^{2}\left(x_{2}^{2}-3\right).}
It seems that if the parameter ε {\displaystyle \varepsilon } is positive, stability is asymptotic for x 2 2 < 3. {\displaystyle x_{2}^{2}<3.} But this is wrong, since V ˙ {\displaystyle {\dot {V}}} does not depend on x 1 {\displaystyle x_{1}} , and will be 0 everywhere on the x 1 {\displaystyle x_{1}} axis. This Lyapunov function therefore proves that the equilibrium is Lyapunov stable, but it cannot by itself establish asymptotic stability.
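A quick numerical check of the sign condition (a sketch with illustrative parameter values and initial conditions, using a standard RK4 step; it is not part of the original example) confirms that V is non-increasing along trajectories that keep x2² < 3:

```python
import numpy as np

EPS = 0.5   # illustrative value of the parameter epsilon

def f(s):
    x1, x2 = s
    return np.array([x2, -x1 + EPS * (x2**3 / 3.0 - x2)])

def V(s):
    return 0.5 * (s[0]**2 + s[1]**2)

s, dt = np.array([1.0, 0.5]), 0.01
for step in range(5001):
    if step % 1000 == 0:
        print(step, V(s))                 # V never increases
    k1 = f(s); k2 = f(s + 0.5 * dt * k1)  # classical RK4 step
    k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```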
It may be difficult to find a Lyapunov function with a negative definite derivative as required by the Lyapunov stability criterion, however a function V {\displaystyle V} with V ˙ {\displaystyle {\dot {V}}} that is only negative semi-definite may be available. In autonomous systems, the invariant set theorem can be applied to prove asymptotic stability, but this theorem is not applicable when the dynamics are a function of time. [ 14 ]
Instead, Barbalat's lemma allows for Lyapunov-like analysis of these non-autonomous systems. The lemma is motivated by the following observations. Assuming f is a function of time only: having f ˙ ( t ) → 0 {\displaystyle {\dot {f}}(t)\to 0} does not imply that f ( t ) has a limit as t → ∞ ; f ( t ) having a limit as t → ∞ does not imply that f ˙ ( t ) → 0 {\displaystyle {\dot {f}}(t)\to 0} ; and f ( t ) being lower bounded and decreasing implies that it converges to a limit, but does not say whether or not f ˙ ( t ) → 0 {\displaystyle {\dot {f}}(t)\to 0} as t → ∞ .
Barbalat's Lemma says: If f ( t ) has a finite limit as t → ∞ and if f ˙ {\displaystyle {\dot {f}}} is uniformly continuous (a sufficient condition for which is that f ¨ {\displaystyle {\ddot {f}}} is bounded), then f ˙ ( t ) → 0 {\displaystyle {\dot {f}}(t)\to 0} as t → ∞ .
An alternative version is as follows: If f ( t ) is uniformly continuous on [ 0 , ∞ ) and lim t → ∞ ∫ 0 t f ( τ ) d τ {\displaystyle \lim _{t\to \infty }\int _{0}^{t}f(\tau )\,d\tau } exists and is finite, then f ( t ) → 0 as t → ∞ .
In the following form the Lemma is true also in the vector valued case: Let f ( t ) be a differentiable function with a finite limit as t → ∞ . If f ˙ {\displaystyle {\dot {f}}} is uniformly continuous, then f ˙ ( t ) → 0 {\displaystyle {\dot {f}}(t)\to 0} as t → ∞ .
The following example is taken from page 125 of Slotine and Li's book Applied Nonlinear Control . [ 14 ]
Consider a non-autonomous system e ˙ = − e + g ⋅ w ( t ) , g ˙ = − e ⋅ w ( t ) . {\displaystyle {\dot {e}}=-e+g\cdot w(t),\quad {\dot {g}}=-e\cdot w(t).}
This is non-autonomous because the input w {\displaystyle w} is a function of time. Assume that the input w ( t ) {\displaystyle w(t)} is bounded.
Taking V = e 2 + g 2 {\displaystyle V=e^{2}+g^{2}} gives V ˙ = − 2 e 2 ≤ 0. {\displaystyle {\dot {V}}=-2e^{2}\leq 0.}
This says that V ( t ) ≤ V ( 0 ) {\displaystyle V(t)\leq V(0)} by the first two conditions, and hence e {\displaystyle e} and g {\displaystyle g} are bounded. But it does not say anything about the convergence of e {\displaystyle e} to zero, as V ˙ {\displaystyle {\dot {V}}} is only negative semi-definite (note that g {\displaystyle g} can be non-zero when V ˙ = 0 {\displaystyle {\dot {V}}=0} ) and the dynamics are non-autonomous.
Using Barbalat's lemma: V ¨ = − 4 e e ˙ = − 4 e ( − e + g ⋅ w ( t ) ) . {\displaystyle {\ddot {V}}=-4e{\dot {e}}=-4e(-e+g\cdot w(t)).}
This is bounded because e {\displaystyle e} , g {\displaystyle g} and w {\displaystyle w} are bounded. This implies V ˙ → 0 {\displaystyle {\dot {V}}\to 0} as t → ∞ {\displaystyle t\to \infty } and hence e → 0 {\displaystyle e\to 0} . This proves that the error converges.
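The convergence guaranteed by this argument can be observed in a direct simulation (a sketch with an arbitrary bounded input w(t) = sin t and illustrative initial conditions):

```python
import math

# Euler simulation of  e' = -e + g*w(t),  g' = -e*w(t)  with bounded input.
e, g, dt = 1.0, 2.0, 1e-3
for n in range(200_000):
    w = math.sin(n * dt)                             # any bounded input works
    e, g = e + dt * (-e + g * w), g + dt * (-e * w)  # RHS uses the old (e, g)
print(e, g)   # e -> 0 as guaranteed by Barbalat's lemma; g stays bounded
```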
This article incorporates material from asymptotically stable on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Lyapunov_stability |
In mathematics , the Lyapunov time is the characteristic timescale on which a dynamical system is chaotic . It is named after the Russian mathematician Aleksandr Lyapunov . It is defined as the inverse of a system's largest Lyapunov exponent . [ 1 ]
The Lyapunov time reflects the limits of the predictability of the system. By convention, it is defined as the time for the distance between nearby trajectories of the system to increase by a factor of e . However, measures in terms of 2-foldings and 10-foldings are sometimes found, since they correspond to the loss of one bit of information or one digit of precision respectively. [ 2 ] [ 3 ]
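These conventions differ only by constant factors of ln 2 and ln 10. A one-line conversion (illustrative code; the exponent value is the approximate largest Lyapunov exponent of the Lorenz system, used here only as an example):

```python
import math

lam = 0.906                     # largest Lyapunov exponent (per unit time)
t_e = 1.0 / lam                 # Lyapunov time: e-folding of the separation
t_2 = math.log(2.0) / lam       # time to lose one bit of information
t_10 = math.log(10.0) / lam     # time to lose one decimal digit of precision
print(t_e, t_2, t_10)
```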
While it is used in many applications of dynamical systems theory, it has been particularly used in celestial mechanics where it is important for the problem of the stability of the Solar System . However, empirical estimation of the Lyapunov time is often associated with computational or inherent uncertainties. [ 4 ] [ 5 ]
Typical values are: [ 2 ] | https://en.wikipedia.org/wiki/Lyapunov_time |
In applied mathematics and dynamical system theory, Lyapunov vectors , named after Aleksandr Lyapunov , describe characteristic expanding and contracting directions of a dynamical system. They have been used in predictability analysis and as initial perturbations for ensemble forecasting in numerical weather prediction . [ 1 ] In modern practice they are often replaced by bred vectors for this purpose. [ 2 ]
Lyapunov vectors are defined along the trajectories of a dynamical system. If the system can be described by a d-dimensional state vector x ∈ R d {\displaystyle x\in \mathbb {R} ^{d}} the Lyapunov vectors v ( k ) ( x ) {\displaystyle v^{(k)}(x)} , ( k = 1 … d ) {\displaystyle (k=1\dots d)} point in the directions in which an infinitesimal perturbation will grow asymptotically, exponentially at an average rate given by the Lyapunov exponents λ k {\displaystyle \lambda _{k}} .
If the dynamical system is differentiable and the Lyapunov vectors exist, they can be found by forward and backward iterations of the linearized system along a trajectory. [ 5 ] [ 6 ] Let x n + 1 = M t n → t n + 1 ( x n ) {\displaystyle x_{n+1}=M_{t_{n}\to t_{n+1}}(x_{n})} map the system with state vector x n {\displaystyle x_{n}} at time t n {\displaystyle t_{n}} to the state x n + 1 {\displaystyle x_{n+1}} at time t n + 1 {\displaystyle t_{n+1}} . The linearization of this map, i.e. the Jacobian matrix J n {\displaystyle ~J_{n}} , describes the change of an infinitesimal perturbation h n {\displaystyle h_{n}} . That is, h n + 1 = J n h n . {\displaystyle h_{n+1}=J_{n}h_{n}.}
Starting with an identity matrix Q 0 = I {\displaystyle Q_{0}=\mathbb {I} ~} the iterations J n Q n = Q n + 1 R n + 1 , {\displaystyle J_{n}Q_{n}=Q_{n+1}R_{n+1},}
where Q n + 1 R n + 1 {\displaystyle Q_{n+1}R_{n+1}} is given by the Gram-Schmidt QR decomposition of J n Q n {\displaystyle J_{n}Q_{n}} , will asymptotically converge to matrices that depend only on the points x n {\displaystyle x_{n}} of a trajectory but not on the initial choice of Q 0 {\displaystyle Q_{0}} . The columns of the orthogonal matrices Q n {\displaystyle Q_{n}} define a local orthogonal reference frame at each point and the first k {\displaystyle k} columns span the same space as the Lyapunov vectors corresponding to the k {\displaystyle k} largest Lyapunov exponents. The upper triangular matrices R n {\displaystyle R_{n}} describe the change of an infinitesimal perturbation from one local orthogonal frame to the next. The diagonal entries r k k ( n ) {\displaystyle r_{kk}^{(n)}} of R n {\displaystyle R_{n}} are local growth factors in the directions of the Lyapunov vectors. The Lyapunov exponents are given by the average growth rates λ k = lim n → ∞ 1 t n − t 0 ∑ l = 1 n ln r k k ( l ) {\displaystyle \lambda _{k}=\lim _{n\to \infty }{\frac {1}{t_{n}-t_{0}}}\sum _{l=1}^{n}\ln r_{kk}^{(l)}}
and by virtue of stretching, rotating and Gram-Schmidt orthogonalization the Lyapunov exponents are ordered as λ 1 ≥ λ 2 ≥ ⋯ ≥ λ d {\displaystyle \lambda _{1}\geq \lambda _{2}\geq \dots \geq \lambda _{d}} . When iterated forward in time a random vector contained in the space spanned by the first k {\displaystyle k} columns of Q n {\displaystyle Q_{n}} will almost surely asymptotically grow with the largest Lyapunov exponent and align with the corresponding Lyapunov vector. In particular, the first column of Q n {\displaystyle Q_{n}} will point in the direction of the Lyapunov vector with the largest Lyapunov exponent if n {\displaystyle n} is large enough. When iterated backward in time a random vector contained in the space spanned by the first k {\displaystyle k} columns of Q n + m {\displaystyle Q_{n+m}} will almost surely, asymptotically align with the Lyapunov vector corresponding to the k {\displaystyle k} th largest Lyapunov exponent, if n {\displaystyle n} and m {\displaystyle m} are sufficiently large. Defining c n = Q n T h n {\displaystyle c_{n}=Q_{n}^{T}h_{n}} we find c n − 1 = R n − 1 c n {\displaystyle c_{n-1}=R_{n}^{-1}c_{n}} . Choosing the first k {\displaystyle k} entries of c n + m {\displaystyle c_{n+m}} randomly and the other entries zero, and iterating this vector back in time, the vector Q n c n {\displaystyle Q_{n}c_{n}} aligns almost surely with the Lyapunov vector v ( k ) ( x n ) {\displaystyle v^{(k)}(x_{n})} corresponding to the k {\displaystyle k} th largest Lyapunov exponent if m {\displaystyle m} and n {\displaystyle n} are sufficiently large. Since the iterations will exponentially blow up or shrink a vector it can be re-normalized at any iteration point without changing the direction. | https://en.wikipedia.org/wiki/Lyapunov_vector |
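A compact sketch of the forward–backward procedure described above, for the Hénon map, is given below (illustrative code, not from the cited references; indices follow the convention J_n Q_n = Q_{n+1} R_{n+1} and c_{n−1} = R_n^{−1} c_n used in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def step(v, a=1.4, b=0.3):              # Henon map as the system M
    x, y = v
    return np.array([1.0 - a * x * x + y, b * x])

def jac(v, a=1.4, b=0.3):               # its Jacobian J
    x, _ = v
    return np.array([[-2.0 * a * x, 1.0], [b, 0.0]])

# Forward pass: store the Q_n and R_n along a trajectory.
N = 4000
v = np.array([0.1, 0.1])
Qs, Rs = [np.eye(2)], [None]            # Q_0 = identity; R_n defined for n >= 1
for _ in range(N):
    Q, R = np.linalg.qr(jac(v) @ Qs[-1])
    v = step(v)
    Qs.append(Q)
    Rs.append(R)

# Backward pass for the k-th Lyapunov vector at the midpoint m of the orbit.
k, m = 2, N // 2
c = np.zeros(2)
c[:k] = rng.standard_normal(k)          # random vector with only the first k entries nonzero
for n in range(N, m, -1):               # c_{n-1} = R_n^{-1} c_n, re-normalized
    c = np.linalg.solve(Rs[n], c)
    c /= np.linalg.norm(c)
print(Qs[m] @ c)                        # approximates v^(k) at the midpoint state
```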
The Lyapunov–Malkin theorem (named for Aleksandr Lyapunov and Ioel Malkin [ ru ] ) is a mathematical theorem detailing stability of nonlinear systems. [ 1 ] [ 2 ]
In the system of differential equations , x ˙ = A x + X ( x , y ) , y ˙ = Y ( x , y ) , {\displaystyle {\dot {x}}=Ax+X(x,y),\quad {\dot {y}}=Y(x,y),}
where x ∈ R m {\displaystyle x\in \mathbb {R} ^{m}} and y ∈ R n {\displaystyle y\in \mathbb {R} ^{n}} are components of the system state, A ∈ R m × m {\displaystyle A\in \mathbb {R} ^{m\times m}} is a matrix that represents the linear dynamics of x {\displaystyle x} , and X : R m × R n → R m {\displaystyle X:\mathbb {R} ^{m}\times \mathbb {R} ^{n}\to \mathbb {R} ^{m}} and Y : R m × R n → R n {\displaystyle Y:\mathbb {R} ^{m}\times \mathbb {R} ^{n}\to \mathbb {R} ^{n}} represent higher-order nonlinear terms. If all eigenvalues of the matrix A {\displaystyle A} have negative real parts, and X ( x , y ), Y ( x , y ) vanish when x = 0, then the solution x = 0, y = 0 of this system is stable with respect to ( x , y ) and asymptotically stable with respect to x . If a solution ( x ( t ), y ( t )) is close enough to the solution x = 0, y = 0, then lim t → + ∞ x ( t ) = 0 {\displaystyle \lim _{t\to +\infty }x(t)=0} and lim t → + ∞ y ( t ) = c {\displaystyle \lim _{t\to +\infty }y(t)=c} for some constant c .
Consider the vector field given by
x ˙ = − x + x 2 y , y ˙ = x y 2 {\displaystyle {\dot {x}}=-x+x^{2}y,\quad {\dot {y}}=xy^{2}}
In this case, A = −1 and X (0, y ) = Y (0, y ) = 0 for all y , so this system satisfies the hypotheses of the Lyapunov–Malkin theorem.
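A direct simulation (illustrative code with arbitrary initial conditions) shows the behavior the theorem predicts: x decays to zero while y settles to a constant:

```python
# Euler integration of  x' = -x + x^2 y,  y' = x y^2  near the origin.
x, y, dt = 0.3, 0.4, 1e-3
for _ in range(100_000):
    x, y = x + dt * (-x + x * x * y), y + dt * (x * y * y)
print(x, y)   # x -> 0 while y approaches a constant c
```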
The figure below shows a plot of this vector field along with some trajectories that pass near (0,0). As expected from the theorem, it can be seen that trajectories in the neighborhood of (0,0) converge to a point of the form (0, c ). | https://en.wikipedia.org/wiki/Lyapunov–Malkin_theorem |
Lycopane (C 40 H 82 ; 2,6,10,14,19,23,27,31-octamethyldotriacontane), a 40 carbon alkane isoprenoid , is a widely present biomarker that is often found in anoxic settings. It has been identified in anoxically deposited lacustrine sediments (such as the Messel formation [ 1 ] and the Condor oil shale deposit [ 2 ] ). It has been found in sulfidic and anoxic hypersaline environments (such as the Sdom Formation [ 3 ] ). It has been widely identified in modern marine sediments, including the Peru upwelling zone, [ 4 ] the Black Sea, [ 5 ] and the Cariaco Trench. [ 6 ] It has been found only rarely in crude oils. [ 7 ]
The pathway for the production of lycopane has not been conclusively identified; there are several theories for its origin and production.
Some of the earliest theories for the biosynthesis of lycopane center around it being anaerobically produced by methanogenic archaea . Lycopane has been observed in recent marine sediments in contexts where methanogenic activity is occurring. In older sediments, methanogenic activity is harder to conclusively determine, as methane can migrate from other layers and not necessarily be a product of that geological time. It is possible that isoprenoid alkanes such as lycopane serve as biomarkers for methanogenesis and methanogenic archaea. [ 8 ]
Lycopane has not yet been directly isolated in any biological organism, so its linkage to methanogenic archaea is conjecture. However, the process has been identified in a different isoprenoid alkane: squalane. Squalane was not initially thought to be directly biologically synthesized, but was later determined to be present in archaea . [ 9 ]
Some acyclic unsaturated tetraterpenoids (structurally similar to lycopane) have been detected in Thermococcus hydrothermalis , a deep-sea hydrothermal vent archaeon. Lycopane has also been found alongside archaeal ethers in certain marine sediments. [ 10 ] These findings provide support for a methanogenic origin of lycopane, but they are not conclusive. Furthermore, lycopane has been identified in water columns that contain sulfate , which is potentially an argument against lycopane having a methanogenic origin, as methanogens are generally not widespread in sulfate-rich environments. [ 11 ]
Lycopane may be sourced from diagenesis of an unsaturated precursor such as lycopene , a carotenoid that is abundantly present in photosynthetic organisms . In cyanobacteria , lycopene can be an important intermediate in the biosynthesis of other carotenoids. [ 12 ] Diagenesis, broadly referring to physical and chemical changes that occur while biological material is undergoing fossilization, may include hydrogenation and transformation of unsaturated precursors to alkane derivatives. Some diagenetic time-dependent reduction of double bonds in carotenoids has been observed in marine sediments. [ 13 ]
A direct geochemical diagenetic process for the transformation of lycopene to lycopane during sedimentation has not been determined. However, this process has been identified in other carotenoids (e.g. carotene to carotane ). Sulfur has been proposed as a general agent in the diagenesis of isoprenoid alkenes to alkanes. A sulfur polymer (with sulfur binding to unsaturated carbons) could eventually yield isoprenoid alkanes, as carbon-sulfur bonds are weaker than carbon-carbon bonds. Some experimental evidence in support of this theory has been gathered, but it has not been demonstrated in any sediment samples. [ 14 ]
It has also been theorized that lycopane is directly synthesized by marine photoautotrophs such as cyanobacteria or green algae . Lycopene is abundantly present in marine photosynthetic organisms; possibly it is the precursor in a lycopene-to-lycopane pathway. [ 15 ] The detection of lycopa-14(E),18(E)-diene in the green alga Botryococcus braunii strengthens this theory, as the conversion of lycopadiene to lycopane would be simpler and more feasible than that of lycopene to lycopane. [ 16 ]
Gas chromatography-mass spectrometry is a common tool for detecting and analyzing biomarkers. Depending on the stationary phase used in the column, lycopane tends to co-elute with the n -C 35 alkane. [ 17 ] Its tail-to-tail linkage yields diagnostic mass fragments. [ 18 ] The mass spectrum has a periodic fragmentation pattern. [ 19 ]
Raman spectroscopy , a non-destructive analytical technique requiring no sample preparation, is a powerful tool for analyzing biomarkers. [ 20 ] Lycopene, the unsaturated carotenoid from which lycopane may be derived, has a very characteristic Raman spectrum that is easily distinguishable. The spectrum of lycopane differs by a strong band at 1455 cm −1 (CH 2 scissoring), a series of bands from 1390–1000 cm −1 (C-C stretching), and some bands from 1000–800 cm −1 (methyl in-plane rocking and C-H out-of-plane bending). [ 21 ]
The amount of carbon-13 present in lycopane found in sediment can give indications of its producer, particularly differentiating between methanogenic and algal origin. Lower levels of 13 C suggest that the compound originated in methanogens, while higher levels support an algal origin. The high level of 13 C found in the Messel shale lycopane (-20.8‰) suggests an algal producer. [ 22 ]
Recent work has proposed elevated levels of lycopane as a proxy for anoxicity . When the C 35 /C 31 n-alkane ratio was calculated both within and outside of the Oxygen Minimum Zone (OMZ) in the Arabian Sea , ratios inside of the OMZ were approximately two to three times higher than they were outside of this zone. This increased ratio was determined to be due to the presence of lycopane, which coelutes with C 35 n -alkane. Thus, it was determined that the lycopane/C 31 ratio is correlated with degree of anoxicity. Similar trends were observed in the Peru Upwelling region. This further solidifies the viability of lycopane abundance as an indicator of oxicity/anoxicity and provides additional support for a methanogenic origin of lycopane. [ 23 ]
One of the challenges involved in searching for life on other planets is the practical limitations of instrumentation. While GC/MS or NMR may give unequivocal evidence of the existence of biomarkers, it is not practical to include these instruments on highly optimized spacecraft. Raman spectroscopy has emerged as a leading technique due to its sensitivity, miniaturizability , and lack of sample preparation. [ 24 ]
Carotenoids have long generated astrobiological interest given their diagnostic Raman spectra, their unlikelihood of being abiotically synthesized, and their high preservation potential . [ 25 ] [ 26 ] Recent work has indicated that the Raman spectrum of lycopane is sufficiently different from that of lycopene that the two molecules are distinguishable. While functionalized carotenoids are in themselves an attractive astrobiological biomarker, detecting their diagenetic products may be equally characteristic of extraterrestrial life. Detection of diagenetically reduced lycopane on other planetary bodies may be an unambiguous indication of life, as diagenesis occurs during biological fossilization. [ 27 ] | https://en.wikipedia.org/wiki/Lycopane |
This page provides supplementary chemical data on lycopene .
The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the Material Safety Datasheet ( MSDS ) for this chemical from a reliable source and follow its directions.
All- trans -lycopene with canonical numbering:
To date, no X-ray crystal structure of lycopene has been reported. | https://en.wikipedia.org/wiki/Lycopene_(data_page) |
Lycoperdonosis is a respiratory disease caused by the inhalation of large amounts of spores from mature puffballs . It is classified as a hypersensitivity pneumonitis (also called extrinsic allergic alveolitis)—an inflammation of the alveoli within the lung caused by hypersensitivity to inhaled natural dusts. [ 1 ] It is one of several types of hypersensitivity pneumonitis caused by different agents that have similar clinical features. [ 2 ] Typical progression of the disease includes symptoms of a cold hours after spore inhalation, followed by nausea , rapid pulse, crepitant rales (a sound like that made by rubbing hairs between the fingers, heard at the end of inhalation), and dyspnea . Chest radiographs reveal the presence of lung nodules . [ 3 ] The early symptoms presented in combination with pulmonary abnormalities apparent on chest radiographs may lead to misdiagnosis of the disease as tuberculosis , histiocytosis , or pneumonia caused by Pneumocystis carinii . Lycoperdonosis is generally treated with corticosteroids , which decrease the inflammatory response ; these are sometimes given in conjunction with antimicrobials . [ 4 ] [ 5 ]
The disease was first described in the medical literature in 1967 by R.D. Strand and colleagues in the New England Journal of Medicine . [ 6 ] In 1976, a 4-year-old was reported developing the disease in Norway after purposely inhaling a large quantity of Lycoperdon spores to stop a nosebleed. Lycoperdon species are sometimes used in folk medicine in the belief that their spores have haemostatic properties. [ 7 ] A 1997 case report discussed several instances of teenagers inhaling the spores. In one severe case, the individual inhaled enough spores so as to be able to blow them out of his mouth. He underwent bronchoscopy and then had to be on life support before recovering in about four weeks. In another instance, a teenager spent 18 days in a coma, had portions of his lung removed, and suffered severe liver damage . [ 4 ] In Wisconsin , eight teenagers who inhaled spores at a party presented clinical symptoms such as cough, fever , shortness of breath , myalgia , and fatigue within a week. Five of the eight required hospitalization; of these, two required intubation to assist in breathing. [ 5 ] The disease is rare, possibly because of the large quantity of spores that need to be inhaled for clinical effects to occur. [ 4 ] Lycoperdonosis also occurs in dogs; in the few reported cases, the animals had been playing or digging in areas known to contain puffballs. [ 3 ] [ 8 ] [ 9 ] Known species of puffballs implicated in the etiology of the published cases include the widespread Lycoperdon perlatum (the "devil's snuff-box", L. gemmatum ) and Calvatia gigantea , both of the family Lycoperdaceae . [ 6 ] [ 8 ] | https://en.wikipedia.org/wiki/Lycoperdonosis |
Lycopodium powder is a yellow-tan dust-like powder, consisting of the dry spores of clubmoss plants, or various fern relatives . When it is mixed with air, the spores are highly flammable and are used to create dust explosions as theatrical special effects. The powder was traditionally used in physics experiments to demonstrate phenomena such as Brownian motion .
The powder consists of the dry spores of clubmoss plants, or various fern relatives principally in the genera Lycopodium and Diphasiastrum . The preferred source species are Lycopodium clavatum (stag's horn clubmoss) and Diphasiastrum digitatum (common groundcedar) , because these widespread and often locally abundant species are both prolific in their spore production and easy to collect. [ citation needed ]
Today, the principal use of the powder is to create flashes or flames that are large and impressive but relatively easy to manage safely in magic acts and for cinema and theatrical special effects . [ 1 ] Historically it was also used as a photographic flash powder . [ 2 ] Both these uses rely on the same principle as a dust explosion , as the spores have a large surface area per unit of volume (a single spore's diameter is about 33 micrometers (μm) ), [ 3 ] and a high fat content.
It is also used in fireworks and explosives , fingerprint powders , as a covering for pills , and as an ice cream stabilizer.
Lycopodium powder is also sometimes used as a lubricating dust on skin-contacting latex (natural rubber) goods, such as condoms and medical gloves . [ 4 ]
In physics experiments and demonstrations, lycopodium powder can be used to make sound waves in air visible for observation and measurement, and to make a pattern of electrostatic charge visible. The powder is also highly hydrophobic ; if the surface of a cup of water is coated with lycopodium powder, a finger or other object inserted straight into the cup will come out dusted with the powder but remain completely dry.
Because of the very small size of its particles, lycopodium powder can be used to demonstrate Brownian motion . A microscope slide, with or without a well, is prepared with a droplet of water, and a fine dusting of lycopodium powder is applied. Then, a cover-glass can be placed over the water and spore sample in order to reduce convection in the water by evaporation. Under several hundred diameters magnification, when the microscope is well focused on individual lycopodium particles, the spores can be seen to "dance" randomly. This is in response to asymmetric collisional forces applied to the macroscopic (but still quite small) powder particle by microscopic water molecules in random thermal motion. [ 5 ]
As a then-common laboratory supply, lycopodium powder was often used by inventors developing experimental prototypes. For example, Nicéphore Niépce used lycopodium powder in the fuel for one of the first internal combustion engines, the Pyréolophore , in about 1807, [ 6 ] and Chester Carlson used lycopodium powder in 1938 in his early experiments to demonstrate xerography . [ 7 ] | https://en.wikipedia.org/wiki/Lycopodium_powder |
In condensed matter physics , the Lyddane–Sachs–Teller relation (or LST relation ) determines the ratio of the natural frequency of longitudinal optic lattice vibrations ( phonons ) ( ω LO {\displaystyle \omega _{\text{LO}}} ) of an ionic crystal to the natural frequency of the transverse optical lattice vibration ( ω TO {\displaystyle \omega _{\text{TO}}} ) for long wavelengths (zero wavevector). [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] The ratio is that of the static permittivity ε st {\displaystyle \varepsilon _{\text{st}}} to the permittivity for frequencies in the visible range ε ∞ {\displaystyle \varepsilon _{\infty }} . [ 6 ]
ω LO 2 ω TO 2 = ε st ε ∞ {\displaystyle {\frac {\omega _{\text{LO}}^{2}}{\omega _{\text{TO}}^{2}}}={\frac {\varepsilon _{\text{st}}}{\varepsilon _{\infty }}}}
The relation holds for systems with a single optical branch, such as cubic systems with two different atoms per unit cell. For systems with many phonon branches, the relation does not necessarily hold, as the permittivity for any pair of longitudinal and transverse modes will be altered by the other modes in the system. The Lyddane–Sachs–Teller relation is named after the physicists R. H. Lyddane, Robert G. Sachs , and Edward Teller .
The Lyddane–Sachs–Teller relation applies to optical lattice vibrations that have an associated net polarization density , so that they can produce long ranged electromagnetic fields (over ranges much longer than the inter-atom distances). The relation assumes an idealized polar ("infrared active") optical lattice vibration that gives a contribution to the frequency-dependent permittivity described by a lossless Lorentzian oscillator: ε ( ω ) = ε ( ∞ ) + ( ε s t − ε ( ∞ ) ) ω TO 2 ω TO 2 − ω 2 , {\displaystyle \varepsilon (\omega )=\varepsilon (\infty )+(\varepsilon _{st}-\varepsilon (\infty )){\frac {\omega _{\text{TO}}^{2}}{\omega _{\text{TO}}^{2}-\omega ^{2}}},}
where ε ( ∞ ) {\displaystyle \varepsilon (\infty )} is the permittivity at high frequencies, ε s t {\displaystyle \varepsilon _{st}} is the static DC permittivity, and ω TO {\displaystyle \omega _{\text{TO}}} is the "natural" oscillation frequency of the lattice vibration taking into account only the short-ranged (microscopic) restoring forces.
The above equation can be plugged into Maxwell's equations to find the complete set of normal modes including all restoring forces (short-ranged and long-ranged), which are sometimes called phonon polaritons . These modes are plotted in the figure. At every wavevector there are three distinct modes: two transverse phonon-polariton branches and one longitudinal mode.
The longitudinal mode appears at the frequency where the permittivity passes through zero, i.e. ε ( ω LO ) = 0 {\displaystyle \varepsilon (\omega _{\text{LO}})=0} . Solving this for the Lorentzian resonance described above gives the Lyddane–Sachs–Teller relation. [ 3 ]
Since the Lyddane–Sachs–Teller relation is derived from the lossless Lorentzian oscillator, it may break down in realistic materials where the permittivity function is more complicated for various reasons, such as the presence of multiple optical phonon branches, anharmonic phonon damping, or free-carrier contributions to the permittivity.
In the case of multiple, lossy Lorentzian oscillators, there are generalized Lyddane–Sachs–Teller relations available. [ 8 ] Most generally, the permittivity cannot be described as a combination of Lorentizan oscillators, and the longitudinal mode frequency can only be found as a complex zero in the permittivity function. [ 8 ]
The most general Lyddane–Sachs–Teller relation applicable in crystals where the phonons are affected by anharmonic damping has been derived in Ref. [ 9 ] and reads as
| ω LO | 2 | ω TO | 2 = ε st ε ∞ {\displaystyle {\frac {|\omega _{\text{LO}}|^{2}}{|\omega _{\text{TO}}|^{2}}}={\frac {\varepsilon _{\text{st}}}{\varepsilon _{\infty }}}}
the absolute value is necessary since the phonon frequencies are now complex, with an imaginary part set by the finite lifetime of the phonon and proportional to the anharmonic phonon damping (described by Klemens' theory for optical phonons).
A corollary of the LST relation is that for non-polar crystals, the LO and TO phonon modes are degenerate , and thus ε st = ε ∞ {\displaystyle \varepsilon _{\text{st}}=\varepsilon _{\infty }} . This indeed holds for the purely covalent crystals of the group IV elements , such as for diamond (C), silicon, and germanium. [ 10 ]
At frequencies between ω TO {\displaystyle \omega _{\text{TO}}} and ω LO {\displaystyle \omega _{\text{LO}}} there is 100% reflectivity. This range of frequencies (band) is called the Reststrahl band . The name derives from the German reststrahl , which means "residual ray". [ 11 ]
The static and high-frequency dielectric constants of NaCl are ε st = 5.9 {\displaystyle \varepsilon _{\text{st}}=5.9} and ε ∞ = 2.25 {\displaystyle \varepsilon _{\infty }=2.25} , and the TO phonon frequency ν TO {\displaystyle \nu _{\text{TO}}} is 4.9 {\displaystyle 4.9} THz. Using the LST relation, we are able to calculate that [ 12 ] ν LO = ν TO ε st / ε ∞ ≈ 7.9 THz . {\displaystyle \nu _{\text{LO}}=\nu _{\text{TO}}{\sqrt {\varepsilon _{\text{st}}/\varepsilon _{\infty }}}\approx 7.9~{\text{THz}}.}
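The arithmetic is a one-liner (illustrative code reproducing the NaCl numbers quoted above):

```python
import math

eps_st, eps_inf = 5.9, 2.25    # static and high-frequency permittivity of NaCl
nu_TO = 4.9                    # TO phonon frequency in THz
nu_LO = nu_TO * math.sqrt(eps_st / eps_inf)
print(nu_LO)                   # ~7.93 THz
```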
One of the ways to experimentally determine ω TO {\displaystyle \omega _{\text{TO}}} and ω LO {\displaystyle \omega _{\text{LO}}} is through Raman spectroscopy . [ 13 ] [ 14 ] As previously mentioned, the phonon frequencies used in the LST relation are those corresponding to the TO and LO branches evaluated at the gamma-point ( k = 0 {\displaystyle k=0} ) of the Brillouin zone . This is also the point where the photon-phonon coupling most often occurs for the Stokes shift measured in Raman. Hence two peaks will be present in the Raman spectrum , one corresponding to the TO and one to the LO phonon frequency. | https://en.wikipedia.org/wiki/Lyddane–Sachs–Teller_relation |
The Lydersen method is a group contribution method for the estimation of critical properties temperature ( T c ), pressure ( P c ) and volume ( V c ). The method is named after Aksel Lydersen who published it in 1955. [ 1 ] The Lydersen method is the prototype for and ancestor of many new models like Joback , [ 2 ] Klincewicz , [ 3 ] Ambrose, [ 4 ] Gani-Constantinou [ 5 ] and others.
The Lydersen method is based, in the case of the critical temperature, on the Guldberg rule, which establishes a relation between the normal boiling point and the critical temperature .
Guldberg has found that a rough estimate of the normal boiling point T b , when expressed in kelvins (i.e., as an absolute temperature ), is approximately two-thirds of the critical temperature T c . Lydersen uses this basic idea but calculates more accurate values.
The Lydersen equations are T c = T b 0.567 + ∑ G i − ( ∑ G i ) 2 , P c = M ( 0.34 + ∑ G i ) 2 , V c = 40 + ∑ G i , {\displaystyle T_{\text{c}}={\frac {T_{\text{b}}}{0.567+\sum G_{i}-\left(\sum G_{i}\right)^{2}}},\quad P_{\text{c}}={\frac {M}{\left(0.34+\sum G_{i}\right)^{2}}},\quad V_{\text{c}}=40+\sum G_{i},} where M is the molar mass and G i are the group contributions (different for all three properties) for the functional groups of a molecule .
Acetone is fragmented into two kinds of groups: one carbonyl group and two methyl groups. For the critical volume, the following calculation results:
V c = 40 + 60.0 + 2 × 55.0 = 210 cm³
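The same bookkeeping in code form (a sketch; the group values are those quoted above for acetone, and the constant 40 is the method's offset for the critical volume):

```python
# Lydersen critical-volume estimate: V_c = 40 + sum of group contributions.
groups = {
    ">C=O (carbonyl)": (1, 60.0),   # (count, contribution in cm^3/mol)
    "-CH3 (methyl)":   (2, 55.0),
}
V_c = 40.0 + sum(count * g for count, g in groups.values())
print(V_c)   # 210
```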
In the literature (such as in the Dortmund Data Bank ) the values 215.90 cm³, [ 6 ] 230.5 cm³ [ 7 ] and 209.0 cm³ [ 8 ] are published. | https://en.wikipedia.org/wiki/Lydersen_method |
Lydia Fairchild (born 1976) is an American woman who exhibits chimerism , having two distinct populations of DNA among the cells of her body. She was pregnant with her third child when she and the father of her children, Jamie Townsend, separated. When Fairchild applied for enforcement of child support in 2002, providing DNA evidence of Townsend's paternity was a routine requirement. While the results showed Townsend to certainly be their father, they seemed to rule out her being their mother.
Fairchild stood accused of fraud by either claiming benefits for other people's children, or taking part in a surrogacy scam, and records of her prior births were put similarly in doubt. Prosecutors called for her two children to be taken away from her, believing them not to be hers. As time came for her to give birth to her third child, the judge ordered that an observer be present at the birth, ensure that blood samples were immediately taken from both the child and Fairchild, and be available to testify. Two weeks later, DNA tests seemed to indicate that she was also not the mother of that child.
A breakthrough came when her defense attorney, [ 1 ] Alan Tindell, learned of Karen Keegan , a chimeric woman in Boston, and introduced an article from the New England Journal of Medicine about Keegan. [ 2 ] [ 3 ] He realized that Fairchild's case might likewise be explained by chimerism . As in Keegan's case, DNA samples were taken from members of the extended family. The DNA of Fairchild's children matched that of Fairchild's mother to the extent expected of a grandmother. They also found that, although the DNA in Fairchild's skin and hair did not match her children's, the DNA from a cervical smear test did match. Fairchild was carrying two different sets of DNA, the defining characteristic of chimerism.
| https://en.wikipedia.org/wiki/Lydia_Fairchild |
Lye is a hydroxide, either sodium hydroxide or potassium hydroxide . The word lye most accurately refers to sodium hydroxide (NaOH), [ citation needed ] but historically has been conflated to include other alkali materials, most notably potassium hydroxide (KOH). In order to distinguish between the two, sodium hydroxide may be referred to as soda lye while potassium hydroxide may be referred to as potash lye.
Traditionally, it was obtained by using rainwater to leach wood ashes (which are highly soluble in water and strongly alkaline ) of their potassium hydroxide (KOH). A caustic basic solution is produced, called lye water . Then, the lye water would either be used as such, as for curing olives before brining them, or be evaporated of water to produce crystalline lye. [ 1 ] [ 2 ]
Today, lye is commercially manufactured using a membrane cell chloralkali process . It is supplied in various forms such as flakes, pellets, microbeads, coarse powder or a solution . Lye has traditionally been used as a major ingredient in soapmaking .
The English word lye / ˈ l aɪ / has cognates in all Germanic languages , and originally designated a bath or hot spring. [ 3 ]
Lyes are used to cure many types of food, including the traditional Nordic lutefisk , olives (making them less bitter), canned mandarin oranges , hominy , lye rolls , century eggs , pretzels , candied pumpkins, and bagels . They are also used as a tenderizer in the crust of baked Cantonese moon cakes , in " zongzi " ( glutinous rice dumplings wrapped in bamboo leaves), in chewy southern Chinese noodles popular in Hong Kong and southern China, and in Japanese ramen noodles . Lye provides the crisp glaze on hard pretzels. It is used in kutsinta , a type of rice cake from the Philippines, together with pitsi-pitsî . [ 4 ] In Assam, northeast India, extensive use is made of a type of lye called khar in Assamese and karwi in Boro, obtained by filtering the ashes of various banana stems, roots and skins; it is used in cooking and also for curing, as medicine, and as a substitute for soap. Lye made out of wood ashes is also used in the nixtamalization process of hominy corn by the tribes of the Eastern Woodlands in North America .
In the United States , food-grade lye must meet the requirements outlined in the Food Chemicals Codex (FCC), [ 5 ] as prescribed by the U.S. Food and Drug Administration (FDA). [ 6 ] Lower grades of lye that are unsuitable for use in food preparation are commonly used as drain cleaners and oven cleaners. [ 6 ] [ page needed ]
Both sodium hydroxide and potassium hydroxide are used in making soap . Potassium hydroxide soaps are softer and more easily dissolved in water than sodium hydroxide soaps. Sodium hydroxide and potassium hydroxide are not interchangeable in either the proportions required or the properties produced in making soaps. [ citation needed ]
"Hot process" soap making also uses lye as the main ingredient. Lye is added to water, cooled for a few minutes and then added to oils and butters. The mixture is then cooked over a period of time (1–2 hours), typically in a slow cooker , and then placed into a mold.
Lyes are also valued for their cleaning effects. Sodium hydroxide is commonly the major constituent in commercial and industrial oven cleaners and clogged drain openers , due to its grease -dissolving abilities. Lyes decompose greases via alkaline ester hydrolysis , yielding water-soluble residues that are easily removed by rinsing.
Sodium or potassium hydroxide can be used to digest tissues of animal carcasses. Often referred to as alkaline hydrolysis , the process involves placing the animal carcass into a sealed chamber, adding a mixture of lye and water and the application of heat to accelerate the process. After several hours the chamber will contain a liquid with coffee-like appearance, [ 7 ] [ 8 ] [ 9 ] and the only solids that remain are very fragile bone hulls of mostly calcium phosphate , which can be mechanically crushed to a fine powder with very little force. [ 10 ] [ 11 ] Sodium hydroxide is frequently used in the process of decomposing roadkill dumped in landfills by animal disposal contractors. [ 8 ] Due to its low cost and easy availability, it has also been used to dispose of corpses by criminals. Italian serial killer Leonarda Cianciulli used this chemical to turn dead bodies into soap. [ 12 ] In Mexico, a man who worked for drug cartels admitted to having disposed of more than 300 bodies with it. [ 13 ]
A 3–10% solution of potassium hydroxide (KOH) gives a color change in some species of mushrooms.
When a person has been exposed to lye, sources recommend immediate removal of contaminated clothing/materials, gently brushing/wiping excess off of skin, and then flushing the area of exposure with running water for 15–60 minutes as well as contacting emergency services. [ 14 ]
Personal protective equipment including safety glasses, chemical-resistant gloves, and adequate ventilation are required for the safe handling of lye. When in proximity to lye that is dissolving in an open container of water, the use of a vapor-resistant face mask is recommended. Adding lye too quickly can cause a runaway thermal reaction which can result in the mixture boiling or erupting.
Lye in its solid state is deliquescent and has a strong affinity for moisture in the air. As a result, lye will dissolve when exposed to open air, absorbing large amounts of atmospheric moisture. Accordingly, lye is stored in airtight (and correspondingly moisture-tight) containers. Glass is not a good storage material, as strong alkalis are mildly corrosive to it. As with other corrosives, the containers should be labeled to indicate the potential danger of the contents and stored away from children, pets, heat, and moisture.
The majority of safety concerns with lye are also common with most corrosives, such as their potentially destructive effects on living tissues ; examples are the skin , flesh , and the cornea . Solutions containing lyes can cause chemical burns , permanent injuries, scarring and blindness , immediately upon contact. Lyes may be harmful or even fatal if swallowed; ingestion can cause esophageal stricture . Moreover, the solvation of dry solid lye is highly exothermic and the resulting heat may cause additional burns or ignite flammables.
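To give a sense of scale for that exotherm, the sketch below estimates the adiabatic temperature rise of a lye solution; the enthalpy of solution (about −44.5 kJ/mol for NaOH) and the heat capacity are approximate textbook values, and the scenario is illustrative only.

```python
# Rough estimate of the temperature rise when dissolving solid NaOH in water.
# Assumed values (approximate, for illustration only):
#   enthalpy of solution of NaOH ~ -44.5 kJ/mol (exothermic)
#   specific heat of the dilute solution ~ that of water, 4.18 J/(g*K)

M_NAOH = 40.0          # g/mol, molar mass of NaOH
DH_SOLN = -44.5e3      # J/mol, enthalpy of solution (approximate)
CP = 4.18              # J/(g*K), heat capacity of dilute aqueous solution

def temperature_rise(grams_naoh: float, grams_water: float) -> float:
    """Adiabatic temperature rise (K) for dissolving NaOH in water."""
    moles = grams_naoh / M_NAOH
    heat_released = -DH_SOLN * moles          # J, positive for exothermic
    total_mass = grams_naoh + grams_water     # g
    return heat_released / (CP * total_mass)

# Example: 100 g of lye dissolved in 1 L (1000 g) of water
print(f"~{temperature_rise(100, 1000):.0f} K rise")   # roughly 24 K
```

Even this idealized estimate (no heat loss to the container) shows why a modest amount of lye can bring water close to scalding temperatures.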
The reaction between sodium hydroxide and some metals is also hazardous. Aluminium , magnesium , zinc , tin , chromium , brass and bronze all react with lye to produce hydrogen gas. Since hydrogen is flammable , mixing a large quantity of lye with aluminium could result in an explosion. Both the potassium and sodium forms are able to dissolve copper. | https://en.wikipedia.org/wiki/Lye |
Lymphatic disease is a class of disorders which directly affect the components of the lymphatic system .
Examples include Castleman's disease [ 1 ] and lymphedema . [ 2 ]
Diseases and disorders
Hodgkin's Disease/Hodgkin's Lymphoma
Hodgkin lymphoma is a type of cancer of the lymphatic system. It can start almost anywhere in the body. Risk factors are believed to include HIV infection, Epstein–Barr virus infection, age, and family history. Symptoms include weight loss, fever, swollen lymph nodes , night sweats, itchy skin, fatigue, chest pain, coughing, or trouble swallowing. [ citation needed ]
Non-Hodgkin's Lymphoma
Non-Hodgkin's lymphoma is a malignant cancer. It is caused by the body producing too many abnormal white blood cells . It is not the same as Hodgkin's disease. Symptoms usually include a painless, enlarged lymph node or nodes in the neck, weakness, fever, weight loss, and anemia. [ citation needed ]
Lymphadenitis
Lymphadenitis is an infection of the lymph nodes, usually caused by viruses, bacteria, or fungi. Symptoms include redness or swelling around the lymph node. [ citation needed ]
Lymphangitis
Lymphangitis is an inflammation of the lymph vessels. Symptoms usually include swelling, redness, warmth, pain or red streaking around the affected area. [ citation needed ]
Lymphedema
Lymphedema is the chronic pooling of lymph fluid in the tissue. It can start anywhere in the lymphatic system of the body, and is also a side effect of some surgical procedures. Kathy Bates is an advocate and supporter of further research into lymphedema. [ 3 ]
Lymphocytosis
Lymphocytosis is a high lymphocyte count. It can be caused by an infection, blood cancer, lymphoma, or autoimmune disorders that are accompanied by chronic swelling. [ citation needed ] | https://en.wikipedia.org/wiki/Lymphatic_disease |
Lymphocyte function-associated antigen 1 ( LFA-1 ) is an integrin found on lymphocytes and other leukocytes. [ 1 ] LFA-1 plays a key role in emigration, which is the process by which leukocytes leave the bloodstream to enter the tissues. LFA-1 also mediates firm arrest of leukocytes. [ 2 ] Additionally, LFA-1 is involved in the process of cytotoxic T cell mediated killing as well as antibody mediated killing by granulocytes and monocytes. [ 3 ] As of 2007, LFA-1 has 6 known ligands: ICAM-1 , ICAM-2 , ICAM-3, ICAM-4, ICAM-5, and JAM-A. [ 2 ] LFA-1/ICAM-1 interactions have recently been shown to stimulate signaling pathways that influence T cell differentiation. [ 4 ] LFA-1 belongs to the integrin superfamily of adhesion molecules. [ 1 ]
LFA-1 is a heterodimeric glycoprotein with non-covalently linked subunits. [ 3 ] LFA-1 has two subunits designated as the alpha subunit and beta subunit. [ 2 ] The alpha subunit was named aL in 1983. [ 2 ] The alpha subunit is designated CD11a ; and the beta subunit, unique to leukocytes, is beta-2 or CD18 . [ 2 ] The ICAM binding site is on the alpha subunit. [ 5 ] The general binding region of the alpha subunit is the I-domain. Due to the presence of a divalent cation site in the I-domain, the specific binding site is often referred to as the metal-ion dependent adhesion site (MIDAS). [ 5 ]
In an inactive state, LFA-1 rests in a bent conformation and has a low affinity for ICAM binding. [ 5 ] This bent conformation conceals the MIDAS. Chemokines stimulate the activation process of LFA-1. [ 5 ] The activation process begins with the activation of Rap1 , an intracellular G protein. [ 2 ] Rap1 assists in breaking the constraint between the alpha and beta subunits of LFA-1. [ 2 ] This induces an intermediate extended conformation. [ 2 ] The conformational change stimulates a recruitment of proteins to form an activation complex. The activation complex further destabilizes the alpha and beta subunits. [ 2 ] Chemokines also stimulate an I-like domain on the beta subunit, which causes the MIDAS site on the beta subunit to bind to glutamate on the I domain of the alpha subunit. [ 5 ] This binding process causes the beta subunit to pull down the alpha 7 helix of the I domain, exposing and opening up the MIDAS site on the alpha subunit for binding. [ 5 ] This causes LFA-1 to undergo a conformational change to the fully extended conformation. The process of activating LFA-1 is known as inside-out signaling, which causes LFA-1 to shift from low affinity to high affinity by opening the ligand-binding site. [ 5 ]
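The activation sequence just described can be summarized as a small state machine. The sketch below is purely illustrative: the state and trigger names paraphrase the text and are not standard nomenclature, and no binding kinetics are modeled.

```python
# Illustrative state-machine summary of LFA-1 inside-out activation.
from enum import Enum, auto

class Conformation(Enum):
    BENT = auto()            # low affinity, MIDAS concealed
    EXTENDED = auto()        # intermediate, after Rap1 breaks subunit constraint
    FULLY_EXTENDED = auto()  # high affinity, MIDAS open for ICAM binding

TRANSITIONS = {
    (Conformation.BENT, "chemokine_activates_rap1"): Conformation.EXTENDED,
    (Conformation.EXTENDED, "beta_midas_pulls_alpha7_helix"): Conformation.FULLY_EXTENDED,
}

def step(state: Conformation, trigger: str) -> Conformation:
    """Advance the conformation if the trigger applies; otherwise stay put."""
    return TRANSITIONS.get((state, trigger), state)

state = Conformation.BENT
state = step(state, "chemokine_activates_rap1")
state = step(state, "beta_midas_pulls_alpha7_helix")
print(state)  # Conformation.FULLY_EXTENDED -> ligand-binding site open
```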
Early discovery of cellular adhesion molecules involved the use of monoclonal antibodies to inhibit cellular adhesion processes. [ 2 ] The antigen that bound to the monoclonal antibodies was identified as an important molecule in cellular recognition processes. [ 2 ] These experiments yielded the protein name “integrin” as a description of the proteins' integral role in cellular adhesion processes and the transmembrane association between the extracellular matrix and the cytoskeleton. [ 2 ] LFA-1, a leukocyte integrin, was first discovered by Timothy Springer in mice in the 1980s. [ 2 ]
Leukocyte adhesion deficiency (LAD) is an immunodeficiency caused by the absence of key adhesion surface proteins, including LFA-1. [ 6 ] LAD is a genetic defect caused by autosomal recessive genes. [ 6 ] The deficiency causes ineffective migration and phagocytosis for impacted leukocytes. [ 3 ] Patients with LAD also have poorly functioning neutrophils. [ 2 ] LAD1 , a subtype of LAD, is caused by a lack of integrins that contain the beta subunit, including LFA-1. [ 3 ] LAD1 is characterized by recurring bacterial infection, delayed (>30 days) separation of the umbilical cord, ineffective wound healing and pus formation, and granulocytosis. [ 7 ] LAD1 is caused by low expression of CD11 and CD18. CD18 is found on chromosome 21 and CD11 is found on chromosome 16. [ 6 ]
In pathology , lymphoepithelial lesion refers to a discrete abnormality that consists of lymphoid cells and epithelium , which may or may not be benign .
It may refer to a benign lymphoepithelial lesion of the parotid gland or benign lymphoepithelial lesion of the lacrimal gland , or may refer to the infiltration of malignant lymphoid cells into epithelium, in the context of primary gastrointestinal lymphoma. [ 1 ]
In the context of GI tract lymphoma, it is most often associated with MALT lymphomas . [ 1 ]
This article related to pathology is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lymphoepithelial_lesion |
In cell biology , a lymphokine-activated killer cell (also known as a LAK cell ) is a white blood cell , consisting mostly of natural killer , natural killer T , and T cells, that has been stimulated to kill tumor cells. Because of the way in which they are activated and the cells they can successfully target, LAK cells are classified as distinct from the classical natural killer and T lymphocyte systems.
It has been shown that culturing peripheral blood leukocytes (PBL) in the presence of Interleukin 2 results in the development of cytotoxic effector cells, which localize to tumor sites and are capable of lysing fresh, non-cultured cancer cells, both primary and metastatic. [ 1 ] LAK cells respond to these lymphokines , particularly IL-2 , by developing into effector cells capable of lysing tumor cells that are known to be resistant to NK cell activity. After stimulation by IL-2, LAK cells can target and kill tumor cells in the early innate response. [ 2 ]
The mechanism of LAK cells is distinct from that of natural killer cells because they can lyse cells that an NK cell cannot. LAK cells are also capable of acting against cells that do not display the major histocompatibility complex , as has been shown by their ability to cause lysis in non-immunogenic, allogeneic and syngeneic tumors. LAK cells function in the same way as NK cells in the peripheral blood but are more sensitive to, and can target, tumor cells.
The use of LAK cells has been found to be helpful in treating human cells with different cancers in vitro. [ 3 ] LAK cell therapy is a method that uses interleukin 2 (IL-2) to enhance the number of lymphocytes in an in vitro setting, and it has formed the foundation of many immunotherapy assays that are now in use. [ 4 ] LAK cells have shown potential as a cellular agent for cancer therapy and have been utilized therapeutically in association with IL-2 for the treatment of various cancers. LAK cells have anticancer efficacy against homologous carcinoma cells and can grow ex vivo in the presence of IL-2. [ 5 ] In melanoma and gastric cancer cells, intercellular adhesion molecule 1 (ICAM-1) antibody can significantly inhibit in vitro LAK-induced lysis of cancer cells. A study has shown that ICAM1 in lung cancer cells increases LAK cell-mediated tumor cell death as a new anti-tumor mechanism. [ 6 ] One study used a 4-hour chromium release assay, an assay used to measure the cytotoxicity of T cells and natural killer cells, to measure lysis of fresh solid tumor cells from 10 cancer patients, and found that in all 10 patients the fresh autologous tumor cells were resistant to lysis by PBL with natural killer cells, but were lysed by LAK cells. [ 2 ]
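For context on how such assays are scored, the standard readout of a chromium release assay is "percent specific lysis", computed from measured radioactive counts. The sketch below uses the conventional formula; the counts in the example are made-up numbers for illustration.

```python
# The chromium release assay quantifies cytotoxicity by measuring
# radioactive Cr-51 released from labeled target cells.

def percent_specific_lysis(experimental: float,
                           spontaneous: float,
                           maximum: float) -> float:
    """Standard specific-lysis formula for a Cr-51 release assay.

    experimental: counts released from targets incubated with effector cells
    spontaneous:  counts released from targets in medium alone
    maximum:      counts released from fully lysed targets (e.g., detergent)
    """
    return 100.0 * (experimental - spontaneous) / (maximum - spontaneous)

# Hypothetical example: LAK effectors vs. fresh tumor targets
print(percent_specific_lysis(experimental=2400, spontaneous=600, maximum=5100))
# -> 40.0 (percent of targets lysed above background)
```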
LAK cells, along with the administration of IL-2, have been experimentally used to treat cancer in mice and humans, but the treatment has very high toxicity: severe fluid retention was the major side effect of therapy, although all side effects resolved after interleukin-2 administration was stopped. Treatment with IL-2 alone to treat cancers is more dangerous than treatment with the combination of IL-2 and LAK cells. [ 7 ]
This cell biology article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lymphokine-activated_killer_cell |
Lynda Soderholm is a physical chemist at the U.S. Department of Energy's (DOE) Argonne National Laboratory with a specialty in f-block elements. [ 1 ] She is a senior scientist and the lead of the Actinide, Geochemistry & Separation Sciences Theme within Argonne's Chemical Sciences and Engineering Division. Her specific role is the Separation Science group leader within Heavy Element Chemistry and Separation Science (HESS), directing basic research focused on low-energy methods for isolating lanthanide and actinide elements from complex mixtures. She has made fundamental contributions to understanding f-block chemistry and characterizing f-block elements. [ 1 ] [ 2 ]
Soderholm became a Fellow of the American Association for the Advancement of Science (AAAS) in 2013, [ 3 ] and is also an Argonne Distinguished Fellow. [ 2 ]
Soderholm was awarded her PhD in 1982 by McMaster University under the direction of Prof John Greedan. Her dissertation focused on characterizing the structural and magnetic properties of a series of ternary f-ion oxides. After graduating, she was awarded a NATO postdoctoral fellowship at the Centre national de la recherche scientifique in France from 1982 until 1985. After a short appointment as an Argonne postdoctoral fellow, she was promoted to staff scientist the same year. Over several years, she moved up the ranks, becoming a senior chemist in 2001. She was also an adjunct professor at the University of Notre Dame from 2003 until 2007. In 2021, Soderholm was appointed interim Division Director for the Chemical Sciences and Engineering Division. [ 4 ]
Early in her career, Soderholm focused on characterizing the magnetic and electronic behavior of compounds containing f-ions (lanthanides and actinides), with a focus on high-T c materials, compounds that are superconducting at unusually high temperatures. She was part of the research group that first determined [ 5 ] the structure of YBa 2 Cu 3 O 7 . Their discovery formed the foundation for further developments in the broad field of superconductivity.
Continuing her interest in the f-elements, Soderholm shifted her focus from solid-state materials to nanoparticles and solutions, taking advantage of advances in X-ray structural probes made available by synchrotron facilities. Building on her earlier work using neutron scattering, her team became the first to discover [ 6 ] that plutonium exists in solution as tiny, well-defined nanoparticles . This work solved a longstanding problem in understanding transport of plutonium in the environment and resulted in the development of a new, patented approach [ 7 ] to separating plutonium during nuclear reprocessing.
Soderholm's more recent projects use machine learning to understand the influence of complex molecular structuring in solutions, in connection with low-energy processes for separation of f-block elements from complex mixtures.
Lynda Soderholm publications indexed by Google Scholar | https://en.wikipedia.org/wiki/Lynda_Soderholm |
Lyngbyastatins 1 and 3 are cytotoxic cyclic depsipeptides that possess antiproliferative activity against human cancer cell lines. [ 1 ] These compounds, first isolated from the extract of a Lyngbya majuscula/Schizothrix calcicola assemblage and from L. majuscula Harvey ex Gomont (Oscillatoriaceae) strains, respectively, target the actin cytoskeleton of eukaryotic cells. [ 2 ] [ 3 ]
Lyngbyastatins 1 and 3 are encoded by a 52 kb biosynthetic gene cluster (BGC) containing one polyketide synthase (PKS)/non-ribosomal peptide synthetase (NRPS) hybrid (LbnA), four NRPSs (LbnB–D, LbnF), and one PKS (LbnE).
Biosynthesis commences with PKS activity — thiolation of propanoic (Lyngbyastatin 1) or butyric (Lyngbyastatin 3) acid and subsequent loading onto the ketosynthase (KS) of LbnA. An acyl unit from malonyl CoA is then coupled onto the initial substrate via an acyltransferase (AT) and then methylated at the alpha carbon through a C-methyltransferase (CMT) before an aminotransferase (AmT) conducts a transamination of the initial substrate carbonyl. The latter half of LbnA follows traditional NRPS activity containing condensation (C), adenylation (A), and thiolation (T) domains to couple 2-hydroxy-3-methylvaleric acid, which is believed to be formed from the 2-oxo analog through PKS ketoreductase (KR) activity.
LbnB, a traditional NRPS, adds glycine into the growing thioester by its amino group. LbnC is another traditional NRPS that adds L-leucine and glycine, respectively, except the L-leucine domain possesses an active N-methyltransferase (NMT) domain that methylates the nitrogen of L-leucine.
NRPS LbnD then adds L-valine, L-tyrosine, and L or D-valine, respectively to the growing molecule. PKS LbnE couples an acyl unit from malonyl-CoA onto the C-terminus of the valine residue before a C-methyltransferase methylates the carbon alpha to the thioester twice to produce a quaternary alpha carbon.
NRPS LbnF completes the biosynthesis by coupling L-alanine before the thioesterase (TE) domain conducts a head-to-tail cyclization to produce the final depsipeptide products.
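To keep the order of modules straight, the steps described above can be laid out as an ordered list. The sketch below is pure bookkeeping over the enzymes and building blocks named in the text (shown for the propanoic acid starter of Lyngbyastatin 1); it is not a chemical model.

```python
# Schematic summary of the Lyngbyastatin 1 assembly line: an ordered list
# of (enzyme, building block / tailoring step) pairs taken from the text.

ASSEMBLY_LINE = [
    ("LbnA (PKS)",  "propanoic acid starter + malonyl-CoA extension; "
                    "alpha-C-methylation, transamination"),
    ("LbnA (NRPS)", "2-hydroxy-3-methylvaleric acid"),
    ("LbnB (NRPS)", "glycine"),
    ("LbnC (NRPS)", "L-leucine (N-methylated)"),
    ("LbnC (NRPS)", "glycine"),
    ("LbnD (NRPS)", "L-valine"),
    ("LbnD (NRPS)", "L-tyrosine"),
    ("LbnD (NRPS)", "L- or D-valine"),
    ("LbnE (PKS)",  "malonyl-CoA extension; double alpha-C-methylation"),
    ("LbnF (NRPS)", "L-alanine; TE domain head-to-tail cyclization"),
]

for i, (enzyme, block) in enumerate(ASSEMBLY_LINE, 1):
    print(f"step {i}: {enzyme}: {block}")
```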
This biochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lyngbyastatins |
Lyngurium or Ligurium is the name of a mythical gemstone believed to be formed of the solidified urine of the lynx (the best ones coming from wild males). It was included in classical and "almost every medieval lapidary " [ 1 ] or book of gems until it gradually disappeared from view in the 17th century. [ 2 ]
As well as various medical properties, lyngurium was credited with the power to attract objects, including metal; in fact it seems likely that what was thought to be lyngurium was either a type of yellow amber , which was known to the Ancient Greeks but obtained from the distant Baltic coast, or, less likely, a form of tourmaline . [ 3 ] The first surviving description of lyngurium is by Theophrastus (died c. 287 BC), and most later descriptions derive from his account. [ 4 ] Theophrastus said it was: [ 5 ]
...carved into signets and is hard as any stone, [and] has an unusual power. For it attracts other objects just as amber does, and some people claim that it acts not only on straws and leaves, but also on thin pieces of copper and iron, as Diocles maintained. The lyngurium is cold and very clear. A wild lynx produces better stones than a tame animal, and a male better ones than a female, there being a difference in the diet, in the exercise taken or not taken, and, in general, in the natural constitution of the body, in as much as the body is drier in the case of the former and more moist in the case of the latter. The stone is discovered only when experienced searchers dig it up, for when the lynx has passed its urine, it conceals it and scrapes soil over it.
In the 1st century AD Pliny the Elder discusses the stone, but makes it clear that he does not believe in it, or at least its supposed origin: [ 6 ] "I for my part am of the opinion that the whole story is false and that no gemstone bearing this name has been seen in our time. Also false are the statements made simultaneously about its medical properties, to the effect that when it is taken in liquid it breaks up stones in the bladder, and that it relieves jaundice if it is swallowed in wine or even looked at". [ 7 ] He also mentioned the belief that the hiding of the solidified urine was because lynxes had a "grudge against mankind", and deliberately hid what they knew to be highly beneficial objects for man. [ 8 ] This idea was apparently also mentioned by Theophrastus in a different, lost, work On creatures said to be grudging , and was still alive in the 15th century: "she hidith it for envy that hire vertues shulde not helpe vs". [ 9 ] Another version was that the lynx swallowed the stone and "withholt in his throte wel depe that the grete vertues there-of ne shulde nought be helpyng to vs" ("withholds it in his throat knowing that the virtues thereof should not be helping us"). [ 10 ]
The belief that male urine produced better stones related to a general ancient and medieval idea that inorganic materials could be gendered into generally superior male forms and their weaker female forms. [ 11 ] The 11th century Islamic scientist Abū Rayḥān al-Bīrūnī was critical of a popular belief, not mentioned in other sources, that the stone could make people change gender. [ 12 ]
The meaning and origin of the word seem to have been confused early on with a geographical origin, either in Liguria in northern Italy, or a part of Sicily which produced amber. [ 13 ] A version of the name, apparently started by Flavius Josephus , was ligure , and under this name the Vulgate Latin Bible described the seventh stone on the Priestly breastplate in the Book of Exodus , called either amber or jacinth in modern translations, though one 19th-century Danish translation used lyncuren . [ 14 ]
Although "the first English zoology" The Noble Lyte and Nature of Man (1521) written or at least printed by Lawrence Andrewe, still said that the lynx's "pisse baketh in ye sonne and that becommeth a ryche stone", by 1607 the clergyman Edward Topsell , though repeating many fabulous medieval beliefs about zoology, rejected lyngurium: "Latines did feigne an etymology of the word Lyncurium and uppon this weake foundation have they raised that vaine buildinge". [ 15 ] The death of belief in lyngurium generated a few attempts to find more scientific explanations, and a considerable amount of scholarly squabbling, but the absence of physical specimens was soon fatal. [ 16 ] | https://en.wikipedia.org/wiki/Lyngurium |
Lynk Global is an American company developing a satellite-to-mobile-phone satellite constellation that aims to provide a "cell tower in space" capability for global mobile phone service coverage, including in underserved rural areas without cellular coverage.
Lynk has requested a license from the US Federal Communications Commission to launch up to ten test satellites as early as 2022, with the goal of beginning continuous global coverage in 2025 using a constellation of several thousand satellites. [ 1 ]
As of March 2025, Lynk has an agreement with global satellite operator, SES , for funding and services using SES's fleet of geostationary and medium Earth orbit satellites and ground infrastructure to enhance Lynk's direct-to-device capabilities. [ 2 ]
Lynk Global Inc. was founded in 2017 by Charles Miller , Margo Deckard, and Tyghe Speidel. [ 3 ] The business plan for Lynk came out of a multi-year effort to look for the killer app for small satellites , specifically satellites as small as cubesat -class nanosatellites, which led to the concept of connecting a satellite directly to a mobile phone . Some had thought the idea impossible, but the Lynk concept and patents gave Lynk's founders and investors confidence it was achievable. [ 3 ] Lynk raised US$20 million from investors during its early years and expected to raise a US$100 million round later in 2021. [ 1 ]
In February 2020, Lynk "sent the world's first text message from a satellite in orbit to a standard mobile phone on the ground" in a test supported by both NASA and several mobile network operators. [ 4 ]
On 25 May 2021, Lynk filed with the US telecommunications regulator , the FCC , to license Lynk's satellites and multiple satellite launches, with the goal to enable global mobile connectivity from space-based assets. [ 4 ]
By May 2021, Lynk had launched four "cell-tower-in-space" test satellites into orbit. [ 5 ] The fifth one, Shannon , was launched on 29 June 2021 [ 6 ] and is a test satellite of a new design suitable for mass production . Shannon is larger and operates at a higher power level and greater telecom capacity than the earlier test satellites. According to Lynk, the design is capable of being scaled up to provide greater communications throughput. [ 7 ]
On 25 July 2023, Lynk published the first public video demonstrating a satellite-to-phone voice-call, [ 8 ] though earlier in April of the same year, AST SpaceMobile claimed to have made the first space-based two-way telephone call with an unmodified smartphone. [ 9 ]
In March 2025, Lynk Global and satellite operator SES announced a partnership, in which SES will provide investment in Lynk Global along with integrated services to relay traffic between the Lynk low Earth orbit (LEO) satellite constellation and SES's O3b mPOWER medium Earth orbit (MEO) satellite system to access gateways for secure real-time data delivery, and telemetry, tracking, command and monitoring services. Lynk and SES will also collaborate in the development of Lynk's network architecture and satellite manufacturing in the US and Europe. [ 2 ]
According to the company, Lynk satellite mobile technology is capable of connecting to standard [ 4 ] mobile phones from satellites in 500 km (310 mi)- altitude orbits . [ 3 ]
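As a rough sanity check on what a 500 km orbit implies, the two-body sketch below computes the orbital period and ground speed from textbook Earth constants; the tie-in to constellation sizing at the end is an inference, not a company figure.

```python
# Back-of-the-envelope orbital mechanics for a ~500 km altitude circular
# orbit, using standard two-body formulas and textbook Earth constants.

import math

MU_EARTH = 398600.4418   # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6371.0         # km, mean Earth radius

def circular_orbit(altitude_km: float):
    a = R_EARTH + altitude_km                            # orbital radius, km
    period = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)    # seconds
    speed = math.sqrt(MU_EARTH / a)                      # km/s
    return period, speed

period, speed = circular_orbit(500.0)
print(f"period ~ {period/60:.1f} min, speed ~ {speed:.2f} km/s")
# -> roughly 94.5 min and 7.6 km/s: each satellite passes over a given
#    ground point only briefly, which is why continuous coverage calls
#    for a large constellation.
```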
Lynk technology connects to mobile phones on the ground in a way similar to roaming networks, where the satellite mobile service will connect to another available cellular network when outside the range of its home network. Accomplishing the regulatory side of this novel telecommunications method will require that Lynk work through the various geographically dispersed, and often country-specific, mobile network operators in any area of the world in which the service is to be available. [ 1 ]
The first Lynk payloads to be tested in space have been flown attached to Cygnus spacecraft following their departure from the ISS . The first was tested on Cygnus NG-10 in February 2019, the second on Cygnus NG-11 in August 2019 and the third on Cygnus NG-12 in January 2020. [ 10 ] [ 11 ] Those have been followed by two free-flying test satellites, Lynk 04 ULTP and Lynk 06 Shannon , that have been launched on Falcon 9 Block 5 rockets in March 2020 and June 2021 respectively. [ 12 ] [ 13 ] The launch of operational satellites, named Lynk Towers, started in April 2022 with 3 satellites launched as of January 2023. [ 13 ] | https://en.wikipedia.org/wiki/Lynk_Global |
Lynn Margulis (born Lynn Petra Alexander ; March 5, 1938 – November 22, 2011) was an American evolutionary biologist, and was the primary modern proponent for the significance of symbiosis in evolution . In particular, Margulis transformed and fundamentally framed current understanding of the evolution of cells with nuclei by proposing it to have been the result of symbiotic mergers of bacteria. Margulis was also the co-developer of the Gaia hypothesis with the British chemist James Lovelock , proposing that the Earth functions as a single self-regulating system, and was the principal defender and promulgator of the five kingdom classification of Robert Whittaker .
Throughout her career, Margulis' work could arouse intense objections, [ 1 ] [ 2 ] and her formative paper, "On the Origin of Mitosing Cells", appeared in 1967 after being rejected by about fifteen journals. [ 3 ] Still a junior faculty member at Boston University at the time, her theory that cell organelles such as mitochondria and chloroplasts were once independent bacteria was largely ignored for another decade, becoming widely accepted only after it was powerfully substantiated through genetic evidence. Margulis was elected a member of the US National Academy of Sciences in 1983. President Bill Clinton presented her the National Medal of Science in 1999. The Linnean Society of London awarded her the Darwin-Wallace Medal in 2008.
Margulis was a strong critic of neo-Darwinism . [ 4 ] Her position sparked lifelong debate with leading neo-Darwinian biologists, including Richard Dawkins , [ 5 ] George C. Williams , and John Maynard Smith . [ 1 ] : 30, 67, 74–78, 88–92 Margulis' work on symbiosis and her endosymbiotic theory had important predecessors, going back to the mid-19th century – notably Andreas Franz Wilhelm Schimper , Konstantin Mereschkowski , Boris Kozo-Polyansky , and Ivan Wallin – and Margulis not only promoted greater recognition for their contributions, but personally oversaw the first English translation of Kozo-Polyansky's Symbiogenesis: A New Principle of Evolution , which appeared the year before her death. Many of her major works, particularly those intended for a general readership, were collaboratively written with her son Dorion Sagan .
In 2002, Discover magazine recognized Margulis as one of the 50 most important women in science. [ 6 ]
Lynn Petra Alexander [ 7 ] [ 8 ] was born on March 5, 1938 [ 9 ] in Chicago , to a Jewish family. [ 10 ] Her parents were Morris Alexander and Leona Wise Alexander. She was the eldest of four daughters. Her father was an attorney who also ran a company that made road paints. Her mother operated a travel agency. [ 11 ] She entered the Hyde Park Academy High School in 1952, [ 12 ] describing herself as a bad student who frequently had to stand in the corner. [ 8 ]
A precocious child, she was accepted at the University of Chicago Laboratory Schools [ 13 ] at the age of fifteen. [ 14 ] [ 15 ] [ 16 ] In 1957, at age 19, she earned a BA from the University of Chicago in Liberal Arts . She joined the University of Wisconsin to study biology under Hans Ris and Walter Plaut, her supervisor, and graduated in 1960 with an MS in genetics and zoology. (Her first publication, published with Plaut in 1958 in the Journal of Protozoology , was on the genetics of Euglena , flagellates which have features of both animals and plants.) [ 17 ] She then pursued research at the University of California, Berkeley , under the zoologist Max Alfert. Before she could complete her dissertation, she was offered a research associateship and then a lectureship at Brandeis University in Massachusetts in 1964. It was while working there that she obtained her PhD from the University of California, Berkeley in 1965. Her thesis was An Unusual Pattern of Thymidine Incorporation in Euglena . [ 18 ]
In 1966 she moved to Boston University , where she taught biology for twenty-two years. She was initially an Adjunct Assistant Professor, then was appointed to Assistant Professor in 1967. She was promoted to Associate Professor in 1971, to full Professor in 1977, and to University Professor in 1986. In 1988 she was appointed Distinguished Professor of Botany at the University of Massachusetts at Amherst . She was Distinguished Professor of Biology in 1993. In 1997 she transferred to the Department of Geosciences at UMass Amherst to become Distinguished Professor of Geosciences "with great delight", [ 19 ] the post which she held until her death. [ 20 ]
In 1966, as a young faculty member at Boston University , Margulis wrote a theoretical paper titled "On the Origin of Mitosing Cells". [ 22 ] The paper, however, was "rejected by about fifteen scientific journals," she recalled. [ 3 ] It was finally accepted by the Journal of Theoretical Biology and is considered today a landmark in modern endosymbiotic theory . Weathering constant criticism of her ideas for decades, Margulis was famous for her tenacity in pushing her theory forward, despite the opposition she faced at the time. [ 8 ] The descent of mitochondria from bacteria and of chloroplasts from cyanobacteria was experimentally demonstrated in 1978 by Robert Schwartz and Margaret Dayhoff . [ 23 ] This formed the first experimental evidence for the symbiogenesis theory. [ 8 ] The endosymbiosis theory of organelle genesis became widely accepted in the early 1980s, after the genetic material of mitochondria and chloroplasts had been found to be significantly different from that of the cell's nuclear DNA . [ 24 ]
In 1995, English evolutionary biologist Richard Dawkins had this to say about Lynn Margulis and her work:
I greatly admire Lynn Margulis's sheer courage and stamina in sticking by the endosymbiosis theory, and carrying it through from being an unorthodoxy to an orthodoxy. I'm referring to the theory that the eukaryotic cell is a symbiotic union of primitive prokaryotic cells. This is one of the great achievements of twentieth-century evolutionary biology, and I greatly admire her for it. [ 3 ]
Margulis opposed competition-oriented views of evolution, stressing the importance of symbiotic or cooperative relationships between species. [ 25 ]
She later formulated a theory that proposed symbiotic relationships between organisms of different phyla, or kingdoms, as the driving force of evolution , and explained genetic variation as occurring mainly through transfer of nuclear information between bacterial cells or viruses and eukaryotic cells . [ 25 ] Her organelle genesis ideas are now widely accepted, but the proposal that symbiotic relationships explain most genetic variation is still something of a fringe idea. [ 25 ]
Margulis also held a negative view of certain interpretations of Neo-Darwinism that she felt were excessively focused on competition between organisms, as she believed that history will ultimately judge them as comprising "a minor twentieth-century religious sect within the sprawling religious persuasion of Anglo-Saxon Biology." [ 25 ] She wrote that proponents of the standard theory "wallow in their zoological, capitalistic, competitive, cost-benefit interpretation of Darwin – having mistaken him ... Neo-Darwinism, which insists on [the slow accrual of mutations by gene-level natural selection], is in a complete funk." [ 25 ]
Margulis initially sought out the advice of James Lovelock for her own research: she explained that, "In the early seventies, I was trying to align bacteria by their metabolic pathways. I noticed that all kinds of bacteria produced gases. Oxygen, hydrogen sulfide, carbon dioxide, nitrogen, ammonia—more than thirty different gases are given off by the bacteria whose evolutionary history I was keen to reconstruct. Why did every scientist I asked believe that atmospheric oxygen was a biological product but the other atmospheric gases—nitrogen, methane, sulfur, and so on—were not? 'Go talk to Lovelock,' at least four different scientists suggested. Lovelock believed that the gases in the atmosphere were biological." [ 3 ]
Margulis met with Lovelock, who explained his Gaia hypothesis to her, and very soon they began an intense collaborative effort on the concept. [ 3 ] One of the earliest significant publications on Gaia was a 1974 paper co-authored by Lovelock and Margulis, which succinctly defined the hypothesis as follows: "The notion of the biosphere as an active adaptive control system able to maintain the Earth in homeostasis we are calling the 'Gaia hypothesis.'" [ 26 ]
Like other early presentations of Lovelock's idea, the Lovelock-Margulis 1974 paper seemed to give living organisms complete agency in creating planetary self-regulation, whereas later, as the idea matured, this planetary-scale self-regulation was recognized as an emergent property of the Earth system , life and its physical environment taken together. [ 27 ] When climatologist Stephen Schneider convened the 1989 American Geophysical Union Chapman Conference around the issue of Gaia, the idea of "strong Gaia" and "weak Gaia" was introduced by James Kirchner, after which Margulis was sometimes associated with the idea of "weak Gaia", incorrectly (her essay " Gaia is a Tough Bitch " dates from 1995 – and it stated her own distinction from Lovelock as she saw it, which was primarily that she did not like the metaphor of Earth as a single organism, because, she said, "No organism eats its own waste"). [ 3 ] In her 1998 book Symbiotic Planet , Margulis explored the relationship between Gaia and her work on symbiosis. [ 28 ]
In 1969, life on earth was classified into five kingdoms , as introduced by Robert Whittaker . [ 29 ] Margulis became the most important supporter, as well as critic [ 30 ] – while supporting parts, she was the first to recognize the limitations of Whittaker's classification of microbes. [ 31 ] But later discoveries of new organisms, such as archaea , and emergence of molecular taxonomy challenged the concept. [ 32 ] By the mid-2000s, most scientists began to agree that there are more than five kingdoms. [ 33 ] [ 34 ] Margulis became the most important defender of the five kingdom classification. She rejected the three-domain system introduced by Carl Woese in 1990, which gained wide acceptance. She introduced a modified classification by which all life forms, including the newly discovered, could be integrated into the classical five kingdoms. According to Margulis, the main problem, archaea, falls under the kingdom Prokaryotae alongside bacteria (in contrast to the three-domain system, which treats archaea as a higher taxon than kingdom, or the six-kingdom system, which holds that it is a separate kingdom). [ 32 ] Margulis' concept is given in detail in her book Five Kingdoms , written with Karlene V. Schwartz. [ 35 ] It has been suggested that it is mainly because of Margulis that the five-kingdom system survives. [ 19 ]
In 2009, via a then-standard publication-process known as "communicated submission" (which bypassed traditional peer review ), she was instrumental in getting the Proceedings of the National Academy of Sciences ( PNAS ) to publish a paper by Donald I. Williamson rejecting "the Darwinian assumption that larvae and their adults evolved from a single common ancestor." [ 36 ] [ 37 ] Williamson's paper provoked immediate response from the scientific community , including a countering paper in PNAS . [ 36 ] Conrad Labandeira of the Smithsonian National Museum of Natural History said, "If I was reviewing [Williamson's paper] I would probably opt to reject it," he says, "but I'm not saying it's a bad thing that this is published. What it may do is broaden the discussion on how metamorphosis works and [...] [on] the origin of these very radical life cycles." But Duke University insect developmental biologist Fred Nijhout said that the paper was better suited for the " National Enquirer than the National Academy." [ 38 ] In September it was announced that PNAS would eliminate communicated submissions in July 2010. PNAS stated that the decision had nothing to do with the Williamson controversy. [ 37 ]
In 2009 Margulis and seven others authored a position paper concerning research on the viability of round body forms of some spirochetes, "Syphilis, Lyme disease, & AIDS: Resurgence of 'the great imitator'?" [ 39 ] which states that, "Detailed research that correlates life histories of symbiotic spirochetes to changes in the immune system of associated vertebrates is sorely needed", and urging the "reinvestigation of the natural history of mammalian, tick -borne, and venereal transmission of spirochetes in relation to impairment of the human immune system". The paper went on to suggest "that the possible direct causal involvement of spirochetes and their round bodies to symptoms of immune deficiency be carefully and vigorously investigated". [ 39 ]
In a Discover Magazine interview, Margulis explained her reason for interest in the topic of the 2009 "AIDS" paper: "I'm interested in spirochetes only because of our ancestry. I'm not interested in the diseases", and stated that she had called them "symbionts" because both the spirochete which causes syphilis ( Treponema ) and the spirochete which causes Lyme disease ( Borrelia ) only retain about 20% of the genes they would need to live freely, outside of their human hosts. [ 4 ]
However, in the Discover Magazine interview Margulis said that "the set of symptoms, or syndrome, presented by syphilitics overlaps completely with another syndrome: AIDS", and also noted that Kary Mullis [ a ] said that "he went looking for a reference substantiating that HIV causes AIDS and discovered, 'There is no such document' ". [ 4 ]
This provoked a widespread supposition that Margulis had been an " AIDS denialist ". Jerry Coyne, on his Why Evolution is True blog, reacted against what he interpreted as Margulis' belief "that AIDS is really syphilis, not viral in origin at all." [ 40 ] Seth Kalichman , a social psychologist who studies behavioral and social aspects of AIDS, cited Margulis' 2009 paper as an example of AIDS denialism "flourishing", [ 41 ] and asserted that her "endorsement of HIV/AIDS denialism defies understanding". [ 42 ]
Historian Jan Sapp has said that "Lynn Margulis's name is as synonymous with symbiosis as Charles Darwin 's is with evolution." [ 1 ] She has been called "science's unruly earth mother", [ 43 ] a "vindicated heretic", [ 44 ] or a scientific "rebel". [ 45 ] It has been suggested that initial rejection of Margulis' work on the endosymbiotic theory, and the controversial nature of it as well as of Gaia theory, made her identify throughout her career with scientific mavericks, outsiders, and unaccepted theories generally. [ 1 ]
In the last decade of her life, while key components of her life's work began to be understood as fundamental to a modern scientific viewpoint – the widespread adoption of Earth System Science and the incorporation of key parts of endosymbiotic theory into biology curricula worldwide – Margulis if anything became more embroiled in controversy, not less. Journalist John Wilson explained this by saying that Lynn Margulis "defined herself by oppositional science," [ 46 ] and in the commemorative collection of essays Lynn Margulis: The Life and Legacy of a Scientific Rebel , commentators again and again depict her as a modern embodiment of the "scientific rebel", [ 1 ] akin to Freeman Dyson 's 1995 essay The Scientist as Rebel , a tradition Dyson saw embodied in Benjamin Franklin , and which Dyson believed to be essential to good science. [ 47 ]
Margulis married astronomer Carl Sagan in 1957 soon after she got her bachelor's degree. Sagan was then a graduate student in physics at the University of Chicago. Their marriage ended in 1964, just before she completed her PhD. They had two sons, Dorion Sagan , who later became a popular science writer and her collaborator, and Jeremy Sagan, software developer and founder of Sagan Technology. [ citation needed ]
In 1967 she married Thomas N. Margulis, a crystallographer . They had a son named Zachary Margulis-Ohnuma, a New York City criminal defense lawyer, and a daughter Jennifer Margulis, teacher and author. [ 64 ] [ 65 ] They divorced in 1980. [ citation needed ]
She commented, "I quit my job as a wife twice," and, "it's not humanly possible to be a good wife, a good mother, and a first-class scientist. No one can do it — something has to go." [ 65 ]
In the 2000s she had a relationship with fellow biologist Ricardo Guerrero. [ 12 ]
Margulis argued that the September 11 attacks were a " false-flag operation , which has been used to justify the wars in Afghanistan and Iraq as well as unprecedented assaults on [...] civil liberties." She wrote that there was "overwhelming evidence that the three buildings [of the World Trade Center] collapsed by controlled demolition." [ 1 ]
She was a religious agnostic , [ 12 ] and a staunch evolutionist , but rejected the modern evolutionary synthesis , [ 4 ] and said: "I remember waking up one day with an epiphanous revelation: I am not a neo-Darwinist! I recalled an earlier experience, when I realized that I wasn't a humanistic Jew. Although I greatly admire Darwin's contributions and agree with most of his theoretical analysis and I am a Darwinist, I am not a neo-Darwinist." [ 3 ] She argued that "Natural selection eliminates and maybe maintains, but it doesn't create", and maintained that symbiosis was the major driver of evolutionary change. [ 4 ]
Margulis died on November 22, 2011, at home in Amherst , Massachusetts , five days after suffering a hemorrhagic stroke . [ 9 ] [ 7 ] [ 8 ] [ 65 ] [ 66 ] In accordance with her wishes, she was cremated and her ashes were scattered in her favorite research areas near her home. [ 67 ]
The Lynx X-ray Observatory ( Lynx ) is a NASA -funded Large Mission Concept Study commissioned as part of the National Academy of Sciences 2020 Astronomy and Astrophysics Decadal Survey . The concept study phase is complete as of August 2019, and the Lynx final report [ 1 ] has been submitted to the Decadal Survey for prioritization. If launched, Lynx would be the most powerful X-ray astronomy observatory constructed to date, enabling order-of-magnitude advances in capability [ 2 ] over the current Chandra X-ray Observatory and XMM-Newton space telescopes.
In 2016, following recommendations laid out in the so-called Astrophysics Roadmap of 2013, NASA established four space telescope concept studies for future Large strategic science missions . In addition to Lynx (originally called X-ray Surveyor in the Roadmap document ) , they are the Habitable Exoplanet Imaging Mission (HabEx), the Large Ultraviolet Optical Infrared Surveyor (LUVOIR), and the Origins Space Telescope (OST, originally called the Far-Infrared Surveyor). The four teams completed their final reports in August 2019, and turned them over to both NASA and the National Academy of Sciences , whose independent Decadal Survey committee advises NASA on which mission should take top priority. If it receives top prioritization and therefore funding, Lynx would launch in approximately 2036. It would be placed into a halo orbit around the second Sun–Earth Lagrange point (L2), and would carry enough propellant for more than twenty years of operation without servicing. [ 1 ] [ 2 ]
The Lynx concept study involved more than 200 scientists and engineers across multiple international academic institutions , aerospace , and engineering companies. [ 3 ] The Lynx Science and Technology Definition Team (STDT) was co-chaired by Alexey Vikhlinin and Feryal Özel . Jessica Gaskin was the NASA Study Scientist, and the Marshall Space Flight Center managed the Lynx Study Office jointly with the Smithsonian Astrophysical Observatory , which is part of the Center for Astrophysics | Harvard & Smithsonian .
According to the concept study's Final Report , the Lynx Design Reference Mission was intentionally optimized to enable major advances in three key astrophysical discovery areas.
Collectively, these serve as three "science pillars" that set the baseline requirements for the observatory. Those requirements include greatly enhanced sensitivity , a sub-arcsecond point spread function stable across the telescope's field of view , and very high spectral resolution for both imaging and gratings spectroscopy . These requirements, in turn, enable a broad science case with major contributions across the astrophysical landscape (as summarized in Chapter 4 of the Lynx Report ), including multi-messenger astronomy , black hole accretion physics, large-scale structure , Solar System science, and even exoplanets . The Lynx team markets the mission's science capabilities as "transformationally powerful, flexible, and long-lived", inspired by the spirit of NASA 's Great Observatories program .
As described in Chapters 6-10 of the concept study's Final Report , Lynx is designed as an X-ray observatory with a grazing incidence X-ray telescope and detectors that record the position, energy, and arrival time of individual X-ray photons . Post-facto aspect reconstruction leads to modest requirements on pointing precision and stability, while enabling accurate sky locations for detected photons. The design of the Lynx spacecraft draws heavily on heritage from the Chandra X-ray Observatory , with few moving parts and high technology readiness level elements. Lynx will operate in a halo orbit around Sun-Earth L2 , enabling high observing efficiency in a stable environment. Its maneuvers and operational procedures on-orbit are nearly identical to Chandra' s, and similar design approaches promote longevity. Without in-space servicing, Lynx will carry enough consumables to enable continuous operation for at least twenty years. The spacecraft and payload elements are, however, designed to be serviceable, potentially enabling an even longer lifetime.
The major advances in sensitivity, spatial, and spectral resolution in the Lynx Design Reference Mission are enabled by the spacecraft's payload, namely the mirror assembly and suite of three science instruments. The Lynx Report notes that each of the payload elements features state-of-the-art technologies while also representing a natural evolution of existing instrumentation technology development over the last two decades. The key technologies are currently at Technology Readiness Levels (TRL) 3 or 4. The Lynx Report notes that, with three years of targeted pre-phase A development in the early 2020s, three of the four key technologies will be matured to TRL 5 and one will reach TRL 4 by the start of Phase A, achieving TRL 5 shortly thereafter. The Lynx payload thus consists of four major elements: the mirror assembly and the three science instruments.
The Chandra X-ray Observatory experience provides the blueprint for developing the systems required to operate Lynx, leading to a significant cost reduction relative to starting from scratch. This starts with a single prime contractor for the science and operations center, staffed by a seamless, integrated team of scientists, engineers, and programmers. Many of the system designs, procedures, processes, and algorithms developed for Chandra will be directly applicable for Lynx, although all will be recast in a software/hardware environment appropriate for the 2030s and beyond.
The science impact of Lynx will be maximized by subjecting all of its proposed observations to peer review, including those related to the three science pillars. Time pre-allocation can be considered only for a small number of multi-purpose key programs, such as surveys in pre-selected regions of the sky. Such an open General Observer (GO) program approach has been successfully employed by large missions such as Hubble Space Telescope , Chandra X-ray Observatory , and Spitzer Space Telescope , and is planned for James Webb Space Telescope and Nancy Grace Roman Space Telescope . The Lynx GO program will have ample exposure time to achieve the objectives of its science pillars, make impacts across the astrophysical landscape, open new directions of inquiry, and produce as yet unimagined discoveries.
The cost of the Lynx X-ray Observatory is estimated to be between US$4.8 billion to US$6.2 billion (in FY20 dollars at 40% and 70% confidence levels , respectively). This estimated cost range includes the launch vehicle , cost reserves, and funding for five years of mission operations, while excluding potential foreign contributions (such as participation by the European Space Agency (ESA)). As described in Section 8.5 of the concept study's Final Report , the Lynx team commissioned five independent cost estimates , all of which arrived at similar estimates for the total mission lifecycle cost. | https://en.wikipedia.org/wiki/Lynx_X-ray_Observatory |
Lyodura was a medical product used in neurosurgery that has been shown to transmit Creutzfeldt–Jakob disease , a degenerative neurological disorder that is incurable, from affected donor cadavers to surgical recipients. Lyodura was introduced in 1969 as a product of B. Braun Melsungen AG , a leading hospital supply company based in Germany . [ 1 ]
The product was used as a quick and effective patch material for surgery on the brain . It was a section of freeze-dried tissue which could be stored for extended periods on hospital shelves and could be made ready for use simply by soaking it in water for a few minutes. [ 2 ]
What was not known by the consumer was the origin of the source material, the efficacy of its processing methods, and the danger of its use.
The raw material for Lyodura was the dura mater of a human cadaver . The tissue would usually be harvested during an autopsy and then sold to the manufacturer. After neurological diseases were linked to use of Lyodura, an investigation determined that the manufacturer had obtained the donor tissue by black market methods. Autopsy staff would remove the tissue from cadavers, regardless of whether the deceased's family had agreed to an autopsy or not, and sell it in quantity to representatives of the manufacturer. Due to this illegal method of collection, no record of patient history accompanied the tissue to production. [ 3 ]
The harvested tissue was sterilized in large batches using gamma radiation and freeze-drying. The manufacturer believed that its sterilization procedure was sufficiently powerful to render any diseases in the tissue harmless and was therefore unconcerned about cross-contamination from CJD-containing tissue to other tissue in the same sterilization vat. It is now believed that almost all affected Lyodura product was tainted with Creutzfeldt–Jakob disease through this process. [ 4 ] In 1987, after the first deaths linked to Lyodura, the manufacturer began processing tissue from each individual donor separately to prevent cross-contamination and rinsing it with sodium hydroxide , a proven means of deactivating prions, afterwards. [ 2 ] That same year, the American Food and Drug Administration issued a safety alert advising medical professionals to dispose of all Lyodura that they could not confirm was from a different batch than the contaminated one, then an import alert stating that Lyodura was believed to carry Creutzfeldt-Jakob disease and shipments of it should be stopped by US customs agents as an "adulterated drug". [ 2 ] [ 5 ] The Australian Therapeutic Goods Administration also revoked its approval for use in 1987. [ 2 ]
Lyodura was removed from sale in 1996. The World Health Organization recommended in 1997 that the medical field move away from cadaver-sourced dura mater grafts due to the risk of transmitting Creutzfeldt-Jakob disease highlighted by Lyodura-related cases. [ 6 ] Dural grafts are now made from bovine tissue, various synthetic materials, or part of the patient's own body.
As of 2017, 154 patients in Japan had been diagnosed with Creutzfeldt–Jakob disease after receiving dural grafts. Every patient for whom the brand of graft could be identified from medical records had received a Lyodura graft. Patients continued to develop symptoms up to thirty years after their surgery. [ 4 ] As of 2004, five Australian patients had been diagnosed with Creutzfeldt–Jakob disease after receiving Lyodura grafts. Due to the long latent period of Creutzfeldt–Jakob disease, epidemiologists remain uncertain how many people will be affected by the disease. [ 2 ]
An award-winning documentary was produced on the subject. The Canadian Broadcasting Corporation's The Fifth Estate segment, "Deadly Harvest", dealt with the product's history, sale in Canada, and health effects worldwide. The product has since been banned for use in Canada. | https://en.wikipedia.org/wiki/Lyodura |
Lyoluminescence refers to the emission of light while dissolving a solid into a liquid solvent . It is a form of chemiluminescence . The most common lyoluminescent effect is seen when solid samples which have been heavily irradiated by ionizing radiation are dissolved in water. The total amount of light emitted by the material increases proportionally with the total radiation dose received by the material up to a certain level called the saturation value.
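The dose dependence just described can be sketched with a simple saturating dose-response curve. The single-exponential form and the parameter values below are assumptions for illustration, not a model from the literature; only the qualitative behavior (near-linear at low dose, leveling off at saturation) reflects the text.

```python
# Illustrative dose-response sketch: total light emitted on dissolution
# grows with absorbed dose and levels off at a saturation value.

import math

def light_yield(dose_gy: float, l_sat: float = 1.0, d0_gy: float = 500.0) -> float:
    """Total light emitted vs. absorbed dose (arbitrary units).

    l_sat: assumed saturation value of the light yield
    d0_gy: assumed characteristic dose scale
    """
    return l_sat * (1.0 - math.exp(-dose_gy / d0_gy))

for dose in (50, 100, 200, 500, 2000):
    print(dose, round(light_yield(dose), 3))
# At low dose the response is nearly linear (~dose/d0), which is what
# makes lyoluminescence usable for dosimetry; at high dose it saturates.
```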
Many gamma-irradiated substances are known to produce lyoluminescence; these include spices , powdered milk , soups, cotton and paper. [ 1 ] While the broad variety of materials which exhibit lyoluminescence confounds explanation by a single common mechanism there is a common feature to the phenomenon, the production of free radicals in solution. Lyoluminescence intensity can be increased by performing the dissolution of the solid in a solution containing conventionally chemiluminescent compounds such as luminol . [ 2 ] These are thus called lyoluminescence sensitizers. | https://en.wikipedia.org/wiki/Lyoluminescence |
Lyotropic liquid crystals result when amphiphiles , molecules that are both hydrophobic and hydrophilic , dissolve in a solvent to form a phase that behaves both like a liquid and like a solid crystal . This liquid crystalline mesophase includes everyday mixtures like soap and water. [ 1 ] [ 2 ]
The term lyotropic comes from Ancient Greek λύω (lúō) 'to dissolve' and τροπικός (tropikós) 'change'. Historically, the term was used to describe the common behavior of materials composed of amphiphilic molecules upon the addition of a solvent . Such molecules comprise a hydrophilic (literally 'water-loving') head-group (which may be ionic or non-ionic) attached to a hydrophobic ('water-hating') group.
The micro-phase segregation of two incompatible components on a nanometer scale results in different types of solvent-induced extended anisotropic [ 3 ] arrangements, depending on the volume balance between the hydrophilic part and the hydrophobic part. In turn, these generate the long-range order of the phases, with the solvent molecules filling the space around the compounds to provide fluidity to the system. [ 4 ]
In contrast to thermotropic liquid crystals, lyotropic liquid crystals therefore have an additional degree of freedom: the concentration, which enables them to form a variety of different phases. As the concentration of amphiphilic molecules is increased, several different types of lyotropic liquid crystal structure occur in solution. Each of these types has a different extent of molecular ordering within the solvent matrix, from spherical micelles to larger cylinders, aligned cylinders, and even bilayered and multiwalled aggregates. [ 5 ]
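A common way to rationalize which of these aggregate shapes forms is the critical packing parameter P = v/(a0·lc), built from the hydrophobic tail volume v, the head-group area a0, and the tail length lc. The sketch below uses Tanford-style tail formulas and the usual shape thresholds, quoted from memory as approximate textbook values; the example head-group area is an assumption.

```python
# Critical packing parameter estimate for a single-chain surfactant,
# using approximate Tanford-style formulas for the hydrocarbon tail.

def packing_parameter(n_carbons: int, a0_A2: float) -> float:
    v = 27.4 + 26.9 * n_carbons    # tail volume, cubic angstroms (approx.)
    lc = 1.5 + 1.265 * n_carbons   # max tail length, angstroms (approx.)
    return v / (a0_A2 * lc)

def predicted_aggregate(p: float) -> str:
    if p < 1/3:  return "spherical micelles"
    if p < 1/2:  return "cylindrical micelles"
    if p <= 1:   return "bilayers / vesicles"
    return "inverted phases"

# Example: a C12 single-chain surfactant with a large ionic head group
# (a0 ~ 70 square angstroms is an assumed, illustrative value)
p = packing_parameter(n_carbons=12, a0_A2=70.0)
print(f"P = {p:.2f} -> {predicted_aggregate(p)}")   # ~0.30 -> spherical micelles
```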
Examples of amphiphilic compounds are the salts of fatty acids and phospholipids . Many simple amphiphiles are used as detergents . A mixture of soap and water is an everyday example of a lyotropic liquid crystal.
Biological structures such as fibrous proteins showing relatively long and well-defined hydrophobic and hydrophilic "blocks" of amino acids can also show lyotropic liquid crystalline behaviour. [ 6 ]
A typical amphiphilic flexible surfactant can form aggregates through a self-assembly process that results from specific interactions between the molecules of the amphiphilic mesogen and those of the non-mesogenic solvent.
In aqueous media, the driving force of the aggregation is the " hydrophobic effect ". The aggregates formed by amphiphilic molecules are characterised by structures in which the hydrophilic head-groups expose their surface to aqueous solution, shielding the hydrophobic chains from contact with water.
For most lyotropic systems, aggregation occurs only when the concentration of the amphiphile exceeds a critical concentration (known variously as the critical micelle concentration (CMC) or the critical aggregation concentration (CAC)).
At very low amphiphile concentration, the molecules are dispersed randomly without any ordering. At slightly higher (but still low) concentration, above the CMC, self-assembled amphiphile aggregates exist as independent entities in equilibrium with monomeric amphiphiles in solution, but with no long-range orientational or positional (translational) order. As a result, these phases are isotropic (i.e. not liquid crystalline). Such dispersions are generally referred to as ' micellar solutions ', often denoted by the symbol L 1 , while the constituent spherical aggregates are known as ' micelles '.
At higher concentration, the assemblies will become ordered. True lyotropic liquid crystalline phases are formed as the concentration of amphiphile in water is increased beyond the point where the micellar aggregates are forced to be disposed regularly in space. For amphiphiles that consist of a single hydrocarbon chain the concentration at which the first liquid crystalline phases are formed is typically in the range 25–30 wt%. [ citation needed ]
The simplest liquid crystalline phase that is formed by spherical micelles is the ' micellar cubic ', denoted by the symbol I 1 . This is a highly viscous, optically isotropic phase in which the micelles are arranged on a cubic lattice. Prior to becoming macroscopic liquid crystals, tactoids are formed, which are liquid crystal microdomains in an isotropic phase. At higher amphiphile concentrations the micelles fuse to form cylindrical aggregates of indefinite length, and these cylinders are arranged on a long-range hexagonal lattice. This lyotropic liquid crystalline phase is known as the ' hexagonal phase ', or more specifically the ' normal topology ' hexagonal phase, and is generally denoted by the symbol H I .
At higher concentrations of amphiphile the ' lamellar phase ' is formed. This phase is denoted by the symbol L α and can be considered the lyotropic equivalent of a smectic A mesophase. [ 1 ] This phase consists of amphiphilic molecules arranged in bilayer sheets separated by layers of water. Each bilayer is a prototype of the arrangement of lipids in cell membranes.
For most amphiphiles that consist of a single hydrocarbon chain, one or more phases having complex architectures are formed at concentrations that are intermediate between those required to form a hexagonal phase and those that lead to the formation of a lamellar phase. Often this intermediate phase is a bicontinuous cubic phase .
Increasing the amphiphile concentration beyond the point where lamellar phases are formed leads to the inverse-topology lyotropic phases: the inverse cubic phases, the inverse hexagonal columnar phase (columns of water encapsulated by amphiphiles, H II ), and the inverse micellar cubic phase (a bulk liquid crystal sample with spherical water cavities). In practice, inverse-topology phases are more readily formed by amphiphiles that have at least two hydrocarbon chains attached to a headgroup. The most abundant phospholipids found in the cell membranes of mammalian cells are examples of amphiphiles that readily form inverse-topology lyotropic phases.
Even within the same phases, self-assembled structures are tunable by the concentration: For example, in lamellar phases, the layer distances increase with the solvent volume. Since lyotropic liquid crystals rely on a subtle balance of intermolecular interactions, it is more difficult to analyze their structures and properties than those of thermotropic liquid crystals.
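As a worked illustration of how layer distance grows with solvent volume, one can use the ideal one-dimensional swelling law for lamellar phases, d = δ/φ, where d is the repeat distance, δ the bilayer thickness, and φ the amphiphile volume fraction. This particular relation is a standard idealization and is not stated in the source; the numbers below are hypothetical:

```python
def lamellar_repeat_distance(bilayer_thickness_nm, amphiphile_fraction):
    """Ideal 1D swelling law for a lamellar phase: d = delta / phi.
    Adding solvent lowers phi, so the repeat distance d increases."""
    return bilayer_thickness_nm / amphiphile_fraction

# A hypothetical 4 nm bilayer at decreasing amphiphile volume fractions:
for phi in (0.8, 0.4, 0.2):
    print(f"phi = {phi}: d = {lamellar_repeat_distance(4.0, phi):.1f} nm")
# d grows from 5.0 nm to 20.0 nm as the phase swells with solvent
```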
The objects created by the amphiphiles are usually spherical (as in the case of micelles), but may also be disc-like (bicelles), rod-like, or biaxial (all three micelle axes are distinct). These anisotropic self-assembled nano-structures can then order themselves in much the same way as thermotropic liquid crystals do, forming large-scale versions of all the thermotropic phases (such as a nematic phase of rod-shaped micelles).
Specific molecules can be dissolved in lyotropic mesophases, where they may be located mainly inside, outside, or at the surface of the aggregates.
Some such molecules act as dopants, inducing specific properties in the whole phase; others can be considered simple guests, with limited effect on the surrounding environment but possibly strong consequences for their own physico-chemical properties; and some are used as probes to detect molecular-level properties of the whole mesophase in specific analytical techniques. [ 7 ]
The term lyotropic has also been applied to the liquid crystalline phases formed by certain polymeric materials, particularly those consisting of rigid rod-like macromolecules, when mixed with appropriate solvents. [ 8 ] Examples are suspensions of rod-like viruses such as the tobacco mosaic virus , synthetic macromolecules such as Li 2 Mo 6 Se 6 nanowires, [ 9 ] and colloidal suspensions of non-spherical colloidal particles. [ 10 ] Cellulose and cellulose derivatives form lyotropic liquid crystal phases, as do nanocrystalline cellulose ( nanocellulose ) suspensions. [ 11 ] Other examples include DNA and Kevlar , which dissolve in sulfuric acid to give a lyotropic phase. In these cases the solvent acts to lower the melting point of the materials, thereby making the liquid crystalline phases accessible. These liquid crystalline phases are closer in architecture to thermotropic liquid crystalline phases than to the conventional lyotropic phases. In contrast to the behaviour of amphiphilic molecules, the lyotropic behaviour of the rod-like molecules does not involve self-assembly . [ citation needed ]
Examples of lyotropic liquid crystals can also be generated using 2D nanosheets. A striking example of a true nematic phase has been demonstrated for many smectite clays . The question of the existence of such a lyotropic phase was raised by Langmuir in 1938, [ 12 ] but remained open for a very long time and was only confirmed recently. [ 13 ] [ 14 ] With the rapid development of nanoscience and the synthesis of many new anisotropic 2D nanoparticles, the number of such nematic mesophases based on 2D nanosheets has increased quickly, with graphene oxide colloidal suspensions as one example.
Notably, a lamellar phase was even discovered in H 3 Sb 3 P 2 O 14 , which exhibits hyperswelling up to an interlamellar distance of ~250 nm. [ 15 ] [ 16 ] | https://en.wikipedia.org/wiki/Lyotropic_liquid_crystal |
Lysergic acid , also known as D -lysergic acid and (+)-lysergic acid , is a precursor for a wide range of ergoline alkaloids that are produced by the ergot fungus and found in the seeds of Argyreia nervosa ( Hawaiian baby woodrose ), and Ipomoea species ( morning glories , ololiuhqui , tlitliltzin ).
Amides of lysergic acid, lysergamides , are widely used as pharmaceuticals and as psychedelic drugs , e.g. lysergic acid diethylamide (LSD). Lysergic acid is listed as a Table I precursor under the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances . [ 3 ]
The name lysergic acid reflects the compound's chemistry and origin: it is a carboxylic acid that was first made by hydrolysis of various ergot alkaloids, combining elements of "lysis" and "ergot". [ 4 ]
Lysergic acid is generally produced by hydrolysis [ 5 ] of natural lysergamides, but can also be synthesized in the laboratory by a complex total synthesis , for example by Robert Burns Woodward 's team in 1956. [ 6 ] An enantioselective total synthesis based on a palladium -catalyzed domino cyclization reaction was described in 2011 by Fujii and Ohno. [ 7 ] Lysergic acid monohydrate crystallizes in very thin hexagonal leaflets when recrystallized from water. When dried (140 °C at 2 mmHg, or 270 Pa), lysergic acid monohydrate forms anhydrous lysergic acid.
The biosynthetic route is based on the alkylation of the amino acid tryptophan with dimethylallyl diphosphate (an isoprene unit derived from (3 R )-mevalonic acid), giving 4-dimethylallyl- L -tryptophan, which is N -methylated with S -adenosyl- L -methionine . Oxidative ring closure followed by decarboxylation, reduction, cyclization, oxidation, and allylic isomerization yields D -(+)-lysergic acid. [ 4 ] The biosynthetic pathway has been reconstituted in transgenic baker's yeast. [ 8 ]
Lysergic acid is a chiral compound with two stereocenters . The isomer with inverted configuration at carbon atom 8 close to the carboxyl group is called isolysergic acid . Inversion at carbon 5 close to the nitrogen atom leads to L -lysergic acid and L -isolysergic acid , respectively.
In the United States, lysergic acid and lysergic acid amide are Schedule III substances. [ 9 ] | https://en.wikipedia.org/wiki/Lysergic_acid |
A field lysimeter (from Greek λύσις (loosening) and the suffix -meter ) is a cylindrical container filled with soil , which can be used to study the transport of water and material through the soil. This type of lysimeter can be equipped with different measuring probes at different depths (e.g., soil temperature, tensiometer for measuring water tension ). The soil contained in the field lysimeter can either be collected as a monolith (i.e., in one piece) or be reconstructed from the different layers present at the sampling site. Most lysimeters contain an opening at the bottom allowing the leachate to be collected and analyzed over time.
Lysimeters can be used to measure the amount of actual evapotranspiration released by plants (usually crops or trees ). By recording the amount of precipitation that an area receives and the amount lost through the soil, the amount of water lost to evapotranspiration can be calculated. [ 1 ] There are multiple types of lysimeters, each designed for a specific purpose; the choice of lysimeter depends on the project objectives, the parameters to be measured, and the environmental conditions under investigation. Some types of lysimeters include:
The list above is not comprehensive; there are many types of lysimeters and many ways that lysimetry can be used to understand soil–porewater relationships.
In the rest of this article, "lysimeter" refers to a field lysimeter used to study soil–plant interactions.
A lysimeter is most accurate when vegetation is grown in a large soil tank, which allows the rainfall input and the water lost through the soil to be easily measured. The amount of water lost by evapotranspiration can then be worked out from the difference between the tank's weight before and after a precipitation input. [ citation needed ]
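A minimal sketch of the water-balance arithmetic behind a weighing lysimeter, assuming the usual bookkeeping ET = P + I − D − ΔS; the function name, irrigation and drainage terms, and numbers are illustrative assumptions beyond the paragraph above:

```python
def evapotranspiration_mm(precip_mm, irrigation_mm, drainage_mm,
                          weight_start_kg, weight_end_kg, area_m2):
    """Weighing-lysimeter mass balance: ET = P + I - D - dS.
    The storage change dS comes from the weighed soil tank; 1 kg of
    water spread over 1 m^2 corresponds to a 1 mm depth of water."""
    storage_change_mm = (weight_end_kg - weight_start_kg) / area_m2
    return precip_mm + irrigation_mm - drainage_mm - storage_change_mm

# Hypothetical day: 12 mm rain, no irrigation, 2 mm drainage,
# and the 1 m^2 tank lost 6 kg of water overall.
print(evapotranspiration_mm(12.0, 0.0, 2.0, 1500.0, 1494.0, 1.0))  # 16.0 mm
```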
For farm crops, a lysimeter can represent field conditions well since the device is installed and used outside the laboratory. A weighing lysimeter, for example, reveals the amount of water crops use by constantly weighing a huge block of soil in a field to detect losses of soil moisture (as well as any gains from precipitation). [ 2 ] An example of their use is in the development of new xerophytic apple tree cultivars in order to adapt to changing climate patterns of reduced rainfall in traditional apple growing regions. [ 3 ]
The University of Arizona's Biosphere 2 built the world's largest weighing lysimeters using a mixture of thirty 220,000 and 333,000 lb-capacity (ca. 100,000 and 150,000 kg) column load cells from Honeywell, Inc . as part of its Landscape Evolution Observatory project. [ 4 ]
Physiology-based, high-throughput phenotyping systems (also known as plant functional phenotyping systems), used in combination with soil–plant–atmosphere continuum (SPAC) measurements and with models fitted to plant responses under continuous and fluctuating environmental conditions, are being investigated as tools to better understand and characterize plant stress responses. [ 5 ] In these systems (also known as gravimetric systems), plants are placed on weighing lysimeters that measure changes in pot weight at high frequency. These data are then combined with measurements of environmental parameters in the greenhouse, including radiation, humidity and temperature, as well as soil water conditions. Using pre-measured data, including soil weight and initial plant weight, a great deal of phenotypic information can be extracted, including stomatal conductance , growth rates, transpiration, soil water content, and dynamic plant behaviour such as the critical θ point, the soil water content at which plants start to respond to stress by reducing their stomatal conductance. [ 6 ]
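A minimal sketch of how whole-plant transpiration might be extracted from such high-frequency weighings, assuming weight loss between irrigation events is dominated by transpiration; the data, logging interval, and names are hypothetical:

```python
import numpy as np

def transpiration_rate(weights_g, interval_min=3.0):
    """Approximate whole-plant transpiration (g water per minute) as the
    rate of pot-weight loss between consecutive readings; evaporation
    from the soil surface is neglected in this sketch."""
    w = np.asarray(weights_g, dtype=float)
    return -np.diff(w) / interval_min

pot_weights = [5000.0, 4998.8, 4997.5, 4996.3]  # hypothetical 3-min readings
print(transpiration_rate(pot_weights))  # roughly [0.4, 0.43, 0.4] g/min
```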
The Faculty of Agriculture of the Hebrew University of Jerusalem operates one of the most advanced functional phenotyping systems, with more than 500 units screened simultaneously. [ 7 ]
Lysimeters can also be used to study the degradation of substances in specific types of soil. For example, in Wädenswil , Switzerland , 10 lysimeters filled with material from railway tracks are used to study the degradation of herbicides in railway soil. By filling the lysimeters with material from different railway tracks, researchers can create conditions that mimic those found in these specific environments. [ 8 ]
In 1875 Edward Lewis Sturtevant , a botanist from Massachusetts , built the first lysimeter in the United States. [ 9 ] | https://en.wikipedia.org/wiki/Lysimeter |
Lysis ( / ˈ l aɪ s ɪ s / LY -sis ; from Greek λῠ́σῐς lýsis 'loosening') is the breaking down of the membrane of a cell , often by viral , enzymic , or osmotic (that is, "lytic" / ˈ l ɪ t ɪ k / LIT -ik ) mechanisms that compromise its integrity. A fluid containing the contents of lysed cells is called a lysate . In molecular biology , biochemistry , and cell biology laboratories, cell cultures may be subjected to lysis in the process of purifying their components, as in protein purification , DNA extraction , RNA extraction , or in purifying organelles .
Many species of bacteria are subject to lysis by the enzyme lysozyme , found in animal saliva , egg white , and other secretions . [ 1 ] Phage lytic enzymes ( lysins ) produced during bacteriophage infection are responsible for the ability of these viruses to lyse bacterial cells. [ 2 ] Penicillin and related β-lactam antibiotics cause the death of bacteria through enzyme-mediated lysis that occurs after the drug causes the bacterium to form a defective cell wall . [ 3 ] If the cell wall is completely lost and the penicillin was used on gram-positive bacteria , then the bacterium is referred to as a protoplast , but if penicillin was used on gram-negative bacteria , then it is called a spheroplast .
Cytolysis occurs when a cell bursts due to an osmotic imbalance that has caused excess water to move into the cell.
Cytolysis can be prevented by several different mechanisms, including the contractile vacuole that exists in some paramecia , which rapidly pump water out of the cell. Cytolysis does not occur under normal conditions in plant cells because plant cells have a strong cell wall that contains the osmotic pressure, or turgor pressure , that would otherwise cause cytolysis to occur.
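To give a sense of the pressures involved in such osmotic imbalances, the osmotic pressure of a dilute solution can be estimated with the van 't Hoff relation Π = cRT; the concentrations below are illustrative, not taken from the source:

```python
def osmotic_pressure_kpa(osmolarity_mol_per_l, temp_k=310.0):
    """Van 't Hoff estimate Pi = c*R*T for a dilute solution, where c is
    the molar concentration of osmotically active particles.
    R = 8.314 J/(mol*K); mol/m^3 * J/mol = Pa."""
    R = 8.314
    c = osmolarity_mol_per_l * 1000.0  # mol/L -> mol/m^3
    return c * R * temp_k / 1000.0     # Pa -> kPa

# An illustrative 0.3 osmol/L imbalance across a membrane at 37 degrees C:
print(round(osmotic_pressure_kpa(0.3)))  # ~773 kPa, i.e. several atmospheres
```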
Oncolysis is the destruction of neoplastic cells or of a tumour .
The term is also used to refer to the reduction of any swelling . [ 4 ]
Plasmolysis is the contraction of cells within plants due to the loss of water through osmosis . In a hypertonic environment, the cell membrane peels off the cell wall and the vacuole collapses. These cells will eventually wilt and die unless the flow of water caused by osmosis can stop the contraction of the cell membrane . [ 5 ]
When erythrocytes are lysed by pathogens, their hemoglobin releases free radicals in response, which can damage the pathogens. [ 6 ] [ 7 ]
Cell lysis is used in laboratories to break open cells and purify or further study their contents. Lysis in the laboratory may be effected by enzymes, detergents, or other chaotropic agents . Mechanical disruption of cell membranes, as by repeated freezing and thawing, sonication , pressure, or filtration, may also be referred to as lysis. Many laboratory experiments are sensitive to the choice of lysis mechanism; often it is desirable to avoid mechanical shear forces that would denature or degrade sensitive macromolecules, such as proteins and DNA , and different types of detergents can yield different results. The unprocessed solution immediately after lysis but before any further extraction steps is often referred to as a crude lysate . [ 8 ] [ 9 ]
For example, lysis is used in western and Southern blotting to analyze the composition of specific proteins , lipids , and nucleic acids individually or as complexes . Depending on the detergent used, either all or some membranes are lysed. For example, if only the cell membrane is lysed then gradient centrifugation can be used to collect certain organelles . Lysis is also used for protein purification , DNA extraction , and RNA extraction . [ 8 ] [ 9 ]
Several methods for cell lysis exist, sometimes used in combination. Examples include liquid homogenization, freeze thawing, and physical disruption such as sonication, or the use of hypotonic solutions that cause osmotic swelling and eventual bursting of the cell. [ 10 ]
Chemical lysis uses chemical disruption and is the most popular and simplest approach. It chemically deteriorates/solubilizes the proteins and lipids within the membranes of targeted cells. [ 11 ] Common lysis buffers contain sodium hydroxide (NaOH) and sodium dodecyl sulfate (SDS). Cell lysis is best performed at a pH of 11.5–12.5. Although simple, it is a slow process, taking anywhere from 6 to 12 hours. [ 12 ]
Sonication uses ultrasonic waves to generate areas of high and low pressure, which cause cavitation and, in turn, cell lysis. Though this method usually yields clean lysates, it is neither cost-effective nor consistent. [ 11 ]
Mechanical disruption uses physical penetration to pierce or cut the cell membrane. [ 11 ]
Enzymatic lysis uses enzymes such as lysozyme or proteases to disintegrate the cell membrane. [ 13 ] | https://en.wikipedia.org/wiki/Lysis |
A lysis buffer is a buffer solution used for the purpose of breaking open cells for use in molecular biology experiments that analyze the labile macromolecules of the cells (e.g. western blot for protein, or for DNA extraction ). Most lysis buffers contain buffering salts (e.g. Tris-HCl ) and ionic salts (e.g. NaCl ) to regulate the pH and osmolarity of the lysate . Sometimes detergents (such as Triton X-100 or SDS ) are added to break up membrane structures. For lysis buffers targeted at protein extraction , protease inhibitors are often included, and in difficult cases may be almost required. Lysis buffers can be used on both animal and plant tissue cells. [ 1 ]
The primary purpose of a lysis buffer is to isolate the molecules of interest and keep them in a stable environment. In some experiments the target proteins should be completely denatured , while in others they should remain folded and functional. Different proteins also have different properties and are found in different cellular environments. It is therefore essential to choose the buffer best suited to the purpose and design of the experiment, considering pH, ionic strength, the use of detergent, and protease inhibitors to prevent proteolysis. [ 2 ] For example, detergent is necessary when lysing Gram-negative bacteria, but not Gram-positive bacteria. [ 3 ] A protease inhibitor is commonly added to lysis buffer, along with other enzyme inhibitors of choice, such as a phosphatase inhibitor when studying phosphorylated proteins.
The buffer creates a stable environment for the isolated proteins. Each buffer covers a specific pH range, so the buffer should be chosen based on whether the experiment's target protein is stable at that pH. For buffers with similar pH ranges, it is also important to consider whether the buffer is otherwise compatible with the target protein. [ 4 ] The table below lists several of the most commonly used buffers and their pH ranges. [ 4 ]
Lysis buffer usually contains one or more salts, whose function is to establish the ionic strength of the buffer solution. Some of the most commonly used salts are NaCl, KCl, and (NH 4 ) 2 SO 4 , usually at concentrations between 50 and 150 mM. [ 4 ]
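The ionic strength such salts establish follows I = ½ Σ c_i z_i². A minimal sketch of that calculation, using the concentration range above; the specific salt mixtures are illustrative:

```python
def ionic_strength_molar(ions):
    """I = 0.5 * sum(c_i * z_i^2), with c_i the molar concentration of
    each fully dissociated ion and z_i its charge."""
    return 0.5 * sum(c * z**2 for c, z in ions)

# 150 mM NaCl dissociates into 0.15 M Na+ and 0.15 M Cl-:
print(ionic_strength_molar([(0.15, +1), (0.15, -1)]))   # 0.15 M

# 50 mM (NH4)2SO4 gives 0.10 M NH4+ and 0.05 M SO4^2-:
print(ionic_strength_molar([(0.10, +1), (0.05, -2)]))   # 0.15 M
```

Note how the divalent sulfate ion lets a lower salt concentration reach the same ionic strength.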
Detergents are organic amphipathic surfactants (with a hydrophobic tail and a hydrophilic head). They are used to separate membrane proteins from membranes, because the hydrophobic part of a detergent can surround biological membranes and thereby isolate the membrane proteins. [ 5 ] Although detergents are widely used and have similar functions, the physical and chemical properties of the detergents of interest must be considered in light of the goals of the experiment.
Detergents are often categorized as nonionic, anionic, cationic, or zwitterionic, based on their hydrophilic head group feature. [ 5 ]
Nonionic detergents like Triton X-100 and zwitterionic detergents like CHAPS (3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate) are nondenaturing (will not disrupt protein functions). Ionic detergents like sodium dodecyl sulfate (SDS) and cationic detergents like ethyl trimethyl ammonium bromide are denaturing (will disrupt protein functions). [ 6 ] Detergents are a major ingredient that determines the lysis strength of a given lysis buffer.
One common issue faced by many cell lysis buffers is the disruption of protein structures during the lysis process, partially caused by use of detergents. Detergents often prevent the restoration of native conditions necessary for proper protein folding. [ 7 ]
Traditionally, after detergent-based cell lysis, a buffer exchange and/or dialysis had to be performed to remove the detergent, among other hindering compounds, in order to restore native conditions. [ 8 ]
To overcome this, detergent-free cell lysis buffers have been developed. One example, the GentleLys buffer, employs copolymers instead of detergents, enabling efficient cell lysis while maintaining the native environment needed for the correct folding of cellular components such as proteins.
Other additives include metal ions, sugar like glucose, glycerol, metal chelators (e.g. EDTA ), and reducing agents like dithiothreitol (DTT). [ 4 ]
NP-40 buffer may be the most widely used lysis buffer. The solubilizing agent is NP-40 , which can be replaced by other detergents at different concentrations. Since NP-40 is a nonionic detergent, this lysis buffer has a milder effect than RIPA buffer. It can be used when protein functions are to be retained with minimal disruption. [ 9 ]
Recipe: [ 9 ]
RIPA buffer is a commonly used lysis buffer for immunoprecipitation and general protein extraction from cells and tissues. The buffer can be stored without vanadate at 4 °C for up to 1 year. [ 10 ] RIPA buffer releases proteins from cells as well as disrupts most weak interactions between proteins. [ 9 ]
Recipe: [ 10 ]
SDS is an ionic, denaturing detergent. Hot SDS buffer is often used when proteins need to be completely solubilized and denatured.
Recipe: [ 10 ]
ACK is used for lysis of red blood cells in biological samples where other cells such as white blood cells are of greater interest. [ 11 ]
Recipe: [ 12 ] [ 13 ]
The GentleLys buffer employs synthetic nanodisc copolymers to gently disrupt the cell membrane, offering a milder alternative to conventional detergent-based lysis buffers. This gentle approach eliminates the need for harsh chemicals, creating an environment that preserves the native state of cellular proteins. Consequently, the proteins maintain their structural integrity and functionality, a marked departure from the denaturing effects of detergent-based buffers.
Cell lysis is a critical step in the purification of enzymes from bacterial cells. Various components are commonly included in lysis buffers to facilitate effective cell disruption and release of the target enzyme; these include detergents, salts, and enzymes, each playing a specific role in the lysis process.
Detergents:
Detergents are amphipathic molecules that possess both hydrophilic and hydrophobic properties. In the context of cell lysis, detergents act by disrupting the lipid bilayer of the bacterial cell membrane, leading to membrane permeabilization and release of intracellular components, including the target enzyme.
Commonly used detergents in lysing buffers include:
a. Triton X-100 : a nonionic detergent frequently employed for its mild but effective membrane-disrupting properties; it solubilizes lipids and membrane proteins, allowing the release of intracellular contents.
b. Sodium dodecyl sulfate (SDS): an anionic detergent that denatures proteins by disrupting their secondary and tertiary structures; it solubilizes cellular membranes and aids protein extraction.
c. Tween-20: a nonionic detergent milder than SDS and Triton X-100; it assists in membrane permeabilization and protein solubilization without causing significant denaturation.
Salts:
Salts are crucial components of lysing buffers as they help maintain optimal cellular conditions and provide ionic strength to facilitate cell disruption.
Commonly used salts in lysing buffers include:
a. Sodium chloride (NaCl): NaCl is often included to maintain isotonic conditions, preventing osmotic shock and cell rupture during the lysis process.
b. Potassium chloride (KCl): Similar to NaCl, KCl can be used to adjust the ionic strength and facilitate cell lysis.
Enzymes:
Certain enzymes are added to lysing buffers to enhance cell lysis by digesting specific cellular components that can interfere with the extraction of the target enzyme.
Examples of enzymes used in lysing buffers include:
a. Lysozyme: Lysozyme breaks down the peptidoglycan layer of bacterial cell walls, weakening their structural integrity and facilitating subsequent disruption. It is particularly effective for Gram-positive bacteria.
b. DNase (Deoxyribonuclease): DNase degrades DNA present in the lysate, reducing its viscosity and preventing DNA-related interference in downstream purification steps.
c. RNase (Ribonuclease): Similar to DNase, RNase degrades RNA in the lysate, reducing its viscosity and minimizing RNA-related interference.
The specific combination and concentrations of detergents, salts, and enzymes in lysis buffers can vary depending on the target enzyme, the cell type, and the experimental requirements; optimization of these components is crucial to achieving efficient cell lysis while preserving the stability and activity of the desired enzyme during purification.
In studies such as DNA fingerprinting , lysis buffer is used for DNA isolation. In a pinch, dish soap can be used to break down the cell and nuclear membranes, allowing the DNA to be released. Other such lysis buffers include the proprietary Qiagen product Buffer P2. | https://en.wikipedia.org/wiki/Lysis_buffer |
A lysochrome is a soluble dye used for histochemical staining of lipids , which include triglycerides , fatty acids , and lipoproteins . Lysochromes such as Sudan IV dissolve in the lipid and show up as colored regions. The dye does not stick to any other substrates, so a quantification or qualification of lipid presence can be obtained.
The name was coined by the biologist John Baker in his book Principles of Biological Microtechnique (1958), from the Greek words lysis (solution) and chroma (colour). [ 1 ] | https://en.wikipedia.org/wiki/Lysochrome |
The lysocline is the depth in the ocean below which the rate of dissolution of calcite increases dramatically because of a pressure effect; it lies above the carbonate compensation depth (CCD), usually found around 5 km. While the lysocline is the upper bound of this transition zone of calcite saturation, the CCD is the lower bound of the zone. [ 1 ]
CaCO 3 content in sediment varies with ocean depth, spanned by levels of separation known as the transition zone. At mid-depths, sediments are rich in CaCO 3 , with content values reaching 85–95%. [ 1 ] The transition zone then spans hundreds of meters, ending in the abyssal depths at 0% concentration. The lysocline is the upper bound of the transition zone, where the CaCO 3 content begins to drop noticeably from the mid-depth values of 85–95%. The CaCO 3 content drops to 0% at the lower bound, known as the calcite compensation depth. [ 1 ]
Shallow marine waters are generally supersaturated in calcite, CaCO 3 , because as marine organisms (which often have shells made of calcite or its polymorph, aragonite ) die, they tend to fall downwards without dissolving. [ 2 ] As depth and pressure increases within the water column , calcite solubility increases, causing supersaturated water above the saturation depth, allowing for preservation and burial of CaCO 3 on the seafloor. [ 3 ] However, this creates undersaturated seawater below the saturation depth, preventing CaCO 3 burial on the sea floor as the shells start to dissolve.
The equation Ω = [Ca 2+ ][CO 3 2− ]/K′ sp expresses the CaCO 3 saturation state of seawater. [ 4 ] The calcite saturation horizon is where Ω = 1; dissolution proceeds slowly below this depth. The lysocline is the depth at which this dissolution becomes notable, also known as the inflection point in sedimentary CaCO 3 content versus water depth. [ 4 ]
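A minimal numeric illustration of this saturation-state formula; the carbonate concentration and the stoichiometric solubility product below are hypothetical placeholders, since K′ sp varies with pressure, temperature, and salinity:

```python
def calcite_saturation_state(ca, co3, ksp_prime):
    """Omega = [Ca2+][CO3 2-] / K'sp, concentrations in mol/kg.
    Omega > 1: supersaturated, calcite tends to be preserved;
    Omega < 1: undersaturated, calcite dissolves;
    Omega = 1 defines the saturation horizon."""
    return ca * co3 / ksp_prime

# Seawater [Ca2+] is roughly 0.0103 mol/kg; the other values are illustrative.
print(calcite_saturation_state(0.0103, 90e-6, 0.6e-6))  # ~1.5, supersaturated
```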
The calcite compensation depth (CCD) occurs at the depth where the rate of calcite supply to the sediments is balanced by the dissolution flux, the depth at which the CaCO 3 content drops to values of 2–10%. [ 4 ] Hence, the lysocline and the CCD are not equivalent. The lysocline and compensation depth occur at greater depths in the Atlantic (5000–6000 m) than in the Pacific (4000–5000 m), and at greater depths in equatorial regions than in polar regions . [ 5 ]
The depth of the CCD varies as a function of the chemical composition of the seawater and its temperature. [ 6 ] Specifically, deep waters are undersaturated with calcium carbonate primarily because its solubility increases strongly with increasing pressure and salinity and with decreasing temperature. As the atmospheric concentration of carbon dioxide continues to increase, the CCD can be expected to become shallower as the ocean's acidity rises. [ 3 ] | https://en.wikipedia.org/wiki/Lysocline |
The lysophosphatidic acid receptors ( LPARs ) are a group of G protein-coupled receptors for lysophosphatidic acid (LPA) that include the six recognized receptors LPA 1 –LPA 6 (LPAR1–LPAR6).
| https://en.wikipedia.org/wiki/Lysophosphatidic_acid_receptor |
Lysophosphatidylinositol ( LPI , lysoPI ), or L -α-lysophosphatidylinositol , is an endogenous lysophospholipid and endocannabinoid neurotransmitter . [ 1 ] LPI, along with its 2-arachidonoyl- derivative, 2-arachidonoyl lysophosphatidylinositol (2-ALPI), have been proposed as the endogenous ligands of GPR55 . [ 2 ] [ 3 ] [ 4 ] [ 5 ] Recent studies have shown that the fatty acyl composition of LPI influences neuroinflammatory responses in primary neuronal cultures, highlighting its potential role in neuroinflammation. [ 6 ]
| https://en.wikipedia.org/wiki/Lysophosphatidylinositol |